\begin{document} \title{The record method for two- and three-dimensional parameter random fields} \abstract Let $S$ be a regular set of $\R^d$ and let $X : S\rightarrow \R$ be a Gaussian field with regular paths. In order to give a bound for the tail of the distribution of the maximum, we use the record method of Mercadier. We present some new forms in dimension 2 and extend the method to dimension 3 using the result of Li and Wei on the expectation of the absolute value of quadratic forms. A comparison with other methods is conducted. \textbf{Key-words:} Stochastic processes, Gaussian fields, Rice formula, distribution of the maximum.\\ \indent \textbf{Classifications:} 60G15, 60G60, 60G70. \section{Introduction} The problem of computing the tail of the maximum has many applications in spatial statistics, image processing, oceanography, genetics, etc.; see for example Cressie and Wikle \cite{7}. It is solved exactly only for about ten processes with a one-dimensional parameter, see Aza\"\i s and Wschebor \cite{5}, p.~4, for a complete list. In the other cases, one has to use some approximation. Several methods have been used, in particular \begin{itemize} \item The tube method, Sun \cite{17}. \item The double sum method, Piterbarg \cite{14}. \item The Euler characteristic method, see, for example, Adler and Taylor \cite{1}. \item The Rice or direct method, Aza\"\i s and Delmas \cite{3}, Aza\"\i s and Wschebor \cite{5}. \end{itemize} With respect to these methods, the record method, which is the main subject of this paper and which is detailed in Section \ref{se2}, has the advantage of simplicity and also the advantage of giving a bound which is non-asymptotic: it is true for every level and not only for large $u$. It has been introduced for one-parameter random processes by Rychlik \cite{16} and extended to two-parameter random fields by Mercadier \cite{13} to study the tail of the maximum of smooth Gaussian random fields on rather regular sets.\\ It has two versions. One is an exact implicit formula, Theorem 2 in \cite{13}, which is interesting for numerical purposes and will not be considered here; the other is a bound for the tail, see inequality (\ref{re1}) below. This bound has the advantage of its simplicity. In particular it avoids the computation of the expectation of the absolute value of the Hessian determinant as in the direct method of \cite{4}, but it works only in dimension 2. \\ For practical applications, the dimensions 2 and 3 (for the parameter set) are the most relevant, so there is a need for an extension to dimension 3; this is done in Section 3 using results on quadratic forms by Li and Wei \cite{11}.\\ The bound also has the drawback of demanding a parameterization of the boundary. For example, if we consider the version of Aza\"\i s and Wschebor (\cite{5}, Theorem 9.5) of the result of Mercadier, then under some mild conditions on the set $S \subset \R^2$ and on the Gaussian process $X$, we have \begin{align}\label{re1} \PP \{M_S \geq u \} \leq & \PP\{Y(O)\geq u\} + \int_0^L \E(|Y'(l)|\mid Y(l)=u)p_{Y(l)}(u)\, dl \notag \\ & + \int_S \E(|X''_{11}(t)^-X'_2(t)^+|\mid X(t)=u,\, X'_1(t)=0) p_{X(t),X'_1(t)}(u,0)\, dt, \end{align} where \begin{itemize} \item $M_S $ is the maximum of $X(t)$ on the set $S$. \item $Y(l)=X(\rho (l))$, where $\rho: \; [0,L]\rightarrow \partial S$ is a parameterization of the boundary $\partial S $ by its length. \item $\displaystyle X''_{ij}=\frac{\partial^2 X}{\partial x_i \partial x_j}$. 
\item $p_Z(x)$: the value of the density function of the random vector $Z$ at the point $x$. \item $x^+=\sup (x,0)$, \; $x^-=\sup (-x,0)$. \end{itemize} The proof is based on considering the point with minimal ordinate (second coordinate) on the level curve. As we will see, this point can be considered as a ``record point''. \\ So a second direction of generalization is to propose nicer and stronger forms of the inequality (\ref{re1}). This is done in Section \ref{se2}. The result on quadratic forms is presented in Section \ref{se4} and some numerical experiments are presented in Section \ref{se5}. \subsection*{Notation} \begin{itemize} \item $S$ is some rather regular set included in $\R^2$ or $\R^3$. $\partial S$ is its boundary; $\overset{\circ}{S}$ is its interior. \item $M_S=\underset{s\in S}{\max}\, X(s)$ where $X(s)$ is some rather regular process. \item $\sigma_i$ is the surface measure of dimension $i$. It can be defined as a Hausdorff measure. \item $X', \; X''$ are the first and second derivatives of the process $ X(t)$. In particular if $\alpha $ is some direction then $X'_{\alpha}$ is the derivative along the direction $\alpha$. \item $M \preceq 0$ means that the square matrix $M$ is negative semi-definite. \item $S^{ +\epsilon}$ is the tube around $S$, i.e. $$S^{+\epsilon}=\{s\in \R^2:\, \mbox{dist}(s,S)\leq \epsilon\}.$$ \item $d_H$ is the Hausdorff distance between sets, defined by $$d_H(S,T)=\inf \{\epsilon:\; S \subset T^{+\epsilon},\, T\subset S^{+\epsilon}\}.$$ \item $\varphi(x)$ and $\Phi(x)$ are the density and distribution function of a standard normal variable. \\$\overline{\Phi}(x)=1-\Phi(x)$. \end{itemize} \section{The record method in dimension 2 revisited}\label{se2} We will work essentially under the following assumption: \\ \textbf{Assumption 1:} $\{X(t),\; t \in NS \subset \R^2\}$ is a stationary Gaussian field, defined in a neighborhood $NS$ of $S$, with $\mathcal{C}^1$ paths and such that there exists some direction, which will be assumed (without loss of generality) to be the direction of the first coordinate, in which the second derivative $X''_{11}(t)$ exists.\\ We assume moreover the following normalizing conditions, which can always be obtained by a scaling: $$ \E(X(t))=0, \; \Var(X(t))=1, \; \Var (X'(t))=I_2 . $$ Finally we assume that $ \Var(X''_{11}(t))>1$, which is true as soon as the spectral measure of the process restricted to the first axis is not concentrated on two opposite atoms. \\ In some cases we will assume in addition \\ \textbf{Assumption 2:} $X(t)$ is isotropic, i.e. $\textnormal{Cov}(X(s),X(t))=\rho(\|t-s\|^2)$, with $\mathcal{C}^2$ paths, and $S$ is a convex polygon. 
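Under Assumption 2, the normalizing conditions of Assumption 1 can be made explicit in terms of $\rho$. The following short computation, which we add here for the reader's convenience, is obtained by differentiating the covariance $r(t)=\textnormal{Cov}(X(s+t),X(s))=\rho(\|t\|^2)$:
$$ \Var(X'_i(t))=-\frac{\partial^2 r}{\partial t_i^2}(0)=-2\rho'(0), \qquad \Var(X''_{11}(t))=\frac{\partial^4 r}{\partial t_1^4}(0)=12\rho''(0). $$
Hence the normalization $\Var(X'(t))=I_2$ amounts to $\rho'(0)=-1/2$, the condition $\Var(X''_{11}(t))>1$ amounts to $\rho''(0)>1/12$, and the constant $c=\sqrt{\Var(X''_{11}(t))-1}$ used below equals $\sqrt{12\rho''(0)-1}$.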
\\ Under Assumptions 1 and 2 plus some light additional hypotheses, the Euler characteristic (EC) method \cite{1} gives $$ \PP\{M_S \geq u \} = \PP_E (u) +\mbox{Rest}, $$ with $$ \PP_E (u) = \overline{\Phi}(u)+\frac{\sigma_1(\partial S)}{2\sqrt{2\pi}} \varphi (u)+\frac{\sigma_2(S)}{2\pi}u\varphi (u), $$ where the rest is super-exponentially small.\\ The direct method gives \cite{4} \begin{align} \PP\{M_S \geq u \} & \leq \PP_M (u) = \overline{\Phi}(u)\displaystyle +\frac{\sigma_1(\partial S)}{2\sqrt{2\pi}} \int_u^{\infty} \left[ c\varphi (x\textrm{/}c)+x\Phi (x\textrm{/c})\right]\varphi (x)dx \notag \\ &\displaystyle + \frac{\sigma_2(S)}{2\pi } \int_u^{\infty} \left[x^2-1+\frac{\displaystyle (8\rho''(0) )^{3/2}\exp(-x^2.(24\rho''(0) -2)^{-1})}{ \sqrt{24\rho''(0) -2}}\right]\varphi(x)dx, \label{bou} \end{align} where $c=\sqrt{\textnormal{Var} (X''_{11})-1}=\sqrt{12\rho''(0)-1}$.\\ The record method gives \cite{13} $$\PP\{M_S \geq u \} \leq \overline{\Phi}(u)+\frac{\sigma_1(\partial S)}{\sqrt{2\pi}} \varphi (u)+\frac{\sigma_2(S)}{2\pi }\left[ c\varphi (u\textrm{/}c)+u\Phi (u\textrm{/c})\right]\varphi (u). $$ A careful examination of these equations shows that the main terms are almost the same, except that in the record method the coefficient of $\sigma_1(\partial S)$ is twice too large. When $S$ is a rectangle $[0,T_1]\times[0,T_2]$, it is easy to prove that this coefficient 2 can be removed, see for example Exercise 9.2 in \cite{5}. \\ The goal of this section is to extend the result above to more general sets and to fields satisfying Assumption 1 only. The main result of this section is the following \begin{theorem}\label{th1} Let $X$ satisfy Assumption 1 and suppose that $S$ is the Hausdorff limit of connected polygons $S_n$. Then, \begin{equation}\label{bou1} \PP \{M_S \geq u\} \leq \overline{\Phi}(u)+ \frac{\liminf_n \sigma_1(\partial S_n) \varphi (u)}{2\sqrt{2\pi}}+\frac{\sigma_2(S)}{2\pi }\left[ c\varphi (u\textrm{/}c)+u\Phi (u\textrm{/c})\right]\varphi (u), \end{equation} where $c=\sqrt{\textnormal{Var} (X''_{11})-1}$. \end{theorem} {\bf Remark:} the choice of the direction of ordinates is arbitrary and is a consequence of the arbitrary choice of the second derivative $X''_{11}$. When the process $X(t)$ admits derivatives in all directions, the choice that gives the sharpest bound consists in choosing as first axis the direction $\alpha$ such that $ \Var(X''_{\alpha \alpha})$ is minimum. Unfortunately the proof is based on an exotic topological property of the set $S$ that will be called ``emptyable''. \begin{definition} The compact set $S$ is emptyable if there exists a point $O\in S$ which has minimal ordinate, and such that for every $s \in S$ there exists a continuous path inside $S$ from $O$ to $s$ with non-decreasing ordinate. \end{definition} In other words, suppose that $S$ is filled with water and that gravity is in the usual direction; $S$ is emptyable if, after making a small hole at $O$, all the water empties out, see Figure \ref{fig1}. \\ \begin{figure} \caption{Example of a non-emptyable set. The non-emptyable part is displayed in black. } \label{fig1} \end{figure} \begin{proof} Step 1: Suppose for the moment that $X$ has $\mathcal{C}^{\infty}$ paths and that $S$ is an emptyable polygon. Considering the event $\{M_S \geq u\}$, we have \begin{equation} \label{pt1} \PP \{M_S \geq u\} = \PP \{ X(O) \geq u\} + \PP \{ X(O) <u,M_S \geq u\}. 
\end{equation} It is clear that if $X(O)< u$ and $M_S\geq u$ then, because $S$ is connected, the level curve $$ \mathcal{C}_u=\{t \in S: \; X(t)=u\} $$ is not empty, and there is at least one point $T$ on $\mathcal{C}_u$ with minimal ordinate. There are two possibilities: \begin{itemize} \item $T$ is in the interior of $S$. In that case, suppose that there exists a point $s \in S$ with smaller ordinate than $T$ ($s_2<T_2$), such that $X(s)>u$. Then, due to the emptyable property, on the continuous path from $O$ to $s$ there would exist one point $s'$ with smaller ordinate than $T$, and with $X(s') =u$. This is in contradiction with the definition of $T$. So we have proved that for every $s \in S$ with $s_2<T_2$ we have $X(s) \leq u$. It is in this sense that $T$ can be considered as a record point. This implies that $$ \{ X'_1(T)=0, \; X'_2(T) \geq 0, \; X''_{11}(T) \leq 0 \}. $$ The probability that there exists such a point is clearly bounded, by the Markov inequality, by $$ \E\left(\textnormal{card}\{t\in S:\; X(t)=u, \; X'_1(t)=0, \; X'_2(t) \geq 0, \; X''_{11}(t) \leq 0\}\right). $$ Applying the Rice formula to the field $Z=(X,X'_1)$ from $\R^2$ to $\R^2$, we get that \begin{align} &\quad \PP \{\exists \, t \in \overset{\circ}{S}: \, X(t)=u, \; t \; \mbox{has minimal ordinate on} \; \mathcal{C}_u \} \notag \\ \leq & \quad \displaystyle \int_S \E\left(|\det(Z'(t))| \mathbb{I}_{X'_2(t) \geq 0} \mathbb{I}_{X''_{11}(t) \leq 0}\mid Z(t)=(u,0)\right) \times p_{Z(t)}(u,0) \; dt \notag \\ = & \quad \displaystyle \sigma_2(S) \frac{\varphi(u)}{\sqrt{2\pi}}\; \E\left(X''^-_{11}(t)X^{\prime +}_2(t) \mid X(t)=u, X'_1(t)=0\right) \notag \\ = & \quad \displaystyle \sigma_2(S) \frac{\varphi(u)}{\sqrt{2\pi}} \; \E\left(X^{\prime +}_2(t)\right) \; \E\left(X''^-_{11}(t) \mid X(t)=u, X'_1(t)=0\right) \notag \\ = &\displaystyle \quad \sigma_2(S) \frac{\varphi (u)}{2\pi }\left[ c\varphi (u\textrm{/}c)+u\Phi (u\textrm{/}c)\right]. \label{pt2} \end{align} Note that the Rice formula is valid because the paths are of class $\mathcal{C}^\infty $ and $X(t)$ and $X'_1(t)$ are independent. The computations above use some extra independences that are a consequence of the normalization of the process. The main point is that, under the conditioning, $$ \det( Z'(t) ) = X''_{11}(t) X^{\prime}_2(t).$$ \item $T$ is on the boundary of $S$, which is the union of the edges $(F_1,\ldots, F_n)$. It is with probability 1 not located on a vertex. Suppose that, without loss of generality, it belongs to $F_1$. Using the same reasoning as in the preceding case, because of the emptyable property, it is easy to see that $$ \{X(T)=u, \; X'_{\alpha}(T) \geq 0, \; X'_{\beta}(T) \leq 0\}, $$ where $\alpha$ is the upward direction on $F_1$ and $\beta$ is the inward horizontal direction. Then, applying the Markov inequality and the Rice formula on the edge $F_1$, \begin{displaymath} \begin{array}{rl} & \PP \{\exists \, t \in F_1: \; X(t)=u, \; t\; \mbox{has minimal ordinate on} \; \mathcal{C}_u\} \\ \leq & \PP\{\exists \, t \in F_1:\; X(t)=u, \; X'_{\alpha}(t) \geq 0, \; X'_{\beta}(t) \leq 0\}\\ \leq & \E\left(\textnormal{card}\{t\in F_1:\; X(t)=u, \; X'_{\alpha}(t) \geq 0, \; X'_{\beta}(t) \leq 0\}\right) \\ = & \displaystyle \int_{F_1} \E\left(|X'_{\alpha}(t)| \mathbb{I}_{X'_{\alpha}(t) \geq 0}\mathbb{I}_{X'_{\beta}(t) \leq 0}\mid X(t)=u\right) \times p_{X(t)}(u) \; dt \\ = & \sigma_1(F_1) \varphi(u) \, \E\left(X'^+_{\alpha}(t)\, \mathbb{I}_{X'_{\beta}(t) \leq 0}\right). 
\end{array} \end{displaymath} Denote by $\theta_1$ the angle between $\alpha$ and $\beta$. $X'_{\beta}$ can be expressed as $$ \cos\theta_1\, X'_{\alpha}+\sin\theta_1\,Y, $$ where $Y$ is a standard normal variable independent of $X'_{\alpha}$. Then \begin{displaymath} \begin{array}{rl} &\E(X'^+_{\alpha}(t)\, \mathbb{I}_{X'_{\beta}(t) \leq 0})\\ =&\E(X'^+_{\alpha}\, \mathbb{I}_{\cos \theta_1 X'_{\alpha}+\sin \theta_1 Y \leq 0})\\ =&\displaystyle \frac{1-\cos\theta_1}{2\sqrt{2\pi}}. \end{array} \end{displaymath} Summing up, the term corresponding to the boundary of $S$ is at most equal to \begin{equation}\label{pt3} \varphi(u) \sum_{i=1}^n \frac{(1-\cos\theta_i)\sigma_1(F_i)}{2\sqrt{2\pi}}=\frac{\varphi(u)\sigma_1(\partial S)}{2\sqrt{2\pi}}, \end{equation} since $\displaystyle \sum_{i=1}^n \sigma_1(F_i) \, \cos\theta_i $ is just the length of the oriented projection of the boundary of $S$ onto the $x$-axis, which is zero. \end{itemize} Hence, summing up (\ref{pt2}), (\ref{pt3}) and substituting into (\ref{pt1}), we obtain the desired upper bound in our particular case. Step 2: Suppose now that $S$ is a general connected polygon such that the vertex $O$ with minimal ordinate is unique. We define $S_1$ as the maximal emptyable subset of $S$ that contains $O$. It is easy to prove that $S_1$ is still a polygon with some horizontal edges and that $S\backslash S_1$ consists of several polygons with horizontal edges, say $S_2^1,\ldots,S_2^{n_2}$, see Figure \ref{fig2}.\\ \begin{figure} \caption{ Example of construction of $S_1$. } \label{fig2} \end{figure} \\ So we write \begin{equation}\label{pt4} \PP\{M_S \geq u\}\leq \PP\{X(O)\geq u\}+\PP\{M_{S_1}\geq u,\; X(O)<u \}+\underset{i=1}{\overset{n_2}{\sum}}\PP\{M_{S_1}< u, \; M_{S_2^i}\geq u\}. \end{equation} Suppose for the moment that all the $S_2^i,\; i=1,\ldots,n_2$ are emptyable. Then, to bound the probability of the event $$\{M_{S_1}< u, \; M_{S_2^i}\geq u\},$$ we can apply the reasoning of Step 1 but inverting the direction: in $S_2^i$, we search points on the level curve with {\bf maximum ordinate}. Let $E$ be the common edge of $S_1$ and $S_2^i$. Clearly, when $\{M_{S_1}< u, \; M_{S_2^i}\geq u\},$ the level curve is non-empty and, by the same arguments as in Step 1, there exists $t\in S_2^i$ satisfying one of the following two conditions (up to events of zero probability) \begin{itemize} \item $t$ is in the interior of $S^i_2$ and $$\{ X(t)=u, \; X'_1(t)=0, \; X'_2(t) \leq 0, \; X''_{11}(t) \leq 0 \}.$$ From the Markov inequality and the Rice formula, this probability is at most equal to \begin{equation}\label{pt5} \frac{\varphi (u)\sigma_2(S_2^i)}{2\pi }\left[ c\varphi (u\textrm{/}c)+u\Phi (u\textrm{/c})\right]. \end{equation} \item $t$ lies on some edge of $S^i_2$. Note that $t$ cannot belong to $E$. Then, as in Step 1, we consider the event that $t$ lies on each edge and sum up the bounds to obtain \begin{equation}\label{pt6} \begin{array}{rl} & \PP \left( \{\exists \, t \in \partial S_2^i: \, X(t)=u, \, t \, \mbox{has maximal ordinate on the level curve} \, \} \cap \{M_{S_1}<u\}\right)\\ \leq &\displaystyle \frac{\varphi(u)[\sigma_1(\partial S^i_2)-2\sigma_1(E)]}{2\sqrt{2\pi}}. \end{array} \end{equation} \end{itemize} From (\ref{pt5}) and (\ref{pt6}) we have \begin{equation}\label{pt7} \displaystyle \PP\{M_{S_1}< u, \; M_{S_2^i}\geq u\} \leq \frac{\varphi (u)\sigma_2(S_2^i)}{2\pi }\left[ c\varphi (u\textrm{/}c)+u\Phi (u\textrm{/c})\right]+\frac{\varphi(u)[\sigma_1(\partial S^i_2)-2\sigma_1(E)]}{2\sqrt{2\pi}}. 
\end{equation} Summing up all the bounds as in (\ref{pt7}), considering the upper bound for $\PP\{X(O)<u,\, M_{S_1}\geq u\}$ as in Step 1 and substituting into (\ref{pt4}), we get the result.\\ In the general case, when some $S_2^i$ is not emptyable, we can decompose $S_2^i$ as we did for $S$ and proceed by induction; since the number of vertices decreases, we get the result. Step 3: Passing to the limit. The extension to processes with non-$\mathcal{C}^\infty$ paths is direct by an approximation argument. Let $\overline{X}_\epsilon(t)$ be the Gaussian field obtained by convolution of $X(t)$ with a convolution kernel of size $\epsilon $ (for example a Gaussian density with variance $ \epsilon^2 I_2$). We can apply the preceding bound to the process $$ X_ \epsilon (t) := \frac 1{\sqrt{\Var (\overline{X}_\epsilon(t)) }} \overline{X}_\epsilon\left( \Sigma ^{-1/2} _ \epsilon t\right) , $$ where $ \Sigma _ \epsilon = \Var(\overline{X}'_\epsilon(t))$. Since $ \Var (\overline{X}_\epsilon(t)) \to 1$ and $ \Sigma _ \epsilon \to I_2$, $\max_{t \in S } X_\epsilon (t) \to M_S$ and we are done. The passage to the limit for $S_n$ tending to $ S$ is direct. \end{proof} \subsection*{Some examples} \begin{itemize} \item If $S$ is compact and convex with non-empty interior then it is easy to construct a sequence of polygons $S_n$ converging to $S$ and such that $\liminf_n \sigma_1(\partial S_n)=\sigma_1(\partial S)$, giving \begin{equation}\label{f:bound} \PP\{M_S\geq u\}\leq \PP_R(u)=\overline{\Phi}(u)+ \frac{\sigma_1(\partial S)}{2\sqrt{2\pi}}\varphi(u)+ \frac{\sigma_2(S)}{2\pi }\left[ c\varphi (u\textrm{/}c)+u\Phi (u\textrm{/c})\right]\varphi (u). \end{equation} \item More generally, if $S$ is compact and has a boundary that is piecewise-$\mathcal{C}^2$ except at a finite number of points, and the closure of the interior of $S$ equals $S$, we get (\ref{f:bound}) by the same tools. \item Let us now get rid of the condition $\overline{\overset{\circ}{S}}=S$ but still assume the piecewise-$\mathcal{C}^2$ condition. Define the ``outer Minkowski content'' of a closed subset $S \subset \R^2$ as (see \cite{8}) $$\rm{OMC}(S)= \underset{\epsilon \rightarrow 0}{\lim} \frac{\sigma_2(S^{+\epsilon}\setminus S)}{\epsilon},$$ whenever the limit exists (for more on this subject, see \cite{2}). This definition of the perimeter can differ from the quantity $\sigma_1(\partial S)$. A simple counter-example is a set corresponding to the preceding example with some ``whisker'' added. Using approximation by polygons, we get \begin{equation} \label{star} \PP\{M_S\geq u\}\leq \PP_R(u)=\overline{\Phi}(u)+ \frac{\rm{OMC}( S)}{2\sqrt{2\pi}}\varphi(u)+ \frac{\sigma_2(S)}{2\pi }\left[ c\varphi (u\textrm{/}c)+u\Phi (u\textrm{/c})\right]\varphi (u). \end{equation} \item The next generalization concerns compact $r$-convex sets with a positive $r$ in the sense of \cite{8}. These sets satisfy $$S=\underset{\overset{\circ}{B}(x,r) \cap S=\emptyset}{\bigcap}\R^2\setminus\overset{\circ}{B}(x,r). $$ This condition is slightly more general than the condition of having positive reach in the sense of Federer \cite{9}. Suppose in addition that $S$ satisfies the interior local connectivity property: there exists $\alpha_0 >0$ such that for all $0<\alpha<\alpha_0$ and for all $x\in S,\; \mbox{int}\left(B(x,\alpha)\cap S\right)$ is a non-empty connected set. Then we can construct a sequence of approximating polygons in the following way. 
Let $X_1,X_2,\ldots ,X_n$ be a random sample drawn from a uniform distribution on $S$ and let $S_n$ be the $r$-convex hull of this sample, i.e. $$S_n=\underset{\overset{\circ}{B}(x,r) \cap \{X_1,X_2,\ldots , X_n\}=\emptyset}{\bigcap}\R^2\setminus\overset{\circ}{B}(x,r),$$ which can be approximated by polygons with an arbitrary error. By Theorem 6 of Cuevas et al.\ \cite{8}, $S_n$ is a fully consistent estimator of $S$, meaning that $d_H(S_n,S)$ and $d_H(\partial S_n,\partial S)$ tend to 0 as $n$ tends to infinity. This implies $\sigma_2(S_n)\rightarrow \sigma_2(S)$ and $\rm{OMC}(S_n)\rightarrow \rm{OMC}(S)$. Hence, we obtain (\ref{star}). \item A complicated case: a ``Swiss cheese''. Here, we consider a unit square and inside it, we remove a sequence of disjoint disks of radii $r_i$ such that $\displaystyle \pi \underset{i=1} {\overset{\infty}{\sum}}r_i^2 <1$ to obtain the set $S$. When $\displaystyle \underset{i=1}{\overset{\infty}{\sum}}r_i <\infty$ the bound (\ref{bou1}) makes sense directly. But examples can be constructed from the Sierpinski carpet (see Figure \ref{fig3}) such that $\displaystyle \underset{i=1}{\overset{\infty}{\sum}}r_i =\infty$: divide the square into 9 subsquares of the same size and, instead of removing the central square, remove the disk inscribed in this square and do the same procedure for the remaining 8 subsquares, ad infinitum.\\ \begin{figure} \caption{ Sierpinski carpet (source: Wikipedia).} \label{fig3} \end{figure}\\ In our case, $$\sum_{i=1}^{\infty}r_i^2=\frac{1}{4}\sum_{i=1}^{\infty}\frac{8^{i-1}}{3^{2i}} =\frac{1}{4}.$$ This proves that the obtained set $S$ has positive Lebesgue measure and is not a fractal. We have on the other hand $$ \sum_{i=1}^{\infty}r_i=\frac{1}{2}\sum_{i=1}^{\infty}\frac{8^{i-1}}{3^{i}} = \infty. $$ Let $S_n$ be the set obtained after removing the first $n$ disks. Since $S \subset S_n$, $$\PP \{M_S \geq u\} \leq \PP \{M_{S_n } \geq u\} \leq \overline{\Phi}(u)+ \frac{ \varphi (u)}{2\sqrt{2\pi}}(4+2\pi\underset{i=1}{\overset{n}{\sum}}r_i)+(1-\pi\underset{i=1}{\overset{n}{\sum}}r_i^2)\left[ c\varphi (u\textrm{/}c)+u\Phi (u\textrm{/c})\right]\varphi(u)/(2\pi).$$ Hence, $$\PP \{M_S \geq u\} \leq \overline{\Phi}(u)+ \underset{n}{\min}\left[\frac{ \varphi (u)}{2\sqrt{2\pi}}(4+2\pi\underset{i=1}{\overset{n}{\sum}}r_i)+(1-\pi\underset{i=1}{\overset{n}{\sum}}r_i^2)\left[ c\varphi (u\textrm{/}c)+u\Phi (u\textrm{/c})\right]\varphi(u)/(2\pi)\right].$$ \end{itemize} {\bf Remarks}: \begin{itemize} \item[1.] In comparison with other results, all the examples considered here are new. Firstly, the conditions on the process are minimal and weaker than the ones of the other methods. Secondly, the considered sets are not covered by any other method. Even the first example is new, because we do not assume that the number of irregular points is finite, which is needed, for example, for the convex set to be a stratified manifold as in \cite{1}. \item[2.] Theorem \ref{th1} can be extended directly to non-connected sets using the sub-additivity $$ \PP\{M_{S_1\cup S_2}\geq u \} \leq \PP\{M_{S_1}\geq u \} + \PP\{M_{ S_2}\geq u \}. $$ This implies that the coefficient of $\overline{\Phi}(u)$ in (\ref{bou1}) must be the number of components. 
\end{itemize} \subsection*{Is the bound sharp?} \begin{itemize} \item Under Assumption 2, Adler and Taylor \cite{1} show that $$\underset{u\rightarrow +\infty}{\liminf} -2u^2\log |\PP\{M_S\geq u\} -\PP_E(u)|\geq 1+1/c^2.$$ From $$0\leq \PP_R(u)-\PP_E(u)=\frac{\sigma_2(S)}{2\pi}\varphi(u)\left[c\varphi (u\textrm{/}c)-u\overline{\Phi} (u\textrm{/}c)\right]$$ and the elementary inequality, valid for $x>0$, $$\varphi(x)\left(\frac{1}{x}-\frac{1}{x^3}\right)<\overline{\Phi}(x)<\varphi(x)\left(\frac{1}{x}-\frac{1}{x^3}+\frac{3}{x^5}\right),$$ it is easy to see that $$\underset{u\rightarrow +\infty}{\liminf} -2u^2\log (\PP_R(u)-\PP_E(u)) \geq 1+1/c^2.$$ So the upper bound $\PP_R(u)$ is as sharp as $\PP_E(u)$. \item Let $S$ be a compact and simply connected domain in $\R^2$ having a piecewise-$\mathcal{C}^3$ boundary. Assume that all the discontinuity points are convex, in the sense that if we parametrize the boundary in the direction of positive rotation, then at each discontinuity point, the angle of the tangent has a positive discontinuity. Then, it is easy to see that the quantity $$\kappa (S)= \underset{t\in S}{\sup} \underset{s\in S,\; s\neq t}{\sup} \frac{\rm{dist}(s-t,\mathcal{C}_t)}{\|s-t\|^2}$$ is finite, where $\rm{dist}$ is the Euclidean distance and $\mathcal{C}_t$ is the cone generated by the set of directions $$\Biggl\{ \lambda \in \R^2 :\; \|\lambda\|=1,\, \exists s_n \in S \; \rm{such\; that}\; s_n\rightarrow t \; \rm{and} \; \frac{s_n-t}{\|s_n-t\|}\rightarrow \lambda \Biggr\}.$$ In order to apply Theorem 8.12 in \cite{5}, besides Assumption 1, we make some additional assumptions on the field $X$ so that it satisfies the conditions (A1)-(A5) on page 185 of \cite{5}. Assume that \begin{itemize} \item $X$ has $\mathcal{C}^3$ paths. \item The covariance function $r(t)$ satisfies $|r(t)|\neq 1$ for all $t\neq 0$. \item For all $s\neq t$, the distribution of $(X(s),X(t),X'(s),X'(t))$ does not degenerate. \end{itemize} With these hypotheses, we can see that \begin{itemize} \item The conditions (A1)-(A3) are easily verified. \item The condition (A4), which states that the maximum is attained at a single point, can be deduced from Proposition 6.11 in \cite{5} since for $s\neq t$, $(X(s),X(t),X'(s),X'(t))$ has a nondegenerate distribution. \item The condition (A5), which states that almost surely there is no point $t\in S$ such that $X'(t)=0$ and $\det(X''(t))=0$, can be deduced from Proposition 6.5 in \cite{5} applied to the process $X'(t)$. \end{itemize} Since all the required conditions are met, by Theorem 8.12 in \cite{5}, we have \begin{equation}\label{f:1} \underset{x\rightarrow +\infty}{\liminf}-2x^2 \log \big[\PP_M(x)-\PP\{M_S\geq x\}\big] \geq 1+ \underset{t\in S}{\inf}\frac{1}{\sigma_t^2+\kappa_t^2}>1, \end{equation} where $$\sigma_t^2=\underset{s\in S\setminus\{t\}}{\sup}\frac{\rm{Var}\left(X(s)\mid X(t),X'(t)\right)}{(1-r(s,t))^2}$$ and $$\kappa_t=\underset{s\in S\setminus\{t\}}{\sup}\frac{\rm{dist}\left(\frac{\partial}{\partial t} r(s,t),\mathcal{C}_t\right)}{1-r(s,t)}.$$ Note that the finiteness of $\kappa(S)$ implies that $\kappa_t$ is also finite for every $t\in S$. Inequality (\ref{f:1}) is also true for $\PP_R$, since for $x$ large enough $\PP_R(x)$ is smaller than $\PP_M(x)$ (see Section \ref{se5} for the easy proof). As a consequence $\PP_R$ is super-exponentially sharp. \item Suppose that $S$ is a circle in $\R^2$. Then $\{X(t)\, :\; t\in S\}$ can be viewed as a periodic process on the line. 
In that case, it is easy to show, see for example Exercise 4.2 in \cite{5}, that as $u\rightarrow \infty$ $$ \PP(M_S\geq u)=\frac{\sigma_1(S)}{\sqrt{2\pi}}\varphi(u) + O(\varphi(u(1+\delta)))=\frac{\rm{OMC}(S)}{2\sqrt{2\pi}}\varphi(u) + O(\varphi(u(1+\delta))) $$ for some $\delta >0$; while Theorem \ref{th1} gives, with a standard approximation of the circle by polygons, $$\PP(M_S\geq u) \leq \PP_R(u)=\overline{\Phi}(u)+ \frac{\rm{OMC}(S)}{2\sqrt{2\pi}}\varphi(u), $$ which is too large. This shows that the bound $\PP_R$ is not always super-exponentially sharp. \end{itemize} \section{The record method in dimension 3}\label{se3} With the direct method, for example, some difficulties arise in dimension 3 because we need to compute $$\E | \det (X'' (t))|,$$ under some conditional law. This can be done only in the isotropic case, using random matrix theory, see \cite{4}, and even in this case the result is complicated. In dimension 2, the record method is a trick that saves one dimension in the size of the determinant we have to consider, because the conditioning implies a factorization. For example, in equation (\ref{pt2}) we have used the fact that $$ \det( Z'(t) ) = X''_{11}(t) X^{\prime }_2(t),$$ under the conditioning. In this section we will use the same kind of trick to pass from a $3\times 3$ matrix to a $2\times 2$ matrix; a $2\times 2$ determinant is just a quadratic form, so we can use, to compute the expectation of its absolute value, the Fourier method of Berry and Dennis \cite{6} or Li and Wei \cite{11}. This computation is detailed in Section 4 and is one of the main contributions of the paper. \\ Before stating the main theorem of this section, we recall the following lemma (see Chapter 5 of Prasolov and Sharygin \cite{15}). \begin{lemma}\label{lema} Let $Oxyz$ be a trihedral. Denote by $a,\, b$ and $c$ the plane angles $\widehat{xOy},\, \widehat{yOz}$ and $\widehat{zOx}$, respectively. Denote by $A, \,B$ and $ C$ the angles between the two faces containing the line $Oz, \, Ox$ and $ Oy$, respectively. Then, \begin{itemize} \item[a.] $\sin a : \sin A=\sin b : \sin B=\sin c : \sin C$. \item[b.] $\cos a=\cos b\cos c+\sin b\sin c \cos A$. \end{itemize} \end{lemma} Our main result is the following \begin{theorem}\label{th3} Let $S$ be a compact and convex subset of $\R^3$ with non-empty interior and let $X$ satisfy Assumption 1. Suppose, in addition, that $X$ is isotropic with respect to the first and second coordinates, i.e. $$\textnormal{Cov}(X(t_1,t_2,t_3);X(s_1,s_2,t_3))=\rho((t_1-s_1)^2+(t_2-s_2)^2) \mbox{ with}\; \rho \, \mbox{ of class}\; \mathcal{C}^2.$$ Then, for every real $u$, \begin{displaymath} \begin{array}{rl} \PP \{ M_S\geq u \} \leq & 1 -\Phi (u) + \displaystyle \frac{2\lambda(S)}{\sqrt{2\pi }} \varphi (u) + \frac{\sigma_2(S) \varphi (u)}{4\pi } \left[ \sqrt{12\rho''(0) -1}\varphi \left(\frac{u}{\sqrt{12\rho''(0) -1}} \right) + u \Phi \left(\frac{u}{\sqrt{12\rho''(0) -1}} \right)\right]\\ &+ \displaystyle \frac{\sigma_3(S)\varphi (u)}{(2\pi)^{3/2} } \left[u^2-1+\frac{\displaystyle (8\rho''(0))^{3/2}\exp\left(-u^2.(24\rho''(0)-2)^{-1}\right)}{\displaystyle \sqrt{24\rho''(0)-2}}\right], \end{array} \end{displaymath} where $\lambda(S)$ is the caliper diameter of $S$. \end{theorem} \begin{proof} By the same limit argument as in Theorem \ref{th1}, we can assume that $X(t) $ has $\mathcal{C}^{\infty}$ paths and that $S$ is a convex polyhedron. Let $O$ be the vertex of $S$ that has minimal third coordinate; we can also assume that this vertex is unique. 
It is clear that if $X(O) < u$ and $M_S\geq u$ then the level set $$\mathcal{C}(u)=\{t\in S: \; X(t)=u\}$$ is non-empty and there exists at least one point $T$ having minimal third coordinate on this set. Then, \begin{equation} \begin{array}{rcl} \PP\{M_S\geq u\}& = & \PP \{ X(O)\geq u \}+\PP\{X(O)<u,\, M_S\geq u\}\\ &\leq & \PP \{ X(O)\geq u \}+\PP\{\exists \, T\in S: \, X(T)=u,\, T \, \mbox{has minimal third coordinate on}\; \mathcal{C}_u\}. \end{array} \end{equation} Now, we consider three possibilities:\\ $\bullet $ Firstly, if $T$ is in the interior of $S$, then by the same arguments as in Theorem \ref{th1}, for every point $s\in S$ with third coordinate smaller than that of $T$, we have $X(s)<X(T)$; this means that, at $T$, $X(t)$ has a local maximum with respect to the first and second coordinates and is non-decreasing with respect to the third coordinate. Therefore, setting \begin{displaymath} A(t)= \left( \begin{array}{cc} X''_{11}(t) & X''_{12}(t) \\ X''_{12}(t) & X''_{22}(t) \end{array} \right), \end{displaymath} we have $$ \{X(T)=u, \;X'_1(T)=0, \; X'_2(T)=0,\; A(T) \preceq 0,\; X'_3(T) \geq 0\}. $$ Then, applying the Rice formula to the field $Z=(X,X'_1,X'_2)$ and the Markov inequality, \begin{displaymath} \begin{array}{rl} &\displaystyle\PP\{\exists \, T\in \overset{\circ}{S}: \, X(T)=u,\, T \, \mbox{has minimal third coordinate on}\; \mathcal{C}_u\}\\ \leq &\displaystyle \PP\{\exists \, t \in \overset{\circ}{S}:\;X(t)=u, \; X'_1(t)=0, \; X'_2(t)= 0, \; X'_3(t)\geq 0,\; A(t) \preceq 0\}\\ \leq &\displaystyle \E\big(\textnormal{card}\{t\in \overset{\circ}{S}:\; X(t)=u, \; X'_1(t)=0, \; X'_2(t)= 0, \; X'_3(t)\geq 0,\; A(t) \preceq 0\}\big) \\ = &\displaystyle \E\big(\textnormal{card}\{t\in \overset{\circ}{S}:\;Z(t)=(u,0,0), \; X'_3(t) \geq 0, \; A(t) \preceq 0\}\big)\\ =& \displaystyle \int_{\overset{\circ}{S}} \E\left(|\det(Z'(t))|\mathbb{I}_{X'_3(t) \geq 0}\mathbb{I}_{A(t)\preceq 0} \mid Z(t)=(u,0,0)\right) \times p_{Z(t)}(u,0,0)\; dt.\\ \end{array} \end{displaymath} Under the condition $Z(t)=(u,0,0)$, it is clear that $\det(Z'(t))=X'_3(t)\, \det(A(t))$. So, we obtain the bound $$ \sigma_3(S) \, \frac{\varphi(u)}{2\pi} \, \E\left(|\det(A(t))|.\mathbb{I}_{A(t) \preceq 0}X'^+_3(t) \mid Z(t)=(u,0,0)\right). $$ From Corollary \ref{cor} of Section \ref{se4}, we know that $$ \E\left(|\det(A(t))|. \mathbb{I}_{A(t)\preceq 0}\mid Z(t)=(u,0,0) \right) \leq u^2-1+\frac{\displaystyle (8\rho''(0) )^{3/2}\exp\left(-u^2.(24\rho''(0) -2)^{-1}\right)}{\displaystyle \sqrt{24\rho''(0) -2}}.$$ Hence, \begin{align} &\PP\{\exists \, T\in \overset{\circ}{S}: \, X(T)=u,\, T \, \mbox{has minimal third coordinate on}\; \mathcal{C}_u\} \notag \\ \leq \;\;& \label{t1} \frac{\sigma_3(S)\varphi (u)}{(2\pi)^{3/2} } \left[u^2-1+\frac{\displaystyle (8\rho''(0))^{3/2}\exp\left(-u^2.(24\rho''(0)-2)^{-1}\right)}{\displaystyle \sqrt{24\rho''(0)-2}}\right]. \end{align} $\bullet $ Secondly, if $T$ is in the interior of a face $S_1$, then, in this face, we choose the basis $\overrightarrow{\alpha},\, \overrightarrow{\beta}$ such that $\overrightarrow{\alpha}$ is in the horizontal plane $0t_1t_2$ and such that along $\overrightarrow{\beta}$ the third coordinate is not decreasing. Let $\overrightarrow{\gamma}$ denote the vector in the horizontal plane that is perpendicular to $\overrightarrow{\alpha}$ and points into $S$. 
It is easy to see that $$\{ X(T)=u, \; X'_ {\alpha}(T) =0,\; X'_{\beta}(T) \geq 0,\; X'_{\gamma}(T) \leq 0,\; X''_{\alpha}(T) \leq 0 \}.$$ Applying the Markov inequality and the Rice formula to the field $Y(t)=(X(t),X'_{\alpha}(t))$, \begin{displaymath} \begin{array}{rl} &\displaystyle \PP\{\exists \, T\in \overset{\circ}{S_1}: \, X(T)=u,\, T \, \mbox{has minimal third coordinate on}\; \mathcal{C}_u\}\\ \leq & \PP \{\exists \, t \in \overset{\circ}{S_1}: \; X(t)=u, \; X'_ {\alpha}(t) =0,\; X'_{\beta}(t) \geq 0,\; X'_{\gamma}(t) \leq 0,\; X''_{\alpha}(t) \leq 0 \}\\ \leq & \E (\textnormal{card}\{ t \in \overset{\circ}{S_1}: \; X(t)=u, \; X'_ {\alpha}(t) =0,\; X'_{\beta}(t) \geq 0,\; X'_{\gamma}(t) \leq 0,\; X''_{\alpha}(t) \leq 0 \})\\ = & \displaystyle \int_{\overset{\circ}{S_1}} \E\left( |\det(Y'(t))| \mathbb{I}_{X'_{\beta}(t) \geq 0}\mathbb{I}_{X'_{\gamma}(t) \leq 0}\mathbb{I}_{X''_{\alpha}(t) \leq 0}\mid Y(t)=(u,0)\right)\, p_{Y(t)}(u,0)\, dt\\ = &\displaystyle \frac{\sigma_2(S_1)\varphi(u)}{\sqrt{2\pi}} \E\left( |X''^-_{\alpha}(t)|X'^+_{\beta}(t)\mathbb{I}_{X'_{\gamma}(t) \leq 0} \mid Y(t)=(u,0)\right). \end{array} \end{displaymath} As in Theorem \ref{th1}, it is clear that \begin{align*} \E\big( |X''^-_{\alpha}(t)| \mid Y(t)=(u,0)\big) & =\, \sqrt{12\rho''(0) -1}\varphi \left(\frac{u}{\sqrt{12\rho''(0) -1}} \right) + u \Phi \left(\frac{u}{\sqrt{12\rho''(0) -1}} \right),\\ \E\big( X'^+_{\beta}(t)\mathbb{I}_{X'_{\gamma}(t) \leq 0} \mid Y(t) =(u,0)\big) & =\frac{1-\cos(\beta,\gamma)}{2\sqrt{2\pi}}. \end{align*} Observing that the angle between $\beta$ and $\gamma$ is the angle $\theta_1$ between the face $S_1$ and the horizontal plane, the probability that there exists one point with minimal third coordinate on the level set and in the interior of the face $S_1$ is at most equal to $$ \frac{\sigma_2(S_1) \varphi (u)(1-\cos\theta_1)}{4\pi } \left[ \sqrt{12\rho''(0) -1}\varphi \left(\frac{u}{\sqrt{12\rho''(0) -1}} \right) + u \Phi \left(\frac{u}{\sqrt{12\rho''(0) -1}} \right)\right] .$$ Taking the sum of all these bounds over the faces, and observing that $$\sum_{i=1}^n \sigma_2(S_i)\cos\theta_i=0,$$ we have the following upper bound for the probability of having a point $T$ with minimal third coordinate on the level set and belonging to a face: \begin{equation}\label{t2} \frac{\sigma_2(S) \varphi (u)}{4\pi } \left[ \sqrt{12\rho''(0) -1}\varphi \left(\frac{u}{\sqrt{12\rho''(0) -1}} \right) + u \Phi \left(\frac{u}{\sqrt{12\rho''(0) -1}} \right)\right] . \end{equation} $\bullet $ Thirdly, suppose that $T$ belongs to an edge, for example $F_1$. Let $\overrightarrow{\eta}$ be the upward direction on this edge, i.e. such that along this vector the third coordinate is not decreasing, and let $\overrightarrow{\alpha}$ and $\overrightarrow{\beta}$ be the two horizontal directions that go inside the two faces containing the edge. Then, $$ \{X(T)=u,\; X'_{\eta}(T) \geq 0,\; X'_{\alpha}(T) \leq 0,\; X'_{\beta}(T) \leq 0\} .$$ By the Rice formula, the expectation of the number of points in $F_1$ satisfying this condition is \begin{displaymath} \begin{array}{rl} & \displaystyle \int_{F_1} \E\left(X'^+_{\eta}(t) \mathbb{I}_{X'_{\alpha}(t) \leq 0}\mathbb{I}_{X'_{\beta}(t) \leq 0} \mid X(t)=u\right)\times p_{X(t)}(u) \; dt\\ =& \displaystyle \sigma_1(F_1) \; \varphi(u) \; \E\left(X'^+_{\eta}(t) \mathbb{I}_{X'_{\alpha}(t) \leq 0}\mathbb{I}_{X'_{\beta}(t) \leq 0}\right). 
\end{array} \end{displaymath} Let $\overrightarrow{a}$ and $\overrightarrow{b}$ be two vectors in the two faces containing the edge $F_1$ and perpendicular to $\overrightarrow{\eta}$; let $\theta_1$ be the angle between $\overrightarrow{\alpha}$ and $\overrightarrow{\eta}$ and $\theta_2$ the angle between $\overrightarrow{\beta}$ and $\overrightarrow{\eta}$. It is clear that $$X'_{\alpha}(t)=\cos\theta_1 \, X'_{\eta}(t)+ \sin\theta_1 \, X'_a(t) ,$$ $$ X'_{\beta}(t)=\cos\theta_2 \, X'_{\eta}(t)+\sin\theta_2 \, X'_b(t), $$ and $\textnormal{cov}(X'_a(t),X'_b(t))=\cos\theta_3$, where $\theta_3$ is the angle between the two faces containing the edge $F_1$. Then, \begin{displaymath} \begin{aligned} &\E\left(X'^+_{\eta} \mathbb{I}_{X'_{\alpha}(t) \leq 0}\mathbb{I}_{X'_{\beta}(t) \leq 0}\right)\\ =& \E\left(X'^+_{\eta} \mathbb{I}_{\{\cos\theta_1 \, X'_{\eta}(t)+ \sin\theta_1 \, X'_a(t) \leq 0\} }\mathbb{I}_{\{\cos\theta_2 \, X'_{\eta}(t)+\sin\theta_2 \, X'_b(t) \leq 0\}}\right)\\ =& \displaystyle \int_0^{\infty} x \, \varphi(x) \, F(x) \, dx,\\ \end{aligned} \end{displaymath} where \begin{displaymath} \begin{aligned} F(x)= & \E\left( \mathbb{I}_{\{\cos\theta_1 \, X'_{\eta}(t)+ \sin\theta_1 \, X'_a(t) \leq 0\}}\mathbb{I}_{\{\cos\theta_2 \, X'_{\eta}(t)+\sin\theta_2 \, X'_b(t) \leq 0\}} \mid X'_{\eta}(t)=x\right)\\ = & \displaystyle \int_{-\infty}^{-\cot\theta_1 .x} \varphi(y) \, \Phi\left(\frac{- \cot\theta_2 .x - \cos\theta_3 .y}{\sin \theta_3}\right) \, dy.\\ \end{aligned} \end{displaymath} So, \begin{displaymath} \begin{aligned} F'(x)=&-\cot \theta_2 \; \varphi(-\cot\theta_2 \, .x) \, \Phi\left(\frac{-\cot\theta_1 \,.x+\cos \theta_3 \, \cot \theta_2\, . x}{\sin \theta_3}\right)\\ &-\cot \theta_1 \; \varphi(-\cot \theta_1 \, . x) \, \Phi\left(\frac{-\cot\theta_2\, . x+\cos \theta_3 \, \cot \theta_1\, . x}{\sin \theta_3}\right). \end{aligned} \end{displaymath} By integration by parts, \begin{displaymath} \begin{aligned} \displaystyle \int_0^{\infty} x\, \varphi(x)\, F(x)\, dx & = -\int_0^{\infty} F(x) \, d(\varphi(x))\\ &= \displaystyle F(0)\varphi(0)+ \int_0^{\infty} \varphi(x)\, F'(x)\, dx. \end{aligned} \end{displaymath} It is easy to check that $$\int_0^{\infty} \varphi(x)\, \Phi(mx) \, dx=\frac{1}{4}+\frac{\arctan (m)}{2\pi},$$ $$\int_{-\infty}^0 \varphi(x)\, \Phi(mx) \, dx=\frac{1}{4}-\frac{\arctan (m)}{2\pi}.$$ From the above results, we have \begin{displaymath} \begin{aligned} F(0)\varphi(0)& = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^0 \varphi(y)\Phi\left( \frac{-\cos\theta_3 \, }{\sin\theta_3}y\right) \, dy\\ &= \frac{1}{(2\pi)^{3/2}}\left(\pi-\theta_3\right) \end{aligned} \end{displaymath} and \begin{displaymath} \begin{aligned} \int_0^{\infty} \varphi(x)\, F'(x)\, dx&= -\cot \theta_2 \,\int_0^{\infty} \varphi(x) \varphi(-\cot \theta_2\, . x) \, \Phi\left(\frac{-\cot \theta_1\, . x+\cos \theta_3 \, \cot \theta_2\, . x}{\sin \theta_3}\right) \, dx\\ &\quad -\cot \theta_1 \,\int_0^{\infty} \varphi(x) \varphi(-\cot \theta_1\, . x) \, \Phi\left(\frac{-\cot \theta_2\, .x+\cos \theta_3 \, \cot \theta_1\, . x}{\sin \theta_3}\right)\,dx\\ &= \frac{-\cos\theta_2}{\sqrt{2\pi}}\left(\frac{1}{4}+\frac{1}{2\pi}\arctan \left(\frac{-\sin\theta_2.\cot\theta_1+\cos\theta_3\cos\theta_2}{\sin\theta_3}\right)\right)\\ &\quad + \frac{-\cos\theta_1}{\sqrt{2\pi}}\left(\frac{1}{4}+\frac{1}{2\pi}\arctan \left(\frac{-\sin\theta_1\cot\theta_2+\cos\theta_3\cos\theta_1}{\sin\theta_3}\right)\right). 
\end{aligned} \end{displaymath} Therefore, the probability that there exists one point with minimal third coordinate on the level set $\mathcal{C}(u)$ and belonging to $F_1$ is at most equal to \begin{displaymath} \begin{array}{rl} \displaystyle \frac{\sigma_1(F_1)(\pi-\theta_3)\varphi(u)}{(2\pi)^{3/2}} +\sigma_1(F_1)& \displaystyle \left[\frac{-\cos\theta_2}{\sqrt{2\pi}} \left( \frac{1}{4}+\frac{1}{2\pi}\arctan \left(\frac{-\sin\theta_2\cot\theta_1+\cos\theta_3\cos\theta_2}{\sin\theta_3}\right)\right) \right.\\ &\displaystyle +\left. \frac{-\cos\theta_1}{\sqrt{2\pi}}\left(\frac{1}{4}+\frac{1}{2\pi}\arctan \left(\frac{-\sin\theta_1\cot\theta_2+\cos\theta_3\cos\theta_1}{\sin\theta_3}\right)\right)\right]. \end{array} \end{displaymath} Summing up all the terms over all the edges, we obtain the bound \begin{displaymath} \begin{aligned} \displaystyle \varphi(u)\sum_{i=1}^n \frac{\sigma_1(F_i)(\pi-\theta_{3i})}{(2\pi)^{3/2}} & +\displaystyle \varphi(u)\sum_{i=1}^n \sigma_1(F_i)\left[ \frac{-\cos\theta_{2i}}{\sqrt{2\pi}}\left( \frac{1}{4}+\frac{1}{2\pi}\arctan \left(\frac{-\sin\theta_{2i}\cot\theta_{1i}+\cos\theta_{3i}\cos\theta_{2i}}{\sin\theta_{3i}}\right)\right) \right.\\ &\displaystyle \qquad + \left. \frac{-\cos\theta_{1i}}{\sqrt{2\pi}}\left(\frac{1}{4}+\frac{1}{2\pi}\arctan \left(\frac{-\sin\theta_{1i}\cot\theta_{2i}+\cos\theta_{3i}\cos\theta_{1i}}{\sin\theta_{3i}}\right)\right)\right]. \end{aligned} \end{displaymath} By definition of the caliper diameter, $$\sum_{i=1}^n \sigma_1(F_i) (\pi-\theta_{3i})=4\pi\lambda(S).$$ Now, we prove that, writing $l_i=\sigma_1(F_i)$, \begin{displaymath} \begin{aligned} I = & \displaystyle \sum_{i=1}^n l_i \left[ \frac{\cos\theta_{2i}}{\sqrt{2\pi}}\left(\frac{1}{4}+\frac{1}{2\pi}\arctan \left(\frac{-\sin\theta_{2i}\cot\theta_{1i}+\cos\theta_{3i}\cos\theta_{2i}}{\sin\theta_{3i}}\right)\right) \right.\\ &\qquad + \left. \frac{\cos\theta_{1i}}{\sqrt{2\pi}}\left(\frac{1}{4}+\frac{1}{2\pi}\arctan \left(\frac{-\sin\theta_{1i}\cot\theta_{2i}+\cos\theta_{3i}\cos\theta_{1i}}{\sin\theta_{3i}}\right)\right) \right]=0. \end{aligned} \end{displaymath} Indeed, from Lemma \ref{lema}, we have $$ \frac{-\sin\theta_1\cot\theta_2+\cos\theta_3\cos\theta_1}{\sin\theta_3}=\frac{-\cos h}{\sin h}, $$ where $h$ is the dihedral angle at $\overrightarrow{\alpha}$, i.e., the angle between the horizontal plane and the face containing $\overrightarrow{\alpha}$ and $\overrightarrow{\eta}$. Since $h$ is constant for each face, $$ I= \sum_{S \in \{S_1,\ldots,S_k\}} \sum_{l \subset S} l\cos \theta_1 \left(\frac{1}{4}+\frac{1}{2\pi}\arctan\left(\frac{-\cos h}{\sin h}\right)\right)=0. $$ Therefore, we have the following upper bound for the probability of having a point $T$ with minimal third coordinate on the level set and belonging to an edge: \begin{equation}\label{t3} \frac{2\lambda(S)\varphi(u)}{(2\pi)^{1/2}}. \end{equation} From (\ref{t1}), (\ref{t2}), (\ref{t3}) and the fact that $\PP\{X(O)\geq u\}=\overline{\Phi}(u)$, the result follows. \end{proof} \section{Computation of the expectation of the absolute value of the determinant of the Hessian matrix}\label{se4} As seen in the proof of Theorem \ref{th3}, we have to deal with the following quantity $$ \E(|\det(X''(t))|\mathbb{I}_{X''(t)\preceq 0} \mid X(t)=u,\; X'_1(t)=0,\;X'_2(t)=0). $$ To evaluate this quantity, we have the following statement, which is one of the main results of this paper: \begin{theorem} Let $X$ be a standard stationary isotropic centered two-dimensional Gaussian field. 
One has \begin{equation}\label{eth3} \E\left(|\det(X''(t))|\mid (X,X'_1,X'_2)(t)=(u,0,0) \right) \, = u^2-1+2\frac{\displaystyle (8\rho''(0) )^{3/2}\exp(-u^2.(24\rho''(0) -2)^{-1})}{\displaystyle \sqrt{24\rho''(0) -2}}. \end{equation} \end{theorem} \begin{proof} Under the conditioning, the vector $(X''_{11},X''_{12},X''_{22})$ has the same distribution as $(Y_1,Y_2,Y_3)+(-u,0,-u)$, where $(Y_1,Y_2,Y_3)$ is a centered Gaussian vector with the covariance matrix\\ $$\Sigma = \left( \begin{array}{ccc} 12\rho''(0) -1 & 0 & 4\rho''(0) -1\\ 0 & 4\rho''(0) & 0 \\ 4\rho''(0) -1& 0 & 12\rho''(0) -1 \\ \end{array} \right).$$ Then, the LHS in (\ref{eth3}) can be written as\\ $$\begin{array}{ll} & \E(|X''_{11}(t)X''_{22}(t)-X''_{12}(t)^2| \, \mid \, (X,X'_1,X'_2)(t)=(u,0,0) ) \\ =& \E(|(Y_1-u)(Y_3-u)-Y^2_2|)\\ =& \E(|Y_1Y_3-Y^2_2-u(Y_1+Y_3)+u^2|)\\ =& \E(|<Y,AY>+<b,Y>+u^2|), \end{array}$$ where $A=\left(\begin{array}{ccc} 0 & 0 & \frac{1}{2}\\ 0 & -1 & 0 \\ \frac{1}{2} & 0 & 0 \\ \end{array}\right)$ and $b=\left(\begin{array}{c} -u \\ 0 \\ -u \\ \end{array}\right)$.\\ Here, from Theorem 2.1 of \cite{11}, the expectation is equal to $$\E(|<Y,AY>+<b,Y>+u^2|)=\frac{2}{\pi } \int_0^{\infty }t^{-2}(1-F(t)-\overline{F}(t))dt,$$ where $$F(t)=\frac{\exp(itu^2-2^{-1}t^2<b,(I-2it\Sigma A)^{-1}\Sigma b>)}{2\det(I-2it\Sigma A)^{1/2}}.$$ It is clear that $$F(t)=\frac{\exp(itu^2[1-it(16\rho''(0) -2)]^{-1})}{2(1+8it\rho''(0) )[1-it(16\rho''(0) -2)]^{1/2}},$$ and $$\bar {F}(t)=\frac{\exp(-itu^2[1+it(16\rho''(0) -2)]^{-1})}{2(1-8it\rho''(0) )[1+it(16\rho''(0) -2)]^{1/2}}=F(-t).$$ So, the expectation is equal to $$\begin{array}{rl} \displaystyle \frac{2}{\pi} \int_0^{\infty }\frac{1}{t^2}(1-F(t)-\overline{F}(t))dt=& \displaystyle \textnormal{Re}( \frac{1}{\pi} \int_{-\infty}^{\infty }\frac{1}{t^2}(1-2.F(t))dt)\\ =& \displaystyle \textnormal{Re}(\frac{1}{\pi} \int_{-\infty}^{\infty }\frac{1}{t^2}(1-\frac{\displaystyle \exp(itu^2[1-it(16\rho''(0) -2)]^{-1})}{\displaystyle (1+8it\rho''(0) )[1-it(16\rho''(0) -2)]^{1/2}})dt). \end{array}$$ Here, we apply the residue theorem to compute $$\begin{array}{ll} & \displaystyle \frac{1}{\pi} \int_{-\infty}^{\infty }\frac{1}{t^2}(1-\frac{\displaystyle \exp(itu^2[1-it(16\rho''(0) -2)]^{-1})}{\displaystyle (1+8it\rho''(0) )[1-it(16\rho''(0) -2)]^{1/2}})dt\\ =& 2i . \left(\text{sum of residues in the upper half plane}\right) + i. \left(\text{sum of residues on the real axis} \right). \end{array}$$ The residues come from the two poles at $i.(8\rho''(0) )^{-1}$ and $0$, and we see the following.\\ The residue at $0$ is equal to $$\frac{d}{dt}\left( 1-\frac{\exp(itu^2[1-it(16\rho''(0) -2)]^{-1})}{(1+8it\rho''(0) )[1-it(16\rho''(0) -2)]^{1/2}}\right) \bigg|_{t=0}=-i.u^2+i.$$ And the residue at $i.(8\rho''(0) )^{-1}$ is equal to \begin{displaymath} \begin{array}{l} \displaystyle \frac{(1+8it\rho''(0) )[1-it(16\rho''(0) -2)]^{1/2}-\exp(itu^2[1-it(16\rho''(0) -2)]^{-1})}{t^2.8i\rho''(0) .[1-it(16\rho''(0) -2)]^{1/2}} \bigg|_{t=i.(8\rho''(0) )^{-1}}\\ = \displaystyle \frac{(8\rho''(0) )^{3/2}\exp(-u^2.(24\rho''(0) -2)^{-1})}{\sqrt{24\rho''(0) -2}.i}. \end{array} \end{displaymath} These two residues imply the result. \end{proof} We have the following corollary. \begin{corollary} \label{cor} Let $X$ be a standard stationary isotropic centered two-dimensional Gaussian field. One has $$\E(|\det(X''(t))|. 
\mathbb{I}_{X''(t)\preceq 0}\mid (X,X'_1,X'_2)(t)=(u,0,0) ) \leq u^2-1+\frac{\displaystyle (8\rho''(0) )^{3/2}\exp(-u^2.(24\rho''(0) -2)^{-1})}{\displaystyle \sqrt{24\rho''(0) -2}}.$$ \end{corollary} \begin{proof} The result follows from two observations: \begin{itemize} \item $\displaystyle |\det(X''(t))|. \mathbb{I}_{X''(t)\preceq 0} \leq \frac{|\det(X''(t))|+\det(X''(t))}{2}.$ \item $\E(\det(X''(t))\mid (X,X'_1,X'_2)(t)=(u,0,0)) \; = \; u^2-1.$ \end{itemize} \end{proof} \section{Numerical comparison}\label{se5} In this section, we compare the upper bounds given by the direct method and the record method with the approximation given by the EC method. For simplicity we limit our attention to the case where $S$ is the square $[0,T]^2$ and $X$ is a standard stationary isotropic centered Gaussian field with covariance function $\rho(\|s-t\|^2)$. Note that only $\rho''(0)$ plays a role; the exact form of $\rho$ does not need to be specified. More precisely, we consider \begin{itemize} \item[1.] the approximation given by the EC method $$ \PP_E (u) = \overline{\Phi}(u)+\frac{2T}{\sqrt{2\pi}} \varphi (u)+\frac{T^2}{2\pi}u\varphi (u); $$ \item[2.] and the upper bound given by the direct method \begin{align} \PP_M (u) = \quad \overline{\Phi}(u)\displaystyle &+\frac{2T}{\sqrt{2\pi}} \int_u^{\infty} \left[ c\varphi (x\textrm{/}c)+x\Phi (x\textrm{/c})\right]\varphi (x)dx \notag \\ &\displaystyle + \frac{T^2}{2\pi } \int_u^{\infty} \left[x^2-1+\left(\frac{2(c^2+1)}{3}\right)^{3/2} \sqrt{\pi}\frac{ \varphi(x/c)}{c}\right]\varphi(x)dx, \notag \end{align} where $c=\sqrt{12\rho''(0)-1}$,\\ \item[3.] and the one given by the record method $$\PP_R(u) = \overline{\Phi}(u)+\frac{2T}{\sqrt{2\pi}} \varphi (u)+\frac{T^2}{2\pi }\left[ c\varphi (u\textrm{/}c)+u\Phi (u\textrm{/c})\right]\varphi (u). $$ \end{itemize} It is easy to see that $\PP_E$ is always less than $\PP_R$ and $\PP_M$. We will prove that $\PP_R(u)$ is smaller than $\PP_M(u)$ when $u$ is large. Indeed, if we compare the ``dimension 1 terms'' (corresponding to $\sigma_1(\partial S)$), we have \begin{displaymath} \begin{array}{rl} & \int_u^{\infty} \left[ c\varphi (x\textrm{/}c)+x\Phi (x\textrm{/c})\right]\varphi (x)dx - \varphi (u)\\ = & \int_u^{\infty} \left[ c\varphi (x\textrm{/}c)+x\Phi (x\textrm{/c})\right]\varphi (x)dx - \int_u^{\infty} x\varphi (x)dx\\ =& \int_u^{\infty} \left[ c\varphi (x\textrm{/}c)-x\overline{\Phi} (x\textrm{/c})\right]\varphi (x)dx \geq 0, \end{array} \end{displaymath} since when $x\geq 0$, $$\frac{\varphi(x)}{x}\geq \overline{\Phi}(x).$$ So the term in the direct method is always larger when $u\geq 0$. \\ Let us consider now the two terms corresponding to $\sigma_2(S)$: \begin{itemize} \item $A_d = u\varphi(u) + \int_u^{\infty} \left[ \left(\frac{2(c^2+1)}{3}\right)^{3/2} \sqrt{\pi}\frac{ \varphi(x/c)}{c}\right]\varphi(x)dx = u \varphi(u) + \overline{A}_d.$ \item $A_r = \left[ c\varphi (u\textrm{/}c)+u\Phi (u\textrm{/c})\right]\varphi (u) = u\varphi(u) +\overline{A}_r.$ \end{itemize} It is easy to show that, as $u\to +\infty$, $$ \overline{A}_d= (const) \int _u ^{\infty} \varphi \bigg( \frac x c\bigg) \varphi(x) dx = (const) \overline{\Phi} \bigg( u \sqrt{ \frac{1+ c^2}{ c^2}}\bigg) \simeq (const) u^{-1} \varphi \bigg( u \sqrt{ \frac{1+ c^2}{ c^2}}\bigg) $$ and that $$ \overline{A}_r \simeq (const) u^{-2} \varphi \bigg( u \sqrt{ \frac{1+ c^2}{ c^2}}\bigg). $$ This shows that for $u$ sufficiently large $ A_r$ is smaller than $ A_d$. \\ The numerical comparison is performed in Figure 4 for six different situations. 
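The three quantities above are straightforward to evaluate numerically. The following short Python sketch is not part of the original computations: it assumes that NumPy and SciPy are available, uses the hypothetical values $T=10$ and $c=\sqrt{2}$, and simply transcribes the displayed formulas for $\PP_E$, $\PP_R$ and $\PP_M$ (the two integrals in $\PP_M$ are computed by numerical quadrature).
\begin{verbatim}
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def P_E(u, T):
    # EC approximation for the square [0, T]^2
    return norm.sf(u) + 2*T/np.sqrt(2*np.pi)*norm.pdf(u) \
           + T**2/(2*np.pi)*u*norm.pdf(u)

def P_R(u, T, c):
    # record-method upper bound
    return norm.sf(u) + 2*T/np.sqrt(2*np.pi)*norm.pdf(u) \
           + T**2/(2*np.pi)*(c*norm.pdf(u/c) + u*norm.cdf(u/c))*norm.pdf(u)

def P_M(u, T, c):
    # direct-method upper bound; the two integrals are evaluated numerically
    edge, _ = quad(lambda x: (c*norm.pdf(x/c) + x*norm.cdf(x/c))*norm.pdf(x),
                   u, np.inf)
    area, _ = quad(lambda x: (x**2 - 1 + (2*(c**2 + 1)/3)**1.5
                              * np.sqrt(np.pi)*norm.pdf(x/c)/c)*norm.pdf(x),
                   u, np.inf)
    return norm.sf(u) + 2*T/np.sqrt(2*np.pi)*edge + T**2/(2*np.pi)*area

T, c = 10.0, np.sqrt(2.0)   # hypothetical example values
for u in (2.0, 3.0, 4.0):
    print(u, P_E(u, T), P_R(u, T, c), P_M(u, T, c))
\end{verbatim}
Note that only $\rho''(0)$ enters, through the constant $c$, in agreement with the remark above.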
Figure 4 shows that the record method is always better than the direct method. The EC method and the record method are very close, but it is not possible to tell which of the two is better, since $\PP_E$ can be smaller than the true value. \begin{figure}\caption{Numerical comparison of $\PP_E(u)$, $\PP_R(u)$ and $\PP_M(u)$ in six different situations.}\end{figure} \end{document}
\begin{document} \title{$\Omega$-theorem for short trigonometric sum} \author{Jan Moser} \address{Department of Mathematical Analysis and Numerical Mathematics, Comenius University, Mlynska Dolina M105, 842 48 Bratislava, SLOVAKIA} \email{[email protected]} \keywords{Riemann zeta-function} \begin{abstract} In this paper we obtain a new application of the classical E.~C. Titchmarsh discrete method (1934) in the theory of the Riemann $\zf$-function. Namely, we shall prove the first localized $\Omega$-theorem for a short trigonometric sum. This paper is the English version of the work of reference \cite{4}. \end{abstract} \maketitle \section{Result} \subsection{} In this paper we shall study the following short trigonometric sum \begin{equation} S(t,T,K)=\sum_{e^{-1/K}P_0<n<P_0}\cos(t\ln n),\ P_0=\sqrt{\frac{T}{2\pi}},\ t\in [T,T+U], \end{equation} where \begin{equation} U=T^{1/2}\psi\ln T,\ \psi\leq K\leq T^{1/6}\ln^2T,\ \psi <\ln T, \end{equation} and $\psi=\psi(T)$ stands for an arbitrary slowly increasing function, unbounded from above. For example \begin{displaymath} \psi(T)=\ln\ln T,\ \ln\ln\ln T,\ \dots \end{displaymath} Let \begin{displaymath} \{ t_\nu\} \end{displaymath} be the Gram-Titchmarsh sequence defined by the formula \begin{displaymath} \vartheta(t_\nu)=\pi\nu,\quad \nu=1,2,\dots \end{displaymath} (see \cite{6}, pp. 221, 329), where \begin{displaymath} \vartheta(t)=-\frac t2\ln\pi+\mbox{Im}\left\{\ln\Gamma\left(\frac 14+i\frac t2\right)\right\}. \end{displaymath} Next, we denote by \begin{displaymath} G(T,K,\psi) \end{displaymath} the number of those $t_\nu$ that (see (1.1)) obey the following condition \begin{equation} t_\nu\in [T,T+U] \ \wedge \ |S(t_\nu,T,K)|>\frac 12 \sqrt{\frac{P_0}{K}}=AT^{1/4}K^{-1/2}. \end{equation} The following theorem holds true. \begin{mydef1} There are \begin{displaymath} T_0(K,\psi)>0, \ A>0 \end{displaymath} such that \begin{equation} G(T,K,\psi)>AT^{1/6}K^{-1}\psi \ln^2T,\ T\geq T_0(K,\psi). \end{equation} \end{mydef1} \subsection{} Let us recall the following estimate of Karatsuba (see \cite{1}, p. 89): \begin{displaymath} \overset{*}{S}(x)=\sum_{1\leq n\leq x}n^{it}=\mathcal{O}(\sqrt{x}t^{\epsilon}),\ 0<x<t \end{displaymath} holds true on the Lindel\"{o}f hypothesis ($\epsilon>0$ is an arbitrarily small number here). Of course, \begin{equation} \begin{split} & \overset{*}{S}(t,T,K)=\sum_{e^{-1/K}P_0\leq n\leq P_0}n^{it}=\mathcal{O}(\sqrt{P_0}T^\epsilon)=\mathcal{O}(T^{1/4+\epsilon}), \\ & t\in [T,T+U], \end{split} \end{equation} and \begin{displaymath} |\overset{*}{S}(t,T,K)|\geq |S(t,T,K)|. \end{displaymath} \begin{remark} Since (see (1.1), (1.3), (1.4)) the inequality \begin{displaymath} |S(t,T,\ln\ln T)|> A\frac{T^{1/4}}{\sqrt{\ln\ln T}},\ T\to\infty \end{displaymath} is fulfilled for arbitrarily big $t$, the Karatsuba estimate (1.5) is an almost exact estimate. \end{remark} \subsection{} With regard to the connection between short trigonometric sums and the theory of the Riemann zeta-function, see our paper \cite{3}. \section{Main lemmas and proof of Theorem} Let \begin{equation} w(t,T,K)=\sum_{e^{-1/K}P_0<n<P_0}\frac{1}{\sqrt{n}}\cos(t\ln n), \end{equation} \begin{equation} w_1(t,T,K)=\sum_{e^{-1/K}P_0<n<P_0}\left(\frac{1}{\sqrt{n}}-\frac{1}{\sqrt{P_0}}\right)\cos(t\ln n). \end{equation} The following lemmas hold true. \begin{mydef5A} \begin{equation} \sum_{T\leq t_\nu\leq T+U}w^2(t_\nu,T,K)=\frac{1}{4\pi}UK^{-1}\ln\frac{T}{2\pi}+\mathcal{O}(K^{-1}\sqrt{T}\ln^2T). 
\end{equation} \end{mydef5A} \begin{mydef5B} \begin{equation} \sum_{T\leq t_\nu\leq T+U}w_1^2(t_\nu,T,K)=\frac{1}{48\pi}UK^{-3}\ln\frac{T}{2\pi}+\mathcal{O}(K^{-3}\sqrt{T}\ln^2T). \end{equation} \end{mydef5B} \begin{remark} Of course, the formulae (2.3), (2.4) are asymptotic ones (see (1.2)). \end{remark} Now we use the main Lemmas A and B to complete the proof of the Theorem. Since (see \cite{2}, (23)) \begin{equation} Q_0=\sum_{T\leq t_\nu\leq T+U}1=\frac{1}{2\pi}U\ln\frac{T}{2\pi}+\mathcal{O}\left(\frac{U^2}{T}\right), \end{equation} we obtain (see (2.3), (2.4)) that \begin{equation} \frac{1}{Q_0}\sum_{T\leq t_\nu\leq T+U}w^2(t_\nu,T,K)\sim \frac{1}{2K},\ T\to\infty, \end{equation} \begin{equation} \frac{1}{Q_0}\sum_{T\leq t_\nu\leq T+U}w_1^2(t_\nu,T,K)\sim \frac{1}{24K^3}, \end{equation} \begin{equation} \frac{1}{Q_0}\sum_{T\leq t_\nu\leq T+U}w\cdot w_1=\mathcal{O}\left(\frac{1}{K^2}\right), \end{equation} (we used the Schwarz inequality in (2.8)). Next, we have (see (1.1), (2.1), (2.2)) that \begin{equation} w(t_\nu,T,K)=\frac{1}{\sqrt{P_0}}S(t_\nu,T,K)+w_1(t_\nu,T,K). \end{equation} Consequently we obtain (see (2.6) -- (2.9)) the following \begin{mydef6} \begin{equation} \frac{1}{Q_0}\sum_{T\leq t_\nu\leq T+U}S^2\sim \frac{P_0}{2K},\ T\to\infty. \end{equation} \end{mydef6} Next, we denote by $Q_1$ the number of those values \begin{displaymath} t_\nu\in [T,T+U], \end{displaymath} that fulfill the inequality (see (1.3)) \begin{equation} |S|>\frac{1}{2}\sqrt{\frac{P_0}{K}};\quad Q_1=G(T,K,\psi), \end{equation} and \begin{displaymath} Q_0-Q_1=Q_2 \end{displaymath} (see (2.5)). Since (see (1.1) and \cite{6}, p. 92) \begin{displaymath} |S(t,T,K)|<A \sqrt{P_0}T^{1/6},\ t\in [T,T+U], \end{displaymath} we have (see (2.10), (2.11)) that \begin{displaymath} \frac{1}{3K}< AT^{1/3}\frac{Q_1}{Q_0}+\frac{1}{4K}\frac{Q_2}{Q_0}<A T^{1/3}\frac{Q_1}{Q_0}+\frac{1}{4K}, \end{displaymath} i.e. \begin{equation} AQ_1>\frac{1}{12}Q_0T^{-1/3}K^{-1}. \end{equation} Consequently, we obtain (see (1.2), (2.5), (2.11), (2.12)) the following estimate \begin{displaymath} Q_1=G>A T^{1/6}K^{-1}\psi \ln^2T, \end{displaymath} which is the required result (1.4). \section{Lemma 1} Let \begin{equation} w_2=\ssum_{e^{-1/K}P_0<n<m<P_0}\frac{1}{\sqrt{nm}}\cos\left( t_\nu\ln\frac nm\right). \end{equation} The following lemma holds true. \begin{mydef51} \begin{equation} \sum_{T\leq t_\nu\leq T+U}w_2=\mathcal{O}(K^{-1}\sqrt{T}\ln^2T). \end{equation} \end{mydef51} \begin{proof} The following inner sum (comp. \cite{5}, p. 102; $t_{\nu+1}\longrightarrow t_\nu$) \begin{equation} w_{21}=\sum_{T\leq t_\nu\leq T+U}\cos\{ 2\pi\psi_1(\nu)\}, \end{equation} where \begin{displaymath} \psi_1(\nu)=\frac{1}{2\pi}t_\nu\ln\frac nm, \end{displaymath} appears in our sum (3.2) after interchanging the order of summation. Now we obtain, by the method of \cite{5}, pp. 102-103, the following estimate \begin{equation} w_{21}=\mathcal{O}\left(\frac{\ln T}{\ln\frac nm}\right). \end{equation} Since (see (1.2)) \begin{displaymath} e^{-1/K}>1-\frac 1K\geq 1-\frac{1}{\psi}>\frac 12, \end{displaymath} then \begin{displaymath} 2m>2e^{-1/K}P_0>P_0,\quad m\in (e^{-1/K}P_0,P_0), \end{displaymath} i.e. in our case (see (3.1)) we have \begin{displaymath} 2m>n. \end{displaymath} Consequently, the method of \cite{6}, p. 
116, $\sigma=\frac 12,\ m=n-r$ gives the estimate \begin{equation} \begin{split} & \ssum_{e^{-1/K}P_0<n<m<P_0}\frac{1}{\sqrt{mn}\ln\frac nm}< \\ & < A\sum_{e^{-1/K}P_0<n<P_0}\sum_{r\leq n/2}\frac 1r< AK^{-1}P_0\ln P_0<AK^{-1}\sqrt{T}\ln T, \end{split} \end{equation} where \begin{equation} \sum_{e^{-1/K}P_0<n<P_0} 1\sim \frac{P_0}{K}. \end{equation} Now, required result (3.2) follows from (3.1), (3.3) -- (3.5). \end{proof} \section{Lemma 2} Let \begin{equation} w_3=\ssum_{e^{-1/K}P_0<m<n<P_0}\frac{1}{\sqrt{mn}}\cos\{ t_\nu\ln(mn)\}. \end{equation} The following lemma holds true. \begin{mydef52} \begin{equation} \sum_{T\leq t_\nu\leq T+U}w_3=\mathcal{O}(K^{-1}\sqrt{T}\ln^2T). \end{equation} \end{mydef52} \begin{proof} The following inner sum (comp. \cite{5}, p. 103; $t_{\nu+1}\longrightarrow t_\nu$) \begin{displaymath} w_{31}=\sum_{T\leq t_\nu\leq T+U}\cos\{ 2\pi \chi(\nu)\}, \end{displaymath} where \begin{displaymath} \chi(\nu)=\frac{1}{2\pi}t_\nu\ln(nm) \end{displaymath} applies to our sum (4.2). Next, the method \cite{5}, pp. 103-104 gives us that \begin{equation} \begin{split} & w_{31}=\int_{\chi'(x)<1/2}\cos\{ 2\pi\chi(x)\}{\rm d}x+ \\ & + \int_{\chi'(x)>1/2}\cos[2\pi\{ \chi(x)-x\}]{\rm d}x+\mathcal{O}(1)=J_1+J_2+\mathcal{O}(1), \end{split} \end{equation} where \begin{displaymath} J_1=\mathcal{O}\left(\frac{\ln T}{\ln n}\right)=\mathcal{O}(1),\ n\in (e^{-1/K}P_0,P_0), \end{displaymath} and ($m<n<2m,\ n=m+r$) \begin{displaymath} J_2=\mathcal{O}\left( \frac{m\ln(m+1)}{r}\right). \end{displaymath} Now, the term $J_1$ contributes to the sum (4.2), (comp. (3.6)) as \begin{equation} \begin{split} & \mathcal{O}\left(\ssum_{e^{-1/K}P_0<m<n<P_0}\frac{1}{\sqrt{mn}}\right) = \\ & = \mathcal{O}\left( \frac{1}{P_0}\ssum_{e^{-1/K}P_0<m<n<P_0} 1\right)= \\ & = \mathcal{O}\left(\frac{1}{P_0}\frac{P_0^2}{K^2}\right)=\mathcal{O}(K^{-2}\sqrt{T}), \end{split} \end{equation} and the same contribution corresponds to the term $\mathcal{O}(1)$ in (4.3), while the contribution of the term $J_2$ is \begin{equation} \begin{split} & \mathcal{O}\left(\sum_{e^{-1/K}P_0<m<P_0}\frac{1}{\sqrt{m}}\sum_{r=1}^m\frac{1}{\sqrt{m}}\frac{m\ln(m+1)}{r}\right)= \\ & = \mathcal{O}\left(\frac{P_0}{K}\ln^2P_0\right)=\mathcal{O}(K^{-1}\sqrt{T}\ln^2T). \end{split} \end{equation} Now, the required result (4.2) follows from (4.1), (4.4), (4.5). \end{proof} \section{Lemma 3} Let \begin{equation} w_4=\sum_{e^{-1/K}P_0<n<P_0}\frac 1n. \end{equation} The following lemma holds true. \begin{mydef53} \begin{equation} \sum_{T\leq t_\nu\leq T+U}w_4=\mathcal{O}(K^{-1}\sqrt{T}\ln^2T). \end{equation} \end{mydef53} \begin{proof} The following inner sum \begin{displaymath} w_{41}=\sum_{T\leq t_\nu\leq T+U}\cos\{ 2\pi\chi_1(\nu)\}, \end{displaymath} where \begin{displaymath} \chi_1(\nu)=\frac{1}{\pi}t_\nu\ln n. \end{displaymath} applies to our sum (5.2). Since (comp. \cite{5}, p. 103) \begin{displaymath} \chi_1'(\nu)=\frac{\ln n}{\vartheta'(t_\nu)}, \end{displaymath} next, (comp. \cite{5}, p. 100) \begin{displaymath} \begin{split} & \vartheta'(t_\nu)=\frac 12\ln\frac{t_\nu}{2\pi}+\mathcal{O}\left(\frac{1}{t_\nu}\right)= \frac 12\ln\frac{T}{2\pi}+\mathcal{O}\left(\frac UT\right)+\mathcal{O}\left(\frac 1T\right)\sim \\ & \sim \ln P_0, \end{split} \end{displaymath} and \begin{displaymath} \ln P_0-\frac 1K<\ln n<\ln P_0,\ n\in (e^{-1/K}P_0,P_0), \end{displaymath} then \begin{displaymath} \chi_1'(\nu)\sim 1. \end{displaymath} Since, for example, \begin{displaymath} \frac 12<\chi_1'(\nu)<\frac 32, \end{displaymath} then we have (comp. \cite{5}, p. 
104) that \begin{displaymath} w_{41}=\int\cos[2\pi\{\chi_1(x)-x\}]{\rm d}x+\mathcal{O}(1)=J_3+\mathcal{O}(1). \end{displaymath} Now, (comp. \cite{5}, p. 104) \begin{displaymath} \chi_1''(\nu)<-A\frac{\ln n}{T\ln^3T}<-\frac{B}{T\ln^2T},\ n\in (e^{-1/K}P_0,P_0), \end{displaymath} and \begin{displaymath} J_3=\mathcal{O}(\sqrt{T}\ln T),\quad w_{41}=\mathcal{O}(\sqrt{T}\ln T). \end{displaymath} Consequently, we get the required result (5.2) \begin{displaymath} \sum_{T\leq t_\nu\leq T+U}w_4=\mathcal{O}\left(\sqrt{T}\ln T\sum_{e^{-1/K}P_0<n<P_0}\frac 1n\right)= \mathcal{O}(K^{-1}\sqrt{T}\ln T), \end{displaymath} where \begin{displaymath} \sum_{e^{-1/K}P_0<n<P_0}\frac 1n\sim \frac 1K \end{displaymath} by the well-known Euler's formula \begin{displaymath} \sum_{1\leq n<x}\frac 1n=\ln x+c+\mathcal{O}\left(\frac 1x\right), \end{displaymath} where $c$ is the Euler's constant. \end{proof} \section{Lemmas A and B} \subsection{Proof of Lemma A} First of all, we have (see (2.1)) that \begin{equation} \begin{split} & w^2(t_\nu,T,K)=\\ & = \ssum_{e^{-1/K}P_0<m,n<P_0}\frac{1}{\sqrt{mn}}\cos( t_\nu\ln m)\cos( t_\nu\ln n)= \\ & = \frac 12 \sum_n \frac 1n+\ssum_{m<n}\frac{1}{\sqrt{mn}}\cos\left( t_\nu\ln\frac nm\right)+ \\ & + \ssum_{m<n}\frac{1}{\sqrt{mn}}\cos\{ t_\nu\ln (mn)\}+\frac 12\sum_{n}\frac 1n\cos( 2t_\nu\ln n)= \\ & = \frac{1}{2K}+\mathcal{O}\left(\frac{1}{\sqrt{T}}\right)+w_2+w_3+w_4, \end{split} \end{equation} (see (5.3), (3.1), (4.1), (5.1)). Consequently, we obtain the required result (2.3) from (6.1) by (2.5), (3.2), (4.2), (5.2). \subsection{Proof of Lemma B} First of all we have (see (2.2)) that \begin{displaymath} w_1(t_\nu,T,K)=\sum_{e^{-1/K}P_0<n<P_0}\frac{\alpha(n)}{\sqrt{n}}\cos(t_\nu\ln n), \end{displaymath} where \begin{displaymath} \alpha(n)=1-\sqrt{\frac{n}{P_0}}. \end{displaymath} Of course, $\alpha(n)$ is decreasing and \begin{equation} 0<\alpha(n)<\frac 1K,\quad n\in (e^{-1/K}P_0,P_0). \end{equation} Next, (comp. (6.1)) \begin{equation} \begin{split} & w_1^2(t_\nu,T,K)= \\ & = \frac 12\sum_n\frac{\alpha^2(n)}{n}+\ssum_{m<n}\frac{\alpha(m)\alpha(n)}{\sqrt{mn}}\cos\left( t_\nu\ln\frac nm\right)+ \\ & + \ssum_{m<n}\frac{\alpha(m)\alpha(n)}{\sqrt{mn}}\cos\left( t_\nu\ln(mn)\right)+\frac 12\sum_n \frac{\alpha^2(n)}{n}\cos( 2t_\nu\ln n)= \\ & = \frac 12\bar{w}_1+\bar{w}_2+\bar{w}_3+\frac 12\bar{w}_4. \end{split} \end{equation} Since (see (6.2)) \begin{displaymath} \alpha(m)\alpha(n)<K^{-2} \end{displaymath} then we obtain by a similar way as in the case of the estimates (3.2), (4.2) and (5.2) that \begin{equation} \sum_{T\leq t_\nu\leq T+U}\left\{ \bar{w}_2+\bar{w}_3+\frac 12\bar{w}_4\right\}= \mathcal{O}(K^{-3}\sqrt{T}\ln^2T). \end{equation} In the case of the sum \begin{equation} \frac 12\sum_{T\leq t_\nu\leq T+U}\bar{w}_1 \end{equation} we use the following summation formula (see \cite{6}, p. 13) \begin{displaymath} \begin{split} & \sum_{a\leq n<b}\varphi(n)=\int_a^b \varphi(x){\rm d}x+\int_a^b \left( x-[x]-\frac 12\right)\varphi'(x){\rm d}x+ \\ & + \left( a-[a]-\frac 12\right)\varphi(a)-\left( b-[b]-\frac 12\right)\varphi(b) \end{split} \end{displaymath} in the case \begin{displaymath} a=e^{-1/K}P_0, b=P_0, \varphi(x)=\frac{\alpha^2(x)}{x}=\frac 1x-\frac{2}{\sqrt{P_0x}}+\frac{1}{P_0}. 
\end{displaymath} Hence \begin{displaymath} \begin{split} & \int_{e^{-1/K}P_0}^{P_0}\frac{\alpha^2(x)}{x}{\rm d}x=\frac{1}{K}-4\left( 1-e^{-1/(2K)}\right)+1-e^{-1/K}= \\ & = \frac{1}{12K^3}+\mathcal{O}(K^{-4}), \end{split} \end{displaymath} and \begin{displaymath} \varphi'(x)=\mathcal{O}(P_0^{-2}),\ \varphi(e^{-1/K}P_0)=\mathcal{O}\left(\frac{1}{x^2P_0}\right),\ \varphi(P_0)=0. \end{displaymath} Consequently, we have (see (6.5)) \begin{displaymath} \frac 12\bar{w}_1=\frac{1}{24K^3}+\mathcal{O}\left(\frac{1}{K^4}\right), \end{displaymath} and (see (1.2), (2.5)) \begin{equation} \begin{split} & \frac 12\sum_{T\leq t_\nu\leq T+U}\bar{w}_1=\frac{1}{48\pi}UK^{-3}\ln\frac{T}{2\pi}+ \mathcal{O}\left(\frac{U\ln T}{K^4}\right)+\mathcal{O}\left(\frac{U^2}{K^3T}\right)= \\ & = \frac{1}{48\pi}UK^{-3}\ln\frac{T}{2\pi}+\mathcal{O}(K^{-3}\sqrt{T}\ln T). \end{split} \end{equation} Finally, we obtain the required result (2.4) from (6.3) by (6.4) -- (6.6). \end{document}
\begin{document} \title{Mean value formulas for classical solutions to uniformly parabolic equations in divergence form} \author{{\sc{Emanuele Malagoli} \thanks{Dipartimento di Scienze Fisiche, Informatiche e Matematiche, Universit\`{a} degli Studi di Modena e Reggio Emilia, via Campi 213/b, 41125 Modena (Italy). E-mail: [email protected]}, \ \sc{Diego Pallara} \thanks{Dipartimento di Matematica e Fisica ``Ennio De Giorgi'', Universit\`{a} del Salento and INFN, Sezione di Lecce, Ex Collegio Fiorini - Via per Arnesano - Lecce (Italy). E-mail: [email protected]} \ \sc{Sergio Polidoro} \thanks{Dipartimento di Scienze Fisiche, Informatiche e Matematiche, Universit\`{a} degli Studi di Modena e Reggio Emilia, Via Campi 213/b, 41125 Modena (Italy). E-mail: [email protected]} }} \date{ } \maketitle \begin{abstract} We prove surface and volume mean value formulas for classical solutions to uniformly parabolic equations in divergence form. We then use them to prove the parabolic strong maximum principle and the parabolic Harnack inequality. We emphasize that our results only rely on the classical theory, and our arguments follow the lines used in the original theory of harmonic functions. We provide two proofs relying on two different formulations of the divergence theorem, one stated for sets with {\em almost $C^1$-boundary}, the other stated for sets with finite perimeter. \end{abstract} \setcounter{equation}{0} \section{Introduction}\label{secIntro} Let $\Omega$ be an open subset of $\mathbb{R}^{N+1}$. We consider classical solutions $u$ to the equation $\L u = f$ in $\Omega$, where $\L$ is a parabolic operator in divergence form defined for $z=(x,t) \in \mathbb{R}^{N+1}$ as follows \begin{equation} \label{e-L} \L u (z) := \sum_{i,j=1}^{N}\tfrac{\partial}{\partial x_i} \left( a_{ij} (z) \tfrac{\partial u}{\partial x_j}(z) \right) + \sum_{i=1}^{N} b_{i} (z) \tfrac{\partial u}{\partial x_i}(z) + c(z) u(z) - \, \tfrac{\partial u}{\partial t}(z). \end{equation} In the following we use the notation $A(z) := \left( a_{ij}(z) \right)_{i,j=1,\dots, N}, b(z) := \left( b_{1} (z), \dots, b_{N} (z) \right)$ and we write $\L u $ in the short form \begin{equation} \label{e-LL} \L u (z) := \div \left( A (z) \nabla_x u(z) \right) + \langle b(z), \nabla_x u(z)\rangle + c(z) u(z) - \, \tfrac{\partial u}{\partial t}(z). \end{equation} Here $\div, \nabla_x$ and $\langle \, \cdot \, , \, \cdot \, \rangle$ denote the divergence, the gradient and the inner product in $\mathbb{R}^N$, respectively. We assume that the matrix $A(z)$ is symmetric and that the coefficients of the operator $\L$ are H\"older continuous functions with respect to the parabolic distance. This means that there exist two constants $M>0$ and $\alpha \in ]0,1]$, such that \begin{equation} \label{e-hc} |c (x,t) - c (y,s)| \le M \left(|x-y|^\alpha + |t-s|^{\alpha/2}\right), \end{equation} for every $(x,t), (y,s) \in \mathbb{R}^{N+1}$. We require that the above condition is satisfied not only by $c$, but also by $a_{ij}, \frac{\partial a_{ij}}{\partial x_i}, b_{i}, \frac{\partial b_{i}}{\partial x_i}$, for $i, j = 1, \dots, N$, with the same constants $M$ and $\alpha$. 
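As a simple illustration, which will not be needed in the sequel, consider in dimension $N=1$ the coefficient
\begin{equation*}
a_{11}(x,t) := 2 + |\sin x|^{1+\alpha}, \qquad b_1 := 0, \qquad c := 0, \qquad \alpha \in ]0,1[.
\end{equation*}
Both $a_{11}$ and $\tfrac{\partial a_{11}}{\partial x}(x,t) = (1+\alpha)\, |\sin x|^{\alpha} \operatorname{sgn}(\sin x) \cos x$ are bounded and satisfy \eqref{e-hc} with exponent $\alpha$ and a suitable constant $M$, while $a_{11}$ is not twice differentiable at the points $x = k \pi$, $k \in \mathbb{Z}$. Coefficients of this kind are therefore admissible in our setting even though they are not smooth.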
We finally assume that the coefficients of $\L$ are bounded and that $\L$ is uniformly parabolic, \emph{i.e.}, there exist two constants $\lambda, \Lambda$, with $0 < \lambda < \Lambda$, such that \begin{equation} \label{e-up} \lambda |\xi|^2 \le \langle A(z) \xi, \xi \rangle \le \Lambda |\xi|^2, \quad \left|\tfrac{\partial a_{ij}}{\partial x_i}\right| \le \Lambda, \quad |b_i(z)| \le \Lambda, \quad |c(z)| \le \Lambda, \end{equation} for every $\xi \in \mathbb{R}^N$, for every $z \in \mathbb{R}^{N+1}$, and for $i,j=1, \dots, N$. Under the above assumptions, the classical parametrix method provides us with the existence of a fundamental solution $\Gamma$. In Section \ref{secFundSol} we shall quote from the monograph of Friedman \cite{Friedman} the results we need for our purposes. The main achievements of this note are some mean value formulas for the solutions to $\L u = f$ that are written in terms of the level and super-level sets of the fundamental solution $\Gamma$. We extend previous results of Fabes and Garofalo \cite{FabesGarofalo} and Garofalo and Lanconelli \cite{GarofaloLanconelli-1989} in that we weaken the regularity requirement on the coefficients of $\L$ that in \cite{FabesGarofalo, GarofaloLanconelli-1989} are assumed to be $C^\infty$ smooth. As applications of the mean value formulas we give an elementary proof of the parabolic strong maximum principle. We note that the conditions on the functions $\frac{\partial a_{ij}}{\partial x_i}$'s are needed in order to deal with classical solutions to the adjoint equation $\L^* v = 0$, as the mean value formulas rely on the divergence theorem applied to the function $(\x,\tau) \mapsto \Gamma(x,t,\xi,\tau)$. We introduce some notation in order to state our main results. For every $z_0=(x_0, t_0) \in \mathbb{R}^{N+1}$ and for every $r>0$, we set \begin{equation} \label{e-Psi} \begin{split} \psi_r(z_0) & := \left\{ z \in \mathbb{R}^{N+1} \mid \Gamma(z_0; z ) = \tfrac{1}{r^N} \right\}, \\ \Omega_r(z_0) & := \left\{ z \in \mathbb{R}^{N+1} \mid \Gamma(z_0; z) > \tfrac{1}{r^N} \right\}. \end{split} \end{equation} \begin{center} \begin{tikzpicture} \clip (-.51,7.21) rectangle (6.31,1.99); \path[draw,thick] (-.5,7.2) rectangle (6.3,2); \begin{axis}[axis x line=middle, axis y line=middle, xtick=\empty,ytick=\empty, ymin=-1.2, ymax=1.2, xmin=-.2,xmax=1.7, samples=121, rotate= -90] \addplot [ddddblue,line width=.7pt,domain=.001:.1111] {sqrt(- 3 * x * ln(9*x)}; \addplot [ddddblue,line width=.7pt,domain=.001:.1111] {- sqrt(- 3 * x * ln(9*x))}; \addplot [dddblue,line width=.7pt,domain=.001:.25] {sqrt(- 2 * x * ln(4*x)}; \addplot [dddblue,line width=.7pt,domain=.001:.25] {- sqrt(- 2 * x * ln(4*x))}; \addplot [ddblue,line width=.7pt, domain=.001:1] {sqrt(- 2 * x * ln(x))}; \addplot [ddblue,line width=.7pt,domain=.001:1] {- sqrt(- 2 * x * ln(x))}; \addplot [black,line width=1pt, domain=-.01:.01] {sqrt(.0001 - x * x)} node[above] {\quad $z_0$}; \addplot [black,line width=1pt, domain=-.01:.01] {-sqrt(.0001 - x * x)} node[below]{\qquad \qquad \qquad \qquad \quad \quad \quad \quad {\color{ddddblue} $\psi_r(z_0)$}}; \end{axis} \draw [<-,line width=.4pt] (2.8475,7) -- (2.8475,2); \end{tikzpicture} {{\sc Fig.1}} - $\psi_r(z_0)$ for three different values of $r$. \end{center} Similarly to the elliptic case, we call $\psi_r(z_0)$ and $\Omega_r(z_0)$ respectively the \emph{parabolic sphere} and the \emph{parabolic ball} with radius $r$ and ``center'' at $(x_0,t_0)$. 
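In the model case of the heat operator, that is $A = I_{N}$, $b = 0$, $c = 0$, the fundamental solution is the Gauss--Weierstrass kernel
\begin{equation*}
\Gamma(z_0;z) = \frac{1}{(4 \pi (t_0-t))^{N/2}} \exp \left( - \frac{|x-x_0|^2}{4(t_0-t)} \right), \qquad t < t_0,
\end{equation*}
and an elementary computation, which we record only to fix the ideas, shows that the parabolic ball agrees with the classical \emph{heat ball}
\begin{equation*}
\Omega_r(z_0) = \Big\{ (x,t) \in \mathbb{R}^{N+1} \mid t_0 - \tfrac{r^2}{4 \pi} < t < t_0, \ |x-x_0|^2 < 2 N (t_0-t) \log \tfrac{r^2}{4 \pi (t_0-t)} \Big\}.
\end{equation*}
In particular, in this case $\Omega_r(z_0) \subset \{ |x-x_0| < c_N r \} \times ]t_0 - \tfrac{r^2}{4\pi}, t_0[$ with $c_N = \sqrt{N/(2 \pi e)}$.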
Note that, unlike the elliptic setting, $z_0$ belongs to the topological boundary of $\Omega_r(z_0)$. Because of the properties of the fundamental solution of uniformly parabolic operators, the parabolic balls $\Omega_r(z_0)$ are bounded sets and shrink to the center $z_0$ as $r \to 0$. We finally introduce the following kernels \begin{equation} \label{e-kernels} \begin{split} K (z_0; z) & := \frac{\langle A(z) \nabla_x \Gamma(z_0; z), \nabla_x \Gamma(z_0; z) \rangle } {|\nabla_{(x,t)}\Gamma(z_0;z)|}, \\ M(z_0; z) & := \frac{\langle A(z) \nabla_x \Gamma(z_0; z), \nabla_x \Gamma(z_0; z) \rangle } {\Gamma(z_0; z)^2}. \end{split} \end{equation} Here $\nabla_x \Gamma(z_0; z)$ and $|\nabla_{(x,t)}\Gamma(z_0; z)|$ denote the gradient with respect to the space variable $x$ and the norm of the gradient with respect to the variables $(x,t)$ of $\Gamma$, respectively. Moreover, we agree to set $K (z_0; z) = 0$ whenever $\nabla_{(x,t)}\Gamma(z_0;z)=0$. In the following, $\H^{N}$ denotes the $N$-dimensional Hausdorff measure. The first achievements of this note are the following mean value formulas. \begin{theorem} \label{th-1} Let $\Omega$ be an open subset of $\mathbb{R}^{N+1}$, $f\in C(\Omega)$ and let $u$ be a classical solution to $\L u = f$ in $\Omega$. Then, for every $z_0 \in \Omega$ and for almost every $r>0$ such that $\overline{\Omega_r(z_0)} \subset \Omega$ we have \begin{align*} u(z_0) = \int_{\psi_r(z_0)} K (z_0; z) u(z) \, d \H^{N} (z) + & \int_{\Omega_r(z_0)} f (z) \left( \tfrac{1}{r^N} - \Gamma(z_0; z) \right)\ dz \\ + & \frac{1}{r^N} \int_{\Omega_r(z_0)} \left( \div \, b(z) - c(z) \right) u(z) \ dz, \\ u(z_0) = \frac{1}{r^N} \int_{\Omega_r(z_0)} \!\!\!\!\! M (z_0; z) u(z) \, dz + \frac{N}{r^N} & \int_0^{r} \left(\r^{N-1} \int_{\Omega_\r (z_0)} f (z) \left( \tfrac{1}{\r^N} - \Gamma(z_0;z) \right) dz \right) d \r \\ + & \frac{N}{r^N} \int_0^{r} \left(\frac{1}{\r} \int_{\Omega_\r (z_0)} \left( \div \, b(z) - c(z) \right) u(z) \, dz \right) d \r. \end{align*} The second statement holds for \emph{every} $r>0$ such that ${\Omega_r(z_0)} \subset \Omega$. \end{theorem} Note that $\tfrac{1}{r^N} - \Gamma(z_0; z) < 0$ in the set $\Omega_r(z_0)$, because of its very definiton \eqref{e-Psi}. This fact, together with the non-negativity of the kernels \eqref{e-kernels} will be used in the sequel to obtain the strong maximum principle from Theorem \ref{th-1}. We next put Theorem \ref{th-1} in its context. It restores the mean value formulas first proved by Pini in \cite{Pini1951} for the heat equation $\partial_t u = \partial_x^2 u$, then by Watson in \cite{Watson1973} for the heat equation in several space variables. We also recall the mean value formulas first proved by Fabes and Garofalo in \cite{FabesGarofalo} for the equation $\L u = 0$, then extended by Garofalo and Lanconelli \cite{GarofaloLanconelli-1989} to the equation $\L u = f$, where the operator $\L$ has the form \eqref{e-L} and its coefficients are assumed to be $C^\infty$ smooth. This extra regularity assumption on the coefficients of $\L$ is due to the fact that the mean value formula relies on the divergence theorem applied to the parabolic ball $\Omega_r(z_0)$. 
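For instance, when $\L$ is the heat operator ($A = I_N$, $b=0$, $c=0$) everything is explicit: since $\nabla_x \Gamma(z_0;z) = - \tfrac{x-x_0}{2(t_0-t)}\, \Gamma(z_0;z)$, the kernel $M$ in \eqref{e-kernels} equals $\tfrac{|x-x_0|^2}{4(t_0-t)^2}$ and, for $f=0$, the second formula of Theorem \ref{th-1} reduces to the classical Pini--Watson mean value formula for caloric functions,
\begin{equation*}
u(x_0,t_0) = \frac{1}{r^N} \int_{\Omega_r(z_0)} \frac{|x-x_0|^2}{4 (t_0-t)^2} \, u(x,t) \, dx \, dt .
\end{equation*}
We record this elementary computation only as a consistency check; no use of it will be made in the proofs.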
Since the explicit epression of the fundamental solution $\Gamma$ is not available when the coefficients of $\L$ are variable, the authors of \cite{FabesGarofalo} and \cite{GarofaloLanconelli-1989} rely on the Sard theorem (see \cite{Sard}) which guarantees that $\psi_r(z_0)$ is a manifold for \emph{almost every} positive $r$, provided that the fundamental solution $\Gamma$ is $N+1$ times differentiable. The smoothness of the coefficients of the operator $\L$ is used in \cite{FabesGarofalo} and \cite{GarofaloLanconelli-1989} in order to have the needed regularity on $\Gamma$. The main goal of this note is the restoration of natural regularity hypotheses for the existence of classical solutions to $\L u =f$. These assumptions can be further weakened, since the existence of a fundamental solution has been proved for operators with Dini continuous coefficients. We prefer to keep our treatment in the usual setting of H\"older continuous functions for the sake of simplicity. The unnecessary regularity conditions on the coefficients of $\L$ can be removed in two ways. Following an approach close to the classical one, it is possible to rely on a result due to Dubovicki\u{\i} \cite{Dubovickii} (see also Bojarski, Ha{j\l}asz, and Strzelecki \cite{BojHajStrz}) which allows to reduce the regularity requirement on $\Gamma$ in order to apply a generalized divergence theorem for {\em sets with almost $C^1$ boundary}. This is presented in Section \ref{SectionDivergence} and applied in Section \ref{sectionProof}. The other approach relies on geometric measure theory and is presented in the last section: we show how the proof of Theorem \ref{th-1} can be modified relying on the generalized divergence theorem proved by De Giorgi \cite{DeGiorgi2,DeGiorgi3} in the framework of finite perimeter sets. As said before, this deep theory is not necessary in the present context, but it is more flexible and its generalization to Carnot groups (where the analogue of Dubovicki\u{\i}'s Theorem is not available) will allow us to extend the results of the present paper to {\em degenerate} parabolic operators. We have presented the application to uniformly parabolic operators to pave the way to this generalization, which will be the subject of a forthcoming paper. The mean value formulas stated in Theorem \ref{th-1} provide us with a simple proof of the strong maximum (minimum) principle for the operator $\L$ when $c = 0$. Note that, in this case, the constant function $u(x,t) = 1$ is a solution to $\L u = 0$, so that the mean value formula gives $\frac{1}{r^N}\int_{\Omega_r(z_0)}M(z_0;z)dz=1$. In order to state this result we first introduce the notion of \emph{attainable set}. We say that a curve $ \g: [0,T] \rightarrow \mathbb{R}^{N+1}$ is \emph{$\L$-admissible} if it is absolutely continuous and \begin{equation*} \dot{\g}(s) = \left( \dot {x}_1(s), \dots, \dot {x}_N(s), -1 \right) \end{equation*} for almost every $s \in [0,T]$, with $\dot{x}_1, \dots, \dot{x}_N \in L^{2}([0,T])$. \begin{definition} \label{def-prop-set} Let $\O$ be any open subset of $\mathbb{R}^{N+1}$, and let $z_0 \in \O$. The \emph{attainable set} is \begin{equation*} \AS ( \O ) = \begin{Bmatrix} z \in \O \mid \hspace{1mm} \text{\rm there exists an} \ \L - \text{\rm admissible curve} \ \g : [0,T] \rightarrow \O \hspace{1mm} \\ \text{\rm such that} \ \g(0) = z_0 \hspace{1mm} {\rm and} \hspace{1mm} \g(T) = z \end{Bmatrix}. \end{equation*} Whenever there is no ambiguity on the choice of the set $\O$ we denote $\AS = \AS ( \O )$. 
\end{definition} \begin{proposition} \label{prop-smp} Let $\O$ be any open subset of $\mathbb{R}^{N+1}$, and suppose that $c = 0$. Let $z_0=(x_0,t_0) \in \O$ and let $u$ be a classical solution to $\L u = f$. If $u (z_0) = \max_\Omega u$ and $f \ge 0$ in $\Omega$, then \begin{equation*} u(z) = u(z_0) \quad \text{and} \quad f(z) = 0 \qquad \text{for every} \ z \in \overline {\AS ( \O )}. \end{equation*} The analogous result holds true if $u (z_0) = \min_\Omega u$ and $f \le 0$ in $\Omega$. \end{proposition} If we remove the assumption $c =0$ we obtain the following weaker result. \begin{proposition} \label{prop-smp1} Let $\O$ be any open subset of $\mathbb{R}^{N+1}$. Let $u \le 0$ ($u \ge 0$, respectively) be a classical solution to $\L u = f$ with $f \ge 0$ ($f \le 0$, respectively) in $\Omega$. If $u (z_0) = 0$ for some $z_0 \in \Omega$, then \begin{equation*} u(z) = 0 \quad \text{and} \quad f(z) = 0 \qquad \text{for every} \ z \in \overline {\AS ( \O )}. \end{equation*} \end{proposition} In the remaning part of this intoduction we focus on some modified mean value formulas useful in the proof of parabolic Harnack inequality. As already noticed, the main difficulty one encounters in the proof of the Harnack inequality is due to the unboundedness of the kernels introduced in \eqref{e-kernels}. In order to overcome this issue, we can rely on the idea introduced by Kupcov in \cite{Kupcov4}, and developed by Garofalo and Lanconelli in \cite{GarofaloLanconelli-1989} in the case of parabolic operators with smooth coefficients. This method provides us with some bounded kernels and gives us a useful tool for a direct proof of the Harnack inequality. We outline here the procedure. Let $m$ be a positive integer, and let $u$ be a solution to $\L u = f$ in $\mathbb{R}^{N+1}$. We set \begin{equation*} \widetilde u(x,y,t) := u(x,t), \qquad \widetilde f(x,y,t) := f(x,t), \qquad (x,y,t) \in \mathbb{R}^{N}\times \mathbb{R}^{m} \times \mathbb{R}, \end{equation*} and we note that \begin{equation*} \widetilde \L \ \widetilde u(x,y,t) = \widetilde f(x,y,t) \qquad \widetilde \L = \L + \sum_{j=1}^{m}\tfrac{\partial^2}{\partial y^2_j} = \L + \Delta_y. \end{equation*} Moreover, if $\Gamma$ and $K_m$ denote fundamental solutions of $\L$ and of the heat equation in $\mathbb{R}^m$, respectively, then the function \begin{equation*} \widetilde \Gamma(\xi, \eta, \tau ;x,y,t) = \Gamma (\xi, \tau ;x,t) K_m (\eta,\tau ;y,t) \end{equation*} is a fundamental solution of $\widetilde \L$. Then, integrating with respect to $y$ in the mean value formulas of Theorem \ref{th-1}, applied to $\widetilde u$ and to the operator $\widetilde \L$, gives new kernels, that are bounded whenever $m>2$. We intoduce further notations. 
\begin{equation} \label{e-Omegam} \begin{split} \Omega^{(m)}_r(z_0) := & \left\{ z \in \mathbb{R}^{N+1} \mid (4 \pi (t_0-t))^{-m/2}\Gamma(z_0; z) > \tfrac{1}{r^{N+m}} \right\}, \\ N_r(z_0;z) := & 2 \sqrt{t_0-t}\sqrt{\log\left(\tfrac{r^{N+m}}{ (4 \pi (t_0-t))^{m/2}} \Gamma(z_0;z) \right)}, \\ M_r^{(m)} (z_0;z):= & \omega_m N_r^m(z_0;z) \left( M(z_0;z) + \frac{m}{m+2} \cdot \frac {N_r^2(z_0;z)}{4(t_0-t)^2} \right),\\ W_r^{(m)} (z_0;z):= & \frac{\omega_m}{r^{N+m}} N_r^m(z_0;z) - \frac{m}{2} \cdot \frac{\omega_m}{(4 \pi)^{m/2}} \, \Gamma(z_0,z) \cdot \widetilde \gamma \left(\frac{m}{2}; \frac {N_r^2(z_0;z)}{4(t_0-t)} \right), \end{split} \end{equation} where $M(z_0;z)$ is the kernel introduced in \eqref{e-kernels}, $\omega_m$ denotes the volume of the $m$-dimensional unit ball and $\widetilde \gamma$ is the lower incomplete gamma function \begin{equation*} \widetilde \gamma(s;w) := \int_0^w \tau^{s-1} e^{- \tau} d \tau. \end{equation*} Note that the function $N(z_0,z)$ is well defined for every $z \in \Omega^{(m)}_r(z_0)$, as the argument of the logarithm is positive, and that we did not point out the dependence of $N_r$ on the space dimension $m$ to avoid a possible confusion with its powers appearing in the definitions of $M_r^{(m)}$ and $W_r^{(m)}$. \begin{proposition} \label{prop-2} Let $\Omega$ be an open subset of $\mathbb{R}^{N+1}$, and let $u$ be a classical solution to $\L u = f$ in $\Omega$. Then, for every $z_0 \in \Omega$ and for every $r>0$ such that ${\Omega^{(m)}_r(z_0)} \subset \Omega$ we have \begin{equation*} \begin{split} u(z_0) = & \frac{1}{r^{N+m}} \int_{\Omega^{(m)}_r(z_0)} \! \! \! \! \! \! M_r^{(m)} (z_0; z) u(z) \, dz \, \\ & + \frac{N+m}{r^{N+m}} \int_0^{r} \left( \r^{N+m-1} \int_{\Omega_\r^{(m)} (z_0)} \! \! \! \! W_\r^{(m)} (z_0;z) f (z) \, dz \right) d \r \\ & + \frac{N+m}{r^{N+m}} \int_0^{r} \left( \frac{\omega_m}{\r} \int_{\Omega_\r^{(m)} (z_0)} \! \! \! \! N_\r^m(z_0;z) \left( \div \, b(z) - c(z) \right) u(z) \, dz \right) d \r. \end{split} \end{equation*} \end{proposition} We conclude this introduction with two statements of the parabolic Harnack inequality. The first one is given in terms of the parabolic ball $\Omega_{r}^{(m)}(z_0)$, the second one is the usual invariant parabolic Harnack inequality. We emphasize that our proof is elementary, as it is based on the mean value formula, however some accurate estimates of the fundamental solution are needed in order to control the Harnack constant and the size of the cylinders appearing in its statement. For every $z_0 = (x_0,t_0) \in \mathbb{R}^{N+1}, r >0$, and $m \in \mathbb{N}$ we set \begin{equation} \label{e-Kr} K^{(m)}_{r}(z_0) := \overline{\Omega_{r}^{(m)}(z_0)} \cap \Big\{ t \le t_0 - \frac{1}{4 \pi \lambda^{N/(N+m)}} \,r^2 \Big\}. \end{equation} We note that, as a consequence of Lemma \ref{lem-localestimate*} below, for every sufficiently small $r$ the compact set $K^{(m)}_{r}(z_0)$ is non empty. \begin{proposition} \label{prop-Harnack} For every $m \in \mathbb{N}$ with $m > 2$, there exist two positive constants $r_0$ and $C_K$, only depending on $\L$ and $m$, such that the following inequality holds. Let $\Omega$ be an open subset of $\mathbb{R}^{N+1}$. For every $z_0 \in \Omega$ and for every positive $r$ such that $r \le r_0$ and $\Omega_{5r}^{(m)}(z_0) \subset \Omega$ we have that \begin{equation} \label{e-H0} \sup_{K^{(m)}_{r}(z_0)} u \le C_K u(z_0) \end{equation} for every $u \ge 0$ solution to $\L u = 0$ in $\Omega$. 
\end{proposition} We introduce some further notation in order to state an \emph{invariant} Harnack inequality. For every $z_0 = (x_0,t_0) \in \mathbb{R}^{N+1}$ and for every $r>0$ we set \begin{equation} \label{e-Q} \Q_{r}(z_0) := B_r(x_0) \times ]t_0- r^2,t_0[, \end{equation} where $B_r(x_0)$ denotes the Euclidean ball with center at $x_0$ and radius $r$. Moreover, for $0 < \iota < \kappa < \mu < 1$ and $0 < \vartheta < 1$ we set \begin{equation} \label{e-QPM} \Q^-_{r}(z_0) := B_{\vartheta r}(x_0) \times ]t_0 - \kappa r^2,t_0 - \mu r^2[, \qquad \Q^+_{r}(z_0) := B_{\vartheta r}(x_0) \times ]t_0 - \iota r^2,t_0[. \end{equation} We have \begin{theorem} \label{th-Harnack-inv} Choose positive constants $R_0$ and $\iota, \kappa, \mu, \vartheta$ as above and let $\Omega$ be an open subset of $\mathbb{R}^{N+1}$. Then there exists a positive constant $C_H$, only depending on $\L$, on $R_0$ and on the constant that define the cylinders $\Q, \Q^+, \Q^-$, such that the following inequality holds. For every $z_0 \in \Omega$ and for every positive $r$ such that $r \le R_0$ and ${\Q_{r}(z_0)} \subset \Omega$ we have that \begin{equation} \label{e-H1} \sup_{\Q^-_{r}(z_0)} u \le C_H \inf_{\Q^+_{r}(z_0)} u \end{equation} for every $u \ge 0$ solution to $\L u = 0$ in $\Omega$. \end{theorem} \begin{center} \begin{tikzpicture} \path[draw,thick] (-1,.7) rectangle (9,6.8); \filldraw [fill=blue!20!white, draw=blue, line width=.6pt] (1.5,6) rectangle node {{\color{blue} $Q^+_r(x_0,t_0)$}} node [below=.7cm,right=1.95cm] {{\color{blue} $t_0 - \iota r^2$}} (5.5,4.5); \filldraw [fill=red!20!white, draw=red, line width=.6pt] (1.5,2) rectangle node {{\color{red} $Q^-_r(x_0,t_0)$}} node [above=.7cm,right=1.95cm] {{\color{red} $t_0 - \kappa r^2$}} node [below=.7cm,right=1.95cm] {{\color{red} $t_0 - \mu r^2$}} (5.5,3.5); \draw [line width=.6pt] (0,6) rectangle node [above=2cm,right=3.6cm] {$Q_r(x_0,t_0)$} node [below=2.2cm,right=3.6cm] {$t_0-r^2$} (7,1); \draw [line width=.6pt] (3.5,6) circle (.6pt) node[above] {$(x_0,t_0)$} node[above= 7pt, right=1.2cm] {{\color{blue}$\vartheta r$}} node[above=7pt, right=2.8cm] {$r$}; \end{tikzpicture} {\sc \qquad Fig.2} - The set $Q_r(x_0,t_0)$. \end{center} We conclude this introduction with some comments about our main results. Mean value formulas don't require the uniqueness of the fundamental solution $\Gamma$. In Section \ref{secFundSol} we recall the main results we need on the existence of a fundamental solution together with some known facts about its uniqueness. We also recall in Proposition \ref{prop-localestimate} an asymptotic bound of $\Gamma$ which allows us to use a direct procedure in a part of the proof of the mean value formulas stated in Theorem \ref{th-1}. We point out that recent progresses on mean value formulas and their applications can be found e.g. in \cite{CMPCS}. Moreover, an alternative and more general approach has been introduced by Cupini and Lanconelli in \cite{CupiniLanconelli2021}, where a wide family of differential operators with smooth coefficients is considered. We continue the outline of this article. Section \ref{SectionDivergence} contains the statement of a generalized divergence theorem for {sets with almost $C^1$ boundary} that is used in Section \ref{sectionProof} for the proof of the mean values formulas. Section \ref{sectionHarnack} is devoted to the proof of the Harnack inequality. In the last section we present an alternative approach for the mean value formula that relies on geometric measure theory. 
We finally remark that our method also applies to uniformly elliptic equations. Moreover, mean value formulas and Harnack inequality are fundamental tools in the development of the Potential Theory for the operator $\L$. \setcounter{equation}{0} \section{Fundamental solution}\label{secFundSol} In this Section we recall some notations and some known results on the classical theory of uniformly parabolic equations that will be used in the sequel. Points of $\mathbb{R}^{N+1}$ are denoted by $z=(x,t), \z = (\x,\t)$ and $\Omega$ denotes an open subset of $\mathbb{R}^{N+1}$. Let $u$ be a real valued function defined on $\Omega$. We say that $u$ belongs to $C^{2,1}(\O)$ if $u, \frac{\partial u}{\partial x_j}, \frac{\partial^2 u}{\partial x_i\partial x_j}$ for $i,j= 1, \dots, N$ and $\frac{\partial u}{\partial t}$ are continuous functions, it belongs to $C^{2+ \alpha,1 + \alpha/2}(\O)$ if $u$ and all the derivatives of $u$ listed above belong to the space $C^{\alpha}(\O)$ of the H\"older continuous functions defined by \eqref{e-hc}. A function $u$ belongs to $C^\alpha_\loc(\Omega)$ ($C^{2+ \alpha,1 + \alpha/2}_\loc(\O)$, respectively) if it belongs to $C^\alpha(K)$ (resp. $C^{2+ \alpha,1 + \alpha/2}(K)$) for every compact set $K \subset \O$. Let $f$ be a continuous function defined on $\O$. We say that $u \in C^{2,1}(\O)$ is a classical solution to $\L u = f$ in $\Omega$ if the equation \eqref{e-L} is satisfied at every point $z \in \O$. According to Friedman \cite{Friedman}, we say that a fundamental solution $\G$ for the operator $\L$ is a function $\G = \G(z; \z)$ defined for every $(z; \z) \in \mathbb{R}^{N+1} \times \mathbb{R}^{N+1}$ with $t> \tau$, which satisfies the following contitions: \begin{enumerate} \item For every $\z = (\xi, \t) \in \mathbb{R}^{N+1}$ the function $\G( \, \cdot \, ; \z)$ belongs to $C^{2,1}(\mathbb{R}^{N} \times ]\t, + \infty[)$ and is a classical solution to $\L \, \G(\cdot \,; \z) = 0$ in $\mathbb{R}^{N} \times ]\t, + \infty[$; \item for every $\phi \in C_c(\mathbb{R}^N)$ the function $$ u(z)=\int_{\mathbb{R}^{N}}\Gamma(z;\xi, \t)\phi(\x)d \x, $$ is a classical solution to the Cauchy problem \begin{equation*} \left\{ \begin{array}{ll} \L u = 0, & \hbox{$z \in \mathbb{R}^{N} \times ]\t, + \infty[$} \\ u( \cdot,\t) = \varphi & \hbox{in} \ \mathbb{R}^N. \end{array} \right. \end{equation*} \end{enumerate} Note that $u$ is defined for $t>\t$, then the above identity is understood as follows: for every $\x \in \mathbb{R}^N$ we have $\lim_{(x,t) \to (\x,\t)}u(x,t) = \varphi (\x)$. We also point out that the two above conditions do not guarantee the uniqueness of the fundamental solution. However, as we shall see in the following, estimates \eqref{upper-bound} and \eqref{e-deriv-bds} hold for the fundamental solution $\Gamma$ built by the parametrix method and the fundamental solution verifying such estimates is unique. Indeed, it follows from the proof of Theorem 15 in Ch.1 of \cite{Friedman} that there is only one fundamental solution under the further assumptions that $\Gamma(x,t;\x,\t) \to 0$ as $|x| \to + \infty$ and $|\partial_{x_j}\Gamma(x,t;\x,\t)| \to 0$ as $|x| \to + \infty$, for $j=1, \dots,N$, uniformly with respect to $t$ varying in bounded intervals of the form $]\tau, \tau + T]$. We outline here the parametrix method for the construction of a fundamental solution $\G$ of $\L$. 
We first note that, if the matrix $A$ in the operator $\L$ is constant, then the fundamental solution of $\L$ is explicitly known \begin{equation} \label{eq-fundsol-A} \G_A(z;\z) = \frac{1}{\sqrt{(4 \pi (t-\t))^{N}\det A}} \exp \left( - \frac{\langle A^{-1}(x-\x),x-\x \rangle}{4(t-\t)} \right), \end{equation} and moreover the \emph{reproduction property} holds: \begin{equation} \label{eq-rep-prop-A} \G_A(z;\z) = \int_{\mathbb{R}^N} \G_A(x,t;y,s) \G_A(y,s;\x,\t) d y, \end{equation} for every $z=(x,t), \z=(\x,\t) \in \mathbb{R}^{N+1}$ and $s \in \mathbb{R}$ with $\t < s < t$. A direct computation shows that, for every $T>0$, and $\Lambda^+ > \Lambda$ as in \eqref{e-up}, there exists a positive constant $C^+= C^+(\lambda, \Lambda, \Lambda^+,T)$ such that \begin{equation} \label{eq-fundsol-bd-2} \begin{split} \left| \frac{\partial \G_A}{\partial {x_j}}(z;\z) \right| \le \frac{C^+}{\sqrt{t-\t}} \G^+(z;\z), \qquad \left| \frac{\partial^2 \G_A}{\partial{x_i x_j}}(z;\z) \right| \le \frac{C^+}{t-\t} \G^+(z;\z) \end{split} \end{equation} for any $i,j= 1, \dots, N$ and for every $z, \z \in \mathbb{R}^{N+1}$ such that $ 0 < t - \t \le T$. Here the function \begin{equation} \label{eq-fundsol+} \G^+(z;\z) = \frac{1}{(\Lambda^+4\pi (t-\t))^{N/2}} \exp \left( - \frac{|x- \x|^2}{4 \Lambda^+(t-\t)} \right), \end{equation} is the fundamental solution of $\Lambda^+ \Delta - \frac{\partial}{\partial t}$. The \emph{parametrix} $Z$ for $\L$ is defined as \begin{equation} \label{eq-fundsol-Z} Z(z;\z) := \G_{A(\z)}(z;\z) = \frac{1}{\sqrt{(4 \pi (t-\t))^{N}\det A(\z)}} \exp \left( - \frac{\langle A(\z)^{-1}(x-\x),x-\x \rangle}{4(t-\t)} \right). \end{equation} More specifically, for every fixed $(\x,\tau) \in \mathbb{R}^{N+1}, Z( \, \cdot \, ; \x,\t)$ is the fundamental solution of the operator $\L_{\z}$ obtained by \emph{freezing} the coefficients $a_{ij}$'s of the operator $\L$ at the point $\z$: \begin{equation} \label{e-L0} \L_{\z} := \div \left( A (\z) \nabla_x \right) - \, \tfrac{\partial }{\partial t}. \end{equation} Note that \begin{equation} \label{eq-LZ} \L Z(z;\z) := \div \left[\left( A(z) - A(\z) \right) \nabla_x Z(z;\z) \right], \end{equation} which vanishes as $z \to \z$, by the continuity of the matrix $A$. The fundamental solution $\G$ for $\L$ is obtained from $Z$ by an iterative procedure. We define the sequence of functions $(\L Z)_1 (z;\z) := \L Z(z;\z)$, \begin{equation} \label{eq-LkZ} (\L Z)_{k+1}(z;\z) := \int_\t^t \bigg( \int_{\mathbb{R}^N}(\L Z)_{k} (x,t;y,s) \L Z(y,s;\x,\t) d y \bigg) d s, \qquad k \in \mathbb{N}. \end{equation} Note that estimates \eqref{eq-fundsol-bd-2} also apply to $Z$ then, by using the H\"older continuity of the coefficients of $\L$, we obtain \begin{equation*} \left|\L Z(z;\z) \right| \le \frac{\widetilde C}{(t-\t)^{1 - \alpha/2}} \G^+(z;\z), \end{equation*} for a positive constant $\widetilde C$ depending on $\lambda, \Lambda, \Lambda^+, T$ and on the constant $M$ in \eqref{e-hc}. This inequality and the reproduction property \eqref{eq-rep-prop-A} applied to $\Gamma^+$ imply that, for every $k \ge 2$, the integral that defines $(\L Z)_{k}$ converges and \begin{equation*} \left| (\L Z)_{k}(z;\z) \right| \le \frac{ (\Gamma_E(\alpha/2) \widetilde C)^k }{\Gamma_E(\alpha k /2)(t-\t)^{1 - k\alpha/2}} \G^+(z;\z), \qquad k \in \mathbb{N}, \end{equation*} were $\Gamma_E$ denotes the Euler's Gamma function. 
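For completeness, and only for the reader's convenience, we sketch the inductive step behind the last estimate. Assuming the bound for $(\L Z)_k$, the definition \eqref{eq-LkZ}, the estimate on $\L Z$ and the reproduction property \eqref{eq-rep-prop-A} applied to $\G^+$ give
\begin{equation*}
\begin{split}
\left| (\L Z)_{k+1}(z;\z) \right| & \le \frac{(\Gamma_E(\alpha/2) \widetilde C)^k \, \widetilde C}{\Gamma_E(\alpha k /2)} \, \G^+(z;\z) \int_\t^t (t-s)^{\alpha k/2 -1} (s-\t)^{\alpha/2 -1} \, ds \\
& = \frac{(\Gamma_E(\alpha/2) \widetilde C)^k \, \widetilde C}{\Gamma_E(\alpha k /2)} \cdot \frac{\Gamma_E(\alpha k/2) \, \Gamma_E(\alpha/2)}{\Gamma_E(\alpha (k+1)/2)} \, (t-\t)^{(k+1)\alpha/2 -1} \, \G^+(z;\z) \\
& = \frac{ (\Gamma_E(\alpha/2) \widetilde C)^{k+1} }{\Gamma_E(\alpha (k+1) /2)\,(t-\t)^{1 - (k+1)\alpha/2}} \, \G^+(z;\z),
\end{split}
\end{equation*}
where we used $\int_{\mathbb{R}^N} \G^+(x,t;y,s) \G^+(y,s;\x,\t) \, dy = \G^+(z;\z)$ and the Beta function identity $\int_\t^t (t-s)^{a-1} (s-\t)^{b-1} ds = \tfrac{\Gamma_E(a) \Gamma_E(b)}{\Gamma_E(a+b)} (t-\t)^{a+b-1}$ with $a = \alpha k/2$, $b = \alpha/2$. The rapid growth of $\Gamma_E(\alpha k/2)$ then guarantees the convergence of the series $\sum_k (\L Z)_k$ used below in the construction of $\G$.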
Theorem 8 in \cite[Chapter 1]{Friedman} states that, under the assumption that the coefficients $a_{ij}, \frac{\partial a_{ij}}{\partial x_i}, b_{i}, \frac{\partial b_{i}}{\partial x_i}$, for $i, j = 1, \dots, N$ and $c$ belong to the space $C^\alpha(\mathbb{R}^N \times ]T_0, T_1[)$ with $T_0 < T_1$ and satisfy \eqref{e-up}, the series \begin{equation} \label{eq-Gamma} \G (z;\z) := Z(z;\z) + \sum_{k=1}^{\infty} \int_\t^t \bigg( \int_{\mathbb{R}^N}Z(x,t;y,s) (\L Z)_k (y,s;\x,\t) d y \bigg) d s \end{equation} converges in $\mathbb{R}^N \times ]T_0, T_1[$ and it turns out that its sum $\G$ is a fundamental solution for $\L$. We next list some properties of the function $\G$ defined in \eqref{eq-Gamma}. We mainly refer to Chapter I in the monograph \cite{Friedman} by Friedman. \begin{enumerate} \item Theorem 8 in \cite{Friedman}: for every $\z \in \mathbb{R}^{N+1}$ the function $\G(\cdot \, ; \z)$ belongs to $C^{2,1}(\mathbb{R}^N \times ]\t, + \infty[)$ and it is a classical solution to $\L \, \G = 0$ in $\mathbb{R}^N \times ]\t, + \infty[$. \item Theorem 9 in \cite{Friedman}: for every bounded functions $\phi \in C(\mathbb{R}^N)$ and $f \in C^\alpha(\mathbb{R}^N \times ]\t, T_1[)$, with $T_0 < \t < T_1$, the function $$ u(z)=\int_{\mathbb{R}^{N}}\Gamma(z;\z)\phi(\x)d \x - \int_\t^t \bigg( \int_{\mathbb{R}^{N}}\Gamma(x,t;\x, s) f(\x, s)d \x \bigg) d s $$ is a classical solution to the Cauchy problem \begin{equation} \label{cauchyproblem} \left\{ \begin{array}{ll} \L u = f, & \hbox{$z \in \mathbb{R}^{N} \times ]\t, + \infty[$} \\ u( \cdot,\t) = \varphi & \hbox{in} \ \mathbb{R}^N. \end{array} \right. \end{equation} \item Theorem 15 in \cite{Friedman}: The function $\G^*(z;\z) := \G(\z;z)$ is the fundamental solution of the transposed operator $\L^*$ acting on a suitably smooth function $v$ as follows \begin{equation} \label{e-L*} \L^* v (z) := \div \left( A (z) \nabla_x v(z) \right) - \langle b(z), \nabla_x v(z)\rangle + (c(z) - \div \, b(z)) v(z) + \, \tfrac{\partial u}{\partial t}(z). \end{equation} \item Inequalities (6.10) and (6.11) in \cite{Friedman}: for every positive $T$ and $\Lambda^+ > \Lambda$ there exists a positive constant $C^{+}$ such that \begin{equation} \label{upper-bound} \G(z; \z) \le C^{+} \, \G^{+} (z; \z), \end{equation} for every $z = (x,t), \z = (\x, \t) \in \mathbb{R}^{N+1}$ with $0 < t- \t < T$. Moreover, the following bounds for the derivatives hold \begin{equation} \label{e-deriv-bds} \begin{split} \left| \frac{\partial \G}{\partial {x_j}}(z;\z) \right| & \le \frac{C^+}{\sqrt{t-\t}} \G^+(z;\z), \quad \left| \frac{\partial^2 \G}{\partial{x_i x_j}}(z;\z) \right| \le \frac{C^+}{t-\t} \G^+(z;\z), \\ \left| \frac{\partial \G}{\partial {\x_j}}(z;\z) \right| & \le \frac{C^+}{\sqrt{t-\t}} \G^+(z;\z), \quad \left| \frac{\partial^2 \G}{\partial{\x_i \x_j}}(z;\z) \right| \le \frac{C^+}{t-\t} \G^+(z;\z), \end{split} \end{equation} for any $i,j=1, \dots, N$ and for every $z, \z \in \mathbb{R}^{N+1}$ with $0 < t- \t < T$. \end{enumerate} We recall that the monograph \cite{Friedman} also contains an existence and uniqueness result for the Cauchy problem under the assumptions that the functions $\varphi$ and $f$ in the Cauchy problem \eqref{cauchyproblem} do satisfy the following growth condition: \begin{equation*} | \varphi(x)| + |f(z)| \le C_0 \exp \left( h |x|^2 \right) \qquad \text{for every} \ x\in \mathbb{R}^N \ \text{and} \ t \in ]\tau, T_1], \end{equation*} for some positive constants $C_0$ and $h$. 
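We recall, only as a motivation for this growth restriction, the classical counterexample of Tychonoff: in dimension $N=1$ the function
\begin{equation*}
u(x,t) := \sum_{k=0}^{\infty} \frac{g^{(k)}(t)}{(2k)!} \, x^{2k}, \qquad g(t) := \begin{cases} e^{-1/t^2}, & t > 0, \\ 0, & t \le 0, \end{cases}
\end{equation*}
is a non-trivial classical solution to $\partial_x^2 u - \partial_t u = 0$ in $\mathbb{R} \times ]0,+\infty[$ with $u(\cdot,0) = 0$. By the uniqueness result quoted above, $u$ cannot satisfy a bound of the form $|u(x,t)| \le C_0 \exp\left( h x^2 \right)$ for any choice of the constants $C_0$ and $h$.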
The reproduction property \eqref{eq-rep-prop-A} for $\G$ holds as a direct consequence of the uniqueness of the solution to the Cauchy problem. We also have \begin{equation*} e^{-\Lambda(t-\t)} \le \int_{\mathbb{R}^{N}} \G(x,t;\x, \t) \; d \x \le e^{\Lambda(t-\t)} \end{equation*} for every $(x, t), (\x, \t) \in \mathbb{R}^{N+1}$ with $\t < t$, where $\Lambda$ is the constant introduced in \eqref{e-up}. We conclude this section by quoting a statement on the asymptotic behavior of fundamental solutions, which in the stochastic theory is referred to as \emph{large deviation principle}. In our setting it is useful in the description of the \emph{parabolic ball} $\Omega_r(z_0)$ introduced in \eqref{e-Psi}. The first large deviation theorem is due to Varhadhan \cite{Varadhan1967behavior, Varadhan1967diffusion}, who considers parabolic operators $\L$ whose coefficients only depend on $x$ and are H\"older continuous. It states that \begin{equation} 4 (t-\tau) \log (\Gamma(x,t;\xi,\tau)) \longrightarrow - d^2(x,\xi) \quad \text{as} \ t \to \tau, \end{equation} uniformly with respect to $x,\xi$ varying on compact sets. Here $d(x,\xi)$ denotes the Riemannian distance (induced by the matrix $A$) of $x$ and $\xi$. Several extensions of the large deviation principle are available in literature, under different assumption on the regularity of the coefficients of $\L$. Azencott considers in \cite{Azencott84} operators with smooth coefficients and proves more accurate estimates for the asymptotic behavior of $\log\big(\Gamma(x,t;\x,\t)\big)$. Garofalo and Lanconelli prove an analogous result by using purely PDEs methods in \cite{GarofaloLanconelli-1989}. We recall here a version of this result which is suitable for our purposes. \begin{proposition} \label{prop-localestimate} {\sc [Theorem 1.2 in \cite{Polidoro2}]} \ For every $\eta \in ]0,1[$ there exists $C_\eta>0$ such that \begin{equation} \label{eq:approximation} (1 - \eta) Z(z;\z) \le \G(z;\z) \le (1 + \eta) Z(z;\z) \end{equation} for every $z, \z \in \mathbb{R}^{N+1}$ such that $Z(z;\z) > C_\eta $. \end{proposition} We finally prove a simple consequence of Proposition \ref{prop-localestimate} that will be used in the following. We introduce some further notation in order to give its statement. We first note that the function $\Gamma^*$ can be built by using the parametrix method, starting from the expression of the parametrix relevant to $\L^*$, that is \begin{equation} \label{eq-fundsol-Z*} Z^*(z;\z) := \G^*_{A(\z)}(z;\z) = \frac{1}{\sqrt{(4 \pi (\t-t))^{N}\det A(\z)}} \exp \left( - \frac{\langle A(\z)^{-1}(x-\x),x-\x \rangle}{4(\t-t)} \right). \end{equation} We set \begin{equation} \label{eq-Omega_r*} \Omega_r^*(z_0) := \bigg\{ z \in \mathbb{R}^{N+1} \mid Z^*(z;z_0) \ge \frac{2}{r^N} \bigg\}, \end{equation} and we point out that its explicit expression is: \begin{equation} \label{eq-Omega_r*-exp} \begin{split} \Omega_r^*(z_0) = & \Big\{ (x,t) \in \mathbb{R}^{N+1} \mid \langle A^{-1}(z_0)(x-x_0), x-x_0 \rangle \le \\ & \qquad - 4 (t_0 - t) \left( \log \big( \tfrac{2}{r^N} \big) + \tfrac{1}{2} \log (\text{det} A(z_0) )+ \tfrac{N}{2} \log (4 \pi (t_0 - t)) \right) \Big\}. \end{split} \end{equation} We have \begin{lemma} \label{lem-localestimate*} There exists a positive constant $r^*$, only depending on the operator $\L$, such that \begin{equation*} \Omega_r^*(z_0) \subset \Omega_r(z_0) \subset \Omega_{3r}^*(z_0) \end{equation*} for every $z_0 \in \mathbb{R}^{N+1}$ and $r \in ]0,r^*]$. 
\end{lemma} \begin{proof} As said before, the function $\Gamma^*$ can be built by using the parametrix $Z^*$ defined in \eqref{eq-fundsol-Z*}. In particular, Proposition \ref{prop-localestimate} applies to $\Gamma^*$. Then, if we apply the estimate \eqref{eq:approximation} with $\eta = \frac12$ and we use \eqref{e-L*}, we find that there exists $C^*>0$ such that \begin{equation*} \frac12 Z^*(\z;z_0) \le \G(z_0;\z) \le \frac32 Z^*(\z;z_0) \end{equation*} for every $z_0, \z \in \mathbb{R}^{N+1}$ such that $Z^*(\z;z_0) > C^*$. The claim then follows from \eqref{e-Psi} and \eqref{eq-Omega_r*} by choosing $r^*:=\left(\frac{2}{C^*}\right)^{1/N}$. \end{proof} We conclude this section with a further result useful in the proof of the Harnack inequality. \begin{lemma} \label{lem-localestimategradient} {\sc [Proposition 5.3 in \cite{Polidoro2}]} \ Let $r^*$ be the constant appearing in Lemma \ref{lem-localestimate*}. There exists a positive constants $C$, only depending on the operator $\L$, such that \begin{equation*} \left| \partial_{x_j} \Gamma(z_0, z) \right| \le C \left( \frac{|x_0-x|}{t_0-t} + 1 \right) \Gamma(z_0, z), \qquad j=1, \dots, N, \end{equation*} for every $z_0 \in \mathbb{R}^{N+1}$ and $z \in \Omega_r(z_0)$ with $r \in ]0,r^*]$. \end{lemma} \setcounter{equation}{0} \section{A generalized divergence theorem}\label{SectionDivergence} Let $\Omega$ be an open subset of $\mathbb{R}^n$, and let $\Phi \in C^1 \pr{\Omega;\mathbb{R}^{n}}$. The classical divergence formula reads \begin{equation} \label{eq-div} \int_{ E } \mathrm{div}\,\Phi\ dz =-\int_{ \partial E} \scp{\nu,\Phi}\ d \H^{n-1}, \end{equation} where $E$ is a bounded set such that $\overline E \subset \Omega$ and its boundary is $C^1$. We are interested in the situation in which $E$ is the \emph{super-level} set of a real valued function $F \in C^1\pr{\O}$, that is $E = \left\{F>y\right\}$ for some $y \in \mathbb{R}$. At every point $z\in \partial E$ such that $\nabla F(z)\neq 0$ the \emph{inner} unit normal vector $\nu = \nu(z)$ appearing in \eqref{eq-div} is defined as $\nu(z) = \frac{1}{|\nabla F(z)|}\nabla F(z)$ and $\partial E$ is a $C^1$ manifold in a neighborhood of $z$. But, if we denote $$ \mathrm{Crit} \pr{F}:=\left\{z \in \mathbb{R}^n : \nabla F =0\right\}, $$ the set of \emph{critical points} and $F \pr{\mathrm{Crit}\pr{F}}$ the set of \emph{critical values} of $F$, under our hypotheses we cannot apply the classical Sard theorem to state that ``for almost every $y \in \mathbb{R}$ the level set $\{F=y\}$ is globally a $C^1$ manifold''. Indeed, Whitney proves in \cite{whitney1935} that there exist functions $F \in C^1\pr{\O}$ having the property that $\{F=y\} \cap \mathrm{Crit} \pr{F}$ is not empty for every $y$. Therefore, the purpose of this section is to discuss a version of \eqref{eq-div} when the boundary of $E$ is $C^1$ up to a closed set of null Hausdorff measure and to see how it can be applied in our framework. We first introduce the class of sets with the relevant regularity and state the corresponding divergence formula. We draw this definition and the following theorem from \cite[Section 9.3]{Maggi}. 
\begin{definition}\label{def-almost-C1} An open set $E\subset \mathbb{R}^n$ has {\em almost $C^1$-boundary} if there is a closed set $M_0\subset \partial E$ with $\H^{n-1}(M_0)=0$ such that, for every $z_0\in M = \partial E \setminus M_0$ there exist $s > 0$ and $F \in C^1 (B(z_0, s))$ with the property that \begin{align*} B(z_0,s) \cap E & = \{z\in B(z_0,s):\ F(z) > 0\} , \\ B(z_0,s) \cap \partial E & = \{z\in B(z_0,s):\ F(z) = 0\} \end{align*} and $\nabla F(z)\neq 0$ for every $z\in B(z_0,s)$. We call $M$ the {\em regular part} of $\partial E$ (note that $M$ is a $C^1$-hypersurface). The inner unit normal to $E$ is the continuous vector field $\nu\in C^0 (M;{\mathbb S}^{n-1})$ given by \[ \nu(z) = \frac{\nabla F(z)}{|\nabla F(z)|}, \quad z \in B(z_0,s) \cap M . \] \end{definition} Let us state the divergence theorem for sets with almost $C^1$-boundary. \begin{theorem}\label{gen-div-thm} If $E\subset\mathbb{R}^n$ is an open set with almost $C^1$-boundary and $M$ is the regular part of its boundary, then for every $\Phi\in C^1_c(\mathbb{R}^n;\mathbb{R}^n)$ the following equality holds \begin{equation}\label{eq-div-almost} \int_{ E } \mathrm{div}\,\Phi\ dz =-\int_{M} \scp{\nu,\Phi}\ d \H^{n-1} . \end{equation} \end{theorem} If $F \in C^1\pr{\O}$ and $E = \left\{F>y\right\}$ for some $y \in \mathbb{R}$, we can apply Theorem \ref{gen-div-thm} thanks to the following result due to A.~Ya.~Dubovicki\v{\i} \cite{Dubovickii}, that generalizes Sard's theorem. \begin{theorem}[\sc Dubovicki\v{\i}] \label{th-Dubo} Assume that $\mathbb{N}^n$ and $\M^m$ are two smooth Riemannian manifolds of dimension $n$ and $m$, respectively. Let $F:\mathbb{N}^n \rightarrow \M^m$ be a function of class $C^k$. Set $s=n-m-k+1$, then for $\H^m-$a.e.~$y \in \M^m$ \begin{equation}\label{e-Du} \H^s \pr{\left\{F=y \right\} \cap \mathrm{Crit} \pr{F}}=0. \end{equation} \end{theorem} Notice that if $m=k=1$ and $\M^m=\mathbb{R}$, then $s=n-1$ and for $\H^1-$a.e.~$y \in \mathbb{R}$ the critical part of $\left\{F=y \right\}$ is an $\H^{n-1}$ null set, while its regular part is an $\pr{n-1}-$manifold of class $C^1$. In other words, $\{F=y\}$ is a set with almost $C^1$-boundary and we cannot apply the classical divergence theorem \eqref{eq-div}, but rather Theorem \ref{gen-div-thm}. Summarizing, we have the following result, that immediately follows from the above discussion. \begin{proposition} \label{prop-1} Let $\Omega$ be an open subset of $\mathbb{R}^{n}$ and let $F \in C^1 \pr{{\Omega};\mathbb{R}}$. Then, for $\H^1$-almost every $y \in \mathbb{R}$, we have: $$ \int_{\left\{F>y\right\}} \mathrm{div}\,\Phi\ dz = -\int_{\left\{F=y\right\}\setminus \mathrm{Crit} \pr{F}} \scp{\nu,\Phi}\ d \H^{n-1}, \quad \forall \: \Phi \in C_c^1 \pr{\Omega;\mathbb{R}^{n}}, $$ were $\nu=\tfrac{\nabla F}{\abs{\nabla F}}$. \end{proposition} \begin{proof} By Dubovicki\v{\i} Theorem \ref{th-Dubo} for $\H^1-$almost every $y \in \mathbb{R}$ the set $\{F>y\}$ has almost $C^1$-boundary, hence Theorem \ref{eq-div-almost} applies. Moreover, as $F$ is continuous, for any such $y$ we have $\partial\{F>y\}\subset\{F=y\}$, $\H^{n-1}(\{F=y\}\setminus\partial\{F>y\})=0$ and the regular part of $\partial\{F>y\}$ is $\{F=y\}\setminus\{\nabla F=0\}$ and has full $\H^{n-1}$ measure. \end{proof} In order to prove Theorem \ref{th-1} we apply Proposition \ref{prop-1} to the super-level set $\Omega_r(z_0)$ of the fundamental solution $\Gamma(z_0,\cdot)$ of $\L$. 
Then, as explained in the Introduction, we have to cut at a time less than $t_0$ to avoid the singularity of the kernels at $z_0$. Therefore, we specialize Proposition \ref{prop-1} as follows. \begin{proposition}\label{prop-1bis} Let $G \in C^1 \pr{\mathbb{R}^{N+1} \setminus \left\{ \pr{x_0,t_0} \right\};\mathbb{R}}$. Then for $\H^1-$almost every $w, \varepsilon \in \mathbb{R}$ $$ \int_{\left\{ G > w \right\} \cap \left\{ t<t_0-\varepsilon \right\}} \!\!\!\!\!\!\mathrm{div}\Phi\, dz =-\int_{(\{ G = w \} \setminus \mathrm{Crit}\pr{G}) \cap \{ t<t_0-\varepsilon \}} \!\!\!\!\!\!\scp{\nu,\Phi}d \H^{N} +\int_{\left\{ G > w \right\} \cap \left\{ t=t_0-\varepsilon \right\}} \!\!\!\!\!\!\scp{e,\Phi}d\H^{N}, $$ for every $\Phi \in C^1_c \pr{\Omega;\mathbb{R}^{N+1}}$, where $\nu=\tfrac{\nabla G }{\abs{\nabla G}}$ and $e=\pr{0,\ldots,0,1}$. \end{proposition} \begin{proof} Notice that for $\H^1-$a.e. $w\in\mathbb{R}$ the level set $\{G>w\}$ has almost-$C^1$ boundary and fix such a value. Let $S$ be the $\H^N$-negligible singular set of $\partial\{G>w\}$: by Fubini theorem, for $\H^1-$a.e. $\varepsilon>0$ the set $S\cap\{t=t_0-\varepsilon\}$ is in turn $\H^{N-1}-$negligible, and out of this set the unit normal is given $\H^N-$a.e. by $\nu$ in $\{ G = w \} \setminus \mathrm{Crit}\pr{G} \cap \{ t<t_0-\varepsilon \}$ and by $e$ in $\{ G > w \} \cap \{ t=t_0-\varepsilon \}$. Therefore, Proposition \ref{prop-1} applies with $n=N+1$, $\Omega=\mathbb{R}^{N+1}\setminus \left\{ \pr{x_0,t_0} \right\}$, \[ F(x,t)= (G(x,t)- w)\wedge (t-t_0+\varepsilon) , \] $y=0$ and the set \begin{equation*} \Sigma=(\partial\{ G > w \} \cap \{ t<t_0-\varepsilon \} \cap \mathrm{Crit}\pr{G} \bigr) \cup \bigl(\{ G = w \} \cap \{ t=t_0-\varepsilon \}\bigr) \end{equation*} is $\H^N$-negligible. \end{proof} The last result we need to prove Theorem \ref{th-1} is the coarea formula for Lipschitz functions. We refer to \cite[3.2.12]{federer1969geometric} or \cite{AmbrosioFuscoPallara}, Theorem 2.93 and formula (2.74) for the proof. \begin{theorem}[\sc Coarea formula for Lipschitz functions] \label{th-co} Let $G:\mathbb{R}^n \rightarrow \mathbb{R}$ be a Lipschitz function, and let $g$ be a non-negative measurable function. Then \begin{equation} \label{e-co} \int_{\mathbb{R}^n} g\pr{z}\abs{\nabla G \pr{z}}dz= \int_{\mathbb{R}}\pr{\int_{\left\{ G=y \right\}}g\pr{z}d\H^{n-1} \pr{z}}dy. \end{equation} \end{theorem} \setcounter{equation}{0} \section{Proof of the mean value formulas and maximum principle}\label{sectionProof} In this Section we give the proof of the mean value formulas and of the strong maximun principle. \begin{proof} {\sc of Theorem \ref{th-1}.} Let $\Omega$ be an open subset of $\mathbb{R}^{N+1}$, and let $u$ be a classical solution to $\L u = f$ in $\Omega$. Let $z_0=(x_0,t_0) \in \Omega$ and let $r_0>0$ be such that $\overline{\Omega_{r_0}(z_0)} \subset \Omega$. We prove our claim by applying Proposition \ref{prop-1bis} with $G(z) = \Gamma(z_0; z)$ and $w = \frac{1}{r^N}$, where $r \in ]0,r_0]$ is such that the statement of Proposition \ref{prop-1bis} holds true with $w = \frac{1}{r^N}$, and $\varepsilon := \varepsilon_k$ for some monotone sequence $\big(\varepsilon_k\big)_{k \in \mathbb{N}}$ such that $\varepsilon_k \to 0$ as $k \to + \infty$ (see Figure 3). 
\begin{center} \begin{tikzpicture} \clip (-.5,7.5) rectangle (6.7,2); \shadedraw [top color=blue!10] (-2,6) rectangle (7,1); \begin{axis}[axis y line=middle, axis x line=middle, xtick=\empty,ytick=\empty, ymin=-1.1, ymax=1.1, xmin=-.2,xmax=1.8, samples=101, rotate= -90] \addplot [black,line width=.7pt, domain=-.01:.01] {sqrt(.0001 - x * x)} node[above] {\hskip12mm $(x_0,t_0)$}; \addplot [black,line width=.7pt, domain=-.01:.01] {-sqrt(.0001 - x * x)}; \addplot [blue,line width=.7pt, domain=.001:1] {sqrt(- 2 * x * ln(x))}; \addplot [blue,line width=.7pt,domain=.001:1] {- sqrt(- 2 * x * ln(x))} node[below] { \hskip20mm $\Omega_r(x_0,t_0)$}; \end{axis} \draw [<-,line width=.4pt] (2.8475,7) -- (2.8475,2); \draw [red, line width=.6pt,] (-1,6) -- (6.7,6) node[below] { \hskip-18mm $t = t_0 - \varepsilon_k$}; \path[draw,thick] (-.49,7.49) rectangle (6.69,2.01); \end{tikzpicture} {\sc Fig.3} - The set $\Omega_r(x_0,t_0) \cap \big\{ t < t_0 - \varepsilon_k \big\}$. \end{center} For this choice of $r$, we set $v(z) := \Gamma(z_0;z) - \frac{1}{r^N}$, and we note that \begin{equation} \label{eq-div-L*} \begin{split} u(z) \L^* v(z) - v(z) \L u(z) = & \div_x \big( u(z) A(z) \nabla_x v(z) - v(z) A(z) \nabla_x u(z) \big) - \\ & \div_x \big( u(z) v(z) b(z) \big) + \partial_t (u(z)v(z)) \end{split} \end{equation} for every $z \in \Omega \backslash \big\{z_0 \big\}.$ We then recall that $\L^* v = \frac{1}{r^N} \left( \div \, b - c \right)$ and $\L u = f$ in $\Omega \backslash \big\{z_0 \big\}$. Then \eqref{eq-div-L*} can be written as follows \begin{equation*} \frac{1}{r^N} \left( \div \, b(z) - c(z) \right) u(z) - v(z) f(z) = \div \, \Phi (z), \qquad \Phi(z) := \big( u A \nabla_x v - v A \nabla_x u - uv b, uv \big)(z). \end{equation*} We then apply Proposition \ref{prop-1bis} to the set $\Omega_r(z_0) \cap \left\{ t<t_0-\varepsilon_k \right\}$ and we find \begin{equation} \label{eq-div-k} \begin{split} \int_{\Omega_r(z_0) \cap \left\{ t<t_0-\varepsilon_k \right\}} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \left( \tfrac{1}{r^N} \left( \div \, b(z) - c(z) \right) u(z) - v(z) f(z)\right) dz = & \\ - \int_{\psi_r(z_0)\setminus \mathrm{Crit}\pr{\Gamma} \cap \left\{ t<t_0-\varepsilon_k \right\}} \!\!\scp{\nu,\Phi} & d \H^{N} + \int_{\Omega_r(z_0) \cap \left\{ t=t_0-\varepsilon_k \right\}} \!\!\scp{e,\Phi}d\H^{N}, \end{split} \end{equation} where $\nu(z)=\tfrac{\nabla_{(x,t)} \Gamma(z_0,z) }{\abs{\nabla_{(x,t)} \Gamma(z_0,z)}}$ and $e=\pr{0,\ldots,0,1}$. We next let $k \to + \infty$ in the above identity. As $f$ is continuous on $\overline{\Omega_r(z_0)}$ and $v \in L^1({\Omega_r(z_0)})$, we find \begin{equation} \label{eq-div-1} \begin{split} \lim_{k \to + \infty} \int_{\Omega_r(z_0) \cap \left\{ t<t_0-\varepsilon_k \right\}} &\left( \tfrac{1}{r^N} \left( \div \, b(z) - c(z) \right) u(z) - v(z) f(z)\right) dz = \\ & \int_{\Omega_r(z_0)} \left( \tfrac{1}{r^N} \left( \div \, b(z) - c(z) \right) u(z) - v(z) f(z)\right) dz. \end{split} \end{equation} We next consider the last integral in the right hand side of \eqref{eq-div-k}. We have $\scp{e,\Phi} (z) = u(z) v(z)$, then \begin{equation} \label{eq-psir} \int_{\Omega_r(z_0) \cap \left\{ t=t_0-\varepsilon_k \right\}} \!\!\scp{e,\Phi}d\H^{N} = \int_{\mathcal{I}_r^k (z_0)} \!\! \!\! u(x,t_0- \varepsilon_k) \left( \Gamma(x_0, t_0; x,t_0 - \varepsilon_k) - \frac{1}{r^N} \right) d x, \end{equation} where we have denoted \begin{equation*} \mathcal{I}_r^k (z_0) := \left\{ x \in \mathbb{R}^N \mid (x,t_0-\varepsilon_k) \in \overline{\Omega_r (z_0)} \right\}. 
\end{equation*} We next prove that the right hand side of \eqref{eq-psir} tends to $u(z_0)$ as $k \to + \infty$. Since $\Gamma$ is the fundamental solution to $\L$, we have \begin{equation*} \lim_{k \to + \infty} \int_{\mathbb{R}^N} \!\! \!\! \Gamma(x_0, t_0; x,t_0 - \varepsilon_k) u(x,t_0- \varepsilon_k) d x = u(x_0,t_0), \end{equation*} then, being $u$ continuous on $\overline{\Omega_r (z_0)}$, we only need to show that \begin{equation} \label{eq-claim-2} \lim_{k \to + \infty} \H^{N} \left( \mathcal{I}_r^k (z_0) \right) = 0, \qquad \lim_{k \to + \infty} \int_{\mathbb{R}^N \backslash \mathcal{I}_r^k (z_0)} \!\! \!\! \Gamma(x_0, t_0; x,t_0 - \varepsilon_k) d x = 0. \end{equation} With this aim, we note that the upper bound \eqref{upper-bound} and \eqref{eq-fundsol+} imply \begin{equation*} \mathcal{I}_r^k (z_0) \subset \left\{ x \in \mathbb{R}^N \mid | x - x_0|^2 \le 4 \Lambda^+ \varepsilon_k \left( \log \left(C^+ r^N \right) - \tfrac{N}{2} \log (4 \pi \Lambda^+ \varepsilon_k) \right) \right\}. \end{equation*} The first assertion of \eqref{eq-claim-2} is then a plain consequence of the above inclusion. In order to prove the second statement in \eqref{eq-claim-2}, we rely on Lemma \ref{lem-localestimate*}. We let $\hat r := \min (r, r^*)$, so that \begin{equation*} \Omega_{\hat r}^*(z_0) \subset \Omega_{\hat r}(z_0) \subset \Omega_r(z_0), \end{equation*} thus \begin{equation*} \begin{split} \mathbb{R}^N \backslash \mathcal{I}_r^k (z_0) \subset & \left\{ x \in \mathbb{R}^N \mid Z^*(x,t_0- \varepsilon_k;x_0,t_0) \le \tfrac{2}{\hat r^N} \right\} \\ = & \Big\{ x \in \mathbb{R}^N \mid \langle A(z_0) (x - x_0), x-x_0 \rangle \\ & \quad \ge - 4 \varepsilon_k \left( \log \left( \tfrac{2}{\hat r^N} \right) + \tfrac{1}{2} \log (\det A(z_0) ) + \tfrac{N}{2} \log (4 \pi \varepsilon_k) \right) \Big\}. \end{split} \end{equation*} By using again \eqref{upper-bound}, the above inclusion, and the change of variable $x = x_0 + 2 \sqrt{\Lambda^+ \varepsilon_k} \, \xi$, we find \begin{equation*} \begin{split} \int_{\mathbb{R}^N \backslash \mathcal{I}_r^k (z_0)} & \!\! \!\! \Gamma(x_0, t_0; x,t_0 - \varepsilon_k) d x \le C^+ \int_{\mathbb{R}^N \backslash \mathcal{I}_r^k (z_0)} \!\! \!\! \Gamma^+ (x_0, t_0; x,t_0 - \varepsilon_k) d x \\ & \le \frac{C^+}{\pi^{N/2}} \int_{ \left\{ \langle A(z_0) \xi, \xi \rangle \ge - \tfrac{1}{\Lambda^+} \left( \log \left( \tfrac{2}{\hat r^N} \right) + \tfrac{1}{2} \log (\det A(z_0) ) + \tfrac{N}{2} \log (4 \pi \varepsilon_k) \right) \right\} } \!\! \exp\left( - |\xi|^2 \right) d \xi. \end{split} \end{equation*} The second assertion of \eqref{eq-claim-2} then follows. Thus, we have shown that \begin{equation} \label{eq-div-2} \lim_{k \to + \infty} \int_{\Omega_r(z_0) \cap \left\{ t=t_0-\varepsilon_k \right\}} \scp{e,\Phi} d\H^{N} = u(z_0). \end{equation} We are left with the first integral in the right hand side of \eqref{eq-div-k}. We preliminarily note that its limit, as $k \to + \infty$, does exist. Moreover, for every $z \in \psi_r(z_0)$ we have $v(z) = 0$, hence $\Phi(z) = \big( u (z) A(z) \nabla_x v(z), 0 \big)$, so that \begin{equation*} \int_{\psi_r(z_0) \setminus \mathrm{Crit}\pr{\Gamma} \cap \left\{ t<t_0-\varepsilon_k \right\}} \!\!\scp{\nu,\Phi}d \H^{N} = \int_{\psi_r(z_0) \setminus \mathrm{Crit}\pr{\Gamma} \cap \left\{ t<t_0-\varepsilon_k \right\}} \!\!
u(x,t) K (z_0;z) d \H^{N}, \end{equation*} where \begin{equation*} K (z_0; z) = \frac{\langle A(z) \nabla_x \Gamma(z_0;z), \nabla_x \Gamma(z_0;z) \rangle } {|\nabla_{(x,t)}\Gamma(z_0; z)|} \end{equation*} is the kernel defined in \eqref{e-kernels}. Note that $K$ is non-negative and, if we consider the function $u = 1$ and we let $k \to + \infty$, we find (by monotone convergence, since $K \ge 0$) \begin{equation*} \lim_{k \to + \infty} \int_{\psi_r(z_0) \setminus \mathrm{Crit}\pr{\Gamma} \cap \left\{ t<t_0-\varepsilon_k \right\}} \!\! K (z_0; z) d \H^{N} = \int_{\psi_r(z_0)\setminus \mathrm{Crit}\pr{\Gamma}} \!\!\! K (z_0;z) d \H^{N} < + \infty. \end{equation*} Thus, if $u$ is a classical solution to $\L u = 0$, by dominated convergence (the integrand being bounded by $\max_{\overline{\Omega_r(z_0)}} |u| \cdot K$) we obtain \begin{equation} \label{eq-div-3i} \lim_{k \to + \infty} \int_{\psi_r(z_0) \setminus \mathrm{Crit}\pr{\Gamma} \cap \left\{ t<t_0-\varepsilon_k \right\}} \!\!\scp{\nu,\Phi}d \H^{N} = \int_{\psi_r(z_0)\setminus \mathrm{Crit}\pr{\Gamma}} \!\!\! K (z_0;z) u(z) d \H^{N}. \end{equation} We recall that Dubovicki\v{\i}'s theorem implies that $\H^{N} \left(\psi_r(z_0) \cap \mathrm{Crit}\pr{\Gamma}\right) = 0$ for $\H^{1}$-almost every $r$, so that we can equivalently write \begin{equation} \label{eq-div-3} \lim_{k \to + \infty} \int_{\psi_r(z_0) \setminus \mathrm{Crit}\pr{\Gamma} \cap \left\{ t<t_0-\varepsilon_k \right\}} \!\!\scp{\nu,\Phi}d \H^{N} = \int_{\psi_r(z_0)} \!\!\! K (z_0;z) u(z) d \H^{N}. \end{equation} The proof of the first assertion of Theorem \ref{th-1} then follows by using \eqref{eq-div-1}, \eqref{eq-div-2} and \eqref{eq-div-3} in \eqref{eq-div-k}. The proof of the second assertion of Theorem \ref{th-1} is a direct consequence of the first one and of the coarea formula stated in Theorem \ref{th-co}. Indeed, fix a positive $r$ as above, write the first assertion with radius $\varrho \in ]0,r[$, multiply both sides by $\frac{N}{r^N}\varrho^{N-1}$ and integrate with respect to $\varrho$ over $]0,r[$. We find \begin{equation} \label{e-meanvalue-step1} \begin{split} \frac{N}{r^N} \int_0^r \varrho^{N-1} u(z_0) d \varrho = & \frac{N}{r^N} \int_0^r \varrho^{N-1} \bigg(\int_{\psi_\varrho(z_0)} K (z_0;z) u(z) \, d \H^{N} (x,t) \bigg) d \varrho\, \\ & + \frac{N}{r^N} \int_0^r \varrho^{N-1} \bigg( \int_{\Omega_\r(z_0)} f (z) \left( \tfrac{1}{\varrho^N} - \Gamma(z_0;z) \right) dz \bigg) d \varrho \\ & + \frac{N}{r^N} \int_0^r \frac{1}{\varrho} \bigg( \int_{\Omega_\r (z_0)} \left( \div \, b(z) - c(z) \right) u(z) \, dz \bigg) d \varrho. \end{split} \end{equation} The left hand side of the above equality equals $u(z_0)$, while the last two terms agree with the last two terms appearing in the statement of Theorem \ref{th-1}. In order to conclude the proof we only need to show that \begin{equation} \label{e-meanvalue-step2} \begin{split} \int_0^r \varrho^{N-1} \bigg(\int_{\left\{\Gamma(z_0;z) = \tfrac{1}{\varrho^N}\right\} } & K (z_0;z) u(z) \, d \H^{N} (z) \bigg) d \varrho \\ & = \frac{1}{N} \int_{\Omega_r (z_0)} M (z_0;z) u(z)\, dz. \end{split} \end{equation} With this aim, we substitute $y = \frac{1}{\varrho^N}$ in the left hand side of \eqref{e-meanvalue-step2} and we recall the definition of the kernel $K$.
We find \begin{equation} \label{e-meanvalue-step3} \begin{split} \int_0^r \varrho^{N-1} & \bigg( \int_{\left\{\Gamma(z_0;z) = \tfrac{1}{\varrho^N}\right\} } \frac{\langle A(z) \nabla_x \Gamma(z_0;z), \nabla_x \Gamma(z_0;z) \rangle } {|\nabla_{(x,t)}\Gamma(z_0;z)|} u(z) \, d \H^{N} (z) \bigg) d \varrho \\ & = \frac{1}{N} \int_{\frac{1}{r^N}}^{+ \infty} \frac{1}{y^{2}} \bigg(\int_{\left\{\Gamma(z_0;z) = y\right\} } \frac{\langle A(z) \nabla_x \Gamma(z_0; z), \nabla_x \Gamma(z_0;z) \rangle } {|\nabla_{(x,t)}\Gamma(z_0;z)|}u(z) \, d \H^{N} (z) \bigg) d y \\ & =\frac{1}{N} \int_{\frac{1}{r^N}}^{+ \infty} \bigg(\int_{\left\{\Gamma(z_0;z) = y\right\} } \frac{\langle A(z) \nabla_x \Gamma(z_0;z), \nabla_x \Gamma(z_0;z) \rangle } {\Gamma^2(z_0;z) {|\nabla_{(x,t)}\Gamma(z_0;z)|}} u(z) \, d \H^{N} (z) \bigg) d y. \end{split} \end{equation} We conclude the proof of \eqref{e-meanvalue-step2} by applying the coarea formula stated in Theorem \ref{th-co}. \end{proof} \begin{proof} {\sc of Proposition \ref{prop-smp}.} We prove our claim under the additional assumption $\div \, b \ge 0$. At the end of the proof we show that this assumption is not restrictive. We first note that, as a direct consequence of our assumption $c=0$, we have that $\L \, 1 = 0$, then Theorem \ref{th-1} yields \begin{equation*} \frac{1}{\varrho^N} \int_{\Omega_\varrho(z_1)} M (z_1; z) \, dz + \frac{N}{\r^N} \int_0^{\r} \Big(\frac{1}{s} \int_{\Omega_s (z_1)} \div \, b(z) \, dz \Big) d s = 1 \end{equation*} for every $z_1 \in \Omega$ and $\r >0$ such that $\overline{\Omega_\varrho(z_1)} \subset \Omega$. We claim that, if $u(z_1) = \max_\Omega u$, then \begin{equation} \label{eq-claim-smp} u(z) = u(z_1) \qquad \text{for every} \quad z \in \overline{\Omega_\varrho(z_1)}. \end{equation} By using again Theorem \ref{th-1} and the above identity we obtain \begin{align*} 0 = & \frac{1}{\varrho^N} \int_{\Omega_\varrho(z_1)} M (z_1; z) \big(u(z)- u(z_1)\big) \, dz \\ & + \frac{N}{\r^N} \int_0^{\r} \Big(\frac{1}{s} \int_{\Omega_s (z_1)} \div \, b(z) \big(u(z)- u(z_1)\big) \, dz \Big) d s \\ & + \frac{N}{\r^N} \int_0^{\r} \left(s^{N-1} \int_{\Omega_s (z_1)} f (z) \left( \tfrac{1}{s^N} - \Gamma(z_1;z) \right) dz \right) d s \le 0, \end{align*} since $f \ge 0$, $\div \, b \ge 0$ and $u(z) \le u(z_1)$, being $u(z_1) = \max_{\Omega} u$. We have also used the fact that $M(z_1;z) \ge 0$ and $\Gamma(z_1;z) \ge \tfrac{1}{s^N}$ for every $z \in \Omega_s(z_1)$. Hence, $M (z_1; z) \big(u(z)- u(z_1)\big)=0$ for $\H^{N+1}$ almost every $z \in \Omega_\r(z_1)$. As already noticed, Dubovicki\v{\i}'s theorem implies that $\H^{N} \left( \psi_s (z_1) \cap \mathrm{Crit}\pr{\Gamma} \right) = 0$, for almost every $s \in ]0,\r]$, then $M (z_1; z) \ne 0$ for $\H^{N+1}$ almost every $z \in \Omega_\r(z_1)$. As a consequence $u(z)= u(z_1)$ for $\H^{N+1}$ almost every $z \in \Omega_\r(z_1)$, and the claim \eqref{eq-claim-smp} follows from the continuity of $u$. We are now in a position to conclude the proof of Proposition \ref{prop-smp}. Let $z$ be a point of $\AS ( \O )$, and let $\gamma: [0,T] \to \O$ be an $\L$--admissible path such that $\g(0)= z_0$ and $\g(T) = z$. We will prove that $u(\gamma(t)) = u(z_0)$ for every $t \in [0,T]$. Let \begin{equation*} I := \big\{ t \in [0,T] \mid u(\gamma(s)) = u(z_0) \ \text{for every} \ s \in[0,t] \big\}, \qquad \overline t := \sup I. \end{equation*} Clearly, $I \ne \emptyset$ as $0 \in I$. Moreover $I$ is closed, because of the continuity of $u$ and $\gamma$, so that $\overline t \in I$.
We now prove by contradiction that $\overline t = T$. Suppose that $\overline t < T$. Let $z_1 := \gamma(\overline t)$ and note that $z_1 \in \Omega$, $u(z_1) = \max_\Omega u$. We aim to show that there exist positive constants $r_1$ and $s_1$ such that $\overline{\Omega_{r_1}(z_1)} \subset \O$ and \begin{equation} \label{eq-claim2-smp} \gamma(\overline t + s) \in \Omega_{r_1}(z_1)\quad \text{for every} \quad s \in [0, s_1[. \end{equation} As a consequence of \eqref{eq-claim-smp} we obtain $u(\gamma(\overline t + s)) = u(z_1) = u(z_0)$ for every $s \in [0, s_1[$, and this contradicts the fact that $\overline t = \sup I$. The proof of \eqref{eq-claim2-smp} is a consequence of Lemma \ref{lem-localestimate*}. It is not restrictive to assume that $r_1 \le r^*$, then it is sufficient to show that there exists a positive $s_1$ such that \begin{equation} \label{eq-claim4-smp} \gamma(\overline t + s) \in \Omega^*_{r_1}(z_1) \quad \text{for every} \quad s \in [0, s_1[. \end{equation} Recall the definition of $\gamma(\overline t + s) = (x(\overline t + s), t(\overline t + s))$. We have $\gamma(\overline t) = z_1 = (x_1, t_1)$, $t(\overline t + s) = t_1-s$ and, for every positive $s$, \begin{equation*} \begin{split} |x(s + \overline t) - x_1 | & = \left| \int_0^s \dot x(\overline t + \sigma) d \sigma \right| \le \int_0^s \left| \dot x(\overline t + \sigma) \right| d \sigma \\ & \le \left( \int_0^s \left| \dot x(\overline t + \sigma) \right|^2 d\sigma \right)^{1/2} s^{1/2} \le \|\dot x \|_{L^2([0,T])} \sqrt{s}, \end{split} \end{equation*} then \begin{equation*} \langle A^{-1}(z_1 )(x(\overline t + s)-x_1), x(\overline t + s)-x_1 \rangle \le s \cdot \| A^{-1}(z_1)\| \cdot \|\dot x \|_{L^2([0,T])}^2. \end{equation*} By using the above inequality in \eqref{eq-Omega_r*-exp} we see that there exists a positive constant $s_1$ such that \eqref{eq-claim4-smp} holds. This proves \eqref{eq-claim2-smp}, and then $u(z) = u(z_0)$ for every $z \in \AS ( \O )$. By the continuity of $u$ we conclude that $u(z) = u(z_0)$ for every $z \in \overline{\AS ( \O )}$. Finally, since $u$ is constant in $\overline{\AS ( \O )}$ and $c= 0$, we conclude that $\L u = 0$. We finally prove that the additional assumption $\div \, b \ge 0$ is not restrictive. Let $k$ be any given constant such that $k>\Lambda$, where $\Lambda$ is the quantity appearing in \eqref{e-up}, recall that $z_0 = (x_0,t_0)$, and define the function \begin{equation} v(y,t):= u\left( e^{-k(t-t_0)}y, t\right), \qquad (y,t) \in \widehat \Omega, \end{equation} where $(y,t) \in \widehat \Omega$ if, and only if, $\left( e^{-k(t-t_0)}y, t\right) \in \Omega$. Then $v$ is a solution to \begin{equation*} \widehat \L v (y,t) := \div \left( \widehat A (y,t) \nabla_y v(y,t) \right) + \langle \widehat b(y,t) + k y, \nabla_y v(y,t)\rangle - \, \tfrac{\partial v}{\partial t}(y,t) = f\left( e^{-k(t-t_0)}y, t\right), \end{equation*} where $\widehat A (y,t) = \left( \widehat a_{ij} (y,t) \right)_{i,j=1, \dots, N}$, $\widehat b(y,t) = \left( \widehat b_1(y,t), \dots, \widehat b_N(y,t) \right)$, are defined as $\widehat a_{ij} (y,t) = e^{-2k(t-t_0)} a_{ij} \left( e^{-k(t-t_0)}y, t\right), \widehat b_{j} (y,t) = e^{-k(t-t_0)} b_{j} \left( e^{-k(t-t_0)}y, t\right)$, for $i,j=1, \dots, N$. Note that from the assumption \eqref{e-up} it follows that $|\div \, b| \le N \Lambda$, so that \begin{equation*} \div \left( \widehat b(y,t) + k y \right) \ge N \left(k - \Lambda e^{-2k(t-t_0)} \right).
\end{equation*} In particular, there exists a positive $\delta$, depending on $k$ and $\Lambda$, such that the right hand side of the above expression is non-negative for $t \ge t_0 - \delta$. Then, if we set $\widehat t_0 := t_0- \delta$, we have \begin{equation*} \div \left( \widehat b(y,t) + k y \right) \ge 0 \qquad \text{for every} \quad (y,t) \in \widehat \Omega \cap \big\{t \ge \widehat t_0 \big\}. \end{equation*} We also note that $\widehat \L$ satisfies the same assumptions as $\L$, with a possibly different constant $\widehat \Lambda$, in every set of the form $\widehat \Omega \cap \big( \mathbb{R}^N \times I \big)$, where $I$ is any bounded open interval of $\mathbb{R}$. We then apply the above argument to prove that, if $v$ reaches its maximum at some point $(y_0,t_0) \in \widehat \Omega$, then it is constant in its propagation set in $\widehat \Omega \cap \big\{t \ge \widehat t_0 \big\}$. Note that $u$ reaches its maximum at some point $(x,t)$ if, and only if, $v$ reaches its maximum at $\left( e^{k(t-t_0)}x, t\right)$. Moreover, $(x(s), t-s)$ is an admissible curve for $\L$ if, and only if, $(e^{-k(t-s-t_0)}x(s), t-s)$ is an admissible curve for $\widehat \L$. We conclude that \begin{equation*} u(z) = u(z_0) \qquad \text{for every} \ z \in \overline {\AS ( \O )} \cap \big\{t \ge \widehat t_0 \big\}. \end{equation*} We then repeat the above argument: assuming that $u$ reaches its maximum at some point $\left( \widehat x_0, \widehat t_0\right)$, we define a new function $\widehat v(y,t):= u\left( e^{-k(t-\widehat t_0)}y, t\right)$ and we find a new constant $\widehat t_1 := t_0 - 2 \delta$ such that \begin{equation*} u(z) = u(z_0) \qquad \text{for every} \ z \in \overline {\AS ( \O )} \cap \big\{t \ge \widehat t_1 \big\}. \end{equation*} As we can use the same constant $\delta$ at every iteration, we conclude that the above identity holds for every $z \in \overline {\AS ( \O )}$. \end{proof} \begin{proof} {\sc of Proposition \ref{prop-smp1}.} Let $k$ be a constant such that $\div \, b - c -k \ge 0$ and note that the function $v(x,t) := e^{k t}u(x,t)$ is a non-negative solution to the equation \begin{equation*} \L v(x,t) + k v(x,t) = e^{k t}f(x,t), \qquad (x,t) \in \Omega. \end{equation*} Then Theorem \ref{th-1} yields \begin{align*} 0 = & \frac{1}{\varrho^N} \int_{\Omega_\varrho(z_1)} M (z_1; z) v(z) \, dz \\ & + \frac{N}{\r^N} \int_0^{\r} \Big(\frac{1}{s} \int_{\Omega_s (z_1)} \big(\div \, b(z) - c(z) - k \big) v(z) \, dz \Big) d s \\ & + \frac{N}{\r^N} \int_0^{\r} \left(s^{N-1} \int_{\Omega_s (z_1)} e^{k t} f (z) \left( \tfrac{1}{s^N} - \Gamma(z_1;z) \right) dz \right) d s \le 0, \end{align*} for every $z_1 \in \Omega$ and $\r >0$ such that $\overline{\Omega_\varrho(z_1)} \subset \Omega$. Here we have used the facts that $f \ge 0$ and $\div \, b - c -k \ge 0$. By following the same argument used in the proof of Proposition \ref{prop-smp} we find that $v \ge 0$ in $\overline {\AS ( \O )}$. This concludes the proof of Proposition \ref{prop-smp1}. \end{proof} \begin{proof} {\sc of Proposition \ref{prop-2}.} Let $m$ be a positive integer, and let $u$ be a solution to $\L u = f$ in $\Omega \subset \mathbb{R}^{N+1}$.
As said in the Introduction, we set \begin{equation*} \widetilde u(x,y,t) := u(x,t), \qquad \widetilde f(x,y,t) := f(x,t), \end{equation*} for every $(x,y,t) \in \mathbb{R}^{N}\times \mathbb{R}^{m} \times \mathbb{R}$ such that $(x,t) \in \Omega$, and we note that \begin{equation*} \widetilde \L \ \widetilde u(x,y,t) = \widetilde f(x,y,t) \qquad \widetilde \L := \L + \sum_{j=1}^{m}\tfrac{\partial^2}{\partial y_j^2}. \end{equation*} Moreover, the function \begin{equation} \label{eq-tildegamma} \widetilde \Gamma(x_0, y_0, t_0 ;x,y,t) := \Gamma (x_0, t_0 ;x,t) \cdot \frac{1}{(4 \pi (t_0-t))^{m/2}}\exp \left( \frac{-|y_0-y|^2}{4(t_0-t)} \right) \end{equation} is a fundamental solution of $\widetilde \L$. We then use $\widetilde \Gamma$ to represent the solution $u$ in accordance with Theorem \ref{th-1} as follows \begin{equation*} \begin{split} u(z_0) = & \, \widetilde u(x_0,y_0,t_0) = \frac{1}{r^{N+m}} \int_{\widetilde\Omega_r(x_0,y_0,t_0)} \! \! \! \! \widetilde M(x_0,y_0,t_0; x,y,t) u(x,t) \, dx \, dy\, dt \\ & +\frac{N+m}{r^{N+m}} \int_0^{r} \left(\r^{N+m-1} \int_{\widetilde\Omega_\r(x_0,y_0,t_0)} \! \! \! \! f (x,t) \left( \tfrac{1}{\r^{N+m}} - \widetilde \Gamma(x_0,y_0,t_0;x,y,t) \right) \, dx \, dy\, dt \right) d \r \\ & + \frac{N+m}{r^{N+m}} \int_0^{r} \left(\frac{1}{\r} \int_{\widetilde\Omega_\r(x_0,y_0,t_0)} \! \! \! \! \left( \div \, b(x,t) - c(x,t) \right) u(x,t) \, dx \, dy\, dt \right) d \r. \end{split} \end{equation*} where $\widetilde\Omega_r(x_0,y_0,t_0)$ is the parabolic ball relevant to $\widetilde \Gamma$ and \begin{equation*} \widetilde M(x_0,y_0,t_0; x,y,t) = M(x_0,t_0;x,t) + \frac{|y_0-y|^2}{4(t_0-t)^2}. \end{equation*} The proof is accomplished by integrating the above identity with respect to the variable $y$. \end{proof} \setcounter{equation}{0} \section{Proof of the Harnack inequalities}\label{sectionHarnack} In this Section we use the mean value formula stated in Proposition \ref{prop-2} to give a simple proof of the parabolic Harnack inequality. \begin{proof} {\sc of Proposition \ref{prop-Harnack}.} We first prove our claim under the additional assumption that $\div \, b - c = 0$. This assumption simplifies the proof as in this case we only need to use the first integral in the representation formula given in Proposition \ref{prop-2}. It will be removed at the end of the proof. Let $m \in \mathbb{N}$ with $m > 2$, let $\Omega$ be an open subset of $\mathbb{R}^{N+1}, z_0 \in \Omega$ and $r>0$ such that $\Omega_{4r}^{(m)}(z_0) \subset \Omega$. We claim that there exist four positive constants $r_0, \vartheta, M^+, m^-$ such that the following assertions hold for every $r \in ]0,r_0]$. \begin{description} \item[{\it i)}] $K^{(m)}_{r}(z_0) \ne \emptyset$; \item[{\it ii)}] $M_{\vartheta r}^{(m)} (z; \zeta) \le M^+$ for every $\zeta \in \Omega_{\vartheta r}^{(m)}(z)$; \item[{\it iii)}] $\Omega_{\vartheta r}^{(m)}(z) \subset \Omega_{4r}^{(m)}(z_0) \cap \big\{\tau \le t_0 - \frac{r^2}{4 \pi \lambda^{N/(N+m)}}\big\}$ for every $z \in K^{(m)}_{r}(z_0)$; \item[{\it iv)}] $M_{5 r}^{(m)} (z_0; \zeta) \ge m^-$ for every $\zeta = (\xi, \tau) \in \Omega_{4 r}^{(m)}(z_0)$ such that $\tau \le t_0 - \frac{r^2}{4 \pi \lambda^{N/(N+m)}}$. \end{description} By using Proposition \ref{prop-2} and the above claim it follows that, for every $z \in K^{(m)}_{r}(z_0)$, it holds \begin{equation} \label{e-core-h} \begin{split} u(z) = & \frac{1}{(\vartheta r)^{N+m}} \int_{\Omega^{(m)}_{\vartheta r}(z)} \! \! \! \! \! \! 
M_{\vartheta r}^{(m)} (z; \zeta) u(\zeta) \, d\zeta \\ & (\text{by {\it ii}}) \le \frac{M^+}{(\vartheta r)^{N+m}} \int_{\Omega^{(m)}_{\vartheta r}(z)} \!\!\! u(\zeta) \, d\zeta \\ & (\text{by {\it iii}}) \le \frac{M^+}{(\vartheta r)^{N+m}} \int_{\Omega_{4r}^{(m)}(z_0) \cap \big\{\tau \le t_0 - \frac{r^2}{4 \pi \lambda^{N/(N+m)}}\big\}} \!\!\! u(\zeta) \, d\zeta \\ & (\text{by {\it iv}}) \le \frac{M^+}{m^- (\vartheta r)^{N+m}} \int_{\Omega_{5r}^{(m)}(z_0)} \! \! \! \! \! \! M_{5 r}^{(m)} (z_0; \zeta)u(\zeta) \, d\zeta = \frac{5^{N+m}M^+}{\vartheta^{N+m} m^-} u(z_0). \end{split} \end{equation} This proves Proposition \ref{prop-Harnack} with $C_K := \frac{5^{N+m}M^+}{ \vartheta^{N+m}m^-}$. We are left with the proof of our claims. We mainly rely on Lemma \ref{lem-localestimate*}, applied to the function $\widetilde \Gamma$ introduced in \eqref{eq-tildegamma}. In the sequel we let $r^*$ be the constant appearing in Lemma \ref{lem-localestimate*} and relative to $\widetilde \Gamma$, and in accordance with \eqref{eq-Omega_r*}, \begin{equation} \label{eq-Omega_rm*} \Omega_r^{(m)*}(z_0) := \bigg\{ z \in \mathbb{R}^{N+1} \mid (4 \pi (t_0-t))^{-m/2}Z^*(z;z_0) \ge \frac{2}{r^{N+m}} \bigg\}. \end{equation} Moreover, we choose $r_0 := r^*/2$. \begin{center} \begin{tikzpicture} \clip (-.52,7.02) rectangle (6.52,1.78); \path[draw,thick] (-.5,7) rectangle (6.5,1.8); \begin{axis}[axis y line=none, axis x line=none, xtick=\empty,ytick=\empty, ymin=-1.1, ymax=1.1, xmin=-.2,xmax=1.8, samples=101, rotate= -90] \addplot [black,line width=1pt, domain=-.01:.01] {sqrt(.0001 - x * x)} node[above] {$z_0$}; \addplot [black,line width=1pt, domain=-.01:.01] {-sqrt(.0001 - x * x)}; \addplot [lblue,line width=.7pt,domain=.001:.051] {sqrt(- 3 * x * ln(9*x))}; \addplot [lblue,line width=.7pt,domain=.001:.051] {- sqrt(- 3 * x * ln(9*x))}; \addplot [blue,line width=.7pt, domain=.001:1] {sqrt(- 2 * x * ln(x))}; \addplot [blue,line width=.7pt,domain=.001:1] {- sqrt(- 2 * x * ln(x))} node[below=-10pt] { \hskip35mm $\Omega^{(m)}_{4r}(z_0)$}; \addplot [ddddblue,line width=.7pt,domain=.051:.1111] {sqrt(- 3 * x * ln(9*x))}; \addplot [ddddblue,line width=.7pt,domain=.051:.1111] {- sqrt(- 3 * x * ln(9*x))}; \addplot [red,line width=.7pt,domain=.0001:.0625,below=15pt,right=9pt] {sqrt(- 4 * x * ln(16*x))}; \addplot [red,line width=.7pt,domain=.0001:.0625,below=15pt,right=9pt] {- sqrt(- 4 * x * ln(16*x))}; \addplot [black,line width=1pt, domain=-.01:.01,below=15pt,right=9pt] {sqrt(.0001 - x * x)} node[below=-3pt] {$z$}; \addplot [black,line width=1pt, domain=-.01:.01,below=15pt,right=9pt] {-sqrt(.0001 - x * x)} node[below=15pt,right=10pt] {\hskip-20mm \color{red} $\Omega^{(m)}_{\vartheta r}(z)$}; \end{axis} \draw [ddddblue,line width=.7pt] (1.92,6) -- node [below=5pt] { \hskip25mm $K^{(m)}_r(z_0)$} (3.75,6); \end{tikzpicture} {\sc \qquad Fig.4} - The inclusion \emph{(iii)}. \end{center} \noindent {\it Proof of i)} \ By the definition \eqref{e-Kr} of $K^{(m)}_{r}(z_0)$ we only need to show that there exists at least one point $(x,t) \in \overline{\Omega^{(m)}_{r}(z_0)}$ with $t \le t_0 - \frac{1}{4 \pi \lambda^{N/(N+m)}} \,r^2$. From Lemma \ref{lem-localestimate*} it follows that $\Omega_r^{(m)*}(z_0) \subset \Omega^{(m)}_r(z_0)$, hence it suffices to show that the point $\big(x_0, t_0 - \frac{1}{4 \pi \lambda^{N/(N+m)}} \,r^2\big)$ belongs to $\overline{\Omega^{(m)}_{r}(z_0)}$. In view of \eqref{eq-fundsol-Z*}, this is equivalent to $\det A(z_0)\ge \lambda^N/4$, which directly follows from the parabolicity assumption \eqref{e-up}, since the eigenvalues of $A(z_0)$ are bounded from below by $\lambda$.
\noindent {\it Proof of ii)} \ We first note that \eqref{eq-fundsol+} and the definition of $N_{\vartheta r}$ directly give \begin{equation} \label{eq-boundN} N_{\vartheta r} (z;\z) \le 2 \sqrt{t-\tau}\sqrt{\log\left( \tfrac{C^+ (\vartheta r)^{N+m}}{ (\Lambda^+)^{N/2}(4 \pi (t-\tau))^{(N+m)/2}} \right)} = 2 \sqrt{t-\tau} \sqrt{C_1 + \tfrac{N+m}{2}\log\left( \tfrac{\vartheta^2 \, r^2 }{t-\tau} \right)}, \end{equation} where $C_1$ is a positive constant that only depends on $\L$. Moreover, Lemma \ref{lem-localestimategradient} implies that there exists a positive constant $M_0$, only depending on the operator $\L$, such that \begin{equation*} M(z;\z) \le M_0 \left( \frac{|x-\xi|^2}{(t-\tau)^2} + 1 \right), \qquad \text{for every} \ \zeta \in \Omega_{\vartheta r}^{(m)}(z). \end{equation*} In addition, Lemma \ref{lem-localestimate*} implies that there exists another positive constant $M_1$ such that \begin{equation} \label{eq-boundM} M(z;\z) \le M_1 \left( \frac{1}{t-\tau} \log\left( \frac{\vartheta^2 \, r^2}{t-\tau} \right) + 1 \right), \qquad \text{for every} \ \zeta \in \Omega_{\vartheta r}^{(m)}(z). \end{equation} We point out that the constants $C_1$ and $M_1$ depend neither on the choice of $\vartheta \in ]0,1[$, which will be specified in the proof of point {\it iii)} below, nor on the choice of $r \in ]0,r_0[$. By using \eqref{eq-boundN} and \eqref{eq-boundM} we conclude that there exists a positive constant $M_2$, which only depends on $\L$ and on $m$, such that \begin{equation*} M_{\vartheta r}^{(m)} (z; \zeta) \le M_2 (t-\tau)^{m/2} \left( 1 + \left| \log\left( \tfrac{\vartheta^2 \, r^2 }{t-\tau} \right) \right| \right)^{m/2} \left( 1 + \tfrac{1}{t-\tau} \left| \log\left( \tfrac{\vartheta^2 \, r^2 }{t-\tau} \right) \right| \right) \end{equation*} for every $\zeta \in \Omega_{\vartheta r}^{(m)}(z)$. The right hand side of the above inequality is bounded whenever $m>2$, uniformly with respect to $r \in ]0,r_0[$ and $\vartheta \in ]0,1[$. This concludes the proof of {\it ii)}. \noindent {\it Proof of iii)} \ We prove the existence of a constant $\vartheta \in ]0,1[$ as claimed by using a compactness argument and the parabolic scaling. We first observe that Lemma \ref{lem-localestimate*} implies that \begin{equation*} K_r^{(m)}(z_0) \subset \overline{\Omega^{(m)*}_{3r}(z_0)} \cap \Big\{ t \le t_0 - \tfrac{1}{4 \pi \lambda^{N/(N+m)}} \,r^2 \Big\}, \end{equation*} which is a compact subset of $\Omega^{(m)*}_{4r}(z_0)$. We now show that there exists $\vartheta \in ]0,1[$ such that \begin{equation} \label{eq-claim*} \Omega_{3 \vartheta r}^{(m)*}(z) \subset \Omega_{4r}^{(m)*}(z_0) \cap \Big\{\tau \le t_0 - \tfrac{r^2}{4 \pi \lambda^{N/(N+m)}}\Big\} \end{equation} for every $z \in \overline{\Omega^{(m)*}_{3r}(z_0)} \cap \Big\{ t \le t_0 - \frac{1}{4 \pi \lambda^{N/(N+m)}} \,r^2 \Big\}$. Our claim {\it iii)} will follow from \eqref{eq-claim*} and from Lemma \ref{lem-localestimate*}. We next prove \eqref{eq-claim*} by using the parabolic scaling. We note that \begin{equation} \label{eq-parabolicscaling} (x_0 + r \xi,t_0 + r^2 \tau) \in {\Omega^{(m)*}_{k r}(z_0)} \quad \Longleftrightarrow \quad (x_0+\xi,t_0 + \tau) \in {\Omega^{(m)*}_{k}(z_0)}, \end{equation} for every positive $k$. We will need to use $k = 3, 4$ and $\vartheta$. We next show that \eqref{eq-claim*} holds for $r=1$. The result for every $r \in ]0,r_0[$ will follow from \eqref{eq-parabolicscaling}.
Let $\delta(z_0)$ be the distance of the compact set $\overline{\Omega_{3}^{(m)*}(z_0)} \cap \big\{t \le t_0 - \frac{1}{4 \pi \lambda^{N/(N+m)}}\big\}$ from the boundary of $\Omega_{4}^{(m)*}(z_0)$. We have that $\delta$ is a strictly positive function which depends continuously on $z_0$ through the coefficients of the matrix $A(z_0)$. Moreover, since the condition \eqref{e-up} is satisfied, there exists a positive constant $\delta_0$, only depending on $\lambda, \Lambda$ and $N$, such that \begin{equation*} \delta(z_0) \ge \delta_0 \quad \text{for every} \ z_0 \in \Omega. \end{equation*} On the other hand, the diameter of the set $\Omega^{(m)*}_{1}(z)$ is bounded by a constant that does not depend on $z$. Then, by \eqref{eq-parabolicscaling}, it is possible to find $\vartheta \in ]0,1[$ such that the diameter of $\Omega^{(m)*}_{\vartheta}(z)$ is not greater than $\delta_0$. This concludes the proof of \eqref{eq-claim*} in the case $r=1$. As said above, the case $r \in ]0,r_0[$ follows from \eqref{eq-parabolicscaling}. This concludes the proof of {\it iii)}. \noindent {\it Proof of iv)} \ From \eqref{e-Omegam} it directly follows that \begin{equation*} M_{5r}^{(m)} (z_0;\z) \ge \frac{m \, \omega_m }{m+2} \cdot \frac {N_{5r}^{m+2}(z_0;\z)}{4(t_0-\t)^2} \ge \frac{m \, \omega_m}{m+2} (2(t_0-\t))^{m-2} \left( (N+m) \log(5/4)\right)^{(m+2)/2}. \end{equation*} The last claim then follows by choosing \begin{equation*} m^-:= \lambda^{- N(m-2)/(N+m)}\frac{m \, \omega_m}{m+2} \frac{r^{2m-4}}{(2 \pi)^{m-2} } \left( (N+m) \log(5/4)\right)^{(m+2)/2}. \end{equation*} We now remove the assumption $\div \, b - c = 0$. We mainly rely on the steps in the display \eqref{e-core-h} and we point out the needed changes for this more difficult situation. With this aim, we recall that $| \div \, b(z) - c(z)| \le k := (N+1) \Lambda $ because of \eqref{e-up}. We next introduce two auxiliary functions \begin{equation*} \widehat u(x,t) := e^{k(t-t_0)} u(x,t), \qquad \widetilde u(x,t) := e^{-k(t-t_0)} u(x,t). \end{equation*} Note that, as $\L u = 0$, we have that \begin{equation*} \widehat \L \, \widehat u : = \L \widehat u + k \widehat u = 0, \qquad \widetilde \L \, \widetilde u : = \L \widetilde u - k \widetilde u = 0. \end{equation*} Note that $\widehat \L, \widetilde \L$ satisfy the condition \eqref{e-up} with $\Lambda$ replaced by $k$. In particular, the statements \emph{i)}-\emph{iv)} hold for $\L, \widehat \L$ and $\widetilde \L$ with the same constants $r_0, \vartheta, M^+, m^-$. We denote by $\widehat M^{(m)}, \widetilde M^{(m)}$ the kernels relative to $\widehat \L, \widetilde \L$, respectively, and $\widehat \Omega^{(m)}, \widetilde \Omega^{(m)}$ the superlevel sets we use in the representation formulas appearing in \eqref{e-core-h}. As we did before, we let $r^*$ be the constant appearing in Lemma \ref{lem-localestimate*} and we choose $r_0 := r^*/2$. As a direct consequence of the definition of $\widehat u$ and $\widetilde u$, there exist two positive constants $\widehat c$ and $\widetilde c$ such that \begin{equation} \label{e-bounds-u} \widehat c u(z) \le \widehat u(z) \le u(z), \qquad u(z) \le \widetilde u(z) \le \widetilde c u(z), \end{equation} for every $z \in \Omega_{5r}^{(m)}(z_0)$. We are now in a position to conclude the proof of Proposition \ref{prop-Harnack}. Let us consider the first two lines of \eqref{e-core-h}.
Since $\widehat u$ is a solution to $\widehat \L \, \widehat u = 0$, for every $z \in K^{(m)}_{r}(z_0)$, it holds \begin{equation*} \widehat u(z) \le \frac{1}{(\vartheta r)^{N+m}} \int_{\widehat \Omega^{(m)}_{\vartheta r}(z)} \! \! \! \! \! \! \widehat M_{\vartheta r}^{(m)} (z; \zeta) \widehat u(\zeta) \, d\zeta \le \frac{M^+}{(\vartheta r)^{N+m}} \int_{\widehat \Omega^{(m)}_{\vartheta r}(z)} \!\!\! \widehat u(\zeta) \, d\zeta. \end{equation*} The first inequality follows from the fact that $\div \, b(\zeta) - c(\zeta) - k \le 0$ for every $\zeta$. From \eqref{e-bounds-u} it then follows that \begin{equation*} u(z) \le \frac{M^+}{\widehat c \, (\vartheta r)^{N+m}} \int_{\widehat \Omega^{(m)}_{\vartheta r}(z)} \!\!\! u(\zeta) \, d\zeta. \end{equation*} Continuing along the next lines of \eqref{e-core-h}, we note that {\it iii)} also holds in this form: $\widehat \Omega_{\vartheta r}^{(m)}(z) \subset \widetilde \Omega_{4r}^{(m)}(z_0) \cap \big\{\tau \le t_0 - \frac{r^2}{4 \pi \lambda^{N/(N+m)}}\big\}$ for every $z \in K^{(m)}_{r}(z_0)$, so that \begin{equation*} u(z) \le \frac{M^+}{\widehat c \, (\vartheta r)^{N+m}} \int_{\widetilde \Omega_{4r}^{(m)}(z_0) \cap \big\{\tau \le t_0 - \frac{r^2}{4 \pi \lambda^{N/(N+m)}}\big\}} \!\! u(\zeta) \, d\zeta. \end{equation*} On the other hand, using the fact that $\widetilde \L \, \widetilde u = 0$, and $\div \, b(\zeta) - c(\zeta) + k \ge 0$, we find \begin{equation*} \frac{m^-}{(5 r)^{N+m}} \int_{\widetilde \Omega_{4r}^{(m)}(z_0) \cap \big\{\tau \le t_0 - \frac{r^2}{4 \pi \lambda^{N/(N+m)}}\big\}} \!\!\! \widetilde u(\zeta) \, d\zeta \le \frac{1}{(5 r)^{N+m}} \int_{\widetilde \Omega_{5r}^{(m)}(z_0)} \! \! \! \! \! \! \widetilde M_{5 r}^{(m)} (z_0; \zeta) \widetilde u(\zeta) \, d\zeta \le \widetilde u(z_0). \end{equation*} Thus, recalling that $u \le \widetilde u$ and $u (z_0) = \widetilde u(z_0)$, we conclude that \begin{equation*} u(z) \le \frac{5^{N+m} M^+}{\widehat c \, \vartheta^{N+m} m^-} u(z_0), \end{equation*} for every $z \in K^{(m)}_{r}(z_0)$. This concludes the proof of Proposition \ref{prop-Harnack}. \end{proof} As a simple consequence of Proposition \ref{prop-Harnack} we obtain the following result. \begin{corollary} \label{cor-Harnack-inv} There exist four positive constants $r_1, \kappa_1, \vartheta_1$ and $C_D$, with $\kappa_1, \vartheta_1 < 1$, such that the following inequality holds. For every $z_0 \in \Omega$ and for every positive $r$ such that $r \le r_1$ and ${\Q_{r}(z_0)} \subset \Omega$ we have that \begin{equation} \label{e-HD} \sup_{D_{r}(z_0)} u \le C_D u(z_0) \end{equation} for every $u \ge 0$ solution to $\L u = 0$ in $\Omega$. Here \begin{equation*} D_{r}(z_0) := B_{\vartheta_1 r}(x_0) \times \{t_0 - \kappa_1 r^2\}. 
\end{equation*} \end{corollary} \begin{center} \begin{tikzpicture} \clip(-.52,6.82) rectangle (7.02,1.48); \path[draw,thick] (-.5,6.8) rectangle (7,1.5); \begin{axis}[axis y line=none, axis x line=none, xtick=\empty,ytick=\empty, ymin=-1.1, ymax=1.1, xmin=-.2,xmax=1.8, samples=101, rotate= -90] \addplot [black,line width=.7pt, domain=-.01:.01] {sqrt(.0001 - x * x)} node[above] {$z_0$}; \addplot [black,line width=.7pt, domain=-.01:.01] {-sqrt(.0001 - x * x)}; \addplot [ddblue,line width=.7pt, domain=.001:1] {sqrt(- 2 * x * ln(x))}; \addplot [ddblue,line width=.7pt,domain=.001:1] {- sqrt(- 2 * x * ln(x))} node[below] { \hskip30mm {\color{dddblue} $\Omega_r(z_0)$}}; \addplot [bblue,line width=.7pt,domain=.001:.1111] {sqrt(- 3 * x * ln(9*x))}; \addplot [bblue,line width=.7pt,domain=.001:.1111] {- sqrt(- 3 * x * ln(9*x))}; \end{axis} \draw [line width=.6pt] (0,6.165) rectangle node [above=1.5cm,right=2.8cm] {$Q_r(z_0)$} (5.65,2); \draw [red,line width=1.2pt] (2.1,5.9) -- node [below=5pt] { \hskip20mm $D_r(z_0)$} (3.6,5.9); \end{tikzpicture} {\sc \qquad Fig.5} - The set $D_r(z_0)$. \end{center} The above assertion follows from the fact that there exists a positive constant $\delta_1$ such that $\Omega_{r}^{(m)}(z_0) \subset \Q_{\delta_1 r}(z_0)$ for every $r \in ]0, r_0[$ and that $D_{r}(z_0) \subset K^{(m)}_{r}(z_0)$, for some positive $\kappa_1, \vartheta_1$. Note that Corollary \ref{cor-Harnack-inv} and Theorem \ref{th-Harnack-inv} differ in that, unlike the cylinders $\Q^+_{r}(z_0)$ and $\Q^-_{r}(z_0)$, the set $D_{r}(z_0)$ is not arbitrary. We next prove Theorem \ref{th-Harnack-inv} by using iteratively the Harnack inequality proved in Corollary \ref{cor-Harnack-inv}. \begin{proof} {\sc of Theorem \ref{th-Harnack-inv}.} As a first step we note that, up to the change of variable $v(x,t) := u(x_0 + r x, t_0 +r^2 t)$, it is not restrictive to assume that $z_0 = 0$ and $r = 1$. Indeed, the function $v$ is a solution to an equation $\widehat \L v = 0$, where the coefficients of the operator $\widehat \L$ are $\widehat a_{ij}(x,t) = a_{ij}(x_0 + r x, t_0 +r^2 t)$, which satisfy all the assumptions made for $\L$, with the constants $M$ and $\Lambda$ appearing in \eqref{e-hc} and \eqref{e-up} replaced by $r^\alpha M$ and $r^\alpha \Lambda$, respectively, and the same constant $\lambda$ in \eqref{e-up}. Then, for every $r \in ]0, R_0]$, the H\"older constant in \eqref{e-hc} of $\widehat \L$ is at most $R_0^{\, \alpha} M$ and the parabolicity constants in \eqref{e-up} are $\lambda$ and $R_0^{\, \alpha} \Lambda$. In the following we then assume that $z_0 = 0$ and $r=1$. Moreover, $r_1$ denotes the constant appearing in Corollary \ref{cor-Harnack-inv} and relative to $\widehat \L$, which depends on the constants $M, \lambda, \Lambda$ and $R_0$. We then choose four positive constants $\iota, \kappa, \mu, \vartheta$ with $0 < \iota < \kappa < \mu < 1$ and $0 < \vartheta < 1$ and we consider the cylinders $\Q^+ := \Q_1^+(0)$ and $\Q^- := \Q_1^-(0)$ as defined in \eqref{e-QPM}. We let \begin{equation*} r_0 := \min \big\{r_1, 1 - \vartheta, \sqrt{1 - \mu} \big\} \end{equation*} and we note that $\Q_r(z) \subset \Q_1(0)$ whenever $z \in B(0,\vartheta) \times ]- \mu, 0[$ and $0 < r < r_0$.
We next choose any $z^- = (x^-,t^-) \in \Q^-$ and $z^+ = (x^+,t^+) \in \Q^+$ and we rely on Corollary \ref{cor-Harnack-inv} to construct a \emph{Harnack chain}, that is a finite sequence $w_0, w_1, \dots, w_m$ in $\Q_1(0)$ such that \begin{equation} \label{eq-HC} w_0 = z^+, \qquad w_m = z^-, \qquad u(w_j) \le C_D u(w_{j-1}), \quad j=1, \dots, m. \end{equation} \begin{center} \begin{tikzpicture} \path[draw,thick] (-1,.7) rectangle (8.7,6.5); \filldraw [fill=black!4!white, line width=.6pt] (1.5,6) rectangle node[right=2cm] {$Q^+_r(z_0)$} (5.5,4.5); \filldraw [fill=black!4!white, line width=.6pt] (1.5,2) rectangle node[right=2cm] {$Q^-_r(z_0)$} (5.5,3.5); \draw [line width=.6pt] (0,6) rectangle node[right=3.5cm] {$Q_r(z_0)$}(7,1); \draw [line width=.6pt] (3.5,6) circle (1pt) node[above] {$z_0$}; \foreach \x in {0,1,...,5} \draw [xshift=\x*.4 cm,yshift=-\x*.2 cm,bblue,line width=.6pt] (.4,3.6) rectangle (3.2,4.6); \foreach \x in {0,1,...,5} \draw [xshift=\x*.4 cm,yshift=-\x*.2 cm,red,line width=1.2pt] (1.4,4.4) -- (2.2,4.4); \foreach \x in {0,1,...,5} \draw [xshift=\x*.4 cm,yshift=-\x*.2 cm,line width=1pt] (2.2,4.4) circle (1pt); \draw [line width=1pt] (1.8,4.6) circle (1pt) node[above] {$z^+$}; \draw [line width=1pt] (4.2,3.4) circle (1pt) node[above=2pt] {$\, z^-$}; \end{tikzpicture} {\sc \qquad Fig.6} - A Harnack chain. \end{center} We build a Harnack chain as follows. Let $\kappa_1,\ \vartheta_1$ be the constants in Corollary \ref{cor-Harnack-inv}. For a positive integer $m$ that will be fixed in the sequel, we choose a positive $r$ and a vector $y \in \mathbb{R}^N$ satisfying \begin{equation} \label{eq-y} m \kappa_1 r^2 = t^+-t^-, \qquad m r y = x^- - x^+. \end{equation} We define \begin{equation} \label{eq-wj} w_j := (x^+ + j r y, t^+ - j \kappa_1 r^{2}), \qquad j=0,1, \dots, m. \end{equation} Clearly, if $r \le r_0$, then $\Q_r(w_j) \subset \Q_1(0)$ for $j=0, 1, \dots, m$. If moreover $|y| \le \vartheta_1$, then $w_{j} \in D_{r}(w_{j-1})$ for $j=1, \dots, m$. This proves that \eqref{eq-HC} holds, and we conclude that \begin{equation} \label{eq-H+-} u(z^-) \le C_D^{\, m} u(z^+). \end{equation} We next choose $m$ in order to have both conditions $r \le r_0$ and $|y| \le \vartheta_1$ satisfied. The choice of $m$ differs according to whether $|x^+ - x^-|$ is \emph{small} or \emph{large} with respect to $t^+-t^-$. If \begin{equation} \label{eq-case1} \frac{|x^+-x^-|}{t^+-t^-} \le \frac{\vartheta_1}{\kappa_1 r_0}, \end{equation} we let $m$ be the positive integer satisfying \begin{equation} \label{eq-mm} (m-1) \kappa_1 r_0^{\, 2} < t^+ - t^- \le m \kappa_1 r_0^{\, 2}, \end{equation} and, in accordance with \eqref{eq-y}, we choose $r$ as the unique positive number satisfying $m \kappa_1 r^2 = t^+-t^-$. From \eqref{eq-mm} it directly follows that $r \le r_0$, while from \eqref{eq-mm} and \eqref{eq-case1} we obtain $|y| \le \vartheta_1$. Suppose now that \begin{equation} \label{eq-case2} \frac{|x^+-x^-|}{t^+-t^-} > \frac{\vartheta_1}{\kappa_1 r_0}. \end{equation} In view of \eqref{eq-y}, in this case we choose $m$ as the integer satisfying \begin{equation} \label{eq-m} m - 1 < \frac{\kappa_1 |x^+-x^-|^2}{\vartheta_1^{\, 2} (t^+-t^-)} \le m, \end{equation} and we let $y$ be the vector parallel to $x^- - x^+$ and such that \begin{equation*} m |y| = \frac{\kappa_1 |x^+-x^-|^2}{{\vartheta_1 (t^+-t^-)}}. \end{equation*} Clearly, $|y| \le \vartheta_1$, and \eqref{eq-case2} implies $r \le r_0$.
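As a concrete illustration of the construction (with purely illustrative values, not the actual constants produced by Corollary \ref{cor-Harnack-inv}), take $N=1$, $\kappa_1=\vartheta_1=\tfrac{1}{2}$, $r_0=\tfrac{1}{10}$, $z^+=(0,0)$ and $z^-=(0.3,-0.5)$. Then $\tfrac{|x^+-x^-|}{t^+-t^-}=0.6\le\tfrac{\vartheta_1}{\kappa_1 r_0}=10$, so we are in the first case: \eqref{eq-mm} gives $m=100$, the relation $m\kappa_1 r^2=t^+-t^-$ gives $r=0.1\le r_0$, and \eqref{eq-y} gives $y=0.03$, so that $|y|\le\vartheta_1$. The resulting chain is $w_j=(0.003\,j,\,-0.005\,j)$ for $j=0,\dots,100$, which ends at $w_{100}=z^-$ and yields $u(z^-)\le C_D^{\,100}\,u(z^+)$ by \eqref{eq-H+-}.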
We next find a bound for the integer $m$, which is uniform with respect to $z^- \in \Q^-$ and $z^+ \in \Q^+$, and we rely on \eqref{eq-H+-} to conclude the proof. In the first case \eqref{eq-case1} we obtain from \eqref{eq-mm} that $m \le \frac{t^+-t^-}{\kappa_1 r_0^{\, 2}}$. In the second case \eqref{eq-case2} we rely on \eqref{eq-m} and we note that $t^+-t^- \ge \kappa- \iota$, by our choice of $\Q^-$ and $\Q^+$. Then in this case we have $m < \frac{ 4\kappa_1}{\vartheta_1^{\, 2} (\kappa - \iota)}$. Summarizing, we have proved that the inequality \eqref{e-H1} holds with \begin{equation*} C_H := \exp \left( \max \big\{ \tfrac{1}{\kappa_1 r_0^{\, 2}}, \tfrac{4\kappa_1}{\vartheta_1^{\, 2} (\kappa - \iota)} \big\} \log C_D \right). \end{equation*} \end{proof} \setcounter{equation}{0} \section{An approach relying on sets of finite perimeter}\label{SectionBV} In this section we present another approach to the generalized divergence theorem, relying on De Giorgi's theory of perimeters, see \cite{DeGiorgi2,DeGiorgi3} or \cite{AmbrosioFuscoPallara,Maggi}, and we show how this leads to a slightly different proof of Theorem \ref{th-1}. This approach requires more prerequisites than the one used in Section \ref{SectionDivergence}, but, as explained in the Introduction, is more flexible and avoids the Dubovicki\v{\i} theorem. In this section, if $\mu$ is a Borel measure and $E$ is a Borel set, we use the notation $\mu\mathbin{\vrule height 1.6ex depth 0pt width 0.13ex\vrule height 0.13ex depth 0pt width 1.3ex} E(B)=\mu(E\cap B)$. As before, $C^1_c \pr{\Omega}$ denotes the set of $C^1$ functions compactly supported in the open set $\Omega \subset \mathbb{R}^n$. \begin{definition}[\sc $BV$ Functions] Let $u \in L^1 \pr{\Omega}$; we say that $u$ is a function of bounded variation in $\Omega$ if its distributional derivative $Du=\pr{D_1 u,\ldots,D_n u}$ is an $\mathbb{R}^n$-valued Radon measure in $\Omega$, {\em i.e.}, if \[ \int_{\Omega}u \frac{\partial \varphi}{\partial z_i}\ dz=-\int_{\Omega}\varphi \, dD_i u, \quad \forall \: \varphi \in C_c^1 \pr{\Omega}, \; i=1,\ldots,n \] or, in vectorial form, \begin{equation} \label{e-bv-div} \int_{\Omega}u\, \mathrm{div}\,\Phi\ dz=-\sum_{i=1}^n \int_{\Omega}\Phi_i\ d D_i u = -\int_{\Omega}\langle\Phi, Du\rangle , \quad \forall \: \Phi \in C_c^1 \pr{\Omega;\mathbb{R}^n}. \end{equation} The vector space of all functions of bounded variation in $\Omega$ is denoted by $BV \pr{\Omega}$. The variation $V \pr{u,\Omega}$ of $u$ in $\Omega$ is defined by: $$ V \pr{u,\Omega}:=\sup \left\{ \int_{\Omega}u\, \mathrm{div}\,\Phi\ dz: \Phi \in C^1_c \pr{\Omega;\mathbb{R}^{n}},\, \norm{\Phi}_{\infty} \leq 1 \right\}. $$ \end{definition} We recall that ${V \pr{u,\Omega}=\abs{Du}\pr{\Omega}}<\infty$ for any $u \in BV \pr{\Omega}$, where $\abs{Du}$ denotes the total variation of the measure $Du$. We also recall that if $u \in C^1 \pr{\Omega}$ then $$ V \pr{u,\Omega}=\int_{\Omega} \abs{\nabla u} dz. $$ When the function $u$ is the characteristic function $\chi_E$ of some measurable set $E$, its variation is called the \emph{perimeter of $E$}. \begin{definition}[\sc Sets of finite perimeter] Let $E$ be an $\mathcal{L}^n-$measurable subset of $\mathbb{R}^n$. For any open set $\Omega \subset \mathbb{R}^n$ the perimeter of $E$ in $\Omega$ is denoted by $P \pr{E,\Omega}$ and it is the variation of $\rchi_{E}$ in $\Omega$, {\em i.e.}, \begin{equation*} P \pr{E,\Omega}:=\sup \left\{\int_{E}\mathrm{div}\, \Phi\ dz:\Phi \in C^1_c \pr{\Omega;\mathbb{R}^n},\, \norm{\Phi}_{\infty}\leq 1 \right\}.
\end{equation*} We say that $E$ is a set of finite perimeter in $\Omega$ if $P \pr{E,\Omega}<\infty$. \end{definition} Obviously, several properties of the perimeter of $E$ can be stated in terms of the variation of $\chi_E$. In particular, if $\mathcal{L}^n(E \cap \Omega)$ is finite, then $\rchi_{E} \in L^1 \pr{\Omega}$ and $E$ has finite perimeter in $\Omega$ if and only if $\rchi_{E}\in BV \pr{\Omega}$; in this case $P \pr{E,\Omega} = \abs{D{\rchi_{E}}}\pr{\Omega}$. Both the notations $|D\chi_E|(B)$ and $P(E,B)$, $B$ Borel, are used to denote the total variation measure of $\chi_E$ on a Borel set $B$, and we say that $E$ is a set of locally finite perimeter in $\Omega$ if $P(E,K)<\infty$ for every compact set $K \subset \Omega$. Finally, formula \eqref{e-bv-div} looks like a divergence theorem: \begin{equation} \label{e-per-div} \int_{E}\, \mathrm{div}\, \Phi\ dz=- \int_{\Omega}\langle\Phi,D\chi_E\rangle, \quad \forall \: \Phi \in C_c^1 \pr{\Omega;\mathbb{R}^n}, \end{equation} but it becomes more readable if some precise information is given on the set where the measure $D\chi_E$ is concentrated. Therefore, we introduce the notions of {\em reduced boundary} and of {\em density} and recall the structure theorem for sets with finite perimeter due to E. De Giorgi, see \cite{DeGiorgi3} and \cite[Theorem 3.59]{AmbrosioFuscoPallara}, and the characterization due to H. Federer. \begin{definition}[\sc Reduced boundary] Let $\Omega$ be an open subset of $\mathbb{R}^n$ and let $E$ be a set of locally finite perimeter in $\Omega$. We say that $z \in \Omega$ belongs to the reduced boundary ${\mathcal F}E$ of $E$ if $|D\chi_E|(B_\r(z))>0$ for every $\r>0$ and the limit $$ \nu_E\pr{z}:=\lim_{\r \to 0^+}\frac{D{\rchi_{E}}\pr{B_{\r}\pr{z}}}{\abs{D{\rchi_{E}}}\pr{B_{\r}\pr{z}}} $$ exists in $\mathbb{R}^n$ and satisfies $\abs{\nu_E \pr{z}}=1$. The function $\nu_E:{\mathcal F}E \rightarrow {\mathbb S}^{n-1}$ is a Borel function and it is called the {\em generalized (or measure-theoretic) inner normal} to $E$. \end{definition} Notice that the reduced boundary is a subset of the topological boundary. The Besicovitch differentiation theorem, see e.g. \cite[Theorem 2.22]{AmbrosioFuscoPallara}, yields $D\rchi_{E} = \nu_E\abs{D\rchi_{E}}$, and $\abs{D\rchi_{E}}(\Omega\setminus{\mathcal F}E)=0$, hence \eqref{e-per-div} becomes \begin{equation} \label{e-3} \int_{E}\mathrm{div}\, \Phi\ dz=-\int_{{\mathcal F}E}\scp{\nu_E,\Phi}\ d \abs{D\rchi_{E}}, \quad \forall \: \Phi \in C^1_c \pr{\Omega;\mathbb{R}^n}. \end{equation} \begin{comment} The following celebrated \emph{De Giorgi’s structure theorem} gives a further improvement of the above identity. Let us recall that a set $S\subset\mathbb{R}^n$ is {\em countably $(n-1)-$rectifiable} if there exist countably many Lipschitz functions $\psi_j:\mathbb{R}^{n-1}\to\mathbb{R}^n$ such that \[ {\cal H}^{n-1}\left(S\setminus\bigcup_{j=0}^\infty \psi_j(\mathbb{R}^{n-1})\right)=0 \] and ${\cal H}^{n-1}(S)<\infty$. \begin{theorem}[\sc De Giorgi] Let $E$ be an $\mathcal{L}^n-$measurable subset of $\mathbb{R}^n$. Then ${\mathcal F}E$ is countably $\pr{n-1}-$rectifiable and $\abs{D{\rchi_{E}}}=\H^{n-1}\mathbin{\vrule height 1.6ex depth 0pt width 0.13ex\vrule height 0.13ex depth 0pt width 1.3ex} \mathcal{F}E$.
In addition, for any $z_0 \in {\mathcal F}E$ the following statements hold: \begin{description} \item[$(a)$] the sets $\frac{1}{\r}\pr{E-z_0}$ locally converge in measure in $\mathbb{R}^n$ as $\r \to 0^+$ to the half-space $H$ orthogonal to $\nu_E \pr{z_0}$ and containing $\nu_E \pr{z_0}$; \item[$(b)$] $\mathrm{Tan}^{n-1} \pr{\H^{n-1}\mathbin{\vrule height 1.6ex depth 0pt width 0.13ex\vrule height 0.13ex depth 0pt width 1.3ex} {\mathcal F} E,z_0}=\H^{n-1}\mathbin{\vrule height 1.6ex depth 0pt width 0.13ex\vrule height 0.13ex depth 0pt width 1.3ex} \nu_E^\perp \pr{z_0}$, {\em i.e.}, \[ \lim_{\varrho\to 0+}\frac{1}{\varrho^{n-1}} \int_{{\mathcal F}E}\phi\Bigl(\frac{z-z_0}{\varrho}\Bigr) d\H^{n-1}(z) =\int_{\nu_E^\perp(z_0)} \Phi(z)\ d\H^{n-1}(z). \] In particular, \begin{equation*} \lim_{\r \to 0^+}\frac{\H^{n-1}\pr{{\mathcal F}E \cap B_{\r}\pr{z_0}}}{\omega_{n-1}\r^{n-1}}=1, \end{equation*} where $\omega_{n-1}$ is the Lebesgue measure of the unit ball in $\mathbb{R}^{n-1}$. \end{description} \end{theorem} Here $\mathrm{Tan}^{n-1}\pr{\mu,z}$ denotes the \emph{approximate tangent space} to a Radon measure $\mu$ at point $z$ while $\mu \mathbin{\vrule height 1.6ex depth 0pt width 0.13ex\vrule height 0.13ex depth 0pt width 1.3ex} E$ denotes the \emph{restriction} of a measure $\mu$ to $E$ (see Definitions 1.65 and 2.79 in \cite{AmbrosioFuscoPallara}). \end{comment} The relation between the topological boundary and the reduced boundary can be further analyzed by introducing the notion of {\em density} of a set at a given point. \begin{definition}[\sc Points of density $\alpha$] For every $\alpha \in \left[0,1\right]$ and every $\mathcal{L}^n-$measurable set $E \subset \mathbb{R}^n$ we denote by $E^{\pr{\alpha}}$ the set $$ E^{\pr{\alpha}}=\left\{ z \in \mathbb{R}^n: \lim_{\r \to 0^+} \frac{\mathcal{L}^n(E \cap B_\r \pr{z})}{\mathcal{L}^n(B_\r \pr{z})}=\alpha\right\}. $$ \end{definition} Thus $E^{\pr{\alpha}}$, which turns out to be a Borel set, is the set of all points where $E$ has density $\alpha$. The sets $E^{\pr{0}}$ and $E^{\pr{1}}$ are called \emph{the measure-theoretic exterior} and \emph{interior} of $E$ and, in general, strictly contain the topological exterior and interior of the set $E$, respectively. We recall the well known \emph{Lebesgue's density theorem}, that asserts that for every $\mathcal{L}^n-$measurable set $E \subset \mathbb{R}^n$ $$ \mathcal{L}^n(E \triangle E^{\pr{1}})=0, \quad \mathcal{L}^n(\pr{\mathbb{R}^n \setminus E} \triangle E^{\pr{0}})=0, $$ {\em i.e.}, the density of $E$ is $0$ or $1$ at $\mathcal{L}^n-$almost every point in $\mathbb{R}^n$. This notion allows to introduce the {\em essential} or {\em measure-theoretic} boundary of $E$ as $\partial^*E=\mathbb{R}^n\setminus (E^{\pr{0}}\cup E^{\pr{1}})$, which is contained in the topological boundary and contains the reduced boundary. Finally, the De Giorgi structure theorem says that $|D\rchi_E|=\H^{n-1}\mathbin{\vrule height 1.6ex depth 0pt width 0.13ex\vrule height 0.13ex depth 0pt width 1.3ex}{\mathcal F}E$ and a deep result due to Federer (see \cite[4.5.6]{federer1969geometric} or \cite[Theorem 3.61]{AmbrosioFuscoPallara}) states that if $E$ has finite perimeter in $\mathbb{R}^n$ then \[ \mathcal{F}E\subset E^{\pr{1/2}}\subset \partial^*E \quad \text{and}\quad \H^{n-1}(\mathbb{R}^n\setminus (E^{\pr{0}}\cup\mathcal{F}E\cup E^{\pr{1}}))=0 \] hence, in particular, $\nu_E$ is defined $\H^{n-1}-$a.e. in $\partial^*E$. 
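As a simple illustration of the difference between these boundaries (a standard example, not taken from the references above), let $n=2$ and $E=B_1(0)\setminus S$, where $S$ is a diameter of the disc. Since $\mathcal{L}^2(S)=0$, every point of $S$ interior to the disc has density $1$ for $E$, hence it belongs to $E^{\pr{1}}$ and not to $\partial^*E$, although it lies on the topological boundary $\partial E$; moreover $D\rchi_E=D\rchi_{B_1(0)}$, so that ${\mathcal F}E=\partial^*E=\partial B_1(0)\subsetneq\partial E$.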
Notice also (see \cite[Theorem 3.62]{AmbrosioFuscoPallara}) that if $\H^{n-1}(\partial E)<\infty$ then $E$ has finite perimeter. The results of De Giorgi and Federer imply that if $E$ is a set of finite perimeter in $\Omega$ then $D \rchi_E=\nu_E \H^{n-1}\mathbin{\vrule height 1.6ex depth 0pt width 0.13ex\vrule height 0.13ex depth 0pt width 1.3ex} \mathcal{F}E$ and the divergence theorem \eqref{e-3} can be rewritten in the form: \begin{equation}\label{e-rem-1} \int_E \mathrm{div}\, \Phi\ dz= -\int_{{\mathcal F}E}\scp{\nu_E,\Phi}\ d \H^{n-1}= -\int_{\partial^*E}\scp{\nu_E,\Phi}\ d \H^{n-1}, \quad \forall \: \Phi \in C^1_c \pr{\Omega;\mathbb{R}^n}, \end{equation} much closer to the classical formula \eqref{eq-div}. Indeed, the only difference is that the inner normal and the boundary are understood in a measure-theoretic sense and not in the topological one; in particular, for a generic set of finite perimeter, ${\mathcal F}E$ need not be closed and $\nu_E$ need not be continuous. Moreover, $\nu_E$ is defined $\H^{n-1}-$a.e. in $\partial^*E$. Let us see now how we can rephrase the results of Section \ref{SectionDivergence} in terms of perimeters and how we can modify the proof of Theorem \ref{th-1}. We first recall the Fleming--Rishel formula (see \cite{fleming} or \cite[Theorem 3.40]{AmbrosioFuscoPallara}), {\em i.e.}, the coarea formula for $BV$ functions. \begin{theorem}[\sc Coarea formula in $BV$] \label{th-cobv} For any open set $\Omega \subset \mathbb{R}^n$ and $G \in L^1_{\mathrm{loc}}\pr{\Omega}$ one has $$ V \pr{G,\Omega}=\int_{\mathbb{R}}P\pr{\{z \in \Omega:G \pr{z}>y\},\Omega}\ dy. $$ In particular, if $G \in BV \pr{\Omega}$ the set $\left\{G >y \right\}$ has finite perimeter in $\Omega$ for $\H^1-$a.e.~$y \in \mathbb{R}$ and $$ \abs{DG}\pr{B}=\int_{\mathbb{R}}\abs{D{\rchi_{\left\{G>y\right\}}}}\pr{B}\ dy, \quad DG\pr{B}=\int_{\mathbb{R}}D{\rchi_{\left\{G>y\right\}}}\pr{B}\ dy, \quad \forall \: B \in \mathcal{B}\pr{\Omega}. $$ \end{theorem} Now we are ready to state the analogue of Proposition \ref{prop-1} and to prove Theorem \ref{th-1} again. \begin{proposition} \label{prop-1BV} Let $\Omega$ be an open subset of $\mathbb{R}^{n}$ and let $F \in BV \pr{{\Omega};\mathbb{R}}\cap C \pr{{\Omega};\mathbb{R}}$. Then, for $\H^1-$almost every $y \in \mathbb{R}$, we have: \begin{equation}\label{cobv} \int_{\left\{F>y\right\}} \mathrm{div}\, \Phi\ dz = -\int_{\partial^*\{F>y\}} \scp{\nu,\Phi}\ d \H^{n-1}, \quad \forall \: \Phi \in C_c^1 \pr{\Omega;\mathbb{R}^{n}}, \end{equation} where $\nu$ is the generalized inner normal to $\{F>y\}$. \end{proposition} \begin{proof} By Theorem \ref{th-cobv}, the set $\left\{F>y\right\}$ has finite perimeter in $\Omega$ for $\H^1-$a.e. $y \in \mathbb{R}$, hence we may apply \eqref{e-rem-1} with $E=\{F>y\}$ and conclude. \end{proof} As in Section \ref{SectionDivergence}, we have to cut the integration domain: therefore, we study the intersection between the super-level set of a generic function $G \in BV\pr{\Omega;\mathbb{R}}\cap C\pr{\Omega;\mathbb{R}}$ and a half-space $H_t=\left\{ x \in \mathbb{R}^n: \scp{x,e}<t \right\}$, for some $e \in {\mathbb S}^{n-1}$, $t \in \mathbb{R}$. First, we present a general formula that characterizes the intersection of two sets of finite perimeter, for which we refer to Maggi's book, see \cite[Theorem 16.3]{Maggi}.
\begin{theorem}[\sc Intersection of sets of finite perimeter] \label{th-int} If $A$ and $B$ are sets of locally finite perimeter in $\Omega$, and we let $$ \left\{ \nu_A = \nu_B \right \} = \left\{ x \in {\mathcal F}A \cap {\mathcal F}B: \nu_A \pr{x}=\nu_B \pr{x} \right\}, $$ then $A \cap B$ is a set of locally finite perimeter in $\Omega$, with \begin{equation} \label{e-9} D \rchi_{A \cap B}=D \rchi_A \mathbin{\vrule height 1.6ex depth 0pt width 0.13ex\vrule height 0.13ex depth 0pt width 1.3ex} B^{\pr{1}}+D \rchi_B \mathbin{\vrule height 1.6ex depth 0pt width 0.13ex\vrule height 0.13ex depth 0pt width 1.3ex} A^{\pr{1}} + \nu_A \H^{n-1} \mathbin{\vrule height 1.6ex depth 0pt width 0.13ex\vrule height 0.13ex depth 0pt width 1.3ex} \left\{ \nu_A = \nu_B \right\}. \end{equation} \end{theorem} In the case in which $B$ is a half-space, formula \eqref{e-9} can be greatly simplified; indeed we can prove the following corollary. \begin{corollary}[\sc Intersections with a half-space]\label{cor-1} Let $E$ be a set of locally finite perimeter in $\Omega$ and let $H_t=\left\{ z \in \mathbb{R}^n : \scp{z,e} < t\right\}$ for some $e \in {\mathbb S}^{n-1}$, $t \in \mathbb{R}$. Then, for every $t \in \mathbb{R}$, $E \cap H_t$ is a set of locally finite perimeter in $\Omega$ and moreover, for $\H^{1}-$almost every $t \in \mathbb{R}$, $$ D \rchi_{E \cap H_t}=D \rchi_E \mathbin{\vrule height 1.6ex depth 0pt width 0.13ex\vrule height 0.13ex depth 0pt width 1.3ex} H_t-e\H^{n-1}\mathbin{\vrule height 1.6ex depth 0pt width 0.13ex\vrule height 0.13ex depth 0pt width 1.3ex} \pr{E \cap \left\{ \scp{x,e}=t\right\}}. $$ \end{corollary} \begin{proof} The half-space $H_t$ is clearly a set of locally finite perimeter in $\Omega$ for every $t\in\mathbb{R}$, and for every $t \in \mathbb{R}$ we have, $H_t^{\pr{1}}=H_t$, $\mathcal{F}H_t=\partial H_t =\left\{ \scp{x,e}=t \right\}$ and $\nu_{H_t} \equiv -e$. Then, applying Theorem \ref{th-int} we see that $E \cap H_t$ is a set of locally finite perimeter in $\Omega$ for every $t \in \mathbb{R}$ and \eqref{e-9} reads \[ D \rchi_{E \cap H_t}=D \rchi_E \mathbin{\vrule height 1.6ex depth 0pt width 0.13ex\vrule height 0.13ex depth 0pt width 1.3ex} H_t - e \H^{n-1} \mathbin{\vrule height 1.6ex depth 0pt width 0.13ex\vrule height 0.13ex depth 0pt width 1.3ex} (E^{\pr{1}}\cup \left\{ \nu_E = \nu_{H_t} \right\}) . \] Since by Fubini theorem \[ 0=\mathcal{L}^n (E\triangle E^{(1)}) = \int_{\mathbb{R}} \H^{n-1}\pr{(E\triangle E^{(1)}) \cap \left\{ \scp{x,e}=t \right\}}\ dt , \] for $\H^1-$a.e. $t \in \mathbb{R}$ we have \[ \H^{n-1}\pr{E \triangle E^{\pr{1}} \cap \left\{ \scp{x,e}=t \right\}}=0. \] Therefore, \[ D \rchi_{H_t} \mathbin{\vrule height 1.6ex depth 0pt width 0.13ex\vrule height 0.13ex depth 0pt width 1.3ex} E = D \rchi_{H_t} \mathbin{\vrule height 1.6ex depth 0pt width 0.13ex\vrule height 0.13ex depth 0pt width 1.3ex} E^{\pr{1}} = -e\H^{n-1} \mathbin{\vrule height 1.6ex depth 0pt width 0.13ex\vrule height 0.13ex depth 0pt width 1.3ex} (E\cap\{\scp{x,e}=t\}) = -e\H^{n-1} \mathbin{\vrule height 1.6ex depth 0pt width 0.13ex\vrule height 0.13ex depth 0pt width 1.3ex} (E\cap\{\nu_E=\nu_{H_t}\}) \] for $\H^1-$a.e. $t \in \mathbb{R}$ and the thesis follows. \end{proof} The following corollary allows us to perform (with some modifications) the last part of the proof of our main result. 
\begin{corollary}\label{prop-2-BV} Let $\Omega=\mathbb{R}^{N+1} \setminus \left\{ z_0 \right\}$, $G \in BV\pr{\Omega;\mathbb{R}} \cap C \pr{\Omega;\mathbb{R}}$, $H_t=\left\{ z \in \mathbb{R}^{N+1} : \scp{z,e} < t\right\}$ for some $e \in {\mathbb S}^N$, $t \in \mathbb{R}$. Then, for $\H^{1}-$almost every $w \in \mathbb{R}$ and for every $t < t_0$ the set $\{ G > w \} \cap H_t$ has locally finite perimeter in $\Omega$ and $$ \int_{\left\{ G > w \right\} \cap H_t}\mathrm{div}\, \Phi\ dz= -\int_{\partial^*\{ G > w \} \cap H_t}\scp{\nu,\Phi}\ d \H^N +\int_{\left\{ G >w \right\} \cap \left\{ \scp{x,e}=t \right\}} \scp{e,\Phi}\ d\H^N, $$ for every $\Phi \in C^1_c \pr{\Omega;\mathbb{R}^{N+1}}$, where $\nu$ is the generalized inner normal to $\{G>w\}$. In particular, if $e=\pr{0,\ldots,0,1}$, for every $\varepsilon > 0$ $$ \int_{\left\{ G > w \right\} \cap \left\{ t<t_0-\varepsilon \right\}} \!\!\!\mathrm{div}\, \Phi\ dz =-\int_{\partial^*\{ G>w \} \cap \left\{ t<t_0-\varepsilon \right\}} \!\!\scp{\nu,\Phi}d \H^{N} +\int_{\left\{ G>w \right\} \cap \left\{ t=t_0-\varepsilon \right\}} \!\!\scp{e,\Phi}d\H^{N}. $$ \end{corollary} Notice that the difference between Proposition \ref{prop-1bis} and Corollary \ref{prop-2-BV} is that in the former we can exclude the set of critical points of $G$ from the surface integral, thanks to the Dubovicki\v{\i} theorem, and we know that $\nu$ is given by the normalized gradient of $G$ {\em everywhere} in the integration set, whereas in the latter we do not need to know any estimate on the size of ${\rm Crit}(G)$ and $\nu$ is defined $\H^N-${\em a.e.} on the integration set (still coinciding with the normalized gradient of $G$ out of ${\rm Crit}(G)$, of course). First, notice that we apply Corollary \ref{prop-2-BV} to $G(z)=\Gamma(z_0;z)$, which is $C^1(\Omega)$, hence locally Lipschitz in $\Omega$. As a consequence, $\partial\{G>w\}\subseteq\{G=w\}$ and comparing the coarea formulas \eqref{e-co} and \eqref{cobv}, we deduce that $\H^N(\{G=w\}\setminus\partial^*\{G>w\})=0$ for $\H^{1}$-a.e. $w$. Let us see how this entails modifications of the proof of Theorem \ref{th-1}: the proof goes in the same vein until \eqref{eq-div-3i}, \eqref{eq-div-3}, which in the present context are replaced by \begin{align*} \lim_{k \to +\infty} \int_{\psi_r(z_0) \cap \left\{ t<t_0-\varepsilon_k \right\}} \scp{\nu,\Phi}d \H^{N} & = \int_{\psi_r(z_0)} K (z_0;z) u(z) d \H^{N} \\ & = \int_{\psi_r(z_0)\setminus \mathrm{Crit}\pr{\Gamma}} K (z_0;z) u(z) d \H^{N} \end{align*} where the first equality follows from Corollary \ref{prop-2-BV}, as explained, and the last equality follows from the fact that the kernel $K$ vanishes in ${\rm Crit}(\Gamma)$. The rest of the proof needs no modifications. \end{document}
\begin{document} \title{Maximum-Likelihood-Estimate Hamiltonian learning via efficient and robust quantum likelihood gradient} \author{Tian-Lun Zhao} \affiliation{International Center for Quantum Materials, School of Physics, Peking University, Beijing, 100871, China} \author{Shi-Xin Hu} \affiliation{International Center for Quantum Materials, School of Physics, Peking University, Beijing, 100871, China} \author{Yi Zhang} \email{[email protected]} \affiliation{International Center for Quantum Materials, School of Physics, Peking University, Beijing, 100871, China} \date{Today} \begin{abstract} Given the recent developments in quantum techniques, modeling the physical Hamiltonian of a target quantum many-body system is becoming an increasingly practical and vital research direction. Here, we propose an efficient strategy combining maximum likelihood estimation, gradient descent, and quantum many-body algorithms. Given the measurement outcomes, we optimize the target model Hamiltonian and density operator via a series of descents along the quantum likelihood gradient, which we prove is negative semi-definite with respect to the negative-log-likelihood function. In addition to such optimization efficiency, our maximum-likelihood-estimate Hamiltonian learning respects the locality of a given quantum system, therefore, extends readily to larger systems with available quantum many-body algorithms. Compared with previous approaches, it also exhibits better accuracy and overall stability toward noises, fluctuations, and temperature ranges, which we demonstrate with various examples. \end{abstract} \maketitle \section{Introduction} Understanding the quantum states and the corresponding properties of a given quantum Hamiltonian is a crucial problem in quantum physics. Many powerful numerical and theoretical tools have been developed for such purposes and made compelling progress \cite{Lanczos, SteveWhite1992, dmrg, QMC, rgf}. On the other hand, with the rapid experimental developments of quantum technology, e.g., near-term quantum computation \cite{nielsen2002, Nayak2008} and simulation \cite{Buluta2009, Georgescu2014, Barthelemy2013, Browaeys2020, Scholl2021, Ebadi2022, Bluvstein2021}, it is also vital to explore the inverse problem, e.g., Hamiltonian learning - optimize a model Hamiltonian characterizing a quantum system with respect to the measurement results. Given the knowledge and assumption of a target system, researchers have achieved many resounding successes modeling quantum Hamiltonians with physical pictures and phenomenological approaches \cite{tbg, bcs}. However, such subjective perspectives may risk biases and are commonly insufficient on detailed quantum devices. Therefore, the explorations for objective Hamiltonian learning strategies have attracted much recent attention \cite{Qi2019, FETH, Corr1, RN571, Corr2, Entangle, dynamic1, Wenjunyu, Hsinyuan}. There are mainly two categories of Hamiltonian-learning strategies, based upon either quantum measurements on a large number of (identical copies of) quantum states, e.g., Gibbs states or eigenstates \cite{Qi2019, FETH, Corr1, RN571, Corr2, Entangle}, or initial states' time evolution dynamics \cite{dynamic1, Wenjunyu, Hsinyuan, TNmethod}, corresponding to the target quantum system. For example, given the measurements of the correlations of a set of local operators, the kernel of the resulting correlation matrix offers a candidate model Hamiltonian \cite{Qi2019, FETH, Corr1}. 
On the other hand, while established theoretically, most approaches suffer from elevated costs and are limited to small systems in experiments or numerical simulations \cite{Corr1, RN571, RN572, Zoller}. Besides, there remains much room for improvements in stability towards noises and temperature ranges. Maximum likelihood estimation (MLE) is a powerful tool that parameterizes and then optimizes the probability distribution of a statistical model so that the given observed data is most probable. MLE's intuitive and flexible logic makes it a prevailing method for statistical inference. Adding to its wide range of applications, MLE has been applied successfully to quantum state tomography\cite{QSE, PhysRevA.75.042108, Lvovsky_2004, PhysRevA.85.042317,PhysRevA.63.020101}, providing the most probable quantum states given the measurement outputs. Inspired by MLE's successes in quantum problems, we propose a general MLE Hamiltonian learning protocol: given finite-temperature measurements of the target quantum system in thermal equilibrium, we optimize the model Hamiltonian towards the MLE step-by-step via a ``quantum likelihood gradient". We show that such quantum likelihood gradient, acting collectively on all presenting operators, is negative semi-definite with respect to the negative-log-likelihood function and thus provides efficient optimization. In addition, our strategy may take advantage of the locality of the quantum system, therefore allowing us to extend studies to larger quantum systems with tailored quantum many-body ansatzes such as Lanczos, quantum Monte Carlo (QMC), density matrix renormalization group (DMRG), and finite temperature tensor network (FTTN) \cite{FTTN, ITensor} algorithms in suitable scenarios. We also demonstrate that MLE Hamiltonian learning is more accurate, less restrictive, and more robust against noises and broader temperature ranges. Further, we generalize our protocol to measurements on pure states, such as the target quantum systems' ground states or quantum chaotic eigenstates. Therefore, MLE Hamiltonian learning enriches our arsenal for cutting-edge research and applications of quantum devices and experiments, such as quantum computation, quantum simulation, and quantum Boltzmann machines \cite{QBM}. We organize the rest of the paper as follows: In Sec. II, we review the MLE context and introduce the MLE Hamiltonian learning protocol; especially, we show explicitly that the corresponding quantum likelihood gradient leads to a negative semi-definite change to the negative-log-likelihood function. Via various examples in Sec. III, we demonstrate our protocol's capability, especially its robustness against noises and temperature ranges. We generalize the protocol to quantum measurements of pure states in Sec. IV and Appendix D, with consistent results for exotic quantum systems such as quantum critical and topological models. We summarize our studies in Sec. V with a conclusion on our protocol's advantages (and limitations), potential applications, and future outlooks. \section{Maximum-likelihood-estimate Hamiltonian learning} To start, we consider an unknown target quantum system $\hat H_s = \sum_{j} \mu_{j} \hat{O}_{j}$ in thermal equilibrium, and measurements of a set of observables $\{\hat{O}_{i}\}$ on its Gibbs state $\hat{\rho}_{s}=\exp(-\beta\hat{H}_s)/\mbox{tr}[\exp(-\beta\hat{H}_s)]$, where $\beta$ is the inverse temperature. 
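For concreteness, the following minimal Python sketch (purely illustrative and not the code used for the results below; all names, operators, and coefficients are arbitrary choices of ours) builds such a target Hamiltonian for a two-site toy system, its Gibbs state via exact diagonalization, and the measurement statistics of a few local observables that serve as the protocol's input:
\begin{verbatim}
# Minimal toy sketch (illustrative only): a two-site spin-1/2 target system,
# its Gibbs state, and the outcome statistics of a few local observables.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2   # spin-1/2 operators
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2, dtype=complex)

def kron_all(*mats):
    out = np.array([[1.0 + 0j]])
    for m in mats:
        out = np.kron(out, m)
    return out

def gibbs(H, beta):
    """Gibbs state exp(-beta H) / tr[exp(-beta H)] via exact diagonalization."""
    w, V = np.linalg.eigh(H)
    p = np.exp(-beta * (w - w.min()))        # shift for numerical stability
    return (V * (p / p.sum())) @ V.conj().T

def projectors(O):
    """Spectral projectors {eigenvalue: P_lambda} of a Hermitian observable O."""
    w, V = np.linalg.eigh(O)
    projs = {}
    for k, lam in enumerate(np.round(w, 10)):
        v = V[:, k:k + 1]
        projs[lam] = projs.get(lam, 0) + v @ v.conj().T
    return projs

def measure_stats(rho, proj_list):
    """Outcome probabilities p_lambda = tr[rho P_lambda] for every observable."""
    return [{lam: float(np.real(np.trace(rho @ P))) for lam, P in projs.items()}
            for projs in proj_list]

# target system H_s = sum_j mu_j O_j and its Gibbs state at inverse temperature beta
observables = [kron_all(sz, sz), kron_all(sx, I2), kron_all(I2, sx)]
mu_s = np.array([1.0, 0.5, -0.3])
H_s = sum(m * O for m, O in zip(mu_s, observables))
beta = 1.0
rho_s = gibbs(H_s, beta)
proj_list = [projectors(O) for O in observables]
stats = measure_stats(rho_s, proj_list)      # the protocol's input data
\end{verbatim}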
Given a sufficient number $N_i$ of measurements of the operator $\hat{O}_{i}$, the number of occurrences $f_{\lambda_i}$ of the $\lambda_i^{th}$ eigenvalue $o_{\lambda_i}$ approaches: \begin{equation} f_{\lambda_i}= p_{\lambda_{i}}N_i\approx\mbox{tr}[\hat{\rho}_{s}\hat{P}_{\lambda_{i}}] N_i, \end{equation} where $p_{\lambda_i}=f_{\lambda_i}/N_i$ denotes the observed frequency of the outcome $o_{\lambda_i}$, and $\hat{P}_{\lambda_i}$ is the corresponding projection operator onto the $o_{\lambda_i}$ sector. Our goal is to locate the model Hamiltonian $\hat H_s$ for the quantum system, which commonly requires the presence of all of $\hat{H}_s$'s terms in the measurement set $\{\hat{O}_i\}$. Following previous MLE analysis \cite{QSE, PhysRevA.75.042108, Lvovsky_2004, PhysRevA.85.042317,PhysRevA.63.020101}, the statistical weight of any given state $\hat \rho$ is: \begin{equation} \mathcal{L}(\hat{\rho})\propto \prod_{i, \lambda_i}\{ \mbox{tr}[\hat{\rho}\hat{P}_{\lambda_i}]^{\frac{f_{\lambda_i}}{N_{tot}}}\}^{N_{tot}}, \label{LHF} \end{equation} up to a trivial factor, where $N_{tot}=\sum_{i}N_i$ is the total number of measurements. For Hamiltonian learning, we search for (the set of parameters $\{\mu_j\}$ of) the MLE Hamiltonian $\hat{H}$, whose Gibbs state $\hat \rho$ maximizes the likelihood function in Eq. \ref{LHF}. The maximum condition for Eq. \ref{LHF} can be re-expressed as: \begin{eqnarray} &&\hat{R}(\hat{\rho}) \hat{\rho} = \hat{\rho}, \nonumber \\ &&\hat{R}(\hat{\rho}) = \sum_{i,\lambda_i}\frac{f_{\lambda_i}}{N_{tot}} \frac{\hat{P}_{\lambda_i}}{\mbox{tr} [\hat \rho \hat P_{\lambda_i}]}, \label{eq:mlerho} \end{eqnarray} see Appendix A for a detailed review. Solving Eq. \ref{eq:mlerho} is a nonlinear and nontrivial problem, for which many algorithms have been proposed \cite{ Lvovsky_2004, PhysRevA.75.042108, PhysRevA.85.042317,PhysRevA.63.020101}. For example, we can employ iterative updates $\hat{\rho}_{k+1} \propto \hat{R}(\hat{\rho}_k)\hat{\rho}_k\hat{R}(\hat{\rho}_k)$ until Eq. \ref{eq:mlerho} is fulfilled \cite{Lvovsky_2004}. These algorithms mostly center around the parameterization and optimization of a quantum state $\hat \rho$, whose cost is exponential in the system size. Besides, such iterative updates do not guarantee that the quantum state $\hat{\rho}$ remains of Gibbs form, especially when the measurements are insufficient to determine the state uniquely (e.g., under large noise or with small numbers of measurements, so that many quantum states satisfy Eq. \ref{eq:mlerho}). Consequently, extracting $\hat{H}\propto-\frac{1}{\beta}\ln\hat{\rho}$ from $\hat \rho$ further adds to the inconvenience. Considering that the operator $\hat{R}(\hat{\rho})$ has the same operator structure as the Hamiltonian, we take an alternative stance for the Hamiltonian learning task and update the candidate Hamiltonian $\hat{H}_k$, i.e., the model parameters, collectively and iteratively. In particular, we collect the corrections to the Hamiltonian coefficients into the operator $\hat{R}(\hat{\rho})$, which offers the quantum likelihood gradient (Fig. \ref{fig:algorithm}): \begin{eqnarray} \hat H_{k+1} &=& \hat H_k -\gamma \hat R_k, \nonumber\\ \hat{\rho}_{k+1}&=&\frac{e^{-\beta\hat{H}_{k+1}}}{\mbox{tr}[e^{-\beta\hat{H}_{k+1}}]}=\frac{e^{-\beta(\hat{H}_{k}-\gamma\hat{R}_{k})}}{\mbox{tr}[e^{-\beta(\hat{H}_{k}-\gamma\hat{R}_{k})}]}, \label{eq:iter} \end{eqnarray} where $\gamma>0$ is the learning rate, a small parameter controlling the step size. 
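Continuing the toy sketch above, the operator $\hat{R}(\hat{\rho})$ of Eq. \ref{eq:mlerho} and a single update of Eq. \ref{eq:iter} can be prototyped as follows; this is again only an illustration under the same assumptions, and the function names are ours rather than part of any established package:
\begin{verbatim}
# Continuation of the previous sketch: the operator R(rho) and one quantum
# likelihood gradient step H_{k+1} = H_k - gamma * R_k (illustrative only).
def R_op(rho, proj_list, stats):
    """R(rho) = sum_{i,lambda} (f_lambda/N_tot) P_lambda / tr[rho P_lambda];
    with N_i = N measurements per operator, f_lambda/N_tot = p_lambda / n_ops."""
    n_ops = len(proj_list)
    R = np.zeros_like(rho)
    for projs, ps in zip(proj_list, stats):
        for lam, P in projs.items():
            R += (ps[lam] / n_ops) * P / max(np.real(np.trace(rho @ P)), 1e-14)
    return R

def qlg_step(H, proj_list, stats, beta, gamma):
    """One update H -> H - gamma * R(rho_H), with rho_H the Gibbs state of H."""
    return H - gamma * R_op(gibbs(H, beta), proj_list, stats)

H0 = np.zeros_like(H_s)              # e.g. start from the trivial model H_0 = 0
H1 = qlg_step(H0, proj_list, stats, beta, gamma=0.1)
\end{verbatim}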
We denote $\hat{R}_k \equiv \hat{R}(\hat{\rho}_k)$ for short here afterwards. Compared with previous Hamiltonian extractions from MLE quantum state tomography, the update in Eq. \ref{eq:iter} possesses several advantages in Hamiltonian learning. First, we can utilize the Hamiltonian structure (e.g., locality) to choose suitable numerical tools (e.g., QMC and FTTN) and even calculate within the subregions - we circumvent the costly parametrization of the quantum state $\hat{\rho}$. Also, the update guarantees a state in its Gibbs form. Last but not least, we will show that for $\gamma \ll 1$, such a quantum likelihood gradient in Eq. \ref{eq:iter} yields a negative semi-definite contribution to the negative-log-likelihood function, guaranteeing the MLE Hamiltonian (upto a trivial constant) at its convergence and an efficient optimization toward it. \begin{figure} \caption{An illustration of the MLE Hamiltonian learning algorithm: given the quantum measurements on the Gibbs state $\hat{\rho}_s$ of the target quantum system, we update the candidate Hamiltonian iteratively until the negative-log-likelihood function (or relative entropy) converges below a given threshold $\epsilon$, after which the output yields the MLE Hamiltonian. Within each iterative step, we evaluate the operator expectation values with respect to the Gibbs state $\hat{\rho}_k = \exp(-\beta\hat H_k) / \mbox{tr}[\exp(-\beta\hat H_k)]$, which directs the model update $\Delta \hat{H}_k=-\gamma\hat{R}_k$ for the next iterative step.} \label{fig:algorithm} \end{figure} \textbf{Theorem:} For $\gamma\ll 1$, $\gamma>0$, the quantum likelihood gradient in Eq. \ref{eq:iter} yields a negative semi-definite contribution to the negative-log-likelihood function $ M(\hat{\rho}_{k+1})=-\frac{1}{N_{tot}}\log\mathcal{L}(\hat{\rho}_{k+1})$. \textbf{Proof:} We note that upto linear order in $\gamma \ll 1$: \begin{eqnarray} e^{-\beta\hat{H}_{k+1}}&=&e^{-\beta\hat{H}_k}\prod_{n=0}^{\infty} \exp\left[\frac{(-1)^{n}}{(n+1)!}{\rm ad}_{-\beta\hat{H}_k}^{n}(\beta\gamma\hat{R}_k)+o(\gamma^2)\right] \nonumber\\ &\approx& e^{-\beta\hat{H}_k}\left[1+\sum_{n=0}^{\infty}\frac{(-1)^{n}}{(n+1)!}{\rm ad}_{-\beta\hat{H}_k}^{n}(\beta\gamma\hat{R}_k)\right]\nonumber\\ &=&e^{-\beta\hat{H}_k}(1+\beta\gamma \int_{0}^{1}e^{\beta s\hat{H}_k}\hat{R}_k e^{-\beta s\hat{H}_k}{\rm d}s), \label{eq:linear_gamma} \end{eqnarray} where ${\rm ad}_{\hat{A}}^{j}\hat{B}=[\hat{A},{\rm ad}_{\hat{A}}^{j-1}\hat{B}]$ and ${\rm ad}_{\hat{A}}^{0}\hat{B}=\hat{B}$ are the adjoint action of the Lie algebra. The first and third lines are based on the Zassenhaus formula\cite{zassen} and the Baker-Hausdorff formula \cite{zassen}, respectively, while the second line neglects terms above the linear order of $\gamma$. Following this, we can re-express the quantum state in Eq. \ref{eq:iter} as: \begin{equation} \hat{\rho}_{k+1}=\hat{\rho}_k\frac{1+\beta\gamma \int_{0}^{1}e^{\beta s\hat{H}_k}\hat{R}_k e^{-\beta s\hat{H}_k}{\rm d}s}{1+\beta\gamma}, \label{appr_iter} \end{equation} where we have used $\mbox{tr}[\hat{\rho}_k \hat{R}_k]=1$ as a direct consequence of $R_k$'s definition in Eq. \ref{eq:mlerho}. 
Subsequently, after introducing the quantum likelihood gradient, the negative-log-likelihood function becomes: \begin{eqnarray} M(\hat{\rho}_{k+1})&=&-\frac{1}{N_{tot}}\log\mathcal{L}(\hat{\rho}_{k+1}) \nonumber\\ &=&-\sum_{i,\lambda_i}\frac{f_{\lambda_i}}{N_{tot}}\log \mbox{tr}[\hat{\rho}_{k+1}\hat{P}_{\lambda_i}] \nonumber\\ &\approx& M(\hat{\rho}_k)+\beta\gamma(1-\Delta_k), \label{MLE_iter} \end{eqnarray} where we keep terms upto linear order of $\gamma$ in the $\log$ expansion. On the other hand, we can establish the following inequality: \begin{eqnarray} \Delta_k &=& \mbox{tr}[\hat{\rho}_k\int_{0}^{1}e^{\beta s\hat{H}_k}\hat{R}_k e^{-\beta s\hat{H}_k}\hat{R}_k{\rm d}s] \nonumber\\ &=&\int_{0}^{1}\mbox{tr}[\hat{\rho}_k e^{\beta s\hat{H}_k}\hat{R}_k e^{-\beta s\hat{H}_k}\hat{R}_k]\mbox{tr}[\hat{\rho}_k]{\rm d}s \nonumber\\ &=&\int_{0}^{1}||e^{-\beta s\hat{H}_{k}/2}\hat{R}_{k}e^{-\beta(1-s)\hat{H}_{k}/2}||_{F}^{2}||e^{-\beta\hat{H}_{k}/2}||_{F}^{2}\frac{{\rm d}s}{Z_{k}^2} \nonumber\\ &\ge& \int_{0}^{1}\mbox{tr}[\hat{\rho}_{k}\hat{R}_{k}]^2{\rm d}s=1, \label{eq:deltagt1} \end{eqnarray} where $Z_{k}=\mbox{tr}[e^{-\beta\hat{H}_{k}}]$ is the partition function, $||A||_{F}=\sqrt{\mbox{tr}[A^{\dagger}A]}$ is the Frobenius norm of matrix $A$, and the non-negative definiteness of $\hat \rho_k$ allows $\hat{\rho}_{k}=(\hat{\rho}_{k}^{1/2})^2=(e^{-\beta\hat{H}_{k}/2})^2/Z_{k}$. The inequality in the fourth line follows the Cauchy-Schwarz inequality. We note that the equality - the convergence criteria of our MLE Hamiltonian learning protocol - is established if and only if: \begin{equation} e^{-\beta s\hat{H}_{k}/2}\hat{R}_{k}e^{-\beta(1-s)\hat{H}_{k}/2}=e^{-\beta\hat{H}_{k}/2}, \end{equation} which implies the conventional MLE optimization target $\hat{R}\hat{\rho}=\hat{\rho}$ in Eq. \ref{eq:mlerho}. We can also establish such consistency from our iterative convergence \footnote{In practice, given sufficient measurements, we have $\hat{R}_k\sim \hat{I}$ dictating the quantum likelihood gradient at the iteration's convergence.} following Eq. \ref{eq:iter}: \begin{equation} \hat \rho_{k+1} =\frac{e^{-\beta(\hat{H}_k-\gamma\hat{R}_k)}} {\mbox{tr}[e^{-\beta(\hat{H}_{k}-\gamma \hat{R}_k)}]} = \frac{e^{\beta \gamma\hat{R}_k }\hat{\rho}_k}{\mbox{tr}[e^{\beta \gamma\hat{R}_k}\hat{\rho}_k]}=\hat \rho_{k} , \label{eq:extre2} \end{equation} where we have used the commutation relation $[\hat{R}_k,\hat{H}_k]=0$ between the Hermitian operators $\hat{R}_k$ and $\hat{H}_k$ following $[\hat{R}_k, \hat{\rho}_k] = [\hat{R}_k, e^{-\beta\hat{H}_k}] = 0$. Finally, combining Eq. \ref{MLE_iter} and Eq. \ref{eq:deltagt1}, we have shown that $M(\hat{\rho}_{k+1})-M(\hat{\rho}_{k}) \le 0 $ is a negative semi-definite quantity, which proves the theorem. We conclude that the quantum likelihood gradient in Eq. \ref{eq:iter} offers an efficient and collective optimization towards the MLE Hamiltonian, modifying all model parameters simultaneously. For each step of quantum likelihood gradient, the most costly calculation is on $\hat{\rho}_{k+1}$, or more precisely, the expectation value $\mbox{tr}[\hat{\rho}_{k+1}\hat{P}_{\lambda_i}]$ from $\hat{H}_{k+1}$. Fortunately, this is a routine calculation in quantum many-body physics and condensed matter physics with various tailored candidate algorithms under different scenarios. For example, we may resort to the FTTN, or the QMC approaches, which readily apply to much larger systems than brute-force exact diagonalization. 
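As a quick numerical sanity check of the theorem, one can also iterate the toy sketch of the previous paragraphs and monitor the negative-log-likelihood $M$, which should be essentially non-increasing for a small learning rate; the snippet below is, once more, only an illustration under the assumptions stated there:
\begin{verbatim}
# Numerical check (toy sketch): M(rho_k) should not increase along the iteration
# for small gamma, up to higher-order-in-gamma and round-off effects.
def nll(rho, proj_list, stats):
    """M(rho) = -sum_{i,lambda} (f_lambda/N_tot) log tr[rho P_lambda]."""
    n_ops = len(proj_list)
    M = 0.0
    for projs, ps in zip(proj_list, stats):
        for lam, P in projs.items():
            if ps[lam] > 0:
                M -= (ps[lam] / n_ops) * np.log(np.real(np.trace(rho @ P)))
    return M

H_k, gamma = np.zeros_like(H_s), 0.05
for k in range(20):
    M_k = nll(gibbs(H_k, beta), proj_list, stats)
    H_k = qlg_step(H_k, proj_list, stats, beta, gamma)
    print(k, M_k)                    # prints an (essentially) decreasing sequence
\end{verbatim}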
Thus, we emphasize that MLE Hamiltonian learning works with evaluations of the expectation values of quantum states instead of the more expensive quantum states themselves in their entirety. Interestingly, MLE Hamiltonian learning also allows a more local stance. For a given Hamiltonian, the necessary expectation value of its Gibbs state $\mbox{tr}[\hat \rho \hat P_{\lambda_i}]$ takes the form: \begin{eqnarray} \mbox{tr}[\hat \rho \hat P_{\lambda_i}] &=& \mbox{tr}[\hat \rho^A_{eff} \hat P_{\lambda_i}], \nonumber \\ \hat \rho^A_{eff} = \mbox{tr}_{\bar A} [\hat \rho] &=& \frac{e^{-\beta \hat H^{A}_{eff}}}{\mbox{tr}[e^{-\beta \hat H^{A}_{eff}}]}, \label{eq:subregion} \end{eqnarray} where $\hat \rho^A_{eff}$ is the reduced density operator defined upon a relatively local subsystem $A$ still containing $\hat P_{ \lambda_i}$. The effective Hamiltonian $\hat{H}_{eff}^{A}=\hat{H}_A+\hat{V}_{eff}^{A}$ of the subregion $A$ contains the existing terms $\hat{H}_A$ within the subsystem and the effective interacting terms $\hat{V}_{eff}^{A}$ from the trace operation \cite{QBP2}. According to the conclusions of the quantum belief propagation theory \cite{QBP1, QBP2}, the interaction strength of the latter term decays as $||\hat{V}_{eff}^{A}(l_A)||_F\propto (\beta/\beta_c)^{l_A/r}$, where $\beta$ ($\beta_c$) denotes the current (model-dependent critical) inverse temperature, $l_A$ is the distance between a specific site in the bulk of $A$ and the boundary of the subregion $A$, and $r$ is the maximum acting distance (diameter) of a single operator in the original Hamiltonian (similar to the $k$-locality in the next section). Thus, when $\beta<\beta_c$ (especially when $\beta\ll \beta_c$), $\hat{V}_{eff}^{A}$ is exponentially localized around the boundary of $A$, and the effective Hamiltonian in the bulk of $A$ remains the same as the original $\hat{H}_A$ of the entire system. Therefore, we may further boost the efficiency of MLE Hamiltonian learning by redirecting the expectation-value evaluations of the global quantum system to those of a series of local patches, as we will show in the next section. In summary, given the quantum measurements of a thermal (Gibbs) state: $\{\hat{O}_{i}\}$, $N_i$, and $f_{\lambda_i}$, we can perform MLE Hamiltonian learning to obtain the MLE Hamiltonian via the following steps (Fig. \ref{fig:algorithm}): \begin{itemize} \item [1)] Initialization/Update: For initialization, start with a random model Hamiltonian $\hat{H}_0$: \begin{equation} \hat{H}_0= \sum_{i}\mu_{i}\hat{O}_{i}, \end{equation} or an identity Hamiltonian. For update, carry out the quantum likelihood gradient: \begin{equation} \hat{H}_{k+1} = \hat{H}_k - \gamma \hat{R}_k, \end{equation} where $\hat{R}_k$ is defined in Eq. \ref{eq:mlerho} or Eq. \ref{eq:hiter}. \item [2)] Evaluate the properties $\mbox{tr}[\hat \rho_k \hat P_{\lambda_i}]$ of the quantum state: \begin{equation} \hat{\rho}_{k}=\frac{e^{-\beta\hat{H}_{k}}}{\mbox{tr}[e^{-\beta\hat{H}_{k}}]}, \label{eq:iter3} \end{equation} with suitable numerical methods. \item [3)] Check for convergence: loop back to step 1) to update, $k\rightarrow k+1$, if the relative entropy $M(\hat{\rho}_{k})-M_0$ remains above a given threshold $\epsilon$; otherwise, terminate the process, and the final $\hat{H}_k$ is the result for the MLE Hamiltonian. Here, $M_0$ is the theoretical minimum of the negative-log-likelihood function: \begin{equation} M_0=-\sum_{i,\lambda_i}\frac{N_i}{N_{tot}}p_{\lambda_i}\log{p_{\lambda_i}}. 
\end{equation} \end{itemize} In practice, $\hat R_k$ in Eq. \ref{eq:iter} becomes nearly singular for small values of $\mbox{tr}[\hat \rho_k \hat P_{\lambda_i}]$ and may be numerically unstable, which requires a minimal or dynamical learning rate $\gamma$ to keep the quantum likelihood gradient in a proper range. Instead, we may employ a re-scaled version of $\hat R_k$: \begin{equation} \tilde{\hat{R}}_{k} = \sum_{i,\lambda_i} \frac{N_i}{N_{tot}} f_{g}(p_{\lambda_i}/ \mbox{tr}[\hat \rho_k \hat{P}_{\lambda_i}]) \hat{P}_{\lambda_i}, \label{eq:hiter} \end{equation} where $f_{g}$ is a monotonic tuning-function: \begin{equation} f_{g}(x)=\frac{gx}{x+g-1}, \quad g>1,\; g\in\mathbb{N}, \label{eq:fg} \end{equation} which maps its argument from $(0,\infty)$ to the finite range $(0,g)$. Such a re-scaled $\tilde{\hat{R}}_k$ regularizes the quantum likelihood gradient and allows a simple yet relatively large learning rate $\gamma$ for more efficient MLE Hamiltonian learning. We also have $f_g(1)=1$; therefore, $\tilde{\hat{R}}_k \rightarrow \hat{R}_k$ as we approach convergence. We will mainly employ $\tilde{\hat{R}}_{k}$ for our examples in the following sections. In addition to the negative-log-likelihood function $M(\hat{\rho}_k)$, we also consider the Hamiltonian distance as another criterion for the quality of Hamiltonian learning: \begin{equation} \Delta{\vec{\mu}}_k = \frac{||\vec{\mu}_s-\vec{\mu}_{k}||_{2}}{||\vec{\mu}_s||_2}, \label{eq:hamdis} \end{equation} where $\vec{\mu}_s$ and $\vec{\mu}_{k}$ are the (vectors of) coefficients \footnote{We typically perform MLE Hamiltonian learning and update the model Hamiltonian on the projection-operator basis; therefore, we transform the Hamiltonian back to the original, ordinary operator basis before $\Delta \vec{\mu}_k$ evaluations.} of the target Hamiltonian and the learned Hamiltonian after $k$ iterations, respectively. However, we do not recommend $\Delta{\vec{\mu}}_k$ ($\Delta \mu$ for short) as a convergence criterion, since $\vec{\mu}_s$ is generally unknown outside benchmark scenarios \cite{Zoller}. \section{Example models and results} In this section, we demonstrate the performance of the MLE Hamiltonian learning protocol. To keep the numerical simulations tractable, we consider $k$-local Hamiltonians, whose operators act non-trivially on no more than $k$ contiguous sites in each direction. For example, for a 1-dimensional spin-$\frac{1}{2}$ system, a $k$-local operator for $k=2$ takes the form $\hat{S}_{i}^{\alpha}\hat{S}_{i+1}^{\beta}$ or $\hat{S}_{i}^{\alpha}$, $\alpha, \beta\in\{x,y,z\}$, where $\hat{S}_{i}^{\alpha}$ denotes the spin operator. In particular, we focus on general 1D quantum spin chains with $k=2$, taking the following form: \begin{equation} \hat{H}_s = \sum_{i,\alpha,\beta}^{L-1}J_{i}^{\alpha\beta}\hat{S}_{i}^{\alpha}\hat{S}_{i+1}^{\beta}+\sum_{i,\alpha}^{L}h_{i}^{\alpha}\hat{S}_{i}^{\alpha}, \label{XZ} \end{equation} where $\hat{S}_{i}^{\alpha}$ denotes the spin operator on site $i$, $\alpha,\beta\in\{x,y,z\}$. There are $12L-9$ 2-local operators under the open boundary condition, where $L$ is the system size. We generate the model parameters $\vec{\mu}_s = \{ J_{i}^{\alpha\beta}, h_{i}^{\alpha} \}$ randomly following a uniform distribution in $[-1,1]$. This Hamiltonian $\hat{H}_s$, specifically the model parameters $\vec{\mu}_s$, will be our target for MLE Hamiltonian learning. 
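As an end-to-end illustration of the procedure on a (very small) instance of Eq. \ref{XZ}, the following sketch assembles a random 2-local chain with $L=3$, simulates its measurement statistics by exact diagonalization, and runs the loop with the re-scaled gradient of Eq. \ref{eq:hiter} and Eq. \ref{eq:fg}. It reuses the helper functions of the sketches in the previous section and is, again, not the FTTN-based code used for the results reported below; evaluating $\Delta\mu$ would additionally require projecting $\hat H_k$ back onto the operator basis, as described in the footnote above:
\begin{verbatim}
# End-to-end sketch (illustrative only): tiny L=3 random 2-local chain, re-scaled
# quantum likelihood gradient with g=2, and a relative-entropy stopping rule.
def site_op(op, i, L):
    mats = [I2] * L
    mats[i] = op
    return kron_all(*mats)

L_toy, pauli = 3, {"x": sx, "y": sy, "z": sz}
ops = [site_op(pauli[a], i, L_toy) @ site_op(pauli[b], i + 1, L_toy)
       for i in range(L_toy - 1) for a in "xyz" for b in "xyz"]
ops += [site_op(pauli[a], i, L_toy) for i in range(L_toy) for a in "xyz"]  # 12L-9 operators

rng = np.random.default_rng(0)
mu_target = rng.uniform(-1, 1, len(ops))           # random target coefficients
H_target = sum(m * O for m, O in zip(mu_target, ops))
plist = [projectors(O) for O in ops]
data = measure_stats(gibbs(H_target, beta), plist)  # simulated measurement input

def f_g(x, g=2.0):
    return g * x / (x + g - 1.0)     # maps (0, infinity) into (0, g), f_g(1) = 1

def R_rescaled(rho, proj_list, stats, g=2.0):
    n_ops = len(proj_list)
    R = np.zeros_like(rho)
    for projs, ps in zip(proj_list, stats):
        for lam, P in projs.items():
            x = ps[lam] / max(np.real(np.trace(rho @ P)), 1e-14)  # numerical guard
            R += (f_g(x, g) / n_ops) * P
    return R

M_0 = -sum((ps[lam] / len(plist)) * np.log(ps[lam])
           for ps in data for lam in ps if ps[lam] > 0)   # theoretical minimum of M

H_k, gamma, eps = np.zeros_like(H_target), 1.0, 1e-10
for k in range(1000):
    rho_k = gibbs(H_k, beta)
    if nll(rho_k, plist, data) - M_0 < eps:               # relative-entropy test
        break
    H_k = H_k - gamma * R_rescaled(rho_k, plist, data)
\end{verbatim}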
As the protocol's inputs, we simulate quantum measurements of all 2-local operators $\{\hat{O}_i\}$ on the Gibbs states of $\hat{H}_s$ numerically via exact diagonalization on small systems and FTTN for large systems. For the latter, we use a tensor network ansatz called the ``ancilla'' method \cite{FTTN}, where we purify a Gibbs state with some auxiliary qubits $\hat\rho_s = \mbox{tr}_{aux}|\psi_s\rangle\langle \psi_s|$, and obtain $\ket{\psi_s}=e^{-\beta\hat{H}_s/2}\ket{\psi_0}$ from a maximally-entangled state $\ket{\psi_0}$ via imaginary time evolution. In addition, given a large number $n$ of Trotter steps, the imaginary time evolution operator $e^{-\beta\hat{H}_s/2}$ is decomposed into a product of Trotter gates as $(\Pi_{i}e^{-\mu_i\hat{O}_{i} \beta/2n})^{n}+O(\beta^2/n)$. Here, we set the Trotter step $\delta t=\beta/n \in [0.01,0.1]$, for which the Trotter errors of order $O(\beta^2/n)$ show little impact on our protocol's accuracy. Without loss of generality, we employ the integrated FTTN algorithm in the ITensor numerical toolkit \cite{ITensor}, and set the number of measurements $N_i=N$ for all operators in our examples for simplicity. \begin{figure} \caption{Both the Hamiltonian distance $\Delta\mu$ defined in Eq. \ref{eq:hamdis} and the negative-log-likelihood function $M(\hat{\rho}_{k+1})$ (or relative entropy $M(\hat{\rho}_{k+1})-M_0$) show successful convergence of the MLE Hamiltonian learning iterations across a variety of system sizes and temperature ranges. We simulate the target Hamiltonian and the iteration process by FTTN with Trotter step $\delta t=0.1$. Each curve is averaged over 10 trials of random $\hat{H}_0$ initializations. We set the learning rate $\gamma=0.1$. The maximum number of iterations here is 1000.} \label{Gibbs_differ} \end{figure} As we demonstrate in Fig. \ref{Gibbs_differ}, MLE Hamiltonian learning obtains the target Hamiltonians with high accuracy and efficiency under various settings of system sizes and inverse temperatures $\beta$. Besides, instead of the original quantum likelihood gradient in Eq. \ref{eq:mlerho}, we may obtain a faster convergence with the re-scaled $\tilde{\hat{R}}_k$ in Eq. \ref{eq:hiter} and a larger learning rate, as we discuss in Appendix B. In the following numerical examples, we use the re-scaled quantum likelihood gradient $\tilde{\hat{R}}_k$ and set $g=2$ for the tuning function in Eq. \ref{eq:fg}. Within the given iterations, not only have we achieved results (Hamiltonian distance $\Delta\mu\sim O(10^{-12})$ and relative entropy $M(\hat{\rho}_k)-M_0\sim O(10^{-16})$) comparable to, if not exceeding, those of previous methods \cite{Corr1} for $L=10$ systems and $\beta=1$ straightforwardly, but we have also achieved satisfactory consistency ($\Delta\mu\sim O(10^{-2})$ and $M(\hat{\rho}_k)-M_0\sim O(10^{-9})$) for large systems $L=100$ and low temperatures $\beta=3$ that were previously inaccessible. \begin{figure} \caption{The performance of MLE Hamiltonian learning holds up relatively well against noise and, especially, over broader temperature ranges. Left: the Hamiltonian distance versus the inverse temperature $\beta$ shows a broader applicable temperature range. Each data point contains 10 trials. Right: the performance (left figure's data averaged over temperature) versus the noise strength $\delta$ shows the impact of noise and the protocol's relative robustness against it. The slope of the straight line is $\sim 1$, indicating a linear relationship between $\Delta\mu$ and $\delta$. 
Note the log scale $\log(\Delta\mu)$ for the vertical axis. We set $L=10$ for the system size, and learning rate $\gamma=1$.} \label{Gibbs_temp} \end{figure} MLE Hamiltonian learning is also relatively robust against temperature and noises, two key factors impacting accuracy in Hamiltonian learning. For illustration, we include random errors $\delta \langle \hat O_i \rangle$ following Gaussian distribution with zero mean and standard deviation $\delta$ to all quantum measurements: $\langle \hat O_i \rangle \rightarrow \langle \hat O_i \rangle + \delta \langle \hat O_i \rangle$. We note that such $\delta$ may also depict the quantum fluctuations \cite{Corr1, Corr2} from a finite number of measurements $\delta \propto N_i^{-1/2}$. We also focus on smaller systems with $L=10$ and employ exact diagonalization to avoid confusion from potential Trotter error of the FTTN ansatz\cite{FTTN}. We summarize the results in Fig. \ref{Gibbs_temp}. Most previous algorithms on Hamiltonian learning have a rather specific applicable temperature range. For example, the high-temperature expansion of $e^{-\beta\hat{H}}$ only works in the $\beta\ll 1$ limit \cite{ogl, PhysRevA.92.052322}. Besides, gradient descent on the log partition function, despite a convex optimization, performs well in a narrow temperature range \cite{RN571}. The gradient of this algorithm is proportional to the inverse temperature, so the algorithm's convergence slows at high temperatures. Also, the gradient descent algorithm cannot extend to the $\beta\rightarrow\infty$ limit - the ground state, while our protocol is directly applicable to the ground states of quantum systems, as we will generalize and justify later. MLE Hamiltonian learning is also more robust to noises, with an accuracy of Hamiltonian distance $\Delta\mu \sim O(10^{-11})$ across a broad temperature range at noise strength $\delta \sim O(10^{-12})$. Such noise level is hard to realize in practice; nevertheless, it is necessary to safeguard the correlation matrix method \cite{Qi2019, Corr1, sbhl}. Even so, due to the uncontrollable spectral gap, the correlation matrix method is susceptible to high temperature, and its accuracy drastically decreases to $\Delta\mu \sim O(10^{-3})$ at $\beta=0.01$. In comparison, MLE Hamiltonian learning is more versatile, with an approximately linear dependence between its accuracy $\Delta\mu$ and the noise strength $\delta$ across a broad range of temperatures and noise strengths, saturating the previous bound \cite{RN571}; see the right panel of Fig. \ref{Gibbs_temp}. We also provide more detailed comparisons between the algorithms in Appendix C. \begin{figure} \caption{Upper: MLE Hamiltonian learning's need for evaluations of local observables can be satisfied among local patches for thermal states at sufficiently high temperatures, where the effective potential $V^A_{eff}$ ($V^{B}_{eff}$) is weak and localized on the boundaries of subregion $A$ ($B$) within a cut-off range $\Lambda_A$ ($\Lambda_B$). Consequently, for $\hat{P}_{\lambda_i}$ defined sufficiently deep inside $A$ ($B$), we can estimate its expectation value via $\mbox{tr}[\hat{\rho}^{A}\hat{P}_{\lambda_i}]$ ($\mbox{tr}[\hat{\rho}^{B}\hat{P}_{\lambda_i}]$). Lower: the Hamiltonian distance $\Delta\mu$ of the results after MLE Hamiltonian learning indicates better validity of the local-patch approximation at higher temperatures. The total system size is $L=100$, while the local patches are of sizes $L_A=10, 16$ with cut-offs $\Lambda=1, 2, 4$, respectively. 
Each data point contains 10 trials. We set $\delta t=0.1$ for the Trotter step in FTTN ansatz and learning rate $\gamma=1$. } \label{subregion} \end{figure} Despite efficient quantum likelihood gradient and applicable quantum many-body ansatz, the computational cost of MLE Hamiltonian learning still increases rapidly with the system size $L$. Fortunately, as stated above in Eq. \ref{eq:subregion}, we may resort to calculations on local patches, especially for low dimensions and high temperatures due to their quasi-Markov property. In particular, when $\beta<\beta_c$ ($T>T_c$), the difference between the cutoff Hamiltonian $\hat{H}_A$ and the effective Hamiltonian $\hat{H}_{eff}^{A}$ in a local subregion $A$, $V_{eff}^{A}$, should be weak, short-ranged, and localized at $A$'s boundary \cite{QBP1, QBP2}; therefore, for those operators $\hat{P}_{\lambda_i}$ adequately deep inside $A$, we can use $\hat \rho^A$, the Gibbs state defined by $\hat{H}_A$, to estimate the corresponding $\mbox{tr}[\hat{\rho}\hat{P}_{\lambda_i}]$; see illustration in Fig. \ref{subregion} upper panel. For example, we apply MLE Hamiltonian learning on $L=100$ systems, where we iteratively calculate the necessary expectation values on different local patches of size $L_A=10, 16$. We also choose different cut-offs $\Lambda$, and evaluate $\mbox{tr}[\hat{\rho}^{A}\hat{P}_{\lambda_i}]$ for those operators at least $\Lambda$ away from the boundaries and sufficiently deep inside the subregion $A$, so that the effective potential $V^{A}_{eff}$ may become negligible. We also employ a sufficient number of local patches to guarantee full coverage of necessary observables - operators outside $A$ or in $\Lambda_A$ are obtainable from another local patch $B$, as shown in the upper panel of Fig. \ref{subregion}, and so on so forth. Both the $L=100$ target system and the local patches for MLE Hamiltonian learning are simulated via FTTN. We have no problem achieving convergence, and the resulting Hamiltonians' accuracy, the Hamiltonian distance $\Delta\mu$ versus the inverse temperature $\beta$, is summarized in the lower panel of Fig. \ref{subregion}. Indeed, the local-patch approximation is more reliable at higher temperatures, as well as with larger subsystems and cutoffs, albeit with rising costs. We also note that we can achieve much larger systems with the local patches than $L=100$ we have demonstrated. \section{MLE Hamiltonian learning for pure eigenstates} In addition to the Gibbs states, MLE Hamiltonian learning also applies to measurements of certain eigenstates of target quantum systems: 1. The ground states are essentially the $\beta \rightarrow \infty$ limit of the Gibbs states. However, due to the order-of-limit issue, the $\gamma \rightarrow 0$ requirement of the theorem on Gibbs states forbids a direct extension to the ground states. In the Appendix D, we offer rigorous proof of the effectiveness of quantum likelihood gradient based on ground-state measurements, along with several nontrivial MLE Hamiltonian learning examples on quantum critical and topological ground states. We note that Ref. \onlinecite{wjb} offers preliminary studies on pure-state quantum state tomography, inspiring this work. 2. 
A highly-excited eigenstate of a (non-integrable) quantum chaotic system $\hat{H}_s$ is believed to obey the eigenstate thermalization hypothesis (ETH), that is, its density operator $\hat{\rho}_{s}=\ket{\psi_s}\bra{\psi_s}$ is locally indistinguishable from a Gibbs state in thermal equilibrium \cite{Tarun}: \begin{equation} \hat{\rho}_{s,A}=\mbox{tr}_{\bar{A}}[\hat{\rho}_s]\approx \frac{e^{-\beta_s\hat{H}_A}}{\mbox{tr}[e^{-\beta_s\hat{H}_A}]}, \label{eq:eth} \end{equation} where $\beta_s$ is an effective inverse temperature determined by the energy expectation value $\braket{\psi_s|\hat{H}_s|\psi_s}=\frac{\mbox{tr}[e^{-\beta_s\hat{H}_s}\hat{H}_s]}{\mbox{tr}[e^{-\beta_s\hat{H}_s}]}$. As MLE Hamiltonian learning only engages local operators, its applicability directly generalizes to such eigenstates $\ket{\psi_s}$ following ETH. 3. In general, ETH applies to eigenstates in the center of the spectrum of quantum chaotic systems, while low-lying eigenstates are too close to the ground state to exhibit ETH \cite{PhysRevE.90.052105}. However, in the rest of the section, we demonstrate numerically that MLE Hamiltonian learning still works well for low-lying eigenstates. We consider the 1D longitudinal-transverse-field Ising model \cite{Tarun, PhysRevE.90.052105} as our target quantum system: \begin{equation} \hat{H}_s=J\sum_{j}^{L-1}\hat{\sigma}_{j}^{z}\hat{\sigma}_{j+1}^{z}+g_{z}\sum_{j}^{L}\hat{\sigma}_{j}^{z}+g_{x}\sum_{j}^{L}\hat{\sigma}_{j}^{x}, \label{ETH_ham} \end{equation} where the system size is $L=80$. We set $J=1$, $g_{x}=0.9045$, and $g_{z}=0.8090$. The quantum system is strongly non-integrable under such settings. Previous studies mainly focused on eigenstates in the middle of the energy spectrum. In contrast, we pick the first excited state - a typical low-lying eigenstate considered asymptotically integrable and ETH-violating \cite{PhysRevE.90.052105} - for quantum measurements (via DMRG) and then MLE Hamiltonian learning for its candidate Hamiltonian (via FTTN). \begin{figure} \caption{The coefficients obtained via MLE Hamiltonian learning (green columns) compare well with those of the target Hamiltonian $\hat{H}_s$ even though the quantum measurements are based upon a low-lying (first) excited state. The red columns denote the coefficients of $\hat{H}_s$ in Eq. \ref{ETH_ham} multiplied by the effective inverse temperature $\beta_s=4$. The error bars indicate the variances over the lattice sites and trials. We set the system size $L=80$, learning rate $\gamma=0.1$, and the Trotter step $\delta t=0.1$.} \label{ETH} \end{figure} We summarize the results in Fig. \ref{ETH}. Further, the model Hamiltonian we established is approximately equivalent to the target quantum Hamiltonian at an inverse temperature $\beta_s\approx 4$ \cite{Tarun}, which we have absorbed into the units of our $\hat{H}_k$. Therefore, we have accurately established the model Hamiltonian and derived an effective temperature consistent with previous results \cite{Tarun} for a low-lying excited eigenstate not necessarily following ETH. The physical reason for the quantum likelihood gradient's applicability to such states is an interesting problem that deserves further study. \section{Discussions} We have proposed a novel MLE Hamiltonian learning protocol to obtain the model Hamiltonian of the target quantum system from quantum measurements of its Gibbs states. The protocol updates the model Hamiltonian iteratively with respect to the negative-log-likelihood function from the measurement data. 
We have theoretically proved the efficiency and convergence of the corresponding quantum likelihood gradient and demonstrated it numerically on multiple non-trivial examples, which show higher accuracy, better robustness against noise, and weaker temperature dependence. Indeed, the accuracy is almost linear in the imposed noise amplitude, and thus inversely proportional to the square root of the number of samples, saturating the asymptotic upper bound \cite{RN571}. Further, MLE Hamiltonian learning directly rests on the Hamiltonians and their physical properties instead of direct and costly access to the quantum many-body states. Consequently, we can resort to various quantum many-body ansatzes in our systematic quantum toolbox and even the local-patch approximation when the situation allows. These advantages allow applications to larger systems and lower temperatures with better accuracy than previous approaches. On the other hand, while our protocol is generally applicable to learning any Hamiltonian, its advantages are most apparent for local Hamiltonians, where various quantum many-body ansatzes and the local-patch approximation shine. Despite such limitations, we note that physical systems are characterized by local Hamiltonians in a significant proportion of scenarios. In addition to the Gibbs states, we have generalized the applicability of MLE Hamiltonian learning to eigenstates of the target quantum systems, including ground states, ETH states, and even selected cases of low-lying excited states. We have also provided a theoretical proof of the quantum likelihood gradient's validity and convergence in Appendix D, along with several other numerical examples. Our strategy may apply to entanglement Hamiltonians and to the tomography of quantum states under the maximum-likelihood-maximum-entropy assumption \cite{PhysRevLett.107.020404}. Besides, our algorithm may also provide insights into the quantum Boltzmann machine \cite{QBM} - a quantum version of the classical Boltzmann machine with degrees of freedom that obey the distribution of a target quantum Gibbs state. Instead of brute-force calculations of the loss function derivatives with respect to the model parameters or approximations with the gradients' upper bounds, our protocol provides an efficient optimization that updates the model parameters collectively. \emph{Acknowledgement:} We thank Jia-Bao Wang for insightful discussions. We acknowledge support from the National Key R\&D Program of China (No.2021YFA1401900) and the National Science Foundation of China (No.12174008 \& No.92270102). The calculations of this work are supported by HPC facilities at Peking University. \appendix \section{Maximum condition for MLE} In this appendix, we review the derivation of the maximum condition \cite{PhysRevA.75.042108} in Eq. \ref{eq:mlerho} in the main text. A general quantum state takes the form of a density operator: \begin{equation} \hat{\rho}=\sum_{j}p_j\ket{\psi_j}\bra{\psi_j}, \label{general} \end{equation} where $p_j\ge 0$, $\sum_j p_j=1$, and $\{\ket{\psi_j}\}$ is an orthonormal basis. 
The search for the quantum state that maximizes the likelihood function: \begin{equation} \mathcal{L}(\hat{\rho}) = \prod_{i, \lambda_i}\{\mbox{tr}[\hat{\rho}\hat{P}_{\lambda_i}]^{\frac{f_{\lambda_i}}{N_{tot}}}\}^{N_{tot}}, \end{equation} can be converted to the optimization problem: \begin{equation} \begin{split} &\min\limits_{\hat{\rho}\in\mathcal{D}}\quad M(\hat{\rho})=-\frac{1}{N_{tot}}\log[\mathcal{L}(\hat{\rho})]\\ &{\rm subject\enspace to}\quad \hat{\rho}\succeq 0, \mbox{tr}[\hat{\rho}]=1.\\ \label{eq:optprob} \end{split} \end{equation} It is hard to solve this semi-definite programming problem directly and numerically. Instead, forgoing the non-negative definiteness, we adopt the Lagrangian multiplier method: \begin{equation} \frac{\partial}{\partial{\bra{\psi_j}}}\{M(\hat{\rho})+\lambda \mbox{tr}[\hat{\rho}]\}=0, \end{equation} where $\lambda$ is a Lagrangian multiplier. Given Eq. \ref{general}, we obtain the following solution: \begin{equation} \begin{split} &\hat{R}\ket{\psi_j}=\ket{\psi_j}\\ &\hat{R} = \sum_{i,\lambda_i}\frac{f_{\lambda_i}}{N_{tot}} \frac{\hat{P}_{\lambda_i}}{\mbox{tr} [\hat \rho \hat P_{\lambda_i}]},\\ \end{split} \label{extre_appendix} \end{equation} and $\lambda=1$. Combining Eq. \ref{general} and Eq. \ref{extre_appendix}, we obtain the maximum condition: \begin{equation} \hat{R}\hat{\rho}=\hat{\rho}. \label{eq:exteme} \end{equation} We note that Eq. \ref{extre_appendix} does not guarantee the positive semi-definiteness of the density operator. Instead, one may search within the density-operator space (the space of positive semi-definite matrix with unit trace) to locate the MLE quantum state fulfilling Eq. \ref{extre_appendix} or Eq. \ref{eq:exteme}. For the Hamiltonian learning task in this work, the search space is naturally the space of Gibbs states (under selected quantum many-body ansatz). \section{MLE Hamiltonian learning with rescaling function} In this appendix, we compare the MLE Hamiltonian learning with the quantum likelihood gradient $\hat{R}_k$ and the re-scaled counterpart $\tilde{\hat{R}}_k$. As we state in the main text, $\tilde{\hat{R}}_k$ regularizes the gradient, allowing us to employ a larger learning rate $\gamma=1$, which leads to a faster convergence (Fig. \ref{fig:rescaled_FIG2}) and a higher accuracy (Tab. \ref{table}) given identical number of iterations. \begin{figure}\label{fig:rescaled_FIG2} \end{figure} \begin{table} \renewcommand\arraystretch{1.6} \resizebox{80mm}{!}{ \begin{tabular}{|c|c|c|c|r|l|} \hline & $L=10$,$\beta=1$ & $L=50$,$\beta=2$ & $L=100$,$\beta=3$ \\ \hline $\hat{R}_k$ &$O(10^{-6})$ & $O(10^{-2})$ & $O(10^{-1})$ \\ \hline $\tilde{\hat{R}}_k$ & $O(10^{-12})$ & $O(10^{-3})$ & $O(10^{-2})$ \\ \hline \end{tabular}} \caption{The algorithm's accuracy (Hamiltonian distance $\Delta\mu$) further improves with a re-scaled quantum likelihood gradient $\tilde{\hat{R}}_k$ under various system sizes $L$ and (inverse) temperatures $\beta$. } \label{table} \end{table} \section{Comparisons between Hamiltonian learning algorithms} In this appendix, we compare different Hamiltonian learning algorithms, including the correlation matrix (CM) method \cite{Qi2019, Corr1}, the gradient descent (GD) method \cite{RN571}, and the MLE Hamiltonian learning (MLEHL) algorithm, by looking into some of their numerical results and performances. We consider general 2-local Hamiltonians in Eq. \ref{XZ} in the main text for demonstration and measurements $\{\hat{O}_i\}$ over all the 2-local operators (instead of all 4-local operators as in Ref. 
\cite{Corr1}). We summarize the results in Fig. \ref{fig:compare}: the accuracy of CM is unstable and highly sensitive to temperature; while GD performs similarly to the proposed MLEHL algorithm at low temperatures, its descending gradient becomes too small at high temperatures to allow a satisfactory convergence within the given maximum number of iterations. \begin{figure} \caption{The performances (logarithm of Hamiltonian distances) of different algorithms versus the inverse temperature $\beta$ show the advantages of the MLEHL algorithm. Each data point contains 10 trials of random Hamiltonians. We also include noise following a normal distribution with zero mean and $O(10^{-12})$ standard deviation. For both GD and MLEHL algorithms, we employ a learning rate $\gamma = 1$ and a maximum number of iterations of 7000. The system size is $L=7$.} \label{fig:compare} \end{figure} We also compare the convergence rates of the MLEHL and GD algorithms with the same learning rate. As shown in Fig. \ref{fig:compare_GD_MLEHL}, the MLEHL algorithm exhibits a faster convergence and hence a smaller overall computational cost, since the per-iteration cost is similar for both algorithms. \begin{figure} \caption{The logarithm of the Hamiltonian distances versus the number of iterations shows a faster convergence under the proposed MLEHL algorithm. We also include noise following a normal distribution with zero mean and $O(10^{-12})$ standard deviation. We employ a learning rate of $\gamma = 1$ for both GD and MLEHL. We set the system size $L=7$ and the (inverse) temperature $\beta=1$.} \label{fig:compare_GD_MLEHL} \end{figure} \section{Hamiltonian learning from ground state} In this appendix, we prove the effectiveness of the quantum likelihood gradient based on measurements of the target quantum system's ground state and provide several nontrivial numerical examples, including 1D quantum critical states and 2D topological states. \subsection{Proof for ground-state-based quantum likelihood gradient} Given a sufficient number $N_i$ of measurements of the operator $\hat{O}_i$ on the non-degenerate ground state $\ket{\psi_s}$ of a target system $\hat{H}_s$, the number of outcomes equal to the $\lambda_i^{th}$ eigenvalue $o_{\lambda_i}$ of $\hat{O}_i$ approaches: \begin{equation} f_{\lambda_i}=p_{\lambda_i}N_i\approx\bra{\psi_s}\hat{P}_{\lambda_i}\ket{\psi_s}N_i, \end{equation} where $p_{\lambda_i}=f_{\lambda_i}/N_i$, and $\hat{P}_{\lambda_i}$ is the projection operator onto the $o_{\lambda_i}$ eigenspace. Our MLE Hamiltonian learning follows the iterations: \begin{equation} \begin{split} &\hat{H}_{k+1}=\hat{H}_k-\gamma\hat{R}_k,\\ &\hat{R}_k = \sum_{i,\lambda_i}\frac{f_{\lambda_i}}{N_{tot}}\frac{\hat{P}_{\lambda_i}}{\bra{\psi_{k}^{gs}}\hat{P}_{\lambda_i}\ket{\psi_{k}^{gs}}},\\ \end{split} \label{iter_gs} \end{equation} where $\ket{\psi_k^{gs}}$ is the non-degenerate ground state of $\hat{H}_k$. \textbf{Theorem:} For $\gamma\ll1$, $\gamma>0$, the quantum likelihood gradient in Eq. \ref{iter_gs} yields a negative semi-definite contribution to the negative-log-likelihood function $M(\ket{\psi_{k+1}^{gs}})=-\frac{1}{N_{tot}}\log\mathcal{L}(\ket{\psi_{k+1}^{gs}})$ following Eq. \ref{LHF} in the main text. 
\textbf{Proof:} At the linear order in $\gamma$, we may treat the addition of $-\gamma\hat{R}_k$ to $\hat{H}_k$ at the $k^{th}$ iteration as a perturbation: \begin{equation} \ket{\psi_{k+1}^{gs}}=\ket{\psi_{k}^{gs}}-\gamma\hat{G}_{k}\hat{R}_{k}\ket{\psi_{k}^{gs}}+O(\gamma^2),\label{eq:itergs_pertb} \end{equation} where $\hat{G}_k$ is the Green's function in the $k_{th}$ iteration: \begin{equation} \hat{G}_k =\hat{Q}_k \frac{1}{E_{k}^{gs}-\hat{H}_k}\hat{Q}_k, \end{equation} where $\hat{Q}_k=I-\ket{\psi_{k}^{gs}}\bra{\psi_{k}^{gs}}$ is the projection operator orthogonal to the ground space $\ket{\psi_{k}^{gs}}\bra{\psi_{k}^{gs}}$, and $E_{k}^{gs}$ is the ground state energy. Keeping terms upto the linear order of $\gamma$ in the log expansion of the negative-log-likelihood function, we have: \begin{equation} \begin{split} M(\ket{\psi_{k+1}^{gs}})&=-\frac{1}{N_{tot}}\log\mathcal{L}(\ket{\psi_{k+1}^{gs}})\\ &=-\sum_{i,\lambda_i}\frac{f_{\lambda_i}}{N_{tot}}\log\bra{\psi_{k+1}^{gs}}\hat{P}_{\lambda_i}\ket{\psi_{k+1}^{gs}},\\ &\approx M(\ket{\psi_k^{gs}})+2\gamma\Delta_k. \end{split} \label{nlf_gs} \end{equation} where difference takes the form: \begin{equation} \begin{split} \Delta_k &= \bra{\psi_{k}^{gs}}\hat{R}_k\hat{G}_k\hat{R}_k\ket{\psi_{k}^{gs}}\\ &=\sum_{l\neq gs}\frac{|\bra{\psi_{k}^{gs}}\hat{R}_{k}\ket{\psi_{k}^{l}}|^2}{E_{k}^{gs}-E_{k}^{l}}\le 0. \end{split} \label{nlf_gs2} \end{equation} Here, $E_{k}^{l}>E_{k}^{gs}$ because $E_{k}^{l}$ denotes the energy for eigenstates other than the ground state. Our iteration converges when the equality in Eq. \ref{nlf_gs2} is established. This happens when $\ket{\psi_k^{gs}}$ is an eigenstate of $\hat{R}_k$, consistent with the MLE condition $\hat{R}\ket{\psi}=\ket{\psi}$ (or $\hat{R}\hat{\rho}=\hat{\rho}$). Finally, combining Eq. \ref{nlf_gs} and Eq. \ref{nlf_gs2}, we have shown that $M(\ket{\psi_{k+1}^{gs}})-M(\ket{\psi_{k}^{gs}})$ is a negative semi-definite quantity, which proves the theorem. One potential complication to the proof is that Eq. \ref{eq:itergs_pertb} needs to assume there is no ground-state level crossing or degeneracy after adding the quantum likelihood gradient. A potential remedy is to keep some low-lying excited states together with the ground state and compare them for maximum likelihood, especially for steps with singular behaviors. Otherwise, we can only hope such transitions are sparse, especially near convergence, and they establish a new line of iterations heading toward the same convergence. A more detailed discussion is available in Ref. \onlinecite{wjb}. \subsection{Example: $c=\frac{3}{2}$ CFT ground state of Majorana fermion chain} Here, we consider the spinless 1D Majorana fermion chain model of length $2L$ as an example \cite{PhysRevB.92.235123}: \begin{equation} \hat{H}_s=\sum_{j}it\hat{\gamma}_{j}\hat{\gamma}_{j+1}+g\hat{\gamma}_{j}\hat{\gamma}_{j+1}\hat{\gamma}_{j+2}\hat{\gamma}_{j+3}, \label{major} \end{equation} where $\hat{\gamma}_{j}$ is the Majorana fermion operator obeying: \begin{equation} \hat{\gamma}_{j}^{\dagger}=\hat{\gamma}_{j}, \{\hat{\gamma}_{i}, \hat{\gamma}_{j}\}=\delta_{ij}, \end{equation} and $t$ and $g=-1$ are model parameters. This model presents a wealth of nontrivial quantum phases under different $t/g$. We focus on the model parameters in $t/g\in(-2.86, -0.28)$, where the ground state of Eq. \ref{major} is a $c=\frac{3}{2}$ CFT composed of a critical Ising theory ($c=\frac{1}{2}$) and a Luttinger liquid ($c=1$). 
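Before turning to this example, we note for completeness that the ground-state iteration of Eq. \ref{iter_gs} can be prototyped along the same lines as the finite-temperature sketches of the previous sections: one simply replaces the Gibbs state of $\hat H_k$ by its ground-state projector. The snippet below is illustrative only; it reuses the helper functions and the small random chain of the earlier sketches rather than the models of this appendix, which are instead computed via DMRG, FTTN, or exact diagonalization as stated in the text:
\begin{verbatim}
# Minimal sketch of the ground-state iteration (illustrative only; reuses the
# helpers and the toy chain H_target/plist defined in the earlier sketches).
def ground_state_density(H):
    w, V = np.linalg.eigh(H)
    v = V[:, 0:1]                    # non-degenerate ground state assumed
    return v @ v.conj().T

# measurement data taken on the target's ground state rather than a Gibbs state
data_gs = measure_stats(ground_state_density(H_target), plist)

H_k, gamma = np.zeros_like(H_target), 0.1
for k in range(500):
    rho_k = ground_state_density(H_k)
    H_k = H_k - gamma * R_rescaled(rho_k, plist, data_gs)
\end{verbatim}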
\begin{figure}\label{fig:majorana} \end{figure} Through the definition of the complex fermions followed by the Jordan-Wigner transformation: \begin{eqnarray} \hat{c}_{j}&=&\frac{\hat{\gamma}_{2j}+i\hat{\gamma}_{2j+1}}{2}, \nonumber\\ \hat{\sigma}_{j}^{z}&=&2\hat{n}_{j}-1, \\ \hat{\sigma}_{j}^{+}&=&e^{-i\pi\sum_{i<j}\hat{n}_{i}}\hat{c}_{j}^{\dagger}, \nonumber \end{eqnarray} where $\hat{n}_{j}=\hat{c}_{j}^{\dagger}\hat{c}_{j}$ is the complex fermion number operator, we map Eq. \ref{major} to a 3-local spin chain of length $L$: \begin{equation} \begin{split} \hat{H}_s=&t\sum_{j}\hat{\sigma}_{j}^{z}-t\sum_{j}\hat{\sigma}_{j}^{x}\hat{\sigma}_{j+1}^{x} \\ -&g\sum_{j}\hat{\sigma}_{j}^{z}\hat{\sigma}_{j+1}^{z}-g\sum_{j}\hat{\sigma}_{j}^{x}\hat{\sigma}_{j+2}^{x}.\\ \end{split} \end{equation} We employ quantum measurements on the ground state $\ket{\psi_s}$ of this Hamiltonian, based on which we carry out our MLE Hamiltonian learning protocol. Here, we evaluate the ground-state properties via exact diagonalization. The numerical results for two cases of $t=0.5, 1.5$ are in Fig. \ref{fig:majorana}. We achieve successful convergence and satisfactory accuracy on the target Hamiltonian. The relative entropy's instabilities are mainly due to the ground state's level crossings and degeneracies. \subsection{Example: alternative Hamiltonian for ground state} We have seen that MLE Hamiltonian learning can retrieve an unknown target Hamiltonian via quantum measurements of its Gibbs states, or even of its ground state. For pure states, however, one interesting byproduct is that the relation between Hamiltonians and eigenstates is essentially many-to-one. Therefore, it is possible to obtain various candidate Hamiltonians $\hat{H}_k$ sharing the same ground state as the original target $\hat{H}_s$, especially by controlling the operator/observable set. Here, we show such numerical examples. \begin{figure}\label{fig:TFIM} \end{figure} As our target quantum system, we consider the transverse field Ising model (TFIM) of length $L=15$: \begin{equation} \hat{H}_s=J\sum_{j}\hat{S}_{j}^{z}\hat{S}_{j+1}^{z}+g\sum_{j}\hat{S}_{j}^{x}, \end{equation} at its critical point $J=g=1$. Its ground state is $\ket{\psi_s}$. However, instead of the operators present in $\hat{H}_s$, we employ a different operator set for $\ket{\psi_s}$'s quantum measurements: \begin{equation} \{\hat{O}_{i}\} = \{\hat{S}_{i}^{z}\hat{S}_{i+1}^{z}, \hat{S}_{i}^{x}\hat{S}_{i+1}^{x}\}. \label{eq:op_set} \end{equation} We evaluate the ground-state properties via DMRG. The subsequent MLE Hamiltonian learning results are in Fig. \ref{fig:TFIM}. Since we obtain a candidate Hamiltonian built from the operators in Eq. \ref{eq:op_set}, which is destined to differ from $\hat{H}_s$, the Hamiltonian distance is no longer a viable measure of its accuracy. Instead, we introduce the ground-state fidelity $f_{gs}=\braket{\psi_s|\psi_{k}^{gs}}$, where $\ket{\psi_s}$ ($\ket{\psi_{k}^{gs}}$) is the ground state of $\hat{H}_s$ ($\hat{H}_k$). Interestingly, while the relative entropy shows full convergence, the fidelity $f_{gs}$ jumps between $\sim 99.5\%$ and $\sim 10^{-3}\%$. This is understandable, as the quantum system is gapless, and the ground and low-lying excited states have similar properties under quantum measurements. \subsection{Example: two-dimensional topological states} \begin{figure}\label{CSLConstruct} \end{figure} Here, we consider MLE Hamiltonian learning on two-dimensional topological quantum systems. 
In particular, we consider the chiral spin liquid (CSL) on a triangular lattice: \begin{equation} \hat{H}_s =J_1\sum_{\langle ij\rangle}\Vec{S}_i\cdot\Vec{S}_j+J_2\sum_{\langle\langle ij\rangle\rangle}\Vec{S}_i\cdot\Vec{S}_j+K\sum_{i,j,k\in \bigtriangledown/\bigtriangleup}\Vec{S}_i\cdot\left(\Vec{S}_j\times\Vec{S}_k\right), \end{equation} where the first and second terms are Heisenberg interactions, and the last term is a three-spin chiral interaction. Previous DMRG studies have established $\hat{H}_s$'s ground state as a CSL under the model parameters $J_1 = 1.0$, $J_2=0.1$, and $K=0.2$\cite{PhysRevB.96.075116}, which we set as the parameters of the target Hamiltonian. Here, we employ exact diagonalization on a $4 \times 4$ system. Based upon entanglement studies of the lowest-energy eigenstates, we verify that both the modular $SU$ matrix corresponding to $C_6$ rotations and the entanglement entropy fit well with a CSL topological phase\cite{PhysRevB.85.235151}. Subsequently, we perform MLE Hamiltonian learning based on quantum measurements of the ground state, focusing on the operators presenting in $\hat{H}_s$. We summarize the results in Fig. \ref{CSLConstruct}. The Hamiltonian distance indicates a stable converging accuracy, yet the relative entropy and the fidelity $f_{gs}=\braket{\psi_s | \psi_{k}^{gs}}$ witness certain instabilities. Indeed, being a topological phase means ground-state degeneracy - competing low-energy eigenstates with global distinctions yet similar local properties. \end{document}
\begin{document} \title{Extending a valuation centered in a local domain to the formal completion.} \section{Introduction} \label{In} All the rings in this paper will be commutative with 1. Let $(R,m,k)$ be a local noetherian domain with field of fractions $K$ and $R_\nu$ a valuation ring, dominating $R$ (not necessarily birationally). Let $\nu|_K:K^*\twoheadrightarrow\Gamma$ be the restriction of $\nu$ to $K$; by definition, $\nu|_K$ is centered at $R$. Let $\hat R$ denote the $m$-adic completion of $R$. In the applications of valuation theory to commutative algebra and the study of singularities, one is often induced to replace $R$ by its $m$-adic completion $\hat R$ and $\nu$ by a suitable extension $\hat\nu_-$ to $\frac{\hat R}P$ for a suitably chosen prime ideal $P$, such that $P\cap R=(0)$ (one specific application we have in mind has to do with the approaches to proving the Local Uniformization Theorem in arbitrary characteristic such as \cite{Spi2} and \cite{Te}). The first reason is that the ring $\hat R$ is not in general an integral domain, so that we can only hope to extend $\nu$ to a \textit{pseudo-valuation} on $\hat R$, which means precisely a valuation $\hat\nu_-$ on a quotient $\frac{\hat R}P$ as above. The prime ideal $P$ is called the \textit{support} of the pseudo-valuation. It is well known and not hard to prove that such extensions $\hat\nu_-$ exist for some minimal prime ideals $P$ of $\hat R$. Although, as we shall see, the datum of a valuation $\nu$ determines a unique minimal prime of $\hat R$ when $R$ is excellent, in general there are many possible primes $P$ as above and for a fixed $P$ many possible extensions $\hat\nu_-$. This is the second reason to study extensions $\hat\nu_-$. The purpose of this paper is to give, assuming that $R$ is excellent, a systematic description of all such extensions $\hat\nu_-$ and to identify certain classes of extensions which are of particular interest for applications. In fact, the only assumption about $R$ we ever use in this paper is a weaker and more natural condition than excellence, called the G condition, but we chose to talk about excellent rings since this terminology seems to be more familiar to most people. For the reader's convenience, the definitions of excellent and G-rings are recalled in the Appendix. Under this assumption, we study \textbf{extensions to (an integral quotient of) the completion $\hat R$ of a valuation $\nu$ and give descriptions of the valuations with which such extensions are composed. In particular we give criteria for the uniqueness of the extension if certain simple data on these composed valuations are fixed}. \noindent We conjecture (see statement 5.19 in \cite{Te} and Conjecture \ref{teissier} below for a stronger and more precise statement) that\par\noindent \textbf{given an excellent local ring $R$ and a valuation $\nu$ of $R$ which is positive on its maximal ideal $m$, there exists a prime ideal $H$ of the $m$-adic completion $\hat R$ such that $H\bigcap R=(0)$ and an extension of $\nu$ to $\frac{\hat R}{H}$ which has the same value group as $\nu$.} When studying extensions of $\nu$ to the completion of $R$, one is led to the study of its extensions to the henselization $\tilde R$ of $R$ as a natural first step. This, in turn, leads to the study of extensions of $\nu$ to finitely generated local strictly \'etale extensions $R^e$ of $R$. 
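An elementary and standard example may help fix ideas; it is not used in the sequel. Let $k$ be a field of characteristic different from $2$ and let $R=\left(k[x,y]/(y^2-x^2-x^3)\right)_{(x,y)}$ be the local ring at the origin of the nodal cubic, a one-dimensional excellent local domain with field of fractions $K$. In $\hat R\cong k[[x,y]]/(y^2-x^2-x^3)$ we have the factorization
$$
y^2-x^2-x^3=(y-xu)(y+xu),\qquad u=\sqrt{1+x}\in k[[x]],
$$
so that $\hat R$ is not a domain: it has the two minimal primes $P_\pm=(y\mp xu)$. The valuation $\nu$ of $K$ centered at $R$, defined on $R$ by $\nu(f)=\mathrm{ord}_x\bigl(f\ \mathrm{mod}\ P_+\bigr)$ and extended to $K$ by additivity, that is, the valuation obtained by computing orders of vanishing along one analytic branch of the node, extends to a valuation of $\frac{\hat R}{P_+}\cong k[[x]]$; the choice of the branch singles out one of the two minimal primes of $\hat R$, in accordance with the uniqueness statement recalled above.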
We therefore start out by letting $\sigma:R\rightarrow R^\dag$ denote one of the three operations of completion, (strict) henselization, or a finitely generated local strictly \'etale extension: \begin{eqnarray} R^\dag&=&\hat R\quad\mbox{ or}\label{eq:hatR}\\ R^\dag&=&\tilde R\quad\mbox{ or}\label{eq:tildeR}\\ R^\dag&=&R^e.\label{eq:Re} \end{eqnarray} The ring $R^\dag$ is local; let $m^\dag$ denote its maximal ideal. The homomorphisms $$ R\rightarrow\tilde R\quad\text{ and }\quad R\rightarrow R^e $$ are regular for any ring $R$; by definition, if $R$ is an excellent ring then the completion homomorphism is regular (in fact, regularity of the completion homomorphism is precisely the defining property of G-rings; see the Appendix for the definition of regular homomorphism). Let $r$ denote the (real) rank of $\nu$. Let $(0)=\Delta_r\subsetneqq \Delta_{r-1}\subsetneqq\dots\subsetneqq \Delta_0=\Gamma$ be the isolated subgroups of $\Gamma$ and $P_0=(0)\subsetneqq P_1\subseteq\dots\subseteq P_r=m$ the prime valuation ideals of $R$, which need not, in general, be distinct. In this paper, we will assume that $R$ is excellent. Under this assumption, we will canonically associate to $\nu$ a chain $H_1\subset H_3\subset\dots\subset H_{2r+1}=mR^\dag$ of ideals of $R^\dag$, numbered by odd integers from 1 to $2r+1$, such that $H_{2\ell+1}\cap R=P_\ell$ for $0\le \ell\le r$. We will show that all the ideals $H_{2\ell+1}$ are prime. We will define $H_{2\ell}$ to be the unique minimal prime ideal of $P_\ell R^\dag$, contained in $H_{2\ell+1}$ (that such a minimal prime is unique follows from the regularity of the homomorphism $\sigma$). We will thus obtain, in the cases (\ref{eq:hatR})--(\ref{eq:Re}), a chain of $2r+1$ prime ideals $$ H_0\subset H_1\subset\dots\subset H_{2r}=H_{2r+1}=mR^\dag, $$ satisfying $H_{2\ell}\cap R=H_{2\ell+1}\cap R=P_\ell$ and such that $H_{2\ell}$ is a minimal prime of $P_\ell R^\dag$ for $0\le \ell\le r$. Moreover, if $R^\dag=\tilde R$ or $R^\dag=R^e$, then $H_{2\ell}=H_{2\ell+1}$. We call $H_i$ the $i${\bf-th implicit prime ideal} of $R^\dag$, associated to $R$ and $\nu$. The ideals $H_i$ behave well under local blowing ups along $\nu$ (that is, birational local homomorphisms $R\to R'$ such that $\nu$ is centered in $R'$), and more generally under \textit{$\nu$-extensions} of $R$ defined below in subsection \ref{trees}. This means that given any local blowing up along $\nu$ or $\nu$-extension $R\rightarrow R'$, the $i$-th implicit prime ideal $H'_i$ of ${R'}^\dag$ has the property that $H'_i\cap R^\dag=H_i$. This intersection has a meaning in view of Lemma \ref{factor} below. For a prime ideal $P$ in a ring $R$, $\kappa(P)$ will denote the residue field $\frac{R_P}{PR_P}$. Let $(0)\subsetneqq \mathbf{m}_1\subsetneqq\dots\subsetneqq \mathbf{m}_{r-1}\subsetneqq\mathbf{m}_r=\mathbf{m}_\nu$ be the prime ideals of the valuation ring $R_\nu$. By definition, our valuation $\nu$ is a composition of $r$ rank one valuations $\nu=\nu_1\circ\nu_2\circ\dots\circ\nu_r$, where $\nu_\ell$ is a valuation of the field $\kappa(\mathbf{m}_{\ell-1})$, centered at $\frac{(R_\nu)_{\mathbf{m}_\ell}}{\mathbf{m}_{\ell -1}}$ (see \cite{ZS}, Chapter VI, \S10, p. 43 for the definition of composition of valuations; more information and a simple example of composition are given below in subsection \ref{trees}, where we interpret each $\mathbf{m}_\ell$ as the limit of a tree of ideals). If $R^\dag=\tilde R$, we will prove that there is a unique extension $\tilde\nu_-$ of $\nu$ to $\frac{\tilde R}{H_0}$.
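For the reader's orientation we spell out, purely as an instantiation of the definitions just given, the simplest case $r=1$: the isolated subgroups reduce to $(0)=\Delta_1\subsetneqq\Delta_0=\Gamma$, the prime valuation ideals to $P_0=(0)\subseteq P_1=m$, and the chain of implicit prime ideals becomes
$$
H_0\subset H_1\subset H_2=H_3=mR^\dag,\qquad H_0\cap R=H_1\cap R=(0),\qquad H_2\cap R=H_3\cap R=m,
$$
where $H_0$ is a minimal prime of $R^\dag$ and $H_2=H_3=mR^\dag$; if $R^\dag=\tilde R$ or $R^\dag=R^e$ then moreover $H_0=H_1$.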
If $R^\dag=\hat R$, the situation is more complicated. First, we need to discuss the behaviour of our constructions under $\nu$-extensions. \subsection{Local blowings up and trees.}\label{trees} We consider \textit{extensions} $R\rightarrow R'$ of local rings, that is, injective morphisms such that $R'$ is an $R$-algebra essentially of finite type and $m'\cap R=m$. In this paper we consider only extensions with respect to $\nu$; that is, both $R$ and $R'$ are contained in a fixed valuation ring $R_\nu$. Such extensions form a direct system $\{R'\}$. We will consider many direct systems of rings and of ideals indexed by $\{R'\}$; direct limits will always be taken with respect to the direct system $\{R'\}$. Unless otherwise specified, we will assume that \begin{equation} \lim\limits_{\overset\longrightarrow{R'}}R'=R_\nu.\label{eq:ZariskiRiemann} \end{equation} Note that by the fundamental properties of valuation rings (\cite{ZS}, \S VI), assuming the equality (\ref{eq:ZariskiRiemann}) is equivalent to assuming that $\lim\limits_{\overset\longrightarrow{R'}}K'=K_\nu$, where $K'$ stands for the field of fractions of $R'$ and $K_\nu$ for that of $R_\nu$, and that $\lim\limits_{\overset\longrightarrow{R'}}R'$ is a valuation ring. \begin{definition} A \textbf{tree} of $R'$-algebras is a direct system $\{S'\}$ of rings, indexed by the directed set $\{R'\}$, where $S'$ is an $R'$-algebra. Note that the maps are not necessarily injective. A morphism $\{S'\}\to\{T'\}$ of trees is the datum of a map of $R'$-algebras $S'\to T'$ for each $R'$ commuting with the tree morphisms for each map $R'\to R''$. \end{definition} \begin{lemma}\label{factor} Let $R\rightarrow R'$ be an extension of local rings. We have:\par \noindent 1) The ideal $N:=m^\dag\otimes_R1+1\otimes_Rm'$ is maximal in the $R$-algebra $R^\dag\otimes_RR'$.\par \noindent 2) The natural map of completions (resp. henselizations) $R^\dag\to{R'}^\dag$ is injective. \end{lemma} \begin{proof} 1) follows from the fact that $R^\dag/m^\dag =R/m$. The proof of 2) relies on a construction which we shall use often: the map $R^\dag\to{R'}^\dag$ can be factored as \begin{equation} R^\dag\to\left(R^\dag\otimes_RR'\right)_N\to{R'}^\dag,\label{eq:iota} \end{equation} where the first map sends $x$ to $x\otimes 1$ and the second is determined by $x\otimes x'\mapsto \hat b(x).c(x')$ where $\hat b$ is the natural map $R^\dag\to {R'}^\dag$ and $c$ is the canonical map $R'\to{R'}^\dag$. The first map is injective because $R^\dag$ is a flat $R$-algebra and it is obtained by tensoring the injection $R\to R'$ by the $R$-algebra $R^\dag$; furthermore, elements of $R^\dag$ whose image in $R^\dag\otimes_RR'$ lies outside of $N$ are precisely the units of $R^\dag$, hence they are not zero divisors in $R^\dag\otimes_RR'$ and $R^\dag$ injects in every localization of $R^\dag\otimes_RR'$.\par \noindent Since $m'\cap R=m$, we see that the inverse image by the natural map of $R'$-algebras $$ \iota\colon R'\to(R^\dag\otimes_RR')_N, $$ defined by $x'\mapsto1\otimes_Rx'$, of the maximal ideal $M=(m^\dag\otimes_R1+1\otimes_Rm')(R^\dag\otimes_RR')_N$ of $(R^\dag\otimes_RR')_N$ is the ideal $m'$ and that $\iota$ induces a natural isomorphism $\frac{R'}{{m'}^i}\overset\sim\rightarrow\frac{(R^\dag\otimes_RR')_N}{M^i}$ for each $i$. From this it follows by the universal properties of completion and henselization that the second map in the sequence (\ref{eq:iota}) is the completion (resp. the henselization inside the completion) of $R^\dag\otimes_RR'$ with respect to the ideal $M$.
It is therefore also injective. \end{proof} \begin{definition} Let $\{S'\}$ be a tree of $R'$-algebras. For each $S'$, let $I'$ be an ideal of $S'$. We say that $\{I'\}$ is a tree of ideals if for any arrow $b_{S'S''}\colon S'\rightarrow S''$ in our direct system, we have $b^{-1}_{S'S''}I''=I'$. We have the obvious notion of inclusion of trees of ideals. In particular, we may speak about chains of trees of ideals. \end{definition} \begin{examples} The maximal ideals of the local rings of our system $\{R'\}$ form a tree of ideals. For any non-negative element $\beta\in\Gamma$, the valuation ideals $\mathcal{P}'_\beta\subset R'$ of value $\beta$ form a tree of ideals of $\{R'\}$. Similarly, the $i$-th prime valuation ideals $P'_i\subset R'$ form a tree. If $rk\ \nu=r$, the prime valuation ideals $P'_i$ give rise to a chain \begin{equation} P'_0=(0)\subsetneqq P'_1\subseteq\dots\subseteq P'_r=m'\label{eq:treechain'} \end{equation} of trees of prime ideals of $\{R'\}$. \end{examples} We discuss this last example in a little more detail and generality in order to emphasize our point of view, crucial throughout this paper: the data of a composite valuation is equivalent to the data of its components. Namely, suppose we are given a chain of trees of ideals as in (\ref{eq:treechain'}), where we relax our assumptions on the $P'_i$ as follows. We no longer assume that the chain (\ref{eq:treechain'}) is maximal, nor that $P'_i\subsetneqq P'_{i+1}$, even for $R'$ sufficiently large; in particular, for the purposes of this example we momentarily drop the assumption that $rk\ \nu=r$. We will still assume, however, that $P'_0=(0)$ and that $P'_r=m'$. Taking the limit in (\ref{eq:treechain'}), we obtain a chain \begin{equation} (0)=\mathbf{m}_0\subsetneqq\mathbf{m}_1\subseteqq\dots\subseteqq \mathbf{m}_r=\mathbf{m}_\nu\label{eq:treechainlim} \end{equation} of prime ideals of the valuation ring $R_\nu$. Similarly, for each $1\leq\ell\leq r$ one has the equality $$ \lim\limits_{\overset\longrightarrow{R'}}{\frac{R'}{P'_\ell}}=\frac{R_\nu}{\bf m_\ell}. $$ Then \textbf{specifying the valuation $\nu$ is equivalent to specifying valuations $\nu_0,\nu_1$, \dots, $\nu_r$, where $\nu_0$ is the trivial valuation of $K$ and, for $1\le \ell\le r$, $\nu_\ell$ is a valuation of the residue field $k_{\nu_{\ell-1}}=\kappa(\mathbf{m}_{\ell-1})$, centered at the local ring $\lim\limits_{\longrightarrow}\frac{R'_{P'_\ell}}{P'_{\ell-1}R'_{P'_\ell}}= \frac{(R_\nu)_{\mathbf{m}_\ell}}{\mathbf{m}_{\ell-1}}$ and taking its values in the totally ordered group $\frac{\Delta_{\ell-1}}{\Delta_\ell}$.} The relationship between $\nu$ and the $\nu_\ell$ is that $\nu$ is the composition \begin{equation} \nu=\nu_1\circ\nu_2\circ\dots\circ\nu_r.\label{eq:composition1} \end{equation} For example, the datum of the valuation $\nu$, or of its valuation ring $R_\nu$, is equivalent to the datum of the valuation ring $\frac{R_\nu}{\mathbf{m}_{r-1}}\subset \frac{(R_\nu)_{\mathbf{m}_{r-1}}}{\mathbf{m}_{r-1}(R_\nu)_{\mathbf{m}_{r-1}}}=\kappa (\mathbf{m}_{r-1})$ of the valuation $\nu_r$ of the field $\kappa(\mathbf{m}_{r-1})$ and the valuation ring $(R_\nu)_{\mathbf{m}_{r-1}}$.
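Schematically, in the case $r=2$ this equivalence reads as follows (we add this display only to fix ideas; it merely specializes the statement above): specifying $\nu$ amounts to specifying
$$
\nu_1\colon K^*\longrightarrow\frac{\Gamma}{\Delta_1}\qquad\text{and}\qquad\nu_2\colon k_{\nu_1}^*\longrightarrow\Delta_1,
$$
a valuation of $K$ together with a valuation of the residue field $k_{\nu_1}$, and then $\nu=\nu_1\circ\nu_2$; the example below, with $\Gamma=\mathbf Z^2$ ordered lexicographically, realizes this picture explicitly.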
If we assume, in addition, that for $R'$ sufficiently large the chain (\ref{eq:treechain'}) (equivalently, (\ref{eq:treechainlim})) is a maximal chain of distinct prime ideals then $rk\ \nu=r$ and $rk\ \nu_\ell=1$ for each $\ell$.\par \begin{remark}\label{composite} Another way to describe the same property of valuations is that, given a prime ideal $H$ of the local integral domain $R$ one builds all valuations centered in $R$ having $H$ as one of the $P_\ell$ by choosing a valuation $\nu_1$ of $R$ centered at $H$, so that $\mathbf{m}_{\nu_1}\cap R=H$ and choosing a valuation subring $\overline R_{\overline\nu}$ of the field $\frac{R_{\nu_1}}{\mathbf{m}_{\nu_1}}$ centered at $R/H$. Then $\nu=\nu_1\circ \overline\nu$.\par \noindent Note that choosing a valuation of $R/H$ determines a valuation of its field of fractions $\kappa(H)$, which is in general much smaller than $\frac{R_{\nu_1}}{\mathbf{m}_{\nu_1}}$. Given a valuation of $R$ with center $H$, in order to determine a valuation of $R$ with center $m$ inducing on $R/H$ a given valuation $\mu$ we must choose an extension $\overline\nu$ of $\mu$ to $\frac{R_{\nu_1}}{\mathbf{m}_{\nu_1}}$, and there are in general many possibilities.\par This will be used in the sequel. In particular, it will be applied to the case where a valuation $\nu$ of $R$ extends uniquely to a valuation $\hat\nu_-$ of $\frac{\hat R}{H}$ for some prime $H$ of $\hat R$. Assuming that $\hat R$ is an integral domain, this determines a unique valuation of $\hat R$ only if the height ${\operatorname{ht}}\ H$ of $H$ in $\hat R$ is at most one. In all other cases the dimension of $\hat R_H$ is at least $2$ and we have infinitely many valuations with which to compose $\hat\nu_-$. This is the source of the height conditions we shall see in \S\ref{extensions}. \end{remark} \begin{example} Let $k_0$ be a field and $K=k_0((u,v))$ the field of fractions of the complete local ring $R=k_0[[u,v]]$. Let $\Gamma=\mathbf Z^2$ with lexicographical ordering. The isolated subgroups of $\Gamma$ are $(0)\subsetneqq(0)\oplus\mathbf Z\subsetneqq\mathbf Z^2$. Consider the valuation $\nu:K^*\rightarrow\mathbf Z^2$, centered at $R$, given by \begin{eqnarray} \nu(v)&=&(0,1)\\ \nu(u)&=&(1,0)\\ \nu(c)&=&0\quad\text{ for any }c\in k_0^*. \end{eqnarray} This information determines $\nu$ completely; namely, for any power series $$ f=\sum\limits_{\alpha,\beta}c_{\alpha\beta}u^\alpha v^\beta \in k_0[[u,v]], $$ we have $$ \nu(f)=\min\{(\alpha,\beta)\ |\ c_{\alpha\beta}\ne 0\}. $$ We have $rk\ \nu=rat.rk\ \nu=2$. Let $\Delta=(0)\oplus\mathbf Z$. Let $\Gamma_+$ denote the semigroup of all the non-negative elements of $\Gamma$. Let $k_0[[\Gamma_+]]$ denote the $R$-algebra of power series $\sum c_{\alpha,\beta}u^\alpha v^\beta$ where $c_{\alpha,\beta}\in k_0$ and the exponents $(\alpha ,\beta)$ form a well ordered subset of $\Gamma_+$. By classical results (see \cite{Kap1}, \cite{Kap2}), it is a valuation ring with maximal ideal generated by all the monomials $u^\alpha v^\beta$, where $(\alpha,\beta)>(0,0)$ (in other words, either $\alpha>0,\beta\in\mathbf Z$ or $\alpha=0,\beta>0$). Then $$ R_\nu=k_0[[\Gamma_+]]\bigcap k_0((u,v)) $$ is a valuation ring of $K$, and contains $k_0[[u,v]]$; it is the valuation ring of the valuation $\nu$. The prime ideal $\mathbf{m}_1$ is the ideal of $R_\nu$ generated by all the $uv^\beta$, $\beta\in\mathbf Z$.
The valuation $\nu_1$ is the discrete rank 1 valuation of $K$ with valuation ring $$ (R_\nu)_{\mathbf{m}_1}=k_0[[u,v]]_{(u)} $$ and $\nu_2$ is the discrete rank 1 valuation of $k_0((v))$ with valuation ring $\frac{R_\nu}{\mathbf{m}_1}\cong k_0[[v]]$. \end{example} \begin{example} To give a more interesting example, let $k_0$ be a field of characteristic zero and $$ K=k_0(x,y,z) $$ a purely transcendental extension of $k_0$ of degree 3. Let $w$ be an independent variable and put $k=\bigcup\limits_{j=1}^\infty k_0\left(w^{\frac 1j}\right)$. Let $\Gamma=\mathbf Z\oplus\mathbf{Q}$ with the lexicographical ordering and $\Delta=(0)\oplus\mathbf{Q}$ the non-trivial isolated subgroup of $\Gamma$. Let $u,v$ be new variables and let $\mu_1:k((u,v))\rightarrow\mathbf{Z}^2_{lex}$ be the valuation of the previous example. Let $\mu_2$ denote the $x$-adic valuation of $k$ and put $\mu=\mu_1\circ\mu_2$. Consider the map $\iota:k_0[x,y,z]\rightarrow k[[u,v]]$ which sends $x$ to $w$, $y$ to $v$ and $z$ to $u-\sum\limits_{j=1}^\infty w^{\frac 1j}v^j$. Let $\nu_1=\left.\mu_1\right|_K$ and $\nu=\mu|_K$. The valuation $\nu:K^*\rightarrow\Gamma$ is centered at the local ring $R=k_0[x,y,z]_{(x,y,z)}$; we have \begin{eqnarray} \nu(x)&=&(0,1)\\ \nu(y)&=&(1,0),\\ \nu(z)&=&(1,1). \end{eqnarray} We write $\nu$ as a composition of two rank 1 valuations: $\nu=\nu_1\circ\nu_2$. We have natural inclusions $R_{\nu_1}\subset R_{\mu_1}$ and $k_{\nu_1}\subset k_{\mu_1}=k$. We claim that $k_{\nu_1}$ is not finitely generated over $k_0$. Indeed, if this were not the case then there would exist a prime number $p$ such that $w^{\frac1p}\mbox{$\in$ \hspace{-.8em}/} k_{\nu_1}$. Let $k'=k_0\left(x^{\frac1{(p-1)!}}\right)$. Let $L= k'(y,z)$. Consider the tower of field extensions $K\subset L\subset k[[u,v]]$ and let $\nu'$ denote the restriction of $\mu$ to $L$. Let $\Gamma'$ be the value group of $\nu'$ and $k_{\nu'}$ the residue field of its valuation ring. Now, $L$ contains the element $z_p:=z-\sum\limits_{j=1}^{p-1}x^{\frac1j}y^j$ as well as $\frac{z_p}{y^p}$. We have \begin{equation} \nu'(z_p)=\left(p,\frac1p\right), \end{equation} $\nu'\left(\frac{z_p}{y^p}\right)=0$ and the natural image of $\frac{z_p}{y^p}$ in $k_{\mu_1}=k$ is $w^{\frac1p}$. Now, $p\not|\ [L:K]$, $\left.[\Gamma':\Gamma]\ \right|\ [L:K]$ and $\left.[k_{\nu'}:k_\nu]\ \right|\ [L:K]$. This implies that $z_p\in L$ and $w^{\frac1p}\in k_{\nu_1}$, which gives the desired contradiction. It is not hard to show that for each $j$, there exists a local blowing up $R\rightarrow R'$ of $R$ such that, in the notation of (\ref{eq:treechain'}), we have $\kappa(P'_1)=k_0\left(w^{\frac1{j!}}\right)$ and that $\kappa(\mathbf{m}_1)=\lim\limits_{j\to\infty}\kappa(P'_1)=k$. The first one is the blowing up of the ideal $(y,z)R$, localized at the point $y=0, z/y=x$. Then one blows up the ideal $(z/y-x, y)$, and so on. Another way to see the valuation $\nu=\nu_1\circ\nu_2$ is to note that $\nu_1$ is the restriction to $K$ of the $v$-adic valuation under the inclusion of fields deduced from the inclusion of rings $$ k_0[[x,y,z]]_{(y,z)}\hookrightarrow k\left[\left[v^{{\mathbf Z}_+}\right]\right] $$ which sends $x$ to $w$, $y$ to $v$ and $z$ to $\sum\limits_{j=1}^\infty w^{\frac1j}v^j$. Recall that the ring on the right is made of power series with non-negative rational exponents whose set of exponents is well ordered. We have $k_{\nu_1}=k$.
\end{example} \begin{remark} The point of the last example is to show that, given a composed valuation as in (\ref{eq:composition1}), $\nu_\ell$ is a valuation of the field $k_{\nu_{\ell-1}}$, which may \textbf{properly} contain $\kappa(P'_{\ell-1})$ for \textbf{every} $R'\in\mathcal{T}$. This fact will be a source of complication later on and we prefer to draw attention to it from the beginning. \end{remark} Coming back to the implicit prime ideals, we will see that the implicit prime ideals $H'_i$ form a tree of ideals of $\{{R'}^\dag\}$. We will show that if $\nu$ extends to a valuation $\hat\nu_-$ centered at $\frac{\hat R}P$ with $P\cap R=(0)$ then the prime $P$ must contain the minimal prime $H_0$ of $\hat R$. We will then show that specifying an extension $\hat\nu_-$ of $\nu$ as above is equivalent to specifying a chain of prime valuation ideals \begin{equation} \tilde H'_0\subset\tilde H'_1\subset\dots\subset\tilde H'_{2r}=m'\hat R'\label{eq:chaintree} \end{equation} of $\hat R'$ such that $H'_\ell\subset\tilde H'_\ell$ for all $\ell\in\{0,\dots,2r\}$, and valuations $\hat\nu_1,\hat\nu_2,\dots,\hat\nu_{2r}$, where $\hat\nu_i$ is a valuation of the field $k_{\hat\nu_{i-1}}$ (the residue field of the valuation ring $R_{\hat\nu_{i-1}}$), arbitrary when $i$ is odd and satisfying certain conditions, coming from the valuation $\nu_{\frac i2}$, when $i$ is even. The prime ideals $H_i$ are defined as follows.\par \noindent Recall that given a valued ring $(R,\nu)$, that is, a subring $R\subseteq R_\nu$ of the valuation ring $R_\nu$ of a valuation with value group $\Gamma$, one defines for each $\beta\in \Gamma$ the valuation ideals of $R$ associated to $\beta$: $$ \begin{array}{lr} {\cal P}_\beta (R)=&\{x\in R/\nu(x)\geq\beta\}\cr {\cal P}^+_\beta (R)=&\{x\in R/\nu(x)>\beta\}\end{array} $$ and the associated graded ring $$ \hbox{\rm gr}_\nu R=\bigoplus_{\beta\in \Gamma}\frac{{\cal P}_\beta (R)} {{\cal P}^+_\beta (R)}=\bigoplus_{\beta\in \Gamma_+}\frac{{\cal P}_\beta (R)}{{\cal P}^+_\beta (R)}. $$ The second equality comes from the fact that if $\beta\in\Gamma_-\setminus\{0\}$, we have ${\cal P}^+_\beta (R)={\cal P}_\beta (R)=R$. If $R\to R'$ is an extension of local rings such that $R\subset R'\subset R_\nu$ and $m_\nu\cap R'=m'$, we will write ${\cal P}'_\beta$ for ${\cal P}_\beta(R')$. Fix a valuation ring $R_\nu$ dominating $R$, and a tree ${\cal T}=\{R'\}$ of n\oe therian local $R$-subalgebras of $R_\nu$, having the following properties: for each ring $R'\in\cal{T}$, all the birational $\nu$-extensions of $R'$ belong to $\cal{T}$. Moreover, we assume that the field of fractions of $R_\nu$ equals $\lim\limits_{\overset\longrightarrow{R'}}K'$, where $K'$ is the field of fractions of $R'$. The tree $\cal{T}$ will stay constant throughout this paper. In the special case when $R$ happens to have the same field of fractions as $R_\nu$, we may take $\cal{T}$ to be the tree of all the birational $\nu$-extensions of $R$. \begin{notation} For a ring $R'\in\cal T$, we shall denote by ${\cal T}(R')$ the subtree of $\cal T$ consisting of all the $\nu$-extensions $R''$ of $R'$. \end{notation} We now define \begin{equation} H_{2\ell+1}=\bigcap\limits_{\beta\in\Delta_{\ell}} \left(\left(\lim\limits_{\overset\longrightarrow{R'}}{\cal P}'_\beta {R'}^\dag\right)\bigcap R^\dag\right),\ 0\le\ell\le r-1\label{eq:defin} \end{equation} (in the beginning of \S\ref{basics} we provide some motivation for this definition and give several elementary examples of $H'_i$ and $\tilde H'_i$).
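Before turning to the motivation behind these definitions, here is a minimal illustration of the graded algebra $\hbox{\rm gr}_\nu R$ introduced above; we include it only for orientation, in the simplest possible situation, namely $R=k[[t]]$ with $\nu$ the $t$-adic valuation, so that $\Gamma=\mathbf Z$. For $\beta\in\Gamma_+$ we have ${\cal P}_\beta(R)=(t^\beta)$ and ${\cal P}^+_\beta(R)=(t^{\beta+1})$, so that
$$
\hbox{\rm gr}_\nu R=\bigoplus_{\beta\in\Gamma_+}\frac{(t^\beta)}{(t^{\beta+1})}\cong k[\bar t],
$$
a polynomial ring over $k=R/m$ in the image $\bar t$ of $t$, graded by giving $\bar t$ degree $1$.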
The questions answered in this paper originally arose from our work on the Local Uniformization Theorem, where passage to completion is required in both the approaches of \cite{Spi2} and \cite{Te}. In \cite{Te}, one really needs to pass to completion for valuations of arbitrary rank. One of the main intended applications of the theory of implicit prime ideals is the following conjecture. Let \begin{equation} \Gamma\hookrightarrow\hat\Gamma\label{eq:extGamma} \end{equation} be an extension of ordered groups of the same rank. Let \begin{equation} (0)=\Delta_r\subsetneqq\Delta_{r-1}\subsetneqq\dots\subsetneqq\Delta_0=\Gamma \label{eq:isolated} \end{equation} be the isolated subgroups of $\Gamma$ and $$ (0)=\hat\Delta_r\subsetneqq\hat\Delta_{r-1}\subsetneqq\dots\subsetneqq \hat\Delta_0=\hat\Gamma $$ the isolated subgroups of $\hat\Gamma$, so that the inclusion (\ref{eq:extGamma}) induces inclusions \begin{eqnarray} \Delta_\ell&\hookrightarrow&\hat\Delta_\ell\quad\text{ and}\\ \frac{\Delta_\ell}{\Delta_{\ell+1}}&\hookrightarrow& \frac{\hat\Delta_\ell}{\hat\Delta_{\ell+1}}. \end{eqnarray} Let $G\hookrightarrow\hat G$ be an extension of graded algebras without zero divisors, such that $G$ is graded by $\Gamma_+$ and $\hat G$ by $\hat\Gamma_+$. The graded algebra $G$ is endowed with a natural valuation with value group $\Gamma$ and similarly for $\hat G$ and $\hat\Gamma$. These natural valuations will both be denoted by $ord$. \begin{definition} We say that the extension $G\hookrightarrow\hat G$ is \textbf{scalewise birational} if for any $x\in\hat G$ and $\ell\in\{1,\dots,r\}$ such that $ord\ x\in\hat\Delta_\ell$ there exists $y\in G$ such that $ord\ y\in\Delta_\ell$ and $xy\in G$. \end{definition} Of course, scalewise birational implies birational and also that $\hat\Gamma=\Gamma$.\par \noindent While the main result of this paper is the primality of the implicit ideals associated to a valuation, and the subsequent description of the extensions of the valuation to the completion, the main conjecture stated here is the following:\par \begin{conjecture}\label{teissier} Assume that $\dim\ R'=\dim\ R$ for all $R'\in\mathcal{T}$. Then there exists a tree of prime ideals $H'$ of $\hat R'$ with $H'\cap R'=(0)$ and a valuation $\hat\nu_-$, centered at $\lim\limits_\to\frac{\hat R'}{H'}$ and having the following property:\par\noindent For any $R'\in\cal{T}$ the graded algebra $\mbox{gr}_{\hat\nu_-}\frac{\hat R'}{H'}$ is a scalewise birational extension of $\mbox{gr}_\nu R'$. \end{conjecture} The example given in remark 5.20, 4) of \cite{Te} shows that the morphism of associated graded rings is not an isomorphism in general. The approach to the Local Uniformization Theorem taken in \cite{Spi2} is to reduce the problem to the case of rank 1 valuations. The theory of implicit prime ideals is much simpler for valuations of rank 1 and takes only a few pages in Section \ref{archimedian}. The paper is organized as follows. In \S\ref{basics} we define the odd-numbered implicit ideals $H_{2\ell+1}$ and prove that $H_{2\ell+1}\cap R=P_\ell$. We observe that by their very definition, the ideals $H_{2\ell+1}$ behave well under $\nu$-extensions; they form a tree. Proving that $H_{2\ell+1}$ is indeed prime is postponed until later sections; it will be proved gradually in \S\ref{technical}--\S\ref{prime}. 
In the beginning of \S\ref{basics} we will explain in more detail the respective roles played by the odd-numbered and the even-numbered implicit ideals, give several examples (among other things, to motivate the need for taking the limit with respect to $R'$ in (\ref{eq:defin})) and say one or two words about the techniques used to prove our results. In \S\ref{technical} we prove the primality of the implicit prime ideals assuming a certain technical condition, called \textbf{stability}, about the tree $\cal T$ and the operation ${\ }^\dag$. It follows from the noetherianity of $R^\dag$ that there exists a specific $R'$ for which the limit in (\ref{eq:defin}) is attained. One of the main points of \S\ref{technical} is to prove properties of stable rings which guarantee that this limit is attained whenever $R'$ is stable. We then use the excellence of $R$ to define the even-numbered implicit prime ideals: for $i=2\ell$ the ideal $H_{2\ell}$ is defined to be the unique minimal prime of $P_\ell R^\dag$, contained in $H_{2\ell+1}$ (in the case $R^\dag=\hat R$ it is the excellence of $R$ which implies the uniqueness of such a minimal prime). We have $$ H_{2\ell}\cap R=P_\ell $$ for $\ell\in\{0,\dots,r\}$. The results of \S\ref{technical} apply equally well to completions, henselizations and other local \'etale extensions; to complete the proof of the primality of the implicit ideals in various contexts such as henselization or completion, it remains to show the existence of stable $\nu$-extensions in the corresponding context. In \S\ref{Rdag} we describe the set of extensions $\nu^\dag_-$ of $\nu$ to $\lim\limits_{\overset\longrightarrow{R'}}\frac{{R'}^\dag}{P'{R'}^\dag}$, where $P'$ is a tree of prime ideals of ${R'}^\dag$ such that $P'\cap R'=(0)$. We show (Theorem \ref{classification}) that specifying such a valuation $\nu^\dag_-$ is equivalent to specifying the following data: (1) a chain (\ref{eq:chaintree}) of trees of prime ideals $\tilde H'_i$ of ${R'}^\dag$ (where $\tilde H'_0=P'$), such that $H'_i\subset\tilde H'_i$ for each $i$ and each $R'\in\mathcal{T}$, satisfying one additional condition (we will refer to the chain (\ref{eq:chaintree}) as the chain of trees of ideals, \textbf{determined by} the extension $\nu^\dag_-$) (2) a valuation $\nu^\dag_i$ of the residue field $k_{\nu^\dag_{i-1}}$ of $\nu^\dag_{i-1}$, whose restriction to the field $\lim\limits_{\overset\longrightarrow{R'}}\kappa(\tilde H'_{i-1})$ is centered at the local ring $\lim\limits_{\overset\longrightarrow{R'}}\frac{{R'}^\dag_{\tilde H'_i}}{\tilde H'_{i-1}{R'}^\dag_{\tilde H'_i}}$. If $i=2\ell$ is even, the valuation $\nu^\dag_i$ must be of rank 1 and its restriction to $\kappa(\mathbf{m}_{\ell-1})$ must coincide with $\nu_\ell$. Notice the recursive nature of this description of $\nu^\dag_-$: in order to describe $\nu^\dag_i$ we must know $\nu^\dag_{i-1}$ in order to talk about its residue field $k_{\nu^\dag_{i-1}}$. In \S\ref{extensions} we address the question of uniqueness of $\nu^\dag_-$. 
We describe several classes of extensions $\nu^\dag_-$ which are particularly useful for the applications: \textbf{minimal} and \textbf{evenly minimal} extensions, and also those $\nu^\dag_-$ for which, denoting by ${\operatorname{ht}}\ I$ the height of an ideal, we have \begin{equation} {\operatorname{ht}}\ \tilde H'_{2\ell+1}- {\operatorname{ht}}\ \tilde H'_{2\ell} \le 1\quad\text{ for }0\le\ell\le r;\label{eq:odd=even4} \end{equation} in fact, the special case of (\ref{eq:odd=even4}) which is of most interest for the applications is \begin{equation} \tilde H'_{2\ell}=\tilde H'_{2\ell+1}\quad\text{ for }1\le\ell\le r.\label{eq:odd=even} \end{equation} We prove some necessary and some sufficient conditions under which an extension $\nu^\dag_-$ whose corresponding ideals $\tilde H'_i$ satisfy (\ref{eq:odd=even}) is uniquely determined by the ideals $\tilde H'_i$. We also give sufficient conditions for the graded algebra $gr_\nu R'$ to be scalewise birational to $gr_{\hat\nu_-}\hat R'$ for each $R'\in\cal{T}$. These sufficient conditions are used in \S\ref{locuni1} to prove some partial results towards Conjecture \ref{teissier}. In \S\ref{henselization} we show the existence of $\nu$-extensions in $\cal T$, stable for henselization, thus reducing the proof of the primality of $H_{2\ell+1}$ to the results of \S\ref{technical}. We study the extension of $\nu$ to $\tilde R$ modulo its first prime ideal and prove that such an extension is unique. In \S\ref{prime} we use the results of \S\ref{henselization} to prove the existence of $\nu$-extensions in $\cal T$, stable for completion. Combined with the results of \S\ref{technical} this proves that the ideals $H_{2\ell+1}$ are prime. In \S\ref{locuni1} we describe a possible approach and prove some partial results towards constructing a chain of trees (\ref{eq:chaintree}) of prime ideals of $\hat R'$ satisfying (\ref{eq:odd=even}) and a corresponding valuation $\hat\nu_-$ which satisfies the conclusion of Conjecture \ref{teissier}. We also prove a necessary and a sufficient condition for the uniqueness of $\hat\nu_-$, assuming Conjecture \ref{teissier}. We would like to acknowledge the paper \cite{HeSa} by Bill Heinzer and Judith Sally which inspired one of the authors to continue thinking about this subject, as well as the work of S.D. Cutkosky, S. El Hitti and L. Ghezzi: \cite{CG} (which contains results closely related to those of \S\ref{archimedian}) and \cite{CE}. \section{Extending a valuation of rank one centered in a local domain to its formal completion.} \label{archimedian} Let $(R,M,k)$ be a local noetherian domain, $K$ its field of fractions, and $\nu:K\rightarrow{\Gamma}\cup\{\infty\}$ a rank one valuation, centered at $R$ (that is, non-negative on $R$ and positive on $M$).\par Let $\hat R$ denote the formal completion of $R$. It is convenient to extend $\nu$ to a valuation centered at $\frac{\hat R}H$, where $H$ is a prime ideal of $\hat R$ such that $H\cap R=(0)$. In this section, we will assume that $\nu$ is of rank one, so that the value group ${\Gamma}$ is archimedian. We will explicitly describe a prime ideal $H$ of $\hat R$, canonically associated to $\nu$, such that $H\cap R=(0)$ and such that $\nu$ has a unique extension $\hat\nu_-$ to $\frac{\hat R}H$. Let ${\Phi}=\nu(R\setminus (0))$, let $\mathcal{P}_\beta$ denote the $\nu$-ideal of $R$ of value $\beta$ and $\mathcal{P}_{\beta}^+$ the greatest $\nu$-ideal properly contained in $\mathcal{P}_\beta$. We now define the main object of study of this section.
Let \begin{equation} H:=\bigcap\limits_{\beta\in{\Phi}}(\mathcal{P}_\beta\hat R).\label{tag51} \end{equation} \begin{remark}\label{Remark51} Since $R$ is noetherian, we have $\nu (M)>0$ and since the ordered group $\Gamma$ is archimedian, for every $\beta\in{\Phi}$ there exists $n\in\mathbf N$ such that $M^n\subset \mathcal{P}_\beta$. In other words, the $M$-adic topology on $R$ is finer than (or equal to) the $\nu$-adic topology. Therefore an element $x\in\hat R$ lies in $\mathcal{P}_\beta\hat R\iff$ there exists a Cauchy sequence $\{x_n\}\subset R$ in the $M$-adic topology, converging to $x$, such that $\nu(x_n)\ge\beta$ for all $n\iff$ for {\it every} Cauchy sequence $\{x_n\}\subset R$, converging to $x$, $\nu(x_n)\ge\beta$ for all $n\gg0$. By the same token, $x\in H\iff$ there exists a Cauchy sequence $\{x_n\}\subset R$, converging to $x$, such that $\lim\limits_{n\to\infty}\nu(x_n)=\infty\iff$ for {\it every} Cauchy sequence $\{x_n\}\subset R$, converging to $x$, $\lim\limits_{n\to\infty}\nu(x_n)=\infty$. \end{remark} \begin{example} Let $R=k[u,v]_{(u,v)}$. Then $\hat R=k[[u,v]]$. Consider an element $w=u-\sum\limits_{i=1}^\infty c_iv^i\in\hat R$, where $c_i\in k^*$ for all $i\in\mathbf N$, such that $w$ is transcendental over $k(u,v)$. Consider the injective map $\iota:k[u,v]_{(u,v)}\rightarrow k[[t]]$ which sends $v$ to $t$ and $u$ to $\sum\limits_{i=1}^\infty c_it^i$. Let $\nu$ be the valuation induced from the $t$-adic valuation of $k[[t]]$ via $\iota$. The value group of $\nu$ is $\mathbf Z$ and ${\Phi}=\mathbf N_0$. For each $\beta\in\mathbf N$, $\mathcal{P}_\beta=\left(v^\beta,u-\sum\limits_{i=1}^{\beta-1}c_iv^i\right)$. Thus $H=(w)$. \end{example} We come back to the general theory. Since the formal completion homomorphism $R\rightarrow\hat R$ is faithfully flat, \begin{equation} \mathcal{P}_\beta\hat R\cap R=\mathcal{P}_\beta\quad\text{for all }\beta\in{\Phi}.\label{tag52} \end{equation} Taking the intersection over all $\beta\in{\Phi}$, we obtain \begin{equation} H\cap R=\left(\bigcap\limits_{\beta\in{\Phi}}\left(\mathcal{P}_\beta\hat R\right)\right)\cap R=\bigcap\limits_{\beta\in{\Phi}}\mathcal{P}_\beta=(0).\label{tag53} \end{equation} In other words, we have a natural inclusion $R\hookrightarrow\frac{\hat R}H$. \begin{theorem}\label{th53} \begin{enumerate} \item $H$ is a prime ideal of $\hat R$. \item $\nu$ extends uniquely to a valuation $\hat\nu_-$, centered at $\frac{\hat R}H$. \end{enumerate} \end{theorem} \begin{proof} Let $\bar x\in\frac{\hat R}H\setminus\{0\}$. Pick a representative $x$ of $\bar x$ in $\hat R$, so that $\bar x= x\ {\operatorname{mod}}\ H$. Since $x\mbox{$\in$ \hspace{-.8em}/} H$, we have $ x\mbox{$\in$ \hspace{-.8em}/} \mathcal{P}_\alpha\hat R$ for some $\alpha\in{\Phi}$. \begin{lemma}\label{lemma36} {\rm (See \cite{ZS}, Appendix 5, lemma 3)} Let $\nu$ be a valuation of rank one centered in a local noetherian domain $(R,M,k)$. Let $$ {\Phi}=\nu(R\setminus (0))\subset{\Gamma}. $$ Then ${\Phi}$ contains no infinite bounded sequences. \end{lemma} \begin{proof} An infinite ascending sequence $\alpha_1<\alpha_2<\dots$ in ${\Phi}$, bounded above by an element $\beta\in{\Phi}$, would give rise to an infinite descending chain of ideals in $\frac R{\mathcal{P}_\beta}$. Thus it is sufficient to prove that $\frac R{\mathcal{P}_\beta}$ has finite length. Let $\delta:=\nu(M)\equiv\min({\Phi}\setminus\{0\})$. Since ${\Gamma}$ is archimedian, there exists $n\in\mathbf N$ such that $\beta\le n\delta$.
Then $M^n\subset \mathcal{P}_\beta$, so that there is a surjective map $\frac R{M^n}\twoheadrightarrow\frac R{\mathcal{P}_\beta}$. Thus $\frac R{\mathcal{P}_\beta}$ has finite length, as desired. \end{proof} By Lemma \ref{lemma36}, the set $\{\beta\in{\Phi}\ |\ \beta<\alpha\}$ is finite. Hence there exists a unique $\beta\in{\Phi}$ such that \begin{equation} x\in \mathcal{P}_\beta\hat R\setminus \mathcal{P}_{\beta}^+\hat R.\label{tag54} \end{equation} Note that $\beta$ depends only on $\bar x$, but not on the choice of the representative $ x$. Define the function $\hat\nu_-:\frac{\hat R}H\setminus\{0\}\rightarrow{\Phi}$ by \begin{equation} \hat\nu_-(\bar x)=\beta.\label{tag55} \end{equation} By (\ref{tag52}), if $x\in R\setminus\{0\}$ then \begin{equation} \hat\nu_-(x)=\nu(x). \end{equation} It is obvious that \begin{equation} \hat\nu_-(x+y)\ge\min\{\hat\nu_-(x),\hat\nu_-(y)\}\label{tag57} \end{equation} \begin{equation} \hat\nu_-(xy)\ge\hat\nu_-(x)+\hat\nu_-(y)\label{tag58} \end{equation} for all $x,y\in\frac{\hat R}H$. The point of the next lemma is to show that $\frac{\hat R}H$ is a domain and that $\hat\nu_-$ is, in fact, a valuation (i.e. that the inequality (\ref{tag58}) is, in fact, an equality). \begin{lemma}\label{lemma54} For any non-zero $\bar x,\bar y\in\frac{\hat R}H$, we have $\bar x\bar y\ne0$ and $\hat\nu_-(\bar x\bar y)=\hat\nu_-(\bar x)+\hat\nu_-(\bar y)$. \end{lemma} \begin{proof} Let $\alpha=\hat\nu_-(\bar x)$, $\beta=\hat\nu_-(\bar y)$. Let $ x$ and $ y$ be representatives in $\hat R$ of $\bar x$ and $\bar y$, respectively. We have $M\mathcal{P}_\alpha\subset \mathcal{P}_{\alpha}^+$, so that \begin{equation} \frac{\mathcal{P}_\alpha}{\mathcal{P}_{\alpha}^+}\cong\frac{\mathcal{P}_\alpha}{\mathcal{P}_{\alpha}^++M\mathcal{P}_\alpha}\cong \frac{\mathcal{P}_\alpha}{\mathcal{P}_{\alpha}^+}\otimes_Rk\cong\frac{\mathcal{P}_\alpha}{\mathcal{P}_{\alpha}^+} \otimes_R\frac{\hat R}{M\hat R}\cong\frac{\mathcal{P}_\alpha\hat R}{(\mathcal{P}_{\alpha}^++M\mathcal{P}_\alpha)\hat R}\cong\frac{\mathcal{P}_\alpha\hat R}{\mathcal{P}_{\alpha}^+\hat R},\label{tag59} \end{equation} and similarly for $\beta$. By (\ref{tag59}) there exist $z\in \mathcal{P}_\alpha$, $w\in \mathcal{P}_\beta$, such that $z\equiv x\ {\operatorname{mod}}\ \mathcal{P}_{\alpha}^+\hat R$ and $w\equiv y\ {\operatorname{mod}}\ \mathcal{P}_{\beta}^+\hat R$. Then \begin{equation} xy\equiv zw\ {\operatorname{mod}}\ \mathcal{P}_{\alpha+\beta}^+\hat R.\label{tag510} \end{equation} Since $\nu$ is a valuation, $\nu(zw)=\alpha+\beta$, so that $zw\in \mathcal{P}_{\alpha+\beta}\setminus \mathcal{P}_{\alpha+\beta}^+$. By (\ref{tag52}) and (\ref{tag510}), this proves that $xy\in \mathcal{P}_{\alpha+\beta}\hat R\setminus \mathcal{P}_{\alpha+\beta}^+\hat R$. Thus $xy\mbox{$\in$ \hspace{-.8em}/} H$ (hence $\bar x\bar y\ne0$ in $\frac{\hat R}H$) and $\hat\nu_-(\bar x\bar y)=\alpha+\beta$, as desired. \end{proof} By Lemma \ref{lemma54}, $H$ is a prime ideal of $\hat R$. By (\ref{tag57}) and Lemma \ref{lemma54}, $\hat\nu_-$ is a valuation, centered at $\frac{\hat R}H$. To complete the proof of Theorem \ref{th53}, it remains to prove the uniqueness of $\hat\nu_-$. Let $x$, $\bar x$, the element $\alpha\in{\Phi}$ and \begin{equation} z\in \mathcal{P}_\alpha\setminus \mathcal{P}_{\alpha}^+\label{tag511} \end{equation} be as in the proof of Lemma \ref{lemma54}. 
Then there exist \begin{equation} \begin{array}{rl} u_1,\ldots , u_n&\in \mathcal{P}_{\alpha}^+\text{ and}\\ v_1,\ldots ,v_n&\in\hat R \end{array}\label{tag512} \end{equation} such that $ x=z+\sum\limits_{i=1}^nu_i v_i$. Letting $\bar v_i:=v_i\ {\operatorname{mod}}\ H$, we obtain $\bar x=\bar z+\sum\limits_{i=1}^n\bar u_i\bar v_i$ in $\frac{\hat R}H$. Therefore, by (\ref{tag511})--(\ref{tag512}), for any extension of $\nu$ to a valuation $\hat\nu '_-$, centered at $\frac{\hat R}H$, we have \begin{equation} \hat\nu '_-(\bar x)=\alpha=\hat\nu_-(\bar x),\label{tag513} \end{equation} as desired. This completes the proof of Theorem \ref{th53}. \end{proof} \begin{definition}\label{deft55} The ideal $H$ is called the {\bf implicit prime ideal} of $\hat R$, associated to $\nu$. When dealing with more than one ring at a time, we will sometimes write $H(R,\nu)$ for $H$. \end{definition} More generally, let $\nu$ be a valuation centered at $R$, not necessarily of rank one. In any case, we may write $\nu$ as a composition $\nu=\mu_2\circ\mu_1$, where $\mu_2$ is centered at a non-maximal prime ideal $P$ of $R$ and $\mu_1\left|_{\frac RP}\right.$ is of rank one. The valuation $\mu_1\left|_{\frac RP}\right.$ is centered at $\frac RP$. We define the {\bf implicit prime ideal} of $R$ with respect to $\nu$, denoted $H(R,\nu)$, to be the inverse image in $\hat R$ of the implicit prime ideal of $\frac{\hat R}P$ with respect to $\mu_1\left|_{\frac RP}\right.$. For the rest of this section, we will continue to assume that $\nu$ is of rank one. \begin{remark}\label{Remark56} By (\ref{tag59}), we have the following natural isomorphisms of graded algebras: $$ \begin{array}{rl} \mbox{gr}_\nu R&\cong\mbox{gr}_{\hat\nu_-}\frac{\hat R}H\\ G_\nu&\cong G_{\hat\nu_-}. \end{array} $$ \end{remark} \par We will now study the behaviour of $H$ under local blowings up of $R$ with respect to $\nu$ and, more generally, under local homomorphisms. Let $\pi:(R,M)\rightarrow(R',M')$ be a local homomorphism of local noetherian domains. Assume that $\nu$ extends to a rank one valuation $\nu':R'\setminus\{0\}\rightarrow{\Gamma}'$, where ${\Gamma}'\supset{\Gamma}$. The homomorphism $\pi$ induces a local homomorphism $\hat\pi:\hat R\rightarrow\hat R'$ of formal completions. Let ${\Phi}'=\nu'(R'\setminus\{0\})$. For $\beta\in{\Phi}'$, let $\mathcal{P}'_\beta$ denote the $\nu'$-ideal of $R'$ of value $\beta$, as above. Let $H'=H(R',\nu')$. \begin{lemma}\label{lemma58} Let $\beta\in{\Phi}$. Then \begin{equation} \left(\mathcal{P}'_\beta\hat R'\right)\cap\hat R=\mathcal{P}_\beta\hat R.\label{tag516} \end{equation} \end{lemma} \begin{proof} Since by assumption $\nu'$ extends $\nu$ we have $\mathcal{P}'_\beta\cap R=\mathcal{P}_\beta$ and the inclusion \begin{equation} \left(\mathcal{P}'_\beta\hat R'\right)\cap\hat R\supseteq \mathcal{P}_\beta\hat R.\label{tag517} \end{equation} We will now prove the opposite inclusion. Take an element $x\in\left(\mathcal{P}'_\beta\hat R'\right)\cap\hat R$. Let $\{x_n\}\subset R$ be a Cauchy sequence in the $M$-adic topology, converging to $x$. Then $\{\pi(x_n)\}$ converges to $\hat\pi(x)$ in the $M'$-adic topology of $\hat R'$. Applying remark \ref{Remark51} to $R'$, we obtain \begin{equation} \nu(x_n)\equiv\nu'(\pi(x_n))\ge\beta\quad\text{for }n\gg0.\label{tag518} \end{equation} By (\ref{tag518}) and Remark \ref{Remark51}, applied to $R$, we have $x\in \mathcal{P}_\beta\hat R$. This proves the opposite inclusion in (\ref{tag517}), as desired.
\end{proof} \begin{corollary}\label{Corollary59} We have $$ H'\cap\hat R=H. $$ \end{corollary} \begin{proof} Since $\nu'$ is of rank one, ${\Phi}$ is cofinal in ${\Phi}'$. Now the Corollary follows by taking the intersection over all $\beta\in{\Phi}$ in (\ref{tag516}). \end{proof} Let $J$ be a non-zero ideal of $R$ and let $R\rightarrow R'$ be the local blowing up along $J$ with respect to $\nu$. Take an element $f\in J$, such that $\nu(f)=\nu(J)$. By the {\bf strict transform} of $J$ in $\hat R'$ we will mean the ideal $$ J^{\text{str}}:=\bigcup\limits_{i=1}^\infty\left(\left(J\hat R'\right):f^i\right)\equiv\left(J\hat R'_f\right)\cap\hat R'. $$ If $g$ is another element of $J$ such that $\nu(g)=\nu(J)$ then $\nu\left(\frac fg\right)=0$, so that $\frac fg$ is a unit in $R'$. Thus the definition of strict transform is independent of the choice of $f$. \begin{corollary}\label{Corollary510} $H^{\text{str}}\subset H'$. \end{corollary} \begin{proof} Since $H\hat R'\subset H'$, we have $H^{\text{str}}=\left(H\hat R'_f\right)\cap\hat R'\subset\left(H'\hat R'_f\right)\cap\hat R'=H'$, where the last equality holds because $H'$ is a prime ideal of $\hat R'$, not containing $f$. \end{proof} Using Zariski's Main Theorem, it can be proved that $H^{\text{str}}$ is prime. Since this fact is not used in the sequel, we omit the proof. \begin{corollary}\label{Corollary511} Let the notation and assumptions be as in corollary \ref{Corollary510}. Then \begin{equation} {\operatorname{ht}}\ H'\ge{\operatorname{ht}}\ H.\label{tag519} \end{equation} In particular, \begin{equation} \dim\frac{\hat R'}{H'}\le\dim\frac{\hat R}H.\label{tag520} \end{equation} \end{corollary} \begin{proof} Let $\bar R:=\left(\hat R\otimes_RR'\right)_{M'\hat R'\cap(\hat R\otimes_RR')}$. Let $\phi$ denote the natural local homomorphism $$ \bar R\rightarrow\hat R'. $$ Let $\bar H:=H'\cap\bar R$. Now, take $f\in J$ such that $\nu(f)=\nu(J)$. Then $f\mbox{$\in$ \hspace{-.8em}/} H'$ and, in particular, $f\mbox{$\in$ \hspace{-.8em}/}\bar H$. Since $R'_f\cong R_f$, we have $\hat R_f=\bar R_f$. In view of Corollary \ref{Corollary59}, we obtain $H\hat R_f\cong\bar H\bar R_f$, so \begin{equation} {\operatorname{ht}} \ H={\operatorname{ht}}\ \bar H.\label{tag521} \end{equation} Now, $\bar R$ is a local noetherian ring, whose formal completion is $\hat R'$. Hence $\phi$ is faithfully flat and therefore satisfies the going down theorem. Thus we have ${\operatorname{ht}}\ H'\ge{\operatorname{ht}}\ \bar H$. Combined with (\ref{tag521}), this proves (\ref{tag519}). As for the last statement of the Corollary, it follows from the well known fact that dimension does not increase under blowing up (\cite{Spi1}, Lemma 2.2): we have $\dim\ R'\le\dim\ R$, hence $$ \dim\ \hat R'=\dim R'\le\dim\ R=\dim\ \hat R, $$ and (\ref{tag520}) follows from (\ref{tag519}) and from the fact that complete local rings are catenarian. \end{proof} It may well happen that the containment of corollary \ref{Corollary510} and the inequality in (\ref{tag519}) are strict. The possibility of strict containment in corollary \ref{Corollary510} is related to the existence of subanalytic functions, which are not analytic. We illustrate this statement by an example in which $H^{\text{str}}\subsetneqq H'$ and ${\operatorname{ht}}\ H<{\operatorname{ht}}\ H'$. \begin{example} Let $k$ be a field and let $$ \begin{array}{rl} R&=k[x,y,z]_{(x,y,z)},\\ R'&=k[x',y',z']_{(x',y',z')}, \end{array} $$ where $x'=x$, $y'=\frac yx$ and $z'=z$.
We have $K=k(x,y,z)$, $\hat R=k[[x,y,z]]$, $\hat R'=k[[x',y',z']]$. Let $t_1,t_2$ be auxiliary variables and let $\sum\limits_{i=1}^\infty c_it_1^i$ (with $c_i\in k$) be an element of $k[[t_1]]$, transcendental over $k(t_1)$. Let $\theta$ denote the valuation, centered at $k[[t_1,t_2]]$, defined by $\theta(t_1)=1$, $\theta(t_2)=\sqrt2$ (the value group of $\theta$ is the additive subgroup of $\mathbf R$, generated by 1 and $\sqrt2$). Let $\iota:R'\hookrightarrow k[[t_1,t_2]]$ denote the injective map defined by $\iota(x')=t_2$, $\iota(y')=t_1$, $\iota(z')=\sum\limits_{i=1}^\infty c_it_1^i$. Let $\nu$ denote the restriction of $\theta$ to $K$, where we view $K$ as a subfield of $k((t_1,t_2))$ via $\iota$. Let ${\Phi}=\nu(R\setminus\{0\})$; ${\Phi}'=\nu(R'\setminus\{0\})$. For $\beta\in{\Phi}'$, $P'_\beta$ is generated by all the monomials of the form ${x'}^\alpha{y'}^\gamma$ such that $\sqrt2\alpha+\gamma\ge\beta$, together with $z'-\sum\limits_{j=1}^ic_j{y'}^j$, where $i$ is the greatest non-negative integer such that $i<\beta$. Let $w':=z'-\sum\limits_{i=1}^\infty c_i{y'}^i$. Then $H'=(w')$, but $H=H'\cap\hat R=(0)$, so that $H^{\text{str}}=(0)\subsetneqq H'$ and ${\operatorname{ht}}\ H=0<1={\operatorname{ht}}\ H'$. \end{example} Recall the following basic result of the theory of G-rings: \begin{proposition}\label{Corollary57} Assume that $R$ is a reduced G-ring. Then $\hat R_H$ is a regular local ring. \end{proposition} \begin{proof} Let $K=R_{\mathcal{P}_\infty}=\kappa(\mathcal{P}_\infty)$ (here we are using that $R$ is reduced and that $\mathcal{P}_\infty$ is a minimal prime of $R$). By definition of G-ring, the map $R\rightarrow\hat R$ is a regular homomorphism. Then by (\ref{tag53}) $\hat R_H$ is geometrically regular over $K$, hence regular. \end{proof} \begin{remark} Having extended in a unique manner the valuation $\nu$ to a valuation $\hat\nu_-$ of $\frac{\hat R}{H}$, we see that if $R$ is a G-ring, by Proposition \ref{Corollary57} there is a unique minimal prime $\hat\mathcal{P}_\infty$ of $\hat R$ contained in $H$, corresponding to the ideal $(0)$ in $\hat R_H$. Since $H\cap R=(0)$, we have the equality $\hat\mathcal{P}_\infty\cap R=(0)$. Choosing a valuation $\mu$ of the fraction field of $\frac{\hat R_H}{\hat\mathcal{P}_\infty\hat R_H}$ centered at $\frac{\hat R_H}{\hat\mathcal{P}_\infty\hat R_H}$ whose value group $\Psi$ is a free abelian group produces a composed valuation $\hat\nu_-\circ\mu$ on $\frac{\hat R}{\hat\mathcal{P}_\infty}$ with value group $\Psi\bigoplus\Gamma$ ordered lexicographically, as follows:\par\noindent Given $x\in \frac{\hat R}{\hat\mathcal{P}_\infty}$, let $\psi=\mu(x)$ and blow up in $R$ the ideal $\mathcal{P}_\psi$ along our original valuation, obtaining a local ring $R'$. According to what we have seen so far in this section, in its completion $\hat R'$ we can write $x=ye$ with $\mu(e)=\psi$ and $y\in \hat R'\setminus H'$. The valuation $\nu$ on $R'$ extends uniquely to a valuation of $\frac{\hat R'}{H'}$, which we may still denote by $\hat\nu_-$ because it induces $\hat\nu_-$ on $\frac{\hat R}{H}$. Let us consider the image $\overline y$ of $y$ in $\frac{\hat R'}{H'}$. 
Setting $(\hat\nu_-\circ\mu)(x)=\psi\bigoplus \hat\nu_-(\overline y)\in \Psi\bigoplus\Gamma$ determines a valuation of $\frac{\hat R}{\hat\mathcal{P}_\infty}$ as required.\par If we drop the assumption that $\Psi$ is a free abelian group, the above construction still works, but the value group $\hat\Gamma$ of $\hat\nu_-\circ\mu$ need not be isomorphic to the direct sum $\Psi\bigoplus\Gamma$. Rather, we have an exact sequence $0\rightarrow\Gamma\rightarrow\bar\Gamma\rightarrow\Psi\rightarrow0$, which need not, in general, be split; see {\rm\cite{V1}, Proposition 4.3}. \noindent In the sequel we shall reduce to the case where $\hat R$ is an integral domain, so that $\hat\mathcal{P}_\infty=(0)$ and we will have constructed a valuation of $\hat R$. \end{remark} \section{Definition and first properties of implicit ideals.} \label{basics} Let the notation be as above. Before plunging into technical details, we would like to give a brief and informal overview of our constructions and the motivation for them. Above we recalled the well known fact that if $rk\ \nu=r$ then for every $\nu$-extension $R\rightarrow R'$ the valuation $\nu$ canonically determines a flag (\ref{eq:treechain'}) of $r$ subschemes of $\mbox{Spec}\ R'$. This paper shows the existence of subschemes of $\mbox{Spec}\ \hat R$, determined by $\nu$, which are equally canonical and which become explicit only after completion. To see what they are, first of all note that the ideal $P'_l\hat R'$, for $R'\in\cal{T}$ and $0\le\ell\le r-1$, need not in general be prime (although it is prime whenever $R'$ is henselian). Another way of saying the same thing is that the ring $\frac{R'}{P'_\ell}$ need not be analytically irreducible in general. However, we will see in \S\ref{prime} (resp. \S\ref{henselization}) that the valuation $\nu$ picks out in a canonical way one of the minimal primes of $P'_l\hat R'$ (resp. $P'_l\tilde R'$). We call this minimal prime $H'_{2\ell}$ for reasons which will become apparent later. By the flatness of completion (resp. henselization), we have $H'_{2\ell}\cap R'=P'_\ell$. We will show that the ideals $H'_{2\ell}$ form a tree. Let \begin{equation} (0)=\Delta_r\subsetneqq\Delta_{r-1}\subsetneqq\dots\subsetneqq\Delta_0= \Gamma\label{eq:isolated1} \end{equation} be the isolated subgroups of $\Gamma$. There are other ideals of $\hat R$, apart from the $H_{2\ell}$, canonically associated to $\nu$, whose intersection with $R$ equals $P_\ell$, for example, the ideal $\bigcap\limits_{\beta\in\Delta_\ell}\mathcal{P}_\beta\hat R$. The same is true of the even larger ideal \begin{equation} H_{2\ell+1}=\bigcap\limits_{\beta\in\Delta_\ell} \left(\left(\lim\limits_{\overset\longrightarrow{R'}}{\cal P}'_\beta\hat R'\right)\bigcap\hat R\right),\label{eq:defin2} \end{equation} (that $H_{2\ell+1}\cap R=P_\ell$ is easy to see and will be shown later in this section, in Proposition \ref{contracts}). While the examples below show that the ideal $\bigcap\limits_{\beta\in\Delta_\ell}\mathcal{P}_\beta\hat R$ need not, in general, be prime, the ideal $H_{2\ell+1}$ always is (this is the main theorem of this paper; it will be proved in \S\ref{prime}). The ideal $H_{2\ell+1}$ contains $H_{2\ell}$ but is not, in general equal to it. To summarize, we will show that the valuation $\nu$ picks out in a canonical way a generic point $H_{2\ell}$ of the formal fiber over $P_\ell$ and also another point $H_{2\ell+1}$ in the formal fiber, which is a specialization of $H_{2\ell}$. 
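In other words (this display merely restates the preceding paragraph in symbols), over each prime valuation ideal $P_\ell$ the valuation singles out two canonical points of the formal fiber:
$$
P_\ell\hat R\subseteq H_{2\ell}\subseteq H_{2\ell+1},\qquad H_{2\ell}\cap R=H_{2\ell+1}\cap R=P_\ell,
$$
with $H_{2\ell}$ a minimal prime of $P_\ell\hat R$ (a generic point of the formal fiber over $P_\ell$) and $H_{2\ell+1}$ a specialization of it.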
The main technique used to prove these results is to analyze the set of zero divisors of $\frac{{R'}^\dag}{P'_\ell{R'}^\dag}$ (where $R^\dag$ stands for either $\hat R$, $\tilde R$, or a finite type \'etale extension $R^e$ of $R$), as follows. We show that the reducibility of $\frac{{R'}^\dag}{P'_\ell{R'}^\dag}$ is related to the existence of non-trivial algebraic extensions of $\kappa(P_\ell)$ inside $\kappa(P_\ell)\otimes_RR^\dag$. More precisely, in the next section we define $R$ to be \textbf{stable} if $\frac{R^\dag}{P_{\ell +1}R^\dag}$ is a domain and there does not exist a non-trivial algebraic extension of $\kappa(P_{\ell+1})$ which embeds both into $\kappa(P_{\ell +1})\otimes_RR^\dag$ and into $\kappa(P'_{\ell+1})$ for some $R'\in\mathcal{T}$. We show that if $R$ is stable then $\frac{{R'}^\dag}{P'_{\ell+1}{R'}^\dag}$ is a domain for all $R'\in\cal T$. For $\overline\beta\in\frac\Gamma{\Delta_{\ell+1}}$, let \begin{equation} \mathcal{P}_{\overline\beta}=\left\{x\in R\ \left|\ \nu(x)\mod\Delta_{\ell+1}\ge\overline\beta\right.\right\}\label{eq:pbetamodl} \end{equation} If $\Phi$ denotes the semigroup $\nu(R\setminus\{0\})\subset\Gamma$, which is well ordered since $R$ is noetherian (see \cite{ZS}, Appendix 4, Proposition 2), and $$ \beta(\ell)=\min\{\gamma\in\Phi\ |\ \beta-\gamma\in\Delta_{\ell+1}\} $$ then $\mathcal{P}_{\overline\beta}=\mathcal{P}_{\beta(\ell)}$.\par\noindent We have the inclusions $$ P_\ell\subset \mathcal{P}_{\overline\beta}\subset P_{\ell+1}, $$ and $\mathcal{P}_{\overline\beta}$ is the inverse image in $R$ by the canonical map $R\to \frac{R}{P_\ell}$ of a valuation ideal $\overline \mathcal{P}_{\overline\beta}\subset \frac{R}{P_\ell}$ for the rank one valuation $\frac{R}{P_\ell}\setminus\{0\}\to \frac{\Delta_\ell}{\Delta_{\ell+1}}$ induced by $\nu_{\ell+1}$.\par We will deduce from the above that if $R$ is stable then for each $\overline\beta\in\frac{\Delta_\ell}{\Delta_{\ell+1}}$ and each $\nu$-extension $R\rightarrow R'$ we have $\mathcal{P}'_{\overline\beta}{R'}^\dag\cap R^\dag=\mathcal{P}_{\overline\beta} R^\dag$, which gives us a very good control of the limit in the definition of $H_{2\ell+1}$ and of the $\nu$-extensions $R'$ for which the limit is attained. We then show, separately in the cases when $R^\dag=\tilde R$ (\S\ref{henselization}) and $R^\dag=\hat R$ (\S\ref{prime}), that there always exists a stable $\nu$-extension $R'\in\cal{T}$. We are now ready to go into details, after giving several examples of implicit ideals and the phenomena discussed above. Let $0\le\ell\le r$. We define our main object of study, the $(2\ell+1)$-st implicit prime ideal $H_{2\ell+1}\subset R^\dag$, by \begin{equation} H_{2\ell+1}=\bigcap\limits_{\beta\in\Delta_\ell} \left(\left(\lim\limits_{\overset\longrightarrow{R'}}{\cal P}'_\beta {R'}^\dag\right)\bigcap R^\dag\right),\label{eq:defin1} \end{equation} where $R'$ ranges over $\mathcal{T}$. As usual, we think of (\ref{eq:defin1}) as a tree equation: if we replace $R$ by any other $R''\in{\cal T}$ in (\ref{eq:defin1}), it defines the corresponding ideal $H''_{2\ell +1}\subset{R''}^\dag$. Note that for $\ell=r$ (\ref{eq:defin1}) reduces to $$ H_{2r+1}=mR^\dag. $$ We start by giving several examples of the ideals $H'_i$ (and also of $\tilde H'_i$, which will appear a little later in the paper). \begin{example}{\label{Example31}} Let $R=k[x,y,z]_{(x,y,z)}$. Let $\nu$ be the valuation with value group $\Gamma=\mathbf Z^2_{lex}$, defined as follows.
Take a transcendental power series $\sum\limits_{j=1}^\infty c_ju^j$ in a variable $u$ over $k$. Consider the homomorphism $R\hookrightarrow k[[u,v]]$ which sends $x$ to $v$, $y$ to $u$ and $z$ to $\sum\limits_{j=1}^\infty c_ju^j$. Consider the valuation $\nu$, centered at $k[[u,v]]$, defined by $\nu(v)=(0,1)$ and $\nu(u)=(1,0)$; its restriction to $R$ will also be denoted by $\nu$, by abuse of notation. Let $R_\nu$ denote the valuation ring of $\nu$ in $k(x,y,z)$ and let $\mathcal{T}$ be the tree consisting of all the local rings $R'$ essentially of finite type over $R$, birationally dominated by $R_\nu$. Let ${}^\dag=\hat{\ }$ denote the operation of formal completion. Given $\beta=(a,b)\in\mathbf Z^2_{lex}$, we have ${\mathcal{P}}_\beta=x^b\left(y^a,z-c_1y-\dots-c_{a-1}y^{a-1}\right)$. The first isolated subgroup $\Delta_1=(0)\oplus\mathbf Z$. Then $\bigcap\limits_{\beta\in(0)\oplus\mathbf Z}\left({\cal P}_\beta\hat R\right)=(y,z)$ and $\bigcap\limits_{\beta\in\Gamma=\Delta_0}\left({\cal P}_\beta\hat R\right)=\left(z-\sum\limits_{j=1}^\infty c_jy^j\right)$. It is not hard to show that for any $R'\in\mathcal{T}$ we have $H'_1=\left(z-\sum\limits_{j=1}^\infty c_jy^j\right)\hat R'$ and that $H_3=(y,z)\hat R$. It will follow from the general theory developed in \S\ref{extensions} that $\nu$ admits a unique extension $\hat\nu$ to $\lim\limits_{\overset\longrightarrow{R'}}\hat R'$. This extension has value group $\hat\Gamma=\mathbf Z^3_{lex}$ and is defined by $\hat\nu(x)=(0,0,1)$, $\hat\nu(y)=(0,1,0)$ and $\hat\nu\left(z-\sum\limits_{j=1}^\infty c_jy^j\right)=(1,0,0)$. For each $R'\in\mathcal{T}$ the ideal $H'_1$ is the prime valuation ideal corresponding to the isolated subgroup $(0)\oplus\mathbf Z^2_{lex}$ of $\hat\Gamma$ (that is, the ideal whose elements have values outside of $(0)\oplus\mathbf Z^2_{lex}$) while $H'_3$ is the prime valuation ideal corresponding to the isolated subgroup $(0)\oplus(0)\oplus\mathbf Z$. \end{example} \begin{example}{\label{Example32}} Let $R=k[x,y,z]_{(x,y,z)}$, $\Gamma=\mathbf Z^2_{lex}$, the power series $\sum\limits_{j=1}^\infty c_ju^j$ and the operation ${}^\dag=\hat{\ }$ be as in the previous example. This time, let $\nu$ be defined as follows. Consider the homomorphism $R\hookrightarrow k[[u,v]]$ which sends $x$ to $u$, $y$ to $\sum\limits_{j=1}^\infty c_ju^j$ and $z$ to $v$. Consider the valuation $\nu$, centered at $k[[u,v]]$, defined by $\nu(v)=(1,0)$ and $\nu(u)=(0,1)$; its restriction to $R$ will be also denoted by $\nu$. Let $R_\nu$ denote the valuation ring of $\nu$ in $k(x,y,z)$ and let $\mathcal{T}$ be the tree consisting of all the local rings $R'$ essentially of finite type over $R$, birationally dominated by $R_\nu$. Given $\beta=(a,b)\in\mathbf Z^2_{lex}$, we have $\mathcal{P}_\beta=z^a\left(x^b,y-c_1x-\dots-c_{b-1}x^{b-1}\right)$. The first isolated subgroup $\Delta_1=(0)\oplus\mathbf Z$. Then $\bigcap\limits_{\beta\in(0)\oplus\mathbf Z}\left({\mathcal{P}}_\beta\hat R\right)=\left(y-\sum\limits_{j=1}^\infty c_jx^j,z\right)$ and $\bigcap\limits_{\beta\in\Gamma=\Delta_0}\left({\cal P}_\beta\hat R\right)=(0)$. It is not hard to show that for any $R'\in\cal{T}$ we have $H'_1=(0)$ and that $H_3=\left(y-\sum\limits_{j=1}^\infty c_jx^j,z\right)\hat R'$. In this case, the extension $\hat\nu$ to $\lim\limits_{\overset\longrightarrow{R'}}\hat R'$ is not unique. 
Indeed, one possible extension $\hat\nu^{(1)}$ has value group $\hat\Gamma=\mathbf Z^3_{lex}$ and is defined by $\hat\nu^{(1)}(x)=(0,0,1)$, $\hat\nu^{(1)}\left(y-\sum\limits_{j=1}^\infty c_jx^j\right)=(0,1,0)$ and $\hat\nu^{(1)}(z)=(1,0,0)$. In this case, for any $R'\in\cal{T}$ the ideal $H'_3$ is the prime valuation ideal corresponding to the isolated subgroup $(0)\oplus(0)\oplus\mathbf Z$ of $\hat\Gamma$. Another extension $\hat\nu^{(2)}$ of $\nu$ is defined by $\hat\nu^{(2)}(x)=(0,0,1)$, $\hat\nu^{(2)}\left(y-\sum\limits_{j=1}^\infty c_jx^j\right)=(1,0,0)$ and $\hat\nu^{(2)}(z)=(0,1,0)$. In this case, the tree of ideals corresponding to the isolated subgroup $(0)\oplus(0)\oplus\mathbf Z$ is $H'_3$ (exactly the same as for $\hat\nu^{(1)}$) while that corresponding to $(0)\oplus\mathbf Z^2_{lex}$ is $\tilde H'_1=\left(y-\sum\limits_{j=1}^\infty c_jx^j\right)$. The tree $\tilde H'_1$ of prime $\hat\nu^{(2)}$-ideals determines the extension $\hat\nu^{(2)}$ completely. \end{example} The following two examples illustrate the need for taking the limit over the tree $\mathcal{T}$. \begin{example}{\label{Example33}} Let us consider the local domain $S=\frac{k[x,y]_{(x,y)}}{(y^2-x^2-x^3)}$. There are two distinct valuations centered in $(x,y)$. Let $a_i\in k,\ i\geq 2$ be such that $$ \left(y+x+\sum_{i\geq 2}a_ix^i\right)\left(y-x-\sum_{i\geq 2}a_ix^i\right)=y^2-x^2-x^3. $$ We shall denote by $\nu_+$ the rank one discrete valuation defined by $$ \nu_+(x)=\nu_+(y)=1, $$ $$ \nu_+(y+x)=2, $$ $$ \nu_+\left(y+x+\sum_{i= 2}^{b-1}a_ix^i\right)=b. $$ Now let $R=\frac{k[x,y,z]_{(x,y,z)}}{(y^2-x^2-x^3)}$. Let $\Gamma =\mathbf Z^2$ with the lexicographical ordering. Let $\nu$ be the composite valuation of the $(z)$-adic one with $\nu_+$, centered in $\frac R{(z)}$. The point of this example is to show that $$ H^*_{2\ell+1}=\bigcap_{\beta\in\Delta_{\ell}}\mathcal{P}_{\beta}{\hat R} $$ does not work as the definition of the $(2\ell+1)$-st implicit prime ideal because the resulting ideal $H^*_{2\ell+1}$ is not prime. Indeed, as $\mathcal{P}_{(a,0)}=(z^a)$, we have $$ H_1^*=\bigcap_{(a,b)\in\mathbf{Z}^2}\mathcal{P}_{(a,b)}{\hat R}=(0). $$ Let $f=y+x+\sum\limits_{i\geq 2}a_ix^i,g=y-x-\sum\limits_{i\geq 2}a_ix^i\in\hat R$. Clearly $f,g\mbox{$\in$ \hspace{-.8em}/} H^*_1=(0)$, but $f\cdot g=0$, so the ideal $H^*_1$ is not prime. One might be tempted (as we were) to correct this problem by localizing at $H^*_{2\ell+3}$. Indeed, if we take the new definition of $H^*_{2\ell+1}$ to be, recursively in the descending order of $\ell$, \begin{equation} H^*_{2\ell +1}=\left(\bigcap_{\beta\in\Delta_{\ell}}\mathcal{P}_{\beta}{\hat R}_{H^*_{2\ell +3}}\right)\cap{\hat R},\label{eq:localization} \end{equation} then in the present example the resulting ideals $H^*_3=(z,f)$ and $H^*_1=(f)$ are prime. However, the next example shows that the definition (\ref{eq:localization}) also does not, in general, give rise to prime ideals. \end{example} \begin{example}{\label{Example34}} Let $R=\frac{k[x,y,z]_{(x,y,z)}}{(z^2-y^2(1+x))}$. Let $\Gamma=\mathbf Z^2$ with the lexicographical ordering. Let $t$ be an independent variable and let $\nu$ be the valuation, centered in $R$, induced by the $t$-adic valuation of $k\left[\left[t^\Gamma\right]\right]$ under the injective homomorphism $\iota:R\hookrightarrow k\left[\left[t^\Gamma\right]\right]$, defined by $\iota(x)=t^{(0,1)}$, $\iota(y)=t^{(1,0)}$ and $\iota(z)=t^{(1,0)}\sqrt{1+t^{(0,1)}}$. The prime $\nu$-ideals of $R$ are $(0)\subsetneqq P_1\subsetneqq m$, with $P_1=(y,z)$. 
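As a quick check of this setup (here we assume, as the formula for $\iota(z)$ implicitly does, that $\sqrt{1+t^{(0,1)}}$ can be expanded by the binomial series, which holds for instance when $\operatorname{char}k\ne2$), the homomorphism $\iota$ is well defined because
$$
\iota(z)^2=t^{(2,0)}\left(1+t^{(0,1)}\right)=\iota(y)^2\left(1+\iota(x)\right),
$$
and $\nu(x)=(0,1)$ lies in $\Delta_1=(0)\oplus\mathbf Z$ while $\nu(y)=\nu(z)=(1,0)$ does not; since every element of $R\setminus(y,z)$ is congruent modulo $(y,z)$ to a non-zero element of $k[x]_{(x)}$ and therefore has value in $\Delta_1$, this gives $P_1=(y,z)$.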
We have $\bigcap\limits_{\beta\in\Delta_1}\mathcal{P}_\beta\hat R=(y,z)\hat R=P_1\hat R$ and $\bigcap\limits_{\beta\in\Gamma}\mathcal{P}_\beta\hat R_{(y,z)}=\bigcap\limits_{\beta\in\Gamma}\mathcal{P}_\beta\hat R=(0)$. Note that the ideal $(0)$ is not prime in $\hat R$. Now, let $R'=R\left[\frac zy\right]_{m'}$, where $m'=\left(x,y,\frac zy-1\right)$ is the center of $\nu$ in $R\left[\frac zy\right]$. We have $z-y\sqrt{1+x}\in\hat R\setminus\mathcal{P}_{(2,0)}\hat R$. On the other hand, $z-y\sqrt{1+x}=y\left(\frac zy-\sqrt{1+x}\right)=0$ in $\hat R'$; in particular, $z-y\sqrt{1+x}\in\bigcap\limits_{\beta\in\Gamma}\mathcal{P}'_\beta\hat R'$. Thus this example also shows that the ideals $\mathcal{P}_\beta\hat R$, $\bigcap\limits_{\beta\in\Delta_\ell}\mathcal{P}_\beta\hat R$ and $\bigcap\limits_{\beta\in\Delta_\ell}\mathcal{P}_\beta\hat R_{H_{2\ell+3}}$ do not behave well under blowing up. \end{example} Note that the phenomena of both Examples \ref{Example33} and \ref{Example34} occur not only for the completion $\hat R$ but also for the henselization $\tilde R$. We come back to the general theory of implicit ideals. \begin{proposition}\label{contracts} We have $H_{2\ell+1}\cap R=P_\ell$. \end{proposition} \begin{proof} Recall that $P_\ell=\left\{x\in R\ \left|\ \ \nu(x)\mbox{$\in$ \hspace{-.8em}/} \Delta_\ell\right.\right\}$. If $x\in P_\ell$ then, since $\Delta_\ell$ is an isolated subgroup, we have $x\in {\cal P}_\beta$ for all $\beta\in \Delta_\ell$. The same inclusion holds for the same reason in all extensions $R'\subset R_\nu$ of $R$, and this implies the inclusion $P_\ell\subseteq H_{2\ell+1}\cap R$. Now let $x$ be in $H_{2\ell+1}\cap R$ and assume $x\mbox{$\in$ \hspace{-.8em}/} P_\ell$. Then there is a $\beta\in \Delta_\ell$ such that $x\mbox{$\in$ \hspace{-.8em}/}{\cal P}_\beta$. By faithful flatness of $R^\dag$ over $R$ we have ${\cal P}_\beta R^\dag\cap R={\cal P}_\beta$. This implies that $x\mbox{$\in$ \hspace{-.8em}/}{\cal P}_\beta R^\dag$, and the same argument holds in all the extensions $R'\in\mathcal{T}$, so $x$ cannot be in $H_{2\ell+1}\cap R$. This contradiction shows the desired equality. \end{proof} \begin{proposition}\label{behavewell} The ideals $H'_{2\ell+1}$ behave well under $\nu$-extensions $R\rightarrow R'$ in $\mathcal{T}$. In other words, let $R\rightarrow R'$ be a $\nu$-extension in $\mathcal{T}$ and let $H'_{2\ell+1}$ denote the $(2\ell+1)$-st implicit prime ideal of ${R'}^\dag$. Then $H_{2\ell+1}=H'_{2\ell+1}\cap R^\dag$. \end{proposition} \begin{proof} Immediate from the definitions. \end{proof} To study the ideals $H_{2\ell+1}$, we need to understand more explicitly the nature of the limit appearing in (\ref{eq:defin1}). To study the relationship between the ideals $\mathcal{P}_\beta R^{\dag}$ and $\mathcal{P}'_\beta {R'}^\dag\bigcap R^\dag$, it is useful to factor the natural map $R^\dag\rightarrow{R'}^\dag$ as $R^\dag\rightarrow(R^\dag\otimes_RR')_{M'}\overset\phi\rightarrow{R'}^\dag$ as we did in the proof of Lemma \ref{factor}. In general, the ring $R^\dag\otimes_RR'$ is not local (see the above examples), but it has one distinguished maximal ideal $M'$, namely, the ideal generated by $mR^\dag\otimes1$ and $1\otimes m'$, where $m'$ denotes the maximal ideal of $R'$. The map $\phi$ is obtained by factoring the natural map $R^\dag\otimes_RR'\rightarrow{R'}^\dag$ through the local ring $\left(R^\dag\otimes_RR'\right)_{M'}$; it is either the formal completion or the henselization of $\left(R^\dag\otimes_RR'\right)_{M'}$ and, in either case, it is faithfully flat.
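In the argument that follows we use the standard fact (recalled here for convenience) that for a faithfully flat ring homomorphism $A\rightarrow B$ and any ideal $I\subset A$ one has
$$
IB\cap A=I,
$$
applied to $A=\left(R^\dag\otimes_RR'\right)_{M'}$, $B={R'}^\dag$ and $I=\mathcal{P}'_\beta\left(R^\dag\otimes_RR'\right)_{M'}$.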
Thus $\mathcal{P}'_\beta{R'}^\dag\cap\left(R^\dag\otimes_RR'\right)_{M'}= \mathcal{P}'_\beta\left(R^\dag\otimes_RR'\right)_{M'}$. This shows that we may replace ${R'}^\dag$ by $\left(R^\dag\otimes_RR'\right)_{M'}$ in (\ref{eq:defin1}) without affecting the result, that is, \begin{equation} H_{2\ell+1}=\bigcap\limits_{\beta\in\Delta_\ell}\left( \left(\lim\limits_{\overset\longrightarrow{R'}} {\cal P}'_\beta\left(R^\dag\otimes_RR'\right)_{M'}\right)\bigcap R^\dag\right).\label{eq:defin3} \end{equation} From now on, we will use (\ref{eq:defin3}) as our working definition of the implicit prime ideals. One advantage of the expression (\ref{eq:defin3}) is that it makes sense in a situation more general than the completion and the henselization. Namely, to study the case of the henselization $\tilde R$, we will need to consider local \'etale extensions $R^e$ of $R$, which are contained in $\tilde R$ (in particular, those which are essentially of finite type). The definition (\ref{eq:defin3}) of the implicit prime ideals makes sense also in that case. \section{Stable rings and primality of their implicit ideals.} \label{technical} Let the notation be as in the preceding sections. As usual, $R^\dag$ will denote one of $\hat R$, $\tilde R$ or $R^e$ (a local \'etale $\nu$-extension essentially of finite type). Take an $R'\in\mathcal{T}$ and $\overline\beta\in\frac{\Delta_\ell}{\Delta_{\ell+1}}$. We have the obvious inclusion of ideals \begin{equation} \mathcal{P}_{\overline\beta} R^\dag\subset\mathcal{P}'_{\overline\beta}{R'}^\dag\cap R^\dag\label{eq:stab1} \end{equation} (where $\mathcal{P}_{\overline\beta}$ is defined in (\ref{eq:pbetamodl}) and $\mathcal{P}'_{\overline\beta}$ denotes the analogous ideal of $R'$). A useful subtree of $\mathcal{T}$ is formed by the $\ell$-stable rings, which we now define. An important property of stable rings, proved below, is that the inclusion (\ref{eq:stab1}) is an equality whenever $R$ is stable. \begin{definition}\label{stable} A ring $R'\in\mathcal{T}(R)$ is said to be $\ell$-\textbf{stable} if the following two conditions hold: (1) the ring \begin{equation} \kappa\left(P'_\ell\right)\otimes_R\left(R'\otimes_RR^\dag\right)_{M'} \label{eq:extension1} \end{equation} is an integral domain and (2) there do not exist an $R''\in\mathcal{T}(R')$ and a non-trivial algebraic extension $L$ of $\kappa(P'_\ell)$ which embeds both into $\kappa\left(P'_\ell\right)\otimes_R\left(R'\otimes_RR^\dag\right)_{M'}$ and into $\kappa(P''_\ell)$. We say that $R'$ is \textbf{stable} if it is $\ell$-stable for each $\ell\in\{0,\dots,r\}$. \end{definition} \begin{remark}\label{interchanging} (1) Rings of the form (\ref{eq:extension1}) will be a basic object of study in this paper. Another way of looking at the same ring, which we will often use, comes from interchanging the order of tensor product and localization. Namely, let $T'$ denote the image of the multiplicative system $\left(R'\otimes_RR^\dag\right)\setminus M'$ under the natural map $R'\otimes_RR^\dag\rightarrow\kappa\left(P'_\ell\right)\otimes_RR^\dag$. Then the ring (\ref{eq:extension1}) equals the localization $(T')^{-1}\left(\kappa\left(P'_\ell\right)\otimes_RR^\dag\right)$. (2) In the special case $R'=R$ in Definition \ref{stable}, we have $$ \kappa\left(P'_\ell\right)\otimes_R\left(R'\otimes_RR^\dag\right)_{M'}= \kappa\left(P_\ell\right)\otimes_RR^\dag.
$$ If, moreover, $\frac R{P_\ell}$ is analytically irreducible then the hypothesis that $\kappa\left(P_\ell\right)\otimes_RR^\dag$ is a domain holds automatically; in fact, this hypothesis is equivalent to analytic irreducibility of $\frac R{P_\ell}$ if $R^\dag=\hat R$ or $R^\dag=\tilde R$. (3) Consider the special case when $R'$ is Henselian and ${\ }^\dag=\hat{\ }$. Excellent Henselian rings are algebraically closed inside their formal completions, so both (1) and (2) of Definition \ref{stable} hold automatically for this $R'$. Thus excellent Henselian local rings are always stable. \end{remark} In this section we study $\ell$-stable rings. We prove that if $R$ is $\ell$-stable then so is any $R'\in\mathcal{T}(R)$ (justifying the name ``stable''). The main result of this section, Theorem \ref{primality1}, says that if $R$ is stable then the implicit ideal $H'_{2\ell+1}$ is prime for each $\ell\in\{0,\dots,r\}$ and each $R'\in\mathcal{T}(R)$. \begin{remark}\label{hypotheses} In the next two sections we will show that there exist stable rings $R'\in\cal T$ for both $R^\dag=\hat R$ and $R^\dag=R^e$. However, the proof of this is different depending on whether we are dealing with completion or with an \'etale extension, and will be carried out separately in two separate sections: one devoted to henselization, the other to completion. \end{remark} \begin{proposition}\label{largeR1} Fix an integer $\ell$, $0\le\ell\le r$. Assume that $R'$ is $\ell$-stable and let $R''\in\mathcal{T}(R')$. Then $R''$ is $\ell$-stable. \end{proposition} \begin{proof} We have to show that (1) and (2) of Definition \ref{stable} for $R'$ imply (1) and (2) of Definition \ref{stable} for $R''$. The ring \begin{equation} \kappa\left(P''_\ell\right)\otimes_R\left(R''\otimes_RR^\dag\right)_{M''} \label{eq:extension} \end{equation} is a localization of $\kappa\left(P''_\ell\right)\otimes_R\left(\kappa\left(P'_\ell\right) \otimes_R\left(R'\otimes_RR^\dag\right)_{M'}\right)$. Hence (1) and (2) of Definition \ref{stable}, applied to $R'$, imply that $\kappa\left(P''_\ell\right)\otimes_R\left(R''\otimes_RR^\dag\right)_{M''}$ is an integral domain, so (1) of Definition \ref{stable} holds for $R''$. Replacing $R'$ by $R''$ clearly does not affect the hypotheses about the non-existence of the extension $L$, so (2) of Definition \ref{stable} also holds for $R''$. \end{proof} Next, we prove a technical result on which much of the rest of the paper is based. For $\overline\beta\in\frac\Gamma{\Delta_{\ell+1}}$, let \begin{equation} \mathcal{P}_{\overline\beta+}=\left\{x\in R\ \left|\ \nu(x)\mod\Delta_{\ell+1}>\overline\beta\right.\right\}.\label{eq:pbetamodl+} \end{equation} As usual, $\mathcal{P}'_{\overline\beta+}$ will stand for the analogous notion, but with $R$ replaced by $R'$, etc. \begin{proposition}\label{largeR2} Assume that $R$ itself is $(\ell+1)$-stable and let $R'\in\mathcal{T}(R)$. \begin{enumerate} \item For any $\overline\beta\in\frac{\Delta_\ell}{\Delta_{\ell+1}}$ \begin{equation} \mathcal{P}'_{\overline\beta}{R'}^\dag\cap R^\dag=\mathcal{P}_{\overline\beta} R^\dag.\label{eq:stab} \end{equation} \item For any $\overline\beta\in\frac\Gamma{\Delta_{\ell+1}}$ the natural map \begin{equation}\label{eq:gammaversion} \frac{\mathcal{P}_{\overline\beta}R^\dag}{\mathcal{P}_{\overline\beta+}R^\dag}\rightarrow \frac{\mathcal{P}'_{\overline\beta}{R'}^\dag}{\mathcal{P}'_{\overline\beta+}{R'}^\dag} \end{equation} is injective. 
\end{enumerate} \end{proposition} \begin{proof} As explained at the end of the previous section, since ${R'}^\dag$ is faithfully flat over the ring $\left(R^\dag\otimes_RR'\right)_{M'}$, we may replace ${R'}^\dag$ by $\left(R^\dag\otimes_RR'\right)_{M'}$ in both 1 and 2 of the Proposition. \noi\textbf{Proof of 1 of the Proposition:} It is sufficient to prove that \begin{equation} \mathcal{P}'_{\overline\beta}\left(R^\dag\otimes_RR'\right)_{M'}\bigcap R^\dag=\mathcal{P}_{\overline\beta} R^\dag.\label{eq:stab2} \end{equation} One inclusion in (\ref{eq:stab2}) is trivial; we must show that \begin{equation} \mathcal{P}'_{\overline\beta}\left(R^\dag\otimes_RR'\right)_{M'}\bigcap R^\dag\subset\mathcal{P}_{\overline\beta} R^\dag.\label{eq:stab3} \end{equation} \begin{lemma}\label{injectivity} Let $T'$ denote the image of the multiplicative set $\left(R'\otimes_RR^\dag\right)\setminus M'$ under the natural map of $R$-algebras $R'\otimes_RR^\dag\rightarrow\frac{R'_{P'_{\ell+1}}}{\mathcal{P}'_{\overline\beta} R'_{P'_{\ell+1}}}\otimes_RR^\dag$. Then the map of $R$-algebras \begin{equation} \bar\pi:\frac{R_{P_{\ell+1}}}{\mathcal{P}_{\overline\beta}R_{P_{\ell+1}}}\otimes_RR^\dag \rightarrow(T')^{-1}\left(\frac{R'_{P'_{\ell+1}}}{\mathcal{P}'_{\overline\beta} R'_{P'_{\ell+1}}}\otimes_RR^\dag\right) \label{eq:inclusion3} \end{equation} induced by $\pi:R\rightarrow R'$ is injective. \end{lemma} \begin{proof}\textit{(of Lemma \ref{injectivity})} We start with the field extension $$ \kappa(P_{\ell+1})\hookrightarrow\kappa(P'_{\ell+1}) $$ induced by $\pi$. Since $R^\dag$ is flat over $R$, the induced map $\pi_1:\kappa(P_{\ell+1})\otimes_RR^\dag\rightarrow\kappa(P'_{\ell+1})\otimes_R R^\dag$ is also injective. By (1) of Definition \ref{stable}, $\kappa(P'_{\ell+1})\otimes_RR^\dag$ is a domain. In particular, \begin{equation} \kappa\left(P'_{\ell+1}\right)\otimes_RR^\dag= \left(\frac{R'_{P'_{\ell+1}}}{\mathcal{P}'_{\overline\beta}R'_{P'_{\ell+1}}}\otimes_R R^\dag\right)_{red}. \label{eq:reduced1} \end{equation} The local ring $\frac{R'_{P'_{\ell+1}}}{\mathcal{P}'_{\overline\beta}R'_{P'_{\ell+1}}}$ is artinian because it can be seen as the quotient of $\frac{R'_{P'_{\ell+1}}}{P'_\ell R'_{P'_{\ell+1}}}$ by a valuation ideal corresponding to a rank one valuation. Since the ring is noetherian the valuation of the maximal ideal is positive, and since the group is archimedian, a power of the maximal ideal is contained in the valuation ideal.\par Therefore, its only associated prime is its nilradical, the ideal $\frac{P'_{\ell+1} R'_{P'_{\ell+1}}}{\mathcal{P}'_{\overline\beta}R'_{P'_{\ell+1}}}$; in particular, the $(0)$ ideal in this ring has no embedded components. Since $R^\dag$ is flat over $R$, $\frac{R'_{P'_{\ell+1}}}{\mathcal{P}'_{\overline\beta}R'_{P'_{\ell+1}}}\otimes_RR^\dag$ is flat over $\frac{R'_{P'_{\ell+1}}}{\mathcal{P}'_{\overline\beta}R'_{P'_{\ell+1}}}$ by base change. Hence the $(0)$ ideal of $\frac{R'_{P'_{\ell+1}}}{\mathcal{P}'_{\overline\beta}R'_{P'_{\ell+1}}}\otimes_RR^\dag$ has no embedded components. 
In particular, since the multiplicative system $T'$ is disjoint from the nilradical of $\frac{R'_{P'_{\ell+1}}}{\mathcal{P}'_{\overline\beta}R'_{P'_{\ell+1}}}\otimes_RR^\dag$, the set $T'$ contains no zero divisors, so localization by $T'$ is injective.\par By the definition of $\mathcal{P}_{\overline\beta}$, the map $\frac{R_{P_{\ell+1}}}{\mathcal{P}_{\overline\beta}R_{P_{\ell+1}}}\rightarrow \frac{R'_{P'_{\ell+1}}}{\mathcal{P}'_{\overline\beta}R'_{P'_{\ell+1}}}$ is injective, hence so is $$ \frac{R_{P_{\ell+1}}}{\mathcal{P}_{\overline\beta}R_{P_{\ell+1}}}\otimes_RR^\dag\rightarrow \frac{R'_{P'_{\ell+1}}}{\mathcal{P}'_{\overline\beta}R'_{P'_{\ell+1}}}\otimes_RR^\dag $$ by the flatness of $R^\dag$ over $R$. Combining this with the injectivity of the localization by $T'$, we obtain that $\bar\pi$ is injective, as desired. Lemma \ref{injectivity} is proved. \end{proof} Again by the definition of $\mathcal{P}_{\overline\beta}$, the localization map $\frac R{\mathcal{P}_{\overline\beta}}\rightarrow\frac{R_{P_{\ell+1}}}{\mathcal{P}_{\overline\beta} R_{P_{\ell+1}}}$ is injective, hence so is the map \begin{equation} \frac {R}{\mathcal{P}_{\overline\beta}}\otimes_RR^\dag\rightarrow \frac{R_{P_{\ell+1}}}{\mathcal{P}_{\overline\beta}R_{P_{\ell+1}}}\otimes_RR^\dag \label{eq:injective} \end{equation} by the flatness of $R^\dag$ over $R$. Combining this with Lemma \ref{injectivity}, we see that the composition \begin{equation} \frac R{\mathcal{P}_{\overline\beta}}\otimes_RR^\dag\rightarrow (T')^{-1}\left(\frac{R'_{P'_{\ell+1}}}{\mathcal{P}'_{\overline\beta} R'_{P'_{\ell+1}}}\otimes_RR^\dag\right) \label{eq:injective1} \end{equation} of (\ref{eq:injective}) with $\bar\pi$ is also injective. Now, (\ref{eq:injective1}) factors through $\left(\frac{R'}{\mathcal{P}'_{\overline\beta}}\otimes_RR^\dag\right)_{M'}$ (here we are guilty of a slight abuse of notation: we denote the natural image of $M'$ in $\frac{R'}{\mathcal{P}'_{\overline\beta}}\otimes_RR^\dag$ also by $M'$). Hence the map \begin{equation} \frac R{\mathcal{P}_{\overline\beta}}\otimes_RR^\dag\rightarrow \left(\frac{R'}{\mathcal{P}'_{\overline\beta}}\otimes_RR^\dag\right)_{M'} \label{eq:injective2} \end{equation} is injective. Since $\frac R{\mathcal{P}_{\overline\beta}}\otimes_RR^\dag\cong\frac{R^\dag}{\mathcal{P}_{\overline\beta} R^\dag}$ and $\left(\frac{R'}{\mathcal{P}'_{\overline\beta}}\otimes_RR^\dag\right)_{M'}\cong \frac{\left(R'\otimes_RR^\dag\right)_{M'}}{\mathcal{P}'_{\overline\beta} \left(R'\otimes_RR^\dag\right)_{M'}}$, the injectivity of (\ref{eq:injective2}) is the same as (\ref{eq:stab3}). This completes the proof of 1 of the Proposition. \noi\textbf{Proof of 2 of the Proposition:} We start with the injective homomorphism \begin{equation} \frac{\mathcal{P}_{\overline\beta}}{\mathcal{P}_{\overline\beta+}}\otimes_R\kappa(P_{\ell+1}) \rightarrow\frac{\mathcal{P}'_{\overline\beta}}{\mathcal{P}'_{\overline\beta+}}\otimes_R \kappa(P'_{\ell+1})\label{eq:vspaces} \end{equation} of $\kappa(P_{\ell+1})$-vector spaces. Since $R^\dag$ is flat over $R$, tensoring (\ref{eq:vspaces}) produces an injective homomorphism \begin{equation} \frac{\mathcal{P}_{\overline\beta}R^\dag}{\mathcal{P}_{\overline\beta+}R^\dag}\otimes_R \kappa(P_{\ell+1})\rightarrow\frac{\mathcal{P}'_{\overline\beta}}{\mathcal{P}'_{\overline\beta+}} \otimes_R\kappa(P'_{\ell+1})\otimes_RR^\dag\label{eq:Rdagmodules} \end{equation} of $R^\dag$-modules.
Now, the $\kappa(P_{\ell+1})$-vector space $\frac{\mathcal{P}'_{\overline\beta}}{\mathcal{P}'_{\overline\beta+}}\otimes_R\kappa(P'_{\ell+1})$ is, in particular, a torsion-free $\kappa(P_{\ell+1})$-module. Since $\kappa(P_{\ell+1})\otimes_RR^\dag$ is a domain by definition of $(\ell+1)$-stable and by the flatness of $R^\dag\otimes_R\kappa(P_{\ell+1})$ over $\kappa(P_{\ell+1})$, the $R^\dag\otimes_R\kappa(P_{\ell+1})$-module $\frac{\mathcal{P}'_{\overline\beta}}{\mathcal{P}'_{\overline\beta+}}\otimes_R\kappa(P'_{\ell+1}) \otimes_RR^\dag$ is also torsion-free; in particular, its localization map by any multiplicative system is injective. Let $S'$ denote the image of the multiplicative set $\left(R'\otimes_RR^\dag\right)\setminus M'$ under the natural map of $R$-algebras $R'\otimes_RR^\dag\rightarrow\kappa(P'_{\ell+1})\otimes_RR^\dag$. By the above, the composition \begin{equation} \frac{\mathcal{P}_{\overline\beta}R^\dag}{\mathcal{P}_{\overline\beta+}R^\dag}\otimes_R \kappa(P_{\ell+1})\rightarrow(S')^{-1} \left(\frac{\mathcal{P}'_{\overline\beta}}{\mathcal{P}'_{\overline\beta+}}\otimes_R \kappa(P'_{\ell+1})\otimes_RR^\dag\right)\label{eq:Rdagmodules1} \end{equation} of (\ref{eq:Rdagmodules}) with the localization by $S'$ is injective. By the definition of $\mathcal{P}_{\overline\beta}$, the localization map $\frac{\mathcal{P}_{\overline\beta}}{\mathcal{P}_{\overline\beta+}}\rightarrow \frac{\mathcal{P}_{\overline\beta}}{\mathcal{P}_{\overline\beta+}}\otimes_R\kappa(P_{\ell+1})$ is injective, hence so is the map \begin{equation} \frac{\mathcal{P}_{\overline\beta}R^\dag}{\mathcal{P}_{\overline\beta+}R^\dag}= \frac{\mathcal{P}_{\overline\beta}}{\mathcal{P}_{\overline\beta+}}\otimes_RR^\dag\rightarrow \frac{\mathcal{P}_{\overline\beta}R^\dag}{\mathcal{P}_{\overline\beta+}R^\dag}\otimes_R \kappa(P_{\ell+1})\label{eq:injectivegamma} \end{equation} by the flatness of $R^\dag$ over $R$. Combining this with the injectivity of (\ref{eq:Rdagmodules1}), we see that the composition \begin{equation} \frac{\mathcal{P}_{\overline\beta}R^\dag}{\mathcal{P}_{\overline\beta+}R^\dag}\rightarrow (S')^{-1}\left(\frac{\mathcal{P}'_{\overline\beta}}{\mathcal{P}'_{\overline\beta+}}\otimes_R \kappa(P'_{\ell+1})\otimes_RR^\dag\right) \label{eq:injective1gamma} \end{equation} of (\ref{eq:injectivegamma}) with (\ref{eq:Rdagmodules1}) is also injective. Now, (\ref{eq:injective1gamma}) factors through $\frac{\mathcal{P}'_{\overline\beta}}{\mathcal{P}'_{\overline\beta+}}\otimes_{R'} \left(R'\otimes_RR^\dag\right)_{M'}$. Hence the map \begin{equation} \frac{\mathcal{P}_{\overline\beta}R^\dag}{\mathcal{P}_{\overline\beta+}R^\dag}\rightarrow \frac{\mathcal{P}'_{\overline\beta}}{\mathcal{P}'_{\overline\beta+}}\otimes_{R'} \left(R'\otimes_RR^\dag\right)_{M'}\label{eq:injective2gamma} \end{equation} is injective. Since $\frac{\mathcal{P}'_{\overline\beta}}{\mathcal{P}'_{\overline\beta+}}\otimes_{R'}{R'}^\dag\cong \frac{\mathcal{P}'_{\overline\beta}{R'}^\dag}{\mathcal{P}'_{\overline\beta+}{R'}^\dag}$ and by faithful flatness of ${R'}^\dag$ over $\left(R'\otimes_RR^\dag\right)_{M'}$, the injectivity of (\ref{eq:injective2gamma}) implies the injectivity of the map (\ref{eq:gammaversion}) required in 2 of the Proposition. This completes the proof of the Proposition. \end{proof} \begin{corollary}\label{stableimplicit} Take an integer $\ell\in\{0,\dots,r-1\}$ and assume that $R$ is $(\ell+1)$-stable. 
Then \begin{equation} H_{2\ell+1}=\bigcap\limits_{\beta\in\Delta_\ell}{\cal P}_\beta R^\dag.\label{eq:defin5} \end{equation} \end{corollary} \begin{proof} By Lemma 4 of Appendix 4 of \cite{ZS}, the ideals $\mathcal{P}_{\overline\beta}$, $\overline\beta\in\frac{\Delta_\ell}{\Delta_{\ell+1}}$, are cofinal among the ideals $\mathcal{P}_\beta$ for $\beta\in \Delta_\ell$ (and likewise for the $\mathcal{P}'_{\overline\beta}$ in any $R'\in\mathcal{T}$), so the intersection in (\ref{eq:defin1}) may be taken over the ideals $\mathcal{P}'_{\overline\beta}$. For these, Proposition \ref{largeR2} (1) gives $\mathcal{P}'_{\overline\beta}{R'}^\dag\cap R^\dag=\mathcal{P}_{\overline\beta}R^\dag$ for every $R'\in\mathcal{T}$, so the limit over $R'$ in (\ref{eq:defin1}) contributes nothing new, and (\ref{eq:defin5}) follows. \end{proof} \begin{corollary}\label{stablecontracts} Assume that $R$ is stable. Take an element $\beta\in\Gamma$. Then $\mathcal{P}'_\beta{R'}^\dag\cap R^\dag=\mathcal{P}_\beta R^\dag$. \end{corollary} \begin{proof} It is sufficient to prove that for each $\ell\in\{0,\dots,r-1\}$ and $\bar\beta\in\frac\Gamma{\Delta_{\ell+1}}$, we have \begin{equation} \mathcal{P}'_{\bar\beta}{R'}^\dag\cap R^\dag=\mathcal{P}_{\bar\beta}R^\dag;\label{eq:scalewisebeta} \end{equation} the Corollary is just the special case of (\ref{eq:scalewisebeta}) when $\ell=r-1$. We prove (\ref{eq:scalewisebeta}) by contradiction. Assume the contrary and take the smallest $\ell$ for which (\ref{eq:scalewisebeta}) fails to be true. Let $\Phi'=\nu(R'\setminus\{0\})$. We will denote by $\frac\Phi{\Delta_{\ell+1}}$ the image of $\Phi$ under the composition of natural maps $\Phi\hookrightarrow\Gamma\rightarrow\frac\Gamma{\Delta_{\ell+1}}$ and similarly for $\frac{\Phi'}{\Delta_{\ell+1}}$. Clearly, if (\ref{eq:scalewisebeta}) fails for a certain $\bar\beta$, it also fails for some $\bar\beta\in\frac{\Phi'}{\Delta_{\ell+1}}$; take the smallest $\bar\beta\in\frac{\Phi'}{\Delta_{\ell+1}}$ with this property. If we had $\bar\beta=\min\left\{\left.\tilde\beta\in\frac{\Phi'}{\Delta_{\ell+1}}\ \right|\ \tilde\beta-\bar\beta\in\Delta_\ell\right\}$, then (\ref{eq:scalewisebeta}) would also fail with $\bar\beta$ replaced by $\bar\beta\mod\Delta_\ell\in\frac\Gamma{\Delta_\ell}$, contradicting the minimality of $\ell$. Thus \begin{equation}\label{eq:notminimum} \bar\beta>\min\left\{\left.\tilde\beta\in\frac{\Phi'}{\Delta_{\ell+1}}\ \right|\ \tilde\beta-\bar\beta\in\Delta_\ell\right\}. \end{equation} Let $\bar\beta-$ denote the immediate predecessor of $\bar\beta$ in $\frac{\Phi'}{\Delta_{\ell+1}}$. By (\ref{eq:notminimum}), we have $\bar\beta-(\bar\beta-)\in\Delta_\ell$. By the choice of $\bar\beta$, we have $\mathcal{P}_{\bar\beta-}R^\dag=\mathcal{P}'_{\bar\beta-}{R'}^\dag\cap R^\dag$ but $\mathcal{P}_{\bar\beta}R^\dag\subsetneqq\mathcal{P}'_{\bar\beta}{R'}^\dag\cap R^\dag$. This contradicts Proposition \ref{largeR2}, applied to $\bar\beta-$. The Corollary is proved. \end{proof} Now we are ready to state and prove the main Theorem of this section. \begin{theorem}\label{primality1} (1) Fix an integer $\ell\in\{0,\dots,r-1\}$. Assume that there exists $R'\in\mathcal{T}(R)$ which is $(\ell+1)$-stable. Then the ideal $H_{2\ell+1}$ is prime. (2) Let $i=2\ell+2$. There exists an extension $\nu^\dag_{i0}$ of $\nu_{\ell+1}$ to $\lim\limits_{\overset\longrightarrow{R'}}\kappa(H'_{i-1})$, with value group \begin{equation} \Delta_{i-1,0}=\frac{\Delta_\ell}{\Delta_{\ell+1}},\label{eq:groupequal2} \end{equation} whose valuation ideals are described as follows. For an element $\overline\beta\in\frac{\Delta_\ell}{\Delta_{\ell+1}}$, the $\nu^\dag_{i0}$-ideal of $\frac{R^\dag}{H_{i-1}}$ of value $\overline\beta$, denoted by $\mathcal{P}^\dag_{\overline\beta,\ell+1}$, is given by the formula \begin{equation} \mathcal{P}^\dag_{\overline\beta,\ell+1}=\left(\lim\limits_{\overset\longrightarrow{R'}} \frac{\mathcal{P}'_{\overline\beta}{R'}^\dag}{H'_{i-1}}\right)\cap\frac{R^\dag}{H_{i-1}}.
\label{eq:valideal1} \end{equation} \end{theorem} \begin{remark} Once the even-numbered implicit prime ideals $H'_{2\ell}$ are defined below, we will show that $\nu^\dag_{i0}$ is the unique extension of $\nu_{\ell+1}$ to $\lim\limits_{\overset\longrightarrow{R'}}\kappa(H'_{i-1})$, centered in the local ring $\lim\limits_{\overset\longrightarrow{R'}} \frac{R'^\dag_{H'_{2\ell+2}}}{H'_{2\ell+1}R'^\dag_{H'_{2\ell+2}}}$. \end{remark} \begin{proof}\textit{(of Theorem \ref{primality1})} Let $R'$ be a stable ring in $\mathcal{T}(R)$. Once Theorem \ref{primality1} is proved for $R'$, the same results for $R$ will follow easily by intersecting all the ideals of ${R'}^\dag$ in sight with $R^\dag$. Therefore from now on we will replace $R$ by $R'$, that is, we will assume that $R$ itself is stable. Let $\Phi_\ell$ denote the image of the semigroup $\nu(R\setminus\{0\})$ in $\frac{\Gamma}{\Delta_{\ell+1}}$. As we saw above, $\Phi_\ell$ is well ordered. For an element $\overline\beta\in\Phi_\ell$, let $\overline\beta+$ denote the immediate successor of $\overline\beta$ in $\Phi_\ell$. Take any element $x\in R^\dag\setminus H_{i-1}$. By Corollary \ref{stableimplicit}, there exists (a unique) $\overline\beta\in\Phi_\ell\cap\frac{\Delta_\ell}{\Delta_{\ell+1}}$ such that \begin{equation} x\in{\cal P}_{\overline\beta} R^\dag\setminus{\cal P}_{\overline\beta+}R^\dag\label{eq:xinbeta} \end{equation} (where, of course, we allow $\overline\beta=0$). Let $\bar x$ denote the image of $x$ in $\frac{R^\dag}{H_{i-1}}$. We define $$ \nu^\dag_{i0}(\bar x)=\overline\beta. $$ Next, take another element $y\in R^\dag\setminus H_{2\ell+1}$ and let $\gamma\in\Phi_\ell\cap\frac{\Delta_\ell}{\Delta_{\ell+1}}$ be such that \begin{equation} y\in{\cal P}_{\overline\gamma}R^\dag\setminus{\cal P}_{\overline\gamma+}R^\dag.\label{eq:yingamma} \end{equation} Let $(a_1,...,a_n)$ be a set of generators of ${\cal P}_{\overline\beta}$ and $(b_1,...,b_s)$ a set of generators of ${\cal P}_{\overline\gamma}$, with $\nu_{\ell+1}(a_1)=\overline\beta$ and $\nu_{\ell+1}(b_1)=\overline\gamma$. Let $R'$ be a local blowing up along $\nu$ such that $R'$ contains all the fractions $\frac{a_i}{a_1}$ and $\frac{b_j}{b_1}$. By Proposition \ref{largeR1} and Definition \ref{stable} (1), the ideal $P'_{\ell+1}{R'}^\dag$ is prime. By construction, we have $a_1\ |\ x$ and $b_1\ |\ y$ in ${R'}^\dag$. Write $x=za_1$ and $y=wb_1$ in ${R'}^\dag$. The equality (\ref{eq:stab}), combined with (\ref{eq:xinbeta}) and (\ref{eq:yingamma}), implies that $z,w\mbox{$\in$ \hspace{-.8em}/} P'_{\ell+1}{R'}^\dag$, hence \begin{equation} zw\mbox{$\in$ \hspace{-.8em}/} P'_{\ell+1}{R'}^\dag\label{eq:zwnotin} \end{equation} by the primality of $P'_{\ell+1}{R'}^\dag$. We obtain \begin{equation} xy=a_1b_1zw.\label{eq:xyzw} \end{equation} Since $\nu$ is a valuation on $R'$, we have $\left({\cal P}'_{\overline\beta+\overline\gamma+}:(a_1b_1)R'\right)\subset P'_{\ell +1}$. By faithful flatness of ${R'}^\dag$ over $R'$ we obtain \begin{equation} \left({\cal P}'_{\overline\beta+\overline\gamma+}{R'}^\dag:(a_1b_1){R'}^\dag\right)\subset P'_{\ell+1}{R'}^\dag. \end{equation} Combining this with (\ref{eq:zwnotin}) and (\ref{eq:xyzw}), we obtain \begin{equation} xy\mbox{$\in$ \hspace{-.8em}/}{\cal P}_{\overline\beta+\overline\gamma+}R^\dag,\label{eq:xynotin} \end{equation} in particular, $xy\mbox{$\in$ \hspace{-.8em}/} H_{2\ell+1}$. We started with two arbitrary elements $x,y\in R^\dag\setminus H_{2\ell+1}$ and showed that $xy\mbox{$\in$ \hspace{-.8em}/} H_{2\ell+1}$. 
This proves (1) of the Theorem. Furthermore, (\ref{eq:xynotin}) shows that $\nu^\dag_{i0}(\bar x\bar y)=\overline\beta+\overline\gamma$, so $\nu^\dag_{i0}$ induces a valuation of $\kappa(H_{i-1})$ and hence also of $\lim\limits_{\overset\longrightarrow{R'}}\kappa(H'_{i-1})$. Equality (\ref{eq:groupequal2}) holds by definition and (\ref{eq:valideal1}) by the assumed stability of $R$. \end{proof} Next, we define the even-numbered implicit prime ideals $H'_{2\ell}$. The only facts we need in order to define the prime ideals $H'_{2\ell}\subset H'_{2\ell+1}$ and to prove that $H'_{2\ell-1}\subset H'_{2\ell}$ are that $H_{2\ell+1}$ is a prime lying over $P_\ell$ and that the ring homomorphism $R'\rightarrow{R'}^\dag$ is regular. \begin{proposition}\label{H2l} There exists a unique minimal prime ideal $H_{2\ell}$ of $P_\ell R^\dag$, contained in $H_{2\ell+1}$. \end{proposition} \begin{proof} Since $H_{2\ell+1}\cap R=P_\ell$, $H_{2\ell+1}$ belongs to the fiber of the map $Spec\ R^\dag\rightarrow Spec\ R$ over $P_\ell$. Since $R$ was assumed to be excellent, $S:={R^\dag}\otimes_R\kappa(P_\ell)$ is a regular ring (note that the excellence assumption is needed only in the case $R^\dag=\hat R$; the ring homomorphism $R\rightarrow R^\dag$ is automatically regular if $R^\dag=\tilde R$ or $R^\dag=R^e$). Hence its localization $\bar S:=S_{H_{2\ell+1}S}\cong\frac{R^\dag_{H_{2\ell+1}}}{P_\ell R^\dag_{H_{2\ell+1}}}$ is a regular {\em local} ring. In particular, $\bar S$ is an integral domain, so $(0)$ is its unique minimal prime ideal. The set of minimal prime ideals of $\bar S$ is in one-to-one correspondence with the set of minimal primes of $P_\ell R^\dag$, contained in $H_{2\ell+1}$, which shows that such a minimal prime $H_{2\ell}$ is unique, as desired. \end{proof} We have $P_\ell\subset H_{2\ell}\cap R\subset H_{2\ell+1}\cap R=P_\ell$, so $H_{2\ell}\cap R=P_\ell$. \begin{proposition} We have $H_{2\ell-1}\subset H_{2\ell}$. \end{proposition} \begin{proof} Take an element $\beta\in\left(\frac{\Delta_{\ell-1}}{\Delta_\ell}\right)_+$ and a stable ring $R'\in\cal T$. Then $\mathcal{P}'_\beta\subset P'_\ell$, so \begin{equation} H'_{2\ell-1}\subset\mathcal{P}'_\beta{R'}^\dag\subset P'_\ell{R'}^\dag\subset H'_{2\ell}.\label{eq:inclusion} \end{equation} Intersecting (\ref{eq:inclusion}) back with $R^\dag$ we get the result. \end{proof} In \S\ref{henselization} we will see that if $R^\dag=\tilde R$ or $R^\dag=R^e$ then $H_{2\ell}=H_{2\ell+1}$ for all $\ell$. Let the notation be the same as in Theorem \ref{primality1}, applied with $\ell-1$ in place of $\ell$, so that $i=2\ell$ and $\nu^\dag_{i0}$ is the extension of $\nu_\ell$ constructed there. \begin{proposition}\label{nu0unique} The valuation $\nu_{i0}^\dag$ is the unique extension of $\nu_\ell$ to a valuation of $\lim\limits_{\overset\longrightarrow{R'}}\kappa(H'_{i-1})$, centered in the local ring $\lim\limits_{\overset\longrightarrow{R'}}\frac{{R'}^\dag_{H'_{2\ell}}}{H'_{2\ell-1}{R'}^\dag_{H'_{2\ell}}}$. \end{proposition} \begin{proof} As usual, without loss of generality we may assume that $R$ is stable. Take an element $x\in R^\dag\setminus H_{2\ell-1}$. Let $\beta=\nu^\dag_{i0}(\bar x)$ and let $R'$ be the blowing up of the ideal $\mathcal{P}_\beta=(a_1,\dots,a_n)$, as in the proof of Theorem \ref{primality1}. Write \begin{equation} x=za_1\label{eq:xza} \end{equation} in ${R'}^\dag$.
We have $z\in{R'}^\dag\setminus P'_\ell{R'}^\dag$, hence \begin{equation} \bar z\in\frac{{R'}^\dag_{H'_{2\ell}}}{H'_{2\ell-1}{R'}^\dag_{H'_{2\ell}}}\setminus\frac{P'_\ell{R'}^\dag_{H'_{2\ell}}}{H'_{2\ell-1}{R'}^\dag_{H'_{2\ell}}}= \frac{{R'}^\dag_{H'_{2\ell}}}{H'_{2\ell-1}{R'}^\dag_{H'_{2\ell}}}\setminus\frac{H'_{2\ell}{R'}^\dag_{H'_{2\ell}}}{H'_{2\ell-1}{R'}^\dag_{H'_{2\ell}}}. \label{eq:znotinPl} \end{equation} If $\nu^*$ is any other extension of $\nu_\ell$ to $\lim\limits_{\overset\longrightarrow{R'}}\kappa(H'_{i-1})$, centered in $\lim\limits_{\overset\longrightarrow{R'}}\frac{{R'}^\dag_{H'_{2\ell}}}{H'_{2\ell-1}{R'}^\dag_{H'_{2\ell}}}$, then $\nu^*(\bar a_1)=\beta$, $\nu^*(z)=0$ by (\ref{eq:znotinPl}), so $\nu^*(\bar x)=\beta=\nu_{i0}^\dag(\bar x)$. This completes the proof of the uniqueness of $\nu_{i0}^\dag$. \end{proof} \begin{remark}\label{sameresfield} If $R'$ is stable, we have a natural isomorphism of graded algebras $$ \mbox{gr}_{\nu^\dag_{i0}}\frac{{R'}^\dag_{H'_{2\ell}}}{H'_{2\ell-1}{R'}^\dag_{H'_{2\ell}}}\cong \mbox{gr}_{\nu_\ell}\frac{R'_{P'_\ell}}{P'_{\ell-1}R'_{P'_\ell}}\otimes_{R'}\kappa(H'_{2\ell}). $$ In particular, the residue field of $\nu^\dag_{i0}$ is $k_{\nu^\dag_{i0}}=\lim\limits_{\overset\longrightarrow{R'}}\kappa(H'_{2\ell})$. \end{remark} \section{A classification of extensions of $\nu$ to $\hat R$.} \label{Rdag} The purpose of this section is to give a systematic description of all the possible extensions $\nu^\dag_-$ of $\nu$ to a quotient of $R^\dag$ by a minimal prime as compositions of $2r$ valuations, \begin{equation} \nu^\dag_-=\nu^\dag_1\circ\dots\circ\nu^\dag_{2r},\label{eq:composition} \end{equation} satisfying certain conditions. One is naturally led to consider the more general problem of extending $\nu$ not only to rings of the form $\frac{R^\dag}P$ but also to the ring $\lim\limits_\to\frac{{R'}^\dag}{P'}$, where $P'$ is a tree of prime ideals of ${R'}^\dag$, such that $P'\cap R'=(0)$. We deal in a uniform way with all the three cases $R^\dag=\hat R$, $R^\dag=\tilde R$ and $R^\dag=R^e$, in order to be able to apply the results proved here to all three later in the paper. However, the reader should think of the case $R^\dag=\hat R$ as the main case of interest and the cases $R^\dag=\tilde R$ and $R^\dag=R^e$ as auxiliary and slightly degenerate, since, as we shall see, in these cases the equality $H_{2\ell}=H_{2\ell+1}$ is satisfied for all $\ell$ and the extension $\nu^\dag_-$ will later be shown to be unique. We will associate to each extension $\nu^\dag_-$ of $\nu$ to $R^\dag$ a chain \begin{equation} \tilde H'_0\subset\tilde H'_1\subset\dots\subset\tilde H'_{2r}=m'{R'}^\dag\label{eq:chaintree''} \end{equation} of prime $\nu^\dag_-$-ideals, corresponding to the decomposition (\ref{eq:composition}) and prove some basic properties of this chain of ideals. Now for the details. We wish to classify all the pairs $\left(\left\{\tilde H'_0\right\},\nu^\dag _+\right)$, where $\left\{\tilde H'_0\right\}$ is a tree of prime ideals of ${R'}^\dag$, such that $\tilde H'_0\cap R'=(0)$, and $\nu^\dag_+$ is an extension of $\nu$ to the ring $\lim\limits_\to\frac{{R'}^\dag}{\tilde H'_0}$. Pick and fix one such pair $\left(\left\{\tilde H'_0\right\},\nu^\dag_+\right)$. We associate to it the following collection of data, which, as we will see, will in turn determine the pair $\left(\left\{\tilde H'_0\right\},\nu^\dag_+\right)$. 
First, we associate to $\left(\left\{\tilde H'_0\right\},\nu^\dag_-\right)$ a chain (\ref{eq:chaintree''}) of $2r$ trees of prime $\nu^\dag_-$-ideals. Let $\Gamma^\dag$ denote the value group of $\nu^\dag_-$. Defining (\ref{eq:chaintree''}) is equivalent to defining a chain \begin{equation} \Gamma^\dag=\Delta^\dag_0\supset\Delta^\dag_1\supset\dots\supset\Delta^\dag_{2r}= \Delta^\dag_{2r+1}=(0)\label{eq:groups} \end{equation} of $2r$ isolated subgroups of $\Gamma^\dag$ (the chain (\ref{eq:groups}) will not, in general, be maximal, and $\Delta^\dag_{2\ell +1}$ need not be distinct from $\Delta^\dag_{2\ell}$). We define the $\Delta^\dag_i$ as follows. For $0\le\ell\le r$, let $\Delta^\dag_{2\ell}$ and $\Delta^\dag_{2\ell +1}$ denote, respectively, the greatest and the smallest isolated subgroups of $\Gamma^\dag$ such that \begin{equation} \Delta^\dag_{2\ell}\cap\Gamma=\Delta^\dag_{2\ell +1}\cap\Gamma=\Delta_\ell.\label{eq:Delta} \end{equation} \begin{lemma}\label{rank1} We have \begin{equation} rk\ \frac{\Delta^\dag_{2\ell-1}}{\Delta^\dag_{2\ell}}=1\label{eq:rank1} \end{equation} for $1\le \ell\le r$. \end{lemma} \begin{proof} Since by construction $\Delta^\dag_{2\ell}\neq\Delta^\dag_{2\ell-1}$, equality (\ref{eq:rank1}) is equivalent to saying that there is no isolated subgroup $\Delta^\dag$ of $\Gamma^\dag$ which is properly contained in $\Delta^\dag_{2\ell-1}$ and properly contains $\Delta^\dag_{2\ell}$. Suppose such an isolated subgroup $\Delta^\dag$ existed. Then \begin{equation} \Delta_\ell=\Delta^\dag_{2\ell}\cap\Gamma\subsetneqq\Delta^\dag\cap\Gamma \subsetneqq\Delta^\dag_{2\ell-1}\cap\Gamma=\Delta_{\ell-1}, \label{eq:noninclusion} \end{equation} where the first inclusion is strict by the maximality of $\Delta^\dag_{2\ell}$ and the second by the minimality of $\Delta^\dag_{2\ell -1}$. Thus $\Delta^\dag\cap\Gamma$ is an isolated subgroup of $\Gamma$, properly containing $\Delta_\ell$ and properly contained in $\Delta_{\ell-1}$, which is impossible since $rk\ \frac{\Delta_{\ell-1}}{\Delta_\ell}=1$. This is a contradiction, hence $rk\ \frac{\Delta^\dag_{2\ell -1}}{\Delta^\dag_{2\ell}}=1$, as desired. \end{proof} \begin{definition} Let $0\le i\le 2r$. The $i$-th prime ideal \textbf{determined} by $\nu^\dag_-$ is the prime $\nu^\dag_-$-ideal $\tilde H'_i$ of ${R'}^\dag$, corresponding to the isolated subgroup $\Delta^\dag_i$ (that is, the ideal $\tilde H'_i$ consisting of all the elements of ${R'}^\dag$ whose values lie outside of $\Delta^\dag_i$). The chain of trees (\ref{eq:chaintree''}) of prime ideals of ${R'}^\dag$ formed by the $\tilde H'_i$ is referred to as the chain of trees \textbf{determined} by $\nu^\dag_-$. \end{definition} The equality (\ref{eq:Delta}) says that \begin{equation} \tilde H'_{2\ell}\cap R'=\tilde H'_{2\ell+1}\cap R'=P'_\ell.\label{eq:tildeHcapR} \end{equation} By definition, for $1\le i\le2r$, $\nu^\dag_i$ is a valuation of the field $k_{\nu^\dag_{i-1}}$. In the sequel, we will find it useful to talk about the restriction of $\nu^\dag_i$ to a smaller field, namely, the field of fractions of the ring $\lim\limits_{\overset\longrightarrow{R'}}\frac{{R'}^\dag_{\tilde H'_i}}{\tilde H'_{i-1}{R'}^\dag_{\tilde H'_i}}$; we will denote this restriction by $\nu^\dag_{i0}$.
The field of fractions of $\frac{{R'}^\dag_{\tilde H'_i}}{\tilde H'_{i-1}{R'}^\dag_{\tilde H'_i}}$ is $\kappa(\tilde H'_{i-1})$, hence that of $\lim\limits_{\overset\longrightarrow{R'}}\frac{{R'}^\dag_{\tilde H'_i}}{\tilde H'_{i-1}{R'}^\dag_{\tilde H'_i}}$ is $\lim\limits_{\overset\longrightarrow{R'}}\kappa(\tilde H'_{i-1})$, which is a subfield of $k_{\nu^\dag_{i-1}}$. The value group of $\nu^\dag_{i0}$ will be denoted by $\Delta_{i-1,0}$; we have $\Delta_{i-1,0}\subset\frac{\Delta^\dag_{i-1}}{\Delta^\dag_i}$. If $i=2\ell$ is even then $\frac{R'_{P'_\ell}}{P'_{\ell-1}R'_{P'_\ell}}\subset\frac{{R'}^\dag_{\tilde H'_i}}{\tilde H'_{i-1}{R'}^\dag_{\tilde H'_i}}$, so $\lim\limits_{\overset\longrightarrow{R'}}\frac{R'_{P'_\ell}}{P'_{\ell-1}R'_{P'_\ell}}\subset \lim\limits_{\overset\longrightarrow{R'}}\frac{{R'}^\dag_{\tilde H'_i}}{\tilde H'_{i-1}{R'}^\dag_{\tilde H'_i}}$. In this case $rk\ \nu^\dag_i=1$ and $\nu^\dag_i$ is an extension of the rank 1 valuation $\nu_\ell$ from $\kappa(P_{\ell-1})$ to $k_{\nu^\dag_{i-1}}$; we have \begin{equation} \frac{\Delta_{\ell-1}}{\Delta_\ell}\subset\Delta_{i-1,0}\subset \frac{\Delta^\dag_{i-1}}{\Delta^\dag_i}.\label{eq:deltalindelati-1} \end{equation} \begin{proposition}\label{necessary} Let $i=2\ell$. As usual, for an element $\overline\beta\in\left(\frac{\Delta_\ell}{\Delta_{\ell+1}}\right)_+$, let $\mathcal{P}_{\overline\beta}$ (resp. $\mathcal{P}'_{\overline\beta}$) denote the preimage in $R$ (resp. in $R'$) of the $\nu_{\ell+1}$-ideal of $\frac R{P_\ell}$ (resp. $\frac{R'}{P'_\ell}$) of value greater than or equal to $\overline\beta$. Then \begin{equation} \bigcap\limits_{\overline\beta\in\left(\frac{\Delta_\ell}{\Delta_{\ell+1}}\right)_+} \lim\limits_{\overset\longrightarrow{R'}} \left(\mathcal{P}'_{\overline\beta}{R'}^\dag+\tilde H'_{i+1}\right){R'}^\dag_{\tilde H'_{i+2}}\cap{R}^\dag\subset\tilde H_{i+1}. \label{eq:restriction} \end{equation} \end{proposition} The inclusion (\ref{eq:restriction}) should be understood as a condition on the tree of ideals. In other words, it is equally valid if we replace $R'$ by any other ring $R''\in\cal T$. \begin{proof}\textit{(of Proposition \ref{necessary})} Since $rk\frac{\Delta^\dag_{i+1}}{\Delta^\dag_{i+2}}=1$ by Lemma \ref{rank1}, $\frac{\Delta_\ell}{\Delta_{\ell+1}}$ is cofinal in $\frac{\Delta^\dag_{i+1}}{\Delta^\dag_{i+2}}$. Then for any $x\in\bigcap\limits_{\overline\beta\in\left(\frac{\Delta_\ell} {\Delta_{\ell+1}}\right)_+}\lim\limits_{\overset \longrightarrow{R'}}\left(\mathcal{P}'_{\overline\beta}{R'}^\dag+\tilde H'_{i+1}\right){R'}^\dag_{\tilde H'_{i+2}}\cap{R}^\dag$ we have $\nu^\dag_-(x)\mbox{$\in$ \hspace{-.8em}/}\Delta^\dag_{i+1}$, hence $x\in\tilde H_{i+1}$, as desired. \end{proof} From now to the end of \S\ref{extensions}, we will assume that $\cal T$ contains a stable ring $R'$, so that we can apply the results of the previous section, in particular, the primality of the ideals $H'_i$. \begin{proposition}\label{Hintilde} We have \begin{equation} H'_i\subset\tilde H'_i\qquad\text{ for all }i\in\{0,\dots,2r\}.\label{eq:Hintilde} \end{equation} \end{proposition} \begin{proof} For $\beta\in\Gamma^\dag$ and $R'\in\cal T$, let ${\mathcal{P}'_\beta}^\dag$ denote the $\nu^\dag_-$-ideal of ${R'}^\dag$ of value $\beta$. Fix an integer $\ell\in\{0,\dots,r\}$.
For each $R'\in\cal T$, each $\beta\in\Delta_\ell$ and $x\in\mathcal{P}'_\beta$ we have $\nu^\dag_-(x)=\nu(x)\ge\beta$, hence \begin{equation} \mathcal{P}'_\beta{R'}^\dag\subset{\mathcal{P}'_\beta}^\dag.\label{eq:PinPdag} \end{equation} Taking the inductive limit over all $R'\in\cal T$ and the intersection over all $\beta\in\Delta_\ell$ in (\ref{eq:PinPdag}), and using the cofinality of $\Delta_\ell$ in $\Delta^\dag_{2\ell+1}$ and the fact that $\bigcap\limits_{\beta\in\Delta^\dag_{2\ell+1}} \left(\lim\limits_{\overset\longrightarrow{R'}}{\mathcal{P}'_\beta}^\dag\right)= \lim\limits_{\overset\longrightarrow{R'}}\tilde H'_{2\ell+1}$, we obtain the inclusion (\ref{eq:Hintilde}) for $i=2\ell+1$. To prove (\ref{eq:Hintilde}) for $i=2\ell$, note that $\tilde H'_{2\ell}\cap R'=\tilde H'_{2\ell+1}\cap R'=P'_\ell$. By the same argument as in Proposition \ref{H2l}, excellence of $R'$ implies that there is a unique minimal prime $H^*_{2\ell}$ of $P'_\ell{R'}^\dag$, contained in $\tilde H'_{2\ell+1}$, and a unique minimal prime $H^{**}_{2\ell}$ of $P'_\ell{R'}^\dag$, contained in $\tilde H'_{2\ell}$. Now, Proposition \ref{H2l} and the facts that $H'_{2\ell+1}\subset\tilde H'_{2\ell+1}$ and $\tilde H'_{2\ell}\subset\tilde H'_{2\ell+1}$ imply that $H'_{2\ell}=H^*_{2\ell}=H^{**}_{2\ell}$, hence $H'_{2\ell}=H^{**}_{2\ell}\subset\tilde H'_{2\ell}$, as desired. \end{proof} \begin{definition} A chain of trees (\ref{eq:chaintree''}) of prime ideals of ${R'}^\dag$ is said to be \textbf{admissible} if $H'_i\subset\tilde H'_i$ and (\ref{eq:tildeHcapR}) and (\ref{eq:restriction}) hold. \end{definition} Equalities (\ref{eq:tildeHcapR}), Proposition \ref{necessary} and Proposition \ref{Hintilde} say that a chain of trees (\ref{eq:chaintree''}) of prime ideals of ${R'}^\dag$, determined by $\nu^\dag_-$, is admissible. Summarizing all of the above results, and keeping in mind the fact that specifying a composition of $2r$ valuations is equivalent to specifying all of its $2r$ components, we arrive at one of the main theorems of this paper: \begin{theorem}\label{classification} Specifying the valuation $\nu^\dag_-$ is equivalent to specifying the following data. The data will be described recursively in $i$, that is, the description of $\nu^\dag_i$ assumes that $\nu^\dag_{i-1}$ is already defined: (1) An admissible chain of trees (\ref{eq:chaintree''}) of prime ideals of ${R'}^\dag$. (2) For each $i$, $1\le i\le 2r$, a valuation $\nu^\dag_i$ of $k_{\nu^\dag_{i-1}}$ (where $\nu^\dag_0$ is taken to be the trivial valuation by convention), whose restriction to $\lim\limits_{\overset\longrightarrow{R'}}\kappa(\tilde H'_{i-1})$ is centered at the local ring $\lim\limits_{\overset\longrightarrow{R'}}\frac{{R'}^\dag_{\tilde H'_i}}{\tilde H'_{i-1}{R'}^\dag_{\tilde H'_i}}$. The data $\left\{\nu^\dag_i\right\}_{1\le i\le 2r}$ is subject to the following additional condition: if $i=2\ell$ is even then $rk\ \nu^\dag_i=1$ and $\nu^\dag_i$ is an extension of $\nu_\ell$ to $k_{\nu^\dag_{i-1}}$ (which is naturally an extension of $k_{\nu_{\ell-1}}$). \end{theorem} In particular, note that such extensions $\nu^\dag_-$ always exist, and usually there are plenty of them. The question of uniqueness of $\nu^\dag_-$ and the related question of uniqueness of $\nu^\dag_i$, especially in the case when $i$ is even, will be addressed in the next section. \section{Uniqueness properties of $\nu^\dag_-$.} \label{extensions} In this section we address the question of uniqueness of the extension $\nu^\dag_-$.
One result in this direction, which will be very useful here, was already proved in \S\ref{technical}: Proposition \ref{nu0unique}. We give some necessary and some sufficient conditions both for the uniqueness of $\nu^\dag_-$ once the chain (\ref{eq:chaintree''}) of prime ideals determined by $\nu^\dag_-$ has been fixed, and also for the unconditional uniqueness of $\nu^\dag_-$. In \S\ref{henselization} we will use one of these uniqueness criteria to prove uniqueness of $\nu^\dag_-$ in the cases $R^\dag=\tilde R$ and $R^\dag=R^e$. At the end of this section we generalize and give a new point of view of an old result of W. Heinzer and J. Sally (Proposition \ref{HeSal}), which provides a sufficient condition for the uniqueness of $\nu^\dag_-$; see also \cite{Te}, Remarks 5.22. For a ring $R'\in\cal T$ let $K'$ denote the field of fractions of $R'$. For some results in this section we will need to impose an additional condition on the tree $\cal T$: we will assume that there exists $R_0\in\cal T$ such that for all $R'\in{\cal T}(R_0)$ the field $K'$ is algebraic over $K_0$. This assumption is needed in order to be able to control the height of all the ideals in sight. Without loss of generality, we may take $R_0=R$. \begin{proposition}\label{htstable} Assume that for all $R'\in\cal T$ the field $K'$ is algebraic over $K$. Consider a ring homomorphism $R'\rightarrow R''$ in $\cal T$. Take an $\ell\in\{0,\dots,r\}$. We have \begin{equation} {\operatorname{ht}}\ H''_{2\ell}\le {\operatorname{ht}}\ H'_{2\ell}.\label{eq:odddecreases} \end{equation} If equality holds in (\ref{eq:odddecreases}) then \begin{equation} {\operatorname{ht}}\ H''_{2\ell+1}\ge{\operatorname{ht}}\ H'_{2\ell+1}.\label{eq:odddecreases1} \end{equation} \end{proposition} \begin{proof} We start by recalling a well known Lemma (for a proof see \cite{ZS}, Appendix 1, Propositions 2 and 3, p. 326): \begin{lemma}\label{idealheight} Let $R\hookrightarrow R'$ be an extension of integral domains, essentially of finite type. Let $K$ and $K'$ be the respective fields of fractions of $R$ and $R'$. Consider prime ideals $P\subset R$ and $P'\subset R'$ such that $P=P'\cap R$. Then \begin{equation} {\operatorname{ht}}\ P'+tr.deg.(\kappa(P')/\kappa(P))\le\ {\operatorname{ht}}\ P+tr.deg.(K'/K).\label{eq:heightdrops} \end{equation} Moreover, equality holds in (\ref{eq:heightdrops}) whenever $R$ is universally catenarian. \end{lemma} Apply the Lemma to the rings $R'$ and $R''$ and the prime ideals $P'_\ell\subset R'$ and $P''_\ell\subset R''$. In the case at hand we have $tr.deg.(K''/K')=0$ by assumption. Hence \begin{equation} {\operatorname{ht}}\ P''_\ell\le\ {\operatorname{ht}}\ P'_\ell.\label{eq:htP} \end{equation} Since $H'_{2\ell}$ is a minimal prime of $P'_\ell{R'}^\dag$ and ${R'}^\dag$ is faithfully flat over $R'$, we have $ht\ P'_\ell=ht\ H'_{2\ell}$. Similarly, ${\operatorname{ht}}\ P''_\ell={\operatorname{ht}}\ H''_{2\ell}$, and (\ref{eq:odddecreases}) follows. Furthermore, equality in (\ref{eq:odddecreases}) is equivalent to equality in (\ref{eq:htP}). To prove (\ref{eq:odddecreases1}), let $\bar R=(R''\otimes_{R'}{R'}^\dag)_{M''}$, where $M''=(m''\otimes1+1\otimes m'{R'}^\dag)$ and let $\bar m$ denote the maximal ideal of $\bar R$. We have the natural maps ${R'}^\dag\overset\iota\rightarrow\bar R\overset\sigma\rightarrow{R''}^\dag$. The homomorphism $\sigma$ is nothing but the formal completion of the local ring $\bar R$; in particular, it is faithfully flat. 
Let \begin{equation} \bar H=H''_{2\ell+1}\cap\bar R,\label{eq:barH} \end{equation} $\bar H_0=H''_0\cap\bar R$. Since $H''_0$ is a minimal prime of ${R''}^\dag$ and $\sigma$ is faithfully flat, $\bar H_0$ is a minimal prime of $\bar R$. Assume that equality holds in (\ref{eq:odddecreases}) (and hence also in (\ref{eq:htP})). Since equality holds in (\ref{eq:htP}), by Lemma \ref{idealheight} (applied to the ring extension $R'\rightarrow R''$) the field $\kappa(P''_\ell)$ is algebraic over $\kappa(P'_\ell)$. Apply Lemma \ref{idealheight} to the ring extension $\frac{{R'}^\dag}{H'_0}\hookrightarrow\frac{\bar R}{\bar H_0}$ and the prime ideals $\frac{H'_{2\ell+1}}{H'_0}$ and $\frac{\bar H}{\bar H_0}$. Since $K''$ is algebraic over $K'$, $\kappa(\bar H_0)$ is algebraic over $\kappa(H'_0)$. Since $\kappa(P''_\ell)$ is algebraic over $\kappa(P'_\ell)$, $\kappa(\bar H)$ is algebraic over $\kappa(H'_{2\ell+1})$. Finally, $\hat R'$ is universally catenarian because it is a complete local ring. Now in the case ${\ }^\dag=\hat{\ }$ Lemma \ref{idealheight} says that ${\operatorname{ht}}\ \frac{H'_{2\ell+1}}{H'_0}={\operatorname{ht}}\ \frac{\bar H}{\bar H_0}$. Since both $\hat R'$ and $\bar R$ are catenarian, this implies that \begin{equation} {\operatorname{ht}}\ H'_{2\ell+1}={\operatorname{ht}}\ \bar H.\label{eq:htequal} \end{equation} In the case where ${\ }^\dag$ stands for henselization or a finite \'etale extension, (\ref{eq:htequal}) is an immediate consequence of (\ref{eq:barH}). Thus (\ref{eq:htequal}) is true in all the cases. Since $\sigma$ is faithfully flat and in view of (\ref{eq:barH}), ${\operatorname{ht}}\ \bar H\le {\operatorname{ht}}\ H''_{2\ell+1}$. Combined with (\ref{eq:htequal}), this completes the proof. \end{proof} \begin{corollary}\label{htstable1} For each $i$, $0\le i\le 2r$, the quantity ${\operatorname{ht}}\ H'_i$ stabilizes for $R'$ sufficiently far out in $\cal T$. \end{corollary} The next Proposition is an immediate consequence of Theorem \ref{classification}. \begin{proposition}\label{uniqueness2} Suppose given an admissible chain of trees (\ref{eq:chaintree''}) of prime ideals of ${R'}^\dag$. For each $\ell\in\{0,\dots,r-1\}$, consider the set of all $R'\in\cal T$ such that \begin{equation} {\operatorname{ht}}\ \tilde H'_{2\ell+1}- {\operatorname{ht}}\ \tilde H'_{2\ell} \le 1\label{eq:odd=even3} \end{equation} and, in case of equality, the 1-dimensional local ring $\frac{{R'}^\dag_{\tilde H'_{2\ell+1}}}{\tilde H'_{2\ell}{R'}^\dag_{\tilde H'_{2\ell+1}}}$ is unibranch (that is, analytically irreducible). Assume that for each $\ell$ the set of such $R'$ is cofinal in $\cal T$. Let $\nu^\dag_{2\ell+1,0}$ denote the unique valuation centered at $\lim\limits_{\overset\longrightarrow{R'}}\frac{{R'}^\dag_{\tilde H'_{2\ell+1}}}{\tilde H'_{2\ell}{R'}^\dag_{\tilde H'_{2\ell+1}}}$. Assume that for each even $i=2\ell$, $\nu_\ell$ admits a unique extension $\nu^\dag_{i0}$ to a valuation of $\lim\limits_{\overset\longrightarrow{R'}}\kappa(\tilde H'_{i-1})$, centered in $\lim\limits_{\overset\longrightarrow{R'}}\frac{{R'}^\dag_{\tilde H'_i}}{\tilde H'_{i-1}{R'}^\dag_{\tilde H'_i}}$. Then specifying the valuation $\nu^\dag_-$ is equivalent to specifying for each odd $i$, $2<i<2r$, an extension $\nu_i^\dag$ of the valuation $\nu^\dag_{i0}$ of $\lim\limits_{\overset\longrightarrow{R'}}\kappa(\tilde H'_{i-1})$ to its field extension $k_{\nu^\dag_{i-1}}$ (in particular, such extensions $\nu^\dag_-$ always exist).
If for each odd $i$, $2<i<2r$, the field extension $\lim\limits_{\overset\longrightarrow{R'}}\kappa(\tilde H'_{i-1})\rightarrow k_{\nu^\dag_{i-1}}$ is algebraic and the extension $\nu_i^\dag$ of $\nu^\dag_{i0}$ to $k_{\nu^\dag_{i-1}}$ is unique then there is a unique extension $\nu^\dag_-$ of $\nu$ such that the $\tilde H'_i$ are the prime ideals, determined by $\nu^\dag_-$. Conversely, assume that $K'$ is algebraic over $K$ and that there exists a unique extension $\nu^\dag_-$ of $\nu$ such that the $\tilde H'_i$ are the prime $\nu^\dag_-$-ideals, determined by $\nu^\dag_-$. Then for each $\ell\in\{0,\dots,r-1\}$ and for all $R'$ sufficiently far out in $\cal T$ the inequality (\ref{eq:odd=even3}) holds. For each even $i=2\ell$, $\nu_\ell$ admits a unique extension $\nu^\dag_{i0}$ to a valuation of $\lim\limits_{\overset\longrightarrow{R'}}\kappa\left(\tilde H'_{i-1}\right)$, centered in $\lim\limits_{\overset\longrightarrow{R'}}\frac{{R'}^\dag_{\tilde H'_i}}{\tilde H'_{i-1}{R'}^\dag_{\tilde H'_i}}$; we have $rk\ \nu^\dag_{i0}=1$. For each odd $i$, the ring $\lim\limits_{\overset\longrightarrow{R'}}\frac{{R'}^\dag_{\tilde H'_i}}{\tilde H'_{i-1}{R'}^\dag_{\tilde H'_i}}$ is a valuation ring of a (not necessarily discrete) rank 1 valuation. For each odd $i$, $1\le i<2r$, the field extension $\lim\limits_{\overset\longrightarrow{R'}}\kappa(\tilde H'_{i-1})\rightarrow k_{\nu^\dag_{i-1}}$ is algebraic and the extension $\nu_i^\dag$ of $\nu^\dag_{i0}$ to $k_{\nu^\dag_{i-1}}$ is unique. \end{proposition} \begin{remark} We do not know of a simple criterion to decide when, given an algebraic field extension $K\hookrightarrow L$ and a valuation $\nu$ of $K$, is the extension of $\nu$ to $L$ unique. See \cite{HOS}, \cite{V2} for more information about this question and an algorithm for arriving at the answer using MacLane's key polynomials. \end{remark} Next we describe three classes of extensions of $\nu$ to $\lim\limits_{\overset\longrightarrow{R'}}{R'}^\dag$, which are of particular interest for applications, and which we call \textbf{minimal}, \textbf{evenly minimal} and \textbf{tight} extensions. \begin{definition} Let $\nu^\dag_-$ be an extension of $\nu$ to $\lim\limits_{\overset\longrightarrow{R'}}{R'}^\dag$ and let the notation be as above. We say that $\nu^\dag_-$ is \textbf{evenly minimal} if whenever $i=2\ell$ is even, the following two conditions hold: (1) \begin{equation} \Delta_{i-1,0}=\frac{\Delta_{\ell-1}}{\Delta_\ell}.\label{eq:groupequal} \end{equation} (2) For an element $\overline\beta\in\frac{\Delta_{\ell-1}}{\Delta_\ell}$, the $\nu^\dag_{i0}$-ideal of $\frac{R^\dag_{\tilde H_i}}{\tilde H_{i-1}R^\dag_{\tilde H_i}}$ of value $\overline\beta$, denoted by $\mathcal{P}^\dag_{\overline\beta ,\ell}$, is given by the formula \begin{equation} \mathcal{P}^\dag_{\overline\beta,\ell}=\left(\lim\limits_{\overset\longrightarrow{R'}} \frac{\mathcal{P}'_{\overline\beta}{R'}^\dag_{\tilde H'_i}}{\tilde H'_{i-1}{R'}^\dag_{\tilde H'_i}}\right)\cap\frac{R^\dag_{\tilde H_i}}{\tilde H_{i-1}R^\dag_{\tilde H_i}}.\label{eq:valideal} \end{equation} We say that $\nu^\dag_-$ is \textbf{minimal} if $\tilde H'_i=H'_i$ for each $R'$ and each $i\in\{0,\dots,2r+1\}$. We say that $\nu^\dag_-$ is \textbf{tight} if it is evenly minimal and \begin{equation} \tilde H'_i=\tilde H'_{i+1}\qquad\text{ for all even }i.\label{eq:odd=even2} \end{equation} \end{definition} \begin{remark} (1) The valuation $\nu^\dag_{i0}$ is uniquely determined by conditions (\ref{eq:groupequal}) and (\ref{eq:valideal}). 
Recall also that if $i=2\ell$ is even and we have: \begin{eqnarray} \tilde H'_i&=&H'_i\qquad\text{ and}\label{eq:tilde=H1}\\ \tilde H'_{i-1}&=&H'_{i-1}\label{eq:tilde=H2} \end{eqnarray} then $\nu^\dag_{i0}$ is uniquely determined by $\nu_\ell$ by Proposition \ref{nu0unique}. In particular, if $\nu^\dag_-$ is minimal (that is, if (\ref{eq:tilde=H1})--(\ref{eq:tilde=H2}) hold for all $i$) then $\nu^\dag_-$ is evenly minimal. (2) The definition of evenly minimal extensions can be rephrased as follows in terms of the associated graded algebras of $\nu_\ell$ and $\nu^\dag_{i0}$. First, consider a homomorphism in $\mathcal T$ and the following diagram, composed of natural homomorphisms: \begin{equation}\label{eq:CDevminimal} \xymatrix{{\frac{\mathcal{P}'_{\overline\beta}}{\mathcal{P}'_{\overline\beta+}}\otimes_{R'}{R'}^\dag_{\tilde H'_i}}\ar[r]^-{\lambda'}&{\frac{\mathcal{P}'_{\overline\beta}{R'}^\dag_{\tilde H'_i}}{\mathcal{P}'_{\overline\beta+}{R'}^\dag_{\tilde H'_i}}} \ar[r]^-{\phi'}& {\frac{{\mathcal{P}'_{\overline\beta}}^\dag}{{\mathcal{P}'}_{\overline\beta+}^\dag}}\\ {\ }&{\ }&{\frac{\mathcal{P}_{\overline\beta}^\dag}{\mathcal{P}_{\overline\beta+}^\dag}}\ar[u]_-{\psi'}} \end{equation} It follows from Nakayama's Lemma that equality (\ref{eq:valideal}) is equivalent to saying that \begin{equation}\label{eq:equalgraded} \frac{\mathcal{P}_{\overline\beta}^\dag}{\mathcal{P}_{\overline\beta+}^\dag}=\lim\limits_{\overset\longrightarrow{R'}}{\psi'}^{-1}\left((\phi'\circ\lambda')\left(\frac{\mathcal{P}'_{\overline\beta}}{\mathcal{P}'_{\overline\beta+}}\otimes_{R'}\kappa\left(\tilde H'_i\right)\right)\right). \end{equation} Taking the direct sum in (\ref{eq:equalgraded}) over all $\overline\beta\in\frac{\Delta_{\ell-1}}{\Delta_\ell}$ and passing to the limit on both sides, we see that the extension $\nu^\dag_-$ is evenly minimal if and only if we have the following equality of graded algebras: $$ \mbox{gr}_{\nu_\ell}\left(\lim\limits_{\overset\longrightarrow{R'}}\frac{R'}{P'_\ell}\right)\otimes_{R'}\kappa\left(\tilde H'_i\right)=\mbox{gr}_{\nu^\dag_{i0}}\left(\lim\limits_{\overset\longrightarrow{R'}}\frac{{R'}^\dag_{\tilde H'_i}}{\tilde H'_{i-1}{R'}^\dag_{\tilde H'_i}}\right). $$ \end{remark} \begin{examples} The extension $\hat\nu$ of Example \ref{Example31} (page \pageref{Example31}) is minimal, but not tight. The valuation $\nu$ admits a unique tight extension $\hat\nu_2\circ\hat\nu_3$ to $\lim\limits_{\overset\longrightarrow{R'}}\frac{\hat R'}{H'_1}$; the valuation $\hat\nu$ is the composition of the discrete rank 1 valuation $\hat\nu_1$, centered in $\lim\limits_{\overset\longrightarrow{R'}}\hat R'_{H'_1}$, with $\hat\nu_2\circ\hat\nu_3$. The extension $\hat\nu^{(1)}$ of Example \ref{Example32} (page \pageref{Example32}) is minimal. The extension $\hat\nu^{(2)}$ is evenly minimal but not minimal. Neither $\hat\nu^{(1)}$ nor $\hat\nu^{(2)}$ is tight. The valuation $\nu$ admits a unique tight extension $\hat\nu_2\circ\hat\nu_3$ to $\lim\limits_{\overset\longrightarrow{R'}}\frac{\hat R'}{\tilde H'_1}$, where $\tilde H'_1=\left(y-\sum\limits_{j=1}^\infty c_jx^j\right)$; the valuation $\hat\nu^{(2)}$ is the composition of the discrete rank 1 valuation $\hat\nu_1$, centered in $\lim\limits_{\overset\longrightarrow{R'}}\hat R'_{\tilde H'_1}$, with $\hat\nu_2\circ\hat\nu_3$. \end{examples} \begin{remark} As of this moment, we do not know of any examples of extensions $\hat\nu_-$ which are not evenly minimal.
Thus, formally, the question of whether every extension $\hat\nu_-$ is evenly minimal is open, though we strongly suspect that counterexamples do exist. \end{remark} \begin{proposition}\label{resmin} Let $i=2\ell$ be even and let $\nu^\dag_{i0}$ be the extension of $\nu_\ell$ to $\lim\limits_{\overset\longrightarrow{R'}}\kappa(\tilde H'_{i-1})$, centered at the local ring $\lim\limits_{\overset\longrightarrow{R'}}\frac{{R'}^\dag_{\tilde H'_i}}{\tilde H'_{i-1}{R'}^\dag_{\tilde H'_i}}$, defined by (\ref{eq:valideal}). Then \begin{equation} k_{\nu^\dag_{i0}}=\lim\limits_{\overset\longrightarrow{R'}}\kappa(\tilde H'_i).\label{eq:resfield} \end{equation} \end{proposition} \begin{proof} Take two elements $x,y\in\lim\limits_{\overset\longrightarrow{R'}}\frac{{R'}^\dag_{\tilde H'_i}}{\tilde H'_{i-1}{R'}^\dag_{\tilde H'_i}}$, such that $\nu_{i0}^\dag(x)=\nu_{i0}^\dag(y)$. We must show that the image of $\frac xy$ in $k_{\nu^\dag_{i0}}$ belongs to $\lim\limits_{\overset\longrightarrow{R'}}\kappa(\tilde H'_i)$. Without loss of generality, we may assume that $x,y\in\frac{R^\dag_{\tilde H_i}}{\tilde H_{i-1}{R}^\dag_{\tilde H_i}}$. Let $\beta=\nu_{i0}^\dag(x)=\nu_{i0}^\dag(y)$. Choose $R'\in\cal T$ sufficiently far out in the direct system so that $x,y\in\frac{\mathcal{P}'_\beta{R'}^\dag_{\tilde H'_i}}{\tilde H'_{i-1}{R'}^\dag_{\tilde H'_i}}$. Let $R'\rightarrow R''$ be the blowing up of the ideal $\mathcal{P}'_\beta R'$. Then in $\frac{\mathcal{P}''_\beta{R''}^\dag_{\tilde H''_i}}{\tilde H''_{i-1}{R''}^\dag_{\tilde H''_i}}$ we can write \begin{eqnarray} x&=&az\qquad\text{ and }\label{eq:xaz}\\ y&=&aw, \end{eqnarray} where $\nu_{i0}^\dag(a)=\beta$ and $\nu_{i0}^\dag(z)=\nu_{i0}^\dag(w)=0$. Let $\bar z$ be the image of $z$ in $\kappa(\tilde H''_i)$ and similarly for $\bar w$. Then the image of $\frac xy$ in $k_{\nu^\dag_{i0}}$ equals $\frac{\bar z}{\bar w}\in\kappa(\tilde H''_i)$, and the result is proved. \end{proof} \begin{remark}\label{minimalexist} Theorem \ref{classification} and the existence of the extension $\nu^\dag_{2\ell,0}$ of $\nu_\ell$ in the case when $\tilde H'_{2\ell}=H'_{2\ell}$ and $\tilde H'_{2\ell-1}=H'_{2\ell-1}$ guaranteed by Theorem \ref{primality1} (2) allow us to give a fairly explicit description of the totality of minimal extensions as compositions of $2r$ valuations and, in particular, to show that they always exist. Indeed, minimal extensions $\nu^\dag_-$ can be constructed at will, recursively in $i$, as follows. Assume that the valuations $\nu^\dag_1,\dots,\nu^\dag_{i-1}$ are already constructed. If $i$ is odd, let $\nu^\dag_i$ be an arbitrary valuation of the residue field $k_{\nu^\dag_{i-1}}$ of the valuation ring $R_{\nu^\dag_{i-1}}$. If $i=2\ell$ is even, let $\nu^\dag_{i0}$ be the extension of $\nu_\ell$ to $\lim\limits_{\overset\longrightarrow{R'}}\kappa(H'_{i-1})$, centered at the local ring $\lim\limits_{\overset\longrightarrow{R'}} \frac{{R'}^\dag_{H'_i}}{H'_{i-1}{R'}^\dag_{H'_i}}$, whose existence and uniqueness are guaranteed by Theorem \ref{primality1} (2) and Proposition \ref{nu0unique}, respectively. Let $\nu^\dag_i$ be an arbitrary extension of $\nu^\dag_{i0}$ to the field $k_{\nu^\dag_{i-1}}$. It is clear that all the minimal extensions $\nu^\dag_-$ of $\nu$ are obtained in this way. In the next section we will use this remark to show that if $R^\dag=\tilde R$ or $R^\dag=R^e$ then $\nu$ admits a unique extension to $\frac{R^\dag}{H_0}$, which is necessarily minimal. 
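For the reader's convenience, the shape of a minimal extension produced by this recursion can be summarized as follows (this is only a restatement of the construction just described): $$ \nu^\dag_-=\nu^\dag_1\circ\nu^\dag_2\circ\dots\circ\nu^\dag_{2r}, $$ where for odd $i$ the valuation $\nu^\dag_i$ is an arbitrary valuation of $k_{\nu^\dag_{i-1}}$, and for even $i=2\ell$ it is an arbitrary extension to $k_{\nu^\dag_{i-1}}$ of the uniquely determined valuation $\nu^\dag_{i0}$.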
\end{remark} We end this section by giving some sufficient conditions for the uniqueness of $\nu^\dag_-$. \begin{proposition}\label{uniqueness1} Suppose given an admissible chain of trees (\ref{eq:chaintree''}) of prime ideals of ${R'}^\dag$. For each $\ell\in\{0,\dots,r-1\}$, consider the set of all $R'\in\cal T$ such that \begin{equation} ht\ \tilde H'_{2\ell+1}- ht\ \tilde H'_{2\ell} \le 1\label{eq:odd=even1} \end{equation} and, in case of equality, the 1-dimensional local ring $\frac{{R'}^\dag_{\tilde H'_{2\ell+1}}}{\tilde H'_{2\ell}{R'}^\dag_{\tilde H'_{2\ell+1}}}$ is unibranch (that is, analytically irreducible). Assume that for each $\ell$ the set of such $R'$ is cofinal in $\cal T$. Let $\nu^\dag_-$ be an extension of $\nu$ such that the $\tilde H'_i$ are prime $\nu^\dag_-$-ideals. Assume that $\nu^\dag_-$ is evenly minimal. Then there is at most one such extension $\nu^\dag_-$ and exactly one such $\nu^\dag_-$ if \begin{equation} \tilde H'_i=H'_i\quad\text{ for all }i.\label{eq:tildeH=H} \end{equation} (In the latter case $\nu^\dag_-$ is minimal by definition.) \end{proposition} \begin{proof} By Theorem \ref{primality1} (2) and Proposition \ref{nu0unique}, if (\ref{eq:tildeH=H}) holds then $\nu^\dag_-$ is minimal and for each even $i$ the extension $\nu^\dag_{i0}$ exists and is unique. Therefore we may assume that in all the cases $\nu^\dag_-$ is evenly minimal and that $\nu^\dag_{i0}$ exists whenever (\ref{eq:tildeH=H}) holds. The valuation $\nu^\dag_-$, if it exists, is a composition of $2r$ valuations: $\nu^\dag_-=\nu^\dag_1\circ\nu^\dag_2\circ\dots\circ\nu^\dag_{2r}$, subject to the conditions of Theorem \ref{classification}. We prove the uniqueness of $\nu^\dag_-$ by induction on $r$. Assume the result is true for $r-1$. This means that there is at most one evenly minimal extension $\nu^\dag_3\circ\nu^\dag_4\circ\dots\circ\nu^\dag_{2r}$ of $\nu_2\circ\nu_3\circ\dots\circ\nu_r$ to $\lim\limits_{\overset\longrightarrow{R'}}\kappa(\tilde H'_2)$, and exactly one in the case when (\ref{eq:tildeH=H}) holds. To complete the proof of uniqueness of $\nu^\dag_-$, it is sufficient to show that both $\nu_1^\dag$ and $\nu_2^\dag$ are unique and that the residue field of $\nu_2^\dag$ equals $\lim\limits_{\overset\longrightarrow{R'}}\kappa(\tilde H'_2)$. We start with the uniqueness of $\nu_1^\dag$. If (\ref{eq:odd=even2}) holds then $\nu_1^\dag$ is the trivial valuation. Suppose, on the other hand, that equality holds in (\ref{eq:odd=even1}). Then the restriction of $\nu_1^\dag$ to each $R'\in\cal T$ such that the local ring $\frac{{R'}^\dag_{\tilde H'_1}}{\tilde H'_0{R'}^\dag_{\tilde H'_1}}$ is one-dimensional and unibranch is the unique divisorial valuation centered in that ring (in particular, its residue field is $\kappa(\tilde H'_1)$). By the assumed cofinality of such $R'$, the valuation $\nu^\dag_1$ is unique and its residue field equals $\lim\limits_{\overset\longrightarrow{R'}}\kappa(\tilde H'_1)$. Thus, regardless of whether or not the inequality in (\ref{eq:odd=even1}) is strict, $\nu^\dag_1$ is unique and we have the equality of residue fields \begin{equation} k_{\nu_1^\dag}=\lim\limits_{\overset\longrightarrow{R'}}\kappa(\tilde H'_1).\label{eq:eqres} \end{equation} This equality implies that $\nu_2^\dag=\nu^\dag_{20}$.
Now, the valuation $\nu^\dag_2=\nu^\dag_{20}$ is uniquely determined by the conditions (\ref{eq:groupequal}) and (\ref{eq:valideal}), and its residue field is \begin{equation} k_{\nu^\dag_2}=k_{\nu^\dag_{20}}=\lim\limits_{\overset\longrightarrow{R'}} \kappa(\tilde H'_2)\label{eq:resfield1} \end{equation} by Proposition \ref{resmin}. Furthermore, by Theorem \ref{primality1} exactly one such $\nu_2^\dag$ exists whenever (\ref{eq:tildeH=H}) holds. This proves that there is at most one possibility for $\nu^\dag_-$: the composition of $\nu_1^\dag\circ\nu^\dag_2$ with $\nu^\dag_3\circ\nu^\dag_4\circ\dots\circ\nu^\dag_{2r}$, and exactly one if (\ref{eq:tildeH=H}) holds. \end{proof} \begin{proposition}\label{tight=scalewise} The extension $\nu^\dag_-$ is tight if and only if for each $R'$ in our direct system the natural graded algebra extension $\mbox{gr}_\nu R'\rightarrow\mbox{gr}_{\nu^\dag_-}{R'}^\dag$ is scalewise birational. \end{proposition} \begin{remark}\label{rephrasing} Proposition \ref{tight=scalewise} allows us to rephrase Conjecture \ref{teissier} as follows: the valuation $\nu$ admits at least one tight extension $\nu^\dag_-$. \end{remark} \begin{proof}\textit{(of Proposition \ref{tight=scalewise})} ``If''. Assume that for each $R'$ in our direct system the natural graded algebra extension $\mbox{gr}_\nu R'\rightarrow\mbox{gr}_{\nu^\dag_-}{R'}^\dag$ is scalewise birational. Then \begin{equation}\label{eq:gamma=dag} \Gamma^\dag=\Gamma. \end{equation} Together with (\ref{eq:Delta}) this implies that for each $\ell\in\{1,\dots,r+1\}$ we have $\Delta^\dag_{2\ell-2}=\Delta^\dag_{2\ell-1}=\Delta_{\ell-1}$ under the identification (\ref{eq:gamma=dag}). Then $\frac{\Delta^\dag_{2\ell-1}}{\Delta^\dag_{2\ell}}=(0)$, so for all odd $i$ the valuation $\nu^\dag_i$ is trivial. This proves the equality (\ref{eq:odd=even2}) in the definition of tight. It remains to show that $\nu^\dag_-$ is evenly minimal. We will prove the even minimality in the form of equality (\ref{eq:equalgraded}) for each $\bar\beta\in\frac{\Delta_{\ell-1}}{\Delta_\ell}$. The right hand side of (\ref{eq:equalgraded}) is trivially contained in the left hand side; we must prove the opposite inclusion. To do that, take a non-zero element $x\in\frac{\mathcal{P}_{\overline\beta}^\dag}{\mathcal{P}_{\overline\beta+}^\dag}$. By scalewise birationality, there exist non-zero elements $\bar y,\bar z\in\mbox{gr}_\nu R$, with $ord\ \bar y,ord\ \bar z\in\Delta_\ell$, such that $x\bar y=\bar z$. Let $y$ be a representative of $\bar y$ in $R$, and similarly for $z$. Let $R\rightarrow R'$ be the local blowing up with respect to $\nu$ along the ideal $(y,z)$. Then, in the notation of (\ref{eq:equalgraded}), we have \begin{equation}\label{eq:equalgraded1} x={\psi'}^{-1}\left(\left(\phi'\circ\lambda'\right)\left(\frac{\bar z}{\bar y}\otimes_{R'}1\right)\right)\in{\psi'}^{-1}\left((\phi'\circ\lambda')\left(\frac{\mathcal{P}'_{\overline\beta}}{\mathcal{P}'_{\overline\beta+}}\otimes_{R'}\kappa\left(\tilde H'_i\right)\right)\right). \end{equation} This proves (\ref{eq:equalgraded}). ``If'' is proved. ``Only if''. Assume that $\nu^\dag_-$ is tight (that is, it is evenly minimal and (\ref{eq:odd=even2}) holds) and take $R'\in\cal T$. Then the valuation $\nu^\dag_{2\ell+1}$ is trivial for all $\ell$, so $\nu^\dag_-=\nu^\dag_2\circ\nu^\dag_4\circ\dots\circ\nu^\dag_{2r}$. We must show that the graded algebra extension $\mbox{gr}_\nu R'\rightarrow\mbox{gr}_{\nu^\dag_-}{R'}^\dag$ is scalewise birational. Again, we use induction on $r$.
Take an element $x\in{R'}^\dag$. If $\nu^\dag_-(x)\in\Delta_1$ then $\mbox{in}_{\nu^\dag_-}x\in\mbox{gr}_{\nu^\dag_4\circ\dots\circ\nu^\dag_{2r}} \frac{{R'}^\dag}{\tilde H'_2}$, hence by the induction assumption there exists $y\in R'$ with $\nu^\dag_-(y)\in\Delta_1$ and $\mbox{in}_{\nu^\dag_-}(xy)\in\mbox{gr}_\nu\frac{R'}{P_1}$. In this case, there is nothing more to prove. Thus we may assume that $\nu^\dag_-(x)\mbox{$\in$ \hspace{-.8em}/}\Delta_1$. It remains to show that there exists $y\in R'$ such that $\mbox{in}_{\nu^\dag_-}(xy)\in\mbox{gr}_\nu R'$. Since the natural map sending each element of the ring to its image in the graded algebra behaves well with respect to multiplication and division, local blowings up induce birational transformations of graded algebras, and it is enough to find a local blowing up $R''\in{\cal T}(R')$ and $y\in R''$ such that $\mbox{in}_{\nu^\dag_-}(xy)\in\mbox{gr}_\nu R''$. Now, Proposition \ref{resmin} shows that there exists a local blowing up $R'\rightarrow R''$ such that $x=az$ (\ref{eq:xaz}), with $z\in R''$ and $\nu^\dag_2(a)=\nu^\dag_{2,0}(a)=0$. The last equality means that $\nu^\dag_-(a)\in\Delta_1$, and the result follows from the induction assumption, applied to $a$. \end{proof} The argument above also shows the following. Let ${\Phi'}^\dag=\nu^\dag_-\left({R'}^\dag\setminus\{0\}\right)$, take an element $\beta\in{\Phi'}^\dag$ and let ${\cal P'}^\dag_\beta$ denote the $\nu^\dag_-$-ideal of ${R'}^\dag$ of value $\beta$. \begin{corollary}\label{blup1} Take an element $x\in{\cal P'}^\dag_\beta$. There exists a local blowing up $R'\rightarrow R''$ such that $\beta\in\nu(R'')\setminus\{0\}$ and $x\in{\cal P}''_\beta{R''}^\dag$. \end{corollary} The next Proposition gives a sufficient condition for the uniqueness of $\nu^\dag_-$ (this result is due to Heinzer and Sally \cite{HeSa}). \begin{proposition}\label{HeSal} Assume that $K'$ is algebraic over $K$ for all $R'\in\cal T$ and that the following conditions hold: (1) $ht\ H'_1\le1$; (2) $ht\ H'_1+rat.rk\ \nu=\dim\ R'$, where $R'$ is taken to be sufficiently far out in the direct system. Let $\nu^\dag_-$ be an extension of $\nu$ to a ring of the form $\lim\limits_{\overset\longrightarrow{R'}}\frac{{R'}^\dag}{\tilde H'_0}$. Then either \begin{eqnarray} \tilde H'_0&=&H'_0\qquad\text{ or }\label{eq:tildeH=H0}\\ \tilde H'_0&=&H'_1.\label{eq:tildeH=H1} \end{eqnarray} The valuation $\nu$ admits a unique extension to $\lim\limits_{\overset\longrightarrow{R'}}\frac{{R'}^\dag}{H'_0}$ and a unique extension to $\lim\limits_{\overset\longrightarrow{R'}}\frac{{R'}^\dag}{H'_1}$. The first extension is minimal and the second is tight. \end{proposition} \begin{proof} For $1\le\ell\le r$, let $r_\ell$ denote the rational rank of $\nu_\ell$. Let $\nu^\dag_-$ be an extension of $\nu$ to a ring of the form $\lim\limits_{\overset\longrightarrow{R'}}\frac{{R'}^\dag}{\tilde H'_0}$, where $\tilde H'_0$ is a tree of prime ideals of ${R'}^\dag$ such that $\tilde H'_0\cap R'=(0)$. By Corollary \ref{htstable1}, $ht\ H'_i$ stabilizes for $1\le i\le 2r$ and $R'$ sufficiently far out in the direct system. From now on, we will assume that $R'$ is chosen sufficiently far so that the stable value of $ht\ H'_i$ is attained. Now, let $i=2\ell$.
The valuation $\nu^\dag_{i0}$ is centered in the local noetherian ring $\frac{{R'}^\dag_{\tilde H'_i}}{\tilde H'_{i-1}{R'}^\dag_{\tilde H'_i}}$, hence by Abhyankar's inequality \begin{equation} rat.rk\ \nu^\dag_{i0}\le\dim\frac{{R'}^\dag_{\tilde H'_i}}{\tilde H'_{i-1}{R'}^\dag_{\tilde H'_i}}\le ht\ \tilde H'_i-ht\ \tilde H'_{i-1}.\label{eq:abhyankar} \end{equation} Since this inequality is true for all even $i$, summing over all $i$ we obtain: \begin{equation} \begin{array}{rcl} \dim R'&=&\dim\ {R'}^\dag=\sum\limits_{i=1}^{2r}(ht\ \tilde H'_i-ht\ \tilde H'_{i-1})\ge ht\ \tilde H'_1+\sum\limits_{\ell=1}^r(ht\ \tilde H'_{2\ell}-ht\ \tilde H'_{2\ell-1})\ge\\ &\ge&ht\ H'_1+\sum\limits_{\ell=1}^rrat.rk\ \nu^\dag_{2\ell,0}\ge ht\ H'_1+\sum\limits_{\ell=1}^rr_\ell=ht\ H'_1+rat.rk\ \nu=\dim R'.\label{eq:inequalities} \end{array} \end{equation} Hence all the inequalities in (\ref{eq:abhyankar}) and (\ref{eq:inequalities}) are equalities. In particular, we have $$ ht\ \tilde H'_1=ht\ H'_1; $$ combined with Proposition \ref{Hintilde} this shows that \begin{equation} \tilde H'_1=H'_1. \end{equation} Together with the hypothesis (1) of the Proposition, this already proves that at least one of (\ref{eq:tildeH=H0})--(\ref{eq:tildeH=H1}) holds. Furthermore, equalities in (\ref{eq:abhyankar}) and (\ref{eq:inequalities}) prove that $$ ht\ \tilde H'_i=ht\ \tilde H'_{i-1} $$ for all odd $i>1$, so that \begin{equation} \tilde H'_i=\tilde H'_{i-1}\qquad\text{whenever $i>1$ is odd}\label{eq:oddiseven} \end{equation} and that \begin{equation} r_\ell=ht\ \tilde H'_{2\ell}-ht\ \tilde H'_{2\ell-1}\label{heightfixed} \end{equation} whenever $i=2\ell$ is even. Now, consider the special case when $\tilde H'_i=H'_i$ for $i\ge1$ and $\tilde H'_0$ is as in (\ref{eq:tildeH=H0})--(\ref{eq:tildeH=H1}). According to Proposition \ref{nu0unique}, for each even $i=2\ell$ there exists a unique extension $\nu^\dag_{i0}$ of $\nu_\ell$ to a valuation of $\lim\limits_{\overset\longrightarrow{R'}}\kappa(H'_{i-1})$, centered in the local ring $\lim\limits_{\overset\longrightarrow{R'}} \frac{{R'}^\dag_{H'_{2\ell}}}{H'_{2\ell-1}{R'}^\dag_{H'_{2\ell}}}$. Moreover, we have \begin{equation} k_{\nu^\dag_{2\ell,0}}=\lim\limits_{\overset\longrightarrow{R'}}\kappa(H'_{2\ell}) \label{eq:knudag2l} \end{equation} by Remark \ref{sameresfield}. By Theorem \ref{classification}, there exists an extension $\nu^\dag_-$ of $\nu$ to $\lim\limits_{\overset\longrightarrow{R'}}\frac{{R'}^\dag}{\tilde H'_0}$ such that the $\{\tilde H'_i\}$ as above is the chain of trees of prime ideals, determined by $\nu^\dag_-$. In particular, (\ref{eq:oddiseven}) and (\ref{heightfixed}) hold with $\tilde H'_i$ replaced by $H'_i$. Now (\ref{heightfixed}) and Proposition \ref{Hintilde} imply that for \textit{any} extension $\nu^\dag_-$ we have $\tilde H'_i=H'_i$ for $i>0$, so that the special case above is, in fact, the only case possible. Furthermore, by (\ref{eq:oddiseven}) we have $H'_{2\ell+1}=H'_{2\ell}$ for all $\ell\in\{1,\dots,r\}$. This implies that for all such $\ell$ the valuation $\nu^\dag_{2\ell+1,0}$ is the trivial valuation of $\lim\limits_{\overset\longrightarrow{R'}}\kappa(H'_{2\ell})$; in particular, \begin{equation} k_{\nu^\dag_{2\ell+1,0}}=\lim\limits_{\overset\longrightarrow{R'}}\kappa(H'_{2\ell}) \label{eq:knudag2l+1} \end{equation} for all $\ell\in\{1,\dots,r-1\}$.
If $\tilde H'_0=H'_1=\tilde H'_1$ then the only possibility for $\nu^\dag_{10}=\nu^\dag_1$ is the trivial valuation of $\lim\limits_{\overset\longrightarrow{R'}}\kappa(H'_1)$; we have \begin{equation} k_{\nu^\dag_1}=k_{\nu^\dag_{10}}=\lim\limits_{\overset\longrightarrow{R'}} \kappa(H'_1).\label{eq:knudag10} \end{equation} If $\tilde H'_0=H'_0$ then by the hypothesis (1) of the Proposition and the excellence of $R$ the ring $\frac{{R'}^\dag_{H'_1}}{H'_0{R'}^\dag_{H'_1}}$ is a regular one-dimensional local ring (in particular, unibranch), hence the valuation $\nu^\dag_1=\nu^\dag_{10}$ centered at $\lim\limits_{\overset\longrightarrow{R'}} \frac{{R'}^\dag_{H'_1}}{H'_0{R'}^\dag_{H'_1}}$ is unique and (\ref{eq:knudag10}) holds also in this case. By induction on $i$, it follows from (\ref{eq:knudag2l}), (\ref{eq:knudag2l+1}), the uniqueness of $\nu^\dag_{2\ell,0}$ and the triviality of $\nu^\dag_{2\ell+1,0}$ for $\ell\ge1$ that $\nu_i^\dag$ is uniquely determined for all $i$ and $k_{\nu^\dag_i}=\lim\limits_{\overset\longrightarrow{R'}}\kappa(H'_i)$. This proves that in both cases (\ref{eq:tildeH=H0}) and (\ref{eq:tildeH=H1}) the valuation $\nu^\dag_-=\nu^\dag_1\circ\dots\circ\nu^\dag_{2r}$ is unique. The last statement of the Proposition is immediate from definitions. \end{proof} A related necessary condition for the uniqueness of $\nu^\dag_-$ will be proved in \S\ref{locuni1}. \section{Extending a valuation centered in an excellent local domain to its henselization.} \label{henselization} Let $\tilde R$ denote the henselization of $R$, as above. The completion homomorphism $R\rightarrow\hat R$ factors through the henselization: $R\rightarrow\tilde R\rightarrow\hat R$. In this section, we will show that $H_1$ is a minimal prime of $\tilde R$, that $\nu$ extends uniquely to a valuation $\tilde\nu_-$ of rank $r$ centered at $\frac{\tilde R}{H_1}$, and that $H_1$ is the unique prime ideal $P$ of $\tilde R$ such that $\nu$ extends to a valuation of $\frac{\tilde R}P$. Furthermore, we will prove that $H_{2\ell+1}$ is a minimal prime of $P_\ell\tilde R$ for all $\ell$ and that these are precisely the prime $\tilde\nu$-ideals of $\tilde R$. Studying the implicit prime ideals of $\tilde R$ and the extension of $\nu$ to $\tilde R$ is a logical intermediate step before attacking the formal completion, for the following reason. As we will show in the next section, if $R$ is already henselian in (\ref{eq:defin1}) then $\mathcal{P}'_\beta\hat R'_{H'_{2\ell+1}}\cap\hat R=\mathcal{P}_\beta\hat R$ for all $\beta$ and $R'$ and thus we have $H_{2\ell+1}=\bigcap\limits_{\beta\in\Delta_{\ell}}\left({\cal P}_\beta\hat R\right)$. We state the main result of this section. In the case when $R^e$ is an \'etale extension of $R$, contained in $\tilde R$, we use (\ref{eq:defin3}) with $R^\dag=R^e$ as our definition of the implicit prime ideals. \begin{theorem}\label{hensel0} Let $R^e$ be a local \'etale extension of $R$, contained in $\tilde R$. Then: (1) The ideal $H_{2\ell+1}$ is prime for $0\le\ell\le r$; it is a minimal prime of $P_\ell R^e$. In particular, $H_1$ is a minimal prime of $R^e$. We have $H_{2\ell}=H_{2\ell+1}$ for $0\le\ell\le r$. (2) The ideal $H_1$ is the unique prime $P$ of $R^e$ such that there exists an extension $\nu^e_-$ of $\nu$ to $\frac{R^e}P$; the extension $\nu^e_-$ is unique. The graded algebra $\mbox{gr}_{\nu^e_-}\frac{R^e}{H_1}$ is scalewise birational to $\mbox{gr}_\nu R$; in particular, $rk\ \nu^e_-=r$. (3) The ideals $H_{2\ell+1}$ are precisely the prime $\nu^e_-$-ideals of $R^e$.
\end{theorem} \begin{proof} By assumption, the ring $R^e$ is a direct limit of local, strict \'etale extensions of $R$ which are essentially of finite type. All the assertions (1)--(3) behave well under taking direct limits, so it is sufficient to prove the Theorem in the case when $R^e$ is essentially of finite type over $R$. From now on, we will restrict attention to this case. The next step is to describe explicitly those local blowings up $R\rightarrow R'$ for which $R'$ is $\ell$-stable. Their interest to us is that, according to Proposition \ref{largeR2}, if $R'$ is $\ell$-stable then for all $R''\in{\cal T}(R')$ and all $\beta\in\frac{\Delta_\ell}{\Delta_{\ell+1}}$, we have the equality \begin{equation} {\cal P}''_\beta(R''\otimes_RR^e)\cap R^e={\cal P}_\beta R^e;\label{eq:contracts} \end{equation} in particular, the limit in (\ref{eq:defin3}) is attained, that is, we have the equality \begin{equation} H_{2\ell+1}=\bigcap\limits_{\beta\in\Delta_\ell}\left(\left({\cal P}'_\beta\left(R^e\otimes_RR'\right)_{M'}\right) \bigcap R^e\right).\label{eq:defin4} \end{equation} \begin{lemma}\label{lift} Let $\frac{R}{P_\ell}\to T$ be a finitely generated extension of $\frac {R}{P_\ell}$, contained in $\frac{R_\nu}{\bf m_\ell}$. Let $$ {\bf q}=\frac{\bf m_\nu}{\bf m_\ell}\cap T. $$ There exists a $\nu$-extension $R\to R'$ of $R$ such that $\frac{R'}{P'_\ell}=T_{\bf q}$. \end{lemma} \begin{proof} Write $T=\frac R{P_\ell}\left[\overline a_1,\ldots,\overline a_k\right]$, with $\overline a_i\in\frac{R_\nu}{\bf m_\ell}$, that is, $\nu_{\ell+1}\left(\overline a_i\right)\geq 0,\ 1\leq i\leq k$. We can lift the $\overline a_i$ to elements $a_i$ in $R_\nu$ such that $\nu\left(a_i\right)\geq0$. Let us consider the ring $R''=R\left[a_1,\ldots,a_k\right]\subset R_\nu$ and its localization $R'=R''_{{\bf m}_\nu\cap R''}$. The ideal $P'_\ell$ is the kernel of the natural map $R'\rightarrow\frac{R_\nu}{\bf m_\ell}$. Thus both $\frac{R'}{P'_\ell}$ and $T_{\bf q}$ are equal to the $\frac R{P_\ell}$-subalgebra of $\frac{R_\nu}{\bf m_\ell}$, obtained by adjoining $\overline a_1,\ldots,\overline a_k$ to $\frac R{P_\ell}$ inside $\frac{R_\nu}{\bf m_\ell}$ and then localizing at the preimage of the ideal $\frac{\bf m_\nu}{\bf m_\ell}$. This proves the Lemma. \end{proof} Let us now go back to our \'etale extension $R\to R^e$. \begin{lemma}\label{anirred1} Fix an integer $\ell\in\{0,\dots,r\}$. There exists a local blowing up $R\rightarrow R'$ along $\nu$ having the following property: let $P'_\ell$ denote the $\ell$-th prime $\nu$-ideal of $R'$. Then the ring $\frac{R'}{P'_\ell}$ is analytically irreducible; in particular, $\frac{R'}{P'_\ell}\otimes_R R^e$ is an integral domain. \end{lemma} \begin{remark} We are not claiming that there exists $R'\in\cal T$ such that $\frac{R'}{P'_\ell}$ is analytically irreducible for all $\ell$ (and we do not know how to prove such a claim), only that for each $\ell$ there exists an $R'$, which may depend on $\ell$, such that $\frac{R'}{P'_\ell}$ is analytically irreducible. On the other hand, below we will prove that there exists an $\ell$-stable $R'\in\cal T$. According to Definition \ref{stable} (2) and Proposition \ref{largeR1}, such a stable $R'$ has the property that $\kappa\left(P''_\ell\right)\otimes_R\left(R''\otimes_RR^e\right)_{M''}$ is a domain for all $R''\in{\cal T}(R')$. For a given $R''$, this property is weaker than the analytic irreducibility of $R''/P''_\ell$.
The latter is equivalent to saying that $\kappa(P''_\ell)\otimes_R(R''\otimes_RR^\sharp)_{M''}$ is a domain for every local \'etale extension $R^\sharp$ of $R''$. \end{remark} \begin{proof}\textit{(of Lemma \ref{anirred1})} Since $R$ is an excellent local ring, every homomorphic image of $R$ is Nagata \cite{Mat} (Theorems 72 (31.H), 76 (33.D) and 78 (33.H)). Let $\pi:\frac R{P_\ell}\rightarrow S$ be the normalization of $\frac {R}{P_\ell}$. Then $S$ is a finitely generated $\frac {R}{P_\ell}$-algebra contained in $\frac{R_\nu}{\bf m_\ell}$, to which we can apply Lemma \ref{lift}. We obtain a $\nu$-extension $R\to R'$ such that the ring $\frac{R'}{P'_\ell}\cong\frac{R'}{P_\ell R'}$ is a localization of $S$ at a prime ideal, hence it is an excellent normal local ring. In particular, it is analytically irreducible (\cite{Nag}, Theorem (43.20), p. 187 and Corollary (44.3), p. 189), as desired. \end{proof} Next, we fix $\ell\in\{0,\dots,r\}$ and study the ring $(T')^{-1}(\kappa(P'_\ell)\otimes_RR^e)$, in particular, the structure of the set of its zero divisors, as $R'$ runs over ${\cal T}(R)$ (here $T'$ is as in Remark \ref{interchanging}). Since $R^e$ is separable algebraic, essentially of finite type over $R$, the ring $(T')^{-1}(\kappa(P'_\ell)\otimes_RR^e)$ is finite over $\kappa(P'_\ell)$; this ring is reduced, but it may contain zero divisors. In fact, it is a direct product of fields which are finite separable extensions of $\kappa(P'_\ell)$ because $R^e$ is separable and essentially of finite type over $R$.\par Consider a chain $R\rightarrow R'\rightarrow R''$ of $\nu$-extensions in $\cal T$. Let \begin{eqnarray} \kappa(P_\ell)\otimes_RR^e&=&\prod\limits_{j=1}^nK_j\\ (T')^{-1}\left(\kappa\left(P'_\ell\right)\otimes_RR^e\right)&=&\prod\limits_{j=1}^{n'}K'_j\\ (T'')^{-1}\left(\kappa\left(P''_\ell\right)\otimes_RR^e\right)&=&\prod\limits_{j=1}^{n''}K''_j \end{eqnarray} be the corresponding decompositions as products of finite field extensions of $\kappa(P_\ell)$ (resp. $\kappa(P'_\ell)$, resp. $\kappa(P''_\ell)$). We want to compare $(T')^{-1}\left(\kappa\left(P'_\ell\right)\otimes_RR^e\right)$ with $(T'')^{-1}\left(\kappa\left(P''_\ell\right)\otimes_RR^e\right)$. \begin{remark} The ring $\kappa\left(P'_\ell\right)\otimes_RR^e$ is itself a direct product of finite extensions of $\kappa\left(P'_\ell\right)$; say $\kappa\left(P'_\ell\right)\otimes_RR^e=\prod\limits_{j\in S'}K'_j$ for a certain set $S'$. In this situation, localization is the same thing as the natural projection to the product of the $K'_j$ over a certain subset $\{1,\dots,n'\}$ of $S'$. Thus the passage from $(T')^{-1}\left(\kappa\left(P'_\ell\right)\otimes_RR^e\right)$ to $(T'')^{-1}\left(\kappa\left(P''_\ell\right)\otimes_RR^e\right)$ can be viewed as follows: first, tensor each $K'_j$ with $\kappa\left(P''_\ell\right)$ over $\kappa\left(P'_\ell\right)$; then, in the resulting direct product of fields, remove a certain number of factors. \end{remark} Let $\bar K'_1,\dots,\bar K'_{\bar n'}$ be the distinct isomorphism classes of finite extensions of $\kappa\left(P'_\ell\right)$ appearing among $K'_1,\dots,K'_{n'}$, arranged in such a way that $\left[\bar K'_j:\kappa\left(P'_\ell\right)\right]$ is non-increasing with $j$, and similarly for $\bar K''_1,\dots,\bar K''_{\bar n''}$.
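To fix ideas, here is a purely illustrative toy computation (the number fields below are not claimed to arise from any particular $R\in\cal T$; the example only illustrates the two ways in which a factor $K'_j$ can behave under the passage described in the preceding Remark). Suppose $\kappa\left(P'_\ell\right)=\mathbb Q$ and $\kappa\left(P''_\ell\right)=\mathbb Q(i)$. If $K'_j=\mathbb Q(i)$, then $$ \mathbb Q(i)\otimes_{\mathbb Q}\mathbb Q(i)\cong\frac{\mathbb Q(i)[x]}{(x^2+1)}\cong\mathbb Q(i)\times\mathbb Q(i), $$ so a single factor of degree $2$ is replaced by two factors of degree $1$; if instead $K'_j=\mathbb Q\left(\sqrt2\right)$, then $\mathbb Q\left(\sqrt2\right)\otimes_{\mathbb Q}\mathbb Q(i)\cong\mathbb Q\left(\sqrt2,i\right)$ remains a field of degree $2$ over $\mathbb Q(i)$. The lexicographic comparison in the next Lemma is designed to record exactly this dichotomy.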
\begin{lemma}\label{decrease} We have the inequality \begin{equation} \left(\left[\bar K''_1:\kappa\left(P''_\ell\right)\right],\dots,\left[\bar K''_{\bar n''}:\kappa\left(P''_\ell\right)\right], n''\right)\le \left(\left[\bar K'_1:\kappa\left(P'_\ell\right)\right],\dots,\left[\bar K'_{\bar n'}:\kappa\left(P'_\ell\right)\right], n'\right)\label{eq:lex} \end{equation} in the lexicographical ordering. Furthermore, either $R'$ is $\ell$-stable or there exists $R''\in\cal T$ such that strict inequality holds in (\ref{eq:lex}). \end{lemma} \begin{proof} Fix a $q\in\{1,\dots,\bar n'\}$ and consider the tensor product $\bar K'_q\otimes_{\kappa\left(P'_\ell\right)}\kappa\left(P''_\ell\right)$. Since $\bar K'_q$ is separable over $\kappa\left(P'_\ell\right)$, the ring $\bar K'_q\otimes_{\kappa\left(P'_\ell\right)}\kappa\left(P''_\ell\right)=\prod\limits_{j\in S''_q}K''_j$ is a product of fields. Moreover, two cases are possible: (a) there exists a non-trivial extension $L$ of $\kappa\left(P'_\ell\right)$ which embeds both into $\kappa\left(P''_\ell\right)$ and $\bar K'_q$. In this case \begin{equation} \left[K''_j:\kappa\left(P''_\ell\right)\right]<\left[\bar K'_q:\kappa\left(P'_\ell\right)\right]\quad\text{ for all }j\in S''_q.\label{eq:strict} \end{equation} (b) there is no field extension $L$ as in (a). In this case $\bar K'_q\otimes_{\kappa\left(P'_\ell\right)}\kappa\left(P''_\ell\right)$ is a field, so \begin{equation} \#S''_q=1\label{eq:card1} \end{equation} and \begin{equation} \left[K''_j:\kappa\left(P''_\ell\right)\right]=\left[\bar K'_q:\kappa\left(P'_\ell\right)\right]\quad\text{ for }j\in S''_q.\label{eq:equal} \end{equation} Now, if there exists $q\in\{1,\dots,\bar n'\}$ for which (a) holds, take the smallest such $q$. Then (\ref{eq:strict})--(\ref{eq:equal}) imply that strict inequality holds in (\ref{eq:lex}). On the other hand, if (b) holds for all $q\in\{1,\dots,\bar n'\}$ then (\ref{eq:card1}) and (\ref{eq:equal}) imply that \begin{equation} \left(\left[\bar K''_1:\kappa\left(P''_\ell\right)\right],\dots,\left[\bar K''_{\bar n''}:\kappa\left(P''_\ell\right)\right] \right)= \left(\left[\bar K'_1:\kappa\left(P'_\ell\right)\right],\dots,\left[\bar K'_{\bar n'}:\kappa\left(P'_\ell\right)\right] \right)\label{eq:lex1} \end{equation} and $n''\le n'$, so again (\ref{eq:lex}) holds. Finally, assume that $R'$ is not $\ell$-stable. If there exist $R''\in\cal T$ and $q\in\{1,\dots,\bar n'\}$ for which (a) holds, then by the above we have strict inequality in (\ref{eq:lex}) and there is nothing more to prove. Assume there are no such $R''$ and $q$. Then $(T')^{-1}(\kappa(P'_\ell)\otimes_RR^e)$ is not a domain, so $n'>1$. Take $R''\in{\cal T}(R')$ such that $\left(\frac{R''}{P''_\ell}\otimes_RR^e\right)_{M''}$ is an integral domain; such an $R''$ exists by Lemma \ref{anirred1}. Then $n''=1<n'$, as desired. \end{proof} \begin{corollary}\label{anirred2} There exists a stable $R'\in\cal T$. The limit in (\ref{eq:defin3}) is attained for this $R'$. \end{corollary} \begin{proof} In view of Proposition \ref{largeR1}, it is sufficient to prove that there exists $R'\in\cal T$ which is $\ell$-stable for all $\ell\in\{0,1,\dots,r\}$. First, we fix $\ell\in\{0,1,\dots,r\}$. Lemma \ref{decrease} implies that there exists $R'\in\mathcal{T}(R)$ which is $\ell$-stable. By Proposition \ref{largeR1}, repeating the procedure above for each $\ell$ we can successively enlarge $R'$ in such a way that it becomes stable. The last statement follows from Proposition \ref{largeR2}. \end{proof} We are now in a position to prove Theorem \ref{hensel0}.
By Theorem \ref{primality1} (1), $H_{2\ell-1}$ is prime. By Proposition \ref{contracts}, $H_{2\ell+1}$ maps to $P_\ell$ under the map $\pi^e:\mbox{Spec}\ R^e\rightarrow\mbox{Spec}\ R$. Since this map is \'etale, its fibers are zero-dimensional, which shows that $H_{2\ell+1}$ is a minimal prime of $P_\ell R^e$. This proves (1) of Theorem \ref{hensel0}. By Proposition \ref{Hintilde}, for $0\le i\le2r$, $\tilde H_i$ is a prime ideal of $R^e$, containing $H_i$. Since the fibers of $\pi^e$ are zero-dimensional, we must have $\tilde H_i=H_i$, so $\tilde H_{2\ell}=\tilde H_{2\ell+1}=H_{2\ell}=H_{2\ell+1}$ for $0\le\ell\le r$. In particular, $\tilde H_0=H_1$. This shows that the unique prime $\tilde H_0$ of $R^e$ such that there exists an extension $\nu^e_-$ of $\nu$ to $\frac{R^e}{\tilde H_0}$ is $\tilde H_0=H_1$. Now (2) of the Theorem is given by Proposition \ref{uniqueness1}. (3) of Theorem \ref{hensel0} is now immediate. This completes the proof of Theorem \ref{hensel0}. \end{proof} We note the following corollary of the proof of (2) of Theorem \ref{hensel0} and Corollary \ref{blup1}. Let $\Phi^e=\nu^e_-(R^e\setminus\{0\})$, take an element $\beta\in\Phi^e$ and let ${\cal P}^e_\beta$ denote the $\nu^e_-$-ideal of $R^e$ of value $\beta$. \begin{corollary}\label{blup} Take an element $x\in {\cal P}^e_\beta$. There exists a local blowing up $R\rightarrow R'$ such that $\beta\in\nu(R')\setminus\{0\}$ and $x\in {\cal P}'_\beta{R'}^e$. \end{corollary} \section{The Main Theorem: the primality of implicit ideals.} \label{prime} In this section we study the ideals $H_j$ for $\hat R$ instead of $\tilde R$. The main result of this section is \begin{theorem}\label{primality} The ideal $H_{2\ell-1}$ is prime. \end{theorem} \begin{proof} For the purposes of this proof, let $H_{2\ell-1}$ denote the implicit ideals of $\hat R$ and $\tilde H_{2\ell-1}$ the implicit prime ideals of the henselization $\tilde R$ of $R$. Let $S$ be a local domain. By \cite{Nag} (Theorem (43.20), p. 187) there is a bijection between the set of minimal prime ideals of the henselization $\tilde S$ and the set of maximal ideals of the normalization $S^n$. If, in addition, $S$ is excellent, the two above sets also admit a natural bijection to the set of minimal primes of $\hat S$ \cite{Nag} (Corollary (44.3), p. 189). If $S$ is a henselian local domain, its only minimal prime is the (0) ideal, hence by the above the same is true of $\hat S$. Thus $\hat S$ is also a domain. This shows that any excellent henselian local domain is analytically irreducible, hence $\tilde H_{2\ell-1}\hat R$ is prime for all $\ell\in\{1,\dots,r+1\}$. Let $\tilde\nu_-$ denote the unique extension of $\nu$ to $\frac{\tilde R}{\tilde H_1}$, constructed in the previous section. Let $H^*_{2\ell-1}\subset\frac{\tilde R}{\tilde H_1}$ denote the implicit ideals associated to the henselian ring $\frac{\tilde R}{\tilde H_1}$ and the valuation $\tilde\nu_-$. \noi\textit{Claim.} We have $H^*_{2\ell-1}=\frac{H_{2\ell-1}}{\tilde H_1}$. \noi\textit{Proof of the claim:} For $\beta\in\Gamma$, let $\tilde P_\beta$ denote the $\tilde\nu_-$-ideal of $\frac{\tilde R}{\tilde H_1}$ of value $\beta$. For all $\beta$, we have $\frac{P_\beta}{\tilde H_1}\subset\tilde P_\beta$, and the same inclusion holds for all the local blowings up of $R$, hence $\frac{H_{2\ell-1}}{\tilde H_1}\subset H^*_{2\ell-1}$. To prove the opposite inclusion, we may replace $\tilde R$ by a finitely generated strict \'etale extension $R^e$ of $R$.
Now let $\Phi^e=\nu^e_-\left(R^e\setminus\{0\}\right)$, take an element $x\in H^*_{2\ell-1}$ and an element $\beta\in\Phi^e\cap\Delta_{\ell -1}$. By Corollary \ref{blup}, there exists a local blowing up $R\rightarrow R'$ such that $x\in P'_\beta{R'}^e$. Letting $\beta$ vary over $\Phi^e\cap\Delta_{\ell-1}$, we obtain that $x\in\frac{H_{2\ell-1}}{\tilde H_1}$, as desired. This completes the proof of the claim. The Claim shows that replacing $R$ by $\frac{\tilde R}{\tilde H_1}$ in Theorem \ref{primality} does not change the problem. In other words, we may assume that $R$ is a henselian domain and, in particular, that $\hat R$ is also a domain. Similarly, the ring $\frac R{P_\ell}\otimes_R\hat R\cong\frac{\hat R}{P_\ell}$ is a domain, hence so is its localization $\kappa(P_\ell)\otimes_R\hat R$. Since $R$ is a henselian excellent ring, it is algebraically closed in $\hat R$ (\cite{Nag}, Corollary (44.3), p. 189 and Corollary \ref{notnormal} of the Appendix); of course, the same holds for $\frac R{P_\ell}$ for all $\ell$. Then $\kappa(P_\ell)$ is algebraically closed in $\kappa(P_\ell)\otimes_R\hat R$. This shows that the ring $R$ is stable. Now the Theorem follows from Theorem \ref{primality1}. This completes the proof of Theorem \ref{primality}. \end{proof} \section{Towards a proof of Conjecture \ref{teissier}, assuming local uniformization in lower dimension} \label{locuni1} Let the notation be as in the previous sections. In this section, we assume that the Local Uniformization Theorem holds and propose an approach to proving Conjecture \ref{teissier}. We prove a Corollary of Conjecture \ref{teissier} which gives a sufficient condition for $\hat\nu_-$ to be unique; this condition also turns out to be necessary under the additional assumption that $\hat\nu_-$ is minimal. We will assume that all the $R'\in\mathcal T$ are birational to each other, so that all the fraction fields $K'=K$ and the homomorphisms $R'\rightarrow R''$ are local blowings up with respect to $\nu$. Finally, we assume that $R$ contains a field $k_0$ and a local $k_0$-subalgebra $S$ essentially of finite type, over which $R$ is strictly \'etale. In particular, all the rings in sight are equicharacteristic. First, we state the Local Uniformization Theorem in the precise form in which we are going to use it. \begin{definition}\label{lut} We say that \textbf{the embedded Local Uniformization theorem holds in } $\mathcal T$ if the following conditions are satisfied. Take an integer $\ell\in\{1,\dots,r-1\}$. Let $\mu_{\ell+1}:=\nu_{\ell+1}\circ\nu_{\ell+2}\circ\dots\circ\nu_r$. Consider a tree $\{H'\}$ of prime ideals of $\frac{\hat R'}{P'_\ell}$ such that $H'\cap\frac{R'}{P'_\ell}=(0)$ and a tight extension $\hat\mu_{2\ell+2}$ of $\mu_{\ell+1}$ to $\lim\limits_{\overset\longrightarrow{R'}}\frac{\hat R'}{H'}$. (1) There exists a local blowing up $\pi:R\rightarrow R'$ in $\mathcal T$, which induces an isomorphism at the center of $\nu_\ell$, such that $\frac{R'}{P'_\ell}$ is a regular local ring. (2) Assume that $\frac{R'}{P'_\ell}$ is a regular local ring. Then there exists in $\mathcal T$ a sequence $\pi:R\rightarrow R'$ of local blowings up along non-singular centers not containing the center of $\nu_\ell$ such that $\frac{\hat R'}{H'}$ is a regular local ring.
\end{definition} It is well known (\cite{A}, \cite{L}, \cite{Z}) that the embedded Local Uniformization theorem holds if $R$ is an excellent local domain such that either $char\ k=0$ or $\dim\ R\le3$ (to be precise, (1) of Definition \ref{lut} is well known and (2) is an easy consequence of known results). While the Local Uniformization theorem in full generality is still an open problem, it is widely believed to hold for arbitrary quasi-excellent local domains. Proving this is an active field of current research in algebraic geometry. Proving local uniformization for rings of arbitrary characteristic is one of the intended applications of Conjecture \ref{teissier}. Note that in Definition \ref{lut} we require only local uniformization of rings of dimension strictly less than $\dim\ R$; the idea is to use induction on $\dim\ R$ to prove local uniformization of rings of dimension $\dim\ R$. We begin by stating a strengthening of Conjecture \ref{teissier} (using Remark \ref{rephrasing}): \begin{conjecture}\label{teissier1} The valuation $\nu$ admits at least one tight extension $\hat\nu_-$. This tight extension $\hat\nu_-$ can be chosen to have the following additional property: for rings $R'$ sufficiently far in the tree $\mathcal T$ we have the equality of semigroups $\hat\nu_-\left(\frac{\hat R'}{\tilde H'_0}\setminus\{0\}\right)=\nu(R'\setminus\{0\})$ and for $\beta\in\nu(R'\setminus\{0\})$ the $\hat\nu_-$-ideal of value $\beta$ is $\frac{\mathcal P_\beta\hat R'}{\tilde H'_0}$. In particular, we have the equality of graded algebras $\mbox{gr}_\nu R'=\mbox{gr}_{\hat\nu_-}\frac{\hat R'}{\tilde H'_0}$. \end{conjecture} Below, we give an explicit construction of a valuation $\hat\nu_-$ whose existence is asserted in the Conjecture by describing the trees of ideals $\tilde H'_i$, $0\le i\le 2r$ and, for each $i$, a valuation $\hat\nu_i$ of the residue field $k_{\hat\nu_{i-1}}$, such that $\hat\nu_-=\hat\nu_1\circ\dots\circ\hat\nu_{2r}$. More precisely, for $\ell\in\{0,\dots,r-1\}$, we will construct, recursively in the descending order of $\ell$, a tree $J'_{2\ell+1}$ of prime ideals of $\frac{\hat R'}{H'_{2\ell}}$, $R'\in\mathcal{T}$, such that $J'_{2\ell+1}\cap\frac{R'}{P'_\ell}=(0)$, and an extension $\hat\mu_{2\ell+2}$ of $\mu_{\ell+1}$ to $\lim\limits_{\overset\longrightarrow{R'\in\mathcal{T}}}\frac{\hat R'}{J'_{2\ell+1}\hat R'}$; the valuation $\hat\mu_2$ will be our candidate for the desired tight extension $\hat\nu_-$ of $\mu_1=\nu$. Unfortunately, two steps in this construction still remain conjectural, namely, proving that $\hat\mu_{2\ell+2}$ is, indeed, a valuation, and that it is tight (this is essentially the content of Conjectures \ref{strongcontainment} and \ref{containment} below). Once these conjectures are proved, our recursive construction will be complete and Conjecture \ref{teissier1} will follow by setting $\hat\nu_-=\hat\mu_2$. Let us now describe the construction in detail. According to Corollary \ref{htstable1}, we may assume that $ht\ H'_i$ is constant for each $i$ after replacing $R$ by some other ring sufficiently far in $\mathcal T$. From now on, we will make this assumption without always stating it explicitly. By (1) of Definition \ref{lut}, applied successively to the trees of ideals $$ P'_\ell\subset R',\quad\ell\in\{1,\dots,r-1\}, $$ there exists $R''\in\cal T$ such that $\frac{R''}{P''_\ell}$ is regular for all $\ell\in\{1,\dots,r-1\}$. Without loss of generality, we may also assume that $R''$ is stable.
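To keep the indexing straight in what follows, we record the shape of the recursion described above (this display merely summarizes the preceding paragraph and introduces no new objects): the construction runs through the pairs $$ \ell=r-1:\ \left(J'_{2r-1},\hat\mu_{2r}\right),\qquad \ell=r-2:\ \left(J'_{2r-3},\hat\mu_{2r-2}\right),\qquad\dots,\qquad \ell=0:\ \left(J'_1,\hat\mu_2\right), $$ where at the step indexed by $\ell$ the valuation $\hat\mu_{2\ell+2}$ extends $\mu_{\ell+1}=\nu_{\ell+1}\circ\nu_{\ell+2}\circ\dots\circ\nu_r$ to $\lim\limits_{\overset\longrightarrow{R'}}\frac{\hat R'}{J'_{2\ell+1}}$, and the last pair produces the candidate $\hat\nu_-=\hat\mu_2$.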
For $\ell\in\{1,\dots,r-1\}$ and $R''\in\mathcal{T}$, let $\mathcal{T}_\ell(R'')$ denote the subtree of $\mathcal T$, consisting of all the local blowings up of $R''$ along ideals not contained in $P''_\ell$ (such local blowings up induce an isomorphism at the point $P''_\ell\in\mbox{Spec}\ R''$). Below, we will sometimes work with trees of rings and ideals indexed by $\mathcal{T}_\ell(R'')$ for suitable $\ell$ and $R''$ (instead of trees indexed by all of $\cal T$); the precise tree with which we are working will be specified in each case. For $\ell=r-1$, we define $J'_{2r-1}:=H'_{2r-1}$ and $\hat\mu_{2r}:=\hat\nu_{2r,0}$; according to Proposition \ref{nu0unique}, $\hat\nu_{2r,0}=\hat\nu_{2r}$ is the unique extension of $\nu_r$ to $\lim\limits_{\overset\longrightarrow{R'}}\frac{\hat R'}{H'_{2r-1}\hat R'}$. Next, assume that $\ell\in\{1,\dots,r-1\}$, that the tree $J'_{2\ell+1}$ of prime ideals of $\frac{\hat R'}{H'_{2\ell}\hat R'}$ and a tight extension $\hat\mu_{2\ell+2}$ of $\mu_{\ell+1}$ to $\lim\limits_{\overset\longrightarrow{R'}}\frac{\hat R'}{J'_{2\ell+1}}$ are already constructed for $R'\in\mathcal{T}$ and that $J'_{2\ell+1}\cap\frac{R'}{P'_\ell}=(0)$. It remains to construct the ideals $J'_{2\ell-1}\subset\frac{\hat R'}{H'_{2\ell-2}\hat R'}$ and a tight extension $\hat\mu_{2\ell}$ of $\mu_\ell$ to $\lim\limits_{\overset\longrightarrow{R'}}\frac{\hat R'}{J'_{2\ell-1}}$ for $R'\in\mathcal{T}$. We will assume, inductively, that for all $R'\in\mathcal{T}$ the quantity $ht\ J'_{2\ell+1}$ is constant and the following conditions hold: \begin{enumerate} \item We have the equality of semigroups $\hat\mu_{2\ell+2}\left(\frac{\hat R'}{J'_{2\ell+1}}\setminus\{0\}\right)\cong \mu_{\ell+1}\left(\frac{R'}{P'_\ell}\setminus\{0\}\right)$. \item For all $\beta\in\mu_{\ell+1}\left(\frac{R'}{P'_\ell}\setminus\{0\}\right)$ the $\hat\mu_{2\ell+2}$-ideal of $\frac{\hat R'}{J'_{2\ell+1}}$ of value $\beta$ is the extension to $\frac{\hat R'}{J'_{2\ell+1}}$ of the $\mu_{\ell+1}$-ideal of $\frac{R'}{P'_\ell}$ of value $\beta$. \item In particular, we have a canonical isomorphism $gr_{\hat\mu_{2\ell+2}}\frac{\hat R'}{J'_{2\ell+1}}\cong gr_{\mu_{\ell+1}}\frac{R'}{P'_\ell}$ of graded algebras. \end{enumerate} By (2) of Definition \ref{lut} applied to the prime ideals $J'_{2\ell+1}\subset\frac{\hat R'}{H'_{2\ell}\hat R'}$, there exists $R'\in\cal T$ such that both $\frac{R'}{P'_\ell}$ and $\frac{\hat R'}{J'_{2\ell+1}}$ are regular. The fact that $\frac{R'}{P'_\ell}$ is regular implies that so is $\frac{\hat R'}{P'_\ell\hat R'}$. In particular, $\frac{\hat R'}{P'_\ell\hat R'}$ is a domain, so $H'_{2\ell}=P'_\ell\hat R'$. Take a regular system of parameters $$ \bar u'=(\bar u'_1,\dots,\bar u'_{n_\ell}) $$ of $\frac{R'}{P'_\ell}$. Let $k'$ denote the common residue field of $R'$, $\frac{R'}{P'_{\ell-1}}$ and $\frac{R'}{P'_\ell}$. Fix an isomorphism $\frac{\hat R'}{H'_{2\ell}}\cong k'[[\bar u']]$. Renumbering the variables, if necessary, we may assume that there exists $s_\ell\in\{1,\dots,n_\ell\}$ such that $\bar u'_1,\dots,\bar u'_{s_\ell}$ are $k'$-linearly independent modulo $({m'}^2+J'_{2\ell+1})\frac{\hat R'}{H'_{2\ell}}$. Since $\frac{\hat R'}{J'_{2\ell+1}}$ is regular, the ideal $J'_{2\ell+1}$ is generated by a set of the form $\bar v'=(\bar v'_{s_\ell+1},\dots,\bar v'_{n_\ell})$, where $$ \bar v'_j=\bar u'_j-\bar\phi_j(\bar u'_1,\dots,\bar u'_{s_\ell}),\ \bar\phi_j(\bar u'_1,\dots,\bar u'_{s_\ell})\in k'[[\bar u'_1,\dots,\bar u'_{s_\ell}]].
$$ Let $\bar w'=(\bar w'_1,\dots,\bar w'_{s_\ell})=(\bar u'_1,\dots,\bar u'_{s_\ell})$. Let $z'$ be a minimal set of generators of $\frac{P'_\ell}{P'_{\ell-1}}$. Let $k'_0$ be a quasi-coefficient field of $R'$ (that is, a subfield of $R'$ over which $k'$ is formally \'etale; such a quasi-coefficient field exists by \cite{Mat}; moreover, since $R'$ is algebraic over a finite type algebra over a field by our hypotheses, $k'$ is finite over $k'_0$). By the hypotheses on $R$ and since $\frac{R'}{P'_\ell}$ is a regular local ring and $\bar u'$ is a minimal set of generators of its maximal ideal $\frac{m'}{P'_\ell}$, there exists an ideal $I\subset k'_0[z']$ such that $\frac{R'}{P'_{\ell-1}}$ is an \'etale extension of $\frac{k'_0[z',\bar u']_{(z',\bar u')}}I$. By assumption, we have $ht\ P'_{\ell-1}<ht\ P'_\ell$, so $0<ht\ P'_\ell-ht\ P'_{\ell-1}=ht(z')-ht\ I$, in other words, \begin{equation} ht\ I<ht(z').\label{eq:Inotmaximal} \end{equation} Next, we prove two general lemmas about ring extensions. \begin{notation} Let $k_0$ be a field and $(S,m,k)$ a local noetherian $k_0$-algebra. For a field extension \begin{equation} k_0\hookrightarrow L\label{eq:k0inktilde} \end{equation} such that $k\otimes_{k_0}L$ is a domain, let $S(L)$ denote the localization of the ring $S\otimes_{k_0}L$ at the prime ideal $m(S\otimes_{k_0}L)$. \end{notation} \begin{lemma}\label{noetherian} Let $k_0$, $(S,m,k)$ and $L$ be as above. The ring $S(L)$ is noetherian. \end{lemma} \begin{proof} If the field extension (\ref{eq:k0inktilde}) is finitely generated, the Lemma is obvious. In the general case, write $L=\lim\limits_{\overrightarrow{i}}L_i$ as a direct limit of its finitely generated subextensions. For each $L_i$, let $k_i$ denote the residue field of $S(L_i)$; $k_i$ is nothing but the field of fractions of $k\otimes_{k_0}L_i$. Write $\hat S=\frac{k[[x]]}H$, where $x$ is a set of generators of $m$ and $H$ a certain ideal of $k[[x]]$. Then $\widehat{S(L_i)}\cong\frac{k_i[[x]]}{Hk_i[[x]]}$. Given two finitely generated extensions $L_i\subset L_j$ of $k_0$, contained in $L$, we have a commutative diagram $$ \begin{matrix} S(L_j)&\overset{\pi_j}\rightarrow&\widehat{S(L_j)}&\cong&\frac{k_j[[x]]}{Hk_j[[x]]}&\\ \psi_{ij}\uparrow&\ &\uparrow&\ &\uparrow\phi_{ij}\\ S(L_i)&\overset{\pi_i}\rightarrow&\widehat{S(L_i)}&\cong&\frac{k_i[[x]]}{Hk_i[[x]]} \end{matrix} $$ where $\phi_{ij}$ is the map induced by the natural inclusion $k_i\hookrightarrow k_j$ and the identity map of $x$ to itself. Let $k_\infty=\lim\limits_{\overrightarrow{i}}k_i$. Then, for each $i$, we have the obvious faithfully flat map $\rho_i:\widehat{S(L_i)}\rightarrow\frac{k_\infty[[x]]}{Hk_\infty[[x]]}$, defined by the natural inclusion $k_i\hookrightarrow k_\infty$ and the identity map of $x$ to itself; the maps $\rho_i$ commute with the $\phi_{ij}$. Thus, we have constructed a faithfully flat map $\rho_i\circ\pi_i$ from each element of the direct system $S(L_i)$ to the fixed noetherian ring $\frac{k_\infty[[x]]}{Hk_\infty[[x]]}$; moreover, the maps $\rho_i\circ\pi_i$ are compatible with the homomorphisms $\psi_{ij}$ of the direct system. This implies that the ring $S(L)=\lim\limits_{\overrightarrow{i}}S(L_i)$ is noetherian. \end{proof} \begin{lemma}\label{IS(t)} Let $(S,m,k)$ be a local noetherian ring. Let $t$ be an arbitrary collection of independent variables. Consider the rings $S[t]$ and $S(t):=S[t]_{mS[t]}$. Let $I$ be an ideal of $S$.
Then \begin{equation} IS(t)\cap S[t]=IS[t].\label{eq:IcapSt=I} \end{equation} \end{lemma} \begin{proof} First, assume the collection $t$ consists of a single variable. Consider elements $f,g\in S[t]$ such that \begin{equation} f\mbox{$\in$ \hspace{-.8em}/} mS[t]\label{eq:fnotinmSt} \end{equation} and \begin{equation} fg\in IS[t].\label{eq:fginISt} \end{equation} Proving the equation (\ref{eq:IcapSt=I}) amounts to proving that \begin{equation} g\in IS[t].\label{eq:ginISt} \end{equation} We prove (\ref{eq:ginISt}) by contradiction. Assume that $g\mbox{$\in$ \hspace{-.8em}/} IS[t]$. Then there exists $n\in\mathbb N$ such that $g\mbox{$\in$ \hspace{-.8em}/}(I+m^n)S[t]$. Take the smallest such $n$, so that \begin{equation} g\in\left(I+m^{n-1}\right)S[t]\setminus(I+m^n)S[t].\label{eq:setminus} \end{equation} Write $f=\sum\limits_{j=0}^qa_jt^j$ and $g=\sum\limits_{j=0}^lb_jt^j$. Let \begin{eqnarray} l_0:&=&\max\{j\in\{0,\dots,l\}\ |\ b_j\mbox{$\in$ \hspace{-.8em}/} I+m^n\}\quad\text{ and}\\ q_0:&=&\max\{j\in\{0,\dots,q\}\ |\ a_j\mbox{$\in$ \hspace{-.8em}/} m\}. \end{eqnarray} Let $c_{l_0+q_0}$ denote the $(l_0+q_0)$-th coefficient of $fg$. We have $$ c_{l_0+q_0}=\sum\limits_{i+j=l_0+q_0}a_ib_j=a_{q_0}b_{l_0}+\sum\limits_{\begin{array}{c}i+j=l_0+q_0\\ i>q_0\end{array}}a_ib_j+\sum\limits_{\begin{array}{c}i+j=l_0+q_0\\ j>l_0\end{array}}a_ib_j. $$ By definition of $l_0$ and $q_0$ and (\ref{eq:setminus}) we have: \begin{eqnarray} a_{q_0}b_{l_0}&\mbox{$\in$ \hspace{-.8em}/}&I+m^n\quad\text{and}\\ \sum\limits_{\begin{array}{c}i+j=l_0+q_0\\ i>q_0\end{array}}a_ib_j+\sum\limits_{\begin{array}{c}i+j=l_0+q_0\\ j>l_0\end{array}}a_ib_j&\in&I+m^n. \end{eqnarray} Hence $c_{l_0+q_0}\mbox{$\in$ \hspace{-.8em}/} I+m^n$, which contradicts (\ref{eq:fginISt}). This completes the proof of Lemma \ref{IS(t)} in the case when $t$ is a single variable. The case of a general $t$ now follows by transfinite induction on the collection $t$. \end{proof} \begin{lemma}\label{contractsto0} There exist sets of representatives $$ u'=(u'_1,\dots,u'_{n_\ell}) $$ of $\bar u'$ and $\phi_j$ of $\bar\phi_j$, $s_\ell<j\le n_\ell$, in $\frac{\hat R'}{H'_{2\ell-2}\hat R'}$, having the following properties. Let \begin{eqnarray} w'=(w'_1,\dots,w'_{s_\ell})&=&(u'_1,\dots,u'_{s_\ell}),\\ v'=(v'_{s_\ell+1},\dots,v'_{n_\ell})&=& (u'_{s_\ell+1}-\phi_{s_\ell+1},\dots,u'_{n_\ell}-\phi_{n_\ell})\label{eq:defv}. \end{eqnarray} Let $J'_{2\ell-1}=\frac{H'_{2\ell-1}}{H'_{2\ell-2}}+(v')\subset\frac{\hat R'}{H'_{2\ell-2}}$. Then \begin{equation} w'\subset\frac{R'}{P'_{\ell-1}}\label{eq:winR} \end{equation} and \begin{equation} J'_{2\ell-1}\cap\frac{R'}{P'_{\ell-1}}=(0).\label{eq:contractsto0} \end{equation} \end{lemma} \begin{proof}\textit{(of Lemma \ref{contractsto0})} There is no problem choosing $w'$ to satisfy (\ref{eq:winR}). As for (\ref{eq:contractsto0}), we first prove the Lemma under the assumption that $k$ is countable. We choose the representatives $u'$ arbitrarily and let $\bar\phi_j(u')\in k[[u']]$ denote the formal power series obtained by substituting $u'$ for $\bar u'$ in $\bar\phi_j$. Any representative $\phi_j$ of $\bar\phi_j$, $s_\ell<j\le n_\ell$ has the form $\phi_j=\bar\phi_j(u')+h_j$ with $h_j\in(z')\frac{\hat R'}{H'_{2\ell-2}}$. We define the $h_j$ required in the Lemma recursively in $j$. Take $j\in\{s_\ell+1,\dots,n_\ell\}$. 
Assume that $h_{s_\ell+1},\dots,h_{j-1}$ are already defined and that \begin{equation} (v'_{s_\ell+1},\dots,v'_{j-1})\cap\frac{R'}{P'_{\ell-1}}=(0),\label{eq:j-1inter0} \end{equation} where we view $\frac{R'}{P'_{\ell-1}}$ as a subring of $\frac{\hat R'}{H'_{2\ell-1}}$. Since the ring $\frac{R'}{P'_{\ell-1}}$ is countable, there are countably many ideals in $\frac{\hat R'}{H'_{2\ell-1}+(v'_{s_\ell+1},\dots,v'_{j-1})}$, not contained in $\frac{(z')\hat R'}{H'_{2\ell-1}+(v'_{s_\ell+1},\dots,v'_{j-1})}$, which are minimal primes of ideals of the form $(f)\frac{\hat R'}{H'_{2\ell-1}+(v'_{s_\ell+1},\dots,v'_{j-1})}$, where $f$ is a non-zero element of $\frac{m'}{P'_{\ell-1}}$. Let us denote these ideals by $\{I_q\}_{q\in\mathbb N}$; we have \begin{equation} ht\ I_q=1\quad\text{ for all }q\in\mathbb N.\label{eq:ht=1} \end{equation} We note that \begin{equation} \frac{(z')\hat R'}{H'_{2\ell-1}+(v'_{s_\ell+1},\dots,v'_{j-1})}\not\subset I_q\quad\text{ for all }q\in\mathbb N.\label{eq:znotin} \end{equation} Indeed, by (\ref{eq:Inotmaximal}) and (\ref{eq:j-1inter0}) we have $ht\ \frac{(z')\hat R'}{H'_{2\ell-1}+(v'_{s_\ell+1},\dots,v'_{j-1})}\ge1$. In view of (\ref{eq:ht=1}), containment in (\ref{eq:znotin}) would imply equality, which contradicts the definition of $I_q$. Since $H'_{2\ell-1}\subsetneqq H'_{2\ell}$ and $J'_{2\ell+1}\subsetneqq\frac{m'\hat R'}{H'_{2\ell}}$, we have \begin{equation} \begin{array}{rcl} \dim\frac{\hat R'}{H'_{2\ell-1}+(v'_{s_\ell+1},\dots,v'_{j-1})}&\ge&(ht\ H'_{2\ell}-ht\ H'_{2\ell-1})+ht\ \frac{m'\hat R'}{H'_{2\ell}}-(j-s_\ell-1)\ge\\ &\ge&(ht\ H'_{2\ell}-ht\ H'_{2\ell-1})+ht\ \frac{m'\hat R'}{H'_{2\ell}}-ht\ J'_{2\ell+1}+1\ge3. \end{array} \end{equation} Let $\tilde u_j$ denote the image of $u'_j-\bar\phi_j(u')$ in $\frac{\hat R'}{H'_{2\ell-1}+(v'_{s_\ell+1},\dots,v'_{j-1})}$. Next, we construct an element $\tilde h_j\in\frac{(z')\hat R'}{H'_{2\ell-1}+(v'_{s_\ell+1},\dots,v'_{j-1})}$ such that \begin{equation} \tilde u_j-\tilde h_j\mbox{$\in$ \hspace{-.8em}/}\bigcup\limits_{q=1}^\infty I_q.\label{eq:notinIq} \end{equation} The element $\tilde h_j$ will be given as the sum of an infinite series $\sum\limits_{t=0}^\infty h_{jt}$ in $(z')\frac{\hat R'}{H'_{2\ell-1}+(v'_{s_\ell+1},\dots,v'_{j-1})}$, convergent in the $m'$-adic topology, which we will now construct recursively in $t$. Put $h_{j0}=0$. Assume that $t>0$, that $h_{j0},\dots,h_{j,t-1}$ are already defined and that for $q\in\{1,\dots,t-1\}$ we have $u'_j-\bar\phi_j(u')-\sum\limits_{l=0}^qh_{jl}\mbox{$\in$ \hspace{-.8em}/}\bigcup\limits_{l=1}^qI_l$ and $h_{jq}\in(z')\bigcap\left(\bigcap\limits_{l=1}^{q-1}I_l\right)$. If $u'_j-\bar\phi_j(u')-\sum\limits_{l=0}^{t-1}h_{jl}\mbox{$\in$ \hspace{-.8em}/} I_t$, put $h_{jt}=0$. If $u'_j-\bar\phi_j(u')-\sum\limits_{l=0}^{t-1}h_{jl}\in I_t$, let $h_{jt}$ be any element of $(z')\bigcap\left(\bigcap\limits_{l=1}^{t-1}I_l\right)\setminus I_t$ (such an element exists because $I_t$ is prime, in view of (\ref{eq:znotin})). This completes the definition of $\tilde h_j$. Let $h_j$ be an arbitrary representative of $\tilde h_j$ in $\frac{\hat R'}{H'_{2\ell-2}}$. We claim that \begin{equation} \left(H'_{2\ell-1}+(v'_{s_\ell+1},\dots,v'_j)\right)\cap\frac{R'}{P'_{\ell-1}}=(0).\label{eq:jinter0} \end{equation} Indeed, suppose the above intersection contained a non-zero element $f$. Then any minimal prime $\tilde I$ of the ideal $(v'_j)\frac{\hat R'}{H'_{2\ell-1}+(v'_{s_\ell+1},\dots,v'_{j-1})}$ is also a minimal prime of $(f)\frac{\hat R'}{H'_{2\ell-1}+(v'_{s_\ell+1},\dots,v'_{j-1})}$.
Since $v'_j\mbox{$\in$ \hspace{-.8em}/}\frac{(z')\hat R'}{H'_{2\ell-1}+(v'_{s_\ell+1},\dots,v'_{j-1})}$, we have $\tilde I\not\subset\frac{(z')\hat R'}{H'_{2\ell-1}+(v'_{s_\ell+1},\dots,v'_{j-1})}$. Hence $\tilde I=I_q$ for some $q\in\mathbb N$. Then $v'_j\in I_q$, which contradicts (\ref{eq:notinIq}). Carrying out the above construction for all $j\in\{s_\ell+1,\dots,n_\ell\}$ produces the elements $\phi_j$ required in the Lemma. This completes the proof of Lemma \ref{contractsto0} in the case when $k$ is countable. Next, assume that $k$ is uncountable. Let $u'$ be chosen as above. By assumption, $\frac{R'}{P'_{\ell-1}}$ contains a $k_0$-subalgebra $S$ essentially of finite type, over which $\frac{R'}{P'_{\ell-1}}$ is strictly \'etale. Take a countable subfield $L_1\subset k_0$ such that the algebra $S$ is already defined over $L_1$ (this means that $S$ has the form \begin{equation} S=(S'_1\otimes_{L_1}k_0)_{m'_1(S'_1\otimes_{L_1}k_0)},\label{eq:R'/P'} \end{equation} where $(S'_1,m'_1,k'_1)$ is a local $L_1$-algebra essentially of finite type). Next, let $L_1\subset L_2\subset\cdots$ be an increasing chain of finitely generated field extensions of $L_1$, contained in $k_0$, having the following property. Let $(S'_q,m'_q,k'_q)$ denote the localization of $S'_1\otimes_{L_1}L_q$ at the maximal ideal $m'_1(S'_1\otimes_{L_1}L_q)$. We require that $$ k'_\infty:=\bigcup\limits_{q=1}^\infty k'_q $$ contain all the coefficients of all the formal power series $\bar\phi_{s_\ell+1},\dots,\bar\phi_{n_\ell}$ and that the ideal $\frac{H'_{2\ell-1}}{H'_{2\ell-2}}$ is generated by elements of $\frac{k'_\infty[[z']]}{I_\infty}[[u']]$, where $I_\infty$ is the kernel of the natural homomorphism $k'_\infty[[z']]\rightarrow\frac{\hat R'}{H'_{2\ell-2}}$. Let $H'_{2\ell-1,\infty}=\frac{H'_{2\ell-1}}{H'_{2\ell-2}}\cap \frac{k'_\infty[[z']]}{I_\infty}[[u']]$. We have constructed an increasing chain $S'_1\subset S'_2\subset\cdots$ of local $L_1$-algebras essentially of finite type such that $k'_q$ is the residue field of $S'_q$. Then $S'_\infty:=\bigcup\limits_{q=1}^\infty S'_q$ is a local noetherian ring whose completion is $\frac{k'_\infty[[z',u']]}{\left(P'_{\ell-1}\cap S'_\infty\right)k'_\infty[[z',u']]}$. Let $m'_\infty$ denote the maximal ideal of $S'_\infty$. The above argument in the countable case shows that there exist representatives $\phi_{s_\ell+1},\dots,\phi_{n_\ell}$ of $\bar\phi_{s_\ell+1},\dots,\bar\phi_{n_\ell}$ in $\frac{\hat S'_\infty}{H'_{2\ell-2}\cap\hat S'_\infty}$ such that, defining $v'=(v'_{s_\ell+1},\dots,v'_{n_\ell})$ as in (\ref{eq:defv}), we have \begin{equation} \left((v')+H'_{2\ell-1,\infty}\right)\cap S'_\infty=(0).\label{eq:S'infty0} \end{equation} Let $L_\infty=\bigcup\limits_{q=1}^\infty L_q$ and let $t$ denote a transcendence base of $k_0$ over $L_\infty$. Let the notation be as in Lemma \ref{noetherian} with $k_0$ replaced by $L_\infty$. For example, $S'_\infty(L_\infty(t))$ will denote the localization of the ring $S'_\infty\otimes_{L_\infty}L_\infty(t)$ at the prime ideal $m'_\infty(S'_\infty\otimes_{L_\infty}L_\infty(t))$. 
By (\ref{eq:S'infty0}), \begin{equation} \left((v')+H'_{2\ell-1,\infty}\right)\hat S'_\infty[t]\cap S'_\infty[t]=(0).\label{eq:S'infty0t} \end{equation} Now Lemma \ref{IS(t)} and the fact that $S'_\infty[t]$ is a domain imply that \begin{equation} \left((v')+H'_{2\ell-1,\infty}\right)\hat S'_\infty(L_\infty(t))\cap S'_\infty(L_\infty(t))=(0).\label{eq:capbarS=0} \end{equation} Next, let $\tilde L$ be a finite extension of $L_\infty(t)$, contained in $k_0$; then $S'_\infty(\tilde L)$ is finite over $S'_\infty(L_\infty(t))$. Since $\hat S'_\infty(\tilde L)$ is faithfully flat over $\hat S'_\infty(L_\infty(t))$ and in view of (\ref{eq:capbarS=0}), we have $$ \left(\left((v')+H'_{2\ell-1,\infty}\right)\hat S'_\infty(\tilde L)\cap S'_\infty(\tilde L)\right)\cap S'_\infty(L_\infty(t))=(0). $$ Hence $ht\ \left((v')+H'_{2\ell-1,\infty}\right)\hat S'_\infty(\tilde L)\cap S'_\infty(\tilde L)=0$. Since $S'_\infty(\tilde L)$ is a domain, this implies that \begin{equation} \left((v')+H'_{2\ell-1,\infty}\right)\hat S'_\infty(\tilde L)\cap S'_\infty(\tilde L)=(0).\label{eq:tildek=0} \end{equation} Since $k_0$ is algebraic over $L_\infty(t)$, it is the limit of the direct system of all the finite extensions of $L_\infty(t)$ contained in it. We pass to the limit in (\ref{eq:tildek=0}). By (\ref{eq:R'/P'}), we have $S=S'_\infty(k_0)$; we also note that $\hat S=\frac{\hat R'}{H'_{2\ell-2}}$. Since the natural maps $\hat S'_\infty(\tilde L)\rightarrow\hat S'_\infty(k_0)$ are all faithfully flat, we obtain \begin{equation} \left((v')+H'_{2\ell-1,\infty}\right)\hat S'_\infty(k_0)\cap S=(0).\label{eq:capS=0} \end{equation} Since $\hat S=\frac{\hat R'}{H'_{2\ell-2}}$ is also the formal completion of $\hat S'_\infty(k_0)$, it is faithfully flat over $\hat S'_\infty(k_0)$. Hence \begin{equation} J'_{2\ell-1}\cap\hat S'_\infty(k_0)=\left((v')+H'_{2\ell-1,\infty}\right)\frac{\hat R'}{H'_{2\ell-2}}\cap\hat S'_\infty(k_0)=\left((v')+H'_{2\ell-1,\infty}\right)\hat S'_\infty(k_0).\label{eq:capbarS} \end{equation} Combining this with (\ref{eq:capS=0}), we obtain \begin{equation} J'_{2\ell-1}\cap S=(0).\label{eq:capS(t)=0} \end{equation} Thus the ideal $J'_{2\ell-1}\cap\frac{R'}{P'_{\ell-1}}$ contracts to $(0)$ in $S$. Since $\frac{R'}{P'_{\ell-1}}$ is \'etale over $S$, this implies the desired equality (\ref{eq:contractsto0}). This completes the proof of Lemma \ref{contractsto0}. \end{proof} Since $\frac{\hat R'}{H'_{2\ell}\hat R'}$ is a complete regular local ring and $(w',v')$ is a set of representatives of a minimal set of generators of its maximal ideal $\frac{m'\hat R'}{H'_{2\ell}}$, there exists a complete local domain $R'_\ell$ (not necessarily regular) such that $\frac{\hat R'}{H'_{2\ell-1}}\cong R'_\ell[[w',v']]$. Consider the ring homomorphism \begin{equation} R'_\ell[[w',v']]\rightarrow R'_\ell[[w']],\label{eq:homomorphisms} \end{equation} obtained by taking the quotient modulo $(v')$. By (\ref{eq:homomorphisms}), the quotient of $\frac{\hat R'}{H'_{2\ell-2}}$ by $J'_{2\ell-1}$ is the integral domain $R'_\ell[[w']]$, hence $J'_{2\ell-1}$ is prime. Consider a local blowing up $R'\rightarrow R''$ in $\mathcal T$. Because of the stability assumption on $R$, the ring $\frac{\hat R''}{H''_{2\ell-2}}\otimes_R\kappa(P''_{l-1})$ is finite over $\frac{\hat R'}{H'_{2\ell-2}}\otimes_R\kappa(P'_{l-1})$; hence the ring $\lim\limits_{\overset\longrightarrow{R''\in\mathcal T}}\left(\frac{\hat R''}{H''_{2\ell-2}}\otimes_R\kappa(P''_{l-1})\right)$ is integral over $\frac{\hat R'}{H'_{2\ell-2}}\otimes_R\kappa(P'_{l-1})$. 
In particular, there exists a prime ideal in $$ \lim\limits_{\overset\longrightarrow{R''\in\mathcal T}}\left(\frac{\hat R''}{H''_{2\ell-2}}\otimes_R\kappa(P''_{l-1})\right), $$ lying over $J'_{2\ell-1}\frac{\hat R'}{H'_{2\ell-2}}\otimes_R\kappa(P'_{l-1})$. Pick and fix one such prime ideal. Intersecting this ideal with $\frac{\hat R''}{H''_{2\ell-2}}$ for each $R''\in\mathcal T$, we obtain a tree $J''_{2\ell-1}$ of prime ideals of $\frac{\hat R''}{H''_{2\ell-2}}$, $R''\in\mathcal T$. Our next task is to define the restriction of the valuation $\hat\mu_{2\ell}$ to the ring $\frac{\hat R'}{J'_{2\ell-1}}$. By the induction assumption, $\hat\mu_{2\ell+2}$ is already defined on $\lim\limits_{\overset\longrightarrow{R'\in\mathcal{T}}}\frac{\hat R'}{J'_{2\ell+1}\hat R'}$. For all \textit{stable} $R''\in\mathcal{T}$ we have the isomorphism $gr_{\hat\mu_{2\ell+2}}\frac{\hat R''}{J''_{2\ell+1}}\cong gr_{\mu_{\ell+1}}\frac{R''}{P''_\ell}$ of graded algebras (in particular, $gr_{\hat\mu_{2\ell+2}}\frac{\hat R''}{J''_{2\ell+1}}$ is scalewise birational to $gr_{\mu_{\ell+1}}\frac{R''}{P''_\ell}$ for any $R''\in\mathcal{T}$ and $\hat\mu_{2\ell+2}$ has the same value group $\Delta_\ell$ as $\mu_{\ell+1}$). Define the prime ideals $\tilde H''_{2\ell-2}=\tilde H''_{2\ell-1}$ to be equal to the preimage of $J''_{2\ell-1}$ in $\hat R''$ and $\tilde H''_{2\ell}=\tilde H''_{2\ell+1}$ the preimage of $J''_{2\ell+1}$ in $\hat R''$. By definition of tight extensions, the valuation $\hat\nu_{2\ell+1}$ must be trivial. It remains to describe the valuation $\hat\mu_{2\ell}$ on $\frac{\hat R''}{J''_{2\ell-1}}$, $R''\in\mathcal T$. We will first define $\hat\nu_{2\ell}$ and then put $\hat\mu_{2\ell}=\hat\nu_{2\ell}\circ\hat\mu_{2\ell+2}$. By definition of tight extensions, the value group of $\hat\mu_{2\ell}$ must be equal to $\Delta_{\ell-1}$ and that of $\hat\nu_{2\ell}$ to $\frac{\Delta_{\ell-1}}{\Delta_\ell}$. For a positive element $\bar\beta\in\frac{\Delta_{\ell-1}}{\Delta_\ell}$, define the candidate for $\hat\nu_{2\ell}$-ideal of $\frac{\hat R''_{\tilde H''_{2\ell}}}{\tilde H''_{2\ell-1}\hat R''_{\tilde H''_{2\ell}}}$ of value $\bar\beta$, denoted by $\hat{\mathcal P}''_{\beta\ell}$, by the formula \begin{equation} \hat{\mathcal P}''_{\bar\beta\ell}=\frac{\mathcal{P}''_{\bar\beta}\hat R''_{\tilde H''_{2\ell}}}{\tilde H''_{2\ell-1}\hat R''_{\tilde H''_{2\ell}}}.\label{eq:validealmain} \end{equation} \begin{conjecture}\label{strongcontainment} The elements $\phi_j$ of Lemma \ref{contractsto0} can be chosen in such a way that the following condition holds. For each positive element $\beta\in\frac{\Delta_{\ell-1}}{\Delta_\ell}$ and each tree morphism $R'\rightarrow R''$ in $\mathcal T$, we have $$ \hat{\mathcal P}''_{\beta\ell}\cap\hat R'_{\tilde H'_{2\ell}}=\hat{\mathcal P}'_{\beta\ell}. $$ \end{conjecture} \begin{conjecture}\label{containment} The elements $\phi_j$ of Lemma \ref{contractsto0} can be chosen in such a way that \begin{equation} \bigcap\limits_{\bar\beta\in\left(\frac{\Delta_{\ell-1}}{\Delta_\ell}\right)_+} \left(\mathcal{P}'_{\bar\beta}+\tilde H'_{2\ell-1}\right)\hat R'_{\tilde H'_{2\ell}}\subset\tilde H'_{2\ell-1}. \label{eq:restrictionmain} \end{equation} \end{conjecture} For the rest of this section assume that Conjectures \ref{strongcontainment} and \ref{containment} are true. 
For all $\bar\beta\in\left(\frac{\Delta_{\ell-1}}{\Delta_\ell}\right)_+$, we have the natural isomorphism $$ \lambda_{\bar\beta}:\frac{\mathcal{P}'_{\bar\beta}}{\mathcal{P}'_{\bar\beta+}}\otimes_{\kappa(P'_{\ell-1})}\kappa(\tilde H'_{2\ell})\longrightarrow\frac{\hat{\mathcal P}'_{\bar\beta}}{\hat{\mathcal P}'_{\bar\beta+}} $$ of $\kappa(\tilde H'_{2\ell})$-vector spaces. The following fact is an easy consequence of Conjecture \ref{strongcontainment}: \begin{corollary}\label{integraldomain}\textbf{(conditional on Conjecture \ref{strongcontainment})} If the elements $\phi_j$ of Lemma \ref{contractsto0} can be chosen as in Conjecture \ref{strongcontainment} then the graded algebra $$ \mbox{gr}_{\nu_\ell}\frac{R'_{P'_\ell}}{P'_{\ell-1}R'_{P'_\ell}}\otimes_{\kappa(P'_{\ell-1})}\kappa(\tilde H'_{2\ell})\cong\bigoplus\limits_{\bar\beta\in\left(\frac{\Delta_{\ell-1}}{\Delta_\ell}\right)_+}\frac{\hat{\mathcal P}'_{\bar\beta}}{\hat{\mathcal P}'_{\bar\beta+}} $$ is an integral domain. \end{corollary} For a non-zero element $x\in\frac{\hat R'_{\tilde H'_{2\ell}}}{\tilde H'_{2\ell-1}}$, let $\operatorname{Val}_\ell(x)=\left\{\left.\beta\in\nu_\ell\left(\frac{R'}{P'_{\ell-1}}\setminus\{0\}\right)\ \right|\ x\in\hat{\mathcal P}'_{\beta\ell}\right\}$. We define $\hat\nu_{2\ell}$ by the formula \begin{equation}\label{eq:mu2l} \hat\nu_{2\ell}(x)=\max\ \operatorname{Val}_\ell(x). \end{equation} Since $\nu_\ell$ is a rank 1 valuation, centered in a local noetherian domain $\frac{R'}{P'_{\ell-1}}$, the semigroup $\nu_\ell\left(\frac{R'}{P'_{\ell-1}}\setminus\{0\}\right)$ has order type $\mathbb N$, so by (\ref{eq:restrictionmain}) the set $\operatorname{Val}_\ell(x)$ contains a maximal element. This proves that the valuation $\hat\nu_{2\ell}$ is well defined by the formula (\ref{eq:mu2l}), and that we have a natural isomorphism of graded algebras $$ \mbox{gr}_{\nu_\ell}\frac{R'_{P'_\ell}}{P'_{\ell-1}R'_{P'_\ell}}\otimes_{\kappa(P'_{\ell-1})}\kappa(\tilde H'_{2\ell})\cong\mbox{gr}_{\hat\nu_{2\ell}}\frac{\hat R'_{\tilde H'_{2\ell}}}{\tilde H'_{2\ell-1}}. $$ Since the above construction is valid for all $R'\in\mathcal T$, $\hat\nu_{2\ell}$ extends naturally to a valuation centered in the ring $\lim\limits_{\overset\longrightarrow{R'}}\frac{\hat R''}{\tilde H''_{2\ell-1}\hat R''}$ (by abuse of notation, this extension will also be denoted by $\hat\nu_{2\ell}$). The extension $\hat\mu_{2\ell}$ of $\mu_\ell$ to $\lim\limits_{\overset\longrightarrow{R''\in\mathcal{T}(R')}}\frac{\hat R''}{\tilde H''_{2\ell-1}\hat R''}$ is defined by $\hat\mu_{2\ell}=\hat\nu_{2\ell}\circ\hat\mu_{2\ell+2}$. This completes the proof of Conjecture \ref{teissier1} (assuming Conjectures \ref{strongcontainment} and \ref{containment}) by descending induction on $\ell$. $\Box$ The next Corollary of Conjecture \ref{teissier1} gives necessary conditions for $\hat\nu_-$ to be uniquely determined by $\nu$; it also shows that the same conditions are sufficient for $\hat\nu_-$ to be the unique minimal extension of $\nu$, that is, to satisfy \begin{equation} \tilde H'_i=H'_i,\quad0\le i\le 2r.\label{eq:tilde=nothing} \end{equation} Suppose given a tree $\left\{\tilde H'_0\right\}$ of minimal prime ideals of $\hat R'$ (in particular, $R'\cap\tilde H'_0=(0)$). If the valuation $\nu$ admits an extension to a valuation $\hat\nu_-$ of $\lim\limits_{\overset\longrightarrow{R'}}\frac{\hat R'}{\tilde H'_0}$, then $\tilde H'_0$ is the 0-th prime ideal of $\hat R'$, determined by $\hat\nu_-$. 
Since $\tilde H'_0$ is assumed to be a \textit{minimal} prime, we have $\tilde H'_0=H'_0$ by Proposition \ref{Hintilde}. \begin{remark} Let the notation be as in Conjecture \ref{teissier1}. Denote the tree of prime ideals $\{\tilde H'_0\}$ by $\{H'\}$ for short. Consider a homomorphism \begin{equation} R'\rightarrow R''\label{eq:R'toR''} \end{equation} in $\mathcal T$. Assume that the local rings $R'$ and $\frac{\hat R'}{H'}$ are regular, and let $V=(V_1,\dots,V_s)$ be a minimal set of generators of $H'$. Then $V$ can be extended to a regular system of parameters for $\hat R'$. We have an isomorphism $\hat R'\cong\frac{\hat R'}{H'}[[V]]$. The morphism (\ref{eq:R'toR''}) induces an isomorphism $\hat R'_{H'}\cong \hat R''_{H''}$, so that $V$ induces a regular system of parameters of $\hat R''_{H''}$. In particular, the $H''$-adic valuation of $\hat R''_{H''}$ coincides with the $H'$-adic valuation of $\hat R'_{H'}$. On the other hand, we do not know, assuming that $R''$ and $\frac{\hat R''}{H''}$ are regular and $ht\ H''=ht\ H'$, whether $V$ induces a minimal set of generators of $H''$; we suspect that the answer is ``no''. \end{remark} \begin{corollary}\label{uncond1}\textbf{(conditional on Conjecture \ref{teissier1})} If the valuation $\nu$ admits a unique extension to a valuation $\hat\nu_-$ of $\lim\limits_{\overset\longrightarrow{R'}}\frac{\hat R'}{H'_0}$, then the following conditions hold: (1) $ht\ H'_1\le 1$; (2) $H'_i=H'_{i-1}$ for all odd $i>1$. Moreover, this unique extension $\hat\nu_-$ is minimal. Conversely, assume that (1)--(2) hold. Then there exists a unique minimal extension $\hat\nu_-$ of $\nu$ to $\lim\limits_{\overset\longrightarrow{R'}}\frac{\hat R'}{H'_0}$. \end{corollary} \begin{proof} The fact that conditions (1), (2) and equations (\ref{eq:tilde=nothing}) determine $\hat\nu_-$ uniquely is nothing but Proposition \ref{uniqueness1}. Conversely, assume that there exists a unique extension $\hat\nu_-$ of $\nu$ to $\lim\limits_{\overset\longrightarrow{R'}}\frac{\hat R'}{H'_0}$. By Remark \ref{minimalexist}, there exist minimal extensions of $\nu$ to $\lim\limits_{\overset\longrightarrow{R'}}\frac{\hat R'}{H'_0}$, hence $\hat\nu_-$ must be minimal. Next, by Conjecture \ref{teissier1}, there exists a tree of prime ideals $H'$ with $H'\cap R'=(0)$ and a tight extension $\hat\mu_-$ of $\nu$ to $\lim\limits_{\overset\longrightarrow{R'}}\frac{\hat R'}{H'}$. The ideals $H'$ are both the 0-th and the 1-st ideals determined by $\hat\mu_-$; in particular, we have \begin{equation} H'_0\subset H'_1\subset H'\label{eq:inH'} \end{equation} by Proposition \ref{Hintilde}. Now, take any valuation $\theta$, centered in the regular local ring $\frac{R'_{H'}}{H'_0}$, such that the residue field $k_\theta=\kappa(H')$. Then the composition $\hat\mu_-\circ\theta$ is an extension of $\nu$ to $\lim\limits_{\overset\longrightarrow{R'}}\frac{\hat R'}{H'_0}$, hence \begin{equation} \hat\mu_-\circ\theta=\hat\nu_-\label{eq:mucirctheta} \end{equation} by uniqueness. For $i\ge1$, the $i$-th prime ideal determined by $\hat\mu_-\circ\theta=\hat\nu_-$ coincides with that determined by $\hat\mu_-$. Since $\nu$ is minimal and $\hat\mu_-$ is tight, we obtain condition (2) of the Corollary. Finally, if we had $ht\ H'>1$, there would be infinitely many choices for $\theta$, contradicting (\ref{eq:mucirctheta}) and the uniqueness of $\hat\nu_-$. Thus $ht\ H'\le1$. Combined with (\ref{eq:inH'}), this proves (1) of the Corollary. 
This completes the proof of Corollary \ref{uncond1}, assuming Conjecture \ref{teissier1}. \end{proof} \appendix \section{Regular morphisms and G-rings} In this Appendix we recall the definitions of regular homomorphisms, G-rings, and excellent and quasi-excellent rings. We also summarize some of their basic properties used in the rest of the paper. \begin{definition}\label{regmor} (\cite{Mat}, Chapter 13, (33.A), p. 249) Let $\sigma:A\rightarrow B$ be a homomorphism of noetherian rings. We say that $\sigma$ is {\bf regular} if it is flat, and for every prime ideal $P\subset A$, the ring $B\otimes_A\kappa(P)$ is geometrically regular over $\kappa(P)$ (this means that for any finite field extension $\kappa(P)\rightarrow k'$, the ring $B\otimes_Ak'$ is regular). \end{definition} \begin{remark} If $\kappa(P)$ is perfect, the ring $B\otimes_A\kappa(P)$ is geometrically regular over $\kappa(P)$ if and only if it is regular. \end{remark} \begin{remark} It is known that a morphism of finite type is regular in the above sense if and only if it is smooth (that is, formally smooth in the sense of Grothendieck with respect to the discrete topology), though we do not use this fact in the present paper. \end{remark} Regular morphisms come up in a natural way when one wishes to pass to the formal completion of a local ring: \begin{definition}\label{Gring} (\cite{Mat}, (33.A) and (34.A)) Let $R$ be a noetherian ring. For a maximal ideal $m$ of $R$, let $\hat R_m$ denote the $m$-adic completion of $R$. We say that $R$ is a {\bf G-ring} if for every maximal ideal $m$ of $R$, the natural map $R\rightarrow\hat R_m$ is a regular homomorphism. \end{definition} The property of being a G-ring is preserved by localization and passing to rings essentially of finite type over $R$. \begin{definition}\label{quasiexcellent} (\cite{Mat}, Definition 2.5, (34.A), p. 259) Let $R$ be a noetherian ring. We say that $R$ is {\bf quasi-excellent} if the following two conditions hold: (1) $R$ is J-2, that is, for any scheme $X$ which is reduced and of finite type over $\mbox{Spec}\ R$, the regular locus $\mathrm{Reg}(X)$ is open in the Zariski topology. (2) For every maximal ideal $m\subset R$, $R_m$ is a G-ring. \end{definition} It is known \cite{Mat} that a \textit{local} G-ring is automatically J-2, hence automatically quasi-excellent. Thus for local rings ``G-ring'' and ``quasi-excellent'' are one and the same thing. A ring is said to be \textbf{excellent} if it is quasi-excellent and universally catenary, but we do not need the catenary condition in this paper. Both excellence and quasi-excellence are preserved by localization and passing to rings of finite type over $R$ (\cite{Mat}, Chapter 13, (33.G), Theorem 77, p. 254). In particular, any ring essentially of finite type over a field, $\mathbf Z$, $\mathbf Z_{(p)}$, $\mathbf Z_p$, the Witt vectors or any other excellent Dedekind domain is excellent. See \cite{Nag} (Appendix A.1, p. 203) for some examples of non-excellent rings. Rings which arise from natural constructions in algebra and geometry are excellent. Complete and complex-analytic local rings are excellent (see \cite{Mat}, Theorem 30.D, for a proof that any complete local ring is excellent). Finally, we remark that the category of quasi-excellent rings is a natural one for doing algebraic geometry, since it is the largest reasonable class of rings for which resolution of singularities can hold. Namely, let $R$ be a noetherian ring. 
Grothendieck (\cite{EGA}, IV.7.9) proves that if all of the irreducible closed subschemes of $\mbox{Spec}\ R$ and all of their finite purely inseparable covers admit resolution of singularities, then $R$ must be quasi-excellent. Grothendieck's result means that the largest {\it class} of noetherian rings, closed under homomorphic images and finite purely inseparable extensions, for which resolution of singularities could possibly exist, is the class of {\it quasi-excellent} rings. We now summarize the specific uses we make of quasi-excellence in the present paper. We begin by recalling three results from \cite{Mat} and \cite{Nag}. As a point of terminology, we note that Nagata's ``pseudo-geometric'' rings are now commonly known as Nagata rings. Quasi-excellent rings are Nagata (\cite{Mat}, (33.H), Theorem 78). \begin{theorem}\label{annormal}(\cite{Mat}, (34.C), Theorem 79) Let $R$ be an excellent normal local ring. Then $R$ is analytically normal (this means that its formal completion $\hat R$ is normal). \end{theorem} \begin{theorem}(\cite{Nag}, (43.20), p. 187) Let $R$ be a local integral domain, $\tilde R$ its Henselization and $R'$ its normalization. There is a natural one-to-one correspondence between the minimal primes of $\tilde R$ and the maximal ideals of $R'$. \end{theorem} \begin{proposition}\label{anirred}(\cite{Nag}, Corollary (44.3), p. 189) Let $R$ be a quasi-excellent analytically normal local ring. Then its Henselization $\tilde R$ is analytically irreducible and is algebraically closed in its formal completion. \end{proposition} From the above results we deduce \begin{corollary}\label{notnormal} Let $(R,\mathbf m)$ be a Henselian excellent local domain. Then $R$ is analytically irreducible and is algebraically closed in $\hat R$. \end{corollary} \begin{proof} If, in addition, we assume $R$ to be normal, the result follows from Theorems \ref{annormal} and \ref{anirred}. In the general case, let $R'$ denote the normalization of $R$. Then $R'$ is a Henselian normal quasi-excellent local ring, so it satisfies the conclusions of the Corollary. Consider the commutative diagram \begin{equation}\label{eq:CD} \xymatrix{R\ar[d]_-\psi \ar[r]^-\phi & {\hat R}\ar[d]_-{\hat\psi}\\ R'\ar[r]^-{\phi'}&{\hat R'}} \end{equation} where $\hat R'$ stands for the formal completion of $R'$. Since $R$ is Nagata, $R'$ is a finite $R$-module. Thus $\phi'$ coincides with the $\mathbf m$-adic completion of $R'$, viewed as an $R$-module. Hence $\hat R'\cong R'\otimes_R\hat R$. Since $\psi$ is injective and $\hat R$ is flat over $R$, the map $\hat\psi$ is also injective. Since $R'$ is analytically irreducible, $\hat R'$ is a domain, and therefore so is its subring $\hat R$. This proves that $R$ is analytically irreducible. To prove that $R$ is algebraically closed in $\hat R$, take an element $x\in\hat R$, algebraic over $R$. Since all the maps in (\ref{eq:CD}) are injective, let us view all the rings involved as subrings of $\hat R'$. Since $R'$ is algebraically closed in $\hat R'$, we have $x\in R'$; in particular, we may write $x=\frac ab$ with $a,b\in R$. Now, since $(a)\hat R\subset(b)\hat R$ and $\hat R$ is faithfully flat over $R$, we have $(a)\subset(b)$ in $R$, so $x=\frac ab\in R$. This proves that $R$ is algebraically closed in $\hat R$. The Corollary is proved. \end{proof} Next we summarize, in a more specific manner, the way in which these results are applied in the present paper. The main applications are as follows. 
(1) Let $R$ be an excellent local domain, $P$ a prime ideal of $R$ and $H_i\subset H_{i+1}$ two prime ideals of $\hat R$ such that \begin{equation} H_i\cap R=H_{i+1}\cap R=P.\label{eq:HicontractstoP} \end{equation} Then $\frac RP$ is also excellent. Definitions \ref{regmor}, \ref{Gring} and \ref{quasiexcellent} imply that the ring $\hat R\otimes_R\kappa(P)$ is geometrically regular over $\kappa(P)$, in particular, regular. Moreover, (\ref{eq:HicontractstoP}) implies that the ideal $\frac{H_{i+1}}{P\hat R}$ is a prime ideal of $\frac{\hat R}{P\hat R}$, disjoint from the natural image of $R\setminus P$ in $\frac{\hat R}{P\hat R}$. Thus the local ring $\frac{\hat R_{H_{i+1}}}{P\hat R_{H_{i+1}}}$ is a localization of $\hat R\otimes_R\kappa(P)$ at the prime ideal $H_{i+1}(\hat R\otimes_R\kappa(P))$ and is therefore geometrically regular over $\kappa(P)$; in particular, it is a regular local ring and, in particular, a domain. (2) Assume, in addition, that $H_i$ is a minimal prime of $P\hat R$. Since $\frac{\hat R_{H_{i+1}}}{P\hat R_{H_{i+1}}}$ is a domain, $H_i$ is the only minimal prime of $P\hat R$ contained in $H_{i+1}$. We have $P\hat R_{H_{i+1}}=H_i\hat R_{H_{i+1}}$. \end{document}
\begin{document} \title[Unitary groups and ramified extensions]{Unitary groups and ramified extensions} \author{J. Cruickshank} \address{School of Mathematics, Statistics and Applied Mathematics, National University of Ireland, Galway, Ireland} \email{[email protected]} \author{F. Szechtman} \address{Department of Mathematics and Statistics, University of Regina, Canada} \email{[email protected]} \thanks{The second author was supported in part by an NSERC discovery grant} \subjclass[2010]{15A21, 15A63, 11E39, 11E57, 20G25} \keywords{unitary group; skew-hermitian form; local ring} \begin{abstract} We classify all non-degenerate skew-hermitian forms defined over certain local rings, not necessarily commutative, and study some of the fundamental properties of the associated unitary groups, including their orders when the ring in question is finite. \end{abstract} \maketitle \section{Introduction} More than half a century ago, \cite{D} offered a systematic study of unitary groups, as well as other classical groups, over fields and division rings. Thirty-five years later, \cite{HO} expanded this study to fairly general classes of rings. In particular, the normal structure, congruence subgroup, and generation problems for unitary (as well as other classical) groups are addressed in \cite{HO} in great generality. In contrast, the problem of determining the order of a unitary group appears in \cite{HO} only in the classical case of finite fields, as found in \cite[\S 6.2]{HO}. General formulae for the orders of unitary groups defined over a finite ring where 2 is a unit were given later in \cite{FH}. The proofs in \cite{FH} are fairly incomplete and, in fact, the formulae in \cite[Theorem 3]{FH} are incorrect when the involutions induced on the given residue fields are not the identity (even the order of the classical unitary groups defined over finite fields is wrong in this case). The argument in \cite[Theorem 3]{FH} is primarily based on a reduction homomorphism, stated without proof to be surjective. Recently, the correct orders of unitary groups defined over a finite local ring where 2 is invertible were given in \cite{CHQS}, including complete proofs. It should be noted that the forms underlying these groups were taken to be hermitian, which ensured the existence of an orthogonal basis. Moreover, the unitary groups from \cite{CHQS} were all extensions of orthogonal or unitary groups defined over finite fields. In the present paper we study the unitary group $U_n(A)$ associated to a non-degenerate skew-hermitian form $h:V\times V\to A$ defined on a free right $A$-module $V$ of finite rank $n$, where $A$ is a local ring, not necessarily commutative, endowed with an involution $*$ that satisfies $a-a^*\in{\mathfrak r}$ for all $a\in A$, where ${\mathfrak r}$ is the Jacobson radical of $A$. It is also assumed that ${\mathfrak r}$ is nilpotent and $2\in U(A)$, the unit group of $A$. These conditions occur often, most commonly when dealing with ramified quadratic extensions of quotients of local principal ideal domains with residue field of characteristic not 2 (see Example \ref{tresdos} for more details). A distinguishing feature of this case, as opposed to that of \cite{CHQS}, is that $h(v,v)$ is a non-unit for every $v\in V$. In particular, $V$ lacks an orthogonal basis. As the existence of an orthogonal basis is the building block of the theory developed in \cite{CHQS}, virtually all arguments from \cite{CHQS} become invalid under the present circumstances. 
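To fix ideas, here is the smallest instance of the situation just described; we include it only as an illustration, and the specific choices made here ($B$ a finite field, ${\sigma}=1_B$ and $b=0$ in the notation of Example \ref{tresdos}) are not needed elsewhere. Take $A=F[t]/(t^2)$, with $F$ a finite field of odd order, endowed with the involution fixing $F$ and sending $t$ to $-t$. Writing $a=c_0+c_1t$ with $c_0,c_1\in F$, we have $a^*=-a$ if and only if $c_0=0$, so every skew-hermitian element of $A$ lies in the radical ${\mathfrak r}=tA$. Consequently, for any skew-hermitian form $h$ on a free $A$-module $V$, $$ h(v,v)\in{\mathfrak r}\quad\text{ for every }v\in V, $$ so no vector has unit length and no orthogonal basis can exist. 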
Moreover, it turns out that now $n=2m$ must be even and, when $A$ is finite, $U_{2m}(A)$ is an extension of the \emph{symplectic} group ${\mathrm {Sp}}_{2m}(q)$ defined over the residue field $F_q=A/{\mathfrak r}$. In view of the essential differences between the present case and that of \cite{CHQS}, we hereby develop, from the beginning, the tools required to compute $|U_{2m}(A)|$ when $A$ is finite and the above hypotheses apply. In particular, a detailed and simple proof that the reduction homomorphism is surjective is given. The paper is essentially self-contained and its contents are as follows. It is shown in \S\ref{s1} that $n=2m$ must be even, and that $V$ admits a basis relative to which the Gram matrix of $h$ is equal to $$ J=\left( \begin{array}{cc} 0 & 1 \\ -1 & 0 \\ \end{array} \right), $$ where all blocks have size $m\times m$. Thus, $U_{2m}(A)$ consists of all $X\in {\mathrm{ GL}}_{2m}(A)$ such that $$ X^*JX=J, $$ where $(X^*)_{ij}=(X_{ji})^*$. We prove in \S\ref{s2} that $U_{2m}(A)$ acts transitively on basis vectors of $V$ having the same length. An important tool is found in \S\ref{s3}, namely the fact that the canonical reduction map $U_{2m}(A)\to U_{2m}(A/{\mathfrak i})$ is surjective, where ${\mathfrak i}$ is a $*$-invariant ideal of $A$ (the proof of the corresponding result from \cite{CHQS} makes extensive use of the fact that an orthogonal basis exists). The surjectivity of the reduction map allows us to compute, in \S\ref{s4}, the order of $U_{2m}(A)$ when $A$ is finite, by means of a series of reductions (a similar method was used in \cite{CHQS}). We find that \begin{equation}\label{or1} |U_{2m}(A)|=|{\mathfrak r}|^{2m^2-m}|{\mathfrak m}|^{2m} |{\mathrm {Sp}}_{2m}(q)|, \end{equation} where ${\mathfrak m}=R\cap{\mathfrak r}$ and $R$ is the additive group of all $a\in A$ such that $a^*=a$. We also obtain in~\S\ref{s4} the order of the kernel, say $U_{2m}({\mathfrak i})$, of the reduction map $U_{2m}(A)\to U_{2m}(A/{\mathfrak i})$. Here $U_{2m}({\mathfrak i})$ consists of all $1+X\in U_{2m}(A)$ such that $X\in M_{2m}({\mathfrak i})$. When ${\mathfrak i}\neq A$, we obtain \begin{equation}\label{or3} |U_{2m}({\mathfrak i})|=|{\mathfrak i}|^{2m^2-m}|{\mathfrak i}\cap{\mathfrak m}|^{2m}. \end{equation} A totally independent way of computing $|U_{2m}(A)|$ is offered in \S\ref{s5}, where we show that \begin{equation}\label{or2} |U_{2m}(A)|=\frac{|{\mathfrak r}|^{m(m+1)}|A|^{m^2} (q^{2m}-1)(q^{2(m-1)}-1)\cdots (q^2-1)}{|S|^{2m}}. \end{equation} Here $S$ stands for the additive group of all $a\in A$ such that $a^*=-a$. It should be noted that (\ref{or1})--(\ref{or2}) are valid even when $A$ is neither commutative nor principal. We also prove in \S\ref{s5} that the number of basis vectors of $V$ of any given length is independent of this length and equal~to $$ (|A|^{2m}-|{\mathfrak r}|^{2m})/|S|. $$ In this regard, in \S\ref{s6} we demonstrate that the order of the stabilizer, say $S_v$, in $U_{2m}(A)$ of a basis vector $v$ is independent of $v$ and its length, obtaining $$ |S_v|=|U_{2(m-1)}(A)|\times |A|^{2m-1}/|S|. $$ We end the paper in \S\ref{s7}, where a refined version of (\ref{or1}) and (\ref{or2}) is given when $A$ is commutative and principal. Virtually all the above material will find application in a forthcoming paper on the Weil representation of $U_{2m}(A)$. \section{Non-degenerate skew-hermitian forms}\label{s1} Let $A$ be a ring with $1\neq 0$. The Jacobson radical of $A$ will be denoted by ${\mathfrak r}$ and the unit group of $A$ by $U(A)$. 
We assume that $A$ is endowed with an involution $*$, which we interpret to mean an antiautomorphism of order $\leq 2$. Note that if $*=1_A$ then $A$ is commutative. Observe also that ${\mathfrak r}$ as well as all of its powers are $*$-invariant ideals of $A$. We fix a right $A$-module $V$ and we view its dual $V^*$ as a right $A$-module via $$ (\alpha a)(v)=a^* \alpha(v),\quad v\in V,a\in A,\alpha\in V^*. $$ We also fix a skew-hermitian form on $V$, that is, a function $h:V\times V\to A$ that is linear in the second variable and satisfies $$ h(v,u)=-h(u,v)^*,\quad u,v\in V. $$ In particular, we have $$ h(u+v,w)=h(u,w)+h(v,w)\text{ and }h(ua,v)=a^*h(u,v),\quad u,v,w\in V,a\in A. $$ Associated to $h$ we have an $A$-linear map $T_h:V\to V^*$, given by $$ T_h(v)=h(v,-),\quad v\in V. $$ We assume that $h$ is non-degenerate, in the sense that $T_h$ is an isomorphism. \begin{lemma}\label{deco} Suppose $V=U\perp W$, where $U,W$ are submodules of $V$. Then the restriction $h_U$ of $h$ to $U\times U$ is non-degenerate. \end{lemma} \begin{proof} Suppose $T_{h_U}(u)=0$ for some $u\in U$. Since $h(u,W)=0$ and $V=U+W$, it follows that $h(u,V)=0$, so $u=0$ by the non-degeneracy of $h$. This proves that $T_{h_U}$ is injective. Suppose next that ${\alpha}\in U^*$. Extend ${\alpha}$ to ${\beta}\in V^*$ via ${\beta}(u+w)={\alpha}(u)$. Since $h$ is non-degenerate, there is $v\in V$ such that ${\beta}=T_h(v)$. Now $v=u+w$ for some $u\in U$ and $w\in W$. We claim that ${\alpha}=T_{h_U}(u)$. Indeed, given any $z\in U$, we have $$ [T_{h_U}(u)](z)=h_U(u,z)=h(u,z)=h(u+w,z)=h(v,z)={\beta}(z)={\alpha}(z). $$ \end{proof} The Gram matrix $M\in M_k(A)$ of a list of vectors $v_1,\dots,v_k\in V$ is defined by $M_{ij}=h(v_i,v_j)$. \begin{lemma}\label{gram} Suppose the Gram matrix of $u_1,\dots,u_k\in V$, say $M$, is invertible. Then $u_1,\dots,u_k$ are linearly independent. \end{lemma} \begin{proof} Suppose $u_1a_1+\cdots+u_ka_k=0$. Then $h(u_i,u_1)a_1+\cdots+h(u_i,u_k)a_k=0$ for every $1\leq i\leq k$. Since $M$ is invertible, we deduce that $a_1=\cdots=a_k=0$. \end{proof} We make the following assumptions on $A$ for the remainder of the paper: (A1) $A$ is a local ring (this means that $A/{\mathfrak r}$ is a division ring or, alternatively, that every element of $A$ is either in $U(A)$ or in ${\mathfrak r}$). (A2) $2\in U(A)$. (A3) If $a\in A$ and $a^*=-a$ then $a\in{\mathfrak r}$; in particular, $b-b^*\in{\mathfrak r}$ for all $b\in A$. (A4) ${\mathfrak r}$ is nilpotent; the nilpotency degree of ${\mathfrak r}$ will be denoted by $e$. \begin{exa}\label{tresdos}{\rm Let $B$ be a local commutative ring with nilpotent Jacobson radical ${\mathfrak b}$. Suppose $2\in U(B)$ and let ${\sigma}$ be an automorphism of $B$ of order $\leq 2$. Consider the twisted polynomial ring $C=B[t; {\sigma}]$, so that $tb=b^{\sigma}t$ for all $b\in B$. Then $C$ has a unique involution $*$ that sends $t$ to $-t$ and fixes every $b\in B$. Thus $$ (b_0+b_1t+b_2 t^2+b_3 t^3+\cdots)^*=b_0-b_1^{\sigma} t+b_2 t^2-b_3^{\sigma} t^3+\cdots\quad\text{(finite sum)}. $$ Given any $b\in{\mathfrak b}$, the ideal $(t^2-b)$ of $C$ is $*$-invariant, so the quotient ring $A=C/(t^2-b)$ inherits an involution from $C$ and satisfies all our requirements. Note that if ${\sigma}\neq 1_B$ then $A$ is not commutative. Two noteworthy special cases are the following: $\bullet$ Let $D$ be a local principal ideal domain with finite residue field of odd characteristic and let $B$ be a quotient of $D$ by a positive power of its maximal ideal; take ${\sigma}=1_B$ and let $b$ be a generator of ${\mathfrak b}$. 
Then $A$ is a finite, commutative, principal ideal, local ring. $\bullet$ Let $B$ be a finite, commutative, principal ideal, local ring with Jacobson radical ${\mathfrak b}$. Suppose $2\in U(B)$ and let ${\sigma}\neq 1$ be an automorphism of $B$ of order 2 (as an example, take $A$ and its involution, as in the previous case). Take $b$ to be a generator of ${\mathfrak b}$ and let $a=t+(t^2-b)\in A$. Then $Aa=aA$ is the Jacobson radical of $A$. Moreover, every left (resp. right) ideal of $A$ is a power of $Aa$ (and hence an ideal). Furthermore, note that $A/Aa\cong B/Bb$.} \end{exa} We will also make the following assumption on $V$ for the remainder of the paper: (A5) $V$ is a free $A$-module of finite rank $n>0$ (reducing $V$ modulo ${\mathfrak r}$, we see that the rank of $V$ is well-defined). In what follows we write $(u,v)$ instead of $h(u,v)$. \begin{lemma}\label{basis} Any linearly independent list of vectors from $V$ is part of a basis; if the list has $n$ vectors, it is already a basis, and no list has more than $n$ vectors. \end{lemma} \begin{proof} Suppose $u_1,\dots,u_k\in V$ are linearly independent and $v_1,\dots,v_n$ is a basis of $V$. By (A4) there is some $r\neq 0$ in ${\mathfrak r}^{e-1}$ such that ${\mathfrak r} r=0$. Write $u_1=v_1a_1+\cdots+v_na_n$, where $a_i\in A$. If all $a_i\in{\mathfrak r}$ then $u_1r=0$, contradicting linear independence. By (A1), we may assume without loss of generality that $a_1\in U(A)$, whence $u_1,v_2,\dots,v_n$ is a basis of $V$. Next write $u_2=u_1b_1+v_2b_2+\cdots+v_nb_n$, where $b_i\in A$. If $b_i\in{\mathfrak r}$ for all $i>1$ then $u_1(-b_1r)+u_2r=0$, contradicting linear independence. This process can be continued and the result follows. \end{proof} \begin{cor}\label{bas} If $v_1,\dots,v_n$ is a basis of $V$, ${\mathfrak i}$ is a proper ideal of $A$, and $v_i\equiv u_i\mod V{\mathfrak i}$ for all $1\leq i\leq n$, then $u_1,\dots,u_n$ is also a basis of $V$. \end{cor} \begin{proof} Let $M$ (resp. $N$) be the Gram matrix of $v_1,\dots,v_n$ (resp. $u_1,\dots,u_n$). Then, by assumption, $N=M+P$, for some $P\in M_n({\mathfrak i})$, so $P\in M_n({\mathfrak r})$ by (A1). It is well-known (see \cite[Theorem 1.2.6]{H}) that $M_n({\mathfrak r})$ is the Jacobson radical of $M_n(A)$. On the other hand, by assumption and Lemma \ref{gram}, $M\in{\mathrm{ GL}}_n(A)$. It follows that $N\in{\mathrm{ GL}}_n(A)$ as well. \end{proof} \begin{lemma}\label{basin} Let $v_1,\dots,v_n$ be a basis of $V$. Then, given any $1\leq i\leq n$, there exists $1\leq j\leq n$ such that $j\neq i$ and $(v_i,v_j)\in U(A)$. \end{lemma} \begin{proof} Suppose, if possible, that $(v_i,v_j)\notin U(A)$ for all $j\neq i$. Then by (A1), $(v_i,v_j)\in {\mathfrak r}$ for all $j\neq i$. Since $(v_i,v_i)\in{\mathfrak r}$ by (A3), it follows that $(v_i,v_j)\in{\mathfrak r}$ for all $j$, and hence $(v_j,v_i)=-(v_i,v_j)^*\in{\mathfrak r}$ for all $j$ as well. Now let ${\alpha}\in V^*$ be arbitrary. Since $T_h$ is surjective, ${\alpha}=h(v,-)$ for some $v=v_1a_1+\cdots+v_na_n$, so that ${\alpha}(v_i)=a_1^*(v_1,v_i)+\cdots+a_n^*(v_n,v_i)\in{\mathfrak r}$. This contradicts the existence of the coordinate functional sending $v_i$ to $1$. \end{proof} \begin{cor}\label{basincor} If $v\in V$ is a basis vector, there is some $w\in V$ such that $(v,w)=1$.\qed \end{cor} \begin{lemma}\label{uno} Let ${\mathfrak i}$ be a proper ideal of $A$. Suppose $v\in V$ is a basis vector such that $(v,v)\in {\mathfrak i}$. Then there is a basis vector $z\in V$ such that $v\equiv z\mod V{\mathfrak i}$ and $(z,z)=0$. \end{lemma} \begin{proof} It follows from (A1) and (A4) that ${\mathfrak i}$ is nilpotent, and we argue by induction on the nilpotency degree, say $f$, of ${\mathfrak i}$. If $f=1$ then ${\mathfrak i}=0$ and we take $z=v$. 
Suppose $f>1$ and the result is true for ideals of nilpotency degree less than $f$. By Corollary \ref{basincor}, there is $w\in V$ such that $(v,w)=1$. Observing that the Gram matrix of $v,w$ is invertible, Lemmas \ref{gram} and \ref{basis} imply that $v,w$ belong to a common basis of $V$. Then, for any $b\in A$, $v+wb$ is also a basis vector and, moreover, $$ (v+wb,v+wb)=(v,v)+b^*(w,w)b+b-b^*. $$ Thanks to (A2), we may take $b=-(v,v)/2\in {\mathfrak i}$. Then $b^*=-b$, so $b-b^*=-(v,v)$ and $$ (v+wb,v+wb)=b^*(w,w)b=-b(w,w)b\in {\mathfrak i}^2. $$ As the nilpotency degree of ${\mathfrak i}^2$ is less than $f$, by induction hypothesis there is a basis vector $z\in V$ such that $v+wb\equiv z\mod V{\mathfrak i}^2$ and $(z,z)=0$. Since $v\equiv v+wb\equiv z\mod V{\mathfrak i}$, the result follows. \end{proof} \begin{lemma}\label{tres} Let ${\mathfrak i}$ be a proper ideal of $A$. Suppose $u,v\in V$ satisfy $(u,v)\equiv 1\mod {\mathfrak i}$. Then there is $z\in V$ such that $z\equiv v\mod V{\mathfrak i}$ and $(u,z)=1$. \end{lemma} \begin{proof} Since $(u,v) \equiv 1 \mod {\mathfrak i}$, $(u,v)$ must be a unit. Moreover, $(u,v)^{-1} \equiv 1 \mod {\mathfrak i}$, so we can take $z = v(u,v)^{-1}$. \end{proof} \begin{lemma}\label{dos} Let ${\mathfrak i}$ be an ideal of $A$. Suppose $u,v\in V$ satisfy $(u,u)=0$, $(u,v)=1$ and $(v,v)\in {\mathfrak i}$. Then there is $z\in V$ such that $z\equiv v\mod V{\mathfrak i}$, $(u,z)=1$ and $(z,z)=0$. \end{lemma} \begin{proof} Set $b=(v,v)/2\in {\mathfrak i}$ and $z=ub+v$. Then $b^*=-b$, so that $b^*-b=-(v,v)$ and $$ (z,z)=(ub+v,ub+v)=b^*-b+(v,v)=0,\quad (u,z)=(u,u)b+(u,v)=1. $$ \end{proof} A symplectic basis of $V$ is a basis $u_1,\dots,u_m,v_1,\dots,v_m$ such that $(u_i,u_j)=0=(v_i,v_j)$ and $(u_i,v_j)=\delta_{ij}$. A pair of vectors $u,v$ of $V$ is symplectic if $(u,v)=1$ and $(u,u)=0=(v,v)$. \begin{lemma}\label{exte} Suppose the Gram matrix, say $M$, of $v_1,\dots,v_s$ is invertible. If $s<n$ then there is a basis $v_1,\dots,v_s,w_1,\dots,w_t$ of $V$ such that $(v_i,w_j)=0$ for all $i$ and $j$. \end{lemma} \begin{proof} By Lemmas \ref{gram} and \ref{basis}, there is a basis $v_1,\dots,v_s,u_1,\dots,u_t$ of $V$. Given $1\leq i\leq t$, we wish to find $a_1,\dots,a_s$ so that $w_i=u_i-(v_1a_1+\cdots+v_s a_s)$ is orthogonal to all $v_j$. This means $$ (v_j,v_1)a_1+\cdots+(v_j,v_s)a_s=(v_j,u_i),\quad 1\leq j\leq s. $$ This linear system can be solved by means of $M^{-1}$. Since $v_1,\dots,v_s,w_1,\dots,w_t$ is a basis of $V$, the result follows. \end{proof} \begin{prop}\label{sp} $V$ has a symplectic basis; in particular $n=2m$ is even. \end{prop} \begin{proof} Let $w_1,\dots,w_n$ be a basis of $V$, whose existence is guaranteed by (A5). We argue by induction on $n$. By (A3), $(w_1,w_1)\in{\mathfrak r}$. We infer from Lemma \ref{uno} the existence of a basis vector $u\in V$ such that $(u,u)=0$. By Corollary \ref{basincor}, there is $w\in V$ such that $(u,w)=1$. By Lemma \ref{dos}, there is $v\in V$ such that $(v,v)=0$ and $(u,v)=1$. It follows from Lemmas \ref{gram} and \ref{basis} that $u,v$ is part of a basis of $V$. If $n=2$ we are done. Suppose $n>2$ and the result is true for smaller ranks. By Lemma \ref{exte} there is basis $u,v,w_1,\dots,w_{n-2}$ of $V$ such that every $w_i$ is orthogonal to both $u$ and $v$. Let $U$ be the span of $u,v$ and let $W$ be the span of $w_1,\dots,w_{n-2}$. By Lemma \ref{deco}, the restriction of $h$ to $W$ is non-degenerate. 
By inductive assumption, $n-2$ is even, say $n-2=2(m-1)$, and $W$ has a symplectic basis, say $u_2,\dots,u_m,v_2,\dots,v_m$. It follows that $u,u_2,\dots,u_m,v,v_2,\dots,v_m$ is a symplectic basis of $V$. \end{proof} \begin{cor} All non-degenerate skew-hermitian forms on $V$ are equivalent.\qed \end{cor} \section{The unitary group acts transitively on basis vectors of the same length}\label{s2} By definition, the unitary group associated to $(V,h)$ is the subgroup, say $U(V,h)$, of ${\mathrm{ GL}}(V)$ that preserves~$h$. Thus, $U(V,h)$ consists of all $A$-linear automorphisms $g:V\to V$ such that $h(gu,gv)=h(u,v)$ for all $u,v\in V$. \begin{theorem}\label{actra} Let $u,v\in V$ be basis vectors satisfying $(u,u)=(v,v)$. Then there exists $g\in U(V,h)$ such that $gu=v$. \end{theorem} \begin{proof} As $u$ is a basis vector, there is a vector $u'\in V$ such that $(u,u')=1$. Let $W$ be the span of $u,u'$. By Lemma \ref{exte}, $V=W\oplus W^\perp$. The restrictions of $h$ to $W$ and $W^\perp$ are non-degenerate by Lemma \ref{deco}. A similar decomposition exists for $v$. Thus, by means of Proposition \ref{sp}, we may restrict to the case $n=2$. By Proposition \ref{sp}, there is a symplectic basis $x,y$ of $V$ and we have $u=xa+yb$ for some $a,b\in A$. Since $u$ is a basis vector, one of these coefficients, say $a$, is a unit. Replacing $x$ by $xa$ and $y$ by $y(a^*)^{-1}$, we may assume that $u=x+yb$ for some $b\in A$, where $x,y$ is still a symplectic basis of $V$. We have $b-b^*=(u,u)$. Likewise, there is a symplectic basis $w,z$ of $V$ such that $v=w+zc$, where $c-c^*=(v,v)=(u,u)=b-b^*$. It follows that $c=b-r$ for some $r\in A$ such that $r^*=r$. Replace $w,z$ by $w-zr,z$. This basis of $V$ is also symplectic, since $(w-zr,w-zr)=0$ (because $r-r^*=0$). Moreover, $v=(w-zr)+z(c+r)=(w-zr)+zb$. Thus, $u$ and $v$ have exactly the same coordinates, namely $1,b$, relative to some symplectic bases of $V$. Let $g\in U(V,h)$ map one symplectic basis into the other one. Then $gu=v$, as required. \end{proof} \section{Reduction modulo a $*$-invariant ideal}\label{s3} \begin{lemma}\label{util} Let ${\mathfrak i}$ be a proper ideal of $A$. Suppose $w_1,\dots,w_m,z_1,\dots,z_m\in V$ satisfy $$ (w_i,z_j)\equiv\delta_{ij}\mod{\mathfrak i},\; 1\leq i,j\leq m,\quad (w_i,w_j)\equiv 0\equiv (z_i,z_j)\mod{\mathfrak i},\; 1\leq i,j\leq m. $$ Then there exists a symplectic basis $w'_1,\dots,w'_m,z'_1,\dots,z'_m$ of $V$ such that $$ w_i\equiv w'_i\mod V{\mathfrak i},\quad z_i\equiv z'_i\mod V{\mathfrak i},\; 1\leq i\leq m. $$ \end{lemma} \begin{proof} By induction on $m$. Successively applying Lemmas \ref{uno}, \ref{tres} and \ref{dos} we obtain $w'_1,z'_1\in V$ such that $$ w_1\equiv w'_1 \mod V{\mathfrak i},\quad z_1\equiv z'_1 \mod V{\mathfrak i},\quad (w'_1,w'_1)=0=(z'_1,z'_1),\quad (w'_1,z'_1)=1. $$ If $m=1$ we are done. Suppose $m>1$ and the result is true for smaller ranks. Applying Corollary \ref{bas}, we see that $w'_1,z'_1,w_2,\dots,w_m,z_2,\dots,z_m$ is a basis of $V$. Applying the procedure of Lemma \ref{exte} we obtain a basis $w'_1,z'_1,w^0_2,\dots,w^0_m,z^0_2,\dots,z^0_m$ of $V$ such that $w'_1,z'_1$ are orthogonal to all other vectors in this list. Since $(x,y)\in{\mathfrak i}$ when $x\in\{w'_1,z'_1\}$ and $y\in\{w_2,\dots,w_m,z_2,\dots,z_m\}$, the proof of Lemma \ref{exte} shows that $$ w^0_i\equiv w_i\mod V{\mathfrak i},\quad z^0_i\equiv z_i\mod V{\mathfrak i},\quad 2\leq i\leq m. 
$$ Therefore $$ (w^0_i,z^0_j)\equiv\delta_{ij}\mod{\mathfrak i},\; 2\leq i\leq m,\quad (w^0_i,w^0_j)\equiv 0\equiv (z^0_i,z^0_j)\mod{\mathfrak i},\; 2\leq i,j\leq m. $$ By Lemma \ref{deco}, the restriction of $h$ to the span, say $W$, of $w^0_2\dots,w^0_m,z^0_2,\dots,z^0_m$ is non-degenerate. By induction hypothesis, there is a symplectic basis $w'_2,\dots,w'_m,z'_2,\dots,z'_m$ of $W$ such that $$ w'_i\equiv w^0_i\mod V{\mathfrak i}, \quad z'_i\equiv z^0_i\mod V{\mathfrak i},\; 2\leq i\leq m. $$ Then $w'_1,\dots,w'_m,z'_1,\dots,z'_m$ is a basis of $V$ satisfying all our requirements. \end{proof} Let ${\mathfrak i}$ be a $*$-invariant ideal of $A$. Then $\overline{A}=A/{\mathfrak i}$ inherits an involution, also denoted by $*$, from $A$ by declaring $(a+{\mathfrak i})^*=a^*+{\mathfrak i}$. This is well-defined, since ${\mathfrak i}$ is $*$-invariant. Set $\overline{V}=V/V{\mathfrak i}$ and consider the skew-hermitian form $\overline{h}:\overline{V}\times \overline{V}\to \overline{A}$, given by $\overline{h}(u+V{\mathfrak i},v+V{\mathfrak i})=h(u,v)+{\mathfrak i}$. We see that $\overline{h}$ is well-defined and non-degenerate. We then have a group homomorphism $U(V,h)\to U(\overline{V},\overline{h})$, given by $g\mapsto \overline{g}$, where $\overline{g}(u+V{\mathfrak i})=g(u)+V{\mathfrak i}$. \begin{theorem}\label{sur} Let ${\mathfrak i}$ be a proper $*$-invariant ideal of $A$. Then the canonical group homomorphism $U(V,h)\to U(\overline{V},\overline{h})$ is surjective. \end{theorem} \begin{proof} Let $f\in U(\overline{V},\overline{h})$. By Proposition \ref{sp}, $V$ has a symplectic basis $u_1,\dots,u_m,v_1,\dots,v_m$. We have $f(u_i+V{\mathfrak i})=w_i+V{\mathfrak i}$ and $f(v_i+V{\mathfrak i})=z_i+V{\mathfrak i}$ for some $w_i,z_i\in V$ and $1\leq i\leq m$. Since $f$ preserves $\overline{h}$, we must have $$ (w_i,z_j)\equiv\delta_{ij}\mod{\mathfrak i},\; 1\leq i\leq m,\quad (w_i,w_j)\equiv 0\equiv (z_i,z_j)\mod{\mathfrak i},\; 1\leq i,j\leq m. $$ By Lemma \ref{util} there is a symplectic basis $w'_1,\dots,w'_m,z'_1,\dots,z'_m$ of $V$ such that $$ w_i\equiv w'_i\mod V{\mathfrak i},\quad z_i\equiv z'_i\mod V{\mathfrak i},\; 1\leq i\leq m. $$ Let $g\in U(V,h)$ map $u_1,\dots,u_m,v_1,\dots,v_m$ into $w'_1,\dots,w'_m,z'_1,\dots,z'_m$. Then $\overline{g}=f$. \end{proof} \section{Computing the order of the $U_{2m}(A)$ by successive reductions}\label{s4} We refer to an element $a$ of $A$ as hermitian (resp. skew-hermitian) if $a=a^*$ (resp. $a=-a^*$). Let $R$ (resp. $S$) be the subgroup of the additive group of $A$ of all hermitian (resp. skew-hermitian) elements. We know by (A3) that \begin{equation} \label{sr} S\subseteq {\mathfrak r}. \end{equation} Moreover, it follows from (A2) that \begin{equation} \label{ars} A=R\oplus S. \end{equation} Letting $${\mathfrak m}=R\cap{\mathfrak r},$$ we have a group imbedding $R/{\mathfrak m}\hookrightarrow A/{\mathfrak r}$. In fact, we deduce from (\ref{sr}) and (\ref{ars}) that \begin{equation} \label{ars2} A/{\mathfrak r}\cong R/{\mathfrak m}. \end{equation} \begin{lem} \label{cuad} Suppose ${\mathfrak i}$ is a $*$-invariant ideal of $A$ satisfying ${\mathfrak i}^2=0$. Let $\{v_1,\dots,v_n\}$ be a basis of $V$ and let $J$ be the Gram matrix of $v_1,\dots,v_n$. Then, relative to $\{v_1,\dots,v_n\}$, the kernel of the canonical epimorphism $U(V,h)\to U(\overline{V},\overline{h})$ consists of all matrices $1+M$, such that $M\in M_n({\mathfrak i})$ and \begin{equation} \label{anr} M^*J+JM=0. 
\end{equation} \end{lem} \begin{proof} By definition the kernel of $U(V,h)\to U(\overline{V},\overline{h})$ consists of all matrices of the form $1+M$, where $M\in M_n({\mathfrak i})$ and $$ (1+M)^* J(1+M)=J. $$ Expanding this equation and using ${\mathfrak i}^2=0$ yields (\ref{anr}). \end{proof} Let $\{u_1,\dots,u_m,v_1,\dots,v_m\}$ be a symplectic basis of $V$. We write $U_{2m}(A)$ for the image of $U(V,h)$ under the group isomorphism $GL(V)\to{\mathrm{ GL}}_{2m}(A)$ relative to $\{u_1,\dots,u_m,v_1,\dots,v_m\}$. We make the following assumption on $A$ for the remainder of the paper: (A6) $A$ is a finite ring. We deduce from (\ref{ars}) that $$|A|=|R||S|.$$ On the other hand, it follows from (A1), (A2) and (A6) that $F_q=A/{\mathfrak r}$ is a finite field of odd characteristic. By (A3), $a+{\mathfrak r}=a^* +{\mathfrak r}$ for all $a\in A$, so the involution that $*$ induces on $F_q$ is the identity. Taking ${\mathfrak i}={\mathfrak r}$ in Theorem \ref{sur}, we have ${U}_{2m}(\overline{A})={\mathrm {Sp}}_{2m}(q)$, the symplectic group of rank $2m$ over $F_q$. Recall \cite[Chapter 8]{T} that $$ |{\mathrm {Sp}}_{2m}(q)|=(q^{2m}-1)q^{2m-1}(q^{2(m-1)}-1)q^{2m-3}\cdots (q^2-1)q=q^{m^2}(q^{2m}-1)(q^{2(m-1)}-1)\cdots (q^{2}-1). $$ \begin{cor} \label{cuad2} Suppose ${\mathfrak i}$ is a $*$-invariant ideal of $A$ satisfying ${\mathfrak i}^2=0$. Then the kernel of $U(V,h)\to U(\overline{V},\overline{h})$ has order $|{\mathfrak i}|^{2m^2-m}|{\mathfrak i}\cap {\mathfrak m}|^{2m}$. \end{cor} \begin{proof} Let $\{u_1,\dots,u_m,v_1,\dots,v_m\}$ be a symplectic basis of $V$. Thus, the Gram matrix, say $J\in M_{2m}(A)$, of $u_1,\dots,u_m,v_1,\dots,v_m$ is $$ J=\left( \begin{array}{cc} 0 & 1 \\ -1 & 0 \\ \end{array} \right), $$ where all blocks are in $M_{m}(A)$. According to Lemma \ref{cuad}, the kernel of $U(V,h)\to U(\overline{V},\overline{h})$ consists of all $1+M$, where $$ M=\left( \begin{array}{cc} P & Q \\ T & S \\ \end{array} \right), $$ and $S=-P^*$, $Q=Q^*$ and $T=T^*$, which yields the desired result. \end{proof} \begin{thm} \label{zxz} Let $A$ be a finite local ring, not necessarily commutative, with Jacobson radical~${\mathfrak r}$ and residue field $F_q$ of odd characteristic. Suppose $A$ has an involution $*$ such that $a-a^*\in{\mathfrak r}$ for all $a\in A$. Let ${\mathfrak m}$ be the group of all $a\in{\mathfrak r}$ such that $a=a^*$. Then $$ |U_{2m}(A)|=|{\mathfrak r}|^{2m^2-m}|{\mathfrak m}|^{2m} q^{m^2}(q^{2m}-1)(q^{2(m-1)}-1)\cdots (q^2-1). $$ \end{thm} \begin{proof} Consider the rings $$ A=A/{\mathfrak r}^{e}, A/{\mathfrak r}^{e-1},\dots,A/{\mathfrak r}^2,A/{\mathfrak r}. $$ Each of them is a factor of $A$, so is local and inherits an involution from $*$. Each successive pair is of the form $C=A/{\mathfrak r}^k,D=A/{\mathfrak r}^{k-1}$, where the kernel of the canonical epimorphism $C\to D$ is ${\mathfrak j}={\mathfrak r}^{k-1}/{\mathfrak r}^k$, so that ${\mathfrak j}^2=0$. We may thus apply Theorem \ref{sur} and Corollary \ref{cuad2} $e-1$ times to obtain the desired result, as follows. 
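Spelled out (we record this only as a restatement of the preceding reduction, for clarity), the successive applications of Theorem \ref{sur} and Corollary \ref{cuad2} give $$ |U_{2m}(A)|=|{\mathrm {Sp}}_{2m}(q)|\prod_{k=2}^{e}\left|\ker\bigl(U_{2m}(A/{\mathfrak r}^{k})\to U_{2m}(A/{\mathfrak r}^{k-1})\bigr)\right|, $$ where, by Corollary \ref{cuad2}, the $k$-th factor equals $|{\mathfrak r}^{k-1}/{\mathfrak r}^{k}|^{2m^2-m}$ times the $2m$-th power of the number of hermitian elements in the kernel of $C\to D$. 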
We have $$ |{\mathfrak r}|=|{\mathfrak r}^{e-1}/{\mathfrak r}^e|\cdots |{\mathfrak r}/{\mathfrak r}^2| $$ and $$ |{\mathfrak m}|=|{\mathfrak m}\cap{\mathfrak r}^{e-1}/{\mathfrak m}\cap {\mathfrak r}^e|\cdots |{\mathfrak m}\cap{\mathfrak r}^{k-1}/{\mathfrak m}\cap {\mathfrak r}^{k}|\cdots |{\mathfrak m}\cap {\mathfrak r}/{\mathfrak m}\cap {\mathfrak r}^2|, $$ where the group of hermitian elements in the kernel of $C\to D$ has $|{\mathfrak m}\cap{\mathfrak r}^{k-1}/{\mathfrak m}\cap {\mathfrak r}^{k}|$ elements. Indeed, these elements are those $a+{\mathfrak r}^k$ such that $a\in {\mathfrak r}^{k-1}$ and $a-a^*\in {\mathfrak r}^k$. But $a+a^*$ is hermitian, so $a+a^*\in{\mathfrak m}\cap{\mathfrak r}^{k-1}$. Thus $$a=(a-a^*)/2+(a+a^*)/2\in {\mathfrak r}^k+{\mathfrak m}\cap{\mathfrak r}^{k-1}.$$ Hence the group of hermitian elements in the kernel of $C\to D$ is $$ ({\mathfrak m}\cap{\mathfrak r}^{k-1}+{\mathfrak r}^k)/{\mathfrak r}^k\cong {\mathfrak m}\cap{\mathfrak r}^{k-1}/({\mathfrak m}\cap{\mathfrak r}^{k-1}\cap {\mathfrak r}^k)\cong {\mathfrak m}\cap{\mathfrak r}^{k-1}/{\mathfrak m}\cap{\mathfrak r}^{k}. $$ \end{proof} \begin{thm} Given a $*$-invariant proper ideal ${\mathfrak i}$ of $A$, the kernel of $U(V,h)\to U(\overline{V},\overline{h})$ has order $|{\mathfrak i}|^{2m^2-m}|{\mathfrak i}\cap{\mathfrak m}|^{2m}$. \end{thm} \begin{proof} By Theorems \ref{sur} and \ref{zxz}, the alluded kernel has order $$ \frac{|U(V,h)|}{|U(\overline{V},\overline{h})|}=\frac{|{\mathfrak r}|^{2m^2-m}|{\mathfrak m}|^{2m}}{|{\mathfrak r}/{\mathfrak i}|^{2m^2-m}|({\mathfrak m}+{\mathfrak i})/{\mathfrak i}|^{2m}}=|{\mathfrak i}|^{2m^2-m}|{\mathfrak i}\cap{\mathfrak m}|^{2m}. $$ \end{proof} \section{Computing the order of $U_{2m}(A)$ by counting symplectic pairs}\label{s5} The following easy observation will prove useful. Given $s\in S$ and $y\in A$, we have \begin{equation} \label{yx} y-y^*=s\text{ if and only if }y\in s/2+R. \end{equation} By the length of a vector $v\in V$ we understand the element $(v,v)\in S$. Given $s\in S$, the number of basis vectors of $V$ of length $s$ will be denoted by $N(m,s)$. \begin{lemma}\label{numbers} Given $s\in S$, we have $$ N(1,s)=(|A|-|{\mathfrak r}|)(|R|+|{\mathfrak m}|)=(|A|^2-|{\mathfrak r}|^2)/|S|,\quad s\in S. $$ In particular, $N(1,s)$ is independent of $s$. \end{lemma} \begin{proof} Let $u,v$ be a symplectic basis of $V$. Given $(a,b)\in A^2$, the length of $w=ua+vb$ is $$ (w,w)=a^* b-b^* a. $$ Thus, we need to count the number of pairs $(a,b)\in A^2\setminus {\mathfrak r}^2$ such that \begin{equation} \label{lens} a^* b-b^* a=s. \end{equation} For this purpose, suppose first $a\in U(A)$. Setting $y=a^* b$ and using (\ref{yx}), we see that (\ref{lens}) holds if and only if $b\in (a^*)^{-1}(s/2+R)$. Thus, the number of solutions $(a,b)\in A^2$ to (\ref{lens}) such that $a\in U(A)$ is $(|A|-|{\mathfrak r}|)|R|$. Suppose next that $a\notin U(A).$ Then $b \in U(A)$. Rewriting (\ref{lens}) in the form \begin{equation} \label{lens2} b^* a-a^* b=-s \end{equation} and setting $y=b^* a$, we see as above that (\ref{lens2}) holds if and only if $a\in (b^*)^{-1}(-s/2+R)$. Recalling that $a\in{\mathfrak r}$, we are thus led to calculating $$ |[(b^*)^{-1}(-s/2+R)]\cap{\mathfrak r}|=|(-s/2+R)\cap b^*{\mathfrak r}|=|(-s/2+R)\cap {\mathfrak r}|=|R\cap{\mathfrak r}|, $$ the last two equalities holding because $b\in U(R)$ and $s\in{\mathfrak r}$. Recalling that ${\mathfrak m}=R\cap{\mathfrak r}$, it follows that $N(1,s)=(|A|-|{\mathfrak r}|)(|R|+|{\mathfrak m}|)$. 
Since this is independent of $s$, we infer $N(1,s)=(|A|^2-|{\mathfrak r}|^2)/|S|$. \end{proof} Note that the identity $(|A|-|{\mathfrak r}|)(|R|+|{\mathfrak m}|)=(|A|^2-|{\mathfrak r}|^2)/|S|$ also follows from $|R|=|A|/|S|$ and $|{\mathfrak m}|=|{\mathfrak r}|/|S|$. \begin{prop}\label{igual} Given $s\in S$, we have $$ N(m,s)=(|A|^{2m}-|{\mathfrak r}|^{2m})/|S|,\quad s\in S. $$ In particular, $N(m,s)$ is independent of $s$. \end{prop} \begin{proof} The first assertion follows from the second. We prove the latter by induction on $m$. The case $m=1$ is done in Lemma \ref{numbers}. Suppose that $m>1$ and $N(m-1,s)$ is independent of $s$. Set $N=N(1,0)$ and $M=N(m-1,0)$. Decompose $V$ as $U\perp W$ where $U$ has rank 2. Thus $N(m,s)$ is the number of pairs $(u,w) \in U \times W$ such that either $w$ is an arbitrary element of $W$ and $u$ is a basis vector of $U$ of length $s-(w,w)$, or, $u$ is an arbitrary non basis vector of $U$ and $w$ is a basis vector of $W$ of length $s - (u,u)$. These two possibilities are mutually exclusive. It follows, using the inductive hypothesis, that $$ N(m,s)=N|A|^{2(m-1)}+|{\mathfrak r}|^2 M, $$ which is independent of $s$. \end{proof} \begin{cor}\label{nusy} The number of symplectic pairs in $V$ is $$ \frac{(|A|^{2m}-|{\mathfrak r}|^{2m})|A|^{2m-1}}{|S|^2}. $$ \end{cor} \begin{proof} By Proposition \ref{igual}, the number of basis vectors of length 0 is $(|A|^{2m}-|{\mathfrak r}|^{2m})/|S|$. Given any such vector, say $u$, Lemma \ref{dos} ensures the existence of a vector $v\in V$ of length 0 such that $(u,v)=1$. Then, a vector $w\in V$ satisfies $(u,w)=1$ if and only if $w=au+v+z$, where $z$ is orthogonal to $u,v$. Moreover, given any such $z$, we see that $w$ has length 0 if and only if $$ 0=(w,w)=(ua+v,ua+v)+(z,z)=a^*-a+(z,z). $$ It follows from (\ref{yx}) that the number of solutions $a \in A$ to this equation is $|R|$. We infer that the number of symplectic pairs is $$ \frac{(|A|^{2m}-|{\mathfrak r}|^{2m})}{|S|}\times |A|^{2m-2}|R|=\frac{(|A|^{2m}-|{\mathfrak r}|^{2m})|A|^{2m-1}}{|S|^2}. $$ \end{proof} It follows from Lemma \ref{exte} and Proposition \ref{sp} that $U_{2m}(A)$ acts transitively on symplectic pairs. Moreover, we readily see that the stabilizer of a given symplectic pair is isomorphic to $U_{2(m-1)}(A)$. We infer from Corollary \ref{nusy} that $$ |U_{2m}(A)|=\frac{(|A|^{2m}-|{\mathfrak r}|^{2m})|A|^{2m-1}}{|S|^2}\times \frac{(|A|^{2(m-1)}-|{\mathfrak r}|^{2(m-1)})|A|^{2m-3}}{|S|^2}\times\cdots\times \frac{(|A|^{2}-|{\mathfrak r}|^{2})|A|}{|S|^2}. $$ We have proven the following result. \begin{thm} \label{zxz2} Let $A$ be a finite local ring, not necessarily commutative, with Jacobson radical~${\mathfrak r}$ and residue field $F_q$ of odd characteristic. Suppose $A$ has an involution $*$ such that $a-a^*\in{\mathfrak r}$ for all $a\in A$. Let $S$ be the group of all $a\in A$ such that $a=-a^*$. Then $$ |U_{2m}(A)|=\frac{|{\mathfrak r}|^{m(m+1)}|A|^{m^2} (q^{2m}-1)(q^{2(m-1)}-1)\cdots (q^2-1)}{|S|^{2m}}.\qed $$ \end{thm} \begin{note}{\rm We readily verify, by means of (\ref{ars}) and (\ref{ars2}), the equivalence of the formulae given in Theorems \ref{zxz} and \ref{zxz2}.} \end{note} \section{The order of the stabilizer of a basis vector}\label{s6} \begin{thm}\label{lasta} Let $v\in V$ be a basis vector and let $S_v$ be the stabilizer of $v$ in $U(V,h)$. Then $$ |S_v|=|U_{2(m-1)}(A)|\times |A|^{2m-1}/|S|. $$ In particular, the order of $S_v$ is independent of $v$ and its length. 
\end{thm} \begin{proof} By Theorem \ref{actra}, the number of basis vectors of length $(v,v)$ is equal to $|U_{2m}(A)|/|S_v|$. It follows from Proposition \ref{igual} that \begin{equation}\label{idw} |U_{2m}(A)|/|S_v|=(|A|^{2m}-|{\mathfrak r}|^{2m})/|S|. \end{equation} On the other hand, the above discussion shows that \begin{equation}\label{idw2} |U_{2m}(A)|=\frac{(|A|^{2m}-|{\mathfrak r}|^{2m})|A|^{2m-1}}{|S|^2}\times |U_{2(m-1)}(A)|. \end{equation} Combining (\ref{idw}) and (\ref{idw2}) we obtain the desired result. \end{proof} \section{The case when $A$ is commutative and principal}\label{s7} We make the following assumptions on $A$ until further notice: (A7) There is $a\in{\mathfrak r}$ such that $Aa=aA={\mathfrak r}$. (A8) The elements of $R$ commute among themselves. Using (A7), we see that $|A|=q^e$, $|{\mathfrak r}|=q^{e-1}$. Moreover, from $Aa=aA$, we get $a^*A=Aa^*$. Since $a=(a-a^*)/2+(a+a^*)/2$, not both $a-a^*$ and $a+a^*$ can be in ${\mathfrak r}^2$. Thus, ${\mathfrak r}$ has a generator $x$ that is hermitian or skew-hermitian and satisfies $Ax=xA$. In any case, $x^2$ is hermitian. We claim that $$ A=R+Rx. $$ Note first of all that, because of (A8), $R$ is a subring of $A$. Clearly, $R$ is a local ring with maximal ideal ${\mathfrak m}=R\cap{\mathfrak r}$ and residue field $R/{\mathfrak m}\cong A/{\mathfrak r}$. Secondly, from $A = R+S$ and $S\subseteq{\mathfrak r} = Ax$, we deduce \begin{equation} \label{der} A=R+Ax. \end{equation} Repeatedly using (\ref{der}) as well as (A8), we obtain $$ \begin{aligned} A &=R+(R+Ax)x=R+Rx+Ax^2=R+Rx+(R+Ax)x^2\\ &=R+Rx+Ax^3=R+Rx+(R+Ax)x^3=R+Rx+Ax^4=\dots=R+Rx. \end{aligned} $$ If $*=1_A$ then $A=R$ and ${\mathfrak r}={\mathfrak m}$ has $q^{e-1}$ elements. We make the following assumptions on $A$ until further notice: (A9) $*\neq 1_A$. (A10) $R\cap Rx=(0)$. It follows from (A9) and $A=R+Rx$ that $x$ cannot be hermitian. Therefore $x$ is skew-hermitian. Note that $R$ is a principal ring with maximal ideal ${\mathfrak m}=Rx^2$, since $$ {\mathfrak m}=R\cap Ax=R\cap (R+Rx)x=R\cap (Rx+Rx^2)=Rx^2+(R\cap Rx)=Rx^2. $$ \begin{lemma}\label{evod} The group epimorphism $f:R\to Rx$, given by $f(r)=rx$, is injective if $e$ is even, whereas the kernel of $f$ is $Rx^{e-1}$ and has $q$ elements if $e$ is odd. \end{lemma} \begin{proof} Note that every non-zero element of $A$ is of the form $cx^i$ for some unit $c\in U(A)$ and a unique $0\leq i<e$. It follows that the annihilator of $x$ in $A$ is equal to $Ax^{e-1}$. From $A=R+Rx$, we infer $Ax^{e-1}=Rx^{e-1}$. Thus, the kernel of $f$ is $R\cap Rx^{e-1}$. If $e$ is even then $R\cap Rx^{e-1}\subseteq R\cap Rx=(0)$, while if $e$ is odd $$R\cap Rx^{e-1}=Rx^{e-1}=Ax^{e-1}$$ is a 1-dimensional vector space over $F_q=A/{\mathfrak r}$. \end{proof} \begin{cor} We have $$ |A|=|R|^2\text{ if }e\text{ is even and }|A|=\frac{|R|^2}{q}\text{ if }e\text{ is odd.} $$ Thus, either $e=2\ell$ is even and $$ |{\mathfrak r}|=q^{2\ell-1},\;|{\mathfrak m}|=q^{\ell-1} $$ or $e=2\ell -1$ is odd and $$ |{\mathfrak r}|=q^{2\ell-2},\; |{\mathfrak m}|=q^{\ell-1}. $$ \end{cor} \begin{proof} This follows from $A=R\oplus Rx$, Lemma \ref{evod} and the group isomorphism $R/{\mathfrak m}\cong F_q$. \end{proof} We now resume the general discussion and note that if $A$ is a commutative, principal ideal ring and $*\neq 1_A$ then conditions (A7)-(A10) are automatically satisfied, for in this case we have $R\cap Rx\subseteq R\cap S=(0)$. 
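The order formulae above are easy to test by brute force in very small cases. As an illustration only (it plays no role in the proofs), consider the ring $A=F_3[\varepsilon]/(\varepsilon^2)$ with the involution determined by $\varepsilon\mapsto -\varepsilon$; this is the case $q=3$, $e=2\ell=2$ of the present section, with $R=F_3$, $|A|=9$ and $|{\mathfrak r}|=|S|=3$, so Theorem \ref{zxz2} predicts $|U_2(A)|=|{\mathfrak r}|^2|A|(q^2-1)/|S|^2=72$. The following Python script (a sketch under these assumptions, not part of the computations of this paper) enumerates all $2\times 2$ matrices over $A$ and counts those preserving the hyperbolic form $h$ on $V=A^2$; such matrices are automatically invertible, since their reductions preserve a nondegenerate alternating form over $F_3$.
\begin{verbatim}
# Brute-force check of the order formula for U_2(A) in the toy case
# A = F_3[eps]/(eps^2), involution eps -> -eps (q = 3, e = 2, R = F_3,
# |A| = 9, |r| = |S| = 3).  The formula predicts |U_2(A)| = 9*9*8/9 = 72.
from itertools import product

q = 3
# the element a + b*eps of A is stored as the pair (a, b) with a, b in F_q
A = [(a, b) for a in range(q) for b in range(q)]

def add(x, y):
    return ((x[0] + y[0]) % q, (x[1] + y[1]) % q)

def mul(x, y):
    # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps^2 = 0
    return ((x[0] * y[0]) % q, (x[0] * y[1] + x[1] * y[0]) % q)

def star(x):
    # the involution a + b*eps -> a - b*eps
    return (x[0], (-x[1]) % q)

def neg(x):
    return ((-x[0]) % q, (-x[1]) % q)

def h(x, y):
    # h((x1,x2),(y1,y2)) = x1* y2 - x2* y1 on the hyperbolic plane V = A^2
    return add(mul(star(x[0]), y[1]), neg(mul(star(x[1]), y[0])))

E = [((1, 0), (0, 0)), ((0, 0), (1, 0))]   # the symplectic basis u, v of V

def preserves_h(g):
    # by sesquilinearity, g preserves h iff it preserves h on basis pairs
    cols = [(g[0][0], g[1][0]), (g[0][1], g[1][1])]
    return all(h(cols[i], cols[j]) == h(E[i], E[j])
               for i in range(2) for j in range(2))

count = sum(preserves_h(((a, b), (c, d)))
            for a, b, c, d in product(A, repeat=4))
print(count)   # prints 72, in agreement with the order formula
\end{verbatim}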
It is clear that $A\cong R[t]/(t^2-x^2)$ if $e=2\ell$ is even, and $A\cong R[t]/(t^2-x^2,t^{2\ell-1})$ if $e=2\ell-1$ is odd. Using part of the above information together with Theorem \ref{zxz}, we obtain the following result. \begin{thm} \label{ords} Let $A$ be a finite, commutative, principal, local ring with Jacobson radical~${\mathfrak r}$ and residue field $A/{\mathfrak r}\cong F_q$ of odd characteristic. Let $e$ be the nilpotency degree of ${\mathfrak r}$. Suppose $A$ has an involution $*$ such that $a-a^*\in{\mathfrak r}$ for all $a\in A$. (a) If $*=1_A$ then $$ |U_{2m}(A)|=|{\mathrm {Sp}}_{2m}(A)|=q^{(e-1)(2m^2+m)+m^2}(q^{2m}-1)(q^{2(m-1)}-1)\cdots (q^2-1). $$ (b) If $*\neq 1_A$ and $e=2\ell$ is even then $$ |U_{2m}(A)|=q^{(2\ell-1)(2m^2-m)}q^{2(\ell-1)m} q^{m^2}(q^{2m}-1)(q^{2(m-1)}-1)\cdots (q^2-1). $$ (c) If $*\neq 1_A$ and $e=2\ell-1$ is odd then $$ |U_{2m}(A)|=q^{(2\ell-2)(2m^2-m)}q^{2(\ell-1)m} q^{m^2}(q^{2m}-1)(q^{2(m-1)}-1)\cdots (q^2-1).\qed $$ \end{thm} \begin{note}{\rm Our initial conditions on $A$ do not force $R$ to be a subring of $A$ or $R\cap Rx=(0)$. Indeed, let $A$ be as indicated in the parenthetical remark of the second case of Example \ref{tresdos}, and set $x=a$, $r=b$. Then $rx\in R\cap Rx$, so $R\cap Rx\neq (0)$, and $rxr=-r^2x$ with $(-r^2 x)^*=r^2 x$, so $R$ is not a subring of $A$. It is also clear that $A$ need not be principal, even if $R$ is, as can be seen by taking $b=0$ and $B$ not a field in the general construction of Example \ref{tresdos} (e.g. $A=Z_{p^2}[t]/(t^2)$).} \end{note} \noindent{\bf Acknowledgement.} We are very grateful to the referee for a thorough reading of the paper and valuable suggestions. \end{document}
\begin{document} \title[]{ON STRONG $r$-HELIX SUBMANIFOLDS AND SPECIAL CURVES} \author{Evren Z\i plar} \address{Department of Mathematics, Faculty of Science, University of Ankara, Tando\u{g}an, Turkey} \email{[email protected]} \urladdr{} \author{Ali \c{S}enol} \address{Department of Mathematics, Faculty of Science, \c{C}ank\i r\i\ Karatekin University, \c{C}ank\i r\i , Turkey} \email{[email protected]} \author{Yusuf Yayl\i } \address{Department of Mathematics, Faculty of Science, University of Ankara, Tando\u{g}an, Turkey} \email{[email protected]} \thanks{} \urladdr{} \date{} \subjclass[2000]{ \ 53A04, 53B25, 53C40, 53C50.} \keywords{Strong $r$-helix submanifold; Line of curvature; Geodesic curve; Slant helix.\\ Corresponding author: Evren Z\i plar, e-mail: [email protected]} \thanks{} \begin{abstract} In this paper, we investigate special curves on a strong $r$-helix submanifold in Euclidean $n$-space $E^{n}$. Moreover, we give important relations between strong $r$-helix submanifolds and special curves such as lines of curvature, geodesics and slant helices. \end{abstract} \maketitle \section{Introduction} In the differential geometry of manifolds, a helix submanifold of $IR^{n}$ with respect to a fixed direction $d$ in $IR^{n}$ is defined by the property that its tangent planes make a constant angle with the fixed direction $d$ (the helix direction) [5]. Di Scala and Ruiz-Hern\'{a}ndez introduced the concept of these manifolds in [5]. The concept of a strong $r$-helix submanifold of $IR^{n}$ was introduced in [4]. Let $M\subset IR^{n}$ be a submanifold and let $H(M)$ be the set of helix directions of $M$. Following [4], we say that $M$ is a strong $r$-helix if the set $H(M)$ is a linear subspace of $IR^{n}$ of dimension greater than or equal to $r$. Recently, M. Ghomi worked out the shadow problem posed by H. Wente and discussed the shadow boundary in [8]. Ruiz-Hern\'{a}ndez showed in [12] that shadow boundaries are related to helix submanifolds. Helix hypersurfaces have been studied in nonflat ambient spaces in [6,7]. Cermelli and Di Scala have also studied helix hypersurfaces in liquid crystals in [3]. The plan of this paper is as follows. In Section 2, we recall some basic facts from the general theory of strong $r$-helix submanifolds and curves. In Section 3, we give the important relations between strong $r$-helix submanifolds and some special curves such as lines of curvature, geodesics and slant helices. \section{PRELIMINARIES} \begin{definition} Let $M\subset IR^{n}$ be a submanifold of a Euclidean space. A unit vector $d\in IR^{n}$ is called a helix direction of $M$ if the angle between $d$ and any tangent space $T_{p}M$ is constant. Let $H(M)$ be the set of helix directions of $M$. We say that $M$ is a strong $r$-helix if $H(M)$ is an $r$-dimensional linear subspace of $IR^{n}$ [4]. \end{definition} \begin{definition} A submanifold $M\subset IR^{n}$ is a strong $r$-helix if the set $H(M)$ is a linear subspace of $IR^{n}$ of dimension greater than or equal to $r$ [4]. \end{definition} \begin{definition} A unit speed curve $\alpha :I\rightarrow E^{n}$ is called a slant helix if its unit principal normal $V_{2}$ makes a constant angle with a fixed direction $U$ [1]. \end{definition} \begin{definition} Let the $(n-k)$-manifold $M$ be a submanifold of the Riemannian manifold $\overline{M}=E^{n}$ and let $\overline{D}$ be the Riemannian connexion on $\overline{M}=E^{n}$. 
For $C^{\infty}$ fields $X$ and $Y$ with domain $A$ on $M$ (and tangent to $M$), define $D_{X}Y$ and $V(X,Y)$ on $A$ by decomposing $\overline{D}_{X}Y$ into unique tangential and normal components, respectively; thus, \begin{equation*} \overline{D}_{X}Y=D_{X}Y+V(X,Y). \end{equation*} Then, $D$ is the Riemannian connexion on $M$ and $V$ is a symmetric vector-valued 2-covariant $C^{\infty}$ tensor called the second fundamental tensor. The above decomposition is called the Gauss equation [9]. \end{definition} \begin{definition} Let the $(n-k)$-manifold $M$ be a submanifold of the Riemannian manifold $\overline{M}=E^{n}$, let $\overline{D}$ be the Riemannian connexion on $\overline{M}=E^{n}$ and let $D$ be the Riemannian connexion on $M$. Then, the Weingarten formula \begin{equation*} \overline{D}_{X}N=-A_{N}(X)+D_{X}^{\bot }N \end{equation*} holds for every $X$ tangent to $M$ and every $N$ normal to $M$. Here $A_{N}$ is the shape operator associated to $N$, also known as the Weingarten operator corresponding to $N$, and $D^{\bot }$ is the induced connexion in the normal bundle of $M$ ($A_{N}(X)$ is also the tangent component of $-\overline{D}_{X}N$ and will be denoted by $A_{N}(X)=$tang$(-\overline{D}_{X}N)$). In particular, if $M$ is a hypersurface in $E^{n}$, we have $\left\langle V(X,Y),N\right\rangle =\left\langle A_{N}(X),Y\right\rangle $ for all $X$, $Y$ tangent to $M$. So, \begin{equation*} V(X,Y)=\left\langle V(X,Y),N\right\rangle N=\left\langle A_{N}(X),Y\right\rangle N \end{equation*} and we obtain \begin{equation*} \overline{D}_{X}Y=D_{X}Y+\left\langle A_{N}(X),Y\right\rangle N. \end{equation*} For this definition, note that the shape operator $A_{N}$ is defined by the map $A_{N}:\varkappa (M)\rightarrow \varkappa (M)$, where $\varkappa (M)$ is the space of tangent vector fields on $M$, and, if $p\in M$, the shape operator at $p$ is given by the map $A_{p}:T_{p}(M)\rightarrow T_{p}(M)$. The eigenvalues of $A_{p}$ are called the principal curvatures (denoted by $\lambda _{i}$) and the eigenvectors of $A_{p}$ are called the principal vectors [10,11]. \end{definition} \begin{definition} If $\alpha $ is a (unit speed) curve in $M$ with $C^{\infty}$ unit tangent $T$, then $V(T,T)$ is called the normal curvature vector field of $\alpha $ and $k_{T}=\left\Vert V(T,T)\right\Vert $ is called the normal curvature of $\alpha $ [9]. \end{definition} \section{Main Theorems} \begin{theorem} Let $M$ be a strong $r$-helix hypersurface and $H(M)\subset E^{n}$ be the set of helix directions of $M$. If $\alpha :I\subset IR\rightarrow M$ is a (unit speed) line of curvature (not a line) on $M$, then $d_{j}\notin Sp\left\{ N,T\right\} $ along the curve $\alpha $ for all $d_{j}\in H(M)$, where $T$ is the tangent vector field of $\alpha $ and $N$ is a unit normal vector field of $M$. \end{theorem} \begin{proof} Suppose, to the contrary, that $d_{j}\in Sp\left\{ N,T\right\} $ along the curve $\alpha $ for some $d_{j}\in H(M)$. Then, along the curve $\alpha $, since $M$ is a strong $r$-helix hypersurface, we can decompose $d_{j}$ into tangent and normal components: \begin{equation} d_{j}=\cos (\theta _{j})N+\sin (\theta _{j})T \end{equation} where $\theta _{j}$ is constant. 
From (3.1), by taking derivatives on both sides along the curve $\alpha $, we get: \begin{equation} 0=\cos (\theta _{j})N'+\sin (\theta _{j})T' \end{equation} Moreover, since $\alpha $ is a line of curvature on $M$, \begin{equation} N'=\lambda \alpha ' \end{equation} along the curve $\alpha $. By using the equations (3.2) and (3.3), we deduce that the system $\left\{ \alpha ',T'\right\} $ is linearly dependent. But the system $\left\{ \alpha ',T'\right\} $ is never linearly dependent. This is a contradiction. This completes the proof. \end{proof} \begin{theorem} Let $M$ be an $(n-k)$-dimensional submanifold of $E^{n}$. Let $\overline{D}$ be the Riemannian connexion (standard covariant derivative) on $E^{n}$ and $D$ be the Riemannian connexion on $M$. Let us assume that $M\subset E^{n}$ is a strong $r$-helix submanifold and that $H(M)\subset E^{n}$ is the space of helix directions of $M$. If $\alpha :I\subset IR\rightarrow M$ is a (unit speed) geodesic curve on $M$ and if $\left\langle V_{2},\xi _{j}\right\rangle $ is a constant function along the curve $\alpha $, then $\alpha $ is a slant helix in $E^{n}$, where $V_{2}$ is the unit principal normal of $\alpha $ and $\xi _{j}$ is the normal component of a direction $d_{j}\in H(M)$. \end{theorem} \begin{proof} Let $T$ be the unit tangent vector field of $\alpha $. Then, from the Gauss formula in Definition 2.4, \begin{equation} \overline{D}_{T}T=D_{T}T+V(T,T) \end{equation} Since $\alpha $ is a geodesic curve on $M$, \begin{equation} D_{T}T=0 \end{equation} So, by using (3.4), (3.5) and the Frenet formulas, we have: \begin{equation*} \overline{D}_{T}T=k_{1}V_{2}=V(T,T) \end{equation*} That is, the vector field $V_{2}\in \vartheta (M)$ along the curve $\alpha $, where $\vartheta (M)$ is the normal space of $M$. On the other hand, since $M$ is a strong $r$-helix submanifold, we can decompose any $d_{j}\in H(M)$ into its tangent and normal components: \begin{equation} d_{j}=\cos (\theta _{j})\xi _{j}+\sin (\theta _{j})T_{j} \end{equation} where $\theta _{j}$ is constant. Moreover, by hypothesis, $\left\langle V_{2},\xi _{j}\right\rangle $ is a constant function along the curve $\alpha $ for the normal component $\xi _{j}$ of a direction $d_{j}\in H(M)$. Hence, taking the scalar product with $V_{2}$ on both sides of equation (3.6), we obtain: \begin{equation} \left\langle d_{j},V_{2}\right\rangle =\cos (\theta _{j})\left\langle V_{2},\xi _{j}\right\rangle +\sin (\theta _{j})\left\langle V_{2},T_{j}\right\rangle \end{equation} Since $\cos (\theta _{j})\left\langle V_{2},\xi _{j}\right\rangle $ is constant and $\left\langle V_{2},T_{j}\right\rangle =0$ (since $V_{2}\in \vartheta (M)$) along the curve $\alpha $, from (3.7) we have: \begin{equation*} \left\langle d_{j},V_{2}\right\rangle =\text{constant} \end{equation*} along the curve $\alpha $. Consequently, $\alpha $ is a slant helix in $E^{n}$. \end{proof} \begin{theorem} Let $M$ be an $(n-k)$-dimensional submanifold of $E^{n}$. Let $\overline{D}$ be the Riemannian connexion (standard covariant derivative) on $E^{n}$ and $D$ be the Riemannian connexion on $M$. Let us assume that $M\subset E^{n}$ is a strong $r$-helix submanifold and that $H(M)\subset E^{n}$ is the space of helix directions of $M$. 
If $\alpha :I\subset IR\rightarrow M$ is a (unit speed) curve on $M$ with the normal curvature function $k_{T}=0$ and if $\left\langle V_{2},T_{j}\right\rangle $ is a constant function along the curve $\alpha $, then $\alpha $ is a slant helix in $E^{n}$, where $V_{2}$ is the unit principal normal of $\alpha $ and $T_{j}$ is the tangent component of a direction $d_{j}\in H(M)$. \end{theorem} \begin{proof} Let $T$ be the unit tangent vector field of $\alpha $. Then, from the Gauss formula in Definition 2.4, \begin{equation} \overline{D}_{T}T=D_{T}T+V(T,T) \end{equation} Since the normal curvature $k_{T}=0$, \begin{equation} V(T,T)=0 \end{equation} So, by using (3.8), (3.9) and the Frenet formulas, we have: \begin{equation*} \overline{D}_{T}T=k_{1}V_{2}=D_{T}T\text{.} \end{equation*} That is, the vector field $V_{2}\in T_{\alpha (t)}M$, where $T_{\alpha (t)}M$ is the tangent space of $M$. On the other hand, since $M$ is a strong $r$-helix submanifold, we can decompose any $d_{j}\in H(M)$ into its tangent and normal components: \begin{equation} d_{j}=\cos (\theta _{j})\xi _{j}+\sin (\theta _{j})T_{j} \end{equation} where $\theta _{j}$ is constant. Moreover, by hypothesis, $\left\langle V_{2},T_{j}\right\rangle $ is a constant function along the curve $\alpha $ for the tangent component $T_{j}$ of a direction $d_{j}\in H(M)$. Hence, taking the scalar product with $V_{2}$ on both sides of equation (3.10), we obtain: \begin{equation} \left\langle d_{j},V_{2}\right\rangle =\cos (\theta _{j})\left\langle V_{2},\xi _{j}\right\rangle +\sin (\theta _{j})\left\langle V_{2},T_{j}\right\rangle \end{equation} Since $\sin (\theta _{j})\left\langle V_{2},T_{j}\right\rangle $ is constant and $\left\langle V_{2},\xi _{j}\right\rangle =0$ (since $V_{2}\in T_{\alpha (t)}M$) along the curve $\alpha $, from (3.11) we have: \begin{equation*} \left\langle d_{j},V_{2}\right\rangle =\text{constant} \end{equation*} along the curve $\alpha $. Consequently, $\alpha $ is a slant helix in $E^{n}$. \end{proof} \begin{definition} Let $M\subset IR^{n}$ be a Euclidean submanifold of arbitrary codimension. A curve $\alpha $ in $M$ is called a line of curvature if its tangent $T$ is a principal vector at each of its points. In other words, when $T$ (the tangent of $\alpha $) is a principal vector at each of its points, then for an arbitrary normal vector field $N\in \vartheta (M)$ the shape operator $A_{N}$ associated to $N$ satisfies $A_{N}(T)=$tang$(-\overline{D}_{T}N)=\lambda _{j}T$ along the curve $\alpha $, where $\lambda _{j}$ is a principal curvature and $\overline{D}$ is the Riemannian connexion (standard covariant derivative) on $IR^{n}$ [2]. \end{definition} \begin{theorem} Let $M$ be an $(n-k)$-dimensional submanifold of $E^{n}$ and let $\overline{D}$ be the Riemannian connexion (standard covariant derivative) on $E^{n}$. Let us assume that $M\subset E^{n}$ is a strong $r$-helix submanifold and that $H(M)\subset E^{n}$ is the space of helix directions of $M$. If $\alpha :I\rightarrow M$ is a line of curvature with respect to the normal component $N_{j}\in \vartheta (M)$ of a direction $d_{j}\in H(M)$ and if $N_{j}'\in \varkappa (M)$ along the curve $\alpha $, then $d_{j}\in Sp\left\{ T\right\} ^{\bot }$ along the curve $\alpha $, where $T$ is the unit tangent vector field of $\alpha $. \end{theorem} \begin{proof} We assume that $\alpha :I\rightarrow M$ is a line of curvature with respect to the normal component $N_{j}\in \vartheta (M)$ of a direction $d_{j}\in H(M)$. 
Since $M$ is a strong $r$-helix submanifold, we can decompose $d_{j}\in H(M)$ into its tangent and normal components: \begin{equation*} d_{j}=\cos (\theta _{j})N_{j}+\sin (\theta _{j})T_{j} \end{equation*} where $\theta _{j}$ is constant. So $\left\langle N_{j},d_{j}\right\rangle $ is constant, and by taking derivatives on both sides along the curve $\alpha $, we get $\left\langle N_{j}',d_{j}\right\rangle =0$. On the other hand, since $\alpha :I\rightarrow M$ is a line of curvature with respect to $N_{j}\in \vartheta (M)$, \begin{equation*} A_{N_{j}}(T)=\text{tang}(-\overline{D}_{T}N_{j})=\text{tang}(-N_{j}')=\lambda _{j}T \end{equation*} along the curve $\alpha $. By hypothesis, $N_{j}'\in \varkappa (M)$ along the curve $\alpha $. Hence, \begin{equation} \text{tang}(-N_{j}')=-N_{j}'=\lambda _{j}T \end{equation} Therefore, by using the equalities $\left\langle N_{j}',d_{j}\right\rangle =0$ and (3.12), we obtain: \begin{equation*} \left\langle T,d_{j}\right\rangle =0 \end{equation*} along the curve $\alpha $. This completes the proof. \end{proof} \begin{theorem} Let $M$ be an $(n-k)$-dimensional submanifold of $E^{n}$ and let $\overline{D}$ be the Riemannian connexion (standard covariant derivative) on $E^{n}$. Let us assume that $M\subset E^{n}$ is a strong $r$-helix submanifold and that $H(M)\subset E^{n}$ is the space of helix directions of $M$. If $\alpha :I\rightarrow M$ is a curve in $M$ and if the system $\left\{ T_{j}',T\right\} $ is linearly dependent along the curve $\alpha $, where $T_{j}'$ is the derivative of the tangent component $T_{j}$ of a direction $d_{j}\in H(M)$ and $T$ is the tangent to the curve $\alpha $, then $\alpha $ is a line of curvature in $M$. \end{theorem} \begin{proof} Since $M$ is a strong $r$-helix submanifold, we can decompose $d_{j}\in H(M)$ into its tangent and normal components: \begin{equation} d_{j}=\cos (\theta _{j})N_{j}+\sin (\theta _{j})T_{j} \end{equation} where $\theta _{j}$ is constant. Taking derivatives on both sides of equation (3.13) along the curve $\alpha $, we obtain: \begin{equation} 0=\cos (\theta _{j})N_{j}'+\sin (\theta _{j})T_{j}' \end{equation} From (3.14), we can write \begin{equation} N_{j}'=-\tan (\theta _{j})T_{j}' \end{equation} So, for the tangent component of $-N_{j}'$, from (3.15) we can write: \begin{equation} A_{N_{j}}(T)=\text{tang}(-\overline{D}_{T}N_{j})=\text{tang}(-N_{j}')=\text{tang}(\tan (\theta _{j})T_{j}') \end{equation} along the curve $\alpha $. According to the hypothesis, the system $\left\{ T_{j}',T\right\} $ is linearly dependent along the curve $\alpha $. Hence we get $T_{j}'=\lambda _{j}T$ and, by using equation (3.16), we have: \begin{equation*} A_{N_{j}}(T)=\text{tang}(\tan (\theta _{j})T_{j}')=\text{tang}(\tan (\theta _{j})\lambda _{j}T) \end{equation*} and \begin{equation} A_{N_{j}}(T)=\text{tang}(\tan (\theta _{j})\lambda _{j}T) \end{equation} Moreover, since $T\in \varkappa (M)$, tang$(\tan (\theta _{j})\lambda _{j}T)=(\tan (\theta _{j})\lambda _{j})T=k_{j}T$. Therefore, from (3.17), we have: \begin{equation*} A_{N_{j}}(T)=k_{j}T\text{.} \end{equation*} It follows that $\alpha $ is a line of curvature in $M$ for $N_{j}\in \vartheta (M)$. This completes the proof. 
\end{proof} \noindent \textbf{Acknowledgment.} The authors would like to thank two anonymous referees for their valuable suggestions and comments that helped to improve the presentation of this paper. \end{document}
\begin{document} \title{On the failure of the Gorenstein property\ for Hecke algebras of prime weight} \begin{abstract} In this article we report on extensive calculations concerning the Gorenstein defect for Hecke algebras of spaces of modular forms of prime weight~$p$ at maximal ideals of residue characteristic~$p$ such that the attached mod~$p$ Galois representation is unramified at~$p$ and the Frobenius at~$p$ acts by scalars. The results lead us to ask the question whether the Gorenstein defect and the multiplicity of the attached Galois representation are always equal to~$2$. We review the literature on the failure of the Gorenstein property and multiplicity one, discuss in some detail a very important practical improvement of the modular symbols algorithm over finite fields and include precise statements on the relationship between the Gorenstein defect and the multiplicity of Galois representations. MSC Classification: 11F80 (primary), 11F33, 11F25 (secondary). \end{abstract} \tableofcontents \section{Introduction} \label{section-introduction} In Wiles' proof of Fermat's Last Theorem (see \cite{wiles}) an essential step was to show that certain Hecke algebras are Gorenstein rings. Moreover, the Gorenstein property of Hecke algebras is equivalent to the fact that Galois representations appear on certain Jacobians of modular curves precisely with multiplicity one. This article is concerned with the Gorenstein property and with the multiplicity one question. We report on previous work and exhibit many new examples where multiplicity one and the Gorenstein property fail. We compute the multiplicity in these cases. Moreover, we ask the question, suggested by our computations, of whether it is always equal to two when it fails. We first have to introduce some notation. For integers $N \ge 1$ and $k \ge 2$ and a Dirichlet character $\chi: (\mathbb{Z}/N\mathbb{Z})^\times \to \mathbb{C}^\times$ we let $S_k(\Gamma_1(N))$ be the $\mathbb{C}$-vector space of holomorphic cusp forms on $\Gamma_1(N)$ of weight~$k$ and $S_k(N,\chi)$ the subspace on which the diamond operators act through the character~$\chi$. We now introduce some extra notation for Hecke algebras over specified rings. \begin{notation}[Notation for Hecke algebras] Whenever $S \subseteq R$ are rings and $M$ is an $R$-module on which the Hecke and diamond operators act, we let $\mathbb{T}_S(M)$ be the $S$-subalgebra inside the $R$-endomorphism ring of~$M$ generated by the Hecke and the diamond operators. If $\phi: S \to S'$ is a ring homomorphism, we let $\mathbb{T}_\phi (M) := \mathbb{T}_S(M) \otimes_S S'$ or, with $\phi$ understood, $\mathbb{T}_{S \to S'} (M)$. \end{notation} We will mostly be dealing with the Hecke algebras $\mathbb{T}_\mathbb{Z}(S_k(\Gamma_1(N)))$ and $\mathbb{T}_{\mathbb{Z}[\chi]}(S_k(N,\chi))$, their completions $\mathbb{T}_{\mathbb{Z} \to \mathbb{Z}_p}(S_k(\Gamma_1(N)))$ and $\mathbb{T}_{\mathcal{O}\to \widehat{\mathcal{O}}}(S_k(N,\chi))$, as well as their reductions $\mathbb{T}_{\mathbb{Z} \to \mathbb{F}_p}(S_k(\Gamma_1(N)))$ and $\mathbb{T}_{\mathcal{O} \to \mathbb{F}}(S_k(N,\chi))$. Here, $p$ is a prime and $\mathcal{O} = \mathbb{Z}[\chi]$ is the smallest subring of~$\mathbb{C}$ containing all values of~$\chi$, $\widehat{\mathcal{O}}$ is a completion and $\mathcal{O} \twoheadrightarrow \mathbb{F}$ is a reduction modulo a prime above~$p$. 
In Section~\ref{modular-symbols} the reduced Hecke algebras are identified with Hecke algebras of mod~$p$ modular forms, which are closely related to Hecke algebras of Katz modular forms over finite fields (see Section~\ref{section-relation}). We choose a holomorphic cuspidal Hecke eigenform as the starting point of our discussion and treatment. So let $f \in S_k(N,\chi) \subseteq S_k(\Gamma_1(N))$ be an eigenform for all Hecke and diamond operators. It (more precisely, its Galois conjugacy class) corresponds to minimal ideals, both denoted by $\mathfrak{p}_f$, in each of the two Hecke algebras $\mathbb{T}_\mathbb{Z}(S_k(\Gamma_1(N)))$ and $\mathbb{T}_{\mathbb{Z}[\chi]}(S_k(N,\chi))$. We also choose maximal ideals $\mathfrak{m}=\mathfrak{m}_f$ containing~$\mathfrak{p}_f$ of residue characteristic~$p$ again in each of the two. Note that the residue fields are the same in both cases. By work of Shimura and Deligne, one can associate to~$f$ (more precisely, to~$\mathfrak{m}$) a continuous odd semi-simple Galois representation \[ \rho_f = \rho_{\mathfrak{m}_f} = \rho_\mathfrak{m}: \Gal({\overline{\QQ}}/\mathbb{Q}) \rightarrow \mathrm{GL}_2(\hecke{N}/\mathfrak{m}) \] unramified outside $Np$ satisfying~$\Tr(\rho_\mathfrak{m}(\Frob_l)) \equiv T_l \mod \mathfrak{m}$ and $\Det(\rho_m(\Frob_l)) \equiv l^{k-1}\chi(l) \mod \mathfrak{m}$ for all primes~$l \nmid Np$. In the case of weight~$k=2$ and level~$N$, the representation~$\rho_m$ can be naturally realised on the $p$-torsion points of the Jacobian of the modular curve $X_1(N)_\mathbb{Q}$. The algebra $\mathbb{T}_{\mathbb{Z} \to \mathbb{F}_p}(S_2(\Gamma_1(N)))$ acts naturally on $J_1(N)_\mathbb{Q}({\overline{\QQ}})[p]$ and we can form the Galois module $J_1(N)_\mathbb{Q} ({\overline{\QQ}})[\mathfrak{m}] = J_1(N)_\mathbb{Q}({\overline{\QQ}})[p][\widetilde{\mathfrak{m}}]$ with $\widetilde{\mathfrak{m}}$ the maximal ideal of $\mathbb{T}_{\mathbb{Z} \to \mathbb{F}_p}(S_2(\Gamma_1(N)))$ which is the image of~$\mathfrak{m}$ under the natural projection. Supposing that $\rho_m$ is absolutely irreducible, the main result of~\cite{blr} shows that the Galois representation $J_1(Np)_\mathbb{Q}({\overline{\QQ}})[\mathfrak{m}]$ is isomorphic to a direct sum of $r$~copies of~$\rho_m$ for some integer $r \ge 1$, which one calls the {\em multiplicity of~$\rho_m$} (cf.~\cite{ribet-stein}). One says that $\rho_m$ is a {\em multiplicity one representation} or {\em satisfies multiplicity one}, if $r=1$. See \cite{mazur} for a similar definition for $J_0(N)$ and Prop.~\ref{nulleins} for a comparison. The notion of multiplicity can be naturally extended to Galois representations attached to eigenforms~$f$ of weights $3 \le k \le p+1$ for $p \nmid N$. This is accomplished by a result of Serre's which implies the existence of a maximal ideal $\mathfrak{m}_2 \subset \heckeoneNk{Np}{2}$ such that $\rho_{\mathfrak{m}_f} \cong \rho_{\mathfrak{m}_2}$ (see Prop.~\ref{totwo}). One hence obtains the notion of multiplicity (on $J_1(Np)$) for the representation~$\rho_{m_f}$ by defining it as the multiplicity of~$\rho_{\mathfrak{m}_2}$. Moreover, when allowing twists by the cyclotomic character, it is even possible to treat arbitrary weights. The following theorem summarises results on when the multiplicity in the above sense is known to be one. 
\begin{thm}[Mazur~\cite{mazur}, Edixhoven~\cite{edixhoven-serre}, Gross~\cite{gross}, Buzzard~\cite{buzzard_appendix}] \label{gorenstein-theorem} Let~$\rho_\mathfrak{m}$ be a representation associated with a modular cuspidal eigenform~$f \in S_{k}(N,\chi)$ and let~$p$ be the residue characteristic of~$\mathfrak{m}$. Suppose that $\rho_\mathfrak{m}$ is absolutely irreducible and that $p$ does not divide~$N$. If either \begin{enumerate} \itemsep=0cm plus 0pt minus 0pt \item $2 \le k \le p-1$, or \item $k=p$ and $\rho_\mathfrak{m}$ is ramified at~$p$, or \item $k=p$ and $\rho_\mathfrak{m}$ is unramified at~$p$ and~$\rho_\mathfrak{m}(\Frob_p)$ is not scalar, \end{enumerate} then the multiplicity of~$\rho_\mathfrak{m}$ is one. \end{thm} This theorem is composed of Lemma~15.1 from Mazur~\cite{mazur}, Theorem~9.2 from Edixhoven~\cite{edixhoven-serre}, Proposition~{12.10} from Gross~\cite{gross} and Theorem~6.1 from Buzzard~\cite{buzzard_appendix}. The following theorem by the second author (\cite{nongor}, Corollary~4.4) tells us when the multiplicity is not one. \begin{thm}\label{nongor} Let $\rho_\mathfrak{m}$ as in the previous theorem. Suppose $k=p$ and that $\rho_\mathfrak{m}$ is unramified at~$p$. If $p=2$, assume also that a Katz cusp form over~$\mathbb{F}_2$ of weight~$1$ on~$\Gamma_1(N)$ exists which gives rise to~$\rho_\mathfrak{m}$. If $\rho_\mathfrak{m}(\Frob_p)$ is a scalar matrix, then the multiplicity of~$\rho_\mathfrak{m}$ is bigger than~$1$. \end{thm} In Section~\ref{section-relation} we explain how the Galois representation $J_1(Np)_\mathbb{Q}({\overline{\QQ}})[\mathfrak{m}]$ is related to the different Hecke algebras evoked above and see in many cases of interest a precise relationship between the geometrically defined term of {\em multiplicity} and the {\em Gorenstein defect} of these algebras. The latter can be computed explicitly, which is the subject of the present article. We now give the relevant definitions. \begin{defi}[The Gorenstein property] Let~$A$ be a local Noetherian ring with maximal ideal~$\mathfrak{m}$. Suppose first that the Krull dimension of~$A$ is zero, i.e.\ that $A$ is Artinian. We then define the \emph{Gorenstein defect} of~$A$ to be the minimum number of $A$-module generators of the annihilator of~$\mathfrak{m}$ (i.e.\ $A[\mathfrak{m}]$) minus~1; equivalently, this is the~$A/\mathfrak{m}$-dimension of the annihilator of~$\mathfrak{m}$ minus~1. We say that $A$ is \emph{Gorenstein} if its Gorenstein defect is~0, and \emph{non-Gorenstein} otherwise. If the Krull dimension of~$A$ is positive, we inductively call~$A$ {\em Gorenstein}, if there exists a non-zero-divisor $x \in \mathfrak{m}$ such that $A/(x)$ is a Gorenstein ring of smaller Krull dimension (see~\cite{Eisenbud}, p.~532; note that our definition implies that $A$ is Cohen-Macaulay). A (not necessarily local) Noetherian ring is said to be \emph{Gorenstein} if and only if all of its localisations at its maximal ideals are Gorenstein. \end{defi} We will for example be interested in the Gorenstein property of $\mathbb{T}_\mathbb{Z}(S_k(\Gamma_1(N)))_\mathfrak{m}$. Choosing $x=p$ in the definition, we see that this is equivalent to the Gorenstein defect of the finite dimensional $\mathbb{F}_p$-algebra $\mathbb{T}_{\mathbb{Z}\to\mathbb{F}_p}(S_k(\Gamma_1(N)))_\mathfrak{m}$ being zero. Whenever we refer to the Gorenstein defect of the former algebra (over~$\mathbb{Z}$), we mean the one of the latter. 
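To make the Artinian case of this definition concrete, the Gorenstein defect of a finite-dimensional local algebra over a prime field can be read off from a multiplication table: one computes the annihilator of the maximal ideal and takes its dimension minus one. The following Python sketch (an illustration of the definition only; the helper names are ours and this is not the algorithm of Section~\ref{modular-symbols}) does this by brute force for the Gorenstein ring $\mathbb{F}_2[x]/(x^3)$ and the non-Gorenstein ring $\mathbb{F}_2[x,y]/(x^2,xy,y^2)$, whose Gorenstein defects are~$0$ and~$1$, respectively.
\begin{verbatim}
# Toy computation of the Gorenstein defect of a finite local F_p-algebra
# given by structure constants on a basis (1, e_1, ..., e_{n-1}) whose
# non-unit basis vectors span the maximal ideal m.  We find the annihilator
# A[m] by brute force and return dim_{F_p} A[m] - 1.  Illustration only.
from itertools import product
from math import log

def gorenstein_defect(p, table):
    # table[i][j] is the coordinate vector of e_i * e_j (with e_0 = 1)
    n = len(table)

    def multiply(a, b):
        out = [0] * n
        for i in range(n):
            for j in range(n):
                if a[i] and b[j]:
                    for k in range(n):
                        out[k] = (out[k] + a[i] * b[j] * table[i][j][k]) % p
        return out

    gens = [tuple(int(k == i) for k in range(n)) for i in range(1, n)]
    ann = [a for a in product(range(p), repeat=n)
           if all(multiply(a, g) == [0] * n for g in gens)]
    # the annihilator A[m] is an F_p-subspace, so its size is a power of p
    return round(log(len(ann), p)) - 1

# F_2[x]/(x^3) on the basis 1, x, x^2: Gorenstein, defect 0
cube = [[[1,0,0],[0,1,0],[0,0,1]],
        [[0,1,0],[0,0,1],[0,0,0]],
        [[0,0,1],[0,0,0],[0,0,0]]]
# F_2[x,y]/(x^2, xy, y^2) on the basis 1, x, y: non-Gorenstein, defect 1
flat = [[[1,0,0],[0,1,0],[0,0,1]],
        [[0,1,0],[0,0,0],[0,0,0]],
        [[0,0,1],[0,0,0],[0,0,0]]]
print(gorenstein_defect(2, cube), gorenstein_defect(2, flat))  # 0 1
\end{verbatim}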
Our computations will concern the Gorenstein defect of $\mathbb{T}_{\mathcal{O}\to\mathbb{F}}(S_k(\Gamma_1(N),\chi))_\mathfrak{m}$. See Section~\ref{section-relation} for a comparison with the one not involving a character. It is important to remark that the Gorenstein defect of a local Artin algebra over a field does not change after passing to a field extension and taking any of the conjugate local factors. We illustrate the definition by an example. The algebra $k[x,y,z]/(x^2,y^2,z^2,xy,xz,yz)$ for a field~$k$ is Artinian and local with maximal ideal $\mathfrak{m}:=(x,y,z)$ and the annihilator of~$\mathfrak{m}$ is~$\mathfrak{m}$ itself, so the Gorenstein defect is~$3-1=2$. We note that this particular case does occur in nature; a localisation~$\mathbb{T}_{\mathbb{Z}\to\mathbb{F}_2}(S_2(\Gamma_0(431)))_\mathfrak{m}$ at one maximal ideal is isomorphic to this, with~$k=\mathbb{F}_2$ (see~\cite{emerton}, the discussion just before Lemma~6.6). This example can also be verified with the algorithm presented in this paper. We now state a translation of Theorem~\ref{gorenstein-theorem} in terms of Gorenstein defects, which is immediate from the propositions in Section~\ref{section-relation}. \begin{thm}\label{thgor} Assume the set-up of Theorem~\ref{gorenstein-theorem} and that one of 1.,\ 2.,\ or 3.\ is satisfied. We use notations as in the discussion of multiplicities above. If $k=2$, then $\mathbb{T}_\mathbb{Z}(S_2(\Gamma_1(N)))_\mathfrak{m}$ is a Gorenstein ring. If $k\ge3$, then $\mathbb{T}_\mathbb{Z}(S_2(\Gamma_1(Np)))_{\mathfrak{m}_2}$ is, too. Supposing in addition that $\mathfrak{m}$ is ordinary (i.e.\ $T_p \not\in \mathfrak{m}$), then also $\mathbb{T}_\mathbb{Z}(S_k(\Gamma_1(N)))_\mathfrak{m}$ is Gorenstein. If, moreover, $p \ge 5$ or $\rho_\mathfrak{m}$ is not induced from $\mathbb{Q}(\sqrt{-1})$ (if $p=2$) or $\mathbb{Q}(\sqrt{-3})$ (if $p=3$), then $\mathbb{T}_{\mathcal{O}\to\mathbb{F}}(S_k(N,\chi))_\mathfrak{m}$ is Gorenstein as well. \hspace* {.5cm} $\Box$ \end{thm} We now turn our attention to computing the Gorenstein defect and the multiplicity in the case when it is known not to be one. \begin{cor}\label{corfailure} Let~$\rho_\mathfrak{m}$ be a representation associated with a cuspidal eigenform~$f \in S_{p}(N,\chi)$ with~$p$ the residue characteristic of~$\mathfrak{m}$. Assume that $\rho_\mathfrak{m}$ is absolutely irreducible, unramified at~$p$ such that $\rho_\mathfrak{m}(\Frob_p)$ is a scalar matrix. Let $r$ be the multiplicity of~$\rho_\mathfrak{m}$ and $d$ the Gorenstein defect of any of $\mathbb{T}_{\mathcal{O}\to\mathbb{F}}(S_k(N,\chi))_\mathfrak{m}$, $\mathbb{T}_{\mathbb{Z}\to\mathbb{F}_p}(S_k(\Gamma_1(N)))_\mathfrak{m}$ or $\mathbb{T}_{\mathbb{Z}\to\mathbb{F}_p}(S_2(\Gamma_1(Np)))_{\mathfrak{m}_2}$. Then the relation $d = 2r-2$ holds. \end{cor} {\bf Proof. } The equality of the Gorenstein defects and the relation with the multiplicity are proved in Section~\ref{section-relation}, noting that $\mathfrak{m}$ is ordinary, as $a_p(f)^2 = \chi(p) \neq 0$ by \cite{gross}, p.~487. \hspace* {.5cm} $\Box$ \subsection{Previous results on the failure of multiplicity one or the Gorenstein property} \label{previous-results} Prior to the present work and the article~\cite{nongor}, there have been some investigations into when Hecke algebras fail to be Gorenstein. 
In~\cite{kilford-nongorenstein}, the first author showed, using {\sc Magma}~\cite{magma}, that the Hecke algebras~$\heckeNktwo{431}$, $\heckeNktwo{503}$ and~$\heckeNktwo{2089}$ are not Gorenstein by explicit computation of the localisation of the Hecke algebra at a suitable maximal ideal above~2, and in~\cite{ribet-stein}, it is shown that~$\heckeNktwo{2071}$ is not Gorenstein in a similar fashion. These examples were discovered by considering elliptic curves~$E/\mathbb{Q}$ such that in the ring of integers of~$\mathbb{Q}(E[2])$ the prime ideal~$(2)$ splits completely, and then doing computations with {\sc Magma}. There are also some results in the literature on the failure of multiplicity one within the torsion of certain Jacobians. In~\cite{agashe-ribet-stein}, the following theorem is proved: \begin{thm}[Agashe-Ribet-Stein~\cite{agashe-ribet-stein}, Proposition~5.1] Suppose that~$E$ is an optimal elliptic curve over~$\mathbb{Q}$ of conductor~$N$, with congruence number~$r_E$ and modular degree~$m_E$ and that~$p$ is a prime such that~$p | r_E$ but~$p \nmid m_E$. Let~$\mathfrak{m}$ be the annihilator in~$\heckeNktwo{N}$ of~$E[p]$. Then multiplicity one fails for~$\mathfrak{m}$. \end{thm} They give a table of examples; for instance, $\heckeNktwo{54}$ does not satisfy multiplicity one at some maximal ideal above~$3$. It is not clear whether this phenomenon occurs infinitely often. In~\cite{ribet-festschrift}, it is shown that the mod~$p$ multiplicity of a certain representation in the Jacobian of the Shimura curve derived from the rational quaternion algebra of discriminant~$11 \cdot 193$ is~2; this result inspired the calculations in~\cite{kilford-nongorenstein}. Let us finally mention that for $p=2$ precisely the Galois representations~$\rho$ with image equal to the dihedral group~$D_3$ come from an elliptic curve over~$\mathbb{Q}$. For, on the one hand, we only need observe that $D_3=\mathrm{GL}_2(\mathbb{F}_2)$. On the other hand, any $S_3$-extension~$K$ of the rationals can be obtained as the splitting field of an irreducible integral polynomial $f=X^3+aX+b$. The $2$-torsion of the elliptic curve $E: Y^2=f$ consists precisely of the three roots of~$f$ and the point at infinity. So, the field generated over $\mathbb{Q}$ by the $2$-torsion of~$E$ is~$K$. \subsection{New results} Using the modular symbols algorithm over finite fields with an improved stop criterion (see Section~\ref{modular-symbols}), we performed computations in {\sc Magma} concerning the Gorenstein defect of Hecke algebras of cuspidal modular forms of prime weights~$p$ at maximal ideals of residue characteristic~$p$ in the case of Theorem~\ref{nongor}. All of our 384 examples have Gorenstein defect equal to~$2$ and hence their multiplicity is~$2$. We formulate part of our computational findings as a theorem. \begin{thm} For every prime $p < 100$ there exists a prime $N \neq p$ and a Dirichlet character~$\chi$ such that the Hecke algebra $\mathbb{T}_{\mathbb{Z}[\chi]\to\mathbb{F}}(S_p(N,\chi))$ has Gorenstein defect~$2$ at some maximal ideal~$\mathfrak{m}$ of residue characteristic~$p$. The corresponding Galois representation~$\rho_\mathfrak{m}$ appears with multiplicity two on the $p$-torsion of the Jacobian $J_1(Np)_\mathbb{Q}({\overline{\QQ}})$ if $p$ is odd, respectively $J_1(N)_\mathbb{Q}({\overline{\QQ}})$ if~$p=2$. \end{thm} Our computational results are discussed in more detail in Section~\ref{computational-results}. \subsection{A question} \begin{question}\label{ourquestion} Let $p$ be a prime. 
Let $f$ be a normalised cuspidal modular eigenform of weight~$p$, prime level $N \neq p$ for some Dirichlet character $\chi$. Let $\rho_f : G_\mathbb{Q} \to \mathrm{GL}_2({\overline{\FF}_p})$ be the modular Galois representation attached to~$f$. We assume that $\rho_f$ is irreducible and unramified at~$p$ and that $\rho_f(\Frob_p)$ is a scalar matrix. Write $\mathbb{T}_{\mathbb{F}}$ for $\mathbb{T}_{\mathbb{Z}[\chi]\to\mathbb{F}}(S_p(N,\chi))$. Recall that this notation stands for the tensor product over $\mathbb{Z}[\chi]$ of a residue field $\mathbb{F}/\mathbb{F}_p$ of $\mathbb{Z}[\chi]$ by the $\mathbb{Z}[\chi]$-algebra generated inside the endomorphism algebra of~$S_p(N,\chi)$ by the Hecke operators and by the diamond operators. Let $\mathfrak{m}$ be the maximal ideal of $\mathbb{T}_{\mathbb{F}}$ corresponding to~$f$. Is the Gorenstein defect of the Hecke algebra~$\mathbb{T}_{\mathbb{F}}$ localised at ${\mathfrak{m}}$, denoted by $\mathbb{T}_{\mathfrak{m}}$, always equal to~$2$? Equivalently, is the multiplicity of the Galois representation attached to~$f$ always equal to~$2$? \end{question} This question was also raised both by Kevin Buzzard and James Parson in communications to the authors. \section{Relation between multiplicity and Gorenstein defect} \label{section-relation} In this section we collect results, some of which are well-known, on the multiplicity of Galois representations, the Gorenstein defect and relations between the two. Whereas the mod~$p$ modular symbols algorithm naturally computes mod~$p$ modular forms (see Section~\ref{modular-symbols}), this rather geometrical section uses (mostly in the references) the theory of Katz modular forms over finite fields (see e.g.~\cite{edixhoven-boston}). If $N \ge 5$ and $k\ge 2$, the Hecke algebra $\mathbb{T}_{\mathbb{Z} \to \mathbb{F}_p}(S_k(\Gamma_1(N)))$ is both the Hecke algebra of mod~$p$ cusp forms of weight~$k$ on~$\Gamma_1(N)$ and the Hecke algebra of the corresponding Katz cusp forms over~$\mathbb{F}_p$. However, in the presence of a Dirichlet character $\mathbb{T}_{\mathbb{Z}[\chi] \to \mathbb{F}}(S_k(N,\chi))$ only has an interpretation as the Hecke algebra of the corresponding mod~$p$ cusp forms and there may be differences with the respective Hecke algebra for Katz forms (see Carayol's Lemma, Prop.~1.10 of~~\cite{edixhoven-boston}). We start with the well-known result in weight~$2$ (see e.g.\ \cite{mazur}, Lemma 15.1) that multiplicity one implies that the corresponding local Hecke factor is a Gorenstein ring. \begin{prop} Let $\mathfrak{m}$ be a maximal ideal of~$\mathbb{T} := \heckeoneNk{N}{2}$ of residue characteristic~$p$ which may divide~$N$. Denote by $\widetilde{\mathfrak{m}}$ the image of $\mathfrak{m}$ in $\mathbb{T}_{\mathbb{F}_p} := \mathbb{T} \otimes_\mathbb{Z} \mathbb{F}_p = \mathbb{T}_{\mathbb{Z}\to\mathbb{F}_p}(S_2(\Gamma_1(N)))$. Suppose that the Galois representation~$\rho_\mathfrak{m}$ is irreducible and satisfies multiplicity one (see Section~\ref{section-introduction}). Then as $\mathbb{T}_{\mathbb{F}_p,\widetilde{\mathfrak{m}}}$-modules one has $$ J_1(N)_\mathbb{Q}({\overline{\QQ}})[p]_{\widetilde{\mathfrak{m}}} \cong \mathbb{T}_{\mathbb{F}_p,\widetilde{\mathfrak{m}}} \oplus \mathbb{T}_{\mathbb{F}_p,\widetilde{\mathfrak{m}}}$$ and the localisations $\mathbb{T}_\mathfrak{m}$ and $\mathbb{T}_{\mathbb{F}_p,\widetilde{\mathfrak{m}}}$ are Gorenstein rings. Similar results hold if one replaces $\Gamma_1(N)$ and $J_1(N)$ by $\Gamma_0(N)$ and~$J_0(N)$. \end{prop} {\bf Proof. 
} For the proof we have to pass to $\mathbb{T}_{\mathbb{Z}_p} = \mathbb{T}_\mathbb{Z} \otimes_\mathbb{Z} \mathbb{Z}_p$. We also denote by~$\mathfrak{m}$ the maximal ideal in~$\mathbb{T}_{\mathbb{Z}_p}$ that corresponds to~$\mathfrak{m}$. Let $V$ be the $\mathfrak{m}$-part of the $p$-Tate module of $J_1(N)_\mathbb{Q}$. Multiplicity one implies that $V/\mathfrak{m} V$ is a $2$-dimensional $\mathbb{T}/\mathfrak{m} = \mathbb{T}_{\mathbb{Z}_p}/\mathfrak{m} = \mathbb{T}_{\mathbb{F}_p,\widetilde{\mathfrak{m}}}/\widetilde{\mathfrak{m}}$-vector space, since $$ V/\mathfrak{m} V \cong (V/pV)/\widetilde{\mathfrak{m}} \cong (J_1(N)_\mathbb{Q}({\overline{\QQ}})[p]) / \widetilde{\mathfrak{m}} \cong (J_1(N)_\mathbb{Q}({\overline{\QQ}})[p])^\vee / \widetilde{\mathfrak{m}} \cong (J_1(N)_\mathbb{Q}({\overline{\QQ}})[\mathfrak{m}])^\vee,$$ where the self-duality comes from the modified Weil pairing which respects the Hecke action (see e.g.\ \cite{gross}, p.~485). Nakayama's Lemma hence implies that $V$ is a $\mathbb{T}_{\mathbb{Z}_p,\mathfrak{m}}$-module of rank~$2$. As one knows that $V \otimes_{\mathbb{Z}_p} \mathbb{Q}_p$ is a $\mathbb{T}_\mathfrak{m} \otimes \mathbb{Q}_p$-module of rank~$2$, it follows that $V$ is a free $\mathbb{T}_{\mathbb{Z}_p,\mathfrak{m}}$-module of rank~$2$, whence $J_1(N)_\mathbb{Q}({\overline{\QQ}})[p]_{\widetilde{\mathfrak{m}}}$ is a free $\mathbb{T}_{\mathbb{F}_p,\widetilde{\mathfrak{m}}}$-module of rank~$2$. Taking the $\widetilde{\mathfrak{m}}$-kernel gives $J_1(N)_\mathbb{Q}({\overline{\QQ}})[\mathfrak{m}]=(\mathbb{T}_{\mathbb{F}_p,\widetilde{\mathfrak{m}}}[\widetilde{\mathfrak{m}}])^2$, whence the Gorenstein defect is zero. In the $\Gamma_0$-situation, the same proof holds. \hspace* {.5cm} $\Box$ In the so-called ordinary case, we have the following precise relationship between the multiplicity and the Gorenstein defect, which was suggested to us by Kevin Buzzard. The proof can be found in~\cite{nongor}. \begin{prop}\label{gormult} Suppose $p \nmid N$ and let $M=N$ or $M=Np$. Let $\mathfrak{m}$ be a maximal ideal of $\heckeoneNk M2$ of residue characteristic~$p$ and assume that $\mathfrak{m}$ is ordinary, i.e.\ that the $p$-th Hecke operator~$T_p$ is not in~$\mathfrak{m}$. Assume also that $\rho_\mathfrak{m}$ is irreducible. Denote by $\widetilde{\mathfrak{m}}$ the image of~$\mathfrak{m}$ in $\mathbb{T}_{\mathbb{F}_p} := \mathbb{T}_{\mathbb{Z}\to\mathbb{F}_p}(S_2(\Gamma_1(M)))$. Then the following statements hold: \begin{enumerate}[(a)] \item There is the exact sequence $$ 0 \to \mathbb{T}_{\mathbb{F}_p,\widetilde{\mathfrak{m}}} \to J_1(M)({\overline{\QQ}})[p]_{\widetilde{\mathfrak{m}}} \to \mathbb{T}_{\mathbb{F}_p,\widetilde{\mathfrak{m}}}^\vee \to 0$$ of $\mathbb{T}_{\mathbb{F}_p,\widetilde{\mathfrak{m}}}$-modules, where the dual is the $\mathbb{F}_p$-linear dual. \item If $d$ is the Gorenstein defect of $\mathbb{T}_{\mathbb{F}_p,\widetilde{\mathfrak{m}}}$ and $r$ is the multiplicity of $\rho_\mathfrak{m}$, then the relation $$ d = 2r-2$$ holds. \end{enumerate} \end{prop} We now establish a relation between mod~$p$ Hecke algebras of weights $3 \le k \le p+1$ for levels~$N$ not divisible by~$p$ with Hecke algebras of weight~$2$ and level~$Np$. It is needed in order to compare the Hecke algebras in higher weight to those acting on the $p$-torsion of Jacobians and thus to make a link to the multiplicity of the attached Galois representations. \begin{prop}\label{totwo} Let $N \ge 5$, $p \nmid N$ and $3 \le k \le p+1$. 
Let $\mathfrak{m}$ be a maximal ideal of the mod~$p$ Hecke algebra $\mathbb{T}_{\mathbb{Z}\to\mathbb{F}_p}(S_k(\Gamma_1(N))$. Then there exists a maximal ideal $\mathfrak{m}_2$ of $\mathbb{T}_{\mathbb{Z}\to\mathbb{F}_p}(S_2(\Gamma_1(Np))$ and a natural surjection $$\mathbb{T}_{\mathbb{Z}\to\mathbb{F}_p}(S_2(\Gamma_1(Np))_{\mathfrak{m}_2} \twoheadrightarrow \mathbb{T}_{\mathbb{Z}\to\mathbb{F}_p}(S_k(\Gamma_1(N))_\mathfrak{m}.$$ If $\mathfrak{m}$ is ordinary ($T_p \not\in \mathfrak{m}$), this surjection is an isomorphism. \end{prop} {\bf Proof. } From Sections~5 and~6 of \cite{Faithful}, whose notation we adopt for this proof, one obtains without difficulty the commutative diagram of Hecke algebras: $$ \xymatrix@=1.0cm{ \mathbb{T}_{\mathbb{Z}\to\mathbb{F}_p}(S_2(\Gamma_1(Np))_{\mathfrak{m}_2} \ar@{->>}[r] \ar@{=}[d] & \mathbb{T}_{\mathbb{F}_p}(J_1(Np)_\mathbb{Q}({\overline{\QQ}})[p])_{\mathfrak{m}_2} \ar@{->>}[r] \ar@{->>}[d] & \mathbb{T}_{\mathbb{F}_p} (H^1(\Gamma_1(N),V_{k-2}(\mathbb{F}_p)))_\mathfrak{m} \\ \mathbb{T}_{\mathbb{Z}_p \to \mathbb{F}_p} (L)_{\mathfrak{m}_2} \ar@{->>}[r] & \mathbb{T}_{\mathbb{F}_p} (\overline{L})_{\mathfrak{m}_2} \ar@{->>}[r] & \mathbb{T}_{\mathbb{Z}\to\mathbb{F}_p}(S_k(\Gamma_1(N))_\mathfrak{m}. \ar@{->>}[u]}$$ The claimed surjection can be read off. In the ordinary situation, Proposition~\ref{gormult} shows that the upper left horizontal arrow is in fact an isomorphism. That also the upper right horizontal arrow is an isomorphism is explained in~\cite{Faithful}. The result follows immediately. \hspace* {.5cm} $\Box$ In the next proposition we compare Hecke algebras for spaces of modular forms on $\Gamma_1(N)$ to those of the same level and weight, but with a Dirichlet character. \begin{prop} Let $N \ge 5$, $k \ge 2$ and let $\chi: (\mathbb{Z}/N\mathbb{Z})^\times \to \mathbb{C}^\times$ be a Dirichlet character. Let $f \in S_k(N,\chi) \subset S_k(\Gamma_1(N))$ be a normalised Hecke eigenform. Let further $\mathfrak{m}_{\overline{\chi}}$ be the maximal ideal in $\mathbb{T}_\mathbb{F}^{\overline{\chi}} := \mathbb{T}_{\mathbb{Z}[\chi]\to\mathbb{F}}(S_k(N,\chi))$ and $\mathfrak{m}$ the one in $\mathbb{T}_{\mathbb{Z}\to\mathbb{F}_p}(S_k(\Gamma_1(N)))$ of residue characteristic~$p$ for $p \nmid N$ belonging to~$f$. If $k=2$, suppose additionally that $\rho_\mathfrak{m}$ is irreducible. If $p=2$, suppose that $\rho_\mathfrak{m}$ is not induced from $\mathbb{Q}(\sqrt{-1})$, and if $p=3$, suppose that $\rho_\mathfrak{m}$ is not induced from $\mathbb{Q}(\sqrt{-3})$. Then the Gorenstein defects of $\mathbb{T}_{\mathbb{Z}[\chi]\to\mathbb{F}}(S_k(N,\chi))_{\mathfrak{m}_{\overline{\chi}}}$ and $\mathbb{T}_{\mathbb{Z}\to\mathbb{F}_p}(S_k(\Gamma_1(N)))_\mathfrak{m}$ are equal. \end{prop} {\bf Proof. } Write $\Delta := (\mathbb{Z}/N\mathbb{Z})^\times$ and let $\Delta_p$ be its $p$-Sylow subgroup. Let ${\overline{\chi}}: \Delta \to \mathbb{F}^\times$ be the reduction of~$\chi$ obtained by composing~$\chi$ with~$\mathbb{Z}[\chi]\to\mathbb{F}$. As the Gorenstein defect is invariant under base extension, it is no loss to work with $\mathbb{T}_\mathbb{F} := \mathbb{T}_{\mathbb{Z}\to\mathbb{F}}(S_k(\Gamma_1(N)))$. We still write~$\mathfrak{m}$ for the maximal ideal in $\mathbb{T}_\mathbb{F}$ belonging to~$f$. Note that $\langle \delta \rangle - {\overline{\chi}}(\delta) \in \mathfrak{m}$ for all~$\delta \in \Delta$. 
We let $\Delta$ act on $\mathbb{T}_\mathbb{F}$ via the diamond operators and we let $\mathbb{F}^{\overline{\chi}}$ be a copy of~$\mathbb{F}$ with $\Delta$-action through the inverse of~${\overline{\chi}}$. We have $$ (\mathbb{T}_{\mathbb{F},\mathfrak{m}} \otimes_\mathbb{F} \mathbb{F}^{\overline{\chi}})_\Delta = (\mathbb{T}_{\mathbb{F},\mathfrak{m}} \otimes_\mathbb{F} \mathbb{F}^{\overline{\chi}})/(1-\delta | \delta \in \Delta) \cong \mathbb{T}_{\mathbb{F},\mathfrak{m}_{\overline{\chi}}}^{\overline{\chi}},$$ which one obtains by considering the duals, identifying Katz cusp forms with mod~$p$ ones on~$\Gamma_1(N)$ and applying Carayol's Lemma (\cite{edixhoven-boston}, Prop.~1.10). For the case $k=2$, we should point the reader to the correction at the end of the introduction to~\cite{edixhoven}. However, the statement still holds after localisation at maximal ideals corresponding to irreducible representations. Moreover, the equality $ (\mathbb{T}_{\mathbb{F},\mathfrak{m}} \otimes_\mathbb{F} \mathbb{F}^{\overline{\chi}})^\Delta = \mathbb{T}_{\mathbb{F},\mathfrak{m}}[\langle \delta \rangle - {\overline{\chi}}(\delta) | \delta \in \Delta ]$ holds by definition. Now Lemma~7.3 of~\cite{Faithful} tells us that the localisation at~$\mathfrak{m}$ of the $\mathbb{F}$-vector space of Katz cusp forms of weight~$k$ on $\Gamma_1(N)$ over~$\mathbb{F}$ is a free $\mathbb{F}[\Delta_p]$-module. Note that the standing hypothesis $k \le p+1$ of Section~7 of~\cite{Faithful} is not used in the proof of that lemma and see also \cite{Faithful},~Remark~7.5. From an elementary calculation one now obtains that $N_\Delta = \sum_{\delta \in \Delta} \delta$ induces an isomorphism $$ (\mathbb{T}_{\mathbb{F}_\mathfrak{m}} \otimes_\mathbb{F} \mathbb{F}^{\overline{\chi}})_\Delta \xrightarrow{N_\Delta} (\mathbb{T}_{\mathbb{F}_\mathfrak{m}} \otimes_\mathbb{F} \mathbb{F}^{\overline{\chi}})^\Delta.$$ We now take the $\mathfrak{m}_{\overline{\chi}}$-kernel on both sides and obtain $$ \mathbb{T}_{\mathbb{F},\mathfrak{m}_{\overline{\chi}}}^{\overline{\chi}} [\mathfrak{m}_{\overline{\chi}}] \cong (\mathbb{T}_{\mathbb{F},\mathfrak{m}} \otimes_\mathbb{F} \mathbb{F}^{\overline{\chi}})_\Delta [\mathfrak{m}_{\overline{\chi}}] \cong (\mathbb{T}_{\mathbb{F},\mathfrak{m}} \otimes_\mathbb{F} \mathbb{F}^{\overline{\chi}})_\Delta [\mathfrak{m}] \cong (\mathbb{T}_{\mathbb{F},\mathfrak{m}} \otimes_\mathbb{F} \mathbb{F}^{\overline{\chi}})^\Delta [\mathfrak{m}] = \mathbb{T}_{\mathbb{F},\mathfrak{m}}[\mathfrak{m}].$$ This proves that the two Gorenstein defects are indeed equal. \hspace* {.5cm} $\Box$ The Gorenstein defect that we calculate on the computer is the number~$d$ of the following corollary, which relates it to the multiplicity of a Galois representation. \begin{cor} Let $p$ be a prime, $N \ge 5$ an integer such that $p \nmid N$, $k$ an integer satisfying $2 \le k \le p$ and $\chi: (\mathbb{Z}/N\mathbb{Z})^\times \to \mathbb{C}^\times$ a character. Let $f \in S_k(N,\chi)$ be a normalised Hecke eigenform. Let further $\mathfrak{m}$ denote the maximal ideal in $\mathbb{T}_{\mathbb{Z}[\chi]\to\mathbb{F}}(S_k(N,\chi))$ belonging to~$f$. Suppose that $\mathfrak{m}$ is ordinary and that $\rho_\mathfrak{m}$ is irreducible and not induced from $\mathbb{Q}(\sqrt{-1})$ (if $p=2$) and not induced from $\mathbb{Q}(\sqrt{-3})$ (if $p=3$). We define~$d$ to be the Gorenstein defect of $\mathbb{T}_{\mathbb{Z}[\chi]\to\mathbb{F}}(S_k(N,\chi))_\mathfrak{m}$ and~$r$ to be the multiplicity of~$\rho_\mathfrak{m}$. Then the equality $d = 2r-2$ holds. 
\hspace* {.5cm} $\Box$ \end{cor} We include the following proposition because it establishes equality of the two different notions of multiplicities of Galois representations in the case of the trivial character. \begin{prop}\label{nulleins} Let $N \ge 1$ and $p \nmid N$ and $f \in S_2(\Gamma_0(N)) \subseteq S_2(\Gamma_1(N))$ be a normalised Hecke eigenform belonging to maximal ideals~$\mathfrak{m}_0 \subseteq \mathbb{T}_{\mathbb{Z}\to\mathbb{F}_p}(S_2(\Gamma_0(N)))$ and $\mathfrak{m}_1 \subseteq \mathbb{T}_{\mathbb{Z}\to\mathbb{F}_p}(S_2(\Gamma_1(N)))$ of residue characteristic~$p$. Suppose that $\rho_{\mathfrak{m}_0} \cong \rho_{\mathfrak{m}_1}$ is irreducible. Then the multiplicity of $\rho_{\mathfrak{m}_1}$ on $J_1(N)_\mathbb{Q}({\overline{\QQ}})[p]$ is equal to the multiplicity of~$\rho_{\mathfrak{m}_0}$ on $J_0(N)_\mathbb{Q}({\overline{\QQ}})[p]$. Thus, if $p>2$, this multiplicity is equal to one by Theorem~\ref{gorenstein-theorem}. \end{prop} {\bf Proof. } Let $\Delta := (\mathbb{Z}/N\mathbb{Z})^\times$. We first remark that one has the isomorphism $$ J_0(N)_\mathbb{Q}({\overline{\QQ}})[p]_{\mathfrak{m}_0} \cong \big((J_1(N)_\mathbb{Q}({\overline{\QQ}})[p])^\Delta\big)_{\mathfrak{m}_0},$$ which one can for example obtain by comparing with the parabolic cohomology with $\mathbb{F}_p$-coefficients of the modular curves $Y_0(N)$ and $Y_1(N)$. Taking the $\mathfrak{m}_0$-kernel yields $$ J_0(N)_\mathbb{Q}({\overline{\QQ}})[\mathfrak{m}_0] \cong J_1(N)_\mathbb{Q}({\overline{\QQ}})[\mathfrak{m}_1],$$ since $\mathfrak{m}_1$ contains $\langle \delta \rangle - 1 $ for all~$\delta \in \Delta$. \hspace* {.5cm} $\Box$ \section{Modular Symbols and Hecke Algebras} \label{modular-symbols} The aim of this section is to present the algorithm that we use for the computations of local factors of Hecke algebras of mod~$p$ modular forms. It is based on mod~$p$ modular symbols which have been implemented in {\sc Magma} \cite{magma} by William Stein. The bulk of this section deals with proving the main advance, namely a stop criterion (Corollary~\ref{stopcor}), which in practice greatly speeds up the computations in comparison with ``standard'' implementations, as it allows us to work with many fewer Hecke operators than indicated by the theoretical Sturm bound (Proposition~\ref{propsturm}). We shall list results proving that the stop criterion is attained in many cases. However, the stop criterion does not depend on them, in the sense that it being attained is equivalent to a proof that the algebra it outputs is equal to a direct factor of a Hecke algebra of mod~$p$ modular forms. Whereas for Section~\ref{section-relation} the notion of Katz modular forms seems the right one, the present section works entirely with mod~$p$ modular forms, the definition of which is also recalled. This is very natural, since all results in this section are based on a comparison with the characteristic zero theory. \subsection{Mod~$p$ modular forms and modular symbols} \subsubsection*{Mod~$p$ modular forms} Let us for the time being fix integers $N \ge 1$ and $k \ge 2$, as well as a character $\chi: (\mathbb{Z}/N\mathbb{Z})^\times \to \mathbb{C}^\times$ such that $\chi(-1) = (-1)^k$. Let $M_k(N,\chi)$ be the space of holomorphic modular forms for $\Gamma_1(N)$, Dirichlet character $\chi$, and weight~$k$. It decomposes as a direct sum (orthogonal direct sum with respect to the Petersson inner product) of its cuspidal subspace $S_k(N,\chi)$ and its Eisenstein subspace ${\rm Eis}_k(N,\chi)$. 
As before, we let $\mathcal{O} = \mathbb{Z}[\chi]$. Moreover, we let $\mathfrak{P}$ be a maximal ideal of~$\mathcal{O}$ above~$p$ with residue field~$\mathbb{F}$ and $\widehat{\mathcal{O}}$ be the completion of~$\mathcal{O}$ at~$\mathfrak{P}$. Furthermore, let $K = \mathbb{Q}_p(\chi)$ be the field of fractions of~$\widehat{\mathcal{O}}$ and $\bar{\chi}$ be $\chi$ followed by the natural projection $\mathcal{O} \twoheadrightarrow \mathbb{F}$. Denote by $M_k(N,\chi\,;\,\mathcal{O})$ the sub-$\mathcal{O}$-module of $M_k(N,\chi)$ generated by those modular forms whose (standard) $q$-expansion has coefficients in~$\mathcal{O}$. It follows from the $q$-expansion principle that $$M_k(N,\chi\,;\,\mathcal{O}) \cong {\rm Hom}_{\mathcal{O}} \big(\mathbb{T}_{\mathcal{O}}(M_k(N,\chi)),\mathcal{O}\big)$$ and hence that $M_k(N,\chi\,;\,\mathcal{O}) \otimes_{\mathcal{O}} \mathbb{C} \cong M_k(N,\chi)$. We put $$ M_k(N,\bar{\chi}\,;\,\mathbb{F}) := M_k(N,\chi\,;\,\mathcal{O}) \otimes_{\mathcal{O}} \mathbb{F} \cong {\rm Hom}_{\mathbb{F}} \big(\mathbb{T}_{\mathcal{O}}(M_k(N,\chi)),\mathbb{F}\big) $$ and call the elements of this space {\em mod~$p$ modular forms}. The Hecke algebra $\mathbb{T}_{\mathcal{O}}(M_k(N,\chi))$ acts naturally and it follows that $\mathbb{T}_{\mathcal{O}\to\mathbb{F}}(M_k(N,\chi)) \cong \mathbb{T}_{\mathbb{F}}(M_k(N,\chi\,;\,\mathbb{F}))$. Similar statements hold for the cuspidal and the Eisenstein subspaces and we use similar notations. We call a maximal ideal $\mathfrak{m}$ of $\mathbb{T}_{\mathcal{O}\to\mathbb{F}} (M_k(N,\chi\,;\,\mathcal{O}))$ (respectively, the corresponding maximal ideal of $\mathbb{T}_{\mathcal{O}\to\widehat{\mathcal{O}}} (M_k(N,\chi\,;\,\mathcal{O}))$) {\em non-Eisenstein} if and only if $$ S_k(N,\bar{\chi}\,;\,\mathbb{F})_\mathfrak{m} \cong M_k(N,\bar{\chi}\,;\,\mathbb{F})_\mathfrak{m}.$$ Otherwise, we call $\mathfrak{m}$ {\em Eisenstein}. We now include a short discussion of minimal and maximal primes, in view of Proposition~\ref{commalg}. Write $\mathbb{T}_{\widehat{\mathcal{O}}}$ for $\mathbb{T}_{\mathcal{O} \to \widehat{\mathcal{O}}} (S_k(N,\chi))$. Let $\mathfrak{m}$ be a maximal ideal of $\mathbb{T}_{\widehat{\mathcal{O}}}$. It corresponds to a $\Gal({\overline{\FF}_p}/\mathbb{F})$-conjugacy class of normalised eigenforms in $S_k(N,\bar{\chi}\,;\,\mathbb{F})$. That means that, for each $n\in \mathbb{N}$, the minimal polynomial of $T_n$ acting on $S_k(N,\bar{\chi}\,;\,\mathbb{F})_\mathfrak{m}$ is equal to a power of the minimal polynomial of the coefficient~$a_n$ of each member of the conjugacy class. Moreover, a minimal prime $\mathfrak{p}$ of $\mathbb{T}_{\widehat{\mathcal{O}}}$ corresponds precisely as above to a $\Gal({\overline{\QQ}_p}/K)$-conjugacy class of normalised eigenforms in $S_k(N,\chi\,;\,\mathcal{O}) \otimes_\mathcal{O} K$. Suppose that $\mathfrak{m}$ contains minimal primes $\mathfrak{p}_i$ for $i=1,\dots,r$. Then the normalised eigenforms corresponding to the~$\mathfrak{p}_i$ are congruent to one another modulo a prime above~$p$. Conversely, any congruence arises in this way. Thus, a maximal ideal $\mathfrak{m}$ of $\mathbb{T}_{\widehat{\mathcal{O}}}$ is Eisenstein if and only if it contains a minimal prime corresponding to a conjugacy class of Eisenstein series. As it is the reduction of a reducible representation, the mod~$p$ Galois representation corresponding to an Eisenstein prime is reducible. It should be possible to show the converse. \subsubsection*{Modular symbols} We now recall the modular symbols formalism and prove two useful results on base change and torsion.
The main references for the definitions are \cite{SteinBook} and~\cite{HeckeMS}. Let $R$ be a ring, $\Gamma \le \mathrm{SL}_2(\mathbb{Z})$ a subgroup and $V$ a left $R[\Gamma]$-module. Recall that $\mathbb{P}^1(\mathbb{Q}) = \mathbb{Q} \cup \{\infty\}$ is the set of cusps of $\mathrm{SL}_2(\mathbb{Z})$, which carries a natural $\mathrm{SL}_2(\mathbb{Z})$-action via fractional linear transformations. We define the $R$-modules $$ \mathcal{M}_R := R[\{\alpha,\beta\}| \alpha,\beta \in \mathbb{P}^1(\mathbb{Q})]/ \langle \{\alpha,\alpha\}, \{\alpha,\beta\} + \{\beta,\gamma\} + \{\gamma,\alpha\} | \alpha,\beta,\gamma \in \mathbb{P}^1(\mathbb{Q})\rangle$$ and $\mathcal{B}_R := R[\mathbb{P}^1(\mathbb{Q})]$. They are connected via the {\em boundary map} $\delta: \mathcal{M}_R \to \mathcal{B}_R$ which is given by $\{\alpha,\beta\} \mapsto \beta - \alpha.$ Both are equipped with the natural left $\Gamma$-actions. Also let $\mathcal{M}_R(V) := \mathcal{M}_R \otimes_R V$ and $\mathcal{B}_R(V) := \mathcal{B}_R \otimes_R V$ with the left diagonal $\Gamma$-action. We call the $\Gamma$-coinvariants $$ \mathcal{M}_R (\Gamma,V) := \mathcal{M}_R(V)_\Gamma = \mathcal{M}_R(V)/ \langle (x - g x) | g \in \Gamma, x \in \mathcal{M}_R(V) \rangle$$ {\em the space of $(\Gamma,V)$-modular symbols.} Furthermore, {\em the space of $(\Gamma,V)$-boundary symbols} is defined as the $\Gamma$-coinvariants $$ \mathcal{B}_R(\Gamma,V) := \mathcal{B}_R(V)_\Gamma = \mathcal{B}_R(V)/ \langle (x - g x) | g \in \Gamma, x \in \mathcal{B}_R(V) \rangle.$$ The boundary map $\delta$ induces the {\em boundary map} $\mathcal{M}_R(\Gamma,V) \to \mathcal{B}_R(\Gamma,V)$. Its kernel is denoted by $\mathcal{CM}_R(\Gamma,V)$ and is called {\em the space of cuspidal $(\Gamma,V)$-modular symbols.} Let now $N \ge 1$ and $k \ge 2$ be integers and $\chi: (\mathbb{Z}/N\mathbb{Z})^\times \to R^\times$ be a character, i.e.\ a group homomorphism, such that $\chi(-1) = (-1)^k$ in~$R$. Write $V_{k-2}(R)$ for the module of homogeneous polynomials of degree $k-2$ in two variables over~$R$, equipped with the natural $\Gamma_0(N)$-action. Denote by $V_{k-2}^\chi (R)$ the tensor product $V_{k-2}(R) \otimes_R R^\chi$ for the diagonal $\Gamma_0(N)$-action which on $R^\chi$ comes from the isomorphism $\Gamma_0(N) / \Gamma_1(N) \cong (\mathbb{Z}/N\mathbb{Z})^\times$ given by sending $\mat abcd$ to~$d$ followed by~$\chi^{-1}$. We use the notation $\mathcal{M}_k(N,\chi\,;\,R)$ for $\mathcal{M}_R(\Gamma_0(N),V_{k-2}^\chi(R))$, and similarly for the boundary and the cuspidal spaces. The natural action of the matrix $\eta = \mat {-1}001$ gives an involution on all of these spaces. We will denote by the superscript ${}^+$ the subspace invariant under this involution, and by the superscript ${}^-$ the anti-invariant one. On all modules discussed so far one has Hecke operators $T_n$ for all $n \in \mathbb{N}$ and diamond operators. For a definition see~\cite{SteinBook}. \begin{lem}\label{basechange} Let $R$, $\Gamma$ and $V$ be as above and let $R \to S$ be a ring homomorphism. Then $$\mathcal{M}_R(\Gamma,V) \otimes_R S \cong \mathcal{M}_S(\Gamma,V \otimes_R S).$$ \end{lem} {\bf Proof. } This follows immediately from the fact that tensoring and taking coinvariants are both right exact. \hspace* {.5cm} $\Box$ \begin{prop}\label{torsion} Let $R$ be a local integral domain of characteristic zero with principal maximal ideal $\mathfrak{m} = (\pi)$ and residue field $\mathbb{F}$ of characteristic~$p$.
Also let $N \ge 1$, $k \ge 2$ be integers and $\chi: (\mathbb{Z}/N\mathbb{Z})^\times \to R^\times$ a character such that $\chi(-1) = (-1)^k$. Suppose (i) that $p \ge 5$ or (ii) that $p = 2$ and $N$ is divisible by a prime which is $3$ modulo~$4$ or by~$4$ or (iii) that $p = 3$ and $N$ is divisible by a prime which is $2$ modulo~$3$ or by~$9$. Then the following statements hold: \begin{enumerate}[(a)] \item If $k \ge 3$, then $\mathcal{M}_k(N,\chi\,;\,R)[\pi] = \big(V_{k-2}^\chi(\mathbb{F})\big)^{\Gamma_0(N)}$. \item If $k=2$ or if $3 \le k \le p+2$ and $p \nmid N$, then $\mathcal{M}_k(N,\chi\,;\,R)[\pi] = 0$. \end{enumerate} \end{prop} {\bf Proof. } The conditions assure that the group $\Gamma_0(N)$ does not have any stabiliser of order $2p$ for its action on the upper half plane. Hence, by \cite{HeckeMS}, Theorem~6.1, the modular symbols space $\mathcal{M}_k(N,\chi\,;\,R)$ is isomorphic to $H^1(\Gamma_0(N),V_{k-2}^\chi(R))$. The arguments are now precisely those of the beginning of the proof of \cite{Faithful}, Proposition~2.6. \hspace* {.5cm} $\Box$ \subsubsection*{Hecke algebras of modular symbols and the Eichler-Shimura isomorphism} From Lemma~\ref{basechange} one deduces a natural surjection \begin{equation}\label{eqms} \mathbb{T}_{\mathcal{O} \to \mathbb{F}} (\mathcal{M}_k(N,\chi\,;\,\mathcal{O})) \twoheadrightarrow \mathbb{T}_{\mathbb{F}}(\mathcal{M}_k(N,\bar{\chi}\,;\,\mathbb{F})). \end{equation} In the same way, one also obtains \begin{equation}\label{eqmseins} \mathbb{T}_{\mathcal{O}} (\mathcal{M}_k(N,\chi\,;\,\mathcal{O})) \twoheadrightarrow \mathbb{T}_{\mathcal{O}} (\mathcal{M}_k(N,\chi\,;\,\mathcal{O})/\text{torsion}) \cong \mathbb{T}_{\mathcal{O}} (\mathcal{M}_k(N,\chi\,;\,\mathbb{C})), \end{equation} where one uses for the isomorphism that the Hecke operators are already defined over~$\mathcal{O}$. A similar statement holds for the cuspidal subspace. We call a maximal prime $\mathfrak{m}$ of $\mathbb{T}_{\mathcal{O}\to\widehat{\mathcal{O}}} (\mathcal{M}_k(N,\chi\,;\,\mathcal{O}))$ (respectively the corresponding prime of $\mathbb{T}_{\mathcal{O}\to\mathbb{F}} (\mathcal{M}_k(N,\chi\,;\,\mathcal{O}))$) {\em non-torsion} if $$ \mathcal{M}_k(N,\chi\,;\,\widehat{\mathcal{O}})_\mathfrak{m} \cong (\mathcal{M}_k(N,\chi\,;\,\widehat{\mathcal{O}})/\text{torsion})_\mathfrak{m}.$$ This is equivalent to the height of $\mathfrak{m}$ being~$1$. Proposition~\ref{torsion} tells us some cases in which all primes are non-torsion. \begin{thm}[Eichler-Shimura]\label{thmes} There are isomorphisms respecting the Hecke operators \begin{enumerate}[(a)] \item $M_k(N,\chi) \oplus S_k(N,\chi)^\vee \cong \mathcal{M}_k(N,\chi\,;\,\mathbb{C}),$ \item $S_k(N,\chi) \oplus S_k(N,\chi)^\vee \cong \mathcal{CM}_k(N,\chi\,;\,\mathbb{C}),$ \item $S_k(N,\chi) \cong \mathcal{CM}_k(N,\chi\,;\,\mathbb{C})^+.$ \end{enumerate} \end{thm} {\bf Proof. } Parts~(a) and (b) are \cite{DiamondIm}, Theorem 12.2.2, together with the comparison of \cite{HeckeMS}, Theorem~6.1. We use that the space of anti-holomorphic cusp forms is dual to the space of holomorphic cusp forms. Part~(c) is a direct consequence of~(b). \hspace* {.5cm} $\Box$ \begin{cor}\label{cores} There are isomorphisms $$\mathbb{T}_{\mathcal{O}}(S_k(N,\chi)) \cong \mathbb{T}_{\mathcal{O}} (\mathcal{CM}_k(N,\chi\,;\,\mathbb{C})) \cong \mathbb{T}_{\mathcal{O}} (\mathcal{CM}_k(N,\chi\,;\,\mathbb{C})^+),$$ given by sending $T_n$ to $T_n$ for all positive~$n$. 
\hspace* {.5cm} $\Box$ \end{cor} \subsection{The stop criterion} Although it is impossible to determine a priori the dimension of the local factor of the Hecke algebra associated with a given modular form mod~$p$, Corollary~\ref{stopcor} implies that the computation of Hecke operators can be stopped when the algebra generated has reached a certain dimension that is computed along the way. This criterion has turned out to be extremely useful and has made possible some of our computations which would not have been feasible using the Hecke bound naively. See Section~\ref{computational-results} for a short discussion of this issue. \subsubsection*{Some commutative algebra} We collect some useful statements from commutative algebra, which will be applied to Hecke algebras in the sequel. \begin{prop}\label{commalg} Let $R$ be an integral domain of characteristic zero which is a finitely generated $\mathbb{Z}$-module. Write $\widehat{R}$ for the completion of $R$ at a maximal ideal of~$R$ and denote by $\mathbb{F}$ the residue field and by $K$ the fraction field of~$\widehat{R}$. Let furthermore $A$ be a commutative $R$-algebra which is finitely generated as an $R$-module. For any ring homomorphism $R\to S$ write $A_S$ for $A \otimes_R S$. Then the following statements hold. \begin{enumerate}[(a)] \item The Krull dimension of $A_{\widehat{R}}$ is less than or equal to~$1$. The maximal ideals of $A_{\widehat{R}}$ correspond bijectively under taking pre-images to the maximal ideals of $A_\mathbb{F}$. Primes $\mathfrak{p}$ of height $0$ which are contained in a prime of height $1$ of $A_{\widehat{R}}$ are in bijection with primes of $A_K$ under extension (i.e.\ $\mathfrak{p} A_K$), for which the notation $\mathfrak{p}^e$ will be used. Under these correspondences, one has $A_{\mathbb{F},\mathfrak{m}} \cong A_{\widehat{R},\mathfrak{m}} \otimes_{\widehat{R}} \mathbb{F}$ and $A_{K,\mathfrak{p}^e} \cong A_{\widehat{R},\mathfrak{p}}$. \item The algebra $A_{\widehat{R}}$ decomposes as $$ A_{\widehat{R}} \cong \prod_\mathfrak{m} A_{\widehat{R},\mathfrak{m}},$$ where the product runs over the maximal ideals $\mathfrak{m}$ of $A_{\widehat{R}}$. \item The algebra $A_\mathbb{F}$ decomposes as $$ A_\mathbb{F} \cong \prod_\mathfrak{m} A_{\mathbb{F},\mathfrak{m}},$$ where the product runs over the maximal ideals $\mathfrak{m}$ of $A_\mathbb{F}$. \item The algebra $A_K$ decomposes as $$ A_K \cong \prod_\mathfrak{p} A_{K,\mathfrak{p}^e} \cong \prod_\mathfrak{p} A_{\widehat{R},\mathfrak{p}},$$ where the products run over the minimal prime ideals $\mathfrak{p}$ of $A_{\widehat{R}}$ which are contained in a prime ideal of height~$1$. \end{enumerate} \end{prop} {\bf Proof. } As $A_{\widehat{R}}$ is a finitely generated $\widehat{R}$-module, $A_{\widehat{R}}/\mathfrak{p}$ with a prime $\mathfrak{p}$ is an integral domain which is a finitely generated $\widehat{R}$-module. Hence, it is either a finite field or a finite extension of $\widehat{R}$. This proves that the height of $\mathfrak{p}$ is less than or equal to~$1$. The correspondences and the isomorphisms of Part~(a) are easily verified. The decompositions in Parts~(b) and~(c) are \cite{Eisenbud}, Corollary~7.6. Part~(d) follows by tensoring~(b) over $\widehat{R}$ with~$K$. \hspace* {.5cm} $\Box$ Similar decompositions for $A$-modules are derived by applying the idempotents of the decompositions of Part~(b). \begin{prop}\label{dimension} Assume the set-up of Proposition~\ref{commalg} and let $M,N$ be $A$-modules which as $R$-modules are free of finite rank. 
Suppose that \begin{enumerate}[(a)] \item $M \otimes_R \mathbb{C} \cong N \otimes_R \mathbb{C}$ as $A \otimes_R \mathbb{C}$-modules, or \item $M \otimes_R \bar{K} \cong N \otimes_R \bar{K}$ as $A \otimes_R \bar{K}$-modules. \end{enumerate} Then for all prime ideals $\mathfrak{m}$ of $A_\mathbb{F}$ corresponding to height $1$ primes of $A_{\widehat{R}}$ the equality $$ \dim_\mathbb{F} (M \otimes_R \mathbb{F})_\mathfrak{m} = \dim_\mathbb{F} (N \otimes_R \mathbb{F})_\mathfrak{m}$$ holds. \end{prop} {\bf Proof. } As for $A$, we also write $M_K$ for $M \otimes_R K$ and similarly for $N$ and $\widehat{R}$, $\mathbb{F}$, etc. By choosing an isomorphism $\mathbb{C} \cong \bar{K}$, it suffices to prove Part~(b). Using Proposition~\ref{commalg}, Part~(d), the isomorphism $M \otimes_R \bar{K} \cong N \otimes_R \bar{K}$ can be rewritten as $$ \bigoplus_{\mathfrak{p}} (M_{K, \mathfrak{p}^e} \otimes_K \bar{K}) \cong \bigoplus_{\mathfrak{p}} (N_{K, \mathfrak{p}^e} \otimes_K \bar{K}),$$ where the sums run over the minimal primes $\mathfrak{p}$ of $A_{\widehat{R}}$ which are properly contained in a maximal prime. Hence, an isomorphism $M_{K, \mathfrak{p}^e} \otimes_K \bar{K} \cong N_{K, \mathfrak{p}^e} \otimes_K \bar{K}$ exists for each~$\mathfrak{p}$. Since for each maximal ideal $\mathfrak{m}$ of $A_{\widehat{R}}$ of height $1$ we have by Proposition~\ref{commalg} $$ M_{\widehat{R}, \mathfrak{m}} \otimes_{\widehat{R}} K \cong \bigoplus_{\mathfrak{p} \subseteq \mathfrak{m} \text{ min.}} M_{K, \mathfrak{p}^e}$$ and similarly for $N$, we get \begin{align*} \dim_\mathbb{F} M_{\mathbb{F}, \mathfrak{m}} =& {\rm rk}_{\widehat{R}} M_{\widehat{R}, \mathfrak{m}} = \sum_{\mathfrak{p} \subseteq \mathfrak{m} \text{ min.}} \dim_K M_{K, \mathfrak{p}^e} \\ =& \sum_{\mathfrak{p} \subseteq \mathfrak{m} \text{ min.}} \dim_K N_{K, \mathfrak{p}^e} = {\rm rk}_{\widehat{R}} N_{\widehat{R}, \mathfrak{m}} = \dim_\mathbb{F} N_{\mathbb{F}, \mathfrak{m}}. \end{align*} This proves the proposition. \hspace* {.5cm} $\Box$ \subsubsection*{The stop criterion} \begin{prop}\label{propdim} Let $\mathfrak{m}$ be a maximal ideal of $\mathbb{T}_{\mathcal{O}\to\mathbb{F}}(\mathcal{M}_k(N,\chi\,;\,\mathcal{O}))$ which is non-torsion and non-Eisenstein. Then the following statements hold: \begin{enumerate}[(a)] \item $\mathcal{CM}_k(N,\bar{\chi}\,;\,\mathbb{F})_\mathfrak{m} \cong \mathcal{M}_k(N,\bar{\chi}\,;\,\mathbb{F})_\mathfrak{m}$. \item $2 \cdot \dim_{\mathbb{F}} S_k(N,\bar{\chi}\,;\,\mathbb{F})_\mathfrak{m} = \dim_{\mathbb{F}} \mathcal{CM}_k(N,\bar{\chi}\,;\,\mathbb{F})_\mathfrak{m}$. \item If $p \neq 2$, then $\dim_{\mathbb{F}} S_k(N,\bar{\chi}\,;\,\mathbb{F})_\mathfrak{m} = \dim_{\mathbb{F}} \mathcal{CM}_k(N,\bar{\chi}\,;\,\mathbb{F})^+_\mathfrak{m}$. \end{enumerate} \end{prop} {\bf Proof. } Part~(c) follows directly from Part~(b) by decomposing $\mathcal{CM}_k(N,\bar{\chi}\,;\,\mathbb{F})$ into a direct sum of its plus- and its minus-part. Statements~(a) and~(b) will be concluded from Proposition~\ref{dimension}. 
More precisely, it allows us to derive from Theorem~\ref{thmes} that \begin{align*} &\dim_{\mathbb{F}}\big((\mathcal{M}_k(N,\chi\,;\,\mathcal{O})/\text{torsion}) \otimes_{\mathcal{O}} \mathbb{F}\big)_\mathfrak{m}\\ = &\dim_{\mathbb{F}}\big({\rm Eis}_k (N,\bar{\chi}\,;\,\mathbb{F}) \oplus S_k (N,\bar{\chi}\,;\,\mathbb{F}) \oplus S_k (N,\bar{\chi}\,;\,\mathbb{F})^\vee \big)_\mathfrak{m} \end{align*} and $$ \dim_{\mathbb{F}}\big((\mathcal{CM}_k(N,\chi\,;\,\mathcal{O})/\text{torsion}) \otimes_{\mathcal{O}} \mathbb{F}\big)_\mathfrak{m} = 2 \cdot \dim_{\mathbb{F}}\big(S_k(N,\bar{\chi}\,;\,\mathbb{F})\big)_\mathfrak{m}.$$ The latter proves Part~(b), since $\mathfrak{m}$ is non-torsion. Since ${\rm Eis}_k (N,\bar{\chi}\,;\,\mathbb{F})_\mathfrak{m} = 0$ by the definition of a non-Eisenstein prime, and again since $\mathfrak{m}$ is non-torsion, it follows that $$\dim_{\mathbb{F}}\mathcal{CM}_k(N,\bar{\chi}\,;\,\mathbb{F})_\mathfrak{m} = \dim_{\mathbb{F}}\mathcal{M}_k(N,\bar{\chi}\,;\,\mathbb{F})_\mathfrak{m},$$ which implies Part~(a). \hspace* {.5cm} $\Box$ We will henceforth often regard non-Eisenstein non-torsion primes as in the proposition as maximal primes of $\mathbb{T}_{\mathbb{F}}(S_k(N,\bar{\chi}\,;\,\mathbb{F})) = \mathbb{T}_{\mathcal{O}\to\mathbb{F}}(S_k(N,\chi))$. \begin{cor}[Stop Criterion]\label{stopcor} Let $\mathfrak{m}$ be a maximal ideal of $\mathbb{T}_{\mathbb{F}}(S_k(N,\bar{\chi}\,;\,\mathbb{F}))$ which is non-Eisen\-stein and non-torsion. \begin{enumerate}[(a)] \item One has $\dim_{\mathbb{F}} \mathcal{M}_k(N,\bar{\chi}\,;\,\mathbb{F})_\mathfrak{m} = 2 \cdot \dim_{\mathbb{F}} \mathbb{T}_{\mathbb{F}} \big(\mathcal{M}_k(N,\bar{\chi}\,;\,\mathbb{F})\big)_\mathfrak{m}$ if and only if $$\mathbb{T}_{\mathbb{F}} \big(S_k(N,\bar{\chi}\,;\,\mathbb{F})\big)_\mathfrak{m} \cong \mathbb{T}_{\mathbb{F}} \big(\mathcal{CM}_k(N,\bar{\chi}\,;\,\mathbb{F})\big)_\mathfrak{m}.$$ \item One has $\dim_{\mathbb{F}} \mathcal{CM}_k(N,\bar{\chi}\,;\,\mathbb{F})_\mathfrak{m} = 2 \cdot \dim_{\mathbb{F}} \mathbb{T}_{\mathbb{F}} \big(\mathcal{CM}_k(N,\bar{\chi}\,;\,\mathbb{F})\big)_\mathfrak{m}$ if and only if $$\mathbb{T}_{\mathbb{F}} \big(S_k(N,\bar{\chi}\,;\,\mathbb{F})\big)_\mathfrak{m} \cong \mathbb{T}_{\mathbb{F}} \big(\mathcal{CM}_k(N,\bar{\chi}\,;\,\mathbb{F})\big)_\mathfrak{m}.$$ \item Assume $p \neq 2$. One has $\dim_{\mathbb{F}} \mathcal{CM}_k(N,\bar{\chi}\,;\,\mathbb{F})^+_\mathfrak{m} = \dim_{\mathbb{F}} \mathbb{T}_{\mathbb{F}} \big(\mathcal{CM}_k(N,\bar{\chi}\,;\,\mathbb{F})\big)_\mathfrak{m}$ if and only if $$\mathbb{T}_{\mathbb{F}} \big(S_k(N,\bar{\chi}\,;\,\mathbb{F})\big)_\mathfrak{m} \cong \mathbb{T}_{\mathbb{F}} \big(\mathcal{CM}_k(N,\bar{\chi}\,;\,\mathbb{F})^+\big)_\mathfrak{m}.$$ \end{enumerate} \end{cor} {\bf Proof. } We only prove (a), as (b) and (c) are similar. From Part~(b) of Proposition~\ref{propdim} and the fact that the $\mathbb{F}$-dimension of the algebra $\mathbb{T}_{\mathbb{F}} \big(S_k(N,\bar{\chi}\,;\,\mathbb{F})\big)_\mathfrak{m}$ is equal to that of $S_k(N,\bar{\chi}\,;\,\mathbb{F})_\mathfrak{m}$, as they are dual to each other, it follows that $$ 2 \cdot \dim_\mathbb{F} \mathbb{T}_{\mathbb{F}} \big(S_k(N,\bar{\chi}\,;\,\mathbb{F})\big)_\mathfrak{m} = \dim_\mathbb{F} \big(\mathcal{CM}_k(N,\bar{\chi}\,;\,\mathbb{F})\big)_\mathfrak{m}.$$ The result is now a direct consequence of the surjections~(\ref{eqms}) and~(\ref{eqmseins}) and of Corollary~\ref{cores}.
\hspace* {.5cm} $\Box$ Note that in each statement the first equality only involves modular symbols and not modular forms, yet it allows us to make statements involving modular forms. This is the aforementioned stop criterion; the computation of Hecke operators can be stopped if this equality is reached. We now list some results concerning the validity of the equivalent statements of Corollary~\ref{stopcor}. The reader can also consult \cite{EPW} for general results in the ordinary and distinguished case. \begin{prop} Let $p \ge 5$ be a prime, let $k \ge 2$ and $N \ge 5$ be integers with $p \nmid N$, let $\mathbb{F}$ be a finite extension of~$\mathbb{F}_p$, let $\bar{\chi}: (\mathbb{Z}/N\mathbb{Z})^\times \to \mathbb{F}^\times$ be a character and let $\mathfrak{m}$ be a maximal ideal of $\mathbb{T}_\mathbb{F}(S_k(N,\bar{\chi}\,;\,\mathbb{F}))$ which is non-Eisenstein and non-torsion. Suppose (i) that $2 \le k \le p-1$ or (ii) that $k \in \{p,p+1\}$ and $\mathfrak{m}$ is ordinary. Then $$ \mathbb{T}_{\mathbb{F}} \big(S_k(N,\bar{\chi}\,;\,\mathbb{F})\big)_\mathfrak{m} \cong \mathbb{T}_{\mathbb{F}} \big(\mathcal{CM}_k(N,\bar{\chi}\,;\,\mathbb{F})\big)_\mathfrak{m} \cong \mathbb{T}_{\mathbb{F}} \big(\mathcal{CM}_k(N,\bar{\chi}\,;\,\mathbb{F})^+\big)_\mathfrak{m}.$$ \end{prop} {\bf Proof. } Using the comparison with group cohomology of \cite{HeckeMS}, Theorem~6.1, the result follows under Assumption~(i) from \cite{edixhoven}, Theorem~5.2, and is proved under Assumption~(ii) in \cite{Faithful}, Corollary~6.9, for the case of the group $\Gamma_1(N)$ and no Dirichlet character. The passage to a character is established by \cite{Faithful}, Theorem~7.4 and the remark following it. One identifies the mod~$p$ modular forms appearing with corresponding Katz forms using Carayol's Lemma (\cite{edixhoven-boston}, Prop.~1.10). \hspace* {.5cm} $\Box$ We end this section by stating the so-called Sturm bound (also called the Hecke bound), which gives the best a priori upper bound on how many Hecke operators are needed to generate the whole Hecke algebra. We only need it in our algorithm in cases in which it is not known theoretically that the stop criterion will be reached; it enables the algorithm to detect whether the Hecke algebra on modular symbols fails to be isomorphic to the corresponding one on cuspidal modular forms. \begin{prop}[Sturm bound]\label{propsturm} The Hecke algebra $\mathbb{T}_\mathbb{C}(S_k(N,\chi))$ can be generated as an algebra by the Hecke operators $T_l$ for all primes~$l$ smaller than or equal to $\frac{kN}{12}\prod_{q \mid N, q \text{ prime}} (1+\frac{1}{q})$. \end{prop} {\bf Proof. } This is discussed in detail in Chapter~11 of \cite{SteinBook}. \hspace* {.5cm} $\Box$ \subsection{Algorithm} In this subsection we present a sketch of the algorithm that we used for our computations. The {\sc Magma} code can be downloaded from the second author's webpage and a manual is included as Appendix~\ref{section-manual}. \noindent {\bf\underline{Input:}} Integers $N \ge 1$, $k \ge 2$, a finite field $\mathbb{F}$, a character $\chi: (\mathbb{Z}/N\mathbb{Z})^\times \to \mathbb{F}^\times$ and for each prime~$l$ less than or equal to the Sturm bound an irreducible polynomial $f_l \in \mathbb{F}[X]$.\\ {\bf \underline{Output:}} An $\mathbb{F}$-algebra. \begin{itemize} \itemsep=0cm plus 0pt minus 0pt \item $M \leftarrow \mathcal{CM}_k(N,\chi\,;\,\mathbb{F})$, $l \leftarrow 1$, $L \leftarrow $ empty list. \item repeat \begin{itemize} \itemsep=0cm plus 0pt minus 0pt \item $l \leftarrow $ next prime after $l$.
\item Compute $T_l$ on $M$ and append it to the list $L$. \item $M \leftarrow $ the restriction of $M$ to the $f_l$-primary subspace for $T_l$, i.e.\ to the biggest subspace of $M$ on which the minimal polynomial of $T_l$ is a power of $f_l$. \item $A \leftarrow $ the $\mathbb{F}$-algebra generated by the restrictions to $M$ of $T_2, T_3, \dots, T_l$. \end{itemize} \item until $2 \cdot \dim (A) = \dim (M)$ \emph{[the stop criterion]} or $l > $ Sturm bound. \item return $A$. \end{itemize} The $f_l$ should, of course, be chosen as the minimal polynomials of the coefficients $a_l(f)$ of the normalised eigenform $f \in S_k(N,\chi\,;\,\bar{\mathbb{F}})$ whose local Hecke algebra one wants to compute. Suppose the algorithm stops at the prime~$q$. If $q$ is bigger than the Sturm bound, the equivalent conditions of Corollary~\ref{stopcor} do not hold. In that case the output should be disregarded. Otherwise, $A$ is isomorphic to a direct product of the form $\prod_\mathfrak{m} \mathbb{T}(S_k(N,\chi\,;\,\mathbb{F}))_\mathfrak{m}$, where the $\mathfrak{m}$ are those maximal ideals such that the minimal polynomials of $T_2, T_3, \dots, T_q$ on $\mathbb{T}(S_k(N,\chi\,;\,\mathbb{F}))_\mathfrak{m}$ are equal to powers of $f_2, f_3, \dots, f_q$. It can happen that $A$ consists of more than one factor. Hence, one should still decompose $A$ into its local factors. Alternatively, one can also replace the last line but one of the algorithm by \begin{itemize} \item until $\big((2 \cdot \dim (A) = \dim (M))$ and $A$ is local$\big)$ or $l > $ Sturm bound, \end{itemize} which ensures that the output is a local algebra. In practice, one modifies the algorithm so that a polynomial~$f_l$ need not be given for every prime~$l$; if no $f_l$ is known, the algorithm instead considers each irreducible factor of the minimal polynomial of~$T_l$. It is also useful to choose the order in which $l$ runs through the primes. For example, one might want to take $l=p$, with $p$ the characteristic of~$\mathbb{F}$, at an early stage if one knows that this operator is needed, as is the case in all computations concerning Question~\ref{ourquestion}. \section{Computational results} \label{computational-results} In view of Question~\ref{ourquestion}, we produced 384 examples of odd irreducible continuous Galois representations $\Gal({\overline{\QQ}}/\mathbb{Q}) \to \mathrm{GL}_2({\overline{\FF}_p})$ that are completely split at~$p$. The results are documented in the accompanying tables (Appendix~\ref{section-tables}). The complete data can be downloaded from the second author's webpage. The Galois representations were created either by class field theory or from an irreducible integer polynomial whose Galois group embeds into $\mathrm{GL}_2({\overline{\FF}_p})$. All examples but one are dihedral; the remaining one is icosahedral. For each of these, an eigenform giving rise to it was computed. The Gorenstein defect of the corresponding local Hecke algebra factor turned out always to be~$2$, supporting Question~\ref{ourquestion}. The authors preferred to proceed in this way, instead of computing all Hecke algebras mod~$p$ in weight~$p$ for all ``small'' primes~$p$ and all ``small'' levels, since non-dihedral examples in which the assumptions of Question~\ref{ourquestion} are satisfied are very rare. \subsection{Table entries} For every computed local Hecke algebra, enough data are stored to recreate it as an abstract algebra, and its important characteristics are listed in the tables of Appendix~\ref{section-tables}.
A sample table entry is the following. \begin{longtable}{||c|c|c|c|c|c|c|c|c|c||} \hline Level & Wt & ResD & Dim & EmbDim & NilO & GorDef & \#Ops & \#(p$<$HB) & Gp \\ \hline 5939 & 5 & 3 & 12 & 3 & 5 & 2 & 5 & 366 & $D_{7}$ \\ \hline \end{longtable} Each entry corresponds to the Galois conjugacy class of an eigenform $f$ mod~$p$ with associated local Hecke algebra~$A$. The first and the second column indicate the level and the weight of~$f$. The latter is in all examples equal to the characteristic of the base field $k$ (a finite extension of~$\mathbb{F}_p$) of the algebra. Let $\mathfrak{m}_A$ denote the maximal ideal of~$A$. Then ResD stands for the degree of $K = A/\mathfrak{m}_A$ over~$\mathbb{F}_p$. Let us consider $A \otimes_k K$. It decomposes into a direct product of a local $K$-algebra $B$ and its $\Gal(K/k)$-conjugates. The $K$-dimension of $B$ (which is equal to the $k$-dimension of~$A$) is recorded in the fourth column. Let $\mathfrak{m}_B$ be the maximal ideal of~$B$. The {\em embedding dimension} EmbDim is the $K$-dimension of $\mathfrak{m}_B/\mathfrak{m}_B^2$. By Nakayama's Lemma this is the minimal number of $B$-generators for~$\mathfrak{m}_B$. The {\em nilpotency order} NilO is the maximal integer~$n$ such that $\mathfrak{m}_B^n$ is not the zero ideal. The column GorDef contains the Gorenstein defect of~$B$ (which is the same as the Gorenstein defect of~$A$). The column \#Ops indicates how many Hecke operators were used to generate the algebra~$A$ when applying the stop criterion (Corollary~\ref{stopcor}). This is contrasted with the number of primes smaller than the Sturm bound (Proposition~\ref{propsturm}; it is also called the Hecke bound), which is denoted by \#(p$<$HB). One immediately observes that the stop criterion is very efficient. Whereas the Sturm bound is roughly linear in the level, fewer than 10 Hecke operators sufficed in 365 of the 384 calculated examples, and in 252 examples even 5 were enough. The final column contains the image of the mod~$p$ Galois representation attached to~$f$ as an abstract group. \subsection{Dihedral examples} All Hecke algebras except one in our tables correspond to eigenforms whose Galois representations are dihedral, since these are by far the easiest to obtain explicitly, as one can use class field theory. We now explain this. Let $p$ be a prime and $d$ a square-free integer which is $1$ mod~$4$ and not divisible by~$p$. We denote by $K$ the quadratic field~$\mathbb{Q}(\sqrt{d})$. Further, we consider an unramified character $\chi: \Gal({\overline{\QQ}}/K) \to {\overline{\FF}_p}^\times$ of order $n \ge 3$. We assume that its inverse $\chi^{-1}$ is equal to the conjugate of $\chi$ by $\sigma$, denoted $\chi^\sigma$, where $\sigma$ is (a lift of) the non-trivial element of $\Gal(K/\mathbb{Q})$. The induced representation $$\rho_\chi := {\rm Ind}_{\Gal({\overline{\QQ}}/K)}^{\Gal({\overline{\QQ}}/\mathbb{Q})}(\chi) : \Gal({\overline{\QQ}}/\mathbb{Q}) \to \mathrm{GL}_2({\overline{\FF}_p})$$ is irreducible and its image is the dihedral group $D_n$ of order~$2n$. If $l$ is a prime not dividing $2d$, we have $\rho_\chi(\Frob_l) = \mat 0110$ if $\big(\frac{d}{l}\big)=-1$, and $\rho_\chi(\Frob_l) = \mat {\chi(\Frob_\Lambda)}00 {\chi^\sigma(\Frob_\Lambda)}$ if $\big(\frac{d}{l}\big)=1$ and $l \mathcal{O}_K = \Lambda \sigma(\Lambda)$. This explicit description makes it obvious that the determinant of $\rho_\chi$ is the Legendre symbol $l \mapsto \big(\frac{d}{l}\big)$.
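As an illustration of this explicit description, the following lines sketch the relevant class group computation with built-in {\sc Magma} functions only; they are not part of the {\tt HeckeAlgebra} package. The value $d=-2039$ matches one of the levels in our tables, and the prime $l$ is an arbitrary choice.\\
\magma{> d := -2039; // square-free and congruent to 1 mod 4\\
> K := QuadraticField(d);\\
> O := MaximalOrder(K);\\
> Cl, cl := ClassGroup(O);\\
> l := 5;\\
> KroneckerSymbol(d, l); // the determinant of rho\_chi at Frob\_l\\
> dec := Decomposition(O, l);\\
> Lambda := dec[1][1]; // a prime of K above l\\
> Lambda @@ cl; // the ideal class of Lambda; if l splits, chi(Frob\_Lambda) is chi evaluated at this class}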
Since the kernel of $\chi$ corresponds to a subfield of the Hilbert class field of~$K$, simple computations in the class group of~$K$ allow one to determine which primes split completely. These give examples satisfying the assumptions of Question~\ref{ourquestion} (the Frobenius at~$p$ is the identity) if $\rho_\chi$ is odd, i.e.\ if $p=2$ or $d < 0$. We remark that for characters $\chi$ of odd order~$n$ the assumption $\chi^{-1} = \chi^\sigma$ is not a big restriction, since any character can be written as $\chi = \chi_1 \chi_2$ with $\chi_1^\sigma = \chi_1^{-1}$ and $\chi_2^\sigma = \chi_2$, hence the latter descends to a character of $\Gal({\overline{\QQ}}/\mathbb{Q})$ and the representation $\rho_\chi$ is isomorphic to $\rho_{\chi_1} \otimes \chi_2$. All dihedral representations are known to come from eigenforms in the minimal possible weight with level equal to the prime-to-$p$ conductor of the representation (see \cite{Dihedral}, Theorem~1). In the tables we computed the Hecke algebras of odd dihedral representations as above in the following ranges. For each prime~$p$ less than~$100$ and each prime $l$ less than or equal to the largest level occurring in the table for~$p$, we chose $d$ as plus or minus~$l$ such that $d$ is $1$ mod~$4$, and we let $H$ run through all non-trivial cyclic quotients of the class group of $\mathbb{Q}(\sqrt{d})$ of order coprime to~$p$. For each $H$ we chose (unramified) characters~$\chi$ of the absolute Galois group of $\mathbb{Q}(\sqrt{d})$ corresponding to~$H$, up to Galois conjugacy and up to replacing $\chi$ by its inverse. Then $\chi$ is not the restriction of a character of $\Gal({\overline{\QQ}}/\mathbb{Q})$. By genus theory the order of $\chi$ is odd, as is the class number, so we necessarily have $\chi^{-1} = \chi^\sigma$. We computed the local factor of $\mathbb{T}_{\mathbb{F}_p}(S_p(d,\big(\frac{d}{\cdot}\big) \,;\, \mathbb{F}_p))$ corresponding to~$\rho_\chi$ if $\rho_\chi$ is odd and completely split at~$p$. For the prime $p=2$ we also allowed square-free integers~$d$ which are $1$ mod~$4$ and whose absolute value is less than~$5000$. \subsection{Icosahedral example} With the help of a list of polynomials provided by Gunter Malle (\cite{malle}), we could explicitly describe a Galois representation of $\Gal({\overline{\QQ}}/\mathbb{Q})$ with values in $\mathrm{GL}_2({\overline{\FF}_2})$ which is of prime conductor, completely split at~$2$ (and thus satisfies the assumptions of Question~\ref{ourquestion}), and whose image is isomorphic to the icosahedral group~$A_5$. The modular forms in weight $2$ predicted by Serre's conjecture were found and the corresponding Hecke algebra turned out to have Gorenstein defect equal to~$2$. Let $f \in \mathbb{Z}[X]$ be an irreducible polynomial of degree~$5$ whose Galois group, i.e.\ the Galois group of the normal closure $L$ of $K = \mathbb{Q}[X]/(f)$, is isomorphic to~$A_5$. We assume that $K$ is unramified at $2$, $3$ and~$5$. We have the Galois representation $$ \rho_f : \Gal({\overline{\QQ}}/\mathbb{Q}) \twoheadrightarrow \Gal(L/\mathbb{Q}) \cong A_5 \cong \mathrm{SL}_2(\mathbb{F}_4).$$ We now determine its conductor and its traces. Let $p$ be a ramified prime. As the ramification is tame, the image of the inertia group $\rho_f(I_p)$ at $p$ is cyclic of order $2$, $3$ or~$5$. In the first case, the image of a decomposition group $\rho_f(D_p)$ at $p$ is either equal to $\rho_f(I_p)$ or equal to $\mathbb{Z}/2\mathbb{Z} \times \rho_f(I_p)$.
If the order of $\rho_f(I_p)$ is odd and $\rho_f(I_p) = \rho_f(D_p)$, then any completion of $L$ at the unique prime above~$p$ is totally ramified and cyclic of degree $\#\rho_f(I_p)$, hence contained in $\mathbb{Q}_p(\zeta_p)$ for $\zeta_p$ a primitive $p$-th root of unity. It follows that $p$ is congruent to $1$ mod $\#\rho_f(I_p)$. If the order of $\rho_f(I_p)$ is odd, but $\rho_f(I_p)$ is not equal to $\rho_f(D_p)$, then $\rho_f(D_p)$ is a dihedral group and the completion of $L$ at a prime above $p$ has a unique unramified quadratic subfield~$S$. Thus, we have the exact sequence $$ 0 \to \rho_f(I_p) \to \rho_f(D_p) \to \Gal(S/\mathbb{Q}_p) \to 0.$$ On the one hand, it is well-known that the conjugation by a lift of the Frobenius element of $\Gal(S/\mathbb{Q}_p)$ acts on $\rho_f(I_p)$ by raising to the $p$-th power. On the other hand, as the action is non-trivial it also corresponds to inversion on $\rho_f(I_p)$, since the only elements of order~$2$ in $(\mathbb{Z}/3\mathbb{Z})^\times$ and $(\mathbb{Z}/5\mathbb{Z})^\times$ are~$-1$. As a consequence, $p$ is congruent to $-1$ mod $\#\rho_f(I_p)$ in this case. We hence have the following cases. \begin{enumerate}[(1)] \item Suppose $p \mathcal{O}_K = \mathfrak{P}^5$. Then $p \equiv \pm 1 \mod 5$. \begin{enumerate}[(a)] \item If $p \equiv 1 \mod 5$, then $\rho_f|_{I_p} \sim \mat \chi00{\chi^{-1}}$ with $\chi$ a totally ramified character of $\Gal({\overline{\QQ}_p}/\mathbb{Q}_p)$ of order~$5$. \item If $p \equiv -1 \mod 5$, then $\rho_f(D_p)$ is the dihedral group with $10$ elements. \end{enumerate} \item Suppose $p \mathcal{O}_K = \mathfrak{P}^3 \mathfrak{Q} \mathfrak{R}$ or $p \mathcal{O}_K = \mathfrak{P}^3 \mathfrak{Q}$. \begin{enumerate}[(a)] \item If $p \equiv 1 \mod 3$, then $\rho_f|_{I_p} \sim \mat \chi00{\chi^{-1}}$ with $\chi$ a totally ramified character of $\Gal({\overline{\QQ}_p}/\mathbb{Q}_p)$ of order~$3$. \item If $p \equiv -1 \mod 3$, then $\rho_f(D_p)$ is the dihedral group with $6$ elements. \end{enumerate} \item Suppose that $p$ is ramified, but that we are neither in Case~(1) nor in Case~(2). Then $\rho_f|_{I_p} \sim \mat 1101$. \end{enumerate} By the definition of the conductor at $p$ it is clear that it is $p^2$ in Cases (1) and~(2) and $p$ in Case~(3). However, in Cases (1)(a) and (2)(a) one can choose a character $\epsilon$ of $\Gal({\overline{\QQ}}/\mathbb{Q})$ of the same order as~$\chi$ whose restriction to $D_p$ gives the character~$\chi$. If one twists the representation $\rho_f$ by $\epsilon$ one finds also in these cases that the conductor at~$p$ is~$p$. An inspection of the conjugacy classes of the group $\mathrm{SL}_2(\mathbb{F}_4)$ shows that the traces of $\rho_f$ twisted by some character $\epsilon$ of $\Gal({\overline{\QQ}}/\mathbb{Q})$ are as follows. Let $l$ be an unramified prime. \begin{itemize} \item If the order of $\Frob_l$ is $5$, then the trace at $\Frob_l$ is~$\epsilon(\Frob_l) w$ where $w$ is a root of the polynomial $X^2+X+1$ in $\mathbb{F}_2[X]$. \item If the order of $\Frob_l$ is $3$, then the trace at $\Frob_l$ is~$\epsilon(\Frob_l)$. \item If the order of $\Frob_l$ is $1$ or $2$, then the trace at $\Frob_l$ is~$0$. \end{itemize} These statements allow the easy identification of the modular form belonging to an icosahedral representation. We end this section with some remarks on our icosahedral example. It was obtained using the polynomial $x^5 - x^4 - 79 x^3 + 225 x^2 + 998 x - 3272$. 
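The following lines sketch, with built-in {\sc Magma} functions, how the basic properties of such a polynomial can be checked; we do not reproduce any output. Note that $2$ splits completely in the splitting field $L$ if and only if it splits completely in $K=\mathbb{Q}[X]/(f)$, since $L$ is generated by the roots of~$f$.\\
\magma{> R<x> := PolynomialRing(Integers());\\
> f := x\^{}5 - x\^{}4 - 79*x\^{}3 + 225*x\^{}2 + 998*x - 3272;\\
> G := GaloisGroup(f);\\
> IsIsomorphic(G, Alt(5)); // the Galois group should be A\_5\\
> K := NumberField(f);\\
> O := MaximalOrder(K);\\
> Factorization(Discriminant(O)); // the primes ramified in K\\
> Decomposition(O, 2); // five primes of degree one mean that 2 splits completely}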
The corresponding table entry is: \begin{longtable}{||c|c|c|c|c|c|c|c|c|c||} \hline Level & Wt & ResD & Dim & EmbDim & NilO & GorDef & \#Ops & \#(p$<$HB) & Gp \\ \hline 89491 & 2 & 2 & 12 & 4 & 3 & 2 & 4 & 1746 & $A_5$ \\ \hline \end{longtable} Hence, in level $89491$ and weight~$2$ there is a single eigenform~$g$ mod~$2$ up to Galois conjugacy whose first few $q$-coefficients agree with the traces of a twist of the given icosahedral Galois representation. From this one can deduce that the Galois representation $\rho_g$ of~$g$ has icosahedral image and is ramified only at~$89491$. As weight and level lowering are not known in our case, we cannot prove that $\rho_g$ coincides with a twist of the given one. It might, however, be possible to exclude the existence of two distinct icosahedral extensions of the rationals inside~$\mathbb{C}$ that ramify only at~$89491$ by consulting tables. According to Malle, the icosahedral extension used has smallest discriminant among all totally real $A_5$-extensions of the rationals in which $2$ splits completely. \section{Further results and questions} \label{further-questions} In this section we present some more computational observations for Hecke algebras under the assumptions of Question~\ref{ourquestion}, which lead us to ask some further questions. \subsection*{On the dimension of the Hecke algebra} From the data, we see that many even integers appear as dimensions of the~$\mathbb{T}_\mathfrak{m}$. We know that the dimension must be at least~4, as this is the dimension of the smallest non-Gorenstein algebra which can appear in our case. This extends the results of~\cite{kilford-nongorenstein}, where the dimensions of the Hecke algebras~$\mathbb{T}_{\mathbb{Z} \to \mathbb{F}_2}(S_2(\Gamma_0(431)))$ and~$\mathbb{T}_{\mathbb{Z} \to \mathbb{F}_2}(S_2(\Gamma_0(503)))$ localised at the non-Gorenstein maximal ideals are shown to be~4. The following table shows exactly how many times each dimension appears in our data. We observe that every even integer between~4 and~32 appears, and that the largest dimension is~60. The most common dimension is~4, which appears about half of the time. However, as the dimension of the Hecke algebra attached to~$S_k(\Gamma_1(N))$ increases with~$N$ and with~$k$, this may be an artifact of the data being collected for ``small'' levels~$N$ and primes~$p$. \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|} \hline Dimension & 4 & 6 & 8 & 10 & 12 & 14 & 16 & 18 & 20 & 22 \\ \hline Number of algebras &206 & 58& 25 & 3 & 24 & 6 & 20 & 3 & 12 & 3 \\ \hline \hline Dimension& 24 & 26 & 28 & 30 & 32 & 36 & 40 & 46 & 56 & 60\\ \hline Number of algebras &5 & 4 & 2 & 1 & 2 & 2 & 4 & 1 & 2 & 1\\ \hline \end{tabular} It seems reasonable that there should be infinitely many cases with dimension~4, and plausible that every even integer greater than or equal to~4 should appear as a dimension infinitely many times. From the tables, we see that dimension~4 algebras appear at very high levels, so they do not appear to become rare as the level increases, but this may, of course, be an artifact of our data. We note that not every example that arises from an elliptic curve in characteristic $p=2$ has a Hecke algebra of dimension~$4$; for example, the algebra $\mathbb{T}_{\mathbb{Z} \to \mathbb{F}_2}(S_2(\Gamma_0(2089)))$ localised at its non-Gorenstein maximal ideal has dimension~18. In level~$18097$ there is a dimension~$36$ example arising from an elliptic curve.
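To indicate how such dimensions can in principle be recomputed without the {\tt HeckeAlgebra} package, the following lines carry out the first steps of the algorithm of Section~\ref{modular-symbols} for level $431$ in characteristic~$2$, using only built-in {\sc Magma} functions. This is a schematic sketch: the irreducible factor $f_2$ is chosen here simply as the first factor of the minimal polynomial of $T_2$ (in an actual computation one takes the minimal polynomial of $a_2(f)$ for the eigenform of interest), the last line is the stop criterion of Corollary~\ref{stopcor}, and we make no claim about the output.\\
\magma{> eps := DirichletGroup(431, GF(2))!1;\\
> M := CuspidalSubspace(ModularSymbols(eps, 2));\\
> n := Dimension(M);\\
> T := [ Matrix(HeckeOperator(M, l)) : l in [2, 3, 5] ];\\
> f2 := Factorization(MinimalPolynomial(T[1]))[1][1]; // an irreducible factor, as in the algorithm\\
> f2T := \&+[ Coefficient(f2, i) * T[1]\^{}i : i in [0..Degree(f2)] ];\\
> V := Kernel(f2T\^{}n); // the f2-primary subspace for T\_2\\
> B := BasisMatrix(V);\\
> S := MatrixAlgebra(GF(2), Nrows(B));\\
> Tres := [ S ! Solution(B, B*t) : t in T ]; // restrictions of the T\_l to V\\
> A := sub< S | [S ! 1] cat Tres >;\\
> 2 * Dimension(A) eq Nrows(B); // the stop criterion}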
\subsection*{On the residue degree} We will now settle an easy aspect of the question of the possible structures of non-Gorenstein local algebras occurring as local Hecke algebras. For the next few lines we assume the Generalised Riemann Hypothesis (GRH). We claim that, under this assumption, the residue degrees of $\mathbb{T}_{\mathfrak{m}}$ (in the notation of Question~\ref{ourquestion}) are unbounded if we let~$p$ and $N$ run through the primes such that $p \neq N$ and $N$ is congruent to~$3$ modulo~$4$. Indeed, class groups of imaginary quadratic fields $\mathbb{Q}(\sqrt{-N})$ have arbitrarily large cyclic factors of odd order, since the exponent of these class groups is known to tend to infinity with~$N$ by the main result of~\cite{boyd}, which assumes GRH. So the discussion on dihedral forms in Section~\ref{computational-results} immediately implies the claim. \subsection*{On the embedding dimension} One can ask whether the embedding dimension of the local Hecke algebras in the situation of Question~\ref{ourquestion} is bounded, if we allow $p$ and~$N$ to vary. This, however, seems to be a difficult problem. The embedding dimensions occurring in our tables are $3$ (299 times), $4$ (78 times) and~$5$ (7 times). The embedding dimension~$d$ is related to the number of Hecke operators needed to generate the local Hecke algebra, in the sense that at least~$d$ Hecke operators are needed. Probably, $d$ Hecke operators can be found that generate the algebra, but they need not be the first~$d$ prime Hecke operators, of course. However, as our tables suggest, in most cases the actual computations were done using very few operators, and in 99 of the 384 cases the computation already finished after $d$ operators. \noindent \begin{tabular}{p{8cm}p{6cm}} L.~J.~P.~Kilford & Gabor Wiese \\ Mathematics Institute & NWF 1-Mathematik \\ Oxford University & Universität Regensburg \\ 24--29 St Giles' & D-93040 Regensburg \\ Oxford OX1 3LB & Germany \\ United Kingdom & \\ & {\tt [email protected]} \\ {\tt [email protected]} & {\tt http://maths.pratum.net} \end{tabular} \appendix \section{The {\sc Magma} package HeckeAlgebra (by Gabor Wiese)} \label{section-manual} \abstract{This is a short manual for the {\sc Magma} package {\tt HeckeAlgebra}, which can be downloaded from the author's webpage. The author would like to thank Lloyd Kilford for very helpful suggestions.} \subsection{Example} {\setlength{\parindent}{0pt} The following example explains the main functions of the package. Let us suppose that the file {\tt HeckeAlgebra.mg} is stored in the current directory. We first attach the package.\\ \magma{> Attach("HeckeAlgebra.mg");} We want the package to be silent, so we put:\\ \magma{> SetVerbose ("HeckeAlgebra",false);} If we wanted more information on the computations being performed, we would put the value \magma{true} instead. Since we want to store the data to be computed in a file, we now create the file. \magma{> my\_file := "datafile";\\ > CreateStorageFile(my\_file);} Next, we would like to compute the Hecke algebras of the dihedral eigenforms of level $2039$ over extensions of $\mathbb{F}_2$.
First, we create a list of such forms.\\ \magma{> dih := DihedralForms(2039 : ListOfPrimes := [2], completely\_split := false);}\\ Now, we compute the corresponding Hecke algebras, print part of the computed data in a human readable format, and finally save the data to our file.\\ \magma{> for f in dih do\\ for> ha := HeckeAlgebras(f);\\ for> HeckeAlgebraPrint1(ha);\\ for> StoreData(my\_file, ha);\\ for> end for;} \magmaout{ Level 2039\\ Weight 2\\ Characteristic 2\\ Gorenstein defect 0\\ Dimension 1\\ Number of operators used 3\\ Primes lt Hecke bound 68\\ Residue degree 2\\ ---------------------------\\ Level 2039\\ Weight 2\\ Characteristic 2\\ Gorenstein defect 2\\ Dimension 6\\ Number of operators used 4\\ Primes lt Hecke bound 68\\ Residue degree 2\\ ---------------------------\\ Level 2039\\ Weight 2\\ Characteristic 2\\ Gorenstein defect 0\\ Dimension 1\\ Number of operators used 3\\ Primes lt Hecke bound 68\\ Residue degree 6\\ ---------------------------\\ Level 2039\\ Weight 2\\ Characteristic 2\\ Gorenstein defect 0\\ Dimension 1\\ Number of operators used 3\\ Primes lt Hecke bound 68\\ Residue degree 4\\ ---------------------------\\ Level 2039\\ Weight 2\\ Characteristic 2\\ Gorenstein defect 0\\ Dimension 1\\ Number of operators used 3\\ Primes lt Hecke bound 68\\ Residue degree 4\\ ---------------------------\\ Level 2039\\ Weight 2\\ Characteristic 2\\ Gorenstein defect 0\\ Dimension 1\\ Number of operators used 3\\ Primes lt Hecke bound 68\\ Residue degree 12\\ ---------------------------\\ Level 2039\\ Weight 2\\ Characteristic 2\\ Gorenstein defect 0\\ Dimension 1\\ Number of operators used 3\\ Primes lt Hecke bound 68\\ Residue degree 12\\ --------------------------- } With the function \magma{DihedralForms} one may also compute exclusively representations that are completely split in the characteristic. The default is \magma{completely\_split := true}. By the option \magma{bound} we indicate primes up to which bound should be used as the characteristic. The following example illustrates this.\\ \magma{> dih1 := DihedralForms (431 : bound := 20);\\ > for f in dih1 do\\ for> ha := HeckeAlgebras(f);\\ for> HeckeAlgebraPrint1(ha);\\ for> StoreData(my\_file, ha);\\ for> end for;} \magmaout{ Level 431\\ Weight 2\\ Characteristic 2\\ Gorenstein defect 2\\ Dimension 4\\ Number of operators used 6\\ Primes lt Hecke bound 20\\ Residue degree 1\\ ---------------------------\\ Level 431\\ Weight 11\\ Characteristic 11\\ Gorenstein defect 2\\ Dimension 4\\ Number of operators used 5\\ Primes lt Hecke bound 77\\ Residue degree 3\\ --------------------------- } One can also compute icosahedral modular forms over extensions of $\mathbb{F}_2$, starting from an integer polynomial with Galois group $A_5$, as follows.\\ \magma{> R<x> := PolynomialRing(Integers());\\ > pol := x\^{}5-x\^{}4-780*x\^{}3-1795*x\^{}2+3106*x+344;\\ > f := A5Form(pol);} With this kind of icosahedral examples one has to pay attention to the conductor, as it can be huge. This polynomial has prime conductor. But conductors need not be square-free, in general.\\ \magma{> print Modulus(f`Character);} \magmaout{1951} So it's reasonable. 
We do the computation.\\ \magma{> ha := HeckeAlgebras(f);\\ > HeckeAlgebraPrint1(ha);} \magmaout{ Level 1951\\ Weight 2\\ Characteristic 2\\ Gorenstein defect 0\\ Dimension 3\\ Number of operators used 3\\ Primes lt Hecke bound 66\\ Residue degree 4\\ ---------------------------\\ Level 1951\\ Weight 2\\ Characteristic 2\\ Gorenstein defect 0\\ Dimension 6\\ Number of operators used 3\\ Primes lt Hecke bound 66\\ Residue degree 4\\ --------------------------- } There are two forms, which is okay, since they come from a weight one form in two different ways and this case is not exceptional. We now save them, as always.\\ \magma{> StoreData(my\_file, ha);} It is also possible to compute all forms at a given character and weight.\\ \magma{> eps := DirichletGroup(229,GF(2)).1;\\ > ha := HeckeAlgebras(eps,2);\\ > HeckeAlgebraPrint1(ha);} \magmaout{ Level 229\\ Weight 2\\ Characteristic 2\\ Gorenstein defect 0\\ Dimension 1\\ Number of operators used 12\\ Primes lt Hecke bound 12\\ Residue degree 1\\ ---------------------------\\ Level 229\\ Weight 2\\ Characteristic 2\\ Gorenstein defect 0\\ Dimension 2\\ Number of operators used 12\\ Primes lt Hecke bound 12\\ Residue degree 2\\ ---------------------------\\ Level 229\\ Weight 2\\ Characteristic 2\\ Gorenstein defect 0\\ Dimension 4\\ Number of operators used 12\\ Primes lt Hecke bound 12\\ Residue degree 1\\ ---------------------------\\ Level 229\\ Weight 2\\ Characteristic 2\\ Gorenstein defect 0\\ Dimension 2\\ Number of operators used 12\\ Primes lt Hecke bound 12\\ Residue degree 5\\ --------------------------- } \magma{> StoreData(my\_file,ha);} Next, we illustrate how one reloads what has been saved. One would like to type: \magma{load my\_file;} but that does not work. One has to do it as follows.\\ \magma{> load "datafile";\\ > mf := RecoverData(LoadIn,LoadInRel);} Now, \magma{mf} contains a list of all algebra data computed before. There's a rather concise printing function, displaying part of the information, namely \magma{HeckeAlgebraPrint(mf);}. One can also create a LaTeX longtable. The entries can be chosen in quite a flexible way. The standard usage is the following.\\ \magma{> HeckeAlgebraLaTeX(mf,"table.tex");} A short LaTeX file displaying the table is the following:\\ {\tt {$\backslash$}documentclass[11pt]\{article\}\\ {$\backslash$}usepackage\{longtable\}\\ {$\backslash$}begin\{document\}\\ {$\backslash$}input\{table\}\\ {$\backslash$}end\{document\}} The table we created is this one: \begin{longtable}{||c|c|c|c|c|c|c|c|c|c||} \hline Level & Wt & ResD & Dim & EmbDim & NilO & GorDef & \#Ops & \#(p$<$HB) & Gp \\ \hline\endhead\hline\endfoot\hline\hline\endlastfoot 2039 & 2 & 2 & 1 & 0 & 0 & 0 & 3 & 68 & $D_{3}$ \\ 2039 & 2 & 2 & 6 & 3 & 2 & 2 & 4 & 68 & $D_{5}$ \\ 2039 & 2 & 6 & 1 & 0 & 0 & 0 & 3 & 68 & $D_{9}$ \\ 2039 & 2 & 4 & 1 & 0 & 0 & 0 & 3 & 68 & $D_{15}$ \\ 2039 & 2 & 4 & 1 & 0 & 0 & 0 & 3 & 68 & $D_{15}$ \\ 2039 & 2 & 12 & 1 & 0 & 0 & 0 & 3 & 68 & $D_{45}$ \\ 2039 & 2 & 12 & 1 & 0 & 0 & 0 & 3 & 68 & $D_{45}$ \\ 431 & 2 & 1 & 4 & 3 & 1 & 2 & 6 & 20 & $D_{3}$ \\ 431 & 11 & 3 & 4 & 3 & 1 & 2 & 5 & 77 & $D_{7}$ \\ 1951 & 2 & 4 & 3 & 1 & 2 & 0 & 3 & 66 & $A_5$ \\ 1951 & 2 & 4 & 6 & 2 & 3 & 0 & 3 & 66 & $A_5$ \\ 229 & 2 & 1 & 1 & 0 & 0 & 0 & 12 & 12 & $$ \\ 229 & 2 & 2 & 2 & 1 & 1 & 0 & 12 & 12 & $$ \\ 229 & 2 & 1 & 4 & 1 & 3 & 0 & 12 & 12 & $$ \\ 229 & 2 & 5 & 2 & 1 & 1 & 0 & 12 & 12 & $$ \\ \end{longtable} In the examples of level $229$ the image of the Galois representation as an abstract group is not know. 
That is due to the fact that we created these examples without specifying the Galois representation in advance. It is possible to compute arbitrary Hecke operators on the local Hecke factors generated by \magma{HeckeAlgebras($\cdot$)}, as the following example illustrates.\\ \magma{> A,B,M,C := HeckeAlgebras(DirichletGroup(253,GF(2)).1,2 : over\_residue\_field := true);} Suppose that we want to know the Hecke operator $T_{17}$ on the $4$th local factor.\\ \magma{> i := 4;\\ > T := BaseChange(HeckeOperator(M,17),C[i]);} The coefficients are the eigenvalues (only one):\\ \magma{> Eigenvalues(T);} \magmaout{\{ $<$ \$ . 1 $\hat{ }$ 5, 8 $>$ \}} Let us remember the eigenvalue.\\ \magma{> e := SetToSequence(Eigenvalues(T))[1][1];} In order to illustrate the option \magma{over\_residue\_field}, we also compute the following:\\ \magma{> A1,B1,M1,C1 := HeckeAlgebras(DirichletGroup(253,GF(2)).1,2 : over\_residue\_field := false);\\ > T1 := BaseChange(HeckeOperator(M1,17),C1[i]);\\ > Eigenvalues(T1);} \magmaout{\{\}} The base field is strictly smaller than the residue field in this example and the operator \magma{T1} cannot be diagonalised over the base field. We check that \magma{e} is nevertheless a zero of the minimal polynomial of \magma{T1}.\\ \magma{> Evaluate(MinimalPolynomial(T1),e);} \magmaout{0} The precise usage of the package is described in the following sections. } \subsection{Hecke algebra computation} \subsubsection{The modular form format}\label{secmodforms} In the package, modular forms are often represented by the following record.\\ \noindent\magma{ModularFormFormat := recformat <} \begin{longtable}{ll} \magma{Character} &\magma{: GrpDrchElt,}\\ \magma{Weight} &\magma{: RngIntElt,}\\ \magma{CoefficientFunction} &\magma{: Map, }\\ \magma{ImageName} &\magma{: MonStgElt,}\\ \magma{Polynomial} &\magma{: RngUPolElt } \end{longtable} \magma{>;} The fields \magma{Character} and \magma{Weight} have the obvious meaning. Sometimes, the image of the associated Galois representation is known as an abstract group. Then that name is recorded in \magma{ImageName}, e.g.\ \magma{A\_5} or \magma{D\_3}. In some cases, a polynomial is known whose splitting field is the number field cut out by the Galois representation. Then the polynomial is stored in \magma{Polynomial}. The cases in which polynomials are known are usually icosahedral ones. The \magma{CoefficientFunction} is a function from the integers to a polynomial ring. For all primes $l$ different from the characteristic and not dividing the level of the modular form (i.e.\ the modulus of the \magma{Character}), the coefficient function should return the minimal polynomial of the $l$-th coefficient in the $q$-expansion of the modular form in question. \subsubsection{Dihedral modular forms}\label{secdih} Eigenforms whose associated Galois representations take dihedral groups as images provide an important source of examples, in many contexts. These eigenforms are called {\em dihedral}. The big advantage is that their Galois representation, and hence their $q$-coefficients, can be computed using class field theory. That enables one to exhibit Galois representations in the context of modular forms with certain number theoretic properties. The property for which these functions were initially created is that the representations should be unramified in the characteristic, say $p$, and that $p$ is completely split in the number field cut out by the representation. 
We consider dihedral representations whose determinant is the Legendre symbol of a quadratic field $\mathbb{Q}(\sqrt{N})$. The representations produced by the functions to be described are obtained by induction of an unramified character $\chi$ of~$\mathbb{Q}(\sqrt{N})$ whose conjugate by the non-trivial element of the Galois group of $\mathbb{Q}(\sqrt{N})$ over $\mathbb{Q}$ is assumed to be $\chi^{-1}$. \intr{intrinsic GetLegendre (N :: RngIntElt, K :: FldFin ) -> GrpDrchElt} For an odd positive integer \magma{N}, this function returns the element of \magma{DirichletGroup(Abs(N),K)} (with \magma{K} a finite field of characteristic different from $2$) which corresponds to the Legendre symbol $p \mapsto \left(\frac{\pm N}{p}\right)$. If \magma{N} is $1$ mod $4$ the sign is $+1$, and $-1$ otherwise. \intr{intrinsic DihedralForms (N :: RngIntElt : \\ \hspace*{2cm} ListOfPrimes := [], bound := 100, odd\_only := true, quad\_disc := 0, \\ \hspace*{2cm} completely\_split := true, all\_conjugacy\_classes := true ) -> Rec} This function computes all modular forms (in the sense of Section~\ref{secmodforms}) of level \magma{N} and weight~$p$ over a finite field of characteristic $p$ that come from dihedral representations whose determinant is the Legendre symbol of the quadratic field $K=\mathbb{Q}(\sqrt{\pm \text{\magma{quad\_disc}}})$ and which are obtained by induction of an unramified character of~$K$. If \magma{quad\_disc} is $1$ mod $4$ the sign is $+1$, and $-1$ otherwise. If \magma{quad\_disc} is $0$, the value of \magma{N} is used. If the option \magma{completely\_split} is set, only those representations are returned which are completely split at~$p$. If the option \magma{ListOfPrimes} is assigned a non-empty list of primes, only those primes are considered as the characteristic. If it is the empty list, all primes $p$ up to the \magma{bound} are taken into consideration. If the option \magma{odd\_only} is true, only odd Galois representations are returned. If the option \magma{all\_conjugacy\_classes} is true, each unramified character as above up to Galois conjugacy and up to taking inverses is used. Otherwise, a single choice is made. That there may be non-conjugate characters cutting out the same number field is due to the fact that there may be non-conjugate elements of the same order in the multiplicative group of a finite field. \subsubsection{Icosahedral modular forms}\label{secicosa} Eigenforms whose attached Galois representations take the group $A_5$ as projective images are called {\em icosahedral}. Since extensive tables of $A_5$-extensions of the rationals are available, one can consider icosahedral Galois representations which one knows very well. That allows one to test certain conjectures concerning modular forms on icosahedral ones. We note the isomorphism $A_5 \cong \mathrm{SL}_2(\mathbb{F}_4)$. Thus, $A_5$-extensions of the rationals give rise to icosahedral Galois representations in characteristic $2$ which (should) come from modular forms mod $2$. It would also be possible to use certain other primes, but this has not been implemented. \intr{intrinsic A5Form (f :: RngUPolElt) -> Rec} Returns the icosahedral form in characteristic $2$ and weight $2$ of smallest predicted level corresponding to the polynomial \magma{f}, which is expected to be of degree~$5$ and to have Galois group~$A_5$. No checks on \magma{f} are performed.
\subsubsection{The Hecke algebra format}
The data concerning the Hecke algebra of an eigenform that is computed by the function \magma{HeckeAlgebras} is stored in a record of the following form.
\noindent\magma{AlgebraData := recformat <}
\begin{longtable}{ll}
\magma{Level} &\magma{: RngIntElt,}\\
\magma{Weight} &\magma{: RngIntElt,}\\
\magma{Characteristic} &\magma{: RngIntElt,}\\
\magma{BaseFieldDegree} &\magma{: RngIntElt,}\\
\magma{CharacterOrder} &\magma{: RngIntElt,}\\
\magma{CharacterConductor} &\magma{: RngIntElt,}\\
\magma{CharacterIndex} &\magma{: RngIntElt,}\\
\magma{AlgebraFieldDegree} &\magma{: RngIntElt,}\\
\magma{ResidueDegree} &\magma{: RngIntElt,}\\
\magma{Dimension} &\magma{: RngIntElt,}\\
\magma{GorensteinDefect} &\magma{: RngIntElt,}\\
\magma{EmbeddingDimension} &\magma{: RngIntElt,}\\
\magma{NilpotencyOrder} &\magma{: RngIntElt,}\\
\magma{Relations} &\magma{: Tup,}\\
\magma{NumberGenUsed} &\magma{: RngIntElt,}\\
\magma{ImageName} &\magma{: MonStgElt,}\\
\magma{Polynomial} &\magma{: RngUPolElt}
\end{longtable}
\magma{>;}
\magma{Level} and \magma{Weight} have the obvious meaning. Let $K$ be the base field for the space of modular symbols used. It is (expected to be) a finite field. Then \magma{Characteristic} is the characteristic of $K$ and \magma{BaseFieldDegree} is the degree of $K$ over its prime field. The entries \magma{CharacterOrder}, \magma{CharacterConductor} and \magma{CharacterIndex} concern the Dirichlet character for which the modular symbols have been computed. The latter field is the index of the character in \magma{Elements(DirichletGroup($\cdot$))}. Note that this index might change between different versions of {\sc Magma}. The fields \magma{ResidueDegree} (over the prime field), \magma{Dimension} and \magma{GorensteinDefect} have their obvious meaning for the Hecke algebra in question. The tuple
\begin{quote}
\magma{<AlgebraFieldDegree, EmbeddingDimension, NilpotencyOrder, Relations>}
\end{quote}
contains the data from which \magma{AffineAlgebra} can recreate the Hecke algebra up to isomorphism. \magma{NumberGenUsed} indicates the number of generators used by the package for the computation of the Hecke algebra. This number is usually much smaller than the Sturm bound. \magma{ImageName} and \magma{Polynomial} have the same meaning as in the record \magma{ModularFormFormat}.
\subsubsection{Hecke algebras}
\intr{intrinsic HeckeAlgebras (eps :: GrpDrchElt, weight :: RngIntElt :\\ \hspace*{2cm} UserBound := 0, first\_test := 3, test\_interval := 1, when\_test\_p := 3, \\ \hspace*{2cm} when\_test\_bad := 4, test\_sequence := [], dimension\_factor := 2,\\ \hspace*{2cm} ms\_space := 0, cuspidal := true, DegreeBound := 0, OperatorList := [], \\ \hspace*{2cm} over\_residue\_field := true, try\_minimal := true, force\_local := false, \\ \hspace*{.5cm} ) -> SeqEnum, SeqEnum, ModSym, Tup, Tup\\ intrinsic HeckeAlgebras ( t :: Rec :\\ \hspace*{2cm} UserBound := 0, first\_test := 3, test\_interval := 1, when\_test\_p := 3, \\ \hspace*{2cm} when\_test\_bad := 4, test\_sequence := [], dimension\_factor := 2,\\ \hspace*{2cm} ms\_space := 0, cuspidal := true, DegreeBound := 0, OperatorList := [], \\ \hspace*{2cm} over\_residue\_field := true, try\_minimal := true, force\_local := false, \\ \hspace*{.5cm} ) -> SeqEnum, SeqEnum, ModSym, Tup, Tup}
These functions compute all local Hecke algebras (up to Galois conjugacy) in the specified \magma{weight} for the given Dirichlet character \magma{eps}, respectively those corresponding to the modular form \magma{t} given by a record of type \magma{ModularFormFormat}. The functions return 5 values \magma{A,B,C,D,E}. \magma{A} contains a list of records of type \magma{AlgebraData} describing the local Hecke algebra factors. \magma{B} is a list containing the local Hecke algebra factors as matrix algebras. \magma{C} is the space of modular symbols used in the computations. \magma{D} is a tuple containing the base change tuples describing the local Hecke factors. We need to know \magma{D} in order to compute matrices representing Hecke operators for the local factors. Finally, \magma{E} contains a tuple consisting of all Hecke operators computed so far for each local factor of the Hecke algebra. The usage in practice is described in the example at the beginning of this manual. We now explain the different options in detail. The modular symbols space to be used in the computations can be determined as follows. The option \magma{ms\_space} can be set to the values $1$ (the plus-space), $-1$ (the minus-space) and $0$ (the full space). Whether the restriction to the cuspidal subspace is taken is determined by \magma{cuspidal}. It is not necessary to pass to the cuspidal subspace, for example, if a cusp form is given by a coefficient function (see the description of the record \magma{ModularFormFormat}). In some cases, a list of Hecke operators on the modular symbols space in question may already have been computed. In order to prevent {\sc Magma} from recomputing them, they may be passed on to the function using the option \magma{OperatorList}. Often, one wants to compute the local Hecke algebra of a modular form for which the degree of the coefficient field over its prime field is known; e.g.\ in the case of an icosahedral form in characteristic $2$ for the trivial Dirichlet character the coefficient field is $\mathbb{F}_4$. By assigning a positive value to the option \magma{DegreeBound} the function will automatically discard any systems of eigenvalues beyond that bound, which speeds up the computations. One must be a bit careful with this option, as there may be cases in which the bound is not respected at ``bad primes''. But it usually suffices to take twice the degree of the coefficient field, e.g.\ one chooses \magma{DegreeBound := 4} in the icosahedral example just described.
If no system of eigenvalues should be discarded for degree reasons, one must set \magma{DegreeBound := 0}. All of the options \magma{first\_test}, \magma{test\_interval}, \magma{when\_test\_p}, \magma{when\_test\_bad}, \magma{test\_sequence}, \magma{force\_local}, \magma{dimension\_factor} and \magma{UserBound} concern the stop criterion. Theoretically, the Sturm bound (see \magma{HeckeBound}) tells us up to which bound Hecke operators must be computed in order to be sure that they generate the whole Hecke algebra. In practice, however, the algorithm can often determine by itself when enough Hecke operators have been computed to generate the algebra. That number is usually much smaller than the Sturm bound. The Sturm bound can be overwritten by assigning a positive number to \magma{UserBound}. The stop criterion is the following. Let $M$ be the modular symbols space used and $S$ the set of Hecke operators computed so far. Then $M = \bigoplus_{i=1}^r M_i$ (for some $r$) such that each $M_i$ is respected by the Hecke operators and the minimal polynomial of each $T \in S$ restricted to $M_i$ is a power of an irreducible polynomial (i.e.\ each $M_i$ is a primary space for the action of the algebra generated by all elements of $S$). Let $A_i$ be the algebra generated by $T|_{M_i}$ for all $T \in S$. One knows (in many cases, and in all cases of interest) that $A_i$ is equal to a direct product of local Hecke algebras if one has the equality $$ f \times \dim (A_i) = \text{ dimension of } M_i.$$ Here, $f$ is given by \magma{dimension\_factor} and should be $1$ if the plus-space or the minus-space of modular symbols is used, and $2$ otherwise. The correct assignment of \magma{dimension\_factor} must be made by hand, so some experimentation may be necessary. If the stop criterion is not reached, the algorithm terminates at the Hecke bound. It may happen that, when the stop criterion is reached, one $A_i$ is isomorphic to a direct product of more than one local Hecke algebra. If in that case the option \magma{force\_local} is \magma{true}, the computation of Hecke operators is continued until each $A_i$ is isomorphic to a single Hecke factor. If \magma{force\_local} is \magma{false}, then a fast localisation algorithm is applied to each $A_i$. The option is useful when one expects only a single local Hecke algebra factor, for example when a modular form is given. In many cases of interest the Hecke operator $T_p$ with $p$ the characteristic is needed in order to generate the whole Hecke algebra. The option \magma{when\_test\_p} tells the algorithm at which step to compute $T_p$. It is very advisable to choose a small number. In practice, the stop criterion is reached after very few steps, e.g.\ 5 steps, when $T_p$ is computed early. Otherwise, the algorithm often has to continue until $T_p$ is computed, although most of the operators before did not change the generated algebra. The option \magma{when\_test\_bad} has a similar meaning for the $T_l$ for primes $l$ dividing the level. However, paying attention to them is only required when the modular form is old at~$l$. Moreover, one can assign a list of primes to \magma{test\_sequence}. The algorithm will then start with the Hecke operators indicated by that sequence, and then continue with the others. The option \magma{first\_test} tells the algorithm at which step the first test for the stop criterion is to be performed. The next test is then carried out after \magma{test\_interval} many steps, and so on.
These numbers should also be chosen small, unless the dimension test takes much time (which is rare) and one therefore wants to perform it less often, at the price of possibly computing more Hecke operators than necessary (which is time consuming). The option \magma{over\_residue\_field} tells the algorithm whether at the end of the computation the local Hecke factors should be base changed to their residue field. If that is done, only one of the conjugate local factors of the base changed algebra is retained. Finally, the option \magma{try\_minimal} is passed on to \magma{AffineAlgebra} when the output is generated. Calling that function with the option set \magma{true} can sometimes be very time consuming, but makes the output much shorter.
\subsubsection{Storage functions}
The package provides functions to store a list whose elements are records of type \magma{AlgebraData} in a file, and to re-read it. The usage of these functions is explained in the example at the beginning of this manual.
\intr{intrinsic CreateStorageFile ( filename :: MonStgElt )}
This function prepares the file \magma{filename} for storing the data.
\intr{intrinsic StoreData (filename :: MonStgElt, forms :: SeqEnum)}
This function appends the list \magma{forms} of Hecke algebra data to the file \magma{filename}. That file must have been created by \magma{CreateStorageFile}.
\intr{intrinsic StoreData (filename :: MonStgElt, form :: Rec)}
This function appends the Hecke algebra data \magma{form} to the file \magma{filename}. That file must have been created by \magma{CreateStorageFile}.
\intr{intrinsic RecoverData (LoadIn :: SeqEnum, LoadInRel :: Tup ) -> SeqEnum}
In order to read Hecke algebra data from file \magma{``name''}, proceed as follows:\\
\magma{\hspace*{.5cm}> load ``name'';\\ \hspace*{.5cm}> readData := RecoverData(LoadIn,LoadInRel).}\\
Then \magma{readData} will contain a list whose elements are records of type \magma{AlgebraData}.
\subsubsection{Output functions}
\intr{intrinsic HeckeAlgebraPrint (ha :: SeqEnum)\\ intrinsic HeckeAlgebraPrint1 (ha :: SeqEnum)}
These functions print part of the data stored in the list \magma{ha} of records of type \magma{AlgebraData} in a human readable format.
\intr{intrinsic GetLevel (a :: Rec) -> Any\\ intrinsic GetWeight (a :: Rec) -> Any\\ intrinsic GetCharacteristic (a :: Rec) -> Any\\ intrinsic GetResidueDegree (a :: Rec) -> Any\\ intrinsic GetDimension (a :: Rec) -> Any\\ intrinsic GetGorensteinDefect (a :: Rec) -> Any\\ intrinsic GetEmbeddingDimension (a :: Rec) -> Any\\ intrinsic GetNilpotencyOrder (a :: Rec) -> Any\\ intrinsic GetHeckeBound (a :: Rec) -> Any\\ intrinsic GetPrimesUpToHeckeBound (a :: Rec) -> Any\\ intrinsic GetNumberOperatorsUsed (a :: Rec) -> Any\\ intrinsic GetPolynomial (a :: Rec) -> Any\\ intrinsic GetImageName (a :: Rec) -> Any}
These functions return the property of the record \magma{a} of type \magma{AlgebraData} specified by the name of the function. If the corresponding attribute is not assigned, the empty string is returned.
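As an illustration of the storage and output functions (continuing the example from the beginning of this manual, where \magma{A} was returned by \magma{HeckeAlgebras}; the file name below is arbitrary), one could proceed as follows.\\
\magma{> HeckeAlgebraPrint(A);\\
> GetDimension(A[4]);\\
> CreateStorageFile(``example.dat'');\\
> StoreData(``example.dat'', A);}\\
The stored data can later be read back using \magma{load} and \magma{RecoverData} as explained above.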
\intr{intrinsic HeckeAlgebraLaTeX (ha :: SeqEnum, filename :: MonStgElt : which := [ \\ \hspace*{2cm} <GetLevel,"Level">, <GetWeight,"Wt">, <GetResidueDegree,"ResD">,\\ \hspace*{2cm} <GetDimension,"Dim">, <GetEmbeddingDimension,"EmbDim">,\\ \hspace*{2cm} <GetNilpotencyOrder,"NilO">, <GetGorensteinDefect,"GorDef">,\\ \hspace*{2cm} <GetNumberOperatorsUsed,"\#Ops">,\\ \hspace*{2cm} <GetPrimesUpToHeckeBound,"\#(p$<$HB)">, <GetImageName,"Gp"> ] )}
This function creates the LaTeX file \magma{filename} containing a longtable consisting of certain properties of the objects in \magma{ha}, which are supposed to be records of type \magma{AlgebraData}. The properties to be written are indicated by the list given in the option \magma{which} consisting of tuples \magma{<f, name>}. Here \magma{f} is a function that evaluates a record of type \magma{AlgebraData} to some Magma object which is afterwards transformed into a string using \magma{Sprint}. Examples for \magma{f} are the functions \magma{GetLevel} etc., which are described above. The \magma{name} will appear in the table header. For a sample usage, see the example at the beginning of this manual.
\subsubsection{Other functions}
\intr{intrinsic HeckeBound ( N :: RngIntElt, k :: RngIntElt ) -> RngIntElt\\ intrinsic HeckeBound ( eps :: GrpDrchElt, k :: RngIntElt ) -> RngIntElt}
These functions compute the Hecke bound for weight \magma{k} and level \magma{N}, respectively Dirichlet character \magma{eps}. Note that the Hecke bound is also often called the Sturm bound.
\subsection{Algebra handling}
\subsubsection{Affine algebras}
Let $A$ be a commutative local Artin algebra with maximal ideal $\mathfrak{m}$ over a finite field $k$. The residue field $K=A/\mathfrak{m}$ is a finite extension of~$k$. By base changing to $K$ and taking one of the conjugate local factors, we now assume that $k=K$. The {\em embedding dimension} $e$ is the $k$-dimension of $\mathfrak{m}/\mathfrak{m}^2$. By Nakayama's Lemma, this is the minimal number of generators for $\mathfrak{m}$. The name comes from the fact that there is a surjection $$ \pi: k[x_1, \dots, x_e] \twoheadrightarrow A.$$ Its kernel is called the {\em relations ideal}. By the {\em nilpotency order} we mean the maximal integer $n$ such that $\mathfrak{m}^n$ is not the zero ideal. (As the algebra is local and Artin, its maximal ideal is nilpotent.) We know that the ideal $$ J^{n+1} \text{ with } J := (x_1, \dots, x_e) $$ is in the kernel of $\pi$. So, in order to store $\pi$, we only need to store the kernel $R$ of the linear map between two finite dimensional $k$-vector spaces $$ \pi_1: k[x_1, \dots, x_e]/J^{n+1} \twoheadrightarrow A.$$ From the tuple $<k,e,n,R>$ the algebra can be recreated (up to isomorphism). Let us point out, however, that from the tuple it is not obvious whether two algebras are isomorphic. That would have to be tested after recreating the algebras. These functions are used in order to store the Hecke algebras computed by \magma{HeckeAlgebras} in a way that does not use much memory, but retains the algebra up to isomorphism.
\intr{intrinsic AffineAlgebra (A :: AlgMat : try\_minimal := true) -> RngMPolRes\\ intrinsic AffineAlgebra (A :: AlgAss : try\_minimal := true) -> RngMPolRes}
This function turns the local commutative algebra \magma{A} into an affine algebra over its residue field. In fact, the algebra is first base changed to its residue field, then for one of the conjugate local factors an affine presentation is computed.
If the option \magma{try\_minimal} is true, the number of relations will in general be smaller, but the computation time may be longer. \intr{intrinsic AffineAlgebraTup (A :: AlgMat : try\_minimal := true ) -> Tup\\ intrinsic AffineAlgebraTup (A :: AlgAss : try\_minimal := true) -> Tup} Given a commutative local Artin algebra \magma{A}, this function returns a tuple \magma{<k,e,n,R>}, consisting of the residue field \magma{k} of \magma{A}, the embedding dimension \magma{e}, the nilpotency order \magma{n} and relations \magma{R}. From these data, an affine algebra can be recreated which is isomorphic to one of the local factors of $A$ base changed to its residue field. If the option \magma{try\_minimal} is true, the number of relations will in general be smaller, but the computation time may be longer. \intr{intrinsic AffineAlgebra (form :: Rec) -> RngMPolRes} Given a record of type \magma{AlgebraData}, this function returns the corresponding Hecke algebra as an affine algebra. \intr{intrinsic AffineAlgebra (A :: Tup) -> RngMPolRes} This function turns a tuple \magma{<k,e,n,R>}, as above consisting of a field \magma{k}, two integers \magma{e}, \magma{n} (the embedding dimension and the nilpotency order) and relations \magma{R}, into an affine algebra. \subsubsection{Matrix algebra functions} \intr{intrinsic MatrixAlgebra ( L :: SeqEnum ) -> AlgMat} Given a list of matrices \magma{L}, this function returns the matrix algebra generated by the members of~\magma{L}. \intr{intrinsic RegularRepresentation ( A :: AlgMat ) -> AlgMat} This function computes the regular representation of the commutative matrix algebra \magma{A}. \intr{intrinsic CommonLowerTriangular ( A :: AlgMat ) -> AlgMat} Given a local commutative matrix algebra \magma{A}, this function returns an isomorphic matrix algebra whose matrices are all lower triangular, after a scalar extension to the residue field and taking one of the Galois conjugate factors. \noindent{\bf Base change} \intr{intrinsic BaseChange ( S :: Tup, T :: Tup ) -> Tup} This function computes the composition of the base change matrices \magma{T = <C,D>}, followed by those in \magma{S = <E,F>}. \intr{intrinsic BaseChange ( M :: Mtrx, T :: Tup ) -> Mtrx} Given a matrix \magma{M} and a tuple \magma{T = <C,D>} of base change matrices (for a subspace), this function computes the matrix of \magma{M} with respect to the basis corresponding to \magma{T}. \intr{intrinsic BaseChange ( M :: AlgMat, T :: Tup ) -> AlgMat} Given a matrix algebra \magma{M} and a tuple \magma{T = <C,D>} of base change matrices (for a subspace), this function computes the matrix algebra of \magma{M} with respect to the basis corresponding to \magma{T}. \noindent{\bf Decomposition} \intr{intrinsic Decomposition ( M :: Mtrx : DegBound := 0 ) -> Tup\\ intrinsic DecompositionUpToConjugation ( M :: Mtrx : DegBound := 0 ) -> Tup} Given a matrix \magma{M}, these functions compute a decomposition of the standard vector space such that \magma{M} acts as multiplication by a scalar on each summand. The output is a tuple consisting of base change tuples \magma{<C,D>} corresponding to the summands. With the second usage, summands conjugate under the absolute Galois group only appear once. 
\intr{intrinsic Decomposition ( L :: SeqEnum : DegBound := 0 ) -> Tup\\ intrinsic DecompositionUpToConjugation ( L :: SeqEnum : DegBound := 0) -> Tup} Given a sequence \magma{L} of commuting matrices, these functions compute a decomposition of the standard vector space such that each matrix in \magma{L} acts as multiplication by a scalar on each summand. The output is a tuple consisting of base change tuples \magma{<C,D>} corresponding to the summands. With the second usage, summands conjugate under the absolute Galois group only appear once. \intr{intrinsic Decomposition ( A :: AlgMat : DegBound := 0 ) -> Tup\\ intrinsic DecompositionUpToConjugation ( A :: AlgMat : DegBound := 0 ) -> Tup} Given a commutative matrix algebra \magma{A}, these functions compute a decomposition of the standard vector space such that each element in \magma{A} acts as multiplication by a scalar on each summand. The output is a tuple consisting of base change tuples \magma{<C,D>} corresponding to the summands. With the second usage, summands conjugate under the absolute Galois group only appear once. \intr{intrinsic AlgebraDecomposition ( A :: AlgMat : DegBound := 0 ) -> SeqEnum\\ intrinsic AlgebraDecompositionUpToConjugation ( A :: AlgMat : DegBound := 0 )\\ \hspace*{2cm} -> SeqEnum} Given a matrix algebra \magma{A} over a finite field, these functions return a local factor of \magma{A} after scalar extension to the residue field. With the second usage, factors conjugate under the absolute Galois group only appear once. \intr{intrinsic ChangeToResidueField ( A :: AlgMat ) -> SeqEnum} This function is identical to \magma{AlgebraDecompositionUpToConjugation}. \noindent{\bf Localisations} \intr{intrinsic Localisations ( L :: SeqEnum ) -> Tup, Tup\\ intrinsic Localisations ( A :: AlgMat ) -> Tup, Tup} Given a list \magma{L} of commuting matrices or a commutative matrix algebra \magma{A}, this function computes two tuples \magma{C}, \magma{D}, where \magma{C} contains a tuple consisting of the localisations of \magma{A}, respectively of the matrix algebra generated by \magma{L}, and \magma{D} consists of the corresponding base change tuples. \subsubsection{Associative algebras} \intr{intrinsic Localisations ( A :: AlgAss ) -> SeqEnum} This function returns a list of all localisations of the Artin algebra \magma{A}, which is assumed to be commutative. The output is a list of associative algebras. \subsubsection{Gorenstein defect} Let $A$ be a local Artin algebra over a field with unique maximal ideal $\mathfrak{m}$. We define the {\em Gorenstein defect} of $A$ to be $(\dim_{A/\mathfrak{m}} A[\mathfrak{m}]) -1$, which is equal to the number of $A$-module generators of the annihilator of the maximal ideal minus one. The algebra is said to be {\em Gorenstein} if its Gorenstein defect is equal to~$0$. \intr{intrinsic GorensteinDefect ( A :: RngMPolRes) -> RngIntElt\\ intrinsic GorensteinDefect ( A :: AlgAss) -> RngIntElt\\ intrinsic GorensteinDefect ( A :: AlgMat ) -> RngIntElt} These functions return the Gorenstein defect of the local commutative algebra \magma{A}. \intr{intrinsic IsGorenstein ( M :: RngMPolRes ) -> BoolElt\\ intrinsic IsGorenstein ( M :: AlgAss ) -> BoolElt\\ intrinsic IsGorenstein ( M :: AlgMat ) -> BoolElt} These functions test whether the commutative local algebra \magma{M} is Gorenstein. 
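As a final, purely schematic illustration (again continuing the introductory example), a local Hecke factor can be recreated from its stored record as an affine algebra, and its Gorenstein property can be queried directly:\\
\magma{> Alg := AffineAlgebra(A[4]);\\
> GorensteinDefect(Alg);\\
> IsGorenstein(Alg);}\\
The Gorenstein defect obtained in this way should agree with the value stored in the record, i.e.\ with \magma{GetGorensteinDefect(A[4])}.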
\section{Tables of Hecke algebras} \label{section-tables} \subsection*{Characteristic $p=2$, prime levels, dihedral} \begin{longtable}{||c|c|c|c|c|c|c|c|c|c||} \hline Level & Wt & ResD & Dim & EmbDim & NilO & GorDef & \#Ops & \#(p$<$HB) & Gp \\ \hline\endhead\hline\endfoot\hline\hline\endlastfoot 431 {}\footnote{See \cite{kilford-nongorenstein}.} & 2 & 1 & 4 & 3 & 1 & 2 & 6 & 20 & $D_{3}$ \\ 503 {}\footnote{See \cite{kilford-nongorenstein}.} & 2 & 1 & 4 & 3 & 1 & 2 & 3 & 23 & $D_{3}$ \\ 1319 & 2 & 2 & 4 & 3 & 1 & 2 & 6 & 47 & $D_{5}$ \\ 1439 & 2 & 1 & 4 & 3 & 1 & 2 & 4 & 52 & $D_{3}$ \\ 1559 & 2 & 1 & 4 & 3 & 1 & 2 & 7 & 55 & $D_{3}$ \\ 1607 & 2 & 1 & 4 & 3 & 1 & 2 & 3 & 56 & $D_{3}$ \\ 1759 & 2 & 1 & 4 & 3 & 1 & 2 & 5 & 62 & $D_{3}$ \\ 1823 & 2 & 2 & 4 & 3 & 1 & 2 & 3 & 62 & $D_{5}$ \\ 1879 & 2 & 1 & 16 & 4 & 5 & 2 & 6 & 65 & $D_{3}$ \\ 1951 & 2 & 1 & 4 & 3 & 1 & 2 & 4 & 66 & $D_{3}$ \\ 1999 & 2 & 1 & 4 & 3 & 1 & 2 & 5 & 67 & $D_{3}$ \\ 2039 & 2 & 2 & 6 & 3 & 2 & 2 & 4 & 68 & $D_{5}$ \\ 2089 {}\footnote{See \cite{kilford-nongorenstein}.} & 2 & 1 & 18 & 4 & 7 & 2 & 5 & 70 & $D_{3}$ \\ 2351 & 2 & 1 & 6 & 3 & 2 & 2 & 6 & 77 & $D_{3}$ \\ 3407 & 2 & 1 & 16 & 4 & 5 & 2 & 7 & 103 & $D_{3}$ \\ 3527 & 2 & 2 & 4 & 3 & 1 & 2 & 3 & 107 & $D_{5}$ \\ 3623 & 2 & 1 & 6 & 3 & 2 & 2 & 3 & 110 & $D_{3}$ \\ 3967 & 2 & 1 & 14 & 4 & 4 & 2 & 4 & 121 & $D_{3}$ \\ 4231 & 2 & 1 & 4 & 3 & 1 & 2 & 10 & 126 & $D_{3}$ \\ 4481 & 2 & 1 & 8 & 4 & 2 & 2 & 7 & 132 & $D_{3}$ \\ 4799 & 2 & 1 & 12 & 4 & 3 & 2 & 5 & 139 & $D_{3}$ \\ 4943 & 2 & 2 & 6 & 3 & 2 & 2 & 4 & 143 & $D_{5}$ \\ 5167 & 2 & 1 & 6 & 3 & 2 & 2 & 5 & 149 & $D_{3}$ \\ 5591 & 2 & 1 & 12 & 4 & 3 & 2 & 8 & 158 & $D_{3}$ \\ 5591 & 2 & 3 & 4 & 3 & 1 & 2 & 5 & 158 & $D_{9}$ \\ 5791 & 2 & 1 & 8 & 3 & 3 & 2 & 8 & 162 & $D_{3}$ \\ 6199 & 2 & 1 & 16 & 4 & 5 & 2 & 7 & 174 & $D_{3}$ \\ 6287 & 2 & 1 & 6 & 3 & 2 & 2 & 4 & 175 & $D_{3}$ \\ 6343 & 2 & 1 & 12 & 4 & 3 & 2 & 5 & 177 & $D_{3}$ \\ 6551 & 2 & 1 & 6 & 3 & 2 & 2 & 7 & 182 & $D_{3}$ \\ 6823 & 2 & 1 & 4 & 3 & 1 & 2 & 4 & 189 & $D_{3}$ \\ 6911 & 2 & 1 & 4 & 3 & 1 & 2 & 4 & 190 & $D_{3}$ \\ 6967 & 2 & 1 & 12 & 4 & 3 & 2 & 8 & 191 & $D_{3}$ \\ 7057 & 2 & 1 & 16 & 4 & 4 & 2 & 6 & 193 & $D_{3}$ \\ 7103 & 2 & 3 & 4 & 3 & 1 & 2 & 3 & 194 & $D_{7}$ \\ 7151 & 2 & 2 & 4 & 3 & 1 & 2 & 4 & 195 & $D_{5}$ \\ 7351 & 2 & 1 & 12 & 4 & 3 & 2 & 9 & 200 & $D_{3}$ \\ 7487 & 2 & 2 & 6 & 3 & 2 & 2 & 3 & 203 & $D_{5}$ \\ 7583 & 2 & 1 & 4 & 3 & 1 & 2 & 6 & 205 & $D_{3}$ \\ 7951 & 2 & 1 & 4 & 3 & 1 & 2 & 6 & 216 & $D_{3}$ \\ 8111 & 2 & 5 & 4 & 3 & 1 & 2 & 5 & 217 & $D_{11}$ \\ 8167 & 2 & 1 & 6 & 3 & 2 & 2 & 4 & 218 & $D_{3}$ \\ 8191 & 2 & 2 & 6 & 3 & 2 & 2 & 5 & 218 & $D_{5}$ \\ 8623 & 2 & 1 & 4 & 3 & 1 & 2 & 8 & 227 & $D_{3}$ \\ 8713 & 2 & 1 & 16 & 4 & 4 & 2 & 4 & 231 & $D_{3}$ \\ 9127 & 2 & 1 & 16 & 4 & 5 & 2 & 4 & 240 & $D_{3}$ \\ 9281 & 2 & 1 & 12 & 4 & 3 & 2 & 8 & 243 & $D_{3}$ \\ 9439 & 2 & 1 & 8 & 3 & 3 & 2 & 4 & 248 & $D_{3}$ \\ 9623 & 2 & 2 & 6 & 3 & 2 & 2 & 5 & 252 & $D_{5}$ \\ 9967 & 2 & 1 & 6 & 3 & 2 & 2 & 4 & 260 & $D_{3}$ \\ 10079 & 2 & 1 & 12 & 4 & 3 & 2 & 8 & 263 & $D_{3}$ \\ 10103 & 2 & 1 & 4 & 3 & 1 & 2 & 4 & 263 & $D_{3}$ \\ 10391 & 2 & 1 & 6 & 3 & 2 & 2 & 9 & 269 & $D_{3}$ \\ 10391 & 2 & 3 & 4 & 3 & 1 & 2 & 8 & 269 & $D_{9}$ \\ 10487 & 2 & 1 & 10 & 3 & 4 & 2 & 3 & 272 & $D_{3}$ \\ 10567 & 2 & 1 & 12 & 3 & 5 & 2 & 5 & 274 & $D_{3}$ \\ 10639 & 2 & 1 & 4 & 3 & 1 & 2 & 4 & 274 & $D_{3}$ \\ 10663 & 2 & 1 & 6 & 3 & 2 & 2 & 9 & 275 & $D_{3}$ \\ 10687 & 2 & 1 & 6 & 3 & 2 & 2 & 4 & 275 & $D_{3}$ \\ 10799 & 2 & 1 & 14 & 3 & 6 & 2 & 8 
& 278 & $D_{3}$ \\ 11159 & 2 & 3 & 4 & 3 & 1 & 2 & 4 & 283 & $D_{7}$ \\ 11321 & 2 & 1 & 8 & 4 & 2 & 2 & 9 & 289 & $D_{3}$ \\ 11743 & 2 & 1 & 4 & 3 & 1 & 2 & 5 & 297 & $D_{3}$ \\ 13063 & 2 & 1 & 6 & 3 & 2 & 2 & 5 & 326 & $D_{3}$ \\ 13487 & 2 & 1 & 8 & 3 & 3 & 2 & 6 & 334 & $D_{3}$ \\ 13999 & 2 & 3 & 4 & 3 & 1 & 2 & 4 & 345 & $D_{7}$ \\ 14303 & 2 & 1 & 4 & 3 & 1 & 2 & 5 & 354 & $D_{3}$ \\ 14543 & 2 & 3 & 4 & 3 & 1 & 2 & 3 & 360 & $D_{7}$ \\ 14639 & 2 & 2 & 4 & 3 & 1 & 2 & 5 & 361 & $D_{5}$ \\ 15121 & 2 & 2 & 8 & 4 & 2 & 2 & 6 & 369 & $D_{5}$ \\ 15193 & 2 & 1 & 16 & 4 & 6 & 2 & 4 & 370 & $D_{3}$ \\ 15271 & 2 & 1 & 6 & 3 & 2 & 2 & 8 & 372 & $D_{3}$ \\ 15383 & 2 & 2 & 6 & 3 & 2 & 2 & 3 & 375 & $D_{5}$ \\ 15391 & 2 & 1 & 6 & 3 & 2 & 2 & 7 & 375 & $D_{3}$ \\ 15551 & 2 & 1 & 4 & 3 & 1 & 2 & 11 & 377 & $D_{3}$ \\ 15607 & 2 & 1 & 6 & 3 & 2 & 2 & 8 & 378 & $D_{3}$ \\ 15641 & 2 & 1 & 32 & 4 & 7 & 2 & 8 & 378 & $D_{3}$ \\ 15919 & 2 & 1 & 26 & 4 & 7 & 2 & 6 & 383 & $D_{3}$ \\ 15991 & 2 & 1 & 12 & 4 & 3 & 2 & 9 & 386 & $D_{3}$ \\ 16127 & 2 & 3 & 4 & 3 & 1 & 2 & 3 & 390 & $D_{7}$ \\ 16369 & 2 & 1 & 24 & 4 & 6 & 2 & 6 & 398 & $D_{3}$ \\ 16487 & 2 & 1 & 6 & 3 & 2 & 2 & 4 & 400 & $D_{3}$ \\ 16649 & 2 & 1 & 16 & 4 & 4 & 2 & 8 & 403 & $D_{3}$ \\ 17471 & 2 & 1 & 6 & 3 & 2 & 2 & 11 & 421 & $D_{3}$ \\ 18047 & 2 & 1 & 30 & 4 & 6 & 2 & 4 & 431 & $D_{3}$ \\ 18097 & 2 & 1 & 36 & 5 & 6 & 2 & 9 & 432 & $D_{3}$ \\ 18127 & 2 & 1 & 12 & 4 & 3 & 2 & 5 & 433 & $D_{3}$ \\ 18257 & 2 & 1 & 12 & 4 & 4 & 2 & 4 & 436 & $D_{3}$ \\ 19079 & 2 & 1 & 4 & 3 & 1 & 2 & 4 & 449 & $D_{3}$ \\ 19079 & 2 & 3 & 4 & 3 & 1 & 2 & 4 & 449 & $D_{9}$ \\ 19441 & 2 & 1 & 16 & 4 & 4 & 2 & 7 & 457 & $D_{3}$ \\ 19543 & 2 & 2 & 4 & 3 & 1 & 2 & 4 & 460 & $D_{5}$ \\ 19583 & 2 & 1 & 4 & 3 & 1 & 2 & 7 & 461 & $D_{3}$ \\ 19751 & 2 & 1 & 4 & 3 & 1 & 2 & 4 & 462 & $D_{3}$ \\ 19919 & 2 & 1 & 16 & 4 & 5 & 2 & 7 & 467 & $D_{3}$ \\ 19927 & 2 & 1 & 6 & 3 & 2 & 2 & 4 & 467 & $D_{3}$ \\ 20183 & 2 & 2 & 4 & 3 & 1 & 2 & 7 & 474 & $D_{5}$ \\ 20599 & 2 & 1 & 6 & 3 & 2 & 2 & 6 & 481 & $D_{3}$ \\ 20759 & 2 & 1 & 18 & 4 & 6 & 2 & 9 & 483 & $D_{3}$ \\ 20887 & 2 & 2 & 6 & 3 & 2 & 2 & 4 & 487 & $D_{5}$ \\ 21319 & 2 & 3 & 4 & 3 & 1 & 2 & 9 & 497 & $D_{7}$ \\ 21647 & 2 & 1 & 6 & 3 & 2 & 2 & 6 & 504 & $D_{3}$ \\ 21737 & 2 & 1 & 24 & 5 & 4 & 2 & 7 & 507 & $D_{3}$ \\ 21839 & 2 & 2 & 4 & 3 & 1 & 2 & 7 & 509 & $D_{5}$ \\ 22159 & 2 & 2 & 4 & 3 & 1 & 2 & 9 & 515 & $D_{5}$ \\ 22511 & 2 & 1 & 4 & 3 & 1 & 2 & 6 & 522 & $D_{3}$ \\ 22567 & 2 & 3 & 4 & 3 & 1 & 2 & 4 & 523 & $D_{7}$ \\ 22751 & 2 & 3 & 4 & 3 & 1 & 2 & 4 & 526 & $D_{7}$ \\ 23159 & 2 & 1 & 20 & 4 & 6 & 2 & 6 & 535 & $D_{3}$ \\ 23159 & 2 & 3 & 6 & 3 & 2 & 2 & 5 & 535 & $D_{9}$ \\ 23279 & 2 & 1 & 8 & 3 & 3 & 2 & 4 & 537 & $D_{3}$ \\ 23321 & 2 & 1 & 12 & 4 & 4 & 2 & 7 & 538 & $D_{3}$ \\ 23417 & 2 & 1 & 26 & 4 & 7 & 2 & 10 & 539 & $D_{3}$ \\ 23567 & 2 & 1 & 4 & 3 & 1 & 2 & 3 & 544 & $D_{3}$ \\ 23687 & 2 & 3 & 4 & 3 & 1 & 2 & 3 & 548 & $D_{7}$ \\ 23743 & 2 & 2 & 4 & 3 & 1 & 2 & 4 & 548 & $D_{5}$ \\ 24151 & 2 & 2 & 4 & 3 & 1 & 2 & 5 & 556 & $D_{5}$ \\ 24281 & 2 & 1 & 16 & 4 & 4 & 2 & 5 & 557 & $D_{3}$ \\ 24439 & 2 & 2 & 6 & 3 & 2 & 2 & 7 & 561 & $D_{5}$ \\ 24847 & 2 & 2 & 4 & 3 & 1 & 2 & 8 & 570 & $D_{5}$ \\ 25031 & 2 & 1 & 8 & 3 & 3 & 2 & 7 & 573 & $D_{3}$ \\ 25111 & 2 & 1 & 6 & 3 & 2 & 2 & 8 & 574 & $D_{3}$ \\ 25247 & 2 & 1 & 6 & 3 & 2 & 2 & 10 & 575 & $D_{3}$ \\ 25409 & 2 & 1 & 8 & 4 & 2 & 2 & 8 & 580 & $D_{3}$ \\ 25439 & 2 & 1 & 4 & 3 & 1 & 2 & 6 & 580 & $D_{3}$ \\ 25447 & 2 & 1 & 6 & 3 & 2 & 2 & 11 & 581 & $D_{3}$ \\ 25793 & 2 & 1 
& 16 & 4 & 4 & 2 & 5 & 590 & $D_{3}$ \\ 26431 & 2 & 2 & 6 & 3 & 2 & 2 & 4 & 599 & $D_{5}$ \\ 26839 & 2 & 3 & 4 & 3 & 1 & 2 & 5 & 607 & $D_{7}$ \\ 26959 & 2 & 1 & 6 & 3 & 2 & 2 & 23 & 610 & $D_{3}$ \\ 27143 & 2 & 1 & 4 & 3 & 1 & 2 & 4 & 615 & $D_{3}$ \\ 27143 & 2 & 3 & 4 & 3 & 1 & 2 & 3 & 615 & $D_{9}$ \\ 27647 & 2 & 1 & 26 & 4 & 10 & 2 & 7 & 623 & $D_{3}$ \\ 27673 & 2 & 1 & 8 & 4 & 2 & 2 & 7 & 623 & $D_{3}$ \\ 27743 & 2 & 3 & 4 & 3 & 1 & 2 & 3 & 624 & $D_{7}$ \\ 28031 & 2 & 2 & 6 & 3 & 2 & 2 & 5 & 631 & $D_{5}$ \\ 28031 & 2 & 1 & 8 & 3 & 3 & 2 & 5 & 631 & $D_{3}$ \\ 28031 & 2 & 4 & 4 & 3 & 1 & 2 & 7 & 631 & $D_{15}$ \\ 28279 & 2 & 1 & 8 & 3 & 3 & 2 & 4 & 635 & $D_{3}$ \\ 28279 & 2 & 1 & 26 & 4 & 8 & 2 & 14 & 635 & $D_{3}$ \\ 28279 & 2 & 1 & 4 & 3 & 1 & 2 & 11 & 635 & $D_{3}$ \\ 28279 & 2 & 1 & 6 & 3 & 2 & 2 & 6 & 635 & $D_{3}$ \\ 28703 & 2 & 1 & 20 & 4 & 5 & 2 & 5 & 642 & $D_{3}$ \\ 28759 & 2 & 1 & 8 & 3 & 3 & 2 & 5 & 645 & $D_{3}$ \\ 29023 & 2 & 1 & 4 & 3 & 1 & 2 & 8 & 650 & $D_{3}$ \\ 29287 & 2 & 2 & 8 & 3 & 3 & 2 & 4 & 653 & $D_{5}$ \\ 29311 & 2 & 3 & 6 & 3 & 2 & 2 & 5 & 653 & $D_{7}$ \\ 29399 & 2 & 1 & 6 & 3 & 2 & 2 & 7 & 654 & $D_{3}$ \\ 29567 & 2 & 1 & 6 & 3 & 2 & 2 & 6 & 657 & $D_{3}$ \\ 29879 & 2 & 2 & 4 & 3 & 1 & 2 & 5 & 666 & $D_{5}$ \\ 29959 & 2 & 1 & 6 & 3 & 2 & 2 & 4 & 668 & $D_{3}$ \\ 29959 & 2 & 3 & 4 & 3 & 1 & 2 & 4 & 668 & $D_{9}$ \\ 30223 & 2 & 1 & 8 & 3 & 3 & 2 & 5 & 674 & $D_{3}$ \\ 30367 & 2 & 2 & 6 & 3 & 2 & 2 & 5 & 677 & $D_{5}$ \\ 30431 & 2 & 2 & 4 & 3 & 1 & 2 & 4 & 677 & $D_{5}$ \\ 30559 & 2 & 3 & 4 & 3 & 1 & 2 & 9 & 680 & $D_{7}$ \\ 30727 & 2 & 1 & 6 & 3 & 2 & 2 & 9 & 685 & $D_{3}$ \\ 30911 & 2 & 2 & 4 & 3 & 1 & 2 & 5 & 686 & $D_{5}$ \\ 31079 & 2 & 2 & 4 & 3 & 1 & 2 & 5 & 690 & $D_{5}$ \\ 31159 & 2 & 1 & 16 & 4 & 5 & 2 & 6 & 691 & $D_{3}$ \\ 31247 & 2 & 6 & 4 & 3 & 1 & 2 & 3 & 692 & $D_{13}$ \\ 31271 & 2 & 1 & 6 & 3 & 2 & 2 & 9 & 693 & $D_{3}$ \\ 31321 & 2 & 3 & 8 & 4 & 2 & 2 & 7 & 693 & $D_{7}$ \\ 31513 & 2 & 1 & 16 & 4 & 4 & 2 & 4 & 697 & $D_{3}$ \\ 31543 & 2 & 2 & 4 & 3 & 1 & 2 & 5 & 697 & $D_{5}$ \\ 31847 & 2 & 5 & 4 & 3 & 1 & 2 & 3 & 703 & $D_{11}$ \\ 32009 & 2 & 1 & 12 & 4 & 3 & 2 & 9 & 706 & $D_{3}$ \\ 32143 & 2 & 3 & 4 & 3 & 1 & 2 & 4 & 708 & $D_{7}$ \\ 32183 & 2 & 3 & 4 & 3 & 1 & 2 & 3 & 708 & $D_{7}$ \\ 32327 & 2 & 1 & 16 & 4 & 5 & 2 & 5 & 710 & $D_{3}$ \\ 32327 & 2 & 3 & 4 & 3 & 1 & 2 & 3 & 710 & $D_{9}$ \\ 32353 & 2 & 1 & 20 & 4 & 6 & 2 & 5 & 711 & $D_{3}$ \\ 32401 & 2 & 1 & 20 & 4 & 6 & 2 & 6 & 712 & $D_{3}$ \\ 32479 & 2 & 5 & 4 & 3 & 1 & 2 & 4 & 714 & $D_{11}$ \\ 32647 & 2 & 3 & 4 & 3 & 1 & 2 & 4 & 719 & $D_{7}$ \\ 32687 & 2 & 5 & 4 & 3 & 1 & 2 & 3 & 720 & $D_{11}$ \\ 32719 & 2 & 2 & 4 & 3 & 1 & 2 & 14 & 721 & $D_{5}$ \\ 32887 & 2 & 2 & 6 & 3 & 2 & 2 & 6 & 724 & $D_{5}$ \\ 32983 & 2 & 1 & 4 & 3 & 1 & 2 & 4 & 725 & $D_{3}$ \\ 33223 & 2 & 1 & 4 & 3 & 1 & 2 & 9 & 732 & $D_{3}$ \\ 33343 & 2 & 1 & 4 & 3 & 1 & 2 & 5 & 733 & $D_{3}$ \\ 33679 & 2 & 1 & 4 & 3 & 1 & 2 & 5 & 738 & $D_{3}$ \\ 33767 & 2 & 3 & 4 & 3 & 1 & 2 & 3 & 739 & $D_{7}$ \\ 34351 & 2 & 2 & 4 & 3 & 1 & 2 & 8 & 753 & $D_{5}$ \\ 34471 & 2 & 2 & 12 & 4 & 3 & 2 & 6 & 756 & $D_{5}$ \\ 34487 & 2 & 1 & 14 & 4 & 4 & 2 & 7 & 756 & $D_{3}$ \\ 34591 & 2 & 2 & 4 & 3 & 1 & 2 & 5 & 757 & $D_{5}$ \\ 34679 & 2 & 1 & 6 & 3 & 2 & 2 & 4 & 758 & $D_{3}$ \\ 34679 & 2 & 3 & 4 & 3 & 1 & 2 & 4 & 758 & $D_{9}$ \\ 34721 & 2 & 1 & 12 & 4 & 4 & 2 & 9 & 759 & $D_{3}$ \\ 34847 & 2 & 1 & 16 & 4 & 5 & 2 & 6 & 762 & $D_{3}$ \\ 35401 & 2 & 1 & 56 & 5 & 8 & 2 & 10 & 776 & $D_{3}$ \\ 35591 & 2 & 9 & 4 & 3 & 1 & 2 & 7 & 779 & 
$D_{19}$ \\ 35759 & 2 & 3 & 6 & 3 & 2 & 2 & 5 & 781 & $D_{7}$ \\ 35839 & 2 & 2 & 4 & 3 & 1 & 2 & 5 & 781 & $D_{5}$ \\ 35977 & 2 & 2 & 8 & 4 & 2 & 2 & 5 & 783 & $D_{5}$ \\ 36191 & 2 & 1 & 46 & 5 & 7 & 2 & 11 & 786 & $D_{3}$ \\ 36791 & 2 & 1 & 12 & 4 & 3 & 2 & 8 & 799 & $D_{3}$ \\ 36871 & 2 & 2 & 6 & 3 & 2 & 2 & 5 & 801 & $D_{5}$ \\ 37087 & 2 & 1 & 32 & 4 & 11 & 2 & 5 & 804 & $D_{3}$ \\ 37199 & 2 & 1 & 18 & 4 & 4 & 2 & 8 & 806 & $D_{3}$ \\ 37607 & 2 & 1 & 4 & 3 & 1 & 2 & 3 & 814 & $D_{3}$ \\ 37831 & 2 & 2 & 4 & 3 & 1 & 2 & 5 & 820 & $D_{5}$ \\ 37879 & 2 & 3 & 4 & 3 & 1 & 2 & 5 & 821 & $D_{7}$ \\ 37993 & 2 & 2 & 8 & 4 & 2 & 2 & 5 & 824 & $D_{5}$ \\ 38047 & 2 & 2 & 4 & 3 & 1 & 2 & 9 & 825 & $D_{5}$ \\ 38167 & 2 & 1 & 14 & 4 & 4 & 2 & 10 & 829 & $D_{3}$ \\ 38231 & 2 & 1 & 4 & 3 & 1 & 2 & 4 & 830 & $D_{3}$ \\ 38287 & 2 & 1 & 4 & 3 & 1 & 2 & 5 & 832 & $D_{3}$ \\ 38303 & 2 & 1 & 4 & 3 & 1 & 2 & 3 & 832 & $D_{3}$ \\ 38593 & 2 & 1 & 20 & 4 & 6 & 2 & 9 & 836 & $D_{3}$ \\ 38959 & 2 & 1 & 10 & 3 & 4 & 2 & 9 & 842 & $D_{3}$ \\ 38977 & 2 & 1 & 20 & 4 & 6 & 2 & 13 & 842 & $D_{3}$ \\ 39023 & 2 & 1 & 40 & 5 & 7 & 2 & 10 & 842 & $D_{3}$ \\ 39199 & 2 & 2 & 4 & 3 & 1 & 2 & 7 & 844 & $D_{5}$ \\ 39631 & 2 & 1 & 16 & 4 & 5 & 2 & 11 & 853 & $D_{3}$ \\ 39679 & 2 & 1 & 6 & 3 & 2 & 2 & 12 & 854 & $D_{3}$ \\ 39679 & 2 & 3 & 4 & 3 & 1 & 2 & 4 & 854 & $D_{9}$ \\ \end{longtable} \subsection*{Characteristic $p=2$ and non-prime square-free levels, dihedral} \begin{longtable}{||c|c|c|c|c|c|c|c|c|c||} \hline Level & Wt & ResD & Dim & EmbDim & NilO & GorDef & \#Ops & \#(p$<$HB) & Gp \\ \hline\endhead\hline\endfoot\hline\hline\endlastfoot 1055 & 2 & 1 & 24 & 4 & 9 & 2 & 5 & 47 & $D_{3}$ \\ 1727 & 2 & 1 & 16 & 4 & 5 & 2 & 6 & 65 & $D_{3}$ \\ 2071 {}\footnote{First found by W.\ Stein.} & 2 & 1 & 8 & 4 & 2 & 2 & 5 & 73 & $D_{3}$ \\ 2631 & 2 & 1 & 40 & 4 & 17 & 2 & 6 & 106 & $D_{3}$ \\ 2991 & 2 & 1 & 40 & 4 & 17 & 2 & 4 & 121 & $D_{3}$ \\ 3095 & 2 & 1 & 40 & 4 & 17 & 2 & 4 & 114 & $D_{3}$ \\ 3431 & 2 & 1 & 24 & 5 & 4 & 2 & 6 & 107 & $D_{3}$ \\ 3471 & 2 & 1 & 16 & 5 & 3 & 2 & 5 & 146 & $D_{3}$ \\ 3639 & 2 & 1 & 28 & 4 & 11 & 2 & 5 & 140 & $D_{3}$ \\ 4031 & 2 & 1 & 16 & 4 & 4 & 2 & 6 & 125 & $D_{3}$ \\ 4087 & 2 & 1 & 8 & 4 & 2 & 2 & 6 & 126 & $D_{3}$ \\ 4119 & 2 & 1 & 12 & 4 & 3 & 2 & 4 & 156 & $D_{3}$ \\ 4415 & 2 & 1 & 8 & 4 & 2 & 2 & 6 & 153 & $D_{3}$ \\ \end{longtable} \subsection*{Characteristic $p=2$, icosahedral} \begin{longtable}{||c|c|c|c|c|c|c|c|c|c||} \hline Level & Wt & ResD & Dim & EmbDim & NilO & GorDef & \#Ops & \#(p$<$HB) & Gp \\ \hline\endhead\hline\endfoot\hline\hline\endlastfoot 89491 & 2 & 2 & 12 & 4 & 3 & 2 & 4 & 1746 & $A_5$ \\ \end{longtable} \subsection*{Characteristic $p=3$, prime levels, dihedral} \begin{longtable}{||c|c|c|c|c|c|c|c|c|c||} \hline Level & Wt & ResD & Dim & EmbDim & NilO & GorDef & \#Ops & \#(p$<$HB) & Gp \\ \hline\endhead\hline\endfoot\hline\hline\endlastfoot 1031 & 3 & 2 & 4 & 3 & 1 & 2 & 4 & 55 & $D_{5}$ \\ 1511 & 3 & 3 & 4 & 3 & 1 & 2 & 9 & 74 & $D_{7}$ \\ 2087 & 3 & 2 & 4 & 3 & 1 & 2 & 3 & 98 & $D_{5}$ \\ 4259 & 3 & 2 & 4 & 3 & 1 & 2 & 3 & 179 & $D_{5}$ \\ 4799 & 3 & 3 & 22 & 4 & 9 & 2 & 9 & 196 & $D_{7}$ \\ 5939 & 3 & 2 & 4 & 3 & 1 & 2 & 4 & 235 & $D_{5}$ \\ 6899 & 3 & 2 & 4 & 3 & 1 & 2 & 3 & 269 & $D_{5}$ \\ 6959 & 3 & 2 & 4 & 3 & 1 & 2 & 4 & 270 & $D_{5}$ \\ 7523 & 3 & 2 & 4 & 3 & 1 & 2 & 4 & 289 & $D_{5}$ \\ 7559 & 3 & 2 & 4 & 3 & 1 & 2 & 6 & 290 & $D_{5}$ \\ 7583 & 3 & 3 & 20 & 3 & 9 & 2 & 6 & 290 & $D_{7}$ \\ 8219 & 3 & 2 & 4 & 3 & 1 & 2 & 6 & 310 & $D_{5}$ \\ 8447 & 3 & 5 & 20 & 3 
& 9 & 2 & 3 & 318 & $D_{11}$ \\ 8699 & 3 & 2 & 6 & 3 & 2 & 2 & 9 & 326 & $D_{5}$ \\ 9431 & 3 & 3 & 4 & 3 & 1 & 2 & 4 & 350 & $D_{7}$ \\ 9743 & 3 & 2 & 8 & 3 & 3 & 2 & 3 & 360 & $D_{5}$ \\ 9887 & 3 & 2 & 8 & 3 & 3 & 2 & 3 & 365 & $D_{5}$ \\ 10079 & 3 & 2 & 60 & 3 & 29 & 2 & 5 & 368 & $D_{5}$ \\ 10247 & 3 & 2 & 10 & 4 & 3 & 2 & 5 & 375 & $D_{5}$ \\ 10847 & 3 & 3 & 22 & 4 & 9 & 2 & 9 & 395 & $D_{7}$ \\ 12011 & 3 & 2 & 4 & 3 & 1 & 2 & 3 & 431 & $D_{5}$ \\ 12119 & 3 & 2 & 56 & 3 & 27 & 2 & 8 & 434 & $D_{5}$ \\ 12263 & 3 & 2 & 8 & 3 & 3 & 2 & 3 & 438 & $D_{5}$ \\ 12959 & 3 & 5 & 20 & 3 & 9 & 2 & 4 & 457 & $D_{11}$ \\ 13907 & 3 & 2 & 22 & 4 & 9 & 2 & 8 & 487 & $D_{5}$ \\ 14699 & 3 & 2 & 4 & 3 & 1 & 2 & 6 & 513 & $D_{5}$ \\ 14783 & 3 & 3 & 20 & 3 & 9 & 2 & 3 & 515 & $D_{13}$ \\ 14783 & 3 & 3 & 20 & 3 & 9 & 2 & 3 & 515 & $D_{13}$ \\ \end{longtable} \subsection*{Characteristic $p=5$, prime levels, dihedral} \begin{longtable}{||c|c|c|c|c|c|c|c|c|c||} \hline Level & Wt & ResD & Dim & EmbDim & NilO & GorDef & \#Ops & \#(p$<$HB) & Gp \\ \hline\endhead\hline\endfoot\hline\hline\endlastfoot 419 & 5 & 1 & 4 & 3 & 1 & 2 & 3 & 40 & $D_{3}$ \\ 439 & 5 & 1 & 14 & 4 & 5 & 2 & 10 & 42 & $D_{3}$ \\ 491 & 5 & 1 & 4 & 3 & 1 & 2 & 3 & 46 & $D_{3}$ \\ 751 & 5 & 1 & 12 & 3 & 5 & 2 & 3 & 65 & $D_{3}$ \\ 839 & 5 & 1 & 6 & 3 & 2 & 2 & 6 & 70 & $D_{3}$ \\ 1231 & 5 & 1 & 4 & 3 & 1 & 2 & 3 & 97 & $D_{3}$ \\ 2579 & 5 & 1 & 4 & 3 & 1 & 2 & 3 & 180 & $D_{3}$ \\ 2699 & 5 & 1 & 14 & 4 & 5 & 2 & 8 & 188 & $D_{3}$ \\ 3299 & 5 & 1 & 4 & 3 & 1 & 2 & 6 & 220 & $D_{3}$ \\ 3359 & 5 & 1 & 4 & 3 & 1 & 2 & 6 & 222 & $D_{3}$ \\ 4111 & 5 & 1 & 4 & 3 & 1 & 2 & 3 & 267 & $D_{3}$ \\ 4219 & 5 & 1 & 20 & 3 & 6 & 2 & 5 & 274 & $D_{3}$ \\ 4931 & 5 & 3 & 12 & 3 & 5 & 2 & 3 & 310 & $D_{7}$ \\ 5011 & 5 & 1 & 4 & 3 & 1 & 2 & 5 & 316 & $D_{3}$ \\ 5639 & 5 & 1 & 6 & 3 & 2 & 2 & 5 & 348 & $D_{3}$ \\ 5939 & 5 & 3 & 12 & 3 & 5 & 2 & 5 & 366 & $D_{7}$ \\ 6079 & 5 & 1 & 6 & 3 & 2 & 2 & 5 & 370 & $D_{3}$ \\ 6271 & 5 & 1 & 4 & 3 & 1 & 2 & 3 & 379 & $D_{3}$ \\ 6571 & 5 & 1 & 12 & 3 & 5 & 2 & 5 & 399 & $D_{3}$ \\ 6691 & 5 & 1 & 4 & 3 & 1 & 2 & 5 & 405 & $D_{3}$ \\ 6779 & 5 & 1 & 6 & 3 & 2 & 2 & 6 & 410 & $D_{3}$ \\ 7459 & 5 & 1 & 12 & 3 & 5 & 2 & 7 & 443 & $D_{3}$ \\ 7759 & 5 & 3 & 4 & 3 & 1 & 2 & 3 & 457 & $D_{7}$ \\ 8779 & 5 & 1 & 12 & 3 & 5 & 2 & 12 & 511 & $D_{3}$ \\ 8819 & 5 & 3 & 4 & 3 & 1 & 2 & 3 & 513 & $D_{7}$ \\ 9011 & 5 & 1 & 4 & 3 & 1 & 2 & 6 & 522 & $D_{3}$ \\ \end{longtable} \subsection*{Characteristic $p=7$, prime levels, dihedral} \begin{longtable}{||c|c|c|c|c|c|c|c|c|c||} \hline Level & Wt & ResD & Dim & EmbDim & NilO & GorDef & \#Ops & \#(p$<$HB) & Gp \\ \hline\endhead\hline\endfoot\hline\hline\endlastfoot 199 & 7 & 1 & 4 & 3 & 1 & 2 & 3 & 30 & $D_{3}$ \\ 839 & 7 & 1 & 4 & 3 & 1 & 2 & 6 & 93 & $D_{3}$ \\ 1259 & 7 & 1 & 4 & 3 & 1 & 2 & 3 & 130 & $D_{3}$ \\ 1291 & 7 & 1 & 4 & 3 & 1 & 2 & 4 & 133 & $D_{3}$ \\ 1319 & 7 & 1 & 4 & 3 & 1 & 2 & 6 & 136 & $D_{3}$ \\ 1399 & 7 & 1 & 4 & 3 & 1 & 2 & 3 & 141 & $D_{3}$ \\ 1559 & 7 & 1 & 4 & 3 & 1 & 2 & 7 & 155 & $D_{3}$ \\ 1567 & 7 & 1 & 8 & 3 & 4 & 2 & 3 & 156 & $D_{3}$ \\ 1823 & 7 & 1 & 6 & 3 & 2 & 2 & 4 & 179 & $D_{3}$ \\ 1823 & 7 & 3 & 4 & 3 & 1 & 2 & 4 & 179 & $D_{9}$ \\ \end{longtable} \subsection*{Characteristic $p=11$, prime levels, dihedral} \begin{longtable}{||c|c|c|c|c|c|c|c|c|c||} \hline Level & Wt & ResD & Dim & EmbDim & NilO & GorDef & \#Ops & \#(p$<$HB) & Gp \\ \hline\endhead\hline\endfoot\hline\hline\endlastfoot 431 & 11 & 3 & 4 & 3 & 1 & 2 & 5 & 77 & $D_{7}$ \\ 563 & 11 & 1 & 4 & 3 & 1 & 2 & 
3 & 97 & $D_{3}$ \\ 1187 & 11 & 1 & 6 & 3 & 2 & 2 & 6 & 181 & $D_{3}$ \\ 1223 & 11 & 3 & 4 & 3 & 1 & 2 & 4 & 187 & $D_{7}$ \\ 1231 & 11 & 1 & 4 & 3 & 1 & 2 & 3 & 189 & $D_{3}$ \\ 1231 & 11 & 3 & 4 & 3 & 1 & 2 & 3 & 189 & $D_{9}$ \\ 1327 & 11 & 1 & 4 & 3 & 1 & 2 & 3 & 199 & $D_{5}$ \\ 1327 & 11 & 1 & 4 & 3 & 1 & 2 & 4 & 199 & $D_{5}$ \\ 1583 & 11 & 1 & 24 & 3 & 11 & 2 & 4 & 230 & $D_{3}$ \\ 1619 & 11 & 1 & 4 & 3 & 1 & 2 & 3 & 235 & $D_{3}$ \\ 1823 & 11 & 1 & 4 & 3 & 1 & 2 & 4 & 263 & $D_{3}$ \\ 2243 & 11 & 1 & 4 & 3 & 1 & 2 & 3 & 310 & $D_{3}$ \\ 2351 & 11 & 1 & 4 & 3 & 1 & 2 & 6 & 325 & $D_{3}$ \\ 2351 & 11 & 3 & 4 & 3 & 1 & 2 & 6 & 325 & $D_{9}$ \\ 2503 & 11 & 1 & 4 & 3 & 1 & 2 & 3 & 341 & $D_{3}$ \\ 2591 & 11 & 1 & 4 & 3 & 1 & 2 & 5 & 351 & $D_{3}$ \\ 2647 & 11 & 1 & 4 & 3 & 1 & 2 & 3 & 360 & $D_{3}$ \\ 2767 & 11 & 1 & 4 & 3 & 1 & 2 & 3 & 370 & $D_{3}$ \\ 2791 & 11 & 1 & 4 & 3 & 1 & 2 & 3 & 375 & $D_{3}$ \\ 3011 & 11 & 1 & 4 & 3 & 1 & 2 & 5 & 402 & $D_{3}$ \\ 3119 & 11 & 1 & 4 & 3 & 1 & 2 & 5 & 415 & $D_{3}$ \\ 3299 & 11 & 1 & 4 & 3 & 1 & 2 & 4 & 434 & $D_{3}$ \\ 3299 & 11 & 3 & 4 & 3 & 1 & 2 & 4 & 434 & $D_{9}$ \\ 3571 & 11 & 1 & 4 & 3 & 1 & 2 & 4 & 462 & $D_{3}$ \\ \end{longtable} \subsection*{Characteristic $p=13$, prime levels, dihedral} \begin{longtable}{||c|c|c|c|c|c|c|c|c|c||} \hline Level & Wt & ResD & Dim & EmbDim & NilO & GorDef & \#Ops & \#(p$<$HB) & Gp \\ \hline\endhead\hline\endfoot\hline\hline\endlastfoot 367 & 13 & 1 & 4 & 3 & 1 & 2 & 3 & 78 & $D_{3}$ \\ 439 & 13 & 2 & 4 & 3 & 1 & 2 & 4 & 91 & $D_{5}$ \\ 563 & 13 & 1 & 4 & 3 & 1 & 2 & 3 & 111 & $D_{3}$ \\ 971 & 13 & 2 & 4 & 3 & 1 & 2 & 4 & 177 & $D_{5}$ \\ 1223 & 13 & 2 & 4 & 3 & 1 & 2 & 4 & 216 & $D_{5}$ \\ 1427 & 13 & 1 & 4 & 3 & 1 & 2 & 5 & 243 & $D_{3}$ \\ 1439 & 13 & 1 & 28 & 3 & 13 & 2 & 5 & 246 & $D_{3}$ \\ 1823 & 13 & 1 & 4 & 3 & 1 & 2 & 4 & 298 & $D_{3}$ \\ \end{longtable} \subsection*{Characteristic $p=17$, prime levels, dihedral} \begin{longtable}{||c|c|c|c|c|c|c|c|c|c||} \hline Level & Wt & ResD & Dim & EmbDim & NilO & GorDef & \#Ops & \#(p$<$HB) & Gp \\ \hline\endhead\hline\endfoot\hline\hline\endlastfoot 59 & 17 & 1 & 6 & 3 & 2 & 2 & 3 & 23 & $D_{3}$ \\ 239 & 17 & 2 & 4 & 3 & 1 & 2 & 5 & 68 & $D_{5}$ \\ 1327 & 17 & 1 & 4 & 3 & 1 & 2 & 3 & 289 & $D_{3}$ \\ 1427 & 17 & 2 & 4 & 3 & 1 & 2 & 3 & 306 & $D_{5}$ \\ 1951 & 17 & 1 & 4 & 3 & 1 & 2 & 4 & 402 & $D_{3}$ \\ 2503 & 17 & 1 & 4 & 3 & 1 & 2 & 3 & 497 & $D_{3}$ \\ 2687 & 17 & 1 & 36 & 3 & 17 & 2 & 4 & 529 & $D_{3}$ \\ \end{longtable} \subsection*{Characteristic $p=19$, prime levels, dihedral} \begin{longtable}{||c|c|c|c|c|c|c|c|c|c||} \hline Level & Wt & ResD & Dim & EmbDim & NilO & GorDef & \#Ops & \#(p$<$HB) & Gp \\ \hline\endhead\hline\endfoot\hline\hline\endlastfoot 439 & 19 & 1 & 6 & 3 & 2 & 2 & 3 & 125 & $D_{3}$ \\ 751 & 19 & 1 & 4 & 3 & 1 & 2 & 5 & 195 & $D_{5}$ \\ 751 & 19 & 1 & 4 & 3 & 1 & 2 & 5 & 195 & $D_{5}$ \\ 1427 & 19 & 1 & 6 & 3 & 2 & 2 & 3 & 335 & $D_{3}$ \\ \end{longtable} \subsection*{Characteristic $p=23$, prime levels, dihedral} \begin{longtable}{||c|c|c|c|c|c|c|c|c|c||} \hline Level & Wt & ResD & Dim & EmbDim & NilO & GorDef & \#Ops & \#(p$<$HB) & Gp \\ \hline\endhead\hline\endfoot\hline\hline\endlastfoot 83 & 23 & 1 & 4 & 3 & 1 & 2 & 3 & 37 & $D_{3}$ \\ 503 & 23 & 3 & 4 & 3 & 1 & 2 & 4 & 162 & $D_{7}$ \\ 971 & 23 & 2 & 4 & 3 & 1 & 2 & 4 & 284 & $D_{5}$ \\ 1259 & 23 & 1 & 4 & 3 & 1 & 2 & 3 & 358 & $D_{3}$ \\ \end{longtable} \subsection*{Characteristic $p=29$, prime levels, dihedral} \begin{longtable}{||c|c|c|c|c|c|c|c|c|c||} \hline Level & 
Wt & ResD & Dim & EmbDim & NilO & GorDef & \#Ops & \#(p$<$HB) & Gp \\ \hline\endhead\hline\endfoot\hline\hline\endlastfoot 107 & 29 & 1 & 4 & 3 & 1 & 2 & 3 & 55 & $D_{3}$ \\ 199 & 29 & 1 & 4 & 3 & 1 & 2 & 3 & 92 & $D_{3}$ \\ \end{longtable} \subsection*{Characteristic $p=31$, prime levels, dihedral} \begin{longtable}{||c|c|c|c|c|c|c|c|c|c||} \hline Level & Wt & ResD & Dim & EmbDim & NilO & GorDef & \#Ops & \#(p$<$HB) & Gp \\ \hline\endhead\hline\endfoot\hline\hline\endlastfoot 367 & 31 & 1 & 4 & 3 & 1 & 2 & 3 & 161 & $D_{3}$ \\ 743 & 31 & 3 & 4 & 3 & 1 & 2 & 4 & 293 & $D_{7}$ \\ \end{longtable} \subsection*{Characteristic $p=37$, prime levels, dihedral} \begin{longtable}{||c|c|c|c|c|c|c|c|c|c||} \hline Level & Wt & ResD & Dim & EmbDim & NilO & GorDef & \#Ops & \#(p$<$HB) & Gp \\ \hline\endhead\hline\endfoot\hline\hline\endlastfoot 139 & 37 & 1 & 4 & 3 & 1 & 2 & 4 & 83 & $D_{3}$ \\ \end{longtable} \subsection*{Characteristic $p=41$, prime levels, dihedral} \begin{longtable}{||c|c|c|c|c|c|c|c|c|c||} \hline Level & Wt & ResD & Dim & EmbDim & NilO & GorDef & \#Ops & \#(p$<$HB) & Gp \\ \hline\endhead\hline\endfoot\hline\hline\endlastfoot 83 & 41 & 1 & 4 & 3 & 1 & 2 & 3 & 61 & $D_{3}$ \\ 139 & 41 & 1 & 4 & 3 & 1 & 2 & 4 & 92 & $D_{3}$ \\ 431 & 41 & 1 & 4 & 3 & 1 & 2 & 5 & 233 & $D_{3}$ \\ \end{longtable} \subsection*{Characteristic $p=43$, prime levels, dihedral} \begin{longtable}{||c|c|c|c|c|c|c|c|c|c||} \hline Level & Wt & ResD & Dim & EmbDim & NilO & GorDef & \#Ops & \#(p$<$HB) & Gp \\ \hline 419 & 43 & 1 & 4 & 3 & 1 & 2 & 3 & 239 & $D_{3}$ \\ \hline \end{longtable} \subsection*{Characteristic $p=47$, prime levels, dihedral} \begin{longtable}{||c|c|c|c|c|c|c|c|c|c||} \hline Level & Wt & ResD & Dim & EmbDim & NilO & GorDef & \#Ops & \#(p$<$HB) & Gp \\ \hline\endhead\hline\endfoot\hline\hline\endlastfoot 31 & 47 & 1 & 4 & 3 & 1 & 2 & 3 & 30 & $D_{3}$ \\ 107 & 47 & 1 & 4 & 3 & 1 & 2 & 3 & 82 & $D_{3}$ \\ 139 & 47 & 1 & 4 & 3 & 1 & 2 & 4 & 101 & $D_{3}$ \\ 179 & 47 & 2 & 4 & 3 & 1 & 2 & 3 & 126 & $D_{5}$ \\ \end{longtable} \subsection*{Characteristic $p=53$, prime levels, dihedral} \begin{longtable}{||c|c|c|c|c|c|c|c|c|c||} \hline Level & Wt & ResD & Dim & EmbDim & NilO & GorDef & \#Ops & \#(p$<$HB) & Gp \\ \hline\endhead\hline\endfoot\hline\hline\endlastfoot 131 & 53 & 2 & 4 & 3 & 1 & 2 & 3 & 106 & $D_{5}$ \\ 211 & 53 & 1 & 4 & 3 & 1 & 2 & 4 & 159 & $D_{3}$ \\ \end{longtable} \subsection*{Characteristic $p=59$, prime levels, dihedral} \begin{longtable}{||c|c|c|c|c|c|c|c|c|c||} \hline Level & Wt & ResD & Dim & EmbDim & NilO & GorDef & \#Ops & \#(p$<$HB) & Gp \\ \hline\endhead\hline\endfoot\hline\hline\endlastfoot 23 {}\footnote{First found by K.\ Buzzard, unpublished.} & 59 & 1 & 4 & 3 & 1 & 2 & 4 & 30 & $D_{3}$ \\ 211 & 59 & 1 & 4 & 3 & 1 & 2 & 4 & 175 & $D_{3}$ \\ 227 & 59 & 1 & 4 & 3 & 1 & 2 & 3 & 187 & $D_{5}$ \\ 227 & 59 & 1 & 4 & 3 & 1 & 2 & 3 & 187 & $D_{5}$ \\ 367 & 59 & 1 & 4 & 3 & 1 & 2 & 3 & 279 & $D_{3}$ \\ \end{longtable} \subsection*{Characteristic $p=61$, prime levels, dihedral} \begin{longtable}{||c|c|c|c|c|c|c|c|c|c||} \hline Level & Wt & ResD & Dim & EmbDim & NilO & GorDef & \#Ops & \#(p$<$HB) & Gp \\ \hline\endhead\hline\endfoot\hline\hline\endlastfoot 239 & 61 & 1 & 4 & 3 & 1 & 2 & 4 & 199 & $D_{5}$ \\ 239 & 61 & 1 & 4 & 3 & 1 & 2 & 4 & 199 & $D_{5}$ \\ 431 & 61 & 1 & 4 & 3 & 1 & 2 & 5 & 327 & $D_{3}$ \\ \end{longtable} \subsection*{Characteristic $p=67$, prime levels, dihedral} \begin{longtable}{||c|c|c|c|c|c|c|c|c|c||} \hline Level & Wt & ResD & Dim & EmbDim & NilO & GorDef 
& \#Ops & \#(p$<$HB) & Gp \\ \hline\endhead\hline\endfoot\hline\hline\endlastfoot 31 & 67 & 1 & 4 & 3 & 1 & 2 & 6 & 41 & $D_{3}$ \\ 239 & 67 & 2 & 4 & 3 & 1 & 2 & 5 & 217 & $D_{5}$ \\ \end{longtable} \subsection*{Characteristic $p=71$, prime levels, dihedral} \begin{longtable}{||c|c|c|c|c|c|c|c|c|c||} \hline Level & Wt & ResD & Dim & EmbDim & NilO & GorDef & \#Ops & \#(p$<$HB) & Gp \\ \hline\endhead\hline\endfoot\hline\hline\endlastfoot 59 & 71 & 1 & 4 & 3 & 1 & 2 & 3 & 71 & $D_{3}$ \\ 239 & 71 & 1 & 4 & 3 & 1 & 2 & 5 & 223 & $D_{3}$ \\ 283 & 71 & 1 & 4 & 3 & 1 & 2 & 5 & 263 & $D_{3}$ \\ \end{longtable} \subsection*{Characteristic $p=73$, prime levels, dihedral} \begin{longtable}{||c|c|c|c|c|c|c|c|c|c||} \hline Level & Wt & ResD & Dim & EmbDim & NilO & GorDef & \#Ops & \#(p$<$HB) & Gp \\ \hline\endhead\hline\endfoot\hline\hline\endlastfoot 211 & 73 & 1 & 4 & 3 & 1 & 2 & 4 & 209 & $D_{3}$ \\ 283 & 73 & 1 & 4 & 3 & 1 & 2 & 5 & 269 & $D_{3}$ \\ \end{longtable} \subsection*{Characteristic $p=79$, prime levels, dihedral} \begin{longtable}{||c|c|c|c|c|c|c|c|c|c||} \hline Level & Wt & ResD & Dim & EmbDim & NilO & GorDef & \#Ops & \#(p$<$HB) & Gp \\ \hline\endhead\hline\endfoot\hline\hline\endlastfoot 307 & 79 & 1 & 4 & 3 & 1 & 2 & 5 & 307 & $D_{3}$ \\ \end{longtable} \subsection*{Characteristic $p=83$, prime levels, dihedral} \begin{longtable}{||c|c|c|c|c|c|c|c|c|c||} \hline Level & Wt & ResD & Dim & EmbDim & NilO & GorDef & \#Ops & \#(p$<$HB) & Gp \\ \hline\endhead\hline\endfoot\hline\hline\endlastfoot 47 & 83 & 2 & 4 & 3 & 1 & 2 & 4 & 67 & $D_{5}$ \\ 79 & 83 & 2 & 4 & 3 & 1 & 2 & 3 & 101 & $D_{5}$ \\ 107 & 83 & 1 & 6 & 3 & 2 & 2 & 3 & 132 & $D_{3}$ \\ 211 & 83 & 1 & 4 & 3 & 1 & 2 & 4 & 232 & $D_{3}$ \\ 251 & 83 & 1 & 4 & 3 & 1 & 2 & 3 & 271 & $D_{7}$ \\ 251 & 83 & 1 & 4 & 3 & 1 & 2 & 3 & 271 & $D_{7}$ \\ 251 & 83 & 1 & 4 & 3 & 1 & 2 & 3 & 271 & $D_{7}$ \\ \end{longtable} \subsection*{Characteristic $p=89$, prime levels, dihedral} \begin{longtable}{||c|c|c|c|c|c|c|c|c|c||} \hline Level & Wt & ResD & Dim & EmbDim & NilO & GorDef & \#Ops & \#(p$<$HB) & Gp \\ \hline\endhead\hline\endfoot\hline\hline\endlastfoot 131 & 89 & 1 & 4 & 3 & 1 & 2 & 3 & 165 & $D_{5}$ \\ 131 & 89 & 1 & 4 & 3 & 1 & 2 & 3 & 165 & $D_{5}$ \\ \end{longtable} \subsection*{Characteristic $p=97$, prime levels, dihedral} \begin{longtable}{||c|c|c|c|c|c|c|c|c|c||} \hline Level & Wt & ResD & Dim & EmbDim & NilO & GorDef & \#Ops & \#(p$<$HB) & Gp \\ \hline\endhead\hline\endfoot\hline\hline\endlastfoot 307 & 97 & 1 & 4 & 3 & 1 & 2 & 5 & 367 & $D_{3}$ \\ \end{longtable} \end{document}
\begin{document} \title{On tempered representations} \author[David Kazhdan and Alexander Yom Din]{David Kazhdan and Alexander Yom Din} \begin{abstract} Let $G$ be a unimodular locally compact group. We define a property of irreducible unitary $G$-representations $V$ which we call c-temperedness, and which for the trivial $V$ boils down to F{\o}lner's condition (equivalent to the trivial $V$ being tempered, i.e. to $G$ being amenable). The property of c-temperedness is a-priori stronger than the property of temperedness. We conjecture that for semisimple groups over local fields temperedness implies c-temperedness. We check the conjecture for a special class of tempered $V$'s, as well as for all tempered $V$'s in the cases of $G := SL_2 ({\mathbb R})$ and of $G = PGL_2 (\Omega)$ for a non-Archimedean local field $\Omega$ of characteristic $0$ and residual characteristic not $2$. We also establish a weaker form of the conjecture, involving only $K$-finite vectors. In the non-Archimedean case, we give a formula expressing the character of a tempered $V$ as an appropriately-weighted conjugation-average of a matrix coefficient of $V$, generalizing a formula of Harish-Chandra from the case when $V$ is square-integrable. \end{abstract} \maketitle \setcounter{tocdepth}{1} \tableofcontents \section{Introduction}\label{sec intro} \subsection{} Throughout the paper, we work with a unimodular second countable locally compact group $G$, and fix a Haar measure $dg$ on it. In the introduction, in \S\ref{ssec intro 1} - \S\ref{ssec intro 2.5} $G$ is assumed semisimple over a local field, while in \S\ref{ssec intro 3} - \S\ref{ssec intro 4} there is no such assumption. After the introduction, in \S\ref{sec Kfin} - \S\ref{sec proof of SL2R} $G$ is assumed semisimple over a local field, while in \S\ref{sec bi tempered} - \S\ref{sec ctemp is temp} there is no such assumption. Unitary representations of $G$ are pairs $(V , \pi)$, but for lightness of notation we denote them by $V$, keeping $\pi$ implicit. \subsection{}\label{ssec intro 1} Assume that $G$ is a semisimple group over a local field\footnote{So, for example, $G$ can be taken $SL_n ({\mathbb R})$ or $SL_n ({\mathbb Q}_p)$.}. The characterization of temperedness of irreducible unitary $G$-representations in terms of the rate of decrease of $K$-finite matrix coefficients is well-studied (see for example \cite{Wa,CoHaHo,Be}). Briefly, fixing a maximal compact subgroup $K \subset G$, an irreducible unitary $G$-representation $V$ is tempered if and only if for every two $K$-finite vectors $v_1 , v_2 \in V$ there exists $C>0$ such that $$ |\langle gv_1 , v_2 \rangle | \leq C \cdot \Xi_G (g)$$ for all $g \in G$, where $\Xi_G : G \to {\mathbb R}_{\ge 0}$ is Harish-Chandra's $\Xi$-function (see \S\ref{ssec V1 and Xi} for a reminder on the definition of $\Xi_G$). When considering matrix coefficients of more general vectors, differentiating between tempered and non-tempered irreducible unitary $G$-representations becomes more problematic, as the following example shows. \begin{example}[see Claim \ref{clm counterexample}]\label{rem counterexample} Let $G := PGL_2 (\Omega)$, $\Omega$ a local field. Denote by $A \subset G$ the subgroup of diagonal matrices. Given a unitary $G$-representation $V$ let us denote $$ {\mathcal M}_V (A) := \left\{ a \mapsto \langle a v_1 , v_2 \rangle \right\}_{v_1 , v_2 \in V} \subset C(A),$$ i.e. the set of matrix coefficients of $V$ restricted to $A$. 
Let us also denote $$ \widehat{L^1} (A) := \left\{ a \mapsto \int_{\hat{A}} \chi (a) \cdot \phi (\chi) \cdot d \chi \right\}_{\phi \in L^1 (\hat{A})} \subset C(A),$$ i.e. the set of Fourier transforms of $L^1$-functions on $\hat{A}$. Then for any non-trivial irreducible unitary $G$-representation $V$ we have $$ {\mathcal M}_V (A) = \widehat{L^1} (A).$$ \end{example} The remedy proposed in this paper is that, instead of analysing the pointwise growth of matrix coefficients, we analyse their ``growth in average", i.e. the behaviour of integrals of norm-squared matrix coefficients over big balls. \subsection{} We fix a norm\footnote{In the non-Archimedean case, norms on finite-dimensional vector spaces are discussed, for example, in \cite[Chapter II, \S 1]{We}.} $|| - ||$ on the vector space ${\mathfrak g} := {\rm Lie} (G)$ and consider also the induced operator norm $|| - ||$ on $\textnormal{End} ({\mathfrak g})$. We define the ``radius" function $\mathbf{r} : G \to {\mathbb R}_{\ge 0}$ by $$ \mathbf{r} (g) := \log \left( \max \{ || \textnormal{Ad} (g) ||, || \textnormal{Ad} (g^{-1}) || \} \right)$$ where $\textnormal{Ad} : G \to \textnormal{Aut} ({\mathfrak g})$ is the adjoint representation. We denote then by $G_{<r} \subset G$ the subset of elements $g$ for which $\mathbf{r} ( g) < r$. \begin{conjecture}[``asymptotic Schur orthogonality relations"]\label{main conj red exact} Let $V$ be a tempered irreducible unitary $G$-representation. There exist $\mathbf{d}(V) \in {\mathbb Z}_{\ge 0}$ and $\mathbf{f} (V) \in {\mathbb R}_{>0}$ such that for all $v_1 , v_2 , v_3 , v_4 \in V$ we have $$ \lim_{r \to +\infty} \frac{\int_{G_{< r}} \langle g v_1 , v_2 \rangle \overline{\langle gv_3 , v_4 \rangle } \cdot dg}{ r^{\mathbf{d} (V)}} = \frac{1}{\mathbf{f} (V)} \cdot \langle v_1 , v_3 \rangle \overline{\langle v_2 , v_4 \rangle}.$$ \end{conjecture} \begin{remark}[see Claim \ref{clm doesnt depend on norm}]\label{rem doesnt depend on norm} The validity of Conjecture \ref{main conj red exact}, as well as the resulting invariants $\mathbf{d} (V)$ and $\mathbf{f} (V)$ (and of other similar results/conjectures below - see the formulation of Claim \ref{clm doesnt depend on norm}), do not depend on the choice of the norm $|| - ||$ on ${\mathfrak g}$ (used to construct the subsets $G_{<r}$). \end{remark} \begin{remark}[see Remark \ref{rem ctemp red is temp}] An irreducible unitary $G$-representation $V$ for which the condition of Conjecture \ref{main conj red exact} is verified is tempered. \end{remark} \begin{remark} In the notation of Conjecture \ref{main conj red exact}, $\mathbf{d} (V) = 0$ if and only if $V$ is square-integrable. In that case, $\mathbf{f} (V)$ is the well-known formal degree of $V$. \end{remark} \begin{remark}[following from Proposition \ref{prop c-tempered orth rel cross}] Let $V$ and $W$ be two tempered irreducible unitary $G$-representations for which Conjecture \ref{main conj red exact} holds, and which are non-isomorphic. Then for all $v_1 , v_2 \in V$ and $w_1 , w_2 \in W$ one has $$ \lim_{r \to +\infty} \frac{\int_{G_{<r}} \langle g v_1 , v_2 \rangle \overline{\langle g w_1 , w_2 \rangle} \cdot dg}{r^{(\mathbf{d} (V) + \mathbf{d} (W))/2}} = 0.$$ \end{remark} \subsection{}\label{ssec intro 1.25} We show the following statement, weaker than Conjecture \ref{main conj red exact}: \begin{theorem}[see \S\ref{sec Kfin}]\label{thm main Kfin} Let $V$ be a tempered irreducible unitary $G$-representation and $K \subset G$ a maximal compact subgroup. 
There exists $\mathbf{d}(V) \in {\mathbb Z}_{\ge 0}$ such that: \begin{enumerate} \item if $G$ is non-Archimedean, there exists $\mathbf{f} (V) \in {\mathbb R}_{>0}$ such that for all $K$-finite\footnote{When $G$ is non-Archimedean $K$-finite is the same as smooth (in particular does not depend on $K$).} $v_1 , v_2 , v_3 , v_4 \in V$ we have $$ \lim_{r \to +\infty} \frac{\int_{G_{< r}} \langle g v_1 , v_2 \rangle \overline{\langle gv_3 , v_4 \rangle } \cdot dg}{ r^{\mathbf{d} (V)}} = \frac{1}{\mathbf{f} (V)} \cdot \langle v_1 , v_3 \rangle \overline{\langle v_2 , v_4 \rangle}.$$ \item If $G$ is Archimedean, for any given non-zero $K$-finite vectors $v_1 , v_2 \in V$ there exists $C(v_1 , v_2) >0$ such that $$ \lim_{r \to +\infty} \frac{\int_{G_{< r}} | \langle g v_1 , v_2 \rangle |^2 \cdot dg}{ r^{\mathbf{d} (V)}} = C (v_1 , v_2).$$ \end{enumerate} \end{theorem} \begin{remark} We expect that it should not be very difficult to establish the statement of item $(1)$ of Theorem \ref{thm main Kfin} also in the Archimedean case, instead of the weaker statement of item $(2)$. \end{remark} Concentrating on the non-Archimedean case for simplicity, Theorem \ref{thm main Kfin} has as a corollary the following proposition, a generalization (from the square-integrable case to the tempered case) of a formula of Harish-Chandra (see \cite[Theorem 9]{Ha2}), expressing the character as a conjugation-average of a matrix coefficient. \begin{definition} Assume that $G$ is non-Archimedean. We denote by $C^{\infty} (G)$ the space of (complex-valued) smooth functions on $G$ and by $D_c^{\infty} (G)$ the space of smooth distributions on $G$ with compact support. We denote by $C^{-\infty} (G)$ the dual to $D_c^{\infty} (G)$, i.e. the space of generalized functions on $G$ (thus we have an embedding $C^{\infty} (G) \subset C^{-\infty} (G)$). Given an admissible unitary $G$-representation $V$, we denote by $\Theta_V \in C^{-\infty} (G)$ the character of $V$. \end{definition} \begin{proposition}[see \S\ref{ssec proof of prop formula ch}]\label{prop formula ch} Let $V$ be a tempered irreducible unitary $G$-representation. Let $v_1 , v_2 \in V$ be smooth vectors. Denote by $m_{v_1 , v_2} \in C^{\infty}(G) \subset C^{-\infty} (G)$ the matrix coefficient $m_{v_1 , v_2} (g) := \langle g v_1 , v_2 \rangle$. Denoting $({}^g m) (x) := m(g^{-1} x g)$, the limit $$\lim_{r \to +\infty} \frac{\int_{G_{<r}} {}^g m_{v_1 , v_2} \cdot dg}{r^{\mathbf{d} (V)}}$$ exists in $C^{-\infty}(G)$, in the sense of weak convergence of generalized functions (i.e. convergence when paired against every element in $D_c^{\infty} (G)$), and is equal to $$ \frac{\langle v_1 , v_2 \rangle}{\mathbf{f} (V)} \cdot \Theta_V.$$ \end{proposition} \subsection{}\label{ssec intro 1.5} We are able to verify Conjecture \ref{main conj red exact} in some cases. \begin{theorem}[see Theorem \ref{thm V1 c temp}]\label{thm intro slowest} Conjecture \ref{main conj red exact} is true for the principal series irreducible unitary representation of ``slowest decrease", i.e. the unitary parabolic induction of the trivial character via a minimal parabolic subgroup. \end{theorem} Here is the main result of the paper: \begin{theorem}[see \S\ref{sec proof of SL2R}]\label{thm intro SL2R} Conjecture \ref{main conj red exact} is true for all tempered irreducible unitary representations of $G := SL_2 ({\mathbb R})$ and of $G := PGL_2 (\Omega)$, where $\Omega$ is a non-Archimedean field of characteristic $0$ and of residual characteristic not equal to $2$. 
\end{theorem} \subsection{}\label{ssec intro 2} The proposition that follows shows that a seemingly weaker property implies that of Conjecture \ref{main conj red exact}. \begin{definition} Given a unitary $G$-representation $V$ and vectors $v_1 , v_2 \in V$ we define $$ \Ms{v_1}{v_2}{r} := \int_{ G_{<r}} |\langle g v_1 , v_2 \rangle |^2 \cdot dg.$$ \end{definition} \begin{proposition}[see \S\ref{ssec clms red 1}]\label{prop from folner to exact} Let $V$ be an irreducible unitary $G$-representation. Let $v_0 \in V$ be a unit vector such that the following holds: \begin{enumerate} \item For any vectors $v_1 , v_2 \in V$ we have $$ \underset{r \to +\infty}{\limsup} \frac{ \Ms{v_1}{v_2}{r} }{ \Ms{v_0}{v_0}{r} } < +\infty.$$ \item For any vectors $v_1 , v_2 \in V$ and $r^{\prime} > 0$ we have $$ \lim_{r \to +\infty} \frac{ \Ms{v_1}{v_2}{r + r^{\prime}} - \Ms{v_1}{v_2}{r - r^{\prime}} }{ \Ms{v_0}{v_0}{r} } = 0.$$ \end{enumerate} Then Conjecture \ref{main conj red exact} holds for $V$. \end{proposition} \begin{question} Does item $(1)$ of Proposition \ref{prop from folner to exact} hold for arbitrary irreducible unitary $G$-representations? \end{question} \begin{remark}[see Proposition \ref{clm Prop tempered is c-tempered reductive holds}]\label{rem ctemp red is temp} An irreducible unitary $G$-representation for which there exists a unit vector $v_0 \in V$ such that conditions $(1)$ and $(2)$ of Proposition \ref{prop from folner to exact} are satisfied is tempered. \end{remark} \subsection{}\label{ssec intro 2.5} After finishing writing the current paper, we found the previous works \cite{Mi} and \cite{An}. Work \cite{Mi} aims at giving an asymptotic Schur orthogonality relation for tempered irreducible unitary representations, but we were not able to verify its validity; on the first page the author defines a seminorm $||-||_p^2$ on $C^{\infty} (G)$ by a limit, but this limit clearly does not always exist. Work \cite{An} (which deals with the more general setup of a symmetric space) provides an asymptotic Schur orthogonality relation for $K$-finite vectors in a tempered irreducible unitary $G$-representation, in the case when $G$ is real and under a regularity assumption on the central character. This work also seems to provide an interpretation of what we have denoted as $\mathbf{f} (V)$ in terms of the Plancherel density (but it would probably be good to work this out in more detail). \subsection{}\label{ssec intro 3} Let now $G$ be an arbitrary unimodular second countable locally compact group. We formulate a property of irreducible unitary $G$-representations which we call c-temperedness (see Definition \ref{def c-temp}). The property of c-temperedness is, roughly speaking, an abstract version of properties (1) and (2) of Proposition \ref{prop from folner to exact}. Here $G_{< r} \subset G$ are replaced by a sequence $\{ F_n \}_{n \ge 0}$ of subsets of $G$, which we call a F{\o}lner sequence, whose existence is part of the definition (so that we speak of a representation c-tempered with F{\o}lner sequence $\{ F_n \}_{n \ge 0}$), while the condition replacing property (2) of Proposition \ref{prop from folner to exact} generalizes, in some sense, the F{\o}lner condition for a group to be amenable (i.e. for the trivial representation to be tempered).
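To relate the abstract notion back to the reductive setting of \S\ref{ssec intro 2}: as is shown in Lemma \ref{lem red temp is ctemp} below, if $G$ is a semisimple group over a local field and $v_0 \in V$ satisfies conditions (1) and (2) of Proposition \ref{prop from folner to exact}, then $V$ is c-tempered with the F{\o}lner sequence of balls $$ F_n := G_{<r_n}, \qquad 0 < r_0 < r_1 < \ldots , \quad \lim_{n \to +\infty} r_n = +\infty,$$ so that the abstract F{\o}lner sequence specializes to the concrete family of subsets used above.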
We show in Corollary \ref{cor c-temp are temp} that any c-tempered irreducible unitary $G$-representation is tempered and pose the question: \begin{question}\label{main question} For which groups $G$ is every tempered irreducible unitary $G$-representation c-tempered with some F{\o}lner sequence? \end{question} As before, c-tempered irreducible unitary $G$-representations enjoy a variant of asymptotic Schur orthogonality relations (see Proposition \ref{prop c-tempered orth rel}): \begin{equation}\label{eq orth rel} \lim_{n \to +\infty} \frac{\int_{F_n} \langle g v_1 , v_3 \rangle \overline{\langle gv_2 , v_4 \rangle} \cdot dg }{\int_{F_n} |\langle g v_0 , v_0 \rangle |^2 \cdot dg} = \langle v_1 , v_2 \rangle \overline{\langle v_3 , v_4 \rangle}\end{equation} for all $v_1 , v_2 , v_3 , v_4 \in V$ and all unit vectors $v_0 \in V$. Also, we have a variant for a pair of non-isomorphic representations (see Proposition \ref{prop c-tempered orth rel cross}). \begin{definition} Let us say that two irreducible unitary $G$-representations are twins if their closures in $\hat{G}$ (w.r.t. the Fell topology) coincide. \end{definition} \begin{question} Let $V_1$ and $V_2$ be irreducible unitary $G$-representations and assume that $V_1$ and $V_2$ are twins. Suppose that $V_1$ is c-tempered with F{\o}lner sequence $\{ F_n \}_{n \ge 0}$. \begin{enumerate} \item Is it true that $V_2$ is also c-tempered with F{\o}lner sequence $\{ F_n \}_{n \ge 0}$? \item If so, is it true that for unit vectors $v_1 \in V_1$ and $v_2 \in V_2$ we have $$ \lim_{n \to +\infty} \frac{\int_{F_n} |\langle g v_1 , v_1 \rangle |^2 \cdot dg}{\int_{F_n} |\langle g v_2 , v_2 \rangle|^2 \cdot dg} = 1?$$ \end{enumerate} \end{question} \subsection{}\label{ssec intro 4} For many groups there exist tempered representations with the slowest rate of decrease of matrix coefficients. For such representations it is often much easier to prove analogs of c-temperedness or of the orthogonality relation (\ref{eq orth rel}) than for other representations - as exemplified by Theorem \ref{thm intro slowest} above. See \cite{BoGa} for hyperbolic groups. \subsection{} Alexander Yom Din's research was supported by the ISRAEL SCIENCE FOUNDATION (grant No 1071/20). David Kazhdan's research was partially supported by ERC grant No 669655. We would like to thank Pavel Etingof for great help with the proof of Claim \ref{clm SL2 3} in the case $G = SL_2 ({\mathbb R})$, which was present in a prior draft of the paper, before we encountered the work \cite{BrCoNiTa}. We would like to thank Vincent Lafforgue for a very useful discussion. We thank Erez Lapid for useful correspondence. \subsection{}\label{ssec notation} Throughout the paper, $G$ is a unimodular second countable locally compact group. We fix a Haar measure $dg$ on $G$, as well as Haar measures on the other unimodular groups we encounter ($dk$ on the group $K$, etc.). We denote by $\textnormal{vol}_G (-)$ the volume with respect to $dg$. All unitary $G$-representations are on separable Hilbert spaces. Given a unitary $G$-representation $V$, vectors $v_1 , v_2 \in V$ and a measurable subset $F \subset G$, we denote $$ \Mss{v_1}{v_2}{F} := \int_{F} |\langle gv_1 , v_2 \rangle |^2 \cdot dg.$$ So in the case of a semisimple group over a local field as above, we have set $$ \Ms{v_1}{v_2}{r} := \Mss{v_1}{v_2}{G_{<r}}.$$ We write $L^2 (G) := L^2 (G , dg)$, considered as a unitary $G$-representation via the right regular action.
Given Hilbert spaces $V$ and $W$, we denote by ${\mathcal B} (V; W)$ the space of bounded linear operators from $V$ to $W$, and write ${\mathcal B} (V) := {\mathcal B} (V; V)$. We write $F_1 \smallsetminus F_2$ for set differences and $F_1 \triangle F_2 := (F_1\smallsetminus F_2) \cup (F_2 \smallsetminus F_1)$ for symmetric set differences. \section{Notion of c-temperedness}\label{sec bi tempered} In this section, let $G$ be a unimodular second countable locally compact group. We introduce the notion of a c-tempered (with a given F{\o}lner sequence) irreducible unitary $G$-representation. \subsection{} The following definition aims at a generalization of the hypotheses of Proposition \ref{prop from folner to exact}, so as to make them suitable for a general group. \begin{definition}\label{def c-temp} Let $V$ be an irreducible unitary $G$-representation. Let $F_0 , F_1 , \ldots \subset G$ be a sequence of measurable pre-compact subsets all containing a neighbourhood of $1$. We say that $V$ is \textbf{c-tempered\footnote{``c" stands for ``matrix coefficients".} with F{\o}lner sequence $F_0 , F_1 , \ldots$} if there exists a unit vector $v_0 \in V$ such that the following two conditions are satisfied: \begin{enumerate} \item For all $v_1 , v_2 \in V$ we have\footnote{The notation $\Mss{-}{-}{-}$ is introduced in \S\ref{ssec notation}.} $$ \limsup_{n \to +\infty} \frac{ \Mss{v_1}{v_2}{F_n} }{ \Mss{v_0}{v_0}{F_n} } < +\infty.$$ \item For all $v_1 , v_2 \in V$ and all compact subsets $K \subset G$ we have $$ \lim_{n \to +\infty} \frac{\sup_{g_1 , g_2 \in K} \Mss{v_1}{v_2}{F_n \triangle g_2^{-1} F_n g_1} }{ \Mss{v_0}{v_0}{F_n} } = 0.$$ \end{enumerate} \end{definition} \begin{example} The trivial unitary $G$-representation is c-tempered with F{\o}lner sequence $F_0 , F_1 , \ldots$ if for any compact $K \subset G$ we have \begin{equation}\label{eq folner} \lim_{n \to +\infty} \sup_{g_1 , g_2 \in K} \frac{\textnormal{vol}_G (F_n \triangle g_2^{-1} F_n g_1)}{\textnormal{vol}_G (F_n)} = 0.\end{equation} By \textbf{F{\o}lner's condition}, the existence of such a sequence is equivalent\footnote{When stating F{\o}lner's condition for the amenability of $G$ it is more usual to consider $g_2^{-1} F_n$ rather than $g_2^{-1} F_n g_1$ in (\ref{eq folner}), i.e. to shift only on one side. However, using, for example, \cite[Theorem 4.1]{Gr} applied to the action of $G \times G$ on $G$, we see that the above stronger ``two-sided" condition also characterizes amenability.} to the trivial irreducible unitary $G$-representation being tempered, i.e. to $G$ being \textbf{amenable}. \end{example} \subsection{} Irreducible unitary $G$-representations which are c-tempered satisfy ``asymptotic Schur orthogonality relations": \begin{proposition}\label{prop c-tempered orth rel} Let $V$ be an irreducible unitary $G$-representation. Assume that $V$ is c-tempered with F{\o}lner sequence $F_0 , F_1 , \ldots$ and let $v_0 \in V$ be a unit vector for which the conditions (1) and (2) of Definition \ref{def c-temp} are satisfied. 
Then for all $v_1 , v_2 , v_3 , v_4 \in V$ we have \begin{equation}\label{eq approx schur bi} \lim_{n \to +\infty} \frac{\int_{F_n} \langle g v_1 , v_2 \rangle \overline{\langle g v_3 , v_4 \rangle} \cdot dg}{ \Mss{v_0}{v_0}{F_n} } = \langle v_1 , v_3 \rangle \overline{\langle v_2 , v_4 \rangle}.\end{equation} \end{proposition} \begin{proof} First, notice that in order to show that the limit in (\ref{eq approx schur bi}) holds, it is enough to show that for every sub-sequence there exists a further sub-sequence of it on which the limit holds. Replacing our sequence by the sub-sequence, it is therefore enough to show simply that there exists a sub-sequence on which the limit holds - which is what we will do. Define bilinear maps\footnote{Recall that $L^2 (G)$ denotes $L^2 (G , dg)$, viewed as a unitary $G$-representation via the right regular action.} $$ S_0 , S_1 , \ldots : V \times \overline{V} \to L^2 (G)$$ by $$ S_n (v_1 , v_2) (g) := \begin{cases} \frac{1}{\sqrt{ \Mss{v_0}{v_0}{F_n} }} \cdot \langle g v_1 , v_2 \rangle, & g \in F_n \\ 0, & g \notin F_n \end{cases}.$$ Clearly those are bounded. \begin{itemize} \item The bilinear maps $S_n$ are jointly bounded, i.e. there exists $C>0$ such that $||S_n||^2 \leq C$ for all $n$. \end{itemize} Indeed, by condition (1) of Definition \ref{def c-temp}, for any fixed $v_1 , v_2 \in V$ there exists $C>0$ such that $||S_n (v_1 , v_2)||^2 \leq C$ for all $n$. By the Banach-Steinhaus theorem, there exists $C>0$ such that $||S_n ||^2 \leq C$ for all $n$. Next, define quadlinear forms $$ \Phi_1 , \Phi_2 , \ldots : V \times \overline{V} \times \overline{V} \times V \to {\mathbb C}$$ by $$ \Phi_n (v_1 , v_2 , v_3 , v_4) := \langle S_n (v_1 , v_2) , S_n (v_3 , v_4) \rangle.$$ \begin{itemize} \item The quadlinear forms $\Phi_n$ are jointly bounded, in fact $|| \Phi_n || \leq C$ for all $n$. \end{itemize} This follows immediately from the above finding $|| S_n ||^2 \leq C$ for all $n$. \begin{itemize} \item For all $g_1 , g_2 \in G$ and $v_1 , v_2 , v_3 , v_4 \in V$ we have \begin{equation}\label{eq approx GG-inv} \lim_{n \to +\infty } \left( \Phi_n (g_1 v_1 , g_2 v_2 , g_1 v_3, g_2 v_4 ) - \Phi_n (v_1 , v_2 , v_3 , v_4) \right)= 0.\end{equation} \end{itemize} Indeed, $$ | \Phi_n (g_1 v_1 , g_2 v_2 , g_1 v_3 , g_2 v_4) - \Phi_n (v_1 , v_2 , v_3 , v_4 ) | = $$ $$ = \frac{ | \int_{F_n} \langle g g_1 v_1 , g_2 v_2 \rangle \overline{\langle g g_1 v_3 , g_2 v_4 \rangle} \cdot dg - \int_{F_n} \langle g v_1 , v_2 \rangle \overline{\langle g v_3 , v_4 \rangle} \cdot dg |}{ \Mss{v_0}{v_0}{F_n} } \leq $$ $$ \leq \frac{ \int_{F_n \triangle g_2^{-1} F_n g_1 } |\langle g v_1 , v_2 \rangle | \cdot | \langle g v_3 , v_4 \rangle | \cdot dg}{ \Mss{v_0}{v_0}{F_n} } \leq $$ $$ \leq \sqrt{\frac{ \Mss{v_1}{v_2}{F_n \triangle g_2^{-1} F_n g_1} }{ \Mss{v_0}{v_0}{F_n} }} \cdot \sqrt{\frac{ \Mss{v_3}{v_4}{F_n \triangle g_2^{-1} F_n g_1} }{ \Mss{v_0}{v_0}{F_n} }}$$ and the last expression tends to $0$ as $n \to +\infty$ by condition (2) of Definition \ref{def c-temp}. \begin{itemize} \item There exists a sub-sequence $0 \leq m_0 < m_1 < \ldots$ such that $$ \lim_{n \to +\infty} \Phi_{m_n} (v_1 , v_2 , v_3 , v_4) = \langle v_1 , v_3 \rangle \overline{ \langle v_2 , v_4 \rangle } $$ for all $v_1 , v_2 , v_3 , v_4 \in V$.
\end{itemize} By the sequential Banach-Alaoglu theorem (which is applicable since $V$ is separable), we can find a sub-sequence $0 \leq m_0 < m_1 < \ldots$ and a bounded quadlinear form $$ \Phi : V \times \overline{V} \times \overline{V} \times V \to {\mathbb C}$$ such that $\lim_{n \to +\infty}^{\textnormal{weak-}*} \Phi_{m_n} = \Phi$. Passing to the limit in equation (\ref{eq approx GG-inv}) we obtain that for all $g_1 , g_2 \in G$ and all $v_1 , v_2 , v_3 , v_4 \in V$ we have $$ \Phi (g_1 v_1 , g_2 v_2 , g_1 v_3 , g_2 v_4) = \Phi (v_1 , v_2 , v_3 , v_4).$$ Fixing $v_2 , v_4$, we obtain a bounded bilinear form $\Phi (- , v_2 , - , v_4) : V \times \overline{V} \to {\mathbb C}$ which is $G$-invariant, and hence by Schur's lemma is a multiple of the form $\langle - , - \rangle$, i.e. we have a uniquely defined $c_{v_2 , v_4} \in {\mathbb C}$ such that $$ \Phi (v_1 , v_2 , v_3 , v_4) = c_{v_2 , v_4} \cdot \langle v_1 , v_3 \rangle$$ for all $v_1 , v_3 \in V$. Similarly, fixing $v_1 , v_3$ we see that we have a uniquely defined $d_{v_1 , v_3} \in {\mathbb C}$ such that $$ \Phi (v_1 , v_2 , v_3 , v_4) = d_{v_1 , v_3} \cdot \overline{\langle v_2 , v_4 \rangle}$$ for all $v_2 , v_4 \in V$. Since $\Phi (v_0 , v_0, v_0, v_0) = 1$, plugging in $(v_1 , v_2 , v_3 ,v_4) := (v_0 , v_0 , v_0, v_0)$ in the first equality we find $c_{v_0 , v_0} = 1$. Then plugging in $(v_1 , v_2 , v_3 , v_4) := (v_1 , v_0 , v_3 , v_0)$ in both equalities and comparing, we find $d_{v_1 , v_3} = \langle v_1 , v_3 \rangle$. Hence we obtain $$ \Phi (v_1 , v_2 , v_3 , v_4) = \langle v_1 , v_3 \rangle \overline{\langle v_2 , v_4 \rangle} $$ for all $v_1 , v_2 , v_3 , v_4 \in V$. Now, writing explicitly $\Phi_{m_n} (v_1 , v_2 , v_3 , v_4)$, we see that the limit in (\ref{eq approx schur bi}) is valid on our sub-sequence, so we are done, as we explained in the beginning of the proof. \end{proof} \subsection{} If one unit vector $v_0$ satisfies conditions (1) and (2) of Definition \ref{def c-temp} then all unit vectors do: \begin{proposition}\label{prop c-tempered all are temp} Let $V$ be an irreducible unitary $G$-representation. Assume that $V$ is c-tempered with F{\o}lner sequence $F_0 , F_1 , \ldots$ and let $v_0 \in V$ be a unit vector for which the conditions (1) and (2) of Definition \ref{def c-temp} are satisfied. Then for any unit vector $v'_0 \in V$ the conditions (1) and (2) of Definition \ref{def c-temp} are satisfied. \end{proposition} \begin{proof} Let $v'_0 \in V$ be a unit vector. From (\ref{eq approx schur bi}) we get $$ \lim_{n \to +\infty} \frac{ \Mss{v'_0}{v'_0}{F_n} }{ \Mss{v_0}{v_0}{F_n} } = 1.$$ This makes the claim clear. \end{proof} \subsection{} We also have the following version of ``asymptotic Schur orthogonality relations" for a pair of non-isomorphic irreducible representations: \begin{proposition}\label{prop c-tempered orth rel cross} Let $V$ and $W$ be irreducible unitary $G$-representations. Assume that $V$ and $W$ are c-tempered with the same F{\o}lner sequence $F_0 , F_1 , \ldots$ and let $v_0 \in V$ and $w_0 \in W$ be unit vectors for which the conditions (1) and (2) of Definition \ref{def c-temp} are satisfied.
Then for all $v_1 , v_2 \in V$ and $w_1 , w_2 \in W$ we have \begin{equation}\label{eq approx schur bi cross} \lim_{n \to +\infty} \frac{\int_{F_n} \langle g v_1 , v_2 \rangle \overline{\langle g w_1 , w_2 \rangle } \cdot dg}{ \sqrt{\Mss{v_0}{v_0}{F_n}} \sqrt{\Mss{w_0}{w_0}{F_n}} } = 0.\end{equation} \end{proposition} \begin{proof} We proceed similarly to the proof of Proposition \ref{prop c-tempered orth rel}. Namely, again it is enough to find a sub-sequence on which the limit holds. We define quadlinear forms $$ \Phi_1 , \Phi_2 , \ldots : V \times \overline{V} \times \overline{W} \times W \to {\mathbb C}$$ by $$ \Phi_n (v_1 , v_2 , w_1 , w_2) := \frac{\int_{F_n} \langle g v_1 , v_2 \rangle \overline{ \langle g w_1 , w_2 \rangle} \cdot dg}{\sqrt{\Mss{v_0}{v_0}{F_n}} \sqrt{\Mss{w_0}{w_0}{F_n}} }.$$ We see that these are jointly bounded, and that for all $g_1 , g_2 \in G$ and $v_1 , v_2 \in V$ and $w_1 , w_2 \in W$ we have $$ \lim_{n \to +\infty} \left( \Phi_n (g_1 v_1 , g_2 v_2 , g_1 w_1 , g_2 w_2) - \Phi_n (v_1 , v_2 , w_1 , w_2) \right) = 0.$$ We then find a bounded quadlinear form $$ \Phi : V \times \overline{V} \times \overline{W} \times W \to {\mathbb C}$$ and a sub-sequence $0 \leq m_0 < m_1 < \ldots$ such that $\lim_{n \to +\infty}^{\textnormal{weak-}*} \Phi_{m_n} = \Phi$. We get, for all $g_1 , g_2 \in G$ and $v_1 , v_2 \in V$ and $w_1 , w_2 \in W$: $$ \Phi (g_1 v_1 , g_2 v_2 , g_1 w_1 , g_2 w_2) = \Phi (v_1 , v_2 , w_1 , w_2).$$ By Schur's lemma we obtain $\Phi = 0$, giving us the desired. \end{proof} \subsection{} It is easy to answer Question \ref{main question} in the case of square-integrable representations: \begin{proposition}\label{prop sq int} Let $V$ be a square-integrable irreducible unitary $G$-representation. Then $V$ is c-tempered with F{\o}lner sequence any increasing sequence $F_0 , F_1 , \ldots $ of open pre-compact subsets in $G$, such that $1 \in F_0$ and $\cup_{n \ge 0} F_n = G$. \end{proposition} \begin{proof} Recall, that matrix coefficients of a square-integrable irreducible representation are square integrable. Let $v_0 \in V$ be a unit vector. Let $F_0 , F_1 , \ldots $ be any increasing sequence of open pre-compact subsets in $G$ whose union is $G$ and with $1 \in F_0$. Let $v_1,v_2 \in V$. Condition (1) of Definition \ref{def c-temp} holds because we have $$ \Mss{v_1}{v_2}{F_n} \leq \Mss{v_1}{v_2}{G} \leq \left( \frac{ \Mss{v_1}{v_2}{G} }{ \Mss{v_0}{v_0}{F_1} } \right) \cdot \Mss{v_0}{v_0}{F_n} .$$ As for condition (2) of Definition \ref{def c-temp}, let $\epsilon > 0$ and let $K \subset G$ be compact. There exists $n_0 \ge 0$ such that $$ \Mss{v_1}{v_2}{G \smallsetminus F_{n_0}} \leq \epsilon \cdot \Mss{v_0}{v_0}{F_1} .$$ There exists $n_1 \ge n_0$ such that $K F_{n_0} K^{-1} \subset F_{n_1}$. Let $n \ge n_1$ and let $g_1 , g_2 \in K$. Notice that $ ( F_n \triangle g_2^{-1} F_n g_1 ) \cap F_{n_0} = \emptyset$. Thus we have $$ \Mss{v_1}{v_2}{F_n \triangle g_2^{-1} F_n g_1} \leq \Mss{v_1}{v_2}{G \smallsetminus F_{n_0}} \leq \epsilon \cdot \Mss{v_0}{v_0}{F_1} \leq $$ $$ \leq \epsilon \cdot \Mss{v_0}{v_0}{F_n}.$$ \end{proof} \section{c-Tempered irreps are tempered}\label{sec ctemp is temp} In this section, let $G$ be a unimodular second countable locally compact group. We introduce some intermediate concepts, with the goal of showing that c-tempered irreducible unitary $G$-representations are tempered (Corollary \ref{cor c-temp are temp}). \subsection{} Let us recall some standard definitions and statements regarding weak containment. 
\begin{definition} Let $V$ and $W$ be unitary $G$-representations. \begin{enumerate} \item $V$ is \textbf{weakly contained} in $W$ if for every $v \in V$, compact $K \subset G$ and $\epsilon > 0$ there exist $w_1 , \ldots , w_r \in W$ such that $$ | \langle gv , v \rangle - \sum_{1 \leq i \leq r} \langle g w_i , w_i \rangle | \leq \epsilon$$ for all $g \in K$. \item $V$ is \textbf{Zimmer-weakly contained}\footnote{or ``weakly contained in the sense of Zimmer", following \cite[Remark F.1.2.(ix)]{BeHaVa}.} in $W$ if for every $v_1 , \ldots , v_r \in V$, compact $K \subset G$ and $\epsilon > 0$ there exist $w_1 , \ldots , w_r \in W$ such that $$ |\langle gv_i , v_j \rangle - \langle g w_i , w_j \rangle | \leq \epsilon $$ for all $1 \leq i,j \leq r$ and $g \in K$. \end{enumerate} \end{definition} To facilitate the formulation of the next lemma, let us also give the following intermediate definition: \begin{definition} Let $V$ and $W$ be unitary $G$-representations. Let us say that $V$ is \textbf{strongly-weakly contained} in $W$ if for every $v \in V$, compact $K \subset G$ and $\epsilon > 0$ there exists $w \in W$ such that $$ | \langle g v , v \rangle - \langle gw , w \rangle | \leq \epsilon$$ for all $g \in K$. \end{definition} \begin{lemma}\label{lem prop of weak cont} Let $V$ and $W$ be unitary $G$-representations. \begin{enumerate} \item If $V$ is Zimmer-weakly contained in $W$ then $V$ is strongly-weakly contained in $W$, and if $V$ is strongly-weakly contained in $W$ then $V$ is weakly contained in $W$. \item If $V$ is weakly contained in $W$ then $V$ is strongly-weakly contained in\footnote{Here, $W^{\oplus \infty}$ stands for the Hilbert direct sum of countably many copies of $W$.} $W^{\oplus \infty}$. \item If $V$ is weakly contained in $W^{\oplus \infty}$ then $V$ is weakly contained in $W$. \item If $V$ is irreducible and $V$ is weakly contained in $W$ then $V$ is strongly-weakly contained in $W$. \item If $V$ is cyclic (in particular, if $V$ is irreducible) and $V$ is strongly-weakly contained in $W$ then $V$ is Zimmer-weakly contained in $W$. \item If $V$ is strongly-weakly contained in $W$ then $V$ is Zimmer-weakly contained in $W^{\oplus \infty}$. \end{enumerate} \end{lemma} \begin{proof} Statements $(1)$, $(2)$ and $(3)$ are straight-forward. For statement $(4)$ see, for example, \cite[Proposition F.1.4]{BeHaVa}. For statement $(5)$ see \cite[proof of $(iii)\implies(iv)$ of Proposition 2.2]{Ke}. For statement $(6)$, again see \cite[proof of $(iii)\implies(iv)$ of Proposition 2.2]{Ke} (one writes $V$ as a Hilbert direct sum of countably many cyclic unitary $G$-representations, and uses item $(5)$). \end{proof} \begin{corollary}\label{cor temp Zimmer temp} Let $V$ and $W$ be unitary $G$-representations. \begin{enumerate} \item $V$ is weakly contained in $W$ if and only if $V$ is Zimmer-weakly contained in $W^{\oplus \infty}$. \item If $V$ is irreducible, $V$ is weakly contained in $W$ if and only if $V$ is Zimmer-weakly contained in $W$. \end{enumerate} \end{corollary} The following definition of temperedness is classical: \begin{definition} A unitary $G$-representation $V$ is said to be \textbf{tempered} if $V$ is weakly contained in\footnote{Recall that $L^2 (G)$ denotes $L^2 (G , dg)$, viewed as a unitary $G$-representation via the right regular action.} $L^2 (G)$. 
\end{definition} \begin{remark}\label{rem irr zimmer} Notice that an irreducible unitary $G$-representation is tempered if and only if it is Zimmer-weakly contained in $L^2 (G)$, by part $(2)$ of Corollary \ref{cor temp Zimmer temp}. \end{remark} \subsection{} The next definitions are related to the idea that one representation is weakly contained in another if there ``almost" exists a $G$-intertwining isometric embedding from the one to the other. \begin{definition}\label{def asymp emb} Let $V$ and $W$ be unitary $G$-representations. A sequence $\{ S_n \}_{n \ge 0} \subset {\mathcal B} (V ; W)$ is an \textbf{asymptotic embedding} if the following conditions are satisfied: \begin{enumerate} \item The operators $\{ S_n \}_{n \ge 0}$ are jointly bounded, i.e. there exists $C>0$ such that $|| S_n ||^2 \leq C$ for all $n \ge 0$. \item Given $v_1 , v_2 \in V$ and a compact $K \subset G$ we have $$ \lim_{n \to +\infty} \ \sup_{g \in K} | \langle (S_n g - g S_n) v_1 , S_n v_2 \rangle | = 0.$$ \item Given $v_1 , v_2 \in V$, we have $$ \lim_{n \to +\infty} \ \langle S_n v_1 , S_n v_2 \rangle = \langle v_1 , v_2 \rangle.$$ \end{enumerate} \end{definition} \begin{definition} Let $V$ and $W$ be unitary $G$-representations. \begin{enumerate} \item We say that $V$ is \textbf{o-weakly contained}\footnote{``o" stands for ``operator".} in $W$ if there exists an asymptotic embedding $\{ S_n \}_{n \ge 0} \subset {\mathcal B} (V ; W)$. \item We say that $V$ is \textbf{o-tempered} if it is o-weakly contained in $L^2 (G)$. \end{enumerate} \end{definition} \begin{lemma}\label{lem asymp emb uniform} In the context of Definition \ref{def asymp emb}, if conditions $(1)$ and $(2)$ of Definition \ref{def asymp emb} are satisfied then given compacts $L_1 , L_2 \subset V$ and a compact $K \subset G$ we have $$ \lim_{n \to +\infty} \sup_{v_1 \in L_1 , v_2 \in L_2 , g \in K} | \langle (S_n g - g S_n) v_1 , S_n v_2 \rangle | = 0,$$ and if conditions $(1)$ and $(3)$ of Definition \ref{def asymp emb} are satisfied then given compacts $L_1, L_2 \subset V$ we have $$ \lim_{n \to +\infty} \sup_{v_1 \in L_1 , v_2 \in L_2} |\langle S_n v_1 , S_n v_2 \rangle - \langle v_1 , v_2 \rangle| = 0.$$ \end{lemma} \begin{proof} This follows from the well-known fact from functional analysis that pointwise convergence coincides with compact convergence on equi-continuous subsets, see \cite[Proposition 32.5]{Tr}. \end{proof} \begin{lemma}\label{lem asymp emb irrep} In the context of Definition \ref{def asymp emb}, assume that $V$ is irreducible. If conditions $(1)$ and $(2)$ of Definition \ref{def asymp emb} are satisfied then there exists a sub-sequence $0 \leq m_0 < m_1 < \ldots$ and $c \in {\mathbb R}_{\ge 0}$ such that for all $v_1 , v_2 \in V$ we have \begin{equation}\label{eq limit c} \lim_{n \to +\infty} \ \langle S_{m_n} v_1 , S_{m_n} v_2 \rangle = c \cdot \langle v_1 , v_2 \rangle.\end{equation} In particular, if there exists $v \in V$ such that $\liminf_{n \to +\infty} || S_n v ||^2 > 0$ then there exists $d \in {\mathbb R}_{> 0}$ (in fact, $d^{-2} = \lim_{n \to +\infty} || S_{m_n} v ||^2 / || v ||^2$) such that $\{ d S_{m_n} \}_{n \ge 0}$ satisfies condition $(3)$ of Definition \ref{def asymp emb}, i.e. is an asymptotic embedding. 
\end{lemma} \begin{proof} By the sequential Banach–Alaoglu theorem (applicable as $V$ is separable, and $\{ S_n^* S_n \}_{n \ge 0}$ are jointly bounded by condition $(1)$), there exists a sub-sequence $1 \leq m_0 < m_1 < \ldots$ such that $\{ S_{m_n}^* S_{m_n} \}_{n \ge 0}$ converges in the weak operator topology to some $S \in {\mathcal B} (V)$. Let us first check that $S$ is $G$-invariant. For $g \in G$ and $v_1 , v_2 \in V$ we have $$ |\langle S_n^* S_n g v_1 , v_2 \rangle - \langle S_n^* S_n v_1 , g^{-1} v_2 \rangle | = | \langle S_n g v_1 , S_n v_2 \rangle - \langle S_n v_1 , S_n g^{-1} v_2 \rangle | \leq $$ $$ \leq |\langle (S_n g - g S_n) v_1 , S_n v _2 \rangle | + |\langle S_n v_1 , (g^{-1} S_n - S_n g^{-1}) v_2 \rangle |$$ and both summands in the last expression converge to $0$ as $n \to +\infty$ by condition $(2)$. Therefore $$ | \langle Sg v_1 , v_2 \rangle - \langle g S v_1 , v_2 \rangle | = \lim_{n \to +\infty} |\langle S_{m_n}^* S_{m_n} g v_1 , v_2 \rangle - \langle S_{m_n}^* S_{m_n} v_1 , g^{-1} v_2 \rangle | = 0 $$ i.e. $\langle S g v_1 , v_2 \rangle = \langle g S v_1 , v_2 \rangle$. Thus, since $v_1$ and $v_2$ were arbitrary, $S g = g S$. This holds for all $g \in G$, i.e. $S$ is $G$-invariant. By Schur's lemma, we deduce $S = c \cdot \textnormal{Id}_V$ for some $c \in {\mathbb C}$. This translates precisely to (\ref{eq limit c}). The last claim is then straight-forward. \end{proof} \begin{remark}\label{rem alt cond} Using Lemma \ref{lem asymp emb uniform}, it is straight-forward that, assuming condition $(1)$ of Definition \ref{def asymp emb}, conditions $(2)$ and $(3)$ in Definition \ref{def asymp emb} are equivalent to the one condition that for $v_1 , v_2 \in V$ and a compact $K \subset G$ one has $$ \lim_{n \to +\infty} \sup_{g \in K} | \langle g S_n v_1 , S_n v_2 \rangle - \langle g v_1 , v_2 \rangle | = 0.$$ Indeed, let us write \begin{equation}\label{eq alt cond} \langle g S_n v_1 , S_n v_2 \rangle - \langle gv_1 , v_2 \rangle = \langle (g S_n - S_n g) v_1 , S_n v_2 \rangle + ( \langle S_n g v_1 , S_n v_2 \rangle - \langle gv_1 , v_2 \rangle).\end{equation} The current condition gives condition $(3)$ by plugging in $g = 1$, and then (\ref{eq alt cond}) gives condition $(2)$, using the uniformity provided by Lemma \ref{lem asymp emb uniform}. Conversely, (\ref{eq alt cond}) shows immediately (again taking into consideration Lemma \ref{lem asymp emb uniform}) that conditions $(2)$ and $(3)$ imply the current condition. \end{remark} \subsection{} The concept of o-weak containment in fact coincides with that of Zimmer-weak containment: \begin{proposition}\label{prop o cont is weakly cont} Let $V$ and $W$ be unitary $G$-representations. Then $V$ is o-weakly contained in $W$ if and only if $V$ is Zimmer-weakly contained in $W$. \end{proposition} \begin{proof} Let $\{ S_n \}_{n \ge 0} \subset {\mathcal B} (V ; W)$ be an asymptotic embedding. Given $v_1 , \ldots , v_r \in V$, by Remark \ref{rem alt cond}, given any compact $K \subset G$ we have $$ \lim_{n \to +\infty} \sup_{g \in K} |\langle g S_n v_i , S_n v_j \rangle - \langle g v_i , v_j \rangle | = 0$$ for all $1 \leq i,j \leq r$, and thus $$ \lim_{n \to +\infty} \sup_{g \in K} \sup_{1 \leq i,j \leq r} |\langle g S_n v_i , S_n v_j \rangle - \langle g v_i , v_j \rangle | = 0.$$ Thus by definition $V$ is Zimmer-weakly contained in $W$. Conversely, suppose that $V$ is Zimmer-weakly contained in $W$. Let $\{ e_n \}_{n \ge 0}$ be an orthonormal basis for $V$. 
Let $\{ K_n \}_{n \ge 0}$ be an increasing sequence of compact subsets in $G$, with $1 \in K_0$ and with the property that for any compact subset $K \subset G$ there exists $n \ge 0$ such that $K \subset K_n$. As $V$ is Zimmer-weakly contained in $W$, given $n \ge 0$, let us find $w^n_0 , \ldots , w^n_n \in W$ such that $$ \sup_{g \in K_n} |\langle g e_i , e_j \rangle - \langle g w^n_i , w^n_j \rangle | \leq \frac{1}{n+1}$$ for all $0 \leq i,j \leq n$. Define $S_n : V \to W$ by $$ S_n \left( \sum_{i \ge 0} c_i \cdot e_i \right) := \sum_{0 \leq i \leq n} c_i \cdot w^n_i.$$ We want to check that $\{ S_n \}_{n \ge 0}$ is an asymptotic embedding. As for condition $(1)$, notice that $$ \left\Vert S_n \left( \sum_{i \ge 0} c_i e_i \right) \right\Vert^2 = \left\Vert \sum_{0 \leq i \leq n} c_i w^n_i \right\Vert^2 = \left| \sum_{0 \leq i,j \leq n} c_i \overline{c_j} \cdot \langle w^n_i , w^n_j \rangle \right| \leq $$ $$ \leq \left| \sum_{0 \leq i,j \leq n} c_i \overline{c_j} \cdot \langle e_i , e_j \rangle \right| + \left| \sum_{0 \leq i,j \leq n } c_i \overline{c_j} \cdot \left( \langle w_i^n , w_j^n \rangle - \langle e_i , e_j \rangle \right)\right| \leq $$ $$ \leq \sum_{0 \leq i \leq n} |c_i|^2 + \frac{1}{n+1} \left( \sum_{0 \leq i \leq n} |c_i| \right)^2 \leq2 \sum_{0 \leq i \leq n} |c_i|^2 \leq 2 \cdot \left\Vert \sum_{i \ge 0} c_i e_i \right\Vert^2,$$ showing that $|| S_n ||^2 \leq 2$ for all $n \ge 0$. It is left to show the condition as in Remark \ref{rem alt cond}. Let us thus fix a compact $K \subset G$. Notice that it is straight-forward to see that it is enough to check the condition for vectors in a subset of $V$, the closure of whose linear span is equal to $V$. So it is enough to check that $$ \lim_{n \to +\infty} \sup_{g \in K} |\langle g S_n e_i , S_n e_j \rangle - \langle g e_i , e_j \rangle | = 0$$ for any given $i,j \ge 0$. Taking $n$ big enough so that $K \subset K_n$ and $n \ge \max \{ i,j \}$, we have $$ \sup_{g \in K} |\langle g S_n e_i , S_n e_j \rangle - \langle g e_i , e_j \rangle | = \sup_{g \in K} |\langle g w^n_i , w^n_j \rangle - \langle g e_i , e_j \rangle | \leq \frac{1}{n+1},$$ giving the desired. \end{proof} \begin{corollary}\label{cor o temp is temp} An irreducible unitary $G$-representation is o-tempered if and only if it is tempered. \end{corollary} \begin{proof} This is a special case of Proposition \ref{prop o cont is weakly cont}, taking into account Remark \ref{rem irr zimmer}. \end{proof} \subsection{} Here we give a weaker version of c-temperedness, which is technically convenient to relate to other concepts of this section. \begin{definition}\label{def right-c-temp} Let $V$ be an irreducible unitary $G$-representation. Let $F_0 , F_1 , \ldots \subset G$ be a sequence of measurable pre-compact subsets all containing a neighbourhood of $1$. We say that $V$ is \textbf{right-c-tempered with F{\o}lner sequence $F_0 , F_1 , \ldots$} if there exists a unit vector $v_0 \in V$ such that the following two conditions are satisfied: \begin{enumerate} \item For all $v \in V$ we have $$ \limsup_{n \to +\infty} \frac{ \Mss{v}{v_0}{F_n} }{ \Mss{v_0}{v_0}{F_n} } < +\infty.$$ \item For all $v \in V$ and all compact subsets $K \subset G$ we have $$ \lim_{n \to +\infty} \frac{\sup_{g \in K} \Mss{v}{v_0}{F_n \triangle F_n g} }{ \Mss{v_0}{v_0}{F_n} } = 0.$$ \end{enumerate} \end{definition} \subsection{} Finally, we can show that c-tempered irreducible unitary $G$-representations are tempered. 
\begin{proposition}\label{prop right-c is op} Let $V$ be an irreducible unitary $G$-representation. Assume that $V$ is right-c-tempered (with some F{\o}lner sequence). Then $V$ is o-tempered. More precisely, suppose that $V$ is right-c-tempered with F{\o}lner sequence $F_0 , F_1 , \ldots$ and let $v_0 \in V$ be a unit vector for which the conditions (1) and (2) of Definition \ref{def right-c-temp} are satisfied. Then the sequence of operators $$ S_0 , S_1 , \ldots : V \to L^2 (G)$$ given by $$ S_n (v) (x) := \begin{cases} \frac{1}{\sqrt{ \Mss{v_0}{v_0}{F_n} }} \cdot \langle x v , v_0 \rangle , & x \in F_n \\ 0, & x \notin F_n \end{cases}$$ admits a sub-sequence which is an asymptotic embedding. \end{proposition} \begin{corollary}\label{cor c-temp are temp} Every c-tempered irreducible unitary $G$-representation (with some F{\o}lner sequence) is tempered. \end{corollary} \begin{proof} It is clear that c-temperedness implies right-c-temperedness, Proposition \ref{prop right-c is op} says that right-c-temperedness implies o-temperedness, and Corollary \ref{cor o temp is temp} says that o-temperedness is equivalent to temperedness. \end{proof} \begin{proof}[Proof (of Proposition \ref{prop right-c is op}).] Clearly each $S_n$ is bounded. By condition (1) of Definition \ref{def right-c-temp}, for any fixed $v \in V$ there exists $C>0$ such that $||S_n (v)||^2 \leq C$ for all $n$. By the Banach-Steinhaus theorem, this implies that the operators $S_0 , S_1 , \ldots$ are jointly bounded, thus condition $(1)$ of Definition \ref{def asymp emb} is verified. To verify condition $(2)$ of Definition \ref{def asymp emb}, fix $v \in V$ and a compact $K \subset G$. Given $g \in K$ and a function $f \in L^2 (G)$ of $L^2$-norm one, we have $$ | \langle S_n (g v) - g S_n (v) , f \rangle | = \frac{ \left| \int_{F_n} \langle x g v , v_0 \rangle \overline{f(x)} \cdot dx - \int_{F_n g^{-1}} \langle xg v , v_0 \rangle \overline{f(x)} \cdot dx \right| }{\sqrt{ \Mss{v_0}{v_0}{F_n} }} \leq $$ $$ \leq \frac{ \int_{F_n \triangle F_n g^{-1}} \left| \langle xg v , v_0 \rangle \overline{f(x)} \right| \cdot dx }{\sqrt{ \Mss{v_0}{v_0}{F_n} }} \leq \sqrt{\frac{\int_{F_n \triangle F_n g^{-1}} |\langle xg v , v_0 \rangle |^2 \cdot dx}{ \Mss{v_0}{v_0}{F_n} }} \cdot \sqrt{\int_{G} |f(x)|^2 \cdot dx} = $$ $$ = \sqrt{\frac{ \Mss{v}{v_0}{F_n g \triangle F_n} }{ \Mss{v_0}{v_0}{F_n} }}.$$ Since $f$ was arbitrary, this implies $$ || S_n (gv) - g S_n (v) || \leq \sqrt{\frac{ \Mss{v}{v_0}{F_n g \triangle F_n} }{ \Mss{v_0}{v_0}{F_n} }}$$ for $g \in K$. By condition (2) of Definition \ref{def right-c-temp}, this tends to $0$ as $n \to +\infty$, uniformly in $g \in K$, and hence the desired. Now, using Lemma \ref{lem asymp emb irrep} we see that some sub-sequence will satisfy condition $(3)$ of Definition \ref{def asymp emb}, once we notice that $|| S_n v_0 ||^2 = 1$ for all $n$ by construction. \end{proof} \section{The case of $K$-finite vectors}\label{sec Kfin} In this section, $G$ is a semisimple group over a local field. We continue with notations from \S\ref{sec intro}. The purpose of this section is to prove Theorem \ref{thm main Kfin}. \subsection{} Let us first show that, when $G$ is non-Archimedean, it is enough to establish condition (2) of Theorem \ref{thm main Kfin}, and condition (1) will then follow. So we assume condition (2) and use the notation $C(v_1 , v_2)$ therein. Let us denote by $\underline{V} \subset V$ the subspace of $K$-finite (i.e. smooth) vectors.
By the polarization identity, it is clear that for all $v_1 , v_2 , v_3 , v_4 \in \underline{V}$ the limit $$ \lim_{r \to +\infty} \frac{\int_{G_{<r}} \langle gv_1 , v_2 \rangle \overline{\langle gv_3 , v_4 \rangle} \cdot dg}{r^{\mathbf{d} (V)}}$$ exists, let us denote it by $D(v_1 , v_2 , v_3 , v_4)$, and $D$ is a quadlinear form $$D : \underline{V} \times \overline{\underline{V}} \times \overline{\underline{V}} \times \underline{V} \to {\mathbb C}.$$ Next, we claim that for all $v_1 , v_2 , v_3 , v_4 \in \underline{V}$ and all $g_1 , g_2 \in G$ we have $$ D(g_1 v_1, g_2 v_2 , g_1 v_3 , g_2 v_4) = D(v_1 , v_2 , v_3 , v_4).$$ Indeed, again by the polarization identity, it is enough to show that for all $v_1 , v_2 \in \underline{V}$ and all $g_1 , g_2 \in G$ we have \begin{equation}\label{eq Kfin nonArch C is G inv} C(g_1 v_1 , g_2 v_2) = C(v_1 , v_2).\end{equation} There exists $r_0 \ge 0$ such that $$ G_{<r-r_0} \subset g_2^{-1} G_{<r} g_1 \subset G_{<r+r_0}.$$ We have: $$ \int_{G_{<r}} |\langle g g_1 v_1 , g_2 v_2 \rangle |^2 \cdot dg = \int_{g_2^{-1} G_{<r} g_1} |\langle gv_1 , v_2 \rangle |^2 \cdot dg$$ and therefore $$ \int_{G_{<r-r_0}} |\langle g v_1 , v_2 \rangle |^2 \cdot dg \leq \int_{G_{<r}} |\langle g g_1 v_1 , g_2 v_2 \rangle |^2 \cdot dg \leq \int_{G_{<r+r_0}} |\langle gv_1 , v_2 \rangle |^2 \cdot dg.$$ Dividing by $r^{\mathbf{d} (V)}$ and taking the limit $r \to +\infty$ we obtain (\ref{eq Kfin nonArch C is G inv}). Now, by Schur's lemma (completely analogously to the reasoning with $\Phi$ in the proof of Proposition \ref{prop c-tempered orth rel}), we obtain that for some $C >0$ we have $$ D(v_1 , v_2 , v_3 , v_4) = C \cdot \langle v_1 , v_3 \rangle \overline{\langle v_2 , v_4 \rangle}$$ for all $v_1 , v_2 , v_3 , v_4 \in \underline{V}$. \subsection{} Thus, we aim at establishing condition (2) of Theorem \ref{thm main Kfin} in either the non-Archimedean or the Archimedean cases. Since a complex group can be considered as a real group and the formulation of the desired theorem will not change, we assume that we are either in the real case or in the non-Archimedean case. Also, notice that to show Theorem \ref{thm main Kfin} for all maximal compact subgroups it is enough to show it for one maximal compact subgroup (in the non-Archimedean case because the resulting notion of $K$-finite vectors does not depend on the choice of $K$ and in the real case since all maximal compact subgroups are conjugate). \subsection{} Let us fix some notation. We choose a maximal split torus $A \subset G$ and a minimal parabolic $P \subset G$ containing $A$. We denote $$ {\mathfrak a} := {\textnormal{Hom}}_{{\mathbb Z}} (X^* (A) , {\mathbb R}).$$ We let $L \subset {\mathfrak a}$ be ${\mathfrak a}$ itself in the real case and the lattice in ${\mathfrak a}$ corresponding to $X_* (A)$ in the non-Archimedean case. We let $\exp : L \to A$ be the exponential map constructed in the usual way: \begin{itemize} \item If $G$ is real, we let $\exp$ be the composition $L = {\mathfrak a} \cong {\rm Lie} (A) \to A$ where the last map is the exponential map from the Lie algebra to the Lie group, while the isomorphism is the identification resulting from the map $X^* (A) \to {\rm Lie} (A)^*$ given by taking the differential at $1 \in A$. \item If $G$ is non-Archimedean, we let $\exp$ be the composition $L \cong X_* (A) \to A$ where the last map is given by sending $\chi$ to $\chi (\varpi^{-1})$, where $\varpi$ is a uniformizer.
\end{itemize} We denote by $$ \Delta \subset \widetilde{\Delta} \subset X^* (A) \subset {\mathfrak a}^*$$ the set of simple roots $\Delta$ and the set of positive roots $\widetilde{\Delta}$ (resulting from the choice of $P$). We identify ${\mathfrak a}$ with ${\mathbb R}^{\Delta}$ in the clear way. We set $$ {\mathfrak a}^+ := \{ x \in {\mathfrak a} \ | \ \alpha (x) \ge 0 \ \forall \alpha \in \Delta \}$$ and $L^+ := L \cap {\mathfrak a}^+$. Let us in the standard way choose a maximal compact subgroup $K \subset G$ ``in good relative position" with $A$. In the real case this means ${\rm Lie} (A)$ sitting in the $(-1)$-eigenspace of a Cartan involution whose $1$-eigenspace is ${\rm Lie} (K)$ and in the non-Archimedean case it is as in \cite[V.5.1., Th{\' e}or{\` e}me]{Re}. In the non-Archimedean case let us also, to simplify notation, assume that $G = K \exp (L^+) K$ (in general there is a finite subset $S \subset Z_G (A)$ such that $G = \coprod_{s \in S} K \exp (L^+) s K$ and one proceeds with the obvious modifications). Let us denote $\rho := \frac{1}{2}\sum_{\alpha \in \widetilde{\Delta}} \mu_{\alpha} \cdot \alpha \in {\mathfrak a}^*$ where $\mu_{\alpha} \in {\mathbb Z}_{\ge 1}$ is the multiplicity of the root $\alpha$. Fixing Haar measures, especially denoting by $dx$ a Haar measure on $L$, we have a uniquely defined continuous $\omega : L^+ \to {\mathbb R}_{\ge 0}$ such that the following integration formula holds: $$ \int_{G} f(g) \cdot dg = \int_{K \times K} \left( \int_{L^+} \omega (x) f(k_1 \exp (x) k_2) \cdot dx \right) \cdot dk_1 dk_2.$$ Regarding the behaviour of $\omega (x)$, we can use \cite[around Lemma 1.1]{Ar} as a reference. In the real case there exists $C>0$ such that \begin{equation}\label{eq omega is like rho Arch} \frac{\omega (x)}{e^{2\rho (x)}} = C \cdot \prod_{\alpha} \left( 1 - e^{-2\alpha (x)} \right) \end{equation} where $\alpha$ runs over $\widetilde{\Delta}$ according to multiplicities $\mu_{\alpha}$. In the non-Archimedean case, for every $\Theta \subset \Delta$ there exists $C_{\Theta} > 0$ such that \begin{equation}\label{eq omega is like rho nonArch} \frac{\omega (x)}{e^{2\rho (x)}} = C_{\Theta} \end{equation} for all $x \in L^+$ satisfying $\alpha (x) = 0$ for all $\alpha \in \Theta$ and $\alpha (x) \neq 0$ for all $\alpha \in \Delta \smallsetminus \Theta$. Since, by Claim \ref{clm doesnt depend on norm}, we are free in our choice of the norm $||-||$ on ${\mathfrak g}$, let us choose $||-||$ to be a supremum norm in coordinates gotten from an $A$-eigenbasis. Then\footnote{Recall the notation $\mathbf{r}$ from \S\ref{sec intro}.} $$ \mathbf{r} (\exp (x)) = \log q \cdot \max_{\alpha \in \widetilde{\Delta} } |\alpha (x)|$$ where $q$ is the residual cardinality in the non-Archimedean case and $q := e$ in the real case. Let us denote $$ {\mathfrak a}_{<r} := \{ x \in {\mathfrak a} \ | \ | \alpha (x)| < \tfrac{r}{\log q} \ \forall \alpha \in \widetilde{\Delta} \}$$ and ${\mathfrak a}^+_{<r} := {\mathfrak a}_{<r} \cap {\mathfrak a}^+$ and similarly $L_{<r} := L \cap {\mathfrak a}_{<r}$, $L^+_{<r} := L^+ \cap L_{<r}$. Then $ L_{<r} = \exp^{-1} (G_{<r})$. Hence there exists $r_0 \ge 0$ such that \begin{equation}\label{eq Gr is like Lr} K \exp (L^+_{<r-r_0}) K \subset G_{<r} \subset K \exp(L^+_{<r+r_0}) K. \end{equation} \subsection{} Let now $V$ be a tempered irreducible unitary $G$-representation. Let us denote by $\underline{V} \subset V$ the subspace of $K$-finite vectors.
Given $v_1 , v_2 \in V$, we will denote by $f_{v_1 , v_2}$ the continuous function on $L^+$ given by $$f_{v_1 , v_2} (x) := e^{\rho (x)} \langle \exp (x) v_1 , v_2 \rangle.$$ We have $$ \Ms{v_1}{v_2}{r} = \int_{G_{<r}} |\langle gv_1 , v_2 \rangle |^2 \cdot dg = $$ $$= \int_{K \times K} \left( \int_{L^+ \cap \exp^{-1} (k_2 G_{<r} k_1^{-1})} \frac{\omega (x)}{e^{2 \rho (x)}} |f_{k_1 v_1 , k_2 v_2} (x)|^2 \cdot dx \right) \cdot dk_1 dk_2.$$ In view of (\ref{eq Gr is like Lr}), in order to prove Theorem \ref{thm main Kfin} it is enough to show: \begin{claim}\label{clm Kfin ess} There exists $\mathbf{d} (V) \in {\mathbb Z}_{\ge 0}$ such that for every non-zero $v_1 , v_2 \in \underline{V}$ there exists $C(v_1 , v_2)>0$ such that $$ \lim_{r \to +\infty} \frac{\int_{K \times K} \left( \int_{L^+_{<r}} \frac{\omega (x)}{e^{2 \rho (x)}}| f_{k_1 v_1 , k_2 v_2} (x) |^2 \cdot dx \right) \cdot dk_1 dk_2}{r^{\mathbf{d} (V)}} = C(v_1 , v_2).$$ \end{claim} \subsection{} We have the following: \begin{claim}\label{clm growth exists}\ \begin{enumerate} \item Given $v_1 , v_2 \in \underline{V}$, either $f_{v_1 , v_2} = 0$ in which case we set $\mathbf{d} (v_1 , v_2) := -\infty$, or there exist $\mathbf{d} (v_1 , v_2) \in {\mathbb Z}_{\ge 0}$ and $C(v_1 , v_2)>0$ such that $$ \lim_{r \to +\infty} \frac{1}{r^{\mathbf{d} (v_1 , v_2)}} \int_{L^+_{<r}} \frac{\omega (x)}{e^{2 \rho (x)}} |f_{v_1 , v_2} (x)|^2 \cdot dx = C(v_1 , v_2).$$ \item In the real case, we have $\mathbf{d} (v_1 , X v_2) \leq \mathbf{d} (v_1 , v_2)$ for all $v_1 , v_2 \in \underline{V}$ and $X \in {\mathfrak g}$. \item Denoting $\mathbf{d}(V) := \sup_{v_1 , v_2 \in \underline{V}} \mathbf{d} (v_1 , v_2)$, we have neither $\mathbf{d} (V) = -\infty$ nor $\mathbf{d} (V) =+\infty$ (i.e. $\mathbf{d} (V) \in {\mathbb Z}_{\ge 0}$). \end{enumerate} \end{claim} Let us establish Claim \ref{clm Kfin ess} given Claim \ref{clm growth exists}: \begin{proof}[Proof (of Claim \ref{clm Kfin ess} given Claim \ref{clm growth exists}).] Let us first handle the non-Archimedean case. Let us notice that we can replace $v_1$ and $v_2$ by $g_1 v_1$ and $g_2 v_2$ for any $g_1,g_2 \in G$. Indeed, for some $r_0 \ge 0$ we have $$g_2^{-1} G_{<r-r_0} g_1 \subset G_{<r} \subset g_2^{-1} G_{<r+r_0} g_1$$ and thus $$ \int_{G_{<r-r_0}} | \langle g g_1 v_1 , g_2 v_2 \rangle |^2 \cdot dg \leq \int_{G_{<r}} |\langle g v_1 , v_2 \rangle |^2 \cdot dg \leq \int_{G_{<r+r_0}} | \langle g g_1 v_1 , g_2 v_2 \rangle |^2 \cdot dg,$$ from which the claim clearly follows. Since $G \cdot v_1$ spans $\underline{V}$ and $G \cdot v_2$ spans $\underline{V}$, we deduce that by replacing $v_1$ and $v_2$ we can assume that $\mathbf{d} (v_1 , v_2) = \mathbf{d} (V)$. Now, since the integral $$ \int_{K \times K} \left( \int_{L^+_{<r}} \frac{\omega (x)}{e^{2 \rho (x)}} |f_{k_1 v_1, k_2 v_2} (x)|^2 \cdot dx \right) \cdot dk_1 dk_2$$ over $K \times K$ is simply a finite linear combination, with positive coefficients, of integrals of the form appearing in item (1) of Claim \ref{clm growth exists} (as $v_1$ and $v_2$ are smooth, the vectors $k_1 v_1$ and $k_2 v_2$ run over finite sets when $k_1 , k_2$ run over $K$), the claim is clear. Let us now handle the real case. First, we would like to see that for some $k_1 , k_2 \in K$ we have $\mathbf{d} (k_1 v_1 , k_2 v_2) = \mathbf{d} (V)$. To that end, let us denote by ${\mathfrak n}$ and ${\mathfrak n}^-$ the Lie algebras of $N$ and $N^-$ (the unipotent radicals of $P$ and of $P^-$, the opposite to $P$ with respect to $A$) and identify ${\mathfrak a}$ with the Lie algebra of $A$ as before.
Since ${\mathcal U} ({\mathfrak n}^-) {\mathcal U} ({\mathfrak a}) K v_1$ spans $\underline{V}$ and ${\mathcal U} ({\mathfrak n}) {\mathcal U} ({\mathfrak a}) K v_2$ spans $\underline{V}$, we can find $k_1 , k_2 \in K$ and some elements $v'_1 \in {\mathcal U} ({\mathfrak n}^-) {\mathcal U} ({\mathfrak a}) k_1 v_1$ and $v'_2\in {\mathcal U} ({\mathfrak n}) {\mathcal U} ({\mathfrak a}) k_2 v_2$ such that $\mathbf{d} (v'_1, v'_2) = \mathbf{d} (V)$. By Claim \ref{clm growth exists}(2) this forces $\mathbf{d} (k_1 v_1 , k_2 v_2) = \mathbf{d} (V)$. Next, given two continuous functions $f_1 , f_2$ on ${\mathfrak a}^+$ and $d \in {\mathbb Z}_{\ge 0}$ let us denote $$ \langle f_1 , f_2 \rangle_d := \lim_{r \to +\infty} \frac{ \int_{{\mathfrak a}^+_{<r}} \frac{\omega (x)}{e^{2 \rho (x)}} f_1(x) \overline{f_2 (x)} \cdot dx}{r^d}$$ if the limit exists, and $|| f ||^2_d := \langle f , f \rangle_d$. We claim that the function $(k_1 , k_2) \mapsto || f_{k_1 v_1 , k_2 v_2} ||^2_{\mathbf{d} (V)}$ on $K \times K$ is continuous and that $$ \lim_{r \to +\infty} \frac{\int_{K \times K} \left( \int_{{\mathfrak a}^+_{<r}} \frac{\omega (x)}{e^{2 \rho (x)}} |f_{k_1 v_1 , k_2 v_2} (x)|^2 \cdot dx \right) \cdot dk_1 dk_2}{r^{\mathbf{d} (V)}} = \int_{K \times K} || f_{k_1 v_1 , k_2 v_2 } ||^2_{\mathbf{d} (V)} \cdot dk_1 dk_2.$$ Then the right hand side is non-zero since we have seen that $\mathbf{d} (k_1 v_1 , k_2 v_2) = \mathbf{d} (V)$ for some $k_1 , k_2 \in K$, and we are done. Let $(v_1^i)$ be a basis for the ${\mathbb C}$-span of $\{ k v_1 \}_{k \in K}$ and let $(v_2^j)$ be a basis for the ${\mathbb C}$-span of $\{ k v_2 \}_{k \in K}$. Let us write $k v_1 = \sum_i c_i(k) v_1^i$ and $k v_2 = \sum_j d_j (k) v_2^j$, so that $c_i$ and $d_j$ are continuous ${\mathbb C}$-valued functions on $K$. Then $$ \int_{{\mathfrak a}^+_{<r}} \frac{\omega (x)}{e^{2 \rho (x)}} |f_{k_1 v_1 , k_2 v_2} (x)|^2 \cdot dx = $$ $$ = \sum_{i_1 , i_2 , j_1 , j_2} c_{i_1} (k_1) \overline{c_{i_2} (k_1)} \overline{d_{j_1} (k_2)} d_{j_2} (k_2) \int_{{\mathfrak a}^+_{<r} } \frac{\omega (x)}{e^{2 \rho (x)}} \cdot f_{v_1^{i_1} , v_2^{j_1}} (x) \cdot \overline{f_{v_1^{i_2} , v_2^{j_2}} (x)} \cdot dx.$$ Therefore $$ || f_{k_1 v_1 , k_2 v_2}||^2_{\mathbf{d} (V)} = \sum_{i_1 , i_2 , j_1 , j_2} c_{i_1} (k_1) \overline{c_{i_2} (k_1)} \overline{d_{j_1} (k_2)} d_{j_2} (k_2) \langle f_{v_1^{i_1} , v_2^{j_1}} , f_{v_1^{i_2} , v_2^{j_2}} \rangle_{\mathbf{d} (V)} $$ so $(k_1 , k_2) \mapsto || f_{k_1 v_1 , k_2 v_2} ||^2_{\mathbf{d} (V)}$ is indeed continuous. Also, it is now clear that we have $$ \lim_{r \to +\infty} \frac{\int_{K \times K} \left( \int_{{\mathfrak a}^+_{<r}} \frac{\omega (x)}{e^{2 \rho (x)}} |f_{k_1 v_1 , k_2 v_2} (x)|^2 \cdot dx \right) \cdot dk_1 dk_2}{r^{\mathbf{d} (V)}} = $$ $$ = \sum_{i_1 , i_2 , j_1 , j_2} \langle f_{v_1^{i_1} , v_2^{j_1}} , f_{v_1^{i_2} , v_2^{j_2}} \rangle_{\mathbf{d} (V)} \int_{K \times K} c_{i_1} (k_1) \overline{c_{i_2} (k_1)} \overline{d_{j_1} (k_2)} d_{j_2} (k_2) \cdot dk_1 dk_2 = $$ $$ = \int_{K \times K} || f_{k_1 v_1 , k_2 v_2} ||^2_{\mathbf{d} (V) } \cdot dk_1 dk_2.$$ \end{proof} \subsection{} Let us now explain Claim \ref{clm growth exists} in the case when $G$ is non-Archimedean. Let $v_1 , v_2 \in \underline{V}$. Let us choose a positive integer $k$ large enough so that $k \cdot {\mathbb Z}_{\ge 0}^{\Delta} \subset L$.
By enlarging $k$ even more if necessary, we have, by \cite[Theorem 4.3.3.]{Ca}, that for every $\Theta \subset \Delta$ and every $y \in ( {\mathbb R}_{\ge 0}^{\Theta} \times {\mathbb R}_{>k}^{\Delta \smallsetminus \Theta}) \cap L^+ $ the function $$ f_{v_1 , v_2 , \Theta , y} : k \cdot {\mathbb Z}_{\ge 0}^{\Delta \smallsetminus \Theta} \to {\mathbb C}$$ given (identifying ${\mathbb R}^{\Delta \smallsetminus \Theta}$ with a subspace of ${\mathbb R}^{\Delta}$ in the clear way) by $x \mapsto f_{v_1 , v_2}(y+x)$, can be written as $$ \sum_{1 \leq i \leq p} c_i \cdot e^{\lambda_i (x_{\Delta \smallsetminus \Theta})} q_i (x_{\Delta \smallsetminus \Theta})$$ where $c_i \in {\mathbb C} \smallsetminus \{ 0\}$, $\lambda_i$ is a complex-valued functional on ${\mathbb R}^{\Delta \smallsetminus \Theta}$, and $q_i$ is a monomial on ${\mathbb R}^{\Delta \smallsetminus \Theta}$. Here $x_{\Delta \smallsetminus \Theta}$ is the image of $x$ under the natural projection ${\mathbb R}^{\Delta} \to {\mathbb R}^{\Delta \smallsetminus \Theta}$. We can assume that the couples in the collection $\{ (\lambda_i , q_i) \}_{1 \leq i \leq p}$ are pairwise different. Since $V$ is tempered, by ``Casselman's criterion" we in addition have that for every $1 \leq i \leq p$, ${\rm Re} (\lambda_i)$ is non-positive on ${\mathbb R}_{\ge 0}^{\Delta \smallsetminus \Theta}$. By Claim \ref{clm appb lat}, either $p = 0$, equivalently $f_{v_1 , v_2 , \Theta , y} = 0$ (in which case we set $d_{v_1 , v_2 , \Theta , y} := -\infty$), or there exists $d_{v_1 , v_2 , \Theta , y} \in {\mathbb Z}_{\ge 0}$ such that the limit $$ \lim_{r \to +\infty} \frac{1}{r^{d_{v_1 , v_2 , \Theta , y}}} \sum_{x \in (k \cdot {\mathbb Z}_{\ge 0}^{\Delta \smallsetminus \Theta}) \cap L^+_{<r}} |f_{v_1 , v_2}(y+x)|^2$$ exists and is strictly positive. Now, given $y \in L^+$ let us denote $\Theta_y := \{ \alpha \in \Delta \ | \ y_{\alpha} \leq k \}$ where by $y_{\alpha}$ we denote the coordinate of $y \in {\mathbb R}^{\Delta}$ at the $\alpha$-place. Let $Y \subset L^+$ be the subset of $y \in L^+$ for which $y_{\alpha} \leq 2k$ for all $\alpha \in \Delta$. Then $Y$ is a finite set, and we have \begin{equation}\label{eq break L into subsets} L^+ = \coprod_{y \in Y} \left( y + k \cdot {\mathbb Z}_{\ge 0}^{\Delta \smallsetminus \Theta_y} \right).\end{equation} Notice also that $\omega (x) / e^{2 \rho (x)}$ is a positive constant on each of the subsets over which we take the union in (\ref{eq break L into subsets}). We set $\mathbf{d} (v_1 , v_2) := \max_{y \in Y} d_{v_1 , v_2 , \Theta_y , y}$. We see that either $f_{v_1 , v_2} = 0$ (then $\mathbf{d} (v_1 , v_2) = -\infty$) or the limit $$ \lim_{r \to +\infty} \frac{1}{r^{\mathbf{d} (v_1 , v_2)}} \sum_{x \in L^+_{<r} } \frac{\omega (x)}{e^{2 \rho (x)}} |f_{v_1 , v_2} (x)|^2$$ exists and is strictly positive. That $\mathbf{d} (V)$ is finite follows from $\mathbf{d} (v_1 , v_2)$ being controlled by finitely many Jacquet modules, with the finite central actions on them. \subsection{} Let us now explain Claim \ref{clm growth exists} in the case when $G$ is real. Using \cite{CaMi} we know that, fixing $k >0$, given $\Theta \subset \Delta$ the restriction of $\frac{\omega (x)^{1/2}}{e^{\rho (x)}} f_{v_1 , v_2} (x)$ to ${\mathbb R}_{\ge k}^{\Delta \smallsetminus \Theta} \times [0,k]^{\Theta}$ can be written as $$ \sum_{1 \leq i \leq p} e^{\lambda_i (x_{\Delta \smallsetminus \Theta})} q_i (x_{\Delta \smallsetminus \Theta}) \phi_i (x) $$ where the notation is as follows.
First, $\lambda_i$ is a complex-valued functional on ${\mathbb R}^{\Delta \smallsetminus \Theta}$. Next, $q_i$ is a monomial on ${\mathbb R}^{\Delta \smallsetminus \Theta}$. The couples $(\lambda_i , q_i)$, for $1 \leq i \leq p$, are pairwise distinct. The function $\phi_i$ is expressible as a composition $$ [0,k]^{\Theta} \times {\mathbb R}_{\ge k}^{\Delta \smallsetminus \Theta} \xrightarrow{{\rm id} \times {\rm ei}} [0,k]^{\Theta} \times ({\mathbb C}_{|-|<1})^{\Delta \smallsetminus \Theta} \xrightarrow{\phi_i^{\circ}} {\mathbb C}$$ where ${\rm ei}$ is the coordinate-wise application of $x \mapsto e^{-x}$ and $\phi_{i}^{\circ}$ is a continuous function such that, for every $b \in [0,k]^{\Theta}$, the restriction of $\phi_i^{\circ}$ via $({\mathbb C}_{|-|<1})^{\Delta \smallsetminus \Theta} \xrightarrow{z \mapsto (b,z)} [0,k]^{\Theta} \times ({\mathbb C}_{|-|<1})^{\Delta \smallsetminus \Theta}$ is holomorphic. Lastly, the function $b \mapsto \phi_{i}^{\circ} (b , \{0\}^{\Delta \smallsetminus \Theta} ) $ on $[0,k]^{\Theta}$ is not identically zero. Since $V$ is tempered, by ``Casselman's criterion" we in addition have that for every $1 \leq i \leq p$, ${\rm Re} (\lambda_i)$ is non-positive on ${\mathbb R}_{\ge 0}^{\Delta \smallsetminus \Theta}$. If $p = 0$, we set $d_{v_1 , v_2 , \Theta} := -\infty$. Otherwise, Claim \ref{clm appb} provides a number $d_{v_1 , v_2 , \Theta} \in {\mathbb Z}_{\ge 0}$, described concretely in terms of $\{ (\lambda_i , q_{i} , \phi_{i} ) \}_{1 \leq i \leq p}$, such that the limit $$ \lim_{r \to +\infty} \frac{1}{r^{d_{v_1 , v_2 , \Theta}}} \int_{ ( [0,k]^{\Theta} \times {\mathbb R}_{\ge k}^{\Delta \smallsetminus \Theta} ) \cap {\mathfrak a}^+_{<r}} \frac{\omega (x)}{e^{2 \rho (x)}} |f_{v_1 , v_2} (x)|^2 \cdot dx $$ exists and is strictly positive. We set $\mathbf{d} (v_1 , v_2) := \max_{\Theta \subset \Delta} d_{v_1 , v_2 , \Theta}$. Then either $f_{v_1 , v_2} = 0$ or $\mathbf{d} (v_1 , v_2) \ge 0$ and the limit $$ \lim_{r \to +\infty} \frac{1}{r^{\mathbf{d} (v_1 , v_2)}} \int_{{\mathfrak a}^+_{<r}} \frac{\omega (x)}{e^{2 \rho (x)}} |f_{v_1 , v_2} (x)|^2 \cdot dx$$ exists and is strictly positive. That $\mathbf{d} (V)$ is finite follows from $\mathbf{d} (v_1 , v_2)$ being controlled by finitely many data, as in \cite{CaMi}. Part (2) of Claim \ref{clm growth exists} follows easily from the concrete description of $d_{v_1 , v_2 , \Theta}$ in Claim \ref{clm appb}. \section{Proofs for Remark \ref{rem counterexample}, Remark \ref{rem doesnt depend on norm}, Proposition \ref{prop formula ch}, Proposition \ref{prop from folner to exact} and Remark \ref{rem ctemp red is temp}.}\label{sec clms red} In this section, $G$ is a semisimple group over a local field. We continue with notations from \S\ref{sec intro}. We explain Remark \ref{rem counterexample} (in Claim \ref{clm counterexample}), explain Remark \ref{rem doesnt depend on norm} (in Claim \ref{clm doesnt depend on norm}), prove Proposition \ref{prop formula ch} (in \S\ref{ssec proof of prop formula ch}), prove Proposition \ref{prop from folner to exact} (in \S\ref{ssec clms red 1}) and explain Remark \ref{rem ctemp red is temp} (in Claim \ref{clm Prop tempered is c-tempered reductive holds}). \subsection{}\label{ssec clms red 1} \begin{lemma}\label{lem red temp is ctemp} Let $V$ be an irreducible unitary $G$-representation and suppose that there exists a unit vector $v_0 \in V$ satisfying properties (1) and (2) of Proposition \ref{prop from folner to exact}.
Let $0 < r_0 < r_1 < \ldots$ be a sequence such that $\lim_{n \to +\infty} r_n = +\infty$. Then $V$ is c-tempered with F{\o}lner sequence $G_{< r_0} , G_{<r_1}, \ldots$. \end{lemma} \begin{proof} Property (1) of Definition \ref{def c-temp} is immediate from property (1) of Proposition \ref{prop from folner to exact}. Let us check property (2) of Definition \ref{def c-temp}. Thus, let $v_1 , v_2 \in V$ and let $K \subset G$ be a compact subset. Fix $r^{\prime} \ge 0$ large enough so that $K \subset G_{< r^{\prime}}$ and $K^{-1} \subset G_{< r^{\prime}}$. We then have, for all $r>0$ and all $g_1 , g_2 \in K$: $$G_{< r} \triangle g_2^{-1} G_{< r} g_1 \subset G_{< r + 2 r^{\prime}} \smallsetminus G_{< r - 2 r^{\prime}}.$$ Therefore, using property (2) of Proposition \ref{prop from folner to exact}, $$ \limsup_{r \to +\infty} \frac{\sup_{g_1 , g_2 \in K} \Mss{v_1}{v_2}{G_{< r} \triangle g_2^{-1} G_{< r} g_1}}{ \Mss{v_0}{v_0}{G_{<r}} } \leq \limsup_{r \to +\infty} \frac{ \Mss{v_1}{v_2}{G_{< r + 2 r^{\prime}} \smallsetminus G_{< r - 2 r^{\prime} }} }{ \Mss{v_0}{v_0}{G_{<r}}} = 0$$ and therefore also $$ \lim_{n \to +\infty} \frac{\sup_{g_1 , g_2 \in K} \Mss{v_1}{v_2}{G_{< r_n} \triangle g_2^{-1} G_{< r_n} g_1} }{ \Mss{v_0}{v_0}{G_{< r_n}}} = 0.$$ \end{proof} \begin{proof}[Proof (of Proposition \ref{prop from folner to exact}).] Let us fix a $K$-finite unit vector $v'_0 \in V$, for some maximal compact subgroup $K \subset G$. Let $0 < r_0 < r_1 < \ldots$ be a sequence such that $\lim_{n \to +\infty} r_n = +\infty$. By Lemma \ref{lem red temp is ctemp}, $V$ is c-tempered with F{\o}lner sequence $G_{<r_0} , G_{<r_1} , \ldots$ and hence by Proposition \ref{prop c-tempered orth rel} we obtain $$ \lim_{n \to +\infty} \frac{\int_{g \in G_{<r_n}} \langle gv_1 , v_2 \rangle \overline{\langle gv_3 , v_4 \rangle} \cdot dg}{\Ms{v'_0}{v'_0}{r_n}} = \langle v_1 , v_3 \rangle \overline{\langle v_2 , v_4 \rangle}$$ for all $v_1 , v_2 , v_3 , v_4 \in V$. Since this holds for any such sequence $\{ r_n \}_{n \ge 0}$, we obtain \begin{equation}\label{eq formula again} \lim_{r \to +\infty} \frac{\int_{g \in G_{<r}} \langle gv_1 , v_2 \rangle \overline{\langle gv_3 , v_4 \rangle} \cdot dg}{\Ms{v'_0}{v'_0}{r}} = \langle v_1 , v_3 \rangle \overline{\langle v_2 , v_4 \rangle} \end{equation} for all $v_1 , v_2 , v_3 , v_4 \in V$. By Theorem \ref{thm main Kfin} we have $$ \lim_{r \to +\infty} \frac{\Ms{v'_0}{v'_0}{r}}{r^{\mathbf{d}(V)}} = C$$ for some $C>0$. This enables us to rewrite (\ref{eq formula again}) as $$ \lim_{r \to +\infty} \frac{\int_{g \in G_{<r}} \langle gv_1 , v_2 \rangle \overline{\langle gv_3 , v_4 \rangle} \cdot dg}{r^{\mathbf{d} (V)}} = C \cdot \langle v_1 , v_3 \rangle \overline{\langle v_2 , v_4 \rangle} $$ for all $v_1 , v_2 , v_3 , v_4 \in V$, as desired. \end{proof} \subsection{} \begin{claim}\label{clm doesnt depend on norm} The validity of Conjecture \ref{main conj red exact} (as well as the resulting invariants $\mathbf{d} (V)$ and $\mathbf{f} (V)$), of Theorem \ref{thm main Kfin} (as well as the resulting invariants $\mathbf{d} (V)$ and $\mathbf{f} (V)$, the latter in the non-Archimedean case), and of Proposition \ref{prop from folner to exact}, does not depend on the choice of the norm $|| - ||$ on ${\mathfrak g}$. \end{claim} \begin{proof} Let $|| - ||^{\prime}$ be another norm on ${\mathfrak g}$, let $\mathbf{r}^{\prime} : G \to {\mathbb R}_{\ge 0}$ be the resulting function, and let $G_{<r}^{\prime} \subset G$ be the resulting subsets.
There exists $r_0 \ge 0$ such that $$ e^{-r_0} \cdot || X || \leq ||X ||^{\prime} \leq e^{r_0} \cdot || X||, \quad \forall X \in {\mathfrak g}$$ and therefore $$ e^{-2r_0} \cdot ||{\rm Ad} (g)|| \leq || {\rm Ad} (g) ||' \leq e^{2 r_0} \cdot || {\rm Ad} (g) ||, \quad \forall g \in G.$$ Then $$ G^{\prime}_{< r} \subset G_{<r + 2r_0}, \quad \forall r\ge 0$$ and $$ G_{< r} \subset G^{\prime}_{<r + 2r_0}, \quad \forall r\ge 0.$$ These ``sandwich'' relations readily imply the independence claims. \end{proof} \subsection{} \begin{claim}\label{clm Prop tempered is c-tempered reductive holds} An irreducible unitary $G$-representation $V$ for which there exists a unit vector $v_0 \in V$ such that conditions (1) and (2) of Proposition \ref{prop from folner to exact} are satisfied is tempered. \end{claim} \begin{proof} Clear from Lemma \ref{lem red temp is ctemp} coupled with Corollary \ref{cor c-temp are temp}. \end{proof} \subsection{} \begin{claim}\label{clm counterexample} Let $G := PGL_2 (\Omega)$, $\Omega$ a local field. Let $A \subset G$ be the subgroup of diagonal matrices. Then, for every non-trivial irreducible unitary $G$-representation $V$, the set of matrix coefficients of $V$ restricted to $A$ is equal to the set of functions on $A$ of the form $$ a \mapsto \int_{\hat{A}} \chi (a) \cdot \phi (\chi) \cdot d\chi$$ as $\phi$ runs over $ L^1 (\hat{A})$. \end{claim} \begin{proof} Denote by $B \subset G$ the subgroup of upper-triangular matrices and by $N \subset B$ its unipotent radical. Let us recall that, by Mackey theory, there is a unique (up to isomorphism) infinite-dimensional irreducible unitary $B$-representation $W$, and the rest of irreducible unitary $B$-representations are killed by $N$. The restriction ${\rm Res}^B_A W$ is isomorphic to the right regular unitary $A$-representation $L^2 (A)$. Let now $V$ be a non-trivial irreducible unitary $G$-representation. Recall that by the Howe--Moore theorem (or by a step in one of its usual proofs) $V$ does not contain non-zero $N$-invariant vectors. By decomposing the restriction ${\rm Res}^G_B V$ into a direct integral of irreducible unitary $B$-representations, and using the fact that $V$ admits no non-zero $N$-invariant vectors, we see that ${\rm Res}^G_B V$ is a multiple of $W$. Hence, we deduce that ${\rm Res}^G_A V$ is a multiple of the right regular unitary $A$-representation $L^2 (A)$. Now, the matrix coefficients of a multiple of the right regular unitary $A$-representation $L^2 (A)$ are easily seen to be the functions on $A$ of the form $$ a \mapsto \int_{\hat{A}} \chi (a) \cdot \phi (\chi) \cdot d\chi$$ where $\phi \in L^1 (\hat{A})$. \end{proof} \subsection{}\label{ssec proof of prop formula ch} \begin{proof}[Proof (of Proposition \ref{prop formula ch}).] Fix $d \in D_c^{\infty} (G)$. Let $K \subset G$ be an open compact subgroup such that $d$ is invariant under $K$ both on the left and on the right. Let us denote by $e_1 , \ldots , e_n$ an orthonormal basis of $V^K$, and let us denote by $\pi_K : V \to V^K$ the orthogonal projection. Let us denote by $[-,-] : C^{-\infty} (G) \times D_c^{\infty} (G) \to {\mathbb C}$ the canonical pairing. We have $$ [{}^g m_{v_1 , v_2} , d] = [m_{g v_1 , g v_2} , d] = \langle d g v_1 , g v_2 \rangle = \langle d \pi_K (g v_1) , \pi_K (g v_2) \rangle = $$ $$ = \sum_{1 \leq i,j \leq n} \langle g v_1 , e_i \rangle \overline{\langle g v_2 , e_j \rangle} \langle d e_i , e_j \rangle.
$$ Hence $$ \frac{\int_{G_{<r}} [{}^g m_{v_1 , v_2} , d] \cdot dg}{r^{\mathbf{d} (V)}} = \sum_{1 \leq i,j \leq n} \langle d e_i , e_j \rangle \cdot \frac{\int_{G_{<r}} \langle g v_1 , e_i \rangle \overline{\langle g v_2 , e_j \rangle} \cdot dg}{r^{\mathbf{d} (V)}}$$ and therefore $$ \lim_{r \to +\infty} \frac{\int_{G_{<r}} [{}^g m_{v_1 , v_2} , d] \cdot dg}{r^{\mathbf{d} (V)}} = \frac{1}{\mathbf{f} (V)} \sum_{1 \leq i,j \leq n} \langle d e_i , e_j \rangle \cdot \langle v_1 , v_2 \rangle \overline{\langle e_i , e_j \rangle} = $$ $$ = \frac{1}{\mathbf{f} (V)} \sum_{1 \leq i \leq n} \langle d e_i , e_i \rangle \cdot \langle v_1 , v_2 \rangle = \frac{\langle v_1 , v_2 \rangle}{\mathbf{f} (V)} \Theta_V (d).$$ \end{proof} \section{The case of the principal series representation $V_1$ of slowest decrease}\label{sec proof of V1} In this section $G$ is a semisimple group over a local field. We continue with notations from \S\ref{sec intro}. Our goal is to prove Theorem \ref{thm intro slowest} (restated as Theorem \ref{thm V1 c temp} below). \subsection{}\label{ssec V1 and Xi} We fix a minimal parabolic $P \subset G$ and a maximal compact subgroup $K \subset G$ such that $G = PK$. We consider the principal series unitary $G$-representation $V_1$ consisting of functions $f : G \to {\mathbb C}$ satisfying $$ f(pg) = \Delta_P (p)^{1/2} \cdot f(g) \quad \forall p \in P , g \in G$$ where $\Delta_P : P \to {\mathbb R}^{\times}_{>0}$ is the modulus function of $P$. The $G$-invariant inner product on $V_1$ can be taken to be $$ \langle f_1 , f_2 \rangle = \int_K f_1 (k) \cdot \overline{f_2 (k)} \cdot dk$$ (where we normalize the Haar measure on $K$ to have total mass $1$). Recall that $V_1$ is irreducible. We denote by $f_0 \in V_1$ the spherical vector, determined by $f_0 (k) = 1$ for all $k \in K$. We also write $$ \Xi_G (g) := \langle g f_0 , f_0 \rangle.$$ \begin{lemma}\label{lem growth Xi} Given $r' \ge 0$ we have $$ \lim_{r \to +\infty} \frac{\int_{G_{<r+r'} \smallsetminus G_{<r-r'}} \Xi_G (g)^2 \cdot dg}{\int_{G_{<r}} \Xi_G (g)^2 \cdot dg} = 0$$ and $$ \lim_{r \to +\infty} \frac{\int_{G_{<r+r'}} \Xi_G (g)^2 \cdot dg}{\int_{G_{<r}} \Xi_G (g)^2 \cdot dg} = 1.$$ \end{lemma} \begin{proof} The second equality follows from the first, and the first is immediately implied by Theorem \ref{thm main Kfin}. \end{proof} \subsection{} The main result of this section is: \begin{theorem}\label{thm V1 c temp} Let $V$ be an irreducible tempered unitary $G$-representation. Suppose that there exist a unit vector $v_0 \in V$ such that \begin{equation}\label{eq good vector} \underset{r \to +\infty}{\limsup} \frac{ \int_{G_{<r}} \Xi_G (g)^2 \cdot dg }{ \Ms{v_0}{v_0}{r} } < +\infty. \end{equation} Then Conjecture \ref{main conj red exact} holds for $V$. In particular, Conjecture \ref{main conj red exact} holds for $V_1$. \end{theorem} \subsection{} We will prove Theorem \ref{thm V1 c temp} using the following result: \begin{claim}\label{clm est using Xi} Let $V$ be a tempered unitary $G$-representation. Then for all unit vectors $v_1,v_2 \in V$ and all measurable $K$-biinvariant subsets $S \subset G$ we have $$ \int_S |\langle gv_1 , v_2 \rangle |^2 \cdot dg \leq \int_S \Xi_G (g)^2 \cdot dg.$$ \end{claim} \begin{proof}[Proof (of Theorem \ref{thm V1 c temp} given Claim \ref{clm est using Xi}).] To show that Conjecture \ref{main conj red exact} holds for $V$ we will use Proposition \ref{prop from folner to exact}, applied to our $V$ and our $v_0$. 
There exists $r_0 \ge 0$ such that $K G_{<r} K \subset G_{< r + r_0}$ for all $r \ge 0$. Let us verify condition (1) of Proposition \ref{prop from folner to exact}. For unit vectors $v_1 , v_2 \in V$ we have $$ \frac{\Ms{v_1}{v_2}{r}}{\Ms{v_0}{v_0}{r}} \leq \frac{\int_{G_{<r+r_0}} \Xi_G (g)^2 \cdot dg}{\Ms{v_0}{v_0}{r}}$$ and therefore condition (1) of Proposition \ref{prop from folner to exact} follows from (\ref{eq good vector}) and Lemma \ref{lem growth Xi}. Let us now verify condition (2) of Proposition \ref{prop from folner to exact}. For unit vectors $v_1 , v_2 \in V$ and $r' \ge 0$ we have $$ \frac{ \Ms{v_1}{v_2}{r+r'} - \Ms{v_1}{v_2}{r-r'}}{\Ms{v_0}{v_0}{r}} \leq \frac{\int_{G_{<r+r'+r_0} \smallsetminus G_{<r-(r'+r_0)}} \Xi _G(g)^2 \cdot dg}{\int_{G_{<r}} \Xi_G (g)^2 \cdot dg} \cdot \frac{\int_{G_{<r}} \Xi_G (g)^2 \cdot dg}{\Ms{v_0}{v_0}{r}}$$ and therefore condition (2) of Proposition \ref{prop from folner to exact} follows from (\ref{eq good vector}) and Lemma \ref{lem growth Xi}. \end{proof} \subsection{} We will prove Claim \ref{clm est using Xi} using the following result: \begin{claim}\label{clm norm of conv} Let $\phi \in L^2 (G)$ be zero outside of a measurable $K$-biinvariant subset $S \subset G$ of finite volume. Denote by $T_{\phi} : L^2 (G) \to L^2 (G)$ the operator of convolution $\psi \mapsto \phi \star \psi$. Then\footnote{Here $|| \phi ||$ stands for the $L^2$-norm of $\phi$.} $$ || T_{\phi} ||^2 \leq \left( \int_{S} \Xi_G (g)^2 \cdot dg \right) \cdot || \phi ||^2.$$ \end{claim} \begin{proof}[Proof (of Claim \ref{clm est using Xi} given Claim \ref{clm norm of conv}).] We can clearly assume that $S$ has finite volume. Let us denote $$ \phi (g) := {\rm ch}_{S} (g) \cdot \overline{\langle gv_1 , v_2 \rangle},$$ where ${\rm ch}_S$ stands for the characteristic function of $S$. Let us denote by $S_{\phi} : V \to V$ the operator $$ v \mapsto \int_G \phi (g) \cdot gv \cdot dg.$$ Since $V$ is tempered, we have $ ||S_{\phi}|| \leq || T_{\phi}||$. Therefore $$ \int_{S} |\langle gv_1 , v_2 \rangle |^2 \cdot dg = \int_G \phi (g) \cdot \langle gv_1 , v_2 \rangle \cdot dg = \langle S_{\phi} v_1 , v_2 \rangle \leq || S_{\phi} || \leq || T_{\phi} || \leq $$ $$ \leq \left( \sqrt{\int_S \Xi_G (g)^2 \cdot dg} \right) \cdot || \phi || = \left( \sqrt{\int_S \Xi_G (g)^2 \cdot dg} \right) \cdot \left( \sqrt{\int_S |\langle gv_1 , v_2 \rangle |^2 \cdot dg} \right)$$ thus $$\int_S |\langle gv_1 , v_2 \rangle |^2 \cdot dg \leq \int_S \Xi_G (g)^2 \cdot dg$$ as desired. \end{proof} \subsection{} Finally, let us prove Claim \ref{clm norm of conv}, following \cite{ChPiSa}. \begin{proof}[Proof (of Claim \ref{clm norm of conv}).] By\footnote{In the lemma we refer to, it is assumed that $\phi$ is continuous, but the arguments there apply to our $\phi$ without any modification.} \cite[Lemma 3.5]{ChPiSa} we can assume that $\phi$ is $K$-biinvariant and non-negative. By \cite[Proposition 4.3]{ChPiSa} we have $$ || T_{\phi} || = \int_G \Xi_G (g) \cdot \phi (g) \cdot dg.$$ Applying the Cauchy--Schwarz inequality, we obtain $$ ||T_{\phi} ||^2 \leq \left( \int_S \Xi_G (g)^2 \cdot dg\right) \cdot|| \phi||^2,$$ as desired. \end{proof} \section{The proof of Theorem \ref{thm intro SL2R}}\label{sec proof of SL2R} In this section we let $G$ be either $SL_2 ({\mathbb R})$ or $PGL_2 (\Omega)$, where $\Omega$ is a non-Archimedean local field of characteristic $0$ and residual characteristic not equal to $2$. We prove Theorem \ref{thm intro SL2R}.
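\begin{remark} The following elementary computation is recorded only for orientation; it is not used anywhere in the arguments of this section. When $G = SL_2 ({\mathbb R})$, the matrix coefficients of $K$-finite vectors in a tempered principal series representation have, along the positive chamber, the model form $e^{-x} ( E_1 e^{-isx} + E_2 e^{isx} )$ up to lower order terms (this is recalled in \S\ref{sec proof of clm SL2 2} below), and for $s \neq 0$ one has $$ \int_0^R e^{2x} \left| e^{-x} \left( E_1 e^{-isx} + E_2 e^{isx} \right) \right|^2 \cdot dx = \left( |E_1|^2 + |E_2|^2 \right) \cdot R + O(1) \quad\quad (R \to +\infty),$$ since $\left| E_1 e^{-isx} + E_2 e^{isx} \right|^2 = |E_1|^2 + |E_2|^2 + 2 {\rm Re} \left( E_1 \overline{E_2} e^{-2isx} \right)$ and the oscillating term integrates to a bounded quantity. This linear growth is the phenomenon quantified below: the Jacobian weight of the Cartan decomposition (denoted $\omega$ below) compensates the decay of the squared matrix coefficients, which is what drives both the lower bound (\ref{eq from below}) and the upper bound (\ref{eq conj SL2 2}). \end{remark}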
\subsection{} If $G = PGL_2 (\Omega)$, we denote by $\varpi$ a uniformizer in $\Omega$, by ${\mathcal O}$ the ring of integers in $\Omega$, by $p$ the residual characteristic of $\Omega$ and $q:= |{\mathcal O} / \varpi {\mathcal O} |$. \subsection{}\label{ssec SL2 notation} We denote by $A \subset G$ the subgroup of diagonal matrices and by $U \subset G$ the subgroup of unipotent upper-triangular matrices. If $G = SL_2 ({\mathbb R})$ we define the isomorphism $$ \mathbf{a} : {\mathbb R}^{\times} \to A, \quad t \mapsto \mtrx{t}{0}{0}{t^{-1}}$$ and if $G = PGL_2 (\Omega)$ we define the isomorphism $$ \mathbf{a} : \Omega^{\times} \to A, \quad t \mapsto \mtrx{t}{0}{0}{1}.$$ We denote $A^+ := \{ a \in A \ | \ |\mathbf{a}^{-1} (a) | \ge 1 \}$. If $G = SL_2 ({\mathbb R})$ then we can (and will) take $||- ||$ on ${\mathfrak g}$ to be such that $$\mathbf{r} \left( k_1 \mtrx{t}{0}{0}{t^{-1}} k_2 \right) = \log \max \{ |t|^2 , |t|^{-2} \}$$ where $t\in {\mathbb R}^{\times}$ and $k_1 , k_2 \in SO (2)$. If $G = PGL_2 (\Omega)$ then we can (and will) take $|| - ||$ on ${\mathfrak g}$ to be such that $$\mathbf{r} \left( k_1 \mtrx{t}{0}{0}{s} k_2 \right) = \log \max \{ |t/s| , |s/t| \}$$ where $t,s \in \Omega^{\times}$ and $k_1 , k_2 \in PGL_2 ({\mathcal O})$. Let us denote $A^+_{<r} := A^+ \cap G_{<r}$. If $G = SL_2 ({\mathbb R})$ we set $K := SO (2) \subset G$. If $G = PGL_2 (\Omega)$ we choose a non-square $\zeta \in {\mathcal O}^{\times}$ and set $K \subset G$ to be the subgroup of elements of the form $\mtrx{a}{\zeta b}{b}{a}$, $(a,b) \in \Omega^2 \smallsetminus \{ (0,0) \}$ (so $K$ is a closed compact subgroup in $G$, but not open, and in particular not maximal). We set $\omega : A^+ \to {\mathbb R}_{\ge 0}$ to be given by $\omega (\mathbf{a} (t)) := |t^2 - t^{-2} |$ if $G = SL_2 ({\mathbb R})$ and $\omega (\mathbf{a} (t)) := |t - t^{-1} |$ if $G = PGL_2 (\Omega)$. Then, taking the Haar measure on $K$ to have total mass $1$ and appropriately normalizing the Haar measure on $A$, for all non-negative-valued measurable functions $f$ on $G$ we have $$ \int_G f(g) \cdot dg = \int_{A^+} \omega (a) \left( \int_{K \times K} f(k_2 a k_1) \cdot dk_1 dk_2 \right) da.$$ Given a unitary $G$-representation $V$, vectors $v_1 , v_2 \in V$ and $a \in A^+$, we write $$ \ms{v_1}{v_2}{a} := \int_{K \times K} | \langle k_2 a k_1 v_1 , v_2 \rangle |^2 \cdot dk_1 dk_2.$$ We have \begin{equation}\label{eq M in term of Mcirc} \Ms{v_1}{v_2}{r} := \int_{A^+_{<r}} \omega (a) \cdot \ms{v_1}{v_2}{a} \cdot da \end{equation} (where $M_{v_1 , v_2} (r)$ was already defined in \S\ref{sec intro}). Given a unitary character\footnote{${\rm U} (1)$ denotes the subgroup of ${\mathbb C}^{\times}$ consisting of complex numbers with absolute value $1$.} $\chi: A \to {\rm U}(1) $ we consider the principal series unitary $G$-representation $V_{\chi}$, consisting of functions $f : G \to {\mathbb C}$ satisfying $$ f(u a g) = \chi (a) \cdot \Delta (a)^{1/2} \cdot f(g) \quad \forall a \in A, u \in U , g \in G,$$ where $\Delta (a) = |\mathbf{a}^{-1} (a)|^2$ if $G = SL_2 ({\mathbb R})$ and $\Delta (a) = |\mathbf{a}^{-1} (a)|$ if $G = PGL_2 (\Omega)$. Here $G$ acts by $(g^{\prime}f)(g) := f(gg^{\prime})$. 
The $G$-invariant inner product on $V_{\chi}$ can be expressed as $$ \langle f_1 , f_2 \rangle = \int_{K} f_1 (k) \cdot \overline{f_2 (k)} \cdot dk.$$ For $\theta \in \hat{K}$, let $h^{\chi}_{\theta} \in V_{\chi}$ denote the unique vector determined by $h^{\chi}_{\theta} (k) = \theta (k)$ for $k \in K$, if it exists, and write $\textnormal{types} (V_{\chi}) \subset \hat{K}$ for the subset of $\theta$'s for which it exists. Thus $(h^{\chi}_{\theta})_{\theta \in \textnormal{types} (V_{\chi})}$ is a Hilbert basis for $V_{\chi}$. \subsection{} Let us now give several preparatory remarks. First, we do not establish Conjecture \ref{main conj red exact} directly but, rather, establish conditions (1) and (2) of Proposition \ref{prop from folner to exact} (which suffices by that proposition). Second, for a square-integrable irreducible unitary $G$-representation $V$, establishing conditions (1) and (2) of Proposition \ref{prop from folner to exact} with any unit vector $v_0 \in V$ is straight-forward (see the proof of Proposition \ref{prop sq int} for a spelling-out). As is well-known, a tempered irreducible unitary $G$-representation which is not square-integrable is a direct summand in some $V_{\chi}$. Therefore, we establish conditions (1) and (2) of Proposition \ref{prop from folner to exact} for irreducible direct summands in $V_{\chi}$. Third, if $\chi = 1$ when $G = SL_2 ({\mathbb R})$ or if $\chi^2 = 1$ when $G = PGL_2 (\Omega)$, $V_{\chi}$ satisfies Conjecture \ref{main conj red exact} by Theorem \ref{thm V1 c temp}. So we assume throughout: \begin{equation}\label{eq condition} \chi \neq 1 \textnormal{ if } G=SL_2 ({\mathbb R}), \quad \quad \quad \chi^2 \neq 1 \textnormal{ if } G = PGL_2 (\Omega). \end{equation} \subsection{} We reduce Conjecture \ref{main conj red exact} for an irreducible summand in $V_{\chi}$ to the following two claims. \begin{claim}\label{clm SL2 2} Fix $\chi$ satisfying (\ref{eq condition}). Let $V$ be an irreducible direct summand in $V_{\chi}$. There exist $f \in V$, $r_0 \ge 0$ and $D>0$ such that for all $r \ge r_0$ we have \begin{equation}\label{eq from below} \Ms{f}{f}{r} \geq D \cdot r \end{equation} \end{claim} \begin{claim}\label{clm SL2 2b} Fix $\chi$ satisfying (\ref{eq condition}). There exist $r_0 > 0$ and $C>0$ (depending on $\chi$) such that for all $a \in A^+ \smallsetminus A^+_{<r_0}$ we have \begin{equation}\label{eq conj SL2 2} \ms{f_1}{f_2}{a} \leq C \cdot \omega (a)^{-1} \cdot ||f_1||^2 \cdot ||f_2||^2 \quad \quad \forall f_1, f_2 \in V_{\chi}.\end{equation} \end{claim} \begin{proof}[Proof (of Conjecture \ref{main conj red exact} for summands in $V_{\chi}$ given Claim \ref{clm SL2 2} and Claim \ref{clm SL2 2b} for $\chi$).] Let $V$ be an irreducible direct summand in $V_{\chi}$. Let $f$, $r_0$, $D$ and $C$ be as in Claim \ref{clm SL2 2} and as in Claim \ref{clm SL2 2b} (taking $r_0$ to be the maximum of the values from the two statements). In order to verify Conjecture \ref{main conj red exact} for $V$, we will verify the conditions (1) and (2) of Proposition \ref{prop from folner to exact}, where for $v_0$ we take our $f$. 
Using (\ref{eq conj SL2 2}) we obtain the existence of $E,E'>0$ such that for all $r_0 \leq r_1 < r_2$ we have $$ \Ms{f_1}{f_2}{r_2} - \Ms{f_1}{f_2}{r_1} \leq E \cdot \textnormal{vol}_A ( A^+_{<r_2} \smallsetminus A^+_{<r_1} ) \cdot || f_1||^2 \cdot || f_2 ||^2 \leq $$ $$ \leq E' \cdot (1 + (r_2 - r_1)) \cdot || f_1||^2 \cdot || f_2 ||^2.$$ From this and (\ref{eq from below}) the conditions (1) and (2) of Proposition \ref{prop from folner to exact} are immediate. \end{proof} \subsection{}\label{sec proof of clm SL2 2} Let us prove Claim \ref{clm SL2 2}. \begin{proof}[Proof (of Claim \ref{clm SL2 2}).] Let $V$ be an irreducible direct summand of $V_{\chi}$. Let us first treat the case $G = PGL_2 (\Omega)$. We use the (normalized) Jacquet $A$-module $J(-)$ with respect to $G \hookleftarrow AU \twoheadrightarrow A$. We denote by $\underline{V} \subset V$ the subspace of smooth vectors. $J(\underline{V})$ is isomorphic to ${\mathbb C}_{\chi} \oplus {\mathbb C}_{\chi^{-1}}$. We consider $v \in \underline{V}$ whose projection under the canonical $\underline{V} \twoheadrightarrow J(\underline{V})$ is non-zero and is an $A$-eigenvector with eigencharacter $\chi$. By Casselman's canonical pairing theory there exists a non-zero $\alpha \in J(\underline{V})^*$ which is an $A$-eigenvector with eigencharacter $\chi^{-1}$ such that $\langle a v , v \rangle = |a |^{-1/2} \alpha (a v)$ whenever $a \in A^+ \smallsetminus A^+_{<r_0}$, for large enough $r_0 \ge 0$. Since we have $\alpha (a v) = \chi (a) \cdot \alpha (v)$ and $\alpha (v) \neq 0$, we deduce that for some $C>0$ we have $|\langle a v , v \rangle|^2 = C \cdot |a|^{-1} $ for $a \in A^+ \smallsetminus A^+_{<r_0}$. Let $K_v \subset K$ be an open compact subgroup, small enough so that $K_v v = v$. We have, again for $a \in A^+ \smallsetminus A^+_{< r_0}$: $$ \ms{v}{v}{a} = \int_{K \times K} |\langle k_2 a k_1 v , v \rangle |^2 \cdot dk_1 dk_2 \ge $$ $$ \ge \int_{K_v \times K_v} |\langle k_2 a k_1 v , v \rangle |^2 \cdot dk_1 dk_2 = C^{\prime} \cdot |\langle a v , v \rangle |^2$$ for some $C^{\prime}>0$ and so $\ms{v}{v}{a} \ge C^{\prime \prime} \cdot |a|^{-1}$ for some $C^{\prime \prime}>0$. From this we obtain the desired. Let us now treat the case $G = SL_2 ({\mathbb R})$. Fix any $\theta \in {\rm types} (V_{\chi})$. The leading asymptotics of the matrix coefficients of $K$-finite vectors are well known, and can be computed from explicit expressions in terms of the hypergeometric function (see \cite[\S 6.5]{KlVi}). In the case $\chi|_{\mathbf{a} ({\mathbb R}^{\times}_{>0})} \neq 1$, denoting by $0 \neq s \in {\mathbb R}$ the number for which $\chi (\mathbf{a} (t)) = t^{is}$ for all $t \in {\mathbb R}^{\times}_{>0}$, we have $$ \langle \mathbf{a} (e^x) h^{\chi}_{\theta} , h^{\chi}_{\theta} \rangle \sim e^{-x} \cdot (E_1 \cdot e^{-isx} + E_2 \cdot e^{isx} + o(1)) \quad\quad (x \to +\infty)$$ for some non-zero $E_1$ and $E_2$ and so $$ |\langle \mathbf{a} (e^x) h^{\chi}_{\theta} , h^{\chi}_{\theta} \rangle|^2 \sim e^{-2x} \left( D + E_3 \cdot e^{-2isx} + E_4 \cdot e^{2isx} + o(1) \right) \quad\quad (x \to +\infty)$$ for some $D>0$, $E_3$ and $E_4$. From this we obtain the desired.
In the case $\chi|_{\mathbf{a} ({\mathbb R}^{\times}_{>0})} = 1$, and so $\chi (\mathbf{a} (-1)) = - 1$, we have $$ \langle \mathbf{a} (e^x) h_{\theta}^{\chi} , h_{\theta}^{\chi} \rangle \sim E \cdot e^{-x} \quad\quad (x \to +\infty) $$ for some non-zero $E$ and so $$| \langle \mathbf{a} (e^x) h_{\theta}^{\chi} , h_{\theta}^{\chi} \rangle |^2 \sim D \cdot e^{-2x} \quad\quad (x \to +\infty) $$ for some $D>0$. From this we obtain the desired. \end{proof} \subsection{} We further reduce Claim \ref{clm SL2 2b}. \begin{claim}\label{clm SL2 3} Fix $\chi$ satisfying (\ref{eq condition}). There exist $r_0 > 0$ and $C>0$ (depending on $\chi$) such that for all $\theta,\eta \in \textnormal{types} (V_{\chi})$ and all $a \in A^+ \smallsetminus A^+_{<r_0}$ we have \begin{equation}\label{eq thm SL2R 3} |\langle a h^{\chi}_{\theta} , h^{\chi}_{\eta} \rangle|^2 \leq C \cdot \omega(a)^{-1} .\end{equation} \end{claim} \begin{proof}[Proof (of Claim \ref{clm SL2 2b} for $\chi$ given Claim \ref{clm SL2 3} for $\chi$).] Let $f_1 , f_2 \in V_{\chi}$ and write $$ f_1 = \sum_{\theta \in \textnormal{types} (V_{\chi})} c_{\theta} \cdot h^{\chi}_{\theta}, \quad f_2 = \sum_{\theta \in \textnormal{types} (V_{\chi})} d_{\theta} \cdot h^{\chi}_{\theta}$$ with $c_{\theta} , d_{\theta} \in {\mathbb C}$. Using Fourier expansion of the function $(k_1 , k_2) \mapsto \langle a k_1 f_1 , k_2 f_2 \rangle$ on $K \times K$ we have, for $a \in A^+ \smallsetminus A^+_{<r_0}$: $$ \ms{f_1}{f_2}{a} = \int_{K\times K} \left| \langle a k_1 f_1 , k_2 f_2 \rangle \right|^2 \cdot dk_1 dk_2 = $$ $$ = \sum_{\theta,\eta \in \textnormal{types} (V_{\chi})} |c_{\theta}|^2 \cdot |d_{\eta}|^2 \cdot \left| \langle a h^{\chi}_{\theta} , h^{\chi}_{\eta} \rangle \right|^2 \leq C \cdot \omega (a)^{-1} \cdot ||f_1||^2 \cdot ||f_2||^2.$$ \end{proof} \subsection{} Let us now establish Claim \ref{clm SL2 3} in the case $G = SL_2 ({\mathbb R})$: \begin{proof}[Proof (of Claim \ref{clm SL2 3} in the case $G = SL_2 ({\mathbb R})$).] In the case $\chi |_{\mathbf{a} ({\mathbb R}^{\times}_{>0})} \neq 1$, this is the contents of \cite[Theorem 2.1]{BrCoNiTa} (which contains a stronger claim, incorporating $\chi$ into the inequality). Let us therefore assume $\chi|_{\mathbf{a} ({\mathbb R}^{\times}_{>0})} = 1$ and so $\chi (\mathbf{a} (-1)) = -1$. For $n \in {\mathbb Z}$, let us denote by $\theta_n$ the character of $K$ given by $\mtrx{c}{-s}{s}{c} \mapsto (c+is)^n$. We want to see that $$ \cosh (x) \left| \langle \mathbf{a} (e^x) h_{\theta_n}^{\chi} , h_{\theta_m}^{\chi} \rangle \right|$$ is bounded as we vary $x \in [0,+\infty)$ and $m,n \in 1 + 2 {\mathbb Z}$. We have $V_{\chi} = V_{\chi}^- \oplus V_{\chi}^+$, where $V_{\chi}^-$ and $V_{\chi}^+$ are irreducible unitary $G$-representations, ${\rm types} (V_{\chi}^-) = \{ \theta_n \ : \ n \in -1 - 2 {\mathbb Z}_{\ge 0}\}$ and ${\rm types} (V_{\chi}^+) = \{ \theta_n \ : \ n \in 1 + 2 {\mathbb Z}_{\ge 0}\}$. Since the matrix coefficient in question vanishes when $m \in -1 - 2 {\mathbb Z}_{\ge 0}$ and $n \in 1 + 2 {\mathbb Z}_{\ge 0}$ or vice versa, and since the matrix coefficient in question does not change when we replace $(n ,m)$ by $(-n , -m)$, we can assume $m,n \in 1 + 2 {\mathbb Z}_{ \ge 0}$. Furthermore, by conjugating by $\mtrx{0}{1}{-1}{0}$ it is straight-forward to see that we can assume that $m \ge n$. We denote $k := \tfrac{m-n}{2}$. 
Let us use the well-known concrete expression of matrix coefficients in terms of the hypergeometric function (see \cite[\S 6.5]{KlVi}): $$ \cosh (x) \cdot \langle \mathbf{a} (e^x) h_{\theta_n}^{\chi} , h_{\theta_m}^{\chi} \rangle = \tanh (x)^k \cdot \frac{(1+\frac{n-1}{2})_k}{k!} \cdot {}_2 F_1 \left( \frac{n-1}{2}+k+1 , \ -\frac{n-1}{2} , \ k+1 , \ \tanh (x)^2 \right).$$ We want to show that this expression is bounded as we vary $x \in [0,+\infty)$, $n \in 1+2 {\mathbb Z}_{\ge 0}$ and $k \in {\mathbb Z}_{\ge 0}$. Performing a change of variables $\frac{1-t}{2} := \tanh (x)^2$, denoting $r := \tfrac{n-1}{2}$ and interpreting in terms of Jacobi polynomials, we can rewrite this last expression as: $$Q^k_r (t) := (\tfrac{1-t}{2})^{k/2} \cdot P^{(k,0)}_{r} (t).$$ We want to see that $Q^k_r (t)$ is bounded as we vary $t \in [-1,1]$, $r \in {\mathbb Z}_{\ge 0}$ and $k \in {\mathbb Z}_{\ge 0}$. But, it is known that under a suitable interpretation of the variable $t$, $Q^k_r (t)$ is equal to a matrix coefficient of unit vectors in an irreducible unitary representation of ${\rm SU} (2)$, see, for example, \cite[\S 6.3]{KlVi}. Therefore, we have $ |Q^k_r (t)| \leq 1$ for all $t \in [-1,1]$, $r \in {\mathbb Z}_{\ge 0}$ and $k \in {\mathbb Z}_{\ge 0}$, as desired. See also \cite[Equation (20)]{HaSc} for a direct proof of this last inequality. \end{proof} \subsection{} Finally, we want to establish Claim \ref{clm SL2 3} in the case $G = PGL_2 (\Omega)$. We will use the following proposition: \begin{proposition}\label{prop padic estimate} There exists $C>0$ such that the following holds. Let $\psi_1$ and $\psi_2$ be unitary characters of $\varpi {\mathcal O}$. Let $\alpha_1 , \alpha_2 \in {\mathcal O}^{\times} x + x^2 \Omega [[x]] \subset \Omega [[x]]$ be power series. Let $\chi$ be a non-trivial unitary character of $\Omega^{\times}$. Denote by $c(\chi)$ the number $0$ if $\chi|_{{\mathcal O}^{\times}} = 1$ and otherwise the smallest number $c \in {\mathbb Z}_{\ge 1}$ for which $\chi|_{1+\varpi^c {\mathcal O}} = 1$. Also, denote by $d(\chi)$ the number $1/|1-\chi (\varpi)|$ if $c(\chi) = 0$ and the number $0$ if $c(\chi) \neq 0$. Let $0 < m_1 \leq m_2 \leq n$ be integers. Then $$ \left| \int_{\varpi^{m_1} {\mathcal O} \smallsetminus \varpi^{m_2} {\mathcal O}} \psi_1 (\alpha_1 (x)) \psi_2 (\alpha_2 (\varpi^n x^{-1})) \chi (x) \cdot \frac{dx}{|x|} \right| \leq C(c(\chi)+d(\chi) + 1).$$ \end{proposition} \begin{proof}[Proof (of Claim \ref{clm SL2 3} in the case $G = PGL_2 (\Omega)$ given Proposition \ref{prop padic estimate}).] Let us calculate more concretely the inner product appearing in Claim \ref{clm SL2 3}. We can normalize the inner product on $V_{\chi}$ so that for $f_1, f_2 \in V_{\chi}$ we have $$ \langle f_1 , f_2 \rangle = \int_{\Omega} f_1 \mtrx{1}{0}{x}{1} \overline{f_2 \mtrx{1}{0}{x}{1}} \cdot dx.$$ We then calculate $$ \left\langle \mtrx{t}{0}{0}{1} h^{\chi}_{\theta} , h^{\chi}_{\mu} \right\rangle = |t|^{-1/2} \chi (t) \int_{\Omega} h^{\chi}_{\theta} \mtrx{1}{0}{x}{1} \overline{h^{\chi}_{\mu} \mtrx{1}{0}{t^{-1} x}{1}} \cdot dx $$ and thus we want to see that $$ I_{\theta , \mu} (t) := \int_{\Omega} h^{\chi}_{\theta} \mtrx{1}{0}{x}{1} \overline{h^{\chi}_{\mu} \mtrx{1}{0}{t^{-1} x}{1}} \cdot dx $$ is bounded independently of $\theta , \mu \in \hat{K}$ and $t \in \Omega^{\times}$ satisfying $|t| \ge 1$.
In general, for $\theta \in \hat{K}$ and $x \in \Omega$, we have $$ h^{\chi}_{\theta} \mtrx{1}{0}{x}{1} = \frac{\chi^{-1} (1 - \zeta x^2)}{|1 - \zeta x^2|^{1/2}} \theta \mtrx{1}{\zeta x}{x}{1}.$$ And so we obtain \begin{equation}\label{eq for I} I_{\theta , \mu} (t) = \int_{\Omega} \frac{\chi^{-1} (1 - \zeta x^2)}{|1 - \zeta x^2|^{1/2}} \frac{\chi (1 - \zeta t^{-2} x^2)}{|1 - \zeta t^{-2} x^2|^{1/2}} \theta \mtrx{1}{\zeta x}{x}{1} \mu^{-1} \mtrx{1}{\zeta t^{-1} x}{t^{-1} x}{1} \cdot dx. \end{equation} If for $D \subset \Omega$ we denote by $I^D_{\theta , \mu} (t)$ the same expression as that for $I_{\theta , \mu} (t)$ in (\ref{eq for I}) but where integration is performed over $D$, we see $$ | I^{\varpi {\mathcal O}}_{\theta , \mu} (t) | \leq \int_{\varpi {\mathcal O}} dx = 1/q$$ and thus this is bounded. Furthermore, $$ |I^{\Omega \smallsetminus t {\mathcal O}}_{\theta , \mu} (t)| \leq \int_{\Omega \smallsetminus t {\mathcal O}} \frac{dx}{|x| \cdot |t^{-1} x|} = 1/q $$ and thus this is bounded. Finally, for any specific $- \log_q |t| \leq m \leq 0$, we have $$ |I^{\varpi^{m} {\mathcal O}^{\times}}_{\theta , \mu} (t)| \leq \int_{\varpi^{m} {\mathcal O}^{\times}} \frac{dx}{|x|} =1-1/q.$$ Therefore, denoting by $k \in {\mathbb Z}_{\ge 1}$ a number such that $\chi|_{1+\varpi^{2k} {\mathcal O}} = 1$, it is enough to bound $I^{\varpi^k t {\mathcal O} \smallsetminus \varpi^{-k} {\mathcal O}}_{\theta , \mu} (t)$. We have $$I^{\varpi^k t {\mathcal O} \smallsetminus \varpi^{-k} {\mathcal O}}_{\theta , \mu} (t) = $$ $$ = \chi^{-1} (- \zeta) \theta \mtrx{0}{\zeta}{1}{0} \int_{\varpi^k t {\mathcal O} \smallsetminus \varpi^{-k} {\mathcal O}} \chi^{-2} (x) \theta \mtrx{1}{x^{-1}}{\zeta^{-1} x^{-1} }{1} \mu^{-1} \mtrx{1}{\zeta t^{-1} x}{t^{-1} x}{1} \cdot \frac{dx}{|x|}.$$ Let us denote by $K^{\prime} \subset K$ the subgroup consisting of $\mtrx{x}{\zeta y}{y}{x}$ for which $|y| \leq |x| \cdot |p|$. We have an isomorphism of topological groups $$ e : p {\mathcal O} \to K^{\prime}$$ given by $$ e (y) := \exp \mtrx{0}{\zeta y}{y}{0}.$$ Let us denote by $\alpha : p {\mathcal O} \to p {\mathcal O}$ the map given by $\alpha (y) := e^{-1} \mtrx{1}{\zeta y}{y}{1}$. We have a power series expansion $\alpha (y) = y + \zeta y^3 / 3 + \zeta^2 y^5 / 5 + \ldots$. Let us now denote by $\widetilde{\theta}$ the unitary character of $p {\mathcal O}$ satisfying $\theta|_{K'} \circ e = \widetilde{\theta}$ and by $\widetilde{\mu}$ the unitary character of $p {\mathcal O}$ satisfying $\mu|_{K'} \circ e = \widetilde{\mu}$. Returning to our integral, we can take $k$ large enough so that $\varpi^k \in p {\mathcal O}$. Substituting $x^{-1}$ in place of $x$ in the integral at hand, we see that we need to show that $$ \int_{\varpi^{k+1} {\mathcal O} \smallsetminus \varpi^{-k+1} t^{-1} {\mathcal O}} \widetilde{\theta} (\alpha (\zeta^{-1} x)) \widetilde{\mu}^{-1} (\alpha (t^{-1} x^{-1})) \cdot \chi^2 (x) \cdot \frac{dx}{|x|}$$ is bounded independently of $\widetilde{\theta} , \widetilde{\mu} \in \widehat{p {\mathcal O}}$ and $t \in \Omega^{\times}$ satisfying $|t| \ge 1$. This is implied by Proposition \ref{prop padic estimate}.
\end{proof} \begin{remark} We see from the proof that we have a more precise version of Claim \ref{clm SL2 3}: There exists $C>0$ such that for $\chi$ satisfying (\ref{eq condition}), $\theta , \mu \in {\rm types} (V_{\chi})$ and $t \in \Omega^{\times}$ satisfying $|t| \ge 1$, we have $$ \left| \left\langle \mtrx{t}{0}{0}{1} h^{\chi}_{\theta} , h^{\chi}_{\mu} \right\rangle \right| \leq C \cdot |t|^{-1/2} \cdot (c(\chi) +d(\chi)+1).$$ Here $c(\chi)$ and $d(\chi)$ are as in the formulation of Proposition \ref{prop padic estimate} (when we identify $\chi$ with a character of $\Omega^{\times}$ via the isomorphism ${\boldsymbol{a}} : \Omega^{\times} \to A$). \end{remark} Let us prove Proposition \ref{prop padic estimate}. \begin{proof}[Proof (of Proposition \ref{prop padic estimate}).] We can assume that the $1$-th coefficients of $\alpha_1$ and $\alpha_2$ are equal to $1$. Let us fix a unitary character $\psi$ of $\Omega$ satisfying $\psi|_{{\mathcal O}} = 1$ and $\psi|_{\varpi^{-1} {\mathcal O}} \neq 1$. For $i \in \{ 1 , 2 \}$, let $a_i \in \Omega$ be such that $\psi (a_i x) = \psi_i (x)$ for all $x \in \varpi {\mathcal O}$ ($a_i$ are defined up to addition of elements in $\varpi^{-1} {\mathcal O}$, so in particular we can assume that $a_i \neq 0$). Given $0 < m < n$ we define $$ J^m := \int_{\varpi^m {\mathcal O}^{\times}} \psi \left( a_1 \alpha_1 (x) + a_2 \alpha_2 (\varpi^n x^{-1}) \right) \cdot \chi (x) \cdot \frac{dx}{|x|} = $$ $$ = \chi (\varpi)^{m} \int_{ {\mathcal O}^{\times} } \psi \left( a_1 \alpha_1 (\varpi^m x) + a_2 \alpha_2 (\varpi^{n-m} x^{-1}) \right) \cdot \chi (x) \cdot dx,$$ so that the integral in question equals $\sum_{m_1 \leq m < m_2} J^m$. Let us abbreviate $b := (a_2 \varpi^{n-m}) / (a_1 \varpi^m)$. As we vary $m$, let us divide into cases. \begin{enumerate} \item Assume that $ |\varpi^m | < q^{-c (\chi)}$ and that $|b| < q^{-c(\chi)}$. Set $$\beta (x) := \varpi^{-m} \alpha_1 (\varpi^m x) + b \cdot \varpi^{m-n} \alpha_2 (\varpi^{n-m} x^{-1}).$$ Then $\beta$ gives a well-defined invertible analytic map ${\mathcal O}^{\times} \to {\mathcal O}^{\times}$, whose derivative is everywhere a unit. Moreover, if ${c(\chi)} > 0$, we have $\beta^{-1} (x_0 (1 + \varpi^{c(\chi)} {\mathcal O})) = x_0 (1 + \varpi^{c(\chi)} {\mathcal O})$ for any $x_0 \in {\mathcal O}^{\times}$. Hence $$ J^m = \chi (\varpi)^{m} \int_{ {\mathcal O}^{\times}} \psi (a_1 \varpi^m \beta (x)) \cdot \chi (x) \cdot dx = $$ $$ = \chi (\varpi)^{m} \int_{ {\mathcal O}^{\times} } \psi (a_1 \varpi^m x) \cdot \chi (x) \cdot dx.$$ Therefore: \begin{enumerate} \item Suppose that $|a_1 \varpi^m| \leq 1$. Then $J^m = \chi (\varpi)^{m} (1-1/q)$ if $\chi$ is unramified and $J^m = 0$ if $\chi$ is ramified. \item Suppose that $|a_1 \varpi^m \varpi^{c(\chi)} | > q^{-1}$. Then $J^m = 0$ if $\chi$ is unramified. If $\chi$ is ramified, we write $$ J^m = \chi (\varpi)^{m} \sum_{x_0 \in {\mathcal O}^{\times} / (1+\varpi^{c(\chi)} {\mathcal O})} \psi (a_1 \varpi^m x_0) \chi (x_0) \int_{\varpi^{c(\chi)} {\mathcal O}} \psi (a_1 \varpi^m x) \cdot dx$$ and each integral here is equal to $0$, so that also in that case we obtain $J^m = 0$. \item The case when neither of these two cases is satisfied corresponds to only finitely many values of $m$, whose number is linearly bounded in terms of $c(\chi)+1$, and so we can be content with the crude estimate $|J^m| \leq 1$ in this case. \end{enumerate} \item Assume that $|\varpi^{n-m}| < q^{-c(\chi)}$ and $|b^{-1}| < q^{-c ( \chi)}$. 
This case is dealt with analogously to the previous one; one denotes $$ \beta (x) := \varpi^{m-n} \alpha_2 (\varpi^{n-m} x^{-1}) + b^{-1} \varpi^{-m} \alpha_1 (\varpi^m x)$$ and gets $$ J^m = \chi (\varpi)^m \int_{{\mathcal O}^{\times}} \psi (a_2 \varpi^{n-m} x) \cdot \chi (x) \cdot dx.$$ And thus: \begin{enumerate} \item Suppose that $|a_2 \varpi^{n-m}| \leq 1$. Then $J^m = \chi (\varpi)^{m} (1-1/q)$ if $\chi$ is unramified and $J^m = 0$ if $\chi$ is ramified. \item Suppose that $|a_2 \varpi^{n-m} \varpi^{c(\chi)} | > q^{-1}$. Then $J^m = 0$ if $\chi$ is unramified. If $\chi$ is ramified, we write $$ J^m = \chi (\varpi)^{m} \sum_{x_0 \in {\mathcal O}^{\times} / (1+\varpi^{c(\chi)} {\mathcal O})} \psi (a_2 \varpi^{n-m} x_0) \chi (x_0) \int_{\varpi^{c(\chi)} {\mathcal O}} \psi (a_2 \varpi^{n-m} x) \cdot dx$$ and each integral here is equal to $0$, so that also in that case we obtain $J^m = 0$. \item The case when neither of these two cases is satisfied corresponds to only finitely many values of $m$, whose number is linearly bounded in terms of $c(\chi)+1$, and so we can be content with the crude estimate $|J^m| \leq 1$ in this case. \end{enumerate} \item The case when neither of these two cases is satisfied corresponds to only finitely many values of $m$, whose number is linearly bounded in terms of $c(\chi)$, and so we can be content with the crude estimate $|J^m| \leq 1$ in this case. \end{enumerate} As our integral in question is equal to $\sum_{m_1 \leq m < m_2} J^m$, it is straight-forward that the findings above give the boundedness as desired. \end{proof} \appendix \section{Auxiliary claims regarding polynomial growth of exponential integrals and sums} \subsection{Some notation} We denote $[n] := \{1 , 2, \ldots , n \}$. We denote $${\mathbb C}_{\leq 0} := \{ z \in {\mathbb C} \ | \ {\rm Re} (z) \leq 0\}, \quad D := \{ z \in {\mathbb C} \ | \ |z| \leq 1\}.$$ Given $x = (x_1 , \ldots , x_n) \in {\mathbb R}_{\ge 0}^n$ and $m = (m_1 , \ldots , m_n) \in {\mathbb Z}_{\ge 0}^n$, we write $x^m := x_1^{m_1} \ldots x_n^{m_n}$. Given $\lambda \in {\mathbb C}_{\leq 0}^n$ we denote $$J_{\lambda} := \{ 1 \leq j \leq n \ | \ {\rm Re} (\lambda_j) = 0\}.$$ Given $(\lambda , m) \in {\mathbb C}_{\leq 0}^n \times {\mathbb Z}_{\ge 0}^n$, we denote $d(\lambda , m) := \sum_{j \in J_{\lambda}} (1+m_j)$. Given $J \subset [n]$ and some set $X$, let us denote by ${\rm res}_{J} : X^n \to X^{J}$ the natural restriction and by ${\rm ext}^{J} : X^{J} \to X^{n}$ the natural extension by zero. We fix a finite set ${\mathcal I} \subset {\mathbb R}_{\ge 0}^n$ with the property that given $j \in [n]$ there exists $v \in {\mathcal I}$ such that $\langle v , e_j \rangle \neq 0$, where $e_j$ is the $j$-th standard basis vector. We denote $$ P_{< r} := \{ x \in {\mathbb R}_{\ge 0}^n \ | \ \langle v , x \rangle < r \ \forall v \in {\mathcal I} \}.$$ Given $J \subset [n]$, we denote by $P_J \subset {\mathbb R}_{\ge 0}^J$ the convex pre-compact subset $\{ y \in {\mathbb R}_{\ge 0}^J \ | \ {\rm ext}^{J} (y) \in P_{<1}\}$. In \S\ref{ssec growth integral} we will also use the following notations. We consider a compact space $B$ equipped with a nowhere vanishing Radon measure $db$.
Let us say that a function $\phi : B \times {\mathbb R}_{\ge 0}^n \to {\mathbb C}$ is \textbf{nice} if it is expressible as $$ B \times {\mathbb R}_{\ge 0}^n \xrightarrow{{\rm id}_B \times {\rm ei}} B \times D^n \xrightarrow{\phi^{\circ}} {\mathbb C}$$ where ${\rm ei}(x_1 , \ldots , x_n) := (e^{-x_1} , \ldots , e^{-x_n})$ and $\phi^{\circ}$ is continuous and holomorphic in the second variable (in the sense that when we fix the variable in $B$ it is the restriction of a holomorphic function on a neighbourhood of $D^n$). Given $J \subset [n]$ we denote by ${\rm res}_{J} \phi : B \times {\mathbb R}_{\ge 0}^J \to {\mathbb C}$ the function given by ${\rm res}_{J} \phi (b , y) := \phi^{\circ} (b , {\rm ext}^{J} ({\rm ei} (y)))$. We also write $\phi (b , +\infty)$ for $\phi^{\circ} (b , 0)$ etc. \subsection{Growth - the case of summation over a lattice} \begin{lemma}\label{lem appb lat 1} Let $\lambda := (\lambda_1 , \ldots , \lambda_n) \in {\mathbb C}_{\leq 0}^n$ and $m := (m_1 , \ldots , m_n) \in {\mathbb Z}_{\ge 0}^n$. Let $K \subset {\mathbb R}_{\ge 0}^n$ be a compact subset. Assume that ${\rm Re} (\lambda) = 0$ and $\lambda \notin (2\pi i) {\mathbb Z}^n$. We have $$ \sup_{Q \subset K} \left| \frac{1}{r^n} \sum_{x \in \frac{1}{r} {\mathbb Z}_{\ge 0}^n \cap Q} x^m e^{r \langle \lambda , x \rangle} \right| = O( r^{-1} )$$ as $r \to +\infty$, where $Q$ ranges over convex subsets. \end{lemma} \begin{proof} Let us re-order the variables, assuming that $\lambda_1 \notin 2\pi i {\mathbb Z}$. Let us write $x = (x_1 , x')$ where $x' = (x_2 , \ldots , x_n)$ and analogously write $m'$ et cetera. Given a convex subset $Q \subset K$ and $x' \in {\mathbb R}_{\ge 0}^{n-1}$ let us denote by $Q^{x'} \subset {\mathbb R}_{\ge 0}$ the subset consisting of $x_1$ for which $(x_1 , x') \in Q$ (it is an interval). Let us enlarge $K$ for convenience, writing it in the form $K = K_1 \times K'$ where $K_1 \subset {\mathbb R}_{\ge 0}$ is a closed interval and $K' \subset {\mathbb R}_{\ge 0}^{n-1}$ is the product of closed intervals. We have $$ \sum_{x \in \frac{1}{r} {\mathbb Z}_{\ge 0}^n \cap Q} x^m e^{r\langle \lambda , x \rangle} = \sum_{ x' \in \frac{1}{r} {\mathbb Z}_{\ge 0}^{n-1} \cap K' } (x')^{m'} e^{r \langle \lambda' , x' \rangle }\left( \sum_{ x_1 \in \frac{1}{r} {\mathbb Z}_{\ge 0} \cap Q^{x'}} x_1^{m_1} e^{r \lambda_1 x_1} \right).$$ We have $Q^{x'} \subset K_1$ and it is elementary to see that $$ \sup_{R \subset K_1} \left| \sum_{x_1 \in \frac{1}{r} {\mathbb Z}_{\ge 0} \cap R} x_1^{m_1} e^{r \lambda_1 x_1} \right| = O(1)$$ as $r \to +\infty$, where $R$ ranges over intervals. Therefore we obtain, for some $C>0$ (not depending on $Q$) and all $r \ge 1$: $$ \left| \frac{1}{r^n} \sum_{x \in \frac{1}{r} {\mathbb Z}_{\ge 0}^n \cap Q} x^m e^{r\langle \lambda , x \rangle} \right| \leq C \left( \frac{1}{r^{n-1}} \sum_{ x' \in \frac{1}{r} {\mathbb Z}_{\ge 0}^{n-1} \cap K' } (x')^{m'} \right) r^{-1}.$$ Since the expression in brackets is clearly bounded independently of $r$, we are done. \end{proof} \begin{lemma}\label{lem appb lat 2} Let $(\lambda , m ) \in {\mathbb C}_{\leq 0}^n \times {\mathbb Z}_{\ge 0}^n$.
Then the limit $$ \lim_{r \to +\infty} \frac{1}{r^{d(\lambda , m)}} \sum_{x \in {\mathbb Z}_{\ge 0}^n \cap P_{<r}} x^m e^{\langle \lambda , x \rangle}$$ exists, equal to $0$ if ${\rm res}_{J_{\lambda}} (\lambda) \notin 2\pi i \cdot {\mathbb Z}^{J_{\lambda}}$ and otherwise equal to $$\left( \int_{P_{J_{\lambda}}} y^{{\rm res}_{J_{\lambda}} (m)} dy \right) \left( \sum_{z \in {\mathbb Z}_{\ge 0}^{J_{\lambda}^c}} z^{{\rm res}_{J_{\lambda}^c} (m)} e^{\langle {\rm res}_{J_{\lambda}^c} (\lambda) , z \rangle } \right)$$ (the sum converging absolutely). \end{lemma} \begin{proof} Let us abbreviate $J := J_{\lambda}$. Let us denote $\lambda' := {\rm res}_J (\lambda)$ and $\lambda'' := {\rm res}_{J^c} (\lambda)$, and similarly for $m$. Given $x'' \in {\mathbb Z}_{\ge 0}^{J^c}$ let us denote by $P_{(<r)}^{x''} \subset {\mathbb R}_{\ge 0}^{J}$ the subset consisting of $y'$ for which ${\rm ext}^{J} (r y') + {\rm ext}^{J^c} (x'') \in P_{<r}$. We have $$ \sum_{x \in {\mathbb Z}_{\ge 0}^n \cap P_{<r}} x^m e^{\langle \lambda , x \rangle} = r^{d(\lambda , m) - |J|} \sum_{x'' \in {\mathbb Z}_{\ge 0}^{J^c}} (x'')^{m''} e^{\langle \lambda'' , x'' \rangle} \sum_{y' \in \frac{1}{r} {\mathbb Z}_{\ge 0}^{J} \cap P^{x''}_{(<r)}} (y')^{m'} e^{r \langle \lambda' , y' \rangle} =: \triangle.$$ Let us assume first that $\lambda' \notin 2 \pi i \cdot {\mathbb Z}^{J}$. Then by Lemma \ref{lem appb lat 1} there exists $C>0$ such that for all convex subsets $Q \subset P_J$ and all $r \ge 1$ we have $$ \left| \frac{1}{r^{|J|}} \sum_{y' \in \frac{1}{r} {\mathbb Z}_{\ge 0}^{J} \cap Q} (y')^{m'} e^{r \langle \lambda' , y' \rangle} \right| \leq C \cdot r^{-1}.$$ Therefore $$ | \triangle | \leq C r^{d(\lambda , m) - 1} \sum_{x'' \in {\mathbb Z}_{\ge 0}^{J^c}} (x'')^{m''} e^{\langle {\rm Re} (\lambda'') , x'' \rangle}, $$ giving the desired. Now we assume $\lambda' \in 2 \pi i \cdot {\mathbb Z}^J$. It is not hard to see that $$ \lim_{r \to +\infty} \frac{1}{r^{|J|}} \sum_{y' \in \frac{1}{r} {\mathbb Z}_{\ge 0}^{J} \cap P^{x''}_{(<r)}} (y')^{m'} = \int_{P_J} (y')^{m'} dy'.$$ Hence we have (by dominated convergence) $$ \lim_{r \to +\infty} \frac{1}{r^{d(\lambda,m)}} \triangle = \sum_{x'' \in {\mathbb Z}_{\ge 0}^{J^c}} (x'')^{m''} e^{\langle \lambda'' , x'' \rangle} \int_{P_J} (y')^{m'} dy' .$$ \end{proof} \begin{claim}\label{clm appb lat} Let $p \ge 1$, let $\{ ( \lambda^{(\ell)} , m^{(\ell)}) \}_{\ell \in [p]} \subset {\mathbb C}_{\leq 0}^{n} \times {\mathbb Z}_{\ge 0}^n$ be a collection of pairwise different couples and let $\{ c^{(\ell)} \}_{\ell \in [p]} \subset {\mathbb C} \smallsetminus \{ 0 \}$ be a collection of non-zero scalars. Denote $d := \max_{\ell \in [p]} d(2{\rm Re} (\lambda^{(\ell)}) , 2 m^{(\ell)})$. The limit $$ \lim_{r \to +\infty} \frac{1}{r^d} \sum_{x \in {\mathbb Z}_{\ge 0}^n \cap P_{<r}} \left| \sum_{\ell \in [p]} c^{(\ell)} x^{m^{(\ell)}} e^{\langle \lambda^{(\ell)} , x \rangle } \right|^2 $$ exists and is strictly positive. \end{claim} \begin{proof} Let us break the summand into a sum following $$\left| \sum_{\ell \in [p]} A_{\ell} \right|^2 = \sum_{\ell_1 , \ell_2 \in [p]} A_{\ell_1} \overline{A_{\ell_2}}.$$ Using Lemma \ref{lem appb lat 2} we see that the resulting limit breaks down as a sum, over $(\ell_1 ,\ell_2) \in [p]^2$, of limits which exist, so the only thing to check is that the resulting limit is non-zero.
It is easily seen that the limit at the $(\ell_1 , \ell_2)$ place is zero unless $d(2{\rm Re} (\lambda^{(\ell_1)}) , 2 m^{(\ell_1)} ) = d$, $d(2{\rm Re} (\lambda^{(\ell_2)}) , 2 m^{(\ell_2)} ) = d$, $J_{\lambda^{(\ell_1)}} = J_{\lambda^{(\ell_2)}}$ and ${\rm res}_{J^{(\ell_1)}} (\lambda^{(\ell_2)}) - {\rm res}_{J^{(\ell_1)}} (\lambda^{(\ell_1)}) \in 2\pi i \cdot {\mathbb Z}^J$. We thus can reduce to the case when, for a given $J \subset [n]$, we have $J_{\lambda^{(\ell)}} = J$ for all $\ell \in [p]$, we have $d(2{\rm Re} (\lambda^{(\ell)}) , 2 m^{(\ell)}) = d$ for all $\ell \in [p]$, and we have ${\rm res}_{J} (\lambda^{(\ell_2)}) - {\rm res}_{J} (\lambda^{(\ell_1)}) \in 2\pi i \cdot {\mathbb Z}^J$ for all $\ell_1 , \ell_2 \in [p]$. We then obtain, using Lemma \ref{lem appb lat 2}, that our overall limit equals $$ \sum_{z \in {\mathbb Z}_{\ge 0}^{J^c}} \int_{P_J} \left| \sum_{\ell \in [p]} c^{(\ell)} y^{{\rm res}_J (m^{(\ell)})} z^{{\rm res}_{J^c} (m^{(\ell)})} e^{\langle {\rm res}_{J^c} (\lambda^{(\ell)}) , z \rangle } \right|^2 dy.$$ It is therefore enough to check that $$ \sum_{\ell \in [p]} c^{(\ell)} y^{{\rm res}_J (m^{(\ell)})} z^{{\rm res}_{J^c} (m^{(\ell)})} e^{\langle {\rm res}_{J^c} (\lambda^{(\ell)}) , z \rangle }, $$ a function in $(z,y ) \in {\mathbb Z}_{\ge 0}^{J^c} \times P_J$, is not identically zero. By the local linear independence of powers of $y$, we can further assume that ${\rm res}_J (m^{(\ell)})$ is independent of $\ell \in [p]$, and want to check that $$\sum_{\ell \in [p]} c^{(\ell)} z^{{\rm res}_{J^c} (m^{(\ell)})} e^{\langle {\rm res}_{J^c} (\lambda^{(\ell)}) , z \rangle }, $$ a function in $z \in {\mathbb Z}_{\ge 0}^{J^c}$, is not identically zero. Notice that, by our assumptions, the elements in the collection $ \{ ( {\rm res}_{J^c} (\lambda^{(\ell)}) , {\rm res}_{J^c} (m^{(\ell)}) ) \}_{\ell \in [p]}$ are pairwise different. Thus the non-vanishing of our sum is clear (by linear algebra of generalized eigenvectors of shift operators on ${\mathbb Z}^{J^c}$). \end{proof} \subsection{Growth - the case of an integral}\label{ssec growth integral} \begin{lemma}\label{lem appb 1} Let $\lambda := (\lambda_1 , \ldots , \lambda_n) \in {\mathbb C}_{\leq 0}^n$ and $m := (m_1 , \ldots , m_n) \in {\mathbb Z}_{\ge 0}^n$. Let $K \subset {\mathbb R}_{\ge 0}^n$ be a compact subset. Assume that ${\rm Re} (\lambda) = 0$ and $\lambda \neq 0$. We have $$ \sup_{Q \subset K} \left| \int_Q x^m e^{r \langle \lambda , x \rangle} dx \right| = O( r^{-1} )$$ as $r \to +\infty$, where $Q$ ranges over convex subsets. \end{lemma} \begin{proof} Let us re-order the variables, assuming that $\lambda_1 \neq 0$. Let us write $x = (x_1 , x')$ where $x' = (x_2 , \ldots , x_n)$ and analogously write $m'$ et cetera. Given a convex subset $Q \subset K$ and $x' \in {\mathbb R}_{\ge 0}^{n-1}$ let us denote by $Q^{x'} \subset {\mathbb R}_{\ge 0}$ the subset consisting of $x_1$ for which $(x_1 , x') \in Q$ (it is an interval). Let us enlarge $K$ for convenience, writing it in the form $K = K_1 \times K'$ where $K_1 \subset {\mathbb R}_{\ge 0}$ is a closed interval and $K' \subset {\mathbb R}_{\ge 0}^{n-1}$ is the product of closed intervals. Using Fubini's theorem, $$ \int_Q x^m e^{r\langle \lambda , x \rangle} dx = \int_{ K' } (x')^{m'} e^{r \langle \lambda' , x' \rangle }\left( \int_{Q^{x'}} x_1^{m_1} e^{r \lambda_1 x_1} dx_1 \right) dx'.$$ We have $Q^{x'} \subset K_1$ and it is elementary to see that $$ \sup_{R \subset K_1} \left| \int_{R} x_1^{m_1} e^{r \lambda_1 x_1} dx_1 \right| = O(r^{-1})$$ as $r \to +\infty$, where $R$ ranges over intervals.
Therefore we obtain, for some $C>0$ and all $r \ge 1$: $$ \left| \int_Q x^m e^{r\langle \lambda , x \rangle} dx \right| \leq C \left( \int_{K'} (x')^{m'} dx' \right) r^{-1},$$ as desired. \end{proof} \begin{lemma}\label{lem appb 2} Let $(\lambda , m ) \in {\mathbb C}_{\leq 0}^n \times {\mathbb Z}_{\ge 0}^n$ and let $\phi : B \times {\mathbb R}_{\ge 0}^n \to {\mathbb C}$ be a nice function. Then the limit $$ \lim_{r \to +\infty} \frac{1}{r^{d(\lambda , m)}} \int_B \int_{P_{<r}} x^m e^{\langle \lambda , x \rangle} \phi (b,x)dxdb$$ exists, equal to $0$ if ${\rm res}_{J_{\lambda}} \lambda \neq 0$ and otherwise equal to $$\left( \int_{P_{J_{\lambda}}} y^{{\rm res}_{J_{\lambda}} (m)} dy \right) \left( \int_B \int_{{\mathbb R}_{\ge 0}^{J_{\lambda}^c}} z^{{\rm res}_{J_{\lambda}^c} (m)} e^{\langle {\rm res}_{J_{\lambda}^c} (\lambda) , z\rangle} {\rm res}_{J_{\lambda}^c} \phi (b , z) dz db \right)$$ (the double integral converging absolutely). \end{lemma} \begin{proof} Let us re-order the variables, assuming that $J := J_{\lambda} = [k]$. Let us write $x = (x',x'')$ where $x'$ consists of the first $k$ components and $x''$ consists of the last $n-k$ components. Let us write analogously $m' , \lambda'$ etc. First, let us notice that if $k \neq 0$, we can write $$ \phi (b , x ) = e^{-x_1} \phi_0 (b , x) + \phi_1 (b , x)$$ where $\phi_0 , \phi_1 : B \times {\mathbb R}_{\ge 0}^n \to {\mathbb C}$ are nice functions and $\phi_1$ does not depend on $x_1$. Dealing with $ e^{-x_1} \phi_0 (b,x)$ instead of $\phi (b,x)$ makes us consider $\lambda$ with smaller set $J_{\lambda}$ and thus $(\lambda , m)$ with a smaller $d(\lambda , m)$ and from this, reasoning inductively, we see that we can assume that $\phi$ only depends on $(b,x'')$. Let us write $\phi'' := {\rm res}_{J^c} \phi$. Let us perform a change of variables $y' := \frac{1}{r} x'$. Let $P_{(<r)} \subset {\mathbb R}_{\ge 0}^n$ denote the transform of $P_{<r}$ under this change of variables (i.e. $(x',x'') \in P_{<r}$ if and only if $(y',x'') \in P_{(<r)}$). We obtain $$ \int_B \int_{P_{<r}} x^m e^{\langle \lambda , x \rangle} \phi (b , x) dx db = $$ $$ = r^{d} \int_B \int_{P_{(<r)}} (y')^{m'} e^{r \langle \lambda' , y' \rangle } (x'')^{m''} e^{\langle \lambda'' , x'' \rangle} \phi'' (b , x'') dy' dx'' db =: \triangle.$$ Given $x'' \in {\mathbb R}_{\ge 0}^{J^c}$, let us denote by $P_{(<r)}^{x''} \subset {\mathbb R}_{\ge 0}^{J}$ the set consisting of $y'$ for which $(y',x'') \in P_{(<r)}$. Notice that $P_{(<r_1)}^{x''} \subset P_{(<r_2)}^{x''}$ for $r_1 < r_2$ and $\cup_{r} P_{(<r)}^{x''} = P_{J}$. Using Fubini's theorem, $$ \triangle = r^{d} \int_B \int_{ {\mathbb R}_{\ge 0}^{J^c}} (x'')^{m''} e^{\langle \lambda'' , x'' \rangle } \phi'' (b , x'') \left( \int_{ P_{(<r)}^{x''}} (y')^{m'} e^{r \langle \lambda' , y' \rangle } dy' \right) dx'' db.$$ If $\lambda' \neq 0$, by Lemma \ref{lem appb 1} there exists $C>0$ such that for all convex subsets $Q \subset P_J$ and all $r \ge 1$ we have $$ \left| \int_Q (y')^{m'} e^{r \langle \lambda' , y' \rangle } dy' \right| \leq C \cdot r^{-1}.$$ We have therefore $$ \left| \triangle \right| \leq C \cdot r^{d - 1} \cdot \int_B \int_{ {\mathbb R}_{\ge 0}^{J^c}} (x'')^{m''} e^{\langle {\rm Re} (\lambda'' ) , x'' \rangle } | \phi'' (b , x'')| dx'' db $$ and thus indeed the desired limit is equal to $0$. Now we assume $\lambda' = 0$.
Using Lebesgue's dominated convergence theorem we have $$ \lim_{r \to +\infty} \frac{1}{r^d} \triangle = \lim_{r \to +\infty} \int_B \int_{{\mathbb R}_{\ge 0}^{J^c}} (x'')^{m''} e^{\langle \lambda'' , x'' \rangle } \phi'' (b , x'') \left( \int_{ P_{(<r)}^{x''}} (y')^{m'} dy' \right) dx'' db = $$ $$ = \int_B \int_{ {\mathbb R}_{\ge 0}^{J^c}} (x'')^{m''} e^{\langle \lambda'' , x'' \rangle } \phi'' (b , x'') \left( \int_{P_J} (y')^{m'} dy' \right) dx'' db$$ as desired. \end{proof} \begin{claim}\label{clm appb} Let $\{ ( \lambda^{(\ell)} , m^{(\ell)}) \}_{\ell \in [p]} \subset {\mathbb C}_{\leq 0}^{n} \times {\mathbb Z}_{\ge 0}^n$ be a collection of pairwise different couples. Let $\{ \phi^{(\ell)} \}_{\ell \in [p]}$ be a collection of nice functions $B \times {\mathbb R}_{\ge 0}^n \to {\mathbb C}$, such that for every $\ell \in [p]$ the function $b \mapsto \phi^{(\ell)} (b , +\infty)$ on $B$ is not identically zero. Denote $d := \max_{\ell \in [p]} d(2{\rm Re} (\lambda^{(\ell)}) , 2 m^{(\ell)})$. The limit $$ \lim_{r \to +\infty} \frac{1}{r^d} \int_B \int_{P_{<r}} \left| \sum_{\ell \in [p]} x^{m^{(\ell)}} e^{\langle \lambda^{(\ell)} , x \rangle } \phi^{(\ell)} (b , x)\right|^2 dx db$$ exists and is strictly positive. \end{claim} \begin{proof} Let us break the integrand into a sum following $$\left| \sum_{\ell \in [p]} A_{\ell} \right|^2 = \sum_{\ell_1 , \ell_2 \in [p]} A_{\ell_1} \overline{A_{\ell_2}}.$$ Using Lemma \ref{lem appb 2} we see that the resulting limit breaks down as a sum, over $(\ell_1 ,\ell_2) \in [p]^2$, of limits which exist, so the only thing to check is that the resulting limit is non-zero. It is easily seen that the limit at the $(\ell_1 , \ell_2)$ place is zero unless $d(2{\rm Re} (\lambda^{(\ell_1)}) , 2 m^{(\ell_1)} ) = d$, $d(2{\rm Re} (\lambda^{(\ell_2)}) , 2 m^{(\ell_2)} ) = d$, $J_{\lambda^{(\ell_1)}} = J_{\lambda^{(\ell_2)}}$ and ${\rm res}_{J^{(\ell_1)}} (\lambda^{(\ell_1)}) = {\rm res}_{J^{(\ell_1)}} (\lambda^{(\ell_2)})$. We thus can reduce to the case when, for a given $J \subset [n]$, we have $J_{\lambda^{(\ell)}} = J$ for all $\ell \in [p]$, we have $d(2{\rm Re} (\lambda^{(\ell)}) , 2 m^{(\ell)}) = d$ for all $\ell \in [p]$, and we have ${\rm res}_{J} (\lambda^{(\ell_1)}) = {\rm res}_{J} (\lambda^{(\ell_2)})$ for all $\ell_1 , \ell_2 \in [p]$. We then obtain, using Lemma \ref{lem appb 2}, that our overall limit equals $$ \int_B \int_{{\mathbb R}_{\ge 0}^{J^c}} \int_{P_J} \left| \sum_{\ell \in [p]} y^{{\rm res}_J (m^{(\ell)})} z^{{\rm res}_{J^c} (m^{(\ell)})} e^{\langle {\rm res}_{J^c} (\lambda^{(\ell)}) , z\rangle} {\rm res}_{J^c} \phi^{(\ell)} (b , z) \right|^2 dy dz db.$$ It is therefore enough to check that $$ \sum_{\ell \in [p]} y^{{\rm res}_J (m^{(\ell)})} z^{{\rm res}_{J^c} (m^{(\ell)})} e^{\langle {\rm res}_{J^c} (\lambda^{(\ell)}) , z\rangle} {\rm res}_{J^c} \phi^{(\ell)} (b , z),$$ a function in $(b,z,y ) \in B \times {\mathbb R}_{\ge 0}^{J^c} \times P_J$, is not identically zero. By the local linear independence of powers of $y$, we can further assume that ${\rm res}_J (m^{(\ell)})$ is independent of $\ell \in [p]$, and want to check that $$ \sum_{\ell \in [p]} z^{{\rm res}_{J^c} (m^{(\ell)})} e^{\langle {\rm res}_{J^c} (\lambda^{(\ell)}) , z\rangle} {\rm res}_{J^c} \phi^{(\ell)} (b , z),$$ a function in $(b,z) \in B \times {\mathbb R}_{\ge 0}^{J^c}$, is not identically zero.
Notice that, by our assumptions, the elements in the collection $ \{ ( {\rm res}_{J^c} (\lambda^{(\ell)}) , {\rm res}_{J^c} (m^{(\ell)}) ) \}_{\ell \in [p]}$ are pairwise different and for every $\ell \in [p]$, the function $b \mapsto \phi^{(\ell)} (b , {\rm ext}^{J^c} (+\infty))$ on $B$ is not identically zero. Considering the partial order on ${\mathbb C}^{J^c}$ given by $\mu_1 \leq \mu_2$ if $\mu_2 - \mu_1 \in {\mathbb Z}_{\ge 0}^{J^c}$, we can pick $\ell \in [p]$ for which ${\rm res}_{J^c} (\lambda^{(\ell)})$ is maximal among the $\{ {\rm res}_{J^c} (\lambda^{(\ell')}) \}_{\ell' \in [p]}$. We can then pick $b \in B$ such that $\phi^{(\ell)} (b , {\rm ext}^{J^c} (+\infty)) \neq 0$. We then boil down to Lemma \ref{lem appb 4} that follows. \end{proof} In the end of the proof of Claim \ref{clm appb} we have used the following: \begin{lemma}\label{lem appb 4} Let $\{ (\lambda^{(\ell)} , m^{(\ell)}) \}_{\ell \in [p]} \subset {\mathbb C}^n \times {\mathbb Z}_{\ge 0}^n$ be a collection of pairwise different couples. Let $\{ \phi^{(\ell)}\}_{\ell \in [p]}$ be a collection of nice functions ${\mathbb R}_{\ge 0}^n \to {\mathbb C}$ (so here $B = \{ 1 \}$). Suppose that $\phi^{(\ell)} (+\infty) \neq 0$ for some $\ell \in [p]$ for which $\lambda^{(\ell)}$ is maximal among the $\{ \lambda^{(\ell')} \}_{\ell' \in [p]}$ with respect to the partial order $\lambda_1 \leq \lambda_2$ if $\lambda_2 - \lambda_1 \in {\mathbb Z}_{\ge 0}^n$. Then the function $$ x \mapsto \sum_{\ell \in [p]} x^{m^{(\ell)}} e^{\langle \lambda^{(\ell)} , x \rangle} \phi^{(\ell)} (x)$$ on ${\mathbb R}_{\ge 0}^n$ is not identically zero. \end{lemma} \begin{proof} We omit the proof - one develops the $\phi^{(\ell)}$ into power series in $e^{-x_1} , \ldots , e^{-x_n}$ and uses separation by generalized eigenvalues of the partial differentiation operators $\partial_{x_1} , \ldots , \partial_{x_n}$. \end{proof} \end{document}
\begin{document} \author{Ciprian Demeter} \address{Department of Mathematics, Indiana University, Bloomington IN} \email{demeterc@@indiana.edu} \author{Shaoming Guo} \address{Department of Mathematics, Indiana University, Bloomington IN} \email{shaoguo@@indiana.edu} \thanks{The first author is partially supported by the NSF grant DMS-1161752} \title[Schr\"odinger maximal function]{Schr\"odinger maximal function estimates via the pseudoconformal transformation} \begin{abstract} We present an alternative way to recover the recent result from \cite{LR} using the pseudoconformal transformation. \end{abstract} \maketitle \section{Introduction} Recall that the solution of the Schr\"odinger equation \begin{equation} \label{e2} i\partial_t u(x,t)+\Delta u(x,t)=0,\;x\in\R^n,\;t\ge 0 \end{equation} with initial data $u_0\in L^2(\R^n)$ is given by $$u(x,t)=e^{it\Delta}u_0(x)=\int_{\R^n}\widehat{u}_0(\xi)e^{2\pi ix\cdot\xi-4\pi^2it|\xi|^2}d\xi.$$ A fundamental open question for $n\ge 2$ is identifying the smallest Sobolev index $s>0$ for which $$\lim_{t\to 0}u(x,t)=u_0(x)\; a.e., \text{ for each }u_0\in H^s(\R^n).$$ The main goal of this note is to give an alternative argument for the following recent result of Luc\`a and Rogers, which proves a lower bound on the Sobolev regularity index $s$. \begin{theorem}\label{main} Let $n\ge 2$ and $s<\frac{n}{2(n+2)}$. Then there exist $R_k\to\infty$ and $f_k\in L^2(\R^n)$ with $\widehat{f_k}$ supported in the annulus $|\xi|\sim R_k$ such that \begin{equation} \label{e1} \lim_{k\to\infty}\frac{\|\sup_{0<t\lesssim 1}|e^{it\Delta}f_k(x)|\|_{L^2(B(0,1))}}{R_k^{s}\|f_k\|_{L^2(\R^n)}}=\infty. \end{equation} \end{theorem} We use the pseudoconformal symmetry, according to which, if $u(x,t)$ solves \eqref{e2} then so does $$v(x,t)=\frac1{t^{n/2}}\bar{u}(\frac{x}{t},\frac1{t})e^{i\frac{|x|^2}{4t}}.$$ Moreover, the initial data of the two solutions will have comparable $L^2$ norms. See the Appendix. We will start with a solution $u$ (the same as the one in \cite{LR}) that is big on a cartesian set $X\times T$ of $(x,t)$. The set $X$ will be a small neighborhood of a rescaled copy of ${\mathbb Z}^n$ inside $[-1,1]^n$, while $T$ will be a discrete lattice inside $t\sim 1$. The measure of $X$ will be significantly smaller than 1, of order $R_k^{-\alpha_n}$, for some $\alpha_n>0$. The property that our construction exploits is that the set $Y=\frac{X}{T}$ can be made much larger than $X$, in fact it can be made to have measure comparable to 1. Note that the new solution $v$ will now be big for each $x\in Y$ (for some $t$ depending on $x$). This will be enough to prove Theorem \ref{main}. Let us compare our approach with other recent ones. Luc\`a and Rogers \cite{LR} use the Galilean symmetry, according to which if $u(x,t)$ solves \eqref{e2} then so does $$v(x,t)=u(x-t\theta,t)e^{it\frac{|\theta|^2}{4}}e^{i\frac{x\cdot\theta}2}$$ for arbitrary $\theta\in\R^n$. Moreover, the initial data of the two solutions will have comparable $L^2$ norms. As mentioned before, they start with the same $u$, and thus have the same $X,T$. Their observation is that, for appropriate $\theta$, the set $Y=X-\theta T$ will have measure comparable to 1. Bourgain \cite{Bo} constructs a solution $u$ which has two attributes. On the one hand, it is big on a cartesian product $X\times \{0\}$. So $T=\{0\}$. The second property of $u$ is that it is very symmetrical, almost invariant under a certain space-time translation. 
More precisely, for an appropriate $\nu\in\R^{n+1}$ \begin{equation} \label{e3} u(x,t)\approx u((x,t)+s\nu) \end{equation} will hold for all $x\in B(0,1)$ and all $t,s\sim 1$. The original small set $X$ where $u$ was large gets amplified from the fact that the $x$ projection of the set $$Y=(X\times \{0\})+[\frac1{10},10]\nu$$ has measure comparable to $1$. In both our example and the one from \cite{LR}, the Fourier transform $\widehat{u}_0$ of the initial data is essentially the characteristic function of a small neighborhood of a rescaled (and truncated) copy of ${\mathbb Z}^n$. In Bourgain's construction, the mass lives on a small portion of this set, where lattice points are restricted to a sphere. The key is that the lift of this sphere to the paraboloid $(\xi,|\xi|^2)$ is a collection of points that live in a hyperplane $H\subset \R^{n+1}$. The existence of a nonzero vector $\nu\in H^{\perp}$ is what makes the remarkable symmetry \eqref{e3} possible. In terms of the actual mathematics that is involved in proving that the enhanced set $Y$ has measure comparable to 1, the three methods described above are at least superficially different. Luc\`a and Rogers derive a quantitative version of the ergodic theorem involving the Funk-Hecke theorem. Bourgain uses a bit of Fourier analysis, but his argument also has a diophantine flavor. Our argument elaborates on a quantitative version of the multidimensional Dirichlet principle, which in its simplest form can be stated as follows. \begin{lemma} \label{8} Given $y_1,\ldots,y_n\in [0,1]$ and a real number $N\ge 1$, there is $1\le p\le N+2$ such that $$\max_{1\le i\le n}\|py_i\|\le \frac1{N^{1/n}}.$$ \end{lemma} Here and in the following, $\|x\|$ will denote the distance of $x$ to ${\mathbb Z}$. The proof of this lemma is an immediate application of pigeonholing. It is hard to conjecture what the optimal $s$ in Theorem \ref{main} should be. The authors feel that the likeliest possibility is $s=\frac{n}{2(n+1)}$. If one runs a multilinear type Bourgain--Guth argument for this problem (as was done in \cite{Bo}), the $(n+1)$-linear term has a favorable estimate consistent with this value of $s$. Another interesting question is whether the optimal $s$ is the same for a larger class of curved hyper-surfaces $(\xi,\varphi(\xi))$ generalizing the paraboloid $(\xi,|\xi|^2)$. It is worth mentioning that Bourgain exhibits a surface $$\varphi(\xi)=\langle A\xi,\xi\rangle+O(|\xi|^3)$$ with $A$ positive definite, for which a stronger result is proved: Theorem \ref{main} will hold even with $s<\frac{n-1}{2n}$ ($n\ge 3$) and $s<\frac{5}{16}$ ($n=2$). \begin{ack*} The authors thank J. Bennett, R. Luc\`a, K. Rogers and A. Vargas for a few interesting discussions on this topic. \end{ack*} \section{The main construction and the proof of Theorem \ref{main}} Via rescaling, Theorem \ref{main} will follow from the following result. \begin{theorem}\label{main1} Let $n\ge 2$ and $s<\frac{n}{2(n+2)}$. Then there exist $R_k\to\infty$ and $v_k\in L^2(\R^n)$ with $\widehat{v_k}$ supported in the annulus $|\xi|\sim 1$ such that \begin{equation} \label{e25} \lim_{k\to\infty}\frac{\|\sup_{0<t\lesssim R_k}|e^{it\Delta}v_k(x)|\|_{L^2(B(0,R_k))}}{R_k^{s}\|v_k\|_{L^2(\R^n)}}=\infty. \end{equation} \end{theorem} We will prove this at the end of the section, using some elementary number theoretical results derived in Section \ref{50}. For $0<u<v$ define the annuli $${\mathbb A}_{u,v}=\{x\in\R^n:\;u<|x|<v\}.$$ Fix $\sigma<\frac1{n+2}$.
Fix a Schwartz function $\theta$ on $\R^n$ whose Fourier transform is supported inside ${\mathbb A}_{4^{-n-3},4\sqrt{n}}$ and equals 1 on ${\mathbb A}_{4^{-n-2},2\sqrt{n}}$. The next three lemmas are used to align the phases of an exponential sum so that the absolute value of the sum is comparable to the number of exponentials in the sum. \begin{lemma} \label{4} There exists $\epsilon_1>0$ so that for each $R$ large enough, the following holds: For each $x\in {\mathbb A}_{4^{-n-2},2\sqrt{n}}$, each $t\in (0,1)$ and each $\xi'\in\R^n$ with $|\xi'|\le \epsilon_1R$ we have $$|\int e^{2\pi i[(x-\frac{2t\xi'}{R})\cdot \xi-\frac{t}{R}|\xi|^2]}\theta(\xi)d\xi-1|<\frac12.$$ \end{lemma} \begin{proof} Let $$\psi(\xi)=-2\pi [\frac{2t\xi'}{R}\cdot \xi+\frac{t}{R}|\xi|^2].$$ Use the fact that $$\int e^{2\pi ix\cdot \xi}\theta(\xi)d\xi=1.$$ Then estimate $$|\int e^{2\pi ix\cdot \xi}\theta(\xi)[e^{i\psi(\xi)}-1]d\xi|\le 2\int_{|\xi|>C}|\theta(\xi)|d\xi+2\sup_{|\xi|\le C}|\psi(\xi)|\int_{\R^n}|\theta|.$$ Choose first $C$ so that $$\int_{|\xi|>C}|\theta(\xi)|d\xi<\frac14.$$ Then note that $$\sup_{|\xi|\le C}|\psi(\xi)|\le 2\pi(\frac{C^2}{R}+2\epsilon_1C).$$ Choose $\epsilon_1$ so small that $$4\pi(\frac{C^2}{R}+2\epsilon_1C)\int_{\R^n}|\theta|<\frac14$$ for all $R$ large enough. \end{proof} The following lemma is rather trivial. \begin{lemma} \label{5} Let $\Omega$ be a finite set. Consider $a_{\xi'},b_{\xi'}\in{\mathbb C}$ for $\xi'\in\Omega$ such that $$\max|a_{\xi'}-1|\le \delta_1$$ $$\max|b_{\xi'}-1|\le \delta_2.$$ Then $$|\sum_{\xi'\in \Omega}a_{\xi'}b_{\xi'}-|\Omega||\le |\Omega|(\delta_1\max|b_{\xi'}|+\delta_2\max|a_{\xi'}|+\delta_1\delta_2).$$ \end{lemma} For $\epsilon_2>0$ small enough (depending only on $\theta$, as revealed in the proof of Proposition \ref{40}), define $$X:=(R^{\sigma-1}{\mathbb Z}^n+B(0,\frac{\epsilon_2}R))\cap {\mathbb A}_{4^{-n-2},2\sqrt{n}},$$ $$T:=(R^{2\sigma-1}{\mathbb Z})\cap (4^{-n-1},1),$$ and $$\Omega:=(R^{1-\sigma}{\mathbb Z}^n)\cap B(0,\epsilon_1R).$$ Define also the Fourier transform of the initial data $$\widehat{u_0}(\xi)=\sum_{\xi'\in\Omega}\theta(\xi-\xi').$$ Note that \begin{equation} \label{24} \|u_0\|_2\sim R^{\frac{\sigma n}{2}} \end{equation} and \begin{equation} \label{41} {\operatorname{supp}}\; u_0\subset {\mathbb A}_{4^{-n-3},4\sqrt{n}}. \end{equation} The following is essentially proved in (3.2) from \cite{LR}. \begin{proposition} \label{40} We have the following lower bound for each $(x,t)\in X\times T$ \begin{equation} \label{20} |e^{i\frac{t}{2\pi R}\Delta}u_0(x)|\gtrsim R^{\sigma n}. \end{equation} \end{proposition} \begin{proof} Note first that $$e^{i\frac{t}{2\pi R}\Delta}u_0(x)=\sum_{\xi'\in\Omega}e^{2\pi i[x\cdot \xi'-\frac{t}{R}|\xi'|^2]}\int e^{2\pi i[(x-\frac{2t\xi'}{R})\cdot \xi-\frac{t}{R}|\xi|^2]}\theta(\xi)d\xi.$$ One easily checks that for $(x,t)\in X\times T$ and $\xi'\in\Omega$ we have $$x\cdot \xi'-\frac{t}{R}|\xi'|^2\in {\mathbb Z}+B(0,\epsilon_3),$$ where $\epsilon_3$ can be chosen as small as desired by choosing $\epsilon_2$ small enough. In particular, we can make sure that $$|e^{2\pi i[x\cdot \xi'-\frac{t}{R}|\xi'|^2]}-1|<\min\{\frac1{100}, \frac{1}{100}\int|\theta|\}.$$ It suffices now to combine this with Lemma \ref{4} and Lemma \ref{5}, once we also note that $$|\Omega|\sim R^{\sigma n}.$$ \end{proof} Recall that $u_0$ depends on $R$, so we might as well write $u_0=u_{0,R}$.
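For the reader's convenience, here is one way to verify the claim $x\cdot \xi'-\frac{t}{R}|\xi'|^2\in {\mathbb Z}+B(0,\epsilon_3)$ made in the proof of Proposition \ref{40}; the particular value $\epsilon_3=\epsilon_1\epsilon_2$ obtained below is just one admissible choice. Write $x=R^{\sigma-1}k+e$ with $k\in{\mathbb Z}^n$ and $|e|\le \frac{\epsilon_2}{R}$, $t=R^{2\sigma-1}m$ with $m\in{\mathbb Z}$, and $\xi'=R^{1-\sigma}j$ with $j\in{\mathbb Z}^n$ and $|\xi'|\le\epsilon_1 R$. Then $$x\cdot \xi'-\frac{t}{R}|\xi'|^2=\big(k\cdot j-m|j|^2\big)+e\cdot\xi',\qquad k\cdot j-m|j|^2\in{\mathbb Z},\qquad |e\cdot\xi'|\le \frac{\epsilon_2}{R}\,\epsilon_1 R=\epsilon_1\epsilon_2,$$ so the claim holds with $\epsilon_3=\epsilon_1\epsilon_2$, which can indeed be made as small as desired by choosing $\epsilon_2$ small, since $\epsilon_1$ is fixed by Lemma \ref{4}.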
Let now $u_{R}(x,t)=e^{it\Delta}u_{0,R}(x)$ and let $$v_R(x,t)=\frac1{t^{n/2}}\bar{u}_R(\frac{x}{t},\frac1{t})e^{i\frac{|x|^2}{4t}}$$ be its pseudoconformal transformation. The proposition in the Appendix shows that $v_R$ solves the Schr\"odinger equation with some initial data that we call $v_{0,R}$. We record the properties of $v_R$ in the following proposition. \begin{proposition} \label{11} We have for each large enough $R$ such that $R^{\sigma}$ is an integer \begin{equation} |\{x\in B(0,R):\sup_{0<t\lesssim R}|v_R(x,t)|\gtrsim R^{\sigma n-\frac{n}2}\}|\gtrsim R^n \end{equation} \begin{equation} \|v_{0,R}\|_2\sim R^{\frac{\sigma n}2} \end{equation} \begin{equation} \frac{\|\sup_{0<t\lesssim R}|v_R(x,t)|\|_{L^2(B_R)}}{\|v_{0,R}\|_2}\gtrsim R^{\frac{\sigma n}2} \end{equation} \begin{equation} {\operatorname{supp}} \;\widehat{v_{0,R}}\subset 4\pi ({\operatorname{supp}} \;u_{0,R})\subset {\mathbb A}_{4^{-n-2}\pi, 16\pi\sqrt{n}}. \end{equation} \end{proposition} \begin{proof} The first property follows from \eqref{20} and \eqref{21}. The second one follows from \eqref{24} and \eqref{25}. The third one is a consequence of the first two. The fourth one also follows from \eqref{25} and \eqref{41}. \end{proof} Let now $s<\frac{n}{2(n+2)}$. The proof of Theorem \ref{main1} for this $s$ will now immediately follow by choosing $\sigma<\frac1{n+2}$ such that $\frac{\sigma n}{2}>s$, and by using $v_k=v_{0, R_k}$ from Proposition \ref{11}, with $R_k^\sigma$ an integer that grows to infinity with $k$. \section{Number theoretical considerations} \label{50} \begin{lemma} \label{7} For each large enough real number $N$ there is $S=S_N\subset [0,1]^n$ with $|S|\ge \frac34$ such that for each $(y_1,\ldots,y_n)\in S$ there exists $p\in [4^{-n-1}N,N+2]$ satisfying \begin{equation} \label{9} \max_{1\le i\le n}\|py_i\|\le \frac1{N^{\frac1n}}. \end{equation} \end{lemma} \begin{proof} Using Lemma \ref{8}, we know that \eqref{9} holds for each $(y_1,\ldots,y_n)\in [0,1]^n$, if we allow $p\in [1,N+2]$. We need an upper bound for the measure of the set of those $(y_1,\ldots,y_n)$ corresponding to $1\le p\le 4^{-n-1}N$. For each $p$ define $$A_p=\{(y_1,\ldots,y_n)\in [0,1]^n:\;\max_{1\le i\le n}\|py_i\|\le \frac1{N^{\frac1n}}\}=$$ $$=\bigcup_{0\le k_i\le p}\{(y_1,\ldots,y_n)\in [0,1]^n:\;\max_{1\le i\le n}|py_i-k_i|\le \frac1{N^{\frac1n}}\}.$$ The crude estimate $$|A_p|\le (p+1)^n(\frac{2}{N^{\frac1n}p})^n\le\frac{4^n}{N}$$ leads to $$|\bigcup_{1\le p\le 4^{-n-1}N}A_p|\le \frac14.$$ \end{proof} \begin{proposition} Assume $R^{\sigma}$ is a large enough integer. Let $$\frac{XR}{T}=\{\frac{xR}{t}:x\in X, t\in T\}.$$ Then \begin{equation} \label{21} |\frac{XR}{T}\cap B(0,R)|\gtrsim R^n.
\end{equation} \end{proposition} \begin{proof} It suffices to prove that $$|\frac{X}{T}\cap [0,1]^n|\ge \frac12.$$ This can be written as $|U|\ge \frac12$ where $$U=\{x\in[0,1]^n:\max_{1\le i\le n}|x_i-R^{-\sigma}\frac{k_i}{p}|\le \frac{\epsilon_2}R,\text{ with }$$$$4^{-n-2}R^{1-\sigma}\le |k|\le 2\sqrt{n}R^{1-\sigma},\;4^{-n-1}R^{1-2\sigma}\le p\le R^{1-2\sigma} \}.$$ This will follow if we prove that the larger set (we have dropped the restriction on $k_i$) $$V=\{x\in[0,1]^n:\max_{1\le i\le n}|x_i-R^{-\sigma}\frac{k_i}{p}|\le \frac{\epsilon_2}R,\text{ with }4^{-n-1}R^{1-2\sigma}\le p\le R^{1-2\sigma} \}$$ satisfies $|V|\ge \frac34.$ Indeed, note first that the restriction $$|k|\le 2\sqrt{n}R^{1-\sigma}$$ is redundant, as the inequality $\max|k_i|\le R^{1-\sigma}+1\le 2R^{1-\sigma}$ is forced by the combination of $x\in[0,1]^n$, $\max_{1\le i\le n}|x_i-R^{-\sigma}\frac{k_i}{p}|\le \frac{\epsilon_2}R$ and $p\le R^{1-2\sigma}$. Second, note that $$W=\{x\in[0,1]^n:\max_{1\le i\le n}|x_i-R^{-\sigma}\frac{k_i}{p}|\le \frac{\epsilon_2}R,\text{ with }$$$$|k|\le 4^{-n-2}R^{1-\sigma},\;4^{-n-1}R^{1-2\sigma}\le p\le R^{1-2\sigma} \}$$ satisfies $W\subset[0,\frac12]^n$ and thus $|W|<\frac14$. We next focus on showing that $|V|\ge \frac34$. The inequality $$\max_{1\le i\le n}|x_i-R^{-\sigma}\frac{k_i}{p}|\le \frac{\epsilon_2}R$$ can be written as $$\max_{1\le i\le n}\|p\{R^{\sigma}x_i\}\|\le \frac{\epsilon_2pR^{\sigma}}R,$$ where $\{z\}$ is the fractional part of $z$. Note that given the lower bound $p\ge 4^{-n-1}R^{1-2\sigma}$ and that $\sigma<\frac1{n+2}$, for $R$ large enough we have $$\frac{\epsilon_2pR^{\sigma}}R\ge \frac1{(R^{1-2\sigma}-2)^{\frac1n}}.$$ Let $N:=R^{1-2\sigma}-2$. Then $V_0\subset V$ where $$V_0=\{x\in[0,1]^n:\max_{1\le i\le n}\|p\{R^{\sigma}x_i\}\|\le \frac1{N^{\frac1n}},\text{ for some }4^{-n-1}N\le p\le N+2 \}.$$ Since $R^{\sigma}$ is an integer, the map $$(x_1,\ldots,x_n)\mapsto (\{R^{\sigma}x_1\},\ldots, \{R^{\sigma}x_n\})$$ is a measure preserving transformation on $[0,1]^n$. The fact that $|V_0|\ge \frac34$ now follows from Lemma \ref{7}. \end{proof} \section{Appendix} We record below the following classical result. See for example \cite{Ta}, page 72. \begin{proposition} If $u(x,t)$ solves \eqref{e2} then so does its pseudoconformal transformation $$v(x,t)=\frac1{t^{n/2}}\bar{u}(\frac{x}{t},\frac1{t})e^{i\frac{|x|^2}{4t}}.$$ Moreover, the initial data $u_0(x)=u(x,0)$, $v_0(x)=v(x,0)$ are related by the formula \begin{equation} \label{25} \widehat{v_0}(y)=Cu_0(4\pi y). \end{equation} In particular, $$\|u_0\|_2\sim\|v_0\|_2.$$ \end{proposition} \end{document}
\begin{document} \title{Existence of weak solutions to stochastic heat equations driven by truncated $lpha$-stable white noises with non-Lipschitz coefficients} \begin{abstract} We consider a class of stochastic heat equations driven by truncated $\alpha$-stable white noises for $\alpha\in(1,2)$ with noise coefficients that are continuous but not necessarily Lipschitz and satisfy globally linear growth conditions. We prove the existence of weak solution, taking values in two different forms under different conditions, to such an equation using a weak convergence argument on solutions to the approximating stochastic heat equations. More precisely, for $\alpha\in(1,2)$ there exists a measure-valued weak solution. However, for $\alpha\in(1,5/3)$ there exists a function-valued weak solution, and in this case we further show that for $p\in(\alpha,5/3)$ the uniform $p$-th moment in $L^p$-norm of the weak solution is finite, and that the weak solution is uniformly stochastic continuous in $L^p$ sense. \noindent\textbf{Keywords:} Non-Lipschitz noise coefficients; Stochastic heat equations; Truncated $\alpha$-stable white noises; Uniform $p$-th moment; Uniform stochastic continuity. \noindent{{\bf MSC Classification (2020):} Primary: 60H15; Secondary: 60F05, 60G17} \end{abstract} \section{Introduction} \label{sec1} In this paper we study the existence of weak solution to the following non-linear stochastic heat equation \begin{equation} \label{eq:originalequation1} \left\{\begin{array}{lcl} \dfrac{\partial u(t,x)}{\partial t}=\dfrac{1}{2} \dfrac{\partial^2u(t,x)}{\partial x^2}+ \varphi(u(t-,x))\dot{L}_{\alpha}(t,x), && (t,x)\in (0,\infty) \times(0,L),\\[0.3cm] u(0,x)=u_0(x),&&x\in[0,L],\\[0.3cm] u(t,0)=u(t,L)=0,&& t\in[0,\infty), \end{array}\right. \end{equation} where $L$ is an arbitrary positive constants, $\dot{L}_{\alpha}$ denotes a truncated $\alpha$-stable white noise on $[0,\infty)\times[0,L]$ with $\alpha\in(1,2)$, the noise coefficient $\varphi:\mathbb{R}\rightarrow \mathbb{R}$ satisfies the hypotheses given below, and the initial function $u_0$ is random and measurable. Before studying the equation of particular form (\ref{eq:originalequation1}), we first consider a general stochastic heat equation \begin{equation} \label{GeneralSPDE} \dfrac{\partial u(t,x)}{\partial t}=\dfrac{1}{2}\dfrac{\partial^2u(t,x)}{\partial x^2}+G(u(t,x))+H(u(t,x))\dot{F}(t,x), \quad t\geq0, x\in\mathbb{R}, \end{equation} in which $G:\mathbb{R}\rightarrow \mathbb{R}$ is Lipschitz continuous, $H:\mathbb{R}\rightarrow \mathbb{R}$ is continuous and $\dot{F}$ is a space-time white noise. When $\dot{F}$ is a Gaussian white noise, there is a growing literature on stochastic partial differential equations (SPDEs for short) related to (\ref{GeneralSPDE}) such as the stochastic Burgers equations (see, e.g., Bertini and Cancrini \cite{Bertini:1994}, Da Prato et al. \cite{Daprato:1994}), SPDEs with reflection (see, e.g., Zhang \cite{Zhang:2016}), Parabolic Anderson Model (see, e.g., {G\"{a}rtner and Molchanov \cite{Gartner:1990}), etc. In particular, such a SPDE arises from super-processes (see, e.g., Konno and Shiga \cite{Konno:1988}, Dawson \cite{Dawson:1993} and Perkins \cite{P1991} and references therein). For $G\equiv 0$ and $H(u)=\sqrt{u}$, the solution to (\ref{GeneralSPDE}) is the density field of a one-dimensional super-Brownian motion. 
For $H(u)=\sqrt{u(1-u)}$ (stepping-stone model in population genetics), Bo and Wang \cite{Bo:2011} considered a stochastic interacting model consisting of equations of the form (\ref{GeneralSPDE}) and proved the existence of a weak solution to the system by using a weak convergence argument. In the case that $\dot{F}$ is a Gaussian colored noise that is white in time and colored in space, for continuous function $H$ satisfying the linear growth condition, Sturm \cite{Sturm:2003} proved the existence of a pair $(u,F)$ satisfying (\ref{GeneralSPDE}), the so-called weak solution, by first establishing the existence and uniqueness of lattice systems of SDEs driven by correlated Brownian motions with non-Lipschitz diffusion coefficients that describe branching particle systems in random environment, in which the motion process has a discrete Laplacian generator and the branching mechanism is affected by a colored Gaussian random field, and then applying an approximation procedure. Xiong and Yang \cite{Xiong:2023} proved the existence of a weak solution $(u,F)$ to (\ref{GeneralSPDE}) in a finite spatial domain with different boundary conditions by considering the weak limit of a sequence of approximating SPDEs of (\ref{GeneralSPDE}). They further proved the existence and uniqueness of the strong solution under an additional H\"{o}lder continuity assumption on $H$. If $\dot{F}$ is a L\'{e}vy space-time white noise with Lipschitz continuous coefficient $H$, Albeverio et al. \cite{Albeverio:1998} first proved the existence and uniqueness of the solution when $\dot{F}$ is a Poisson white noise. Applebaum and Wu \cite{Applebaum:2000} extended the results to a general L\'{e}vy space-time white noise. For a stochastic fractional Burgers type non-linear equation that is similar to equation (\ref{GeneralSPDE}) and driven by L\'{e}vy space-time white noise on multidimensional space variables, we refer to Wu and Xie \cite{Wu:2012} and references therein. In particular, when $\dot{F}$ is an $\alpha$-stable white noise for $\alpha\in (0,1)\cup(1,2)$, Balan \cite{Balan:2014} studied SPDE (\ref{GeneralSPDE}) with $G\equiv 0$ and Lipschitz coefficient $H$ on a bounded domain in $\mathbb{R}^d$ with zero initial condition and Dirichlet boundary, and proved the existence of a random field solution $u$ for the given noise $\dot{F}$ (the so-called strong solution). The approach in \cite{Balan:2014} is to first solve the equation with truncated noise (obtained by removing from $\dot{F}$ the big jumps, i.e., the jumps whose size exceeds a fixed value $K$), yielding a solution $u_K$, and then show that for $N\geq K$ the solutions satisfy $u_N=u_K$ on the event $\{t\leq\tau_K\}$, where $\{\tau_K\}_{K\geq1}$ is a sequence of stopping times which tends to infinity as $K$ tends to infinity. Such a localization method is also applied in Peszat and Zabczyk \cite{Peszat:2006} to show the existence of a weak Hilbert-space-valued solution. For $\alpha\in(1,2)$, Wang et al. \cite{Wang:2023} studied the existence and pathwise uniqueness of the strong function-valued solution of (\ref{GeneralSPDE}) with Lipschitz coefficient $H$ using a localization method, and showed a comparison principle for solutions to such an equation with different initial functions and drift coefficients. Yang and Zhou \cite{Yang:2017} found sufficient conditions on pathwise uniqueness of solutions to a class of SPDEs (\ref{GeneralSPDE}) driven by $\alpha$-stable white noise without negative jumps and with non-decreasing H\"older continuous noise coefficient $H$.
But the existence of a weak solution to (\ref{GeneralSPDE}) with a general non-decreasing H\"older continuous noise coefficient is left open. For stochastic heat equations driven by general heavy-tailed noises with Lipschitz noise coefficients, we refer to Chong \cite{Chong:2017} and references therein. When $G=0$, $H(u)=u^{\beta}$ with $0<\beta<1$ (non-Lipschitz continuous) in (\ref{GeneralSPDE}) and $\dot{F}$ is an $\alpha$-stable ($\alpha\in(1,2)$) white noise on $[0,\infty)\times\mathbb{R}$ without negative jumps, it is shown in Mytnik \cite{Mytnik:2002} that for $0<\alpha\beta<3$ there exists a weak solution $(u,F)$ satisfying (\ref{GeneralSPDE}) by constructing a sequence of approximating processes that is tight with its limit solving the associated martingale problem, and that in the case of $\alpha\beta=1$ the weak uniqueness of the solution to (\ref{GeneralSPDE}) holds. The pathwise uniqueness is shown in \cite{Yang:2017} for $\alpha\beta=1$ and $1<\alpha<\sqrt{5}-1$. For $\alpha$-stable colored noise $\dot{F}$ without negative jumps and with H\"{o}lder continuous coefficient $H$, Xiong and Yang \cite{Xiong:2019} proved the existence of a weak solution $(u,F)$ to (\ref{GeneralSPDE}) by showing the weak convergence of solutions to SDE systems on a rescaled lattice with discrete Laplacian and driven by a common stable random measure, which is similar to \cite{Sturm:2003}. In both \cite{Sturm:2003} and \cite{Xiong:2019} the dependence structure of the colored noise helps with establishing the existence of a weak solution. Inspired by work in the above mentioned literature, we are interested in the stochastic heat equation (\ref{eq:originalequation1}) in which the noise coefficient $\varphi$ satisfies the following more general hypothesis: \begin{hypothesis} \label{Hypo} $\varphi:\mathbb{R}\rightarrow \mathbb{R}$ is continuous and of globally linear growth, and there exists a sequence of Lipschitz continuous functions $\varphi^n:\mathbb{R}\rightarrow \mathbb{R}$ such that \begin{itemize} \item[(i)] $\varphi^n$ converges uniformly to $\varphi$ as $n\rightarrow\infty$; \item[(ii)] for each $n\geq1$, there exists a constant $C_n$ such that $$\vert\varphi^n(x)-\varphi^n(y)\vert \leq C_n\vert x-y\vert,\,\,\forall x,y\in\mathbb{R}.$$ \end{itemize} \end{hypothesis} The main contribution of this paper is to prove the existence and regularity of weak solutions to equation (\ref{eq:originalequation1}) under Hypothesis \ref{Hypo}. To this end, we consider two types of weak solutions that are measure-valued and function-valued, respectively. In addition, we also study the uniform $p$-th moment and uniform stochastic continuity of the weak solution to equation (\ref{eq:originalequation1}). In the case that $\varphi$ is Lipschitz continuous, the existence of the solution can usually be obtained by standard Picard iteration (see, e.g., Dalang et al. \cite{Dalang:2009}, Walsh \cite{Walsh:1986}) or the Banach fixed point principle (see, e.g., Truman and Wu \cite{Truman:2003}, Bo and Wang \cite{Bo:2006}). We thus mainly consider the case that $\varphi$ is non-Lipschitz continuous. Since the classical approaches of Picard iteration and the Banach fixed point principle fail for SPDE (\ref{eq:originalequation1}) with non-Lipschitz $\varphi$, to prove the existence of a weak solution $(u,L_{\alpha})$ to (\ref{eq:originalequation1}), we first construct an approximating SPDE sequence with Lipschitz continuous noise coefficients $\varphi^n$, and prove the existence and uniqueness of strong solutions to the approximating SPDEs.
We then proceed to show that the sequence of solution is tight in appropriate spaces. Finally, we prove that there exists a weak solution of (\ref{eq:originalequation1}) by using a weak convergence procedure. The rest of this paper is organized as follows. In the next section, we introduce some notation and the main theorems on the existence, uniform $p$-moment and uniform stochastic continuity of weak solution to (\ref{eq:originalequation1}). Section \ref{sec3} is devoted to the proof of the existence of measure-valued weak solution to (\ref{eq:originalequation1}). In Section \ref{sec4}, for $\alpha\in(1,5/3)$ we prove that there exists a weak solution to (\ref{eq:originalequation1}) as an $L^p$-valued process with $p\in(\alpha,5/3)$, and that the weak solution has the finite uniform $p$-th moment and the uniform stochastic continuity in the $L^p$ norm with $p\in(\alpha,5/3)$. \section{Notation and main results} \label{sec2} \subsection{Notation} \label{sec2.1} Let $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\geq0}, \mathbb{P})$ be a complete probability space with filtration $(\mathcal{F}_t)_{t\geq0}$ satisfying the usual conditions, and let $ N(dt,dx,dz): [0,\infty)\times[0,L]\times \mathbb{R}\setminus\{0\}\rightarrow \mathbb{N} \cup\{0\}\cup\{\infty\} $ be a Poisson random measure on $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\geq0}, \mathbb{P})$ with intensity measure $dtdx\nu_{\alpha}(dz)$, where $dtdx$ denotes the Lebesgue measure on $[0,\infty)\times[0,L]$ and the jump size measure $\nu_{\alpha}(dz)$ for $ \alpha\in(1,2) $ is given by \begin{align} \label{eq:smalljumpsizemeasure} \nu_{\alpha}(dz):=(c_{+}z^{-\alpha-1}1_{(0,K]}(z)+c_{-}(-z)^{-\alpha-1}1_{[-K,0)}(z) )dz, \end{align} where $c_{+}+c_{-}=1$ and $K>0$ is an arbitrary constant. Define \begin{align*} \tilde{N}(dt,dx,dz):= N(dt,dx,dz)-dtdx\nu_{\alpha}(dz). \end{align*} Then $\tilde{N}(dt,dx,dz)$ is the compensated Poisson random measure (martingale measure) on $[0,\infty)\times[0,L]\times \mathbb{R}\setminus\{0\}$. As in Balan \cite[Section 5]{Balan:2014}, define a martingale measure \begin{align} \label{def:stablenoise} L_{\alpha}(dt,dx):= \int_{\mathbb{R}\setminus\{0\}}z\tilde{N}(dt,dx,dz) \end{align} for $(t, x)\in [0,\infty)\times[0,L]$. Then the corresponding distribution-valued derivative $\{\dot{L}_{\alpha}(t,x):t\in[0,\infty),x\in[0,L]\}$ is a truncated $\alpha$-stable white noise. Write $\mathcal{G}^{\alpha}$ for the class of almost surely $\alpha$-integrable random functions defined by \begin{align*} \mathcal{G}^{\alpha}:=\left\{f\in\mathbb{B}: \int_0^t\int_0^L\vert f(s,x)\vert ^{\alpha}dxds<\infty, \mathbb{P}\text{-a.s.}\,\,\text{for all} \,\,t\in[0,\infty)\right\}, \end{align*} where $\mathbb{B}$ is the space of progressively measurable functions on $[0,\infty)\times[0,L]\times\Omega$. Then it holds by Mytnik \cite[Section 5]{Mytnik:2002} that the stochastic integral with respect to $\{L_{\alpha}(dx,ds)\}$ is well defined for all $f\in \mathcal{G}^{\alpha}$. Throughout this paper, $C$ denotes the arbitrary positive constant whose value might vary from line to line. If $C$ depends on some parameters such as $p,T$, we denote it by $C_{p,T}$. Let $G_t(x,y)$ be the fundamental solution of heat equation $\frac{\partial u}{\partial t} =\frac{1}{2}\frac{\partial^2 u}{\partial x^2}$ on the domain $[0,\infty)\times[0,L]\times[0,L]$ with Dirichlet boundary conditions (the subscript $t$ is not a derivative but a variable). 
Its explicit formula (see, e.g., Feller \cite[Page 341]{Feller:1971}) is given by \begin{equation*} G_t(x,y)=\dfrac{1}{\sqrt{2\pi t}}\sum_{k=-\infty}^{+\infty}\left\{ \exp\left(-\dfrac{(y-x+2kL)^2}{2t}\right) -\exp\left(-\dfrac{(y+x+2kL)^2}{2t} \right)\right\} \end{equation*} for $t\in(0,\infty),x,y\in[0,L]$; and $\lim_{t\downarrow0}G_t(x,y)=\delta_y(x)$, where $\delta$ is the Dirac delta distribution. Moreover, it holds by Xiong and Yang \cite[Lemmas 2.1-2.3]{Xiong:2023} that for $s,t\in[0,\infty)$ and $x,y,z\in[0,L]$ \begin{equation} \label{eq:Greenetimation0} G_t(x,y)=G_t(y,x),\,\, \int_0^L|G_t(x,y)|dy+\int_0^L|G_t(x,y)|dx\leq C, \end{equation} \begin{equation} \label{eq:Greenetimation1} \int_0^LG_s(x,y)G_t(y,z)dy=G_{t+s}(x,z), \end{equation} \begin{equation} \label{eq:Greenetimation2} \int_0^L\vert G_t(x,y)\vert ^pdy\leq Ct^{-\frac{p-1}{2}},\,\, p\geq1. \end{equation} Given a topological space $V$, let $D([0,\infty),V)$ be the space of c\`{a}dl\`{a}g paths from $[0,\infty)$ to $V$ equipped with the Skorokhod topology. For given $p\geq1$ we denote by $v_t\equiv\{v(t,\cdot),t\in[0,\infty)\}$ the $L^p([0,L])$-valued process equipped with norm \begin{equation*} \vert\vert v_t\vert\vert_{p}=\left(\int_0^L\vert v(t,x)\vert^pdx\right)^{\frac{1}{p}}. \end{equation*} For any $p\geq1$ and $T>0$ let $L_{loc}^p([0,\infty)\times[0,L])$ be the space of measurable functions $f$ on $[0,\infty)\times[0,L])$ such that \begin{equation*} \vert \vert f\vert \vert _{p,T}=\left(\int_0^T\int_0^L \vert f(t,x)\vert ^pdxdt\right)^{\frac{1}{p}}<\infty,\,\, \forall\,\, 0<T<\infty. \end{equation*} Let $B([0,L])$ be the space of all Borel functions on $[0,L]$, and let $\mathbb{M}([0,L])$ be the space of finite Borel measures on $[0,L]$ equipped with the weak convergence topology. For any $f\in B([0,L])$ and $\mu\in\mathbb{M}([0,L])$ define $ \langle f,\mu\rangle:=\int_0^L f(x)\mu(dx) $ whenever it exists. With a slight abuse of notation, for any $f,g\in B([0,L])$ we also denote by $ \langle f,g\rangle=\int_0^L f(x)g(x)dx. $ \subsection{Main results} \label{sec2.2} By a solution to equation (\ref{eq:originalequation1}) we mean a process $u_t\equiv\{u(t,\cdot),t\in[0,\infty)\}$ satisfying the following weak (variational) form equation: \begin{align} \label{eq:variationform} \langle u_t,\psi\rangle &=\langle u_0,\psi\rangle+\dfrac{1}{2}\int_0^t \langle u_s,\psi{''}\rangle ds +\int_0^{t+}\int_0^L\int_{\mathbb{R}\setminus\{0\}} \varphi(u(s-,x))\psi(x)z\tilde{N}(ds,dx,dz) \end{align} for all $t\in[0,\infty)$ and for any $\psi\in C^{2}([0,L])$ with $\psi(0)=\psi(L)=\psi^{'}(0)=\psi^{'}(L)=0$ or equivalently satisfying the following mild form equation: \begin{align} \label{eq:mildform} u(t,x)= \int_0^LG_t(x,y)u_0(y)dy +\int_0^{t+}\int_0^L \int_{\mathbb{R}\setminus\{0\}}G_{t-s}(x,y) \varphi(u(s-,y))z\tilde{N}(ds,dy,dz) \end{align} for all $t\in [0, \infty)$ and for a.e. $x\in [0,L]$, where the last terms in above equations follow from (\ref{def:stablenoise}). For the equivalence between the weak form (\ref{eq:variationform}) and mild form (\ref{eq:mildform}), we refer to Walsh \cite{Walsh:1986} and references therein. We first give the definition (see also in Mytnik \cite{Mytnik:2002}) of a weak solution to stochastic heat equation (\ref{eq:originalequation1}). 
\begin{definition} Stochastic heat equation (\ref{eq:originalequation1}) has a weak solution with initial function $u_0$ if there exists a pair $(u,L_{\alpha})$ defined on some filtered probability space such that ${L}_{\alpha}$ is a truncated $\alpha$-stable martingale measure on $[0,\infty)\times[0,L]$ and $(u,L_{\alpha})$ satisfies either equation (\ref{eq:variationform}) or equation (\ref{eq:mildform}). \end{definition} We now state the main theorems in this paper. The first theorem is on the existence of weak solution in $D([0,\infty),\mathbb{M}([0,L]))\cap L_{loc}^p([0,\infty)\times [0,L])$ with $p\in(\alpha,2]$ that was first considered in Mytnik \cite{Mytnik:2002}. \begin{theorem} \label{th:mainresult} If the initial function $u_0$ satisfies $\mathbb{E}[\vert\vert u_0\vert\vert_p^p]<\infty$ for some $p\in(\alpha,2]$, then under {\rm Hypothesis \ref{Hypo}} there exists a weak solution $(\hat{u}, {\hat{L}}_{\alpha})$ to equation (\ref{eq:originalequation1}) defined on a filtered probability space $(\hat{\Omega}, \hat{\mathcal{F}}, \{\hat{\mathcal{F}}_t\}_{t\geq0}, \hat{\mathbb{P}})$ such that \begin{itemize} \item[\rm (i)] $\hat{u}\in D([0,\infty),\mathbb{M}([0,L]))\cap L_{loc}^p([0,\infty)\times [0,L])$; \item[\rm (ii)] ${\hat{L}}_{\alpha}$ is a truncated $\alpha$-stable martingale measure with the same distribution as ${L}_{\alpha}$. \end{itemize} Moreover, for any $T>0$ we have \begin{equation} \label{eq:momentresult} \hat{\mathbb{E}}\left[\vert\vert\hat{u}\vert\vert_{p,T}^p\right]= \hat{\mathbb{E}}\left[\int_0^T\vert\vert\hat{u}_t\vert\vert_p^pdt\right]<\infty. \end{equation} \end{theorem} The proof of Theorem \ref{th:mainresult} is deferred to Section \ref{sec3}. Under additional assumption on $\alpha$, we can show that there exists a weak solution in $D([0,\infty),L^p([0,L]))$, $p\in(\alpha,5/3)$ with better regularity. \begin{theorem} \label{th:mainresult2} Suppose that $\alpha\in (1,5/3)$. If the initial function $u_0$ satisfies $\mathbb{E}[||u_0||_p^p]<\infty$ for some $p\in(\alpha,5/3)$, then under {\rm Hypothesis \ref{Hypo}} there exists a weak solution $(\hat{u}, {\hat{L}}_{\alpha})$ to equation (\ref{eq:originalequation1}) defined on a filtered probability space $(\hat{\Omega}, \hat{\mathcal{F}}, \{\hat{\mathcal{F}}_t\}_{t\geq0}, \hat{\mathbb{P}})$ such that \begin{itemize} \item[\rm (i)] $\hat{u}\in D([0,\infty),L^p([0,L]))$; \item[\rm (ii)] ${\hat{L}}_{\alpha}$ is a truncated $\alpha$-stable martingale measure with the same distribution as ${L}_{\alpha}$. \end{itemize} Furthermore, for any $T>0$ we have the following uniform $p$-moment and uniform stochastic continuity, that is, \begin{equation} \label{eq:momentresult2} \hat{\mathbb{E}}\left[\sup_{0\leq t\leq T}\vert\vert\hat{u}_t\vert\vert_p^p\right]<\infty, \end{equation} and that for each $0\leq h\leq\delta$ \begin{equation} \label{eq:timeregular} \lim_{\delta\rightarrow0}\hat{\mathbb{E}}\left[\sup_{0\leq t\leq T}\sup_{0\leq h\leq \delta}\vert\vert \hat{u}_{t+h}-\hat{u}_t\vert\vert_p^p\right]=0. \end{equation} \end{theorem} The proof of Theorem \ref{th:mainresult2} is deferred to Section \ref{sec4}. We present a specific stochastic heat equation to illustrate our results in the following example. 
\begin{example} Given $0<\beta<1$, consider the equation (\ref{eq:originalequation1}) with $\varphi(u)=\vert u\vert^{\beta}$ for $u\in\mathbb{R}$, that is, \begin{equation*} \label{eq:example} \left\{\begin{array}{lcl} \dfrac{\partial u(t,x)}{\partial t}=\dfrac{1}{2} \dfrac{\partial^2u(t,x)}{\partial x^2}+ \vert u(t-,x)\vert^{\beta}\dot{L}_{\alpha}(t,x), && (t,x)\in (0,\infty) \times(0,L),\\[0.3cm] u(0,x)=u_0(x),&&x\in[0,L],\\[0.3cm] u(t,0)=u(t,L)=0,&& t\in[0,\infty). \end{array}\right. \end{equation*} It is clear that $\vert u \vert^{\beta}$ is a non-Lipschitz continuous function with globally linear growth. We can construct a sequence of Lipschitz continuous functions $(\varphi^n)_{n\geq1}$ of the form \begin{equation*} \varphi^n(u)=(\vert u \vert\vee\varepsilon_n)^{\beta}, u\in\mathbb{R}, \end{equation*} where $\varepsilon_n\downarrow0$ as $n\uparrow\infty$, such that $\varphi^n$ satisfies {\rm Hypothesis \ref{Hypo}}. One can then apply {\rm Theorems \ref{th:mainresult} and \ref{th:mainresult2}} to establish the existence of weak solutions to the above stochastic heat equation. \end{example} Finally, we provide some discussions on our main results in the following remarks. \begin{remark} Note that the globally linear growth of $\varphi$ in {\rm Hypothesis \ref{Hypo}} guarantees the global existence of weak solutions. One can remove this condition if one only needs the existence of a weak solution up to the explosion time. On the other hand, the uniqueness of the solution to equation (\ref{eq:originalequation1}) is still an open problem because $\varphi$ is non-Lipschitz continuous. \end{remark} \begin{remark} The weak solutions of equation (\ref{eq:originalequation1}) in {\rm Theorems \ref{th:mainresult}} and {\rm\ref{th:mainresult2}} are proved by showing the tightness of the approximating solution sequence $(u^n)_{n\geq1}$ of equation (\ref{eq:approximatingsolution}); see {\rm Propositions \ref{th:tightnessresult}} and {\rm\ref{prop:tightnessresult2}} in {\rm Sections \ref{sec3}} and {\rm \ref{sec4}}, respectively. To show that the equation (\ref{eq:originalequation1}) has a function-valued weak solution, it is necessary to restrict $\alpha\in(1,5/3)$ due to a technical reason that Doob's maximal inequality can not be directly applied to show the uniform $p$-moment estimate of $(u^n)_{n\geq1}$ that is key to the proof of the tightness for $(u^n)_{n\geq1}$. To this end, we apply the factorization method in {\rm Lemma \ref{lem:uniformbound}} for transforming the stochastic integral such that the uniform $p$-moment of $(u^n)_{n\geq1}$ can be obtained. In order to remove this restriction and consider the case of $\alpha\in(1,2)$, we apply another tightness criteria, i.e., {\rm Lemma \ref{lem:tightcriterion0}}, to show the tightness of $(u^n)_{n\geq1}$. However, the weak solution of equation (\ref{eq:originalequation1}) is a measure-valued process in this situation. We also note that the existence of function-valued weak solution of equation (\ref{eq:originalequation1}) in the case of $\alpha\in[5/3,2)$ is still an unsolved problem. \end{remark} \begin{remark} If we remove the restriction of the bounded jumps for the $\alpha$-stable white noise $\dot{L}_{\alpha}$ in equation (\ref{eq:originalequation1}), the jump size measure $\nu_{\alpha}(dz)$ in (\ref{eq:smalljumpsizemeasure}) becomes \begin{align*} \nu_{\alpha}(dz)=(c_{+}z^{-\alpha-1}1_{(0,\infty)}(z)+c_{-}(-z)^{-\alpha-1}1_{(-\infty,0)}(z) )dz \end{align*} for $\alpha\in(1,2)$ and $c_{+}+c_{-}=1$. As in Wang et al. 
\cite[Lemma 3.1]{Wang:2023} we can construct a sequence of truncated $\alpha$-stable white noise $\dot{L}^K_{\alpha}$ with the jumps size measure given by (\ref{eq:smalljumpsizemeasure}) and a sequence of stopping times $(\tau_K)_{K\geq1}$ such that \begin{equation} \label{eq:stoppingtimes1} \lim_{K\rightarrow+\infty} \tau_K=\infty,\,\,\mathbb{P}\text{-a.s.}. \end{equation} Similar to equation (\ref{eq:originalequation1}), for given $K\geq1$, we can consider the following non-linear stochastic heat equation \begin{equation} \label{eq:truncatedSHE} \left\{\begin{array}{lcl} \dfrac{\partial u_K(t,x)}{\partial t}=\dfrac{1}{2} \dfrac{\partial^2u_K(t,x)}{\partial x^2}+ \varphi(u_K(t-,x))\dot{L}^K_{\alpha}(t,x), && (t,x)\in (0,\infty) \times(0,L),\\[0.3cm] u_K(0,x)=u_0(x),&&x\in[0,L],\\[0.3cm] u_K(t,0)=u_K(t,L)=0,&& t\in[0,\infty). \end{array}\right. \end{equation} If $\varphi$ is Lipschitz continuous, similar to the proof of {\rm Proposition \ref{th:Approximainresult}} in Wang et al. \cite[Proposition 3.2]{Wang:2023} one can show that there exists a unique strong solution $u_K=\{u_K(t,\cdot),t\in[0,\infty)\}$ to equation (\ref{eq:truncatedSHE}) by using the Banach fixed point principle. On the other hand, by Wang et al. \cite[Lemma 3.4]{Wang:2023}, it holds for each $K\leq N$ that \begin{align*} u_K=u_N \,\,\mathbb{P}\text{-} \,a.s. \,\,\text{on}\,\{t<\tau_K\}. \end{align*} By setting \begin{align*} u=u_K,\,0\leq t<\tau_K, \end{align*} and by the fact (\ref{eq:stoppingtimes1}), we obtain the strong (weak) solution $u$ to equation (\ref{eq:originalequation1}) with noise of unbounded jumps via letting $K\uparrow+\infty$. If $\varphi$ is non-Lipschitz continuous, for any $K\geq1$, {\rm Theorem \ref{th:mainresult}} or {\rm Theorem \ref{th:mainresult2}} shows that there exists a weak solution $(\hat{u}_K, {\hat{L}}^K_{\alpha})$ to equation (\ref{eq:truncatedSHE}) defined on a filtered probability space $(\hat{\Omega}, \hat{\mathcal{F}}, \{\hat{\mathcal{F}}_t\}_{t\geq0}, \hat{\mathbb{P}})_K$. However, we can not show that for each $K\leq N$ \begin{align*} (\hat{u}_K, {\hat{L}}^K_{\alpha})=(\hat{u}_N, {\hat{L}}^N_{\alpha}) \,\,\mathbb{P}\text{-} \,a.s. \,\,\text{on}\,\,\{t<\tau_K\} \end{align*} due to the non-Lipschitz continuity of $\varphi$. Therefore, we do not know whether there exists a common probability space $(\hat{\Omega}, \hat{\mathcal{F}}, \{\hat{\mathcal{F}}_t\}_{t\geq0}, \hat{\mathbb{P}})$ on which all of the weak solutions $((\hat{u}_K, {\hat{L}}^K_{\alpha}))_{K\geq1}$ are defined. Hence, the localization method in Wang et al. \cite{Wang:2023} becomes invalid, and the existence of the weak solution to equation (\ref{eq:originalequation1}) with untruncated $\alpha$-stable noise remains an unsolved problem. \end{remark} \section{Proof of Theorem \ref{th:mainresult}}\label{sec3} The proof of Theorem \ref{th:mainresult} proceeds in the following three steps. We first construct a sequence of the approximating SPDEs with globally Lipschitz continuous noise coefficients $(\varphi^n)_{n\geq1}$ satisfying Hypothesis \ref{Hypo}, and show that for each fixed $n\geq1$ there exists a unique strong solution $u^n$ in $D([0,\infty),L^p([0,L])$ with $p\in(\alpha,2]$ of the approximating SPDE; see Proposition \ref{th:Approximainresult}. We then prove that the approximating solution sequence $(u^n)_{n\geq1}$ is tight in both $D([0,\infty),\mathbb{M}([0,L]))$ and $L_{loc}^p([0,\infty)\times[0,L])$ for all $p\in(\alpha,2]$; see Proposition \ref{th:tightnessresult}. 
Finally, we proceed to show that there exists a weak solution $(\hat{u},\hat{L}_{\alpha})$ to equation (\ref{eq:originalequation1}) defined on another probability space $(\hat{\Omega}, \hat{\mathcal{F}}, \{\hat{\mathcal{F}}_t\}_{t\geq0}, \hat{\mathbb{P}})$ by applying a weak convergence argument. For each fixed $n\geq1$, we construct the approximate SPDE of the form \begin{equation} \label{eq:approximatingsolution} \left\{\begin{array}{lcl} \dfrac{\partial u^n(t,x)}{\partial t}=\dfrac{1}{2}\dfrac{\partial^2u^n(t,x)} {\partial x^2}+\varphi^n(u^n(t-,x))\dot{L}_{\alpha}(t,x),&& (t,x)\in (0,\infty) \times (0,L),\\[0.3cm] u^n(0,x)=u_0(x),&&x\in [0,L], \\[0.3cm] u^n(t,0)=u^n(t,L)=0,&&t\in[0,\infty), \end{array}\right. \end{equation} where the coefficient $\varphi^n$ satisfies Hypothesis \ref{Hypo}. Given $n\geq1$, by a solution to equation (\ref{eq:approximatingsolution}) we mean a process $u^n_t\equiv\{u^n(t,\cdot), t\in[0,\infty)\}$ satisfying the following weak form equation: \begin{align} \label{eq:approxivariationform} \langle u^n_t,\psi\rangle &=\langle u_0,\psi\rangle+\dfrac{1}{2}\int_0^t \langle u^n_s,\psi{''}\rangle ds +\int_0^{t+}\int_0^L\int_{\mathbb{R}\setminus\{0\}} \varphi^n(u^n(s-,x))\psi(x)z\tilde{N}(ds,dx,dz) \end{align} for all $t\in[0,\infty)$ and for any $\psi\in C^{2}([0,L])$ with $\psi(0)=\psi(L)=\psi^{'}(0)=\psi^{'}(L)=0$ or equivalently satisfying the following mild form equation: \begin{align} \label{mildformapproxi0} u^n(t,x)&=\int_0^LG_t(x,y)u_0(y)dy +\int_0^{t+}\int_0^L \int_{\mathbb{R}\setminus\{0\}}G_{t-s}(x,y) \varphi^n(u^n(s-,y))z\tilde{N}(ds,dy,dz) \end{align} for all $t\in [0, \infty)$ and for a.e. $x\in [0, L]$. We now present the definition (see also in Wang et al. \cite{Wang:2023}) of a strong solution to stochastic heat equation (\ref{eq:approximatingsolution}). \begin{definition} Given $p\geq 1$, the stochastic heat equation (\ref{eq:approximatingsolution}) has a strong solution in $D([0,\infty),L^p([0,L]))$ with initial function $u_0$ if for a given truncated $\alpha$-stable martingale measure $L_{\alpha}$ there exists a process $u^n_t\equiv\{u^n(t,\cdot),t\in[0,\infty)\}$ in $D([0,\infty),L^p([0,L]))$ such that either equation (\ref{eq:approxivariationform}) or equation (\ref{mildformapproxi0}) holds. \end{definition} Note that for each $n\geq1$ the noise coefficient $\varphi^n$ is not only Lipschitz continuous but also of globally linear growth. Indeed, for a given $\epsilon>0$ and $n_0\in\mathbb{N}$ large enough, Hypothesis \ref{Hypo} (i) and the globally linear growth of $\varphi$ imply that \begin{equation} \label{eq:glo-lin-growth} |\varphi^n(x)|\leq|\varphi^n(x)-\varphi(x)| +|\varphi(x)|\leq\epsilon+C(1+|x|),\,\, \forall n\geq n_0,\, \forall x\in\mathbb{R}. \end{equation} Therefore, we can use the classical Banach fixed point principle to show the existence and pathwise uniqueness of the strong solution to equation (\ref{eq:approximatingsolution}). Since the proof is standard, we just state the main result in the following proposition. For more details of the proof, we refer to Wang et al. \cite[Proposition 3.2]{Wang:2023} and references therein. Also note that the same method was applied in Truman and Wu \cite{Truman:2003} and in Bo and Wang \cite{Bo:2006} where the stochastic Burgers equation and the stochastic {C}ahn-{H}illiard equation driven by L\'{e}vy space-time white noise were studied, respectively. 
\begin{proposition} \label{th:Approximainresult} Given any $n\geq1$, if the initial function $u_0$ satisfies $\mathbb{E}[||u_0||_p^p]<\infty$ for some $p\in(\alpha,2]$, then under {\rm Hypothesis \ref{Hypo}} there exists a pathwise unique strong solution $u^n_t\equiv\{u^n(t,\cdot),t\in[0,\infty)\}$ to equation (\ref{eq:approximatingsolution}) such that for any $T>0$ \begin{equation} \label{eq:Approximomentresult} \sup_{n\geq1}\sup_{0\leq t\leq T}\mathbb{E}\left[\vert \vert u^n_t\vert \vert _p^p\right]<\infty. \end{equation} \end{proposition} \begin{remark} By {\rm Hypothesis \ref{Hypo} (ii)}, (\ref{eq:glo-lin-growth}) and estimate (\ref{eq:Approximomentresult}), the stochastic integral on the right-hand side of (\ref{mildformapproxi0}) is well defined. \end{remark} We are going to prove that the approximating solution sequence $(u^n)_{n\geq1}$ is tight in both $D([0,\infty),\mathbb{M}([0,L]))$ and $L_{loc}^p([0,\infty)\times[0,L])$ for all $p\in(\alpha,2]$ by using the following tightness criteria; see, e.g., Xiong and Yang \cite[Lemma 2.2]{Xiong:2019}. Note that this tightness criteria can be obtained by Ethier and Kurtz \cite[Theorems 3.9.1, 3.9.4 and 3.2.2]{Ethier:1986}. \begin{lemma} \label{lem:tightcriterion0} Given a complete and separable metric space $E$, let $(X^n=\{X^n(t),t\in[0,\infty)\})_{n\geq1}$ be a sequence of stochastic processes with sample paths in $D([0,\infty),E)$, and let $C_a$ be a subalgebra and dense subset of $C_b(E)$ (the bounded continuous functions space on $E$). Then the sequence $(X^n)_{n\geq1}$ is tight in $D([0,\infty),E)$ if both of the following conditions hold: \begin{itemize} \item[\rm (i)] For every $\varepsilon>0$ and $T>0$ there exists a compact set $\Gamma_{\varepsilon,T}\subset E$ such that \begin{equation} \label{eq:tightcriterion1} \inf_{n\geq1}\mathbb{P}[X^n(t)\in\Gamma_{\varepsilon,T}\,\, \text{for all}\,\, t\in[0,T] ]\geq1-\varepsilon. \end{equation} \item[\rm (ii)] For each $f\in C_a$, there exists a process $g_n\equiv\{g_n(t),t\in[0,\infty)\}$ such that \begin{equation*} f(X^n(t))-\int_0^tg_n(s)ds \end{equation*} is an $(\mathcal{F}_t)$-martingale and \begin{align} \label{eq:tight-moment} \sup_{0\leq t\leq T}\mathbb{E}\left[\vert f(X^n(t))\vert +\vert g_n(t)\vert \right]<\infty \end{align} and \begin{align} \label{eq:tight-moment2} \sup_{n\geq1}\mathbb{E}\left[\left(\int_0^T\vert g_n(t)\vert ^qdt\right)^{\frac{1}{q}}\right]<\infty \end{align} for each $n\geq1, T>0$ and $q>1$. \end{itemize} \end{lemma} Before showing the tightness of solution sequence $(u^n)_{n\geq1}$, we first find a uniform moment estimate in the following lemma. \begin{lemma} \label{le:uniformbounded} For each $n\geq1$ let $u^n$ be the strong solution to equation (\ref{eq:approximatingsolution}) given by {\rm Proposition \ref{th:Approximainresult}}. Then for given $T>0$ and $\psi\in C^{2}([0,L])$ with $\psi(0)=\psi(L)=\psi^{'}(0)=\psi^{'}(L)=0$ and $\vert \psi^{''}(x)\vert \leq C\psi(x),x\in[0,L]$, we have for $p\in(\alpha,2]$ that \begin{align} \label{eq:p-uniform} \sup_{n\geq1}\mathbb{E}\left[\sup_{0\leq t\leq T} \left\vert \int_{0}^Lu^n(t,x)\psi(x)dx\right\vert ^p\right]<\infty. 
\end{align} \end{lemma} \begin{proof} By (\ref{eq:approxivariationform}), it holds that for each $n\geq1$ \begin{align*} \mathbb{E}\left[\sup_{0\leq t\leq T} \left\vert \int_{0}^Lu^n(t,x)\psi(x)dx\right\vert ^p\right]\leq C_p(A_1+A_2+A_3), \end{align*} where \begin{align*} A_1&=\mathbb{E}\left[\left\vert \int_0^Lu_0(x)\psi(x)dx\right\vert ^p\right], \\ A_2&=\mathbb{E}\left[\sup_{0\leq t\leq T} \left\vert \int_0^{t}\int_0^Lu^n(s,x)\psi^{''}(x)dxds \right\vert ^p\right], \\ A_3&=\mathbb{E}\left[\sup_{0\leq t\leq T} \left\vert \int_0^{t+}\int_0^L\int_{\mathbb{R}\setminus\{0\}}\varphi^n(u^n(s-,x))\psi(x)z\tilde{N}(ds,dx,dz) \right\vert ^p\right]. \end{align*} For $p\in(\alpha,2]$ we separately estimate $A_1$, $A_2$ and $A_3$ as follows. For $A_1$, it holds by H\"{o}lder's inequality that \begin{align*} A_1&\leq C_p\left(\int_0^L\vert \psi(x)\vert ^{\frac{p}{p-1}}dx\right)^{\frac{p(p-1)}{p}} \mathbb{E}\left[\int_0^L\vert u_0(x)\vert ^pdx\right] \leq C_p\mathbb{E}[\vert \vert u_0\vert \vert _p^p] \leq C_p \end{align*} due to $\psi\in C^{2}([0,L])$ and $\mathbb{E}[\vert \vert u_0\vert \vert _p^p]<\infty$. For $A_2$, it holds by $\vert \psi^{''}(x)\vert \leq C\psi(x),x\in[0,L]$ and H\"{o}lder's inequality that \begin{align*} A_2&\leq C_p\mathbb{E}\left[\sup_{0\leq t\leq T} \left\vert \int_0^{t}\int_0^Lu^n(s,x)\psi(x)dxds \right\vert ^p\right] \leq C_{p,T}\int_0^{T} \mathbb{E}\left[\sup_{0\leq r\leq s} \left\vert \int_0^Lu^n(r,x)\psi(x)dx \right\vert ^p\right]ds. \end{align*} For $A_3$, the Doob maximal inequality and the Burkholder-Davis-Gundy inequality imply that \begin{align*} A_3&\leq C_p \mathbb{E}\left[\left|\int_0^{T}\int_0^L\int_{\mathbb{R}\setminus\{0\}}\vert \varphi^n(u^n(s-,x)) \psi(x)z\vert^2N(ds,dx,dz)\right|^{\frac{p}{2}}\right] \\ &\leq C_p \mathbb{E}\left[\int_0^{T}\int_0^L\int_{\mathbb{R}\setminus\{0\}}\vert \varphi^n(u^n(s-,x)) \psi(x)z\vert ^pN(ds,dx,dz)\right] \\ &=C_p \mathbb{E}\left[\int_0^{T}\int_0^L\int_{\mathbb{R}\setminus\{0\}}\vert \varphi^n(u^n(s,x)) \psi(x)z\vert ^pdsdx\nu_{\alpha}(dz)\right], \end{align*} where the second inequality follows from the fact that \begin{align} \label{eq:element-inequ} \left\vert \sum_{i=1}^ka_i^2\right\vert ^{\frac{q}{2}}\leq \sum_{i=1}^{k}\vert a_i\vert ^q \end{align} for $a_i\in\mathbb{R}, k\geq1$, and $q\in(0,2]$. By (\ref{eq:smalljumpsizemeasure}), it holds that for $p>\alpha$ \begin{align} \label{eq:jumpestimate} \int_{\mathbb{R}\setminus\{0\}}\vert z\vert ^p\nu_{\alpha} (dz)=c_{+}\int_0^Kz^{p-\alpha-1}dz+c_{-}\int_{-K}^0(-z)^{p-\alpha-1}dz =\frac{K^{p-\alpha}}{p-\alpha}, \end{align} then there exists a constant $C_{p,K,\alpha}$ such that \begin{align*} A_3\leq C_{p,K,\alpha} \mathbb{E}\left[\int_0^{T}\int_0^L\vert \varphi^n(u^n(s,x))\psi(x)\vert ^pdsdx\right]. \end{align*} By (\ref{eq:glo-lin-growth}), $\psi\in C^{2}([0,L])$ and (\ref{eq:Approximomentresult}) in Proposition \ref{th:Approximainresult}, it is easy to see that \begin{align*} A_3&\leq C_{p,K,\alpha,T}\left(1+\sup_{0\leq s\leq T} \mathbb{E}[\vert \vert u^n_s\vert \vert _p^p]\right)\leq C_{p,K,\alpha,T}. 
\end{align*} Combining the estimates of $A_1,A_2$ and $A_3$, we have \begin{align*} \mathbb{E}&\left[\sup_{0\leq t\leq T} \left\vert \int_{0}^Lu^n(t,x)\psi(x)dx\right\vert ^p\right] \leq C_{p,K,\alpha,T}+C_{p,T}\int_0^{T}\mathbb{E}\left[\sup_{0\leq r\leq s} \left\vert \int_0^Lu^n(r,x)\psi(x)dx\right\vert ^p\right]ds. \end{align*} Therefore, it holds by Gronwall's lemma that for $p\in(\alpha,2]$ \begin{align*} \sup_{n\geq1}\mathbb{E}\left[\sup_{0\leq t\leq T} \left\vert \int_{0}^Lu^n(t,x)\psi(x)dx\right\vert ^p\right]<\infty, \end{align*} which completes the proof. $\Box$ \end{proof} Note that for any function $v\in L^q([0,L])$ with $q\geq1$, we can identify $L^q([0,L])$ as a subset of $\mathbb{M}([0,L])$ by using the following correspondence $$v(x)\mapsto v(x)dx.$$ Then for each $n\geq1$ we can identify the $D([0,\infty),L^p([0,L]))$-valued random variable $u^n$ as a $D([0,\infty),\mathbb{M}([0,L]))$-valued random variable (still denoted by $u^n$). We now show the tightness of $(u^n)_{n\geq1}$ in the following proposition. \begin{proposition} \label{th:tightnessresult} The solution sequence $(u^n)_{n\geq1}$ to equation (\ref{eq:approximatingsolution}) given by {\rm Proposition \ref{th:Approximainresult}} is tight in both $D([0,\infty),\mathbb{M}([0,L]))$ and $L_{loc}^p([0,\infty)\times [0,L])$ for $p\in(\alpha,2]$. Let $u$ be an arbitrary limit point of $u^n$. Then \begin{align} \label{eq:tightresult} u\in D([0,\infty),\mathbb{M}([0,L])) \cap L_{loc}^p([0,\infty)\times [0,L]) \end{align} for $p\in(\alpha,2]$. \end{proposition} \begin{proof} For each $n\geq1, t\geq0$ and $\psi\in C^{2}([0,L])$ with $\psi(0)=\psi(L)=\psi^{'}(0)=\psi^{'}(L)=0$ and $\vert \psi^{''}(x)\vert \leq C\psi(x),x\in[0,L]$, let us define \begin{align*} \langle u^n,\psi\rangle:=\langle u_t^n,\psi\rangle=\int_0^Lu^n(t,x)\psi(x)dx. \end{align*} We first prove that the sequence $(\langle u^n,\psi\rangle)_{n\geq1}$ is tight in $D([0,\infty),\mathbb{R})$ by using Lemma \ref{lem:tightcriterion0}. It is easy to see that condition (i) in Lemma \ref{lem:tightcriterion0} can be verified by Lemma \ref{le:uniformbounded}. In the following we mainly verify condition (ii) in Lemma \ref{lem:tightcriterion0}. For each $f\in C_b^2(\mathbb{R})$ ($f,f^{'},f^{''}$ are bounded and uniformly continuous) with compact support, it holds by (\ref{eq:approxivariationform}) and It\^{o}'s formula that \begin{align} \label{eq:ito} f(\langle u_t^n,\psi\rangle)&=f(\langle u_0^n,\psi\rangle) +\frac{1}{2}\int_0^tf^{'}(\langle u_s^n,\psi\rangle)\langle u_s^n,\psi^{''}\rangle ds \nonumber\\ &\quad+\int_0^t\int_0^L\int_{\mathbb{R}\setminus\{0\}} \mathcal{D}(\langle u_s^n,\psi\rangle,\varphi^n(u^n(s,x))\psi(x)z) dsdx\nu_{\alpha}(dz)+\text{mart.}, \end{align} where $ \mathcal{D}(u,v)=f(u+v)-f(u)-vf^{'}(u) $ for $u,v\in\mathbb{R}$. Since $f,f^{'},f^{''}$ are bounded and $\vert \psi^{''}(x)\vert \leq C\psi(x),x\in[0,L]$, we have \begin{align} \label{eq:estimate1} \vert f^{'}(\langle u_s^n,\psi\rangle)\langle u_s^n,\psi^{''}\rangle\vert \leq C\left\vert \int_0^Lu^n({s},x)\psi(x)dx\right\vert . \end{align} By Taylor's formula, one can show that $\vert \mathcal{D}(u,v)\vert \leq C(\vert v\vert\wedge \vert v\vert ^2)$, which also implies that $\vert \mathcal{D}(u,v)\vert \leq C(\vert v\vert \wedge \vert v\vert ^p)$ for $p\in(\alpha,2]$.
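For completeness, here is one way to obtain these two bounds; the constants below depend only on $f$ and are absorbed into the generic constant $C$. Since $f\in C_b^{2}(\mathbb{R})$, Taylor's formula and the mean value theorem give $$\vert \mathcal{D}(u,v)\vert \leq \frac12\Vert f^{''}\Vert_{\infty}\vert v\vert ^2 \quad\mbox{and}\quad \vert \mathcal{D}(u,v)\vert \leq \vert f(u+v)-f(u)\vert +\vert v\vert \vert f^{'}(u)\vert \leq 2\Vert f^{'}\Vert_{\infty}\vert v\vert ,$$ so that $\vert \mathcal{D}(u,v)\vert \leq C(\vert v\vert \wedge \vert v\vert ^2)$. Moreover, $\vert v\vert \wedge \vert v\vert ^2\leq \vert v\vert \wedge \vert v\vert ^p$ for every $p\in(\alpha,2]$, since $\vert v\vert ^2\leq \vert v\vert ^p$ when $\vert v\vert \leq1$, while both sides equal $\vert v\vert$ when $\vert v\vert \geq1$.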
Thus we have for $p\in(\alpha,2]$, \begin{align} \label{eq:estimate2} &\int_0^L\int_{\mathbb{R}\setminus\{0\}} \vert \mathcal{D}(\langle u_s^n,\psi\rangle,\varphi^n(u^n(s,x))\psi(x)z)\vert dx\nu_{\alpha}(dz) \nonumber\\ &\leq C\left(\int_{\mathbb{R}\setminus\{0\}}\vert z\vert \wedge \vert z\vert ^p\nu_{\alpha}(dz)\right) \int_0^L(\vert \varphi^n(u^n(s,x))\psi(x)\vert +\vert \varphi^n(u^n(s,x))\psi(x)\vert ^p)dx \nonumber\\ &\leq C_{p,K,\alpha}\int_0^L(\vert \varphi^n(u^n(s,x))\psi(x)\vert +\vert \varphi^n(u^n(s,x))\psi(x)\vert ^p)dx, \end{align} where by (\ref{eq:smalljumpsizemeasure}), \begin{align*} \int_{\mathbb{R}\setminus\{0\}}\vert z\vert \wedge \vert z\vert ^p\nu_{\alpha}(dz) &=c_{+}\int_0^1z^{p-\alpha-1}dz +c_{-}\int_{-1}^0(-z)^{p-\alpha-1}dz +c_{+}\int_1^Kz^{-\alpha}dz \\ &\quad +c_{-}\int_{-K}^{-1}(-z)^{-\alpha}dz \\ &=\frac{1}{p-\alpha}+\dfrac{1-K^{1-\alpha}}{\alpha-1}\leq C_{p,K,\alpha}. \end{align*} For given $n\geq1$ let us define \begin{align*} g_n(s)&:=\frac{1}{2}f^{'}(\langle u_s^n,\psi\rangle)\langle u_s^n,\psi^{''}\rangle +\int_0^L\int_{\mathbb{R}\setminus\{0\}} \mathcal{D}(\langle u_s^n,\psi\rangle,\varphi^n(u^n(s,x))\psi(x)z) dx\nu_{\alpha}(dz). \end{align*} By (\ref{eq:ito}), it is easy to see that \begin{align*} f(\langle u_t^n,\psi\rangle)-\int_0^{t}g_n(s)ds \end{align*} is an $(\mathcal{F}_t)$-martingale. Now we verify the moment estimates (\ref{eq:tight-moment}) and (\ref{eq:tight-moment2}) of condition (ii) in Lemma \ref{lem:tightcriterion0}. For each $t\in[0,T]$, it holds by the boundedness of $f$, estimates (\ref{eq:estimate1})-(\ref{eq:estimate2}) and (\ref{eq:glo-lin-growth}) that \begin{align*} \mathbb{E}\left[\vert f(\langle u_t^n,\psi\rangle)\vert +\vert g_n(t)\vert \right] &\leq C\left(1+\mathbb{E}\left[\left\vert \int_0^Lu^n(t,x)\psi(x)dx\right\vert \right]\right) \\ &\quad +C_{p,K,\alpha}\mathbb{E}\left[\int_0^L(\vert \psi(x)\vert +\vert u^n(t,x)\psi(x)\vert )dx\right] \\ &\quad+C_{p,K,\alpha}\mathbb{E}\left[\int_0^L(\vert \psi(x)\vert ^p+ \vert u^n(t,x)\psi(x)\vert ^p)dx\right]. \end{align*} Since $\psi\in C^2([0,L])$ is bounded, it holds by H\"{o}lder's inequality and (\ref{eq:p-uniform}) that for $p\in(\alpha,2]$ \begin{align*} \mathbb{E}\left[\vert f(\langle u_t^n,\psi\rangle)\vert +\vert g_n(t)\vert \right] &\leq C_{p,K,\alpha}\left(1+\left(\sup_{0\leq t\leq T} \mathbb{E}[\vert \vert u^n_t\vert \vert_p^p]\right)^{\frac{1}{p}} +\sup_{0\leq t\leq T} \mathbb{E}[\vert \vert u^n_t\vert \vert_p^p]\right), \end{align*} and so by (\ref{eq:Approximomentresult}), \begin{align*} \sup_{0\leq t\leq T}\mathbb{E}\left[\vert f(\langle u_t^n,\psi\rangle)\vert +\vert g_n(t)\vert \right]<\infty, \end{align*} which verifies the estimate (\ref{eq:tight-moment}). To verify (\ref{eq:tight-moment2}), it suffices to show that for each $n\geq1$ \begin{align*} \mathbb{E}\left[\int_0^T\vert g_n(t)\vert ^qdt\right]<\infty \end{align*} for some $q>1$. By the estimates (\ref{eq:estimate1})-(\ref{eq:estimate2}) and (\ref{eq:glo-lin-growth}), we have \begin{align*} \mathbb{E}\left[\int_0^T\vert g_n(t)\vert ^qdt\right] &\leq C_{q}\mathbb{E}\left[\int_0^T\left\vert \int_0^Lu^n(t,x)\psi(x)dx \right\vert ^qdt\right] \\ &\quad+C_{p,q,K,\alpha}\mathbb{E}\left[\int_0^T\left\vert \int_0^L(\vert \psi(x)\vert + \vert u^n(t,x)\psi(x)\vert )dx \right\vert ^qdt\right] \\ &\quad+C_{p,q,K,\alpha}\mathbb{E}\left[\int_0^T\left\vert \int_0^L(\vert \psi(x)\vert ^p+ \vert u^n(t,x)\psi(x)\vert ^p)dx \right\vert ^qdt\right].
\end{align*}
Taking $1<q<2/p$, the H\"{o}lder inequality and the boundedness of $\psi$ imply that
\begin{align*}
\mathbb{E}\left[\int_0^T\vert g_n(t)\vert ^qdt\right] &\leq C_{p,q,K,\alpha,T}\left(1+\sup_{0\leq t\leq T}\mathbb{E}[\vert \vert u^n_t\vert\vert_q^q] +\sup_{0\leq t\leq T}\mathbb{E} \left[\vert \vert u^n_t\vert \vert_{pq}^{pq}\right]\right),
\end{align*}
and so by (\ref{eq:Approximomentresult}),
\begin{align*}
\sup_{n\geq1}\mathbb{E}\left[\int_0^T\vert g_n(t)\vert ^qdt\right]<\infty,
\end{align*}
which verifies the estimate (\ref{eq:tight-moment2}). Therefore, for each $\psi\in C^{2}([0,L])$ with $\psi(0)=\psi(L)=\psi^{'}(0)=\psi^{'}(L)=0$ and $\vert \psi^{''}(x)\vert \leq C\psi(x),x\in[0,L]$, the sequence $(\langle u^n,\psi\rangle)_{n\geq1}$ is tight in $D([0,\infty),\mathbb{R})$, and so it holds by Mitoma's theorem (see, e.g., Walsh \cite[pp.361--365]{Walsh:1986}) that $(u^n)_{n\geq1}$ is tight in $D([0,\infty),\mathbb{M}([0,L]))$. On the other hand, by (\ref{eq:Approximomentresult}) we have for each $T>0$
\begin{align*}
\sup_{n\geq1}\mathbb{E} \left[\int_0^T\int_0^L\vert u^n(t,x)\vert ^pdxdt\right] \leq C_T\sup_{n\geq1}\sup_{0\leq t\leq T} \mathbb{E}[\vert \vert u^n_t\vert \vert _p^p]<\infty
\end{align*}
for $p\in(\alpha,2]$. Markov's inequality implies that for each $\varepsilon>0,T>0$ there exists a constant $C_{\varepsilon,T}$ such that
\begin{align*}
\sup_{n\geq1}\mathbb{P}\left[\int_0^T\int_0^L\vert u^n(t,x)\vert ^pdxdt>C_{\varepsilon,T}\right]<\varepsilon
\end{align*}
for $p\in(\alpha,2]$. Therefore, the sequence $(u^n)_{n\geq1}$ is also tight in $L^p_{loc}([0,\infty)\times[0,L])$ for $p\in(\alpha,2]$, and the conclusion (\ref{eq:tightresult}) holds. $\Box$
\end{proof}
\begin{Tproof}\textbf{~of Theorem \ref{th:mainresult}.} We are going to prove Theorem \ref{th:mainresult} by applying weak convergence arguments. For each $n\geq1$, let $u^n$ be the strong solution of equation (\ref{eq:approximatingsolution}) given by Proposition \ref{th:Approximainresult}. It can also be regarded as an element in $D([0,\infty),\mathbb{M}([0,L]))\cap L_{loc}^p([0,\infty)\times [0,L])$ with $p\in(\alpha,2]$. By Proposition \ref{th:tightnessresult}, there exists a $D([0,\infty),\mathbb{M}([0,L]))\cap L_{loc}^p([0,\infty)\times [0,L])$-valued random variable $u$ such that $u^n$ converges to $u$ in distribution in $D([0,\infty),\mathbb{M}([0,L]))\cap L_{loc}^p([0,\infty)\times [0,L])$ for $p\in(\alpha,2]$. On the other hand, the Skorokhod Representation Theorem (see, e.g., Ethier and Kurtz \cite[Theorem 3.1.8]{Ethier:1986}) yields that there exists another filtered probability space $(\hat{\Omega}, \hat{\mathcal{F}}, (\hat{\mathcal{F}}_t)_{t\geq0},\hat{\mathbb{P}})$ and on it a further subsequence $(\hat{u}^n)_{n\geq1}$ and $\hat{u}$, which have the same distributions as $(u^n)_{n\geq1}$ and $u$, respectively, such that $\hat{u}^n$ almost surely converges to $\hat{u}$ in $D([0,\infty),\mathbb{M}([0,L]))\cap L_{loc}^p([0,\infty)\times [0,L])$ for $p\in(\alpha,2]$. For each $t\geq0,n\geq1$ and any test function $\psi\in C^{2}([0,L])$ with $\psi(0)=\psi(L)=0$ and $\psi^{'}(0)= \psi^{'}(L)=0$, let us define
\begin{align*}
\hat{M}^n_{t}(\psi)&:=\int_0^L\hat{u}^n(t,x) \psi(x)dx-\int_0^L\hat{u}_0(x) \psi(x)dx -\frac{1}{2}\int_0^{t} \int_0^L\hat{u}^n(s,x)\psi^{''}(x)dxds.
\end{align*} Since $\hat{u}^n$ almost surely converges to $\hat{u}$ in the Skorokhod topology as $n\rightarrow\infty$, then \begin{align} \label{eq:M^n_t} \hat{M}^n_{t}(\psi)& \overset{\mathbf{\hat{P}}\text{-a.s.}}{\longrightarrow} \int_0^L\hat{u}(t,x)\psi(x)dx- \int_0^L\hat{u}_0(x)\psi(x)dx -\frac{1}{2}\int_0^{t} \int_0^L\hat{u}(s,x) \psi^{''}(x) dxds \end{align} in the Skorokhod topology as $n\rightarrow\infty$. By (\ref{eq:approxivariationform}) and the fact that $\hat{u}^n$ has the same distribution as $u^n$ for each $n\geq1$, we have \begin{align*} \hat{M}^n_{t}(\psi) \overset{D}=& \int_0^Lu^n(t,x) \psi(x)dx-\int_0^Lu_0(x)\psi(x)dx -\frac{1}{2}\int_0^{t}\int_0^L u^n(s,x)\psi^{''}(x)dxds \nonumber\\ =&\int_0^{t+} \int_0^L\int_{\mathbb{R}\setminus\{0\}}\psi(x) \varphi^n(u^n(s-,x))z\tilde{N}(ds,dx,dz), \end{align*} where $\overset{D}=$ denotes the identity in distribution. The Burkholder-Davis-Gundy inequality, (\ref{eq:element-inequ})-(\ref{eq:jumpestimate}) and (\ref{eq:glo-lin-growth}) imply that for $p\in(\alpha,2]$ \begin{align*} \hat{\mathbb{E}}[\vert \hat{M}^n_{t}(\psi)\vert ^p] &=\mathbb{E} \left[\left\vert \int_0^{t+} \int_0^L\int_{\mathbb{R}\setminus\{0\}}\psi(x) \varphi^n(u^n(s-,x))z\tilde{N}(ds,dx,dz) \right\vert ^p\right] \\ &\leq C_p\mathbb{E} \left[\int_0^t\int_0^L \int_{\mathbb{R}\setminus\{0\}} \vert \psi(x)\vert ^p(1+\vert u^n(s,x)\vert )^p \vert z\vert ^pdsdx\nu_{\alpha}(dz) \right] \\ &\leq C_{p,K,\alpha,T} \left(\int_0^L\vert \psi(x)\vert ^pdx+\bigg\vert \sup_{x\in[0,L]}\psi(x)\bigg\vert ^p \sup_{0\leq t\leq T}\mathbb{E}\left[\vert \vert u^n_t\vert \vert _p^p\right]\right). \end{align*} Then by $\psi\in C^{2}([0,L])$ and (\ref{eq:Approximomentresult}), we have for each $T>0$ \begin{align*} \sup_{n\geq1}\sup_{0\leq t\leq T} \hat{\mathbb{E}}[\vert \hat{M}^n_{t}(\psi)\vert ^p]<\infty. \end{align*} Therefore, it holds by (\ref{eq:M^n_t}) that there exists an $(\hat{\mathcal{F}}_t)$-martingale $\hat{M}_{t}(\psi)$ such that $\hat{M}^n_{t}(\psi)$ converges weakly to $\hat{M}_{t}(\psi)$ as $n\rightarrow\infty$, and for each $t\geq0$ \begin{align} \label{martingle1} \hat{M}_{t}(\psi)&= \int_0^L\hat{u}(t,x)\psi(x)dx- \int_0^L \hat{u}_0(x)\psi(x)dx-\frac{1}{2}\int_0^{t} \int_0^L\hat{u}(s,x) \psi^{''}(x)dxds. \end{align} By Hypothesis \ref{Hypo} (i), the quadratic variation of $\{\hat{M}^n_{t}(\psi),t\in[0,\infty)\}$ satisfies that \begin{align*} \langle \hat{M}^n(\psi), \hat{M}^n(\psi) \rangle_t&=\int_0^{t}\int_0^L \int_{\mathbb{R}\setminus\{0\}}\varphi^n({u}^n(s,x))^2 \psi(x)^2z^2dsdx\nu_{\alpha}(dz) \\ &\overset{D}=\int_0^{t}\int_0^L \int_{\mathbb{R}\setminus\{0\}}\varphi^n(\hat{u}^n(s,x))^2 \psi(x)^2z^2dsdx\nu_{\alpha}(dz) \\ &\overset{\mathbb{P}-a.s.} \rightarrow\int_0^{t}\int_0^L \int_{\mathbb{R}\setminus\{0\}}\varphi(\hat{u}(s,x))^2 \psi(x)^2z^2dsdx\nu_{\alpha}(dz),\,\,t\in[0,T], \end{align*} as $n\rightarrow\infty$. We denote by $\{\langle \hat{M}(\psi),\hat{M}(\psi)\rangle_t,t\in[0,\infty)\}$ the quadratic variation process \begin{align*} \langle \hat{M}(\psi),\hat{M}(\psi)\rangle_t=\int_0^{t} \int_0^L\int_{\mathbb{R}\setminus\{0\}}\varphi(\hat{u}(s,x))^2 \psi(x)^2z^2dsdx\nu_{\alpha}(dz),\,\,t\geq0. 
\end{align*}
Similar to Konno and Shiga \cite[Lemma 2.4]{Konno:1988}, $\langle \hat{M}(\psi),\hat{M}(\psi)\rangle_t$ corresponds to an orthogonal martingale measure $\hat{M}(dt,dx,dz)$ defined on the filtered probability space $(\hat{\Omega}, \hat{\mathcal{F}}, (\hat{\mathcal{F}}_t)_{t\geq0}, \hat{\mathbb{P}})$ in the sense of Walsh \cite[Chapter 2]{Walsh:1986}, whose quadratic measure is given by
\begin{align*}
\varphi(\hat{u}(t,x))^2z^2dtdx\nu_{\alpha}(dz).
\end{align*}
Let $\{\dot{\bar{L}}_{\alpha}(t,x):t\in[0,\infty),x\in[0,L]\}$ be another truncated $\alpha$-stable white noise, defined possibly on an enlargement of $(\hat{\Omega}, \hat{\mathcal{F}}, (\hat{\mathcal{F}}_t)_{t\geq0}, \hat{\mathbb{P}})$ and independent of $\hat{M}(dt,dx,dz)$, and define
\begin{align*}
\hat{L}_{\alpha}(t,\psi)&:= \int_0^{t+}\int_{0}^L\int_{\mathbb{R}\setminus\{0\}} \dfrac{1}{\varphi(\hat{u}(s-,x))} 1_{\{\varphi(\hat{u}(s-,x))\neq0\}}\psi(x)z\hat{M}(ds,dx,dz) \\
&\quad+\int_0^{t+}\int_{0}^L \psi(x) 1_{\{\varphi(\hat{u}(s-,x))=0\}} \bar{L}_{\alpha}(ds,dx).
\end{align*}
Then $\{\hat{L}_{\alpha}(t,\psi): \,t\in[0,\infty),\, \psi\in C^2([0, L]), \psi(0)=\psi(L)=0, \psi^{'}(0)=\psi^{'}(L)=0\}$ determines a truncated $\alpha$-stable white noise $\dot{\hat{L}}_{\alpha}(t,x)$ on $(\hat{\Omega}, \hat{\mathcal{F}}, (\hat{\mathcal{F}}_t)_{t\geq0}, \hat{\mathbb{P}})$ with the same distribution as $\dot{L}_{\alpha}(t,x)$ such that
\begin{align*}
\hat{M}_t(\psi)&=\int_0^{t+} \int_0^L\varphi(\hat{u}(s-,x)) \psi(x) \hat{L}_{\alpha}(ds,dx) =\int_0^{t+} \int_0^L\int_{\mathbb{R}\setminus\{0\}}\varphi(\hat{u}(s-,x)) \psi(x)z\widetilde{\hat{N}}(ds,dx,dz),
\end{align*}
where $\widetilde{\hat{N}}(dt,dx,dz)$ denotes the compensated Poisson random measure associated with the truncated $\alpha$-stable martingale measure $\hat{L}_{\alpha}(t,x)$. Hence, it holds by (\ref{martingle1}) that $(\hat{u},\hat{L}_{\alpha})$ is a weak solution to (\ref{eq:originalequation1}) defined on $(\hat{\Omega}, \hat{\mathcal{F}}, (\hat{\mathcal{F}}_t)_{t\geq0}, \hat{\mathbb{P}})$. On the other hand, since $\hat{u}^n$ has the same distribution as $u^n$ for each $n\geq1$, the moment estimate (\ref{eq:Approximomentresult}) in Proposition \ref{th:Approximainresult} carries over to $\hat{u}^n$, that is,
\begin{equation*}
\sup_{n\geq1}\sup_{0\leq t\leq T} \hat{\mathbb{E}}\left[\vert \vert \hat{u}^n_t\vert \vert _p^p\right]<\infty
\end{equation*}
for $p\in(\alpha,2]$. For the moment estimate (\ref{eq:momentresult}), Fatou's lemma implies that for $p\in(\alpha,2]$
\begin{align*}
\hat{\mathbb{E}}\left[\vert \vert \hat{u}\vert \vert _{p,T}^p\right]&= \hat{\mathbb{E}}\left[\int_0^T\vert \vert \hat{u}_t\vert \vert _p^pdt\right] \leq\liminf_{n\rightarrow\infty}C_T \sup_{0\leq t\leq T}\hat{\mathbb{E}}\left[\vert \vert \hat{u}^n_t\vert \vert _p^p\right]<\infty,
\end{align*}
which completes the proof. $\Box$
\end{Tproof}
\section{Proof of Theorem \ref{th:mainresult2}}\label{sec4}
The proof of Theorem \ref{th:mainresult2} is similar to that of Theorem \ref{th:mainresult}. The main difference is that in the current proof we need to show that the solution sequence $(u^n)_{n\geq1}$ to equation (\ref{eq:approximatingsolution}), obtained from Proposition \ref{th:Approximainresult}, is tight in $D([0,\infty),L^p([0,L]))$ for $p\in(\alpha,5/3)$. To this end, we need the following tightness criterion; see, e.g., Ethier and Kurtz \cite[Theorem 3.8.6 and Remark (a)]{Ethier:1986}. Note that the same criterion was also applied in Sturm \cite{Sturm:2003} in the setting of Gaussian colored noise.
\begin{lemma} \label{lem:tightcriterion} Given a complete and separable metric space $(E,\rho)$, let $(X^n)$ be a sequence of stochastic processes with sample paths in $D([0,\infty),E)$. The sequence is tight in $D([0,\infty),E)$ if the following conditions hold: \begin{itemize} \item[\rm (i)] For every $\varepsilon>0$ and rational $t\in[0,T]$, there exists a compact set $ \Gamma_{\varepsilon,T}\subset E$ such that \begin{equation} \label{eq:tightcriterion1} \inf_{n}\mathbb{P}[X^n(t)\in\Gamma_{\varepsilon,T}] \geq1-\varepsilon. \end{equation} \item[\rm (ii)] There exists $p>0$ such that \begin{equation} \label{eq:tightcriterion2} \lim_{\delta\rightarrow0}\sup_n\mathbb{E} \left[\sup_{0\leq t\leq T}\sup_{0\leq u\leq \delta}(\rho(X^n_{t+u},X^n_t)\wedge1)^p\right]=0. \end{equation} \end{itemize} \end{lemma} To verify condition (i) of Lemma \ref{lem:tightcriterion}, we need the following characterization of the relatively compact set in $L^p({[0,L]}),p\geq1$; see, e.g., Sturm \cite[Lemma 4.3]{Sturm:2003}. \begin{lemma} \label{lem:compactcriterion} A subset $\Gamma\subset L^p({[0,L]})$ for $p\geq1$ is relatively compact if and only if the following conditions hold: \begin{itemize} \item[\rm (a)] $\sup_{f\in\Gamma} \int_0^L\vert f(x)\vert ^pdx<\infty$, \item[\rm (b)] $\lim_{y\rightarrow0}\int_0^L\vert f(x+y)-f(x)\vert ^pdx=0$ uniformly for all $f\in\Gamma$, \item[\rm (c)] $\lim_{\gamma\rightarrow\infty} \int_{(L-\frac{L}{\gamma},L]}\vert f(x)\vert ^pdx=0$ for all $f\in\Gamma$. \end{itemize} \end{lemma} The proof of the tightness of $(u^n)_{n\geq1}$ is accomplished by verifying conditions (i) and (ii) in Lemma \ref{lem:tightcriterion}. To this end, we need some estimates on $(u^n)_{n\geq1}$, that is, the uniform bound estimate in Lemma \ref{lem:uniformbound}, the temporal difference estimate in Lemma \ref{lem:temporalestimation} and the spatial difference estimate in Lemma \ref{lem:spatialestimation}, respectively. \begin{lemma} \label{lem:uniformbound} Suppose that $\alpha\in(1,5/3)$ and for each $n\geq1$ $u^n$ is the solution to equation (\ref{eq:approximatingsolution}) given by {\rm Proposition \ref{th:Approximainresult}}. Then for given $T>0$ there exists a constant $C_{p,K,\alpha,T}$ such that \begin{equation} \label{eq:unformlybounded} \sup_n\mathbb{E}\left[\sup_{0\leq t\leq T}\vert \vert u^n_t\vert \vert _p^p\right]\leq C_{p,K,\alpha,T},\,\,\, \text{for}\,\,\,p\in(\alpha,5/3). \end{equation} \end{lemma} \begin{proof} For each $n\geq 1$, by (\ref{mildformapproxi0}) it is easy to see that $$\mathbb{E}\left[\sup_{0\leq t\leq T}\vert \vert u^n_t\vert \vert _p^p\right]\leq C_p(A_1+A_2),$$ where \begin{align*} A_1&=\mathbb{E}\left[\sup_{0\leq t\leq T}\Bigg\vert \Bigg\vert \int_0^LG_{t}(\cdot,y)u_0(y)dy\Bigg\vert \Bigg\vert _p^p\right],\\ A_2&=\mathbb{E}\left[\sup_{0\leq t\leq T}\Bigg\vert \Bigg\vert \int_0^{t+}\int_0^L\int_{\mathbb{R}\setminus\{0\}} G_{t-s}(\cdot,y)\varphi^n(u^n(s-,y))z \tilde{N}(ds,dy,dz)\Bigg\vert \Bigg\vert _p^p\right]. \end{align*} We separately estimate $A_1$ and $A_2$ as follows. For $A_1$, it holds by Young's convolution inequality and (\ref{eq:Greenetimation0}) that \begin{align*} \label{eq:A_1} A_1 &\leq C\mathbb{E}\left[\int_0^L \sup_{0\leq t\leq T}\left(\int_0^L|G_{t} (x,y)|dx\right) \vert u_0(y)\vert ^pdy\right]\leq C_T\mathbb{E}[\vert\vert u_0\vert \vert _p^p]. \end{align*} By Proposition \ref{th:Approximainresult}, we have $\mathbb{E}[\vert \vert u_0\vert \vert _p^p]<\infty$ for $p\in(\alpha,2]$, and so there exists a constant $C_{p,T}$ such that $A_1\leq C_{p,T}$. 
For $A_2$, we use the factorization method (see, e.g., Da Prato et al. \cite{Prato:1987}), which is based on the fact that for $0<\beta<1$ and $0\leq s< t$,
\begin{equation*}
\int_s^t(t-r)^{\beta-1}(r-s)^{-\beta}dr=\dfrac{\pi}{\sin(\beta\pi)};
\end{equation*}
indeed, the substitution $r=s+(t-s)\theta$ reduces the left-hand side to the Beta integral $B(1-\beta,\beta)=\Gamma(1-\beta)\Gamma(\beta)=\pi/\sin(\beta\pi)$ by Euler's reflection formula.
For any function $v: [0,\infty)\times {[0,L]} \rightarrow \mathbb{R}$ define
\begin{align*}
&\mathcal{J}^{\beta}v(t,x) :=\dfrac{\sin(\beta\pi)}{\pi}\int_0^t\int_0^L(t-s)^{\beta-1}G_{t-s}(x,y)v(s,y)dyds,\\
&\mathcal{J}^n_{\beta}v(t,x):=\int_0^{t+}\int_0^L\int_{\mathbb{R}\setminus\{0\}} (t-s)^{-\beta}G_{t-s}(x,y)\varphi^n(v(s-,y))z\tilde{N}(ds,dy,dz).
\end{align*}
By the stochastic Fubini Theorem and (\ref{eq:Greenetimation1}), we have
\begin{align*}
\mathcal{J}^{\beta}\mathcal{J}^n_{\beta}u^n(t,x) &=\dfrac{\sin(\beta\pi)}{\pi}\int_0^t\int_0^L (t-s)^{\beta-1}G_{t-s}(x,y)\Bigg(\int_0^{s+} \int_0^L\int_{\mathbb{R}\setminus\{0\}} (s-r)^{-\beta}\\
&\quad\quad\times G_{s-r}(y,m) \varphi^n(u^n(r-,m))z\tilde{N}(dr,dm,dz)\Bigg)dyds \\
&=\dfrac{\sin(\beta\pi)}{\pi}\int_0^{t+} \int_0^L\int_{\mathbb{R}\setminus\{0\}} \Bigg[\int_r^{t}(t-s)^{\beta-1}(s-r)^{-\beta}\\
&\quad\quad\times\Bigg(\int_0^LG_{t-s}(x,y) G_{s-r}(y,m)dy\Bigg)ds\Bigg] \varphi^n(u^n(r-,m))z\tilde{N}(dr,dm,dz) \\
&=\int_0^{t+}\int_0^L\int_{\mathbb{R}\setminus\{0\}}G_{t-s}(x,y)\varphi^n(u^n(s-,y))z\tilde{N}(ds,dy,dz).
\end{align*}
Thus,
$$A_2=\mathbb{E}\left[\sup_{0\leq t\leq T}\vert \vert \mathcal{J}^{\beta}\mathcal{J}^n_{\beta}u^n_t\vert \vert _p^p\right].$$
Until the end of the proof we fix a $0<\beta<1$ satisfying
\begin{align}
\label{eq:factorization}
1-\frac{1}{p}<\beta<\frac{3}{2p}-\frac{1}{2},
\end{align}
which is possible if and only if
$$\frac{3}{2p}-\frac{1}{2}-\Big(1-\frac{1}{p}\Big)>0,$$
that is, $p<5/3$; this is why the lemma requires the assumption $p<5/3$. Returning to the estimate of $A_2$, we first estimate $\mathbb{E}[\vert \vert \mathcal{J}^n_{\beta}u^n_t\vert \vert _p^p]$. For $p\in(\alpha,5/3)$, the Burkholder-Davis-Gundy inequality, (\ref{eq:element-inequ})-(\ref{eq:jumpestimate}) and (\ref{eq:glo-lin-growth}) imply that
\begin{align*}
\mathbb{E}[\vert \vert \mathcal{J}^n_{\beta}u^n_t\vert \vert _p^p] =&\int_0^L\mathbb{E}\left[\left\vert \int_0^{t+}\int_0^L\int_{\mathbb{R}\setminus\{0\}} (t-s)^{-\beta}G_{t-s}(x,y)\varphi^n(u^n(s-,y))z\tilde{N}(ds,dy,dz)\right\vert ^p\right]dx\\
\leq&C_p\int_0^L\int_0^{t}\int_0^L\int_{\mathbb{R}\setminus\{0\}} \mathbb{E}\left[\vert (t-s)^{-\beta}G_{t-s}(x,y)\varphi^n(u^n(s,y))z\vert ^p\right]\nu_{\alpha}(dz)dydsdx\\
\leq&C_{p,K,\alpha}\int_0^L\int_0^{t}\int_0^L \mathbb{E}\left[1+\vert u^n(s,y)\vert ^p\right]\vert t-s\vert ^{-\beta p}\vert G_{t-s}(x,y)\vert ^pdydsdx.
\end{align*}
Combining this with (\ref{eq:Greenetimation2}), we have
\begin{align*}
\mathbb{E}[\vert \vert \mathcal{J}^n_{\beta}u^n_t\vert \vert _p^p]\leq&C_{p,K,\alpha}\left(L+\mathbb{E}\left[\sup_{0\leq s\leq t}\vert \vert u^n_s\vert \vert _p^p\right]\right) \int_0^Ts^{-(\frac{p-1}{2}+\beta p)}ds.
\end{align*}
For $p<5/3$, by (\ref{eq:factorization}) we have
$$\int_0^Ts^{-(\frac{p-1}{2}+\beta p)}ds<\infty.$$
Therefore, there exists a constant $C_{p,K,\alpha,T}$ such that
\begin{equation}
\label{eq:momentestimation}
\mathbb{E}\left[\vert \vert \mathcal{J}^n_{\beta}u^n_t\vert \vert _p^p\right]\leq C_{p,K,\alpha,T}\left(1+\mathbb{E}\left[\sup_{0\leq s\leq t}\vert \vert u^n_s\vert \vert _p^p\right]\right).
\end{equation}
We now estimate $A_2=\mathbb{E}[\sup_{0\leq t\leq T}\vert \vert \mathcal{J}^{\beta}\mathcal{J}^n_{\beta}u^n_t\vert \vert _p^p]$.
The Minkowski inequality implies that
\begin{align}
\label{eq:A_2_1}
A_2 =&\mathbb{E}\left[\sup_{0\leq t\leq T}\dfrac{\sin(\pi\beta)}{\pi}\Bigg\vert \Bigg\vert \int_0^{t}\int_0^L(t-s)^{\beta-1} G_{t-s}(\cdot,y)\mathcal{J}^n_{\beta}u^n(s,y)dyds \Bigg\vert \Bigg\vert _p^p\right]\nonumber\\
\leq&\dfrac{\sin(\pi\beta)}{\pi}\mathbb{E}\left[\sup_{0\leq t\leq T}\left(\int_0^{t}(t-s)^{\beta-1} \Bigg\vert \Bigg\vert \int_0^LG_{t-s}(\cdot,y)\mathcal{J}^n_{\beta}u^n(s,y)dy\Bigg\vert \Bigg\vert _pds\right)^p\right].
\end{align}
By the H\"{o}lder inequality and (\ref{eq:Greenetimation0}), we have
\begin{align}
\label{eq:A_2_2}
&\Bigg\vert \Bigg\vert \int_0^LG_{t-s}(\cdot,y)\mathcal{J}^n_{\beta}u^n(s,y)dy\Bigg\vert \Bigg\vert _p\nonumber\\
&\quad=\left(\int_0^L\left\vert \int_0^L\vert G_{t-s}(x,y)\vert ^{\frac{p-1}{p}}\vert G_{t-s}(x,y)\vert ^{\frac{1}{p}}\mathcal{J}^n_{\beta}u^n(s,y) dy\right\vert ^pdx\right)^{\frac{1}{p}}\nonumber\\
&\quad\leq\left(\int_0^L\left\vert \left(\int_0^L\vert G_{t-s}(x,y)\vert dy\right)^{\frac{p-1}{p}} \left(\int_0^L\vert G_{t-s}(x,y)\vert \vert \mathcal{J}^n_{\beta}u^n(s,y)\vert ^pdy\right)^{\frac{1}{p}}\right\vert ^pdx\right)^{\frac{1}{p}}\nonumber\\
&\quad\leq\left(\sup_{x\in[0,L]}\int_0^L|G_{t-s}(x,y)|dy\right)^{\frac{p-1}{p}} \left(\int_0^L\int_0^L|G_{t-s}(x,y)|\vert \mathcal{J}^n_{\beta}u^n(s,y)\vert ^pdxdy\right)^{\frac{1}{p}}\nonumber\\
&\quad\leq C_T\vert \vert \mathcal{J}^n_{\beta}u^n_s\vert \vert _p.
\end{align}
Therefore, it follows from (\ref{eq:A_2_1}), (\ref{eq:A_2_2}), and the H\"{o}lder inequality that
\begin{align*}
A_2\leq&\dfrac{\sin(\pi\beta)C_{p,T}}{\pi}\mathbb{E}\left[\sup_{0\leq t\leq T}\left(\int_0^t(t-s)^{\beta-1}\vert \vert \mathcal{J}^n_{\beta}u^n_s\vert \vert _pds\right)^p\right]\\
\leq&\dfrac{\sin(\pi\beta)C_{p,T}}{\pi}\mathbb{E}\left[\sup_{0\leq t\leq T}\left(\int_0^{t}1^{\frac{p}{p-1}}ds\right)^{p-1} \left(\int_0^{t}(t-s)^{(\beta-1)p}\vert \vert \mathcal{J}^n_{\beta}u^n_s\vert \vert _p^pds\right)\right] \\
\leq&\dfrac{\sin(\pi\beta)C_{p,T}}{\pi}\int_0^{T} (T-s)^{(\beta-1)p} \mathbb{E}[\vert \vert \mathcal{J}^n_{\beta}u^n_s\vert \vert _p^p]ds.
\end{align*}
By (\ref{eq:momentestimation}), it also holds that
\begin{align}
\label{eq:A_2}
A_2 \leq&\dfrac{\sin(\pi\beta)C_{p,K,\alpha,T}}{\pi} \int_0^{T}(T-s)^{(\beta-1)p}\left(1+\mathbb{E} \left[\sup_{0\leq r\leq s}\vert \vert u^n_r\vert \vert _p^p\right]\right)ds\nonumber\\
\leq&\dfrac{\sin(\pi\beta)C_{p,K,\alpha,T}}{\pi} \left(1+\int_0^{T}(T-s)^{(\beta-1)p}\mathbb{E} \left[\sup_{0\leq r\leq s}\vert \vert u^n_r \vert \vert _p^p\right]ds\right).
\end{align}
Combining (\ref{eq:A_2}) and the estimate for $A_1$, we have for each $T>0$,
\begin{align*}
&\mathbb{E}\left[\sup_{0\leq t \leq T}\vert \vert u^n_t\vert \vert _p^p\right]\leq C_{p,T} +\dfrac{\sin(\pi\beta)C_{p,K,\alpha,T}}{\pi} \int_0^{T}(T-s)^{(\beta-1)p}\mathbb{E} \left[\sup_{0\leq r\leq s}\vert \vert u^n_r\vert \vert _p^p\right]ds.
\end{align*}
Since $\beta>1-1/p$, applying a generalized Gronwall's lemma (see, e.g., Lin \cite[Theorem 1.2]{Lin:2013}), we have
\begin{align*}
\sup_{n}\mathbb{E}\left[\sup_{0\leq t \leq T}\vert \vert u^n_t\vert \vert _p^p\right]\leq C_{p,K,\alpha,T}, \,\,\,\text{for}\,\,\,p\in(\alpha,5/3),
\end{align*}
which completes the proof. $\Box$
\end{proof}
\begin{lemma}
\label{lem:temporalestimation}
Suppose that $\alpha\in(1,5/3)$ and, for each $n\geq1$, $u^n$ is the solution to equation (\ref{eq:approximatingsolution}) given by {\rm Proposition \ref{th:Approximainresult}}.
Then for given $T>0$, $0\leq h\leq\delta$ and $p\in(\alpha,5/3)$ \begin{equation} \label{eq:temporalestimation} \lim_{\delta\rightarrow0}\sup_n\mathbb{E}\left[\sup_{0\leq t\leq T}\sup_{0\leq h\leq \delta}\vert \vert u^n_{t+h}-u^n_t\vert \vert _p^p\right]=0. \end{equation} \end{lemma} \begin{proof} For each $n\geq 1$, by the factorization method in the proof of Lemma \ref{lem:uniformbound}, we have $$\mathbb{E}\left[\sup_{0\leq t\leq T}\sup_{0\leq h\leq \delta}\vert \vert u^n_{t+h}-u^n_t\vert \vert _p^p\right]\leq C_p(B_1+B_2),$$ where \begin{align*} B_1&=\mathbb{E}\left[\sup_{0\leq t\leq T} \sup_{0\leq h\leq \delta}\Bigg\vert \Bigg\vert \int_0^L(G_{t+h}(\cdot-y)-G_{t}(\cdot- y))u_0(y)dy\Bigg\vert \Bigg\vert _p^p\right],\\ B_2&=\mathbb{E}\left[\sup_{0\leq t\leq T} \sup_{0\leq h\leq \delta}\vert \vert \mathcal{J}^{\beta} \mathcal{J}^n_{\beta}u^n_{t+h}-\mathcal{J} ^{\beta}\mathcal{J}^n_{\beta}u^n_t\vert \vert _p^p\right]. \end{align*} For $B_1$, Young's convolution inequality and (\ref{eq:Greenetimation0}) imply that \begin{align*} B_1 &\leq \mathbb{E}\left[\int_0^L\sup_{0\leq t\leq T}\sup_{0\leq h\leq \delta}\left(\int_0^L(|G_{t+h}(x,y)|+|G_{t}(x,y)|)dx\right)\vert u_0(y)\vert ^pdy\right] \leq C_T\mathbb{E}[\vert \vert u_0\vert \vert _p^p]<\infty. \end{align*} Therefore, it holds by Lebesgue's dominated convergence theorem that $B_1$ converges to 0 as $\delta\rightarrow0$. For $B_2$, it is easy to see that $$B_2\leq \frac{\sin(\beta\pi)C_p}{\pi} (B_{2,1}+B_{2,2}+B_{2,3}),$$ where \begin{align*} B_{2,1}&=\mathbb{E}\Bigg[\sup_{0\leq t\leq T}\sup_{0\leq h\leq \delta}\Bigg\vert \Bigg\vert \int_0^{t}\int_0^L(t-s)^{\beta-1} (G_{t+h-s}(\cdot,y)-G_{t-s}(\cdot,y)) \mathcal{J}^n_{\beta}u^n(s,y)dyds\Bigg\vert \Bigg\vert _p^p\Bigg],\\ B_{2,2}&=\mathbb{E}\Bigg[\sup_{0\leq t\leq T}\sup_{0\leq h\leq \delta}\Bigg\vert \Bigg\vert \int_0^{t}\int_0^L((t+h-s)^{\beta-1}-(t-s)^{\beta-1})G_{t+h-s}(\cdot,y)\mathcal{J}^n_{\beta}u^n(s,y)dyds\Bigg\vert \Bigg\vert _p^p\Bigg],\\ B_{2,3}&=\mathbb{E}\left[\sup_{0\leq t\leq T}\sup_{0\leq h\leq \delta}\Bigg\vert \Bigg\vert \int_{t}^{t+h}\int_0^L(t+h-s)^{\beta-1}G_{t+h-s}(\cdot,y)\mathcal{J}^n_{\beta}u^n(s,y)dyds\Bigg\vert \Bigg\vert _p^p\right]. \end{align*} By the assumption $p\in(\alpha,5/3)$ of this lemma we can choose a $0<\beta<1$ satisfying $1-1/p<\beta<3/2p-1/2$. By Lemma \ref{lem:uniformbound} and (\ref{eq:momentestimation}), there exists a constant $C_{p,K,\alpha,T}$ such that \begin{equation} \label{ineq:0} \sup_{0\leq t\leq T}\mathbb{E}\left[\vert \vert \mathcal{J}^n_{\beta}u^n_t\vert \vert _p^p\right]\leq C_{p,K,\alpha,T}. \end{equation} To estimate $B_{2,1}$, we set $G^h_t(x,y)=G_{t+h}(x,y)-G_{t}(x,y)$. Similar to the estimates for (\ref{eq:A_2_1}) and (\ref{eq:A_2_2}) in the proof of Lemma \ref{lem:uniformbound}, we have \begin{align*} B_{2,1}\leq&\mathbb{E}\Bigg[\sup_{0\leq t\leq T} \sup_{0\leq h\leq \delta}\Bigg(\int_0^{t}(t-s)^{\beta-1} \Bigg(\sup_{x\in [0,L]} \int_0^LG^h_{t-s}(x,y)dy\Bigg)^{\frac{p-1}{p}} \vert \vert \mathcal{J}^n_{\beta}u^n_s\vert \vert _pds\Bigg)^p\Bigg]. 
\end{align*}
It also follows from the H\"{o}lder inequality and (\ref{ineq:0}) that
\begin{align*}
B_{2,1}\leq&C_{p,T}\sup_{0\leq t\leq T}\mathbb{E}\left[\vert \vert \mathcal{J}^n_{\beta}u^n_t\vert \vert _p^p\right] \sup_{0\leq h\leq \delta} \left(\int_0^{T}s^{(\beta-1)p}\left(\sup_{x\in [0,L]}\int_0^L\vert G_{s}^h(x,y)\vert dy\right)^{p-1}ds\right)\\
\leq&C_{p,K,\alpha,T}\sup_{0\leq h\leq \delta} \left(\int_0^{T}s^{(\beta-1)p}\left(\sup_{x\in [0,L]}\int_0^L\vert G^h_{s}(x,y)\vert dy\right)^{p-1}ds\right).
\end{align*}
Moreover, since $\beta>1-1/p$, it holds by (\ref{eq:Greenetimation0}) that
\begin{align*}
&\int_0^{T}s^{(\beta-1)p}\left(\sup_{x\in [0,L]}\int_0^L\vert G_{s}^h(x,y)\vert dy\right)^{p-1}ds\\
&\quad\leq\int_0^{T}s^{(\beta-1)p}\left(\sup_{x\in [0,L]}\left(\int_0^L|G_{s+h}(x,y)|dy+\int_0^L|G_{s}(x,y)|dy\right)\right)^{p-1}ds\\
&\quad\leq C_{p,T}\int_0^{T}s^{(\beta-1)p}ds<\infty.
\end{align*}
Thus, Lebesgue's dominated convergence theorem implies that $B_{2,1}$ converges to 0 as $\delta\rightarrow0$. For $B_{2,2}$, the Minkowski inequality and Young's convolution inequality imply that
\begin{align*}
B_{2,2}\leq&\mathbb{E}\Bigg[\sup_{0\leq t\leq T}\sup_{0\leq h\leq \delta} \Bigg(\int_0^{t}\vert (t+h-s)^{\beta-1}-(t-s)^{\beta-1}\vert \Bigg\vert \Bigg\vert \int_0^LG_{t+h-s}(\cdot,y)\mathcal{J}^n_{\beta}u^n(s,y)dy\Bigg\vert \Bigg\vert _p ds\Bigg)^p\Bigg]\\
\leq&C_{p,T}\mathbb{E}\left[\sup_{0\leq t\leq T}\sup_{0\leq h\leq \delta}\left(\int_0^{t}\vert (t+h-s)^{\beta-1}-(t-s)^{\beta-1}\vert \vert \vert \mathcal{J}^n_{\beta}u^n_s\vert \vert _p ds\right)^p\right].
\end{align*}
By the H\"{o}lder inequality and (\ref{ineq:0}) we have for $\beta>1-1/p$,
\begin{align*}
B_{2,2}\leq&C_{p,T}\mathbb{E}\left[\sup_{0\leq t\leq T}\sup_{0\leq h\leq \delta} \int_0^{t}\vert (t+h-s)^{\beta-1}-(t-s)^{\beta-1}\vert ^{p} \vert \vert \mathcal{J}^n_{\beta}u^n_s\vert \vert _p^p ds\right]\\
\leq&C_{p,T}\sup_{0\leq t\leq T}\mathbb{E}[\vert \vert \mathcal{J}^n_{\beta}u^n_t\vert \vert _p^p]\int_0^{T}\vert (s+\delta)^{\beta-1}-s^{\beta-1}\vert ^pds\\
\leq&C_{p,K,\alpha,T}\int_0^{T}\vert (s+\delta)^{\beta-1}-s^{\beta-1}\vert ^pds<\infty.
\end{align*}
Therefore, by Lebesgue's dominated convergence theorem, we know that $B_{2,2}$ converges to 0 as $\delta\rightarrow0$. For $B_{2,3}$, similar to $B_{2,2}$, we get
\begin{align}
\label{ineq:B_{2_3}}
B_{2,3}\leq&\mathbb{E}\left[\sup_{0\leq t\leq T}\sup_{0\leq h\leq \delta} \left(\int_{t}^{t+h}(t+h-s)^{\beta-1}\Big\vert \Big\vert \int_0^LG_{t+h-s}(\cdot,y)\mathcal{J}^n_{\beta}u^n(s,y)dy\Big\vert \Big\vert _p ds\right)^p\right]\nonumber\\
\leq&C_{p,T}\mathbb{E}\left[\sup_{0\leq t\leq T}\sup_{0\leq h\leq \delta}\left( \int_{t}^{t+h}(t+h-s)^{\beta-1}\vert \vert \mathcal{J}^n_{\beta}u^n_s\vert \vert _p ds\right)^p\right]\nonumber\\
\leq&C_{p,T}\mathbb{E}\left[\sup_{0\leq t\leq T}\sup_{0\leq h\leq \delta} \int_{t}^{t+h}\vert (t+h-s)^{\beta-1}\vert ^p\vert \vert \mathcal{J}^n_{\beta}u^n_s\vert \vert _p^p ds\right]\nonumber\\
\leq&C_{p,T}\sup_{0\leq t\leq T}\mathbb{E}[\vert \vert \mathcal{J}^n_{\beta}u^n_t\vert \vert _p^p]\sup_{0\leq h\leq \delta} \int_{t}^{t+h}\vert (t+h-s)^{\beta-1}\vert ^pds \leq C_{p,K,\alpha,T}\int_0^{\delta}s^{(\beta-1)p}ds.
\end{align}
Since $\beta>1-1/p$, we can conclude that the right-hand side of (\ref{ineq:B_{2_3}}) converges to 0 as $\delta\rightarrow0$. Therefore, by the estimates of $B_{2,1}, B_{2,2}, B_{2,3}$ and $B_1$, the desired result (\ref{eq:temporalestimation}) holds, which completes the proof.
$\Box$ \end{proof} \begin{lemma} \label{lem:spatialestimation} For each $n\geq1$ let $u^n$ be the solution to equation (\ref{eq:approximatingsolution}) given by {\rm Proposition \ref{th:Approximainresult}}. Then for given $t\in[0,\infty)$, $0\leq\vert x_1\vert \leq\delta$ and $p\in(\alpha,2]$ \begin{equation} \label{eq:spatialestimation} \lim_{\delta\rightarrow 0}\sup_n\mathbb{E}\left[ \sup_{\vert x_1\vert \leq \delta}\vert \vert u^n(t,\cdot+x_1)-u^n(t, \cdot)\vert \vert _p^p\right]=0. \end{equation} \end{lemma} \begin{proof} Since the shift operator is continuous in $L^p([0,L])$, then for each $n\geq1$ and $\delta>0$ there exists a pathwise $x_1^{n,\delta}(t)\in\mathbb{R}$ such that $\vert x_1^{n,\delta}(t)\vert \leq\delta$ and $$\sup_{\vert x_1\vert \leq \delta}\vert \vert u^n(t,\cdot+x_1)-u^n(t,\cdot)\vert \vert _p^p=\vert \vert u^n(t,\cdot+x_1^{n,\delta}(t))-u^n(t,\cdot)\vert \vert _p^p.$$ As before, it is easy to see that $$\mathbb{E}[\vert \vert u^n(t,\cdot+x_1^{n,\delta}(t))-u^n(t,\cdot) \vert \vert _p^p]\leq C_p(C_1+C_2),$$ where \begin{align*} C_1&=\mathbb{E}\left[ \bigg\vert \bigg\vert \int_0^L (G_{t}(\cdot+x_1^{n,\delta}(t),y) -G_{t}(\cdot,y))u_0(y)dy\bigg\vert \bigg\vert _p^p \right], \\ C_2&=\mathbb{E}\bigg[\bigg\vert \bigg\vert \int_0^{t+}\int_0^L\int_{\mathbb{R}\setminus\{0\}} (G_{t-s}(\cdot+{x_1^{n,\delta}(t)},y) -G_{t-s}(\cdot,y)) \varphi^n(u^n(s-,y)) z\tilde{N}(dz,dy,ds)\bigg\vert \bigg\vert _p^p\bigg]. \end{align*} For $C_1$, Young's convolution inequality and (\ref{eq:Greenetimation0}) imply that \begin{align*} C_1&\leq\mathbb{E}\left[\int_0^L \left(\int_0^L(|G_{t}(x+x_1^{n, \delta}(t),y)|+|G_{t}(x,y))|dx\right) \vert u_0(y)\vert ^pdy\right] \leq C_T\mathbb{E}[\vert \vert u_0\vert \vert _p^p]<\infty. \end{align*} Thus, the Lebesgue dominated convergence theorem implies that $C_1$ converges to 0 as $\delta\rightarrow0$. For $C_2$, it follows from the Burkholder-Davis-Gundy inequality, (\ref{eq:element-inequ})-(\ref{eq:jumpestimate}), (\ref{eq:glo-lin-growth}) and (\ref{eq:Greenetimation2}) that for $p\in(\alpha,2]$ \begin{align*} C_2 =&\int_0^L\mathbb{E}\bigg[\bigg\vert \int_0^{t+}\int_0^L\int_{\mathbb{R}\setminus\{0\}} (G_{t-s}(x+{x_1^{n,\delta}(t)},y)-G_{t-s}(x,y))\varphi^n(u^n(s-,y))z \tilde{N}(ds,dy,dz)\bigg\vert ^p\bigg]dx\nonumber\\ \leq&C_p\int_0^L\int_0^{t}\int_0^L\int_{\mathbb{R}\setminus\{0\}} \mathbb{E}[\vert (G_{t-s}(x+{x_1^{n,\delta}(t)},y)-G_{t-s}(x,y))\varphi^n(u^n(s,y))z\vert ^p]\nu_{\alpha}(dz)dydsdx\nonumber\\ \leq&C_{p,K,\alpha}\int_0^L\int_0^{t}\int_0^L\mathbb{E}[(1+\vert u^n(s,y)\vert )^p] \vert (G_{t-s}(x+{x_1^{n,\delta}(t)},y)-G_{t-s}(x,y))\vert ^pdydsdx\nonumber\\ \leq&C_{p,K,\alpha}\left(\int_0^L\int_0^{t}\vert G_{t-s}(x+{x_1^{n,\delta}(t)},y)-G_{t-s}(x,y)\vert ^pdsdx\right) \left(L+\sup_{0\leq s\leq t}\mathbb{E}\left[\vert \vert u^n_s\vert \vert _p^p\right]\right) \\ \leq& C_{p,K,\alpha}\left(\int_0^L\int_0^{t}(\vert G_{t-s}(x+{x_1^{n,\delta}(t)},y)\vert ^p+\vert G_{t-s}(x,y)\vert ^p)dsdx\right) \left(L+\sup_{0\leq s\leq t}\mathbb{E}\left[\vert \vert u^n_s\vert \vert _p^p\right]\right) \\ \leq& C_{p,K,\alpha}\left(\int_0^t(t-s)^{-\frac{p-1}{2}}ds\right) \left(L+\sup_{0\leq s\leq t}\mathbb{E}\left[\vert \vert u^n_s\vert \vert _p^p\right]\right) \end{align*} Therefore, it holds by (\ref{eq:Approximomentresult}) and Lebesgue's dominated convergence theorem that $C_2$ converges to 0 as $\delta\rightarrow0$. 
Hence, by the estimates of $C_1$ and $C_2$, we obtain \begin{align*} \lim_{\delta\rightarrow 0}\sup_n\mathbb{E}\bigg[\sup_{\vert x_1\vert \leq \delta}\vert \vert u^n(t,\cdot+x_1)-u^n(t,\cdot)\vert \vert _p^p\bigg]&=0, \end{align*} which completes the proof. $\Box$ \end{proof} \begin{proposition} \label{prop:tightnessresult2} Suppose that $\alpha\in(1,5/3)$. The sequence of solutions $(u^n)_{n\geq1}$ to equation (\ref{eq:approximatingsolution}) given by {\rm Proposition \ref{th:Approximainresult}} is tight in $D([0,\infty),L^p([0,L]))$ for $p\in(\alpha,5/3)$. \end{proposition} \begin{proof} From (\ref{eq:Approximomentresult}) and Markov's inequality, for each $\varepsilon>0$, $p\in(\alpha,2]$ and $T>0$ there exists a $N\in\mathbb{N}$ such that \begin{align*} \sup\limits_n\mathbb{P}\left[\vert \vert u^n_t\vert \vert _p^p>N\right]\leq\dfrac{\varepsilon}{3},\quad t\in [0,T]. \end{align*} Let $\Gamma^1_{\varepsilon,T}$ be a closed set defined by \begin{align} \label{eq:Gamma1} \Gamma^1_{\varepsilon,T}:=\{v_t\in L^p([0,L]): \vert \vert v_t\vert \vert _p^p\leq N,t\in[0,T]\}. \end{align} By Lemma \ref{lem:spatialestimation} and Markov's inequality, it holds that for each $\varepsilon>0$, $p\in(\alpha,2]$ and $T>0$ \begin{equation*} \lim_{\delta\rightarrow 0}\sup_n\mathbb{P}\left[ \sup_{\vert x_1\vert \leq \delta}\vert \vert u^n(t,\cdot+x_1)-u^n(t,\cdot)\vert \vert _p^p>\varepsilon\right]=0,\quad t\in[0,T]. \end{equation*} Then for $k\in\mathbb{N}$ we can choose a sequence $(\delta_k)_{k\geq1}$ with $\delta_k\rightarrow0$ as $k\rightarrow\infty$ such that \begin{align*} \sup\limits_n\mathbb{P}\left[\sup_{\vert x_1\vert \leq \delta_k}\vert \vert u^n(t,\cdot+x_1)-u^n(t,\cdot)\vert \vert _p^p>\frac{1}{k} \right]\leq\dfrac{\varepsilon}{3}2^{-k},\quad t\in[0,T]. \end{align*} Let $\Gamma^2_{\varepsilon,T}$ be a closed set defined by \begin{align} \label{eq:Gamma2} \Gamma^2_{\varepsilon,T}:=\bigcap_{k=1}^{\infty} \left\{v_t\in L^p([0,L]): \sup_{\vert x_1\vert \leq \delta_k}\vert \vert v(t,\cdot+x_1)-v(t,\cdot)\vert \vert _p^p\leq\frac{1}{k},t\in[0,T] \right\}. \end{align} We next prove that for each $\varepsilon>0$ and $p\in(\alpha,2]$ , \begin{equation} \label{eq:compactset1} \lim_{\gamma\rightarrow \infty}\sup_n\mathbb{P}\left[\int_{(L-\frac{L}{\gamma},L]}\vert u^n(t,x)\vert ^pdx>\varepsilon\right]=0. \end{equation} It is easy to see that \begin{align*} \mathbb{E}\bigg[\int_{(L-\frac{L}{\gamma},L]}\vert u^n(t,x)\vert ^pdx\bigg] &=\mathbb{E}\bigg[\int_0^L \vert u^n(t,x)\vert ^p1_{(L-\frac{L}{\gamma},L]}(x)dx\bigg] \leq C_p(D_1+D_2), \end{align*} where \begin{align*} D_1&=\int_0^L\mathbb{E}\left[\left\vert \int_0^LG_{t} (x,y)u_0(y)dy\right\vert ^p\right] 1_{(L-\frac{L}{\gamma},L]}(x)dx, \\ D_2&=\int_0^L\mathbb{E}\Bigg[\Bigg\vert \int_0^{t+} \int_0^L\int_{\mathbb{R}\setminus\{0\}} G_{t-s}(x,y) \varphi^n(u^n(s-,y))z \tilde{N}(ds,dy,dz) \Bigg\vert ^p\Bigg]1_{(L-\frac{L}{\gamma},L]}(x)dx. \end{align*} It is easy to prove that $D_1$ converges to 0 as $\gamma\rightarrow\infty$ by using Young's convolution inequality and Lebesgue's dominated convergence theorem. 
For $D_2$, it holds by the Burkholder-Davis-Gundy inequality, (\ref{eq:element-inequ})-(\ref{eq:jumpestimate}) and (\ref{eq:glo-lin-growth}) that for $p\in(\alpha,2]$
\begin{align*}
D_2 \leq &C_p\int_0^L\int_0^{t} \int_0^L\int_{\mathbb{R}\setminus\{0\}} \mathbb{E}[\vert G_{t-s}(x,y)\varphi^n(u^n(s,y))z\vert ^p] 1_{(L-\frac{L}{\gamma},L]}(x)\nu_{\alpha}(dz)dydsdx \\
\leq&C_{p,K,\alpha}\int_0^L\int_0^{t} \int_0^L\mathbb{E}[(1+\vert u^n(s,y)\vert )^p]\vert G_{t-s}(x,y)\vert ^p1_{(L-\frac{L}{\gamma},L]}(x)dydsdx \\
\leq&C_{p,K,\alpha,T}\left(L+\sup_{0\leq t\leq T}\mathbb{E} \left[\vert \vert u^n_t\vert \vert _p^p\right]\right) \int_0^L\int_0^{t}(t-s)^{-\frac{p-1}{2}} 1_{(L-\frac{L}{\gamma},L]}(x)dsdx.
\end{align*}
Since $p\leq2$, it holds that
\begin{align*}
D_2\leq C_{p,K,\alpha,T} \left(\int_0^L1_{(L-\frac{L}{\gamma},L]}(x)dx\right)\left(L+\sup_{0\leq t\leq T}\mathbb{E} \left[\vert \vert u^n_t\vert \vert _p^p\right]\right).
\end{align*}
By (\ref{eq:Approximomentresult}), $D_2$ converges to 0 as $\gamma\rightarrow\infty$. Therefore, (\ref{eq:compactset1}) is obtained from the estimates of $D_1$ and $D_2$ and Markov's inequality. For any $k\in\mathbb{N}$ and $T>0$ we can choose a sequence $(\gamma_k)_{k\geq1}$ with $\gamma_k\rightarrow\infty$ as $k\rightarrow\infty$ such that
\begin{align*}
\sup\limits_n\mathbb{P}\left [\int_{(L-\frac{L}{\gamma_k},L]}\vert u^n(t,x)\vert ^pdx>\frac{1}{k}\right] \leq\dfrac{\varepsilon}{3}2^{-k},\quad t\in[0,T].
\end{align*}
Let $\Gamma^3_{\varepsilon,T}$ be a closed set defined by
\begin{align}
\label{eq:Gamma3}
\Gamma^3_{\varepsilon,T}:=\bigcap_{k=1}^{\infty} \left\{v_t\in L^p([0,L]): \int_{(L-\frac{L}{\gamma_k},L]}\vert v(t,x)\vert ^pdx\leq\frac{1}{k}, t\in[0,T] \right\}.
\end{align}
Combining (\ref{eq:Gamma1}), (\ref{eq:Gamma2}) and (\ref{eq:Gamma3}), we define
\begin{align*}
\Gamma_{\varepsilon,T}:=\Gamma^1_{\varepsilon,T} \cap\Gamma^2_{\varepsilon,T} \cap\Gamma^3_{\varepsilon,T}.
\end{align*}
Then $\Gamma_{\varepsilon,T}$ is a closed set in $L^p([0,L])$ for $p\in(\alpha,2]$. For any function $f\in\Gamma_{\varepsilon,T}$, the definition of $\Gamma_{\varepsilon,T}$ implies that conditions (a)-(c) in Lemma \ref{lem:compactcriterion} hold, and so $\Gamma_{\varepsilon,T}$ is a relatively compact set in $L^p([0,L])$ for $p\in(\alpha,2]$. Combining the closedness and the relative compactness, we conclude that $\Gamma_{\varepsilon,T}$ is a compact set in $L^p([0,L])$ for $p\in(\alpha,2]$. Moreover, the definition of $\Gamma_{\varepsilon,T}$ implies that
\begin{align*}
\inf_n\mathbb{P} [u^n_t\in\Gamma_{\varepsilon,T}]&\geq1- \frac{\varepsilon}{3}\left(1+2\sum_{k=1}^{\infty} 2^{-k}\right)=1-\varepsilon,
\end{align*}
which verifies condition (i) of Lemma \ref{lem:tightcriterion}. Condition (ii) of Lemma \ref{lem:tightcriterion} is verified by Lemma \ref{lem:temporalestimation} with $p\in(\alpha,5/3)$. Therefore, $(u^n)_{n\geq1}$ is tight in $D([0,\infty),L^p([0,L]))$ for $p\in(\alpha,5/3)$, which completes the proof. $\Box$
\end{proof}
\begin{Tproof}\textbf{~of Theorem \ref{th:mainresult2}.} According to Proposition \ref{prop:tightnessresult2}, there exists a $D([0,\infty),L^p([0,L]))$-valued random variable $u$ such that $u^n$ converges to $u$ in distribution in the Skorokhod topology.
The Skorokhod Representation Theorem yields that there exists another filtered probability space $(\hat{\Omega}, \hat{\mathcal{F}}, (\hat{\mathcal{F}}_t)_{t\geq0},\hat{\mathbb{P}})$ and on it a further subsequence $(\hat{u}^n)_{n\geq1}$ and $\hat{u}$, which have the same distributions as $(u^n)_{n\geq1}$ and $u$, respectively, such that $\hat{u}^n$ almost surely converges to $\hat{u}$ in the Skorokhod topology. The rest of the proof, including the construction of a truncated $\alpha$-stable white noise $\hat{L}_{\alpha}$ such that $(\hat{u},\hat{L}_{\alpha})$ is a weak solution to equation (\ref{eq:originalequation1}), is the same as in the proof of Theorem \ref{th:mainresult} and is omitted. Since $\hat{u}^n$ has the same distribution as $u^n$ for each $n\geq1$, the moment estimate (\ref{eq:unformlybounded}) in Lemma \ref{lem:uniformbound} carries over to $\hat{u}^n$, that is,
$$ \sup_{n\geq1}\hat{\mathbb{E}}\left[\sup_{0\leq t\leq T}\vert \vert \hat{u}^n_t\vert \vert _p^p\right]\leq C_{p,K,\alpha,T}. $$
Hence, by Fatou's lemma,
$$ \hat{\mathbb{E}}\left[\sup_{0\leq t\leq T}\vert \vert \hat{u}_t\vert \vert _p^p\right]\leq\liminf_{n\rightarrow\infty} \hat{\mathbb{E}}\left[\sup_{0\leq t\leq T}\vert \vert \hat{u}^n_t\vert \vert _p^p\right]<\infty. $$
This yields the uniform $p$-moment estimate (\ref{eq:momentresult2}). Similarly, we can obtain the uniform stochastic continuity (\ref{eq:timeregular}) by Lemma \ref{lem:temporalestimation}. $\Box$
\end{Tproof}
\noindent \textbf{Acknowledgements} This work is supported by the National Natural Science Foundation of China (NSFC) (Nos. 11631004, 71532001) and the Natural Sciences and Engineering Research Council of Canada (RGPIN-2021-04100).
\end{document}
\begin{document}
\begin{abstract}
We propose an approximate solver for multi-medium Riemann problems with materials described by a family of general Mie-Gr{\"u}neisen equations of state, which are widely used in practical applications. The solver provides the interface pressure and normal velocity by an iterative method. The well-posedness and convergence of the solver are verified under mild assumptions on the equations of state. To validate the solver, it is employed to compute the numerical flux at phase interfaces in a numerical scheme on Eulerian grids that was developed recently for compressible multi-medium flows. Numerical examples are presented for Riemann problems, air blast and underwater explosion applications.
\end{abstract}
\begin{keyword}
Multi-medium Riemann problem, approximate Riemann solver, Mie-Gr{\"u}neisen equation of state, multi-medium flow
\end{keyword}
\title{An Approximate Solver for Multi-medium Riemann Problem with Mie-Gr{\"u}neisen Equation of State}
\section{Introduction}
Numerical simulations of compressible multi-medium flow are of great interest in practical applications, such as mechanical engineering and the chemical industry. Many conservative Eulerian algorithms perform very well for single-medium compressible flows. However, when such an algorithm is employed to compute multi-medium flows, numerical inaccuracies may occur at the material interfaces \cite{Abgrall1996, Saurel2007, Liu2003}, due to the large discrepancy of densities and equations of state across the interface. The simulation of compressible multi-medium flow in an Eulerian framework requires special attention in describing the interface connecting distinct fluids, especially for problems that involve highly nonlinear equations of state. Several techniques have been developed to treat multi-medium flow interactions. See \cite{Liu2003, Abgrall2001, Karni1994, Arienti2004, Shyue2001, Saurel1999, Price2015} for instance. A typical procedure for multi-medium compressible flows on Eulerian grids mainly consists of two steps: the interface capture and the interaction between different fluids. There are mainly two different approaches in the literature, the diffuse interface method (DIM) and the sharp interface method (SIM). The former \cite{Abgrall1996, Abgrall2001, Saurel1999, Saurel2009, Petitpas2009, Ansari2013} treats the interface as a diffuse zone, and smears out the interface over a certain number of grid cells to avoid pressure oscillations. Diffuse interfaces correspond to artificial mixtures created by numerical diffusion, and the key issue is to fulfill interface conditions within this artificial mixture. The latter assumes the interface to be a sharp contact discontinuity, and different fluids are immiscible. Several approaches such as the volume of fluid (VOF) method \cite{Scardovelli1999, Noh1976}, level set method \cite{Sethian2001, Sussman1994}, moment of fluid (MOF) method \cite{Ahn2007, Dyadechko2008, Anbarlooei2009} and front-tracking method \cite{Glimm1998, Tryggvason2001} are used extensively to capture the interface. A key element for both diffuse and sharp interface methods is to determine the interface states. The accurate prediction of the interface states can be used to stabilize the numerical diffusion in diffuse interface methods, and to compute the numerical flux and interface motion in sharp interface methods. One common approach is to solve a multi-medium Riemann problem, which encodes the fundamental physical and mathematical properties of the governing equations.
Indeed, the Riemann problem plays a key role in understanding the wave structures, since a general fluid flow may be interpreted as a nonlinear superposition of Riemann solutions. The solution of a multi-medium Riemann problem depends not only on the initial states at each side of the interface, but also on the forms of the equations of state. For some simple equations of state, such as ideal gas, covolume gas or stiffened gas, the solution of the Riemann problem can be computed to any desired accuracy with an exact solver. While the Riemann problems for the above equations of state have been fully investigated in \cite{Godunov1976, Plohr1988, Gottlieb1988, Toro2008} for instance, there exist some difficulties in the cases of general nonlinear equations of state due to their high nonlinearity. A variety of methods to solve the corresponding Riemann problems have then been proposed. For example, Larini \textit{et al.} \cite{Larini1992} developed an exact Riemann solver and applied their method to a fifth-order virial equation of state (EOS), which is particularly suited to the gaseous detonation products of high explosive compounds. Shyue \cite{Shyue2001} developed a Roe-type approximate Riemann solver for the Mie-Gr{\"u}neisen EOS with variable Gr{\"u}neisen coefficient. Quartapelle \textit{et al.} \cite{Quartapelle2003} proposed an exact Riemann solver by applying the Newton-Raphson iteration to the system of two nonlinear equations imposing the equality of pressure and of velocity, assuming as unknowns the two values of the specific volume on each side of the interface, and implemented it for the van der Waals gas. Arienti \textit{et al.} \cite{Arienti2004} applied a Roe-Glaister solver to compute the equations combining the Euler equations involving chemical reaction with the Mie-Gr{\"u}neisen EOS. More recently, Rallu \cite{Rallu2009} and Farhat \textit{et al.} \cite{Farhat2012} utilized a sparse grid technique to tabulate the solutions of exact multi-medium Riemann problems. Lee \textit{et al.} \cite{Lee2013} developed an exact Riemann solver for the Mie-Gr{\"u}neisen EOS with constant Gr{\"u}neisen coefficient, where the integral terms are evaluated using an iterative Romberg algorithm. Banks \cite{Banks2010} and Kamm \cite{Kamm2015} developed a Riemann solver for the convex Mie-Gr{\"u}neisen EOS by solving a nonlinear equation for the density increment involved in the numerical integration of rarefaction curves, and chose the JWL (Jones-Wilkins-Lee) EOS as a representative case. In this paper, we propose an approximate multi-medium Riemann solver for a family of general Mie-Gr{\"u}neisen equations of state in an iterative manner, which provides a strategy to reproduce the physics of interaction between two mediums across the interface. Several mild conditions on the coefficients of the Mie-Gr{\"u}neisen EOS are assumed to ensure the convexity of the equations of state and the well-posedness of our Riemann solver. The algebraic equation of the Riemann problem is solved by an inexact Newton method \cite{Dembo1982}, where the function and its derivative are evaluated approximately depending on the wave configuration. The convergence of our Riemann solver is also analyzed. To validate the proposed solver, we employ it in the computation of two-medium compressible flows with Mie-Gr{\"u}neisen EOS, as an extension of the numerical scheme that was developed recently for two-medium compressible flows with ideal gases \cite{Guo2016}. The rest of this paper is organized as follows.
In Section \ref{sec:riemann}, a solution strategy for the multi-medium Riemann problem with arbitrary Mie-Gr{\"u}neisen equations of state is presented. In Section \ref{sec:aps}, the procedures of our approximate Riemann solver are outlined, and the well-posedness and convergence are analyzed. In Section \ref{sec:model}, the application of our Riemann solver in two-medium compressible flow calculations \cite{Guo2016} is briefly introduced. In Section \ref{sec:num}, several classical Riemann problems and applications to air blast and underwater explosions are presented to validate the accuracy and robustness of our solver. Finally, a short conclusion is drawn in Section \ref{sec:conclusion}.
\section{Multi-medium Riemann Problem}
\label{sec:riemann}
Let us consider the following one-dimensional multi-medium Riemann problem of the compressible Euler equations
\begin{subequations}
\begin{equation}
\dfrac{\partial}{\partial\tau} \begin{bmatrix} \rho \\ \rho u \\ E \end{bmatrix} +\dfrac{\partial}{\partial\xi} \begin{bmatrix} \rho u \\ \rho u^2+p \\ (E+p)u \end{bmatrix} =\bm 0,\quad E=\rho e+\dfrac{1}{2}\rho u^2.
\end{equation}
Here $\tau$ is time and $\xi$ is the spatial coordinate, and $\rho$, $u$, $p$, $E$ and $e$ are the density, velocity, pressure, total energy and specific internal energy, respectively. The system has initial values
\begin{equation}
[\rho,u,p]^\top (\xi,\tau=0) = \begin{cases} [\rho_l,u_l,p_l]^\top, & \xi<0, \\ [\rho_r,u_r,p_r]^\top, & \xi>0. \end{cases}
\end{equation}
\label{system:oneriemann}
\end{subequations}
The equations of state for both mediums under our consideration belong to the family known as \emph{the Mie-Gr\"uneisen EOS}. The Mie-Gr\"uneisen EOS describes many real materials, for example, gases, water and the gaseous products of high explosives \cite{Arienti2004, Liu2003, Kamm2015}, which makes it particularly useful in the practical applications we are studying. The general form of the Mie-Gr\"uneisen EOS is given by
\begin{equation}
p(\rho, e)=\varGamma(\rho)\rho e + h(\rho),
\label{eq:particulareos}
\end{equation}
where $\varGamma(\rho)$ is the Gr{\"u}neisen coefficient, and $h(\rho)$ is a reference state associated with the cold contribution resulting from the interactions of atoms at rest \cite{Heuze2012}. Thus the EOS of the multi-medium Riemann problem \eqref{system:oneriemann} is given by
\[ p(\rho, e) = \begin{cases} \varGamma_l(\rho)\rho e+h_l(\rho), \\ \varGamma_r(\rho)\rho e+h_r(\rho), \end{cases} \]
for the medium on the left and the right sides, respectively. For ease of analysis, we impose on $\varGamma(\rho)$ and $h(\rho)$ the following assumptions:
\textbf{(C1)} $\varGamma'(\rho) \le 0,~(\rho\varGamma(\rho))' \ge 0,~(\rho\varGamma(\rho))''\ge 0$;
\textbf{(C2)} $\lim\limits_{\rho\rightarrow+\infty}\varGamma(\rho)= \varGamma_\infty>0,~\varGamma(\rho)\le \varGamma_{\infty}+2$;
\textbf{(C3)} $h'(\rho)\ge 0,~h''(\rho)\ge 0$.
\begin{remark}
An immediate consequence is that the Gr{\"u}neisen coefficient $\varGamma(\rho)$ must be nonnegative since $\varGamma(\rho) \ge-\varGamma'(\rho)\rho \ge 0$ by the condition \textbf{(C1)}.
\end{remark}
Many equations of state of interest fulfill these assumptions. In particular, we collect in Appendix A the equations of state that are used as examples in our numerical tests. These examples include the ideal gas EOS, stiffened gas EOS, polynomial EOS, JWL EOS, and Cochran-Chan EOS.
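For instance, written in their standard textbook forms (a simple illustration only; the precise coefficients used in this paper are the ones listed in Appendix A), the ideal gas EOS $p=(\gamma-1)\rho e$ and the stiffened gas EOS $p=(\gamma-1)\rho e-\gamma p_\infty$ fit the form \eqref{eq:particulareos} with
\[
\varGamma(\rho)=\gamma-1,\qquad h(\rho)=0 \quad\text{and}\quad h(\rho)=-\gamma p_\infty,
\]
respectively. In both cases $\varGamma'(\rho)=0$, $(\rho\varGamma(\rho))'=\gamma-1$, $(\rho\varGamma(\rho))''=0$ and $h'(\rho)=h''(\rho)=0$, so the conditions \textbf{(C1)}, \textbf{(C2)} (with $\varGamma_\infty=\gamma-1$) and \textbf{(C3)} hold whenever $\gamma>1$.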
The coefficients $\varGamma(\rho)$, $h(\rho)$ and their derivatives for these equations of state are all provided therein. The Riemann problem for general convex equations of state have been fully analyzed, for example, in \cite{Smith1979, Menikoff1989}. Here the problem is more specific, thus we can present the structure of the solution in a straightforward way. The property of EOS is essential on the wave structures in the solution of Riemann problems. It is pointed out that the wave structures are composed solely of elementary waves \cite{Menikoff1989} if the \emph{fundamental derivative} \cite{Thompson1971} \[ \mathscr G=\dfrac{1}{c}\cdot\left.\dfrac{\partial\rho c}{\partial\rho}\right|_s= 1+\dfrac{\rho}{2c^2}\cdot\left.\dfrac{\partial c^2}{\partial \rho}\right|_s, \] keeps positive \footnote{ When the positivity condition $\mathscr G>0$ is violated, other configurations of waves may occur, such as composite waves, split waves, expansive shock waves or compressive rarefaction waves \cite{Menikoff1989}. The above anomalous wave structures are common issues in phase transitions. For further discussions on these anomalous wave structures, we refer the readers to \cite{Weyl1949, Thompson1973, Liu1975, Liu1976, Pego1986, Bethe1998, Bates2002, Voss2005, Muller2006, Fossati2014} for instance.}, where $s$ is the specific entropy and $c$ is the speed of sound. For the Mie-Gr{\"u}neisen EOS \eqref{eq:particulareos}, the speed of sound can be expressed as \[ c(p,\rho)=\sqrt{\left.\dfrac{\partial p}{\partial \rho}\right|_e +\dfrac{p}{\rho^2} \left.\dfrac{\partial p}{\partial e}\right|_{\rho}}= \sqrt{\left(\dfrac{1}{\rho} + \dfrac{\varGamma_k'(\rho)}{\varGamma_k(\rho)}\right)(p-h_k(\rho)) +\dfrac{p}{\rho}\varGamma_k(\rho) + h_k'(\rho)}, \] and a tedious calculus gives us that the fundamental derivative is \begin{equation} \begin{split} \mathscr G = &\dfrac{1}{c^2}\left( \dfrac12\left(\rho(\rho\varGamma_k(\rho))'' +{(\rho\varGamma_k(\rho))'}(2+\varGamma_k(\rho))\right)e+\dfrac12\rho h_k''(\rho) \right. \\ &\left.+\dfrac{p}{2\rho}(\varGamma_k^2(\rho)+2(\rho\varGamma_k(\rho))') +\dfrac12(2+\varGamma_k(\rho))h_k'(\rho)\right). \end{split} \label{eq:fundd} \end{equation} We conclude that \begin{lemma} The solution of the multi-medium Riemann problem \eqref{system:oneriemann} consists of only elementary waves if the conditions \textbf{(C1), (C2)} and \textbf{(C3)} are fulfilled. \end{lemma} \begin{proof} With the conditions \textbf{(C1), (C2)} and \textbf{(C3)}, a direct check on the terms in \eqref{eq:fundd} gives us that \[ \mathscr G > 0, \] then the result in \cite{Menikoff1989} is applied to give us the conclusion. \end{proof} Precisely, the solution of \eqref{system:oneriemann} consists of four constant regions connected by a linearly degenerate wave and two genuinely nonlinear waves (either shock wave or rarefaction wave, depending on the initial states), as is shown schematically in Fig. \ref{fig:Riemann-Fluid}. The linearly degenerate wave is actually the phase interface. \begin{figure} \caption{Typical wave structure of the Riemann problem in $\xi - \tau$ space.} \label{fig:Riemann-Fluid} \end{figure} Following the convention on notations, we denote the pressure and the velocity by $p^*$ and $u^*$ in the star region, respectively, which have the same value crossing over the phase interface. This allows us to derive a single nonlinear algebraic equation for $p^*$. Then we will solve this algebraic equation by an iterative method. 
At the beginning of each step of the iterative method, a provisional value of $p^*$ determines the wave structures from four possible configurations. The wave structures then prescribe the specific formation of the algebraic equation. Based on the residual of the algebraic equation, the value of $p^*$ is updated, which closes a single loop of the iterative method. Below let us give the details of the plan above. Firstly we need to study the relations of the solution across a nonlinear wave, saying a shock wave or a rarefaction wave. For convenience, we use the subscript $k = l$ or $r$ standing for either the left initial state $l$ or the right initial state $r$ hereafter. \begin{itemize} \item[-] {\bf Solution through a shock wave} If $p^*>p_k$, the corresponding nonlinear wave is a shock wave, and the star region state $\bm U_k^*$ is connected to the adjacent initial state $\bm U_k$ through a Hugoniot locus. The Rankine-Hugoniot jump conditions \cite{Toro2008} yield \begin{equation} \mp(u^*-u_k) = \left((p^*-p_k)\left(\dfrac{1}{\rho_k}-\dfrac{1}{\rho^*_k} \right)\right)^{1/2}, \label{eq:diffushock} \end{equation} and \begin{equation} e(p^*,\rho^*_k) - e(p_k,\rho_k) + \dfrac{1}{2}(p^*+p_k) \left(\dfrac{1}{\rho^*_k} -\dfrac{1}{\rho_k} \right) = 0, \label{eq:eose} \end{equation} where \[ e(p,\rho)=\dfrac{p-h_k(\rho)}{\varGamma_k(\rho)\rho}. \] Multiplying both sides of the equality \eqref{eq:eose} by $\rho_k\rho_k^*\varGamma_k(\rho_k)\varGamma_k(\rho_k^*)$ gives rise to \[ \begin{split} &\varGamma_k(\rho_k)\rho_k (p^*-h_k(\rho_k^*)) - \varGamma_k(\rho_k^*) \rho_k^* (p_k-h_k(\rho_k)) \\ &-\dfrac{1}{2}\varGamma_k(\rho_k^*)\varGamma_k(\rho_k)(p^*+p_k) (\rho_k^*-\rho_k)=0. \end{split} \] For convenience we introduce the \emph{Hugoniot function} as follows \begin{equation} \begin{split} \varPhi_k(p,\rho) &:=\varGamma_k(\rho_k)\rho_k (p-h_k(\rho)) - \varGamma_k(\rho) \rho (p_k-h_k(\rho_k))\\ &-\dfrac{1}{2}\varGamma_k(\rho_k)(p+p_k)\varGamma_k(\rho) (\rho-\rho_k), \end{split} \label{eq:phi} \end{equation} then the relation \eqref{eq:eose} boils down to the algebraic equation $\varPhi_k(p^*,\rho_k^*)=0$. The derivative of $\varPhi_k$ with respect to the density is found to be \begin{equation} \begin{split} \dfrac{\partial\varPhi_k}{\partial\rho}(p,\rho) &= -\varGamma_k(\rho_k)\rho_k \left(h_k'(\rho)-\dfrac{p+p_k}{2}\varGamma_k'(\rho)\right)\\ &-\varGamma_k(\rho_k)(\varGamma_k(\rho)+\rho\varGamma_k'(\rho)) \left(\rho_ke_k+\dfrac{p+p_k}{2}\right). \end{split} \label{eq:phide} \end{equation} As a result, the slope of the Hugoniot locus can be found by the method of implicit differentiation, namely, \[ \chi(p,\rho) := \left.\dfrac{\partial p}{\partial \rho}\right|_{\varPhi_k}= -\dfrac{2\partial\varPhi_k(p,\rho)/\partial\rho}{ \varGamma_k(\rho_k)(2\rho_k-{\varGamma_k(\rho)}(\rho-\rho_k))}. \] Before we discuss the properties of the Hugoniot function $\varPhi_k(p,\rho)$, let us introduce the compressive limit of the density $\rho_{\max}$ such that $\chi(p,\rho_{\max})=\infty$. By definition $\rho_{\max}$ solves the algebraic equation $2\rho_k-\varGamma_k(\rho)(\rho-\rho_k)=0$. This quantity is uniquely defined since the function \[ W(\rho):=\left(\dfrac{\rho}{\rho_{k}}-1\right)\varGamma_k(\rho)-2, \] is monotonically increasing in the interval $(\rho_k,+\infty)$ by the condition \textbf{(C1)}, and \[ W(\rho_k)=-2,\quad \lim_{\rho\rightarrow+\infty}W(\rho) \ge \lim_{\rho\rightarrow+\infty} \left(\dfrac{\rho}{\rho_{k}}-1\right)\varGamma_{\infty}-2=+\infty. 
\] We have the following results on the function $\varPhi_k(p,\rho)$ \begin{lemma}\label{thm:phi} Assume that the functions $\varGamma_k(\rho)$ and $h_k(\rho)$ satisfy the conditions \textbf{(C1), (C2)} and \textbf{(C3)}. Then $\varPhi_k(p,\rho)$ defined in \eqref{eq:phi} satisfies the following properties: \begin{itemize} \item[(1).] $\varPhi_k(p,\rho_k)>0$; \item[(2).] $\varPhi_k(p,\rho_{\max})<0$; \item[(3).] ${\partial\varPhi_k}(p,\rho)/{\partial \rho}<0$; \item[(4).] ${\partial^2\varPhi_k}(p,\rho)/{\partial\rho^2}<0$ if $\varGamma''_k(\rho)=0$. \end{itemize} \end{lemma} \begin{proof} (1). By definition \eqref{eq:phi} we have \[ \begin{split} \varPhi_k(p,\rho_k)&=\varGamma_k(\rho_k)\rho_k (p-h_k(\rho_k))- \varGamma_k(\rho_k) \rho_k (p_k-h_k(\rho_k)) \\ &= \varGamma_k(\rho_k)\rho_k(p-p_k) > 0. \end{split} \] (2). Since the compressive limit of the density $\rho_{\max}$ satisfies the relation $(\rho_{\max}-\rho_k)\varGamma_k(\rho_{\max})=2\rho_k$, we have \begin{equation} \begin{split} &\varPhi_k(p,\rho_{\max})\\ =&\varGamma_k(\rho_k)\rho_k (p-h_k(\rho_{\max}))- \varGamma_k(\rho_{\max}) \rho_{\max} (p_k-h_k(\rho_k)) \\ -& \dfrac{1}{2}\varGamma_k(\rho_{\max})(\rho_{\max} -\rho_k)\varGamma_k(\rho_k)(p+p_k)\\ =&-\varGamma_k(\rho_k)\rho_k(p_k+h_k(\rho_{\max})) - (\varGamma_k(\rho_{\max})+2)\rho_k(p_k-h_k(\rho_k)). \end{split} \label{eq:phimax} \end{equation} Obviously $\varPhi_k(p,\rho_{\max})<0$ if $h_k(\rho_{\max})\ge 0$. On the other hand, suppose that $h_k(\rho_{\max})< 0$. Rewriting \eqref{eq:phimax} as \[ \begin{split} \varPhi_k(p,\rho_{\max})&= -(\varGamma_k(\rho_k)+\varGamma_k(\rho_{\max})+2)\rho_kp_k\\ &-\rho_k(\varGamma_k(\rho_k)h_k(\rho_{\max}) -(\varGamma_k(\rho_{\max})+2)h_k(\rho_k)), \end{split} \] and using the inequality resulting from the conditions \textbf{(C2)} and \textbf{(C3)} \[ \varGamma_k(\rho_k)h_k(\rho_{\max}) \ge \varGamma_k(\rho_k)h_k(\rho_k)\ge (\varGamma_\infty+2)h_k(\rho_k)\ge (\varGamma_k(\rho_{\max})+2)h_k(\rho_k), \] we conclude that $\varPhi_k(p,\rho_{\max})<0$. (3). This is an obvious result from the expression \eqref{eq:phide}. (4). The second derivative of $\varPhi_k(p,\rho)$ with respect to the density is \[ \begin{split} \dfrac{\partial^2\varPhi_k}{\partial\rho^2}(p,\rho)&= -\varGamma_k(\rho_k)\rho_k \left(h_k''(\rho)-\dfrac{p+p_k}{2}\varGamma_k''(\rho)\right)\\ &-\varGamma_k(\rho_k)(2\varGamma_k'(\rho) +\rho\varGamma_k''(\rho)) \left(\rho_ke_k+\dfrac{p+p_k}{2}\right), \end{split} \] which is negative if $\varGamma''_k(\rho)=0$. This completes the proof of the whole theorem. \end{proof} Instantly by Lemma \ref{thm:phi}, the density $\rho$ can be uniquely determined from the equation $\varPhi_k(p,\rho)=0$ on the interval $(\rho_k,\rho_{\max})$ for any fixed $p$. Also the Hugoniot curve is monotonic due to $\chi>0$. Since the equation \eqref{eq:eose} uniquely defines the interface density $\rho_k^*$ for a given value of $p^*$, the right hand side of \eqref{eq:diffushock} can be regarded as a function of the interface pressure $p^*$ alone, formally written as \[ f_k(p^*)=\left((p^*-p_k)\left(\dfrac{1}{\rho_k}-\dfrac{1}{\rho^*_k} \right)\right)^{1/2},\quad p^*>p_k. \] \item[-] {\bf Solution through a rarefaction wave} If, on the other hand, $p^*\le p_k$, then the corresponding nonlinear wave is a rarefaction wave, and the interface state $\bm U_k^*$ is connected to the adjacent initial state $\bm U_k$ through a rarefaction curve. 
Since the Riemann invariant \[ u\pm \int \dfrac{1}{\rho c}\mathrm d p, \] is constant along the right-facing (left-facing) rarefaction curve, we have \begin{equation} \mp(u^*-u_k)=\int_{p_k}^{p^*} \dfrac{1}{\rho c}\mathrm d p, \label{eq:diffu} \end{equation} where the density $\rho$ is expressed in terms of $p$ by solving the isentropic relation \begin{equation} \dfrac{\partial p}{\partial \rho}=c^2(p,\rho). \label{eq:isenr} \end{equation} Similarly, the right hand side of \eqref{eq:diffu} can be expressed as a function of $p^*$ alone. Formally we define \[ f_k(p^*) = \int^{p^*}_{p_k} \dfrac{1}{\rho c}\mathrm d p,\quad p^*\le p_k. \] \end{itemize} Collecting both cases above, we have that \[ u^*-u_l=-f_l(p^*) \quad \text{ and } \quad u^*-u_r=f_r(p^*). \] Therefore, the interface pressure $p^*$ is the zero of the following \emph{pressure function} \[ f(p) := f_l(p)+f_r(p) + u_r - u_l. \] And the interface velocity $u^*$ can be determined by \[ u^*=\dfrac{1}{2}(u_l+u_r+f_r(p^*)-f_l(p^*)). \] Recall that the formula of the function $f_k(p)$ is given by \[ f_k(p)= \begin{cases} \displaystyle\int^{p}_{p_k} \dfrac{1}{\rho c}\mathrm d p, & p\le p_k,\\ \left((p-p_k)\left(\dfrac{1}{\rho_k}-\dfrac{1}{\rho}\right) \right)^{1/2}, & p>p_k, \end{cases} \] where $\rho$ is determined through either the Hugoniot relation \eqref{eq:eose} or the isentropic relation \eqref{eq:isenr} for a given $p$. We claim on $f_k(p)$ that \begin{lemma}\label{thm:propf} Assume that the conditions \textbf{(C1), (C2)} and \textbf{(C3)} hold for $\varGamma_k(\rho)$ and $h_k(\rho)$, the function $f_k(p)$ is monotonically increasing and concave, i.e. \[ f_k'(p)>0 \quad \text{ and } \quad f_k''(p)<0, \] if the Hugoniot function is concave with respect to the density, i.e. ${\partial^2 \varPhi_k(p,\rho)}/{\partial \rho^2}<0$. \end{lemma} \begin{proof} The first and second derivatives of $f_k(p)$ can be found to be \[ f_k'(p)= \begin{cases} \dfrac{1}{\rho c}, & p\le p_k, \\ \dfrac{1}{2f_k(p)} \left(\dfrac{1}{\rho_k}-\dfrac{1}{\rho} +\dfrac{p-p_k}{\rho^2\chi}\right), & p>p_k, \end{cases} \] and \[ f_k''(p)= \begin{cases} -\dfrac{\mathscr G}{\rho^2c^3}, &p\le p_k, \\ \begin{aligned} &-\dfrac{1}{4f_k^3(p)}\left( \dfrac{2(p-p_k)^2}{\rho^2\chi^2}\left(\dfrac{1}{\rho_k}-\dfrac{1}{\rho }\right) \left(\dfrac{2}{\rho}+\left.\dfrac{\partial\chi}{\partial p}\right|_{\varPhi_k}\right) \right.\\ &+ \left. \left(\dfrac{1}{\rho_k}-\dfrac{1}{\rho}-\dfrac{p-p_k}{\rho^2\chi} \right)^2 \right) , \end{aligned} & p>p_k, \end{cases} \] where $\mathscr G$ is the fundamental derivative \eqref{eq:fundd} and \[ \left.\dfrac{\partial\chi}{\partial p}\right|_{\varPhi_k} = \left(\dfrac{\partial\varPhi_k}{\partial\rho}(p,\rho)\right)^{-1} \left( \dfrac{\partial^2\varPhi_k}{\partial\rho^2}(p,\rho)+\chi\varGamma_k(\rho_k) (\rho_k\varGamma_k'(\rho)-(\rho\varGamma_k(\rho))') \right). \] The result then follows by direct observation. \end{proof} The behavior of $f_k(p)$ is related to the existence and uniqueness of the solution of the Riemann problem. The existence and uniqueness of the Riemann solution for gas dynamics under appropriate conditions have been established by Liu \cite{Liu1975} and by Smith \cite{Smith1979}. It is easy to show that the conditions \textbf{(C1), (C2)} and \textbf{(C3)} imply Smith's ``strong'' condition $\partial e(p,\rho)/\partial \rho<0$. However, for completeness we provide a short proof of the results for the case of Mie-Gr{\"u}neisen EOS in the following theorem. 
\begin{theorem}\label{thm:unique} Assume that the Mie-Gr{\"u}neisen EOS \eqref{eq:particulareos} satisfies the conditions \textbf{(C1), (C2)} and \textbf{(C3)}. The Riemann problem \eqref{system:oneriemann} admits a unique solution (in the class of admissible shocks, interfaces and rarefaction waves separating constant states) if and only if the initial states satisfy the constraint \begin{equation} u_r-u_l<\int_{0}^{p_l}\dfrac{\mathrm dp}{\rho c}+ \int_{0}^{p_r}\dfrac{\mathrm dp}{\rho c}. \label{eq:vacuum} \end{equation} \end{theorem} \begin{proof} By definition and Lemma \ref{thm:propf} we know that the pressure function $f(p)$ is monotonically increasing. Next we study the behavior of $f(p)$ when $p$ tends to infinity. Let $\tilde\rho$ represents the density such that $\varPhi_k(2p_k,\tilde\rho)=0$. When the pressure $p>2p_k$, we have $\rho>\tilde{\rho}$, and thus \[ f_k^2(p)=(p-p_k)\left(\dfrac{1}{\rho_k}-\dfrac{1}{\rho}\right) >(p-p_k)\left(\dfrac{1}{\rho_k}-\dfrac{1}{\tilde\rho}\right). \] As a result, $f_k(p)$ tends to positive infinity as $p\rightarrow +\infty$ and so does $f(p)$. Based on the behavior of the function $f(p)$, a necessary and sufficient condition for the interface pressure $p^*>0$ such that $f(p^*)=0$ to be uniquely defined is given by \[ f(0)=f_l(0)+f_r(0)+u_r-u_l<0, \] or equivalently, the constraint given by \eqref{eq:vacuum}. This completes the proof. \end{proof} \begin{remark} When the initial states violate the constraint \eqref{eq:vacuum}, the Riemann problem \eqref{system:oneriemann} has no solution in the above sense. One can yet define a solution by introducing a \emph{vacuum}. However, we are not going to address this issue since it is beyond the scope of our current interests. \end{remark} \section{Approximate Riemann Solver} \label{sec:aps} The solution of the Riemann problem \eqref{system:oneriemann} is obtained by finding the unique zero $p^*$ of the function $f(p)$. A first try to this problem is to use the Newton-Raphson iteration as \begin{equation} p_{n+1}=p_n-\dfrac{f(p_n)}{f'(p_n)} =p_n-\dfrac{f_l(p_n)+f_r(p_n) + u_r - u_l}{f_l'(p_n)+f_r'(p_n)}, \label{eq:iterp1} \end{equation} with a suitable initial estimate which we may choose as, for example, the acoustic approximation \cite{Godunov1976} \[ p_0 = \dfrac{\rho_lc_lp_r+\rho_rc_rp_l+\rho_lc_l\rho_rc_r(u_l-u_r)} {\rho_lc_l+\rho_rc_r}. \] The concavity of the pressure function $f(p)$ leads to the following global convergence of the Newton-Raphson iteration \begin{corollary} The Newton-Raphson iteration for \eqref{eq:iterp1} converges if $\varGamma_l''(\rho)=\varGamma_r''(\rho)=0$. \end{corollary} Unfortunately, there is no closed-form expression for the pressure function $f(p)$ or its derivative $f'(p)$ for equations of state such as polynomial EOS, JWL EOS or Cochran-Chan EOS. Therefore, we have to implement the iteration \eqref{eq:iterp1} approximately. Here we adopt the \emph{inexact Newton method} \cite{Dembo1982} instead, which is formulated as \begin{equation} \left\{ \begin{aligned} p_{n+1}&=p_n-\dfrac{F_n}{F_n'}=p_n-\dfrac{F_{n,l}+F_{n,r}+u_r-u_l} {F'_{n,l}+F'_{n,r}},\\ u_{n}&=\dfrac{1}{2}(u_l+u_r+F_{n,r}-F_{n,l}), \end{aligned}\right. \label{eq:iterp2} \end{equation} where $F_{n,k}$ and $F'_{n,k}$ approximate $f_k(p_{n})$ and $f_k'(p_{n})$, respectively. To specify the sequences $\{F_{n,k}\}$ and $\{F'_{n,k}\}$, we solve the Hugoniot loci through the numerical iteration and the isentropic curves by the numerical integration. 
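For concreteness, a minimal Python sketch of the iteration \eqref{eq:iterp2} is given below. It is only an outline under the stated assumptions, not the implementation used in this work: the routine \texttt{eval\_fk}, which returns the approximate values $F_{n,k}$ and $F'_{n,k}$ specified later in this section, is a placeholder to be supplied by the user, and convergence is measured by the relative change of the pressure as in the algorithm given later in this section.
\begin{verbatim}
# Sketch of the inexact Newton iteration (eq:iterp2). The states are tuples
# (rho, u, p, c); eval_fk(p, state) is an assumed user-supplied routine that
# returns approximations (F_k, dF_k) of f_k(p) and f_k'(p).
def star_state(left, right, eval_fk, tol=1e-8, max_iter=100):
    rho_l, u_l, p_l, c_l = left
    rho_r, u_r, p_r, c_r = right
    # acoustic approximation as the initial estimate of the interface pressure
    p = (rho_l*c_l*p_r + rho_r*c_r*p_l
         + rho_l*c_l*rho_r*c_r*(u_l - u_r)) / (rho_l*c_l + rho_r*c_r)
    for _ in range(max_iter):
        F_l, dF_l = eval_fk(p, left)
        F_r, dF_r = eval_fk(p, right)
        p_new = p - (F_l + F_r + u_r - u_l) / (dF_l + dF_r)
        converged = abs(p_new - p) <= tol * abs(p)
        p = p_new
        if converged:
            break
    u = 0.5 * (u_l + u_r + F_r - F_l)   # interface velocity from the last F_k
    return p, u
\end{verbatim}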
It is natural to expect that the sequences $p_n$ and $u_n$ will tend to $p^*$ and $u^*$ respectively whenever the evaluation errors $|F_{n,k}-f_k(p_n)|$ and $|F'_{n,k}-f'_k(p_n)|$ are sufficiently small. As a preliminary result, let us introduce the following Lemma \ref{the:conv_p}, which states the local convergence of inexact Newton iterates \begin{lemma} \label{the:conv_p} If the initial iterate $p_0$ is sufficiently close to $p^*$, and the evaluation errors of $f(p_n)$ and $f'(p_n)$ satisfy the following constraint \[ 2|F_n-f(p_n)|+|\Delta p_n|\cdot|F'_n-f'(p_n)|\le \eta |F_n|, \] where $\Delta p_n=p_{n+1}-p_n=-F_n/F'_n$ denotes the step increment and $\eta\in (0,1)$ is a fixed constant, then the sequence of inexact Newton iterates $p_n$ defined by \[ p_{n+1}=p_n-\dfrac{F_n}{F'_n}, \] converges to $p^*$. Moreover, the convergence is linear in the sense that $|p_{n+1}-p^*|\le \zeta |p_n-p^*|$ for $\zeta \in (\eta,1)$. \end{lemma} \begin{proof} Since $\eta<\zeta$, there exists $\gamma>0$ sufficiently small that $(1+\gamma)((\eta+2)\gamma+\eta)\le \zeta$. Choose $\epsilon>0$ sufficiently small that \begin{itemize} \item[(1).] $|f'(p)-f'(p^*)|\le \gamma |f'(p^*)|$; \item[(2).] $|f'(p)^{-1}-f'(p^*)^{-1}|\le \gamma |f'(p^*)|^{-1}$; \item[(3).] $|f(p)-f(p^*)-f'(p^*)(p-p^*)|\le \gamma |f'(p^*)||p-p^*|$, \end{itemize} whenever $|p-p^*|\le \epsilon$. Now we prove the convergence rate by induction. Let the initial solution satisfy $|p_0-p^*|\le \epsilon$. Suppose that $|p_n-p^*|\le \epsilon$, then \[ \begin{split} |f(p_n)|&=|f(p_n)-f(p^*)-f'(p^*)(p_n-p^*)+f'(p^*)(p_n-p^*)| \\ &\le (1+\gamma)|f'(p^*)||p_n-p^*|. \end{split} \] The error of $(n+1)$-th iterate can be written as \[ \begin{split} p_{n+1}-p^*&= f'(p_n)^{-1} (r_n+(f'(p_n)-f'(p^*))(p_n-p^*)\\ &-(f(p_n)-f(p^*)-f'(p^*)(p_n-p^*))), \end{split} \] where the residual $r_n=f'(p_n)\Delta p_n+f(p_n)$ satisfies \[ \begin{split} |r_n|&=|(f'(p_n)-F_n')\Delta p_n+f(p_n)-F_n|\\ &=|(f'(p_n)-F_n')\Delta p_n+(1+\eta)(f(p_n)-F_n) -\eta(f(p_n)-F_n)| \le \eta |f(p_n)|. \end{split} \] Therefore \[ \begin{split} |p_{n+1}-p^*|&\le (1+\gamma)|f'(p^*)|^{-1} (\eta|f(p_n)|+\gamma |f'(p^*)||p_n-p^*| \\ & + \gamma |f'(p^*)||p_n-p^*|)\\ &\le (1+\gamma)(\eta(1+\gamma)+2\gamma)|p_n-p^*|\\ &\le \zeta |p_n-p^*|, \end{split} \] and hence $|p_{n+1}-p^*|\le \epsilon$. It follows that $p_n$ converges linearly to $p^*$. \end{proof} Recall that $f(p)$ and $f'(p)$ can be decomposed as \[ f(p)=f_l(p)+f_r(p)+u_r-u_l \text{~~and~~} f'(p)=f_l'(p)+f_r'(p). \] The following Theorem \ref{the:conv_pu} tells us that if $f_k$ and $f_k'$ are evaluated accurately enough, the convergences of iterates $p_n$ and $u_n$ are ensured. \begin{theorem}\label{the:conv_pu} Suppose that the initial estimate $p_0$ is sufficiently close to $p^*$. If the evaluation errors of $f_k(p_n)$ and $f'_k(p_n)$ satisfy \[ |F_{n,k}-f_k(p_n)|\le \dfrac{1}{6}\eta |F_n| \text{~~and~~} |F'_{n,k}-f_k'(p_n)|\le \dfrac{1}{6}\eta |F_n'|, \] where $\eta\in (0,1)$ is a constant. Moreover, assume that the sequence $\{F_n'\}$ is bounded. Then the sequences of pressure and velocity defined in \eqref{eq:iterp2} converge, namely \[ p_n\rightarrow p^*\text{~~and~~} u_n\rightarrow u^*. 
\] \end{theorem} \begin{proof} Since \begin{align*} |F_n-f(p_n)|\le |F_{n,l}-f_l(p_n)|+ |F_{n,r}-f_r(p_n)| \le \dfrac{1}{3}\eta |F_n|,\\ |F'_n-f'(p_n)|\le |F'_{n,l}-f'_l(p_n)|+ |F'_{n,r}-f'_r(p_n)| \le \dfrac{1}{3}\eta |F'_n|, \end{align*} we have \[ 2|F_n-f(p_n)|+|\Delta p_n|\cdot|F'_n-f'(p_n)|\le \dfrac{2}{3}\eta |F_n|+ \left|\dfrac{F_n}{F'_n}\right|\cdot \dfrac{1}{3}\eta|F'_n|=\eta |F_n|. \] From Lemma \ref{the:conv_p} we conclude that $p_n\rightarrow p^*$. Due to the continuity of $f_k$ we have $f_k(p_n)\rightarrow f_k(p^*)$. From $F_n=(p_n-p_{n+1})F_n'$ and the boundness of $\{F'_n\}$ we know that $F_n\rightarrow 0$. It follows that \[ |F_{n,k}-f_k(p^*)|\le |F_{n,k}-f_k(p_n)|+|f_k(p_n)-f_k(p^*)|\rightarrow 0. \] Finally, by the definition of $u_n$ and $u^*$ we have \[ |u_n-u^*|=\dfrac{1}{2}\left| (F_{n,r}-f_r(p^*)) - (F_{n,l} - f_l(p^*)) \right|\rightarrow 0, \] or $u_n\rightarrow u^*$. \end{proof} By Theorem \ref{the:conv_pu}, the convergence is guaranteed by \textit{a posteriori} control on the evaluation errors of $f_k(p_n)$ and $f'_k(p_n)$. The evaluation errors of $f_k(p_n)$ and $f'_k(p_n)$ depend on the residual of the algebraic equation as well as the truncation error of the ordinary differential equation. Therefore, to achieve better accuracy one may effectively reduce the residual term and the truncation error. To achieve a trade-off between the accuracy and efficiency in a practical implementation, we apply the Newton-Raphson method to solve the Hugoniot loci, and the Runge-Kutta method to solve the isentropic curves. Precisely, for the given $n$-th iterator $p_n$, if $p_n>p_k$, then we solve the following algebraic equation \begin{equation} \varPhi_k(p_n,\tilde\rho_{n,k})=0, \label{eq:algphi} \end{equation} to obtain $\tilde\rho_{n,k}$ to a prescribed tolerance with the Newton-Raphson method \[ \rho_{n,k,m+1}=\rho_{n,k,m}- \dfrac{\varPhi_k(p_n,\rho_{n,k,m})} {\partial\varPhi_k(p_n,\rho_{n,k,m})/\partial\rho}. \] By Lemma \ref{thm:phi}, we arrive at the following convergence result \begin{corollary} The Newton-Raphson iteration for \eqref{eq:algphi} must converge if $\varGamma''_k(\rho)=0$. \end{corollary} The values of $F_{n,k}$ and $F'_{n,k}$ for the shock branch are thus taken as \begin{align} \label{eq:shockf} F_{n,k}&=\left((p_n-p_k)\left(\dfrac{1}{\rho_k}-\dfrac{1} {\tilde\rho_{n,k}}\right)\right)^{1/2},\\ \label{eq:shockdf} F'_{n,k}&=\dfrac{1}{2F_{n,k}}\left( \dfrac{1}{\rho_k}-\dfrac{1}{\tilde\rho_{n,k}} +\dfrac{p_n-p_k}{\rho_{n,k}^2\chi(p_n,\tilde\rho_{n,k})} \right). \end{align} If, on the other hand, $p_n\le p_k$, then we solve the following initial value problem \begin{align*} \dfrac{\mathrm d f_k}{\mathrm d p}=\dfrac{1}{\rho c}, \\ f_k|_{p=p_k}=0, \end{align*} coupled with the initial value problem of the isentropic relations \begin{align*} \dfrac{\mathrm d\rho}{\mathrm d p}=\dfrac{1}{c^2},\\ \rho|_{p=p_k}=\rho_k, \end{align*} backwards until $p=p_n$ using fourth-order Runge-Kutta method. 
The values of $F_{n,k}$ and $F'_{n,k}$ are then taken as \begin{align} \label{eq:raref} F_{n,k} &= -\dfrac{1}{6}(p_k-p_n) \left(\dfrac{1}{Z_1}+\dfrac{2}{Z_2} +\dfrac{2}{Z_3}+\dfrac{1}{Z_4} \right), \\ \label{eq:raredf} F'_{n,k} &=\dfrac{1}{\tilde\rho_{n,k} c(p_n,\tilde\rho_{n,k})}, \end{align} where \[ \tilde\rho_{n,k} = \rho_k-\dfrac{1}{6}(p_k-p_n) \left(\dfrac{1}{c_1^2}+\dfrac{2}{c_2^2}+\dfrac{2}{c_3^2} +\dfrac{1}{c_4^2}\right),\\ \] and the intermediate states are given by \[ \begin{aligned} c_1&=c(p_k,\rho_k),& Z_1&=\rho_kc_1,\\ c_2&=c\left(\dfrac{p_k+p_n}{2},\rho_k-\dfrac{p_k-p_n}{2c_1^2}\right), & Z_2&= \left(\rho_k-\dfrac{p_k-p_n}{2c_1^2}\right)c_2,\\ c_3&=c\left(\dfrac{p_k+p_n}{2},\rho_k-\dfrac{p_k-p_n}{2c_2^2}\right), & Z_3&= \left(\rho_k-\dfrac{p_k-p_n}{2c_2^2}\right)c_3, \\ c_4&=c\left(p_n,\rho_k-\dfrac{p_k-p_n}{c_3^2}\right), & Z_4&= \left(\rho_k-\dfrac{p_k-p_n}{c_3^2}\right)c_4. \end{aligned} \] Finally the procedure of the approximate Riemann solver for \eqref{system:oneriemann} is listed below. \begin{mdframed} \setlength{\parindent}{0pt} \textbf{Step 1} Provide initial estimates of the interface pressure \[ p_0 = \dfrac{\rho_lc_lp_r+\rho_rc_rp_l+\rho_lc_l\rho_rc_r(u_l-u_r)} {\rho_lc_l+\rho_rc_r}. \] \textbf{Step 2} Assume that the $n$-th iterator $p_n$ is obtained. Determine the types of left and right nonlinear waves. (i) If $p_n>\max\{p_l,p_r\}$, then both nonlinear waves are shocks. (ii) If $\min\{p_l,p_r\}\le p_n \le \max\{p_l,p_r\}$, then one of the two nonlinear waves is a shock, and the other is a rarefaction wave. (iii) If $p_n<\min\{p_l,p_r\}$, then both nonlinear waves are rarefaction waves. \textbf{Step 3} Evaluate $F_{n,k}$ and $F'_{n,k}$ through \eqref{eq:shockf}~\eqref{eq:shockdf} or \eqref{eq:raref}~\eqref{eq:raredf} according to the wave structures and update the interface pressure through \[ p_{n+1}=p_n-\dfrac{F_{n,l}+F_{n,r} + u_r - u_l}{F'_{n,l}+F'_{n,r}}. \] \textbf{Step 4} Terminate whenever the relative change of the pressure reaches the prescribed tolerance. The sufficiently accurate estimate $p_n$ is then taken as the approximate interface pressure $p^*$. Otherwise return to Step 2. \textbf{Step 5} Compute the interface velocity $u^*$ through \[ u^*=\dfrac{1}{2}\left(u_l+u_r+F_{n,r} - F_{n,l}\right). \] \end{mdframed} In the above algorithm the tolerances of the outer iteration for pressure function and the inner iteration for Hugoniot function are both set as $10^{-8}$. Although the evaluation errors of $f_k(p_n)$ and $f'_k(p_n)$ for the isentropic branch is not effectively controlled in our implementation without applying a more accurate numerical quadrature rule, we have never encountered any failure in the convergence of inexact Newton iteration in the numerical tests. \section{Application on Two-medium Flows} \label{sec:model} We are considering the two-medium fluid flow described by an immiscible model in the domain $\Omega$. Two fluids are separated by a sharp interface $\Gamma(t)$ characterized by the zero of the level set function $\phi(\bm x,t)$ which satisfies \begin{equation} \dfrac{\partial \phi}{\partial t} + \tilde u|\nabla\phi| = 0. \label{eq:levelset:eov} \end{equation} Here $\tilde u$ denotes the normal velocity of the interface, which is determined by the solution of multi-medium Riemann problem. And the normal direction is chosen as the gradient of the level set function. 
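To make the rarefaction branch above concrete, the following Python sketch evaluates $F_{n,k}$ and $F'_{n,k}$ according to \eqref{eq:raref}--\eqref{eq:raredf}. It is an illustration only; the routine \texttt{sound\_speed}, standing for the sound speed $c(p,\rho)$ of the equation of state, is an assumed user-supplied function.
\begin{verbatim}
# Sketch of the single-step RK4 evaluation (eq:raref)-(eq:raredf) on the
# rarefaction branch p_n <= p_k; sound_speed(p, rho) is assumed to be
# provided by the equation of state.
def rarefaction_branch(p_n, p_k, rho_k, sound_speed):
    dp = p_k - p_n                       # integration runs backwards from p_k
    c1 = sound_speed(p_k, rho_k)
    Z1 = rho_k * c1
    rho2 = rho_k - dp / (2.0 * c1**2)
    c2 = sound_speed(0.5*(p_k + p_n), rho2)
    Z2 = rho2 * c2
    rho3 = rho_k - dp / (2.0 * c2**2)
    c3 = sound_speed(0.5*(p_k + p_n), rho3)
    Z3 = rho3 * c3
    rho4 = rho_k - dp / c3**2
    c4 = sound_speed(p_n, rho4)
    Z4 = rho4 * c4
    F = -dp/6.0 * (1.0/Z1 + 2.0/Z2 + 2.0/Z3 + 1.0/Z4)             # eq:raref
    rho_n = rho_k - dp/6.0 * (1.0/c1**2 + 2.0/c2**2 + 2.0/c3**2 + 1.0/c4**2)
    dF = 1.0 / (rho_n * sound_speed(p_n, rho_n))                  # eq:raredf
    return F, dF
\end{verbatim}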
\iffalse \begin{equation} \dfrac{\partial \phi}{\partial t} + \bm{\tilde u} \cdot \nabla\phi = 0, \label{eq:levelset:eov} \end{equation} where $\bm{\tilde u}$ denotes the velocity of interface. Its normal component $\tilde u_n$ is determined by the solution of multi-medium Riemann problem, while the tangent component $\tilde u_\tau$ is directly copied from the local velocity of fluid, where \[ \tilde u_n = \bm{\tilde u} \cdot \bm n, \quad \tilde u_\tau = \bm{\tilde u} - \tilde u_n \bm n, \] and $\bm n = \left. \frac{\nabla \phi}{|\nabla \phi|} \right|_{\phi=0}$ is the unit normal direction of the interface. \fi The region occupied by each fluid can be expressed in terms of the level set function $\phi(\bm x,t)$ \[ \Omega^+(t):=\{\bm x\in\Omega~|~\phi(\bm x,t)> 0\} \text{~~and~~} \Omega^-(t):=\{\bm x\in\Omega~|~\phi(\bm x,t)< 0\}. \] And the fluid in each region is governed by the Euler equations \begin{equation} \dfrac{\partial \bm{U}}{\partial t} + \nabla\cdot \bm F(\bm U)= \bm 0,\quad \bm x\in \Omega^\pm (t), \label{eq:euler} \end{equation} where \[ \bm{U}= \begin{bmatrix} \rho \\ \rho \bm u \\ E \end{bmatrix}\text{~~and~~} \bm{F(\bm U)}= \begin{bmatrix} \rho \bm u^\top \\ \rho \bm u\otimes \bm u+p\mathbf I\\ (E+p)\bm u^\top \end{bmatrix}. \] \iffalse Here $\rho$ stands for the density, $\bm u$ stands for the velocity, $p$ stands for the pressure and $E$ stands for the total energy per unit volume such that \[ E = \rho e + \frac{1}{2} \rho \|\bm u\|^2, \] where $e$ is the specific internal energy. \fi Here $\bm u$ stands for the velocity vector, and other variables represent the same as that in \eqref{system:oneriemann}. To provide closure for the above Euler equations, we need to specify the equation of state for each fluid, i.e. \[ p= \begin{cases} P^+(\rho,e), & \phi>0,\\ P^-(\rho,e), & \phi<0. \end{cases} \] The specific forms of the corresponding equations of state are the Mie-Gr\"uneisen EOS discussed in Section \ref{sec:riemann}. The level set equation \eqref{eq:levelset:eov} and the Euler equations \eqref{eq:euler} are coupled in the sense that the level set equation provides the boundary geometry for the Euler equations, whereas the Euler equations provide the interface motion for the level set equation. To solve the coupled system, we use the numerical scheme following Guo \textit{et al.} \cite{Guo2016}, which is implemented on an Eulerian grid. For completeness, however, we briefly sketch the main steps of numerical scheme for the two-medium flow therein. The approximate Riemann solver we proposed is applied to calculate the numerical flux at the interface in the numerical scheme. As a numerical scheme on an Eulerian grid, the whole domain $\Omega$ is divided into a conforming mesh with simplex cells, and may be refined or coarsened during the computation based on the $h$-adaptive mesh refinement strategy of Li and Wu \cite{Li2013}. The whole scheme is divided into three steps: \begin{itemize} \item[(1).] {\bf Evolution of interface} The level set function is approximated by a continuous piecewise-linear function. The discretized level set function \eqref{eq:levelset:eov} is updated through the characteristic line tracking method once the motion of the interface is given. Due to the nature of level set equation, it remains to specify the normal velocity $\tilde u$ within a narrow band near the interface. 
This can be achieved by firstly solving a multi-medium Riemann problem across the interface and then extending the velocity field to the nearby region using the harmonic extension technique of Di \textit{et al.} \cite{Di2007}. The solution of the multi-medium Riemann problem has been elaborated in Section \ref{sec:riemann}. In order to keep the property of signed distance function, we solve the following reinitialization equation \[ \begin{cases} \dfrac{\partial \psi}{\partial \tau} = \sgn(\psi_0)\cdot\left(1-\left|\nabla \psi \right|\right), \\ \psi(\bm x,0)=\psi_0=\phi(\bm x, t), \end{cases} \] until steady state using the explicitly positive coefficient scheme \cite{Di2007}. Once the level set function is updated until $n$-th time level, we can obtain the discretized phase interface $\Gamma_{h,n}$. A cell $K_{i,n}$ is called an \emph{interface cell} if the intersection of $K_{i,n}$ and $\Gamma_{h,n}$, denoted as $\Gamma_{K_{i,n}}$, is nonempty. Since the level set function is piecewise-linear and the cell is simplex, $\Gamma_{K_{i,n}}$ must be a linear manifold in $K_{i,n}$. The interface $\Gamma_{h,n}$ further cuts the cell $K_{i,n}$ and one of its boundaries $S_{ij,n}$ into two parts, which are represented as $K_{i,n}^\pm$ and $S_{ij,n}^\pm$ respectively (may be an empty set). The unit normal of $\Gamma_{K_{i,n}}$, pointing from $K_{i,n}^-$ to $K_{i,n}^+$, is denoted as $\bm n_{K_{i,n}}$. These quantities can be readily computed from the geometries of the interface and cells. See Fig. \ref{fig:twophasemodel} for an illustration. \begin{figure} \caption{Illustration of the two-medium fluid model.} \label{fig:twophasemodel} \end{figure} \item[(2).] {\bf Numerical flux} The numerical flux for the two-medium flow is composed of two parts: the cell edge flux and the interface flux. Below we explain the flux contribution towards any given cell $K_{i,n}$. We introduce two sets of flow variables at $n$-th time level \[ \bm U_{K_{i,n}}^\pm = \left[\begin{array}{c} \rho_{K_{i,n}}^\pm \\ \rho_{K_{i,n}}^\pm\bm u_{K_{i,n}}^\pm \\ E_{K_{i,n}}^\pm \end{array}\right], \] which refer to the constant states in the cell $K_{i,n}^\pm$. Note that the flow variables vanish if there is no corresponding fluid in a given cell. \begin{itemize} \item {\bf Cell edge flux} The cell edge flux is the exchange of flux between the same fluid across the cell boundary. For any edge $S_{ij}$ of the cell $K_{i,n}$, let $\bm n_{ij,n}$ be the unit normal pointing from $K_{i,n}$ into the adjacent cell $K_{j,n}$. The cell edge flux across $S_{ij,n}^\pm$ is calculated as \begin{equation}\label{eq:cell_bound_flux} \hat{\bm F}_{ij,n}^\pm = \Delta t_n \left|S_{ij,n}^\pm \right| \hat{\bm F} \left( \bm U_{K_{i,n}}^\pm, \bm U_{K_{j,n}}^\pm; \bm n_{ij,n} \right), \end{equation} where $\Delta t_n$ denotes the current time step length, and $\hat{\bm F}(\bm U_l, \bm U_r; \bm n)$ is a consistent monotonic numerical flux along $\bm n$. Here we adopt the local Lax-Friedrich flux \[ \hat{\bm F}(\bm U_l, \bm U_r; \bm n) = \dfrac{1}{2} \left(\bm F(\bm U_l) + \bm F(\bm U_r)\right) \cdot \bm n - \dfrac{1}{2}\lambda (\bm U_r - \bm U_l), \] where $\lambda$ is the maximal signal speed over $\bm U_l$ and $\bm U_r$. \item {\bf Interface flux} The interface flux is the exchange of the flux between two fluids due to the interaction of fluids at the interface. 
If $K_{i,n}$ is an interface cell, then the flux across the interface can be approximated by \begin{equation} \hat{\bm F}_{K_{i,n}}^{\pm}= \Delta t_n\left|\Gamma_{K_{i,n}}\right| \begin{bmatrix} 0 \\ p_{K_{i,n}}^*\bm n_{K_{i,n}} \\ p_{K_{i,n}}^*u_{K_{i,n}}^* \end{bmatrix}. \label{eq:phaseflux} \end{equation} Here $p_{K_{i,n}}^*$ and $u_{K_{i,n}}^*$ are the interface pressure and normal velocity, which are obtained by applying the approximate solver we proposed in Section \ref{sec:aps} to a local one-dimensional Riemann problem in the normal direction of the interface with initial states \[ \begin{split} [\rho_l,u_l,p_l]^\top= [\rho_{K_{i,n}}^-, \bm u_{K_{i,n}}^-\cdot \bm n_{K_{i,n}}, p_{K_{i,n}}^-]^\top,\\ [\rho_r,u_r,p_r]^\top= [\rho_{K_{i,n}}^+, \bm u_{K_{i,n}}^+\cdot \bm n_{K_{i,n}}, p_{K_{i,n}}^+]^\top. \end{split} \] Here the pressure $p_{K_{i,n}}^\pm$ in the initial states are given through the corresponding equations of state, namely \[ p_{K_{i,n}}^\pm=P^\pm (\rho_{K_{i,n}}^\pm,e_{K_{i,n}}^\pm),\quad e_{K_{i,n}}^\pm=\dfrac{E_{K_{i,n}}^\pm}{\rho_{K_{i,n}}^\pm} -\dfrac{1}{2}\|\bm u_{K_{i,n}}^\pm\|^2. \] \end{itemize} \item[(3).] {\bf Update of conservative variables} Once the edge flux \eqref{eq:cell_bound_flux} and interface flux \eqref{eq:phaseflux} are computed, the conservative variables at $(n+1)$-th time level is thus assigned as \[ \bm U^{\pm}_{K_{i,n+1}} \!=\! \left\{\begin{array}{ll} \bm 0, & K^\pm_{i,n+1} \!=\! \varnothing, \\ \dfrac{1}{|K^\pm_{i,n+1}|} \! \left(|K^\pm_{i,n}| \bm U^\pm_{K_{i,n}} \!+\! \displaystyle\sum_{S_{ij}^\pm\subseteq \partial K_{i,n}^\pm} \hat{\bm F}_{ij,n}^\pm \!+\! \hat{\bm F}^\pm_{K_{i,n}}\right), & K^\pm_{i,n+1} \!\neq\! \varnothing. \end{array}\right. \] \end{itemize} Basically, the steps we presented above may close the numerical scheme, while there are more details in the practical implementation to gurantee the stability of the scheme. Please see \cite{Guo2016} for those details. \section{Numerical Examples}\label{sec:num} In this section we present several numerical examples to validate our schemes, including one-dimensional Riemann problems, spherically symmetric problems and multi-dimensional shock impact problems. One-dimensional simulations are carried out on uniform interval meshes, while two and three-dimensional simulations are carried out on unstructured triangular and tetrahedral meshes respectively. \subsection{One-dimensional problems} In this part, we present some numerical examples of one-dimensional Riemann problems. The computational domain is $[0,1]$ with $400$ cells. And the reference solution, if mentioned, is computed on a very fine mesh with $10^4$ cells. \subsubsection{Shyue shock tube problem} This is a single-medium JWL Riemann problem used by Shyue \cite{Shyue2001}. The parameters of JWL EOS \eqref{eq:jwl} therein are given by $A_1=\unit[8.545\times10^{11}]{Pa}, A_2=\unit[2.050\times10^{10}]{Pa}$, $\omega=0.25$, $R_1=4.6$, $R_2=1.35$ and $\rho_0=\unit[1840]{kg/m^3}$. The initial conditions are \[ [\rho, u, p]^\top = \left\{ \begin{array}{ll} [1700, ~0, ~10^{12}]^\top, &x<0.5,\\ [2mm] [1000, ~0, ~5.0\times 10^{10}]^\top, & x > 0.5. \end{array} \right. \] The simulation terminates at $t=1.2\times 10^{-5}$. In Fig. \ref{res:rm_shyue}, the top panel shows the results of our numerical methods, while the bottom panel contains the results of Shyue \cite{Shyue2001}. Our results agree well with the highly resolved solution shown in a solid line given by Shyue. 
\begin{figure} \caption{Shyue shock tube problem (top panel: our results, bottom panel: results from Shyue \cite{Shyue2001}).} \label{res:rm_shyue} \end{figure} \subsubsection{Saurel shock tube problem} In this problem, we consider the advection of a density discontinuity of the liquid nitromethane described by Cochran-Chan EOS \eqref{eq:CC} in a uniform flow \cite{Saurel2007,Lee2013}. The parameters are given by $A_1=\unit[8.192\times10^8]{Pa}$, $A_2=\unit[1.508\times10^9]{Pa}, \omega=1.19$, $R_1=4.53$, $R_2=1.42$ and $\rho_0=\unit[1134]{kg/m^3}$. The initial value is \[ [\rho, u, p]^\top = \left\{ \begin{array}{ll} [1134, ~1000, ~2.0\times 10^{10}]^\top, &x<0.5,\\ [2mm] [500, ~1000, ~2.0\times 10^{10}]^\top, & x>0.5. \end{array} \right. \] The simulation terminates at $t=4.0\times 10^{-5}$. We use this Riemann problem to assess the performance of our methods against highly nonlinear equations of state. Fig. \ref{res:rm_saurel} displays the results of our numerical scheme and that of Saurel \textit{et al.} \cite{Saurel2007}, where we can see that there is no non-physical pressure and velocity across the contact discontinuity in our numerical scheme. \begin{figure} \caption{Saurel shock tube problem (top row: our results, bottom row: results from Saurel \textit{et al.} \cite{Saurel2007}).} \label{res:rm_saurel} \end{figure} \subsubsection{Ideal gas-water Riemann problem} In this example we simulate the ideal gas-water Riemann problems. The water is modeled by either the stiffened gas EOS \eqref{eq:stiffened} or the polynomial EOS \eqref{eq:poly}. The initial density, velocity and pressure are both assigned with \[ [\rho, u, p]^\top = \left\{ \begin{array}{ll} [1630, ~0, ~7.0\times 10^9]^\top, &x<0.5,\\ [2mm] [1000, ~0, ~1.0\times 10^5]^\top, & x > 0.5. \end{array} \right. \] The adiabatic exponent is $\gamma=2.0$ for the ideal gas EOS. The parameters of water are $\gamma=7.15$ and $p_\infty=\unit[3.31\times10^8]{Pa}$ for the stiffened gas EOS \cite{Rallu2009, Wang2008}, and $\rho_0=\unit[1000]{kg/m^3}$, $A_1=\unit[2.20\times10^9]{Pa}$, $A_2=\unit[9.54\times10^9]{Pa}$, $A_3=\unit[1.45\times10^{10}]{Pa}$, $B_0=B_1=0.28$, $T_1=\unit[2.20\times10^9]{Pa}$ and $T_2=0$ for the polynomial EOS \cite{Jha2014}, respectively. The same parameters are chosen in the remaining numerical examples for the water. The results with distinct equations of state at $t=8.0\times 10^{-5}$ are shown in Fig. \ref{res:rm_gm_water}, where we can observe that the numerical results agree well with the corresponding reference solutions. The comparison between these two graphs also shows the discrepancies due to the choices of the equations of state. It is observed that the shock wave in the stiffened gas EOS propagates faster than that in the polynomial EOS. \subsubsection{JWL-polynomial Riemann problem} This example concerns the JWL-polynomial Riemann problem. The initial states are \[ [\rho, u, p]^\top = \left\{ \begin{array}{ll} [1630, ~0, ~8.3\times 10^9]^\top, &x<0.5,\\ [2mm] [1000, ~0, ~1.0\times 10^5]^\top, & x > 0.5. \end{array} \right. \] We use the following values to describe the TNT \cite{Smith1999}: $A_1=\unit[3.712\times10^{11}]{Pa}$, $A_2=\unit[3.230\times10^9]{Pa}$, $\omega=0.30$, $R_1=4.15$, $R_2=0.95$ and $\rho_0=\unit[1630]{kg/m^3}$. The result at $t=8.0\times 10^{-5}$ is shown in Fig. \ref{res:rm_jwl_pol}, where we can observe that both the interface and shock are captured well without spurious oscillation. 
\begin{figure} \caption{Ideal gas-water Riemann problem (top row: stiffened gas, bottom row: polynomial).} \label{res:rm_gm_water} \caption{JWL-polynomial Riemann problem.} \label{res:rm_jwl_pol} \end{figure} \subsection{Spherically symmetric problems} In this part we present two spherically symmetric problems, where the governing equations are formulated as follows \begin{equation} \dfrac{\partial}{\partial t} \begin{bmatrix} r^2\rho \\ r^2\rho u \\ r^2E \end{bmatrix} +\dfrac{\partial}{\partial r} \begin{bmatrix} r^2\rho u \\ r^2(\rho u^2+p) \\ r^2(E+p)u \end{bmatrix} =\begin{bmatrix} 0 \\ 2rp \\ 0 \end{bmatrix}. \label{eq:spheq} \end{equation} The source term in \eqref{eq:spheq} is discretized using an explicit Euler method. \subsubsection{Air blast problem}\label{problem:blast} The shock wave that propagates through the air as a consequence of the nuclear explosion is commonly referred to as the \emph{blast wave}. In this example we simulate the blast wave from one kiloton nuclear charge. The explosion products and air are modeled by the ideal gas EOS with adiabatic exponents $\gamma=1.2$ and $1.4$ respectively. The initial density and pressure are $\unit[618.935]{kg/m^3}$ and $\unit[6.314 \times 10^{12}]{Pa}$ for the explosion products, and $\unit[1.29]{kg/m^3}$ and $\unit[1.013\times 10^5]{Pa}$ for the air. The initial interface is located at $r=\unit[0.3]{m}$ initially. To effectively capture the wave propagation we use a computational domain of radius $\unit[5000]{m}$. It is known that the destructive effects of the blast wave can be measured by its \emph{overpressure}, i.e., the amount by which the static pressure in the blast wave exceeds the ambient pressure ($\unit[1.013\times 10^5]{Pa}$). The overpressure increases rapidly to a peak value when the blast wave arrives, followed by a roughly exponential decay. The integration of the overpressure over time is called \emph{impulse}. See Fig. \ref{rm::air_blast} (a) for an illustration of these terminologies. Fig. \ref{rm::air_blast} (b) -- (d) show the peak overpressure, impulse and shock arrival time at different radii. The results are compared with the point explosion solutions in Qiao \cite{Qiao2003}, which confirm the accuracy of our methods in the air blast applications. \subsubsection{Underwater explosion problem} We use this example to simulate the underwater explosion problem where a TNT of one hundred kilograms explodes in the water. The high explosives and water are characterized by the JWL EOS and polynomial EOS, respectively. The radii of the computational domain and the initial interface are $\unit[15]{m}$ and $\unit[0.245]{m}$ respectively. The initial pressure of the high explosives is $\unit[9.5\times 10^9]{Pa}$. The same problem has been simulated in \cite{Jia2007} using ANSYS/AUTODYN. Fig. \ref{rm::udex} shows the computed peak overpressure and impulse at different radii. The results agree well with the empirical law provided in \cite{Cole1948}. \begin{figure} \caption{Shock wave parameters for air blast problem.} \label{rm::air_blast} \end{figure} \begin{figure} \caption{Shock wave parameters for underwater explosion problem.} \label{rm::udex} \end{figure} \subsection{Two-dimensional problems} In this part, we present a few two-dimensional cylindrically symmetric flows in engineering applications. 
The Euler equations for this configuration are formulated as a cylindrical form \[ \dfrac{\partial}{\partial t} \begin{bmatrix} r\rho \\ r\rho u \\ r\rho v\\ rE \end{bmatrix} +\dfrac{\partial}{\partial r} \begin{bmatrix} r\rho u \\ r(\rho u^2+p) \\ r\rho uv \\ r(E+p)u \end{bmatrix} +\dfrac{\partial}{\partial z} \begin{bmatrix} r\rho v \\ r\rho uv \\ r(\rho v^2+p)\\ r(E+p)v \end{bmatrix} =\begin{bmatrix} 0 \\ p \\ 0 \\ 0 \end{bmatrix}. \] To improve the efficiency of the simulation, the $h$-adaptive mesh method is adopted here \cite{Li2013}. Roughly speaking, more elements will be distributed in the region where the jump of pressure is sufficiently large. \subsubsection{Nuclear air blast problem} In this example we simulate the nuclear air blast in the computational domain $0\le r,z\le \unit[2000]{m}$. The initial states of the explosion products and air in this example are the same as that in Section \ref{problem:blast}, except that the bottom edge $z=0$ is now a rigid ground. The explosive center is located at the height $z=\unit[100]{m}$. And the radius of the initial interface is $\unit[0.3]{m}$ at $t=0$. Fig. \ref{rm::air_blast2} shows the pressure contours and adaptive meshes at $t=\unit[0.09]{s}$ and $\unit[0.3]{s}$. When the blast wave produced by the nuclear explosion arrives at the ground, it will be reflected firstly and propagate along the rigid ground simultaneously. When the incident angle exceeds the limit, the reflective wave switches from regular to irregular, and a Mach blast wave occurs. The peak overpressure and impulse at different radii are shown in Fig. \ref{rm::air_blast_ref2}. Our numerical results agree well the the reference data interpolated from the given standard data in \cite{Glasstone1977}. \begin{figure} \caption{Pressure contours and adaptive meshes for nuclear air blast problem.} \caption{Shock wave parameters for nuclear air blast problem.} \label{rm::air_blast2} \label{rm::air_blast_ref2} \end{figure} \subsubsection{TNT explosion in air}\label{problem:tntair} We use this example to assess the isotropic behavior of TNT explosion in a computational domain $0\le r\le \unit[20]{m},0\le z\le \unit[40]{m}$. The initial density and pressure are $\unit[1630]{kg/m^3}$ and $\unit[9.5\times 10^9]{Pa}$ for high explosives, and $\unit[1.29]{kg/m^3}$ and $\unit[1.013\times 10^5]{Pa}$ for the air. The initial interface is a sphere of radius $\unit[0.0527]{m}$ centered at the height $z=\unit[10]{m}$. The results of shock produced by the high explosives are shown in Fig. \ref{rm::tnt1}. The shock parameters are shown in Fig. \ref{rm::tnt2}, in comparison with the experimental data in \cite{Baker1973} and \cite{Crowl1969}. \begin{figure} \caption{Pressure contours and adaptive meshes for TNT explosion in air.} \caption{Shock wave parameters for TNT explosion in air. The reference solution 1 is taken from Baker \cite{Baker1973}, while the reference solution 2 is taken from Crowl \cite{Crowl1969}.} \label{rm::tnt1} \label{rm::tnt2} \end{figure} \subsection{Three-dimensional shock impact problem} In the last example, we present a three-dimensional shock impact problem in practical applications. The computational domain is a cross-shaped confined space $E_1\bigcup E_2$ where $E_1=[-2,2]\times [-2,2]\times [-6,6]$ and $E_2=[-6,6]\times [-2,2]\times[-2,2]$ in meters. The sphere of radius $\unit[0.4]{m}$ centered at the origin is filled with high explosives, while the remaining region is filled with air. 
The initial states are exactly the same as those of the problem in Section \ref{problem:tntair}. Due to the symmetry we only compute the problem in the first octant, namely an L-shaped region. All the boundaries are reflective walls. Again we use the $h$-adaptive technique to capture the shock front. The total number of cells is about $0.8$ -- $1$ million. To accelerate the computation we perform parallel computing with eight processors based on the classical domain decomposition method. The numerical results of the shock wave produced by the high explosives at $t=\unit[2.4\times 10^{-4}]{s}$ and $t=\unit[9.2\times 10^{-4}]{s}$ are displayed in Fig. \ref{rm::3d1}. From these plots we can see that the shock wave propagates as an expanding spherical surface in the early period. When the spherical shock wave impinges on the rigid surface, the shock strength increases and shock reflection occurs. It is also observed in the slice plots of Fig. \ref{rm::3d2} that the wave structures are much more complicated in the three-dimensional confined space. The numerical results here also show the capability of our methods in resolving fully three-dimensional flows.
\begin{figure}
\caption{Pressure contours and adaptive meshes for the three-dimensional shock impact problem. The plots (c) and (d) are turned over in the $y$-direction in order to display the shock reflection.}
\caption{Pressure slices at $t=9.2\times 10^{-4}$s.}
\label{rm::3d1}
\label{rm::3d2}
\end{figure}
\section{Conclusions}\label{sec:conclusion}
We extend the numerical scheme in Guo \textit{et al.} \cite{Guo2016} to fluids that obey a general Mie-Gr{\"u}neisen equation of state. The algorithm for the multi-medium Riemann problem, which is a fundamental element of the two-medium flow computation, is elaborated. A variety of preliminary numerical examples and application problems validate the effectiveness and robustness of our methods. In our future work, we will generalize the framework to fluid-solid coupling problems.
\iffalse
\section*{Acknowledgments}
The authors appreciate the financial supports provided by the National Natural Science Foundation of China (Grant No. 91630310, 11421110001, and 11421101).
\fi
\appendix
\section{Collections of equations of state}
In this appendix we elaborate on the equations of state that are mentioned in the numerical results. For convenience, we also collect the expressions of the coefficients and their derivatives at the end of this part.
\subsubsection*{\bf Ideal gas EOS}
Most gases can be modeled by the ideal gas law
\begin{equation}
p = (\gamma-1) \rho e,
\label{eq:ideal}
\end{equation}
where $\gamma>1$ is the adiabatic exponent.
\subsubsection*{\bf Stiffened gas EOS}
When considering water under high pressures, the following stiffened gas EOS is often used \cite{Rallu2009,Wang2008}:
\begin{equation}
p = (\gamma-1) \rho e-\gamma p_\infty,
\label{eq:stiffened}
\end{equation}
where $\gamma>1$ is the adiabatic exponent, and $p_\infty$ is a constant.
\subsubsection*{\bf Polynomial EOS}
The polynomial EOS \cite{Jha2014} can be used to model various materials
\begin{equation}
p =
\begin{cases}
A_1\mu + A_2\mu^2 + A_3\mu^3 + (B_0+B_1\mu)\rho_0 e, & \mu>0, \\
T_1 \mu + T_2\mu^2 + B_0 \rho_0e, &\mu \le 0,
\end{cases}
\label{eq:poly}
\end{equation}
where $\mu = {\rho}/{\rho_0} -1$ and $A_1,A_2,A_3,B_0,B_1, T_1,T_2,\rho_0$ are positive constants.
In this paper, we take an alternative formulation in the tension branch \cite{Autodyn2003}, where $p=T_1 \mu + T_2\mu^2 + (B_0+B_1\mu)\rho_0e$ for $\mu\le0$, to ensure the continuity of the speed of sound at $\mu=0$. Such a formulation avoids the occurrence of anomalous waves in the Riemann problem, which do not exist in real physics. When $B_1\le B_0\le B_1+2$ and $T_1\ge 2T_2$, the polynomial EOS satisfies the conditions \textbf{(C1)} and \textbf{(C3)}. In addition, if the density $\rho\ge {B_0\rho_0}/{(B_1+2)}$, then the polynomial EOS also satisfies the condition \textbf{(C2)}.
\subsubsection*{\bf JWL EOS}
Various detonation products of high explosives can be characterized by the JWL EOS \cite{Smith1999}
\begin{equation}
p = A_1\left(1-\frac{\omega\rho}{R_1\rho_0}\right) \exp\left(-\frac{R_1\rho_0}{\rho}\right) + A_2\left(1-\frac{\omega\rho}{R_2\rho_0}\right) \exp\left(-\frac{R_2\rho_0}{\rho}\right) + \omega\rho e,
\label{eq:jwl}
\end{equation}
where $A_1,A_2,\omega,R_1,R_2$ and $\rho_0$ are positive constants. Obviously the JWL EOS \eqref{eq:jwl} satisfies the conditions \textbf{(C1)} and \textbf{(C2)}. To enforce the condition \textbf{(C3)} we first notice that
\[
\lim_{\rho\rightarrow 0^+} h'(\rho) = 0.
\]
Then it suffices to ensure that $h''(\rho)\ge 0$, which is equivalent to the following inequality in terms of $\nu=\rho_0/\rho$:
\[
R_1\nu-2-\omega \ge G(\nu):=\dfrac{A_2R_2}{A_1R_1}(2+\omega -R_2\nu) \exp((R_1-R_2)\nu).
\]
A simple calculation shows that the maximum value of the function $G(\nu)$ above is given by
\[
\alpha = \dfrac{A_2R_2^2}{A_1R_1(R_1-R_2)} \exp\left(\dfrac{(2+\omega)(R_1-R_2)-R_2}{R_2}\right).
\]
Therefore a sufficient condition for \textbf{(C3)} is that the density satisfies
\[
\rho\le \dfrac{R_1}{2+\omega+\alpha}\rho_0,
\]
which is valid for most cases.
\subsubsection*{\bf Cochran-Chan EOS}
The Cochran-Chan EOS is commonly used to describe the reactants of condensed phase explosives \cite{Saurel2007,Lee2013}, which can be formulated as
\begin{equation}
p = \dfrac{A_1(R_1-1-\omega)}{R_1-1} \left(\dfrac{\rho}{\rho_0}\right)^{R_1}- \dfrac{A_2(R_2-1-\omega)}{R_2-1} \left(\dfrac{\rho}{\rho_0}\right)^{R_2}+ \omega \rho e,
\label{eq:CC}
\end{equation}
where $A_1,A_2,\omega,R_1,R_2$ and $\rho_0$ are positive constants. The Cochran-Chan EOS satisfies the conditions \textbf{(C1), (C2)} and \textbf{(C3)} if $1<R_2\le 1+\omega \le R_1$.
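As a small numerical illustration of the sufficient condition above (a sketch only, not part of the solver), the following Python snippet evaluates the reference pressure $h(\rho)$ and the density bound $\rho\le R_1\rho_0/(2+\omega+\alpha)$ for the JWL EOS, using the TNT parameters quoted in Section \ref{sec:num}; the Gr{\"u}neisen coefficient is the constant $\varGamma(\rho)=\omega$, cf.\ the table below.
\begin{verbatim}
import math

# TNT parameters of the JWL EOS as quoted in the numerical examples
# (A1, A2 in Pa, rho0 in kg/m^3).
A1, A2, omega, R1, R2, rho0 = 3.712e11, 3.230e9, 0.30, 4.15, 0.95, 1630.0

def h(rho):
    # reference pressure curve h(rho) of the JWL EOS (see the table below)
    return (A1 * (1.0 - omega*rho/(R1*rho0)) * math.exp(-R1*rho0/rho)
            + A2 * (1.0 - omega*rho/(R2*rho0)) * math.exp(-R2*rho0/rho))

# maximum alpha of G(nu) and the sufficient bound rho <= R1*rho0/(2+omega+alpha)
alpha = (A2 * R2**2 / (A1 * R1 * (R1 - R2))
         * math.exp(((2.0 + omega) * (R1 - R2) - R2) / R2))
rho_bound = R1 * rho0 / (2.0 + omega + alpha)
print(h(rho0), rho_bound)   # condition (C3) holds for densities up to rho_bound
\end{verbatim}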
\setlength{\tabcolsep}{2pt} \setlength{\rotFPtop}{0pt plus 1fil} \begin{sidewaystable} \centering \caption*{List of coefficients and their derivatives for several equations of state.} \small \begin{tabular}{cccccc} \toprule & \multicolumn{1}{c}{\bf Ideal}& \multicolumn{1}{c}{\bf Stiffened} & \multicolumn{1}{c}{\bf Polynomial} & \multicolumn{1}{c}{\bf JWL}& \multicolumn{1}{c}{\bf Cochran-Chan} \\ \midrule $\varGamma(\rho)$ & $\gamma-1$ & $\gamma-1$ & $B_1+{(B_0-B_1)\rho_0}/{\rho}$ & $\omega$ & $\omega$ \\ [10mm] $\varGamma'(\rho)$ & $0$ & $0$ & $-{(B_0-B_1)\rho_0}/{\rho^2}$ & $0$ & $0$ \\ [10mm] $\varGamma''(\rho)$ & $0$ & $0$ & ${2(B_0-B_1)\rho_0}/{\rho^3}$ & $0$ & $0$ \\ [10mm] $h(\rho)$ & $0$ & $-\gamma p_\infty$ & $\begin{cases} A_1\mu + A_2 \mu^2 + A_3 \mu^3, & \rho > \rho_0 \\ T_1 \mu+T_2\mu^2, & \rho \le \rho_0 \end{cases}$ & \bgroup \footnotesize \begin{tabular}{r} $A_1\left(1-\dfrac{\omega\rho}{R_1\rho_0}\right) \exp\left(-\dfrac{R_1\rho_0}{\rho}\right)$ \\ $+A_2\left(1-\dfrac{\omega\rho}{R_2\rho_0}\right) \exp\left(-\dfrac{R_2\rho_0}{\rho}\right)$ \end{tabular} \egroup & \bgroup \footnotesize \begin{tabular}{r} $\dfrac{A_1(R_1-1-\omega)}{R_1-1}\left(\dfrac{\rho}{\rho_0}\right)^{R_1}$ \\ $-\dfrac{A_2(R_2-1-\omega)}{R_2-1}\left(\dfrac{\rho}{\rho_0}\right)^{R_2}$ \end{tabular} \egroup \\ [10mm] $h'(\rho)$ & $0$ & $0$ & $\begin{cases} (A_1+2A_2\mu+3A_3\mu^2)/\rho_0, & \rho > \rho_0 \\ (T_1+2T_2\mu)/\rho_0, & \rho \le \rho_0 \end{cases}$ & \bgroup \footnotesize \begin{tabular}{r} $A_1\left(\dfrac{R_1\rho_0}{\rho^2}-\dfrac{\omega}{\rho} -\dfrac{\omega}{R_1\rho_0}\right) \exp\left(-\dfrac{R_1\rho_0}{\rho}\right)$ \\ $+A_2\left(\dfrac{R_2\rho_0}{\rho^2}-\dfrac{\omega}{\rho} -\dfrac{\omega}{R_2\rho_0}\right) \exp\left(-\dfrac{R_2\rho_0}{\rho}\right)$ \end{tabular} \egroup & \bgroup \footnotesize \begin{tabular}{r} $\dfrac{A_1R_1(R_1-1-\omega)}{R_1-1}\left(\dfrac{\rho}{\rho_0}\right)^{R_1-1}$ \\ $-\dfrac{A_2R_2(R_2-1-\omega)}{R_2-1}\left(\dfrac{\rho}{\rho_0}\right)^{R_2-1}$ \end{tabular} \egroup \\ [10mm] $h''(\rho)$ & $0$ & $0$ & $\begin{cases} (2A_2+6A_3\mu)/\rho_0^2, & \rho > \rho_0 \\ 2T_2/\rho_0^2, & \rho \le \rho_0 \end{cases}$ & \bgroup \footnotesize \begin{tabular}{r} $\dfrac{A_1R_1\rho_0}{\rho^3} \left(\dfrac{R_1\rho_0}{\rho}-2-\omega\right) \exp\left(-\dfrac{R_1\rho_0}{\rho}\right)$ \\ $+\dfrac{A_2R_2\rho_0}{\rho^3} \left(\dfrac{R_2\rho_0}{\rho}-2-\omega\right) \exp\left(-\dfrac{R_2\rho_0}{\rho}\right)$ \end{tabular} \egroup & \bgroup \footnotesize \begin{tabular}{r} ${A_1R_1(R_1-1-\omega)}\left(\dfrac{\rho}{\rho_0}\right)^{R_1-2}$ \\ $-{A_2R_2(R_2-1-\omega)}\left(\dfrac{\rho}{\rho_0}\right)^{R_2-2}$ \end{tabular} \egroup \\ \bottomrule \end{tabular} \end{sidewaystable} \section*{\refname} \end{document}
\begin{document}
\title{Positive weight function and classification of g-frames}
\begin{abstract}
Given a positive weight function and an isometry map on a Hilbert space ${\mathbb H}$, we study a class of linear maps and characterize, in terms of the weight function, when it is a $g$-frame, a $g$-Riesz basis, or a $g$-orthonormal basis for ${\mathbb H}$ with respect to $\C$. We apply our results to study frames for shift-invariant subspaces on the Heisenberg group.
\end{abstract}
\section{Introduction and preliminaries}
A frame for a Hilbert space is a countable, in general overcomplete, family of vectors such that each vector in the Hilbert space can be represented in a non-unique way in terms of the frame elements. The redundancy and the flexibility in the representation of a Hilbert space vector by the frame elements make frames a useful tool in mathematics as well as in interdisciplinary fields such as sigma-delta quantization \cite{ben06}, neural networks \cite{can99}, image processing \cite{can05}, system modelling \cite{dud98}, quantum measurements \cite{eld02}, sampling theory \cite{fei94}, wireless communications \cite{str03} and many other well-known fields. Given a Hilbert space $\mathcal H$, a countable family of vectors $\{x_j\}_{j\in J}\subset \mathcal H$ is called a {\it frame} for $\mathcal H$ if there are positive constants $A$ and $B$ such that for any $x\in \mathcal H$,
$$A\|x\|^2 \leq \sum_{j\in J} |\langle x, x_j\rangle|^2 \leq B \|x\|^2 .$$
Frames were introduced for the first time by Duffin and Schaeffer \cite{duf52}, in the context of nonharmonic Fourier series \cite{you01}. The notion of a frame was extended to that of a g-frame by Sun \cite{sun06} in 2006 to generalize all the existing frame-type concepts such as bounded quasi-projectors \cite{for04}, frames of subspaces \cite{cas04}, pseudo-frames \cite{li04}, oblique frames \cite{chr04}, outer frames \cite{ald04}, and time-frequency localization operators \cite{dor06}. Here, we recall the definition of a g-frame.
\begin{definition}
Let $\mathcal H$ be a Hilbert space and $\{\mathcal K_j\}_{j\in J}$ be a countable family of Hilbert spaces with associated norms $\| \cdot\|_{\mathcal K_j}$. A countable family of linear and bounded operators $\{\Lambda_j: {\mathbb H} \to \K_j\}_{j \in J}$ is called a {\it $g$-frame} for $\mathcal H$ with respect to $\{\mathcal K_j\}_{j\in J}$ if there are two positive constants $A$ and $B$ such that for any $f\in \mathcal H$ we have
\begin{equation}\label{eq01}
A\|f\|_{\mathcal H}^2 \leq \sum_{j\in J} \|\Lambda_j(f)\|_{\mathcal K_j}^2 \leq B \|f\|_{\mathcal H}^2.
\end{equation}
\end{definition}
The constants $A$ and $B$ are called the {\it $g$-frame lower and upper bounds}, respectively. If $A=B=1$, then the family is called a {\it Parseval $g$-frame}. For example, by the Riesz representation theorem, every $g$-frame is a frame if $\mathcal K_j= \Bbb C$ for all $j\in J$; conversely, every frame is a $g$-frame with respect to $\Bbb C$. If only the right-hand inequality of (\ref{eq01}) holds, the family is said to be a {\it $g$-Bessel sequence} with bound $B$. The family $\{\Lambda_j \}_{j \in J}$ is called {\it $g$-complete} if, for any vector $f\in \mathcal H$ with $\Lambda_j(f)=0$ for all $j \in J$, we have $f=0$.
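A simple example, included only for illustration, may help fix the ideas: if $\mathcal H=\bigoplus_{j\in J}\mathcal K_j$ is an orthogonal decomposition of $\mathcal H$ and $\Lambda_j$ denotes the orthogonal projection of $\mathcal H$ onto $\mathcal K_j$, then for every $f\in \mathcal H$
$$\sum_{j\in J} \|\Lambda_j(f)\|_{\mathcal K_j}^2 = \|f\|_{\mathcal H}^2 ,$$
so $\{\Lambda_j\}_{j\in J}$ is a Parseval $g$-frame for $\mathcal H$ with respect to $\{\mathcal K_j\}_{j\in J}$. The frames of subspaces and bounded quasi-projectors mentioned above may be viewed as generalizations of this basic situation.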
If $\{\Lambda_j \}_{j \in J}$ is $g$-complete and there are positive constants $A$ and $B$ such that for any finite subset $J_1 \subset J$ and $g_j \in \K_j, \; j \in J_1$, \[ A \sum_{j \in J_1} \|g_j \|^2 \leq \bigg\|\sum_{j \in J_1} \Lambda_j^* (g_j) \bigg\|^2 \leq B \sum_{j \in J_1}\|g_j \|^2, \] then $\{\Lambda_j \}_{j \in J}$ is called a {\it $g$-Riesz basis} for ${\mathbb H}$ with respect to $\{\K_j\}_{j \in J}$. Here, $\Lambda_j^*$ denotes the adjoint operator. We say $\{\Lambda_j \}_{j \in J}$ is a $g$-orthonormal basis for ${\mathbb H}$ with respect to $\{\K_j\}_{j \in J}$ if it satisfies the following: \begin{align}\label{onb1} \langle \Lambda^*_{i} g, \Lambda^*_{j} h \rangle &= 0 \quad \forall g \in \K_i, \; h \in \K_j, i\neq j \\\label{onb2} \| \Lambda^*_{i} g\|^2 &= \|g \|^2, \quad \forall i, \; g \in \K_i \\\label{onb3} \sum_{j \in J}\|\Lambda_jf \|^2 &=\|f \|^2, \quad \forall f \in {\mathbb H}. \end{align} Before we state the main results of this paper, let us consider the following example. For a function $\phi\in L^2(\Bbb R^d)$, $d\geq 1$ and $m, n\in \Bbb Z^d$, the modulation and translation of $\phi$ by multi-integers $m$ and $n$ are defined by $$M_m\phi(x) = e^{2\pi i \langle m, x\rangle} \phi(x), \quad T_n\phi(x) =\phi(x-n) .$$ The Gabor (Weil-Heisenberg) system generated by $\phi$ is $$\mathcal G(\phi):=\{M_m T_n\phi: \ m, n\in \Bbb Z^d\} .$$ It is well-known that the ``basis\rq\rq{} property of the Gabor system $\mathcal G(\phi)$ for its spanned vector space can be studied by the Zak transform of $\phi$ $$Z\phi(x, \xi) = \sum_{k\in \Bbb Z^d} \phi(x+k) e^{2\pi i \langle \xi, k\rangle} .$$ For example, the Gabor system $\mathcal G(\phi)$ is a Riesz basis for the spanned vector space if and only if there are positive constants $A$ and $B$ such that $A\leq |Z\phi(x, \xi)|\leq B$ a.e. $(x,\xi)\in [0,1]^d\times [0,1]^d$ (\cite{HSWW}). The purpose of this paper is to show that the above result is a particular case of a more general theory involving $g$-frames. For the rest of the paper we shall assume the following. $\Omega$ is a measurable set with measure $dx$. We assume that $\Omega$ has finite measure $|\Omega|$ and $|\Omega|=1$. We let $w:\Omega \to (0,\infty)$be a measurable map with $\int_\Omega w(x)dx<\infty$. Let $\U$ be a Hilbert space over the field $\Bbb F$ ($\Bbb R$ or $\Bbb C$) with associated inner product $\langle \cdot , \cdot \rangle_{\U} $. We denote by $L^2_w(\Omega, \U)$ the weighted Hilbert space of all measurable functions $f:\Omega\to \U$ such that $$\|f\|_{L^2_w(\Omega, \U)}^2: = \int_\Omega \|f(x)\|_\U^2 w(x) dx<\infty .$$ The associated inner product of any two functions $f, g$ in $L_w^2(\Omega, \U)$ is then given by $$\langle f, g\rangle_{L_w^2(\Omega, \U)} = \int_\Omega \langle f(x), g(x)\rangle_\U w(x) dx. $$ A countable family of unit vectors $\{f_k\}_{k\in K}$ in ${L_w^2(\Omega, \U)}$ constitute an ONB (orthonormal basis) for ${L_w^2(\Omega, \U)}$ with respect to the weight function $w$ if the family is orthogonal and for any $g\in {L_w^2(\Omega, \U)}$ the Parseval identity holds: $$\| g \|^2 = \sum_{k\in K} \left|\langle g, f_k\rangle_{L_w^2(\Omega, \U)}\right|^2 =\sum_{k\in K}\left| \int_\Omega \langle g(x), f_k(x)\rangle_{\U} w(x) dx\right|^2 .$$ To avoid any confusion, in the sequel, we shall use subscripts for all inner products and associated norms for Hilbert spaces, when necessary. For the rest, we assume that $S: \mathcal H \to L^2_w(\Omega, \U)$ is a linear and unitary map. 
Thus for any $f\in \mathcal H$ $$ \|f\|^2 = \int_\Omega \|Sf(x)\|_{\U}^2 w(x) dx. $$ We fix an ONB $\{f_n\}_{n\in K}$ for $L^2(\Omega)$ and ONB $\{g_m\}_{m\in J}$ for the Hilbert space $\U$, and define $$G_{m,n}(x): = f_n(x) g_m, \quad \forall x\in \Omega, \ (m,n)\in K\times J.$$ And, $$\tilde \Lambda_{(m,n)}(f)(x) = \langle S(f)(x), G_{(m,n)}(x) \rangle_{\U} \quad \forall \ f\in \mathcal H, x\in \Omega .$$ Our main results are the following. \begin{theorem}\label{th1} Let $\{f_n\}_{n\in K}\subset L^2(\Omega)$, $\{g_m\}_{m\in J} \subset \U$ and $\{\tilde \Lambda_{m,n}\}$ be as in above. Assume that $|f_n(x)|=1$ for a.e. $x\in \Omega$. Then the following hold: \begin{itemize} \item [(a)] $\{\tilde\Lambda_{(m,n)}\}_m $ is a Parseval $g$-frame for $\mathcal H$. Thus $\{\Lambda_{(m,n)}\}_m$ is a Bessel sequence. \item [(b)] For any ${(m,n)}$, the linear map $\Lambda_{m,n}: \mathcal H \to \C$ defined by $$\Lambda_{m,n}(f) =\int_\Omega \tilde\Lambda_{m,n}(f)(x) \ w(x) dx $$ is well-defined. And, $\{\Lambda_{m,n}\}$ is a frame for ${\mathbb H}$ if and only if there are positive finite constants $A$ and $B$ such that $A\leq w(x)\leq B$ for a.e. $x\in \Omega$. \item [(c)] The family $\{\Lambda_{m,n}\}$ is a Riesz basis for ${\mathbb H}$ if and only if there are positive finite constants $A$ and $B$ such that $A\leq w(x)\leq B$ for a.e. $x\in \Omega$. \end{itemize} \end{theorem} \begin{corollary}\label{main result} Let $\{\lambda_k\}_{k\in J}$ be an orthonormal basis (or a Parseval frame) for $L^2(\Omega)$ such that $|\lambda_k(x)|=1$ for all $x\in \Omega$. Assume that $S: \mathcal H \to L^2_w(\Omega)$ is a unitary map. Then the sequence of operators $\{\Lambda_k\}_{k\in J}$ defined by $$\Lambda_k(f) =\int_\Omega S(f)(x) \overline{ \lambda_k(x) } w(x) dx $$ is a frame for ${\mathbb H}$ if and only if there are positive finite constants $A$ and $B$ such that $A\leq w(x)\leq B$ for a.e. $x\in \Omega$. \end{corollary} \begin{theorem}\label{th3} Let $\{f_n\}_{n\in K}\subset L^2(\Omega)$, $\{g_m\}_{m\in J} \subset \U$ and $\{\tilde \Lambda_{m,n}\}$ be as in above. Assume that $|f_n(x)|=1$ for a.e. $x\in \Omega$. The family $\{\Lambda_{m,n}\}$ is an ONB for ${\mathbb H}$ if and only if $w(x)=1$ for a.e. $x\in \Omega$. \end{theorem} \section{Proof of Theorem \ref{th1}} First we prove the following lemmas which we need for the proof of Theorem \ref{th1}. \begin{lemma}\label{technical lemma} Let $\{f_n\}$ be an ONB for the weighted Hilbert space $L_w^2(\Omega)$. Let $\{g_m\}$ be an ONB for a Hilbert space $\U$. Define $G_{m,n}(x) = f_n(x) g_m$. Then the family $\{G_{m,n}\}_{J\times K}$ is an ONB for $L_w^2(\Omega, \U)$. \end{lemma} In order to prove the lemma, we shall recall the following result from \cite{iosevich-mayeli-14} and prove it here for the sake of completeness. \begin{lemma}\label{mixed orthonormal bases} Let $(X,\mu)$ be a measurable space, and $\{f_n\}_n$ be an orthonormal basis for $L^2(X):=L^2(X, d\mu)$. Let $Y$ be a Hilbert space and $\{g_m\}_m$ be a family in $Y$. For any $m, n$ and $x\in X$ define $G_{m,n}(x) := f_n(x) g_m$. Then $\{G_{m,n}\}_{m,n}$ is an orthonormal basis for the Hilbert space $L^2(X,Y,d\mu)$ if and if $\{g_m\}_m$ is an orthonormal basis for $Y$. 
\end{lemma} \begin{proof} For any $m, n$ and $m\rq{}, n\rq{}$ we have \begin{align}\label{orthogonality-relation} \langle G_{m,n}, G_{m\rq{},n\rq{}} \rangle &= \int_X \langle f_m(x) g_n, f_{m\rq{}}(x) g_{n\rq{}} \rangle_Y \ d\mu(x)\\\notag & = \langle f_m,f_{m\rq{}}\rangle_{L^2(X)} \langle g_n, g_{n\rq{}}\rangle_Y \\\notag &= \delta_{m,m\rq{}} \langle g_n, g_{n\rq{}}\rangle_Y. \end{align} This shows that the orthogonality of $\{G_{m,n}\}_{m,n}$ is equivalent to the orthogonality of $\{g_m\}_m$. And, $\|G_{m,n}\| =1$ if and only if $\|g_n\|=1$. Let $\{g_m\}_m$ be an orthonormal basis for $Y$. To prove the completeness of $\{G_{m,n}\}$ in $L^2(X,Y,d\mu)$, let $F\in L^2(X,Y,d\mu)$ such that $\langle F, G_{m,n}\rangle =0$, $\forall \ m, n$. We claim $F=0$. By the definition of the inner product we have \begin{align}\label{inner-product} 0=\langle F, G_{m,n}\rangle &=\int_X \langle F(x), G_{m,n}(x)\rangle_Y d\mu(x)\\\notag &= \int_X \langle F(x), f_m(x) g_n\rangle_Y d\mu(x) \\\notag &= \int_X \langle F(x), g_n\rangle_Y \overline{f_m(x)} d\mu(x) \\\notag &= \langle A_n, f_m\rangle \end{align} where $$A_n: X\to \C; \ \ x \mapsto \langle F(x), g_n\rangle_Y. $$ $A_n$ is a measurable function and lies in $L^2(X)$ with $\|A_n\|\leq \|F\|$. Since $\langle A_n, f_m\rangle_{L^2(X)}=0$ for all $m$, then $A_n=0$ by the completeness of $\{f_m\}$. On the other hand, by the definition of $A_n$ we have $\langle F(x), g_n\rangle_{Y}=0$ for a.e. $x\in X$. Since $\{g_n\}$ is complete in $Y$, then $F(x)=0$ for a.e. $x\in X$. This proves the claim. Conversely, assume that $\{G_{m,n}\}_{m,n}$ is an orthonormal basis for the Hilbert space $ L^2(X,Y,d\mu)$. Therefore by (\ref{orthogonality-relation}), $\{g_m\}$ is an orthonormal set. We prove that if for $g\in Y$ and $\langle g, g_m\rangle=0$ for all $m$, then $g$ must be identical to zero. For this, for any $n$ define the map $$B_n: X\to Y; \ \ x\mapsto f_n(x)g.$$ Then $B_n$ is measurable and it belongs to $ L^2(X,Y,d\mu)$ and $\|B_n\| = \|g\|_Y$. Thus \begin{align} B_n&=\sum_{n\rq{},m} \langle B_n, G_{m,n\rq{}}\rangle_{L^2(X,Y,d\mu)} G_{m,n\rq{}}\\\notag &= \sum_{n\rq{},m} \langle f_n, f_{n\rq{}}\rangle_{L^2(X)} \langle g, g_m\rangle_Y G_{m,n\rq{}}\\\notag &= \sum_{m} \langle g, g_m\rangle_Y G_{n,m}. \end{align} By the assumption that $ \langle g, g_m\rangle_Y=0$ for all $m$, we get $B_n=0$. This implies that $B_n(x)= f_n(x) g=0$ for a.e. $x$. Since, $f_n\neq 0$, then $g$ must be a zero vector, and hence we are done. \end{proof} \begin{lemma}\label{boundedness} $\tilde \Lambda_{(m,n)}: \mathcal H\to L^2_w(\Omega)$ is a bounded operator and $\|\tilde \Lambda_{(m,n)}(f)\|_{L^2_w(\Omega)}\leq \|f\|$. \end{lemma} \begin{proof} Let $f\in {\mathbb H}$. Then for any $m\in J$ and $n\in K$, \begin{align*} \| \tilde\Lambda_{m,n}(f)\|_{L^2_w(\Omega)}^2 &= \int_\Omega |\tilde\Lambda_{m,n}(f)(x)|^2 w(x) dx \\ &= \int_\Omega |\langle Sf(x), f_n(x)g_m\rangle_{\U}|^2 w(x) dx\\ &= \int_\Omega |\langle Sf(x), g_m\rangle_{\U}|^2 w(x) dx . \end{align*} Using the Cauchy--Schwartz inequality in the preceding line, we get \begin{align*} \| \tilde\Lambda_{m,n}(f)\|_{L^2_w(\Omega)}^2 &\leq \int_\Omega \|Sf(x)\|^2 w(x) dx = \|f\|^2 . \end{align*} \end{proof} Here, we first calculate the adjoint of $\tilde\Lambda_{m,n}$: $S$ is a unitary map. Then for any $f\in {\mathbb H}$ and $h\in L_w^2(\Omega,\U)$ we have \begin{align} \int_\Omega \langle Sf(x), h(x)\rangle_{\U} w(x)dx= \langle Sf, h\rangle_{L_w^2(\Omega,\U)} =\langle f, S^{-1}h\rangle . 
\end{align} Therefore for any $\phi\in L_w^2(\Omega)$ we get \begin{align}\label{adjoint} \langle \tilde \Lambda_{m,n} f, \phi\rangle = \langle f, S^{-1}((f_n \phi)g_m)\rangle , \end{align} where $(f_n \phi)g_m \in L_w^2(\Omega, \U)$ and $(f_n \phi)g_m(x) = f_n(x)\phi(x) g_m$. The relation (\ref{adjoint}) indicates that $$\tilde \Lambda_{m,n}^*(\phi) = S^{-1}((f_n \phi)g_m).$$ Notice that for any $f\in {\mathbb H}$, $$\Lambda_{m,n} f = \langle f, S^{-1}(f_ng_m)\rangle_{{\mathbb H}} .$$ Thus $\Lambda_{m,n}^*: \C \to {\mathbb H}$ is given by $c\mapsto c\,S^{-1}(f_ng_m)$. \begin{proof}[Proof of Theorem \ref{th1}] $(a)$: Observe that $|\tilde \Lambda_{m,n}(f)(x)|\leq \| Sf(x)\|_{\U}$ and that $S$ is an isometry. For any $f \in {\mathbb H}$ and $n\in K$ we have \begin{align*} \sum_m \| \tilde \Lambda_{m,n} (f)\|^2_{L^2_w(\Omega)} & =\sum_m \int_{\Omega} | \tilde \Lambda_{m,n} (f)(x)|^2 w(x) dx \\ & = \sum_m \int_{\Omega} | \langle G_{m,n}(x) , S(f)(x)\rangle_{\U} |^2 w(x) dx \\ & = \int_{\Omega} \sum_m| \langle G_{m,n}(x) , S(f)(x)\rangle_{\U} |^2 w(x) dx \\ & = \int_{\Omega} \sum_m| \langle g_m , w(x)^{1/2}S(f)(x)\rangle_{\U} |^2 dx, \end{align*} where in the last step we used $|f_n(x)|=1$ a.e. Since $\{g_m\}_m$ is an ONB for $\U$, applying the Parseval identity pointwise in the last integral, together with the isometry property of $S$, we get \begin{align}\label{parseval-proprty} \sum_m \| \tilde \Lambda_{m,n} (f)\|^2_{L^2_w(\Omega)} = \int_{\Omega} \| S(f)(x) \|^2_{\U} \; w(x) dx = \| f \|^2. \end{align} Therefore, $\{\tilde \Lambda_{m,n}\}_m$ is a Parseval $g$-frame for ${\mathbb H}$ with respect to $L^2_w(\Omega)$. To prove that $\{\Lambda_{m,n}\}_m$ is a Bessel sequence, note that by H{\"o}lder's inequality in the weighted Hilbert space $L^2_w(\Omega)$ we can write \begin{align*} |\Lambda_{m,n}(f)| \leq \int_\Omega |\tilde \Lambda_{m,n} (f)(x)| w(x) dx & \leq \bigg( \int_\Omega |\tilde \Lambda_{m,n} (f)(x)|^2 w(x) dx \bigg)^{\frac{1}{2}} \bigg( \int_\Omega w(x) dx \bigg)^{\frac{1}{2}}. \end{align*} By Lemma \ref{boundedness}, the first integral on the right is finite. Therefore, by summing the squares of these terms over $m$ we get \[ \sum_{m \in J} |\Lambda_{m,n}(f)|^2 \leq C \sum_{m \in J} \int_\Omega |\tilde \Lambda_{m,n} (f)(x) |^2 w(x) dx =C \sum_{m \in J} \| \tilde \Lambda_{m,n} (f) \|_{L^2_w(\Omega)}^2 =C\| f \|^2, \] where $C:= \int_\Omega w(x) dx$ is a positive constant. Thus $\{\Lambda_{m,n}\}_{m \in J}$ is a Bessel sequence for ${\mathbb H}$ with bound $C$. Notice that in the last equality we used (\ref{parseval-proprty}). $(b)$ The map $\Lambda_{m,n}:{\mathbb H} \to \C$ is linear, well-defined and bounded. Indeed, for any $f\in {\mathbb H}$, \begin{align*} \int_\Omega |\tilde \Lambda_{m,n} (f)(x)| w(x) dx &\leq \left(\int_\Omega w(x) dx\right)^{1/2} \left(\int_\Omega\|Sf(x)\|^2 w(x) dx\right)^{1/2} \\ &= \|f\| \left(\int_\Omega w(x) dx\right)^{1/2}. \end{align*} Assume that $A\leq w(x)\leq B$ for almost every $x\in \Omega$. Let $f\in \mathcal H$. Then \begin{align}\notag \sum_{m,n} |\Lambda_{m,n}(f)|^2 &= \sum_{m,n} \left| \int_\Omega \langle G_{m,n}(x) , S(f)(x)\rangle_{\U} w(x) dx\right|^2 \\\notag &= \sum_{m,n} \left| \int_\Omega \langle G_{m,n}(x) , S(f)(x)w(x)\rangle_{\U} dx\right|^2 \\\label{eq1} &=\sum_{m,n} \left| \langle G_{m,n} , S(f)w\rangle_{L^2(\Omega, \U)}\right|^2 . \end{align} By Lemma \ref{mixed orthonormal bases}, the countable family $\{ G_{m,n}\}_{m,n}$ is an ONB for $L^2(\Omega, \U)$.
Thus \begin{align}\notag (\ref{eq1})&= \Vert S(f) w \Vert^2_{L^2(\Omega, \U)} \\\label{equ2} & = \int_\Omega \Vert S(f)(x)\Vert^2_{\U} \ w(x)^2 dx. \end{align} By invoking the assumption that $w(x)\leq B$ for a.e. $x\in \Omega$ in (\ref{equ2}) we obtain \begin{align}\notag (\ref{equ2}) \leq B \int_\Omega \Vert S(f)(x)\Vert^2_{\U} \ w(x) dx = B \|S(f)\|^2_{L^2_w(\Omega, \U)}= B \| f\|^2. \end{align} This proves that the sequence $\{\Lambda_{m,n}\}_{m,n}$ is a Bessel sequence for ${\mathbb H}$. An analogous argument, using $w(x)\geq A$, also proves the frame lower bound condition for $\{\Lambda_{m,n}\}_{m,n}$. For the converse, assume that $\{\Lambda_{m,n}\}_{m,n}$ is a frame for $\mathcal H$ with the frame bounds $0<A\leq B<\infty$. Therefore for any $f\in \mathcal H$ $$A\|f\|^2 \leq \sum_{m,n} |\Lambda_{m,n}(f)|^2 \leq B \|f\|^2.$$ Assume that there is a set $E\subset \Omega$ with positive measure such that $w(x)<A$ for all $x\in E$. We will prove that there exists a function in $\mathcal H$ for which the lower frame condition does not hold. To this end, let $e_0 \in \U$ be a unit vector and define $\chi_E(x):= 1_E(x) e_0$. By the assumption, $w \in L^1(E)$. Thus $\chi_E \in L^2_w(\Omega, \U)$. Since $S$ is unitary, there is a function $\phi_E\in {\mathbb H}$ such that $S(\phi_E)=\chi_E$, and we have \begin{align}\label{iso} \| S(\phi_E) \|_{L^2_w(\Omega, \U)}= \|\phi_E \|_{{\mathbb H}}=\| \chi_E \|_{L^2_w(\Omega, \U)}. \end{align} On the other hand, the sequence $\{G_{m,n}\}_{m,n}$ is an ONB for $L^2(\Omega, \U)$. Thus \begin{align*} \sum_{m,n} |\Lambda_{m,n}(\phi_E)|^2 &= \sum_{m,n} \left| \langle G_{m,n}, S(\phi_E)w \rangle_{L^2(\Omega, \U)} \right|^2 \\ &= \Vert \chi_E w \Vert^2_{L^2(\Omega, \U)} \\ & = \int_\Omega \Vert \chi_E(x) \Vert^2_{\U} \ w(x)^2 dx \\ &< A \int_\Omega \Vert \chi_E(x) \Vert^2_{\U} \ w(x) dx \\ & = A \|\chi_E\|^2_{L_w^2(\Omega,\U)} \\ &= A \| \phi_E \|^2_{{\mathbb H}} \hspace{1in} \text{by \ (\ref{iso})}. \end{align*} The preceding calculation shows that the lower frame bound condition fails for $\phi_E$. This contradicts our assumption that $\{\Lambda_{m,n}\}_{m,n}$ is a frame for ${\mathbb H}$; therefore $w(x)\geq A$ for a.e. $x\in \Omega$. The argument for the upper bound for $w$ follows similarly. $(c)$ Assume that $A\leq w(x)\leq B$ for almost every $x\in \Omega$. Let $\{c_{m,n}\}_{m,n}$ be any finite sequence in $\C$. Then \begin{align*} \left\| \sum_{m,n} \Lambda_{m,n}^*(c_{m,n}) \right\|_{\mathbb H}^2 &= \left\| \sum_{m,n} c_{m,n} S^{-1}(f_ng_m) \right\|_{\mathbb H}^2 \\ & = \left\| S^{-1}\left(\sum_{m,n} c_{m,n} f_ng_m\right) \right\|_{\mathbb H}^2 \\ & = \left\| \sum_{m,n} c_{m,n} f_ng_m\right\|_{L^2_w(\Omega,\U)}^2 & \text{($S$ is unitary)}\\ &= \int_\Omega \left\| \sum_{m,n} c_{m,n} f_n(x)g_m\right\|_{\U}^2 w(x) dx\\ & \leq B \int_\Omega \left\| \sum_{m,n} c_{m,n} f_n(x)g_m\right\|_{\U}^2 dx & \text{(since $w(x)\leq B$ a.e. $x\in \Omega$)}\\ &= B \sum_{m,n} |c_{m,n}|^2 &\text{(since $\{f_ng_m\}_{m,n}$ is an ONB for $L^2(\Omega,\U)$).} \end{align*} We also have \begin{align*} \left\| \sum_{m,n} \Lambda_{m,n}^*(c_{m,n}) \right\|_{\mathbb H}^2 & = \int_\Omega \left\| \sum_{m,n} c_{m,n} f_n(x)g_m\right\|_{\U}^2 w(x) dx \geq A \int_\Omega \left\| \sum_{m,n} c_{m,n} f_n(x)g_m\right\|_{\U}^2 dx = A \sum_{m,n} |c_{m,n}|^2 &\text{(since $w(x) \geq A$ a.e. $x\in \Omega$).} \end{align*} These inequalities show that $\{\Lambda_{m,n}\}_{K\times J}$ is a Riesz basis for ${\mathbb H}$ with lower and upper Riesz bounds $A$ and $B$, respectively. Now assume that $\{\Lambda_{m,n}\}_{K\times J}$ is a Riesz basis for ${\mathbb H}$ with Riesz bounds $A$ and $B$.
Therefore, for any sequence $\{c_{m,n}\}_{m,n}$ the following inequalities hold: \begin{align}\label{Riesz inequality} A\sum_{m,n} |c_{m,n}|^2 \leq \left\| \sum_{m,n}\Lambda_{m,n}^*(c_{m,n})\right\|^2 \leq B \sum_{m,n} |c_{m,n}|^2. \end{align} We show that there are positive constants $A$ and $B$ such that $A\leq w(x)\leq B$ for a.e. $x\in \Omega$. To the contrary, and without loss of generality, assume that there is a measurable subset $E\subset \Omega$ with positive measure such that $w(x)<A$ for all $x\in E$. Let $e$ be any unit vector in the Hilbert space $\U$ and define the function ${\bf 1}_E(x) = e$ if $x\in E$ and ${\bf 1}_E(x)=0$ otherwise. It is clear that ${\bf 1}_E\in L^2(\Omega, \U)$. Thus, there are coefficients $\{c_{m,n}\}_{K\times J}$ such that ${\bf 1}_E= \sum_{m,n} c_{m,n} f_n g_m$ and $\|{\bf 1}_E\|^2= \sum_{m,n}|c_{m,n}|^2$. Then $\|{\bf 1}_E(x)\|=1$ for all $x\in E$ and we get \begin{align*} \left\| \sum_{m,n}\Lambda_{m,n}^*(c_{m,n})\right\|^2 &= \left\| \sum_{m,n} c_{m,n} f_ng_m\right\|_{L^2_w(\Omega,\U)}^2 \\ &= \int_\Omega \left\| \sum_{m,n} c_{m,n} f_n(x)g_m\right\|_{\U}^2 w(x) dx\\ &= \int_\Omega \left\| {\bf 1}_E(x)\right\|_{\U}^2 w(x) dx\\ &= \int_E w(x) dx\\ &< A \int_\Omega \left\| {\bf 1}_E(x)\right\|_{\U}^2 dx \quad \text{(by the assumption $w(x)<A$ for all $x\in E$)}\\ &= A \int_\Omega \left\| \sum_{m,n} c_{m,n} f_n(x)g_m\right\|_{\U}^2 dx \\ &= A \sum_{m,n} |c_{m,n}|^2 \quad \text{(since $\{f_ng_m\}_{m,n}$ is an ONB for $L^2(\Omega, \U)$)} . \end{align*} This contradicts the lower bound condition in (\ref{Riesz inequality}). \end{proof} \section{Proof of Corollary \ref{main result}} \begin{proof} Assume that $A\leq w(x)\leq B$ for almost every $x\in \Omega$. Let $f\in \mathcal H$. Then \begin{align}\label{first line} \sum_{k \in J} |\Lambda_k(f)|^2 &= \sum_{k \in J} \left| \int_\Omega S(f)(x) \overline{\lambda_k(x)} w(x) dx\right|^2. \end{align} Since $\{\lambda_k\}_{k \in J}$ is an orthonormal basis for $L^2(\Omega)$, using Plancherel\rq{}s theorem we continue as follows: \begin{align*} (\ref{first line})&= \int_\Omega |S(f)(x) w(x)|^2 dx \leq B \int_\Omega |S(f)(x)|^2 w(x) dx = B \|f\|^2. \end{align*} The frame boundedness from below by $A$ follows from a similar calculation. For the converse, we shall use a contradiction argument. Assume that $\{\Lambda_k\}_{k \in J}$ is a frame for $\mathcal H$ with the frame bounds $0<A\leq B<\infty$. Therefore for any $f\in \mathcal H$ we have $$A\|f\|^2 \leq \sum_{k \in J} |\Lambda_k(f)|^2 \leq B \|f\|^2.$$ Let $E\subset \Omega$ be a measurable set with positive measure such that $w(x)< A$ for all $x\in E$. We shall show that the lower frame condition with bound $A$ must then fail. By the assumptions, $w \in L^1(E)$. Thus $1_E\in L^2_w(\Omega)$. Since $S$ is onto, let $\phi_E\in \mathcal H$ be the pre-image of $1_E$. Therefore $S(\phi_E) = 1_E$ and $\|\phi_E\|_\mathcal H = \|1_E\|_{L^2_w(\Omega)}$ and we have \begin{align}\label{E1} \sum_{k \in J} |\Lambda_k(\phi_E)|^2 = \sum_{k \in J} \left|\int_\Omega S(\phi_E)(x) \overline{\lambda_k(x)} w(x)dx \right|^2 = \int_\Omega |1_E(x)|^2 |w(x)|^2 dx = \int_E |w(x)|^2 dx. \end{align} Since $w(x)<A$ for all $x\in E$, from the last integral we obtain the following: \begin{align*} (\ref{E1}) < A \int_E w(x) dx = A \int_\Omega |1_E(x)|^2 w(x) dx = A\int_\Omega |S(\phi_E)(x)|^2 w(x) dx = A\|\phi_E\|^2.
\end{align*} The preceding calculation shows that the frame lower bound condition fails for $\phi_E$. This contradicts our assumption that $\{\Lambda_k\}_{k \in J}$ is a frame; therefore $w(x)\geq A$ for a.e. $x\in \Omega$. The argument for the upper bound follows similarly. \end{proof} \section{Proof of Theorem \ref{th3}} \begin{proof} Assume that $w(x)=1$ for a.e. $x\in \Omega$. By the equations (\ref{eq1}) and (\ref{equ2}), for any $f\in {\mathbb H}$ we have \begin{align}\notag \sum_{m,n} |\Lambda_{m,n}(f)|^2 &=\sum_{m,n} \left| \langle G_{m,n} , S(f)\rangle_{L^2(\Omega, \U)}\right|^2 = \| Sf\|^2 = \|f\|^2 . \end{align} This proves (\ref{onb3}). Let $c_1, c_2\in \C$. For any $m,m\rq{}\in J$ and $n, n\rq{}\in K$ we have \begin{align*} \langle \Lambda_{m,n}^*(c_1),\Lambda_{m\rq{},n\rq{}}^*(c_2)\rangle &= c_1\overline{c_2} \langle S^{-1}(f_ng_m), S^{-1}(f_{n\rq{}}g_{m\rq{}})\rangle_{{\mathbb H}} \\ & = c_1\overline{c_2} \langle f_ng_m, f_{n\rq{}}g_{m\rq{}}\rangle_{L^2(\Omega, \U)} \\ &= c_1\overline{c_2} \delta_{m,m\rq{}}\delta_{n,n\rq{}}. \end{align*} This proves the relations (\ref{onb1}) and (\ref{onb2}). To prove the converse, we argue by contradiction. Assume that there is a subset $E\subset \Omega$ of positive measure on which $w(x)<1$. As in the proof of Theorem \ref{th1}, one can show the existence of a function $\phi_E$ for which, by a calculation analogous to the one following the relation (\ref{iso}), the following holds: $$\sum_{m,n} |\Lambda_{m,n}(\phi_E)|^2 < \|\phi_E\|^2 .$$ This shows that the relation (\ref{onb3}) does not hold for $\phi_E$, which contradicts the assumption. \end{proof} \section{Examples} \begin{example}\label{example 1} Let $\Omega=D$ be a fundamental domain in $\Bbb R^d$ with Lebesgue measure one. Assume that $\Gamma=M\Bbb Z^d$, where $M$ is a $d\times d$ invertible matrix, and that the exponentials $\{e_n(x):= e^{2\pi i \langle n,x\rangle}: \ n\in \Gamma\}$ form an orthonormal basis for $L^2(D)$. For $0\neq\phi\in L^2(\Bbb R^d)$, define ${\mathbb H}:= \overline{\text{span}\{\phi(\cdot-n): n\in \Gamma\}}$ and the weight function $w$ by $w(x) := \sum_{n\in \Gamma^\perp} |\hat\phi(x+n)|^2$, a.e. $x\in D$. We claim that $\int_D w(x) dx= \|\phi\|^2$, which is finite. To this end, notice that $D$ is a fundamental domain for the lattice $\Gamma$. By a result of Fuglede \cite{Fug74}, $D$ tiles $\Bbb R^d$ by the dual lattice $\Gamma^\perp=M^{-t}\Bbb Z^d$, where $M^{-t}$ is the inverse transpose of $M$. Therefore, we have \begin{align*} \int_D w(x) dx &= \int_D \sum_{n\in \Gamma^\perp} |\hat\phi(x+n)|^2 dx = \int_{\mathbb R^d} |\hat\phi(x)|^2 dx = \|\hat \phi\|^2 = \|\phi\|^2 . \end{align*} Let $E_w:= \{x\in D: w(x)>0\}$ and for any $f\in {\mathbb H}$ define $$S(f)(x):= 1_{E_w}(x) {w(x)}^{-1} \sum_{n\in \Gamma^\perp} \hat f(x+n) \overline{\hat \phi(x+n)} \quad \text{a.e.}\ x\in D.$$ Then $S$ is a unitary map from ${\mathbb H}$ onto the weighted Hilbert space $L_w^2(D)$, with $\U=\C$. Note that $Sf(x)= 0$ a.e. $x\in D\setminus E_w$ (Theorem 3.1 (i) \cite{HSWW}). For $k\in \Gamma$ define $$\Lambda_k(f):= \int_{E_w} \left(\sum_{n\in \Gamma^\perp} \hat f(x+n)\overline{\hat\phi(x+n)}\right) e^{-2\pi i \langle x, k\rangle } dx.$$ By Corollary \ref{main result}, the operators $\{\Lambda_k\}_{k\in \Gamma}$ constitute a frame for ${\mathbb H}$ if there are positive constants $A$ and $B$ such that $A\leq \sum_{n\in \Gamma^\perp} |\hat\phi(x+n)|^2\leq B$ for a.e. $x\in E_w$.
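As a quick numerical illustration of this criterion (the concrete choices below are ours and not part of the example: we take $d=1$ and $\Gamma=\Bbb Z$, so that $\Gamma^\perp=\Bbb Z$ and $D=[0,1)$, and we let $\phi$ be the linear B-spline, whose Fourier transform is $\mathrm{sinc}^2$), one may approximate the periodized weight $w$ on a grid and inspect its essential bounds:
\begin{verbatim}
import numpy as np

# Sanity check of the frame criterion in Example 1 for d = 1 and Gamma = Z.
# phi is the linear B-spline (hat function), so |phi^(xi)|^2 = sinc(xi)^4.
def phi_hat_sq(xi):
    return np.sinc(xi) ** 4          # np.sinc(x) = sin(pi x)/(pi x)

x = np.linspace(0.0, 1.0, 2001, endpoint=False)   # grid on D = [0,1)
n = np.arange(-50, 51)                             # truncated periodization sum
w = np.sum(phi_hat_sq(x[:, None] + n[None, :]), axis=1)
print(w.min(), w.max())   # approximately 1/3 and 1, so A = 1/3, B = 1 work
\end{verbatim}
For this choice of $\phi$ the bounds $A=1/3$ and $B=1$ suffice, so the corresponding translates form a frame for ${\mathbb H}$.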
By the well-known periodization method, it is obvious that $\Lambda_k(f) = \langle T_k \phi, f\rangle$ for any $f\in {\mathbb H}$ and $k\in \Gamma$, with $T_k\phi(x)= \phi(x-k)$. Thus, $\{\Lambda_k\}_{k\in\Gamma}$ can be identified with the translation family $\{T_k \phi\}_{k\in\Gamma}$. For example, if $\phi\in L^2(\mathbb R)$ is such that $\hat\phi= 1_{[0,1]}$, the indicator function of the unit interval, then the inequalities for $w$ hold for $A=1$ and $B=2$, hence $\{\Lambda_k\}_{k\in \Gamma}$ is a frame with lower and upper frame bounds $1$ and $2$, respectively. \end{example} \section{Application: Frames for shift-invariant subspaces on the Heisenberg group} In this section we shall revisit the example of a function in $L^2(\Bbb H^d)$ that was introduced in \cite{BHM14} and exploit our current results to study the frame and Riesz basis properties of its lattice translates in a shift-invariant subspace of $L^2(\Bbb H^d)$. \subsection{The Heisenberg group} The $d$-dimensional Heisenberg group $\Bbb H^d$ is identified with $\mathbb R^d \times \mathbb R^d \times \mathbb R$ and the noncommutative group law is given by \begin{equation}\label{equ:Hlaw} (p,q,t) (p',q',t') = (p + p', q + q', t + t' + p\cdot q'). \end{equation} The inverse of an element is given by $(p,q,t)^{-1}= (-p,-q, -t+p\cdot q)$. Here, $x\cdot y$ is the inner product of two vectors in $\mathbb R^d$. The Haar measure of the group is the Lebesgue measure on $\Bbb R^{2d+1}$. The irreducible representations of $\Bbb H^d$ that are relevant for the Plancherel formula are parameterized by non-zero elements $\lambda \in \mathbb R^*:=\mathbb R \setminus \{0\}$ (see \cite{F1995}). Indeed, for any $\lambda\neq 0$, the associated irreducible representation $\rho_\lambda$ of the Heisenberg group is equivalent to the Schr\"odinger representation, acting by unitary operators on $L^2(\Bbb R^d)$: for any $(p,q,t)\in \Bbb H^d$ and $f\in L^2(\Bbb R^d)$, \begin{align}\label{definition-of-schroedinger-representation} \rho_\lambda(p,q,t)f(x) = e^{2\pi i t \lambda} e^{-2\pi i \lambda\, q\cdot x} f(x-p) . \end{align} Notice that $\rho_\lambda(p,q,0)f(x) = M_{\lambda q} T_p f(x)$ is a unitary frequency-translation operator, where $M_x$ and $T_y$ are the modulation and translation operators, respectively. For $\varphi\in L^2(\Bbb H^d)$, we denote by $\hat \varphi$ the operator-valued Fourier transform of $\varphi$, which is defined by \begin{equation}\label{equ:Hfourier} \hat \varphi(\lambda) = \int_\HD \varphi(x) \rho_\lambda(x) dx \quad \forall \lambda\in \mathbb R \setminus \{0\} . \end{equation} The operator $\hat \varphi(\lambda)$ is a Hilbert-Schmidt operator on $L^2(\Bbb R^d)$ such that for any $f\in L^2(\Bbb R^d)$ \begin{equation}\notag \hat \varphi(\lambda)f(y) = \int_\HD \varphi(x) \rho_\lambda(x)f(y) \ dx \quad \forall \lambda\in \mathbb R \setminus \{0\} , \end{equation} and the equality is understood in the $L^2$-norm sense. For any $\psi$ and $\varphi$ in $L^2(\Bbb H^d)$ and $\lambda \in \mathbb R \setminus \{0\}$, the Hilbert-Schmidt inner product $\langle \hat \varphi(\lambda), \hat\psi(\lambda)\rangle_{\HS}$ is the trace of an operator. Indeed, \begin{equation}\label{eq:HS} \langle\hat\varphi(\lambda), \hat\psi(\lambda)\rangle_{\HS} = \textnormal{trace}_{L^2(\mathbb R^d)} \left(\hat\varphi (\lambda) \hat\psi(\lambda)^*\right) . \end{equation} (Here, $\hat\psi(\lambda)^*$ denotes the $L^2(\mathbb R^d)$ adjoint of the operator $\hat\psi(\lambda)$.) It is easy to see that $\hat\varphi (\lambda) \hat\psi(\lambda)^*$ is a kernel operator.
Thus $\langle\hat\varphi(\lambda), \hat\psi(\lambda)\rangle_{\HS}$ is the trace of a kernel operator (\cite{F1995}). The Plancherel formula for the Heisenberg group is given by \begin{equation}\label{eq:Plancherel} \langle \varphi, \psi\rangle_{L^2(\HD)} = \int_\mathbb R \langle\hat\varphi(\lambda), \hat\psi(\lambda)\rangle_{\HS} |\lambda|^d d\lambda . \end{equation} The measure $|\lambda|^d d\lambda$ is the Plancherel measure of the Heisenberg group, viewed as a measure on this family of representations (\cite{F1995}), and $d\lambda$ is the Lebesgue measure on $\Bbb R^*$. By the periodization method, the integral in (\ref{eq:Plancherel}) can be equivalently written as \begin{equation}\notag \int_\mathbb R \langle\hat\varphi(\lambda), \hat\psi(\lambda)\rangle_{\HS} |\lambda|^d d\lambda = \int_0^1 \sum_{j\in \Bbb Z} \langle\hat\varphi(\alpha+j), \hat\psi(\alpha+j)\rangle_{\HS} |\alpha+j|^d d\alpha . \end{equation} Thus, for any $\varphi\in L^2(\Bbb H^d)$, by the Plancherel formula we deduce the following: \begin{equation}\label{periodization} \|\varphi\|^2 = \int_0^1 \sum_{j\in \Bbb Z} \|\hat \varphi(\alpha+j)\|^2_{\HS} |\alpha+j|^d d\alpha . \end{equation} \subsection{Frames for a shift-invariant subspace} Let $u={\bf 1}_{[0,1]^d}\in L^2(\Bbb R^d)$ be the indicator function of the unit cube $[0,1]^d$. For $\alpha\neq 0$, define the $L^2$-unitary dilation of $u$ with respect to $\alpha$ by $u_\alpha(x)= |\alpha|^{d/2}u(\alpha x)$. Let $a$ and $b$ be two real numbers such that $0\neq ab\in \Bbb Z$. Then the family consisting of translations and modulations of $u_\alpha$ by $a\Bbb Z^d$ and $b \Bbb Z^d$, respectively, is given by $$ \left\{|\alpha|^{d/2}e^{-2\pi i\alpha b\langle m, x\rangle} {\bf 1}_{(0,\frac{1}{\alpha})^d}(x-an): \ \ m, n\in \Bbb Z^d\right\} .$$ It is known that this family is an orthonormal basis for $L^2(\Bbb R^d)$; it is called the orthonormal Gabor or Weyl-Heisenberg basis for $L^2(\Bbb R^d)$ with window function $u_\alpha$. Fix $0<\epsilon<1$ (for the rest of the paper) and define the projector map $\Psi_\epsilon$ from $(0,1)$ into the class of Hilbert-Schmidt operators of rank one on $L^2(\Bbb R^d)$ by $$\Psi_\epsilon(\alpha) := (u_\alpha\otimes u_\alpha) 1_{(\epsilon,1]}(\alpha),$$ where for any $f, g, h\in L^2(\Bbb R^d)$ we have $(f\otimes g)h:= \langle h, g\rangle f$. By the definition of the Hilbert-Schmidt norm, we then have \begin{align}\label{value of HS-norm} \|\Psi_\epsilon(\alpha)\|_{\mathcal{HS}} = 1_{(\epsilon,1]}(\alpha) . \end{align} Thus $\|\Psi_\epsilon\|^2 = \int_\epsilon^1 \lambda^d\, d\lambda = (d+1)^{-1}(1-\epsilon^{d+1})$. This implies that $\Psi_\epsilon\in L^2(\mathbb R^*,\HS(L^2(\mathbb R^d)),|\lambda|^d d\lambda)$. Therefore, by the inverse Fourier transform for the Heisenberg group, there is a function in $L^2(\Bbb H^d)$ whose Fourier transform coincides with $\Psi_\epsilon$ in the $L^2$-norm sense. We let $\psi_\epsilon$ denote this function in $L^2(\Bbb H^d)$. Let $\Bbb A$ and $\Bbb B$ be any $d\times d$ matrices in $GL(\Bbb R, d)$ such that $\Bbb A\Bbb B^t\in GL(\Bbb Z,d)$. Define $\Gamma :=\Bbb A\Bbb Z^d \times \Bbb B \Bbb Z^d\times\Bbb Z$. Then $\Gamma$ is a lattice subgroup of the Heisenberg group, i.e., a discrete and co-compact subgroup.
For any $\gamma=(p,q,t)\in \Gamma$, we denote by $T_\gamma \psi_{\epsilon}$ the $\gamma$-translate of $\psi_\epsilon$, which is given by $$T_\gamma \psi_{\epsilon}(x,y,z)= \psi_{\epsilon}(\gamma^{-1}(x,y,z)).$$ Our goal here is to employ the current results and study the frame property of the family $\{T_\gamma\psi_{\epsilon}\}_{\gamma\in \Gamma}$ for the closed subspace it spans, $V_{\Gamma, \psi_\epsilon}= \overline{\text{span}\{T_\gamma \psi_\epsilon: \gamma\in \Gamma\}}$. It is obvious that $V_{\Gamma, \psi_\epsilon}$ is a $\Gamma$-translation-invariant subspace of $L^2(\Bbb H^d)$. For fixed $\epsilon$, define $w_\epsilon$ on $\Bbb R$ by $$w_\epsilon(\alpha) = \sum_{j \in \Z} \| \Psi_\epsilon (\alpha + j)\|^2_{\HS(L^2(\mathbb R^d))}|\alpha + j|^d.$$ The function $w_\epsilon$ is a nonnegative and $1$-periodic function. Let $E_{w_\epsilon}:=\{\alpha\in (0,1): \ w_\epsilon(\alpha)>0\}$. The definition of $\Psi_\epsilon$ along with (\ref{value of HS-norm}) yields the following result. \begin{lemma}\label{lem:example1} For any $\alpha \in E_{w_\epsilon}$, \begin{align}\label{inequality for the toy weight} \epsilon^d \leq w_\epsilon(\alpha) \leq 1 . \end{align} \end{lemma} For $k\in \Bbb Z$, we identify $k$ with $(0,0,k)\in \Gamma$ and let $T_k\psi_\epsilon$ denote the translation of $\psi_\epsilon$ in the central direction of the Heisenberg group: $$T_k \psi_\epsilon(p,q,t) =\psi_\epsilon (p,q,t-k), \quad (p,q,t)\in \Bbb H^d .$$ Let ${\mathbb H}= \overline{\text{span}\{T_k \psi_\epsilon: \ k\in \Bbb Z\}}$ and $f\in {\mathbb H}$. For any $\alpha\in (0,1)$ define \begin{align}\label{isometry S} S(f)(\alpha) := 1_{E_{w_\epsilon}}(\alpha) w_\epsilon(\alpha)^{-1} \sum_{j \in \Z} \langle \hat \psi_\epsilon (\alpha + j), \hat f(\alpha+j)\rangle_{\HS(L^2(\mathbb R^d))}|\alpha + j|^d. \end{align} \begin{lemma}\label{isometry lemma} The map $S: {\mathbb H} \to L_{w_\epsilon}^2(0,1)$ defined in (\ref{isometry S}) is a unitary map. \end{lemma} \begin{proof} First we prove that $S$ is a bounded map on $L^2(\Bbb H^d)$. Let $f\in L^2(\Bbb H^d)$. Then \begin{align}\notag & \int_0^1 |Sf(\alpha)|^2 w_\epsilon(\alpha)d\alpha \\\notag & = \int_0^1 1_{E_{w_\epsilon}}(\alpha) w_\epsilon(\alpha)^{-2} \left(\sum_{j \in \Z} |\langle \hat \psi_\epsilon (\alpha + j), \hat f(\alpha+j)\rangle_{\HS(L^2(\mathbb R^d))}||\alpha + j|^d \right)^2 w_\epsilon(\alpha) d\alpha \\\notag & \leq \int_0^1 1_{E_{w_\epsilon}}(\alpha) w_\epsilon(\alpha)^{-1} \left(\sum_{j \in \Z} \|\hat \psi_\epsilon (\alpha + j)\|_{\HS} \|\hat f(\alpha+j)\|_{\HS} |\alpha + j|^d \right)^2 d\alpha \\\notag &\leq \int_0^1 1_{E_{w_\epsilon}}(\alpha) w_\epsilon(\alpha)^{-1} B_{\psi_\epsilon}(\alpha) B_f(\alpha) d\alpha \quad \text{(by the Cauchy--Schwarz inequality)}\\\notag &= \int_0^1 1_{E_{w_\epsilon}}(\alpha) B_f(\alpha) d\alpha \end{align} where $B_g(\alpha) := \sum_j \|\hat g(\alpha+j)\|^2 |\alpha+j|^d$ for $g\in L^2(\Bbb H^d)$ and $\alpha\in (0,1)$. By the definition of $w_\epsilon$, it is immediate that $w_\epsilon(\alpha)^{-1} B_{\psi_\epsilon}(\alpha) = 1$ for a.e. $\alpha\in E_{w_\epsilon}$. Therefore \begin{align}\notag \int_0^1 |Sf(\alpha)|^2 w_\epsilon(\alpha)d\alpha \leq \int_0^1 1_{E_{w_\epsilon}}(\alpha) B_f(\alpha) d\alpha \leq \int_0^1 B_f(\alpha)\, d\alpha = \| f\|^2 \quad \quad \text{by (\ref{periodization})}. \end{align} This proves that $S$ is a bounded operator. Next we prove that $S$ is an isometry on $\mathcal{H}$. Let $f= \sum_k a_k T_k \psi_\epsilon$ be any finite linear combination of the $T_k\psi_\epsilon$, $k\in\Bbb Z$.
Thus \begin{align}\notag & \int_0^1 |Sf(\alpha)|^2 w_\epsilon(\alpha)d\alpha \\\notag & = \int_0^1 1_{E_{w_\epsilon}}(\alpha) w_\epsilon(\alpha)^{-2} \left|\sum_{j \in \Z} \langle \hat \psi_\epsilon (\alpha + j), \hat f(\alpha+j)\rangle_{\HS(L^2(\mathbb R^d))}|\alpha + j|^d \right|^2 w_\epsilon(\alpha) d\alpha \\\notag & =\int_0^1 1_{E_{w_\epsilon}}(\alpha) w_\epsilon(\alpha)^{-1} \left|\sum_{j \in \Z} \langle \hat \psi_\epsilon (\alpha + j), \sum_k a_k \widehat{T_k \psi_\epsilon}(\alpha+j)\rangle_{\HS(L^2(\mathbb R^d))}|\alpha + j|^d \right|^2 d\alpha \\\notag & =\int_0^1 1_{E_{w_\epsilon}}(\alpha) w_\epsilon(\alpha)^{-1} \left|\sum_k a_k e^{-2\pi i k \alpha}\right|^2 \left|\sum_{j \in \Z} \langle \hat \psi_\epsilon (\alpha + j), \hat\psi_\epsilon(\alpha+j)\rangle_{\HS(L^2(\mathbb R^d))}|\alpha + j|^d \right|^2 d\alpha\\\notag & =\int_0^1 1_{E_{w_\epsilon}}(\alpha) \left|\sum_k a_k e^{-2\pi i k \alpha}\right|^2 w_\epsilon(\alpha) d\alpha \quad \text{(by the definition of $w_\epsilon$).}\\\notag \end{align} Notice we can write $$\left|\sum_k a_k e^{-2\pi i k \alpha}\right|^2 w_\epsilon(\alpha)= \sum_j \left\| \sum_k a_k \widehat{T_k \psi_\epsilon}(\alpha+j) \right\|_{\mathcal{HS}}^2 |\alpha + j|^d .$$ Applying this in above we get \begin{align}\notag \int_0^1 |Sf(\alpha)|^2 w_\epsilon(\alpha)d\alpha &= \int_0^1 1_{E_{w_\epsilon}}(\alpha) \sum_j \left\| \sum_k a_k \widehat{T_k \psi_\epsilon}(\alpha+j)\right\|_{\mathcal{HS}}^2 |\alpha + j|^d d\alpha\\\notag &= \int_0^1 1_{E_{w_\epsilon}}(\alpha) \sum_j \| \hat f(\alpha+j)\|_{\mathcal{HS}}^2 |\alpha + j|^d d\alpha\\\notag &= \|f\|^2 \quad \quad \text{by (\ref{periodization})}. \end{align} This completes the proof of the lemma. \end{proof} \begin{theorem} For any $k\in \Bbb Z$ and $f\in {\mathbb H}$ define $$\Lambda_k(f) = \int_{E_{w_\epsilon}} \sum_{j \in \Z} \langle \hat \psi_\epsilon (\alpha + j), \hat f(\alpha+j)\rangle_{\HS(L^2(\mathbb R^d))}|\alpha + j|^d e^{-2\pi i \alpha k} d\alpha .$$ Then $\Lambda_k (f) = \langle T_k \psi_{\epsilon}, f\rangle $ and $\{\Lambda_k\}_{k\in\Z}$ is a frame for ${\mathbb H}$ with respect to $\Bbb C$. \end{theorem} \begin{proof} The equation $\Lambda_k (f)= \langle T_k \psi_{\epsilon}, f\rangle $ is a result of the Parseval identity. The family $\{\Lambda_k\}_{k\in\Z}$ is a frame for ${\mathbb H}$ with respect to $\Bbb C$ by Lemmas \ref{lem:example1}, \ref{isometry lemma} and Theorem \ref{th1} (b). \end{proof} \section*{Acknowledgments} The author is deeply indebted to Dr. Azita Mayeli for several fruitful discussions and generous comments. The author wishes to thank the anonymous referees for their helpful comments and suggestions that helped to improve the quality of the paper. \end{document}
\begin{document} \title{\bf Efficient coordination mechanisms\\for unrelated machine scheduling\thanks{A preliminary version of the results of this paper appeared in {\em Proceedings of the 20th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA)}.}} \begin{abstract} We present three new coordination mechanisms for scheduling $n$ selfish jobs on $m$ unrelated machines. A coordination mechanism aims to mitigate the impact of selfishness of jobs on the efficiency of schedules by defining a local scheduling policy on each machine. The scheduling policies induce a game among the jobs and each job prefers to be scheduled on a machine so that its completion time is minimum given the assignments of the other jobs. We consider the maximum completion time among all jobs as the measure of the efficiency of schedules. The approximation ratio of a coordination mechanism quantifies the efficiency of pure Nash equilibria (price of anarchy) of the induced game. Our mechanisms are deterministic, local, and preemptive in the sense that the scheduling policy does not necessarily process the jobs in an uninterrupted way and may introduce some idle time. Our first coordination mechanism has approximation ratio $\Theta(\log m)$ and always guarantees that the induced game has pure Nash equilibria to which the system converges in at most $n$ rounds. This result improves a bound of $O(\log^2 m)$ due to Azar, Jain, and Mirrokni and, similarly to their mechanism, our mechanism uses a global ordering of the jobs according to their distinct IDs. Next we study the intriguing scenario where jobs are anonymous, i.e., they have no IDs. In this case, coordination mechanisms can only distinguish between jobs that have different load characteristics. Our second mechanism handles anonymous jobs and has approximation ratio $O\left(\frac{\log m}{\log \log m}\right)$ although the game induced is not a potential game and, hence, the existence of pure Nash equilibria is not guaranteed by potential function arguments. However, it provides evidence that the known lower bounds for non-preemptive coordination mechanisms could be beaten using preemptive scheduling policies. Our third coordination mechanism also handles anonymous jobs and has a nice ``cost-revealing'' potential function. We use this potential function in order, not only to prove the existence of equilibria, but also to upper-bound the price of stability of the induced game by $O(\log m)$ and the price of anarchy by $O(\log^2m)$. Our third coordination mechanism is the first that handles anonymous jobs and simultaneously guarantees that the induced game is a potential game and has bounded price of anarchy. In order to obtain the above bounds, our coordination mechanisms use $m$ as a parameter. Slight variations of these mechanisms in which this information is not necessary achieve approximation ratios of $O\left(m^{\epsilon}\right)$, for any constant $\epsilon>0$. \end{abstract} \section{Introduction} We study the classical problem of {\em unrelated machine scheduling}. In this problem, we have $m$ parallel machines and $n$ independent jobs. Job $i$ induces a (possibly infinite) positive processing time (or load) $w_{ij}$ when processed by machine $j$. The load of a machine is the total load of the jobs assigned to it. The quality of an assignment of jobs to machines is measured by the makespan (i.e., the maximum) of the machine loads or, alternatively, the maximum completion time among all jobs.
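For concreteness, the following small sketch (purely illustrative; the data and function names are ours and not part of the formal model) computes the machine loads and the makespan of a given assignment from a load matrix:
\begin{verbatim}
# Illustrative sketch: w[i][j] is the load of job i on machine j,
# a[i] is the machine selected by job i, and m is the number of machines.
def machine_loads(w, a, m):
    loads = [0.0] * m
    for i, j in enumerate(a):
        loads[j] += w[i][j]
    return loads

def makespan(w, a, m):
    return max(machine_loads(w, a, m))

# Example with 3 jobs and 2 machines.
w = [[1.0, 2.0], [2.0, 1.0], [3.0, 3.0]]
print(makespan(w, (0, 1, 0), 2))   # machine loads are (4.0, 1.0); prints 4.0
\end{verbatim}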
The optimization problem of computing an assignment of minimum makespan is a fundamental APX-hard problem, quite well-understood in terms of its offline \cite{LST93} and online approximability \cite{AAFPW93,ANR95}. The approach we follow in this paper is both algorithmic and game-theoretic. We assume that each job is owned by a selfish agent. This gives rise to a {\em selfish scheduling} setting where each agent aims to minimize the completion time of her job with no regard to the global optimum. Such a selfish behaviour can lead to inefficient schedules from which no agent has an incentive to unilaterally deviate in order to improve the completion time of her job. From the algorithmic point of view, the designer of such a system can define a {\em coordination mechanism} \cite{CKN04}, i.e., a {\em scheduling policy} within each machine in order to ``coordinate'' the selfish behaviour of the jobs. Our main objective is to design coordination mechanisms that guarantee that the assignments reached by the selfish agents are {\em efficient}. \noindent {\bf The model.} A scheduling policy simply defines the way jobs are scheduled within a machine and can be either {\em non-preemptive} or {\em preemptive}. Non-preemptive scheduling policies process jobs uninterruptedly according to some order. Preemptive scheduling policies do not necessarily have this feature and can also introduce some idle time (delay). Although this seems unnecessary at first glance, as we show in this paper, it is a very useful tool in order to guarantee coordination. A coordination mechanism is a set of scheduling policies running on the machines. In the sequel, we use the terms coordination mechanisms and scheduling policies interchangeably. A coordination mechanism defines (or induces) a game with the job owners as players. Each job has all machines as possible {\em strategies}. We call an {\em assignment} (of jobs to machines) or {\em state} any set of strategies selected by the players, with one strategy per player. Given an assignment of jobs to machines, the cost of a player is the completion time of her job on the machine it has been assigned to; this completion time depends on the scheduling policy on that machine and the characteristics of all jobs assigned to that machine. Assignments in which no player has an incentive to change her strategy in order to decrease her cost given the assignments of the other players are called {\em pure Nash equilibria}. The global objective that is used in order to assess the efficiency of assignments is the {\em maximum completion time} over all jobs. A related quantity is the {\em makespan} (i.e., the maximum of the machine loads). Notice that when preemptive scheduling policies are used, these two quantities may not be the same (since idle time contributes to the completion time but not to the load of a machine). However, the optimal makespan is a lower bound on the optimal maximum completion time. The {\em price of anarchy} \cite{P01} is the maximum over all pure Nash equilibria of the ratio of the maximum completion time among all jobs over the optimal makespan. The {\em price of stability} \cite{ADKTWR04} is the minimum over all pure Nash equilibria of the ratio of the maximum completion time among all jobs over the optimal makespan. The {\em approximation ratio} of a coordination mechanism is the maximum of the price of anarchy of the induced game over all input instances. 
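To make these definitions concrete, the following sketch (again purely illustrative and practical only for tiny instances; the {\sf Makespan} policy, under which the completion time of a job equals the load of its machine, is used as a stand-in for a generic scheduling policy) enumerates all assignments, keeps those that are pure Nash equilibria, and compares the worst equilibrium cost with the optimal makespan:
\begin{verbatim}
from itertools import product

# A scheduling policy is abstracted as completion(i, a, w, m): the completion
# time of job i under assignment a.  Under the Makespan policy this is simply
# the load of the machine chosen by job i.
def loads(w, a, m):
    L = [0.0] * m
    for i, j in enumerate(a):
        L[j] += w[i][j]
    return L

def makespan_policy(i, a, w, m):
    return loads(w, a, m)[a[i]]

def is_pure_ne(w, a, m, completion):
    for i in range(len(w)):
        for j in range(m):
            b = a[:i] + (j,) + a[i + 1:]
            if completion(i, b, w, m) < completion(i, a, w, m):
                return False   # job i can strictly decrease its completion time
    return True

# Price of anarchy of one instance (assumes at least one pure Nash equilibrium
# exists, which holds for the Makespan policy since it induces a potential game).
def price_of_anarchy(w, m, completion):
    states = list(product(range(m), repeat=len(w)))
    opt = min(max(loads(w, a, m)) for a in states)
    worst = max(max(completion(i, a, w, m) for i in range(len(w)))
                for a in states if is_pure_ne(w, a, m, completion))
    return worst / opt

w = [[1.0, 2.0], [2.0, 1.0], [2.0, 2.0]]
print(price_of_anarchy(w, 2, makespan_policy))
\end{verbatim}
Of course, such exhaustive enumeration is exponential in $n$ and is meant only to illustrate the definitions.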
Four natural coordination mechanisms are the {\sf Makespan}, {\sf Randomized}, {\sf LongestFirst}, and {\sf ShortestFirst}. In the {\sf Makespan} policy, each machine processes the jobs assigned to it ``in parallel'' so that the completion time of each job is the total load of the machine. {\sf Makespan} is obviously a preemptive coordination mechanism. In the {\sf Randomized} policy, the jobs are scheduled non-preemptively in random order. Here, the cost of each player is the expected completion time of her job. In the {\sf ShortestFirst} and {\sf LongestFirst} policies, the jobs assigned to a machine are scheduled in non-decreasing and non-increasing order of their processing times, respectively. In case of ties, a {\em global ordering} of the jobs according to their distinct IDs is used. This is necessary for any deterministic non-preemptive coordination mechanism to be well-defined. Note that no such information is required by the {\sf Makespan} and {\sf Randomized} policies; in this case, we say that they handle {\em anonymous jobs}. According to the terminology of \cite{AJM08}, all these four coordination mechanisms are {\em strongly local} in the sense that the only information required by each machine in order to compute a schedule is the processing times of the jobs assigned to it. A {\em local} coordination mechanism may use all parameters (i.e., the load vector) of the jobs assigned to the same machine. Designing coordination mechanisms with as small approximation ratio as possible is our main concern. But there are other issues related to efficiency. The price of anarchy is meaningful only in games where pure Nash equilibria exist. So, the primary goal of the designer of a coordination mechanism should be that the induced game {\em always has} pure Nash equilibria. Furthermore, these equilibria should be {\em easy to find}. A very interesting class of games in which the existence of pure Nash equilibria is guaranteed is that of {\em potential} games. These games have the property that a {\em potential function} can be defined on the states of the game so that in any two states differing in the strategy of a single player, the difference of the values of the potential function and the difference of the cost of the player have the same sign. This property guarantees that the state with minimum potential is a pure Nash equilibrium. Furthermore, it guarantees that, starting from any state, the system will reach (converge to) a pure Nash equilibrium after a finite number of {\em selfish moves}. Given a game, its {\em Nash dynamics} is a directed graph with the states of the game as nodes and edges connecting two states differing in the strategy of a single player if that player has an incentive to change her strategy according to the direction of the edge. The Nash dynamics of a potential game contains no cycles. Another desirable property here is {\em fast} convergence, i.e., convergence to a pure Nash equilibrium in a polynomial number of selfish moves. A particular type of selfish moves that has been extensively considered in the literature \cite{AAEMS08,CMS06,FFM08,MV04} is that of {\em best-response} moves. In a best-response move, a player having an incentive to change her strategy selects the strategy that yields the maximum decrease in her cost. Potential games are strongly related to {\em congestion games} introduced by Rosenthal \cite{R73}.
Rosenthal presented a potential function for these games with the following property: in any two states differing in the strategy of a single player, the difference of the values of the potential function {\em equals} the difference of the cost of the player. Monderer and Shapley \cite{MS96} have proved that each potential game having this property is isomorphic to a congestion game. We point out that potential functions are not the only way to guarantee the existence of pure Nash equilibria. Several generalizations of congestion games such as those with player-specific latency functions \cite{M96} are not potential games but several subclasses of them provably have pure Nash equilibria. \noindent {\bf Related work.} The study of the price of anarchy of games began with the seminal work of Koutsoupias and Papadimitriou \cite{KP99} and has played a central role in the recently emerging field of Algorithmic Game Theory \cite{NRTV07}. Several papers provide bounds on the price of anarchy of different games of interest. Our work follows a different direction where the price of anarchy is the {\em objective to be minimized} and, in this sense, it is similar in spirit to studies where the main question is how to change the rules of the game at hand in order to improve the price of anarchy. Typical examples are the introduction of taxes or tolls in congestion games \cite{CKK06,CDR06,FJM04,KK04,S07}, protocol design in network and cost allocation games \cite{CRV08,KLO95}, Stackelberg routing strategies \cite{KS06,KLO97,KM02,R04,S07}, and network design \cite{R06}. Coordination mechanisms were introduced by Christodoulou, Koutsoupias, and Nanavati in \cite{CKN04}. They study the case where each player has the same load on each machine and, among other results, they consider the {\sf LongestFirst} and {\sf ShortestFirst} scheduling policies. We note that the {\sf Makespan} and {\sf Randomized} scheduling policies were used in \cite{KP99} as models of selfish behaviour in scheduling, and since that paper, the {\sf Makespan} policy has been considered as standard in the study of selfish scheduling games in models simpler than the one of unrelated machines and is strongly related to the study of congestion games (see \cite{V07,R07} and the references therein). Immorlica et al. \cite{ILMS05} study these four scheduling policies under several scheduling settings including the most general case of unrelated machines. They prove that the {\sf Randomized} and {\sf ShortestFirst} policies have approximation ratio $O(m)$ while the {\sf LongestFirst} and {\sf Makespan} policies have unbounded approximation ratio. Some scheduling policies are also related to earlier studies of local-search scheduling heuristics. So, the fact that the price of anarchy of the induced game may be unbounded follows by the work of Schuurman and Vredeveld \cite{SV01}. As observed in \cite{ILMS05}, the equilibria of the game induced by {\sf ShortestFirst} correspond to the solutions of the {\sf ShortestFirst} scheduling heuristic which is known to be $m$-approximate \cite{IK77}. The {\sf Makespan} policy is known to induce potential games \cite{EKM03}. The {\sf ShortestFirst} policy also induces potential games as proved in \cite{ILMS05}. 
In Section \ref{sec:b-cm}, we present examples showing that the scheduling policies {\sf LongestFirst} and {\sf Randomized} do not induce potential games.\footnote{After the appearance of the conference version of the paper, we became aware of two independent proofs that {\sf Longest-First} may induce games that do not have pure Nash equilibria \cite{DT09,FRS10}.} Azar et al. \cite{AJM08} study non-preemptive coordination mechanisms for unrelated machine scheduling. They prove that any local non-preemptive coordination mechanism is at least $\Omega(\log m)$-approximate\footnote{The corresponding proof of \cite{AJM08} contained a error which has been recently fixed by Fleischer and Svitkina \cite{FS10}.} while any strongly local non-preemptive coordination mechanism is at least $\Omega(m)$-approximate; as a corollary, they solve an old open problem concerning the approximation ratio of the {\sf ShortestFirst} heuristic. On the positive side, the authors of \cite{AJM08} present a non-preemptive local coordination mechanism (henceforth called {\sf AJM-1}) that is $O(\log m)$-approximate although it may induce games without pure Nash equilibria. The extra information used by this scheduling policy is the {\em inefficiency} of jobs (defined in the next section). They also present a technique that transforms this coordination mechanism to a preemptive one that induces potential games with price of anarchy $O(\log^2 m)$. In their mechanism, the players converge to a pure Nash equilibrium in $n$ rounds of best-response moves. We will refer to this coordination mechanism as {\sf AJM-2}. Both {\sf AJM-1} and {\sf AJM-2} use the IDs of the jobs. \noindent {\bf Our results.} We present three new coordination mechanisms for unrelated machine scheduling. Our mechanisms are deterministic, preemptive, and local. The schedules in each machine are computed as functions of the characteristics of jobs assigned to the machine, namely the load of jobs on the machine and their inefficiency. In all cases, the functions use an integer parameter $p\geq 1$; the best choice of this parameter for our coordination mechanisms is $p=O(\log m)$. Our analysis is heavily based on the convexity of simple polynomials and geometric inequalities for Euclidean norms. \begin{table} \centerline{\footnotesize \begin{tabular}{|l||c|c|c|c|l|} \hline Coordination & & & & & \\ mechanism & PoA & Pot. & PNE & IDs & Characteristics \\\hline\hline {\sf ShortestFirst} & $\Theta(m)$ & Yes & Yes & Yes & Strongly local, non-preemptive\\\hline {\sf LongestFirst} & unbounded & No & No & Yes & Strongly local, non-preemptive\\\hline {\sf Makespan} & unbounded & Yes & Yes & No & Strongly local, preemptive\\\hline {\sf Randomized} & $\Theta(m)$ & No & ? & No & Strongly local, non-preemptive\\\hline {\sf AJM-1} & $\Theta(\log m)$ & No & No & Yes & Local, non-preemptive\\\hline {\sf AJM-2} & $O(\log^2m)$ & Yes & Yes & Yes & Local, preemptive, uses $m$ \\\hline\hline {\sf ACOORD} & $\Theta(\log m)$ & Yes & Yes & Yes & Local, preemptive, uses $m$ \\ & $O(m^\epsilon)$ & Yes & Yes & Yes & Local, preemptive \\\hline {\sf BCOORD} & $O\left(\frac{\log m}{\log\log m}\right)$ & No & ? & No & Local, preemptive, uses $m$ \\ & $O(m^\epsilon)$ & No & ? 
& No & Local, preemptive \\\hline {\sf CCOORD} & $O(\log^2m)$ & Yes & Yes & No & Local, preemptive, uses $m$ \\ & $O(m^\epsilon)$ & Yes & Yes & No & Local, preemptive \\\hline \end{tabular}} \label{tab:comparison} \caption{Comparison of our coordination mechanisms to previously known ones with respect to the price of anarchy of the induced game (PoA), whether they induced potential games or not (Pot.), the existence of pure Nash equilibria (PNE), and whether they use the job IDs or not.} \end{table} Motivated by previous work, we first consider the scenario where jobs have distinct IDs. Our first coordination mechanism {\sf ACOORD} uses this information and is superior to the known coordination mechanisms that induce games with pure Nash equilibria. The game induced is a potential game, has price of anarchy $\Theta(\log m)$, and the players converge to pure Nash equilibria in at most $n$ rounds. Essentially, the equilibria of the game induced by {\sf ACOORD} can be thought of as the solutions produced by the application of a particular online algorithm, similar to the greedy online algorithm for minimizing the $\ell_p$ norm of the machine loads \cite{AAFPW93,C08}. Interestingly, the local objective of the greedy online algorithm for the $\ell_p$ norm may not translate to a completion time of jobs in feasible schedules; the online algorithm implicit by {\sf ACOORD} uses a different local objective that meets this constraint. The related results are presented in Section \ref{sec:a-cm}. Next we address the case where no ID information is associated to the jobs (anonymous jobs). This scenario is relevant when the job owners do not wish to reveal their identities or in large-scale settings where distributing IDs to jobs is infeasible. Definitely, an advantage that could be used for coordination is lost in this way but this makes the problem of designing coordination mechanisms more challenging. In Section \ref{sec:b-cm}, we present our second coordination mechanism {\sf BCOORD} which induces a simple congestion game with player-specific polynomial latency functions of a particular form. The price of anarchy of this game is only $O\left(\frac{\log m}{\log \log m}\right)$. This result demonstrates that preemption may be useful in order to beat the $\Omega(\log m)$ lower bound of \cite{AJM08} for non-preemptive coordination mechanisms. On the negative side, we show that the game induced may not be a potential game by presenting an example where the Nash dynamics have a cycle. Our third coordination mechanism {\sf CCOORD} is presented in Section \ref{sec:c-cm}. The scheduling policy on each machine uses an interesting function on the loads of the jobs assigned to the machine and their inefficiency. The game induced by {\sf CCOORD} is a potential game; the associated potential function is ``cost-revealing'' in the sense that it can be used to upper-bound the cost of equilibria. In particular, we show that the price of stability of the induced game is $O(\log m)$ and the price of anarchy is $O(\log^2 m)$. The coordination mechanism {\sf CCOORD} is the first that handles anonymous jobs and simultaneously guarantees that the induced game is a potential game and has bounded price of anarchy. Table \ref{tab:comparison} compares our coordination mechanisms to the previously known ones. Observe that the dependence of the parameter $p$ on $m$ requires that our mechanisms use the number of machines as input. 
By setting $p$ equal to an appropriately large constant, our mechanisms achieve price of anarchy $O(m^\epsilon)$ for any constant $\epsilon>0$. In particular, the coordination mechanisms {\sf ACOORD} and {\sf CCOORD} are the first ones that do not use the number of machines as a parameter, induce games with pure Nash equilibria, and have price of anarchy $o(m)$. We remark that the current paper contains several improvements compared to its conference version. There, the three coordination mechanisms had the restriction that a job with inefficiency more than $m$ on some machine has infinite completion time when assigned to that machine. Here, we have removed this restriction and have adapted the analysis accordingly. A nice consequence of the new definition is that the coordination mechanisms can now be defined so that they do not use the number of machines as a parameter. Furthermore, the definition of {\sf ACOORD} has been significantly simplified. Also, the analysis of the price of anarchy of the coordination mechanism {\sf BCOORD} in the conference version used a technical lemma which is implicit in \cite{STZ04}. In the current version, we present a different self-contained proof that is based on convexity properties of polynomials and Minkowski inequality; the new proof has a similar structure with the analysis of the price of anarchy of mechanism {\sf ACOORD}. We begin with preliminary technical definitions in Section \ref{sec:prelim} and conclude with interesting open questions in Section \ref{sec:open}. \section{Preliminaries}\label{sec:prelim} In this section, we present our notation and give some statements that will be useful later. We reserve $n$ and $m$ for the number of jobs and machines, respectively, and the indices $i$ and $j$ for jobs and machines, respectively. Unless specified otherwise, the sums $\sum_i$ and $\sum_j$ run over all jobs and over all machines, respectively. Assignments are denoted by $N$ or $O$. With some abuse in notation, we use $N_j$ to denote both the set of jobs assigned to machine $j$ and the set of their loads on machine $j$. We use the notation $L(N_j)$ to denote the load of machine $j$ under the assignment $N$. More generally, $L(A)$ denotes the sum of the elements for any set of non-negative reals $A$. For an assignment $N$ which assigns job $i$ to machine $j$, we denote the completion time of job $i$ under a given scheduling policy by ${\cal P}(i,N_j)$. Note that, besides defining the completion times, we do not discuss the particular way the jobs are scheduled by the scheduling policies we present. However, we require that {\em feasible} schedules are computable efficiently. A natural sufficient and necessary condition is the following: for any job $i\in N_j$, the total load of jobs with completion time at most ${\cal P}(i,N_j)$ is at most ${\cal P}(i,N_j)$. Our three coordination mechanisms use the inefficiency of jobs in order to compute schedules. We denote by $w_{i,\min}$ the minimum load of job $i$ over all machines. Then, its inefficiency $\rho_{ij}$ on machine $j$ is defined as $\rho_{ij}=w_{ij}/w_{i,\min}$. Our proofs are heavily based on the convexity of simple polynomials such as $z^k$ for $k\geq 1$ and on the relation of Euclidean norms of the machine loads and the makespan. Recall that the $\ell_k$ norm of the machine loads for an assignment $N$ is $\left(\sum_j{L(N_j)^k}\right)^{1/k}$. The proof of the next lemma is trivial. 
\begin{lemma}\label{lem:lp-norm} For any assignment $N$, $\max_j{L(N_j)} \leq \left(\sum_j{L(N_j)^k}\right)^{1/k} \leq m^{1/k} \max_j{L(N_j)}$. \end{lemma} In some of the proofs, we also use the Minkowski inequality (or the triangle inequality for the $\ell_p$ norm). \begin{lemma}[Minkowski inequality]\label{lem:minkowski} $\left(\sum_{t=1}^s{(a_t+b_t)^k}\right)^{1/k} \leq \left(\sum_{t=1}^s{a_t^k}\right)^{1/k}+\left(\sum_{t=1}^s{b_t^k}\right)^{1/k}$, for any $k\geq 1$ and $a_t,b_t\geq 0$. \end{lemma} The following two technical lemmas are used in some of our proofs. We include them here for easy reference. \begin{lemma}\label{lem:convexity} Let $r\geq 1$, $t\geq 0$ and $a_i\geq 0$, for $i=1, ..., k$. Then, \[\sum_{i=1}^k{\left(\left(t+a_i\right)^r-t^r\right)}\leq \left(t+\sum_{i=1}^k{a_i}\right)^r-t^r\] \end{lemma} \begin{proof} The case when $a_i=0$ for $i=1, ..., k$ is trivial. Assume otherwise and let $\xi=\sum_{i=1}^{k}{a_i}$ and $\xi_i=a_i/\xi$. Clearly, $\sum_{i=1}^k{\xi_i}=1$. By the convexity of function $z^r$ in $[0,\infty)$, we have that \begin{eqnarray}\nonumber (t+a_i)^r &= & \left((1-\xi_i)t+\xi_i\left(t+\sum_{i=1}^k{a_i}\right)\right)^r\\\label{eq:convexity} &\leq & (1-\xi_i)t^r+\xi_i\left(t+\sum_{i=1}^k{a_i}\right)^r \end{eqnarray} for $i=1, ..., k$. Using (\ref{eq:convexity}), we obtain \begin{eqnarray*} \sum_{i=1}^k{\left((t+a_i)^r-t^r\right)} &\leq & t^r\left(\sum_{i=1}^k{(1-\xi_i)}-k\right)+\left(t+\sum_{i=1}^k{a_i}\right)^r\sum_{i=1}^k{\xi_i}\\ &=& \left(t+\sum_{i=1}^k{a_i}\right)^r-t^r \end{eqnarray*} \qed\end{proof} \begin{lemma}\label{lem:slope} For any $z_0\geq 0$, $\alpha\geq 0$, and $p\geq 1$, it holds \[(p+1)\alpha z_0^p \leq (z_0+\alpha)^{p+1}-z_0^{p+1}\leq (p+1)\alpha (z_0+\alpha)^p.\] \end{lemma} \begin{proof} The inequality trivially holds if $\alpha=0$. If $\alpha>0$, the inequality follows since, due to the convexity of the function $z^{p+1}$, the slope of the line that crosses points $(z_0,z_0^{p+1})$ and $(z_0+\alpha,(z_0+\alpha)^{p+1})$ is between its derivative at points $z_0$ and $z_0+\alpha$.\qed\end{proof} We also refer to the multinomial and binomial theorems. \cite{HLP52} provides an extensive overview of the inequalities we use and their history (see also {\tt wikipedia.org} for a quick survey). \section{The coordination mechanism {\sf ACOORD}}\label{sec:a-cm} The coordination mechanism {\sf ACOORD} uses a global ordering of the jobs according to their distinct IDs. Without loss of generality, we may assume that the index of a job is its ID. Let $N$ be an assignment and denote by $N^i$ the restriction of $N$ to the jobs with the $i$ smallest IDs. {\sf ACOORD} schedules job $i$ on machine $j$ so that it completes at time \[{\cal P}(i,N_j) = \left(\rho_{ij}\right)^{1/p} L(N^i_j).\] Since $\rho_{ij}\geq 1$, the schedules produced are always feasible. Consider the sequence of jobs in increasing order of their IDs and assume that each job plays a best-response move. In this case, job $i$ will select that machine $j$ so that the quantity $\left(\rho_{ij}\right)^{1/p} L(N^i_j)$ is minimized. Since the completion time of job $i$ depends only on jobs with smaller IDs, no job will have an incentive to change its strategy and the resulting assignment is a pure Nash equilibrium. The following lemma extends this observation in a straightforward way. \begin{lemma}\label{lem:a-cm-potential-convergence} The game induced by the coordination mechanism {\sf ACOORD} is a potential game. 
Furthermore, any sequence of $n$ rounds of best-response moves converges to a pure Nash equilibrium. \end{lemma} \begin{proof} Notice that since a job does not affect the completion time of jobs with smaller IDs, the vector of completion times of the jobs (sorted in increasing order of their IDs) decreases lexicographically when a job improves its cost by deviating to another strategy and, hence, it is a potential function for the game induced by the coordination mechanism {\sf ACOORD}. Now, consider $n$ rounds of best-response moves of the jobs in the induced game such that each job plays at least once in each round. It is not hard to see that after round $i$, job $i$ will have selected a machine $j$ for which the quantity $\left(\rho_{ij}\right)^{1/p} L(N^i_j)$ is minimized. Since the completion time of job $i$ depends only on jobs with smaller IDs, job $i$ has no incentive to move after round $i$ and, hence, no job will have an incentive to change its strategy after the $n$ rounds. So, the resulting assignment is a pure Nash equilibrium. \qed \end{proof} The sequence of best-response moves mentioned above can be thought of as an online algorithm that processes the jobs in increasing order of their IDs. The local objective is slightly different from the local objective of the greedy online algorithm for minimizing the $\ell_{p+1}$ norm of the machine loads \cite{AAG+95,C08}; in that algorithm, job $i$ is assigned to a machine $j$ so that the quantity $(L(N^{i-1}_j)+w_{ij})^{p+1}-L(N^{i-1}_j)^{p+1}$ is minimized. Here, we remark that we do not see how the local objective of that algorithm could be simulated by a scheduling policy that always produces feasible schedules. This constraint is trivially satisfied by the coordination mechanism {\sf ACOORD}. The next lemma bounds the maximum completion time at pure Nash equilibria in terms of the $\ell_{p+1}$ norm of the machine loads and the optimal makespan. \begin{lemma}\label{lem:completion-acoord} Let $N$ be a pure Nash equilibrium of the game induced by the coordination mechanism {\sf ACOORD} and let $O$ be an optimal assignment. Then \[\max_{j,i\in N_j}{{\cal P}(i,N_j)} \leq \left(\sum_j{L(N_j)^{p+1}}\right)^{\frac{1}{p+1}} +\max_j{L(O_j)}.\] \end{lemma} \begin{proof} Let $i^*$ be the job that has the maximum completion time in assignment $N$. Denote by $j_1$ the machine $i^*$ uses in $N$ and let $j_2$ be a machine such that $\rho_{i^*j_2}=1$. If $j_1=j_2$, the definition of the coordination mechanism {\sf ACOORD} yields \begin{eqnarray*} \max_{j,i\in N_j}{{\cal P}(i,N_j)} &=& {\cal P}(i^*,N_{j_1})\\ &=& L(N_{j_1}^{i^*})\\ &\leq & L(N_{j_1})\\ &\leq & \left(\sum_j{L(N_j)^{p+1}}\right)^{\frac{1}{p+1}}. \end{eqnarray*} Otherwise, since player $i^*$ has no incentive to use machine $j_2$ instead of $j_1$, we have \begin{eqnarray*} \max_{j,i\in N_j}{{\cal P}(i,N_j)} &=& {\cal P}(i^*,N_{j_1})\\ &\leq & {\cal P}(i^*,N_{j_2}\cup \{w_{i^*j_2}\})\\ &=& L(N_{j_2}^{i^*})+w_{i^*j_2}\\ &\leq & L(N_{j_2})+\min_j{w_{i^*j}}\\ &\leq & \left(\sum_j{L(N_j)^{p+1}}\right)^{\frac{1}{p+1}} +\max_j{L(O_j)}. \end{eqnarray*} \qed \end{proof} Next we show that the approximation ratio of {\sf ACOORD} is $O(\log m)$ (for well-selected values of the parameter $p$). The analysis borrows and extends techniques from the analysis of the greedy online algorithm for the $\ell_p$ norm in \cite{C08}. \begin{theorem}\label{thm:a-cm-poa} The price of anarchy of the game induced by the coordination mechanism {\sf ACOORD} with $p=\Theta(\log m)$ is $O(\log m)$.
Also, for every constant $\epsilon\in (0,1/2]$, the price of anarchy of the game induced by the coordination mechanism {\sf ACOORD} with $p=1/\epsilon-1$ is $O\left(m^\epsilon\right)$. \end{theorem} \begin{proof} Consider a pure Nash equilibrium $N$ and an optimal assignment $O$. Since no job has an incentive to change her strategy from $N$, for any job $i$ that is assigned to machine $j_1$ in $N$ and to machine $j_2$ in $O$, by the definition of {\sf ACOORD} we have that \begin{eqnarray*}\left(\rho_{ij_1}\right)^{1/p} L(N_{j_1}^i) &\leq & \left(\rho_{ij_2}\right)^{1/p} \left(L(N_{j_2}^{i-1})+w_{ij_2}\right). \end{eqnarray*} Equivalently, by raising both sides to the power $p$ and multiplying with $w_{i,\min}$, we have that \begin{eqnarray*}w_{ij_1} L(N_{j_1}^i)^p &\leq & w_{ij_2} \left(L(N_{j_2}^{i-1})+w_{ij_2}\right)^p.\end{eqnarray*} Using the binary variables $x_{ij}$ and $y_{ij}$ to denote whether job $i$ is assigned to machine $j$ in the assignment $N$ ($x_{ij}=1$) and $O$ ($y_{ij}=1$), respectively, or not ($x_{ij}=0$ and $y_{ij}=0$, respectively), we can express this last inequality as follows. \begin{eqnarray*} \sum_j{x_{ij}w_{ij}L(N_{j}^i)^p} &\leq &\sum_j{y_{ij}w_{ij}\left(L(N_{j}^{i-1})+w_{ij}\right)^{p}} \end{eqnarray*} By summing over all jobs and multiplying with $(e-1)(p+1)$, we have \begin{eqnarray}\nonumber & & (e-1)(p+1) \sum_i\sum_j{x_{ij}w_{ij}L(N_j^i)^p}\\\nonumber &\leq & (e-1)(p+1) \sum_i\sum_j{y_{ij}w_{ij}\left(L(N_j^{i-1})+w_{ij}\right)^p}\\\nonumber &\leq & (e-1)(p+1) \sum_j\sum_i{y_{ij}w_{ij}\left(L(N_j)+w_{ij}\right)^p}\\\nonumber &= & (e-1)(p+1) \sum_j\sum_i{y_{ij}w_{ij}\left(L(N_j)+y_{ij}w_{ij}\right)^p}\\\nonumber &\leq & \sum_j\sum_i{\left(\left(L(N_j)+ey_{ij}w_{ij}\right)^{p+1}-\left(L(N_j)+y_{ij}w_{ij}\right)^{p+1}\right)}\\\nonumber &\leq & \sum_j\sum_i{\left(\left(L(N_j)+ey_{ij}w_{ij}\right)^{p+1}-L(N_j)^{p+1}\right)}\\\nonumber &\leq & \sum_j{\left(\left(L(N_j)+e\sum_i{y_{ij}w_{ij}}\right)^{p+1}-L(N_j)^{p+1}\right)}\\\nonumber &=& \sum_j{\left(L(N_j)+eL(O_j)\right)^{p+1}}-\sum_j{L(N_j)^{p+1}}\\\label{eq:a-minkowski} &\leq & \left(\left(\sum_j{L(N_j)^{p+1}}\right)^{\frac{1}{p+1}}+e\left(\sum_j{L(O_j)^{p+1}}\right)^{\frac{1}{p+1}}\right)^{p+1}-\sum_j{L(N_j)^{p+1}}. \end{eqnarray} The second inequality follows by exchanging the sums and since $L(N_j^{i-1})\leq L(N_j)$, the first equality follows since $y_{ij}\in \{0,1\}$, the third inequality follows by applying Lemma \ref{lem:slope} with $\alpha=(e-1)y_{ij}w_{ij}$ and $z_0=L(N_j)+y_{ij}w_{ij}$, the fourth inequality is obvious, the fifth inequality follows by Lemma \ref{lem:convexity}, the second equality follows since the definition of the variables $y_{ij}$ implies that $L(O_j)=\sum_i{y_{ij}w_{ij}}$, and the last inequality follows by Minkowski inequality (Lemma \ref{lem:minkowski}). Now, we will relate the $\ell_{p+1}$ norm of the machines loads of assignments $N$ and $O$. We have \begin{eqnarray*} (e-1)\sum_j{L(N_j)^{p+1}} &=& (e-1)\sum_j{L(N_j^n)^{p+1}}\\ &=& (e-1)\sum_{i=1}^n{\sum_j{\left(L(N_j^i)^{p+1}-L(N_j^{i-1})^{p+1}\right)}}\\ &=& (e-1)\sum_{i=1}^n{\sum_j{\left(L(N_j^i)^{p+1}-(L(N_j^{i})-x_{ij}w_{ij})^{p+1}\right)}}\\ &\leq & (e-1)(p+1)\sum_i\sum_j{x_{ij}w_{ij}L(N_j^i)^p}\\ &\leq & \left(\left(\sum_j{L(N_j)^{p+1}}\right)^{\frac{1}{p+1}}+e\left(\sum_j{L(O_j)^{p+1}}\right)^{\frac{1}{p+1}}\right)^{p+1}-\sum_j{L(N_j)^{p+1}}. 
\end{eqnarray*} The first two equalities are obvious (observe that $L(N_j^0)=0$), the third one follows by the definition of variables $x_{ij}$, the first inequality follows by applying Lemma \ref{lem:slope} with $\alpha=x_{ij}w_{ij}$ and $z_0=L(N_j^{i})-x_{ij}w_{ij}$, and the last inequality follows by inequality (\ref{eq:a-minkowski}). So, the above inequality yields \begin{eqnarray*} \left(\sum_j{L(N_j)^{p+1}}\right)^{\frac{1}{p+1}} &\leq & \frac{e}{e^{\frac{1}{p+1}}-1} \left(\sum_j{L(O_j)^{p+1}}\right)^{\frac{1}{p+1}}\\ &\leq & e(p+1) \left(\sum_j{L(O_j)^{p+1}}\right)^{\frac{1}{p+1}}\\ &\leq & e(p+1)m^{\frac{1}{p+1}} \max_j{L(O_j)}. \end{eqnarray*} The second inequality follows since $e^z\geq z+1$ for $z\geq 0$ and the third one follows by Lemma \ref{lem:lp-norm}. Now, using Lemma \ref{lem:completion-acoord} and this last inequality, we obtain that \begin{eqnarray*} \max_{j,i\in N_j}{{\cal P}(i,N_j)} &\leq &\left(\sum_j{L(N_j)^{p+1}}\right)^{\frac{1}{p+1}} +\max_j{L(O_j)}\\ &\leq & \left(e(p+1)m^{\frac{1}{p+1}}+1\right) \max_j{L(O_j)}. \end{eqnarray*} The desired bounds follow by setting $p=\Theta(\log m)$ and $p=1/\epsilon-1$, respectively. \qed \end{proof} Our logarithmic bound is asymptotically tight; this follows by the connection to online algorithms mentioned above and the lower bound of \cite{ANR95}. \section{The coordination mechanism {\sf BCOORD}}\label{sec:b-cm} We now turn our attention to coordination mechanisms that handle anonymous jobs. We define the coordination mechanism {\sf BCOORD} by slightly changing the definition of {\sf ACOORD} so that the completion time of a job does not depend on its ID. So, {\sf BCOORD} schedules job $i$ on machine $j$ so that it finishes at time $${\cal P}(i,N_j)=\left(\rho_{ij}\right)^{1/p}L(N_j).$$ Since $\rho_{ij}\geq 1$, the schedules produced are always feasible. The next lemma bounds the maximum completion time at pure Nash equilibria (again in terms of the $\ell_{p+1}$ norm of the machine loads and the optimal makespan). \begin{lemma}\label{lem:completion-bcoord} Let $N$ be a pure Nash equilibrium of the game induced by the coordination mechanisms {\sf BCOORD} and let $O$ be an optimal assignment. Then \[\max_{j,i\in N_j}{{\cal P}(i,N_j)} \leq \left(\sum_j{L(N_j)^{p+1}}\right)^{\frac{1}{p+1}} +\max_j{L(O_j)}.\] \end{lemma} \begin{proof} The proof is almost identical to the proof of Lemma \ref{lem:completion-acoord}; we include it here for completeness. Let $i^*$ be the job that has the maximum completion time in assignment $N$. Denote by $j_1$ the machine $i^*$ uses in $N$ and let $j_2$ be a machine such that $\rho_{i^*j_2}=1$. If $j_1=j_2$, the definition of the coordination mechanism {\sf BCOORD} yields \begin{eqnarray*} \max_{j,i\in N_j}{{\cal P}(i,N_j)} &=& {\cal P}(i^*,N_{j_1})\\ &=& L(N_{j_1})\\ &\leq & \left(\sum_j{L(N_j)^{p+1}}\right)^{\frac{1}{p+1}}. \end{eqnarray*} Otherwise, since player $i^*$ has no incentive to use machine $j_2$ instead of $j_1$, we have \begin{eqnarray*} \max_{j,i\in N_j}{{\cal P}(i,N_j)} &=& {\cal P}(i^*,N_{j_1})\\ &\leq & {\cal P}(i^*,N_{j_2}\cup \{w_{i^*j_2}\})\\ &=& L(N_{j_2})+w_{i^*j_2}\\ &= & L(N_{j_2})+\min_j{w_{i^*j}}\\ &\leq & \left(\sum_j{L(N_j)^{p+1}}\right)^{\frac{1}{p+1}} +\max_j{L(O_j)}. \end{eqnarray*}\qed\end{proof} We are ready to present our upper bounds on the price of anarchy of the induced game. \begin{theorem} The price of anarchy of the game induced by the coordination mechanism {\sf BCOORD} with $p=\Theta(\log m)$ is $O\left(\frac{\log m}{\log \log m}\right)$. 
Also, for every constant $\epsilon\in (0,1/2]$, the price of anarchy of the game induced by the coordination mechanism {\sf BCOORD} with $p=1/\epsilon-1$ is $O\left(m^\epsilon\right)$. \end{theorem} \begin{proof} Consider a pure Nash equilibrium $N$ and an optimal assignment $O$. Since no job has an incentive to change her strategy from $N$, for any job $i$ that is assigned to machine $j_1$ in $N$ and to machine $j_2$ in $O$, we have that \begin{eqnarray*}(\rho_{ij_1})^{1/p}L(N_{j_1}) &\leq & (\rho_{ij_2})^{1/p}(L(N_{j_2})+w_{ij_2}). \end{eqnarray*} Equivalently, by raising both sides to the power $p$ and multiplying both sides with $w_{i,\min}$, we have that \begin{eqnarray*}w_{ij_1}L(N_{j_1})^p &\leq &w_{ij_2}(L(N_{j_2})+w_{ij_2})^p.\end{eqnarray*} Using the binary variables $x_{ij}$ and $y_{ij}$ to denote whether job $i$ is assigned to machine $j$ in the assignments $N$ ($x_{ij}=1$) and $O$ ($y_{ij}=1$), respectively, or not ($x_{ij}=0$ and $y_{ij}=0$, respectively), we can express this last inequality as follows: \begin{eqnarray*} \sum_j{x_{ij}w_{ij}L(N_j)^p} &\leq & \sum_j{y_{ij}w_{ij}(L(N_j)+w_{ij})^p}. \end{eqnarray*} By summing over all jobs and multiplying with $p$, we have \begin{eqnarray}\nonumber & & p\sum_i{\sum_j{x_{ij}w_{ij}L(N_j)^p}}\\\nonumber &\leq & p\sum_i{\sum_j{y_{ij}w_{ij}(L(N_j)+w_{ij})^p}}\\\nonumber &=& p\sum_j{\sum_i{y_{ij}w_{ij}(L(N_j)+y_{ij}w_{ij})^p}}\\\nonumber &\leq & \sum_j\sum_i{\left(\left(L(N_j)+\frac{2p+1}{p+1}y_{ij}w_{ij}\right)^{p+1}-\left(L(N_j)+y_{ij}w_{ij}\right)^{p+1}\right)}\\\nonumber &\leq & \sum_j\sum_i{\left(\left(L(N_j)+\frac{2p+1}{p+1}y_{ij}w_{ij}\right)^{p+1}-L(N_j)^{p+1}\right)}\\\nonumber &\leq & \sum_j{\left(\left(L(N_j)+\frac{2p+1}{p+1}\sum_i{y_{ij}w_{ij}}\right)^{p+1}-L(N_j)^{p+1}\right)}\\\nonumber &=& \sum_j{\left(\left(L(N_j)+\frac{2p+1}{p+1}L(O_j)\right)^{p+1}-L(N_j)^{p+1}\right)}\\\label{eq:b-minkowski} &\leq& \left(\left(\sum_j{L(N_j)^{p+1}}\right)^{\frac{1}{p+1}}+\frac{2p+1}{p+1}\left(\sum_j{L(O_j)^{p+1}}\right)^{\frac{1}{p+1}}\right)^{p+1}-\sum_j{L(N_j)^{p+1}} \end{eqnarray} The first equality follows by exchanging the sums and since $y_{ij}\in \{0,1\}$, the second inequality follows by applying Lemma \ref{lem:slope} with $\alpha=\frac{p}{p+1}y_{ij}w_{ij}$ and $z_0=L(N_j)+y_{ij}w_{ij}$, the third inequality is obvious, the fourth inequality follows by applying Lemma \ref{lem:convexity}, the second equality follows since the definition of variables $y_{ij}$ implies that $L(O_j)=\sum_i{y_{ij}w_{ij}}$, and the last inequality follows by applying Minkowski inequality (Lemma \ref{lem:minkowski}). Now, we relate the $\ell_{p+1}$ norm of the machine loads of assignments $N$ and $O$. We have \begin{eqnarray*} (p+1)\sum_j{L(N_j)^{p+1}} &=& p\sum_j{L(N_j)^{p+1}}+ \sum_j{L(N_j)^{p+1}}\\ &=& p\sum_i{\sum_j{x_{ij}w_{ij}L(N_j)^p}} + \sum_j{L(N_j)^{p+1}}\\ &\leq & \left(\left(\sum_j{L(N_j)^{p+1}}\right)^{\frac{1}{p+1}}+\frac{2p+1}{p+1}\left(\sum_j{L(O_j)^{p+1}}\right)^{\frac{1}{p+1}}\right)^{p+1}. \end{eqnarray*} The first equality is obvious, the second one follows by the definition of variables $x_{ij}$ and the inequality follows by inequality (\ref{eq:b-minkowski}). So, the above inequalities yield \begin{eqnarray*} \left(\sum_j{L(N_j)^{p+1}}\right)^{\frac{1}{p+1}} &\leq & \frac{2p+1}{p+1} \frac{1}{(p+1)^{\frac{1}{p+1}}-1} \left(\sum_j{L(O_j)^{p+1}}\right)^{\frac{1}{p+1}}\\ &\leq & \frac{2p+1}{\ln{(p+1)}} \left(\sum_j{L(O_j)^{p+1}}\right)^{\frac{1}{p+1}}\\ &\leq & \frac{2p+1}{\ln{(p+1)}} m^{\frac{1}{p+1}} \max_j{L(O_j)}. 
\end{eqnarray*} The second inequality follows since $e^z\geq z+1$ for $z\geq 0$ and the third one follows by Lemma \ref{lem:lp-norm}. Now, using Lemma \ref{lem:completion-bcoord} and our last inequality we have \begin{eqnarray*} \max_{j,i\in N_j}{{\cal P}(i,N_j)} &\leq & \left(\sum_j{L(N_j)^{p+1}}\right)^{\frac{1}{p+1}} +\max_j{L(O_j)}\\ &\leq & \left(1+\frac{2p+1}{\ln{(p+1)}}m^{\frac{1}{p+1}}\right)\max_j{L(O_j)}. \end{eqnarray*} The desired bounds follow by setting $p=\Theta(\log m)$ and $p=1/\epsilon-1$, respectively.\qed \end{proof} Note that the game induced by {\sf BCOORD} with $p=1$ is the same as the game induced by the coordination mechanism {\sf CCOORD} (with $p=1$) that we present in the next section. As such, it also has a potential function (also similar to the potential function of \cite{FKS05} for linear weighted congestion games) as we will see in Lemma \ref{lem:psi-potential}. In this way, we obtain a coordination mechanism that induces a potential game, handles anonymous jobs, and has approximation ratio $O(\sqrt{m})$. Unfortunately, the next theorem demonstrates that, for higher values of $p$, the Nash dynamics of the game induced by {\sf BCOORD} may contain a cycle. \begin{theorem}\label{thm:no-potential} The game induced by the coordination mechanism {\sf BCOORD} with $p=2$ is not a potential game. \end{theorem} Before proving Theorem \ref{thm:no-potential}, we show that the games induced by the coordination mechanisms {\sf LongestFirst} and {\sf Randomized} may not be potential games either. All the instances presented in the following consist of four machines and three basic jobs $A$, $B$, and $C$. In each case, we show that the Nash dynamics contain a cycle of moves of the three basic jobs. First consider the {\sf LongestFirst} policy and the instance depicted in the following table. \ \centerline{ \begin{tabular}{|c|c|c|c|} \hline & A & B & C \\ $1$ & $14$ & $\infty$ & $5$ \\ $2$ & $\infty$ & $10$ & $\infty$ \\ $3$ & $3$ & $9$ & $10$ \\ $4$ & $7$ & $8$ & $9$ \\ \hline \end{tabular}} \ The cycle is defined on the following states: \begin{eqnarray*} & & (C,\underline{B},A,) \rightarrow(C,,\underline{A}B,)\rightarrow(C,,\underline{B},A)\rightarrow(C,,,\underline{A}B)\rightarrow (A\underline{C},,,B)\rightarrow\\ & & (\underline{A},,C,B)\rightarrow(,,A\underline{C},B)\rightarrow(,,A,\underline{B}C)\rightarrow (,B,A,\underline{C})\rightarrow(C,B,A,). \end{eqnarray*} Notice that the first and last assignment are the same. In each state, the player that moves next is underlined. Job $B$ is at machine $2$ in the first assignment and has completion time $10$. Hence, it has an incentive to move to machine $3$ (second assignment) where its completion time is $9$. Job $A$ has completion time $12$ in the second assignment since it is scheduled after job $B$, which has a larger processing time on machine $3$. Moving to machine $4$ (third assignment), it decreases its completion time to $7$. The remaining moves in the cycle can be verified accordingly. The instance for the {\sf Randomized} policy contains four additional jobs $D$, $E$, $F$, and $G$ which are always scheduled on machines $1$, $2$, $3$, and $4$, respectively (i.e., they have infinite processing time on the other machines). It is depicted in the following table.
\ \centerline{ \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & A & B & C & D & E & F & G\\ $1$ & $80$ & $\infty$ & $100$ & $2$ & $\infty$ & $\infty$ & $\infty$ \\ $2$ & $\infty$ & $171$ & $\infty$ & $\infty$ & $2$ & $\infty$ & $\infty$ \\ $3$ & $2$ & $154$ & $124$ & $\infty$ & $\infty$ & $32$ & $\infty$ \\ $4$ & $2$ & $76$ & $10$ & $\infty$ & $\infty$ & $\infty$ & 184\\ \hline \end{tabular}} \ The cycle is defined by the same moves of the basic jobs as in the case of {\sf LongestFirst}: \begin{eqnarray*} & & (CD,\underline{B}E,AF,G) \rightarrow(CD,E,\underline{A}BF,G)\rightarrow(CD,E,\underline{B}F,AG)\rightarrow(CD,E,F,\underline{A}BG)\rightarrow\\ & & (A\underline{C}D,E,F,BG)\rightarrow(\underline{A}D,E,CF,BG)\rightarrow(D,E,A\underline{C}F,BG)\rightarrow(D,E,AF,\underline{B}CG)\rightarrow\\ & & (D,BE,AF,\underline{C}G)\rightarrow(CD,BE,AF,G). \end{eqnarray*} Recall that (see \cite{ILMS05,KP99}) the expected completion time of a job $i$ which is scheduled on machine $j$ in an assignment $N$ is $\frac{1}{2}(w_{ij}+L(N_j))$ when the {\sf Randomized} policy is used. In each state, the player that moves next is underlined. It can be easily verified that each player in this cycle improves her cost by exactly $1$. For example, job $B$ has expected completion time $\frac{1}{2}(171+171+2)=172$ at machine $2$ in the first assignment and, hence, an incentive to move to machine $3$ in the second assignment where its completion time is $\frac{1}{2}(154+2+154+32)=171$. \paragraph{Proof of Theorem \ref{thm:no-potential}.} Besides the three basic jobs, the instance for the {\sf BCOORD} policy with $p=2$ contains two additional jobs $D$ and $E$ which are always scheduled on machines $3$ and $4$, respectively. The instance is depicted in the following table. \ \centerline{ \begin{tabular}{|c|c|c|c|c|c|} \hline & A & B & C & D & E \\ $1$ & $4.0202$ & $\infty$ & $4.0741$ & $\infty$ & $\infty$ \\ $2$ & $\infty$ & $8.2481$ & $\infty$ & $\infty$ & $\infty$ \\ $3$ & $0.0745$ & $0.6302$ & $0.3078$ & $29.1331$ & $\infty$ \\ $4$ & $2.4447$ & $5.1781$ & $2.4734$ & $\infty$ & $2.7592$ \\ \hline \end{tabular}} \ The cycle is defined by the same moves of the basic jobs as in the previous cases: \[(C,\underline{B},AD,E)\rightarrow (C,,\underline{A}BD,E)\rightarrow(C,,\underline{B}D,AE)\rightarrow(C,,D,\underline{A}BE)\rightarrow(A\underline{C},,D,BE)\rightarrow\] \[(\underline{A},,CD,BE)\rightarrow(,,A\underline{C}D,BE)\rightarrow(,,AD,\underline{B}CE)\rightarrow(,B,AD,\underline{C}E)\rightarrow(C,B,AD,E).\] Notice that, instead of considering the completion time $\left(\rho_{ij}\right)^{1/p}L(N_j)$ of a job $i$ on machine $j$ in an assignment $N$, it is equivalent to consider its cost as $w_{ij}L(N_j)^p$. In this way, we can verify that in any of the moves in the above cycle, the job that moves improves its cost. For example, job $B$ has cost $8.2481^3=561.127758090641$ on machine $2$ in the first assignment and cost $0.6302(0.0745+0.6302+29.1331)^2=561.063473430968$ on machine $3$ in the second assignment. \qed \section{The coordination mechanism {\sf CCOORD}}\label{sec:c-cm} In this section we present and analyze the coordination mechanism {\sf CCOORD} that handles anonymous jobs and guarantees that the induced game has pure Nash equilibria, price of anarchy at most $O(\log^2 m)$, and price of stability $O(\log m)$. In order to define the scheduling policy, we first define an interesting family of functions. 
\begin{defn} For integer $k\geq 0$, the function $\Psi_k$ mapping finite sets of reals to the reals is defined as follows: $\Psi_k(\emptyset)=0$ for any integer $k\geq 1$, $\Psi_0(A)=1$ for any (possibly empty) set $A$, and for any non-empty set $A=\{a_1, a_2, ..., a_n\}$ and integer $k\geq 1$, \[\Psi_k(A)=k! \sum_{1\leq d_1 \leq ... \leq d_k \leq n}{\prod_{t=1}^{k}{a_{d_t}}}.\] \end{defn} So, $\Psi_k(A)$ is essentially the sum of all possible monomials of total degree $k$ on the elements of $A$. Each term in the sum has coefficient $k!$. Clearly, $\Psi_1(A)=L(A)$. For $k\geq 2$, compare $\Psi_k(A)$ with $L(A)^k$ which can also be expressed as the sum of the same terms, albeit with different coefficients in $\{1, ..., k!\}$, given by the multinomial theorem. The coordination mechanism {\sf CCOORD} schedules job $i$ on machine $j$ in an assignment $N$ so that its completion time is $${\cal P}(i,N_j) = \left(\rho_{ij}\Psi_p(N_j)\right)^{1/p}.$$ Our proofs extensively use the properties in the next lemma; its proof is given in appendix. The first inequality implies that the schedule defined by {\sf CCOORD} is always feasible. \begin{lemma}\label{lem:properties} For any integer $k\geq 1$, any finite set of non-negative reals $A$, and any non-negative real $b$ the following hold: \[\begin{array}{l l} \mbox{a. } L(A)^k \leq \Psi_k(A) \leq k! L(A)^k & \mbox{d. } \Psi_k(A\cup\{b\}) -\Psi_k(A) = k b\Psi_{k-1}(A\cup \{b\})\\ \mbox{b. } \Psi_{k-1}(A)^{k} \leq \Psi_{k}(A)^{k-1} & \mbox{e. } \Psi_k(A) \leq kL(A)\Psi_{k-1}(A)\\ \mbox{c. } \Psi_k(A\cup\{b\}) = \sum_{t=0}^k{\frac{k!}{(k-t)!}b^t\Psi_{k-t}(A)} &\mbox{f. } \Psi_k(A\cup \{b\}) \leq \left(\Psi_k(A)^{1/k}+\Psi_k(\{b\})^{1/k}\right)^k \end{array}\] \end{lemma} The second property implies that $\Psi_k(A)^{1/k} \leq \Psi_{k'}(A)^{1/k'}$ for any integer $k'\geq k$. The third property suggests an algorithm for computing $\Psi_k(A)$ in time polynomial in $k$ and $|A|$ using dynamic programming. A careful examination of the definitions of the coordination mechanisms {\sf BCOORD} and {\sf CCOORD} and property (a) in the above lemma, reveals that {\sf CCOORD} makes the completion time of a job assigned to machine $j$ dependent on the approximation $\Psi_p(N_j)^{1/p}$ of the load $L(N_j)$ of the machine instead of its exact load as {\sf BCOORD} does. This will be the crucial tool in order to guarantee that the induced game is a potential game without significantly increasing the price of anarchy. The next lemma defines a potential function on the states of the induced game that will be very useful later. \begin{lemma}\label{lem:psi-potential} The function $\Phi(N)=\sum_j{\Psi_{p+1}(N_j)}$ is a potential function for the game induced by the coordination mechanism {\sf CCOORD}. Hence, this game always has a pure Nash equilibrium. \end{lemma} \begin{proof} Consider two assignments $N$ and $N'$ differing in the strategy of the player controlling job $i$. Assume that job $i$ is assigned to machine $j_1$ in $N$ and to machine $j_2\not=j_1$ in $N'$. Observe that $N_{j_1}=N'_{j_1}\cup \{w_{ij_1}\}$ and $N'_{j_2}=N_{j_2}\cup \{w_{ij_2}\}$. By Lemma \ref{lem:properties}d, we have that $\Psi_{p+1}(N_{j_1})-\Psi_{p+1}(N'_{j_1})=(p+1)w_{ij_1}\Psi_p(N_{j_1})$ and $\Psi_{p+1}(N'_{j_2})-\Psi_{p+1}(N_{j_2})=(p+1)w_{ij_2}\Psi_p(N'_{j_2})$. 
Using these properties and the definitions of the coordination mechanism {\sf CCOORD} and function $\Phi$, we have \begin{eqnarray*} \Phi(N)-\Phi(N') &=& \sum_j{\Psi_{p+1}(N_j)-\sum_j{\Psi_{p+1}(N'_j)}}\\ &=& \Psi_{p+1}(N_{j_1})+\Psi_{p+1}(N_{j_2})-\Psi_{p+1}(N'_{j_1})-\Psi_{p+1}(N'_{j_2})\\ &=& (p+1)w_{ij_1}\Psi_{p}(N_{j_1})-(p+1)w_{ij_2}\Psi_{p}(N'_{j_2})\\ &=& (p+1)w_{i,\min} \left({\cal P}(i,N_{j_1})^p-{\cal P}(i,N'_{j_2})^p\right) \end{eqnarray*} which means that the difference of the potentials of the two assignments and the difference of the completion time of player $i$ have the same sign as desired. \qed \end{proof} The next lemma relates the maximum completion time of a pure Nash equilibrium to the optimal makespan provided that their potentials are close. \begin{lemma}\label{lem:completion-time} Let $O$ be an optimal assignment and let $N$ be a pure Nash equilibrium of the game induced by the coordination mechanism {\sf CCOORD} such that $\left(\Phi(N)\right)^{\frac{1}{p+1}}\leq \gamma\left(\Phi(O)\right)^{\frac{1}{p+1}}$. Then, \[\max_{j,i\in N_j}{{\cal P}(i,N_j)} \leq \left(\gamma (p+1)m^{\frac{1}{p+1}}+p\right)\max_j{L(O_j)}.\] \end{lemma} \begin{proof} Let $i^*$ be the job that has the maximum completion time in $N$. Denote by $j_1$ the machine $i^*$ uses in assignments $N$ and let $j_2$ be a machine such that $\rho_{i^*j_2}=1$. If $j_1=j_2$, the definition of the coordination mechanism {\sf CCOORD} and Lemma \ref{lem:properties}b yield \begin{eqnarray}\nonumber \max_{j, i\in N_j}{{\cal P}(i,N_j)} &=& {\cal P}(i^*,N_{j_1})\\\nonumber &= & \Psi_p(N_{j_1})^{1/p}\\\nonumber &\leq & \Psi_{p+1}(N_{j_1})^{\frac{1}{p+1}}\\\label{eq:c-same-machines} &\leq & \left(\sum_j{\Psi_{p+1}(N_j)}\right)^{\frac{1}{p+1}}. \end{eqnarray} Otherwise, since player $i$ has no incentive to use machine $j_2$ instead of $j_1$, we have \begin{eqnarray}\nonumber \max_{j, i\in N_j}{{\cal P}(i,N_j)} &=& {\cal P}(i^*,N_{j_1})\\\nonumber &\leq & {\cal P}(i^*, N_{j_2} \cup \{w_{i^*j_2}\}) \\\nonumber &=& \Psi_p(N_{j_2}\cup\{w_{i^*j_2}\})^{1/p}\\\nonumber &\leq & \Psi_p(N_{j_2})^{1/p}+\Psi_p(\{w_{i^*j_2}\})^{1/p}\\\nonumber &\leq & \Psi_{p+1}(N_{j_2})^{\frac{1}{p+1}}+(p!)^{1/p}w_{i^*j_2}\\\nonumber &= & \Psi_{p+1}(N_{j_2})^{\frac{1}{p+1}}+(p!)^{1/p}\min_j{w_{i^*j}}\\\label{eq:c-diff-machines} &\leq & \left(\sum_j{\Psi_{p+1}(N_j)}\right)^{\frac{1}{p+1}}+p\max_j{L(O_j)}. \end{eqnarray} The first two equalities follows by the definition of {\sf CCOORD}, the first inequality follows since player $i^*$ has no incentive to use machine $j_2$ instead of $j_1$, the second inequality follows by Lemma \ref{lem:properties}f, the third inequality follows by Lemma \ref{lem:properties}b and the definition of function $\Psi_p$, the third equality follows by the definition of machine $j_2$ and the last inequality is obvious. Now, observe that the term in parenthesis in the rightmost side of inequalities (\ref{eq:c-same-machines}) and (\ref{eq:c-diff-machines}) equals the potential $\Phi(N)$. Hence, in any case, we have \begin{eqnarray*} \max_{j, i\in N_j}{{\cal P}(i,N_j)} &\leq& (\Phi(N))^{\frac{1}{p+1}}+p\max_j{L(O_j)}\\ &\leq & \gamma (\Phi(O))^{\frac{1}{p+1}}+p\max_j{L(O_j)}\\ &=& \gamma\left(\sum_j{\Psi_{p+1}(O_j)}\right)^{\frac{1}{p+1}}+p\max_j{L(O_j)}\\ &\leq &\gamma\left((p+1)! \sum_j{L(O_j)^{p+1}}\right)^{\frac{1}{p+1}}+p\max_j{L(O_j)}\\ &\leq & \left(\gamma (p+1)m^{\frac{1}{p+1}}+p\right)\max_j{L(O_j)}. 
\end{eqnarray*} The second inequality follows by the inequality on the potentials of assignments $N$ and $O$, the equality follows by the definition of the potential function $\Phi$, the third inequality follows by Lemma \ref{lem:properties}a and the last one follows by Lemma \ref{lem:lp-norm}. \qed \end{proof} A first application of Lemma \ref{lem:completion-time} is in bounding the price of stability of the induced game. \begin{theorem}\label{lem:stability} The game induced by the coordination mechanism {\sf CCOORD} with $p=\Theta(\log m)$ has price of stability at most $O(\log m)$. \end{theorem} \begin{proof} Consider the optimal assignment $O$ and the pure Nash equilibrium $N$ of minimum potential. We have $\left(\Phi(N)\right)^{\frac{1}{p+1}}\leq \left(\Phi(O)\right)^{\frac{1}{p+1}}$ and, using Lemma \ref{lem:completion-time}, we obtain that the maximum completion time in $N$ is at most $(p+1)m^{\frac{1}{p+1}}+p$ times the makespan of $O$. Setting $p=\Theta(\log m)$, the theorem follows. \qed \end{proof} A second application of Lemma \ref{lem:completion-time} is in bounding the price of anarchy. In order to apply it, we need a relation between the potential of an equilibrium and the potential of an optimal assignment; this is provided by the next lemma. \begin{lemma}\label{lem:equilibrium} Let $O$ be an optimal assignment and $N$ be a pure Nash equilibrium of the game induced by the coordination mechanism {\sf CCOORD}. Then, \[\left(\Phi(N)\right)^{\frac{1}{p+1}} \leq \frac{p+1}{\ln{2}} \left(\Phi(O)\right)^{\frac{1}{p+1}}.\] \end{lemma} \begin{proof} Consider a pure Nash equilibrium $N$ and an optimal assignment $O$. Since no job has an incentive to change her strategy from $N$, for any job $i$ that is assigned to machine $j_1$ in $N$ and to machine $j_2$ in $O$, we have that \begin{eqnarray*}\left(\rho_{ij_1}\Psi_p(N_{j_1})\right)^{1/p} &\leq & \left(\rho_{ij_2}\Psi_p(N_{j_2}\cup \{w_{ij_2}\})\right)^{1/p}. \end{eqnarray*} Equivalently, by raising both sides to the power $p$ and multiplying both sides with $w_{i,\min}$, we have that \begin{eqnarray*} w_{ij_1}\Psi_p(N_{j_1}) &\leq & w_{ij_2}\Psi_p(N_{j_2}\cup \{w_{ij_2}\}). \end{eqnarray*} Using the binary variables $x_{ij}$ and $y_{ij}$ to denote whether job $i$ is assigned to machine $j$ in the assignments $N$ ($x_{ij}=1$) and $O$ ($y_{ij}=1$) or not ($x_{ij}=0$ and $y_{ij}=0$, respectively), we can express the last inequality as follows: \begin{eqnarray*} \sum_j{x_{ij}w_{ij}\Psi_p(N_{j})} &\leq & \sum_j{y_{ij}w_{ij}\Psi_p(N_{j}\cup \{w_{ij}\})} \end{eqnarray*} By summing over all jobs, we have \begin{eqnarray*} \sum_i{\sum_j{x_{ij}w_{ij}\Psi_p(N_{j})} } &\leq & \sum_i{\sum_j{y_{ij}w_{ij}\Psi_p(N_{j}\cup \{w_{ij}\})}} \end{eqnarray*} By exchanging the double sums and since $\sum_i{x_{ij}w_{ij}}=L(N_j)$, we obtain \begin{eqnarray}\label{ineq:delta} \sum_j{L(N_j)\Psi_p(N_j)} &\leq & \sum_j{\sum_{i}{y_{ij}w_{ij}\Psi_p(N_j \cup \{w_{ij}\})}} \end{eqnarray} We now work with the potential of assignment $N$.
We have \begin{eqnarray*} 2\Phi(N) &= & \Phi(N)+\sum_j{\Psi_{p+1}(N_j)}\\ &\leq & \Phi(N)+(p+1)\sum_j{L(N_j)\Psi_p(N_j)}\\ &\leq & \Phi(N)+ (p+1)\sum_j{\sum_i{y_{ij}w_{ij}}\Psi_p(N_j\cup \{w_{ij}\})}\\ &=& \Phi(N)+(p+1)\sum_j{\sum_i{y_{ij}w_{ij}}\sum_{t=0}^{p}{\frac{p!}{(p-t)!}\Psi_{p-t}(N_j)w_{ij}^t}}\\ &=& \Phi(N)+\sum_j{\sum_{t=0}^{p}{\frac{(p+1)!}{(p-t)!}\Psi_{p-t}(N_j)\sum_i{y_{ij}w_{ij}^{t+1}}}}\\ &\leq & \Phi(N)+\sum_j{\sum_{t=0}^{p}{\frac{(p+1)!}{(p-t)!(t+1)!}\Psi_{p-t}(N_j)\Psi_{t+1}(O_j)}}\\ &=& \Phi(N)+\sum_j{\sum_{t=1}^{p+1}{\left(\begin{array}{c}p+1\\t\end{array}\right)\Psi_{p+1-t}(N_j)\Psi_{t}(O_j)}}\\ &\leq & \Phi(N)+\sum_j{\sum_{t=1}^{p+1}{\left(\begin{array}{c}p+1\\t\end{array}\right)\Psi_{p+1}(N_j)^{\frac{p+1-t}{p+1}}\Psi_{p+1}(O_j)^{\frac{t}{p+1}}}}\\ &=& \Phi(N)+\sum_j{\left(\left(\Psi_{p+1}(N_j)^{\frac{1}{p+1}}+\Psi_{p+1}(O_j)^{\frac{1}{p+1}}\right)^{p+1}-\Psi_{p+1}(N_j)\right)}\\ &=& \Phi(N)+\sum_j{\left(\Psi_{p+1}(N_j)^{\frac{1}{p+1}}+\Psi_{p+1}(O_j)^{\frac{1}{p+1}}\right)^{p+1}}-\sum_j{\Psi_{p+1}(N_j)}\\ &\leq & \left(\left(\sum_j{\Psi_{p+1}(N_j)}\right)^{\frac{1}{p+1}}+\left(\sum_j{\Psi_{p+1}(O_j)}\right)^{\frac{1}{p+1}}\right)^{p+1}\\ &=& \left(\left(\Phi(N)\right)^{\frac{1}{p+1}}+\left(\Phi(O)\right)^{\frac{1}{p+1}}\right)^{p+1} \end{eqnarray*} The first inequality follows by Lemma \ref{lem:properties}e, the second inequality follows by inequality (\ref{ineq:delta}), the second equality follows by Lemma \ref{lem:properties}c, the third equality follows by exchanging the sums, the third inequality follows since the jobs $i$ assigned to machine $j$ are those for which $y_{ij}=1$ and by the definition of function $\Psi_{t+1}$ which yields that $\Psi_{t+1}(O_j)\geq (t+1)! \sum_i{y_{ij}w_{ij}^{t+1}}$, the fourth equality follows by updating the limits of the sum over $t$, the fourth inequality follows by Lemma \ref{lem:properties}b, the fifth equality follows by the binomial theorem, the sixth equality is obvious, the fifth inequality follows by Minkowski inequality (Lemma \ref{lem:minkowski}) and by the definition of the potential $\Phi(N)$, and the last equality follows by the definition of the potentials $\Phi(N)$ and $\Phi(O)$. By the above inequality, we obtain that \begin{eqnarray*} \left(\Phi(N)\right)^{\frac{1}{p+1}} &\leq & \frac{1}{2^{\frac{1}{p+1}}-1} \left(\Phi(O)\right)^{\frac{1}{p+1}}\leq \frac{p+1}{\ln{2}} \left(\Phi(O)\right)^{\frac{1}{p+1}} \end{eqnarray*} where the last inequality follows using the inequality $e^z\geq z+1$. \qed \end{proof} We are now ready to bound the price of anarchy. \begin{theorem}\label{thm:c-cm-poa} The price of anarchy of the game induced by the coordination mechanism {\sf CCOORD} with $p=\Theta(\log m)$ is $O\left(\log^2 m\right)$. Also, for every constant $\epsilon\in (0,1/2]$, the price of anarchy of the game induced by the coordination mechanism {\sf CCOORD} with $p=1/\epsilon-1$ is $O\left(m^\epsilon\right)$. \end{theorem} \begin{proof} Consider a pure Nash equilibrium $N$ and let $O$ be the optimal assignment. Using Lemma \ref{lem:equilibrium}, we have that $\left(\Phi(N)\right)^{\frac{1}{p+1}} \leq \frac{p+1}{\ln{2}} \left(\Phi(O)\right)^{\frac{1}{p+1}}$. Hence, by Lemma \ref{lem:completion-time}, we obtain that the maximum completion time in $N$ is at most $\frac{(p+1)^2}{\ln 2}m^{\frac{1}{p+1}}+p$ times the makespan of $O$. By setting $p=\Theta(\log m)$ and $p=1/\epsilon-1$, respectively, the theorem follows. 
\qed \end{proof} \section{Discussion and open problems}\label{sec:open} Our focus in the current paper has been on pure Nash equilibria. It is also interesting to generalize the bounds on the price of anarchy of the games induced by our coordination mechanisms for mixed Nash equilibria. Recently, Roughgarden \cite{R09} defined general smoothness arguments that can be used to bound the price of anarchy of games having particular properties. Bounds on the price of anarchy over pure Nash equilibria that are proved using smoothness arguments immediately imply that the same bounds on the price of anarchy hold for mixed Nash equilibria as well. We remark that the arguments used in the current paper in order to prove our upper bounds are not smoothness arguments. At least in the case of the coordination mechanism {\sf BCOORD}, smoothness arguments cannot be used to prove a bound on the price of anarchy as small as $O\left(\frac{\log m}{\log\log m}\right)$ since the price of anarchy over mixed Nash equilibria is provably higher in the case. We demonstrate this using the following construction. Czumaj and Voecking \cite{CV02} present a game induced by the {\sf Makespan} policy on related machines which has price of anarchy over mixed Nash equilibria at least $\Omega\left(\frac{\log m}{\log \log\log m}\right)$. The instance used in \cite{CV02} consists of $n$ jobs and $m$ machines. Each machine $j$ has a speed $\alpha_j\geq 1$ with $\alpha_1=1$ and each job $i$ has a weight $w_i$. The processing time of job $i$ on machine $j$ is $w_{ij}=\alpha_j w_i$ (i.e., the inefficiencies of the jobs are the same on the same machine). Now, consider the game induced by the coordination mechanism {\sf BCOORD} for the instance that consists of the same machines and jobs in which the processing time of job $i$ on machine $j$ is defined by $w'_{ij}=\alpha_j^{\frac{p}{p+1}} w_i$, i.e., the inefficiency of any job on machine $j$ is $\alpha_j^{\frac{p}{p+1}}$. Here, $p$ is the parameter used by {\sf BCOORD}. By the definition of {\sf BCOORD}, we can easily see that the game induced is identical with the game induced by {\sf Makespan} on the original instance of \cite{CV02}. Also note that, in our instance, the processing time of the jobs is not increased (and, hence, the optimal makespan is not larger than that in the original instance of \cite{CV02}). Hence, the lower bound of \cite{CV02} implies a lower bound on the price of anarchy over mixed Nash equilibria of the game induced by the coordination mechanism {\sf BCOORD}. Our work reveals several other interesting questions. First of all, it leaves open the question of whether coordination mechanisms with constant approximation ratio exist. In particular, is there any coordination mechanism that handles anonymous jobs, guarantees that the induced game has pure Nash equilibria, and has constant price of anarchy? Based on the lower bounds of \cite{AJM08,FS10}, such a coordination mechanism (if it exists) must use preemption. Alternatively, is the case of anonymous jobs provably more difficult than the case where jobs have IDs? Investigating the limits of non-preemptive mechanisms is still interesting. Notice that {\sf AJM-1} is the only non-preemptive coordination mechanism that has approximation ratio $o(m)$ but it does not guarantee that the induced game has pure Nash equilibria; furthermore, the only known non-preemptive coordination mechanism that induces a potential game with bounded price of anarchy is {\sf ShortestFirst}. 
So, is there any non-preemptive (deterministic or randomized) coordination mechanism that is simultaneously $o(m)$-approximate and induces a potential game? We also remark that Theorem \ref{thm:no-potential} does not necessarily exclude a game induced by the coordination mechanism {\sf BCOORD} from having pure Nash equilibria. Observe that the examples in the proof of Theorem \ref{thm:no-potential} do not consist of best-response moves and, hence, it is interesting to investigate whether best-response moves converge to pure Nash equilibria in such games. Furthermore, we believe that the games induced by the coordination mechanism {\sf CCOORD} are of independent interest. We have proved that these games belong to the class {\sf PLS} \cite{JPY88}. Furthermore, the result of Monderer and Shapley \cite{MS96} and the proof of Lemma \ref{lem:psi-potential} essentially show that each of these games is isomorphic to a congestion game. However, they have a beautiful definition as games on parallel machines that gives them a particular structure. What is the complexity of computing pure Nash equilibria in such games? Even in the case that these games are {\sf PLS}-complete (informally, this would mean that computing a pure Nash equilibrium is as hard as finding any object whose existence is guaranteed by a potential function argument) like several variations of congestion games that were considered recently \cite{ARV06,FPT04,SV08}, it is still interesting to study the convergence time to efficient assignments. A series of recent papers \cite{AAEMS08,CMS06,FFM08,MV04} consider adversarial rounds of best-response moves in potential games so that each player is given at least one chance to play in each round (this is essentially our assumption in Lemma \ref{lem:a-cm-potential-convergence} for the coordination mechanism {\sf ACOORD}). Does the game induced by the coordination mechanism {\sf CCOORD} converge to efficient assignments after a polynomial number of adversarial rounds of best-response moves? Although it is a potential game, it does not have the particular properties considered in \cite{AAEMS08} and, hence, proving such a statement probably requires different techniques. Finally, recall that we have considered the maximum completion time as the measure of the efficiency of schedules. Other measures such as the weighted sum of completion times that was recently studied in \cite{CGM10} are interesting as well. Of course, considering the application of coordination mechanisms to settings other than scheduling is an important research direction. \small\paragraph{Acknowledgments.} I would like to thank Chien-Chung Huang for helpful comments on an earlier draft of the paper. \small \appendix \normalsize\section{Proof of Lemma \ref{lem:properties}} The properties clearly hold if $A$ is empty or $k=1$. In the following, we assume that $k\geq 2$ and $A=\{a_1, ..., a_n\}$ for integer $n\geq 1$. \paragraph{a.} Clearly, \begin{eqnarray*} L(A)^k &=& \left(\sum_{t=1}^n{a_t}\right)^k = \sum_{1\leq d_1 \leq ... \leq d_k \leq n}{\zeta(d_1, ..., d_k)\prod_{t=1}^{k}{a_{d_t}}} \end{eqnarray*} where $\zeta(d_1, ..., d_k)$ are multinomial coefficients on $k$ and, hence, belong to $\{1, ..., k!\}$. The property then follows by the definition of $\Psi_k(A)$. \paragraph{b.} We can express $\Psi_{k-1}(A)^k$ and $\Psi_k(A)^{k-1}$ as follows: \begin{eqnarray*} \Psi_{k-1}(A)^k &=& \left((k-1)!\right)^k \left(\sum_{1\leq d_1 \leq ...
\leq d_{k-1} \leq n}{\prod_{t=1}^{k-1}{a_{d_t}}}\right)^k \\ &= & \left((k-1)!\right)^k \sum_{1\leq d_1 \leq ... \leq d_{k(k-1)} \leq n}{\zeta_1(d_1, ..., d_{k(k-1)})\prod_{t=1}^{k(k-1)}{a_{d_t}}}.\\ \Psi_k(A)^{k-1} &=& \left(k!\right)^{k-1} \left(\sum_{1\leq d_1 \leq ... \leq d_k \leq n}{\prod_{t=1}^{k}{a_{d_t}}}\right)^{k-1}\\ &= & \left(k!\right)^{k-1} \sum_{1\leq d_1 \leq ... \leq d_{k(k-1)} \leq n}{\zeta_2(d_1, ..., d_{k(k-1)})\prod_{t=1}^{k(k-1)}{a_{d_t}}}. \end{eqnarray*} So, both $\Psi_{k-1}(A)^k$ and $\Psi_k(A)^{k-1}$ are sums of all monomials of degree $k(k-1)$ over the elements of $A$ with different coefficients. The coefficient $\zeta_1(d_1, ..., d_{k(k-1)})$ is the number of different ways to partition the multiset $D=\{d_1, ..., d_{k(k-1)}\}$ of size $k(k-1)$ into $k$ disjoint ordered multisets each of size $k-1$ so that the union of the ordered multisets yields the original multiset. We refer to these partitions as $(k,k-1)$-partitions. The coefficient $\zeta_2(d_1, ..., d_{k(k-1)})$ is the number of different ways to partition $D$ into $k-1$ disjoint ordered multisets each of size $k$ (we refer to these as $(k-1,k)$-partitions). Hence, in order to prove the property, it suffices to show that for any multiset $\{d_1, ..., d_{k(k-1)}\}$, \begin{eqnarray}\label{ineq:partitions} \frac{\zeta_1(d_1, ..., d_{k(k-1)})}{\zeta_2(d_1, ..., d_{k(k-1)})} \leq \frac{k^{k-1}}{(k-1)!}. \end{eqnarray} Assume that some element of $D$ has multiplicity more than one and consider the new multiset $D'=\{d_1, ..., d'_i, ..., d_{k(k-1)}\}$ that replaces one appearance $d_i$ of this element with a new element $d'_i$ different from all elements in $D$. Then, in order to generate all $(k,k-1)$-partitions of $D'$, it suffices to consider the $(k,k-1)$-partitions of $D$ and, for each of them, replace $d_i$ with $d'_i$ once for each of the ordered sets in which $d_i$ appears. Similarly, we can generate all $(k-1,k)$-partitions of $D'$ using the $(k-1,k)$-partitions of $D$. Since the number of sets in $(k,k-1)$-partitions is larger than the number of sets in $(k-1,k)$-partitions, we will have that \begin{eqnarray*} \frac{\zeta_1(d_1, ..., d'_i, ..., d_{k(k-1)})}{\zeta_2(d_1, ..., d'_i, ..., d_{k(k-1)})} \geq \frac{\zeta_1(d_1, ..., d_{k(k-1)})}{\zeta_2(d_1, ..., d_{k(k-1)})} . \end{eqnarray*} By repeating this argument, we obtain that the ratio on the left-hand side of inequality (\ref{ineq:partitions}) is maximized when all $d_i$'s are distinct. In this case, both $\zeta_1$ and $\zeta_2$ are given by the multinomial coefficients \[\zeta_1(d_1, ..., d_{k(k-1)}) = \left(\begin{array}{c}k(k-1)\\\underbrace{k-1, ..., k-1}_{\mbox{\tiny $k$ times}}\end{array}\right) = \frac{(k(k-1))!}{((k-1)!)^{k}}\] and \[\zeta_2(d_1, ..., d_{k(k-1)}) = \left(\begin{array}{c}k(k-1)\\\underbrace{k, ..., k}_{\tiny \mbox{$k-1$ times}}\end{array}\right) = \frac{(k(k-1))!}{(k!)^{k-1}}\] and their ratio is exactly the one on the right-hand side of inequality (\ref{ineq:partitions}). \paragraph{c.} The property follows easily by the definition of function $\Psi_k$ by observing that all the monomials of degree $k$ over the elements of $A$ that contain $b^t$ are generated by multiplying $b^t$ with the terms of $\Psi_{k-t}(A)$. \paragraph{d.} By property (c), we have \begin{eqnarray*} \Psi_k(A\cup\{b\}) -\Psi_k(A) &=& \sum_{t=1}^k{\frac{k!}{(k-t)!}b^t\Psi_{k-t}(A)}\\ &=& kb\sum_{t=1}^{k}{\frac{(k-1)!}{(k-t)!}b^{t-1}\Psi_{k-t}(A)}\\ &=& kb\sum_{t=0}^{k-1}{\frac{(k-1)!}{(k-1-t)!}b^{t}\Psi_{k-1-t}(A)}\\ &=& kb\Psi_{k-1}(A\cup\{b\}).
\end{eqnarray*} \paragraph{e.} Working with the right-hand side of the inequality and using the definitions of $L$ and $\Psi_{k-1}$, we have \begin{eqnarray*} k L(A)\Psi_{k-1}(A) &=& k!\left(\sum_{t=1}^n{a_t}\right)\cdot \sum_{1\leq d_1 \leq ... \leq d_{k-1}\leq n}{\prod_{t=1}^{k-1}{a_{d_t}}}\\ &\geq & k!\sum_{1\leq d_1 \leq ... \leq d_{k}\leq n}{\prod_{t=1}^{k}{a_{d_t}}}\\ &=& \Psi_{k}(A). \end{eqnarray*} The equalities follow obviously by the definitions. To see why the inequality holds, observe that the multiplication of the sum of all monomials of degree $1$ with the sum of all monomials of degree $k-1$ will be a sum of all monomials of degree $k$, each having coefficient at least $1$. \qed \paragraph{f.} The proof follows by the derivation below in which we use property (c), the fact that $\Psi_t(\{b\})=t! b^t$ by the definition of function $\Psi_t$, property (b), and the binomial theorem. We have \begin{eqnarray*} \Psi_k(A\cup\{b\}) &=& \sum_{t=0}^k{\frac{k!}{(k-t)!}b^t\Psi_{k-t}(A)}\\ &=& \sum_{t=0}^k{\left(\begin{array}{c}k\\t\end{array}\right)\Psi_{k-t}(A)\Psi_t(\{b\})}\\ &\leq & \sum_{t=0}^k{\left(\begin{array}{c}k\\t\end{array}\right)\Psi_{k}(A)^\frac{k-t}{k}\Psi_k(\{b\})^\frac{t}{k}}\\ &=& \left(\Psi_k(A)^{1/k}+\Psi_k(\{b\})^{1/k}\right)^k. \end{eqnarray*} \qed \end{document}
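As an illustration of property (c) above and of the remark in the section on {\sf CCOORD} that $\Psi_k(A)$ can be computed in time polynomial in $k$ and $|A|$, the following sketch (plain Python, not part of the formal development; the function names are ours) evaluates $\Psi_k$ by adding the elements of $A$ one at a time and cross-checks the result against the definition.
\begin{verbatim}
from math import factorial
from itertools import combinations_with_replacement

def psi(k, A):
    # Dynamic program based on property (c):
    #   Psi_k(A u {b}) = sum_{t=0..k} k!/(k-t)! * b^t * Psi_{k-t}(A),
    # with Psi_0 = 1 and Psi_t(empty set) = 0 for t >= 1.
    table = [1.0] + [0.0] * k   # table[t] = Psi_t of the elements processed so far
    for b in A:
        table = [1.0] + [
            sum(factorial(kk) // factorial(kk - t) * b ** t * table[kk - t]
                for t in range(kk + 1))
            for kk in range(1, k + 1)
        ]
    return table[k]

def product(combo):
    result = 1.0
    for a in combo:
        result *= a
    return result

def psi_by_definition(k, A):
    # k! times the sum of all degree-k monomials over the elements of A.
    return factorial(k) * sum(product(combo)
                              for combo in combinations_with_replacement(A, k))

if __name__ == "__main__":
    A, k = [2.0, 1.5, 4.0], 3
    print(psi(k, A), psi_by_definition(k, A))   # the two values agree
\end{verbatim}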
\begin{document} \newcommand{\begin{equation}}{\begin{equation}} \newcommand{\end{equation}}{\end{equation}} \begin{center} {\Large {\bf Experiment on interaction-free measurement in neutron interferometry}} Meinrad Hafner$^a$\footnote{Email: [email protected]} and Johann Summhammer$^b$\footnote{Email: [email protected]} $^a$ Institute for Astronomy University of Vienna Tuerkenschanzstrasse 17 A-1180 Vienna, Austria $^b$ Atominstitut der \"Osterreichischen Universit\"aten Sch\"uttelstrasse 115 A-1020 Vienna, Austria \end{center} \begin{abstract} A neutron interferometric test of interaction-free detection of the presence of an absorbing object in one arm of a neutron interferometer has been performed. Despite deviations from the ideal performance characteristics of a Mach-Zehnder interferometer it could be shown that information is obtained without interaction. \end{abstract} \section{Introduction} In classical mechanics the interaction between a measuring device and the object is treated as arbitrarily small and can therefore be neglected. This is fundamentally different in quantum physics, as examplified in the Heisenberg-microscope: A measurement of the position of an electron inevitably leads to a disturbance of its momentum by the scattering of a photon. It seems that no measurement can be performed without the interaction with a particle. Even in the case investigated by Dicke \cite{Dicke}, where knowledge about the position of an atom is gained by ascertaining where it is not, it had to be concluded, that photon scattering still is responsible for providing the information. However, the scattering remains unresolvable, because the change in the photon's momentum is within the momentum uncertainty of the illuminating beam of light. But recently Elitzur and Vaidman (EV) proposed an experiment where the presence of a perfectly absorbing classical object in one arm of a Mach-Zehnder interferometer can be detected \cite{EV1}, \cite{EV2}, \cite{Bennet}, \cite{Vaidman.quant-ph}. EV symbolized the object by a bomb which explodes when the measuring particle hits it. Their scheme is shown in Fig.1. Here, no interaction is involved. For, had there been an interaction, the test particle would have been absorbed and could not have reached any of the detectors at the outputs of the interferometer. The drawback of this scheme is that even when adjusting the transmissivity of the beam splitters of the interferometer, the probability of recognizing the object without interaction does not exceed 50\%. This disadvantage could be overcome by Zeilinger et al., who introduced a multi-loop interferometer. In the limit of infinitely many loops, one can obtain a probability of 100\% of determining the presence of the absorbing object without interaction \cite{Zeilinger.idea}. Paul and Pavicic incorporated this idea into a proposal employing an optical resonator \cite{Paul.Pavicic}. A first experiment with polarized photons has already been reported \cite{Zeilinger.exp}, \cite{Kwiat.review}. Due to imperfections of the optical devices only around 50\% of interaction-free detections of the object were possible. The theoretical analysis of these situations has also been extended to absorbing {\it quantum} objects, where it is no longer possible to assume a perfect absorber. But even then, a sizable amount of knowledge is gained without interaction \cite{Krenn}, \cite{Karlsson}. In this paper we report on the first experiment with neutrons. 
The original scheme of EV was implemented in a single crystal neutron interferometer. Although such interferometers are not ideal Mach-Zehnder interferometers, it was possible to ascertain the presence of an absorbing object, which was realized as a neutron detector, with higher probability than classical physics would permit, so that one must conclude that some knowledge is obtained without interaction. Before dealing with the details of this experiment, it is useful to recall the ideal case of EV (Fig.1). The two interferometric paths have equal length and the beam splitters at the entrance and at the exit have the same reflectivity of 50\%. When the interferometer is empty the probability amplitudes leading to P1 interfere constructively, while those leading to P2 interfere destructively. Therefore a particle will always be directed towards P1 (Fig.1a). If a perfectly absorbing object --- from now on called a black object --- is inserted into one path of the interferometer, only one beam can reach the exit ports, because the wavefunction behind the object vanishes. No interference can occur. With semitransparent mirrors at the entrance and exit of the interferometer, there is a probability of 50\% that the particle will be absorbed in the object, and a probability of 25\% each that it will reach either P1 or P2. In order to clarify why interaction-free measurement is involved here, consider a collection of identically looking objects, some perfectly absorbing, some perfectly transparent (i.e. not even inducing a phase shift) for the kind of particles used in the interferometer. Will it be possible to separate the objects into the two classes when testing them in the interferometer? Classically there is no way. Without permitting interaction we could only randomly pick out objects and put them into the one or the other class. But quantum mechanically a separation is possible. This is illustrated in Fig.2. Suppose we place one such object, of which it is not known whether it is black or transparent, into one path of the interferometer and send one particle. Three different outcomes can occur: \begin{description} \item[i.] The particle is detected at port P1. This can happen in either case, so no information is gained and another particle may be used. \item[ii.] The particle is detected at port P2. This is impossible with a perfectly transparent object, so we know now, that a black object is in the interferometer. Only one particle entered the interferometer and it was not absorbed, but was detected at P2. So it cannot have interacted with the object, and we got information of the absorbing property of the object {\it without interaction}. \item[iii.] The particle is detected neither at P1 nor at P2. Here we conclude, that the particle was absorbed by a black object, which, clearly, involved an interaction. \end{description} The result in ii is a nice example of the wave-particle-duality. The wave description determines the probabilities of the detection at the ports P1 and P2 by interference of the wavefunctions of the two paths. The particle description lets us conclude, that the particle {\it did not interact} with the black object when it was detected at P2: If it had been at the site of the black object, it would have been caught there, and could not have reached P2. The three outcomes permit putting the tested objects into three groups as sketched in Fig.2. Groups ii and iii will only contain black objects. 
All transparent objects are accumulated in group i, but also some black objects, because in a single test the probability for a transparent object to be placed there is 100\%, while for a black object it is 25\%. With repeated tests of the objects of this group, the probability of still having a black object there will approach zero. Simultaneously, the number of objects in group ii will reach one third of the originally available black objects. Two thirds of the black objects will end up in group iii, because they interacted with the test particle and absorbed it. The purpose of the present experiment was to perform such a selection procedure with a real neutron interferometer. The black objects were realized as a neutron detector. In this manner the three possible outcomes of an individual test could be observed. For transparent objects the interferometric paths were left empty. \section{Experiment and Results} The experiment was performed with the perfect-silicon-crystal neutron interferometer installed at the north beam port of the 250kW TRIGA MARK II reactor of the Atominstitut. This interferometer separates the paths by about 4 cm and recombines them again. The action of semitransparent mirrors is accomplished through Bragg-diffraction in Laue geometry at the 220-planes of the four crystal slabs. As can be seen in Fig.3, the basic layout is similar to a Mach-Zehnder interferometer and therefore well suited to test the EV-scheme. The main difference to the ideal case is that the two output beams are not equivalent. Output beam {\bf 1} can have a contrast of 100\% (i.e. be fully modulated), but output beam {\bf 2} cannot \cite{Petrascheck}, because the number of reflections and transmissions at the crystal slabs is not the same for the two paths making up output beam {\bf 2}. But due to crystal imperfections, even output beam {\bf 1} never reaches a contrast of 100\%. A contrast of 40\% is obtained with a cross section of 4 mm (width) times 7 mm (height) of the incident beam, and a wavelength of $\lambda = 1.8 \AA$ with a spread of $\Delta \lambda / \lambda \approx 1\%$. When normalizing the average intensity at output {\bf 1} to 1, the intensities of our set up are, as a function of a phase shift $\phi$ between paths {\bf I} and {\bf II} (setup as shown in Fig.3, but without the Cd sheet and without an object in path {\bf I}): \begin{equation} I_1(\phi) = 1 + 0.4 \cos(\phi) \end{equation} and \begin{equation} I_2(\phi) = 1.7 - 0.4 \cos(\phi). \end{equation} There is no phase shift $\phi$ at which one of the output beams is completely dark. This is contrary to the crucial assumption in the EV scheme. Consequently, a statement of the sort "if a neutron is detected at P2, a black object is with certainty in one of the arms of the intereferometer" was no longer possible. Nevertheless each test had to lead to a decision into which group the tested object should be put. We kept the three groups outlined above. But the certain identification of black objects had to be replaced by a probabilistic one, and it was attempted to achieve an enrichment of transparent objects in one group and of black objects in the two other groups. For this purpose theoretical simulations were undertaken, using the performance characteristics of the actual interferometer \cite{Diplom.Meinrad}. 
The questions to be answered were, whether the probability of successful interaction-free identification of black objects could be increased by \begin{description} \item[(a)] attenuating the intensity of beam {\bf I}, into which the test objects were to be placed, with an additional partial absorber, and \item[(b)] introducing an additional constant phase shift of $\pi$, such that a registration of a neutron at P1 rather than at P2 could be taken to indicate the presence of a black object. \end{description} It turned out that the optimal conditions could be obtained when attenuating beam {\bf I} to $t=16\%$ of its original intensity. This would reduce the amplitude of the intensity oscillations at P1 and P2 as a function of $\phi$ by a factor $\sqrt{t}=0.40$. The attenuation was done with a Cd-sheet of 125$\mu m$ thickness, which was placed in front of the second crystal slab, and parallel to this slab, as shown in Fig.3. The actual transmittance of this sheet was 0.158(4). The simulation also showed, that the phase shift between beams {\bf I} and {\bf II} should remain zero, just as in the EV-scheme, such that the count rate at P1 is at the {\it maximum} when a transparent object, or none, is present at the test position. But this, still, made it necessary to set the aluminum phase shifter to a certain value, because contrary to the ideal case, the actual interferometer has an internal phase offset due to inaccuracies of slab thicknesses and distances. In the experiment the registration of a neutron at P1 was interpreted as the presence of a transparent object, while registration at P2 was taken to indicate a black object. The probabilities of the possible results were determined by measuring the intensities of the neutron beams at the detectors P1 and P2, and, when applicable, at the detector D, which constituted the black object. The measured intensities of a typical run are listed in Table 1 below. Averaging over several runs was not done, because they showed different contrast, depending on the vibrational level of the building. A run lasted for approximately four hours, during which measurements with and without the black object in path {\bf I}, as well as background measurements and normal interference scan measurements had to be taken \cite{Meinrad.p61}. Without any object in the paths, but with the attenuating Cd-sheet in place (Fig.3), the total intensity of P1 {\it plus} P2, including background, was 1.25 counts/sec. \noindent {\bf Table 1:} Observed neutron counts with transparent (=none) or black object in the test position. \begin{tabular}{|c||r|r||r|} \hline detector & transparent object & black object & background \\ \hline \hline P1 & 3561 & 2073 & 215 \\ \hline P2 & 793 & 999 & 59 \\ \hline D & --- & 2253 & 1422 \\ \hline \end{tabular} The background was determined by rotating the interferometer crystal around the vertical axis away from the Bragg condition by some 3 degrees, such that the incident neutron beam went straight through the first and second crystal slabs. The background at D is due to its position close to the incident beam, whose intensity is three orders of magnitude larger than the intensity which the Bragg condition selects for the interferometer. The background at D is in part also due to the gamma radiation which is created when neutrons are absorbed in the Cd sheet in a nuclear reaction. 
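Anticipating the analysis below, the conversion from the raw counts of table 1 to single-test probabilities is straightforward; the following sketch (plain Python, our own illustration rather than the original analysis code) subtracts the background and normalizes by the number of neutrons registered in the run without the object, assuming equal counting times for the two runs and, as in the analysis below, 100\% efficiency of P1 and P2. The resulting numbers are close to those reported in table 2 below.
\begin{verbatim}
# Raw counts from table 1 and the corresponding background counts.
counts = {"transparent": {"P1": 3561, "P2": 793},
          "black":       {"P1": 2073, "P2": 999}}
background = {"P1": 215, "P2": 59}

# Background-subtracted counts.
net = {obj: {d: counts[obj][d] - background[d] for d in ("P1", "P2")}
       for obj in counts}

# With a transparent object nothing is absorbed, so the run without the
# object fixes the number of neutrons that entered the interferometer.
n_in = net["transparent"]["P1"] + net["transparent"]["P2"]   # 4080

for obj in ("transparent", "black"):
    p1 = net[obj]["P1"] / n_in
    p2 = net[obj]["P2"] / n_in
    p_absorbed = 1.0 - p1 - p2 if obj == "black" else 0.0
    print(obj, round(p1, 3), round(p2, 3), round(p_absorbed, 3))
# transparent 0.82  0.18  0.0
# black       0.455 0.23  0.314
\end{verbatim}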
The detectors P1 and P2 were standard $BF_3$-filled (atmospheric pressure) cylindrical neutron detectors with a diameter of 5 cm and a length of around 40 cm, and were hit axially by the respective beams. Their efficiencies exceeded 97\%. The detector D, representing the black object, had an efficiency of only 65\%. Its outside diameter was 2.54 cm, and its length some 30 cm. When at the test position, beam {\bf I} hit it perpendicular to the axis of the cylinder. It was filled with four atmospheres of $He^3$. The capture cross section of a thermal neutron at a $He^3$ nucleus is 5330 barns. This results in a probability of 0.76 that a neutron will be captured in the gas. The reduction to 0.65 is due to averaging over the beam cross section and to electronic discrimination, which was needed to suppress as much as possible the unwanted gamma-background. If the neutron went through detector D unaffected, it was certainly absorbed by the shielding material behind the detector. Thereby it was ensured that the black object really was black. From the data in table 1 it is possible to infer whether an actual interaction-free identification of black objects is possible. In so doing we will assume 100\% efficiency of detectors P1 and P2. Since this is not very different from their actual efficiency the resulting error will be small. Detector D of the black object is not needed for the analysis. This detector only served as a check of the consistency of the results. After subtracting the background counts, which is permissible because they can in principle be made arbitrarily low with improved shielding and electronics, the probabilities for an object to be put into one of the three groups after a test with only a single particle can be calculated as given in table 2: \noindent {\bf Table 2:} Probabilities of detection of a neutron at one of the detectors with a black or transparent object in the test position. Standard deviations include only Poisson statistics of the counts of table 1. Numbers in rectangular brackets are the probabilities for an ideal Mach-Zehnder interferometer. \begin{tabular}{|c||c|c|c|} \hline object & detection at P1 (i) & detection at P2 (ii) & absorption in object (iii) \\ \hline \hline black & .455$\pm$.014 [.25] & .231$\pm$.009 [.25] & .314$\pm$.018 [.50] \\ \hline transparent & .820$\pm$.020 [1] & .180$\pm$.008 [0] & --- \\ \hline \end{tabular} It is noteworthy that the probability that the neutron gets absorbed in the black object is significantly lower than in an ideal Mach-Zehnder interferometer. This improved performance is also retained in the limit of infinitely many tests of the objects remaining in group i. In such a procedure, the probability for a black object {\it not to interact} with the neutron and ultimately to be put into group ii is given by \begin{equation} p_{black}^{(ii)} \sum_{n=0}^{\infty}\left[ p_{black}^{(i)}\right]^n \approx .424 , \end{equation} where we have inserted from table 2, $p_{black}^{(i)} = .455$ and $p_{black}^{(ii)} = .231$. Correspondingly, the probability that the black object ultimately absorbs a particle in this procedure and thus is put into group iii is $1-.424=.576$. But, unfortunately, in such repeated tests the transparent objects, too, will accumulate in group ii. The reason is that their probability of being put into group i in a single test is less than 1, namely $p_{trans}^{(i)}=.820$, according to table 2.
Therefore, when testing objects in group i again and again, until either P2 or D fires, all transparent objects and 42.4\% of the black objects will end up in group ii. A closer analysis reveals that with our real neutron interferometer, the best separation of black and transparent objects is obtained when testing each object {\it only once}. Then one can obtain a separation as shown in Fig.4. The x-axis represents the fraction of black objects in the original ensemble. The full curves going from the lower left to the upper right corner represent the fraction of black objects ending up in group ii, which is the group where the tested object is put when the particle is detected at P2. The thick line is the most likely value, the two thin ones delimit its standard deviation. The dashed line indicates ``no separation'' of black and transparent objects. One notes that an enrichment of black objects in group ii does happen, because their fraction there is always larger than their fraction in the original ensemble. For instance, for a fraction of .5 of black objects in the original ensemble, their fraction in the final ensemble ii will be .562$\pm$.014. Therefore, even with a neutron interferometer, whose performance is far from that of an ideal Mach-Zehnder interferometer, and in fact far from the best available neutron interferometers, {\it some information} is obtained in an interaction-free manner. For the sake of completeness Fig.4 also shows the fraction of transparent objects in group i after a single test. These are the curves extending from the upper left to the lower right corner. As expected, transparent objects accumulate in group i. It is also interesting to ask whether for the {\it real} neutron interferometer there exists a method, different from the EV-method, which permits arbitrarily good separation of transparent and black objects. This is indeed possible, albeit at the cost of only a tiny fraction of the originally available black objects remaining. The method consists of repeated tests of the objects in {\it group ii}. As before, let $p_{black}^{(ii)}$ and $p_{trans}^{(ii)}$ denote the probabilities that a black, respectively transparent, object will be put into group ii after a single test. According to Table 2 we have, again neglecting experimental uncertainties, $p_{black}^{(ii)} = .231$ and $p_{trans}^{(ii)} = .180$. When testing objects of group ii a further $N-1$ times, a black object of the original ensemble is therefore $\left( p_{black}^{(ii)} / p_{trans}^{(ii)} \right)^N = (1.283)^N$ times more likely to remain in group ii as compared to a transparent object of the original ensemble. Arbitrarily good purification of group ii is therefore possible {\it without interaction}. The number of black objects that group ii contains in the end will, however, only be $\left( p_{black}^{(ii)} \right)^N$ times the number of black objects in the original ensemble.

\section{Conclusion} We have performed a test of interaction-free measurement with a neutron interferometer of the Mach-Zehnder type. In the unobstructed interferometer the probability of finding a neutron in the path containing the test position for the black and transparent objects to be identified was around .3, and correspondingly the probability of finding it in the other path was around .7. The visibility contrast at the exit port P1 only reached around 40\%. 
With such strong deviations from the characteristics of an ideal Mach-Zehnder interferometer it was nevertheless possible to show that an original ensemble with unknown proportions of black and transparent objects can be separated into two groups by testing each object with essentially only one neutron (about 1.04 neutrons on average), which must not be absorbed in the test object. Then one of these groups is guaranteed to contain a higher proportion of black objects than the original ensemble. The black objects are laid out as a neutron detector plus absorptive shielding. A neutron interacting with a black object would certainly be absorbed by it. Therefore, the result shows interaction-free measurement at work. \section{Acknowledgment} We would like to thank Professor H. Rauch for permission to use his neutron interferometry setup and to adapt it to the needs of this experiment. \begin{thebibliography}{99} \bibitem{Dicke} R. H. Dicke, Am. J. Phys. {\bf 49}, 925 (1981). \bibitem{EV1} A. C. Elitzur and L. Vaidman, Found. Phys. {\bf 23}, 987 (1993). \bibitem{EV2} L. Vaidman, Quantum Opt. {\bf 6}, 119 (1984). \bibitem{Bennet} C. H. Bennet, Nature {\bf 371}, 479 (1994). \bibitem{Vaidman.quant-ph} L. Vaidman, http://xxx.lanl.gov/abs/quant-ph/9610033. \bibitem{Zeilinger.idea} P. Kwiat, H. Weinfurter, T. Herzog, A. Zeilinger, and M. Kasevich, Ann. N.Y. Acad. Sci. {\bf 755}, 383 (1995). \bibitem{Paul.Pavicic} H. Paul and M. Pavicic, J. Opt. Soc. Am. {\bf B 14}, 1275 (1997); M. Pavicic, Phys. Lett. {\bf A 223}, 241 (1996). \bibitem{Zeilinger.exp} P. Kwiat, H. Weinfurter, T. Herzog, A. Zeilinger, M. Kasevich, Phys. Rev. Lett. {\bf 74}, 4763 (1995). \bibitem{Kwiat.review} P. Kwiat, H. Weinfurter and A. Zeilinger, Scientific American, Nov. 1996, p. 52. \bibitem{Krenn} G. Krenn, J. Summhammer, and K. Svozil, Phys. Rev. {\bf A53}, 1228 (1996). \bibitem{Karlsson} A. Karlsson, G. Bj\"ork, E. Forsberg, http://xxx.lanl.gov/abs/quant-ph/9705006. \bibitem{Petrascheck} D. Petrascheck, Phys. Rev. {\bf B 35}, 6549 (1987), and references therein. \bibitem{Diplom.Meinrad} M. Hafner, Diploma thesis, Faculty of Natural Sciences, Technical University of Vienna, 1996 (in German; unpublished). \bibitem{Meinrad.p61} ibid., p.61, ff. \end {thebibliography} \pagebreak {\bf Figure Captions} \noindent {\bf Fig.1: (a)} The test particle will exit the interferometer at P1 when there is no object, or a perfectly transparent one, in one path of the interferometer. {\bf (b)} With the perfectly absorbing object D in one path, the test particle, if it is not absorbed, can exit at P1 as well as at P2. \noindent {\bf Fig.2:} An original ensemble of black and transparent objects can be separated into three groups by a test in the interferometer. The black objects in group ii get there without having interacted with the test particle. \noindent {\bf Fig.3:} Experimental scheme. The base area of the four plate interferometer is 144mm $\times$ 100mm. Crystal slabs, phase shifter, black object (= detector D plus shielding) and beam widths are drawn to scale. The wide incident beam is collimated by a boron carbide (BC) slit. The transmitted part of beam I after the second crystal slab, and that of beam II after the third crystal slab, leave the interferometer unused. Detectors at exit beams P1 and P2 not to scale. \noindent {\bf Fig.4:} Results of experiment. 
Full lines (mean, and plus minus one standard deviation): In a single test of each object, a black object is more likely to be put into group ii, unless the neutron is absorbed by it in an interaction, while a transparent object is more likely to be put into group i. Dashed lines: What could be achieved when randomly putting objects of the original ensemble into different groups. \end{document}
\begin{document} \title*{Understanding the Dynamics of Collision and Near-Collision Motions in the $N$-Body Problem} \titlerunning{Collisions and Near-Collisions} \author{Lennard F. Bakker} \institute{Lennard F. Bakker \at Department of Mathematics, Brigham Young University, USA \email{[email protected]}} \maketitle \abstract {Although rare, collisions of two or more bodies in the $N$-body problem are apparent obstacles at which Newton's Law of Gravity ceases to make sense. Without understanding the nature of collisions, a complete understanding of the $N$-body problem can not be achieved. Historically, several methods have been developed to probe the nature of collisions in the $N$-body problem. One of these methods removes, or regularizes, certain types of collisions entirely, thereby relating not only analytically, but also numerically, the dynamics of such collision motions to their near-collision motions. By understanding the dynamics of collision motions in the regularized setting, a better understanding of the dynamics of near-collision motions is achieved.} \section{Introduction} \label{sec:1} For ages, humankind has observed the regular and predictable motion of the planets and other bodies in the solar system and asked: will the motion of the bodies in the solar system continue forever as currently observed? This philosophical question is the object of the mathematical notion of stability. A difficulty in applying the notion of stability to the motion of the solar system is that of collision and near-collision motions of bodies in the solar system. Collision and near-collision motions do occur in the solar system. Section \ref{sec:2} recounts a few of these that have been observed or predicted. The standard mathematical model for understanding the motion of planets and other bodies in the solar system is the Newtonian $N$-Body Problem, presented in Section \ref{sec:3}. Included here are some of the basic features and mathematical theory of the Newtonian $N$-Body Problem, its integrals or constants of motion, special solutions such as periodic solutions, and the notions of stability and linear stability of periodic solutions and their relationship. The notions and basic theory of collisions and singularities in the Newtonian $N$-Body Problem are presented in Section \ref{sec:4}. This includes a discussion of the probabilities of collisions, and the regularization or the lack thereof for collisions. A collision motion is rare in that it has a probability of zero of occurring, whereas a near-collision motion has a positive probability of occurring. Regularization is a mathematical technique that removes the collision singularities from the Newtonian $N$-Body Problem and enables an analysis of near-collision motions in terms of collision motions through the continuous dependence of motions on initial conditions. This regularization is illustrated in the collinear $2$-Body Problem, the simplest of all the $N$-Body Problems. Recent results are presented in Section \ref{sec:5} on the analytic and numerical existence, and the numerical stability and linear stability, of periodic motions with regularizable collisions in various $N$-Body Problems with $N=3$ and $N=4$. Although fictitious, these periodic motions with regularizable collisions provide a view of their near-collision motions which could be motions of the bodies in the $N$-Body Problem that are collision-free and bounded for all time. 
\section{Phenomenon} \label{sec:2} Collisions and near-collisions of two or more solar system bodies are apparent obstacles at which Newton's Law of Gravity becomes problematic. Velocities of colliding bodies become infinite at the moment of collision, while velocities of near-colliding bodies become very large as they pass by each other. Both of these situations present problems for numerical estimates of the motion of such bodies. Although collisions are rare, historical evidence of collisions of solar system bodies is viewable on the surface of the Earth and the Moon \cite{Ce}. Only recently have collisions of solar system bodies actually been observed. As the comet Shoemaker-Levy 9 approached Jupiter, it was torn apart into fragments by tidal forces. In July of 1994, at least 21 discernible fragments of Shoemaker-Levy 9 collided with Jupiter. These were the {\it first} ever observed collisions of solar system bodies. An animation of some of the fragments of Shoemaker-Levy 9 colliding with Jupiter can be found at www2.jpl.nasa.gov/sl9/anim.html. Near-collision motions are less rare than collisions. As of March 2012, there are nearly 9000 known near-Earth asteroids\footnote{See http://neo.jpl.nasa.gov/stats/}, of which 1306 are potentially hazardous to Earth\footnote{See http://neo.jpl.nasa.gov/neo/groups.html}. One of these potentially hazardous asteroids, named 2012 DA14, was discovered in 2012. This asteroid will pass by Earth on February 15, 2013, coming closer to the Earth than satellites in geostationary orbit\footnote{See article about 2012 DA14 posted March 6, 2012 on MSNBC.com}. How close will 2012 DA14 pass by Earth? A mere 17000 miles (27000 km)\footnote{See article about 2012 DA14 posted March 8, 2012 on Earthsky.org}. In cosmic terms, this close shave of 2012 DA14 with Earth in 2013 is a near-collision motion. \section{The $N$-Body Problem} \label{sec:3} To model collision and near-collision motions we make some simplifying assumptions and use Newton's Inverse Square Law of Gravity. We assume that all the bodies are idealized as particles with zero volume (i.e., as points), that no particle is torn apart by tidal forces, that the mass of each particle never changes, and that besides Newton's Law of Gravity there are no other forces acting on the bodies. Under these assumptions we would think of Shoemaker-Levy 9 as not being torn apart by tidal forces, but as colliding with Jupiter as a whole. \subsection{The Equations} \label{sec:3.1} The particles modeling the bodies move in three-dimensional Euclidean space which we denote by ${\mathbf R}^3$. For a positive integer $N\geq 2$, suppose there are $N$ particles with positions ${\mathbf q}_j\in{\mathbf R}^3$ and masses $m_j>0$, $j=1,\dots,N$. The distance between two of the particles is denoted by \[ r_{jk} = \vert {\mathbf q}_j-{\mathbf q}_k\vert, \ \ j\ne k,\] which is the standard Euclidean distance between two points in ${\mathbf R}^3$. The Newtonian $N$-Body Problem is the system of second-order nonlinear differential equations \[ m_j {\mathbf q}_j^{\prime\prime} = \sum_{k\ne j} \frac{Gm_jm_k({\mathbf q}_k-{\mathbf q}_j)}{r_{jk}^3},\ \ j=1,\dots,N,\] where ${}^\prime = d/dt$ for a time variable $t$ and $G=6.6732\times 10^{-11}\ {\rm m}^3\,{\rm kg}^{-1}{\rm s}^{-2}$. By an appropriate choice of units of the ${\mathbf q}_j$, we will assume that $G=1$ because we are investigating the qualitative or geometric, rather than the quantitative, behavior of collision and near-collision motions. 
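As an illustration of the equations just written (with $G=1$), the following Python sketch evaluates the right-hand side of the Newtonian $N$-Body Problem for given positions and masses. It is only a minimal sketch of the vector field, not of any particular integration scheme.

\begin{verbatim}
import numpy as np

def accelerations(q, m):
    """Acceleration of each particle in the Newtonian N-Body Problem with G = 1.

    q : (N, 3) array of positions q_j, m : (N,) array of masses m_j.
    Returns the (N, 3) array q_j'' = sum_{k != j} m_k (q_k - q_j) / r_jk^3.
    """
    N = len(m)
    a = np.zeros_like(q)
    for j in range(N):
        for k in range(N):
            if k == j:
                continue
            d = q[k] - q[j]
            r = np.linalg.norm(d)          # r_jk = |q_j - q_k|
            a[j] += m[k] * d / r**3
    return a

# Example: two unit masses on the x-axis, one unit apart.
q = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
m = np.array([1.0, 1.0])
print(accelerations(q, m))                 # each particle is pulled toward the other
\end{verbatim}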
By the standard existence, uniqueness, and extension theory in differential equations (see \cite{Ch} for example), the initial value problem \begin{equation}\label{IVP} m_j {\mathbf q}_j^{\prime\prime} = \sum_{k\ne j} \frac{m_jm_k({\mathbf q}_k-{\mathbf q}_j)}{r_{jk}^3}, \ \ {\mathbf q}_j(t_0) = {\mathbf q}_j^0, \ \ {\mathbf q}_j^{\prime}(t_0) = {\mathbf q}_j^{\prime 0}, \end{equation} has a unique solution \[ {\mathbf q}(t) = ( {\mathbf q}_1(t),\dots,{\mathbf q}_N(t))\] defined on a maximal interval of definition $(t^-,t^+)$ as long as $r_{jk}\ne 0$ for all $j\ne k$ at $t=t_0$. Such a solution ${\mathbf q}(t)$ describes a motion of the $N$ particles. Not every initial value problem (\ref{IVP}) will have a solution ${\mathbf q}(t)$ with $t^-=-\infty$ and $t^+=\infty$. A solution with either $t^->-\infty$ or $t^+<\infty$ experiences a singularity at the finite endpoint of its maximal interval of definition. The notion of a singularity is addressed in Section \ref{sec:4.1}. \subsection{Integrals} \label{sec:3.2} An integral of motion of the Newtonian $N$-Body Problem is a differentiable function $F$ of the position ${\mathbf q}$ and/or the velocity ${\mathbf q}^\prime$ and/or the masses ${\mathbf m}=(m_1,\dots,m_N)$ such that \[ \frac{d}{dt} F({\mathbf q}(t),{\mathbf q}^\prime(t),{\mathbf m}) = 0, \ \ t\in (t^-,t^+).\] Along a solution ${\mathbf q}(t)$ an integral $F$ of motion satisfies \[ F({\mathbf q}(t),{\mathbf q}^\prime(t),{\mathbf m}) = F({\mathbf q}(t_0),{\mathbf q}^\prime(t_0),{\mathbf m}),\ \ t^-<t<t^+,\] i.e., the value of $F$ is constant along the solution. The Newtonian $N$-Body Problem has ten known integrals of motion. The translation invariance of the equations of the Newtonian $N$-Body Problem gives rise to $6$ integrals of motion. With $M=\sum_{j=1}^N m_j$, three of these are given by the components of the center of mass vector \[ {\mathbf C} = \frac{1}{M} \sum_{j=1}^N m_j {\mathbf q}_j,\] and three more are given by the components of the linear momentum vector \[ {\mathbf L} = \frac{1}{M} \sum_{j=1}^N m_j{\mathbf q}_j^\prime.\] Typically, both of these are set to ${\mathbf 0}$ so that the {\it relative motion} of the $N$ particles is emphasized. The rotational symmetry of the equations of the Newtonian $N$-Body Problem gives rise to $3$ more integrals of motion. These integrals are given by the components of the angular momentum vector, \[ {\mathbf A} = \sum_{j=1}^N m_j {\mathbf q}_j\times {\mathbf q}^\prime_j.\] The angular momentum plays a key role in understanding collisions in the $N$-Body Problem, as we will see later on. There is one more integral of motion of the Newtonian $N$-Body Problem. The {\it self-potential} (or negative of the potential energy) is \[ U = \sum_{j<k} \frac{m_j m_k}{r_{jk}}.\] The {\it kinetic energy} is \[ K = \frac{1}{2} \sum_{j=1}^N m_j {\mathbf q}^\prime_j\cdot{\mathbf q}^\prime_j.\] The {\it total energy} \[ H = K-U\] is an integral of motion for the Newtonian $N$-Body Problem. In the late 1800's, the mathematical strategy for ``solving'' the Newtonian $N$-Body Problem was to find enough ``independent'' integrals of motion \cite{Sa}. This would implicitly give each solution as the curve of intersection of the hypersurfaces corresponding to the integrals of motion. Each solution, paired with its velocity, traces a curve $({\mathbf q}(t),{\mathbf q}^\prime(t))$ in ${\mathbf R}^{6N}$. However, the intersection of the hypersurfaces of the ten integrals of motion gives a $6N-10>1$ dimensional hypersurface in ${\mathbf R}^{6N}$, which is not a curve! 
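For a numerically estimated solution, the ten integrals of motion provide a convenient consistency check: their values should stay (nearly) constant along the computed trajectory. A minimal Python sketch, using the definitions above with $G=1$, is the following.

\begin{verbatim}
import numpy as np

def integrals(q, v, m):
    """Center of mass C, linear momentum L, angular momentum A, and energy H = K - U.

    q, v : (N, 3) arrays of positions and velocities, m : (N,) masses, G = 1.
    """
    M = m.sum()
    C = (m[:, None] * q).sum(axis=0) / M
    L = (m[:, None] * v).sum(axis=0) / M          # divided by M, as in the text
    A = (m[:, None] * np.cross(q, v)).sum(axis=0)
    K = 0.5 * (m * (v * v).sum(axis=1)).sum()
    U = sum(m[j] * m[k] / np.linalg.norm(q[j] - q[k])
            for j in range(len(m)) for k in range(j + 1, len(m)))
    return C, L, A, K - U
\end{verbatim}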
The ten known integrals of motion are independent of each other (one is not a function of the others), and are algebraic functions of positions, velocities, and masses. Are there any more algebraic integrals of motion? This was answered a long time ago in 1887-1888 by Bruns \cite{Br}. \begin{theorem} There are no algebraic integrals of motion independent of the ten known integrals of motion. \end{theorem} \noindent Consequently, new integrals of motion, if any, can not be algebraic! In 1893, Newcomb \cite{Ne} lamented that no additional integrals had been found to enable the implicit solution of the $3$-Body Problem. It is well-known that the Newtonian $2$-Body Problem can be solved implicitly\footnote{See en.wikipedia.org/wiki/Gravitational\_two-body\_problem}, but all attempts to solve the $N$-Body Problem with $N\geq 3$ have been futile\footnote{Karl Sundman did solve the $3$-Body Problem when ${\mathbf A}\ne 0$ by convergent power series defined for all time, but the series converge too slowly to be of any theoretic or numerical use \cite{Sa}.}. Typically, then, the solution ${\mathbf q}(t)$ of the initial value problem (\ref{IVP}) is estimated numerically. From the constant total energy $H$ along a solution ${\mathbf q}(t)$, we observe that if any of the distances $r_{jk}$ get close to $0$, i.e., at least two of the particles are near collision, the self-potential becomes large and the kinetic energy becomes large too. The latter implies that the velocity of at least one of the particles becomes large, and the constancy of the linear momentum ${\mathbf L}$ along ${\mathbf q}(t)$ implies that the velocities of at least two particles become large. In particular, from the equations of the Newtonian $N$-Body Problem, the particles that are near collision are the ones with the large velocities. These large velocities present problems for the numerical estimates of such a solution. \subsection{Special Solutions} \label{sec:3.3} Rather than solving the $N$-Body Problem for all of its solutions by finding enough independent integrals of motion, it is better to examine special solutions with particular features. The simplest solutions to find are equilibrium solutions, where the position ${\mathbf q}_j(t)$ of each particle is constant for all time. But the Newtonian $N$-Body Problem has none of these (see p. 29 in \cite{MHO}). The next simplest solutions are periodic solutions, i.e., there exists $T>0$ such that ${\mathbf q}(t+T) = {\mathbf q}(t)$ for all $t\in{\mathbf R}$. These are part of the larger collection of solutions ${\mathbf q}(t)$ with $t^-=-\infty$ and $t^+=\infty$ that are bounded. Such solutions must have a particular total energy (see p. 160 in \cite{Sa}). \begin{theorem} If a solution ${\mathbf q}(t)$ of the Newtonian $N$-Body Problem exists for all time and is bounded, then the total energy $H<0$. \end{theorem} \noindent Consequently, any periodic solution ${\mathbf q}(t)$ of the Newtonian $N$-Body Problem must have negative total energy. This is why in the search for periodic solutions the total energy is always assigned a negative value. \subsection{Stability} \label{sec:3.4} A periodic solution ${\mathbf q}(t)$ of the Newtonian $N$-Body Problem gives a predictable future: we know with certainty what the positions of the $N$ particles will be at any time $t>0$. But what if our measurements of the initial conditions ${\mathbf q}(0)$ and ${\mathbf q}^\prime(0)$ are slightly off? 
A solution $\tilde{\mathbf q}(t)$ with initial conditions near ${\mathbf q}(0)$ and ${\mathbf q}^\prime(0)$ will stay close to ${\mathbf q}(t)$ for a time, by a property of solutions of initial value problems called continuity of solutions with respect to initial conditions (see \cite{Ch}). But if it stays close for all $t>0$, we think of ${\mathbf q}(t)$ as being stable. To {\it quantify} this notion of stability for a periodic solution, we use a Poincar\'e section, which is a hyperplane $S$ containing the point $({\mathbf q}(0),{\mathbf q}^\prime(0))$ that is transverse to the curve $({\mathbf q}(t),{\mathbf q}^\prime(t))$. If ${\mathbf x} = (\tilde{\mathbf q}(0),\tilde{\mathbf q}^\prime(0))$ is a point on $S$ near $({\mathbf q}(0),{\mathbf q}^\prime(0))$, then $P({\mathbf x})$ is the next point where the curve $(\tilde{\mathbf q}(t),\tilde{\mathbf q}^\prime(t))$ intersects $S$\footnote{For an illustration of this see en.wikipedia.org/wiki/Poincar\'e\_map}, and $P^2({\mathbf x})$ is the point after that, and so on. The initial condition ${\mathbf x}^0 = ({\mathbf q}(0),{\mathbf q}^\prime(0))$ is a {\it fixed point} of this Poincar\'e map $P$ from $S$ to $S$, i.e., $P({\mathbf x}^0)={\mathbf x}^0$. \begin{definition}\label{stability} The periodic solution ${\mathbf q}(t)$ is stable if for every real $\epsilon>0$ there exists a real $\delta>0$ such that $\vert P^k({\mathbf x}) - {\mathbf x}^0\vert<\epsilon$ for all $k=1,2,3,\dots,$ whenever $\vert {\mathbf x} - {\mathbf x}^0\vert<\delta$. \end{definition} \noindent When ${\mathbf q}(t)$ is not stable, there are solutions which start nearby but eventually move away from ${\mathbf q}(t)$, and we say that ${\mathbf q}(t)$ is {\it unstable}. Showing directly that ${\mathbf q}(t)$ is stable or unstable is very difficult. Instead, the related concept of linearized stability is investigated, at least numerically. The derivative of the Poincar\'e map at the fixed point ${\mathbf x}^0$ is a square matrix $DP({\mathbf x}^0)$. \begin{definition}\label{linearstability} A periodic solution ${\mathbf q}(t)$ is \begin{enumerate} \item spectrally stable\footnote{There is a more restrictive notion of spectral stability known as linear stability that requires additional technical conditions on the square matrix $DP({\mathbf x}^0)$.} if all the eigenvalues of $DP({\mathbf x}^0)$ have modulus one, and is \item linearly unstable if any eigenvalue of $DP({\mathbf x}^0)$ has modulus bigger than one. \end{enumerate} \end{definition} \noindent In 1907, Liapunov \cite{Li} established a connection between the stability of Definition \ref{stability} and the linearized stability of Definition \ref{linearstability}. \begin{theorem} If a periodic solution ${\mathbf q}(t)$ is stable then it is spectrally stable, and if ${\mathbf q}(t)$ is linearly unstable, then it is unstable. \end{theorem} \noindent If a periodic solution is shown numerically to be linearly unstable, then by Theorem 3 the periodic solution is unstable. On the other hand, if a periodic solution is shown numerically to be spectrally stable, it may be stable or unstable. Examples exist with spectrally stable fixed points of maps like $P$ that are unstable (see \cite{SM}). The notion of stability for a non-periodic solution, such as the motion of the sun and planets in the solar system, is harder to grasp. Here is a sampling of the history and opinions on this stability problem. 
In 1891, Poincar\'e commented that the stability of the solar system had at that time already preoccupied much time and attention of researchers (see p. 147 in \cite{DH}). In 1971, Siegel and Moser lamented that a resolution of the stability problem for the $N$-Body Problem would probably be in the distant future (see p. 219 in \cite{SM}). In 1978, Moser noted that the answer to the stability of the solar system was still not known (see p. 127 in \cite{DH}). In 2005, Saari stated that a still unresolved problem for the $N$-Body Problem is that of stability (see p. 132 in \cite{Sa}). Meyer, Hall, and Offin commented on how little is known about the stability problem and how difficult it is to obtain (see p. 229 in \cite{MHO}). In 1996, Diacu and Holmes suggested that the solar system should be considered stable (in a weak sense) if no collisions occur among the sun and the planets, and no planet ever escapes from the solar system (see p. 129 in \cite{DH}). In this weak sense of stability, the solar system is stable for the next few billion years according to the numerical work of Hayes \cite{Ha} in 2007. Much longer-term numerical studies of the solar system by Batygin and Laughlin \cite{BL} in 2008 using small changes in the initial conditions suggest that Mercury could fall into the sun in 1.261Gyr\footnote{Gyr means giga-year or 1,000,000,000 years}, or that Mercury and Venus could collide in 862Myr\footnote{Myr means mega-year or 1,000,000 years} and Mars could escape from the solar system in 822Myr. The Newtonian $N$-Body Problem thus suggests that in the near future, the solar system should be free of collisions of planets and the Sun, with no planets escaping the solar system. But this still leaves open the possibility that smaller objects, such as asteroids and comets, could collide with any of the planets in the short and long term. Recall that there are nearly 9000 of those near-Earth asteroids to consider, with 2012 DA14 making its near-collision approach with Earth on February 15 of 2013. \section{Collisions} \label{sec:4} Either in the short term or the long term, collisions put a {\it wrench} into any notion of stability. Why should a solution or any nearby solution of the Newtonian $N$-Body Problem be defined for all time? Remember that Shoemaker-Levy 9 has $t^+<\infty$! \subsection{Singularities} \label{sec:4.1} Collisions are one of the {\it two} kinds of singularities in the Newtonian $N$-Body Problem. The solution ${\mathbf q}(t)$ of initial value problem (\ref{IVP}) is real analytic (i.e., a convergent power series) on an interval $(t_0-\delta,t_0+\delta)$ for some $\delta>0$, as long as $r_{jk}\ne 0$ for all $j\ne k$ at $t_0$. By a process called analytic continuation (see for example \cite{MaH}), the interval $(t_0-\delta,t_0+\delta)$ can be extended to the maximal interval $(t^-,t^+)$. \begin{definition} A {\it singularity} of the Newtonian $N$-Body Problem is a time $t=t^+$ or $t^-$ when $t^+<\infty$ or $t^->-\infty$. \end{definition} In 1897, Painlev\'e \cite{Pa} characterized a singularity of the Newtonian $N$-Body Problem, using the quantity \[ r_{\rm min}(t) = \min_{j\ne k} r_{jk}(t)\] determined by a solution ${\mathbf q}(t)$. \begin{theorem} A singularity for the Newtonian $N$-Body Problem occurs at time $t=t^*$ if and only if $r_{\rm min}(t)\to 0$ as $t\to t^*$. 
\end{theorem} \noindent An understanding of what this means is obtained by considering the {\it collision set} \[ \Delta = \bigcup_{j\ne k} \{ {\mathbf q} : {\mathbf q}_j={\mathbf q}_k\}\subset ({\mathbf R}^3)^N,\] which is the set of points where two or more of the $N$ particles occupy the same position. Painlev\'e's characterization means that ${\mathbf q}(t)$ approaches the collision set, i.e., \[ {\mathbf q}(t)\to \Delta\ \ {\rm as}\ \ t\to t^*\] when $t^*$ is a singularity of the Newtonian $N$-Body Problem. Painlev\'e's characterization introduces two classes of singularities. \begin{definition} A singularity $t^*$ of the Newtonian $N$-Body Problem is a collision singularity when ${\mathbf q}(t)$ approaches a specific point of $\Delta$ as $t\to t^*$. Otherwise the singularity $t^*$ is a non-collision singularity. \end{definition} Only collision singularities can occur in the Newtonian $2$-Body Problem because it can be implicitly solved. In 1897, Painlev\'e \cite{Pa} showed that only one other Newtonian $N$-Body Problem has only collision singularities. \begin{theorem} In the $3$-Body Problem, all singularities are collision singularities. \end{theorem} \noindent Unable to extend his result to more than $3$ bodies, Painlev\'e conjectured that there exist non-collision singularities in the Newtonian $N$-Body Problem for $N\geq 4$. In 1992, Xia \cite{Xi} mostly confirmed Painlev\'e's conjecture, giving an example in the Newtonian $5$-Body Problem. \begin{theorem} There exist non-collision singularities in the $N$-Body Problem for $N\geq 5$. \end{theorem} \noindent That leaves unresolved the question of the existence of non-collision singularities in the Newtonian $4$-Body Problem. An understanding of what a non-collision singularity looks like is obtained by considering one-half of the {\it polar moment of inertia} of the Newtonian $N$-Body Problem: \[ I = \frac{1}{2} \sum_{j=1}^N m_j {\mathbf q}_j\cdot{\mathbf q}_j.\] This scalar quantity measures the ``diameter'' of the $N$ particles in the Newtonian $N$-Body Problem. In 1908, von Zeipel \cite{Ze} characterized a collision singularity in terms of the polar moment of inertia. \begin{theorem}\label{vonZeipel} A singularity of the Newtonian $N$-Body Problem at $t=t^*$ is a collision if and only if $I$ is bounded as $t\to t^*$. \end{theorem} \noindent This implies that for a non-collision singularity, at least one of the $N$ particles has to achieve an infinite distance from the origin in just a finite time. This is a rather strange thing for Newton's Law of Gravity to predict. On the other hand, by Theorem \ref{vonZeipel}, for a collision singularity, all of the positions of the $N$ particles remain bounded at the moment of the singularity. A total collapse is an example of a collision singularity in the $N$-Body Problem for which all $N$ particles collide at the same point at the singularity $t^*$. For a solution ${\mathbf q}(t)$, the quantity \[ r_{\rm max}(t) = \max_{j\ne k} r_{jk}(t)\] characterizes a total collapse: a total collapse occurs at $t^*$ if and only if \[ r_{\rm max}(t) \to 0 {\rm\ as\ } t\to t^*.\] There is a relationship between total collapse and the angular momentum that was known to Weierstrass and established by Sundman (see \cite{Sa}). \begin{theorem}\label{angular} If ${\mathbf A}\ne 0$, then $r_{\rm max}(t)$ is bounded away from zero. \end{theorem} \noindent This does not preclude the collision of fewer than $N$ particles when ${\mathbf A}\ne 0$, as will be illustrated for certain Newtonian $N$-Body Problems in Section \ref{sec:5}. 
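In a numerical computation, the quantities $r_{\rm min}(t)$, $r_{\rm max}(t)$ and the polar moment of inertia are convenient diagnostics for an approaching singularity: $r_{\rm min}\to 0$ signals a singularity (Painlev\'e's characterization), bounded $I$ distinguishes a collision from a non-collision singularity (Theorem \ref{vonZeipel}), and $r_{\rm max}\to 0$ signals total collapse. A minimal Python sketch of these diagnostics is the following.

\begin{verbatim}
import numpy as np

def collision_diagnostics(q, m):
    """r_min, r_max and the (half) polar moment of inertia I for positions q, masses m."""
    N = len(m)
    dists = [np.linalg.norm(q[j] - q[k]) for j in range(N) for k in range(j + 1, N)]
    I = 0.5 * (m * (q * q).sum(axis=1)).sum()
    return min(dists), max(dists), I
\end{verbatim}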
\subsection{Improbability} \label{sec:4.2} Recall that there are 1306 potentially hazardous near-Earth asteroids. What are the chances that Earth will be hit by a near-Earth asteroid, or Jupiter will be hit by another comet? Well, it depends on the arrangement of the particles. \begin{definition} A solution ${\mathbf q}(t)$ is called collinear if the $N$ particles always move on the same fixed line in ${\mathbf R}^3$. Otherwise it is called non-collinear. \end{definition} \noindent Every collinear solution has zero angular momentum because ${\mathbf q}_j(t)$ is parallel with ${\mathbf q}_j^\prime(t)$ for all $t\in (t^-,t^+)$. In 1971 and 1973, Saari \cite{Sa71,Sa73} established the probability of collisions. \begin{theorem}\label{collisionprobability}The probability that a non-collinear solution ${\mathbf q}(t)$ will have a collision is zero. Every collinear solution ${\mathbf q}(t)$ has a collision. \end{theorem} With collision singularities being rare for a non-collinear $N$-Body Problem, why bother to study them? Diacu and Holmes (see p. 84 and p. 103 in \cite{DH}) argue for the study of collision singularities because without such a study, a complete understanding of the Newtonian $N$-Body Problem could not be achieved. In particular, solutions near collision singularities could behave strangely, and the probability of a solution coming close to a collision singularity is positive and thus can not be ignored. Understanding then the collision singularities enables an understanding of the near-collision solutions. \subsection{Regularization} \label{sec:4.3} Regularization is one method by which we can get an understanding of a collision singularity. To {\it regularize} a collision means to extend the solution beyond the collision through an elastic bounce without loss or gain of total energy in such a way that all of the solutions nearby have continuity with respect to initial conditions, i.e., they look like the extended collision solution for a time (see p. 104 and p. 107 in \cite{DH}). Regularization is typically done by a Levi-Civita type change of the dependent variables, and a Sundman type change of the independent variable (see \cite{Ce}), that together {\it removes} the collision singularity from the equations. We illustrate this regularization in the simplest of the $N$-Body Problems. In the Collinear $2$-Body Problem (or Col2BP for short), the positions of the two particles are the scalar quantities $q_1$ and $q_2$. If $x=q_2-q_1$ is the distance between the particle with mass $m_1$ at $q_1$ and the particle with mass $m_2$ at $q_2>q_1$, then the Col2BP takes the form \begin{equation}\label{Col2BP} x^{\prime\prime} = - \frac{m_1+m_2}{x^2}, \ x>0,\end{equation} and the total energy takes the form \begin{equation}\label{totalenergy} H = \frac{m_1m_2}{2(m_1+m_2)} (x^\prime)^2 - \frac{m_1m_2}{x}.\end{equation} As $x\to 0$ the two particles approach collision, and the total energy implies that the two particles collide with an infinite velocity, \[ (x^\prime)^2\to\infty.\] To regularize the {\it binary collision} (or total collapse) in this problem, define a new independent variable $s$ and a new dependent variable $w$ by \[ \frac{ds}{dt} = \frac{1}{x}, \ \ w^2 = x,\] where the former is the Sundman type change of the independent variable, and the latter is the Levi-Civita type change of the dependent variable. 
If $\ \dot{}= d/ds$, the second-order equation (\ref{Col2BP}) becomes \begin{equation}\label{regularizedODE} w^2\big[ 2w\ddot w - 2\dot w^2 + (m_1+m_2)\big]=0,\end{equation} and the total energy (\ref{totalenergy}) becomes \begin{equation}\label{Renergy} Hw^2 = \frac{2m_1m_2}{m_1+m_2}\dot w^2 - m_1m_2.\end{equation} As $w\to 0$, the second-order equation (\ref{regularizedODE}) makes sense (no dividing by zero), and the total energy (\ref{Renergy}) implies that \[ (\dot w)^2 \to \frac{m_1+m_2}{2},\] which is a finite nonzero velocity! The collision singularity has been regularized. The regularized nonlinear second-order equation (\ref{regularizedODE}) can actually be solved! Solving the total energy (\ref{Renergy}) for $2(\dot w)^2$ and substituting this into the second-order equation (\ref{regularizedODE}) gives \begin{equation}\label{linearODE} 2w^3\left[ \ddot w - \frac{(m_1+m_2)H}{2m_1m_2} w \right] = 0.\end{equation} This makes sense when $w=0$, i.e., the moment of collision! For negative $H$, the linear second-order equation\footnote{This is a simple harmonic oscillator for $H<0$ whose solutions are in terms of cosine and sine.} inside the square brackets in (\ref{linearODE}) solves to give a real analytic stable periodic solution $w(s)$ which experiences a collision every half period in terms of the regularized time variable $s$. The corresponding solution $x(t)$ is periodic and experiences a collision once a period in terms of the original time variable $t$. This doubling of the number of collisions per period is because the change of dependent variable $w^2=x$ has $w(s)$ ``doubling'' $x(t)$ in that $w(s)$ passes through $0$ twice a period, going from positive to negative and then negative to positive, while $x(t)$ is positive except at collision where it is zero. The binary collision singularity in the Newtonian $2$-Body Problem can be regularized in a similar but more complicated way than what was done above for the Col2BP (see \cite{Sa}). By Theorem \ref{angular}, a solution of the $2$-Body Problem with nonzero angular momentum does not experience a collision or total collapse. A nonzero angular momentum near-collision solution looks like the zero angular momentum collision solution\footnote{Binary Star Systems are known to exist in the Universe. The Newtonian $2$-Body Problem predicts stability for a Binary Star System, a collision-free solution that is bounded for all time.}. The regularized $2$-Body Problem provides good numerical estimates of the motion because there are no infinite velocities! \subsection{McGehee} \label{sec:4.4} What about regularization of a triple collision, when three of the particles meet? In 1974, McGehee \cite{McG} showed that regularization of a triple collision is in general not possible\footnote{This is achieved by ``blowing-up'' the triple collision singularity and slowing down the motion as the particles approach a triple collision. This setting does allow for good numerical estimates of near-triple collisions.}. Starting close together, two solutions that approach a near triple collision can describe {\it radically different} motions after the near triple collision. This kind of behavior is known as ``sensitive dependence on initial conditions,'' and is an antithesis of stability. Triple collisions present a numerical nightmare! By extension, collisions with four or more particles present the same nightmare! So the only regularizable collisions are those that are essentially a binary collision. 
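To see the regularization of Section \ref{sec:4.3} at work numerically, one can integrate the regularized Col2BP in the variable $s$ and recover physical time from $dt/ds = x = w^2$. The Python sketch below uses the illustrative choices $m_1=m_2=1$ and $H=-1$ (so that $\ddot w = -w$), starts at a collision with $\dot w^2 = (m_1+m_2)/2$, and integrates with a simple Runge-Kutta step; the integration passes smoothly through $w=0$, i.e., through the collisions, where in the original time $t$ the velocity $x^\prime$ would blow up.

\begin{verbatim}
import numpy as np

m1, m2, H = 1.0, 1.0, -1.0                 # illustrative values, H < 0
c = (m1 + m2) * H / (2.0 * m1 * m2)        # regularized equation: w'' = c w

def rhs(y):
    w, wdot, t = y                         # state: (w, dw/ds, t)
    return np.array([wdot, c * w, w * w])  # dt/ds = x = w^2

# Start at a collision: w = 0 with (dw/ds)^2 = (m1 + m2)/2, consistent with the
# regularized energy relation H w^2 = (2 m1 m2 / (m1 + m2)) (dw/ds)^2 - m1 m2.
y = np.array([0.0, np.sqrt((m1 + m2) / 2.0), 0.0])
ds = 1.0e-3
for _ in range(10000):                     # classical fourth-order Runge-Kutta in s
    k1 = rhs(y)
    k2 = rhs(y + 0.5 * ds * k1)
    k3 = rhs(y + 0.5 * ds * k2)
    k4 = rhs(y + ds * k3)
    y = y + (ds / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

w, wdot, t = y
print(w, w * w, t)                         # w at s = 10, x = w^2, and the recovered time t
\end{verbatim}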
\section{Results} \label{sec:5} Spectrally stable periodic solutions have been found in Newtonian $N$-Body Problems with regularizable collisions for $N\geq 3$. The three situations discussed here are the Collinear $3$-Body Problem (or Col3BP), the Collinear Symmetric $4$-Body Problem (or ColS4BP), and the Planar Pairwise Symmetric $4$-Body Problem (or PPS4BP). There are other Newtonian $N$-Body Problems in which the existence of periodic solutions with regularizable collisions has been established analytically \cite{Ya1,Ya2,Sh}, and for some of these the stability (in the sense of Definition \ref{stability}) and linear stability (as defined in Definition \ref{linearstability}) have been numerically determined \cite{BS,Wa,Ya1,Ya2}. \subsection{Col3BP} \label{sec:5.1} As a subproblem of the Newtonian $3$-Body Problem, the Col3BP requires that the three particles always lie on the same line through the origin. The positions of the three particles in the Col3BP are the scalars $q_1$, $q_2$, and $q_3$ which can be assumed to satisfy \[ q_1\leq q_2\leq q_3.\] By Theorem \ref{collisionprobability}, collisions always occur in the Col3BP. Because the three particles are collinear for all time, their angular momentum is zero, and by Theorem \ref{angular} a total collapse is possible\footnote{Initial conditions leading to total collapse in the equal mass Col3BP are easy to realize: set $q_1=-1$, $q_2=0$, and $q_3=1$ with the initial velocity of each particle set to $0$.} in the Col3BP. In 1974, Aarseth and Zare \cite{AZ} showed that any two of the three possible binary collisions in the $3$-Body Problem are regularizable\footnote{A good numerical model for the Sun-Jupiter-Shoemaker-Levy 9 or Earth-Moon-2012DA14 situation is the regularized $3$-Body Problem of Aarseth and Zare.}. In 1993, Hietarinta and Mikkola \cite{HM} used Aarseth and Zare's regularization \cite{AZ} to regularize the binary collisions $q_1=q_2$ and $q_2=q_3$ in the Col3BP. In 1956, Schubart \cite{Sc} numerically found a periodic orbit in the equal mass Col3BP of negative total energy in which the inner particle oscillates between binary collisions with the outer particles. In 1977, H\'enon \cite{He} numerically extended Schubart's periodic solution to arbitrary masses and investigated the linear stability of these solutions. In 1993, Hietarinta and Mikkola \cite{HM} also numerically investigated the linear stability of Schubart's periodic solution for arbitrary masses. Together they showed that Schubart's periodic solution is spectrally stable for certain masses, and linearly unstable for the remaining masses. Hietarinta and Mikkola \cite{HM} further numerically investigated the Poincar\'e section for Schubart's periodic solution for arbitrary masses, showing when there is stability as described in Definition \ref{stability}. In 2008, Moeckel \cite{Mo} and Venturelli \cite{Ve} separately proved the analytic existence of Schubart's solution when $m_1=m_3$ and $m_2$ is arbitrary. Only recently, in 2011, did Shibayama \cite{Sh} analytically prove the existence of Schubart's periodic solution for arbitrary masses in the Col3BP. Schubart's periodic solution for the Col3BP is also a periodic solution of the $3$-Body Problem, where in the latter the continuity with respect to initial conditions can be seen for near-collision solutions. For example, Schubart's periodic solution for the nearly equal masses \[ m_1=0.333333,\ m_2=0.333334,\ m_3=0.333333\] is spectrally stable. 
Considered in the $3$-Body Problem, Schubart's periodic solution for these mass values remains spectrally stable \cite{He}, and numerically the near-collision solutions in the Newtonian $3$-Body Problem behave like Schubart's periodic solution. It is therefore possible that in the $3$-Body Problem, there are solutions near Schubart's periodic solution that are free of collisions and bounded for all time. Imagine, as did H\'enon \cite{He}, Newton's Law of Gravity predicting a triple star system that is free of collisions and bounded for all time! \subsection{ColS4BP} \label{sec.5.2} As a subproblem of the Newtonian $4$-Body Problem, the ColS4BP requires that the four particles always lie on the same line through the origin. The positions of the four particles are the scalars $q_1$, $q_2$, $q_3$, and $q_4$ that satisfy \[ q_4=-q_1,\ q_3=-q_2,\ q_1\geq 0,\ q_2\geq 0,\] and \[ -q_1 \leq -q_2\leq 0\leq q_2\leq q_1\] with masses \[ m_1=1,\ m_2=m>0,\ m_3=m,\ m_4=1.\] The angular momentum for all solutions of the ColS4BP is zero because of the collinearity, and so by Theorem \ref{angular} a total collapse is possible. There are two kinds of non-total collapse collisions in the ColS4BP: the binary collision of the inner pair of particles of mass $m$ each, i.e., $q_2=0$, and the {\it simultaneous} binary collision of the two outer pairs of particles, i.e., $q_1=q_2>0$. In 2002 and 2006, Sweatman \cite{SW1,SW2} showed, by adapting the regularization of Aarseth and Zare \cite{AZ}, that these non-total collapse collisions in the ColS4BP are regularizable. Sweatman \cite{SW1,SW2} numerically found a Schubart-like periodic solution in the ColS4BP with negative total energy for arbitrary $m$ where the outer pairs collide in a simultaneous binary collision at one moment and then the inner pair collides at another moment. He determined numerically that this Schubart-like periodic solution is spectrally stable when \[ 0<m<2.83\ {\rm and}\ m>35.4,\] and is otherwise linearly unstable. In 2010, Bakker et al.\ \cite{BOYSR} verified Sweatman's linear stability for the Schubart-like periodic solution using a different technique. In 2011-2012, Ouyang and Yan \cite{OY2}, Shibayama \cite{Sh}, and Huang \cite{Hu} proved separately the analytic existence of the Schubart-like periodic solution in the ColS4BP. \subsection{PPS4BP} \label{sec:5.3} The PPS4BP has two particles of mass $1$ located at the planar locations \[ {\mathbf q}_1 {\rm\ and\ }{\mathbf q}_3=-{\mathbf q}_1,\] and two particles of mass $0<m\leq 1$ located at the planar locations \[ {\mathbf q}_2 {\rm\ and\ } {\mathbf q}_4=-{\mathbf q}_2.\] The four particles in the PPS4BP need not be collinear, so that the angular momentum need not be $0$. Unlike the ColS4BP, total collapse can be avoided in the PPS4BP by Theorem \ref{angular} when the angular momentum is not zero. Like the ColS4BP, there are two kinds of non-total collapse collisions in the PPS4BP: simultaneous binary collisions when ${\mathbf q}_1={\mathbf q}_2$ and ${\mathbf q}_3={\mathbf q}_4$, or when ${\mathbf q}_1 = {\mathbf q}_4$ and ${\mathbf q}_2={\mathbf q}_3$; and binary collisions when ${\mathbf q}_1=0$ or when ${\mathbf q}_2=0$. In 2010, Sivasankaran, Steves, and Sweatman \cite{SSS} showed that these non-total collapse collisions in the PPS4BP are regularizable. The Schubart-like periodic solution in the ColS4BP is also a periodic solution of the PPS4BP, where in the latter the continuity with respect to initial conditions can be observed for near-collision solutions. 
However, as shown by Sweatman \cite{SW2}, in the PPS4BP the Schubart-like periodic solution of the ColS4BP becomes linearly unstable for \[ 0<m<0.406, {\rm \ and\ } 0.569<m<1.02\] as well as $2.83<m<35.4$, while it remains spectrally stable for \[ 0.407<m<0.567{\rm\ and\ } m>35.4.\] By long-term numerical integrations for the Schubart-like periodic solution as a solution of the PPS4BP, Sweatman \cite{SW2} showed that stability in the sense of Definition \ref{stability} is possible when $0.407<m<0.567$ and when $m>35.4$. It is therefore possible for these values of $m$ that near Schubart's periodic solution there are collision-free solutions of the PPS4BP that are bounded for all time. In 2011, adapting the regularization of Aarseth and Zare \cite{AZ} to simultaneous binary collisions, Bakker, Ouyang, Yan, and Simmons \cite{BOYS} proved the analytic existence of a non-collinear periodic solution in the equal mass PPS4BP. This periodic solution has zero angular momentum, negative total energy, and alternates between a simultaneous binary collision of the symmetric pairs in the first and third quadrant where ${\mathbf q}_1={\mathbf q}_2$ and ${\mathbf q}_3={\mathbf q}_4$, and the simultaneous binary collision of the symmetric pairs in the second and fourth quadrants where ${\mathbf q}_1={\mathbf q}_4$ and ${\mathbf q}_2={\mathbf q}_3$. Bakker, Ouyang, Yan and Simmons \cite{BOYS} then numerically extended this non-collinear periodic simultaneous binary collision solution to unequal masses $0<m<1$. In 2012, Bakker, Mancuso, and Simmons \cite{BMS} have numerically determined that the non-collinear periodic simultaneous binary collision solution is spectrally stable when \[ 0.199<m<0.264{\rm\ and\ } 0.538<m\leq 1\] and is linearly unstable for the remaining values of $m$. Long-term numerical integrations of the regularized equations done by Bakker, Ouyang, Yan, and Simmons \cite{BOYS} suggest instability when $0.199<m<0.264$ and stability when $0.538<m\leq 1$ in the sense of Definition \ref{stability}. For these latter values of $m$ could the near-collision solutions in the PPS4BP that look like the non-collinear periodic simultaneous binary collision solution be collision-free and bounded for all time? \section{Future Work?} \label{sec:6} Both the ColS4BP and the PPS4BP are subproblems of the Newtonian $4$-Body Problem, where the non-total collapse collisions in the former two problems are regularizable. What is not known is how to, if possible, regularize binary collisions and simultaneous binary collisions in the Newtonian $4$-Body Problem within one coordinate system \footnote{During the Special Session on Celestial Mechanics at the American Mathematical Society's Sectional Conference in April 2011 at the College of the Holy Cross, Worcester, Massachusetts, Rick Moeckel put forth the problem of finding an elegant coordinate system for the Newtonian $4$-Body Problem in which regularizes binary collisions and simultaneous binary collisions and blows up all triple collisions and total collapse. The regularization of binary collisions and simultaneous binary collisions can be achieved within multiple coordinate systems, with one coordinate system for each regularizable collision.}. If such a regularization is possible, then all of the periodic solutions thus known in the ColS4BP and PPS4BP would also be periodic solutions of the Newtonian $4$-Body Problem and the investigation of their stability and linear stability in the Newtonian $4$-Body Problem could begin. 
With more possible perturbations of initial conditions available in the Newtonian $4$-Body Problem than in the PPS4BP, a loss of spectral stability could indeed happen, as it did in going from the ColS4BP to the PPS4BP. But some of the spectral stability might survive the passage from the PPS4BP to the Newtonian $4$-Body Problem, giving the possibility of near-collision solutions that are collision-free and bounded for all time. \end{document}
\begin{document} \title{Diameter two properties and the Radon-Nikod\'ym property in Orlicz spaces} \keywords{Banach function space, Orlicz space, Daugavet property, (local, strong) diameter two property, Radon-Nikod\'ym property, octahedral norm, uniformly non-$\ell_1^2$ points} \subjclass[2010]{46B20, 46E30, 47B38} \author{Anna Kami\'{n}ska} \address{Department of Mathematical Sciences, The University of Memphis, TN 38152-3240} \email{[email protected]} \author{Han Ju Lee} \address{Department of Mathematics Education, Dongguk University - Seoul, 04620 (Seoul), Republic of Korea} \email{[email protected]} \author{Hyung Joon Tag} \address{Department of Mathematical Sciences, The University of Memphis, TN 38152-3240} \email{[email protected]} \date{\today} \thanks{ The second author was supported by Basic Science Research Program through the National Research Foundation of Korea(NRF) funded by the Ministry of Education, Science and Technology [NRF-2020R1A2C1A01010377].} \begin{abstract} Some necessary and sufficient conditions are found for Banach function lattices to have the Radon-Nikod\'ym property. Consequently it is shown that an Orlicz function space $L_\varphi$ over a non-atomic $\sigma$-finite measure space $(\Omega, \Sigma,\mu)$, not necessarily separable, has the Radon-Nikod\'ym property if and only if $\varphi$ is an $N$-function at infinity and satisfies the appropriate $\Delta_2$ condition. For an Orlicz sequence space $\ell_\varphi$, it has the Radon-Nikod\'ym property if and only if $\varphi$ satisfies the $\Delta_2^0$ condition. In the second part a relationship between uniformly $\ell_1^2$ points of the unit sphere of a Banach space and the diameter of the slices are studied. Using these results, a quick proof is given that an Orlicz space $L_\varphi$ has the Daugavet property only if $\varphi$ is linear, so when $L_\varphi$ is isometric to $L_1$. Another consequence is that Orlicz spaces equipped with the Orlicz norm generated by $N$-functions never have the local diameter two property, while it is well-known that when equipped with the Luxemburg norm, it may have that property. Finally, it is shown that the local diameter two property, the diameter two property, and the strong diameter two property are equivalent in Orlicz function and sequence spaces with the Luxemburg norm under appropriate conditions on $\varphi$. \end{abstract} \maketitle \begin{center}{Dedicated to the memory of Professor W.A.J. Luxemburg}\end{center} \section{Introduction} The objective of this paper is to study geometrical properties in real Banach spaces, in particular in Banach function spaces and Orlicz spaces. A Banach space $(X, \|\cdot\|)$ is said to have the {\it Daugavet property} if every rank one operator $T: X\to X$ satisfies the equation \[ \|I + T\| = 1 + \|T\|. \] It is well-known that $C[0,1]$ has the Daugavet property. Also, a rearrangement invariant space $X$ over a finite non-atomic measure space with the Fatou property satisfies the Daugavet property if the space is isometrically isomorphic to either $L_1$ or $L_{\infty}$ \cite{AKM, AKM2}. If a rearrangement invariant space $X$ over an infinite non-atomic measure space is uniformly monotone, then it is isometrically isomorphic to $L_1$ \cite{AKM2}. Furthermore, the only separable rearrangement invariant space over $[0,1]$ with the Daugavet property is $L_1[0,1]$ with the standard $L_1$-norm \cite{KMMW}. In \cite{KK}, a characterization of Musielak-Orlicz spaces with the Daugavet property has been provided. 
We refer to \cite{KSSW, W} for further information on the Daugavet property. Let $S_X$ and $B_X$ be the unit sphere and the unit ball of a Banach space $X$ and let $X^*$ be the dual space of $X$. A slice of $B_X$ determined by $x^*\in S_{X^*}$ and $\epsilon>0$ is defined by the set \[ S(x^*; \epsilon) = \{x\in B_X : x^*(x) > 1 - \epsilon\}. \] Analogously, for $x\in S_X$ and $\epsilon >0$, a weak$^*$-slice $S(x, \epsilon)$ of $B_{X^*}$ is defined by the set \[ S(x; \epsilon) = \{x^*\in B_{X^*}: x^*(x) > 1 - \epsilon\}. \] There are several geometrical properties related to slices and weak$^*$-slices. We say that $X$ has \begin{enumerate}[{\rm(i)}] \item the {\it local diameter two property} (LD2P) if every slice of $B_X$ has diameter two. \item the {\it diameter two property} (D2P) if every non-empty relatively weakly open subset of $B_X$ has diameter two. \item the {\it strong diameter two property} (SD2P) if every finite convex combination of slices of $B_X$ has diameter two. \item the {\it weak$^*$-local diameter two property} (weak$^{*}$-LD2P) if every weak$^*$-slice of $B_{X^*}$ has diameter two. \item the {\it Radon-Nikod\'ym property} (RNP) if there exist slices of $B_X$ with arbitrarily small diameter. \end{enumerate} A few remarks are in order now. Condition $\rm(v)$ is a geometrical interpretation of the classical Radon-Nikod\'ym property \cite[Theorem 3, p. 202]{DU}. By the definitions, we see that properties (i), (ii) and (iii) are at the opposite end of the spectrum from $\rm(v)$. It is clear that $\rm(ii) \implies \rm(i)$, and the implication $\rm(iii) \implies \rm(ii)$ results from \cite[Lemma II.1 p. 26]{GGMS}. It is also well-known that these three properties are not equivalent in general \cite{ALN, BLR}; see also the introductory remarks in \cite{HLP}. A Banach space $X$ with the Daugavet property satisfies the SD2P \cite[Theorem 4.4]{ALN}. After Preliminaries, in Section 3, we show first that if a Banach function space $X$ over a $\sigma$-finite measure space has the RNP then it must be order continuous. The opposite implication is not true in general. However, we prove it under the additional assumptions that $X$ satisfies the Fatou property and that the subspace of order continuous elements and the closure of the simple functions coincide in its K\"othe dual space $X'$. We will provide some examples to show that this assumption is necessary in order to obtain the converse. Applying the obtained results further, we conclude the section with a necessary and sufficient condition for the RNP in Orlicz spaces. There is a well-known criterion for the RNP in Orlicz spaces $L_\varphi$ over a separable complete non-atomic measure space $(\Omega, \Sigma, \mu)$, generated by an $N$-function $\varphi$. Here we drop the assumption of separability of the measure space and show that a necessary and sufficient condition for the RNP is that $\varphi$ satisfies the appropriate $\Delta_2$ condition and that $\varphi$ is an $N$-function at infinity. In sequence spaces $\ell_\varphi$ we drop the assumption that $\varphi$ is an $N$-function. In Section 4 the Daugavet and various diameter two properties are studied. In the first main theorem we give a local characterization of uniformly $\ell_1^2$ points $x\in S_X$, where $X$ is a Banach space, and the diameter of the weak$^*$-slice $S(x;\epsilon)$ generated by $x$. Analogously we describe a relationship between $x^*\in S_{X^*}$ and the diameter of the slice $S(x^*,\epsilon)$ given by $x^*$. 
Consequently, we obtain a description of the global properties of $X$ or $X^*$ being locally octahedral and of $X^*$ or $X$, respectively, having the (weak$^*$) local diameter two property. We also obtain relationships among the Daugavet property of $X$, the D2P of $X$, and the weak$^*$-LD2P of $X^*$. In Theorem \ref{th:KamKub} we provide sufficient conditions for the existence of uniformly non-$\ell_1^2$ points in $L_\varphi$ and $\ell_\varphi$, both equipped with the Luxemburg norm. Combining this with the previous general facts, we recover instantly that the only Orlicz space $L_\varphi$ generated by a finite function $\varphi$ with the Daugavet property must coincide with $L_1$ as sets, with equivalent norms. The other consequence is that a large class of the Orlicz spaces $L_\varphi^0$ and $\ell_\varphi^0$ equipped with the Orlicz norm does not have the LD2P, in striking opposition to the same Orlicz spaces equipped with the Luxemburg norm. In the final result we show that the LD2P, D2P, SD2P and the appropriate condition $\Delta_2$ are equivalent in $L_\varphi$ and $\ell_\varphi$. \section{Preliminaries} Let $(\Omega, \Sigma, \mu)$ be a measure space with a $\sigma$-finite complete measure $\mu$ and let $L^0(\Omega)$ be the set of all equivalence classes of $\mu$-measurable functions $f:\Omega\to \mathbb{R}$ modulo a.e. equivalence. We denote by $L^0=L^0(\Omega)$ if $\Omega$ is a non-atomic measure space and by $\ell^0=L^0(\mathbb{N})$ if $\Omega = \mathbb{N}$ with the counting measure $\mu$. That is, $\ell^0$ consists of all real-valued sequences $x = \{x(n)\}$. A Banach space $(X, \|\cdot\|) \subset L^0(\Omega)$ is called a {\it Banach function lattice} if for $f\in L^0(\Omega)$ and $g \in X$, $0 \leq f \leq g$ implies $f \in X$ and $\|f\| \leq \|g\|$. We call $X$ a {\it Banach function space} if $\Omega$ is a non-atomic measure space and a {\it Banach sequence space} if $\Omega = \mathbb{N}$ with the counting measure $\mu$. A Banach function lattice $(X, \| \cdot \|)$ is said to have the {\it Fatou property} if whenever a sequence $(f_n) \subset X$ satisfies $\sup_n \|f_n\| < \infty$ and $f_n \uparrow f\in L^0(\Omega)$ a.e., we have $f \in X$ and $\|f_n\| \uparrow \|f\|$. An element $f\in X$ is said to be {\it order continuous} if for every $(f_n) \subset L^0(\Omega)$ such that $0\le f_n \le |f|$, $f_n \downarrow 0$ a.e. implies $\|f_n\| \downarrow 0$. The set of all order continuous elements in $X$ is denoted by $X_a$, and the closure in $X$ of all simple functions belonging to $X$ is denoted by $X_b$. If $X=X_a$ then we say that $X$ is order continuous. In this paper, a simple function is a function with finitely many values whose support has finite measure. It is well-known that $X_a \subset X_b$ \cite[Theorem 3.11, p. 18]{BS}, \cite{Lux}. The {\it K\"{o}the dual space}, denoted by $X^{\prime}$, of a Banach function lattice $X$ is the set of all $x \in L^0(\Omega)$ such that \[ \|x\|_{X^{\prime}}= \sup\left\{\int_\Omega xy : \|y\| \leq 1\right\} < \infty. \] The space $X^{\prime}$, equipped with the norm $\| \cdot \|_{X^{\prime}}$, is a Banach function lattice satisfying the Fatou property. It is well known that $X = X''$ if and only if $X$ satisfies the Fatou property \cite[Theorem 1, Ch. 15, p. 71]{Z}. We say $f,g \in X$ are {\it equimeasurable}, denoted by $f \sim g$, if $\mu\{t: |f(t)| > \lambda\} = \mu\{t: |g(t)| > \lambda\}$ for every $\lambda > 0$. A Banach function lattice $(X, \|\cdot\|)$ is said to be {\it rearrangement invariant} (r.i.) if $f \sim g$ ($f,g \in X$) implies $\|f\| = \|g\|$. 
The Lebesgue, Orlicz and Lorentz spaces are classical examples of r.i. spaces. The fundamental function of an r.i. space $X$ over a non-atomic measure space is defined by $\phi_X(t) = \|\chi_{E}\|_X$, $t\ge 0$, where $E\in \Sigma$ is such that $\mu(E) = t$. It is known that $\lim_{t \rightarrow 0^+} \phi_X(t) = 0$ if and only if $X_a = X_b$ \cite[Theorem 5.5, p. 67]{BS}. For the purely atomic case where each atom has the same measure, $X_a = X_b$ is always true \cite[Theorem 5.4, p. 67]{BS}. Recall that a measure space $(\Omega, \Sigma, \mu)$ is said to be {\it separable} if there is a countable family $\mathcal{T}$ of measurable subsets such that for every $\epsilon>0$ and for each $E\in \Sigma$ of finite measure there is $A\in \mathcal{T}$ such that $\mu(A\Delta E)<\epsilon$, where $A\Delta E$ is the symmetric difference of $A$ and $E$. It is easy to check that if $\Sigma$ is a $\sigma$-algebra generated by a countable family of sets then $(\Omega, \Sigma, \mu)$ is separable \cite{Hal}. A function $\varphi:\mathbb{R}_+\to [0,\infty]$ is called an {\it Orlicz function} if $\varphi$ is convex, $\varphi(0)=0$, $\varphi$ is left-continuous, and $\varphi$ is neither identically zero nor identically infinite on $(0,\infty)$. The complementary function $\varphi_{\ast}$ to $\varphi$ is defined by \[ \varphi_{\ast}(u) = \sup_{v \geq 0}\{uv- \varphi(v) \}, \ \ \ u\ge 0. \] The complementary function $\varphi_*$ is also an Orlicz function and $\varphi_{**} = \varphi$. An Orlicz function $\varphi$ is an {\it $N$-function at zero} if $\lim_{u \rightarrow 0^+} \frac{\varphi(u)}{u} = 0$ and {\it at infinity} if $\lim_{u \rightarrow \infty} \frac{\varphi(u)}{u} = \infty$. If $\varphi$ is an $N$-function at both zero and infinity then we say that $\varphi$ is an {\it $N$-function}. A function $\varphi$ is an $N$-function if and only if $\varphi_*$ is an $N$-function. An Orlicz function $\varphi$ satisfies the {\it $\Delta_2$ condition} if there exists $K>2$ such that $\varphi(2u) \leq K \varphi(u)$ for all $u\geq 0$, the $\Delta_2^\infty$ condition if there exist $K>2$ and $u_0\ge 0$ such that $\varphi(u_0) < \infty$ and for all $u\geq u_0$, $\varphi(2u) \leq K \varphi(u)$, and the $\Delta_2^0$ condition if there exist $K>2$ and $u_0$ such that $\infty > \varphi(u_0) > 0$ and for all $0\le u \leq u_0$, $\varphi(2u) \leq K \varphi(u)$. When we use the term {\it the appropriate $\Delta_2$ condition}, it means $\Delta_2$ in the case of a non-atomic measure $\mu$ with $\mu(\Omega) = \infty$, $\Delta_2^\infty$ for a non-atomic measure $\mu$ with $\mu(\Omega) < \infty$, and $\Delta_2^0$ for $\Omega = \mathbb{N}$ with the counting measure, i.e. $\mu\{n\} =1$ for every $n\in \mathbb{N}$. The Orlicz space $L_\varphi(\Omega)$ is the collection of all $f\in L^0(\Omega)$ such that for some $\lambda > 0$, \[ I_\varphi(\lambda f):= \int_\Omega\varphi(\lambda |f(t)|)\,d\mu(t) = \int_\Omega\varphi(\lambda |f|)\,d\mu < \infty. \] The Orlicz spaces are equipped with either the Luxemburg norm \[ \|f\|_\varphi= \inf\left\{\epsilon > 0: I_\varphi\left(\frac{f}{\epsilon}\right) \le 1\right\}, \] or the Orlicz (or Amemiya) norm \[ \|f\|_\varphi^0= \sup\left\{\int_\Omega fg : I_{\varphi_*} (g)\le 1\right\} = \inf_{k>0} \frac{1}{k}(1 + I_\varphi(kf)). \] It is well-known that $\|f\|_\varphi \le \|f\|_\varphi^0 \le 2\|f\|_\varphi$ for $f\in L_\varphi(\Omega)$. By $L_\varphi(\Omega)$ we denote an Orlicz space equipped with the Luxemburg norm and by $L_\varphi^0(\Omega)$ an Orlicz space equipped with the Orlicz norm.
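For orientation, we recall a standard example; it is only an illustration of the above definitions and is not used in the proofs below. For $1<p<\infty$ let $\varphi_p(u) = u^p/p$. Then $\varphi_p$ is an $N$-function, its complementary function is $(\varphi_p)_*(v) = v^q/q$ with $\frac{1}{p}+\frac{1}{q}=1$, and $\varphi_p$ satisfies the conditions $\Delta_2$, $\Delta_2^\infty$ and $\Delta_2^0$, since $\varphi_p(2u)=2^p\varphi_p(u)$ and $2^p>2$. For $\varphi(u)=u^p$ the Luxemburg norm is the usual norm of $L_p$: indeed $I_\varphi(f/\epsilon)=\|f\|_p^p/\epsilon^p\le 1$ precisely when $\epsilon\ge \|f\|_p$, so $\|f\|_\varphi=\|f\|_p$. On the other hand, $\varphi(u)=e^u-1$ is an $N$-function at infinity which does not satisfy the $\Delta_2^\infty$ condition, since $\varphi(2u)/\varphi(u)\to\infty$ as $u\to\infty$.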
The Orlicz spaces with either norm are rearrangement invariant spaces and have the Fatou property. If $\varphi$ is finite, i.e. $\varphi(u)< \infty$ for all $u>0$, then $(L_\varphi(\Omega))_a \ne \{0\}$ and it contains all simple functions. Therefore \[ (L_\varphi(\Omega))_a = (L_\varphi(\Omega))_b = \{x\in L^0: I_\varphi(\lambda x) < \infty \ \ \text{for all} \ \ \lambda > 0\}. \] It is also well-known that $L_\varphi(\Omega) = (L_\varphi(\Omega))_a$ if and only if $\varphi$ satisfies the appropriate $\Delta_2$ condition. The K\"othe duals of $L_\varphi(\Omega)$ and $L_\varphi^0(\Omega)$ are described by Orlicz spaces induced by $\varphi_*$ \cite{Chen, BS}. In fact, \[ (L_\varphi(\Omega))' = L_{\varphi_*}^0(\Omega)\ \ \text{ and} \ \ \ (L_\varphi^0(\Omega))' = L_{\varphi_*}(\Omega). \] In the case of a non-atomic measure (resp., the counting measure), we use the symbols $L_\varphi$ and $L_\varphi^0$ (resp., $\ell_\varphi$ and $\ell_{\varphi}^0$) for Orlicz spaces equipped with the Luxemburg and the Orlicz norm, respectively. For complete information on Orlicz spaces we refer the reader to the monographs \cite{BS, Chen, KR, LT1, LT2, Lux}. \section{The Radon-Nikod\'ym property} We start with a general result on the Radon-Nikod\'ym property in Banach function spaces. \begin{Theorem}\label{th:RNKothe} Let $X$ be a Banach function space over a complete $\sigma$-finite measure space $(\Omega, \Sigma, \mu)$. \begin{itemize} \item[(i)] If $X$ has the RNP then $X$ is order continuous. \item[(ii)] Assume that $X$ has the Fatou property and $(X')_a = (X')_b$. Then, if $X$ is order continuous, $X$ has the RNP. \end{itemize} \end{Theorem} \begin{proof} (i) If $X$ is not order continuous then it contains an order isomorphic copy of $\ell_\infty$ \cite[Theorem 14.4, p.220]{AB}. Since $\ell_\infty$ does not have the RNP, $X$ does not have this property either. (ii) Suppose that $(X')_a = (X')_b$ and that $X$ is order continuous with the Fatou property. It is well-known that every separable dual space possesses the RNP \cite{DU}. Since $(X')_a = (X')_b$, we have $((X')_a)' = ((X')_b)' = (X')' = X''$, and $((X')_a)^*\simeq ((X')_a)'$ by Corollary 1.4.2 in \cite{BS}. By the Fatou property, $X'' = X$. Therefore \[ ((X')_a)' \simeq ((X')_a)^* \simeq X ''=X. \] Hence $X$ is the dual space of $(X')_a$. If the measure space $(\Omega, \Sigma,\mu)$ is separable, then the order continuous space $X$ is also separable by Theorem 2.5.5 in \cite{BS}. Thus in this case, $X$ has the RNP. Now suppose that $(\Omega, \Sigma,\mu)$ is not separable; we show that $X$ still has the RNP. We will use the fact that a Banach space $X$ satisfies the RNP if and only if every separable closed subspace $Y\subset X$ has the RNP \cite[Theorem 2, p. 81]{DU}. Since $X$ is order continuous, $X=X_a=X_b$. Let $Y\subset X$ be a closed separable subspace of $X$. Then there exists a countable dense set $\mathcal{Y} \subset Y$. For every $y\in \mathcal{Y} \subset X=X_b$, there exists a sequence of simple functions $(y_n) \subset X$ with supports of finite measure and such that $\|y-y_n\|_X \to 0$. Each $y_n$ can be expressed as $y_n = \sum_{i=1}^{m_n} a_i^{(n)} \chi_{A_i^{(n)}}$, where $a_i^{(n)}\in \mathbb{R}$, $A_i^{(n)} \in \Sigma$ with $\mu(A_i^{(n)}) < \infty$, so $y \in \overline{span} \{\chi_{A_i^{(n)}}, i=1,\dots,m_n, \ n\in \mathbb{N}\}$. Letting $\mathcal{A}_y = \{A_i^{(n)}: i=1,\dots,m_n, n\in\mathbb{N}\}$ and $\mathcal{A} = \cup_{ y\in \mathcal{Y}}\mathcal{A}_y$, the family $\mathcal{A}$ is countable.
For convenience, let $\mathcal{A} = \{E_i: i\in \mathbb{N}\}$. For each $i\in \mathbb{N}$ we have $\mu(E_i) < \infty$. Then we have \[ Y = \overline{\mathcal{Y}}\subset \overline{span} \{\chi_{E_i}, \ E_i \in \mathcal{A}\} \subset X. \] Let $\widetilde\Omega = \cup_{i=1}^\infty E_i$, $\sigma(\mathcal{A})$ be the smallest $\sigma$-algebra of $\Omega$ containing $\mathcal{A}$, $\widetilde{\Sigma} = \{\widetilde{\Omega} \cap E: E \in \sigma(\mathcal{A})\}$ and $\widetilde\mu = \mu|_{\widetilde\Sigma}$ the measure $\mu$ restricted to $\widetilde{\Sigma}$. In fact, it is easy to show that $\widetilde{\Sigma} = \sigma(\mathcal{A})$. Hence $\widetilde{\Sigma}$ is generated by a countable set, namely $\mathcal{A}$, so the measure space $(\widetilde\Omega, \widetilde\Sigma,\widetilde\mu)$ is separable \cite[Theorem B, p. 168]{Hal}. Now define the set \[ \widetilde X = \{x\chi_{\widetilde\Omega}: x\in X, \ x \ \text{is} \ \widetilde{\Sigma} - \text{measurable}\}. \] It is straightforward to check that $\widetilde X$ is a closed subspace of $X$ which is itself an order continuous Banach function space on $(\widetilde\Omega,\widetilde\Sigma, \widetilde\mu)$ with the Fatou property. So $\widetilde X$ is separable. The K\"othe dual of $\widetilde{X}$ is \[ \widetilde X' := (\widetilde X)' = \{y\chi_{\widetilde\Omega}: y\in X', \ y \ \text{is} \ \widetilde{\Sigma} - \text{measurable}\}. \] Clearly $\widetilde X' \subset L^0(\widetilde\Omega, \widetilde\Sigma, \widetilde\mu)$. From the assumption we have $(\widetilde X')_a = (\widetilde X')_b$. Hence $\widetilde X = \widetilde{X}'' \simeq ((\widetilde{X}')_a)^*$ by Corollary 1.4.2 in \cite{BS} again. Therefore, $\widetilde X$ is a separable dual space such that $Y \subset \overline{span} \{\chi_{E_i}, \ E_i\in \mathcal{A}\}\subset \widetilde X$, which implies that $\widetilde X$, and hence $Y$, has the RNP. Since the choice of $Y$ was arbitrary, $X$ has the RNP. \end{proof} \begin{Remark} $(1)$ The Fatou property in (ii) is a necessary assumption. For example, take $X=c_0$. This space does not have the RNP \cite[p. 61]{DU} and clearly does not satisfy the Fatou property. However, since $(c_0)'= \ell_1$ and $\ell_1$ is order continuous, we have $((c_0)')_a = (\ell_1)_a = (\ell_1)_b = ((c_0)')_b$, which is the second assumption of the theorem. $(2)$ The assumption $(X')_a = (X')_b$ in (ii) is also necessary. Consider $X=L_1[0,1]$, which is clearly order continuous. Moreover $(X')_a = (L_\infty[0,1])_a = \{0\}$ and $(X')_b = (L_\infty[0,1])_b = L_\infty[0,1]$. Hence $(X')_a \ne (X')_b$, and it is well-known that $L_1[0,1]$ does not have the RNP \cite[p. 60]{DU}. \end{Remark} \begin{Proposition}\label{pro1} Let $\mu$ be a non-atomic measure and $\varphi$ be a finite Orlicz function. If $\varphi$ is not an $N$-function at infinity, then $L_\varphi$ contains a subspace isomorphic to $L_1[0,1]$. \end{Proposition} \begin{proof} Suppose $\varphi$ is not an $N$-function at infinity. We will show that given $A\in\Sigma$ with $0<\mu(A) < \infty$, the space $L_\varphi(A) = \{x\chi_A: x\in L_\varphi\}$ is equal to $L_1(A)$ with equivalent norms. By the assumption $\lim_{u\to\infty} \varphi(u)/u =K< \infty$, and by the fact that the function $\varphi(u)/u$ is increasing, there exist $M> 0$ and $u_0 >0$ such that \begin{equation}\label{eq:31} \varphi(u) \le Ku \ \ \text{for} \ \ u\ge 0, \ \ \ \text{and} \ \ \ \varphi(u)\ge M u \ \ \ \text{for} \ \ u \geq u_0. \end{equation} Let $f\in L_\varphi(A)$ with $\|f\|_\varphi = 1$. Then $\supp f\subset A$ and $I_\varphi(f) \le 1$.
Set $A_1 = \{t\in A: |f(t)| \le u_0\}$. Thus in view of the second part of inequality (\ref{eq:31}) we get \[ \|f\|_1 = \int_{A_1} |f| \, d\mu + \int_{A\setminus A_1} |f|\, d\mu \le u_0 \mu(A) + \frac1M I_\varphi(f) \le C, \] where $C = u_0 \mu(A) + \frac1M.$ Therefore for any $f$ from $L_\varphi(A)$, $\|f\|_1 \le C \|f\|_\varphi$. On the other hand, by the first part of inequality (\ref{eq:31}), for any $E\subset A$ we have \[ \int_A\varphi\left(\frac{\chi_E}{K\mu(E)}\right)\, d\mu = \int_E\varphi\left(\frac{1}{K\mu(E)}\right) \, d\mu \le 1. \] It follows that $\|\chi_E\|_\varphi \le K\mu(E) =K \|\chi_E\|_1$. Hence for any simple function $x = \sum_{i=1}^n a_i \chi_{E_i}\in L_\varphi(A)$ with $E_i \cap E_j = \emptyset$ for $i\ne j$, \[ \|x\|_\varphi \le \sum_{i=1}^n |a_i| \|\chi_{E_i}\|_\varphi \le K \sum_{i=1}^n |a_i| \mu(E_i) = K \|x\|_1. \] Now by the Fatou property of $L_1(A)$ and $L_\varphi(A)$, $\|x\|_\varphi \le K \|x\|_1$ for every $x \in L_1(A)$. Hence $L_\varphi(A) = L_1(A)$ with equivalent norms, and the proof is completed since $L_1(A)$ contains a subspace isomorphic to $L_1[0,1]$ (see \cite[p. 127, Theorem 9 (1)]{Lac}). \end{proof} Recall that an Orlicz function is said to be finite if it takes only finite values. \begin{Lemma}\label{lem:finite} \rm{(a)} A finite Orlicz function $\varphi$ is an $N$-function at infinity if and only if $\varphi_*$ is finite. \rm{(b)} Let $\mu$ be a non-atomic measure. If $\varphi$ is a finite Orlicz function then $\|\chi_A\|_\varphi = 1/\varphi^{-1}(1/t)$, where $t>0$, $\mu(A) = t$. Consequently \[ \lim_{t\to 0+} \phi_{L_\varphi}(t) = \lim_{t\to 0+} 1/\varphi^{-1}(1/t) = 0. \] \end{Lemma} \begin{proof} (a) Suppose $\varphi$ is not an $N$-function at infinity. Then there exists $K>0$ such that for every $u >0$, $\varphi(u) \leq Ku$. Hence \[ \varphi_*(v) = \sup_{u >0}\{uv - \varphi(u)\} \geq \sup_{u >0}\{(v-K)u\}. \] Therefore if $v > K$ then $\varphi_*(v) =\sup_{u>0}\{(v-K)u\} = \infty$. Conversely, suppose there exists $K > 0$ such that for every $v > K$, $\varphi_*(v) = \infty$. Then \[ \varphi(u) = \sup\{uv - \varphi_*(v) : v \in (0,K)\}. \] By $\frac{\varphi(u)}{u} = \sup\{v - \frac{\varphi_*(v)}{u}: v \in (0, K)\}$, we have $\lim_{u \rightarrow \infty} \frac{\varphi(u)}{u} \leq K < \infty$, which shows that $\varphi$ is not an $N$-function at infinity. (b) Let $a_\varphi=\sup\{t: \varphi(t)=0\}$, and let $A\in \Sigma$, $\mu(A) = t$, $t > 0$. Then $I_\varphi(\chi_A/\epsilon) = 0$ if $\epsilon \ge 1/a_\varphi$, and $I_\varphi(\chi_A/\epsilon) = \varphi(1/\epsilon) t$ if $ \epsilon < 1/a_\varphi$. By the latter formula, solving $I_\varphi(\chi_A/\epsilon) = 1$ we get that $\|\chi_A\|_\varphi = \epsilon = 1/\varphi^{-1}(1/t)$. Clearly for $t\to 0+$ we get that $1/\varphi^{-1}(1/t) \to 0$. \end{proof} The next result provides a criterion for the Radon-Nikod\'ym property of Orlicz spaces over non-atomic measure spaces. We do not need the assumption of separability of the measure space (cf. \cite[Theorem 3.32]{Chen}). \begin{Theorem}\label{th:OrRN-funct} Let $\mu$ be a complete $\sigma$-finite, non-atomic measure on $\Sigma$ and $\varphi$ be a finite Orlicz function. Then the Orlicz spaces $L_\varphi$ {\rm (}and $L_\varphi^0${\rm )} over $(\Omega, \Sigma, \mu)$ have the Radon-Nikod\'ym property if and only if $\varphi$ is an $N$-function at infinity and satisfies the appropriate $\Delta_2$ condition. \end{Theorem} \begin{proof} Since the Luxemburg and Orlicz norms are equivalent we consider only $L_\varphi$ equipped with the Luxemburg norm.
By the assumption that $\varphi$ is an $N$-function at infinity and Lemma \ref{lem:finite}(a) we get that $\varphi_*$ is finite on $(0,\infty)$. Applying now Lemma \ref{lem:finite}(b) to the function $\varphi_*$ we get that $\phi_{L_{\varphi_*}}(t) \to 0$ if $t\to 0+$. Hence, since the Luxemburg and Orlicz norms are equivalent, $\lim_{t\to 0+} \phi_{L^0_{\varphi_*}}(t) = 0$ as well. Applying now Theorem 2.5.5 in \cite{BS} we get $(L_{\varphi_*}^0)_a = (L_{\varphi_*}^0)_b$ and in view of $(L_\varphi)' = L_{\varphi_*}^0$ \cite[Corollary 8.15, p. 275]{BS} \cite{KR}, we have $((L_\varphi)')_a = ((L_\varphi)')_b$. It is well-known that $L_\varphi$ has the Fatou property and that $L_{\varphi}$ is order continuous if and only if $\varphi$ satisfies the appropriate $\Delta_2$ condition. Therefore, by Theorem \ref{th:RNKothe}(ii) the Orlicz space $L_\varphi$ has the RNP. For the converse, assume that $L_\varphi$ has the RNP. Since $L_1[0,1]$ does not have the RNP, $\varphi$ needs to be an $N$-function at infinity by Proposition~\ref{pro1}. If $\varphi$ does not satisfy the appropriate $\Delta_2$ condition, then $L_\varphi$ is not order continuous, and by Theorem \ref{th:RNKothe}(i) it does not have the RNP. \end{proof} By Theorem 2.5.4 in \cite{BS}, $X_a = X_b$ holds for every rearrangement invariant sequence space $X$. We thus obtain a characterization of the RNP in Orlicz sequence spaces $\ell_\varphi$ as a consequence of Theorem \ref{th:RNKothe}. This result is well-known for $\varphi$ being an $N$-function \cite[Theorem 3.32]{Chen}. \begin{Theorem} \label{th:RNP-ORseq} Let $\varphi$ be a finite Orlicz function. An Orlicz sequence space $\ell_{\varphi}$ has the Radon-Nikod\'ym property if and only if $\varphi$ satisfies the $\Delta_2^0$ condition. \end{Theorem} \begin{proof} Since any Orlicz sequence space is an r.i. space with the Fatou property, we always have $((\ell_{\varphi})')_a = (\ell_{\varphi_*}^0)_a = (\ell_{\varphi_*}^0)_b = ((\ell_{\varphi})')_b$. Moreover it is well-known that $\ell_{\varphi}$ is order continuous if and only if $\varphi$ satisfies the $\Delta_2^0$ condition \cite[Proposition 4.a.4]{LT1}. Hence, if $\varphi$ satisfies the $\Delta_2^0$ condition, $\ell_{\varphi}$ has the RNP by Theorem \ref{th:RNKothe}. Conversely, suppose that $\ell_{\varphi}$ has the RNP. Then $\ell_{\varphi}$ is order continuous by Theorem \ref{th:RNKothe}. This implies that $\varphi$ satisfies the $\Delta_2^0$ condition. \end{proof} \section{Locally octahedral norm, uniformly non-$\ell_1^2$ points, diameter two properties and the Daugavet property} In this section, we first examine the relationship between locally octahedral norms and the Daugavet property. \begin{Definition}\cite{G, BLR2, HLP} A Banach space $X$ is locally octahedral if for every $x \in X$ and $\epsilon >0$, there exists $y \in S_X$ such that $\|\lambda x + y\| \geq (1 - \epsilon) (|\lambda| \|x\| + \|y\|)$ for all $\lambda \in \mathbb{R}$. \end{Definition} A point $x\in S_X$ is called a \emph{uniformly non-$\ell_1^2$} point if there exists $\delta>0$ such that $\min\{\|x + y\|, \|x - y\|\} \leq 2-\delta$ for all $y\in S_X$. Motivated by this, we introduce the following. \begin{Definition} A point $x\in S_X$ is called a \emph{uniformly $\ell_1^2$} point if, given $\delta>0$, there is $y\in S_X$ such that $\min\{\|x + y\|, \|x - y\|\} > 2-\delta$. \end{Definition} By Proposition 2.1 in \cite{HLP} we immediately get the following corollary. \begin{Corollary}\label{cor:octah-non} Every point $x \in S_X$ is a uniformly $\ell_1^2$ point if and only if the Banach space $X$ is locally octahedral.
\end{Corollary} \begin{Lemma}\cite{HLP}\label{lem:aux} If $x, y \in S_X$ satisfy $\|x \pm y\| > 2 - \delta$ and $\alpha, \beta \in \mathbb{R}$, then \[ (1-\delta)(|\alpha| + |\beta|) < \|\alpha x \pm \beta y\| \leq |\alpha| + |\beta|. \] \end{Lemma} \begin{proof} See the proof of the implication from (iii) to (ii) in Proposition 2.1 in \cite{HLP}. \end{proof} In the next theorem we give a local characterization of uniformly $\ell_1^2$ points $x\in S_X$ (resp. $x^*\in S_{X^*}$) in terms of the diameter of the weak$^{*}$-slice $S(x;\epsilon)$ (resp. the diameter of the slice $S(x^*;\epsilon)$). The techniques used in the proof are similar to those of Theorem 3.1 in \cite{HLP}, but the key ideas are more subtle, emphasizing the local nature of the properties discussed. A corollary follows relating the local diameter two property of $X$ to $X^*$ being locally octahedral, as well as the weak$^*$-local diameter two property of $X^*$ to $X$ being locally octahedral. \begin{Theorem}\label{th:unif} \rm(a) An element $x \in S_{X}$ is a uniformly $\ell_1^2$ point if and only if the diameter of the weak$^{*}$-slice $S(x;\epsilon)$ is two for every $\epsilon > 0$. \rm(b) An element $x^* \in S_{X^*}$ is a uniformly $\ell_1^2$ point if and only if ${\rm{diam}}\,S(x^*;\epsilon)=2$ for every $\epsilon > 0$. \end{Theorem} \begin{proof} We will prove only (a) since (b) follows analogously. Suppose that for all $0<\epsilon < 1$, ${\rm{diam}} \, S(x; \epsilon) = 2$. Then there exist $x_1^*, x_2^*\in S(x;\epsilon)$ such that \begin{equation}\label{eq:unif} x_1^*(x)> 1-\epsilon, \ \ \ x_2^* (x) > 1 - \epsilon, \ \ \|x_1^* - x_2^*\| > 2- \epsilon. \end{equation} Hence we can find $y \in S_X$ with $(x_1^* - x_2^*)(y) > 2 - \epsilon$. Thus \[ 2 \ge x_1^*(y) - x_2^*(y) > 2-\epsilon \ \ \ \text{and} \ \ \ x_1^*(y)\le1,\ -x_2^*(y) \le 1, \] and so $x_1^*(y) > 1-\epsilon$ and $-x_2^*(y) > 1-\epsilon$. Combining this with (\ref{eq:unif}) we get that $x_1^*(x+y) > 2-2\epsilon$, $x_2^*(x-y) > 2 - 2\epsilon$, and so $\|x + y\| > 2 - 2\epsilon$ and $\|x - y\| > 2 -2\epsilon$. We showed that for every $0< \epsilon< 1$ there exists $y\in S_X$ such that \[ \min\{\|x + y\|, \|x - y\|\} > 2 -2\epsilon, \] which means that $x$ is a uniformly $\ell_1^2$ point. Conversely, suppose that $x \in S_X$ is a uniformly $\ell_1^2$ point. Then for any $\epsilon>0$, there exists $y \in S_X$ such that $\|x \pm y\| > 2 - \epsilon$. Define bounded linear functionals $x_1^*, x_2^*$ on the subspace ${\rm span}\{x,y\}$ such that \[ x_1^*(x) = 1,\ \ x_1^*(y) = 0,\ \ x_2^*(x) = 0 \ \ \text{and} \ \ x_2^*(y) = 1. \] \noindent Note that $\|x_1^*\| \geq 1$ and $\|x_2^*\| \geq 1$. By Lemma \ref{lem:aux}, for $\alpha, \beta \in \mathbb{R}$ we have \[ |(x_1^* \pm x_2^*)(\alpha x + \beta y)| = |\alpha \pm \beta| \leq |\alpha| + |\beta| \leq (1-\epsilon)^{-1}\|\alpha x + \beta y\|, \] \noindent so $\|x_1^* \pm x_2^*\| \leq (1-\epsilon)^{-1}$. Now, let $\widetilde{x_1}^* = \frac{x_1^* + x_2^*}{\|x_1^* + x_2^*\|}$ and $\widetilde{x_2}^* = \frac{x_1^* - x_2^*}{\|x_1^* - x_2^*\|}$. Then \[ \|\widetilde{x_1}^* - (x_1^* + x_2^*)\| = |\|x_1^* + x_2^*\| - 1| \leq \left|\frac{1}{1-\epsilon} - 1 \right| = \frac{\epsilon}{1 - \epsilon}. \] Similarly, \[ \|\widetilde{x_2}^* - (x_1^* - x_2^*)\| \leq \frac{\epsilon}{1 - \epsilon}. \] Since $(x_1^* \pm x_2^*)(x)=1$, we have $\widetilde{x_1}^*(x) = \frac{1}{\|x_1^* + x_2^*\|} \geq 1 - \epsilon$ and $\widetilde{x_2}^*(x) = \frac{1}{\|x_1^* - x_2^*\|} \geq 1 - \epsilon$.
Hence $\widetilde{x_1}^*, \widetilde{x_2}^* \in S(x;\epsilon)$. Furthermore, \begin{eqnarray*} \|\widetilde{x_1}^* - \widetilde{x_2}^*\| &=& \|\widetilde{x_1}^* + (x_1^* + x_2^*) - (x_1^* + x_2^*) + (x_1^* - x_2^*) - (x_1^* - x_2^*) - \widetilde{x_2}^*\| \\ &\geq& 2 \|x_2^*\| -\|\widetilde{x_1}^* - (x_1^* + x_2^*)\| -\|\widetilde{x_2}^* - (x_1^* - x_2^*)\| \geq 2 - \frac{2\epsilon}{1-\epsilon}. \end{eqnarray*} \noindent Since $\epsilon > 0$ is arbitrary and $S(x;\epsilon')\subset S(x;\epsilon)$ for $0<\epsilon'<\epsilon$, we get ${\rm{diam}}\, S(x; \epsilon) = 2$ for every $\epsilon>0$. Finally by the Hahn-Banach theorem, we can extend the bounded linear functionals $x_1^*$ and $x_2^*$ from ${\rm span}\{x,y\}$ to $X$ and the proof is completed. \end{proof} Combining Corollary \ref{cor:octah-non} and Theorem \ref{th:unif} we obtain the following result proved earlier in \cite{HLP}. \begin{Corollary} \cite[Theorem 3.2, 3.4]{HLP}\label{HLP} Let $X$ be a Banach space. Then the following hold. \begin{enumerate} \item[$(1)$] $X$ is locally octahedral if and only if $X^*$ satisfies the weak$^{*}$ local diameter two property. \item[$(2)$] $X^*$ is locally octahedral if and only if $X$ satisfies the local diameter two property. \end{enumerate} \end{Corollary} Recall the equivalent geometric interpretation of the Daugavet property. \begin{Lemma}\cite[Lemma 2.2]{KSSW} \label{lem:Daug} The following are equivalent. \begin{enumerate}[{\rm(i)}] \item A Banach space $(X,\|\cdot\|)$ has the Daugavet property, \item\label{Daugii} For every slice $S = S(x^*,\epsilon)$ where $x^*\in S_{X^*}$, every $x \in S_X$ and every $\epsilon>0$, there exists $y\in S_{X}\cap S$ such that $\|x+y\|>2-\epsilon$, \item\label{Daugiii} For every weak$^{*}$-slice $S^* = S(x,\epsilon)$ where $x\in S_{X}$, every $x^* \in S_{X^*}$ and every $\epsilon>0$, there exists $y^*\in S_{X^*}\cap S^*$ such that $\|x^*+y^*\|>2-\epsilon$, \end{enumerate} \end{Lemma} The next result is known in a stronger form \cite[Theorem 4.4]{ALN} \cite[Corollary 2.5]{BLR2} \cite{HLP}, namely, if $X$ has the Daugavet property then it has the SD2P and its dual $X^*$ has the weak$^*$-SD2P. \begin{Proposition}\label{prop:slice} If a Banach space $X$ has the Daugavet property, then $X$ has the local diameter two property and $X^*$ has the weak$^*$-local diameter two property. \end{Proposition} \begin{proof} Let $x^*\in S_{X^*}$ and $S(x^*; \epsilon)$ be a slice of $B_X$. Then there is $x\in S_X$ such that $-x^*(x) > 1- \epsilon$. By (ii) of Lemma \ref{lem:Daug} we find $y\in S_X$ with $x^*(y) > 1-\epsilon$ and $\|x + y\| > 2 - 2\epsilon$. Clearly $-x, y \in S(x^*;\epsilon)$, and since $\epsilon>0$ is arbitrary, ${\rm diam} \, S(x^*;\epsilon) = 2$. Now let $x\in S_X$ and $S(x; \epsilon)$ be a weak$^*$-slice of $B_{X^*}$. There exists $y^*\in S_{X^*}$ such that $-y^*(x) > 1 - \epsilon$. By (iii) of Lemma \ref{lem:Daug} there is $x^* \in S_{X^*}$ with $x^*(x) > 1 - \epsilon$ and $\|x^* + y^*\| > 2 - 2\epsilon$. Since both $x^*, -y^* \in S(x;\epsilon)$ and $\epsilon > 0$ is arbitrary, we have that ${\rm diam} \, S(x;\epsilon) = 2$. \end{proof} The next result is an immediate corollary of Theorem \ref{th:unif} and Proposition \ref{prop:slice}. \begin{Corollary}\cite[Proposition 4.4]{KK} \label{prop} If $(X,\|\cdot\|)$ has the Daugavet property, then all elements in $S_X$ and $S_{X^*}$ are uniformly $\ell_1^2$ points. \end{Corollary} Next we shall consider Orlicz spaces $L_{\varphi}, \,\ell_\varphi$ and $L_{\varphi}^0, \, \ell_\varphi^0$. Let us first define the following numbers related to an Orlicz function $\varphi:\mathbb{R}_+ \to [0,\infty]$.
Recall that an Orlicz function $\varphi$ is called a {\it linear function} if $\varphi(u)= ku$ on $\mathbb{R}_+$ for some $k>0$. Set \[ d_\varphi =\sup\{u: \varphi\ \ \text{is linear on} \ [0,u]\}, \ \ c_\varphi = \sup\{u: \varphi(u) \le 1\},\ \ \ b_\varphi = \sup\{u: \varphi(u) < \infty\}. \] \begin{Lemma} \cite[Lemma 4.1]{KK} \label{lem:1} Let $\varphi$ be an Orlicz function. For every closed and bounded interval $I \subset (d_{\varphi}, b_{\varphi})$ there is a constant $\sigma \in (0,1)$ such that $2 \varphi(u/2)/\varphi(u) \leq \sigma$ for $u \in I$. Moreover, if $\varphi(b_\varphi) < \infty$ then the same statement holds true for closed intervals $I \subset (d_\varphi, b_\varphi]$. \end{Lemma} \begin{Theorem}\label{th:KamKub} \begin{enumerate}[{\rm(1)}] \item[\rm(i)] Let $\mu$ be non-atomic. Let $\varphi$ be an Orlicz function such that $\varphi(b_\varphi)\mu(\Omega) >1$ and $d_\varphi < b_\varphi$. Then there exist $a> 0$ and $A \in \Sigma$ such that $x = a \chi_A$, $\|x\|_{\varphi} =1$ and $x$ is a uniformly non-$\ell_1^2$ point in $L_{\varphi}$. If $b_\varphi = \infty$ then $x\in (L_\varphi)_a$. \item[\rm(ii)] Let $\mu$ be the counting measure on $\mathbb{N}$ and $\varphi$ be an Orlicz function such that $d_\varphi < c_\varphi$ and $\varphi(c_\varphi) = 1$. Then there exist $a > 0$ and $A \subset \mathbb{N}$ such that $x = a \chi_A$, $\|x\|_{\varphi} =1$ and $x$ is a uniformly non-$\ell_1^2$ point in $\ell_{\varphi}$. If $b_\varphi = \infty$ then $x\in (\ell_\varphi)_a$. \end{enumerate} \end{Theorem} \begin{proof} (i): By the assumptions on $\varphi$ and non-atomicity of $\mu$, there exist $A\in \Sigma$ and $a\in (d_\varphi,b_\varphi)$ such that $\varphi(a) \mu(A) = 1$. Letting $x=a \chi_A$, we get $I_{\varphi}(x) =1$, and $\|x\|_{\varphi} = 1$. Clearly $x\in (L_\varphi)_a$ if $b_\varphi = \infty$. Let $y \in S_{L_\varphi}$ be arbitrary. Hence for a.e. $t\in \Omega$, $|y(t)| < b_\varphi$ if $b_\varphi=\infty$ or $|y(t)| \le b_\varphi$ if $b_\varphi<\infty$. Then, for any $\lambda >1$, $I_{\varphi}({y}/{\lambda}) \leq 1$. We claim that \begin{equation}\label{cond1} \text{there exist }\, d\in (a,b_\varphi) \,\, \text{and} \,\, B=\{t \in \Omega : |y(t)| \leq d \chi_A (t)\} \,\, \text{such that} \,\, \mu(A \cap B) >0. \end{equation} Indeed, assume first that $b_\varphi = \infty$. Define $B_k = \{t \in \Omega : |y(t)| \leq k \chi_A (t)\}$ for $k \in \mathbb{N}$. The sequence of sets $\{B_k\}$ is increasing, and so $0 < \mu(A) = \mu (A \cap (\cup_{k=1}^{\infty} B_k)) = \lim_{k \rightarrow \infty} \mu (A \cap B_k)$, and this implies that there exists $m \in \mathbb{N}$ such that $2a <m$ and $\mu (A \cap B_{m}) >0$. Letting $B = B_{m}$, $d=m$, we get (\ref{cond1}). Let now $b_\varphi < \infty$. Define $C_k = \{t \in \Omega : |y(t)| \leq (b_\varphi - 1/k) \chi_A (t)\}$ for $k \in \mathbb{N}$. As before, $\{C_k\}$ is increasing and $\lim_{k \rightarrow \infty} \mu(A \cap C_k)>0$. So there exists $m$ such that $b_\varphi - 1/m > a$ and $\mu(A\cap C_m)>0$. Let now $d = b_\varphi - 1/m$ and $B = C_m$, and so (\ref{cond1}) is satisfied. Set \begin{equation}\label{eq:111} \gamma = I_{\varphi} (a \chi_{A \setminus B}). \end{equation} Clearly $\gamma\in [0,1)$. For any $\delta>0$, there exists $1>\epsilon>0$ such that $I_{\varphi}((1+ \epsilon)x) = \varphi((1+\epsilon)a)\mu(A) \leq 1 + \delta$. We can choose $\epsilon$ so small that we also have $(1+\epsilon)a < d$. Let $z = (1+\epsilon) x = (1 + \epsilon)a \chi_A$. Thus \begin{equation}\label{eq:11} I_\varphi(z) \le 1+\delta.
\end{equation} Define \[ D = \{t \in A \cap B : x(t)y(t) \geq 0 \},\,\, E = (A \cap B) \setminus D. \] For $t \in A \cap B$, $\max \{ |z(t)|, |y(t)| \} = \max \{ |(1+\epsilon)a|, |y(t)| \} \in [a,d]$. Since $D \subset A \cap B$, we have $|z(t) - y(t)|/2 \le \max\{|z(t)|, |y(t)|\}$ for $t\in D$. Moreover by Lemma \ref{lem:1}, there exists $\sigma \in (0,1)$ such that $2\varphi(u/2)/\varphi(u) \le \sigma$ for $u\in [a,d] \subset (d_{\varphi}, b_{\varphi})$. Therefore \begin{eqnarray*} I_{\varphi}\left(\frac{z-y}{2} \chi_D \right) \leq I_{\varphi}\left(\frac{\max\{|z|, |y| \}}{2} \chi_D \right) &\leq& \frac{\sigma}{2} I_{\varphi}(\max\{|z|, |y| \}\chi_D)\\ &\leq& \frac{\sigma}{2}(I_{\varphi}(z \chi_D)+ I_{\varphi}(y \chi_D)). \end{eqnarray*} \noindent Analogously we can also show that \[ I_{\varphi} \left(\frac{z+y}{2} \chi_E \right) \leq \frac{\sigma}{2}(I_{\varphi}(z \chi_E)+ I_{\varphi}(y \chi_E)). \] Then, by the convexity of $\varphi$ and $A \cap B = D\cup E$, \begin{eqnarray*} &\,&I_{\varphi}\left(\frac{z-y}{2} \chi_{A \cap B} \right) + I_{\varphi} \left(\frac{z+y}{2} \chi_{A \cap B} \right)\\ &=& I_{\varphi} \left(\frac{z-y}{2} \chi_D \right) + I_{\varphi} \left(\frac{z+y}{2} \chi_D \right) + I_{\varphi} \left(\frac{z-y}{2} \chi_E \right) + I_{\varphi} \left(\frac{z+y}{2} \chi_E \right)\\ &\leq& \frac{\sigma}{2}(I_{\varphi}(z\chi_D) + I_{\varphi}(y\chi_D)) + \frac{1}{2}(I_{\varphi}(z\chi_D) + I_{\varphi}(y\chi_D))+ \frac{1}{2}(I_{\varphi}(z\chi_E) + I_{\varphi}(y\chi_E)) \\ &+& \frac{\sigma}{2}(I_{\varphi}(z\chi_E) + I_{\varphi}(y\chi_E)) = \frac{1+ \sigma}{2}(I_{\varphi}(z \chi_{A \cap B}) + I_{\varphi}(y \chi_{A \cap B})). \end{eqnarray*} Now, choose $\delta \in (0, \frac{(1-\sigma)(1- \gamma)}{2})$. By the assumption $I_{\varphi}(y) \leq 1$ and by (\ref{eq:111}), (\ref{eq:11}) we have \[ 2+ \delta \geq I_{\varphi}(y) + 1 + \delta \geq I_{\varphi}(y) + I_{\varphi}(z), \] and so \begin{eqnarray*} 2+ \delta - I_{\varphi}\left(\frac{z+y}{2} \right) - I_{\varphi}\left(\frac{z-y}{2}\right) &\geq& I_{\varphi}(y)+ I_{\varphi}(z) - I_{\varphi}\left(\frac{z+y}{2}\right) - I_{\varphi}\left(\frac{z-y}{2}\right)\\ &\geq& I_{\varphi}(y)+ I_{\varphi}(z) - \frac{1+ \sigma}{2}(I_{\varphi}(z \chi_{A \cap B}) + I_{\varphi}(y \chi_{A \cap B}))\\ &\geq& \frac{1- \sigma}{2}(I_{\varphi}(z \chi_{A \cap B}) + I_{\varphi}(y \chi_{A \cap B}))\\ &\geq& \frac{1- \sigma}{2}I_{\varphi}(a \chi_{A \cap B}) = \frac{(1- \sigma)(1-\gamma)}{2}, \end{eqnarray*} \noindent which implies that \[ I_{\varphi}\left(\frac{z+y}{2} \right) + I_{\varphi}\left(\frac{z-y}{2} \right) \leq 2+ \delta - \frac{(1- \sigma)(1-\gamma)}{2} \le 2. \] It follows \[ \min \left \{I_{\varphi}\left(\frac{z+y}{2} \right), I_{\varphi}\left(\frac{z-y}{2} \right) \right\} \leq 1. \] If $I_{\varphi}\left(\frac{z+y}{2}\right) \leq 1$, then $\left \| \frac{z+y}{2} \right \|_{\varphi} \leq 1$, and so $\left \| \frac{x+(y/(1+\epsilon))}{2} \right \|_{\varphi} \leq \frac{1}{1+\epsilon}$. Moreover, \begin{equation*} \left | \left\| \frac{x+y}{2}\right \|_{\varphi} - \left \|\frac{x+(y/(1+\epsilon))}{2} \right \|_{\varphi} \right | \leq \left \| \frac{x+y}{2} - \frac{x+(y/(1+\epsilon))}{2} \right \|_{\varphi} = \frac{\epsilon}{2(1+\epsilon)}. \end{equation*} Hence \[ \left \| \frac{x+y}{2} \right \|_{\varphi} \leq \left \|\frac{x+(y/(1+\epsilon))}{2} \right \|_{\varphi} + \frac{\epsilon}{2(1+\epsilon)} \leq \frac{1}{1+ \epsilon} + \frac{\epsilon}{2(1+ \epsilon)} = 1 - \frac{\epsilon}{2(1+\epsilon)}.
\] In a similar way, if $I_{\varphi} \left(\frac{z-y}{2} \right) \leq 1$, then $\left \| \frac{x-y}{2} \right \|_{\varphi} \leq 1 - \frac{\epsilon}{2(1+\epsilon)}$. Thus, we just showed that for any $y \in S_{L_\varphi}$, $\min \left\{ \left \| \frac{x+y}{2} \right \|_{\varphi}, \left \| \frac{x-y}{2} \right \|_{\varphi} \right \} \leq 1 - \frac{\epsilon}{2(1+\epsilon)}$, which means \begin{equation*} \min \{ \| x+y\|_{\varphi}, \| x-y\|_{\varphi}\} \leq 2 - \frac{\epsilon}{1+ \epsilon} < 2. \end{equation*} Therefore, $x= a \chi_A$, $\|x\|_{\varphi} = 1$ is a uniformly non-$\ell_1^2$ point in $L_\varphi$. (ii): If $x \in S_{\ell_{\varphi}}$, then $I_\varphi(x) = \sum_{i=1}^{\infty} \varphi(|x(i)|) \leq 1$. So for every $i \in \mathbb{N}$, $\varphi(|x(i)|) \leq 1$. Hence, for any element of $S_{\ell_\varphi}$, all coordinates $u=|x(i)|$ satisfy $\varphi(u)\leq 1$. Then, by the assumptions, $1= \frac{1}{\varphi(c_\varphi)}< \frac{1}{\varphi(d_\varphi)}$, so there exist $a \in (d_{\varphi}, c_\varphi]$ and a finite set $A \subset \mathbb{N}$ such that $\varphi(a) = 1/\mu(A)$. Let $x = a\chi_A$. Then $I_\varphi(x) = \varphi(a)\mu(A) = 1$ and $\|x\|_\varphi = 1$. If $b_\varphi = \infty$ then $x \in (\ell_\varphi)_a$. Now for $y \in S_{\ell_\varphi}$, we want to show that there exist $d \in (a, c_\varphi)$ and $B = \{i \in \mathbb{N} : |y(i)| \leq d\}$ such that $\mu(A \cap B) > 0$, which corresponds to (\ref{cond1}) in the function case. Since $y$ is in the unit ball of $\ell_\varphi$, for each $i\in \mathbb{N}$, $|y(i)| \le c_\varphi$. Define $C_k = \{i \in \mathbb{N} : |y(i)| \leq (c_\varphi - 1/k) \chi_A (i)\}$ for $k \in \mathbb{N}$. The sequence $\{C_k\}$ is increasing and \[ 0 < \mu (A) = \mu (A \cap (\cup_{k=1}^{\infty} C_k)) = \lim_{k \rightarrow \infty} \mu (A \cap C_k). \] So there exists $m$ such that $c_\varphi - 1/m > a$ and $\mu(A\cap C_m)>0$. Let now $d = c_\varphi - 1/m$ and $B = C_m$. Then $d\in (a,c_\varphi)$, $|y(i)| \le d \chi_A(i)$ for $i\in A\cap B$ and $\mu(A\cap B) > 0$. Further we proceed analogously as in the proof for function spaces starting from (\ref{eq:111}). We apply Lemma \ref{lem:1} for the interval $I=[a,d] \subset (d_\varphi, c_\varphi)\subset (d_\varphi, b_\varphi)$. \end{proof} Concerning the Daugavet property we will consider only the case of a non-atomic measure, since it is not difficult to show that a rearrangement invariant sequence space never has the Daugavet property. In \cite{AKM2}, it was shown that an Orlicz space $L_\varphi$ generated by a finite Orlicz function $\varphi$ has the Daugavet property if and only if the space is isometrically isomorphic to $L_1$. A similar result can also be derived from \cite{KK}, where it was proved for Musielak-Orlicz spaces. The proof given below for Orlicz spaces $L_\varphi$ is much simpler than those in \cite{AKM2, KK}. In fact it is a direct corollary of Theorem \ref{th:KamKub}(i). \begin{Theorem}\label{thm:DaugOrlicz} Let $\mu$ be a non-atomic measure. If $\varphi$ is a finite Orlicz function then the only Orlicz space $L_\varphi$ having the Daugavet property corresponds to a linear function $\varphi$, that is, $L_\varphi = L_1$ isometrically. \end{Theorem} \begin{proof} If $L_\varphi = L_1$ isometrically, clearly the Orlicz space has the Daugavet property. Suppose $L_\varphi$ has the Daugavet property. Then, by Corollary \ref{prop}, every point of the unit sphere of $L_\varphi$ is a uniformly $\ell_1^2$ point. Applying now Theorem \ref{th:KamKub}(i), $d_\varphi = b_\varphi$, where $b_\varphi = \infty$ by the assumption that $\varphi$ assumes finite values.
Therefore $\varphi(u) = ku$, for some $k>0$ and all $u\ge 0$. Consequently, $L_\varphi = L_1$ and $\|\cdot\|_\varphi = k\|\cdot\|_1$. \end{proof} \begin{Theorem}\label{thm:Orlicznormdiam} \rm(i) Let $\mu$ be a non-atomic measure. If $d_{\varphi_*} < b_{\varphi_*}$ and $\varphi_*(b_{\varphi_*}) \mu(\Omega) > 1$ then $L_\varphi^0$ does not have the local diameter two property. \rm(ii) Let $\mu$ be the counting measure on $\mathbb{N}$. If $d_{\varphi_*} < c_{\varphi_*}$ and $\varphi_*(c_{\varphi_*}) =1$ then $\ell_\varphi^0$ does not have the local diameter two property. \end{Theorem} \begin{proof} We will show only (i), since the sequence case is proved similarly. By the assumptions, in view of Theorem \ref{th:KamKub}(i), the space $L_{\varphi_*}$ has a uniformly non-$\ell_1^2$ point. In view of Theorem \ref{th:unif}, this means that the dual space $(L_{\varphi_*})^*$ does not have the weak$^*$ local diameter two property. It is well-known that the dual space to the Orlicz space $L_\varphi$ is isometrically isomorphic to the direct sum $L_{\varphi_*}^0 \oplus_1 \mathcal{S}$, where $\mathcal{S}$ is the space of singular functionals on $L_\varphi$ \cite{Chen}. Therefore the dual space $(L_{\varphi_*})^*$ is isometrically isomorphic to $L_\varphi^0 \oplus_1 \mathcal{S}$ due to $\varphi_{**} = \varphi$ \cite{KR}. By Theorem \ref{th:KamKub}(i), there exists a uniformly non-$\ell_1^2$ point $x\in S_{L_{\varphi_*}}$. Hence, in view of Theorem \ref{th:unif}, there exists $\epsilon > 0$ such that ${\rm diam}\, S(x;\epsilon) < 2$ where $S(x;\epsilon) = \{x^* \in B_{(L_{\varphi_*})^*} : x^*(x) > 1 - \epsilon\}$ is a weak$^*$-slice. Now, let $J: L_{\varphi_*} \rightarrow (L_{\varphi_*})^{**}$ be the canonical mapping so that $J(x)(x^*) = x^*(x)$. Letting $i: L_{\varphi}^0 \rightarrow (L_{\varphi_*})^*$ be the isometric embedding, $T := J(x) \circ i\in B_{(L_\varphi^0)^*}$ and $S(T;\epsilon) = \{y \in B_{L_{\varphi}^0} : T(y) > 1 - \epsilon\} $ is a slice of the unit ball in $L_\varphi^0$. Moreover, identifying $L_\varphi^0$ with its image $i(L_\varphi^0)\subset (L_{\varphi_*})^*$, \[ S(T;\epsilon) \subset \{x^* \in B_{(L_{\varphi_*})^*} : J(x)(x^*) > 1 - \epsilon\} = \{x^* \in B_{(L_{\varphi_*})^*} : x^*(x) > 1 - \epsilon\} = S(x; \epsilon). \] Therefore, ${\rm diam} \,S(T;\epsilon)<2$, and the space $L_\varphi^0$ does not have the local diameter two property. \end{proof} In \cite{AKM2} it has been proved that if $\varphi$ does not satisfy the appropriate $\Delta_2$ condition then $L_\varphi$ or $\ell_\varphi$ has the local diameter two property. This result was generalized later to Orlicz-Lorentz spaces \cite{KT}. For the Orlicz spaces equipped with the Orlicz norm the situation is different. As shown below, for a large class of finite Orlicz functions the spaces $L_\varphi^0$ or $\ell_\varphi^0$ do not have the local diameter two property. \begin{Corollary}\label{Cor:Orlicznormdiam} Let $\mu$ be a non-atomic measure on $\Sigma$ or $\mu$ be the counting measure on $\mathbb{N}$. Let $\varphi$ be a finite Orlicz function which is an $N$-function at infinity. Then there exists a slice of $B_{L_\varphi^0}$, respectively of $B_{\ell_\varphi^0}$, with diameter less than two. Consequently, the Orlicz spaces $L_\varphi^0$ or $\ell_\varphi^0$ equipped with the Orlicz norm do not have the local diameter two property. \end{Corollary} \begin{proof} Since $\varphi$ is an $N$-function at infinity, in view of Lemma \ref{lem:finite}(a) the function $\varphi_*$ is finite and so $b_{\varphi_*} = \infty$. We also have that $d_{\varphi_*} <\infty$.
Indeed, if, to the contrary, $d_{\varphi_*} =\infty$ then $\varphi_*(v) = kv$ for some $k>0$ and all $v\ge 0$. Then it is easy to show that $\varphi = \varphi_{**}$ assumes only the values zero and infinity, which contradicts the assumption that $\varphi$ is a finite Orlicz function. We complete the proof by application of Theorem \ref{thm:Orlicznormdiam}.\end{proof} We conclude this paper by showing that the SD2P, the D2P and the LD2P hold, and hence are equivalent, in $L_\varphi$ and $\ell_\varphi$ when $\varphi$ does not satisfy the appropriate $\Delta_2$ condition. Recall that a subspace $Y$ of $X^*$ is said to be {\it norming} if for every $x \in X$, \[ \|x\| = \sup\{|x^*(x)| : \|x^*\|_{X^*} \leq 1, x^* \in Y\}. \] \begin{Proposition}\cite[Proposition 1.b.18]{LT2}\label{LT2} If $X$ is a Banach function space with the Fatou property, then the K\"othe dual space $X'$ is order isometric to a norming subspace of $X^*$. \end{Proposition} We say that a closed subspace $Y$ is an \emph{$M$-ideal} in $X$ if $Y^\perp$ is the range of a bounded projection $P: X^* \rightarrow X^*$ such that $\|x^*\| = \|Px^*\| + \|(I-P)x^*\|$ for all $x^*\in X^*$, that is, $X^* = Y^{\perp} \oplus_1 Z$ for some subspace $Z$ of $X^*$. In fact, there is a connection between $M$-ideals and the SD2P. \begin{Theorem} \cite[Theorem 4.10]{ALN}\label{SD2P} Let $Y$ be a proper subspace of $X$ which is an $M$-ideal in $X$, i.e. $X^* = Y^{\perp} \oplus_1 Z$. If $Z$ is a norming subspace of $X^*$, then both $X$ and $Y$ have the strong diameter two property. \end{Theorem} \begin{Corollary}\label{th:Mideal} Let $\mu$ be a non-atomic measure on $\Sigma$ or the counting measure on $\mathbb{N}$. Given a finite Orlicz function $\varphi$ which does not satisfy the appropriate $\Delta_2$ condition, the spaces $L_{\varphi}$ or $\ell_\varphi$ and their proper subspaces $(L_{\varphi})_a\ne \{0\}$ or $(\ell_\varphi)_a\ne \{0\}$ have the strong diameter two property. \end{Corollary} \begin{proof} Let $\mu$ be non-atomic. By the assumption that $\varphi$ is finite, the subspace $(L_{\varphi})_a$ is non-trivial. Moreover it is well-known that it is an $M$-ideal in $L_{\varphi}$ \cite{HWW}. It is a proper subspace if $(L_{\varphi})_a\ne L_{\varphi}$, which is equivalent to $\varphi$ not satisfying the appropriate $\Delta_2$ condition. By Proposition \ref{LT2}, $(L_{\varphi})' \simeq ((L_{\varphi})_a)^*$ is a norming subspace of $(L_\varphi)^*$. Hence by Theorem \ref{SD2P}, both $(L_{\varphi})_a$ and $L_{\varphi}$ have the strong diameter two property. The proof in the sequence case is similar. \end{proof} The $M$-ideal property of the order continuous subspace of an Orlicz-Lorentz space has been studied in \cite{KLT}. In our final result, we obtain a full characterization of the (local, strong) diameter two properties in Orlicz spaces equipped with the Luxemburg norm. It completes and extends Theorems 2.5 and 2.6 from \cite{AKM}, where it was shown that $L_\varphi$ or $\ell_\varphi$ has the D2P whenever $\varphi$ does not satisfy the appropriate $\Delta_2$ condition. \begin{Theorem}\label{OReq} Let $\mu$ be a non-atomic measure on $\Sigma$ or the counting measure on $\mathbb{N}$ and let $\varphi$ be a finite Orlicz function. Consider the following properties. \begin{itemize} \item[(i)] $L_\varphi$ or $\ell_\varphi$ has the local diameter two property. \item[(ii)] $L_\varphi$ or $\ell_\varphi$ has the diameter two property. \item[(iii)] $L_\varphi$ or $\ell_\varphi$ has the strong diameter two property. \item[(iv)] $\varphi$ does not satisfy the appropriate $\Delta_2$ condition.
\end{itemize} Then $\rm(iii) \implies (ii) \implies (i)$. For the sequence space $\ell_\varphi$ all properties $\rm(i)-(iv)$ are equivalent. If in addition $\varphi$ is an $N$-function at infinity, then $\rm(i)-(iv)$ are also equivalent for the function space $L_\varphi$. \end{Theorem} \begin{proof} The implications $\rm(iii) \implies (ii) \implies (i)$ are well-known for general Banach spaces \cite{ALN, GGMS}. The implication $\rm(iv) \implies (iii)$ follows from Corollary \ref{th:Mideal}. If $L_\varphi$ or $\ell_\varphi$ has the local diameter two property then the space cannot have the RNP. Thus from Theorems \ref{th:OrRN-funct} and \ref{th:RNP-ORseq}, (i) $\implies$ (iv). \end{proof} \end{document}
\begin{document} \begin{abstract} We study the translation surfaces corresponding to meromorphic differentials on compact Riemann surfaces. Such geometric structures naturally appear when studying compactifications of the strata of the moduli space of Abelian differentials. We compute the number of connected components of the strata of the moduli space of meromorphic differentials. We show that in genus greater than or equal to two, one has up to three components with a description similar to the one given by Kontsevich and Zorich for the moduli space of Abelian differentials. In genus one, one can obtain an arbitrarily large number of connected components that are distinguished by a simple topological invariant. \end{abstract} \maketitle \setcounter{tocdepth}{1} \tableofcontents \section{Introduction} A nonzero holomorphic one-form ({Abelian differential}) on a compact Riemann surface naturally defines a flat metric with conical singularities on this surface. Geometry and dynamics on such flat surfaces, in relation to geometry and dynamics on the corresponding moduli space of Abelian differentials, is a very rich topic and has been widely studied in the last 30 years. It is related to interval exchange transformations, billiards in polygons, and Teichmüller dynamics. A noncompact translation surface corresponds to a one-form on a noncompact Riemann surface. The dynamics and geometry on some special cases of noncompact translation surfaces have been studied more recently. For instance, one can mention dynamics on $\mathbb{Z}^d$ covers of compact translation surfaces (see \cite{HW,HLT, DHL}), on infinite square tiled surfaces (see \cite{Hubert:Schmithuesen}), or on general noncompact surfaces (see \cite{Bowman, BV,PSV}). In this paper, we investigate the case of translation surfaces that come from meromorphic differentials defined on compact Riemann surfaces. In this case, we obtain infinite area surfaces, with ``finite complexity''. The dynamics of the geodesic flow in a generic direction on such a surface is trivial: any infinite orbit converges to the poles. Also, the $SL_2(\mathbb{R})$ action does not seem as rich as in the Abelian case (see Appendix~A). However, it turns out that such structures naturally appear when studying compactifications of strata of the moduli space of Abelian differentials. Eskin, Kontsevich and Zorich show in a recent paper \cite{EKZ} that when a sequence of Abelian differentials $(X_i,\omega_i)$ converges to a boundary point in the Deligne-Mumford compactification, the subsets $(Y_{i,j},\omega_{i,j})$ corresponding to thick components of the $X_i$, after suitable rescaling, converge to meromorphic differentials (see \cite{EKZ}, Theorem~10). Smillie, in a work to appear, constructs a geometric compactification of the strata of the moduli space of Abelian differentials, by using only flat geometry, in which flat structures defined by meromorphic differentials are needed. The connected components of the moduli space of Abelian differentials were described by Kontsevich and Zorich in \cite{KoZo}. They showed that each stratum has up to three connected components, which are described by two invariants: hyperellipticity and the parity of the spin structure, which arise under some conditions on the set of zeroes. Later, Lanneau described the connected components of the moduli space of quadratic differentials. The main goal of the paper is to describe the connected components of the moduli space of meromorphic differentials with prescribed poles and zeroes.
It is well known that each stratum of the moduli space of genus zero meromorphic differentials is connected. We show that when the genus is greater than or equal to two, there is a classification analogous to the one of Kontsevich and Zorich, while in genus one, there can be an arbitrarily large number of connected components. In this paper, we will call \emph{translation surface with poles} a translation surface that comes from a meromorphic differential on a punctured Riemann surface, where poles correspond to the punctured points. We describe in Section~\ref{sec:flat:merom} the local models for neighborhoods of poles. Similarly to the Abelian case, we denote by $\mathcal{H}(n_1,\dots ,n_r,-p_1,\dots ,-p_s)$ the moduli space of translation surfaces with poles that corresponds to meromorphic differentials with zeroes of degree $n_1,\dots ,n_r$ and poles of degree $p_1,\dots ,p_s$. It will be called a \emph{stratum of the moduli space of meromorphic differentials}. We will always assume that $s>0$. A stratum is nonempty as soon as $\sum_{i} n_i-\sum_j p_j=2g-2$, for some nonnegative integer $g$ and $\sum_{j}p_j>1$. For a genus one translation surface $S$ with poles, we describe the connected components by using a geometric invariant easily computable in terms of the flat metric, which we call the \emph{rotation number of a surface}. As we will see in Section~\ref{genus1}, in the stratum $\mathcal{H}(n_1,\dots ,n_r,-p_1,\dots ,-p_s)$, the rotation number is a positive integer that divides all the $n_i,p_j$. \begin{theorem}\label{MT1} Let $\mathcal{H}(n_1,\dots ,n_r,-p_1,\dots ,-p_s)$, with $n_i,p_j>0$ and $\sum_{j}p_j>1$ be a stratum of genus one meromorphic differentials. Denote by $c$ the number of positive divisors of $\gcd(n_1,\dots ,n_r,p_1,\dots ,p_s)$. The number of connected components of the stratum is: \begin{itemize} \item $c-1$ if $r=s=1$. In this case $n_1=p_1=\gcd(n_1,p_1)$ and each connected component corresponds to a rotation number that divides $n_1$ and is not $n_1$. \item $c$ otherwise. In this case each connected component corresponds to a rotation number that divides $\gcd(n_1,\dots ,n_r, p_1,\dots ,p_s)$. \end{itemize} \end{theorem} A consequence of the previous theorem is that, contrary to the case of Abelian differentials, there can be an arbitrarily large number of connected components for a stratum of meromorphic differentials (in genus~1). For instance, the stratum $\mathcal{H}(24,-24)$ has 7 connected components since the positive divisors of 24 that are not 24 are $1,2,3,4,6,8$ and $12$. The general classification uses criteria analogous to those for Abelian differentials. We recall that in this case, the connected components are distinguished by the following (up to a few exceptions in low genera): \begin{itemize} \item \emph{hyperellipticity}: if there is only one singularity or two singularities of equal degree, there is a component that consists only of hyperelliptic Riemann surfaces. For each translation surface, the hyperelliptic involution is an isometry. Slightly abusing terminology, we usually call this component the \emph{hyperelliptic component}. \item \emph{the parity of the spin structure}: If all singularities are of even degree, there are two connected components (neither of which is the hyperelliptic component) distinguished by a topological invariant easily computable in terms of the flat metric. \end{itemize} In Section~\ref{invariants}, we define in our context the notion of hyperelliptic component and spin structure.
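Before turning to higher genus, let us illustrate Theorem~\ref{MT1} on one more example; the computation below is a direct application of the statement of the theorem and is not used later. For the stratum $\mathcal{H}(2,2,-2,-2)$, which has genus one since $2+2-2-2=0$, we have $r=s=2$ and $\gcd(2,2,2,2)=2$, whose positive divisors are $1$ and $2$; hence this stratum has exactly two connected components, corresponding to the rotation numbers $1$ and $2$. Similarly, for a prime number $n$, the stratum $\mathcal{H}(n,-n)$ falls in the case $r=s=1$ and has a single connected component, corresponding to the rotation number $1$.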
In the next theorem, we say that the set of poles and zeroes is: \begin{itemize} \item of \emph{hyperelliptic type} if the degrees of the zeroes are of the kind $\{2n\}$ or $\{n,n\}$, for some positive integer $n$, and if the degrees of the poles are of the kind $\{-2p\}$ or $\{-p,-p\}$, for some positive integer $p$. \item of \emph{even type} if the degrees of zeroes are all even, and if the degrees of the poles are either all even, or are $\{-1,-1\}$. \end{itemize} \begin{theorem}\label{MT2} Let $\mathcal{H}=\mathcal{H}(n_1,\dots ,n_r,-p_1,\dots ,-p_s)$, with $n_i,p_j>0$ be a stratum of genus $g\geq 2$ meromorphic differentials. We have the following. \begin{enumerate} \item If $\sum_{i} p_i$ is odd and greater than two, then $\mathcal{H}$ is nonempty and connected. \item If $\sum_i p_i=2$ and $g=2$, then: \begin{itemize} \item if the set of poles and zeroes is of hyperelliptic type, then there are two connected components, one hyperelliptic, the other not (in this case, these two components are also distinguished by the parity of the spin structure) \item otherwise, the stratum is connected. \end{itemize} \item If $\sum_i p_i>2$ or if $g>2$, then: \begin{itemize} \item if the set of poles and zeroes is of hyperelliptic type, there is exactly one hyperelliptic connected component, and one or two nonhyperelliptic components that are described below. Otherwise, there is no hyperelliptic component. \item if the set of poles and zeroes is of even type, then $\mathcal{H}$ contains exactly two nonhyperelliptic connected components that are distinguished by the parity of the spin structure. Otherwise $\mathcal{H}$ contains exactly one nonhyperelliptic component. \end{itemize} \end{enumerate} \end{theorem} From the previous theorem, we see that there are at most three connected components in genus greater than or equal to two. For instance, the stratum $\mathcal{H}(4,4,-1,-1)$ contains a hyperelliptic connected component (zeroes and poles are of hyperelliptic type) and two nonhyperelliptic components (the zeroes are even and the poles are $\{-1,-1\}$). So it has three components. The stratum $\mathcal{H}(2,4,-1,-1,-2)$ is connected, since it does not have a hyperelliptic connected component and the poles and zeroes are not of even type. The structure of the paper is the following: \begin{itemize} \item In Section~\ref{sec:flat:merom}, we describe general facts about the metric defined by a meromorphic differential and define a topology on the moduli space. \item In Section~\ref{sec:tools}, we present three tools that are needed in the proof. The first two already appear in the paper of Kontsevich and Zorich and in the paper of Lanneau. The third one is a version of the well-known Veech construction for the case of translation surfaces with poles. \item In Section~\ref{genus1}, we describe the connected components in the genus one case. Some of the results in genus one will be very useful for the general genus case. \item In Section \ref{invariants}, we describe the topological invariants for the general genus case, \emph{i.e.} hyperelliptic connected components and the parity of the spin structure. \item In Section \ref{sec:minimal}, we compute the connected components for the minimal strata, which are the strata with only one conical singularity (and possibly several poles). \item In Section \ref{sec:nonminimal}, we compute the connected components for the general case.
\end{itemize} \subsubsection*{Acknowledgments} I thank Martin Moeller for many discussions about meromorphic differentials and about spin structures. I thank John Smillie for motivating the work on this paper and interesting discussions. I am also thankful to Pascal Hubert and Erwan Lanneau for the frequent discussions during the development of this paper. This work is partially supported by the ANR Project "GeoDym". \section{Flat structures defined by meromorphic differentials.}\label{sec:flat:merom} \subsection{Holomorphic one forms and flat structures} Let $X$ be a Riemann surface and let $\omega$ be a holomorphic one-form. For each $z_0\in X$ such that $\omega(z_0)\neq 0$, integrating $\omega$ in a neighborhood of $z_0$ gives local coordinates whose corresponding transition functions are translations, and therefore $X$ inherits a flat metric on $X\backslash \Sigma$, where $\Sigma$ is the set of zeroes of $\omega$. In a neighborhood of an element of $\Sigma$, such a metric admits a conical singularity of angle $(k+1)2\pi$, where $k$ is the degree of the corresponding zero of $\omega$. Indeed, a zero of degree $k$ is given locally, in suitable coordinates, by $\omega=(k+1)z^k dz$. This form is precisely the pullback of the constant form $dz$ by the ramified covering $z\to z^{k+1}$. In terms of flat metric, it means that the flat metric defined locally by a zero of order $k$ appears as a connected covering of order $k+1$ over a flat disk, ramified at zero. When $X$ is compact, the pair $(X,\omega)$, seen as a smooth surface with such translation atlas and conical singularities, is usually called a \emph{translation surface}. If $\omega$ is a meromorphic differential on a compact Riemann surface $\overline{X}$, we can consider the translation atlas defined by $\omega$ on $X=\overline{X}\backslash \Sigma'$, where $\Sigma'$ is the set of poles of $\omega$. We obtain a translation surface with infinite area. We will call such a surface a \emph{translation surface with poles}. \begin{convention} When speaking of a translation surface with poles $S=(X,\omega)$, the surface $S$ equipped with the flat metric is noncompact. The underlying Riemann surface $X$ is a punctured surface and $\omega$ is a holomorphic one-form on $X$. The corresponding closed Riemann surface is denoted by $\overline{X}$, and $\omega$ extends to a meromorphic differential on $\overline{X}$ whose set of poles is precisely $\overline{X}\backslash X$. \end{convention} Similarly to the case of Abelian differentials, a \emph{saddle connection} is a geodesic segment that joins two conical singularities (or a conical singularity to itself) with no conical singularities on its interior. We also recall that it is well known that $\sum_{i=1}^r n_i-\sum_{j=1}^s p_j=2g-2$, where $\{n_1,\dots ,n_r\}$ is the set (with multiplicities) of degrees of the zeroes of $\omega$ and $\{p_1,\dots ,p_s\}$ is the set (with multiplicities) of degrees of the poles of $\omega$. \subsection{Local model for poles} The neighborhood of a pole of order one is an infinite cylinder with one end. Indeed, up to rescaling, the pole is given in local coordinates by $\omega=\frac{1}{z}dz$. Writing $z=e^{z'}$, we have $\omega=dz'$, and $z'$ is in a infinite cylinder. Now we describe the flat metric in a neighborhood of a pole of order $k\geq 2$ (see also \cite{strebel}). First, consider the meromorphic 1-form on $\mathbb{C}\cup \{\infty \}$ defined on $\mathbb{C}$ by $\omega=z^k dz$.
Changing coordinates $w=1/z$, we see that this form has a pole $P$ of order $k+2$ at $\infty $, with zero residue. In terms of translation structure, a neighborhood of the pole is obtained by taking an infinite cone of angle $(k+1)2\pi$ and removing a compact neighborhood of the conical singularity. Since the residue is the only local invariant for a pole of order $k$, this gives a local model for a pole with zero residue. Now, define $U_R=\{z\in \mathbb{C}:\ |z|>R\}$ equipped with the standard flat metric. Let $V_R$ be the Riemann surface obtained after removing from $U_R$ the ${\pi}$--neighborhood of the real half line $\mathbb{R}^-$, and identifying by the translation $z\to z+\imath 2\pi$ the lines $-\imath {\pi}+\mathbb{R}^-$ and $\imath {\pi}+\mathbb{R}^-$. The surface $V_R$ is naturally equipped with a holomorphic one-form $\omega$ coming from $dz$. We claim that this one-form has a pole of order 2 at infinity and residue $-1$. Indeed, start from the one-form on $U_{R'}$ defined by $(1+1/z)dz$ and integrate it. Choosing the usual determination of $\ln(z)$ on $\mathbb{C}\backslash \mathbb{R}^-$, one gets the map $z\to z+\ln(z)$ from $U_{R'}\backslash \mathbb{R}^-$ to $\mathbb{C}$, which extends to an injective holomorphic map $f$ from $U_{R'}$ to $V_R$, if $R'$ is large enough. Furthermore, the pullback of the form $\omega$ on $V_R$ gives $(1+1/z)dz$. Then, the claim follows easily after the change of coordinate $w=1/z$. Let $k\geq 2$. The pullback of the form $(1+1/z)dz$ by the map $z\to z^{k-1}$ gives $((k-1)z^{k-2}+(k-1)/z)dz$, \emph{i.e.} we get at infinity a pole of order $k$ with residue $-(k-1)$. In terms of flat metric, a neighborhood of a pole of order $k$ and residue $-(k-1)$ is just the natural cyclic $(k-1)$--covering of $V_R$. Then, a suitable rotation and rescaling gives the local model for a pole of order $k$ with a nonzero residue. For flat geometry, it will be convenient to forget the term $2\imath \pi$ when speaking of residue, hence we define the \emph{flat residue} of a pole $P$ to be $\int_{\gamma_P} \omega$, where $\gamma_P$ is a small closed path that turns counterclockwise around the pole. \subsection{Moduli space} If $(X,\omega)$ and $(X',\omega')$ are such that there is a biholomorphism $f:X\to X'$ with $f^* \omega'=\omega$, then $f$ is an isometry for the metrics defined by $\omega$ and $\omega'$. Moreover, in the local coordinates defined by $\omega,\omega'$, the map $f$ is in fact a translation. As in the case of Abelian differentials, we consider the moduli space of meromorphic differentials, where $(X,\omega)\sim (X',\omega')$ if there is a biholomorphism $f:X\to X'$ such that $f^* \omega'=\omega$. A stratum corresponds to prescribed degrees of zeroes and poles. We denote by $\mathcal{H}(n_1,\dots ,n_r,-p_1,\dots ,-p_s)$ the \emph{stratum} that corresponds to meromorphic differentials with zeroes of degree $n_1,\dots ,n_r$ and poles of degree $p_1,\dots ,p_s$. Such a stratum is nonempty if and only if $\sum_{i=1}^r n_i-\sum_{j=1}^s p_j=2g-2$ for some integer $g\geq 0$ and if $\sum_{j=1}^s p_j>1$ (\emph{i.e.} if there is not just a single simple pole). A \emph{minimal stratum} is a stratum with $r=1$, \emph{i.e.} which corresponds to surfaces with only one conical singularity and possibly several poles.
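To illustrate the nonemptiness condition (this simple example is not used in the sequel): the datum $\mathcal{H}(1,-1)$ would correspond to $g=1$, but the corresponding stratum is empty since $\sum_{j}p_j=1$; indeed, the sum of the residues of a meromorphic differential on a compact Riemann surface is zero, so a differential cannot have a single simple pole as its only pole. On the other hand, $\mathcal{H}(1,-1,-2)$ satisfies $1-1-2=-2=2g-2$ with $g=0$ and $\sum_j p_j=3>1$, so it is a nonempty stratum (in fact a minimal one) of genus zero meromorphic differentials.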
We define the topology on this space in the following way: a small neighborhood of $S$, with conical singularities $\Sigma$, is defined to be the set of equivalence classes of surfaces $S'$ for which there is a differentiable injective map $f:S\backslash V(\Sigma)\to S'$ such that $V(\Sigma)$ is a (small) neighborhood of $\Sigma$, $Df$ is close to the identity in the translation charts, and the complement of the image of $f$ is a union of disks. One can easily check that this topology is Hausdorff. \section{Tools}\label{sec:tools} \subsection{Breaking up a singularity: local construction}\label{bzero:loc} Here we describe a surgery, introduced by Eskin, Masur and Zorich (see \cite{EMZ}, Section~8.1) for Abelian differentials, that ``breaks up'' a singularity of degree $k_1+k_2\geq 2$ into two singularities of degree $k_1\geq 1$ and $k_2\geq 1$ respectively. This surgery is local, since the metric is modified only in a neighborhood of the singularity of degree $k_1+k_2$. In particular, it is also valid for the flat structure defined by a meromorphic differential. \begin{figure} \caption{Breaking up a zero, after Eskin, Masur and Zorich} \label{bzero} \end{figure} We start from a singularity of degree $k_1+k_2$. A neighborhood of such a singularity is obtained by gluing $(2k_1+2k_2+2)$ Euclidean half disks in a cyclic order. The singularity breaking procedure consists in changing continuously the way these half disks are glued together, as in Figure~\ref{bzero}. This breaks the singularity of degree $k_1+k_2$ into singularities of degree $k_1$ and $k_2$ respectively, with a small saddle connection joining them. \subsection{Bubbling a handle}\label{bubbling} The following surgery was introduced by Kontsevich and Zorich in \cite{KoZo}. Since it is a local construction, it is also valid for meromorphic differentials. As before, we start from a singularity of degree $k_1+k_2$ on a surface $S$. We first apply the previous surgery to get a pair of singularities of degree $k_1$ and $k_2$ respectively, with a small saddle connection $\gamma$ joining them. Then, we cut the surface along $\gamma$ and obtain a flat surface with a boundary component that consists of two geodesic segments $\gamma_1,\gamma_2$. We identify their endpoints, so that the corresponding segments become closed boundary geodesics $\gamma_1',\gamma_2'$. Then, we consider a cylinder with two boundary components isometric to the $\gamma_i'$, and glue each of these components to the corresponding $\gamma_i'$. The angle between $\gamma_1'$ and $\gamma_2'$ is $(k_1+1)2\pi$ (and $(k_2+1)2\pi$). Using a notation similar to the one introduced by Lanneau in \cite{La:cc}, we will denote by $S\oplus (k_1+1)$ the resulting surface for an arbitrary choice of continuous parameters. Different choices of continuous parameters lead to the same connected component, and from a path $(S_t)_{t\in [0,1]}$, one can easily deduce a continuous path $S_t\oplus (k_1+1)$. Hence, as in \cite{La:cc}, the connected component of $S\oplus s$ only depends on $s$ and on the connected component of $S$. So, if $S$ is in a connected component $\mathcal{C}$ of a stratum of Abelian (resp. meromorphic) differentials with only one singularity, $\mathcal{C}\oplus s$ is the connected component of a stratum of Abelian (resp. meromorphic) differentials obtained by the construction.
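For instance, if $S\in \mathcal{H}(2,-4)$ is a genus zero surface with a single conical singularity, then for $s\in \{1,2,3\}$ the surface $S\oplus s$ lies in the genus one stratum $\mathcal{H}(4,-4)$: the degree of the singularity increases by $2$ and the genus by $1$, while the orders of the poles are unchanged.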
\begin{remark} {The notation $\oplus$ slightly differs from the one introduced by Lanneau: since he manipulates \emph{quadratic differentials}, the angles can be any multiples of $\pi$, while in our case, we only have multiples of $2\pi$. So the surface we obtain would have been written $S\oplus 2(k_1+1)$ with the notation of Lanneau.} \end{remark} The following lemma is Proposition~2.9 in the paper of Lanneau \cite{La:cc}, written in our context. The ideas behind this proposition were already present in the paper of Kontsevich and Zorich \cite{KoZo}. \begin{lemma}\label{lemme:lanneau} Let $\mathcal{C}$ be a connected component of a minimal stratum of the moduli space of meromorphic differentials, of the form $\mathcal{H}(n,-p_1,\dots ,-p_r)$. Then, the following statements hold. \begin{enumerate} \item $ \mathcal{C}\oplus s_1 \oplus s_2=\mathcal{C}\oplus s_2 \oplus s_1$ if $1\leq s_1,s_2\leq n+1$ and either $s_1\neq \frac{n}{2}+1$ or $s_2 \neq \frac{n+2}{2}+1$. \item $ \mathcal{C}\oplus s_1 \oplus s_2=\mathcal{C}\oplus (s_2-1) \oplus (s_1+1)$ if $1\leq s_1\leq n+1$ and $2\leq s_2\leq n+3$. \item $\mathcal{C}\oplus s_1 \oplus s_2=\mathcal{C}\oplus (s_2-2) \oplus s_1$ if $1\leq s_1\leq n+1$ and $1\leq s_2\leq n+3$ and $s_2-s_1\geq 2$. \item $\mathcal{C}\oplus s=\mathcal{C}\oplus (n+2-s)$ for all $s\in \{1,\dots, n+1\}$ \end{enumerate} \end{lemma} \begin{remark} There is a small mistake in the statement of Lanneau: the condition ``either $s_1\neq \frac{n}{2}+1$ or $s_2 \neq \frac{n+2}{2}+1$'' does not appear there, although it is necessary. This leads to a gap in the proof of Lanneau's Lemma~6.13 in \cite{La:cc}, but this gap is easily fixed by using Lemma~A.2 of the same paper. \end{remark} \subsection{The infinite zippered rectangle construction} In this section, we describe a construction of translation surfaces with poles which is analogous to the well-known Veech zippered rectangle construction. We will call this construction the \emph{infinite zippered rectangle construction}. We first recall Veech's construction. \subsubsection{The Veech construction of a translation surface}\label{veech:constr} The {Veech construction}, or {zippered rectangle construction}, is usually seen as a way to define a suspension over an interval exchange map (see \cite{Veech82}). We can also see it as an easy way to define (almost any) translation surface. Consider a finite alphabet $\mathcal{A}=\{\alpha_1,\dots ,\alpha_d\}$, and a pair of one-to-one maps $\pi_t,\pi_b:\mathcal{A}\to \{1,\dots ,d\}$. Let $\zeta\in \mathbb{C}^\mathcal{A}$ be a vector for which each entry has positive real part. The Veech construction can be seen in two (almost) equivalent ways: one with a $2d$-sided polygon, and one with $d$ rectangles that are identified along their boundaries. We present the first one, which is simpler but not as general as the second one. Consider the broken line $L_t$ on $\mathbb{C}=\mathbb R^2$ defined by concatenation of the vectors $\zeta_{\pi_t^{-1}(j)}$ (in this order) for $j=1,\dots,d$ with starting point at the origin. Similarly, we consider the broken line $L_b$ defined by concatenation of the vectors $\zeta_{\pi_b^{-1}(j)}$ (in this order) for $j=1,\dots,d$ with starting point at the origin. We assume that $\zeta$ is such that the vertices of $L_t$ are always above the real line, except possibly the rightmost one (and of course the one at the origin), and that, similarly, the vertices of $L_b$ are below the real line.
Such a $\zeta$ is called a \emph{suspension datum} (see \cite{MMY}), and exists under a combinatorial condition on $(\pi_t,\pi_b)$ usually called ``irreducibility''. If the lines $L_t$ and $L_b$ have no intersections other than the endpoints, we can construct a translation surface $X$ by identifying each side $\zeta_j$ on $L_t$ with the side $\zeta_j$ on $L_b$ by a translation. The resulting surface is a translation surface endowed with the form $\omega=dz$ (see Figure~\ref{Veech:construction}). \begin{figure} \caption{Veech's construction of a translation surface} \label{Veech:construction} \end{figure} \begin{remark} The surface constructed in this way can also be seen as a union of rectangles whose dimensions are easily deduced from $\pi_t,\pi_b$ and $\zeta$, and that are ``zippered'' on their boundary. One can define $S$ directly in this way: the construction works also if $L_t,L_b$ have other intersection points. This is the \emph{zippered rectangle construction}, due to Veech (\cite{Veech82}). This construction coincides with the first one in the previous case. \end{remark} \subsubsection{Basic domains}\label{basic:domains} Now we generalize the previous construction to obtain a flat surface that corresponds to a compact Riemann surface with a meromorphic differential. Instead of having a polygon with pairwise identifications on its boundary, we will have a finite union of some ``basic domains'' which are half-planes and infinite cylinders with polygonal boundaries (see Figure~\ref{fig:basic:domains}). Let $n\geq 0$. Let $\zeta\in \mathbb{C}^n$ be a complex vector whose entries have positive real part. Consider the broken line $L$ on $\mathbb{C}$ defined by concatenation of the following: \begin{itemize} \item the half-line $l_1$ that corresponds to $\mathbb{R}^-$, \item the broken line $L_t$ defined as above, \emph{i.e.} the concatenation of the segments defined by the vectors $\zeta_{j}$ (in this order) for $j=1,\dots,n$ with starting point at the origin, \item the horizontal half-line $l_2$ starting from the right end of $L_t$, and going to the right. \end{itemize} We consider the subset $D^+(z_1,\dots ,z_n)$ (resp. $D^-(z_1,\dots ,z_n)$) as the set of complex numbers that are above (resp. below) $L$. The line $l_1$ will be referred to as the \emph{left} half-line, and $l_2$ will be referred to as the \emph{right} half-line. We will sometimes write such domains as $D^{+}$ or $D^-$ for short. The sets $D^\pm$ are kinds of half-planes with polygonal boundaries. Note that $n$ might be equal to $0$, and in this case, $D^+$ (resp. $D^-$) is just a half-plane with a marked point on its (horizontal) boundary. Similarly, if $n\geq 1$, we can define the subset $C^+(z_1,\dots ,z_n)$ (resp. $C^-(z_1,\dots ,z_n)$) as the set of complex numbers that are above (resp. below) $L_t$. Its boundary consists of two infinite vertical half-lines joined by the broken line $L_t$. The two infinite half-lines will be identified in the resulting construction, hence $C^\pm$ is just an infinite half-cylinder with a polygonal boundary. \begin{figure} \caption{A domain $D^+(\zeta_1,\zeta_2,\zeta_3)$ and a domain $C^+(\zeta_1,\zeta_2,\zeta_3)$} \label{ex:D:C} \label{fig:basic:domains} \end{figure} \subsubsection{An example: a surface with a single pole of order 2.} The idea of the construction is to glue by translation the basic domains together in order to get a noncompact translation surface with no boundary components. Since the formal description is rather technical, we first present a simple version of the construction.
Let $\mathcal{A}$ be a finite alphabet of cardinality $n$ and $\pi_t,\pi_b:\mathcal{A}\to \{1,\dots ,n\}$ be one-to-one maps. Let $\zeta\in \mathbb{C}^\mathcal{A}$ be such that $Re(\zeta_\alpha)>0$ for all $\alpha\in \mathcal{A}$. We define a flat surface $S$ as the disjoint union of the two half-planes $D^+=D^+(\zeta_{\pi_t^{-1}(1)},\dots ,\zeta_{\pi_t^{-1}(n)})$ and $D^-=D^-(\zeta_{\pi_b^{-1}(1)},\dots ,\zeta_{\pi_b^{-1}(n)})$ glued along their boundary by translation: the left half-line of $D^+$ is glued to the left half-line of $D^-$, similarly for the right half-lines, and each segment in $D^+$ corresponding to some $\zeta_i$ is glued to the corresponding one in $D^-$. Note that, contrary to the case of compact translation surfaces, there is no ``suspension data condition'' on $\zeta$, hence no combinatorial condition on $\pi$. The only condition that we require is that $Re(\zeta_i)>0$ for all $i$. Note also that we can have $n=0$, and in this case $S=\mathbb{C}$. \begin{figure} \caption{Construction of a translation surface with a degree 2 pole.} \label{ex:surf:av:pole2} \end{figure} \subsubsection{General case} \label{inf:zipp:rect} We can generalize the above construction in order to have several poles of any order. Instead of considering two half-planes $D^+,D^-$, we will do the same construction starting from $2d$ half-planes and $s^++s^-$ infinite cylinders, and define identifications on their boundaries. More precisely: Let $\zeta\in \mathbb{C}^n$ be a vector whose entries have positive real part. We consider the following combinatorial data: \begin{itemize} \item A collection {$\bf n^+$} of integers $0=n_0^+\leq n_1^+\leq \dots \leq n_d^+ < \dots <n_{d+s^+}^+=n$ \item A collection {$\bf n^-$} of integers $0=n_0^-\leq n_1^-\leq \dots \leq n_d^- < \dots <n_{d+s^-}^-=n$ \item A pair of one-to-one maps $\pi_t,\pi_b: \mathcal{A}\to \{1,\dots ,n\}$ \item A collection {$\bf d$} of integers $0=d_0< d_1 < d_2 <\dots < d_{r}=d$. \end{itemize} The resulting surface will have $r$ poles of order greater than or equal to two, and $s^++s^-$ poles of order 1. For each $i \in \{0,\dots ,d-1\}$, we consider the basic domains, as defined before, $D_i^+(\zeta_{\pi_{t}^{-1}(n_i^++1)},\dots ,\zeta_{\pi_{t}^{-1}(n_{i+1}^+)})$ and $D_i^-(\zeta_{\pi_b^{-1}(n_i^-+1)},\dots ,\zeta_{\pi_b^{-1}(n_{i+1}^-)})$. For $i\in\{d,\dots ,d+s^+-1\}$, we define $C_i^+(\zeta_{\pi_{t}^{-1}(n_i^++1)},\dots ,\zeta_{\pi_{t}^{-1}(n_{i+1}^+)})$. For $i\in\{d,\dots ,d+s^--1\}$, we define $C_i^-(\zeta_{\pi_b^{-1}(n_i^-+1)},\dots ,\zeta_{\pi_b^{-1}(n_{i+1}^-)})$. \begin{figure} \caption{Construction of a translation surface with a degree 3 pole.} \label{ex:surf:pole:3} \end{figure} \begin{figure} \caption{Construction of a translation surface with two poles of degree $2$} \label{ex:surf:2pole:2} \end{figure} Then, we glue these domains together on their boundary by translations: \begin{itemize} \item each segment corresponding to a parameter $\zeta_i$ in a $``+''$ domain is glued to the unique corresponding one in a $``-''$ domain. \item each left half-line of a domain $D_i^+$ is glued to the left half-line of the domain $D_i^-$. \item For each $i\in \{1,\dots ,d\} \backslash \{d_1,\dots , d_r \}$ the right half-line of the domain $D_i^-$ is glued to the one of the domain $D_{i+1}^+$. \item For each $i=d_k$, $k>0$, the right half-line of the domain $D_i^-$ is glued to the one of the domain $D_{d_{k-1}+1}^+$. \item For each $C_i^+$ and $C_i^-$, the two vertical lines are glued together. \end{itemize} The resulting surface $S$ has no remaining boundary components, and is a flat surface with poles and conical singularities.
It might not be connected for a given choice of combinatorial data. We will only consider those data that give connected surfaces. Note that such a surface easily decomposes into a finite union of vertical strips and half-planes with vertical boundary (\emph{i.e.} ``infinite rectangles''), that are ``zippered'' along their boundaries. \begin{example} Figure~\ref{ex:surf:pole:3} shows an example with $d=2$, $s^+=s^-=0$, ${n^+}=(0,2,4)$, ${n^-}=(0,2,4)$, $\pi_t=Id$, $(\pi_b^{-1}(1),\dots ,\pi_b^{-1}(n))=(2,3,4,1)$ and ${\bf d}=(0,2)$. One gets a surface in $\mathcal{H}(5,-3)$. Figure~\ref{ex:surf:2pole:2} shows an example with the same data, except that ${\bf d}=(0,1,2)$. One gets a surface in $\mathcal{H}(2,2,-2,-2)$. \end{example} \begin{lemma}\label{lem:dim} Let $S$ be a genus $g$ surface in $\mathcal{H}(n_1,\dots,n_r,-p_1,\ldots,-p_s)$, obtained by the infinite zippered rectangle construction with a continuous parameter $\zeta\in \mathbb{C}^n$. Then $$n=2g+r+s-2$$ \end{lemma} \begin{proof} By construction, the surface (poles included) is obtained by gluing $s$ disks along their boundaries. The resulting surface admits a decomposition into cells, with $s$ faces, $n$ edges, and $r$ vertices. So, we have $2-2g=s-n+r$, and the result follows. \end{proof} The following proposition will be very useful in the remainder of the paper. It is analogous to the well-known fact that any translation surface with no vertical saddle connection is obtained by the Veech construction. \begin{proposition}\label{no:vertical} Any translation surface with poles with no vertical saddle connection is obtained by the infinite zippered rectangle construction. \end{proposition} \begin{proof} According to the book of Strebel (\cite{strebel}, Section~11.4), when there are no vertical saddle connections, the vertical trajectories are of the following two types: \begin{enumerate} \item lines that converge to a pole in each direction. \item half-lines starting from (or ending at) a conical singularity and converging to a pole in their infinite direction. \end{enumerate} Furthermore, the set of non-singular vertical trajectories is a disjoint union of half-planes and of vertical strips $\left(\sqcup_i \mathcal{P}_i\right) \sqcup \left(\sqcup_j \mathcal{S}_j\right)$. The half-planes have one vertical boundary component, and the strips have two vertical boundary components. We choose these half-planes or strips as large as possible, so each boundary component necessarily contains a conical singularity. This singularity is unique on each boundary component, otherwise there would be a vertical saddle connection on the surface. The number of half-planes and strips is necessarily finite, since there is only a finite number of conical singularities, and each conical singularity is adjacent to a finite number of half-planes or strips. We cut each half-plane $\mathcal{P}_i$ in the following way: the boundary of $\mathcal{P}_i$ consists of a union of two vertical half-lines starting from the conical singularity. We consider the unique horizontal half-line in $\mathcal{P}_i$ starting from this singularity and cut $\mathcal{P}_i$ along this half-line. It decomposes $\mathcal{P}_i$ into two components $\mathcal{P}_i^+$ (the upper one) and $\mathcal{P}_i^-$ (the lower one). We cut each strip $\mathcal{S}_j$ in the following way: the boundary of $\mathcal{S}_j$ has two components, each consisting of a union of two vertical half-lines starting from a conical singularity. There is a unique geodesic segment joining these two boundary singularities.
We cut $\mathcal{S}_j$ along this segment and obtain two components $\mathcal{S}_j^+$ and $\mathcal{S}_j^-$. The surface $S$ is obtained as the disjoint union of the $\mathcal{S}_j^\pm$ and $\mathcal{P}_i^\pm$, glued to each other by translation along their boundary components. Now we remark that the $\mathcal{P}_i^+$ and $\mathcal{S}_j^+$ are necessarily glued to each other along their vertical boundary components. Under this identification, $\sqcup_i \mathcal{P}_i^+ \sqcup_j \mathcal{S}_j^+$ is a union of subsets of the type $D^+$ and $C^+$, as in the previous construction. Similarly, gluing together along vertical sides the union of the $\mathcal{S}_j^-$ and $\mathcal{P}_i^-$, one obtains a union of $D^-$ and $C^-$ type subsets. So the surface is obtained by the infinite zippered rectangle construction. \end{proof} \begin{remark}\label{rq:param} Note that the parameters $(\zeta_i)_i$ are uniquely defined (once a suitable vertical direction is fixed) and the infinite zippered rectangle construction defines a triangulation of the surface for which the $(\zeta_i)$ are local parameters for the stratum. Hence, the map $S\to (\zeta_i)_i$ is a local homeomorphism. The corresponding saddle connections form a basis of the relative homology $H_1(S, \Sigma,\mathbb{Z})$, where $\Sigma$ is the set of conical singularities of $S$. \end{remark} Note that for any translation surface with poles, the set of saddle connections is at most countable, so it is always possible to rotate the surface in such a way that there are no vertical saddle connections. Hence the previous proposition gives a representation for \emph{any} translation surface with poles. One important consequence of this proposition is Proposition~\ref{adj:minimal}, which is analogous to a key argument in \cite{KoZo}. \section{Genus one case}\label{genus1} \subsection{Connected components} We first recall the following result in algebraic geometry, which is a consequence of Abel's theorem. \begin{NNthm} Let $\overline{X}=\mathbb{C}/\Gamma$ be a torus and let $D=\sum_i \alpha_i P_i$ be a divisor of degree zero. Then there exists a meromorphic differential whose divisor is $D$ if and only if $\sum_i \alpha_i {z_i}\in \Gamma$, where for each $i$, ${z_i}$ is a representative in $\mathbb{C}$ of $P_i$. \end{NNthm} Now we use this theorem to describe the connected components in the genus one case. \begin{theorem} \label{th:g1:complex} Let $\mathcal{H}=\mathcal{H}(n_1,\dots ,n_r,-p_1,\dots ,-p_s) $ be a stratum of genus one meromorphic differentials. Let $d=\gcd(n_1,\dots ,n_r,p_1,\dots ,p_s)$, and let $c$ be the number of positive divisors of $d$. Then the number of connected components of $\mathcal{H}$ is: \begin{itemize} \item $c$ if $r+s\geq 3$. \item $c-1$ if $r=s=1$. \end{itemize} \end{theorem} \begin{proof} We first assume that $r+s=2$. Then $\mathcal{H}=\mathcal{H}(n,-n)$, for some $n\geq 2$ (the stratum $\mathcal{H}(1,-1)$ is empty). We have $d=n$. An element in $\mathcal{H}$ is given, up to a constant multiple, by a pair $(\overline{X},D)$, where $\overline{X}$ is a torus and $D=-nP+nQ$ is a divisor on $\overline{X}$. One can assume that $P=0$, and from the previous theorem, there is only a finite number of possibilities for $Q$, depending only on $n$. Hence, the map $(\overline{X},-n\cdot 0+nQ)\mapsto (\overline{X},0)$ defines a covering from $P\mathcal{H}$ to $\mathcal{M}_{1,1}$, where $\mathcal{M}_{1,1}$ denotes the moduli space of genus one Riemann surfaces with a marked point.
Fix $\overline{X}_0\in \mathcal{M}_{1,1}$ a regular point, and choose $v_1,v_2$ such that $\overline{X}_0=\mathbb{C}/(v_1 \mathbb{Z}+v_2\mathbb{Z})$. An element $(\overline{X}_0,\omega)\in \mathcal{H}$ is uniquely defined by the coordinates of $Q$, which are given in a unique way by a complex number of the form $\frac{p}{n}v_1+\frac{q}{n}v_2$, with $(p,q)\neq (0,0)$ and $0\leq p,q<n$. Since $\overline{X}_0$ is taken regular, there is a one-to-one correspondence between such pairs $(p,q)$ of integers and the elements of $\mathcal{H}$ whose underlying Riemann surface is $\overline{X}_0$. The monodromy of the covering $P\mathcal{H}\to \mathcal{M}_{1,1}$ is generated by the two maps $\phi_1:(p,q)\to (p+q,q) \mod n$ and $\phi_2:(p,q)\to (p,q+p) \mod n$. We remark that $d'=\gcd(p,q,n)$ is invariant by this action and the condition on $(p,q)$ implies that $0<d'<n$. Hence $d'$ is an invariant of the connected components of $\mathcal{H}$. The number of possible $d'$ is $c-1$. We claim that each $(p,q)$ has a (unique) representative modulo these actions of the kind $(d',0)$. To prove the claim, we start from an element $(p,q)$ and apply an algorithm similar to Euclid's algorithm. Without loss of generality, one can assume that $p\neq 0$ and $q\neq 0$. Applying $\phi_1^r$ for some suitably chosen $r$, we can obtain $(p',q)$ with $0<p'\leq \gcd(q,n)$. Similarly, we can obtain $(p,q')$ with $0<q'\leq \gcd(p,n)$. So if either $\gcd(q,n)<p$ or $\gcd(p,n)<q$, we obtain $(p',q')$ with $p'+q'<p+q$. Otherwise $p\leq \gcd(q,n)\leq q$ and $q\leq \gcd(p,n)\leq p$. This implies $p=q=d'$, and the result follows. Now we assume $r+s\geq 3$. We proceed in a similar way as before: we fix $\overline{X}_0=\mathbb{C}/\Gamma$ and a basis $v_1,v_2$ of $\Gamma$. Then a meromorphic differential is given by a vector $(z_1,\dots ,z_{r},z_1',\dots ,z_{s}')\in \mathbb{C}^{r+s}$ with pairwise different entries (modulo $\Gamma$), satisfying the linear equality $\sum_{i=1}^r n_i z_i -\sum_{j=1}^s p_j z_j'=p v_1+ q v_2$ for some integers $p,q$. One can remark that: \begin{itemize} \item For each $(p,q)$ the set of $(z_i)_i,(z_j')_j$ satisfying the previous condition is nonempty and connected. \item If we choose other representatives $z_i,z_j'$ for the same differential $\omega$, this changes $(p,q)$ into $(p+\sum_i \alpha_i n_i+\sum_j \beta_j p_j,q+ \sum_{i} \alpha_i' n_i+\sum_j \beta_j' p_j)$, where $(\alpha_i,\beta_j, \alpha_i',\beta_j')$ can be any integers. \item The action of the two generators of the modular group changes $(p,q)$ into $(p+q,q)$ and $(p,q+p)$ respectively. \end{itemize} Then, by a proof very similar to the previous one, one can see that $d'=\gcd(p,q,n_1,\dots ,n_r,p_1,\dots ,p_s)$ is an invariant of the connected components, and one can find a representative in each connected component satisfying $(p,q)=(d',0)$. So the number of connected components is precisely $c$. Note that the difference with the first case is that any pair $(p,q)\in \mathbb{Z}^2$ is possible. \end{proof} \subsection{Flat point of view: rotation number} The previous section classifies the connected components of the moduli space of meromorphic differentials in the genus one case from a complex-analytic point of view. But the invariant which is given is not easy to describe in terms of flat geometry. The next theorem gives an interpretation in terms of flat geometry. Let $\gamma$ be a simple closed curve on a translation surface, parametrized by arc length and avoiding the singularities. Then $t\to \gamma'(t)$ defines a map from $\mathbb{S}^1$ to $\mathbb{S}^1$.
We denote by $Ind(\gamma)$ the index of this map. \begin{definition} Let $S=(X,\omega)\in \mathcal{H}(n_1,\dots ,n_r,-p_1,\dots ,-p_s)$ be a genus one translation surface with poles. Let $(a,b)$ be a symplectic basis of the homology of the underlying compact Riemann surface $\overline{X}$ and $\gamma_a,\gamma_b$ be arc-length representatives of $a,b$, with no self-intersections, that avoid the zeroes and poles of $\omega$. We define the \emph{rotation number} of $S$ to be: $$rot(S)=\gcd(Ind(\gamma_a),Ind(\gamma_b),n_1,\dots ,n_r,p_1,\dots ,p_s)$$ \end{definition} \begin{theorem} Let $\mathcal{H}=\mathcal{H}(n_1,\dots ,n_r,-p_1,\dots ,-p_s)$ be a stratum of genus 1 meromorphic differentials. The rotation number is an invariant of the connected components of $\mathcal{H}$. Any positive integer $d$ which divides $\gcd(n_1,\dots ,n_r,p_1,\dots ,p_s)$ is realized by a unique connected component of $\mathcal{H}$, except for the case $\mathcal{H}=\mathcal{H}(n,-n)$ where $d=n$ does not occur. \end{theorem} \begin{proof} Let $(a,b)$ be a symplectic basis of $H_1(\overline{X},\mathbb{Z})$. Let $\gamma_a, \gamma_a'$ be representatives of $a$ that are simple closed curves and do not contain a singularity. Since $\overline{X}$ is a torus, $\gamma_a$ and $\gamma_a'$ are homotopic as curves defined on $\overline{X}$. The index of $\gamma_a$ does not change while we deform $\gamma_a$ without crossing a pole or a zero. It is easy to see that when crossing a singularity of order $k\in \mathbb{Z}$, the index is changed by adding $\pm k$. Hence the rotation number only depends on the homology classes of $a$ and $b$. If $\gamma_a$ and $\gamma_b$ intersect in one point, then there is a standard way to construct a simple closed curve representing $a\pm b$. Its index is $Ind(\gamma_a)\pm Ind(\gamma_b)$, and we obtain representatives of the symplectic basis $(a\pm b,b)$ (or $(a,a\pm b)$). The rotation number does not change under this operation. With this procedure, we can obtain any other symplectic basis of $\overline{X}$. Hence the rotation number is well defined for a given element of $\mathcal{H}$. Also, it is invariant by deforming $(\overline{X},\omega)$ inside the ambient stratum, since, along a continuous deformation, we can keep track of a pair of representatives of a basis, and the indices are constant under continuous deformations. To prove the last part of the theorem, we remark that a surface in $\mathcal{H}(n,-p_1,\dots ,-p_s)$ obtained from $\mathcal{H}(n-2,-p_1,\dots ,-p_s)$ by bubbling a handle with parameter $k\in \{1,\dots ,n-1\}$ has a rotation number equal to $\gcd(k,p_1,\dots ,p_s)$ by a direct computation. Since $k<n$, we have $\gcd(k,p_1,\dots ,p_s)<n$, so $n$ is never a rotation number. Now we break up the singularity of order $n$ to get $r$ singularities of order $n_1,\dots ,n_r$. Since this does not change the metric outside a small neighborhood of the singularity of order $n$, we obtain a rotation number equal to $\gcd(k,n_1,\dots ,n_r,p_1,\dots ,p_s)$. The previous construction gives at least as many connected components as the number given by Theorem~\ref{th:g1:complex}. So, we see that each rotation number is realized by a unique component, and that this component is realized by the bubbling-a-handle construction. \end{proof} Note that the last two paragraphs of the proof of the last theorem give the following description of the connected components of the minimal strata in genus one.
\begin{proposition} \label{g1:cyl} Let $\mathcal{H}=\mathcal{H}(n,-p_1,\dots ,-p_s)$ be a minimal stratum of genus one meromorphic differentials. Any connected component of $\mathcal{H}$ is of the form $\mathcal{H}_0 \oplus k$, for some $1\leq k \leq n-1 $, where $\mathcal{H}_0$ is the connected stratum $\mathcal{H}(n-2,-p_1,\dots ,-p_s)$. Also, for $1\leq k_1,k_2 \leq n-1$ we have: $$\mathcal{H}_0\oplus k_1=\mathcal{H}_0\oplus k_2$$ if and only if $\gcd(k_1,p_1,\dots ,p_s)=\gcd(k_2,p_1,\dots ,p_s)$. \end{proposition} \begin{remark} It is shown in the appendix that there are some translation surfaces with poles that do not contain any closed geodesic. \end{remark} \section{Spin structure and hyperelliptic components}\label{invariants} Recall that in the classification of the connected components of strata of the moduli space of Abelian differentials \cite{KoZo}, the connected components are distinguished by two invariants. \begin{itemize} \item ``Hyperelliptic components'': there are some connected components whose corresponding translation surfaces all have an extra symmetry. \item ``The parity of the spin structure'', which is a complex-analytic invariant that can be expressed in terms of the flat geometry by a simple formula. \end{itemize} \subsection{Hyperelliptic components} \begin{definition} A translation surface with poles $S$ is said to be \emph{hyperelliptic} if there exists an isometric involution $\tau:S\to S$ such that $S/\tau$ is a sphere. Equivalently, the underlying Riemann surface $\overline{X}$ is hyperelliptic and the hyperelliptic involution $\tau$ satisfies $\tau^* \omega=-\omega$. \end{definition} \begin{remark} In the case of Abelian differentials, if the underlying Riemann surface is hyperelliptic, then the translation surface is hyperelliptic, since there are no nonzero holomorphic one forms on the sphere. In our case, similarly to the case of quadratic differentials, the underlying Riemann surface might be hyperelliptic, while the corresponding translation surface is not. \end{remark} \begin{proposition}\label{list:hyp} Let $n,p$ be positive integers with $n\geq p$. The following strata admit a connected component that consists only of hyperelliptic translation surfaces. \begin{itemize} \item $\mathcal{H}(2n,-2p)$ \item $\mathcal{H}(2n,-p,-p)$ \item $\mathcal{H}(n,n,-2p)$ \item $\mathcal{H}(n,n,-p,-p)$ \end{itemize} Furthermore, any stratum that contains an open set of flat surfaces with a nontrivial isometric involution is in the previous list for some $n\geq p\geq 1$. \end{proposition} \begin{proof} Let $\mathcal{H}$ be a stratum and $\mathring{\mathcal{H}}^{hyp}\subset \mathcal{H}$ the interior of the set of elements of $\mathcal{H}$ that admit a nontrivial isometric involution. Given a combinatorial datum $\sigma=(\mathbf{n}^+,\mathbf{n}^-,\pi_t,\pi_b,\mathbf{d})$ that defines an infinite zippered rectangle construction, we denote by $\mathcal{C}_\sigma$ the set of flat surfaces that are obtained by this construction with datum $\sigma$, up to a rotation. Clearly, $\mathcal{C}_\sigma$ is open and connected. We claim that for each $\sigma$, the intersection between $\mathcal{C}_\sigma$ and $\mathring{\mathcal{H}}^{hyp}$ is either $\mathcal{C}_\sigma$ or empty. Indeed, choose a generic parameter $\zeta$ for the infinite zippered rectangle construction, such that the corresponding surface $S(\sigma,\zeta)$ is in $\mathring{\mathcal{H}}^{hyp}$. Let $D^+(z_1,\dots ,z_k)\subset S(\sigma,\zeta)$ be a half-plane of the construction.
Then, $\zeta$ being generic, an isometric involution $\tau$ will necessarily send the segment corresponding to $z_i$ to itself. Hence if $\tau$ is not the identity, it is easy to see that the set $D^+(z_1,\dots ,z_k)$ will be sent to $D^-(z_k,z_{k-1},\dots , z_1)$, and therefore, we can define a similar involution for \emph{any} value of $z_1,\dots ,z_k$. Since this argument is valid for any $D^{\pm}$ and $C^{\pm}$ components, we see that all flat surfaces obtained by the infinite zippered rectangle construction with combinatorial datum $\sigma$ have a nontrivial isometric involution. This proves the claim. Now we remark that, by Proposition~\ref{no:vertical}, $\mathcal{H}=\cup_\sigma \mathcal{C}_\sigma$, where the union is taken over all $\sigma$ that correspond to $\mathcal{H}$. The previous claim implies that $\mathring{\mathcal{H}}^{hyp}$ and its complement in $\mathcal{H}$ are both unions of some $\mathcal{C}_\sigma$, so if $\mathring{\mathcal{H}}^{hyp}$ is nonempty, it is a connected component of $\mathcal{H}$. Now we check that if $\mathring{\mathcal{H}}^{hyp}$ is not empty, then the stratum $\mathcal{H}$ is in the given list, \emph{i.e.} there is either one even degree zero (resp. pole) or two zeroes (resp. poles) of equal degree. Let $\zeta_1,\dots ,\zeta_n$ be the continuous data in the infinite zippered rectangle construction for an element $S$ in $\mathring{\mathcal{H}}^{hyp}$. The above condition implies that for each $\zeta_i$, the midpoint of the corresponding segment in the surface is a fixed point for the involution $\tau$. So, there are at least $n$ fixed points. Let $r$ be the number of conical singularities, let $s$ be the number of poles and let $g'$ be the genus of $S/\tau$. We must have $\#(Fix(\tau))=2g+2-4g'$, and $2g+r+s-2=n\leq \#(Fix(\tau))$ (see Lemma~\ref{lem:dim}). Since $r,s\geq 1$, this implies $g'=0$, so $S$ is hyperelliptic, and $\#(Fix(\tau))-n=4-r-s$. The fixed points of $\tau$ in $\overline{X}$ that do not correspond to the midpoint of a $z_i$ segment are necessarily either conical singularities or poles. The above combinatorial condition on the infinite zippered rectangle construction implies that $S$ has either two equal degree poles that are interchanged by $\tau$ or one pole of even degree that is preserved by $\tau$. So the condition $\#(Fix(\tau))-n=4-r-s$ implies that either there is one conical singularity which is fixed by $\tau$, or there are two singularities $P_1, P_2$ that are not fixed by $\tau$. By a similar argument as in the proof of Proposition~\ref{adj:minimal}, $P_1,P_2$ are the endpoints of a saddle connection corresponding to a parameter $\zeta_i$, so they are interchanged by $\tau$, hence they are of the same degree. Therefore, the stratum is necessarily one of the strata in the given list. The last step of the proof is to check that, for the strata given in the statement, $\mathring{\mathcal{H}}^{hyp}$ is nonempty. This is an elementary check, using an infinite zippered rectangle construction that satisfies the previous symmetry condition. \end{proof} \subsection{The parity of the spin structure for translation surfaces with even singularities}\label{subsec:spin} \subsubsection{Spin structures on a surface} There are two commonly used equivalent definitions of a spin structure on a compact Riemann surface $X$. The first one is topological: let $P$ be the $\mathbb{S}^1$-bundle of directions of nonzero tangent vectors to $X$.
A \emph{spin structure} on $X$ is a two-to-one covering $Q\to P$, whose restriction to an $\mathbb{S}^1$ fiber is the usual double covering $\mathbb{S}^1\to \mathbb{S}^1$. It is equivalent to a morphism $\xi:H_1(P,\mathbb{Z}/2\mathbb{Z})\to \mathbb{Z}/2\mathbb{Z}$ such that the image of the cycle $z$ corresponding to a fiber is one. Indeed, in this case the monodromy map $\pi_1(P)\to \mathbb{Z}/2\mathbb{Z}$ factors through a map $\xi: H_1(P, \mathbb{Z}/2\mathbb{Z})\to \mathbb{Z}/2\mathbb{Z}$. The second equivalent definition comes from algebraic geometry (see \cite{Ati}). A \emph{theta characteristic} is a solution of the equation $2D=K$ in the divisor class group, where $K$ is the canonical divisor. Equivalently, it is a complex line bundle $L$ such that $L\otimes L\sim \Omega_1(X)$. For such $L$, Atiyah and Mumford showed independently \cite{Ati,Mum} that the dimension modulo 2 of the vector space of holomorphic sections of $L$ is invariant by deformation of the complex structure. This is the \emph{parity of the spin structure}. In \cite{Johnson}, Johnson provides a topological way to compute this invariant. He first constructs a lift $C\mapsto \widetilde{C}$ from $H_1(X,\mathbb{Z}/2\mathbb{Z})$ to $H_1(P,\mathbb{Z}/2\mathbb{Z})$. We refer to \cite{Johnson} for details on the construction of the lift. In our case, it is enough to observe that when $C=[\gamma]$ is the class of a simple closed curve, the lift of $C$ is $\widetilde{C}=[\ha{\gamma}]+z$, where $\ha{\gamma}$ is the natural lift obtained by framing $\gamma$ with its speed vector $\gamma'$, and $z$ is the class of an $\mathbb{S}^1$ fiber. The composition of this lift with the map $\xi$ gives a quadratic form $\Omega:H_1(X,\mathbb{Z}/2\mathbb{Z}) \to \mathbb{Z}/2\mathbb{Z}$, $\Omega(C):=\xi(\widetilde{C})$. Johnson then shows that the parity of the spin structure is equal to the Arf invariant of $\Omega$, \emph{i.e.} for a symplectic basis $(a_1,b_1),\dots ,(a_g,b_g)$ of $H_1(X,\mathbb{Z}/2\mathbb{Z})$, the parity of the spin structure is $$\sum_{i=1}^g \Omega(a_i)\Omega(b_i).$$ \subsubsection{The parity of the spin structure for translation surfaces with even singularities} A translation surface $(X,\omega)$ whose poles and zeroes all have even degree naturally gives a spin structure on $X$ in the following way: let $(\omega)=\sum_i 2n_i N_i-\sum_j 2p_j P_j $ be the divisor associated to $\omega$. Then $D=\sum_i n_i N_i-\sum_j p_j P_j$ satisfies $2D=K$. From the results of Atiyah and Mumford in \cite{Ati, Mum}, it follows that the parity of the spin structure is a locally constant function on the strata where it is defined. Hence, it is an invariant of connected components, for strata whose poles and zeroes all have even degree. The parity of the spin structure can be easily computed by using Johnson's construction. Following \cite{KoZo}, it is easy to see that the corresponding map $\Omega$ satisfies, for $\gamma$ a simple closed curve, $\Omega([\gamma])=ind(\gamma)+1 \mod 2$. Hence, the parity of the spin structure for a translation surface (with poles) is: $$\sum_{i=1}^g (ind(a_i)+1)(ind(b_i)+1) \mod 2.$$ \subsection{The parity of the spin structure for translation surfaces with only two simple poles}\label{spin:poles:simples} Let $S=(X,\omega)\in \mathcal{H}(2n_1,\dots ,2n_r,-1,-1)$ be a translation surface with zeroes of even order and a pair of simple poles (and no other poles). Since there are odd degree singularities, $\omega$ does not define a spin structure on $X$. However, one can still define a topological invariant, which will be the parity of the spin structure on another surface $S'$.
Recall that a neighborhood of a simple pole is an infinite cylinder. Choose a pair of waist curves $\gamma_1,\gamma_2$, one on each of the cylinders associated to the two simple poles. Since there are no other poles than the pair of simple poles, the two poles have opposite residues by Stokes' theorem, hence $\gamma_1,\gamma_2$ are isometric. Now we cut the surface $S$ along $\gamma_1$ and $\gamma_2$. We obtain a compact translation surface with two geodesic boundary components. The condition on the residues implies that gluing together these boundary components by a translation gives a translation surface $S'$, where the pair of infinite cylinders in $S$ corresponds to a finite cylinder $C\subset S'$. The surface $S'$ belongs to the stratum $\mathcal{H}(2n_1,\dots ,2n_r)$, where the spin structure was defined by Kontsevich and Zorich. Note that other choices for $\gamma_1,\gamma_2$, and for the gluing operation, only change the length and twist of $C$, hence give the same connected component of $\mathcal{H}(2n_1,\dots ,2n_r)$. We will call \emph{the parity of the spin structure of $S$} the parity of the spin structure of the corresponding translation surface $S'$. \begin{remark} Note that one can also define the parity of the spin structure of $S=(X,\omega)$ in a more algebro-geometric way: we consider the stable curve $\overline{X}$ obtained by gluing together the two poles. In \cite{Cornalba}, Cornalba extends to a large class of stable curves (including this case) the notion of spin structure, and shows that the parity of the spin structure is invariant by deformations. The one form $\omega$ on $X$ can then be used to define a spin structure on $\overline{X}$. \end{remark} \section{Higher genus case: minimal stratum} \label{sec:minimal} Recall that a minimal stratum corresponds to the case where there is only one conical singularity (and possibly several poles). As in the papers of Kontsevich-Zorich \cite{KoZo} and Lanneau \cite{La:cc}, we first describe the connected components of minimal strata. The idea is similar: show that each such stratum is obtained by bubbling $g$ cylinders, and compute the connected components in this case. The first step is to find a surface obtained by bubbling a handle. In \cite{KoZo} and in \cite{La:cc}, a rather combinatorial argument is used. A similar approach is possible in our case by using the infinite zippered rectangle construction, but this is quite technical. Another possibility is to reduce the problem to the genus one case, for which it was proven in Section~\ref{genus1} that any minimal stratum contains a surface obtained by bubbling a handle. \begin{proposition} \label{exists:cyl} Let $\mathcal{C}$ be a connected component of the stratum $\mathcal{H}(n,-p_1,\ldots,-p_s)$. We assume that the genus $g$ is nonzero. Then, there exists a flat surface in $\mathcal{C}$ which is obtained by bubbling a handle from a genus $g-1$ flat surface. \end{proposition} \begin{proof} We start from a surface in $\mathcal{C}$ obtained by the infinite zippered rectangle construction. It is defined by a combinatorial datum and a continuous parameter $\zeta\in\mathbb{C}^n$, with $n=2g+s-1$. Each $\zeta_i$ defines a closed path $\gamma_i$ (a saddle connection) joining the conical singularity to itself. The intersection number between any two such paths is $0$ or $\pm 1$. The genus is positive and $\{\gamma_1,\ldots,\gamma_n\}$ generates the whole homology group $H_1(S,\mathbb{Z})$, since the complement is a union of punctured disks.
Hence, there is a pair $\gamma_i,\gamma_j$ whose intersection number is one. Now we shrink $\zeta_i,\zeta_j$ until they are very small compared to all the other parameters. Then, we observe that a neighborhood of $\gamma_i\cup\gamma_j$ is isometric to the complement of a neighborhood of a pole for a surface in $\mathcal{H}(n,-n)$. Then, suitably deforming the surface and using Proposition~\ref{g1:cyl}, one obtains the desired result. \end{proof} We recall the notation introduced in Section~\ref{bubbling}. Let $\mathcal{C}$ be a connected component of a minimal stratum $\mathcal{H}(n,-p_1,\dots ,-p_s)$. Let $s\in \{1,\dots ,n+1\}$. The set $\mathcal{C}\oplus s$ is the connected component of the stratum $\mathcal{H}(n+2,-p_1,\dots ,-p_s)$ obtained by bubbling a handle after breaking the singularity of order $n$ into two singularities of order $(s-1)$ and $(n+1-s)$. The proposition that follows uses roughly the same arguments as in \cite{KoZo} and \cite{La:cc}. The only difference is the case when $n$ is odd, which does not occur for Abelian or quadratic differentials. \begin{proposition} \label{min:strat:upper:bound} Let $\mathcal{H}(n,-p_1,\dots ,-p_s)$ be a stratum of meromorphic differentials on genus $g\geq 2$ surfaces, and denote by $\mathcal{C}_0$ the unique component of $\mathcal{H}(n-2g,-p_1,\dots ,-p_s)$. The following holds: \begin{itemize} \item If $n$ is odd, the stratum $\mathcal{H}(n,-p_1,\dots ,-p_s)$ is connected. \item If $n$ is even, the stratum $\mathcal{H}(n,-p_1,\dots ,-p_s)$ has at most three connected components which are in the following list: \begin{itemize} \item $\mathcal{C}_0\oplus (\frac{n-2g}{2}+1) \oplus (\frac{n-2g}{2}+2)\oplus\dots \oplus (\frac{n-2g}{2}+g)$ \item $\mathcal{C}_0\oplus 1\oplus\dots \oplus 1\oplus 1$ \item $\mathcal{C}_0\oplus 1\oplus\dots \oplus 1\oplus 2$ \end{itemize} \end{itemize} \end{proposition} \begin{proof} Let $\mathcal{C}$ be a connected component of $\mathcal{H}(n,-p_1,\dots ,-p_s)$. By Proposition~\ref{exists:cyl}, there exist integers $s_1,\dots ,s_g$, such that: $$\mathcal{C}=\mathcal{C}_0\oplus s_1\oplus\dots \oplus s_g$$ and for each $i\in \{1,\dots ,g\}$, $1\leq s_i\leq n-2g+2i-1$, since at Step~$i$, the handle corresponding to $s_i$ is bubbled on a zero of degree $n-2g+2(i-1)$. We assume for simplicity that $g=2$, and $(s_1,s_2)\neq ( \frac{n-2g}{2}+1, \frac{n-2g}{2}+2)$. Using operations $(1)$ and $(3)$ of Lemma~\ref{lemme:lanneau}, one can assume that $1\leq s_1\leq s_2\leq s_1+1$. Then, if $1\neq s_1$, using operations $(1)$, $(2)$, $(3)$ and $(1)$ (in this order), we have $\mathcal{C}_0\oplus s_1\oplus s_2=\mathcal{C}_0\oplus (s_1-1)\oplus (s_2-1)$. Repeating the same sequence of operations, we see that $\mathcal{C}$ is one of the following: \begin{itemize} \item $\mathcal{C}_0\oplus (\frac{n-2g}{2}+1) \oplus (\frac{n-2g}{2}+2)$ \item $\mathcal{C}_0\oplus 1\oplus 1$ \item $\mathcal{C}_0\oplus 1\oplus 2$ \end{itemize} If $n$ is odd, then the first case does not appear. By operation $(4)$ of Lemma~\ref{lemme:lanneau}, we have $$\mathcal{C}_0\oplus s_1\oplus s_2=\mathcal{C}_0\oplus s_1\oplus ((n-2g+2)+2-s_2)$$ so we can assume that $s_1$ and $s_2$ are of the same parity. Then, using the previous argument, we have: $$\mathcal{C}=\mathcal{C}_0\oplus 1\oplus 1.$$ The case $g>2$ easily follows. \end{proof} The above proposition uses purely local constructions in a neighborhood of a singularity. The next proposition explains why the existence of suitable poles (at infinity) will ``kill'' some components.
\begin{proposition} \label{min:strat:odd:poles:or:non:hyp} Let $\mathcal{H}(n,-p_1,\dots ,-p_s)$ be a stratum of meromorphic differentials on surfaces of genus $g\geq 2$ with $n$ even and $s\geq 2$, and denote by $\mathcal{C}_0$ the unique component of $\mathcal{H}(n-2g,-p_1,\dots ,-p_s)$. The following holds: \begin{enumerate} \item If there is an odd degree pole and $\sum_i p_i>2$, then: $$\mathcal{C}_0\oplus 1\oplus\dots \oplus 1=\mathcal{C}_0\oplus 1\oplus\dots \oplus 1 \oplus 2 $$ \item If $s> 2$ or $p_1\neq p_2$, then: $$\mathcal{C}_0\oplus \left(\frac{n-2g}{2}+1\right) \oplus \dots \oplus \left(\frac{n-2g}{2}+g\right) = \mathcal{C}_0 \oplus 1\oplus\dots \oplus 1\oplus s$$ for some $s\in \{1,2\}$. \end{enumerate} \end{proposition} \begin{proof} Case (1).\\ Note that $s\geq 2$ implies that we necessarily have $\sum_{i} p_i\geq 2$. From Proposition~\ref{g1:cyl}, $\mathcal{C}_0\oplus 2=\mathcal{C}_0\oplus k$ if and only if $\gcd(k,p_1,\dots ,p_s)=\gcd(2,p_1,\dots ,p_s)$. So, if there is an odd degree pole, $\gcd(2,p_1,\dots ,p_s)=1=\gcd(1,p_1,\dots ,p_s)$, hence $$(\mathcal{C}_0\oplus 1)\oplus 1\oplus\dots \oplus 1=(\mathcal{C}_0\oplus 2)\oplus 1\oplus\dots \oplus 1= \mathcal{C}_0\oplus 1\oplus\dots \oplus 1 \oplus 2,$$ which concludes the proof of this case. Note that $\mathcal{C}_0\oplus 2$ is well defined because $\sum_{i}p_i>2$. \noindent Case (2).\\ As before, we use the classification in genus one. Since $n-2g-\sum_{i} p_i=-2$, we have $\frac{n-2g}{2}+1=\frac{\sum_i p_i}{2}$. If $s> 2$ or $p_1\neq p_2$, then there exists $i\in \{1,\dots ,s\}$ such that $\frac{n-2g}{2}+1> p_i$, so $\gcd(\frac{n-2g}{2}+1,p_1,\dots ,p_s)< \frac{n-2g}{2}+1$, hence there exists $k<\frac{n-2g}{2}+1$ such that $\mathcal{C}_0\oplus (\frac{n-2g}{2}+1)=\mathcal{C}_0\oplus k$. So we have $$\mathcal{C}_0\oplus (\frac{n-2g}{2}+1)\oplus (\frac{n-2g}{2}+2)\oplus \dots = \mathcal{C}_0 \oplus k \oplus (\frac{n-2g}{2}+2) \oplus \dots $$ Then, as in the proof of Proposition~\ref{min:strat:upper:bound}, $$\mathcal{C}_0 \oplus k \oplus (\frac{n-2g}{2}+2)\oplus \dots \oplus (\frac{n-2g}{2}+g) = \mathcal{C}_0 \oplus 1\oplus \dots \oplus 1\oplus s $$ for some $s\in \{1,2\}$. \end{proof} Putting together the last two propositions and the invariants, we have the following theorem. \begin{theorem}\label{th:str:min} Let $\mathcal{H}=\mathcal{H}(n,-p_1,\dots ,-p_s)$ be a minimal stratum of meromorphic differentials on genus $g\geq 2$ surfaces. We have: \begin{enumerate} \item If $n$ is even and $s=1$, then $\mathcal{H}$ has two connected components if $g=2$ and $p_1=2$, and three otherwise. \item If $\mathcal{H}=\mathcal{H}(n,-p,-p)$, with $p$ even, then $\mathcal{H}$ has three connected components. \item If $\mathcal{H}=\mathcal{H}(n,-1,-1)$, then $\mathcal{H}$ has three connected components for $g>2$, two otherwise. \item If $\mathcal{H}=\mathcal{H}(n,-p,-p)$, with $p\neq 1$ odd, then $\mathcal{H}$ has two connected components. \item If all poles are of even degree and we are not in one of the previous cases, then $\mathcal{H}$ has two connected components. \item In the remaining cases, $\mathcal{H}$ is connected. \end{enumerate} \end{theorem} \begin{proof} From Proposition~\ref{min:strat:upper:bound}, when $n$ is odd, which is part of Case~(6), $\mathcal{H}$ is connected. So we can assume that $n$ is even. Let $\mathcal{C}$ be a connected component of $\mathcal{H}$. Let $\mathcal{C}_0$ be the (connected) genus 0 stratum $\mathcal{H}(n-2g,-p_1,\dots ,-p_s)$. From Proposition~\ref{min:strat:upper:bound}, we have one of the three following possibilities.
\begin{enumerate} \item[a)] $\mathcal{C}=\mathcal{C}_0\oplus (\frac{n-2g}{2}+1) \oplus (\frac{n-2g}{2}+2)\oplus\dots \oplus \frac{n}{2} $ \item[b)] $\mathcal{C}=\mathcal{C}_0\oplus 1 \oplus \dots \oplus 1 \oplus 1 $ \item[c)] $\mathcal{C}=\mathcal{C}_0\oplus 1 \oplus \dots \oplus 1\oplus 2$ \end{enumerate} When $\mathcal{H}=\mathcal{H}(n,-p)$ or $\mathcal{H}=\mathcal{H}(n,-p,-p)$, it is easy to see that case $a)$ corresponds to a hyperelliptic connected component, while case $b)$ does not, and neither does $c)$ (except for the case $n-2g=0$ and $g=2$, where $a)$ and $c)$ are the same). When all degrees of zeroes (and poles) are even, Lemma~11 in \cite{KoZo} shows that cases $b)$ and $c)$ correspond to different parities of the spin structure, so they are different connected components. This is also true for $\mathcal{H}(n,-1,-1)$ by Section~\ref{spin:poles:simples}. The arguments of the two previous paragraphs prove the result for Cases (1), (2) and (3). Remark that $(n-2g)-\sum_i p_i=-2$. For Case $(4)$, Proposition~\ref{min:strat:odd:poles:or:non:hyp} shows that there are at most two connected components. Since $n-2g=2p-2>0$, Case~$a)$ corresponds to a hyperelliptic component while $b)$ and $c)$ do not correspond to a hyperelliptic component. So there are at least two components. Since there are odd degree poles, $b)$ and $c)$ correspond to the same component by Proposition~\ref{min:strat:odd:poles:or:non:hyp}. So there are two components. For Case $(5)$, Proposition~\ref{min:strat:odd:poles:or:non:hyp} shows that $a)$ is in the same connected component as $b)$ or $c)$, while Lemma~11 in \cite{KoZo} shows that $b)$ and $c)$ have different parities of the spin structure. For Case~$(6)$ with $n$ even: this corresponds to having at least one odd degree pole, and either at least three poles or two poles of different degrees. Then a direct application of Proposition~\ref{min:strat:odd:poles:or:non:hyp} shows that $a)$, $b)$ and $c)$ are the same connected component. This concludes the proof. \end{proof} \section{Higher genus case: nonminimal strata} \label{sec:nonminimal} The remaining part of the paper uses similar arguments as in Sections~5.2--5.4 in \cite{KoZo}. We quickly recall the three main steps. \begin{itemize} \item Each stratum is adjacent to a minimal stratum, and we can bound the number of connected components of a stratum by the number of connected components of the corresponding minimal one. \item We construct paths in suitable strata with two conical singularities that join the different connected components of the minimal stratum. \item We deduce from the previous arguments upper bounds on the number of connected components of a stratum; lower bounds are given by the topological invariants. \end{itemize} The following proposition is analogous to Corollary~4 in \cite{KoZo}. It is proven there by constructing surfaces with a one-cylinder decomposition. Such surfaces never exist in our case, so we use the infinite zippered rectangle construction instead. \begin{proposition}\label{adj:minimal} Any connected component of a stratum of meromorphic differentials is adjacent to the minimal stratum obtained by collapsing together all the zeroes. \end{proposition} \begin{proof} Let $S$ be in a stratum $\mathcal{H}$ of meromorphic differentials. We prove the result by induction on the number of conical singularities of $S$. We can assume that $S$ is obtained by the previous construction.
If $S$ has more than one conical singularity, then by connectivity of $S$, there is a $D^\pm$ component or a $C^\pm$ component that contains two different conical singularities on its boundary, hence there is a parameter $\zeta_i$ whose corresponding segment on that component joins two different conical singularities. The segment is on the boundary of two components. Assume, for instance, that these are a $D^+$ and a $C^-$ component. Now we just need to check that the surface obtained by shrinking $\zeta_i$ to zero is nondegenerate. It will then correspond to an element of a stratum with one conical singularity fewer. The set $D'^+$ obtained by shrinking $\zeta_i$ to zero from $D^+$ is still a domain as defined in Section~\ref{basic:domains}. The set $C'^-$ obtained by shrinking $\zeta_i$ to zero from $C^-$ is also a domain as defined in Section~\ref{basic:domains}, except if we have $C^-=C^-(\zeta_i)$. But in this case, since the two vertical lines of $C^-$ are identified together, the two endpoints of the segment defined by $\zeta_i$ are necessarily the same singularity, contradicting the hypothesis. So, in any case, we obtain a surface $S'$ with fewer conical singularities. \end{proof} The following proposition is analogous to Corollary~2 in \cite{KoZo}, and is the first step of the proof described at the beginning of this section. The proof of Kontsevich and Zorich uses a deformation theory argument. We propose a proof that uses only flat geometry. \begin{proposition} \label{prop:upper:bound} The number of connected components of a stratum is smaller than or equal to the number of connected components of the corresponding minimal stratum. \end{proposition} \begin{proof} From the previous proposition, any connected component of a stratum $\mathcal{H}=\mathcal{H}(k_1,\dots ,k_r,-p_1,\dots, -p_s)$ is adjacent to the minimal stratum $\mathcal{H}^{min}= \mathcal{H}(k_1+\dots +k_r,-p_1,\dots ,-p_s)$ obtained by collapsing the zeroes. It is enough to show that if $(S_n),\ (S'_n)$ are two sequences in $\mathcal{H}$ that converge to a surface $S\in \mathcal{H}^{min}$, then $S_n$ and $S_n'$ are in the same connected component of $\mathcal{H}$ for $n$ large enough. By definition of the topology on the moduli space of meromorphic differentials, for $n$ large enough, the conical singularities of $S_n$ (resp. $S_n'$) are all in a small disk $D_n$ (resp. $D_n'$) which is embedded in the surface $S_n$ (resp. $S_n'$), and whose boundary is a covering of a metric circle. Note that $D_n$ and $D_n'$ can be chosen arbitrarily small if $n$ is large enough, and we can assume that they have isometric boundaries. Replacing $D_n$ by a disk with a single singularity, one obtains a translation surface $\widetilde{S}_n$ which is very near to $S$, hence in the same connected component, and similarly for $S_n'$. Now we want to deform $D_n$ to obtain $D_n'$. This is done in the following way: $D_n$ can be seen as a subset of a genus zero translation surface $S_1$ in the stratum $\mathcal{H}(k_1,\dots ,k_r,-2-\sum_{i=1}^r k_i)$: we just ``glue'' a neighborhood of a pole to the boundary of the disk $D_n$. We proceed similarly with the disk $D_n'$ and obtain a translation surface $S_2$ in the same stratum as $S_1$. This stratum is connected since the genus is zero. Hence we deduce a continuous transformation that deforms $D_n$ into $D_n'$. From the last two paragraphs, we easily deduce a continuous path from $S_n$ to $S_n'$, which proves the proposition. \end{proof} The following proposition is the second step of the proof.
It is the analogue of Proposition~5 and Proposition~6 in \cite{KoZo}. Our proof is also valid in the Abelian case, and gives an interesting alternative argument. \begin{proposition}\label{join:cc} \begin{enumerate} \item Let $\mathcal{H}=\mathcal{H}(n,-p_1,\dots ,-p_s)$ be a genus $g\geq 2$ minimal stratum whose poles are all of even degree or form the pair $(-1,-1)$. For any $n_1,n_2$ odd such that $n_1+n_2=n$, there is a path $\gamma(t)\in \overline{\mathcal{H}(n_1,n_2,-p_1,\dots ,-p_s)}$ such that $\gamma(0),\gamma(1)\in \mathcal{H}$ and have different parities of the spin structure. \item Let $\mathcal{H}=\mathcal{H}(n,-p_1,\dots ,-p_s)$ be a genus $g\geq 2$ minimal stratum that contains a hyperelliptic connected component. For any $n_1\neq n_2$ such that $n_1+n_2=n$, there is a path $\gamma(t)\in \overline{\mathcal{H}(n_1,n_2,-p_1,\dots ,-p_s)}$ such that $\gamma(0)$ is in a hyperelliptic component of $\mathcal{H}$ and $\gamma(1)$ is in a nonhyperelliptic component of $\mathcal{H}$. \end{enumerate} \end{proposition} \begin{proof} \emph{Case (1)}\\ Let $\mathcal{C}_0=\mathcal{H}(n-2g,-p_1,\dots ,-p_s)$. The connected components of $\mathcal{H}$ given by $\mathcal{C}_0\oplus 1\oplus\dots \oplus 1 \oplus 1$ and $ \mathcal{C}_0\oplus 1\oplus\dots \oplus 1\oplus 2$ have different parities of the spin structure. We can rewrite these components as $ \mathcal{C}\oplus 1$ and $\mathcal{C}\oplus 2$, where $\mathcal{C}= \mathcal{C}_0\oplus 1\oplus\dots \oplus 1$. Fix $S_{g-1}\in \mathcal{C}$. For a surface $S_1\in \mathcal{H}(n,-n)$, one can get a surface $S$ in $\mathcal{H}(n,-p_1,\dots ,-p_s)$ by the following surgery: \begin{itemize} \item Cut $S_{g-1}$ along a small metric circle that turns around the singularity of degree $n-2$, and remove the disk bounded by this circle. \item Cut $S_1$ along a large circle that turns around the pole of order $n$, and rescale $S_1$ such that this circle is isometric to the previous one. Remove the neighborhood of the pole of order $n$ bounded by this circle. \item Glue the two remaining surfaces along these circles, to obtain a surface $S\in \mathcal{H}(n,-p_1,\dots ,-p_s)$. \end{itemize} All choices in the previous construction lead to the same connected component of $\mathcal{H}(n,-p_1,\dots ,-p_s)$, once $S_{g-1}, S_1$ are fixed. Similarly, we can do the same starting from a surface $S_1\in\mathcal{H}(n_1,n_2,-n)$ and get a surface in $\mathcal{H}(n_1,n_2,-p_1,\dots ,-p_s)$. Now we start from a surface $S_{1,1} \in \mathcal{H}(n,-n)$ obtained by bubbling a handle with angle $2\pi$, \emph{i.e.} $S_{1,1}\in \mathcal{H}(n-2,-n)\oplus 1$. The rotation number of this surface is $\gcd(1,n)=1$. Breaking up the singularity into two singularities of order $n_1,n_2$, the rotation number is still $1$. Similarly, start from $S_{1,2}\in \mathcal{H}(n-2,-n)\oplus 2$. Its rotation number is $\gcd(2,n)=2$. Breaking up the singularity into two singularities of order $n_1,n_2$, the rotation number becomes $\gcd(2,n_1,n_2)=1$ since $n_1,n_2$ are odd. Hence there is a path in $\overline{\mathcal{H}(n_1,n_2,-n)}$ that joins $S_{1,1}\in\mathcal{H}(n-2,-n)\oplus 1$ to $S_{1,2}\in \mathcal{H}(n-2,-n)\oplus 2$. From this path, we deduce a path in $\overline{\mathcal{H}(n_1,n_2,-p_1,\dots ,-p_s)}$ that joins $\mathcal{C}\oplus 1$ to $\mathcal{C}\oplus 2$. So Part~$(1)$ of the proposition is proven. \noindent Case (2)\\ The proof is similar to the previous one: the hyperelliptic component of $\mathcal{H}(n,-p_1,\dots ,-p_s)$ is of the kind $\mathcal{C}\oplus \frac{n}{2}$, for some component $\mathcal{C}$.
Any component of the kind $\mathcal{C}\oplus k$, with $k\neq \frac{n}{2}$, is nonhyperelliptic. As before, we use the case of genus one strata. A surface in $\mathcal{H}(n-2,-n)\oplus \frac{n}{2}$ is of rotation number $\gcd(\frac{n}{2},n)=\frac{n}{2}$. Breaking up the singularity of degree $n$ into two singularities of degree $n_1,n_2$, one obtains a surface in $\mathcal{H}(n_1,n_2,-n)$ of rotation number $\gcd(\frac{n}{2},n_1,n_2)$. Since $n_1+n_2=n$ and $n_1\neq n_2$, this rotation number is not $\frac{n}{2}$, but some integer $k\in \{1,\dots ,\frac{n}{2}-1\}$. Hence there is a path in $\overline{\mathcal{H}(n_1,n_2,-n)}$ that joins $\mathcal{H}(n-2,-n)\oplus \frac{n}{2}$ to $\mathcal{H}(n-2,-n)\oplus k$. From this, we deduce the required path in $\overline{\mathcal{H}(n_1,n_2,-p_1,\dots ,-p_s)}$. \end{proof} Now we have all the intermediary results to prove Theorem~\ref{MT2}. \begin{proof}[Proof of Theorem~\ref{MT2}] Let $\mathcal{H}=\mathcal{H}(n_1,\dots ,n_r,-p_1,\dots ,-p_s)$ be a stratum of genus $g\geq 2$ surfaces. Denote by $\mathcal{H}_{min}$ the minimal stratum obtained by collapsing all zeroes. Recall that by Proposition~\ref{prop:upper:bound}, the number of connected components of $\mathcal{H}$ is smaller than or equal to the number of connected components of $\mathcal{H}_{min}$. If $\sum_i p_i$ is odd, then the minimal stratum is connected and therefore the stratum is connected. So we can assume that $\sum_i p_i$ is even. Assume that $\sum p_i>2$ or $g>2$. From Theorem~\ref{th:str:min}, $\mathcal{H}_{min}$, and hence $\mathcal{H}$, has at most three components. We fix some vocabulary: we say that the set of degrees of zeroes (resp. poles) is of \emph{hyperelliptic type} if this set is $\{n,n\}$ or $\{2n\}$ (resp. $\{-p,-p\}$ or $\{-2p\}$), \emph{i.e.} it is the set of degrees of zeroes or poles of a hyperelliptic component. Note that the set of degrees of poles is of hyperelliptic type if and only if the corresponding minimal stratum contains a hyperelliptic connected component. We will also say that the set of degrees of poles (resp. zeroes) is of \emph{even type} if these degrees are all even or if they are $\{-1,-1\}$. When the poles are of even type, the underlying minimal stratum has two nonhyperelliptic components distinguished by the parity of the spin structure. \begin{itemize} \item If the stratum is $\mathcal{H}(n,n,-2p)$ or $\mathcal{H}(n,n,-p,-p)$, there is a hyperelliptic connected component. The corresponding minimal stratum $\mathcal{H}(2n,*)$ has one hyperelliptic component and at least one nonhyperelliptic component. It is easy to see that breaking up the singularity of degree $2n$ into two singularities of degree $n$, starting from a nonhyperelliptic translation surface, gives a surface in a nonhyperelliptic connected component. So, the stratum $\mathcal{H}(n,n,*)$ has one hyperelliptic connected component and at least one nonhyperelliptic connected component. \item If the set of degrees of poles and zeroes is of even type, we know from Theorem~\ref{th:str:min} that the minimal stratum has two nonhyperelliptic components (and possibly one hyperelliptic). Breaking up the singularity into even degree singularities preserves the spin structure, which therefore gives at least two nonhyperelliptic components in the stratum. \end{itemize} From the above description, we obtain lower bounds on the number of connected components. In particular, we see that if the degrees of zeroes and poles are both of hyperelliptic and even type, $\mathcal{H}$ has at least, hence exactly, three connected components. 
Also, if the set of degrees of zeroes and poles is of hyperelliptic or even type, $\mathcal{H}$ has at least two connected components. Now we give upper bounds. \begin{enumerate} \item Assume that the poles are of hyperelliptic and even type, \emph{i.e.} the minimal stratum has three connected components. Denote respectively by $\mathcal{C}^{hyp}, \mathcal{C}^{odd}$ and $\mathcal{C}^{even}$ the connected components of $\mathcal{H}$ that are adjacent respectively to the three connected components of $\mathcal{H}_{min}$, $\mathcal{H}_{min}^{hyp},\mathcal{H}_{min}^{odd}$ and $\mathcal{H}_{min}^{even}$. For any $j\in \{1,\dots ,r\}$, the stratum $\mathcal{H}(n_j,\sum_{i\neq j} n_i,-p_1,\dots ,-p_s)$ is adjacent to $\mathcal{H}_{min}$. \begin{itemize} \item If the zeroes are not of hyperelliptic type, we can choose, $n_i$ so that $n_i\neq \sum_{i\neq j} n_i$, and by Proposition~\ref{join:cc} there is a path in $\mathcal{H}(n_j,\sum_{i\neq j} n_i,-p_1,\dots ,-p_s)$ joining the hyperelliptic component of $\mathcal{H}_{min}$ to a nonhyperelliptic connected component. Breaking up the singularity of order $\sum_{i\neq j}^r n_i$ along this path into singularities of order $(n_i)_{i\neq j}$, we obtain a path in $\mathcal{H}$ that joins a neighborhood of $\mathcal{H}_{min}^{hyp}$ to a neighborhood of a nonhyperelliptic component of $\mathcal{H}_{min}$. Hence, we necessarily have $\mathcal{C}^{hyp}=\mathcal{C}^{odd}$ or $\mathcal{C}^{hyp}=\mathcal{C}^{even}$. \item If the zeroes are not even, we conclude similarly that $\mathcal{C}^{odd}=\mathcal{C}^{even}$ \item Note that if the zeroes are neither of hyperelliptic type nor of even type, then $\mathcal{C}^{even}=\mathcal{C}^{odd}=\mathcal{C}^{hyp}$, so there is only one component for $\mathcal{H}$. \end{itemize} \item Assume that the poles are of hyperelliptic type but not of even type. The minimal stratum has two connected components, so there are at most two connected components for $\mathcal{H}$. If the zeroes are of hyperelliptic type, we have already seen that there are two components. Assume the zeroes are not of hyperelliptic type. Denote respectively by $\mathcal{C}^{hyp}, \mathcal{C}^{nonhyp}$ the connected components of $\mathcal{H}$ that are adjacent respectively to the hyperelliptic and the nonhyperelliptic component of $\mathcal{H}_{min}$. By the same argument as in $(1)$, using Proposition~\ref{join:cc} we have $\mathcal{C}^{hyp}=\mathcal{C}^{nonhyp}$, so $\mathcal{H}$ is connected. \item Assume that the poles are of even type but not of hyperelliptic type. The minimal stratum has two connected components distinguished by the parity of the spin structure. So there are at most two components for $\mathcal{H}$. If the zeroes are of even type, there are exactly two connected component for $\mathcal{H}$, that are distinguished by the parity of the spin structure. If the zeroes are not of even type, denote respectively by $\mathcal{C}^{odd}, \mathcal{C}^{even}$ the connected components of $\mathcal{H}$ that are adjacent respectively to the two components of $\mathcal{H}_{min}$. By the same argument as in $(1)$, using Proposition~\ref{join:cc} we have $\mathcal{C}^{odd}=\mathcal{C}^{even}$. \item Assume that the poles are neither of hyperelliptic nor of even type, then the minimal stratum is connected, so $\mathcal{H}$ is connected. \end{enumerate} It remains to prove the theorem when $g=2$ and $\sum_{i} p_i=2$. The minimal stratum has two connected components. 
In this case, it is equivalent to say that the zeroes are of hyperelliptic type or to say that they are of even type. If $\mathcal{H}=\mathcal{H}(2,2,*)$ or $\mathcal{H}(4,*)$, the stratum has at least two components, so exactly two. Otherwise, the stratum is adjacent to $\mathcal{H}(3,1,*)$, which connects $\mathcal{H}_{min}^{odd}$ to $\mathcal{H}_{min}^{even}$, hence $\mathcal{H}$ is connected. \end{proof} \appendix \section{Negative results for meromorphic differentials} In this section, we quickly give some examples to show that many well-known results for the dynamics on translation surfaces are false in the case of translation surfaces with poles. \subsection{Dynamics of the geodesic flow} On a standard translation surface, the geodesic flow is uniquely ergodic in almost every direction. From the result of Proposition~\ref{no:vertical}, in almost every direction on a translation surface with poles, all infinite orbits for the geodesic flow converge to a pole. \subsection{Cylinders and closed geodesics} On a standard translation surface, there always exist infinitely many closed geodesics (hence cylinders). For the case of translation surfaces with poles, one can consider the following example. Take the plane $\mathbb{C}$, remove the inside of a square, and glue together by translation the corresponding opposite sides. One gets a surface in $\mathcal{H}(-2,2)$. It is easy to see that there are exactly two saddle connections joining the conical singularity to itself and no closed geodesic. A similar example in $\mathcal{H}(-2,1,1)$, obtained by removing a regular hexagon, gives an example without a single saddle connection joining a conical singularity to itself. \subsection{$SL_2(\mathbb{R})$ action} The $SL_2(\mathbb{R})$ action on the strata of the moduli space of Abelian differentials is ergodic. This is not the case for the moduli space of meromorphic differentials if we consider the (infinite) volume form defined by the flat local coordinates. Indeed, consider the stratum $\mathcal{H}(-2,2)$, which is connected. Consider the set of surfaces obtained with the infinite zippered rectangle construction, by gluing together the set $D^+(z_1,z_2)$ and the set $D^-(z_2,z_1)$. It is easy to see that if $Im(z_2)<0<Im(z_1)$, there are no cylinders on the surface, while if $Im(z_2)>0>Im(z_1)$, there is a cylinder on the surface. These two cases form two nonintersecting open subsets of $\mathcal{H}(-2,2)$. Considering $SL_2(\mathbb{R})$ orbits, we obtain two disjoint $SL_2(\mathbb{R})$-invariant open subsets of a connected stratum. \nocite{*} \end{document}
\begin{document} \date{} \title{ Distributed Optimization Over Dependent Random Networks} \sloppy \begin{abstract} We study the averaging-based distributed optimization solvers over random networks. We show a general result on the convergence of such schemes using weight-matrices that are row-stochastic almost surely and column-stochastic in expectation for a broad class of dependent weight-matrix sequences. In addition to implying many of the previously known results on this domain, our work shows the robustness of distributed optimization results to link-failure. Also, it provides a new tool for synthesizing distributed optimization algorithms. {To prove our main theorem, we establish new results on the rate of convergence analysis of averaging dynamics over (dependent) random networks. These secondary results, along with the required martingale-type results to establish them, might be of interest to a broader research endeavors in distributed computation over random networks. } \end{abstract} \section{Introduction} Distributed optimization has received increasing attention in recent years due to its applications in distributed control of robotic networks \cite{bullo2009distributed}, study of opinion dynamics in social networks \cite{hegselmann2002opinion}, distributed estimation and signal processing \cite{rabbat2004distributed,cattivelli2010diffusion,stankovic2011decentralized}, and power networks \cite{dominguez2011distributed,dominguez2012decentralized,cherukuri2015distributed}. In distributed optimization, we are often interested in finding an optimizer of a decomposable function $F({z})=\sum_{i=1}^{n}f_{i}({z})$ such that $f_i(\cdot)$s are distributed through a network of $n$ agents, i.e., agent $i$ only knows $f_i(\cdot)$, and we are seeking to solve this problem without sharing the local objective function $f_i(z)$. Therefore, the goal is to find distributed dynamics over time-varying networks that, asymptotically, all the nodes agree on an optimizer of $F(\cdot)$. The most well-know algorithm that achieves this is, what we refer to as, the averaging-based distributed optimization solver where each node maintains an estimate of an optimal point, and at each time step, each node computes the average of the estimates of its neighbors and performs (sub-)gradient descent on its local objective function \cite{nedic2009distributed}. However, in order for such an algorithm to converge, the corresponding weight matrices should be doubly stochastic. While making a row-stochastic or a column-stochastic matrix is easy, this is not the case for doubly stochastic matrices. Therefore, it is often assumed that such a doubly stochastic matrix sequence is given. A solution to overcome this challenge is to establish more complicated distributed algorithms that effectively \textit{reconstruct} the average-state distributively. The first algorithm in this category was proposed in \cite{rabbat}, which is called subgradient-push (or push-sum), and later was extended for time-varying networks \cite{nedic2014distributed}. In this scheme, the weight matrices are assumed to be column-stochastic, and through the use of auxiliary state variables the approximate average state is reconstructed. Another scheme in this category that works with row-stochastic matrices, but does not need the column-stochastic assumption, is proposed in \cite{mai2016distributed,xi2018linear}. However, to use this scheme, every node needs to be assigned and know its unique label. 
Assigning those labels distributively is also another challenge in this respect. In addition, both these schemes invoke division operation which results in theoretical challenges in establishing their stability in random networks \cite{rezaeinia2019push,rezaeinia2020distributed}. {Another solution to address this challenge is to use gossip \cite{7165636} and broadcast gossip \cite{5585721} algorithms over random networks. The weight matrices of gossip algorithms are row-stochastic and in-expectation column-stochastic. This fact was generalized in \cite{7849184}, where it is proven that it is sufficient to have row-stochastic weight matrices, that are column-stochastic in-expectation. In all the above works on distributed optimization over random networks, all weight matrices are assumed to be independent and identically distributed (i.i.d.). In \cite{lobel2011distributed}, the broader class of random networks, which is Markovian networks was studied in distributed optimization; however, weight matrices were assumed to be doubly stochastic almost surely. Also, our work is closely related to the existing works on distributed averaging on random networks \cite{consensus1,consensus2,consensus3,touri2014endogenous}. In this paper, we study distributed optimization over random networks, where the randomness is not only time-varying but also, possibly, dependent on the past. Under the standard assumptions on the local objective functions and step-size sequences for the gradient descent algorithm, we show that the averaging-based distributed optimization solver at each node converges to a global optimizer almost surely if the weight matrices are row-stochastic almost surely, column-stochastic {in-expectation}, and satisfy certain connectivity assumptions. } The paper is organized as follows: we conclude this section by introducing mathematical notations that will be used subsequently. In Section \ref{sec:ProblemStatement}, we formulate the problem of interest and state the main result of this work and discuss some of its immediate consequences. To prove the main result, first we study the behavior of the distributed averaging dynamics over random networks in Section~\ref{sec:ConsensusWithoutPerturbation}. Then, in Section \ref{sec:ConsensusWithPerturbation}, we extent this analysis to the dynamics with arbitrary control inputs. Finally, the main result is proved in Section \ref{sec:Convergence}. We conclude this work in Section~\ref{Conclusion}. \textbf{Notation and Basic Terminology:} The following notation will be used throughout the paper. We let $[n]\triangleq\{1,\ldots,n\}$. We denote the space of real numbers by $\mathbb{R}$ and natural (positive integer) numbers by $\mathbb{N}$. We denote the space of $n$-dimensional real-valued vectors by $\mathbb{R}^n$. In this paper, all vectors are assumed to be column vectors. The transpose of a vector $x\in \mathbb{R}^n$ is denoted by $x^T$. For a vector $x\in \mathbb{R}^n$, $x_i$ represents the $i$th coordinate of $x$. We denote the all-one vector in $\mathbb{R}^n$ by $e^n=[1,1,\ldots,1]^T$. We drop the superscript $n$ in $e^n$ whenever the dimension of the space is understandable from the context. A non-negative matrix $A$ is a row-stochastic (column-stochastic) matrix if $Ae=e$ ($e^TA= e^T$). Let $(\Omega,\mathcal{F},\text{Pr})$ be a probability space and let $\{W(t)\}$ be a chain of random matrices, i.e., for all $t\geq 0$ and $i,j\in[n]$, $w_{ij}(t):\Omega\to\mathbb{R}$ is a Borel-measurable function. 
For random vectors (variables) { ${\bf x}(1),\ldots,{\bf x}(t)$, we denote the sigma-algebra generated by these random variables by $\sigma({\bf x}(0),\ldots,{\bf x}(t))$.} We say that $\{\mathcal{F}(t)\}$ is a filtration for $(\Omega,\mathcal{F})$ if $\mathcal{F}(0)\subseteq \mathcal{F}(1)\subseteq \cdots \subseteq \mathcal{F}$. Further, we say that a random process $\{V(t)\}$ (of random variables, vectors, or matrices) is adapted to $\{\mathcal{F}(t)\}$ if $V(t)$ is measurable with respect to $\mathcal{F}(t)$. Throughout this paper we mainly deal with directed graphs. A directed graph $\mathcal{G}=([n],\mathcal{E})$ (on $n$ vertices) is defined by a vertex set (identified by) $[n]$ and an edge set $\mathcal{E}\subset [n]\times [n]$. {A graph $\mathcal{G}=([n],\mathcal{E})$ has a spanning directed rooted tree if it has a vertex $r\in [n]$ as \textit{a} root such that there exists a (directed) path from $r$ to every other vertex in the graph.} For a matrix $A=[a_{ij}]_{n\times n}$, the associated directed graph with parameter $\gamma>0$ is the graph $\mathcal{G}^\gamma(A)=([n],\mathcal{E}^\gamma(A))$ with the edge set $\mathcal{E}^\gamma(A)=\{(j,i)\mid i,j\in [n],a_{ij}>\gamma \}$. Later, we fix the value $0<\gamma<1$ throughout the paper and hence, unless otherwise stated, for notational convenience, we use $\mathcal{G}(A)$ and $\mathcal{E}(A)$ instead of $\mathcal{G}^\gamma(A)$ and $\mathcal{E}^\gamma(A)$. { The function $f:\mathbb{R}^m\to\mathbb{R}$ is convex if for all $x,y\in\mathbb{R}^m$ and all $\theta\in[0,1]$, \begin{align*} f(\theta x+(1-\theta)y)\leq \theta f(x)+(1-\theta)f(y). \end{align*} We say that $g\in\mathbb{R}^m$ is a subgradient of the function $f(\cdot)$ at $\hat{x}$ if for all $x\in\mathbb{R}^m$, $f(x)-f(\hat{x})\geq \lrangle{g,x-\hat{x}},$ where $\lrangle{u_1,u_2}=u_1^Tu_2$ is the standard inner product in $\mathbb{R}^m$.} The set of all subgradients of $f(\cdot)$ at $x$ is denoted by $\nabla f(x)$. For a convex function $f(\cdot)$, $\nabla f(x)$ is not empty for all $x\in\mathbb{R}^m$ (see e.g., Theorem 3.1.15 in \cite{nesterov2018lectures}). Finally, for convenience and due to the frequent use of $\ell_\infty$ norm in the study of averaging dynamics, we use $\|\cdot\|$ to denote the $\ell_\infty$ norm $\|x\|\triangleq \max_{i\in [m]}|x_i|$. \section{Problem Formulation and Main Result}\label{sec:ProblemStatement} In this section, we discuss the main problem and the main result of this work. The proof of the result is provided in the subsequent sections. \subsection{General Framework} Consider a communication network with $n$ nodes or agents such that node $i$ has the cost function $f_i:\mathbb{R}^m\to\mathbb{R}$. Let $F({z})\triangleq\sum_{i=1}^{n}f_{i}({z})$. The goal of this paper is to solve \begin{align}\label{eqn:mainProblem} \arg\min_{{z}\in\mathbb{R}^m}F({z}) \end{align} distributively with the following assumption on the objective function. \begin{assumption}[Assumption on the Objective Function]\label{assump:AssumpOnFunction} We assume that: \begin{enumerate}[(a)] \item All $f_i({z})$ are convex functions over $\mathbb{R}^m$. \item The optimizer set $\mathcal{Z}\triangleq\arg\min_{{z}\in\mathbb{R}^m}F({z})$ is non-empty. \item The subgradients $f_i({z})$s are uniformly upper bounded, i.e., for all $g\in \nabla f_i(z)$, $\|g\|\leq L_i$ for all $z\in \mathbb{R}^m$ and all $i\in[n]$. 
{We let $L\triangleq\sum_{i=1}^nL_i$.} \end{enumerate} \end{assumption} In this paper, we are dealing with the dynamics of the $n$ agents' estimates of an optimizer $z^*\in \mathcal{Z}$, which we denote by ${\bf x}_i(t)$ for all $i\in[n]$. Therefore, we view ${\bf x}(t)$ as a \textbf{vector} of $n$ elements in the vector space $\mathbb{R}^m$. A distributed solution of \eqref{eqn:mainProblem} was first proposed in \cite{nedic2009distributed} using the following deterministic dynamics \begin{align*} {{\bf x}}_i(t+1)&=\sum_{j=1}^n w_{ij}(t+1){{\bf x}}_j(t)-\alpha(t){{\bf g}}_i(t) \end{align*} for $t\geq t_0$, for an initial time $t_0\in \mathbb{N}$ and initial conditions ${\bf x}_i(t_0)\in \mathbb{R}^m$ for all $i\in [n]$, where ${\bf g}_i(t)\in\mathbb{R}^m$ is a subgradient of $f_{i}(z)$ at $z={\bf x}_{i}(t)$ for $i\in[n]$, and $\{\alpha(t)\}$ is a step-size sequence (in \cite{nedic2009distributed} the constant step-size variant of this dynamics was studied). {We simply refer to this dynamics as the averaging-based distributed optimization solver}. We can compactly write the above dynamics as \begin{align}\label{eqn:MainLineAlgorithm} {{\bf x}}(t+1)&=W(t+1){{\bf x}}(t)-\alpha(t){{\bf g}}(t), \end{align} where ${{\bf g}}(t)=[{\bf g}_1(t),\ldots,{\bf g}_n(t)]^T$ is the vector of the subgradient vectors and matrix multiplication should be understood over the vector space $\mathbb{R}^m$, i.e., \[[W(t+1){{\bf x}}(t)]_i\triangleq\sum_{j=1}^nw_{ij}(t+1){\bf x}_j(t).\] In distributed optimization, the goal is to find distributed dynamics for the ${\bf x}_i(t)$ such that $\lim_{t\to\infty}{\bf x}_i(t)=z$, where $z\in \mathcal{Z}$, for all $i\in[n]$. \subsection{Our Contribution} In this paper, we consider the random variant of \eqref{eqn:MainLineAlgorithm}, i.e., the case where $\{W(t)\}$ is a chain of random matrices. This random variant was first studied in \cite{lobel2010distributed}, where, to ensure convergence, it was assumed that this sequence is {doubly stochastic almost surely} and i.i.d. This was generalized to Markovian random networks in \cite{lobel2011distributed}. The dynamics~\eqref{eqn:MainLineAlgorithm} with i.i.d.\ weight matrices that are row-stochastic almost surely and column-stochastic \textit{in-expectation} was studied in \cite{7849184}. A special case of \cite{7849184} is the asynchronous gossip algorithm that was introduced in \cite{5585721}. In this work, we provide an overarching framework for the study of~\eqref{eqn:MainLineAlgorithm} with random weight matrices that are row-stochastic almost surely and column-stochastic in-expectation, and that are not necessarily independent. More precisely, we make the following assumptions. \begin{assumption}[Stochastic Assumption]\label{assump:AssumpOnStochasticity} We assume that the weight matrix sequence $\{W(t)\}$, adapted to a filtration $\{\mathcal{F}(t)\}$, satisfies \begin{enumerate}[(a)] \item For all $t\geq t_0$, $W(t)$ is row-stochastic almost surely. \item For every $t> t_0$, $\mathbb{E}[W(t)\mid \mathcal{F}(t-1)]$ is column-stochastic (and hence, doubly stochastic) almost surely. \end{enumerate} \end{assumption} Similar to other works in this domain, our goal is to ensure that $\lim_{t\to\infty}{\bf x}_i(t)=z$ {\it almost surely} for some optimal $z\in \mathcal{Z}$ and for all $i\in[n]$. To reach such a consensus value, we need to ensure enough flow of information between the agents, i.e., the associated graph sequence of $\{W(t)\}$ satisfies some form of connectivity over time. 
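For illustration only, the following is a minimal simulation sketch of the dynamics \eqref{eqn:MainLineAlgorithm} with weight matrices of the type allowed by Assumption~\ref{assump:AssumpOnStochasticity}: at every step a uniformly chosen node averages its estimate with that of a uniformly chosen other node (an asymmetric gossip update), which yields matrices that are row-stochastic almost surely and column-stochastic in expectation. The scalar objectives $f_i(z)=|z-c_i|$, the specific weight model, and the step-size choice are assumptions made for this sketch only and are not part of the model.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, T = 20, 20000
c = rng.normal(size=n)      # f_i(z) = |z - c_i|; F is minimized at any median of c
x = rng.normal(size=n)      # initial estimates x_i(t_0)

def gossip_weight_matrix(n, rng):
    # A uniformly chosen node i averages with a uniformly chosen node j != i.
    # This W is row-stochastic almost surely and column-stochastic in expectation.
    W = np.eye(n)
    i, j = rng.choice(n, size=2, replace=False)
    W[i, i] = W[i, j] = 0.5
    return W

for t in range(1, T + 1):
    alpha = (t + 1) ** (-0.75)   # satisfies the step-size assumption below
    g = np.sign(x - c)           # subgradients of f_i at x_i (bounded by L_i = 1)
    x = gossip_weight_matrix(n, rng) @ x - alpha * g

F = lambda z: np.abs(z - c).sum()
print("spread of the estimates:", x.max() - x.min())            # should be small
print("suboptimality of x_bar :", F(x.mean()) - F(np.median(c)))
\end{verbatim}
In such a run, the estimates should agree up to a small spread and concentrate near a median of the $c_i$, i.e., near a minimizer of $F$.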
More precisely, we assume the following connectivity conditions. \begin{assumption}[Conditional $B$-Connectivity Assumption]\label{assump:AssumpOnConnectivity} We assume that for all $t \geq t_0$ \begin{enumerate}[(a)] \item \label{cond:connecta} Every node in $\mathcal{G}(W(t))$ has a self-loop, almost surely. \item \label{cond:bconnectivity}There exists an integer $B>0$ such that the random graph $\mathcal{G}_B(t)=([n],\mathcal{E}_{B}(t))$ where \begin{align*} \mathcal{E}_{B}(t)=\bigcup_{\tau=tB+1}^{(t+1)B}\mathcal{E}(\mathbb{E}[W(\tau)|\mathcal{F}(tB)]) \end{align*} has a spanning rooted tree almost surely. \end{enumerate} \end{assumption} Note that Assumption~\ref{assump:AssumpOnConnectivity}-\eqref{cond:bconnectivity} is satisfied if the random graph with vertex set $[n]$ and the edge set $\bigcup_{\tau=tB+1}^{(t+1)B}\mathcal{E}(\mathbb{E}[W(\tau)|\mathcal{F}(\tau-1)])$ has a spanning rooted tree almost surely. Finally, we assume the following standard condition on the step-size sequence $\{\alpha(t)\}$. \begin{assumption}[Assumption on Step-size]\label{assump:AssumpOnStepSize} For the step-size sequence $\{\alpha(t)\}$, we assume that $0<\alpha(t)\leq Kt^{-\beta}$ for some $K,\beta>0$ and all $t\geq t_0$, $\lim_{t\to\infty}\frac{\alpha(t)}{\alpha(t+1)}=1$, and \begin{align}\label{eqn:assumptionOnAlpha} \sum_{t=t_0}^{\infty}\alpha(t)=\infty~~\mbox{ and }~~\sum_{t=t_0}^{\infty}\alpha^2(t)<\infty. \end{align} \end{assumption} The main result of this paper is the following theorem. \begin{theorem}\label{thm:MainTheoremDisOpt} Under the Assumptions \ref{assump:AssumpOnFunction}-\ref{assump:AssumpOnStepSize} on the model and the dynamics~\eqref{eqn:MainLineAlgorithm}, {$\lim_{t\to\infty}{\bf x}_i(t)=z^*$ almost surely for all $i\in[n]$ and all initial conditions ${\bf x}_i(t_0)\in \mathbb{R}^m$, where $z^*$ is a random vector that is supported on the optimal set $\mathcal{Z}$. } \end{theorem} Before continuing with the technical details of the proof, let us first discuss some of the higher-level implications of this result: \textbf{1. Gossip-based sequential solvers}: Gossip algorithms, which were originally studied in \cite{boyd2006randomized,aysal2009broadcast}, have been used in solving distributed optimization problems \cite{5585721,7165636}. In gossip algorithms, at each round, a node randomly wakes up and shares its value with all or some of its neighbors. However, it is possible to leverage Theorem \ref{thm:MainTheoremDisOpt} to synthesize algorithms that do not require choosing a node {independently and uniformly at random} or use other coordination methods to update information at every round. An example of such a scheme is as follows: \begin{example} Consider a connected {\it undirected} network\footnote{The graphs do not need to be time-invariant, and this example can be extended to processes over underlying time-varying graphs.} $\mathcal{G}=([n],E)$. Consider a token that is handed sequentially in the network and initially it is handed to an arbitrary agent $\ell(0)\in[n]$ in the network. If at time $t\geq 0$, agent $\ell(t)\in[n]$ is in the possession of the token, it chooses one of its neighbors $s(t+1)\in [n]$ randomly and by flipping a coin, i.e., with probability $\frac{1}{2}$ shares its information to $s(t+1)$ and passes the token and with probability $\frac{1}{2}$ keeps the token and asks for information from $s(t+1)$. 
It means \begin{align*} \ell(t+1)=\begin{cases}\ell(t),& \mbox{with probability $\frac{1}{2}$}\\ s(t+1),& \mbox{with probability $\frac{1}{2}$}\end{cases}. \end{align*} Finally, the agent $\ell(t+1)$, who has the token at time $t+1$ and is receiving the information, does \begin{align*} {\bf x}_{\ell(t+1)}(t+1)=\frac{1}{2}({\bf x}_{s(t+1)}(t)+{\bf x}_{\ell(t)}(t))-\alpha(t){\bf g}_{\ell(t+1)}(t). \end{align*} For the other agents $i\not=\ell(t+1)$, we set \begin{align*} {\bf x}_i(t+1)={\bf x}_i(t)-\alpha(t){\bf g}_i(t). \end{align*} Let $\mathcal{F}(t)=\sigma({\bf x}(0),\ldots,{\bf x}(t),\ell(t))$, and the weight matrix $W(t)=[w_{ij}(t)]$ be \begin{align*} w_{ij}(t)=\begin{cases}\frac{1}{2},& i=j=\ell(t)\\ \frac{1}{2},& i=\ell(t),j\in\{s(t),\ell(t-1)\}\setminus\{\ell(t)\}\\ 1,& i=j\not=\ell(t)\\ 0,& \mbox{otherwise} \end{cases}, \end{align*} which is the weight matrix of this scheme. Note that $\mathbb{E}[W(t)|\mathcal{F}(t-1)]=V(\ell(t-1))$ where $V(h)=[v_{ij}(h)]$ with \begin{align*} v_{ij}(h)=\begin{cases}\frac{3}{4},& i=j=h\\ \frac{1}{4\delta_i},& i=h,(i,j)\in E \\ \frac{1}{4\delta_i},& j=h,(i,j)\in E\\ 1,& i=j\not=h\\ 0,& \mbox{otherwise} \end{cases}, \end{align*} where $\delta_i$ is the degree of the node $i$. Note that the matrix $\mathbb{E}[W(t)|\mathcal{F}(t-1)]$ is doubly stochastic, satisfies Assumption \ref{assump:AssumpOnConnectivity}-(\ref{cond:connecta}), and only depends on $\ell(t-1)$. Now, we need to check whether $\{W(t)\}$ satisfies Assumption \ref{assump:AssumpOnConnectivity}-(\ref{cond:bconnectivity}). We have \begin{align*} \mathbb{E}[W(t+n)|\mathcal{F}(t)] &\stackrel{}{=}\mathbb{E}[\mathbb{E}[W(t+n)\mid\mathcal{F}(t+n-1)]\mid\mathcal{F}(t)]\cr &\stackrel{}{=}\mathbb{E}[V(\ell(t+n-1))\mid \mathcal{F}(t)]\cr &=\mathbb{E}\left[\sum_{i=1}^nV(\ell(t+n-1))1_{\{\ell(t+n-1)=i\}}\bigg{|}\mathcal{F}(t)\right]\cr &\stackrel{}{=}\sum_{i=1}^{n}\mathbb{E}[V(i)1_{\{\ell(t+n-1)=i\}}\mid\mathcal{F}(t)]\cr &\stackrel{}{=}\sum_{i=1}^{n}V(i)\mathbb{E}[1_{\{\ell(t+n-1)=i\}}\mid\mathcal{F}(t)]. \end{align*} If the network is connected, starting from any vertex, after $n-1$ steps, the probability of reaching any other vertex is at least $(2\Delta)^{-(n-1)}>0$, where $\Delta\triangleq \max_{i\in [n]}\delta_i$. Therefore, we have $\mathbb{E}[1_{\{\ell(t+n-1)=i\}}|\mathcal{F}(t)]>0$ for all $i\in[n]$ and $t$, and hence, Assumption \ref{assump:AssumpOnConnectivity}-(\ref{cond:bconnectivity}) is satisfied with $B=n$. \end{example} \textbf{2. Robustness to link-failure}: Our result shows that \eqref{eqn:MainLineAlgorithm} is robust to random link-failures. Note that the results such as \cite{lobel2010distributed} will not imply the robustness of the algorithms to link failure as it assumes that the resulting weight matrices remain doubly stochastic. To show the robustness of averaging-based solvers, suppose that we have a deterministic doubly stochastic sequence $\{A(t)\}$, and suppose that each link at any time $t$ fails with some probability $p(t)>0$. More precisely, let $B(t)$ be a failure matrix where $b_{ij}(t)=0$ if a failure on link $(i,j)$ occurs at time $t$ and otherwise $b_{ij}(t)=1$ and we have \begin{align}\label{eqn:E1bijtuFt1begincases} \mathbb{E}[b_{ij}(t)| \mathcal{F}(t-1)]=1-p(t), \end{align} for $i,j\in[n]$. For example, if $B(t)$ is independent and identically distributed, i.e., \[b_{ij}(t)=\begin{cases} 0,&\text{ with probability $p$}\cr 1,& \text{ with probability $1-p$}\end{cases},\] then $B(t)$ satisfies \eqref{eqn:E1bijtuFt1begincases}. 
{Define $W(t)=[w_{ij}(t)]$ as follows \begin{align*} w_{ij}(t)\triangleq\begin{cases} a_{ij}(t)b_{ij}(t),&i\not=j\cr 1-\sum_{j\not=i}a_{ij}(t)b_{ij}(t),& i=j \end{cases}. \end{align*} Note that $W(t)$ is row-stochastic, and since $A(t)$ is column-stochastic, $\mathbb{E}[W(t)|\mathcal{F}(t-1)]$ is column-stochastic. Thus, Theorem \ref{thm:MainTheoremDisOpt}, using $W(t)$, translates to a theorem on robustness of the distributed dynamics \eqref{eqn:MainLineAlgorithm}: as long as the connectivity conditions of Theorem~\ref{thm:MainTheoremDisOpt} hold, the dynamics will reach a minimizer of the distributed problem almost surely. For example, if the link failure probability satisfies $p(t)\leq \bar{p}$ for all $t$ and some $\bar{p}<1$, our result implies that the result of Proposition 4 in \cite{do1} (for the unconstrained case) would still hold under the above link-failure model. It is worth mentioning that if $\{A(t)\}$ is time-varying, then $\mathbb{E}[W(t)]$ would be time-varying and hence, the previous results on distributed optimization using i.i.d.\ row-stochastic weight matrices that are column-stochastic in-expectation \cite{7849184} would not imply such a robustness result. } \section{Autonomous Averaging Dynamics}\label{sec:ConsensusWithoutPerturbation} To prove Theorem \ref{thm:MainTheoremDisOpt}, we need to study the time-varying distributed averaging dynamics with a particular control input (gradient-like dynamics). To do this, first we study the autonomous averaging dynamics (i.e., without any input) and then, we use the established results to study the controlled dynamics. For this, consider the time-varying distributed averaging dynamics \begin{align}\label{eqn:unperturbed} {\bf x}(t+1)=W(t+1){\bf x}(t), \end{align} where $\{W(t)\}$ satisfies Assumption \ref{assump:AssumpOnConnectivity}. Defining the transition matrix \begin{align*} \Phi(t,\tau)\triangleq W(t)\cdots W(\tau+1), \end{align*} and $\Phi(\tau,\tau)=I$, we have ${\bf x}(t)=\Phi(t,\tau){\bf x}(\tau)$. {Note that since the $W(t)$s are row-stochastic matrices (a.s.) and the set of row-stochastic matrices is a semi-group (with respect to multiplication), the transition matrices $\Phi(t,\tau)$ are all row-stochastic matrices (a.s.). } {We say that a chain $\{W(t)\}$ achieves \textit{consensus} for the initial time $t_0 \in \mathbb{N}$ if for all $i$, $\lim_{t\rightarrow\infty }\|{\bf x}_i(t)-\tilde{x}\|=0$ almost surely, for all choices of initial condition ${\bf x}(t_0)\in (\mathbb{R}^m)^n$ in \eqref{eqn:unperturbed} and some random vector $\tilde{x}=\tilde{x}_{{\bf x}(t_0)}$.} It can be shown that an equivalent condition for consensus (for time $t_0$) is that $\lim_{t\rightarrow\infty}\Phi(t,t_0)=e \pi^T(t_0)$ for a random stochastic vector $\pi(t_0,\omega)\in \mathbb{R}^n$, almost surely, where $\omega\in \Omega$ is a sample point. For a matrix $A=[a_{ij}]$, let \begin{align*} {\rm diam}(A)=\max_{i,j\in[n]}\frac{1}{2}\sum_{\ell=1}^n |a_{i\ell}-a_{j\ell}|, \end{align*} and the mixing parameter \begin{align*} \Lambda(A)=\min_{i,j\in[n]}\sum_{\ell=1}^n \min\{a_{i\ell},a_{j\ell}\}. \end{align*} Note that for a row-stochastic matrix $A$, ${\rm diam}(A)\in [0,1]$. For a vector ${\bf x}=[{\bf x}_{i}]$ where ${\bf x}_i\in \mathbb{R}^m$ for all $i$, let \begin{align*} {\rm d}({\bf x})=\max_{i,j\in[n]}\|{\bf x}_i-{\bf x}_j\|. \end{align*} Note that ${\rm d}({\bf x})\leq 2\max_{i\in[n]}\|{\bf x}_i\|$. 
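As a quick numerical sanity check of these quantities, the following sketch computes ${\rm diam}(A)$, $\Lambda(A)$, and ${\rm d}({\bf x})$ and verifies, on random instances, the identity ${\rm diam}(A)=1-\Lambda(A)$ and the contraction properties stated in Lemma~\ref{lem:DiamBehrouz} below. The Dirichlet sampling of the row-stochastic matrices is an assumption made only for this illustration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def diam(A):
    # diam(A) = max_{i,j} (1/2) sum_l |a_{il} - a_{jl}|
    n = len(A)
    return max(0.5 * np.abs(A[i] - A[j]).sum() for i in range(n) for j in range(n))

def mixing(A):
    # Lambda(A) = min_{i,j} sum_l min(a_{il}, a_{jl})
    n = len(A)
    return min(np.minimum(A[i], A[j]).sum() for i in range(n) for j in range(n))

def d(x):
    # d(x) = max_{i,j} ||x_i - x_j||_infty, for x with rows x_1, ..., x_n
    n = len(x)
    return max(np.abs(x[i] - x[j]).max() for i in range(n) for j in range(n))

n = 5
A = rng.dirichlet(np.ones(n), size=n)   # random row-stochastic matrices
B = rng.dirichlet(np.ones(n), size=n)
x = rng.normal(size=(n, 3))

assert np.isclose(diam(A), 1.0 - mixing(A))        # diam(A) = 1 - Lambda(A)
assert diam(A @ B) <= diam(A) * diam(B) + 1e-12    # submultiplicativity
assert d(A @ x) <= diam(A) * d(x) + 1e-12          # contraction of d under A
print("all checks passed")
\end{verbatim}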
Also, if we have consensus, then $\lim_{t\to\infty}{\rm d}({\bf x}(t))=0$ and $\lim_{t\to\infty}{\rm diam}(\Phi(t,t_0))=0$ and in fact, the reverse implications are true \cite{chatterjee1977towards}, i.e., a chain achieves consensus (for time $t_0$) if and only if $\lim_{t\to\infty}{\rm d}({\bf x}(t))=0$ for all ${\bf x}(t_0)\in (\mathbb{R}^{m})^n$ or $\lim_{t\to\infty}{\rm diam}(\Phi(t,t_0))=0$. The following results relating the above quantities are useful for our future discussions. \begin{lemma}[\cite{hajnal1958weak,shen2000geometric}]\label{lem:lambdadiam} For $n\times n$ row-stochastic matrices $A,B$, we have \begin{align*} {\rm diam}(AB)\leq(1-\Lambda(A)){\rm diam}(B). \end{align*} \end{lemma} \begin{lemma}\label{lem:DiamBehrouz} For any $n\times n$ row-stochastic matrices $A,B$, we have \begin{enumerate}[(a)] \item ${\rm d}(A{\bf x})\leq{\rm diam}(A){\rm d}({\bf x})$ for all ${\bf x} \in(\mathbb{R}^m)^n$, \item ${\rm d}({\bf x}+{\bf y})\leq{\rm d}({\bf x})+{\rm d}({\bf y})$ for all ${\bf x},{\bf y}\in (\mathbb{R}^m)^n$, \item ${\rm diam}(A)=1-\Lambda(A)$, \item ${\rm diam}(AB)\leq {\rm diam}(A){\rm diam}(B)$, and \item $\left\|{\bf x}_i-\sum_{j=1}^{n}\pi_j{\bf x}_j\right\|\leq {\rm d}({\bf x})$ for all $i\in[n]$, ${\bf x}\in (\mathbb{R}^m)^n$, and any stochastic vector $\pi\in [0,1]^n$ (i.e., $\sum_{i=1}^n\pi_i=1$). \end{enumerate} \end{lemma} \pf The proof is provided in Appendix. \qed The main goal of this section is to obtain an exponentially decreasing upper bound (in terms of $t_1-\tau_1$ and $t_2-\tau_2$) on $\mathbb{E}[{\rm diam}(\Phi(t_2,\tau_2)){\rm diam}(\Phi(t_1,\tau_1))\mid \mathcal{F}(\tau_1)]$. Using this result and the connectivity assumption \ref{assump:AssumpOnConnectivity}, we can show that the transition matrices $\Phi(t,s)$ become mixing \textit{in-expectation} for large enough $t-s$. \begin{lemma}\label{lem:ExpectationLambda} Under Assumption \ref{assump:AssumpOnConnectivity} (Connectivity), there exists a parameter $\theta>0$ such that for every $s\geq t_0$, we have almost surely \begin{align*} \mathbb{E}[\Lambda(\Phi((n^2+s)B,sB))\mid\mathcal{F}(sB)]\geq\theta. \end{align*} \end{lemma} \pf Fix $s\geq 0$. Let $\mathbb{T}$ be the set of all collection of edges $E$ such that the graph $([n],E)$ has a spanning rooted tree, and for $k\in[n^2]$, \begin{align*} \mathcal{E}_{B}^{}(k)\triangleq\bigcup\limits_{\tau=(s+k-1)B+1}^{(s+k)B}\mathcal{E}^{}(\mathbb{E}[W(\tau)|\mathcal{F}((s+k-1)B)]). \end{align*} For notational simplicity, denote $\mathcal{F}(sB)$ by $\mathcal{F}$ and $\mathcal{F}((s+k)B)$ by $\mathcal{F}_k$ for $k\in[n^2]$. Let $V=\{\omega\mid\forall k~\mathcal{E}_{B}^{}(k)\in\mathbb{T}\}$. From Assumption \ref{assump:AssumpOnConnectivity}, we have $P(V)=1$. For $\omega\in V$ and $k\geq 1$, define the random graph $([n],\mathcal{T}_k)$ on $n$ vertices by \begin{align*} \mathcal{T}_k&=\begin{cases} \mathcal{T}_{k-1},& \mbox{if } \mathcal{T}_{k-1}\in \mathbb{T}\\ \mathcal{T}_{k-1}\cup\{u_k\},& \mbox{if } \mathcal{T}_{k-1}\not\in \mathbb{T} \end{cases}, \end{align*} with $ \mathcal{T}_0=\emptyset$, where \begin{align}\label{eqn:ukinEEBkcap} u_k\in \mathcal{E}_{B}^{}(k)\cap\overline{\mathcal{T}}_{k-1}, \end{align} and $\overline{\mathcal{T}}_{k}$ is the edge-set of the complement graph of $([n],{\mathcal{T}}_{k})$. 
Note that since $\mathcal{E}_{B}^{}(k)$ has a spanning rooted tree, if $\mathcal{T}_{k-1}\not\in \mathbb{T}$, then $\mathcal{E}_B(k)$ should contain an edge that does not belong to ${\mathcal{T}}_{k-1}$, which we identify it as $u_k$ in \eqref{eqn:ukinEEBkcap}. Hence, $\mathcal{T}_k$ is well-defined. Since there are at most $n(n-1)$ potential edges in a graph on $n$ vertices, $\mathcal{T}_{n^2}$ has a spanning rooted tree for $\omega\in V$. For $k\in[n^2]$, let \begin{align*} \mathcal{D}_{B}^{}(k)\triangleq\bigcup\limits_{\tau=(s+k-1)B+1}^{(s+k)B}\mathcal{E}^{\nu}(W(\tau)), \end{align*} for some fixed $0<\nu<\gamma$, and \[\mathcal{H}_{}^{}(k)\triangleq\cup_{\tau=1}^{k}\mathcal{D}_{B}^{}(\tau).\] Consider the sequences of events $\{U_k\}$ defined by $U_k\triangleq\left\{\omega\in V \mid \mathcal{T}_{k}\subset\mathcal{H}_{}^{}(k)\right\}$, for $k\geq 1$, and $U_0=V$. Note that if $\mathcal{T}_{k-1}\in \mathbb{T}$, then $\mathcal{T}_{k-1}\subset\mathcal{H}_{}^{}(k-1)$ implies $\mathcal{T}_{k}\subset\mathcal{H}_{}^{}(k)$, and if $\mathcal{T}_{k-1}\not\in \mathbb{T}$, then $\mathcal{T}_{k-1}\subset\mathcal{H}_{}^{}(k-1)$ and $u_k\in\mathcal{D}_{B}^{}(k)$ imply $\mathcal{T}_{k}\subset\mathcal{H}_{}^{}(k)$. Hence, for $k\geq 1$ \begin{align}\label{eqn:DefinitionofUk} 1_{\{U_k\}}\geq1_{\{U_{k-1}\}}1_{\{\mathcal{T}_{k-1}\not\in \mathbb{T}\}}1_{\{u_k\in\mathcal{D}_{B}^{}(k)\}}+1_{\{U_{k-1}\}}1_{\{\mathcal{T}_{k-1}\in \mathbb{T}\}}. \end{align} On the other hand, from Tower rule, we have \begin{align}\label{eqn:E1Uk1mathcalTk-1notinmathbbT} \mathbb{E}[1_{\{U_{k-1}\}}&1_{\{\mathcal{T}_{k-1}\not\in \mathbb{T}\}}1_{\{u_k\in\mathcal{D}_{B}^{}(k)\}}\mid \mathcal{F}]\\ &=\mathbb{E}[1_{\{U_{k-1}\}}1_{\{\mathcal{T}_{k-1}\not\in \mathbb{T}\}}\mathbb{E}[1_{\{u_k\in\mathcal{D}_{B}^{}(k)\}}|\mathcal{F}_{k-1}]\mid\mathcal{F}].\nonumber \end{align} Let $u_k(\omega)=(j_k(\omega),i_k(\omega))$. Since $u_k\in \mathcal{E}_{B}(k)$, there exists $(s+k-1)B<\tau_k\leq (s+k)B$ such that \[u_k\in\mathcal{E}^{}(\mathbb{E}[W(\tau_k)|\mathcal{F}_{k-1}]),\] and, we have \begin{align}\label{eqn:E1nu11wikjkkgeq} \mathbb{E}[(1-\nu) 1_{\{1-w_{i_kj_k}(\tau_k)\geq 1-\nu\}}|\mathcal{F}_{k-1}]\nonumber &\leq \mathbb{E}[1-w_{i_kj_k}(\tau_k)|\mathcal{F}_{k-1}] \leq 1-\gamma. \end{align} Therefore, \begin{align*} \mathbb{E}[1_{\{u_k\in\mathcal{D}_{B}^{}(k)\}}\mid\mathcal{F}_{k-1}]&\geq\mathbb{E}[1_{\{u_k\in\mathcal{E}^{\nu}(W(\tau_k))\}}\mid\mathcal{F}_{k-1}]\cr &=\mathbb{E}[ 1_{\{w_{i_kj_k}(\tau_k)> \nu\}}\mid\mathcal{F}_{k-1}]\cr &=1-\mathbb{E}[ 1_{\{w_{i_kj_k}(\tau_k)\leq \nu\}}\mid\mathcal{F}_{k-1}]\cr &=1-\mathbb{E}[ 1_{\{1-w_{i_kj_k}(\tau_k)\geq 1-\nu\}}\mid\mathcal{F}_{k-1}]\cr &\geq 1-\frac{1-\gamma}{1-\nu}\triangleq{p}>0, \end{align*} which holds as $\nu<\gamma$. This inequality and \eqref{eqn:E1Uk1mathcalTk-1notinmathbbT} imply that \begin{align*} \mathbb{E}[1_{\{U_{k-1}\}}1_{\{\mathcal{T}_{k-1}\not\in \mathbb{T}\}}&1_{\{u_k\in\mathcal{D}_{B}^{}(k)\}}\mid \mathcal{F}]\geq p\mathbb{E}[1_{\{U_{k-1}\}}1_{\{\mathcal{T}_{k-1}\not\in \mathbb{T}\}}\mid\mathcal{F}]. 
\end{align*} Therefore, \eqref{eqn:DefinitionofUk} implies \begin{align*} \mathbb{E}[1_{\{U_k\}}|\mathcal{F}] &\geq p\mathbb{E}[1_{\{U_{k-1}\}}1_{\{\mathcal{T}_{k-1}\not\in \mathbb{T}\}}|\mathcal{F}]+\mathbb{E}[1_{\{U_{k-1}\}}1_{\{\mathcal{T}_{k-1}\in \mathbb{T}\}}|\mathcal{F}]\cr &\geq p\left(\mathbb{E}[1_{\{U_{k-1}\}}1_{\{\mathcal{T}_{k-1}\not\in \mathbb{T}\}}|\mathcal{F}]+\mathbb{E}[1_{\{U_{k-1}\}}1_{\{\mathcal{T}_{k-1}\in \mathbb{T}\}}|\mathcal{F}]\right)\cr &=p\mathbb{E}[1_{\{U_{k-1}\}}|\mathcal{F}], \end{align*} and hence $\mathbb{E}[1_{\{U_k\}}|\mathcal{F}]\geq p^k$. Finally, since $\mathcal{T}_{n^2}$ has a spanning rooted tree, from Lemma~1 in \cite{lobel2010distributed}, we have \begin{align*} \Lambda(W(n^2B,\omega)\cdots W(n(n-1)B+n-1,\omega)\cdots W(1,\omega))\geq \nu^{n^2B}, \end{align*} for $\omega\in U_{n^2}$. Therefore, we have \begin{align*} \mathbb{E}[\Lambda(\Phi((n^2+s)B,sB))\mid\mathcal{F}(sB)]&\geq \nu^{n^2B}\mathbb{E}[1_{\{U_{n^2}\}}\mid \mathcal{F}]\geq \nu^{n^2B}{p}^{n^2}\triangleq\theta>0, \end{align*} which completes the proof.\qed Finally, we need the following result, which is proved in Appendix, to prove the main result of this section. \begin{lemma}\label{lem:ProductwithHistory} For a random process $\{Y(k)\}$, adapted to a filtration $\{\mathcal{F}(k)\}$, let \[\mathbb{E}[Y(k)|\mathcal{F}(k-1)]\leq a(k)\] for $K_1\leq k\leq K_2$ almost surely, where $K_1\leq K_2$ are arbitrary positive integers and $a(k)$s are (deterministic) scalars. Then, we have almost surely \begin{align*} \mathbb{E}\left[\prod_{k=K_1}^{K_2}Y(k)\bigg{|}\mathcal{F}(K_1-1)\right]\leq \prod_{k=K_1}^{K_2}a(k). \end{align*} \end{lemma} Now, we are ready to prove the main result for the convergence rate of the autonomous random averaging dynamics. \begin{lemma}\label{lem:ExpectationDiam} Under Assumption \ref{assump:AssumpOnConnectivity} (Connectivity), there exist $0<C$ and $0\leq \lambda<1$ such that for every $t_0\leq \tau_1\leq t_1$ and $t_0\leq \tau_2\leq t_2$ with $\tau_1\leq\tau_2$, we have almost surely \begin{align*} \mathbb{E}[{\rm diam}(\Phi(t_2,\tau_2)){\rm diam}(\Phi(t_1,\tau_1))|\mathcal{F}(\tau_1)]\leq C\lambda^{t_1-\tau_1}\lambda^{t_2-\tau_2}. \end{align*} \end{lemma} \pf First, we prove \begin{align}\label{eqn:EdiammPhittauFtau} \mathbb{E}[{\rm diam}(\Phi(t,\tau))|\mathcal{F}(\tau)]\leq \tilde{C}\tilde{\lambda}^{t-\tau}, \end{align} for some $0<\tilde{C}$ and $0\leq \tilde{\lambda}<1$. Let $s\triangleq\lceil \frac{\tau}{B}\rceil$ and $K\triangleq\lfloor\frac{t-sB}{n^2B}\rfloor$. We have \begin{align*} \mathbb{E}[{\rm diam}(\Phi(t,\tau))|\mathcal{F}(\tau)] &= \mathbb{E}\Bigg{[}{\rm diam}\Bigg{(}\Phi(t,sB+Kn^2B)\left[\prod_{k=1}^{K}\!\Phi(sB+kn^2B,sB+(k-1)n^2B)\!\right]\!\Phi(sB,\tau)\!\Bigg{)}\Bigg{|}\mathcal{F}(\tau)\Bigg{]}\cr &\stackrel{(a)}{\leq} \!\mathbb{E}\left[\prod_{k=1}^{K}\!(1-\Lambda(\Phi(sB+kn^2B,sB+(k-1)n^2B))\Bigg{|}\mathcal{F}(\tau)\right]\cr &\stackrel{(b)}{\leq} (1-\theta)^{K}\cr &\leq \tilde{C}(1-\theta)^{\frac{t-\tau}{n^2B}}, \end{align*} where $\tilde{C}=(1-\theta)^{-1-\frac{1}{n^2}}$ and $(a)$ follows from Lemma~\ref{lem:lambdadiam}, Lemma~\ref{lem:DiamBehrouz}-(d), and {${\rm diam}(\Phi(.,.))\leq1$}, and $(b)$ follows from Lemma \ref{lem:ExpectationLambda} and \ref{lem:ProductwithHistory}. Since $\theta>0$, we have $\tilde{\lambda}\triangleq (1-\theta)^{\frac{1}{n^2B}}<1$. 
To prove the main statement, we consider two cases: \begin{enumerate}[(i)] \item intervals $(\tau_1,t_1]$ and $(\tau_2,t_2]$ do not have an intersection, and \item $(\tau_1,t_1]$ and $(\tau_2,t_2]$ intersect. \end{enumerate} For case (i), since the two intervals do not overlap, we have $t_1\leq\tau_2$, and hence \begin{align*} \mathbb{E}[{\rm diam}(\Phi(t_2,\tau_2)){\rm diam}(\Phi(t_1,\tau_1))|\mathcal{F}(\tau_1)] &=\mathbb{E}\bigg{[}\mathbb{E}[{\rm diam}(\Phi(t_2,\tau_2))|\mathcal{F}(\tau_2)]{\rm diam}(\Phi(t_1,\tau_1))\bigg{|}\mathcal{F}(\tau_1)\bigg{]}\cr &\leq \tilde{C}\tilde{\lambda}^{t_1-\tau_1}\tilde{C}\tilde{\lambda}^{t_2-\tau_2}, \end{align*} which follows from \eqref{eqn:EdiammPhittauFtau}. For case (ii), let us write the union of the intervals $(\tau_1,t_1]$ and $(\tau_2,t_2]$ as the disjoint union of three intervals: \[(\tau_1,t_1]\cup(\tau_2,t_2]=(s_1,s_2]\cup (s_2,s_3]\cup (s_3,s_4],\] for $s_1\leq s_2\leq s_3\leq s_4$, where $(s_2,s_3]\triangleq(\tau_1,t_1]\cap(\tau_2,t_2]$ and $(s_1,s_2]\cup(s_3,s_4]\triangleq(\tau_1,t_1]\triangle(\tau_2,t_2]$. {Using this, it can be verified that} \begin{align*} \mathbb{E}[{\rm diam}(\Phi(t_2,\tau_2)){\rm diam}(\Phi(t_1,\tau_1))|\mathcal{F}(\tau_1)] &\stackrel{(a)}{\leq}\!\!\mathbb{E}[{\rm diam}(\Phi(s_4,s_3)){\rm diam}^2(\Phi(s_3,s_2)){\rm diam}(\Phi(s_2,s_1))|\mathcal{F}(\tau_1)]\cr &\stackrel{(b)}{\leq}\!\!\mathbb{E}[{\rm diam}(\Phi(s_4,s_3)){\rm diam}(\Phi(s_3,s_2)){\rm diam}(\Phi(s_2,s_1))|\mathcal{F}(\tau_1)]\cr &\stackrel{(c)}{\leq} \tilde{C}\tilde{\lambda}^{s_2-s_1}\tilde{C}\tilde{\lambda}^{s_3-s_2}\tilde{C}\tilde{\lambda}^{s_4-s_3}\cr &= \tilde{C}\tilde{\lambda}^{s_2-s_1}\tilde{C}\sqrt{\tilde{\lambda}}^{2(s_3-s_2)}\tilde{C}\tilde{\lambda}^{s_4-s_3}\cr &\leq \tilde{C}^3\sqrt{\tilde{\lambda}}^{t_1-\tau_1}\sqrt{\tilde{\lambda}}^{t_2-\tau_2}, \end{align*} {where $(a)$ follows from Lemma \ref{lem:DiamBehrouz}-(d), $(b)$ follows from {${\rm diam}(A)\leq1$} for all row-stochastic matrices $A$, and $(c)$ follows from \eqref{eqn:EdiammPhittauFtau} and Lemma \ref{lem:ProductwithHistory}}. Letting $C\triangleq \max\{\tilde{C}^2, \tilde{C}^3\}$ and $\lambda\triangleq \sqrt{\tilde{\lambda}}$, we arrive at the conclusion. \qed \section{Averaging Dynamics with Gradient-Flow-Like Feedback}\label{sec:ConsensusWithPerturbation} In this section, we study the controlled linear time-varying dynamics \begin{align}\label{eqn:EpsilonAlgorithm} {{\bf x}}(t+1)&=W(t+1){{\bf x}}(t)+{\bf u}(t). \end{align} Note that the feedback ${\bf u}(t)=-\alpha(t) {\bf g}(t)$ leads to the dynamics \eqref{eqn:MainLineAlgorithm}. The goal of this section is to establish bounds on the convergence rate of ${\rm d}({\bf x})$ (to zero) in-expectation and almost surely for a class of regularized inputs ${\bf u}(t)$. We start with the following two lemmas. \begin{lemma}\label{lem:d_x_t} For the dynamics \eqref{eqn:EpsilonAlgorithm} and every $t_0\leq \tau\leq t$, we have \begin{align}\label{lem:lemma5} {\rm d}({\bf x}(t))& \stackrel{}{\leq}{\rm diam}(\Phi(t,\tau)){\rm d}({\bf x}(\tau))+\sum_{s=\tau}^{t-1}{\rm diam}(\Phi(t,s+1)){\rm d}({\bf u}(s)). \end{align} \end{lemma} \pf Note that the general solution for the dynamics \eqref{eqn:EpsilonAlgorithm} is given by \begin{align}\label{eqn:PhiPerturbation} {\bf x}(t)=\Phi(t,\tau){\bf x}(\tau)+\sum_{s=\tau}^{t-1}\Phi(t,s+1){\bf u}(s). 
\end{align} Therefore, using the sub-linearity property of ${\rm d}(\cdot)$ (Lemma \ref{lem:DiamBehrouz}-(b)), we have \begin{align*} {\rm d}({\bf x}(t))&{\leq}{\rm d}(\Phi(t,\tau){\bf x}(\tau))+\sum_{s=\tau}^{t-1}{\rm d}(\Phi(t,s+1){\bf u}(s))\cr &{\leq}{\rm diam}(\Phi(t,\tau)){\rm d}({\bf x}(\tau))+\sum_{s=\tau}^{t-1}{\rm diam}(\Phi(t,s+1)){\rm d}({\bf u}(s)), \end{align*} where the last inequality follows from Lemma \ref{lem:DiamBehrouz}-(a). \qed \begin{lemma}\label{lem:approxfunction} Let $\{\beta(t)\}$ be a positive (scalar) sequence such that $\lim_{t\to\infty}\frac{\beta(t)}{\beta(t+1)}=1$. Then for any $\theta\in[0,1)$, there exists some $M>0$ such that \begin{align*} \sum_{s=\tau}^{t-1}\beta(s)\theta^{t-s}\leq M\beta(t), \end{align*} for all $t\geq \tau\geq t_0$. \end{lemma} \pf The proof is provided in Appendix.\qed To prove the main theorem, we need to study how fast $\mathbb{E}[{\rm d}({\bf x}(t))]$ and $\mathbb{E}[{\rm d}^2({\bf x}(t))]$ approach to zero when the diameter of the control input ${\rm d}({\bf u}(t))$ goes to zero. Since $\mathbb{E}[{\rm d}^2({\bf x}(t))]\geq \mathbb{E}^2[{\rm d}({\bf x}(t))]$, it suffice to study convergence rate of $\mathbb{E}[{\rm d}^2({\bf x}(t))]$. \begin{lemma}\label{lem:ConsensusConveregnseVariance} Under Assumptions \ref{assump:AssumpOnStochasticity}, \ref{assump:AssumpOnConnectivity}, and \ref{assump:AssumpOnStepSize}, if almost surely ${\rm d}({\bf u}(t))<q\alpha(t)$ for some $q>0$, then we have, \[\frac{\mathbb{E}[{\rm d}^2({\bf x}(t))]}{\alpha^2(t)}\leq \hat{M}\] for some $\hat{M}>0$ and all $t\geq t_0$. \end{lemma} \pf Taking the square of both sides of \eqref{lem:lemma5}, for $t>\tau\geq t_0$, we have \begin{align*} {\rm d}^2({\bf x}(t)) &\stackrel{}{\leq}{\rm diam}^2(\Phi(t,\tau)){\rm d}^2({\bf x}(\tau)) +2{\rm diam}(\Phi(t,\tau)){\rm d}({\bf x}(\tau))\sum_{s=\tau}^{t-1}{\rm diam}(\Phi(t,s+1)){\rm d}({\bf u}(s))\cr &\qquad\qquad+\sum_{s=\tau}^{t-1}\sum_{\ell=\tau}^{t-1}{\rm diam}(\Phi(t,s+1)){\rm d}({\bf u}(s)){\rm diam}(\Phi(t,\ell+1)){\rm d}({\bf u}(\ell)). \end{align*} Taking the expectation of both sides of the above inequality, and using ${\rm d}({\bf u}(t))<q\alpha(t)$ almost surely, we have \begin{align*} \mathbb{E}[{\rm d}^2({\bf x}(t))] &\stackrel{}{\leq}\mathbb{E}[{\rm diam}^2(\Phi(t,\tau)){\rm d}^2({\bf x}(\tau))] +2\sum_{s=\tau}^{t-1}\mathbb{E}\big{[}{\rm diam}(\Phi(t,\tau)){\rm d}({\bf x}(\tau)){\rm diam}(\Phi(t,s+1)){\rm d}({\bf u}(s))\big{]}\cr &\qquad\qquad+\sum_{s=\tau}^{t-1}\sum_{\ell=\tau}^{t-1}\mathbb{E}\big{[}{\rm diam}(\Phi(t,s+1)){\rm d}({\bf u}(s)){\rm diam}(\Phi(t,\ell+1)){\rm d}({\bf u}(\ell))\big{]}\cr &\stackrel{}{\leq}\mathbb{E}\bigg{[}\mathbb{E}[{\rm diam}^2(\Phi(t,\tau))|\mathcal{F}(\tau)]{\rm d}^2({\bf x}(\tau))\bigg{]}+2\sum_{s=\tau}^{t-1}\mathbb{E}\bigg{[}\mathbb{E}[{\rm diam}(\Phi(t,\tau)){\rm diam}(\Phi(t,s+1))|\mathcal{F}(\tau)]{\rm d}({\bf x}(\tau))\bigg{]}\alpha(s)q\cr &\qquad\qquad+\sum_{s=\tau}^{t-1}\sum_{\ell=\tau}^{t-1}\mathbb{E}[{\rm diam}(\Phi(t,s+1)){\rm diam}(\Phi(t,\ell+1))]\alpha(s)\alpha(\ell)q^2. 
\end{align*} Therefore, from Lemma \ref{lem:ExpectationDiam}, we have \begin{align*} \mathbb{E}[{\rm d}^2({\bf x}(t))] &\stackrel{}{\leq}C\lambda^{2(t-\tau)}\mathbb{E}[{\rm d}^2({\bf x}(\tau))]+\frac{2Cq}{\lambda}\lambda^{t-\tau}\mathbb{E}[{\rm d}({\bf x}(\tau))]\sum_{s=\tau}^{t-1}\lambda^{t-s}\alpha(s)+\frac{Cq^2}{\lambda^2}\sum_{s=\tau}^{t-1}\sum_{\ell=\tau}^{t-1}\alpha(s)\alpha(\ell)\lambda^{t-s}\lambda^{t-\ell}\cr &\stackrel{}{\leq}C\lambda^{2(t-\tau)}\mathbb{E}[{\rm d}^2({\bf x}(\tau))]+\frac{2CqM}{\lambda}\lambda^{t-\tau}\mathbb{E}[{\rm d}({\bf x}(\tau))]\alpha(t)+\frac{Cq^2M^2}{\lambda^2}\alpha^2(t), \end{align*} where the last inequality follows from Lemma \ref{lem:approxfunction} and the fact that \begin{align*} \sum_{s=\tau}^{t-1}\sum_{\ell=\tau}^{t-1}\alpha(s)\alpha(\ell)\lambda^{t-s}\lambda^{t-\ell}=\left(\sum_{s=\tau}^{t-1}\alpha(s)\lambda^{t-s}\right)^2. \end{align*} Dividing both sides of the above inequality by $\alpha^2(t)$ and noting $\frac{\alpha(\tau)}{\alpha(t)}\lambda^{t-\tau}=\prod_{\kappa=\tau}^{t-1}\frac{\alpha(\kappa)}{\alpha(\kappa+1)}\lambda$, we have \begin{align*} &\frac{\mathbb{E}[{\rm d}^2({\bf x}(t))]}{\alpha^2(t)} \stackrel{}{\leq}C\frac{\mathbb{E}[{\rm d}^2({\bf x}(\tau))]}{\alpha^2(\tau)}\left(\prod_{\kappa=\tau}^{t-1}\frac{\alpha(\kappa)}{\alpha(\kappa+1)}\lambda\right)^2+2\frac{CqM}{\lambda}\frac{\mathbb{E}[{\rm d}({\bf x}(\tau))]}{\alpha(\tau)}\left(\prod_{\kappa=\tau}^{t-1}\frac{\alpha(\kappa)}{\alpha(\kappa+1)}\lambda\right)+\frac{Cq^2M^2}{\lambda^2}. \end{align*} Since $\lim_{\tau\to\infty}\frac{\alpha(\tau)}{\alpha(\tau+1)}=1$, for any $\hat{\lambda}\in (\lambda,1)$, there exists $\hat{\tau}$ such that for $\tau\geq \hat{\tau}$, we have $\frac{\alpha(\tau)}{\alpha(\tau+1)}\lambda\leq \hat{\lambda}$. Therefore, \begin{align*} \frac{\mathbb{E}[{\rm d}^2({\bf x}(t))]}{\alpha^2(t)} &\stackrel{}{\leq}C\frac{\mathbb{E}[{\rm d}^2({\bf x}(\tau))]}{\alpha^2(\tau)}\hat{\lambda}^{2(t-\tau)}+\frac{2CqM}{\lambda}\frac{\mathbb{E}[{\rm d}({\bf x}(\tau))]}{\alpha(\tau)}\hat{\lambda}^{t-\tau}+\frac{Cq^2M^2}{\lambda^2}. \end{align*} Taking the limit of the above inequality, we get \begin{align*} \limsup_{t\to\infty}\frac{\mathbb{E}[{\rm d}^2({\bf x}(t))]}{\alpha^2(t)} &\stackrel{}{\leq}\lim_{t\to\infty}C\frac{\mathbb{E}[{\rm d}^2({\bf x}(\tau))]}{\alpha^2(\tau)}\hat{\lambda}^{2(t-\tau)}+\lim_{t\to\infty}\frac{2CqM}{\lambda}\frac{\mathbb{E}[{\rm d}({\bf x}(\tau))]}{\alpha(\tau)}\hat{\lambda}^{t-\tau}+\frac{Cq^2M^2}{\lambda^2}\cr &=\frac{Cq^2M^2}{\lambda^2}. \end{align*} As a result, there exists an $\hat{M}>0$ such that $\frac{\mathbb{E}[{\rm d}^2({\bf x}(t))]}{\alpha^2(t)}\leq \hat{M}$. \qed To prove the main theorem, we also need to show that ${\rm d}({\bf x}(t))$ converges to zero almost surely (as will be proved in Lemma~\ref{lem:ConsensusConveregnse}). To do so, using the previous results, we will show (in Lemma~\ref{lem:ConsensusConveregnse}) that \begin{align*} \mathbb{E}[{\rm d}({\bf x}(t))|\mathcal{F}(\tau)]\leq\mathbb{E}[{\rm diam}(\Phi(t,\tau))|\mathcal{F}(\tau)]{\rm d}({\bf x}(\tau))+K\tau^{-\gamma}, \end{align*} for some $K,\gamma>0$. However, since $\sum_{\tau=t_0}^\infty \tau^{-\gamma}$ is not necessarily summable, we cannot use the standard Robbins--Siegmund Theorem \cite{ROBBINS1971233} to argue ${\rm d}({\bf x}(t))\to 0$ based on this inequality. We will use the facts that $\mathbb{E}[{\rm diam}(\Phi(t,\tau))|\mathcal{F}(\tau)]<1$ if $t-\tau$ is large enough, and ${\rm diam}(\Phi(t,\tau))\leq 1$ for all $t\geq \tau \geq t_0$. 
This leads us to prove a martingale-type result in Lemma \ref{lem:Dooblike}. To prove this lemma, we first show the following. \begin{lemma}\label{lem:SLLNlike} Consider a non-negative random process $a(t)$ such that \[\mathbb{E}[a(t+1)\mid \mathcal{F}(t)]\leq\tilde{\lambda}\] for some $\tilde{\lambda}<1$ and all $t\geq 0$. For $\lambda$ satisfying $\tilde{\lambda}<{\lambda}<1$, define the sequence of stopping-times $\{t_s\}_{s\geq 0}$ by \[t_s\triangleq\inf\{t>t_{s-1}|a(t)\leq \lambda\},\] with $t_0=0$. Then, $\lim_{s\to\infty}(t_{s+1}-t_s)t_s^{-\beta}=0$ almost surely for all $\beta>0$. \end{lemma} \pf Let us define the martingale $S(t)$ by \begin{align*} S(t)=S(t-1)+\left(1_{\{a(t)> \lambda\}}-\mathbb{E}[1_{\{a(t)>\lambda\}}|\mathcal{F}(t-1)]\right), \end{align*} where $S(0)=0$. Noting $|S(t+1)-S(t)|\leq 1$, from Azuma's inequality, we have \begin{align}\label{eqn:AzumaForSSigam} P(S(t+\sigma)-S(t)>\sigma\rho)\leq \exp\left(-\frac{\sigma^2\rho^2}{2\sigma}\right), \end{align} for all $\sigma\in \mathbb{N}$ and $\rho\in(0,1)$. For $\theta>0$, let the sequences of events \begin{align*} A_{\theta}(t)\triangleq\left\{\omega\big{|}S(t+\lfloor \theta t^\beta \rfloor)-S(t)>\lfloor \theta t^\beta \rfloor\rho\right\}. \end{align*} From \eqref{eqn:AzumaForSSigam}, we have \begin{align*} P(A_{\theta}(t))\leq\exp\left(-\frac{1}{2}(\theta t^\beta-1)\rho^2\right), \end{align*} implying $\sum_{t=1}^{\infty}P(A_{\theta}(t))<\infty$ as $\exp\left({-t^\beta}\right)\leq \frac{M}{t^2}$ for sufficiently large $M$ (depending on $\beta$). Therefore, the Borel–Cantelli Theorem implies that $P(\{A_{\theta}(t)\quad \text{i.o.}\})=0$ for all $\theta>0$. For $\theta>0$, let the sequences of events \begin{align*} B_{\theta}(t)\triangleq\left\{\omega\bigg{|}\frac{\hat{t}-t}{t^{\beta}}>\theta \mbox{ where } \hat{t}=\inf\{\tau>t|a(\tau)\leq \lambda\}\right\}. \end{align*} We show that $B_{\theta}(t)\subset A_{\theta}(t)$ for all $t,\theta$. Fix a constant $\rho\in (0,1)$ such that $1-\frac{{\tilde{\lambda}}}{\lambda}>\rho$. Since $\mathbb{E}[a(\tau)\mid \mathcal{F}(\tau-1)]\leq \tilde{\lambda}$, we have \begin{align*} \mathbb{E}[\lambda 1_{\{a(\tau)\geq \lambda\}}\mid \mathcal{F}(\tau-1)]&\leq \mathbb{E}[a(\tau)\mid \mathcal{F}(\tau-1)]\cr &\leq \tilde{\lambda}<\lambda(1-\rho) \end{align*} and hence, \begin{align}\label{eqn:rhoE1ataugeqlambdamidFtau} \mathbb{E}[ 1_{\{a(\tau)\geq \lambda\}}\mid \mathcal{F}(\tau-1)]<1-\rho. \end{align} Let $\sigma(t)\triangleq\lfloor \theta t^\beta \rfloor$. If $\hat{t}-t>\theta t^\beta$, then \begin{align*} S(t+\sigma(t))-S(t)&= \sigma(t)-\sum_{\tau=t+1}^{t+\sigma(t)} \mathbb{E}[1_{\{a(\tau)>\lambda\}}|\mathcal{F}(\tau-1)]\cr &> \sigma(t)-\sigma(t)(1-\rho)=\sigma(t)\rho, \end{align*} which follows from \eqref{eqn:rhoE1ataugeqlambdamidFtau}. Therefore, we have $B_{\theta}(t)\subset A_{\theta}(t)$, and hence, $P(\{B_{\theta}(t)\quad \text{i.o.}\})=0$ for all $\theta>0$. Finally, by contradiction, we show that $\lim_{s\to\infty}(t_{s+1}-t_s)t_s^{-\beta}=0$. Since, if $\lim_{s\to\infty}(t_{s+1}-t_s)t_s^{-\beta}\not=0$ almost surely, then $\limsup_{s\to\infty}(t_{s+1}-t_s)t_s^{-\beta}>0$ almost surely, and hence, $P\left(\limsup_{s\to\infty}(t_{s+1}-t_s)t_s^{-\beta}>\epsilon\right)>0$ for some $\epsilon>0$. Therefore, $P(\{B_{\epsilon}(t)\quad \text{i.o.}\})>0$, which is a contradiction. 
\qed \begin{lemma}\label{lem:Dooblike} Suppose that $\{D(t)\}$ is a non-negative random (scalar) process such that \begin{align}\label{eqn:dooblike} D(t+1)\leq a(t+1)D(t)+b(t),\quad \mbox{almost surely} \end{align} where $\{b(t)\}$ is a deterministic sequence and $\{a(t)\}$ is an adapted process (to $\{\mathcal{F}(t)\}$), such that $a(t)\in[0,1]$ and \[\mathbb{E}[a(t+1)\mid \mathcal{F}(t)]\leq\tilde{\lambda},\] almost surely for some $\tilde{\lambda}<1$ and all $t\geq 0$. Then, if \[0\leq b(t)\leq{K}{t^{-\tilde{\beta}}}\] for some $K,\tilde{\beta}>0$, we have $\lim_{t\to\infty}D(t)t^\beta=0$, almost surely, for all $\beta<\tilde{\beta}$. \end{lemma} \pf Let $t_s\triangleq\inf\{t>t_{s-1}|a(t)\leq \lambda\}$ and $t_0=0$ for some $\tilde{\lambda}<\lambda<1$, and $c(s)\triangleq \sum_{\tau=t_s+1}^{t_{s+1}-1}b(\tau)$ . Also, define \begin{align*} A\triangleq\left\{\omega\bigg{|}\lim_{s\to\infty}\frac{t_{s+1}-t_s}{t_s^{\min\{\tilde{\beta}-\beta,1\}}}=0\right\}. \end{align*} Note that Lemma \ref{lem:SLLNlike} implies $P(A)=1$. On the other hand, using \eqref{eqn:dooblike}, we have \begin{align*} D(t_{s+1})&\leq D(t_{s})\prod_{\ell=t_{s}+1}^{t_{s+1}}a(\ell)+\sum_{\tau=t_s}^{t_{s+1}-1}b(\tau)\prod_{\ell=\tau+2}^{t_{s+1}}a(\ell)\cr &\leq D(t_{s})\lambda+c(s), \end{align*} where the last inequality follows from $a(t)\in [0,1]$ and $a(t_{s+1})\leq \lambda$. Letting $R(t)=D(t)t^{\beta}$, we have \begin{align*} R(t_{s+1})\leq \left(\frac{t_{s+1}}{t_s}\right)^\beta R(t_s)\lambda+c(s)t_{s+1}^{\beta}. \end{align*} Note that, for $\omega\in A$, we have \begin{align}\label{eqn:lim_{stoinfty}frac{t_{s+1}}{t_{s}}} \lim_{s\to\infty}\frac{t_{s+1}}{t_{s}}=\lim_{s\to\infty}1+\frac{t_{s+1}-t_s}{t_{s}}=1. \end{align} As a result, for any $\hat{\lambda}\in (\lambda,1)$, there exists $\hat{s}$ such that for $s\geq \hat{s}$, we have $\left(\frac{t_{s+1}}{t_s}\right)^\beta\lambda\leq \hat{\lambda}$, and hence \begin{align*} R(t_{s+1})\leq R(t_s)\hat{\lambda}+c(s)t_{s+1}^{\beta}. \end{align*} Therefore, \begin{align*} R(t_s)\leq \hat{\lambda}^{s-\hat{s}} R(t_{\hat{s}})+\sum_{\tau=\hat{s}}^{s-1}c(\tau)t_{\tau+1}^{\beta}\hat{\lambda}^{s-\tau-1}. \end{align*} Taking the limits of the both sides, we have \begin{align*} \limsup_{s\to\infty}R(t_s)&\leq \limsup_{s\to\infty}\hat{\lambda}^{s-\hat{s}} R(t_{\hat{s}})+\sum_{\tau=\hat{s}}^{s-1}c(\tau)t_{\tau+1}^{\beta}\hat{\lambda}^{s-\tau-1}\cr &=\lim_{s\to\infty}c(s)t_{s+1}^{\beta}, \end{align*} which is implied by Lemma 3.1-(a) in \cite{ram2010distributed}. For $\omega\in A$, we have \begin{align}\label{eqn:lim_{stoinfty}c(s)t_{s+1}^{gamma}} \lim_{s\to\infty}c(s)t_{s+1}^{\beta} &\stackrel{(a)}{\leq} \lim_{s\to\infty}\frac{K(t_{s+1}-t_s)}{t_s^{\tilde{\beta}}}t_{s+1}^{\beta}\cr &= \lim_{s\to\infty}\frac{K(t_{s+1}-t_s)}{t_s^{\tilde{\beta}-\beta}}\frac{t_{s+1}^{\beta}}{t_{s}^{\beta}}\cr &\stackrel{(b)}{=} \lim_{s\to\infty}\frac{K(t_{s+1}-t_s)}{t_s^{\tilde{\beta}-\beta}}=0. \end{align} where $(a)$ follows from $b(t)\leq Kt^{-\tilde{\beta}}$, and $(b)$ follows from \eqref{eqn:lim_{stoinfty}frac{t_{s+1}}{t_{s}}}. Therefore, $\lim_{s\to\infty}R(t_s)=0$. Now for any $t>0$ with $t_s\leq t<t_{s+1}$, let $\sigma(t)=s$. By the definition of $R(t)$, we have $R(t)\leq \left(\frac{t}{\sigma(t)}\right)^\beta R(t_{\sigma(t)})+c(\sigma(t))t_{\sigma(t)+1}^{\beta}$. 
Therefore, $\lim_{s\to\infty}R(t_s)=0$ and Inequality \eqref{eqn:lim_{stoinfty}c(s)t_{s+1}^{gamma}} imply \begin{align*} \limsup_{t\to\infty}R(t)&\leq\lim_{t\to\infty}\left(\left(\frac{t}{t_{\sigma(t)}}\right)^\beta R(t_{\sigma(t)})+c(\sigma(t))t_{\sigma(t)+1}^{\beta}\right)=0, \end{align*} which is the desired conclusion as $R(t)=D(t)t^\beta$. \qed Finally, we are ready to show the almost sure convergence $\lim_{t\to\infty}{\rm d}({\bf x}(t))=0$ (and more) under our connectivity assumption and a regularity condition on the input ${\bf u}(t)$ for the controlled averaging dynamics \eqref{eqn:EpsilonAlgorithm}. \begin{lemma}\label{lem:ConsensusConveregnse} Suppose that $\{W(t)\}$ satisfies Assumption \ref{assump:AssumpOnConnectivity}. Then, if ${\rm d}({\bf u}(t))<qt^{-\tilde{\beta}}$ almost surely for some $q\geq 0$, we have $\lim_{t\to\infty} {{\rm d}({\bf x}(t))}{t^{\beta}}= 0$, {almost surely}, for $\beta<\tilde{\beta}$. \end{lemma} \pf From inequality \eqref{lem:lemma5}, we have \begin{align}\label{eqn:diambxk} {\rm d}({\bf x}(k)) &\leq{\rm diam}(\Phi(k,\tau)){\rm d}({\bf x}(\tau))+\sum_{s=\tau}^{k-1}{\rm diam}(\Phi(k,s+1)){\rm d}({\bf u}(s))\cr &\stackrel{(a)}{\leq}{\rm diam}(\Phi(k,\tau)){\rm d}({\bf x}(\tau)) +\sum_{s=\tau}^{k-1}qs^{-\tilde{\beta}}\cr &\leq{\rm diam}(\Phi(k,\tau)){\rm d}({\bf x}(\tau)) +(k-\tau)q\tau^{-\tilde{\beta}}, \end{align} where $(a)$ follows from ${\rm diam}(\Phi(\cdot,\cdot))\leq 1$ and ${\rm d}({\bf u}(s))<qs^{-\tilde{\beta}}$. Let $C>0$ and $\lambda\in [0,1)$ be the constants satisfying the statement of Lemma~\ref{lem:ExpectationDiam}. Since ${\lambda}<1$, for $T=\lceil|\frac{\log C}{\log {\lambda}}|\rceil+1$, we have $\tilde{\lambda}\triangleq C{\lambda}^T< 1$. Then, Lemma \ref{lem:ExpectationDiam} implies that \begin{align}\label{eqn:EdiammPhihnut1hnutFhnut} \mathbb{E}[{\rm diam}(\Phi(T(t+1),Tt))|\mathcal{F}(Tt)]\leq C{\lambda}^T= \tilde{\lambda}<1. \end{align} Let $D(t)\triangleq {\rm d}({\bf x}(Tt))$. From inequality \eqref{eqn:diambxk}, for $\tau=Tt$ and $k=T(t+1)$, we have \begin{align*} D(t+1) \leq{\rm diam}(\Phi(T(t+1),Tt))D(t) +T^{1-\tilde{\beta}}qt^{-\tilde{\beta}}. \end{align*} Taking conditional expectation of both sides of the above inequality given $\mathcal{F}(Tt)$, we have \begin{align*} &\mathbb{E}[D(t+1)|\mathcal{F}(Tt)]\leq\mathbb{E}[{\rm diam}(\Phi(T(t+1),Tt))|\mathcal{F}(Tt)]D(t) +T^{1-\tilde{\beta}}qt^{-\tilde{\beta}}. \end{align*} By letting $a(t+1)\triangleq {\rm diam}(\Phi(T(t+1),Tt))$ and $b(t)\triangleq T^{1-\tilde{\beta}}qt^{-\tilde{\beta}}$, we are in the setting of Lemma \ref{lem:Dooblike}. Therefore, by Inequality \eqref{eqn:EdiammPhihnut1hnutFhnut} and ${\rm diam}(\Phi(T(t+1),Tt))\leq 1$, the conditions of Lemma~\ref{lem:Dooblike} hold, and hence, \begin{align}\label{eqn:Dttlimit} \lim_{t\to\infty}D(t)t^\beta=0 \end{align} almost surely. On the other hand, letting $\tau=T\left\lfloor \frac{k}{T}\right\rfloor$ in \eqref{eqn:diambxk} we have \begin{align*} {\rm d}({\bf x}(k))\leq D\left(\left\lfloor \frac{k}{T}\right\rfloor\right) +T^{1-\tilde{\beta}}q\left\lfloor \frac{k}{T}\right\rfloor^{-\tilde{\beta}}.
\end{align*} Therefore, \begin{align*} \limsup_{k\to\infty}{\rm d}({\bf x}(k))k^{\beta} &\leq\limsup_{k\to\infty} \left(D\left(\left\lfloor \frac{k}{T}\right\rfloor\right)k^{\beta} +T^{1-\tilde{\beta}}q\left\lfloor \frac{k}{T}\right\rfloor^{-\tilde{\beta}}k^{\beta}\right)\cr &\leq\limsup_{k\to\infty} \left(D\left(\left\lfloor \frac{k}{T}\right\rfloor\right)\left(T\left(\left\lfloor \frac{k}{T}\right\rfloor+1\right)\right)^{\beta} +Tq(k-T)^{-\tilde{\beta}}k^{\beta}\right)\cr&=0, \end{align*} where the last equality follows from \eqref{eqn:Dttlimit} and $\tilde{\beta}>\beta$. Since ${\rm d}({\bf x}(k))k^{\beta}\geq 0$, this proves the claim. \qed \section{Convergence Analysis of the Main Dynamics}\label{sec:Convergence} Finally, in this section, we will study the main dynamics \eqref{eqn:MainLineAlgorithm}, i.e., the dynamics \eqref{eqn:EpsilonAlgorithm} with the feedback policy ${\bf u}_i(t)=-\alpha(t){\bf g}_i(t)$ where ${\bf g}_i(t)\in \nabla f_i({\bf x}_i(t))$. Throughout this section, we let $\bar{{\bf x}}\triangleq \frac{1}{n}e^T{\bf x}$ for a vector ${\bf x}\in (\mathbb{R}^m)^n$. First, we prove an inequality (Lemma~\ref{lem:FirstLyapunovDisOpt}) which plays a key role in the proof of Theorem \ref{thm:MainTheoremDisOpt}; to do so, we make use of the following result, which is proven as a part of the proof of Lemma 8 (Equation (27)) in \cite{nedic2014distributed}. \begin{lemma}[\cite{nedic2014distributed}]\label{lem:AngeliaLemma} Under Assumption \ref{assump:AssumpOnFunction}, for all $v\in\mathbb{R}^m$, we have \begin{align*} n\lrangle{\bar{{\bf g}}(t),\bar{{\bf x}}(t)-{ v}}\geq F(\bar{{\bf x}}(t))-F({ v})-2\sum_{i=1}^nL_i\|{\bf x}_i(t)-\bar{{\bf x}}(t)\|. \end{align*} \end{lemma} \begin{lemma}\label{lem:FirstLyapunovDisOpt} For the dynamics \eqref{eqn:MainLineAlgorithm}, under Assumption \ref{assump:AssumpOnFunction}, for all ${ v}\in \mathbb{R}^m$, we have \begin{align*} \mathbb{E}[\|\bar{{\bf x}}(t+1)-{ v}\|^2|\mathcal{F}(t)] &\leq\|\bar{{\bf x}}(t)-{ v}\|^2+\alpha^2(t)\frac{L^2}{n^2}+\sum_{i=1}^{n}\|{\bf x}_i(t)-\bar{{\bf x}}(t)\|^2\\ &\qquad\qquad-\frac{2\alpha(t)}{n}(F(\bar{{\bf x}}(t))-F({ v}))+\frac{4\alpha(t)}{n}\sum_{i=1}^nL_i\|{\bf x}_i(t)-\bar{{\bf x}}(t)\|. \end{align*} \end{lemma} \pf Multiplying both sides of \eqref{eqn:MainLineAlgorithm} from the left by $\frac{1}{n}e^T$, we have \begin{align*} \bar{{\bf x}}(t+1)&=\overline{W}(t+1){{\bf x}}(t)-\alpha(t)\bar{{\bf g}}(t)\\ &=\bar{{\bf x}}(t)-\alpha(t)\bar{{\bf g}}(t)+\overline{W}(t+1){{\bf x}}(t)-\bar{{\bf x}}(t), \end{align*} where $\overline{W}(t)\triangleq\frac{1}{n}e^TW(t)$. Therefore, we can write \begin{align*} \|\bar{{\bf x}}(t+1)-v\|^2 &=\|\bar{{\bf x}}(t)-{{ v}}-\alpha(t)\bar{{\bf g}}(t)+\overline{W}(t+1){{\bf x}}(t)-\bar{{\bf x}}(t)\|^2\\ &=\|\bar{{\bf x}}(t)-{{ v}}\|^2+\|\alpha(t)\bar{{\bf g}}(t)\|^2+\|\overline{W}(t+1){{\bf x}}(t)-\bar{{\bf x}}(t)\|^2-2\alpha(t)\lrangle{\bar{{\bf g}}(t),\overline{W}(t+1){{\bf x}}(t)-{{ v}}}\\ &\qquad\qquad+2\lrangle{\bar{{\bf x}}(t)-{{ v}},\overline{W}(t+1){{\bf x}}(t)-\bar{{\bf x}}(t)}.
\end{align*} Taking conditional expectation of both sides of { the above equality given $\mathcal{F}(t)$,} we have \begin{align*} \mathbb{E}[\|\bar{{\bf x}}(t+1)-{{ v}}\|^2|\mathcal{F}(t)] &=\|\bar{{\bf x}}(t)-{{ v}}\|^2+\|\alpha(t)\bar{{\bf g}}(t)\|^2+\mathbb{E}\left[\|\overline{W}(t+1){{\bf x}}(t)-\bar{{\bf x}}(t)\|^2|\mathcal{F}(t)\right]\\ &\quad-2\alpha(t)\lrangle{\bar{{\bf g}}(t),\mathbb{E}\left[\overline{W}(t+1){{\bf x}}(t)-{{ v}}|\mathcal{F}(t)\right]}+2\lrangle{(\bar{{\bf x}}(t)-{{ v}}),\mathbb{E}\left[\overline{W}(t+1){{\bf x}}(t)-\bar{{\bf x}}(t)|\mathcal{F}(t)\right]}\\ &{=}\|\bar{{\bf x}}(t)-{{ v}}\|^2+\|\alpha(t)\bar{{\bf g}}(t)\|^2+\mathbb{E}\left[\|\overline{W}(t+1){{\bf x}}(t)-\bar{{\bf x}}(t)\|^2|\mathcal{F}(t)\right]-2\lrangle{\alpha(t)\bar{{\bf g}}(t),\bar{{\bf x}}(t)-{{ v}}}. \end{align*} The last equality follows from the assumption that $W(t+1)$ is doubly stochastic in-expectation and hence, \[\mathbb{E}[\overline{W}(t+1)|\mathcal{F}(t)]=\frac{1}{n}e^T,\] which implies \[\lrangle{\bar{{\bf g}}(t),\mathbb{E}\left[\overline{W}(t+1){{\bf x}}(t)-{{ v}}|\mathcal{F}(t)\right]}=\lrangle{\bar{{\bf g}}(t),\bar{{\bf x}}(t)-{{ v}}},\] and \[\mathbb{E}\left[\overline{W}(t+1){{\bf x}}(t)-\bar{{\bf x}}(t)|\mathcal{F}(t)\right]=0.\] Note that $\overline{W}(t+1)$ is a stochastic vector (almost surely), therefore, due to the convexity of norm-square $\|\cdot\|^2$, we get \begin{align*} \|\overline{W}(t+1){{\bf x}}(t)-\bar{{\bf x}}(t)\|^2&\leq \sum_{i=1}^n\overline{W}_i(t+1)\|{{\bf x}}_i(t)-\bar{{\bf x}}(t)\|^2\cr &\leq \sum_{i=1}^n\|{{\bf x}}_i(t)-\bar{{\bf x}}(t)\|^2, \end{align*} {as $\overline{W}_i(t+1)\leq 1$ for all $i\in[n]$. Therefore, } \begin{align*} \mathbb{E}[\|\bar{{\bf x}}(t+1)-{{ v}}\|^2|\mathcal{F}(t)]&{=} \|\bar{{\bf x}}(t)-{{ v}}\|^2+\|\alpha(t)\bar{{\bf g}}(t)\|^2+\mathbb{E}\left[\|\overline{W}(t+1){{\bf x}}(t)-\bar{{\bf x}}(t)\|^2|\mathcal{F}(t)\right]-2\lrangle{\alpha(t)\bar{{\bf g}}(t),\bar{{\bf x}}(t)-{{ v}}}\cr &\leq \|\bar{{\bf x}}(t)-{{ v}}\|^2+\|\alpha(t)\bar{{\bf g}}(t)\|^2+\sum_{i=1}^{n}\|{\bf x}_i(t)-\bar{{\bf x}}(t)\|^2-2\lrangle{\alpha(t)\bar{{\bf g}}(t),\bar{{\bf x}}(t)-{{ v}}}. \end{align*} Finally, Lemma \ref{lem:AngeliaLemma} and the fact that \begin{align*} \|\bar{{\bf g}}(t)\|^2=\frac{1}{n^2}\left\|\sum_{i=1}^n{\bf g}_i(t)\right\|^2\leq \frac{1}{n^2}\left(\sum_{i=1}^n\left\|{\bf g}_i(t)\right\|\right)^2\leq \frac{L^2}{n^2}, \end{align*} complete the proof.\qed {\it Proof of Theorem \ref{thm:MainTheoremDisOpt}:} Recall that the Robbins--Siegmund theorem \cite{ROBBINS1971233} states that if a non-negative random process $\{V(t)\}$ (adapted to the filtration $\{\mathcal{F}(t)\}$) satisfies \begin{align}\label{eqn:RS} \mathbb{E}&\left[V(t+1)|\mathcal{F}(t)\right]\leq(1+a(t))V(t)-b(t)+c(t), \end{align} where $a(t),b(t),c(t)\geq 0$ for all $t$, and $\sum_{t=0}^{\infty}a(t)<\infty$ and $\sum_{t=1}^{\infty}c(t)<\infty$ almost surely, then $\lim_{t\to\infty}V(t)$ exists and $\sum_{t=1}^{\infty}b(t)<\infty$ almost surely. In order to utilize this result and Lemma \ref{lem:FirstLyapunovDisOpt}, for all $t\geq 0$, let \begin{align*} V(t)&\triangleq\|\bar{{\bf x}}(t)-z\|^2, \cr a(t)&\triangleq 0, \cr b(t)&\triangleq\frac{2\alpha(t)}{n}(F(\bar{{\bf x}}(t))-F(z)),\text{ and }\cr c(t)&\triangleq\alpha^2(t)\frac{L^2}{n^2}+\sum_{i=1}^{n}\|{\bf x}_i(t)-\bar{{\bf x}}(t)\|^2+\frac{4\alpha(t)}{n}\sum_{i=1}^nL_i\|{\bf x}_i(t)-\bar{{\bf x}}(t)\|, \end{align*} where $z\in\mathcal{Z}$. First, note that $a(t),b(t),c(t)\geq 0$ for all $t$; for $b(t)$ this uses that $z\in\mathcal{Z}$ is a minimizer of $F$, i.e., $F(\bar{{\bf x}}(t))\geq F(z)$.
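To make the correspondence with \eqref{eqn:RS} explicit, note that with these choices Lemma \ref{lem:FirstLyapunovDisOpt}, applied with ${ v}=z$, reads \begin{align*} \mathbb{E}\left[V(t+1)|\mathcal{F}(t)\right]\leq (1+a(t))V(t)-b(t)+c(t), \end{align*} which is precisely the recursion required in \eqref{eqn:RS}.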
To invoke the Robbins--Siegmund result \eqref{eqn:RS}, we need to prove that $\sum_{t=0}^{\infty}c(t)<\infty$, almost surely. Since $\sum_{t=0}^{\infty}\alpha^2(t)<\infty$, it is enough to show that \[\sum_{t=0}^{\infty}\|{\bf x}_i(t)-\bar{{\bf x}}(t)\|^2<\infty\] and \[\sum_{t=0}^\infty\alpha(t)\|{\bf x}_i(t)-\bar{{\bf x}}(t)\|<\infty,\] almost surely, for all $i\in[n]$. From Lemma~\ref{lem:DiamBehrouz}-(e) and Lemma~\ref{lem:ConsensusConveregnseVariance}, we have {\[\mathbb{E}\left[ \frac{\|{\bf x}_i(t)-\bar{{\bf x}}(t)\|}{\alpha(t)}\right]\leq \frac{\mathbb{E}\left[{\rm d}({\bf x}(t))\right]}{\alpha(t)}\leq \sqrt{\hat{M}}<\infty\]} for some $\hat{M}>0$. Therefore, we have \begin{align*} \lim_{T\to\infty}\mathbb{E}\left[\sum_{t=0}^{T}\alpha(t)\|{\bf x}_i(t)-\bar{{\bf x}}(t)\|\right] &=\lim_{T\to\infty}\mathbb{E}\left[ \sum_{t=0}^{T}\alpha^2(t)\frac{\|{\bf x}_i(t)-\bar{{\bf x}}(t)\|}{\alpha(t)}\right]\cr &=\lim_{T\to\infty}\sum_{t=0}^{T}\alpha^2(t)\mathbb{E}\left[ \frac{\|{\bf x}_i(t)-\bar{{\bf x}}(t)\|}{\alpha(t)}\right]\cr &\stackrel{}{\leq} \sqrt{\hat{M}}\sum_{t=0}^{\infty}{\alpha^2(t)}<\infty. \end{align*} {Similarly, using Lemma~\ref{lem:ConsensusConveregnseVariance}, there exists some $\hat{M}>0$ such that \begin{align*} \frac{\mathbb{E}\left[\|{\bf x}_i(t)-\bar{{\bf x}}(t)\|^2\right]}{\alpha^2(t)}\leq \frac{\mathbb{E}\left[{\rm d}^2({\bf x}(t))\right]}{\alpha^2(t)}\leq\hat{M}, \end{align*} for all $t\geq 0$, where the first inequality follows from Lemma~\ref{lem:DiamBehrouz}~-~(e).} Therefore, \begin{align*} \lim_{T\to\infty}\mathbb{E}\left[\sum_{t=0}^{T}\|{\bf x}_i(t)-\bar{{\bf x}}(t)\|^2\right] &= \lim_{T\to\infty}\mathbb{E}\left[\sum_{t=0}^{T}\alpha^2(t)\frac{\|{\bf x}_i(t)-\bar{{\bf x}}(t)\|^2}{\alpha^2(t)}\right]\cr &\stackrel{}{\leq} \hat{M}\sum_{t=0}^{\infty}\alpha^2(t)<\infty. \end{align*} Therefore, using the Monotone Convergence Theorem, we have \begin{align*} \mathbb{E}\left[\sum_{t=0}^{\infty}\alpha(t)\|{\bf x}_i(t)-\bar{{\bf x}}(t)\|\right]&<\infty,\text{ and }\cr \mathbb{E}\left[\sum_{t=0}^{\infty}\|{\bf x}_i(t)-\bar{{\bf x}}(t)\|^2\right]&<\infty, \end{align*} which implies $\sum_{t=0}^{\infty}\alpha(t)\|{\bf x}_i(t)-\bar{{\bf x}}(t)\|<\infty$ and $\sum_{t=0}^{\infty}\|{\bf x}_i(t)-\bar{{\bf x}}(t)\|^2<\infty$, almost surely. Now that we have shown that $c(t)$ is almost surely a summable sequence, the Robbins--Siegmund theorem implies that almost surely \[\lim_{t\to\infty}V(t)=\lim_{t\to\infty}\|\bar{{\bf x}}(t)-z\|^2\text{ exists},\] and \[\sum_{t=1}^{\infty}{\alpha(t)}(F(\bar{{\bf x}}(t))-F(z))<\infty.\] For $z\in \mathcal{Z}$, let us define \begin{align*} \Omega_{z}\triangleq\left\{\omega\Bigg{|}\begin{array}{cc} \lim\limits_{t\to\infty}\|\bar{{\bf x}}(t,\omega)-z\| \text{ exists,} \\ \sum\limits_{t=1}^{\infty}{\alpha(t)}(F(\bar{{\bf x}}(t,\omega))-F^*)<\infty \end{array}\right\}, \end{align*} where $F^*\triangleq\min_{z\in\mathbb{R}^m}F(z)$. By the Robbins--Siegmund theorem, we know that $P(\Omega_{z})=1$. Now, let $\mathcal{Z}_d\subset \mathcal{Z}$ be a countable dense subset of $\mathcal{Z}$ and let \begin{align*} \Omega_d\triangleq\bigcap_{z\in \mathcal{Z}_d}\Omega_z.
\end{align*} Since $\mathcal{Z}_d$ is a countable set, we have $P(\Omega_d)=1$ and for $\omega\in \Omega_d$, since $\sum_{t=1}^{\infty}{\alpha(t)}(F(\bar{{\bf x}}(t,\omega))-F^*)<\infty $ and $\alpha(t)$ is not summable, we have \[\liminf_{t\to\infty}F(\bar{{\bf x}}(t))=F^*.\] This fact and the fact that $F(\cdot)$ is a continuous function implies that for all $\omega\in \Omega_d$, we have $\liminf_{t\to\infty}\|\bar{{\bf x}}(t,\omega)-z^*(\omega)\|=0$ for some $z^*(\omega)\in\mathcal{Z}$. {To show this, let $\{\bar{{\bf x}}({t_k})\}$ be a sub-sequence that $\lim_{k\to\infty}F(\bar{{\bf x}}(t_k,\omega))=F^*$ (such a sub-sequence depends on the sample path $\omega$). Since $\omega\in \Omega_d$ and \[\lim\limits_{t\to\infty}\|\bar{{\bf x}}(t,\omega)-\hat{z}\|\quad \text{exists}\] for some $\hat{z}\in \mathcal{Z}_d$, we conclude that $\{\bar{{\bf x}}(t,\omega)\}$ is a bounded sequences. Therefore, $\{\bar{{\bf x}}(t_k,\omega)\}$ is also bounded and it has an accumulation point $z^*\in \mathbb{R}^m$ and hence, there is a sub-sequence $\{\bar{{\bf x}}(t_{k_\tau},\omega)\}_{\tau\geq 0}$ of $\{\bar{{\bf x}}(t_k,\omega)\}_{k\geq 0}$ such that \[\lim_{\tau\to\infty}\bar{{\bf x}}(t_{k_\tau},\omega)=z^*.\] As a result of continuity of $F(\cdot)$, we have \[\lim_{\tau\to\infty}F(\bar{{\bf x}}(t_{k_\tau}))=F(z^*)=F^*\] and hence, $z^*\in \mathcal{Z}$. Note that the point $z^*=z^*(\omega)$ depends on the sample path $\omega$. } Since $\mathcal{Z}_d\subseteq \mathcal{Z}$ is dense, there is a sequence $\{q^*(s,\omega)\}_{s\geq 0}$ in $\mathcal{Z}_d$ such that $\lim_{s\to\infty}\|q^*(s,\omega)-z^*(\omega)\|=0$. Note that since $\omega\in \Omega_d$, $\lim_{t\to\infty}\|\bar{{\bf x}}(t,\omega)-q^*(s,\omega)\|$ exists for all $s\geq 0$ and we have {\begin{align*} \lim_{t\to\infty}\|\bar{{\bf x}}(t,\omega)-q^*(s,\omega)\| &=\lim_{t\to\infty}\|\bar{{\bf x}}(t,\omega)-z^*(\omega)+z^*(\omega)-q^*(s,\omega)\|\cr &\leq\liminf_{t\to\infty}\|\bar{{\bf x}}(t,\omega)-z^*(\omega)\|+\|q^*(s,\omega)-z^*(\omega)\|\cr &=\|q^*(s,\omega)-z^*(\omega)\|. \end{align*}} Therefore, we have \begin{align}\label{eqn:limqstar} \lim_{s\to\infty}\lim_{t\to\infty}\|\bar{{\bf x}}(t,\omega)-q^*(s,\omega)\|=0. \end{align} On the other hand, we have \begin{align*} \limsup_{t\to\infty}\|\bar{{\bf x}}(t,\omega)-z^*(\omega)\| &=\limsup_{t\to\infty}\|\bar{{\bf x}}(t,\omega)-q^*(s,\omega)+q^*(s,\omega)-z^*(\omega)\|\cr &\leq\limsup_{t\to\infty}\|\bar{{\bf x}}(t,\omega)-q^*(s,\omega)\|+\|q^*(s,\omega)-z^*(\omega)\|\cr &=\left(\lim_{t\to\infty}\|\bar{{\bf x}}(t,\omega)-q^*(s,\omega)\|\right)+\|q^*(s,\omega)-z^*(\omega)\|. \end{align*} Therefore, \begin{align}\label{eqn:FINALLY} \limsup_{t\to\infty}\|\bar{{\bf x}}(t,\omega)-z^*(\omega)\| &=\lim_{s\to\infty}\limsup_{t\to\infty }\|\bar{{\bf x}}(t,\omega)-z^*(\omega)\|\cr &\leq\lim_{s\to\infty}\lim_{t\to\infty}\|\bar{{\bf x}}(t,\omega)-q^*(s,\omega)\|+\lim_{s\to\infty}\|q^*(s,\omega)-z^*(\omega)\|\cr &=0, \end{align} where the last equality follows by combining \eqref{eqn:limqstar} and $\lim_{s\to\infty}\|q^*(s,\omega)-z^*(\omega)\|=0$. Note that \eqref{eqn:FINALLY}, implies that almost surely (i.e., for all $\omega\in\Omega_d$), we have \[\lim_{t\to\infty}\bar{{\bf x}}(t)=z^*(\omega)\] exists and it belongs to $\mathcal{Z}$. 
{Finally, according to Assumptions \ref{assump:AssumpOnFunction} and \ref{assump:AssumpOnStepSize}, we have \[{\rm d}(\alpha(t){\bf g}(t))\leq 2{K}t^{-\beta}\max_{i\in[n]}L_i.\] Therefore, from Lemma \ref{lem:ConsensusConveregnse}, we conclude that $\lim_{t\to\infty}{\rm d}({\bf x}(t))=0$ almost surely, and hence \[\lim_{t\to\infty}\|\bar{{\bf x}}(t)-{\bf x}_i(t)\|=0\quad \text{almost surely}.\] Since we almost surely have $\lim_{t\to\infty}\bar{{\bf x}}(t)=z^*$ for a random vector $z^*$ supported in $\mathcal{Z}$, we have $\lim_{t\to\infty}{\bf x}_i(t)=z^*$ for all $i\in[n]$ almost surely and the proof is complete. \qed} \section{Conclusion and Future Research}\label{Conclusion} In this work, we showed that the averaging-based distributed optimization algorithm over dependent random networks converges to a random optimal point under standard conditions on the objective function and a conditionally $B$-connected network formation process. To do so, we established a rate of convergence for the second moment of the autonomous averaging dynamics over such networks and used it to study the convergence of the sample paths and second moments of the controlled variant of those dynamics. Further extensions of the current work to non-convex settings, accelerated algorithms, and distributed online learning algorithms are of interest for future research on this topic. \appendix {\it Proof of Lemma \ref{lem:DiamBehrouz}}: For the proof of part (a), let ${\bf x}_{i}^{(k)}$ be the $k$th coordinate of ${\bf x}_i$, and define the vector $y=[y^{(1)},\ldots,y^{(m)}]^T$ where \[y^{(k)}=\frac{1}{2}({u}^{(k)}+{U}^{(k)}),\] with ${u}^{(k)}=\min_{i\in [n]}{\bf x}_i^{(k)}$ and ${U}^{(k)}=\max_{i\in [n]}{\bf x}_i^{(k)}$. Therefore, for $\ell\in[n]$, we have \begin{align}\label{eqn:ykmax} \|{\bf x}_\ell-y\|&=\max_{k\in[m]}|{\bf x}_{\ell}^{(k)}-y^{(k)}|\cr &\stackrel{}{=}\max_{k\in[m]}\left|{\bf x}_{\ell}^{(k)}-\frac{1}{2}(u^{(k)}+U^{(k)})\right|\cr &\stackrel{(a)}\leq\max_{k\in[m]}\frac{1}{2}\left|u^{(k)}-U^{(k)}\right|\cr &=\frac{1}{2}{\rm d}({\bf x}), \end{align} where $(a)$ follows from $u^{(k)}\leq {\bf x}_{\ell}^{(k)}\leq U^{(k)}$. Also, we have \begin{align*} {\rm d}(A{\bf x})&=\max_{i,j\in[n]}\Bigg{\|}\sum_{\ell=1}^na_{i\ell}{\bf x}_\ell-\sum_{\ell=1}^na_{j\ell}{\bf x}_\ell\Bigg{\|}\cr &=\max_{i,j\in[n]}\Bigg{\|}\sum_{\ell=1}^na_{i\ell}({\bf x}_\ell-{y})-\sum_{\ell=1}^na_{j\ell}({\bf x}_\ell-{y})+\sum_{\ell=1}^n(a_{j\ell}-a_{i\ell}){y}\Bigg{\|}\cr &=\max_{i,j\in[n]}\Bigg{\|}\sum_{\ell=1}^n(a_{i\ell}-a_{j\ell})({\bf x}_\ell-{y})\Bigg{\|} \end{align*} where the last equality holds as $A$ is a row-stochastic matrix and hence, $\sum_{\ell=1}^n(a_{j\ell}-a_{i\ell})=0$. Therefore, \begin{align*} {\rm d}(A{\bf x})&=\max_{i,j\in[n]}\Bigg{\|}\sum_{\ell=1}^n(a_{i\ell}-a_{j\ell})({\bf x}_\ell-{y})\Bigg{\|}\cr &\stackrel{(a)}{\leq}\max_{i,j\in[n]}\sum_{\ell=1}^n|a_{i\ell}-a_{j\ell}|{\|}{\bf x}_\ell-{y}{\|}\cr &\stackrel{(b)}{\leq}\max_{i,j\in[n]}\frac{1}{2}{\rm d}({\bf x})\sum_{\ell=1}^n|a_{i\ell}-a_{j\ell}|\cr &\leq{\rm diam}(A){\rm d}({\bf x}), \end{align*} where $(a)$ follows from the triangle inequality, and $(b)$ follows from \eqref{eqn:ykmax}.
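As a simple sanity check of part (a) (a toy computation given only for illustration; it is not needed in the sequel), take $n=2$ and the row-stochastic matrix $A=\begin{pmatrix}1-a & a\\ a & 1-a\end{pmatrix}$ with $a\in[0,1]$. Then $(A{\bf x})_1-(A{\bf x})_2=(1-2a)({\bf x}_1-{\bf x}_2)$, so ${\rm d}(A{\bf x})=|1-2a|\,{\rm d}({\bf x})$, while ${\rm diam}(A)=\frac{1}{2}\big(|1-2a|+|2a-1|\big)=|1-2a|$ and, consistently with part (c) below, $\Lambda(A)=2\min\{a,1-a\}=1-{\rm diam}(A)$. Hence the bound ${\rm d}(A{\bf x})\leq{\rm diam}(A)\,{\rm d}({\bf x})$ holds with equality in this case.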
For the part (b), we have \begin{align*} {\rm d}({\bf x}+{\bf y})&=\max_{i,j\in[n]}\|({\bf x}+{\bf y})_i-({\bf x}+{\bf y})_j\|\cr &=\max_{i,j\in[n]}\|{\bf x}_i-{\bf x}_j+{\bf y}_i-{\bf y}_j\|\cr &\stackrel{(b)}{\leq}\max_{i,j\in[n]}\left(\|{\bf x}_i-{\bf x}_j\|+\|{\bf y}_i-{\bf y}_j\|\right)\cr &\leq\max_{i,j\in[n]}\|{\bf x}_i-{\bf x}_j\|+\max_{i,j\in[n]}\|{\bf y}_i-{\bf y}_j\|\cr &={\rm d}({\bf x})+{\rm d}({\bf y}), \end{align*} where $(b)$ follows from the triangle inequality. For the proof of part (c), we have \begin{align*} {\rm diam}(A)&=\max_{i,j\in[n]}\sum_{\ell=1}^n \frac{1}{2}|a_{i\ell}-a_{j\ell}|\cr &=\max_{i,j\in[n]}\sum_{\ell=1}^n \left(\frac{1}{2}(a_{i\ell}+a_{j\ell})-\min\{a_{i\ell},a_{j\ell}\}\right)\cr &\stackrel{(a)}{=}\max_{i,j\in[n]}1-\sum_{\ell=1}^n \min\{a_{i\ell},a_{j\ell}\}\cr &=1-\min_{i,j\in[n]}\sum_{\ell=1}^n \min\{a_{i\ell},a_{j\ell}\}\cr &=1-\Lambda(A), \end{align*} where $(a)$ follows from the fact that $A$ is row-stochastic. The proof of part (d) follows from part (c) and Lemma \ref{lem:lambdadiam}. For the part (e), due to the convexity of $\|\cdot\|$, we have \begin{align*} \left\|{\bf x}_i-\sum_{j=1}^{n}\pi_j{\bf x}_j\right\|\leq \sum_{j=1}^{n}\pi_j\left\|{\bf x}_i-{\bf x}_j\right\|\leq \sum_{j=1}^{n}\pi_j{\rm d}({\bf x})={\rm d}({\bf x}). \end{align*} \qed {\it Proof of Lemma \ref{lem:ProductwithHistory}}: We prove by induction on $K_2$. By the assumption, the lemma is true for $K_2=K_1$. For $K_2>K_1$, we have \begin{align*} \mathbb{E}\left[\prod_{k=K_1}^{K_2+1}Y(k)\bigg{|}\mathcal{F}(K_1-1)\right] &=\mathbb{E}\left[\mathbb{E}\left[\prod_{k=K_1}^{K_2+1}Y(k)\bigg{|}\mathcal{F}(K_2)\right]\bigg{|}\mathcal{F}(K_1-1)\right]\cr &=\mathbb{E}\left[\mathbb{E}\left[Y(K_2+1){|}\mathcal{F}(K_2)\right]\prod_{k=K_1}^{K_2}Y(k)\bigg{|}\mathcal{F}(K_1-1)\right]\cr &\leq\mathbb{E}\left[a(K_2+1)\prod_{k=K_1}^{K_2}Y(k)\bigg{|}\mathcal{F}(K_1-1)\right]\cr &\leq \prod_{k=K_1}^{K_2+1}a(k). \end{align*} \qed {\it Proof of Lemma \ref{lem:approxfunction}}: Consider $\hat{\tau}\geq t_0$ such that $\hat{\theta}\triangleq\sup_{t\geq\hat{\tau}}\frac{\beta({t})}{\beta({t}+1)}\theta<1$ and let $D(t)\triangleq \sum_{s=\tau}^{t-1}\beta(s)\theta^{t-s}$. Dividing both sides by $\beta(t)>0 $, for $t> \hat{\tau}$, we have \begin{align*} \frac{D(t)}{\beta(t)}&=\sum_{s=\tau}^{t-1}\frac{\beta(s)}{\beta(t)}\theta^{t-s}\cr &=\sum_{s=\tau}^{t-1}\prod_{\kappa=s}^{t-1}\frac{\beta(\kappa)}{\beta(\kappa+1)}\theta\cr &\leq\sum_{s=\tau}^{\hat{\tau}-1}\prod_{\kappa=s}^{t-1}\frac{\beta(\kappa)}{\beta(\kappa+1)}\theta+\sum_{s=\hat{\tau}}^{t-1}\hat{\theta}^{t-s}\cr &=\sum_{s=\tau}^{\hat{\tau}-1}\prod_{\kappa=s}^{t-1}\frac{\beta(\kappa)}{\beta(\kappa+1)}\theta+\sum_{k=1}^{t-\hat{\tau}}\hat{\theta}^{k}\cr &\leq \sum_{s=\tau}^{\hat{\tau}-1}\prod_{\kappa=s}^{t-1}\frac{\beta(\kappa)}{\beta(\kappa+1)}\theta+\frac{\hat{\theta}}{1-\hat{\theta}}. \end{align*} Let \[M_1\triangleq \sup_{t>\hat{\tau}}\sup_{\hat{\tau}\geq \tau\geq t_0}\sum_{s=\tau}^{\hat{\tau}-1}\prod_{\kappa=s}^{t-1}\frac{\beta(\kappa)}{\beta(\kappa+1)}\theta+\frac{\hat{\theta}}{1-\hat{\theta}}.\] Note that \begin{align*} \lim_{t\to\infty}\prod_{\kappa=s}^{t-1}\frac{\beta(\kappa)}{\beta(\kappa+1)}\theta\leq\lim_{t\to\infty}\hat{\theta}^{t-\hat{\tau}}\prod_{\kappa=s}^{\hat{\tau}-1}\frac{\beta(\kappa)}{\beta(\kappa+1)}\theta=0. \end{align*} Therefore, $\sup_{t\geq\tau}\prod_{\kappa=s}^{t-1}\frac{\beta(\kappa)}{\beta(\kappa+1)}\theta<\infty$, and hence, $M_1<\infty$. 
Thus, $D(t)\leq \max\{M_1,M_2\}\beta(t)$, where $M_2\triangleq \max_{\hat{\tau}\geq t\geq \tau\geq t_0}\sum_{s=\tau}^{t-1}\frac{\beta(s)}{\beta(t)}\theta^{t-s}$, and the proof is complete.\qed \end{document}
\begin{document} \title{Long-time behavior of micropolar fluid equations in cylindrical domains} \author{Bernard Nowakowski} \thanks{The author is partially supported by Polish KBN grant N N201 393137} \address{Bernard Nowakowski\\ Institute of Mathematics\\ Polish Academy of Sciences\\ \'Snia\-deckich 8\\ 00-956 Warsaw\\ Poland} \email{[email protected]} \subjclass[2000]{35Q30, 76D05} \date{November, 2011} \begin{abstract} In this paper we investigate the existence of an $H^1$-uniform attractor and the long-time behavior of solutions to non-autonomous micropolar fluid equations in three-dimensional cylindrical domains. In our considerations we take into account that the existence of global and strong solutions is proved under a smallness assumption on the change of the initial and the external data along the axis of the cylinder. Therefore, we refine the concept of the uniform attractor by adopting the idea which was proposed by J.W. Cholewa and T. D{\l}otko in \cite{chol1}. \end{abstract} \keywords{micropolar fluids, cylindrical domains, qualitative properties of solutions, uniform restricted attractor} \maketitle \section{Introduction} The study of attractors is an important part of examining dynamical systems. It was thoroughly investigated in many works (see e.g. \cite{Hale:1988fk}, \cite{Ladyzhenskaya:1991uq}, \cite{Temam:1997vn}, \cite{chol}, \cite{che}, \cite{Sell:2002kx}). These classical results cover many equations of mathematical physics, both autonomous and non-autonomous. Roughly speaking, the general abstract theory justifying attractors' existence may be applied when the equations describing autonomous systems possess unique, strong and global solutions. For non-autonomous systems some additional uniformity for the external data is required. Further obstacles arise when unbounded domains are examined (see e.g. \cite{luk5} and \cite{zhao} and references therein). In three dimensions the situation for nonlinear evolution problems is even more complex. In the case of the micropolar equations we lack information about the regularity of weak solutions in the large. Thus, results concerning the existence of uniform attractors are merely conditional (see e.g. \cite{che}). In this article we present a remedy which is based on \cite{chol1}. It takes into account that global and strong solutions exist if the $L_2$-norms of the rate of change of the external and the initial data are assumed to be small (see \cite{2012arXiv1205.4507N}). This leads to a restriction of the uniform attractor to a proper phase space. Introduced by A.~Eringen in \cite{erin}, the micropolar fluid equations form a useful generalization of the classical Navier-Stokes model in the sense that they take into account the structure of the media they describe. With many potential applications (see e.g. \cite{pop1}, \cite{pop2}, \cite{ari}, \cite{luk1}) they have become an interesting and demanding area of research for mathematicians and engineers (see e.g. \cite{erin}, \cite{pad}, \cite{gald}, \cite{ort}, \cite{luk1}, \cite{luk2} et al.).
In this article we study the following initial-boundary value problem \begin{equation} \begin{aligned}\label{p1} &v_{,t} + v\cdot \nabla v - (\nu + \nu_r)\triangle v + \nabla p = 2\nu_r\Rot\omega + f & &\text{in } \Omega^{\infty} := \Omega\times(0,\infty),\\ &\Div v = 0 & &\text{in } \Omega^{\infty},\\ &\begin{aligned} &\omega_{,t} + v\cdot \nabla \omega - \alpha \triangle \omega - \beta\nabla\Div\omega + 4\nu_r\omega \\ &\mspace{60mu} = 2\nu_r\Rot v + g \end{aligned}& &\text{in } \Omega^{\infty}, \\ &v\cdot n = 0 & &\text{on $S^{\infty} := S\times(0,\infty)$}, \\ &\Rot v \times n = 0 & &\text{on $S^{\infty}$}, \\ &\omega = 0 & &\text{on $S_1^{\infty}$}, \\ &\omega' = 0, \qquad \omega_{3,x_3} = 0 & &\text{on $S_2^{\infty}$}, \\ &v\vert_{t = 0} = v(0), \qquad \omega\vert_{t = 0} = \omega(0) & &\text{in $\Omega$}. \end{aligned} \end{equation} By $\Omega \subset \mathbb{R}^3$ we mean a cylindrical type domain, i.e. \begin{equation*} \Omega = \left\{\left(x_1,x_2\right)\in\mathbb{R}^2\colon \varphi\left(x_1,x_2\right) \leq c_0\right\} \times \left\{ x_3\colon -a \leq x_3 \leq a\right\}, \end{equation*} where $a > 0$ and $c_0$ are real constants and the level set $\varphi\left(x_1,x_2\right) = c_0$ is a closed curve of class $\mathcal{C}^2$. The functions $v\colon\mathbb{R}^3\to\mathbb{R}^3$, $\omega\colon\mathbb{R}^3\to\mathbb{R}^3$ and $p\colon\mathbb{R}^3\to\mathbb{R}$ denote the velocity field, the micropolar field and the pressure. The external data (forces and momenta) are represented by $f$ and $g$, respectively. The viscosity coefficients $\nu$ and $\nu_r$ are real and positive. The choice of the boundary conditions for $v$ and $\omega$ was thoroughly described in \cite{2012arXiv1205.4046N}. Note that usually the zero Dirichlet condition is assumed for both the velocity and the micropolar field. From a physical point of view this is not suitable for some classes of fluids (see e.g. \cite{bou}). In fact, the explanation of the behavior of particles at the boundary is far more complex and was extensively discussed in \cite{con}, \cite{all3}, \cite{mig} and \cite{eri6}. The article is organized as follows: in the next Section we present an overview of the current state of the art. Subsequently we introduce notation and give some preliminary results. In Section \ref{sec4} the main results are formulated. Section \ref{sec5} is entirely devoted to the basics of semi-processes. In Sections \ref{sec6}, \ref{sec7} and \ref{sec8} we prove the existence of the restricted uniform attractor and analyze the behavior of the micropolar fluid flow for large viscosity $\nu$. We demonstrate that for time-independent data the global and unique solution $(v(t),\omega(t))$ converges to the solution of the stationary problem, and that if $\nu_r \approx 0$, then the trajectories $(v(t),\omega(t))$ of the micropolar flow and $u(t)$ of the Navier-Stokes flow differ only insignificantly. \section{State-of-the-art} The study of the uniform attractors has mainly been focused on the case of unbounded domains (see \cite{ukaszewicz:2004kx} and \cite{Dong:2006uq} and the references therein) which is barely covered by the general abstract theory introduced in \cite{che}. Note that both these studies cover only the two-dimensional case. In three dimensions no results have been obtained so far due to the lack of regularity of weak solutions. Another obstacle arises from relaxing the assumption of certain uniformity of the external data.
To avoid these difficulties, the concepts of trajectory and pull-back attractors have been suggested and applied (see \cite{Chen:2007zr}, \cite{Chen:2009ys}, \cite{luk7}, \cite{Kapustyan:2010vn} and \cite{tar2}). \section{Notation and preliminary results}\label{sec3} In this paper we use the following function spaces: \begin{enumerate} \item[$\bullet$] $W^m_p(\Omega)$, where $m \in \mathbb{N}$, $p \geq 1$, is the closure of $\mathcal{C}^{\infty}(\Omega)$ in the norm \begin{equation*} \norm{u}_{W^m_p(\Omega)} = \left(\sum_{\abs{\alpha} \leq m} \norm{\Ud^{\alpha} u}_{L_p(\Omega)}^p\right)^{\frac{1}{p}}, \end{equation*} \item[$\bullet$] $H^k(\Omega)$, where $k \in \mathbb{N}$, is simply $W^k_2(\Omega)$, \item[$\bullet$] $W^{2,1}_p(\Omega^t)$, where $p \geq 1$, is the closure of $\mathcal{C}^{\infty}(\Omega\times(t_0,t_1))$ in the norm \begin{equation*} \norm{u}_{W^{2,1}_p(\Omega^t)} = \left(\int_{t_0}^{t_1}\!\!\!\int_{\Omega} \abs{u_{,xx}(x,s)}^p + \abs{u_{,x}(x,s)}^p + \abs{u(x,s)}^p + \abs{u_{,t}(x,s)}^p\, \ud x\, \ud s\right)^{\frac{1}{p}}, \end{equation*} \item[$\bullet$] $\mathcal{V}$ is the set of all smooth solenoidal functions whose normal component vanishes on the boundary: \begin{equation*} \mathcal{V} = \left\{u\in\mathcal{C}^\infty(\Omega)\colon \Div u = 0 \text{ in } \Omega, \ u\cdot n\vert_S = 0 \right\}, \end{equation*} \item[$\bullet$] $H$ is the closure of $\mathcal{V}$ in $L_2(\Omega)$, \item[$\bullet$] $\widehat H = H \times L_2(\Omega)$, \item[$\bullet$] $V$ is the closure of $\mathcal{V}$ in $H^1(\Omega)$, \item[$\bullet$] $\widehat V = V\times H^1(\Omega)$, \item[$\bullet$] $V^k_2(\Omega^t)$, where $k \in \mathbb{N}$, is the closure of $\mathcal{C}^{\infty}(\Omega\times(t_0,t_1))$ in the norm \begin{equation*} \norm{u}_{V^k_2(\Omega^t)} = \underset{t\in (t_0,t_1)}{\esssup}\norm{u}_{H^k(\Omega)} +\left(\int_{t_0}^{t_1}\norm{\nabla u}^2_{H^{k}(\Omega)}\, \ud t\right)^{1/2}. \end{equation*} \end{enumerate} To shorten the energy estimates we use: \begin{equation}\label{eq14} \begin{aligned} E_{v,\omega}(t) &:= \norm{f}_{L_2(t_0,t;L_{\frac{6}{5}}(\Omega))} + \norm{g}_{L_2(t_0,t;L_{\frac{6}{5}}(\Omega))} + \norm{v(t_0)}_{L_2(\Omega)} + \norm{\omega(t_0)}_{L_2(\Omega)}, \\ E_{h,\theta}(t) &:= \norm{f_{,x_3}}_{L_2(t_0,t;L_{\frac{6}{5}}(\Omega))} + \norm{g_{,x_3}}_{L_2(t_0,t;L_{\frac{6}{5}}(\Omega))} + \norm{h(t_0)}_{L_2(\Omega)} + \norm{\theta(t_0)}_{L_2(\Omega)}. \end{aligned} \end{equation} The following function is of particular interest: \begin{equation}\label{eq26} \delta(t) := \norm{f_{,x_3}}^2_{L_2(\Omega^t)} + \norm{g_{,x_3}}^2_{L_2(\Omega^t)} + \norm{\Rot h(t_0)}^2_{L_2(\Omega)} + \norm{h(t_0)}^2_{L_2(\Omega)} + \norm{\theta(t_0)}^2_{L_2(\Omega)}. \end{equation} It expresses the smallness assumption which has to be made in order to prove the existence of regular solutions (see \cite{2012arXiv1205.4046N}). It contains no $L_2$-norms of the initial or the external data, but only $L_2$-norms of their derivatives along the axis of the cylinder. In other words, the data themselves need not be small; only their rate of change along the axis of the cylinder has to be small. With the above notation the following Theorem was proved in \cite{2012arXiv1205.4507N}. It is fundamental for further considerations. \begin{thm}\label{thm0} Let $0 < T < \infty$ be sufficiently large and fixed. Suppose that $v(0), \omega(0) \in H^1(\Omega)$ and $\Rot h(0) \in L_2(\Omega)$.
In addition, let the external data satisfy $f_3\vert_{S_2} = 0$, $g'\vert_{S_2} = 0$, \begin{equation*} \begin{aligned} &\norm{f(t)}_{L_2(\Omega)} \leq \norm{f(kT)}_{L_2(\Omega)}e^{-(t - kT)}, & & &\norm{f_{,x_3}(t)}_{L_2(\Omega)} &\leq \norm{f_{,x_3}(kT)}_{L_2(\Omega)}e^{-(t - kT)},\\ &\norm{g(t)}_{L_2(\Omega)} \leq \norm{g(kT)}_{L_2(\Omega)}e^{-(t - kT)}, & & &\norm{g_{,x_3}(t)}_{L_2(\Omega)} &\leq \norm{g_{,x_3}(kT)}_{L_2(\Omega)}e^{-(t - kT)} \end{aligned} \end{equation*} and \begin{equation*} \sup_k \left\{\norm{f(kT)}_{L_2(\Omega)},\norm{f_{,x_3}(kT)}_{L_2(\Omega)},\norm{g(kT)}_{L_2(\Omega)},\norm{g_{,x_3}(kT)}_{L_2(\Omega)}\right\} < \infty. \end{equation*} Then, for sufficiently small $\delta(T)$ there exists a unique and regular solution to problem \eqref{p1} on the interval $(0,\infty)$. Moreover, \begin{multline*} \norm{v}_{W^{2,1}_2(\Omega^{\infty})} + \norm{\omega}_{W^{2,1}_2(\Omega^{\infty})} + \norm{\nabla p}_{L_2(\Omega^{\infty})} \leq \sup_k \Big(\norm{f}_{L_2(\Omega^{kT})} + \norm{f_{,x_3}}_{L_2(\Omega^{kT})} \\ + \norm{g}_{L_2(\Omega^{kT})} + \norm{g_{,x_3}}_{L_2(\Omega^{kT})} + \norm{v(0)}_{H^1(\Omega)} + \norm{\omega(0)}_{H^1(\Omega)} + 1\Big)^3. \end{multline*} \end{thm} \section{Main results}\label{sec4} The main result of this paper reads: \begin{thm}\label{thm1} Let the assumptions of Theorem \ref{thm0} be satisfied. Suppose that \begin{equation*} \norm{u\big((k + 1)T + t\big) - u(kT + t)}_{L_2(\Omega)} < \epsilon, \end{equation*} for any $t \in [0,T]$ and $k \in \mathbb{N}$, where $u$ is any element of the set $\{f,g,f_{,x_3},g_{,x_3}\}$. Then, the family of semi-processes $\{U_\sigma(t,\tau)\colon t\geq\tau\geq 0\}$, $\sigma \in \Sigma^{\epsilon}_{\sigma_0}$ corresponding to problem \eqref{p1} has a uniform attractor $\mathcal{A}_{\Sigma^{\epsilon}_{\sigma_0}}$ which coincides with the uniform attractor $\mathcal{A}_{\omega(\Sigma^{\epsilon}_{\sigma_0})}$ of the family of semiprocesses $\{U_\sigma(t,\tau)\colon t\geq\tau\geq 0\}$, $\sigma \in \omega\left(\Sigma^{\epsilon}_{\sigma_0}\right)$, i.e. \begin{equation*} \mathcal{A}_{\Sigma^{\epsilon}_{\sigma_0}} = \mathcal{A}_{\omega\big(\Sigma^{\epsilon}_{\sigma_0}\big)}. \end{equation*} \end{thm} For the relevant notation we refer to Section \ref{sec5}. The second result deals with problem \eqref{p1} when the external data are time-independent and the viscosity $\nu$ is large enough. It is expected that in such a case the solutions tend to stationary solutions. \begin{thm}[convergence to stationary solutions]\label{thm3} Suppose that the external data $f$ and $g$ do not depend on time.
Then, there exists a positive constant $\nu_*$ such that for all $\nu > \nu_*$ the stationary problem \begin{equation*} \begin{aligned} &v_{\infty}\cdot \nabla v_{\infty} - (\nu + \nu_r)\triangle v_{\infty} + \nabla p_{\infty} = 2\nu_r\Rot\omega_{\infty} + f & &\text{in } \Omega,\\ &\Div v_{\infty} = 0 & &\text{in } \Omega,\\ &\begin{aligned} &v_{\infty}\cdot \nabla \omega_{\infty} - \alpha \triangle \omega_{\infty} - \beta\nabla\Div\omega_{\infty} + 4\nu_r\omega_{\infty} \\ &\mspace{60mu} = 2\nu_r\Rot v_{\infty} + g \end{aligned}& &\text{in } \Omega, \\ &v_{\infty}\cdot n = 0 & &\text{on $S$}, \\ &\Rot v_{\infty} \times n = 0 & &\text{on $S$}, \\ &\omega_{\infty} = 0 & &\text{on $S_1$}, \\ &\omega'_{\infty} = 0, \qquad \omega_{\infty 3,x_3} = 0 & &\text{on $S_2$} \end{aligned} \end{equation*} has a unique solution $(v_{\infty},\omega_{\infty})$ for which \begin{equation}\label{eq43} \norm{v_\infty}_{H^2(\Omega)} + \norm{p_{\infty}}_{H^1(\Omega)} + \norm{\omega_{\infty}}_{H^2(\Omega)} \leq F\big(\norm{f}_{L_2(\Omega)},\norm{g}_{L_2(\Omega)}\big) \end{equation} holds, where $F\colon[0,\infty)\times[0,\infty)\to[0,\infty)$, $F(0,0) = 0$ is a continuous and increasing function. Moreover, under the assumptions of Theorem \ref{thm1} there exists a unique solution $(v(t),\omega(t))$ to problem \eqref{p1} converging to the stationary solution $(v_{\infty},\omega_{\infty})$ as $t\to\infty$ and satisfying \begin{equation*} \norm{v(t) - v_{\infty}}^2_{L_2(\Omega)} + \norm{\omega(t) - \omega_{\infty}}^2_{L_2(\Omega)} \leq \left(\norm{v(0) - v_{\infty}}^2_{L_2(\Omega)} + \norm{\omega(0) - \omega_{\infty}}^2_{L_2(\Omega)}\right)e^{-\Delta(\nu)t}, \end{equation*} where \begin{equation*} \Delta(\nu) = c_1(\nu) - \frac{3c_2}{\nu}F^4\big(\norm{f_\infty}_{L_2(\Omega)},\norm{g_\infty}_{L_2(\Omega)}\big) \end{equation*} and \begin{equation*} \begin{aligned} c_1(\nu) &= \frac{\min\{\nu,\alpha\}}{c_{\Omega}}, \\ c_2 &= \frac{c_{I,\Omega}}{\min\{1,\alpha\}}. \end{aligned} \end{equation*} \end{thm} The last result of this article establishes a certain relation between the trajectories of the standard Navier-Stokes equations and the micropolar equations. \begin{thm}\label{thm2} Let the pair $(u,\Theta)$ be a solution to the following initial-boundary value problem \begin{equation}\label{p11} \begin{aligned} &u_{,t} - \nu\triangle u + (u\cdot \nabla) u + \nabla q = f & &\text{in $\Omega^t$}, \\ &\Div u = 0 & &\text{in $\Omega^t$}, \\ &\Theta_{,t} - \alpha \triangle \Theta - \beta \nabla \Div\Theta + (u\cdot \nabla)\Theta = g & &\text{in $\Omega^t$}, \\ &\Rot u \times n = 0 & &\text{on $S^t$}, \\ &u \cdot n = 0 & &\text{on $S^t$}, \\ &\Theta = 0 & &\text{on $S_1^t$}, \\ &\Theta' = 0, \qquad \Theta_{3,x_3} = 0 & &\text{on $S_2^t$}, \\ &u\vert_{t = t_0} = u(t_0), \qquad \Theta\vert_{t = t_0} = \Theta(t_0) & &\text{in $\Omega\times\{t = t_0\}$}. \end{aligned} \end{equation} Suppose that $\nu > 0$ is sufficiently large. Finally, let the assumptions of Theorem \ref{thm1} hold. Then, for any $(u(t_0),\Theta(t_0)) \in B_{(v(t_0),\omega(t_0))}(R)$ (i.e. the ball centered at $(v(t_0),\omega(t_0))$ with radius $R$), where $R > 0$, there exists $t^* = t^*(R)$ such that for all $t \geq t^*$ the trajectory $(v(t),\omega(t))$ lies in the $\epsilon$-neighborhood of the trajectory $(u(t),\Theta(t))$. \end{thm} \section{Basics of semiprocesses}\label{sec5} We begin by recalling a few facts from \cite[Ch. 1]{chol} and \cite[Ch. 2]{che}. Let $\{T(t)\}$ be a semigroup acting on a complete metric or Banach space $X$.
Denote by $\mathcal{B}(X)$ the set of all bounded sets in $X$ with respect to metric in $X$. We say that $P \subset X$ is an \emph{attracting set} for $\{T(t)\}$ if for any $B \in \mathcal{B}(X)$ \begin{equation*} \dist_X\big(T(t)B,P\big)\xrightarrow[t\to\infty]{} 0. \end{equation*} Now we may define an attractor: \begin{defi} A set $\mathcal{A}$ is called \emph{global attractor} for the semigroup $\{T(t)\}$, if it satisfies: \begin{itemize} \item $\mathcal{A}$ is compact in $X$, \item $\mathcal{A}$ is an attracting set for $\{T(t)\}$, \item $\mathcal{A}$ is strictly invariant with respect to $\{T(t)\}$, i.e. $T(t)\mathcal{A} = \mathcal{A}$ for all $t \geq 0$. \end{itemize} \end{defi} \begin{defi} For any $B \in \mathcal{B}(X)$ the set \begin{equation*} \omega(B) = \bigcap_{\tau \geq 0} \overline{\bigcup_{t\geq \tau}T(t) B}^{X} \end{equation*} is called an \emph{$\omega$-limit set} for $B$. \end{defi} The existence of the global attractor in ensured by the following Proposition: \begin{prop}\label{prop2} Suppose that $\{T(t)\}$ is a continuous semigroup in a complete metric space $X$ having a compact attracting set $K \Subset X$. Then the semigroup $\{T(t)\}$ has a global attractor $\mathcal{A}$ ($\mathcal{A} \Subset K$). The attractor coincides with $\omega(K)$, i.e. $\mathcal{A} = \omega(K)$. \end{prop} \begin{proof} We refer the reader to \cite[Ch. 2, \S 3, Theorem 3.1]{che}. \end{proof} To give the proof of Theorem \ref{thm1} in the next Section we introduce the notions and concept of semi-processes. For this purpose we make use of \cite[Ch. $7$]{che}. Let us rewrite equation \eqref{p1}$_1$ in the abstract form \begin{equation}\label{eq37} (v_{,t},\omega_{,t}) = A(v,\omega,t) = A_{\sigma(t)}(v,\omega), \qquad t\in \mathbb{R}^+, \end{equation} where the right-hand side depends directly on the time-dependent forces and momenta, which is indicated by the presence of function $\sigma(t) = (f(t),g(t))$. The function $\sigma(t)$ is referred as the \emph{time symbol} (or the \emph{symbol}). By $\Psi$ we denote a Banach space, which contains the values of $\sigma(t)$ for almost all $t \in \mathbb{R}_+$. In our case $\Psi = L_{\frac{6}{5}}(\Omega)\cap L_2(\Omega)\times L_{\frac{6}{5}}(\Omega)\cap L_2(\Omega)$ (although we could write $L_2(\Omega)\times L_2(\Omega)$ only since $\Omega$ is bounded but we want to point out that in certain cases a weaker assumption on forces can be imposed). Moreover we assume that $\sigma(t)$, as a function of $t$, belongs to a topological space $\Xi_+ := \{\xi(t), t \in\mathbb{R}_+\colon \xi(t) \in \Psi \ \text{for a.e. $t\in\mathbb{R}_+$}\}$. It is tempting to write $\Xi_+ = L_{2,loc}(0,\infty;\Psi)$ but we must take into account certain restrictions, which were imposed on the data (see Theorem \ref{thm1}). Thus, we describe $\Xi_+$ in greater detail below. Consider then the family of equations of the form of \eqref{eq37}, where $\sigma(t) \in \Sigma \subseteq \Xi_+$. The space $\Sigma$ is called \emph{symbol space} and is assumed to contain, along with $\sigma(t)$, all translations, i.e. $\sigma(t + s) = T(s)\sigma(t) \in \Sigma$ for all $s\geq 0$, where $T(s)$ is a translation operator acting on $\Xi_+$. Furthermore, we fix $\sigma_0(t)$, $t \geq 0$. Consider the closure in $\Xi_+$ of the set \begin{equation*} \left\{T(s)\sigma_0(t)\colon s \geq 0\right\} = \left\{\sigma_0(t + s)\colon s\geq 0\right\}. 
\end{equation*} We call this closure the \emph{hull} of the symbol $\sigma_0(t)$ and denote it by $\mathcal{H}_+(\sigma_0)$, \begin{equation*} \mathcal{H}_+(\sigma_0) = \overline{\left\{T(s)\sigma_0\colon s \geq 0\right\}}^{\Xi_+}. \end{equation*} \begin{defi} The symbol $\sigma_0(t) \in \Xi_+$ is called \emph{translation compact} in $\Xi_+$ if the hull $\mathcal{H}_+(\sigma_0)$ is compact in $\Xi_+$. \end{defi} In view of the above definition we would like to set $\Sigma = \mathcal{H}_+(\sigma_0)$. However, for this purpose we need to determine when the hull $\mathcal{H}_+(\sigma_0)$ is compact in $\Xi_+$. This is crucial for the proof of the existence of the uniform attractor because we use: \begin{prop}\label{prop1} Let $\sigma_0(s)$ be a translation compact function in $\Xi_+$. Let the family of semi-processes $\{U_\sigma(t,\tau)\colon t\geq \tau\geq 0\}$, $\sigma \in \mathcal{H}_+(\sigma_0)$ be uniformly asymptotically compact and $\big(E\times \mathcal{H}_+(\sigma_0),E\big)$-continuous. Let $\omega(\mathcal{H}_+(\sigma_0))$ be the attractor of the translation semi-group $\{T(t)\}$ acting on $\mathcal{H}_+(\sigma_0)$. Consider the corresponding family of semi-processes $\{U_\sigma(t,\tau)\colon t\geq\tau\geq 0\}$, $\sigma \in \mathcal{H}_+(\sigma_0)$. Then, the uniform attractor $\mathcal{A}_{\mathcal{H}_+(\sigma_0)}$ of the family of semi-processes $\{U_\sigma(t,\tau)\colon t\geq\tau\geq 0\}$, $\sigma \in \mathcal{H}_+(\sigma_0)$ exists and coincides with the uniform attractor $\mathcal{A}_{\omega(\mathcal{H}_+(\sigma_0))}$ of the family of semi-processes $\{U_\sigma(t,\tau)\colon t\geq\tau\geq 0\}$, $\sigma \in \omega(\mathcal{H}_+(\sigma_0))$, i.e. \begin{equation*} \mathcal{A}_{\mathcal{H}_+(\sigma_0)} = \mathcal{A}_{\omega(\mathcal{H}_+(\sigma_0))}. \end{equation*} \end{prop} \begin{proof} For the proof we refer the reader to \cite[Ch. 7, \S 3]{che}. \end{proof} The compactness of $\Sigma$ will be established in the framework of almost periodic functions and Stepanow spaces. We need a few more definitions (see \cite{Guter:1966ys} and \cite[Ch. 7, \S 5]{che}): \begin{defi} We call the expression \begin{equation*} \ud_{S^p_l}\big(f(s),g(s)\big) = \sup_{t \geq 0} \left(\frac{1}{l}\int_{t}^{t + l}\norm{f(s) - g(s)}_{\Psi}^p\, \ud s \right)^{\frac{1}{p}} \end{equation*} the $S^p_l$-distance of order $p \geq 1$ corresponding to the length $l$. The space of all $p$-power locally integrable functions with values in $\Psi$, equipped with the norm generated by $\ud_{S^p_l}$, is called the \emph{Stepanow space} and denoted by $L_{p}^S(\mathbb{R}^+;\Psi)$. \end{defi} In view of the above definition we set \begin{equation*} \Xi_+ = L^S_{2,loc}(\mathbb{R}^+,\Psi). \end{equation*} Also note that in the above definition we can put $l = 1$. Thus, we will write $S^p$ instead of $S^p_l$ or $S^p_1$. In addition, all $S_l^p$-spaces are complete. \begin{defi} A function $f \in L_p^S(\mathbb{R}^+;\Psi)$ is called \emph{$S^p$-asymptotically almost periodic} if for any $\epsilon > 0$ there exists a number $l = l(\epsilon)$ such that for each interval $(\alpha,\alpha + l)$, $\alpha \geq 0$ there exists a point $\tau \in (\alpha,\alpha + l)$ such that \begin{equation*} \ud_{S^p}\big(f(s + \tau),f(s)\big) = \sup_{t\geq 0} \left(\int_{t}^{t + 1} \norm{f(s + \tau) - f(s)}^p_{\Psi}\, \ud s\right)^{\frac{1}{p}} < \epsilon.
\end{equation*} \end{defi} \begin{defi} We say that a function $f\in L_p^S(\mathbb{R}^+;\Psi)$ is \emph{$S^p$-normal} if the family of translations \begin{equation*} \left\{ f^h(s) = f(h + s)\colon h \in \mathbb{R}^+\right\} \end{equation*} is precompact in $L_p^S(\mathbb{R}^+;\Psi)$ with respect to the norm \begin{equation*} \left(\sup_{t \geq 0} \int_t^{t + 1} \norm{f(s)}^p_{\Psi}\, \ud s\right)^{\frac{1}{p}} \end{equation*} induced by $\ud_{S^p}$. \end{defi} From our point of view the following Proposition will play a crucial role: \begin{prop}\label{prop3} A function $f$ is $S^p$-normal if and only if $f$ is $S^p$-asymptotically almost periodic. \end{prop} Now we define \begin{equation*} \mathcal{H}_+(\sigma_0) = \overline{\left\{\sigma^h_0(s) = \sigma_0(s + h)\colon h \geq 0\right\}}^{\Xi_+}. \end{equation*} Note, that the set $\mathcal{H}_+(\sigma_0)$ does not contain any information on the smallness assumption on the external data. Therefore, for any given $\sigma_0$ we additionally introduce \begin{equation*} \sigma^{\epsilon}_{0,x_3} = \big(f_{0,x_3},g_{0,x_3}\big), \end{equation*} where \begin{equation*} \norm{f_{0,x_3}}_{L_2(\Omega^t)}^2 + \norm{g_{0,x_3}}_{L_2(\Omega^t)}^2 < \epsilon \end{equation*} and define \begin{equation*} \omega(\sigma_{0,x_3}^{\epsilon}) = \bigcap_{\tau \geq 0} \overline{\left\{\sigma_{0,x_3}^{\epsilon,h}(s)\colon h \geq \tau\right\}}^{\Xi_+}. \end{equation*} Then we set \begin{equation*} \Sigma^\epsilon_{\sigma_0} = \mathcal{H}_+(\sigma_0) \cap \omega(\sigma_{0,x_3}^{\epsilon}). \end{equation*} \begin{rem}\label{rem8} In view of the above Proposition, the hull $\mathcal{H}_+(\sigma_0)$ is compact if we take $\sigma_0$ $S^2$-asymptotically almost periodic. But recall that we obtained the global existence of regular solutions under the assumption on the external data that they decay exponentially on every time interval of the form $[kT,(k + 1)T]$ (see Theorem \ref{thm0}). Thus, if we assume that \begin{equation*} \norm{f\big((k + 1)T + t\big) - f(kT + t)}_{\Psi} < \epsilon \end{equation*} (and so for $g$ and their derivatives with respect to $x_3$) then the external data become $S^2$-asympto\-tically almost periodic. In other words, we take almost the same external data on every time interval of the form $[kT,(k + 1)T]$. On the contrary, in the proof of the global existence of regular solutions, the external data could differ from interval to interval without any restrictions. \end{rem} We can finally introduce the following Definition: \begin{defi} Let $E$ be a Banach space (or complete metric or a closed subset of $E$). Let a two parameter family of operators \begin{equation*} \left\{U_{\sigma}(t,\tau)\colon t\geq\tau\geq 0\right\}, \qquad U_{\sigma}(t,\tau)\colon E\to E \end{equation*} be given. We call it \emph{semi-process} in $E$ with the time symbol $\sigma \in \Sigma$ if \begin{description} \item[$(\mathbf{U})_1$] $U_{\sigma}(t,\tau) = U_{\sigma}(t,s)U_{\sigma}(s,\tau)$ for all $t\geq s\geq \tau \geq 0$, \item[$(\mathbf{U})_2$] $U_{\sigma}(\tau,\tau) = \Id$ for all $\tau \geq 0$. \end{description} \end{defi} Observe that if we had a global and unique solution to problem \eqref{p1}, we could associate it with certain semi-process $\left\{U_{\sigma}(t,\tau)\colon t\geq\tau\geq 0\right\}$ defined on $\widehat H$. In three dimensions, likewise for the standard Navier-Stokes equations, we only know that such a semi-process would exist on $[0,t_{\max}]$, where $t_{\max} = t_{\max}(v_0,\omega_0,f,g)$. 
Following the idea from \cite[\S 2.3]{chol1} we can define a semi-process on a certain subspace of $\widehat H$. Let $B^{\epsilon} \subset \widehat H$ be the set \begin{equation*} B^{\epsilon} = \left\{\big(v(0),\omega(0)\big) \in \widehat H\cap \widehat V\colon \norm{v(0)_{,x_3}}_{L_2(\Omega)}^2 + \norm{\Rot v(0)_{,x_3}}_{L_2(\Omega)}^2 + \norm{\omega(0)_{,x_3}}_{L_2(\Omega)}^2 < \epsilon\right\}. \end{equation*} Define \begin{equation*} \omega_{\tau,\Sigma^{\epsilon}_{\sigma_0}}(B^{\epsilon}) := \bigcap_{t\geq\tau}\overline{\bigcup_{\sigma \in \Sigma^{\epsilon}_{\sigma_0}}\bigcup_{s\geq t}U_{\sigma}(s,\tau)B^{\epsilon}}^{\widehat H} \end{equation*} and introduce the sets \begin{equation*} \begin{aligned} &\widehat H^{\epsilon} = \widehat H\cap \omega_{\tau,\Sigma^{\epsilon}_{\sigma_0}}(B^{\epsilon}), \\ &\widehat V^{\epsilon} = \widehat V\cap \omega_{\tau,\Sigma^{\epsilon}_{\sigma_0}}(B^{\epsilon}). \end{aligned} \end{equation*} \begin{defi} The family of semiprocesses $\left\{U_{\sigma}(t,\tau)\right\}_{t\geq \tau\geq 0}$, $\sigma \in \Sigma^{\epsilon}_{\sigma_0}$ is said to be $(\widehat H^{\epsilon}\times \Sigma^{\epsilon}_{\sigma_0},\widehat H^{\epsilon})$-continuous if for any fixed $t,\tau \in \mathbb{R}_+$, $t \geq \tau$ the mapping $\big((v,\omega),\sigma\big)\mapsto U_{\sigma}(t,\tau)(v,\omega)$ is continuous from $\widehat H^{\epsilon}\times \Sigma^{\epsilon}_{\sigma_0}$ to $\widehat H^{\epsilon}$. \end{defi} Let us justify the $(\widehat H^{\epsilon}\times \Sigma^{\epsilon}_{\sigma_0},\widehat H^{\epsilon})$-continuity of $\left\{U_{\sigma}(t,\tau)\right\}_{t\geq\tau\geq 0}$. We have \begin{lem}\label{lem35} The family $\left\{U_{\sigma}(t,\tau)\right\}_{t\geq\tau\geq 0}$, $\sigma \in \Sigma^{\epsilon}_{\sigma_0}$ of semiprocesses is $(\widehat H^{\epsilon}\times \Sigma^{\epsilon}_{\sigma_0},\widehat H^{\epsilon})$-continuous. \end{lem} \begin{proof} Let $\sigma_1 = (f^1,g^1)$ and $\sigma_2 = (f^2,g^2)$ be two different symbols. Consider two corresponding solutions $(v^1,\omega^1)$ and $(v^2,\omega^2)$ to problem \eqref{p1} with the initial conditions $(v^1_\tau,\omega^1_\tau)$ and $(v^2_\tau,\omega^2_\tau)$, respectively. Denote $V = v^1 - v^2$, $\Theta = \omega^1 - \omega^2$, $P = p^1 - p^2$, $F = f^1 - f^2$ and $G = g^1 - g^2$. Then the pair $(V,\Theta)$, \begin{equation*} (V,\Theta) = U_{\sigma_1}(t,\tau)(v^1_\tau,\omega^1_\tau) - U_{\sigma_2}(t,\tau)(v^2_\tau,\omega^2_\tau), \end{equation*} is a solution to the following problem \begin{equation*} \begin{aligned} &V_{,t} - (\nu + \nu_r)\triangle V + \nabla P = - V\cdot \nabla v^1 - v^2\cdot \nabla V + 2\nu_r\Rot\Theta + F& &\text{in } \Omega^t, \\ &\Div V = 0 & &\text{in } \Omega^t,\\ &\begin{aligned} &\Theta_{,t} - \alpha \triangle \Theta - \beta\nabla\Div\Theta + 4\nu_r \Theta \\ &\mspace{60mu} = -v^1\cdot \nabla \Theta - V\cdot \nabla \omega^2 + 2\nu_r\Rot V + G \end{aligned}& &\text{in } \Omega^t,\\ &\Rot V \times n = 0, \qquad V\cdot n = 0 & &\text{on } S^t, \\ &\Theta = 0 & &\text{on } S^t_1, \\ &\Theta' = 0, \qquad \Theta_{3,x_3} = 0 & &\text{on } S^t_2, \\ &V\vert_{t = \tau} = V_\tau, \qquad \Theta\vert_{t = \tau} = \Theta_\tau & &\text{on } \Omega\times\{t = \tau\}. \end{aligned} \end{equation*} With $F$, $G$ and the initial conditions set to zero, we studied the above problem in \cite[see Lemma 11.1]{2012arXiv1205.4046N}.
Thus, by analogous calculations we obtain the inequality \begin{multline*} \frac{1}{2}\Dt \int_{\Omega} \abs{V}^2 + \abs{\Theta}^2\, \ud x + \frac{\nu}{2c_{\Omega}}\norm{\Rot V}^2_{L_2(\Omega)} \\ \leq \frac{c_{I,\Omega}}{\nu} \left(\norm{\nabla v^1}^2_{L_3(\Omega)} \norm{V}^2_{L_2(\Omega)} + \norm{\nabla \omega^2}^2_{L_3(\Omega)} \norm{\Theta}^2_{L_2(\Omega)} + \norm{F}^2_{L_{\frac{6}{5}}(\Omega)} + \norm{G}^2_{L_{\frac{6}{5}}(\Omega)}\right) \end{multline*} Application of the Gronwall inequality shows \begin{multline*} \norm{V(t)}^2_{L_2(\Omega)} + \norm{\Theta(t)}^2_{L_2(\Omega)} \leq c_{\nu,I,\Omega}\exp\left(\norm{\nabla v^1}^2_{L_2(\tau,t;L_3(\Omega))} + \norm{\nabla \omega^2}^2_{L_2(\tau,t;L_3(\Omega))}\right) \\ \cdot \left(\norm{V(\tau)}^2_{L_2(\Omega)} + \norm{\Theta(\tau)}^2_{L_2(\Omega)} + \norm{F}^2_{L_2(\tau,t;L_{\frac{6}{5}}(\Omega))} + \norm{G}^2_{L_2(\tau,t;L_{\frac{6}{5}}(\Omega))}\right). \end{multline*} Since $H^2(\Omega) \hookrightarrow W^1_6(\Omega) \hookrightarrow W^1_3(\Omega)$ we see that \begin{align*} \norm{\nabla v^1}^2_{L_2(\tau,t;L_3(\Omega))} &\leq c_{\Omega} \norm{v^1}^2_{W^{2,1}_2(\Omega\times(\tau,t))}, \\ \norm{\nabla \omega^2}^2_{L_2(\tau,t;L_3(\Omega))} &\leq c_{\Omega}\norm{\omega^2}^2_{W^{2,1}_2(\Omega\times(\tau,t))}, \end{align*} By Theorem \ref{thm0} we get \begin{multline*} \norm{V(t)}^2_{L_2(\Omega)} + \norm{\Theta(t)}^2_{L_2(\Omega)} \\ \leq c_{data}\left(\norm{V(\tau)}^2_{L_2(\Omega)} + \norm{\Theta(\tau)}^2_{L_2(\Omega)} + \norm{F}^2_{L_2(\tau,t;L_{\frac{6}{5}}(\Omega))} + \norm{G}^2_{L_2(\tau,t;L_{\frac{6}{5}}(\Omega))}\right), \end{multline*} which ends the proof. \end{proof} \section{Existence of the uniform attractor}\label{sec6} In this section we prove the existence of the uniform attractor to problem \eqref{p1} restricted to $\widehat H^{\epsilon}$. By $\mathcal{B}(\widehat H)$ we denote the family of all bounded sets of $\widehat H$. We begin by introducing some fundamental definitions: \begin{defi} A family of processes $\{U_\sigma(t,\tau)\}_{t\geq \tau \geq 0}$, $\sigma \in \Sigma$ is said to be \emph{uniformly bounded} if for any $B \in \mathcal{B}(\widehat H)$ the set \begin{equation*} \bigcup_{\sigma\in\Sigma}\bigcup_{\tau\in\mathbb{R}^+}\bigcup_{t\geq\tau}U_\sigma(t,\tau)B\in\mathcal{B}(\widehat H). \end{equation*} \end{defi} \begin{defi} A set $B_0\in \widehat H^{\epsilon}$ is said to be \emph{uniformly absorbing} for the family of processes $\{U_\sigma(t,\tau)\}_{t\geq \tau \geq 0}$, $\sigma \in \Sigma$, if for any $\tau\in\mathbb{R}^+$ and for every $B\in\mathcal{B}(\widehat H)$ there exists $t_0 = t_0(\tau,B)$ such that $\bigcup_{\sigma\in\Sigma}U_\sigma(t,\tau)B\subseteq B_0$ for all $t\geq t_0$. If the set $B_0$ is compact, we call the family of processes \emph{uniformly compact}. \end{defi} \begin{defi}\label{def6.3} A set $P$ belonging to $\widehat H$ is said to be \emph{uniformly attracting} for the family of processes $\{U_\sigma(t,\tau)\}_{t\geq \tau \geq 0}$, $\sigma \in \Sigma$, if for an arbitrary fixed $\tau \in \mathbb{R}^+$ \begin{equation*} \lim_{t\to\infty} \left(\sup_{\sigma\in\Sigma}\dist_E \big(U_\sigma(t,\tau)B,P\big)\right) = 0. \end{equation*} If the set $P$ is compact, we call the family of processes \emph{uniformly asymptotically compact}. 
\end{defi} \begin{defi} A closed set $\mathcal{A}_{\Sigma} \subset \widehat H$ is said to be the \emph{uniform attractor} of the family of processes $\{U_\sigma(t,\tau)\}_{t\geq \tau \geq 0}$, $\sigma \in \Sigma$, if it is a uniformly attracting set and it is contained in any closed uniformly attracting set $\mathcal{A}'$ of the family of processes $\{U_\sigma(t,\tau)\}_{t\geq \tau \geq 0}$, $\sigma \in \Sigma$, $\mathcal{A}_{\Sigma} \subseteq \mathcal{A}'$. \end{defi} To prove the existence of the uniform attractor we need two technical lemmas: \begin{lem}\label{lem33} Let the assumptions of Theorem \ref{thm0} hold. Then, there exists a bounded and absorbing set in $\widehat H^{\epsilon}$ for the family of semiprocesses $\{U_\sigma(t,\tau)\}_{t\geq \tau \geq 0}$, $\sigma \in \Sigma^{\epsilon}_{\sigma_0}$. \end{lem} \begin{proof} Under the assumptions of Theorem \ref{thm0} and in view of assertion $(\mathbf{B})$ of \cite[Lemma 5.1]{2012arXiv1205.4507N} we see that for $t = (k + 1)T$, $t_0 = kT$ and sufficiently large $T > 0$ \begin{equation*} \norm{v\big((k + 1)T\big)}_{L_2(\Omega)}^2 + \norm{\omega\big((k + 1)T\big)}_{L_2(\Omega)}^2 \leq \norm{v(kT)}_{L_2(\Omega)}^2 + \norm{\omega(kT)}_{L_2(\Omega)}^2 \end{equation*} holds. By the same Lemma \begin{multline}\label{eq55} \limsup_{t\to\infty}\left(\norm{v(t)}_{L_2(\Omega)}^2 + \norm{\omega(t)}^2_{L_2(\Omega)}\right) \\ \leq c_{\nu,\alpha,\Omega}\left(\sup_{k\in\mathbb{N}}\left(\norm{f(kT)}^2_{L_2(\Omega)} + \norm{g(kT)}^2_{L_2(\Omega)}\right) + \norm{v(0)}_{L_2(\Omega)}^2 + \norm{\omega(0)}_{L_2(\Omega)}^2\right) =: R_1. \end{multline} Thus, for every $(v_0,\omega_0)\in \widehat H^{\epsilon}$ there exists $t_0 > 0$ such that \begin{equation*} (v(t),\omega(t)) \in B(0,\rho_1) \qquad \forall_{t\geq t_0}, \end{equation*} where $B(0,\rho_1)$ is the ball in $\widehat H^{\epsilon}$ centered at $0$ with radius $\rho_1 > R_1$. If $B(0,r)\subset \widehat H^{\epsilon}$ is any ball such that $(v_0,\omega_0) \in B(0,r)$ then there exists $t_0 =t_0(r)$ such that \eqref{eq55} holds. This proves the existence of bounded absorbing sets in $\widehat H^{\epsilon}$ for the semiprocess $\{U_\sigma(t,\tau)\}_{t\geq \tau \geq 0}$, $\sigma \in \Sigma^{\epsilon}_{\sigma_0}$. \end{proof} \begin{lem}\label{lem34} Let the assumptions of Theorem \ref{thm0} hold. Then, there exists a bounded and absorbing set in $\widehat V^{\epsilon}$ for the family of semiprocesses $\{U_\sigma(t,\tau)\}_{t\geq \tau \geq 0}$, $\sigma \in \Sigma^{\epsilon}_{\sigma_0}$. \end{lem} \begin{proof} From the assumptions of Theorem \ref{thm0} and from \cite[Lemma 5.2]{2012arXiv1205.4507N}, where we set $t_1 = (k + 1)T$, $t_0 = kT$, we infer that \begin{multline}\label{eq12} \limsup_{t \to \infty} \left(\norm{v(t)}^2_{H^1(\Omega)} + \norm{\omega(t)}^2_{H^1(\Omega)}\right) \\ \leq c_{\nu,\alpha,\beta,I,P,\Omega}\left(\sup_{k\in\mathbb{N}}\left(\norm{f(kT)}^2_{L_2(\Omega)} + \norm{g(kT)}^2_{L_2(\Omega)}\right) + \norm{v(0)}_{H^1(\Omega)}^2 + \norm{\omega(0)}_{H^1(\Omega)}^2\right) =: R_2. \end{multline} As in the previous Lemma, for every $(v_0,\omega_0)\in \widehat V^{\epsilon}$ there exists $t_0 > 0$ such that \begin{equation*} (v(t),\omega(t)) \in B(0,\rho_2) \qquad \forall_{t\geq t_0}, \end{equation*} where $B(0,\rho_2)$ is the ball in $\widehat V^{\epsilon}$ centered at $0$ with radius $\rho_2 > R_2$. For any $(v_0,\omega_0) \in B(0,r) \subset \widehat V^{\epsilon}$ there exists $t_0 =t_0(r)$ such that \eqref{eq12} holds.
Therefore there exist bounded absorbing sets in $\widehat V^{\epsilon}$ for the semiprocess $\{U_\sigma(t,\tau)\}_{t\geq \tau \geq 0}$, $\sigma \in \Sigma^{\epsilon}_{\sigma_0}$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm1}] To prove the existence of the uniform attractor we make use of Proposition \ref{prop1}. We need to check the assumptions: \begin{itemize} \item From Remark \ref{rem8} and by assumption on the data it follows that $\sigma_0 = (f_0,g_0) \in \Xi_+$ is translation compact. \item From Definition \ref{def6.3} it follows that the family of semiprocesses $\{U_\sigma(t,\tau)\}_{t\geq \tau \geq 0}$, $\sigma \in \Sigma^{\epsilon}_{\sigma_0}$ is uniformly asymptotically compact, if there exists a set $P \in \widehat H^{\epsilon}$, which is uniformly attracting and compact. Lemmas \ref{lem33} and \ref{lem34} and the Sobolev embedding theorem ensure the existence of such set $P$. \item The $(\widehat H^{\epsilon}\times \Sigma^{\epsilon}_{\sigma_0},\widehat H^{\epsilon})$-continuity of the family of semiprocesses $\{U_\sigma(t,\tau)\}_{t\geq \tau \geq 0}$, $\sigma \in \Sigma^{\epsilon}_{\sigma_0}$ was proved in Lemma \ref{lem35}. \item To check that $\omega(\Sigma_{\sigma_0}^{\epsilon})$ is the global attractor for the translation semigroup $\{T(t)\}_{t\geq 0}$ acting on $\Sigma_{\sigma_0}^{\epsilon}$ we recall that $\Sigma_{\sigma_0}^{\epsilon} \Subset \Xi_+$ is compact metric space. We can therefore apply Proposition \ref{prop2}. Thus, \begin{align*} \dist_{\Xi_+} \big(T(t)\Sigma_{\sigma_0}^{\epsilon},\omega(\Sigma_{\sigma_0}^{\epsilon})\big) \xrightarrow[t\to 0]{}0 \intertext{and} T(t)\omega(\Sigma_{\sigma_0}^{\epsilon}) = \omega(\Sigma_{\sigma_0}^{\epsilon}) \qquad \forall_{t\geq 0}. \end{align*} \end{itemize} As we see all assumptions are satisfied. The application of Proposition \ref{prop1} ends the proof. \end{proof} \section{Convergence to stationary solutions for large $\nu$}\label{sec7} In this section we prove Theorem \ref{thm3}. \begin{proof} The existence of solutions for large $\nu$ and theirs estimate follows, after slight modifications, from \cite{luk0} and \cite[Ch. 2, \S 1, Theorems 1.1.1, 1.1.2 and 1.1.3]{luk1}. Let $V = v(t)-v_\infty$, $\Theta = \omega(t) - \omega_{\infty}$ and $P = p(t) - p_{\infty}$. Then $(V,\Theta)$ satisfies \begin{equation}\label{eq38} \begin{aligned} &V_{,t} - (\nu + \nu_r)\triangle V + \nabla P = -v\cdot\nabla V - V\cdot\nabla v_\infty + 2\nu_r \Rot \Theta & &\text{in $\Omega^t$}, \\ &\Div V = 0 & &\text{in $\Omega^t$}, \\ &\begin{aligned} &\Theta_{,t} - \alpha \triangle \Theta - \beta\nabla\Div\Theta + 4\nu_r \Theta \\ &\mspace{60mu} = - v\cdot\nabla \Theta - V\cdot \nabla \omega_{\infty} + 2\nu_r \Rot V \end{aligned} & &\text{in $\Omega^t$}, \\ &\Rot V\times n = 0 & &\text{on $S^t$}, \\ &V\cdot n = 0 & &\text{on $S^t$}, \\ &\Theta = 0 & &\text{on $S_1^t$}, \\ &\Theta' = 0, \qquad \Theta_{3,x_3} = 0 & &\text{on $S_2^t$}, \\ &V\vert_{t = 0} = v(0) - v_\infty, \qquad \Theta\vert_{t = 0} = \omega(0) - \omega_{\infty} & &\text{in $\Omega\times\{t = 0\}$}. 
\end{aligned} \end{equation} Multiplying the first and the third equations by $V$ and $\Theta$, respectively, and integrating over $\Omega$ yields \begin{multline*} \frac{1}{2}\Dt \norm{V}^2_{L_2(\Omega)} + \left(\nu + \nu_r\right)\norm{\Rot V}_{L_2(\Omega)}^2 + \int_S \Rot V\times n \cdot V\, \ud S \\ = -\int_{\Omega} V\cdot \nabla v_{\infty} \cdot V\, \ud x + 2\nu_r\int_{\Omega} \Rot V \cdot \Theta\, \ud x + \int_S \Theta \times n \cdot V\, \ud S \end{multline*} and \begin{multline*} \frac{1}{2}\Dt \norm{\Theta}^2_{L_2(\Omega)} + \alpha\norm{\Rot\Theta}^2_{L_2(\Omega)} + (\alpha + \beta)\norm{\Div \Theta}^2_{L_2(\Omega)} + 4\nu_r\norm{\Theta}^2_{L_2(\Omega)} \\ = -\int_{\Omega} V\cdot \nabla \omega_{\infty}\cdot \Theta\, \ud x + 2\nu_r\int_{\Omega} \Rot V \cdot \Theta\, \ud x, \end{multline*} because in view of \eqref{eq38}$_{2,5}$ and \eqref{p1}$_{2,4}$ we see that \begin{equation*} \begin{aligned} &\int_{\Omega}\nabla P\cdot V\, \ud x = \int_S P (V\cdot n)\, \ud S = 0, \\ &-\int_{\Omega}v\cdot \nabla V\cdot V\, \ud x = -\frac{1}{2}\int_{\Omega}v\cdot \nabla\abs{V}^2\, \ud x = -\int_S \abs{V}^2 (v\cdot n)\, \ud S = 0, \\ &-\int_{\Omega}v\cdot \nabla \Theta\cdot \Theta\, \ud x = -\frac{1}{2}\int_{\Omega}v\cdot \nabla\abs{\Theta}^2\, \ud x = -\int_S \abs{\Theta}^2 (v\cdot n)\, \ud S = 0. \end{aligned} \end{equation*} Adding both equalities gives \begin{multline}\label{eq39} \frac{1}{2}\Dt \left(\norm{V}^2_{L_2(\Omega)} + \norm{\Theta}^2_{L_2(\Omega)}\right) + \left(\nu + \nu_r\right)\norm{\Rot V}_{L_2(\Omega)}^2 + \alpha\norm{\Rot\Theta}^2_{L_2(\Omega)} \\ + (\alpha + \beta)\norm{\Div \Theta}^2_{L_2(\Omega)} + 4\nu_r\norm{\Theta}^2_{L_2(\Omega)} \\ = 4\nu_r\int_{\Omega} \Rot V \cdot \Theta\, \ud x -\int_{\Omega} V\cdot \nabla v_{\infty} \cdot V\, \ud x -\int_{\Omega} V\cdot \nabla \omega_{\infty}\cdot \Theta\, \ud x =: I_1 + I_2 + I_3. \end{multline} Utilizing the H\"older and the Young inequalities on the right-hand side we have for $I_1$ \begin{equation*} I_1 \leq 4\nu_r \norm{\Rot V}_{L_2(\Omega)}\norm{\Theta}_{L_2(\Omega)} \leq 4\nu_r\epsilon_1\norm{\Rot V}_{L_2(\Omega)}^2 + \frac{\nu_r}{\epsilon_1}\norm{\Theta}_{L_2(\Omega)}^2. \end{equation*} To estimate $I_2$ and $I_3$ we also make use of the interpolation inequality for $L_p$-spaces and the embedding theorem: \begin{equation*} \begin{aligned} \norm{u}_{L_4(\Omega)} \leq \norm{u}_{L_2(\Omega)}^{\frac{1}{4}}\norm{u}_{L_6(\Omega)}^{\frac{3}{4}} \leq c_I \norm{u}_{L_2(\Omega)}^{\frac{1}{4}}\norm{u}_{H^1(\Omega)}^{\frac{3}{4}}, \\ \norm{u}_{L_3(\Omega)} \leq \norm{u}_{L_2(\Omega)}^{\frac{1}{2}}\norm{u}_{L_6(\Omega)}^{\frac{1}{2}} \leq c_I \norm{u}_{L_2(\Omega)}^{\frac{1}{2}}\norm{u}_{H^1(\Omega)}^{\frac{1}{2}}.
\end{aligned} \end{equation*} Thus \begin{multline*} I_2 \leq \norm{V}_{L_4(\Omega)}\norm{\nabla v_{\infty}}_{L_2(\Omega)}\norm{V}_{L_4(\Omega)} \leq c_I\norm{V}_{L_2(\Omega)}^{\frac{1}{2}} \norm{V}_{H^1(\Omega)}^{\frac{3}{2}}\norm{v_{\infty}}_{H^1(\Omega)} \\ \leq \epsilon_2c_I \norm{V}_{H^1(\Omega)}^2 + \frac{c_I}{4\epsilon_2} \norm{V}_{L_2(\Omega)}^2\norm{v_{\infty}}_{H^1(\Omega)}^4 \end{multline*} and \begin{multline*} I_3 \leq \norm{V}_{L_6(\Omega)} \norm{\nabla \omega_{\infty}}_{L_2(\Omega)} \norm{\Theta}_{L_3(\Omega)} \leq c_I\norm{V}_{H^1(\Omega)}\norm{\omega_{\infty}}_{H^1(\Omega)} \norm{\Theta}_{L_2(\Omega)}^{\frac{1}{2}} \norm{\Theta}_{H^1(\Omega)}^{\frac{1}{2}} \\ \leq c_I \epsilon_{31}\norm{V}_{H^1(\Omega)}^2 + \frac{c_I}{4\epsilon_{31}}\norm{\omega_{\infty}}_{H^1(\Omega)}^2\norm{\Theta}_{L_2(\Omega)}\norm{\Theta}_{H^1(\Omega)} \\ \leq c_I \epsilon_{31}\norm{V}_{H^1(\Omega)}^2 + \frac{c_I\epsilon_{32}}{4\epsilon_{31}} \norm{\Theta}^2_{H^1(\Omega)} + \frac{c_I}{16\epsilon_{31}\epsilon_{32}}\norm{\omega_{\infty}}_{H^1(\Omega)}^4\norm{\Theta}_{L_2(\Omega)}^2. \end{multline*} Now we set $\epsilon_1 = \frac{1}{4}$. By \cite[Lemma 6.7]{2012arXiv1205.4046N} it follows that \begin{equation*} \nu\norm{\Rot V}_{L_2(\Omega)}^2 + \alpha\norm{\Rot\Theta}^2_{L_2(\Omega)} + (\alpha + \beta)\norm{\Div \Theta}^2_{L_2(\Omega)} \geq \frac{\nu}{c_{\Omega}} \norm{V}_{H^1(\Omega)}^2 + \frac{\alpha}{c_{\Omega}}\norm{\Theta}^2_{H^1(\Omega)}. \end{equation*} It enables us to put $\epsilon_2 c_I = \epsilon_{31} c_I = \frac{\nu}{4c_{\Omega}}$ and $\epsilon_{32} \frac{c_I}{4\epsilon_{31}} = \frac{\alpha}{2c_{\Omega}}$. Hence \begin{align*} \epsilon_2 = \frac{\nu}{4c_{I,\Omega}}, & &\epsilon_{31} = \frac{\nu}{4c_{I,\Omega}}, & &\epsilon_{32} = \frac{\alpha\nu}{3c_{I,\Omega}}, & \\ \frac{c_I}{4\epsilon_2} = \frac{c_{I,\Omega}}{\nu}, & & & &\frac{c_I}{16\epsilon_{31}\epsilon_{32}} = \frac{3c_{I,\Omega}}{4\alpha\nu^2}. & \end{align*} Therefore, from \eqref{eq39} we obtain \begin{multline*} \frac{1}{2}\Dt\left(\norm{V}^2_{L_2(\Omega)} + \norm{\Theta}_{L_2(\Omega)}^2\right) + \frac{\nu}{2c_{\Omega}}\norm{V}_{H^1(\Omega)}^2 + \frac{\alpha}{2c_{\Omega}}\norm{\Theta}^2_{H^1(\Omega)} \\ \leq \frac{c_{I,\Omega}}{\nu}\norm{V}_{L_2(\Omega)}^2\norm{v_\infty}^4_{H^1(\Omega)} + \frac{3c_{I,\Omega}}{4\alpha\nu^2}\norm{\Theta}_{L_2(\Omega)}^2\norm{\omega_\infty}^4_{H^1(\Omega)}. \end{multline*} Let \begin{equation*} \begin{aligned} c_1(\nu) &= \frac{\min\{\nu,\alpha\}}{c_{\Omega}}, \\ c_2 &= \frac{c_{I,\Omega}}{\min\{1,\alpha\}} \end{aligned} \end{equation*} and define \begin{equation*} \Delta(\nu) = c_1(\nu) - \frac{3c_2}{\nu}F^4\big(\norm{f_\infty}_{L_2(\Omega)},\norm{g_\infty}_{L_2(\Omega)}\big). \end{equation*} Observe, that $\Delta(\nu) \to \frac{\alpha}{c_{\Omega}} > 0$ as $\nu \to \infty$. In particular there exists $\nu_* > 0$ such that $\Delta(\nu) > 0$ for any $\nu > \nu_*$ holds. The last inequality implies \begin{equation*} \Dt\left(\norm{V}^2_{L_2(\Omega)} + \norm{\Theta}_{L_2(\Omega)}^2\right) + \Delta(\nu)\left(\norm{V}_{L_2(\Omega)}^2 + \norm{\Theta}^2_{L_2(\Omega)}\right) \leq 0, \end{equation*} which is equivalent to \begin{equation*} \Dt\left(\left(\norm{V}^2_{L_2(\Omega)} + \norm{\Theta}_{L_2(\Omega)}^2\right)e^{\Delta(\nu)t}\right) \leq 0. 
\end{equation*} Integration with respect to $t$ yields \begin{equation*} \left(\norm{V(t)}^2_{L_2(\Omega)} + \norm{\Theta(t)}_{L_2(\Omega)}^2\right)e^{\Delta(\nu)t} \leq \norm{V(0)}^2_{L_2(\Omega)} + \norm{\Theta(0)}_{L_2(\Omega)}^2, \end{equation*} which implies \begin{equation*} \norm{V(t)}^2_{L_2(\Omega)} + \norm{\Theta(t)}_{L_2(\Omega)}^2 \leq \left(\norm{V(0)}^2_{L_2(\Omega)} + \norm{\Theta(0)}_{L_2(\Omega)}^2\right)e^{-\Delta(\nu)t}. \end{equation*} This concludes the proof. \end{proof} \section{Continuous dependence on modeling}\label{sec8} In this Section we examine the difference between $(v,\omega)$ and the solution $(u,\Theta)$ to problem \eqref{p11}. Observe, that \eqref{p11} is the same as \eqref{p1} with $\nu_r = 0$. Therefore, for $\nu_r$ close to zero we may measure in some sense the deviation of the flows of micropolar fluids from that of modeled by the Navier-Stokes equations. This problem was considered locally in \cite{paye} and globally in \cite[\S 5]{luk2}. \begin{proof}[Proof of Theorem \ref{thm2}] Let us denote $V(t) = v(t) - u(t)$, $Z(t) = \omega(t) - \Theta(t)$. Then the pair $(V(t),Z(t))$ is a solution to the problem \begin{equation*} \begin{aligned} &V_{,t} - (\nu + \nu_r)\triangle V - \nu_r \triangle u + \nabla (p - q) = -v\cdot\nabla V - V\cdot\nabla u + 2\nu_r \Rot \omega & &\text{in $\Omega^t$}, \\ &\Div V = 0 & &\text{in $\Omega^t$}, \\ &\begin{aligned} &Z_{,t} - \alpha \triangle Z - \beta\nabla\Div Z + 4\nu_r \omega \\ &\mspace{60mu} = - v\cdot\nabla Z - V\cdot \nabla \Theta + 2\nu_r \Rot v \end{aligned} & &\text{in $\Omega^t$}, \\ &\Rot V\times n = 0 & &\text{on $S^t$}, \\ &V\cdot n = 0 & &\text{on $S^t$}, \\ &Z = 0 & &\text{on $S_1^t$}, \\ &Z' = 0, \qquad Z_{3,x_3} = 0 & &\text{on $S_2^t$}, \\ &V\vert_{t = t_0} = v(t_0) - u(t_0), \qquad Z\vert_{t = t_0} = \omega(t_0) - \Theta(t_0) & &\text{in $\Omega\times\{t = t_0\}$}. \end{aligned} \end{equation*} Multiplying the first equation and the third by $V$ and $Z$, respectively and integrating over $\Omega$ yields \begin{multline*} \frac{1}{2}\Dt \int_{\Omega} \abs{V}^2\, \ud x + (\nu + \nu_r)\int_{\Omega}\abs{\Rot V}^2\, \ud x + (\nu + \nu_r)\int_S \Rot V \times n \cdot V\, \ud S + \nu_r\int_{\Omega} \Rot u \cdot \Rot V\, \ud x \\ + \nu_r \int_S \Rot u\times n \cdot V\, \ud S + \int_S (p - q) V \cdot n\, \ud S \\ = -\int_{\Omega} v\cdot \nabla V\cdot V\, \ud x - \int_{\Omega} V\cdot \nabla u \cdot V\, \ud x + 2\nu_r \int_{\Omega} \Rot \omega\cdot V\, \ud x \end{multline*} and \begin{multline*} \frac{1}{2}\Dt \int_{\Omega} \abs{Z}^2\, \ud x + \alpha\int_{\Omega} \abs{\Rot Z}^2\, \ud x + (\alpha + \beta)\int_{\Omega} \abs{\Div Z}^2\, \ud x - \alpha\int_S Z\times n \cdot \Rot Z\, \ud S + 4\nu_r \int_{\Omega}\omega\cdot Z\, \ud x \\ = -\int_{\Omega} v\cdot \nabla Z\cdot Z\, \ud x - \int_{\Omega} V \cdot \nabla \Theta \cdot Z\, \ud x + 2\nu_r\int_{\Omega} \Rot v\cdot Z\, \ud x. \end{multline*} All boundary integrals vanish due to the boundary conditions. In addition \begin{equation*} \begin{aligned} -\int_{\Omega} v\cdot \nabla V\cdot V\, \ud x &= -\frac{1}{2}\int_{\Omega} v\cdot \nabla \abs{V}^2\, \ud x = -\frac{1}{2}\int_S \abs{V}^2 v \cdot n \, \ud S = 0, \\ -\int_{\Omega} v\cdot \nabla Z\cdot Z\, \ud x &= -\frac{1}{2}\int_{\Omega} v\cdot \nabla \abs{Z}^2\, \ud x = -\frac{1}{2}\int_S \abs{Z}^2 v \cdot n \, \ud S = 0. 
\end{aligned} \end{equation*} We also have \begin{equation*} 2\nu_r\int_{\Omega} \Rot \omega\cdot V\, \ud x = 2\nu_r\int_{\Omega} \omega\cdot \Rot V\, \ud x + 2\nu_r\int_S \omega\times n \cdot V\, \ud S = 2\nu_r\int_{\Omega} \omega\cdot \Rot V\, \ud x, \end{equation*} which follows from integration by parts and the boundary conditions. Thus, we get \begin{multline*} \frac{1}{2}\Dt\left(\norm{V}^2_{L_2(\Omega)} + \norm{Z}^2_{L_2(\Omega)}\right) + (\nu + \nu_r)\norm{\Rot V}^2_{L_2(\Omega)} + \alpha\norm{\Rot Z}^2_{L_2(\Omega)} + (\alpha + \beta)\norm{\Div Z}^2_{L_2(\Omega)} \\ = -\nu_r\int_{\Omega} \Rot u \cdot \Rot V\, \ud x- \int_{\Omega} V\cdot \nabla u \cdot V\, \ud x + 2\nu_r \int_{\Omega} \omega\cdot \Rot V\, \ud x \\ - 4\nu_r \int_{\Omega}\omega\cdot Z\, \ud x - \int_{\Omega} V \cdot \nabla \Theta \cdot Z\, \ud x + 2\nu_r\int_{\Omega} \Rot v\cdot Z\, \ud x. \end{multline*} Next we estimate the first and the third term on the right-hand side by means of the H\"older and the Young inequalities. We have \begin{equation*} \begin{aligned} -\nu_r\int_{\Omega} \Rot u \cdot \Rot V\, \ud x &\leq \nu_r\norm{\Rot u}_{L_2(\Omega)}\norm{\Rot V}_{L_2(\Omega)} \leq \epsilon_1\nu_r\norm{\Rot V}_{L_2(\Omega)}^2 + \frac{\nu_r}{4\epsilon_1}\norm{\Rot u}_{L_2(\Omega)}^2, \\ 2\nu_r \int_{\Omega} \omega\cdot \Rot V\, \ud x &\leq 2\nu_r\norm{\omega}_{L_2(\Omega)}\norm{\Rot V}_{L_2(\Omega)} \leq 2\epsilon_2\nu_r\norm{\Rot V}_{L_2(\Omega)}^2 + \frac{\nu_r}{2\epsilon_2}\norm{\omega}_{L_2(\Omega)}^2. \end{aligned} \end{equation*} We set $\epsilon_1 \nu_r = 2\epsilon_2 \nu_r = \frac{\nu_r}{2}$. Thus, $\epsilon_1 = \frac{1}{2}$ and $\epsilon_2 = \frac{1}{4}$. We get \begin{multline*} \frac{1}{2}\Dt\left(\norm{V}^2_{L_2(\Omega)} + \norm{Z}^2_{L_2(\Omega)}\right) + \nu\norm{\Rot V}^2_{L_2(\Omega)} + \alpha\norm{\Rot Z}^2_{L_2(\Omega)} + (\alpha + \beta)\norm{\Div Z}^2_{L_2(\Omega)} \\ \leq - \int_{\Omega} V\cdot \nabla u \cdot V\, \ud x - 4\nu_r \int_{\Omega}\omega\cdot Z\, \ud x - \int_{\Omega} V \cdot \nabla \Theta \cdot Z\, \ud x + 2\nu_r\int_{\Omega} \Rot v\cdot Z\, \ud x. \end{multline*} Utilizing \cite[Lemma 6.7]{2012arXiv1205.4046N} on the left-hand side yields \begin{multline*} \frac{1}{2}\Dt\left(\norm{V}^2_{L_2(\Omega)} + \norm{Z}^2_{L_2(\Omega)}\right) + \frac{\nu}{c_{\Omega}}\norm{V}^2_{H^1(\Omega)} + \frac{\alpha}{c_{\Omega}}\norm{Z}^2_{H^1(\Omega)} \\ \leq - \int_{\Omega} V\cdot \nabla u \cdot V\, \ud x - 4\nu_r \int_{\Omega}\omega\cdot Z\, \ud x - \int_{\Omega} V \cdot \nabla \Theta \cdot Z\, \ud x + 2\nu_r\int_{\Omega} \Rot v\cdot Z\, \ud x. \end{multline*} For the first term we have \begin{multline*} - \int_{\Omega} V\cdot \nabla u \cdot V\, \ud x \leq \norm{V}_{L_4(\Omega)}^2\norm{\nabla u}_{L_2(\Omega)} \leq c_I\norm{V}_{L_6(\Omega)}^{\frac{3}{2}}\norm{V}_{L_2(\Omega)}^{\frac{1}{2}}\norm{u}_{H^1(\Omega)} \\ \leq \epsilon_1 c_I\norm{V}_{H^1(\Omega)}^2 + \frac{c_I}{4\epsilon_1} \norm{V}_{L_2(\Omega)}^2\norm{u}_{H^1(\Omega)}^4, \end{multline*} where we used the H\"older inequality, the interpolation inequality between $L_2(\Omega)$ and $L_6(\Omega)$ and the Young inequality. For the second term we get \begin{equation*} - 4\nu_r \int_{\Omega}\omega\cdot Z\, \ud x \leq 4\nu_r\norm{\omega}_{L_2(\Omega)} \norm{Z}_{L_2(\Omega)} \leq 4\nu_r\epsilon_2\norm{\omega}_{L_2(\Omega)}^2 + \frac{\nu_rc_I}{\epsilon_2}\norm{Z}_{H^1(\Omega)}^2.
\end{equation*} The third term we estimate in the following way \begin{equation*} - \int_{\Omega} V \cdot \nabla \Theta \cdot Z\, \ud x \leq \norm{V}_{L_6(\Omega)}\norm{\nabla \Theta}_{L_3(\Omega)}\norm{Z}_{L_2(\Omega)} \leq c_I\epsilon_3\norm{V}_{H^1(\Omega)}^2 + \frac{1}{4\epsilon_3}\norm{\nabla \Theta}_{L_3(\Omega)}^2\norm{Z}_{L_2(\Omega)}^2. \end{equation*} Finally, for the fourth term we have \begin{equation*} 2\nu_r \int_{\Omega}\Rot v\cdot Z\, \ud x \leq 2\nu_r\norm{\Rot v}_{L_2(\Omega)} \norm{Z}_{L_2(\Omega)} \leq 2\nu_r\epsilon_4\norm{\Rot v}_{L_2(\Omega)}^2 + \frac{\nu_rc_I}{2\epsilon_4}\norm{Z}_{H^1(\Omega)}^2. \end{equation*} We set $\epsilon_1 c_I = \epsilon_3 c_I = \frac{\nu}{4c_{\Omega}}$ and $\frac{\nu_rc_I}{\epsilon_2} = \frac{\nu_rc_I}{2\epsilon_4} = \frac{\alpha}{4c_{\Omega}}$. Hence \begin{align*} \frac{c_I}{4\epsilon_1} &= \frac{c_{I,\Omega}}{\nu}, & & & 4\nu_r\epsilon_2 &= \frac{16\nu_r^2c_{I,\Omega}}{\alpha}, & & & \frac{1}{4\epsilon_3} &= \frac{c_{I,\Omega}}{\nu}, & & & 4\nu_r\epsilon_4 &= \frac{8\nu_r^2c_{I,\Omega}}{\alpha} \end{align*} and we obtain \begin{multline*} \frac{1}{2}\Dt\left(\norm{V}^2_{L_2(\Omega)} + \norm{Z}^2_{L_2(\Omega)}\right) + \frac{\nu}{2c_{\Omega}}\norm{V}^2_{H^1(\Omega)} + \frac{\alpha}{2c_{\Omega}}\norm{Z}^2_{H^1(\Omega)} \\ \leq \frac{c_{I,\Omega}}{\nu}\norm{V}_{L_2(\Omega)}^2\norm{u}_{H^1(\Omega)}^4 + \frac{16\nu_r^2c_{I,\Omega}}{\alpha}\norm{\omega}_{L_2(\Omega)}^2 \\ + \frac{c_{I,\Omega}}{\nu}\norm{\nabla \Theta}_{L_3(\Omega)}^2\norm{Z}_{L_2(\Omega)}^2 + \frac{8\nu_r^2c_{I,\Omega}}{\alpha}\norm{\Rot v}_{L_2(\Omega)}^2. \end{multline*} We introduce \begin{equation}\label{eq44} \begin{aligned} \Delta_1(\nu) &= \frac{\nu}{c_{\Omega}} - \frac{2c_{I,\Omega}}{\nu}\norm{u}_{H^1(\Omega)}^4, \\ \Delta_2(\nu) &= \frac{\alpha}{c_{\Omega}} - \frac{2c_{I,\Omega}}{\nu}\norm{\nabla \Theta}_{L_3(\Omega)}^2, \\ \Delta(\nu) &= \min\left\{\Delta_1(\nu),\Delta_2(\nu)\right\}. \end{aligned} \end{equation} Thus, the last inequality implies \begin{multline*} \Dt\left(\norm{V}^2_{L_2(\Omega)} + \norm{Z}^2_{L_2(\Omega)}\right) + \Delta(\nu)\left(\norm{V}^2_{L_2(\Omega)} + \norm{Z}^2_{L_2(\Omega)}\right) \\ \leq \frac{32\nu_r^2c_{I,\Omega}}{\alpha}\norm{\omega}_{L_2(\Omega)}^2 + \frac{16\nu_r^2c_{I,\Omega}}{\alpha}\norm{\Rot v}_{L_2(\Omega)}^2, \end{multline*} which is equivalent to \begin{equation*} \Dt\left(\left(\norm{V}^2_{L_2(\Omega)} + \norm{Z}^2_{L_2(\Omega)}\right)e^{\Delta(\nu)t}\right) \leq \nu_r^2c_{\alpha,I,\Omega}\left(\norm{\omega}_{L_2(\Omega)}^2 + \norm{\Rot v}_{L_2(\Omega)}^2\right)e^{\Delta(\nu)t}. \end{equation*} Integrating with respect to $t \in (t_0,t_1)$ yields \begin{multline*} \norm{V(t)}^2_{L_2(\Omega)} + \norm{Z(t)}^2_{L_2(\Omega)} \\ \leq \nu_r^2c_{\alpha,I,\Omega}\left(\norm{\omega}_{L_2(\Omega^t)}^2 + \norm{\Rot v}_{L_2(\Omega^t)}^2\right) + \left(\norm{V(t_0)}^2_{L_2(\Omega)} + \norm{Z(t_0)}^2_{L_2(\Omega)}\right)e^{-\Delta(\nu)t}. \end{multline*} In view of the energy estimates (see e.g. \cite[Lemma 8.1]{2012arXiv1205.4046N}) we obtain \begin{equation*} \norm{\omega}_{L_2(\Omega^t)}^2 + \norm{\Rot v}_{L_2(\Omega^t)}^2 < E_{v,\omega}(t). 
\end{equation*} On the other hand, under slightly weaker assumption than in Theorem \ref{thm0} it follows from \cite[Theorem B]{wm6} that \begin{equation}\label{eq10} \norm{u}_{W^{2,1}_2(\Omega^t)} \leq \varphi(\nu,\text{data}) \end{equation} for any $t \in [0,\infty]$, where $\varphi$ is a positive and non-decreasing function dependent on the viscosity coefficient and the initial and the external data. By the Embedding Theorem for anisotropic Sobolev spaces (see e.g. \cite[Ch. 2, \S 3, Lemma 3.3]{lad} it follows that $W^{2,1}_2(\Omega^t) \hookrightarrow L_{10}(\Omega^t)$ and \begin{equation*} \norm{u}_{L_{10}(\Omega^t)} \leq c_{\Omega} \norm{u}_{W^{2,1}_2(\Omega^t)} \leq \varphi(\text{data}) \end{equation*} holds. In light of the maximal regularity for parabolic systems (see e.g. \cite[Lemma 6.9]{2012arXiv1205.4046N}) applied to \eqref{p11}$_3$ we deduce that $\Theta \in W^{2,1}_{\frac{5}{3}}(\Omega^t)$ and \begin{equation*} \norm{\Theta}_{W^{2,1}_{\frac{5}{3}}(\Omega^t)} \leq c_{\text{data},\alpha,\beta,\Omega} \left(\norm{g}_{L_{\frac{5}{3}}(\Omega^t)} + \norm{u}_{L_{10}(\Omega^t)} + \norm{\Theta(t_0)}_{W^{\frac{4}{5}}_{\frac{5}{3}}(\Omega)}\right). \end{equation*} From \eqref{eq10}, the above inequality and the maximal regularity for the Stokes and parabolic system (see e.g. \cite[Lemmas 6.9 and 6.11]{2012arXiv1205.4046N}) it follows that we can improve the regularity for $(u,\Theta)$ as we need. In particular \begin{equation*} \begin{aligned} \sup_{t \geq 0} \norm{u(t)}^4_{H^1(\Omega)} &\leq c_{\text{data}}, \\ \sup_{t \geq 0} \norm{\nabla \Theta(t)}^2_{L_3(\Omega)} &\leq c_{\text{data}}. \end{aligned} \end{equation*} In consequence, $\Delta(\nu)$ becomes positive and finite for $\nu$ sufficiently large. This ends the proof. \end{proof} \begin{rem} Observe, that for $u(t_0) = v(t_0)$ and $\Theta(t_0) = \omega(t_0)$ we obtain \begin{equation*} \norm{V(t)}^2_{L_2(\Omega)} + \norm{Z(t)}^2_{L_2(\Omega)} \leq \nu_r^2c_{\alpha,I,\Omega}\left(\norm{\omega}_{L_2(\Omega^t)}^2 + \norm{\Rot v}_{L_2(\Omega^t)}^2\right). \end{equation*} This estimate implies that as $\nu_r \to 0$ the velocity of the micropolar fluid model converges uniformly with respect to time on $[0,\infty)$ in $L_2$ to the velocity field of standard Navier-Stokes model. \end{rem} \end{document}
\begin{document} \title{Reply to Comment on `Spin Decoherence in Superconducting Atom Chips'} \author{Bo-Sture K. Skagerstam}\email{[email protected]} \affiliation{Complex Systems and Soft Materials Research Group, Department of Physics, The Norwegian University of Science and Technology, N-7491 Trondheim, Norway} \author{Ulrich Hohenester} \affiliation{Institut f\"ur Physik, Karl-Franzens-Universit\"at Graz, Universit\"atsplatz 5, A-8010 Graz, Austria} \author{Asier Eiguren} \affiliation{Institut f\"ur Physik, Karl-Franzens-Universit\"at Graz, Universit\"atsplatz 5, A-8010 Graz, Austria} \author{Per Kristian Rekdal} \affiliation{Institut f\"ur Physik, Karl-Franzens-Universit\"at Graz, Universit\"atsplatz 5, A-8010 Graz, Austria} \pacs{03.65.Yz, 03.75.Be, 34.50.Dy, 42.50.Ct} \maketitle In a recent paper \cite{skagerstam_06} we investigate spin decoherence in superconducting atom chips, and predict a lifetime enhancement by more than five orders of magnitude in comparison to normal-conducting atom chips. Scheel, Hinds, and Knight (SHK) \cite{scheel_06} cast doubt on these results as they are seemingly an artifact of the two-fluid model used for the description of the superconductor, and estimate a lifetime enhancement by a factor of ten instead. In this reply we show that this criticism is unwarranted since neither our central result relies on the two-fluid model, nor the predictions of our model strongly disagree with experimental data. In Ref.~\cite{skagerstam_06} we employ a dielectric description of the superconductor based on a parameterization of the complex optical conductivity $\sigma(T) \equiv \sigma_1(T) + i \sigma_2(T)$, viz. \begin{equation} \label{sigma_eq} \sigma(T) = 2/\omega\mu_0\delta^2(T)+i/\omega\mu_0\lambda_L^2(T) \, , \end{equation} \noi with $\delta(T)$ and $\lambda_L(T)$ the temperature dependent skin and London penetration depth, respectively. The spin lifetime for $\lambda_L(T) \ll \delta(T)$ is then obtained by matching the electromagnetic fields at the vacuum-superconductor interface, to arrive at our central result for the spin lifetime $\tau \propto \sigma^{3/2}_2(T)/\sigma_1(T) \propto \delta^2(T)/\lambda_L^3(T)$. As our analysis is only based on Maxwell's theory with appropriate boundary conditions, it is valid for $\delta(T)$ and $\lambda_L(T)$ values obtained from either a microscopic model description or from experimental data on $\sigma(T)$. The specific choice of parameterization in \eq{sigma_eq} is motivated by the two-fluid model, even though this model is not needed to justify it. In order to obtain an estimate of the lifetime for niobium, we make use of the experimental value $\sigma_1(T_c) = \sigma_n$ \cite{casalbuoni_06} and consider for $T \leq T_c$ the Gorter-Casimir temperature dependence $\sigma_1(T) = (T/T_c)^4 \sigma_n$ and $\sigma_2(T) = ( 1 - (T/T_c)^4 ) \, \sigma_2(0)$, where $\sigma_2(0) = 1/\omega \mu_0 \lambda_L^2(0)$. A decrease in temperature from $T_c$ to $T_c/2$ as considered in Ref.~\cite{scheel_06}, then results in a reduction of $\sigma_1(T_c/2)$ by approximately one order of magnitude. SHK correctly note that the modification of the quasi-particle dispersion in the superconducting state might give rise to a coherence Hebel-Schlichter peak of $\sigma_1(T)$ below $T_c$ \cite{lifetime}. To estimate the importance of this peak, we have computed $\sigma_1(T)$ using the Mattis-Bardeen formula. 
At the atomic transition frequency of 560 kHz we obtain a peak height of approximately $5 \, \sigma_n$, not hundred $\sigma_n$ \cite{scheel_05} as used by SHK. From the literature it is well-known that this coherence peak becomes substantially reduced if lifetime effects of the quasi-particles are considered \cite{lifetime,lifetime_2} and even disappears in the clean superconductor limit \cite{marsiglio:91}. As a fair and conservative estimate we correspondingly assign an uncertainty of one order of magnitude to our spin lifetimes. On the other hand, the major contribution of the $\tau$ enhancement in the superconducting state is due to the additional $\lambda_L^3(T)$ contribution accounting for the efficient magnetic field screening in superconductors. This factor is not considered by SHK and appears to be the main reason for the discrepancy between our results and those of Ref.~\cite{scheel_06}. The London length $\lambda_L(0) = 35$ nm as used in \cite{skagerstam_06} corresponds to $\omega \sigma_2(T_c/2) \approx 6.1 \times 10^{20} \, ( \Omega \, $m$ \, $s$ )^{-1}$ and is in agreement with the experimental data of Ref.~\cite{casalbuoni_06}. In conclusion, modifications of $\sigma(T)$ introduced by the details of the quasi-particle dispersion in the superconducting state (BCS or Eliashberg theory) are expected to modify the estimated lifetime values by at most one order of magnitude, but will by no means change the essentials of our findings, which only rely on generic superconductor properties. Hence, our prediction for a lifetime enhancement by more than five orders of magnitude prevails. Whether such high lifetimes can be obtained in superconducting atom chips will have to be determined experimentally. \end{document}
\begin{document} \title{Layer potentials for general linear elliptic systems} \author{Ariel Barton} \address{Ariel Barton, Department of Mathematical Sciences, 309 SCEN, University of Ar\-kan\-sas, Fayetteville, AR 72701} \email{[email protected]} \subjclass[2010]{Primary 35J58, Secondary 31B10, 31B20 } \begin{abstract} In this paper we construct layer potentials for elliptic differential operators using the Lax-Milgram theorem, without recourse to the fundamental solution; this allows layer potentials to be constructed in very general settings. We then generalize several well known properties of layer potentials for harmonic and second order equations, in particular the Green's formula, jump relations, adjoint relations, and Verchota's equivalence between well-posedness of boundary value problems and invertibility of layer potentials. \end{abstract} \keywords{Higher order differential equation, layer potentials, Dirichlet problem, Neumann problem} \maketitle \tableofcontents \section{Introduction} There is by now a very rich theory of boundary value problems for Laplace's operator, and more generally for second order divergence form operators $-\Div \mat A\nabla$. The Dirichlet problem \begin{equation*}-\Div \mat A\nabla u=0 \text{ in }\Omega,\quad u=f \text{ on }\partial\Omega, \quad \doublebar{u}_\XX\leq C\doublebar{f}_\DD\end{equation*} and the Neumann problem \begin{equation*}-\Div \mat A\nabla u=0 \text{ in }\Omega,\quad \nu\cdot\mat A\nabla u=g \text{ on }\partial\Omega, \quad \doublebar{u}_\XX\leq C\doublebar{g}_\NN\end{equation*} are known to be well-posed for many classes of coefficients $\mat A$ and domains~$\Omega$, and with solutions in many spaces $\XX$ and boundary data in many boundary spaces $\DD$ and~$\NN$. A great deal of current research consist in extending these well posedness results to more general situations, such as operators of order $2m\geq 4$ (for example, \cite{MazMS10,KilS11B,MitMW11, MitM13A, BreMMM14, BarHM17pC}; see also the survey paper \cite{BarM16B}), operators with lower order terms (for example, \cite{BraBHV12,Tao12,Fel16,PanT16,DavHM16p}) and operators acting on functions defined on manifolds (for example, \cite{MitMT01,MitMS06,KohPW13}). Two very useful tools in the second order theory are the double and single layer potentials given by \begin{align} \label{eqn:introduction:D} \D^{\mat A}_\Omega f(x) &= \int_{\partial\Omega} \overline{\nu\cdot \mat A^*(y)\nabla_{y} E^{L^*}(y,x)} \, f(y)\,d\sigma(y) ,\\ \label{eqn:introduction:S} \s^L_\Omega g(x) &= \int_{\partial\Omega}\overline{E^{L^*}(y,x)} \, g(y)\,d\sigma(y) \end{align} where $\nu$ is the unit outward normal to~$\Omega$ and where $E^L(y,x)$ is the fundamental solution for the operator~$L=-\Div \mat A\nabla$, that is, the formal solution to $L E^L(\,\cdot\,,x)=\delta_x$. These operators are inspired by a formal integration by parts \begin{align*}u(x) &= \int_\Omega \overline{L^*E^{L^*}(\,\cdot\,,x)}\,u \\&=- \int_{\partial\Omega}\!\! \overline{\nu\cdot \mat A^*\nabla E^{L^*}(\,\cdot\,,x)} \, u\,d\sigma +\int_{\partial\Omega}\!\!\overline{E^{L^*}(\,\cdot\,,x)} \, \nu\cdot \mat A\nabla u\,d\sigma +\int_\Omega \overline{E^{L^*}(\,\cdot\,,x)}\,Lu\end{align*} which gives the Green's formula \begin{equation*}u(x) = -\D^{\mat A}_\Omega (u\big\vert_{\partial\Omega})(x) + \s^L_\Omega (\nu\cdot \mat A\nabla u)(x)\quad\text{if $x\in\Omega$ and $Lu=0$ in $\Omega$}\end{equation*} at least for relatively well-behaved solutions~$u$. 
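For orientation only, consider the classical harmonic case $\mat A=\mat I$, $L=-\Delta$; the normalization of the fundamental solution below is fixed just for this illustration. Then $E^{L^*}(y,x)=E(y,x)$ is real and formulas \eqref{eqn:introduction:D} and~\eqref{eqn:introduction:S} reduce to the familiar harmonic layer potentials
\begin{equation*}
\D^{\mat I}_\Omega f(x) = \int_{\partial\Omega} \nu(y)\cdot\nabla_y E(y,x)\, f(y)\,d\sigma(y),
\qquad
\s^{-\Delta}_\Omega g(x) = \int_{\partial\Omega} E(y,x)\, g(y)\,d\sigma(y),
\end{equation*}
where, for instance, one may take $E(y,x)=c_d\abs{x-y}^{2-d}$ for $d\geq 3$ with a dimensional constant $c_d>0$.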
Such potentials have many well known properties beyond the above Green's formula, including jump and adjoint relations. In particular, by a clever argument of Verchota \cite{Ver84} and some extensions in \cite{BarM13,BarM16A}, well posedness of the Dirichlet problem is equivalent to invertibility of the operator $g\mapsto \s^L_\Omega g\big\vert_{\partial\Omega}$, and well posedness of the Neumann problem is equivalent to invertibility of the operator $f\mapsto \nu\cdot\mat A\nabla\D^{\mat A}_\Omega f$. This equivalence has been used to solve boundary value problems in many papers, including \cite{FabJR78,Ver84,DahK87,FabMM98,Zan00} in the case of harmonic functions (that is, the case $\mat A=\mat I$ and $L=-\Delta$) and \cite{AlfAAHK11, Bar13,Ros13, HofKMP15B, HofMayMou15,HofMitMor15, BarM16A} in the case of more general operators under various assumptions on the coefficients~$\mat A$. Layer potentials have been used in other ways in \cite{PipV92,KenR09,Rul07,Mit08,Agr09,MitM11,BarM13,AusM14}. Boundary value problems were studied using a functional calculus approach in \cite{AusAH08,AusAM10,AusA11, AusR12, AusM14, AusS14p, AusM14p}; in \cite{Ros13} it was shown that certain operators arising in this theory coincided with layer potentials. Thus, it is desirable to extend layer potentials to more general situations. One may proceed as in the homogeneous second order case, by constructing the fundamental solution, formally integrating by parts, and showing that the resulting integral operators have appropriate properties. In the case of higher order operators with constant coefficients, this has been done in \cite{Agm57,CohG83, CohG85, Ver05, MitM13A, MitM13B}. However, all three steps are somewhat involved in the case of variable coefficient operators (although see \cite{DavHM16p,Bar16} for fundamental solutions, for second order operators with lower order terms, and for higher order operators without lower order terms, respectively). An alternative, more abstract construction is possible. The fundamental solution for various operators was constructed in \cite{HofK07,Bar16,DavHM16p} as the kernel of the Newton potential, which may itself be constructed very simply using the Lax-Milgram theorem. It is possible to rewrite the formulas \eqref{eqn:introduction:D} and~\eqref{eqn:introduction:S} for the second order layer potential directly in terms of the Newton potential, without mediating by the fundamental solution, and this construction generalizes very easily. It is this approach that was taken in \cite{BarHM15p,BarHM17pA}. In this paper we will provide the details of this construction in a very general context. Roughly, this construction is valid for all differential operators $L$ that may be inverted via the Lax-Milgram theorem, and all domains $\Omega$ for which suitable boundary trace operators exist. We will also show that many properties of traditional layer potentials are valid in the general case. The organization of this paper is as follows. The goal of this paper is to construct layer potentials associated to an operator~$L$ as bounded linear operators from a space $\DD_2$ or $\NN_2$ to a Hilbert space $\HH_2$ given certain conditions on $\DD_2$, $\NN_2$ and $\HH_2$. In Section~\ref{sec:dfn} we will list these conditions and define our terminology. 
Because these properties are somewhat abstract, in Section~\ref{sec:example} we will give an example of spaces $\HH_2$, $\DD_2$ and $\NN_2$ that satisfy these conditions in the case where $L$ is a higher order differential operator in divergence form without lower order terms. This is the context of the paper \cite{BarHM17pC}; we intend to apply the results of the present paper therein to solve the Neumann problem with boundary data in $L^2$ for operators with $t$-independent self-adjoint coefficients. In Section~\ref{sec:D:S} of this paper we will provide the details of the construction of layer potentials. We will prove the higher order analogues for the Green's formula, adjoint relations, and jump relations in Section~\ref{sec:properties}. Finally, in Section~\ref{sec:invertible} we will show that the equivalence between well posedness of boundary value problems and invertibility of layer potentials of \cite{Ver84,BarM13,BarM16A} extends to the higher order case. \section{Terminology} \label{sec:dfn} We will construct layer potentials $\D^\B_\Omega$ and $\s^L_\Omega$ using the following objects. \begin{itemize} \item Two Hilbert spaces $\HH_1$ and $\HH_2$. \item Six vector spaces $\widehat\HH_1^\Omega$, $\widehat\HH_1^\CC$, $\widehat\HH_2^\Omega$, $\widehat\HH_2^\CC$, $\widehat\DD_1$ and $\widehat\DD_2$. \item Bounded bilinear functionals $\B:\HH_1\times\HH_2\mapsto \C$, $\B^\Omega:\HH_1^\Omega\times\HH_2^\Omega\mapsto \C$, and $\B^\CC:\HH_1^\CC\times\HH_2^\CC\mapsto \C$. (We will define the spaces $\HH_j^\Omega$, $\HH_j^\CC$ momentarily.) \item Bounded linear operators $\Tr_1:\HH_1\mapsto\widehat\DD_1$ and $\Tr_2:\HH_2\mapsto\widehat\DD_2$. \item Bounded linear operators from $\HH_j$ to $\widehat\HH_j^\Omega$ and $\widehat\HH_j^\CC$; we shall denote these operators $\big\vert_\Omega$ and~$\big\vert_\CC$. \end{itemize} We will work not with the spaces $\widehat \HH_j^\Omega$, $\widehat\HH_j^\CC$ and $\widehat\DD_j$, but with the spaces $ \HH_j^\Omega$, $\HH_j^\CC$ and $\DD_j$ defined as follows. \begin{gather} \HH_j^\Omega=\{F\big\vert_\Omega:F\in\HH_j\}/\sim\text{ with norm }\doublebar{f}_{\HH^\Omega_j} = \inf\{\doublebar{F}_{\HH_j}: F\big\vert_\Omega=f\} ,\\ \HH_j^\CC=\{F\big\vert_\CC : F\in\HH_j\}/\sim\text{ with norm }\doublebar{f}_{\HH^\CC_j} = \inf\{\doublebar{F}_{\HH_j}: F\big\vert_\CC=f\} ,\\ \DD_j=\{\Tr_j F:F\in\HH_j\}/\sim\text{ with norm }\doublebar{f}_{\DD_j} = \inf\{\doublebar{F}_{\HH_j}: \Tr_j F=f\} \end{gather} where $\sim$ denotes the equivalence relation $f\sim g$ if $\doublebar{f-g}=0$. We impose the following conditions on the given function spaces and operators. We require that there is some $\lambda>0$ such that for every $u\in\HH_1$, $v\in \HH_2$ and $\varphi$,~$\psi\in \HH_j$ for $j=1$ or $j=2$, the following conditions are valid. \begin{gather} \label{cond:coercive} \sup_{w\in \HH_1\setminus\{0\}} \frac{\abs{\B(w,v)}}{\doublebar{w}_{\HH_1}}\geq \lambda \doublebar{v}_{\HH_2},\quad \sup_{w\in \HH_2\setminus\{0\}} \frac{\abs{\B(u,w)}}{\doublebar{w}_{\HH_2}}\geq \lambda \doublebar{u}_{\HH_1}. \\ \label{cond:local} \B(u,v) = \B^\Omega(u\big\vert_{\Omega},v\big\vert_\Omega) +\B^\CC(u\big\vert_{\CC}, v\big\vert_{\CC}). 
\\ \label{cond:trace:extension} {\text{If $\Tr_j \varphi=\Tr_j \psi$, then there is a $w\in\HH_j$ with}} \qquad\qquad \qquad\qquad\qquad \qquad \\\nonumber \qquad\qquad \qquad\qquad\qquad \qquad {\text{ $w\big\vert_\Omega=\varphi\big\vert_\Omega$, $w\big\vert_{\CC}=\psi\big\vert_{\CC}$ and $\Tr_j w= \Tr_j\varphi=\Tr_j\psi$.}} \end{gather} We now introduce some further terminology. We will define the linear operator $L$ as follows. If $u\in \HH_2$, let $Lu$ be the element of the dual space $\HH_1^*$ to $\HH_1$ given by \begin{equation}\label{dfn:L}\langle\varphi,Lu\rangle = \B(\varphi,u).\end{equation} Notice that $L$ is bounded $\HH_2\mapsto\HH_1^*$. If $u\in \HH_2^\Omega$, we let $(Lu)\big\vert_\Omega$ be the element of the dual space to $\{\varphi\in\HH_1:\Tr_1 \varphi=0\}$ given by \begin{equation}\label{dfn:L:interior} \langle\varphi,(Lu)\big\vert_\Omega\rangle = \B^\Omega(\varphi\big\vert_\Omega,u)\quad\text{for all $\varphi\in\HH_1$ with $\Tr_1\varphi=0$} .\end{equation} If $u\in\HH_2$, we will often use $(Lu)\big\vert_\Omega$ as shorthand for $(L(u\big\vert_\Omega))\big\vert_\Omega$. We will primarily be concerned with the case $(Lu)\big\vert_\Omega=0$. Let \begin{equation}\NN_2=\DD_1^*, \qquad \NN_1=\DD_2^*\end{equation} denote the dual spaces to $\DD_1$ and $\DD_2$. We will now define the Neumann boundary values of an element $u$ of $\HH_2^\Omega$ that satisfies $(Lu)\big\vert_\Omega=0$. If $\Tr_1\varphi=\Tr_1\psi$ and $(Lu)\big\vert_\Omega=0$, then $\B^\Omega(\varphi\big\vert_\Omega-\psi\big\vert_\Omega,u)=0$ by definition of $(Lu)\big\vert_\Omega$. Thus, $\B^\Omega(\varphi\big\vert_\Omega,u)$ depends only on~$\Tr_1\varphi$, not on~$\varphi$. Thus, $\M_\Omega^\B u$ defined as follows is a well defined element of $\NN_2$. \begin{equation}\label{eqn:Neumann}\langle \Tr_1\varphi,\M_\Omega^\B u \rangle = \B^\Omega(\varphi\big\vert_\Omega,u)\quad\text{for all $\varphi\in \HH_1$}.\end{equation} We may compute \begin{equation*}\abs{\langle \arr f,\M_\Omega^\B u \rangle} \leq \doublebar{\B^\Omega} \inf\{\doublebar{\varphi}_{\HH_1}:\Tr_1\varphi=\arr f\} \doublebar{u}_{\HH_2^\Omega} = \doublebar{\B^\Omega} \doublebar{\arr f}_{\DD_1}\doublebar{u}_{\HH_2^\Omega}\end{equation*} and so we have the bound $\doublebar{\M_\Omega^\B u }_{\NN_2}\leq \doublebar{\B^\Omega}\doublebar{u}_{\HH_2^\Omega}$. If $(Lu)\big\vert_\Omega\neq 0$, then the linear operator given by $\varphi\mapsto B^\Omega(\varphi\big\vert_\Omega,u)$ is still of interest. We will denote this operator $L(u\1_\Omega)$; that is, if $u\in\HH_2^\Omega$ (or $u\in\HH_2$ as before), then $L(u\1_\Omega)\in \HH_1^*$ is defined by \begin{equation}\label{dfn:L:singular} \langle\varphi,L(u\1_\Omega)\rangle = \B^\Omega(\varphi\big\vert_\Omega,u)\quad\text{for all $\varphi\in\HH_1$} .\end{equation} \section{An example: higher order differential equations} \label{sec:example} In this section, we provide an example of a situation in which the terminology of Section~\ref{sec:dfn} and the construction and properties of layer potentials of Sections~\ref{sec:D:S} and~\ref{sec:properties} may be applied. We remark that this is the situation of \cite{BarHM17pC}, and that we will therein apply the results of this paper. Let $m\geq 1$ be an integer, and let $L$ be an elliptic differential operator of the form \begin{equation} \label{eqn:L} Lu=(-1)^m\sum_{\abs\alpha=\abs\beta= m} \partial^\alpha(A_{\alpha\beta} \partial^\beta u)\end{equation} for some bounded measurable coefficients~$\mat A$ defined on $\R^d$. 
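(As a brief sketch, included only for orientation: when $m=1$, the multiindices $\alpha$ and $\beta$ below reduce to the coordinate directions and, writing $A_{jk}$ for the corresponding entry of $\mat A$, formula \eqref{eqn:L} becomes the second order divergence form operator of the introduction,
\begin{equation*}
Lu=-\sum_{j,k=1}^{d}\partial_j\bigl(A_{jk}\,\partial_k u\bigr)=-\Div \mat A\nabla u,
\end{equation*}
while already $m=2$ yields fourth order operators.)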
Here $\alpha$ and $\beta$ are multiindices in $\N_0^d$, where $\N_0$ denotes the nonnegative integers. As is standard in the theory, we say that $Lu=0$ in an open set $\Omega$ in the weak sense if \begin{equation}\label{eqn:weak}\int_\Omega \sum_{\abs\alpha=\abs\beta= m} \partial^\alpha\varphi \,A_{\alpha\beta}\, \partial^\beta u=0 \quad\text{for all $\varphi\in C^\infty_0(\Omega)$}.\end{equation} We impose the following ellipticity condition: we require that for some $\lambda>0$, \begin{equation*}\Re \sum_{{\abs\alpha=\abs\beta= m}} \int_{\R^d} \overline{\partial^\alpha\varphi}\,A_{\alpha\beta}\,\partial^\beta\varphi\geq \lambda \doublebar{\nabla^m\varphi}_{L^2(\R^d)}^2 \quad \text{for all $\varphi\in\dot W^2_m(\R^d)$.} \end{equation*} Let $\Omega\subset\R^d$ be a Lipschitz domain with connected boundary, and let $\CC=\R^d\setminus\bar\Omega$ denote the interior of its complement. Observe that $\partial\Omega=\partial\CC$. The following function spaces and linear operators satisfy the conditions of Section~\ref{sec:dfn}. \begin{itemize} \item $\HH_1=\HH_2=\HH$ is the homogeneous Sobolev space $\dot W^2_m(\R^d)$ of locally integrable functions~$\varphi$ (or rather, of equivalence classes of functions modulo polynomials of degree $m-1$) with weak derivatives of order $m$, and such that the $\HH$-norm given by $\doublebar{\varphi}_\HH=\doublebar{\nabla^m\varphi}_{L^2(\R^d)}$ is finite. This space is a Hilbert space with inner product $\langle \varphi,\psi\rangle =\sum_{\abs\alpha=m} \int_{\R^d} \overline{\partial^\alpha\varphi}\,\partial^\alpha\psi$. \item $\widehat\HH^\Omega$ and $\widehat\HH^\CC$ are the Sobolev spaces $\widehat\HH^\Omega=\dot W^2_m(\Omega)=\{\varphi:\nabla^m\varphi\in L^2(\Omega)\}$ and $\widehat\HH^\CC=\dot W^2_m(\CC)=\{\varphi:\nabla^m\varphi\in L^2(\CC)\}$ with the obvious norms. \item $\widehat \DD$ denotes the (vector-valued) Besov space $ \dot B^{2,2}_{1/2}(\partial\Omega)$ of locally integrable functions modulo constants with norm \begin{equation*}\doublebar{f}_{\dot B^{2,2}_{1/2}(\partial\Omega)} = \biggl(\int_{\partial\Omega}\int_{\partial\Omega} \frac{\abs{f(x)-f(y)}^2}{\abs{x-y}^d}\,d\sigma(x)\,d\sigma(y)\biggr)^{1/2}.\end{equation*} \item In \cite{Bar16pA,BarHM15p,BarHM17pC}, $\Tr$ is the linear operator defined on $ \HH$ by $\Tr u=\Trace^\Omega \nabla^{m-1}u\big\vert_\Omega$, where $\Trace^\Omega$ is the standard boundary trace operator of Sobolev spaces. (Given a suitable modification of the trace space $\DD$, it is also possible to choose $\Tr u = \{\Trace^\Omega \partial^\gamma u\}_{\abs\gamma\leq m-1}$, or more concisely $\Tr u = (\Trace^\Omega u,\partial_\nu u,\dots,\partial_\nu^{m-1} u)$, where $\nu$ is the unit outward normal, so that the boundary derivatives of $u$ of all orders are recorded. See, for example, \cite{ PipV95B,She06B, Agr07, MazMS10,MitM13A}.) \item $\B$ is the bilinear operator on $\HH\times\HH$ given by \begin{equation*}\B(\psi,\varphi) = \sum_{\abs\alpha=\abs\beta= m}\int_{\R^d} \overline{\partial^\alpha\psi}\,A_{\alpha\beta}\,\partial^\beta\varphi.\end{equation*} $\B^\Omega$ and $\B^\CC$ are defined analogously to~$\B$, but with the integral over $\R^d$ replaced by an integral over $\Omega$ or~$\CC$. \end{itemize} $\B$, $\B^\Omega$ and $\B^\CC$ are clearly bounded and bilinear, and the restriction operators $\big\vert_\Omega:\HH\mapsto\widehat\HH^\Omega$, $\big\vert_\CC:\HH\mapsto\widehat\HH^\CC$ are bounded and linear. The trace operator $\Tr$ is linear. 
If $\Omega=\R^d_+$ is the half-space, then boundedness of $\Tr:\HH\mapsto\DD$ was established in \cite[Section~5]{Jaw77}; this extends to the case where $\Omega$ is the domain above a Lipschitz graph via a change of variables. If $\Omega$ is a bounded Lipschitz domain, then boundedness of $\Tr:W\mapsto\widehat\DD$, where $W$ is the inhomogeneous Sobolev space with norm $\sum_{k=0}^m \doublebar{\nabla^k\varphi}_{L^2(\R^d)}$, was established in \cite[Chapter~V]{JonW84}. Then boundedness of $\Tr:\HH\mapsto\widehat\DD$ follows by the Poincar\'e inequality. By assumption, the coercivity condition~\eqref{cond:coercive} is valid. If $\partial\Omega$ has Lebesgue measure zero, then Condition~\eqref{cond:local} is valid. A straightforward density argument shows that if $\Tr$ is bounded, then Condition~\eqref{cond:trace:extension} is valid. Thus, the given spaces and operators satisfy the conditions imposed at the beginning of Section~\ref{sec:dfn}. We now comment on a few of the other quantities defined in Section~\ref{sec:dfn}. If $u\in \HH$, and if $Lu=0$ in $\Omega$ in the weak sense of formula~\eqref{eqn:weak}, then by density $\B^\Omega(\varphi,u)=0$ for all $\varphi\in\HH$ with $\Tr\varphi=0$; that is, $(Lu)\big\vert_\Omega$ as defined in Section~\ref{sec:dfn} satisfies $(Lu)\big\vert_\Omega=0$. For many classes of domains there is a bounded extension operator from $\widehat\HH^\Omega$ to $\HH$, and so $\HH^\Omega=\widehat\HH^\Omega=\dot W^2_m(\Omega)$ with equivalent norms. (If $\Omega$ is a Lipschitz domain then this is a well known result of Calder\'on \cite{Cal61} and Stein \cite[Theorem~5, p.~181]{Ste70}; the result is true for more general domains, see for example \cite{Jon81}.) As mentioned above, if $\Omega\subset\R^d$ is a Lipschitz domain, then $\Tr$ is a bounded operator $\HH \mapsto \widehat \DD$. If $\partial\Omega$ is connected, then $\Tr$ moreover has a bounded right inverse. (See \cite{JonW84} or \cite[Proposition~7.3]{MazMS10} in the inhomogeneous case, and \cite{Bar16pB} in the present homogeneous case.) Thus, the norm in $\DD$ is comparable to the Besov norm. Furthermore, $\{\nabla^{m-1}\varphi\big\vert_{\partial\Omega}:\varphi\in C^\infty_0(\R^d)\}$ is dense in $\DD$. Thus, if $m=1$ then $\DD=\widehat\DD=\dot B^{2,2}_{1/2}(\partial\Omega)$. If $m\geq 2$ then $\DD$ is a closed {proper} subspace of $\widehat\DD$, as the different partial derivatives of a common function must satisfy certain compatibility conditions. In this case $\DD$ is the Whitney-Sobolev space used in many papers, including \cite{AdoP98, MazMS10, MitMW11, MitM13A, MitM13B, BreMMM14, Bar16pA}. If $m=1$, then by an integration by parts argument we have that $\M_\B^\Omega u = \nu\cdot \mat A\nabla u$, where $\nu$ is the unit outward normal to~$\Omega$, whenever $u$ is sufficiently smooth. The weak formulation of Neumann boundary values of formula~\eqref{eqn:Neumann} coincides with the formulation of higher order Neumann boundary data of \cite{BarHM15p,Bar16pA,BarHM17pC} given the above choice of $\Tr=\Trace^\Omega \nabla^{m-1}$, and with that of \cite{ Ver05,Agr07, MitM13A} if we instead choose $\Tr u=(\Trace^\Omega u, \partial_\nu u,\dots,\partial_\nu^{m-1} u)$ or $\Tr u = \{\Trace^\Omega \partial^\gamma u\}_{\abs\gamma\leq m-1}$. \section{Construction of layer potentials} \label{sec:D:S} We will now use the Babu\v{s}ka-Lax-Milgram theorem to construct layer potentials. This theorem may be stated as follows. 
\begin{thm}[{\cite[Theorem~2.1]{Bab70}}] \label{thm:lax-milgram} Let $\HH_1$ and $\HH_2$ be two Hilbert spaces, and let $\B$ be a bounded bilinear form on $\HH_1\times \HH_2$ that is coercive in the sense that for some fixed $\lambda>0$, formula~\eqref{cond:coercive} is valid for every $u\in\HH_1$ and $v\in\HH_2$. Then for every linear functional $T$ defined on ${\HH_1}$ there is a unique $u_T\in {\HH_2}$ such that $\B(v,u_T)=\overline{T(v)}$. Furthermore, $\doublebar{u_T}_{\HH_2}\leq \frac{1}{\lambda}\doublebar{T}_{\HH_1\mapsto\C}$. \end{thm} We construct layer potentials as follows. Let $\arr g\in\NN_2$. Then the operator $T_{\arr g}\varphi = \langle \arr g,\Tr_1\varphi\rangle$ is a bounded linear operator on~$\HH_1$. By the Lax-Milgram lemma, there is a unique $u_T=\s^L_\Omega\arr g\in \HH_2$ such that \begin{equation}\label{eqn:S} \B( \varphi, \s^L_\Omega\arr g ) = \langle \Tr \varphi,\arr g\rangle \quad\text{for all $\varphi\in\HH_1$}.\end{equation} We will let $\s^L_\Omega\arr g$ denote the single layer potential of~$\arr g$. Observe that the dependence of $\s^L_\Omega$ on the parameter $\Omega$ consists entirely of the dependence of the trace operator on~$\Omega$, and the connection between $\Tr_2$ and $\Omega$ is given by formula~\eqref{cond:trace:extension}. This formula is symmetric about an interchange of $\Omega$ and $\CC$, and so $\s^L_\Omega \arr g=\s^L_\CC\arr g$. The double layer potential is somewhat more involved. We begin by defining the Newton potential. Let $H$ be an element of the dual space $\HH_1^*$ to~$\HH_1$. By the Lax-Milgram theorem, there is a unique element $\PP^L H$ of $\HH_2$ that satisfies \begin{equation} \label{eqn:newton} \B(\varphi,\PP^L H) = \langle \varphi,H\rangle\quad\text{for all $\varphi\in\HH_1$}.\end{equation} We refer to $\PP^L$ as the Newton potential. In some applications, it is easier to work with the Newton potential rather than the single layer potential directly; we remark that \begin{equation}\s^L_\Omega \arr g = \PP^L (T_{\arr g}) \quad\text{where } \langle T_{\arr g},\varphi \rangle = \langle \arr g,\Tr_1\varphi\rangle.\end{equation} We now return to the double layer potential. Let $\arr f\in\DD_2$. Then there is some $F\in \HH_2$ such that $\Tr_2 F=\arr f$. Let \begin{equation} \label{eqn:D:+}\D_\Omega^\B \arr f = -F\big\vert_\Omega + \PP^L (L(\1_\Omega F))\big\vert_\Omega \qquad\text{if $\Tr_2 F=\arr f$}.\end{equation} Notice that $\D_\Omega^\B \arr f$ is an element of $\HH_2^\Omega$, not of $\HH_2$. We will conclude this section by showing that $\D^\B_\Omega\arr f$ is well defined, that is, does not depend on the choice of $F$ in formula~\eqref{eqn:D:+}. We will also establish that layer potentials are bounded operators. \begin{lem}\label{lem:potentials:bounded} The double layer potential is well defined. Furthermore, we have the bounds \begin{gather*}\doublebar{\D^\B_\Omega\arr f}_{\HH_2^\Omega} \leq \frac{\doublebar{B^\CC}}{\lambda}\doublebar{\arr f}_{\DD_2}, \quad \doublebar{\D^\B_\CC\arr f}_{\HH_2^\CC} \leq \frac{\doublebar{B^\Omega}}{\lambda}\doublebar{\arr f}_{\DD_2}, \quad \doublebar{\s^L_\Omega\arr g}_{\HH_2} \leq \frac{1}{\lambda}\doublebar{\arr g}_{\NN_2}. \end{gather*} \end{lem} \begin{proof} By Theorem~\ref{thm:lax-milgram}, we have that \begin{equation*}\doublebar{\s^L_\Omega \arr g}_{\HH_2} \leq \frac{1}{\lambda} \doublebar{T_{\arr g}}_{\HH_1\mapsto\C} \leq \frac{1}{\lambda} \doublebar{\Tr_1}_{\HH_1\mapsto\DD_1}\doublebar{\arr g}_{\DD_1\mapsto\C}. 
\end{equation*} By definition of $\DD_1$ and $\NN_2$, $\doublebar{\Tr_1}_{\HH_1\mapsto\DD_1}=1$ and $\doublebar{\arr g}_{\DD_1\mapsto\C}=\doublebar{\arr g}_{\NN_2}$, and so $\s^L_\Omega:\NN_2\mapsto\HH_2$ is bounded with operator norm at most $1/\lambda$. We now turn to the double layer potential. We will begin with a few properties of the Newton potential. By definition of~$L$, if $\varphi\in\HH_1$ then $\langle \varphi, LF\rangle = \B(\varphi, F)$. By definition of~$\PP^L$, $\B(\varphi,\PP^L (LF)) = \langle\varphi, LF\rangle$. Thus, by coercivity of~$\B$, \begin{equation}F=\PP^L(LF)\quad\text{for all }F\in\HH_2.\end{equation} By definition of $\B^\Omega$, $\B^\CC$ and $L(\1_\Omega F)$, \begin{align*} \langle\varphi,LF\rangle &= \B(\varphi,F) =\B^\Omega(\varphi\big\vert_\Omega,F\big\vert_\Omega) +\B^\CC(\varphi\big\vert_{\CC},F\big\vert_{\CC})& \\&= \langle\varphi, L(F\1_\Omega)\rangle + \langle \varphi, L(F\1_{\CC})\rangle &\text{for all $\varphi\in\HH_1$}.\end{align*} Thus, $LF=L(F\1_\Omega)+L(F\1_{\CC})$ and so \begin{align} \label{eqn:D:alternate:extensions} -F + \PP^L (L(F\1_\Omega)) &= -F + \PP^L (LF) - \PP^L (F\1_{\CC}) \\\nonumber&= - \PP^L (L(F\1_{\CC})) .\end{align} In particular, suppose that $\arr f=\Tr_2 F=\Tr_2 F'$. By Condition~\eqref{cond:trace:extension}, there is some $F''\in \HH_2$ such that $F''\big\vert_\Omega=F\big\vert_\Omega$ and $F''\big\vert_{\CC}=F'\big\vert_{\CC}$. Then \begin{align*} -F\big\vert_\Omega + \PP^L (L(\1_\Omega F))\big\vert_\Omega &= -F''\big\vert_\Omega + \PP^L (L(\1_\Omega F''))\big\vert_\Omega = - \PP^L (L(F''\1_{\CC}))\big\vert_\Omega \\&= - \PP^L (L(F'\1_{\CC}))\big\vert_\Omega = -F'\big\vert_\Omega + \PP^L (L(\1_\Omega F'))\big\vert_\Omega \end{align*} and so $\D_\Omega^\B \arr f$ is well-defined, that is, depends only on $\arr f$ and not the choice of function $F$ with $\Tr_2 F=\arr f$. Furthermore, we have the alternative formula \begin{equation}\label{eqn:D:alternate} \D_\Omega^\B \arr f = - \PP^L (L(\1_{\CC} F))\big\vert_\Omega \qquad\text{if $\Tr_2 F=\arr f$} .\end{equation} Thus, \begin{equation*}\doublebar{\D_\Omega^B\arr f}_{\HH_2^\Omega} \leq \inf_{\Tr_2 F=\arr f} \doublebar{\PP^L (L(\1_{\CC} F))\big\vert_\Omega}_{\HH_2^\Omega} \leq \inf_{\Tr F=\arr f} \doublebar{\PP^L (L(\1_{\CC} F))}_{\HH_2}. \end{equation*} by definition of the $\HH_2^\Omega$-norm. By Theorem~\ref{thm:lax-milgram} and definition of $\PP^L$, we have that \begin{equation*}\doublebar{\PP^L (L(\1_{\CC} F))}_{\HH_2}\leq \frac{1}{\lambda} \doublebar{L(\1_{\CC} F)}_{\HH_1\mapsto\C}.\end{equation*} Since $L(\1_{\CC} F)(\varphi) = \B^\CC(\varphi\big\vert_{\CC}, F\big\vert_{\CC})$, we have that \begin{equation*}\doublebar{L(\1_{\CC} F)}_{\HH_1\mapsto\C}\leq \doublebar{\B^\CC}\doublebar{F\big\vert_{\CC}}_{\HH_2^\CC} \leq \doublebar{\B^\CC}\doublebar{F}_{\HH_2}\end{equation*} and so \begin{equation*}\doublebar{\D_\Omega^B\arr f\big\vert_\Omega}_{\HH_2^\Omega} \leq \inf_{\Tr F=\arr f} \frac{1}{\lambda} \doublebar{\B^\CC}\doublebar{F}_{\HH_2} =\frac{1}{\lambda} \doublebar{\B^\CC} \doublebar{\arr f}_{\DD_2} \end{equation*} as desired. \end{proof} \section{Properties of layer potentials} \label{sec:properties} We will begin this section by showing that layer potentials are solutions to the equation $(Lu)\big\vert_\Omega=0$ (Lemma~\ref{lem:potentials:solutions}). 
We will then prove the Green's formula (Lemma~\ref{lem:green}), the adjoint formulas for layer potentials (Lemma~\ref{lem:adjoint}), and conclude this section by proving the jump relations for layer potentials (Lemma~\ref{lem:jump}). \begin{lem}\label{lem:potentials:solutions} Let $\arr f\in\DD$, $\arr g\in\NN$, and let $u=\D^\B_\Omega\arr f$ or $u=\s^L_\Omega\arr g\big\vert_\Omega$. Then $(Lu)\big\vert_\Omega=0$. \end{lem} \begin{proof} Recall that $(Lu)\big\vert_\Omega=0$ if $\B^\Omega(\varphi_+\big\vert_\Omega,u)=0$ for all $\varphi_+\in\HH_1$ with $\Tr_1\varphi_+=0$. If $\Tr_1\varphi_+=0=\Tr_1 0$, then by Condition~\eqref{cond:trace:extension} there is some $\varphi\in\HH_1$ with $\varphi\big\vert_\Omega=\varphi_+$, $\varphi\big\vert_{\CC} = 0$ and $\Tr_1\varphi=0$. By the definition~\eqref{eqn:S} of the single layer potential, \begin{equation*}0=\B(\varphi,\s^L\arr g) =\B^\Omega(\varphi\big\vert_\Omega,\s^L_\Omega\arr g\big\vert_\Omega) +\B^\CC(\varphi\big\vert_{\CC},\s^L_\Omega\arr g\big\vert_{\CC}) =\B^\Omega(\varphi_+\big\vert_\Omega,\s^L_\Omega\arr g\big\vert_\Omega) \end{equation*} as desired. Turning to the double layer potential, if $\varphi\in\HH_1$, then by the definition~\eqref{eqn:D:+} of $\D_\Omega^\B$, formula~\eqref{eqn:D:alternate} for~$\D_\CC^\B$ and linearity of~$\B^\Omega$, \begin{align*} \B^\Omega(\varphi\big\vert_\Omega,\D^\B_\Omega \arr f) &= -\B^\Omega\bigl(\varphi\big\vert_\Omega, F\big\vert_\Omega\bigr) +\B^\Omega\bigl(\varphi\big\vert_\Omega, \PP^L(L(\1_\Omega F))\big\vert_\Omega\bigr) ,\\ \B^\CC(\varphi\big\vert_\CC,\D^\B_\CC \arr f\big\vert_{\CC}) &=-\B^\CC\bigl(\varphi\big\vert_{\CC}, \PP^L(L(\1_\Omega F))\big\vert_{\CC}\bigr) .\end{align*} Subtracting and applying Condition~\eqref{cond:local}, \begin{align*} \B^\Omega(\varphi\big\vert_\Omega,\D^\B_\Omega \arr f) -\B^\CC(\varphi\big\vert_\CC,\D^\B_\CC \arr f\big\vert_{\CC}) &= -\B^\Omega\bigl(\varphi\big\vert_\Omega, F\big\vert_\Omega\bigr) +\B\bigl(\varphi, \PP^L(L(\1_\Omega F))\bigr) .\end{align*} By the definition~\eqref{eqn:newton} of $\PP^L$, \begin{equation*}\B\bigl(\varphi, \PP^L(L(\1_\Omega F))\bigr) = \langle \varphi, L(\1_\Omega F)\rangle\end{equation*} and by the definition~\eqref{dfn:L:singular} of $L(\1_\Omega F)$, \begin{equation*}\B\bigl(\varphi, \PP^L(L(\1_\Omega F))\bigr) = \B^\Omega(\varphi\big\vert_\Omega,F\big\vert_\Omega).\end{equation*} Thus, \begin{align}\label{eqn:D:solution} \B^\Omega(\varphi\big\vert_\Omega,\D^\B_\Omega \arr f) -\B^\CC(\varphi\big\vert_\CC,\D^\B_\CC \arr f) &= 0 \quad\text{for all $\varphi\in\HH_1$.} \end{align} In particular, as before if $\Tr_1 \varphi_+=0$ then there is some $\varphi$ with $\varphi\big\vert_\Omega=\varphi_+\big\vert_\Omega$, $\varphi\big\vert_\CC=0$ and so $\B^\Omega(\varphi\big\vert_\Omega,\D^\B_\Omega \arr f)=0$. This completes the proof. \end{proof} \begin{lem}\label{lem:green} If $u\in\HH_2^\Omega$ and $(Lu)\big\vert_\Omega=0$, then \begin{equation*}u = -\D^\B_\Omega (\Tr_2 U) + \s^L_\Omega (\M^\B_\Omega u)\big\vert_\Omega,\quad 0 = \D^\B_\CC (\Tr_2 U) + \s^L_\CC (\M^\B_\Omega u)\big\vert_{\CC}\end{equation*} for any $U\in\HH_2$ with $U\big\vert_\Omega=u$. 
\end{lem} \begin{proof} By definition~\eqref{eqn:D:+} of the double layer potential, \begin{equation*} -\D_\Omega^\B (\Tr_2 U) = U\big\vert_\Omega - \PP^L (L(\1_\Omega U))\big\vert_\Omega = u - \PP^L (L(\1_\Omega u))\big\vert_\Omega \end{equation*} and by formula~\eqref{eqn:D:alternate} \begin{equation*}\D_\CC^\B (\Tr_2 U) = -\PP^L (L(\1_\Omega u))\big\vert_{\CC}.\end{equation*} It suffices to show that $\PP^L(L(\1_\Omega u))=\s^L_\Omega(\M^\B_\Omega u)$. Let $\varphi\in\HH_1$. By formulas~\eqref{eqn:S} and~\eqref{eqn:Neumann}, \begin{equation*}\B(\varphi,\s^L_\Omega (\M^\B_\Omega u)) =\langle \Tr_1 \varphi,\M^\B_\Omega u\rangle =\B^\Omega(\varphi\big\vert_\Omega,u) .\end{equation*} By formula~\eqref{eqn:newton} for the Newton potential and by the definition~\eqref{dfn:L:singular} of $L(\1_\Omega u)$, \begin{equation*} \B(\varphi, \PP^L(L(\1_\Omega u))) = \langle \varphi, L(\1_\Omega u)\rangle =\B^\Omega(\varphi\big\vert_\Omega,u) .\end{equation*} Thus, $\B(\varphi, \PP^L(L(\1_\Omega u))) = \B(\varphi,\s^L_\Omega (\M^\B_\Omega u))$ for all $\varphi\in\HH$; by coercivity of $\B$, we must have that $\PP^L(L(\1_\Omega u))=\s^L_\Omega(\M^\B_\Omega u)$. This completes the proof. \end{proof} Let $\B^*(\varphi,\psi)=\overline{\B(\psi,\varphi)}$ and define $\B^\Omega_*$, $\B^\CC_*$ analogously. Then $\B^*$ is a bounded and coercive operator $\HH_2\times \HH_1\mapsto\C$, and so we can define the double and single layer potentials $\D^{\B^*}_\Omega:\DD_1\mapsto \HH_1^\Omega$, $\s^{L^*}_\Omega:\NN_1\mapsto \HH_1$. We then have the following adjoint relations. \begin{lem}\label{lem:adjoint} We have the adjoint relations \begin{align} \label{eqn:neumann:D:dual} \langle \arr \varphi, \M_\B^{\Omega} \D^{\B}_\Omega \arr f\rangle &= \langle\M_{\B^*}^{\Omega} \D^{\B^*}_\Omega \arr \varphi, \arr f\rangle ,\\ \label{eqn:dirichlet:S:dual} \langle \arr \gamma, \Tr_2 \s^L_\Omega \arr g\rangle &= \langle \Tr_1 \s^{L^*}_\Omega \arr \gamma, \arr g\rangle \end{align} for all $\arr f\in \DD_2$, $\arr \varphi\in\DD_1$, $\arr g\in\NN_2$ and $\arr\gamma\in\NN_1$. If we let $\Tr_2^\Omega \D^{\B}_\Omega \arr f = -\Tr_2 F + \Tr_2 \PP^L(L(\1_\Omega F))$, where $F$ is as in formula~\eqref{eqn:D:+}, then $\Tr_2^\Omega \D^{\B}_\Omega \arr f $ does not depend on the choice of $F$, and we have the duality relations \begin{align}\label{eqn:dirichlet:D:dual} \langle \arr \gamma, \Tr_2^\Omega \D^{\B}_\Omega \arr f\rangle &= \langle-\arr \gamma+\M_{\B^*}^{\Omega}\s^{L^*}_\Omega\arr \gamma, \arr f\rangle .\end{align} \end{lem} \begin{proof} By formula~\eqref{eqn:S}, \begin{gather*}\langle \Tr_1 \s^{L^*}_\Omega \arr \gamma, \arr g\rangle =\B(\s^{L^*}_\Omega\arr \gamma,\s^L_\Omega\arr g\rangle, \\ \langle \Tr_1 \s^{L}_\Omega \arr g, \arr \gamma\rangle =\B^*(\s^{L}_\Omega\arr g,\s^{L^*}_\Omega\arr \gamma\rangle\end{gather*} and so formula~\eqref{eqn:dirichlet:S:dual} follows by definition of~$\B^*$. Let $\Phi\in\HH_1$ and $F\in\HH_2$ with $\Tr_1\Phi=\arr\varphi$, $\Tr_2F=\arr f$. 
Then by formulas \eqref{eqn:Neumann} and~\eqref{eqn:D:+}, \begin{align*}\langle \arr \varphi, \M_\B^{\Omega} \D^{\B}_\Omega \arr f\rangle &= \B^\Omega(\Phi\big\vert_\Omega, \D^{\B}_\Omega \arr f) = -\B^\Omega(\Phi\big\vert_\Omega, F\vert_\Omega) +\B^\Omega(\Phi\big\vert_\Omega, \PP^L(L(\1_\Omega F))\big\vert_\Omega) \\&= -\overline{\B^\Omega_*(F\vert_\Omega, \Phi\big\vert_\Omega)} +\overline{\B^\Omega_*(\PP^L(L(\1_\Omega F))\big\vert_\Omega, \Phi\big\vert_\Omega)} .\end{align*} By formula~\eqref{dfn:L:singular}, \begin{equation*}\B^\Omega_*(\PP^L(L(\1_\Omega F))\big\vert_\Omega, \Phi\big\vert_\Omega) = \langle \PP^L(L(\1_\Omega F)), L^*(\1_\Omega\Phi)\rangle.\end{equation*} By formula~\eqref{eqn:newton}, \begin{equation*}\B^\Omega_*(\PP^L(L(\1_\Omega F))\big\vert_\Omega, \Phi\big\vert_\Omega) = \B^\Omega_*( \PP^L(L(\1_\Omega F)), \PP^{L^*}(L^*(\1_\Omega\Phi))).\end{equation*} Thus, \begin{align*}\langle \arr \varphi, \M_\B^{\Omega} \D^{\B}_\Omega \arr f\rangle &= -\overline{\B^\Omega_*(F\vert_\Omega, \Phi\big\vert_\Omega)} +\overline{\B^\Omega_*( \PP^L(L(\1_\Omega F)), \PP^{L^*}(L^*(\1_\Omega\Phi)))} .\end{align*} By the same argument \begin{align*}\langle \arr f,\M_{\B^*}^{\Omega} \D^{\B^*}_\Omega \arr \varphi\rangle &= -\overline{\B^\Omega(\Phi\vert_\Omega, F\big\vert_\Omega)} +\overline{\B^\Omega( \PP^{L^*}(L^*(\1_\Omega \Phi)), \PP^{L}(L(\1_\Omega F)))} \end{align*} and by definition of $\B^\Omega_*$ formula~\eqref{eqn:neumann:D:dual} is proven. Finally, by definition of $\Tr_2^\Omega \D^{\B}_\Omega$, \begin{equation*}\langle \arr \gamma, \Tr_2^\Omega \D^{\B}_\Omega \arr f\rangle = -\langle \arr \gamma, \Tr_2 F\rangle + \langle \arr \gamma, \Tr_2 \PP^L(L(\1_\Omega F)) \rangle .\end{equation*} By the definition~\eqref{eqn:S} of the single layer potential, \begin{equation*} \langle \arr \gamma, \Tr_2 \PP^L(L(\1_\Omega F)) \rangle = \overline{\B^*( \PP^L(L(\1_\Omega F)) ,\s^{L^*}_\Omega\arr \gamma)} .\end{equation*} By definition of $\B^*$ and the definition~\eqref{eqn:newton} of the Newton potential, \begin{equation*}\overline{\B^*( \PP^L(L(\1_\Omega F)) ,\s^{L^*}_\Omega\arr \gamma)} = {\langle \s^{L^*}_\Omega\arr \gamma, L(\1_\Omega F) \rangle} \end{equation*} and by the definition~\eqref{dfn:L:singular} of $L(\1_\Omega F)$, \begin{equation*}{\langle \s^{L^*}_\Omega\arr \gamma, L(\1_\Omega F) \rangle} = \B^\Omega (\s^{L^*}_\Omega\arr \gamma\big\vert_\Omega, F\big\vert_\Omega).\end{equation*} By the definition~\eqref{eqn:Neumann} of Neumann boundary values, \begin{equation*}\overline{\B^\Omega_*(F,\s^{L^*}_\Omega\arr \gamma)} = \overline{\langle \Tr_2 F, \M^\Omega_{\B^*}(\s^{L^*}_\Omega\arr\gamma\big\vert_\Omega)\rangle}\end{equation*} and so \begin{equation*}\langle \arr \gamma, \Tr_2^\Omega \D^{\B}_\Omega \arr f\rangle = -\langle\arr\gamma,\arr f\rangle + {\langle \M^\Omega_{\B^*}(\s^{L^*}_\Omega\arr\gamma\big\vert_\Omega), \arr f\rangle} \end{equation*} for any choice of $F$. Thus $\Tr_2^\Omega \D^{\B}_\Omega $ is well-defined and formula~\eqref{eqn:dirichlet:D:dual} is valid. \end{proof} \begin{lem}\label{lem:jump} Let $\Tr_2^\Omega \D^{\B}_\Omega $ be as in Lemma~\ref{lem:adjoint}. 
If $\arr f\in\DD$ and $\arr g\in\NN$, then we have the jump and continuity relations \begin{align} \label{eqn:D:jump} \Tr_2^\Omega\D^\B_\Omega\arr f +\Tr_2^\CC\D^\B_\CC\arr f &=-\arr f ,\\ \label{eqn:S:jump} \M_\B^\Omega (\s^L_\Omega \arr g\big\vert_\Omega) +\M_\B^{\CC} (\s^L_\Omega\arr g\big\vert_{\CC}) &=\arr g ,\\ \label{eqn:D:cts} \M_\B^{\Omega} (\D^\B_\Omega\arr f) - \M_\B^{\CC} (\D^\B_\CC\arr f ) &=0 .\end{align} If there are bounded operators $\Tr_2^\Omega:\HH_2^\Omega\mapsto\DD_2$ and $\Tr_2^\CC:\HH_2^\CC\mapsto\DD_2$ such that $\Tr_2 F = \Tr_2^\Omega (F\big\vert_\Omega)= \Tr_2^\CC (F\big\vert_\CC)$ for all $F\in\HH_2$, then in addition \begin{align} \label{eqn:S:cts} \Tr_2^\Omega(\s^L_\Omega \arr g\big\vert_\Omega) -\Tr_2^\CC(\s^L_\Omega \arr g\big\vert_{\CC}) &=0 .\end{align} \end{lem} The condition imposed here on $\Tr_2^\Omega$, $\Tr_2^\CC$ is very natural if $\Omega\subset\R^d$ is an open set, $\CC=\R^d\setminus\bar\Omega$ and $\Tr_2$ denotes a trace operator restricting functions to the boundary~$\partial\Omega$. Observe that if such operators $\Tr_2^\Omega$ and $\Tr_2^\CC$ exist, then by the definition~\eqref{eqn:D:+} of the double layer potential and by the definition of $\Tr_2^\Omega \D^\B_\Omega$ in Lemma~\ref{lem:adjoint}, $\Tr_2^\Omega (\D^\B_\Omega \arr f) = (\Tr_2^\Omega \D^\B_\Omega)\arr f$ and so there is no ambiguity of notation. \begin{proof}[Proof of Lemma~\ref{lem:jump}] We first observe that by the definition~\eqref{eqn:Neumann} of Neumann boundary values, if $u_+\in\HH_2^\Omega$ and $u_-\in\HH_2^\CC$ with $(Lu_+)\big\vert_\Omega=0$ and $(Lu_-)\big\vert_\CC=0$, then \begin{equation*} \M_\B^\Omega u_+ +\M_\B^{\CC} u_-=\arr \psi \text{ if and only if } \langle\Tr_1\varphi,\arr\psi\rangle = \B^\Omega(\varphi\big\vert_\Omega,u_+) +\B^\CC(\varphi\big\vert_{\CC},u_-) \end{equation*} for all $\varphi\in\HH_1$. The continuity relation \eqref{eqn:D:cts} follows from formula~\eqref{eqn:D:solution}. The jump relation~\eqref{eqn:D:jump} follows from the definition of $\Tr_2^\Omega\D^\B_\Omega$ and by using formula~\eqref{eqn:D:alternate:extensions} to rewrite $\Tr_2^\CC\D^\B_\CC$. The jump relation \eqref{eqn:S:jump} follows from the definition~\eqref{eqn:S} of the single layer potential. The continuity relation~\eqref{eqn:S:cts} follows because $\s^L_\Omega\arr g\in \HH_2$ and by the definition of $\Tr_2^\Omega$, $\Tr_2^\CC$. \end{proof} \section{Layer potentials and boundary value problems} \label{sec:invertible} If $\HH_2^\Omega$ and $\DD_2$ are as in Section~\ref{sec:example}, then by the Lax-Milgram lemma there is a unique solution to the Dirichlet and Neumann boundary value problems \begin{equation*}\left\{\begin{aligned} (Lu)\big\vert_\Omega &= 0,\\ \Tr^\Omega_2 u &= \arr f,\\ \doublebar{u}_{\HH_2^\Omega} &\leq C \doublebar{\arr f}_{\DD_2}, \end{aligned}\right.\qquad \left\{\begin{aligned} (Lu)\big\vert_\Omega &= 0,\\ \M_\B^\Omega u &= \arr g,\\ \doublebar{u}_{\HH^\Omega_2} &\leq C \doublebar{\arr g}_{\NN_2} .\end{aligned}\right. \end{equation*} We routinely wish to establish existence and uniqueness of solutions to the Dirichlet and Neumann boundary value problems \begin{equation*} (D)^{\widehat L}_\XX\left\{\begin{aligned} (\widehat Lu)\big\vert_\Omega &= 0,\\ \WTr_\XX^\Omega u &= \arr f,\\ \doublebar{u}_{\XX^\Omega} &\leq C \doublebar{\arr f}_{\DD_\XX}, \end{aligned}\right.\qquad (N)^\B_\XX\left\{\begin{aligned} (\widehat Lu)\big\vert_\Omega &= 0,\\ \MM_\B^\Omega u &= \arr g,\\ \doublebar{u}_{\XX^\Omega} &\leq C \doublebar{\arr g}_{\NN_\XX} \end{aligned}\right.
\end{equation*} for some constant $C$ and some other solution space $\XX$ and spaces of Dirichlet and Neumann boundary data $\DD_\XX$ and $\NN_\XX$. For example, if $L$ is a second-order differential operator, then as in \cite{JerK81B,KenP93,KenR09,DinPR13p} we might wish to establish well-posedness with $\DD_\XX=\dot W_1^p(\partial\Omega)$, $\NN_\XX=L^p(\partial\Omega)$ and $\XX=\{u:\widetilde N(\nabla u)\in L^p(\partial\Omega)\}$, where $\widetilde N$ is the nontangential maximal function introduced in \cite{KenP93}. The classic method of layer potentials is based on the observation that if the layer potentials, originally defined as bounded operators $\D^\B_\Omega:\DD_2\mapsto\HH_2$ and $\s^L_\Omega:\NN_2\mapsto\HH_2$, may be extended to operators $\D^\B_\Omega:\DD_\XX\mapsto\XX$ and $\s^L_\Omega:\NN_\XX\mapsto\XX$, and if certain of the properties of layer potentials of Section~\ref{sec:properties} are preserved by that extension, then well posedness of boundary value problems is equivalent to certain invertibility properties of layer potentials. In this section we will make this notion precise. As in Sections~\ref{sec:dfn}, \ref{sec:D:S} and~\ref{sec:properties}, we will work with layer potentials and function spaces in a very abstract setting. \subsection{From invertibility to well posedness} \label{sec:invertible:well-posed} In this section we will need the following objects. \begin{itemize} \item Quasi-Banach spaces $\XX^\Omega$, $\DD_\XX$ and $\NN_\XX$. \item A linear operator $u\mapsto(\widehat L u)\big\vert_\Omega$ acting on~$\XX^\Omega$. \item Linear operators $\WTr_\XX^\Omega:\{u\in\XX^\Omega:(\widehat L u)\big\vert_\Omega =0\}\mapsto\DD_\XX$ and $\MM_\B^\Omega:\{u\in\XX^\Omega:(\widehat L u)\big\vert_\Omega =0\}\mapsto\NN_\XX$. \item Linear operators $\widehat\D^\B_\Omega:\DD_\XX\mapsto \XX^\Omega$ and $\widehat\s^L_\Omega:\NN_\XX\mapsto \XX^\Omega$. \end{itemize} \begin{rmk} Recall that $\s^L_\Omega=\s^L_\CC$ is defined in terms of a ``global'' Hilbert space $\HH_2$. If $\XX^\Omega=\HH_2^\Omega$, then $\widehat \s^L_\Omega\arr g = \s^L_\Omega\arr g\big\vert_\Omega$. In the general case, we do not assume the existence of a global quasi-Banach space $\XX$ whose restrictions to $\Omega$ lie in~$\XX^\Omega$, and thus we will let $\widehat \s^L_\Omega\arr g$ be an element of $\XX^\Omega$ without assuming a global extension. \end{rmk} In applications it is often useful to define $\Tr^\Omega$, $\M_\B^\Omega$, $L$, $\D^\B_\Omega$ and $\s^L_\Omega$ in terms of some Hilbert spaces $\HH_j$, $\HH_j^\Omega$ and to extend these operators to operators with domain or range $\XX^\Omega$ by density or some other means. See, for example, \cite{BarHM17pC}. We will not assume that the operators $\WTr_\XX^\Omega$, $\MM_\B^\Omega$, $\widehat L$, $\widehat\D^\B_\Omega$ and $\widehat\s^L_\Omega$ arise by density; we will merely require that they satisfy certain properties similar to those established in Section~\ref{sec:properties}. Specifically, we will often use the following conditions; observe that if $\XX^\Omega=\HH_2^\Omega$ for some $\HH_2^\Omega$ as in Section~\ref{sec:dfn}, these properties are valid. \begin{description} \item[\hypertarget{ConditionT}{\condTname}] $\WTr_\XX^\Omega$ is a bounded operator $\{u\in\XX^\Omega:(\widehat L u)\big\vert_\Omega =0\}\mapsto \DD_\XX$. \item[\hypertarget{ConditionM}{\condMname}] $\MM_\B^\Omega$ is a bounded operator $\{u\in\XX^\Omega:(\widehat L u)\big\vert_\Omega =0\}\mapsto \NN_\XX$.
\item[\hypertarget{ConditionS}{\condSname}] The single layer potential $\widehat\s^L_\Omega$ is bounded $\NN_\XX\mapsto \XX^\Omega$, and if $\arr g\in \NN_\XX$ then $(\widehat L (\widehat\s^L_\Omega\arr g))\big\vert_\Omega=0$. \item[\hypertarget{ConditionD}{\condDname}] The double layer potential $\widehat\D^\B_\Omega$ is bounded $\DD_\XX\mapsto\XX^\Omega$, and if $\arr f\in \DD_\XX$ then $(\widehat L (\widehat\D^\B_\Omega\arr f))\big\vert_\Omega=0$. \item[\hypertarget{ConditionG}{\condGname}] If $u \in \XX^\Omega$ and $(\widehat L u)\big\vert_\Omega =0$, then we have the Green's formula \begin{equation*}u = -\widehat\D^\B_\Omega (\WTr_\XX^\Omega u) + \widehat\s^L_\Omega (\MM_\B^\Omega u).\end{equation*} \end{description} We remark that the linear operator $u\mapsto(\widehat L u)\big\vert_\Omega$ is used only to characterize the subspace $\XX^\Omega_{ L}=\{u\in\XX^\Omega:(\widehat L u)\big\vert_\Omega =0\}$. We could work directly with $\XX^\Omega_{ L}$; however, we have chosen to use the more cumbersome notation $\{u\in\XX^\Omega:(\widehat L u)\big\vert_\Omega =0\}$ to emphasize that the following arguments, presented here only in terms of linear operators, are intended to be used in the context of boundary value problems for differential equations. The following theorem is straightforward to prove and is the core of the classic method of layer potentials. \begin{thm}\label{thm:surjective:existence} Let $\XX$, $\DD_\XX$, $\NN_\XX$, $\widehat\D^\B_\Omega$, $\widehat\s^L_\Omega$, $\WTr_\XX^\Omega$ and $\MM_\B^\Omega$ be quasi-Banach spaces and linear operators with domains and ranges as above. Suppose that Conditions~\condT\ and \condS\ are valid, and that $\WTr_\XX^\Omega \widehat\s^L_\Omega: \NN_\XX \mapsto \DD_\XX$ is surjective. Then for every $\arr f\in \DD_\XX$, there is some $u$ such that \begin{equation}\label{eqn:Dirichlet:weak} (\widehat L u)\big\vert_\Omega = 0, \quad \WTr_\XX^\Omega u = \arr f, \quad u\in\XX^\Omega.\end{equation} Suppose in addition $\WTr_\XX^\Omega \widehat\s^L_\Omega: \NN_\XX \mapsto \DD_\XX$ has a bounded right inverse, that is, there is a constant $C_0$ such that if $\arr f\in\DD_\XX$, then there is some preimage $\arr g$ of $\arr f$ with $\doublebar{\arr g}_{\NN_\XX}\leq C_0\doublebar{\arr f}_{\DD_\XX}$. Then there is some constant $C_1$ depending on $C_0$ and the implicit constants in Conditions~\condT\ and~\condS\ such that if $\arr f\in\DD_\XX$, then there is some $u\in\XX^\Omega$ such that \begin{equation} \label{eqn:Dirichlet:strong} (\widehat L u)\big\vert_\Omega = 0, \quad \WTr_\XX^\Omega u = \arr f, \quad \doublebar{u}_{\XX^\Omega}\leq C_1\doublebar{\arr f}_{\DD_\XX}.\end{equation} Suppose that Conditions~\condM\ and \condD\ are valid, and that $\MM_\B^\Omega\widehat\D^\B_\Omega: \DD_\XX \mapsto \NN_\XX$ is surjective. 
Then for every $\arr g\in \NN_\XX$, there is some $u$ such that \begin{equation}\label{eqn:Neumann:weak}(\widehat L u)\big\vert_\Omega = 0, \quad \MM_\B^\Omega u = \arr g, \quad u\in\XX^\Omega.\end{equation} If $\MM_\B^\Omega\widehat\D^\B_\Omega: \DD_\XX \mapsto \NN_\XX$ has a bounded right inverse, then there is some constant $C_1$ depending on the bound on that inverse and the implicit constants in Conditions~\condM\ and~\condD\ such that if $\arr g\in\NN_\XX$, then there is some $u\in\XX^\Omega$ such that \begin{equation}\label{eqn:Neumann:strong}(\widehat L u)\big\vert_\Omega = 0, \quad \MM_\B^\Omega u = \arr g, \quad \doublebar{u}_{\XX^\Omega}\leq C_1\doublebar{\arr g}_{\NN_\XX}.\end{equation} \end{thm} Thus, surjectivity of layer potentials implies existence of solutions to boundary value problems. We may also show that injectivity of layer potentials implies uniqueness of solutions to boundary value problems. This argument appeared first in \cite{BarM16A} and is the converse to an argument of \cite{Ver84}. \begin{thm}\label{thm:injective:unique} Let $\XX$, $\DD_\XX$, $\NN_\XX$, $\widehat\D^\B_\Omega$, $\widehat\s^L_\Omega$, $\WTr_\XX^\Omega$ and $\MM_\B^\Omega$ be quasi-Banach spaces and linear operators with domains and ranges as above. Suppose that Conditions~\condT, \condM, \condD, \condS\ and~\condG\ are all valid. Suppose that the operator $\WTr_\XX^\Omega \widehat\s^L_\Omega: \NN_\XX \mapsto \DD_\XX$ is one-to-one. Then for each $\arr f\in\DD_\XX$, there is at most one solution $u$ to the Dirichlet problem \begin{equation*}(\widehat L u)\big\vert_\Omega = 0, \quad \WTr_\XX^\Omega u = \arr f, \quad u\in\XX^\Omega.\end{equation*} If $\WTr_\XX^\Omega \widehat\s^L_\Omega: \NN_\XX \mapsto \DD_\XX$ has a bounded left inverse, that is, there is a constant $C_0$ such that the estimate $\doublebar{\arr g}_{\NN_\XX}\leq C_0 \doublebar{\WTr_\XX^\Omega \widehat\s^L_\Omega \arr g}_{\DD_\XX}$ is valid, then there is some constant $C_1$ such that every $u\in\XX^\Omega$ with $(\widehat L u)\big\vert_\Omega=0$ satisfies $\doublebar{u}_{\XX^\Omega}\leq C_1\doublebar{\WTr_\XX^\Omega u}_{\DD_\XX}$ (that is, if $u$ satisfies the Dirichlet problem~\eqref{eqn:Dirichlet:weak} then $u$ must satisfy the Dirichlet problem~\eqref{eqn:Dirichlet:strong}). Similarly, if the operator $\MM_\B^\Omega \widehat\D^\B_\Omega:\DD_\XX \mapsto \NN_\XX$ is one-to-one, then for each $\arr g\in\NN_\XX$, there is at most one solution $u$ to the Neumann problem \begin{equation*}(\widehat L u)\big\vert_\Omega = 0, \quad \MM_\B^\Omega u = \arr g, \quad u\in\XX^\Omega.\end{equation*} If $\MM_\B^\Omega \widehat\D^\B_\Omega:\DD_\XX \mapsto \NN_\XX$ has a bounded left inverse, then there is some constant $C_1$ such that every $u\in\XX^\Omega$ with $(\widehat L u)\big\vert_\Omega=0$ satisfies $\doublebar{u}_{\XX^\Omega}\leq C_1\doublebar{\MM_\B^\Omega u}_{\NN_\XX}$. \end{thm} \begin{proof} We present the proof only for the Neumann problem; the argument for the Dirichlet problem is similar. Suppose that $u$, $v\in\XX^\Omega$ with $(\widehat Lu)\big\vert_\Omega=(\widehat Lv)\big\vert_\Omega=0$ in~$\Omega$ and $\MM_\B^\Omega u=\arr g=\MM_\B^\Omega v$.
By Condition~\condG, \begin{align*} u = -\widehat\D^\B_\Omega (\WTr_\XX^\Omega u) + \widehat\s^L_\Omega (\MM_\B^\Omega u) &= -\widehat\D^\B_\Omega (\WTr_\XX^\Omega u) + \widehat\s^L_\Omega \arr g ,\\ v= -\widehat\D^\B_\Omega (\WTr_\XX^\Omega v) + \widehat\s^L_\Omega (\MM_\B^\Omega v) &= -\widehat\D^\B_\Omega (\WTr_\XX^\Omega v) + \widehat\s^L_\Omega \arr g .\end{align*} In particular, $\MM_\B^\Omega \widehat\D^\B_\Omega (\WTr_\XX^\Omega u) = \MM_\B^\Omega \widehat\D^\B_\Omega (\WTr_\XX^\Omega v)$. If $\MM_\B^\Omega \widehat\D^\B_\Omega$ is one-to-one, then $\WTr_\XX^\Omega u = \WTr_\XX^\Omega v$. Another application of Condition~\condG\ yields that $u=v$. Now, suppose that we have the estimate $\doublebar{\arr f}_{\DD_\XX}\leq C \doublebar{\MM_\B^\Omega \widehat\D^\B_\Omega \arr f}_{\NN_\XX}$. (This implies injectivity of $\MM_\B^\Omega \widehat\D^\B_\Omega$.) Let $u\in {\XX^\Omega}$ with $(\widehat L u)\big\vert_\Omega=0$; we want to show that $\doublebar{u}_{\XX^\Omega}\leq C \doublebar{\MM_\B^\Omega u}_{\DD_\XX}$. By Condition~\condG, and because $\XX^\Omega$ is a quasi-Banach space, \begin{equation*}\doublebar{u}_{\XX^\Omega}\leq C\doublebar{\widehat\D^\B_\Omega (\WTr_\XX^\Omega u)}_{\XX^\Omega} + C\doublebar{\widehat\s^L_\Omega (\MM_\B^\Omega u)}_{\XX^\Omega}.\end{equation*} By Conditions~\condD\ and~\condS, \begin{equation*}\doublebar{u}_\XX\leq C\doublebar{\WTr_\XX^\Omega u}_{\DD_\XX} + C\doublebar{\MM_\B^\Omega u}_{\NN_\XX}.\end{equation*} Applying our estimate on $\MM_\B^\Omega \widehat\D^\B_\Omega$, we see that \begin{equation*}\doublebar{u}_\XX\leq C\doublebar{\MM_\B^\Omega \widehat\D^\B_\Omega\WTr_\XX^\Omega u}_{\NN_\XX} + C\doublebar{\MM_\B^\Omega u}_{\NN_\XX}.\end{equation*} By Condition~\condG, $\widehat\D^\B_\Omega(\WTr_\XX^\Omega u) =\widehat\s^L_\Omega(\MM_\B^\Omega u) - u $, and so \begin{equation*}\doublebar{u}_\XX\leq C\doublebar{\MM_\B^\Omega \widehat \s^L_\Omega\MM_\B^\Omega u}_{\NN_\XX} + C\doublebar{\MM_\B^\Omega u}_{\NN_\XX}.\end{equation*} Another application of Condition~\condS\ and of Condition~\condM\ completes the proof. \end{proof} \subsection{From well posedness to invertibility} \label{sec:well-posed:invertible} We are now interested in the converse results. That is, we have shown that results for layer potentials imply results for boundary value problems; we would like to show that results for boundary value problems imply results for layer potentials. Notice that the above results were built on the Green's formula~\condG. The converse results will be built on jump relations, as in Lemma~\ref{lem:jump}. Recall that jump relations treat the interplay between layer potentials in a domain and in its complement; thus we will need to impose conditions in both domains. In this section we will need the following spaces and operators. \begin{itemize} \item Quasi-Banach spaces $\XX^\UU$, $\XX^\VV$, $\DD_\XX$ and $\NN_\XX$. \item Linear operators $u\mapsto(\widehat L u)\big\vert_\UU$ and $u\mapsto(\widehat L u)\big\vert_\VV$ acting on~$\XX^\UU$ and~$\XX^\VV$. \item Linear operators $\WTr_\XX^\UU $, $\MM_\B^\UU$, $\WTr_\XX^\VV $, and $\MM_\B^\VV$ acting on $\{u\in\XX^\UU:(\widehat L u)\big\vert_\UU =0\}$ or $\{u\in\XX^\VV:(\widehat L u)\big\vert_\VV =0\}$. \item Linear operators $\widehat\D^\B_\UU$, $\widehat\D^\B_\VV$ acting on $\DD_\XX$ and $\widehat\s^L_\UU$, $\widehat\s^L_\VV$ acting on $\NN_\XX$. \end{itemize} In the applications $\UU$ is an open set in $\R^d$ or in a smooth manifold, and $\VV=\R^d\setminus\overline\UU$ is the interior of its complement. 
The space $\XX^\VV$ is then a space of functions defined in~$\VV$ and is thus a different space from $\XX^\UU$. However, we emphasize that we have defined only one space $\DD_\XX$ of Dirichlet boundary values and one space $\NN_\XX$ of Neumann boundary values; that is, the traces from both sides of the boundary must lie in the same spaces. We will often use the following conditions. \begin{description} \item[\hypertarget{ConditionTT}{\condTTname}, \hypertarget{ConditionMM}{\condMMname}, \hypertarget{ConditionSS}{\condSSname}, \hypertarget{ConditionDD}{\condDDname}, \hypertarget{ConditionGG}{\condGGname}] Condition \condT, \condM, \condS, \condD, or \condG\ holds for both $\Omega=\UU$ and $\Omega=\VV$. \item[\hypertarget{ConditionJScts}{\condJSctsname}] If $\arr g\in\NN_\XX$, then we have the continuity relation \begin{align*} \WTr_\XX^\UU (\widehat\s^L_\UU \arr g) -\WTr_\XX^\VV (\widehat\s^L_\VV \arr g) &=0 .\end{align*} \item[\hypertarget{ConditionJDcts}{\condJDctsname}] If $\arr f\in\DD_\XX$, then we have the continuity relation \begin{align*} \MM_\B^\UU (\widehat\D^\B_\UU\arr f) - \MM_\B^\VV (\widehat\D^\B_\VV\arr f ) &=0 .\end{align*} \item[\hypertarget{ConditionJSjump}{\condJSjumpname}] If $\arr g\in\NN_\XX$, then we have the jump relation \begin{align*} \MM_\B^\UU (\widehat\s^L_\UU \arr g) +\MM_\B^\VV (\widehat\s^L_\VV\arr g) &=\arr g .\end{align*} \item[\hypertarget{ConditionJDjump}{\condJDjumpname}] If $\arr f\in\DD_\XX$, then we have the jump relation \begin{align*} \WTr_\XX^\UU (\widehat\D^\B_\UU\arr f) +\WTr_\XX^\VV (\widehat\D^\B_\VV\arr f) &=-\arr f .\end{align*} \end{description} We now move from well posedness of boundary value problems to invertibility of layer potentials. The following theorem uses an argument of Verchota from \cite{Ver84}. \begin{thm}\label{thm:unique:injective} Assume that Conditions~\condMM, \condSS, \condJScts, and \condJSjump\ are valid. Suppose that, for any $\arr f\in\DD_\XX$, there is at most one solution $u_+$ or $u_-$ to each of the two Dirichlet problems \begin{gather*} (\widehat L u_+)\big\vert_\UU = 0, \quad \WTr_\XX^\UU u_+ = \arr f, \quad u_+\in\XX^\UU ,\\ (\widehat L u_-)\big\vert_\VV = 0, \quad \WTr_\XX^\VV u_- = \arr f, \quad u_-\in\XX^\VV .\end{gather*} Then $\WTr_\XX^{\UU}\widehat\s^L_\UU:\NN_\XX\mapsto\DD_\XX$ is one-to-one. If in addition there is a constant $C_0$ such that every $u_+\in\XX^\UU$ and $u_-\in\XX^\VV$ with $(\widehat L u_+)\big\vert_\UU =0$ and $(\widehat L u_-)\big\vert_\VV = 0$ satisfies \begin{equation*}\doublebar{u_+}_{\XX^\UU}\leq C_0\doublebar{\WTr_\XX^\UU u_+}_{\DD_\XX}, \quad \doublebar{u_-}_{\XX^\VV}\leq C_0\doublebar{\WTr_\XX^\VV u_-}_{\DD_\XX}, \end{equation*} then there is a constant $C_1$ such that the bound $\doublebar{\arr g}_{\NN_\XX} \leq C_1 \doublebar{\WTr_\XX^{\UU}\widehat\s^L_\UU\arr g}_{\DD_\XX}$ is valid for all $\arr g\in\NN_\XX$. Similarly, assume that Conditions~\condTT, \condDD, \condJDcts, and \condJDjump\ are valid. Suppose that for any $\arr g\in\NN_\XX$, there is at most one solution $u_+$ or $u_-$ to each of the two Neumann problems \begin{gather*} (\widehat L u_+)\big\vert_\UU = 0, \quad \MM_\B^\UU u_+ = \arr g, \quad u_+\in\XX^\UU ,\\ (\widehat L u_-)\big\vert_\VV = 0, \quad \MM_\B^\VV u_- = \arr g, \quad u_-\in\XX^\VV .\end{gather*} Then $\MM_\B^\UU\widehat\D^\B_\UU:\DD_\XX\mapsto\NN_\XX$ is one-to-one.
If there is a constant $C_0$ such that every $u_+\in\XX^\UU$ and $u_-\in\XX^\VV$ with $(\widehat L u_+)\big\vert_\UU =0$ and $(\widehat L u_-)\big\vert_\VV = 0$ satisfies \begin{equation*}\doublebar{u_+}_{\XX^\UU}\leq C_0\doublebar{\MM_\B^\UU u_+}_{\NN_\XX}, \quad \doublebar{u_-}_{\XX^\VV}\leq C_0\doublebar{\MM_\B^\VV u_-}_{\NN_\XX}, \end{equation*} then there is a constant $C_1$ such that the bound $\doublebar{\arr f}_{\DD_\XX} \leq C_1 \doublebar{\MM_\B^\UU\widehat\D^\B_\UU\arr f}_{\NN_\XX}$ is valid for all $\arr f\in\DD_\XX$. \end{thm} \begin{proof} As in the proof of Theorem~\ref{thm:injective:unique}, we will consider only the relationship between the Neumann problem and the double layer potential. Let $\arr f$, $\arr h\in\DD_\XX$. By Condition~\condDD, $u_+=\widehat\D^\B_\UU\arr f\in\XX^\UU$ and $v_+=\widehat\D^\B_\UU \arr h\in\XX^\UU$. If $\MM_\B^\UU \widehat\D^\B_\UU \arr f = \MM_\B^\UU \widehat\D^\B_\UU \arr h$, then $\MM_\B^\UU u_+=\MM_\B^\UU v_+$. Because there is at most one solution to the Neumann problem, we must have that $u_+=v_+$, and in particular $\WTr_\XX^\UU \widehat\D^\B_\UU \arr f = \WTr_\XX^\UU \widehat\D^\B_\UU \arr h$. By Condition~\condJDcts, we have that $\MM_\B^\VV \widehat\D^\B_\VV \arr f = \MM_\B^\VV \widehat\D^\B_\VV \arr h$. By Condition~\condDD\ and uniqueness of solutions to the $\VV$-Neumann problem, $\WTr_\XX^\VV \widehat\D^\B_\VV \arr f = \WTr_\XX^\VV \widehat\D^\B_\VV \arr h$. By \condJDjump, we have that \begin{equation*}\arr f = \WTr_\XX^\UU \widehat\D^\B_\UU \arr f + \WTr_\XX^\VV \widehat\D^\B_\VV \arr f = \WTr_\XX^\UU \widehat\D^\B_\UU \arr h + \WTr_\XX^\VV \widehat\D^\B_\VV \arr h =\arr h\end{equation*} and so $\MM_\B^\UU \widehat\D^\B_\UU$ is one-to-one. Now assume the stronger condition, that is, that $C_0<\infty$. Because $\DD_\XX$ is a quasi-Banach space, if $\arr f\in\DD_\XX$ then by Condition~\condJDjump, \begin{equation*}\doublebar{\arr f}_{\DD_\XX} \leq C\doublebar{\WTr_\XX^\UU \widehat\D^\B_\UU \arr f}_{\DD_\XX}+C\doublebar{\WTr_\XX^\VV \widehat\D^\B_\VV \arr f}_{\DD_\XX}.\end{equation*} By Condition~\condDD, $\widehat\D^\B_\UU \arr f\in\XX^\UU$ with $(\widehat L(\widehat\D^\B_\UU \arr f))\big\vert_\UU=0$. Thus by Condition~\condTT, $\doublebar{\WTr_\XX^\UU \widehat\D^\B_\UU \arr f}_{\DD_\XX}\leq C\doublebar{\widehat\D^\B_\UU \arr f}_{\XX^\UU}$. Thus, \begin{equation*}\doublebar{\arr f}_{\DD_\XX} \leq C\doublebar{\widehat\D^\B_\UU \arr f}_{\XX^\UU}+C\doublebar{\widehat\D^\B_\VV \arr f}_{\XX^\VV}.\end{equation*} By definition of~$C_0$, \begin{equation*}\doublebar{\widehat\D^\B_\UU \arr f}_{\XX^\UU}\leq C_0\doublebar{\MM_\B^\UU\widehat\D^\B_\UU \arr f}_{\NN_\XX}\quad\text{and}\quad\doublebar{\widehat\D^\B_\VV \arr f}_{\XX^\VV}\leq C_0\doublebar{\MM_\B^\VV\widehat\D^\B_\VV \arr f}_{\NN_\XX}.\end{equation*} By Condition~\condJDcts, $\MM_\B^\VV\widehat\D^\B_\VV \arr f=\MM_\B^\UU\widehat\D^\B_\UU \arr f$ and so \begin{equation*}\doublebar{\arr f}_{\DD_\XX} \leq 2CC_0\doublebar{\MM_\B^\UU\widehat\D^\B_\UU \arr f}_{\NN_\XX}\end{equation*} as desired. \end{proof} Finally, we consider the relationship between existence and surjectivity. The following argument appeared first in \cite{BarM13}. \begin{thm} \label{thm:existence:surjective} Assume that Conditions~\condMM, \condGG, \condJScts, and \condJDjump\ are valid.
Suppose that, for any $\arr f\in\DD_\XX$, there is at least one pair of solutions $u_\pm$ to the pair of Dirichlet problems \begin{equation}\label{eqn:Dirichlet:ES} (\widehat L u_+)\big\vert_\UU = (\widehat L u_-)\big\vert_\VV = 0, \>\>\> \WTr_\XX^\UU u_+ = \WTr_\XX^\VV u_- = \arr f, \>\>\> u_+\in\XX^\UU , \>\>\> u_-\in\XX^\VV .\end{equation} Then $\Tr_\XX^{\UU}\widehat\s^L_\UU:\NN_\XX\mapsto\DD_\XX$ is onto. Suppose that there is some $C_0<\infty$ such that, if $\arr f\in\DD_\XX$, there is some pair of solutions $u^\pm$ to the problem~\eqref{eqn:Dirichlet:ES} with \begin{equation*}\doublebar{u_+}_{\XX^\UU}\leq C_0\doublebar{\arr f}_{\DD_\XX}, \quad \doublebar{u_-}_{\XX^\VV}\leq C_0\doublebar{\arr f}_{\DD_\XX}. \end{equation*} Then there is a constant $C_1$ such that for any $\arr f\in\DD_\XX$, there is a $\arr g\in\NN_\XX$ such that ${\Tr_\XX^{\UU}\widehat\s^L_\UU\arr g}=\arr f$ and $\doublebar{\arr g}_{\NN_\XX} \leq C_1 \doublebar{\arr f}_{\DD_\XX}$. Similarly, assume that Conditions~\condTT, \condGG, \condJDcts, and \condJSjump\ are valid. Suppose that for any $\arr g\in\NN_\XX$, there is at least one pair of solutions $u_\pm$ to the pair of Neumann problems \begin{equation}\label{eqn:Neumann:ES} (\widehat L u_+)\big\vert_\UU = (\widehat L u_-)\big\vert_\VV = 0, \>\>\> \MM_\B^\UU u_+ = \MM_\B^\VV u_- = \arr g, \>\>\> u_+\in\XX^\UU , \>\>\> u_-\in\XX^\VV .\end{equation} Then $\MM_\B^\UU\widehat\D^\B_\UU:\DD_\XX\mapsto\NN_\XX$ is onto. Suppose that there is some $C_0<\infty$ such that, if $\arr g\in\NN_\XX$, there is some pair of solutions $u^\pm$ to the problem~\eqref{eqn:Neumann:ES} with \begin{equation}\label{eqn:Neumann:bound}\doublebar{u_+}_{\XX^\UU}\leq C_0\doublebar{\arr g}_{\NN_\XX}, \quad \doublebar{u_-}_{\XX^\VV}\leq C_0\doublebar{\arr g}_{\NN_\XX}. \end{equation} Then there is a constant $C_1$ such that for any $\arr g\in\NN_\XX$, there is an $\arr f\in\DD_\XX$ such that ${\MM_\B^\UU\widehat\D^\B_\UU\arr f}=\arr g$ and $\doublebar{\arr f}_{\DD_\XX} \leq C_1 \doublebar{\arr g}_{\NN_\XX}$. \end{thm} \begin{proof} As usual we present the proof for the Neumann problem. Choose some $\arr g\in\NN_\XX$ and let $u_+$ and $u_-$ be the solutions to the problem~\eqref{eqn:Neumann:ES} assumed to exist. (If $C_0<\infty$ we further require that the bound~\eqref{eqn:Neumann:bound} be valid.) By Condition~\condTT, $\arr f_+=\WTr_\XX^\UU u_+$ and $\arr f_-=\WTr_\XX^\VV u_-$ exist and lie in~$\DD_\XX$. By Condition~\condGG, \begin{align*} 2\arr g &= \MM_\B^\UU u_+ + \MM_\B^\VV u_- \\&= \MM_\B^\UU (-\widehat\D^\B_\UU \arr f_+ + \widehat\s^L_\UU \arr g) + \MM_\B^\VV (-\widehat\D^\B_\VV \arr f_- + \widehat\s^L_\VV \arr g) .\end{align*} By Conditions~\condJDcts\ and \condJSjump\ and linearity of the operators $\MM_\B^\UU$, $\MM_\B^\VV$, we have that \begin{align*} 2\arr g &= -\MM_\B^\UU\widehat\D^\B_\UU \arr f_+ + \MM_\B^\UU\widehat\s^L_\UU \arr g -\MM_\B^\UU\widehat\D^\B_\UU \arr f_- +\arr g- \MM_\B^\UU\widehat\s^L_\UU \arr g \\&= \arr g -\MM_\B^\UU \widehat\D^\B_\UU (\arr f_+ + \arr f_-) .\end{align*} Thus, $\MM_\B^\UU \widehat\D^\B_\UU$ is surjective. If $C_0<\infty$, then because $\DD_\XX$ is a quasi-Banach space and by Condition~\condTT, \begin{equation*}\doublebar{\arr f_+ + \arr f_-}_{\DD_\XX} \leq C C_0\doublebar{\arr g}_{\NN_\XX} \end{equation*} as desired. 
\end{proof} \newcommand{\etalchar}[1]{$^{#1}$} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2} \end{document}
\begin{document} \title{On the Power of Reusable Magic States} \author{Jonas T. \surname{Anderson}} \email[]{[email protected]} \affiliation{Center for Quantum Information and Control, University of New Mexico, Albuquerque, NM, 87131, USA} \date{\today, \currenttime} \begin{abstract} In this paper we study reusable magic states. These states are a special subset of the standard magic states. Once distilled, reusable magic states can be used repeatedly to apply some unitary U. Given this property, reusable magic states have the potential to greatly lower qubit and gate overhead in fault-tolerant quantum computation. While these states are promising, we provide a strong argument for their limited computational power. Specifically, we show that if reusable magic states can be used to apply non-Clifford unitaries, then we can exploit them to efficiently simulate poly-sized quantum circuits on a classical computer. \end{abstract} \maketitle \section{Introduction} Magic states were introduced by Bravyi and Kitaev \cite{Bravyi:2005a} as a way to implement logical gates that were not available as transversal gates in an error-correcting code. Their idea was as follows: first prepare many initial magic states, then use these to create an encoded magic state. This encoding procedure will introduce noise, and the encoded magic state will only be close to the desired state. We repeat this process to obtain many noisy encoded magic states. We then put these through many rounds of a distillation protocol. This eventually produces an encoded magic state of the desired fidelity. Finally, we use gate teleportation to apply the gate corresponding to the magic state to our encoded state. The procedure described above is the canonical way of completing a universal gate set. In fact, Eastin and Knill \cite{Eastin:2008a} proved that universal and transversal gate sets do not exist for any quantum code, thereby making magic state distillation not only a convenience, but a necessity. Note that there exist other possible ways of completing a universal gate set such as braiding of anyons \cite{Kitaev:1997a} or Dehn twists \cite{Koenig:2010a}, but these will not be discussed here. Once we have decided to use magic states, it is important to focus on lowering the immense overhead associated with them. This can be done in a variety of ways which will be discussed in turn below. \begin{figure} \caption{Unencoded magic protocol. (a) Distillation. (b) Gate Teleportation.} \label{fig:protocolunencoded} \end{figure} \begin{description} \item[(1)] We can choose a code with many transversal gates available. As mentioned above, it is not possible to have a universal and transversal gate set, but we can come close. For example, the 15-qubit Reed-Muller code \cite{MacWilliams:1977a, Steane:1999a} needs only a Hadamard gate to achieve universality, while the toric codes \cite{Kitaev:1997a} need both the $T$ gate and the Hadamard gate. As a variant of this approach, we could also use codes for which the remaining non-transversal gates have a low overhead magic state implementation. For example, the Hadamard gate magic state protocol likely requires much less overhead than the $T$ gate magic state protocol. \item[(2)] The encoding magic states typically introduces significant noise. If this is our primary concern, we can try to find procedures for distilling magic states that allow for very noisy magic states as input. There are known theoretical bounds for this approach. 
Magic state distillation protocols use only Clifford circuits, and therefore states within the stabilizer polyhedron can never be distilled to non-stabilizer states. This is simply because the stabilizer states are closed under Clifford operations. There are, however, other conditions which preclude non-stabilizer states from being distillable, such as positivity of the discrete Wigner function \cite{Veitch2012}. Bound states \cite{Campbell:2010a, Campbell:2009a, Campbell:2009b, Campbell:2010b} (states that cannot be distilled) have also been discovered. Nevertheless, certain states that lie on the border of the stabilizer polyhedron have been shown to be distillable \cite{Reichardt:2005a}. Finding more states such as these should be our primary goal if we expect the initial magic states to be very noisy. The schemes discussed in this paragraph are illustrated in Fig.~\ref{fig:protocolunencoded}. We input some noisy states $\ket{\tilde{\rho}}$ into the distillation circuit (a). The approach outlined above seeks to find circuits (a) that allow very noisy inputs while still eventually distilling $\ket{M}$. If the system is noisy but qubits are abundant, this would be the appropriate paradigm to study. \item[(3)] If it is not difficult to prepare noisy magic states that meet the criteria for distillability, then our primary concern is to reduce overhead. Two ideas for reducing overhead are discussed below. Note that (3a) and (3b) are not mutually exclusive. \begin{description} \item[(3a)] For each magic state that we apply, we must first distill a high fidelity version of that magic state. This involves using many lower fidelity magic states as input to a distillation circuit with the goal of producing a higher fidelity magic state as output. Typically this process must be repeated for many rounds, feeding the output of one distillation circuit into the input of the next. The methods for reducing overhead in the distillation circuit involve either reducing the number of inputs needed at a given round or reducing the total number of rounds. Graphically, this approach focuses on improving the circuit in Fig.~\ref{fig:protocolunencoded}(a) by reducing the number of inputs and/or rounds of distillation. Protocols to reduce the number of rounds in magic state distillation were discussed in \cite{Meier:2012a}. \item[(3b)] Another way of reducing the overhead is to reuse magic states. This is accomplished by modifying the gate teleportation procedure such that the magic state is available for reuse after the gate has been applied. We will refer to these magic states as reusable magic states. This approach would allow magic states $\ket{M}$ that are input into the box in Fig.~\ref{fig:protocolunencoded}(b) to be reused without any additional distillation. If a code could be found such that its universal gate set was comprised of only transversal gates and reusable magic states, the savings in overhead would be immense. Once these magic states have been distilled there would be no need for any additional overhead ever! We will show that in most cases these reusable magic states are highly unlikely to exist. \end{description} \end{description} \section{Reusable Magic States} A quantum computing architecture that uses magic states consists of an encoded system $\mathcal{S}$ and a supply of encoded magic states in an auxiliary system $\mathcal{M}$. Here $\mathcal{M}$ refers to a fixed-size, but otherwise arbitrary, system.
These systems should remain isolated until a magic state is needed in the computation. It is in this paradigm that we hope to implement gates with reusable magic states. Below we will represent our system simply as $\ket{\psi}$. We will represent the auxiliary system containing the magic state as $\ket{M}$. The argument that follows applies to both single qubit and encoded qubit systems. Also, these states can be mixed or pure. To make notation simpler, we will represent the systems as pure states on single-qubit systems. Formally we define a {\it reusable magic state} as a state $\ket{M}$ such that after application of a Clifford circuit on the joint system $\mathcal{S}\otimes \mathcal{M}$ some gate $U_{M}$ has been applied to the system $\mathcal{S}$ and the state $\ket{M}$ of the system $\mathcal{M}$ is unchanged. The state $\ket{M}$ can therefore be used again. We will use this definition to argue that reusable magic states for non-Clifford gates cannot exist unless quantum circuits can be simulated efficiently on a classical computer. When defining reusable magic states for Clifford gates, the above definition must be restricted to exclude Clifford gates of the type we are attempting to implement reusably. A reusable magic state for the $S$ gate ($\sqrt{Z}$ gate) was shown in \cite{Aliferis:2007b, Jones:2010a}. The $S$ gate is a Clifford gate; however, the circuit uses only $CNOT$ and $H$ to implement the $S$ gate. \begin{figure} \caption{Gate teleportation circuit for the reusable S gate. The $\ket{\pi/2}$ magic state can be reused, thereby reducing the overhead for the $S$ gate to ${\cal O}(1)$. This circuit can be used in codes where $H$ is transversal, but $S$ is not. This particular circuit is credited to Austin Fowler and modified from \cite{Aliferis:2007b, Jones:2010a}.} \label{fig:teleportation4} \end{figure} This gate can be modified to make a reusable $\sqrt{X}$ gate with the identity $\sqrt{X}=HSH$. Additionally, the combination of two reusable gates is itself a reusable gate. A reusable $\sqrt{Y}$ gate can be constructed by combining $\sqrt{X}$ and $S (\sqrt{Z})$ gates. It may seem that a reusable $H$ gate could be built through similar constructions; however, all attempts by the author to date require that the $H$ gate be present in the circuit. This is only possible when that gate is already available transversally, obviating the need for such a reusable magic state. \section{Non-Clifford Reusable Magic States} Most research on magic states focuses exclusively on non-Clifford magic states. In many technologies Clifford gates are considered to be easier to implement than general unitaries. They can be made universal with the addition of a single non-Clifford unitary. In other words, the gate set $\langle\{\mbox{Clifford}\},\; U\rangle$ provides a dense set of unitaries in $SU(2^n)$. The canonical choice for $U$ is the $T$ gate ($\sqrt{S}$ gate). \begin{figure} \caption{Gate teleportation circuit for the T gate. In this circuit the magic state $\ket{\pi/4}$ is used to apply the T gate to some state $\ket{\psi}$, where $\ket{\pi/4}=(\ket{0}+e^{i\pi/4}\ket{1})/\sqrt{2}$.} \label{fig:teleportation2} \end{figure} It is well-known that Clifford gates can be efficiently simulated on a classical computer in polynomial time ({\bf P}). In fact, the computational power of a Clifford gate computer is thought to be weaker than that of a polynomial-time classical computer. The power of a polynomial-sized universal quantum gate set is, by definition, the class {\bf BQP}, and hence it can solve any problem within this class.
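To make the gate-teleportation step concrete, the short \texttt{numpy} sketch below checks one standard realization of the circuit in Fig.~\ref{fig:teleportation2}: a $CNOT$ from the data qubit onto the $\ket{\pi/4}$ ancilla, a $Z$-basis measurement of the ancilla, and a classically controlled Clifford correction $S$ on the outcome-one branch. The exact circuit drawn in the figure may differ; this is only a minimal numerical sanity check that Clifford operations plus the (consumed) magic state reproduce $T\ket{\psi}$ up to a global phase, not a statement about the figure itself.
\begin{verbatim}
import numpy as np

# Clifford phase gate S, the non-Clifford T gate, and the magic state
# |pi/4> = (|0> + e^{i pi/4}|1>)/sqrt(2).
S = np.diag([1, 1j])
T = np.diag([1, np.exp(1j * np.pi / 4)])
magic = np.array([1, np.exp(1j * np.pi / 4)]) / np.sqrt(2)

# CNOT with qubit 0 (data) as control and qubit 1 (ancilla) as target,
# basis ordering |data, ancilla>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

def teleport_T(psi):
    """Apply T to psi using a CNOT, a Z measurement of the ancilla and
    a classically controlled S correction; the magic state is consumed."""
    state = CNOT @ np.kron(psi, magic)
    out0 = state[[0, 2]]          # ancilla measured as 0: no correction
    out1 = S @ state[[1, 3]]      # ancilla measured as 1: apply the Clifford S
    return out0 / np.linalg.norm(out0), out1 / np.linalg.norm(out1)

rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
target = T @ psi
for branch in teleport_T(psi):
    assert abs(abs(np.vdot(target, branch)) - 1) < 1e-12   # equal up to phase
print("both measurement branches yield T|psi> up to a global phase")
\end{verbatim}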
Using the Solovay-Kitaev algorithm \cite{Kitaev:2002a, Nielsen:2000a}, we can compile any gate from a universal gate set in time ${\cal O}(\log^{c}(1/\epsilon))$, where $c$ is some constant (typically between 2 and 3) and $\epsilon$ is the desired precision of the compiled gate. While some gate sets may be more efficient (in terms of overhead) than others, any universal quantum gate set can be used to efficiently solve problems in the class {\bf BQP}. In the derivation below we will present a `proof by contradiction'. We will assume the existence of a non-Clifford reusable magic state circuit, and then show that if such a circuit could be constructed, it would imply that {\bf BQP} $=$ {\bf P}. First, assume that the following circuit exists (see Fig.~\ref{fig:reusableMS}). \begin{figure} \caption{Reusable magic state circuit. $\mathcal{C}$ denotes some Clifford circuit and $\ket{M}$ is the reusable magic state. $\ket{M}$ may be comprised of many qubits and/or qudits provided the size is fixed. $M$ is any non-Clifford unitary. } \label{fig:reusableMS} \end{figure} Here $\mathcal{C}$ denotes some Clifford circuit and $\ket{M}$ is the reusable magic state. $\ket{M}$ may be comprised of many qubits and/or qudits as long as the size is fixed. $M$ is any non-Clifford unitary. Now, since $\langle\{\mbox{Clifford}\},\; M\rangle$ constitutes a universal gate set, we can write a general quantum circuit using only Clifford gates and $M$. \begin{figure} \caption{ A generic quantum circuit. For clarity we have separated the Clifford gates into blocks, with the non-Clifford gate $M$ occurring between blocks. A general computation in {\bf BQP} would have polynomially many (in the number of inputs) rounds of Clifford and non-Clifford gates.} \label{fig:example} \end{figure} For example, Fig.~\ref{fig:example} depicts a general quantum circuit with $\mathcal{C}_n$ denoting a round of arbitrary poly-sized Clifford gates, and $M$ a non-Clifford unitary. The circuit is simulatable by a {\bf BQP} quantum computer as long as the circuit size (total number of circuit elements) is polynomial in the number of inputs. However, since we assumed that circuits of the form shown in Fig.~\ref{fig:reusableMS} exist, we can execute the same computation shown in Fig.~\ref{fig:example} by replacing the $M$ gates with their magic state implementation. Note that this will only increase the number of inputs by a constant amount (the size of $\ket{M}$). \begin{figure} \caption{The circuit in Fig.~\ref{fig:example} with all non-Clifford gates $M$ replaced with the circuit in Fig.~\ref{fig:reusableMS}. Now, all non-Clifford circuitry has been moved to the beginning of the computation, and only constant overhead (the size of $\ket{M}$) has been introduced.} \label{fig:example2} \end{figure} We can continue this process and replace all gates $M$ by their magic state implementations. Now the entire body of the computation consists of only Clifford gates. We have only to prepare the state $\ket{M}$, which is unentangled with the rest of the system. This state could still be highly non-trivial; however, we can always represent its density operator as a linear combination of Pauli (stabilizer) operators. For example, the density operator of the single-qubit pure state $\ket{\psi} = \alpha\ket{0} + \beta\ket{1}$ can be written as $|\alpha|^{2}\ket{0}\bra{0} + \alpha\beta^{*}\ket{0}\bra{1} + \alpha^{*} \beta\ket{1}\bra{0} + |\beta|^{2}\ket{1}\bra{1} = a_{+}I + a_{-}Z + b_{+}X + i\,b_{-}Y$,
where $a_{\pm} = \frac{|\alpha|^{2} \pm |\beta|^{2}}{2}$ and $b_{\pm} = \frac{\alpha\beta^{*} \pm \alpha^{*}\beta}{2}$. We have fixed the size of $\ket{M}$ to be independent of circuit size; therefore it can always be assumed that the circuit is large enough that the dimension of $\ket{M}$ is at most logarithmic in the number of inputs to the circuit. We can then write $\ket{M}$ as a sum of Pauli (stabilizer) terms, and this expansion will generally have a number of terms that grows as ${\cal O}(2^{d})$, where $d$ is the dimension of $\ket{M}$. Again, this number is fixed and is independent of circuit size. This amounts to a constant overhead in our notation. Finally, since the entire body of the circuit consists of Clifford gates, which map Pauli operators to Pauli operators (up to sign) under conjugation, the number of terms in this expansion is fixed throughout the computation. We can simulate each of the terms in the expansion in time that grows polynomially with the number of inputs. We can thus simulate the entire circuit in time ${\cal O}(2^{d}\times POLY(n)) = {\cal O}(POLY(n))$, where $d$ is a constant and $n$ is the number of inputs. In conclusion, we have shown that if a circuit such as that shown in Fig.~\ref{fig:reusableMS} exists for a non-Clifford unitary $M$, then {\bf BQP} $=$ {\bf P}. In fact, since Clifford state computation is in the class {\bf ParityL} \cite{Aaronson:2004a} (which is thought to be weaker than {\bf P}), this would be of even greater consequence. In the highly unlikely event that such a circuit exists, it would not be useful since the entire endeavor of quantum computation would be obviated as a consequence. Some open questions still linger, such as: {\it Does a reusable magic state exist for the $H$ gate?} This circumvents the proof in this paper, since $H$ {\it is} a Clifford gate. Codes such as the 15-qubit Reed-Muller code can be made universal with the addition of such a gate; therefore finding such a state would drastically reduce the overhead for this and similar codes. As mentioned above, our definition of reusable magic states must be modified when the unitary we are trying to implement is a Clifford gate. Qudit magic state distillation has recently been introduced in \cite{Campbell:2012a, Anwar:2012a, Veitch2012}. Our result applies to qudit codes as well. Namely, non-Clifford qudit gates cannot be implemented using reusable magic states, unless qudit quantum computation is efficiently simulatable on a classical computer. The proof is briefly sketched here: the Clifford group for any prime number $p$ is a maximal finite subgroup of $SU(p^n)$. The addition of any non-Clifford unitary generates an infinite group which is dense in $SU(p^n)$. As in the qubit case, we need only a single non-Clifford gate to complete a universal gate set. These properties of the Clifford group are not well known and were only recently mentioned in the physics literature (see the appendix in \cite{Campbell:2012a} and references therein). Using this, the proof for the qudit case follows in exactly the same manner as the qubit case. It is, however, possible that qudit analogues of the Reed-Muller or other similar codes can complete a universal gate set with the addition of some qudit Clifford gate. Therefore, it may be fruitful to search for these codes and for reusable qudit magic states.
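Before closing, we illustrate the counting in the simulation argument above with a short \texttt{numpy} sketch (a minimal single-qubit illustration, not the general $d$-dimensional bookkeeping). It expands the density operator of the $\ket{\pi/4}$ state in the Pauli basis with the coefficients $a_{\pm}$, $b_{\pm}$ defined above, and then checks that conjugation by single-qubit Clifford gates maps every Pauli term to $\pm$ another Pauli term, so the length of the expansion never grows as the Clifford body of the circuit is applied.
\begin{verbatim}
import numpy as np

# Pauli matrices and two single-qubit Clifford generators.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.diag([1, 1j])

# 1) Pauli-basis expansion of the |pi/4> magic state with the
#    coefficients a_+, a_-, b_+, b_- defined in the text.
alpha, beta = 1 / np.sqrt(2), np.exp(1j * np.pi / 4) / np.sqrt(2)
rho = np.outer([alpha, beta], np.conj([alpha, beta]))
a_p = (abs(alpha) ** 2 + abs(beta) ** 2) / 2
a_m = (abs(alpha) ** 2 - abs(beta) ** 2) / 2
b_p = (alpha * np.conj(beta) + np.conj(alpha) * beta) / 2
b_m = (alpha * np.conj(beta) - np.conj(alpha) * beta) / 2
assert np.allclose(rho, a_p * I + a_m * Z + b_p * X + 1j * b_m * Y)

# 2) Clifford conjugation maps every Pauli to +/- another Pauli, so the
#    number of terms in the expansion is unchanged by each Clifford gate.
paulis = [I, X, Y, Z]
for C in (H, S, H @ S):
    for P in paulis:
        image = C @ P @ C.conj().T
        assert any(np.allclose(image, s * Q) for s in (1, -1) for Q in paulis)
print("Pauli expansion verified; term count invariant under Cliffords")
\end{verbatim}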
\begin{acknowledgments} The author would like to acknowledge many helpful conversations with Chris Cesare, Andrew Landahl, Rolando Somma, Adam Meier, Bryan Eastin, Jim Harrington, and Olivier Landon-Cardinal. Additionally, I would like to thank Earl Campbell for pointing out subtleties in the qudit Clifford group and providing ideas for extending this proof to the qudit case. JTA was supported in part by the National Science Foundation through Grant 0829944. JTA was supported in part by the Laboratory Directed Research and Development program at Sandia National Laboratories. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. \end{acknowledgments} \begin{thebibliography}{20} \expandafter\ifx\csname natexlab\endcsname\relax\def\natexlab#1{#1}\fi \expandafter\ifx\csname bibnamefont\endcsname\relax \def\bibnamefont#1{#1}\fi \expandafter\ifx\csname bibfnamefont\endcsname\relax \def\bibfnamefont#1{#1}\fi \expandafter\ifx\csname citenamefont\endcsname\relax \def\citenamefont#1{#1}\fi \expandafter\ifx\csname url\endcsname\relax \def\url#1{\texttt{#1}}\fi \expandafter\ifx\csname urlprefix\endcsname\relax\defURL {URL }\fi \providecommand{\bibinfo}[2]{#2} \providecommand{\arxiv}[2][]{\href{http://arxiv.org/pdf/#2}{\texttt{arXiv:#2}}} \providecommand{\doi}[2][]{\href{http://dx.doi.org/#2}{\texttt{doi:#2}}} \bibitem[{\citenamefont{Bravyi and Kitaev}(2005)}]{Bravyi:2005a} \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Bravyi}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Kitaev}}, \emph{\bibinfo{title}{{Universal quantum computation with ideal \{C\}lifford gates and noisy ancillas}}}, \bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{71}}, \bibinfo{pages}{22316} (\bibinfo{year}{2005}), \doi{10.1103/PhysRevA.71.022316}. \bibitem[{\citenamefont{Eastin and Knill}(2008)}]{Eastin:2008a} \bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{Eastin}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Knill}}, \emph{\bibinfo{title}{{Restrictions on transversal encoded quantum gate sets}}} (\bibinfo{year}{2008}), \doi{10.1103/PhysRevLett.102.110502}, \arxiv{0811.4262}. \bibitem[{\citenamefont{Kitaev}(2003)}]{Kitaev:1997a} \bibinfo{author}{\bibfnamefont{A.~Y.} \bibnamefont{Kitaev}}, \emph{\bibinfo{title}{{Fault-tolerant quantum computation by anyons}}}, \bibinfo{journal}{Ann. Phys.} \textbf{\bibinfo{volume}{303}}, \bibinfo{pages}{2} (\bibinfo{year}{2003}), \doi{10.1016/S0003-4916(02)00018-0}. \bibitem[{\citenamefont{Koenig et~al.}(2010)\citenamefont{Koenig, Kuperberg, and Reichardt}}]{Koenig:2010a} \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Koenig}}, \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Kuperberg}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{B.~W.} \bibnamefont{Reichardt}}, \emph{\bibinfo{title}{{Quantum computation with Turaev-Viro codes}}}, p.~\bibinfo{pages}{53} (\bibinfo{year}{2010}), \arxiv{1002.2816}, URL \url{http://arxiv.org/abs/1002.2816}. 
\begin{document} \preprintTitle{Time-space adaptive discontinuous Galerkin method for advection-diffusion equations with non-linear reaction mechanism} \preprintAuthor{B\"{u}lent Karas\"{o}zen \footnote{Department of Mathematics and Institute of Applied Mathematics, Middle East Technical University, 06800 Ankara, Turkey, \textit{email}: [email protected] }, Murat Uzunca \footnote{Department of Mathematics, Middle East Technical University, 06800 Ankara, Turkey, \textit{email}: [email protected] }} \preprintAbstract{\small In this work, we apply a time-space adaptive discontinuous Galerkin method, based on the elliptic reconstruction technique with a robust (in P\'{e}clet number) elliptic error estimator in space, to convection-dominated parabolic problems with non-linear reaction mechanisms. We derive a posteriori error estimators in the $L^{\infty}(L^2)+L^2(H^1)$-type norm using backward Euler in time and discontinuous Galerkin (symmetric interior penalty Galerkin (SIPG)) in space. Numerical results for advection-dominated reactive transport problems in homogeneous and heterogeneous media demonstrate the performance of the time-space adaptive algorithm.} \preprintKeywords{Non-linear diffusion-convection reaction, Discontinuous Galerkin, Time-space adaptivity, Elliptic reconstruction, A posteriori error estimates} \preprintDate{August 2014} \preprintNo{2014-6} \makePreprintCoverPage

\section{Introduction}
\label{intro}

Advection-diffusion-reaction (ADR) equations are widely used to model systems arising in chemical reaction problems. In the linear convection or advection dominated case, stabilized continuous finite element and discontinuous Galerkin (DG) methods are capable of handling the non-physical oscillations. In the non-linear stationary case, on the other hand, the non-linear reaction term produces sharp layers in addition to the spurious oscillations caused by the convection. An accurate and efficient numerical resolution of such layers is therefore a challenge, as the exact location of the layers is not known a priori. In the non-stationary case, the resolution of such layers is even more critical, since the nature of the sharp layers may vary as time progresses. Recently, several stabilization and shock/discontinuity capturing techniques have been developed for linear and non-linear steady-state problems \cite{bause12ash}.

In contrast to stabilized continuous Galerkin finite element methods, DG methods produce stable discretizations without the need for extra stabilization parameters. The DG method combines the best properties of the finite volume and continuous finite element methods: finite volume methods can only use lower-degree polynomials, while continuous finite element methods require higher regularity due to the continuity requirements. The DG method is particularly suitable for non-matching grids and hp (space and order) adaptivity, and for detecting sharp layers and singularities; it is easily adapted locally to nonconforming finite elements, requiring less regularity. Higher-order DG approximations can easily be constructed using hierarchical bases \cite{burger09pkd}, and the convergence rates are limited only by the consistency error, which makes DG suitable for complex fluid flows. DG methods are also robust with respect to variations of the physical parameters such as the diffusion constant and the permeability. The stability of the DG approximation is retained by adjusting the penalty parameter, which penalizes the jumps at the interfaces of the elements.
Various choices of the penalty parameter are suggested in the literature \cite{castillo12pdg,dobrev08psi,epshteyn07epp,slingerland14fls}. A unified analysis of interior penalty DG methods for elliptic PDEs is given in \cite{arnold02uad}. Other advantages of DG methods are the conservation of mass and fluxes and the ease of parallelization. Moreover, DG methods are well suited for adaptive strategies based on (residual-based) a posteriori error estimation. The disadvantages are the resulting larger and denser matrices and the ill-conditioning that grows with the degree of the discontinuous polynomials.

The main tool in this paper is adaptivity, applied to the DG (in space) discretized systems and driven by residual-based a posteriori error estimates. The aim of a posteriori estimation is to derive computable bounds on the error between the known numerical solution and the unknown exact solution. For elliptic diffusion-convection-reaction equations, adaptivity based on a posteriori error estimates is well understood, and various studies are available in the literature \cite{pietro14rra,houston02dhp,klieber06ast,schotzau09rae}. In this paper, we derive a posteriori error estimates for semi-linear ADR problems using well-known elliptic a posteriori estimates; in contrast to standard energy techniques, we then do not need to adapt the estimates case by case in order to compare the exact solution with the numerical solution directly. For this reason, we use the {\it elliptic reconstruction} technique of \cite{makridakis03era}, which allows us to utilize available a posteriori estimates derived for elliptic equations to control the main part of the spatial error. The idea of the elliptic reconstruction technique is to construct an auxiliary solution whose difference to the numerical solution can be estimated by a known (elliptic) a posteriori estimate, and such that the constructed auxiliary solution satisfies a variant of the given problem with a right-hand side that can be controlled in an optimal way. In this way, we obtain results of optimal order in both the $L^2(H^1)$- and $L^{\infty}(L^2)$-type norms, whereas the results obtained by standard energy methods are of optimal order only in $L^2(H^1)$-type norms and sub-optimal in $L^{\infty}(L^2)$-type norms. In \cite{cangiani13adg}, a posteriori error estimates in the $L^{\infty}(L^2)+L^2(H^1)$-type norm are derived for linear parabolic diffusion-convection-reaction equations using backward Euler in time and discontinuous Galerkin in space, utilizing the elliptic reconstruction technique. In this paper, we extend the study in \cite{cangiani13adg} by deriving and implementing a posteriori error estimates in the $L^{\infty}(L^2)+L^2(H^1)$-type norm, using backward Euler in time and discontinuous Galerkin (symmetric interior penalty (SIPG)) in space, for convection-dominated parabolic problems with non-linear reaction mechanisms. To derive the a posteriori error estimates, we adapt the robust (in P\'{e}clet number) a posteriori error estimator of \cite{schotzau09rae} for linear steady-state diffusion-convection-reaction equations to steady-state diffusion-convection equations with non-linear reaction mechanisms, utilizing the elliptic reconstruction technique \cite{makridakis03era}. Applications of adaptive discontinuous Galerkin methods and a posteriori error estimates to problems in geoscience have recently been reviewed in \cite{pietro14rra}.
Most applications of DG methods in geoscience concern reactive transport with advection \cite{bastian11udg,klieber06ast,sun05lna} and strong permeability contrasts, such as layered reservoirs \cite{slingerland14fls} or vanishing and varying diffusivity, which pose computational challenges \cite{proft09dgm}. The permeability in heterogeneous porous and fractured media varies over orders of magnitude in space, which results in a highly variable flow field where the local transport is dominated by advection or diffusion \cite{tambue10eia}. Accurate and efficient numerical solution of the ADR equations, in order to predict the macroscopic mixing and the anomalous transport of solutes and contaminants for a wide range of parameters such as permeability, P\'{e}clet number, flow velocity and reaction rates, is a challenging problem \cite{tambue10eia}. In order to resolve the complex flow patterns accurately, higher-order time stepping methods such as exponential time stepping have been used \cite{tambue10eia}. We show here that the same results can be obtained with the time-space adaptive algorithm, using the first-order backward Euler method in time and DG in space.

The rest of this paper is organized as follows. In the next section, we introduce the function spaces and related norms used in our analysis, and the model problem together with the assumptions ensuring a unique solution. In Section \ref{SIPG}, we give the semi-discrete system obtained by the symmetric interior penalty discontinuous Galerkin discretization in space, and the fully-discrete system obtained using backward Euler in time. The derivation of the a posteriori error estimators is given in Section~\ref{aposteriori}, first for steady-state problems (with proofs), then for parabolic problems utilizing the elliptic reconstruction technique. In Section~\ref{algorithm}, we state the adaptive algorithm based on the a posteriori error estimators derived in Section~\ref{aposteriori}. The solution of the fully-discrete system by Newton's method and the structure of the arising matrices and vectors are discussed in Section~\ref{newton}. After demonstrating the performance of the algorithm by numerical studies in Section~\ref{numeric}, the paper ends with conclusions.

\section{Model problem}
\label{secmodel}

Let $\Omega \subset\mathbb{R}^2$ be a bounded, open and convex domain with boundary $\partial\Omega$. For a Banach space $X$, we define the spaces $L^p(0,T;X)$ of functions $v:[0,T]\mapsto X$ satisfying
\begin{align}
\| v\|_{L^p(0,T;X)} &= \left( \int_0^T \| v(t)\|_X^p dt \right)^{1/p}<\infty \; , \qquad \text{for } 1\leq p < +\infty , \nonumber \\
\| v\|_{L^{\infty}(0,T;X)} &= \underset{0\leq t\leq T}{\text{esssup}} \|v(t)\|_X <\infty \; , \qquad \text{for } p = +\infty . \nonumber
\end{align}
Also define the space
$$
H^1(0,T;X)=\{ v\in L^2(0,T;X)| \; v_t\in L^2(0,T;X)\}.
$$
We denote by $C(0,T;X)$ and $C^{0,1}(0,T;X)$ the spaces of continuous and Lipschitz-continuous functions $v:[0,T]\mapsto X$, respectively, equipped with the norms
\begin{subequations}
\begin{align}
\| v\|_{C(0,T;X)} &= \underset{0\leq t\leq T}{\text{max}} \|v(t)\|_X <\infty , \nonumber \\
\| v\|_{C^{0,1}(0,T;X)} &= \text{max} \left\{ \| v\|_{C(0,T;X)}, \| v_t\|_{C(0,T;X)}\right\} <\infty . \nonumber
\end{align}
\end{subequations}
We consider the system of semi-linear diffusion-convection-reaction equations
\begin{equation}\label{org}
\frac{\partial u_i}{\partial t}-\nabla\cdot (\epsilon_i\nabla u_i) + \vec{\beta}_i\cdot\nabla u_i + r_i(\vec{u}) = f_i \;\; , \quad i=1,2,\ldots , J
\end{equation}
in $\Omega\times (0,T]$ for the vector of unknowns $\vec{u}=(u_1,u_2,\ldots , u_J)^T$, with appropriate boundary and initial conditions. We assume that the source functions satisfy $f_i\in C(0,T;L^2(\Omega))$ and that the velocity fields $\vec{\beta }_i\in C\left(0,T;W^{1,\infty}(\Omega)\right)^2$ are either given or computed. For the flow in heterogeneous media in Section~\ref{ex4}, the symmetric dispersion tensors $\epsilon_i$ are taken of the form
$$
\epsilon_i=
\begin{bmatrix}
D_i^1 & 0 \\
0 & D_i^2
\end{bmatrix}
$$
with $0<D_i^1,D_i^2\ll 1$. Moreover, we assume that the non-linear reaction terms are bounded, locally Lipschitz continuous and monotone, i.e.\ they satisfy, for any $s, s_1, s_2\ge 0$, the following conditions \cite{uzunca14adg}
\begin{subequations}
\begin{align}
|r_i(s)| &\leq C_s , \quad C_s>0 \label{nonl1}\\
\| r_i(s_1)-r_i(s_2)\|_{L^2(\Omega)} &\leq L\| s_1-s_2\|_{L^2(\Omega)} , \quad L>0\label{nonl2} \\
r_i\in C^1(\mathbb{R}_0^+), \quad r_i(0) =0, &\quad r_i'(s)\ge 0. \label{nonl3}
\end{align}
\end{subequations}
We also assume that there are $\kappa , \kappa^*\geq 0$ satisfying, for $i=1,2,\ldots ,J$,
\begin{equation} \label{ass}
-\frac{1}{2}\nabla\cdot\vec{\beta }_i (x) \geq \kappa, \qquad \| -\nabla\cdot\vec{\beta }_i\|_{C(0,T;L^{\infty}(\Omega))}\leq \kappa^*\kappa.
\end{equation}
The first inequality in (\ref{ass}) ensures the well-posedness of the problem, while the latter is used in the a posteriori error analysis. The weak formulation of the system (\ref{org}) reads as: for any $v\in H_0^1(\Omega )$, find $u_i\in L^2(0,T;H_0^1(\Omega))\cap H^1(0,T;L^2(\Omega))$, $i=1,2,\ldots , J$, such that for all $t\in (0,T]$
\begin{equation} \label{weak}
\int_{\Omega}\frac{\partial u_i}{\partial t}vdx + a(t;u_i,v)+b_i(t;\vec{u},v)=l_i(v) ,
\end{equation}
\begin{subequations}
\begin{align}
a(t;u_i, v)=& \int_{\Omega}(\epsilon_i\nabla u_i\cdot\nabla v+\vec{\beta }_i\cdot\nabla u_i \, v)dx , \label{weaka} \\
b_i(t;\vec{u}, v) =& \int_{\Omega}r_i(\vec{u})v dx , \\
l_i( v)=& \int_{\Omega}f_iv dx ,
\end{align}
\end{subequations}
which has a unique solution $\{ u_i\}_{i=1}^J$ in the space $C(0,T;L^2(\Omega))$ under the given regularity assumptions and the conditions \eqref{nonl1}-\eqref{nonl3}. In the sequel, for simplicity, we consider a single equation of the system \eqref{org} ($J=1$) and drop the subscript, both for the construction of the discontinuous Galerkin discretization in the next section and thereafter; we also restrict ourselves to a homogeneous dispersion tensor, i.e.\ a single diffusivity constant $0<\epsilon\ll 1$. We furthermore impose homogeneous Dirichlet boundary conditions to simplify the notation. Heterogeneous dispersion tensors and other types of boundary conditions can be treated in a standard way.
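As a worked illustration, added here for concreteness, of the assumptions \eqref{nonl1}-\eqref{nonl3}, consider the Monod (Langmuir isotherm) type reaction term $r(s)=s/(1+s)$ used later in Section~\ref{ex4}. For $s,s_1,s_2\geq 0$ we have
\begin{align*}
|r(s)| &= \frac{s}{1+s}\leq 1, \qquad r(0)=0, \qquad r'(s)=\frac{1}{(1+s)^{2}}\in (0,1],\\
|r(s_1)-r(s_2)| &\leq \Big(\sup_{s\geq 0}|r'(s)|\Big)\,|s_1-s_2| = |s_1-s_2|,
\end{align*}
so that \eqref{nonl1}-\eqref{nonl3} hold with $C_s=1$ and $L=1$ (the pointwise Lipschitz bound implies \eqref{nonl2}). Polynomial non-linearities such as $r(u)=u^4$ in Section~\ref{ex1} satisfy \eqref{nonl1}-\eqref{nonl2} only locally, i.e.\ on bounded subsets of $\mathbb{R}_0^+$, while $r(0)=0$ and $r'(s)=4s^3\geq 0$ hold globally.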
\section{Discontinuous Galerkin discretization}
\label{SIPG}

For the space discretization of a single equation of (\ref{org}), we use the symmetric interior penalty discontinuous Galerkin (SIPG) method \cite{arnold02uad,riviere08dgm} with upwinding for the convection part \cite{ayuso09dgm,houston02dhp}. Let $\{\xi_h\}$ be a family of shape-regular meshes with elements (triangles) $K_i\in\xi_h$ satisfying $\overline{\Omega}=\cup \overline{K}$ and $K_i\cap K_j=\emptyset$ for $K_i , K_j \in\xi_h$, $i\neq j$. Let us denote by $\Gamma^0$ and $\Gamma^{\partial}$ the sets of interior and Dirichlet boundary edges, respectively, so that $\Gamma =\Gamma^0\cup\Gamma^{\partial}$ forms the skeleton of the mesh. For any $K\in\xi_h$, let $\mathbb{P}_k(K)$ be the set of all polynomials of degree at most $k$ on $K$. Then, we set the finite dimensional space
$$
V_h(\xi_h)=\left\{ v\in L^2(\Omega ) : v|_{K}\in\mathbb{P}_k(K) ,\; \forall K\in \xi_h \right\}\not\subset H_0^1(\Omega).
$$
In contrast to standard (continuous) finite elements, the functions in $V_h(\xi_h)$ are discontinuous along the inter-element boundaries, so that along an interior edge there are two different traces coming from the two adjacent elements sharing that edge. In the light of this fact, let us first introduce some notation before giving the SIPG formulation. Let $K_i$, $K_j\in\xi_h$ ($i<j$) be two adjacent elements sharing an interior edge $e=K_i\cap K_j\subset \Gamma^0$ (see Fig.~\ref{jump}). We denote the trace of a scalar function $v$ from inside $K_i$ by $v_{i}$ and from inside $K_j$ by $v_{j}$, and we set the jump and average values of $v$ on the edge $e$ as
$$
[v]= v_{i}\vec{n}_e- v_{j}\vec{n}_e , \quad \{ v\}=\frac{1}{2}(v_{i}+ v_{j}).
$$
Here $\vec{n}_e$ denotes the unit normal to the edge $e$ oriented from $K_i$ to $K_j$. Similarly, we set the jump and average values of a vector-valued function $\vec{q}$ on $e$ as
$$
[\vec{q}]= \vec{q}_{i}\cdot \vec{n}_e- \vec{q}_{j}\cdot \vec{n}_e , \quad \{ \vec{q}\}=\frac{1}{2}(\vec{q}_{i}+ \vec{q}_{j}).
$$
Observe that $[v]$ is a vector for a scalar function $v$, while $[\vec{q}]$ is a scalar for a vector-valued function $\vec{q}$. On the other hand, along any boundary edge $e=K_i\cap \partial\Omega$, we set
$$
[v]= v_{i}\vec{n} , \quad \{ v\}=v_{i}, \quad [\vec{q}]=\vec{q}_{i}\cdot \vec{n}, \quad \{ \vec{q}\}=\vec{q}_{i} ,
$$
where $\vec{n}$ is the unit outward normal to the boundary at $e$. We define the inflow and outflow parts of the boundary at a time $t$ by
$$
\Gamma_t^-=\{ x\in \partial\Omega | \; \vec{\beta}(x,t)\cdot \vec{n}(x)<0\}\; , \qquad \Gamma_t^+=\partial\Omega\setminus\Gamma_t^- ,
$$
and, at a time $t$, the inflow and outflow parts of an element $K$ by
$$
\partial K_t^-=\{ x\in \partial K | \; \vec{\beta}(x,t)\cdot \vec{n}_K(x)<0\}\; , \qquad \partial K_t^+=\partial K\setminus\partial K_t^- ,
$$
where $\vec{n}_K(x)$ denotes the outward unit normal to the boundary of the element $K$ at $x$.
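For illustration, the jump, average and inflow conventions above translate directly into elementary computations. The following MATLAB-style fragment is only a sketch with hypothetical variable names (\texttt{v\_i}, \texttt{v\_j}, \texttt{n\_e}, \texttt{n\_K}), not part of our implementation; it evaluates $[v]$ and $\{v\}$ on an interior edge and tests whether a piece of $\partial K$ belongs to $\partial K_t^-$:
\begin{verbatim}
% Sketch (hypothetical variable names): traces of a scalar DG function on an
% interior edge e shared by K_i and K_j, with unit normal n_e oriented from
% K_i to K_j.
v_i = 1.0;  v_j = 0.3;          % traces from inside K_i and from inside K_j
n_e = [1; 0];                   % unit normal oriented from K_i to K_j

jump_v = (v_i - v_j)*n_e;       % [v] = v_i*n_e - v_j*n_e (a vector)
avg_v  = 0.5*(v_i + v_j);       % {v} = (v_i + v_j)/2

% Inflow test for an element edge with outward unit normal n_K:
beta = [2; 3];                  % constant convection field, e.g. (2,3)^T as in the first example
n_K  = [0; -1];                 % outward unit normal of K on this edge
is_inflow = dot(beta, n_K) < 0; % true if this edge piece belongs to dK_t^-
\end{verbatim}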
\begin{figure} \caption{Two adjacent elements sharing an edge (left); an element near the domain boundary (right)} \label{jump} \end{figure}
With the above definitions, the semi-discrete problem reads as: for $t=0$, set $u_h(0)\in V_h(\xi_h)$ as the (orthogonal $L^2$-) projection of $u_0$ onto $V_h(\xi_h)$; for each $t\in (0,T]$ and for all $v_h\in V_h(\xi_h)$, find $u_h\in C^{0,1}(0,T;V_h(\xi_h))$ such that
\begin{equation} \label{dg}
\int_{\Omega}\frac{\partial u_{h}}{\partial t}v_{h}dx + a_{h}(t;u_{h},v_{h}) + K_{h}(u_{h},v_{h})+b_{h}(t;u_{h}, v_{h})=l_{h}(v_{h}) ,
\end{equation}
\begin{subequations}
\begin{align}
a_{h}(t;u_{h}, v_{h})=& \sum \limits_{K \in {\xi}_{h}} \int_{K} \epsilon \nabla u_{h}\cdot\nabla v_{h} dx + \sum \limits_{K \in {\xi}_{h}} \int_{K} \vec{\beta } \cdot \nabla u_{h} v_{h} dx \label{dga} \\
&+ \sum \limits_{K \in {\xi}_{h}}\int_{\partial K_t^-\setminus\partial\Omega } \vec{\beta }\cdot \vec{n}_K (u_{h}^{out}-u_{h}) v_{h} ds - \sum \limits_{K \in {\xi}_{h}} \int_{\partial K_t^-\cap \Gamma_t^{-}} \vec{\beta }\cdot \vec{n}_K u_{h} v_{h} ds \nonumber \\
&+ \sum \limits_{ e \in \Gamma}\frac{\sigma \epsilon}{h_{e}} \int_{e} [u_{h}]\cdot[v_{h}] ds, \nonumber \\
K_{h}(u_{h},v_{h}) =& - \sum \limits_{ e \in \Gamma} \int_{e} ( \{\epsilon \nabla v_{h} \}\cdot[u_{h}] + \{\epsilon \nabla u_{h} \}\cdot [v_{h}] )ds , \label{kh} \\
b_{h}(t;u_{h}, v_{h}) =& \sum \limits_{K \in {\xi}_{h}} \int_{K} r(u_{h}) v_{h} dx, \\
l_{h}( v_{h})=& \sum \limits_{K \in {\xi}_{h}} \int_{K} f_h v_{h} dx ,
\end{align}
\end{subequations}
where $u_{h}^{out}$ denotes the trace of $u_h$ on an edge taken from outside the element $K$. The parameter $\sigma\in\mathbb{R}_0^+$ is called the penalty parameter; it should be sufficiently large, but independent of the mesh size $h$ and of the diffusion coefficient $\epsilon$ \cite{riviere08dgm}[Sec. 2.7.1]. We choose the penalty parameter $\sigma$ for the SIPG method depending on the polynomial degree $k$ as $\sigma=3k(k+1)$ on interior edges and $\sigma=6k(k+1)$ on boundary edges. Upon integration by parts of the convective term, the bilinear form (\ref{dga}) becomes
\begin{subequations}
\begin{align}
a_{h}(t;u_{h}, v_{h})=& \sum \limits_{K \in {\xi}_{h}} \int_{K} \epsilon \nabla u_{h}\cdot\nabla v_{h} dx - \sum \limits_{K \in {\xi}_{h}} \int_{K} ( \vec{\beta } u_h\cdot\nabla v_h + \nabla\cdot\vec{\beta } u_hv_h) dx \label{bilbyp} \\
&+ \sum \limits_{K \in {\xi}_{h}}\int_{\partial K_t^+\setminus\partial\Omega } \vec{\beta }\cdot \vec{n}_K u_{h}(v_h-v_{h}^{out}) ds + \sum \limits_{K \in {\xi}_{h}} \int_{\partial K_t^+\cap \Gamma_t^{+}} \vec{\beta }\cdot \vec{n}_K u_{h} v_{h} ds \nonumber \\
&+ \sum \limits_{ e \in \Gamma}\frac{\sigma \epsilon}{h_{e}} \int_{e} [u_{h}]\cdot[v_{h}] ds . \nonumber
\end{align}
\end{subequations}
Note that the bilinear form $a_{h}(t;u,v)$ is well-defined for functions $u,v\in H_0^1(\Omega )$ and is then equal to
$$
a_{h}(t;u, v)=\int_{\Omega}(\epsilon\nabla u\cdot\nabla v + \vec{\beta}\cdot\nabla u \, v)dx .
$$
Thus, the weak formulation (\ref{weak}) can be rewritten, for any $t\in (0,T]$, as
\begin{equation}\label{eqweak}
\int_{\Omega}\frac{\partial u}{\partial t}vdx + a_h(t;u,v)+b(t;u,v)=l(v) \; , \qquad \forall v\in H_0^1(\Omega ).
\end{equation}
For the fully discrete system, consider a subdivision of $[0,T]$ into $n$ time intervals $I_k=(t^{k-1},t^{k}]$ of length $\tau_k$, $k=1,2,\ldots , n$. Set $t^0=0$ and, for $k\geq 1$, $t^k=\tau_1+\tau_2+\cdots + \tau_k$.
Denote by $\xi_h^0$ an initial mesh and by $\xi_h^k$ the mesh associated with the $k^{th}$ time step for $k>0$, obtained from $\xi_h^{k-1}$ by local refinement/coarsening. Moreover, we assign the finite element space $V_h^k=V_h(\xi_h^k)$ to each mesh $\xi_h^k$. Then the fully discrete problem, using backward Euler in time, reads as: for $t=0$, set $u_h^0\in V_h^0$ as the (orthogonal $L^2$-) projection of $u_0$ onto $V_h^0$; for $k=1,2,\ldots , n$, find $u_h^k\in V_h^{k}$ such that for all $v_h^k\in V_h^{k}$
\begin{equation}\label{fullydisc}
\int_{\Omega}\frac{u_{h}^{k}-u_{h}^{k-1}}{\tau_k}v_{h}^k dx + a_{h}(t^k;u_{h}^k,v_{h}^k)+ K_{h}(u_{h}^k,v_{h}^k)+b_{h}(t^k;u_{h}^k, v_{h}^k)=\int_{\Omega}f^{k}v_{h}^k dx .
\end{equation}

\section{A posteriori error analysis}
\label{aposteriori}

The problems considered in this paper are convection/reaction-dominated non-stationary models, which often produce internal/boundary layers where the solutions have steep gradients. An efficient numerical resolution of such layers is therefore needed. The most popular technique for solving convection/reaction-dominated problems is adaptivity, which requires an estimator to locate the regions where the error is too large. In this work, we construct residual-based, robust (in P\'{e}clet number) a posteriori error estimators for non-stationary convection-dominated diffusion-convection equations with non-linear reaction mechanisms. To this end, we extend the a posteriori error estimators for linear non-stationary problems constructed in \cite{cangiani13adg}, which combine the a posteriori error estimators for linear stationary models constructed in \cite{schotzau09rae} with the elliptic reconstruction technique \cite{makridakis03era} connecting the stationary and non-stationary errors. As in \cite{cangiani13adg}, we first construct and prove the a posteriori error bounds for non-linear stationary models in Section~\ref{stationary}. Then, in Section~\ref{semidiscerror}, we give the a posteriori error bounds for the semi-discrete system of the non-stationary problems with non-linear reaction term, and finally, in Section~\ref{fullydiscerror}, for the fully-discrete system. The main contribution to the error analysis lies in the construction of the error bounds for non-linear stationary models in Section~\ref{stationary}; the remaining bounds follow the constructions in \cite{cangiani13adg}.

\subsection{A posteriori error bounds for non-linear stationary system}
\label{stationary}

We consider first the stationary problem, i.e.
for a given $t\in (0,T]$, let $u^s \in H_0^1(\Omega )$ be the unique solution to the weak problem
\begin{equation} \label{stweak}
a(t;u^s,v)+b(t;u^s,v)=l(v) \; , \qquad \forall v\in H_0^1(\Omega ),
\end{equation}
and let $u_h^s\in V_h(\xi_h)$ be the unique solution to the discrete problem
\begin{equation} \label{stdg}
a_h(t;u_h^s,v_h)+K_{h}(u_{h}^s,v_{h})+b_h(t;u_h^s,v_h)=l(v_h) \; , \qquad \forall v_h\in V_h(\xi_h).
\end{equation}
In order to measure the spatial error, we use the energy norm
$$
||| v |||^2=\sum \limits_{K \in {\xi}_{h}}(\| \epsilon^{1/2}\nabla v\|_{L^2(K)}^2 +\kappa \| v\|_{L^2(K)}^2) + \sum \limits_{e \in \Gamma}\frac{\epsilon\sigma}{h_{e}}\| [v]\|_{L^2(e)}^2,
$$
and the semi-norm
\begin{equation} \label{semin}
|v|_C^2=|\vec{\beta }v|_*^2+\sum\limits_{e \in \Gamma}(\kappa h_e+ \frac{h_e}{\epsilon})\| [v]\|_{L^2(e)}^2,
\end{equation}
where
$$
|u|_*=\mathop{\text{sup}}_{w\in H_0^1(\Omega )\setminus \{ 0\}}\frac{\int_{\Omega}u\cdot \nabla w dx}{||| w|||}.
$$
For each element $K \in {\xi}_{h}$, we define the local error indicator $\eta_K^2$ as
\begin{eqnarray} \label{res}
\eta_K^2= \eta_{R_K}^2 + \eta_{E_K}^2 + \eta_{J_K}^2,
\end{eqnarray}
where
\begin{eqnarray}
\eta_{R_K}^2&=&\rho_K^2\| f_h +\epsilon\Delta u_h^s-\vec{\beta }_h\cdot\nabla u_h^s-r(u_h^s)\|_{L^2(K)}^2, \nonumber \\
\eta_{E_K}^2 &=& \frac{1}{2} \sum \limits_{e \in \partial K\setminus\Gamma^{\partial} }\epsilon^{-\frac{1}{2}}\rho_e\| [\epsilon\nabla u_h^s]\|_{L^2(e)}^2, \nonumber \\
\eta_{J_K}^2 &=& \frac{1}{2}\sum \limits_{e \in \partial K\setminus\Gamma^{\partial} }\left( \frac{\epsilon\sigma}{h_e}+\kappa h_e+\frac{h_e}{\epsilon}\right) \| [u_h^s]\|_{L^2(e)}^2 +\sum \limits_{e \in \partial K\cap\Gamma^{\partial}}\left( \frac{\epsilon\sigma}{h_e}+\kappa h_e+\frac{h_e}{\epsilon}\right) \| u_h^s\|_{L^2(e)}^2, \nonumber \\
\end{eqnarray}
with the weights $\rho_K$ and $\rho_e$ given, on an element $K$, by
$$
\rho_{K}=\min\{h_{K}\epsilon^{-\frac{1}{2}}, \kappa^{-\frac{1}{2}}\}, \; \rho_{e}=\min\{h_{e}\epsilon^{-\frac{1}{2}}, \kappa^{-\frac{1}{2}}\}
$$
for $\kappa \neq 0$. When $\kappa =0$, we take $\rho_{K}=h_{K}\epsilon^{-\frac{1}{2}}$ and $\rho_{e}=h_{e}\epsilon^{-\frac{1}{2}}$. Then, our a posteriori error indicator is given by
\begin{equation} \label{errind}
\eta=\left( \sum \limits_{K\in{\xi}_{h}}\eta_K^2\right)^{1/2}.
\end{equation}
We also introduce the local data approximation terms
$$
\Theta_K^2 =\rho_K^2(\| f-f_h\|_{L^2(K)}^2 + \| (\vec{\beta } -\vec{\beta }_h)\cdot\nabla u_h^s\|_{L^2(K)}^2)
$$
and the data approximation error
\begin{equation} \label{data}
\Theta=\left( \sum \limits_{K\in{\xi}_{h}}\Theta_K^2\right)^{1/2}.
\end{equation}
Then, we have the a posteriori error bounds
\begin{eqnarray}
\| u^s-u_h^s\|_{DG} \lesssim \eta + \Theta \qquad &(\text{reliability}), \label{rel} \\[0.2cm]
\eta \lesssim \| u^s-u_h^s\|_{DG} + \Theta \qquad &(\text{efficiency}), \label{eff}
\end{eqnarray}
with
$$
\| v\|_{DG}= ||| v||| + | v|_C .
$$
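Before turning to the proofs, we note for practical purposes how an indicator of this type is evaluated. The following MATLAB-style fragment is a minimal sketch under the assumption that the elementwise residual norms and edge contributions have already been computed by numerical quadrature; the variable names (\texttt{hK}, \texttt{resK}, \texttt{etaE}, \texttt{etaJ}) are hypothetical placeholders:
\begin{verbatim}
% Sketch: weights rho_K and the global estimator eta from precomputed local data.
% resK(i) = || f_h + eps*Lap(u_h) - beta_h.grad(u_h) - r(u_h) ||_{L2(K_i)},
% etaE(i), etaJ(i) = edge contributions eta_{E_K}, eta_{J_K} of element K_i,
% hK(i)   = diameter of element K_i.
epsilon = 1e-6;  kappa = 0;        % kappa = 0, e.g., for a constant velocity field
if kappa > 0
    rhoK = min(hK/sqrt(epsilon), 1/sqrt(kappa));
else
    rhoK = hK/sqrt(epsilon);       % rho_K = h_K * eps^(-1/2) when kappa = 0
end
etaK2 = (rhoK.*resK).^2 + etaE.^2 + etaJ.^2;   % local indicators eta_K^2
eta   = sqrt(sum(etaK2));                      % global estimator eta
\end{verbatim}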
\subsubsection{Proof of a posteriori error bounds}
The proofs of the spatial error bounds are analogous to those in \cite{schotzau09rae} for linear problems; therefore, only the arguments involving the non-linear reaction term are stated explicitly. In the following, we use the symbols $\lesssim$ and $\gtrsim$ to denote bounds that are valid up to positive constants independent of the local mesh size $h$, the diffusion coefficient $\epsilon$ and the penalty parameter $\sigma$.

The spatial error $\| u^s-u_h^s\|_{DG}$ is not well-defined, since $u^s\in H_0^1(\Omega)$ and $u_h^s\in V_h(\xi_h)\nsubseteq H_0^1(\Omega)$. Therefore, we split the stationary SIPG solution $u_h^s$ as
$$
u_h^s=u_h^c+u_h^r
$$
with $u_h^c\in H_0^1(\Omega)\cap V_h(\xi_h)$ being the conforming part of the solution and $u_h^r\in V_h(\xi_h)$ the remainder. In this way, we have $u_h^s\in H_0^1(\Omega)+V_h(\xi_h)$, and from the triangle inequality
$$
\| u^s-u_h^s\|_{DG}\leq \| u^s-u_h^c\|_{DG}+\| u_h^r\|_{DG} .
$$
Both terms on the right-hand side are now well-defined, and our aim is to bound them. Next, we introduce the following auxiliary forms:
\begin{subequations}
\begin{align}
D_{h}(u, v)=& \sum \limits_{K \in {\xi}_{h}} \int_{K} \left( \epsilon \nabla u\cdot\nabla v -\nabla\cdot \vec{\beta }uv\right) dx , \label{d} \\
O_h(u,v)=& -\sum \limits_{K \in {\xi}_{h}} \int_{K}\vec{\beta }u\cdot\nabla v dx +\sum \limits_{K \in {\xi}_{h}} \int_{\partial K^+\cap\Gamma^+}\vec{\beta }\cdot\vec{n}_K uv ds \nonumber \\
& +\sum \limits_{K \in {\xi}_{h}} \int_{\partial K^+\setminus\partial\Omega }\vec{\beta }\cdot\vec{n}_Ku(v-v^{out}) ds , \label{o} \\
J_h(u,v) =& \sum \limits_{ e \in \Gamma }\frac{\sigma \epsilon}{h_{e}} \int_{e} [u]\cdot [v] ds. \label{j}
\end{align}
\end{subequations}
We note that, at a fixed $t\in (0,T]$, the SIPG bilinear form (\ref{bilbyp}) satisfies
$$
a_h(t;u,v)=D_h(u,v)+O_h(u,v)+J_h(u,v)
$$
and is well-defined on $H_0^1(\Omega )+V_h(\xi_h)$. Using the first inequality in (\ref{ass}), we easily obtain the coercivity bound
$$
{a}_h(t;u,u) \geq ||| u|||^2 \qquad \text{for any } u\in H_0^1(\Omega).
$$
Moreover, the auxiliary forms are continuous \cite{schotzau09rae}[Lemma 4.2]:
\begin{eqnarray}
|D_h(u,v)| &\lesssim& |||u|||\; |||v||| \; , \qquad u,v\in H_0^1(\Omega )+V_h(\xi_h) ,\label{cd} \\
|O_h(u,v)| &\lesssim& |\vec{\beta }u|_* \;|||v||| \; , \qquad u\in H_0^1(\Omega )+V_h(\xi_h), v\in H_0^1(\Omega) ,\label{co} \\
|J_h(u,v)| &\lesssim& |||u|||\; |||v||| \; , \qquad u,v\in H_0^1(\Omega )+V_h(\xi_h) \label{cj} ,
\end{eqnarray}
and, for $u\in V_h(\xi_h)$, $v\in V_h(\xi_h)\cap H_0^1(\Omega )$ \cite{schotzau09rae}[Lemma 4.3],
\begin{eqnarray}
|K_h(u,v)| &\lesssim& \sigma^{-1/2}\left( \sum_{e\in \Gamma_0\cup\Gamma^D} \frac{\sigma\epsilon}{h_e}\| [u]\|_{L^2(e)}^2\right)^{1/2} |||v|||.\label{ck}
\end{eqnarray}
Using the boundedness assumption (\ref{nonl1}), we also have for the non-linear form $b_h(t;u,v)$, at a fixed time $t$,
\begin{eqnarray}
|b_h(t;u,v)| &\lesssim& |||v||| \; , \qquad u,v\in H_0^1(\Omega )+V_h(\xi_h). \label{cb}
\end{eqnarray}
Now, we collect some auxiliary results and conditions which are used in the proofs.
\begin{itemize}
\item {\bf Inf-sup condition:} \cite{schotzau09rae}[Lemma 4.4] For any nonzero $u\in H_0^1(\Omega )$, we have
\begin{eqnarray}
|||u|||+|\vec{\beta }u|_* \lesssim \underset{v\in H_0^1(\Omega ) \setminus \{ 0\}}{\text{sup}}\frac{{a}_h(t;u,v)}{|||v|||}. \label{infsup}
\end{eqnarray}
\item {\bf Approximation operator:} Let $V_h^c=V_h(\xi_h)\cap H_0^1(\Omega )$ be the conforming subspace of $V_h(\xi_h)$. There exists an approximation operator $A_h:\; V_h(\xi_h) \mapsto V_h^c$ satisfying, for any $u\in V_h(\xi_h)$,
\begin{eqnarray}
\sum_{K\in\xi_h}\| u-A_hu\|_{L^2(K)}^2 \lesssim \sum_{e\in \Gamma_0\cup\Gamma^D}\int_e h_e|[u]|^2ds, \label{app} \\
\sum_{K\in\xi_h}\| \nabla (u-A_hu)\|_{L^2(K)}^2 \lesssim \sum_{e\in \Gamma_0\cup\Gamma^D}\int_e \frac{1}{h_e}|[u]|^2ds.
\label{dapp}
\end{eqnarray}
\item {\bf Interpolation operator:} For any $u\in H_0^1(\Omega )$, there exists an interpolation operator
$$
I_h:\; H_0^1(\Omega ) \mapsto \{ w\in C(\overline{\Omega}): \; w|_K\in\mathbb{P}_1(K), \forall K\in\xi_h , \; w=0 \; \text{on }\partial\Omega \}
$$
that satisfies
\begin{eqnarray}
|||I_h u||| \lesssim |||u||| , \label{int1} \\
\left( \sum_{K\in\xi_h}\rho_K^{-2}\| u-I_hu\|_{L^2(K)}^2\right)^{1/2} \lesssim |||u|||, \label{int2} \\
\left( \sum_{e\in\Gamma_0\cup\Gamma^D}\epsilon^{1/2}\rho_e^{-1}\| u-I_hu\|_{L^2(e)}^2\right)^{1/2} \lesssim |||u|||. \label{int3}
\end{eqnarray}
Now, consider the splitting of the stationary solution $u_h^s=u_h^c+u_h^r$ with $u_h^c=A_h u_h^s\in H_0^1(\Omega )\cap V_h(\xi_h)$, where $A_h$ is the approximation operator above, and $u_h^r=u_h^s-u_h^c\in V_h(\xi_h)$.\\
\item {\bf Bound for the remainder term:} \cite{schotzau09rae}[Lemma 4.7] For the remainder term, we have the bound
\begin{eqnarray} \label{rem}
\| u_h^r\|_{DG} \lesssim \eta ,
\end{eqnarray}
where $\eta$ is our a posteriori error indicator (\ref{errind}).\\
\end{itemize}
\noindent {\bf Lemma:} For a given $t\in (0,T]$ and for any $v\in H_0^1(\Omega )$, we have
\begin{eqnarray}
\int_{\Omega } f(v-I_hv)dx-{a}_h(t;u_h^s,v-I_hv)-b_h(t;u_h^s,v-I_hv) \lesssim (\eta +\Theta )|||v||| , \label{lemma}
\end{eqnarray}
where $I_h$ is the interpolation operator.\\
\noindent {\it Proof:} Let
$$
T=\int_{\Omega } f(v-I_hv)dx-{a}_h(t;u_h^s,v-I_hv)-b_h(t;u_h^s,v-I_hv).
$$
Integration by parts yields
\begin{eqnarray*}
T &=& \sum \limits_{K \in {\xi}_{h}} \int_K (f+\epsilon\Delta u_h^s-\vec{\beta }\cdot\nabla u_h^s-r(u_h^s))(v-I_hv)dx\\
& & -\sum \limits_{K \in {\xi}_{h}} \int_{\partial K}\epsilon\nabla u_h^s\cdot \vec{n}_K(v-I_hv)ds\\
& & +\sum \limits_{K \in {\xi}_{h}} \int_{\partial K^-\setminus \partial\Omega }\vec{\beta }\cdot \vec{n}_K(u_h^s-u_h^{s,out})(v-I_hv)ds\\
&=&T_1+T_2+T_3.
\end{eqnarray*}
Adding and subtracting the data approximation terms in $T_1$ gives
\begin{eqnarray*}
T_1 &=& \sum \limits_{K \in {\xi}_{h}} \int_K (f_h +\epsilon\Delta u_h^s-\vec{\beta }_h\cdot\nabla u_h^s-r(u_h^s))(v-I_hv)dx\\
& & +\sum \limits_{K \in {\xi}_{h}} \int_K ((f-f_h)-(\vec{\beta }-\vec{\beta }_h)\cdot\nabla u_h^s)(v-I_hv)dx.
\end{eqnarray*}
Using the Cauchy-Schwarz inequality and the interpolation estimate (\ref{int2}), we obtain
\begin{eqnarray*}
T_1 &\lesssim & \left( \sum \limits_{K \in {\xi}_{h}} \eta_{R_K}^2\right)^{1/2} \left( \sum \limits_{K \in {\xi}_{h}} \rho_{K}^{-2}\| v-I_hv\|_{L^2(K)}^2\right)^{1/2} \\
&& +\left( \sum \limits_{K \in {\xi}_{h}} \Theta_{K}^2\right)^{1/2} \left( \sum \limits_{K \in {\xi}_{h}} \rho_{K}^{-2}\| v-I_hv\|_{L^2(K)}^2\right)^{1/2} \\
&\lesssim & \left( \sum \limits_{K \in {\xi}_{h}} (\eta_{R_K}^2+\Theta_K^2)\right)^{1/2}|||v|||.
\end{eqnarray*}
For the terms $T_2$ and $T_3$, we have \cite{schotzau09rae}[Lemma 4.8]
$$
T_2 \lesssim \left( \sum \limits_{K \in {\xi}_{h}} \eta_{E_K}^2 \right)^{1/2}|||v||| , \qquad
T_3 \lesssim \left( \sum \limits_{K \in {\xi}_{h}} \eta_{J_K}^2 \right)^{1/2}|||v||| ,
$$
which finishes the proof.\\
\noindent {\bf Bound for the conforming part of the error:} For a given $t\in (0,T]$, there holds
\begin{eqnarray}
\| u^s-u_h^c\|_{DG}\lesssim \eta + \Theta . \label{conf}
\end{eqnarray}
\noindent {\it Proof:} Since $u^s-u_h^c\in H_0^1(\Omega )$, we have $|u^s-u_h^c|_C=|\vec{\beta} (u^s-u_h^c)|_*$.
Then, from the inf-sup condition (\ref{infsup}),
$$
\| u^s-u_h^c\|_{DG} = |||u^s-u_h^c||| +|u^s-u_h^c|_C \lesssim \underset{v\in H_0^1(\Omega ) \setminus \{ 0\}}{\text{sup}}\frac{{a}_h(t;u^s-u_h^c,v)}{|||v|||} ,
$$
so we need to bound the term ${a}_h(t;u^s-u_h^c,v)$. Using that $u^s-u_h^c\in H_0^1(\Omega )$, we have
\begin{eqnarray*}
{a}_h(t;u^s-u_h^c,v) &=& {a}_h(t;u^s,v)-{a}_h(t;u_h^c,v) \\
&=& \int_{\Omega }fv dx -b_h(t;u^s,v)-{a}_h(t;u_h^c,v)\\
&=& \int_{\Omega }fv dx -b_h(t;u^s,v)-D_h(u_h^c,v)-J_h(u_h^c,v)-O_h(u_h^c,v) \\
&=& \int_{\Omega }fv dx -b_h(t;u_h^s,v)+b_h(t;u_h^s,v)-b_h(t;u^s,v)\\
&& \quad -{a}_h(t;u_h^s,v)+D_h(u_h^r,v)+J_h(u_h^r,v)+O_h(u_h^r,v).
\end{eqnarray*}
We also have, from the SIPG scheme (\ref{stdg}),
\begin{eqnarray*}
\int_{\Omega }fI_hv dx &=& a_h(t;u_h^s,I_hv)+K_h(u_h^s,I_hv)+b_h(t;u_h^s,I_hv).
\end{eqnarray*}
Hence, we obtain
$$
a_h(t;u^s-u_h^c,v)=T_1+T_2+T_3+T_4
$$
with
\begin{eqnarray*}
T_1 &=& \int_{\Omega }f(v-I_hv) dx -{a}_h(t;u_h^s,v-I_hv)-b_h(t;u_h^s,v-I_hv),\\
T_2 &=& D_h(u_h^r,v)+J_h(u_h^r,v)+O_h(u_h^r,v),\\
T_3 &=& K_h(u_h^s,I_hv), \\
T_4 &=& b_h(t;u_h^s,v)-b_h(t;u^s,v).
\end{eqnarray*}
From the inequality (\ref{lemma}), we have
$$
T_1 \lesssim (\eta +\Theta )|||v||| .
$$
The continuity results (\ref{cd}-\ref{cj}) and the bound (\ref{rem}) for the remainder term yield
$$
T_2 \lesssim (|||u_h^r|||+|\vec{\beta} u_h^r|_*)|||v||| \lesssim \eta |||v||| .
$$
Moreover, using (\ref{ck}) and (\ref{int1}), we get
$$
T_3 \lesssim \sigma^{-1/2}\left( \sum_{K\in \xi_h} \eta_{J_K}^2\right)^{1/2} |||I_hv||| \lesssim \sigma^{-1/2}\left( \sum_{K\in \xi_h} \eta_{J_K}^2\right)^{1/2} |||v|||.
$$
Finally, by the triangle inequality and the boundedness property (\ref{cb}), we get
$$
|T_4| \leq |b_h(t;u_h^s,v)|+|b_h(t;u^s,v)| \lesssim |||v|||.
$$
This finishes the proof.\\
\noindent {\bf Proof of the reliability bound (\ref{rel}):} Combining the bounds (\ref{conf}) and (\ref{rem}) for the conforming and the remainder parts of the error, respectively, we obtain
\begin{eqnarray*}
\|u^s-u_h^s \|_{DG} &\leq& \|u^s-u_h^c \|_{DG} +\|u_h^r \|_{DG}
\lesssim (\eta + \Theta ) + \eta
\lesssim \eta + \Theta .
\end{eqnarray*}
\noindent {\bf Proof of the efficiency bound (\ref{eff}):} The proof is similar to that of Theorem 3.3 in \cite{schotzau09rae}; we only use the boundedness property (\ref{nonl1}) of the non-linear reaction term to bound the terms involving $r$ that occur in the procedure of \cite{schotzau09rae}.

\subsection{A posteriori error bounds for the semi-discrete system}
\label{semidiscerror}
In order to measure the error of the semi-discrete problem, we use the $L^{\infty}(L^2)+L^2(H^1)$-type norm
$$
\| v\|_*^2 = \| v\|_{L^{\infty}(0,T;L^2(\Omega ))}^2 + \int_0^T |||v|||^2dt .
$$
We make use of the elliptic reconstruction technique of \cite{makridakis03era}: the elliptic reconstruction $w\in H_0^1(\Omega )$ is defined as the unique solution of the problem
\begin{equation}\label{cont_elliptic}
a(t;w,v)+b_h(t;w,v)=\int_{\Omega}\left( f-\frac{\partial u_h}{\partial t}\right)vdx \; , \quad \forall v\in H_0^1(\Omega ).
\end{equation}
The SIPG discretization of the above problem, on the other hand, reads as: for each $t\in (0,T]$, find $w_h\in C^{0,1}(0,T;V_h(\xi_h))$ such that for all $v_h\in V_h(\xi_h)$
$$
a_h(t;w_h,v_h)+K_h(w_h,v_h)+b_h(t;w_h,v_h)= \int_{\Omega}\left( f-\frac{\partial u_h}{\partial t}\right)v_hdx ,
$$
which, in view of the semi-discrete scheme (\ref{dg}), implies that $w_h=u_h$.
Hence, the error bound for the term $\| w-u_h\|_{DG}$ can be found using the a posteriori error bound (\ref{rel}) for the non-linear stationary problem. We now give the a posteriori error bounds for the semi-discrete system, following \cite{cangiani13adg}. For the error $e(t)=u(t)-u_h(t)$ of the semi-discrete problem, we use the decomposition $e(t)=\mu (t)+\nu (t)$ with $\mu (t)= u(t)-w(t)$ and $\nu (t) = w(t)-u_h(t)$. Then, for any $t\in (0,T]$, using (\ref{eqweak}) and (\ref{cont_elliptic}), we obtain
$$
\int_{\Omega}\frac{\partial e}{\partial t}v dx + a_h(t;\mu , v)+ b_h(t;\mu , v)=0 ,
$$
which yields the error bound \cite{cangiani13adg}[Theorem 5.4]
$$
\| e\|_* \lesssim \tilde{\eta} ,
$$
where the error estimator $\tilde{\eta}$ is defined by
\begin{eqnarray*}
\tilde{\eta}^2 = \| e(0)\|^2 + \int_0^T \tilde{\eta}_{S_1}^2dt + \text{min}\left\{ \left( \int_0^T \tilde{\eta}_{S_2} dt \right)^2 , \rho_T^2\int_0^T \tilde{\eta}_{S_2}^2dt \right\} + \underset{0\leq t\leq T}{\text{max}}\tilde{\eta}_{S_3}^2
\end{eqnarray*}
with
\begin{eqnarray*}
\tilde{\eta}_{S_1}^2 &=& \sum_{K\in{\xi}_{h}} \rho_K^2\left\| f- \frac{\partial u_h}{\partial t} +\epsilon\Delta u_h - \vec{\beta}\cdot\nabla u_h - r(u_h)\right\|_{L^2(K)}^2 + \sum_{e\in\Gamma_0} \epsilon^{-1/2}\rho_e\| [ \epsilon\nabla u_h ] \|_{L^2(e)}^2 \\
&& + \sum_{e\in\Gamma_0\cup\Gamma^D}\left( \frac{\epsilon\sigma}{h_e}+\kappa h_e+\frac{h_e}{\epsilon}\right)\| [ u_h ] \|_{L^2(e)}^2 ,\\
\tilde{\eta}_{S_2}^2 &=& \sum_{e\in\Gamma_0\cup\Gamma^D} h_e\left\| \left[ \frac{\partial u_h}{\partial t}\right]\right\|_{L^2(e)}^2 , \\
\tilde{\eta}_{S_3}^2 &=& \sum_{e\in\Gamma_0\cup\Gamma^D} h_e\| [ u_h]\|_{L^2(e)}^2 ,
\end{eqnarray*}
and the weight $\rho_T=\text{min}(\epsilon^{-\frac{1}{2}}, \kappa^{-\frac{1}{2}})$.\\

\subsection{A posteriori error bounds for the fully-discrete system}
\label{fullydiscerror}
For the fully discrete case, we consider the solutions at the discrete time instances. To this end, let $A^k\in V_h^k$ be the unique element satisfying
$$
a_h(t^k; u_h^k,v_h^k) + K_h(u_h^k,v_h^k) + b_h(t^k ; u_h^k,v_h^k) = \int_{\Omega} A^k v_h^k dx \; , \quad \forall v_h^k \in V_h^k .
$$
Note that, for $k\geq 1$, $A^k=I_h^kf^k - (u_h^k - I_h^ku_h^{k-1})/\tau_k$ with $I_h^k$ being the $L^2$-projection operator onto the space $V_h^k$. Then, the elliptic reconstruction $w^k\in H_0^1(\Omega )$ is defined as the unique solution of the stationary problem
\begin{equation}\label{disc_elliptic}
a(t^k; w^k,v) + b(t^k; w^k,v)=\int_{\Omega} A^k v dx \; , \qquad \forall v\in H_0^1(\Omega ).
\end{equation}
Next, as in \cite{cangiani13adg}, we regard the discrete solution $u_h(t)$ as a piecewise continuous function in time: on each interval $(t^{k-1},t^k]$, $u_h(t)$ is the linear interpolant of the values $u_h^{k-1}$ and $u_h^k$,
$$
u_h(t)= l_{k-1}(t)u_h^{k-1} + l_{k}(t)u_h^{k} ,
$$
with the linear Lagrange basis functions $l_{k-1}(t)=(t^k-t)/\tau_k$ and $l_{k}(t)=(t-t^{k-1})/\tau_k$ on $[t^{k-1},t^k]$.
Then, for the error $e=u-u_h$, using (\ref{eqweak}) and (\ref{disc_elliptic}), we obtain for all $v\in H_0^1(\Omega)$
\begin{eqnarray*}
\int_{\Omega} \frac{\partial e}{\partial t} v dx + a_h(t; e,v) + b_h(t; e,v) &=& \int_{\Omega}(f-f^k)v dx + \int_{\Omega}\left( f^k - \frac{\partial u_h}{\partial t}\right) v dx \\
&& - a_h(t;u_h,v) - b_h(t; u_h,v) ,
\end{eqnarray*}
which yields the error bound \cite{cangiani13adg}[Theorem 6.5]
$$
\| e\|_*^2 \lesssim \eta_S^2 + \eta_T^2 ,
$$
where $\eta_S$ is the spatial estimator given by \cite{cangiani13adg}
\begin{eqnarray*}
\eta_S^2 &=& \| e(0)\|^2 + \frac{1}{3}\sum_{k=1}^n \tau_k (\eta_{S_{1,k-1}}^2+\eta_{S_{1,k}}^2) +\sum_{k=1}^n \tau_k \eta_{S_{2,k}}^2 \\
&& + \underset{0\leq k\leq n}{\text{max}}\eta_{S_{3,k}}^2 + \text{min} \left\{ \left( \sum_{k=1}^n \tau_k \eta_{S_{4,k}}\right)^2 , \rho_T^2 \sum_{k=1}^n \tau_k \eta_{S_{4,k}}^2 \right\}
\end{eqnarray*}
with
\begin{eqnarray}
{\eta}_{S_{1,k}}^2 &=& \sum_{K\in{\xi}_{h}^{k-1}\cup {\xi}_{h}^{k}} \rho_K^2\left\| A^k +\epsilon\Delta u_h^k - \vec{\beta}^k\cdot\nabla u_h^k - r(u_h^k)\right\|_{L^2(K)}^2 + \sum_{e\in\Gamma_0} \epsilon^{-1/2}\rho_e\| [ \epsilon\nabla u_h^k ] \|_{L^2(e)}^2 \nonumber \\
&& + \sum_{e\in\Gamma}\left( \frac{\epsilon\sigma}{h_e}+\kappa h_e+\frac{h_e}{\epsilon}\right)\| [ u_h^k ] \|_{L^2(e)}^2 , \label{spatialest} \\
{\eta}_{S_{2,k}}^2 &=& \sum_{K\in{\xi}_{h}^{k-1}\cup {\xi}_{h}^{k}} \rho_K^2 \left\| f^k - I_h^kf^k + \frac{u_h^{k-1}-I_h^ku_h^{k-1}}{\tau_k}\right\|_{L^2(K)}^2 , \nonumber \\
{\eta}_{S_{3,k}}^2 &=& \sum_{e\in\Gamma} h_e\| [ u_h^k]\|_{L^2(e)}^2 , \nonumber \\
{\eta}_{S_{4,k}}^2 &=& \sum_{e\in\Gamma} h_e \left\| \left[ \frac{u_h^k-u_h^{k-1}}{\tau_k}\right]\right\|_{L^2(e)}^2 , \nonumber
\end{eqnarray}
and $\eta_T$ is the temporal estimator given by \cite{cangiani13adg}
\begin{eqnarray} \label{timeest}
\eta_T^2 = \sum_{k=1}^n\int_{t_{k-1}}^{t_k}\eta_{T_1,k}^2 dt + \text{min} \left\{ \left( \sum_{k=1}^n \int_{t_{k-1}}^{t_k} \eta_{T_{2,k}}dt\right)^2 , \rho_T^2 \sum_{k=1}^n \int_{t_{k-1}}^{t_k} \eta_{T_{2,k}}^2dt \right\}
\end{eqnarray}
with
\begin{eqnarray*}
\eta_{T_{1,k}}^2 &=& \sum_{K\in {\xi}_{h}^{k-1}\cup {\xi}_{h}^{k}}\epsilon^{-1} \| l_k(\vec{\beta}^k-\vec{\beta})u_h^k + l_{k-1}(\vec{\beta}^{k-1}-\vec{\beta})u_h^{k-1}\|_{L^2(K)}^2 , \\
\eta_{T_{2,k}}^2 &=& \sum_{K\in {\xi}_{h}^{k-1}\cup {\xi}_{h}^{k}} \| f-f^k + l_{k-1}(A^k - A^{k-1}) + l_k(\nabla\cdot\vec{\beta}^k-\nabla\cdot \vec{\beta})u_h^k + l_{k-1}(\nabla\cdot\vec{\beta}^{k-1}-\nabla\cdot \vec{\beta})u_h^{k-1}\|_{L^2(K)}^2 .
\end{eqnarray*}

\section{Adaptive algorithm}
\label{algorithm}

The time-space adaptive algorithm developed for linear convection-diffusion problems in \cite{cangiani13adg} is modified here for semi-linear problems of the type \eqref{org}; see Fig.~\ref{chart}. It is based on the residual-based a posteriori error estimators given in the previous sections. The algorithm starts with a uniform initial mesh in space and a given initial time step. At each time step, the spatial mesh and the time-step size are adaptively adjusted according to the user-defined tolerances: $\mathbf{ttol}$ for time-step refinement, and $\mathbf{stol^+}$ and $\mathbf{stol^-}$ for the spatial mesh, the former corresponding to the refinement and the latter to the coarsening procedure in space. Note that we do not need a temporal tolerance corresponding to time-step coarsening, since in our problems we start with a uniform, equispaced partition of $[0,T]$ with sufficiently large time steps.
Thus, it is enough to bisect the time intervals that do not satisfy the temporal tolerance $\mathbf{ttol}$. Both the refinement and the coarsening processes in space are driven by the indicator $\eta_{S_{1,k}}$ in (\ref{spatialest}) appearing in the spatial estimator. Since the temporal estimator $\eta_T$ in (\ref{timeest}) is not easy to compute, the adaptive refinement of the time steps is driven by the modified temporal indicator \cite{cangiani13adg}
$$
\tilde{\eta}_{T_k}^2=\int_{t_{k-1}}^{t_k} \eta_{T_{1,k}}^2dt + \min \{\rho_T,T\}\int_{t_{k-1}}^{t_k} \eta_{T_{2,k}}^2dt ,
$$
the sum of which over the time steps gives a bound for the temporal estimator $\eta_T^2$.
\begin{figure} \caption{Adaptive algorithm chart} \label{chart} \end{figure}
Although the adaptive algorithm, Fig.~\ref{chart}, is stated for a single equation of the system \eqref{org}, it is not difficult to extend it to coupled systems. Suppose, for instance, that we have a system of two equations for the unknowns $u_1$ and $u_2$; then the temporal indicators $\tilde{\eta}_{T_k}^1$, $\tilde{\eta}_{T_k}^2$ and the spatial indicators $\eta_{S_{1,k}}^1$, $\eta_{S_{1,k}}^2$ corresponding to the unknowns $u_1$ and $u_2$, respectively, are computed. To adapt the time-step size, we require the temporal condition for both temporal indicators, i.e.\ $\tilde{\eta}_{T_k}^1\leq \mathbf{ttol}$ and $\tilde{\eta}_{T_k}^2\leq \mathbf{ttol}$. To select the elements to be refined, we take the union of the sets of elements satisfying $\eta_{S_{1,k}}^1> \mathbf{stol^+}$ and $\eta_{S_{1,k}}^2> \mathbf{stol^+}$; a similar procedure is used to select the elements to be coarsened, excluding any elements already marked for refinement. Numerical studies demonstrate that the adaptive algorithm is capable of resolving the layers in space as time progresses.
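A condensed MATLAB-style sketch of one step of this procedure is given below. It is only a schematic outline (the precise flow is the one of Fig.~\ref{chart}); the routines \texttt{solve\_step}, \texttt{temporal\_indicator}, \texttt{spatial\_indicators}, \texttt{refine} and \texttt{coarsen} are hypothetical placeholders, and the tolerance values are those used later in Section~\ref{ex1}:
\begin{verbatim}
% Schematic sketch of the k-th step of the time-space adaptive algorithm
% (hypothetical helper routines; ukm1 is the solution at t^{k-1}).
ttol = 1e-3;  stolp = 3e-4;  stolm = 3e-7;
accepted = false;
while ~accepted
    uk   = solve_step(mesh, ukm1, tau);              % backward Euler/Newton step
    etaT = temporal_indicator(mesh, uk, ukm1, tau);  % modified indicator above
    etaS = spatial_indicators(mesh, uk, ukm1, tau);  % eta_{S_{1,k}} per element
    if etaT > ttol
        tau = tau/2;                          % bisect the time step and redo
    elseif any(etaS > stolp)
        mesh = refine(mesh, etaS > stolp);    % refine marked elements and redo
    else
        mesh = coarsen(mesh, etaS < stolm);   % coarsen where the error is small
        accepted = true;                      % accept the step and advance in time
    end
end
\end{verbatim}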
\section{Solution of the fully-discrete system}
\label{newton}

In this section, we discuss the solution of the fully-discrete system \eqref{fullydisc} at an arbitrary $k^{th}$ time step, which has to be solved for every $k=1,2,\ldots , n$. To keep the notation simple, we consider the system \eqref{fullydisc} at an arbitrary time step and drop the time-step superscript, writing it in the form
\begin{equation}\label{singleeq}
\int_{\Omega}\frac{u_{h}-w_{h}}{\tau}v_{h} dx + a_{h}(u_{h},v_{h})+ K_{h}(u_{h},v_{h})+b_{h}(u_{h}, v_{h})=\int_{\Omega}fv_{h} dx \; , \quad \forall v_h\in V_h ,
\end{equation}
where we set $u_{h}:=u_{h}^k$, $w_h:=u_{h}^{k-1}$, $v_{h}:=v_{h}^k$, $f:=f^k$, $\tau:=\tau_k$, $a_{h}(u_{h},v_{h}):=a_{h}(t^k;u_{h}^k,v_{h}^k)$, $b_{h}(u_{h},v_{h}):=b_{h}(t^k;u_{h}^k,v_{h}^k)$ and $V_h:=V_h^k$. The approximate solution $u_h$ and the known solution $w_h$ (from the previous time step) of \eqref{singleeq} have the form
\begin{equation}\label{comb}
u_h=\sum_{i=1}^{Nel}\sum_{l=1}^{Nloc}u_l^i\phi_l^i \; , \quad w_h=\sum_{i=1}^{Nel}\sum_{l=1}^{Nloc}w_l^i\phi_l^i ,
\end{equation}
where the $\phi_l^i$ are the basis polynomials spanning the space $V_h$, the $u_l^i$ are the unknown coefficients to be found and the $w_l^i$ are the known coefficients. Here $Nel$ denotes the number of triangles and $Nloc$ is the local dimension, depending on the polynomial degree $k$ (in 2D, $Nloc=(k+1)(k+2)/2$). In DG methods, we choose the piecewise basis polynomials $\phi_l^i$ in such a way that each basis function has only one triangle as its support, i.e.\ on a specific triangle $K_e$, $e\in\{ 1,2,\ldots , Nel\}$, we choose the basis polynomials $\phi_l^e$, $l=1,2,\ldots , Nloc$, which are identically zero outside the triangle $K_e$. By this construction, the stiffness and mass matrices obtained by DG methods have a block structure, each block being associated with a single triangle (there is no overlap as in the continuous FEM case). The product $dof:=Nel*Nloc$ gives the number of degrees of freedom of the DG method. Inserting the linear combinations \eqref{comb} of $u_h$ and $w_h$ into \eqref{singleeq} and choosing the test functions $v_h=\phi_l^i$, $l=1,2,\ldots , Nloc$, $i=1,2,\ldots , Nel$, the discrete residual of the system (\ref{singleeq}) in matrix-vector form is given by
\begin{equation}\label{residualsystem}
Res(\vec{u})=M\vec{u} - M\vec{w} +\tau( S\vec{u} + \vec{b}(\vec{u}) - \vec{f}) = 0 ,
\end{equation}
where $\vec{u}, \vec{w}\in\mathbb{R}^{dof}$ are the vectors of the unknown and known coefficients $u_l^i$ and $w_l^i$, respectively, $M\in\mathbb{R}^{dof\times dof}$ is the mass matrix, $S\in\mathbb{R}^{dof\times dof}$ is the stiffness matrix corresponding to the bilinear form $\tilde{a}_h(u_h,v_h):=a_h(u_h,v_h)+K_h(u_h,v_h)$, $\vec{b}\in\mathbb{R}^{dof}$ is the vector function of $\vec{u}$ related to the non-linear form $b_h(u_h,v_h)$, and $\vec{f}\in\mathbb{R}^{dof}$ is the vector corresponding to the linear form $\int_{\Omega}fv_{h}dx$. Explicitly,
$$
S=
\begin{bmatrix}
S_{11} & S_{12} & \cdots & S_{1,Nel} \\
S_{21} & S_{22} & & \vdots \\
\vdots & & \ddots & \\
S_{Nel,1} & \cdots & & S_{Nel,Nel}
\end{bmatrix}
\; , \quad
M=
\begin{bmatrix}
M_{11} & M_{12} & \cdots & M_{1,Nel} \\
M_{21} & M_{22} & & \vdots \\
\vdots & & \ddots & \\
M_{Nel,1} & \cdots & & M_{Nel,Nel}
\end{bmatrix}
$$
$$
\vec{u}=
\begin{bmatrix}
\vec{u}_1 \\ \vec{u}_2 \\ \vdots \\ \vec{u}_{Nel}
\end{bmatrix}
\; , \quad
\vec{w}=
\begin{bmatrix}
\vec{w}_1 \\ \vec{w}_2 \\ \vdots \\ \vec{w}_{Nel}
\end{bmatrix}
\; , \quad
\vec{b}(\vec{u})=
\begin{bmatrix}
\vec{b}_1(\vec{u}) \\ \vec{b}_2(\vec{u}) \\ \vdots \\ \vec{b}_{Nel}(\vec{u})
\end{bmatrix}
\; , \quad
\vec{f}=
\begin{bmatrix}
\vec{f}_1 \\ \vec{f}_2 \\ \vdots \\ \vec{f}_{Nel}
\end{bmatrix}
$$
where all the blocks have dimension $Nloc$:
$$
S_{ji}=
\begin{bmatrix}
\tilde{a}_h(\phi_1^i,\phi_1^j) & \tilde{a}_h(\phi_2^i,\phi_1^j) & \cdots & \tilde{a}_h(\phi_{Nloc}^i,\phi_1^j) \\
\tilde{a}_h(\phi_1^i,\phi_2^j) & \tilde{a}_h(\phi_2^i,\phi_2^j) & & \vdots \\
\vdots & & \ddots & \\
\tilde{a}_h(\phi_1^i,\phi_{Nloc}^j) & \cdots & & \tilde{a}_h(\phi_{Nloc}^i,\phi_{Nloc}^j)
\end{bmatrix}
$$
$$
M_{ji}=
\begin{bmatrix}
\int_{\Omega}\phi_1^i\phi_1^jdx & \int_{\Omega}\phi_2^i\phi_1^jdx & \cdots & \int_{\Omega}\phi_{Nloc}^i\phi_1^jdx \\
\int_{\Omega}\phi_1^i\phi_2^jdx & \int_{\Omega}\phi_2^i\phi_2^jdx & & \vdots \\
\vdots & & \ddots & \\
\int_{\Omega}\phi_1^i\phi_{Nloc}^jdx & \cdots & & \int_{\Omega}\phi_{Nloc}^i\phi_{Nloc}^jdx
\end{bmatrix}
$$
$$
\vec{u}_i=
\begin{bmatrix}
u_1^i \\ u_2^i \\ \vdots \\ u_{Nloc}^i
\end{bmatrix}
\; , \quad
\vec{w}_i=
\begin{bmatrix}
w_1^i \\ w_2^i \\ \vdots \\ w_{Nloc}^i
\end{bmatrix}
\; , \quad
\vec{b}_i(\vec{u})=
\begin{bmatrix}
b_h(u_h,\phi_1^i) \\ b_h(u_h,\phi_2^i) \\ \vdots \\ b_h(u_h,\phi_{Nloc}^i)
\end{bmatrix}
\; , \quad
\vec{f}_i=
\begin{bmatrix}
\int_{\Omega}f\phi_1^idx \\ \int_{\Omega}f\phi_2^idx \\ \vdots \\ \int_{\Omega}f\phi_{Nloc}^idx
\end{bmatrix} .
$$
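The block definitions above correspond to a simple elementwise ordering of the unknowns: the $l$-th local basis function of element $i$ occupies the global index $(i-1)\,Nloc+l$. The following MATLAB-style fragment is only an illustration of this ordering with hypothetical variable names (\texttt{glob}, \texttt{Sji}); in practice, the blocks are assembled from the element and edge integrals above:
\begin{verbatim}
% Illustration of the elementwise block ordering of the DG unknowns.
Nel = 8;  Nloc = 6;              % e.g. quadratic elements: Nloc = (k+1)(k+2)/2 = 6
dof = Nel*Nloc;                  % total number of degrees of freedom
glob = @(i,l) (i-1)*Nloc + l;    % global index of local basis l on element i

S = sparse(dof, dof);
i = 3;  j = 3;                   % e.g. the diagonal block of element 3
Sji = zeros(Nloc);               % placeholder for the Nloc x Nloc block S_{ji}
rows = glob(j, 1:Nloc);
cols = glob(i, 1:Nloc);
S(rows, cols) = S(rows, cols) + Sji;   % scatter the local block into S
\end{verbatim}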
The block structure of the DG method leads to matrices whose condition number increases with the degree $k$ of the basis polynomials, which is a drawback of DG methods compared with classical (continuous) finite elements. However, owing to the locality of DG methods, a valuable property, this drawback can be handled by the various preconditioners developed for DG discretized schemes in the literature. Besides, the condition number of the stiffness matrix $S$ also increases linearly with the penalty parameter $\sigma$. Therefore, the penalty parameter should not be chosen too large. On the other hand, it should be selected sufficiently large to ensure the coercivity of the bilinear form \cite{riviere08dgm}[Sec. 27.1], which is needed for the stability of the convergence of the DG method; it ensures that the matrix arising from the SIPG discretization of the diffusive part is symmetric positive definite. In the literature, several choices of the penalty parameter have been suggested. In \cite{epshteyn07epp}, computable lower bounds are derived, and in \cite{dobrev08psi} the penalty parameter is chosen depending on the diffusion coefficient $\epsilon$. The effect of the penalty parameter on the condition number was discussed in detail for the DG discretization of the Poisson equation in \cite{castillo12pdg}, and in \cite{slingerland14fls} for layered reservoirs with strong permeability contrasts. In our study, we select the penalty parameter $\sigma$ depending only on the polynomial degree $k$, as $\sigma =3k(k+1)$ on interior edges and $\sigma =6k(k+1)$ on boundary edges. The reason for the larger penalty parameter $\sigma$ on boundary edges is to penalize the solution sufficiently on the boundary, since the (possibly non-homogeneous) Dirichlet boundary conditions are imposed weakly in DG methods.

Next, we solve the system \eqref{residualsystem} by Newton's method: starting from an initial guess $\vec{u}^{(0)}$ (typically $\vec{u}^{(0)}=\vec{w}$, i.e.\ the known solution from the previous time step), we solve
\begin{eqnarray*}
J^s\delta\vec{u}^{(s)} &=& -Res(\vec{u}^{(s)}) \\
\vec{u}^{(s+1)} &=& \vec{u}^{(s)} + \delta\vec{u}^{(s)} \; , \quad s=0,1,\ldots ,
\end{eqnarray*}
where $J^s=M+\tau (S+J^s_{\vec{b}})$ is the Jacobian matrix of the system at the iterate $\vec{u}^{(s)}$, and $J^s_{\vec{b}}$ denotes the Jacobian matrix of the non-linear vector $\vec{b}(\vec{u})$ at the iterate $\vec{u}^{(s)}$.
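A minimal MATLAB-style sketch of this Newton iteration is given below, assuming that the matrices $M$ and $S$, the vector $\vec{f}$ and routines evaluating $\vec{b}(\vec{u})$ and its Jacobian are available; the routine names \texttt{eval\_b} and \texttt{eval\_Jb}, the stopping tolerance and the iteration cap are hypothetical choices:
\begin{verbatim}
% Minimal sketch of Newton's method for Res(u) = M*u - M*w + tau*(S*u + b(u) - f) = 0.
u = w;                                       % initial guess u^{(0)} = w
for s = 0:20                                 % at most 20 Newton iterations
    Res = M*u - M*w + tau*(S*u + eval_b(u) - f);
    if norm(Res) < 1e-10, break; end         % (hypothetical) stopping criterion
    J  = M + tau*(S + eval_Jb(u));           % Jacobian J^s = M + tau*(S + J_b^s)
    du = J\(-Res);                           % solve J^s * du = -Res(u^{(s)})
    u  = u + du;                             % Newton update u^{(s+1)} = u^{(s)} + du
end
\end{verbatim}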
\section{Numerical studies}
\label{numeric}

In this section, we present numerical studies demonstrating the performance of the time-space adaptive algorithm. All computations are performed in MATLAB-R2014a. In the problems below, by a very coarse initial mesh we mean an initial mesh formed, for instance on $\Omega =(0,1)^2$, by dividing the domain with $\Delta x_1=\Delta x_2=0.5$, leading to 8 triangular elements and 48 DoFs for quadratic elements. In the first example, Example~\ref{ex1}, a test problem with polynomial-type non-linearity and a non-moving internal layer is used to benchmark the algorithm for different tolerances and diffusion parameters $\epsilon$: we report the decay rates of the error and of the spatial and temporal estimators, and the effectivity indices (the ratio of the estimator to the error). We expect the effectivity indices to lie in a small band for different diffusion parameters, meaning that our estimators are robust in the P\'{e}clet number. Moreover, to demonstrate the mentioned properties, we use the weighted DoFs as in \cite{cangiani13adg},
$$
\text{Weighted DoFs} = \frac{1}{T} \sum_{k=1}^n \tau_k\lambda_k ,
$$
where $\lambda_k$ denotes the total number of DoFs on the union mesh $\xi_h^{k-1}\cup\xi_h^{k}$. Since the first example has a non-moving internal layer, a monotonic increase in the DoFs is expected as time progresses. Conversely, in Examples~\ref{ex2}-\ref{ex3} we consider problems whose layers move as time progresses; in this case, we expect the refinement and coarsening procedures in space to work simultaneously, leading to oscillations in the time vs.\ DoFs plots. With Example~\ref{ex2}, we also test the performance of our algorithm for a coupled system. As the final example, Example~\ref{ex4}, we consider an important real-life problem representing a reaction in porous media with internal layers at different locations due to high-permeability rocks.

\subsection{Example with polynomial type non-linearity (benchmark of the algorithm)}
\label{ex1}

The first example is taken from \cite{bause12ash}, with a polynomial non-linear term:
$$
u_t+\vec{\beta }\cdot\nabla u-\epsilon\Delta u+r(u)=f \quad \text{in } \; \Omega =(0,1)^2,
$$
with the convection field $\vec{\beta }(x,y)=(2,3)^T$, the diffusion coefficient $\epsilon =10^{-6}$ and the non-linear reaction term $r(u)=u^4$. The source function $f$ and the Dirichlet boundary condition are chosen so that the exact solution is given by
\begin{eqnarray*}
u(\vec{x},t)&=& 16\sin (\pi t)x_1(1-x_1)x_2(1-x_2) \\
&& [ 0.5 + \pi^{-1}\arctan (2\epsilon^{-1/2}(0.25^2-(x_1-0.5)^2-(x_2-0.5)^2)) ].
\end{eqnarray*}
We start by demonstrating the decrease of the errors under uniform time-space refinement using linear DG elements. In Fig.~\ref{ex1:uniform}, the expected first order convergence in space and time is shown.
\begin{figure} \caption{Example \ref{ex1}: Decays of estimators and errors for uniform time-space refinement} \label{ex1:uniform} \end{figure}
\begin{figure} \caption{Example \ref{ex1}: Error/spatial estimator for $\epsilon =10^{-2}$ (left) and $\epsilon =10^{-4}$ (right)} \label{ex1:errors} \end{figure}
For the time-space adaptive solution, we use quadratic DG elements. We investigate the performance of the spatial estimator by fixing the time step $\tau =0.005$, so that the temporal error is dominated by the spatial error, and reducing the spatial estimator tolerance $\mathbf{stol^+}$ from $10^{-1}$ to $10^{-6}$. The decay rates of the error and of the spatial estimator are similar, as illustrated in Fig.~\ref{ex1:errors} for $\epsilon =10^{-2}$ and $\epsilon =10^{-4}$. Fig.~\ref{ex1:spatial} shows the spatial effectivity indices and the decrease of the spatial estimators for different values of the diffusion constant $\epsilon$. The effectivity indices do not exceed 7, which is acceptable, as in \cite{cangiani13adg} for linear convection-diffusion problems.
\begin{figure} \caption{Example \ref{ex1}: Spatial effectivity indices (left) and estimators (right)} \label{ex1:spatial} \end{figure}
To investigate the performance of the temporal estimator, we fix a sufficiently fine spatial mesh, so that the spatial error is dominated by the temporal error, and then we reduce the temporal estimator tolerance $\mathbf{ttol}$ in the range $10^{-1}$-$10^{-6}$. As shown in Fig.~\ref{ex1:temporal}, the temporal effectivity indices and the decrease of the temporal estimators are not affected by $\epsilon$, and the effectivity indices are almost the same, lying within the band 1-2.
\begin{figure} \caption{Example \ref{ex1}: Temporal effectivity indices (left) and estimators (right)} \label{ex1:temporal} \end{figure}
Finally, we apply the time-space adaptive algorithm with the tolerances $\mathbf{ttol}=10^{-3}$, $\mathbf{stol^+}=3\times 10^{-4}$ and $\mathbf{stol^-}=3\times 10^{-7}$. First, we prepare an initial mesh, starting from a very coarse spatial mesh and a uniform partition of the time interval $[0,0.5]$ with the step-size $\tau =0.25$, until the user-defined tolerances $\mathbf{ttol}$ and $\mathbf{stol^+}$ are satisfied. The adaptive mesh at the final time $T=0.5$ is shown in Fig.~\ref{ex1:mesh}. In Fig.~\ref{ex1:dofdt} on the right, the change of the time-steps is shown, whereas the change in the DoFs is illustrated in Fig.~\ref{ex1:dofdt} on the left. Since the layers in the problem do not move as time progresses, the number of DoFs increases monotonically through the spatial grid refinement. Fig.~\ref{ex1:plot} shows that all the oscillations are damped out by the adaptive algorithm using fewer DoFs than the uniform one.
\begin{figure} \caption{Example \ref{ex1}: Adaptive mesh } \label{ex1:mesh} \end{figure}
\begin{figure} \caption{Example \ref{ex1}: Uniform (left) and adaptive (right) solutions at T=0.5 } \label{ex1:plot} \end{figure}
\begin{figure} \caption{Example \ref{ex1}: Evolution of DoFs (left) and time-steps $\tau$ (right) } \label{ex1:dofdt} \end{figure}
\subsection{Coupled example with polynomial type non-linearity} \label{ex2}
The next example is a coupled non-linear problem taken from \cite{bause13hof}:
\begin{eqnarray*} \frac{\partial u_i}{\partial t}- \epsilon\Delta u_i+\vec{\beta }_i\cdot\nabla u_i+u_1u_2 &=& f_i , \quad i=1,2 \end{eqnarray*}
on $\Omega =(0,1)^2$ with the convection fields $\vec{\beta }_1=(1,0)^T$ and $\vec{\beta }_2=(-1,0)^T$, and the diffusion constant $\epsilon=10^{-5}$. The Dirichlet boundary conditions, initial conditions and the load functions $f_i$ are chosen so that the exact solutions are
$$ u_1(\vec{x},t)=\frac{1}{2}\left( 1-\tanh \frac{2x_1-0.2t-0.8}{\sqrt{5\epsilon }}\right) , $$
$$ u_2(\vec{x},t)=\frac{1}{2}\left( 1+\tanh \frac{2x_1+0.2t-0.9}{\sqrt{5\epsilon }}\right) . $$
We again use quadratic DG elements. We prepare an initial mesh, Fig.~\ref{ex2:mesh} on the left, starting with a very coarse spatial mesh and a uniform partition of the time interval $[0,1]$ with the step-size $\tau =0.1$, until the user-defined tolerances $\mathbf{ttol}=10^{-3}$ and $\mathbf{stol^+}=10^{-1}$ are satisfied. Here, two sharp fronts move towards each other and then mix shortly after the time $t=0.1$, Fig.~\ref{ex2:cross}. The movement of the fronts is also visible in Fig.~\ref{ex2:mesh}, showing that the refinement/coarsening of the adaptive algorithm works well. We see that the oscillations at the sharp fronts in the cross-wind direction $x_2=0.5x_1+0.75$ are almost completely damped out. Moreover, Fig.~\ref{ex2:mesh}-\ref{ex2:dtdof} show that both the spatial and temporal estimators catch the time at which the two sharp fronts mix.
\begin{figure} \caption{Example \ref{ex2}: Cross-section plots in the cross-wind direction at $t=0.1$ (left) and $t=1$ (right)} \label{ex2:cross} \end{figure}
\begin{figure} \caption{Example \ref{ex2}: Adaptive meshes at $t=0$, $t=0.1$ and $t=1$ (from left to right)} \label{ex2:mesh} \end{figure}
\begin{figure} \caption{Example \ref{ex2}: Evolution of DoFs (left) and time-steps $\tau$ (right)} \label{ex2:dtdof} \end{figure}
\subsection{Non-linear ADR equation in homogeneous porous media} \label{ex3}
We consider the advection-diffusion-reaction (ADR) equation in \cite{tambue10eia} with a polynomial type non-linear reaction
\begin{eqnarray*} \frac{\partial u}{\partial t}- \epsilon\Delta u + \vec{\beta}\cdot\nabla u+\gamma u^2(u-1) &=& 0 \quad \text{in } \; \Omega\times (0,T] \end{eqnarray*}
on $\Omega =(0,1)^2$. As in \cite{tambue10eia}, we take the homogeneous dispersion tensor $\epsilon =10^{-4}$, the velocity field $\vec{\beta}=(-0.01,-0.01)^T$ and $\gamma =100$. The initial and boundary conditions are chosen according to the exact solution
$$ u(\vec{x},t)=[ 1+\exp ( a(x_1+x_2-bt)+a(b-1)) ]^{-1} $$
with $a=\sqrt{\gamma /(4\epsilon)}$ and $b=-0.02+\sqrt{\gamma\epsilon}$. The problem describes the transport of a front in homogeneous porous media. We simulate the problem up to the final time $T=1$ using quadratic DG elements. We begin by preparing an initial mesh, starting from a very coarse spatial mesh and a uniform partition of the time interval $[0,1]$ with the step-size $\tau =0.25$, until the user-defined tolerances $\mathbf{ttol}=3\times 10^{-3}$ and $\mathbf{stol^+}=10^{-3}$ are satisfied. In Fig.~\ref{ex3:meshsol}, the adaptive meshes and solution profiles are shown at the times $t=\{ 0.2,0.6,1\}$, where the movement of the front can be seen. The time vs DoFs and time vs time step-size plots in Fig.~\ref{ex3:dtdof} clearly indicate the oscillations in DoFs and time-steps due to the movement of the front.
\begin{figure} \caption{Example \ref{ex3}: Adaptive meshes (top) and solution profiles (bottom) at $t=0.2$, $t=0.6$ and $t=1$ (from left to right)} \label{ex3:meshsol} \end{figure}
\begin{figure} \caption{Example \ref{ex3}: Evolution of DoFs (left) and time-steps $\tau$ (right)} \label{ex3:dtdof} \end{figure}
\subsection{Non-linear ADR in deterministic heterogeneous porous media} \label{ex4}
We consider the ADR equation in \cite{tambue10eia} with a Monod or Langmuir isotherm type non-linear reaction
\begin{eqnarray*} \frac{\partial u}{\partial t}- \nabla\cdot (\epsilon\nabla u) + \vec{\beta}(x)\cdot\nabla u+\frac{u}{1+u} &=& 0 \quad \text{in } \; \Omega\times (0,T]\\ u(x,t) &=& 1 \quad \text{on } \; \Gamma^D\times [0,T] \\ -\epsilon\nabla u(x,t)\cdot\vec{n} &=& 0 \quad \text{on } \; (\partial\Omega\setminus\Gamma^D)\times [0,T] \\ u(x,0) &=& 0 \quad \text{in } \; \Omega \end{eqnarray*}
with $\Omega =(0,3)\times (0,2)$ and $\Gamma^D=\{ 0\}\times [0,2]$. The problem represents a reaction in porous media, for instance, transport in a highly idealized fracture pattern. Here $\epsilon$ stands for the heterogeneous dispersion tensor given by
$$ \epsilon = \begin{bmatrix} 10^{-3} & 0\\ 0 & 10^{-4} \end{bmatrix} . $$
The velocity field $\vec{\beta}(x)$ is determined via Darcy's law
$$ \vec{\beta} = -\frac{k(x)}{\mu}\nabla p , $$
where $p$ is the fluid pressure, $\mu$ is the fluid viscosity and $k(x)$ is the permeability of the porous medium.
Using the mass conservation property $\nabla\cdot\vec{\beta}(x)=0$, under the assumption that rock and fluids are incompressible, the velocity field $\vec{\beta}(x)$ is computed by solving
\begin{eqnarray*} \nabla\cdot\left( \frac{k(x)}{\mu}\nabla p \right) &=& 0 \quad \text{in } \; \Omega\\ p &=& 1 \quad \text{on } \; \{ 0\}\times [0,2] \\ p &=& 0 \quad \text{on } \; \{ 3\}\times [0,2] \\ -k(x)\nabla p\cdot \vec{n} &=& 0 \quad \text{on } \; (0,3)\times \{ 0,2\} \end{eqnarray*}
We simulate the problem up to the final time $T=1$ using linear DG elements. We take the fluid viscosity $\mu =0.1$ and the permeability field as in \cite{tambue10eia}, with three parallel streaks whose permeability is 100 times greater than that of the surrounding domain, see Fig.~\ref{ex4:perm} on the left; as a result, the flow is canalized from the lower-permeability rocks into the high-permeability ones, Fig.~\ref{ex4:perm} on the right. For the adaptive procedure, we prepare an initial mesh, starting from a very coarse spatial mesh and a uniform partition of the time interval $[0,1]$ with the step-size $\tau =0.05$, until the user-defined tolerances $\mathbf{ttol}=10^{-3}$ and $\mathbf{stol^+}=3\times 10^{-4}$ are satisfied. Fig.~\ref{ex4:plot03}-\ref{ex4:plot1} show the adaptive meshes and concentrations at $t=0.3$ and $t=1$, where we can clearly see the flow focusing due to the high permeability.
\begin{figure} \caption{Example \ref{ex4}: Permeability field (left) and velocity streamlines (right)} \label{ex4:perm} \end{figure}
\begin{figure} \caption{Example \ref{ex4}: Adaptive mesh (left) and concentration (right) at $t=0.3$} \label{ex4:plot03} \end{figure}
\begin{figure} \caption{Example \ref{ex4}: Adaptive mesh (left) and concentration (right) at $t=1$} \label{ex4:plot1} \end{figure}
Time vs DoFs and time vs time step-size plots are given in Fig.~\ref{ex4:dtdof}. We see that small time-steps are used initially, after which the time-step settles at a steady value, Fig.~\ref{ex4:dtdof} on the right. The number of DoFs increases monotonically (refinement dominates coarsening) from the moment the front meets the first high-permeability rock until it meets the third one, and then the increase stops, Fig.~\ref{ex4:dtdof} on the left. This is meaningful since there is no sharp flow canalization after the third high-permeability rock.
\begin{figure} \caption{Example \ref{ex4}: Evolution of DoFs (left) and time-steps $\tau$ (right)} \label{ex4:dtdof} \end{figure}
\section{Conclusion}
We implemented a time-space adaptive algorithm for non-linear ADR equations, utilizing the elliptic reconstruction technique so that elliptic a posteriori error estimators can be used for convection-dominated parabolic problems with non-linear reaction mechanisms. We derived an a posteriori error estimator in the $L^{\infty}(L^2) + L^2(H^1)$-type norm using backward Euler in time and SIPG in space, and demonstrated the performance of the algorithm through numerical studies.
\end{document}
\begin{document} \def\spacingset#1{\renewcommand{\baselinestretch} {#1}\small\normalsize} \spacingset{1} \if00 { \title{\bf Functional Outlier Detection by a Local Depth with Application to NO$_{x}$ Levels} \author{ \textbf{Carlo Sguera}\\ \normalsize Department of Statistics, Universidad Carlos III de Madrid\\ \normalsize 28903 Getafe (Madrid), Spain\\ \normalsize(\texttt{[email protected]})\\ \textbf{Pedro Galeano} \\ \normalsize Department of Statistics, Universidad Carlos III de Madrid\\ \normalsize 28903 Getafe (Madrid), Spain\\ \normalsize(\texttt{[email protected]})\\ and \\ \textbf{Rosa E. Lillo} \\ \normalsize Department of Statistics, Universidad Carlos III de Madrid\\ \normalsize 28903 Getafe (Madrid), Spain\\ \normalsize(\texttt{[email protected]})\\ } \date{} \maketitle } \fi \if10 { \begin{center} {\LARGE\bf Functional Outlier Detection by a Local Depth with Application to NO$_{x}$ Levels} \end{center} } \fi
\begin{abstract} This paper proposes methods to detect outliers in functional data sets, where the task of identifying atypical curves is carried out using the recently proposed kernelized functional spatial depth (KFSD). KFSD is a local depth that can be used to order the curves of a sample from the most to the least central, and since outliers are usually among the least central curves, we present a probabilistic result which allows the selection of a threshold value for KFSD such that curves with depth values lower than the threshold are detected as outliers. Based on this result, we propose three new outlier detection procedures. The results of a simulation study show that our proposals generally outperform a battery of competitors. We apply our procedures to a real data set consisting of daily curves of emission levels of nitrogen oxides (NO$_{x}$), since it is of interest to identify abnormal NO$_{x}$ levels in order to take the necessary environmental policy actions. \end{abstract}
\noindent {\it Keywords:} Functional depths; Functional outlier detection; Kernelized functional spatial depth; Nitrogen oxides; Smoothed resampling. \spacingset{1.5}
\section{INTRODUCTION} \label{sec:intro}
The accurate identification of outliers is an important aspect of any statistical data analysis. Nowadays there are well-established outlier detection techniques in the univariate and multivariate frameworks (for a complete review of the topic, see for example \citeauthor{BarLew1994} \citeyear{BarLew1994}). In recent years, new types of data have become available and tractable thanks to the evolution of computing resources, e.g., big multivariate data sets having more variables than observations (high-dimensional multivariate data) or samples composed of repeated measurements of the same observation taken over an ordered set of points that can be interpreted as realizations of stochastic processes (functional data). In this paper we focus on functional data, which are usually studied with the tools provided by functional data analysis (FDA). For overviews of FDA methods, see \cite{RamSil2005}, \cite{FerVie2006}, \cite{HorKok2012} or \cite{Cue2014}. For environmental statistical problems tackled using FDA techniques, see for example \cite{IgnEtAl2015}, \cite{MenEtAl2014} and \cite{RuiEsp2012}. As in univariate or multivariate analysis, the detection of outliers is also fundamental in FDA.
According to \citeauthor{FebGalGon2007}\ (\citeyear{FebGalGon2007}, \citeyear{FebGalGon2008}), a functional outlier is a curve generated by a stochastic process with a different distribution than that of the normal curves. This definition covers many types of outliers, e.g., magnitude outliers, shape outliers and partial outliers, i.e., curves having atypical behaviors only in some segments of the domain. Shape and partial outliers are typically harder to detect than magnitude outliers (in the case of high magnitude, outliers can even be recognized by simply looking at a graph), and therefore entail more challenging outlier detection problems. In this paper we focus on samples contaminated by low magnitude, shape or partial outliers. Specifically, we consider a real data set consisting of daily nitrogen oxides (NO$_{x}$) emission levels measured in the Barcelona area (see \citeauthor{FebGalGon2008} \citeyear{FebGalGon2008} for a first analysis of this data set). Since NO$_{x}$ represent one of the most important pollutants, cause ozone formation and contribute to global warming, it is of interest to identify days with abnormally large NO$_{x}$ emissions in order to allow the implementation of actions able to control their causes, which are primarily the combustion processes generated by motor vehicles and industries. We propose to detect functional outliers using the notion of functional depth. A functional depth is a measure providing a $P$-based center-outward ordering criterion for observations of a functional space $\mathbb{H}$, where $P$ is a probability distribution on $\mathbb{H}$. When a sample of curves is available, a functional depth orders the curves from the most to the least central according to their depth values and, if any outlier is in the sample, its depth is expected to be among the lowest values. Therefore, it is reasonable to build outlier detection methods that use functional depths. \indent In this paper we enlarge the number of available functional outlier detection procedures by presenting three new methods based on a specific depth, the kernelized functional spatial depth (KFSD, \citeauthor{SguGalLil2014}\ \citeyear{SguGalLil2014}). KFSD is a local-oriented depth, that is, a depth which orders curves by looking at narrow neighborhoods and giving more weight to close than to distant curves. Its approach is opposite to that of global-oriented depths. Indeed, any global depth makes the depth of a given curve depend on all the remaining observations, with equal weights for all of them. This is the case of a global-oriented depth such as the functional spatial depth (FSD, \citeauthor{ChaCha2014} \citeyear{ChaCha2014}), of which KFSD is the local version. A local depth such as KFSD may be useful for analyzing functional samples having a structure deviating from unimodality or symmetry. Moreover, the local approach behind KFSD proved to be a good strategy in supervised classification problems with groups of curves that are not extremely clear-cut (see \citeauthor{SguGalLil2014} \citeyear{SguGalLil2014}). Here, we illustrate that KFSD ranks low magnitude, shape or partial outliers well, that is, their corresponding KFSD values are in general lower than those of normal curves. Then, we propose different procedures to select a threshold for KFSD to distinguish between normal curves and outliers.
These procedures employ smoothed resampling techniques and are based on a theoretical result which provides a probabilistic upper bound on the false alarm probability of detecting normal curves as outliers. Note that the probabilistic foundations of the proposed methods represent a novelty in FDA outlier detection problems. We study the performances of our procedures in a simulation study and by analyzing the NO$_{x}$ data set. We show this data set in Figure \ref{fig:NOxW}, where it is already possible to appreciate that the presence of partial outliers is an issue.
\begin{figure} \caption{NO$_{x}$ levels measured in $\mu g/m^{3}$ every hour of 76 working days between 23/02/2005 and 26/06/2005 in Poblenou, Barcelona.} \label{fig:NOxW} \end{figure}
We compare our methods with some alternative outlier detection procedures: \cite{FebGalGon2008} proposed to label as outliers those curves with depth values lower than a certain threshold. As functional depths, they considered the Fraiman and Muniz depth (\citeauthor{FraMun2001} \citeyear{FraMun2001}), the h-modal depth (\citeauthor{CueFebFra2006} \citeyear{CueFebFra2006}) and the integrated dual depth (\citeauthor{CueFra2009} \citeyear{CueFra2009}). To determine the depth threshold, they proposed two different bootstrap procedures based on depth-based trimmed and weighted resampling, respectively; \cite{SunGen2011} introduced the functional boxplot, which is constructed using the ranking of curves provided by the modified band depth (\citeauthor{LopRom2009} \citeyear{LopRom2009}). The proposed functional boxplot detects outliers using a rule that is similar to that of the standard boxplot; \cite{HynSha2010} proposed to reduce the outlier detection problem from functional to multivariate data by means of functional principal component analysis (FPCA), and to use two alternative multivariate techniques on the scores to detect outliers, i.e., the bagplot and the high density region boxplot, respectively. The remainder of the article is organized as follows. In Section \ref{sec:kfsd} we recall the definition of KFSD. In Section \ref{sec:odp} we consider the functional outlier detection problem. In Theorem \ref{th:inPaper01} we present the result on which the three new outlier detection methods, which employ KFSD as depth function, are based. In Section \ref{sec:simStudy} we report the results of our simulation study, whereas in Section \ref{sec:NOx} we perform outlier detection on the NO$_{x}$ data set. In Section \ref{sec:conc} we draw some conclusions. Finally, in the Appendix we report a sketch of the proof of Theorem \ref{th:inPaper01}.
\section{THE KERNELIZED FUNCTIONAL SPATIAL DEPTH} \label{sec:kfsd}
In functional spaces, a depth measure has the purpose of measuring the degree of centrality of curves relative to the distribution of a functional random variable. Various functional depths have been proposed following two alternative approaches: a global approach, which implies that the depth of an observation depends equally on all the observations allowed by $P$ on $\mathbb{H}$, and a local approach, which instead makes the depth of an observation depend more on close than on distant observations.
Among the existing global-oriented depths there are the Fraiman and Muniz depth (FMD, \citeauthor{FraMun2001} \citeyear{FraMun2001}), the random Tukey depth (RTD, \citeauthor{CueNie2008} \citeyear{CueNie2008}), the integrated dual depth (IDD, \citeauthor{CueFra2009} \citeyear{CueFra2009}), the modified band depth (MBD, \citeauthor{LopRom2009} \citeyear{LopRom2009}) and the functional spatial depth (FSD, \citeauthor{ChaCha2014} \citeyear{ChaCha2014}). Proposals of local-oriented depths are instead the h-modal depth (HMD, \citeauthor{CueFebFra2006}\ \citeyear{CueFebFra2006}) and the kernelized functional spatial depth (KFSD, \citeauthor{SguGalLil2014}\ \citeyear{SguGalLil2014}). In this paper we focus on KFSD. Before giving its definition, we recall the definition of the functional spatial depth (FSD, \citeauthor{ChaCha2014} \citeyear{ChaCha2014}). Let $\mathbb{H}$ be an infinite-dimensional Hilbert space. Then, for $x \in \mathbb{H}$ and a functional random variable $Y \in \mathbb{H}$, the FSD of $x$ relative to $Y$ is given by
\begin{equation*} FSD(x,Y) = 1 - \left\|\mathbb{E}\left[\frac{x-Y}{\left\|x-Y\right\|}\right]\right\|, \end{equation*}
\noindent where $\|\cdot\|$ is the norm inherited from the usual inner product in $\mathbb{H}$. For an $n$-size random sample of $Y$, i.e., $Y_{n} = \left\{y_{1}, \ldots, y_{n}\right\}$, the sample version of FSD has the following form:
\begin{equation} \label{eq:sampleFSD} FSD(x,Y_{n}) = 1 - \frac{1}{n}\left\|\sum_{i=1}^{n}\frac{x-y_{i}}{\left\|x-y_{i}\right\|}\right\|. \end{equation}
As mentioned before, FSD is a global-oriented depth and KFSD is a local version of it. KFSD is obtained by writing \eqref{eq:sampleFSD} in terms of inner products and then replacing the inner product function with a positive definite and stationary kernel function. This replacement exploits the relationship
\begin{equation}\label{eq:kappa_phi} \kappa(x,y) = \langle\phi(x), \phi(y)\rangle, \quad x, y \in \mathbb{H}, \end{equation}
\noindent where $\kappa$ is the kernel $\kappa: \mathbb{H} \times \mathbb{H} \rightarrow \mathbb{R}$, $\phi$ is the embedding map $\phi: \mathbb{H} \rightarrow \mathbb{F}$ and $\mathbb{F}$ is a feature space. Indeed, a definition of KFSD in terms of $\phi$ can be given, that is,
\begin{equation} \label{eq:popKFSD} KFSD(x,Y) = 1 - \left\|\mathbb{E}\left[\frac{\phi(x)-\phi(Y)}{\left\|\phi(x)-\phi(Y)\right\|}\right]\right\|, \end{equation}
\noindent and it can be interpreted as a recoded version of $FSD(x,Y)$ since $KFSD(x,Y)=$ \linebreak $FSD(\phi(x),\phi(Y))$. The sample version of \eqref{eq:popKFSD} is given by
\begin{equation*} KFSD(x,Y_{n}) = 1 - \frac{1}{n}\left\|\sum_{i=1}^{n}\frac{\phi(x)-\phi(y_{i})}{\left\|\phi(x)-\phi(y_{i})\right\|}\right\|. \end{equation*}
\noindent Then, standard calculations (see Appendix) and \eqref{eq:kappa_phi} allow us to express $KFSD(x,Y_{n})$ in terms of $\kappa$:
\begin{gather} KFSD(x, Y_{n}) = 1 - \nonumber \\ \label{eq:sampleKFSD} \frac{1}{n} \left(\sum_{\substack{i,j=1; \\ y_{i} \neq x; y_{j} \neq x}}^{n}\frac{\kappa(x,x)+\kappa(y_{i},y_{j})-\kappa(x,y_{i})-\kappa(x,y_{j})}{\sqrt{\kappa(x,x)+\kappa(y_{i},y_{i})-2\kappa(x,y_{i})}\sqrt{\kappa(x,x)+\kappa(y_{j},y_{j})-2\kappa(x,y_{j})}}\right)^{1/2}. \end{gather}
\noindent Note that \eqref{eq:sampleKFSD} only requires the choice of $\kappa$, and not of $\phi$, which can be left implicit.
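To fix ideas, the following minimal sketch (our own illustrative Python, not code from the paper) evaluates the sample version \eqref{eq:sampleKFSD} for curves discretized on a common grid. The kernel $\kappa$ is passed as an argument (for instance, the Gaussian kernel introduced next), and the Euclidean norm of the discretized curves is used as a proxy for the functional norm; all variable names are ours.
\begin{verbatim}
# Illustrative sketch of the sample KFSD in eq. (sampleKFSD); curves are 1-D
# numpy arrays on a common grid and kappa is a positive definite kernel.
import numpy as np

def sample_kfsd(x, Y, kappa):
    n = len(Y)
    Z = [y for y in Y if not np.allclose(y, x)]   # the double sum skips y_i = x
    kxx = kappa(x, x)
    total = 0.0
    for yi in Z:
        for yj in Z:
            num = kxx + kappa(yi, yj) - kappa(x, yi) - kappa(x, yj)
            den = (np.sqrt(kxx + kappa(yi, yi) - 2.0 * kappa(x, yi)) *
                   np.sqrt(kxx + kappa(yj, yj) - 2.0 * kappa(x, yj)))
            total += num / den
    return 1.0 - np.sqrt(total) / n

# Example call with a Gaussian kernel of (hypothetical) bandwidth sigma:
# val = sample_kfsd(x, Y, lambda a, b: np.exp(-np.sum((a - b)**2) / sigma**2))
\end{verbatim}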
As $\kappa$ we use the Gaussian kernel function given by
\begin{equation} \label{eq:ker} \kappa(x,y) = \exp\left(-\frac{\|x-y\|^2}{\sigma^2}\right), \end{equation}
\noindent where $x, y \in \mathbb{H}$. In turn, \eqref{eq:ker} depends on the norm function inherited from the functional Hilbert space where the data are assumed to lie, and on the bandwidth $\sigma$. Regarding $\sigma$, we initially consider 9 different values, each equal to one of 9 percentiles of the empirical distribution of $\left\{\|y_{i}-y_{j}\|, y_{i}, y_{j} \in Y_{n}\right\}$. The first percentile is 10\%, and by increments of 10 we obtain the ninth percentile, i.e., 90\%. Note that the lower $\sigma$, the more local the approach, and therefore the percentiles that we use cover different degrees of KFSD-based local approaches: strongly (e.g., 20\%), moderately (e.g., 50\%) and weakly (e.g., 80\%) local approaches. In Section \ref{sec:simStudy} we present a method to select $\sigma$ in outlier detection problems. In general, since any functional depth measures the degree of centrality or extremality of a given curve relative to a distribution or a sample, outliers are expected to have low depth values. More particularly, in the presence of low magnitude, shape or partial outliers, an approach based on the use of a local depth like KFSD may help in detecting outliers. To illustrate this fact, we present the following example: first, we generated 100 data sets of size 50 from a mixture of two stochastic processes, one for normal curves and one for high magnitude outliers, with the probability that a curve is an outlier equal to 0.05. Second, we generated a group of 100 data sets from a mixture which produces shape outliers. Finally, we generated a group of 100 data sets from a mixture which produces partial outliers. In Figure \ref{fig:globVSloc} we report a contaminated data set for each mixture.
\begin{figure} \caption{Examples of contaminated data sets: high magnitude contamination (top), shape contamination (middle) and partial contamination (bottom). The solid curves are normal curves and the dashed curves are outliers.} \label{fig:globVSloc} \end{figure}
Let $n_{out,j}, j=1, \ldots, 100$, be the number of outliers generated in the $j$th data set. For each data set and functional depth, it is desirable to assign the $n_{out,j}$ lowest depth values to the $n_{out,j}$ generated outliers. For each mixture and generated data set, we recorded how many times the depth of an outlier is among the $n_{out,j}$ lowest values. As depth functions, we considered five global depths (FMD, RTD, IDD, MBD and FSD) and two local depths (HMD and KFSD). The results reported in Table \ref{tab:example} show that for all the functional depths the ranking of high magnitude outliers is an easier task than the ranking of shape and partial outliers. However, while the ranking of high magnitude outliers is reasonably good in different cases, e.g., for the local KFSD (94.87\%) and the global RTD (90.17\%), the ranking of shape and partial outliers is markedly better with local depths (shape: 86.72\% for KFSD and 85.47\% for HMD; partial: 82.03\% for KFSD and 81.25\% for HMD) than with the best global depths (shape: FSD with 39.06\%; partial: FSD with 46.48\%). These results suggest that, with a properly selected threshold, KFSD can isolate outliers well.
\begin{table}[!htbp] \captionsetup{justification=justified,width=0.5\textwidth} \caption{Percentages of times a depth assigns a value among the $n_{out,j}$ lowest ones to an outlier.
Types of outliers: high magnitude, shape and partial.} \centering \scalebox{.75}{ \begin{tabular}{l|ccccc|cc} \hline type of depths & \multicolumn{5}{c}{global depths} \vline & \multicolumn{2}{c}{local depths} \\ \hline depths & FMD & RTD & IDD & MBD & FSD & HMD & KFSD \\ \hline high magnitude outliers & 86.32 & 90.17 & 81.62 & 69.23 & 68.80 & 85.47 & 94.87 \\ shape outliers & 7.81 & 33.59 & 38.67 & 12.11 & 39.06 & 85.94 & 86.72 \\ partial outliers & 18.75 & 44.53 & 34.77 & 19.14 & 46.48 & 81.25 & 82.03\\ \hline \end{tabular}} \label{tab:example} \end{table}
\section{OUTLIER DETECTION FOR FUNCTIONAL DATA} \label{sec:odp}
The outlier detection problem can be described as follows: let $Y_{n}= \left\{y_{1}, \ldots, y_{n}\right\}$ be a sample generated from a mixture of two functional random variables in $\mathbb{H}$, one for normal curves and one for outliers, say $Y_{nor}$ and $Y_{out}$, respectively. Let $Y_{mix}$ be such a mixture, i.e.,
\begin{equation} \label{eq:mix} Y_{mix} = \left\{ \begin{array}{ll} Y_{nor}, & \mbox{with probability } 1-\alpha, \\ Y_{out}, & \mbox{with probability } \alpha, \end{array} \right. \end{equation}
\noindent where $\alpha \in [0,1]$ is the contamination probability (usually, a value rather close to 0). The curves composing $Y_{n}$ are all unlabeled, and the goal of the analysis is to decide whether each curve is a normal curve or an outlier.\\ \indent KFSD is a functional extension of the kernelized spatial depth for multivariate data (KSD) proposed by \cite{CheDanPenBar2009}, who also proposed a KSD-based outlier detector that we generalize to KFSD: for a given data set $Y_{n}$ generated from $Y_{mix}$ and $t \in [0,1]$, the KFSD-based outlier detector for $x \in \mathbb{H}$ is given by
\begin{equation} \label{eq:g_KFSD_no_b} g(x,Y_{n}) = \left\{ \begin{array}{ll} 1, & \mbox{if } KFSD(x,Y_{n}) \leq t, \\ 0, & \mbox{if } KFSD(x,Y_{n}) > t, \end{array} \right. \end{equation}
\noindent where $t$ is a threshold which allows discrimination between outliers (i.e., $g(x,Y_{n})=1$) and normal curves (i.e., $g(x,Y_{n})=0$), and it is a parameter that needs to be set. For the multivariate case, KSD-based outlier detection is carried out under different scenarios. One of them consists of an outlier detection problem where two samples are available and the threshold $t$ is selected by controlling the probability that normal observations are classified as outliers, i.e., the false alarm probability (FAP). The selection criterion is based on a result providing a KSD-based probabilistic upper bound on the FAP which depends on $t$. Then, the threshold for KSD is given by the maximum value of $t$ such that the upper bound does not exceed a given desired FAP. We extend this result to KFSD:
\begin{theorem}\label{th:inPaper01} Let $Y_{n_{Y}} = \left\{y_{1}, \ldots, y_{n_{Y}}\right\}$ and $Z_{n_{Z}} = \left\{z_{1}, \ldots, z_{n_{Z}}\right\}$ be i.\ i.\ d.\ samples generated from the unknown mixture of random variables $Y_{mix} \in \mathbb{H}$ described by (\ref{eq:mix}), with $\alpha > 0$. Let $g(\cdot,Y_{n_{Y}})$ be the outlier detector defined in (\ref{eq:g_KFSD_no_b}). Fix $\delta \in (0,1)$ and suppose that $\alpha \leq r$ for some $r \in [0,1]$.
For a new random element $x$ generated from $Y_{nor}$, the following inequality holds with probability at least $1-\delta$:
\begin{equation} \label{eq:ineThe01} \mathbb{E}_{x | Y_{n_{Y}}}\left[g(x,Y_{n_{Y}})\right] \leq \frac{1}{1-r} \left[\frac{1}{n_{Z}} \sum_{i=1}^{n_{Z}} g\left(z_{i},Y_{n_{Y}}\right) + \sqrt{\frac{\ln{1/\delta}}{2 n_{Z}}} \right], \end{equation}
\noindent where $\mathbb{E}_{x | Y_{n_{Y}}}$ refers to the expected value of $x$ for a given $Y_{n_{Y}}$. \end{theorem}
The proof of Theorem \ref{th:inPaper01} is presented in the Appendix. Recall that the FAP is the probability that a normal observation $x$ is classified as an outlier. For the elements of Theorem \ref{th:inPaper01}, $\mathrm{Pr}_{x | Y_{n_{Y}}}\left(g(x,Y_{n_{Y}})=1\right)$ is the FAP. Moreover,
\begin{equation*} \mathrm{Pr}_{x | Y_{n_{Y}}}\left(g(x,Y_{n_{Y}})=1\right) = \mathbb{E}_{x | Y_{n_{Y}}}\left[g(x,Y_{n_{Y}})\right]. \end{equation*}
\noindent Therefore, the probabilistic upper bound of Theorem \ref{th:inPaper01} also applies to the FAP. It is worth noting that the application of Theorem \ref{th:inPaper01} requires observing two samples, a circumstance rather uncommon in classical outlier detection problems, in which usually a single sample generated from an unknown mixture of random variables is available. For this reason, we propose a solution which allows the use of Theorem \ref{th:inPaper01} in the presence of a unique sample. Note that the general idea behind it also holds in the multivariate framework, and therefore it would enable KSD-based outlier detection when only one $\mathbb{R}^{d}$-sample is available. In the functional context, our solution consists in setting $Y_{n_{Y}} = Y_{n}$ and in obtaining $Z_{n_{Z}}$ by resampling with replacement from $Y_{n}$. Note that by doing this, and for sufficiently large values of $n_{Z}$, we also obtain that the effect of $\delta$ on the probabilistic upper bound is drastically reduced. Concerning $r$, that is, the upper bound for the unknown contamination probability $\alpha$, a range between 0 and 0.1 appears to be appropriate to cover most of the situations found in practice. Regarding the resampling procedure to obtain $Z_{n_{Z}}$, we consider three different schemes, all of them with replacement. Since we deal with potentially contaminated data sets, besides simple resampling, we also consider two robust KFSD-based resampling procedures inspired by the work of \cite{FebGalGon2008}. The three resampling schemes that we consider are:
\begin{compactenum} \item\label{it:sim} Simple resampling. \item\label{it:tri} KFSD-based trimmed resampling: once $KFSD(y_{i},Y_{n}), i=1,\ldots,n$ are obtained, it is possible to identify the $100\,\alpha_{T}\%$ least deep curves, for a certain $0 < \alpha_{T} < 1$ usually close to 0, which we advise setting equal to $r$. These least deep curves are deleted from the sample, and simple resampling is carried out with the remaining curves. \item\label{it:wei} KFSD-based weighted resampling: once $KFSD(y_{i},Y_{n}), i=1,\ldots,n$ are obtained, weighted resampling is carried out with weights $w_{i} = KFSD(y_{i},Y_{n})$. \end{compactenum}
\noindent All the above procedures generate samples with some repeated curves. However, in a preliminary stage of our study we observed that it is preferable to work with $Z_{n_{Z}}$ composed of non-repeated curves. To obtain such samples, we add a common smoothing step to the previous three resampling schemes.
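Once $Z_{n_{Z}}$ is available, inequality \eqref{eq:ineThe01} can be turned into a threshold selection rule, which is made precise after the description of the smoothing step: $t$ is taken as the largest value for which the right-hand side of \eqref{eq:ineThe01} does not exceed a desired FAP. The following minimal sketch (our own illustrative Python, not the authors' code) anticipates this rule, assuming that the KFSD values of the curves in $Y_{n}$ and of the resampled curves with respect to $Y_{n}$ have already been computed; scanning the observed depth values as candidate thresholds is our simplification.
\begin{verbatim}
# Illustrative threshold selection based on the bound (ineThe01).
# kfsd_Y: KFSD values of the curves in Y_n (used as candidate thresholds);
# kfsd_Z: KFSD values of the resampled curves in Z w.r.t. Y_n.
import numpy as np

def select_threshold(kfsd_Y, kfsd_Z, r, delta, fap):
    kfsd_Z = np.asarray(kfsd_Z)
    slack = np.sqrt(np.log(1.0 / delta) / (2.0 * kfsd_Z.size))
    t_star = None
    for t in np.sort(np.asarray(kfsd_Y)):
        rhs = (np.mean(kfsd_Z <= t) + slack) / (1.0 - r)  # RHS of (ineThe01)
        if rhs <= fap:
            t_star = t          # keep the largest t satisfying the bound
    return t_star               # None if no candidate threshold satisfies it

# Curves y_i with KFSD(y_i, Y_n) <= t_star are then flagged as outliers,
# as in the detector g of eq. (g_KFSD_no_b).
\end{verbatim}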
To describe the smoothing step, first recall that each curve in $Y_{n}$ is in practice observed at a discretized and finite set of domain points, and that the sets may differ from one curve to another. For this reason, the estimation of $Y_{n}$ at a common set of $m$ equidistant domain points may be required. Let $\left(y_{i}(s_{1}), \ldots,y_{i}(s_{m})\right)$ be the observed or estimated $m$-dimensional equidistant discretized version of $y_{i}$, $\Sigma_{Y_{n}}$ be the covariance matrix of the discretized form of $Y_{n}$ and $\gamma$ be a smoothing parameter. Consider a zero-mean Gaussian process whose discretized form has $\gamma\Sigma_{Y_{n}}$ as covariance matrix. Let $\left(\zeta(s_{1}), \ldots, \zeta(s_{m})\right)$ be a discretized realization of the previous Gaussian process. Consider any of the previous three resampling procedures and assume that at the $j$th trial, $j=1, \ldots, n_{Z}$, the $i$th curve in $Y_{n}$ has been sampled. Then, the discretized form of the $j$th curve in $Z_{n_{Z}}$ would be given by $\left(z_{j}(s_{1}), \ldots, z_{j}(s_{m})\right) = \left(y_{i}(s_{1})+\zeta(s_{1}), \ldots, y_{i}(s_{m})+\zeta(s_{m})\right)$, or, in functional form, by $z_{j} = y_{i} + \zeta$. Therefore, combining each resampling scheme with this smoothing step, we provide three different approximate ways to obtain $Z_{n_{Z}}$, and we refer to them as $smo$, $tri$ and $wei$, respectively. Then, for fixed $\delta$, $r$ and desired FAP, the threshold $t$ for \eqref{eq:g_KFSD_no_b} is selected as the maximum value of $t$ such that the right-hand side of \eqref{eq:ineThe01} does not exceed the desired FAP. Let $t^{*}$ be the selected threshold, which is then used in \eqref{eq:g_KFSD_no_b} to compute $g\left(y_{i}, Y_{n}\right)$, $i=1, \ldots, n$. If $g\left(y_{i}, Y_{n}\right) = 1$, $y_{i}$ is detected as an outlier. To summarize, we provide three KFSD-based outlier detection procedures, and we refer to them as KFSD$_{smo}$, KFSD$_{tri}$ and KFSD$_{wei}$ depending on how $Z_{n_{Z}}$ is obtained ($smo$, $tri$ and $wei$, respectively; recall that $Y_{n_{Y}}=Y_{n}$). As competitors of the proposed procedures, we consider the methods mentioned in Section \ref{sec:intro}, which we now describe.\\ \indent \cite{SunGen2011} proposed a depth-based functional boxplot and an associated outlier detection rule based on the ranking of the sample curves that MBD provides. The ranking is used to define a sample central region, that is, the smallest band containing at least half of the deepest curves. The non-outlying region is defined by inflating the central region by 1.5 times. Curves that do not belong completely to the non-outlying region are detected as outliers. The original functional boxplot is based on the use of MBD as depth, but clearly any functional depth can be used. Another contribution of this paper is the study of the performances of the outlier detection rule associated with the functional boxplot (from now on, FBP) when used together with the battery of functional depths mentioned in Section \ref{sec:kfsd}.\\ \indent \cite{FebGalGon2008} proposed two depth-based outlier detection procedures that select a threshold for FMD, HMD or IDD by means of two alternative robust smoothed bootstrap procedures whose single bootstrap samples are obtained using the above described $tri$ and $wei$, respectively. At each bootstrap sample, the 1\% percentile of the empirical distribution of the depth values is obtained, say $p_{0.01}$. If $B$ is the number of bootstrap samples, $B$ values of $p_{0.01}$ are obtained.
Each method selects as cutoff $c$ the median of the collection of $p_{0.01}$ values and, using $c$ as threshold, a first outlier detection is performed. If some curves are detected as outliers, they are deleted from the sample, and the procedure is repeated until no more outliers are found (note that $c$ is computed only in the first iteration). We refer to these methods as B$_{tri}$ and B$_{wei}$, and also in this case we evaluate these procedures using all the functional depths mentioned in Section \ref{sec:kfsd}.\\ \indent Finally, we also consider two procedures proposed by \cite{HynSha2010} that are not based on the use of a functional depth. Both are based on the first two robust functional principal component scores and on two different graphical representations of them. The first proposal is the outlier detection rule associated with the functional bagplot (from now on, FBG), which works as follows: obtain the bivariate robust scores and order them using the multivariate halfspace depth (\citeauthor{Tuk1975} \citeyear{Tuk1975}). Define an inner region by considering the smallest region containing at least 50\% of the deepest scores, and obtain a non-outlying region by inflating the inner region by 2.58 times. FBG detects as outliers those curves whose scores are outside the non-outlying region. Note that the score-based regions and outliers allow drawing a bivariate bagplot, which produces a functional bagplot once it is mapped onto the original functional space. The second proposal is related to a different graphical tool, the high density region boxplot (from now on, we refer to its associated outlier detection rule as FHD). In this case, once the scores are obtained, a bivariate kernel density estimation is performed. Define the $(1-\beta)$-high density region (HDR), $\beta \in (0,1)$, as the region of scores with coverage probability equal to $(1-\beta)$. FHD detects as outliers those curves whose scores are outside the $(1-\beta)$-HDR. In this case, it is possible to draw a bivariate HDR boxplot which can be mapped onto a functional version, thus providing the functional HDR boxplot.
\section{SIMULATION STUDY} \label{sec:simStudy}
After introducing KFSD$_{smo}$, KFSD$_{tri}$ and KFSD$_{wei}$, their competitors (FBP, B$_{tri}$, B$_{wei}$, FBG and FHD), as well as seven different functional depths (FMD, HMD, RTD, IDD, MBD, FSD and KFSD), in this section we carry out a simulation study to evaluate the performances of the different methods. For FBP, B$_{tri}$ and B$_{wei}$, we use the notation procedure+depth: for example, FBP+FMD refers to the method obtained by using FBP together with FMD.\\ \indent To perform our simulation study, we consider six models: all of them generate curves according to the mixture of random variables $Y_{mix}$ described by \eqref{eq:mix}. The first three mixture models (MM1, MM2 and MM3) share $Y_{nor}$, with curves generated by
\begin{equation} \label{eq:Y_nor_123} y(s) = 4s + \epsilon(s), \end{equation}
\noindent where $s \in [0,1]$ and $\epsilon(s)$ is a zero-mean Gaussian component with covariance function given by
\begin{equation*} \mathbb{E}[\epsilon(s)\epsilon(s^{\prime})] = 0.25 \exp{(-(s-s^{\prime})^2)}, \quad s,s^{\prime} \in [0, 1].
\end{equation*} \noindent The remaining three mixture models (MM4, MM5 and MM6) also share $Y_{nor}$, but, in this case, the curves are generated by
\begin{equation} \label{eq:Y_nor_456} y(s) = u_{1}\sin s + u_{2}\cos s, \end{equation}
\noindent where $s \in [0,2\pi]$ and $u_{1}$ and $u_{2}$ are observations from a continuous uniform random variable between 0.05 and 0.15.\\ \indent MM1, MM2 and MM3 differ in their $Y_{out}$ components. Under MM1, the outliers are generated by
\begin{equation*} y(s) = 8s - 2 + \epsilon(s), \end{equation*}
\noindent which produces outliers of both shape and low magnitude nature. Under MM2, the outliers are generated by adding to \eqref{eq:Y_nor_123} an observation from a $N(0,1)$, and as a result the outliers are more irregular than the normal curves. Finally, under MM3, the outliers are generated by
\begin{equation*} y(s) = 4\exp(s) + \epsilon(s), \end{equation*}
\noindent which produces curves that are normal in the first part of the domain, but that become exponentially outlying.\\ \indent Similarly, MM4, MM5 and MM6 differ in their $Y_{out}$ components. Under MM4, the outliers are generated by replacing $u_{2}$ with $u_{3}$ in \eqref{eq:Y_nor_456}, where $u_{3}$ is an observation from a continuous uniform random variable between 0.15 and 0.17. This change produces partial low magnitude outliers in the first and middle part of the domain of the curves. Under MM5, the outliers are generated by adding to \eqref{eq:Y_nor_456} an observation from a $N(0,\left(\frac{0.1}{2}\right)^{2})$, and they turn out to be more irregular curves. Finally, under MM6, the outliers are generated by
\begin{equation} \label{eq:Y_out_6} y(s) = u_{1}\sin s + \exp\left(\frac{0.69s}{2\pi}\right) u_{4} \cos s, \end{equation}
\noindent where $u_{4}$ is an observation from a continuous uniform random variable between 0.1 and 0.15. As under MM3, MM6 produces outliers that are normal in the first part of the domain and become outlying with an exponential pattern. In Figure \ref{fig:simStudy} we report a simulated data set with at least one outlier for each mixture model.
\begin{figure} \caption{Examples of contaminated functional data sets generated by MM1, MM2, MM3, MM4, MM5 and MM6. Solid curves are normal curves and dashed curves are outliers.} \label{fig:simStudy} \end{figure}
The details of the simulation study are the following: for each mixture model, we generated 100 data sets, each one composed of 50 curves. As mentioned above, for such single samples Theorem \ref{th:inPaper01} cannot be applied directly, and therefore KFSD$_{smo}$, KFSD$_{tri}$ and KFSD$_{wei}$ represent practical alternatives. Two values of the contamination probability $\alpha$ were considered: 0.02 and 0.05. All curves were generated using a discretized and finite set of 51 equidistant points in the domain of each mixture model ($[0,1]$ for MM1, MM2 and MM3; $[0, 2\pi]$ for MM4, MM5 and MM6), and the discretized versions of the functional depths were used.\\ \indent Regarding the methods and the functional depths that we consider in the study, their specifications are as follows:
\begin{compactenum} \item FBP when used with FMD, HMD, RTD, IDD, MBD, FSD and KFSD: regarding FBP, as reported in Section \ref{sec:odp}, the central region is built considering the 50\% deepest curves, and the non-outlying region is obtained by inflating the central region by 1.5 times.
Regarding the depths, for HMD we follow the recommendations in \cite{FebGalGon2008}, that is, $\mathbb{H}$ is the $L^2$ space, $\kappa(x,y) = \frac{2}{\sqrt{2\pi}}\exp\left(-\frac{\|x-y\|^2}{2h^2}\right)$ and $h$ is equal to the 15\% percentile of the empirical distribution of $\left\{\|y_{i}-y_{j}\|, y_{i}, y_{j} \in Y_{n}\right\}$. For RTD and IDD, we work with 50 projections in random Gaussian directions. For MBD, we consider bands defined by two curves. For FSD and KFSD, we assume that the curves lie in the $L^{2}$ space. Moreover, in KFSD we set $\sigma$ equal to a moderately local percentile (50\%) of the empirical distribution of $\left\{\|y_{i}-y_{j}\|, y_{i}, y_{j} \in Y_{n}\right\}$. \item B$_{tri}$ and B$_{wei}$ when used with FMD, HMD, RTD, IDD, MBD, FSD and KFSD: $\gamma=0.05$, $B=200$, $\alpha_{T}=\alpha$. Regarding the depths, we use the specifications reported for FBP. \item FBG: as reported in Section \ref{sec:odp}, the central region is built considering the 50\% deepest bivariate robust functional principal component scores, and the non-outlying region is obtained by inflating the central region by 2.58 times. \item FHD: $\beta = \alpha$. \item KFSD$_{smo}$, KFSD$_{tri}$ and KFSD$_{wei}$: $n_{Y}=n=50$ (since $Y_{n_{Y}}=Y_{n}$), $\gamma=0.05$, $\alpha_{T}=\alpha$ (only for KFSD$_{tri}$), $n_{Z}=6n$, $\delta=0.05$, $r=\alpha$, desired FAP = 0.10. Moreover, as introduced in Section \ref{sec:kfsd}, for these methods we consider 9 percentiles to set $\sigma$ in KFSD. The way in which we propose to choose the most suitable percentile for outlier detection is presented below. \end{compactenum}
In supervised classification, the availability of training curves with known class memberships makes possible the definition of some natural procedures to set $\sigma$ for KFSD, such as cross-validation. However, in an outlier detection problem, it is common to have no information on whether curves are normal or outliers. Therefore, training procedures are not immediately available.\\ \indent We propose to overcome this drawback by obtaining a ``training sample of peripheral curves", and then choosing the percentile that best ranks these peripheral curves as the final percentile for KFSD in KFSD$_{smo}$, KFSD$_{tri}$ and KFSD$_{wei}$. We now describe this procedure, which is based on $J$ replications. Let $Y_{n}$ be the functional data set on which outlier detection has to be performed and let $Y_{(n)} = \left\{y_{(1)}, \ldots, y_{(n)}\right\}$ be the depth-based ordered version of $Y_{n}$, where $y_{(1)}$ and $y_{(n)}$ are the curves with minimum and maximum depth, respectively. The steps to obtain a set of peripheral curves are the following:
\begin{compactenum}[I.] \item\label{ite:i} Let $\left\{p_{1},\ldots,p_{K}\right\}$ be the set of percentiles in use (in our case, as explained in Section \ref{sec:kfsd}, $p_{k}=(10k)\%$, $k \in \left\{1,\ldots,K=9\right\})$, and choose randomly a percentile from the set. For the $j$th replication, $j \in \left\{1,\ldots,J\right\}$, denote the selected percentile as $p^{j}$. We use $J=20$ in the rest of the paper. \item\label{ite:ii} Using $p^{j}$, compute $KFSD_{p^{j}}(y_{i},Y_{n})$, $i=1, \ldots, n$, where the notation $KFSD_{p^{j}}(\cdot,\cdot)$ is used to indicate which percentile is used. For the $j$th replication, denote the KFSD-based ordered curves as $y_{(1),j}, \ldots, y_{(n),j}$. \item\label{ite:iii} Take $y_{(1),j}, \ldots, y_{(l_{j}),j}$, where $l_{j} \sim Bin(n,\frac{1}{n})$. Apply the smoothing step described in Section \ref{sec:odp} to these curves.
For the smoothing step, we use $\Sigma_{Y_{n}}$ and $\gamma=0.05$. For the $j$th replication, denote the peripheral and smoothed curves as $y_{(1),j}^{*}, \ldots, y_{(l_{j}),j}^{*}$. \item\label{ite:iv} Repeat steps \ref{ite:i}.-\ref{ite:iii}. $J$ times to obtain a collection of $L=\sum_{j=1}^{J}l_{j}$ peripheral curves, say $Y_{L}$ (for an example, see Figure \ref{fig:trainPeri}). \end{compactenum}
\begin{figure} \caption{Example of a training sample of peripheral curves for a contaminated data set generated by MM1 with $\alpha=0.05$. The solid and shaded curves are the original curves (both normal and outliers). The dashed curves are the peripheral curves to use as training sample.} \label{fig:trainPeri} \end{figure}
Next, $Y_{L}$ acts as a training sample according to the following steps: for each $y_{(i),j}^{*} \in Y_{L}$, ($i \leq l_{j}$), and $p_{k} \in \left\{p_{1},\ldots,p_{K}\right\}$, compute $KFSD_{p_{k}}(y_{(i),j}^{*},Y_{-(i),j})$, where $Y_{-(i),j} = Y_{n} \setminus \left\{y_{(i),j}\right\}$. At the end, an $L \times K$ matrix is obtained, say $D_{LK}=\left\{d_{lk}\right\}_{\substack{l=1,\ldots,L\\k=1,\ldots,K}}$, whose $k$th column is composed of the KFSD values of the $L$ training peripheral curves when the $k$th percentile is employed in KFSD. Next, let $r_{lk}$ be the rank of $d_{lk}$ in the vector $\left(KFSD_{p_{k}}(y_{1},Y_{n}),\ldots,KFSD_{p_{k}}(y_{n},Y_{n}),\right.$ $\left. d_{lk} \right)$, e.g., $r_{lk}$ is equal to 1 or $n+1$ if $d_{lk}$ is the minimum or the maximum value in the vector, respectively. Let $R_{LK}$ be the result of this transformation of $D_{LK}$, and sum the elements of each column, obtaining a $K$-dimensional vector, say $\mathbf{R}_{K}$. Since the goal is to assign ranks as low as possible to the peripheral curves, we choose the percentile associated with the minimum value of $\mathbf{R}_{K}$. When a tie is observed, we break it randomly.\\ \indent The comparison among methods is performed in terms of both correct and false outlier detection percentages, which are reported in Tables \ref{tab:MM01_I}-\ref{tab:MM06_I}. To ease the reading of the tables, for each model and $\alpha$, we report in bold the 5 best methods in terms of correct outlier detection percentage (c).\footnote{In the presence of a tie, the method with the lower false outlier detection percentage (f) is preferred.} For each model, if a method is among the 5 best ones for both contamination probabilities $\alpha$, we report its label in bold.
\begin{table}[!htbp] \parbox{.475\textwidth}{ \captionsetup{justification=justified,width=0.475\textwidth} \caption{MM1, $\alpha=\left\{0.02,0.05\right\}$.
Correct (c) and false (f) outlier detection percentages of FBP, B$_{tri}$, B$_{wei}$, FBG, FHD, KFSD$_{smo}$, KFSD$_{tri}$ and KFSD$_{wei}$.} \centering \scalebox{.60}{ \begin{tabular}{l|cc|cc} \hline & \multicolumn{2}{c}{$\alpha=0.02$} & \multicolumn{2}{|c}{$\alpha=0.05$}\\ \hline & c & f & c & f\\ \hline FBP+FMD & 44.34 & 1.23 & 43.86 & 0.73\\ FBP+HMD & \textbf{74.53} & 0.94 & 72.81 & 0.61\\ FBP+RTD & 61.32 & 0.57 & 63.16 & 0.31\\ FBP+IDD & 55.66 & 0.61 & 61.84 & 0.34\\ FBP+MBD & 49.06 & 1.33 & 50.44 & 0.69\\ FBP+FSD & 62.26 & 0.67 & 61.84 & 0.40\\ FBP+KFSD & 66.04 & 0.86 & \textbf{74.12} & 0.44\\ \hline B$_{tri}$+FMD & 0.00 & 0.98 & 0.00 & 1.82 \\ B$_{tri}$+HMD & 66.98 & 1.45 & 57.89 & 1.47 \\ B$_{tri}$+RTD & 10.38 & 1.78 & 14.91 & 1.76 \\ B$_{tri}$+IDD & 10.38 & 1.55 & 11.84 & 1.74 \\ B$_{tri}$+MBD & 0.00 & 0.51 & 0.00 & 1.49 \\ B$_{tri}$+FSD & 2.83 & 0.76 & 5.26 & 1.17 \\ B$_{tri}$+KFSD & 70.75 & 1.43 & 58.77 & 1.40 \\ \hline B$_{wei}$+FMD & 0.00 & 1.29 & 0.00 & 1.49 \\ B$_{wei}$+HMD & 71.70 & 1.02 & 47.37 & 0.65 \\ B$_{wei}$+RTD & 13.21 & 2.04 & 13.60 & 1.78 \\ B$_{wei}$+IDD & 17.92 & 1.82 & 10.53 & 1.55 \\ B$_{wei}$+MBD & 0.00 & 1.08 & 0.00 & 1.40 \\ B$_{wei}$+FSD & 2.83 & 1.39 & 3.95 & 1.07 \\ B$_{wei}$+KFSD & 61.32 & 0.88 & 55.26 & 0.48 \\ \hline \textbf{FBG} & \textbf{100.00} & 2.27 & \textbf{97.81} & 2.37\\ \hline FHD & 48.11 & 1.00 & 73.68 & 2.77\\ \hline \textbf{KFSD$_{smo}$} & \textbf{89.62} & 4.50 & \textbf{85.09} & 2.58 \\ \textbf{KFSD$_{tri}$} & \textbf{89.62} & 4.92 & \textbf{92.11} & 4.40 \\ \textbf{KFSD$_{wei}$} & \textbf{97.17} & 9.44 & \textbf{96.93} & 6.54 \\ \hline \end{tabular}} \label{tab:MM01_I} } \parbox{.475\textwidth}{ \captionsetup{justification=justified,width=0.475\textwidth} \caption{MM2, $\alpha=\left\{0.02,0.05\right\}$. 
Correct (c) and false (f) outlier detection percentages of FBP, B$_{tri}$, B$_{wei}$, FBG, FHD, KFSD$_{smo}$, KFSD$_{tri}$ and KFSD$_{wei}$.} \centering \scalebox{.60}{ \begin{tabular}{l|cc|cc} \hline & \multicolumn{2}{c}{$\alpha=0.02$} & \multicolumn{2}{|c}{$\alpha=0.05$}\\ \hline & c & f & c & f\\ \hline FBP+FMD & 99.09 & 1.08 & \textbf{96.39} & 0.84\\ FBP+HMD & 96.36 & 0.96 & 96.39 & 0.88\\ FBP+RTD & \textbf{99.09} & 0.61 & 94.78 & 0.25\\ FBP+IDD & 99.09 & 0.70 & 95.18 & 0.38\\ FBP+MBD & 99.09 & 1.06 & \textbf{96.39} & 0.82\\ FBP+FSD & \textbf{99.09} & 0.57 & 94.78 & 0.36\\ FBP+KFSD & 98.18 & 0.63 & 93.98 & 0.36\\ \hline B$_{tri}$+FMD & 0.00 & 1.06 & 0.00 & 1.96 \\ B$_{tri}$+HMD & 95.45 & 1.51 & \textbf{96.79} & 1.68 \\ B$_{tri}$+RTD & 1.82 & 1.92 & 6.83 & 2.61 \\ B$_{tri}$+IDD & 5.45 & 1.60 & 7.63 & 1.94 \\ B$_{tri}$+MBD & 0.00 & 0.98 & 0.40 & 2.10 \\ B$_{tri}$+FSD & 4.55 & 1.06 & 5.22 & 1.62 \\ B$_{tri}$+KFSD & 97.27 & 1.60 & 95.18 & 1.52 \\ \hline B$_{wei}$+FMD & 0.00 & 1.27 & 0.00 & 1.52 \\ B$_{wei}$+HMD & 95.45 & 1.02 & 86.35 & 0.36 \\ B$_{wei}$+RTD & 5.45 & 2.21 & 8.43 & 2.84 \\ B$_{wei}$+IDD & 7.27 & 1.49 & 9.64 & 2.36 \\ B$_{wei}$+MBD & 0.00 & 1.27 & 0.40 & 1.49 \\ B$_{wei}$+FSD & 8.18 & 1.39 & 4.02 & 1.37 \\ B$_{wei}$+KFSD & 95.45 & 0.96 & 79.52 & 0.51 \\ \hline FBG & 8.18 & 3.07 & 4.42 & 2.95\\ \hline FHD & 7.27 & 1.88 & 12.45 & 5.66\\ \hline KFSD$_{smo}$ & \textbf{100.00} & 3.91 & 95.18 & 2.76 \\ \textbf{KFSD$_{tri}$} & \textbf{100.00} & 5.19 & \textbf{97.99} & 4.84 \\ \textbf{KFSD$_{wei}$} & \textbf{100.00} & 9.20 & \textbf{99.60} & 6.48 \\ \hline \end{tabular}} \label{tab:MM02_I} } \end{table} \begin{table}[!htbp] \parbox{.475\textwidth}{ \captionsetup{justification=justified,width=0.475\textwidth} \caption{MM3, $\alpha=\left\{0.02,0.05\right\}$. 
Correct (c) and false (f) outlier detection percentages of FBP, B$_{tri}$, B$_{wei}$, FBG, FHD, KFSD$_{smo}$, KFSD$_{tri}$ and KFSD$_{wei}$.} \centering \scalebox{.60}{ \begin{tabular}{l|cc|cc} \hline & \multicolumn{2}{c}{$\alpha=0.02$} & \multicolumn{2}{|c}{$\alpha=0.05$}\\ \hline & c & f & c & f\\ \hline FBP+FMD & 65.69 & 0.92 & 49.19 & 0.97\\ \textbf{FBP+HMD} & \textbf{89.22} & 0.57 & \textbf{85.89} & 0.63\\ FBP+RTD & 86.27 & 0.45 & 76.61 & 0.34\\ FBP+IDD & 79.41 & 0.51 & 70.56 & 0.38\\ FBP+MBD & 74.51 & 0.88 & 59.27 & 0.84\\ FBP+FSD & 79.41 & 0.51 & 73.79 & 0.42\\ \textbf{FBP+KFSD} & \textbf{89.22} & 0.57 & \textbf{83.06} & 0.59\\ \hline B$_{tri}$+FMD & 2.94 & 0.73 & 5.24 & 1.22 \\ B$_{tri}$+HMD & 57.84 & 1.57 & 53.63 & 1.56 \\ B$_{tri}$+RTD & 15.69 & 1.76 & 21.37 & 1.81 \\ B$_{tri}$+IDD & 20.59 & 1.65 & 20.56 & 1.70 \\ B$_{tri}$+MBD & 0.98 & 1.06 & 3.23 & 1.54 \\ B$_{tri}$+FSD & 16.67 & 1.14 & 17.34 & 1.22 \\ B$_{tri}$+KFSD & 57.84 & 1.63 & 49.19 & 1.52 \\ \hline B$_{wei}$+FMD & 2.94 & 1.10 & 3.63 & 0.84 \\ B$_{wei}$+HMD & 60.78 & 1.25 & 42.74 & 0.76 \\ B$_{wei}$+RTD & 15.69 & 1.92 & 17.34 & 1.73 \\ B$_{wei}$+IDD & 23.53 & 1.33 & 14.52 & 1.22 \\ B$_{wei}$+MBD & 0.98 & 1.29 & 2.82 & 1.14 \\ B$_{wei}$+FSD & 15.69 & 1.16 & 12.10 & 0.84 \\ B$_{wei}$+KFSD & 56.86 & 1.12 & 41.53 & 0.67 \\ \hline FBG & 86.27 & 2.65 & \textbf{78.63} & 1.73\\ \hline FHD & 49.02 & 1.02 & 65.73 & 2.88\\ \hline KFSD$_{smo}$ & \textbf{89.22} & 3.90 & 73.79 & 2.95 \\ \textbf{KFSD$_{tri}$} & \textbf{90.20} & 4.63 & \textbf{83.47} & 4.71 \\ \textbf{KFSD$_{wei}$} & \textbf{97.06} & 8.96 & \textbf{90.32} & 6.50 \\ \hline \end{tabular}} \label{tab:MM03_I} } \parbox{.475\textwidth}{ \captionsetup{justification=justified,width=0.475\textwidth} \caption{MM4, $\alpha=\left\{0.02,0.05\right\}$. 
Correct (c) and false (f) outlier detection percentages of FBP, B$_{tri}$, B$_{wei}$, FBG, FHD, KFSD$_{smo}$, KFSD$_{tri}$ and KFSD$_{wei}$.} \centering \scalebox{.60}{ \begin{tabular}{l|cc|cc} \hline & \multicolumn{2}{c}{$\alpha=0.02$} & \multicolumn{2}{|c}{$\alpha=0.05$}\\ \hline & c & f & c & f\\ \hline FBP+FMD & 1.02 & 0.00 & 0.00 & 0.00\\ FBP+HMD & 6.12 & 0.00 & 1.60 & 0.02\\ FBP+RTD & 0.00 & 0.00 & 0.00 & 0.00\\ FBP+IDD & 0.00 & 0.00 & 0.00 & 0.00\\ FBP+MBD & 0.00 & 0.00 & 0.00 & 0.00\\ FBP+FSD & 0.00 & 0.00 & 0.00 & 0.00\\ FBP+KFSD & 2.04 & 0.00 & 0.80 & 0.00\\ \hline B$_{tri}$+FMD & 60.20 & 0.16 & \textbf{47.60} & 0.11 \\ B$_{tri}$+HMD & 41.84 & 0.04 & 18.80 & 0.17 \\ B$_{tri}$+RTD & 54.08 & 1.16 & 34.80 & 0.82 \\ B$_{tri}$+IDD & 55.10 & 1.02 & 37.20 & 0.59 \\ \textbf{B$_{tri}$+MBD} & \textbf{64.29} & 0.14 & \textbf{46.40} & 0.13 \\ B$_{tri}$+FSD & \textbf{68.37} & 0.14 & 45.60 & 0.08 \\ B$_{tri}$+KFSD & 58.16 & 0.20 & 28.00 & 0.13 \\ \hline B$_{wei}$+FMD & 51.02 & 0.12 & 23.60 & 0.00 \\ B$_{wei}$+HMD & 38.78 & 0.06 & 10.80 & 0.02 \\ B$_{wei}$+RTD & 37.76 & 0.49 & 25.20 & 0.15 \\ B$_{wei}$+IDD & 43.88 & 0.67 & 28.00 & 0.42 \\ B$_{wei}$+MBD & 56.12 & 0.10 & 25.20 & 0.02 \\ B$_{wei}$+FSD & 63.27 & 0.06 & 29.20 & 0.00 \\ B$_{wei}$+KFSD & 58.16 & 0.12 & 21.20 & 0.00 \\ \hline FBG & 9.18 & 0.53 & 6.80 & 1.09\\ \hline FHD & 51.02 & 1.02 & 37.60 & 4.34\\ \hline \textbf{KFSD$_{smo}$} & \textbf{87.76} & 2.16 & \textbf{50.00} & 1.24 \\ \textbf{KFSD$_{tri}$} & \textbf{91.84} & 3.00 & \textbf{64.80} & 2.91 \\ \textbf{KFSD$_{wei}$} & \textbf{95.92} & 5.08 & \textbf{62.00} & 3.35 \\ \hline \end{tabular}} \label{tab:MM04_I} } \end{table} \begin{table}[!htbp] \parbox{.475\textwidth}{ \captionsetup{justification=justified,width=0.475\textwidth} \caption{MM5, $\alpha=\left\{0.02,0.05\right\}$. 
Correct (c) and false (f) outlier detection percentages of FBP, B$_{tri}$, B$_{wei}$, FBG, FHD, KFSD$_{smo}$, KFSD$_{tri}$ and KFSD$_{wei}$.} \centering \scalebox{.60}{ \begin{tabular}{l|cc|cc} \hline & \multicolumn{2}{c}{$\alpha=0.02$} & \multicolumn{2}{|c}{$\alpha=0.05$}\\ \hline & c & f & c & f\\ \hline FBP+FMD & 55.56 & 0.00 & 54.00 & 0.00\\ FBP+HMD & 66.67 & 0.00 & 68.40 & 0.04\\ FBP+RTD & 57.58 & 0.00 & 54.40 & 0.00\\ FBP+IDD & 52.53 & 0.00 & 56.00 & 0.00\\ FBP+MBD & 55.56 & 0.00 & 55.20 & 0.00\\ FBP+FSD & 55.56 & 0.00 & 55.60 & 0.00\\ FBP+KFSD & 60.61 & 0.00 & 59.20 & 0.00\\ \hline B$_{tri}$+FMD & 3.03 & 0.18 & 2.80 & 0.44 \\ \textbf{B$_{tri}$+HMD} & \textbf{97.98} & 0.12 & \textbf{92.40} & 0.11 \\ B$_{tri}$+RTD & 16.16 & 1.06 & 20.00 & 1.03 \\ B$_{tri}$+IDD & 18.18 & 1.06 & 16.00 & 1.07 \\ B$_{tri}$+MBD & 2.02 & 0.16 & 3.20 & 0.32 \\ B$_{tri}$+FSD & 29.29 & 0.18 & 27.20 & 0.23 \\ B$_{tri}$+KFSD & 93.94 & 0.24 & \textbf{92.40} & 0.21 \\ \hline B$_{wei}$+FMD & 3.03 & 0.29 & 2.40 & 0.23 \\ B$_{wei}$+HMD & \textbf{93.94} & 0.08 & 73.60 & 0.00 \\ B$_{wei}$+RTD & 15.15 & 1.06 & 17.60 & 1.12 \\ B$_{wei}$+IDD & 25.25 & 0.98 & 20.00 & 0.99 \\ B$_{wei}$+MBD & 2.02 & 0.20 & 3.60 & 0.21 \\ B$_{wei}$+FSD & 29.29 & 0.14 & 21.60 & 0.13 \\ B$_{wei}$+KFSD & 83.84 & 0.08 & 72.00 & 0.04 \\ \hline FBG & 0.00 & 1.02 & 0.40 & 0.04\\ \hline FHD & 4.04 & 1.96 & 12.80 & 5.64\\ \hline \textbf{KFSD$_{smo}$} & \textbf{98.99} & 1.82 & \textbf{94.00} & 0.44 \\ \textbf{KFSD$_{tri}$} & \textbf{98.99} & 2.61 & \textbf{98.00} & 2.11 \\ \textbf{KFSD$_{wei}$} & \textbf{100.00} & 4.61 & \textbf{98.40} & 2.11 \\ \hline \end{tabular}} \label{tab:MM05_I} } \parbox{.475\textwidth}{ \captionsetup{justification=justified,width=0.475\textwidth} \caption{MM6, $\alpha=\left\{0.02,0.05\right\}$. 
Correct (c) and false (f) outlier detection percentages of FBP, B$_{tri}$, B$_{wei}$, FBG, FHD, KFSD$_{smo}$, KFSD$_{tri}$ and KFSD$_{wei}$.} \centering \scalebox{.60}{ \begin{tabular}{l|cc|cc} \hline & \multicolumn{2}{c}{$\alpha=0.02$} & \multicolumn{2}{|c}{$\alpha=0.05$}\\ \hline & c & f & c & f\\ \hline
FBP+FMD & 48.42 & 0.00 & 44.19 & 0.00 \\ FBP+HMD & 60.00 & 0.18 & \textbf{62.92} & 0.00\\ FBP+RTD & 55.79 & 0.00 & 54.68 & 0.00\\ FBP+IDD & 46.32 & 0.00 & 40.07 & 0.00\\ FBP+MBD & 48.42 & 0.00 & 45.69 & 0.00\\ FBP+FSD & 52.63 & 0.00 & 52.43 & 0.00\\ FBP+KFSD & 57.89 & 0.00 & 56.93 & 0.00\\ \hline
B$_{tri}$+FMD & 29.47 & 0.22 & 33.71 & 0.32 \\ B$_{tri}$+HMD & \textbf{71.58} & 0.24 & 45.69 & 0.15 \\ B$_{tri}$+RTD & 35.79 & 0.82 & 31.09 & 0.51 \\ B$_{tri}$+IDD & 38.95 & 0.37 & 35.96 & 0.74 \\ B$_{tri}$+MBD & 29.47 & 0.24 & 31.09 & 0.32 \\ B$_{tri}$+FSD & 52.63 & 0.20 & 43.82 & 0.19 \\ B$_{tri}$+KFSD & \textbf{71.58} & 0.22 & 50.56 & 0.21 \\ \hline
B$_{wei}$+FMD & 23.16 & 0.24 & 19.48 & 0.08 \\ B$_{wei}$+HMD & 68.42 & 0.12 & 35.96 & 0.00 \\ B$_{wei}$+RTD & 38.95 & 0.69 & 24.34 & 0.51 \\ B$_{wei}$+IDD & 33.68 & 0.59 & 25.09 & 0.40 \\ B$_{wei}$+MBD & 24.21 & 0.18 & 19.85 & 0.13 \\ B$_{wei}$+FSD & 47.37 & 0.16 & 27.72 & 0.08 \\ B$_{wei}$+KFSD & 66.32 & 0.12 & 44.19 & 0.06 \\ \hline
FBG & 17.89 & 0.02 & 14.98 & 0.06\\ \hline FHD & 52.63 & 1.02 & \textbf{61.80} & 2.85\\ \hline \textbf{KFSD$_{smo}$} & \textbf{91.58} & 2.08 & \textbf{71.16} & 0.95 \\ \textbf{KFSD$_{tri}$} & \textbf{93.68} & 2.69 & \textbf{82.02} & 2.49 \\ \textbf{KFSD$_{wei}$} & \textbf{96.84} & 4.69 & \textbf{83.15} & 2.75 \\ \hline \end{tabular}} \label{tab:MM06_I} } \end{table} The results in Tables \ref{tab:MM01_I}-\ref{tab:MM06_I} show that: \begin{compactenum} \item KFSD$_{tri}$ and KFSD$_{wei}$ are always among the 5 best methods. KFSD$_{smo}$ is among the 5 best methods 10 times out of 12, and when its performance is not among the 5 best, it is never far from that of the fifth method (MM2, $\alpha=0.05$: 95.18\% against 96.79\%; MM3, $\alpha=0.05$: 73.79\% against 78.63\%). The remaining methods are among the 5 best procedures at most 4 times out of 12 (FBP+HMD and B$_{tri}$+HMD). \item Regarding MM5 and MM6, our procedures are clearly the best options in terms of correct detection (c), in the following order: KFSD$_{wei}$, KFSD$_{tri}$ and KFSD$_{smo}$. In general, this pattern is observed throughout the simulation study. Note that for MM6 and $\alpha=0.02$ we observe the best relative performances of KFSD$_{smo}$, KFSD$_{tri}$ and KFSD$_{wei}$, i.e., 91.58\%, 93.68\% and 96.84\%, respectively, against 71.58\% for the fourth best methods (B$_{tri}$+HMD and B$_{tri}$+KFSD), that is, differences of at least 20 percentage points. \item Regarding MM3, KFSD$_{wei}$ is clearly the best method in terms of correct detection, although at the price of a higher false detection percentage (f). This is in general the main weak point of KFSD$_{smo}$, KFSD$_{tri}$ and KFSD$_{wei}$. As for correct detection, we observe an overall pattern for our methods in false detection, but in the opposite direction, indicating a trade-off between c and f. Relatively high false detection percentages are, however, to be expected for KFSD$_{smo}$, KFSD$_{tri}$ and KFSD$_{wei}$, since these methods are based on the definition of a desired false alarm probability, which is equal to 10\% in this study.
Concerning MM2, we observe similar results to MM3, but in this case the performances of the best methods in terms of correct detection (KFSD$_{smo}$, KFSD$_{tri}$, KFSD$_{wei}$, the FBP-based methods and B$_{tri}$ when used with local depths) are closer to each other. Finally, there are only 2 cases in which a competitor outperforms all our methods, namely FBG under MM1 for both values of $\alpha$. However, this procedure does not show as stable a behavior as KFSD$_{smo}$, KFSD$_{tri}$ and KFSD$_{wei}$ do. Indeed, FBG shows poor performances under other models, e.g., MM2. \end{compactenum} In summary, the above results and remarks show that the proposed KFSD-based procedures are the best methods at detecting outliers for the considered models. Moreover, KFSD$_{tri}$ seems the most reasonable choice to balance the mentioned trade-off between c and f. In terms of correct detection, KFSD$_{wei}$ slightly outperforms KFSD$_{tri}$, which however shows very good and stable performances when compared with the remaining methods. In terms of false detection, KFSD$_{tri}$ considerably improves on KFSD$_{wei}$, especially under some models (e.g., see MM2). \begin{figure} \caption{Boxplots of the percentiles selected in the training steps of the simulation study for KFSD$_{smo}$, KFSD$_{tri}$ and KFSD$_{wei}$.} \label{fig:simBoxPlot} \end{figure} In Figure \ref{fig:simBoxPlot} we report a series of boxplots summarizing which percentiles have been selected in the training steps for KFSD$_{smo}$, KFSD$_{tri}$ and KFSD$_{wei}$, and the following general remarks can be made. First, MM6 is the mixture model for which the lowest percentiles have been selected, and it is also a scenario in which our methods considerably outperform their competitors. The need for a more local approach for MM6 data may explain both of these observations. Second, lower and more local percentiles have been chosen for mixture models with nonlinear mean functions (MM4, MM5 and MM6) than for mixture models with linear mean functions (MM1, MM2 and MM3). Finally, the percentiles selected by means of the proposed training procedure seem to vary among data sets. However, except for MM3 and $\alpha=0.02$, for at least half of the data sets a percentile not greater than the median has been chosen, which implies at most a moderately local approach. \section{REAL DATA STUDY: NITROGEN OXIDES (NO$_{x}$) DATA} \label{sec:NOx} Besides simulated data, we consider a real data set which consists of nitrogen oxides (NO$_{x}$) emission level daily curves, measured every hour close to an industrial area in Poblenou (Barcelona), and which is available in the \texttt{R} package \texttt{fda.usc} (\citeauthor{FebOvi2012} \citeyear{FebOvi2012}). Outlier detection on this data set was first performed by \cite{FebGalGon2008}, where these authors proposed B$_{tri}$ and B$_{wei}$. We extend their study by considering more methods and depths. NO$_{x}$ are among the most important pollutants, and it is important to identify outlying trajectories because these curves may compromise any statistical analysis, be of special interest for further analysis, or help in implementing environmental policy countermeasures. The NO$_{x}$ levels that we consider were measured in $\mu g/m^{3}$ every hour of every day for the period 23/02/2005-26/06/2005. All 24 hourly measurements are available for only 115 days of this period, and these are the days that compose the final NO$_{x}$ data set.
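As a purely illustrative aside (not part of the original analysis), the following minimal sketch shows one way a data set of this kind could be assembled, assuming the raw measurements were exported to a hypothetical long-format file \texttt{nox\_hourly.csv} with columns \texttt{day}, \texttt{hour} and \texttt{nox}; the curves analyzed here are in fact taken directly from the \texttt{fda.usc} package.
\begin{verbatim}
import pandas as pd

# Hypothetical raw file with one row per (day, hour) measurement.
raw = pd.read_csv("nox_hourly.csv")   # columns: day, hour, nox

# Keep only the days with all 24 hourly values, then pivot to a
# (number of complete days) x 24 matrix of daily curves.
counts = raw.groupby("day")["hour"].nunique()
complete_days = counts[counts == 24].index
curves = (raw[raw["day"].isin(complete_days)]
          .pivot(index="day", columns="hour", values="nox"))
\end{verbatim}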
Moreover, following \cite{FebGalGon2008}, since the NO$_{x}$ data set includes working as well as nonworking days, it seems more appropriate to consider a first sample of 76 working day curves (from now on, W) and a second sample of 39 nonworking day curves (from now on, NW). Both W and NW are shown in Figure \ref{fig:NOx}, where it is possible to appreciate at least two facts that justify the split of the original data set. First, the W curves have in general higher values than the NW curves, which can be explained by the greater activity of motor vehicles and industries in a city like Barcelona during working days. Second, both data sets contain curves with peaks, but for W curves the peaks occur roughly around 7-8 a.m. and on many days, whereas for NW curves the peaks occur later and on fewer days, which again can be explained by the differences in Barcelona's economic activity on working and nonworking days. \begin{figure} \caption{NO$_{x}$ data: working (top) and nonworking (bottom) day curves.} \label{fig:NOx} \end{figure} At first glance, each data set may contain outliers, especially partial outliers in the form of abnormal peaks, and therefore a local depth approach by means of KFSD$_{smo}$, KFSD$_{tri}$ and KFSD$_{wei}$ appears to be a good strategy to detect outliers. In addition to these, we perform outlier detection with all the methods used in Section \ref{sec:simStudy}. For all the procedures we use the same specifications as in Section \ref{sec:simStudy}, and we assume $\alpha=0.05$. For each method, we report the labels of the curves detected as outliers in Table \ref{tab:NOx} and we highlight these curves in Figure \ref{fig:NOxIC}. \begin{table}[!htbp] \captionsetup{justification=justified,width=0.5\textwidth} \caption{NO$_{x}$ data, Working and Nonworking data sets. Curves detected as outliers by FBP, B$_{tri}$, B$_{wei}$, FBG, FHD, KFSD$_{smo}$, KFSD$_{tri}$ and KFSD$_{wei}$.} \centering \scalebox{.64}{ \begin{tabular}{l|c|c} \hline & working days & nonworking days\\ \hline & \multicolumn{2}{c}{detected outliers}\\ \hline
FBP+FMD & - & -\\ FBP+HMD & 12, 16, 37 & 5, 7, 20, 21\\ FBP+RTD & 37 & 20\\ FBP+IDD & - & 5, 7, 20\\ FBP+MBD & - & -\\ FBP+FSD & 37 & -\\ FBP+KFSD & 12, 16, 37 & 5, 7, 20, 21\\ \hline
B$_{tri}$+FMD & 16, 37 & 7\\ B$_{tri}$+HMD & 14, 16, 37 & 7, 20\\ B$_{tri}$+RTD & 16 & 7, 20\\ B$_{tri}$+IDD & 16, 37 & 7, 20\\ B$_{tri}$+MBD & 16, 37 & 7\\ B$_{tri}$+FSD & 14, 16, 37 & -\\ B$_{tri}$+KFSD & 12, 14, 16, 37 & 7, 20\\ \hline
B$_{wei}$+FMD & 16 & 7, 20\\ B$_{wei}$+HMD & 16, 37 & 7, 20\\ B$_{wei}$+RTD & 16 & -\\ B$_{wei}$+IDD & 16, 37 & 20\\ B$_{wei}$+MBD & 16 & 7\\ B$_{wei}$+FSD & 16, 37 & -\\ B$_{wei}$+KFSD & 16, 37 & 7, 20\\ \hline
FBG & 16, 37 & -\\ \hline FHD & 12, 14, 16, 37 & 7, 20\\ \hline KFSD$_{smo}$ & 14, 16, 37 & 7, 20, 21\\ KFSD$_{tri}$ & 12, 14, 16, 37 & 7, 20, 21\\ KFSD$_{wei}$ & 11, 12, 13, 14, 15, 16, 37, 38 & 7, 20, 21\\ \hline \end{tabular}} \label{tab:NOx} \end{table} \begin{figure} \caption{NO$_{x}$ data set, curves detected as outliers in Table \ref{tab:NOx}: working (top) and nonworking (bottom) days.} \label{fig:NOxIC} \end{figure} Concerning W, most of the methods detect day 37 as an outlier; this is the Friday at the beginning of the long weekend due to Labor Day in 2005, and its curve shows a partial outlying behavior before noon and at the end of the day. Another day detected as an outlier by many methods is day 16, another Friday before a long weekend (the Easter holidays in 2005), whose curve has the highest morning peak.
In addition to curves 16 and 37, KFSD$_{smo}$ detects curve 14 as an outlier, as nine other methods do, recognizing a seemingly outlying pattern in the early hours of the day. Additionally, KFSD$_{tri}$ also includes day 12 among the outliers, which may be atypical because of its behavior in the early afternoon. Note that both days 12 and 14 fall in the week before the above-mentioned Easter holidays. Finally, KFSD$_{wei}$ detects the greatest number of curves as outliers. This last result may appear exaggerated, but all the curves that are outliers according to KFSD$_{wei}$ seem to have some partial deviations from the majority of curves. For example, day 13, whose curve is considered normal by the rest of the procedures, shows a peak at the end of the day. Similar peaks can also be observed in other curves detected as outliers by other methods (e.g., days 16 and 37), which suggests that a masking effect may be occurring to day 13's detriment, and only KFSD$_{wei}$ points out this possibly outlying feature of the curve. Regarding the training step for KFSD to set $\sigma$, it selects the 70\% percentile. Observing the first graph of Figure \ref{fig:NOx}, it can be noticed that some curves have a likely outlying behavior, and this may be the reason why a weakly local approach for KFSD is adequate. In the case of NW, some methods detect no curves as outliers (e.g., all the FSD-based methods), only three FBP-based methods flag day 5 as an outlier, whereas days 7, 20 and 21 are detected as outliers by our methods as well as by others. Note that day 7 is the Saturday before Easter, and days 20 and 21 are the eve of Labor Day and Labor Day itself. Days 7 and 20, which have two peaks, at the beginning and at the end of the day, are also flagged by twelve and eight other methods, respectively, while day 21, which shows a single peak in the first hours of the day, is considered atypical by only two other methods, which happen to be local (FBP+HMD and FBP+KFSD). This last result may be connected with what has been observed at the KFSD training step for selecting the percentile, i.e., the selection of the 30\% percentile. Therefore, KFSD$_{smo}$, KFSD$_{tri}$ and KFSD$_{wei}$ work with a strongly local percentile, and their results partially resemble those of the previously mentioned local techniques. \section{CONCLUSIONS} \label{sec:conc} This paper proposes to tackle outlier detection in functional samples using the kernelized functional spatial depth as a tool. In Theorem \ref{th:inPaper01} we presented a probabilistic result allowing one to set a KFSD-threshold to identify outliers, but in practice it is necessary to observe two samples to apply Theorem \ref{th:inPaper01}. To overcome this practical limitation, we proposed KFSD$_{smo}$, KFSD$_{tri}$ and KFSD$_{wei}$, which are methods that can be applied when a single functional sample is available and are based on both a probabilistic approach and smoothed resampling techniques. We also proposed a new procedure to set the bandwidth $\sigma$ of KFSD that is based on obtaining training samples by means of smoothed resampling techniques. The general idea behind this procedure can be applied to other functional depths or methods with parameters that need to be set. We investigated the performances of KFSD$_{smo}$, KFSD$_{tri}$ and KFSD$_{wei}$ by means of a simulation study. We focused on challenging scenarios with low magnitude, shape and partial outliers instead of high magnitude outliers. The results support our proposals.
Throughout the simulation study, KFSD$_{smo}$, KFSD$_{tri}$ and KFSD$_{wei}$ attained the best correct detection performances in most of the analyzed setups, but in some cases they paid a price in terms of false detection. However, KFSD$_{smo}$, KFSD$_{tri}$ and KFSD$_{wei}$ work with a given desired false alarm probability, and therefore higher false detection percentages than their competitors are due to the inherent structure of the methods. We also observed a trade-off between c and f for KFSD$_{smo}$, KFSD$_{tri}$ and KFSD$_{wei}$, with a clear ordering of the three methods in both respects. For these reasons, in our opinion KFSD$_{tri}$ should be preferred to KFSD$_{smo}$ or KFSD$_{wei}$ since it performs extremely well in terms of correct detection, while it has lower false detection percentages than KFSD$_{wei}$. Concerning the remaining methods, there are competitors that in a few scenarios outperformed our methods. However, in these few cases the differences are not great, and in addition these competitors are not stable across the considered scenarios. Furthermore, we also showed that our procedures can be applied in environmental contexts with an example where the goal was to detect outlying NO$_{x}$ curves in order to identify days possibly characterized by abnormal pollution levels. To conclude, we present two possible future research lines. First, since KFSD is a depth whose local approach is in part based on the choice of the kernel function, it would be interesting to explore how the choice of different kernels affects the behavior of KFSD. Moreover, each kernel will depend on a bandwidth and a norm. For the selection of the bandwidth, we used a criterion based on the study of the empirical distribution of the sample distances, but alternatives should be investigated, for example an adaptation of the so-called Silverman's rule (\citeauthor{Sil1986} \citeyear{Sil1986}) for selecting the bandwidth of a kernel-based functional depth such as KFSD. For the choice of the norm, a sensitivity study would help in understanding how important the functional space assumption is. Second, since outlier detection can be seen as a special case of cluster analysis (it is a clustering problem with at most two clusters, one of them much smaller than the other, possibly even empty), a natural step ahead in our research may be the definition of KFSD-based cluster analysis procedures. { \section*{ACKNOWLEDGMENTS} The authors would like to thank the editor in chief, the associate editor and an anonymous referee for their helpful comments. This research was partially supported by Spanish Ministry of Science and Innovation grant ECO2011-25706 and by Spanish Ministry of Economy and Competitiveness grant ECO2012-38442. } \appendix \section{Appendix} \subsection{From $FSD(x, Y_{n})$ to $KFSD(x, Y_{n})$} \label{sec:app01} To show how to pass from $FSD(x, Y_{n})$ in \eqref{eq:sampleFSD} to $KFSD(x, Y_{n})$ in \eqref{eq:sampleKFSD}, we first show that $FSD(x, Y_{n})$ can be expressed in terms of inner products. We present this result for $n=2$.
The norm in \eqref{eq:sampleFSD} can be written as \begin{equation*} \begin{array}{ll} \left\|\sum_{i=1}^{2}\frac{x-y_{i}}{\|x-y_{i}\|}\right\|^{2} &= \left\|\frac{x-y_{1}}{\|x-y_{1}\|}+\frac{x-y_{2}}{\|x-y_{2}\|}\right\|^{2}\\ &= \left\|\frac{x-y_{1}}{\sqrt{\langle x,x\rangle+\langle y_{1},y_{1}\rangle-2\langle x,y_{1}\rangle}}+\frac{x-y_{2}}{\sqrt{\langle x,x\rangle+\langle y_{2},y_{2}\rangle-2\langle x,y_{2}\rangle}}\right\|^{2} \end{array} \end{equation*} \noindent Let $\delta_{1}=\sqrt{\langle x,x\rangle+\langle y_{1},y_{1}\rangle-2\langle x,y_{1}\rangle}$ and $\delta_{2}=\sqrt{\langle x,x\rangle+\langle y_{2},y_{2}\rangle-2\langle x,y_{2}\rangle}$. Then, \begin{equation*} \begin{array}{ll} \left\|\sum_{i=1}^{2}\frac{x-y_{i}}{\|x-y_{i}\|}\right\|^{2} &= \left\|\frac{x-y_{1}}{\delta_{1}}+\frac{x-y_{2}}{\delta_{2}}\right\|^{2} \\ &= \left\|\frac{x-y_{1}}{\delta_{1}}\right\|^{2}+\left\|\frac{x-y_{2}}{\delta_{2}}\right\|^{2} + \frac{2}{\delta_{1}\delta_{2}}\langle x-y_{1},x-y_{2}\rangle \\ &= 2 + \frac{2}{\delta_{1}\delta_{2}}(\langle x,x\rangle+\langle y_{1},y_{2}\rangle-\langle x,y_{1}\rangle-\langle x,y_{2}\rangle)\\ &= \sum_{i,j=1}^{2}\frac{\langle x,x\rangle+\langle y_{i},y_{j}\rangle-\langle x,y_{i}\rangle-\langle x,y_{j}\rangle}{\delta_{i}\delta_{j}}, \end{array} \end{equation*} \noindent and we can now apply the embedding map $\phi$ to all the observations in the last expression. According to \eqref{eq:kappa_phi}, this is equivalent to substituting the inner product function with a positive definite and stationary kernel function $\kappa$, which explains the definition of $KFSD(x, Y_{n})$ in \eqref{eq:sampleKFSD} for $n=2$. The generalization of this result to $n>2$ is straightforward. \subsection{Proof of Theorem \ref{th:inPaper01}} \label{sec:app02} As explained in Section \ref{sec:odp}, Theorem \ref{th:inPaper01} is a functional extension of a result derived by \cite{CheDanPenBar2009} for KSD, and since they are closely related, we next report a sketch of the proof of Theorem \ref{th:inPaper01}. The proof for KSD is mostly based on an inequality known as \citeauthor{Mcd1989}'s inequality (\citeauthor{Mcd1989} \citeyear{Mcd1989}), which also applies to general probability spaces, and therefore to functional Hilbert spaces. We report this inequality in the next lemma: \begin{lemma}\label{le:01} \textbf{(\citeauthor{Mcd1989} \citeyear{Mcd1989} [1.2])} Let $\Omega_{1}, \ldots, \Omega_{n}$ be probability spaces. Let $\mathbf{\Omega} = \prod_{j=1}^{n} \Omega_{j}$ and let $X: \mathbf{\Omega} \rightarrow \mathbb{R}$ be a random variable. For any $j \in \left\{1, \ldots, n\right\}$, let $(\omega_{1}, \ldots, \omega_{j}, \ldots,$ $\omega_{n})$ and $\left(\omega_{1}, \ldots, \hat{\omega}_{j}, \ldots, \omega_{n}\right)$ be two elements of $\mathbf{\Omega}$ that differ only in their $j$th coordinates. Assume that $X$ is uniformly difference-bounded by $\{c_j\}$, that is, for any $j \in \left\{1, \ldots, n\right\}$, \begin{equation} \label{eq:lem01_c_j} \left|X\left(\omega_{1}, \ldots, \omega_{j}, \ldots, \omega_{n}\right)-X\left(\omega_{1}, \ldots, \hat{\omega}_{j}, \ldots, \omega_{n}\right)\right| \leq c_{j}. \end{equation} \noindent Then, if $\mathbb{E}[X]$ exists, for any $\tau > 0$ \begin{equation*} \mathrm{Pr}\left(X-\mathbb{E}[X] \geq \tau \right) \leq \exp \left(\frac{-2\tau^2}{\sum_{j=1}^{n} c_{j}^{2}}\right).
\end{equation*} \end{lemma} In order to apply Lemma \ref{le:01} to our problem, define \begin{equation} \label{eq:lem01_X_omega} X(z_{1},\ldots,z_{n_{Z}}) = - \frac{1}{n_{Z}}\sum_{i=1}^{n_{Z}}g(z_{i},Y_{n_{Y}}|Y_{n_{Y}}), \end{equation} \noindent whose expected value is given by \begin{equation} \label{eq:lem01_EX} \mathbb{E}[X] = \mathbb{E}_{z_{i}|Y_{n_{Y}}}\left[- \frac{1}{n_{Z}}\sum_{i=1}^{n_{Z}}g(z_{i},Y_{n_{Y}}|Y_{n_{Y}})\right] = - \mathbb{E}_{z_{1}|Y_{n_{Y}}}\left[g(z_{1},Y_{n_{Y}}|Y_{n_{Y}})\right]. \end{equation} Now, for any $j \in \left\{1, \ldots, n_{Z}\right\}$ and $\hat{z}_{j} \in \mathbb{H}$, the following inequality holds \begin{equation*} \left|X(z_{1},\ldots,z_{j},\ldots,z_{n_{Z}}) - X(z_{1},\ldots,\hat{z}_{j},\ldots,z_{n_{Z}})\right| \leq \frac{1}{n_{Z}}, \end{equation*} \noindent and it provides assumption \eqref{eq:lem01_c_j} of Lemma \ref{le:01}. Therefore, for any $\tau > 0$ \begin{equation*} \mathrm{Pr}\left(\mathbb{E}_{z_{1}|Y_{n_{Y}}}\left[g(z_{1},Y_{n_{Y}}|Y_{n_{Y}})\right] - \frac{1}{n_{Z}}\sum_{i=1}^{n_{Z}}g(z_{i},Y_{n_{Y}}|Y_{n_{Y}}) \geq \tau\right) \leq \exp\left(-2n_{Z}\tau^{2}\right), \end{equation*} \noindent and by the law of total probability \begin{equation*} \begin{array}{l} \mathbb{E}\left[\mathrm{Pr}\left(\mathbb{E}_{z_{1}|Y_{n_{Y}}}\left[g(z_{1},Y_{n_{Y}}|Y_{n_{Y}})\right] - \frac{1}{n_{Z}}\sum_{i=1}^{n_{Z}}g(z_{i},Y_{n_{Y}}|Y_{n_{Y}}) \geq \tau\right)\right] \\ = \mathrm{Pr}\left(\mathbb{E}_{z_{1}|Y_{n_{Y}}}\left[g(z_{1},Y_{n_{Y}})\right] - \frac{1}{n_{Z}}\sum_{i=1}^{n_{Z}}g(z_{i},Y_{n_{Y}}) \geq \tau\right) \leq \exp\left(-2n_{Z}\tau^{2}\right)\\ \end{array} \end{equation*} Next, setting $\delta = \exp\left(-2n_{Z}\tau^{2}\right)$, and solving for $\tau$, the following result is obtained: \begin{equation*} \tau = \sqrt{\frac{\ln 1/\delta}{2n_{Z}}}. \end{equation*} \noindent Therefore, \begin{equation} \label{eq:pr_z1} \mathrm{Pr}\left(\mathbb{E}_{z_{1}|Y_{n_{Y}}}\left[g(z_{1},Y_{n_{Y}})\right] \leq \frac{1}{n_{Z}}\sum_{i=1}^{n_{Z}}g(z_{i},Y_{n_{Y}}) + \sqrt{\frac{\ln 1/\delta}{2n_{Z}}}\right) \geq 1-\delta. \end{equation} However, Theorem \ref{th:inPaper01} provides a probabilistic upper bound for $\mathbb{E}_{x|Y_{n_{Y}}}\left[g(x,Y_{n_{Y}})\right]$. First, recall that $z_{1} \sim Y_{mix}$ and note that \begin{equation*} \mathbb{E}_{(z_{1} \sim Y_{mix})|Y_{n_{Y}}}\left[g\left(z_{1},Y_{n_{Y}}\right)\right] = (1-\alpha)\mathbb{E}_{(z_{1} \sim Y_{nor})|Y_{n_{Y}}}\left[g\left(z_{1},Y_{n_{Y}}\right)\right] + \alpha\mathbb{E}_{(z_{1} \sim Y_{out})|Y_{n_{Y}}}\left[g\left(z_{1},Y_{n_{Y}}\right)\right]. \end{equation*} \noindent Then, since $\mathbb{E}_{(z_{1} \sim Y_{nor})|Y_{n_{Y}}}\left[g\left(z_{1},Y_{n_{Y}}\right)\right] = \mathbb{E}_{x|Y_{n_{Y}}}\left[g\left(x,Y_{n_{Y}}\right)\right]$, for $\alpha>0$, \begin{equation} \label{eq:nor_mix_mix} \mathbb{E}_{x|Y_{n_{Y}}}\left[g\left(x,Y_{n_{Y}}\right)\right] \leq \frac{1}{1-\alpha}\mathbb{E}_{(z_{1} \sim Y_{mix})|Y_{n_{Y}}}\left[g\left(z_{1},Y_{n_{Y}}\right)\right]. \end{equation} \noindent Consequently, combining \eqref{eq:pr_z1} and \eqref{eq:nor_mix_mix}, and for $r \geq \alpha$, we obtain \begin{equation*} \mathrm{Pr}\left(\mathbb{E}_{x|Y_{n_{Y}}}\left[g(x,Y_{n_{Y}})\right] \leq \frac{1}{1-r}\left[\frac{1}{n_{Z}}\sum_{i=1}^{n_{Z}}g(z_{i},Y_{n_{Y}}) + \sqrt{\frac{\ln 1/\delta}{2n_{Z}}}\right]\right) \geq 1-\delta, \end{equation*} \noindent which completes the proof. \begin{flushright} $\blacksquare$ \end{flushright} \end{document}
\begin{document} \title{A Class of Continued Radicals} \author{Costas J. Efthimiou} \date{} \maketitle \begin{abstract} We compute the limits of a class of continued radicals, extending the results of a previous note in which only periodic radicals of the class were considered. \end{abstract} \section{Introduction.} In \cite{Efthimiou} the author discussed the values for a class of periodic continued radicals of the form {\footnotesize \begin{equation} a_0\sqrt{2+a_1\sqrt{2+a_2\sqrt{2+a_3\sqrt{2+\cdots}}}} ~, \label{eq:OurRadical} \end{equation} } where for some positive integer $n$, $$ a_{n+k} ~=~ a_k~,~~~k=0,1,2,\dots~, $$ and $$ a_k\in\{-1,+1\}~,~~~k=0,1,\dots,n-1~. $$ It was also shown that the radicals given by equation \eqref{eq:OurRadical} have limits equal to twice the fixed points of the Chebycheff polynomials $T_{2^n}(x)$, thus unveiling an interesting relation between these topics. In \cite{ZH}, the authors defined the set $S_2$ of all continued radicals of the form \eqref{eq:OurRadical} (with $a_0=1$) and they investigated some of their properties by assuming that the limit of the radicals exists. In particular, they showed that all elements of $S_2$ lie between 0 and 2, that no two distinct radicals are equal to each other, and that $S_2$ is uncountable. My previous note hence partially bridged this gap but left unanswered the question `\textit{what are the limits if the radicals are not periodic?}' I answer the question in this note. The result is easy to establish, but I realized it only as I was reading the proof of my previous note. Such is the working of the mind! \section{The Limits.} Towards the desired result, I present the following lemma from \cite{Shklarsky}, also used in the periodic case, which is an extension of the well-known trigonometric formulas of the angles $\pi/2^n$. \begin{lemma} For $a_i\in\{-1,1\}$, with $i=0,1,\dots,n-1$, we have that {\footnotesize $$ 2\, \sin \left\lbrack \left( a_0+{a_0a_1\over2}+\dots+{a_0a_1\cdots a_{n-1}\over2^{n-1}} \right) {\pi\over4} \right\rbrack ~=~ a_0\sqrt{2+a_1\sqrt{2+a_2\sqrt{2+\dots+a_{n-1}\sqrt{2}}}}~. $$ } \end{lemma} The lemma is proved in \cite{Shklarsky} using induction. According to this lemma, the finite truncations of the continued radical \eqref{eq:OurRadical} are given by $$ x_n ~=~ 2\sin \left\lbrack \left( a_0+{a_0a_1\over2}+\dots+{a_0a_1\cdots a_{n-1}\over2^{n-1}} \right) {\pi\over4} \right\rbrack~. $$ The series \begin{equation*} a_0+{a_0a_1\over2}+\dots+{a_0a_1\cdots a_{n-1}\over2^{n-1}} +\cdots \end{equation*} is absolutely convergent and thus it converges to some number $a$. Therefore, the original continued radical converges to the real number $$ x ~=~ 2\sin{a\pi\over4}~. $$ We can find a concise formula for $x$. For this calculation it is more useful to use the products $$ P_m ~=~ \prod_{k=0}^{m} a_k~,~~~\text{for } m=0,1,2,\dots~, $$ which take the values $\pm1$. We will refer to these as partial parities. (When the pattern is periodic of period $n$ only the first $n$ parities $P_0, P_1, \dots, P_{n-1}$ are independent.) Using the notation with the partial parities, set \begin{eqnarray*} a &=& P_0 + {P_1\over2} + {P_2\over 2^2} + \cdots+ {P_{n-1}\over 2^{n-1}} + {P_n\over 2^{n}} + \cdots ~. \end{eqnarray*} We now define $$ Q_m ~=~ {1+P_m\over2}~. $$ Since $P_m\in\{-1,1\}$, it follows that $Q_m\in\{0,1\}$. Conversely, $P_m=2Q_m-1$. Thus \begin{equation*} a ~=~ \sum_{m=0}^\infty {P_m\over 2^m} ~=~ \sum_{m=0}^\infty {Q_m\over 2^{m-1}} - \sum_{m=0}^\infty {1\over 2^m} ~=~ 4\, \sum_{m=0}^\infty {Q_m\over 2^{m+1}} - 2~.
\end{equation*} Notice that the sum $$ Q ~=~ \sum_{m=0}^\infty {Q_m\over 2^{m+1}} $$ in the previous equation is the number $Q$ whose binary expression is $0.Q_0Q_1Q_2\cdots$. Therefore $a=4Q-2$. In \cite{ZH}, the authors noticed that all continued radicals of the form \eqref{eq:OurRadical} (with $a_0=1$) are in one-to-one correspondence with the set of numbers between 0 and 1 as written in binary notation (and that is how they determined that the set $S_2$ is uncountable). But, with the above calculation, this correspondence is made deeper. It gives the limit of the radical \eqref{eq:OurRadical} as follows $$ x ~=~ -2\cos \left( Q\pi \right) ~. $$ For example, if $a_k=1$ for all $k$, then also $Q_k=1$ for all $k$ and the number $Q=0.111111111\cdots$ written in the binary system is the number $Q=1$ in the decimal system; hence $x=2$. We thus recover the well-known result {\footnotesize \begin{equation*} 2 ~=~ \sqrt{2+\sqrt{2+\sqrt{2+\sqrt{2+\cdots}}}} ~. \end{equation*} } \section{Conclusion.} Having found the limit of \eqref{eq:OurRadical}, the next obvious question is to determine the limit of the radical {\footnotesize \begin{equation*} a_0\sqrt{y+a_1\sqrt{y+a_2\sqrt{y+a_3\sqrt{y+\cdots}}}} ~, \end{equation*} } for values of the variable $y$ that make the radical (and the limit) well defined. However, a direct application of the above method fails and so far a convenient variation has been elusive. Therefore, the limit of the last radical in the general case remains an open problem, although it is known in at least two cases \cite{ZH}. \noindent\textit{ Department of Physics, University of Central Florida, Orlando, FL 32816 \\ [email protected] } \end{document}
\begin{document} \title{On embedded hidden Markov models and\\particle Markov chain Monte Carlo methods} \author{Axel Finke\\Department of Statistical Science, University College London, UK.\\Arnaud Doucet\\Department of Statistics, Oxford University, UK.\\Adam M.~Johansen\\Department of Statistics, University of Warwick, UK.} \maketitle \begin{abstract} \noindent{}The \gls{EHMM} sampling method is a \gls{MCMC} technique for state inference in non-linear non-Gaussian state-space models which was proposed in \citet{Neal2003,Neal2004} and extended in \citet{ShestopaloffNeal2016}. An extension to Bayesian parameter inference was presented in \citet{ShestopaloffNeal2013}. An alternative class of \gls{MCMC} schemes addressing similar inference problems is provided by \gls{PMCMC} methods \citep{AndrieuDoucetHolenstein2009,AndrieuDoucetHolenstein2010}. All these methods rely on the introduction of artificial extended target distributions for multiple state sequences which, by construction, are such that one randomly indexed sequence is distributed according to the posterior of interest. By adapting the \glsdesc{MH} algorithms developed in the framework of \gls{PMCMC} methods to the \gls{EHMM} framework, we obtain novel \gls{PF}-type algorithms for state inference and novel \gls{MCMC} schemes for parameter and state inference. In addition, we show that most of these algorithms can be viewed as particular cases of a general \gls{PF} and \gls{PMCMC} framework. We compare the empirical performance of the various algorithms on low- to high-dimensional state-space models. We demonstrate that a properly tuned conditional \gls{PF} with the `local' \gls{MCMC} moves proposed in \citet{ShestopaloffNeal2016} can outperform the standard conditional \gls{PF} significantly when applied to high-dimensional state-space models, while the novel \gls{PF}-type algorithm could prove to be an interesting alternative to standard \glspl{PF} for likelihood estimation in some lower-dimensional scenarios. \end{abstract} \glsresetall \glsunset{MCMC} \section{Introduction} Throughout this work, for concreteness, we will describe both \gls{PMCMC} and \gls{EHMM} methods in the context of performing inference in non-linear state-space models. However, we stress that those methods can be used to perform inference in other contexts. Non-linear non-Gaussian state-space models constitute a popular class of time series models which can be described as follows in the time-homogeneous case --- we restrict ourselves to this case throughout the paper, noting that the generalisation to time-inhomogeneous models is straightforward but notationally cumbersome. Let $\{x_t\}_{t \geq 1}$ be an $\mathcal{X}$-valued latent Markov process satisfying \begin{equation} x_1 \sim \mu_{\theta}(\,\cdot\,) \quad \text{and} \quad x_{t}\vert (x_{t-1} = x) \sim f_{\theta}(\,\cdot\,| x), \text{ for $t \geq 2$}, \label{eq:modtransition} \end{equation} and let $\{y_t\}_{t \geq 1}$ be a sequence of $\mathcal{Y}$-valued observations which are conditionally independent given $\{x_t\}_{t \geq 1}$ and which satisfy \begin{equation} y_t \vert (x_1,\dotsc, x_t = x, x_{t+1}, \dotsc ) \sim g_{\theta}(\,\cdot\,| x), \text{ for $t \geq 1$}. \label{eq:modobs} \end{equation} Here $\theta\in\varTheta$ denotes the vector of parameters of the model. Let $z_{i:j}$ denote the components $(z_i,z_{i+1},\dotsc,z_j) $ of a generic sequence $\{z_t\}_{t \geq 1} $.\ Assume that we have access to a realization of the observations $Y_{1:T}=y_{1:T}$.
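To fix ideas, the short sketch below (not part of the original development) simulates a realization $(x_{1:T},y_{1:T})$ from \eqref{eq:modtransition}--\eqref{eq:modobs} for an assumed linear-Gaussian special case with $\mu_{\theta}=\mathcal{N}(0,\sigma_x^2/(1-a^2))$, $f_{\theta}(\,\cdot\,|x)=\mathcal{N}(ax,\sigma_x^2)$ and $g_{\theta}(\,\cdot\,|x)=\mathcal{N}(x,\sigma_y^2)$; the function name, parameter names and values are purely illustrative.
\begin{verbatim}
import numpy as np

def simulate_ssm(T, a=0.9, sigma_x=1.0, sigma_y=0.5, rng=None):
    # Linear-Gaussian special case of the generic state-space model:
    # x_1 ~ mu_theta, x_t | x_{t-1} ~ f_theta, y_t | x_t ~ g_theta.
    rng = np.random.default_rng() if rng is None else rng
    x = np.empty(T)
    x[0] = rng.normal(0.0, sigma_x / np.sqrt(1.0 - a ** 2))  # stationary mu_theta
    for t in range(1, T):
        x[t] = a * x[t - 1] + sigma_x * rng.normal()         # transition f_theta
    y = x + sigma_y * rng.normal(size=T)                     # observation g_theta
    return x, y

x, y = simulate_ssm(T=100)
\end{verbatim}
This special case is convenient for checking implementations of the algorithms discussed below, since the exact likelihood $p_{\theta}(y_{1:T})$ is then available in closed form via the Kalman filter.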
If $\theta$ is known, inference about the latent states $x_{1:T}$ relies upon \begin{equation} p_{\theta}(x_{1:T} \vert y_{1:T}) = \frac{p_{\theta}(x_{1:T}, y_{1:T})}{p_{\theta}(y_{1:T})}, \end{equation} where \begin{equation} p_{\theta} (x_{1:T}, y_{1:T}) = \mu_{\theta}(x_1) \prod_{t=2}^{T} f_{\theta}(x_{t}\vert x_{t-1}) \prod_{t=1}^{T} g_{\theta}(y_t \vert x_t) . \end{equation} When $\theta$ is unknown, to conduct Bayesian inference a prior density $p(\theta)$ is assigned to the parameters and inference proceeds via the joint posterior density \begin{equation} p(x_{1:T},\theta \vert y_{1:T}) = p(\theta \vert y_{1:T}) p_{\theta}(x_{1:T}\vert y_{1:T}), \end{equation} where the marginal posterior distribution of the parameter satisfies \begin{equation} p(\theta \vert y_{1:T}) \propto p(\theta) p_{\theta}(y_{1:T}), \end{equation} the likelihood $ p_{\theta}(y_{1:T})$ being given by \begin{equation} p_{\theta}(y_{1:T}) =\int p_{\theta} (x_{1:T}, y_{1:T}) \mathrm{d}x_{1:T}. \end{equation} Many algorithms have been proposed over the past twenty-five years to perform inference for this class of models; see \citet{Kantas2015} for a recent survey. We focus here on the \gls{EHMM} algorithm introduced in \citet{Neal2003,Neal2004} and on \gls{PMCMC} introduced in \cite{AndrieuDoucetHolenstein2009,AndrieuDoucetHolenstein2010}. Both classes of methods are fairly generic and do not require the state-space model under consideration to possess additional structural properties beyond \eqref{eq:modtransition} and \eqref{eq:modobs}. The \gls{EHMM} method has been recently extended in \citet{ShestopaloffNeal2013,ShestopaloffNeal2016} while extensions of \gls{PMCMC} have also been proposed in, among other works, \citet{Whiteley2010} and \citet{LindstenJordanSchon2014}. In particular, \citet{Whiteley2010} combined the conditional \gls{PF} algorithm of \cite{AndrieuDoucetHolenstein2009,AndrieuDoucetHolenstein2010} with a backward sampling step. We will denote the resulting algorithm as the conditional \gls{PF} with \gls{BS}. Both \gls{EHMM} and \gls{PMCMC} methods rely upon sampling a population of $N$ particles for the state $x_{t}$ and introducing an extended target distribution over the resulting $N^{T}$ potential sequences $x_{1:T}$ such that one of the sequences selected uniformly at random is at equilibrium by construction. It was observed in \citet[p.~116]{LindstenSchon2013} that conditional \gls{PF} with \gls{BS} is reminiscent of the \gls{EHMM} method proposed in \citet{Neal2003,Neal2004} and some connections were made between some simple \gls{EHMM} methods and \gls{PMCMC} methods in \citet[pp.~82--87]{Finke2015} who also showed that both methods can be viewed as special cases of a much more general construction. However, to the best of our knowledge, the connections between the two classes of methods have never been investigated thoroughly. 
Indeed, such an analysis was deemed of interest in \citet{ShestopaloffNeal2014}, where we note that \gls{EHMM} methods are sometimes alternatively referred to as \emph{ensemble} \gls{MCMC} methods: \begin{quote} ``It would \ldots{} be interesting to compare the performance of the ensemble \gls{MCMC} method with the [\gls{PMCMC}]-based methods of \cite{AndrieuDoucetHolenstein2010} and also to see whether techniques used to improve [particle \gls{MCMC}] methods can be used to improve ensemble methods and vice versa.'' \end{quote} In this work, we characterize this relationship and show that it is possible to exploit the similarities between these methods to derive new inference algorithms. The relationship between the various classes of algorithms discussed in this work is shown in Figure~\ref{fig:relationship_between_algorithms}. The remainder of the paper is organized as follows. Section \ref{sec:pmcmc} reviews some \gls{PMCMC} schemes, including the \gls{PMMH} algorithm and \gls{PG} samplers. We recall how the validity of these algorithms can be established by showing that they are standard \gls{MCMC} algorithms sampling from an extended target distribution. In particular, the \gls{PMMH} algorithm can be thought of as a standard \gls{MH} algorithm sampling from this extended target using a \gls{PF} proposal for the states. Likewise, the theoretical validity of the conditional \gls{PF} with \gls{BS} can be established by showing that it corresponds to a (``partially collapsed'' -- see \citet{VanDyk2008}) Gibbs sampler \citep{Whiteley2010}. Section~\ref{sec:embeddedHMM} is devoted to the `original' \gls{EHMM} from \citet{Neal2003,Neal2004}. At the core of this methodology is an extended target distribution which shares common features with the \gls{PMCMC} target. We show that the \gls{EHMM} method can be reinterpreted as a collapsed Gibbs sampling procedure for this target. This provides an alternative proof of validity of this algorithm. More interestingly, it is possible to come up with an original \gls{MH} scheme to sample from this extended target distribution reminiscent of \gls{PMMH}. However, whereas the \gls{PMMH} algorithm relies on \gls{PF} estimates of the likelihood $p_{\theta}(y_{1:T})$, this \gls{MH} version of \gls{EHMM} relies on an estimate of $p_{\theta}(y_{1:T})$ computed using a finite-state \gls{HMM}, the cardinality of the state-space being $N$. The computational cost of both of these original \gls{EHMM} methods is $O(N^2T)$ in contrast to the $O(NT)$-cost of \gls{PMCMC} methods. The high computational cost of the original \gls{EHMM} method has partially motivated the development of a novel class of alternative \gls{EHMM} methods which bring the computational complexity down to $O(NT)$. As described in Section~\ref{sec:embeddedHMMnewversion}, this is done by introducing a set of auxiliary variables playing the same r\^ole as the ancestor indices generated in the resampling step of a standard \gls{PF}. This leads to the extended target distribution introduced in \citet{ShestopaloffNeal2016}. We show that this target coincides in a special case with the extended target of \gls{PMCMC} when one uses the \gls{FAAPF} \citep{PittShephard1999} and the resulting \gls{EHMM} coincides with the conditional \gls{FAAPF} with \gls{BS} in this scenario. We show once more that the validity of this novel \gls{EHMM} method can be established by using a collapsed Gibbs sampler. 
In Section~\ref{sec:novel_methodology}, we derive several novel, practical extensions to the alternative \gls{EHMM} method. First, we show that the alternative \gls{EHMM} framework can also be used to derive an \gls{MH} algorithm which, once again, is very similar to the \gls{PMMH} algorithm except that $p_{\theta}(y_{1:T})$ is estimated unbiasedly using a novel \gls{PF} type algorithm relying on local \gls{MCMC} moves. Second, we derive additional bootstrap \gls{PF} and general \gls{APF} type variants of the alternative \gls{EHMM} method. In Section~\ref{sec:general_pmcmc}, we describe a general, unifying \gls{PMCMC} framework which admits all variants of standard \gls{PMCMC} methods and all variants of alternative \gls{EHMM} discussed in this work as special cases. This also allows us to generalize the \glsdesc{AS} scheme from \citet{LindstenJordanSchon2014}. In Section~\ref{sec:simulations}, we empirically compare the performance of all the algorithms mentioned above. Our results indicate that, as suggested in \citet{ShestopaloffNeal2016}, a properly tuned version of the conditional \gls{PF} (and hence \gls{PG} sampler) using \gls{MCMC} moves proposed in \citet{ShestopaloffNeal2016} can outperform existing methods in high dimensions while the (`non-conditional') \glspl{PF} using \gls{MCMC} moves are a potentially interesting alternative to standard \glspl{PF} for likelihood and state estimation for lower-dimensional models. \begin{figure} \caption{Relationship between the various classes of algorithms discussed in this work. A general construction admitting all of these as special cases can be found in \citet[Section~1.4]{Finke2015}. Novel methodology introduced in this work is highlighted in bold.} \label{fig:relationship_between_algorithms} \end{figure} \section{Particle Markov chain Monte Carlo methods} \label{sec:pmcmc} This section reviews \gls{PMCMC} methods. For transparency, we first restrict ourselves in this section to the scenario in which the underlying \gls{PF} used is the bootstrap \gls{PF}, and then discuss the \glsdesc{FAAPF} before finally considering the case of general \glsdescplural{APF}. \subsection{Extended target distribution} Let $N$ be an integer such that $N \geq 2$. \gls{PMCMC} methods rely on the following extended target density on $\varTheta \times \mathcal{X}^{NT}\times \{1,\dotsc,N\}^{N{(T-1)}+1}$ \begin{equation} \tilde{\pi}{\bigl(\theta,b_{1:T},\mathbf{x}_{1:T},\mathbf{a}_{1:T-1}^{-b_{2:T}}\bigr)} \coloneqq \frac{1}{N^T} \times \underbrace{\pi{\bigl(\theta,x_{1:T}^{b_{1:T}}\bigr)}}_{\mathclap{\text{\footnotesize{target}}}} \times \underbrace{\phi_\theta{\bigl(\mathbf{x}_{1:T}^{-b_{1:T}},\mathbf{a}_{1:T-1}^{-b_{2:T}}|x_{1:T}^{b_{1:T}},b_{1:T}\bigr)} }_{\mathclap{\text{\footnotesize{law of conditional \gls{PF}}}}}, \label{eq:PMCMCtarget} \end{equation} where $\smash{\pi(\theta,x_{1:T}) \coloneqq p(x_{1:T},\theta| y_{1:T})}$ represents the posterior distribution of interest. 
In addition, the \emph{particles} $\mathbf{x}_t \coloneqq \{ x_t^1,\dotsc,x_t^N \} \in\mathcal{X}^N$, \emph{ancestor indices} $\mathbf{a}_t \coloneqq \{a_t^1,\dotsc,a_t^N\} \in \{1,\dotsc,N\}^N$ and \emph{particle indices} $b_{1:T} \coloneqq \{ b_1,\dotsc,b_T\}$ are related as \begin{equation} \mathbf{x}_t^{-b_t}=\mathbf{x}_t\backslash x_t^{b_t}, \quad \mathbf{x}_{1:T}^{-b_{1:T}}=\bigl\{ \mathbf{x}_1^{-b_1},\dotsc,\mathbf{x}_T^{-b_T}\bigr\}, \quad \mathbf{a}_{t-1}^{-b_t}=\mathbf{a}_{t-1}\backslash a_{t-1}^{b_t}, \quad \mathbf{a}_{1:T-1}^{-b_{2:T}}=\bigl\{ \mathbf{a}_1^{-b_{2}},\dotsc,\mathbf{a}_{T-1}^{-b_T}\bigr\}. \end{equation} In particular, given $b_T$, the particle indices $b_{1:T-1}$ are deterministically related to the ancestor indices by the recursive relationship \begin{equation} b_t = a_t^{b_{t+1}}, \quad \text{for $t = T-1, \dotsc, 1$.} \end{equation} Finally, for any $\smash{(x_{1:T}^{b_{1:T}},b_{1:T}) \in\mathcal{X}^T \times \bigl\{1,\dotsc,N\bigr\}^T}$, $\phi_\theta$ denotes a conditional distribution induced by an algorithm referred to as a \gls{CPF} \begin{equation} \phi_\theta{\bigl(\mathbf{x}_{1:T}^{-b_{1:T}}, \mathbf{a}_{1:T-1}^{-b_{2:T}}\big| x_{1:T}^{b_{1:T}},b_{1:T}\bigr)} \coloneqq \prod_{\substack{\mathllap{i}=\mathrlap{1}\\\mathllap{i} \neq \mathrlap{b_1}}}^N \mu_\theta{\bigl( x_1^i\bigr)} \prod_{t=2}^T \prod_{\substack{\mathllap{i}=\mathrlap{1}\\\mathllap{i} \neq \mathrlap{b_t}}}^N w_{\theta,t-1}^{a_{t-1}^i}\,f_\theta{\bigl( x_t^i\big|x_{t-1}^{a_{t-1}^i}\bigr)}, \label{eq:CPF} \end{equation} where \begin{equation} w_{\theta,t}^i \coloneqq \frac{g_\theta(y_t |x_t^i)}{\sum_{j=1}^N g_\theta(y_t | x_t^j)} \label{eq:multinomialresampling} \end{equation} represents the normalised weight associated with the $i$th particle at time~$t$. The key feature of this high-dimensional target is that by construction it ensures that $\smash{( \theta,x_{1:T}^{b_{1:T}})}$ is distributed according to the posterior of interest. \gls{PMCMC} methods are \gls{MCMC} algorithms which sample from this extended target, hence from the posterior of interest. \subsection{Particle marginal Metropolis--Hastings} \label{subsec:pmmh} \glsreset{PMMH} \glsreset{MH} The \emph{\gls{PMMH}} algorithm is a \gls{MH} algorithm targeting $\tilde{\pi}( \theta,b_{1:T},\mathbf{x}_{1:T},\mathbf{a}_{1:T-1}^{-b_{2:T}}) $ defined through \eqref{eq:PMCMCtarget}, \eqref{eq:CPF} and \eqref{eq:multinomialresampling} using a proposal of the form \begin{equation} q{(\theta,{\theta'})} \times \underbrace{\Psi_{{\theta'}}( \mathbf{x}_{1:T},\mathbf{a}_{1:T-1})}_{\mathclap{\text{\footnotesize{law of \gls{PF}}}}} \times \underbrace{w_{{\theta'},T}^{b_T}}_{\mathclap{\text{\parbox{1.3cm}{\centering\footnotesize{path selection}}}}}, \label{eq:proposalparticlefilter} \end{equation} where $b_{1:T}$ is again obtained via the reparametrisation $\smash{b_t = a_t^{b_{t+1}}}$ for $t = T-1, \dotsc,1$ and $\Psi_\theta(\mathbf{x}_{1:T},\mathbf{a}_{1:T-1})$ is the law induced by a bootstrap \gls{PF} \begin{equation} \Psi_\theta( \mathbf{x}_{1:T},\mathbf{a}_{1:T-1}) \coloneqq \prod_{i=1}^N \mu_\theta( x_1^i) \prod_{t=2}^T \prod_{i=1}^N w_{\theta,t-1}^{a_{t-1}^i} f_\theta\bigl(x_t^i \big| x_{t-1}^{a_{t-1}^i}\bigr). \label{eq:distributionPF} \end{equation} The resulting \gls{MH} acceptance probability is of the form \begin{equation} 1 \wedge \frac{\hat{p}_{{\theta'}}(y_{1:T})p({\theta'})}{\hat{p}_\theta(y_{1:T})p(\theta)} \frac{q({\theta'},\theta)}{q(\theta,{\theta'})}, \label{eq:PMMHratio} \end{equation} where \begin{equation} \hat{p}_\theta(y_{1:T}) \coloneqq \prod_{t=1}^T \biggl[\frac{1}{N} \sum_{i=1}^N g_\theta(y_t| x_t^i) \biggr]\label{eq:likelihoodestimatorPF} \end{equation} is well known to be an unbiased estimate of $p_\theta(y_{1:T})$; see \cite{DelMoral2004}. We stress that the unbiased estimates appearing in the numerator and denominator of \eqref{eq:PMMHratio} each depend upon the particles (and ancestor indices) generated in distinct \glspl{PF} but we suppress this dependence to keep the notation as simple as possible. The validity of the expression in \eqref{eq:PMMHratio} follows directly by noting that: \begin{align} \frac{\tilde{\pi}( \theta,b_{1:T},\mathbf{x}_{1:T},\mathbf{a}_{1:T-1}^{-b_{2:T}}) }{\Psi_\theta( \mathbf{x}_{1:T},\mathbf{a}_{1:T-1}) w_{\theta,T}^{b_T}} & = \frac{1}{N^T} \frac{\pi( \theta,x_{1:T}^{b_{1:T}})}{\mu_\theta(x_1^{b_1}) \bigl[\prod_{t=2}^T w_{\theta,t-1}^{b_{t-1}} f_\theta(x_t^{b_t} | x_{t-1}^{b_{t-1}}) \bigr] w_{\theta,T}^{b_T}}\\ & = \frac{p( \theta| y_{1:T}) }{N^T}\frac {\mu_\theta( x_1^{b_1}) g_\theta( y_1| x_1^{b_1}) \prod_{t=2}^T f_\theta( x_t^{b_t}| x_{t-1}^{b_{t-1}}) g_\theta( y_t| x_t^{b_t}) }{p_\theta(y_{1:T})\,\mu_\theta( x_1^{b_1}) \bigl[\prod_{t=2}^T w_{\theta,t-1}^{b_{t-1}} f_\theta(x_t^{b_t} | x_{t-1}^{b_{t-1}}) \bigr] w_{\theta,T}^{b_T}}\\ & = p(\theta | y_{1:T}) \frac{\hat{p}_\theta( y_{1:T}) }{p_\theta(y_{1:T})}\\ & \propto\hat{p}_\theta(y_{1:T}) p(\theta), \label{eq:PMCMCratiotargetproposal} \end{align} where we have again used that $b_t = a_t^{b_{t+1}}$, for $t = T-1,\dotsc,1$ and that $p(\theta|y_{1:T})/p_\theta(y_{1:T}) = p(\theta)/p(y_{1:T})$; see also \cite[Theorem 2]{AndrieuDoucetHolenstein2010}. \subsection{Particle Gibbs samplers} \label{subsec:cpf} \glsreset{PG} \glsreset{BS} \glsreset{AS} To sample from $\pi(\theta,x_{1:T})$, one can use the \emph{\gls{PG}} sampler. The \gls{PG} sampler mimics the block Gibbs sampler that iterates draws from $\pi(\theta | x_{1:T})$ and $\pi(x_{1:T}| \theta) $. As sampling from $\pi(x_{1:T} | \theta) $ is typically impossible, we can use a so-called conditional \gls{PF} kernel with \gls{BS} to emulate sampling from it. Given a current value of $x_{1:T}$, we perform the following steps (see \citet{AndrieuDoucetHolenstein2009}, \citet[Section~4.5]{AndrieuDoucetHolenstein2010}): \begin{enumerate} \item Sample $b_{1:T}$ uniformly at random and set $x_{1:T}^{b_{1:T}}\leftarrow x_{1:T}$. \item Run the conditional \gls{PF}, i.e.\ sample from $\phi_\theta(\mathbf{x}_{1:T}^{-b_{1:T}},\mathbf{a}_{1:T-1}^{-b_{2:T}} | x_{1:T}^{b_{1:T}},b_{1:T})$. \item\label{alg:simple_pg_last_step} Sample $b_T$ according to $\Pr(b_T=m) = w_{\theta,T}^{m}$ and set $b_t=a_t^{b_{t+1}}$ for $t = T-1, \dotsc, 1$. \end{enumerate} It was noticed in \citet{Whiteley2010} that it is possible to improve Step~\ref{alg:simple_pg_last_step}: for $t=T-1,\dotsc,1$, instead of deterministically setting $b_t=a_t^{b_{t+1}}$, one can use a backward sampling step which samples \begin{equation} \Pr{\bigl(b_t=m\bigr)} \propto w_{\theta,t}^{m}f_\theta{\bigl(x_{t+1}^{b_{t+1}}\bigr| x_t^{m}\bigr)}.
\label{eq:backwardsampling} \end{equation} To establish the validity of this procedure (i.e.\ of the conditional \gls{PF} with \gls{BS}), it was shown that this procedure is a (partially) collapsed Gibbs sampler of invariant distribution $\smash{\tilde{\pi}(b_{1:T},\mathbf{x}_{1:T},\mathbf{a}_{1:T-1} | \theta)}$, sampling recursively from $\smash{\tilde{\pi}(b_t | \theta,\mathbf{x}_{1:t},\mathbf{a}_{1:t-1},x_{t+1:T}^{b_{t+1:T}},b_{t+1:T})}$, for $t=T,T-1,\dotsc,1$. Indeed, we have \begin{align} \MoveEqLeft \tilde{\pi}{\bigl(b_t\bigr| \theta,\mathbf{x}_{1:t},\mathbf{a}_{1:t-1},x_{t+1:T}^{b_{t+1:T}},b_{t+1:T}\bigr)}\\ & \propto \sum_{b_{1:t-1}}\sum_{\mathbf{a}_{t:T-1}} \idotsint \frac{\pi{\bigl( \theta,x_{1:T}^{b_{1:T}}\bigr)} }{N^T} \prod_{\substack{\mathllap{i}=\mathrlap{1}\\\mathllap{i} \neq \mathrlap{b_1}}}^N \mu_\theta{\bigl( x_1^i\bigr)} \smashoperator{\prod_{n=2}^T} \prod_{\substack{\mathllap{i}=\mathrlap{1}\\\mathllap{i} \neq \mathrlap{b_n}}}^N w_{\theta,n-1}^{a_{n-1}^i}f_\theta{\bigl(x_n ^i\bigr| x_{n-1}^{a_{n-1}^i}\bigr)}\,\mathrm{d} \mathbf{x}_{t+1:T}^{-b_{t+1:T}}\\ & \propto \smashoperator{\sum_{b_{1:t-1}}} \pi{\bigl( \theta,x_{1:T}^{b_{1:T}}\bigr)} \prod_{\substack{\mathllap{i}=\mathrlap{1}\\\mathllap{i} \neq \mathrlap{b_1}}}^N \mu_\theta{\bigl( x_1^i\bigr)} \prod_{n=2}^t \prod_{\substack{\mathllap{i}=\mathrlap{1}\\\mathllap{i} \neq \mathrlap{b_n}}}^N w_{\theta,n-1}^{a_{n-1}^i}f_\theta{\bigl(x_n^i\bigr| x_{n-1}^{a_{n-1}^i}\bigr)} \\ & = \smashoperator{\sum_{b_{1:t-1}}} \pi{\bigl(\theta,x_{1:T}^{b_{1:T}}\bigr)} \frac{\prod_{i=1}^N \mu_\theta{\bigl( x_1^i\bigr)} \prod_{n=2}^t \prod_{i=1}^N w_{\theta,n-1}^{a_{n-1}^i}f_\theta{\bigl(x_n^i\bigr| x_{n-1}^{a_{n-1}^i}\bigr)} }{\mu_\theta{\bigl(x_1^{b_1}\bigr)} \prod_{n=2}^t w_{\theta,n-1}^{b_{n-1}}f_\theta{\bigl(x_n^{b_n}\bigr| x_{n-1}^{b_{n-1}}\bigr)} }, \quad \text{as $\smash{a_{n-1}^{b_n} = b_{n-1}}$,}\label{eq:CPFBS}\\ & \propto \smashoperator{\sum_{b_{1:t-1}}} f_\theta{\bigl(x_{t+1}^{b_{t+1} }\bigr| x_t^{b_t}\bigr)} w_{\theta,t}^{b_t}\\ & \propto f_\theta{\bigl(x_{t+1}^{b_{t+1}}\bigr| x_t^{b_t}\bigr)} w_{\theta,t}^{b_t}, \end{align} where we have used that the numerator of the ratio appearing in \eqref{eq:CPFBS} is independent of $b_{1:t-1}$. \subsection{Extension to the fully-adapted auxiliary particle filter} \label{Section:perfectadaptationPF} \glsreset{FAAPF} It is straightforward to employ a more general class of \glspl{PF} in a \gls{PMCMC} context. One such \gls{PF} is the \emph{\gls{FAAPF}} \citep{PittShephard1999} whose incorporation within \gls{PMCMC} was explored in \citet{Pitt2012}. It is described in this subsection. 
When it is possible to sample from $p_\theta(x_1|y_1) \propto \mu_\theta(x_1) g_\theta(y_1| x_1) $ and $p_\theta(x_t| x_{t-1},y_t) \propto f_\theta(x_t| x_{t-1}) g_\theta(y_t|x_t)$ and to compute $p_\theta(y_1) = \int \mu_\theta(x_1) g_\theta(y_1|x_1) \mathrm{d}x_1$ and $p_\theta(y_t|x_{t-1}) = \int f_\theta(x_t|x_{t-1}) g_\theta(y_t| x_t) \mathrm{d} x_t$, it is possible to\ define the target distribution $\smash{\tilde{\pi}(\theta,b_{1:T},\mathbf{x}_{1:T},\mathbf{a}_{1:T-1}^{-b_{2:T}})}$ using an alternative conditional \gls{PF} -- the conditional \gls{FAAPF} -- in \eqref{eq:PMCMCtarget} (more precisely, in these circumstances one can implement the associated \gls{PF}): \begin{equation} \phi_\theta{\bigl(\mathbf{x}_{1:T}^{-b_{1:T}},\mathbf{a} _{1:T-1}^{-b_{2:T}}\bigr| x_{1:T}^{b_{1:T}},b_{1:T}\bigr)} = \prod_{\substack{\mathllap{i}=\mathrlap{1}\\\mathllap{i} \neq \mathrlap{b_1}}}^N p_\theta{\bigl(x_1^i\bigr| y_1\bigr)} \prod_{t=2}^T \prod_{\substack{\mathllap{i}=\mathrlap{1}\\\mathllap{i} \neq \mathrlap{b_t}}}^N w_{\theta,t-1}^{a_{t-1}^i}p_\theta{\bigl(x_t^i\bigr| x_{t-1}^{a_{t-1}^i},y_t\bigr)}, \label{eq:CPFperfectadaptation} \end{equation} where \begin{equation} w_{\theta,t}^{i} \coloneqq \frac{p_\theta(y_{t+1} | x_{t}^{i})}{\sum_{j=1}^Np_\theta(y_{t+1}| x_{t}^j)}. \label{eq:perfectadaptationweight} \end{equation} In this case, we can target the extended distribution $\smash{\tilde{\pi}(\theta,b_{1:T},\mathbf{x}_{1:T},\mathbf{a}_{1:T-1}^{-b_{2:T}})}$ defined through \eqref{eq:PMCMCtarget}, \eqref{eq:CPFperfectadaptation} and \eqref{eq:perfectadaptationweight} using a \gls{MH} algorithm with proposal \begin{equation} q{\bigl( \theta,{\theta'}\bigr)} \times \underbrace{\Psi_{{\theta'}}{\bigl(\mathbf{x}_{1:T},\mathbf{a}_{1:T-1}\bigr)}}_{\mathclap{\text{\footnotesize{law of \gls{FAAPF}}}}} \times \underbrace{\frac{1}{N}}_{\mathclap{\text{\parbox{1.3cm}{\centering\footnotesize{path selection}}}}}, \end{equation} i.e.\ we pick $b_T$ uniformly at random, then set $\smash{b_t=a_t^{b_{t+1}}}$ for $t=T-1,\dotsc,1$ and $\Psi_\theta{\bigl(\mathbf{x}_{1:T},\mathbf{a}_{1:T-1}\bigr)}$ is the distribution associated with the \gls{FAAPF} instead of the bootstrap \gls{PF} \begin{equation} \Psi_\theta{\bigl(\mathbf{x}_{1:T},\mathbf{a}_{1:T-1}\bigr)} = \prod_{i=1}^N p_\theta{\bigl(x_1^i\bigr| y_1\bigr)} \prod_{t=2}^T \prod_{i=1}^N w_{\theta,t-1}^{a_{t-1}^i}p_\theta{\bigl(x_t^i\bigr| x_{t-1}^{a_{t-1}^i},y_t\bigr)} . \label{eq:distributionperfectadaptation} \end{equation} It is easy to check that the resulting \gls{MH} acceptance probability is also of the form given in \eqref{eq:PMMHratio} but with \begin{equation} \hat{p}_\theta(y_{1:T}) = p_\theta(y_1) \prod_{t=2}^T \biggl[\frac{1}{N} \sum_{i=1}^N p_\theta(y_t|x_{t-1}^i)\biggr]. \label{eq:likelihoodestimatorperfectadaptationPF} \end{equation} The conditional \gls{FAAPF} with \gls{BS} proceeds by first running the conditional \gls{FAAPF} defined in \eqref{eq:CPFperfectadaptation}, then sampling $b_T$ uniformly at random and finally sampling $b_{T-1},\dotsc,b_1$ backwards using \begin{equation} \tilde{\pi}\bigl(b_t \big| \theta,\mathbf{x}_{1:t},\mathbf{a}_{1:t-1}, x_{t+1:T}^{b_{t+1:T}},b_{t+1:T}\bigr) \propto f_\theta\bigl(x_{t+1}^{b_{t+1}} \big| x_t^{b_t}\bigr), \label{eq:backwardperfectadaptation} \end{equation} where the expression in \eqref{eq:backwardperfectadaptation} is obtained using calculations similar to those in \eqref{eq:CPFBS}. 
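As an illustration (and only a sketch, not taken from the papers discussed here), the code below computes the \gls{FAAPF} likelihood estimate \eqref{eq:likelihoodestimatorperfectadaptationPF} for the linear-Gaussian model used in the earlier simulation sketch, for which $p_\theta(x_1|y_1)$, $p_\theta(x_t|x_{t-1},y_t)$, $p_\theta(y_1)$ and $p_\theta(y_t|x_{t-1})$ are all available in closed form; the function and parameter names are illustrative.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def faapf_loglik(y, a=0.9, q=1.0, r=0.25, N=200, rng=None):
    # Fully-adapted auxiliary particle filter for the linear-Gaussian model
    # x_t = a*x_{t-1} + N(0, q), y_t = x_t + N(0, r).  Returns the log of
    # p_hat(y_{1:T}) = p(y_1) * prod_{t>=2} [(1/N) sum_i p(y_t | x_{t-1}^i)],
    # an unbiased estimator of p_theta(y_{1:T}) on the natural scale.
    rng = np.random.default_rng() if rng is None else rng
    T = len(y)
    p0 = q / (1.0 - a ** 2)                    # variance of mu_theta
    s1 = p0 * r / (p0 + r)                     # Var(x_1 | y_1)
    x = rng.normal(p0 * y[0] / (p0 + r), np.sqrt(s1), size=N)
    loglik = norm.logpdf(y[0], 0.0, np.sqrt(p0 + r))       # log p(y_1)
    s = q * r / (q + r)                        # Var(x_t | x_{t-1}, y_t)
    for t in range(1, T):
        v = norm.pdf(y[t], a * x, np.sqrt(q + r))           # p(y_t | x_{t-1}^i)
        loglik += np.log(v.mean())
        anc = rng.choice(N, size=N, p=v / v.sum())          # resample ancestors
        m = s * (a * x[anc] / q + y[t] / r)                 # E(x_t | x_{t-1}, y_t)
        x = rng.normal(m, np.sqrt(s))
    return loglik
\end{verbatim}
Plugging such an estimate into the acceptance ratio \eqref{eq:PMMHratio} in place of the bootstrap \gls{PF} estimate gives the corresponding \gls{PMMH} variant.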
\subsection{Extension to general auxiliary particle filters} \label{subsec:apf} \glsreset{APF} The previous subsection demonstrated that the \gls{FAAPF} leads straightforwardly to valid \gls{PMCMC} algorithms and will allow natural connections to be made to certain \gls{EHMM} methods. Here, we show that, as was established in \citet[Appendix 8.2]{Pitt2012}, \emph{any} general \emph{\gls{APF}} can be employed in this context and will lead to natural extensions of these methods. To facilitate later developments, an explicit representation of the associated extended target distribution and related quantities is useful. Viewing the \gls{APF} as a sequential importance resampling algorithm for an appropriate sequence of target distributions as described in \citet{JohansenDoucet2008}, it is immediate that the density associated with such an algorithm is simply: \begin{align} \Psi_\theta^{\mathbf{q}_\theta}(\mathbf{x}_{1:T},\mathbf{a}_{1:T-1}) = \prod_{i=1}^N q_{\theta,1}(x_1^i) \prod_{t=2}^T \prod_{i=1}^N w_{\theta,t-1}^{a_{t-1}^i} q_{\theta,t}\bigl(x_t^i \big|x_{t-1}^{a_{t-1}^i}\bigr), \end{align} where $\mathbf{q}_\theta = \{q_{\theta, t}\}_{t=1}^T$ and $q_{\theta,t}$ denotes the proposal distribution employed at time $t$ (with dependence of this distribution upon the observation sequence suppressed from the notation) and $\smash{w_{\theta,t}^i = v_{\theta,t}^i / \sum_{j=1}^N v_{\theta,t}^j}$ with: \begin{align} v_{\theta,t}^i = \begin{cases} \dfrac{\mu_\theta{\bigl(x_1^i\bigr)} g_\theta{\bigl(y_1|x_1^i\bigr)} \tilde{p}_\theta{\bigl(y_2|x_1^i\bigr)}}{q_{\theta,1}{\bigl(x_1^i\bigr)}}, & \text{if $t = 1$,}\\ \dfrac{f_\theta{\bigl(x_t^i|x_{t-1}^{a_{t-1}^i}\bigr)} g_\theta{\bigl(y_t|x_t^i\bigr)} \tilde{p}_\theta{\bigl(y_{t+1}|x_t^i\bigr)}}{q_{\theta,t}{\bigl(x_t^i | x_{t-1}^{a_{t-1}^i}\bigr)} \tilde{p}_\theta{\bigl(y_t|x_{t-1}^{a_{t-1}^i}\bigr)}}, & \text{if $1 < t < T$,}\\ \dfrac{f_\theta{\bigl(x_T^i|x_{T-1}^{a_{T-1}^i}\bigr)} g_\theta{\bigl(y_T|x_T^i\bigr)}}{q_{\theta,T}{\bigl(x_T^i|x_{T-1}^{a_{T-1}^i}\bigr)} \tilde{p}_\theta{\bigl(y_T|x_{T-1}^{a_{T-1}^i}\bigr)}}, & \text{if $t = T$,} \end{cases}\label{eq:apfweights} \end{align} and $\tilde{p}_\theta(y_{t+1}|x_t^i)$ denoting the approximation of the predictive likelihood employed within the weighting of the \gls{APF}. Note that $\tilde{p}_\theta(y_{t+1}|x_t)$ can be any positive function of $x_t$ and the simpler sequential importance resampling \gls{PF} is recovered by setting $\tilde{p}_\theta(y_{t+1}|x_t) \equiv 1$, with the bootstrap \gls{PF} emerging as a particular case thereof when $q_{\theta, t}(x_t|x_{t-1}) = f_\theta(x_t|x_{t-1})$. Associated with the \gls{APF} is a conditional \gls{PF} of the form: \begin{equation} \phi^{\mathbf{q}_\theta}_\theta{\bigl(\mathbf{x}_{1:T}^{-b_{1:T}},\mathbf{a}_{1:T-1}^{-b_{2:T}}\bigr| x_{1:T}^{b_{1:T}},b_{1:T}\bigr)} = \prod_{\substack{\mathllap{i}=\mathrlap{1}\\\mathllap{i} \neq \mathrlap{b_1}}}^N q_{\theta,1}{\bigl(x_1^i\bigr)} \prod_{t=2}^T \prod_{\substack{\mathllap{i}=\mathrlap{1}\\\mathllap{i} \neq \mathrlap{b_t}}}^N w_{\theta,t-1}^{a_{t-1}^i}\,q_{\theta,t}{\bigl(x_t^i\bigr|x_{t-1}^{a_{t-1}^i}\bigr)}.
\label{eq:CAPF} \end{equation} A \gls{PMCMC} algorithm is arrived at by employing the extended target distribution, \begin{equation} \tilde{\pi}^{\mathbf{q}_\theta}{\bigl(\theta,b_{1:T},\mathbf{x}_{1:T},\mathbf{a} _{1:T-1}^{-b_{2:T}}\bigr)} = \frac{1}{N^T} \times \underbrace{\pi{\bigl( \theta,x_{1:T}^{b_{1:T}}\bigr)}}_{\mathclap{\text{\footnotesize{target}}}} \times \underbrace{\phi^{\mathbf{q}_\theta}_\theta{\bigl(\mathbf{x}_{1:T}^{-b_{1:T}},\mathbf{a}_{1:T-1}^{-b_{2:T}}\bigr| x_{1:T}^{b_{1:T}},b_{1:T}\bigr)} }_{\mathclap{\text{\footnotesize{law of conditional \gls{APF}}}}}, \label{eq:APMCMCtarget} \end{equation} and proposal distribution, \begin{equation} q{(\theta,{\theta'})} \times \underbrace{\Psi_{{\theta'}}^{\mathbf{q}_{\theta'}}{(\mathbf{x}_{1:T},\mathbf{a}_{1:T-1})}}_{\mathclap{\text{\footnotesize{law of \gls{APF}}}}} \times \underbrace{w_{{\theta'},T}^{b_T}}_{\mathclap{\text{\parbox{1.3cm}{\centering\footnotesize{path selection}}}}}. \label{eq:proposalauxiliaryparticlefilter} \end{equation} One can straightforwardly verify that this leads to a \gls{MH} acceptance probability of the form stated in \eqref{eq:PMMHratio} but using the natural unbiased estimator of the normalising constant associated with the \gls{APF}, \begin{equation} \hat{p}_\theta(y_{1:T}) = \prod_{t=1}^T \biggl[\frac{1}{N} \sum_{i=1}^N v_{\theta,t}^i\biggr]. \end{equation} We conclude this section by noting that although the constructions developed above were presented for simplicity with multinomial resampling employed during every iteration of the algorithm, it is straightforward to incorporate more sophisticated, adaptive resampling schemes within this framework. \section{Original embedded hidden Markov models} \label{sec:embeddedHMM} \glsreset{EHMM} \glsreset{HMM} \subsection{Extended target distribution} The \emph{\gls{EHMM}} method of \cite{Neal2003,Neal2004} is based on the introduction of a target distribution on $\varTheta \times \mathcal{X}^{NT}\times \{ 1,\dotsc,N \}^T$ of the form \begin{align} \tilde{\pi}(\theta,b_{1:T},\mathbf{x}_{1:T}) = \frac{1}{N^T} \times \underbrace{\pi{\bigl(\theta,x_{1:T}^{b_{1:T}}\bigr)}}_{\mathclap{\text{\footnotesize{target}}}} \times \underbrace{\prod_{t=1}^T \;\; \Bigl\{ \smashoperator{\prod_{i=b_t -1}^1} \widetilde{R}_{\theta,t}{\bigl(x_t^i\bigr| x_t ^{i+1}\bigr)} \cdot \smashoperator{\prod_{i=b_t+1}^N} R_{\theta,t}{\bigl(x_t^i\bigr| x_t^{i-1}\bigr)} \Bigr\}}_{\mathclap{\text{\footnotesize{law of conditional random grid generation}}}}, \label{eq:embeddedHMM2003target} \end{align} where $R_{\theta,t}$ is a $\rho_{\theta,t}$-invariant Markov transition kernel, i.e.\ $\smash{\int\rho_{\theta,t}(x) R_{\theta,t}(x^\prime| x) \mathrm{d} x = \rho_{\theta,t}(x^\prime)}$, and $\smash{\widetilde{R}_{\theta,t}}$ is its reversal, i.e.\ $\smash{\widetilde{R}_{\theta,t}(x^\prime|x) = \rho_{\theta,t}(x^\prime) R_{\theta,t}(x|x^\prime) / \rho_{\theta,t}(x)}$ (for $\rho_{\theta,t}$-almost every $x$ and $x^\prime$). Similarly to the \gls{PMCMC} extended target distribution, the key feature of $\tilde{\pi}(\theta,b_{1:T},\mathbf{x}_{1:T})$ is that, by construction, it ensures that the associated marginal distribution of $\smash{(\theta,x_{1:T}^{b_{1:T}})}$ is the posterior of interest. \subsection{Metropolis--Hastings algorithm} \label{subsec:ehmm-mh} As detailed in the next section, the algorithm proposed in \citet{Neal2003} can be reinterpreted as a Gibbs sampler targeting $\tilde{\pi}(b_{1:T},\mathbf{x}_{1:T}|\theta)$.
We present here an alternative, original \gls{MH} algorithm to sample from $\tilde{\pi}(\theta,b_{1:T},\mathbf{x}_{1:T})$. It relies on a proposal of the form \begin{equation} q{\bigl( \theta,{\theta'}\bigr)} \times \underbrace{\Psi_{{\theta'}}{\bigl(\mathbf{x}_{1:T}\bigr)} }_{\mathclap{\parbox{2.3cm}{\text{\parbox{2.3cm}{\centering\footnotesize{law of random grid generation}}}}}} \times \underbrace{q_{{\theta'}}{\bigl(b_{1:T}\bigr| \mathbf{x}_{1:T}\bigr)}}_{\mathclap{\text{\parbox{2cm}{\centering\footnotesize{path selection}}}}}, \label{eq:proposalembeddedHMM2003} \end{equation} where \begin{equation} \Psi_\theta{\bigl( \mathbf{x}_{1:T}\bigr)} \coloneqq \frac{1}{N^T} \prod_{t=1}^T \Bigl\{ \rho_{\theta,t}{\bigl(x_t^1\bigr)} \smashoperator{\prod_{i=2}^N} R_{\theta,t}{\bigl(x_t^i\bigr| x_t^{i-1}\bigr)} \Bigr\} \label{eq:distributionrandomHMMfilter} \end{equation} is sometimes referred to as the \emph{ensemble base measure} \citep{Neal2011} and \begin{equation} q_\theta(b_{1:T} | \mathbf{x}_{1:T}) \coloneqq \frac{\tilde{p}_\theta{\bigl( x_{1:T}^{b_{1:T}},y_{1:T}\bigr)}}{\sum_{b_{1:T}^\prime}\tilde{p}_\theta{\bigl( x_{1:T}^{b_{1:T}^\prime},y_{1:T}\bigr)}} = \frac{1}{N^T}\frac{\tilde{p}_\theta{\bigl(x_{1:T}^{b_{1:T}},y_{1:T}\bigr)}}{\tilde{p}_\theta(y_{1:T})}. \label{eq:pathselectionEHMM} \end{equation} In this expression, we have (where we note that this is no longer a probability density with respect to Lebesgue measure) \begin{equation} \tilde{p}_\theta( x_{1:T},y_{1:T}) \coloneqq \frac{\mu_\theta(x_1) g_\theta(y_1|x_1)}{\rho_{\theta,1}(x_1)} \prod_{t=2}^T \frac{f_\theta(x_t|x_{t-1}) g_\theta(y_t|x_t)}{\rho_{\theta,t}(x_t)} \label{eq:modifiedjointposterior} \end{equation} and \begin{equation} \tilde{p}_\theta(y_{1:T}) \coloneqq \frac{1}{N^T}\sum _{b_{1:T}^\prime}\tilde{p}_\theta{\bigl( x_{1:T}^{b_{1:T}^\prime },y_{1:T}\bigr)} . \label{eq:likelihoodestimatorembeddedHMM} \end{equation} To sample from $\Psi_\theta( \mathbf{x}_{1:T})$, we sample $\smash{x_t^{1}\sim\rho_{\theta,t}}(x_t^{1})$ and $\smash{\mathbf{x}_t^{-1}}\sim\smash{\prod_{i=2}^N R_{\theta,t}(x_t^i| x_t^{i-1})}$ for $t=1,...,T$. Hence, at time $t$ all of the particles are marginally distributed according to $\rho_{\theta,t}$. When $\smash{R_{\theta,t}(x^\prime| x) = \rho_{\theta,t}(x^\prime)}$, this corresponds to the algorithm proposed in \citet{LinChen2005}. Sampling from the high-dimensional discrete distribution $q_\theta(b_{1:T}| \mathbf{x}_{1:T})$ can be performed in $O(N^2T)$ operations with the finite state-space \gls{HMM} filter using the $N$ states $(x_t^i)$ at time $t$, transition probabilities proportional to $f_\theta(x_t^j| x_{t-1}^i)$ and conditional probabilities of the observations proportional to $g_\theta(y_t| x_t^i) / \rho_{\theta,t}(x_t^i)$. We also obtain as a by-product $\tilde{p}_\theta(y_{1:T})$, which is an unbiased estimate of $p_\theta(y_{1:T})$. The resulting \gls{MH} algorithm targeting the extended distribution given in \eqref{eq:embeddedHMM2003target} with the proposal given in \eqref{eq:proposalembeddedHMM2003} admits an acceptance probability of the form \begin{equation} 1 \wedge \frac{\tilde{p}_{{\theta'}}(y_{1:T})p({\theta'})}{\tilde{p}_\theta(y_{1:T}) p(\theta)} \frac{q({\theta'},\theta)}{q(\theta,{\theta'})}, \label{eq:embeddedHMMratio} \end{equation} i.e.\ it looks very much like the \gls{PMMH} algorithm, except that instead of having likelihood terms estimated by a particle filter, these likelihood terms are estimated using a finite state-space \gls{HMM} filter. 
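As an illustration of this forward pass, the following hedged Python sketch computes $\log \tilde{p}_\theta(y_{1:T})$ in \eqref{eq:likelihoodestimatorembeddedHMM} by a standard $O(N^2T)$ forward recursion over the pool of states; the callables \texttt{log\_mu}, \texttt{log\_f}, \texttt{log\_g} and \texttt{log\_rho} are placeholders for the model- and $\rho_{\theta,t}$-specific log-densities and are assumptions of this illustration.
\begin{verbatim}
# Hedged sketch: finite state-space HMM forward filter on the EHMM pool {x_t^i}.
# log_mu(x), log_f(x, x_prev), log_g(y, x), log_rho(t, x): placeholder callables
# for log mu_theta, log f_theta, log g_theta and log rho_{theta,t+1}.
import numpy as np
from scipy.special import logsumexp

def ehmm_forward_filter(grid, y, log_mu, log_f, log_g, log_rho):
    # grid[t] holds the N pool states at time t+1; y[t] the corresponding observation
    T, N = len(grid), len(grid[0])
    log_alpha = np.empty((T, N))
    log_alpha[0] = -np.log(N) + np.array(
        [log_mu(x) + log_g(y[0], x) - log_rho(0, x) for x in grid[0]])
    for t in range(1, T):
        # log f_theta(x_t^j | x_{t-1}^i) for all pairs (i, j): the O(N^2) step
        log_F = np.array([[log_f(xj, xi) for xj in grid[t]] for xi in grid[t - 1]])
        log_alpha[t] = (-np.log(N)
                        + np.array([log_g(y[t], xj) - log_rho(t, xj) for xj in grid[t]])
                        + logsumexp(log_alpha[t - 1][:, None] + log_F, axis=0))
    return log_alpha, logsumexp(log_alpha[T - 1])  # filter terms, log p~_theta(y_{1:T})
\end{verbatim}
Here $\log\alpha_t(i)$ equals $\log\tilde{p}_\theta(x_t^i|y_{1:t})$ up to an additive constant, which is exactly the quantity required for sampling from the path-selection distribution \eqref{eq:pathselectionEHMM} and for the Gibbs sampler discussed below.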
To establish the correctness of the acceptance probability given in \eqref{eq:embeddedHMMratio}, we note that \begin{align} \frac{\tilde{\pi}(\theta,b_{1:T},\mathbf{x}_{1:T})}{\Psi_\theta(\mathbf{x}_{1:T}) q_\theta(b_{1:T}| \mathbf{x}_{1:T}) } & = \frac{N^{-T}\pi(\theta,x_{1:T}^{b_{1:T}}) \prod_{t=1}^T \bigl\{ \prod_{i=b_t -1}^1 \widetilde{R}_{\theta,t}(x_t^i| x_t^{i+1}) \cdot \prod_{i=b_t+1}^N R_{\theta,t}(x_t^i| x_t^{i-1}) \bigr\} }{ \prod_{t=1}^T \bigl\{ \rho_{\theta,t}( x_t^{1}) \cdot \prod_{i=2}^N R_{\theta,t}(x_t^i| x_t^{i-1}) \bigr\} N^{-T}\frac{\tilde{p}_\theta( x_{1:T}^{b_{1:T}},y_{1:T}) }{\tilde{p}_\theta( y_{1:T}) }}\\ & =\frac{\pi(\theta, x_{1:T}^{b_{1:T}})}{\prod_{t=1}^T \rho_{\theta,t}(x_t^{b_t})} \biggl[ \frac{p_\theta(x_{1:T}^{b_{1:T}},y_{1:T})}{\tilde{p}_\theta(y_{1:T}) \prod_{t=1}^T \rho_{\theta,t}( x_t^{b_t})}\biggr]^{-1}\\ & = p(\theta| y_{1:T}) \frac{\tilde{p}_\theta(y_{1:T})}{p_\theta(y_{1:T})} \propto \tilde{p}_\theta(y_{1:T}) p(\theta), \label{eq:embeddedHMMratiotargetproposal} \end{align} where we have used that \begin{equation} \frac{\tilde{p}_\theta(x_{1:T}^{b_{1:T}},y_{1:T})}{\tilde{p}_\theta{\bigl( y_{1:T}\bigr)}} =\frac{p_\theta(x_{1:T}^{b_{1:T}},y_{1:T})}{\tilde{p}_\theta(y_{1:T}) \prod_{t=1}^T \rho_{\theta,t}{\bigl(x_t^{b_t}\bigr)}}. \end{equation} In addition, we have used the following identity which we will also exploit in the next section: if $R$ is a $\rho$-invariant Markov kernel and $\widetilde{R}$ the associated reversal, then for any $b,c \in \{1,\dotsc,N\}$, \begin{align} \smashoperator{\prod_{i=b-1}^1} \widetilde{R}{\bigl(x^i\bigr| x^{i+1}\bigr)} \cdot \rho{\bigl(x^{b}\bigr)} \cdot \smashoperator{\prod_{i=b+1}^N} R{\bigl(x^i\bigr| x^{i-1}\bigr)} = \smashoperator{\prod_{i=c-1}^1} \widetilde{R}{\bigl(x^i\bigr| x^{i+1}\bigr)} \cdot \rho{\bigl(x^c\bigr)} \cdot \smashoperator{\prod_{i=c+1}^N} R{\bigl(x^i\bigr| x^{i-1}\bigr)}. \label{eq:useful_reversal_kernel_identity} \end{align} \subsection{Interpretation as a collapsed Gibbs sampler} \label{subsec:ehmm-gibbs} Consider the following Gibbs sampler type algorithm to sample from $\pi(x_{1:T}|\theta)$: \begin{enumerate} \item \label{enum:ehmm_gibbs:1} Sample $b_{1:T}$ uniformly at random on $\smash{\{1,\dotsc,N\}^T}$ and set $\smash{x_{1:T}^{b_{1:T}}\leftarrow x_{1:T}}$; \item \label{enum:ehmm_gibbs:2} Sample $\smash{\tilde{\pi}(\mathbf{x}_{1:T}^{-b_{1:T}}| \theta,b_{1:T},x_{1:T}^{b_{1:T}})}$; \item \label{enum:ehmm_gibbs:3} Sample $\smash{b_T\sim\tilde{\pi}(b_T|\theta,\mathbf{x}_{1:T})}$ then $\smash{b_{T-1}\sim\tilde{\pi}(b_{T-1}| \theta,\mathbf{x}_{1:T-1},x_T^{b_T},b_T)}$ and so on. \end{enumerate} It is obvious that Steps~\ref{enum:ehmm_gibbs:1} and \ref{enum:ehmm_gibbs:2} coincide with the first steps of the \gls{EHMM} algorithm described in \citet{Neal2003}.
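Before turning to Step~\ref{enum:ehmm_gibbs:3}, we note that the identity \eqref{eq:useful_reversal_kernel_identity}, which drives both the calculation above and the derivation below, is easy to check numerically. The following self-contained sketch does so on a small discrete state space, using a non-reversible $\rho$-invariant kernel built by composing two Metropolis--Hastings kernels; all quantities here are illustrative assumptions.
\begin{verbatim}
# Hedged sketch: numerical check of the reversal-kernel identity on a finite space.
import numpy as np
rng = np.random.default_rng(0)

S = 5                                      # size of the discrete state space
rho = rng.random(S); rho /= rho.sum()      # target distribution

def mh_kernel(proposal):
    # row-stochastic Metropolis-Hastings kernel with invariant distribution rho
    K = np.zeros((S, S))
    for i in range(S):
        for j in range(S):
            if i != j:
                K[i, j] = proposal[i, j] * min(
                    1.0, rho[j] * proposal[j, i] / (rho[i] * proposal[i, j]))
        K[i, i] = 1.0 - K[i].sum()
    return K

Q1 = np.full((S, S), 1.0 / S)              # independent uniform proposal
Q2 = rng.random((S, S)); Q2 /= Q2.sum(axis=1, keepdims=True)
R = mh_kernel(Q1) @ mh_kernel(Q2)          # rho-invariant but, in general, non-reversible
R_rev = (R * rho[:, None]).T / rho[:, None]  # reversal: R~(j | i) = rho[j] R[j, i] / rho[i]

def chain_density(s, b):
    # prod_{i=b-1}^{1} R~(s_i | s_{i+1}) * rho(s_b) * prod_{i=b+1}^{N} R(s_i | s_{i-1})
    val = rho[s[b]]
    for i in range(b - 1, -1, -1):
        val *= R_rev[s[i + 1], s[i]]
    for i in range(b + 1, len(s)):
        val *= R[s[i - 1], s[i]]
    return val

s = rng.integers(S, size=7)                # an arbitrary configuration of pool states
vals = [chain_density(s, b) for b in range(len(s))]
assert np.allclose(vals, vals[0])          # the same value for every anchor position b
\end{verbatim}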
For Step~\ref{enum:ehmm_gibbs:3}, we note that \begin{align} \MoveEqLeft \tilde{\pi}{\bigl(b_t\bigr| \theta,\mathbf{x}_{1:t},x_{t+1:T}^{b_{t+1:T}},b_{t+1:T}\bigr)} \\ & \propto \smashoperator{\sum_{b_{1:t-1}}} \idotsint \pi{\bigl(\theta,x_{1:T}^{b_{1:T}}\bigr)} \prod_{n=1}^T \;\; \Bigl\{ \smashoperator{\prod_{i=b_n -1}^1} \widetilde{R}_{\theta,n}{\bigl(x_n^i\bigr| x_n^{i+1}\bigr)} \cdot \smashoperator{\prod_{i=b_n+1}^N} R_{\theta,n}{\bigl(x_n^i\bigr| x_n^{i-1}\bigr)} \Bigr\}\, \mathrm{d}\mathbf{x}_{t+1}^{-b_{t+1}}\cdots \mathrm{d} \mathbf{x}_T^{-b_T}\\ & =\smashoperator{\sum_{b_{1:t-1}}} \pi{\bigl(\theta,x_{1:T}^{b_{1:T}}\bigr)} \smashoperator{\prod_{n=1}^t} \;\; \Bigl\{\smashoperator{\prod_{i=b_n -1}^1} \widetilde{R}_{\theta,n}{\bigl(x_n^i \bigr| x_n^{i+1}\bigr)} \cdot \smashoperator{\prod_{i=b_n+1}^N} R_{\theta,n}{\bigl(x_n^i\bigr| x_n^{i-1}\bigr)} \Bigr\}\\ & = \smashoperator{\sum_{b_{1:t-1}}} \pi{\bigl(\theta,x_{1:T}^{b_{1:T}}\bigr)} \prod_{n=1}^t \frac{ \prod_{i=b_n -1}^1 \widetilde{R}_{\theta,n}{\bigl(x_n^i\bigr| x_n^{i+1}\bigr)} \cdot \rho_{\theta,n}{\bigl( x_n^{b_n}\bigr)} \cdot \prod_{i=b_n+1}^N R_{\theta,n}{\bigl(x_n^i\bigr| x_n^{i-1}\bigr)} }{\rho_{\theta,n}{\bigl( x_n^{b_n}\bigr)} }\\ & \propto\smashoperator{\sum_{b_{1:t-1}}}\,\frac{\pi{\bigl( \theta,x_{1:T}^{b_{1:T}}\bigr)} }{ \prod_{n=1}^t \rho_{\theta,n}{\bigl( x_n^{b_n}\bigr)} }, \end{align} where by \eqref{eq:useful_reversal_kernel_identity}, the numerator in the penultimate line is independent of $b_n$. Since \begin{align} \frac{\pi{\bigl(\theta,x_{1:T}^{b_{1:T}}\bigr)}}{\prod_{n=1}^t \rho_{\theta,n}{\bigl(x_n^{b_n}\bigr)}} & \propto \frac{p_{\theta }{\bigl( x_{1:T}^{b_{1:T}},y_{1:T}\bigr)} }{\prod_{n=1}^t \rho_{\theta,n}{\bigl(x_n^{b_n}\bigr)}}\\ & \propto \underbrace{\prod_{n=1}^t \frac{f_\theta\bigl(x_n^{b_n}\bigr| x_{n-1}^{b_{n-1}}\bigr)g_\theta{\bigl(y_n\bigr| x_n^{b_n}\bigr)}}{\rho_{\theta,n}{\bigl(x_n^{b_n}\bigr)} }}_{\mathclap{\text{\footnotesize{modified posterior $\tilde{p}_\theta(x_{1:t}^{b_{1:t}}| y_{1:t})$}}}} \cdot \smashoperator{\prod_{n=t+1}^T} f_\theta{\bigl(x_n^{b_n}\bigr| x_{n-1}^{b_{n-1}}\bigr)} g_\theta{\bigl(y_n\bigr| x_n^{b_n}\bigr)}, \end{align} (using the convention that $f_\theta(x_1^{b_1}| x_0^{b_0}) \coloneqq \mu_\theta(x_1^{b_1})$), we can compute the marginal $\smash{\tilde{p}_\theta(x_t^{b_t}| y_{1:t}) \coloneqq \sum_{b_{1:t-1}} \tilde{p}_\theta(x_{1:t}^{b_{1:t}}| y_{1:t})}$ using the same (finite state-space) \gls{HMM} filter discussed in the previous section and so \begin{equation} \tilde{\pi}{\bigl(b_t\bigr| \theta,\mathbf{x}_{1:t},x_{t+1:T}^{b_{t+1:T}},b_{t+1:T}\bigr)} \propto\tilde{p}_\theta{\bigl(x_t^{b_t}\bigr| y_{1:t}\bigr)} f_\theta{\bigl(x_{t+1}^{b_{t+1}}\bigr| x_t^{b_t}\bigr)} \end{equation} coinciding with the expression obtained in \citet{Neal2003}. This provides an alternative proof of the validity of the algorithm. The present derivation is more complex than that in \citet{Neal2003}, which relies on a simple detailed balance argument. One potential benefit of our approach is that it can be extended systematically to any extended target admitting a similar structure; see for example \citet[p.~116]{LindstenSchon2013} for extensions to the non-Markovian case. Finally, we note that this algorithm may be viewed as a special case of the framework proposed in \citet{Tjelmeland2004} and simplifies to Barker's kernel \citep{Barker1965} if $N=2$ and $T=1$.
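Given the filtering terms $\log\alpha_t(i)$ returned by the forward-filter sketch of Section~\ref{subsec:ehmm-mh}, which equal $\log\tilde{p}_\theta(x_t^i|y_{1:t})$ up to an additive constant, Step~\ref{enum:ehmm_gibbs:3} amounts to the following backward pass (again a hedged sketch, with \texttt{log\_f} a placeholder for $\log f_\theta$); the same routine also produces a draw from the path-selection distribution \eqref{eq:pathselectionEHMM}.
\begin{verbatim}
# Hedged sketch: backward sampling of b_{1:T} (Step 3 of the collapsed Gibbs sampler).
import numpy as np
from scipy.special import logsumexp

def ehmm_backward_sample(grid, log_alpha, log_f, seed=0):
    rng = np.random.default_rng(seed)
    T, N = log_alpha.shape
    b = np.empty(T, dtype=int)
    w = np.exp(log_alpha[T - 1] - logsumexp(log_alpha[T - 1]))
    b[T - 1] = rng.choice(N, p=w / w.sum())
    for t in range(T - 2, -1, -1):
        # pi(b_t | ...) propto p~(x_t^{b_t} | y_{1:t}) f_theta(x_{t+1}^{b_{t+1}} | x_t^{b_t})
        log_w = log_alpha[t] + np.array([log_f(grid[t + 1][b[t + 1]], x) for x in grid[t]])
        w = np.exp(log_w - logsumexp(log_w))
        b[t] = rng.choice(N, p=w / w.sum())
    return b
\end{verbatim}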
\section{Alternative embedded hidden Markov models} \label{sec:embeddedHMMnewversion} \glsreset{MCMCPF} \glsreset{MCMCFAAPF} \glsreset{MCMCAPF} In its original version, the \gls{EHMM} method has a computational cost per iteration of order $O(N^2T)$ compared to $O(NT)$ for \gls{PMCMC} methods, and it samples particles independently across time, which can be inefficient if the latent states are strongly correlated. The new version of \gls{EHMM} methods, which was proposed in \citet{ShestopaloffNeal2016}, resolves both of these limitations. It can be viewed as a \gls{PMCMC}-type algorithm making use of a new type of \gls{PF} that we term the \emph{\gls{MCMCFAAPF}}, given its connection to the \gls{FAAPF}, which we detail below. \subsection{Extended target distribution} This version of the \gls{EHMM}, henceforth referred to as the \emph{alternative} \gls{EHMM} method, relies on the extended target distribution \begin{equation} \tilde{\pi}{\bigl( \theta,b_{1:T},\mathbf{x}_{1:T}, \mathbf{a}_{1:T-1}^{-b_{2:T}}\bigr)} = \frac{1}{N^T} \times \underbrace{\pi{\bigl(\theta,x_{1:T}^{b_{1:T}}\bigr)}}_{\mathclap{\text{\footnotesize{target}}}} \times \underbrace{\phi_\theta\bigl(\mathbf{x}_{1:T}^{-b_{1:T}},\mathbf{a}_{1:T-1}^{-b_{2:T}}\bigr|x_{1:T}^{b_{1:T}},b_{1:T}\bigr)}_{\mathclap{\text{\footnotesize{law of conditional \gls{MCMCFAAPF}}}}}, \end{equation} where we will refer to the algorithm inducing the following distribution as the conditional \gls{MCMCFAAPF} for reasons which are made clear below: \begin{align} \MoveEqLeft \phi_\theta{\bigl( \mathbf{x}_{1:T}^{-b_{1:T}},\mathbf{a}_{1:T-1}^{-b_{2:T}}\bigr| x_{1:T}^{b_{1:T}},b_{1:T}\bigr)}\\* & = \smashoperator{\prod_{\smash{i=b_1-1}}^1}\widetilde{R}_{\theta,1}{\bigl(x_1^i\bigr| x_1^{i+1}\bigr)} \cdot \smashoperator{\prod_{\smash{i=b_1+1}}^N} R_{\theta,1}{\bigl(x_1^i\bigr| x_1^{i-1}\bigr)} \label{eq:conditionalPFnovelEHMMtarget}\\ & \quad \times \smashoperator{\prod_{t=2}^{\smash{T}}} \;\; \Bigl\{\smashoperator{\prod_{i=b_t -1}^{\smash{1}}} \widetilde{R}_{\theta,t}{\bigl(x_t^i,a_{t-1}^i\bigr| x_t^{i+1},a_{t-1}^{i+1};\mathbf{x}_{t-1}\bigr)} \smashoperator{\prod_{i=b_t+1}^{\smash{N}}} R_{\theta,t}{\bigl(x_t^{i},a_{t-1}^{i}\bigr| x_t^{i-1},a_{t-1}^{i-1};\mathbf{x}_{t-1}\bigr)} \Bigr\}, \end{align} with $\smash{b_t=a_t^{b_{t+1}}}$ as for \gls{PMCMC} methods. Here $R_{\theta,1}$ is invariant with respect to $\rho_{\theta,1}(x_1) = p_\theta(x_1 | y_1)$ whereas, for $t=2,\dotsc,T$, $R_{\theta,t}(\,\cdot\,|\,\cdot\,; \mathbf{x}_{t-1})$ is invariant w.r.t.\ \begin{align} \rho_{\theta,t}{\bigl(x_t, a_{t-1}\bigr| \mathbf{x}_{t-1}\bigr)} & =\frac{g_\theta{\bigl(y_t\bigr| x_t\bigr)} f_\theta{\bigl(x_t\bigr| x_{t-1}^{a_{t-1}}\bigr)}} {\sum_{i=1}^Np_\theta{\bigl(y_t\bigr| x_{t-1}^i\bigr)}} =\frac{p_\theta{\bigl(y_t\bigr| x_{t-1}^{a_{t-1}}\bigr)}}{\sum_{i=1}^Np_\theta{\bigl(y_t\bigr| x_{t-1}^i\bigr)}}p_\theta{\bigl(x_t\bigr| y_t,x_{t-1}^{a_{t-1}}\bigr)} , \end{align} while $\widetilde{R}_{\theta,1}$ and, for $t=2,\dotsc,T$, $\widetilde{R}_{\theta,t}(\,\cdot\,|\,\cdot\,; \mathbf{x}_{t-1})$ denote the reversals of $R_{\theta,1}$ and of $R_{\theta,t}(\,\cdot\,|\,\cdot\,; \mathbf{x}_{t-1})$ with respect to their respective invariant distributions.
Note that if $R_{\theta,1}(x_1^\prime|x_1) =\rho_{\theta,1}( x_1^\prime)$ and $R_{\theta,t}(x_t^\prime, a_{t-1}^\prime|x_t, a_{t-1};\mathbf{x}_{t-1}) = \rho_{\theta,t}(x_t^\prime, a_{t-1}^\prime| \mathbf{x}_{t-1})$, the extended target $\tilde{\pi}( \theta,b_{1:T},\mathbf{a}_{1:T-1}^{-b_{2:T}}, \mathbf{x}_{1:T})$ coincides exactly with the extended target associated with the \gls{FAAPF} described in Section~\ref{Section:perfectadaptationPF}. As explored in the following two sections, this allows us to understand this \gls{EHMM} approach as the incorporation of a slightly more general class of \glspl{PF} within a \gls{PMCMC} framework and ultimately suggests further generalisations of these algorithms. \subsection{Metropolis--Hastings algorithm} \label{subsec:mcmc-fa-apf} We now consider the following \gls{MH} algorithm to sample from $\smash{\tilde{\pi}(\theta,b_{1:T}, \mathbf{x}_{1:T}, \mathbf{a}_{1:T-1}^{-b_{2:T}})}$. It relies on a proposal of the form \begin{equation} q{\bigl( \theta,{\theta'}\bigr)} \times \underbrace{\Psi_{{\theta'}}{\bigl(\mathbf{x}_{1:T}, \mathbf{a}_{1:T-1}\bigr)} }_{\mathclap{\text{\parbox{2cm}{\centering\footnotesize{law of \gls{MCMCFAAPF}}}}}} \times \underbrace{\frac{1}{N}}_{\mathclap{\text{\parbox{1.3cm}{\centering\footnotesize{path selection}}}}}, \label{eq:proposalnovelEHMM} \end{equation} i.e.\ to sample $b_{1:T}$, we pick $b_T$ uniformly at random, then set $\smash{b_t=a_t^{b_{t+1}}}$ for $t=T-1,\dotsc,1$. Moreover, \begin{align} \Psi_\theta{\bigl( \mathbf{x}_{1:T}, \mathbf{a}_{1:T-1}\bigr)} & = \rho_{\theta,1}{\bigl( x_1^1\bigr)} \smashoperator{\prod_{\smash{i=2}}^N} R_{\theta,1}{\bigl(x_1^i\bigr| x_1^{i-1}\bigr)}\\ & \quad \times \smashoperator{\prod_{t=2}^{\smash{T}}} \; \Bigl\{ \rho_{\theta,t}{\bigl( x_t^1, a_{t-1}^1\bigr| \mathbf{x}_{t-1}\bigr)} \smashoperator{\prod_{i=2}^{\smash{N}}} R_{\theta,t}{\bigl(x_t^{i},a_{t-1}^{i}\bigr| x_t^{i-1},a_{t-1}^{i-1};\mathbf{x}_{t-1}\bigr)} \Bigr\} \label{eq:novelPFforEHMM} \end{align} is the law of a novel \gls{PF} type algorithm, which we refer to as the \gls{MCMCFAAPF}; again the reason for this terminology should become clear below. The \gls{MCMCFAAPF} proceeds as follows. \begin{enumerate} \item At time $1$, sample $x_1^1\sim\rho_{\theta,1}(x_1^1)$ and then $\mathbf{x}_1^{-1}\sim \prod_{i=2}^N R_{\theta,1}(x_1^i| x_1^{i-1})$. \item At time $t=2,\dotsc,T$, sample \begin{enumerate} \item $(x_t^1, a_{t-1}^1) \sim\rho_{\theta,t}(x_t^1, a_{t-1}^1| \mathbf{x}_{t-1})$, \item $(\mathbf{x}_t^{-1},\mathbf{a}_{t-1}^{-1}) \sim \prod_{i=2}^N R_{\theta,t}(x_t^{i},a_{t-1}^{i}| x_t^{i-1},a_{t-1}^{i-1};\mathbf{x}_{t-1})$. \end{enumerate} \end{enumerate} If $R_{\theta,1}(x_1^\prime\bigr| x_1) = \rho_{\theta,1}(x_1^\prime)$ and $R_{\theta,t}(x_t^\prime, a_{t-1}^\prime\bigr|x_t, a_{t-1};\mathbf{x}_{t-1}) = \rho_{\theta,t}(x_t^\prime, a_{t-1}^\prime\bigr| \mathbf{x}_{t-1})$, this corresponds to the standard \gls{FAAPF}. 
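A schematic implementation of this forward pass is sketched below; the callables \texttt{sample\_rho1}, \texttt{sample\_rho\_t}, \texttt{kernel\_1} and \texttt{kernel\_t} stand for exact draws from $\rho_{\theta,1}$ and $\rho_{\theta,t}(\,\cdot\,|\,\mathbf{x}_{t-1})$ and for single applications of $R_{\theta,1}$ and $R_{\theta,t}(\,\cdot\,|\,\cdot\,;\mathbf{x}_{t-1})$, and are assumptions of this sketch rather than part of the algorithm's specification.
\begin{verbatim}
# Hedged sketch: forward pass of the MCMC-FA-APF (or of the MCMC-PF, depending on
# which invariant distributions the supplied kernels target).
def mcmc_pf_forward(T, N, sample_rho1, kernel_1, sample_rho_t, kernel_t):
    xs, anc = [], []                         # xs[t] = x_t^{1:N}; anc[t-1] = a_{t-1}^{1:N}
    x = [sample_rho1()]                      # x_1^1 drawn exactly from rho_{theta,1}
    for i in range(1, N):
        x.append(kernel_1(x[i - 1]))         # x_1^i | x_1^{i-1} via one R_{theta,1} move
    xs.append(x)
    for t in range(1, T):
        xi, ai = sample_rho_t(t, xs[t - 1])  # (x_t^1, a_{t-1}^1) drawn exactly
        x_new, a_new = [xi], [ai]
        for i in range(1, N):
            xi, ai = kernel_t(t, x_new[i - 1], a_new[i - 1], xs[t - 1])
            x_new.append(xi); a_new.append(ai)  # one R_{theta,t} move per particle
        xs.append(x_new); anc.append(a_new)
    return xs, anc
\end{verbatim}
The unbiased estimate $\hat{p}_\theta(y_{1:T})$ entering the acceptance probability below is then computed from the resulting particles exactly as in \eqref{eq:likelihoodestimatorperfectadaptationPF}.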
The resulting \gls{MH} algorithm targeting the extended distribution defined above and using the proposal defined in \eqref{eq:proposalnovelEHMM} admits an acceptance probability of the form \begin{equation} 1 \wedge \frac{\hat{p}_{{\theta'}}(y_{1:T}) p({\theta'})}{\hat{p}_\theta(y_{1:T}) p(\theta)}\frac{q({\theta'},\theta)}{q(\theta,{\theta'})}, \label{eq:alternative_ehmm_acceptance_probability} \end{equation} i.e.\ it looks very much like the \gls{PMMH}, except that here $\hat{p}_\theta(y_{1:T})$ is given by the expression in \eqref{eq:likelihoodestimatorperfectadaptationPF} with particles generated via \eqref{eq:novelPFforEHMM}. Note that this estimate is unbiased. The validity of the acceptance probability in \eqref{eq:alternative_ehmm_acceptance_probability} can be established by calculating \begin{align} \MoveEqLeft \frac{\tilde{\pi}{\bigl( \theta,b_{1:T}, \mathbf{x}_{1:T}, \mathbf{a}_{1:T-1}^{-b_{2:T}}\bigr)}}{\Psi_\theta{\bigl( \mathbf{x}_{1:T},\mathbf{a}_{1:T-1}\bigr)} \frac{1}{N}}\\ & = N^{1-T}\pi{\bigl(\theta,x_{1:T}^{b_{1:T}}\bigr)} \frac{\prod_{i=b_1-1}^1 \widetilde{R}_{\theta,1}{\bigl(x_1^i\bigr| x_1^{i+1}\bigr)} \cdot \prod_{i=b_1+1}^N R_{\theta,1}{\bigl(x_1^i\bigr| x_1^{i-1}\bigr)}}{\rho_{\theta,1}{\bigl( x_1^1\bigr)} \prod_{i=2}^N R_{\theta,1}{\bigl(x_1^i\bigr| x_1^{i-1}\bigr)}}\\ & \quad \times \prod_{t=2}^T \frac{\prod_{i=b_t -1}^1 \widetilde{R}_{\theta,t}{\bigl(x_t^i,a_{t-1}^i\bigr|x_t^{i+1},a_{t-1}^{i+1};\mathbf{x}_{t-1}\bigr)} \cdot \prod_{i=b_t+1}^N R_{\theta,t}{\bigl(x_t^{i},a_{t-1}^{i}\bigr| x_t^{i-1},a_{t-1}^{i-1};\mathbf{x}_{t-1}\bigr)}}{\rho_{\theta,t}{\bigl(x_t^{1}, a_{t-1}^{1}\bigr| \mathbf{x}_{t-1}\bigr)} \prod_{i=2}^N R_{\theta,t}{\bigl(x_t^{i},a_{t-1}^{i}\bigr| x_t^{i-1},a_{t-1}^{i-1};\mathbf{x}_{t-1}\bigr)}}\\ & = \frac{N^{1-T}\pi{\bigl( \theta,x_{1:T}^{b_{1:T}}\bigr)} }{\rho_{\theta,1}{\bigl( x_1^{b_1}\bigr)} \prod_{t=2}^T \rho_{\theta,t}{\bigl(x_t^{b_t}, a_{t-1}^{b_t}\bigr|\mathbf{x}_{t-1}\bigr)} }\\ & = \frac{N^{1-T}\pi{\bigl(\theta,x_{1:T}^{b_{1:T}}\bigr)}}{\frac{p_\theta(x_1^{b_1}, y_1)}{p_\theta(y_1)} \prod_{t=2}^T \frac{f_\theta(x_t^{b_t}| x_{t-1}^{b_{t-1}}) g_\theta(y_t| x_t^{b_t})}{\sum_{i=1}^N p_\theta(y_t| x_{t-1}^i) } } = p(\theta | y_{1:T}) \frac{\hat{p}_\theta(y_{1:T})}{p_\theta(y_{1:T})}. \end{align} We have again used identity \eqref{eq:useful_reversal_kernel_identity} and additionally that $\smash{b_t=a_t^{b_{t+1}}}$, for $t = T-1,\dotsc, 1$. \subsection{Gibbs sampler} \label{subsec:gibbs_mcmc-fa-apf} The \gls{EHMM} method of \cite{ShestopaloffNeal2016} can be reinterpreted as a collapsed Gibbs sampler to sample from the extended target distribution $\tilde{\pi}(\theta,b_{1:T},\mathbf{x}_{1:T}, \mathbf{a}_{1:T-1}^{-b_{2:T}})$. Given a current value of $x_{1:T}$, the algorithm proceeds as follows. \begin{enumerate} \item Sample $b_{1:T}$ uniformly at random and set $x_{1:T}^{b_{1:T}}\leftarrow x_{1:T}$. \item Run the conditional \gls{MCMCFAAPF}, i.e.\ sample from $\phi_\theta(\mathbf{x}_{1:T}^{-b_{1:T}},\mathbf{a}_{1:T-1}^{-b_{2:T}}| x_{1:T}^{b_{1:T}},b_{1:T})$. \item Sample $b_T$ according to $\Pr(b_T=i) = 1/N$ and then, for $t=T-1,\dotsc,1$, sample $b_t$ according to a distribution proportional to $f_\theta(x_{t+1}^{b_{t+1}}| x_t^{b_t})$. \end{enumerate} The validity of the algorithm is established using a detailed balance argument in \citet{ShestopaloffNeal2016}.
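The third step above only involves the model transition density; a minimal sketch (with \texttt{xs} the pool of particles produced by the conditional \gls{MCMCFAAPF} and \texttt{log\_f} again a placeholder for $\log f_\theta$) reads as follows.
\begin{verbatim}
# Hedged sketch: path selection for the Gibbs sampler based on the
# conditional MCMC-FA-APF.
import numpy as np
from scipy.special import logsumexp

def select_path(xs, log_f, seed=0):
    rng = np.random.default_rng(seed)
    T, N = len(xs), len(xs[0])
    b = np.empty(T, dtype=int)
    b[T - 1] = rng.integers(N)               # b_T uniform on {1,...,N}
    for t in range(T - 2, -1, -1):
        # b_t propto f_theta(x_{t+1}^{b_{t+1}} | x_t^{b_t})
        log_w = np.array([log_f(xs[t + 1][b[t + 1]], x) for x in xs[t]])
        w = np.exp(log_w - logsumexp(log_w))
        b[t] = rng.choice(N, p=w / w.sum())
    return b
\end{verbatim}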
Alternatively, we can show using simple calculations similar to the ones presented earlier that \begin{equation} \tilde{\pi}\bigl( b_t\big| \theta,\mathbf{x}_{1:t}, x_{t+1:T}^{b_{t+1:T}}, b_{t+1:T}\bigr) \propto f_\theta\bigl(x_{t+1}^{b_{t+1}} \big| x_t^{b_t}\bigr) . \end{equation} In the standard conditional \gls{PF}, the particles are conditionally independent given the previously sampled values. The conditional \gls{MCMCFAAPF} allows for conditional dependence between all the particles (and ancestor indices) generated in one time step. Indeed, we can choose the kernels $R_{\theta, t}(\,\cdot\,|\,\cdot\,;\mathbf{x}_{t-1})$ such that they induce only small, local moves. This can improve the performance of \gls{PG} samplers in high dimensions: as with standard \gls{MCMC} schemes, less ambitious local moves are much more likely to be accepted. Of course, as with any local proposal one could not expect such a strategy to work well with strongly multi-modal target distributions without further refinements. \section{Novel practical extensions} \label{sec:novel_methodology} Motivated by the connections identified above, we now develop extensions based upon the more general \gls{PMCMC} algorithms just described, in particular considering constructions based around general \glspl{APF}. Specifically, we relax the requirement in the \gls{MH} algorithm from Section~\ref{subsec:mcmc-fa-apf} that it is possible to sample from the proposal distribution of the \gls{FAAPF} (which is possible in only a small number of tractable models) and to compute its associated importance weight. \subsection{MCMC APF} Generalising the \gls{MCMCFAAPF} in the same manner as the \gls{APF} generalises the \gls{FAAPF} leads us to propose a (general) \emph{\gls{MCMCAPF}}. Set \begin{align} \rho_{\theta,t}^{\mathbf{q}_\theta}(x_t,a_{t-1}|\mathbf{x}_{t-2:t-1}, \mathbf{a}_{t-2}) & = \begin{cases} q_{\theta,1}(x_1), & \text{if $t = 1$,}\\ \dfrac{v_{\theta,t-1}^{a_{t-1}}}{\sum_{i=1}^N v_{\theta,t-1}^i} q_{\theta,t}(x_t|x_{t-1}^{a_{t-1}}), & \text{if $t> 1$,} \end{cases} \end{align} where the $v_{\theta,t-1}^i$ are as defined in \eqref{eq:apfweights} (these weights induce, in particular, the dependence upon $\mathbf{x}_{t-2}$ and $\mathbf{a}_{t-2}$), and we allow $\smash{R_{\theta,t}^{\mathbf{q}_\theta}(\,\cdot\,|\,\cdot\,;\mathbf{x}_{t-2:t-1}, \mathbf{a}_{t-2})}$ and $\smash{\widetilde{R}_{\theta,t}^{\mathbf{q}_\theta}(\,\cdot\,|\,\cdot\,;\mathbf{x}_{t-2:t-1}, \mathbf{a}_{t-2})}$ to respectively denote a $\rho_{\theta,t}^{\mathbf{q}_\theta}(\,\cdot\,|\mathbf{x}_{t-2:t-1}, \mathbf{a}_{t-2})$-invariant Markov kernel and the associated reversal kernel. Although this expression superficially resembles the mixture proposal of the \emph{marginalised} \gls{APF} \citep{Klass2005}, by explicitly including the ancestry variables it avoids incurring the $O(N^2)$ cost and allows an approximation of smoothing distributions. We then define the law of the \gls{MCMCAPF} via: \begin{align} \Psi^{\mathbf{q}_\theta}_\theta{\bigl(\mathbf{x}_{1:T}, \mathbf{a}_{1:T-1}\bigr)} & \coloneqq \rho^{\mathbf{q}_\theta}_{\theta,1}{\bigl( x_1^1\bigr)} \smashoperator{\prod_{\smash{i=2}}^N} R^{\mathbf{q}_\theta}_{\theta,1}{\bigl(x_1^i\bigr| x_1^{i-1}\bigr)}\\ & \quad \times \smashoperator{\prod_{t=2}^{\smash{T}}}\;\Bigl\{\rho^{\mathbf{q}_\theta}_{\theta,t}{\bigl(x_t^1, a_{t-1}^1\bigr| \mathbf{x}_{t-2:t-1}, \mathbf{a}_{t-2}\bigr)} \bigr.
\smashoperator{\prod_{i=2}^{\smash{N}}} R^{\mathbf{q}_\theta}_{\theta,t}{\bigl(x_t^{i},a_{t-1}^{i}\bigr| x_t^{i-1},a_{t-1}^{i-1};\mathbf{x}_{t-2:t-1}, \mathbf{a}_{t-2}\bigr)} \Bigr\}. \end{align} The corresponding extended \gls{PMCMC} target distribution is simply: \begin{equation} \tilde{\pi}^{\mathbf{q}_\theta}{\bigl( \theta,b_{1:T},\mathbf{x}_{1:T}, \mathbf{a}_{1:T-1}^{-b_{2:T}}\bigr)} = \frac{1}{N^T} \times \underbrace{\pi\bigl(\theta,x_{1:T}^{b_{1:T}}\bigr) }_{\text{\footnotesize{target}}} \times \underbrace{\phi^{\mathbf{q}_\theta}_{\theta}\bigl( \mathbf{x}_{1:T}^{-b_{1:T}},\mathbf{a}_{1:T-1}^{-b_{2:T}}\big\vert x_{1:T}^{b_{1:T}},b_{1:T}\bigr)}_{\text{\footnotesize{law of conditional \gls{MCMCAPF}}}}, \label{eq:novelEHMMtarget} \end{equation} where, as might be expected: \begin{align} \MoveEqLeft \phi^{\mathbf{q}_\theta}_\theta{\bigl(\mathbf{x}_{1:T}^{-b_{1:T}},\mathbf{a}_{1:T-1}^{-b_{2:T}}\bigr| x_{1:T}^{b_{1:T}},b_{1:T}\bigr)}\\* & = \smashoperator{\prod_{\smash{i=b_1-1}}^1} \widetilde{R}^{\mathbf{q}_\theta}_{\theta,1}{\bigl(x_1^i\bigr| x_1^{i+1}\bigr)} \cdot \smashoperator{\prod_{\smash{i=b_1+1}}^N} R^{\mathbf{q}_\theta}_{\theta,1}{\bigl(x_1^i\bigr| x_1^{i-1}\bigr)}\\ & \quad \times \smashoperator{\prod_{t=2}^{\smash{T}}} \;\; \Bigl\{\smashoperator{\prod_{i=b_t-1}^{\smash{1}}} \widetilde{R}^{\mathbf{q}_\theta}_{\theta,t}{\bigl(x_t^i,a_{t-1}^i\bigr| x_t^{i+1},a_{t-1}^{i+1};\mathbf{x}_{t-2:t-1}, \mathbf{a}_{t-2}\bigr)} \bigr. \smashoperator{\prod_{i=b_t+1}^{\smash{N}}} R^{\mathbf{q}_\theta}_{\theta,t}{\bigl(x_t^{i},a_{t-1}^{i}\bigr| x_t^{i-1},a_{t-1}^{i-1};\mathbf{x}_{t-2:t-1}, \mathbf{a}_{t-2}\bigr)} \Bigr\}. \end{align} Note that the \gls{MCMCFAAPF} can be viewed as a special case of the \gls{MCMCAPF} in much the same way that the \gls{FAAPF} from Section~\ref{Section:perfectadaptationPF} can be viewed as a special case of the (general) \gls{APF} from Section~\ref{subsec:apf}. \subsection{Metropolis--Hastings algorithms} We arrive at a \gls{PMMH}-type algorithm based around the \gls{MCMCAPF} by considering proposal distributions of the form: \begin{equation} q{\bigl(\theta,{\theta'}\bigr)} \times \underbrace{\Psi^{\mathbf{q}_\theta'}_{{\theta'}}{\bigl(\mathbf{x}_{1:T}, \mathbf{a}_{1:T-1}\bigr)}}_{\mathclap{\text{\parbox{2.3cm}{\centering\footnotesize{law of \gls{MCMCAPF}}}}}} \times \underbrace{w_{{\theta'},T}^{b_T}}_{\mathclap{\text{\parbox{1.3cm}{\centering\footnotesize{path selection}}}}}, \end{equation} where, as in Section~\ref{subsec:apf}, $\smash{w_{\theta,T}^i = v_{\theta,T}^i / \sum_{j=1}^N v_{\theta,T}^j}$ and $\smash{\hat{p}_\theta(y_{1:T}) = \prod_{t=1}^T N^{-1} \sum_{i=1}^N v_{\theta,t}^i}$ is again an unbiased estimate of the marginal likelihood. Note that the \gls{PMMH}-type variant of the \gls{MCMCFAAPF} cannot often be used in realistic scenarios because it requires sampling from $p_\theta(x_t|x_{t-1}, y_t)$ and evaluating $x_{t-1} \mapsto p_\theta(y_t|x_{t-1})$ in order to implement the \gls{FAAPF} in \eqref{eq:mcmc_fa-apf_transition_density}. To circumvent this problem, we can define a special case of the \gls{MCMCAPF} algorithm which requires neither sampling from $p_\theta(x_t|x_{t-1}, y_t)$ nor evaluating $x_{t-1} \mapsto p_\theta(y_t|x_{t-1})$. This algorithm, obtained by setting $\tilde{p}_\theta(y|x) \equiv 1$, will be called \emph{\gls{MCMCPF}} as it represents an analogue of the (bootstrap) \gls{PF}. 
At time~$1$, the \gls{MCMCPF} uses the \gls{MCMC} kernels $\overline{R}_{\theta,1}$ which are invariant w.r.t.\ $\bar{\rho}_{\theta,1}(x_1) \coloneqq \mu_\theta(x_1)$. At time~$t$, $t > 1$, the \gls{MCMCPF} uses the kernels $\overline{R}_{\theta,t}(\,\cdot\,|\,\cdot\,;\mathbf{x}_{t-1})$ which are invariant w.r.t.\ \begin{align} \bar{\rho}_{\theta,t}(x_t,a_{t-1}|\mathbf{x}_{t-1}) \coloneqq \frac{g_\theta(y_{t-1}|x_{t-1}^{a_{t-1}})}{\sum_{i=1}^N g_\theta(y_{t-1}|x_{t-1}^i)} f_\theta(x_t|x_{t-1}^{a_{t-1}}). \label{eq:mcmc_fa-apf_transition_density} \end{align} The \gls{PMMH}-type variant of the \gls{MCMCPF} may be useful if the \gls{PMMH}-type variant of the \gls{MCMCFAAPF} cannot be implemented. \subsection{Gibbs samplers} Given the extended target construction of the \gls{MCMCAPF} algorithm, it is straightforward to implement \gls{PG} algorithms with \gls{BS} (or similarly with \gls{AS} -- see Section~\ref{subsec:generalised_PGS}) which target it. However, Gibbs samplers based around the (conditional) \gls{MCMCPF} do not appear useful as they might be expected to perform less well than the Gibbs sampler based around the \gls{MCMCFAAPF} and are no easier to implement: in contrast to the \gls{PMMH}-type algorithms, the Gibbs sampler based around the (conditional) \gls{MCMCFAAPF} does \emph{not} generally require sampling from $p_\theta(x_t|x_{t-1}, y_t)$ and it only requires evaluation of the unnormalised density $g_\theta(y_t|x_t)f_\theta(x_t|x_{t-1})$ in the transition density of the \gls{FAAPF} in \eqref{eq:mcmc_fa-apf_transition_density}. \section{General particle Markov chain Monte Carlo methods} \label{sec:general_pmcmc} In this section, we describe a slight generalisation of \gls{PMCMC} methods which admits both the standard \gls{PMCMC} methods from Section~\ref{sec:pmcmc} as well as the alternative \gls{EHMM} methods from Section~\ref{sec:embeddedHMMnewversion} as special cases. In addition, we derive both the \glsdesc{BS} and \glsdesc{AS} recursions for this algorithm. We note that this section is necessarily slightly more abstract than the previous sections. As the details developed below are not required for understanding the remainder of this work, this section may be skipped on a first reading. \subsection{Extended target distribution} We define $\mathbf{z}_1 \coloneqq \mathbf{x}_1$ and $\mathbf{z}_t \coloneqq (\mathbf{x}_t, \mathbf{a}_{t-1})$. For notational brevity, also define $\smash{\mathbf{z}_1^{-i} \coloneqq \mathbf{z}_1 \setminus x_1^i}$, $\smash{\mathbf{z}_t^{-i} \coloneqq \mathbf{z}_t \setminus (x_t^i, a_{t-1}^i)}$ as well as $\smash{\mathbf{z}_{1:t}^{-b_{1:t}} = (\mathbf{z}_{1}^{-b_1}, \dotsc, \mathbf{z}_{t}^{-b_t})}$. We note that further auxiliary variables could be included in $\mathbf{z}_t$ without changing anything in the construction developed below. The law of a general \gls{PF} is given by \begin{align} \Psi_\theta(\mathbf{z}_{1:T}) \coloneqq \psi_{\theta,1}(\mathbf{z}_1) \prod_{t=2}^T \psi_{\theta,t}(\mathbf{z}_t|\mathbf{z}_{1:t-1}).
\end{align} With this notation, general \gls{PMCMC} methods target the following extended distribution: \begin{align} \tilde{\pi}(\theta, \mathbf{z}_{1:T}, b_T) \coloneqq \frac{1}{N^T} \times \underbrace{\pi(\theta, x_{1:T}^{b_{1:T}})}_{\text{\footnotesize{target}}} \times \underbrace{\phi_\theta(\mathbf{z}_{1:T}^{-b_{1:T}} | x_{1:T}^{b_{1:T}}, b_{1:T})}_{\text{\parbox{2.3cm}{\centering\footnotesize{law of conditional general \gls{PF}}}}}, \label{eq:general_pmcmb_target_distribution} \end{align} where the law of the conditional general \gls{PF} is given by \begin{align} \phi_\theta(\mathbf{z}_{1:T}^{-b_{1:T}} | x_{1:T}^{b_{1:T}}, b_{1:T}) \coloneqq \psi_{\theta,1}^{-b_1}(\mathbf{z}_1^{-b_1}) \prod_{t=2}^T \psi_{\theta,t}^{-b_t}(\mathbf{z}_t^{-b_t}|\mathbf{z}_{1:t-1}, x_t^{b_t}), \end{align} with \begin{align} \psi_{\theta,t}^{-i}(\mathbf{z}_t^{-i}|\mathbf{z}_{1:t-1}, x_t^i) & \coloneqq \frac{\psi_{\theta,t}(\mathbf{z}_t|\mathbf{z}_{1:t-1})}{\psi_{\theta,t}^i(x_t^i, a_{t-1}^i|\mathbf{z}_{1:t-1})}. \end{align} Here, $\psi_{\theta,t}^i(\,\cdot\,|\mathbf{z}_{1:t-1})$ denotes the marginal distribution of the $i$th components of $\mathbf{x}_t$ and $\mathbf{a}_{t-1}$ under the distribution $\psi_{\theta,t}(\,\cdot\,|\mathbf{z}_{1:t-1})$. Finally, for any $t \in \{1,\dotsc,T\}$, we define the following unnormalised weight \begin{align} \tilde{v}_{\theta, t}^{b_t} \coloneqq \frac{1}{N^t} \frac{\gamma_{\theta,t}(x_{1:t}^{b_{1:t}})}{\psi_{\theta,1}^{b_1}(x_1^{b_1}) \prod_{n=2}^t \psi_{\theta,n}^{b_n}(x_n^{b_n}, a_{n-1}^{b_n}|\mathbf{z}_{1:n-1})}, \end{align} where $b_{1:t-1}$ on the r.h.s.\ are to be interpreted as functions of $b_t$ and the ancestry variables via the usual recursion $\smash{b_t = a_t^{b_{t+1}}}$. Here, $\gamma_{\theta,t}(x_{1:t})$ is the unnormalised density targeted at the $t$th step of the general \gls{PF} -- for all the algorithms discussed in this work, we will state these densities explicitly in Appendix~\ref{subsec:special_cases}; in particular, \begin{equation} \gamma_{\theta,T}(x_{1:T}) = p_\theta(x_{1:T}, y_{1:T}). \end{equation} We make the following minimal assumption to ensure the validity of the (general) \gls{PMCMC} algorithms. \begin{assumption}[absolute continuity] \label{as:absolute_continuity} For any $t \in \{1,\dotsc,T\}$, any $i \in \{1,\dotsc,N\}$ and any $\mathbf{z}_{1:t-1}$, the support of $(x_t, b_{t-1}) \mapsto \psi_{\theta,t}^i(x_t, b_{t-1}|\mathbf{z}_{1:t-1})$ includes the support of $(x_t, b_{t-1}) \mapsto \gamma_{\theta,t}(x_{1:t-1}^{b_{1:t-1}}, x_{t})$. \end{assumption} We also make the following assumption which requires that all marginals of the conditional distributions $\psi_{\theta,t}(\,\cdot\,|\mathbf{z}_{1:t-1})$ are identical. \begin{assumption}[identical marginals] \label{as:exchangeability} For any $(i,j) \in \{1,\dotsc, N\}^2$ and any $t \in \{1,\dotsc,T\}$, $\psi_{\theta,t}^i = \psi_{\theta,t}^j$. \end{assumption} \begin{remark}\label{rem:identical_marginals_assumption} Assumption~\ref{as:exchangeability} can be easily dropped in favour of selecting a suitable (non-uniform) distribution for the particle indices $b_{1:T}$ in \eqref{eq:general_pmcmb_target_distribution}. Indeed, more elaborate constructions could be used to justify resampling schemes which, unlike multinomial resampling, are not exchangeable in the sense of \citet[Assumption~2]{AndrieuDoucetHolenstein2010} (unless one permutes the particle indices uniformly at random at the end of each step as mentioned in \citet{AndrieuDoucetHolenstein2010}).
Similarly, such more general constructions would allow us to view the use of more sophisticated \glspl{PF}, such as the \emph{discrete particle filter} of \citet{Fearnhead1998}, with \gls{PMCMC} schemes as special cases of this framework as shown in \citet[Section~2.3.4]{Finke2015}. \end{remark} In Examples~\ref{ex:antithetic} and \ref{ex:sqmc}, we show how \glspl{APF} with \emph{antithetic variables} \citep{BizjajevaOlsson2008} and (randomised) \emph{\gls{SQMC} methods} \citep{GerberChopin2015} can be considered as special cases of the framework described in this section even though these methods cannot easily be viewed as conventional \glspl{PF} because the particles are not sampled conditionally independently at each step. \begin{example}[\glspl{APF} with antithetic variables] \label{ex:antithetic} The \glspl{APF} with antithetic variables from \citet{BizjajevaOlsson2008} aim to improve the performance of \glspl{APF} by introducing negative correlation into the particle population. To that end, the $N$ particles are divided into $M$ groups of $K$ particles; the particles in each group then share the same ancestor index and given the ancestor particle, they are sampled in such a way that they are negatively correlated. Assume that there exists $K, M \in \mathbb{N}$ such that $N = K M$ and for $\tilde{x}_t \coloneqq (\tilde{x}_t^1, \dotsc, \tilde{x}_t^K) \in \mathcal{X}^K$ let $\tilde{q}_{\theta,t}(\tilde{x}_t| x_{t-1})$ denote some joint proposal kernel for $K$ particles such that if $(\tilde{x}_t^1, \dotsc, \tilde{x}_t^K) \sim \tilde{q}_{\theta,t}(\,\cdot\,| x_{t-1})$ then \begin{enumerate*} \item $\tilde{x}_t^1, \dotsc, \tilde{x}_t^K$ are (pairwise) negatively correlated, \item marginally, $\smash{\tilde{x}_t^k \sim q_{\theta,t}(\,\cdot\,| x_{t-1})}$ for all $k \in \{1,\dotsc,K\}$. \end{enumerate*} Given $\mathbf{z}_{1:t-1}$, the \gls{APF} with antithetic variables generates $\mathbf{z}_t = (\mathbf{a}_{t-1}, \mathbf{x}_t)$ as follows (we use the convention that any action prescribed for some $m$ is to be performed for all $m \in \{1,\dotsc,M\}$). \begin{enumerate} \item \label{ex:antithetic:step:1} Set $\smash{a_{t-1}^{(m-1)K+1} = i}$ w.p.\ proportional to $v_{\theta, t-1}^i$. \item Set $\smash{a_{t-1}^{(m-1)K+k} \coloneqq a_{t-1}^{(m-1)K+1}}$ for all $k \in \{2,\dotsc,K\}$. \item \label{ex:antithetic:step:2} Sample $\smash{\bigl(x_t^{(m-1)K+k}\bigr)_{k \in \{1,\dotsc,K\}} \sim \tilde{q}_{\theta,t}\bigl(\,\cdot\,\big| x_{t-1}^{a_{t-1}^{(m-1)K+1}}\bigr)}$. \item \label{ex:antithetic:step:3} Permute the particle indices on $\mathbf{z}_t^1, \dotsc, \mathbf{z}_t^N$ uniformly at random. \end{enumerate} \end{example} \begin{example}[sequential quasi Monte Carlo]\label{ex:sqmc} Let $\mathcal{X} = \mathbb{R}^d$. Randomised \gls{SQMC} algorithms are general \glspl{PF} which stratify sampling of the ancestor indices and particles $\mathbf{z}_t = (\mathbf{a}_{t-1}, \mathbf{x}_t)$ by computing them as a deterministic transformation of a set of randomised quasi Monte Carlo points $\mathbf{u}_t \coloneqq (u_t^1, \dotsc, u_t^N) \in [0,1)^{(d+1)N}$. By construction, \begin{enumerate*} \item \label{ex:sqmc:property:1} the set $\mathbf{u}_t = (u_t^1, \dotsc, u_t^N)$ has a low discrepancy, \item \label{ex:sqmc:property:2} for each $i \in \{1,\dotsc,N\}$, $u_t^i$ is (marginally) uniformly distributed on the $(d+1)$-dimensional hypercube. \end{enumerate*} Write $u_t^i = (\tilde{u}_t^i, \tilde{v}_t^i)$ with $\tilde{u}_t^i \in [0,1)$ and $\tilde{v}_t^i \in [0,1)^{d}$. 
Given $\mathbf{z}_{1:t-1}$, the algorithm \citep[Algorithm~3]{GerberChopin2015} transforms $\mathbf{u}_t \to \mathbf{z}_t = (\mathbf{a}_{t-1}, \mathbf{x}_t)$ as follows (using the convention that any action mentioned for some $i$ is to be performed for all $i \in \{1,\dotsc,N\}$). \begin{enumerate} \item \label{ex:sqmc:step:1} Find a suitable permutation $\sigma_{t-1}\colon \{1,\dotsc, N\} \to \{1,\dotsc,N\}$ such that $x_{t-1}^{\sigma_{t-1}(1)} \leq \dotsc \leq x_{t-1}^{\sigma_{t-1}(N)}$, if $d=1$; if $d > 1$, the permutation $\sigma_{t-1}$ is obtained by mapping the particles to the hypercube $[0,1)^{d}$ and projecting them onto $[0,1)$ using the pseudo-inverse of the Hilbert space-filling curve. These projections are then ordered as for $d=1$ (see \citet{GerberChopin2015} for details). \item \label{ex:sqmc:step:2} Set $\smash{a^i \coloneqq F^{-1}(\tilde{u}_t^i)}$, where $F^{-1}$ denotes the generalised inverse of the \gls{CDF} $F\colon \{1,\dotsc,N\} \to [0,1]$, defined by $\smash{F(i) \coloneqq \sum_{j=1}^i v_{\theta,t-1}^{\sigma_{t-1}(j)} / \sum_{j=1}^N v_{\theta,t-1}^j}$. \item \label{ex:sqmc:step:3} Set $\smash{a_{t-1}^i \coloneqq \sigma_{t-1}(a^i)}$ and $\smash{x_t^i \coloneqq \varGamma_{\theta,t}(x_{t-1}^{a_{t-1}^i}, \tilde{v}_t^i)}$. Here, if $d=1$, the function $\varGamma_{\theta,t}(x_{t-1}, \,\cdot\,)$ is the (generalised) inverse of the \gls{CDF} associated with $q_{\theta,t}(\,\cdot\,|x_{t-1})$; if $d > 1$, this can be generalised via the Rosenblatt transform. \item \label{ex:sqmc:step:4} Permute the particle indices on $\mathbf{z}_t^1, \dotsc, \mathbf{z}_t^N$ uniformly at random. \end{enumerate} \end{example} While the joint kernel $\psi_{\theta,t}(\mathbf{z}_t|\mathbf{z}_{1:t-1})$ is potentially intractable in both examples, the random permutation of the particle indices (i.e.\ Step~\ref{ex:antithetic:step:3} in Example~\ref{ex:antithetic} and also Step~\ref{ex:sqmc:step:4} in Example~\ref{ex:sqmc}) ensures that Assumption~\ref{as:exchangeability} is satisfied. Indeed, it can be easily verified that in both examples, for any $(i,j) \in \{1,\dotsc,N\}^2$, \begin{align} \psi_{\theta,t}^i(x_t,a_{t-1}|\mathbf{z}_{1:t-1}) = \rho_{\theta,t}^{\mathbf{q}_\theta}(x_t, a_{t-1}|\mathbf{x}_{t-2:t-1}, \mathbf{a}_{t-2}) = \psi_{\theta,t}^j(x_t, a_{t-1}|\mathbf{z}_{1:t-1}). \end{align} As pointed out in Remark~\ref{rem:identical_marginals_assumption}, Assumption~\ref{as:exchangeability} is not actually necessary and can be easily dropped in favour of a slightly more general construction of the extended target distribution which is implicitly employed by \citet{BizjajevaOlsson2008, GerberChopin2015} (who therefore do not require the random permutation of the particle indices). \subsection{General particle marginal Metropolis--Hastings} In this section, we use the general \gls{PMCMC} framework to derive a general \gls{PMMH} algorithm. All \gls{PMMH} algorithms and \gls{MH} versions of the alternative \gls{EHMM} methods can then be seen as special cases of this general scheme as shown in Appendix~\ref{subsec:special_cases}. 
As with the standard \gls{PMMH}, we may use an \gls{MH} algorithm to target the extended distribution $\tilde{\pi}(\theta, \mathbf{z}_{1:T}, b_T)$ using a proposal of the form \begin{equation} q(\theta,{\theta'}) \times \underbrace{\Psi_{{\theta'}}(\mathbf{z}_{1:T})}_{\text{\footnotesize{\parbox{1.5cm}{\centering law of general \gls{PF}}}}} \times \underbrace{q_{\theta'}(b_T|\mathbf{z}_{1:T})}_{\text{\footnotesize{path selection}}}, \label{eq:proposalGeneralisedPF} \end{equation} where we have defined the selection probability \begin{align} q_\theta(b_T|\mathbf{z}_{1:T}) \coloneqq \frac{\tilde{v}_{\theta, T}^{b_T}}{\sum_{i=1}^N \tilde{v}_{\theta, T}^i}. \end{align} Define the usual unbiased estimate of the marginal likelihood \begin{equation} \hat{p}_\theta(y_{1:T}) \coloneqq \sum_{i=1}^N \tilde{v}_{\theta, T}^i. \end{equation} Then we obtain the following general \gls{PMMH} algorithm (Algorithm~\ref{alg:general_pmmh}) the validity of which can be established by checking that indeed, \begin{align} \frac{\tilde{\pi}(\theta, \mathbf{z}_{1:T}, b_T)}{\Psi_\theta(\mathbf{z}_{1:T}) q_\theta(b_T|\mathbf{z}_{1:T})} = p(\theta|y_{1:T}) \frac{\hat{p}_\theta(y_{1:T})}{p_\theta(y_{1:T})}. \end{align} \noindent\parbox{\textwidth}{ \begin{flushleft} \begin{framedAlgorithm}[general \gls{PMMH} algorithm] \label{alg:general_pmmh} Given $(\theta, \mathbf{z}_{1:T}, b_T) \sim \tilde{\pi}(\theta, \mathbf{z}_{1:T}, b_T)$ with associated likelihood estimate $\hat{p}_{\theta}(y_{1:T})$. \begin{enumerate} \item Propose $\theta' \sim q(\theta, \theta')$, $\mathbf{z}_{1:T}' \sim \Psi_{\theta'}(\mathbf{z}_{1:T}')$ and $b_T' \sim q_{\theta'}(b_T'|\mathbf{z}_{1:T}')$. \item Compute likelihood estimate $\hat{p}_{\theta'}(y_{1:T})$ based on $\mathbf{z}_{1:T}'$. \item Set $(\theta, \mathbf{z}_{1:T}, b_T) \leftarrow (\theta', \mathbf{z}_{1:T}', b_T')$ w.p.\ $ 1 \wedge \dfrac{\smash{\hat{p}_{{\theta'}}(y_{1:T})p({\theta'})}}{\hat{p}_{\theta}(y_{1:T})p(\theta)} \dfrac{\smash{q({\theta'},\theta)}}{q(\theta,{\theta'})} . \label{eq:generalisedPMMHratio} $ \end{enumerate} \end{framedAlgorithm} \end{flushleft} } \subsection{General particle Gibbs samplers} \label{subsec:generalised_PGS} \glsreset{AS} \glsreset{BS} In this section, we use the general \gls{PMCMC} framework to derive a general \gls{PG} sampler. We also derive \gls{BS} \citep{Whiteley2010} and \gls{AS} \citep{LindstenJordanSchon2014} recursions and prove that they leave the target distribution of interest invariant. As before, all \gls{PG} samplers and Gibbs versions of the alternative \gls{EHMM} method can then be seen as special cases of this general scheme as shown in Appendix~\ref{subsec:special_cases}. Set \begin{equation} \gamma_\theta(x_{t+1:T}|x_{1:t}) \coloneqq \frac{\gamma_{\theta,T}(x_{1:T})}{\gamma_{\theta,t}(x_{1:t})}. \end{equation} We are then ready to state both (general) \gls{PG} samplers. For the remainder of this section, we let $\tilde{x}_{1:t}^i$ denote the $i$th particle lineage at time~$t$, i.e.\ $\smash{\tilde{x}_{1:t}^i = x_{1:t}^{i_{1:t}}}$, where $i_t = i$ and $\smash{i_n = a_n^{i_{n+1}}}$, for $n = t-1, \dotsc, 1$. \noindent\parbox{\textwidth}{ \begin{flushleft} \begin{framedAlgorithm}[general \gls{PG} sampler with \gls{BS}] \label{alg:general_pg_bs} Given $(\theta, x_{1:T}) \sim \pi$, obtain $(\theta', x_{1:T}') \sim \pi$ as follows. \begin{enumerate} \item Sample $\theta'$ via some $\pi(\,\cdot\,|x_{1:T})$-invariant \gls{MCMC} kernel. \item For $t = 1,\dotsc,T$, perform the following steps. 
\begin{enumerate} \item If $t=1$, sample $b_1$ uniformly on $\{1,\dotsc,N\}$, set $x_1^{b_1} \coloneqq x_1$ and sample $\mathbf{z}_1^{-b_1} \sim \psi_{\theta',1}^{-b_1}(\mathbf{z}_1^{-b_1}|x_1^{b_1})$. \item If $t>1$, sample $b_t$ uniformly on $\{1,\dotsc,N\}$, set $x_t^{b_t} \coloneqq x_t$, $a_{t-1}^{b_t} \coloneqq b_{t-1}$ and sample $\mathbf{z}_t^{-b_t} \sim \psi_{{\theta'},t}^{-b_t}(\mathbf{z}_t^{-b_t}|\mathbf{z}_{1:t-1}, x_t^{b_t})$. \end{enumerate} \item Sample $b_T \sim q_{\theta'}(b_T|\mathbf{z}_{1:T})$ and for $t = T-1, \dotsc, 1$, set $b_t=i$ w.p.\ proportional to $\tilde{v}_{{\theta'}, t}^{i}\gamma_{\theta'}(x_{t+1:T}^{b_{t+1:T}}|\tilde{x}_{1:t}^i)$. \item Set $x_{1:T}' \coloneqq x_{1:T}^{b_{1:T}}$. \end{enumerate} \end{framedAlgorithm} \end{flushleft} } \noindent\parbox{\textwidth}{ \begin{flushleft} \begin{framedAlgorithm}[general \gls{PG} sampler with \gls{AS}] \label{alg:general_pg_as} Given $(\theta, x_{1:T}) \sim \pi$, obtain $(\theta', x_{1:T}') \sim \pi$ as follows. \begin{enumerate} \item Sample $\theta'$ via some $\pi(\,\cdot\,|x_{1:T})$-invariant \gls{MCMC} kernel. \item For $t = 1,\dotsc,T$, perform the following steps. \begin{enumerate} \item If $t=1$, sample $b_1$ uniformly on $\{1,\dotsc,N\}$, set $x_1^{b_1} \coloneqq x_1$ and sample $\mathbf{z}_1^{-b_1} \sim \psi_{\theta',1}^{-b_1}(\mathbf{z}_1^{-b_1}|x_1^{b_1})$. \item If $t>1$, sample $b_t$ uniformly on $\{1,\dotsc,N\}$, set $x_t^{b_t} \coloneqq x_t$, set $a_{t-1}^{b_t} = i$ w.p.\ proportional to $\tilde{v}_{{\theta'}, t-1}^{i}\gamma_{\theta'}(x_{t:T}^{b_{t:T}}|\tilde{x}_{1:t-1}^{i})$ and sample $\mathbf{z}_t^{-b_t} \sim \psi_{{\theta'},t}^{-b_t}(\mathbf{z}_t^{-b_t}|\mathbf{z}_{1:t-1}, x_t^{b_t})$. \end{enumerate} \item Sample $b_T \sim q_{\theta'}(b_T|\mathbf{z}_{1:T})$ and for $t = T-1, \dotsc, 1$, set $b_t \coloneqq a_t^{b_{t+1}}$. \item Set $x_{1:T}' \coloneqq x_{1:T}^{b_{1:T}}$. \end{enumerate} \end{framedAlgorithm} \end{flushleft} } As in previous sections, the \gls{BS} recursion in Algorithm~\ref{alg:general_pg_bs} may be justified via appropriate partially-collapsed Gibbs sampler arguments by noting that \begin{align} \tilde{\pi}(b_t|\theta, \mathbf{z}_{1:t}, x_{t+1:T}^{b_{t+1:T}}) & \propto \tilde{v}_{{\theta}, t}^{b_t}\gamma_{\theta}(x_{t+1:T}^{b_{t+1:T}}|\tilde{x}_{1:t}^{b_t}). \end{align} The \gls{AS} steps in Algorithm~\ref{alg:general_pg_as} follow similarly since $a_t^{b_{t+1}} = b_t$ by construction. Alternatively -- without invoking partially-collapsed Gibbs sampler arguments -- the validity of \gls{BS} can be established by even further extending the space to include the new particle indices generated via \gls{BS}. As shown in \citet[Chapter~3.4.3]{Finke2015}, this construction also proves a particular duality of \gls{BS} and \gls{AS}. \section{Empirical study} \label{sec:simulations} In this section, we empirically compare the performance of some of the algorithms described in this work on a $d$-dimensional linear-Gaussian state-space model.
\subsection{Model} The model considered throughout this section is given by \begin{align} \mu_\theta(x_1) & = \mathrm{Normal}(x_1; m_0, C_0),\\ f_\theta(x_t|x_{t-1}) & = \mathrm{Normal}(x_t; A x_{t-1}, \sigma^2 \mathbf{I}_d), \quad \text{for $t > 1$,}\\ g_\theta(y_t|x_t) & = \mathrm{Normal}(y_t; x_t, \tau^2 \mathbf{I}_d), \quad \text{for $t \geq 1$,} \end{align} where $x_t, y_t \in \mathbb{R}^d$, $\sigma, \tau > 0$, $\mathbf{I}_d$ denotes the $(d,d)$-dimensional identity matrix and $A$ is the $(d,d)$-dimensional symmetric banded matrix with upper and lower bandwidth $1$, with entries $a_0 \in \mathbb{R}$ on the main diagonal, and with entries $a_1 \in \mathbb{R}$ on the remaining bands, i.e.\ \begin{equation} A = \begin{bmatrix} a_0 & a_1 & 0 & \dotsc & 0\\ a_1 & a_0 & a_1 & \ddots & \vdots\\ 0 & a_1 & \ddots & \ddots & 0\\ \vdots & \ddots & \ddots & a_0 & a_1\\ 0 & \dotsc & 0 & a_1 & a_0 \end{bmatrix}. \end{equation} For simplicity, we assume that the initial mean $m_0 \coloneqq \mathbf{0}_d \in \mathbb{R}^d$ (where $\mathbf{0}_d$ denotes a vector of zeros of length~$d$) and the initial $(d,d)$-dimensional covariance matrix $C_0 = \mathbf{I}_d$ are known. Thus, the task is to approximate the posterior distribution of the remaining parameters $\theta \coloneqq (a_0, a_1, \sigma, \tau)$. The true values of these parameters, i.e.\ the values used for simulating the data, are $(0.5, 0.2, 1, 1)$. As prior distributions, we take uniform distributions on $(-1,1)$ for $a_0$ and $a_1$ and inverse-gamma distributions on $\sigma$ and $\tau$ each with shape parameter $1$ and scale parameter $0.5$. All parameters are assumed to be independent a priori. In all algorithms, we propose new values ${\theta'}$ for $\theta$ via a simple Gaussian random-walk kernel, i.e.\ we use $q(\theta, {\theta'}) \coloneqq \mathrm{Normal}({\theta'};\theta, (100 d_\theta d T)^{-1} \mathbf{I}_{d_\theta})$, where $d_\theta$ is the dimension of the parameter vector $\theta$, i.e.\ $d_\theta = 4$. \subsection{Algorithms} \label{subsec:algorithms} In this subsection, we detail the specific algorithms whose empirical performance we compare in our simulation study. \begin{description}[leftmargin=0pt, labelindent=0pt] \item[Standard \gls{PMCMC}.] We implement the (bootstrap) \gls{PF} and the \gls{FAAPF} using multinomial resampling at every step, though we note that more sophisticated resampling schemes, e.g.\ adaptive systematic resampling, could easily be employed. As described above, we can implement both \gls{MH} algorithms (i.e.\ the \gls{PMMH}) and Gibbs samplers based around these standard \glspl{PF}. For the latter, we make use of \gls{AS} in the conditional \glspl{PF}. \item[Original \gls{EHMM}.] We implement the algorithms with $\rho_{\theta,t}(x) = \mathrm{Normal}(x; \mu, \varSigma)$, where $\mu$ and $\varSigma$ represent the mean and covariance matrix associated with the stationary distribution of the latent Markov chain $(X_t)_{t \in \mathbb{N}}$. We compare two different options for constructing the kernels $R_{\theta,t}$ which leave this distribution invariant. \begin{enumerate}[label=(\Roman*), ref=\Roman*] \item \label{as:original_EHMM_kernel_a} The kernel $R_{\theta,t}$ generates \gls{IID} samples from its invariant distribution, i.e.\ $\smash{R_{\theta,t}(x_t'|x_t) = \rho_{\theta,t}(x_t')}$.
\item \label{as:original_EHMM_kernel_b} The kernel $R_{\theta,t}$ is a standard \gls{MH} kernel which proposes a value $x_t^\star$ using the Gaussian random-walk proposal $\smash{\mathrm{Normal}(x_t^\star; x_t, d^{-1}\mathbf{I}_d)}$. \end{enumerate} \item[Alternative \gls{EHMM}.] We compare four different versions of the \gls{MCMCPF} and \gls{MCMCFAAPF} methods outlined above. Again, we implement both \gls{MH} algorithms and Gibbs samplers (with \gls{AS}) based around these methods. Below, we describe the specific versions which we are comparing. The kernels $\overline{R}_{\theta, t}(\,\cdot\,|\,\cdot\,;\mathbf{x}_{t-1})$ employed in the \gls{MCMCPF} and the kernels $R_{\theta, t}(\,\cdot\,|\,\cdot\,;\mathbf{x}_{t-1})$ employed in the \gls{MCMCFAAPF} are all taken to be \gls{MH} kernels which, given $(x_t, a_{t-1})$, propose a new value $(x_t^\star, a_{t-1}^\star)$ using a proposal of the following form \begin{equation} \frac{v_{\theta,t-1}^{a_{t-1}^\star}}{\sum_{i=1}^N v_{\theta,t-1}^{i}} s_{\theta,t}(x_t^\star|x_t;\mathbf{x}_{t-1}, a_{t-1}^\star). \end{equation} We compare two different approaches for generating a new value for the particle, $x_t^\star$. \begin{enumerate}[label=(\Roman*), ref=\Roman*] \item \label{as:particle_proposal_a} The first proposal uses a simple Gaussian random-walk kernel, i.e.\ \begin{equation} s_{\theta,t}(x_t^\star|x_t;\mathbf{x}_{t-1}, a_{t-1}^\star) = \mathrm{Normal}(x_t^\star; x_t, d^{-1}\mathbf{I}_d), \label{eq:rw_proposal} \end{equation} where the scaling of the covariance matrix is motivated by existing results on optimal scaling for such random-walk proposal kernels \citep{GelmanRobertsGilks1996,RobertsGelmanGilks1997}. \item \label{as:particle_proposal_b} The second proposal uses the \emph{autoregressive} proposal employed by \cite{ShestopaloffNeal2016}, i.e.\ \begin{equation} s_{\theta,t}(x_t^\star|x_t;\mathbf{x}_{t-1}, a_{t-1}^\star) = \mathrm{Normal}\bigl(x_t^\star;\mu + \sqrt{1-\varepsilon^2} (x_t - \mu), \varepsilon^2\varSigma\bigr), \label{eq:ar_proposal} \end{equation} where $\mu$ and $\varSigma$ denote the mean and covariance matrix of $f_\theta(x_t|x_{t-1}) = \mathrm{Normal}(x_t; \mu, \varSigma)$, i.e.\ $\varSigma=\sigma^2\mathbf{I}$ and $\mu=A x_{t-1}$. To scale the covariance matrix of this proposal with the dimension $d$, we set $\varepsilon \coloneqq \sqrt{d^{-1}}$. \end{enumerate} \item[Idealised.] We also implement the algorithms which the above-mentioned algorithms seek to mimic. The idealised Gibbs sampler is a (Metropolis-within-)Gibbs algorithm which updates the latent states $x_{1:T}$ as one block by sampling them from their full conditional posterior distribution. The idealised marginal \gls{MH} algorithm analytically evaluates the marginal likelihood $p_\theta(y_{1:T})$ via the Kalman filter. \end{description} \subsection{Results for general PMMH algorithms} In this subsection, we empirically compare the performance of various \gls{PMMH} type samplers. First, we fix $\theta$ in order to assess the variability of the estimates of the marginal likelihood, $\hat{p}_\theta(y_{1:T})$, which is a key ingredient in (general) \gls{PMMH} algorithms. Then, we perform inference about $\theta$. Recall that in order to implement the \gls{MH} version of the \gls{MCMCFAAPF}, we need to sample at least one particle from $p_\theta(x_t|x_{t-1}, y_t)$ at each time~$t$ and we need to be able to evaluate the function $x_{t-1} \mapsto p_\theta(y_t|x_{t-1})$.
In other words, whenever we can implement this algorithm we can also implement a standard \gls{PMMH} algorithm based around the \gls{FAAPF}. Figure~\ref{fig:likelihood_estimates} shows the relative estimates of the marginal likelihood obtained from the various algorithms described in this work for various model dimensions. Unsurprisingly, the \gls{PF}, resp.\ \gls{FAAPF}, provides lower variance estimates than its corresponding \gls{MCMCPF}, resp.\ \gls{MCMCFAAPF}, counterpart. However, more interestingly, the \gls{MCMCFAAPF} can provide lower variance estimates than the standard \gls{PF} and could prove useful in more realistic scenarios where it is computationally very expensive to run the \gls{FAAPF}. As expected, the original \gls{EHMM} method described in Section~\ref{sec:embeddedHMM} breaks down very quickly as the dimension $d$ increases. \begin{figure} \caption{$d=2$.} \label{fig:likelihood_estimates_in_dimension_2} \caption{$d=5$.} \label{fig:likelihood_estimates_in_dimension_5} \caption{$d=10$.} \label{fig:likelihood_estimates_in_dimension_10} \caption{$d=25$.} \label{fig:likelihood_estimates_in_dimension_25} \caption{Relative estimates of the marginal likelihood $p_\theta(y_{1:T})$. Based on $1\,000$ independent runs of each algorithm (and writing $\hat{p}_\theta(y_{1:T}) = \tilde{p}_\theta(y_{1:T})$ in the case of the original \gls{EHMM} method) with each run using a different data sequence of length $T=10$ simulated from the model. The number of particles was $N=1\,000$ for the $\mathrm{O}(N)$ methods and $N=100$ for the $\mathrm{O}(N^2)$ methods.} \label{fig:likelihood_estimates} \end{figure} The right panel of Figure~\ref{fig:pmmh_acf_parameters} shows kernel-density plots of the estimates of parameter~$a_0$ obtained from various \gls{PMMH}-type algorithms. Clearly, the \gls{PMMH}-type algorithms based around the (bootstrap) \gls{PF} or the \gls{MCMCPF} were unable to obtain sensible parameter estimates within the number of iterations that we fixed. The left panel of Figure~\ref{fig:pmmh_acf_parameters} shows the corresponding empirical autocorrelation. The results are consistent with the efficiency of the likelihood estimates illustrated in Figure~\ref{fig:likelihood_estimates}. That is, at least in this setting, the standard \gls{MH} version of the alternative \gls{EHMM} method does not outperform standard \gls{PMMH} algorithms. The estimates of the other parameters behaved similarly and the results for $(a_1, \sigma, \tau)$ are therefore omitted. \begin{figure} \caption{Autocorrelation (left panel) and kernel-density estimate (right panel) of the estimates of Parameter~$a_0$ for model dimension~$d=25$ and with $T=10$ observations. Obtained from $10^6$ iterations (of which the initial $10$~\% were discarded as burn-in) of standard \gls{PMMH} algorithms and \gls{MH} versions of the alternative \gls{EHMM} method using $N=1000$ particles. The autocorrelations shown on the l.h.s.\ are averages over four independent runs of each algorithm. \textit{Note:} the \gls{PMMH} algorithms based on the (bootstrap) \gls{PF} and based on the \gls{MCMCPF} failed to yield meaningful approximations of the posterior distribution and the corresponding kernel-density estimates are therefore suppressed.} \label{fig:pmmh_acf_parameters} \end{figure} \subsection{Results for general particle Gibbs samplers} In this subsection, we compare empirically the performance of various \gls{PG} type samplers (all using \gls{AS}).
Gibbs samplers based on the original \gls{EHMM} method failed to yield meaningful estimates for the model dimensions considered in this subsection and at a similar computational cost as the other algorithms. We therefore do not show results for the original \gls{EHMM} method in the figures below. Recall that in order to implement the conditional \gls{MCMCFAAPF}, we do not need to sample from $p_\theta(x_t|x_{t-1}, y_t)$ nor evaluate the function $x_{t-1} \mapsto p_\theta(y_t|x_{t-1})$. In other words, we can implement the conditional \gls{MCMCFAAPF} in many situations in which implementing a standard conditional \gls{FAAPF} is impossible. Figure~\ref{fig:gibbs_acf_state_components_dimX_100} shows the autocorrelation of estimates of the first component of $x_1$ obtained from various \gls{PG} samplers for model dimension $d=100$. For the moment, we have kept $\theta$ fixed to the true values. It appears that in high dimensions, the conditional \glspl{PF} with \gls{MCMC} moves are able to outperform standard conditional \glspl{PF}. Note that although, unsurprisingly, the best performance is obtained with the \gls{MCMCFAAPF}, the simpler \gls{MCMCPF} is able to substantially outperform the approach based upon a standard \gls{PF}. This is supported by Figure~\ref{fig:gibbs_ess_dimX_100} which shows that the conditional \glspl{PF} with \gls{MCMC} moves lead to a higher estimated \gls{ESS} in this setting. The acceptance rates associated with the \gls{MH} kernels are shown in Figure~\ref{fig:gibbs_acceptance_rates_dimX_100}. \begin{figure} \caption{Autocorrelation (left panel) and kernel-density estimate (right panel) of the estimates of the first component of $x_{1}$ for the state-space model in dimension~$d=100$ with $T=10$ observations. Obtained from three independent runs of each of the various Gibbs samplers comprising $500\,000$ iterations (of which the initial $10$~\% were discarded as burn-in) and using $N=100$ particles. Here, $\theta$ was fixed to the true parameters throughout each run. The autocorrelations shown on the r.h.s.\ are averages over the three independent runs of each algorithm. \textit{Note:} the conditional (bootstrap) \gls{PF} almost never managed to update the states: the corresponding kernel-density estimates were therefore not meaningful and are hence suppressed.} \label{fig:gibbs_acf_state_components_dimX_100} \end{figure} \begin{figure} \caption{Average \gls{ESS} for the same setting and colour-coding as in Figure~\ref{fig:gibbs_acf_state_components_dimX_100}. The results are averaged over three independent runs of each algorithm. It is worth noting that the \gls{ESS} does not take the autocorrelation of the state-estimates (over iterations of the (particle) \gls{PMCMC} chain) into account and so may flatter \glspl{MCMCPF} to an extent but does illustrate the lessening of weight degeneracy within the particle set which they achieve.} \caption{Average acceptance rates for the \gls{MH} kernels $\overline{R}_{\theta, t}(\,\cdot\,|\,\cdot\,;\mathbf{x}_{t-1})$ and $R_{\theta, t}(\,\cdot\,|\,\cdot\,;\mathbf{x}_{t-1})$ for same setting and colour-coding as in Figure~\ref{fig:gibbs_acf_state_components_dimX_100}. Note that standard \glspl{PF} can always be interpreted as using a \gls{MH} kernel that proposes \gls{IID} samples from its invariant distribution so that the acceptance rate is always $1$ in this case. 
Again, the acceptance rates are averaged over three independent runs of each algorithm.} \label{fig:gibbs_ess_dimX_100} \label{fig:gibbs_acceptance_rates_dimX_100} \end{figure} We conclude this section by showing (in Figure~\ref{fig:gibbs_acf_parameters}) simulation results for the estimates of Parameter~$a_0$ obtained from the various \gls{PG} samplers. The \gls{MH} kernel which updates $\theta$ was employed $100$ times per iteration, i.e.\ $100$ times between each conditional \gls{PF} update of the latent states, as the former is relatively computationally cheap compared to the latter. Note that, as indicated by the kernel-density estimates in the right panel of Figure~\ref{fig:gibbs_acf_parameters}, the Gibbs sampler based around the \gls{PF} did not manage to sufficiently explore the support of the posterior distribution within the number of iterations that we fixed. This lack of convergence also caused the comparatively low empirical autocorrelation of the \gls{PG} chains based around the (bootstrap) \gls{PF} in the left panel of Figure~\ref{fig:gibbs_acf_parameters}: as the chain did not sufficiently traverse the support of the target distribution -- due to poor mixing of the state-updates as illustrated in Figure~\ref{fig:gibbs_acf_state_components_dimX_100} -- the empirical autocorrelation shown in Figure~\ref{fig:gibbs_acf_parameters} is a poor estimate of the (theoretical) autocorrelation of the chain. More specifically, the former greatly underestimates the latter. The estimates of the other parameters behaved similarly and the results for $(a_1, \sigma, \tau)$ are therefore omitted. \begin{figure} \caption{Model dimension~$d=25$.} \caption{Model dimension~$d=100$.} \caption{Autocorrelation (left panel) and kernel-density estimate (right panel) of the estimates of Parameter~$a_0$ for $T=10$ observations. Obtained from one run of the various Gibbs samplers comprising $10^6$ iterations (of which the initial $10$~\% were discarded as burn-in) and using $N=100$ particles. The autocorrelations shown on the l.h.s.\ are averages over the two independent runs of each algorithm.} \label{fig:gibbs_acf_parameters} \end{figure} \section{Discussion} \glsresetall In this work, we have discussed the connections between the \gls{PMCMC} and \gls{EHMM} methodologies and have obtained novel Bayesian inference algorithms for state and parameter estimation in state-space models. We have compared the empirical performance of the various \gls{PMCMC} and \gls{EHMM} algorithms on a simple high-dimensional state-space model. We have found that a properly tuned conditional \gls{PF} which employs the local \glsdesc{MH} moves proposed in \citet{ShestopaloffNeal2016} can dramatically outperform the standard conditional \glspl{PF} in high dimensions. Additionally, by formally establishing that \gls{PMCMC} and the (alternative) \gls{EHMM} methods can be viewed as special cases of a general \gls{PMCMC} framework, we have derived both \glsdesc{BS} and \glsdesc{AS} for this general framework. This provides a promising strategy for extending the range of applicability of \glsdesc{PG} algorithms as well as providing a novel class of \glspl{PF} which might be useful. There are numerous other potential extensions of these ideas. For instance, many existing extensions of standard \gls{PMCMC} methods could also be considered for the alternative \gls{EHMM} methods, e.g.\ incorporating gradient information into the parameter proposals $q(\theta, {\theta'})$ or exploiting correlated pseudo-marginal ideas \citep{Deligiannidis2015}.
Clearly, further generalisation of the target distribution and associated algorithms introduced here are possible. Many other processes for simulating from an extended target admitting a single random trajectory with the correct marginal distribution are possible, e.g.\ along the lines of \citet{LindstenJohansenNaesseth2014}. \appendix \section{Special cases of the general PMCMC algorithm} \label{subsec:special_cases} \glsunset{MCMC} \glsunset{PF} \glsunset{FAAPF} \glsunset{APF} \glsunset{MCMCPF} \glsunset{MCMCFAAPF} \glsunset{MCMCAPF} In this appendix, we show that all \gls{PMCMC} and alternative \gls{EHMM} methods described this work can be recovered as special cases of the general \gls{PMCMC} framework from Section~\ref{sec:general_pmcmc}. For completeness, we explicitly derive all algorithms as special cases of the general framework even though \gls{PMCMC} methods based around the (bootstrap) \gls{PF} and \gls{FAAPF} were already shown to be special cases of \gls{PMCMC} methods based around the general \gls{APF} and even though, alternative \gls{EHMM} methods based around the \gls{MCMCPF} and \gls{MCMCFAAPF} were already shown to be special cases of alternative \gls{EHMM} methods based around the \gls{MCMCAPF}. \begin{description}[leftmargin=0pt, labelindent=0pt] \item[(Bootstrap) \gls{PF}.] In this case, $\psi_{\theta,1}(\mathbf{z}_1) = \prod_{i=1}^N \mu_\theta(x_1^i) = \prod_{i=1}^N \bar{\rho}_{\theta, 1}(x_1^i)$, and, for $t>1$, \begin{equation} \psi_{\theta,t}(\mathbf{z}_t|\mathbf{z}_{1:t-1}) = \prod_{i=1}^N \frac{g_\theta(y_{t-1}|x_{t-1}^{a_{t-1}^i})}{\sum_{j=1}^N g_\theta(y_{t-1}|x_{t-1}^j)} f_\theta(x_t^i|x_{t-1}^{a_{t-1}^i}) = \prod_{i=1}^N \bar{\rho}_{\theta, t}(x_t^i, a_{t-1}^i|\mathbf{x}_{t-1}), \end{equation} while $\gamma_{\theta,t}(x_{1:t}) \coloneqq p_\theta(x_{1:t}, y_{1:t})$, for any $t \leq T$. This implies that $\tilde{v}_{\theta,t}^{i} = \frac{1}{N} g_\theta(y_t|x_t^{i}) \prod_{n=1}^{t-1} \frac{1}{N} \sum_{j=1}^N g_\theta(y_n|x_n^j)$, so that we obtain $q_\theta(i|\mathbf{z}_{1:T}) = g_\theta(y_T|x_T^i) / \sum_{j=1}^N g_\theta(y_T|x_T^j)$ and $\hat{p}_\theta(y_{1:T}) = \prod_{t=1}^{T} \frac{1}{N} \sum_{i=1}^N g_\theta(y_t|x_t^i)$, as stated in Section~\ref{sec:pmcmc}. \item [\gls{FAAPF}.] In this case, $\psi_{\theta,1}(\mathbf{z}_1) = \prod_{i=1}^N p_\theta(x_1^i|y_1) = \prod_{i=1}^N \rho_{\theta,1}(x_1^i)$, and, for $t>1$, \begin{equation} \psi_{\theta,t}(\mathbf{z}_t|\mathbf{z}_{1:t-1}) = \prod_{i=1}^N \frac{g_\theta(y_{t}|x_{t-1}^{a_{t-1}^i})}{\sum_{j=1}^N g_\theta(y_{t}|x_{t-1}^j)} p_\theta(x_t^i|x_{t-1}^{a_{t-1}^i}, y_t) = \prod_{i=1}^N \rho_{\theta,t}(x_t^i, a_{t-1}^i|\mathbf{x}_{t-1}), \end{equation} while $\gamma_{\theta,t}(x_{1:t}) \coloneqq p_\theta(x_{1:t}, y_{1:t}) p_\theta(y_{t+1}|x_t)$, for $t < T$, and $\gamma_{\theta,T}(x_{1:T}) \coloneqq p_\theta(x_{1:T}, y_{1:T})$. This implies that $\tilde{v}_{\theta,t}^{i} = \frac{1}{N} p_\theta(y_1) \prod_{n=2}^t \frac{1}{N} \sum_{j=1}^N p_\theta(y_n|x_{n-1}^j)$, so that we obtain the selection probability $q_\theta(i|\mathbf{z}_{1:T}) = 1/N$ and the marginal-likelihood estimate $\hat{p}_\theta(y_{1:T}) = p_\theta(y_1) \prod_{t=2}^T \frac{1}{N} \sum_{i=1}^N p_\theta(y_t|x_{t-1}^i)$, as stated in Section~\ref{sec:pmcmc}. \item[General \gls{APF}.] 
In this case, $\psi_{\theta,1}(\mathbf{z}_1) = \prod_{i=1}^N q_{\theta,1}(x_1^i) = \prod_{i=1}^N \rho_{\theta,1}^{\mathbf{q}_\theta}(x_1^i)$, and, for $t>1$, \begin{equation} \psi_{\theta,t}(\mathbf{z}_t|\mathbf{z}_{1:t-1}) = \prod_{i=1}^N \frac{v_{\theta, t-1}^{a_{t-1}^i}}{\sum_{j=1}^N v_{\theta, t-1}^j} q_{\theta,t}(x_t^i|x_{t-1}^{a_{t-1}^i}) = \prod_{i=1}^N \rho_{\theta,t}^{\mathbf{q}_\theta}(x_t^i, a_{t-1}^i|\mathbf{x}_{t-2:t-1}, \mathbf{a}_{t-2}), \end{equation} while $\gamma_{\theta,t}(x_{1:t}) \coloneqq p_\theta(x_{1:t}, y_{1:t}) \tilde{p}_\theta(y_{t+1}|x_t)$, for $t < T$, and $\gamma_{\theta,T}(x_{1:T}) \coloneqq p_\theta(x_{1:T}, y_{1:T})$. This implies that $\tilde{v}_{\theta,t}^{i} = \frac{1}{N} v_{\theta,t}^i \prod_{n=1}^{t-1} \frac{1}{N} \sum_{j=1}^N v_{\theta,n}^j$, so that we obtain the selection probability $q_\theta(i|\mathbf{z}_{1:T}) = v_{T,\theta}^i / \sum_{j=1}^N v_{T,\theta}^j$ and the marginal-likelihood estimate $\hat{p}_\theta(y_{1:T}) = \prod_{t=1}^T \frac{1}{N} \sum_{i=1}^N v_{t,\theta}^i$, as stated in Section~\ref{sec:pmcmc}. \item[\gls{MCMCPF}.] In this case, $\psi_{\theta,1}(\mathbf{z}_1) = \bar{\rho}_{\theta,1}(x_1^1) \prod_{i=2}^N \overline{R}_{\theta,1}(x_1^i|x_1^{i-1})$, and, for $t>1$, \begin{equation} \psi_{\theta,t}(\mathbf{z}_t|\mathbf{z}_{1:t-1}) = \bar{\rho}_{\theta,t}(x_t^1, a_{t-1}^1|\mathbf{x}_{t-1}) \prod_{i=2}^N \overline{R}_{\theta,t}(x_t^i, a_{t-1}^i|x_t^{i-1}, a_{t-1}^{i-1}; \mathbf{x}_{t-1}), \end{equation} while $\gamma_{\theta,t}(x_{1:t})$, $q_\theta(b_T|\mathbf{z}_{1:T})$ and $\hat{p}_\theta(y_{1:T})$ are the same as for \gls{PMCMC} methods using the bootstrap \gls{PF}. \item[\gls{MCMCFAAPF}.] In this case, $\psi_{\theta,1}(\mathbf{z}_1) = \rho_{\theta,1}(x_1^1) \prod_{i=2}^N R_{\theta,1}(x_1^i|x_1^{i-1})$, and, for $t>1$, \begin{equation} \psi_{\theta,t}(\mathbf{z}_t|\mathbf{z}_{1:t-1}) = \rho_{\theta,t}(x_t^1, a_{t-1}^1|\mathbf{x}_{t-1}) \prod_{i=2}^N R_{\theta,t}(x_t^i, a_{t-1}^i|x_t^{i-1}, a_{t-1}^{i-1}; \mathbf{x}_{t-1}), \end{equation} while $\gamma_{\theta,t}(x_{1:t})$, $q_\theta(b_T|\mathbf{z}_{1:T})$ and $\hat{p}_\theta(y_{1:T})$ are the same as for \gls{PMCMC} methods using the \gls{FAAPF}. \item[\gls{MCMCAPF}.] In this case, $\psi_{\theta,1}(\mathbf{z}_1) = \rho_{\theta,1}^{\mathbf{q}_\theta}(x_1^1) \prod_{i=2}^N R_{\theta,1}^{\mathbf{q}_\theta}(x_1^i|x_1^{i-1})$, and, for $t>1$, \begin{equation} \psi_{\theta,t}(\mathbf{z}_t|\mathbf{z}_{1:t-1}) = \rho_{\theta,t}^{\mathbf{q}_\theta}(x_t^1, a_{t-1}^1|\mathbf{x}_{t-2:t-1}, \mathbf{a}_{t-2}) \prod_{i=2}^N R_{\theta,t}^{\mathbf{q}_\theta}(x_t^i, a_{t-1}^i|x_t^{i-1}, a_{t-1}^{i-1}; \mathbf{x}_{t-2:t-1}, \mathbf{a}_{t-2}), \end{equation} while $\gamma_{\theta,t}(x_{1:t})$, $q_\theta(b_T|\mathbf{z}_{1:T})$ and $\hat{p}_\theta(y_{1:T})$ are the same as for \gls{PMCMC} methods using the general APF. \end{description} \end{document}
\begin{document} \begin{abstract} We show that MA$_{\kappa}$ implies that each collection of ${P}_{\mathfrak c}$-points of size at most $\kappa$ which has a $P_{\mathfrak c}$-point as an $RK$ upper bound also has a ${P}_{\mathfrak c}$-point as an $RK$ lower bound. \end{abstract} \maketitle \section{Introduction} The Rudin-Keisler ($RK$) ordering of ultrafilters has received considerable attention since its introduction in the 1960s. For example, one can take a look at \cite{rudin,Rudin:66,mer,blass,kn,RS,RK}, or \cite{RV}. Recall the definition of the Rudin-Keisler ordering. \begin{definition} Let $\mathcal U$ and $\mathcal V$ be ultrafilters on $\omega$. We say that $\mathcal U\le_{RK}\mathcal V$ if there is a function $f$ in $\omega^{\omega}$ such that $A\in \mathcal U$ if and only if $f^{-1}(A)\in\mathcal V$ for every $A\subseteq \omega$. \end{definition} When $\mathcal U$ and $\mathcal V$ are ultrafilters on $\omega$ and $\mathcal U\le_{RK}\mathcal V$, we say that $\mathcal U$ is \emph{Rudin-Keisler (RK) reducible} to $\mathcal V$, or that $\mathcal U$ is \emph{Rudin-Keisler (RK) below} $\mathcal V$. In case $\mathcal U\le_{RK}\mathcal V$ and $\mathcal V\le_{RK}\mathcal U$ both hold, then we say that $\mathcal U$ and $\mathcal V$ are \emph{Rudin-Keisler equivalent}, and write $\mathcal U\equiv_{RK}\mathcal V$. Very early in the investigation of this ordering of ultrafilters, it was noticed that the class of P-points is particularly interesting. Recall that an ultrafilter $\mathcal U$ on $\omega$ is called a \emph{P-point} if for any $\set{a_n:n<\omega}\subseteq\mathcal U$ there is an $a\in\mathcal U$ such that $a\subseteq^* a_n$ for every $n<\omega$, i.e. the set $a\setminus a_n$ is finite for every $n<\omega$. P-points were first constructed by Rudin in \cite{rudin}, under the assumption of the Continuum Hypothesis. The class of P-points forms a downwards closed initial segment of the class of all ultrafilters. In other words, if $\mathcal U$ is a P-point and $\mathcal V$ is any ultrafilter on $\omega$ with $\mathcal V \; {\leq}_{RK} \; \mathcal U$, then $\mathcal V$ is also a P-point. Hence understanding the order-theoretic structure of the class of P-points can provide information about the order-theoretic structure of the class of all ultrafilters on $\omega$. One of the first systematic explorations of the order-theoretic properties of the class of all ultrafilters, and particularly of the class of P-points, under ${\leq}_{RK}$ was made by Blass in \cite{blassphd} and \cite{blass}, where he proved many results about this ordering under the assumption of Martin's Axiom (MA). Let us note here that it is not possible to construct P-points in ZFC only, as was proved by Shelah (see \cite{proper}). Thus some set-theoretic assumption is needed to ensure the existence of P-points. The most commonly used assumption when studying the order-theoretic properties of the class of P-points is MA. Under MA every ultrafilter has character $\mathfrak c$. Therefore, the ${P}_{\mathfrak c}$-points are the most natural class of P-points to focus on under MA. Again, the ${P}_{\mathfrak c}$-points form a downwards closed subclass of the P-points. \begin{definition} \label{def:pc} An ultrafilter $\mathcal U$ on $\omega$ is called a $P_{\mathfrak c}$\emph{-point} if for every $\alpha<\mathfrak c$ and any $\set{a_i:i<\alpha}\subseteq \mathcal U$ there is an $a\in\mathcal U$ such that $a\subseteq^* a_i$ for every $i<\alpha$. 
\end{definition} In Theorem 5 from \cite{blass}, Blass proved in ZFC that if $\set{\mathcal U_n:n<\omega}$ is a countable collection of P-points and if there is a P-point $\mathcal V$ such that $\mathcal U_n\le_{RK}\mathcal V$ for every $n<\omega$, then there is a P-point $\mathcal U$ such that $\mathcal U\le_{RK}\mathcal U_n$ for every $n<\omega$. In other words, if a countable family of P-points has an upper bound, then it also has a lower bound. The main result of this paper generalizes Blass' theorem to families of ${P}_{\mathfrak c}$-points of size less than $\mathfrak c$ under MA. More precisely, if MA holds and a family of ${P}_{\mathfrak c}$-points of size less than $\mathfrak c$ has an $RK$ upper bound which is a ${P}_{\mathfrak c}$-point, then the family also has an RK lower bound. Blass proved his result via some facts from \cite{blassmodeltheory} about non-standard models of complete arithmetic. In order to state these results, we introduce a few notions from \cite{blassmodeltheory}. The language $L$ will consist of symbols for all relations and all functions on $\omega$. Let $N$ be the standard model for this language, its domain is $\omega$ and each relation or function denotes itself. Let $M$ be an elementary extension of $N$, and let ${}^*R$ be the relation in $M$ denoted by $R$, and let ${}^*f$ be the function in $M$ denoted by $f$. Note that if $a\in M$, then the set $\set{{}^*f(a):f:\omega\to\omega}$ is the domain of an elementary submodel of $M$. Submodel like this, i.e.\@ generated by a single element, will be called \emph{principal}. It is not difficult to prove that a principal submodel generated by $a$ is isomorphic to the ultrapower of the standard model by the ultrafilter ${\mathcal U}_{a} = \{X \subseteq \omega: a \in {}^{\ast}{X}\}$. If $A,B\subseteq M$, we say that they are \emph{cofinal with each other} iff $(\forall a\in A)(\exists b\in B)\ a\ {}^*\!\!\le b$ and $(\forall b\in B)(\exists a\in A)\ b\ {}^*\!\!\le a$. Finally, we can state Blass' theorem. \begin{theorem}[Blass, Theorem 3 in \cite{blassmodeltheory}]\label{t:blassmodel} Let $M_i$ ($i<\omega$) be countably many pairwise cofinal submodels of $M$. Assume that at least one of the $M_i$ is principal. Then $\bigcap_{i<\omega}M_i$ is cofinal with each $M_i$, in fact it contains a principal submodel cofinal with each $M_i$. \end{theorem} After proving this theorem, Blass states that it is not known to him whether Theorem \ref{t:blassmodel} can be extended to larger collections of submodels. The proof of our main result clarifies this, namely in Theorem \ref{theorem3} below we prove that under MA it is possible to extend it to collections of models of size less than $\mathfrak c$ provided that there is a principal model that is isomorphic to an ultrapower induced by a ${P}_{\mathfrak c}$-point. Then we proceed and use this result to prove Theorem \ref{maintheorem} where we extend Theorem 5 from \cite{blass} to collections of fewer than $\mathfrak c$ many ${P}_{\mathfrak c}$-points. Recall that MA$_{\alpha}$ is the statement that for every partial order $P$ which satisfies the countable chain condition and for every collection $\mathcal{D} = \{{D}_{i}: i < \alpha\}$ of dense subsets of $P$, there is a filter $G\subseteq P$ such that $G\cap {D}_{i}\neq\emptyset$ for every $i < \alpha$. \section{The lower bound} In this section we prove the results of the paper. We begin with a purely combinatorial lemma about functions. 
\begin{definition}\label{closedness} Let $\alpha$ be an ordinal, let $\mathcal F=\set{f_i:i<\alpha}\subseteq \omega^{\omega}$ be a family of functions, and let $A$ be a subset of $\alpha$. We say that a set $F\subseteq \omega$ is ($A,\mathcal F$)\emph{-closed} if $f_i^{-1}(f_i''F)\subseteq F$ for each $i\in A$. \end{definition} \begin{remark} Notice that if $F$ is ($A,\mathcal F$)-closed, then $f_i^{-1}(f_i''F)=F$ for each $i\in A$. \end{remark} \begin{lemma}\label{finiteunion} Let $\alpha$ be an ordinal, let $\mathcal F=\set{f_i:i<\alpha}\subseteq \omega^{\omega}$ be a family of functions, and let $A$ be a subset of $\alpha$. Suppose that $m<\omega$, and that $F_k$ is ($A,\mathcal F$)-closed subset of $\omega$, for each $k<m$. Then the set $F=\bigcup_{k<m}F_k$ is ($A,\mathcal F$)-closed. \end{lemma} \begin{proof} To prove that $F$ is ($A,\mathcal F$)-closed take any $i\in A$, and $n\in f_i^{-1}(f_i''F)$. This means that there is some $n'\in F$ such that $f_i(n)=f_i(n')$. Let $k<m$ be such that $n'\in F_k$. Then $n\in f_i^{-1}(f_i''F_k)$. Since $F_k$ is ($A,\mathcal F$)-closed, $n\in f_i^{-1}(f_i''F_k)\subseteq F_k$. Thus $n\in F_k\subseteq F$. \end{proof} \begin{lemma}\label{ensuringcondition} Let $\alpha<\mathfrak c$ be an ordinal. Let $\mathcal F=\set{f_i:i<\alpha}\subseteq \omega^{\omega}$ be a family of finite-to-one functions. Suppose that for each $i,j<\alpha$ with $i<j$, there is $l<\omega$ such that $f_j(n)=f_j(m)$ whenever $f_{i}(n)=f_{i}(m)$ and $n,m\ge l$. Then for each finite $A\subseteq \alpha$, and each $n<\omega$, there is a finite $(A,\mathcal F)$-closed set $F$ such that $n\in F$. \end{lemma} \begin{proof} First, if $A$ is empty, then we can take $F=\set{n}$. So fix a non-empty finite $A\subseteq\alpha$, and $n<\omega$. For each $i,j\in A$ such that $i<j$, by the assumption of the lemma, take $l_{ij}<\omega$ such that for each $n,m\ge l_{ij}$, if $f_i(n)=f_i(m)$, then $f_j(n)=f_j(m)$. Since $A$ is a finite set, there is $l=\max\set{l_{ij}:i,j\in A,\ i<j}$. So $l$ has the property that for every $i,j\in A$ with $i<j$, if $f_i(n)=f_i(m)$ and $n,m\ge l$, then $f_j(n)=f_j(m)$. Let $i_0=\max(A)$. Clearly, $f_i''l$ is finite for each $i\in A$, and since each $f_i$ is finite-to-one the set ${f}^{-1}_{i}\br{f_i''l}$ is finite for every $i\in A$. Since the set $A$ is also finite, there is $l'<\omega$ such that $\bigcup_{i\in A}f_i^{-1}\br{f_i''l}\subseteq l'$. Again, since $f_{i_0}$ is finite-to-one there is $l''<\omega$ such that $f^{-1}_{i_0}\br{f_{i_0}''l'}\subseteq l''$. Note that by the definition of numbers $l'$ and $l''$, we have $l''\ge l'\ge l$. \begin{claim}\label{closedset} For all $k<\omega$, if $k\ge l''$, then the set $f_{i_0}^{-1}(f_{i_0}''\set{k})$ is ($A,\mathcal F$)-closed. \end{claim} \begin{proof} Fix $k\ge l''$ and let $X=f_{i_0}^{-1}(f_{i_0}''\set{k})$. First observe that $X\cap l'=\emptyset$. To see this suppose that there is $m\in X\cap l'$. Since $m\in X$, $f_{i_0}(m)=f_{i_0}(k)$. Together with $m\in l'$, this implies that $k\in f_{i_0}^{-1}(f_{i_0}''\set{m})\subseteq f_{i_0}^{-1}(f_{i_0}''l')\subseteq l''$. Thus $k<l''$ contradicting the choice of $k$. Secondly, observe that if $m<l$ and $k'\in X$, then $f_i(m)\neq f_i(k')$ for each $i\in A$. To see this, fix $m<l$ and $k'\in X$, and suppose that for some $i\in A$, $f_i(m)=f_i(k')$. This means that $k'\in f_i^{-1}(f_i''\set{m})\subseteq f_i^{-1}(f_i''l)\subseteq l'$ contradicting the fact that $X\cap l'=\emptyset$. Now we will prove that $X$ is ($A,\mathcal F$)-closed. 
Take any $i\in A$ and any $m\in f_i^{-1}(f_i''X)$. We should prove that $m\in X$. Since $m\in f_i^{-1}(f_i''X)$, $f_i(m)\in f_i''X$ so there is some $k'\in X$ such that $f_i(m)=f_i(k')$. By the second observation, $m\ge l$. By the first observation $k'\ge l'\ge l$. By the assumption of the lemma, since $m,k'\ge l$, and $f_i(m)=f_i(k')$, it must be that $f_{i_0}(m)=f_{i_0}(k')$. Since $k'\in X=f_{i_0}^{-1}(f_{i_0}''\set{k})$, it must be that $f_{i_0}(k)=f_{i_0}(k')=f_{i_0}(m)$. This means that $m\in f_{i_0}^{-1}(f_{i_0}''\set{k})=X$ as required. Thus $f_{i_0}^{-1}(f_{i_0}''\set{k})$ is $(A,\mathcal F)$-closed. \end{proof} Now we inductively build a tree $T\subseteq \omega^{<\omega}$ we will be using in the rest of the proof. Fix a function $\Phi:\omega\to\omega^{<\omega}$ so that $\Phi^{-1}(\sigma)$ is infinite for each $\sigma\in \omega^{<\omega}$. For each $m<\omega$ let $u_m=\Phi(m)\br{\abs{\Phi(m)}-1}$, i.e. $u_m$ is the last element of the sequence $\Phi(m)$. Let $T_0=\set{\emptyset,\seq{n}}$ (recall that $n$ is given in the statement of the lemma). Suppose that $m\ge 1$, and that $T_m$ is given. If $\Phi(m)$ is a leaf node of $T_m$, then let \[\textstyle Z_m=\left(\bigcup_{i\in A}f_i^{-1}(f_i''\set{u_m})\right)\setminus\left(\bigcup_{\eta\in T_m}\operatorname{range}(\eta)\right), \] and $T_{m+1}=T_m\cup\set{\Phi(m)^{\frown}\seq{k}:k\in Z_m}$. If $\Phi(m)$ is not a leaf node of $T_m$, then $T_{m+1}=T_m$. Finally, let $T=\bigcup_{m<\omega}T_m$ and $F=\bigcup_{\eta\in T}\operatorname{range}(\eta)$. \begin{claim}\label{subclaimtree} If $\sigma$ is a non-empty element of the tree $T$, then there is $m_0\ge 1$ such that $\sigma$ is a leaf node of $T_{m_0}$, that $\sigma=\Phi(m_0)$ and that $$\textstyle\bigcup_{i\in A}f_i^{-1}(f_i''\set{u_{m_0}})\subseteq \bigcup_{\eta\in T_{m_0+1}}\operatorname{range}(\eta).$$ \end{claim} \begin{proof} Fix a non-empty $\sigma$ in $T$. Let $m_1=\min\set{k<\omega:\sigma\in T_k}$. Since $\abs{\sigma}>0$, $\sigma$ is a leaf node of $T_{m_1}$. Consider the set $W=\set{m\ge m_1:\Phi(m)=\sigma}$. Since the set $\set{m<\omega:\Phi(m)=\sigma}$ is infinite, $W$ is non-empty subset of positive integers, so it has a minimum. Let $m_0=\min W$. Note that if $m_0=m_1$, then $\sigma$ is a leaf node of $T_{m_0}$. If $m_0>m_1$, by the construction of the tree $T$, since $\Phi(k)\neq \sigma$ whenever $m_1\le k<m_0$, it must be that $\sigma$ is a leaf node of every $T_k$ for $m_1<k\le m_0$. Thus $\sigma$ is a leaf node of $T_{m_0}$ and $\Phi(m_0)=\sigma$. Again by the construction of the tree $T$, we have $T_{m_0+1}=T_{m_0}\cup \set{\sigma^{\frown}\seq{k}:k\in Z_{m_0}}$. This means that $$\textstyle\bigcup_{\eta\in T_{m_0+1}}\operatorname{range}(\eta)=Z_{m_0}\cup \bigcup_{\eta\in T_{m_0}}\operatorname{range}(\eta).$$ Finally, the definition of $Z_{m_0}$ implies that \[\textstyle \bigcup_{i\in A}f_i^{-1}(f_i''\set{u_{m_0}})\subseteq Z_{m_0}\cup\bigcup_{\eta\in T_{m_0}}\operatorname{range}(\eta)=\bigcup_{\eta\in T_{m_0+1}}\operatorname{range}(\eta), \] as required. \end{proof} \begin{claim}\label{closedtree} The set $F$ is ($A,\mathcal F$)-closed, and contains $n$ as an element. \end{claim} \begin{proof} Since $\seq{n}\in T_0$, $n\in F$. To see that $F$ is ($A,\mathcal F$)-closed, take any $j\in A$, and any $w\in f_j^{-1}\br{f_j''F}$. We have to show that $w\in F$. Since $w\in f_j^{-1}\br{f_j''F}$, there is $m\in F$ such that $f_j(w)=f_j(m)$. Since $m\in F=\bigcup_{\eta\in T}\operatorname{range}(\eta)$, there is $\sigma$ in $T$ such that $\sigma(k)=m$ for some $k<\omega$. 
Consider $\sigma\upharpoonright(k+1)$. Since $\sigma\upharpoonright(k+1)\in T$, by Claim \ref{subclaimtree} there is $m_0\ge 1$ such that $\Phi(m_0)=\sigma\upharpoonright(k+1)$, that $\sigma\upharpoonright(k+1)$ is a leaf node of $T_{m_0}$ and that (note that $u_{m_0}=\sigma(k)=m$) \[\textstyle \bigcup_{i\in A}f_i^{-1}(f_i''\set{m})\subseteq \bigcup_{\eta\in T_{m_0+1}}\operatorname{range}(\eta)\subseteq \bigcup_{\eta\in T}\operatorname{range}(\eta)=F. \] So $w\in f_j^{-1}(f_j''\set{m})\subseteq F$ as required. \end{proof} \begin{claim}\label{finitetree} The tree $T$ is finite. \end{claim} \begin{proof} First we prove that each level of $T$ is finite. For $k<\omega$ let $T_{(k)}$ be the $k$-th level of $T$, i.e. $T_{(k)}=\set{\sigma\in T: \abs{\sigma}=k}$. Clearly $T_{(0)}$ and $T_{(1)}$ are finite. So suppose that $T_{(k)}$ is finite. Let $T_{(k)}=\set{\sigma_0,\sigma_2,\dots,\sigma_t}$ be enumeration of that level. For $s\le t$ let $m_s$ be such that $\Phi(m_s)=\sigma_s$ and that $\sigma_s$ is a leaf node of $T_{m_s}$. Note that by the construction of the tree $T$ all nodes at the level $T_{(k+1)}$ are of the form $\sigma_s^{\frown}\seq{r}$ where $s\le t$ and $r\in Z_{m_s}$. Since the set $A$ is finite and all functions $f_i$ (for $i\in A$) are finite-to-one, $Z_{m_s}$ is finite for every $s\le t$. Thus there are only finitely many nodes of the form $\sigma_s^{\frown}\seq{r}$ where $s\le t$ and $r\in Z_{m_s}$, hence the level $T_{(k+1)}$ must also be finite. This proves by induction that each level of $T$ is finite. Suppose now that $T$ is infinite. By K\"{o}nig's lemma, since each level of $T$ is finite, $T$ has an infinite branch $b$. By definition of the sets $Z_m$ ($m<\omega$), each node of $T$ is 1-1 function, so $b$ is also an injection from $\omega$ into $\omega$. In particular, the range of $b$ is infinite. Let $k=\min\set{m<\omega:b(m)\ge l''}$, and let $\sigma=b\upharpoonright(k+1)$. Clearly, $\sigma\in T$. By Claim \ref{subclaimtree}, there is $m_0<\omega$ such that $\sigma$ is a leaf node of $T_{m_0}$, that $\Phi(m_0)=\sigma$, and that $\bigcup_{i\in A}f_i^{-1}(f_i''\set{\sigma(k)})\subseteq \bigcup_{\eta\in T_{m_0+1}}\operatorname{range}(\eta)$. Since $\sigma(k)=b(k)\ge l''$, Claim \ref{closedset} implies that the set $Y=f_{i_0}^{-1}(f_{i_0}''\set{\sigma(k)})$ is $(A,\mathcal F)$-closed. By the construction $T_{m_0+1}=T_{m_0}\cup\set{\sigma^{\frown}\seq{m}:m\in Z_{m_0}}$. Since $b$ is an infinite branch, there is $m'\in Z_{m_0}$ such that $b(k+1)=m'$. Now $m'\in Z_{m_0}\subseteq \bigcup_{i\in A}f_i^{-1}\br{f_i''\set{\sigma(k)}}$, the fact that $\sigma(k)\in Y$, and the fact that $Y$ is ($A,\mathcal F$)-closed, together imply that $$\textstyle m'\in \bigcup_{i\in A}f_i^{-1}\br{f_i''\set{\sigma(k)}}\subseteq \bigcup_{i\in A}f_i^{-1}\br{f_i''Y}\subseteq Y.$$ Consider the node $\tau=\sigma^{\frown}\seq{m'}=b\upharpoonright(k+2)$. Since $b$ is an infinite branch, it must be that $\tau^{\frown}\seq{b(k+2)}\in T$. By Claim \ref{subclaimtree}, there is $m_1$ such that $\tau$ is a leaf node of $T_{m_1}$ and that $\Phi(m_1)=\tau$. Clearly, $m_1>m_0$ and $\tau^{\frown}\seq{b(k+2)}\in T_{m_1+1}$. Recall that we have already shown that $\bigcup_{i\in A}f_i^{-1}(f_i''\set{\sigma(k)})\subseteq \bigcup_{\eta\in T_{m_0+1}}\operatorname{range}(\eta)$. Thus $Y\subseteq\bigcup_{\eta\in T_{m_0+1}}\operatorname{range}(\eta)$. 
This, together with the fact that $\tau(k+1)=m'\in Y$, that $Y$ is $(A,\mathcal F)$-closed, and the fact that $m_1>m_0$, jointly imply that \[\textstyle \bigcup_{i\in A}f_i^{-1}(f_i''\set{\tau(k+1)})\subseteq Y\subseteq\bigcup_{\eta\in T_{m_0+1}}\operatorname{range}(\eta)\subseteq \bigcup_{\eta\in T_{m_1}}\operatorname{range}(\eta). \] This means that \[\textstyle b(k+2)\in Z_{m_1}=\bigcup_{i\in A}f_i^{-1}(f_i''\set{\tau(k+1)})\setminus \bigcup_{\eta\in T_{m_1}}\operatorname{range}(\eta)=\emptyset, \] which is clearly impossible. Thus, $T$ is not infinite. \end{proof} To finish the proof, note that by Claim \ref{closedtree} the set $F$ is $(A,\mathcal F)$-closed and contains $n$ as an element, while by Claim \ref{finitetree} the set $F$ is finite. So $F$ satisfies all the requirements of the conclusion of the lemma. \end{proof} The following lemma is the main application of Martin's Axiom. Again, it does not directly deal with ultrafilters, but with collections of functions. \begin{lemma}[MA$_{\alpha}$]\label{mainlemma} Let $\mathcal F=\set{f_i:i<\alpha}\subseteq \omega^{\omega}$ be a family of finite-to-one functions. Suppose that for each non-empty finite set $A\subseteq\alpha$, and each $n<\omega$, there is a finite ($A,\mathcal F$)-closed set $F$ containing $n$ as an element. Then there is a finite-to-one function $h\in\omega^{\omega}$, and a collection $\set{e_i:i<\alpha}\subseteq \omega^{\omega}$ such that for each $i<\alpha$, there is $l<\omega$ such that $h(n)=e_i(f_i(n))$ whenever $n\ge l$. \end{lemma} \begin{proof} We will apply MA$_{\alpha}$, so we first define the poset we will be using. Let $\mathcal P$ be the set of all $p=\seq{g_p,h_p}$ such that \begin{enumerate}[label=(\Roman*)] \item\label{uslov1} $h_p:N_p\to \omega$ where $N_p$ is a finite subset of $\omega$, \item\label{uslov2} $g_p=\seq{g^i_p:i\in A_p}$ where $A_p\in [\alpha]^{<\aleph_0}$, and $g^i_p:f_i''N_p\to \omega$ for each $i\in A_p$, \item\label{uslov3} $N_p$ is ($A_p,\mathcal F$)-closed. \end{enumerate} Define the ordering relation $\le$ on $\mathcal P$ as follows: $q\le p$ iff \begin{enumerate}[resume,label=(\Roman*)] \item\label{ext1} $N_p\subseteq N_q$, \item\label{ext2} $A_p\subseteq A_q$, \item\label{ext3} $h_q\upharpoonright N_p=h_p$, \item\label{ext4} $g_q^i\upharpoonright f_i''N_p=g_p^i$ for each $i\in A_p$, \item\label{ext5} $h_q(n)>h_q(m)$ whenever $m\in N_p$ and $n\in N_q\setminus N_p$, \item\label{ext6} $h_q(n)=g_q^i(f_i(n))$ for each $n\in N_q\setminus N_p$ and $i\in A_p$. \end{enumerate} It is clear that $\seq{\mathcal P,\le}$ is a partially ordered set. \begin{claim}\label{addingn} Let $p\in \mathcal P$, $n_0<\omega$, and suppose that $A\subseteq \alpha$ is a finite set such that $A_p\subseteq A$. Then there is $q\le p$ such that $n_0\subseteq N_q$ and that $N_q$ is ($A,\mathcal F$)-closed. \end{claim} \begin{proof} Applying the assumption of the lemma to the finite set $A$, and each $k\in n_0\cup N_p$, we obtain sets $F_k$ ($k\in n_0\cup N_p$) such that $k\in F_k$ and $f_i^{-1}(f_i''F_k)\subseteq F_k$ for each $k\in n_0\cup N_p$ and $i\in A$. Let $N_q=\bigcup_{k\in n_0\cup N_p}F_k$, let $A_q=A_p$, and let $t=\max\set{h_p(k)+1:k\in N_p}$ (with $t=0$ if $N_p=\emptyset$). Finally, define \[ h_q(n)=\left\{\begin{array}{l} h_p(n),\ \mbox{if}\ n\in N_p\\ t,\ \mbox{if}\ n\in N_q\setminus N_p\end{array}\right.\ \mbox{and}\ g^i_q(k)=\left\{\begin{array}{l} g^i_p(k),\ \mbox{if}\ k\in f_i''N_p\\ t,\ \mbox{if}\ k\in f_i''N_q\setminus f_i''N_p\end{array}\right.\ \mbox{for}\ i\in A_q. \] Let $g_q$ denote the sequence $\seq{g^i_q:i\in A_q}$.
Clearly $n_0\subseteq N_q$. By Lemma \ref{finiteunion}, $N_q$ is ($A,\mathcal F$)-closed. We still have to show that $q=\seq{g_q,h_q}\in \mathcal P$ and $q\le p$. Since $h_q$ is defined on $N_q$ and $N_q$ is finite, since $A_q=A_p$ and $g_q^i$ is defined on $f_i''N_q$ for $i\in A_p$, and since $N_q$ is $(A_q,\mathcal F)$-closed, conditions \ref{uslov1}-\ref{uslov3} are satisfied by $q$. Thus $q\in \mathcal P$. Next we show $q\le p$. Conditions \ref{ext1}-\ref{ext4} are obviously satisfied by the definition of $q$. Since $h_q(n)=t>h_p(k)=h_q(k)$ for each $n\in N_q\setminus N_p$ and $k\in N_p$, condition \ref{ext5} is also satisfied. So we still have to check \ref{ext6}. Take any $i\in A_p$ and $n\in N_q\setminus N_p$. By the definition of $h_q$, we have $h_q(n)=t$. Once we prove that $f_i(n)\in f_i''N_q\setminus f_i''N_p$, we will be done because in that case the definition of $g^i_q$ implies that $g^i_q(f_i(n))=t$ as required. So suppose the contrary, that $f_i(n)\in f_i''N_p$. Since $p$ is a condition and $A_q=A_p$, it must be that $n\in f_i^{-1}(f_i''N_p)\subseteq N_p$. But this contradicts the choice of $n$. Thus condition \ref{ext6} is also satisfied and $q\le p$. \end{proof} \begin{claim}\label{addingj} Let $p\in\mathcal P$, and $j_0<\alpha$. Then there is $q\le p$ such that $j_0\in A_q$. \end{claim} \begin{proof} Let $A_q=A_p\cup\set{j_0}$. Applying Claim \ref{addingn} to $A_q$ and $n=0$, we obtain a condition $p'\le p$ such that $N_{p'}$ is $(A_q,\mathcal F)$-closed. Let $N_q=N_{p'}$, $h_{q}=h_{p'}$, and $g^i_q=g^i_{p'}$ for $i\in A_p$. Define $g^{j_0}_q(k)=0$ for each $k\in f_{j_0}''N_{p'}$, and let $g_q$ denote the sequence $\seq{g^i_q:i\in A_q}$. Since $j_0\in A_q$, to finish the proof of the claim it is enough to show that $q=\seq{g_q,h_q}\in \mathcal P$, and that $q\le p'$. Conditions \ref{uslov1}-\ref{uslov3} are clear from the definition of $q$ because $p'\in\mathcal P$ and $N_q=N_{p'}$, $h_q=h_{p'}$, $g_q^{j_0}:f_{j_0}''N_q\to\omega$, and $g_q^i=g_{p'}^i$ for $i\in A_q\setminus\set{j_0}$. Conditions \ref{ext1}-\ref{ext4} are clear by the definition of $h_q$ and $g_q$. Conditions \ref{ext5} and \ref{ext6} are vacuously true because $N_{p'}=N_q$. Thus the claim is proved. \end{proof} \begin{claim}\label{merging} If $p,q\in \mathcal P$ are such that $h_p=h_q$ and $g^i_p=g^i_q$ for $i\in A_p\cap A_q$, then $p$ and $q$ are compatible in $\mathcal P$. \end{claim} \begin{proof} We proceed to define $r\le p,q$. Let $N=N_p=N_q$. Let $$t=\max\set{h_p(n)+1:n\in N}.$$ Applying the assumption of the lemma to $A=A_p\cup A_q$, and each $k\in N$, we obtain ($A,\mathcal F$)-closed sets $F_k$ ($k\in N$) with $k\in F_k$. By Lemma \ref{finiteunion}, the set $N_r=\bigcup_{k\in N}F_k$ is ($A,\mathcal F$)-closed. Let $A_r=A$, and define \[ h_r(n)=\left\{\begin{array}{l} h_p(n),\mbox{ if}\ n\in N\\[1mm] t,\mbox{ if}\ n\in N_r\setminus N\end{array}\right.\hskip-1.5mm\mbox{and }g^i_r(k)=\left\{\begin{array}{l} g^i_p(k),\mbox{ if}\ i\in A_p\mbox{ and }k\in f_i''N\\[1mm] g^i_q(k),\mbox{ if}\ i\in A_q\mbox{ and }k\in f_i''N\\[1mm] t,\ \mbox{ if}\ k\in f_i''N_r\setminus f_i''N\end{array}\right.\hskip-1.5mm, \] for $i\in A_r$. Let $g_r$ denote the sequence $\seq{g^i_r:i\in A_r}$. As we have already mentioned, $r=\seq{g_r,h_r}$ satisfies \ref{uslov3}, and it is clear that \ref{uslov1} and \ref{uslov2} are also true for $r$. To see that $r\le p$ and $r\le q$ note that conditions \ref{ext1}-\ref{ext5} are clearly satisfied. We will check that $r$ and $p$ satisfy \ref{ext6} also. Take any $n\in N_r\setminus N$ and $i\in A_p$.
By the definition of $h_r$, $h_r(n)=t$. By the definition of $g_r^i$, if $f_i(n)\in f_i''N_r\setminus f_i''N$, then $g^i_r(f_i(n))=t=h_r(n)$. So suppose this is not the case, i.e. that $f_i(n)\in f_i''N$. This would mean that $n\in f_i^{-1}(f_i''N)$, which is impossible because $n\notin N$ and $N$ is $(A,\mathcal F)$-closed because $p\in \mathcal P$. Thus we proved $r\le p$. In the same way it can be shown that $r\le q$. \end{proof} \begin{claim}\label{ccc} The poset $\mathcal P$ satisfies the countable chain condition. \end{claim} \begin{proof} Let $\seq{p_{\xi}:\xi<\omega_1}$ be an uncountable set of conditions in $\mathcal P$. Since $h_{p_{\xi}}\in [\omega\times\omega]^{<\omega}$ for each $\xi<\omega_1$, there is an uncountable set $\Gamma\subseteq\omega_1$, and $h\in [\omega\times\omega]^{<\omega}$ such that $h_{p_{\xi}}=h$ for each $\xi\in \Gamma$. Consider the set $\seq{A_{p_{\xi}}:\xi\in\Gamma}$. By the $\Delta$-system lemma, there is an uncountable set $\Delta\subseteq\Gamma$, and a finite set $A\subseteq\alpha$ such that $A_{p_{\xi}}\cap A_{p_{\eta}}=A$ for each $\xi,\eta\in\Delta$. Since $\Delta$ is uncountable and $g_{p_{\xi}}^i\in [\omega\times\omega]^{<\omega}$ for each $i\in A$ and $\xi\in\Delta$, there is an uncountable set $\Theta\subseteq\Delta$ and $g_i$ for each $i\in A$, such that $g^i_{p_{\xi}}=g_i$ for each $\xi\in\Theta$ and $i\in A$. Let $\xi$ and $\eta$ in $\Theta$ be arbitrary. By Claim \ref{merging}, $p_{\xi}$ and $p_{\eta}$ are compatible in $\mathcal P$. \end{proof} Consider sets $D_{j}=\set{p\in \mathcal P: j\in A_p}$ for $j\in\alpha$, and $D_{m}=\set{p\in \mathcal P:m\in N_p}$ for $m\in\omega$. By Claim \ref{addingj} and Claim \ref{addingn}, these sets are dense in $\mathcal P$. By MA$_{\alpha}$ there is a filter $\mathcal G\subseteq \mathcal P$ intersecting all these sets. Clearly, $h=\bigcup_{p\in \mathcal G}h_p$ and $e_i=\bigcup_{p\in\mathcal G}g_p^i$, for each $i\in \alpha$, are functions from $\omega$ into $\omega$. We will prove that these functions satisfy the conclusion of the lemma. First we will prove that $h$ is finite-to-one. Take any $m\in \omega$ and let $k=h(m)$. By the definition of $h$, there is $p\in \mathcal G$ such that $h_p(m)=k$. Suppose that $h^{-1}(\set{k})\not\subseteq N_p$. This means that there is an integer $m_0\notin N_p$ such that $h(m_0)=k$. Let $q\in \mathcal G$ be such that $h_q(m_0)=k$. Now for a common extension $r\in\mathcal G$ of both $p$ and $q$, it must be that $h_r(m_0)=h_p(m)$, contradicting the fact that $r\le p$, in particular condition \ref{ext5} is violated in this case. We still have to show that for each $i\in \alpha$, there is $l\in\omega$ such that $h(n)=e_i(f_i(n))$ whenever $n\ge l$. So take $i\in\alpha$. By Claim \ref{addingj}, there is $p\in\mathcal G$ such that $i\in A_p$. Let $l=\max(N_p)+1$. We will prove that $l$ is as required. Take any $n\ge l$. By Claim \ref{addingn}, there is $q\in\mathcal G$ such that $n\in q$. Let $r\in \mathcal G$ be a common extension of $p$ and $q$. Since $n\notin N_p$ and $r\le p$, it must be that $h_r(n)=g_r^i(f_i(n))$, according to condition \ref{ext6}. Hence $h(n)=e_i(f_i(n))$, as required. \end{proof} Before we move to the next lemma let us recall that if $c$ is any element of the model $M$, then $\mathcal U=\set{X\subseteq \omega: c\in {}^*X}$ is ultrafilter on $\omega$. \begin{lemma}\label{ensuringensuring} Let $\alpha<\mathfrak c$ be an ordinal. Let $\seq{M_i:i<\alpha}$ be a $\subseteq$-decreasing sequence of principal submodels of $M$, i.e. 
each $M_i$ is generated by a single element $a_i$ and $M_j\subseteq M_i$ whenever $i<j<\alpha$. Let each $M_i$ ($i<\alpha$) be cofinal with $M_0$. Suppose that $\mathcal U_0=\set{X\subseteq \omega: a_0\in \prescript{*}{}X}$ is a $P_{\mathfrak c}$-point. Then there is a family $\set{f_i:i<\alpha}\subseteq \omega^{\omega}$ of finite-to-one functions such that $\prescript{*}{}f_i(a_0)=a_i$ for $i<\alpha$, and that for $i,j<\alpha$ with $i<j$, there is $l<\omega$ such that $f_j(n)=f_j(m)$ whenever $f_i(n)=f_i(m)$ and $m,n\ge l$. \end{lemma} \begin{proof} Let $i<j<\alpha$. Since $M_j\subseteq M_i$, and $M_i$ is generated by $a_i$, there is a function $\varphi_{ij}:\omega\to\omega$ such that $\prescript{*}{}\varphi_{ij}(a_i)=a_j$. Since $M_j$ is cofinal with $M_i$, by Lemma in \cite[page 104]{blassmodeltheory}, if $i<j<\alpha$, then there is a set $Y_{ij}\subseteq\omega$ such that $a_i\in \prescript{*}{}Y_{ij}$ and that $\varphi_{ij}\upharpoonright Y_{ij}$ is finite-to-one. For $i<\alpha$ let $g_i:\omega\to\omega$ be defined as follows: if $n<\omega$, then for $n\notin Y_{0i}$ let $g_i(n)=n$, while for $n\in Y_{0i}$ let $g_i(n)=\varphi_{0i}(n)$. Note that $\prescript{*}{}g_i(a_0)=a_i$, and that $g_i$ is finite-to-one. The latter fact follows since $g_i$ is one-to-one on $\omega\setminus Y_{0i}$ and on $Y_{0i}$ it is equal to $\varphi_{0i}$, which is finite-to-one on $Y_{0i}$. Now by the second part of Lemma on page 104 in \cite{blassmodeltheory}, for $i<j<\alpha$ there is a finite-to-one function $\pi_{ij}:\omega\to\omega$ such that $\prescript{*}{}\varphi_{ij}(a_i)=\prescript{*}{}\pi_{ij}(a_i)$. Note that this means that $\prescript{*}{}g_j(a_0)=\prescript{*}{}\pi_{ij}(\prescript{*}{}g_i(a_0))$ for $i<j<\alpha$, i.e. the set $X_{ij}=\set{n\in\omega:g_j(n)=\pi_{ij}(g_i(n))}$ is in $\mathcal U_0$. Since $\alpha<\mathfrak c$ and $\mathcal U_0$ is a $P_{\mathfrak c}$-point, there is a set $X\subseteq \omega$ such that $X\in\mathcal U_0$ and that the set $X\setminus X_{ij}$ is finite whenever $i<j<\alpha$. Consider the sets $W_i=g_i''X$ for $i<\alpha$. For each $i<\alpha$, let $W_i=W_i^0\cup W_i^1$ where $W_i^0\cap W_i^1=\emptyset$ and both $W_i^0$ and $W_i^1$ are infinite. Fix $i<\alpha$ for a moment. We know that \[\textstyle X=\left(X\cap \bigcup_{n\in W_i^0}g_i^{-1}(\set{n})\right)\cup \left(X\cap \bigcup_{n\in W_i^1}g_i^{-1}(\set{n})\right). \] Since $X\in\mathcal U_0$ and $\mathcal U_0$ is an ultrafilter, we have that either $X\cap \bigcup_{n\in W_i^0}g_i^{-1}(\set{n})\in\mathcal U_0$ or $X\cap \bigcup_{n\in W_i^1}g_i^{-1}(\set{n})\in\mathcal U_0$. Suppose that $Y=X\cap \bigcup_{n\in W_i^0}g_i^{-1}(\set{n})\in \mathcal U_0$ (the other case would be handled similarly). Note that by the definition of $\mathcal U_0$ we know that $a_0\in {}^*Y$. Define $f_i:\omega\to\omega$ as follows: for $n\in Y$ let $f_i(n)=g_i(n)$, while for $n\notin Y$ let $f_i(n)=W_i^1(n)$. Now that functions $f_i$ are defined, unfix $i$. We will prove that $\mathcal F=\set{f_i:i<\alpha}$ has all the properties from the conclusion of the lemma. Since each $g_i$ ($i<\alpha$) is finite-to-one, it is clear that $f_i$ is also finite-to-one. Again, this is because $g_i$ is finite-to-one on $\omega$, and outside of $Y$ the function $f_i$ is defined so that it is one-to-one. Since $\prescript{*}{}g_i(a_0)=a_i$ and $a_0\in {}^*Y$, it must be that $\prescript{*}{}f_i(a_0)=a_i$ for each $i<\alpha$. Now we prove the last property. Suppose that $i<j<\alpha$. 
Since the set $X\setminus X_{ij}$ is finite and $Y\subseteq X$, there is $l<\omega$ so that $Y\setminus l\subseteq X_{ij}$. Take $m,n\ge l$, and suppose that $f_i(n)=f_i(m)$. There are three cases. First, $n,m\notin Y$. In this case, $f_i(n)=f_i(m)$ implies that $W_i^1(n)=W_i^1(m)$, i.e. $n=m$. Hence $f_j(n)=f_j(m)$. Second, $m\in Y$ and $n\notin Y$. Since $m\in Y$, $g_i(m)=f_i(m)=f_i(n)$ so $f_i(n)\in W_i^0$. On the other hand, since $n\notin Y$, $f_i(n)=W_i^1(n)$. Thus we have $f_i(n)\in W_i^0\cap W_i^1$ which is in contradiction with the fact that $W_i^0\cap W_i^1=\emptyset$. So this case is not possible. Third, $m,n\in Y$. In this case $f_i(n)=f_i(m)$ implies that $g_i(n)=g_i(m)$. Since $m,n\in Y\setminus l\subseteq X_{ij}$ it must be that $f_j(n)=g_j(n)=\pi_{ij}(g_i(n))=\pi_{ij}(g_i(m))=g_j(m)=f_j(m)$ as required. Thus the lemma is proved. \end{proof} \begin{lemma}[MA$_{\alpha}$]\label{ensuringtheorem3} Let $\seq{M_i:i<\alpha}$ be a $\subseteq$-decreasing sequence of principal, and pairwise cofinal submodels of $M$. Suppose that $\mathcal U_0=\set{X\subseteq \omega: a_0\in \prescript{*}{}X}$ is a $P_{\mathfrak c}$-point, where $a_0$ generates $M_0$. Then there is an element $c\in\bigcap_{i<\alpha}M_i$ which generates a principal model cofinal with all $M_i$ ($i<\alpha$). \end{lemma} \begin{proof} Let $a_i$ for $i<\alpha$ be an element generating $M_i$. By Lemma \ref{ensuringensuring} there is a family $\mathcal F=\set{f_i:i<\alpha}\subseteq \omega^{\omega}$ of finite-to-one functions such that $\prescript{*}{}f_i(a_0)=a_i$ for $i<\alpha$, and that for $i,j<\alpha$ with $i<j$, there is $l<\omega$ such that $f_j(n)=f_j(m)$ whenever $f_i(n)=f_i(m)$ and $m,n\ge l$. By Lemma \ref{ensuringcondition}, for each finite $A\subseteq \alpha$, and each $n<\omega$, there is a finite ($A,\mathcal F$)-closed set containing $n$ as an element. Now using MA$_{\alpha}$, Lemma \ref{mainlemma} implies that there is a finite-to-one function $h\in\omega^{\omega}$, and a collection $\set{e_i:i<\alpha}\subseteq\omega^{\omega}$ such that for each $i<\alpha$ there is $l<\omega$ such that $h(n)=e_i(f_i(n))$ whenever $n\ge l$. Let $c=\prescript{*}{}h(a_0)$, and let $M_{\alpha}$ be a model generated by $c$. By Lemma in \cite[pp. 104]{blassmodeltheory}, $M_{\alpha}$ is cofinal with $M_0$. Thus $M_{\alpha}$ is a principal model cofinal with all $M_i$ ($i<\alpha$). To finish the proof we still have to show that $c\in \bigcap_{i<\alpha}M_i$. Fix $i<\alpha$. Let $l<\omega$ be such that $h(n)=e_i(f_i(n))$ for $n\ge l$. If $a_0<l$, then $M_j=M$ for each $j<\alpha$ so the conclusion is trivially satisfied. So $a_0\prescript{*}{}\ge l$. Since the sentence $(\forall n) [n\ge l\Rightarrow h(n)=e_i(f_i(n))]$ is true in $M$, it is also true in $M_0$. Thus $c=\prescript{*}{}h(a_0)=\prescript{*}{}e_i(\prescript{*}{}f_i(a_0))=\prescript{*}{}e_i(a_i)\in M_i$ as required. \end{proof} \begin{theorem}[MA$_{\alpha}$]\label{theorem3} Let $M_i$ ($i<\alpha$) be a collection of pairwise cofinal submodels of $M$. Suppose that $M_{0}$ is principal, and that $\mathcal U_0=\set{X\subseteq \omega: a_0\in \prescript{*}{}X}$ is a $P_{\mathfrak c}$-point, where $a_0$ generates $M_0$. Then $\bigcap_{i<\alpha}M_i$ contains a principal submodel cofinal with each $M_i$. \end{theorem} \begin{proof} We define models $M'_i$ for $i\le\alpha$ as follows. $M'_0=M_0$. If $M'_i$ is defined, then $M'_{i+1}$ is a principal submodel of $M'_i\cap M_{i+1}$ cofinal with $M'_i$ and $M_{i+1}$. This model exists by Corollary in \cite[pp. 105]{blassmodeltheory}. 
If $i\le \alpha$ is limit, then the model $M'_i$ is a principal model cofinal with all $M'_j$ ($j<i$). This model exists by Lemma \ref{ensuringtheorem3}. Now the model $M'_{\alpha}$ is as required in the conclusion of the lemma. \end{proof} \begin{theorem}[MA$_{\alpha}$]\label{maintheorem} Suppose that $\set{\mathcal U_i:i<\alpha}$ is a collection of P-points. Suppose moreover that $\mathcal U_0$ is a P$_{\mathfrak c}$-point such that $\mathcal U_i\le_{RK}\mathcal U_0$ for each $i<\alpha$. Then there is a P-point $\mathcal U$ such that $\mathcal U\le_{RK} \mathcal U_i$ for each $i$. \end{theorem} \begin{proof} By Theorem 3 of \cite{blass}, $\omega^{\omega}/\mathcal U_i$ is isomorphic to an elementary submodel $M_i$ of $\omega^{\omega}/\mathcal U_0$. Since all $\mathcal U_i$ ($i<\alpha$) are non-principal, each model $M_i$ ($i<\alpha$) is non-standard. By Corollary in \cite[pp. 150]{blass}, each $M_i$ ($i<\alpha$) is cofinal with $M_0$. This implies that all the models $M_i$ ($i<\alpha$) are pairwise cofinal with each other. By Theorem \ref{theorem3} there is a principal model $M'$ which is a subset of each $M_i$ ($i<\alpha$) and is cofinal with $M_0$. Since $M'$ is principal, there is an element $a$ generating $M'$. Let $\mathcal U=\set{X\subseteq \omega:a\in \prescript{*}{}X}$. Then $\omega^{\omega}/\mathcal U\cong M'$. Since $M'$ is cofinal with $M_0$, $M'$ is not the standard model. Thus $\mathcal U$ is non-principal. Now $M'\prec M_i$ ($i<\alpha$) implies that $\mathcal U\le_{RK}\mathcal U_i$ (again using Theorem 3 of \cite{blass}). Since $\mathcal U$ is Rudin-Keisler below a $P$-point, $\mathcal U$ is also a $P$-point. \end{proof} \begin{corollary}[MA] \label{cor:lower} If a collection of fewer than $\mathfrak c$ many ${P}_{\mathfrak c}$-points has an upper bound which is a ${P}_{\mathfrak c}$-point, then it has a lower bound. \end{corollary} \begin{corollary}[MA] \label{cor:closed} The class of ${P}_{\mathfrak c}$-points is downwards $< \mathfrak c$-closed under ${\leq}_{RK}$. In other words, if $\alpha < \mathfrak c$ and $\langle {\mathcal U}_{i}: i < \alpha \rangle$ is a sequence of ${P}_{\mathfrak c}$-points such that $\forall i < j < \alpha\left[{\mathcal U}_{j} \: {\leq}_{RK} \: {\mathcal U}_{i}\right]$, then there is a ${P}_{\mathfrak c}$-point $\mathcal U$ such that $\forall i < \alpha\left[ \mathcal U \: {\leq}_{RK} \: {\mathcal U}_{i} \right]$. \end{corollary} \end{document}
\begin{document} \title[Representations of the quantum toroidal algebra]{Representations of the quantum toroidal algebra on highest weight modules of the quantum affine algebra of type $\mathfrak {gl}_N$} \author{K. Takemura} \address{Research Institute for Mathematical Sciences, Kyoto University, 606 Kyoto, Japan.} \email{[email protected]} \thanks{K.T. is supported by JSPS Research Fellowship for Young Scientists} \author{D. Uglov} \address{Research Institute for Mathematical Sciences, Kyoto University, 606 Kyoto, Japan.} \email{[email protected]} \subjclass{17B37, 81R50} \begin{abstract} A representation of the quantum toroidal algebra of type $\mathfrak {sl}_N$ is constructed on every integrable irreducible highest weight module of the quantum affine algebra of type $\mathfrak {gl}_N.$ The $q$-version of the level-rank duality giving the reciprocal decomposition of the $q$-Fock space with respect to mutually commutative actions of $\operatorname{U}^{\prime}_q(\widehat{{\mathfrak {gl}}}_N)$ of level $L$ and $\UU_q^{\prime}(\asll_L)$ of level $N$ is described. \end{abstract} \maketitle \section{Introduction} \noindent In this article we continue our study \cite{STU} of representations of the quantum toroidal algebra of type $\mathfrak {sl}_N$ on irreducible integrable highest weight modules of the quantum affine algebra of type $\mathfrak {gl}_N.$ The quantum toroidal algebra $\ddot{\UU}$ was introduced in \cite{GKV} and \cite{VV1}. The definition of $\ddot{\UU}$ is given in Section \ref{sec:tor}. This algebra is a two-parameter deformation of the enveloping algebra of the universal central extension of the double-loop Lie algebra $\mathfrak {sl}_N[x^{\pm 1}, y^{\pm 1}].$ To our knowledge, no general results on the representation theory of $\ddot{\UU}$ are available at present. It therefore appears to be desirable, as a preliminary step towards the development of a general theory, to obtain concrete examples of representations of $\ddot{\UU}.$ The main reason why representations of central extensions of the double-loop Lie algebra, and of their deformations such as $\ddot{\UU},$ are deemed to be a worthwhile topic to study is that one expects applications to higher-dimensional exactly solvable field theories. Our motivation to study such representations comes, however, from a different source. We were led to this topic while trying to understand the meaning of the level 0 action of the quantum affine algebra $\UU_q^{\prime}(\asll_N)$ on each level 1 irreducible integrable highest weight module of the algebra $\operatorname{U}_q(\widehat{{\mathfrak {gl}}}_N),$ which was defined in \cite{TU} based on the earlier work \cite{JKKMP}. These level 0 actions appear as the $q$-analogues of the Yangian actions on level 1 irreducible integrable modules of $\widehat{{\mathfrak {sl}}}_N$ discovered in \cite{H,Schoutens}.
Let us recall here, following \cite{STU} and \cite{VV2}, the connection between the level 0 actions and the quantum toroidal algebra $\ddot{\UU}.$ It is known \cite{GKV} (see also Section \ref{sec:tor}) that $\ddot{\UU}$ contains two subalgebras $\operatorname{U}_{h},$ and $\operatorname{U}_v$ such that there are algebra homomorphisms $\UU_q^{\prime}(\asll_N) \rightarrow \operatorname{U}_h,$ and $\UU_q^{\prime}(\asll_N) \rightarrow \operatorname{U}_v.$ As a consequence, every module of $\ddot{\UU}$ admits two actions of $\UU_q^{\prime}(\asll_N):$ the {\em horizontal} action obtained through the first of the above homomorphisms, and the {\em vertical} action obtained through the second one. It was shown in \cite{STU} and \cite{VV2}, that on each level 1 irreducible integrable highest weight module of $\operatorname{U}_q(\widehat{{\mathfrak {gl}}}_N)$ there is an action of $\ddot{\UU},$ such that the horizontal action coincides with the standard level 1 action of $\UU_q^{\prime}(\asll_N) \subset \operatorname{U}_q(\widehat{{\mathfrak {gl}}}_N),$ while the vertical action coincides with the level 0 action defined in \cite{TU}. The aim of the present article is to extend this result to higher level irreducible integrable highest weight modules of $\operatorname{U}_q(\widehat{{\mathfrak {gl}}}_N).$ The algebra $\operatorname{U}_q(\widehat{{\mathfrak {gl}}}_N)$ is, by definition, the tensor product of algebras $H\otimes\UU_q(\asll_N),$ where $H$ is the Heisenberg algebra (see Section \ref{sec:bosons}). Let $\Lambda$ be a level $L$ dominant integral weight of $\UU_q(\asll_N),$ and let $V(\Lambda)$ be the irreducible integrable $\UU_q(\asll_N)$-module of the highest weight $\Lambda.$ As the main result of this article we define an action of $\ddot{\UU}$ on the irreducible $\operatorname{U}_q(\widehat{{\mathfrak {gl}}}_N)$-module \begin{equation} \widetilde{V}(\Lambda) = {\mathbb K\hskip.5pt}[H_-]\otimes V(\Lambda), \label{eq:tv1} \end{equation} where ${\mathbb K\hskip.5pt}[H_-]$ is the Fock representation (see Section \ref{sec:decomp}) of $H.$ The corresponding horizontal action of $\UU_q^{\prime}(\asll_N)$ is just the standard, level $L,$ action on the second tensor factor in (\ref{eq:tv1}). The vertical action of $\UU_q^{\prime}(\asll_N)$ has level zero, this action is a $q$-analogue of the Yangian action constructed recently on each irreducible integrable highest weight module of $\widehat{{\mathfrak {gl}}}_N$ in \cite{U}. Let us now describe the main elements of our construction of the $\ddot{\UU}$-action on $\widetilde{V}(\Lambda).$ To define the $\ddot{\UU}$-action we introduce a suitable realization of $\widetilde{V}(\Lambda)$ using the $q$-analogue of the classical level-rank duality, due to Frenkel \cite{F1,F2}, between the affine Lie algebras $\widehat{{\mathfrak {sl}}}_N$ and $\widehat{{\mathfrak {sl}}}_L.$ The quantized version of the level-rank duality takes place on the {\em $q$-Fock space} (we call it, simply, the Fock space hereafter). The Fock space is an integrable, level $L,$ module of the algebra $\UU_q^{\prime}(\asll_N).$ The action of this algebra on the Fock space is centralized by a level $N$ action of $\UU_q^{\prime}(\asll_L),$ and the resulting action of $\UU_q^{\prime}(\asll_N)\otimes\UU_q^{\prime}(\asll_L)$ is centralized by an action of the Heisenberg algebra $H.$ We give in the present paper a construction of the Fock space in the spirit of semi-infinite wedges of \cite{S,KMS}. 
The Fock space defined in \cite{KMS} appears as the special case of our construction when the level $L$ equals $1.$ In Theorem \ref{t:decofF} we describe the irreducible decomposition of the Fock space with respect to the action of $H\otimes\UU_q^{\prime}(\asll_N)\otimes\UU_q^{\prime}(\asll_L).$ This theorem is the $q$-analogue of Theorem 1.6 in \cite{F1}. The decomposition shows that for every level $L$ dominant integral weight $\Lambda$ the corresponding irreducible $\operatorname{U}_q(\widehat{{\mathfrak {gl}}}_N)$-module $\widetilde{V}(\Lambda)$ is realized as a direct summand of the Fock space, such that the multiplicity space of $\widetilde{V}(\Lambda)$ is a certain level $N$ irreducible integrable highest weight module of $\UU_q^{\prime}(\asll_L).$ To define the action of the quantum toroidal algebra on $\widetilde{V}(\Lambda)$ we proceed very much along the lines of \cite{STU}. The starting point is a representation, due to Cherednik \cite{C3}, of the toroidal Hecke algebra of type $\mathfrak {gl}_n$ on the linear space ${\mathbb K\hskip.5pt}[z_1^{\pm 1},\dots,z_n^{\pm 1}]\otimes ({\mathbb K\hskip.5pt}^L)^{\otimes n}.$ Here ${\mathbb K\hskip.5pt} = \mathbb Q(q^{\frac{1}{2N}}).$ Applying the Varagnolo--Vasserot duality \cite{VV1} between modules of the toroidal Hecke algebra and modules of $\ddot{\UU},$ we obtain a representation of $\ddot{\UU}$ on the $q$-wedge product $\wedge^n V_{\mathrm {aff}},$ where $V_{\mathrm {aff}} = {\mathbb K\hskip.5pt}[z^{\pm1}]\otimes {\mathbb K\hskip.5pt}^N\otimes {\mathbb K\hskip.5pt}^L.$ This $q$-wedge product (we call it, simply, {\em the wedge product} hereafter) is similar to the wedge product of \cite{KMS}, and reduces to the latter when $L=1.$ The Fock space is defined as an inductive limit ($n \rightarrow \infty$) of the wedge product $\wedge^n V_{\mathrm {aff}}.$ We show that the Fock space inherits the $\ddot{\UU}$-action from $\wedge^n V_{\mathrm {aff}}.$ As the final step we demonstrate that the $\ddot{\UU}$-action on the Fock space can be restricted to $\widetilde{V}(\Lambda)$ provided certain parameters in the $\ddot{\UU}$-action are fixed in an appropriate way. Let us now comment on two issues which we {\em do not} deal with in the present paper. The first one is the question of irreducibility of $\widetilde{V}(\Lambda)$ as a $\ddot{\UU}$-module. Based on an analysis of the Yangian limit (see \cite{U}) we expect that $\widetilde{V}(\Lambda)$ is irreducible. However, we lack a complete proof of this at present. The second issue is the decomposition of $\widetilde{V}(\Lambda)$ with respect to the level 0 vertical action of $\UU_q^{\prime}(\asll_N).$ In the Yangian limit this decomposition was performed in \cite{U} for the vacuum highest weight $\Lambda = L \Lambda_0.$ It is natural to expect that, combinatorially, this decomposition will remain unchanged in the $q$-deformed situation. In particular, the irreducible components are expected to be parameterized by semi-infinite skew Young diagrams, and the $\operatorname{U}_q(\mathfrak {sl}_N)$-characters of these components are expected to be given by the corresponding skew Schur functions. \\ \mbox{} \\ \noindent The paper is organized as follows. In Sections \ref{s:pre} through \ref{s:Fock} we deal with the $q$-analogue of the level-rank duality, and the associated realization of the integrable irreducible modules of $\operatorname{U}_q(\widehat{{\mathfrak {gl}}}_N).$ Section \ref{s:pre} contains background information on the quantum affine algebras and the affine Hecke algebra.
In Section \ref{s:wedgeprod} we introduce the wedge product, and describe the technically important {\em normal ordering rules} for the $q$-wedge vectors. In Section \ref{s:Fock} we define the Fock space, and, on this space, the action of $H\otimes\UU_q^{\prime}(\asll_N)\otimes\UU_q^{\prime}(\asll_L).$ The decomposition of the Fock space as an $H\otimes\UU_q^{\prime}(\asll_N)\otimes\UU_q^{\prime}(\asll_L)$-module is given in Theorem \ref{t:decofF}. In Sections \ref{s:tor} and \ref{s:toract} we deal with the quantum toroidal algebra $\ddot{\UU}$ and its actions. Section \ref{s:tor} contains basic information on the toroidal Hecke algebra and $\ddot{\UU}.$ In Section \ref{s:toract} we define actions of $\ddot{\UU}$ on the Fock space, and on irreducible integrable highest weight modules of $\operatorname{U}_q(\widehat{{\mathfrak {gl}}}_N).$ \\ \section{Preliminaries} \label{s:pre} \subsection{Preliminaries on the quantum affine algebra}\label{sec:Usl} For $k,m \in \mathbb Z$ we define the following $q$-integers, factorials, and binomials: $$[k]_q = \frac{q^k-q^{-k}}{q-q^{-1}},\quad [k]_q!=[k]_q[k-1]_q\cdots[1]_q,\quad \text{and} \;\begin{bmatrix}m\\k\end{bmatrix}_q = \frac{[m]_q!}{[m-k]_q![k]_q!}.$$ The quantum affine algebra $\UU_q(\asll_M)$ is the unital associative algebra over ${\mathbb K\hskip.5pt} =\mathbb Q(q)$ generated by the elements $E_i,F_i,K_i,K_i^{-1}, D$ $(0\leqslant i < M)$ subject to the relations: \begin{gather} K_i K_j = K_j K_i, \quad DK_i=K_iD,\quad K_i K_i^{-1} = K_i^{-1} K_i = 1, \label{eq:r1}\\ K_i E_j = q^{a_{ij}} E_j K_i, \label{eq:r2}\\ K_i F_j = q^{-a_{ij}} F_j K_i, \label{eq:r3}\\ [D,E_i] = \delta(i=0) E_i, \quad [D,F_i] = -\delta(i=0) F_i,\label{eq:r4} \\ [E_i,F_j] = \delta_{ij} \frac{K_i - K_i^{-1}}{q - q^{-1}},\label{eq:r5}\\ \sum_{k=0}^{1-a_{ij}} (-1)^k \begin{bmatrix} 1- a_{ij} \\ k \end{bmatrix}_q E_i^{1-a_{ij}-k} E_j E_i^k = 0 \quad (i\neq j), \label{eq:r6}\\ \sum_{k=0}^{1-a_{ij}} (-1)^k \begin{bmatrix} 1- a_{ij} \\ k \end{bmatrix}_q F_i^{1-a_{ij}-k} F_j F_i^k = 0 \quad (i\neq j). \label{eq:r7} \end{gather} Here $a_{ij} = 2\delta(i=j) - \delta(i=j+1) - \delta(i=j-1),$ and the indices are extended to all integers modulo $M.$ For $P$ a statement, we write $\delta(P) = 1$ if $P$ is true, and $\delta(P) = 0$ otherwise. \\ $\UU_q(\asll_M)$ is a Hopf algebra; in this paper we will use two different coproducts $\Delta^+$ and $\Delta^-$ given by \begin{alignat}{5} &\Delta^+(K_i) = K_i\otimes K_i,\quad & &\Delta^-(K_i) = K_i\otimes K_i,& \label{eq:co1}\\ &\Delta^+(E_i) = E_i\otimes K_i + 1\otimes E_i,\quad & &\Delta^-(E_i) = E_i\otimes 1+ K_i\otimes E_i,& \label{eq:co2}\\ &\Delta^+(F_i) = F_i\otimes 1 + K_i^{-1}\otimes F_i,\quad & &\Delta^-(F_i) = F_i\otimes K_i^{-1}+ 1\otimes F_i,& \label{eq:co3} \\ &\Delta^+(D) = D\otimes 1 + 1\otimes D,\quad & &\Delta^-(D) = D\otimes 1 + 1\otimes D.& \label{eq:co4} \end{alignat} Denote by $\operatorname{U}_q^{\prime}(\widehat{{\mathfrak {sl}}}_M)$ the subalgebra of $\operatorname{U}_q(\widehat{{\mathfrak {sl}}}_M)$ generated by $E_i,F_i,K_i,K_i^{-1}, 0\leqslant i < M.$ \\ \noindent In our notation concerning weights of $\operatorname{U}_q(\widehat{{\mathfrak {sl}}}_M)$ we will follow \cite{Kac}. Thus we denote by $\Lambda_0,\Lambda_1,\dots,\Lambda_{M-1}$ the fundamental weights, by $\delta$ the null root, and let $\alpha_i = 2\Lambda_i - \Lambda_{i+1} - \Lambda_{i-1} + \delta_{i,0} \delta$ $(0\leqslant i < M)$ denote the simple roots.
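(To illustrate the $q$-binomial notation in the relations (\ref{eq:r6}), (\ref{eq:r7}): since $[2]_q = q+q^{-1}$ and $\begin{bmatrix} 2 \\ 1 \end{bmatrix}_q = [2]_q,$ whenever $a_{ij}=-1$ the relation (\ref{eq:r6}) takes the familiar form $E_i^2 E_j - (q+q^{-1}) E_i E_j E_i + E_j E_i^2 = 0,$ and similarly for (\ref{eq:r7}).)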
The indices are assumed to be cyclically extended to all integers modulo $M.$ Let $P_M = \mathbb Z \delta \oplus \left(\oplus_i \mathbb Z \Lambda_i\right)$ be the set of integral weights. \\ \noindent Let ${\mathbb K\hskip.5pt}^N$ be the $N$-dimensional vector space with basis $\mathfrak v_{1},\mathfrak v_{2},\dots,\mathfrak v_N,$ and let ${\mathbb K\hskip.5pt}^L$ be the $L$-dimensional vector space with basis $\mathfrak e_{1},\mathfrak e_{2},\dots,\mathfrak e_L.$ We set $V_{\mathrm {aff}} = {\mathbb K\hskip.5pt}[z^{\pm 1}]\otimes {\mathbb K\hskip.5pt}^L\otimes {\mathbb K\hskip.5pt}^N.$ $V_{\mathrm {aff}}$ has basis $\{ z^m \mathfrak e_{a} \mathfrak v_{\epsilon} \}$ where $m\in \mathbb Z$ and $1\leqslant a \leqslant L;$ $1\leqslant \epsilon \leqslant N.$ Both algebras $\UU_q(\asll_N)$ and $\UU_q(\asll_L)$ act on $V_{\mathrm {aff}}.$ $\UU_q(\asll_N)$ acts in the following way: \begin{eqnarray} K_{i}(z^m\mathfrak e_a\mathfrak v_{\epsilon}) &=& q^{\delta_{\epsilon,i} - \delta_{\epsilon,i+1}}z^m\mathfrak e_a\mathfrak v_{\epsilon}, \\ E_{i}(z^m\mathfrak e_a\mathfrak v_{\epsilon}) &=& \delta_{\epsilon,i+1}z^{m+\delta_{i,0}}\mathfrak e_a\mathfrak v_{\epsilon-1},\\ F_{i}(z^m\mathfrak e_a\mathfrak v_{\epsilon}) &=& \delta_{\epsilon,i}z^{m-\delta_{i,0}} \mathfrak e_a\mathfrak v_{\epsilon+1},\\ D(z^m\mathfrak e_a\mathfrak v_{\epsilon}) &=& mz^m\mathfrak e_a\mathfrak v_{\epsilon}; \end{eqnarray} where $0 \leqslant i < N,$ and all indices but $a$ should be read modulo $N.$ \\ The action of $\UU_q(\asll_L)$ is given by \begin{eqnarray} \dot{K}_{a}(z^m\mathfrak e_b\mathfrak v_{\epsilon}) &=& q^{\delta_{b,L-a+1} - \delta_{b,L-a}}z^m\mathfrak e_b\mathfrak v_{\epsilon}, \label{eq:ul1}\\ \dot{E}_{a}(z^m\mathfrak e_b\mathfrak v_{\epsilon}) &=& \delta_{b,L-a}z^{m+\delta_{a,0}}\mathfrak e_{b+1}\mathfrak v_{\epsilon},\\ \dot{F}_{a}(z^m\mathfrak e_b\mathfrak v_{\epsilon}) &=& \delta_{b,L-a+1}z^{m-\delta_{a,0}}\mathfrak e_{b-1}\mathfrak v_{\epsilon}, \label{eq:ul2}\\ \dot{D}(z^m\mathfrak e_a\mathfrak v_{\epsilon}) &=& mz^m\mathfrak e_a\mathfrak v_{\epsilon}; \end{eqnarray} where $0 \leqslant a < L,$ and all indices but $\epsilon$ are to be read modulo $L.$ Above and in what follows we put a dot over the generators of $\UU_q(\asll_L)$ in order to distinguish them from the generators of $\UU_q(\asll_N).$ When both $\UU_q(\asll_N)$ and $\UU_q(\asll_L)$ act on the same linear space and a vector $v$ is a weight vector for both of them, we will understand that $\mathrm {wt}(v)$ is the sum of the $\UU_q(\asll_N)$- and $\UU_q(\asll_L)$-weights of $v.$ Thus $$ \mathrm {wt}(z^m\mathfrak e_a\mathfrak v_{\epsilon}) = \Lambda_{\epsilon} - \Lambda_{\epsilon-1} + \dot{\Lambda}_{L-a+1} - \dot{\Lambda}_{L-a} + m(\delta + \dot{\delta}).$$ Here, and from now on, we put dots over the fundamental weights, etc.
of $\UU_q(\asll_L).$\\ \noindent Iterating the coproduct $\Delta^+$ (cf. (\ref{eq:co1}--\ref{eq:co4})) $n-1$ times we get an action of $\UU_q(\asll_N)$ on the tensor product $V_{\mathrm {aff}}^{\otimes n}.$ Likewise for $\UU_q(\asll_L),$ but in this case we use the {\em other} coproduct $\Delta^-.$ \subsection{Preliminaries on the affine Hecke algebra} \label{sec:affH} The affine Hecke algebra of type $\mathfrak {gl}_n,$ $\mathbf {\dot{H}}_n,$ is a unital associative algebra over ${\mathbb K\hskip.5pt}$ generated by elements $T_i^{\pm 1},X_j^{\pm 1}, 1\leqslant i < n, 1\leqslant j \leqslant n.$ These elements satisfy the following relations: \begin{gather} T_i T_i^{-1} = T_i^{-1} T_i = 1,\qquad (T_i + 1)(T_i - q^2) = 0, \label{eq:ah1}\\ T_i T_{i+1} T_i = T_{i+1} T_i T_{i+1}, \qquad T_i T_j = T_j T_i \quad \text{if $|i-j| > 1,$} \\ X_j X_j^{-1} = X_j^{-1} X_j = 1, \qquad X_i X_j = X_j X_i, \\ T_i X_i T_i = q^2 X_{i+1}, \qquad T_i X_j = X_j T_i \quad \text{if $ j \neq i,i+1.$}\label{eq:ah2} \end{gather} The subalgebra $\mathbf {H}_n \subset \mathbf {\dot{H}}_n$ generated by the elements $T_i^{\pm 1}$ alone is known to be isomorphic to the finite Hecke algebra of type $\mathfrak {gl}_n.$\\ \noindent Following \cite{GRV}, \cite{KMS} we introduce a representation of $\mathbf {\dot{H}}_n$ on the linear space $({\mathbb K\hskip.5pt}[z^{\pm 1}]\otimes {\mathbb K\hskip.5pt}^L)^{\otimes n}.$ We will identify this space with ${\mathbb K\hskip.5pt}[z_1^{\pm 1},\dots,z_n^{\pm 1}]\otimes ({\mathbb K\hskip.5pt}^L)^{\otimes n}$ by the correspondence $$ z^{m_1}\mathfrak e_{a_1} \otimes z^{m_2}\mathfrak e_{a_2}\otimes \cdots \otimes z^{m_n}\mathfrak e_{a_n} \mapsto z_1^{m_1}z_2^{m_2}\cdots z_n^{m_n} \otimes (\mathfrak e_{a_1} \otimes \mathfrak e_{a_2}\otimes \cdots \otimes \mathfrak e_{a_n}). $$ Let $E_{a,b} \in \mathrm {End}({\mathbb K\hskip.5pt}^L)$ be the matrix units with respect to the basis $\{\mathfrak e_a\},$ and define the {\em trigonometric R-matrix} as the following operator on $({\mathbb K\hskip.5pt}[z^{\pm 1}]\otimes {\mathbb K\hskip.5pt}^L)^{\otimes 2} = {\mathbb K\hskip.5pt}[z_1^{\pm 1},z_2^{\pm 1}]\otimes ({\mathbb K\hskip.5pt}^L)^{\otimes 2}:$ \begin{eqnarray*} R(z_1,z_2)& =& (q^2 z_1 - z_2)\sum_{1\leqslant a \leqslant L} E_{a,a}\otimes E_{a,a} + q(z_1 - z_2)\sum_{1\leqslant a \neq b \leqslant L} E_{a,a}\otimes E_{b,b} + \\ & + &z_1(q^2 - 1)\sum_{1\leqslant a < b \leqslant L} E_{a,b}\otimes E_{b,a} + z_2(q^2 - 1)\sum_{1\leqslant b < a \leqslant L} E_{a,b}\otimes E_{b,a}. \end{eqnarray*} Let $s$ be the exchange operator of factors in the tensor square $({\mathbb K\hskip.5pt}[z^{\pm 1}]\otimes {\mathbb K\hskip.5pt}^L)^{\otimes 2},$ and let \begin{equation} \Tc _{(1,2)}:= \frac{ z_1 - q^2 z_2 }{z_1 - z_2} \cdot \left( 1 - s \cdot \frac{R(z_1,z_2)}{q^2 z_1 - z_2}\right) - 1. \label{eq:Tc}\end{equation} The operator $\Tc_{(1,2)}$ is known as {\em the matrix Demazure-Lusztig operator} (cf. \cite{C3}); note that it is an element of $\mathrm {End}\left(({\mathbb K\hskip.5pt}[z^{\pm 1}]\otimes {\mathbb K\hskip.5pt}^L)^{\otimes 2}\right)$ despite the presence of the denominators in the definition. For $1 \leqslant i < n$ we put \begin{equation} \Tc_i := 1^{\otimes (i-1)}\otimes \Tc_{(i,i+1)} \otimes 1^{\otimes (n-i-1)} \quad \in \mathrm {End}\left( ({\mathbb K\hskip.5pt}[z^{\pm 1}]\otimes {\mathbb K\hskip.5pt}^L)^{\otimes n}\right).
\label{eq:Tci}\end{equation} \begin{propos}[\cite{C3}, \cite{GRV}, \cite{KMS}] The map \begin{equation} X_j \mapsto z_j, \qquad T_i \mapsto \Tc_i \end{equation} where $z_j$ stands for the multiplication by $z_j,$ extends to a right representation of $\mathbf {\dot{H}}_n$ on $({\mathbb K\hskip.5pt}[z^{\pm 1}]\otimes {\mathbb K\hskip.5pt}^L)^{\otimes n}.$ \end{propos} \noindent Following \cite{J} we define a left action of the finite Hecke algebra $\mathbf {H}_n$ on $({\mathbb K\hskip.5pt}^N)^{\otimes n}$ by \begin{align} T_i \mapsto \Ts_i := 1^{\otimes (i-1)}\otimes \Ts \otimes 1^{\otimes (n-i-1)}, \quad \text{where $ \Ts \in \mathrm {End}\left(({\mathbb K\hskip.5pt}^N)^{\otimes 2} \right),$} \label{eq:Tsi}\\ \text{and} \quad \Ts(\mathfrak v_{\epsilon_1}\otimes \mathfrak v_{\epsilon_2}) = \begin{cases} q^2 \mathfrak v_{\epsilon_1}\otimes \mathfrak v_{\epsilon_2} & \text{ if $\epsilon_1 = \epsilon_2,$} \\ q \mathfrak v_{\epsilon_2}\otimes \mathfrak v_{\epsilon_1} & \text{ if $\epsilon_1 < \epsilon_2,$} \\ q \mathfrak v_{\epsilon_2}\otimes \mathfrak v_{\epsilon_1} + (q^2 - 1) \mathfrak v_{\epsilon_1}\otimes \mathfrak v_{\epsilon_2} & \text{ if $\epsilon_1 > \epsilon_2.$} \end{cases} \label{eq:Ts} \end{align} \section{The Wedge Product} \label{s:wedgeprod} \subsection{Definition of the wedge product} \label{sec:wedge} Identify the tensor product $V_{\mathrm {aff}}^{\otimes n}$ with \\$({\mathbb K\hskip.5pt}[z^{\pm 1}]\otimes {\mathbb K\hskip.5pt}^L)^{\otimes n} \otimes ({\mathbb K\hskip.5pt}^N)^{\otimes n}$ by the natural isomorphism $$ z^{m_1}\mathfrak e_{a_1}\mathfrak v_{\epsilon_1} \otimes \cdots \otimes z^{m_n}\mathfrak e_{a_n}\mathfrak v_{\epsilon_n} \mapsto \left(z^{m_1}\mathfrak e_{a_1} \otimes \cdots \otimes z^{m_n}\mathfrak e_{a_n}\right)\otimes\left(\mathfrak v_{\epsilon_1} \otimes \cdots \otimes\mathfrak v_{\epsilon_n}\right). $$ Then the operators $\Tc_i$ and $\Ts_i$ are extended to $V_{\mathrm {aff}}^{\otimes n}$ as $\Tc_i\otimes 1$ and $1 \otimes \Ts_i$ respectively. In what follows we will keep the same symbol $\Tc_i$ to mean $\Tc_i\otimes 1,$ and likewise for $\Ts_i.$ We define the $n$-fold {\em $q$-wedge product } (or, simply, {\em the wedge product}) $\wedge^n V_{\mathrm {aff}}$ as the following quotient space: \begin{equation} \wedge^n V_{\mathrm {aff}} := V_{\mathrm {aff}}^{\otimes n} / \sum_{i=1}^{n-1} \mathrm {Im} (\Tc_i - \Ts_i). \label{eq:qwp} \end{equation} Note that under the specialization $q=1$ the operator $\Tc$ (\ref{eq:Tc}) tends to {\em minus} the permutation operator of the tensor square $(\mathbb Q[z^{\pm 1}]\otimes \mathbb Q^L)^{\otimes 2},$ while the operator $\Ts$ (\ref{eq:Ts}) tends to {\em plus} the permutation operator of the tensor square $(\mathbb Q^N)^{\otimes 2},$ so that (\ref{eq:qwp}) is a $q$-analogue of the standard exterior product. \noindent {\bf Remark.} The wedge product is the dual, in the sense of Chari--Pressley \cite{CP} (see also \cite{C1}), of the $\mathbf {\dot{H}}_n$-module $({\mathbb K\hskip.5pt}[z^{\pm 1}]\otimes {\mathbb K\hskip.5pt}^L)^{\otimes n}:$ there is an evident isomorphism of linear spaces $$ \wedge^nV_{\mathrm {aff}} \cong ({\mathbb K\hskip.5pt}[z^{\pm 1}]\otimes {\mathbb K\hskip.5pt}^L)^{\otimes n}\otimes_{\mathbf {H}_n}({\mathbb K\hskip.5pt}^N)^{\otimes n}. $$ \\ \noindent For $ m \in \mathbb Z_{\neq 0}$ define $B^{(n)}_m \in \mathrm {End}(V_{\mathrm {aff}}^{\otimes n})$ as \begin{equation} B^{(n)}_m = z_1^m + z_2^m + \cdots + z_n^m.
\end{equation} In Section \ref{sec:Usl} mutually commutative actions of the quantum affine algebras $\UU_q(\asll_N)$ and $\UU_q(\asll_L)$ were defined on $V_{\mathrm {aff}}^{\otimes n}.$ The operators $B^{(n)}_m$ obviously commute with these actions.\\ \noindent The following proposition is easily deduced from the results of \cite{CP}, \cite{GRV}, \cite{KMS}. \begin{propos} \label{p:CPD} For each $i=1,\dots,n-1$ the subspace $\mathrm {Im}(\Tc_i - \Ts_i)\subset V_{\mathrm {aff}}^{\otimes n}$ is invariant with respect to $\UU_q(\asll_N),\UU_q(\asll_L)$ and $B^{(n)}_m$ $(m \in \mathbb Z_{\neq 0}).$ Therefore actions of $\UU_q(\asll_N),\UU_q(\asll_L)$ and $B^{(n)}_m$ are defined on the wedge product $\wedge^n V_{\mathrm {aff}}.$ \end{propos} \noindent It is clear that the actions of $\UU_q^{\prime}(\asll_N) \subset \UU_q(\asll_N),$ $\UU_q^{\prime}(\asll_L)\subset\UU_q(\asll_L)$ and $B^{(n)}_m$ $(m\in \mathbb Z_{\neq 0})$ on the wedge product are mutually commutative. \subsection{Wedges and normally ordered wedges} \label{sec:nowedges} In the following discussion it will be convenient to relabel elements of the basis $\{z^m\mathfrak e_a\mathfrak v_{\epsilon}\}$ of $V_{\mathrm {aff}}$ by a single integer. We put $k = \epsilon-N(a+Lm)$ and denote $u_k = z^m\mathfrak e_a\mathfrak v_{\epsilon}.$ Then the set $\{u_k \:|\: k \in \mathbb Z\}$ is a basis of $V_{\mathrm {aff}}.$ (Explicitly, as $k$ decreases by $1$ the index $\epsilon$ decreases by $1;$ when $\epsilon$ passes below $1$ it returns to $N$ and $a$ increases by $1,$ and when $a$ passes above $L$ it returns to $1$ and $m$ increases by $1.$) Let \begin{equation} u_{k_1}\wedge u_{k_2} \wedge \cdots \wedge u_{k_n} \label{eq:wedge} \end{equation} be the image of the tensor $u_{k_1}\otimes u_{k_2} \otimes \cdots \otimes u_{k_n}$ under the quotient map from $V_{\mathrm {aff}}^{\otimes n}$ to $\wedge^nV_{\mathrm {aff}}.$ We will call a vector of the form (\ref{eq:wedge}) {\em a wedge} and will say that a wedge is {\em normally ordered} if $k_1>k_2>\dots>k_n.$ When $q$ is specialized to $1,$ a wedge is antisymmetric with respect to a permutation of any pair of indices $k_i,k_j,$ and the normally ordered wedges form a basis of $\wedge^nV_{\mathrm {aff}}.$ In the general situation -- when $q$ is a parameter -- the normally ordered wedges still form a basis of $\wedge^nV_{\mathrm {aff}}.$ However, the antisymmetry is replaced by a more complicated {\em normal ordering rule} which allows one to express any wedge as a linear combination of normally ordered wedges. \mbox{} \noindent Let us start with the case of the two-fold wedge product $\wedge^2V_{\mathrm {aff}}.$ The explicit expressions for the operators $\Tc_1$ and $\Ts_1$ lead, for all $k \leqslant l,$ to a normal ordering rule of the form \begin{equation} u_k\wedge u_l = c_{kl}(q) u_l\wedge u_k + (q^2-1)\sum_{i\geqslant 1, l-i > k+i} c_{kl}^{(i)}(q) u_{l-i}\wedge u_{k+i}, \label{eq:norule1} \end{equation} where $ c_{kl}(q), c_{kl}^{(i)}(q)$ are Laurent polynomials in $q.$ In particular, $c_{kk}(q)=-1,$ and thus $u_k\wedge u_k =0.$ To describe all the coefficients in (\ref{eq:norule1}), we will employ a vector notation.
For all $a,a_1,a_2 =1,\dots,L;$ $\epsilon,\epsilon_1,\epsilon_2 $ $=$ $1,\dots,N;$ $m_1,m_2 \in \mathbb Z$ define the following column vectors: \begin{eqnarray} X_{a,a}^{\epsilon_1,\epsilon_2}(m_1,m_2) &=&\left(\begin{array}{c} u_{\epsilon_1-N(a+L m_1)}\wedge u_{\epsilon_2-N(a+L m_2)} \\ u_{\epsilon_2-N(a+L m_1)}\wedge u_{\epsilon_1-N(a+L m_2)}\end{array}\right),\\ Y_{a_1,a_2}^{\epsilon,\epsilon}(m_1,m_2) &=& \left(\begin{array}{c} u_{\epsilon-N(a_1+L m_1)}\wedge u_{\epsilon-N(a_2+L m_2)} \\ u_{\epsilon-N(a_2+L m_1)}\wedge u_{\epsilon-N(a_1+L m_2)}\end{array}\right), \\ Z_{a_1,a_2}^{\epsilon_1,\epsilon_2}(m_1,m_2) &=& \left(\begin{array}{c} u_{\epsilon_1-N(a_1+L m_1)}\wedge u_{\epsilon_2-N(a_2+L m_2)} \\ u_{\epsilon_1-N(a_2+L m_1)}\wedge u_{\epsilon_2-N(a_1+L m_2)} \\ u_{\epsilon_2-N(a_1+L m_1)}\wedge u_{\epsilon_1-N(a_2+L m_2)} \\ u_{\epsilon_2-N(a_2+L m_1)}\wedge u_{\epsilon_1-N(a_1+L m_2)} \end{array}\right). \end{eqnarray} Moreover let \begin{eqnarray} & X_{a,a}^{\epsilon_1,\epsilon_2}(m_1,m_2)^{\prime}=X_{a,a}^{\epsilon_1,\epsilon_2}(m_1,m_2)^{\prime\prime} = X_{a,a}^{\epsilon_1,\epsilon_2}(m_1,m_2) \quad &\text{if $ m_1\neq m_2,$} \\ & Y_{a_1,a_2}^{\epsilon,\epsilon}(m_1,m_2)^{\prime}=Y_{a_1,a_2}^{\epsilon,\epsilon}(m_1,m_2)^{\prime\prime} = Y_{a_1,a_2}^{\epsilon,\epsilon}(m_1,m_2) \quad &\text{if $ m_1\neq m_2,$} \\ & Z_{a_1,a_2}^{\epsilon_1,\epsilon_2}(m_1,m_2)^{\prime}=Z_{a_1,a_2}^{\epsilon_1,\epsilon_2}(m_1,m_2)^{\prime\prime} = Z_{a_1,a_2}^{\epsilon_1,\epsilon_2}(m_1,m_2) \quad &\text{if $ m_1\neq m_2.$} \end{eqnarray} And \begin{eqnarray} X_{a,a}^{\epsilon_1,\epsilon_2}(m,m)^{\prime} &=&\left(\begin{array}{c} 0 \\ u_{\epsilon_2-N(a+L m)}\wedge u_{\epsilon_1-N(a+L m)}\end{array}\right),\\ Y_{a_1,a_2}^{\epsilon,\epsilon}(m,m)^{\prime} &=& \left(\begin{array}{c} u_{\epsilon-N(a_1+L m)}\wedge u_{\epsilon-N(a_2+L m)} \\ 0 \end{array}\right), \\ Z_{a_1,a_2}^{\epsilon_1,\epsilon_2}(m,m)^{\prime} &=& \left(\begin{array}{c} u_{\epsilon_1-N(a_1+L m)}\wedge u_{\epsilon_2-N(a_2+L m)} \\ 0 \\ u_{\epsilon_2-N(a_1+L m)}\wedge u_{\epsilon_1-N(a_2+L m)} \\ 0 \end{array}\right). \end{eqnarray} \begin{eqnarray} X_{a,a}^{\epsilon_1,\epsilon_2}(m,m)^{\prime\prime} &=&\left(\begin{array}{c} u_{\epsilon_1-N(a+L m)}\wedge u_{\epsilon_2-N(a+L m)} \\ 0 \end{array}\right),\\ Y_{a_1,a_2}^{\epsilon,\epsilon}(m,m)^{\prime\prime} &=& \left(\begin{array}{c} 0 \\ u_{\epsilon-N(a_2+L m)}\wedge u_{\epsilon-N(a_1+L m)}\end{array}\right), \\ Z_{a_1,a_2}^{\epsilon_1,\epsilon_2}(m,m)^{\prime\prime} &=& \left(\begin{array}{c} 0 \\ u_{\epsilon_1-N(a_2+L m)}\wedge u_{\epsilon_2-N(a_1+L m)} \\ 0 \\ u_{\epsilon_2-N(a_2+L m)}\wedge u_{\epsilon_1-N(a_1+L m)} \end{array}\right). 
\end{eqnarray} For $t \in \mathbb Z$ introduce also the matrices: \begin{alignat}{5} & M_{X} = \left(\begin{array}{c c} 0 & -q \\ -q & q^2 - 1 \end{array}\right),\quad & &M_{X}(t) = (q^2-1)\left(\begin{array}{c c} q^{2t-2} & -q^{2t-1} \\ -q^{2t-1} & q^{2t} \end{array}\right), & & \\ & M_{Y} = \left(\begin{array}{c c} q^{-2}-1 & -q^{-1} \\ -q^{-1} & 0 \end{array}\right),\quad & & M_{Y}(t) = (q^{-2}-1)\left(\begin{array}{c c} q^{-2t} & -q^{-2t+1} \\ -q^{-2t+1} & q^{-2t+2} \end{array}\right).& & \end{alignat} \begin{gather} M_{Z} = \left(\begin{array}{c c c c} 0 & 0 & -(q-q^{-1}) & -1 \\ 0 & 0 & -1 & 0 \\ -(q-q^{-1}) & -1 & (q-q^{-1})^2 & (q-q^{-1}) \\ -1 & 0 & (q-q^{-1}) & 0 \end{array}\right), \\ M_{Z}(t) = \qquad \frac{q^2-1}{q^2+1} \times \\ \left(\begin{array}{c c c c} q^{2t}-q^{-2t} & q^{2t-1}+q^{-2t+1} & -(q^{2t+1}+q^{-2t-1}) & -(q^{2t}-q^{-2t})\\ q^{2t-1}+q^{-2t+1} & q^{2t-2}-q^{-2t+2} & -(q^{2t}-q^{-2t}) & -(q^{2t-1}+q^{-2t+1}) \\ -(q^{2t+1}+q^{-2t-1}) & -(q^{2t}-q^{-2t}) & q^{2t+2}-q^{-2t-2} & q^{2t+1}+q^{-2t-1} \\ -(q^{2t}-q^{-2t}) & -(q^{2t-1}+q^{-2t+1}) & q^{2t+1}+q^{-2t-1}& q^{2t}-q^{-2t} \end{array}\right). \nonumber \end{gather} Note that all entries of the matrix $M_{Z}(t)$ are Laurent polynomials in $q,$ i.e. the numerators are divisible by ${q^2+1}.$ \mbox{} \noindent Computing $\mathrm {Im}(\Tc - \Ts)$ we get the following lemma: \begin{lemma}[Normal ordering rules] \label{l:norules}\mbox{} In $\wedge^2 V_{\mathrm {aff}} $ there are the following relations: \begin{eqnarray} & & u_{\epsilon-N(a+Lm_1)}\wedge u_{\epsilon-N(a+Lm_2)} = - u_{\epsilon-N(a+Lm_2)}\wedge u_{\epsilon-N(a+Lm_1)} \quad (m_1 \geqslant m_2), \label{eq:n1} \end{eqnarray} \begin{eqnarray} & &X_{a,a}^{\epsilon_1,\epsilon_2}(m_1,m_2)^{\prime} = M_X\cdot X_{a,a}^{\epsilon_1,\epsilon_2}(m_2,m_1)^{\prime\prime} + \!\!\!\!\sum_{t=1}^{[\frac{m_1-m_2}{2}]} \!\!\!\! M_X(t)\cdot X_{a,a}^{\epsilon_1,\epsilon_2}(m_2+t,m_1-t)^{\prime\prime}\label{eq:n2}\\ & & \quad (m_1\geqslant m_2; \epsilon_1 > \epsilon_2 ), \nonumber\\ & & Y_{a_1,a_2}^{\epsilon,\epsilon}(m_1,m_2)^{\prime} = M_Y\cdot Y_{a_1,a_2}^{\epsilon,\epsilon}(m_2,m_1)^{\prime\prime} + \!\!\!\!\sum_{t=1}^{[\frac{m_1-m_2}{2}]}\!\!\!\! M_Y(t)\cdot Y_{a_1,a_2}^{\epsilon,\epsilon}(m_2+t,m_1-t)^{\prime\prime} \label{eq:n3}\\ & & \quad (m_1\geqslant m_2; a_1 > a_2 ), \nonumber \\ & &Z_{a_1,a_2}^{\epsilon_1,\epsilon_2}(m_1,m_2)^{\prime} = M_Z\cdot Z_{a_1,a_2}^{\epsilon_1,\epsilon_2}(m_2,m_1)^{\prime\prime} + \!\!\!\!\sum_{t=1}^{[\frac{m_1-m_2}{2}]}\!\!\!\! M_Z(t)\cdot Z_{a_1,a_2}^{\epsilon_1,\epsilon_2}(m_2+t,m_1-t)^{\prime\prime} \label{eq:n4} \\ & & \quad (m_1\geqslant m_2; \epsilon_1 > \epsilon_2; a_1 > a_2 ). \nonumber \end{eqnarray} \end{lemma} \noindent The relations (\ref{eq:n1} -- \ref{eq:n4}) indeed have the form (\ref{eq:norule1}): in particular, all wedges $u_{k}\wedge u_{l}$ in the left-hand-sides satisfy $k\leqslant l$ and all wedges in the right-hand-sides are normally ordered. Note, moreover, that every wedge $u_{k}\wedge u_{l}$ such that $k\leqslant l$ appears in the left-hand-side of one of the relations. When $L=1$ the normal ordering rules are given by (\ref{eq:n1}) and (\ref{eq:n2}); these relations coincide with the normal ordering rules of \cite[eq. (43), (45)]{KMS}.
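As a simple illustration of how these rules are read, consider the special case $m_1 = m_2 = m$ and $\epsilon_1 > \epsilon_2$ of (\ref{eq:n2}): the sum on the right-hand-side is then empty, the first row of the matrix identity is trivial, and the second row gives $$ u_{\epsilon_2-N(a+Lm)}\wedge u_{\epsilon_1-N(a+Lm)} = -q\, u_{\epsilon_1-N(a+Lm)}\wedge u_{\epsilon_2-N(a+Lm)}, $$ that is, two factors with the same $a$ and $m$ anticommute up to a factor of $q$ (cf. the proof of Lemma \ref{l:lemma} below).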
\begin{propos} \label{p:fwbasis}\mbox{} \mbox{}\\ {\em (i)} Any wedge from $\wedge^n V_{\mathrm {aff}}$ is a linear combination of normally ordered wedges with coefficients determined by the normal ordering rules {\em (\ref{eq:n1} -- \ref{eq:n4})} applied to each pair of adjacent factors of $\wedge^n V_{\mathrm {aff}}.$ \\ \noindent {\em (ii)} Normally ordered wedges form a basis of $\wedge^n V_{\mathrm {aff}}.$ \end{propos} \begin{proof} (i) follows directly from the definition of $\wedge^n V_{\mathrm {aff}}.$ \\ (ii) In view of (i) it is enough to prove that normally ordered wedges are linearly independent. This is proved by specialization at $q=1.$ Let $w_{1},\dots,w_{m}$ be a set of distinct normally ordered wedges in $\wedge^n V_{\mathrm {aff}},$ and let $t_{1},\dots,t_{m} \in V_{\mathrm {aff}}^{\otimes n}$ be the corresponding pure tensors. Assume that \begin{equation} \sum c_j(q) w_{j} = 0, \label{eq:l1} \end{equation} where $c_1(q),\dots,c_m(q)$ are non-zero Laurent polynomials in $q.$ Then \begin{equation} \sum c_j(q) t_{j} \in \sum_{i=1}^{n-1} \mathrm {Im}(\Tc_i-\Ts_i). \end{equation} Specializing $q$ to $1,$ this gives \begin{equation} \sum c_j(1) t_{j} \in \sum_{i=1}^{n-1} \mathrm {Im}(P_i+1) \subset \otimes_{\mathbb Q}^n {\overline{V}_{\mathrm {aff}}}, \label{eq:l2} \end{equation} where ${\overline{V}_{\mathrm {aff}}} = \mathbb Q[z,z^{-1}]\otimes_{\mathbb Q}\mathbb Q^L\otimes_{\mathbb Q}\mathbb Q^N,$ and $P_i$ is the permutation operator for the $i$th and $(i+1)$th factors in $\otimes_{\mathbb Q}^n {\overline{V}_{\mathrm {aff}}}.$ Since each $t_j$ is a tensor of the form $u_{k_1}\otimes u_{k_2}\otimes \cdots \otimes u_{k_n}$ where $k_1,k_2,\dots,k_n$ is a decreasing sequence, it follows from (\ref{eq:l2}) that $c_j(1)=0$ for all $j.$ Therefore each $c_j(q)$ has the form $(q-1)c_j(q)^{(1)}$ where $c_j(q)^{(1)}$ is a Laurent polynomial in $q.$ Equation (\ref{eq:l1}) now gives \begin{equation} \sum c_j(q)^{(1)} w_{j} = 0. \end{equation} Repeating the arguments above we conclude that all $c_j(q)$ are divisible by arbitrarily large powers of $(q-1).$ Therefore all $c_j(q)$ vanish. \end{proof} \begin{lemma} \label{l:lemma} Let $l \leqslant m.$ Then the wedges $u_m\wedge u_{m-1} \wedge \cdots \wedge u_{l+1}\wedge u_{l}\wedge u_m$ and $u_l\wedge u_m\wedge u_{m-1} \wedge \cdots \wedge u_{l+1}\wedge u_{l}$ are equal to zero. \end{lemma} \begin{proof} As particular cases of relations (\ref{eq:n1} -- \ref{eq:n4}) we have, for all $k,$ $$ u_k\wedge u_k = 0, \quad u_k\wedge u_{k+1} = \begin{cases} -q^{\delta(k\not\equiv 0\bmod N)} u_{k+1}\wedge u_{k} & \text{if $N\geqslant 2,$}\\ -q^{-1}u_{k+1}\wedge u_{k} & \text{if $N=1.$} \end{cases} $$ The lemma follows by induction from (\ref{eq:n1} -- \ref{eq:n4}).
\end{proof} \section{The Fock Space} \label{s:Fock} \subsection{Definition of the Fock space} For each integer $M$ we define the Fock space ${\mathcal F}_M$ as the inductive limit $(n \rightarrow \infty)$ of $\wedge^n V_{\mathrm {aff}},$ where the maps $\wedge^n V_{\mathrm {aff}} \rightarrow \wedge^{n+1} V_{\mathrm {aff}}$ are given by $v \mapsto v\wedge u_{M-n}.$ For $v \in \wedge^n V_{\mathrm {aff}}$ we denote by $v\wedge u_{M-n}\wedge u_{M-n-1}\wedge \cdots$ the image of $v$ with respect to the canonical map from $\wedge^n V_{\mathrm {aff}}$ to ${\mathcal F}_M.$ Note that for $v_{(n)} \in \wedge^n V_{\mathrm {aff}},$ $v_{(r)} \in \wedge^r V_{\mathrm {aff}},$ the equality $$ v_{(n)}\wedge u_{M-n}\wedge u_{M-n-1}\wedge \cdots = v_{(r)}\wedge u_{M-r}\wedge u_{M-r-1}\wedge \cdots $$ holds if and only if there is $s \geqslant n,r$ such that $$ v_{(n)}\wedge u_{M-n}\wedge u_{M-n-1}\wedge \cdots \wedge u_{M-s+1} = v_{(r)}\wedge u_{M-r}\wedge u_{M-r-1}\wedge \cdots \wedge u_{M-s+1} .$$ In particular, $v_{(n)}\wedge u_{M-n}\wedge u_{M-n-1}\wedge \cdots$ vanishes if and only if there is $s \geqslant n$ such that $ v_{(n)}\wedge u_{M-n}\wedge u_{M-n-1}\wedge \cdots \wedge u_{M-s+1} $ is zero. \\ \noindent For a decreasing sequence of integers $(k_1 > k_2 > \cdots )$ such that $k_i = M-i+1$ for $i \gg 1,$ we will call the vector $ u_{k_1}\wedge u_{k_2} \wedge \cdots \quad \in {\mathcal F}_M$ a (semi-infinite) {\em normally ordered wedge}. \begin{propos}\label{p:siwbasis} The normally ordered wedges form a basis of ${\mathcal F}_M.$ \end{propos} \begin{proof} For each $w \in {\mathcal F}_M$ there are $n$ and $v\in \wedge^n V_{\mathrm {aff}}$ such that $ w = v \wedge u_{M-n}\wedge u_{M-n-1}\wedge \cdots.$ By Proposition \ref{p:fwbasis} the finite normally ordered wedges form a basis of $\wedge^n V_{\mathrm {aff}};$ therefore $w$ is a linear combination of vectors \begin{equation} u_{k_1}\wedge u_{k_2}\wedge \cdots \wedge u_{k_n}\wedge u_{M-n}\wedge u_{M-n-1}\wedge \cdots , \quad \text{ where $k_1 > k_2 > \cdots > k_n.$} \label{eq:swibasis1} \end{equation} If $k_n \leqslant M-n,$ then there is $r > n$ such that $u_{k_n}\wedge u_{M-n}\wedge u_{M-n-1}\wedge \cdots \wedge u_{M-r+1}$ vanishes by Lemma \ref{l:lemma}. It follows that (\ref{eq:swibasis1}) is zero if $k_n \leqslant M-n.$ Thus the normally ordered wedges span ${\mathcal F}_M.$ \\ \noindent Suppose $ \sum c_{(k_1,k_2,\dots)} u_{k_1}\wedge u_{k_2}\wedge \cdots = 0,$ where the wedges under the sum are normally ordered and $c_{(k_1,k_2,\dots)} \in {\mathbb K\hskip.5pt}.$ Then, by the definition of the inductive limit, there exists $n$ such that $ \sum c_{(k_1,k_2,\dots)} u_{k_1}\wedge u_{k_2}\wedge \cdots \wedge u_{k_n} = 0.$ Thus linear independence of semi-infinite normally ordered wedges follows from the linear independence of finite normally ordered wedges. \end{proof} \subsection{The actions of $\UU_q(\asll_N)$ and $\UU_q(\asll_L)$ on the Fock spaces} \label{sec:UF} Define the {\em vacuum vector} of ${\mathcal F}_M$ as $$ |M\rangle = u_M \wedge u_{M-1} \wedge \cdots .
$$ Then for each vector $w$ from ${\mathcal F}_M$ there is a sufficiently large integer $m$ such that $w$ can be represented as \begin{equation} w = v\wedge |-NLm\rangle ,\quad \text{ where $v \in \wedge^{M+NLm}V_{\mathrm {aff}}.$} \label{eq:w=vbyvac} \end{equation} \noindent For each $M\in \mathbb Z$ we define on ${\mathcal F}_M$ operators $E_i,F_i,K_i^{\pm 1}, D$ $(0\leqslant i < N)$ and $\dot{E}_a,\dot{F}_a,\dot{K}_a^{\pm 1},\dot{D}$ $(0\leqslant a < L)$ and then show, in Theorem \ref{t:UNUL}, that these operators satisfy the defining relations of $\UU_q(\asll_N)$ and $\UU_q(\asll_L)$ respectively. As the first step we define actions of these operators on vectors of the form $|-NLm\rangle.$ Let $\overline{v} = u_{-NLm}\wedge u_{-NLm -1}\wedge \cdots \wedge u_{-NL(m+1)+1}.$ We set \begin{alignat}{5} &D |-NLm\rangle & = & NL\frac{m(1-m)}{2} |-NLm\rangle, \label{eq:dNvac}& \\ &K_i |-NLm\rangle & = & q^{L\delta(i=0)}|-NLm\rangle,\label{eq:KNvac} & \\ &E_i |-NLm\rangle & = & 0, \label{eq:ENvac} & \\ &F_i |-NLm\rangle & = & \begin{cases} 0 & \text{ if $i \neq 0,$} \\ F_0(\overline{v})\wedge |-NL(m+1)\rangle & \text{ if $i = 0.$} \end{cases} & \label{eq:FNvac} \end{alignat} And \begin{alignat}{5} &\dot{D} |-NLm\rangle & = & NL\frac{m(1-m)}{2} |-NLm\rangle, & \label{eq:dLvac} \\ &\dot{K}_a |-NLm\rangle & = & q^{N\delta(a=0)}|-NLm\rangle,& \label{eq:KLvac} \\ &\dot{E}_a |-NLm\rangle & = & 0, &\label{eq:ELvac} \\ &\dot{F}_a |-NLm\rangle & = & \begin{cases} 0 & \text{ if $a \neq 0,$} \\ q^{-N}\dot{F}_0(\overline{v})\wedge |-NL(m+1)\rangle & \text{ if $a = 0.$} \end{cases}&\label{eq:FLvac} \end{alignat} Then the actions on an arbitrary vector $w \in {\mathcal F}_M$ are defined by using the presentation (\ref{eq:w=vbyvac}) and the coproducts (\ref{eq:co1} -- \ref{eq:co4}). 
Thus for $v \in \wedge^{M+NLm}V_{\mathrm {aff}}$ and $w = v\wedge |-NLm\rangle \in {\mathcal F}_M$ we define \begin{alignat}{5} &D(w) & = & D(v)\wedge |-NLm\rangle + v \wedge D |-NLm\rangle , &\label{eq:dN} \\ &K_i(w) &=& K_i(v)\wedge K_i |-NLm\rangle,& \label{eq:KN} \\ &E_i(w) &=& E_i(v)\wedge K_i |-NLm\rangle, & \label{eq:EN} \\ &F_i(w) &=& F_i(v)\wedge |-NLm\rangle + K_i^{-1}(v)\wedge F_i|-NLm\rangle .&\label{eq:FN} \end{alignat} And \begin{alignat}{5} &\dot{D}(w) & = & \dot{D}(v)\wedge |-NLm\rangle + v \wedge \dot{D} |-NLm\rangle , &\label{eq:dL} \\ &\dot{K}_a(w) &=& \dot{K}_a(v)\wedge \dot{K}_a |-NLm\rangle, & \label{eq:KL} \\ &\dot{E}_a(w) &=& \dot{E}_a(v)\wedge |-NLm\rangle, \label{eq:EL} &\\ &\dot{F}_a(w) &=& \dot{F}_a(v)\wedge \dot{K}_a^{-1}|-NLm\rangle + v\wedge \dot{F}_a|-NLm\rangle .&\label{eq:FL} \end{alignat} It follows from Lemma \ref{l:lemma} that the operators $E_i,F_i,K_i^{\pm 1},D$ and $\dot{E}_a,\dot{F}_a,\dot{K}_a^{\pm 1},\dot{D}$ are well-defined, that is, they do not depend on a particular choice of the presentation (\ref{eq:w=vbyvac}), and that for $v \in \wedge^n V_{\mathrm {aff}},$ $u\in {\mathcal F}_{M-n}$ they satisfy the following relations, analogous to the coproduct formulas (\ref{eq:co1} -- \ref{eq:co4}): \begin{alignat}{5} &D(v\wedge u) & = & D(v)\wedge u + v \wedge D(u) ,& \label{eq:codN} \\ &K_i(v\wedge u) &=& K_i(v)\wedge K_i(u), & \label{eq:coKN} \\ &E_i(v\wedge u) &=& E_i(v)\wedge K_i (u) + v\wedge E_i(u), & \label{eq:coEN} \\ &F_i(v\wedge u) &=& F_i(v)\wedge u + K_i^{-1}(v)\wedge F_i(u) .&\label{eq:coFN} \end{alignat} And \begin{alignat}{5} &\dot{D}(v\wedge u) & = & \dot{D}(v)\wedge u + v \wedge \dot{D}(u) ,& \label{eq:codL} \\ &\dot{K}_a(v\wedge u) &=& \dot{K}_a(v)\wedge \dot{K}_a(u), & \label{eq:coKL} \\ &\dot{E}_a(v\wedge u) &=& \dot{E}_a(v)\wedge u + \dot{K}_a(v)\wedge \dot{E}_a(u), & \label{eq:coEL} \\ &\dot{F}_a(v\wedge u) &=& \dot{F}_a(v)\wedge \dot{K}_a^{-1}(u) + v\wedge \dot{F}_a(u) .\label{eq:coFL}& \end{alignat} Relations ((\ref{eq:dNvac}, \ref{eq:KNvac}),(\ref{eq:dN}, \ref{eq:KN})) and ((\ref{eq:dLvac}, \ref{eq:KLvac}),(\ref{eq:dL}, \ref{eq:KL})) define the weight decomposition of the Fock space ${\mathcal F}_M.$ We have \begin{equation} \mathrm {wt}( |-NLm\rangle ) = L\Lambda_0 + N\dot{\Lambda}_0 + NL\frac{m(1-m)}{2} (\delta + \dot{\delta}), \label{eq:wt1} \end{equation} and for $v \in \wedge^{M+NLm} V_{\mathrm {aff}}$ \begin{equation} \mathrm {wt}(v\wedge |-NLm\rangle ) = \mathrm {wt}(v) + \mathrm {wt}( |-NLm\rangle ). \label{eq:wt2} \end{equation} \begin{theor} \label{t:UNUL} \mbox{} \\ {\em (i)} The operators $E_i,F_i,K_i,D$ $(0\leqslant i <N)$ define on ${\mathcal F}_M$ a structure of an integrable $\UU_q(\asll_N)$-module. And the operators $\dot{E}_a,\dot{F}_a,\dot{K}_a,\dot{D} $ define on ${\mathcal F}_M$ a structure of an integrable $\UU_q(\asll_L)$-module. \\ {\em (ii)} The actions of the subalgebras $\UU_q^{\prime}(\asll_N) \subset \UU_q(\asll_N)$ and $\UU_q^{\prime}(\asll_L) \subset \UU_q(\asll_L)$ on ${\mathcal F}_M$ are mutually commutative. \end{theor} \begin{proof} (i) It is straightforward to verify that the relations (\ref{eq:r1}--\ref{eq:r4}) are satisfied. In particular, the weights of $E_i,F_i$ and $\dot{E}_a,\dot{F}_a$ are $\alpha_i,-\alpha_i$ and $\dot{\alpha}_a,-\dot{\alpha}_a$ respectively.
To prove the relations \begin{equation} [E_i,F_j] = \delta_{ij}\frac{K_i-K_i^{-1}}{q -q^{-1}}, \quad \text{and} \quad [\dot{E}_a,\dot{F}_b] = \delta_{ab}\frac{\dot{K}_a-\dot{K}_a^{-1}}{q -q^{-1}} \label{eq:t1} \end{equation} it is enough, by (\ref{eq:KN}--\ref{eq:FN}) and (\ref{eq:KL}--\ref{eq:FL}), to show that these relations hold when applied to a vacuum vector of the form $|-NLm\rangle.$ If $i\neq j ,$ $a\neq b$ we have $$ [E_i,F_j]|-NLm\rangle = 0,\quad [\dot{E}_a,\dot{F}_b]|-NLm\rangle = 0 $$ because $\alpha_i-\alpha_j + \mathrm {wt}(|-NLm\rangle)$ $(i\neq j )$ and $\dot{\alpha}_a -\dot{\alpha}_b+\mathrm {wt}(|-NLm\rangle)$ $(a\neq b)$ are not weights of ${\mathcal F}_{-NLm}.$ The relations \begin{alignat}{4} &[E_i,F_i]|-NLm\rangle &=&\frac{K_i-K_i^{-1}}{q -q^{-1}}|-NLm\rangle \label{eq:t21} \\ &[\dot{E}_a,\dot{F}_a] |-NLm\rangle &= &\frac{\dot{K}_a-\dot{K}_a^{-1}}{q -q^{-1}}|-NLm\rangle & \label{eq:t22} \end{alignat} evidently hold by (\ref{eq:KNvac} -- \ref{eq:FNvac}), (\ref{eq:KLvac} -- \ref{eq:FLvac}) when $i \neq 0,$ $a\neq 0.$ Let $a=0.$ We have \begin{align*} \dot{F}_0 |-NLm\rangle = &q^{-N}\sum_{i =1}^N q^{i} \: u_{N-N(1+Lm)}\wedge u_{N-1-N(1+Lm)}\wedge \cdots \\ & \cdots \wedge u_{i-N(L+L(m-1))} \wedge \cdots \wedge u_{1-N(1+Lm)} \wedge |N-N(2+Lm)\rangle. \end{align*} Then by Lemma \ref{l:lemma} $$ \dot{E}_0 \dot{F}_0 |-NLm\rangle = q^{1-N} \sum_{i=1}^N q^{2(i-1)} |-NLm\rangle = \frac{q^N-q^{-N}}{q -q^{-1}}|-NLm\rangle.$$ This shows the relation (\ref{eq:t22}) for $a=0.$ The relation (\ref{eq:t21}) for $i=0$ is shown in a similar way. Thus $E_i,F_i,K_i,D$ and $\dot{E}_a,\dot{F}_a,\dot{K}_a,\dot{D} $ satisfy the defining relations (\ref{eq:r1} -- \ref{eq:r5}). Observe that for $i=0,\dots,N-1;$ $a=0,\dots,L-1$ and $\mu \in P_N+P_L,$ $\mu + r \alpha_i$, $ \mu + n \dot{\alpha}_a $ are weights of ${\mathcal F}_{M}$ for only a finite number of $r$ and $n.$ Therefore ${\mathcal F}_M$ is an integrable module of $\operatorname{U}_q(\mathfrak {sl}_2)_i = \langle E_i,F_i,K_i^{\pm 1}\rangle $ and $\operatorname{U}_q(\mathfrak {sl}_2)_a = \langle \dot{E}_a,\dot{F}_a,\dot{K}_a^{\pm 1}\rangle .$ By Proposition B.1 of \cite{KMPY} this implies that the Serre relations (\ref{eq:r6}, \ref{eq:r7}) are satisfied. Eigenspaces of the operator $D$ and eigenspaces of the operator $\dot{D}$ are finite-dimensional. Therefore the integrability with respect to each $\operatorname{U}_q(\mathfrak {sl}_2)_i$ and $\operatorname{U}_q(\mathfrak {sl}_2)_a$ implies the integrability of ${\mathcal F}_M$ as both a $\UU_q(\asll_N)$-module and a $\UU_q(\asll_L)$-module. \\ \noindent (ii) The Cartan part of $\UU_q^{\prime}(\asll_N)$ evidently commutes with $\UU_q^{\prime}(\asll_L),$ and vice versa. By (\ref{eq:KN} -- \ref{eq:FN}) and (\ref{eq:KL} -- \ref{eq:FL}) it is enough to prove that commutators between the other generators vanish when applied to a vector of the form $|-NLm\rangle .$ The relation $$ [E_i,\dot{E}_a] |-NLm\rangle = 0$$ is trivially satisfied by (\ref{eq:ENvac}, \ref{eq:ELvac}). The relations $$ [F_i,\dot{E}_a] |-NLm\rangle = 0, \quad [E_i,\dot{F}_a] |-NLm\rangle = 0$$ hold because $\dot{\alpha}_a - \alpha_i + \mathrm {wt}(|-NLm\rangle)$ and $ \alpha_i -\dot{\alpha}_a + \mathrm {wt}(|-NLm\rangle)$ are not weights of ${\mathcal F}_{-NLm}.$ The relations $$[F_i,\dot{F}_a] |-NLm\rangle = 0$$ are trivial by (\ref{eq:FNvac}, \ref{eq:FLvac}) when $i\neq 0,a\neq 0;$ and are verified by using the normal ordering rules (\ref{eq:n1} -- \ref{eq:n4}) and Lemma \ref{l:lemma} in the rest of the cases.
\end{proof} \subsection{The actions of Bosons} \label{sec:bosons} We will now define actions of operators $B_n$ $(n\in \mathbb Z_{\neq 0})$ (called {\em bosons}) on ${\mathcal F}_M.$ Let $u_{k_1}\wedge u_{k_2} \wedge \cdots $ ($k_i = M-i+1$ for $i\gg 1$) be a vector of ${\mathcal F}_M.$ By Lemma \ref{l:lemma}, for $n\neq 0$ the sum \begin{align} &(z^n u_{k_1})\wedge u_{k_2} \wedge u_{k_3} \wedge \cdots \; + \label{eq:ba}\\ &u_{k_1} \wedge(z^n u_{k_2})\wedge u_{k_3} \wedge \cdots \; + \nonumber\\ &u_{k_1} \wedge u_{k_2} \wedge(z^n u_{k_3})\wedge \cdots \; + \nonumber\\ & \quad + \quad \cdots \nonumber \end{align} contains only a finite number of non-zero terms, and is, therefore, a vector of ${\mathcal F}_M.$ By Proposition \ref{p:CPD} the assignment $u_{k_1}\wedge u_{k_2} \wedge \cdots \mapsto \text{(\ref{eq:ba})}$ defines an operator on ${\mathcal F}_M.$ We denote this operator by $B_n.$ By definition we have, for $v \in V_{\mathrm {aff}},$ $u \in {\mathcal F}_{M-1}:$ \begin{equation} B_n(v\wedge u) = (z^nv)\wedge u + v\wedge B_n(u). \label{eq:Bvu} \end{equation} \begin{propos} For all $n\in \mathbb Z_{\neq 0}$ the operator $B_n$ commutes with the actions of $\UU_q^{\prime}(\asll_N)$ and $\UU_q^{\prime}(\asll_L).$ \end{propos} \begin{proof} It follows immediately from the definition that the weight of $B_n$ is $n(\delta + \dot{\delta}).$ Thus $B_n$ commutes with $K_i, \dot{K}_a$ $(0\leqslant i <N, 0\leqslant a < L).$ Let $X$ be any of the operators $E_i,F_i,\dot{E}_a,\dot{F}_a$ $(0\leqslant i <N, 0\leqslant a < L).$ The relations (\ref{eq:coEN}, \ref{eq:coFN}), (\ref{eq:coEL}, \ref{eq:coFL}) and (\ref{eq:Bvu}) now imply that $[B_n , X] = 0$ will follow from $[B_n , X] |-NLm \rangle = 0 $ for an arbitrary integer $m.$ If $n >0,$ we have $[B_n , X] |-NLm \rangle = 0$ because $n(\delta + \dot{\delta})\pm\alpha_i + \mathrm {wt}(|-NLm \rangle) $ and $n(\delta + \dot{\delta})\pm\dot{\alpha}_a + \mathrm {wt}(|-NLm \rangle)$ are not weights of ${\mathcal F}_{-NLm}.$ Let $n < 0.$ Consider the expansion $$ [B_n , X] |-NLm \rangle = \sum_{\nu} c_{\nu} u_{k^{\nu}_1}\wedge u_{k^{\nu}_2}\wedge \cdots $$ where the wedges in the right-hand-side are normally ordered. Comparing the weights of both sides, we obtain for all $\nu$ the inequality $k_1^{\nu} > -NLm.$ For $r \geqslant 0$ (\ref{eq:coEN}, \ref{eq:coFN}), (\ref{eq:coEL}, \ref{eq:coFL}) and (\ref{eq:Bvu}) give \begin{multline} [B_n , X] |-NLm \rangle = \\ = u_{-NLm}\wedge u_{-NLm-1}\wedge \cdots \wedge u_{-NL(m+r)+1}\wedge [B_n , X] |-NL(m+r) \rangle \label{eq:bx} \end{multline} where $$ [B_n , X] |-NL(m+r) \rangle = \sum_{\nu} c_{\nu} u_{k^{\nu}_1-NLr}\wedge u_{k^{\nu}_2-NLr}\wedge \cdots . $$ Now let $r$ be sufficiently large, so that $$ k_1^{\nu} - NLr \leqslant -NLm$$ holds for all $\nu.$ By Lemma \ref{l:lemma}, the last inequality and $ k_1^{\nu} - NLr > -NL(m+r) $ imply that (\ref{eq:bx}) vanishes. \end{proof} \begin{propos} There are non-zero $\gamma_n(q) \in \mathbb Q[q,q^{-1}]$ (independent of $M$) such that \begin{equation} [B_n,B_{n^{\prime}}] = \delta_{n+n^{\prime},0} \gamma_n(q). \end{equation} \end{propos} \begin{proof} Each vector of ${\mathcal F}_{M^{\prime}}$ $(M^{\prime}\in\mathbb Z)$ is of the form $v\wedge |M\rangle$ where $v \in \wedge^k V_{\mathrm {aff}},$ and $k=M^{\prime}-M$ is sufficiently large.
By (\ref{eq:Bvu}) we have $$ [B_n,B_{n^{\prime}}](v\wedge |M\rangle) = v\wedge[B_n,B_{n^{\prime}}]|M\rangle.$$ The vector $[B_n,B_{n^{\prime}}]|M\rangle$ vanishes if $n+n^{\prime}>0$ because in this case $\mathrm {wt}(|M\rangle) + (n+n^{\prime})(\delta + \dot{\delta})$ is not a weight of ${\mathcal F}_M.$ Let $n+n^{\prime}<0.$ Write $[B_n,B_{n^{\prime}}]|M\rangle$ as a linear combination of normally ordered wedges: $$ [B_n,B_{n^{\prime}}]|M\rangle = \sum_{\nu} c_{\nu} u_{k_1^{\nu}}\wedge u_{k_2^{\nu}} \wedge \cdots .$$ Since $[B_n,B_{n^{\prime}}]|M\rangle$ has weight $\mathrm {wt}(|M\rangle) + (n+n^{\prime})(\delta + \dot{\delta})$ with $n+n^{\prime}<0,$ we necessarily have $k_1^{\nu} > M.$ For any $s > 0,$ eq. (\ref{eq:Bvu}) gives \begin{equation} [B_n,B_{n^{\prime}}]|M\rangle = u_M\wedge u_{M-1} \wedge \cdots \wedge u_{M-NLs+1}\wedge [B_n,B_{n^{\prime}}]|M-NLs\rangle, \label{eq:pB1}\end{equation} where $$ [B_n,B_{n^{\prime}}]|M-NLs\rangle = \sum_{\nu} c_{\nu} u_{k_1^{\nu}-NLs}\wedge u_{k_2^{\nu}-NLs} \wedge \cdots. $$ Taking $s$ sufficiently large so that $ M-k_1^{\nu} + NLs \geqslant 0$ holds for all $\nu$ above, we have for all $\nu$ the inequalities $$ k_1^{\nu} - NLs - (M - NLs) > 0, \quad \text{and} \quad M-(k_1^{\nu} - NLs) \geqslant 0. $$ Lemma \ref{l:lemma} now shows that (\ref{eq:pB1}) is zero. Now let $n+n^{\prime}=0.$ The vector $[B_n,B_{n^{\prime}}]|M\rangle$ has weight $\mathrm {wt}(|M\rangle).$ The weight subspace of this weight is one-dimensional, so we have $ [B_n,B_{-n}]|M\rangle = \gamma_{n,M}(q) |M\rangle $ for $\gamma_{n,M}(q) \in {\mathbb K\hskip.5pt}.$ Since $[B_n,B_{-n}]|M\rangle = u_M \wedge [B_n,B_{-n}]|M-1\rangle,$ $ \gamma_{n,M}(q)$ is independent of $M;$ we write $\gamma_{n}(q)$ for this common value. The coefficients $c_{kl}(q),c_{kl}^{(i)}(q)$ in the normal ordering rules (\ref{eq:norule1}) are Laurent polynomials in $q,$ hence so is each $\gamma_{n}(q).$ Specializing to $q=1,$ we have $\gamma_{n}(1) = nNL.$ Thus all $\gamma_{n}(q)$ $(n \in \mathbb Z_{\neq 0})$ are non-zero. \end{proof} \begin{propos} If $N=1$ or $L=1$ or $n=1,2,$ then $\gamma _n(q)$ is given by the following formula: \begin{equation} \gamma _n(q) = n \frac{1-q^{2Nn}}{1-q^{2n}} \frac{1-q^{-2Ln}}{1-q^{-2n}}. \label{Bconst} \end{equation} \end{propos} \begin{proof} The $L=1$ case is due to \cite{KMS}, and the formula for $N=1$ is obtained from the formula for $L=1$ by comparing the normal ordering rules (\ref{eq:n2}) and (\ref{eq:n3}). The $n=1,2$ case is shown by a direct but lengthy calculation. (First act with $B_{-n}$ on the vacuum vector, express all terms as linear combinations of the normally ordered wedges, then act with $B_n$ and, again, rewrite the result in terms of the normally ordered wedges to get the coefficient $\gamma _n(q)$.) \end{proof} \begin{conje} The formula {\em (\ref{Bconst})} is valid for all positive integers $N,L,n$.
\end{conje} \noindent Let $H$ be the Heisenberg algebra generated by $\{ B_n\}_{n\in\mathbb Z_{\neq 0}}$ with the defining relations $[B_n,B_{n^{\prime}}]=\delta_{n+n^{\prime},0}\gamma_n(q).$ Summarizing this and the previous sections, we have constructed on each Fock space ${\mathcal F}_M$ an action of the algebra $H\otimes \UU_q^{\prime}(\asll_N) \otimes \UU_q^{\prime}(\asll_L).$ Note that the action of $\UU_q^{\prime}(\asll_N)$ has level $L$ and the action of $\UU_q^{\prime}(\asll_L)$ has level $N.$ \subsection{The decomposition of the Fock space} \label{sec:decomp} Let $P_N^+$ and $P_N^+(L)$ be respectively the set of dominant integral weights of $\UU_q^{\prime}(\asll_N)$ and the subset of dominant integral weights of level $L\in \mathbb N:$ \begin{alignat}{4} &P_N^+ &= &\{ a_0\Lambda_0 + a_1\Lambda_1 + \cdots + a_{N-1}\Lambda_{N-1} \: | \: a_i \in \mathbb Z_{\geqslant 0} \}, \\ &P_N^+(L) &= &\{ a_0\Lambda_0 + a_1\Lambda_1 + \cdots + a_{N-1}\Lambda_{N-1} \: | \: a_i \in \mathbb Z_{\geqslant 0},\; \sum a_i = L \}. \end{alignat} For $\Lambda \in P_N^+$ let $V(\Lambda)$ be the irreducible integrable highest weight module of $\UU_q^{\prime}(\asll_N),$ and let $v_{\Lambda} \in V(\Lambda)$ be the highest weight vector. Let $\overline{\Lambda}_1,\overline{\Lambda}_2,\dots,\overline{\Lambda}_{N-1}$ be the fundamental weights of $\mathfrak {sl}_N,$ and let $\overline{\alpha}_i = 2 \overline{\Lambda}_i-\overline{\Lambda}_{i+1}-\overline{\Lambda}_{i-1}$ $(1 \leqslant i < N)$ be the simple roots. Here the indices are cyclically extended to all integers modulo $N,$ and $\overline{\Lambda}_0 :=0.$ Let $\overline{Q}_N = \oplus_{i=1}^{N-1} \mathbb Z \overline{\alpha}_i $ be the root lattice of $\mathfrak {sl}_N.$ For a $\UU_q^{\prime}(\asll_N)$-weight $\Lambda = \sum_{i=0}^{N-1} a_i \Lambda_i $ we will set $\overline{\Lambda} = \sum_{i=1}^{N-1} a_i \overline{\Lambda}_i.$ A vector $w \in {\mathcal F}_M$ is a {\em highest weight vector} of $H\otimes \UU_q^{\prime}(\asll_N) \otimes \UU_q^{\prime}(\asll_L)$ if it is a highest weight vector with respect to $\UU_q^{\prime}(\asll_N)$ and $\UU_q^{\prime}(\asll_L)$ and is annihilated by $B_n$ with $n > 0.$ We will now describe a family of highest weight vectors. With every $\Lambda = \sum_{i=0}^{N-1}a_i \Lambda_i \in P_N^+(L)$ such that $\overline{\Lambda} \equiv \overline{\Lambda}_M \bmod \overline{Q}_N,$ we associate $\dot{\Lambda}^{(M)} \in P_L^+(N)$ (i.e. $\dot{\Lambda}^{(M)}$ is a dominant integral weight of $\UU_q^{\prime}(\asll_L)$ of level $N$) as follows. Let $M\equiv s \bmod NL$ $( 0\leqslant s < NL),$ and let $l_1 \geqslant l_2 \geqslant \dots \geqslant l_N $ be the partition defined by the relations: \begin{align} &l_i - l_{i+1} = a_i\quad (1 \leqslant i < N), \label{eq:part1}\\ & l_1 + l_2 + \cdots + l_N = s + NL. \label{eq:part2} \end{align} Note that all $l_i$ are integers, and that $l_N > 0.$ Then we set \begin{equation} \dot{\Lambda}^{(M)} := \dot{\Lambda}_{l_1} + \dot{\Lambda}_{l_2} + \cdots + \dot{\Lambda}_{l_N}. \label{eq:dotLambda} \end{equation} Recall that the indices of the fundamental weights are cyclically extended to all integers modulo $L$. Consider the Young diagram of $l_1 \geqslant l_2 \geqslant \dots \geqslant l_N$ (Fig.
1). We set the coordinates $(x,y)$ of the lowest leftmost square to be $(1,1).$ \begin{center} \begin{picture}(120,120)(-15,-15) \multiput(0,0)(10,0){2}{\line(0,1){85}} \put(0,85){\line(1,0){10}} \put(0,85){\makebox(12,15){{\scriptsize $l_1$}}} \put(20,0){\line(0,1){60}} \put(10,60){\line(1,0){10}} \put(10,60){\makebox(12,15){{\scriptsize $l_2$}}} \put(30,0){\line(0,1){40}} \put(20,40){\line(1,0){10}} \put(20,40){\makebox(12,15){{\scriptsize $l_3$}}} \put(40,15){\makebox(30,10){$\cdots$}} \multiput(80,0)(10,0){2}{\line(0,1){20}} \put(80,20){\line(1,0){10}} \put(80,20){\makebox(12,15){{\scriptsize $l_N$}}} \put(0,0){\line(1,0){30}} \put(80,0){\line(1,0){10}} \put(-10,-10){\vector(1,0){20}} \put(10,-15){\makebox{{\scriptsize $x$}}} \put(-10,-10){\vector(0,1){20}} \put(-15,10){\makebox{{\scriptsize $y$}}} \end{picture} {\large { Fig. 1}} \end{center} Introduce a numbering of the squares of the Young diagram by $1,2,\dots,s+NL$ by requiring that the numbers assigned to squares in the bottom row of any pair of adjacent rows are greater than the numbers assigned to squares in the top row, and that the numbers increase from right to left within each row (cf. the example below). Letting $(x_i,y_i)$ be the coordinates of the $i$th square, set $k_i = x_i + N(y_i - L - 1) + M-s.$ Then $k_i > k_{i+1}$ for all $i=1,2,\dots,s+NL-1.$ Now define \begin{equation} \psi_{\Lambda} = u_{k_1}\wedge u_{k_2} \wedge \cdots \wedge u_{k_{s+NL}}\wedge |M-s-NL \rangle. \label{eq:hwv} \end{equation} Note that $\psi_{\Lambda} \in {\mathcal F}_M,$ and $\psi_{\Lambda}$ is a normally ordered wedge. \begin{example} Let $N=3,$ $L=2,$ and $M=0.$ The set $\{ \Lambda \in P_3^+(2) \: | \: \overline{\Lambda} \equiv 0\bmod \overline{Q}_3 \}$ contains only the two weights $2\Lambda_0$ and $\Lambda_1+\Lambda_2.$ The corresponding weights of $\UU_q^{\prime}(\asll_L)$ and the numbered Young diagrams are shown below. $$ \begin{picture}(100,60)(0,0) \put(0,50){$\Lambda = 2\Lambda_0:$} \put(0,35){$\dot{\Lambda}^{(0)} = 3\dot{\Lambda}_0$} \multiput(0,0)(0,10){3}{\line(1,0){30}} \multiput(0,0)(10,0){4}{\line(0,1){20}} \put(0,0){\makebox(10,10){\scriptsize{$6$}}} \put(10,0){\makebox(10,10){\scriptsize{$5$}}} \put(20,0){\makebox(10,10){\scriptsize{$4$}}} \put(0,10){\makebox(10,10){\scriptsize{$3$}}} \put(10,10){\makebox(10,10){\scriptsize{$2$}}} \put(20,10){\makebox(10,10){\scriptsize{$1$}}} \end{picture} \begin{picture}(100,60)(0,0) \put(0,50){$\Lambda = \Lambda_1+\Lambda_2:$} \put(0,35){$\dot{\Lambda}^{(0)} = \dot{\Lambda}_0 + 2\dot{\Lambda}_1 $} \multiput(0,0)(0,10){2}{\line(1,0){30}} \put(0,20){\line(1,0){20}}\put(0,30){\line(1,0){10}} \multiput(0,0)(10,0){2}{\line(0,1){30}} \put(20,0){\line(0,1){20}}\put(30,0){\line(0,1){10}} \put(0,0){\makebox(10,10){\scriptsize{$6$}}} \put(10,0){\makebox(10,10){\scriptsize{$5$}}} \put(20,0){\makebox(10,10){\scriptsize{$4$}}} \put(0,10){\makebox(10,10){\scriptsize{$3$}}} \put(10,10){\makebox(10,10){\scriptsize{$2$}}} \put(0,20){\makebox(10,10){\scriptsize{$1$}}} \end{picture} $$ \end{example} \begin{propos}\label{p:hw} For each $\Lambda \in P_N^+(L)$ such that $\overline{\Lambda} \equiv \overline{\Lambda}_M\bmod \overline{Q}_N,$ $\psi_{\Lambda}$ is a highest weight vector of $H\otimes \UU_q^{\prime}(\asll_N)\otimes \UU_q^{\prime}(\asll_L).$ The $\UU_q^{\prime}(\asll_N)$-weight of $\psi_{\Lambda}$ is $\Lambda,$ and the $\UU_q^{\prime}(\asll_L)$-weight of $\psi_{\Lambda}$ is $\dot{\Lambda}^{(M)}.$ \end{propos} \begin{proof} The weights of $\psi_{\Lambda}$ are given by (\ref{eq:wt1}, \ref{eq:wt2}).
To prove that $\psi_{\Lambda}$ is annihilated by $E_i,\dot{E}_a$ and $B_n$ $(n >0),$ we use the following lemma. \begin{lemma}\label{l:hw} Keeping $\Lambda$ as in the statement of Proposition \ref{p:hw}, define the decreasing sequence $k_1,k_2,\dots$ from $\psi_{\Lambda} = u_{k_1}\wedge u_{k_2} \wedge \cdots .$ \noindent Then for $l>m$ we have \begin{equation} u_{k_l} \wedge u_{k_m} = \sum_{l^{\prime }}c_{\alpha , k_{l^{\prime }}}u_{\alpha }\wedge u_{k_{l^{\prime }}} \: \: \mbox{ where } \alpha > k_{l^{\prime }} \geqslant k_l. \end{equation} \end{lemma} \begin{proof} Define $\epsilon _{k_i}, a_{k_i}, m_{k_i}$ $(1 \leqslant \epsilon _{k_i}\leqslant N, 1\leqslant a_{k_i} \leqslant L, m_{k_i} \in \mathbb Z)$ by $k_{i} = \epsilon _{k_i}-N(a_{k_i}+L m_{k_i})$. Using the normal ordering rules, we have \begin{equation} u_{k_l} \wedge u_{k_m} = \sum c_{\alpha ,\beta} u_{\alpha }\wedge u_{\beta}, \label{albeta} \end{equation} where $k_m\geqslant \alpha > \beta \geqslant k_l$ and $\alpha = \epsilon _{k_i} -N(a_{k_{i^{\prime}}}+Lm_{\alpha })$, $\beta = \epsilon _{k_j} -N(a_{k_{j^{\prime}}}+Lm_{\beta })$, $i,j,i^{\prime},j^{\prime} \in \{l,m \} $, $i \neq j$, $i^{\prime} \neq j^{\prime}$, $m_{\alpha }, m_{\beta } \in \mathbb Z$. \mbox{} From the explicit expression for $\psi_{\Lambda}$ (cf. (\ref{eq:hwv})) it follows that there is at most one integer $\gamma $ such that $\gamma = \epsilon _{k_i} -N(a_{k_{i^{\prime}}}+Lm_{\gamma })$ $(i,i^{\prime } \in \{l,m \}, \; m_{\gamma} \in \mathbb Z)$, $k_l < \gamma < k_m $ and $\gamma \neq k_i.$ Moreover, if such a $\gamma $ exists, then $a_{k_l} \neq a_{k_m}$, $\epsilon _{k_l} > \epsilon _{k_m}$ and $\gamma = \epsilon _{k_l}-N(a_{k_l}+L(m_{k_m}+\delta(a_{k_l} <a_{k_m})))$. Note that $\gamma$ is the maximal element of the set $\{ \gamma^{\prime} | \gamma^{\prime} =\epsilon _{k_i} -N(a_{k_{i^{\prime}}}+Lm_{\gamma^{\prime}}), \; i,i^{\prime } \in \{l,m \}, \; m_{\gamma^{\prime}} \in \mathbb Z, \; k_l < \gamma^{\prime} < k_m \} $. If such a $\gamma $ exists, then $\beta$ in (\ref{albeta}) is distinct from $\gamma $. Therefore $\beta = k_{l^{\prime }}$ for some $l^{\prime }$ such that $k_{l^{\prime }} \geqslant k_l$, and the lemma follows. \end{proof} Now we continue the proof of Proposition \ref{p:hw}. \mbox{} From the definition of $\psi_{\Lambda}$ it follows that $E_i\psi_{\Lambda},\dot{E}_a\psi_{\Lambda}$ and $B_n\psi_{\Lambda}$ $(n >0)$ are linear combinations of vectors of the form \begin{equation} u_{k_1} \wedge \dots\wedge u_{k_{i-1}} \wedge u_{k_{j}} \wedge u_{k_{i+1}} \wedge \dots \wedge u_{k_{j}} \wedge\dots . \label{klwedge} \end{equation} Applying Lemma \ref{l:hw} repeatedly, we conclude that the vectors (\ref{klwedge}) are all zero.
\end{proof} \noindent Let ${\mathbb K\hskip.5pt}[H_-]$ be the Fock module of $H.$ That is, ${\mathbb K\hskip.5pt}[H_-]$ is the $H$-module generated by the vector $1$ with the defining relations $B_n 1 =0$ for $n >0.$ By Theorem \ref{t:UNUL}, ${\mathcal F}_M$ is an integrable module of $\UU_q^{\prime}(\asll_N)$ and $\UU_q^{\prime}(\asll_L).$ Therefore it is semisimple relative to the algebra $H\otimes \UU_q^{\prime}(\asll_N)\otimes \UU_q^{\prime}(\asll_L).$ Proposition \ref{p:hw} now implies that we have an injective $H\otimes \UU_q^{\prime}(\asll_N)\otimes \UU_q^{\prime}(\asll_L)$-linear homomorphism \begin{equation} \bigoplus_{ \{\Lambda \in P_N^+(L)\: |\: \overline{\Lambda} \equiv \overline{\Lambda}_M\bmod \overline{Q}_N \}} {\mathbb K\hskip.5pt}[H_-]\otimes V(\Lambda)\otimes V(\dot{\Lambda}^{(M)}) \; \rightarrow \; {\mathcal F}_M \label{eq:hom} \end{equation} sending $1\otimes v_{\Lambda} \otimes v_{\dot{\Lambda}^{(M)}}$ to $\psi_{\Lambda}.$ It is known (cf. \cite[Theorem 1.6]{F1}) that (\ref{eq:hom}) specializes to an isomorphism when $q=1$. The characters of ${\mathbb K\hskip.5pt}[H_-],$ $V(\Lambda),$ $V(\dot{\Lambda}^{(M)}),$ and ${\mathcal F}_M$ remain unchanged when $q$ is specialized to $1.$ Therefore (\ref{eq:hom}) is an isomorphism. Summarizing, we have the following theorem. \begin{theor} \label{t:decofF} There is an isomorphism of $H\otimes \UU_q^{\prime}(\asll_N)\otimes \UU_q^{\prime}(\asll_L)$-modules: \begin{equation} {\mathcal F}_M \cong \bigoplus_{ \{\Lambda \in P_N^+(L)\: |\: \overline{\Lambda} \equiv \overline{\Lambda}_M\bmod \overline{Q}_N \}} {\mathbb K\hskip.5pt}[H_-]\otimes V(\Lambda)\otimes V(\dot{\Lambda}^{(M)}) . \end{equation} \end{theor} \section{The toroidal Hecke algebra and the quantum toroidal algebra} \label{s:tor} \subsection{Toroidal Hecke algebra} \label{sec:torHecke} \mbox{} From now on we will work over the base field $\mathbb Q(q^{\frac{1}{2N}})$ rather than $\mathbb Q(q).$ Until the end of the paper we put ${\mathbb K\hskip.5pt} = \mathbb Q(q^{\frac{1}{2N}}).$ Clearly, all results of the preceding sections hold for this ${\mathbb K\hskip.5pt}.$ The toroidal Hecke algebra of type $\mathfrak {gl}_n$, $\ddot{\mathbf {H}}_n,$ \cite{VV1,VV2} is a unital associative algebra over ${\mathbb K\hskip.5pt}$ with the generators $\mathbf x^{\pm 1},$ $T_i^{\pm 1},X_j^{\pm 1},Y_j^{\pm 1}, 1\leqslant i < n, 1\leqslant j \leqslant n.$ The defining relations involving $T_i^{\pm 1},X_j^{\pm 1}$ are those of the affine Hecke algebra (\ref{eq:ah1} -- \ref{eq:ah2}), and the rest of the relations are as follows: \begin{eqnarray*} & \text{ the elements $\mathbf x^{\pm 1}$ are central,} & \qquad \mathbf x \mathbf x^{-1} = \mathbf x^{-1}\mathbf x = 1, \\ & Y_j Y_j^{-1} = Y_j^{-1} Y_j = 1, & \qquad Y_i Y_j = Y_j Y_i, \\ & T_i^{-1} Y_i T_i^{-1} = q^{-2} Y_{i+1}, &\qquad T_i Y_j = Y_j T_i \quad \text{if $ j \neq i,i+1,$} \\ &(X_1 X_2 \cdots X_n)Y_1 = \mathbf x Y_1 (X_1 X_2 \cdots X_n), &\qquad X_2 Y_1^{-1} X_2^{-1} Y_1 = q^{-2} T_1^2. \end{eqnarray*} The subalgebras of $\ddot{\mathbf {H}}_n$ generated by $T_i^{\pm 1},X_j^{\pm 1}$ and by $T_i^{\pm 1},Y_j^{\pm 1}$ are both isomorphic to the affine Hecke algebra $\dot{\mathbf {H}}_n$ (cf. \cite{VV1}, \cite{VV2}).
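\noindent {\bf Remark.} Note, for instance, that inverting the relation $T_i^{-1} Y_i T_i^{-1} = q^{-2} Y_{i+1}$ gives $T_i Y_i^{-1} T_i = q^{2} Y_{i+1}^{-1},$ so that the elements $T_i^{\pm 1}$ and $Y_j^{-1}$ satisfy the defining relations (\ref{eq:ah1} -- \ref{eq:ah2}) of $\dot{\mathbf {H}}_n$ with $Y_j^{-1}$ playing the role of $X_j.$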
\\ \noindent Following \cite{C3} we introduce a representation of the toroidal Hecke algebra on the space $ ({\mathbb K\hskip.5pt}[z^{\pm 1}] \otimes {\mathbb K\hskip.5pt}^L)^{\otimes n}$ $=$ $ {\mathbb K\hskip.5pt}[z_1^{\pm 1},\dots,z_n^{\pm 1}] \otimes ({\mathbb K\hskip.5pt}^L)^{\otimes n}.$ This representation is an extension of the representation of $\dot{\mathbf {H}}_n = \langle T_i^{\pm 1} , X_j \rangle $ described in Section \ref{sec:affH}. Let $\nu = \sum_{a=1}^L \nu(a) \epsilon_a,$ where $\epsilon_a = \dot{\overline{\Lambda}}_a - \dot{\overline{\Lambda}}_{a-1},$ be an integral weight of $\mathfrak {sl}_L$ $( \nu(a) \in \mathbb Z).$ Define $q^{\nu^{\vee}} \in \mathrm {End} \left({\mathbb K\hskip.5pt}[z^{\pm 1}] \otimes {\mathbb K\hskip.5pt}^L\right)$ as follows: $$ q^{\nu^{\vee}}( z^m\mathfrak e_a) = q^{\nu(L+1-a)} z^m\mathfrak e_a.$$ Here the basis $\mathfrak e_1,\dots,\mathfrak e_L$ of ${\mathbb K\hskip.5pt}^L$ is the same as in Section \ref{sec:Usl}. For $p \in q^{\mathbb Z}$ define $p^D \in \mathrm {End} \left({\mathbb K\hskip.5pt}[z^{\pm 1}] \otimes {\mathbb K\hskip.5pt}^L\right)$ as $$ p^{D}( z^m\mathfrak e_a) = p^{m} z^m\mathfrak e_a.$$ For $i=1,2,\dots,n-1$ let $s_i$ be the permutation operator of factors $i$ and $i+1$ in $ ({\mathbb K\hskip.5pt}[z^{\pm 1}] \otimes {\mathbb K\hskip.5pt}^L)^{\otimes n},$ and let $\tilde{T}_{i,i+1} = - q (\Tc_i)^{-1}.$ Here $\Tc_i$ is the generator of the finite Hecke algebra defined in (\ref{eq:Tci}). For $X \in \mathrm {End} \left({\mathbb K\hskip.5pt}[z^{\pm 1}] \otimes {\mathbb K\hskip.5pt}^L\right)$ let $$(X)_i := 1^{\otimes (i-1)} \otimes X \otimes 1^{\otimes (n-i-1)}\in \mathrm {End} \left({\mathbb K\hskip.5pt}[z^{\pm 1}] \otimes {\mathbb K\hskip.5pt}^L\right)^{\otimes n}.$$ For $i=1,2,\dots,n$ define the matrix analogue of the Cherednik-Dunkl operator \cite{C3} as \begin{equation} Y_i^{(n)} = \tilde{T}^{-1}_{i,i+1}\cdots \tilde{T}^{-1}_{n-1,n} s_{n-1} s_{n-2} \cdots s_1 (p^D)_1 (q^{\nu^{\vee}})_1 \tilde{T}_{1,2}\cdots \tilde{T}_{i-1,i}. \label{eq:Y} \end{equation} Let $s\in \{0,1,\dots,NL-1\}$ and $m\in \mathbb Z$ be defined from $n = s + NLm.$ Put $\unun{n} = Nm.$ \begin{propos}[\cite{C3}] \label{p:torHeckerep} The map $$ T_i \mapsto \Tc_i,\quad X_i \mapsto z_i, \quad Y_i \mapsto q^{-{\unun{n}}} Y_i^{(n)}, \quad \mathbf x \mapsto p 1 $$ extends to a right representation of $\ddot{\mathbf {H}}_n$ on $({\mathbb K\hskip.5pt}[z^{\pm 1}] \otimes {\mathbb K\hskip.5pt}^L)^{\otimes n}.$ \end{propos} \noindent{\bf Remark.} The normalizing factor $q^{-{\unun{n}}}$ in the map $Y_i \mapsto q^{-{\unun{n}}} Y_i^{(n)}$ above clearly can be replaced by any coefficient in ${\mathbb K\hskip.5pt}.$ The adopted choice of this factor makes $q^{-{\unun{n}}} Y_i^{(n)}$ to behave appropriately (see Proposition \ref{p:inter}) with respect to increments of $n$ by steps of the value $NL.$ \\ \noindent Let $\chi = \sum_{a=1}^L \chi(a) \epsilon_a $ be an integral weight of $\mathfrak {sl}_L.$ Let $\operatorname{U}_q({\mathfrak b}_L)^{\chi}$ be the non-unital subalgebra of $\UU_q^{\prime}(\asll_L)$ generated by the elements \begin{equation}\dot{F}_0,\dot{F}_1,\dots,\dot{F}_{L-1} \quad \text{and} \quad \dot{K}_a - q^{\chi(a) - \chi(a+1)} 1 \quad (a=1,\dots,L-1). 
\label{eq:gen} \end{equation} We define an action of $\UU_q^{\prime}(\asll_L)$ on ${\mathbb K\hskip.5pt}[z^{\pm 1}] \otimes {\mathbb K\hskip.5pt}^L$ by the obvious restriction of the action on ${\mathbb K\hskip.5pt}[z^{\pm 1}] \otimes {\mathbb K\hskip.5pt}^L \otimes {\mathbb K\hskip.5pt}^N$ defined in (\ref{eq:ul1} -- \ref{eq:ul2}). Iterating the coproduct $\Delta^-$ given in (\ref{eq:co1} -- \ref{eq:co3}) we obtain an action of $\UU_q^{\prime}(\asll_L)$ on $({\mathbb K\hskip.5pt}[z^{\pm 1}] \otimes {\mathbb K\hskip.5pt}^L)^{\otimes n}.$ \begin{propos} \label{p:inv1} Suppose $p = q^{-2L},$ and $ \nu = -\chi - 2\rho,$ where $\rho = \sum_{a=1}^{L-1} \dot{\overline{\Lambda}}_a.$ Then the action of the toroidal Hecke algebra on $({\mathbb K\hskip.5pt}[z^{\pm 1}] \otimes {\mathbb K\hskip.5pt}^L)^{\otimes n}$ defined in Proposition \ref{p:torHeckerep} leaves invariant the subspace $\operatorname{U}_q({\mathfrak b}_L)^{\chi} \left(({\mathbb K\hskip.5pt}[z^{\pm 1}] \otimes {\mathbb K\hskip.5pt}^L)^{\otimes n}\right).$ \end{propos} \begin{proof} It is clear that the multiplication by $z_i,$ and hence action of $X_i$ commutes with all generators of $\UU_q^{\prime}(\asll_L).$ \mbox{} From the intertwining property of the $R$-matrix it follows that the operators $\Tc_i$ (cf. \ref{eq:Tc}) commute with all generators of $\UU_q^{\prime}(\asll_L)$ as well. With $p = q^{-2L},$ and $ \nu = -\chi - 2\rho,$ a direct computation gives \begin{align*} &Y_n^{(n)} \dot{F}_a = \left( (q^{\chi(a)-\chi(a+1)}1 - \dot{K}_a) \dot{K}_a^{-1} (\dot{F}_a)_n (\dot{K}_a)_n + \dot{F}_a (\dot{K}_a)_n \right) Y_n^{(n)} \quad (a=1,\dots,L-1),\\ &Y_n^{(n)} \dot{F}_0 = \left( (q^{\chi(L)-\chi(1)}1 - \dot{K}_0) \dot{K}_0^{-1} (\dot{F}_0)_n (\dot{K}_0)_n + \dot{F}_0 (\dot{K}_0)_n \right) Y_n^{(n)}. \end{align*} In view of the relation $\Tc_i Y_{i+1}^{(n)} \Tc_i = q^2 Y_i^{(n)},$ and the commutativity of $\Tc_i$ with the generators of $\UU_q^{\prime}(\asll_L),$ this shows that for all $i$ the operators $Y_i^{(n)}$ leave the image of $\operatorname{U}_q({\mathfrak b}_L)^{\chi} $ invariant. \end{proof} \subsection{The quantum toroidal algebra} \label{sec:tor} Fix an integer $N\geqslant 3.$ The quantum toroidal algebra of type $\mathfrak {sl}_N,$ $\ddot{\UU},$ is an associative unital algebra over ${\mathbb K\hskip.5pt}$ with generators: $$E_{i,k},\quad F_{i,k},\quad H_{i,l},\quad K_i^{\pm 1}, \quad q^{\pm\frac12 c}, \quad \mathbf d^{\pm 1},$$ where $k\in {\mathbb Z}$, $l\in {\mathbb Z}\backslash \{0\}$ and $i=0,1,\cdots,N-1$. The generators $q^{\pm\frac12 c}$ and $\mathbf d^{\pm 1}$ are central. 
The rest of the defining relations are expressed in terms of the formal series $$E_i(z)=\sum_{k\in {\mathbb Z}}E_{i,k}z^{-k}, \quad F_i(z)=\sum_{k\in {\mathbb Z}}F_{i,k}z^{-k}, \quad K_i^{\pm}(z)=K_i^{\pm 1}\exp(\pm (q-q^{-1})\sum_{k\geqslant 1}H_{i,\pm k} z^{\mp k}),$$ as follows: \begin{gather} K_i K_i^{-1}=K_i^{-1}K_i= q^{\frac12 c} q^{-\frac12 c} = q^{-\frac12 c} q^{\frac12 c}= \mathbf d \mathbf d^{-1} = \mathbf d^{-1} \mathbf d = 1, \\ K_i^{\pm}(z)K_j^{\pm}(w)=K_j^{\pm}(w)K_i^{\pm}(z) \label{rb} \\ \theta_{- a_{ij}} (q^{-c}\mathbf d^{m_{ij}}\frac{z}{w})K_i^-(z)K_j^+(w)= \theta_{-a_{ij}} (q^{c}\mathbf d^{m_{ij}}\frac{z}{w})K_j^+(w) K_i^-(z) \label{rc} \\ K_i^{\pm}(z)E_j(w) =\theta_{\mp a_{ij}} (q^{-\frac12 c}\mathbf d^{\mp m_{ij}}w^{\pm}z^{\mp})E_j(w)K_i^+(z) \label{rd} \\ K_i^{\pm}(z)F_j(w) =\theta_{\pm a_{ij}} (q^{\frac12 c}\mathbf d^{\mp m_{ij}}w^{\pm}z^{\mp})F_j(w)K_i^+(z) \\ [E_i(z),F_j(w)]=\delta_{i,j}\frac{1}{q-q^{-1}} \{\delta(q^c\frac{w}{z})K_i^+(q^{\frac12 c}w)-\delta(q^c\frac{z}{w})K_i^ -(q^{\frac12 c}z)\} \label{re} \\ (\mathbf d^{m_{ij}}z-q^{a_{ij}}w)E_i(z)E_j(w) =(q^{a_{ij}}\mathbf d^{m_{ij}}z-w)E_j(w)E_i(z) \label{rf} \\ (\mathbf d^{m_{ij}}z-q^{-a_{ij}}w)F_i(z)F_j(w) =(q^{-a_{ij}}\mathbf d^{m_{ij}}z-w)F_j(w)F_i(z) \\ \sum_{\sigma\in {\mathfrak S}_m}\sum_{r=0}^m(-1)^r \begin{bmatrix}m\\r\end{bmatrix} E_i(z_{\sigma(1)})\cdots E_i(z_{\sigma(r)})E_j(w)E_i(z_{\sigma(r+1)})\cdots E_i(z_{\sigma(m)})=0 \label{rg} \\ \sum_{\sigma\in {\mathfrak S}_m}\sum_{r=0}^m(-1)^r \begin{bmatrix}m\\r\end{bmatrix} {} F_i(z_{\sigma(1)})\cdots F_i(z_{\sigma(r)})F_j(w)F_i(z_{\sigma(r+1)})\cdots {} F_i(z_{\sigma(m)}) = 0 \label{rg1} \end{gather} where in (\ref{rg}) and (\ref{rg1}) $i\ne j$ and $m=1-a_{ij}$.\\ \noindent In these defining relations $\delta(z) = \sum_{n = -\infty}^{\infty} z^n,$ $\theta_m(z) \in {\mathbb K\hskip.5pt}[[z]] $ is the expansion of $\frac{zq^m-1}{z-q^m},$ $a_{ij}$ are the entries of the Cartan matrix of $\widehat{{\mathfrak {sl}}}_N,$ and $m_{ij}$ are the entries of the following $N\times N$-matrix $$ M=\begin{pmatrix} 0 & -1 & 0 & \hdots & 0 & 1\\ 1 & 0 & -1 & \hdots & 0 & 0\\ 0 & 1 & 0 & \hdots & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & 0 & \hdots & 0 & -1\\ -1 & 0 & 0 & \hdots & 1 & 0 \end{pmatrix}. $$ Let $\operatorname{U}_h$ be the subalgebra of $\ddot{\UU}$ generated by the elements $E_{i,0},F_{i,0},K_i^{\pm 1}$ $(0\leqslant i < N).$ These elements satisfy the defining relations (\ref{eq:r1} -- \ref{eq:r3}) and (\ref{eq:r5} -- \ref{eq:r7}) of $\UU_q^{\prime}(\asll_N).$ Thus the following map extends to a homomorphism of algebras: \begin{equation}\UU_q^{\prime}(\asll_N) \rightarrow \operatorname{U}_h :\; E_i \mapsto E_{i,0}, \quad F_i \mapsto F_{i,0}, \quad K_i^{\pm 1} \mapsto K_i^{\pm 1}. \label{eq:Uh} \end{equation} Let $\operatorname{U}_v$ be the subalgebra of $\ddot{\UU}$ generated by the elements $E_{i,k},F_{i,k},H_{i,l},$ $K_i^{\pm 1}$ $(1\leqslant i < N; k\in \mathbb Z; l\in \mathbb Z_{\neq 0}),$ and $q^{\pm \frac12 c}, \mathbf d^{\pm 1}.$ Recall, that apart from the presentation given in Section \ref{sec:Usl}, the algebra $\UU_q^{\prime}(\asll_N)$ has the ``new presentation'' due to Drinfeld which is similar to that one of $\ddot{\UU}$ above. A proof of the isomorphism between the two presentations is announced in \cite{Drinfeld1} and given in \cite{Beck}. 
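\\ \noindent{\bf Example.} For illustration, in the smallest admissible case $N=3$ (indexing the rows and columns of $M$ by $0,1,2$) the matrices entering the relations above specialize to $$ (a_{ij}) = \begin{pmatrix} 2 & -1 & -1\\ -1 & 2 & -1\\ -1 & -1 & 2 \end{pmatrix}, \qquad (m_{ij}) = M = \begin{pmatrix} 0 & -1 & 1\\ 1 & 0 & -1\\ -1 & 1 & 0 \end{pmatrix}, $$ so that, for instance, relation (\ref{rf}) with $(i,j)=(0,1)$ reads $(\mathbf d^{-1}z-q^{-1}w)E_0(z)E_1(w) =(q^{-1}\mathbf d^{-1}z-w)E_1(w)E_0(z).$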
Let $\tilde{E}_{i,k},\tilde{F}_{i,k},\tilde{H}_{i,l},\tilde{K}_{i}^{\pm 1},$ $(1\leqslant i < N; k\in \mathbb Z; l\in \mathbb Z_{\neq 0}),$ and $ q^{\pm \frac12 \tilde{c}}$ be the generators of $\UU_q^{\prime}(\asll_N)$ in the realization of \cite{Drinfeld1}. Comparing this realization of $\UU_q^{\prime}(\asll_N)$ with the defining relations of $\ddot{\UU}$ one easily sees that the map \begin{align} \UU_q^{\prime}(\asll_N) \rightarrow \operatorname{U}_v :\; &\tilde{E}_{i,k} \mapsto \mathbf d^{ik}E_{i,k},\quad \tilde{F}_{i,k} \mapsto \mathbf d^{ik}F_{i,k},\quad \tilde{H}_{i,l} \mapsto \mathbf d^{il}H_{i,l},\label{eq:Uv}\\ &\tilde{K}_i^{\pm 1} \mapsto K_i^{\pm 1}, \quad q^{\pm \frac12 \tilde{c}} \mapsto q^{\pm \frac12 c} \nonumber \end{align} where $1 \leqslant i < N,$ extends to a homomorphism of algebras. Thus each module of $\ddot{\UU}$ carries two actions of $\UU_q^{\prime}(\asll_N)$ obtained by pull-backs through the homomorphisms (\ref{eq:Uh}) and (\ref{eq:Uv}). We will say that a module of $\ddot{\UU}$ has {\em level} $(l_v,l_h)$ provided the action of $\UU_q^{\prime}(\asll_N)$ obtained through the homomorphism (\ref{eq:Uh}) has level $l_h,$ and the action of $\UU_q^{\prime}(\asll_N)$ obtained through the homomorphism (\ref{eq:Uv}) has level $l_v.$ On such a module the central elements $q^{\pm \frac12 c}$ act as multiplications by $q^{\pm \frac12 l_v},$ and the element $K_0 K_1 \cdots K_{N-1}$ acts as the multiplication by $q^{l_h}.$ The following proposition, proved in \cite{VV1}, shows that it is sometimes possible to extend a representation of $\UU_q^{\prime}(\asll_N)$ to a representation of $\ddot{\UU}.$ \begin{propos} \label{p:shift1} Let $W$ be a module of $\UU_q^{\prime}(\asll_N).$ Suppose that there are $a,b \in q^{\mathbb Z},$ and an invertible $\tilde{\psi} \in \mathrm {End}(W)$ such that \begin{alignat}{5} &\tilde{\psi}^{-1}\tilde{E}_i(z)\tilde{\psi} = \tilde{E}_{i-1}(az),& &\tilde{\psi}^{-2}\tilde{E}_1(z)\tilde{\psi}^2 = \tilde{E}_{N-1}(bz),& \\ &\tilde{\psi}^{-1}\tilde{F}_i(z)\tilde{\psi} = \tilde{F}_{i-1}(az),& &\tilde{\psi}^{-2}\tilde{F}_1(z)\tilde{\psi}^2 = \tilde{F}_{N-1}(bz),& \\ &\tilde{\psi}^{-1}\tilde{K}_i^{\pm}(z)\tilde{\psi} = \tilde{K}_{i-1}^{\pm}(az),& \qquad&\tilde{\psi}^{-2}\tilde{K}_1^{\pm}(z)\tilde{\psi}^2 = \tilde{K}_{N-1}^{\pm}(bz),& \end{alignat} where $2\leqslant i < N.$ Then $W$ is a $\ddot{\UU}$-module with the action given by \begin{align*} & X_i(z) = \tilde{X}_i(d^iz) \qquad (1\leqslant i <N), \qquad X_0(z) = \tilde{\psi}^{-1}\tilde{X}_1(a^{-1}d^{-1}z)\tilde{\psi}, \\ & \mathbf d = d 1, \qquad q^{\frac12 c} = q^{\frac12 \tilde{c}}. 
\end{align*} where $d^N = b/a^2,$ and $X = E,F,K^{\pm}.$ \end{propos} \subsection{The Varagnolo-Vasserot duality} \label{sec:VVdual} We now briefly review, following \cite{VV1}, the Schur-type duality between the toroidal Hecke algebra $\ddot{\mathbf {H}}_n$ and the quantum toroidal algebra $\ddot{\UU}.$ Let $\operatorname M$ be a right $\ddot{\mathbf {H}}_n $-module, such that the central element $\mathbf x$ of $\ddot{\mathbf {H}}_n$ acts as the multiplication by $x \in q^{\mathbb Z}.$ The algebra $\ddot{\mathbf {H}}_n$ contains two subalgebras: $\dot{\mathbf {H}}_n^h = \langle T_i^{\pm 1} , X_j \rangle ,$ and $\dot{\mathbf {H}}_n^v = \langle T_i^{\pm 1} , Y_j \rangle$ both isomorphic to the affine Hecke algebra $\dot{\mathbf {H}}_n.$ Therefore the duality functor of Chari--Pressley \cite{CP} yields two actions of $\UU_q^{\prime}(\asll_N)$ on the linear space $ \operatorname M\otimes_{\mathbf {H}_n} ({\mathbb K\hskip.5pt}^N)^{\otimes n}.$ Here the action of the finite Hecke algebra $\mathbf {H}_n$ on $({\mathbb K\hskip.5pt}^N)^{\otimes n}$ is given by (\ref{eq:Tsi}), and $\mathbf {H}_n$ is embedded into $\ddot{\mathbf {H}}_n$ as the subalgebra generated by $T_i^{\pm 1}.$ For $i,j=1,\dots,N$ let $e_{i,j} \in \mathrm {End}({\mathbb K\hskip.5pt}^N)$ be the matrix units with respect to the basis $\mathfrak v_1,\mathfrak v_2,\dots,\mathfrak v_{N}$ (cf. Section \ref{sec:Usl}). For $i=0,1,\dots,N-1$ let $k_i = q^{e_{i,i} - e_{i+1,i+1}},$ where the indices are cyclically extended modulo $N$. For $X \in \mathrm {End}({\mathbb K\hskip.5pt}^N)$ we put $(X)_i = 1^{\otimes (i-1)}\otimes X \otimes 1^{\otimes (n-i)}.$ The functor of \cite{CP} applied to $\operatorname M$ considered as the $\dot{\mathbf {H}}_n^h$-module gives the following action of $\UU_q^{\prime}(\asll_N)$ on $ \operatorname M\otimes_{\mathbf {H}_n} ({\mathbb K\hskip.5pt}^N)^{\otimes n}:$ \begin{align} &E_i(m\otimes v) = \sum_{j=1}^n m X_j^{\delta(i=0)} \otimes (e_{i,i+1})_j (k_i)_{j+1} (k_i)_{j+2}\cdots (k_i)_{n}v, \label{eq:h1}\\ &F_i(m\otimes v) = \sum_{j=1}^n m X_j^{-\delta(i=0)} \otimes (e_{i+1,i})_j (k_i^{-1})_{1} (k_i^{-1})_{2}\cdots (k_i^{-1})_{j-1}v, \\ &K_i(m\otimes v) = m \otimes (k_i)_1(k_i)_2 \cdots (k_i)_n v. \label{eq:h3} \end{align} Here $m \in \operatorname M, v \in ({\mathbb K\hskip.5pt}^N)^{\otimes n},$ and the indices are cyclically extended modulo $N.$ Likewise, application of this functor to $\operatorname M$ considered as the $\dot{\mathbf {H}}_n^v$-module gives another action of $\UU_q^{\prime}(\asll_N)$ on $ \operatorname M\otimes_{\mathbf {H}_n} ({\mathbb K\hskip.5pt}^N)^{\otimes n}:$ \begin{align} &\hat{E}_i(m\otimes v) = \sum_{j=1}^n m Y_j^{-\delta(i=0)} \otimes (e_{i,i+1})_j (k_i)_{j+1} (k_i)_{j+2}\cdots (k_i)_{n}v, \label{eq:v1}\\ &\hat{F}_i(m\otimes v) = \sum_{j=1}^n m Y_j^{\delta(i=0)} \otimes (e_{i+1,i})_j (k_i^{-1})_{1} (k_i^{-1})_{2}\cdots (k_i^{-1})_{j-1}v, \\ &\hat{K}_i(m\otimes v) = m \otimes (k_i)_1(k_i)_2 \cdots (k_i)_n v. \label{eq:v3} \end{align} Here we put hats over the generators in order to distinguish the actions given by (\ref{eq:h1} -- \ref{eq:h3}) and (\ref{eq:v1} -- \ref{eq:v3}). 
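\\ \noindent{\bf Example.} As a minimal sanity check of (\ref{eq:h1} -- \ref{eq:h3}) (not needed in what follows), take $n=1$, so that $\mathbf {H}_1={\mathbb K\hskip.5pt}$ and the products of the operators $(k_i^{\pm 1})_j$ are empty. On $\operatorname M\otimes {\mathbb K\hskip.5pt}^N$ the formulas reduce to \begin{align*} & E_i(m\otimes \mathfrak v_j) = \delta_{j,i+1}\, m X_1^{\delta(i=0)} \otimes \mathfrak v_{i}, \\ & F_i(m\otimes \mathfrak v_j) = \delta_{j,i}\, m X_1^{-\delta(i=0)} \otimes \mathfrak v_{i+1}, \\ & K_i(m\otimes \mathfrak v_j) = q^{\delta_{j,i}-\delta_{j,i+1}}\, m\otimes \mathfrak v_j, \end{align*} with the indices read modulo $N$ (so that $\mathfrak v_0 = \mathfrak v_N$); in particular $E_0(m\otimes \mathfrak v_1) = mX_1\otimes \mathfrak v_N$ and $F_0(m\otimes \mathfrak v_N) = mX_1^{-1}\otimes \mathfrak v_1.$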
Varagnolo and Vasserot have proven, in \cite{VV1}, that $ \operatorname M\otimes_{\mathbf {H}_n} ({\mathbb K\hskip.5pt}^N)^{\otimes n}$ is a $\ddot{\UU}$-module such that the $\UU_q^{\prime}(\asll_N)$-action (\ref{eq:h1} -- \ref{eq:h3}) is the pull-back through the homomorphism (\ref{eq:Uh}), and the $\UU_q^{\prime}(\asll_N)$-action (\ref{eq:v1} -- \ref{eq:v3}) is the pull-back through the homomorphism (\ref{eq:Uv}). Let us recall here the main element of their proof. Let $\psi$ be the endomorphism of $ \operatorname M\otimes_{\mathbf {H}_n} ({\mathbb K\hskip.5pt}^N)^{\otimes n}$ defined by \begin{gather} \psi : m \otimes \mathfrak v_{\epsilon_1}\otimes \mathfrak v_{\epsilon_2}\otimes \cdots \otimes \mathfrak v_{\epsilon_n} \mapsto\label{eq:psi} \\ m X_1^{-\delta_{N,\epsilon_1}}X_2^{-\delta_{N,\epsilon_2}} \cdots X_n^{-\delta_{N,\epsilon_n}} \otimes \mathfrak v_{\epsilon_1+1}\otimes \mathfrak v_{\epsilon_2+1}\otimes \cdots \otimes \mathfrak v_{\epsilon_n+1}, \nonumber \end{gather} where $\mathfrak v_{N+1}$ is identified with $\mathfrak v_1.$ Taking into account the defining relations of $\ddot{\mathbf {H}}_n$ one can confirm that $\psi$ is well-defined. Let $\tilde{E}_{i,k},\tilde{F}_{i,k},\tilde{H}_{i,l},\tilde{K}_{i}^{\pm 1}$ $(k\in \mathbb Z;l\in \mathbb Z_{\neq 0}; 1\leqslant i < N)$ be the generators of the $\UU_q^{\prime}(\asll_N)$-action (\ref{eq:v1} -- \ref{eq:v3}) obtained from $\hat{E}_j,\hat{F}_j,\hat{K}_j^{\pm 1}$ $(0\leqslant j < N)$ by the isomorphism between the two realizations of $\UU_q^{\prime}(\asll_N)$ given in \cite{Beck}. Let $\tilde{E}_i(z), \tilde{F}_i(z), \tilde{K}^{\pm}_i(z)$ be the corresponding generating series. \begin{propos}[\cite{VV1}] \label{p:fintwist} The following relations hold in $\operatorname M\otimes_{\mathbf {H}_n} ({\mathbb K\hskip.5pt}^N)^{\otimes n}:$ \begin{alignat}{5} &{\psi}^{-1}\tilde{E}_i(z){\psi} = \tilde{E}_{i-1}(q^{-1}z),& &{\psi}^{-2}\tilde{E}_1(z){\psi}^2 = \tilde{E}_{N-1}(x^{-1}q^{N-2}z),& \\ &{\psi}^{-1}\tilde{F}_i(z){\psi} = \tilde{F}_{i-1}(q^{-1}z),& &{\psi}^{-2}\tilde{F}_1(z){\psi}^2 = \tilde{F}_{N-1}(x^{-1}q^{N-2}z),& \\ &{\psi}^{-1}\tilde{K}_i^{\pm}(z){\psi} = \tilde{K}_{i-1}^{\pm}(q^{-1}z),& \qquad&{\psi}^{-2}\tilde{K}_1^{\pm}(z){\psi}^2 = \tilde{K}_{N-1}^{\pm}(x^{-1}q^{N-2}z).& \end{alignat} Here $2\leqslant i < N.$ \end{propos} \noindent Proposition \ref{p:shift1} now implies that $\operatorname M\otimes_{\mathbf {H}_n} ({\mathbb K\hskip.5pt}^N)^{\otimes n}$ is a $\ddot{\UU}$-module, in particular, the central element $\mathbf d$ acts as the multiplication by $x^{-1/N}q,$ and the central element $q^{\frac12 c}$ acts as the multiplication by $1.$ \subsection{The action of the quantum toroidal algebra on the wedge product} In the framework of the preceding section, let $\operatorname M = ({\mathbb K\hskip.5pt}[z^{\pm 1}]\otimes {\mathbb K\hskip.5pt}^L)^{\otimes n}$ be the $\ddot{\mathbf {H}}_n$-module with the action given in Proposition \ref{p:torHeckerep}. In view of the remark made in Section \ref{sec:wedge}, the linear space $\operatorname M\otimes_{\mathbf {H}_n} ({\mathbb K\hskip.5pt}^N)^{\otimes n}$ is isomorphic to the wedge product $\wedge^nV_{\mathrm {aff}}.$ Therefore, by the Varagnolo-Vasserot duality, $\wedge^nV_{\mathrm {aff}}$ is a module of $\ddot{\UU}.$ The action of $\UU_q^{\prime}(\asll_N)$ given by (\ref{eq:h1} -- \ref{eq:h3}) coincides with the action of $\UU_q^{\prime}(\asll_N)$ defined on $\wedge^nV_{\mathrm {aff}}$ in Section \ref{sec:wedge}. 
Following the terminology of \cite{VV2}, we will call this action {\em the horizontal} action of $\UU_q^{\prime}(\asll_N)$ on $\wedge^n V_{\mathrm {aff}}.$ The formulas (\ref{eq:v1} -- \ref{eq:v3}) give another action of $\UU_q^{\prime}(\asll_N)$ on $\wedge^n V_{\mathrm {aff}},$ we will refer to this action as {\em the vertical } action. Recall, that in Section \ref{sec:wedge} an action of $\UU_q^{\prime}(\asll_L),$ commutative with the horizontal action of $\UU_q^{\prime}(\asll_N),$ was defined on $\wedge^nV_{\mathrm {aff}}.$ Recall, as well, that for each integral weight $\chi$ of $\mathfrak {sl}_L$ we have defined, in Section \ref{sec:torHecke}, the subalgebra $\operatorname{U}_q({\mathfrak b}_L)^{\chi} $ of $\UU_q^{\prime}(\asll_L).$ The $\ddot{\mathbf {H}}_n$-module structure defined in Proposition \ref{p:torHeckerep} depends on two parameters: $\nu$ which is an integral weight of $\mathfrak {sl}_L,$ and $p \in q^{\mathbb Z}.$ The same parameters thus enter into the $\ddot{\UU}$-module structure on $\wedge^n V_{\mathrm {aff}}.$ \begin{propos}\label{p:inv2} Suppose $p = q^{-2L},$ and $\nu = - \chi - 2\rho$ for an integral $\mathfrak {sl}_L$-weight $\chi.$ Then the action of $\ddot{\UU}$ on $\wedge^n V_{\mathrm {aff}}$ leaves invariant the linear subspace $\operatorname{U}_q({\mathfrak b}_L)^{\chi}\left(\wedge^n V_{\mathrm {aff}} \right).$ \end{propos} \begin{proof} It is not difficult to see, that the subalgebras $\operatorname{U}_h$ and $\operatorname{U}_v$ generate $\ddot{\UU}$ (cf. Lemma 2 in \cite{STU}). Therefore, to prove the proposition, it is enough to show, that both the horizontal and the vertical actions of $\UU_q^{\prime}(\asll_N)$ on $\wedge^n V_{\mathrm {aff}}$ leave $\operatorname{U}_q({\mathfrak b}_L)^{\chi}\left(\wedge^n V_{\mathrm {aff}} \right)$ invariant. However, the horizontal action commutes with the action of $\UU_q^{\prime}(\asll_L),$ while Proposition \ref{p:inv1} implies that the vertical action leaves $\operatorname{U}_q({\mathfrak b}_L)^{\chi}\left(\wedge^n V_{\mathrm {aff}} \right)$ invariant. 
\end{proof} \section{The actions of the quantum toroidal algebra on the Fock spaces and on irreducible integrable highest weight modules of $\operatorname{U}_q^{\prime}(\widehat{{\mathfrak {gl}}}_N)$} \label{s:toract} \subsection{A level 0 action of $\UU_q^{\prime}(\asll_N)$ on the Fock space} Let $\pi^v_{(n)} : \UU_q^{\prime}(\asll_N) \rightarrow \mathrm {End}(\wedge^n V_{\mathrm {aff}})$ be the map defining the vertical action of $\UU_q^{\prime}(\asll_N)$ on the wedge product $\wedge^nV_{\mathrm {aff}}.$ In accordance with (\ref{eq:v1} -- \ref{eq:v3}), for $f\in ({\mathbb K\hskip.5pt}[z^{\pm 1}]\otimes {\mathbb K\hskip.5pt}^L)^{\otimes n}$ and $v \in ({\mathbb K\hskip.5pt}^N)^{\otimes n }$ we have \begin{align} &\pi^v_{(n)}(E_i)\cdot \wedge(f\otimes v) = \wedge \sum_{j=1}^n (q^{-\unun{n}}Y^{(n)}_j)^{-\delta(i=0)} f \otimes (e_{i,i+1})_j (k_i)_{j+1} (k_i)_{j+2}\cdots (k_i)_{n}v, \label{eq:pi1}\\ &\pi^v_{(n)}(F_i)\cdot \wedge(f\otimes v) = \wedge \sum_{j=1}^n (q^{-\unun{n}}Y^{(n)}_j)^{\delta(i=0)} f \otimes (e_{i+1,i})_j (k_i^{-1})_{1} (k_i^{-1})_{2}\cdots (k_i^{-1})_{j-1}v,\\ &\pi^v_{(n)}(K_i)\cdot \wedge(f\otimes v) = \wedge f \otimes (k_i)_1(k_i)_2 \cdots (k_i)_n v, \label{eq:pi3} \end{align} where we denote by $\wedge$ the canonical map from $V_{\mathrm {aff}}^{\otimes n}$ $ = $ $ ({\mathbb K\hskip.5pt}[z^{\pm 1}]\otimes {\mathbb K\hskip.5pt}^L)^{\otimes n} \otimes ({\mathbb K\hskip.5pt}^N)^{\otimes n }$ to $\wedge^n V_{\mathrm {aff}}.$ In this section, for each $M\in \mathbb Z,$ we define a level 0 action of $\UU_q^{\prime}(\asll_N)$ on the Fock space ${\mathcal F}_M.$ Informally, this action arises as the limit $n\rightarrow \infty$ of the vertical action (\ref{eq:pi1} -- \ref{eq:pi3}) on the wedge product. In parallel with the finite case, the Fock space, thus admits two actions of $\UU_q^{\prime}(\asll_N):$ the level $L$ action defined in Section \ref{sec:UF} as the inductive limit of the horizontal action, and an extra action with level zero. We start by introducing a grading on ${\mathcal F}_M.$ To facilitate this, we adopt the following notational convention. For each integer $k$ we define the unique triple $\overline{k},\dot{k},\underline{k},$ where $\overline{k} \in \{1,2,\dots ,N\},$ $\dot{k} \in \{1,2,\dots ,L\},$ $\underline{k} \in \mathbb Z$ by $$ k = \overline{k} - N( \dot{k} + L \underline{k}) .$$ Then (cf. Section \ref{sec:nowedges}) we have $u_k = z^{\underline{k}}\mathfrak e_{\dot{k}}\mathfrak v_{\overline{k}}.$ The Fock space ${\mathcal F}_M$ has a basis formed by normally ordered semi-infinite wedges $ u_{k_1}\wedge u_{k_2} \wedge \cdots $ where the decreasing sequence of momenta $k_1,k_2,\dots$ satisfies the asymptotic condition $k_i = M-i+1$ for $i\gg 1.$ Let $o_1,o_2,\dots$ be the sequence of momenta labeling the vacuum vector $|M\rangle$ of ${\mathcal F}_M,$ i.e.: $o_i = M-i+1$ for all $i \geqslant 1.$ Define the degree of a semi-infinite normally ordered wedge by \begin{equation} \deg u_{k_1}\wedge u_{k_2} \wedge \cdots = \sum_{i\geqslant 1} \underline{o_i} - \underline{k_i}. 
\label{eq:sideg}\end{equation} Let ${\mathcal F}_M^d$ be the homogeneous component of ${\mathcal F}_M$ of degree $d.$ Clearly, the asymptotic condition $k_i = M-i+1$ $(i\gg 1)$ implies that $$ {\mathcal F}_M = \bigoplus_{d=0}^{\infty} {\mathcal F}_M^d.$$ Let $s \in \{0,1,\dots,NL-1\}$ be defined from $M\equiv s\bmod NL.$ For a non-negative integer $l$ we define the linear subspace $V_{M,s+lNL}$ of $\wedge^{s+lNL}V_{\mathrm {aff}}$ by \begin{equation} V_{M,s+lNL} = \bigoplus_{\underline{k_{s+lNL}} \leqslant \underline{o_{s+lNL}}} {\mathbb K\hskip.5pt} u_{k_1}\wedge u_{k_2}\wedge \cdots \wedge u_{k_{s+lNL}}, \label{eq:VM} \end{equation} where the wedges in the right-hand side are assumed to be normally ordered. For $s=l=0$ we put $V_{M,s+lNL} = {\mathbb K\hskip.5pt}.$ The vector space (\ref{eq:VM}) has a grading similar to that of the Fock space. Now the degree of a normally ordered wedge is defined as \begin{equation} \deg u_{k_1}\wedge u_{k_2} \wedge \cdots\wedge u_{k_{s+lNL}} = \sum_{i=1}^{s+lNL} \underline{o_i} - \underline{k_i}. \label{eq:fdeg}\end{equation} Note that this degree is necessarily a non-negative integer since $k_1 > k_2 > \cdots > k_{s+lNL}$ and $\underline{k_{s+lNL}} \leqslant \underline{o_{s+lNL}}$ imply $\underline{k_i} \leqslant \underline{o_i}$ for all $i=1,2,\dots,s+lNL.$ Let $V_{M,s+lNL}^d$ be the homogeneous component of $V_{M,s+lNL}$ of degree $d.$ For non-negative integers $d$ and $l$ introduce the following linear map: \begin{equation} \varrho_l^d : V_{M,s+lNL}^d \rightarrow {\mathcal F}_M^d \: : \: w \mapsto w\wedge | M-s-lNL \rangle. \label{eq:rol}\end{equation} The proof of the following proposition is straightforward (cf. Proposition 16 in \cite{STU}, or Proposition 3.3 in \cite{U}). \begin{propos} \label{p:rho} Suppose $l \geqslant d.$ Then $\varrho_l^d$ is an isomorphism of vector spaces. \end{propos} \noindent In view of this proposition, it is clear that for non-negative integers $d,l,m,$ such that $d \leqslant l < m,$ the linear map \begin{equation} \varrho_{l,m}^d : V_{M,s+lNL}^d \rightarrow V_{M,s+mNL}^d \: : \: w \mapsto w\wedge u_{M-s-lNL}\wedge u_{M-s-lNL-1}\wedge \cdots \wedge u_{M-s-mNL+1} \label{eq:rolm}\end{equation} is an isomorphism of vector spaces as well. Now let us return to the vertical action $\pi^v_{(n)}$ of $\UU_q^{\prime}(\asll_N)$ on $\wedge^n V_{\mathrm {aff}} $ given by (\ref{eq:pi1} -- \ref{eq:pi3}). \begin{propos} For each $d=0,1,\dots$ the subspace $ V_{M,s+lNL}^d \subset \wedge^{s+lNL} V_{\mathrm {aff}} $ is invariant with respect to the action $\pi^v_{(s+lNL)}.$ \end{propos} \begin{proof} Let $n=s+lNL,$ and let us identify $V_{\mathrm {aff}}^{\otimes n}$ with ${\mathbb K\hskip.5pt}[z_1^{\pm 1},\dots,z_n^{\pm 1}]\otimes ({\mathbb K\hskip.5pt}^L)^{\otimes n} \otimes ({\mathbb K\hskip.5pt}^N)^{\otimes n}$ by the isomorphism $$ z^{m_1}\mathfrak e_{a_1}\mathfrak v_{\epsilon_1} \otimes \cdots \otimes z^{m_n}\mathfrak e_{a_n}\mathfrak v_{\epsilon_n} \mapsto z_1^{m_1}\cdots z_n^{m_n} \mathfrak e_{a_1} \cdots \mathfrak e_{a_n} \mathfrak v_{\epsilon_1} \cdots \mathfrak v_{\epsilon_n}. 
$$ Then $V_{M,s+lNL}$ is the image, with respect to the quotient map $\wedge : V_{\mathrm {aff}}^{\otimes n} \rightarrow \wedge^n V_{\mathrm {aff}},$ of the subspace \begin{equation} (z_1 \cdots z_n)^{\underline{o_n}} {\mathbb K\hskip.5pt}[z_1^{-1},\dots,z_n^{- 1}]\otimes ({\mathbb K\hskip.5pt}^L)^{\otimes n} \otimes ({\mathbb K\hskip.5pt}^N)^{\otimes n} \subset V_{\mathrm {aff}}^{\otimes n}, \label{eq:ss}\end{equation} while the grading on $V_{M,s+lNL}$ is induced from the grading of (\ref{eq:ss}) by eigenvalues of the operator $D = z_1\frac{\partial}{\partial z_1} + \cdots + z_n\frac{\partial}{\partial z_n}.$ The operators $Y_i^{(n)}$ leave $(z_1 \cdots z_n)^{\underline{o_n}} {\mathbb K\hskip.5pt}[z_1^{-1},\dots,z_n^{- 1}]\otimes ({\mathbb K\hskip.5pt}^L)^{\otimes n} $ invariant, and commute with $D.$ Now (\ref{eq:pi1} -- \ref{eq:pi3}) imply the statement of the proposition. \end{proof} \begin{propos} \label{p:inter} Let $0 \leqslant d \leqslant l,$ let $n=s+lNL,$ and let $X$ be any of the generators $E_i,F_i,K_i^{\pm 1}$ $(0\leqslant i < N)$ of $\UU_q^{\prime}(\asll_N).$ Then the following intertwining relation holds for all $w \in V_{M,s+lNL}^d:$ \begin{equation} \pi^v_{(n+NL)}(X) \cdot \varrho_{l,l+1}^d (w) = \varrho_{l,l+1}^d \left(\pi^v_{(n)}(X)\cdot w \right). \label{eq:inter}\end{equation} Consequently, for $0 \leqslant d \leqslant l < m$ the map $\varrho_{l,m}^d$ defined in {\em (\ref{eq:rolm})} is an isomorphism of $\UU_q^{\prime}(\asll_N)$-modules. \end{propos} \begin{proof} The proof is based, in particular, on Lemma \ref{l:ml}, to state which we introduce the following notation. For $\mathbf m = (m_1,m_2,\dots,m_n) \in \mathbb Z^n,$ and $\mathbf a = (a_1,a_2,\dots,a_n) \in \{1,2,\dots,L\}^n$ let $$ \zeta_i(\mathbf m,\mathbf a) = p^{m_i} q^{\nu(L+1-a_i) + \mu_i(\mathbf m,\mathbf a)} \qquad (i=1,2,\dots,n) $$ where $p, \nu$ are the parameters of the representation of $\ddot{\mathbf {H}}_n$ introduced in Section \ref{sec:torHecke}, and $\mu_i(\mathbf m,\mathbf a) = - \#\{j < i | m_j < m_i, a_j = a_i\} + \#\{j < i | m_j \geqslant m_i, a_j = a_i\} + \#\{j > i | m_j > m_i, a_j = a_i\} - \#\{j > i | m_j \leqslant m_i, a_j = a_i\}.$ \begin{lemma}\label{l:ml} For $k =1,2,\dots$ consider the following monomial $$ f = z_1^{m_1} z_2^{m_2} \dots z_{n+k}^{m_{n+k}} \otimes \mathfrak e_{a_1}\mathfrak e_{a_2} \cdots \mathfrak e_{a_{n+k}} \in {\mathbb K\hskip.5pt}[z_1^{\pm 1},\dots,z_{n+k}^{\pm 1}]\otimes ({\mathbb K\hskip.5pt}^L)^{\otimes(n+k)}.$$ Assume that $m_1,m_2,\dots,m_n < m_{n+1} = m_{n+2} = \cdots = m_{n+k} =: m,$ and that $a_{n+i} \leqslant a_{n+j}$ for $ 1 \leqslant i < j \leqslant k.$ For $j \in \{ 1,2,\dots,L\}$ put $\overline{n}(j) = \#\{ \: i\: | \: a_{n+i} = j, 1\leqslant i \leqslant k\}.$ Define the linear subspaces $\mathcal K_{n,k}^m,\mathcal L_{n,k}^m \subset {\mathbb K\hskip.5pt}[z_1^{\pm 1},\dots,z_{n+k}^{\pm 1}]\otimes ({\mathbb K\hskip.5pt}^L)^{\otimes(n+k)}$ as follows: \begin{align*} &\mathcal K_{n,k}^m = {\mathbb K\hskip.5pt}\{ z_1^{m_1^{\prime}}\cdots z_{n+k}^{m_{n+k}^{\prime}} \otimes \mathfrak e \: | \: \mathfrak e \in ({\mathbb K\hskip.5pt}^L)^{n+k}; m_1^{\prime},\dots,m_{n+k}^{\prime} \leqslant m; \#\{ m_i^{\prime} | m_i^{\prime} = m \} < k\}, \\ &\mathcal L_{n,k}^m = {\mathbb K\hskip.5pt}\{ z_1^{m_1^{\prime}}\cdots z_{n+k}^{m_{n+k}^{\prime}} \otimes \mathfrak e_{b_1}\cdots \mathfrak e_{b_{n+k}} \: |\: m_1^{\prime},\dots,m_{n}^{\prime} < m ; m_{n+1}^{\prime},\dots,m_{n+k}^{\prime} = m; \\ & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \exists j < 
a_{n+k} \:\text{{\em s.t.}}\: \#\{\: i \: | b_{n+i} = j, 1 \leqslant i \leqslant k\} > \overline{n}(j)\}. \end{align*} Then \begin{align*} &\left( Y_i^{(n+k)} \right)^{\pm 1}(f) \equiv \zeta_i(\mathbf m,\mathbf a)^{\pm 1} f \:\bmod \left( \mathcal K_{n,k}^m + \mathcal L_{n,k}^m \right) \quad (i=n+1,n+2,\dots,n+k), \\ &\left( Y_i^{(n+k)} \right)^{\pm 1}(f) \equiv q^{\pm \overline{n}(a_i)} \left( Y_i^{(n)} \right)^{\pm 1}(f) \: \bmod \left( \mathcal K_{n,k}^m + \mathcal L_{n,k}^m \right) \quad (i=1,2,\dots,n). \end{align*} Here $\mathbf m = (m_1,\dots,m_{n+k}),$ $\mathbf a = (a_1,\dots,a_{n+k}),$ and in the right-hand side of the last equation $\left( Y_i^{(n)} \right)^{\pm 1}$ act on the first $n$ factors of the monomial $f.$ \end{lemma} \noindent A proof of the lemma is given in \cite{TU} for $L=1.$ A proof for general $L$ is quite similar and will be omitted here. Let $w$ be a normally ordered wedge from $V_{M,n}^d,$ and let $\bar{w} = \varrho_{l,l+1}^d(w).$ The vector $\bar{w}$ is a normally ordered wedge from $V_{M,n+NL}^d,$ we have \begin{equation} \bar{w} = u_{k_1}\wedge u_{k_2} \wedge \cdots \wedge u_{k_{n+NL}} = \wedge (f \otimes v), \end{equation} where \begin{align} & f = (z_1^{\underline{k_1}}z_2^{\underline{k_2}} \cdots z_n^{\underline{k_n}})(z_{n+1} \cdots z_{n+NL})^{m}\otimes \label{eq:fff}\\ & \qquad \qquad \qquad \qquad\otimes (\mathfrak e_{\dot{k}_1}\mathfrak e_{\dot{k}_2} \cdots \mathfrak e_{\dot{k}_n})\underbrace{(\mathfrak e_1 \cdots \mathfrak e_1)}_{ N \: {\mathrm {times}}}\underbrace{(\mathfrak e_2 \cdots \mathfrak e_2)}_{ N \:{\mathrm {times}}} \dots \underbrace{(\mathfrak e_L \cdots \mathfrak e_L)}_{ N \:{\mathrm {times}}}, \nonumber \\ & v = (\mathfrak v_{\overline{k_1}}\mathfrak v_{\overline{k_2}} \cdots \mathfrak v_{\overline{k_n}})(\mathfrak v_N \mathfrak v_{N-1} \underbrace{ \cdots \mathfrak v_1 ) \dots (\mathfrak v_N \mathfrak v_{N-1} }_{ L \: {\mathrm {copies}}} \cdots \mathfrak v_1) \in ({\mathbb K\hskip.5pt}^N)^{\otimes (n+NL)}, \end{align} and $m = \underline{o_{n+1}} = \underline{o_{n+2}} = \cdots =\underline{o_{n+NL}}.$ The monomial $f$ given by (\ref{eq:fff}) satisfies the assumptions of Lemma \ref{l:ml} with $k=NL,$ and $\overline{n}(j) = N$ for all $j \in \{1,2,\dots,L\}.$ Let $\mathcal K_{n,NL}^m$ and $\mathcal L_{n,NL}^m$ be the corresponding subspaces of ${\mathbb K\hskip.5pt}[z_1^{\pm 1},\dots,z_{n+NL}^{\pm 1}]\otimes ({\mathbb K\hskip.5pt}^L)^{\otimes(n+NL)}.$ \begin{lemma} \label{l:ml1} Let $y \in ({\mathbb K\hskip.5pt}^N)^{\otimes (n+NL)},$ and let $f_1 \in \mathcal K_{n,NL}^m,$ $f_2 \in \mathcal L_{n,NL}^m. $ Then \begin{align} &\wedge( f_1 \otimes y ) \in \oplus_{d^{\prime} > l} V_{M,n+NL}^{d^{\prime}}, \tag{i} \\ &\wedge( f_2 \otimes y ) = 0. \tag{ii} \end{align} \end{lemma} \begin{proof} \mbox{} \\ This lemma is the special case ($b=L$ and $c=N$) of Lemma \ref{l:mlbc}. See the proof of Lemma \ref{l:mlbc}. \end{proof} Now we continue the proof of the proposition. \mbox{} From the definitions (\ref{eq:pi1} -- \ref{eq:pi3}) and Lemmas \ref{l:lemma}, \ref{l:ml} and \ref{l:ml1}, it follows that (\ref{eq:inter}) holds modulo $\oplus_{d^{\prime} > d} V_{M,n+NL}^{d^{\prime}}.$ However, the both sides of (\ref{eq:inter}) belong to $V_{M,n+NL}^{d}$ since the action of $\UU_q^{\prime}(\asll_N)$ preserves the degree $d.$ Hence (\ref{eq:inter}) holds exactly. 
\end{proof} Now we are ready to give the definition of the level 0 action of $\UU_q^{\prime}(\asll_N)$ on the Fock space ${\mathcal F}_M.$ \begin{defin} \label{dp:def} Let $0 \leqslant d \leqslant l.$ We define a $\UU_q^{\prime}(\asll_N)$-action $\pi^v : \UU_q^{\prime}(\asll_N) \rightarrow \mathrm {End}({\mathcal F}_M^d)$ as $$ \pi^v(X) = \varrho_l^d \circ \pi^v_{(s+lNL)}(X) \circ (\varrho_l^d)^{-1} \qquad (X \in \UU_q^{\prime}(\asll_N) ).$$ By Proposition \ref{p:inter} this definition does not depend on the choice of $l$ as long as $l \geqslant d.$ \end{defin} Thus a $\UU_q^{\prime}(\asll_N)$-action is defined on each homogeneous component ${\mathcal F}_M^d,$ and hence on the entire Fock space ${\mathcal F}_M.$ \subsection{The action of the quantum toroidal algebra on the Fock space} In Section \ref{sec:UF} we defined a level $L$ action of $\UU_q^{\prime}(\asll_N)$ on ${\mathcal F}_M.$ Let us denote by $\pi^h$ the corresponding map $\UU_q^{\prime}(\asll_N) \rightarrow \mathrm {End}({\mathcal F}_M).$ We refer to $\pi^h$ as the horizontal $\UU_q^{\prime}(\asll_N)$-action on the Fock space. In the preceding section we defined another -- level 0 -- action $ \pi^v : \UU_q^{\prime}(\asll_N) \rightarrow \mathrm {End}({\mathcal F}_M).$ We call $\pi^v$ the vertical $\UU_q^{\prime}(\asll_N)$-action. Note that for $i=1,2,\dots,N-1$ we have \begin{equation*} \pi^h(E_i) = \pi^v(E_i), \quad \pi^h(F_i) = \pi^v(F_i), \quad \pi^h(K_i) = \pi^v(K_i), \end{equation*} i.e. the restrictions of $\pi^h$ and $\pi^v$ to the subalgebra $\operatorname{U}_q(\mathfrak {sl}_N)$ coincide. In this section we show that $\pi^h$ and $\pi^v$ extend to an action $\ddot{\pi}$ of the quantum toroidal algebra $\ddot{\UU},$ such that $\pi^h$ is the pull-back of $\ddot{\pi}$ through the homomorphism (\ref{eq:Uh}), and $\pi^v$ is the pull-back of $\ddot{\pi}$ through the homomorphism (\ref{eq:Uv}). The definition of $\ddot{\pi}$ is based on Proposition \ref{p:shift1}. Let $\psi_n : \wedge^nV_{\mathrm {aff}} \rightarrow \wedge^nV_{\mathrm {aff}}$ be the map (\ref{eq:psi}) for $\operatorname M = ({\mathbb K\hskip.5pt}[z^{\pm 1}]\otimes {\mathbb K\hskip.5pt}^L)^{\otimes n}.$ That is \begin{gather} \psi_n : z^{m_1}\mathfrak e_{a_1}\mathfrak v_{\epsilon_1}\wedge z^{m_2}\mathfrak e_{a_2}\mathfrak v_{\epsilon_2}\wedge \cdots \wedge z^{m_n}\mathfrak e_{a_n}\mathfrak v_{\epsilon_n} \mapsto \label{eq:psin}\\ z^{m_1-\delta_{\epsilon_1,N}}\mathfrak e_{a_1}\mathfrak v_{\epsilon_1+1}\wedge z^{m_2-\delta_{\epsilon_2,N}}\mathfrak e_{a_2}\mathfrak v_{\epsilon_2+1}\wedge \cdots \wedge z^{m_n-\delta_{\epsilon_n,N}}\mathfrak e_{a_n}\mathfrak v_{\epsilon_n+1}, \nonumber \end{gather} where $\mathfrak v_{N+1} $ is identified with $\mathfrak v_1.$ Let ${\mathcal F} = \oplus_M {\mathcal F}_M.$ We define a semi-infinite analogue $\psi_{\infty} \in \mathrm {End}({\mathcal F})$ of $\psi_n$ as follows. For $m\in \mathbb Z$ we let $$ \psi_{\infty} |-mNL \rangle = z^{m-1}\mathfrak e_1\mathfrak v_1\wedge z^{m-1}\mathfrak e_2\mathfrak v_1\wedge \cdots \wedge z^{m-1}\mathfrak e_L\mathfrak v_1 \wedge |-mNL \rangle.$$ Any vector in ${\mathcal F}$ can be presented in the form $v\wedge |-mNL\rangle ,$ where $v \in \wedge^nV_{\mathrm {aff}}$ for suitable $n$ and $m.$ Then we set $$ \psi_{\infty}( v\wedge |-mNL\rangle ) = \psi_n(v)\wedge \psi_{\infty} |-mNL\rangle.$$ By using the normal ordering rules it is not difficult to verify that $\psi_{\infty}$ is well-defined (does not depend on the choice of $m$). 
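\\ \noindent{\bf Example.} As a small check (stated for orientation only), in the notation $u_k = z^{\underline{k}}\mathfrak e_{\dot{k}}\mathfrak v_{\overline{k}}$ the map (\ref{eq:psin}) sends each wedge factor $$ u_k \;\mapsto\; \begin{cases} u_{k+1}, & \overline{k} < N, \\ u_{k+1+N(L-1)}, & \overline{k} = N, \end{cases} $$ since for $\overline{k}<N$ the triple of $k+1$ is $(\overline{k}+1,\dot{k},\underline{k}),$ while for $\overline{k}=N$ the vector $z^{\underline{k}-1}\mathfrak e_{\dot{k}}\mathfrak v_1$ has momentum $1-N(\dot{k}+L(\underline{k}-1)) = k+1+N(L-1).$ In particular $\psi_n$ raises every momentum, in agreement with the shift ${\mathcal F}_M \rightarrow {\mathcal F}_{M+L}$ recorded in the next paragraph.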
Note that $\psi_{\infty} : {\mathcal F}_M \rightarrow {\mathcal F}_{M+L},$ and that $\psi_{\infty}$ is invertible. Moreover, \begin{equation} \psi_{\infty}^{-1} \pi^h(X_i) \psi_{\infty} = \pi^h(X_{i-1}) \qquad (i=0,1,\dots,N-1), \label{eq:cyc} \end{equation} where $X=E,F,K$ and the indices are cyclically extended modulo $N.$ \begin{propos} \label{p:twist} For each vector $w \in {\mathcal F}_M$ we have \begin{eqnarray} {\psi_{\infty}^{-1}}\pi^v(\tilde{X}_i(z)){\psi_{\infty}}(w) & = & \pi^v(\tilde{X}_{i-1}(q^{-1}z))(w), \qquad (2\leqslant i \leqslant N-1), \label{eq:tw1} \\ {\psi_{\infty}^{-2}}\pi^v(\tilde{X}_1(z)){\psi_{\infty}^2}(w) & = & \pi^v(\tilde{X}_{N-1}(p^{-1}q^{N-2}z))(w), \label{eq:tw2} \end{eqnarray} where $X = E,F,K^{\pm}.$ \end{propos} \begin{proof} To prove the proposition we use the following lemmas. \begin{lemma}\label{l:mlbc} Let $0 \leqslant d \leqslant l,$ $n=s+lNL,$ where $M\equiv s\bmod NL,$ $s\in \{0,1,\dots,NL-1\}.$ Let $w= z_1^{\underline{k_1}}\mathfrak e_{\dot{k}_1}\mathfrak v_{\overline{k_1}} \wedge z_2^{\underline{k_2}}\mathfrak e_{\dot{k}_2}\mathfrak v_{\overline{k_2}} \wedge \cdots \wedge z_n^{\underline{k_n}}\mathfrak e_{\dot{k}_n}\mathfrak v_{\overline{k_n}}$ be a normally ordered wedge from $V_{M,n}^d,$ let $b,c$ be integers such that $1 \leqslant b \leqslant L,$ $1 \leqslant c \leqslant N$. We define $f \in {\mathbb K\hskip.5pt}[z_1^{\pm 1},\dots,z_{n+bc}^{\pm 1}]\otimes ({\mathbb K\hskip.5pt}^L)^{\otimes(n+bc)}$ as follows. \begin{align} & f = (z_1^{\underline{k_1}}z_2^{\underline{k_2}} \cdots z_n^{\underline{k_n}})(z_{n+1} \cdots z_{n+bc})^{m}\otimes \label{eq:ffff}\\ & \qquad \qquad \qquad \qquad\otimes (\mathfrak e_{\dot{k}_1}\mathfrak e_{\dot{k}_2} \cdots \mathfrak e_{\dot{k}_n})\underbrace{(\mathfrak e_1 \cdots \mathfrak e_1)}_{ c \: {\mathrm {times}}}\underbrace{(\mathfrak e_2 \cdots \mathfrak e_2)}_{ c \:{\mathrm {times}}} \dots \underbrace{(\mathfrak e_b \cdots \mathfrak e_b)}_{ c \:{\mathrm {times}}}, \nonumber \end{align} where $m = \underline{o_{n+1}} = \underline{o_{n+2}} = \cdots =\underline{o_{n+bc}}.$ The monomial $f$ given by (\ref{eq:ffff}) satisfies the assumptions of Lemma \ref{l:ml} with $k=bc,$ and $\overline{n}(j) = c$ for all $j \in \{1,2,\dots,b\}.$ Let $\mathcal K_{n,bc}^m$ and $\mathcal L_{n,bc}^m$ be the corresponding subspaces of ${\mathbb K\hskip.5pt}[z_1^{\pm 1},\dots,z_{n+bc}^{\pm 1}]\otimes ({\mathbb K\hskip.5pt}^L)^{\otimes(n+bc)}.$ Let $y= y^{(n)} \otimes ( \mathfrak v_{\epsilon_{1}} \otimes \cdots \otimes \mathfrak v_{\epsilon_{bc}}) \in ({\mathbb K\hskip.5pt}^N)^{\otimes n} \otimes ({\mathbb K\hskip.5pt}^N)^{\otimes bc} $ such that $N-c+1 \leqslant \epsilon_{i} \leqslant N$ $(1\leqslant i \leqslant bc)$, and let $f_1 \in \mathcal K_{n,bc}^m,$ $f_2 \in \mathcal L_{n,bc}^m. $ Then \begin{align} &\wedge( f_1 \otimes y ) \in \oplus_{d^{\prime} > l} V_{M,n+bc}^{d^{\prime}}, \tag{i} \\ &\wedge( f_2 \otimes y ) = 0. 
\tag{ii} \end{align} \end{lemma} \begin{proof} \mbox{} \\ (i) The vector $\wedge( f_1 \otimes y )$ is a linear combination of normally ordered wedges $$ u_{(k_i)} = u_{k_1} \wedge u_{k_2} \wedge \cdots \wedge u_{k_n} \wedge u_{k_{n+1}} \wedge \cdots \wedge u_{k_{n+bc}} $$ such that $\underline{k_{n+1}} < \underline{o_{n+1}}.$ This inequality implies that $\deg u_{(k_i)} \geqslant l+1.$ \\ \noindent (ii) It is sufficient to show that \begin{equation} \wedge( \mathfrak e_{a_1}\mathfrak e_{a_2}\cdots\mathfrak e_{a_{bc}} \otimes \mathfrak v_{\epsilon_{1}} \mathfrak v_{\epsilon_{2}} \cdots \mathfrak v_{\epsilon_{bc}} )\in \wedge^{bc}V_{\mathrm {aff}} \label{lemw} \end{equation} is zero whenever there is $J \in \{1,2,\dots,b\}$ such that $\#\{ i \: | \: 1\leqslant i\leqslant bc ,\; a_i = J \} > c.$ Using the normal ordering rules (\ref{eq:n1} -- \ref{eq:n4}) one can write (\ref{lemw}) as a linear combination of the normally ordered wedges $\mathfrak e_{a^{\prime}_1}\mathfrak v_{\epsilon^{\prime}_1}\wedge \mathfrak e_{a^{\prime}_2}\mathfrak v_{\epsilon^{\prime}_2} \wedge \cdots \wedge \mathfrak e_{a^{\prime}_{bc}}\mathfrak v_{\epsilon^{\prime}_{bc}}$. The $\operatorname{U}_q(\mathfrak {sl}_N)$ and $\operatorname{U}_q(\mathfrak {sl}_L)$-weights of the both sides in the normal ordering rules are equal. This implies that $\# \{ i \: | \: a_i ^{\prime } =J \} >c $ and $\# \{ j \: | \: \exists i , \; \epsilon^{\prime}_i =j , \; a_i ^{\prime } =J \} \leqslant c$. Therefore, there exists some $i$ such that $a_i ^{\prime }= a_{i+1} ^{\prime }$ and $\epsilon^{\prime}_i= \epsilon^{\prime}_{i+1}$. On the other hand, we know that $\mathfrak e_{a_i ^{\prime }}\mathfrak v_{\epsilon^{\prime}_i} \wedge \mathfrak e_{a_i ^{\prime }}\mathfrak v_{\epsilon^{\prime}_i} =0$. This implies that $\wedge( f_2 \otimes y ) = 0.$ \end{proof} \begin{lemma} \label{l:lt} Suppose $d$ and $l$ are integers such that $0 \leqslant d \leqslant l.$ Let $n= s + lNL,$ where $s \in \{0,1,\dots,NL-1\}$ is defined from $M \equiv s \bmod NL.$ Let $m$ be the integer such that $M-s-lNL = -mNL.$ For $1 \leqslant b \leqslant L$ we put \begin{align*} &v_{b,N} = z^m\mathfrak e_1\mathfrak v_N\wedge z^m\mathfrak e_2\mathfrak v_N\wedge \cdots \wedge z^m\mathfrak e_b\mathfrak v_N,\\ &v_{b,N-1} = z^m\mathfrak e_1\mathfrak v_N\wedge z^m\mathfrak e_1\mathfrak v_{N-1}\wedge z^m\mathfrak e_2\mathfrak v_N\wedge z^m\mathfrak e_2\mathfrak v_{N-1}\wedge \cdots \wedge z^m\mathfrak e_b\mathfrak v_N\wedge z^m\mathfrak e_b\mathfrak v_{N-1}. \end{align*} Assume $v \in V^d_{M,s+lNL}.$ Then \begin{align} &\pi^v_{(n+b)}(\tilde{X}_i(z))(v\wedge v_{b,N}) = \pi^v_{(n)}(\tilde{X}_i(z))(v)\wedge v_{b,N} , \label{eq:lt1}\\ &\pi^v_{(n+2b)}(\tilde{X}_{N-1}(z))(v\wedge v_{b,N-1}) = \pi^v_{(n)}(\tilde{X}_{N-1}(z))(v) \wedge v_{b,N-1} \label{eq:lt2} \end{align} Here $1\leqslant i \leqslant N-2.$ \end{lemma} For the proof, see the appendix. \noindent Retaining the notations introduced in the statement of the above lemma, we continue the proof of the proposition. We may assume that $w \in {\mathcal F}_M^d.$ Then, by Proposition \ref{p:rho}, $w = v\wedge |-mNL\rangle,$ where $v \in V_{M,s+lNL}^d.$ By Definition \ref{dp:def}, for $ 2 \leqslant i \leqslant N-1$ we have \begin{equation} \pi^v(\tilde{X}_{i-1}(q^{-1}z))(v\wedge |-mNL\rangle) = \pi_{(n)}^v(\tilde{X}_{i-1}(q^{-1}z))(v )\wedge |-mNL\rangle. 
\label{eq:inftwist1}\end{equation} The definition of $\psi_{\infty}$ yields $$ |-mNL\rangle = v_{L,N}\wedge \psi_{\infty}^{-1}|-mNL\rangle,$$ where $v_{L,N}$ is defined in the statement of Lemma \ref{l:lt}. Applying (\ref{eq:lt1}) in this lemma, we have $$\pi^v_{(n+L)}(\tilde{X}_{i-1}(q^{-1}z))(v\wedge v_{L,N}) = \pi^v_{(n)}(\tilde{X}_{i-1}(q^{-1}z))(v)\wedge v_{L,N}. $$ Taking this and Proposition \ref{p:fintwist} into account, we find that the right-hand side of (\ref{eq:inftwist1}) equals $$ \psi_{n+L}^{-1}\pi^v_{(n+L)}(\tilde{X}_i(z))\psi_{n+L}(v\wedge v_{L,N})\wedge \psi_{\infty}^{-1}|-mNL\rangle, $$ which in turn is equal, by definition of $\psi_{\infty},$ to \begin{equation} \psi_{\infty}^{-1} \left(\pi^v_{(n+L)}(\tilde{X}_i(z))\psi_{n+L}(v\wedge v_{L,N})\wedge |-mNL\rangle \right). \label{eq:iftwist2}\end{equation} It is clear that $\psi_{n+L}(v\wedge v_{L,N}) \in V^{d^{\prime}}_{M+L,n+L}$ for some non-negative integer $d^{\prime}.$ Choosing now $m$ large enough, or, equivalently, $l$ large enough (cf. the statement of Lemma \ref{l:lt}), we have by Definition \ref{dp:def}: $$ \pi^v_{(n+L)}(\tilde{X}_i(z))\psi_{n+L}(v\wedge v_{L,N})\wedge |-mNL\rangle = \pi^v(\tilde{X}_i(z))\left(\psi_{n+L}(v\wedge v_{L,N})\wedge |-mNL\rangle \right) .$$ Since $\psi_{\infty}(v\wedge |-mNL\rangle ) = \psi_{n+L}(v\wedge v_{L,N})\wedge |-mNL\rangle,$ we find that (\ref{eq:iftwist2}) equals $$ \psi_{\infty}^{-1} \pi^v(\tilde{X}_i(z))\psi_{\infty}\left(v\wedge|-mNL\rangle \right).$$ Thus (\ref{eq:tw1}) is proved. A proof of (\ref{eq:tw2}) is similar. Here the essential ingredients are the relation (\ref{eq:lt2}), and those relations of Proposition \ref{p:fintwist} which contain the square of $\psi.$ \end{proof} Now by Propositions \ref{p:shift1} and \ref{p:twist} we obtain \begin{theor} The following map extends to a representation of $\ddot{\UU}$ on ${\mathcal F}_M.$ \begin{alignat}{4} & \ddot{\pi} : X_i(z) &\quad \mapsto\quad & \pi^v(\tilde{X}_i(d^iz)) \qquad (1\leqslant i <N), \label{eq:tt1}\\ & \ddot{\pi} : X_0(z) &\quad \mapsto \quad & {\psi_{\infty}^{-1}}\pi^v(\tilde{X}_1(qd^{-1}z)){\psi_{\infty}}, \label{eq:tt2} \\ & \ddot{\pi} : \mathbf d & \quad \mapsto \quad & d 1, \\ & \ddot{\pi} : q^{\frac12 c} & \quad \mapsto\quad & 1. \end{alignat} Here $d = p^{-1/N}q ,$ and $X = E,F,K^{\pm}.$ \end{theor} \mbox{} From (\ref{eq:tt1}) it follows that the vertical (level $0$) $\UU_q^{\prime}(\asll_N)$-action $\pi^v$ is the pull-back of $\ddot{\pi}$ through the homomorphism (\ref{eq:Uv}), whereas from (\ref{eq:tt2}) and (\ref{eq:cyc}) it follows that the horizontal (level $L$) $\UU_q^{\prime}(\asll_N)$-action $\pi^h$ is the pull-back of $\ddot{\pi}$ through the homomorphism (\ref{eq:Uh}). Thus as an $\ddot{\UU}$-module the Fock space ${\mathcal F}_M$ has level $(0,L)$ (cf. Section \ref{sec:tor}). \subsection{The actions of the quantum toroidal algebra on irreducible integrable highest weight modules of $\operatorname{U}_q^{\prime}(\widehat{{\mathfrak {gl}}}_N)$ } Let $\Lambda$ be a level $L$ dominant integral weight of $\UU_q^{\prime}(\asll_N).$ In this section we define an action of the quantum toroidal algebra $\ddot{\UU}$ on the irreducible module \begin{equation} \widetilde{V}(\Lambda) = {\mathbb K\hskip.5pt}[H_-]\otimes V(\Lambda) \end{equation} of the algebra $\operatorname{U}_q^{\prime}(\widehat{{\mathfrak {gl}}}_N) = H\otimes \UU_q^{\prime}(\asll_N).$ Here (cf. 
Section \ref{sec:decomp}) ${\mathbb K\hskip.5pt}[H_-]$ is the Fock module of the Heisenberg algebra $H,$ and $V(\Lambda)$ is the irreducible highest weight module of $\UU_q^{\prime}(\asll_N)$ of highest weight $\Lambda.$ In Section \ref{sec:torHecke} we defined, for any integral weight $\chi$ of $\mathfrak {sl}_L,$ the subalgebra $\operatorname{U}_q({\mathfrak b}_L)^{\chi}$ of $\UU_q^{\prime}(\asll_L).$ A level $N$ action of $\UU_q^{\prime}(\asll_L)$ on the Fock space ${\mathcal F}_M$ ($M\in \mathbb Z$) was defined in Section \ref{sec:Usl}, so that there is an action of $\operatorname{U}_q({\mathfrak b}_L)^{\chi}$ on ${\mathcal F}_M.$ Recall, moreover, that the vertical $\UU_q^{\prime}(\asll_N)$-action $\pi^v$ on ${\mathcal F}_M,$ and, consequently, the action $\ddot{\pi}$ of $\ddot{\UU},$ depend on two parameters: $p \in q^{\mathbb Z},$ and $\nu$ which is an integral weight of $\mathfrak {sl}_L.$ \begin{propos}\label{p:inv3} Suppose $p = q^{-2L},$ and $\nu = - \chi - 2\rho$ for an integral $\mathfrak {sl}_L$-weight $\chi.$ Then the action $\ddot{\pi}$ of $\ddot{\UU}$ on ${\mathcal F}_M$ leaves invariant the linear subspace $\operatorname{U}_q({\mathfrak b}_L)^{\chi}\left({\mathcal F}_M\right).$ \end{propos} \begin{proof} It is sufficient to prove that both the horizontal $\UU_q^{\prime}(\asll_N)$-action $\pi^h$ and the vertical $\UU_q^{\prime}(\asll_N)$-action $\pi^v$ leave $\operatorname{U}_q({\mathfrak b}_L)^{\chi}\left({\mathcal F}_M\right)$ invariant. The horizontal action commutes with the action of $\UU_q^{\prime}(\asll_L).$ Thus it remains to prove that the vertical action leaves $\operatorname{U}_q({\mathfrak b}_L)^{\chi}\left({\mathcal F}_M\right)$ invariant. Let $w \in {\mathcal F}_M^d$ and let $l \geqslant d.$ By Proposition \ref{p:rho} there is a unique $v \in V_{M,s+lNL}^d$ such that $$ w = v \wedge | M - s - lNL \rangle .$$ Here $s\in \{0,1,\dots,NL-1\},$ $M\equiv s\bmod NL.$ \\ Let $g$ be one of the generators of $\operatorname{U}_q({\mathfrak b}_L)^{\chi}$ (cf. \ref{eq:gen}). For all large enough $l$ we have \begin{equation} g(w) = g(v)\wedge | M - s - lNL \rangle c(g), \label{eq:inv31}\end{equation} where $c(g) = q^{-N}$ if $g = \dot{F}_0,$ and $c(g) = 1$ if $g = \dot{F}_a, \dot{K}_a - q^{\chi(a) - \chi(a+1)} 1$ $(1\leqslant a < L).$ If $g= \dot{F}_0$ then $g(v) \in V_{M,s+lNL}^{d+1},$ otherwise $g(v) \in V_{M,s+lNL}^{d}.$ Let $X$ be an element of $\UU_q^{\prime}(\asll_N).$ Provided $l$ is sufficiently large, Definition \ref{dp:def} gives $$ \pi^v(X)g(w) = \pi_{(s+lNL)}^v(X)g(v) \wedge | M - s - lNL \rangle c(g) .$$ By Proposition \ref{p:inv2} the right-hand side of the last equation is a linear combination of vectors \begin{equation} h (v^{\prime})\wedge | M - s - lNL \rangle , \label{eq:inv32}\end{equation} where $h$ is again one of the generators of $\operatorname{U}_q({\mathfrak b}_L)^{\chi},$ and $v^{\prime}$ belongs to either $V_{M,s+lNL}^{d}$ or $V_{M,s+lNL}^{d+1}.$ Applying (\ref{eq:inv31}) again, the vector (\ref{eq:inv32}) is seen to be proportional to $$ h(v^{\prime}\wedge | M - s - lNL \rangle ).$$ Thus the vertical action leaves $\operatorname{U}_q({\mathfrak b}_L)^{\chi}\left({\mathcal F}_M\right)$ invariant. 
\end{proof} Now we use Theorem \ref{t:decofF} to define an action of $\ddot{\UU}$ on $\widetilde{V}(\Lambda).$ Fix the unique $M \in \{0,1,\dots,N-1\}$ such that $\overline{\Lambda} \equiv \overline{\Lambda}_M \bmod \overline{Q}_N.$ Since the dual weights $\dot{\Lambda}^{(M)}$ of $\UU_q^{\prime}(\asll_L)$ are distinct for distinct $\Lambda,$ from Theorem \ref{t:decofF} we have the isomorphism of $\operatorname{U}_q^{\prime}(\widehat{{\mathfrak {gl}}}_N)$-modules: \begin{equation} \widetilde{V}(\Lambda) \cong {\mathcal F}_M/ \operatorname{U}_q({\mathfrak b}_L)^{\chi}\left({\mathcal F}_M\right),\label{eq:is} \end{equation} where $\chi $ is the finite part of $\dot{\Lambda}^{(M)}.$ That is for $\dot{\Lambda}^{(M)} = \sum_{a=0}^{L-1} n_a \dot{\Lambda}_a,$ $\chi = \sum_{a=1}^{L-1} n_a \dot{\overline{\Lambda}}_a.$ By Proposition \ref{p:inv2}, the $\ddot{\UU}$-action $\ddot{\pi}$ with $p= q^{-2L},$ $ \nu = - \chi - 2 \rho,$ factors through the quotient map $$ {\mathcal F}_M \rightarrow {\mathcal F}_M/ \operatorname{U}_q({\mathfrak b}_L)^{\chi}\left({\mathcal F}_M\right),$$ and therefore by (\ref{eq:is}) induces an action of $\ddot{\UU}$ on $\widetilde{V}(\Lambda).$ \appendix \section{The proof of lemma \ref{l:lt}} \setcounter{section}{7} In this appendix we prove Lemma \ref{l:lt}. The idea of the proof is essentially the same as that of the proof of \cite[Lemma 23]{STU}. \\ {\bf Lemma \ref{l:lt}.} {\em Suppose $d$ and $l$ are integers such that $0 \leqslant d \leqslant l.$ Let $n= s + lNL,$ where $s \in \{0,1,\dots,NL-1\}$ is defined from $M \equiv s \bmod NL.$ Let $m$ be the integer such that $M-s-lNL = -mNL.$ For $1 \leqslant b \leqslant L$ we put \begin{align*} &v_{b,N} = z^m\mathfrak e_1\mathfrak v_N\wedge z^m\mathfrak e_2\mathfrak v_N\wedge \cdots \wedge z^m\mathfrak e_b\mathfrak v_N,\\ &v_{b,N-1} = z^m\mathfrak e_1\mathfrak v_N\wedge z^m\mathfrak e_1\mathfrak v_{N-1}\wedge z^m\mathfrak e_2\mathfrak v_N\wedge z^m\mathfrak e_2\mathfrak v_{N-1}\wedge \cdots \wedge z^m\mathfrak e_b\mathfrak v_N\wedge z^m\mathfrak e_b\mathfrak v_{N-1}. \end{align*} Assume $v \in V^d_{M,s+lNL}.$ Then \begin{align} &\pi^v_{(n+b)}(\tilde{X}_i(z))(v\wedge v_{b,N}) = \pi^v_{(n)}(\tilde{X}_i(z))(v)\wedge v_{b,N} , \label{eq:alt1}\\ &\pi^v_{(n+2b)}(\tilde{X}_{N-1}(z))(v\wedge v_{b,N-1}) = \pi^v_{(n)}(\tilde{X}_{N-1}(z))(v) \wedge v_{b,N-1} \label{eq:alt2} \end{align} Here $1\leqslant i \leqslant N-2.$} \begin{proof} As is mentioned in the proof of Lemma 22 in \cite{STU}, for each $i$ $(1 \leqslant i \leqslant N-1),$ the subalgebra of $\UU_q^{\prime}(\asll_N)$ generated by $\tilde{E}_{i,l'} , \; \tilde{F}_{i,l'}, \; \tilde{H}_{i,m'}, \tilde{K}^{\pm }_{i}$ $(l' \in \mathbb Z , \; m' \in \mathbb Z \setminus \{ 0 \} )$ is in fact generated by only the elements $\tilde{E}_{i,0}, \tilde{F}_{i,0}, \tilde{K}^{\pm }_{i}, \tilde{F}_{i,1} $ and $\tilde{F}_{i,-1}.$ By the definition of the representation, every generator of the vertical action $\mbox{U}_v$ preserves the degree in the sense of (\ref{eq:fdeg}). So it is sufficient to show that the actions of $\tilde{E}_{i,0}, \tilde{F}_{i,0}, \tilde{K}^{\pm }_{i}, \tilde{F}_{i,1} $ and $\tilde{F}_{i,-1}$ satisfy the relations (\ref{eq:alt1}, \ref{eq:alt2}). For $\tilde{E}_{i,0}, \tilde{F}_{i,0}, \tilde{K}^{\pm }_{i}$, this is shown directly by using the definitions of the actions (\ref{eq:pi1}--\ref{eq:pi3}). 
Now we must show that \begin{align} &\pi^v_{(n+b)}(\tilde{F}_{i,\pm 1})(v\wedge v_{b,N}) = \pi^v_{(n)}(\tilde{F}_{i,\pm 1})(v)\wedge v_{b,N}, \label{eq:Fi}\\ &\pi^v_{(n+2b)}(\tilde{F}_{N-1,\pm 1})(v\wedge v_{b,N-1}) = \pi^v_{(n)}(\tilde{F}_{N-1,\pm 1})(v)\wedge v_{b,N-1}. \label{eq:FN-1} \end{align} Here $1\leqslant i \leqslant N-2.$ We will prove (\ref{eq:FN-1}). For any $ M',M'', M''' $ ($1 \leqslant M', M'', M'''\leqslant n+2b, \; M'\leqslant M'' $), we define a $\UU_q^{\prime}(\asll_N)$-action on the space $ {\mathbb K\hskip.5pt}[z_1^{\pm 1},\dots,z_{n+2b}^{\pm 1}]\otimes ({\mathbb K\hskip.5pt}^L)^{\otimes(n+2b)} \otimes ({\mathbb K\hskip.5pt}^N)^{\otimes (n+2b)}$ in terms of the Chevalley generators as follows: \begin{align} & E_i( f \otimes \tilde{v} ) = \sum_{j=M'}^{M''} (q^{-\unun{M'''}}Y_{j}^{(M''')})^{-\delta (i=0)} f \otimes (e_{i,i+1})_{j} (k_{i})_{j+1} \dots (k_{i})_{M''} \tilde{v}, \label{eM} \\ & F_i ( f \otimes \tilde{v}) = \sum_{j=M'}^{M''} (q^{-\unun{M'''}}Y_{j}^{(M''')})^{\delta (i=0)} f \otimes (k_{i}^{-1})_{M'}\dots (k_{i}^{-1})_{j-1}(e_{i+1,i})_{j} \tilde{v}, \label{fM} \\ & K_i ( f \otimes \tilde{v}) = f \otimes (k_i)_{M'} (k_i)_{M'+1} \dots (k_i)_{M''} \tilde{v}. \label{kM} \end{align} Here $i= 0, \dots ,N-1$, indices are cyclically extended modulo $N$, $f \in {\mathbb K\hskip.5pt}[z_1^{\pm 1},\dots,z_{n+2b}^{\pm 1}]\otimes ({\mathbb K\hskip.5pt}^L)^{\otimes(n+2b)}$, $ \tilde{v} \in ({\mathbb K\hskip.5pt}^N)^{\otimes (n+2b)}$, and the meaning of the notations $(e_{i,i'})_j$, $(k ^{\pm 1}_i)_j$ is the same as in Section \ref{sec:VVdual}. It is understood that for $M''' < n+2b$ the operators $Y_{i}^{(M''')}$ in (\ref{eM}, \ref{fM}) act non-trivially only on the variables $z_1,z_2,\dots,z_{M'''}$ and on the first $M'''$ factors in $({\mathbb K\hskip.5pt}^L)^{\otimes(n+2b)}.$ Note that the $\UU_q^{\prime}(\asll_N)$-action is well-defined because of the commutativity of $Y_{i}^{(M''')}$ ($i = 1, \dots , M'''$). The actions of the Drinfeld generators are determined by the actions of the Chevalley generators. For an element $\tilde{X}$ of $\UU_q^{\prime}(\asll_N)$ we denote by $\tilde{X}^{(M',M''), M'''}$ the operator giving the action of $\tilde{X}$ on the space ${\mathbb K\hskip.5pt}[z_1^{\pm 1},\dots,z_{n+2b}^{\pm 1}]\otimes ({\mathbb K\hskip.5pt}^L)^{\otimes(n+2b)} \otimes ({\mathbb K\hskip.5pt}^N)^{\otimes (n+2b)}$ in accordance with (\ref{eM}--\ref{kM}). Also, we set $\tilde{X}^{\{ j \}, M''' }$ $=$ $\tilde{X}^{(j,j), M'''}$ $(j=1, \dots , M''').$ With these definitions, for any two elements $\tilde{X}$ and $\tilde{Y}$ from $\UU_q^{\prime}(\asll_N) ,$ the operators $\tilde{X}^{(M',M''), M'''}$ and $\tilde{Y}^{(N',N''), M'''}$ commute if $M''<N'$ or $N''<M'$. Note that for any $\tilde{X} \in \UU_q^{\prime}(\asll_N)$ we have $$ \pi^v_{(n+2b)}(\tilde{X})\wedge(f \otimes \tilde{v}) = \wedge\left( \tilde{X}^{(1,n+2b),n+2b}(f \otimes \tilde{v})\right).$$ \mbox{} \noindent Let $UN_+$ and $UN_-^{2}$ be the left ideals in $\UU_q^{\prime}(\asll_N)$ generated respectively by $\{ \tilde{E}_{i,k'} \}$ and $\{ \tilde{F}_{i,k'}\tilde{F}_{j,l'} \}$. Let $UN_+^{(M',M''), M'''},(UN_-^2)^{(M',M''), M'''}$ be the images of these ideals with respect to the map $\UU_q^{\prime}(\asll_N) \rightarrow \mathrm {End}\left({\mathbb K\hskip.5pt}[z_1^{\pm 1},\dots,z_{n+2b}^{\pm 1}]\otimes ({\mathbb K\hskip.5pt}^L)^{\otimes(n+2b)} \otimes ({\mathbb K\hskip.5pt}^N)^{\otimes (n+2b)} \right)$ given by (\ref{eM}--\ref{kM}). 
Then the following relations hold: \begin{align*} & \pi_{(n+2b)}^v(\tilde{F}_{N-1,1}) \wedge (f \otimes \tilde{v} ) \equiv \wedge( (\tilde{K}_{N-1} ^{(1,n+2b-2), n+2b} \tilde{F}_{N-1,1}^{(n+2b-1,n+2b),n+2b}+ \tilde{F}_{N-1,1}^{(1,n+2b-2), n+2b}) (f \otimes \tilde{v} )) , \\ & \pi_{(n+2b)}^v(\tilde{F}_{N-1,-1}) \wedge (f \otimes \tilde{v} ) \equiv \wedge (((\tilde{K}_{N-1} ^{(1,n+2b-2), n+2b})^{-1} \tilde{F}_{N-1,-1}^{(n+2b-1,n+2b),n+2b}+ \tilde{F}_{N-1,-1}^{(1,n+2b-2), n+2b} \\ & \quad \quad \quad + (q^{-1}-q)(\tilde{K}_{N-1}^{(1,n+2b-2), n+2b})^{-1} \tilde{H}_{N-1,-1}^{(1,n+2b-2), n+2b} \tilde{F}_{N-1,0}^{(n+2b-1,n+2b),n+2b} ) (f \otimes \tilde{v} )) ,\nonumber \\ & \qquad \text{where}\qquad f \in {\mathbb K\hskip.5pt}[z_1^{\pm 1},\dots,z_{n+2b}^{\pm 1}]\otimes ({\mathbb K\hskip.5pt}^L)^{\otimes(n+2b)}, \quad \tilde{v} \in ({\mathbb K\hskip.5pt}^N)^{\otimes (n+2b)}. \nonumber \end{align*} Here the equivalence $\equiv$ is understood to be modulo $$ \wedge (UN_+^{(1,n+2b-2), n+2b} \cdot (UN_-^2)^{(n+2b-1,n+2b),n+2b} (f \otimes \tilde{v} )).$$ These relations follow from the coproduct formulas which have been obtained in \cite[Proposition 3.2.A]{koyama}: \begin{eqnarray} & \Delta^+ (\tilde{F}_{i,1}) \equiv \tilde{K}_{i} \otimes \tilde{F}_{i,1} + \tilde{F}_{i,1} \otimes 1 & \mbox{mod } UN_+ \otimes UN_-^{2} , \label{cop1} \\ & \Delta^+ (\tilde{F}_{i,-1}) \equiv \tilde{K}_{i}^{-1} \otimes \tilde{F}_{i,-1} + \tilde{F}_{i,-1} \otimes 1 & \label{cop2} \\ & + (q^{-1}-q)\tilde{K}_i^{-1} \tilde{H}_{i,-1} \otimes \tilde{F}_{i,0} & \mbox{mod } UN_+ \otimes UN_-^{2}. \nonumber \end{eqnarray} Recall the definition of $\Delta^+$ given in (\ref{eq:co1} -- \ref{eq:co4}). Let $w= z_1^{\underline{k_1}}\mathfrak e_{\dot{k}_1}\mathfrak v_{\overline{k_1}} \wedge z_2^{\underline{k_2}}\mathfrak e_{\dot{k}_2}\mathfrak v_{\overline{k_2}} \wedge \cdots \wedge z_n^{\underline{k_n}}\mathfrak e_{\dot{k}_n}\mathfrak v_{\overline{k_n}}$ be a normally ordered wedge from $V_{M,n}^d,$ and define $f \in {\mathbb K\hskip.5pt}[z_1^{\pm 1},\dots,z_{n+2b}^{\pm 1}]\otimes ({\mathbb K\hskip.5pt}^L)^{\otimes(n+2b)}$ and $\tilde{v} \in ({\mathbb K\hskip.5pt}^N)^{\otimes(n+2b)}$ as follows. \begin{align} & f = (z_1^{\underline{k_1}}z_2^{\underline{k_2}} \cdots z_n^{\underline{k_n}})(z_{n+1} \cdots z_{n+2b})^{m}\otimes (\mathfrak e_{\dot{k}_1}\mathfrak e_{\dot{k}_2} \cdots \mathfrak e_{\dot{k}_n})(\mathfrak e_1 \mathfrak e_1 \mathfrak e_2 \mathfrak e_2 \cdots \mathfrak e_b \mathfrak e_b), \label{lemf} \\ & \tilde{v} = (\mathfrak v_{\overline{k_1}}\mathfrak v_{\overline{k_2}} \cdots \mathfrak v_{\overline{k_n}})(\mathfrak v_N \underbrace{ \mathfrak v_{N-1} ) (\mathfrak v_N \mathfrak v_{N-1}) \dots (\mathfrak v_N }_{ b\: {\mathrm {copies}}} \mathfrak v_{N-1}), \label{lemv} \end{align} where $m = \underline{o_{n+1}} = \underline{o_{n+2}} = \cdots =\underline{o_{n+2b}}.$ Then the monomial $f$ satisfies the assumptions of Lemma \ref{l:ml} with $k=2b,$ and $\overline{n}(j) = 2$ for all $j \in \{1,2,\dots,b\}.$ Now we will show the equality \begin{equation} \pi_{(n+2b)}^v(\tilde{F}_{N-1,\pm 1})\wedge (f \otimes \tilde{v} ) =\wedge(\tilde{F}_{N-1,\pm 1}^{(1,n+2b-2), n+2b} (f \otimes \tilde{v})). \label{ffstar} \end{equation} First let us prove that any element in $UN_+^{(1,n+2b-2), n+2b} \cdot (UN_-^2)^{(n+2b-1,n+2b),n+2b}$ annihilates the vector $f \otimes \tilde{v},$ where $f$ and $\tilde{v}$ are given by (\ref{lemf}) and (\ref{lemv}). 
It is enough to show that \begin{equation} ( \tilde{F}_{i',k'}^{(n+2b-1,n+2b),n+2b}\tilde{F}_{j',l'}^{(n+2b-1,n+2b),n+2b}) ( \bar{v} \otimes (z_{n+2b-1}^m\mathfrak e_b \mathfrak v_N \otimes z_{n+2b}^m \mathfrak e_b \mathfrak v_{N-1}))=0, \end{equation} for $\bar{v} \in {\mathbb K\hskip.5pt}[z_1^{\pm 1},\dots,z_{n+2b-2}^{\pm 1}]\otimes ({\mathbb K\hskip.5pt}^L)^{\otimes(n+2b-2)} \otimes ({\mathbb K\hskip.5pt}^N)^{\otimes (n+2b-2)}$. This follows immediately from the observation that $\mathrm {wt}(\mathfrak v_N)+\mathrm {wt}( \mathfrak v_{N-1})-\overline{\alpha}_{i'}-\overline{\alpha}_{j'}$ is not a $\operatorname{U}_q(\mathfrak {sl}_N)$-weight of $ ({\mathbb K\hskip.5pt}^N)^{\otimes 2}$. Next we will show that $\wedge (\tilde{F}_{N-1,\pm 1}^{(n+2b-1,n+2b),n+2b} (f \otimes \tilde{v}) )= 0$, (here $f$ and $\tilde{v}$ are given by (\ref{lemf}) and (\ref{lemv})). By the formulas (\ref{cop1}) and (\ref{cop2}), we have the following identities modulo $\wedge (UN_+^{\{ n+2b-1 \} ,n+2b } (UN_- ^2) ^{\{ n+2b\},n+2b } ( f \otimes \tilde{v} ))$ (see also \cite{STU}): \begin{align} & \wedge ( \tilde{F}^{(n+2b-1,n+2b),n+2b}_{N-1,1} (f \otimes \tilde{v} )) \equiv \wedge ( (\tilde{K}_{N-1} ^{\{ n+2b-1 \},n+2b } \tilde{F}_{N-1,1}^{\{ n+2b \},n+2b }+ \tilde{F}_{N-1,1}^{\{ n+2b-1 \},n+2b }) (f\otimes \tilde{v} )) , \\ & \wedge (\tilde{F}^{(n+2b-1,n+2b),n+2b}_{N-1,-1} (f \otimes \tilde{v} )) \equiv \wedge (((\tilde{K}_{N-1} ^{\{ n+2b-1 \} ,n+2b})^{-1} \tilde{F}_{N-1,-1}^{\{ n+2b \} ,n+2b}+ \tilde{F}_{N-1,-1}^{\{ n+2b-1 \} ,n+2b} \\ & \quad \quad \quad + (q^{-1}-q)[ \tilde{E}_{N-1,0}^{\{ n+2b-1 \} ,n+2b}, \tilde{F}_{N-1,-1}^{\{ n+2b-1 \},n+2b } ] \tilde{F}_{N-1,0}^{\{ n+2b \},n+2b } ) (f \otimes \tilde{v} )) ,\nonumber \end{align} The following formula is essentially written in \cite[Proposition 3.2.B]{koyama}: \begin{align} & \tilde{F}_{i,\pm 1}^{\{ l\},n+2b } ( f' \otimes (\otimes _{j=1}^{n+2b} \mathfrak v_{\epsilon _j})) \label{ko2} \\ & = (q^{i-\unun{n+2b}}(Y_l ^{(n+2b)})^{-1})^{\pm 1} f'\otimes (\otimes _{j=1}^{l-1} \mathfrak v_{\epsilon _j}) \otimes \delta _{i,\epsilon _l} \mathfrak v_{i+1} \otimes (\otimes _{j=l+1}^{n+2b} \mathfrak v_{\epsilon _j}), \nonumber \end{align} where $f'\in {\mathbb K\hskip.5pt}[z_1^{\pm 1},\dots,z_{n+2b}^{\pm 1}]\otimes ({\mathbb K\hskip.5pt}^L)^{\otimes(n+2b)} $ and $\otimes _{j=1}^{n+2b} \mathfrak v_{\epsilon _j} \in ({\mathbb K\hskip.5pt}^N)^{\otimes (n+2b)}$. By (\ref{ko2}) we have $(UN_+^{\{ n+2b-1 \},n+2b } (UN_- ^2)^{\{ n+2b \},n+2b } (f \otimes \tilde{v} )) =0 $, and by (\ref{ko2}) and Lemma \ref{l:ml} we have \begin{align} & \tilde{F}_{N-1,\pm 1}^{(n+2b-1,n+2b),n+2b} (f \otimes \tilde{v} ) \equiv \alpha _{\pm 1} \bar{v} \otimes z_{n+2b-1}^m \mathfrak e_b \mathfrak v_N \otimes z_{n+2b}^m \mathfrak e_b \mathfrak v_N \label{ot}\\ & \qquad \qquad \qquad \qquad \mbox{ mod } (\mathcal K_{n,2b}^m + \mathcal L_{n,2b}^m ) \otimes (\check{v}^{(n)} \otimes (\mathfrak v_N \underbrace{ \mathfrak v_{N-1} ) \dots (\mathfrak v_N }_{ b-1\: {\mathrm {copies}}} \mathfrak v_{N-1}) (\mathfrak v_N \mathfrak v_N)) , \nonumber \end{align} Here $ c _{\pm 1}$ are certain coefficients, $ \bar{v} \in {\mathbb K\hskip.5pt}[z_1^{\pm 1},\dots,z_{n+2b-2}^{\pm 1}]\otimes ({\mathbb K\hskip.5pt}^L)^{\otimes(n+2b-2)} \otimes ({\mathbb K\hskip.5pt}^N)^{\otimes (n+2b-2)}$, $\check{v}^{(n)} \in ({\mathbb K\hskip.5pt}^N)^{\otimes n}$. 
Using the normal ordering rules, we have $\wedge (\bar{v} \otimes z_{n+2b-1}^m \mathfrak e_b \mathfrak v_N \otimes z_{n+2b}^m \mathfrak e_b \mathfrak v_N) =0.$ By Lemma \ref{l:mlbc}, we have \begin{equation} \wedge ((\mathcal K_{n,2b}^m + \mathcal L_{n,2b}^m ) \otimes (\check{v}^{(n)} \otimes (\mathfrak v_N \underbrace{ \mathfrak v_{N-1} ) \dots (\mathfrak v_N }_{ b-1\: {\mathrm {copies}}} \mathfrak v_{N-1}) (\mathfrak v_N \mathfrak v_N))) \in \oplus_{d^{\prime} > l} V_{M,n+2b}^{d^{\prime}}. \label{ott} \end{equation} On the other hand the degree of the wedge (\ref{ott}) is equal to deg$(f \otimes \tilde{v} )=d$. Taking into account that $d \leqslant l,$ we have $\wedge (\tilde{F}_{N-1,\pm 1}^{(n+2b-1,n+2b),n+2b} (f \otimes \tilde{v} ))=0$. Now we prove that $\wedge((\tilde{K}^{(1,n+2b-2),n+2b}_{N-1})^{-1} \tilde{H}^{(1,n+2b-2),n+2b}_{N-1,-1} \tilde{F}_{N-1,0}^{(n+2b-1,n+2b),n+2b} (f \otimes \tilde{v}))$ vanishes. We have \begin{align} & \wedge ((\tilde{K}^{(1,n+2b-2),n+2b}_{N-1})^{-1} \tilde{H}^{(1,n+2b-2),n+2b}_{N-1,-1} \tilde{F}_{N-1,0}^{(n+2b-1,n+2b),n+2b} \bar{v} \otimes z_{n+2b-1}^m \mathfrak e_b \mathfrak v_N \otimes z_{n+2b}^m \mathfrak e_b \mathfrak v_{N-1}) \\ & = \wedge(( \tilde{K}^{(1,n+2b-2),n+2b}_{N-1})^{-1} \tilde{H}^{(1,n+2b-2),n+2b}_{N-1,-1} \bar{v} \otimes z_{n+2b-1}^m \mathfrak e_b \mathfrak v_N \otimes z_{n+2b}^m \mathfrak e_b \mathfrak v_N) , \nonumber \end{align} here $ \bar{v} \in {\mathbb K\hskip.5pt}[z_1^{\pm 1},\dots,z_{n+2b-2}^{\pm 1}]\otimes ({\mathbb K\hskip.5pt}^L)^{\otimes(n+2b-2)} \otimes ({\mathbb K\hskip.5pt}^N)^{\otimes (n+2b-2)}$. By (\ref{eM} -- \ref{kM}) the operator $\tilde{H}^{(1,n+2b-2),n+2b}_{N-1,-1}$ is a polynomial in the operators $(Y_{j}^{(n+2b)})^{\pm 1}$, $(k_l)^{\pm 1}_j,$ $(e_{l,l'})_j $ where $1 \leqslant j \leqslant n+2b-2 $ and $ 1 \leqslant l,l' \leqslant N$. By Lemma \ref{l:ml}, we have \begin{align*} & (\tilde{K}^{(1,n+2b-2),n+2b}_{N-1})^{-1} \tilde{H}^{(1,n+2b-2),n+2b}_{N-1,-1} \bar{v} \otimes z_{n+2b-1}^m \mathfrak e_b \mathfrak v_N \otimes z_{n+2b}^m \mathfrak e_b \mathfrak v_N \\ & \qquad \qquad \qquad \equiv c(\hat{v} \otimes z_{n+2b-1}^m \mathfrak e_b \mathfrak v_N \otimes z_{n+2b}^m \mathfrak e_b \mathfrak v_N ) \nonumber \\ & \qquad \qquad \qquad \qquad \qquad \mbox{ mod } (\mathcal K_{n,2b}^m + \mathcal L_{n,2b}^m ) \otimes (\check{v}^{(n)} \otimes (\mathfrak v_N \underbrace{ \mathfrak v_{N-1} ) \dots (\mathfrak v_N }_{ b-1\: {\mathrm {copies}}} \mathfrak v_{N-1}) (\mathfrak v_N \mathfrak v_N)) , \nonumber \end{align*} Here $ c $ is a certain coefficient, $ \bar{v}, \hat{v} \in {\mathbb K\hskip.5pt}[z_1^{\pm 1},\dots,z_{n+2b-2}^{\pm 1}]\otimes ({\mathbb K\hskip.5pt}^L)^{\otimes(n+2b-2)} \otimes ({\mathbb K\hskip.5pt}^N)^{\otimes (n+2b-2)}$, $\check{v}^{(n)}$ is an element in $({\mathbb K\hskip.5pt}^N)^{\otimes n}$. Repeating the arguments given after the relation (\ref{ott}), we have $\wedge((\tilde{K}^{(1,n+2b-2),n+2b}_{N-1})^{-1} \tilde{H}^{(1,n+2b-2),n+2b}_{N-1,-1} \tilde{F}_{N-1,0}^{(n+2b-1,n+2b),n+2b} (f \otimes \tilde{v}))=0.$ Thus we have shown (\ref{ffstar}). Repeatedly applying the arguments that led to (\ref{ffstar}), we have \begin{equation} \pi_{(n+2b)}^v(\tilde{F}_{N-1, \pm 1})(f \otimes \tilde{v})) = \wedge (\tilde{F}^{(1,n),n+2b}_{N-1, \pm 1}(f \otimes \tilde{v})). 
\label{nn2b} \end{equation} To prove $\pi_{(n+2b)}^v(\tilde{F}_{N-1, \pm 1})\wedge(f \otimes \tilde{v})) = \wedge (\tilde{F}^{(1,n),n}_{N-1, \pm 1}(f \otimes \tilde{v}))$, we must show that in the right-hand side of (\ref{nn2b}) we can replace $q^{-\unun{n+2b}}Y^{(n+2b)}_i$ by $q^{-\unun{n}}Y^{(n)}_i$ $(1 \leqslant i \leqslant n).$ Observe that $\tilde{F}^{(1,n),n+2b}_{N-1, \pm 1}$ is a polynomial in the operators \begin{equation} (Y_{j}^{(n+2b)})^{\pm 1},\quad (k_l)^{\pm 1}_j,\quad (e_{l,l'})_j \quad\text{where $1 \leqslant j \leqslant n $ and $ 1 \leqslant l,l' \leqslant N.$} \label{eq:opers}\end{equation} By Lemma \ref{l:ml} we have \begin{equation} (q^{-\unun{n+2b}}Y^{(n+2b)}_i)^{\pm 1} (f \otimes \tilde{v}) \equiv (q^{-\unun{n}}Y^{(n)}_i)^{\pm 1} (f \otimes \tilde{v}) \mbox{ mod } (\mathcal K ^m _{n,2b} + \mathcal L ^m _{n,2b}) \otimes \tilde{v}. \end{equation} For $f^{\prime} \in \mathcal K ^m _{n,2b} + \mathcal L ^m _{n,2b},$ and $\mathcal E_n$ a polynomial in (\ref{eq:opers}), the vector $f^{\prime}\otimes \mathcal E_n \tilde{v}$ satisfies the assumption of Lemma \ref{l:mlbc}. By this lemma, and by the arguments given after (\ref{ott}), we have \begin{equation} \wedge ( (\mathcal K ^m _{n,2b} + \mathcal L ^m _{n,2b}) \otimes \mathcal E_n \tilde{v}) =0 \label{lstab} \end{equation} Combining (\ref{lstab}), the commutativity of $\mathcal E_n$ and $(Y^{(\tilde{n})}_i)^{\pm 1}$ $(1 \leqslant i \leqslant n, \; \tilde{n} = n $ or $n+2b)$, and the fact that $(Y_i^{(\tilde{n})})^{\pm 1} ( \mathcal K ^m _{n,2b} + \mathcal L ^m _{n,2b}) \subset ( \mathcal K ^m _{n,2b} + \mathcal L ^m _{n,2b}),$ we have $\pi_{(n+2b)}^v(\tilde{F}_{N-1, \pm 1})\wedge (f \otimes \tilde{v})) = \wedge (\tilde{F}^{(1,n),n}_{N-1, \pm 1}(f \otimes \tilde{v}))$. The relation (\ref{eq:FN-1}) follows. To prove (\ref{eq:Fi}), consider the tensor product ${\mathbb K\hskip.5pt}[z_1^{\pm 1},\dots,z_{n+b}^{\pm 1}]\otimes ({\mathbb K\hskip.5pt}^L)^{\otimes(n+b)} \otimes ({\mathbb K\hskip.5pt}^N)^{\otimes (n+b)}$, use the formulas (\ref{cop1}), (\ref{cop2}) and continue the proof in a way that is completely analogous to the proof of (\ref{eq:FN-1}). \end{proof} \newcommand{\BOOK}[6]{\bibitem[{#6}]{#1}{\sc #2}, {\it #3} (#4)#5.} \newcommand{\JPAPER}[8]{\bibitem[{#8}]{#1}{\sc #2}, `#3', {\it #4} {\bf #5} (#6) #7.} \newcommand{\JPAPERS}[9]{\bibitem[{#9}]{#1}{\sc #2}, `#3', {\it #4} #5 #6, #7 #8.} \end{document}
\begin{document} \title{Further calculations for the McKean stochastic game for a spectrally negative L\'evy process: from a point to an interval} \author{\textbf{E.J. Baurdoux\footnote{Department of Statistics, London School of Economics. Houghton street, {\sc London, WC2A 2AE, United Kingdom.} E-mail: [email protected]}, K. van Schaik\footnote{Department of Mathematical Sciences, University of Bath, Claverton Down, {\sc Bath, BA2 7AY, United Kingdom}. E-mail: [email protected]. This author gratefully acknowledges being supported by a post-doctoral grant from the AXA Research Fund}}} \date{} \maketitle \begin{abstract} Following Baurdoux and Kyprianou \cite{McKean} we consider the McKean stochastic game, a game version of the McKean optimal stopping problem (American put), driven by a spectrally negative L\'evy process. We improve their characterisation of a saddle point for this game when the driving process has a Gaussian component and negative jumps. In particular we show that the exercise region of the minimiser consists of a singleton when the penalty parameter is larger than some threshold and 'thickens' to a full interval when the penalty parameter drops below this threshold. Expressions in terms of scale functions for the general case and in terms of polynomials for a specific jump-diffusion case are provided. \end{abstract} \begin{tabbing} {\footnotesize Keywords:} \= \footnotesize{Stochastic games, optimal stopping, Levy processes, fluctuation theory} \\ {\footnotesize Mathematics Subject Classification (2000): 60G40, 91A15} \end{tabbing} \section{Introduction} This paper is a follow-up to the paper \cite{McKean} by Baurdoux and Kyprianou (henceforth BK), in which the solution to the McKean stochastic game driven by a spectrally negative L\'evy process is studied. Let us introduce the setting in BK (and in this paper). Let $X$ be a L\'evy process defined on a filtered probability space $(\Omega,\mathcal{F},\mathbf{F},\mathbb{P})$, where $\mathbf{F}=(\mathcal{F}_t)_{t \geq 0}$ is the filtration generated by $X$ which is naturally enlarged (cf. Definition 1.3.38 in Bichteler \cite{Dichteler02}). For $x \in \mathbb{R}$ we denote by $\mathbb{P}_x$ the law of $X$ when it is started at $x$ and we abbreviate $\mathbb{P}=\mathbb{P}_0$. Accordingly we shall write $\mathbb{E}_x$ and $\mathbb{E}$ for the associated expectation operators. We assume throughout that $X$ is spectrally negative, meaning that it has no positive jumps and that it is not the negative of a subordinator. The McKean stochastic game is an example of a type of stochastic games introduced by Dynkin \cite{Dynkin69}. It is a two-player zero sum game, consisting of a maximiser aiming at maximizing over $\mathbf{F}$-stopping times $\tau$ the expected payoff according to the (discounted) lower payoff process given by $e^{-qt}(K-\exp(X_t))^+$ for all $t \geq 0$ and a minimiser aiming at minimizing over $\mathbf{F}$-stopping times $\sigma$ the expected payoff according to the (discounted) upper payoff process given by $e^{-qt}((K-\exp(X_t))^+ +\delta)$ for all $t \geq 0$, where $K,\delta>0$. That is, for any pair of stopping times $(\tau,\sigma)$ the payoff to the maximizer is \[ M_x(\tau,\sigma) := \mathbb{E}_x [ e^{-q \tau} (K-e^{X_{\tau}})^+ \mathbf{1}_{\{ \tau \leq \sigma \}} + e^{-q \sigma} ((K-e^{X_{\sigma}})^+ +\delta) \mathbf{1}_{\{ \sigma < \tau \}} ]. 
\] We assume throughout this paper that the discount factor $q$ satisfies \begin{equation} 0\leq \psi(1)\leq q \text{ and } q>0, \label{ass} \end{equation} where $\psi$ denotes the Laplace exponent of $X$. (Note that since both payoff processes vanish a.s. as $t \to \infty$, there is no ambiguity in allowing $\tau$ and $\sigma$ to take the value infinity, as we do in this paper). For any $x$, this game has a \emph{value} if the upper and lower value, $\inf_{\sigma} \sup_{\tau} M_x(\tau,\sigma)$ and $\sup_{\tau} \inf_{\sigma} M_x(\tau,\sigma)$ respectively, coincide. Moreover, if a pair $(\tau^*,\sigma^*)$ exists such that \[ M_x(\tau,\sigma^*) \leq M_x(\tau^*,\sigma^*) \leq M_x(\tau^*,\sigma) \quad \mbox{for all $(\tau,\sigma)$}, \] the value exists and equals $M_x(\tau^*,\sigma^*)$. In this case $(\tau^*,\sigma^*)$ is called a \emph{saddle point} (or Nash equilibrium). For an account of these concepts in a general Markovian setting, see Ekstr\"om and Peskir \cite{Ekstrom08} and the references therein. For other examples of stochastic games, see e.g. Kifer \cite{Kifer00}, Kyprianou \cite{Kyprianou04}, Baurdoux and Kyprianou \cite{Baurdoux08}, Gapeev and K\"uhn \cite{Gapeev05}, Baurdoux et al \cite{Baurdoux09}. Note that the McKean game can be seen as an extension of the classic McKean optimal stopping problem (cf. \cite{McKean65} and Theorem \ref{avram} below). In a financial interpretation, this optimal stopping problem is usually referred to as an American put option, with $K$ the strike price. The McKean game then extends the American put option by introducing the possibility for the writer of the option to cancel the contract, at the expense of paying the intrinsic value plus an extra constant penalty given by the penalty parameter $\delta$. Cf. e.g. Kifer \cite{Kifer00} and Kallsen and K\"uhn \cite{Kallsen04} for a general account of the interpretation of stochastic games as financial contracts. In BK it was shown that a saddle point $(\tau^*,\sigma^*)$ indeed exists for the McKean game, so in particular the value function $V$ is well defined by \begin{eqnarray} V(x) &=& \sup_\tau \inf_\sigma \mathbb{E}_x \left[ e^{-q\tau}(K-e^{X_\tau})^+\mathbf{1}_{\{\tau \leq \sigma\}}+e^{-q\sigma}((K-e^{X_\sigma})^++\delta)\mathbf{1}_{\{\sigma<\tau\}} \right] \nonumber\\ &=& \inf_\sigma \sup_\tau \mathbb{E}_x \left[ e^{-q\tau}(K-e^{X_\tau})^+\mathbf{1}_{\{\tau \leq \sigma\}}+e^{-q\sigma}((K-e^{X_\sigma})^++\delta)\mathbf{1}_{\{\sigma<\tau\}} \right] \nonumber\\ &=& \mathbb{E}_x \left[ e^{-q\tau^*}(K-e^{X_{\tau^*}})^+\mathbf{1}_{\{\tau^* \leq \sigma^*\}}+e^{-q\sigma^*}((K-e^{X_{\sigma^*}})^++\delta)\mathbf{1}_{\{\sigma^*<\tau^*\}} \right]. \nonumber \end{eqnarray} The optimal stopping time for the maximiser, $\tau^*$, is the first hitting time of an interval of the form $(-\infty,x^*]$ for some $x^*<\log K$. For the minimiser the optimal stopping time $\sigma^*$ is as follows. When the penalty parameter $\delta$ is at least $\bar{\delta}:=U(\log K)$, where $U$ denotes the value function of the McKean optimal stopping problem, the minimiser never stops (i.e. $\sigma^*=\infty$). When $\delta < \bar{\delta}$, the optimal stopping region for the minimiser is an interval of the form $[\log K,y^*]$. If the Gaussian component $\sigma_X$ of $X$ is equal to zero (note that this corresponds to the situation that $X$ does not creep downwards), we have $y^*>\log K$. Furthermore, formulae in terms of scale functions for $x^*$ and $V$ on $(-\infty,\log K]$ were provided. However, two issues were left open in BK.
Firstly, when $X$ has a Gaussian component it was not clear when the optimal stopping region for the minimiser consists of a point and when of an interval, i.e. when $y^*=\log K$ and when $y^*>\log K$ holds. Secondly, no characterisation was given of $y^*$. In this paper we give an answer to both these issues. In particular, we show that when $\sigma_{X}>0$ there exists a critical value $\delta_0 \in (0,\bar{\delta})$ such that the stopping region for the minimiser is a single point when $\delta\in[\delta_0,\bar{\delta})$ and a full interval when $\delta\in(0,\delta_0)$, cf. Theorem \ref{thm_Erik_main} (see also Remark \ref{rem_Kees1}). Furthermore we show that $y^*$ and $\delta_0$ can be characterised as unique solutions to functional equations using scale functions, cf. Theorem \ref{thm_Erik2}. The rest of this paper is organised as follows. In the remainder of this introduction we introduce scale functions and some notation (Subsection \ref{subsec_scale}), and review the results from BK in more detail (Subsection \ref{subsec_review}). In Section \ref{sec_main_res} we present our new results. Finally, in Section \ref{sec_jump_diff} we translate these results to a specific jump-diffusion setting, accompanied by some plots. \subsection{Scale functions}\label{subsec_scale} First we introduce some notation for first entry times. For $a \leq b$ we write \[ \tau_a^+ := \inf \{ t>0 \, | \, X_t>a \}, \quad \tau_a^- := \inf \{ t>0 \, | \, X_t<a \} \quad \mbox{and} \quad T_{[a,b]} := \inf \{ t>0 \, | \, X_t \in [a,b] \}. \] Furthermore we denote the often used first hitting time of $\log K$ for simplicity by $T_K$, that is $T_K := \inf \{ t>0 \, | \, X_t = \log K \}$. A useful class of functions when studying first exit problems driven by spectrally negative L\'evy processes are so-called scale functions. We shortly review some of their properties as they play an important role in this paper, for a more complete overview the reader is e.g. referred to Chapter VII in Bertoin \cite{Bertoin96} or Chapter 8 in Kyprianou \cite{Kyprianou06}. For each $q \geq 0$ the scale functions $W^{(q)}: \mathbb{R} \to [0,\infty)$ are known to satisfy for all $x \in \mathbb{R}$ and $a \geq 0$ \begin{equation}\label{K_30sept2} \mathbb{E}_x [ e^{-q \tau_a^+} \mathbf{1}_{\{ \tau_a^+<\tau_0^- \}} ] = \frac{W^{(q)}(x \wedge a)}{W^{(q)}(a)}. \end{equation} In particular it is evident that $W^{(q)}(x)=0$ for all $x<0$. Furthermore it is known that $W^{(q)}$ is almost everywhere differentiable on $(0,\infty)$, it is right continuous at zero and \begin{equation}\label{K_18sept2} \int_0^{\infty} e^{-\beta x} W^{(q)}(x) \, \mbox{d}x = \frac{1}{\psi(\beta)-q} \end{equation} for all $\beta>\Phi(q)$, where $\Phi(q)$ is the largest root of the equation $\psi(\theta)=q$ (of which there are at most two, recall that $\psi$ is the Laplace exponent of $X$). If $X$ has a Gaussian component $\sigma_X>0$ it is known that $W^{(q)} \in C^2(0,\infty)$ with $W^{(q)}(0)=0$ and $W^{(q){\prime}}(0)=2/\sigma_X^2$. We usually write $W=W^{(0)}$. Associated to the functions $W^{(q)}$ are the functions $Z^{(q)}: \mathbb{R} \to [1,\infty)$ defined by \begin{equation}\label{K_18sept3} Z^{(q)}(x) = 1+q \int_0^x W^{(q)}(y) \, \mbox{d}y \end{equation} for $q \geq 0$. Together the functions $W^{(q)}$ and $Z^{(q)}$ are collectively known as scale functions and predominantly appear in almost all fluctuation identities for spectrally negative L\'evy processes. 
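For the reader who wishes to experiment numerically, the identity (\ref{K_18sept2}) is easy to check in the one case where $W^{(q)}$ is completely explicit, namely when $X$ is a standard Brownian motion, for which $\psi(\beta)=\beta^2/2$ and $W^{(q)}(x)=\sqrt{2/q}\,\sinh(\sqrt{2q}\,x)$. The following short script is only an illustration of ours (the Brownian case and the parameter values are not used anywhere below); it compares both sides of (\ref{K_18sept2}).
\begin{verbatim}
# Sanity check of the Laplace transform identity for W^{(q)}, assuming
# X is a standard Brownian motion (illustration only):
# psi(beta) = beta^2/2 and W^{(q)}(x) = sqrt(2/q)*sinh(sqrt(2q)*x).
import numpy as np
from scipy.integrate import quad

q, beta = 0.7, 3.0                  # any beta > Phi(q) = sqrt(2q)
a = np.sqrt(2.0 * q)
# e^{-beta*x} * W^{(q)}(x), with sinh expanded to avoid overflow at large x
integrand = lambda x: np.sqrt(2.0 / q) * 0.5 * (np.exp((a - beta) * x)
                                                - np.exp(-(a + beta) * x))
lhs, _ = quad(integrand, 0.0, np.inf)
rhs = 1.0 / (beta ** 2 / 2.0 - q)   # 1/(psi(beta) - q)
print(lhs, rhs)                     # both approximately 0.2632
\end{verbatim}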
For example, it is also known that for all $x \in \mathbb{R}$ and $a,q \geq 0$ \[ \mathbb{E}_x [ e^{-q \tau_0^-} \mathbf{1}_{\{ \tau_a^+>\tau_0^- \}} ] = Z^{(q)}(x \wedge a) - \frac{Z^{(q)}(a)}{W^{(q)}(a)} W^{(q)}(x \wedge a) \] and \begin{equation}\label{K_30sept1} \mathbb{E}_x [ e^{-q \tau_0^-} \mathbf{1}_{\{ \tau_0^-<\infty \}} ] = Z^{(q)}(x) - \frac{q}{\Phi(q)} W^{(q)}(x), \end{equation} where $q/\Phi(q)$ is to be understood in the limiting sense $\psi'(0) \wedge 0$ when $q=0$. For $c > 0$, consider the change of measure \begin{equation}\label{K_18sept1} \left. \frac{d\mathbb{P}^c}{d\mathbb{P}} \right|_{\mathcal{F}_t} = e^{cX_t-\psi(c)t}. \end{equation} Under $\mathbb{P}^c$, the process $X$ is still a spectrally negative L\'evy process and we mark its Laplace exponent and scale functions with the subscript $c$. From $\psi_c(\lambda)=\psi(\lambda)-\psi(c)$ for $\lambda \geq 0$ we get by taking Laplace transforms \[ W^{q}_c(x)=e^{-x} W^{(q+\psi(1))}(x) \] for all $q \geq 0$. \subsection{Reviewing the McKean stochastic game}\label{subsec_review} First consider the McKean optimal stopping problem (or American put option) with value function $U$, i.e. \[U(x)=\sup_\tau\mathbb{E}_x[e^{-q\tau}(K-e^{X_\tau})^+].\] We recall the solution to this problem as it appears in \cite{Chan} (see also \cite{Mordecki}): \begin{theorem}\label{avram} For the McKean optimal stopping problem under (\ref{ass}) we have \[ U(x) = K Z^{(q)}(x-k^*) - e^x Z_1^{(q-\psi(1))}(x-k^*), \] where \[ e^{k^*} = K\frac{q}{\Phi(q)}\frac{\Phi(q)-1}{q-\psi(1)}, \] which is to be understood in the limiting sense when $q=\psi(1)$, in other words, $e^{k^*} = K \psi(1)/\psi'(1)$. An optimal stopping time is given by $\tau^*=\inf\{t>0 : X_t < k^*\}$. \end{theorem} Next we recall the main result from BK on a saddle point and the value function for the McKean game: \begin{theorem}\label{mainthrm}Consider the McKean stochastic game under the assumption (\ref{ass}). \begin{itemize} \item[(i)] If $\delta \geq U(\log K)$, then a stochastic saddle point is given by $\tau^*$ from Theorem \ref{avram} and $\sigma^*=\infty$, in which case $V=U.$ \item[(ii)] If $\delta< U(\log K)$, a stochastic saddle point is given by the pair \[ \tau^*= \inf\{t>0 : X_t <x^*\} \text{ and }\sigma^*=\inf\{t> 0 : X_t \in [\log K, y^*]\}, \] where $x^*$ uniquely solves \begin{equation} Z^{(q)}(\log K -x) - Z_1^{(q-\psi(1))} (\log K - x)= \frac{\delta}{K}, \label{howtofindx*} \end{equation} $x^*> k^*$ (the optimal level of the corresponding McKean optimal stopping problem in Theorem \ref{avram}) and $y^* \geq \log K$. Furthermore, \[V(x) = KZ^{(q)}(x - x^*) - e^xZ_1^{(q-\psi(1))}(x- x^*)\] for $x\leq \log K$ and if $y^*=\log K$ then for any $x \in \mathbb{R}$ \[ V(x) = KZ^{(q)}(x-x^*)-e^x Z_1^{(q-\psi(1))}(x-x^*) + \alpha e^{\Phi (q)(\log K-x^*)} W^{(q)}(x-\log K), \] where \[ \alpha = e^{x^*} \frac{q-\psi(1)}{\Phi (q)-1}-\frac{qK}{\Phi (q)}, \] which is to be understood in the limiting sense when $q=\psi(1)$, i.e. $\alpha=e^{x^*} \psi'(1)-K\psi(1)$. \end{itemize} \end{theorem} Hence a saddle point exists, and consists of the first hitting time of $(-\infty,x^*]$ for the maximizer and of the first hitting time of $[\log K,y^*]$ for the minimizer. Furthermore equation (\ref{howtofindx*}) gives us a characterisation of $x^*$, but we know only little about $y^*$. 
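As a small numerical illustration of the above (ours, and in no way part of the analysis below), equation (\ref{howtofindx*}) can be solved for $x^*$ by standard root finding once $W^{(q)}$ is available. The sketch below assumes that $X$ is a Brownian motion with drift $\mu$ (so $\Pi=0$, a case this paper is not primarily concerned with), for which $W^{(q)}(x)=(e^{\Phi(q)x}-e^{\zeta(q)x})/\sqrt{\mu^2+2q}$ with $\Phi(q)=-\mu+\sqrt{\mu^2+2q}$ and $\zeta(q)=-\mu-\sqrt{\mu^2+2q}$, and it uses the relation $W_1^{(q-\psi(1))}(x)=e^{-x}W^{(q)}(x)$ recalled in Subsection \ref{subsec_scale}; the parameter values are arbitrary and are chosen so that (\ref{ass}) holds and $\delta<\bar{\delta}$.
\begin{verbatim}
# Illustration only: solve (howtofindx*) for x* by root finding, assuming
# X_t = W_t + mu*t (Brownian motion with drift, sigma_X = 1, no jumps).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

mu, q, K, delta = -0.1, 0.5, 1.0, 0.05   # psi(1) = 0.4 <= q, delta < bar(delta)
root = np.sqrt(mu ** 2 + 2 * q)
Phi, zeta = -mu + root, -mu - root
W = lambda x: (np.exp(Phi * x) - np.exp(zeta * x)) / root        # W^{(q)}
Z = lambda z: 1 + q * quad(W, 0, z)[0]                           # Z^{(q)}
Z1 = lambda z: 1 + (q - (0.5 + mu)) * quad(lambda y: np.exp(-y) * W(y), 0, z)[0]
eq = lambda x: Z(np.log(K) - x) - Z1(np.log(K) - x) - delta / K  # (howtofindx*)
x_star = brentq(eq, np.log(K) - 5.0, np.log(K) - 1e-9)
print(x_star)                            # roughly -0.44 for these parameters
\end{verbatim}
(Here \texttt{Z1} stands for $Z_1^{(q-\psi(1))}$; any other spectrally negative model can be plugged in by replacing the function \texttt{W} with the corresponding scale function.)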
The issue of when $y^*=\log K$ and when $y^*>\log K$ holds was in BK only answered when $X$ has no Gaussian component: \begin{theorem} \label{mainthrmII} Suppose in Theorem \ref{mainthrm} that $\delta< U(\log K)$. If $X$ has no Gaussian component, then $y^*>\log K$ and necessarily $\Pi(-\infty, \log K -y^*)>0$. \end{theorem} \begin{remark}\label{rem_Kees1} These results have a clear interpretation. Starting from any $X_0>\log K$, the minimizer could either stop right away and pay $\delta$ to the maximizer, or wait a short $\Delta t$. The latter decision has the advantage of profiting from the discounting, but the disadvantage of the risk that a (large) negative jump could bring $X$ (far) below $\log K$, where a higher payoff than (discounted) $\delta$ can be claimed by the maximizer. The closer $X_0$ is chosen to $\log K$, the more dominant the disadvantage becomes, hence the exercise region for the minimiser takes the form of an interval $[\log K,y^*]$. When $X$ is a Brownian motion it is obvious that we have $y^*=\log K$ for any $\delta \in (0,\bar{\delta}]$ (see also \cite{Kyprianou04}). The above Theorem \ref{mainthrmII} tells us that the other extreme case, namely $y^*>\log K$ for any $\delta \in (0,\bar{\delta}]$, i.e. the disadvantage of waiting being dominant for the minimizer, occurs whenever $X$ has no Gaussian component. The interesting question is what happens when $X$ has a Gaussian component \emph{and} negative jumps. It turns out that for $\delta$ large enough, when stopping immediately is relatively expensive, the Gaussian part 'wins' in the sense that $y^*=\log K$, while for $\delta$ small enough, when stopping immediately has become cheaper, the negative jumps 'win' in the sense that $y^*>\log K$, see Theorem \ref{thm_Erik_main} below. \end{remark} \section{Single point or interval when $X$ has a Gaussian part $\sigma_{X}>0$}\label{sec_main_res} Throughout this section we assume that condition (\ref{ass}) holds. Recall that $T_K := \inf \{ t>0 \, | \, X_t = \log K \}$. Consider the following function \begin{equation}f_{\delta}(x)=\sup_\tau \mathbb{E}_x[e^{-q \tau}(K-e^{X_\tau})\mathbf{1}_{\{\tau\leq T_K\}}+\delta e^{-q T_K}\mathbf{1}_{\{T_K<\tau\}}], \label{auxosp} \end{equation} i.e. the optimal value for the maximizer provided the minimiser only exercises when $X$ hits $\log K$. We first prove the following technical result. \begin{lemma}\label{hulp} Suppose $\sigma_{X}>0$ and $\delta \in (0,\bar{\delta}]$. The function $f_{\delta}$ is differentiable on $\mathbb{R}\backslash \{\log K\}$. Furthermore, $f_{\delta}=V$ on $(-\infty,\log K],$ $f_{\delta}\geq V$ on $\mathbb{R}$ and $f_{\delta}'(\log K+)$ is a strictly decreasing continuous function of $\delta$. \end{lemma} \begin{proof} Let $\delta \in (0,\bar{\delta}]$. Due to Theorem \ref{mainthrm} and the absence of positive jumps we have for $x\leq \log K$ \begin{eqnarray*} V(x)&=&\mathbb{E}_x[e^{-q\tau_{x^*(\delta)}^-}(K-e^{X_{\tau_{x^*(\delta)}^-}})\mathbf{1}_{\{\tau_{x^*(\delta)}^-<T_K\}}+\delta e^{-qT_K}\mathbf{1}_{\{T_K<\tau_{x^*(\delta)}^-\}}]\\ &=&\sup_{\tau}\mathbb{E}_x[e^{-q\tau}(K-e^{X_{\tau}})\mathbf{1}_{\{\tau<T_K\}}+\delta e^{-qT_K}\mathbf{1}_{\{T_K<\tau\}}]\\ &=&f_{\delta}(x). 
\end{eqnarray*} Also, for any $x\in\mathbb{R}$ \begin{eqnarray*}f_{\delta}(x)&=&\sup_\tau \mathbb{E}_x[e^{-q \tau}(K-e^{X_\tau})\mathbf{1}_{\{\tau\leq T_K\}}+\delta e^{-q T_K}\mathbf{1}_{\{T_K<\tau\}}]\\ &\geq &\inf_\sigma\sup_\tau \mathbb{E}_x[e^{-q \tau}(K-e^{X_\tau})\mathbf{1}_{\{\tau\leq \sigma\}}+\delta e^{-q \sigma}\mathbf{1}_{\{\sigma<\tau\}}]\\ &=&V(x). \end{eqnarray*} In fact, since stopping is not optimal on $(\log K,\infty)$ as the lower payoff function is zero there, we deduce that we have for all $x\in\mathbb{R}$ \begin{equation}\label{Kees_28jul1} f_{\delta}(x)= \mathbb{E}_x[e^{-q \tau_{x^*(\delta)}^-}(K-e^{X_{\tau_{x^*(\delta)}^-}})\mathbf{1}_{\{\tau_{x^*(\delta)}^-\leq T_K\}}+\delta e^{-q T_K}\mathbf{1}_{\{T_K<\tau_{x^*(\delta)}^-\}}]. \end{equation} Now, let $\delta_2>\delta_1>c$ for some $c>0$. From the definition of $f_{\delta}$ in (\ref{auxosp}) we find \begin{eqnarray*}f_{\delta_2}(x)-f_{\delta_1}(x)&=&\sup_\tau \mathbb{E}_x[e^{-q \tau}(K-e^{X_\tau})\mathbf{1}_{\{\tau\leq T_K\}}+\delta_2 e^{-q T_K}\mathbf{1}_{\{T_K<\tau\}}]\\ &&-\sup_\tau \mathbb{E}_x[e^{-q \tau}(K-e^{X_\tau})\mathbf{1}_{\{\tau\leq T_K\}}+\delta_1 e^{-q T_K}\mathbf{1}_{\{T_K<\tau\}}]\\ &\leq&(\delta_2-\delta_1)\sup_\tau \mathbb{E}_x[e^{-q T_K}\mathbf{1}_{\{T_K<\tau\}}]\\ &\leq & (\delta_2-\delta_1) \mathbb{E}_x[e^{-q \tau_{\log K}^-}], \end{eqnarray*} from which it follows that (the equality below being due to (\ref{K_30sept1})) \begin{eqnarray*}\frac{f_{\delta_2}(\log K+\varepsilon)-\delta_2}{\varepsilon}-\frac{f_{\delta_1}(\log K+\varepsilon)-\delta_1}{\varepsilon}&\leq& (\delta_2-\delta_1) \frac{\mathbb{E}_{\log K+\varepsilon}[e^{-q \tau_{\log K}^-}]-1}{\varepsilon}\\ &=&(\delta_2-\delta_1) \left( \frac{Z^{(q)}(\varepsilon)-1}{\varepsilon} - \frac{q}{\Phi(q)} \frac{W^{(q)}(\varepsilon)}{\varepsilon} \right). \end{eqnarray*} Since $f_{\delta}$ is a differentiable function on $[\log K,\infty)$ (see equation (27) in BK together with (\ref{Kees_28jul1})) and using $Z^{(q)\prime}(0)=W^{(q)}(0)=0$, $W^{(q)\prime}(0+)=2/\sigma_{X}^2$, we deduce that \begin{equation}\label{K_30sept3} f_{\delta_2}'(\log K+)-f_{\delta_1}'(\log K+)\leq -\frac{2q}{\sigma_{X}^2\Phi(q)}(\delta_2-\delta_1), \end{equation} showing that $f_{\delta}'(\log K+)$ is strictly decreasing in $\delta$. Also, using (\ref{auxosp}) and the fact that $\tau_{x^*(\delta_1)}^-$ is also a feasible strategy when $\delta=\delta_2$, it holds that \begin{eqnarray*}f_{\delta_2}(x)-f_{\delta_1}(x)&\geq &\mathbb{E}_x[e^{-q \tau_{x^*(\delta_1)}^-}(K-e^{X_{\tau_{x^*(\delta_1)}^-}})\mathbf{1}_{\{\tau_{x^*(\delta_1)}^-\leq T_K\}}+\delta_2 e^{-q T_K}\mathbf{1}_{\{T_K<\tau_{x^*(\delta_1)}^-\}}]\\ &&-\mathbb{E}_x[e^{-q \tau_{x^*(\delta_1)}^-}(K-e^{X_{\tau_{x^*(\delta_1)}^-}})\mathbf{1}_{\{\tau_{x^*(\delta_1)}^-\leq T_K\}}+\delta_1 e^{-q T_K}\mathbf{1}_{\{T_K<\tau_{x^*(\delta_1)}^-\}}]\\ &=&(\delta_2-\delta_1)\mathbb{E}_x[e^{-q T_K}\mathbf{1}_{\{T_K<\tau_{x^*(\delta_1)}^-\}}]\\ &\geq&(\delta_2-\delta_1)\mathbb{E}_x[e^{-q T_K}\mathbf{1}_{\{T_K<\tau_{x^*(c)}^-\}}], \end{eqnarray*} where the final inequality follows from the observation that $x^*(\delta)$ is decreasing in $\delta$ and that $\delta_1>c$.
Note that $x^*(c)<\log(K-c)$ since $V(x)$ is strictly decreasing in $x\in(-\infty,\log K]$ for any $\delta>0$, and thus, because of Lemma 12 in BK, \begin{eqnarray*}&&\hspace{-3cm}\frac{f_{\delta_2}(\log K+\varepsilon)-\delta_2}{\varepsilon}-\frac{f_{\delta_1}(\log K+\varepsilon)-\delta_1}{\varepsilon}\\ &\geq& (\delta_2-\delta_1) \frac{\mathbb{E}_{\log K+\varepsilon}[e^{-q T_K}\mathbf{1}_{\{T_K<\tau_{x^*(c)}^-\}}]-1}{\varepsilon}\\ &=&(\delta_2-\delta_1)\frac{W^{(q)}(\log K+\varepsilon-x^*(c))-W^{(q)}(\log K -x^*(c))}{\varepsilon W^{(q)}(\log K -x^*(c))}\\ &&-(\delta_2-\delta_1)e^{\Phi(q)(\log K -x^*(c))}\frac{W^{(q)}(\varepsilon)}{\varepsilon W^{(q)}(\log K-x^*(c))}. \end{eqnarray*} It follows that \[f_{\delta_2}'(\log K+)-f_{\delta_1}'(\log K+)\geq (\delta_2-\delta_1)\frac{\sigma_{X}^2W^{(q)\prime}(\log K-x^*(c))- 2e^{\Phi(q)(\log K -x^*(c))}}{\sigma_{X}^2W^{(q)}(\log K-x^*(c))}.\] Since $c$ is arbitrary, we conclude from this inequality together with (\ref{K_30sept3}) that $f_{\delta}'(\log K+)$ is indeed continuous in $\delta$ for any $\delta>0$. \end{proof} Now we are ready to prove our main result, extending Theorem \ref{mainthrm}: \begin{theorem}\label{thm_Erik_main} Suppose $\sigma_{X}>0$. If $\Pi\neq 0$, then there exists a unique $\delta_0 \in (0,\bar{\delta})$ such that an optimal stopping time for the minimiser is given by $T_K$ (i.e. $y^*(\delta)=\log K$) when $\delta \in [\delta_0,\bar{\delta}]$ and by $T_{[\log K, y^*(\delta)]}$ for some $y^*(\delta)>\log K$ when $\delta \in (0,\delta_0)$. \end{theorem} \begin{proof} Let $\sigma_{X}>0$ and suppose $\Pi\neq 0$. We know from Theorem \ref{mainthrm} that the stopping region for the minimiser is of the form $[\log K,y^*]$ for some $y^*\geq \log K$. We claim that setting $\delta_0$ equal to the unique zero of $f_{\delta}'(\log K+)$ on $(0,\bar{\delta})$ yields the result. First let us show that this unique zero indeed exists. For $\delta=\bar{\delta}$ it holds that $f_{\delta}'(\log K+)=U'(\log K)<0$ (cf. Theorem \ref{avram}). Using Lemma \ref{hulp}, it suffices to show that there exists some $\delta>0$ such that $f_{\delta}'(\log K+)>0$. We argue by contradiction, so, again using Lemma \ref{hulp}, suppose that $f_{\delta}'(\log K+)<0$ for all $\delta>0$. This implies that for each $\delta>0$ there exists some $\varepsilon>0$ such that $f_{\delta}(x)<f_{\delta}(\log K)=\delta$ for all $x\in(\log K,\log K+\varepsilon].$ Since $V \leq f_{\delta}$ (Lemma \ref{hulp}) we deduce that $V(x)<\delta=(K-e^x)^++\delta$ for all $x\in(\log K, \log K +\varepsilon)$, hence $y^*=\log K$ and in fact $V=f_{\delta}$ (by (\ref{auxosp})). But plugging $\tau_{\log K/2}^-$ into the rhs of (\ref{auxosp}) yields \[f_{\delta}(x)\geq K/2\mathbb{E}_x[e^{-q\tau_{\log K/2}^-}\mathbf{1}_{\{\tau_{\log K/2}^-<T_K\}}].\] This lower bound is strictly positive for $x>\log K$ since $\Pi\neq 0$ and does not depend on $\delta$. Hence for $\delta$ small enough we deduce the existence of some $x>\log K$ such that $f_{\delta}(x)>\delta$, which contradicts $f_{\delta}(x)=V(x) \leq \delta$ on $[\log K,\infty)$. Next we turn to the optimal stopping time of the minimiser. For $\delta>\delta_0$ the same reasoning as above yields $y^*=\log K$. For the case $\delta=\delta_0$ we note that for any fixed $x$ the function $f_{\delta}(x)$ is continuous in $\delta$, as is easily seen from (\ref{auxosp}). Hence \[f_{\delta_0}(x)=\lim_{\delta\downarrow \delta_0}f_{\delta}(x)\leq (K-e^x)^++\delta_0,\] from which we can deduce that we still have $y^*=\log K$. Finally, let $\delta<\delta_0$.
Again, much as above, we then have that $f_{\delta}'(\log K+)>0$ and thus there exists some $x>\log K$ for which $f_{\delta}(x)>\delta=(K-e^x)^++\delta$. Since trivially $V$ is bounded above by this upper payoff function, it cannot be true that $f_{\delta}=V$ and thus it can also not be true that $y^*=\log K$, so we indeed arrive at $y^*>\log K$. \end{proof} \begin{remark} It should be clear from the proof of the above Theorem \ref{thm_Erik_main} that this result is essentially due to the upper payoff function $(K-e^x)^++\delta$ having a kink at the point where it first touches the value function as $\delta$ decreases (namely $\log K$). That is, if we were to alter the upper payoff function slightly on a neighbourhood of $\log K$ so that it had a continuous derivative, we should expect the optimal stopping time for the minimiser to be $T_{[y_1^*,y_2^*]}$ with $y_1^*<\log K<y_2^*$ for \emph{all} $\delta \in (0,\bar{\delta})$ and \emph{any} spectrally negative L\'evy process $X$. \end{remark} Next we provide expressions that complement those from Theorem \ref{mainthrm}. Recall that Theorem \ref{mainthrm} in particular already provides us with a formula for $V$ on $(-\infty,\log K]$, so we can make use of the following function: \begin{equation}\label{wdelta}w_\delta(x)=\left\{\begin{array}{ll} V(x)&\mbox{for $x<\log K$}\\ \delta&\mbox{for $x\geq \log K$}. \end{array} \right. \end{equation} \begin{theorem}\label{thm_Erik2} Suppose $\Pi \not= 0$. We have the following. \begin{itemize} \item[(i)] Suppose $\sigma_{X}>0$. Then $\delta_0$ is the unique solution on $(0,\bar{\delta})$ to the equation in $\delta$: \[ \int_{t<0}\int_{u<t}(w_\delta(t+\log K)-\delta)e^{-\Phi(q)(t-u)}\Pi(du)dt=\frac{\delta q}{\Phi(q)}. \] \item[(ii)] Suppose $y^*>\log K$ (i.e. $\sigma_{X}>0$ and $\delta <\delta_0$, or $\sigma_{X}=0$ and $\delta<\bar{\delta}$). Then $y^*$ is the unique solution on $(\log K,\infty)$ to the equation in $y$: \begin{equation}\label{Kees_29jul2} \int_{t<0}\int_{u<t}(w_\delta(t+y)-\delta)e^{-\Phi(q)(t-u)}\Pi(du)dt=\frac{\delta q}{\Phi(q)}. \end{equation} Furthermore, $V(x)=\delta$ for $x \in [\log K,y^*]$ and for $x \in (y^*,\infty)$: \begin{equation}\label{29jul3} V(x) = \delta Z^{(q)}(x-y^*) - \int_{t<0}\int_{u<t}(w_\delta(t+y^*)-\delta) W^{(q)}(x-y^*-t+u) e^{-\Phi(q)(t-u)}\Pi(du)dt. \end{equation} \end{itemize} \end{theorem} \begin{proof} First we introduce the function \begin{equation}h(x,y):=\mathbb{E}_x[e^{-q\tau_{y}^-}w_\delta(X_{\tau_y^-})] \label{h}\end{equation} for $x>y \geq \log K$. Observe that by the lack of positive jumps, $h(\cdot,y)$ is the optimal value the maximiser can obtain when the minimiser chooses as stopping region $[\log K,y]$. Hence in particular $V(x)=h(x,y^*)$. Denote by $u^{(q)}(s,t)$ the resolvent density of $X$ started at $s>0$ and killed at first passage below $0$. Invoking the compensation formula (see e.g. Theorem 4.4 in \cite{Kyprianou06}) leads to \begin{eqnarray*} h(x,y)&=&\delta \mathbb{E}_x[e^{-q\tau_y^-}]+\mathbb{E}_x[e^{-q\tau_{y}^-}(w_\delta(X_{\tau_y^-})-\delta)\mathbf{1}_{\{X_{\tau_y^-}<\log K\}}]\\ &=&\delta \mathbb{E}_x[e^{-q\tau_y^-}]+\int_{t<\log K-y}\int_{u<t}(w_\delta(t+y)-\delta)u^{(q)}(x-y,t-u)\Pi(du)dt\\ &=&\delta \mathbb{E}_x[e^{-q\tau_y^-}]+\int_{t<0}\int_{u<t}(w_\delta(t+y)-\delta)u^{(q)}(x-y,t-u)\Pi(du)dt, \end{eqnarray*} where the final equality is due to the fact that $w_\delta=\delta$ on $[\log K,y]$. We know that (see e.g.
Theorem 8.1 and Corollary 8.8 in \cite{Kyprianou06} respectively) \[\mathbb{E}_x[e^{-q\tau_y^-}]=Z^{(q)}(x-y)-\frac{q}{\Phi(q)}W^{(q)}(x-y)\] and \[u^{(q)}(s,t)=e^{-\Phi(q)t}W^{(q)}(s)-W^{(q)}(s-t),\] hence \begin{eqnarray} h(x,y)&=& \int_{t<0}\int_{u<t}(w_\delta(t+y)-\delta)(e^{-\Phi(q)(t-u)}W^{(q)}(x-y)-W^{(q)}(x-y-t+u))\Pi(du)dt\nonumber \\ &&+\delta(Z^{(q)}(x-y)-\frac{q}{\Phi(q)}W^{(q)}(x-y)).\label{compensation2} \end{eqnarray} Furthermore, when $X$ is of unbounded variation we can compute for $x>y$ \begin{eqnarray*} \frac{\partial}{\partial x}h(x,y)&=&\delta(qW^{(q)}(x-y)-\frac{q}{\Phi(q)}W^{(q)^\prime}(x-y))\\ &&\hspace{-2cm}+\int_{t<0}\int_{u<t}(w_\delta(t+y)-\delta)(e^{-\Phi(q)(t-u)}W^{(q)^\prime}(x-y)-W^{(q)^\prime}(x-y-t+u))\Pi(du)dt. \end{eqnarray*} and we can let $x \downarrow y$ to arrive at \begin{equation}\label{Kees_29jul1} \frac{\partial}{\partial x}h(y+,y) = \left(\int_{t<0}\int_{u<t}(w_\delta(t+y)-\delta)e^{-\Phi(q)(t-u)}\Pi(du)dt-\frac{q\delta}{\Phi(q)}\right)W^{(q)^\prime}(0+). \end{equation} Ad (i). Recall the function $f_{\delta}$ as defined in (\ref{auxosp}), and recall in particular from the proof of Lemma \ref{hulp} that $\delta_0$ is the unique $\delta \in (0,\bar{\delta})$ for which $f_{\delta}'(\log K+)=0$. Furthermore, note that $f_{\delta}(x)=h(x,\log K)$ for $x>\log K$, since both sides equal the optimal value the maximizer can obtain when the minimiser only stops when $X$ hits $\log K$. Combining these observations with (\ref{Kees_29jul1}) and $W^{(q)^\prime}(0+)=2/\sigma_{X}^2 \not= 0$ yields the result. Ad (ii). We first consider the case when $X$ is of bounded variation. We know from Theorem 4 in BK that we have continuous fit, i.e. $V(y^*+)=\delta$. Since the integrand in (\ref{compensation2}) is bounded and equal to zero for $t<\log K-y$ we can take the limit inside the integrals to deduce that \[h(y+,y)=\delta-\frac{q\delta}{\mathrm{d}\Phi(q)} +\frac{1}{\mathrm{d}}\int_{t<0}\int_{u<t}(w_\delta(t+y)-\delta)e^{-\Phi(q)(t-u)}\Pi(du)dt,\] so using $V(y^*+)=h(y^*+,y^*)$ it follows that $y^*$ indeed solves (\ref{Kees_29jul2}). For uniqueness, the function $w_\delta=V$ is strictly decreasing on $(-\infty,\log K]$ and $\delta=V(y^*)=h(y^*+,y^*).$ Since $q>0$, the minimiser would not stop at points in $[\log K,\infty]$ from which the process cannot jump into $(-\infty,\log K)$ and thus $\log K-y^*>l:=\sup\{x:\Pi(-\infty,x)=0\}.$ Combining these observations imply that $h(y+,y)$ is a strictly decreasing function on $[\log K,\log K-l]$. Next consider the case that $X$ is of unbounded variation. Now Theorem 4 in BK tells us that we have smooth fit, i.e. $V'(y^*+)=0$. Using $V(x)=h(x,y^*)$ together with (\ref{Kees_29jul1}) yields again that $y^*$ solves (\ref{Kees_29jul2}), uniqueness follows in the same way as in the previous paragraph. Finally, (\ref{29jul3}) is readily seen from $V(x)=h(x,y^*)$, (\ref{compensation2}) and the fact that $y^*$ satisfies (\ref{Kees_29jul2}). \end{proof} We conclude this section with some properties of $y^*$ as a function of $\delta$. Note that by spectral negativity, $\Pi \not=0$ implies $\sup \{ x \, : \, \Pi(-\infty,x)=0 \} < 0$. \begin{theorem} Suppose $\Pi \not=0$. Then $y^*(\delta)$ is continuous and decreasing as a function of $\delta$, with $y^*(\bar{\delta}-)=\log K$ if $\sigma_{X}=0$ (resp. $y^*(\delta_0-)=\log K$ if $\sigma_{X}>0$) and $y^*(0+)=\log K-\sup \{ x \, : \, \Pi(-\infty,x)=0 \}$. \end{theorem} \begin{proof} We write $V_{\delta}$ to stress the dependence of the value function on $\delta$. 
Continuity of $y^*(\delta)$ is clear as the above Theorem \ref{thm_Erik2} (ii) and the fact that $w_{\delta}$ is continuous in $\delta$ (see the argument for continuity of $\delta \mapsto V_{\delta}$ below) allow to apply the implicit function theorem. To see that it is decreasing it suffices to show that $\delta \mapsto V_{\delta}(x)-\delta$ is decreasing. For this, take $\delta_1<\delta_2$ and let $(\tau^*_1,\sigma^*_1)$ denote the saddle point when $\delta=\delta_1$. Then $V_{\delta_1}$ is the value when the supremum over all pairs $(\tau,\sigma^*_1)$ is taken. As $\sigma^*_1$ is also feasible for the minimiser when $\delta=\delta_2$ we have that $V_{\delta_2}$ is bounded above by the value when the supremum over the same pairs $(\tau,\sigma^*_1)$ is taken. This yields \begin{eqnarray} V_{\delta_2}(x)-V_{\delta_1}(x) & \leq & \sup_{\tau} \mathbb{E}_x [ e^{-q \sigma^*_1}((K-e^{X_{\sigma^*_1}})^+ +\delta_2) \mathbf{1}_{\{ \sigma^*_1<\tau \}} \nonumber\\ & & \quad - e^{-q \sigma^*_1} ((K-e^{X_{\sigma^*_1}})^+ +\delta_1) \mathbf{1}_{\{ \sigma^*_1<\tau \}} ] \nonumber\\ & \leq & \delta_2-\delta_1, \label{Kees_30jul5} \end{eqnarray} as required. Next, by the monotonicity the limits mentioned in the theorem exist. First we show $y^*(0+)=\log K-l$, where $l:=\sup \{ x \, : \, \Pi(-\infty,x)=0 \}$. Suppose we had $y^*(0+)<\log K-l$, then for some $x_1 \in (y^*(0+),\log K-l)$ and any $\delta>0$ we have $\mathbb{P}_{x_1}(\tau^-_{\log K/2}<T_{[\log K,y^*(\delta)]}) \geq \mathbb{P}_{x_1}(\tau^-_{\log K/2}<T_{[\log K,y^*(0+)]})>0$. So, starting from $x_1$, if the maximizer chooses $\tau^-_{\log K/2}$ he ensures a strictly positive value, independent of $\delta$. But this of course contradicts with $V_{\delta}(x_1) \leq \delta \downarrow 0$ as $\delta \downarrow 0$. If we had $y^*(0+)>\log K-l$, then for some $x_2 \in (\log K-l,y^*(0+))$ we have for $\delta$ small enough $x_2 \leq y^*(\delta)$ and consequently $V_{\delta}(x_2)=\delta$. But the minimiser can do better, that is in fact we have $V_{\delta}(x_2)<\delta$, as is easily seen. Namely, the minimiser can choose $T_{[\log K,\log K-l]}$, so that starting from $x_2>\log K-l$ the maximiser can at most get discounted $\delta$, the discount factor being strictly less than $1$ since $q>0$ and $X$ is right continuous. Next suppose $\sigma_{X}>0$ and let us show that $y^*(\delta_0-)=\log K$. Suppose we had $y^*(\delta_0-)>\log K$. Note that for any $x$, $\delta \mapsto V_{\delta}(x)$ is continuous, since for $\delta_1<\delta_2$ trivially $V_{\delta_2}(x) \geq V_{\delta_1}(x)$ and (\ref{Kees_30jul5}). So for $\log K<x_1<x_2<y^*(\delta_0-)$ it would follow that $V_{\delta}(x_1)-V_{\delta}(x_2) \to V_{\delta_0}(x_1)-V_{\delta_0}(x_2)=\delta_0-\delta_0=0$ as $\delta \downarrow \delta_0$. But the difference $V_{\delta}(x_1)-V_{\delta}(x_2)$ does not vanish as $\delta \downarrow \delta_0$, as follows easily from the homogeneity of $X$. More precisely, denoting by $(\tau^*_1,\sigma^*_1)$ resp. $(\tau^*_2,\sigma^*_2)$ the saddle point when starting from $x_1$ resp. 
$x_2$, arguments similar to the ones leading to (\ref{Kees_30jul5}) yield in this case \[ V_{\delta}(x_1) \geq \mathbb{E} [ e^{-q \tau^*_2} (K-e^{x_1 + X_{\tau^*_2}})^+ \mathbf{1}_{\{ \tau^*_2 \leq \sigma^*_1 \}} + e^{-q \sigma^*_1} ((K-e^{x_1 + X_{\sigma^*_1}})^+ +\delta)\mathbf{1}_{\{ \sigma^*_1 <\tau^*_2 \}} ] \] and \[ V_{\delta}(x_2) \leq \mathbb{E} [ e^{-q \tau^*_2} (K-e^{x_2 + X_{\tau^*_2}})^+ \mathbf{1}_{\{ \tau^*_2 \leq \sigma^*_1 \}} + e^{-q \sigma^*_1} ((K-e^{x_2 + X_{\sigma^*_1}})^+ +\delta)\mathbf{1}_{\{ \sigma^*_1 <\tau^*_2 \}} ], \] thus \begin{equation}\label{Kees_31jul1} V_{\delta}(x_1) - V_{\delta}(x_2) \geq \mathbb{E} [ e^{-q \kappa} ( (K-e^{x_1 + X_{\kappa}})^+ - (K-e^{x_2 + X_{\kappa}})^+) ] \end{equation} where $\kappa=\sigma^*_1 \wedge \tau^*_2=\inf \{ t>0 \, | \, X_t = \log K-x_1 \} \wedge \inf \{ t > 0 \, | \, X_t < x^*(\delta)-x_2 \}$. Clearly, since $x^*(\delta) \leq \log K$ and $x_1<x_2$, the rhs of (\ref{Kees_31jul1}) is strictly positive iff $\mathbb{P}(\tau^*_2<\sigma^*_1)>0$. Obviously also after taking the limit for $\delta \downarrow \delta_0$ this probability is positive on account of $\Pi \not=0$. Finally, $y^*(\bar{\delta}-)=\log K$ when $\sigma_{X}=0$ can be shown by the same arguments, taking into account that here one has $\sigma^*=\infty$ for $\delta>\bar{\delta}$. \end{proof} \section{Jump-diffusion case}\label{sec_jump_diff} In this section we translate the general results from the previous Section \ref{sec_main_res} to the particular case of a jump-diffusion with downwards directed, exponentially distributed jumps. In this case, which is quite popular in practical applications in finance, e.g. due to its tractable nature, the expressions become much more explicit. In particular a formula exists that expresses $y^*$ explicitly in terms of $x^*$, cf. Proposition \ref{K_1mei_3} (iv). For the sequel we set \begin{equation}\label{K_6apr_1} X_t = \sigma_{X} W_t + \mu t - \sum_{i=1}^{N_t} \xi_i, \quad t \geq 0, \end{equation} where $\sigma_{X}>0$, $\mu \in \mathbb{R}$, $N$ is a Poisson process with intensity $\lambda>0$ counting the jumps and $(\xi_i)_{i \geq 0}$ is an iid sequence of random variables following an exponential distribution with parameter $\theta>0$. The following Proposition \ref{K_7apr_1} states formulas for the scale functions in this jump-diffusion case (recall $\mathbb{P}^c$ as defined in (\ref{K_18sept1})): \begin{proposition}\label{K_7apr_1} Let $c,r \geq 0$. We have the following for $X$ given by (\ref{K_6apr_1}) under $\mathbb{P}^c$. \begin{itemize} \item[(i)] The Laplace exponent is given by \[ \psi_c(z) = \psi(z+c) - \psi(c) = \frac{\sigma_{X}^2}{2} z^2 +(\sigma_{X}^2 c+ \mu) z - \frac{\lambda \theta z}{(\theta+z+c)(\theta+c) }. \] The function $z \mapsto \psi_c(z)-r$ has three zeros $\beta_1(c,r)<-\theta-c<\beta_2(c,r) \leq \beta_3(c,r)$, with $\beta_2(c,r)<0<\beta_3(c,r)$ if $r>0$; $\beta_2(c,r)=0<\beta_3(c,r)$ if $r=0$ and $\psi_c'(0) \leq 0$; $\beta_2(c,r)<0=\beta_3(c,r)$ if $r=0$ and $\psi_c'(0) \geq 0$. \item[(ii)] In particular, if $r=\psi(1)>0$ we have \begin{multline*} \beta_{1,2}(0,r) = - \left( \frac{\theta}{2} + \frac{r}{\sigma_{X}^2} + \frac{\lambda}{\sigma_{X}^2 (\theta+1)} \right) \pm \sqrt{\left( \frac{\theta}{2} + \frac{r}{\sigma_{X}^2} + \frac{\lambda}{\sigma_{X}^2 (\theta+1)} \right)^2 - \frac{2r\theta}{\sigma_{X}^2}} \\ \mbox{and} \quad \beta_{3}(0,r)=1.
\end{multline*} \end{itemize} Define for $i=1,2,3$ the constants \[ C_i(c,r) = \frac{2 (\theta + c + \beta_i(c,r))}{\sigma_{X}^2 \prod_{j \not= i} ( \beta_j(c,r)-\beta_i(c,r) )}. \] We have the following formulas for the scale functions $W^{(r)}_c$ and $Z^{(r)}_c$ on $[0,\infty)$. \begin{itemize} \item[(iii)] If $\beta_2(c,r)\not=0$ or $\beta_3(c,r)\not=0$ we have \[ W^{(r)}_c(x) = \sum_{i=1}^3 C_i(c,r) e^{\beta_i(c,r) x}, \] otherwise (necessarily $r=0$) we have \[ W^{(0)}_c(x) = \frac{2}{\sigma_{X}^2 \beta_1(c,0)} \left( (1-c-\theta) e^{\beta_1(c,0) x} -(\theta+c) x + \theta+c-1 \right). \] \item[(iv)] If $r>0$ we have \[ Z^{(r)}_c(x) = r \sum_{i=1}^3 \frac{C_i(c,r)}{\beta_i(c,r)} e^{\beta_i(c,r) x},\] while $Z^{(0)}_c(x)=1$. \end{itemize} \end{proposition} \begin{proof} Follows from the definitions (\ref{K_18sept2}) and (\ref{K_18sept3}) by some elementary calculations. Also, see e.g. \cite{Avram04}. \end{proof} In the sequel we assume for simplicity $q>0$ and $q=\psi(1)$, i.e. we set $\mu :=q-\sigma_{X}^2/2+\lambda/(\theta+1)$. (Note that condition (\ref{ass}) is met). This means that $\mathbb{P}$ is a so-called risk neutral measure in the sense that the discounted price process $(e^{X_t-qt})_{t \geq 0}$ is a $\mathbb{P}$-martingale, as required in a financial modelling context. (However the reader should have no difficulties translating the upcoming formulas to the situation for any $q \in [0,\psi(1)]$ if required.) Note that the above Proposition \ref{K_7apr_1} (ii) gives explicit formulas for the roots $\beta_i(0,q)$ in this case. First we turn to formulas for the McKean optimal stopping problem (cf. Theorem \ref{avram}). \begin{proposition}\label{K_22apr_1} The value function $U$ of the McKean optimal stopping problem is given by \[ U(x) = \left\{ \begin{array}{ll} K-e^x & \mbox{if $x \leq k^*$} \\ c_1 e^{\beta_1(0,q)(x-k^*)} + c_2 e^{\beta_2(0,q)(x-k^*)} & \mbox{if $x > k^*$,} \end{array} \right. \] where \begin{multline*} c_1=\frac{\beta_2(0,q) K + (1-\beta_2(0,q)) e^{k^*}}{\beta_2(0,q)-\beta_1(0,q)}, \qquad c_2=\frac{\beta_1(0,q) K + (1-\beta_1(0,q)) e^{k^*}}{\beta_1(0,q)-\beta_2(0,q)} \\ \mbox{and} \qquad e^{k^*} = \frac{K q}{\sigma_{X}^2/2+q+\lambda/(\theta+1)^2}. \end{multline*} \end{proposition} \begin{proof} A direct derivation of these formulas can be found in \cite{Kou02} e.g. Alternatively, plugging the formulas from Proposition \ref{K_7apr_1} in the results from Theorem \ref{avram} we see that we can write \begin{equation}\label{K_1mei_1} U(x) = K q \sum_{i=1}^3 \frac{C_i(0,q)}{\beta_i(0,q)} e^{\beta_i(0,q) (x-k^*)} -e^x \quad \mbox{and} \quad e^{k^*} = K \frac{\psi(1)}{\psi'(1)}. \end{equation} Applying the identity \begin{equation}\label{K_1mei_2} \frac{\sigma_{X}^2}{2} \prod_{i=1}^3 (z-\beta_i(c,q)) = (\theta+z+c) (\psi_c(z)-q) \quad \mbox{for $z \not= -\theta-c$} \end{equation} to this particular case (i.e. $c=0$, $q=\psi(1)$, $\beta_3(0,q)=1$), dividing both sides by $z-1$ and taking the limit for $z \to 1$ we find \begin{equation}\label{K_11mei1} \sigma_{X}^2 (1-\beta_1(0,q)) (1-\beta_2(0,q)) = 2 (\theta+1) \psi'(1). \end{equation} Plugging this in the equation for $e^{k^*}$ we find $e^{k^*}=2(\theta+1)Kq/(\sigma_{X}^2 (\beta_2(0,q)-1)(\beta_1(0,q)-1))$. Using this expression in (\ref{K_1mei_1}), together with $\beta_1(0,q) \beta_2(0,q)=2q\theta/\sigma_{X}^2$ (from (\ref{K_1mei_2}) with $z=0$), the stated formula for $U$ indeed follows. 
\end{proof} Now we are ready to turn to formulas for the optimal exercise levels $x^*, y^*$ and the value function $V$ of the McKean game. Recall that for $\delta \geq U(\log K)$ the game degenerates to the McKean optimal stopping problem. \begin{proposition}\label{K_1mei_3} Consider the McKean game driven by (\ref{K_6apr_1}). Recall $\bar \delta=U(\log K)$. We assume throughout that $\delta<\bar \delta$. \begin{itemize} \item[(i)] The optimal level $x^*=x^*(\delta)$ is the unique solution to the equation in $x$: \[ q \sum_{i=1}^3 \frac{C_i(0,q)}{\beta_i(0,q)} K^{\beta_i(0,q)} e^{-\beta_i(0,q) x} -1 = \frac{\delta}{K}. \] On $(-\infty,x^*]$ we have $V(x)=K-e^x$ and on $(x^*,\log K]$ we have \[ V(x) = K q \sum_{i=1}^3 \frac{C_i(0,q)}{\beta_i(0,q)} e^{\beta_i(0,q) (x-x^*)} -e^x. \] \item[(ii)] The threshold $\delta_0 \in (0,\bar \delta)$ is the unique solution to the equation in $z$: \[ q \sum_{i=1}^3 \frac{C_i(0,q) K^{\beta_i(0,q)}}{\beta_i(0,q) (\theta + \beta_i(0,q))} e^{-\beta_i(0,q) x^*(z)} - \frac{\lambda + (\theta+1)q}{\lambda \theta K} z = \frac{1}{\theta+1}. \] \item[(iii)] Suppose $\delta \in [\delta_0,\bar \delta)$. We have $y^*=\log K$ and on $[\log K,\infty)$ \[ V(x) = K \sum_{i=1}^2 C_i(0,q) \left( \frac{q e^{-\beta_i(0,q) x^*}}{\beta_i(0,q)} + K^{-\beta_i(0,q)} \left( \psi'(1)-Kqe^{-x^*} \right) \right) e^{\beta_i(0,q) x}. \] \item[(iv)] Suppose $\delta \in (0,\delta_0)$. We have \[ e^{\theta y^*} = \frac{\lambda \theta K^{\theta+1}}{(\theta+1) q \delta} \left( q \sum_{i=1}^3 \frac{C_i(0,q) K^{\beta_i(0,q)}}{\beta_i(0,q) (\theta + \beta_i(0,q))} e^{-\beta_i(0,q) x^*} - \frac{1}{\theta+1} - \frac{\delta}{\theta K} \right). \] On $[\log K,y^*]$ we have $V(x)=\delta$ and on $(y^*,\infty)$ we have \[ V(x) = \frac{\delta}{\beta_2(0,q)-\beta_1(0,q)} \left( \beta_2(0,q) e^{\beta_1(0,q) (x-y^*)} - \beta_1(0,q) e^{\beta_2(0,q) (x-y^*)} \right). \] \end{itemize} \end{proposition} \begin{proof} Ad (i). Apply Proposition \ref{K_7apr_1} to the formulas from Theorem \ref{mainthrm} (ii). Ad (ii). Apply Proposition \ref{K_7apr_1} to Theorem \ref{thm_Erik2} (i). Ad (iii). Apply Proposition \ref{K_7apr_1} to the formula from Theorem \ref{mainthrm} (ii) to obtain \[ V(x) = K \sum_{i=1}^3 C_i(0,q) \left( \frac{q e^{-\beta_i(0,q) x^*}}{\beta_i(0,q)} + K^{-\beta_i(0,q)} \left( \psi'(1)-Kqe^{-x^*} \right) \right) e^{\beta_i(0,q) x}-e^x \] and use (\ref{K_11mei1}) to see that the terms involving the exponential of a positive factor times $x$ vanish. (Of course, one can also reason directly that they should cancel, since otherwise $V$ would not stay bounded for large $x$, which it should by definition). Ad (iv). For $y^*$, apply Proposition \ref{K_7apr_1} to Theorem \ref{thm_Erik2} (ii) and simplify to arrive at the stated formula. Note that \[ \sum_{i=1}^3 \frac{C_i(0,q)}{\beta_i(0,q) (\theta + \beta_i(0,q))} = \frac{2}{\sigma_{X}^2 \prod_{i=1}^3 \beta_i(0,q)} = \frac{1}{\theta q}, \] the final equality by (\ref{K_1mei_2}). For $V$, apply Proposition \ref{K_7apr_1} to Theorem \ref{thm_Erik2} (ii) and simplify, making use of the formula for $y^*$ and in particular Proposition \ref{K_7apr_1} (ii). \end{proof} We conclude with two plots in this jump-diffusion setting, produced using the above Proposition \ref{K_1mei_3}, to illustrate the main result from this paper. \begin{figure} \caption{$\delta \in [\delta_0,\bar \delta)$, so $y^*=\log K$. 
The black curves are the upper and lower payoff functions, the red curve is the value function $V$} \end{figure} \begin{figure} \caption{$\delta \in (0,\delta_0)$, so $y^*>\log K$. The black curves are the upper and lower payoff functions, the red curve is the value function $V$} \end{figure} \end{document}
\begin{document} \maketitle \begin{abstract} Generated Jacobian equations are Monge-Amp$\grave{\textrm{e}}$re type equations which contain optimal transport as a special case. The optimal transport case, however, has its own special structure which is not necessarily present in more general generated Jacobian equations. Hence the theory for optimal transport cannot be directly transplanted to generated Jacobian equations. In this paper, we point out the difficulties that prevent applying the proof of the local H$\ddot{\textrm{o}}$lder regularity of solutions of the optimal transport problem from \cite{Loeper2009OnTR} directly to generated Jacobian equations; we then discuss how to handle these difficulties, and we prove local H$\ddot{\textrm{o}}$lder regularity in the generated Jacobian equation case. \end{abstract} \section{Introduction} In this paper, we will consider the H$\ddot{\textrm{o}}$lder regularity theory of solutions to generated Jacobian equations (GJE), which are a particular type of prescribed Jacobian equation (PJE). A (PJE) is a second order PDE of the form \begin{displaymath} \label{PJE} \tag{PJE} \det \left( D_x ( T(x, D\phi(x) ,\phi(x) ) ) \right) = \psi(x,D\phi(x),\phi(x)), \end{displaymath} where $T: X \times \mathbb{R}^n \times \mathbb{R} \to \mathbb{R}^n$ and $\psi: X \times \mathbb{R}^n \times \mathbb{R} \to \mathbb{R}$, with unknown $\phi : X \to \mathbb{R}$. If there are functions $G$ and $V$ that satisfy \begin{displaymath} \left\{ \begin{array}{rl} D_x G(x, T(x,p,u), V(x,p,u)) & = p \\ G(x, T(x,p,u), V(x,p,u)) & = u \end{array} \right., \end{displaymath} then \eqref{PJE} can be written as follows: \begin{equation} \label{GJE} \tag{GJE} \det \left( D^2_x \phi(x) - A(x, D\phi(x), \phi(x)) \right) = \bar{\psi}(x, D\phi(x), \phi(x)) \end{equation} where \begin{align*} A(x,p,u) & = D_x^2G (x, T(x,p,u), V(x,p,u)) \\ \bar{\psi}(x,p,u) & = \det(E(x, T(x,p,u), V(x,p,u))) \psi(x,p,u) \end{align*} with $E$ from \eqref{Gnondeg} in Subsection 2.1. \eqref{GJE} is called a generated Jacobian equation and we call $G$ the generating function of $\eqref{GJE}$. Well-known examples of $\eqref{GJE}$ arise from the Monge-Amp$\grave{\textrm{e}}$re equation, optimal transport, and near and far field reflector antenna design. The second boundary condition for $\eqref{GJE}$ is that for some given domain $Y \subset \mathbb{R}^n$, the image of $X$ under the map $T( \cdot, D\phi(\cdot), \phi(\cdot))$ is equal to $Y$. As a special case, if we have measures which are supported on $X$ and $Y$ (we call them the source and the target respectively), then this second boundary condition can be replaced by defining weak solutions using the measures. We discuss this in Subsection 2.3.\\ The main theorem of this paper is the local H$\ddot{\textrm{o}}$lder regularity of solutions to $\eqref{GJE}$. In the literature, it is known that, if the source and the target are bounded away from 0 and $\infty$ with respect to the Lebesgue measure, and if the generating function satisfies (G3w) together with some other conditions, then the solution $\phi$ to $\eqref{GJE}$ has local H$\ddot{\textrm{o}}$lder regularity (see \cite{Guillen2015PointwiseEA}). But in optimal transport, which is a special case of $\eqref{GJE}$, the author of \cite{Loeper2009OnTR} showed the local H$\ddot{\textrm{o}}$lder regularity using the $\eqref{As}$ condition, but with a weaker assumption on the source measure. The main theorem of this paper is parallel to his result in optimal transport.
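To make the reduction from \eqref{PJE} to \eqref{GJE} concrete in the simplest possible example, the following symbolic computation (purely illustrative, and not used in the sequel) takes the one dimensional generating function $G(x,y,v)=xy-v$, i.e. optimal transport with the cost $c(x,y)=-xy$, which is equivalent to the quadratic cost. Solving the defining system for $(T,V)$ gives $T(x,p,u)=p$ and $V(x,p,u)=xp-u$, so that $A\equiv 0$, $E\equiv 1$ and \eqref{PJE} reduces to the Monge-Amp$\grave{\textrm{e}}$re equation $\det D^2\phi=\psi$.
\begin{verbatim}
# Symbolic sanity check (illustration only): for G(x,y,v) = x*y - v in one
# dimension, the maps T and V defined by D_x G(x,T,V) = p and G(x,T,V) = u
# are T = p and V = x*p - u, and the matrix A in (GJE) vanishes.
import sympy as sp

x, y, v, p, u = sp.symbols('x y v p u', real=True)
G = x * y - v
sol = sp.solve([sp.diff(G, x) - p, G - u], [y, v], dict=True)[0]
print(sol[y], sol[v])          # prints: p  p*x - u
A = sp.diff(G, x, 2).subs({y: sol[y], v: sol[v]})
print(A)                       # prints: 0, so (GJE) is det(D^2 phi) = psi_bar
\end{verbatim}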
\\ The idea of the proof of the main theorem of this paper will be imported from \cite{Loeper2009OnTR} and the arXiv version of \cite{Kim2007ContinuityCA}. However, since there are structural differences between $\eqref{GJE}$ and optimal transport, we need to adjust the ideas of \cite{Loeper2009OnTR} in order to import them to $\eqref{GJE}$. One of the big differences is the dependency of the generating function on the scalar parameter $v$. In the optimal transport case, the generating function is $G(x,y,v) = -c(x,y)-v$, so that after taking a derivative with respect to $x$ or $y$, the dependency on $v$ vanishes. This is not necessarily true in the more general $\eqref{GJE}$ case. Hence every estimate about the cost function $c(x,y)$ must be combined with an estimate about the scalar parameter $v$ before it can be imported to $\eqref{GJE}$. Another big difference is that in optimal transport the conditions hold on the whole space $X \times Y \times \mathbb{R}$, and all the derivatives have bounded norm. More general generating functions, however, do not necessarily satisfy the conditions on the whole space $X \times Y \times \mathbb{R}$, but only on a subset $\mathfrak{g} \subset X \times Y \times \mathbb{R}$. An example can be found in \cite{Liu2015OnTC}. Moreover, because of the dependency on the scalar parameter, the derivatives of the generating function might not be bounded. Hence whenever we want to use conditions and derivatives of the generating function, we need to check that the points used are in a ``nice'' set $\mathfrak{g}$ and stay inside a compact set. Since we are going to focus on local regularity, this can be done by localizing the argument. Lemma $\ref{localniceg}$ will allow us to choose points in a nice compact set.\\ Let us mention some related results. The optimal transport case is treated in \cite{Loeper2009OnTR} and the arXiv version of \cite{Kim2007ContinuityCA}. For other H$\ddot{\textrm{o}}$lder regularity results in optimal transport, see \cite{Figalli2011HlderCA} and \cite{Guillen2012OnTL}. In \cite{Guillen2012OnTL}, the authors prove the H$\ddot{\textrm{o}}$lder regularity under a condition known as (QQ-conv), which is equivalent to (A3w) when the cost function is smooth, and the same authors prove an analogous result for $\eqref{GJE}$ in \cite{Guillen2015PointwiseEA}. In reflector antenna design, several cases have been treated by various authors; see \cite{Gutirrez2014TheNF}, \cite{ABEDIN20161}, \cite{Gutirrez2019C1alphaestimatesFT}. For other regularity theory of $\eqref{GJE}$, see \cite{trudinger2012local}, \cite{Jhaveri2016PartialRO}, \cite{Guillen2015PointwiseEA}. \section{Setting of the problem} \subsection{Conditions for the generating function $G$} To obtain the local H$\ddot{\textrm{o}}$lder regularity result, we need some structural conditions on the domains $X$ and $Y$ and on the generating function $G$. We first assume the regularity and monotonicity of the generating function: \begin{equation} \label{Regular} G \in C^4(X \times Y \times \mathbb{R}) \tag{Regular} \end{equation} \begin{equation} \label{Gmono} D_v G < 0 \tag{$G$-mono} \end{equation} From $\eqref{Gmono}$, we get a function $H : X \times Y \times \mathbb{R} \to \mathbb{R}$ that satisfies \begin{equation} G(x,y,H(x,y,u)) = u. \end{equation} Note that the implicit function theorem ensures that $H \in C^4$ by $\eqref{Regular}$, and this, with $\eqref{Gmono}$, implies \begin{equation} \label{Hmono} \tag{$H$-mono} D_u H < 0.
\end{equation} As in the optimal transport case, we need conditions like (A1), (A2) (sometimes called the (twist) and (non-deg) conditions, respectively) and (As) in \cite{Loeper2009OnTR}. But in some examples of $\eqref{GJE}$, the function $G$ does not satisfy these conditions on the whole domain $X \times Y \times \mathbb{R}$. Instead, there is a subset $\mathfrak{g} \subset X \times Y \times \mathbb{R}$ on which the generating function $G$ satisfies conditions corresponding to (A1), (A2) and (As). Therefore, we assume that there is a subset $\mathfrak{g} \subset X \times Y \times \mathbb{R}$ such that the following conditions hold: \begin{equation} \label{Gtwist} (y,v) \mapsto (D_x G(x, \cdot , \cdot) , G(x,\cdot , \cdot)) \ \textrm{is injective on} \ \mathfrak{g}_x \tag{$G$-twist} \end{equation} \begin{equation} \label{G*twist} x \mapsto -\frac{D_y G}{D_v G} (\cdot , y, v) \textrm{ is injective on } \mathfrak{g}_{y,v} \tag{$G^*$-twist} \end{equation} \begin{equation} \label{Gnondeg} \det \left( D^2_{xy} G - D^2_{xv}G \otimes \frac{D_yG}{D_vG} \right) \neq 0 \ \textrm{on}\ \mathfrak{g} \tag{$G$-nondeg} \end{equation} where $\mathfrak{g}_x = \{ (y,v) | (x,y,v) \in \mathfrak{g} \}$ and $\mathfrak{g}_{y,v} = \{ x | (x,y,v) \in \mathfrak{g} \}$. We will use $E$ to denote the matrix in the condition $\eqref{Gnondeg}$. Under these conditions, we can define the \emph{$G$-exponential function} on some subset of $T^*_xX$ and $T^*_yY$. \begin{Def} We define $\gexp{x}{u}$ and $V_x$ by \begin{displaymath} \left\{ \begin{array}{rl} D_xG(x, \gexp{x}{u}(p) , V_x(p,u)) & = p \\ G(x, \gexp{x}{u}(p) , V_x(p,u)) & = u \end{array} \right. . \end{displaymath} We call $\gexp{x}{u}$ the \emph{$G$-exponential function} with focus $(x,u)$. We define another exponential map $\gsexp{y}{v}$ by \begin{displaymath} -\frac{D_y G}{D_v G} (\gsexp{y}{v}(q) , y, v) = q. \end{displaymath} We call $\gsexp{y}{v}$ the \emph{$G^*$-exponential function} with focus $(y,v)$. \end{Def} Note that by the implicit function theorem, the functions $\gexp{x}{u}$, $V_x(p,u)$ and $\gsexp{y}{v}$ are $C^3$ on the domain of each function. \\ In the optimal transport case, the equations which are used to define the $c$-exponential maps and the $c^*$-exponential maps are symmetric. In the above definition of the $G$-exponential function, the equations that we used to define $\gexp{x}{u}$ and $\gsexp{y}{v}$ do not look symmetric, but they actually play symmetric roles. See Remark 9.5 in \cite{Guillen2015PointwiseEA}. \begin{Rmk} \label{derivexp} Let $I_{x,y} = \{ v \in \mathbb{R} | (x,y,v) \in \mathfrak{g} \}$, let $J_{x,y} = H(x,y,I_{x,y})$, and let $ \mathfrak{h} = \{ (x,y,u) | u \in J_{x,y} \}$. We will assume that $\mathfrak{h}$ is open relative to $X \times Y \times \mathbb{R}$: \begin{equation} \label{Domopen} \mathfrak{h} \textrm{ is open relative to } X \times Y \times \mathbb{R} \tag{DomOpen} \end{equation} Let $\mathfrak{h}_{x,u} = \{ y \in Y | (x,y,u) \in \mathfrak{h} \}$. We will denote the image of $\mathfrak{h}_{x,u}$ under the map $D_xG(x, \cdot , H(x, \cdot, u))$ by $\mathfrak{h}^*_{x,u}$. Then the $G$-exponential map $\gexp{x}{u}$ is defined on $\mathfrak{h}^*_{x,u}$. Moreover, we get the expression $V_x(p,u) = H(x, \gexp{x}{u}(p) , u)$. With this expression, we can compute \begin{displaymath} D_p \gexp{x}{u}(p) = E^{-1}(x, \gexp{x}{u}(p), V_x(p,u)). \end{displaymath} \end{Rmk}
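For orientation, it may help to record what these objects are in the optimal transport special case $G(x,y,v) = -c(x,y)-v$ (a sketch only; none of the computations below are used later). Since $D^2_{xv}G = 0$ and $D_vG = -1$, one checks directly that
\begin{displaymath}
E = D^2_{xy}G - D^2_{xv}G \otimes \frac{D_yG}{D_vG} = -D^2_{xy}c,
\end{displaymath}
so $\eqref{Gnondeg}$ is the classical non-degeneracy condition $\det D^2_{xy}c \neq 0$. Similarly, $\eqref{Gtwist}$ reduces to injectivity of $y \mapsto -D_xc(x,y)$ and $\eqref{G*twist}$ to injectivity of $x \mapsto -D_yc(x,y)$, which are the usual twist conditions, and $\gexp{x}{u}(p) = \exp^c_x(p)$ does not depend on $u$.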
To impose geometric conditions on the sets $X$ and $Y$, we define \emph{$G$-convexity} of sets. \begin{Def} \label{gconvex} $X$ is said to be \emph{$G$-convex} if $\mathfrak{g}_{y,v}^*$, which is the image of $\mathfrak{g}_{y,v}$ under the map $\displaystyle -\frac{D_y G}{D_v G} (\cdot , y, v)$, is convex for any $(y,v) \in Y \times \mathbb{R}$. $Y$ is said to be \emph{$G^*$-convex} if $\mathfrak{h}^*_{x,u}$ is convex for any $(x,u) \in X \times \mathbb{R}$. \end{Def} We also add convexity conditions on the supports of the source and the target. \begin{equation} \label{hDomConv} X \textrm{ is } G\textrm{-convex} \tag{hDomConv} \end{equation} \begin{equation} \label{vDomConv} Y \textrm{ is } G^*\textrm{-convex} \tag{vDomConv} \end{equation} \begin{Def} For $x \in X$ and $u \in \mathbb{R}$, let $y_0 , y_1 \in \mathfrak{h}_{x,u}$ and let $p_i =D_x G(x, y_i, H(x, y_i, u))$. A \emph{$G$-segment} that connects $y_0$ and $y_1$ with focus $(x,u)$ is the image of $[p_0 , p_1]$ under the map $\gexp{x}{u}$ : \begin{displaymath} \{ \gexp{x}{u}((1-\theta)p_0 + \theta p_1) | \theta \in [0,1] \}. \end{displaymath} For $y \in Y$ and $v \in \mathbb{R}$, let $x_0, x_1 \in \mathfrak{g}_{y,v}$ and let $\displaystyle q_i = -\frac{D_y G}{D_v G} (x_i , y, v)$. The \emph{$G^*$-segment} that connects $x_0$ and $x_1$ with focus $(y,v)$ is the image of $[q_0 , q_1]$ under the map $\gsexp{y}{v}$ : \begin{displaymath} \{\gsexp{y}{v}((1-\theta)q_0 + \theta q_1)| \theta \in [0,1] \}. \end{displaymath} \end{Def} \begin{Rmk} The definition of $G$-convexity above is different from the usual $G$-convexity definition in the literature (see \cite{Guillen2015PointwiseEA}). The two definitions serve the same purpose, however, as these convexity conditions are used to ensure that $G$-segments are well defined. For example, if $y_0, y_1 \in \mathfrak{h}_{x,u}$ then the $G$-segment that connects $y_0$ and $y_1$ with focus $(x,u)$ is well defined by Definition $\ref{gconvex}$. In \cite{Kim2007ContinuityCA}, the authors define convexity of domains using a subset $W$ of $M \times \bar{M}$ (Definition 2.5). The convexity we have defined coincides with their horizontal and vertical convexity of $W$ when the generating function is given by a cost function. \end{Rmk} In optimal transport, there is an important condition called (A3w). This condition is a sign condition on a 4-tensor that was first introduced in \cite{Ma2005RegularityOP} and \cite{Trudinger2006OnTS} (the tensor is sometimes called the MTW tensor). In optimal transport, the tensor is defined by $D^2_{pp} \left( - D^2_{xx}c(x, \exp^c_x(p))\right)$, and it has the coordinate expression \begin{displaymath} MTW_{ijkl} = \left( c_{ij,r}c^{r,s}c_{s,pq} - c_{ij,pq} \right)c^{p,k}c^{q,l} \end{displaymath} (see, for example, \cite{Ma2005RegularityOP}). The (A3w) condition then requires that $MTW[\xi, \xi, \eta, \eta] \geq 0$ for any $\xi \perp \eta$. (A3w) is used to show many regularity results. In \cite{Figalli2011HlderCA}, the arXiv version of \cite{Kim2007ContinuityCA}, and \cite{Guillen2012OnTL}, (A3w) is used to show the H$\ddot{\textrm{o}}$lder regularity of potential functions. In \cite{Philippis2012SobolevRF}, (A3w) is used to show the Sobolev regularity of potential functions. In \cite{Loeper2009OnTR}, however, the author uses the strengthened condition (As) to show the H$\ddot{\textrm{o}}$lder regularity result. \begin{equation} \label{As} \tag{As} MTW[ \xi, \xi, \eta, \eta] \geq \delta | \xi |^2 | \eta |^2 \textrm{ whenever } \xi \perp \eta, \textrm{ for some } \delta>0.
\end{equation} In optimal transport, $\eqref{As}$ is equivalent to \begin{equation} \label{As'} \tag{As'} MTW [ \xi, \xi, \eta, \eta ] > 0 \textrm{ for } \xi \perp \eta. \end{equation} Obviously $\eqref{As}$ implies $\eqref{As'}$. For the other implication, note that $X \times Y$ is compact. Then $\eqref{Regular}$ and $\eqref{Gnondeg}$ imply that there exists $\delta >0$ such that $\eqref{As}$ holds with this $\delta$. Since optimal transport is a special case of (GJE), we should impose a condition that corresponds to the condition $\eqref{As}$ in order to import the idea from \cite{Loeper2009OnTR}. Therefore, we assume the following condition. \begin{equation} \label{G3s} \tag{G3s} D^2_{pp} \mathcal{A}(x,p,u)[\xi, \xi, \eta, \eta ] > 0 \textrm{ for any } \xi \perp \eta \end{equation} where $\mathcal{A}(x,p,u) = D^2_{xx} G\left( x, \gexp{x}{u}(p), V_x(p,u) \right)$, $\xi \in T_x X$ and $\eta \in T^*_xX$. Here $\xi \perp \eta$ means that the dual pairing $\eta(\xi)$ is 0. Note that $p$ must be in $\mathfrak{h}^*_{x,u}$ for $\mathcal{A}(x,p,u)$ to be well defined. Here we point out that, unlike in optimal transport, $\eqref{G3s}$ is not equivalent to, but weaker than, the following: \begin{displaymath} D^2_{pp} \mathcal{A}(x,p,u)[\xi, \xi, \eta, \eta ] > \delta |\xi|^2 |\eta|^2 \textrm{ for any } \xi \perp \eta \textrm{ for some } \delta >0. \end{displaymath} This is because $\delta$ depends on $(x,p,u) \in \mathfrak{h}^* = \bigcup_{(x,u) \in X \times \mathbb{R}} \mathfrak{h}^*_{x,u}$ and $\mathfrak{h}^*$ is not compact in general. We use $\eqref{G3s}$ in this paper. Consequently, some arguments from \cite{Loeper2009OnTR} that use the $\delta$ in $\eqref{As}$ cannot be applied. We resolve this problem by localizing the argument; see Remark $\ref{nicedom}$. In this paper, $X$ and $Y$ are subsets of $\mathbb{R}^n$, so we can identify $T_xX$ and $T^*_xX$ with $\mathbb{R}^n$, and the dual pairing with the usual inner product on $\mathbb{R}^n$. As such, we will often use $\mathbb{R}^n$ for the tangent space and the cotangent space. The (G3w) condition is the same as $\eqref{G3s}$, but with $\geq$ instead of $>$. \begin{Rmk} The condition (A3w) is known to be equivalent to the inequality \begin{displaymath} -c(y,\bar{x}(t)) + c(x, \bar{x}(t)) \leq \max \{ -c(y,\bar{x}(0)) + c(x, \bar{x}(0)), -c(y,\bar{x}(1)) + c(x, \bar{x}(1)) \} \end{displaymath} where $\bar{x}(t)$ is the $c$-segment that connects $\bar{x}(0)$ and $\bar{x}(1)$ with focus $x$ (see \cite{Loeper2009OnTR}, \cite{Villani2003TopicsIO}). This is called Loeper's property. It has the following geometric meaning: if two $c$-affine functions, with $c$-subdifferentials $\bar{x}(0)$ and $\bar{x}(1)$ respectively, meet at $x$, then every $c$-affine function that passes through that point with $c$-subdifferential $\bar{x}(t)$ lies below the maximum of the two original $c$-affine functions. Therefore, this property is sometimes called the geometric Loeper property (gLp). In \cite{Loeper2009OnTR}, the author uses \eqref{As} to obtain (gLp) in a quantitative way. This can be done in the $G$-convex setting as well: we obtain a property corresponding to (gLp) for $G$-affine functions, along with a quantitative version of (gLp), using $\eqref{G3s}$ in Lemma $\ref{qglp}$. \end{Rmk}
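Continuing the optimal transport illustration (again only a sketch of the special case $G(x,y,v) = -c(x,y)-v$), we have $\gexp{x}{u}(p) = \exp^c_x(p)$ and $\mathcal{A}(x,p,u) = -D^2_{xx}c(x, \exp^c_x(p))$, so that
\begin{displaymath}
D^2_{pp}\mathcal{A}(x,p,u)[\xi,\xi,\eta,\eta] = D^2_{pp}\left( -D^2_{xx}c(x, \exp^c_x(p)) \right)[\xi,\xi,\eta,\eta] = MTW[\xi,\xi,\eta,\eta].
\end{displaymath}
Hence $\eqref{G3s}$ is exactly $\eqref{As'}$ in this case and, by the compactness argument above, it is then equivalent to $\eqref{As}$; this is the sense in which $\eqref{G3s}$ is the generated Jacobian analogue of Loeper's condition.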
\subsection{$G$-convex functions} The solutions of \eqref{GJE} belong to a class of functions known as \emph{$G$-convex functions}. We define $G$-convexity of a function by generalizing the usual convexity and the $c$-convexity of the optimal transport case: \begin{Def} A function $\phi : X \to \mathbb{R}$ is said to be a \emph{$G$-convex function} if for any $x_0 \in X$ there exist $y_0 \in Y$ and $v_0 \in \mathbb{R}$ such that $(x_0 , y_0 , v_0 ) \in \mathfrak{g}$ and \begin{align} \label{gsubineq} \phi(x_0) & = G(x_0, y_0, v_0 ) \\ \phi(x) & \geq G(x , y_0, v_0 ) \quad \forall x \in X. \nonumber \end{align} We say that $(y_0,v_0)$ is a \emph{$G$-focus} of $\phi$ at $x_0$ and the function $G(x,y_0, v_0)$ is a \emph{$G$-supporting function} of $\phi$ at $x_0$ with focus $(y_0,v_0)$. We define the \emph{$G$-subdifferential} of $\phi$ at $x_0$ by \begin{displaymath} \gsub{\phi}{x_0} = \{ y \in Y | (y,v) \textrm{ is a } G\textrm{-focus of } \phi \textrm{ at } x_0 \textrm{ for some } v \in \mathbb{R}\} \end{displaymath} and we define $ \displaystyle \gsub{\phi}{A} = \bigcup_{x \in A} \gsub{\phi}{x}$. \end{Def} With the definition above and Remark $\ref{derivexp}$, we can express $v_0$ as follows: \begin{displaymath} v_0 = H(x_0 , y_0 , \phi(x_0)). \end{displaymath} Therefore, the $G$-supporting function of a $G$-convex function $\phi$ at $x_0$ with $G$-subdifferential $y_0$ can be expressed as $G(x, y_0, H(x_0, y_0, \phi(x_0)))$. Also, note that $(x_0, y_0, v_0) \in \mathfrak{g}$ is equivalent to $(x_0,y_0, \phi(x_0)) \in \mathfrak{h}$. We will use this fact often. We call a function of the form $G( \cdot, y, v)$ a \emph{$G$-affine function} with focus $(y,v)$. \begin{Rmk} \label{rmkonunif} The conditions on the generating function $G$ imply $G$-convexity of the $G$-subdifferential at a point, which in turn implies the following proposition: \begin{Prop} \label{loctoglob} If a $G$-affine function $G(\cdot , y_0 , v_0 )$ supports a $G$-convex function $\phi$ at $x_0$ locally, i.e. \begin{displaymath} \begin{array}{c} \phi(x_0) = G(x_0, y_0, v_0) \\ \phi(x) \geq G(x, y_0, v_0) \textrm{ on some neighborhood of } x_0 \end{array} \end{displaymath} and if $(x_0, y_0, v_0) \in \mathfrak{g}$, then $y_0 \in \gsub{\phi}{x_0}$. \end{Prop} \noindent For example, this is proved in \cite{Guillen2015PointwiseEA} (Corollary 4.24), but under an extra condition on $J_{x,y}$, namely (unif), and the authors require the solution of \eqref{GJE} to be ``nice''. \begin{flushleft} \begin{tabular}{rl} (unif) & $\exists \underline{u} , \overline{u}$ such that $[\underline{u} , \overline{u} ] \subset J_{x,y}$ for any $x \in X$ and $y \in Y$. \\ (nice) & The solution $\phi$ to $\eqref{GJE}$ lies in $( \underline{u}, \overline{u} )$ : $\phi(x) \in ( \underline{u}, \overline{u} ), \forall x \in X$. \end{tabular} \end{flushleft} In \cite{Guillen2015PointwiseEA}, the authors use these conditions for two reasons. First, they use them to check that the $G$-segments they are using are well defined. Under these conditions, if $u \in ( \underline{u}, \overline{u} )$, then $(x,y, u) \in \mathfrak{h}$ for any $x \in X$ and $y \in Y$, so that (DomConv*) assures that the $G$-segment for any pair of points in $Y$ with focus $(x,u)$ can be defined. Second, with (unif) they get a compact set $X \times Y \times [ \underline{u}, \overline{u} ]$ that lies inside $\mathfrak{h}$. Then the norms of the derivatives of the functions $G$ and $H$ are bounded, and bounded away from 0 whenever the derivative has a sign.
For the local regularity, we do not need a fixed interval $[ \underline{u}, \overline{u} ]$ inside $J_{x,y}$ for all $x \in X$ and $y \in Y$. Instead, we will use the following weaker conditions: \begin{flushleft} \begin{tabular}{rp{10cm}} (unifw) & $\exists a , b : X \times Y \to \mathbb{R}$ which are continuous and $[a(x,y) , b(x,y)] \subset J_{x,y}$ for any $x \in X$ and $y \in Y$. \\ (nicew) & The solution $\phi$ to the (GJE) lies in $(a, b)$ : $\phi(x) \in (a(x,y) , b(x,y))$ for any $x \in X$ and $y \in \gsub{\phi}{x}$. \end{tabular} \end{flushleft} Note that the $G$-segments used in \cite{Guillen2015PointwiseEA} to show Proposition $\ref{loctoglob}$ can be well defined using $\eqref{hDomConv}$ and $\eqref{vDomConv}$. In fact, the $G$-segments constructed in \cite{Guillen2015PointwiseEA} connect two points in $\gsub{\phi}{x} \subset \mathfrak{h}_{x, \phi(x)}$. Hence $\eqref{vDomConv}$ is enough to define the $G$-segments. With similar reasoning, $\eqref{hDomConv}$ ensures that the $G^*$-segments that are used are well defined. Moreover, the sets \begin{equation} \begin{array}{c} \Phi := \{ (x,y,u) | u \in [a(x,y),b(x,y)] \} \subset \mathfrak{h} \\ \Psi := \{ (x,y,v) | v \in H(x,y,[a(x,y),b(x,y)]) \} \end{array} \end{equation} are compact. Note that $\eqref{hDomConv}$ and $\eqref{vDomConv}$ do not imply that $(x,y_{\theta}, u) \in \Phi$, where $y_{\theta}$ is the $G$-segment connecting $y_0$ and $y_1$ with focus $(x,u)$, so we cannot bound the norms of $G$ and its derivatives on a $G$-segment using $\Phi$ alone. But we know that $\Phi$ lies in the compact set $X \times Y \times [ \min a , \max b ]$, and $\Psi$ lies in the corresponding compact set. Therefore, we use norms on these compact sets that contain $\Phi$ and $\Psi$, and we can still use the same proof of Proposition $\ref{loctoglob}$ with the assumptions (unifw) and (nicew) instead of (unif) and (nice). \end{Rmk} \begin{Rmk} \label{nicedom} For a compact subset $S$ of $\mathfrak{h}$, the condition \eqref{Gnondeg} implies that we have a constant $C_e$ that depends on $S$ such that \begin{equation} \label{Ce1} \frac{1}{C_e} \leq \|E\| \leq C_e \end{equation} where $\|E\|$ is the operator norm of $E$. This with Remark $\ref{derivexp}$ implies that \begin{equation} \label{Ce2} \frac{1}{C_e} |p_1 - p_0| \leq |\gexp{x}{u}(p_1) - \gexp{x}{u}(p_0)| \leq C_e |p_1 - p_0| \end{equation} when $(x, \gexp{x}{u}((1-\theta)p_0 + \theta p_1) , u) \in S, \forall \theta \in [0,1]$. Also, the (G3s) condition implies that we have a constant $\alpha$ that depends on $S$ such that \begin{equation} \label{alpha} D^2_{pp} \mathcal{A} (x,p,u) [\xi, \xi, \eta, \eta ] > \alpha \end{equation} for any unit vectors $\xi$ and $\eta$ such that $\xi \perp \eta$ whenever $(x, \gexp{x}{u}(p), u) \in S$. Note that by tensoriality, we have \begin{equation} D^2_{pp} \mathcal{A} (x,p,u) [\xi, \xi, \eta, \eta ] > \alpha|\xi|^2|\eta|^2 \end{equation} for non-unit $\xi$ and $\eta$. Moreover, compactness of $X \times Y \times [\min a, \max b]$ implies that we have $\beta >0$ such that \begin{equation} \label{beta} D_v G < - \beta \textrm{ on } X \times Y \times [\min a, \max b]. \end{equation} \end{Rmk} With these conditions, we show some simple propositions about the $G$-subdifferential of $\phi$. We will denote the $r$-neighborhood of a set $A$ by $\nbhd{r}{A}$.
\begin{Prop} \label{compgsub} The $G$-subdifferential of $\phi$ at a point $x$ is a closed subset of $Y$ and it lies compactly in $\mathfrak{h}_{x, \phi(x)}$ : \begin{displaymath} \gsub{\phi}{x} \Subset \mathfrak{h}_{x, \phi(x)} \subset Y. \end{displaymath} \end{Prop} Note that Proposition $\ref{compgsub}$ is much easier to prove in the optimal transport case because $J_{x,y} =\mathbb{R}$ for any $(x,y) \in X \times Y$, i.e. $\mathfrak{h} = X \times Y \times \mathbb{R}$, so that we only need to check the inequality ($\ref{gsubineq}$). But in the \eqref{GJE} case, compactness is not trivial, since showing the inequality ($\ref{gsubineq}$) is not enough, and we also need to check that $(x, y, \phi(x)) \in \mathfrak{h}$. \begin{proof} First we show that $\gsub{\phi}{x}$ is closed. Suppose $y \in \overline{\gsub{\phi}{x}}$. Then there exists a sequence $y_i \in \gsub{\phi}{x}$ that converges to $y$. Then from (unifw) and (nicew), we have \begin{displaymath} \begin{array}{rl} \phi(x) & \in \bigcap_{i=1}^{\infty} (a(x, y_i) , b(x, y_i)) \\ & \subset [ \sup a(x, y_i), \inf b(x, y_i) ] \\ & \displaystyle \subset \left[ \lim_{i \to \infty} a(x, y_i), \lim_{i \to \infty} b(x, y_i) \right] \\ & = [a(x,y), b(x,y)]. \end{array} \end{displaymath} This implies that $(x,y,\phi(x)) \in \mathfrak{h}$ by (unifw). Moreover, from the definition of the $G$-subdifferential, we have \begin{displaymath} G(z, y_i, H(x,y_i, \phi(x))) \leq \phi(z), \ \forall z \in X. \end{displaymath} Taking $i \to \infty$, we get \begin{displaymath} G(z, y, H(x, y, \phi(x))) \leq \phi(z), \ \forall z \in X. \end{displaymath} This inequality with $(x,y,\phi(x)) \in \mathfrak{h}$ shows that $y \in \gsub{\phi}{x}$. Therefore $\gsub{\phi}{x}$ is closed, and hence compact as a closed subset of the compact set $Y$. Now note that $\gsub{\phi}{x} \subset \mathfrak{h}_{x, \phi(x)}$. Moreover, from the openness assumption on $\mathfrak{h}$ (Remark $\ref{derivexp}$), $\mathfrak{h}_{x,\phi(x)}$ is relatively open with respect to $Y$. Therefore compactness of $\gsub{\phi}{x}$ shows that $\gsub{\phi}{x} \Subset \mathfrak{h}_{x, \phi(x)}$. \end{proof} The next proposition is about continuity of the $G$-subdifferential of a $G$-convex function. Note that the $G$-subdifferential is a set-valued function, so the continuity here is not the ordinary continuity of a single-valued function. But if the $G$-subdifferential is single-valued, the next proposition coincides with the continuity of a single-valued function. \begin{Prop} \label{gsubconti} Let $\phi$ be a $G$-convex function with (nicew). Let $x \in X$. Then for $\epsilon >0$, there exists $\delta >0$ such that if $|z - x| \leq \delta$ then \begin{displaymath} \gsub{\phi}{z} \subset \nbhd{\epsilon}{\gsub{\phi}{x}}. \end{displaymath} \end{Prop} \begin{proof} Suppose this is not true. Then we get a sequence $x_k \in X$ and $y_k \in \gsub{\phi}{x_k}$ such that $x_k \to x$ as $k \to \infty$ but $y_k \notin \nbhd{\epsilon}{\gsub{\phi}{x}}$ for any $k$. Since $Y$ is compact, we can extract a subsequence such that $y_k \to y$. Note that by (nicew), we have $(x,y,\phi(x)) \in {\Phi} \subset \mathfrak{h}$. From the choice of $y_k$, we have $y \notin \nbhd{\epsilon}{\gsub{\phi}{x}}$. But since the $y_k$ are in subdifferentials of $\phi$, we have \begin{equation} \label{prop3.4:1} \phi(z) \geq G(z , y_k, H(x_k, y_k, \phi(x_k))), \ \forall z \in X. \end{equation} We can take the limit in ($\ref{prop3.4:1}$) because $\phi$, $G$, and $H$ are continuous.
Hence we get \begin{displaymath} \phi(z) \geq G(z,y,H(x,y,\phi(x))) \end{displaymath} which implies $y \in \gsub{\phi}{x}$, contradicting $y \notin \nbhd{\epsilon}{\gsub{\phi}{x}}$. \end{proof} \subsection{Weak solutions to (GJE)} Let $\mu$ be the source measure, a probability measure supported on $X$, and let $\nu$ be the target measure, a probability measure supported on $Y$. Then we can interpret the solutions to the (GJE) with the second boundary condition in the following ways, which are analogues of the Alexandrov solution and the Brenier solution in the optimal transport case. \begin{Def} Let $\phi : X \to \mathbb{R}$ be a $G$-convex function. Then \\ 1. $\phi$ is called a \emph{weak Alexandrov solution} to the (GJE) if \begin{displaymath} \mu(A) = \nu(\gsub{\phi}{A}), \ \forall A \subset X. \end{displaymath} 2. $\phi$ is called a \emph{weak Brenier solution} to the (GJE) if \begin{displaymath} \nu(B) = \mu(\partial_G^{-1}\phi (B) ), \ \forall B \subset Y. \end{displaymath} \end{Def} Note that the $G$-subdifferential of $\phi$ is single-valued $\mu$-a.e. in $X$, since $\phi$ is semi-convex by Proposition $\ref{semiconvex}$. Hence a weak Brenier solution satisfies the push-forward condition $(\partial_G \phi)_\sharp \mu = \nu$. In optimal transport, it is well known that a Brenier solution is not necessarily an Alexandrov solution. To prevent this, we need some convexity condition on the domains. See Section 4.6 in \cite{Figalli2017TheME} for the Monge-Amp$\grave{\textrm{e}}$re case and \cite{Ma2005RegularityOP} for the general optimal transport case. To develop the local regularity theory in this paper, we will use a weak Alexandrov solution. \subsection{Conditions on the measures and the main result} We will assume that the target measure $\nu$ is bounded away from 0 and $\infty$ with respect to the Lebesgue measure on $Y$. For the local regularity theory, we will make one of the following assumptions: \\ \begin{tabular}{c p{10cm}} 1. & $\exists p \in (n,\infty]$ and $C_{\mu}$ such that for any $x \in X$ and $r \geq 0$ we have $\mu( B_r(x) ) \leq C_{\mu} r^{n(1-\frac{1}{p})}$. \\ 2. & $\exists f : \mathbb{R}^{+} \to \mathbb{R}^{+}$ such that $\lim_{r \to 0} f(r) = 0$ and for any $x \in X$ and $r \geq 0$ we have $\mu(B_r(x))\leq f(r)r^{n(1-\frac{1}{n})}$. \end{tabular}\\ Then the main theorem of this paper is the following. \begin{Thm} \label{main} Suppose $X$ and $Y$ are compact domains in $\mathbb{R}^n$ and let $G : X \times Y \times \mathbb{R} \to \mathbb{R}$ be the generating function. Let $\mu$ and $\nu$ be probability measures on $X$ and $Y$ respectively. Assume that $G$ satisfies \eqref{Regular}, \eqref{Gmono}, \eqref{Gtwist}, \eqref{G*twist}, \eqref{Gnondeg}, \eqref{G3s}, and (unifw). Assume also that $X$ and $Y$ satisfy \eqref{hDomConv} and \eqref{vDomConv} and that the target measure $\nu$ is bounded away from 0 and $\infty$ with respect to the Lebesgue measure on $Y$. Let $\phi$ be a weak Alexandrov solution to the equation ($\ref{GJE}$) that satisfies (nicew). Then we have the following: \\ \begin{tabular}{c p{10cm}} 1. & If there exist $p \in (n, \infty]$ and $C_{\mu}$ such that $\mu( B_r(x)) \leq C_{\mu} r^{n(1-\frac{1}{p})}$ for all $r \geq 0$, $x \in X$, then $\phi \in C^{1,\sigma}_{loc}(X)$. \\ 2. & If there exists $f : \mathbb{R}^{+} \to \mathbb{R}^{+}$ such that $\lim_{r \to 0} f(r) = 0$ and $\mu(B_r(x)) \leq f(r)r^{n(1-\frac{1}{n})}$ for all $r \geq 0$, $x \in X$, then $\phi \in C^{1}_{loc}(X)$. \end{tabular}\\ Here, $\rho = 1- \frac{n}{p}$ and $\sigma = \frac{\rho}{4n-2+\rho}$. \end{Thm}
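As a sanity check on the exponents (an illustrative special case only), suppose the source measure $\mu$ has a density that is bounded above with respect to the Lebesgue measure, so that condition 1 holds with $p = \infty$. Then
\begin{displaymath}
\rho = 1 - \frac{n}{p} = 1, \qquad \sigma = \frac{\rho}{4n-2+\rho} = \frac{1}{4n-1},
\end{displaymath}
and Theorem $\ref{main}$ gives $\phi \in C^{1, \frac{1}{4n-1}}_{loc}(X)$, consistent, as noted below, with the exponents of \cite{Loeper2009OnTR} in the optimal transport case.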
Note that the exponents that appear in the above theorem are the same as in \cite{Loeper2009OnTR}. \section{Proof of local H$\ddot{\textrm{o}}$lder regularity} For the proof of the local H$\ddot{\textrm{o}}$lder regularity, we will do most computations in the set $X \times Y \times [\min a , \max b]$. Since this set is compact, we will be able to get finite quantities in each computation with a localizing argument. Then we will be able to apply the ideas from \cite{Loeper2009OnTR} in the proof of each lemma. The main difference between \cite{Loeper2009OnTR} and this paper comes from the structural difference between a cost function and a generating function. In particular, a generating function has its own nice subdomain where the structural conditions hold true, whereas the corresponding conditions hold on the whole domain in the optimal transport case. Therefore, we need to check that the points at which we use the conditions are in the nice subdomain. Moreover, each derivative of $G$ still depends on the scalar parameter $v$, hence we need to take care of extra terms that come from the dependency on the scalar parameter $v$. In addition, estimates on the cost function $c(x,y)$ have to be carried out for $G(x,y,v)$ instead. \\ Throughout this paper, we will often use tensor notation for derivatives. For example, we view $D^2_{xx}G$ as a 2-tensor, and we use square brackets ``[ , ]'' for the tensor notation: \begin{displaymath} D^2_{xx}G [\xi, \xi] = \xi^t D^2_{xx}G \xi. \end{displaymath} \subsection{Quantitative (gLp) with (G3s)} We start with some estimates on $G$-affine functions. In this subsection, $x_m$ is a point in $X$, $u \in \mathbb{R}$, and $y_0, y_1 \in \mathfrak{h}_{x_m, u}$. Also, for $\theta \in [0,1]$, we denote by $y_{\theta}$ the $G$-segment that connects $y_0$ and $y_1$ with focus $(x_m,u)$, and we use $v_{\theta} = H(x_m, y_{\theta}, u)$ and $p_i = D_xG(x_m,y_i,v_i)$. Moreover, we will assume that the points satisfy $(x_m, y_{\theta}, u) \in S$ for some compact set $S \Subset \mathfrak{h}$. Then by Remark $\ref{nicedom}$, we get constants $C_e$ and $\alpha$ that depend on $S$ and satisfy ($\ref{Ce1}$), ($\ref{Ce2}$), and ($\ref{alpha}$). \begin{Lem} \label{2dest} For some constant $C_1$ that depends on the $C^3$ norm of $G$, the $C^1$ norm of $H$, and $C_e$, we have \begin{equation} \left| \left( D^2_{xx}G(x_m,y_{\theta},v_{\theta}) - D^2_{xx}G(x_m,y_{\theta'},v_{\theta'})\right) [\xi, \xi] \right| \leq C_1|\theta - \theta'||p_1 - p_0||\xi|^2 \end{equation} \end{Lem} \begin{proof} \begin{displaymath} \begin{array}{l} \|D^2_{xx} G(x_m, y_{\theta} , H(x_m, y_{\theta}, u)) - D^2_{xx} G(x_m, y_{\theta'}, H(x_m, y_{\theta'}, u))\| \\ \leq \|D^3_{xxy}G\||y_{\theta} - y_{\theta'}| + \|D^3_{xxv}G\| \|D_yH\| |y_{\theta} - y_{\theta'}| \\ \leq (\|D^3_{xxy}G\| + \|D^3_{xxv}G\| \|D_yH\|)C_e |\theta - \theta'||p_1 - p_0|. \end{array} \end{displaymath} We set $C_1 = (\|D^3_{xxy}G\| + \|D^3_{xxv}G\| \|D_yH\|)C_e$. \end{proof} \begin{Lem} \label{2dest2} Let $\xi_p = \mathrm{Proj}_{p_1 - p_0} (\xi)$, where $\mathrm{Proj}_{p}$ is the orthogonal projection onto $p$. Then for some constants $\Delta_1$ and $\Delta_2$ that depend on $\alpha$ and the $C^4$ norm of $G$, we have \begin{displaymath} D^2_{xx}G(x_m,y_{\theta}, v_{\theta})[\xi, \xi] \leq \begin{array}{l} \left((1-\theta) D^2_{xx}G(x_m, y_0, v_0) + \theta D^2_{xx}G(x_m, y_1, v_1)\right)[\xi, \xi] \\ +\theta(1-\theta)|p_1 - p_0|^2(-\Delta_1 |\xi|^2 + \Delta_2 |\xi_p|^2).
\end{array} \end{displaymath} \end{Lem} \begin{proof} Let $f_{\xi} : [0,1] \to \mathbb{R}$ be such that \begin{displaymath} f_{\xi}(\theta) = D^2_{xx}G(x_m, y_{\theta}, v_{\theta})[\xi, \xi]. \end{displaymath} Let $\xi' = \xi - \xi_p $ so that $\xi' \perp \xi_p$. Then we can apply \eqref{G3s}, and we obtain \begin{displaymath} f_{\xi'}'' \geq \alpha |p_1 - p_0|^2 |\xi'|^2 \end{displaymath} and from this uniform convexity, we have \begin{equation} \label{lem3.2:1} f_{\xi'} (\theta) \leq \theta f_{\xi'}(1) + (1-\theta)f_{\xi'}(0) - \frac{1}{2} \alpha |p_1 - p_0|^2|\xi'|^2 \theta(1-\theta). \end{equation} Let $g_{\xi} = f_{\xi} - f_{\xi'}$. Then \begin{align*} g_{\xi}''(\theta) & = f_{\xi}''(\theta) - f_{\xi'}'' (\theta)\\ & = D^2_{pp} \mathcal{A}[ \xi, \xi, p_1 - p_0, p_1 - p_0] - D^2_{pp} \mathcal{A} [ \xi', \xi', p_1 - p_0, p_1 - p_0] \\ & = 2D^2_{pp} \mathcal{A} [\xi', \xi_p, p_1 - p_0, p_1 - p_0] + D^2_{pp} \mathcal{A} [ \xi_p, \xi_p, p_1 - p_0, p_1 - p_0]. \end{align*} Therefore, bounding $|\xi'|$ by $|\xi|$, we obtain \begin{displaymath} |g_{\xi}''| \leq 3 \|D^2_{pp}\mathcal{A} \| |p_1 - p_0|^2|\xi||\xi_p| \end{displaymath} and from this bound, we have \begin{equation} \label{lem3.2:2} g_{\xi}(\theta) \leq \theta g_{\xi}(1) + (1-\theta) g_{\xi}(0) + \frac{3}{2}\|D^2_{pp}\mathcal{A}\||p_1 - p_0|^2|\xi||\xi_p| \theta (1-\theta). \end{equation} Combining ($\ref{lem3.2:1}$) and ($\ref{lem3.2:2}$), we obtain \begingroup \allowdisplaybreaks \begin{align*} D^2_{xx}G(x_m,y_{\theta}, v_{\theta}) & = g_{\xi} + f_{\xi'} \\ & \leq \theta g(1) + (1-\theta) g(0) + \frac{3}{2} \|D^2_{pp}\mathcal{A}\||p_1 - p_0|^2|\xi||\xi_p| \theta (1-\theta) \\ & \quad + \theta f_{\xi'}(1) + (1-\theta)f_{\xi'}(0) - \frac{1}{2} \alpha |p_1 - p_0|^2|\xi'|^2 \theta(1-\theta) \\ & = \theta D^2_{xx}G(x_m,y_1, u) [\xi, \xi] + (1-\theta)D^2_{xx}G(x_m,y_0, u)(\xi, \xi) \\ & \quad + \theta(1-\theta) | p_1 - p_0|^2 \left( -\frac{\alpha}{2}|\xi'|^2 + \frac{3}{2}\|D^2_{pp}\mathcal{A}\| |\xi||\xi_p| \right) \\ & \leq \theta D^2_{xx}G(x_m,y_1, u) [\xi, \xi] + (1-\theta)D^2_{xx}G(x_m,y_0, u)(\xi, \xi) \\ & \quad + \theta(1-\theta) | p_1 - p_0|^2 \left( -\frac{\alpha}{2}|\xi|^2 + (\frac{3}{2}\|D^2_{pp}\mathcal{A}\| +\alpha)|\xi||\xi_p| \right). \end{align*} \endgroup Here, we use weighted Young's inequality \begin{displaymath} (\frac{3}{2}\|D^2_{pp}\mathcal{A}\|+\alpha) |\xi||\xi_p| \leq \frac{\alpha}{4}|\xi|^2 + \alpha^{-1} (\frac{3}{2}\|D^2_{pp}\mathcal{A}\|+\alpha)^2 |\xi_p|^2. \end{displaymath} Then we obtain \begin{align*} D^2_{xx}G(x_m, y_{\theta}, v_{\theta}) & \leq \theta D^2_{xx}G(x_m,y_1, u) [\xi, \xi] + (1-\theta)D^2_{xx}G(x_m,y_0, u)[\xi, \xi] \\ & + \theta(1-\theta) | p_1 - p_0|^2 \left( -\frac{\alpha}{4} |\xi|^2 + \alpha^{-1} (\frac{3}{2}\|D^2_{pp}\mathcal{A}\| +\alpha)^2 |\xi_p|^2 \right). \end{align*} Hence we get the inequality with $\Delta_1 = \frac{\alpha}{4} $ and $\Delta_2 = \alpha^{-1}(\frac{3}{2} \|D^2_{pp}\mathcal{A}\| +\alpha)^2$. \end{proof} The next lemma is the quantitative version of (gLp). We will use the (G3s) condition through Lemma $\ref{qglp}$ later. \begin{Lem} \label{qglp} Define $\bar{\phi}(x) : X \to \mathbb{R}$ by \begin{displaymath} \bar{\phi}(x) = \max \{ G(x, y_0, v_0), G(x,y_1, v_1) \} \end{displaymath} Then we have the quantitative (gLp) : \begin{equation} \bar{\phi}(x) \geq G(x, y_{\theta}, v_{\theta}) + \delta_0\theta(1-\theta)|y_1 - y_0|^2|x-x_m|^2 -\gamma |x-x_m|^3 \end{equation} \label{qglp1} for $\theta \in [ \epsilon , 1- \epsilon]$ and $|x - x_m| \leq C \epsilon$. 
\end{Lem} \begin{proof} Note that by Taylor expansion at $x_m$, \begin{displaymath} G(x,y_i, v_i) = u + \langle D_xG(x_m,y_i,v_i), (x-x_m) \rangle + \frac{1}{2} D^2_{xx}G(x_m,y_i,v_i)[x-x_m, x-x_m] + o(|x-x_m|^2). \end{displaymath} Therefore, we have \begin{align*} \bar{\phi}(x) & \geq \theta G(x,y_1,v_1) + (1 - \theta) G(x,y_0,v_0) \\ & = u + \langle \theta p_1 + (1-\theta) p_0 , x-x_m \rangle \\ &\ + \frac{1}{2} \left( \theta D^2_{xx}G(x_m, y_1, v_1) + (1-\theta) D^2_{xx}G(x_m, y_0, v_0)\right)[x-x_m , x-x_m] + o(|x-x_m|^2). \end{align*} Applying Lemma $\ref{2dest2}$, we obtain \begin{align} \label{qglp2} \bar{\phi}(x) & \geq u + \langle \theta p_1 + (1-\theta)p_0, x-x_m \rangle + \frac{1}{2} D^2_{xx}G(x_m, y_{\theta}, v_{\theta}) [x-x_m, x-x_m] \\ & \ \ - \frac{1}{2}\theta(1-\theta)|p_1 - p_0|^2 ( - \Delta_1 |x-x_m|^2 + \Delta_2 |(x-x_m)_p|^2) + o(|x-x_m|^2) \nonumber \end{align} for any $\theta \in [0,1]$. Let $\theta' \in [0,1]$; then we can write ($\ref{qglp2}$) with $\theta'$. Let us call this inequality ($\ref{qglp2}$'). Adding and subtracting the right hand side of ($\ref{qglp2}$) to the right hand side of ($\ref{qglp2}$') and reordering some terms, we get \begin{align} \label{qglp3} \bar{\phi}(x) & \geq u + \langle \theta p_1 + (1-\theta)p_0,x-x_m \rangle + \frac{1}{2} D^2_{xx}G(x_m, y_{\theta}, v_{\theta}) [x-x_m, x-x_m] \nonumber \\ & \quad + \frac{1}{2} \Delta_1 \theta(1-\theta)|p_1 - p_0|^2 |x-x_m|^2 \nonumber \\ & \quad +(\theta' - \theta)\langle p_1 - p_0, x - x_m \rangle -\frac{1}{2}\theta(1-\theta)\Delta_2|p_1 - p_0|^2|(x - x_m)_p|^2 \nonumber \\ & \quad + \frac{1}{2} \left( D^2_{xx}G(x_m, y_{\theta'},v_{\theta'}) - D^2_{xx}G(x_m, y_{\theta},v_{\theta}) \right) [x-x_m, x-x_m] \\ & \quad + \frac{1}{2}\Delta_1 \left( \theta'(1-\theta') - \theta(1-\theta) \right)|p_1 - p_0|^2|x-x_m|^2 \nonumber \\ & \quad + \frac{1}{2}\Delta_2 \left( \theta(1-\theta) - \theta'(1-\theta') \right)|p_1 - p_0|^2|(x-x_m)_p|^2 + o(|x-x_m|^2). \nonumber \end{align} Note that by the definition of $(x-x_m)_p$, we have $|p_1 - p_0||(x - x_m)_p| = | \langle p_1 - p_0,x - x_m \rangle|$. Hence, we can write the third line $L_3$ as follows: \begin{displaymath} L_3 = [\theta' - \theta -\frac{1}{2}\theta(1-\theta)\Delta_2 \langle p_1 - p_0, x - x_m \rangle]\langle p_1 - p_0 ,x - x_m \rangle. \end{displaymath} Therefore, if we choose \begin{equation} \label{theta'} \theta' = \theta + \theta(1-\theta) \Delta_2 \langle p_1 - p_0, x - x_m \rangle , \end{equation} then $L_3 = \frac{1}{2}\theta(1-\theta)\Delta_2 \langle p_1 - p_0, x - x_m \rangle^2 \geq 0$, so $L_3$ can be dropped from the lower bound. To ensure that $\theta'$ is in $[0,1]$, we first assume that $\theta$ is away from 0 and 1, i.e. we assume $\theta \in [\epsilon, 1- \epsilon]$ for $\epsilon >0$. Then we can make the second term $\theta(1-\theta) \Delta_2 \langle p_1 - p_0, x - x_m \rangle$ small by assuming \begin{displaymath} |x - x_m| \leq \frac{4 \epsilon}{ \Delta_2 |p_1 - p_0|} \leq \frac{\epsilon}{\theta(1-\theta) \Delta_2 |p_1 - p_0|}. \end{displaymath} Under these assumptions and ($\ref{theta'}$), we get $\theta' \in [0,1]$ and $L_3 \geq 0$.
We can apply Lemma $\ref{2dest}$ to the forth line $L_4$ of ($\ref{qglp3}$) to get \begin{align} \label{qglp4} L_4 & = \frac{1}{2} \left( D^2_{xx}G(x_m, y_{\theta'},v_{\theta'}) - D^2_{xx}G(x_m, y_{\theta},v_{\theta}) \right) [x-x_m, x-x_m] \nonumber \\ & \geq - C_1 |\theta - \theta'||p_1 - p_0||x - x_m|^2 \\ & \geq -C_1 \theta(1-\theta)\Delta_2 |p_1 - p_0|^2|x-x_m|^3 \nonumber \\ & \geq - \frac{1}{4} C_1 \Delta_2 |p_1 - p_0|^2|x-x_m|^3 \nonumber \end{align} For the fifth and sixth line, $L_5$ and $L_6$, note that by ($\ref{theta'}$), \begin{align*} \theta'(1-\theta') - \theta(1-\theta) & = (\theta - \theta')(\theta + \theta' -1) \\ & = - \theta(1-\theta)\Delta_2 \langle p_1 - p_0, x-x_m \rangle (\theta + \theta' -1) \end{align*} so that we can bound \begin{align} \label{qglp5} |L_5| & = \left| \Delta_1 [(\theta'(1-\theta') - \theta(1-\theta)]|p_1 - p_0|^2|x-x_m|^2 \right| \nonumber \\ & \leq \theta(1-\theta)(\theta+\theta'-1)\Delta_1 \Delta_2|p_1 - p_0|^3|x-x_m|^3 \\ & \leq \frac{1}{4} \Delta_1 \Delta_2|p_1 - p_0|^3|x-x_m|^3 \nonumber \end{align} \begin{align} \label{qglp6} |L_6| & =\left| \Delta_2 [(\theta(1-\theta) - \theta'(1-\theta')]|p_1 - p_0|^2|(x-x_m)_p|^2 \right| \nonumber \\ & \leq \theta(1-\theta)(\theta+\theta'-1)(\Delta_2)^2 |p_1 - p_0|^3|x-x_m|^3 \\ & \leq \frac{1}{4} (\Delta_2)^2 |p_1 - p_0|^3|x-x_m|^3. \nonumber \end{align} Combining ($\ref{qglp4}$), ($\ref{qglp5}$), and ($\ref{qglp6}$), we can bound ($\ref{qglp3}$) from below \begin{align} \label{qglp7} \bar{\phi}(x) \geq & u + \langle \theta p_1 + (1-\theta)p_0, x-x_0 \rangle + \frac{1}{2} D^2_{xx}G(x_m, y_{\theta}, v_{\theta}) [x-x_m, x-x_m] \nonumber\\ & + \Delta_1 \theta(1-\theta)|p_1 - p_0|^2 |x-x_m|^2 \\ & - C_2(|p_1 - p_0|^2 + |p_1 - p_0|^3)|x-x_m|^3 + o(|x-x_m|^2) \nonumber \end{align} where $C_2$ depends on $C_1$, $\Delta_1$, and $\Delta_2$. We apply Taylor's Theorem to the first line of ($\ref{qglp7}$) and change it to $G(x,y_{\theta},v_{\theta})$ with $o( |x-x_m|^2)$. Moreover, the little o term $o(|x-x_m|^2)$ is at least $O(|x-x_m|^3)$ because the generating function is $C^4$. Therefore we can put it with the $|x-x_m|^3$ term, and we get \begin{align*} \bar{\phi}(x) \geq & G(x,y_{\theta},v_{\theta}) + \Delta_1 \theta(1-\theta)|p_1 - p_0|^2 |x-x_m|^2 \\ & -C_2(1 + |p_1 - p_0|^2 + |p_1 - p_0|^3)|x-x_m|^3 \end{align*} possibly taking larger $C_2$ then before. Finally, we bound $|p_1 - p_0|$ by $C_e \mathrm{diam}(Y)$ and $\frac{1}{C_e}|y_1 - y_0|$ from above and below to get \begin{displaymath} \bar{\phi}(x) \geq G(x,y_{\theta},v_{\theta}) + \frac{\Delta_1}{C_e^2}\theta(1-\theta)|y_1 - y_0|^2|x-x_m|^2 -\gamma |x-x_m|^3. \end{displaymath} Hence we obtain the lemma with $\delta_0 = \Delta_1 / C_e^2$ and $\gamma = C_2(1+C_e^2 \mathrm{diam}(Y)^2 + C_e^3 \mathrm{diam}(Y)^3) $. \end{proof} \subsection{Local estimates for $G-$convex functions} If a $G$-convex function $\phi$ is $C^2$, then for any $x \in X$ we get a $G$-supporting function at $x$. From the definition of a $G$-supporting function, the difference of $\phi$ and the $G$-supporting function attains global minimum at $x$ and the regularity condition implies \begin{displaymath} D^2_{xx} \phi \geq \ D^2_{xx}G \geq - \| D^2_{xx} G \| I. \end{displaymath} Hence it is semi convex. We show the semi convexity without $C^2$ assumption on the $G$- convex function in the following proposition. \begin{Prop} \label{semiconvex} Let $\phi$ be a $G$-convex function that satisfies (nicew). 
Then we have the following inequality: \begin{equation} \phi(x_t) \leq (1-t)\phi(x_0) + t\phi(x_1) +\frac{1}{2}t(1-t)\| D^2_{xx} G \||x_0 - x_1|^2 \end{equation} where $x_t = (1-t) x_0 + t x_1$. In particular, $\phi$ is semi-convex. \end{Prop} \begin{proof} Since $\phi$ is $G$-convex, we have $y \in Y$ and $v \in \mathbb{R}$ such that $(x_t, y, v) \in \Psi$ and \begin{align*} & \phi(x_t) = G(x_t, y,v), \\ &\phi(x) \geq G(x, y, v), \ \forall x \in X. \end{align*} Moreover, we have \begin{displaymath} G(x,y,v) \geq \phi(x_t) + \langle p_t , x-x_t \rangle - \frac{1}{2} \| D^2_{xx} G \| |x-x_t|^2 \end{displaymath} where $p_t = D_x G(x_t, y, v)$. We evaluate this at $x = x_0$ and $x = x_1$ and add the two inequalities with weights $(1-t)$ and $t$ respectively: \begin{align} \label{semiconvex1} (1-t) \phi(x_0) + t \phi(x_1) &\geq (1-t) G(x_0 , y, v) + t G(x_1, y, v) \nonumber \\ & \geq \phi(x_t) + \langle p_t , (1-t)(x_0 - x_t) + t (x_1 - x_t) \rangle \\ & \quad - \frac{1}{2} \| D^2_{xx} G \| \left( (1-t)|x_0 - x_t|^2 + t |x_1 - x_t|^2 \right). \nonumber \end{align} Note that by the choice of $x_t$, we have $(1-t)(x_0 - x_t) + t(x_1 - x_t) = 0$ and \begin{displaymath} |x_0 - x_t| = t|x_0 - x_1| \textrm{ and } |x_1 - x_t| = (1-t)|x_0 - x_1|. \end{displaymath} Then ($\ref{semiconvex1}$) becomes \begin{displaymath} (1-t)\phi(x_0) + t \phi(x_1) \geq \phi(x_t) - \frac{1}{2}t(1-t) \| D^2_{xx} G \| |x_0 - x_1|^2, \end{displaymath} which is the desired inequality. This implies that $\phi$ is semi-convex because $t(1-t)|x_0 - x_1|^2 + |x_t|^2 = (1-t)|x_0|^2 + t|x_1|^2$, so that \begin{align*} \phi(x_t) + \frac{1}{2} \| D^2_{xx}G \| |x_t|^2 \leq& (1-t) \phi(x_0) + t \phi(x_1) \\ & + \frac{1}{2} \| D^2_{xx} G \| ( t(1-t) |x_1 - x_0|^2 + |x_t|^2 ) \\ \leq & (1-t) ( \phi(x_0) + \frac{1}{2} \| D^2_{xx} G \||x_0|^2 ) \\ & + t ( \phi(x_1) + \frac{1}{2} \| D^2_{xx}G \| |x_1|^2 ) . \end{align*} Therefore $\phi(x) + \frac{1}{2} \|D^2_{xx}G \||x|^2$ is convex. \end{proof} When we use norms of derivatives of $G$ and $H$, we need to check that the points lie in $\mathfrak{h}$ and $\mathfrak{g}$, and that they are in a compact subset of $X \times Y \times \mathbb{R}$. So far, we have chosen points on the graph of a $G$-convex function. Then this choice, together with the $G$-convexity of the $G$-subdifferential, ensures that the points which we are using are in compact subsets of $\mathfrak{h}$ and $\mathfrak{g}$. But we need to use other points later (for example, $(x_t, y_i, u)$ in Lemma $\ref{localest}$). Hence we localize the argument and use the next lemma to show that our points lie in a compact subset of $\mathfrak{h}$. \begin{Lem} \label{localniceg} Let $\phi$ be a $G$-convex function with (nicew) and let $x_0 \in \mathring{X}$. Then there exist $\delta(x_0) >0$ and $S \Subset \mathfrak{h}$ such that if $|x_1 - x_0| < \delta(x_0)$, then \begin{equation} \label{inS} \begin{array}{c} (x_t, y_{\theta} , G(x_t,y_0,H(x_0,y_0,\phi(x_0)))) \in S \\ (x_t, y_{\theta}, \phi(x_t)) \in S \end{array} \end{equation} for any $x_t = (1-t)x_0 + tx_1$, $t \in [0,1]$, and $y_{\theta}$, the $G$-segment connecting $y_0$ and $y_1$ with focus $(x_t, \phi(x_t))$, where $y_0 \in \gsub{\phi}{x_0}$, $y_1 \in \gsub{\phi}{x_1}$. \end{Lem} \begin{proof} Note that by (nicew), we have that $(x_0, y_0, \phi(x_0)) \in \mathring{\mathfrak{h}}$ for any $y_0 \in \gsub{\phi}{x_0}$.
Therefore, we have $r_1, r_2, r_3 >0$ such that \begin{equation} \label{S} S := B_{r_1}(x_0) \times \left( \nbhd{r_2}{\gsub{\phi}{x_0}} \cap Y \right) \times (\phi(x_0) - r_3 , \phi(x_0) + r_3) \Subset \mathring{\mathfrak{h}} \end{equation} that is, $\overline{S}$ is compact and $\overline{S}$ is contained in $\mathring{\mathfrak{h}}$. Let $C_e$ be the constant from Remark $\ref{nicedom}$, and let $ \gsubs{\phi}{x_0} = \left(\gexp{x_0}{\phi(x_0)} \right)^{-1} \left( \gsub{\phi}{x_0} \right) $. Then by the same remark, we have \begin{displaymath} \nbhd{r_2 / C_e}{\gsubs{\phi}{x_0}} \cap \mathfrak{h}^*_{x_0, \phi(x_0)} \subset \left( \gexp{x_0}{\phi(x_0)} \right)^{-1} \left( \nbhd{r_2}{\gsub{\phi}{x_0}} \cap Y \right). \end{displaymath} Note that $D_x G(x, \cdot, H(x, \cdot, u)) = \left( \gexp{x}{u} \right)^{-1}(\cdot)$. Since the function $(x,y,u) \mapsto D_xG(x,y,H(x,y,u))$ is uniformly continuous on $S$, there exist $\delta_x, \delta_u >0$ such that if $|x - x_0| < \delta_x$ and $|u - \phi(x_0)| < \delta_u$, then \begin{displaymath} \left| D_xG(x, y, H(x,y,u)) - D_xG(x_0, y, H(x_0, y, \phi(x_0))) \right| < \frac{r_2}{4C_e} \end{displaymath} for any $y \in \nbhd{r_2}{\gsub{\phi}{x_0}} \cap Y$. Hence, for any $y \in \nbhd{r_2}{\gsub{\phi}{x_0}} \cap Y$ such that $ \left( \gexp{x_0}{\phi(x_0)} \right)^{-1} (y) \in \nbhd{r_2/4C_e}{\gsubs{\phi}{x_0}}$, we have \begin{displaymath} \left( \gexp{x}{u} \right)^{-1}(y) \in \nbhd{r_2/2C_e}{\gsubs{\phi}{x_0}} \end{displaymath} if $|x-x_0| < \delta_x$ and $|u - \phi(x_0)| < \delta_u$. Note that the set $\nbhd{r_2/2C_e}{\gsubs{\phi}{x_0}}$ is convex by $G-$convexity of $G-$subdifferentials. Again from Remark $\ref{nicedom}$, if $y \in \nbhd{r_2/4C_e^2}{\gsub{\phi}{x_0}}$ then $\left( \gexp{x_0}{\phi(x_0)} \right)^{-1}(y) \in \nbhd{r_2/4C_e}{\gsubs{\phi}{x_0}}$. By Proposition $\ref{gsubconti}$, there exists $\delta_1$ such that if $|x-x_0| < \delta_1$, then \begin{displaymath} \gsub{\phi}{x} \subset \nbhd{r_2/4C_e^2}{\gsub{\phi}{x_0}}. \end{displaymath} Moreover, by continuity of $G$,$H$, and $\phi$, we have $\delta_2$ such that if $|x-x_0| < \delta_2$, then for any $y_0 \in \gsub{\phi}{x_0}$, \begin{align*} \allowdisplaybreaks |G(x,y_0,H(x_0,y_0,\phi(x_0))) - \phi(x_0)| < \min\{\delta_u, r_3\} \\ |\phi(x) - \phi(x_0)| < \min\{ \delta_u, r_3 \}. \end{align*} We take $\delta(x_0)$ small enough so that $\delta(x_0) \leq \min \{ \delta_x, \delta_u, \delta_1, \delta_2, r_1, r_3 \}$. Suppose $|x_1 - x_0| < \delta(x_0)$. Then $y_1 \in \gsub{\phi}{x_1} \subset \nbhd{r_2/4C_e^2}{\gsub{\phi}{x_0}}$. Moreover, for any $y_0 \in \gsub{\phi}{x_0}$, we have $\phi(x_t), G(x_t, y_0, H(x_0,y_0,\phi(x_0))) \in ( \phi(x_0) - r_3 , \phi(x_0) + r_3)$. Hence $(x_t, y_0, u)$, $(x_t, y_1, u) \in S \subset \mathfrak{h}$ where $u$ is either $\phi(x_t)$ or $G(x_t, y_0, H(x_0,y_0,\phi(x_0)))$. By our construction, the $G$-segment that connects $y_0$ and $y_1$ with focus $(x_t, u)$ is contained in $\nbhd{r_2/2}{\gsub{\phi}{x_0}} \cap Y$. Therefore, we have \begin{displaymath} (x_t, y_{\theta}, u) \in B_{\delta(x_0)}(x_0) \times \left( \nbhd{r_2/2}{\gsub{\phi}{x_0}} \cap Y \right) \times (\phi(x_0) - r_3 , \phi(x_0) + r_3) \subset S. \end{displaymath} \end{proof} \begin{Rmk} \label{deponsol} The constant $\delta(x_0)$ depends on the modulus of continuity of $\phi$ and $\partial_G \phi$ at $x_0$. If we have estimates on these apriori, then we can get rid of this dependency on the solution. \end{Rmk} Note that ($\ref{S}$) shows structure of $S$ other than ($\ref{inS}$). 
We will use this structure of $S$ as well. With the help of the above lemma, we can bound the various norms needed for the next lemma. \begin{Lem} \label{localest} Let $\phi$ be a $G$-convex function and let $x_0 \in X$. Choose $x_1$ such that $|x_0 - x_1| < \delta(x_0)$. Let $G(x,y_0,v_0)$ and $G(x,y_1,v_1)$ be supporting $G$-affine functions that support $\phi$ at $x_0$ and $x_1$ respectively. Let $x_t \in [x_0 , x_1]$ be such that \begin{displaymath} G(x_t, y_0, v_0) = G(x_t,y_1,v_1) =: u. \end{displaymath} Assume $|y_1 - y_0| \geq |x_0 - x_1|$; then we have \begin{equation} \label{lem4} \phi(x_t) - u \leq C_3 |x_1 - x_0| |y_1 - y_0| \end{equation} where $C_3$ depends on the $C^2$ norm of $G$, hence on the constant $C_e$ from Remark $\ref{nicedom}$, which in turn depends on $S$ in Lemma $\ref{localniceg}$. \end{Lem} \begin{proof} First of all, the definition of a supporting function implies that \begin{align*} G(x_0, y_0, v_0) - G(x_0,y_1,v_1) = \phi(x_0) - G(x_0,y_1,v_1) \geq 0 \\ G(x_1, y_0, v_0) - G(x_1, y_1, v_1) = G(x_1, y_0, v_0) - \phi(x_1) \leq 0 \end{align*} so that the existence of $x_t$ follows from the intermediate value theorem. If $t$ is either 1 or 0, then the left hand side of ($\ref{lem4}$) is 0, so the lemma is trivial. Otherwise, by our choice of $x_t$ and $u$, we have the equalities \begin{displaymath} u = G(x_t, y_0, v_0) = G(x_t, y_0, H(x_0, y_0, \phi(x_0))). \end{displaymath} Note that by ($\ref{S}$) of Lemma $\ref{localniceg}$, we have $(x_t, y_i, u) \in S$. We use the Taylor expansion of $G(x,y_i,v_i)$ at $x_t$ to get \begin{equation} \label{lem4:1} \phi(x_i) - u \leq \langle D_x G(x_t, y_i, v_i), x_i - x_t \rangle + \frac{1}{2} \| D^2_{xx} G \| |x_i - x_t|^2. \end{equation} From Proposition $\ref{semiconvex}$, \begin{align} \label{lem4:2} \phi(x_t) -u & \leq (1-t) ( \phi(x_0)-u) + t (\phi(x_1)-u) + \frac{1}{2} t(1-t) \| D^2_{xx} G \| |x_1 - x_0 |^2 \\ & \leq (1-t) (\phi(x_0)-u) + t( \phi(x_1)-u) + \frac{1}{8} \| D^2_{xx} G \| |x_1 - x_0 |^2. \nonumber \end{align} If $(1-t)\langle D_x G(x_t, y_0, v_0), x_0 - x_t \rangle + t (\phi(x_1)-u) \leq 0$, then from ($\ref{lem4:1}$) and ($\ref{lem4:2}$), \begin{align*} \phi(x_t) - u & \leq (1-t) \left( \langle D_x G(x_t, y_0, v_0), x_0 - x_t \rangle + \frac{1}{2} \| D^2_{xx} G \| |x_0 - x_t|^2 \right) \\ & \ \ +t (\phi(x_1)-u) +\frac{1}{8} \| D^2_{xx}G\| |x_1 - x_0|^2 \\ & \leq (1-t) \frac{1}{2} \| D^2_{xx} G \| |x_0 - x_t|^2 +\frac{1}{8} \| D^2_{xx}G\| |x_1 - x_0|^2 \\ & \leq \frac{5}{8} \|D^2_{xx}G \||x_1 - x_0|^2 \\ & \leq \frac{5}{8}\|D^2_{xx}G \||y_1 - y_0||x_1 - x_0|.
\end{align*} Otherwise, since $t(1-t) \leq 1$, we have \begin{align*} 0 & \leq (1-t)\langle D_xG(x_t, y_0, v_0), x_0 - x_t \rangle + t (\phi(x_1)-u) \\ & \leq \frac{1}{t} \langle D_xG(x_t, y_0, v_0), x_0 - x_t \rangle + \frac{1}{1-t} (\phi(x_1)-u) \end{align*} so that \begin{align*} \allowdisplaybreaks \phi(x_t) -u & \displaystyle \leq (1-t)\langle D_xG(x_t, y_0, v_0), x_0 - x_t \rangle + t(\phi(x_1) - u) \\ & \displaystyle \ \ + \left( \frac{1}{8}+\frac{1}{2}(1-t)\right)\|D^2_{xx}G \||x_1 - x_0|^2 \\ & \displaystyle \leq \frac{1}{t} \langle D_xG(x_t, y_0, v_0), x_0 - x_t \rangle + \frac{1}{1-t} (\phi(x_1) - u) \\ &\displaystyle \ \ + \left( \frac{1}{8}+\frac{1}{2}(1-t)\right)\|D^2_{xx}G \||x_1 - x_0|^2 \\ & \displaystyle \leq \frac{1}{t} \langle D_xG(x_t, y_0, v_0), x_0 - x_t \rangle + \frac{1}{1-t} \langle D_xG(x_t, y_1, v_1), x_1 - x_t \rangle \\ & \displaystyle \ \ + \frac{1}{2(1-t)} \|D^2_{xx}G \||x_1 - x_t|^2 + \frac{5}{8}\|D^2_{xx}G \||x_1 - x_0|^2 \\ & \displaystyle = \langle D_xG(x_t, y_0, v_0), x_0 - x_1 \rangle + \langle D_xG(x_t, y_1, v_1), x_1 - x_0 \rangle \\ & \displaystyle \ \ +\frac{1}{2}\|D^2_{xx}G \||x_1 - x_t ||x_1 - x_0| + \frac{5}{8}\|D^2_{xx}G \||x_1 - x_0|^2. \\ \end{align*} In the last equality, we used $ t = \frac{|x_0 - x_t|}{|x_1 - x_0|}$ and $1-t = \frac{|x_1 - x_t|}{ |x_1 - x_0|}$. We use the fundamental theorem of calculus to obtain \begin{align*} \phi(x_t) - u & \displaystyle \leq \int_0^1 \frac{d}{d\theta} \langle D_x G (x_t, y_{\theta}, H(x_t, y_{\theta}, u)) , x_1 - x_0 \rangle \, d\theta \\ &\displaystyle \ \ \ \ \ + \frac{9}{8}\| D^2_{xx}G \| |x_1 - x_0|^2 \\ &\displaystyle \leq \| E \| C_e |y_1 - y_0| |x_1 - x_0| + \frac{9}{8}\| D^2_{xx}G \| |x_1 - x_0|^2 \\ & \leq \left( C_e^2 + \frac{9}{8} \| D^2_{xx}G \| \right) |y_1 - y_0||x_1 - x_0| \end{align*} where $y_{\theta}$ is the $G$-segment connecting $y_0$ and $y_1$ with focus $(x_t, u)$. \end{proof} \begin{Lem} \label{gsubdiffest} Let $x_t$ be as in Lemma $\ref{localest}$. There exist $l$ and $r$, depending on $|x_0 - x_1|$ and $|y_0 - y_1|$, and a constant $\kappa$ such that if $\mathcal{N}_r([x_0, x_1]) \subset X$ and \begin{equation} \label{gsubestcond} |y_0 - y_1| \geq \max\{|x_1 - x_0|, \kappa |x_1 - x_0|^{1/5}\} \end{equation} then, choosing $x_1$ close to $x_0$ if necessary, we have \begin{displaymath} \mathcal{N}_l \left( \left\{ y_{\theta} | \theta \in \left[\frac{1}{4} , \frac{3}{4}\right] \right\} \right) \cap Y \subset \partial_G \phi ( B_r (x_t) ) \end{displaymath} where $y_{\theta}$ is the $G$-segment connecting $y_0$ and $y_1$ with focus $(x_t,u)$ as in the proof of Lemma $\ref{localest}$. \end{Lem} \begin{proof} Let $u$ be as in the proof of Lemma $\ref{localest}$. By the definition of $G$-convexity and Lemma $\ref{qglp}$, we have \begin{align} \label{lem5:1} \phi(x) & \geq \max \{ G(x, y_0, v_0) , G(x,y_1,v_1) \} \\ & \geq G(x, y_{\theta}, v_{\theta}) + \frac{3}{16}\delta_0|y_1 - y_0|^2|x-x_t|^2 -\gamma |x-x_t|^3 \nonumber \end{align} for $\theta \in [ \frac{1}{4} , \frac{3}{4}]$ and $|x - x_t| \leq \frac{1}{4}C$, where $v_{\theta} = H(x_t, y_{\theta}, u)$. Next, we look at a $G$-affine function that passes through $(x_t , \phi(x_t))$ with focus $(y, H(x_t,y,\phi(x_t)))$ where $y \in \mathfrak{h}_{x_t,\phi(x_t)}$, i.e. $G(x, y, H(x_t, y, \phi(x_t)))$. We estimate this function on the boundary of $B_r(x_t)$ to compare it with $\phi$. Let $v(y,u) = H(x_t, y, u)$.
\begin{align} \label{lem5:2} G(x,y,v(y,\phi(x_t))) & = G(x, y, v(y,\phi(x_t))) - G(x, y_{\theta}, v(y_{\theta}, \phi(x_t))) \nonumber \\ & \ \ + G(x, y_{\theta}, v(y_{\theta}, \phi(x_t))) - G(x, y_{\theta}, v(y_{\theta}, u)) \\ & \ \ + G(x,y_{\theta},v(y_{\theta},u)) \nonumber \end{align} Note that by Lemma $\ref{localniceg}$, $(x_t, y_{\theta}, u) \in S \Subset \mathfrak{h}$. For the first line, noting that $\phi(x_t) = G(x_t, y,v(y,\phi(x_t)))$, we have \begin{align} \label{lem5:3:1} \allowdisplaybreaks &G(x,y,v(y,\phi(x_t))) - \phi(x_t) \nonumber \\ &= \displaystyle \int_0^1 \frac{d}{ds} \left( G(x_t + s(x-x_t), y, v(y, \phi(x_t))) \right) ds \\ &= \displaystyle \int_0^1 \langle D_x G( x_t + s(x-x_t), y, v(y, \phi(x_t))), x-x_t \rangle ds \nonumber \end{align} and similar equation holds for $G(x, y_{\theta}, v(y_{\theta}, \phi(x_t))) - \phi(x_t)$. Therefore, we have \begin{align} \label{lem5:3} \allowdisplaybreaks &G(x, y, v(y,\phi(x_t))) - G(x, y_{\theta}, v(y_{\theta}, \phi(x_t))) \nonumber \\ &=\displaystyle \int_0^1 \langle \left( \begin{array}{l} D_x G( x_t + s(x-x_t), y, v(y, \phi(x_t))) \\ - D_x G( x_t + s(x-x_t), y_{\theta}, v(y_{\theta}, \phi(x_t))) \end{array} \right), x-x_t \rangle ds \nonumber \\ &=\displaystyle \int_0^1 \int_0^1 \frac{d}{ds'} \langle \left( D_xG \left(\begin{array}{l} x_t + s(x - x_t), y_{\theta} + s'(y-y_{\theta}), \\ v( y_{\theta} + s'(y - y_{\theta}), \phi(x_t)) \end{array} \right) \right), x-x_t \rangle ds'ds \\ &= \displaystyle \int_0^1 \int_0^1 \left( D_{xy}G + D_{xv}G D_y H \right) [y-y_{\theta}, x-x_t] ds'ds \nonumber \\ &= \displaystyle \int_0^1 \int_0^1 \left( D_{xy}G + D_{xv}G \frac{D_y G}{-D_v G} \right) [y-y_{\theta}, x-x_t] ds'ds \nonumber \\ &\leq C_4 |x - x_t| |y - y_{\theta}|, \nonumber \end{align} where $C_4$ depends on the $C^2$ norm of $G$ and $\beta$ (note that the functions in the last integral are evaluated at different points so that $C_4$ might not be equal to $C_e$). For the second line, we use Lemma $\ref{localest}$. \begin{align} \label{lem5:4} &G(x, y_{\theta}, v(y_{\theta}, \phi(x_t))) - G(x, y_{\theta}, v(y_{\theta}, u)) \nonumber \\ &= \displaystyle \int_0^1 \frac{d}{ds} \left( G{(x, y_{\theta}, v(y_{\theta}, u + s(\phi(x_t)-u))} \right) ds \nonumber \\ &= \displaystyle \int_0^1 D_v G D_u H (\phi(x_t)-u) ds \\ &\leq C'_5 |\phi(x_t)-u| \nonumber \\ &\leq C_5|x_1 - x_0| |y_1 - y_0| \nonumber \end{align} where $C_5$ depends on the $C^1$ norm of $G$, $\beta$, and $C_3$. applying ($\ref{lem5:3}$) and ($\ref{lem5:4}$) to ($\ref{lem5:2}$), \begin{equation} \label{lem5:5} G(x, y,v(y, \phi(x_t))) \leq G(x,y_{\theta},v(y_{\theta},u)) + C_4 |x - x_t| |y - y_{\theta}| + C_5|x_1 - x_0| |y_1 - y_0|. \end{equation} Comparing ($\ref{lem5:1}$) and ($\ref{lem5:5}$), if we have \begin{equation} \label{lem5:6} C_4 |x - x_t| |y - y_{\theta}| + C_5|x_1 - x_0| |y_1 - y_0| \leq \frac{3}{16}\delta_0|y_1 - y_0|^2|x-x_t|^2 -\gamma |x-x_t|^3, \end{equation} then we can obtain $G(x, y, v(y, \phi(x_t))) \leq \phi(x)$. ($\ref{lem5:6}$) is satisfied if we have \begin{align*} C_5|x_1 - x_0| |y_1 - y_0| \leq & \frac{1}{16}\delta_0|y_1 - y_0|^2|x-x_t|^2 \\ C_4 |x - x_t| |y - y_{\theta}| \leq & \frac{1}{16}\delta_0|y_1 - y_0|^2|x-x_t|^2 \\ \gamma |x-x_t|^3 \leq & \frac{1}{16}\delta_0|y_1 - y_0|^2|x-x_t|^2. 
\end{align*} Therefore we choose \begin{equation} \label{rlk} r^2 = \frac{16C_5}{\delta_0} \frac{|x_1 - x_0|}{|y_1 - y_0|}, \quad l = \frac{\delta_0}{16C_4}r |y_1 - y_0|^2, \quad \kappa = \left( \frac{16^3 \gamma^2 C_5}{\delta_0 ^3} \right) ^{\frac{1}{5}} \end{equation} so that we obtain \begin{displaymath} G(x, y, v(y, \phi(x_t))) \leq \phi(x) \textrm{ for } y \in \mathcal{N}_l \left( \left\{ y_{\theta} | \theta \in [ \frac{1}{4} , \frac{3}{4} ] \right\} \right) \textrm{ and } x \in \partial B_r (x_t). \end{displaymath} Note that $\kappa$ does not depend on $x_0$ and $x_1$. From the condition ($\ref{gsubestcond}$), we know that \begin{displaymath} r^2 \leq \frac{16C_5}{\kappa \delta_0} |x_1 - x_0|^{\frac{4}{5}}, \quad l \leq \frac{\sqrt{\delta_0 C_5}}{4C_4} |x_1 - x_0|^{\frac{1}{2}}\mathrm{diam}(Y)^{\frac{3}{2}}. \end{displaymath} Therefore, choosing $x_1$ close enough to $x_0$ so that $|x_1 - x_0| \leq \frac{4C_4^2 r_2^2}{\mathrm{diam}(Y)^{3}\delta_0C_5}$, we can assume that \begin{equation} \label{smallr} l \leq \frac{r_2}{2} \end{equation} where $r_2$ is from the proof of Lemma $\ref{localniceg}$. Since $G(x_t, y,v(y, \phi(x_t))) = \phi(x_t)$, the function $G(x, y, v(y, \phi(x_t))) - \phi(x)$ attains a local maximum with non-negative value at some point $x_y \in B_r(x_t)$. Then from the proof of Lemma \ref{localniceg}, we get that $\nbhd{l}{ \left\{ y_{\theta} | \theta \in [0,1] \right\} } \cap Y \subset \mathfrak{h}_{x_y,\phi(x_t)}$. If $G(x_y, y, v(y,\phi(x_t))) = \phi(x_y)$, then $G(x, y,v(y, \phi(x_t)))$ is a local support of $\phi$ at $x_y$. Since $\phi$ is $G$-convex, the $G$-affine function $G(x, y, v(y, \phi(x_t)))$ is a global support of $\phi$, and hence $ y \in \partial_G \phi (x_y) \subset \partial_G \phi ( B_r (x_t))$ by Proposition \ref{loctoglob} (recall that by Remark $\ref{rmkonunif}$, we can use this proposition). Suppose $G(x_y, y, v(y, \phi(x_t))) > \phi(x_y)$. Note that we have $\phi(x) \geq G(x, y,v(y, u))$. We define \begin{align*} f_y(h) & = \max_{x \in B_r(x_t)} \left\{ G(x, y,v(y, h)) - \phi(x)\right\} \\ & = \max_{x \in B_r(x_t)} \left\{ G(x, y, H(x_t, y, h)) - \phi(x) \right\} \end{align*} Then $f_y(\phi(x_t)) > 0$ and $f_y(u) \leq 0$. Moreover, since $\|D_vG\|$ and $\|D_u H\|$ are bounded on $\Psi$ and $\Phi$, the functions in the $\max$ are equicontinuous, so that $f_y$ is a continuous function. Therefore, there exists $h_y \in [u, \phi(x_t)]$ at which we have $f_y(h_y) = 0 $. In other words, $G(x, y, v(y, h_y))$ supports $\phi$ at some point $x'$ in $B_r(x_t)$. From ($\ref{smallr}$) and the proof of Lemma $\ref{localniceg}$, we have $(x', y, h_y) \in \mathfrak{h}$. Hence we get $y \in \partial_G \phi (B_r(x_t))$. \end{proof} \subsection{Some convex geometry} In this subsection, we prove a useful convex geometry lemma to estimate the volume of $\nbhd{l}{ \left\{ y_{\theta} | \theta \in \left[\frac{1}{4} , \frac{3}{4}\right] \right\} }$. This subsection corresponds to Lemma 5.10 in \cite{Loeper2009OnTR}. We show the same type of lemma, but we do not use the generating function $G$ outside of its domain $X \times Y \times \mathbb{R}$. Note that in \cite{Loeper2009OnTR}, it is used that the cost function $c$ can be extended to and differentiated outside of $\Omega \times \Omega'$. \begin{Rmk} \label{convlip} Suppose we have a compact convex set $A$. Then for each point $p \in \partial A$, there is $r_p > 0$ such that the boundary $\partial A$ can be written as the graph of a convex function, up to an isometry, in $B_{r_p}(p)$.
But since $A$ is compact, we can choose $r>0$, not depending on $p \in \partial A$, that replaces every $r_p$. Moreover, since convex functions are locally Lipschitz, by taking smaller $r$ if needed, we can assume that each convex function that describes $\partial A$ is a Lipschitz function in $B_r(p)$. Then, using compactness again, we can assume that we have a uniform Lipschitz constant. In fact, we can bound the Lipschitz constant by $L = \frac{\mathrm{diam}(A)}{r}$. See Corollary A.23 of \cite{Figalli2017TheME}. \end{Rmk} \begin{Lem} \label{convvolball} Let $A$ be a compact convex set. Then there exist $r_A>0$ and $C_A$ that depend on the set $A$ such that for any $r' < r_A$ and $x \in A$, we have \begin{equation} \label{volball} \vol{B_{r'}(x) \cap A} \geq C_A \vol{B_{r'}(x)}. \end{equation} \end{Lem} \begin{proof} Let $r$ be as in Remark $\ref{convlip}$, and let $L$ be the Lipschitz constant in the same remark. Let $r' < \frac{r}{2}$. If $x \in \{ q \in A | \mathrm{dist}(q,\partial A) \geq \frac{r}{2} \}$, then $B_{r'} (x) \subset A$ so that $\vol{B_{r'} (x) \cap A} = \vol{B_{r'} (x)}$. Now suppose $\mathrm{dist}(x, \partial A) < \frac{r}{2}$; then there exists $p \in \partial A$ such that $B_{r'} (x) \subset B_r(p)$. In $B_r(p)$, we have that $\partial A$ is the graph of a convex function $f$ up to an isometry, and $f$ is Lipschitz with the Lipschitz constant $L$. Then, since $x \in A$, we know that $x^n \geq f(x')$ where $x = (x' , x^n) \in \mathbb{R}^{n-1} \times \mathbb{R}$, and we have the inclusions \begin{align} \label{inclusionball} B_{r'} (x) \bigcap A & \supset B_{r'} (x) \cap \{ q = (q', q^n) \in \mathbb{R}^{n-1} \times \mathbb{R} | q^n \geq f(q') \} \nonumber \\ & \supset B_{r'} (x) \cap \{ q | q^n \geq f(x') + L |x' - q'| \} \\ & \supset B_{r'} (x) \cap \{ q | q^n \geq x^n + L|x' - q'| \}. \nonumber \end{align} The last set above is the intersection of a ball centered at $x$ and a cone with vertex at $x$. Hence the volume of this set is a fixed multiple of the volume of the ball, where the multiplicative constant depends on $L$. Therefore the above inclusion implies ($\ref{volball}$) with $C_A$ depending on $L$, hence on $A$. Therefore the lemma holds with $r_A = \frac{r}{2}$. \end{proof} In the next lemma, we use the term ``length of a curve''. The curve that we are using is not necessarily differentiable, so we use the following definition for the length of a curve. \begin{Def} Let $\gamma : [a,b] \to \mathbb{R}^n$ be a continuous curve. We define its length by \begin{displaymath} \len{\gamma} = \sup \left\{ \sum_{i=1}^m |\gamma(t_i) - \gamma(t_{i-1})| \big| a = t_0 \leq t_1 \leq \cdots \leq t_m =b \right\}. \end{displaymath} \end{Def} It is well known that this definition preserves many properties of the arclength of $C^1$ curves. Some continuous curves may have $\len{\gamma} = \infty$, but $\len{\gamma}$ is finite if $\gamma$ is a Lipschitz curve defined on a bounded interval. \begin{Lem} \label{convvoltube} Let $A$ be a compact convex set and let $\gamma : [0,1] \to A$ be a bi-Lipschitz curve, that is, \begin{equation} \label{bilipcurve} C_{\gamma}' |s-t| \leq |\gamma(s) - \gamma(t)| \leq C_{\gamma} |s-t| \end{equation} for some constants $C_{\gamma}'$ and $C_{\gamma}$. Then there exist $K_A$ and $l_A>0$ that depend on $A$ and $C_{\gamma}'$ such that for any $l \leq l_A$, we have \begin{equation} \vol{\nbhd{l}{\gamma} \cap A} \geq K_A C'_{\gamma}l^{n-1}. \end{equation} \end{Lem} \begin{proof} We assume $l < \frac{r_A}{2}$ where $r_A$ is from Lemma $\ref{convvolball}$.
Let $m \in \mathbb{N}$ be the smallest number such that $ C_{\gamma} \leq r_A m$. Then by taking $r_A$ smaller if necessary, we have $m \leq \frac{2C_{\gamma}}{r_A}$. Note that we have $\len{\gamma} \leq C_{\gamma}$ from ($\ref{bilipcurve}$). We define $\gamma_i(t) = \gamma( (1-t)\frac{i}{2m} + t \frac{i+1}{2m} )$. Then $\gamma_i$ is bi-Lipschitz with constants $\frac{C_{\gamma}'}{2m}$ for lower bound and $\frac{C_{\gamma}}{2m}$ for upper bound i.e. \begin{displaymath} \frac{C_{\gamma}'}{2m}|s-t| \leq | \gamma_i(s) - \gamma_i(t) | \leq \frac{C_{\gamma}}{2m} |s-t|. \end{displaymath} In addition, we have $\len{\gamma_i} \leq \frac{r_A}{2}$ by our choice of $m$ and $l$. Suppose $\nbhd{l}{\gamma_i} \cap \partial A \neq \emptyset$. Then we can write $\partial A$ as a graph of a Lipschitz convex function $f$ with Lipschitz constant $L$ around some point $p \in \nbhd{l}{\gamma_i} \cap \partial A$. Moreover, for any $x \in \nbhd{l}{\gamma_i}$, there are some $t, s \in [0,1]$ such that \begin{align*} |x-p| & < |x- \gamma_i(t) | + |\gamma_i(t) - \gamma_i(s)| + |\gamma_i(s) - p| \\ & \leq \frac{r_A}{2} + \frac{r_A}{2} + \frac{r_A}{2} = \frac{3}{4}r, \end{align*} where $r$ is the radius of the ball around a point on $\partial A$ in which we can write $\partial A$ as a graph of a convex function (Remark $\ref{convlip}$). Therefore $\nbhd{l}{\gamma_i}$ lies in the epigraph of $f$ in $B_r(p)$. Then the proof of Lemma $\ref{convvolball}$ shows that at each point on the curve $\gamma_i$, there exists a conical sector $\mathrm{Sec}_{\gamma_i(t)}$ in $B_l(\gamma_i(t)) \cap A$ which is a translation of the conical sector $\mathrm{Sec_0}$ : \begin{displaymath} \mathrm{Sec}_0 = B_l(0) \cap \{ q | q^n \geq L|q'| \}. \end{displaymath} Note that the inscribed ball in this conical sector has radius $L'l$ where $L'$ is a constant that depends on $L$ so that the inscribed ball is $B_{L'l}(v)$ for some $v \in \mathrm{Sec}_0$. Therefore, for each $0 \leq i \leq 2m-1$, we get $v_i$ such that $\nbhd{L'l}{\gamma_i+v_i} \subset \nbhd{l}{\gamma} \cap A$. Now, if we have $l < \frac{C_{\gamma}'}{4m}$, then for any $x \in \nbhd{l}{\gamma_i}$ and $y \in \nbhd{l}{\gamma_{i+2}}$, we get that for some $s, t \in [0,1]$, \begin{align*} |x-y| & \geq |\gamma_i(t) - \gamma_{i+2}(s)| - ( |x-\gamma_i(t)| + | y - \gamma_{i+2}(s)|) \\ & \geq \frac{C_{\gamma}'}{2m} - 2l >0. \end{align*} Therefore, $\nbhd{L'l}{\gamma_i} \cap \nbhd{L'l}{\gamma_{i+2}} = \emptyset$. Note that each $\nbhd{L'l}{\gamma_i}$ has volume bounded below by $(L'l)^{n-1}|\gamma_i(0) - \gamma_i(1)| \geq \frac{(L')^{n-1}C_{\gamma}'}{2m} l^{n-1}$ so that \begin{align*} \vol{\nbhd{l}{\gamma} \cap A} & \geq \vol{\bigcup_{i=0}^{2m-1}\nbhd{L'l}{\gamma_i+v_i}} \\ & \geq \vol{\bigcup_{i=0}^{m-1} \nbhd{L'l}{\gamma_{2i}+v_i}} \\ & \geq \frac{(L')^{n-1}C_{\gamma}'}{2m} l^{n-1} \times m = \frac{1}{2}(L')^{n-1}C_{\gamma}'l^{n-1}. \end{align*} Therefore we get the lemma with $l_A = \frac{1}{8} \frac{C_{\gamma}'}{C_{\gamma}} r_A \leq \min \{ \frac{r_A}{2} , \frac{C_{\gamma}'}{4m} \} = \frac{C_{\gamma}'}{4m} $ and $K_A = \frac{1}{2}(L')^{n-1}$. \end{proof} \begin{Lem} \label{gsubvolest} Let $y_{\theta}$ be as in Lemma $\ref{gsubdiffest}$ and let $A = \mathfrak{h}^*_{x_0, \phi(x_0)}$. If $l \leq \frac{r_A}{8C_e^5}$, then we have \begin{equation} \vol{\nbhd{l}{ \left\{ y_{\theta} | \theta \in \left[ \frac{1}{4} , \frac{3}{4} \right] \right\} } \cap Y} \geq C_V l^{n-1} |y_0 - y_1| \end{equation} where $C_V$ depends on $x_0$, $\mathfrak{h}$, and $C_e$. 
\end{Lem} \begin{proof} Note that $\theta \mapsto y_{\theta}$ is a bi-Lipschitz curve with \begin{displaymath} \frac{1}{C_e^2} |y_1-y_0||\theta - \theta'| \leq |y_{\theta} - y_{\theta'}| \leq C_e^2 |y_1 - y_0 | |\theta - \theta'|. \end{displaymath} Then the reparametrized curve $\theta \mapsto y_{(1-\theta)\frac{1}{4} + \theta \frac{3}{4}}$ is bi-Lipschitz with Lipschitz constants $\frac{2}{C_e^2}|y_1 - y_0|$ and $2C_e^2 |y_1 - y_0|$. Then the curve $\theta \mapsto {\gexp{x_t}{\phi(x_t)}}^{-1}(y_{(1-\theta)\frac{1}{4} + \theta \frac{3}{4}})$ is bi-Lipschitz with Lipschitz constants $\underline{L} = \frac{2}{C_e^3}|y_1 - y_0|$ for lower bound and $\overline{L} = 2C_e^3 |y_1 - y_0|$ for upper bound. Moreover, since the map ${\gexp{x_0}{\phi(x_0)}}^{-1}$ is bi-Lipschitz with Lipschitz constants $\frac{1}{C_e}$ and $C_e$, we have \begin{align*} \displaystyle \nbhd{\frac{l}{C_e}}{ {\gexp{x_0}{\phi(x_0)}}^{-1} \left( \left\{ y_{\theta} | \theta \in \left[ \frac{1}{4} , \frac{3}{4} \right] \right\} \right) } \cap \mathfrak{h}^*_{x_0, \phi(x_0)} \\ \displaystyle \subset {\gexp{x_0}{\phi(x_0)}}^{-1} \left( \nbhd{l}{ \left\{ y_{\theta} | \theta \in \left[ \frac{1}{4} , \frac{3}{4} \right] \right\} } \cap Y \right). \end{align*} Note that by (vDomConv), $\mathfrak{h}^*_{x_0, \phi(x_0)}$ is convex. Moreover, by our choice of $l$, we have $\frac{l}{C_e} \leq \frac{1}{8} {\underline{L}}/{\overline{L}} r_{A} $. Then by Lemma $\ref{convvoltube}$, we get a constant $K_{x_0}$ that depends on $\mathfrak{h}^*_{x_0, \phi(x_0)}$, hence on $x_0$ such that \begin{align*} \displaystyle \vol{\nbhd{\frac{l}{C_e}}{ {\gexp{x_0}{\phi(x_0)}}^{-1} \left( \left\{ y_{\theta} | \theta \in \left[ \frac{1}{4} , \frac{3}{4} \right] \right\} \right) } \cap \mathfrak{h}^*_{x_0, \phi(x_0)}} \\ \displaystyle \geq K_{x_0} \frac{2}{C_e^3}|y_1 - y_0| (l / C_e)^{n-1}. \end{align*} Using bi-Lipschitzness once more, we obtain \begin{displaymath} \vol{ \nbhd{l}{ \left\{ y_{\theta} | \theta \in \left[ \frac{1}{4} , \frac{3}{4} \right] \right\} } \cap Y } \geq C_V l^{n-1} |y_1 - y_0|, \end{displaymath} with $C_V = {2K_{x_0}}/{C_e^{2n+2}}$. \end{proof} \begin{Rmk} \label{condforunifLip} The constant $C_V$ depends on the Lipschitz constant of $\partial \mathfrak{h}^*_{x_0, \phi(x_0)}$ so that it depends on $x_0$ and the value of $\phi$ at $x_0$. If we assume that $\{ \mathfrak{h}^*_{x,u} \}_{(x,u) \in X \times \mathbb{R}}$ is uniformly Lipschitz, we can get rid of this dependency. \end{Rmk} \subsection{Proof of the main theorem} In the proof of the main theorem, we will use the lemmas in previous subsections. So, we review the conditions to use those lemmas. We choose $x_0, x_1 \in \mathring{X}$ and $y_0, y_1 \in \gsub{\phi}{x_0}$. First of all, to localize the argument, we choose $x_1$ close enough to $x_0$. Explicitly, we choose $|x_0 - x_1|$ smaller than $\delta(x_0)$ to use Lemma $\ref{localniceg}$, smaller than $\frac{4C_4^2 r_2^2}{\mathrm{diam}^3\delta_0C_5}$ to use Lemma $\ref{gsubdiffest}$, and smaller than $\frac{C_4^2r_A^2}{4C_e^{12} \delta_0 C_5} \frac{1}{\mathrm{diam}(Y)^3}$ (with $A = \mathfrak{h}^*_{x_0, \phi(x_0)}$) to get the condition in Lemma $\ref{gsubvolest}$. For $y_0$ and $y_1$, we need $|y_0 - y_1| \geq \max{ \{|x_0 - x_1| , \kappa |x_0 - x_1|^{1/5} \}}$ to use Lemma $\ref{gsubdiffest}$. Note that if there is no such $y_0$ and $y_1$, that means we have H$\ddot{\textrm{o}}$lder regularity with exponent $\frac{1}{5}$. \begin{proof}[Proof of the main theorem $\ref{main}$] We deal with the first part of the theorem. \\ 1. 
In the first case, we deal with the case $p = \infty$ first. If $p = \infty$, then we have \begin{displaymath} \mu \left( B_r(x_t) \right) \leq C \vol{ B_r(x_t) } \leq C' r^n \end{displaymath} for some $C$ and $C'$. Moreover, since $\phi$ is an Alexandrov solution, we have \begin{equation} \label{main1} \begin{array}{rl} \mu \left( B_r(x_t) \right) = \nu \left( \gsub{\phi}{ B_r(x_t)} \right) & \geq \nu \left( \nbhd{l}{ \left\{ y_{\theta} | \theta \in \left[ \frac{1}{4} , \frac{3}{4} \right] \right\}} \right) \\ & \geq \lambda C_V l^{n-1}|y_0 - y_1|. \end{array} \end{equation} Combining these, we get $C'r^n \geq C_V l^{n-1}|y_0 - y_1|$. We plug ($\ref{rlk}$) into this inequality to obtain \begin{displaymath} |y_0 - y_1| \leq C |x_0 - x_1|^{\frac{1}{4n-1}} \end{displaymath} for some constant $C$. Note that this implies single valuedness and H$\mathrm{\ddot{o}}$lder continuity of $\partial_G \phi$.\\ \noindent Next we deal with the case $p < \infty$. If the condition on $\mu$ holds with $p < \infty$, define $F$ by \begin{equation} \label{defineF} F(V) = \sup\{\mu(B)| B \subset X \textrm{ a ball of volume } V \}. \end{equation} Then we have $F(\vol{ B_r (x_t) }) \geq \mu( B_r (x_t) ) = \nu (\gsub{\phi}{B_r (x_t)})$. This with ($\ref{main1}$) implies \begin{equation} \label{boundnuF} F \left( C \frac{|x_0 - x_1|^{n/2}}{|y_0 - y_1|^{n/2}} \right) \geq C'|x_0 - x_1|^{(n-1)/2}|y_0 - y_1|^{(3n-1)/2} \end{equation} for some constants $C$ and $C'$. From the assumption on $\mu$, we have $F(V) \leq C'' V^{1- 1/p}$ for some $C''$. This with above inequality ($\ref{boundnuF}$), we have \begin{displaymath} |y_0 - y_1|^{2n-1+\frac{1}{2}(1-\frac{n}{p})} \leq C|x_1 - x_0|^{\frac{1}{2}(1- \frac{n}{p})}. \end{displaymath} Therefore, with the condition $p >n$, we get \begin{displaymath} |y_0 - y_1| \leq C |x_0 - x_1|^{\frac{\rho}{4n-2+\rho}} \end{displaymath} where $\rho = 1-\frac{n}{p}$. Therefore, we have the following : for any $x_0 \in \mathring{X}$, there exists some constants $r_{x_0}$ and $C_{x_0}$ that depends on $x_0$, $\phi(x_0)$, continuity of $\phi$ at $x_0$ such that if $|x_0 - x_1| < r_{x_0}$, we have \begin{equation} \label{gsubdiffholder} |y_0 - y_1| \leq C_{x_0} |x_0 - x_1|^{\frac{\rho}{4n-2+\rho}}. \end{equation} Then for a compact set $X' \Subset \mathring{X}$, we can cover it with a finite number of balls on which we have ($\ref{gsubdiffholder}$) with respect to the center of the ball. Then we can get the H$\ddot{\textrm{o}}$lder regularity of $\partial_G \phi$ on $X'$ by connecting any two points in $X'$ with a piecewise segment where each segment lies in one of the balls. Then the H$\ddot{\textrm{o}}$lder constant will be bounded by the sum of the H$\ddot{\textrm{o}}$lder constant on each ball that contains a segment times the number of the balls. To get the H$\ddot{\textrm{o}}$lder regularity of the potential $\phi$, we note that $\gsub{\phi}{x} = \gexp{x}{\phi(x)} ( D_x \phi (x) )$, and use remark $\ref{nicedom}$. \\ Now we prove the second part of the theorem.\\ 2. Suppose we have $f : \mathbb{R}^{+} \to \mathbb{R}^{+}$ such that $\lim_{r \to 0} f(r) = 0$ and for any $x \in X$ and $r \geq 0$ we have $\mu(B_r(x))\leq f(r)r^{n(1-\frac{1}{n})}$. Note that we can choose $f$ strictly increasing. 
Then by ($\ref{defineF}$), we have \begin{equation} \label{Fbound} F(V) \leq f\left( \left( \frac{1}{\omega_n} \right) ^{\frac{1}{n}} V^{\frac{1}{n}} \right) \times \left( \frac{1}{\omega_n} \right)^{1 - \frac{1}{n}} V^{1-\frac{1}{n}} \end{equation} where $\omega_n$ is the volume of the unit ball in $\mathbb{R}^n$. Define $\tilde{f}$ by \begin{displaymath} \tilde{f}(V) ^{2n-1}= \left( \frac{1}{\omega_n} \right)^{1 - \frac{1}{n}} f \left( \left( \frac{1}{\omega_n} \right)^{\frac{1}{n}} V^{\frac{1}{2}} \right). \end{displaymath} Then ($\ref{Fbound}$) becomes \begin{equation} \label{Ftildebound} F(V) \leq \left[ \tilde{f} \left( V^{\frac{2}{n}} \right) \right]^{2n-1} V^{1 - \frac{1}{n}}. \end{equation} We combine ($\ref{Ftildebound}$) with ($\ref{boundnuF}$), and we have \begin{equation} \label{singlevalue} \tilde{f} \left( C' \frac{|x_0 - x_1|}{|y_0 - y_1|} \right) \geq C''|y_0 - y_1| \end{equation} for some constants $C', C'' >0$. Note that we can assume that $\frac{|x_0 - x_1|}{|y_0 - y_1|} \to 0$ as $|x_0 - x_1| \to 0$ because otherwise, we get a Lipschitz estimate so that we still can get H$\ddot{\textrm{o}}$lder regularity. Then ($\ref{singlevalue}$) implies that $\partial_G \phi$ is a single valued map. Let $g$ be the modulus of continuity of the $G-$subdifferential map $\partial_G \phi$. We divide into two cases. If $g(u) \leq \max \left\{ u, \kappa u^{\frac{1}{5}} \right\}$, then we get $g(u) \to 0$ as $u \to 0$. In the other case, from ($\ref{singlevalue}$) we get \begin{displaymath} \tilde{f} \left( C' \frac{u}{g(u)} \right) \geq C'' g(u). \end{displaymath} Since $f$ was strictly increasing, so is $\tilde{f}$, so that $\tilde{f}$ is invertible. Therefore the above equation is equivalent to \begin{displaymath} u \geq \tilde{f}^{-1} \left( C'' g(u) \right) \frac{g(u)}{C'}. \end{displaymath} Let $\omega$ be the inverse of $z \mapsto \tilde{f}^{-1}(C''z) \frac{z}{C'}$. Note that $\omega$ is strictly increasing. Therefore, composing $\omega$ on above inequality shows that \begin{displaymath} g(u) \leq \omega (u). \end{displaymath} Since the function $z \mapsto \tilde{f}^{-1}(C''z) \frac{z}{C'}$ is strictly increasing and has limit 0 as $ z \to 0$, $\omega(u)$ also has limit 0 as $u \to 0$. Therefore the above inequality implies that $g(u) \to 0$ as $u \to 0$. Hence the modulus of continuity of $\partial_G \phi$ has limit 0 as the variable tends to 0 so that $\partial_G \phi$ is continuous at $x_0$. \end{proof} \begin{Rmk} Note that from Lemma $\ref{localniceg}$ and $\ref{gsubvolest}$, the constants that we get depend on the value of $\phi$ and continuity of $\phi$. Because of these dependencies, the H$\ddot{\textrm{o}}$lder regularity that we get in this paper might not be uniform for solutions to the (GJE). To get a bounds on the H$\ddot{\textrm{o}}$lder norm that do not depend on the solution $\phi$, we need to add some conditions on the set $\mathfrak{h}$ so that we can get a uniform Lipschitz constant for $\partial \mathfrak{h}^*_{x,u}$ for any $(x,u) \in X \times \mathbb{R}$ as mentioned in Remark $\ref{condforunifLip}$. Moreover, we need an apiori estimate on the modulus of continuity of $\phi$ and $\partial_G \phi$ as mentioned in Remark $\ref{deponsol}$. \end{Rmk} \end{document}
\begin{document} \title[Starlike functions]{Radius of Starlikeness for Classes of Analytic Functions} \author[K. Khatter]{Kanika Khatter} \address{Department of Mathematics, SGTB Khalsa College, University of Delhi, Delhi--110 007, India} \email{[email protected]} \author[S. K. Lee]{See Keong Lee} \address{School of Mathematical Sciences, Universiti Sains Malaysia, 11800 USM, Penang, Malaysia} \email{[email protected]} \author[V. Ravichandran]{V. Ravichandran} \address{Department of Mathematics, NIT Tiruchirappalli, Tamil Nadu--620015, India} \email{[email protected]} \begin{abstract} We consider normalized analytic function $f$ on the open unit disk for which either $\RE f(z)/g(z)>0$, $|f(z) /g(z) - 1|<1$ or $\RE (1-z^2) f(z) /z>0$ for some analytic function $g$ with $\RE (1-z^2) g(z) /z>0$. We have obtained the radii for these functions to belong to various subclasses of starlike functions. The subclasses considered include the classes of starlike functions of order $\alpha$, lemniscate starlike functions and parabolic starlike functions. \end{abstract} \keywords{starlike functions, exponential function, lemniscate of Bernoulli, radius problems, coefficient estimate} \subjclass[2010]{30C45, 30C80} \maketitle \section{Introduction} For any two classes $\mathcal{G}$ and $\mathcal{H}$ of analytic functions defined on the unit disk $\mathbb{D}$, the $\mathcal{H}$-radius for the class $\mathcal{G}$, denoted by $\mathcal{R}_{\mathcal{H}}(\mathcal{G})$, is the maximal radius $\rho \leq 1$ such that the $f\in \mathcal{G}$ implies that the function $f_r$, defined by $f_r(z)=f(rz)/r$, belongs to class $\mathcal{H}$ for all $0< r \leq \rho$. Among the radius problems for various subclasses of analytic functions, one direction of study focuses on obtaining the radius for classes consisting of functions characterised by ratio of the function $f$ and another function $g$, where $g$ is a function belonging to some special subclass of $\mathcal{A}$ of all analytic functions on $\mathbb{D}$ normalized by $f(0)=0=f'(0)-1$. MacGregor \cite{Mac,Mac1} obtained the radius of starlikeness for the class of functions $f \in \mathcal{A}$ satisfying either $\RE(f(z)/g(z))>0$ or $|f(z)/g(z)-1|<1$ for some $g \in \mathcal{K}$. Ali \emph{et al.}\cite{Ali} estimated several radii for classes of functions satisfying either (i)~$\RE(f(z)/g(z))>0$, where $\RE(g(z)/z)>0$ or $\RE(g(z)/z)>1/2$; (ii) $|f(z)/g(z)-1|<1$, where $\RE(g(z)/z)>0$ or $g$ is convex; (iii) $|f'(z)/g'(z)-1|<1$, where $\RE g'(z)>0$. The work is further investigated in \cite{asha}. These classes are related to the Caratheodory class $\mathcal{P}$ consisting of all analytic functions $p$ with $p(0)=1$ and $\RE p(z)>0$ for all $z \in \mathbb{D}$. Motivated by the aforesaid studies, we consider the following three classes $\mathcal{K}_1$, $\mathcal{K}_2$, and $\mathcal{K}_3$: \begin{align*} \mathcal{K}_1&:= \left\{ f \in \mathcal{A} : \frac{f(z)}{g(z)}\in \mathcal{P}, ~~\text{for some}~~ g \in \mathcal{A} ,\ \RE \frac{1-z^2}{z}g(z) >0 \right\},\\ \mathcal{K}_2&:= \left\{ f \in \mathcal{A} : \left| \frac{f(z)}{g(z)} - 1 \right|<1, ~~ \text{for some}~~ g\in \mathcal{A} , \ \RE \frac{1-z^2}{z}g(z) >0 \right\}, \intertext{and} \mathcal{K}_3&:= \left\{ f \in \mathcal{A} : \RE \frac{1-z^2}{z}f(z) >0 \right\}, \end{align*} and estimate the radius for the functions in the classes to belong to various subclasses of starlike functions which we discuss below. 
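The membership conditions defining $\mathcal{K}_1$, $\mathcal{K}_2$ and $\mathcal{K}_3$ lend themselves to a quick numerical sanity check. The following is a minimal illustrative sketch (assuming Python with NumPy; the sample grid and the variable names are illustrative choices of ours) which samples the two inequalities defining $\mathcal{K}_1$ for the pair of functions that is used later in the paper as an extremal example, namely $f(z)=z(1+iz)^2/((1-z^2)(1-iz)^2)$ and $g(z)=z(1+iz)/((1-z^2)(1-iz))$.
\begin{verbatim}
import numpy as np

# Candidate pair (f, g): membership of f in K_1 requires Re(f/g) > 0
# and Re((1 - z^2) g(z)/z) > 0 throughout the unit disk.
f = lambda z: z * (1 + 1j*z)**2 / ((1 - z**2) * (1 - 1j*z)**2)
g = lambda z: z * (1 + 1j*z) / ((1 - z**2) * (1 - 1j*z))

r = np.linspace(0.01, 0.99, 200)                   # radii (0 excluded)
t = np.linspace(0, 2*np.pi, 400, endpoint=False)   # angles
z = np.outer(r, np.exp(1j*t)).ravel()              # sample grid in D

print(np.real(f(z) / g(z)).min() > 0)              # expected: True
print(np.real((1 - z**2) * g(z) / z).min() > 0)    # expected: True
\end{verbatim}
Both sampled quantities reduce analytically to $\RE\,(1+iz)/(1-iz)$, which is positive on $\mathbb{D}$, so the check merely illustrates the definitions numerically.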
Let $f, F$ be analytic on $\mathbb{D}:= \{z \in \mathbb{C}: |z| < 1\}$; the function $f$ is subordinate to $F$, written $f\prec F$, provided $f=F\circ w$ for some analytic self-mapping $w$ of the unit disk $ \mathbb{D}$ that fixes the origin. Subordination is very useful in the study of subclasses of univalent functions. For instance, the concept of Hadamard product and subordination was used in \cite{uni} to introduce the class of all functions $f$ satisfying $z{(k_\alpha * f)'}/{(k_\alpha * f)}\prec h$ where $k_\alpha(z)= z/(1-z)^{\alpha}$, $\alpha \in \mathbb{R}$, $f \in \mathcal{A}$ and $h$ is a convex function. Later in 1989, Shanmugam \cite{shan} studied the class $\mathcal{S}_g^*(h)$ of all functions $f\in{\mathcal{A}}$ satisfying $z(f*g)'/(f*g) \prec h$ where $h$ is a convex function and $g$ is a fixed function in $\mathcal{A}$. By replacing $g$ with the functions $z/(1-z)$ and $z/(1-z)^2$, we get the subclasses $\mathcal{S}^*(h)$ and $\mathcal{K}(h)$ of Ma-Minda starlike and convex functions, respectively. In 1992, Ma and Minda \cite{mam} studied the distortion, growth, covering and coefficient estimates for these functions with the weaker assumption of starlikeness on $h$. These classes unifies several subclasses of starlike and convex functions. When $h$ is the mapping of $\mathbb{D}$ onto the right half-plane, $\mathcal{S}^*(h)$ and $\mathcal{K}(h)$ reduce to the class $\mathcal{S}^*$ of starlike and $\mathcal{K}$ of convex functions, respectively. For $h(z)=(1+Az)/(1+Bz)$, with $-1 \leq B <A \leq 1$, the classes become $\mathcal{S}^*[A,B]$ of Janowski starlike functions and $\mathcal{K}[A,B]$ of Janowski convex functions. For $A= 1- 2 \alpha$ and $B = -1$ where $0 \leq \alpha < 1$, these subclasses become $\mathcal{S}^*(\alpha)$ of the starlike functions of order $\alpha$ and $\mathcal{K}(\alpha)$ of convex functions of order $\alpha$, respectively introduced by Robertson \cite{rob}. For $h (z)= \sqrt{1+z}$, the class $\mathcal{S}^*(h)$ becomes the class $\mathcal{S}^*_{L} $ of the lemniscate starlike functions introduced and studied by Sok\'{o}l and Stankiewicz \cite{sok,sok2}; analytically, $f \in \mathcal{S}^*_{L}$ if $|(z f'(z)/f(z))^2-1|<1$. Mendiratta \emph{et al.}\cite{sumit,sumit2} studied the classes $\mathcal{S}^*_e = \mathcal{S}^*(e^z)$ and $\mathcal{S}^*_{RL} = \mathcal{S}^*(h_{RL})$, where \[ h_{RL}:= \sqrt{2} - (\sqrt{2} -1) \sqrt{\frac{1-z}{1+2(\sqrt{2}-1)z}}.\] Indeed, a function $f$ belongs to $\mathcal{S}^*_e$ or to $ \mathcal{S}^*_{RL}$ if $z f'(z)/f(z)$ respectively belongs to $\{ w \in \mathbb{C}: |\log w|<1\}$ or $\{ (w - \sqrt{2})^2 -1 < 1 \}$. Sharma \emph{et al.}\cite{sharma} defined and studied the class of functions defined by $\mathcal{S}^*_c = \mathcal{S}^* (h_c(z))$, where $h_c(z) = 1 + (4/3) z + (2/3) z^2$; a function $f\in \mathcal{S}^*_c$ if $z f'(z)/f(z)\in \{ x + i y : (9 x^2 + 9 y^2 -18 x + 5)^2 -16 (9x^2 + 9 y^2 -6 x +1) = 0 \}$. Cho \emph{et al.}\cite{cho} defined and studied the class $\mathcal{S}^*_{\sin} = \mathcal{S}^*(1 + \sin z)$. Raina and Sokol \cite{raina} defined the class $\mathcal{S}^*_{\leftmoon} = \mathcal{S}^*(h_{\leftmoon})$, where $h_{\leftmoon} = z + \sqrt{1+ z^2}$ and $\mathcal{S}^*_{\leftmoon}$ consists of functions for which $z f'(z)/f(z)$ lies in the the leftmoon region defined by $\Omega_{\leftmoon}= h_{\leftmoon}(\mathbb{D}) :=\{ w\in \mathbb{C}: |w^2 - 1| < 2 |w| \}$. 
Another particular case is the class $\mathcal{S}^*_R = \mathcal{S}^*(h_R)$ studied in \cite{kumar} where $h_R = 1+ (zk + z^2)/(k^2 - kz)$, and $k = \sqrt{2} + 1$. The subclass $\mathcal{S}_{P} $ of parabolic starlike functions (see the survey \cite{ronning} or \cite{ali,mam1, ganga}) consists of all normalized analytic functions $f$ with $zf'(z)/f(z)$ lying in the parabolic region $(\IM(w))^2< 2 \RE(w)-1$. \section{Main Results} The first theorem gives the various radii of starlikeness for the class $\mathcal{K}_1$ which consists of functions $f \in \mathcal{A}$ satisfying $\RE(f(z)/g(z)) > 0$ for some $g \in \mathcal{A}$ and $\RE ((1-z^2)g(z)/z) >0$. Note that the functions $f_1,\ g_1: \mathbb{D}\rightarrow \mathbb{C}$ defined by \begin{equation} \label{f1} f_1(z) = \frac{z(1+ i z)^2}{(1-z^2)(1-i z)^2} \quad \quad ~~~ \quad \text{and}~~~ \quad \quad g_1(z)= \frac{z (1+ i z)}{(1-z^2) (1- i z)} \end{equation} satisfy \begin{equation*} \RE \frac{f_1(z)}{g_1(z)} = \RE \frac{1-z^2}{z}g_1(z) = \RE \frac{1+ i z}{1- i z} >0. \end{equation*} This means the function $f_1 \in \mathcal{K}_1$ and so $\mathcal{K}_1 \neq \phi$. Further we will see that this function $f_1$ serves as an extremal function for many radii problems studied here. \begin{theorem}\label{th1} For the class $\mathcal{K}_1$, the following results hold: \begin{enumerate} \item The $\mathcal{S}^*(\alpha)$-radius is the smallest positive real root of the equation $r^4(1+\alpha) - 4 r^3-2 r^2 -4 r +(1-\alpha)=0$, \quad $0 \leq \alpha <1$. \item The $\mathcal{S}^*_L$-radius is ${R}_{\mathcal{S}^*_L}= (\sqrt{5}-2)/(\sqrt{2}+1) \approx 0.0977826$. \item The $\mathcal{S}_{P}$-radius is the smallest positive real root of the equation $ 3 r^4 - 8r^3 -4 r^2 -8 r +1 = 0 $ i.e. ${R}_{\mathcal{S}_{P}} \approx 0.116675$. \item The $\mathcal{S}^*_{e}$-radius is the smallest positive real root of the equation $(2r^2 +4 r +4 r^3 -1-r^4)e = r^4 - 1 $ i.e. ${R}_{\mathcal{S}^*_{e}} \approx 0.144684$. \item The $\mathcal{S}^*_{c}$-radius is the smallest positive real root of the equation $ 4 r^4 - 12 r^3 - 6 r^2 - 12 r + 2 = 0 $ i.e. ${R}_{\mathcal{S}^*_{c}} \approx 0.15182$. \item The $\mathcal{S}^*_{\leftmoon}$-radius is the smallest positive real root of the equation $ 4 r^3 + 2 r^2 + 4 r + \sqrt{2}(1- r^4) = 2 $ i.e. ${R}_{\mathcal{S}^*_{\leftmoon}} \approx 0.134993$. \item The $\mathcal{S}^*_{\sin}$-radius is ${R}_{\mathcal{S}^*_{\sin}}= (-2+\sqrt{4+ \sin{1}(2+ \sin{1})})/(2+ \sin{1}) \approx 0.185835$. \item The $\mathcal{S}^*_{RL}$-radius is ${R}_{\mathcal{S}^*_{RL}} \approx 0.0687813$. \item The $\mathcal{S}^*_{R}$-radius is the smallest positive real root of the equation $ 4 r^3 + 2 r^2 + 4 r - r^4 -1 = 2 (1- \sqrt{2})(1- r^4) $ i.e. ${R}_{\mathcal{S}^*_{R}} \approx 0.0419413$. \end{enumerate} All the radii obtained are sharp. \end{theorem} We would use the following lemmas in order to prove our results: \begin{lemma}[{\cite[Lemma 2.2, p.\ 4]{jain}}]\label{T1} For $0<a<\sqrt{2}$, let $r_a$ be given by \begin{align*} r_a= \left\{ \begin{array}{ll} (\sqrt{1-a^2}- (1-a^2))^{1/2}, & \hbox{$0< a \leq 2 \sqrt{2}/3$;} \\ \sqrt{2}-a, & \hbox{$ 2 \sqrt{2}/3 \leq a < \sqrt{2}$.} \end{array} \right. \end{align*} Then $\{w: |w-a|< r_a \} \subseteq \{w: |w^2-1|<1\}$. \end{lemma} \begin{lemma}[{\cite[Lemma 1, p.\ 321]{shan}}]\label{T2} For $a> 1/2$, let $r_a$ be given by \begin{align*} r_a= \left\{ \begin{array}{ll} a-1/2, & \hbox{$1/2< a \leq 3/2$;} \\ \sqrt{2a-2}, & \hbox{$ a \geq 3/2$.} \end{array} \right. 
\end{align*} Then $\{w: |w-a|< r_a \} \subseteq \{w: \RE w > |w-1|\} = \Omega_{1/2}$. Here, $\Omega_p$ is a parabolic region which is symmetric with respect to the real axis and has vertex at $(p,0)$. \end{lemma} \begin{lemma}[{\cite[Lemma 2.2, p.\ 368]{sumit}}]\label{T3} For $ e^{-1} < a < e$, let $r_a$ be given by \begin{align*} r_a= \left\{ \begin{array}{ll} a-e^{-1}, & \hbox{$e^{-1}< a \leq (e+e^{-1})/2$;} \\ e - a , & \hbox{$ (e+e^{-1})/2 \leq a < e$.} \end{array} \right. \end{align*} Then $\{w: |w-a|< r_a \} \subseteq \{w: |\log w|<1\} = \Omega_e$, which is the image of the unit disk $\mathbb{D}$ under the exponential function. \end{lemma} \begin{lemma}[{\cite[Lemma 2.5, p.\ 926]{sharma}}]\label{T4} For $ 1/3 < a < 3$, let $r_a$ be given by \begin{align*} r_a= \left\{ \begin{array}{ll} (3a-1)/3, & \hbox{$1/3< a \leq 5/3$;} \\ 3- a , & \hbox{$ 5/3 \leq a \leq 3$.} \end{array} \right. \end{align*} Then $\{w: |w-a|< r_a \} \subseteq \Omega_c$. Here $\Omega_c$ is the region bounded by the cardioid $\{x+ i y: (9x^2+9y^2-18x+5)^2 - 16 (9 x^2+ 9 y^2 -6x + 1) = 0 \}$. \end{lemma} \begin{lemma}[{\cite[Lemma 3.3, p.\ 7]{cho}}]\label{T5} For $ 1 - \sin 1 < a < 1+ \sin 1 $, let $r_a= \sin 1 - |a-1|$. Then $\{w: |w-a|< r_a \} \subseteq \Omega_{\sin}$. Here $\Omega_{\sin}$ is the image of the unit disk $\mathbb{D}$ under the function $1+ \sin z$. \end{lemma} \begin{lemma}[{\cite[Lemma 2.1, p.\ 3]{gandhi}}]\label{T7} For $ \sqrt{2}-1 < a < \sqrt{2}+1$, let $r_a= 1 - |\sqrt{2}-a|$. Then $\{w: |w-a|< r_a \} \subseteq \Omega_{\leftmoon}= \{w: |w^2 - 1|< 2 |w|\}$. \end{lemma} \begin{lemma}[{\cite[Lemma 2.2, p.\ 202]{kumar}}]\label{T8} For $ 2 (\sqrt{2}-1) < a < 2$, let $r_a$ be given by \begin{align*} r_a= \left\{ \begin{array}{ll} a-2(\sqrt{2}-1), & \hbox{$2 (\sqrt{2}-1) < a \leq \sqrt{2}$;} \\ 2- a , & \hbox{$\sqrt{2} \leq a < 2$.} \end{array} \right. \end{align*} Then $\{w: |w-a|< r_a \} \subseteq \Omega_R$, where $\Omega_R$ is the image of the unit disk $\mathbb{D}$ under the function $1+ ((z k + z^2)/(k^2 -k z))$, ~~ $k= \sqrt{2}+1$. \end{lemma} \begin{lemma}[{\cite[Lemma 3.2, p.\ 10]{sumit2}}]\label{T9} For $ 0 < a < \sqrt{2}$, let $r_a$ be given by \begin{align*} r_a= \left\{ \begin{array}{ll} a, & \hbox{$0 < a \leq \sqrt{2}/3$;} \\ \big((1- (\sqrt{2}-a)^2)^{1/2}-(1-(\sqrt{2}-a)^2)\big)^{1/2}, & \hbox{$\sqrt{2}/3 \leq a < \sqrt{2}$.} \end{array} \right. \end{align*} Then $\{w: |w-a|< r_a \} \subseteq \{ w: \RE w >0, |(w- \sqrt{2})^2-1|< 1 \}= \Omega_{RL}$. \end{lemma} \begin{lemma}[{\cite[Lemma 2, p.\ 240]{shah}}]\label{T10} If $ p(z) = 1 + b_n z^n + b_{n+1} z^{n+1} + \cdots $ is analytic and satisfies $ \RE p(z) > \alpha$, $0 \leq \alpha < 1$, for $|z|< 1$, then \begin{equation*} \left| \frac{zp'(z)}{p(z)}\right| \leq \frac{2 n |z|^n(1-\alpha)}{(1- |z|^n)(1+ (1-2 \alpha)|z|^n)}. \end{equation*} \end{lemma} With all these tools, we are ready to give the proof of our first result. \begin{proof}[Proof of Theorem~\ref{th1}] Let $f \in \mathcal{K}_1$ and the function $g: \mathbb{D} \rightarrow \mathbb{C}$ be chosen such that \begin{equation}\label{1} \RE {\frac{f(z)}{g(z)}}>0 \quad \text{and} \quad \RE{\Big( \frac{1-z^2}{z} g(z) \Big)}>0 \quad (z \in \mathbb{D}). \end{equation} Let us define $p_1, p_2: \mathbb{D}\rightarrow \mathbb{C}$ as \begin{equation}\label{2} p_1(z)= \frac{1-z^2}{z} g(z) \quad \text{and} \quad p_2(z)= \frac{f(z)}{g(z)}. \end{equation} Therefore, by equation \eqref{1}, $p_1$ and $p_2$ are in $\mathcal{P}$.
Equation \eqref{2} yields \begin{equation*} f(z) = \frac{z}{(1-z^2)} p_1(z) p_2(z). \end{equation*} Taking logarithms on both sides and differentiating with respect to $z$ gives \begin{equation}\label{3} \frac{z f'(z)}{f(z)}= \frac{1+z^2}{1-z^2}+ \frac{z p_1'(z)}{p_1(z)}+ \frac{z p_2'(z)}{p_2(z)}. \end{equation} It can be easily proved that the transformation $w= (1+z^2)/(1-z^2)$ maps the disk $|z|\leq r$ onto the disk \begin{equation}\label{4} \left| \frac{1+z^2}{1-z^2}- \frac{1+r^4}{1-r^4} \right| \leq \frac{2r^2}{1-r^4}. \end{equation} Now, by Lemma \ref{T10}, for $p \in \mathcal{P}(\alpha) := \{ p \in \mathcal{P} : \RE p(z) > \alpha, z \in \mathbb{D}\}$, we have \begin{equation}\label{5} \left| \frac{z p'(z)}{p(z)}\right| \leq \frac{2 (1- \alpha) r}{(1-r)\big(1+ (1-2\alpha)r\big)} \quad (|z|\leq r). \end{equation} By using equations \eqref{3}, \eqref{4} and \eqref{5}, we can conclude that a function $f \in \mathcal{K}_1$ maps the disk $|z|\leq r$ onto the disk \begin{equation}\label{6} \left| \frac{z f'(z)}{f(z)} - \frac{1+r^4}{1-r^4} \right| \leq \frac{2r (2 r^2 + r + 2)}{1-r^4}. \end{equation} In order to solve radius problems for $f \in \mathcal{K}_1$, we are interested in computing the value of $r$ for which the disk in \eqref{6} is contained in the corresponding regions. The classes we are considering here are all subclasses of starlike functions and therefore, we first determine the radius of starlikeness for $f \in \mathcal{K}_1$. From \eqref{6}, we have \begin{equation*} \RE \frac{z f'(z)}{f(z)} \geq \frac{r^4 - 4 r^3 - 2 r^2 -4r + 1}{1-r^4} \geq 0. \end{equation*} Solving the above inequality for $r$, we get that the function $f \in \mathcal{K}_1$ is starlike in $|z|\leq 0.216845$. Hence, all the radii that we are going to estimate here will be less than $0.216845$. For the function $f_1$ defined in \eqref{f1}, we have \begin{align*} \frac{zf'_1(z)}{f_1(z)} & = \frac{1+ 4 i z +2 z^2 -4 i z^3 +z^4}{1- z^4}\\ & = \frac{1+ 4 i z(1- z^2) + 2 z^2 + z^4 }{1 - z^4}. \end{align*} At $z := r i = (0.216845) i $, we have $zf'_1(z)/f_1(z)\approx 0$, thereby proving that the radius of starlikeness obtained for the class $\mathcal{K}_1$ is sharp. \begin{enumerate} \item In order to compute $R_{\mathcal{S}^*(\alpha)}$, we estimate the value of $r \in (0,1)$ satisfying \begin{equation*} \RE \frac{z f'(z)}{f(z)} \geq \frac{r^4 - 4 r^3 - 2 r^2 -4r + 1}{1-r^4} \geq \alpha. \end{equation*} Therefore, the number $r = R_{\mathcal{S}^*(\alpha)}$ is the smallest positive real root of the equation $r^4(1+\alpha) - 4 r^3-2 r^2 -4 r +(1-\alpha)=0$ in $(0,1)$. For the function $f_1 \in \mathcal{K}_1$ given by \eqref{f1}, we have \begin{align} \label{f11} \frac{z f_1'(z)}{f_1(z)} & = \frac{1+ 4 i z +2 z^2 -4 i z^3 +z^4}{1- z^4}. \end{align} At $z:= ir = i R_{\mathcal{S}^*(\alpha)}$, \eqref{f11} reduces to \begin{align*} \frac{z f_1'(z)}{f_1(z)} & = \frac{1- 4 r - 2 r^2 -4 r^3 +r^4}{1- r^4} = \alpha, \end{align*} thereby proving that the radius is sharp. \item We use Lemma \ref{T1} to compute the lemniscate starlike radius for the function $f \in \mathcal{K}_1$. Let $a = (1+r^4)/(1-r^4)$. Then for $0 \leq r < 1$, we have $a \geq 1$, and $a < \sqrt{2}$ holds for $r < \sqrt[4]{(\sqrt{2}-1)/(\sqrt{2}+1)} \approx 0.64359$. On the other hand, consider \[ \frac{2r(2r^2 + r + 2)}{1-r^4} \leq \sqrt{2} - a = \sqrt{2} - \frac{1+r^4}{1 - r^4}. \] From this, let $r^*$ be the smallest positive real root of the equation $(1+\sqrt{2})r^4 + 4r^3 + 2r^2 + 4r + (1 - \sqrt{2}) = 0$.
Then the radius of lemniscate starlikeness for $f\in \mathcal{K}_1$ is \[ R_{\mathcal{S}^*_L} = \min\left\{ \left( \frac{\sqrt{2}-1}{\sqrt{2}+1}\right)^{1/4}, r^*\right\} = r^* = \frac{\sqrt{5}-2}{\sqrt{2}+1}. \] The radius obtained is sharp. Consider the functions $f, g: \mathbb{D}\rightarrow \mathbb{C}$ defined by \begin{equation} \label{f_1} f(z) = \frac {z(1-z)}{(1+z)^3} \quad \quad ~~~ \quad \text{and}~~~ \quad \quad g(z)= \frac{z}{(1+z)^2}. \end{equation} Then clearly $f \in \mathcal{K}_1$ as \begin{equation*} \RE \frac{f(z)}{g(z)} = \RE \frac{1-z^2}{z}g(z) = \RE \frac{1+ z}{1- z} >0. \end{equation*} Now, for $z := - r^* = - R_{\mathcal{S}^*_L}$, we have $(z^2 -4 z +1)/(1-z^2) = \sqrt{2}$ and thus \begin{equation*} \left| \left(\frac{zf'(z)}{f(z)}\right)^2 -1 \right| = \left| \left(\frac{r^2 +4r+1}{1-r^2}\right)^2 -1 \right|=1, \end{equation*} thereby proving that the radius obtained is sharp for the function $f$ in \eqref{f_1}. \item We use Lemma \ref{T2} to compute the parabolic starlike radius for $f \in \mathcal{K}_1$. Again, let $a = (1+r^4)/(1-r^4)$, which is larger than or equal to $1$ for $0 \leq r < 1$. Note that \[ a = \frac{1 + r^4}{1 - r^4} = \frac{3}{2} \quad \Leftrightarrow \quad r= \left(\frac{1}{5} \right)^{1/4} \approx 0.66874. \] Since the radius we are looking for would be less than $0.216845$, we only consider the case $1/2 < a \leq 3/2$ in Lemma \ref{T2}. So when considering \[ \frac{2r(2r^2 + r + 2)}{1-r^4} \leq \frac{1 + r^4}{1-r^4} - \frac{1}{2}, \] let $r^*$ be the smallest positive real root of the equation $3 r^4-8 r^3 -4 r^2 -8 r +1= 0$. Then the radius of parabolic starlikeness for $f\in \mathcal{K}_1$ is \[ R_{\mathcal{S}_P} = \min\left\{ \left(\frac{1}{5}\right)^{1/4} , r^*\right\} = r^* \approx 0.116675. \] \indent We see that the sharpness follows for the function $f_1 \in \mathcal{K}_1$ defined in \eqref{f1}. At $z = ir$, we have \begin{equation*} F(r) = \frac{z f_1'(z)}{f_1(z)}\bigg|_{z = ir} = \frac{1- 4r - 2 r^2 -4 r^3 +r^4}{1- r^4}. \end{equation*} Then, \begin{align*} \left| F(r)- 1 \right| &= \left| \frac{2r (r^3-2r^2-r-2)}{1-r^4}\right|. \end{align*} For $z := ir^* = iR_{\mathcal{S}_{P}}$, we have \begin{align*} \RE \frac{z f'_1(z)}{f_1(z)} &= \frac{1+ r^4 - 4 r^3 -2 r^2- 4r}{1- r^4} (\approx 0.5)\\ &= \frac{2 r (2+r+2r^2-r^3)}{1 -r^4}= \left| \frac{zf_1'(z)}{f_1(z)}- 1 \right|. \end{align*} Thus the radius obtained is sharp for the function $f_1$. \item By using Lemma \ref{T3} and an argument similar to the above, we get that the exponential starlike radius $R_{\mathcal{S}^*_e}$ for the class $\mathcal{K}_1$ is the smallest positive real root of the equation $(4 r^3 +2 r^2 + 4 r -1 -r^4)e = r^4 -1$. The radius is sharp for the function $f_1$ defined in \eqref{f1}. For $z := ir = i{R}_{\mathcal{S}^*_e}$, we have \begin{equation*} \left|\log \frac{zf'_1(z)}{f_1(z)} \right| = \left| \log \frac{1 + r^4 - 4r^3 - 2 r^2 - 4r}{1 - r^4}\right|=1. \end{equation*} \item By using Lemma \ref{T4} and a similar argument as before, the $\mathcal{S}^*_c$-radius for the class $\mathcal{K}_1$ is the smallest positive real root of the equation $2 r^4 - 6 r^3 - 3 r^2 - 6 r +1 = 0$. The radius is sharp for the function $f_1$ defined in \eqref{f1}. Indeed, for the function $f_1$ defined in \eqref{f1}, we have at $z := ir = i R_{\mathcal{S}^*_c}$, \begin{equation*} \frac{zf'_1(z)}{f_1(z)} = \frac{1 + r^4 - 4r^3 - 2 r^2 - 4r}{1 - r^4} =\frac{1}{3}=h_c(-1)\in \partial h_c(\mathbb{D}), \end{equation*} where $h_c(z) = 1 + (4/3)z + (2/3)z^2$.
This shows that the result is sharp. \item To determine the $\mathcal{S}^*_{\leftmoon}$-radius, ${R}_{\mathcal{S}^*_{\leftmoon}}$, we will use Lemma \ref{T7}. After some computations following the idea above, it can be shown that ${R}_{\mathcal{S}^*_{\leftmoon}}$ is the smallest positive real root of the equation $4 r^3 + 2 r^2 + 4r = 2 -\sqrt{2}(1-r^4)$. The radius is sharp for the function $f_1$ defined in \eqref{f1}, since at $z := ir = i{R}_{\mathcal{S}^*_{\leftmoon}}$, we have \begin{align*} \left| \left( \frac{zf_1'(z)}{f_1(z)}\right)^2-1 \right| &= \left| \left( \frac{1+ r^4 - 4 r^3 - 2 r^2 - 4 r }{1-r^4}\right)^2 -1 \right| (\approx 0.828427)\\&= 2\left| \frac{1+ r^4 - 4 r^3 - 2 r^2 - 4 r }{1-r^4} \right| = 2 \left|\frac{zf_1'(z)}{f_1(z)} \right|. \end{align*} \item In order to find the $\mathcal{S}^*_{\sin}$-radius for the function $f \in \mathcal{K}_1$, we make use of Lemma \ref{T5}. With $a = {(1+r^4)}/{(1-r^4)} >1$, it can be shown by arguing similarly as above that the $\mathcal{S}^*_{\sin}$-radius is the smallest positive real root of the equation $(2 + \sin{1})r^4 + 4r^3 + 2r^2 + 4r - \sin{1} = 0$. The radius is sharp for the function $f_1$ defined in \eqref{f1}. \item In order to compute the $\mathcal{S}^*_{RL}$-radius for the class $\mathcal{K}_1$, we use Lemma \ref{T9}. As $\sqrt{2}/3 \leq a = {(1+ r^4)}/{(1- r^4)} < \sqrt{2}$, a computation using Lemma \ref{T9} shows that the $\mathcal{S}^*_{RL}$-radius is the smallest positive real root of the equation \[ 4r^2(2r^2 + r + 2)^2 = (1-r^4)\sqrt{\left(\sqrt{2}-1\right) + \left(\sqrt{2}-2\right)r^4} - 2\left( \sqrt{2}-1 + (\sqrt{2}-2)r^4\right). \] The radius obtained is sharp for the function $ f \in \mathcal{K}_1$ given by \eqref{f_1}. At $ z := - r = - {R}_{\mathcal{S}^*_{RL}}$, we have $(z^2 -4 z +1)/(1-z^2) = \sqrt{2}$ and therefore, \begin{equation*} \left| \left( \frac{zf'(z)}{f(z)} - \sqrt{2} \right)^2 -1 \right| = \left| \left( \frac{1- 4z + z^2}{1- z^2} - \sqrt{2} \right)^2 -1 \right| = 1. \end{equation*} Hence the result. \item Since $2 (\sqrt{2}-1) < a = {(1+ r^4)}/{(1- r^4)}\leq \sqrt{2}$, by using Lemma \ref{T8}, it can be shown that the $\mathcal{S}^*_R$-radius is obtained by solving the equation \begin{equation*} \left( 2\sqrt{2} - 1\right)r^4 - 4r^3 - 2r^2 - 4r + \left( 3 - 2\sqrt{2}\right)=0. \end{equation*} The radius is sharp for the function $f_1$ defined in \eqref{f1}. Indeed, for the function $f_1$ defined in \eqref{f1}, we have at $z := ir = i {R}_{\mathcal{S}^*_R}$ that \begin{equation*} \frac{zf'_1(z)}{f_1(z)} = \frac{1 + r^4 - 4r^3 - 2 r^2 - 4r}{1 - r^4} = 2\sqrt{2} -2 =h_R(-1)\in \partial h_R(\mathbb{D}). \end{equation*} Here, $h_R(z) = 1+ (zk + z^2)/(k^2 - kz)$ and $k = \sqrt{2} + 1$. \qedhere \end{enumerate} \end{proof} Our next result gives the various radii of starlikeness for the class $\mathcal{K}_2$, which consists of functions $f \in \mathcal{A}$ satisfying $|(f(z)/g(z))-1|<1$ for some $g \in \mathcal{A}$ with $\RE ((1-z^2)g(z)/z) >0$. Consider the functions $f_2$, $g_2: \mathbb{D}\rightarrow \mathbb{C}$ defined by \begin{equation} \label{f2} f_2(z) = \frac{z(1+ i z)^2}{(1-z^2)(1-i z)} \quad \quad ~~~ \quad \text{and}~~~ \quad \quad g_2(z)= \frac{z (1+ i z)}{(1-z^2) (1- i z)}. \end{equation} Clearly, \begin{equation*} \left| \frac{f_2(z)}{g_2(z)} - 1 \right| = |i z| = |z|< 1~~~~\quad \text{and}~~~~\quad \RE \frac{1-z^2}{z}g_2(z) = \RE \frac{1+ i z}{1- i z} >0. \end{equation*} Therefore, the function $f_2$ is in $\mathcal{K}_2$ and this shows $\mathcal{K}_2 \neq \emptyset$.
Note that this function $f_2$ would serve as an extremal function for several radius problems that we study here. \begin{theorem} For $f \in \mathcal{K}_2$, the following results hold: \begin{enumerate} \item The sharp $\mathcal{S}^*(\alpha)$-radius is the smallest positive real root of the equation $ \alpha r^4 -3r (r^2 + r + 1) + (1- \alpha)=0$, \quad $0 \leq \alpha <1$. \item The $\mathcal{S}^*_L$-radius is ${R}_{\mathcal{S}^*_L}= (\sqrt{2}-1)/(\sqrt{2}+2) \approx 0.12132$. \item The sharp $\mathcal{S}_{P}$-radius is the smallest positive real root of the equation $ 6 r^3 + 6 r^2 + 6 r -1 - r^4 = 0$, i.e., ${R}_{\mathcal{S}_{P}} \approx 0.1432698$. \item The sharp $\mathcal{S}^*_{e}$-radius is the smallest positive real root of the equation $(3 r^3 + 3 r^2 + 3 r -1)e + 1 - r^4 = 0$, i.e., ${R}_{\mathcal{S}^*_{e}} \approx 0.174887$. \item The sharp $\mathcal{S}^*_{c}$-radius is the smallest positive real root of the equation $ 9 r^3 + 9 r^2 + 9 r - 2 - r^4 = 0 $, i.e., ${R}_{\mathcal{S}^*_{c}} \approx 0.182815$. \item The sharp $\mathcal{S}^*_{\leftmoon}$-radius is the smallest positive real root of the equation $ r^4 (1- \sqrt{2}) + 3 r^3 + 3 r^2 + 3 r = 2 - \sqrt{2}$, i.e., ${R}_{\mathcal{S}^*_{\leftmoon}} \approx 0.164039$. \item The sharp $\mathcal{S}^*_{\sin}$-radius is ${R}_{\mathcal{S}^*_{\sin}}= \sin{1}/(3+ \sin{1}) \approx 0.219049$. \item The sharp $\mathcal{S}^*_{R}$-radius is the smallest positive real root of the equation $ 2 r^4 + 3 r^3 + 3 r^2 + 3 r - 3 + 2 \sqrt{2}(1- r^4) =0 $, i.e., ${R}_{\mathcal{S}^*_{R}} \approx 0.0541073$. \item The $\mathcal{S}^*_{RL}$-radius is ${R}_{\mathcal{S}^*_{RL}} \approx 0.0870259$. \end{enumerate} \end{theorem} \begin{proof} Let $f \in \mathcal{K}_2$ and the function $g: \mathbb{D} \rightarrow \mathbb{C}$ be chosen such that \begin{equation}\label{2.1} \left| {\frac{f(z)}{g(z)}}-1\right|< 1 \quad \text{and} \quad \RE{\Big( \frac{1-z^2}{z} g(z) \Big)}>0 \quad (z \in \mathbb{D}). \end{equation} Note that $| {f(z)}/{g(z)}-1|< 1 $ holds if and only if $\RE (g(z)/f(z)) > 1/2$. Define $p_1, p_2: \mathbb{D}\rightarrow \mathbb{C}$ as \begin{equation}\label{2.2} p_1(z)= \frac{1-z^2}{z} g(z) \quad \text{and} \quad p_2(z)= \frac{g(z)}{f(z)}. \end{equation} Then, by equations \eqref{2.1} and \eqref{2.2}, $p_1 \in \mathcal{P}$ and $p_2 \in \mathcal{P}(1/2)$. Equation \eqref{2.2} also yields \begin{equation*} f(z) = \frac{z}{1-z^2} \frac{p_1(z)}{ p_2(z)}. \end{equation*} Taking logarithms on both sides and differentiating with respect to $z$ gives \begin{equation}\label{2.3} \frac{z f'(z)}{f(z)}= \frac{1+z^2}{1-z^2}+ \frac{z p_1'(z)}{p_1(z)}- \frac{z p_2'(z)}{p_2(z)}. \end{equation} By using equations \eqref{4}, \eqref{5} and \eqref{2.3}, it can be proven that the function $f$ maps the disk $|z|\leq r$ onto the disk \begin{equation}\label{2.6} \left| \frac{z f'(z)}{f(z)} - \frac{1+r^4}{1-r^4} \right| \leq \frac{r (r^3 + 3 r^2 + 3 r + 3)}{1-r^4}. \end{equation} From \eqref{2.6}, we can get \begin{equation*} \RE \frac{z f'(z)}{f(z)} \geq \frac{1 - 3 r ( r^2 + r + 1)}{1-r^4} \geq 0. \end{equation*} Upon solving for $r$, we can conclude that the function $f \in \mathcal{K}_2$ is starlike in $|z|\leq 0.253077\cdots$. The classes we are considering here are all subclasses of starlike functions; hence, all the radii we estimate here will be less than $0.253077\cdots$.
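Most of the radii in the theorem above are characterised as the smallest positive root of an explicit polynomial equation, so the quoted numerical values can be cross-checked directly. The following is a minimal illustrative sketch (assuming Python with NumPy; the helper name is an illustrative choice of ours) that recovers the starlikeness bound $0.253077$ just obtained and, as an example, the value of ${R}_{\mathcal{S}_{P}}$ from part (3) of the theorem.
\begin{verbatim}
import numpy as np

def smallest_positive_root(coeffs):
    # coeffs are ordered from the highest degree down to the constant term
    roots = np.roots(coeffs)
    real_roots = roots[np.abs(roots.imag) < 1e-9].real
    return min(x for x in real_roots if x > 0)

# Radius of starlikeness for K_2:  3r^3 + 3r^2 + 3r - 1 = 0
print(smallest_positive_root([3, 3, 3, -1]))        # ~0.253077

# S_P-radius for K_2 (part (3)):  r^4 - 6r^3 - 6r^2 - 6r + 1 = 0
print(smallest_positive_root([1, -6, -6, -6, 1]))   # ~0.143270
\end{verbatim}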
For the function $f_2$ defined in \eqref{f2}, we have \begin{align*} \frac{zf'_2(z)}{f_2(z)} & = \frac{1 + 3 i z + 3 z^2 - 3 i z^3}{1- z^4}\\ & = \frac{1+ 3 i z (1- z^2) + 3 z^2 }{1 - z^4}. \end{align*} At $z := ir = i(0.253077)$, we have $zf_2'(z)/f_2(z)\approx 0$, thereby proving that the radius of starlikeness obtained for the class $\mathcal{K}_2$ is sharp. \begin{enumerate} \item In order to compute $R_{\mathcal{S}^*(\alpha)}$, we estimate the value of $r \in (0,1)$ satisfying \begin{equation*} \RE \frac{z f'(z)}{f(z)} \geq \frac{1 - 3 r ( r^2 + r + 1)}{(1-r^4)} \geq \alpha. \end{equation*} Therefore, the number $R_{\mathcal{S}^*(\alpha)}$ is the smallest positive real root of the equation $\alpha r^4 - 3 r ( r^2 + r + 1) +(1-\alpha)=0$ in $(0,1)$. For the function $f_2 \in \mathcal{K}_2$ given by \eqref{f2}, we have \begin{align} \label{f22} \frac{z f'_2(z)}{f_2(z)} & = \frac{1 + 3 i z + 3 z^2 - 3 i z^3}{1- z^4}. \end{align} At $z:= ir = i{R}_{\mathcal{S}^*(\alpha)}$, \eqref{f22} reduces to \begin{align*} \frac{z f'_2(z)}{f_2(z)} & = \frac{1 - 3 r - 3 r^2 - 3 r^3}{1- r^4} = \alpha, \end{align*} thereby proving that the radius is sharp. \item We use Lemma \ref{T1} to compute the lemniscate starlike radius for $f \in \mathcal{K}_2$. So, let $a = (1 + r^4)/(1-r^4)$. Then $1 \leq a < \infty$ for $r\in [0,1)$, and $a < \sqrt{2}$ when $r < \left(\left(\sqrt{2} - 1\right)/\left(\sqrt{2} + 1\right)\right)^{1/4}$. From \eqref{2.6}, we know that $f\in \mathcal{K}_2$ maps the disk $|z| \leq r$ onto the disk \begin{equation} \left| \frac{z f'(z)}{f(z)} - \frac{1+r^4}{1-r^4} \right| \leq \frac{r (r^3 + 3 r^2 + 3 r + 3)}{1-r^4}. \end{equation} So, consider \[ \frac{r (r^3 + 3 r^2 + 3 r + 3)}{1-r^4} \leq \sqrt{2} - \frac{1+r^4}{1-r^4}, \] and let $r^*$ be the smallest positive real root of the equation \[ \left( \sqrt{2} + 2\right) r^4 + 3r^3 + 3r^2 + 3r + \left( 1 - \sqrt{2}\right) = 0. \] Then by Lemma \ref{T1}, the lemniscate starlike radius $R_{\mathcal{S}^*_L}$ for $f \in \mathcal{K}_2$ is given by \[ R_{\mathcal{S}^*_L} = \min\left\{ \left(\frac{\sqrt{2} - 1}{\sqrt{2} + 1}\right)^{1/4}, r^*\right\} = r^* = \frac{\sqrt{2} -1}{\sqrt{2} + 2} = 0.12132\cdots. \] This radius may not be sharp. \item We use Lemma \ref{T2} to compute the parabolic starlike radius for $f \in \mathcal{K}_2$. For $a = (1 + r^4)/(1-r^4)$, we have $a \leq 3/2$ if $r \leq (1/5)^{1/4} \approx 0.668740305$. By Lemma \ref{T2}, consider \[ \frac{r (r^3 + 3 r^2 + 3 r + 3)}{1-r^4} \leq \frac{1+r^4}{1-r^4} - \frac{1}{2}, \] and let $r^*$ be the smallest positive real root of the equation \[ r^4 - 6r^3 - 6r^2 - 6r + 1 = 0. \] Then the $\mathcal{S}_{P}$-radius is \[ R_{\mathcal{S}_P} = \min \left\{ \left( \frac{1}{5}\right)^{1/4}, r^*\right\} = r^* \approx 0.1432698. \] We see that sharpness follows for the function $f_2 \in \mathcal{K}_2$ defined in \eqref{f2}. As shown previously, at $z := i r $, we have \begin{equation*} \frac{z f_2'(z)}{f_2(z)} = \frac{1 - 3 r - 3 r^2 - 3 r^3}{1- r^4}. \end{equation*} Thus, \begin{align*} \left| \frac{zf_2'(z)}{f_2(z)}- 1 \right| &= \left| \frac{r (3 + 3 r + 3 r^2 - r^3)}{1 - r^4}\right|. \end{align*} For $z := i r = i {R}_{\mathcal{S}_P}$, we have \begin{align*} \RE \frac{z f_2'(z)}{f_2(z)} &= \frac{1 - 3 r - 3 r^2 - 3 r^3}{1- r^4} \quad(\approx 0.5)\\ &=\frac{r (3 + 3 r + 3 r^2 - r^3)}{1 - r^4}= \left| \frac{zf_2'(z)}{f_2(z)}- 1 \right|. \end{align*} Thus the radius obtained is sharp for the function $f_2$.
\item For the $\mathcal{S}^*_e$-radius of $f\in \mathcal{K}_2$, we will use Lemma \ref{T3}. With $a = (1 + r^4)/(1-r^4)$, $0\leq r <1$, we have $a < e$ for $r < [(e-1)/(e+1)]^{1/4} \approx 0.82449$. Also, since $a \leq \frac{1}{2}(e + e^{-1})$ for $r < [(e-1)/(e+1)]^{2} \approx 0.213552$, consider \[ \frac{r (r^3 + 3 r^2 + 3 r + 3)}{1-r^4} \leq \frac{1+r^4}{1-r^4} - \frac{1}{e}, \] and let $r^*$ be the smallest positive real root of the equation \[ r^4 - e(3r^3 + 3r^2 + 3r) + e - 1 = 0. \] Then the $\mathcal{S}^*_{e}$-radius is \[ R_{\mathcal{S}^*_e} = \min \left\{ \left( \frac{e-1}{e+1}\right)^{2}, r^*\right\} = r^* \approx 0.174887. \] The radius is sharp for the function $f_2$ defined in \eqref{f2}: at $z := ir = i{R}_{\mathcal{S}^*_{e}}$, we have \begin{equation*} \left|\log \frac{zf'_2(z)}{f_2(z)} \right| = \left| \log \frac{1 - 3 r - 3 r^2 - 3 r^3}{1- r^4}\right|=1, \end{equation*} hence proving that the exponential starlike radius obtained for the class $\mathcal{K}_2$ is sharp. \item By using Lemma \ref{T4}, it can be proven similarly as above that the $\mathcal{S}^*_c$-radius $R_{\mathcal{S}^*_c}$ for the class $\mathcal{K}_2$ is the smallest positive real root of the equation $9 r^3 + 9 r^2 + 9 r - 2 - r^4 = 0$, which is $R_{\mathcal{S}^*_c} \approx 0.182815$. The radius obtained is sharp for the function $f_2$ defined in \eqref{f2} as for $z := ir = i {R}_{\mathcal{S}^*_c}$, \begin{equation*} \frac{zf'_2(z)}{f_2(z)} = \frac{1 -3 r - 3 r^2 -3 r^3}{1 - r^4} =\frac{1}{3}=h_c(-1)\in \partial h_c(\mathbb{D}). \end{equation*} \item The $\mathcal{S}^*_{\leftmoon}$-radius ${R}_{\mathcal{S}^*_{\leftmoon}}$ for the class $\mathcal{K}_2$ is the smallest positive real root of the equation $r^4 (1- \sqrt{2}) + 3 r^3 + 3 r^2 + 3 r = 2 - \sqrt{2}$. This can be obtained by considering the inequality \[ \frac{3 r^3 +3 r^2 + 3 r - 1}{1- r^4} \leq 1- \sqrt{2} \] and then using Lemma \ref{T7}. The radius is sharp for the function $f_2$ defined in \eqref{f2}, since at $z := i r = i{R}_{\mathcal{S}^*_{\leftmoon}}$, we have \begin{align*} \left| \left( \frac{zf_2'(z)}{f_2(z)}\right)^2 - 1 \right| &= \left| \left( \frac{1 - 3 r - 3 r^2 - 3 r^3}{1- r^4}\right)^2 -1 \right| \quad (\approx 0.82842)\\ &= 2\left| \frac{1 - 3 r - 3 r^2 - 3 r^3}{1- r^4} \right| = 2 \left|\frac{zf_2'(z)}{f_2(z)} \right|. \end{align*} \item In order to find the $\mathcal{S}^*_{\sin}$-radius for the function $f \in \mathcal{K}_2$, we make use of Lemma \ref{T5}. It is easy to see that $ 1 - \sin 1 < a = {(1+r^4)}/{(1-r^4)} < 1+ \sin 1 $ for $r < [(\sin{1})/(2 + \sin{1})]^{1/4}$. Since $a>1$, consider \begin{equation*} \frac{r (r^3 + 3 r^2 + 3 r + 3)}{1-r^4} \leq \sin 1 - \left(\frac{1+r^4}{1-r^4} - 1\right). \end{equation*} Then the $\mathcal{S}^*_{\sin}$-radius, $R_{\mathcal{S}^*_{\sin}} (\approx 0.219049)$, is the smallest positive real root of the equation \[ (3 + \sin{1})r^4 + 3r(r^2 + r +1) = \sin{1}. \] The radius obtained is sharp for the function $f_2$ defined in \eqref{f2}. \item We use Lemma \ref{T8} in order to compute the $\mathcal{S}^*_R$-radius for the class $\mathcal{K}_2$. Since $2 (\sqrt{2}-1) < a = {(1+ r^4)}/{(1- r^4)}\leq \sqrt{2}$ for $r < [(\sqrt{2} - 1)/(\sqrt{2} + 1)]^{1/4} \approx 0.64359$, by Lemma \ref{T8}, we consider \[ \frac{r(r^3 + 3 r^2 +3 r + 3)}{1-r^4} \leq \frac{1 + r^4}{1 - r^4} - 2(\sqrt{2} - 1). \] Then the $\mathcal{S}^*_R$-radius for $\mathcal{K}_2$ can be computed to be $R_{\mathcal{S}^*_{R}} \approx 0.0541073$. The radius obtained is sharp for the function $f_2$ defined in \eqref{f2}.
Indeed, for the function $f_2$ defined in \eqref{f2}, we have at $z := ir = i {R}_{\mathcal{S}^*_R}$, \begin{equation*} \frac{zf'_2(z)}{f_2(z)} = \frac{1 - 3 r - 3 r^2 -3 r^3}{1 - r^4} =2 \sqrt{2} - 2=h_R(-1)\in \partial h_R(\mathbb{D}). \end{equation*} This shows that the result is sharp. \item Finally, for the $\mathcal{S}^*_{RL}$-radius, $R_{\mathcal{S}^*_{RL}}$, for the class $\mathcal{K}_2$, by Lemma \ref{T9}, the value of $R_{\mathcal{S}^*_{RL}} \approx 0.0870259$ is obtained from solving the equation \begin{align*} (1- r^4) \big\{ (1- r^4)^2 - \big( (\sqrt{2} - 1)& - (\sqrt{2} + 1) r^4 \big)^2 \big\}^{1/2} = (r^4 + 3r^3 + 3r^2 + 3r)^2 \\& \quad +(1- r^4)^2 - \big((\sqrt{2} - 1)- (\sqrt{2} + 1) r^4 \big)^2. \qedhere \end{align*} \end{enumerate} \end{proof} The last theorem aims at computing the various radii of starlikeness for functions $f \in \mathcal{K}_3$, that is, functions satisfying $\RE ((1-z^2)f(z)/z) >0$. Consider the function $f_3: \mathbb{D}\rightarrow \mathbb{C}$ defined by \begin{equation} \label{f3} f_3(z) = \frac{z(1+ i z)}{(1-z^2)(1- i z)}. \end{equation} Clearly, \begin{equation*} \RE \frac{(1-z^2)}{z}f_3(z) = \RE \frac{1+ i z}{1- i z} >0. \end{equation*} Therefore, the function $f_3 \in \mathcal{K}_3$ and $\mathcal{K}_3 \neq \emptyset$. This function $f_3$ would serve as an extremal function for various radius problems in the following theorem. \begin{theorem} For $f \in \mathcal{K}_3$, the following results hold: \begin{enumerate} \item The sharp $\mathcal{S}^*(\alpha)$-radius is the smallest positive real root of the equation $ (1+\alpha) r^4 - 2 r (r^2 + r + 1) + (1- \alpha)=0$, \quad $0 \leq \alpha <1$. \item The sharp $\mathcal{S}^*_L$-radius is ${R}_{\mathcal{S}^*_L}= (\sqrt{2}-1)/(\sqrt{2}+1) \approx 0.171573$. \item The sharp $\mathcal{S}_{P}$-radius is the smallest positive real root of the equation $ 4 r^3 + 4 r^2 + 4 r -1 - 3r^4 = 0$, i.e. ${R}_{\mathcal{S}_{P}} \approx 0.2021347$. \item The sharp $\mathcal{S}^*_{e}$-radius is the smallest positive real root of the equation $(2 r^3 + 2 r^2 + 2 r -1 - r^4)e + 1 - r^4 = 0$, i.e. ${R}_{\mathcal{S}^*_{e}} \approx 0.244259$. \item The sharp $\mathcal{S}^*_{c}$-radius is the smallest positive real root of the equation $ 3 r^3 + 3 r^2 + 3 r - 1 - 2 r^4 = 0 $, i.e. ${R}_{\mathcal{S}^*_{c}} \approx 0.254726$. \item The sharp $\mathcal{S}^*_{\leftmoon}$-radius is the smallest positive real root of the equation $ 2 r^3 + 2 r^2 + 2 r - \sqrt{2} r^4 = 2 - \sqrt{2}$, i.e. ${R}_{\mathcal{S}^*_{\leftmoon}} \approx 0.229877$. \item The sharp $\mathcal{S}^*_{\sin}$-radius is ${R}_{\mathcal{S}^*_{\sin}}= \sin{1}/(2 + \sin{1}) \approx 0.296139$. \item The sharp $\mathcal{S}^*_{R}$-radius is the smallest positive real root of the equation $ r^4 + 2 r^3 + 2 r^2 + 2 r -3 + 2 \sqrt{2} (1 - r^4)=0 $, i.e. ${R}_{\mathcal{S}^*_{R}} \approx 0.0790749$. \item The $\mathcal{S}^*_{RL}$-radius is ${R}_{\mathcal{S}^*_{RL}} \approx 0.125145$. \end{enumerate} \end{theorem} \begin{proof} Let $f \in \mathcal{K}_3$. Then \begin{equation}\label{3.1} \RE{\Big( \frac{1-z^2}{z} f(z) \Big)}>0 \quad (z \in \mathbb{D}). \end{equation} Define $p: \mathbb{D}\rightarrow \mathbb{C}$ as \begin{equation}\label{3.2} p(z)= \frac{1-z^2}{z} f(z). \end{equation} Therefore, by equation \eqref{3.1}, we have $p \in \mathcal{P}$ and \begin{equation*} f(z) = \frac{z}{(1-z^2)} p(z). \end{equation*} From this, taking logarithms on both sides and differentiating with respect to $z$ gives \begin{equation}\label{3.3} \frac{z f'(z)}{f(z)}= \frac{1+z^2}{1-z^2}+ \frac{z p'(z)}{p(z)}.
\end{equation} By using equations \eqref{4}, \eqref{5} and \eqref{3.3}, we can prove that the function $f$ maps the disk $|z|\leq r$ onto the disk \begin{equation}\label{3.6} \left| \frac{z f'(z)}{f(z)} - \frac{1+r^4}{1-r^4} \right| \leq \frac{2 r (r^2 + r + 1)}{1-r^4}. \end{equation} In order to solve radius problems, we are interested in computing the value of $r$ for which the disk in \eqref{3.6} is contained in the corresponding regions. Again, the classes we are considering here are all subclasses of starlike functions and are therefore defined by the quantity $z f'(z)/f(z)$ lying in some region in the right half-plane. In particular, for $f$ to be in $\mathcal{S}^*$, we need \begin{equation*} \RE \frac{z f'(z)}{f(z)} \geq \frac{1 + r^4 - 2 r ( r^2 + r + 1)}{1-r^4} \geq 0. \end{equation*} Thus the function $f \in \mathcal{K}_3$ is starlike in $|z|\leq 0.346014$. Consequently, all the radii we estimate here will be less than $0.346014$. For the function $f_3$ defined in \eqref{f3}, we have \begin{align}\label{f33} \frac{zf'_3(z)}{f_3(z)} & = \frac{1 + 2 i z + 2 z^2 - 2 i z^3 + z^4}{1- z^4}\\ \notag & = \frac{1+ 2 i z (1- z^2) + 2 z^2 + z^4 }{1 - z^4}. \end{align} At $z := ir = i(0.346014)$, we have $zf'_3(z)/f_3(z)\approx 0$, thereby proving that the radius of starlikeness obtained for the class $\mathcal{K}_3$ is sharp. \begin{enumerate} \item To determine the radius $R_{\mathcal{S}^*(\alpha)}$ of starlikeness of order $\alpha$, we estimate the value of $r \in (0,1)$ satisfying \begin{equation*} \RE \frac{z f'(z)}{f(z)} \geq \frac{1 + r^4 - 2 r ( r^2 + r + 1)}{1-r^4} \geq \alpha. \end{equation*} Hence, $R_{\mathcal{S}^*(\alpha)}$ is the smallest positive real root of the equation $(1 + \alpha) r^4 - 2 r ( r^2 + r + 1) +(1-\alpha)=0 $ in $(0,1)$. If $z:= i r = i{R}_{\mathcal{S}^*(\alpha)}$, then \eqref{f33} reduces to \begin{align*} \frac{z f_3'(z)}{f_3(z)} & = \frac{1 - 2 r - 2 r^2 - 2 r^3 + r^4}{1- r^4} = \alpha, \end{align*} which shows that $f_3$ is the extremal function. \item We can use Lemma \ref{T1} to compute the lemniscate starlike radius for $f \in \mathcal{K}_3$. For $a = (1 + r^4)/(1-r^4)$, we have $a < \sqrt{2}$ when $r < \left(\left(\sqrt{2} - 1\right)/\left(\sqrt{2} + 1\right)\right)^{1/4}$. By \eqref{3.6} and Lemma \ref{T1}, consider \[ \frac{2r (r^2 + r + 1)}{1-r^4} \leq \sqrt{2} - \frac{1+r^4}{1-r^4}, \] and let $r^*$ be the smallest positive real root of the equation \[ \left( \sqrt{2} + 1\right) r^4 + 2r^3 + 2r^2 + 2r + \left( 1 - \sqrt{2}\right) = 0. \] Hence, the lemniscate starlike radius $R_{\mathcal{S}^*_L}$ for $f \in \mathcal{K}_3$ is given by \[ R_{\mathcal{S}^*_L} = \min\left\{ \left(\frac{\sqrt{2} - 1}{\sqrt{2} + 1}\right)^{1/4}, r^*\right\} = r^* = \frac{\sqrt{2}-1}{\sqrt{2}+1} = 0.1715728753\ldots. \] For the sharpness, consider the function $\hat{f_3}: \mathbb{D}\rightarrow \mathbb{C}$ defined by \begin{equation} \label{f_31} \hat{ f_3}(z) = \frac {z}{(1+z)^2}. \end{equation} Clearly, \begin{equation*} \RE \frac{1-z^2}{z}\hat{f_3}(z) = \RE \frac{1 - z}{1+ z} >0. \end{equation*} So $\hat{f_3} \in \mathcal{K}_3$. Also \begin{equation*} \left| \left(\frac{z\hat{f_3}'(z)}{\hat{f_3}(z)}\right)^2 -1 \right| = \left| \left(\frac{1 - z}{1 + z}\right)^2 -1 \right|.
\end{equation*} Now, for $z := -r = -R_{\mathcal{S}^*_L}$, we have $(1 - z)/(1 + z) = (1+r)/(1-r) = \sqrt{2}$ and \begin{equation*} \left| \left(\frac{z\hat{f_3}'(z)}{\hat{f_3}(z)}\right)^2 -1 \right| = \left| \left(\sqrt{2}\right)^2 -1 \right|=1. \end{equation*} Therefore, the radius obtained is sharp for the function $\hat{f_3}$. \item For the parabolic starlike radius for $f \in \mathcal{K}_3$, we use Lemma \ref{T2}. For $r \leq (1/5)^{1/4}$, we have $a= {(1+r^4)}/{(1-r^4)}\leq 3/2$, and if we consider \begin{equation}\label{39} \frac{ 2 r ( r^2 + r + 1)}{1-r^4} \leq \frac{1+r^4}{1-r^4} -\frac{1}{2}, \end{equation} then the $\mathcal{S}_P$-radius is given by \[ R_{\mathcal{S}_P} = \min\left\{ \left(\frac{1}{5}\right)^{1/4}, r^*\right\} = r^* \approx 0.2021347, \] where $r^*$ is the smallest positive real root of the equation $3r^4 - 4 r^3 - 4 r^2 - 4 r + 1 = 0$. The sharpness of the result follows for the function $f_3$ defined in \eqref{f3}. As shown previously, at $z := i r$, we have \begin{equation*} \frac{z f_3'(z)}{f_3(z)} = \frac{1 - 2 r - 2 r^2 - 2 r^3 + r^4}{1- r^4}. \end{equation*} Then for $z := i r = i{R}_{\mathcal{S}_P}$, we have \begin{align*} \RE \frac{z f'_3(z)}{f_3(z)} &= \frac{1 - 2 r - 2 r^2 - 2 r^3 + r^4}{1- r^4} = \frac{1}{2}\\ &= \frac{2 r (1 + r + r^2 - r^3)}{1 - r^4}= \left| \frac{zf_3'(z)}{f_3(z)}- 1 \right|, \end{align*} which shows that the radius obtained is sharp for the function $f_3$. \item By using Lemma \ref{T3} and considering \begin{equation*} \frac{2 r ( r^2 + r + 1)}{(1-r^4)} \leq \frac{1+r^4}{1-r^4} -\frac{1}{e}, \end{equation*} it can be proven similarly as above that the exponential starlike radius $R_{\mathcal{S}^*_e}$ for the class $\mathcal{K}_3$ is the smallest positive real root of the equation $(2 r^3 + 2 r^2 + 2 r -1 - r^4)e + 1 - r^4 = 0$. Again, for the function $f_3$ given in \eqref{f3}, at $z := i r = i{R}_{\mathcal{S}^*_e} \approx i(0.244259)$, we have \begin{equation*} \left|\log \frac{zf'_3(z)}{f_3(z)} \right| = \left| \log \frac{1 - 2 r - 2 r^2 - 2 r^3 + r^4}{1- r^4}\right|=1, \end{equation*} thereby proving that the result obtained is sharp. \item For the $\mathcal{S}^*_c$-radius for the class $\mathcal{K}_3$, we use Lemma \ref{T4} by considering \begin{equation*} \frac{2 r ( r^2 + r + 1)}{1-r^4} \leq \frac{1+r^4}{1-r^4} -\frac{1}{3}. \end{equation*} Then it can be proven similarly as above that $R_{\mathcal{S}^*_c}$ is the smallest positive real root of the equation $$3 r^3 + 3 r^2 + 3 r - 1 - 2 r^4 = 0.$$ The radius obtained is sharp for the function $f_3$ defined in \eqref{f3}. Indeed, for the function $f_3$, we have at $z := i r = i {R}_{\mathcal{S}^*_c} \approx i(0.254726)$ that \begin{equation*} \frac{zf'_3(z)}{f_3(z)} = \frac{1 - 2 r - 2r^2 -2 r^3 +r^4}{1 - r^4} =\frac{1}{3}=h_c(-1)\in \partial h_c(\mathbb{D}). \end{equation*} \item Arguing as above, the $\mathcal{S}^*_{\leftmoon}$-radius for the class $\mathcal{K}_3$ is the smallest positive real root of the equation \[ 2 r^3 + 2 r^2 + 2 r - \sqrt{2} r^4 = 2 - \sqrt{2}.\] In this case, we use Lemma \ref{T7} and consider \begin{equation*} \frac{2 r ( r^2 + r + 1)}{1- r^4} \leq 1- \sqrt{2} + \frac{1 + r^4}{1 - r^4}.
\end{equation*} The radius is sharp for the function $f_3$ defined in \eqref{f3}, since at $z := ir = i{R}_{\mathcal{S}^*_{\leftmoon}}$, we have \begin{align*} \left| \left( \frac{zf_3'(z)}{f_3(z)}\right)^2-1 \right| &= \left| \left( \frac{1 - 2 r - 2 r^2 - 2 r^3 + r^4}{1- r^4}\right)^2 -1 \right| \\&= 2\left| \frac{1 - 2 r - 2 r^2 - 2 r^3 + r^4}{1- r^4} \right| = 2 \left|\frac{zf_3'(z)}{f_3(z)} \right|. \end{align*} \item In order to find the $\mathcal{S}^*_{\sin}$-radius for the function $f \in \mathcal{K}_3$, we make use of Lemma \ref{T5}, where we consider \begin{equation*}\label{11} \frac{2 r ( r^2 + r + 1)}{(1-r^4)} \leq \sin 1 - \frac{2 r^4}{1-r^4}. \end{equation*} The $\mathcal{S}^*_{\sin}$-radius, $R_{\mathcal{S}^*_{\sin}}$, is the smallest positive real root of the equation \[ 2r(r^3 + r^2 + r + 1) = (\sin{1})(1 - r^4). \] The radius obtained is sharp for the function $f_3$ defined in \eqref{f3}. \item We use Lemma \ref{T8} to compute the $\mathcal{S}^*_R$-radius for the class $\mathcal{K}_3$. By considering \begin{equation*} \frac{2 r ( r^2 + r + 1)}{1-r^4} \leq \frac{1 + r^4}{1 - r^4} - 2(\sqrt{2} - 1), \end{equation*} we obtain that $R_{\mathcal{S}^*_{R}}$ is the smallest positive real root of the equation \[ (2\sqrt{2} - 1)r^4 - 2r(r^2 + r + 1) + (3 - 2\sqrt{2}) = 0. \] The radius obtained is sharp for the function $f_3$ defined in \eqref{f3}. Indeed, for the function $f_3$, we have at $z := i r = i {R}_{\mathcal{S}^*_{R}} \approx i(0.0790749)$, \begin{equation*} \frac{zf'_3(z)}{f_3(z)} = \frac{1 - 2 r - 2r^2 -2 r^3 + r^4}{1 - r^4} = 2 \sqrt{2}-2=h_R(-1)\in \partial h_R(\mathbb{D}). \end{equation*} \item Lastly, the $\mathcal{S}^*_{RL}$-radius for the class $\mathcal{K}_3$ is obtained by using Lemma \ref{T9} and solving the equation \begin{align*} (1- r^4) \big\{ (1- r^4)^2 - \big( (\sqrt{2} - 1) - (\sqrt{2} + 1) r^4 \big)^2 \big\}^{1/2} &= (2r^3+2r^2+2r)^2 +(1- r^4)^2 \\& \quad - \big((\sqrt{2} - 1)- (\sqrt{2} + 1) r^4 \big)^2. \qedhere \end{align*} \end{enumerate} \end{proof} \subsection*{\textbf{Acknowledgment.}} The second author gratefully acknowledges support from USM research university grants 1001.PMATHS.8011101. \end{document}
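The numerical approximations quoted in the theorems above for the class $\mathcal{K}_3$ can also be checked independently of the lemmas. The following short Python sketch is an illustrative check only, not part of the proofs; the polynomial coefficients are transcribed directly from the stated equations, and a standard root finder is used.
\begin{verbatim}
# Illustrative numerical check of the K_3 radii quoted above; the
# coefficients are transcribed from the stated polynomial equations.
import numpy as np

def smallest_positive_root(coeffs):
    roots = np.roots(coeffs)  # coefficients listed from highest degree down
    return min(r.real for r in roots if abs(r.imag) < 1e-8 and r.real > 0)

e, s = np.e, np.sqrt(2)
print(smallest_positive_root([3, -4, -4, -4, 1]))                # S_P  radius
print(smallest_positive_root([-(e + 1), 2*e, 2*e, 2*e, 1 - e]))  # S*_e radius
print(smallest_positive_root([-2, 3, 3, 3, -1]))                 # S*_c radius
print(smallest_positive_root([2*s - 1, -2, -2, -2, 3 - 2*s]))    # S*_R radius
\end{verbatim}
The printed values agree with the approximations quoted above, namely $0.2021347$, $0.244259$, $0.254726$ and $0.0790749$ respectively.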
\begin{document} \title{The Maximal Rank Conjecture for Sections of Curves} \begin{abstract} Let $C \subset \pp^r$ be a general curve of genus $g$ embedded via a general linear series of degree $d$. The \emph{Maximal Rank Conjecture} asserts that the restriction maps $H^0(\oo_{\pp^r}(m)) \to H^0(\oo_C(m))$ are of maximal rank; this determines the Hilbert function of $C$. In this paper, we prove an analogous statement for the union of hyperplane sections of general curves. More specifically, if $H \subset \pp^r$ is a general hyperplane, and $C_1, C_2, \ldots, C_n$ are general curves, we show $H^0(\oo_{H}(m)) \to H^0(\oo_{(C_1 \cup C_2 \cup \cdots \cup C_n) \cap H}(m))$ is of maximal rank, except for some counterexamples when $m = 2$. As explained in \cite{over}, this result plays a key role in the author's proof of the Maximal Rank Conjecture \cite{mrc}. \end{abstract} \section{Introduction} Let $\mathcal{H}_{d, g, r}$ denote the Hilbert scheme classifying subschemes of $\pp^r$ with Hilbert polynomial $P(x) = dx + 1 - g$. We have a natural rational map from any component of $\mathcal{H}_{d, g, r}$ whose general member is a smooth curve to the moduli space $M_g$ of curves. The Brill--Noether theorem asserts that there exists such a component whose general member is nondegenerate and that dominates $M_g$ if and only if \[\rho(d, g, r) := (r + 1)d - rg - r(r + 1) \geq 0.\] Moreover, it is known that when $\rho(d, g, r) \geq 0$, there exists a unique such component that dominates $M_g$. We shall refer to a curve $C \subset \pp^r$ lying in this component as a \emph{Brill--Noether Curve (BN-curve)}. A natural first step in understanding the extrinsic geometry of general curves is to understand their Hilbert function. Here we have the \emph{Maximal Rank Conjecture}: \begin{conj}[Maximal Rank Conjecture] If $C$ is a general BN-curve and $m$ is a positive integer, then the restriction map \[H^0(\oo_{\pp^r}(m)) \to H^0(\oo_C(m))\] is of maximal rank. \end{conj} \begin{rem} Since $H^1(\oo_C(m)) = 0$ for $m \geq 2$ when $C$ is a general BN-curve, the Maximal Rank Conjecture completely determines the Hilbert function of $C$. \end{rem} In this paper, we study a related question for hyperplane sections. Namely, we prove that the general hyperplane section of a general union of BN-curves imposes the expected number of conditions on hypersurfaces of every degree, apart from a few counterexamples that occur for quadric hypersurfaces. Both the results and the techniques developed here play a critical role in the author's proof of the Maximal Rank Conjecture \cite{mrc}, as explained in \cite{over}. More precisely, we prove: \begin{thm}[Hyperplane Maximal Rank Theorem] \label{main} If $C_1, C_2, \ldots, C_n$ are independently general BN-curves of degrees $d_i$ and genera $g_i$, and $H \subset \pp^r$ is a general hyperplane, and $m$ is a positive integer, then the restriction map \[H^0(\oo_H(m)) \to H^0(\oo_{(C_1 \cup C_2 \cup \cdots \cup C_n) \cap H}(m))\] is of maximal rank, except possibly when $m = 2$ and $d_i < g_i + r$ for some $i$. \end{thm} The conclusion that this restriction map is of maximal rank can be reformulated in terms of the cohomology of the twists of the ideal sheaf as follows: \begin{align*} H^0(\mathcal{I}_{((C_1 \cup C_2 \cup \cdots \cup C_n) \cap H) / H}(m)) = 0 &\quad \text{when}\ \sum_{i = 1}^n d_i \geq \binom{m + r - 1}{r - 1}, \\ H^1(\mathcal{I}_{((C_1 \cup C_2 \cup \cdots \cup C_n) \cap H) / H}(m)) = 0 &\quad \text{when}\ \sum_{i = 1}^n d_i \leq \binom{m + r - 1}{r - 1}. 
\end{align*} In the course of proving Theorem~\ref{main}, we will also prove stronger results for $r = 3$ and for $r = 4$. Namely: \begin{thm} \label{add3} Let $X \subset H \simeq \pp^2 \subset \pp^3$ be a subscheme, and $C \subset \pp^3$ be a general BN-curve. \begin{itemize} \item If $C$ is a canonical curve and $m = 2$, suppose that $X$ is nonempty. \item If $C$ is a canonical curve and $m \neq 2$, write $\Lambda \subset H$ for a general line, and suppose that the restriction maps \begin{gather*} H^0(\oo_H(m)) \to H^0(\oo_X(m)) \\ H^0(\oo_H(m - 1)) \to H^0(\oo_X(m - 1)) \\ H^0(\oo_\Lambda(m)) \to H^0(\oo_{X \cap \Lambda}(m)) \end{gather*} are of maximal rank, with either the second one an injection, or the third one a surjection with kernel of dimension at least $4$. \item Otherwise, suppose the map \[H^0(\oo_H(m)) \to H^0(\oo_{X}(m))\] is of maximal rank. \end{itemize} Then the map \[H^0(\oo_H(m)) \to H^0(\oo_{X \cup (C \cap H)}(m))\] is of maximal rank. \end{thm} \begin{thm} \label{add4} Let $X \subset H \simeq \pp^3 \subset \pp^4$ be a subscheme, and $C \subset \pp^4$ be a general BN-curve of degree $d$ and genus $g$. \begin{itemize} \item If $(d, g) \in \{(8, 5), (9, 6), (10, 7)\}$ and $m = 2$, suppose that $X$ is either positive dimensional or of degree at least $11 - d$. \item If $(d, g) \in \{(8, 5), (9, 6), (10, 7)\}$ and $m \neq 2$, write $\Lambda \subset H$ for a general plane, and suppose that the restriction maps \begin{gather*} H^0(\oo_H(m)) \to H^0(\oo_X(m)) \\ H^0(\oo_H(m - 1)) \to H^0(\oo_X(m - 1)) \\ H^0(\oo_\Lambda(m)) \to H^0(\oo_{X \cap \Lambda}(m)) \end{gather*} are of maximal rank, with either the second one an injection, or the third one a surjection with kernel of dimension at least $8$. \item Otherwise, suppose the map \[H^0(\oo_H(m)) \to H^0(\oo_{X}(m))\] is of maximal rank. \end{itemize} Then the map \[H^0(\oo_H(m)) \to H^0(\oo_{X \cup (C \cap H)}(m))\] is of maximal rank. \end{thm} We shall prove Theorem~\ref{main} using an inductive approach due originally to Hirschowitz \cite{mrat}. In its simplest form, suppose that $C = X \cup Y$ is a reducible curve such that $Y$ is contained in some hyperplane $H'$: \begin{center} \begin{tikzpicture}[scale=0.7] \draw[thick] (0, 1) -- (1, 3) -- (9, 3) -- (8, 1) -- (0, 1); \draw[thick] (0, 4) -- (1, 6) -- (9, 0) -- (8, -2) -- (0, 4); \draw (2, 2) .. controls (2, 6) and (5, 6) .. (5, 2); \draw (2, 2) .. controls (2, 0) and (3, 0) .. (3, 2); \draw (4, 2) .. controls (4, 0) and (5, 0) .. (5, 2); \draw (3, 2) .. controls (3, 4) and (4, 4) .. (4, 2); \draw (6, 2.5) .. controls (3, 2.5) and (3, 1.5) .. (6, 1.5); \draw (6, 2.5) .. controls (8, 2.5) and (8, 1.5) .. (7, 1.5); \draw (6, 1.5) .. controls (7, 1.5) and (7, 2) .. (6.5, 2); \draw (7, 1.5) .. controls (6, 1.5) and (6, 2) .. (6.5, 2); \draw (9.1, 2.5) node{$H'$}; \draw (9.1, -0.5) node{$H$}; \draw (5, 4.1) node{$X$}; \draw (7.65, 2.5) node{$Y$}; \end{tikzpicture} \end{center} \noindent Then we have the exact sequence of sheaves \[0 \to \mathcal{I}_{(X \cap H) / H}(m - 1) \to \mathcal{I}_{(C \cap H) / H}(m) \to \mathcal{I}_{(Y \cap H)/(H \cap H')}(m) \to 0,\] which gives rise to a long exact sequence in cohomology \[\cdots \to H^i(\mathcal{I}_{(X \cap H) / H}(m - 1)) \to H^i(\mathcal{I}_{(C \cap H) / H}(m)) \to H^i(\mathcal{I}_{(Y \cap H)/(H \cap H')}(m)) \to \cdots.\] Consequently, we can deduce the hyperplane maximal rank theorem for the general hyperplane section of $C$ from the hyperplane maximal rank theorem for the general hyperplane sections of $X$ and $Y$. 
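It may also help to record the elementary dimension count behind this strategy. Writing $H \simeq \pp^{r-1}$ and $H \cap H' \simeq \pp^{r-2}$, the ambient spaces of sections relevant to the three sheaves above have dimensions $\binom{m+r-2}{r-1}$, $\binom{m+r-1}{r-1}$ and $\binom{m+r-2}{r-2}$, and Pascal's rule shows that the first and third add up to the second. The following minimal sketch (an illustration only, assuming $h^0(\oo_{\pp^k}(m)) = \binom{m+k}{k}$) verifies this bookkeeping for a range of small $m$ and $r$.
\begin{verbatim}
# Minimal check of the dimension bookkeeping behind the inductive method:
# degree-m forms on H ~ P^{r-1} split as degree-(m-1) forms on H plus
# degree-m forms on H cap H' ~ P^{r-2} (Pascal's rule).
from math import comb

for r in range(3, 10):
    for m in range(2, 8):
        assert comb(m + r - 1, r - 1) == \
            comb(m + r - 2, r - 1) + comb(m + r - 2, r - 2)
print("Pascal bookkeeping verified on the sampled range")
\end{verbatim}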
The structure of this paper is as follows. First, in Section~\ref{defc}, we give several methods of constructing reducible BN-curves that will be useful for specialization arguments later on. In Sections~\ref{r} and~\ref{m}, we prove the hyperplane maximal rank theorem in the special cases $r = 3$ and $m = 2$ respectively. We then deduce the general case in Sections~\ref{sec:glue} and~\ref{sec:ind} via the above inductive argument, by finding appropriate BN-curves $X \subset \pp^r$ and $Y \subset H' \subset \pp^r$ satisfying the hyperplane maximal rank theorem for $(m - 1, r)$ and $(m, r - 1)$ respectively. \textit{Notational Convention}: We say a BN-curve $X \subset \pp^r$ is \emph{nonspecial} if $d \geq g + r$, i.e.\ if $X$ is a \emph{limit} of curves with nonspecial hyperplane section. \section{\label{defc} Some Gluing Lemmas} In this section, we will give some lemmas that let us construct examples of BN-curves. \begin{lm} \label{glone} Let $X \subset \pp^r$ be a curve with $H^1(N_X) = 0$, and $D$ be a rational normal curve of degree $d \leq r$ that is $k$-secant to $X$, where \[k \leq \begin{cases} d + 1 & \text{if $d < r$;} \\ r + 2 & \text{if $d = r$.} \end{cases}\] Then $X \cup D$ is smoothable and $H^1(N_{X \cup D}) = 0$. Moreover, if $X$ is a BN-curve, then $X \cup D$ is a BN-curve. \end{lm} \begin{proof} The vanishing of $H^1(N_{X \cup D})$ and smoothability of $X \cup D$ are consequences of Theorem~4.1 of \cite{hh} (via the same argument as Corollary~4.2 of \cite{hh}), together with the fact that \[N_D = \oo_{\pp^1}(d)^{\oplus(r - d)} \oplus \oo_{\pp^1}(d + 2)^{\oplus (d - 1)}.\] Now assume $X$ is a BN-curve. To show that $X \cup D$ is a BN-curve, we just need to count the dimension of the space of embeddings of $X \cup D$ into projective space (this suffices because there is a unique component of the Hilbert scheme that dominates $M_g$). In order to do this, first note that \[\rho(X \cup D) = \rho(X) + (r + 1)d - r(k - 1).\] Consequently, the verification that $X \cup D$ is a BN-curve boils down to the following two assertions, both of which are straight-forward to check: \begin{enumerate} \item Given a $\pp^1$ with $k \leq d + 1$ marked points, the family of degree $d$ embeddings of $\pp^1$ as a rational normal curve with given values at the marked points has dimension \[(r - d)(d - k + 1) + d(d + 2 - k) = (r + 1)d - r(k - 1).\] \item Given a $\pp^1$ with $r + 2$ marked points, there is a unique embedding of $\pp^1$ as a rational normal curve of degree $r$ with given values at all marked points. \end{enumerate} This completes the proof. \end{proof} \begin{lm} \label{glonered} Let $X \subset \pp^r$ be a curve with $H^1(N_X) = 0$, and $R$ be a rational normal curve of degree $r - 1$ that is $(r + 1)$-secant to $X$, and $L$ be a line that is $1$-secant to both $X$ and $R$. Then $H^1(N_{X \cup R \cup L}) = 0$. \end{lm} \begin{proof} Note that for curves $A$ and $B$, \[ H^1(N_{A \cup B}|_A) = 0 \tand H^1(N_{A \cup B}|_B(-A \cap B)) = 0 \quad \Rightarrow \quad H^1(N_{A \cup B}) = 0;\] indeed, this holds for $N_{A \cup B}$ replaced by any vector bundle. In particular, since $N_A$ is a subbundle of full rank in $N_{A \cup B}|_A$, we can conclude that $H^1(N_{A \cup B}) = 0$ provided that \begin{align*} H^1(N_A) = 0 &\tand H^1(N_{A \cup B}|_B(-A \cap B)) = 0, \\ \text{or respectively} \quad H^1(N_{A \cup B}|_A) = 0 &\tand H^1(N_B(-A \cap B)) = 0. 
\end{align*} Thus, the vanishing of $H^1(N_{X \cup R \cup L})$ follows from the following facts: \begin{align*} H^1(N_X) &= 0 \\[-0.5ex] H^1(N_{R \cup L}|_R(-X \cap R)) &= H^1(\oo_{\pp^1}^{\oplus (r - 2)} \oplus \oo_{\pp^1}(-1)) = 0. \\ H^1(N_L(-L \cap (X \cup R))) &= H^1(\oo_{\pp^1}(-1)) = 0. \qedhere \end{align*} \end{proof} \begin{lm} \label{gltwo} Let $X \subset \pp^r$ be a curve with $H^1(N_X) = 0$, and $L$ be a line $3$-secant to $X$. Assume that the tangent lines to $X$ at the three points of intersection do not all lie in a plane. Then $X \cup L$ is smoothable and $H^1(N_{X \cup L}) = 0$. \end{lm} \begin{proof} See Remark 4.2.2 of \cite{hh}. \end{proof} We end this section with two simple observations that will be used several times in the remainder of the paper and are therefore worth spelling out. \begin{lm} \label{obvious} Let $\mathcal{X}$ and $\mathcal{Y}$ be irreducible families of curves in $\pp^r$, sweeping out subvarieties $\bar{\mathcal{X}}, \bar{\mathcal{Y}} \subset \pp^r$ of codimension at most one. Let $X$ and $Y$ be specializations of $\mathcal{X}$ and $\mathcal{Y}$ respectively, such that $X \cup Y$ is a BN-curve with $H^1(N_{X \cup Y}) = 0$, and $X \cap Y$ is quasi-transverse and general in $\bar{\mathcal{X}} \cap \bar{\mathcal{Y}}$. Then there are simultaneous generalizations $X'$ and $Y'$ of $X$ and $Y$ respectively such that $X' \cup Y'$ is a BN-curve with $\#(X \cap Y) = \#(X' \cap Y')$. Equivalently, in more precise language, write $B_1$ and $B_2$ for the bases of $\mathcal{X}$ and $\mathcal{Y}$ respectively. Then we are asserting the existence of an irreducible $B \subset B_1 \times B_2$ dominating both $B_1$ and $B_2$, such that any fiber $(X', Y')$ of $(\mathcal{X} \times \mathcal{Y}) \times_{(B_1 \times B_2)} B$ satisfies the given conclusion. \end{lm} \begin{proof} As $\bar{\mathcal{Y}}$ has codimension at most one, the intersection of any generalization $X'$ of $X$ with $\bar{\mathcal{X}} \cap \bar{\mathcal{Y}}$ contains a generalization of $X \cap Y$. Similarly, the intersection of any generalization $Y'$ of $Y$ with $\bar{\mathcal{X}} \cap \bar{\mathcal{Y}}$ contains a generalization of $X \cap Y$. The existence of simultaneous generalizations $X'$ and $Y'$ of $X$ and $Y$ respectively with $\#(X \cap Y) = \#(X' \cap Y')$ thus follows from the generality of $X \cap Y$ in $\bar{\mathcal{X}} \cap \bar{\mathcal{Y}}$. Moreover, since $H^1(N_{X \cup Y}) = 0$, the curve $X \cup Y$ is a smooth point of the corresponding Hilbert scheme; consequently, any generalization $X' \cup Y'$ of $X \cup Y$ is a BN-curve. \end{proof} \begin{lm} \label{addsub} Let $S \subset \pp^r$ and $T \subset \pp^r$ be sets of points such that the restriction maps \[H^0(\oo_{\pp^r}(m)) \to H^0(\oo_S(m)) \tand H^0(\oo_{\pp^r}(m)) \to H^0(\oo_{S \cup T}(m))\] are of maximal rank. Then, for every integer $0 \leq n \leq \# T$, there exists a subset $T' \subset T$ of cardinality $n$ such that \[H^0(\oo_{\pp^r}(m)) \to H^0(\oo_{S \cup T'}(m))\] is of maximal rank. In particular, taking $T = \pp^r(\cc) \smallsetminus S$, if $H^0(\oo_{\pp^r}(m)) \to H^0(\oo_S(m))$ is of maximal rank, then for $n$ general points $T' \subset \pp^r$, the map $H^0(\oo_{\pp^r}(m)) \to H^0(\oo_{S \cup T'}(m))$ is also of maximal rank. \end{lm} \begin{proof} We argue by induction on $n$. When $n = 0$, the conclusion holds by assumption. When $n = 1$, we note that the conclusion is obvious if $H^0(\oo_{\pp^r}(m)) \to H^0(\oo_S(m))$ is injective or if $H^0(\oo_{\pp^r}(m)) \to H^0(\oo_{S \cup T}(m))$ is surjective.
We may therefore suppose that the map $H^0(\oo_{\pp^r}(m)) \to H^0(\oo_S(m))$ is surjective but not injective, whose kernel contains a nonzero polynomial $f$; and that $H^0(\oo_{\pp^r}(m)) \to H^0(\oo_{S \cup T}(m))$ is injective. In particular, there is a point $p \in T$ with $f|_p \neq 0$. Taking $T' = \{p\}$, the map $H^0(\oo_{\pp^r}(m)) \to H^0(\oo_{S \cup T'}(m))$ is surjective by construction. For the inductive step, let $T'' \subset T$ be of size $n - 1$ such that $H^0(\oo_{\pp^r}(m)) \to H^0(\oo_{S \cup T''}(m))$ is of maximal rank. Applying our inductive hypothesis with $(S, T) = (S \cup T'', T \smallsetminus T'')$ completes the proof. \end{proof} \section{\boldmath The Case $r = 3$ \label{r}} In this section, we will prove Theorems~\ref{add3} and~\ref{add4}. As a consequence of Theorem~\ref{add3}, we will deduce that if $C_1, C_2, \ldots, C_n \subset \pp^3$ are independently general BN-curves, then \[H^0(\oo_H(m)) \to H^0(\oo_{(C_1 \cup C_2 \cup \cdots \cup C_n) \cap H}(m))\] is of maximal rank, unless $n = 1$, and $C_1$ is a canonically embedded curve of genus $4$, and $m = 2$. (In which case by inspection the above map fails to be of maximal rank.) \begin{proof}[Proof of Theorem~\ref{add3}] If $C$ is not a canonical curve, Theorem~1.5 of \cite{quadrics} states that $C \cap H$ is a general set of points, and so Lemma~\ref{addsub} yields the desired result. If $C$ is a canonical curve, Theorem~1.5 of \cite{quadrics} states that $C \cap H$ is a set of $6$ points which are general subject to the constraint that they lie on a conic. In particular, $C \cap H$ imposes indepedent conditions on $H^0(\oo_H(1))$ and on any fixed proper subspace of $H^0(\oo_H(2))$. Since $X$ is nonempty by assumption if $m = 2$, the kernel of $H^0(\oo_H(m)) \to H^0(\oo_X(m))$ is a proper subspace of $H^0(\oo_H(m))$ if $m = 2$. If $m \leq 2$, we therefore conclude that \[H^0(\oo_H(m)) \to H^0(\oo_{X \cup (C \cap H)}(m))\] is injective (so in particular of maximal rank as desired). If $m \geq 3$, we specialize the conic to the union of two lines, and the points of $C \cap H$ to consist of $2$ points on one line (which is just a set of $2$ general points), and $4$ points on the other. Using our assumption that $H^0(\oo_H(m)) \to H^0(\oo_X(m))$ is of maximal rank and applying Lemma~\ref{addsub} twice, it suffices to show \[H^0(\oo_H(m)) \to H^0(\oo_{X \cup Y}(m))\] is of maximal rank, where $Y$ is a set of $\max(4, \dim \ker H^0(\oo_\Lambda(m)) \to H^0(\oo_{X \cap \Lambda}(m)))$ points which are general subject to the condition that they lie on a line $\Lambda$. For this, we use the exact sequence \[0 \to \mathcal{I}_X(m - 1) \to \mathcal{I}_{X \cup Y}(m) \to \mathcal{I}_{Y/\Lambda}(m) \to 0.\] Note that $H^0(\mathcal{I}_{Y/\Lambda}(m)) = 0$, and if $\dim \ker H^0(\oo_\Lambda(m)) \to H^0(\oo_{X \cap \Lambda}(m)) \geq 4$, then we have $H^1(\mathcal{I}_{Y/\Lambda}(m)) = 0$ too. In particular, the associated long exact sequence in cohomology implies $H^0(\mathcal{I}_{X \cup Y}(m)) = 0$ provided that $H^0(\mathcal{I}_X(m - 1)) = 0$, and similarly for $H^1$ if $\dim \ker H^0(\oo_\Lambda(m)) \to H^0(\oo_{X \cap \Lambda}(m)) \geq 4$. Our assumption that $H^0(\oo_H(m - 1)) \to H^0(\oo_X(m - 1))$ is of maximal rank and injective if $\dim \ker H^0(\oo_\Lambda(m)) \to H^0(\oo_{X \cap \Lambda}(m)) < 4$ thus implies that $H^0(\oo_H(m)) \to H^0(\oo_{X \cup Y}(m))$ is of maximal rank, as desired. 
\end{proof} \begin{proof}[Proof of Theorem~\ref{add4}] If $(d, g) \notin \{(8, 5), (9, 6), (10, 7)\}$, Theorem~1.6 of \cite{quadrics} states that $C \cap H$ is a general set of points, and so Lemma~\ref{addsub} yields the desired result. If $(d, g) \in \{(8, 5), (9, 6), (10, 7)\}$, then Theorem~1.5 of \cite{quadrics} states that $C \cap H$ is a general complete intersection of $3$ quadrics, a general set of $9$ points on a complete intersection of $2$ quadrics, or a general set of $10$ points on a quadric, respectively. In particular, we may specialize $C \cap H$ to consist of $8$ points which are a general complete intersection of $3$ quadrics, together with $d - 8$ independently general points. Applying Lemma~\ref{addsub}, it suffices to show the result when $(d, g) = (8, 5)$ and $C \cap H$ is a general complete intersection of $3$ quadrics. In particular $C \cap H$ imposes indepedent conditions on $H^0(\oo_H(1))$ and on any fixed subspace of $H^0(\oo_H(2))$ of codimension at least $3$. Since any subscheme of $\pp^3$ of positive dimension or of degree at least $3$ imposes at least $3$ conditions on quadrics, if $m \leq 2$ we therefore conclude that \[H^0(\oo_H(m)) \to H^0(\oo_{X \cup (C \cap H)}(m))\] is injective (so in particular of maximal rank as desired). If $m \geq 3$, we claim we may further specialize $C \cap H$ to $8$ general points in a plane. To see this, take a general set $\Gamma$ of $8$ points in a plane. Then there is a smooth plane cubic curve $E$ containing $\Gamma$. Let $p \in E$ be a point so that $\oo_E(2)(2p) \simeq \oo_E(\Gamma)$. Choose a basis $\langle f_1, f_2, f_3 \rangle$ for $H^0(\oo_E(1))$, so that $E \subset \pp^2$ is embedded via $[f_1 : f_2 : f_3]$, and let $f_0$ be an extension to a basis of $H^0(\oo_E(1)(p))$. Then for $\lambda$ generic, the image of $\Gamma$ in $\pp^3$ under $[\lambda f_0 : f_1 ; f_2 : f_3]$ is a set of $8$ points on the image of $E$, with class twice the pullback to $E$ under this embedding of the hyperplane class in $\pp^3$ --- in particular, as $E$ is the complete intersection of two quadrics and is projectively normal, is a complete intersection of $3$ quadrics in $\pp^3$. Specializing $\lambda \to 0$, we obtain the set $\Gamma$ of $8$ general points in the plane that we started with. Using our assumption that $H^0(\oo_H(m)) \to H^0(\oo_X(m))$ is of maximal rank and applying Lemma~\ref{addsub}, it suffices to show \[H^0(\oo_H(m)) \to H^0(\oo_{X \cup Y}(m))\] is of maximal rank, where $Y$ is a set of $\max(8, \dim \ker H^0(\oo_\Lambda(m)) \to H^0(\oo_{X \cap \Lambda}(m)))$ points which are general subject to the condition that they lie on a plane $\Lambda$. Note that $H^0(\mathcal{I}_{Y/\Lambda}(m)) = 0$, and if $\dim \ker H^0(\oo_\Lambda(m)) \to H^0(\oo_{X \cap \Lambda}(m)) \geq 8$, then we have $H^1(\mathcal{I}_{Y/\Lambda}(m)) = 0$ too. In particular, the associated long exact sequence in cohomology implies $H^0(\mathcal{I}_{X \cup Y}(m)) = 0$ provided that $H^0(\mathcal{I}_X(m - 1)) = 0$, and similarly for $H^1$ if $\dim \ker H^0(\oo_\Lambda(m)) \to H^0(\oo_{X \cap \Lambda}(m)) \geq 8$. Our assumption that $H^0(\oo_H(m - 1)) \to H^0(\oo_X(m - 1))$ is of maximal rank and injective if $\dim \ker H^0(\oo_\Lambda(m)) \to H^0(\oo_{X \cap \Lambda}(m)) < 8$ thus implies that $H^0(\oo_H(m)) \to H^0(\oo_{X \cup Y}(m))$ is of maximal rank, as desired. 
\end{proof} \begin{cor} \label{cor:r} If $C_1, C_2, \ldots, C_n$ are independently general space BN-curves, $H \subset \pp^3$ is a general hyperplane, and $m$ is a positive integer, then the restriction map \[H^0(\oo_H(m)) \to H^0(\oo_{(C_1 \cup C_2 \cup \cdots \cup C_n) \cap H}(m))\] is of maximal rank, except if $m = 2$ and $n = 1$ and $C_1$ is a canonical curve. \end{cor} \begin{proof} Applying Theorem~\ref{add3}, we immediately see all cases of this statement by induction (starting with $n = 0$ as our base case), provided we check the case when $m = 3$ and $n = 2$ and $C_1$ and $C_2$ are both canonical curves. In this case, by Theorem~1.5 of~\cite{quadrics} $(C_1 \cup C_2) \cap H$ is a collection of $12$ points which are general subject to the condition that $6$ of them lie on conic $Q_1$ and the other $6$ lie on a conic $Q_2$; we want to show such the general such subscheme does not lie on any cubics. For this, we specialize one of the points on $Q_1$ to one of the points of intersection $Q_1 \cap Q_2$, and one the points on $Q_2$ to a differnt point of intersection $Q_1 \cap Q_2$. The resulting subscheme of degree $12$ meets $Q_1$ in $7$ points, but a cubic not containing $Q_1$ can only meet $Q_1$ in $6$ points by Bezout's theorem. Any such cubic must therefore contain $Q_1$, and symmetrically $Q_2$. But $Q_1 \cup Q_2$ is of degree $4$, so is contained in no cubics, as desired. \end{proof} \section{\boldmath The Case $m = 2$ \label{m}} In this section, we will prove the hyperplane maximal rank theorem when $m = 2$, and the curves $C_i$ are all nonspecial. We will begin by constructing reducible curves with the following lemma, to which we will apply the method of Hirschowitz outlined in the introduction. \begin{lm} \label{foo} Let $H' \subset \pp^r$ be a hyperplane, and $(d, g)$ be integers with $d \geq g + r$ and $g \geq 0$. Assume $d_1$ and $d_2$ are nonnegative integers with $d = d_1 + d_2$. Then there exist curves $X \subset \pp^r$ and $Y \subset H'$, of degrees $d_1$ and $d_2$ respectively, both of which are either nonspecial BN-curves, rational normal curves, or empty; with $X \cap Y$ general, such that $X \cup Y \subset \pp^r$ is a nondegenerate BN-curve of genus $g$ with $H^1(N_{X \cup Y}) = 0$. \end{lm} \begin{proof} We argue by induction on $d$ (which satisfies $d \geq r$). For the base case, we take $d = r$, which forces $g = 0$. We may then let $X$ and $Y$ be rational normal curves of degrees $d_1$ and $d_2$ respectively, meeting at one point; this gives a BN-curve with $H^1(N_{X \cup Y}) = 0$ by Lemma~\ref{glone}. For the inductive step, we assume $d \geq r + 1$; in particular, if $d_1 \leq 1$, then $d_2 \geq r$. Define $g' = \max(0, g - 1)$ and \[(d_1', d_2') = \begin{cases} (d_1 - 1, d_2) & \text{if $d_1 \geq 2$;} \\ (d_1, d_2 - 1) & \text{else.} \end{cases}\] By our inductive hypothesis, there exists curves $X' \subset \pp^r$ and $Y' \subset H'$, of degrees $d_1'$ and $d_2'$ respectively, both of which are either nonspecial BN-curves, rational normal curves, or empty; with $X' \cap Y'$ general, such that $X' \cup Y' \subset \pp^r$ is a nondegenerate BN-curve of genus $g'$ with $H^1(N_{X' \cup Y'}) = 0$. If $d_1 \geq 2$ and $g = 0$, we take $X = X' \cup L$ for $L$ a general $1$-secant line to $X'$, and $Y = Y'$; by Lemma~\ref{glone}, both $X$ and $X \cup Y$ are BN-curves, and $H^1(N_{X \cup Y}) = 0$. 
Similarly if $d_1 \leq 1$ and $g = 0$ (respectively $g \geq 1$), we take $X = X'$, and $Y = Y' \cup L$ for $L$ a general $1$-secant (respectively $2$-secant) line to $Y'$; by Lemma~\ref{glone}, both $Y$ and $X \cup Y$ are BN-curves, and $H^1(N_{X \cup Y}) = 0$. Finally, we consider the case $d_1 \geq 2$ and $g \geq 1$. If $X'$ is nondegenerate, we take $X = X' \cup L$ for $L$ a general $2$-secant line to $X$, and $Y = Y'$; by Lemma~\ref{glone}, both $X$ and $X \cup Y$ are BN-curves, and $H^1(N_{X \cup Y}) = 0$. If $X'$ is degenerate, then since $X' \cup Y'$ is nondegenerate by assumption, the general line $L$ meeting $X'$ and $Y'$ each once intersects $Y'$ in a point which is independantly general from $X' \cap Y'$. We then take we take $X = X' \cup L$, and $Y = Y'$; again by Lemma~\ref{glone}, both $X$ and $X \cup Y$ are BN-curves, and $H^1(N_{X \cup Y}) = 0$. \end{proof} \noindent Combining this with Lemma~\ref{obvious}, we obtain: \begin{cor} \label{mns} Let $C_1, C_2, \ldots, C_n \subset \pp^r$ be independantly general nonspecial BN-curves, and $H' \subset \pp^r$ be a hyperplane. Then we may specialize the $C_i$ to curves $X_i \cup Y_i$ such that $\sum \deg X_i$ and $\sum \deg Y_i$ are any two nonnegative integers adding up to $\sum \deg C_i$; and such that $X_1, X_2, \ldots, X_n \subset \pp^r$ and $Y_1, Y_2, \ldots, Y_n \subset H'$ are each sets of independantly general BN-curves or rational normal curves. \end{cor} \begin{prop} \label{m2} Let $C_1, C_2, \ldots, C_n \subset \pp^r$ be independantly general BN-curves, and $H \subset \pp^r$ be a general hyperplane. Assume that $C_i$ is nonspecial for all $i$. Then \[H^0(\oo_H(2)) \to H^0(\oo_{(C_1 \cup C_2 \cup \cdots \cup C_n) \cap H}(2))\] is of maximal rank. \end{prop} \begin{proof} We use induction on $r$; when $r = 3$, this is a consequence of Corollary~\ref{cor:r}. For the inductive step, write $d = \sum \deg C_i$, and let $(d_1, d_2)$ be nonnegative integers with $d = d_1 + d_2$, such that \begin{align*} d_1 \geq r \tand d_2 \geq \binom{r}{2} &\qquad \text{if} \quad d \geq \binom{r + 1}{2}, \\ d_1 \leq r \tand d_2 \leq \binom{r}{2} &\qquad \text{if} \quad d \leq \binom{r + 1}{2}. \end{align*} Pick a hyperplane $H'$ transverse to $H$. By Corollary~\ref{mns}, we may specialize the $C_i$ to curves $X_i \cup Y_i$ such that $\sum \deg X_i = d_1$ and $\sum \deg Y_i = d_2$; and such that \[X := X_1 \cup X_2 \cup \cdots \cup X_n \subset \pp^r \tand Y := Y_1 \cup Y_2 \cup \cdots \cup Y_n \subset H'\] are each unions of independantly general BN-curves or rational normal curves. Since the hyperplane section of a rational normal curve is a general set of points, our inductive hypothesis in combination with Lemma~\ref{addsub} implies \[H^0(\oo_{H \cap H'}(2)) \to H^0(\oo_{Y \cap H}(2))\] is of maximal rank. Define \[i = \begin{cases} 0 & \text{if}\ d \geq \binom{r + 1}{2}, \\ 1 & \text{if}\ d \leq \binom{r + 1}{2}; \end{cases}\] so we want to show \[H^i(\mathcal{I}_{(C_1 \cup C_2 \cup \cdots \cup C_n) \cap H / H}(2)) = 0,\] and know by induction that \[H^i(\mathcal{I}_{Y \cap H / (H \cap H')}(2)) = 0\] By direct examination, $H^i(\mathcal{I}_{X \cap H / H}(1)) = 0$. 
Consequently, we may use the exact sequence of sheaves \[0 \to \mathcal{I}_{X \cap H / H}(1) \to \mathcal{I}_{(C_1 \cup C_2 \cup \cdots \cup C_n) \cap H / H}(2) \to \mathcal{I}_{Y \cap H / (H \cap H')}(2) \to 0,\] which gives rise to the long exact sequence in cohomology \[\cdots \to H^i(\mathcal{I}_{X \cap H / H}(1)) \to H^i(\mathcal{I}_{(C_1 \cup C_2 \cup \cdots \cup C_n) \cap H / H}(2)) \to H^i(\mathcal{I}_{Y \cap H / (H \cap H')}(2)) \to \cdots,\] to conclude that $H^i(\mathcal{I}_{(C_1 \cup C_2 \cup \cdots \cup C_n) \cap H / H}(2)) = 0$ as desired. \end{proof} \subsection{\boldmath The Condition $d \geq g + r$} The condition $d \geq g + r$ is necessary; indeed, when $d < g + r$, the map will sometimes fail to be of maximal rank, as shown by the following proposition: \begin{prop} \label{fail} Let $C \subset \pp^r$ be any curve of degree $d$ and genus $g$, with $d < g + r$ and $4d - 2g < r(r + 3)$. Then the restriction map \[H^0(\oo_H(2)) \to H^0(\oo_{C \cap H}(2))\] fails to be of maximal rank. \end{prop} \begin{proof} We compute \[\dim H^0(\oo_{\pp^r}(2)) - \dim H^0(\oo_C(2)) = \binom{r + 2}{2} - (2d + 1 - g) = \frac{r(r + 3) - (4d - 2g)}{2} > 0,\] and so $C$ lies on a quadric. Moreover, we have \begin{align*} \dim H^0(\oo_{\pp^r}(2)) - \dim H^0(\oo_C(2)) &= \frac{r(r + 3) - (4d - 2g)}{2} \\ &= \binom{r + 1}{2} - d + (g + r - d) \\ &= \dim H^0(\oo_H(2)) - \dim H^0(\oo_{C \cap H}(2)) + (g + r - d) \\ &> \dim H^0(\oo_H(2)) - \dim H^0(\oo_{C \cap H}(2)). \end{align*} Now every quadric containing $C$ restricts to a quadric in $H$ containing $H \cap C$; as $C$ is nondegenerate, this restriction has no kernel. Consequently, there is a subspace of $H^0(\oo_H(2))$ contained in the kernel of $H^0(\oo_H(2)) \to H^0(\oo_{C \cap H}(2))$ whose dimension is positive and exceeds $\dim H^0(\oo_H(2)) - \dim H^0(\oo_{C \cap H}(2))$. In other words, $H^0(\oo_H(2)) \to H^0(\oo_{C \cap H}(2))$ is not of maximal rank. \end{proof} When $n = 1$, the cases in Proposition~\ref{fail} are the only cases in which the restriction map $H^0(\oo_H(2)) \to H^0(\oo_{C \cap H}(2))$ fails to be of maximal rank. Indeed, if $C$ is a general BN-curve with $d < g + r$, then $C$ is linearly normal, i.e.\ $H^1(\mathcal{I}_C(1))$ vanishes. Now consider the exact sequence of sheaves \[0 \to \mathcal{I}_C(1) \to \oo_{\pp^r}(1) \oplus \mathcal{I}_C(2) \to \mathcal{I}_{C \cap H}(2) \to 0;\] this induces a long exact sequence of cohomology groups: \[\cdots \to H^0(\oo_{\pp^r}(1)) \oplus H^0(\mathcal{I}_C(2)) \to H^0(\mathcal{I}_{C \cap H}(2)) \to H^1(\mathcal{I}_C(1)) \to \cdots\] It follows that $H^0(\oo_{\pp^r}(1)) \oplus H^0(\mathcal{I}_C(2)) \to H^0(\mathcal{I}_{C \cap H}(2))$ is surjective, i.e.\ every quadric $Q \subset H$ containing $C\cap H$ is the intersection with $H$ of a quadric $\tilde{Q}\subset \pp^r$ containing $C$. For $4d - 2g \geq r(r + 3)$, the maximal rank conjecture for quadrics (see \cite{qb} or \cite{jp}) implies that $C$ is not contained in any quadric, and consequently that $C \cap H$ is not contained in any quadric. \section{Construction of Reducible Curves \label{sec:glue}} In this section, which is the heart of the proof, we will construct examples of reducible BN-curves $X \cup Y$ where $Y \subset H'$. These reducible curves will be the essential ingredient in applying the inductive method of Hirschowitz in the following section to deduce the hyperplane maximal rank theorem. \begin{lm} \label{gluea} Let $H' \subset \pp^r$ be a hyperplane, and $(d, g)$ be integers with $\rho(d, g, r) \geq 0$ and $d \geq g + r - 2$.
Assume $d_1$ and $d_2$ are positive integers with $d = d_1 + d_2$, that additionally satisfy: \[d_1 \geq r + \max(0, g + r - d) \tand d_2 \geq r - 1.\] Then there exist nonspecial BN-curves $X \subset \pp^r$ and $Y \subset H'$ of degrees $d_1$ and $d_2$ respectively, with $X \cap Y$ general, such that $X \cup Y \subset \pp^r$ is a BN-curve of genus $g$ with $H^1(N_{X \cup Y}) = 0$. \end{lm} \begin{proof} We will argue by induction on $d$ and $\rho(d, g, r)$. Notice that our inequalities for $d_1$ and $d_2$ imply $d \geq 2r - 1$; for the base case, we consider when $d = 2r - 1$ or $\rho(d, g, r) = 0$. If $d = 2r - 1$, we take $X$ to be a rational normal curve of degree $r$, and $Y \subset H$ to be a rational normal of degree $r - 1$ that meets $X \cap H$ in $g + 1$ points. (Note that as $\rho(2r - 1, g, r) \geq 0$, we have $g + 1 \leq r$.) By inspection, $X \cup Y$ is of genus $g$; as $\aut H$ acts $(r + 1)$-transitively on points in linear general position, $X \cap Y$ is general. Moreover, $X \cup Y$ is a BN-curve with $H^1(N_{X \cup Y}) = 0$ by Lemma~\ref{glone}. If $\rho(d, g, r) = 0$ and $d \geq g + r - 2$, then either $(d, g) = (2r, r + 1)$ or $(d, g) = (3r, 2r + 2)$. In the case $(d, g) = (2r, r + 1)$, we take $X$ to be the union of a rational normal curve $R$ of degree $r$ with a $2$-secant line $L$, and $Y$ to be a rational normal curve of degree $r - 1$ passing through $X \cap H$. Again, by inspection $X \cup Y$ is of genus $r + 1$; as $\aut H$ acts $(r + 1)$-transitively on points in linear general position, $X \cap Y$ is general. To see that $X \cup Y$ is a BN-curve with $H^1(N_{X \cup Y}) = 0$, we apply Lemma~\ref{glone} to the decomposition $X \cup Y = (Y \cup L) \cup R$. Now suppose that $(d, g) = (3r, 2r + 2)$. If $d_2 = r - 1$, then we take $X = C \cup L$ to be the union of a canonical curve $C$ with a general $1$-secant line $L$. We take $Y$ to be the rational normal curve of degree $r - 1$ passing through $L \cap H'$ and through $r + 1$ points of $C \cap H'$. By inspection $X \cup Y$ is of genus $2r + 2$. To see that $X \cap Y$ is general, first note that since $\aut H$ acts $(r + 1)$-transitively on points in linear general position, $C \cap Y$ is general; moreover, $L \cap H$ is general with respect to $C$. To see that $X \cup Y$ is a BN-curve, we apply Lemma~\ref{glone} to the decomposition $X \cup Y = C \cup (L \cup Y)$, while noting that $L \cup Y$ is the specialization of a rational normal curve of degree $r$. Moreover, by Lemma~\ref{glonered}, we have $H^1(N_{X \cup Y}) = 0$. Otherwise, we have $d_2 \geq r$ and $d_1 \geq r + 2$; in this case we take $X = R_1 \cup L_0 \cup L_1 \cup N_1$ and $Y = R_2 \cup L_2 \cup N_2$, where: \begin{enumerate} \item $R_1$ is a general rational normal curve of degree $r$. \item $L_0$ is a general $2$-secant line to $R_1$. \item $R_2$ is a general rational normal curve of degree $r - 1$ passing through all $r + 1$ points of $(R_1 \cup L_0) \cap H$. \item \label{l1} $L_1$ is a general line meeting $R_1$ once and $L_0$ once. \item \label{l2} $L_2$ is a general $2$-secant line to $R_2$, passing through $L_1 \cap H$. \item $N_1$ is a general rational normal curve of degree $d_1 - r - 2$ meeting $L_1$ once and $R_1$ in $d_1 - r - 2$ points (we take $N_1 = \emptyset$ if $d_1 = r + 2$). \item $N_2$ is a general rational normal curve of degree $d_2 - r$ meeting $L_2$ once and $R_2$ in $d_2 - r$ points (we take $N_2 = \emptyset$ if $d_2 = r$). 
\end{enumerate} In order for this to make sense, we need conditions \ref{l1} and \ref{l2} to be consistent. The consistency of \ref{l1} and \ref{l2}, as well as the assertion that $X \cap Y$ is general, both follow from the following two claims: \begin{itemize} \item $L_1 \cap H$ is general relative to $(R_1 \cup L_0) \cap H$. This follows from $L_1 \cap R_1$ being general relative to $L_0$ and $R_1 \cap H$, which in turn follows from the existence of a rational normal curve of degree $r$ through a general collection of $r + 3$ points. \item The $2$-secant lines to $R_2$ sweep out $H$ as we vary $R_2$ over all rational normal curves of degree $r - 1$ passing through all $r + 1$ points of $(R_1 \cup L_0) \cap H$. This follows from the observation that $R_2$ sweeps out $H$, which again follows from the existence of a rational normal curve of degree $r - 1$ through a general collection of $r + 2$ points in $H'$. \end{itemize} By inspection, $X \cup Y$ is a curve of genus $g$ and $X$ and $Y$ are nonspecial. To show that $X \cup Y$ is a BN-curve, we apply Lemma~\ref{glone} to the decomposition \[X \cup Y = (L_0 \cup R_2) \cup R_1 \cup (L_1 \cup L_2 \cup N_1 \cup N_2).\] Similarly, to show $H^1(N_{X \cup Y}) = 0$, we apply Lemma~\ref{glone} and then Lemma~\ref{gltwo} to the decomposition \[X \cup Y = (L_0 \cup R_2) \cup R_1 \cup L_2 \cup N_1 \cup N_2 \cup L_1.\] To apply Lemma~\ref{gltwo}, we need to check that the tangent lines to $(L_0 \cup R_2) \cup R_1 \cup L_2 \cup N_1 \cup N_2$ at the points of intersection with $L_1$ do not all lie in a plane. Since $L_1$ intersects $L_0$, the only possible plane that could contain all $3$ tangents is $\overline{L_0L_1}$. But as this plane contains the two points of intersection of $L_0$ with $R_1$ and a plane can only intersect a rational normal curve at $3$ points with multiplicity, the tangent line to $R_1$ at $L_1 \cap R_1$ cannot be contained in this plane. Consequently, we may apply Lemma~\ref{gltwo} as claimed. For the inductive step, we have $d \geq 2r$ and $\rho(d, g, r) > 0$. We claim that these inequalities imply that \begin{equation} \label{nonspec} r + \max(0, g + r - d) + r - 1 < d = d_1 + d_2. \end{equation} Of course, \[r + \max(0, g + r - d) + r - 1 = \max(2r - 1, 3r - 1 + g - d);\] consequently, as $2r - 1 < 2r \leq d$, it suffices to show $3r - 1 + g - d < d$, or equivalently $g < 2d + 1 - 3r$. To see this, note that if $g \geq 2d + 1 - 3r$, then we would have \[-(r - 1)(d - 2r) = (r + 1)d - r(2d + 1 - 3r) - r(r + 1) \geq (r + 1)d - rg - r(r + 1) > 0,\] which is a contradiction; thus, $g < 2d + 1 - 3r$, and so \eqref{nonspec} holds. Consequently, there exists $(d_1', d_2')$ either equal to $(d_1 - 1, d_2)$ or to $(d_1, d_2 - 1)$, such that $d_1' \geq r + \max(0, g + r - d)$ and $d_2' \geq r - 1$. (Otherwise $d_1 - 1 < r + \max(0, g + r - d)$ and $d_2 - 1 < r - 1$, i.e.\ $d_1 \leq r + \max(0, g + r - d)$ and $d_2 \leq r - 1$; adding these contradicts \eqref{nonspec}.) If we define $g' = \max(0, g - 1)$, then $\max(0, g + r - d) = \max(0, g' + r - (d - 1))$. Thus by the inductive hypothesis, there are BN-curves $X' \subset \pp^r$ and $Y' \subset H'$ of degrees $d_1'$ and $d_2'$ respectively, with $X' \cap Y'$ general, such that $X' \cup Y' \subset \pp^r$ is a BN-curve of genus $g'$ with $H^1(N_{X' \cup Y'}) = 0$. 
To complete the inductive step, we take \[(X, Y) = \begin{cases} (X', Y' \cup L) & \text{if $d_1' = d_1$;} \\ (X' \cup L, Y') & \text{if $d_2' = d_2$;} \\ \end{cases} \twhere L = \begin{cases} \text{a $1$-secant line} & \text{if $g' = g$;} \\ \text{a $2$-secant line} & \text{if $g' \neq g$.} \end{cases} \] This satisfies the desired conclusion by Lemma~\ref{glone}. \end{proof} \begin{lm} \label{glue} Let $H' \subset \pp^r$ be a hyperplane, and $(d, g)$ be integers with $\rho(d, g, r) \geq 0$. Assume $d_1$ and $d_2$ are positive integers with $d = d_1 + d_2$, that additionally satisfy: \[d_1 \geq r + \max(0, g + r - d) \tand d_2 \geq r - 1.\] Then there exists BN-curves $X \subset \pp^r$ and $Y \subset H'$ of degrees $d_1$ and $d_2$ respectively, with $X \cap Y$ general, such that $X \cup Y \subset \pp^r$ is a BN-curve of genus $g$ with $H^1(N_{X \cup Y}) = 0$. Moreover, we can take $X$ to be nonspecial if \begin{equation} \label{ceil} d_2 \geq (r - 1) \cdot \left\lceil \frac{\max(0, g + r - d)}{2}\right\rceil. \end{equation} \end{lm} \begin{proof} We will argue by induction on $d$. When $d \geq g + r - 2$, we are done by Lemma~\ref{gluea}. Thus we may assume that $d < g + r - 2$. In particular, this implies that $d \geq 4r$, and that $\max(0, g + r - d) = g + r - d$. We claim that \begin{equation}\label{spec} r + \max(0, g + r - d) + r - 1 = 3r - 1 + g - d < d - 2(r - 2) = d_1 + d_2 - 2(r - 2). \end{equation} This is equivalent to $g < 2d + 5 - 5r$; to see this, note that if $g \geq 2d + 5 - 5r$, then \[-(r - 1)(d - 4r) - 2r = (r + 1)d - r(2d + 5 - 5r) - r(r + 1) \geq (r + 1)d - rg - r(r + 1) = 0,\] which is a contradiction; thus, $g < 2d + 5 - 5r$, and so \eqref{spec} holds. Consequently, there exists $(d_1', d_2')$ either equal to $(d_1 - 1, d_2 - r + 1)$ or to $(d_1 - r, d_2)$, such that \[d_1' \geq r + \max(0, g + r - d) - 1 = r + \max(0, (g - r - 1) + r - (d - r))\] and $d_2' \geq r - 1$. (Otherwise $d_1 - r < r + \max(0, g + r - d) - 1$ and $d_2 - r + 1 < r - 1$, i.e.\ $d_1 - (r - 2) \leq r + \max(0, g + r - d)$ and $d_2 - (r - 2) \leq r - 1$; adding these contradicts \eqref{spec}.) Thus by the inductive hypothesis, there are BN-curves $X' \subset \pp^r$ and $Y' \subset H'$ of degrees $d_1'$ and $d_2'$ respectively, with $X' \cap Y'$ general, such that $X' \cup Y' \subset \pp^r$ is a BN-curve of genus $g - r - 1$ with $H^1(N_{X' \cup Y'}) = 0$. To complete the inductive step, we take \[(X, Y) = \begin{cases} (X' \cup L, Y' \cup R_2) & \text{if $d_1' = d_1 - 1$;} \\ (X' \cup R_1, Y') & \text{if $d_2' = d_2$.} \\ \end{cases}\] Here, $R_1$ is a rational normal curve of degree $r$ that is $(r + 2)$-secant to $X'$, and $L$ is a $1$-secant line to $X'$, and $R_2$ is a rational normal curve of degree $r - 1$ intersecting $Y'$ in $r + 1$ points and passing through $L \cap H$. Tracing through the proof, we notice then when \eqref{ceil} is satisfied, we add a $1$-secant line to $X$ at least as many times as we add an $(r + 2)$-secant rational normal curve of degree $r$. In particular, when \eqref{ceil} holds, the curve $X$ we constructed is nonspecial. \end{proof} \begin{lm} \label{above} Let $H' \subset \pp^r$ be a hyperplane, and $(d, g)$ be integers with $\rho(d, g, r) \geq 0$. Write \[d_1 = 1 + \max(0, g + r - d) \tand d_2 = d - d_1.\] Then there exists a rational curve $X \subset \pp^r$, and a BN-curve $Y \subset H'$, of degrees $d_1$ and $d_2$ respectively, with $X \cap Y$ general, such that $X \cup Y \subset \pp^r$ is a BN-curve of genus $g$ with $H^1(N_{X \cup Y}) = 0$. 
\end{lm} \begin{proof} We argue by induction on $\rho(d, g, r)$. When $\rho(d, g, r) = 0$, then $(d, g, r) = (r(t + 1), (r + 1)t, r)$ for some nonnegative integer $t$; so for $\rho(d, g, r) = 0$ we may argue by induction on $t$. When $t = 0$, we let $X$ be a line, and $Y$ be a rational normal curve of degree $r - 1$ in $H'$, passing through $X \cap H'$; by Lemma~\ref{glone}, the union $X \cup Y$ is is a BN-curve with $H^1(N_{X \cup Y}) = 0$. For the inductive step, we let $X' \subset \pp^r$ and $Y' \subset H'$ be of degrees $d_1 - 1$ and $d_2 - r + 1$ respectively, with $X' \cap Y'$ general, such that $X' \cup Y' \subset \pp^r$ is a BN-curve of degree $d - r$ and genus $g - r - 1$ with $H^1(N_{X' \cup Y'}) = 0$. We then pick a general point $p \in H'$, and let \[X = X' \cup L \tand Y = Y' \cup R,\] where $L$ is a $1$-secant line to $X'$ through $p$, and $R$ is a rational normal curve of degree $r - 1$ which is $(r + 1)$-secant to $Y'$ and passes through $p$. Applying Lemmas~\ref{glone} and~\ref{glonered}, we conclude that the union $X \cup Y$ is is a BN-curve of genus $g$ with $H^1(N_{X \cup Y}) = 0$ as desired. For the inductive step, we let $X' \subset \pp^r$ and $Y' \subset H'$ be of degrees $d_1$ and $d_2 - 1$ respectively, with $X' \cap Y'$ general, such that $X' \cup Y' \subset \pp^r$ is a BN-curve of degree $d - 1$ and genus $g' := \max(0, g - 1)$ with $H^1(N_{X' \cup Y'}) = 0$. We then take \[X = X' \tand Y = Y' \cup L,\] where $L$ is a line which is $1$-secant to $Y'$ if $g = g'$ and $2$-secant otherwise. Applying Lemma~\ref{glone}, we conclude that the union $X \cup Y$ is is a BN-curve of genus $g$ with $H^1(N_{X \cup Y}) = 0$ as desired. \end{proof} \begin{lm} \label{key} Let $C_1, C_2, \ldots, C_n \subset \pp^r$ be independantly general BN-curves, of degrees $d_i$ and genera $g_i$, and $H, H' \subset \pp^r$ be transverse hyperplanes. Let $d'$ and $d''$ be nonnegative integers with \[d' + d'' = \sum d_i \tand d' \geq r - 1 + \sum [1 + \max(0, g_i + r - d_i)].\] Then we may specialize the $C_i$ to curves $C_i^\circ$ with $\sum \# (C_i^\circ \cap H \cap H') = d''$, so that \begin{gather*} C_1^\circ \cap H \cap H', C_2^\circ \cap H \cap H', \ldots, C_n^\circ \cap H \cap H' \subset H \cap H' \quad \text{and} \\ C_1^\circ \cap H \smallsetminus H', C_2^\circ \cap H \smallsetminus H', \ldots, C_n^\circ \cap H \smallsetminus H' \subset H \end{gather*} are sets of subsets of hyperplane sections of independantly general BN-curves. Moreover, we can assume the second of these sets is a set of subsets of hyperplane sections of independantly general nonspecial BN-curves if \begin{equation} \label{ceil} d'' \geq (r - 1) \cdot \sum \left\lceil \frac{\max(0, g_i + r - d_i)}{2}\right\rceil. \end{equation} \end{lm} \begin{proof} We first note that it suffices to consider the case where all $C_i$ are special. Indeed, if $C_1, C_2, \ldots, C_m$ are special, and $C_{m + 1}, C_{m + 2}, \ldots, C_n$ are nonspecial, then the result for $C_1, C_2, \ldots, C_n$ follows from the result for $C_1, C_2, \ldots, C_m$ combined with Corollary~\ref{mns} for $C_{m + 1}, C_{m + 2}, \ldots, C_n$. We may thus suppose $C_i$ is special for all $i$. In particular, for all $i$, \begin{equation} \label{dimin} d_i \geq d_i - [(r + 1) d_i - r g_i - r(r + 1)] = r + r(g_i + r - d_i). \end{equation} We now argue by induction on $n$. When $n = 1$, this follows from Lemmas~\ref{glue} and~\ref{obvious} if $d'' \geq r - 1$. 
If $d'' \leq r - 2$, then by the uniform position principle, the points of $C_1 \cap H$ are in linear general position. We may therefore apply an automorphism of $H$ so that exactly $d''$ of these points lie in $H'$. For the inductive step, note that Equation~\eqref{dimin} gives, in combination with $g_n + r - d_n \geq 1$, \[d_n - r - \max(0, g_n + r - d_n) \geq (r - 1) \cdot \left\lceil\frac{\max(0, g_n + r - d_n)}{2}\right \rceil \geq r - 1.\] In particular, so long as \[2r - 2 + \sum [1 + \max(0, g_i + r - d_i)] \leq d' \leq d - r + 1,\] we may combine our inductive hypothesis (for $C_1, \ldots, C_{n - 1}$) with Lemmas~\ref{glue} and~\ref{obvious} (for $C_n$) to deduce the result. If $d' \geq d - r + 2$, the result follows from our inductive hypothesis (for $C_1, \ldots, C_{n - 1}$); we do not specialize $C_n$. Finally, if $r - 1 + \sum [1 + \max(0, g_i + r - d_i)] \leq d' \leq 2r - 2 + \sum [1 + \max(0, g_i + r - d_i)]$, then upon rearrangement, \[d' - [1 + \max(0, g_n + r - d_n)] \leq 2r - 2 + \sum_{i < n} [1 + \max(0, g_n + r - d_n)],\] so by combining Lemmas~\ref{above} and~\ref{obvious}(for $C_n$) with our inductive hypothesis (for $C_1, \ldots, C_{n - 1}$), it suffices to show $2r - 2 + \sum_{i < n} [1 + \max(0, g_n + r - d_n)] \leq \sum_{i < n} d_i$, which follows in turn from \[d_1 \geq 2r - 1 + \max(0, g_1 + r - d_1),\] which in turn follows from Equation~\eqref{dimin} together with $g_n + r - d_n \geq 1$. \end{proof} \section{The Inductive Argument \label{sec:ind}} In this section, we combine the results of the previous three sections to inductively prove the hyperplane maximal rank theorem. This essentially boils down to manipulating inequalities to show that we can choose the integers $(d', d'')$ appearing in the previous section in the appropriate fashion. We begin by giving some bounds on the expressions appearing in Lemma~\ref{glue} that are easier to manipulate. \begin{lm} \label{ione} Let $d$, $g$, and $r$ be integers with $\rho(d, g, r) \geq 0$. Then \[1 + \max(0, g + r - d) \leq \frac{d}{r} \tand (r - 1) \cdot \left\lceil \frac{\max(0, g + r - d)}{2}\right\rceil \leq \frac{r - 1}{2r} \cdot d.\] \end{lm} \begin{proof} By assumption, \[r \cdot (g + r - d) \leq r \cdot (g + r - d) + (r + 1)d - rg - r(r + 1) = d - r\] \[\Rightarrow \max(0, g + r - d) \leq \frac{d - r}{r}.\] Substituting this in, we find \begin{align*} 1 + \max(0, g + r - d) &\leq 1 + \frac{d - r}{r} = \frac{d}{r} \\ \left\lceil \frac{\max(0, g + r - d)}{2}\right\rceil &\leq \frac{\frac{d - r}{r} + 1}{2} = \frac{r - 1}{2r} \cdot d. 
&& \qedhere \end{align*} \end{proof} \begin{lm} \label{itwo} Let $d$, $r$, and $m$ be integers with \[r \geq 4, \quad m \geq 3, \tand d \geq 2r + 2.\] Assume that \[d \geq \binom{m + r - 1}{m}, \quad \text{respectively} \quad d \leq \binom{m + r - 1}{m}.\] Then there are integers $d'$ and $d''$ such that $d = d' + d''$ and \[d' \geq \binom{m + r - 2}{m - 1} \tand d'' \geq \binom{m + r - 2}{m},\] \[\text{respectively} \quad d' \leq \binom{m + r - 2}{m - 1} \quad \text{and} \quad d'' \leq \binom{m + r - 2}{m},\] which moreover satisfy \[d' \geq r - 1 + \frac{d}{r} \tand d'' \geq r - 1.\] Additionally, if $m = 3$, we can replace $d'' \geq r - 1$ by the stronger assumption that \[d'' \geq \frac{r - 1}{2r} \cdot d.\] \end{lm} \begin{proof} First we consider the case where \[d = \binom{m + r - 1}{m} \geq \binom{r + 2}{3} \geq 2r + 2.\] In this case, we take \[d' = \binom{m + r - 2}{m - 1} \tand d'' = \binom{m + r - 2}{m}.\] To see that these satisfy the given conditions, first note that \[d'' \geq \binom{r + 1}{3} \geq r - 1.\] Next note that \[\binom{m + r - 1}{m} \geq \frac{r - 1}{\frac{m}{m + r - 1} - \frac{1}{r}};\] indeed, the LHS is an increasing function of $m$, the RHS is a decreasing function of $m$, and the inequality is obvious for $m = 3$. Rearranging, we get \[d' = \binom{m + r - 2}{m - 1} = \frac{m}{m + r - 1} \cdot \binom{m + r - 1}{m} \geq r - 1 + \frac{1}{r} \cdot \binom{m + r - 1}{m} = r - 1 + \frac{d}{r}.\] If $m = 3$, then \[d'' = \binom{r + 1}{3} \geq \frac{r - 1}{2r} \cdot \binom{r + 2}{3} = \frac{r - 1}{2r} \cdot d.\] In general, we induct upwards on $d$ in the $\geq$ case and downwards on $d$ in the $\leq$ case. To do this, we want to show that if $d'$ and $d''$ satisfy \[d' \geq r - 1 + \frac{d}{r} \tand d'' \geq \frac{r - 1}{2r} \cdot d \twhere d = d' + d'' \geq 2r + 2,\] then either $(d' - 1, d'')$ or $(d', d'' - 1)$, as well as either $(d' + 1, d'')$ or $(d', d'' + 1)$, satisfy the above two conditions. We note that \begin{align*} d' \geq r - 1 + \frac{d}{r} = r - 1 + \frac{d' + d''}{r} \quad &\Leftrightarrow \quad (r - 1) d' \geq r(r - 1) + d''. \\ d'' \geq \frac{r - 1}{2r} \cdot d = \frac{r - 1}{2r} \cdot (d' + d'') \quad &\Leftrightarrow \quad (r + 1) d'' \geq (r - 1) d'. \end{align*} Assume (to the contrary) that neither $(d' - 1, d'')$ nor $(d', d'' - 1)$ satisfy the conditions, respectively that neither $(d' + 1, d'')$ nor $(d', d'' + 1)$ satisfy the conditions. Then we must have \begin{align*} (r - 1) (d' - 1) < r(r - 1) + d'' &\tand (r + 1)(d'' - 1) < (r - 1) d', \\ \text{respectively} \quad (r - 1) d' < r(r - 1) + d'' + 1 &\tand (r + 1)d'' < (r - 1) (d' + 1). \end{align*} Equivalently, we must have \begin{align*} (r - 1) (d' - 1) + 1 \leq r(r - 1) + d'' &\tand (r + 1)(d'' - 1) + 1 \leq (r - 1) d', \\ \text{respectively} \quad (r - 1) d' \leq r(r - 1) + d'' &\tand (r + 1)d'' + 1 \leq (r - 1) (d' + 1). \end{align*} Adding twice the first equation to the second, we must have \begin{align*} 2(r - 1) (d' - 1) + 2 + (r + 1)(d'' - 1) + 1 &\leq 2r(r - 1) + 2d'' + (r - 1) d', \\ \text{respectively} \quad 2(r - 1) d' + (r + 1)d'' + 1 &\leq 2r(r - 1) + 2d'' + (r - 1) (d' + 1). \end{align*} Simplifying yields \[(r - 1)(d' + d'') \leq 2r^2 + r - 4, \quad \text{respectively} \quad (r - 1) (d' + d'') \leq 2r^2 - r - 2.\] In particular, \[d = d' + d'' \leq \frac{2r^2 + r - 4}{r - 1} = 2r + 3 - \frac{1}{r - 1} \quad \Rightarrow \quad d \leq 2r + 2.\] Consequently, we can reach via upward and downward induction every value of $d$ that is at least $2r + 2$. 
\end{proof} \begin{proof}[Proof of the Hyperplane Maximal Rank Theorem.] We use induction on $m$ and $r$. For $m = 2$, this is a consequence of Proposition~\ref{m2}; for $r = 3$, this is a consequence of Corollary~\ref{cor:r}. Note that if $\sum d_i \leq 2r - 1$, then all the $C_i$ are nonspecial and so $H^0(\oo_H(2)) \to H^0(\oo_{C \cap H}(2))$ is surjective; consequently, $H^0(\oo_H(m)) \to H^0(\oo_{C \cap H}(m))$ is surjective for all $m \geq 2$. Thus, we may suppose $\sum d_i \geq 2r$. For the inductive step, we define integers $(d', d'')$ as follows. If $\sum d_i \in \{2r, 2r + 1\}$, we take $(d', d'') = (r + 1, \sum d_i - r - 1)$. Otherwise, for $\sum d_i \geq 2r + 2$, we let $(d', d'')$ be as in Lemma~\ref{itwo}. Fix another hyperplane $H'$ transverse to $H$. By Lemma~\ref{key}, plus Lemma~\ref{ione} when $d \geq 2r + 2$, we may specialize the $C_i$ to curves $C_i^\circ$ with $\sum \# (C_i^\circ \cap H \cap H') = d''$, so that \begin{gather*} X := (C_1^\circ \cap H \cap H') \cup (C_2^\circ \cap H \cap H') \cup \cdots \cup (C_n^\circ \cap H \cap H') \subset H \cap H' \quad \text{and} \\ Y := (C_1^\circ \cap H \smallsetminus H') \cup (C_2^\circ \cap H \smallsetminus H') \cup \cdots \cup (C_n^\circ \cap H \smallsetminus H') \subset H \end{gather*} are unions of subsets of hyperplane sections of independantly general BN-curves. Moreover, if $m = 3$, then we can arrange for $X$ to be a union of subsets of hyperplane sections of independantly general nonspecial BN-curves. By our inductive hypothesis, Lemma~\ref{addsub}, and the uniform position principle, we know that the restriction maps \[H^0(\oo_H(m - 1)) \to H^0(\oo_Y(m - 1)) \tand H^0(\oo_{H \cap H'}(m)) \to H^0(\oo_X(m))\] are of maximal rank. Define \[i = \begin{cases} 0 & \text{if}\ \sum d_i \geq \binom{r + m - 1}{m}, \\ 1 & \text{if}\ \sum d_i \leq \binom{r + m - 1}{m}; \end{cases}\] so we want to show $H^i(\mathcal{I}_{(X \cup Y) \cap H}(m)) = 0$. The exact sequence of sheaves \[0 \to \mathcal{I}_{(X \cap H) / H}(m - 1) \to \mathcal{I}_{(X \cup Y) \cap H / H}(m) \to \mathcal{I}_{(Y \cap H)/(H \cap H')}(m) \to 0,\] gives rise to a long exact sequence in cohomology \[\cdots \to H^i(\mathcal{I}_{(X \cap H) / H}(m - 1)) \to H^i(\mathcal{I}_{(X \cup Y) \cap H / H}(m)) \to H^i(\mathcal{I}_{(Y \cap H)/(H \cap H')}(m)) \to \cdots.\] By the inductive hypothesis, we have $H^i(\mathcal{I}_{(X \cap H) / H}(m - 1)) = H^i(\mathcal{I}_{(Y \cap H)/(H \cap H')}(m)) = 0$. Consequently, $H^i(\mathcal{I}_{(X \cup Y) \cap H / H}(m)) = 0$, as desired. \end{proof} {} \end{document}
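The purely numerical inequalities used in the last two sections, for instance those of Lemma~\ref{ione}, are also easy to confirm by brute force. The following sketch is an illustration only and plays no role in the proofs; it checks the two bounds of Lemma~\ref{ione} over a sample range of $(d, g, r)$ with $\rho(d, g, r) \geq 0$.
\begin{verbatim}
# Brute-force check (illustration only) of the bounds in Lemma ione:
# whenever rho(d, g, r) >= 0 one has
#   1 + max(0, g + r - d) <= d / r   and
#   (r - 1) * ceil(max(0, g + r - d) / 2) <= (r - 1) * d / (2 * r).
from math import ceil

def rho(d, g, r):
    return (r + 1) * d - r * g - r * (r + 1)

for r in range(3, 9):
    for d in range(r, 61):
        for g in range(0, 61):
            if rho(d, g, r) < 0:
                continue
            excess = max(0, g + r - d)
            assert 1 + excess <= d / r
            assert (r - 1) * ceil(excess / 2) <= (r - 1) * d / (2 * r)
print("bounds hold on the sampled range")
\end{verbatim}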
\begin{document} \title{Nash equilibrium with Sugeno payoff} \author{Taras Radul} \maketitle Institute of Mathematics, Casimirus the Great University, Bydgoszcz, Poland; \newline Department of Mechanics and Mathematics, Lviv National University, Universytetska st.,1, 79000 Lviv, Ukraine. \newline e-mail: tarasradul\@ yahoo.co.uk \textbf{Key words and phrases:} Nash equilibrium, game in capacities, Sugeno integral \begin{abstract} This paper is devoted to Nash equilibria for games in capacities. Such games, with payoff expressed by the Choquet integral, were considered in \cite{KZ}, where the existence of a Nash equilibrium was proved. We also consider games in capacities, but with expected payoff expressed by the Sugeno integral. We prove the existence of a Nash equilibrium using categorical methods and abstract convexity theory. \end{abstract} \section{Introduction} The classical Nash equilibrium theory is based on fixed point theory and was developed within the framework of linear convexity. The mixed strategies of a player are probability (additive) measures on a set of pure strategies. However, interest in Nash equilibria in more general frameworks has been growing rapidly in recent decades. There are also results on Nash equilibria for non-linear convexities. For instance, Briec and Horvath proved in \cite{Ch} the existence of Nash equilibrium points for $B$-convexity and MaxPlus convexity. Let us remark that MaxPlus convexity is related to idempotent (Maslov) measures in the same sense as linear convexity is related to probability measures. Additive measures can be used only when the probabilities of all events considered in a game are known precisely, which is not the case in many modern economic models. Decision theory under uncertainty considers models in which the probabilities of states are either unknown or imprecisely specified. Gilboa \cite{Gil} and Schmeidler \cite{Sch} axiomatized expectations expressed by Choquet integrals with respect to non-additive measures, called capacities, as a formal approach to decision-making under uncertainty. Dow and Werlang \cite{DW} generalized this approach to two-player games in which the belief of each player about the choice of strategy by the other player is a capacity. This result was extended to games with an arbitrary finite number of players \cite{EK}. Kozhan and Zaricznyi introduced in \cite{KZ} a formal mathematical generalization of Dow and Werlang's concept of Nash equilibrium for a game where players are allowed not only to form non-additive beliefs about the opponents' decisions but also to play mixed non-additive strategies. The authors call such a game a game in capacities. There, the expected payoff function was defined using the Choquet integral. Kozhan and Zaricznyi proved an existence theorem using a linear convexity on the space of capacities which is preserved by the Choquet integral. The problem of the existence of Nash equilibria for other functors was also stated in \cite{KZ}. An alternative to the so-called Choquet expected utility model is qualitative decision theory. The corresponding expected utility is expressed by the Sugeno integral; see, for example, the papers \cite{DP}, \cite{DP1}, \cite{CH1}, \cite{CH} and others. The Sugeno integral chooses a median-type value of the utilities, which is the qualitative counterpart of the averaging operation performed by the Choquet integral. Following \cite{KZ}, we introduce in this paper the general mathematical concept of Nash equilibrium for a game in capacities. However, motivated by the qualitative approach, we consider an expected payoff function defined by the Sugeno integral.
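To make the contrast between the two integrals concrete, the following toy computation (ours; the finite set, the capacity and the utility values are arbitrary illustrative choices, and we use the classical forms of both integrals for a $[0,1]$-valued utility rather than the modified Sugeno integral introduced in the next section) compares the Choquet and Sugeno expectations of the same utility function under the same non-additive capacity.
\begin{verbatim}
# Toy comparison (illustrative only) of the Choquet (averaging) and Sugeno
# (median-like) expectations of a [0,1]-valued utility f under a capacity c
# on the three-point set X = {0, 1, 2}.
X = (0, 1, 2)

def c(A):                      # a monotone capacity with c(X) = 1 and c(empty) = 0
    return (len(set(A)) / len(X)) ** 2

f = {0: 0.9, 1: 0.4, 2: 0.7}   # utilities of the three outcomes

def choquet(f, c, X):
    # sum of (t_i - t_{i+1}) * c({f >= t_i}) over the decreasing values t_i of f
    ts = sorted(set(f.values()), reverse=True) + [0.0]
    return sum((ts[i] - ts[i + 1]) * c([x for x in X if f[x] >= ts[i]])
               for i in range(len(ts) - 1))

def sugeno(f, c, X):
    # max over t of min(t, c({f >= t})); the max is attained at a value of f
    return max(min(t, c([x for x in X if f[x] >= t])) for t in f.values())

print(choquet(f, c, X))        # approximately 0.556
print(sugeno(f, c, X))         # approximately 0.444
\end{verbatim}
Note how the Sugeno value is one of the threshold levels $\min(t,c(\{f\ge t\}))$ rather than a weighted average of the utilities.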
To prove the existence theorem in this concrete case, we consider a more general framework which unifies all the situations mentioned above and gives us a method to prove existence theorems for Nash equilibria in different contexts. We use categorical methods and abstract convexity theory. The notion of convexity considered in this paper is considerably broader than the classical one; specifically, it is not restricted to the context of linear spaces. Such convexities appeared in the process of studying different structures such as partially ordered sets, semilattices, lattices, superextensions, etc. We base our approach on the notion of topological convexity from \cite{vV}, where general convexity theory is covered from the axioms to applications in different areas. In particular, a Kakutani fixed point theorem for abstract convexity is proved there. The above-mentioned constructions of the spaces of probability measures, idempotent measures and capacities are functorial and can be completed to monads (see \cite{RZ}, \cite{Z} and \cite{NZ} for more details). A convexity structure on each $\mathbb F$-algebra, for any monad $\mathbb F$ in the category of compact Hausdorff spaces and continuous maps, was introduced in \cite{R1}; in particular, topological properties of monads with binary convexities were investigated there. In this paper we prove a counterpart of the Nash theorem for an abstract convexity; in particular, we consider binary convexities. We use these results to obtain a Nash theorem for the algebras of any L-monad with binary convexity. Since the capacity monad is an L-monad with binary convexity \cite{R2}, we obtain the corresponding result for capacities as a corollary. \section{Games in capacities} By $\mathsf{Comp}$ we denote the category of compact Hausdorff spaces (compacta) and continuous maps. For each compactum $X$ we denote by $C(X)$ the Banach space of all continuous functions on $X$ with the usual $\sup$-norm. In what follows, all spaces and maps are assumed to be in $\mathsf{Comp}$, except for $\mathbb R$ and maps belonging to the sets $C(X)$ with $X$ compact Hausdorff. We need the definition of a capacity on a compactum $X$; we follow the terminology of \cite{NZ}. A function $c$ which assigns to each closed subset $A$ of $X$ a real number $c(A)\in [0,1]$ is called an {\it upper-semicontinuous capacity} on $X$ if the following three properties hold for all closed subsets $F$ and $G$ of $X$:

1. $c(X)=1$, $c(\emptyset)=0$;

2. if $F\subset G$, then $c(F)\le c(G)$;

3. if $c(F)<a$, then there exists an open set $O\supset F$ such that $c(B)<a$ for each compactum $B\subset O$.

We extend a capacity $c$ to all open subsets $U\subset X$ by the formula $c(U)=\sup\{c(K)\mid K$ is a closed subset of $X$ such that $K\subset U\}$. It was proved in \cite{NZ} that the space $MX$ of all upper-semicontinuous capacities on a compactum $X$ is a compactum as well, when the topology on $MX$ is defined by the subbase consisting of all sets of the form $O_-(F,a)=\{c\in MX\mid c(F)<a\}$, where $F$ is a closed subset of $X$, $a\in [0,1]$, and $O_+(U,a)=\{c\in MX\mid c(U)>a\}$, where $U$ is an open subset of $X$, $a\in [0,1]$. Since all capacities we consider here are upper-semicontinuous, in the following we call elements of $MX$ simply capacities. A tensor product for capacities is considered in \cite{KZ}; it is a continuous map $\otimes:MX_1\times\dots\times MX_n\to M(X_1\times\dots\times X_n)$.
Note that, although the space of capacities contains the space of probability measures, the tensor product of capacities does not extend the tensor product of probability measures. By a result of Zhou \cite{Zh}, we can identify the set $MX$ with a set of functionals defined on the space $C(X)$ using the Choquet integral: for each $\mu\in MX$ its value on a function $f\in C(X)$ is defined by the formula $$\mu(f)=\int fd\mu=\int_0^\infty\mu\{x\in X\mid f(x)\ge t\}dt+\int^0_{-\infty}(\mu\{x\in X\mid f(x)\ge t\}-1)dt.$$ Let us recall the definition of Nash equilibrium. We consider an $n$-player game $f:X=\prod_{i=1}^n X_i\to\mathbb R^n$ with compact Hausdorff spaces of strategies $X_i$. The coordinate function $f_i:X\to \mathbb R$ is called the payoff function of the $i$-th player. For $x\in X$ and $t_i\in X_i$ we use the notation $(x;t_i)=(x_1,\dots,x_{i-1},t_i,x_{i+1},\dots,x_n)$. A point $x\in X$ is called a Nash equilibrium point if for each $i\in\{1,\dots,n\}$ and each $t_i\in X_i$ we have $f_i(x;t_i)\le f_i(x)$. Kozhan and Zarichnyj proved in \cite{KZ} the existence of a Nash equilibrium for the game in capacities $ef:\prod_{i=1}^n MX_i\to\mathbb R^n$ with expected payoff functions defined by $$ef_i(\mu_1,\dots,\mu_n)=\int_{X_1\times\dots\times X_n}f_id(\mu_1\otimes\dots\otimes\mu_n).$$ Let us remark that the Choquet functional representation of capacities preserves the natural linear convexity structure on $MX$, which was used in the proof of the existence of a Nash equilibrium in \cite{KZ}. However, this representation does not preserve the capacity monad structure. (The notion of a monad will be introduced in Section 4.) Another functional representation of capacities, using the Sugeno integral, was introduced in \cite{R2} (see also \cite{NR} for a similar result); this representation preserves the capacity monad structure. Let us describe it. Fix any increasing homeomorphism $\psi:(0,1)\to\mathbb R$. We additionally put $\psi(0)=-\infty$, $\psi(1)=+\infty$ and assume $-\infty<t<+\infty$ for each $t\in\mathbb R$. We consider for each $\mu\in MX$ its value on a function $f\in C(X)$ defined by the formula $$\mu(f)=\int_X^{Sug} fd\mu=\max\{t\in\mathbb R\mid \mu(f^{-1}([t,+\infty)))\ge\psi^{-1}(t)\}.$$ Let us remark that we use a modification of the Sugeno integral: the original Sugeno integral \cite{Su} ``ignores'' function values outside the interval $[0,1]$, and we introduce a ``correction'' homeomorphism $\psi$ to avoid this problem. Now, following \cite{KZ}, we consider a game in capacities $sf:\prod_{i=1}^n MX_i\to\mathbb R^n$, but, motivated by \cite{DP}, we consider Sugeno expected payoff functions defined by $$sf_i(\mu_1,\dots,\mu_n)=\int^{Sug}_{X_1\times\dots\times X_n}f_id(\mu_1\otimes\dots\otimes\mu_n).$$ The main goal of this paper is to prove the existence of a Nash equilibrium for such a game. Since the Sugeno integral does not preserve the linear convexity on $MX$, we cannot use the methods of \cite{KZ}. We will use another natural convexity structure, which has the binarity property (Helly number 2). We will obtain a general result for such convexities, which may be useful for investigating the existence of Nash equilibria for diverse constructions. Finally, we will obtain the result for capacities as a corollary of these general results. \section{Binary convexities} A family $\mathcal C$ of closed subsets of a compactum $X$ is called a {\it convexity} on $X$ if $\mathcal C$ is stable under intersection and contains $X$ and the empty set. Elements of $\mathcal C$ are called $\mathcal C$-convex (or simply convex).
Although we follow the general concept of abstract convexity from \cite{vV}, our definition is different: we consider only closed convex sets. Such a structure is called a closure structure in \cite{vV}. The whole family of convex sets in the sense of \cite{vV} can be obtained by taking unions of up-directed families. In what follows, we assume that each convexity contains all singletons. A convexity $\mathcal C$ on $X$ is called $T_2$ if for each pair of distinct points $x_1$, $x_2\in X$ there exist $S_1$, $S_2\in\mathcal C$ such that $S_1\cup S_2=X$, $x_1\notin S_2$ and $x_2\notin S_1$. Let us remark that if a convexity $\mathcal C$ on a compactum $X$ is $T_2$, then $\mathcal C$ is a subbase for the closed sets. A convexity $\mathcal C$ on $X$ is called $T_4$ (normal) if for each pair of disjoint $C_1$, $C_2\in \mathcal C$ there exist $S_1$, $S_2\in\mathcal C$ such that $S_1\cup S_2=X$, $C_1\cap S_2=\emptyset$ and $C_2\cap S_1=\emptyset$. Let $(X,\mathcal C)$, $(Y,\mathcal D)$ be two compacta with convexity structures. A continuous map $f:X\to Y$ is called a {\it CP-map} (convexity preserving map) if $f^{-1}(D)\in\mathcal C$ for each $D\in\mathcal D$; $f$ is called a {\it CC-map} (convex-to-convex map) if $f(C)\in\mathcal D$ for each $C\in\mathcal C$. By a multimap (set-valued map) of a set $X$ into a set $Y$ we mean a map $F:X\to 2^Y$. We use the notation $F:X\multimap Y$. If $X$ and $Y$ are topological spaces, then a multimap $F:X\multimap Y$ is called upper semi-continuous (USC) provided for each open set $O\subset Y$ the set $\{x\in X\mid F(x)\subset O\}$ is open in $X$. It is well known that a multimap with closed values into a compactum is USC iff its graph is closed in $X\times Y$. Let $F:X\multimap X$ be a multimap. We say that a point $x\in X$ is a fixed point of $F$ if $x\in F(x)$. The following counterpart of the Kakutani theorem for abstract convexity is a particular case of Theorem 3 from \cite{W} (it can also be obtained by combining Theorem 6.15 of Chapter IV and Theorem 4.10 of Chapter III of \cite{vV}). \begin{theorem}\label{KA} Let $\mathcal C$ be a normal convexity on a compactum $X$ such that all convex sets are connected, and let $F:X\multimap X$ be a USC multimap with values in $\mathcal C$. Then $F$ has a fixed point. \end{theorem} Let $\mathcal C$ be a family of subsets of a compactum $X$. We say that $\mathcal C$ is {\it linked} if any two of its elements have non-empty intersection. A convexity $\mathcal C$ is called {\it binary} if every linked subsystem of $\mathcal C$ has non-empty intersection (a short numerical illustration is given after Theorem \ref{KB} below). \begin{lemma}\label{BC} Let $\mathcal C$ be a $T_2$ binary convexity on a continuum $X$. Then $\mathcal C$ is normal and all convex sets are connected. \end{lemma} \begin{proof} The first assertion of the lemma is proved in Lemma 3.1 of \cite{RZ}. Let us prove the second one. Consider any $A\in\mathcal C$. A retraction $h_A:X\to A$ was defined in \cite{MV} by the formula $h_A(x)=\bigcap\{C\in\mathcal C\mid x\in C$ and $C\cap A\ne\emptyset\}$. Hence $A$, being the image of the connected space $X$ under the continuous map $h_A$, is connected, and the lemma is proved. \end{proof} Now we can reformulate Theorem \ref{KA} for binary convexities. \begin{theorem}\label{KB} Let $\mathcal C$ be a $T_2$ binary convexity on a continuum $X$ and let $F:X\multimap X$ be a USC multimap with values in $\mathcal C$. Then $F$ has a fixed point. \end{theorem}
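As a concrete low-dimensional illustration of the binarity property (ours, purely illustrative): for the convexity of closed subintervals of a compact interval of the real line, binarity is the classical one-dimensional Helly theorem, namely that every linked (pairwise intersecting) family of closed intervals has a common point. The following short sketch checks this on randomly generated families; the names and parameters are arbitrary.
\begin{verbatim}
# One-dimensional Helly property: a linked (pairwise intersecting) family of
# closed intervals has a common point, namely [max left endpoint, min right endpoint].
import random

random.seed(0)
found = 0
for _ in range(20000):
    ivs = [tuple(sorted((random.random(), random.random()))) for _ in range(4)]
    linked = all(max(a1, a2) <= min(b1, b2) for a1, b1 in ivs for a2, b2 in ivs)
    if linked:
        found += 1
        assert max(a for a, _ in ivs) <= min(b for _, b in ivs)
print(found, "linked families found; every one had a non-empty intersection")
\end{verbatim}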
Now, let $\mathcal C_i$ be a convexity on $X_i$. We say that the function $f_i:X\to\mathbb R$ is quasi-concave in the $i$-th coordinate if $(f_i^x)^{-1}([t;+\infty))\in\mathcal C_i$ for each $t\in\mathbb R$ and $x\in X$, where $f_i^x:X_i\to\mathbb R$ is the function defined by $f_i^x(t_i)=f_i(x;t_i)$ for $t_i\in X_i$. \begin{theorem}\label{NN} Let $f:X=\prod_{i=1}^n X_i\to\mathbb R^n$ be a game such that on each compactum $X_i$ there is a normal convexity $\mathcal C_i$ all of whose convex sets are connected, the function $f$ is continuous, and the function $f_i:X\to\mathbb R$ is quasi-concave in the $i$-th coordinate for each $i\in\{1,\dots,n\}$. Then there exists a Nash equilibrium point. \end{theorem} \begin{proof} Fix any $x\in X$. For each $i\in\{1,\dots,n\}$ consider the set $M_i^x\subset X_i$ defined by $M_i^x=\{t\in X_i\mid f_i^x(t)=\max_{s\in X_i}f_i^x(s)\}$. Then $M_i^x$ is a closed subset of $X_i$. Since the function $f_i:X\to\mathbb R$ is quasi-concave in the $i$-th coordinate, we have $M_i^x\in\mathcal C_i$. Define a multimap $F:X\multimap X$ by the formula $F(x)=\prod_{i=1}^n M_i^x$ for $x\in X$. Let us show that $F$ is USC. Consider any point $(x,y)\in X\times X$ such that $y\notin F(x)$. Then there exists $i\in\{1,\dots,n\}$ such that $f_i^x(y_i)<\max_{s\in X_i}f_i^x(s)$. Hence we can choose $t_i\in X_i$ such that $f_i(x;y_i)<f_i(x;t_i)$. Since $f_i$ is continuous, there exist a neighborhood $O_x$ of $x$ in $X$ and a neighborhood $O_{y_i}$ of $y_i$ in $X_i$ such that for each $x'\in O_x$ and $y_i'\in O_{y_i}$ we have $f_i(x';y_i')<f_i(x';t_i)$. Put $O_y=(\mathrm{pr}_i)^{-1}(O_{y_i})$. Then for each $(x',y')\in O_x\times O_y$ we have $y'\notin F(x')$. Thus the graph of $F$ is closed in $X\times X$, hence $F$ is upper semicontinuous. We consider on $X$ the family $\mathcal C=\{\prod_{i=1}^n C_i\mid C_i\in\mathcal C_i\}$. It is easy to see that $\mathcal C$ forms a normal convexity on the compactum $X$ such that all convex sets are connected. Then, by Theorem \ref{KA}, $F$ has a fixed point, which is a Nash equilibrium point. \end{proof} Now, the following corollary follows from the previous theorem and Lemma \ref{BC}. \begin{corollary}\label{NB} Let $f:X=\prod_{i=1}^n X_i\to\mathbb R^n$ be a game such that on each continuum $X_i$ there is a $T_2$ binary convexity $\mathcal C_i$, the function $f$ is continuous and the function $f_i:X\to\mathbb R$ is quasi-concave in the $i$-th coordinate for each $i\in\{1,\dots,n\}$. Then there exists a Nash equilibrium point. \end{corollary} \section{L-monads and their algebras} We apply Corollary \ref{NB} to study games defined on algebras of binary L-monads. We recall some categorical notions (see \cite{Mc} and \cite{TZ} for more details); we define them only for the category $\mathsf{Comp}$. Let $F:\mathsf{Comp}\to\mathsf{Comp}$ be a covariant functor. A functor $F$ is called continuous if it preserves the limits of inverse systems. In what follows, all functors are assumed to preserve monomorphisms, epimorphisms and the weight of infinite compacta. We also assume that our functors are continuous. For a functor $F$ which preserves monomorphisms and an embedding $i:A\to X$, we shall identify the space $FA$ with the subspace $F(i)(FA)\subset FX$. A {\it monad} $\mathbb T=(T,\eta,\mu)$ in the category $\mathsf{Comp}$ consists of an endofunctor $T:{\mathsf{Comp}}\to{\mathsf{Comp}}$ and natural transformations $\eta:\mathrm{Id}_{\mathsf{Comp}}\to T$ (unity), $\mu:T^2\to T$ (multiplication) satisfying the relations $\mu\circ T\eta=\mu\circ\eta T=${\bf 1}$_T$ and $\mu\circ\mu T=\mu\circ T\mu$.
(By $\mathrm{Id}_{\mathsf{Comp}}$ we denote the identity functor on the category ${\mathsf{Comp}}$, and $T^2$ is the superposition $T\circ T$ of $T$.) Let $\mathbb T=(T,\eta,\mu)$ be a monad in the category ${\mathsf{Comp}}$. The pair $(X,\xi)$, where $\xi:TX\to X$ is a map, is called a $\mathbb T$-{\it algebra} if $\xi\circ\eta X=\mathrm{id}_X$ and $\xi\circ\mu X=\xi\circ T\xi$. Let $(X,\xi)$, $(Y,\xi')$ be two $\mathbb T$-algebras. A map $f:X\to Y$ is called a morphism of $\mathbb T$-algebras if $\xi'\circ Tf=f\circ\xi$. Let $(X,\xi)$ be an $\mathbb F$-algebra for a monad $\mathbb F=(F,\eta,\mu)$ and let $A$ be a closed subset of $X$. Denote by $f_A$ the quotient map $f_A:X\to X/A$ (the equivalence classes are the singletons $\{x\}$ for $x\in X\setminus A$ and the set $A$) and put $a=f_A(A)$. Denote $A^+=(Ff_A)^{-1}(\eta(X/A)(a))$. Define the $\mathbb F$-{\it convex hull} $C_\mathbb F(A)$ of $A$ by $C_\mathbb F(A)=\xi(A^+)$; put additionally $C_\mathbb F(\emptyset)=\emptyset$. We define the family $\mathcal C_\mathbb F(X,\xi)=\{A\subset X\mid A$ is closed and $C_\mathbb F(A)=A\}$. Elements of the family $\mathcal C_\mathbb F(X,\xi)$ are called $\mathbb F$-{\it convex}. It was shown in \cite{R1} that the family $\mathcal C_\mathbb F(X,\xi)$ forms a convexity on $X$; moreover, each morphism of $\mathbb F$-algebras is a $CP$-map. Let us remark that one-point sets are always $\mathbb F$-convex. We do not know whether the convexities we have introduced are always $T_2$; we consider in this section a class of monads generating convexities which have this property. The class of $L$-monads was introduced in \cite{R1}, and it contains many well-known monads in $\mathsf{Comp}$ such as the superextension, hyperspace, probability measure, capacity and idempotent measure monads. For $\phi\in C(X)$, by $\max\phi$ ($\min\phi$) we denote $\max_{x\in X}\phi(x)$ ($\min_{x\in X}\phi(x)$), and $\pi_\phi$ or $\pi(\phi)$ denotes the corresponding projection $\pi_\phi:\prod_{\psi\in C(X)}[\min\psi,\max\psi]\to[\min\phi,\max\phi]$. It was shown in \cite{R3} that for each L-monad $\mathbb F=(F,\eta,\mu)$ we can consider $FX$ as a subset of the product $\prod_{\phi\in C(X)}[\min\phi,\max\phi]$; moreover, we have $\pi_\phi\circ \eta X=\phi$, $\pi_\phi\circ \mu X=\pi(\pi_\phi)$ for all $\phi\in C(X)$ and $\pi_\psi\circ Ff=\pi_{\psi\circ f}$ for all $\psi\in C(Y)$, $f:X\to Y$. These properties can be taken as a definition of $L$-monads \cite{R3}. We say that an L-monad $\mathbb F=(F,\eta,\mu)$ weakly preserves preimages if for each map $f:X\to Y$ and each closed subset $A\subset Y$ we have $\pi_\phi(\nu)\in[\min\phi(f^{-1}(A)),$ $\max\phi(f^{-1}(A))]$ for each $\nu\in (Ff)^{-1}(FA)$ and $\phi\in C(X)$ \cite{R1}. It was shown in \cite{R1} that for each L-monad $\mathbb F$ which weakly preserves preimages the convexity $\mathcal C_\mathbb F(FX,\mu X)$ is $T_2$. \begin{lemma}\label{CC} Let $(X,\xi)$ be an $\mathbb F$-algebra for an $L$-monad $\mathbb F=(F,\eta,\mu)$ which weakly preserves preimages. Then the map $\xi:FX\to X$ is a CC-map for the convexities $\mathcal C_\mathbb F(FX,\mu)$ and $\mathcal C_\mathbb F(X,\xi)$, respectively. \end{lemma} \begin{proof} Consider any $B\in \mathcal C_\mathbb F(FX,\mu)$. We should show that $\xi(B)\in\mathcal C_\mathbb F(X,\xi)$. Denote by $\chi:X\to X/\xi(B)$ the quotient map and put $b=\chi(\xi(B))$. Consider any $\mathcal A\in FX$ such that $F\chi(\mathcal A)=\eta(X/\xi(B))(b)$. We should show that $\xi(\mathcal A)\in\xi(B)$. Consider the quotient map $\chi_1:FX\to FX/B$ and put $b_1=\chi_1(B)$.
There exists a (unique) continuous map $\xi':FX/B\to X/\xi(B)$ such that $\xi'(b_1)=b$ and $\xi'\circ \chi_1=\chi\circ \xi$. Put $\mathcal D=F(\eta X)(\mathcal A)$. We have $F\xi(\mathcal D)=\mathcal A$, hence $F\xi'\circ F\chi_1(\mathcal D)=F\chi\circ F\xi(\mathcal D)=F\chi(\mathcal A)=\eta(X/\xi(B))(b)$. Since $F$ weakly preserves preimages, we have $F\chi_1(\mathcal D)=\eta(FX/B)(b_1)$. Since $B\in \mathcal C_\mathbb F(FX,\mu)$, we have $\mu X(\mathcal D)\in B$. Hence $\xi(\mathcal A)=\xi\circ F\xi(\mathcal D)=\xi\circ \mu X(\mathcal D)\in\xi(B)$. The lemma is proved. \end{proof} We call a monad $\mathbb F$ binary if $\mathcal C_\mathbb F(X,\xi)$ is binary for each $\mathbb F$-algebra $(X,\xi)$. \begin{lemma}\label{BT} Let $\mathbb F=(F,\eta,\mu)$ be a binary L-monad which weakly preserves preimages. Then for each $\mathbb F$-algebra $(X,\xi)$ the convexity $\mathcal C_\mathbb F(X,\xi)$ is $T_2$. \end{lemma} \begin{proof} Consider any two distinct points $x$, $y\in X$. Since $\xi$ is a morphism of the $\mathbb F$-algebras $(FX,\mu X)$ and $(X,\xi)$, it is a CP-map and we have $\xi^{-1}(x)$, $\xi^{-1}(y)\in \mathcal C_\mathbb F(FX,\mu)$. Since $\mathcal C_\mathbb F(FX,\mu)$ is $T_2$ and binary, it is normal by Lemma \ref{BC}. Hence we can choose $L_1$, $L_2\in \mathcal C_\mathbb F(FX,\mu)$ such that $L_1\cup L_2=FX$, $L_1\cap\xi^{-1}(x)=\emptyset$ and $L_2\cap\xi^{-1}(y)=\emptyset$. Then we have $\xi(L_1)$, $\xi(L_2)\in\mathcal C_\mathbb F(X,\xi)$ by Lemma \ref{CC}, $\xi(L_1)\cup\xi(L_2)=X$, $x\notin \xi(L_1)$ and $y\notin \xi(L_2)$. The lemma is proved. \end{proof} Consider any L-monad $\mathbb F=(F,\eta,\mu)$. It is easy to check that for each segment $[a,b]\subset\mathbb R$ the pair $([a,b],\xi_{[a,b]})$ is an $\mathbb F$-algebra, where $\xi_{[a,b]}=\pi_{\mathrm{id}_{[a,b]}}$. Consider a game $f:X=\prod_{i=1}^n X_i\to\mathbb R^n$ where for each compactum $X_i$ there exists a map $\xi_i:FX_i\to X_i$ such that the pair $(X_i,\xi_i)$ is an $\mathbb F$-algebra. We say that the function $f_i:X\to\mathbb R$ is an $\mathbb F$-algebra morphism in the $i$-th coordinate if for each $x\in X$ the function $f_i^x:X_i\to\mathbb R$ is a morphism of the $\mathbb F$-algebras $(X_i,\xi_i)$ and $([\min f_i^x,\max f_i^x],\xi_{[\min f_i^x,\max f_i^x]})$. \begin{theorem}\label{NA} Let $\mathbb F=(F,\eta,\mu)$ be a binary L-monad which weakly preserves preimages. Let $f:X=\prod_{i=1}^n X_i\to\mathbb R^n$ be a game such that on each continuum $X_i$ there is a map $\xi_i:FX_i\to X_i$ making $(X_i,\xi_i)$ an $\mathbb F$-algebra, the function $f$ is continuous, and the function $f_i:X\to\mathbb R$ is an $\mathbb F$-algebra morphism in the $i$-th coordinate for each $i\in\{1,\dots,n\}$. Then there exists a Nash equilibrium point. \end{theorem} \begin{proof} Since for each $x\in X$ the function $f_i^x:X_i\to\mathbb R$ is a morphism of $\mathbb F$-algebras, it is a CP-map, hence $f_i$ is quasi-concave in the $i$-th coordinate. Now our theorem follows from Lemma \ref{BT} and Corollary \ref{NB}. \end{proof} \section{Pure and mixed strategies} Let $\mathbb F=(F,\eta,\mu)$ be a binary L-monad which weakly preserves preimages. We consider Nash equilibria for the free algebras $(FX,\mu X)$ in this section. Points of a compactum $X$ are called pure strategies, and points of $FX$ are called mixed strategies. This approach is a natural generalization of the model from \cite{KZ}, where spaces of capacities $MX$ were considered. We consider a game $u:X=\prod_{i=1}^n X_i\to\mathbb R^n$ with compact Hausdorff spaces of pure strategies $X_1,\dots,X_n$ and continuous payoff functions $u_i:\prod_{i=1}^n X_i\to\mathbb R$.
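Before passing to mixed strategies, the following toy computation (ours; the $2\times 2$ payoff matrices are arbitrary choices and the finiteness of the strategy sets is a simplifying assumption) illustrates the Nash condition $u_i(x;t_i)\le u_i(x)$ by brute force in the simplest case of two players with two pure strategies each. In general a pure-strategy equilibrium need not exist, which is one motivation for passing to mixed (here, capacity-valued) strategies.
\begin{verbatim}
# Brute-force search for pure-strategy Nash equilibria in a finite 2-player game.
import itertools
import numpy as np

A = np.array([[3., 0.], [5., 1.]])   # payoff of player 1; rows are player 1's strategies
B = np.array([[3., 5.], [0., 1.]])   # payoff of player 2; columns are player 2's strategies

def is_nash(i, j):
    # (i, j) is an equilibrium iff no unilateral deviation improves either payoff
    return A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max()

print([(i, j) for i, j in itertools.product(range(2), range(2)) if is_nash(i, j)])
# prints [(1, 1)] for this prisoner's-dilemma-type game
\end{verbatim}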
It is well known how to construct the tensor product of two (or finitely many) probability measures. This operation was generalized in \cite{TZ} to an arbitrary monad in the category $\mathsf{Comp}$. More precisely, for all compacta $X_1,\dots,X_n$ a continuous map $\otimes:\prod_{i=1}^n F X_i\to F(\prod_{i=1}^n X_i)$ was constructed there which is natural in each argument and satisfies $F(p_i)\circ\otimes= \mathrm{pr}_i$ for each $i$, where $p_i:\prod_{j=1}^nX_j\to X_i$ and $\mathrm{pr}_i:\prod_{j=1}^n FX_j\to FX_i$ are the natural projections. We define the payoff functions $eu_i:FX_1\times\dots\times FX_n\to\mathbb R$ by the formula $eu_i=\pi_{u_i}\circ\otimes$. Evidently, $eu_i$ is continuous. Consider any $t\in\mathbb R$ and $\nu\in FX_1\times\dots\times FX_n$. Then we have $(eu_i^\nu)^{-1}[t;+\infty)=\{\mu_i\in FX_i\mid eu_i(\nu;\mu_i)\ge t\}=l^{-1}((\pi_{u_i}\circ\otimes)^{-1}[t;+\infty)\cap\{\nu_1\}\times\dots\times FX_i\times\dots\times\{\nu_n\})$, where $l:FX_i\to\prod_{j=1}^n FX_j$ is the embedding defined by $l(\mu_i)=(\nu;\mu_i)$ for $\mu_i\in FX_i$. A structure of $\mathbb F$-algebra on the product $\prod_{j=1}^n FX_j$ of the $\mathbb F$-algebras $(FX_i,\mu X_i)$ is given by the map $\xi:F(\prod_{i=1}^n FX_i)\to\prod_{i=1}^n FX_i$ defined by the formula $\xi=(\mu X_i\circ F(p_i))_{i=1}^n$. It is easy to check that a product of sets convex in the $FX_i$ is convex in $\prod_{i=1}^n FX_i$. Since $\mathbb F$ weakly preserves preimages, $(\pi_{u_i}\circ\otimes)^{-1}[t;+\infty)$ is convex in $\prod_{i=1}^n FX_i$. It is easy to see that $l$ is a CP-map, hence the map $eu_i$ is quasi-concave in the $i$-th coordinate. Hence, using Corollary \ref{NB}, we obtain the following theorem. \begin{theorem} The game with payoff functions $eu_i$ has a Nash equilibrium point provided each $FX_i$ is connected. \end{theorem} Now, consider a game in capacities with Sugeno payoff functions as introduced at the beginning of the paper. The assignment $M$ extends to the capacity functor $M$ in the category of compacta if the map $Mf:MX\to MY$ for a continuous map of compacta $f:X \to Y$ is defined by the formula $Mf(c)(F)=c(f^{-1}(F))$, where $c\in MX$ and $F$ is a closed subset of $Y$. This functor was completed to the monad $\mathbb M=(M,\eta,\mu)$ in \cite{NZ}, where the components of the natural transformations are defined as follows: $\eta X(x)(F)=1$ if $x\in F$ and $\eta X(x)(F)=0$ if $x\notin F$; $\mu X(\mathcal C)(F)=\sup\{t\in[0,1]\mid \mathcal C(\{c\in MX\mid c(F)\ge t\})\ge t\}$, where $x\in X$, $F$ is a closed subset of $X$ and $\mathcal C\in M^2(X)$. Since the capacity monad $\mathbb M$ is a binary L-monad which weakly preserves preimages, with $\pi_\varphi(\nu)=\int_X^{Sug} \varphi\, d\nu$ for any $\nu\in MX$ and $\varphi\in C(X)$ \cite{R2}, we obtain the following as a consequence. \begin{corollary}\label{NC} A game in capacities $sf:\prod_{i=1}^n MX_i\to\mathbb R^n$ with Sugeno payoff functions has a Nash equilibrium point. \end{corollary} \end{document}
\begin{document} \title[operator multipliers] {Bilinear operator multipliers into the trace class} \author[C. Le Merdy]{Christian Le Merdy} \email{[email protected]} \address{Laboratoire de Math\'ematiques de Besan\c con, UMR 6623, CNRS, Universit\'e Bourgogne Franche-Comt\'e, 25030 Besan\c{c}on Cedex, France} \author[I. Todorov]{Ivan G. Todorov} \email{[email protected]} \address{Mathematical Sciences Research Center, Queen's University Belfast, Belfast BT7 1NN, United Kingdom} \author[L. Turowska]{Lyudmila Turowska} \email{[email protected]} \address{Department of Mathematical Sciences, Chalmers University of Technology and the University of Gothenburg, Gothenburg SE-412 96, Sweden} \date{\today} \maketitle \begin{abstract} Given Hilbert spaces $H_1,H_2,H_3$, we consider bilinear maps defined on the cartesian product $S^2(H_2,H_3)\times S^2(H_1,H_2)$ of spaces of Hilbert-Schmidt operators and valued in either the space $B(H_1,H_3)$ of bounded operators, or in the space $S^1(H_1,H_3)$ of trace class operators. We introduce modular properties of such maps with respect to the commutants of von Neumann algebras $M_i\subset B(H_i)$, $i=1,2,3$, as well as an appropriate notion of complete boundedness for such maps. We characterize completely bounded module maps $u\colon S^2(H_2,H_3)\times S^2(H_1,H_2)\to B(H_1,H_3)$ by the membership of a natural symbol of $u$ to the von Neumann algebra tensor product $M_1\overline{\otimes} M_2^{op}\overline{\otimes} M_3$. In the case when $M_2$ is injective, we characterize completely bounded module maps $u\colon S^2(H_2,H_3)\times S^2(H_1,H_2)\to S^1(H_1,H_3)$ by a weak factorization property, which extends to the bilinear setting a famous description of bimodule linear mappings going back to Haagerup, Effros-Kishimoto, Smith and Blecher-Smith. We make crucial use of a theorem of Sinclair-Smith on completely bounded bilinear maps valued in an injective von Neumann algebra, and provide a new proof of it, based on Hilbert $C^*$-modules. \end{abstract} \vskip 1cm \noindent {\it 2000 Mathematics Subject Classification:} 46L07, 46B28, 47D25, 46L08. \vskip 1cm \section{Introduction}\label{1Intro} Factorization properties of completely bounded maps have played a prominent role in the development of operator spaces \cite{BLM, ER, P2} and in their applications to Hilbertian operator theory, in particular for the study of special classes of operators: Schur multipliers, Fourier multipliers on either commutative or non commutative groups, module maps, decomposable maps, etc. The main purpose of this paper is to establish new such factorization properties for some classes of bilinear maps defined on the cartesian product $S^2(H_2,H_3)\times S^2(H_1,H_2)$ of two spaces of Hilbert-Schmidt operators and valued in their ``product space", namely the space $S^1(H_1,H_3)$ of trace class operators. This line of investigation is motivated by the recent characterization of bounded bilinear Schur multipliers $S^2\times S^2\to S^1$ proved in \cite{CLPST, CLS}, by various advances on multidimensional operator multipliers, see \cite{KS, JTT}, and by new developments on multiple operator integrals, see e.g. \cite{AP} and the references therein. Let $H,K$ be Hilbert spaces and let $M,N$ be von Neumann algebras acting on $H$ and $K$, respectively. Let $CB_{(N',M')}(S^1(H,K))$ denote the Banach space of all $(N',M')$-bimodule completely bounded maps on $S^1(H,K)$, equipped with the completely bounded norm $\cbnorm{\, \cdotp}$. 
This space is characterized by the following factorization property. \begin{theorem}\label{Haag} A bounded map $u\colon S^1(H,K)\to S^1(H,K)$ belongs to $CB_{(N',M')}(S^1(H,K))$ and $\cbnorm{u}\leq 1$ if and only if there exist an index set $I$, a family $(a_i)_{i\in I}$ of elements of $M$ belonging to the row space $R_I^w(M)$ and a family $(b_i)_{i\in I}$ of elements of $N$ belonging to the column space $C_I^w(N)$ such that $$ u(z)= \sum_{i\in I} b_i z a_i,\qquad z\in S^1(H,K), $$ and $\norm{(a_i)_i}_{R_I^w} \norm{(b_i)_i}_{C_I^w} \leq 1$. \end{theorem} We refer to Section \ref{5SS} for the precise definitions of the spaces $R_I^w(M)$ and $C_I^w(N)$. The above theorem is a reformulation of \cite[Theorem 2.2]{BS}, a fundamental factorization result going back to \cite{EK, H} (see also \cite{Sm}). Indeed let $B(K,H)$ (resp. $S^\infty(K,H)$) denote the space of all bounded operators (resp. all compact operators) from $K$ into $H$. Then by standard operator space duality, the adjoint mapping $u\mapsto u^*$ induces an isometric isomorphism between $CB_{(N',M')}(S^1(H,K))$ and the space $CB_{(M',N')}(S^\infty(K,H), B(K,H))$ of all $(M',N')$-bimodule completely bounded maps from $S^\infty(K,H)$ into $B(K,H)$. Consequently the description of such maps provided by \cite[Theorem 2.2]{BS} yields Theorem \ref{Haag}. Using the so-called weak$^*$ Haagerup tensor product $\stackrel{w^*h}{\otimes}$ introduced in \cite{BS}, an equivalent formulation of Theorem \ref{Haag} is that we have a natural isometric $w^*$-homeomorphic identification \begin{equation}\label{Haag+} M\stackrel{w^*h}{\otimes} N\,\simeq \, CB_{(N',M')}(S^1(H,K)). \end{equation} In this paper we consider three Hilbert spaces $H_1,H_2,H_3$ as well as von Neumann algebras $M_1,M_2,M_3$ acting on them. We study bilinear $(M'_3,M'_2,M'_1)$-module maps $$ u\colon S^2(H_2,H_3)\times S^2(H_1,H_2)\longrightarrow S^1(H_1,H_3), $$ in the sense that $u(Ty,SxR)=Tu(yS,x)R$ for any $x\in S^2(H_1,H_2)$, $y\in S^2(H_2,H_3)$, $R\in M'_1$, $S\in M'_2$ and $T\in M'_3$. In the case when $H_i=L^2(\Omega_i)$ for some measure spaces $\Omega_i$, $i=1,2,3$, and $M_i=L^\infty(\Omega_i)\subset B(L^2(\Omega_i))$ in the usual way, bilinear $(M'_3,M'_2,M'_1)$-module maps coincide with the bilinear Schur multipliers discussed in \cite{JTT, CLS}. On the projective tensor product $S^2(H_2,H_3)\widehat{\otimes}S^2(H_1,H_2)$, we introduce a natural operator space structure, denoted by $\Gamma(H_1,H_2,H_3)$, see (\ref{2OS-Gamma}). Our main result, Theorem \ref{6Factorization}, is a characterization, in the case when $M_2$ is injective, of completely bounded $(M'_3,M'_2,M'_1)$-module maps $u$ as above by a weak factorization property, which extends Theorem \ref{Haag}. (see Remark \ref{6Recover}). This characterization is already new in the non module case (that is, when $M_i=B(H_i)$ for $i=1,2,3$). The proof of this result has two steps. First we establish an isometric and $w^*$-homeomorphic identification \begin{equation}\label{1Id} M_2^{op}\overline{\otimes}\bigl( M_1\stackrel{w^*h}{\otimes}M_3\bigr)\,\simeq \, CB_{(M'_3,M'_2,M'_1)}\bigl(\Gamma(H_1,H_2,H_3), S^1(H_1,H_3)\bigr) \end{equation} which extends (\ref{Haag+}), see Theorem \ref{3OM-S1}. Second we make use of a remarkable factorization result of Sinclair-Smith \cite{SS} on completely bounded bilinear maps valued in an injective von Neumann algebra (see Theorem \ref{4SS} for the precise statement), as well as operator space results, to derive Theorem \ref{6Factorization} from (\ref{1Id}). 
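To fix ideas, here is a minimal finite-dimensional sketch (ours; the dimensions, the index set and the random matrices are arbitrary choices) of a map of the form $u(z)=\sum_i b_i z a_i$ as in Theorem \ref{Haag}, together with a numerical check of the resulting elementary bound $\norm{u(z)}_1\le \norm{(a_i)_i}_{R_I^w}\norm{(b_i)_i}_{C_I^w}\norm{z}_1$, where for a finite index set the row and column norms are $\norm{\sum_i a_ia_i^*}^{1/2}$ and $\norm{\sum_i b_i^*b_i}^{1/2}$.
\begin{verbatim}
# Finite-dimensional illustration of the factorization u(z) = sum_i b_i z a_i
# acting on trace-class matrices z : H -> K, with a_i in B(H) and b_i in B(K).
import numpy as np

rng = np.random.default_rng(0)
dH, dK, I = 3, 4, 5
a = rng.standard_normal((I, dH, dH))          # the family (a_i) in B(H)
b = rng.standard_normal((I, dK, dK))          # the family (b_i) in B(K)

def u(z):                                     # z is a dK x dH matrix
    return sum(b[i] @ z @ a[i] for i in range(I))

row_norm = np.linalg.norm(sum(a[i] @ a[i].T for i in range(I)), 2) ** 0.5
col_norm = np.linalg.norm(sum(b[i].T @ b[i] for i in range(I)), 2) ** 0.5

z = rng.standard_normal((dK, dH))
lhs = np.linalg.norm(u(z), 'nuc')             # trace norm of u(z)
rhs = row_norm * col_norm * np.linalg.norm(z, 'nuc')
print(lhs <= rhs)                             # True
\end{verbatim}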
The Sinclair-Smith theorem, which plays a key role in this paper, was proved in \cite[Theorem 4.4]{SS} using tensor product computations, the Effros-Lance characterization of semidiscrete von Neumann algebras \cite{EL} and Connes's fundamental result (completed in \cite{W}) that any injective von Neumann algebra is semidiscrete. In Section \ref{5SS} below, we give a new, much shorter proof of Theorem \ref{4SS} based on Hilbert $C^*$-modules. The paper also contains a thorough study of completely bounded bilinear $(M'_3,M'_2,M'_1)$-module maps $$ u\colon S^2(H_2,H_3)\times S^2(H_1,H_2)\longrightarrow B(H_1,H_3). $$ In analogy with (\ref{1Id}) we show that the space of such maps can be identified with the von Neumann algebra tensor product $M_1\overline{\otimes} M_2^{op}\overline{\otimes} M_3$, see Corollary \ref{3Mod-B}. \section{Operator space and duality preliminaries}\label{20S} We start with some general principles and conventions which will be used throughout this paper. Let $E,F$ and $G$ be Banach spaces. We let $E\otimes F$ denote the algebraic tensor product of $E$ and $F$. We let $B(E,G)$ denote the Banach space of all bounded operators from $E$ into $G$. We let $B_2(F\times E,G)$ denote the Banach space of all bounded bilinear operators from $F\times E$ into $G$. Let $F\widehat{\otimes} E$ be the projective tensor product of $F$ and $E$. To any $u\in B_2(F\times E,G)$, one can associate a unique $\widetilde{u}\colon F\otimes E\to G$ satisfying $$ \widetilde{u}(y\otimes x)=u(y,x), \qquad x\in E,\ y\in F. $$ Then $\widetilde{u}$ extends to a bounded operator (still denoted by) $\widetilde{u}\colon F\widehat{\otimes} E\to G$ and we have equality $\norm{\widetilde{u}}=\norm{u}$. Then the mapping $u\mapsto \widetilde{u}$ yields an isometric identification \begin{equation}\label{1Duality1} B_2(F\times E,G) \,\simeq\, B(F\widehat{\otimes} E,G). \end{equation} Consider the case $G=\ensuremath{\mathbb{C}}$. Then (\ref{1Duality1}) provides an isometric identification $B_2(F\times E,\ensuremath{\mathbb{C}}) \,\simeq\, (F\widehat{\otimes} E)^*$. Now to any bounded bilinear form $u\colon F\times E\to \ensuremath{\mathbb{C}}\,$, one can associate two bounded maps $$ u'\colon E\longrightarrow F^* \qquad\hbox{and}\qquad u''\colon F\longrightarrow E^* $$ defined by $\langle u'(x),y\rangle = u(y,x) = \langle u''(y),x\rangle$ for any $x\in E$ and $y\in F$. Moreover the norms of $u'$ and $u''$ are equal to the norm of $u$. Hence the mappings $u\mapsto u'$ and $u\mapsto u''$ yield isometric identifications \begin{equation}\label{1Duality3} (F\widehat{\otimes} E)^* \,\simeq\, B(E,F^*)\,\simeq\,B(F,E^*). \end{equation} We refer to \cite[Chap. 8, Theorem 1 and Corollary 2]{DU} for these classical facts. We assume that the reader is familiar with the basics of Operator Space Theory and completely bounded maps, for which we refer to \cite{ER, P2} and \cite[Chap. 1]{BLM}. However we need to review a few important definitions and fundamental results which will be used at length in this paper; the remainder of this section is devoted to this task. We will make crucial use of the dual operator space $E^*$ of an operator space $E$ as well as of the operator space $CB(E,F)$ of completely bounded maps from $E$ into another operator space $F$ (see e.g. \cite[Section 3.2]{ER}). Whenever $v\colon E\to F$ is a completely bounded map, its completely bounded norm will be denoted by $\cbnorm{v}$. Let $E,F$ be operator spaces. 
We let $F\stackrel{\frown}{\otimes} E$ denote the operator space projective tensor product of $F$ and $E$ (here we adopt the notation from \cite[1.5.11]{BLM}). We will often use the fact that this tensor product is commutative and associative. The identifications (\ref{1Duality3}) have operator space analogs. Namely let $u\colon F\times E\to\ensuremath{\mathbb{C}}\,$ be a bounded bilinear form. Then $\widetilde{u}$ extends to a functional on $F\stackrel{\frown}{\otimes} E$ if and only if $u'\colon E\to F^*\,$ is completely bounded, if and only if $u''\colon F\to E^*$ is completely bounded. In this case $\cbnorm{u'}=\cbnorm{u''}=\norm{\widetilde{u}}_{(F\stackrel{\frown}{\otimes} E)^*}$. Thus (\ref{1Duality3}) restricts to isometric identifications \begin{equation}\label{1Duality4} (F\stackrel{\frown}{\otimes}E)^* \,\simeq\, CB(E,F^*)\,\simeq\,CB(F,E^*). \end{equation} It turns out that the latter are actually completely isometric identifications (see e.g. \cite[Section 7.1]{ER} or \cite[(1.51)]{BLM}). Let $H,K$ be Hilbert spaces. We let $\overline{K}$ denote the complex conjugate of $K$. For any $\xi\in K$, the notation $\overline{\xi}$ stands for $\xi$ regarded as an element of $\overline{K}$. We recall the canonical identification $\overline{K}=K^*$. Thus for any $\xi\in K$ and any $\eta\in H$, $\overline{\xi}\otimes \eta$ may be regarded as the rank one operator $K\to H$ taking any $\zeta\in K$ to $\langle \zeta,\xi\rangle\eta$. With this convention, the algebraic tensor product $\overline{K}\otimes H$ is identified with the space of all bounded finite rank operators from $K$ into $H$. Let $S^1(K,H)$ be the space of trace class operators $v\colon K\to H$, equipped with its usual norm $\norm{v}_1=tr(\vert v\vert)$. Then $\overline{K}\otimes H$ is a dense subspace of $S^1(K,H)$ and $\norm{\,\cdotp}_1$ coincides with the Banach space projective norm on $\overline{K}\otimes H$. Hence we have an isometric identification \begin{equation}\label{1Proj} S^1(K,H)\,\simeq\, \overline{K}\widehat{\otimes} H. \end{equation} Let $S^2(K,H)$ be the space of Hilbert-Schmidt operators $v\colon K\to H$, equipped with its usual Hilbertian norm $\norm{v}_2=\bigl(tr(v^*v)\bigr)^\frac12$. Then $\overline{K}\otimes H$ is a dense subspace of $S^2(K,H)$ and $\norm{\,\cdotp}_2$ coincides with the Hilbertian tensor norm on $\overline{K}\otimes H$. Hence we have an isometric identification \begin{equation}\label{1HS} S^2(K,H)\,\simeq\, \overline{K}\stackrel{2}{\otimes} H, \end{equation} where the right hand side denotes the Hilbertian tensor product of $\overline{K}$ and $H$. Let $S^\infty(H,K)$ denote the space of all compact operators from $H$ into $K$, equipped with its usual operator space structure. We recall that through trace duality, we have isometric identifications \begin{equation}\label{1S11} S^\infty(H,K)^* \,\simeq\, S^1(K,H) \qquad\hbox{and}\qquad S^1(K,H)^* \,\simeq\, B(H,K). \end{equation} Throughout we assume that $S^1(K,H)$ is equipped with its canonical operator space structure, so that (\ref{1S11}) holds completely isometrically (see e.g. \cite[Theorem 3.2.3]{ER}). Let $E,G$ be Banach spaces and let $j\colon E^*\to G^*$ be a $w^*$-continuous isometry. Then its range $j(E^*)$ is $w^*$-closed, hence $j(E^*)$ is a dual space. Further $j$ induces a $w^*$-$w^*$-homeomorphism between $E^*$ and $j(E^*)$ (see e.g. \cite[A.2.5]{BLM}). Thus $j$ allows one to identify $E^*$ and $j(E^*)$ as dual Banach spaces.
In this case, we will say that $j$ induces a $w^*$-continuous isometric identification between $E^*$ and $j(E^*)$. If $E,G$ are operator spaces and $j$ is a complete isometry, then $j(E^*)$ is a dual operator space and we will call $j$ a $w^*$-continuous completely isometric identification between $E^*$ and $j(E^*)$. Let $E,F$ be operator spaces and consider $w^*$-continuous completely isometric embeddings \begin{equation}\label{1Rep} E^*\subset B(H) \qquad\hbox{and}\qquad F^*\subset B(K), \end{equation} for some Hilbert spaces $H,K$ (see e.g. \cite[Prop. 3.2.4]{ER}). The normal spatial tensor product of the dual operator spaces $F^*$ and $E^*$ is defined as the $w^*$-closure of $F^*\otimes E^*$ into the von Neumann algebra $B(K)\overline{\otimes} B(H)$ and is denoted by $$ F^*\overline{\otimes} E^*. $$ This is a dual operator space. It turns out that its definition does not depend on the specific embeddings (\ref{1Rep}), see e.g. \cite[p. 135]{ER}. We note for further use that the natural embedding $B(K)\otimes B(H)\subset B(K\stackrel{2}{\otimes} H)$ extends to a $w^*$-continuous completely isometric identification \begin{equation}\label{1Normal} B(K)\overline{\otimes} B(H) \,\simeq\, B(K\stackrel{2}{\otimes} H). \end{equation} To deal with normal spatial tensor products, it is convenient to use the so-called slice maps. Take any $\lambda\in S^1(K)$ and consider it as a $w^*$-continuous functional $\lambda\colon B(K)\to\ensuremath{\mathbb{C}}\,$. Then the operator $\lambda\otimes I_{B(H)}$ extends to a (necessarily unique) $w^*$-continuous bounded map $$ \ell_\lambda\colon B(K)\overline{\otimes} B(H)\longrightarrow B(H). $$ Likewise, any $\mu\in S^1(H)$ can be considered as a $w^*$-continuous functional $\mu\colon B(H)\to\ensuremath{\mathbb{C}}\,$ and $I_{B(K)}\otimes \mu$ extends to a $w^*$-continuous bounded map $$ r_\mu\colon B(K)\overline{\otimes} B(H)\longrightarrow B(K). $$ Then we have the following properties (for which we refer to either \cite[Lemma 7.2.2]{ER} and its proof, or \cite[1.5.2]{BLM}). \begin{lemma}\label{1Slice} Let $z\in B(K)\overline{\otimes} B(H)$. The linear mappings $$ z'\colon S^1(H)\longrightarrow B(K) \qquad\hbox{and}\qquad z''\colon S^1(K)\longrightarrow B(H) $$ defined by $z'(\mu)=r_\mu(z)$ and $z''(\lambda)=\ell_\lambda(z)$ are completely bounded. Further the mappings $z\mapsto z'$ and $z\mapsto z''$ are $w^*$-continuous completely isometric isomorphisms from $B(K)\overline{\otimes} B(H)$ onto $CB(S^1(H),B(K))$ and $CB(S^1(K),B(H))$, respectively. \end{lemma} According to (\ref{1Duality4}), an equivalent formulation of the above lemma is that \begin{equation}\label{1S1S1} \bigl(S^1(K)\stackrel{\frown}{\otimes} S^1(H)\bigr)^*\,\simeq\, B(K)\overline{\otimes} B(H) \end{equation} $w^*$-continuously and completely isometrically. Recall (\ref{1Rep}). The space of all $z\in B(K)\overline{\otimes} B(H)$ such that $z'$ is valued in $F^*$ and $z''$ is valued in $E^*$ is usually called the normal Fubini tensor product of $F^*$ and $E^*$. This subspace is $w^*$-continuously completely isometric to $CB(E,F^*)$ (equivalently to $CB(F,E^*)$, by (\ref{1Duality4})). Indeed we may regard $CB(E,F^*)$ as the subspace of $CB(S^1(H),B(K))$ of all $w\colon S^1(H)\to B(K)$ such that $w$ is valued in $F^*$ and $w$ vanishes on $E^{*}_{\perp}$. Then it is not hard to see that $z$ belongs to the normal Fubini tensor product of $F^*$ and $E^*$ if and only if $z'$ belongs to $CB(E,F^*)$. We refer to \cite[Theorem 7.2.3]{ER} for these facts. 
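In finite dimensions the slice maps introduced above are just contractions against a fixed trace-class functional in one leg of the tensor product. The following sketch (ours; the dimensions, the matrices and the Kronecker-product reshaping convention are arbitrary illustrative choices) checks the defining identities $\ell_\lambda(a\otimes b)=\lambda(a)\,b$ and $r_\mu(a\otimes b)=\mu(b)\,a$ on an elementary tensor, with $\lambda=tr(\lambda_0\,\cdotp)$ and $\mu=tr(\mu_0\,\cdotp)$.
\begin{verbatim}
# Finite-dimensional check of the slice maps: for z = a (x) b with a in B(K) and
# b in B(H), ell_lambda(z) = lambda(a) * b and r_mu(z) = mu(b) * a, where
# lambda = tr(lam0 . ) and mu = tr(mu0 . ) are trace-class functionals.
import numpy as np

rng = np.random.default_rng(1)
dK, dH = 3, 2
a, lam0 = rng.standard_normal((dK, dK)), rng.standard_normal((dK, dK))
b, mu0 = rng.standard_normal((dH, dH)), rng.standard_normal((dH, dH))

z = np.kron(a, b).reshape(dK, dH, dK, dH)      # z[i, p, j, q] = a[i, j] * b[p, q]

ell = np.einsum('ji,ipjq->pq', lam0, z)        # (lambda tensor id)(z), an operator on H
r   = np.einsum('qp,ipjq->ij', mu0, z)         # (id tensor mu)(z), an operator on K

print(np.allclose(ell, np.trace(lam0 @ a) * b))   # True
print(np.allclose(r, np.trace(mu0 @ b) * a))      # True
\end{verbatim}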
It is easy to check that the normal Fubini tensor product of $F^*$ and $E^*$ contains $F^*\overline{\otimes} E^*$. This yields a $w^*$-continuous completely isometric embedding $$ F^*\overline{\otimes} E^*\,\subset\, CB(E,F^*). $$ However this inclusion may be strict. The next lemma provides a list of cases when the inclusion is an equality. We refer the reader to \cite[Sections 7.2 and 11.2]{ER} for the proofs. Whenever $M$ is a von Neumann algebra, we let $M_*$ denote its (unique) predual. We equip it with its natural operator space structure, so that $M=(M_*)^*$ completely isometrically (see e.g. \cite[Section 2.5]{P2} or \cite[Lemma 1.4.6]{BLM}). \begin{lemma}\label{1Injective1} \ \begin{itemize} \item [(a)] For any von Neumann algebras $M,N$, we have $$ N\overline{\otimes}M\,\simeq\, CB(M_*,N). $$ \item [(b)] For any injective von Neumann algebra $M$ and for any operator space $E$, we have $$ M\overline{\otimes} E^*\,\simeq\, CB(E,M). $$ \item [(c)] For any Hilbert spaces $H,K$ and for any operator space $E$, we have $$ B(H,K)\overline{\otimes} E^*\,\simeq\, CB(E,B(H,K)). $$ \end{itemize} \end{lemma} Let $K$ be a Hilbert space. We let $\{K\}_c$ (resp. $\{K\}_r$) denote the column operator space (resp. the row operator space) over $K$. We recall that through the canonical identification $K^*=\overline{K}$, we have $$ \{K\}_c^* = \{\overline{K}\}_r \quad\hbox{and}\quad \{K\}_r^* = \{\overline{K}\}_c \qquad\hbox{completely isometrically}. $$ (See e.g. \cite[Section 3.4]{ER}.) We let $F\stackrel{h}{\otimes}E$ denote the Haagerup tensor product of a couple $(F,E)$ of operator spaces. We will use the fact that this is an associative tensor product. Let $\theta\colon F\times E\to\ensuremath{\mathbb{C}}\,$ be a bounded bilinear form. Then $\theta$ extends to an element of $(F\stackrel{h}{\otimes}E)^*$ if and only if there exist a Hilbert space $\mbox{${\mathcal H}$}$ and two completely bounded maps $\alpha\colon E\to \{\mbox{${\mathcal H}$}\}_c\,$ and $\beta\colon F\to \{\overline{\mbox{${\mathcal H}$}}\}_r\,$ such that $\theta(y,x)= \langle \alpha(x),\beta(y)\rangle$ for any $x\in E$ and any $y\in F$ (see e.g. \cite[Corollary 9.4.2]{ER}). The Haagerup tensor product is projective. This means that if $p\colon E\to E_1$ and $q\colon F\to F_1$ are complete quotient maps, then $q\otimes p$ extends to a (necessarily unique) complete quotient map $F\stackrel{h}{\otimes}E\to F_1\stackrel{h}{\otimes}E_1$. Taking the adjoint of the latter, we obtain a $w^*$-continuous completely isometric embedding \begin{equation}\label{1Embed} (F_1\stackrel{h}{\otimes}E_1)^* \,\subset\, (F\stackrel{h}{\otimes}E)^*. \end{equation} \begin{lemma}\label{1Slice-H} Let $E,F,E_1,F_1$ be operator spaces as above and let $\theta\in (F\stackrel{h}{\otimes}E)^*$. Let $$ \theta'\colon E\longrightarrow F^* \qquad\hbox{and}\qquad \theta''\colon F\longrightarrow E^* $$ be the bounded linear maps associated to $\theta$. Then $\theta\in (F_1\stackrel{h}{\otimes}E_1)^*$ (in the sense given by (\ref{1Embed})) if and only if $\theta'$ is valued in $F_1^*$ and $\theta''$ is valued in $E_1^*$. \end{lemma} \begin{proof} If $\theta\in (F_1\stackrel{h}{\otimes}E_1)^*$, then $\theta(y,x)=0$ if either $x\in{\rm Ker}(p)$ or $y\in{\rm Ker}(q)$. Hence $\langle \theta''(y),x\rangle=0$ for any $(y,x)\in F\times {\rm Ker}(p)$ and $\langle \theta'(x),y\rangle=0$ for any $(y,x)\in {\rm Ker}(q)\times E$. Hence $\theta''$ is valued in $E_1^*={\rm Ker}(p)^\perp$ and $\theta'$ is valued in $F_1^*={\rm Ker}(q)^\perp$. 
Assume conversely that $\theta'$ is valued in $F_1^*$ and that $\theta''$ is valued in $E_1^*$. Let $\alpha\colon E\to \{\mbox{${\mathcal H}$}\}_c\,$ and $\beta\colon F\to \{\overline{\mbox{${\mathcal H}$}}\}_r\,$ be completely bounded maps, for some Hilbert space $\mbox{${\mathcal H}$}$, such that $\theta(y,x)= \langle \alpha(x),\beta(y)\rangle$ for any $x\in E$ and any $y\in F$. Changing $\mbox{${\mathcal H}$}$ into the closure of the range of $\alpha$, we may assume that $\alpha$ has dense range. Next changing $\mbox{${\mathcal H}$}$ into the closure of the (conjugate of) the range of $\beta$, we may actually assume that both $\alpha$ and $\beta$ have dense range. The assumption on $\theta'$ means that $\langle \alpha(x),\beta(y)\rangle=0$ for any $x\in E$ and any $y\in {\rm Ker}(q)$. Since $\alpha$ has dense range this means that $\beta$ vanishes on ${\rm Ker}(q)$. Likewise the assumption on $\theta''$ means that $\alpha$ vanishes on ${\rm Ker}(p)$. We may therefore consider $\alpha_1\colon E_1\to \{\mbox{${\mathcal H}$}\}_c$ and $\beta_1\colon F_1\to \{\overline{\mbox{${\mathcal H}$}}\}_r$ induced by $\alpha$ and $\beta$, that is, $\alpha=\alpha_1\circ p$ and $\beta=\beta_1\circ q$. Further $\alpha_1$ and $\beta_1$ are completely bounded, hence the bilinear mapping $(y_1,x_1)\mapsto \langle \alpha_1(x_1),\beta_1(y_1)\rangle$ is an element of $(F_1\stackrel{h}{\otimes}E_1)^*$. By construction it identifies with $\theta$ in the embedding (\ref{1Embed}), hence $\theta$ belongs to $(F_1\stackrel{h}{\otimes}E_1)^*$. \end{proof} We will need the so-called weak$^*$ Haagerup tensor product of two dual operator spaces \cite{BS}. It can be defined by \begin{equation}\label{1w*h} F^*\stackrel{w^*h}{\otimes}E^* \,=\, (F\stackrel{h}{\otimes}E)^*. \end{equation} The reason why this dual space can be considered as a tensor product over the couple $(F^*,E^*)$ is discussed in \cite[1.6.9]{BLM}. We now recall a few tensor product identities involving the operator space projective tensor product and the Haagerup tensor product. \begin{proposition}\label{1Recap} Let $E$ be an operator space and let $H,K$ be two Hilbert spaces. \begin{itemize} \item [(a)] We have completely isometric identifications $$ \{K\}_r\stackrel{\frown}{\otimes} E \,\simeq\, \{K\}_r\stackrel{h}{\otimes} E \qquad\hbox{and}\qquad E\stackrel{\frown}{\otimes} \{H\}_c \,\simeq\, E\stackrel{h}{\otimes} \{H\}_c. $$ \item [(b)] We have completely isometric identifications $$ \{K\}_r\stackrel{\frown}{\otimes} \{H\}_r \,\simeq\, \{K\stackrel{2}{\otimes} H\}_r \qquad\hbox{and}\qquad \{K\}_c\stackrel{\frown}{\otimes} \{H\}_c \,\simeq\, \{K\stackrel{2}{\otimes} H\}_c. $$ \item [(c)] The embedding $\overline{K}\otimes H\subset S^1(K,H)$ extends to completely isometric identifications \begin{equation}\label{1RC} S^1(K,H) \,\simeq\, \{\overline{K}\}_r\stackrel{\frown}{\otimes}\{H\}_c \end{equation} and \begin{equation}\label{1REC} S^1(K,H)\stackrel{\frown}{\otimes} E \,\simeq\, \{\overline{K}\}_r\stackrel{\frown}{\otimes} E \stackrel{\frown}{\otimes}\{H\}_c. \end{equation} \item [(d)] To any $u\colon E\to B(H,K)$, associate $\theta_u\colon \overline{K}\otimes E\otimes H\to\ensuremath{\mathbb{C}}\,$ by letting $\theta_u(\overline{\xi}\otimes x\otimes \eta ) =\langle u(x)\eta,\xi\rangle$, for any $x\in E,\eta\in H,\xi\in K$. Then $u\mapsto\theta_u$ extends to a $w^*$-continuous completely isometric identification $$ \bigl(\{\overline{K}\}_r\stackrel{\frown}{\otimes} E \stackrel{\frown}{\otimes}\{H\}_c\bigr)^* \,\simeq\, CB(E,B(H,K)). 
$$ \end{itemize} \end{proposition} \begin{proof} We refer to \cite[Proposition 9.3.2]{ER} for (a) and to \cite[Proposition 9.3.5]{ER} for (b). Formula (\ref{1RC}) follows from \cite[Proposition 9.3.4]{ER} and (a), and formula (\ref{1REC}) follows by the commutativity of the operator space projective tensor product. Finally (d) is a consequence of (\ref{1REC}), (\ref{1S11}) and (\ref{1Duality4}). \end{proof} \begin{remark}\label{1Equal} Comparing (\ref{1RC}) with (\ref{1Proj}), we note that at the Banach space level, the operator space projective tensor product of a row and a column Hilbert space coincides with their Banach space projective tensor product. \end{remark} \begin{remark}\label{1RankOne} For any $\eta\in H$ and $\xi\in K$, let $T_{\eta,\xi}\in B(H,K)$ be the rank one operator defined by $$ T_{\eta,\xi}(\zeta)=\langle \zeta,\eta\rangle\, \xi,\qquad\zeta\in H. $$ When we consider this operator as an element of $S^\infty(H,K)$ or $B(H,K)$, it is convenient to identify it with $\xi\otimes\overline{\eta} \in K\otimes\overline{H}$, and hence to regard $K\otimes\overline{H}$ as a subspace of $S^\infty(H,K)$. This convention is different from the one used so far when we had to represent rank one (more generally, finite rank) operators as elements of the trace class or of the Hilbert-Schmidt class. The rationale for this is that the trace duality providing (\ref{1S11}) extends the natural duality between $K\otimes\overline{H}$ and $\overline{K}\otimes H$. Then the embedding $K\otimes\overline{H}\subset S^\infty(H,K)$ extends to a completely isometric identification \begin{equation}\label{1Compact} S^\infty (H,K)\,\simeq\, \{K\}_c\stackrel{h}{\otimes} \{\overline{H}\}_r. \end{equation} (See e.g. \cite[Proposition 9.3.4]{ER}.) \end{remark} If $A$ is any $C^*$-algebra, the so-called opposite $C^*$-algebra $A^{op}$ is the involutive Banach space $A$ equipped with the reversed multiplication $(a,b)\mapsto ba$. Note that as an operator space, $A^{op}$ is not (in general) the same as $A$, that is, the identity mapping $A\to A^{op}$ is not a complete isometry. See e.g. \cite[Theorem 2.2]{Roy} for more about this. In the case when $A=B(H)$, we have the following well-known description (see e.g. \cite[Sections 2.9 and 2.10]{P2}). \begin{lemma}\label{1Opp} Let $H$ be a Hilbert space. For any $S\in B(H)$, define $$ \widehat{S}(\overline{h}) = \overline{S^*(h)},\qquad h\in H. $$ Then $S\mapsto \widehat{S}$ is a $*$-isomorphism from $B(H)^{op}$ onto $B(\overline{H})$. \end{lemma} In the sequel we will use the operator space $M_{*}^{op}$ for any von Neumann algebra $M$. This is both the predual operator space of $M^{op}$ and the opposite operator space of $M_*$, in the sense of \cite[Section 2.10]{P2}. \section{Operator multipliers into the trace class}\label{3OM} Let $H_1,H_2,H_3$ be three Hilbert spaces. Using (\ref{1HS}), we let $$ \Theta\colon H_1\stackrel{2}{\otimes}\overline{H_2}\stackrel{2}{\otimes} H_3\longrightarrow S^2\bigl(S^2(H_1,H_2),H_3\bigr) $$ be the unitary operator obtained by first identifying $H_1\stackrel{2}{\otimes}\overline{H_2}$ with $\overline{S^2(H_1,H_2)}$, and then identifying $\overline{S^2(H_1,H_2)}\stackrel{2}{\otimes} H_3$ with $S^2\bigl(S^2(H_1,H_2),H_3\bigr)$.
For any $\varphi\in B\bigl(H_1\stackrel{2}{\otimes}\overline{H_2} \stackrel{2}{\otimes} H_3\bigr)$, one may define a bounded bilinear map $$ \tau_\varphi\colon S^2(H_2,H_3)\times S^2(H_1,H_2)\longrightarrow B(H_1,H_3) $$ by $$ \bigl[\tau_\varphi(y,x)\bigr](h) = \Theta\bigl[\varphi(h\otimes y)\bigr](x), \qquad x\in S^2(H_1,H_2), \ y\in S^2(H_2,H_3),\ h\in H_1. $$ On the right hand side of the above equality, $y$ is regarded as an element of $\overline{H_2}\stackrel{2}{\otimes}H_3$, and hence $h\otimes y$ is an element of $H_1\stackrel{2}{\otimes}\overline{H_2} \stackrel{2}{\otimes} H_3$. It is clear that $$ \bignorm{\bigl[\tau_\varphi(y,x)\bigr](h)} \leq \norm{\varphi}\norm{x}_2\norm{y}_2\norm{h}. $$ Consequently, the above construction defines a contraction \begin{equation}\label{2Sigma1} \tau\colon B\bigl(H_1\stackrel{2}{\otimes}\overline{H_2}\stackrel{2}{\otimes} H_3\bigr) \longrightarrow B_2\bigl(S^2(H_2,H_3)\times S^2(H_1,H_2), B(H_1,H_3)\bigr). \end{equation} The bilinear maps $\tau_\varphi$ were introduced in \cite{JTT} (however the latter paper focuses on the case when $\bignorm{\bigl[\tau_\varphi(y,x)\bigr](h)} \leq D\norm{x}\norm{y}\norm{h}$ for some constant $D>0$). We call $\tau_\varphi$ an operator multiplier and we say that $\varphi$ is the symbol of $\tau_\varphi$. We refer to \cite{JTT} for $m$-linear versions of such operators for arbitrary $m\geq 2$. We note that by (\ref{1Normal}) and Lemma \ref{1Opp}, we have a von Neumann algebra identification \begin{equation}\label{2VN} B\bigl(H_1\stackrel{2}{\otimes}\overline{H_2}\stackrel{2}{\otimes} H_3\bigr) \,\simeq\, B(H_1)\overline{\otimes} B(H_2)^{op}\overline{\otimes} B(H_3). \end{equation} In the sequel we will make no difference between these two von Neumann algebras. In particular, we will consider symbols $\varphi$ of operator multipliers as elements of $B(H_1)\overline{\otimes} B(H_2)^{op}\overline{\otimes} B(H_3)$. One can check (see \cite{JTT}) that for any $R\in B(H_1)$, $S\in B(H_2)$ and $T\in B(H_3)$, we have \begin{equation}\label{2Sigma2} \tau_{R\otimes S\otimes T}(y,x) = TySxR, \qquad x\in S^2(H_1,H_2), \ y\in S^2(H_2,H_3). \end{equation} Note that in this identity, $S$ is regarded as an element of $B(H_2)^{op}$ at the left-hand side and as an element of $B(H_2)$ at the right-hand side. We now define the operator space \begin{equation}\label{2OS-Gamma} \Gamma(H_1,H_2,H_3)\, =\, \{S^2(H_2,H_3)\}_c \stackrel{\frown}{\otimes} \{S^2(H_1,H_2)\}_r. \end{equation} According to Remark \ref{1Equal}, $\Gamma(H_1,H_2,H_3)$ coincides, at the Banach space level, with the projective tensor product of $S^2(H_2,H_3)$ and $S^2(H_1,H_2)$. Hence \begin{equation}\label{2Equal} B_2\bigl(S^2(H_2,H_3)\times S^2(H_1,H_2), B(H_1,H_3)\bigr) \,\simeq\, B\bigl(\Gamma(H_1,H_2,H_3), B(H_1,H_3)\bigr) \end{equation} by (\ref{1Duality1}). In the sequel for any $u\colon S^2(H_2,H_3)\times S^2(H_1,H_2)\to B(H_1,H_3)$, we let $$ \widetilde{u}\colon \Gamma(H_1,H_2,H_3)\longrightarrow B(H_1,H_3) $$ denote its associated linear map. The next proposition shows that under the identification (\ref{2Equal}), the range of $\tau$ coincides with the space of completely bounded maps from $\Gamma(H_1,H_2,H_3)$ into $B(H_1,H_3)$. \begin{proposition}\label{2OM-B} Let $u\colon S^2(H_2,H_3)\times S^2(H_1,H_2)\to B(H_1,H_3)$ be a bounded bilinear map. Then $\widetilde{u}\colon \Gamma(H_1,H_2,H_3)\to B(H_1,H_3)$ is completely bounded if and only if there exists $\varphi$ in $B(H_1)\overline{\otimes} B(H_2)^{op}\overline{\otimes} B(H_3)$ such that $u=\tau_\varphi$. 
Further $\tau$ provides a $w^*$-continuous completely isometric identification \begin{equation}\label{2OM-B1} B(H_1)\overline{\otimes} B(H_2)^{op}\overline{\otimes} B(H_3) \,\simeq\, CB\bigl(\Gamma(H_1,H_2,H_3), B(H_1,H_3)\bigr). \end{equation} \end{proposition} \begin{proof} For convenience we set $$ \mbox{${\mathcal H}$}=H_1\stackrel{2}{\otimes}\overline{H_2} \stackrel{2}{\otimes} H_3. $$ By (\ref{1HS}) and Proposition \ref{1Recap} (b), we have $$ \{S^2(H_1,H_2)\}_r\,\simeq\,\{\overline{H_1}\}_r \stackrel{\frown}{\otimes}\{H_2\}_r \qquad\hbox{and}\qquad \{S^2(H_2,H_3)\}_c\,\simeq\,\{\overline{H_2}\}_c \stackrel{\frown}{\otimes}\{H_3\}_c $$ completely isometrically. Hence applying (\ref{2OS-Gamma}), we have \begin{equation}\label{2Gamma} \Gamma(H_1,H_2,H_3) \,\simeq\, \{\overline{H_2}\}_c \stackrel{\frown}{\otimes}\{H_3\}_c \stackrel{\frown}{\otimes} \{\overline{H_1}\}_r \stackrel{\frown}{\otimes}\{H_2\}_r \end{equation} completely isometrically. Using the commutativity of the operator space projective tensor product, we deduce a completely isometric identification $$ \{\overline{H_3}\}_r\stackrel{\frown}{\otimes} \Gamma(H_1,H_2,H_3)\stackrel{\frown}{\otimes} \{H_1\}_c \,\simeq\, \{\overline{H_1}\}_r\stackrel{\frown}{\otimes} \{H_2\}_r\stackrel{\frown}{\otimes}\{\overline{H_3}\}_r \stackrel{\frown}{\otimes} \{H_1\}_c \stackrel{\frown}{\otimes}\{\overline{H_2}\}_c\stackrel{\frown}{\otimes}\{H_3\}_c. $$ Using Proposition \ref{1Recap} (b) again, we have $$ \{H_1\}_c \stackrel{\frown}{\otimes}\{\overline{H_2}\}_c\stackrel{\frown}{\otimes}\{H_3\}_c \simeq\{\mbox{${\mathcal H}$}\}_c \qquad\hbox{and}\qquad \{\overline{H_1}\}_r\stackrel{\frown}{\otimes} \{H_2\}_r\stackrel{\frown}{\otimes}\{\overline{H_3}\}_r \simeq\{\overline{\mbox{${\mathcal H}$}}\}_r $$ completely isometrically. By Proposition \ref{1Recap} (c), this yields a completely isometric identification \begin{equation}\label{2Ident1} \{\overline{H_3}\}_r\stackrel{\frown}{\otimes} \Gamma(H_1,H_2,H_3)\stackrel{\frown}{\otimes} \{H_1\}_c \simeq S^1(\mbox{${\mathcal H}$}). \end{equation} Passing to the duals, using (\ref{1S11}) and Proposition \ref{1Recap} (d), we deduce a $w^*$-continuous completely isometric identification $$ B(\mbox{${\mathcal H}$})\,\simeq\, CB\bigl(\Gamma(H_1,H_2,H_3), B(H_1,H_3)\bigr). $$ Combining with (\ref{2VN}), we deduce a $w^*$-continuous, completely isometric onto mapping \begin{equation}\label{2J} J\colon B(H_1)\overline{\otimes} B(H_2)^{op}\overline{\otimes} B(H_3) \longrightarrow CB\bigl(\Gamma(H_1,H_2,H_3), B(H_1,H_3)\bigr). \end{equation} Now to establish the proposition it suffices to check that \begin{equation}\label{2J=Tau} J(\varphi)=\widetilde{\tau}_\varphi \end{equation} for any $\varphi\in B(H_1)\overline{\otimes} B(H_2)^{op}\overline{\otimes} B(H_3)$. We claim that it suffices to prove (\ref{2J=Tau}) in the case when $\varphi$ belongs to the algebraic tensor product $B(H_1)\otimes B(H_2)^{op}\otimes B(H_3)$. Indeed let $\varphi\in B(H_1)\overline{\otimes} B(H_2)^{op}\overline{\otimes} B(H_3)$, let $x\in S^2(H_1,H_2)$, $y\in S^2(H_2,H_3)$ and $h\in H_1$. Assume that $(\varphi_t)_t$ is a net in $B(H_1)\otimes B(H_2)^{op}\otimes B(H_3)$ converging to $\varphi$ in the $w^*$-topology. Then $\varphi_t(h\otimes y)\to \varphi(h\otimes y)$ in the weak topology of $H_1\stackrel{2}{\otimes} \overline{H_2}\stackrel{2}{\otimes} H_3$.
Hence $\Theta[\varphi_t(h\otimes y)]\to \Theta[\varphi(h\otimes y)]$ in the weak topology of $S^2(S^2(H_1,H_2),H_3)$, which implies that $\Theta[\varphi_t(h\otimes y)](x) \to \Theta[\varphi(h\otimes y)](x)$ in the weak topology of $H_3$. Equivalently, $[\tau_{\varphi_t}(y,x)](h)\to [\tau_{\varphi}(y,x)](h)$ weakly. Since $J$ is $w^*$-continuous, we also have, by similar arguments, that $[J(\varphi_t)(y\otimes x)](h)\to [J(\varphi)(y\otimes x)](h)$ weakly. Hence if $J(\varphi_t)=\widetilde{\tau}_{\varphi_t}$ for any $t$, we have $J(\varphi)=\widetilde{\tau}_\varphi$ as well. Moreover by linearity, it suffices to prove (\ref{2J=Tau}) when $\varphi= R\otimes S \otimes T$ for some $R\in B(H_1)$, $S\in B(H_2)$ and $T\in B(H_3)$. In view of (\ref{2Sigma2}), it therefore suffices to show that \begin{equation}\label{2Check} J(R\otimes S\otimes T)(y\otimes x ) = TySxR, \end{equation} for any $R\in B(H_1)$, $S\in B(H_2)$, $T\in B(H_3)$, $x\in S^2(H_1,H_2)$ and $y\in S^2(H_2,H_3)$. Since $J$ is linear and $w^*$-continuous it actually suffices to prove (\ref{2Check}) when $R$, $S$, $T$, $x$ and $y$ are rank one. For $i=1,2,3$, let $\xi_i,\eta_i,h_i,k_i\in H_i$ and consider $x=\overline{\xi_1}\otimes\eta_2$ and $y= \overline{\xi_2}\otimes\eta_3$, as well as the operators $R=h_1\otimes\overline{k_1}$, $S=h_2\otimes\overline{k_2}$ and $T=h_3\otimes\overline{k_3}$ (see Remark \ref{1RankOne} for the use of these tensor product notations). Then let $$ \alpha = \overline{\xi_1}\otimes\eta_2\otimes\overline{\xi_3} \otimes \eta_1\otimes\overline{\xi_2}\otimes \eta_3\, \in \, (\overline{H_1}\otimes H_2\otimes \overline{H_3})\otimes (H_1\otimes \overline{H_2}\otimes H_3) \,\subset\, S^1(\mbox{${\mathcal H}$}) $$ and let $$ \beta= h_1\otimes \overline{k_2}\otimes h_3\otimes \overline{k_1}\otimes h_2\otimes \overline{k_3} \, \in \, (H_1\otimes \overline{H_2}\otimes H_3)\otimes (\overline{H_1}\otimes H_2\otimes \overline{H_3})\,\subset\, B(\mbox{${\mathcal H}$}). $$ In the identification (\ref{2Ident1}), $\overline{\xi_3}\otimes y\otimes x \otimes \eta_1$ corresponds to $\alpha$ whereas in the identification (\ref{2VN}), $R\otimes S\otimes T$ corresponds to $\beta$. Hence \begin{align*} \bigl\langle\bigl[J(R\otimes S\otimes T)(y\otimes x)](\eta_1),\xi_3\bigr\rangle & ={\rm tr}(\alpha\beta)\\ & =\langle h_1,\xi_1\rangle\langle \eta_2,k_2\rangle \langle h_3,\xi_3\rangle\langle \eta_1,k_1\rangle \langle h_2,\xi_2\rangle\langle \eta_3,k_3\rangle. \end{align*} On the other hand, $$ TySxR = \langle h_1,\xi_1\rangle\langle \eta_2,k_2\rangle \langle h_2,\xi_2\rangle\langle \eta_3,k_3\rangle h_3\otimes\overline{k}_1, $$ hence \begin{equation}\label{2Trace} \langle TySxR(\eta_1),\xi_3\rangle = \langle h_1,\xi_1\rangle\langle \eta_2,k_2\rangle \langle h_3,\xi_3\rangle\langle \eta_1,k_1\rangle \langle h_2,\xi_2\rangle\langle \eta_3,k_3\rangle. \end{equation} This proves the desired equality. \end{proof} \begin{remark}\label{2Rk} Using (\ref{1S1S1}) twice we have a $w^*$-continuous completely isometric identification \begin{equation}\label{2Rk1} B(H_1)\overline{\otimes} B(H_2)^{op}\overline{\otimes} B(H_3)\,\simeq\, \bigl(S^1(H_1)\stackrel{\frown}{\otimes} S^1(H_2)^{op}\stackrel{\frown}{\otimes} S^1(H_3)\bigr)^*. \end{equation} Let $\varphi\in B(H_1)\overline{\otimes} B(H_2)^{op}\overline{\otimes} B(H_3)$ and let $u=\tau_\varphi$. Let $\xi_1,\eta_1\in H_1$, $\xi_2,\eta_2\in H_2$ and $\xi_3,\eta_3\in H_3$ and regard $\overline{\xi_i}\otimes\eta_i$ as an element of $S^1(H_i)$ for $i=1,2,3$. 
According to (\ref{2Rk1}) we may consider the action of $\varphi$ on $\overline{\xi_1}\otimes\eta_1\otimes\eta_2\otimes \overline{\xi_2}\otimes\overline{\xi_3}\otimes\eta_3$. Then we have $$ \langle\varphi, \overline{\xi_1}\otimes\eta_1\otimes\eta_2\otimes \overline{\xi_2}\otimes\overline{\xi_3}\otimes\eta_3\rangle\,=\, \bigl\langle \bigl[u(\overline{\xi_2}\otimes\eta_3, \overline{\xi_1}\otimes\eta_2)\bigr] (\eta_1),\xi_3\bigr\rangle. $$ Indeed this follows from the arguments in the proof of Proposition \ref{2OM-B}. Details are left to the reader. \end{remark} Let $\varphi\in B(H_1)\overline{\otimes} B(H_2)^{op}\overline{\otimes} B(H_3)$. We will say that $\tau_\varphi$ is an {\bf $S^1$-operator multiplier} if it takes values in the trace class $S^1(H_1,H_3)$ and there exists a constant $D\geq 0$ such that $$ \norm{\tau_\varphi(y,x)}_1\leq D\norm{x}_2\norm{y}_2,\qquad x\in S^2(H_1,H_2),\ y\in S^2(H_2,H_3). $$ Note that, by (\ref{2Sigma2}), $\tau_\varphi$ is an $S^1$-operator multiplier when $\varphi$ is of the form $R\otimes S \otimes T$, since $\norm{TySxR}_1\leq \norm{T}\norm{yS}_2\norm{xR}_2\leq \norm{R}\,\norm{S}\,\norm{T}\,\norm{x}_2\norm{y}_2$. Consequently, $\tau_\varphi$ is an $S^1$-operator multiplier whenever $\varphi$ belongs to the algebraic tensor product $B(H_1)\otimes B(H_2)^{op}\otimes B(H_3)$. In this paper we will be mostly interested in {\bf completely bounded $S^1$-operator multipliers}, that is, $S^1$-operator multipliers $\tau_\varphi$ such that $\widetilde{\tau_\varphi}$ is a completely bounded map from $\Gamma(H_1,H_2,H_3)$ into $S^1(H_1,H_3)$. Note that the canonical inclusion $S^1(H_1,H_3)\subset B(H_1,H_3)$ is a complete contraction, hence $$ CB(\Gamma(H_1,H_2,H_3), S^1(H_1,H_3))\,\subset\, CB(\Gamma(H_1,H_2,H_3), B(H_1,H_3))\qquad\hbox{contractively.} $$ It therefore follows from Proposition \ref{2OM-B} that the space of all completely bounded $S^1$-operator multipliers coincides with the space $CB(\Gamma(H_1,H_2,H_3), S^1(H_1,H_3))$. The following statement provides a characterization. \begin{lemma}\label{2CB} Let $u\colon S^2(H_2,H_3)\times S^2(H_1,H_2)\to S^1(H_1,H_3)$ be a bounded bilinear map and let $D>0$ be a constant. Then $\widetilde{u}\in CB(\Gamma(H_1,H_2,H_3), S^1(H_1,H_3))$ and $\cbnorm{\widetilde{u}}\leq D$ if and only if for any $n\geq 1$, for any $x_1,\ldots,x_n\in S^2(H_1,H_2)$ and for any $y_1,\ldots,y_n\in S^2(H_2,H_3)$, $$ \bignorm{\bigl[u(y_i,x_j)\bigr]_{1\leq i,j\leq n}}_{S^1(\ell^2_n(H_1), \ell^2_n(H_3))}\,\leq D\,\Bigl(\sum_{j=1}^n \norm{x_j}^2_2\Bigr)^{\frac12}\Bigl(\sum_{i=1}^n \norm{y_i}^2_2\Bigr)^{\frac12}. $$ \end{lemma} \begin{proof} For any $n\geq 1$, we use the classical notations $R_n=\{\ell^2_n\}_r, C_n=\{\ell^2_n\}_c$ and $S^1_n=S^1(\ell^2_n)$. Consider $u$ as above and set $$ d_n = \bignorm{I_{S^1_n}\otimes \widetilde{u} \colon S_n^1\stackrel{\frown}{\otimes} \Gamma(H_1,H_2,H_3) \longrightarrow S_n^1\stackrel{\frown}{\otimes}S^1(H_1,H_3)} $$ for any $n\geq 1$. By \cite[Lemma 1.7]{P1}, $\widetilde{u}\in CB\bigl(\Gamma(H_1,H_2,H_3), S^1(H_1,H_3)\bigr)$ if and only if the sequence $(d_n)_{n\geq 1}$ is bounded and in this case, $\cbnorm{\widetilde{u}}=\sup_n d_n$. By Proposition \ref{1Recap} (c), $$ S_n^1\stackrel{\frown}{\otimes} \Gamma(H_1,H_2,H_3) \,\simeq\, R_n\stackrel{\frown}{\otimes} \{S^2(H_1,H_2)\}_r \stackrel{\frown}{\otimes}\{S^2(H_2,H_3)\}_c\stackrel{\frown}{\otimes} C_n $$ completely isometrically.
Using Proposition \ref{1Recap} (b), this yields $$ S_n^1\stackrel{\frown}{\otimes} \Gamma(H_1,H_2,H_3) \,\simeq\, \bigl\{\ell^2_n\stackrel{2}{\otimes}S^2(H_1,H_2)\bigr\}_r\stackrel{\frown}{\otimes} \bigl\{\ell^2_n\stackrel{2}{\otimes}S^2(H_2,H_3)\bigr\}_c. $$ Applying Remark \ref{1Equal}, we derive that $$ S_n^1\stackrel{\frown}{\otimes} \Gamma(H_1,H_2,H_3) \,\simeq\, \bigl(\ell^2_n\stackrel{2}{\otimes}S^2(H_1,H_2)\bigr)\,\widehat{\otimes}\, \bigl(\ell^2_n\stackrel{2}{\otimes}S^2(H_2,H_3)\bigr) $$ isometrically. Similarly, \begin{align*} S_n^1 \stackrel{\frown}{\otimes} S^1(H_1,H_3) & \,\simeq\, R_n \stackrel{\frown}{\otimes} S^1(H_1,H_3) \stackrel{\frown}{\otimes} C_n \\ & \,\simeq\, R_n\stackrel{\frown}{\otimes} \{\overline{H_1}\}_r \stackrel{\frown}{\otimes} \{H_3\}_c\stackrel{\frown}{\otimes} C_n \\ & \,\simeq\, \bigl\{\ell^2_n\stackrel{2}{\otimes}\overline{H_1}\bigr\}_r \stackrel{\frown}{\otimes} \bigl\{\ell^2_n\stackrel{2}{\otimes} H_3\bigr\}_c \\ & \,\simeq\, S^1\bigl(\ell^2_n(H_1), \ell^2_n(H_3)\bigr) \end{align*} isometrically. Hence a thorough look at these identifications shows that $$ d_n = \sup\Bigl\{\bignorm{\bigl[u(y_i,x_j)\bigr]_{1\leq i,j\leq n}}_{S^1(\ell^2_n(H_1), \ell^2_n(H_3))}\Bigr\}, $$ where the supremum runs over all $$ (x_1,\ldots,x_n)\in \ell^2_n\stackrel{2}{\otimes}S^2(H_1,H_2) \qquad\hbox{and}\qquad (y_1,\ldots,y_n)\in\ell^2_n\stackrel{2}{\otimes}S^2(H_2,H_3) $$ of norms less than or equal to $1$. This yields the result. \end{proof} The next result, which should be compared to Proposition \ref{2OM-B}, provides a characterization of completely bounded $S^1$-operator multipliers. Before stating it, we note that we have $S^1(H_1)\stackrel{\frown}{\otimes} S^1(H_3)\subset S^1(H_1)\stackrel{h}{\otimes} S^1(H_3)$ completely contractively (see e.g. \cite[Theorem 9.2.1]{ER}). Consequently $$ CB\bigl(S^1(H_1)\stackrel{h}{\otimes} S^1(H_3), B(H_2)^{op}\bigr)\,\subset\, CB\bigl(S^1(H_1)\stackrel{\frown}{\otimes} S^1(H_3), B(H_2)^{op}\bigr) $$ contractively. Applying Lemma \ref{1Injective1} (c), and using (\ref{1S1S1}) and (\ref{1w*h}), we deduce a contractive embedding $$ B(H_2)^{op}\overline{\otimes}\bigl( B(H_1)\stackrel{w^*h}{\otimes}B(H_3)\bigr)\,\subset\, B(H_1)\overline{\otimes} B(H_2)^{op} \overline{\otimes} B(H_3). $$ \begin{theorem}\label{2OM-S1} Let $\varphi\in B(H_1)\overline{\otimes} B(H_2)^{op}\overline{\otimes} B(H_3)$. Then $\tau_\varphi$ is a completely bounded $S^1$-operator multiplier if and only if $\varphi$ belongs to $B(H_2)^{op}\overline{\otimes}\bigl( B(H_1)\stackrel{w^*h}{\otimes}B(H_3)\bigr)$. Further (\ref{2OM-B1}) restricts to a $w^*$-continuous completely isometric identification \begin{equation}\label{2Ident3} B(H_2)^{op}\overline{\otimes}\bigl( B(H_1)\stackrel{w^*h}{\otimes}B(H_3)\bigr)\,\simeq\, CB\bigl(\Gamma(H_1,H_2,H_3), S^1(H_1,H_3)\bigr). \end{equation} \end{theorem} \begin{proof} The scheme of proof is similar to the one of Proposition \ref{2OM-B}. Recall (\ref{2Gamma}) from this proof. On the one hand, using commutativity of the operator space projective tensor product, we deduce a completely isometric identification $$ \Gamma(H_1,H_2,H_3) \,\simeq\, \{H_2\}_r \stackrel{\frown}{\otimes} \{\overline{H}_2\}_c \stackrel{\frown}{\otimes} \{\overline{H}_1\}_r \stackrel{\frown}{\otimes} \{H_3\}_c, $$ and then, by Proposition \ref{1Recap} (c), \begin{equation}\label{2Ident2} \Gamma(H_1,H_2,H_3) \,\simeq\, S^1(\overline{H}_2) \stackrel{\frown}{\otimes} S^1(H_1,H_3). 
\end{equation} On the other hand, it follows from (\ref{1REC}) and Proposition \ref{1Recap} (a) that $$ S^1(H_1,H_3) \stackrel{\frown}{\otimes} S^\infty(H_3,H_1) \,\simeq\,\{\overline{H_1}\}_r\stackrel{h}{\otimes} S^\infty(H_3,H_1)\stackrel{h}{\otimes}\{H_3\}_c. $$ Then using (\ref{1Compact}), we deduce that $$ S^1(H_1,H_3) \stackrel{\frown}{\otimes} S^\infty(H_3,H_1) \,\simeq\, \bigl(\{\overline{H_1}\}_r\stackrel{h}{\otimes} \{H_1\}_c\bigr) \stackrel{h}{\otimes} \bigl(\{\overline{H_3}\}_r\stackrel{h}{\otimes}\{H_3\}_c\bigr). $$ Applying Proposition \ref{1Recap} (a) again together with (\ref{1RC}), we obtain that $$ S^1(H_1,H_3) \stackrel{\frown}{\otimes} S^\infty(H_3,H_1) \,\simeq\, S^1(H_1) \stackrel{h}{\otimes} S^1(H_3) $$ completely isometrically. Combining the last identification with (\ref{2Ident2}), we find \begin{equation}\label{2Ident4} \Gamma(H_1,H_2,H_3)\stackrel{\frown}{\otimes} S^\infty(H_3,H_1)\,\simeq\, S^1(\overline{H}_2) \stackrel{\frown}{\otimes}\bigl(S^1(H_1) \stackrel{h}{\otimes} S^1(H_3)\bigr). \end{equation} We now pass to duals. First by (\ref{1Duality4}) and (\ref{1S11}), we have a $w^*$-continuous completely isometric identification $$ \bigl(\Gamma(H_1,H_2,H_3)\stackrel{\frown}{\otimes} S^\infty(H_3,H_1)\bigr)^* \,\simeq\, CB\bigl(\Gamma(H_1,H_2,H_3), S^1(H_1,H_3)\bigr). $$ Second by (\ref{1Duality4}) and Lemma \ref{1Opp}, we have $w^*$-continuous completely isometric identifications \begin{align*} \bigl(S^1(\overline{H}_2) \stackrel{\frown}{\otimes}\bigl(S^1(H_1) \stackrel{h}{\otimes} S^1(H_3)\bigr)\bigr)^* \,&\simeq\, CB\bigl(S^1(H_1) \stackrel{h}{\otimes} S^1(H_3),B(\overline{H_2})\bigr)\\ &\simeq\, CB\bigl(S^1(H_1) \stackrel{h}{\otimes} S^1(H_3),B(H_2)^{op}\bigr). \end{align*} Equivalently, by Lemma \ref{1Injective1} (c), we have $$ \bigl(S^1(\overline{H}_2) \stackrel{\frown}{\otimes}\bigl(S^1(H_1) \stackrel{h}{\otimes} S^1(H_3)\bigr)\bigr)^* \,\simeq\,B(H_2)^{op}\overline{\otimes}\bigl(B(H_1)\stackrel{w^*h}{\otimes} B(H_3)\bigr). $$ Thus (\ref{2Ident4}) yields a $w^*$-continuous, completely isometric onto mapping $$ L\colon B(H_2)^{op}\overline{\otimes}\bigl(B(H_1)\stackrel{w^*h}{\otimes} B(H_3)\bigr) \longrightarrow CB\bigl(\Gamma(H_1,H_2,H_3), S^1(H_1,H_3)\bigr). $$ Arguing as in the proof of Proposition \ref{2OM-B}, it now suffices to show that for any $R\in B(H_1)$, $S\in B(H_2)$ and $T\in B(H_3)$, $L(S\otimes R\otimes T)$ coincides with $\widetilde{\tau}_{R\otimes S\otimes T}$. Next, it suffices to show that \begin{equation}\label{2Check2} L(S\otimes R\otimes T)(y\otimes x) = TySxR \end{equation} when $R,S,T$ are rank one and when $x\in S^2(H_1,H_2)$ and $y\in S^2(H_2,H_3)$ are rank one. We let $\xi_i,\eta_i,h_i,k_i\in H_i$ for $i=1,2,3$ and consider $R=h_1\otimes\overline{k_1}$, $S=h_2\otimes\overline{k_2}$, $T=h_3\otimes\overline{k_3}$, $x=\overline{\xi_1}\otimes\eta_2$ and $y= \overline{\xi_2}\otimes\eta_3$. Then $y\otimes x\in \Gamma(H_1,H_2,H_3)$ corresponds to $(\eta_2\otimes\overline{\xi_2})\otimes (\overline{\xi_1}\otimes\eta_3)\in S^1(\overline{H}_2)\otimes S^1(H_1,H_3)$ in the identification (\ref{2Ident2}). Hence $y\otimes x\otimes(\eta_1\otimes\overline{\xi_3})$ regarded as an element of $\Gamma(H_1,H_2,H_3)\otimes S^\infty(H_3,H_1)$ corresponds to $$ (\eta_2\otimes\overline{\xi_2})\otimes (\overline{\xi_1}\otimes\eta_1)\otimes(\overline{\xi_3}\otimes\eta_3) \,\in\, S^1(\overline{H}_2)\otimes S^1(H_1)\otimes S^1(H_3) $$ in the identification (\ref{2Ident4}). 
Since $$ \widehat{S}\otimes R\otimes T=\overline{k_2}\otimes h_2\otimes h_1\otimes\overline{k_1}\otimes h_3\otimes\overline{k_3} \,\in\, B(\overline{H}_2)\otimes B(H_1)\otimes B(H_3), $$ we then have $$ \bigl\langle\bigl[ L(S\otimes R\otimes T)(y\otimes x )\bigr](\eta_1),\xi_3\bigr\rangle = \langle \eta_2,k_2\rangle\langle h_2,\xi_2\rangle \langle h_1,\xi_1\rangle\langle \eta_1,k_1\rangle \langle h_3,\xi_3\rangle\langle \eta_3,k_3\rangle. $$ By (\ref{2Trace}), the right hand side of this equality is equal to $\langle TySxR(\eta_1),\xi_3\rangle$. This proves the identity (\ref{2Check2}), and hence the result. \end{proof} \section{Module maps}\label{4MOD} As in the previous section, we consider three Hilbert spaces $H_1,H_2,H_3$. We further consider von Neumann subalgebras $$ M_1\subset B(H_1),\qquad M_2\subset B(H_2)\quad\hbox{and}\qquad M_3\subset B(H_3) $$ acting on these spaces. For $i=1,2,3$, we let $M_i'\subset B(H_i)$ be the commutant of $M_i$. Let $u\colon S^2(H_2,H_3)\times S^2(H_1,H_2)\to B(H_1,H_3)$ be a bounded bilinear operator. We say that $u$ is an $(M'_3,M'_2,M'_1)$-module map (or is $(M'_3,M'_2,M'_1)$-modular) provided that $$ u(Ty,x)=Tu(y,x),\qquad u(y,xR)=u(y,x)R\quad\hbox{and}\quad u(yS,x)=u(y,Sx) $$ for any $x\in S^2(H_1,H_2)$, $y\in S^2(H_2,H_3)$, $R\in M'_1$, $S\in M'_2$ and $T\in M'_3$. It will be convenient to associate to $u$ the following $4$-linear bounded operators. We define \begin{equation}\label{3U11} U_1\colon \overline{H_2}\times H_2\times \overline{H_3}\times H_3\longrightarrow B(H_1) \end{equation} by \begin{equation}\label{3U12} \bigl\langle \bigl[U_1(\overline{\xi_2},\eta_2,\overline{\xi_3},\eta_3)\bigr] (\eta_1),\xi_1\bigr\rangle \,=\, \bigl\langle \bigl[u(\overline{\xi_2}\otimes\eta_3, \overline{\xi_1}\otimes\eta_2)\bigr] (\eta_1),\xi_3\bigr\rangle \end{equation} for any $\xi_1,\eta_1\in H_1$, $\xi_2,\eta_2\in H_2$ and $\xi_3,\eta_3\in H_3$. Likewise we define $$ U_2\colon\overline{H_1}\times H_1\times \overline{H_3}\times H_3\to B(H_2) \qquad\hbox{and}\qquad U_3\colon\overline{H_1}\times H_1\times \overline{H_2}\times H_2\to B(H_3) $$ by \begin{align*} \bigl\langle \bigl[U_2(\overline{\xi_1},\eta_1,\overline{\xi_3},\eta_3)\bigr] (\eta_2),\xi_2\bigr\rangle \,&=\, \bigl\langle \bigl[u(\overline{\xi_2}\otimes\eta_3, \overline{\xi_1}\otimes\eta_2)\bigr] (\eta_1),\xi_3\bigr\rangle\\ \bigl\langle \bigl[U_3(\overline{\xi_1},\eta_1,\overline{\xi_2},\eta_2)\bigr] (\eta_3),\xi_3\bigr\rangle \,&=\, \bigl\langle \bigl[u(\overline{\xi_2}\otimes\eta_3, \overline{\xi_1}\otimes\eta_2)\bigr] (\eta_1),\xi_3\bigr\rangle. \end{align*} \begin{lemma}\label{3LemMod} Let $u\in B_2\bigl(S^2(H_2,H_3)\times S^2(H_1,H_2), B(H_1,H_3)\bigr)$. Then $u$ is an $(M'_3,M'_2,M'_1)$-module map if and only if for any $i=1,2,3$, $U_i$ is valued in $M_i$. \end{lemma} \begin{proof} Let $R\in B(H_1)$. For any $\eta_1,\xi_1\in H_1$, $\eta_2,\xi_2\in H_2$ and $\eta_3,\xi_3\in H_3$, we have $$ \bigl\langle \bigl[u(\overline{\xi_2}\otimes\eta_3, \overline{\xi_1}\otimes\eta_2)\bigr] R(\eta_1),\xi_3\bigr\rangle\, =\, \bigl\langle \bigl[U_1(\overline{\xi_2},\eta_2,\overline{\xi_3},\eta_3)\bigr] R(\eta_1),\xi_1\bigr\rangle. 
$$ Further $(\overline{\xi_1}\otimes\eta_2)R= \overline{R^*(\xi_1)}\otimes\eta_2$, hence \begin{align*} \bigl\langle \bigl[u(\overline{\xi_2}\otimes\eta_3, (\overline{\xi_1}\otimes\eta_2)R)\bigr] (\eta_1),\xi_3\bigr\rangle\, & =\,\bigl\langle \bigl[U_1(\overline{\xi_2},\eta_2,\overline{\xi_3},\eta_3)\bigr] (\eta_1),R^*(\xi_1)\bigr\rangle\\ & =\,\bigl\langle R\bigl[U_1(\overline{\xi_2},\eta_2,\overline{\xi_3},\eta_3)\bigr] (\eta_1),\xi_1\bigr\rangle. \end{align*} Since $\overline{H_1}\otimes H_2$ and $\overline{H_2}\otimes H_3$ are dense in $S^2(H_1,H_2)$ and $S^2(H_2,H_3)$, respectively, we deduce that $u(y,xR)=u(y,x)R$ for any $x\in S^2(H_1,H_2)$ and any $y\in S^2(H_2,H_3)$ if and only if $R$ commutes with $U_1(\overline{\xi_2},\eta_2,\overline{\xi_3},\eta_3)$ for any $\xi_2,\eta_2\in H_2$ and $\xi_3,\eta_3\in H_3$. Consequently $u$ is $(\ensuremath{\mathbb{C}},\ensuremath{\mathbb{C}},M_1')$-modular if and only if the range of $U_1$ commutes with $M_1'$. By the Bicommutant Theorem, this means that $u$ is $(\ensuremath{\mathbb{C}},\ensuremath{\mathbb{C}},M_1')$-modular if and only if $U_1$ is valued in $M_1$. Likewise $u$ is $(\ensuremath{\mathbb{C}},M_2',\ensuremath{\mathbb{C}})$-modular (resp. $(M_3',\ensuremath{\mathbb{C}},\ensuremath{\mathbb{C}})$-modular) if and only if $U_2$ is valued in $M_2$ (resp. $U_3$ is valued in $M_3$). This proves the result. \end{proof} \begin{corollary}\label{3Mod-B} Let $\varphi\in B(H_1)\overline{\otimes} B(H_2)^{op}\overline{\otimes} B(H_3)$. Then $\tau_\varphi$ is $(M_3',M_2',M_1')$-modular if and only if $\varphi\in M_1\overline{\otimes} M_2^{op}\overline{\otimes} M_3$. This provides (as a restriction of (\ref{2OM-B1})) a $w^*$-continuous completely isometric identification $$ M_1\overline{\otimes} M_2^{op}\overline{\otimes} M_3 \,\simeq\, CB_{(M'_3,M'_2,M'_1)} \bigl(\Gamma(H_1,H_2,H_3), B(H_1,H_3)\bigr), $$ where the right-hand side denotes the subspace of $CB\bigl(\Gamma(H_1,H_2,H_3), B(H_1,H_3)\bigr)$ of all completely bounded maps $\widetilde{u}$ such that $u$ is an $(M'_3,M'_2,M'_1)$-module map. \end{corollary} \begin{proof} Consider the duality relation $$ B(H_2)^{op}\overline{\otimes} B(H_3) = \bigl(S^1(H_2)^{op} \stackrel{\frown}{\otimes} S^1(H_3)\bigr)^* $$ provided by (\ref{1S1S1}). We claim that in the space $S^1(H_2)^{op} \stackrel{\frown}{\otimes} S^1(H_3)$, we have the equality \begin{equation}\label{3perp} \bigl(M_{2}^{op}\overline{\otimes} M_3\bigr)_{\perp} \,=\,\overline{(M_{2\perp}^{op}\otimes S^1(H_3) + S^1(H_2)^{op}\otimes M_{3\perp})}. \end{equation} Indeed let $z\in B(H_2)^{op}\overline{\otimes} B(H_3)$ and let $z'\colon S^1(H_3)\to B(H_2)^{op}$ and $z''\colon S^1(H_2)^{op} \to B(H_3)$ be associated with $z$ (see Lemma \ref{1Slice}). Then $z\in \bigl(M_{2\perp}^{op}\otimes S^1(H_3)\bigr)^{\perp}$ if and only if $z'$ is valued in $M_{2}^{op}$, whereas $z\in \bigl(S^1(H_2)^{op}\otimes M_{3\perp}\bigr)^{\perp}$ if and only if $z''$ is valued in $M_{3}$. Consequently, $z$ belongs to the orthogonal of $M_{2\perp}^{op}\otimes S^1(H_3) + S^1(H_2)^{op}\otimes M_{3\perp}$ if and only if $z'$ is valued in $M_{2}^{op}$ and $z''$ is valued in $M_{3}$. In turn this is equivalent to the fact that $z'$ induces an element of $CB(M_{3*}, M_{2}^{op})$. Applying Lemma \ref{1Injective1} (a), we deduce that the orthogonal of $M_{2\perp}^{op}\otimes S^1(H_3) + S^1(H_2)^{op}\otimes M_{3\perp}$ is equal to $M_{2}^{op}\overline{\otimes} M_3$. The claim (\ref{3perp}) follows at once. Let $\varphi\in B(H_1)\overline{\otimes} B(H_2)^{op}\overline{\otimes} B(H_3)$.
Using Lemma \ref{1Slice}, we may associate three completely bounded operators \begin{align*} \varphi^1 &\colon S^1(H_2)^{op}\stackrel{\frown}{\otimes} S^1(H_3)\longrightarrow B(H_1),\\ \varphi^2 &\colon S^1(H_1)\stackrel{\frown}{\otimes} S^1(H_3)\longrightarrow B(H_2)^{op},\\ \varphi^3 &\colon S^1(H_1)\stackrel{\frown}{\otimes} S^1(H_2)^{op}\longrightarrow B(H_3) \end{align*} to $\varphi$. According to Lemma \ref{1Injective1} (a), $\varphi$ belongs to $M_1\overline{\otimes} M_2^{op}\overline{\otimes} M_3$ if and only if $\varphi^1$ is valued in $M_1$ and $\varphi^1$ vanishes on $(M_2^{op}\overline{\otimes} M_3)_{\perp}$. By (\ref{3perp}), $\varphi^1$ vanishes on $(M_2^{op}\overline{\otimes} M_3)_{\perp}$ if and only if it vanishes both on $M_{2\perp}^{op}\otimes S^1(H_3)$ and on $S^1(H_2)^{op}\otimes M_{3\perp}$. A quick look at the definitions of $\varphi^1, \varphi^2,\varphi^3$ reveals that $\varphi^1$ vanishes on $M_{2\perp}^{op}\otimes S^1(H_3)$ if and only if $\varphi^2$ is valued in $M^{op}_{2}$ and that $\varphi^1$ vanishes on $S^1(H_2)^{op}\otimes M_{3\perp}$ if and only if $\varphi^3$ is valued in $M_{3}$. Altogether we obtain that $\varphi$ belongs to $M_1\overline{\otimes} M_2^{op}\overline{\otimes} M_3$ if and only if $\varphi^1$ is valued in $M_1$, $\varphi^2$ is valued in $M_2^{op}$ and $\varphi^3$ is valued in $M_3$. Let $u=\tau_\varphi$. It follows from Remark \ref{2Rk} that for any $\eta_2,\xi_2\in H_2$ and $\eta_3,\xi_3\in H_3$, we have $$ \varphi^1(\eta_2\otimes \overline{\xi_2}\otimes \overline{\xi_3}\otimes \eta_3) = U_1(\overline{\xi_2},\eta_2,\overline{\xi_3},\eta_3), $$ where $U_1$ is defined by (\ref{3U11}) and (\ref{3U12}). Thus $\varphi^1$ is valued in $M_1$ if and only if $U_1$ is valued in $M_1$. Likewise $\varphi^2$ is valued in $M_2^{op}$ if and only if $U_2$ is valued in $M_2$ and $\varphi^3$ is valued in $M_3$ if and only if $U_3$ is valued in $M_3$. By Lemma \ref{3LemMod} we deduce that $u$ is $(M_3',M_2',M_1')$-modular if and only if $\varphi\in M_1\overline{\otimes} M_2^{op}\overline{\otimes} M_3$. \end{proof} We now turn to the study of modular completely bounded $S^1$-operator multipliers. We let $$ CB_{(M'_3,M'_2,M'_1)}\bigl(\Gamma(H_1,H_2,H_3), S^1(H_1,H_3)\bigr) $$ denote the subspace of $CB\bigl(\Gamma(H_1,H_2,H_3), S^1(H_1,H_3)\bigr)$ of all completely bounded maps $\widetilde{u}$ such that $u$ is an $(M'_3,M'_2,M'_1)$-module map. According to (\ref{1Embed}) and (\ref{1w*h}), $M_1\stackrel{w^*h}{\otimes} M_3$ can be regarded as a $w^*$-closed subspace of the dual operator space $B(H_1)\stackrel{w^*h}{\otimes}B(H_3)$. Consequently, $M_2^{op}\overline{\otimes}\bigl(M_1\stackrel{w^*h}{\otimes} M_3\bigr)$ can be regarded as a $w^*$-closed subspace of the dual operator space $B(H_2)^{op}\overline{\otimes}\bigl(B(H_1)\stackrel{w^*h}{\otimes}B(H_3)\bigr)$. The next statement is a continuation of Theorem \ref{2OM-S1}. \begin{theorem}\label{3OM-S1} Assume that $M_2$ is injective. \begin{itemize} \item [(a)] Let $\varphi\in B(H_2)^{op}\overline{\otimes}\bigl(B(H_1)\stackrel{w^*h}{\otimes}B(H_3)\bigr)$. Then $\varphi$ belongs to $M_2^{op}\overline{\otimes}\bigl(M_1\stackrel{w^*h}{\otimes} M_3\bigr)$ if and only if $\tau_\varphi$ is $(M'_3,M'_2,M'_1)$-modular. \item [(b)] The identification (\ref{2Ident3}) restricts to \begin{equation}\label{3Ident1} M_2^{op}\overline{\otimes}\bigl( M_1\stackrel{w^*h}{\otimes}M_3\bigr)\,\simeq \, CB_{(M'_3,M'_2,M'_1)}\bigl(\Gamma(H_1,H_2,H_3), S^1(H_1,H_3)\bigr).
\end{equation} \end{itemize} \end{theorem} \begin{proof} Clearly (b) is a consequence of (a), so we only treat the first item. Let $\varphi\in B(H_2)^{op}\overline{\otimes}\bigl(B(H_1)\stackrel{w^*h}{\otimes}B(H_3)\bigr)$. Let $$ \sigma\colon S^1(H_1)\stackrel{h}{\otimes}S^1(H_3)\longrightarrow B(H_2)^{op} $$ correspond to $\varphi$ in the identification provided by Lemma \ref{1Injective1} (c). Then let $$ \rho\colon S^1(H_2)^{op} \longrightarrow B(H_1)\stackrel{w^*h}{\otimes} B(H_3) $$ be the restriction of the adjoint of $\sigma$ to $S^1(H_2)^{op}$. We have assumed that $M_2$ is injective. It therefore follows from Lemma \ref{1Injective1} (b) that $\varphi\in M_2^{op}\overline{\otimes}\bigl(M_1\stackrel{w^*h}{\otimes} M_3\bigr)$ if and only if \begin{equation}\label{2Fubini1} \sigma\bigl(S^1(H_1)\stackrel{h}{\otimes}S^1(H_3)\bigr) \,\subset\, M_2^{op} \end{equation} and \begin{equation}\label{2Fubini2} \rho\bigl(S^1(H_2)^{op}\bigr)\,\subset\,M_1\stackrel{w^*h}{\otimes} M_3. \end{equation} Let $u=\tau_\varphi$. We will now show that $u$ is an $(M'_3,M'_2,M'_1)$-module map if and only if (\ref{2Fubini1}) and (\ref{2Fubini2}) hold true. First we observe that for any $\xi_1,\eta_1\in H_1$ and $\xi_3,\eta_3\in H_3$, $$ \sigma\bigl((\overline{\xi_1}\otimes\eta_1)\otimes(\overline{\xi_3}\otimes\eta_3)\bigr) \,=\, U_2(\overline{\xi_1},\eta_1,\overline{\xi_3},\eta_3). $$ Indeed, this follows from Remark \ref{2Rk} and the definition of $U_2$. Since $\overline{H_1}\otimes H_1$ and $\overline{H_3}\otimes H_3$ are dense in $S^1(H_1)$ and $S^1(H_3)$, respectively, we deduce that (\ref{2Fubini1}) holds true if and only if $U_2$ is valued in $M_2$. For any $v\in S^1(H_2)^{op}$, we may regard $\rho(v)$ as an element of $\bigl(S^1(H_1)\stackrel{h}{\otimes} S^1(H_3)\bigr)^*$. Then following the notation in Lemma \ref{1Slice-H}, we let $$ [\rho(v)]'\colon S^1(H_3)\longrightarrow B(H_1) \qquad\hbox{and}\qquad [\rho(v)]''\colon S^1(H_1)\longrightarrow B(H_3) $$ be the bounded linear maps associated to $\rho(v)$. For any $\xi_2,\eta_2\in H_2$ and $\xi_3,\eta_3\in H_3$, we have $$ \bigl[\rho(\eta_2\otimes\overline{\xi_2})\bigr]'(\overline{\xi_3}\otimes\eta_3) = U_1(\overline{\xi_2},\eta_2, \overline{\xi_3},\eta_3). $$ Indeed this follows again from Remark \ref{2Rk}. Since $H_2\otimes \overline{H_2}$ and $\overline{H_3}\otimes H_3$ are dense in $S^1(H_2)^{op}$ and $S^1(H_3)$, respectively, we deduce that $[\rho(v)]'$ maps $S^1(H_3)$ into $M_1$ for any $v\in S^1(H_2)^{op}$ if and only if $U_1$ is valued in $M_1$. Likewise, $[\rho(v)]''$ maps $S^1(H_1)$ into $M_3$ for any $v\in S^1(H_2)^{op}$ if and only if $U_3$ is valued in $M_3$. Applying Lemma \ref{1Slice-H}, we deduce that (\ref{2Fubini2}) holds true if and only if $U_1$ is valued in $M_1$ and $U_3$ is valued in $M_3$. Altogether we have that (\ref{2Fubini1}) and (\ref{2Fubini2}) both hold true if and only if for any $i=1,2,3$, $U_i$ is valued in $M_i$. According to Lemma \ref{3LemMod}, this is equivalent to $u=\tau_\varphi$ being $(M'_3,M'_2,M'_1)$-modular. \end{proof} \section{The Sinclair-Smith factorization theorem}\label{5SS} Let $I$ be an index set, and consider the Hilbertian operator spaces $$ C_I = \{\ell^2_I\}_c \qquad\hbox{and}\qquad R_I = \{\ell^2_I\}_r. $$ For any operator space $G$, we set $$ C_I^w(G^*) = C_I\overline{\otimes} G^* \qquad\hbox{and}\qquad R_I^w(G^*) = R_I\overline{\otimes} G^*. $$ This notation is taken from \cite[1.2.26--1.2.29]{BLM}, to which we refer for more information.
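As an illustration (a standard example, recorded here for orientation only and not used in the sequel), suppose that $M=B(\ell^2_I)$ with $I$ infinite, let $(e_{ij})_{i,j\in I}$ denote the matrix units and fix $i_0\in I$. Then the family $(e_{i_0 i})_{i\in I}$ defines an element of $C_I^w(M)$ of norm one, since
$$
\sum_{i\in J}e_{i_0 i}^*\,e_{i_0 i}\,=\,\sum_{i\in J}e_{ii}\,\leq\, 1
\qquad\hbox{for every finite } J\subset I,
$$
whereas it does not define an element of $R_I^w(M)$, since $\sum_{i\in J}e_{i_0 i}\,e_{i_0 i}^*={\rm card}(J)\, e_{i_0 i_0}$ is unbounded. Here we use the descriptions of $C_I^w(M)$ and $R_I^w(M)$ by uniformly bounded sums recalled just below.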
We recall that $C_I^w(G^*)$ can be equivalently defined as the space of all families $(x_i)_{i\in I}$ of elements of $G^*$ such that the sums $\sum_{i\in J} x^*_ix_i$, for finite $J\subset I$, are uniformly bounded. Likewise, $R_I^w(G^*)$ is equal to the space of all families $(y_i)_{i\in I}$ of elements of $G^*$ such that the sums $\sum_{i\in J} y_iy_i^*$, for finite $J\subset I$, are uniformly bounded. Assume that $G^*=M$ is a von Neumann algebra, and consider $(x_i)_{i\in I}\in C_I^w(M)$ and $(y_i)_{i\in I}\in R_I^w(M)$. Then the family $(y_i x_i)_{i\in I}$ is summable in the $w^*$-topology of $M$ and we let \begin{equation}\label{4Sum} \sum_{i\in I} y_i x_i\ \in M \end{equation} denote its sum. We note the obvious fact that for any $x_i\in M, i\in I$, $(x_i)_{i\in I}$ belongs to $R_I^w(M)$ if and only if $(x_i^*)_{i\in I}$ belongs to $C_I^w(M)$. In this case we set $$ \bigl[(x_i)_{i\in I}\bigr]^* \,=\, (x_i^*)_{i\in I}. $$ \begin{lemma}\label{4CI-CB} Let $E,G$ be operator spaces and let $I$ be an index set. For any $\alpha=(\alpha_i)_{i\in I}\in C_I^w\bigl(CB(E,G^*)\bigr)$, the (well-defined) operator $\widehat{\alpha}\colon E\to C_I^w(G^*)$, $\widehat{\alpha}(x) =(\alpha_i(x))_{i\in I}$, is completely bounded and the mapping $\alpha\mapsto \widehat{\alpha}$ induces a $w^*$-continuous completely isometric identification $$ C_I^w\bigl(CB(E,G^*)\bigr)\,\simeq\, CB\bigl(E, C_I^w(G^*)\bigr). $$ Likewise we have $$ R_I^w\bigl(CB(E,G^*)\bigr)\,\simeq\, CB\bigl(E, R_I^w(G^*)\bigr). $$ \end{lemma} \begin{proof} According to Lemma \ref{1Injective1} (c) and (\ref{1Duality4}), $C_I^w(Z^*)\simeq (R_I\stackrel{\frown}{\otimes} Z)^*$ for any operator space $Z$. Applying this identification, first with $Z=E\stackrel{\frown}{\otimes} G$ and then with $Z=G$, we obtain that \begin{align*} C_I^w\bigl(CB(E,G^*)\bigr)\, &\simeq\, C_I^w\bigl((E\stackrel{\frown}{\otimes} G)^*\bigr)\quad\hbox{by (\ref{1Duality4})}\\ &\simeq\, (R_I\stackrel{\frown}{\otimes} E\stackrel{\frown}{\otimes} G)^*\\ &\simeq\, CB\bigl(E, (R_I\stackrel{\frown}{\otimes} G)^*\bigr)\quad\hbox{by (\ref{1Duality4})}\\ &\simeq\, CB\bigl(E, C_I^w(G^*)\bigr). \end{align*} A straightforward verification reveals that this identification is implemented by $\widehat{\alpha}$. This yields the first part of the lemma. The proof of the second part is identical. \end{proof} We can now state the Sinclair-Smith factorization theorem, which will be used in the next section. \begin{theorem}\label{4SS} (\cite{SS}) Let $E,F$ be operator spaces, let $M$ be an injective von Neumann algebra and let $w\colon F\stackrel{h}{\otimes} E\to M$ be a completely bounded map. Then there exist an index set $I$ and two families $$ \alpha=(\alpha_i)_{i\in I}\in C_I^w\bigl(CB(E,M)\bigr) \qquad\hbox{and}\qquad \beta=(\beta_i)_{i\in I}\in R_I^w\bigl(CB(F,M)\bigr) $$ such that $\cbnorm{\alpha}\cbnorm{\beta} = \cbnorm{w}$ and $$ w(y\otimes x)\,=\,\sum_{i\in I} \beta_i(y)\alpha_i(x), \qquad x\in E,\, y\in F. $$ \end{theorem} In the rest of this section, we give a new (shorter) proof of Theorem \ref{4SS} based on Hilbert $C^*$-modules. We first recall the necessary background on these modules. Let $M$ be a $C^*$-algebra.
Recall that a pre-Hilbert $M$-module is a right $M$-module $\mbox{${\mathcal X}$}$ equipped with a map $\langle\,\cdotp,\,\cdotp\rangle\colon\mbox{${\mathcal X}$}\times \mbox{${\mathcal X}$}\to M$ (called an $M$-valued inner product) satisfying the following properties: \begin{itemize} \item $\langle s,s\rangle\geq 0$ for every $s\in \mbox{${\mathcal X}$}$; \item $\langle s,s\rangle=0$ if and only if $s=0$; \item $\langle s,t\rangle=\langle t,s\rangle^*$ for every $s,t\in \mbox{${\mathcal X}$}$; \item $\langle s, t_1m_1+t_2m_2\rangle=\langle s,t_1\rangle m_1+\langle s,t_2\rangle m_2$ for every $s,\,t_1,\,t_2\in \mbox{${\mathcal X}$}$ and $m_1,\, m_2\in M$. \end{itemize} In this setting, the map $\|\cdot\|\colon\mbox{${\mathcal X}$}\to\mathbb R^+$, defined by $$ \|s\|=\|\langle s,s\rangle\|^{1/2}, \qquad s\in \mbox{${\mathcal X}$}, $$ is a norm on $\mbox{${\mathcal X}$}$. A pre-Hilbert $M$-module which is complete with respect to its norm is said to be a Hilbert $M$-module. By \cite{B} (see also \cite[8.2.1]{BLM}), a Hilbert $M$-module $\mbox{${\mathcal X}$}$ has a canonical operator space structure obtained by letting, for any $n\geq 1$, $$ \bignorm{(s_{ij})_{i,j}} \, =\, \Biggnorm{\Biggl(\sum_{k=1}^n\langle s_{ki}, s_{kj} \rangle\Biggr)_{i,j}}_{M_n(M)}^{1/2}, \qquad (s_{ij})_{i,j}\in M_n(\mbox{${\mathcal X}$}). $$ A morphism between two Hilbert $M$-modules $\mbox{${\mathcal X}$}_1$ and $\mbox{${\mathcal X}$}_2$ is a bounded $M$-module map $u\colon \mbox{${\mathcal X}$}_1\to \mbox{${\mathcal X}$}_2$. A unitary isomorphism $u\colon \mbox{${\mathcal X}$}_1\to \mbox{${\mathcal X}$}_2$ is an isomorphism preserving the $M$-valued inner products. Any such map is a complete isometry (see e.g. \cite[Proposition 8.2.2]{BLM}). Assume now that $M$ is a von Neumann algebra. As a basic example, we recall that whenever $p\in M$ is a projection, then the subspace $pM$ of $M$ is a Hilbert $M$-module, when equipped with multiplication on the right as the $M$-module action, and with the $M$-valued inner product $\langle x,y\rangle = x^*y$, for $x,y\in pM$. We recall the construction of the ultraweak direct sum Hilbert $M$-module. Let $I$ be an index set and let $\{\mbox{${\mathcal X}$}_i\, :\, i\in I\}$ be a collection of Hilbert $M$-modules indexed by $I$. We let $\langle\,\cdotp,\,\cdotp\rangle_i$ denote the $M$-valued inner product of $\mbox{${\mathcal X}$}_i$, for any $i\in I$. Let $\mbox{${\mathcal X}$}$ be the set of all families $s=(s_i)_{i\in I}$, with $s_i\in\mbox{${\mathcal X}$}_i$, such that the sums $\sum_{i\in J}\langle s_i,s_i\rangle_i$, for finite $J\subset I$, are uniformly bounded. Since $\langle s_i,s_i\rangle_i\geq 0$ for each $i\in I$, the family $(\langle s_i,s_i\rangle_i)_{i\in I}$ is then summable in the $w^*$-topology of $M$. Using the polarization identity, it is easy to deduce that for any $s=(s_i)_{i\in I}$ and any $t=(t_i)_{i\in I}$ in $\mbox{${\mathcal X}$}$, the family $(\langle s_i,t_i\rangle_i)_{i\in I}$ is summable in the $w^*$-topology of $M$. Then one defines $$ \langle s,t\rangle\,=\, \sum_{i\in I} \langle s_i,t_i\rangle_i. $$ It turns out that $\mbox{${\mathcal X}$}$ is a right $M$-module for the action $(s_i)_{i\in I}\cdot m = (s_i m)_{i\in I}$, and that equipped with $\langle \,\cdotp,\,\cdotp\rangle$, $\mbox{${\mathcal X}$}$ is a Hilbert $M$-module. The latter is called the ultraweak direct sum of $\{\mbox{${\mathcal X}$}_i\, :\, i\in I\}$ and it is denoted by $$ \mbox{${\mathcal X}$}\,=\,\oplus_{i\in I} \mbox{${\mathcal X}$}_i. $$ See e.g.
\cite[8.5.26]{BLM} for more on this construction. Let $I$ be an index set and consider $C_I^w(M)$ as a right $M$-module in the obvious way. For any $(s_i)_{i\in I}$ and $(t_i)_{i\in I}$ in $C_I^w(M)$ set $$ \langle (s_i)_{i\in I}, (t_i)_{i\in I}\rangle=\sum_{i\in I} s_i^* t_i\,, $$ where this sum is defined by (\ref{4Sum}). This is an $M$-valued inner product, which makes $C_I^w(M)$ a Hilbert $M$-module. Moreover the canonical operator space structure of $C_I^w(M)$ as a Hilbert $M$-module coincides with the one given by writing $C_I^w(M)=C_I\overline{\otimes} M$, see \cite[8.2.3]{BLM}. Further we clearly have $$ C_I^w(M)\,\simeq\,\oplus_{i\in I} M\qquad \hbox{as Hilbert } M\hbox{-modules}. $$ \begin{proof}[Proof of Theorem \ref{4SS}] Assume that $M\subset B(K)$ for some Hilbert space $K$. Let $w\colon F\stackrel{h}{\otimes} E\to M$ be a completely bounded map. By the Christensen-Sinclair factorization theorem (see e.g. \cite[Theorem 9.4.4]{ER}), there exist a Hilbert space $\mbox{${\mathcal H}$}$ and two completely bounded maps $$ a\colon E\to B(K,\mbox{${\mathcal H}$}) \qquad\hbox{and}\qquad b\colon F\to B(\mbox{${\mathcal H}$},K) $$ such that $\cbnorm{a}\cbnorm{b}=\cbnorm{w}$ and $w(y\otimes x)=b(y)a(x)$ for any $x\in E$ and any $y\in F$. Since $M$ is injective, there exists a unital completely positive projection $$ \Psi \colon B(K)\longrightarrow M. $$ As $\Psi$ is valued in $M$, we then have \begin{equation}\label{4Facto1} w(y\otimes x)=\Psi\bigl(b(y)a(x)\bigr), \qquad x\in E, \ y\in F. \end{equation} We introduce $$ C\,=\,\bigl\{T\in B(K,\mbox{${\mathcal H}$})\,:\, \Psi(T^*T)=0\bigr\}. $$ For any $k\in K$, $(T,S)\mapsto\langle\Psi(T^*S)k,k\rangle$ is a nonnegative sesquilinear form on $B(K,\mbox{${\mathcal H}$})$, which vanishes on $\{(T,T)\, :\, T\in C\}$. This implies (by the Cauchy-Schwarz inequality) that $\langle\Psi(T^*S )k,k\rangle=0$ for any $T\in C$ and any $S\in B(K,\mbox{${\mathcal H}$})$. Consequently, $$ C\,=\, \bigl\{T\in B(K,\mbox{${\mathcal H}$})\, :\,\Psi(T^*S)=0 \text{ for any } S\in B(K,\mbox{${\mathcal H}$})\bigr\}. $$ In particular $C$ is a subspace of $B(K,\mbox{${\mathcal H}$})$. Moreover $\Psi$ is an $M$-bimodule map by \cite{To}, hence $$ \Psi((Tm)^*(Tm))=\Psi(m^*T^*Tm)=m^*\Psi(T^*T)m,\qquad m\in M,\, T\in B(K,\mbox{${\mathcal H}$}). $$ Consequently, $C$ is invariant under right multiplication by elements of $M$. Let $N=B(K,\mbox{${\mathcal H}$})/C$ and let $q\colon B(K,\mbox{${\mathcal H}$})\to N$ be the quotient map. The $M$-invariance of $C$ allows us to define a right $M$-module action on $N$ by $$ q(T)\cdot m = q(Tm),\qquad m\in M,\ T\in B(K,\mbox{${\mathcal H}$}). $$ For any $S,\, T\in B(K,\mbox{${\mathcal H}$})$, set $$ \langle q(T),q(S)\rangle_{N}=\Psi(T^*S). $$ Then $\langle\,\cdotp,\,\cdotp\rangle_{N}$ is a well-defined, $M$-valued inner product on $N$, and hence $N$ is a pre-Hilbert $M$-module. For convenience, we keep the notation $N$ to denote its completion, which is a Hilbert $M$-module. The factorization property (\ref{4Facto1}) can now be rephrased as \begin{equation}\label{4Facto2} w(y\otimes x)=\bigl\langle q(b(y)^*),q(a(x))\bigr\rangle_N, \qquad x\in E, \ y\in F. \end{equation} Recall from Paschke's fundamental paper \cite{Pa} that the dual of $N$ (in the Hilbert $M$-module sense) is the space $$ N' = \bigl\{\phi \colon N \to M \, : \, \phi \mbox{ is a bounded } M\mbox{-module map}\bigr\}.
$$ Equip $N'$ with the linear structure obtained from the usual addition of maps and the scalar multiplication given by $(\lambda\cdot\phi)(t) =\overline{\lambda}\phi(t)$ for any $\phi\in N'$, $\lambda\in\ensuremath{\mathbb{C}}$, and $t\in N$. Then $N'$ is a right $M$-module for the action given by $$ (\phi\cdot m)(t) = m^*\phi(t), \qquad \phi\in N',\ m\in M,\ t\in N. $$ Let $\kappa\colon N\to N'$ be defined by $\kappa(s)\colon t\in N \mapsto \langle s,t\rangle\in M$. Then $\kappa$ is a linear map. By \cite[Theorem 3.2]{Pa}, there exists an $M$-valued inner product $\langle\,\cdotp , \,\cdotp\rangle_{N'}$ on $N'$ such that \begin{equation}\label{4Kappa1} \langle \kappa(s),\kappa(t)\rangle_{N'}\, =\, \langle s,t\rangle_N, \qquad s,t\in N, \end{equation} and such that $N'$ is selfdual (see \cite[Section 3]{Pa} for the definition). Then by \cite[Theorem 3.12]{Pa}, $N'$ is unitarily isomorphic to an ultraweak direct sum $\displaystyle{\oplus_{i\in I}} p_i M$, where $(p_i)_{i\in I}$ is a family of non-zero projections in $M$. Summarizing, we then have \begin{equation}\label{4Kappa2} N\,\stackrel{\kappa}{\hookrightarrow}\, N'\,\simeq\, \oplus_i p_i M \,\subset\, \oplus_i M \,\simeq C_I^w(M). \end{equation} Note that by (\ref{4Kappa1}), $\kappa$ is a complete isometry. We claim that the quotient map $q \colon B(K,\mbox{${\mathcal H}$})\to N$ is completely contractive, when $N$ is equipped with its Hilbert $M$-module operator space structure. Indeed, $\Psi$ is completely contractive hence, for any $(S_{ij})_{i,j}\in M_n(B(K,\mbox{${\mathcal H}$}))$, we have \begin{align*} \bignorm{\bigl(q(S_{ij})\bigr)_{i,j}}^2_{M_n(N)}\, & =\, \Biggnorm{\left(\sum_{k=1}^n \Psi(S_{ki}^* S_{kj}) \right)_{i,j}}_{M_n(M)}\\ & \leq \, \Biggnorm{\left(\sum_{k=1}^n S_{ki}^* S_{kj}\right)_{i,j}}_{M_n(B(K))}\\ & = \, \bignorm{\bigl((S_{ij})_{i,j}^*(S_{ij})_{i,j}\bigr)}_{M_n(B(K))}\\ &=\, \bignorm{(S_{ij})_{i,j}}^2_{M_n(B(K,\tiny{\mbox{${\mathcal H}$}}))}. \end{align*} Using (\ref{4Kappa2}), we define $\alpha\colon E\to C_I^w(M)$ by $\alpha(x)=\kappa\bigl(q(a(x))\bigr)$. It follows from the above that $\alpha$ is completely bounded, with $\cbnorm{\alpha}\leq\cbnorm{a}$. Likewise we define $\beta\colon F\to R_I^w(M)$ by $\beta(y)=\bigl[\kappa\bigl(q(b(y)^*)\bigr)]^*$. Then $\beta$ is completely bounded, with $\cbnorm{\beta}\leq\cbnorm{b}$. Consequently, $\cbnorm{\alpha}\cbnorm{\beta} \leq \cbnorm{w}$. In accordance with Lemma \ref{4CI-CB}, let $(\alpha_i)_{i\in I}\in C^w_I\bigl(CB(E,M)\bigr)$ and $(\beta_i)_{i\in I}\in R^w_I\bigl(CB(F,M)\bigr)$ correspond to $\alpha$ and $\beta$, respectively. Then by (\ref{4Facto2}) and (\ref{4Kappa2}), we have $$ w(y\otimes x)\,=\,\langle\beta(y)^*,\alpha(x)\rangle_{N'} \,=\, \bigl\langle (\beta_i(y)^*)_{i\in I},(\alpha_i(x))_{i\in I} \bigr\rangle_{C_I^w(M)}\, =\, \sum_{i\in I} \beta_i(y)\alpha_i(x) $$ for any $x\in E$ and $y\in F$. Once this identity is established, the inequality $\cbnorm{w}\leq \cbnorm{\alpha}\cbnorm{\beta}$ is a classical fact. \end{proof} \section{Factorization of modular operators}\label{6Facto} Consider $H_1,H_2,H_3$ and $M_1,M_2,M_3$, $M_i\subset B(H_i)$, as in Sections \ref{3OM} and \ref{4MOD}.
Using the Hilbert space identification $S^2(H_1,H_2) \simeq\overline{H_1}\stackrel{2}{\otimes} H_2$, Lemma \ref{1Opp} and (\ref{1Normal}), we have von Neumann algebra identifications $$ B(H_1)\overline{\otimes} B(H_2)^{op}\simeq B(H_1)\overline{\otimes} B(\overline{H_2}) \simeq B(H_1\stackrel{2}{\otimes}\overline{H_2})\simeq B(S^2(H_1,H_2))^{op} $$ and hence a von Neumann algebra embedding $$ \tau^1\colon M_1\overline{\otimes} M_2^{op}\,\hookrightarrow\, B(S^2(H_1,H_2))^{op}. $$ Unraveling the above identifications, we see that \begin{equation}\label{6Tau1} \bigl[\tau^1(R\otimes S)\bigr](x)\,=\, SxR,\qquad x\in S^2(H_1,H_2),\, R\in M_1,\,S\in M_2. \end{equation} Further this property determines $\tau^1$. Likewise we may consider $$ \tau^3\colon M_2^{op}\overline{\otimes} M_3\hookrightarrow\, B(S^2(H_2,H_3)), $$ the (necessarily unique) von Neumann algebra embedding satisfying \begin{equation}\label{6Tau3} \bigl[\tau^3(S\otimes T)\bigr](y) = TyS,\qquad y\in S^2(H_2,H_3),\, T\in M_3,\,S\in M_2. \end{equation} For convenience, for any $a\in M_1\overline{\otimes} M_2^{op}$ and any $b\in M_2^{op}\overline{\otimes} M_3$, we write $\tau^1_a$ instead of $\tau^1(a)$ and $\tau^3_b$ instead of $\tau^3(b)$. The main objective of this section is to prove the following description of modular completely bounded $S^1$-multipliers. \begin{theorem}\label{6Factorization} Assume that $M_2$ is injective. \begin{itemize} \item [(a)] Let $I$ be an index set and let $$ A=(a_i)_{i\in I}\in R_{I}^{w}\bigl(M_1\overline{\otimes} M_2^{op}\bigr) \qquad\hbox{and}\qquad B=(b_i)_{i\in I}\in C_{I}^{w}\bigl(M_2^{op}\overline{\otimes} M_3\bigr). $$ For any $x\in S^2(H_1,H_2)$ and any $y\in S^2(H_2,H_3)$, $$ \sum_{i\in I} \,\bignorm{\tau^3_{b_i}(y)\tau^1_{a_i}(x)}_1\, <\,\infty. $$ Let $u_{A,B}\colon S^2(H_2,H_3)\times S^2(H_1,H_2)\to S^1(H_1,H_3)$ be the resulting mapping defined by $$ u_{A,B}(y,x)\,=\, \sum_{i\in I} \tau^3_{b_i}(y)\tau^1_{a_i}(x),\qquad x\in S^2(H_1,H_2),\, y\in S^2(H_2,H_3). $$ Then $\widetilde{u}_{A,B}\in CB_{(M'_3,M'_2,M'_1)}\bigl(\Gamma(H_1,H_2,H_3), S^1(H_1,H_3)\bigr)$ and \begin{equation}\label{6cbn} \cbnorm{\widetilde{u}_{A,B}}\,\leq\, \norm{A}_{R_{I}^{w}}\,\norm{B}_{C_{I}^{w}}. \end{equation} \item [(b)] Conversely, let $u\colon S^2(H_2,H_3)\times S^2(H_1,H_2)\to S^1(H_1,H_3)$ be a bounded bilinear map and assume that $\widetilde{u}$ belongs to $CB_{(M'_3,M'_2,M'_1)}\bigl(\Gamma(H_1,H_2,H_3), S^1(H_1,H_3)\bigr)$. Then there exist an index set $I$ and two families $$ A=(a_i)_{i\in I}\in R_{I}^{w}\bigl(M_1\overline{\otimes} M_2^{op}\bigr) \qquad\hbox{and}\qquad B=(b_i)_{i\in I}\in C_{I}^{w}\bigl(M_2^{op}\overline{\otimes} M_3\bigr) $$ such that $u=u_{A,B}$ and $\norm{A}_{R_{I}^{w}}\,\norm{B}_{C_{I}^{w}} = \cbnorm{u}$. \end{itemize} \end{theorem} We will establish two intermediate lemmas before proceeding to the proof. We recall the mapping $\tau$ from (\ref{2Sigma1}). In the sequel we use the notation $1$ for the unit of either $B(H_1)$ or $B(H_3)$. Thus for any $a\in M_1\overline{\otimes} M_2^{op}$, we may consider $a\otimes 1\in M_1\overline{\otimes} M_2^{op}\overline{\otimes} M_3$. Likewise, for any $b\in M_2^{op}\overline{\otimes} M_3$, we may consider $1\otimes b\in M_1\overline{\otimes} M_2^{op}\overline{\otimes} M_3$. The following is a generalization of \cite[Lemma 20]{CLS}. 
\begin{lemma}\label{6Magic1} For any $a\in M_1\overline{\otimes} M_2^{op}$, for any $b\in M_2^{op}\overline{\otimes} M_3$, and for any $x\in S^2(H_1,H_2)$ and $y\in S^2(H_2,H_3)$, we have \begin{equation}\label{6Magic11} \tau_{(a\otimes 1)(1\otimes b)} (y,x) \,=\, \tau^3_{b}(y)\tau^1_{a}(x). \end{equation} \end{lemma} \begin{proof} We fix $x\in S^2(H_1,H_2)$, $y\in S^2(H_2,H_3)$, $\eta_1\in H_1$ and $\xi_3\in H_3$. Let $R\in M_1, S,S'\in M_2^{op}, T\in M_3$. Then $(R\otimes S\otimes 1)(1\otimes S'\otimes T) = R\otimes S'S\otimes T$. Hence by (\ref{2Sigma2}), (\ref{6Tau1}) and (\ref{6Tau3}), we have $$ \tau_{(R\otimes S\otimes 1)(1\otimes S'\otimes T)}(y,x) \,=\, TyS'SxR\,=\, \tau^3_{S'\otimes T}(y) \tau^1_{R\otimes S}(x). $$ Hence the result holds true when $a$ and $b$ are elementary tensors. By linearity, this implies (\ref{6Magic11}) in the case when $a$ and $b$ belong to the algebraic tensor products $M_1\otimes M_2^{op}$ and $M_2^{op}\otimes M_3$, respectively. We now use a limit process. Let $a\in M_1\overline{\otimes} M_2^{op}$ and $b\in M_2^{op}\overline{\otimes} M_3$ be arbitrary. Let $(a_s)_s$ be a net in $M_1\otimes M_2^{op}$ converging to $a$ in the $w^*$-topology of $M_1\overline{\otimes} M_2^{op}$ and let $(b_t)_t$ be a net in $M_2^{op}\otimes M_3$ converging to $b$ in the $w^*$-topology of $M_2^{op}\overline{\otimes} M_3$. For any $s,t$, we have \begin{equation}\label{6st} \tau_{(a_s\otimes 1)(1\otimes b_t)} (y,x) =\tau^3_{b_t}(y)\tau^1_{a_s}(x) \end{equation} by the preceding paragraph. On the one hand, since the product is separately $w^*$-continuous on von Neumann algebras, \begin{equation}\label{6asbt} (a\otimes 1)(1\otimes b) \,=\,w^*\hbox{-}\lim_s\lim_t (a_s\otimes 1)(1\otimes b_t) \end{equation} in $M_1\overline{\otimes} M_2^{op}\overline{\otimes} M_3$. Since $\tau$ is $w^*$-continuous, this implies that $$ \bigl\langle \bigl[\tau_{(a\otimes 1)(1\otimes b)} (y,x)\bigr](\eta_1),\xi_3\bigr\rangle\,=\, \lim_s\lim_t \bigl\langle \bigl[\tau_{(a_s\otimes 1)(1\otimes b_t)} (y,x)\bigr](\eta_1),\xi_3\bigr\rangle. $$ On the other hand, by the $w^*$-continuity of $\tau^1$ and $\tau^3$, $\tau^1_{a_s}\to \tau^1_{a}$ in the $w^*$-topology of $B(S^2(H_1,H_2))$ and $\tau^3_{b_t}\to \tau^3_{b}$ in the $w^*$-topology of $B(S^2(H_2,H_3))$. Consequently, $\tau^1_{a_s}(x)\to \tau^1_{a}(x)$ in the weak topology of $S^2(H_1,H_2)$ whereas $\tau^3_{b_t}(y)\to \tau^3_{b}(y)$ in the weak topology of $S^2(H_2,H_3)$. This readily implies that $$ \bigl\langle \bigl[\tau^3_{b}(y)\tau^1_{a}(x)\bigr](\eta_1),\xi_3\bigr\rangle\,=\, \lim_s\lim_t \bigl\langle \bigl[\tau^3_{b_t}(y)\tau^1_{a_s}(x)\bigr](\eta_1),\xi_3\bigr\rangle. $$ Combining these two limit results with (\ref{6st}), we deduce the formula (\ref{6Magic11}). \end{proof} It follows from Lemma \ref{1Injective1} (a) that we have $w^*$-continuous and completely isometric identifications \begin{equation}\label{6a-alpha} M_1\overline{\otimes} M_2^{op}\,\simeq\, CB(M_{1*},M_2^{op}) \qquad\hbox{and}\qquad M_2^{op}\overline{\otimes} M_3\,\simeq\, CB(M_{3*},M_2^{op}). \end{equation} Likewise, $M_1\overline{\otimes} M_2^{op}\overline{\otimes}M_3 \,\simeq\,CB((M_1\overline{\otimes}M_3)_* , M_{2}^{op})$ hence by \cite[Theorem 7.2.4]{ER}, we have a $w^*$-continuous and completely isometric identification \begin{equation}\label{6a-alpha-1} M_1\overline{\otimes} M_2^{op}\overline{\otimes}M_3 \,\simeq\, CB\bigl(M_{1*}\stackrel{\frown}{\otimes} M_{3*}, M_{2}^{op}\bigr). \end{equation} \begin{lemma}\label{6Magic2} Assume that $M_2$ is injective. 
Let $a\in M_1\overline{\otimes} M_2^{op}$ and $b\in M_2^{op}\overline{\otimes} M_3$. Let $\alpha\in CB(M_{1*}, M_2^{op})$ and $\beta\in CB(M_{3*}, M_2^{op})$ correspond to $a$ and $b$, respectively, through the identifications (\ref{6a-alpha}). Let $$ \sigma_{a,b}\colon M_{1*}\stackrel{\frown}{\otimes} M_{3*}\longrightarrow M_2^{op} $$ be the completely bounded map corresponding to $(a\otimes 1)(1\otimes b)$ through the identification (\ref{6a-alpha-1}). Then we have \begin{equation}\label{6Magic22} \sigma_{a,b}(v_1\otimes v_3)\,=\,\alpha(v_1)\beta(v_3) \end{equation} for any $v_1\in M_{1*}$ and any $v_3\in M_{3*}$. \end{lemma} \begin{proof} We fix $v_1\in M_{1*}$ and $v_3\in M_{3*}$. Let $R\in M_1$, $S,S'\in M_2^{op}$ and $T\in M_3$, and assume first that $a=R\otimes S$ and $b=S'\otimes T$. Then $\alpha(v_1)= \langle R,v_1\rangle_{M_1,M_{1*}} S$ and $\beta(v_3)= \langle T, v_3 \rangle_{M_3,M_{3*}} S'$. Hence $$ \alpha(v_1)\beta(v_3) = \langle R,v_1\rangle_{M_1,M_{1*}} \langle T, v_3 \rangle_{M_3,M_{3*}} S'S. $$ Since $(a\otimes 1)(1\otimes b) = R\otimes S'S\otimes T$, $\sigma_{a,b}(v_1\otimes v_3)$ is also equal to $\langle R, v_1\rangle_{M_1,M_{1*}} \langle T, v_3 \rangle_{M_3,M_{3*}} S'S$. This proves the result in this special case. By linearity, we deduce that (\ref{6Magic22}) holds true when $a$ and $b$ belong to the algebraic tensor products $M_1\otimes M_2^{op}$ and $M_2^{op}\otimes M_3$. As in the proof of the preceding lemma, we deduce the general case by a limit process. Let $a\in M_1\overline{\otimes} M_2^{op}$ and $b\in M_2^{op}\overline{\otimes} M_3$ be arbitrary. Let $(a_s)_s$ be a net in $M_1\otimes M_2^{op}$ converging to $a$ in the $w^*$-topology of $M_1\overline{\otimes} M_2^{op}$ and let $(b_t)_t$ be a net in $M_2^{op}\otimes M_3$ converging to $b$ in the $w^*$-topology of $M_2^{op}\overline{\otimes} M_3$. Then for any $s,t$, let $\alpha_s\in CB(M_{1*}, M_2^{op})$ and $\beta_t\in CB(M_{3*}, M_2^{op})$ correspond to $a_s$ and $b_t$, respectively. By the preceding paragraph, $$ \sigma_{a_s,b_t}(v_1\otimes v_3)\,=\,\alpha_s(v_1)\beta_t(v_3) $$ for any $s,t$. Since the identifications (\ref{6a-alpha}) are $w^*$-continuous, $\alpha_s(v_1)\to\alpha(v_1)$ and $\beta_t(v_3)\to\beta(v_3)$ in the $w^*$-topology of $M_2^{op}$. Since the product is separately $w^*$-continuous on von Neumann algebras, this implies that $$ \alpha(v_1)\beta(v_3)\,=\, w^*\hbox{-}\lim_s\lim_t \alpha_s(v_1)\beta_t(v_3). $$ Next since the identification (\ref{6a-alpha-1}) is $w^*$-continuous, it follows from (\ref{6asbt}) that $$ \sigma_{a,b}(v_1\otimes v_3)\,=\, w^*\hbox{-}\lim_s\lim_t \sigma_{a_s,b_t}(v_1\otimes v_3). $$ The identity (\ref{6Magic22}) follows at once. \end{proof} Note that if $M_2$ is injective, then by Lemma \ref{1Injective1} (b) the identification (\ref{6a-alpha-1}) restricts to an identification between $M_2^{op}\overline{\otimes}\bigl(M_1\stackrel{w^*h}{\otimes} M_3\bigr)$ and $CB\bigl(M_{1*}\stackrel{h}{\otimes} M_{3*}, M_{2}^{op}\bigr)$. Combining with (\ref{3Ident1}), we deduce a $w^*$-continuous and completely isometric identification \begin{equation}\label{6a-alpha-2} CB_{(M'_3,M'_2,M'_1)}\bigl(\Gamma(H_1,H_2,H_3), S^1(H_1,H_3)\bigr) \,\simeq\, CB\bigl(M_{1*}\stackrel{h}{\otimes} M_{3*}, M_{2}^{op}\bigr). \end{equation} This will be used in the proof below. \begin{proof}[Proof of Theorem \ref{6Factorization}] \ (a): Consider $x\in S^2(H_1,H_2)$ and $y\in S^2(H_2,H_3)$.
We have \begin{align*} \sum_{i\in I} \bignorm{\tau^3_{b_i}(y)\tau^1_{a_i}(x)}_1\, & \leq\, \sum_{i\in I} \bignorm{\tau^3_{b_i}(y)}_2 \bignorm{\tau^1_{a_i}(x)}_2\\ &\leq\,\Bigl(\sum_{i\in I} \bignorm{\tau^3_{b_i}(y)}_2^2\Bigr)^{\frac12} \Bigl(\sum_{i\in I} \bignorm{\tau^1_{a_i}(x)}_2^2\Bigr)^{\frac12}, \end{align*} by the Cauchy-Schwarz inequality. Let $J\subset I$ be a finite subset. Since $\tau^3$ is a $*$-homomorphism, we have \begin{align*} \sum_{i\in J} \bignorm{\tau^3_{b_i}(y)}_2^2\, & =\, \sum_{i\in J} \bigl\langle {\tau^3_{b_i}}^*\tau^3_{b_i}(y),y\bigr\rangle_{S^2}\\ & =\,\Bigl\langle \tau^3\Bigl(\sum_{i\in J} b_i^*b_i\Bigr) (y),y\Bigr\rangle_{S^2}\\ & \leq\, \Bignorm{\sum_{i\in J} b_i^*b_i}\,\norm{y}^2_2\\ &\leq\, \norm{B}_{C_{I}^{w}}^2\,\,\norm{y}^2_2. \end{align*} Since $J$ is arbitrary, this implies that \begin{equation}\label{6Square1} \sum_{i\in I} \bignorm{\tau^3_{b_i}(y)}_2^2\, \leq\, \norm{B}_{C_{I}^{w}}^2\,\,\norm{y}^2_2. \end{equation} Likewise, \begin{equation}\label{6Square2} \sum_{i\in I} \bignorm{\tau^1_{a_i}(x)}_2^2 \,\leq\, \norm{A}_{R_{I}^{w}}^2\,\,\norm{x}^2_2. \end{equation} This implies $$ \sum_{i\in I} \,\bignorm{\tau^3_{b_i}(y)\tau^1_{a_i}(x)}_1\, \leq \, \norm{A}_{R_{I}^{w}} \norm{B}_{C_{I}^{w}}\norm{x}_2\norm{y}_2, $$ which shows that $u_{A,B}$ is well defined. Let $n\geq 1$ be an integer, let $x_1,\ldots, x_n \in S^2(H_1,H_2)$ and let $y_1,\ldots,y_n \in S^2(H_2,H_3)$. In the space $S^1(\ell^2_n(H_1),\ell^2_n(H_3))$, we have the equality $$ \bigl[u_{A,B}(y_k,x_l)\bigr]_{1\leq k,l\leq n} = \sum_{i\in I} \bigl[\tau^3_{b_i}(y_k)\tau^1_{a_i}(x_l)\bigr]_{1\leq k,l\leq n}. $$ Further for any $i\in I$, we have $$ \bignorm{\bigl[\tau^3_{b_i}(y_k)\tau^1_{a_i}(x_l)\bigr]_{1\leq k,l\leq n}}_{ S^1(\ell^2_n(H_1),\ell^2_n(H_3))}\,\leq\, \Bigl(\sum_{k=1}^n\norm{\tau^3_{b_i}(y_k)}_2^2\Bigr)^{\frac12} \Bigl(\sum_{l=1}^n\norm{\tau^1_{a_i}(x_l)}_2^2\Bigr)^{\frac12}. $$ Consequently, using the Cauchy-Schwarz inequality, \begin{align*} \bignorm{\bigl[u_{A,B}(y_k,x_l)\bigr]_{1\leq k,l\leq n}}_{S^1(\ell^2_n(H_1),\ell^2_n(H_3))} \,& \leq\, \sum_{i\in I} \Bigl(\sum_{k=1}^n\norm{\tau^3_{b_i}(y_k)}_2^2\Bigr)^{\frac12} \Bigl(\sum_{l=1}^n\norm{\tau^1_{a_i}(x_l)}_2^2\Bigr)^{\frac12}\\ &\leq\,\Bigl(\sum_{i\in I} \sum_{k=1}^n\norm{\tau^3_{b_i}(y_k)}_2^2\Bigr)^{\frac12} \Bigl(\sum_{i\in I}\sum_{l=1}^n\norm{\tau^1_{a_i}(x_l)}_2^2\Bigr)^{\frac12}. \end{align*} It therefore follows from (\ref{6Square1}) and (\ref{6Square2}) that $$ \bignorm{\bigl[u_{A,B}(y_k,x_l)\bigr]_{1\leq k,l\leq n}}_{S^1(\ell^2_n(H_1),\ell^2_n(H_3))} \,\leq\,\norm{A}_{R_{I}^{w}}\norm{B}_{C_{I}^{w}}\Bigl(\sum_{k=1}^n\norm{y_k}_2^2\Bigr)^{\frac12} \Bigl(\sum_{l=1}^n\norm{x_l}_2^2\Bigr)^{\frac12}. $$ According to Lemma \ref{2CB}, this shows that $\widetilde{u}_{A,B} $ is completely bounded and that (\ref{6cbn}) holds. Again let $x\in S^2(H_1,H_2)$ and $y\in S^2(H_2,H_3)$. Using a simple approximation process, one can check that for any $R\in M_1'$, $S\in M_2'$ and $T\in M_3'$, we have $$ \tau^1_a(xR)=\tau^1_a(x)R,\quad \tau^1_a(Sx)=S\tau^1_a(x),\quad \tau^3_b(yS)= \tau^3_b(y)S\quad\hbox{and}\quad \tau^3_b(Ty)=T\tau^3_b(y) $$ whenever $a\in M_1\overline{\otimes} M_2^{op}$ and $b\in M_2^{op}\overline{\otimes} M_3$. (For elementary tensors $a=R_1\otimes S_1$ and $b=S_2\otimes T_1$, these identities follow from (\ref{6Tau1}) and (\ref{6Tau3}), since $R$, $S$ and $T$ commute with $R_1$, with $S_1, S_2$ and with $T_1$, respectively; the general case follows by $w^*$-approximation, as in the proof of Lemma \ref{6Magic1}.) This implies that $(y,x)\mapsto \tau^3_b(y)\tau^1_a(x)$ is an $(M_3',M_2',M_1')$-module map for any $a\in M_1\overline{\otimes} M_2^{op}$ and $b\in M_2^{op}\overline{\otimes} M_3$. This readily implies that $u_{A,B}$ is an $(M_3',M_2',M_1')$-module map.
(b): Assume that $\widetilde{u}\in CB_{(M'_3,M'_2,M'_1)} \bigl(\Gamma(H_1,H_2,H_3), S^1(H_1,H_3)\bigr)$. Let $$ \sigma\colon M_{1*}\stackrel{h}{\otimes} M_{3*}\longrightarrow M_2^{op} $$ be the completely bounded map corresponding to $\widetilde{u}$ through the identification (\ref{6a-alpha-2}). Since $M_2$ is injective, we may apply Theorem \ref{4SS} to $\sigma$. We obtain the existence of an index set $I$ and two families $(\alpha_i)_{i\in I}\in R_{I}^{w}\bigl(CB(M_{1*}, M_2^{op})\bigr)$ and $(\beta_i)_{i\in I}\in C_{I}^{w}\bigl(CB(M_{3*}, M_2^{op})\bigr)$ such that $$ \sigma(v_1\otimes v_3)\,=\,\sum_{i\in I} \alpha_i(v_1)\beta_i(v_3), \qquad v_1\in M_{1*},\ v_3\in M_{3*}. $$ For any $i\in I$, we let $a_i\in M_1\overline{\otimes} M_2^{op}$ and $b_i\in M^{op}_2\overline{\otimes} M_3$ be corresponding to $\alpha_i$ and $\beta_i$, respectively, through the identifications (\ref{6a-alpha}). Then we set $A=(a_i)_{i\in I}$ and $B=(b_i)_{i\in I}$. By Theorem \ref{4SS}, we may assume that $\norm{A}_{R_{I}^{w}}\norm{B}_{C_{I}^{w}} = \cbnorm{u}$. For any finite subset $J\subset I$, we may define $$ u_J\colon S^2(H_2,H_3)\times S^2(H_1,H_2)\to S^1(H_1,H_3) \qquad\hbox{and}\qquad \sigma_J\colon M_{1*}\stackrel{h}{\otimes} M_{3*}\to M_2^{op} $$ by $$ u_J(y,x)\,=\, \sum_{i\in J} \tau^3_{b_i}(y)\tau^1_{a_i}(x), \qquad x\in S^2(H_1,H_2),\ y\in S^2(H_2,H_3), $$ and $$ \sigma_J(v_1\otimes v_3)\,=\,\sum_{i\in J} \alpha_i(v_1)\beta_i(v_3), \qquad v_1\in M_{1*},\ v_3\in M_{3*}. $$ It follows from Lemmas \ref{6Magic1} and \ref{6Magic2} that for any $i$, the mapping $v_1\otimes v_3\mapsto \alpha_i(v_1)\beta_i(v_3)$ corresponds to the mapping $y\otimes x\mapsto \tau^3_{b_i}(y)\tau^1_{a_i}(x)$ through the identification (\ref{6a-alpha-2}). By linearity we deduce that $\sigma_J$ corresponds to $\widetilde{u}_J$ through (\ref{6a-alpha-2}). We observe that by the easy (and well-known) converse to Theorem \ref{4SS}, we have $$ \cbnorm{\sigma_J}\leq \bignorm{(\alpha_i)_{i\in J}}_{R_{J}^{w}(CB(M_{1*}, M_2^{op}))} \bignorm{(\beta_i)_{i\in J}}_{C_{J}^{w}(CB(M_{3*}, M_2^{op}))}. $$ This implies the following uniform boundedness, \begin{equation}\label{6Uniform} \forall\, J\subset I\ \hbox{finite},\qquad \cbnorm{\sigma_J}\,\leq\, \norm{A}_{R_{I}^{w}}\norm{B}_{C_{I}^{w}}. \end{equation} In the sequel we consider the set of finite subsets of $I$ as directed by inclusion. We observe that for any $v_1\in M_{1*}$ and $v_3\in M_{3*}$, $\sigma_J(v_1 \otimes v_3)\to \sigma(v_1\otimes v_3)$ in the $w^*$-topology of $M_2^{op}$. Using the uniform boundedness (\ref{6Uniform}), this implies that $\sigma_J\to \sigma$ in the point-$w^*$-topology of $CB\bigl( M_{1*}\stackrel{h}{\otimes} M_{3*},M_2^{op}\bigr)$. Applying (\ref{6Uniform}) again, we deduce that $\sigma_J\to \sigma$ in the $w^*$-topology of $CB\bigl( M_{1*}\stackrel{h}{\otimes} M_{3*},M_2^{op}\bigr)$. Since the identification (\ref{6a-alpha-2}) is a $w^*$-continuous one, this implies that $\widetilde{u}_J\to \widetilde{u}$ in the $w^*$-topology of $CB\bigl(\Gamma(H_1,H_2,H_3), S^1(H_1,H_3)\bigr)$. Let $x\in S^2(H_1,H_2)$ and $y\in S^2(H_2,H_3)$. The above implies that $u_J(y,x)\to u(y,x)$ in the $w^*$-topology of $S^1(H_1,H_3)$. However by part (a) of the theorem, $$ u_J(y,x)\,\longrightarrow\, \sum_{i\in I} \tau^3_{b_i}(y)\tau^1_{a_i}(x) $$ in the norm topology of $S^1(H_1,H_3)$. This shows that $u(y,x)$ is equal to this sum, and proves the result.
\end{proof} \begin{remark} It is clear from its proof that part (a) of Theorem \ref{6Factorization} is true without assuming that $M_2$ is injective. The injectivity assumption in Theorem \ref{4SS} is necessary (see \cite[Theorem 5.3]{SS}); however, we do not know whether it is necessary in part (b) of Theorem \ref{6Factorization}. \end{remark} The next corollary follows from the above proof. \begin{corollary}\label{6Facto-varphi} Assume that $M_2$ is injective and let $\varphi\in M_1\overline{\otimes} M_2^{op}\overline{\otimes} M_3$. Then $\tau_\varphi$ is a completely bounded $S^1$-multiplier if and only if there exist an index set $I$ and families $$ (a_i)_{i\in I}\in R_{I}^{w}\bigl(M_1\overline{\otimes} M_2^{op}\bigr) \qquad\hbox{and}\qquad (b_i)_{i\in I}\in C_{I}^{w}\bigl(M_2^{op}\overline{\otimes} M_3\bigr) $$ such that $$ \varphi\,=\, \sum_{i\in I} (a_i\otimes 1)(1\otimes b_i), $$ where the convergence is taken in the $w^*$-topology. Further $$ \cbnorm{\tau_\varphi}\,=\,\inf\Bigl\{ \bignorm{(a_i)_{i\in I}}_{R_I^{w}}\bignorm{(b_i)_{i\in I}}_{C_I^{w}}\Bigr\}, $$ where the infimum runs over all possible families $(a_i)_{i\in I}$ and $(b_i)_{i\in I}$ providing such a factorization of $\varphi$. \end{corollary} \begin{remark}\label{6Recover} \ (a)$\,$ Assume that $H_2=\ensuremath{\mathbb{C}}$ is trivial. Then $$ \Gamma(H_1,\ensuremath{\mathbb{C}},H_3) = \{H_3\}_c\stackrel{\frown}{\otimes} \{\overline{H_1}\}_r \simeq S^1(H_1,H_3), $$ by (\ref{1RC}). Hence $CB\bigl(\Gamma(H_1,\ensuremath{\mathbb{C}},H_3),S^1(H_1,H_3)\bigr)\simeq CB(S^1(H_1,H_3))$ and in this identification, $CB_{(M_3',\ensuremath{\mathbb{C}},M_1')}\bigl(\Gamma(H_1,\ensuremath{\mathbb{C}},H_3),S^1(H_1,H_3)\bigr)$ coincides with $CB_{(M_3',M_1')}(S^1(H_1,H_3))$, the space of all $(M_3',M_1')$-bimodule completely bounded maps from $S^1(H_1,H_3)$ into itself. Further $\tau^1\colon M_1\hookrightarrow B(\overline{H_1})^{op}\simeq B(H_1)$ and $\tau^3\colon M_3\hookrightarrow B(H_3)$ coincide with the canonical embeddings. Hence in this case, Theorem \ref{6Factorization} reduces to Theorem \ref{Haag} (see also (\ref{Haag+})). (b)$\,$ A tensor product reformulation of Corollary \ref{6Facto-varphi} is that the bilinear mapping $(a,b)\mapsto (a\otimes 1)(1\otimes b)$ extends to a complete quotient map $$ (M_1\overline{\otimes} M_2^{op}) \stackrel{w^*h}{\otimes} (M_2^{op}\overline{\otimes} M_3)\longrightarrow M_2^{op}\overline{\otimes}\bigl( M_1 \stackrel{w^*h}{\otimes} M_3\bigr). $$ \end{remark} We conclude this paper by considering the special case of Schur multipliers. Our presentation follows \cite{CLS}. We let $(\Omega_1,\mu_1)$, $(\Omega_2,\mu_2)$ and $(\Omega_3,\mu_3)$ be three separable measure spaces. (The separability assumption is not essential but avoids technical measurability issues.) Recall the classical fact that to any $f\in L^2(\Omega_1\times\Omega_2)$, one may associate an operator $x_f\in S^2(L^2(\Omega_1),L^2(\Omega_2))$ given by $$ x_f(\eta) =\int_{\Omega_1} f(t_1,\,\cdotp)\eta(t_1)\,d\mu_1(t_1),\qquad \eta\in L^2(\Omega_1), $$ and the mapping $f\mapsto x_f$ is a unitary which yields a Hilbert space identification $$ L^2(\Omega_1\times\Omega_2) \,\simeq\, S^2\bigl(L^2(\Omega_1),L^2(\Omega_2)\bigr). $$ Of course the same holds with the pairs $(\Omega_2,\Omega_3)$ and $(\Omega_1,\Omega_3)$. For any $g\in L^2(\Omega_2\times\Omega_3)$ (resp. $h\in L^2(\Omega_1\times\Omega_3)$) we let $y_g\in S^2\bigl(L^2(\Omega_2),L^2(\Omega_3)\bigr)$ (resp.
$z_h\in S^2\bigl(L^2(\Omega_1),L^2(\Omega_3)\bigr)$) be the corresponding Hilbert-Schmidt operator. To any $\varphi\in L^\infty(\Omega_1\times\Omega_2\times\Omega_3)$, one may associate a bounded bilinear map $$ \Lambda_\varphi\colon S^2\bigl(L^2(\Omega_2),L^2(\Omega_3)\bigr)\times S^2\bigl(L^2(\Omega_1),L^2(\Omega_2)\bigr) \longrightarrow S^2\bigl(L^2(\Omega_1),L^2(\Omega_3)\bigr) $$ given for any $f\in L^2(\Omega_1\times\Omega_2)$ and $g\in L^2(\Omega_2\times\Omega_3)$ by $$ \Lambda_\varphi(y_g,x_f)=z_h $$ where, for almost every $(t_1,t_3)\in \Omega_1\times\Omega_3$, $$ h(t_1,t_3)\,=\,\int_{\Omega_2}\varphi(t_1,t_2,t_3)f(t_1,t_2)g(t_2,t_3)\,d\mu_2(t_2)\,. $$ We refer to \cite[Theorem 3.1]{JTT} or \cite[Subsection 3.2]{CLS} for the proof, and also for the fact that $$ \norm{\Lambda_\varphi\colon S^2\times S^2\longrightarrow S^2}\, =\, \norm{\varphi}_\infty. $$ Bilinear maps of this form will be called {\bf bilinear Schur multipliers} in the sequel. Since $$ S^2\bigl(L^2(\Omega_1),L^2(\Omega_3)\bigr)\,\subset \, B\bigl(L^2(\Omega_1),L^2(\Omega_3)\bigr) $$ contractively, we may regard any bilinear Schur multiplier as valued in $B\bigl(L^2(\Omega_1),L^2(\Omega_3)\bigr)$. Then it follows from the proof of \cite[Corollary 10]{CLS} that \begin{equation}\label{5Norm} \bignorm{\Lambda_\varphi\colon S^2\times S^2\longrightarrow B \bigl(L^2(\Omega_1),L^2(\Omega_3)\bigr)}\, =\, \norm{\varphi}_\infty. \end{equation} For any $i=1,2,3$, let us regard \begin{equation}\label{5qi*} L^\infty(\Omega_i)\subset B(L^2(\Omega_i)) \end{equation} as a von Neumann algebra in the usual way, that is, any $r\in L^\infty(\Omega_i)$ is identified with the multiplication operator $f\mapsto rf,\, f\in L^2(\Omega_i)$. In the sequel we use the notions considered so far in the case when $H_i=L^2(\Omega_i)$ and $M_i= L^\infty(\Omega_i)$. We note that $$ L^\infty(\Omega_i)'=L^\infty(\Omega_i) \qquad\hbox{and}\qquad L^\infty(\Omega_i)^{op}=L^\infty(\Omega_i). $$ Using the classical von Neumann algebra identification $$ L^\infty(\Omega_1\times\Omega_2\times\Omega_3)= L^\infty(\Omega_1)\overline{\otimes} L^\infty(\Omega_2)\overline{\otimes} L^\infty(\Omega_3), $$ we may apply the construction from Sections 3 and 4 to any $\varphi\in L^\infty(\Omega_1\times\Omega_2\times\Omega_3)$ and consider the operator multiplier $$ \tau_\varphi \colon S^2\bigl(L^2(\Omega_2),L^2(\Omega_3)\bigr)\times S^2\bigl(L^2(\Omega_1),L^2(\Omega_2)\bigr) \longrightarrow B\bigl(L^2(\Omega_1),L^2(\Omega_3)\bigr). $$ It turns out that \begin{equation}\label{5tau=lambda} \tau_\varphi=\Lambda_\varphi. \end{equation} The easy verification is left to the reader. The next proposition should be compared with \cite[Theorem 3.1]{JTT}. In the latter result, the authors established a similar characterization of bilinear module maps, but under the assumption that they take values in $S^2\bigl(L^2(\Omega_1),L^2(\Omega_3)\bigr)$. \begin{proposition}\label{6Schur1} For any $$ u\in B_2\bigl(S^2\bigl(L^2(\Omega_2),L^2(\Omega_3)\bigr)\times S^2\bigl(L^2(\Omega_1),L^2(\Omega_2)\bigr), B\bigl(L^2(\Omega_1),L^2(\Omega_3)\bigr)\bigr), $$ the following are equivalent. \begin{itemize} \item [(i)] $u$ is a bilinear Schur multiplier. \item [(ii)] $u$ is an $(L^\infty(\Omega_3), L^\infty(\Omega_2),L^\infty(\Omega_1))$-module map. \end{itemize} \end{proposition} \begin{proof} The implication ``(i)$\,\Rightarrow\,$(ii)" follows from (\ref{5tau=lambda}) and Corollary \ref{3Mod-B}. (It is also possible to write a direct proof.) 
To prove the converse, assume that $u$ is $(L^\infty(\Omega_3), L^\infty(\Omega_2),L^\infty(\Omega_1))$-modular. We let $$ U\colon S^1(L^2(\Omega_1))\times S^1(L^2(\Omega_2)) \times S^1(L^2(\Omega_3))\longrightarrow \ensuremath{\mathbb{C}} $$ be the unique trilinear form satisfying $$ U(\overline{\xi_1}\otimes\eta_1, \overline{\xi_2}\otimes\eta_2, \overline{\xi_3}\otimes\eta_3) \,=\, \bigl\langle \bigl[u(\overline{\xi_2}\otimes\eta_3, \overline{\xi_1}\otimes\eta_2)\bigr] (\eta_1),\xi_3\bigr\rangle $$ for any $\xi_1,\eta_1\in L^2(\Omega_1)$, $\xi_2,\eta_2\in L^2(\Omega_2)$ and $\xi_3,\eta_3\in L^2(\Omega_3)$. Then for $i=1,2,3$, let $$ q_i\colon S^1(L^2(\Omega_i))\longrightarrow L^1(\Omega_i) $$ be the unique bounded operator satisfying $q_i(\overline{\xi_i}\otimes\eta_i) =\overline{\xi_i}\eta_i$ for any $\xi_i,\eta_i\in L^2(\Omega_i)$. This is a quotient map, whose adjoint coincides with the embedding (\ref{5qi*}). Recall the operators $U_1,U_2,U_3$ defined at the beginning of Section \ref{4MOD}. By Lemma \ref{3LemMod}, $U_i$ is valued in $L^\infty(\Omega_i)$ for any $i=1,2,3$. This implies that $U$ vanishes on the union of ${\rm Ker}(q_1)\times S^1(L^2(\Omega_2)) \times S^1(L^2(\Omega_3))$, $S^1(L^2(\Omega_1))\times {\rm Ker}(q_2) \times S^1(L^2(\Omega_3))$ and $S^1(L^2(\Omega_1)) \times S^1(L^2(\Omega_2))\times {\rm Ker}(q_3)$. Consequently, there exists a trilinear form $$ \widehat{u}\colon L^1(\Omega_1)\times L^1(\Omega_2)\times L^1(\Omega_3) \longrightarrow\ensuremath{\mathbb{C}} $$ factorizing $U$ in the sense that $$ U(v_1,v_2,v_3) = \widehat{u}\bigl(q_1(v_1), q_2(v_2),q_3(v_3)\bigr), \qquad v_i\in S^1(L^2(\Omega_i)). $$ Since $L^1(\Omega_1)\widehat{\otimes} L^1(\Omega_2)\widehat{\otimes} L^1(\Omega_3) = L^1(\Omega_1\times \Omega_2\times \Omega_3)$ (see e.g. \cite[Chap. VIII, Example 10]{DU}), there exists $\varphi\in L^\infty(\Omega_1\times \Omega_2\times \Omega_3)$ such that $$ \widehat{u}(\phi_1,\phi_2,\phi_3)\,=\,\int_{\Omega_1\times\Omega_2\times\Omega_3} \varphi(t_1,t_2,t_3) \phi_1(t_1)\phi_2(t_2) \phi_3(t_3)\,d\mu_1(t_1)d\mu_2(t_2)d\mu_3(t_3) $$ for any $\phi_i\in L^1(\Omega_i)$. A thorough look at the definitions of $U$ and $\Lambda_\varphi$ then reveals that $u=\Lambda_\varphi$. \end{proof} Combining (\ref{5Norm}), (\ref{5tau=lambda}) and Proposition \ref{2OM-B}, we obtain that any bilinear Schur multiplier $u$ induces a completely bounded $$ \widetilde{u}\colon \Gamma\bigl(L^2(\Omega_1), L^2(\Omega_2),L^2(\Omega_3)\bigr) \longrightarrow B(L^2(\Omega_1), L^2(\Omega_3)) $$ and that $\cbnorm{\widetilde{u}}=\norm{\widetilde{u}} (=\norm{u})$. The next result, which essentially follows from \cite{CLS}, shows that similarly, $S^1$-Schur multipliers are automatically completely bounded and that their norm and completely bounded norm coincide. \begin{theorem}\label{6Schur2} Let $\varphi\in L^\infty(\Omega_1\times\Omega_2\times\Omega_3)$. \begin{itemize} \item [(a)] $\Lambda_\varphi$ is an $S^1$-operator multiplier if and only if there exist a separable Hilbert space $H$ and two functions $$ a\in L^{\infty}(\Omega_1 \times \Omega_2 ; H) \qquad \text{and} \qquad b\in L^{\infty}(\Omega_2\times \Omega_3 ; H) $$ such that \begin{equation}\label{6Facto1} \varphi(t_1,t_2,t_3)= \left\langle a(t_1,t_2), b(t_2,t_3) \right\rangle \end{equation} for a.e. 
$(t_1,t_2,t_3) \in \Omega_1 \times \Omega_2 \times \Omega_3.$ In this case, $$ \bignorm{\Lambda_\varphi \colon S^2 \times S^2 \rightarrow S^1}= \inf\bigl\{\norm{a}_\infty\norm{b}_\infty\bigr\}, $$ where the infimum runs over all pairs $(a,b)$ verifying the above factorization property. \item [(b)] If $\Lambda_\varphi$ is an $S^1$-operator multiplier, then $$ \widetilde{\Lambda_\varphi}\colon \Gamma\bigl(L^2(\Omega_1), L^2(\Omega_2),L^2(\Omega_3)\bigr) \longrightarrow S^1\bigl(L^2(\Omega_1), L^2(\Omega_3)\bigr) $$ is completely bounded, with $\cbnorm{\widetilde{\Lambda_\varphi}} = \norm{\widetilde{\Lambda_\varphi}}$. \end{itemize} \end{theorem} \begin{proof} Part (a) is given by \cite[Theorem 22]{CLS}. Assume that $\Lambda_\varphi$ is an $S^1$-operator multiplier. Let $$ \mbox{${\mathcal S}$}_{3,1}\subset B\bigl(S^\infty(L^2(\Omega_3), L^2(\Omega_1)), B(L^2(\Omega_3), L^2(\Omega_1))\bigr) $$ be the space of all measurable Schur multipliers from $L^2(\Omega_3)$ into $L^2(\Omega_1)$, in the sense of \cite[Subsection 2.4]{CLS}. Then using the notation from the latter paper (to which we refer for more explanations), part (a) implies that $\varphi\in L^\infty_\sigma\bigl(\Omega_2;\mbox{${\mathcal S}$}_{3,1}\bigr)$. Indeed this follows from Peller's description of measurable Schur multipliers given by \cite[Theorem 1]{Pe} (see also \cite[Theorem 3.3]{Sp}, \cite[Theorem 23]{CLS} and \cite{H}). Measurable Schur multipliers are $(L^\infty(\Omega_1),L^\infty(\Omega_3))$-bimodule maps hence by \cite[Theorem 2.1]{Sm}, any element of $\mbox{${\mathcal S}$}_{3,1}$ is a completely bounded map, whose completely bounded norm coincides with its usual norm. Thus we have $$ \mbox{${\mathcal S}$}_{3,1}\,\subset\, CB \bigl(S^\infty(L^2(\Omega_3), L^2(\Omega_1)), B(L^2(\Omega_3), L^2(\Omega_1))\bigr) \qquad\hbox{isometrically}. $$ We deduce that $$ \varphi\in L^\infty_\sigma\bigl(\Omega_2;CB \bigl(S^\infty(L^2(\Omega_3), L^2(\Omega_1)), B(L^2(\Omega_3), L^2(\Omega_1))\bigr)\bigr). $$ Recall that by \cite[Theorem 2.2]{BS} (see also Theorem 4.2 in the latter paper), we have a $w^*$-continuous isometric identification $$ CB \bigl(S^\infty(L^2(\Omega_3), L^2(\Omega_1)), B(L^2(\Omega_3), L^2(\Omega_1))\bigr) \simeq B(L^2(\Omega_1))\stackrel{w^*h}{\otimes} B(L^2(\Omega_3)). $$ Hence we obtain that $\varphi$ belongs to $L^\infty_\sigma\bigl(\Omega_2;B(L^2(\Omega_1))\stackrel{w^*h}{\otimes} B(L^2(\Omega_3))\bigr)$. Equivalently, $\varphi$ belongs to $L^\infty(\Omega_2)\overline{\otimes} \bigl( B(L^2(\Omega_1))\stackrel{w^*h}{\otimes} B(L^2(\Omega_3))\bigr)$. Moreover the norm of $\Lambda_\varphi \colon S^2 \times S^2 \rightarrow S^1$ is equal to the norm of $\varphi$ in the latter space. Now applying Theorem \ref{2OM-S1}, we deduce that $\Lambda_\varphi \colon S^2 \times S^2 \rightarrow S^1$ is completely bounded, with $\cbnorm{\widetilde{\Lambda_\varphi}} = \norm{\widetilde{\Lambda_\varphi}}$. \end{proof} \begin{remark} In Theorem \ref{6Schur2} above, (a) can be deduced from (b) as follows. Assume that $\Lambda_\varphi$ is a completely bounded $S^1$-operator multiplier, with completely bounded norm $<1$. By Proposition \ref{6Schur1} and (\ref{5tau=lambda}), $\Lambda_\varphi=\tau_\varphi$ is $(L^\infty(\Omega_3),L^\infty(\Omega_2),L^\infty(\Omega_1))$-modular. Further $L^\infty(\Omega_2)$ is injective. 
Hence by Corollary \ref{6Facto-varphi}, there exist an index set $I$, a family $(a_i)_{i\in I}$ in $L^{\infty}(\Omega_1 \times \Omega_2)$ and a family $(b_i)_{i\in I}$ in $L^{\infty}(\Omega_2 \times \Omega_3)$ such that $$ \sum_{i\in I} \vert a_i\vert^2\, < 1 \qquad\hbox{and}\qquad \sum_{i\in I} \vert b_i\vert^2\, < 1 $$ almost everywhere on $\Omega_1 \times \Omega_2$ and on $\Omega_2 \times \Omega_3$, respectively, and $\varphi=\sum_{i\in I} (a_i\otimes 1)(1\otimes b_i)$ in the $w^*$-topology of $L^\infty(\Omega_1\times\Omega_2\times\Omega_3)$. Since we assumed that the three measure spaces $(\Omega_j,\mu_j)$ are separable, it follows from the proof of Corollary \ref{6Facto-varphi} that $I$ can be chosen to be a countable set. Then we have \begin{equation}\label{6Facto2} \varphi(t_1,t_2,t_3)\, = \,\sum_{i\in I} a_i(t_1,t_2) b_i(t_2,t_3) \end{equation} for a.e. $(t_1,t_2,t_3) \in \Omega_1 \times \Omega_2 \times \Omega_3$. Further we may define $a\in L^{\infty}(\Omega_1 \times \Omega_2 ; \ell^2_I)$ and $b\in L^{\infty}(\Omega_2\times \Omega_3 ; \ell^2_I)$ by $a(t_1,t_2) =(a_i(t_1,t_2))_{i\in I}$ and $b(t_2,t_3) =(b_i(t_2,t_3))_{i\in I}$, respectively. Then we have both $\norm{a}_\infty\leq 1$ and $\norm{b}_\infty\leq 1$, and the identity (\ref{6Facto2}) yields (\ref{6Facto1}), with $H=\ell^2_I$. Note, however, that we do not know any direct proof of Theorem \ref{6Schur2} (b) which does not use some of the arguments from \cite{CLS}. \end{remark} \noindent {\bf Acknowledgements.} The first author was supported by the French ``Investissements d'Avenir" program, project ISITE-BFC (contract ANR-15-IDEX-03). We warmly thank the referee for the careful reading and several valuable suggestions which improved the presentation of the paper. \end{document}
\begin{document} \title{On Entire Solutions of an Elliptic System Modeling Phase Separations} \date{} \begin{abstract} We study the qualitative properties of a limiting elliptic system arising in phase separation for Bose-Einstein condensates with multiple states: \[ \begin{cases} \Delta u=u v^2\ \ \mbox{in} \ {\mathbb R}^n, \\ \Delta v= v u^2 \ \ \mbox{in} \ {\mathbb R}^n, \\ u, v>0\quad \ \mbox{in} \ {\mathbb R}^n. \end{cases} \] When $n=1$, we prove uniqueness of the one-dimensional profile. In dimension $2$, we prove that stable solutions with linear growth must be one-dimensional. Then we construct entire solutions in ${\mathbb R}^2$ with polynomial growth $|x|^d$ for any positive integer $d \geq 1$. For $d\geq 2$, these solutions are not one-dimensional. The construction is also extended to multi-component elliptic systems. \end{abstract} \noindent {\sl Keywords:} {\small Stable solutions, elliptic systems, phase separations, Almgren's monotonicity formulae.}\ \vskip 0.2cm \noindent {\sl AMS Subject Classification (2000):} {\small 35B45 .} \vskip 0.2cm \section{Introduction and Main Results} \setcounter{equation}{0} Consider the following two-component Gross-Pitaevskii system \begin{align} & -\Delta u + \alpha u^3 + \Lambda v^2 u = \lambda_1 u &&\text{in }\Omega, \label{1}\\ & -\Delta v +\beta v^3 + \Lambda u^2 v = \lambda_2 v &&\text{in }\Omega, \label{2}\\ & u>0,\quad v>0 && \text{in }\Omega, \label{37}\\ & u=0,\quad v=0 && \text{on }\partial\Omega\,, \label{3}\\ & \int_\Omega u^2=N_1,\quad\int_\Omega v^2=N_2\, , \label{301} && \end{align} where $\alpha, \beta, \Lambda >0$ and $\Omega$ is a bounded smooth domain in ${\mathbb R}^n$. Solutions of (\ref{1})-(\ref{301}) can be regarded as critical points of the energy functional \begin{equation}\label{5.1} E_\Lambda(u,v)=\int_\Omega\,\left(|\nabla u|^2+|\nabla v|^2\right)+\frac{\alpha}{2}u^4+\frac{\beta}{2}v^4+\frac{\Lambda}{2} u^2v^2\,,\end{equation} on the space $(u,v)\in H^1_0(\Omega)\times H^1_0(\Omega)$ with constraints \begin{equation} \label{302} \int_\Omega u^2 dx=N_1, \int_\Omega v^2 dx=N_2. \end{equation} The eigenvalues $\lambda_j$'s are Lagrange multipliers with respect to~(\ref{302}). Both eigenvalues $\lambda_j=\lambda_{j,\Lambda}, j=1,2$, and eigenfunctions $u=u_\Lambda, v=v_\Lambda$ depend on the parameter $\Lambda$. As the parameter $\Lambda$ tends to infinity, the two components tend to separate their supports. In order to investigate the basic rules of phase separations in this system one needs to understand the asymptotic behavior of $(u_\Lambda, v_\Lambda)$ as $ \Lambda \to +\infty$. We shall assume that the solutions $(u_\Lambda, v_\Lambda)$ of (\ref{1})-(\ref{301}) are such that the associated eigenvalues $\lambda_{j,\Lambda}$'s are uniformly bounded, together with their energies $ E_\Lambda (u_\Lambda, v_\Lambda)$. 
Then, as $\Lambda \to +\infty$, there is weak convergence (up to a subsequence) to a limiting profile $(u_\infty, v_\infty)$ which formally satisfies \begin{equation} \label{eq:limit-equation1} \begin{cases} -\Delta u_{\infty} +\alpha u_{\infty}^3 =\lambda_{1,\infty} u_{\infty} \qquad & \text{in $\Omega_u$}\,,\\ -\Delta v_{\infty} +\beta v_{\infty}^3 =\lambda_{2,\infty} v_{\infty} \qquad &\text{in $\Omega_v$}\,,\\ \end{cases} \end{equation} where $\Omega_u=\{x\in\Omega: u_\infty(x)>0\}$ and $\Omega_v=\{x\in\Omega: v_\infty(x)>0\}$ are positivity domains composed of finitely many disjoint components with positive Lebesgue measure, and each $\lambda_{j,\infty}$ is the limit of the $\lambda_{j,\Lambda}$'s as $\Lambda\to\infty$ (up to a subsequence). There is a large literature on this type of question. Effective numerical simulations for (\ref{eq:limit-equation1}) can be found in~\cite{B}, \cite{BaD} and~\cite{CLLL}. Chang-Lin-Lin-Lin~\cite{CLLL} proved pointwise convergence of $(u_\Lambda, v_\Lambda)$ away from the interface $\Gamma\equiv\{x\in\Omega: u_\infty(x)=v_\infty(x)=0\}$. In Wei-Weth \cite{ww} the uniform equicontinuity of $(u_\Lambda, v_\Lambda)$ is established, while Noris-Tavares-Terracini-Verzini~\cite{NTTV} proved the uniform-in-$\Lambda$ H\"older continuity of $(u_\Lambda, v_\Lambda)$. The regularity of the nodal set of the limiting profile has been investigated in \cite{C-L 2, TT2011} and in \cite{DWZ2011}: it turns out that the limiting pair $(u_\infty(x),v_\infty(x))$ is the positive and negative pair $(w^+,w^-)$ of a solution of the equation $-\Delta w+\alpha (w^{+})^3-\beta (w^{-})^3 =\lambda_{1,\infty}w^+-\lambda_{2,\infty}w^-$. To derive the asymptotic behavior of $(u_\Lambda, v_\Lambda)$ near the interface $\Gamma=\{x\in\Omega: u_\infty(x)=v_\infty(x)=0\}$, one is led to consider the points $x_\Lambda \in \Omega$ such that $ u_\Lambda (x_\Lambda)=v_\Lambda (x_\Lambda)= m_\Lambda\to 0$ and $x_\Lambda \to x_\infty \in \Gamma\subset\Omega$ as $\Lambda \to +\infty$ (up to a subsequence). Assuming that \begin{equation} \label{mainas} m_\Lambda^4 \Lambda \to C_0>0 \end{equation} (without loss of generality we may assume that $ C_0=1$), by blowing up we find the following nonlinear elliptic system \begin{equation}\label{maineqn} \Delta u= u v^2\,, \quad \Delta v= v u^2\,, \quad u, v > 0 \quad \mbox{in} \quad {\mathbb R}^n\,. \end{equation} Problem (\ref{maineqn}) has been studied in Berestycki-Lin-Wei-Zhao \cite{blwz}, and Noris-Tavares-Terracini-Verzini \cite{NTTV}. It has been proved in \cite{blwz} that, in the one-dimensional case, (\ref{mainas}) always holds. In addition, the authors showed the existence, symmetry and nondegeneracy of the solution to the one-dimensional limiting system \begin{equation} \label{1D} u^{''}= uv^2, v^{''}=v u^2, u, v>0 \ \mbox{in} \ {\mathbb R}. \end{equation} In particular they showed that entire solutions are reflectionally symmetric, i.e., there exists $x_0$ such that $ u(x-x_0)= v(x_0-x)$. They also established a two-dimensional version of the De Giorgi Conjecture in this framework. Namely, under the growth condition \begin{equation} \label{bd1} u(x)+v(x)\leq C (1+|x|), \end{equation} every monotone solution is one-dimensional. On the other hand, in \cite{NTTV}, it was proved that linear growth is the lowest possible for solutions to (\ref{maineqn}). In other words, if there exists $\alpha \in (0,1)$ such that \begin{equation} \label{bd2} u(x)+v(x)\leq C (1+|x|)^{\alpha}, \end{equation} then $u, v \equiv 0$.
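Let us briefly indicate, at a purely heuristic level, how (\ref{maineqn}) arises from (\ref{1})-(\ref{2}) under the normalization $C_0=1$; the rescaled functions $(U_\Lambda, V_\Lambda)$ below are introduced only for this informal sketch. Setting $$ (U_\Lambda, V_\Lambda)(y)\,:=\,\frac{1}{m_\Lambda}\,\bigl(u_\Lambda, v_\Lambda\bigr)(x_\Lambda+m_\Lambda y), $$ so that $U_\Lambda(0)=V_\Lambda(0)=1$, equation (\ref{1}) becomes $$ -\Delta U_\Lambda+\alpha\, m_\Lambda^4\, U_\Lambda^3+\bigl(m_\Lambda^4\Lambda\bigr)\, V_\Lambda^2 U_\Lambda\,=\,\lambda_{1,\Lambda}\, m_\Lambda^2\, U_\Lambda, $$ and similarly for (\ref{2}). Since $m_\Lambda\to 0$, $m_\Lambda^4\Lambda\to 1$ by (\ref{mainas}), and the eigenvalues $\lambda_{j,\Lambda}$ are bounded, the formal limit of these rescaled equations is precisely the system (\ref{maineqn}).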
In this paper we address three problems left open in \cite{blwz}. First, we prove the uniqueness of the solution to (\ref{1D}) (up to translations and scaling). This answers the question stated in Remark 1.4 of \cite{blwz}. Second, we prove that the De Giorgi conjecture still holds in the two-dimensional case, when we replace the monotonicity assumption by the stability condition. A third open question about (\ref{maineqn}) is whether all solutions to (\ref{maineqn}) necessarily satisfy the growth bound (\ref{bd1}). We shall answer this question negatively in this paper. We first study the one-dimensional problem (\ref{1D}). Observe that problem (\ref{1D}) is invariant under the translations $ (u(x), v(x)) \to ( u(x+t), v(x+t)), \forall t \in {\mathbb R}$ and scalings $ (u(x), v(x)) \to ( \lambda u(\lambda x), \lambda v(\lambda x)), \forall \lambda >0$. The following theorem classifies all entire solutions to (\ref{1D}). \begin{thm} \label{thm0} The solution to (\ref{1D}) is unique, up to translations and scaling. \end{thm} Next, we want to classify the stable solutions in ${\mathbb R}^2$. We recall that a {\em stable} solution $(u, v)$ to (\ref{maineqn}) is such that the linearization is weakly positive definite. That is, it satisfies \[ \int_{{\mathbb R}^n} \bigl[|\nabla \varphi|^2+|\nabla \psi |^2 + v^2 \varphi^2+u^2 \psi^2 +4 uv \varphi \psi\bigr] \geq 0, \qquad \forall \varphi, \psi \in C_0^\infty ({\mathbb R}^n). \] In \cite{blwz}, it was proved that the one-dimensional solution is stable in ${\mathbb R}^n$. Our next result states that the only stable solution in ${\mathbb R}^2$, among those growing at most linearly, is the one-dimensional family. \begin{thm} \label{thm1} Let $(u,v)$ be a stable solution to (\ref{maineqn}) in ${\mathbb R}^2$. Furthermore, we assume that the growth bound (\ref{bd1}) holds. Then $(u, v)$ is one-dimensional, i.e., there exists $a \in {\mathbb R}^2, |a|=1$ such that $(u, v)= (U (a \cdot x), V (a \cdot x))$ where $(U, V)$ are functions of one variable and satisfy (\ref{1D}). \end{thm} Our third result shows that there are solutions to (\ref{maineqn}) with polynomial growth $|x|^d$ that are not one-dimensional. The construction depends on the following harmonic polynomial $\Phi$ of degree $d$: $$\Phi:=\mbox{Re}(z^d).$$ Note that $\Phi$ has some dihedral symmetry; indeed, let us take its $d$ nodal lines $L_1, \cdots, L_d$ and denote the corresponding reflections with respect to these lines by $T_1,\cdots, T_d$. Then there holds \begin{equation}\label{reflectional symmetry} \Phi(T_i z)=-\Phi(z). \end{equation} The third result of this paper is the following one. \begin{thm}\label{main result} For each positive integer $d \geq 1$, there exists a solution $(u,v)$ to problem \eqref{maineqn}, satisfying \begin{enumerate} \item $u-v>0$ in $\{\Phi>0\}$ and $u-v<0$ in $\{\Phi<0\}$; \item $u \geq\Phi^+$ and $v\geq\Phi^-$; \item $\forall i=1,\cdots, d$, $u(T_iz)=v(z)$; \item $\forall r>0$, the Almgren frequency function satisfies \begin{equation} \label{nr} N(r):=\frac{r\int_{B_r(0)}|\nabla u|^2+|\nabla v|^2+u^2v^2}{\int_{\partial B_r(0)}u^2+v^2}\leq d; \end{equation} \item \begin{equation}\label{nr 2} \lim_{r \to +\infty} N(r) =d. \end{equation} \end{enumerate} \end{thm} Note that the one-dimensional solution constructed in \cite{blwz} can be viewed as corresponding to the case $d=1$. For $d\geq 2$, the solutions of Theorem \ref{main result} will be obtained by a minimization argument under symmetric variations $(\varphi,\psi)$ (i.e.
satisfying $\varphi\circ T_i=\psi$ for every reflection $T_i$). The first four claims will be derived from the construction. See Theorem \ref{thm existence on bounded set}. Regarding claim 5, we note that by Almgren's monotonicity formula (see Proposition \ref{monotonocity} below), the Almgren frequency quotient $N(r)$ is increasing in $r$. Hence $ \lim_{r \to +\infty} N(r)$ exists. To understand the asymptotics at infinity of the solutions, one way is to study the blow-down sequence defined by: $$(u_R(x), v_R(x)):=\Bigl(\frac{1}{L(R)}u(Rx),\frac{1}{L(R)}v(Rx)\Bigr),$$ where $L(R)$ is chosen so that $$\int_{\partial B_1(0)}u_R^2+v_R^2=1.$$ In Section 6, we will prove the following result. \begin{thm}\label{thm asymptotics at infinity} Let $(u,v)$ be a solution of \eqref{maineqn} such that \[d:=\lim\limits_{r\rightarrow+\infty}N(r)<+\infty.\] Then $d$ is a positive integer. As $R\to\infty$, $(u_R, v_R)$ defined above (up to a subsequence) converges to $(\Psi^+,\Psi^-)$ uniformly on any compact set of $\mathbb{R}^N$ where $\Psi$ is a homogeneous harmonic polynomial of degree $d$. If $d=1$ then $(u,v)$ is asymptotically flat at infinity. \end{thm} In particular this applies to the solutions found by Theorem \ref{main result} to yield the following property. \begin{coro} Let $(u,v)$ be a solution of \eqref{maineqn} given by Theorem \ref{main result}. Then $$(u_R(x), v_R(x)):=\Bigl(\frac{1}{R^d}u(Rx),\frac{1}{R^d}v(Rx)\Bigr)$$ converges uniformly on compact subsets of $\mathbb R^2$ to a multiple of $(\Phi^+,\Phi^-)$, where $\Phi:=\mbox{Re}(z^d)$. \end{coro} \par Theorem \ref{thm asymptotics at infinity} roughly says that $(u,v)$ is asymptotic to $(\Psi^+,\Psi^-)$ at infinity for some homogeneous harmonic polynomial. The extra information we have in the setting of Theorem \ref{main result} is that $\Psi\equiv\Phi=\mbox{Re}(z^d)$. This can be inferred from the symmetries of the solution (property $3$ in Theorem \ref{main result}). For another elliptic system with a similar form, \begin{equation} \label{uvnew} \left\{ \begin{aligned} &\Delta u=uv, u>0 \ \mbox{in} \ {\mathbb R}^n,\\ &\Delta v=vu, v>0 \ \mbox{in} \ {\mathbb R}^n \end{aligned} \right. \end{equation} the same result has been proved by Conti-Terracini-Verzini in \cite{C-T-V 3}. In fact, their result holds for any dimension $n\geq 1$ and any harmonic polynomial function on $\mathbb{R}^n$. Note however that the problem here is different from (\ref{uvnew}). Actually, equation (\ref{uvnew}) can be reduced to a single equation: indeed, the difference $u-v$ is a harmonic function ($\Delta (u-v)=0$) and thus we can write $v= u-\Phi $ where $\Phi$ is a harmonic function. By restricting to certain symmetry classes, (\ref{uvnew}) can then be solved by the sub- and supersolution method. However, this reduction does not work for system (\ref{maineqn}) that we study here. For the proof of Theorem \ref{main result}, we first construct solutions to (\ref{maineqn}) in any bounded ball $B_R(0)$ satisfying appropriate boundary conditions: \begin{equation}\label{equation100} \left\{ \begin{aligned} &\Delta u=uv^2, ~~\mbox{in}~~B_R(0),\\ &\Delta v=vu^2,~~\mbox{in}~~B_R(0), \\ & u=\Phi^+, v=\Phi^- \ \mbox{ on} \ \partial B_R(0). \end{aligned} \right. \end{equation} This is done by a variational method combined with the heat flow. The next natural step is to let $R\rightarrow+\infty$ and obtain some convergence result. This requires some uniform (in $R$) upper bound for solutions to (\ref{equation100}).
In order to prove this, we will exploit a new monotonicity formula for symmetric functions (Proposition \ref{prop:upperbound}). We also need to exclude the possibility of degeneracy, that is, that the limit could be $0$ or a solution of lower degree, such as a one-dimensional solution. To this end, we will give some lower bound using the Almgren monotonicity formula. Lastly, we observe that the same construction works also for a system with many components. Let $d$ be an integer or a half-integer such that $2d=hk$ is a multiple of the number of components $k$, and let $G$ denote the rotation of order $2d$. In this way we prove the following result. \begin{thm}\label{thm:maini} There exists a positive solution to the system \begin{equation}\label{eq:system} \left\{ \begin{aligned} &\Delta u_i=u_i\sum_{j\neq i,j=1}^ku_j^2, ~~\mbox{in}~~\mathbb C={\mathbb R}^2, i=1,\dots, k,\\ & u_i>0, i=1,\ldots, k, \end{aligned} \right. \end{equation} having the following symmetries (here $\overline{z}$ is the complex conjugate of $z$) \begin{equation} \label{eqn2_i} \begin{aligned} u_{i}(z)&=u_i(G^hz), \qquad \ &\mbox{ on} \ &\mathbb C\,,i=1,\dots,k,\\ u_i(z)&=u_{i+1}(Gz), \qquad \ &\mbox{ on} \ &\mathbb C\,,i=1,\dots,k,\\ u_{k+1}(z)&=u_1(z), \ &\mbox{ on} \ &\mathbb C\\ u_{k+2-i}(z)&=u_i(\overline{z}), \qquad \ &\mbox{ on} \ &\mathbb C\,,i=1,\dots,k.\\ \end{aligned} \end{equation} Furthermore, \[\lim_{r\to\infty} \dfrac{1}{r^{1+2d}}\int_{\partial B_r(0)}\sum_1^k u_{i}^2=b\in(0,+\infty)\;;\] and \[\lim_{r\to\infty} \frac{r\int_{B_r(0)}\sum_1^k |\nabla u_{i}|^2+\sum_{i<j}u_i^2u_j^2} {\int_{\partial B_r(0)}\sum_1^k u_{i}^2}=d\;.\] \end{thm} The problem of the full classification of solutions to \eqref{maineqn} is largely open. In view of our results, one can formulate several open questions. \noindent {\bf Open problem 1.} We recall from \cite{blwz} that it is still an open problem to know in which dimension it is true that every monotone solution is one-dimensional. A similar open question is in which dimension it is true that every stable solution is one-dimensional. We refer to \cite{A-C}, \cite{GG}, \cite{dkw}, \cite{pacard}, and \cite{savin} for results of this kind for the Allen-Cahn equation. \noindent {\bf Open problem 2.} Let us recall that in one space dimension, there exists a unique solution to (\ref{1D}) (up to translations and scalings). Such solutions have linear growth at infinity and, in the Almgren monotonicity formula, they satisfy \begin{equation} \label{2.1n} \lim\limits_{r\rightarrow+\infty}N(r)=1. \end{equation} It is natural to conjecture that, in any space dimension, a solution of (\ref{maineqn}) satisfying (\ref{2.1n}) is actually one-dimensional, that is, there is a unit vector $a$ such that $(u(x),v(x))=(U (a \cdot x ), V (a \cdot x))$ for $x \in\mathbb{R}^n$, where $(U,V)$ solves (\ref{1D}). However this result seems to be difficult to obtain at this stage. \noindent {\bf Open problem 3.} A further step would be to prove uniqueness of the (family of) solutions having polynomial asymptotics given by Theorem \ref{main result} in two space dimensions. A more challenging question is to classify all solutions with \begin{equation} \label{2.1nn} \lim\limits_{r\rightarrow+\infty}N(r)=d. \end{equation} \noindent {\bf Open problem 4.} For the Allen-Cahn equation $ \Delta u+u-u^3=0$ in ${\mathbb R}^2$, solutions similar to those of Theorem \ref{main result} were first constructed in \cite{dfp} for $d=2$ and in \cite{acm} for $ d\geq 3$. (However, all solutions to the Allen-Cahn equation are bounded.)
On the other hand, it was also proved in \cite{dkpw} that the Allen-Cahn equation in ${\mathbb R}^2$ admits solutions with multiple fronts. An open question is whether a similar result holds for (\ref{maineqn}). Namely, are there solutions to (\ref{maineqn}) such that the set $\{ u=v\}$ contains multiple disjoint curves? \noindent {\bf Open problem 5.} This question is related to the extension of Theorem \ref{main result} to higher dimensions. We recall that for the Allen-Cahn equation $ \Delta u+u-u^3=0$ in ${\mathbb R}^{2m}$ with $m\geq 2$, saddle-like solutions were constructed in \cite{cabre} by employing properties of the Simons cone. Stable solutions to the Allen-Cahn equation in ${\mathbb R}^8$ with non-planar level sets were found in \cite{pacard}, using minimal cones. We conjecture that all these results should have analogues for (\ref{maineqn}). \section{Uniqueness of solutions in ${\mathbb R}$: Proof of Theorem \ref{thm0}} \numberwithin{equation}{section} \setcounter{equation}{0} In this section we prove Theorem \ref{thm0}. Without loss of generality, we assume that \begin{equation} \lim_{x \to +\infty} u(x)= +\infty, \lim_{x \to +\infty} v(x)=0. \end{equation} The existence of such entire solutions has been proved in \cite{blwz}. By the symmetry property of solutions to (\ref{1D}) (Theorem 1.3 of \cite{blwz}), we may consider the following problem \begin{equation}\label{entire problem} \left\{ \begin{aligned} &u^{''}=uv^2,v^{''}=vu^2, u,v>0~~\text{in}~~\mathbb{R},\\ &\lim\limits_{x\to+\infty}u^{'}(x)=-\lim\limits_{x\to-\infty}v^{'}(x)=a \end{aligned} \right. \end{equation} where $a>0$ is a constant. We now prove that there exists a unique solution $(u, v)$ to (\ref{entire problem}), up to translations. We will prove it using the method of moving planes. First we observe that for any solution $(u,v)$ of \eqref{entire problem}, $u^{''}$ and $v^{''}$ decay exponentially at infinity. Integration shows that as $x\to+\infty$, $|u^{'}(x)-a|$ decays exponentially. (See also \cite{blwz}.) This implies the existence of a positive constant $A$ such that \begin{equation}\label{uniform deviation} |u(x)-ax^+|+|v(x)-ax^-|\leq A. \end{equation} Moreover, the limits \[\lim\limits_{x\to+\infty}(u(x)-ax^+),\lim\limits_{x\to-\infty}(v(x)-ax^-)\] exist. Now assume $(u_1,v_1)$ and $(u_2,v_2)$ are two solutions of \eqref{entire problem}. For $t>0$, denote \[u_{1,t}(x):=u_1(x+t),v_{1,t}(x):=v_1(x+t).\] We want to prove that there exists an optimal $t_0$ such that for all $t\geq t_0$, \begin{equation}\label{sliding} u_{1,t}(x)\geq u_2(x),v_{1,t}(x)\leq v_2(x)~~\text{in}~~\mathbb{R}. \end{equation} Then we will show that when $t=t_0$ these inequalities are identities. This will imply the uniqueness result. Without loss of generality, assume $(u_1,v_1)$ and $(u_2,v_2)$ satisfy the estimate \eqref{uniform deviation} with the same constant $A$. \noindent {\bf Step 1.} For $t\geq \frac{16A}{a}$ ($A$ as in \eqref{uniform deviation}), \eqref{sliding} holds. Firstly, in the region $\{x\geq -t+\frac{2A}{a}\}$, by \eqref{uniform deviation} we have \begin{equation}\label{1n} u_{1,t}(x)\geq a(x+t)-A\geq ax^++A\geq u_2(x); \end{equation} while in the region $\{x\leq -t+\frac{2A}{a}\}$, we have \begin{equation}\label{2n} v_{1,t}(x)\leq a(x+t)^-+A\leq ax^--A\leq v_2(x). \end{equation} On the interval $\{x<-t+\frac{2A}{a}\}$, we have \begin{equation} \label{new1} \left\{ \begin{aligned} &u_{1,t}^{''}=u_{1,t}v_{1,t}^2\leq u_{1,t}v_2^2,\\ &u_2^{''}=u_2v_2^2.\\ \end{aligned} \right.
\end{equation} With the right boundary conditions \[u_{1,t}(-t+\frac{2A}{a})\geq u_2(-t+\frac{2A}{a}), \lim\limits_{x\to-\infty}u_{1,t}(x)=\lim\limits_{x\to-\infty}u_2(x)=0,\] a direct application of the maximum principle implies \[\inf_{\{x<-t+\frac{2A}{a}\}}(u_{1,t}-u_2)\geq 0.\] The same type of argument also shows that \[\sup_{\{x>-t+\frac{2A}{a} \}}(v_{1,t}-v_2)\leq 0.\] Therefore, we have shown that for $ t\geq \frac{16A}{a}, u_{1,t}\geq u_2$ and $ v_{1,t}\leq v_2 $. \noindent {\bf Step 2.} We now decrease $t$ to the optimal value for which (\ref{sliding}) still holds: \[t_0=\inf\{t^{'}:\ \eqref{sliding}~~\text{holds} ~~\text{for all} ~~ t \geq t^{'} \}.\] Thus $t_0$ is well defined by Step 1. Since $ -(u_{1,t_0}-u_2)^{''} +v_{1,t_0}^2 (u_{1,t_0}-u_2) \geq 0,\ -(v_{2}-v_{1,t_0})^{''} +u_{1,t_0}^2 (v_2- v_{1,t_0}) \geq 0,$ by the strong maximum principle, either \[u_{1,t_0}(x)\equiv u_2(x),v_{1,t_0}(x)\equiv v_2(x)~~\text{in}~~\mathbb{R},\] or \begin{equation} \label{new23} u_{1,t_0}(x)> u_2(x),v_{1,t_0}(x)< v_2(x)~~\text{in}~~\mathbb{R}. \end{equation} Let us argue by contradiction, assuming that (\ref{new23}) holds. By the definition of $t_0$, there exists a sequence of $t_k<t_0$ such that $\lim\limits_{k\to+\infty}t_k=t_0$ and either \begin{equation} \label{2.0n} \inf_{\mathbb{R}}(u_{1,t_k}-u_2)<0, \end{equation} or \[\sup_{\mathbb{R}}(v_{1,t_k}-v_2)>0.\] Let us only consider the first case. Define $w_{1,k}:=u_{1,t_k}-u_2$ and $w_{2,k}:=v_2-v_{1,t_k}$. Direct calculations show that they satisfy \begin{equation}\label{2.9} \left\{ \begin{aligned} &-w_{1,k}^{''}+v_{1,t_k}^2w_{1,k}=u_2(v_2+v_{1,t_k})w_{2,k}~~\mbox{in}~~\mathbb{R}, \\ &-w_{2,k}^{''}+u_{1,t_k}^2w_{2,k}=v_2(u_2+u_{1,t_k})w_{1,k}~~\mbox{in}~~\mathbb{R}. \end{aligned} \right. \end{equation} We use the auxiliary function $g(x)=\log(|x|+3)$ as in \cite{cl}. Note that $$g\geq 1,~~g^{''}<0~~\mbox{in}~~\{x\neq 0\}.$$ Define $\widetilde{w}_{1,k}:=w_{1,k}/g$ and $\widetilde{w}_{2,k}:=w_{2,k}/g$. For $x \not = 0$ we have \begin{equation}\label{2.10} \left\{ \begin{aligned} &-\widetilde{w}_{1,k}^{''}-2\frac{g^{'}}{g}\widetilde{w}_{1,k}^{'} +[v_{1,t_k}^2-\frac{g^{''}}{g}]\widetilde{w}_{1,k}=u_2(v_2+v_{1,t_k})\widetilde{w}_{2,k}, ~~\mbox{in}~~\mathbb{R}, \\ &-\widetilde{w}_{2,k}^{''}-2\frac{g^{'}}{g}\widetilde{w}_{2,k}^{'} +[u_{1,t_k}^2-\frac{g^{''}}{g}]\widetilde{w}_{2,k}=v_2(u_2+u_{1,t_k})\widetilde{w}_{1,k},~~\mbox{in}~~\mathbb{R}. \end{aligned} \right. \end{equation} By definition, $w_{1,k}$ and $w_{2,k}$ are bounded in $\mathbb{R}$, and hence \[\widetilde{w}_{1,k},\widetilde{w}_{2,k}\to 0~~\text{as}~~|x|\to\infty.\] In particular, in view of (\ref{2.0n}), we know that $\inf_{{\mathbb R}} (\widetilde{w}_{1,k})<0$ is attained at some point $x_{k,1}$. Note that $|x_{k,1}|$ must be unbounded: indeed, if (up to a subsequence) $ x_{k,1} \to x_{\infty}$ as $t_k\to t_0$, then $ w_{1,k} (x_{k,1}) \to u_{1,t_0} (x_\infty)- u_2 (x_\infty)$, and since $w_{1,k}(x_{k,1})<0$ for every $k$ we would get $u_{1,t_0} (x_\infty)- u_2 (x_\infty)\leq 0$, which violates the assumption (\ref{new23}). Since $|x_{k,1}|$ is unbounded, at $x=x_{k,1}$ there holds $$\widetilde{w}_{1,k}^{''}\geq 0~~\mbox{and}~~\widetilde{w}_{1,k}^{'}=0.$$ Substituting this into the first equation of \eqref{2.10}, we get \begin{equation}\label{2.11} [v_{1,t_k}(x_{k,1})^2-\frac{ g^{''}(x_{k,1})}{g(x_{k,1})}] \widetilde{w}_{1,k}(x_{k,1})\geq u_2(x_{k,1})(v_2(x_{k,1})+v_{1,t_k}(x_{k,1}))\widetilde{w}_{2,k}(x_{k,1}) \end{equation} which implies that $\widetilde{w}_{2,k}(x_{k,1})<0$. Thus we also have $\inf\limits_{\mathbb{R}}\widetilde{w}_{2,k}<0$. Assume it is attained at $x_{k,2}$.
The same argument as before shows that $|x_{k,2}|$ must also be unbounded. Similar to \eqref{2.11}, we have \begin{equation}\label{2.12} [u_{1,t_k}(x_{k,2})^2-\frac{g^{''}(x_{k,2})}{g(x_{k,2})}] \widetilde{w}_{2,k} (x_{k,2})\geq v_2(x_{k,2})(u_2(x_{k,2})+u_{1,t_k}(x_{k,2}))\widetilde{w}_{1,k} (x_{k,2}). \end{equation} Observe that $$\widetilde{w}_{2,k} (x_{k,2})=\inf\limits_{\mathbb{R}}\widetilde{w}_{2,k} \leq\widetilde{w}_{2, k} (x_{k,1}), $$ $$\widetilde{w}_{1, k} (x_{k,1})=\inf\limits_{\mathbb{R} }\widetilde{w}_{1,k} \leq\widetilde{w}_{1, k} (x_{k,2}). $$ Substituting these into \eqref{2.11} and \eqref{2.12}, we obtain \begin{equation} \label{2.13n} \widetilde{w}_{1, k} (x_{k,1})\geq \frac{u_2(x_{k,1})[v_2(x_{k,1})+v_{1,t_k}(x_{k,1})]} {v_{1,t_k}(x_{k,1})^2-\frac{g^{''}(x_{k,1})}{g(x_{k,1})}} \frac{v_2(x_{k,2})[u_2(x_{k,2})+u_{1,t_k}(x_{k,2})]} {u_{1,t_k}(x_{k,2})^2-\frac{g^{''}(x_{k,2})}{g(x_{k,2})}} \widetilde{w}_{1,k}(x_{k,1}). \end{equation} Since $ \tilde{w}_{1,k} (x_{k,1}) <0$, we conclude from (\ref{2.13n}) that \begin{equation} \frac{u_2(x_{k,1})[v_2(x_{k,1})+v_{1,t_k}(x_{k,1})]} {v_{1,t_k}(x_{k,1})^2-\frac{g^{''}(x_{k,1})}{g(x_{k,1})}} \frac{v_2(x_{k,2})[u_2(x_{k,2})+u_{1,t_k}(x_{k,2})]} {u_{1,t_k}(x_{k,2})^2-\frac{g^{''}(x_{k,2})}{g(x_{k,2})}} \geq 1 \end{equation} where $|x_{k,1}|\to +\infty, |x_{k,2}| \to +\infty$. This is impossible: since $ \frac{ g^{''} (x)}{g (x)} \sim -\frac{1}{|x|^2 \log (|x|+3)}$ as $|x| \to +\infty$, the exponential decay and the linear growth properties of $u$ and $v$ at infinity imply that both factors in the above product tend to $0$ as $k\to+\infty$. We have thus reached a contradiction, and the proof of Theorem \ref{thm0} is thereby completed. \section{Stable solutions: Proof of Theorem \ref{thm1}} \numberwithin{equation}{section} \setcounter{equation}{0} In this section, we prove Theorem \ref{thm1}. The proof follows an idea from Berestycki-Caffarelli-Nirenberg \cite{BCN} (see also Ambrosio-Cabr\'{e} \cite{A-C} and Ghoussoub-Gui \cite{GG}). First, by stability, we have the following lemma. \begin{lem} There exist a constant $\lambda\geq 0$ and two functions $\varphi>0$ and $\psi<0$, smoothly defined in $\mathbb{R}^2$ such that \begin{equation}\label{entire eigenfunction} \left\{ \begin{aligned} &\Delta\varphi=v^2\varphi+2uv\psi-\lambda\varphi,\\ &\Delta\psi= 2uv\varphi+u^2\psi-\lambda\psi. \end{aligned} \right. \end{equation} \end{lem} \begin{proof} For any $R<+\infty$ the stability assumption reads $$\lambda(R):=\min\limits_{\varphi,\psi\in H_0^1(B_R(0))\setminus\{0\}} \frac{\int_{B_R(0)}|\nabla\varphi|^2+|\nabla\psi|^2+v^2\varphi^2+u^2\psi^2+4uv\varphi\psi} {\int_{B_R(0)}\varphi^2+\psi^2}\geq0.$$ It is well known that the corresponding minimizer is the first eigenfunction. That is, if $(\varphi_R,\psi_R)$ realizes $\lambda(R)$, then \begin{equation}\label{first eigenfunction R} \left\{ \begin{aligned} &\Delta\varphi_R= v^2\varphi_R+2uv\psi_R-\lambda(R)\varphi_R, \ \mbox{in} \ B_R (0),\\ &\Delta\psi_R= 2uv\varphi_R+u^2\psi_R-\lambda(R)\psi_R, \ \mbox{in} \ B_R (0), \\ & \varphi_R =\psi_R=0 \ \mbox{on} \ \partial B_R (0). \end{aligned} \right. \end{equation} By possibly replacing $(\varphi_R,\psi_R)$ with $(|\varphi_R|,-|\psi_R|)$, we can assume $\varphi_R\geq 0$ and $\psi_R\leq 0$. After a normalization, we also assume \begin{equation}\label{normalization condition 2} |\varphi_R(0)|+|\psi_R(0)|=1. \end{equation} $\lambda(R)$ is nonincreasing in $R$ and nonnegative, thus it remains bounded as $R\rightarrow+\infty$.
Let $$\lambda:=\lim\limits_{R\rightarrow+\infty}\lambda(R).$$ The equations for $\varphi_R$ and $-\psi_R$ (both of which are nonnegative functions) form a cooperative system, thus by the Harnack inequality (\cite{A-G-M} or \cite{BS}), $\varphi_R$ and $\psi_R$ are uniformly bounded on any compact set of $\mathbb{R}^2$. By letting $R\rightarrow+\infty$, we can obtain a converging subsequence and the limit $(\varphi,\psi)$ satisfies \eqref{entire eigenfunction}. \par We also have $\varphi\geq 0$ and $\psi\leq 0$ by passing to the limit. Hence $$-\Delta\varphi+(v^2-\lambda)\varphi\geq 0.$$ Applying the strong maximum principle, either $\varphi>0$ strictly or $\varphi\equiv 0$. If $\varphi\equiv 0$, substituting this into the first equation in \eqref{entire eigenfunction}, we obtain $\psi\equiv 0$. This contradicts the normalization condition \eqref{normalization condition 2}. Thus, it holds true that $\varphi>0$ and similarly $\psi<0$. \end{proof} Fix a unit vector $\xi$. Differentiating the equation (\ref{maineqn}) in the direction $\xi$ yields the following equation for $(u_{\xi},v_{\xi})$ \begin{equation}\label{linearized equation} \left\{ \begin{aligned} &\Delta u_{\xi}=v^2u_{\xi}+2uvv_{\xi},\\ &\Delta v_{\xi}=2uvu_{\xi}+u^2v_{\xi}. \end{aligned} \right. \end{equation} Let $$w_1=\frac{u_{\xi}}{\varphi},\qquad w_2=\frac{v_{\xi}}{\psi}.$$ Direct calculations using \eqref{entire eigenfunction} and \eqref{linearized equation} show \begin{equation} \left\{ \begin{aligned} &\text{div}(\varphi^2\nabla w_1)=2uv\varphi\psi(w_2-w_1)+\lambda\varphi^2w_1,\\\nonumber &\text{div}(\psi^2\nabla w_2)=2uv\varphi\psi(w_1-w_2)+\lambda\psi^2w_2. \end{aligned} \right. \end{equation} For any $\eta\in C_0^{\infty}(\mathbb{R}^2)$, testing these two equations with $w_1\eta^2$ and $w_2\eta^2$ respectively, we obtain \begin{equation} \left\{ \begin{aligned} &-\int\bigl(\varphi^2|\nabla w_1|^2\eta^2+2\varphi^2w_1\eta\nabla w_1\cdot\nabla\eta\bigr) =\int\bigl(2uv\varphi\psi(w_2-w_1)w_1\eta^2+\lambda\varphi^2w_1^2\eta^2\bigr),\\\nonumber &-\int\bigl(\psi^2|\nabla w_2|^2\eta^2+2\psi^2w_2\eta\nabla w_2\cdot\nabla\eta\bigr) =\int\bigl(2uv\varphi\psi(w_1-w_2)w_2\eta^2+\lambda\psi^2w_2^2\eta^2\bigr). \end{aligned} \right. \end{equation} Adding these two and applying the Cauchy-Schwarz inequality, we infer that \begin{equation}\label{5.6} \int\varphi^2|\nabla w_1|^2\eta^2+\psi^2|\nabla w_2|^2\eta^2\leq 16 \int\varphi^2w_1^2|\nabla\eta|^2+\psi^2w_2^2|\nabla\eta|^2\leq 16 \int (u_{\xi}^2+v_{\xi}^2)|\nabla\eta|^2. \end{equation} Here we have discarded the nonnegative terms appearing on the right-hand side and used the fact that $$2uv\varphi\psi(w_2-w_1)w_1\eta^2+2uv\varphi\psi(w_1-w_2)w_2\eta^2 =-2uv\varphi\psi(w_1-w_2)^2\eta^2\geq0,$$ because $\varphi>0$ and $\psi<0$. On the other hand, testing the inequality $\Delta u \geq 0$ with $u\eta^2$ ($\eta$ as above) and integrating by parts, we get $$\int|\nabla u|^2\eta^2\leq 16\int u^2|\nabla\eta|^2.$$ The same estimate also holds for $v$. For any $r>0$, take $\eta\equiv 1$ in $B_r(0)$, $\eta\equiv0$ outside $B_{2r}(0)$ and $|\nabla\eta|\leq 2/r$. By the linear growth of $u$ and $v$, we obtain a constant $C$ such that \begin{equation}\label{5.7} \int_{B_r(0)}|\nabla u|^2+|\nabla v|^2\leq Cr^2. \end{equation} Now for any $R>0$, in \eqref{5.6}, we take $\eta$ to be \begin{equation} \eta(z)= \left\{ \begin{array}{ll} 1, & z\in B_R(0), \\\nonumber 0, & z\in B_{R^2}(0)^c,\\\nonumber 1-\frac{\log(|z|/R)}{\log R}, & z\in B_{R^2}(0)\setminus B_R(0). \end{array} \right.
\end{equation} With this $\eta$, we infer from (\ref{5.6}) \begin{eqnarray*} &&\int_{B_R(0)}\varphi^2|\nabla w_1|^2+\psi^2|\nabla w_2|^2\\ &\leq&\frac{C}{(\log R)^2}\int_{B_{R^2}(0)\setminus B_R(0)}\frac{1}{|z|^2}(|\nabla u|^2+|\nabla v|^2)\\ &\leq &\frac{C}{(\log R)^2}\int_R^{R^2}r^{-2}\Bigl(\int_{\partial B_r(0)}|\nabla u|^2+|\nabla v|^2\Bigr)dr\\ &=&\frac{C}{(\log R)^2}\int_R^{R^2}r^{-2}\Bigl(\frac{d}{dr}\int_{B_r(0)}|\nabla u|^2+|\nabla v|^2\Bigr)dr\\ &=&\frac{C}{(\log R)^2}\Bigl[r^{-2}\Bigl(\int_{B_r(0)}|\nabla u|^2+|\nabla v|^2\Bigr)\Big|_R^{R^2}+2\int_R^{R^2}r^{-3}\Bigl(\int_{B_r(0)}|\nabla u|^2+|\nabla v|^2\Bigr)dr\Bigr]\\ &\leq& \frac{C}{\log R}. \end{eqnarray*} By letting $R\rightarrow+\infty$, we see that $\nabla w_1\equiv 0$ and $\nabla w_2\equiv 0$ in $\mathbb{R}^2$. Thus, there is a constant $c$ such that $$(u_{\xi},v_{\xi})=c(\varphi,\psi).$$ Because $\xi$ is an arbitrary unit vector and the constant $c=c(\xi)$ depends linearly on $\xi$, there is a unit direction along which $c(\xi)=0$; hence, after a suitable rotation of coordinates, $$u_y\equiv 0,\ v_y\equiv 0\ \ \text{in}~\mathbb{R}^2.$$ That is, $u$ and $v$ depend on $x$ only and they are one-dimensional. \section{Existence in bounded balls} \numberwithin{equation}{section} \setcounter{equation}{0} In this section we first construct a solution $(u,v)$ to the problem \begin{equation}\label{equation} \left\{ \begin{aligned} &\Delta u=uv^2 ~~\mbox{in}~~B_R(0),\\ &\Delta v=vu^2 ~~\mbox{in}~~B_R(0), \end{aligned} \right. \end{equation} satisfying the boundary condition \begin{equation} \label{eqn2} u=\Phi^+, v=\Phi^- \ \mbox{ on} \ \partial B_R(0)\subset\mathbb{R}^2. \end{equation} More precisely, we prove \begin{thm}\label{thm existence on bounded set} There exists a solution $(u_R,v_R)$ to problem \eqref{equation}, satisfying \begin{enumerate} \item $u_R-v_R>0$ in $\{\Phi>0\}$ and $u_R-v_R<0$ in $\{\Phi<0\}$; \item $u_R\geq\Phi^+$ and $v_R\geq\Phi^-$; \item $\forall i=1,\cdots, d$, $u_R(T_iz)=v_R(z)$; \item $\forall r\in(0,R)$, $$N(r;u_R,v_R):=\frac{r\int_{B_r(0)}|\nabla u_R|^2+|\nabla v_R|^2+u_R^2v_R^2} {\int_{\partial B_r(0)}u_R^2+v_R^2}\leq d.$$ \end{enumerate} \end{thm} \begin{proof} Let us denote by $\mathcal U\subset H^1(B_R(0))^2$ the set of pairs satisfying the boundary condition \eqref{eqn2}, together with conditions $(1,2,3)$ of the statement of the Theorem (with the strict inequality $<$ replaced by $\leq$, and so now $\mathcal{U}$ is a closed set). The desired solution will be a minimizer of the energy functional $$E_R(u,v):=\int_{B_R(0)}|\nabla u|^2+|\nabla v|^2+u^2v^2$$ over $\mathcal U$. Existence of at least one minimizer follows easily from the direct method of the Calculus of Variations. To prove that the minimizer also satisfies equation (\ref{equation}), we use the heat flow method. More precisely, we consider the following parabolic problem \begin{equation}\label{parabolic equation} \left\{ \begin{aligned} &U_t-\Delta U=-UV^2, ~~\mbox{in}~~[0,+\infty)\times B_R(0),\\ &V_t-\Delta V=-VU^2,~~\mbox{in}~~[0,+\infty)\times B_R(0), \end{aligned} \right. \end{equation} with the boundary conditions $U=\Phi^+$ and $V=\Phi^{-}$ on $(0,+\infty)\times \partial B_R(0)$ and initial conditions in $\mathcal U$. By the standard parabolic theory, there exists a unique local solution $(U,V)$. Then by the maximum principle, $0\leq U\leq \sup_{B_R(0)}\Phi^+, \ \ 0\leq V \leq \sup_{B_R(0)} \Phi^{-}$, hence the solution can be extended to a global one, for all $t\in(0,+\infty)$.
By noting the energy identity \begin{equation} \label{ert} \frac{d}{dt}E_R(U(t),V(t))=-\int_{B_R(0)}|\frac{\partial U}{\partial t}|^2 +|\frac{\partial V}{\partial t}|^2 \end{equation} and the fact that $E_R\geq 0$, standard parabolic theory implies that for any sequence $t_i\to+\infty$, there exists a subsequence of $t_i$ such that $(U(t_i),V(t_i))$ converges to a solution $(u,v)$ of \eqref{equation}. Next we show that $\mathcal U$ is positively invariant under the parabolic flow. First of all, by the symmetry of initial and boundary data, $(V(t,T_iz),U(t,T_iz))$ is also a solution to the problem \eqref{parabolic equation}. By the uniqueness of solutions to the parabolic system \eqref{parabolic equation}, $(U,V)$ inherits the symmetry of $(\Phi^+,\Phi^-)$. That is, for all $t\in[0,+\infty)$ and $i=1,\cdots, d$, $$U(t,z)=V(t,T_iz).$$ This implies $$U-V=0~~\mbox{on}~~\{\Phi=0\}.$$ Thus, in the open set $D_R(0):=B_R(0)\cap\{\Phi>0\}$, we have, for any initial datum $(u_0,v_0)\in\mathcal U$, \begin{equation}\label{eq:difference} \left\{ \begin{aligned} &(U-V)_t-\Delta (U-V)=UV(U-V), ~~\mbox{in}~~[0,+\infty)\times D_R(0),\\ &U-V\geq 0,~~\mbox{on}~~[0,+\infty)\times \partial D_R(0),\\ &U-V\geq 0,~~\mbox{on}~~\{0\}\times D_R(0). \end{aligned} \right. \end{equation} The strong maximum principle implies $U-V>0$ in $(0,+\infty)\times D_R(0)$. By letting $t\to+\infty$, we obtain that the limit satisfies \begin{equation}\label{4.10} u-v\geq 0~~\mbox{in}~~D_R(0). \end{equation} $(u,v)$ also inherits the symmetry: $\forall i=1,\cdots, d$, $$u (T_i z)=v (z).$$ Similar to \eqref{eq:difference}, noting \eqref{4.10}, we have \begin{equation} \left\{ \begin{aligned} &-\Delta (u-v)\geq 0, ~~\mbox{in}~~D_R(0),\\ &u-v=\Phi^+,~~\mbox{on}~~\partial D_R(0). \end{aligned} \right. \end{equation} Comparing with $\Phi^+$ on $D_R(0)$, we obtain \begin{equation}\label{2m} u-v>\Phi^+>0,~~\mbox{in}~~D_R(0). \end{equation} Because $u>0$ and $v>0$ in $B_R(0)$, we in fact have \begin{equation}\label{3n} u>\Phi^+,~~\mbox{in}~~B_R(0). \end{equation} In conclusion, $(u,v)$ satisfies conditions $(1,2,3)$ in the statement of the theorem. Let $(u_R, v_R)$ be a minimizer of $E_{R}$ over ${\mathcal U}$. Now we consider the parabolic equation (\ref{parabolic equation}) with the initial condition \begin{equation} U(x, 0)= u_R (x), \quad V (x, 0) = v_R (x). \end{equation} By (\ref{ert}) and the positive invariance of $\mathcal U$, we deduce that for all $t\geq 0$, $$ E_R (u_R, v_R) \leq E_{R} (U(t), V(t)) \leq E_R (u_R, v_R) $$ and hence $ (U(x, t), V(x, t) )\equiv (u_R(x), v_R (x))$ for all $ t \geq 0$. By the arguments above, we see that $ (u_R, v_R)$ satisfies (\ref{equation}) and conditions $(1,2,3)$ in the statement of the theorem. In order to prove (4), we first note that, as $(u_R,v_R)$ minimizes the energy and $(\Phi^+,\Phi^-)\in\mathcal U$, there holds \[\int_{B_R(0)}|\nabla u_R|^2+|\nabla v_R|^2+u_R^2v_R^2\leq \int_{B_R(0)}|\nabla \Phi|^2. \] Now by the Almgren monotonicity formula (Proposition \ref{monotonocity} below) and the boundary conditions, $\forall r\in(0,R)$, we derive $$N(r;u_R,v_R)\leq N(R;u_R,v_R)\leq \frac{R\int_{B_R(0)}|\nabla\Phi|^2}{\int_{\partial B_R(0)}|\Phi|^2}=d.$$ This completes the proof of Theorem \ref{thm existence on bounded set}. \end{proof} Let us now turn to the system with many components. In a similar way we shall prove the existence on bounded sets. Let $d$ be an integer or a half-integer and $2d=hk$ be a multiple of the number of components $k$, and $G$ denote the rotation of order $2d$.
Take the fundamental domain $F$ of the rotation group of order $2d$, that is, $F=\{z\in\mathbb{C}\;:\; \theta=\mbox{arg}(z)\in(-\pi/{2d},\pi/{2d})\}$, and define \begin{equation} \Psi(z)=\begin{cases} r^{d}\cos(d\theta)\qquad&\text{if $z\in \cup _{i=0}^{h-1}G^{ik}(F)$,}\\ 0 & \text{otherwise in $\mathbb C$.} \end{cases} \end{equation} Note that $\Psi(z)$ is positive whenever it is not zero. Next we construct a solution $(u_1,\dots,u_k)$ to the system \begin{equation}\label{equation_i} \Delta u_i=u_i\sum_{j\neq i,j=1}^ku_j^2, ~~\mbox{in}~~B_R(0), i=1,\dots, k \end{equation} satisfying the symmetry and boundary condition (here $\overline{z}$ is the complex conjugate of $z$) \begin{equation} \label{eqn2_in} \left\{\begin{array}{l} u_{i}(z)= u_i(G^hz), \qquad \ \ \ \ \mbox{ on} \ B_R(0)\,,i=1,\dots,k,\\ u_i (z)= u_{i+1}(Gz), \qquad \ \ \mbox{ on} \ B_R(0)\,,i=1,\dots,k,\\ u_{k+2-i}(z)= u_i(\overline{z}), \qquad \ \mbox{ on} \ B_R(0)\,,i=1,\dots,k,\\ u_{k+1}(z)= u_1(z), \ \ \ \ \ \ \ \ \ \mbox{ on} \ B_R(0), \end{array} \right. \end{equation} \begin{equation} \label{eqn2_ibc} u_{i+1}(z)=\Psi(G^i(z)), \qquad \mbox{ on} \ \partial B_R(0)\,,i=0,\dots,k-1. \end{equation} More precisely, we prove the following. \begin{thm}\label{thm existence on bounded seti} For every $R>0$, there exists a solution $(u_{1,R},\dots,u_{k,R})$ to the system \eqref{equation_i} with symmetries \eqref{eqn2_in} and boundary conditions \eqref{eqn2_ibc}, satisfying $$N(r):=\frac{r\int_{B_r(0)}\sum_1^k|\nabla u_{i,R}|^2+\sum_{i<j}u_{i,R}^2u_{j,R}^2} {\int_{\partial B_r(0)}\sum_1^k u_{i,R}^2}\leq d, \ \forall r\in(0,R).$$ \end{thm} \begin{proof} Let us denote by $\mathcal U\subset H^1(B_R(0))^k$ the set of $k$-tuples satisfying the symmetry and boundary condition \eqref{eqn2_in}, \eqref{eqn2_ibc}. The desired solution will be the minimizer of the energy functional $$\int_{B_R(0)}\sum_1^k|\nabla u_{i,R}|^2+\sum_{i<j}u_{i,R}^2u_{j,R}^2$$ over $\mathcal U$. Once more, to deal with the constraints, we may take advantage of the positive invariance of the associated heat flow: \begin{equation}\label{parabolic equation_i} \left\{ \begin{aligned} &\dfrac{\partial U_i}{\partial t}-\Delta U_i=-U_i\sum_{j\neq i}U_j^2, ~~\mbox{in}~~[0,+\infty)\times B_R(0),\\ \end{aligned} \right.\end{equation} which can be solved under conditions \eqref{eqn2_in}, \eqref{eqn2_ibc} and initial conditions in $\mathcal U$. Thus, the minimizer of the energy $(u_{1,R},\dots,u_{k,R})$ solves the differential system. In addition, using the test function $(\Psi_1,\dots,\Psi_k)$, where $\Psi_i=\Psi\circ G^{i-1}$, $i=1,\dots,k$, we have \[\int_{B_R(0)}\sum_1^k|\nabla u_{i,R}|^2+\sum_{i<j}u_{i,R}^2u_{j,R}^2\leq k\int_{B_R(0)}|\nabla \Psi|^2. \] Now by the Almgren monotonicity formula below (Proposition \ref{monotonocity}) and the boundary conditions, we get $$N(r)\leq N(R)\leq \frac{R\int_{B_R(0)}|\nabla\Psi|^2}{\int_{\partial B_R(0)}|\Psi|^2}=d, \ \forall r \in (0, R).$$ \end{proof} In order to pass to the limit $R\to+\infty$ and obtain the entire solutions of Theorems \ref{main result} and \ref{thm:maini}, we need to find upper and lower bounds for the solutions, uniform with respect to $R$ on bounded subsets of $\mathbb C$. That is, we will prove that for any $ r>0$, there exist positive constants $0<c(r)<C(r)$ (independent of $R$) such that \begin{equation}\label{uniform upper bound}c(r)< \sup\limits_{B_r(0)}u_R\leq C(r).
\end{equation} Once we have this estimate, by letting $R\rightarrow+\infty$ a subsequence of $(u_R,v_R)$ will converge to a solution $(u,v)$ of problem (\ref{maineqn}), uniformly on any compact set of $\mathbb{R}^2$. It is easily seen that properties (1), (2), (3) and (4) in Theorem \ref{thm existence on bounded set} can be derived by passing to the limit, and we obtain the main results stated in Theorems \ref{main result} and \ref{thm:maini}. It then remains to establish the bound (\ref{uniform upper bound}). In the next section, we shall obtain this estimate by using the monotonicity formula. \section{Monotonicity formula} Let us start by stating some monotonicity formulae for solutions to (\ref{maineqn}), for any dimension $n\geq 2$. The first two are well known and we include them here for completeness. But we will also require some refinements. \begin{prop}\label{monotonocity 1} For $r>0$ and $x\in\mathbb{R}^n$, $$E(r)=r^{2-n}\int_{B_r(x)}\sum_1^k|\nabla u_i|^2+\sum_{i<j}u_i^2u_j^2$$ is nondecreasing in $r$. \end{prop} For a proof, see \cite{C-L 2}. The next statement is an Almgren-type monotonicity formula with remainder. \begin{prop}\label{monotonocity} For $r>0$ and $x\in\mathbb{R}^n$, let us define \[H(r)=r^{1-n}\int_{\partial B_r(x)}\sum_1^k u_i^2.\] Then $$N(r;x):=\frac{E(r)}{H(r)}$$ is nondecreasing in $r$. In addition there holds \begin{equation}\label{eq:remainder} \int_{0}^r \dfrac{2\int_{B_s}\sum_{i<j}^ku_i^2u_j^2}{\int_{\partial B_s}\sum_1^ku_i^2}ds\leq N(r)\;. \end{equation} \end{prop} \begin{proof} For simplicity, take $x$ to be the origin $0$ and let $k=2$. We have \[H(r)= r^{1-n}\int_{\partial B_r}u^2+v^2\;,\qquad E(r)=r^{2-n}\int_{B_r}|\nabla u|^2+|\nabla v|^2+u^2v^2\;. \] Then, direct calculations show that \begin{equation}\label{4.2} \frac{d}{dr} H(r) =2r^{1-n}\int_{B_r}|\nabla u|^2+|\nabla v|^2+2u^2v^2. \end{equation} By the proof of Proposition \ref{monotonocity 1}, we have \begin{equation}\label{4.1} \frac{d}{dr} E(r) =2r^{2-n}\int_{\partial B_r}[u_r^2+v_r^2] +2r^{1-n}\int_{B_r}u^2v^2. \end{equation} With these two identities, we obtain \begin{multline*} \frac{d}{dr}\frac{E}{H}(r) =\dfrac{H [2r^{2-n}\int_{\partial B_r}(u_r^2+v_r^2) +2r^{1-n}\int_{B_r}u^2v^2] -E[2r^{1-n}\int_{\partial B_r} uu_r+vv_r]}{H^2}\\ \geq \dfrac{2r^{3-2n}\int_{\partial B_r}(u^2+v^2) \int_{\partial B_r}(u_r^2+v_r^2) -2r^{3-2n}\left[\int_{\partial B_r} uu_r+vv_r\right]^2}{H^{2}}+\\ +\dfrac{2r^{1-n}\int_{B_r}u^2v^2}{H} \geq \dfrac{2r^{1-n}\int_{B_r}u^2v^2}{H}. \end{multline*} Here we have used the Cauchy--Schwarz inequality together with the following inequality: $$E(r)\leq r^{2-n}\int_{B_r}|\nabla u|^2+|\nabla v|^2+2u^2v^2 =r^{2-n}\int_{\partial B_r} uu_r+vv_r.$$ Hence this yields the monotonicity of the Almgren quotient. In addition, by integrating the above inequality we obtain, for any $0<r_0<r$, \begin{equation*} \int_{r_0}^r \dfrac{2\int_{B_s}u^2v^2}{\int_{\partial B_s}u^2+v^2}ds\leq N(r)\;, \end{equation*} and \eqref{eq:remainder} follows by letting $r_0\to 0$. \end{proof} If $x=0$, we simply denote $N(r;x)$ as $N(r)$. Assuming an upper bound on $N(r)$, we establish a doubling property by the Almgren monotonicity formula. \begin{prop}\label{doubling property} Let $R>1$ and let $(u_1,\dots,u_k)$ be a solution of \eqref{eq:system} on $B_R$. If $N(R)\leq d$, then for any $1<r_1\leq r_2\leq R$ \begin{equation}\label{eq:h_monotone} \dfrac{H(r_2)}{H(r_1)}\leq e^{d}\dfrac{r_2^{2d}}{r_1^{2d}}. \end{equation} \end{prop} \begin{proof} For simplicity of notation, we expose the proof for the case of two components.
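Recall first how \eqref{4.2} is obtained (a direct verification via the divergence theorem, using the equations $\Delta u=uv^2$ and $\Delta v=vu^2$): writing $H(r)=\int_{\partial B_1}\big(u^2+v^2\big)(r\omega)\,d\sigma(\omega)$ and differentiating under the integral sign,
\[
\frac{d}{dr}H(r)=2r^{1-n}\int_{\partial B_r}\big(u\,u_r+v\,v_r\big)
=2r^{1-n}\int_{B_r}\big(|\nabla u|^2+|\nabla v|^2+u\Delta u+v\Delta v\big)
=2r^{1-n}\int_{B_r}\big(|\nabla u|^2+|\nabla v|^2+2u^2v^2\big)\;.
\]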
By direct calculation using \eqref{4.2}, we obtain \begin{eqnarray*} \frac{d}{dr}\log\Bigg[r^{1-n} (\int_{\partial B_r(0)}u^2+v^2)\Bigg] &=&\frac{2\int_{B_r}|\nabla u|^2+|\nabla v|^2+2u^2v^2}{\int_{\partial B_r(0)}u^2+v^2}\\ &=& \frac{2N(r)}{r}+\frac{2\int_{B_r}u^2v^2}{\int_{\partial B_r(0)}u^2+v^2}\\ &\leq& \frac{2d}{r}+\frac{2\int_{B_r}u^2v^2}{\int_{\partial B_r(0)}u^2+v^2}. \end{eqnarray*} Thanks to \eqref{eq:remainder}, by integrating between $r_1$ and $r_2$ we find that, for $1<r_1\leq r_2\leq R$, \begin{equation}\label{eq:h_monotone1} \dfrac{H(r_2)}{H(r_1)}\leq e^{d}\dfrac{r_2^{2d}}{r_1^{2d}}. \end{equation} \end{proof} An immediate consequence of Proposition \ref{doubling property} is the lower bound on bounded sets for the solutions found in Theorems \ref{thm existence on bounded set} and \ref{thm existence on bounded seti}. \begin{prop}\label{prop:lowerbound} Let $(u_{1,R},\dots,u_{k,R})$ be a family of solutions to \eqref{eq:system} such that $N(R)\leq d$ and $H(R)= CR^{2d}$. Then, for every fixed $1<r<R$, there holds \[H(r)\geq Ce^{-d}r^{2d}.\] \end{prop} Another byproduct of the monotonicity formula with the remainder \eqref{eq:remainder} is the existence of the limit of $H(r)/r^{2d}$. \begin{coro}\label{existencelimitH} Let $(u_1,\dots,u_k)$ be a solution of \eqref{eq:system} on $\mathbb C$ such that $\lim_{r\to+\infty}N(r)\leq d$. Then the following limit exists and is finite: \begin{equation}\label{eq:Hhaslimit} \lim_{r\to+\infty}\dfrac{H(r)}{r^{2d}}<+\infty\;. \end{equation} \end{coro} Now we prove the optimal lower bound on the growth of the solution. To this aim, we need a fine estimate on the asymptotics of the lowest eigenvalue as the competition term diverges. The following result is an extension of Theorem 1.6 in \cite{blwz}, where the estimate was proved in the case of two components. \begin{thm}\label{thm:lambda} Let $d$ be a fixed integer and let us consider \begin{multline}\label{eq:min_eigenvalue} \mathcal{L}(d,\Lambda)=\min\left\{ \int_0^{2\pi}\sum_{i=1}^d|u^\prime_i|^2+\Lambda\sum_{i<j}^d u_i^2u_j^2\; \Bigg| \; \begin{array}{l} \int_0^{2\pi}\sum_i u_i^2=1, \ u_{i+1}(x)=u_i(x-2\pi/d),\\ u_1(-x)=u_1(x)\;,u_{d+1}=u_1\; \end{array} \right\}. \end{multline} Then, there exists a constant $C$ such that for all $\Lambda>1$ we have \begin{equation}\label{eq:est_l} d^2-C \Lambda^{-1/4}\leq\mathcal{L}(d,\Lambda)\leq d^2\;. \end{equation} \end{thm} \begin{proof} Any minimizer $(u_{1,\Lambda},\dots,u_{d,\Lambda})$ solves the system of ordinary differential equations \begin{equation}u_i^{''}=\Lambda u_i\sum_{j\neq i}u_j^2-\lambda u_i\;,\qquad i=1,\dots,d, \end{equation} together with the associated energy conservation law \begin{equation} \sum_1^d(u_i^{'})^2+\lambda u_i^2-\Lambda\sum_{i<j}^du_i^2u_j^2=h\;. \end{equation} Note that the Lagrange multiplier satisfies \[\lambda=\int_0^{2\pi}\sum_{i=1}^d|u^\prime_i|^2+2\Lambda\sum_{i<j}^d u_i^2u_j^2=\mathcal{L}(d,\Lambda)+\int_0^{2\pi}\Lambda\sum_{i<j}^d u_i^2u_j^2\;.\] As $\Lambda\to\infty$, we see convergence of the eigenvalues $\lambda\simeq \mathcal{L}(d,\Lambda)\to d^2$, together with the energies $h\to 2d^2$. Moreover, the solutions remain bounded in Lipschitz norm and converge in Sobolev and H\"older spaces (see \cite{blwz} for more details). Now, let us focus on the interval $I=(a,a+2\pi/d)$ where the $i$-th component is active.
The symmetry constraints imply \begin{multline*}u_{i-1}(a)=u_i(a)\;,\quad u_{i-1}^{'}(a)=-u_i^{'}(a)\;,\\ u_{i+1}(a+2\pi/d)=u_i(a+2\pi/d)\;,\quad u_{i+1}^{'}(a+2\pi/d)=-u_i^{'}(a+2\pi/d)\;.\end{multline*} We observe that there is interaction only with the two nearest neighboring components, while the others are exponentially small (in $\Lambda$) on $I$. Close to the endpoint $a$, the component $u_i$ is increasing and convex, while $u_{i-1}$ is decreasing and again convex. Similarly to \cite{blwz} we have that \begin{equation}\label{eq:iv} u_i(a)= u_{i-1}(a)\simeq K\Lambda^{-1/4}\;,\quad u'_i(a)= -u^{'}_{i-1}(a)\simeq H=(h+K)/2\;. \end{equation} Hence, in a right neighborhood of $a$, there holds $u_i(x)\geq u_i(a)$, and therefore, as $u_{i-1}^{''}\geq\Lambda u_i^2(a)u_{i-1}$, from the initial value problem \eqref{eq:iv} we infer \[u_{i-1}(x)\leq C u_i(a)e^{-\Lambda^{1/2}u_i(a)(x-a)}\;,\quad\forall x\in [a,b].\] On the other hand, on the same interval we have \[u_{i}(x)\leq u_i(a)+C(x-a)\;,\quad\forall x\in [a,b]\] (here and below $C$ denotes a constant independent of $\Lambda$). Consequently, there holds \begin{equation}\label{eq:allterms} \Lambda \int_I u_{i-1}^2 u_{i}^2+u_{i-1}^3 u_{i}+u_{i-1} u_{i}^2\leq C\Lambda^{-1/2}u_i(a)^{-1}\simeq C\Lambda^{-1/4}\;. \end{equation} In particular, this yields \begin{equation}\label{eq:lambda} \mathcal L(d,\Lambda)\geq \lambda -C\Lambda^{-1/4}\;. \end{equation} In order to estimate $\lambda$, let us consider $\widehat u_i=\left(u_i-\sum_{j=i\pm 1}u_j\right)^+$. Then, as $u_i(a)=u_{i-1}(a)$ and $u_i(a+2\pi/d)=u_{i+1}(a+2\pi/d)$, $\widehat u_i\in H^1_0(I)$. By testing the differential equation for $u_i-\sum_{j=i\pm 1}u_j$ with $\widehat u_i$ on $I$ we find \[\int_I |\widehat u_i^{'}|^2\leq \lambda \int_I |\widehat u_i|^2+C\Lambda^{-1/4}\;,\] where in the last term we have majorized all the integrals of mixed fourth order monomials with \eqref{eq:allterms}. As $|I|=2\pi/d$, using the Poincar\'e inequality and \eqref{eq:lambda} we obtain the desired estimate on $\mathcal L(d,\Lambda)$. \end{proof} We are now ready to apply the estimate from below on $\mathcal L$ to derive a lower bound on the energy growth. We recall that there holds \[ \widehat E(r):=\int_{B_r(x)}\sum_1^k|\nabla u_i|^2+2\sum_{i<j}u_i^2u_j^2=\int_{\partial B_r(x)}\sum_1^k u_i\dfrac{\partial u_i}{\partial r}\;. \] \begin{prop}\label{prop:upperbound} Let $(u_{1,R},\dots,u_{k,R})$ be a solution of \eqref{eq:system} having the symmetries \eqref{eqn2_in} on $B_R$.
There exists a constant $C$ (independent of $R$) such that for all $\;1\leq r_1\leq r_2\leq R$ there holds \begin{equation} \dfrac{\widehat E(r_2)}{\widehat E(r_1)}\geq C\dfrac{r_2^{2d}}{r_1^{2d}}\;. \end{equation} \end{prop} \begin{proof} Let us compute \begin{multline*}\dfrac{d}{dr}\log\left(r^{-2d}\widehat E(r)\right)=-\dfrac{2d}{r}+\dfrac{\int_{\partial B_r(x)}{\sum_1^k|\nabla u_i|^2}+2\sum_{i<j}u_i^2u_j^2}{\int_{\partial B_r(x)}\sum_1^k u_i\dfrac{\partial u_i}{\partial r}}\\ =-\dfrac{2d}{r}+\dfrac{\int_{\partial B_r(x)}\sum_1^k\left(\dfrac{\partial u_i}{\partial r}\right)^2+\dfrac{1}{r^2}\left[\sum_1^k\left(\dfrac{\partial u_i}{\partial \theta}\right)^2+2r^2\sum_{i<j}u_i^2u_j^2\right]}{\int_{\partial B_r(x)}\sum_1^k u_i\dfrac{\partial u_i}{\partial r}}\\ =-\dfrac{2d}{r}+\dfrac{\int_{0}^{2\pi}\sum_1^k\left(\dfrac{\partial u_i}{\partial r}\right)^2+\dfrac{1}{r^2}\left[\sum_1^k\left(\dfrac{\partial u_i}{\partial \theta}\right)^2+2r^2\sum_{i<j}u_i^2u_j^2\right]}{\int_{0}^{2\pi}\sum_1^k u_i\dfrac{\partial u_i}{\partial r}}\end{multline*} Now we use Theorem \ref{thm:lambda} and we continue the chain of inequalities: \begin{multline}\label{eq:Emonotone}\dfrac{d}{dr}\log\left(r^{-2d}\widehat E(r)\right)\geq -\dfrac{2d}{r}+\dfrac{\int_{0}^{2\pi}\sum_1^k\left(\dfrac{\partial u_i}{\partial r}\right)^2+\dfrac{\mathcal L(d,2r^2)}{r^2}\int_0^{2\pi}\sum_1^k u_i^2}{\int_{0}^{2\pi}\sum_1^k u_i\dfrac{\partial u_i}{\partial r}}\\ \geq -\dfrac{2d-2 \sqrt{\mathcal L(d,2r^2)}}{r}\geq -\dfrac{C}{r^{3/2}}\;, \end{multline} where in the last line we have used the H\"older inequality. By integration we easily obtain the assertion. \end{proof} A direct consequence of the above inequalities is the non-vanishing of the quotient $E(r)/r^{2d}$: \begin{coro}\label{existencelimitE} Let $(u_1,\dots,u_k)$ be a solution of \eqref{eq:system} on $\mathbb C$ satisfying \eqref{eqn2_in}. Then there exists \begin{equation}\label{eq:hatEhaslimit} \lim_{r\to+\infty}\dfrac{\widehat E(r)}{r^{2d}}=b\in (0,+\infty]\;. \end{equation} If, in addition, $\lim_{r\to+\infty}N(r)\leq d$, then we have that $b<+\infty$ and \begin{equation}\label{eq:Ehaslimit} \lim_{r\to+\infty}N(r)=d,\quad\text{and}\quad\lim_{r\to+\infty}\dfrac{E(r)}{r^{2d}}=b\;. \end{equation} \end{coro} \begin{proof} Note that \eqref{eq:hatEhaslimit} is a straightforward consequence of the monotonicity formula \eqref{eq:Emonotone}. To prove \eqref{eq:Ehaslimit}, we first notice that \[\lim_{r\to+\infty}\dfrac{E(r)}{r^{2d}}=\lim_{r\to+\infty}N(r)\dfrac{H(r)}{r^{2d}}.\] Hence the limit of $E(r)/r^{2d}$ exists and is finite. Now we use \eqref{eq:remainder}: since \[ \int_{0}^{+\infty} \dfrac{2\int_{B_s}\sum_{i<j}^ku_i^2u_j^2}{\int_{\partial B_s}\sum_1^ku_i^2}ds<+\infty, \] we infer \[\liminf_{r\to+\infty}\dfrac{r\int_{B_{r}}\sum_{i<j}^ku_i^2u_j^2}{\int_{\partial B_{r}}\sum_1^ku_i^2}=0.\] Next, using Corollary \ref{existencelimitH} we can compute \[\liminf_{r\to+\infty}\dfrac{\int_{B_{r}}\sum_{i<j}^ku_i^2u_j^2}{r^{2d}}= \liminf_{r\to+\infty}\dfrac{\int_{B_{r}}\sum_{i<j}^ku_i^2u_j^2}{H(r)} \dfrac{H(r)}{r^{2d}} =0,\] and finally \[\liminf_{r\to +\infty}\dfrac{\widehat E(r)-E(r)}{r^{2d}}=0\;.\] Were the limit of $N(r)$ strictly less than $d$, the growth of $H(r)$ would be in contradiction with that of $E(r)$. \end{proof} Now we can combine the upper and lower estimates to obtain convergence of the approximating solutions on compact sets and complete the proof of Theorem \ref{thm:maini}. \begin{proof}[Proof of Theorem \ref{thm:maini}.]
Let $(u_{1,R},\dots,u_{k,R})$ be a family of solutions to \eqref{eq:system} such that $N_R(R)\leq d$ and $H_R(R)= CR^{2d}$. By Proposition \ref{doubling property} we deduce that, for every fixed $1<r<R$, there holds \[H_R(r)\geq Ce^{-d}r^{2d}\;. \] Assume first that, for some $r>1$, there holds the uniform bound \begin{equation}\label{eq:boundonH} H_R(r)\leq C\;. \end{equation} Then $H_R(r)$ and $E_R(r)$ are uniformly bounded in $R$. This implies a uniform bound on the $H^1(B_{r})$ norm. As the components are subharmonic, standard elliptic estimates (Harnack inequality) actually yield a $\mathcal C^2$ bound on $B_{r/2}$, which is independent of $R$. Note that, by Proposition \ref{prop:lowerbound}, $H_R(r)$ is bounded away from zero, so the weak limit cannot be zero. By the doubling property (Proposition \ref{doubling property}), the uniform bound $H_R(r_2)\leq C r_2^{2d}$ holds for every $r_2>r$. Thus, a diagonal procedure yields existence of a nontrivial limit solution of the differential system, defined on the whole of $\mathbb C$. It is worthwhile noticing that this solution inherits all the symmetries of the approximating solutions together with the upper bound on the Almgren quotient. Finally, from Corollaries \ref{existencelimitH} and \ref{existencelimitE} we infer \begin{equation}\label{right growth rate} \lim_{r\to +\infty}\dfrac{H(r)}{r^{2d}}=\lim_{r\to +\infty}\dfrac{1}{N(r)}\,\dfrac{E(r)}{r^{2d}}=\dfrac bd\in(0,+\infty)\;. \end{equation} Let us now show that $H_R (r)$ is uniformly bounded with respect to $R$ for fixed $r$. We argue by contradiction and assume that, for a sequence $R_n\to+\infty$, there holds \begin{equation}\label{eq:Hunbounded} \lim_{n\to+\infty}H_{R_n}(r)=+\infty\;. \end{equation} Denote $u_{i,n}=u_{i,R_n}$, and let $H_n$, $E_n$, $N_n$ be the corresponding functions. Note that, as $E_n$ is bounded, we must have $N_n(r)\to 0$. For each $n$, let $\lambda_n\in(0,r)$ be such that \[\lambda^2_nH_n(\lambda_n)=1\] (such $\lambda_n$ exist precisely because of \eqref{eq:Hunbounded}), and scale \[\tilde u_{i,n}(z)=\lambda_n u_{i,n}(\lambda_n z)\;, \quad |z|<R_n/\lambda_n\;.\] Note that the $(\tilde u_{i,n})_i$ still solve system \eqref{eq:system} on the disk $B(0,R_n/\lambda_n)$ and enjoy all the symmetries \eqref{eqn2_in}. Let us denote by $\tilde H_n$, $\tilde E_n$, $\tilde N_n$ the corresponding quantities. We have \[\begin{aligned} \tilde H_n(1)&=\lambda^2_nH_n(\lambda_n)=1, \\ \tilde E_n(1)&=\lambda^2_nE_n(\lambda_n) \to 0, \\ \tilde N_n(1)&=N_n(\lambda_n)\to 0. \end{aligned}\] In addition there holds $\tilde N_n(s)\leq d$ for $s<R_n/\lambda_n$. By the compactness argument exposed above, we can extract a subsequence converging in the compact-open topology of $\mathcal C^2$ to a nontrivial symmetric solution of \eqref{eq:system} with identically vanishing Almgren quotient. Thus such a solution would have to be a nonzero constant in each component, but constant solutions are not compatible with the system \eqref{eq:system}. \end{proof} \section{Asymptotics at infinity} \numberwithin{equation}{section} \setcounter{equation}{0} We now come to the proof of Theorem \ref{thm asymptotics at infinity}. Note that by Proposition \ref{doubling property}, the condition on $N(r)$ implies that $u$ and $v$ have polynomial growth. (In fact, with more effort one can show that the converse also holds: if $u$ and $v$ have polynomial growth, then $N(r)$ approaches a positive integer as $r\to +\infty$. We omit the proof.)
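For instance, in dimension $n=2$ this growth can be quantified directly from the doubling property \eqref{eq:h_monotone} and the continuity of $H$: letting $r_1\to 1^+$ there,
\[
\int_{\partial B_r(0)}u^2+v^2 \;=\; r\,H(r)\;\leq\; e^{d}H(1)\,r^{2d+1}\qquad\text{for every } r\geq 1,
\]
so the circle averages of $u^2+v^2$ grow at most like $r^{2d}$.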
Recall that the blow-down sequence is defined by $$(u_R(x), v_R(x)):=(\frac{1}{L(R)}u(Rx),\frac{1}{L(R)}v(Rx)),$$ where $L(R)$ is chosen so that \begin{equation}\label{normalization condition} \int_{\partial B_1(0)}u_R^2+v_R^2=\int_{\partial B_1(0)}\Phi^2. \end{equation} For the solutions in Theorem \ref{main result}, by \eqref{right growth rate}, we have \begin{equation}\label{eq:L(R)} L(R)\sim R^d. \end{equation} \par We will now analyze the limit of $(u_R,v_R)$ as $R\rightarrow+\infty$. \par Since $N(r)\leq d$ for any $r\in(0,+\infty)$, $(u, v)$ satisfies Proposition \ref{doubling property} for any $r\in(1,+\infty)$. After rescaling, we see that Proposition \ref{doubling property} holds for $(u_R,v_R)$ as well. Hence, there exists a constant $C>0$ such that for any $R$ and $r\in(1,+\infty)$, \begin{equation}\label{4.3} \int_{\partial B_r(0)}u_R^2+v_R^2\leq C e^{d}r^d. \end{equation} Next, $(u_R,v_R)$ satisfies the system \begin{equation}\label{4.4} \left\{ \begin{aligned} &\Delta u_R=L(R)^2R^2u_Rv_R^2,\\ &\Delta v_R=L(R)^2R^2v_Ru_R^2,\\ &u_R,v_R>0~~\mbox{in}~~\mathbb{R}^2. \end{aligned} \right. \end{equation} Here we need to observe that, by \eqref{eq:L(R)}, $$\lim\limits_{R\rightarrow+\infty}L(R)^2R^2=+\infty.$$ By \eqref{4.3}, as $R\rightarrow+\infty$, $u_R$ and $v_R$ are uniformly bounded on any compact set of $\mathbb{R}^2$. Then by the main results in \cite{DWZ2011}, \cite{NTTV} and \cite{TT2011}, there is a harmonic function $\Psi$ defined in $\mathbb{R}^2$ such that (a subsequence of) $(u_R,v_R)\rightarrow(\Psi^+,\Psi^-)$ in $H^1$ and in H\"older spaces on any compact set of $\mathbb{R}^2$. By \eqref{normalization condition}, $$\int_{\partial B_1(0)}\Psi^2=\int_{\partial B_1(0)}\Phi^2,$$ so $\Psi$ is nonzero. Because $L(R)\rightarrow+\infty$, $u_R(0)$ and $v_R(0)$ go to $0$; hence \begin{equation}\label{4.6} \Psi(0)=0. \end{equation} \par After rescaling in Proposition \ref{monotonocity}, we obtain a corresponding monotonicity formula for $(u_R,v_R)$: $$N(r;u_R,v_R):=\frac{r\int_{B_r(0)}|\nabla u_R|^2+|\nabla v_R|^2+L(R)^2R^2u_R^2v_R^2} {\int_{\partial B_r(0)}u_R^2+v_R^2}=N(Rr)$$ is nondecreasing in $r$. By (4) in Theorem \ref{main result} and from Corollary \ref{existencelimitE}, \begin{equation}\label{4.7} N(r;u_R,v_R)\leq d=\lim_{r\to+\infty} N(r;u_R,v_R)\;, \quad \forall\; r\in(0,+\infty). \end{equation} In \cite{DWZ2011}, it is also proved that $(u_R,v_R)\rightarrow(\Psi^+,\Psi^-)$ in $H^1_{loc}$ and, for any $r<+\infty$, $$\lim\limits_{R\rightarrow+\infty}\int_{B_r(0)}L(R)^2R^2u_R^2v_R^2=0.$$ After letting $R\rightarrow+\infty$ in \eqref{4.7}, we get \begin{equation}\label{convergence of degree} N(r;\Psi):=\frac{r\int_{B_r(0)}|\nabla \Psi|^2} {\int_{\partial B_r(0)}\Psi^2}=\lim\limits_{R\rightarrow+\infty}N(r;u_R,v_R)=\lim\limits_{R\rightarrow+\infty}N(Rr)= d. \end{equation} In particular, $N(r;\Psi)$ is a constant for all $r\in(0,+\infty)$. So $\Psi$ is a homogeneous polynomial of degree $d$. Actually the number $d$ is the vanishing order of $\Psi$ at $0$, which must therefore be a positive integer. Now it remains to prove that $\Psi\equiv\Phi$: this is easily done by exploiting the symmetry conditions on $\Psi$ (point $(3)$ of Theorem \ref{main result}). \noindent {\bf Acknowledgment.} Part of this work was carried out while Henri Berestycki was visiting the Department of Mathematics at the University of Chicago. He was supported by an NSF FRG grant DMS-1065979 and by the French ``Agence Nationale de la Recherche'' within the project PREFERED (ANR 08-BLAN-0313).
Juncheng Wei was supported by a GRF grant from RGC of Hong Kong. Susanna Terracini was partially supported by the Italian PRIN2009 grant ``Critical Point Theory and Perturbative Methods for Nonlinear Differential Equations". Kelei Wang was supported by the Australian Research Council. \addcontentsline{toc}{section}{References} \end{document}
\begin{document} \noindent {\em Acta Sci. Math. (Szeged)} (2021) {\bf 87}, 367 -- 379\\ doi: 10.14232/actasm-020-558-7 \\ {\em arXiv version}: layout, fonts, pagination and numbering of lemmas, theorems and formulas may differ from those in the ACTASM published paper; moreover, references [12] and [13] in the published paper were printed with errors which have been corrected in this version (the author was not responsible for those errors -- according to one of the Associate Editors of Acta Sci. Math. (Szeged), they were caused by a mistake during their copy-editing process), and page numbers in reference [11] have also been corrected in this version. \\ \title[On lattice isomorphisms of orthodox semigroups]{On lattice isomorphisms of orthodox semigroups} \author[Simon M. Goberstein]{Simon M. Goberstein}\thanks{ Department of Mathematics and Statistics, California State University, Chico, CA 95929-0525, U.S.A.\\ \indent \;\,e-mail: [email protected]} \begin{abstract} Two semigroups are lattice isomorphic if the lattices of their subsemigroups are isomorphic, and a class of semigroups is lattice closed if it contains every semigroup which is lattice isomorphic to some semigroup from that class. An orthodox semigroup is a regular semigroup whose idempotents form a subsemigroup. We prove that the class of all orthodox semigroups in which every nonidempotent element has infinite order is lattice closed. \\ \noindent 2020 Mathematics Subject Classification: primary 20M15, 20M18, 20M19; secondary 08A30. \end{abstract} \maketitle \font\caps=cmcsc10 scaled \magstep1 \def\normalsize\caps{\normalsize\caps} \section{Introduction} Let $S$ be a semigroup. The set of all subsemigroups of $S$ (including, by convention, the empty one) is a lattice under set-theoretic inclusion, and the relationship between the properties of this lattice and the properties of $S$ has been studied in numerous publications for over 60 years. We say that two semigroups are lattice isomorphic whenever their subsemigroup lattices are isomorphic. If $S$ is isomorphic or antiisomorphic to any semigroup which is lattice isomorphic to $S$, then $S$ is called lattice determined in the class of all semigroups, and if for each isomorphism $\Phi$ of the subsemigroup lattice of $S$ onto that of a semigroup $T$, there exists an isomorphism or an antiisomorphism $\varphi$ of $S$ onto $T$ such that $U\Phi=U\varphi$ for every subsemigroup $U$ of $S$, then $S$ is termed strongly lattice determined. A class of semigroups is said to be lattice closed (in the class of all semigroups) if it contains every semigroup which is lattice isomorphic to some semigroup from that class. Of course, a lattice closed class of semigroups may contain members which are not lattice determined. \\ \indent Finding lattice closed classes of semigroups and identifying lattice determined semigroups in such classes are among the most important problems in this area of research (see \cite{key20}). It is well known that the class of all groups is not lattice closed. However, the classes of all torsion-free groups and of all completely simple semigroups, which are neither groups nor left or right zero semigroups, are lattice closed \cite[\S\S\,34,\,38]{key20}, and it is also easily seen (and well known) that the class of bands is lattice closed. All semigroups mentioned in the preceding sentence are regular (in the sense of von Neumann for rings). 
A regular semigroup is orthodox if the set of its idempotents is a subsemigroup, and orthodox semigroups form one of the most important classes of regular semigroups. Adopting for semigroups the standard group-theoretic term, we will say that a semigroup is torsion-free if all of its nonidempotent elements have infinite order. Note that according to Lemma \ref{205} in Section 2, a monogenic inverse semigroup whose generators have infinite order is torsion-free. The principal goal of this paper is to prove that the class of all torsion-free orthodox semigroups is lattice closed (see Theorem \ref{303} in Section 3). An important role in that proof is played by the author's theorem stating that every bisimple orthodox semigroup generated by a pair of mutually inverse elements of infinite order is strongly lattice determined \cite[Theorem 4.1]{key6}. Using Theorem \ref{303} (and its proof) together with Lemma \ref{205}, we also show that the class of all monogenic orthodox semigroups whose generators have infinite order is lattice closed and every semigroup in that class is torsion-free (Proposition \ref{308}). In Section 2, we establish a few auxiliary results and, for the reader's convenience, review some basic semigroup-theoretic concepts and include preliminaries on lattice isomorphisms of semigroups. \\ \indent We use \cite{key1} and \cite{key11} as standard references for the algebraic theory of semigroups and, in general, follow the terminology and notation of these monographs. We also refer to \cite{key20} for information about lattice isomorphisms of semigroups and to \cite{key14} for a detailed description of the structure of monogenic inverse semigroups. \section{Preliminaries} We use the term ``order'' instead of ``partial order'' and refer to a linearly ordered set as a {\em chain}. In this paper, $\mathbb{N}$ stands for the set of all natural numbers (= positive integers), and $\mathbb{N}_0\,(=\mathbb{N}\cup\{0\})$ for the set of all nonnegative integers. Let $S$ be a semigroup. The equality (or ``diagonal'') relation on $S$ will be denoted by $\Delta_S$. We say that $x\in S$ is a {\em group element} of $S$ if it belongs to some subgroup of $S$; otherwise $x$ is a {\em nongroup element} of $S$. The set of nongroup elements of $S$ will be denoted by $N_S$, and the set of idempotents of $S$ by $E_S$. To indicate that $U$ is a subsemigroup of $S$, we write $U\leq S$. Under the convention that $\emptyset\leq S$, the set of all subsemigroups of $S$ ordered by inclusion is a lattice which we denote by $\operatorname{Sub}(S)$. If $w=w(x_1,\ldots,x_n)$ is a word in the alphabet $\{x_1,\ldots,x_n\}\subseteq S$, we will say shortly that $w$ is a word in $x_1,\ldots, x_n$, and if no confusion is likely, $w$ will be identified with its value in $S$. As usual, $\langle X\rangle$ stands for the subsemigroup of $S$ generated by $X\subseteq S$, and if $X=\{x_1,\ldots,x_n\}$ is a finite subset of $S$, then $\langle X\rangle$ is written as $\langle x_1,\ldots, x_n\rangle$. Let $x\in S$. Then $\langle x\rangle$ is the {\em cyclic} subsemigroup of $S$ generated by $x$. If $\langle x\rangle$ is a finite semigroup, the number of its elements is called the {\em order} of $x$ and denoted by $o(x)$. If $\langle x\rangle$ is infinite, we write $o(x)=\infty$ and say that $x$ has {\em infinite order}. If every nonidempotent element of $S$ has infinite order, we call $S$ a {\em torsion-free} semigroup. 
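For example, if $x^5=x^3$ while $x, x^2, x^3, x^4$ are pairwise distinct, then $\langle x\rangle=\{x, x^2, x^3, x^4\}$, so $o(x)=4$; on the other hand, $o(x)=\infty$ means precisely that all powers $x, x^2, x^3,\ldots$ are pairwise distinct.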
\\ \indent If $S$ and $T$ are semigroups such that $\operatorname{Sub}(S)\cong \operatorname{Sub}(T)$, then $S$ and $T$ are called {\em lattice isomorphic}, and any isomorphism of $\operatorname{Sub}(S)$ onto $\operatorname{Sub}(T)$ is referred to as a {\em lattice isomorphism} of $S$ onto $T$. If $\Phi$ is a lattice isomorphism of $S$ onto $T$, then $\Phi$ is said to be {\em induced} by a mapping $\varphi \colon S\rightarrow T$ if $U\Phi=U\varphi$ for all $U\leq S$. If $S$ is isomorphic or antiisomorphic to any semigroup that is lattice isomorphic to it, then $S$ is called {\em lattice determined}, and if each lattice isomorphism of $S$ onto a semigroup $T$ is induced by an isomorphism or an antiisomorphism of $S$ onto $T$, we say that $S$ is {\em strongly lattice determined}. A class $\mathcal{K}$ of semigroups is said to be {\em lattice closed} (in the class of all semigroups) if $\mathcal{K}$ contains every semigroup which is lattice isomorphic to some semigroup that belongs to $\mathcal{K}$. It is easily seen (and well known) that bands, in general, are not lattice determined -- for instance, any left or right zero semigroup $S$ is clearly lattice isomorphic to a chain (that is, a semilattice with a linear natural order) having the same cardinality as $S$. On the other hand, as an immediate corollary of \cite[Proposition 1]{key16} (reproduced as \cite[Proposition 3.2(a)]{key20}) we have: \begin{result}\label{201} {\rm (From \cite[Proposition 1]{key16} or \cite[Proposition 3.2(a)]{key20})} A semigroup which is lattice isomorphic to a band is itself a band, that is, the class of all bands is lattice closed. \end{result} For convenience of reference, we also record here the following two results (which imply, in particular, that the class of torsion-free semigroups is lattice closed). \begin{result}\label{202}{\rm \cite[Lemma 34.8]{key20}} The infinite cyclic group is strongly lattice determined. \end{result} \begin{result}\label{203}{\rm (From \cite[Subsections 31.1--31.5]{key20})} Let $S$ and $T$ be torsion-free semigroups, and let $\Phi$ be a lattice isomorphism of $S$ onto $T$. Then $\Phi$ is induced by a unique bijection $\varphi \colon S\rightarrow T$ defined by the formula $\langle x\rangle\Phi=\langle x\varphi\rangle$ for all $x\in S$, and $(x^n)\varphi=(x\varphi)^n$ for all $x\in S$ and $n\in\mathbb{N}$. Thus the infinite cyclic semigroup is strongly lattice determined. \end{result} If $S$ and $T$ are torsion-free semigroups and $\Phi$ is a lattice isomorphism of $S$ onto $T$, the bijection $\varphi \colon S\rightarrow T$ in Result \ref{203} will be called the {\em $\Phi$-associated bijection} of $S$ onto $T$. \\ \indent Let $S$ be a semigroup. An element $x\in S$ is {\em regular} if there is $x'\in S$ such that $xx'x=x$, and if all elements of $S$ are regular, $S$ is a {\em regular semigroup}. A $\mathcal{D}$-class $D$ of a regular semigroup $S$ is said to be {\em combinatorial} if all subgroups of $S$ contained in $D$ are trivial (or, equivalently, if $H_x=\{x\}$ for all $x\in D$), and $S$ itself is {\em combinatorial} if all $\mathcal{D}$-classes of $S$ are combinatorial, so $S$ is combinatorial precisely when $\mathcal{H}=\Delta_S$. If $x, x'\in S$ satisfy $xx'x=x$, then by setting $y=x'xx'$, we have $xyx=x$ and $yxy=y$, in which case, $x$ and $y$ are said to be mutually {\em inverse}. 
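Indeed, using $xx'x=x$ and $y=x'xx'$, one checks directly that $xyx=x(x'xx')x=(xx'x)x'x=xx'x=x$ and $yxy=(x'xx')x(x'xx')=x'(xx'x)x'xx'=x'xx'xx'=x'(xx'x)x'=x'xx'=y$.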
As in \cite{key5}, we indicate that $x$ and $y$ are mutually inverse elements of $S$ by writing $x\perp y$, and the phrase ``$x\perp y$ in $S$'' will always mean that $x,y\in S$ and $x\perp y$. We also denote by $V_S(x)$ the set of all inverses of $x\in S$, so $y\in V_S(x)$ if and only if $x\perp y$ in $S$. In general, a regular element of $S$ may have more than one inverse but if each $x\in S$ has a {\em unique} inverse (usually denoted by $x^{-1}$), then $S$ is an {\em inverse semigroup}. If $S$ is an inverse semigroup and $x\in S$, the inverse subsemigroup $\langle x, x^{-1}\rangle$ of $S$ is called {\em monogenic}, and if $S=\langle x, x^{-1}\rangle$ then $S$ is a {\em monogenic inverse semigroup}. \\ \indent The {\em bicyclic semigroup} $\mathcal{B}(a,b)$ is often defined as a semigroup with identity $1$ generated by the two-element set $\{a,b\}$ and given by one defining relation $ab=1$ \cite[\S 1.12]{key1}. Of course, $\mathcal{B}(a,b)$ can also be defined without mentioning the identity as a semigroup given by the following presentation: $\mathcal{B}(a,b)=\langle a, b\;|\;aba=a, bab=b, a^2b=a, ab^2=b\rangle$. It is well known that $\mathcal{B}(a,b)$ is a combinatorial bisimple inverse semigroup with $b=a^{-1}$, each of its elements has a unique representation in the form $b^ma^n$, where $m$ and $n$ are nonnegative integers (and $a^0=b^0=ab$), and the semilattice of idempotents of $\mathcal{B}(a,b)$ is a chain: $ab>ba>b^2a^2>\cdots$. Observe that if $S=\langle x, x^{-1}\rangle$ is a monogenic inverse semigroup such that $xx^{-1}>x^{-1}x$, then $S=\mathcal{B}(x,x^{-1})$, and if $x^{-1}x>xx^{-1}$, then $S=\mathcal{B}(x^{-1},x)$. Let $\mathcal{C}=\{(m,n)\colon m, n\in\mathbb{N}_0\}$ with multiplication defined as follows: $(m,n)(p,q)=(m+p-r, n+q-r)$ where $r=\min\{n,p\}$. Then $\mathcal{C}$ is a semigroup and the mapping $(m,n)\to b^ma^n$ is an isomorphism of $\mathcal{C}$ onto $\mathcal{B}(a,b)$ (see \cite[\S 1.12]{key1}). Therefore $\mathcal{C}$ can be viewed as another copy of the bicyclic semigroup (in \cite{key14}, the bicyclic semigroup was, in fact, defined as the semigroup $\mathcal{C}$ described above). \\ \indent A comprehensive description of the structure of monogenic inverse semigroups is given in \cite[Chapter IX]{key14}. We will recall only a few basic facts about them (see \cite{key14} for more details). Note, in particular, that if $S$ is a monogenic inverse semigroup, then $\mathcal{D}=\mathcal{J}$, so the order relation on the set of $\mathcal{J}$-classes of $S$ is actually an order relation on the set of its $\mathcal{D}$-classes. Let $S=\langle x, x^{-1}\rangle$ be a monogenic inverse semigroup. If $xx^{-1}=x^{-1}x$, then $S$ is a cyclic group. As noted in the preceding paragraph, if $xx^{-1}<x^{-1}x$ or $x^{-1}x<xx^{-1}$, then $S$ is a bicyclic semigroup. Assume that $S$ is neither a cyclic group nor a bicyclic semigroup (that is, $xx^{-1}$ and $x^{-1}x$ are incomparable with respect to the natural order on $E_S$). Then the set of $\mathcal{D}$-classes of $S$ is a nontrivial chain with the largest element $D_x$. Moreover, one of the following holds: (i) $S$ is a free monogenic inverse semigroup, or (ii) $S$ has a smallest proper ideal $K$ (the {\em kernel} of $S$), which is either a bicyclic semigroup or a cyclic group. In case (i), all $\mathcal{D}$-classes of $S$ are combinatorial and form an infinite chain: $D_x>D_{x^2}>\cdots>D_{x^m}>\cdots$, where the $\mathcal{D}$-class $D_{x^m}$ consists of $(m+1)^2$ elements for each $m\in\mathbb{N}$. 
In case (ii), there is $n\in\mathbb{N}$ such that the $\mathcal{D}$-classes of $S$ form a finite chain: $D_x>D_{x^2}>\cdots>D_{x^n}>D_{x^{n+1}}=K$, where for each $1\leq m\leq n$, the $\mathcal{D}$-class $D_{x^m}$ is combinatorial and consists of $(m+1)^2$ elements. If $S$ is either a cyclic group or a bicyclic semigroup, it is convenient (and common) to assume that $S$ itself is the kernel of $S$, and with this assumption in mind we can state that if $S$ is an arbitrary monogenic inverse semigroup, then $S$ is not combinatorial if and only if it has a kernel which is a nontrivial cyclic group (and is the only nontrivial $\mathcal{H}$-class of $S$). \\ \indent If $S=\langle x, x^{-1}\rangle$ is a free monogenic inverse semigroup, each $s\in S$ can be uniquely written in the form $s=x^{-p}x^qx^{-r}$ where $q>0$ and $0\leq p, r\leq q$, and $x^{-p}x^qx^{-r}\in E_S$ if and only if $p+r=q$; this copy of the free monogenic inverse semigroup was denoted in \cite{key14} by $C_3$. If the set $\{((m,n), (p,q))\in \mathcal{C}\times \mathcal{C}\colon m+p=n+q>0\}$ is equipped with multiplication of the direct square $\mathcal{C}\times\mathcal{C}$ of the bicyclic semigroup $\mathcal{C}$, one obtains another copy of the free monogenic inverse semigroup, denoted in \cite{key14} by $C_2$. Clearly, $((m,n), (p,q))\in E_{C_2}$ if and only if $m=n$ and $p=q$. By setting $x=((1,0),(0,1))$ in $C_2$, it is easily shown that $C_2$ is isomorphic to $C_3=\langle x, x^{-1}\rangle$ (and a few other isomorphic copies of the free monogenic inverse semigroup were described in \cite{key14}). We are going to use several results about monogenic inverse semigroups expressed in \cite{key14} in terms of $C_2$ and its congruences, and for the reader's convenience we will recall some relevant definitions and notations from \cite{key14}. \\ \indent Let $S=\{((m,n), (p,q))\in \mathcal{C}\times \mathcal{C}\colon m+p=n+q>0\}$ with multiplication of the direct product $\mathcal{C}\times\mathcal{C}$. Let $\rho$ be a congruence on $S$. The least $n\in\mathbb{N}$ such that $((n,n),(0,0))$ and $((n+1,n+1),(0,0))$ are $\rho$-related is denoted by $l(\rho)$; if such $n$ does not exist, put $l(\rho)=\infty$. Similarly, $r(\rho)$ is the least natural number $n$ such that $((0,0),(n,n))$ and $((0,0),(n+1,n+1))$ are $\rho$-related, and if no such $n$ exists, then $r(\rho)=\infty$. By \cite[Lemma IX.2.6]{key14}, if $l(\rho)<\infty$ and $r(\rho)<\infty$, then $l(\rho)=r(\rho)$. Let $k\in\mathbb{N}$. As above, set $x=((1,0),(0,1))$. If $l(\rho)=r(\rho)=k$ and $\rho|_{\langle x\rangle}$ has infinitely many classes, then $\rho$ is said to have {\em type} $(k,\omega)$. If $l(\rho)=\infty$ and $r(\rho)=k$, the {\em type} of $\rho$ is defined to be $(k,\infty^+)$, and if $l(\rho)=k$ and $r(\rho)=\infty$, the {\em type} of $\rho$ is $(k,\infty^-)$. The {\em weight} of an element $u=((m,n),(p,q))$ of $S$ is the number $w(u)=m+p$. 
Now for all $u=((m,n),(p,q)), u'=((m'\!,n'),(p'\!,q'))\in S$, define \\ \[\;u\rho_{(k,\,\omega)}u' \;\,\Longleftrightarrow u=u'\text{ or }[w(u), w(u')\geq k\text{ and }m-n=m'-n'],\] \[u\rho_{(k,\,\infty^{+})}u' \!\Longleftrightarrow u=u'\text{ or }[w(u), w(u')\geq k\text{ and }(m,n)=(m'\!,n')],\] \[\!\!\!\!\!u\rho_{(k,\,\infty^{-})}u' \!\Longleftrightarrow u=u'\text{ or }[w(u), w(u')\geq k\text{ and }(p,q)=(p'\!,q')].\] \\ Then $\rho_{(k,\,\omega)}$ is the unique congruence on $S$ of type $(k,\,\omega)$ \cite[Proposition IX.2.11]{key14}, and $\rho_{(k,\,\infty^+)}$ is the unique congruence on $S$ of type $(k,\,\infty^+)$ \cite[Proposition IX.2.10]{key14}; of course, by symmetry, $\rho_{(k,\,\infty^-)}$ is the unique congruence on $S$ of type $(k,\,\infty^-)$. \\ \indent Let $Q$ be a semigroup with zero $0$. Denote $Q^{\ast}=Q\setminus\{0\}$. Let $T$ be a semigroup disjoint from $Q$, and let $\varphi\colon Q^{\ast}\to T$ be a partial homomorphism. For all $a,b\in T\cup Q^{\ast}$ define $a\circ b$ as follows: $a\circ b=a(b\varphi)$ if $a\in T$ and $b\in Q^{\ast}$; $a\circ b=(a\varphi)b$ if $a\in Q^{\ast}$ and $b\in T$; $a\circ b=(a\varphi)(b\varphi)$ if $a,b\in Q^{\ast}$ and $ab=0$ in $Q$; $a\circ b=ab$ if either $a,b\in T$ or $a,b\in Q^{\ast}$ and $ab\ne 0$ in $Q$. Then $T\cup Q^{\ast}$ is a semigroup called a {\em retract ideal extension} (or simply an {\em ideal extension}) of $T$ by $Q$ whose operation $\circ$ is {\em determined by the partial homomorphism $\varphi$} (see \cite[Section I.9]{key14}). \\ \indent Consider again $S=\{((m,n), (p,q))\in \mathcal{C}\times \mathcal{C}\colon m+p=n+q>0\}$ with multiplication of the direct product $\mathcal{C}\times\mathcal{C}$, so $S$ is the free monogenic inverse semigroup $C_2$. For any $n\in\mathbb{N}$, denote by $I_n$ the set $\{u\in C_2\colon w(u)\geq n\}$. Then $I_n$ is an ideal of $C_2$, and we let $M_n=C_2/I_n$. In the notation of this paragraph, we can state the following result. \begin{result}\label{204}{\rm (From \cite[Theorem IX.3.4 (i), (ii)]{key14})} Let $k\in\mathbb{N}$. Then \\ \indent {\rm(i)} $C_2/\rho_{(k,\,\infty^+)}$ {\rm[respectively, }$C_2/\rho_{(k,\,\infty^-)}${\rm]} is an ideal extension of the bicyclic semigroup $\mathcal{C}$ by $M_k$ with the operation in $\mathcal{C}\cup M^{\ast}_k$ determined by the partial homomorphism $\theta: M^{\ast}_k\to\mathcal{C}$ where $\theta: ((m,n), (p,q))\mapsto (m,n)\;{\rm[respectively, }\;\theta: ((m,n), (p,q))\mapsto (p,q){\rm]}${\rm;} \\ \indent {\rm(ii)} $C_2/\rho_{(k,\,\omega)}$ is an ideal extension of the cyclic group $\mathbb{Z}$ by $M_k$ with the operation in $\mathbb{Z}\cup M^{\ast}_k$ determined by the partial homomorphism $\eta: ((m,n), (p,q))\mapsto m-n$ of $M^{\ast}_k$ to $\mathbb{Z}$. \end{result} Of course, in Case (ii) of Result \ref{204}, instead of $\mathbb{Z}$ one can use a multiplicative infinite cyclic group $G=\langle g\rangle$, so $C_2/\rho_{(k,\,\omega)}$ will be an ideal extension of $G$ by $M_k$ with the operation in $G\cup M^{\ast}_k$ determined by the partial homomorphism $\eta: ((m,n), (p,q))\mapsto g^{m-n}$ of $M^{\ast}_k$ to $G$. Take any nonidempotent element $((m,n), (p,q))$ of $M^{\ast}_k$. According to Result \ref{204}, $((m,n), (p,q))\theta\notin E_{\mathcal{C}}$ in both situations considered in Case (i), and if in Case (ii) instead of $\mathbb{Z}$ we use a multiplicative infinite cyclic group $G=\langle g\rangle$ with the identity $1_G$, then $((m,n), (p,q))\eta\ne 1_G$. 
\begin{lemma}\label{205} Let $S=\langle x, x^{-1}\rangle$ be a monogenic inverse semigroup such that $o(x)=\infty$. Then every nonidempotent element of $S$ has infinite order, that is, $S$ is torsion-free. \end{lemma} {\bf Proof.} If $S$ is a free monogenic inverse semigroup, \cite[Lemma IX.3.8]{key14} shows that every nonidempotent element of $S$ has infinite order (this also follows from \cite[Exercise IX.2.14(iii)]{key14}, which asserts that $\langle s, s^{-1}\rangle$ is a free monogenic inverse subsemigroup of $S$ for every $s\in S\setminus E_S$). Suppose that $S$ is not free. Then $S$ has a kernel $K$ (perhaps coinciding with $S$), which is either a bicyclic semigroup or a cyclic group (the latter must be infinite since $o(x)=\infty$). \\ \indent Take any $s\in S\setminus E_S$. If $s\in K$, it is immediate that $o(s)=\infty$. Now assume that $K$ is a proper ideal of $S$ and $s\notin K$. Thus there is $k\geq 2$ such that $D_x>\cdots>D_{x^{k-1}}>D_{x^k}=K$, and $s\in D_x\cup\cdots\cup D_{x^{k-1}}$. In the notation of the paragraph preceding Result \ref{204}, $K=I_k$ and $D_x\cup\cdots\cup D_{x^{k-1}}=M^{\ast}_k$. Moreover, if $K$ is an infinite cyclic group, then $S=C_2/\rho_{(k,\,\omega)}$, and if $K$ is a bicyclic semigroup, then $S=C_2/\rho_{(k,\,\infty^+)}$ or $S=C_2/\rho_{(k,\,\infty^-)}$. Since all nonidempotent elements of a free monogenic inverse semigroup have infinite order, $\langle s\rangle\nsubseteq M^{\ast}_k$ and hence $\langle s\rangle\cap K\ne\emptyset$. Let $l$ be the smallest natural number such that $s^l\in K$. Then $s^{l-1}\in M^{\ast}_k$. Using Result \ref{204}, the remark that follows it, and the formulas for the powers of a nonidempotent element $((m,n), (p,q))$ of $C_2$ given in \cite[Lemma IX.3.8]{key14}, one can deduce that if $K$ is a bicyclic semigroup, then $s^l=(s\theta)(s^{l-1}\theta)\notin E_S$, and if $K$ is an infinite cyclic group, then $s^l=(s\eta)(s^{l-1}\eta)\notin E_S$. We have shown that in both cases $s^l\in K\setminus E_S$. If $o(s)<\infty$, the cyclic semigroup $\langle s\rangle$ is finite and since $s^l\in\langle s\rangle$, we would have $o(s^l)<\infty$, contradicting the fact that all nonidempotent elements of $K$ have infinite order. Therefore $o(s)=\infty$. The proof is complete. \hspace{\fill}$\Box$ \\ \indent An {\em orthodox semigroup} is a regular semigroup in which the idempotents form a subsemigroup. By \cite[Theorem VI.1.1]{key11}, if $S$ is a regular semigroup, the following conditions are equivalent: (a) $S$ is orthodox; (b) $V_S(e)\subseteq E_S$ for all $e\in E_S$; (c) $V_S(b)V_S(a)\subseteq V_S(ab)$ for all $a,b\in S$. It follows that if $a\perp b$ in an orthodox semigroup $S$, then $a\in N_S$ if and only if $b\in N_S$, and $a^n\perp b^n$ for all $n\in\mathbb{N}$, which implies that $o(a)=o(b)$. According to the terminology introduced by the author in \cite{key5} (by analogy with the inverse semigroup case), a {\em monogenic orthodox semigroup} is an orthodox semigroup generated by a pair of mutually inverse elements. In what follows, the phrase ``let $A=\langle a,b\rangle$ be a monogenic orthodox semigroup'' will always mean that $A$ is an orthodox semigroup with $a\perp b$ in $A$. \\ \indent It was shown by Hall (see \cite[Theorem 2]{key9} or \cite[Theorem VI.1.10]{key11}) that a regular semigroup $S$ is orthodox if and only if for all $x, y\in S$, if $V_S(x)\cap V_S(y)\ne\emptyset$ then $V_S(x)= V_S(y)$. Let $S$ be an arbitrary orthodox semigroup. 
Since $E_S$ is a band, it is a semilattice $Y$ of rectangular bands $E_{\alpha}\,({\alpha}\in Y)$, that is, $E_S=\bigcup_{\alpha\in Y}E_{\alpha}$ where $Y$ is a semilattice and $E_{\alpha}\,(\alpha\in Y)$ are pairwise disjoint rectangular bands satisfying $E_{\alpha}E_{\beta}\subseteq E_{\alpha\beta}$ for all $\alpha,\beta\in Y$ (see \cite[\S 3]{key9} or \cite[Section VI.1]{key11}); as in \cite{key9} and \cite{key11}, we may denote $E_{\alpha}$ by $E(e)$ if $e\in E_{\alpha}$. Now let $\gamma_S=\{(x,y)\in S\times S\colon V_S(x)=V_S(y)\}$. If no confusion is likely, we will omit the subscript $S$ in $\gamma_S$ and in $V_S(x)$ for all $x\in S$. It was proved by Hall that $\gamma$ is the smallest inverse semigroup congruence on $S$ (see \cite[Theorem 3]{key9} or \cite[Theorem VI.1.12]{key11}), so $S/\gamma\,(=S\gamma^{\natural})$ is the maximum inverse semigroup homomorphic image of $S$. According to \cite[Remark 1]{key9}, $e\gamma=V(e)=E(e)$ for all $e\in E_S$. Thus $e\gamma$ is a rectangular band for every $e\in E_S$, the semilattices $E_{S/\gamma}$ and $Y$ are isomorphic, and $s\gamma^{\natural}\in E_{S/\gamma}$ if and only if $s\in E_S$ for all $s\in S$. It follows that for any $x\in S$, we have $o(x)=\infty$ if and only if $o(x\gamma)=\infty$. Therefore $S$ is torsion-free if and only if $S/\gamma$ is torsion-free. {\em The results reviewed in this paragraph will be used below without any further reference or explanation.} \\ \indent Let $S$ be an orthodox semigroup. Since $\gamma\cap\mathcal{H}=\Delta_S$ (see \cite[Chapter VI, formula (1.16)]{key11}), if $H$ is any $\mathcal{H}$-class of $S$, the restriction of $\gamma^{\natural}$ to $H$ is an injection. As noted in \cite{key3}, from this observation and the fact that a monogenic inverse semigroup contains a nontrivial $\mathcal{H}$-class if and only if it has a kernel which is a nontrivial cyclic group (and is the only nontrivial $\mathcal{H}$-class of that semigroup), one can deduce the following assertion: \setcounter{theorem}{5} \begin{result}\label{206}{\rm (From \cite[Proof of Theorem 2.1]{key3}} Let $A=\langle a, b\rangle$ be a monogenic orthodox semigroup. If $H$ is a nontrivial $\mathcal{H}$-class of $A$, then $A$ has a kernel $K$ and $H\subseteq K$. \end{result} Let us show that the assertion stated in Result \ref{206} can be made more precise. \begin{proposition}\label {207} Let $A=\langle a, b\rangle$ be a monogenic orthodox semigroup. Suppose that $H$ is a nontrivial $\mathcal{H}$-class of $A$. Then $H$ is a cyclic group isomorphic to the kernel of $A/\gamma$ and $A$ has a completely simple kernel $K$, which contains $H$ and is isomorphic to the direct product of $H$ and a rectangular band. \end{proposition} {\bf Proof.} As noted prior to Result \ref{206}, the restriction of $\gamma^{\natural}$ to $H$ is an injection from $H$ to the kernel of $A/\gamma$, which is a nontrivial cyclic group. Denote the kernel of $A/\gamma$ by $G$, and let $K=G\left(\gamma^{\natural}\right)^{-1}$. Then $K$ is an ideal of $A$ and $H\subseteq K$. Clearly, $E_K=\{e\in K\colon e\gamma^{\natural}=1_G\}$ where $1_G$ is the identity of $G$. If $e, f\in E_K$, then $e\gamma^{\natural}=f\gamma^{\natural}$, so $e\perp f$ and hence $(e, f)\in\mathcal{D}$. Therefore $K$ is a regular $\mathcal{D}$-class (= $\mathcal{J}$-class) of $A$, which shows that $K$ is the kernel of $A$. If $e$ is any element of $E_K$, then $E_K=\{f\in K\colon f\in e\gamma\}=e\gamma$. Thus $E_K$ is a rectangular band. 
By \cite[Corollary IV.3.5]{key13} (or \cite[Exercise III.12]{key11}), $K\cong H\times E_K$. Since each $e\in E_K$ is primitive, $K$ is completely simple. Clearly, the maximum inverse semigroup homomorphic image of $H\times E_K$ is isomorphic to $H$, and $K\gamma^{\natural}=G$, so $H\cong G$. The proof is complete. \hspace{\fill}$\Box$ \section{The class of torsion-free orthodox semigroups is lattice closed} It is well known that cyclic groups and the bicyclic semigroup are the only bisimple monogenic inverse semigroups. However, as shown in \cite{key5}, the class of bisimple monogenic orthodox semigroups is substantially more diverse. In particular, the author constructed in \cite{key5} a family of pairwise nonisomorphic bisimple orthodox semigroups $\mathcal{O}_{(\nu,\,\mu)}(a,b)$ indexed by ordered pairs $(\nu,\mu)\in\mathbb{N}^{\ast}\times\mathbb{N}^{\ast}$ (where $\mathbb{N}^{\ast}=\mathbb{N}\cup\{\infty\}$), each being generated by a pair of mutually inverse elements $a$ and $b$ satisfying $ab=a^2b^2$ and $ba\neq b^2a^2$, and proved that if $S$ is an arbitrary bisimple monogenic orthodox semigroup with nongroup generators, then $S$ or its dual is isomorphic to one of the semigroups of that two-parameter family. \begin{theorem}\label{301}{\rm (From \cite[Lemma 2.7 and Theorem 2.9]{key5})} Let $S$ be an orthodox semigroup, let $a$ be an arbitrary nongroup element of $S$, and let $b\in V(a)$. Then either $\{a, b, ab, ba\}$ is a $\mathcal{D}$-class of $\langle a, b\rangle$ such that $\langle a,b\rangle\setminus\{a, b, ab, ba\}$ is an ideal of $\langle a, b\rangle$, or $\langle a, b\rangle$ is a bisimple monogenic orthodox semigroup and $o(a)=o(b)=\infty$, in which case $\langle a, b\rangle$ or its dual is isomorphic to $\mathcal{O}_{(\nu,\,\mu)}(a,b)$ for some $\mu, \nu\in\mathbb{N}^{\ast}$. \end{theorem} As shown in \cite{key18} (or \cite[Theorem 41.8]{key20}), the bicyclic semigroup is strongly lattice determined. A much more general result was established in \cite{key6} where it was proved that {\em for all $\mu, \nu\in\mathbb{N}^{\ast}$, the semigroups $\mathcal{O}_{(\nu,\,\mu)}(a,b)$ are strongly lattice determined} (the bicyclic semigroup is just one member of that infinite two-parameter family -- namely, $\mathcal{O}_{(1, 1)}(a,b)$ is bicyclic); this is the main part of the following theorem, from which it is obtained when the generators $a$ and $b$ of a bisimple monogenic orthodox semigroup $S=\langle a, b\rangle$ are assumed to be nongroup. \begin{theorem}\label{302} \cite[Theorem 4.1]{key6} Let $S=\langle a,b\rangle$ be an arbitrary bisimple monogenic orthodox semigroup such that $a$ (and hence $b$) has infinite order. Then $S$ is strongly lattice determined. \end{theorem} We are ready to establish the main result of the paper: \begin{theorem}\label{303} Let $S$ be a torsion-free orthodox semigroup, and let $\Phi$ be a lattice isomorphism of $S$ onto a semigroup $T$. Then $T$ is also a torsion-free orthodox semigroup. Thus the class of all torsion-free orthodox semigroups is lattice closed. \end{theorem} {\bf Proof.} By Result \ref{201}, $E_T\ne\emptyset$ and $E_T$ is a subsemigroup of $T$ such that $E_T=E_S\Phi$. Since the class of torsion-free semigroups is lattice closed, we can apply Result \ref{203} and denote by $\varphi$ the $\Phi$-associated bijection of $S$ onto $T$. Group elements of $T$ are regular. To prove that $T$ is an orthodox semigroup, it remains to show that all nongroup elements of $T$ are regular. 
\\ \indent Take an arbitrary $x\in N_T$, and let $a=x\varphi^{-1}$. By Result \ref{202}, the infinite cyclic group is strongly lattice determined. Since $S$ is torsion-free, if $a$ were a group element of $S$, it would be contained in some infinite cyclic subgroup $G$ of $S$, so $x$ would belong to the infinite cyclic subgroup $G\varphi$ of $T$, contradicting the assumption that $x\in N_T$. Therefore $a\in N_S$. Choose an arbitrary $b\in V(a)$. Then $\langle a, b\rangle$ is a monogenic orthodox semigroup such that $a,b\in N_S$ and $o(a)=o(b)=\infty$. Let $y=b\varphi$. By Result \ref{203}, $o(x)=o(y)=\infty$. Note that \[\langle a, b\rangle\Phi=\left(\langle a\rangle\vee\langle b\rangle\right)\Phi=\langle a\rangle\Phi\vee\langle b\rangle\Phi=\langle x\rangle\vee\langle y\rangle=\langle x, y\rangle. \] \indent Suppose that $\langle a, b\rangle$ is a bisimple monogenic orthodox semigroup. Then, by Theorem \ref{302}, it is strongly lattice determined. Therefore $\langle x, y\rangle$ is also a bisimple monogenic orthodox semigroup which is isomorphic or antiisomorphic to $\langle a, b\rangle$. In particular, $x\perp y$ in $T$ and hence $x$ is a regular element of $T$. \\ \indent Now assume that $\langle a, b\rangle$ is not bisimple. Then, according to Theorem \ref{301}, $\{a, b, ab, ba\}$ is a $\mathcal{D}$-class of $\langle a, b\rangle$ such that $\langle a,b\rangle\setminus\{a, b, ab, ba\}$ is an ideal of $\langle a, b\rangle$. Let $I=\langle a,b\rangle\setminus\{a, b, ab, ba\}$ and $J=I\Phi$. Take an arbitrary $s\in\langle a, b\rangle$. Since $\langle s, I\rangle=\langle s\rangle\vee I$ and $\langle s\varphi, J\rangle=\langle s\varphi\rangle\vee J$, we conclude that \[ \langle s, I\rangle\Phi=\left(\langle s\rangle \vee I\right)\Phi=\langle s\rangle\Phi \vee I\Phi=\langle s\varphi \rangle \vee J=\langle s\varphi, J\rangle. \] If $s\in I$ then $\langle s, I\rangle = I=\{s\}\cup I$. If $s\in\{ab, ba\}$, it is also clear that $\langle s, I\rangle= \{s\}\cup I$. Finally, if $s\in\{a, b\}$ then again $\langle s, I\rangle= \{s\}\cup I$ because $s^k\in I$ for all $k\geq 2$. We have shown that $\langle s, I\rangle=\{s\}\cup I$, and hence $\langle s\varphi, J\rangle=\langle s, I\rangle\varphi=\left(\{s\}\cup I\right)\varphi=\{s\}\varphi\cup I\varphi=\{s\varphi\}\cup J$. Since $a, b\notin I$, we have $x, y\notin J$. Let $e=(ab)\varphi$ and $f=(ba)\varphi$. Then $e, f\in E_T$ and $e, f\notin J$ because $ab, ba\notin I$. Note that $\langle x, y\rangle=\langle a, b\rangle\Phi=\left(\{a, b, ab, ba\}\cup I\right)\varphi=\{a\varphi, b\varphi, (ab)\varphi, (ba)\varphi\}\cup I\varphi$. Therefore we have \setcounter{equation}{3} \begin{equation}\langle x, y\rangle=\{x, y, e, f\}\cup J\text{ and }\{x, y, e, f\}\cap J=\emptyset. \label{eq:1st} \end{equation} Since $\langle a\rangle\cap I=\langle a\rangle\setminus\{a\}$, it follows that \[ \langle x\rangle\cap J= \langle a\rangle\Phi\cap I\Phi=\left(\langle a\rangle\cap I\right)\Phi=\left(\langle a\rangle\setminus\{a\}\right)\Phi=\langle a\rangle\varphi\setminus\{a\}\varphi=\langle x\rangle\setminus\{x\}, \] which shows that $x^k\in J$ for all $k\geq 2$. By symmetry, we also have $y^k\in J$ for all $k\geq 2$. \setcounter{theorem}{4} \begin{lemma}\label{305} One of the following two statements is true: \\ \indent {\rm (i)} $e=xy$ and $f=yx$, or \\ \indent {\rm (ii)} $e=yx$ and $f=xy$. \end{lemma} {\bf Proof.} Since $e\in\langle x, y\rangle$, we can choose a shortest possible word $u$ in $x, y$ representing $e$.
Thus $e=u$ and no word in $x, y$ which is shorter than $u$ has value $e$. \\ \indent {\bf Case 1:} The first letter of $u$ is $x$. \\ \indent Since $ab\ne a$, it follows that $e\ne x$. Therefore $u=xv$ for some nonempty word $v$ in $x, y$, and $e\ne v$ because $v$ is shorter than $u$. Since $e=xv$, we conclude that $e\in\langle x, v\rangle=\langle x\rangle\vee\langle v\rangle$. Let $c=v\varphi^{-1}$. Then \[ab=e\varphi^{-1}\in\left(\langle x\rangle\vee\langle v\rangle\right)\Phi^{-1}=\langle x\rangle\Phi^{-1}\vee\langle v\rangle\Phi^{-1}=\langle x\varphi^{-1}\rangle\vee\langle v\varphi^{-1}\rangle=\langle a\rangle\vee\langle c\rangle=\langle a, c\rangle.\] If $c\in I$, then $ab\in\langle a, I\rangle=\{a\}\cup I$, which is not true. Hence $c\in\{a, b, ab, ba\}$. If $c=a$, then $v=a\varphi=x$, so that $e=x^2\in J$; a contradiction. Since $v\ne e$, it is clear that $c\ne ab$. If $c=ba$, then $ab\in\langle a, ba\rangle$, contradicting the easily verified fact that $\langle a, ba\rangle\setminus I=\{a, ba\}$. Since $c\notin\{a, ab, ba\}$, it follows that $c=b$, so $v=b\varphi=y$. Therefore $e=xy$. \\ \indent {\bf Case 2:} The first letter of $u$ is $y$. \\ \indent By a symmetric argument to that used in Case 1, we deduce that $e=yx$. \\ \indent Similarly to the above, starting with $f\in\langle x, y\rangle$, we prove that either $f=xy$, or $f=yx$. Since $e\ne f$, note that if $e=xy$ then $f\ne xy$ and hence $f=yx$, whereas if $e=yx$ then $f\ne yx$ and so $f=xy$. Therefore either (i) $e=xy$ and $f=yx$, or (ii) $e=yx$ and $f=xy$. This completes the proof of Lemma \ref{305}. \hspace{\fill}$\Box$ \\ \indent By Lemma \ref{305}, $xy$ and $yx$ are distinct idempotents of $T$ such that $\{xy, yx\}=\{e, f\}$. Therefore, using (\ref{eq:1st}), we conclude that \setcounter{equation}{5} \begin{equation}\langle x, y\rangle=\{x, y, xy, yx\}\cup J\text{ and }\{x, y, xy, yx\}\cap J=\emptyset. \label{eq:2nd} \end{equation} \noindent Our goal is to prove that $x$ is a regular element. In fact, we are going to show that $x\perp y$. \\ \indent Since $xyx\in\langle x, y\rangle$, according to (\ref{eq:2nd}), we have \begin{equation} xyx\in \{x, y, xy, yx\}\cup J. \label{eq:3rd} \end{equation} \noindent If $xyx\in J$, then $xy=(xy)^2=(xyx)y\in\langle y, J\rangle=\{y\}\cup J$; a contradiction because $xy\ne y$ and $xy\notin J$. Therefore $xyx\notin J$. If $xyx=y$, then $xy=(xyx)y=y^2\in J$, which is not true. Thus $xyx\ne y$. Assume that $xyx=xy$. Then $xy=(xyx)y=xy^2\in\langle x, J\rangle=\{x\}\cup J$; a contradiction since $xy\ne x$ and $xy\notin J$. Hence $xyx\ne xy$. By a dual argument, $xyx\ne yx$. We have shown that $xyx\notin\{y, xy, yx\}\cup J$. In view of (\ref{eq:3rd}), it follows that $xyx=x$, that is, $x$ is a regular element of $T$. By symmetry, we also have $yxy=y$, so $x\perp y$. The proof of Theorem \ref{303} is complete. \hspace{\fill}$\Box$ \\ {\bf Remark.} By \cite[Theorem]{key12} and comments after \cite[Result 4.5]{key4}, if $A\!=\!\langle a, a^{-1}\rangle$ is a monogenic inverse semigroup, which is neither bicyclic nor a group, and $\Psi$ is an isomorphism of the partial automorphism monoid of $A$ onto that of a semigroup $B$, then $B\!\cong\! A$. One step in the proof of this is \cite[Lemma 1]{key12} according to which $B\!=\!\langle x, y\rangle$ for mutually inverse $x$ and $y$ defined by $\Delta_{\langle x\rangle}\!=\!\Delta_{\langle a\rangle}\Psi$ and $\Delta_{\langle y\rangle}\!=\!\Delta_{\langle a^{-1}\rangle}\Psi$. 
The assumption that $A$ is a monogenic {\em inverse} semigroup (neither bicyclic nor a group) and {\em partial automorphism monoids} of $A$ and $B$ are isomorphic, is crucial for all results of \cite{key12}. However, due to Theorem \ref{301}, certain arguments in the proof of \cite[Lemma 1]{key12} have natural analogues in the more general setting of this paper. For instance, the argument in the last paragraph of the proof of Theorem \ref{303} is similar to the one in the final paragraph of the proof of \cite[Lemma 1]{key12}. \\ \indent Using Lemma \ref{205} and Theorem \ref{303} (and its proof), we can establish the following result. \setcounter{theorem}{7} \begin{proposition}\label{308} Let $A=\langle a, b\rangle$ be a monogenic orthodox semigroup such that $a$ (and hence $b$) has infinite order. Then $A$ is torsion-free and any semigroup lattice isomorphic to $A$ is a torsion-free monogenic orthodox semigroup. Thus the class of all monogenic orthodox semigroups whose generators have infinite order is lattice closed and every semigroup in that class is torsion-free. \end{proposition} {\bf Proof.} Since $A/\gamma=\langle a\gamma, b\gamma\rangle$ is a monogenic inverse semigroup with $b\gamma=(a\gamma)^{-1}$ such that $o(a\gamma)=\infty$, according to Lemma \ref{205}, $A/\gamma$ is torsion-free. Therefore $A$ is torsion-free. \\ \indent By Result \ref{203}, $\langle a\rangle$ and $\langle b\rangle$ are strongly lattice determined. Thus $\langle a\rangle\Phi=\langle x\rangle$ and $\langle b\rangle\Phi=\langle y\rangle$ for some $x, y\in X$ such that $o(x)=o(y)=\infty$. Then \[X=A\Phi=\langle a, b\rangle\Phi=\left(\langle a\rangle\vee\langle b\rangle\right)\Phi=\langle a\rangle\Phi\vee\langle b\rangle\Phi=\langle x\rangle\vee\langle y\rangle=\langle x, y\rangle.\] If $A$ is bisimple, according to Theorem \ref{302}, $\langle x, y\rangle$ is a bisimple monogenic orthodox semigroup which is isomorphic or antiisomorphic to $\langle a, b\rangle$. Suppose that $A$ is not bisimple. Then, as shown in the proof of Theorem \ref{303}, $x\perp y$. Moreover, since $A$ is torsion-free, applying Theorem \ref{303}, we conclude that $X$ is a torsion-free orthodox semigroup. Therefore in all cases $X=\langle x, y\rangle$ is a torsion-free monogenic orthodox semigroup. This completes the proof. \hspace{\fill}$\Box$ \\ \begin{center} {\bf Acknowledgement} \end{center} The author would like to thank a careful referee for useful comments, which have helped shorten the proofs of Propositions \ref{207} and \ref{308}, resulting in an improved version of the paper. \\ \end{document}
\begin{document} \title{\Large Regularity Structure, Vorticity Layer and Convergence Rates of Inviscid Limit of Free Surface Navier-Stokes Equations with or without Surface Tension} \setlength\parindent{2em} \setlength\parskip{5pt} \begin{abstract} \normalsize{ In this paper, we study the inviscid limit of the free surface incompressible Navier-Stokes equations with or without surface tension. By delicate estimates, we prove that the velocity of the free surface Navier-Stokes equations has only a weak boundary layer, and we establish the existence of a strong or a weak vorticity layer under different conditions. When the limit of the difference between the initial Navier-Stokes vorticity and the initial Euler vorticity is nonzero, or when the tangential projection on the free surface of the Euler strain tensor multiplied by the normal vector is nonzero, there exists a strong vorticity layer; otherwise, the vorticity layer is weak. We estimate the convergence rates of tangential derivatives and of the first-order standard normal derivative in energy norms, and we show that not only do tangential derivatives and the standard normal derivative have different convergence rates, but these rates also differ for different Euler boundary data. Moreover, we determine the regularity structure of the free surface Navier-Stokes solutions with or without surface tension; surface tension changes the regularity structure of the solutions. } \\ \par \small{ \textbf{Keywords}: free surface Navier-Stokes equations, free surface Euler equations, inviscid limit, strong vorticity layer, weak vorticity layer, regularity structure } \end{abstract} \tableofcontents \section{Introduction} In this paper, we study the inviscid limit of the free surface incompressible Navier-Stokes equations with or without surface tension (see \cite{Masmoudi_Rousset_2012_FreeBC,Wang_Xin_2015,Elgindi_Lee_2014}): \begin{equation}\label{Sect1_NavierStokes_Equation} \left\{\begin{array}{ll} u_t + u\cdot\nabla u + \nabla p = \ensuremath{\epsilon}\triangle u,\hspace{1.5cm} x\in\Omega_t, \\[7pt] \nabla\cdot u =0, \hspace{3.82cm} x\in\Omega_t,\\[7pt] \partial_t h = u\cdot \ensuremath{\textbf{N}}, \hspace{3.45cm} x\in\Sigma_t,\\[7pt] p\ensuremath{\textbf{n}} -2\ensuremath{\epsilon} \mathcal{S}u\,\ensuremath{\textbf{n}} =gh\ensuremath{\textbf{n}} -\sigma H\ensuremath{\textbf{n}}, \hspace{1.07cm} x\in\Sigma_t,\\[7pt] (u,h)|_{t=0} = (u_0^{\ensuremath{\epsilon}},h_0^{\ensuremath{\epsilon}}). \end{array}\right. \end{equation} where $x=(y,z)$, $y$ is the horizontal variable and $z$ is the vertical variable. The normalized pressure is $p=p^F + g z$, where $p^F$ is the hydrodynamical pressure of the fluid and $g z$ corresponds to the gravitational force. In the dynamical boundary condition $(\ref{Sect1_NavierStokes_Equation})_4$, the surface tension enters through $H = - \nabla_x \cdot \big(\frac{(-\nabla_y h,1)}{\sqrt{1+|\nabla_y h|^2}}\big) = \nabla_y\cdot \big(\frac{\nabla_y h}{\sqrt{1+|\nabla_y h|^2}}\big)$, which is twice the mean curvature of the free surface $\Sigma_t$. The initial data satisfies the compatibility condition $\Pi\mathcal{S}u_0^{\ensuremath{\epsilon}} \ensuremath{\textbf{n}}|_{z=0} =0$. 
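Here and in what follows, $\Pi$ denotes the tangential projection on the free surface; with the usual convention (see, e.g., \cite{Masmoudi_Rousset_2012_FreeBC}) it can be written as $\Pi=\mathrm{Id}-\ensuremath{\textbf{n}}\otimes\ensuremath{\textbf{n}}$, so that $\Pi\ensuremath{\textbf{n}}=0$. The compatibility condition is nothing but the tangential part of the dynamical boundary condition $(\ref{Sect1_NavierStokes_Equation})_4$ at $t=0$: applying $\Pi$ to $(\ref{Sect1_NavierStokes_Equation})_4$ and using $\Pi\ensuremath{\textbf{n}}=0$, all the terms proportional to $\ensuremath{\textbf{n}}$ drop out and we are left with
\begin{equation*}
-2\ensuremath{\epsilon}\,\Pi\mathcal{S}u\,\ensuremath{\textbf{n}}=0 \quad\mbox{on }\Sigma_t, \qquad\mbox{i.e.}\quad \Pi\mathcal{S}u\,\ensuremath{\textbf{n}}=0 \quad\mbox{on }\Sigma_t\ \mbox{ for every }\ensuremath{\epsilon}>0,
\end{equation*}
which at $t=0$ is exactly the stated condition on $u_0^{\ensuremath{\epsilon}}$.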
Some notations are defined as follows: \begin{equation}\label{Sect1_FreeSurface_Definition} \begin{array}{ll} \Omega_t =\{x\in\mathbb{R}^3|\, -\infty< z<h(t,y)\},\\[6pt] \Sigma_t = \{x\in\mathbb{R}^3|\, z =h(t,y)\},\\[6pt] \ensuremath{\textbf{N}}=(-\nabla h, 1)^{\top},\quad \ensuremath{\textbf{n}}=\frac{\ensuremath{\textbf{N}}}{|\ensuremath{\textbf{N}}|}, \\[6pt] \mathcal{S}u =\frac{1}{2}(\nabla u +(\nabla u)^{\top}), \end{array} \end{equation} where the symbol $^{\top}$ means the transposition of matrices or vectors. We suppose $h(t,y)\ensuremath{\rightarrow} 0$ as $|y|\ensuremath{\rightarrow} +\infty$ for any $t\geq 0$. In this paper, we are interested in the free surface and not in the fluid dynamics at the bottom of $\Omega_t$; thus we simply assume $-\infty< z<h(t,y)$. We also neglect the Coriolis effect generated by the planetary rotation, so there is no Ekman layer near the free surface even if the Rossby number is small. Letting $\ensuremath{\epsilon}\ensuremath{\rightarrow} 0$ in $(\ref{Sect1_NavierStokes_Equation})$, we formally obtain the following free surface Euler equations: \begin{equation}\label{Sect1_Euler_Equation} \left\{\begin{array}{ll} u_t + u\cdot\nabla u + \nabla p = 0,\hspace{2.48cm} x\in\Omega_t, \\[7pt] \nabla\cdot u =0, \hspace{4.34cm} x\in\Omega_t,\\[7pt] \partial_t h = u\cdot \ensuremath{\textbf{N}}, \hspace{4cm} x\in\Sigma_t,\\[7pt] p =gh -\sigma H, \hspace{3.8cm} x\in\Sigma_t,\\[7pt] (u,h)|_{t=0} = (u_0,h_0) := \lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}(u_0^{\ensuremath{\epsilon}},h_0^{\ensuremath{\epsilon}}) , \end{array}\right. \end{equation} where the limit $(u_0,h_0) = \lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}(u_0^{\ensuremath{\epsilon}},h_0^{\ensuremath{\epsilon}})$ is taken in the pointwise sense or even in the $L^2$ sense (see \cite{Masmoudi_Rousset_2012_FreeBC,Elgindi_Lee_2014,Mei_Wang_Xin_2015} for sufficient conditions for the inviscid limit), and $(u_0,h_0)$ is independent of $\ensuremath{\epsilon}$. Note that apart from requiring $(u_0,h_0) =\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}(u_0^{\ensuremath{\epsilon}}, h_0^{\ensuremath{\epsilon}})$, we do not restrict their derivatives, especially the normal derivatives. Furthermore, note that the Navier-slip boundary case requires $u_0 =\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0} u_0^{\ensuremath{\epsilon}}$ (see \cite{Iftimie_Planas_2006}), while the Dirichlet boundary case requires $u_0^{\ensuremath{\epsilon}}(y_1,y_2) \sim u_0(y_1,y_2) + u_0^P(y_1,\frac{y_2}{\sqrt{\ensuremath{\epsilon}}}) +o(\ensuremath{\epsilon})$, where $u_0^P$ is the initial data of the Prandtl equations with Dirichlet boundary condition (see \cite{Sammartino_Caflisch_1998}). The following Taylor sign condition should be imposed on $(\ref{Sect1_Euler_Equation})$ if $\sigma=0$: \begin{equation}\label{Sect1_TaylorSign_1} \begin{array}{ll} g - \partial_z p|_{z=0} \geq\delta_p>0. \end{array} \end{equation} In this paper, either $\sigma=0$ or $\sigma>0$ is fixed; we do not study the zero surface tension limit. For both $(\ref{Sect1_NavierStokes_Equation})$ and $(\ref{Sect1_Euler_Equation})$, the analysis for the fixed $\sigma>0$ case is very different from that for the $\sigma=0$ case. 
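As a simple illustration of $(\ref{Sect1_TaylorSign_1})$, consider the rest state $u\equiv 0$, $h\equiv 0$ (so that $H=0$): the momentum equation in $(\ref{Sect1_Euler_Equation})$ gives $\nabla p=0$, the boundary condition gives $p|_{z=0}=gh=0$, hence $p\equiv 0$ and
\begin{equation*}
g-\partial_z p|_{z=0}=g>0,
\end{equation*}
so the Taylor sign condition holds at equilibrium with $\delta_p=g$.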
In order to describe the strength of the initial vorticity layer, we define \begin{equation}\label{Sect1_Vorticity_Layer_Profile_1} \begin{array}{ll} \varpi^{bl}_0 = \nabla\times u_0^{\ensuremath{\epsilon}} - \nabla\times u_0 = \nabla\times u_0^{\ensuremath{\epsilon}} - \nabla\times\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0} u_0^{\ensuremath{\epsilon}}. \end{array} \end{equation} We emphasize that the initial vorticity layer means a boundary layer at the initial time rather than a time layer in the vicinity of the initial time. If $u_0^{\ensuremath{\epsilon}}$ has a profile $u_0^{\ensuremath{\epsilon}}(y,z) \sim u_0(y,z) + \sqrt{\ensuremath{\epsilon}} u_0^{bl}(y,\frac{z}{\sqrt{\ensuremath{\epsilon}}})$ in its asymptotic expansion, then $\partial_z u_0^{\ensuremath{\epsilon}}$ does not converge uniformly to $\partial_z u_0$, and then \begin{equation}\label{Sect1_Vorticity_Layer_Profile_2} \begin{array}{ll} \lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\varpi^{bl}_0 = (-\partial_z u_0^{bl,2}, \partial_z u_0^{bl,1}, 0)^{\top} \neq 0, \end{array} \end{equation} which means the initial vorticity layer is strong. For a strong initial vorticity layer, there is a special case: if the Euler boundary data satisfies $\Pi\mathcal{S}u_0 \ensuremath{\textbf{n}}|_{z=0} =0$, then $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\varpi^{bl}_0|_{z=0} =0$ on the free surface due to the compatibility condition $\Pi\mathcal{S}u_0^{\ensuremath{\epsilon}} \ensuremath{\textbf{n}}|_{z=0} =0$. However, this cannot prevent $(\ref{Sect1_Vorticity_Layer_Profile_2})$ from holding in the vicinity of the free surface. For example, we choose the boundary layer profile to be $u_0^{bl}(y,\frac{z}{\sqrt{\ensuremath{\epsilon}}}) = \exp\{- (\frac{z}{\sqrt{\ensuremath{\epsilon}}})^2\} (1,1,0)^{\top}$, for which (see the elementary computation recorded below) \begin{equation}\label{Sect1_Example_1_0} \begin{array}{ll} \lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\varpi^{bl}_0 \big|_{z=0} =0, \quad \lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\varpi^{bl}_0 \big|_{z = -\sqrt{\ensuremath{\epsilon}}} = 2e^{-1}(-1,1,0)^{\top} \neq 0. \end{array} \end{equation} Conversely, if $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\varpi^{bl}_0 =0$, then $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\varpi^{bl}_0|_{z=0} =0$ due to its continuity, and then $\Pi\mathcal{S}u_0 \ensuremath{\textbf{n}}|_{z=0} =0$ at the initial time. If $u_0^{\ensuremath{\epsilon}}$ has a profile $u_0^{\ensuremath{\epsilon}}(y,z) \sim u_0(y,z) + \ensuremath{\epsilon}^{\frac{1}{2} + \delta_{ubl}} u_0^{bl}(y,\frac{z}{\sqrt{\ensuremath{\epsilon}}})$ in its asymptotic expansion, where $\delta_{ubl}>0$, then $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\varpi^{bl}_0=0$, which means the initial vorticity layer is weak. In order to describe the discrepancy between the boundary value of the Navier-Stokes vorticity and that of the Euler vorticity, we investigate whether the Euler boundary data satisfies $\Pi\mathcal{S} u\ensuremath{\textbf{n}}|_{\Sigma_t} =0$. If $\Pi\mathcal{S} u\ensuremath{\textbf{n}}|_{\Sigma_t} =0$, the boundary value of the Navier-Stokes vorticity converges to that of the Euler vorticity; otherwise there is a discrepancy. It is easy to have $\Pi\mathcal{S} u\ensuremath{\textbf{n}}|_{\Sigma_t} \neq 0$ in $(0,T]$, because this quantity satisfies a forced transport equation, whereas $\Pi\mathcal{S} u\ensuremath{\textbf{n}}|_{\Sigma_t} =0$ in $[0,T]$ is nontrivial. 
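Returning to the boundary layer profile $u_0^{bl}(y,\frac{z}{\sqrt{\ensuremath{\epsilon}}}) = \exp\{- (\frac{z}{\sqrt{\ensuremath{\epsilon}}})^2\} (1,1,0)^{\top}$ chosen above, the two limits in $(\ref{Sect1_Example_1_0})$ follow from an elementary differentiation, which we record for the reader's convenience. Writing $Z=z/\sqrt{\ensuremath{\epsilon}}$ for the fast variable, the horizontal components of $\varpi^{bl}_0$ involve one $z$-derivative of $\sqrt{\ensuremath{\epsilon}}\,u_0^{bl}(y,z/\sqrt{\ensuremath{\epsilon}})$, so the factor $\sqrt{\ensuremath{\epsilon}}$ is exactly cancelled and
\begin{equation*}
\lim_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\varpi^{bl}_0
=\big(-\partial_Z u_0^{bl,2},\,\partial_Z u_0^{bl,1},\,0\big)^{\top}
=-2Z e^{-Z^2}\,(-1,1,0)^{\top},
\end{equation*}
which vanishes at $z=0$ (where $Z=0$) and equals $2e^{-1}(-1,1,0)^{\top}$ at $z=-\sqrt{\ensuremath{\epsilon}}$ (where $Z=-1$), as claimed.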
Although the condition $\Pi\mathcal{S} u\ensuremath{\textbf{n}}|_{\Sigma_t} =0$ on $[0,T]$ is nontrivial, one can construct an Euler velocity field with finite energy satisfying it. The scenario of our problem is as follows: we construct an Euler velocity field satisfying $\Pi\mathcal{S} u\ensuremath{\textbf{n}}|_{\Sigma_t} =0$, take the Navier-Stokes initial data to be a small perturbation of the Euler initial data, and then study the inviscid limit of the Navier-Stokes solutions. One example satisfying $\Pi\mathcal{S} u\ensuremath{\textbf{n}}|_{\Sigma_t} =0$ is \begin{equation}\label{Sect1_Example_1_1} \begin{array}{ll} u= (-y_2 e^{-y_1^2 -y_2^2 -z^2}, y_1 e^{-y_1^2 -y_2^2 -z^2}, 0),\quad\, h=0, \end{array} \end{equation} and the pressure $p$ is the solution of the Poisson equation: \begin{equation}\label{Sect1_Example_1_2} \left\{\begin{array}{ll} - \triangle p = (4 y_1^2 + 4 y_2^2 -2)\, e^{-2 (y_1^2 + y_2^2 + z^2)}, \\[4pt] p|_{z=0} =0. \end{array}\right. \end{equation} Then the Euler boundary data satisfies \begin{equation}\label{Sect1_Example_1_3} \begin{array}{ll} \mathcal{S}u \ensuremath{\textbf{n}} |_{z=0} = [y_2 z e^{-y_1^2 -y_2^2 -z^2}, -y_1 z e^{-y_1^2 -y_2^2 -z^2}, 0]^{\top} \big|_{z=0} =0. \end{array} \end{equation} By symmetric deformations of the velocity field $(\ref{Sect1_Example_1_1})$, keeping $h$ symmetric as well, one may construct infinitely many velocity fields satisfying $\Pi\mathcal{S} u\ensuremath{\textbf{n}}|_{\Sigma_t} =0$. \subsection{Survey of Previous Results} In this survey, we recall previous results on well-posedness and inviscid limits. As to irrotational fluids, refer to S. Wu \cite{Wu_2D_1997,Wu_3D_1999,Wu_2D_2009,Wu_3D_2011}, Germain, Masmoudi and Shatah \cite{Germain_Masmoudi_Shatah_2012}, Ionescu and Pusateri \cite{Ionescu_Pusateri_2015}, Alazard and Delort \cite{Alazard_Delort_2013} for the water waves without surface tension, and to K. Beyer and M. G\"unther \cite{Beyer_Gunther_1998}, Germain, Masmoudi and Shatah \cite{Germain_Masmoudi_Shatah_2015} for the water waves with surface tension. Before introducing previous results on the boundary layer and inviscid limit problem, we first survey some well-posedness results. The free surface Navier-Stokes equations have both local and global well-posedness results, while the free surface Euler equations only have local well-posedness results. As to the free surface Navier-Stokes equations, refer to Beale \cite{Beale_1981}, Hataya \cite{Hataya_2009}, Guo and Tice \cite{Guo_Tice_2013_ARMA,Guo_Tice_2013_InftyDomain,Guo_Tice_2013_Local} for the zero surface tension case, and to Beale \cite{Beale_1984}, Tani \cite{Tani_1996}, Tanaka and Tani \cite{Tani_Tanaka_1995} for the surface tension case. In particular, \cite{Beale_1984,Tani_Tanaka_1995,Hataya_2009,Guo_Tice_2013_ARMA,Guo_Tice_2013_InftyDomain} proved global-in-time results for small initial data. Note that the viscosity is capable of producing global well-posedness, while the surface tension only provides a regularizing effect on the free surface and enhances the decay rates of the solutions (see \cite{Guo_Tice_2013_ARMA}). The general free surface Euler equations are much more difficult and only have local well-posedness results. Refer to Lindblad \cite{Lindblad_2005}, Coutand and Shkoller \cite{Coutand_Shkoller_2007}, Shatah and Zeng \cite{Shatah_Zeng_2008}, Zhang and Zhang \cite{Zhang_Zhang_2008} for the zero surface tension case, and to Coutand and Shkoller \cite{Coutand_Shkoller_2007}, Shatah and Zeng \cite{Shatah_Zeng_2008} for the surface tension case. 
As the viscosity approaches zero, we hope that the solutions of the Navier-Stokes equations converge to those of the Euler equations. However, this has only been proved in the whole space, where there are no boundary conditions; see \cite{Swann_1971,Kato_1972,DiPerna_Majda_1987_CPAM,DiPerna_Majda_1987_CMP,Constantin_1986,Masmoudi_2007}. In the presence of boundaries, the inviscid limit problem becomes challenging due to the formation of boundary layers. For the Navier-Stokes equations with Dirichlet boundary condition $u|_{\partial\Omega} =0$ in a fixed domain, a strong boundary layer, whose width is $O(\sqrt{\ensuremath{\epsilon}})$ and whose amplitude is $O(1)$, forms near the boundary. Namely, the Navier-Stokes solution is expected to behave like $u^{\ensuremath{\epsilon}} \sim u^0 + u^{bl}(t,y, z/{\sqrt{\ensuremath{\epsilon}}})$, where $u^0$ is the Euler solution satisfying the characteristic boundary condition $u\cdot\ensuremath{\textbf{n}}|_{\partial\Omega} =0$ and $u^{bl}(t,y, z/{\sqrt{\ensuremath{\epsilon}}})$ is the boundary layer profile. The inviscid limit has not been rigorously verified except in the following two cases, i.e., the analytic setting (see \cite{Asano_1988,Sammartino_Caflisch_1998}) and the case where the vorticity is located away from the boundary (see \cite{Maekawa_2013,Maekawa_2014}). For the Navier-Stokes equations with Navier-slip boundary condition $\Pi(2\mathcal{S} u\ensuremath{\textbf{n}} + \gamma_{s}\, u)|_{\partial\Omega} =0,\ u\cdot \ensuremath{\textbf{n}}|_{\partial\Omega} =0$ in a fixed domain, a weak boundary layer, whose width and amplitude are both $O(\sqrt{\ensuremath{\epsilon}})$, forms near the boundary. Namely, the Navier-Stokes solution is expected to behave like $u^{\ensuremath{\epsilon}} \sim u^0 + \sqrt{\ensuremath{\epsilon}} u^{bl}(t,y, z/{\sqrt{\ensuremath{\epsilon}}})$, where $u^0$ is the Euler solution satisfying the characteristic boundary condition $u\cdot\ensuremath{\textbf{n}}|_{\partial\Omega} =0$. For the inviscid limit, refer to Iftimie and Planas \cite{Iftimie_Planas_2006}, Iftimie and Sueur \cite{Iftimie_Sueur_2011}, Masmoudi and Rousset \cite{Masmoudi_Rousset_2012_NavierBC}, Xiao and Xin \cite{Xiao_Xin_2013}. Note that $H^1$ convergence holds for a general Navier-slip boundary condition or a curved boundary, while $H^3$ convergence holds for the complete slip boundary condition $\omega\times\ensuremath{\textbf{n}}|_{\partial\Omega} =0,\, u\cdot \ensuremath{\textbf{n}}|_{\partial\Omega} =0$ and a flat boundary (see \cite{Xiao_Xin_2007,Beirao_Crispo_2011}). For the free surface Navier-Stokes equations with kinematic and dynamical boundary conditions in a moving domain, recent works on the inviscid limit are carried out in conormal Sobolev spaces, for which the normal differential operators vanish on the free surface. Masmoudi and Rousset \cite{Masmoudi_Rousset_2012_FreeBC} proved the uniform estimates and the inviscid limit of the free surface incompressible Navier-Stokes equations without surface tension in conormal Sobolev spaces. By extending this conormal analysis framework, Wang and Xin \cite{Wang_Xin_2015} and Elgindi and Lee \cite{Elgindi_Lee_2014} proved the inviscid limit of the free surface incompressible Navier-Stokes equations with surface tension, and Mei, Wang and Xin \cite{Mei_Wang_Xin_2015} proved the inviscid limit of the free surface compressible Navier-Stokes equations with or without surface tension. 
\cite{Masmoudi_Rousset_2012_FreeBC} pointed out the free surface Navier-Stokes solutions are expected to behave like $u^{\ensuremath{\epsilon}} \sim u^0 + \sqrt{\ensuremath{\epsilon}} u^{bl}(t,y, z/{\sqrt{\ensuremath{\epsilon}}})$, where $u^0$ is the free surface Euler solutions. \subsection{Formulation of the Problem and Our Motivations} We first study N-S (abbreviation of Navier-Stokes) equations $(\ref{Sect1_NavierStokes_Equation})$ with $\sigma=0$. In this subsection, we formulate the free boundary problem into the fixed coordinates domain $\mathbb{R}^3_{-}$. Similar to \cite{Masmoudi_Rousset_2012_FreeBC}, we define the diffeomorphism between $\mathbb{R}^3_{-}$ and the moving domain $\Omega_t$: \begin{equation}\label{Sect1_LagrCoord_Definition_1} \begin{array}{ll} \Phi(t,\cdot) : \mathbb{R}^3_{-} = \mathbb{R}^2\times (-\infty,0) \quad \ensuremath{\rightarrow} \quad \Omega_t, \\[4pt] \hspace{2.75cm} x=(y,z) \quad \ensuremath{\rightarrow} \quad (y,\varphi(t,y,z)), \end{array} \end{equation} and define $\varphi$ as \begin{equation}\label{Sect1_LagrCoord_Definition_2} \begin{array}{ll} \varphi(t,y,z) = Az + \eta(t,y,z), \end{array} \end{equation} where $A>0$ is constant to be determined, $\eta$ is defined as \begin{equation}\label{Sect1_LagrCoord_Definition_3} \begin{array}{ll} \eta(t,y,z) = \psi \ast_y h(t,y) , \end{array} \end{equation} here the symbol $\ast_y$ is a convolution in the $y$ variable and $\psi$ decays sufficiently fast in $z$ such that $(1-z)\psi,\ \psi,\ \partial_z\psi, \cdots, \partial_z^{m+1}\psi \in L^1(\mathrm{d}z)$. For example, $\psi = \mathcal{F}^{-1}[\frac{1}{(1-z)^4} e^{-(1-z)^2(1+|\xi|^2)}]$ where $\mathcal{F}^{-1}$ is the inverse Fourier transformation with respect to $\xi\in\mathbb{R}^2$. The constant $A>0$ is suitably chosen such that $\Phi$ is a diffeomorphism, namely \begin{equation}\label{Sect1_LagrCoord_Definition_4} \begin{array}{ll} \partial_z \varphi(0,y,z) \geq 1, \quad \forall x\in\mathbb{R}^3_{-}. \end{array} \end{equation} By the diffeomorphism $(\ref{Sect1_LagrCoord_Definition_1})$, we have \begin{equation}\label{Sect1_LagrCoord_Definition_5} \begin{array}{ll} v(t,x) = u(t,y,\varphi(t,y,z)), \hspace{1.1cm} q(t,x) = p(t,y,\varphi(t,y,z)), \hspace{1cm} \forall x\in\mathbb{R}^3_{-}, \\[6pt] \partial^{\varphi}_i v(t,x) = \partial_i u(t,y,\varphi(t,y,z)), \quad \partial^{\varphi}_i q(t,x) = \partial_i p(t,y,\varphi(t,y,z)), \quad i =t,1,2,3, \end{array} \end{equation} while $h(t,y)$ does not change. Then the free surface Navier-Stokes equations $(\ref{Sect1_NavierStokes_Equation})$ with $\sigma=0$ are equivalent to the following system: \begin{equation}\label{Sect1_NS_Eq} \left\{\begin{array}{ll} \partial_t^{\varphi} v + v\cdot\nabla^{\varphi} v + \nabla^{\varphi} q = \ensuremath{\epsilon}\triangle^{\varphi} v, \hspace{1.04cm} x\in\mathbb{R}^3_{-}, \\[7pt] \nabla^{\varphi}\cdot v =0, \hspace{4cm} x\in\mathbb{R}^3_{-},\\[7pt] \partial_t h = v(t,y,0)\cdot N, \hspace{2.78cm} z=0,\\[7pt] q\ensuremath{\textbf{n}} -2\ensuremath{\epsilon} \mathcal{S}^{\varphi}v\,\ensuremath{\textbf{n}} =gh\ensuremath{\textbf{n}}, \hspace{2.45cm} z=0,\\[7pt] (v,h)|_{t=0} = (v_0^{\ensuremath{\epsilon}},h_0^{\ensuremath{\epsilon}}), \end{array}\right. \end{equation} where \begin{equation}\label{Sect1_NS_Eq_ComplementDef} \begin{array}{ll} \ensuremath{\textbf{N}}=(-\nabla h(t,y), 1)^{\top},\quad \ensuremath{\textbf{n}}=\frac{\ensuremath{\textbf{N}}}{|\ensuremath{\textbf{N}}|}, \\[7pt] \mathcal{S}^{\varphi}v =\frac{1}{2}(\nabla^{\varphi} v +\nabla^{\varphi} v^{\top}). 
\end{array} \end{equation} Obviously, let $\ensuremath{\epsilon}\ensuremath{\rightarrow} 0$ in $(\ref{Sect1_NS_Eq})$, we formally get the following free surface Euler equations: \begin{equation}\label{Sect1_Euler_Eq} \left\{\begin{array}{ll} \partial_t^{\varphi} v + v\cdot\nabla^{\varphi} v + \nabla^{\varphi} q = 0, \hspace{1.4cm} x\in\mathbb{R}^3_{-}, \\[7pt] \nabla^{\varphi}\cdot v =0, \hspace{3.7cm} x\in\mathbb{R}^3_{-},\\[7pt] \partial_t h = v(t,y,0)\cdot N, \hspace{2.48cm} z=0,\\[7pt] q =gh, \hspace{4.24cm} z=0,\\[7pt] (v,h)|_{t=0} = (v_0,h_0), \end{array}\right. \end{equation} where $v_0$ is the limit of $v_0^{\ensuremath{\epsilon}}$ in the $L^2$ sense, $h_0$ is the limit of $h_0^{\ensuremath{\epsilon}}$ in the $L^2$ sense for $\sigma=0$ and in the $H^1$ sense for $\sigma>0$, $(v_0,h_0)$ is independent of $\ensuremath{\epsilon}$. The following Taylor sign condition should be imposed on $(\ref{Sect1_Euler_Eq})$ when $\sigma=0$, \begin{equation}\label{Sect1_TaylorSign_2} \begin{array}{ll} g - \partial_z^{\varphi} q |_{z=0} \geq\delta_q>0. \end{array} \end{equation} D. Coutand and S. Shkoller (see \cite{Coutand_Shkoller_2007}) proved the well-posedness of the free surface incompressible Euler equations $(\ref{Sect1_Euler_Eq})$ without surface tension. We state their results in our formulation as follows: {\it Suppose the Taylor sign condition $(\ref{Sect1_TaylorSign_2})$ holds at $t=0$, $h_0\in H^3(\mathbb{R}^2), v_0\in H^3(\mathbb{R}^3_{-})$, then there exists $T>0$ and a unique solution $(v,q,h)$ of $(\ref{Sect1_Euler_Eq})$ with $v\in L^{\infty}([0,T],H^3(\mathbb{R}^3_{-})),\nabla q\in L^{\infty}([0,T],H^2(\mathbb{R}^3_{-})), h\in L^{\infty}([0,T],H^{3}(\mathbb{R}^2))$. } Though conormal derivatives of the Navier-Stokes solutions and conormal derivatives of Euler solutions vanish on the free boundary, their differences oscillate dramatically in the vicinity of the free boundary, thus the conormal functional spaces are not suitable for studying the convergence rates of inviscid limit. Thus, we define the following functional spaces: \begin{equation}\label{Sect1_Define_Spapces} \begin{array}{ll} \|v\|_{X^{m,s}}^2 := \sum\limits_{\ell\leq m, |\alpha|\leq m+s-\ell}\|\partial_t^{\ell} \mathcal{Z}^{\alpha} v\|_{L^2(\mathbb{R}^3_{-})}^2 \, , \hspace{0.83cm} \|v\|_{X^{m}}^2 := \|v\|_{X^{m,0}}^2 \, , \\[17pt] \|v\|_{X_{tan}^{m,s}}^2 := \sum\limits_{\ell\leq m, |\alpha|\leq m+s-\ell}\|\partial_t^{\ell} \partial_y^{\alpha} v\|_{L^2(\mathbb{R}^3_{-})}^2 \, , \hspace{0.91cm} \|v\|_{X_{tan}^{m}}^2 := \|v\|_{X_{tan}^{m,0}}^2 \, , \\[17pt] |h|_{X^{m,s}}^2 := \sum\limits_{\ell\leq m, |\alpha|\leq m+s-\ell}|\partial_t^{\ell} \partial_y^{\alpha} h|_{L^2(\mathbb{R}^2)}^2 \, , \hspace{1.27cm} |h|_{X^{m}}^2 := |h|_{X^{m,0}}^2 \, , \\[20pt] \|v\|_{Y_{tan}^{m,s}}^2 := \sum\limits_{\ell\leq m, |\alpha|\leq m+s-\ell}\|\partial_t^{\ell} \partial_y^{\alpha} v\|_{L^{\infty}(\mathbb{R}^3_{-})}^2 \, , \hspace{0.83cm} \|v\|_{Y_{tan}^{m}}^2 := \|v\|_{Y_{tan}^{m,0}}^2 \, , \\[17pt] |h|_{Y^{m,s}}^2 := \sum\limits_{\ell\leq m, |\alpha|\leq m+s-\ell}|\partial_t^{\ell} \partial_y^{\alpha} h|_{L^{\infty}(\mathbb{R}^2)}^2 \, , \hspace{1.2cm} |h|_{Y^{m}}^2 := |h|_{Y^{m,0}}^2 \, , \end{array} \end{equation} where the differential operators $\mathcal{Z}_1=\partial_{y_1}, \mathcal{Z}_2=\partial_{y_2}, \mathcal{Z}_3 =\frac{z}{1-z}\partial_z$ (see \cite{Masmoudi_Rousset_2012_FreeBC,Elgindi_Lee_2014,Wang_Xin_2015,Mei_Wang_Xin_2015}). 
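For later use, and since they enter all of the computations below, we record the explicit form of the twisted derivatives defined through $(\ref{Sect1_LagrCoord_Definition_5})$; this is the usual chain-rule computation for the change of variables $(\ref{Sect1_LagrCoord_Definition_1})$ (cf. \cite{Masmoudi_Rousset_2012_FreeBC}):
\begin{equation*}
\partial_i^{\varphi} = \partial_i - \frac{\partial_i\varphi}{\partial_z\varphi}\,\partial_z \quad (i=t,1,2),
\qquad
\partial_z^{\varphi} = \frac{1}{\partial_z\varphi}\,\partial_z .
\end{equation*}
In particular, for a boundary layer profile $f(y,z/\sqrt{\ensuremath{\epsilon}})$ with $f$ rapidly decaying in its second argument, $\mathcal{Z}_3 f = \frac{z}{(1-z)\sqrt{\ensuremath{\epsilon}}}\,(\partial_Z f)(y,z/\sqrt{\ensuremath{\epsilon}})$ remains bounded uniformly in $\ensuremath{\epsilon}$, whereas $\partial_z f = \ensuremath{\epsilon}^{-\frac{1}{2}}(\partial_Z f)(y,z/\sqrt{\ensuremath{\epsilon}})$ does not; this is the precise sense in which conormal norms cannot detect the boundary layer behavior of standard normal derivatives.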
Also, we use $|\cdot|_m$ to denote the standard Sobolev norm defined in the horizontal space $\mathbb{R}^2$. Let $\omega^{\ensuremath{\epsilon}}=\nabla^{\varphi^{\ensuremath{\epsilon}}}\times v^{\ensuremath{\epsilon}}$ and $\omega=\nabla^{\varphi}\times v$ denote the Navier-Stokes vorticity and the Euler vorticity, respectively, and set $\hat{\omega} =\omega^{\ensuremath{\epsilon}} -\omega$. In this paper, bounded variables or quantities are those bounded by $O(1)$, while small variables or quantities are those bounded by $O(\ensuremath{\epsilon}^{\beta})$ for some $\beta>0$. Now we state the motivations of this paper. 1. As $\ensuremath{\epsilon}\ensuremath{\rightarrow} 0$, \cite{Masmoudi_Rousset_2012_FreeBC} showed that the velocity converges in the $L^2$ and $L^{\infty}$ norms and the height function converges in the $L^2$ and $W^{1,\infty}$ norms. We can expect that their tangential derivatives converge, but we still do not know whether the vorticity and the normal derivatives of the velocity converge in the $L^{\infty}$ norm. If they do not converge in the $L^{\infty}$ norm, there must be a strong vorticity layer in the vicinity of the free surface. \cite{Masmoudi_Rousset_2012_FreeBC} pointed out that the N-S solution is expected to behave like $u^{\ensuremath{\epsilon}} \sim u^0 + \sqrt{\ensuremath{\epsilon}} u^{bl}(t,y, z/{\sqrt{\ensuremath{\epsilon}}})$; however, this was not rigorously proved. While the velocity of the free surface N-S equations is expected to have only a weak boundary layer, we have to prove the existence of a strong vorticity layer under suitable sufficient conditions. Note that the energy norms are too weak; thus we use the $L^{\infty}$ norm to describe the existence of strong boundary layers. 2. We want to know sufficient and necessary conditions for the existence of a strong vorticity layer, and also the corresponding conditions for a weak vorticity layer. We show that there are two sufficient conditions for the strong vorticity layer; note that these two conditions are almost independent. One condition is that the initial vorticity layer is strong: it is then transported by the velocity field for any small $\ensuremath{\epsilon}$, and we obtain a strong vorticity layer for $t\in (0,T]$. The other condition is that the Euler boundary data satisfies $\Pi\mathcal{S}^{\varphi} v \ensuremath{\textbf{n}}|_{z=0} \neq 0$ in $(0,T]$: then there is a discrepancy between the N-S vorticity and the Euler vorticity, and we again have a strong vorticity layer. When neither of the two sufficient conditions is satisfied, we show that the vorticity layer is weak. 3. \cite{Masmoudi_Rousset_2012_FreeBC,Wang_Xin_2015,Elgindi_Lee_2014} proved the uniform regularity and inviscid limit of the free surface N-S equations with or without surface tension. 
In order to prove the uniform regularities, \cite{Masmoudi_Rousset_2012_FreeBC,Elgindi_Lee_2014,Wang_Xin_2015,Mei_Wang_Xin_2015} controlled the bounded quantities in conormal functional spaces and applied the following integration by parts formulas to the a priori estimates: \begin{equation}\label{Sect1_Formulas_CanNotUse} \begin{array}{ll} \frac{\mathrm{d}}{\mathrm{d}t}\int\limits_{\mathbb{R}^3_{-}} f \mathrm{d}\mathcal{V}_t =\int\limits_{\mathbb{R}^3_{-}} \partial_t^{\varphi} f \mathrm{d}\mathcal{V}_t + \int\limits_{\{z=0\}} f v\cdot\ensuremath{\textbf{N}} \mathrm{d}y, \\[12pt] \int\limits_{\mathbb{R}^3_{-}} \vec{a} \cdot\nabla^{\varphi} f \mathrm{d}\mathcal{V}_t = \int\limits_{\{z=0\}} \vec{a}\cdot \ensuremath{\textbf{N}}\, f \mathrm{d}y - \int\limits_{\mathbb{R}^3_{-}} \nabla^{\varphi}\cdot \vec{a}\, f \mathrm{d}\mathcal{V}_t, \\[12pt] \int\limits_{\mathbb{R}^3_{-}} \vec{a} \cdot (\nabla^{\varphi}\times \vec{b}) \,\mathrm{d}\mathcal{V}_t = \int\limits_{\{z=0\}} \vec{a} \cdot (\ensuremath{\textbf{N}} \times \vec{b}) \,\mathrm{d}y + \int\limits_{\mathbb{R}^3_{-}} (\nabla^{\varphi}\times \vec{a}) \cdot \vec{b} \,\mathrm{d}\mathcal{V}_t, \end{array} \end{equation} where $\mathrm{d}\mathcal{V}_t = \partial_z\varphi \mathrm{d}y\mathrm{d}z$ is defined on $\mathbb{R}^3_{-}$ but measures the volume element of $\Omega_t$. Refer to \cite{Masmoudi_Rousset_2012_FreeBC} for the first and second formulae in $(\ref{Sect1_Formulas_CanNotUse})$. As to the last formula $(\ref{Sect1_Formulas_CanNotUse})_3$ used in the fixed domain, refer to \cite{Wang_2016,Wang_Xin_Yong_2015,Xiao_Xin_2013}. Motivated by \cite{Masmoudi_Rousset_2012_FreeBC,Wang_Xin_2015,Elgindi_Lee_2014}, we want to know the convergence rates of the inviscid limit, which involves two moving domains; we denote the Navier-Stokes domain and the Euler domain by $\Omega^{\ensuremath{\epsilon}}$ and $\Omega$, respectively. In general, $\Omega^{\ensuremath{\epsilon}}$ and $\Omega$ do not coincide, so we cannot compare the two velocity fields directly. Thus, we have to map $\Omega^{\ensuremath{\epsilon}}$ and $\Omega$ to the common fixed coordinate domain $\mathbb{R}^3_{-}$, namely $\Omega^{\ensuremath{\epsilon}} = \Phi^{\ensuremath{\epsilon}}(\mathbb{R}^3_{-}), \Omega = \Phi(\mathbb{R}^3_{-})$. For any $x\in\mathbb{R}^3_{-}$, the two points $\Phi^{\ensuremath{\epsilon}}(x)$ and $\Phi(x)$ do not coincide in general. However, $\Phi^{\ensuremath{\epsilon}}(x)$ converges to $\Phi(x)$ pointwise as $\ensuremath{\epsilon}\ensuremath{\rightarrow} 0$; thus $|v^{\ensuremath{\epsilon}}(x) - v(x)|$ and $|\partial_t^{\ell}\mathcal{Z}^{\alpha} v^{\ensuremath{\epsilon}}(x) - \partial_t^{\ell}\mathcal{Z}^{\alpha} v(x)|$ must be small quantities. We have to overcome many difficulties involving two different moving domains to close the estimates of $|\partial_t^{\ell}\mathcal{Z}^{\alpha} v^{\ensuremath{\epsilon}}(x) - \partial_t^{\ell}\mathcal{Z}^{\alpha} v(x)|$. 4. \cite{Xiao_Xin_2007,Xiao_Xin_2011,Xiao_Xin_2013,Wang_2016} studied the inviscid limit of the incompressible or compressible N-S equations with Navier-slip boundary condition, where the initial Navier-Stokes data and the initial Euler data are exactly the same and independent of $\ensuremath{\epsilon}$. 
If the Navier-Stokes boundary condition satisfies $\omega^{\ensuremath{\epsilon}}\times \ensuremath{\textbf{n}}|_{z=0} =0$ and the boundary is flat, then the Euler boundary data also satisfies $\omega\times \ensuremath{\textbf{n}}|_{z=0} =0$, and one has $\|u^{\ensuremath{\epsilon}} -u\|_{L^2} \ensuremath{\lesssim} O(\ensuremath{\epsilon})$ and $\|\omega^{\ensuremath{\epsilon}} -\omega\|_{L^2} + \|u^{\ensuremath{\epsilon}} -u\|_{H^1} \ensuremath{\lesssim} O(\ensuremath{\epsilon}^{\frac{3}{4}})$; moreover, \cite{Xiao_Xin_2007} proved $H^3$ convergence. If the Euler boundary data is general or the boundary is curved, then $\|u^{\ensuremath{\epsilon}} -u\|_{L^2} \ensuremath{\lesssim} O(\ensuremath{\epsilon}^{\frac{3}{4}})$ and $\|\omega^{\ensuremath{\epsilon}} -\omega\|_{L^2} + \|u^{\ensuremath{\epsilon}} -u\|_{H^1} \ensuremath{\lesssim} O(\ensuremath{\epsilon}^{\frac{1}{4}})$. \cite{Iftimie_Sueur_2011} showed that it is impossible to prove $H^2$ convergence. We are also interested in the convergence rates of the inviscid limit of the free boundary problem for the Navier-Stokes equations. However, in our formulation of the free boundary problem, the diffeomorphisms between the fixed coordinate domain $\mathbb{R}^3_{-}$ and the two moving domains are twisted, and the differential operators in the N-S and Euler equations are twisted as well. Consequently, the estimates of tangential derivatives and the estimates of normal derivatives cannot be decoupled; we cannot even derive the $L^2$ estimate of $(v^{\ensuremath{\epsilon}}-v,h^{\ensuremath{\epsilon}}-h)$ itself without involving the normal derivative $\partial_z v^{\ensuremath{\epsilon}} - \partial_z v$. Thus, we want to know whether tangential derivatives and normal derivatives have different convergence rates. If the Euler boundary data satisfies $\Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}|_{z=0} \neq 0$, then $\omega^{\ensuremath{\epsilon}}|_{z=0}$ does not converge to $\omega|_{z=0}$, and we want to know how to calculate the convergence rates of the vorticity in the energy norm. If $\Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}|_{z=0} = 0$, then $\omega^{\ensuremath{\epsilon}}|_{z=0}\ensuremath{\rightarrow} \omega|_{z=0}$, and we want to know how to improve the convergence rates. 5. To estimate the convergence rate of the inviscid limit, we need to use time derivatives. However, time derivatives cannot be expressed in terms of space derivatives by using the equations, since we work in conormal spaces instead of standard Sobolev spaces. Thus, we prove uniform regularity including time derivatives and determine the regularity structure of the N-S solutions and the Euler solutions in conormal functional spaces. When time derivatives are included, the uniform estimates of tangential derivatives differ from those in \cite{Masmoudi_Rousset_2012_FreeBC}. Moreover, our estimates of normal derivatives are based on the estimates of the vorticity rather than those of $\Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}$ (see \cite{Masmoudi_Rousset_2012_FreeBC,Wang_Xin_2015,Elgindi_Lee_2014}). \subsection{Main Results for N-S Equations without Surface Tension} \cite{Masmoudi_Rousset_2012_FreeBC} proved the uniform regularity of space derivatives of the free surface Navier-Stokes equations (\ref{Sect1_NS_Eq}), while the following proposition concerns the uniform regularity of time derivatives. 
\begin{proposition}\label{Sect1_Proposition_TimeRegularity} For $m\geq 6$, assume the initial data $(v_0^{\ensuremath{\epsilon}},h_0^{\ensuremath{\epsilon}})$ satisfy the compatibility condition $\Pi\mathcal{S}^{\varphi} v_0^{\ensuremath{\epsilon}}\ensuremath{\textbf{n}}|_{z=0} =0$ and the regularities: \begin{equation}\label{Sect1_Proposition_TimeRegularity_1} \begin{array}{ll} \sup\limits_{\ensuremath{\epsilon}\in (0,1]} \big( |h_0^{\ensuremath{\epsilon}}|_{X^{m-1,1}} + \ensuremath{\epsilon}^{\frac{1}{2}}|h_0^{\ensuremath{\epsilon}}|_{X^{m-1,\frac{3}{2}}} + \|v_0^{\ensuremath{\epsilon}}\|_{X^{m-1,1}} + \|\omega_0^{\ensuremath{\epsilon}}\|_{X^{m-1}} \\[7pt]\quad + \|\omega_0^{\ensuremath{\epsilon}}\|_{1,\infty} + \ensuremath{\epsilon}^{\frac{1}{2}}\|\partial_z \omega_0^{\ensuremath{\epsilon}}\|_{L^{\infty}}\big) \leq C_0, \end{array} \end{equation} where $C_0>0$ is suitably small such that the Taylor sign condition $g-\partial_z^{\varphi^{\ensuremath{\epsilon}}} q^{\ensuremath{\epsilon}} |_{z=0} \geq c_0 >0$, then the unique Navier-Stokes solution to $(\ref{Sect1_NS_Eq})$ satisfies \begin{equation}\label{Sect1_Proposition_TimeRegularity_2} \begin{array}{ll} \sup\limits_{t\in [0,T]} \big( |h^{\ensuremath{\epsilon}}|_{X^{m-1,1}}^2 + \ensuremath{\epsilon}^{\frac{1}{2}}|h^{\ensuremath{\epsilon}}|_{X^{m-1,\frac{3}{2}}}^2 + \|v^{\ensuremath{\epsilon}}\|_{X^{m-1,1}}^2 + \|\partial_z v^{\ensuremath{\epsilon}}\|_{X^{m-2}}^2 + \|\omega^{\ensuremath{\epsilon}}\|_{X^{m-2}}^2 \\[7pt]\quad + \|\partial_z v^{\ensuremath{\epsilon}}\|_{1,\infty}^2 + \ensuremath{\epsilon}^{\frac{1}{2}}\|\partial_{zz}v^{\ensuremath{\epsilon}}\|_{L^{\infty}}^2 \big) + \|\partial_t^m h\|_{L^4([0,T],L^2)}^2 + \ensuremath{\epsilon}\|\partial_t^m h\|_{L^4([0,T],H^{\frac{1}{2}})}^2 \\[7pt]\quad + \ensuremath{\epsilon}\int\limits_0^T \|\nabla v^{\ensuremath{\epsilon}}\|_{X^{m-1,1}}^2 + \|\nabla\partial_z v^{\ensuremath{\epsilon}}\|_{X^{m-2}}^2 \,\mathrm{d}t \leq C. \end{array} \end{equation} As $\ensuremath{\epsilon}\ensuremath{\rightarrow} 0$, the Euler solution to $(\ref{Sect1_Euler_Eq})$ satisfies the following regularities: \begin{equation}\label{Sect1_Proposition_TimeRegularity_3} \begin{array}{ll} \sup\limits_{t\in [0,T]} \big( |h|_{X^{m-1,1}} + \|v\|_{X^{m-1,1}} + \|\partial_z v\|_{X^{m-2}} + \|\omega\|_{X^{m-2}} \\[8pt]\quad + \|\partial_z v\|_{1,\infty} \big) + \|\partial_t^m h\|_{L^4([0,T],L^2)}^2 \leq C, \end{array} \end{equation} where the Taylor sign condition $g-\partial_z^{\varphi} q |_{z=0} \geq c_0 >0$ holds. \end{proposition} For the initial regularities $(\ref{Sect1_Proposition_TimeRegularity_1})$, we can not prove $\|\partial_t^m v^{\ensuremath{\epsilon}}\|_{L^4([0,T],L^2)}$. To prove $\|\partial_t^m v^{\ensuremath{\epsilon}}\|_{L^4([0,T],L^2)}$, it requires $(\ref{Sect1_Proposition_TimeRegularity_1})$ as well as $\partial_t^m v_0^{\ensuremath{\epsilon}}, \partial_t^m h_0^{\ensuremath{\epsilon}}\in L^2(\mathbb{R}^3_{-})$. Note that when $\sigma=0$, we must use the following Alinhac's good unknown (see \cite{Alinhac_1989,Masmoudi_Rousset_2012_FreeBC}) to estimate tangential derivatives: \begin{equation}\label{Sect1_Good_Unknown_1} \begin{array}{ll} V^{\ell,\alpha} = \partial_t^{\ell}\mathcal{Z}^{\alpha} v - \partial_z^{\varphi}v \partial_t^{\ell}\mathcal{Z}^{\alpha} \eta, \ 0<\ell+|\alpha|\leq m, \ell\leq m-1,\\[6pt] Q^{\ell,\alpha} = \partial_t^{\ell}\mathcal{Z}^{\alpha} q - \partial_z^{\varphi}q \partial_t^{\ell}\mathcal{Z}^{\alpha} \eta, \ 0<\ell+|\alpha|\leq m, \ell\leq m-1. 
\end{array} \end{equation} Our proof of Proposition $\ref{Sect1_Proposition_TimeRegularity}$ is different from \cite{Masmoudi_Rousset_2012_FreeBC}: (i) $\|\partial_t^{\ell} q\|_{L^2}$ has no bound in general. When $|\alpha|=0$, we estimate $V^{\ell,0}$ and $\nabla\partial_t^{\ell}q$ where $0\leq \ell \leq m-1$; the dynamical boundary condition cannot be used. (ii) \cite{Masmoudi_Rousset_2012_FreeBC} as well as \cite{Wang_Xin_2015,Elgindi_Lee_2014,Mei_Wang_Xin_2015} estimated normal derivatives by using $\Pi\mathcal{S}^{\varphi}v\,\ensuremath{\textbf{n}}$ and its evolution equations. In this paper, by contrast, we estimate normal derivatives by using the vorticity and the following equations: \begin{equation}\label{Sect1_Vorticity_H_Eq} \left\{\begin{array}{ll} \partial_t^{\varphi} \omega_h + v\cdot\nabla^{\varphi}\omega_h - \ensuremath{\epsilon}\triangle^{\varphi}\omega_h = \vec{\textsf{F}}^0[\nabla\varphi](\omega_h,\partial_j v^i), \\[7pt] \omega^1|_{z=0} =\textsf{F}^1 [\nabla\varphi](\partial_j v^i), \\[6pt] \omega^2|_{z=0} =\textsf{F}^2 [\nabla\varphi](\partial_j v^i), \end{array}\right. \end{equation} where $j=1,2$, $i=1,2,3$; $\vec{\textsf{F}}^0[\nabla\varphi](\omega_h,\partial_j v^i)$ is a quadratic polynomial vector with respect to $\omega_h$ and $\partial_j v^i$, and $\textsf{F}^1 [\nabla\varphi](\partial_j v^i)$, $\textsf{F}^2 [\nabla\varphi](\partial_j v^i)$ are polynomials with respect to $\partial_j v^i$; all the coefficients are rational functions of $\nabla\varphi$. (iii) In \cite{Masmoudi_Rousset_2012_FreeBC}, the Taylor sign condition is $g-\partial_z^{\varphi^{\ensuremath{\epsilon}}}q^{\ensuremath{\epsilon},E}|_{z=0} \geq c_0>0$, which is imposed on the Euler part of the pressure $q^{\ensuremath{\epsilon}}$. Here $q^{\ensuremath{\epsilon}}$ has the decomposition $q^{\ensuremath{\epsilon}} = q^{\ensuremath{\epsilon},E}+ q^{\ensuremath{\epsilon},NS}$, where the two parts satisfy \begin{equation}\label{Sect1_Pressure_EulerPart} \begin{array}{ll} \left\{\begin{array}{ll} \triangle^{\varphi^{\ensuremath{\epsilon}}} q^{\ensuremath{\epsilon},E} = -\partial_i^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon},j}\partial_j^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon},i}, \\[6pt] q^{\ensuremath{\epsilon},E}|_{z=0} = g h^{\ensuremath{\epsilon}}. \end{array}\right. \qquad \left\{\begin{array}{ll} \triangle^{\varphi^{\ensuremath{\epsilon}}} q^{\ensuremath{\epsilon},NS} = 0, \\[6pt] q^{\ensuremath{\epsilon},NS}|_{z=0} = 2\ensuremath{\epsilon}\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} v\ensuremath{\textbf{n}}\cdot\ensuremath{\textbf{n}}. \end{array}\right. \end{array} \end{equation} However, the force term of $q^{\ensuremath{\epsilon},E}$ has a boundary layer in the vicinity of the free boundary in general; thus $\partial_z^{\varphi^{\ensuremath{\epsilon}}} q^{\ensuremath{\epsilon},E}|_{z=0}$ may also have a boundary layer, and it is unknown whether $\partial_z^{\varphi^{\ensuremath{\epsilon}}} q^{\ensuremath{\epsilon},E}|_{z=0}$ converges pointwise to $\partial_z^{\varphi} q|_{z=0}$ or not. In contrast to \cite{Masmoudi_Rousset_2012_FreeBC}, our Taylor sign condition is $g-\partial_z^{\varphi^{\ensuremath{\epsilon}}} q^{\ensuremath{\epsilon}}|_{z=0} \geq c_0 >0$. 
Since $\partial_z^{\varphi^{\ensuremath{\epsilon}}} q^{\ensuremath{\epsilon}}|_{z=0} = \ensuremath{\epsilon}\triangle^{\varphi^{\ensuremath{\epsilon}}} v^3 - \partial_t v^3 - v_y^{\ensuremath{\epsilon}} \cdot\nabla_y v^{\ensuremath{\epsilon},3}$ and $\|\partial_{zz} v\|_{L^{\infty}},\sqrt{\ensuremath{\epsilon}}\|\partial_{zz} v\|_{L^{\infty}}$ are bounded, thus $\partial_z^{\varphi^{\ensuremath{\epsilon}}} q^{\ensuremath{\epsilon}}|_{z=0}$ converges to $\partial_z^{\varphi} q|_{z=0}$ pointwisely. For classical solutions to the free surface Navier-Stokes equations $(\ref{Sect1_NS_Eq})$ with $\sigma=0$, we will estimate the convergence rates of the velocity later, which implies the weak boundary layer of the velocity. Before estimating the convergence rates, we show the following theorem which states the existence of strong vorticity layer. \begin{theorem}\label{Sect1_Thm_StrongLayer} Assume $T>0$ is finite, fixed and independent of $\ensuremath{\epsilon}$, $(v^{\ensuremath{\epsilon}},h^{\ensuremath{\epsilon}})$ is the solution in $[0,T]$ of Navier-Stokes equations $(\ref{Sect1_NS_Eq})$ with initial data $(v^{\ensuremath{\epsilon}}_0,h^{\ensuremath{\epsilon}}_0)$ satisfying $(\ref{Sect1_Proposition_TimeRegularity_1})$, $\omega^{\ensuremath{\epsilon}}$ is its vorticity. $(v,h)$ is the solution in $[0,T]$ of Euler equations $(\ref{Sect1_Euler_Eq})$ with initial data $(v_0,h_0)\in X^{m-1,1}(\mathbb{R}^3_{-}) \times X^{m-1,1}(\mathbb{R}^2)$, $\omega$ is its vorticity. (1) If the initial Navier-Stokes velocity satisfies $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}(\nabla^{\varphi^{\ensuremath{\epsilon}}}\times v_0^{\ensuremath{\epsilon}}) - \nabla^{\varphi}\times\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0} v_0^{\ensuremath{\epsilon}} \neq 0$ in the initial set $\mathcal{A}_0$, the Euler boundary data satisfies $\Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}|_{z=0} = 0$ in $[0,T]$, then the Navier-Stokes solution of $(\ref{Sect1_NS_Eq})$ has a strong vorticity layer satisfying \begin{equation}\label{Sect1_Thm_StrongLayer_1} \begin{array}{ll} \lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|\omega^{\ensuremath{\epsilon}} -\omega\|_{L^{\infty}(\mathcal{X}(\mathcal{A}_0)\times (0,T])} \neq 0, \\[9pt] \lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|\partial_z^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}} -\partial_z^{\varphi} v\|_{L^{\infty}(\mathcal{X}(\mathcal{A}_0)\times (0,T])} \neq 0, \\[9pt] \lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}v^{\ensuremath{\epsilon}} -\mathcal{S}^{\varphi}v\|_{L^{\infty}(\mathcal{X}(\mathcal{A}_0)\times (0,T])} \neq 0,\\[9pt] \lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|\nabla^{\varphi^{\ensuremath{\epsilon}}} q^{\ensuremath{\epsilon}} -\nabla^{\varphi} q\|_{L^{\infty}(\mathcal{X}(\mathcal{A}_0)\times (0,T])} \neq 0. \end{array} \end{equation} where $\mathcal{X}(\mathcal{A}_0) = \{\mathcal{X}(t,x)\big|\mathcal{X}(0,x)\in\mathcal{A}_0, \partial_t \mathcal{X}(t,x) = v(t,\Phi^{-1}\circ \mathcal{X})\}$. 
(2) If $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}(\nabla^{\varphi^{\ensuremath{\epsilon}}}\times v_0^{\ensuremath{\epsilon}}) - \nabla^{\varphi}\times\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0} v_0^{\ensuremath{\epsilon}} = 0$, the Euler boundary data satisfies $\Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}|_{z=0} \neq 0$ in $(0,T]$, then the Navier-Stokes solution of $(\ref{Sect1_NS_Eq})$ has a strong vorticity layer satisfying \begin{equation}\label{Sect1_Thm_WeakLayer_1} \begin{array}{ll} \lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\big|\omega^{\ensuremath{\epsilon}}|_{z=0} -\omega|_{z=0}\big|_{L^{\infty}(\mathbb{R}^2\times (0,T])} \neq 0, \\[9pt] \lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|\omega^{\ensuremath{\epsilon}} -\omega\|_{L^{\infty}(\mathbb{R}^2\times [0, O(\ensuremath{\epsilon}^{\frac{1}{2} -\delta_z}))\times (0,T])} \neq 0, \\[9pt] \lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|\partial_z^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}} -\partial_z^{\varphi} v\| _{L^{\infty}(\mathbb{R}^2\times [0, O(\ensuremath{\epsilon}^{\frac{1}{2} -\delta_z}))\times (0,T])} \neq 0, \\[9pt] \lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}v^{\ensuremath{\epsilon}} -\mathcal{S}^{\varphi}v\| _{L^{\infty}(\mathbb{R}^2\times [0, O(\ensuremath{\epsilon}^{\frac{1}{2} -\delta_z}))\times (0,T])} \neq 0,\\[9pt] \lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|\nabla^{\varphi^{\ensuremath{\epsilon}}} q^{\ensuremath{\epsilon}} -\nabla^{\varphi} q\| _{L^{\infty}(\mathbb{R}^2\times [0, O(\ensuremath{\epsilon}^{\frac{1}{2} -\delta_z}))\times (0,T])} \neq 0, \end{array} \end{equation} for some constant $\delta_z \geq 0$. (3) $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}(\nabla^{\varphi^{\ensuremath{\epsilon}}}\times v_0^{\ensuremath{\epsilon}}) - \nabla^{\varphi}\times\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0} v_0^{\ensuremath{\epsilon}} = 0$ and $\Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}|_{z=0} = 0$ in $[0,T]$ are necessary conditions for the Navier-Stokes solution of $(\ref{Sect1_NS_Eq})$ to have a weak vorticity layer satisfying \begin{equation}\label{Sect1_Thm_WeakerLayer_1} \begin{array}{ll} \lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|\omega^{\ensuremath{\epsilon}} -\omega\|_{L^{\infty}(\mathfrak{Cl}(\mathbb{R}^3_{-})\times (0,T])} = 0, \\[9pt] \lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|\partial_z^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}} -\partial_z^{\varphi} v\|_{L^{\infty}(\mathfrak{Cl}(\mathbb{R}^3_{-})\times (0,T])} = 0, \\[9pt] \lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}v^{\ensuremath{\epsilon}} -\mathcal{S}^{\varphi}v\|_{L^{\infty}(\mathfrak{Cl}(\mathbb{R}^3_{-})\times (0,T])} = 0,\\[9pt] \lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|\nabla^{\varphi^{\ensuremath{\epsilon}}} q^{\ensuremath{\epsilon}} -\nabla^{\varphi} q\|_{L^{\infty}(\mathfrak{Cl}(\mathbb{R}^3_{-})\times (0,T])} = 0, \end{array} \end{equation} where $\mathfrak{Cl}(\mathbb{R}^3_{-}) = \mathbb{R}^3_{-}\cup \{x|z=0\}$ is the closure of $\mathbb{R}^3_{-}$. 
\end{theorem} We give some remarks on Theorem $\ref{Sect1_Thm_StrongLayer}$: \begin{remark}\label{Sect1_Remark_StrongLayer} (i) It is more natural to work with $\partial_z^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}} - \partial_z^{\varphi} v$ than with $\partial_z v^{\ensuremath{\epsilon}} - \partial_z v$. However, $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|\partial_z^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}} - \partial_z^{\varphi} v\|_{L^{\infty}}\neq 0$ results from $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|\partial_z v^{\ensuremath{\epsilon}} - \partial_z v\|_{L^{\infty}}\neq 0$ and $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|\partial_z(\eta^{\ensuremath{\epsilon}} -\eta)\|_{L^{\infty}} =0$, due to the formula: \begin{equation}\label{Sect1_Transport_DifferenceEq} \begin{array}{ll} \partial_z^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}} - \partial_z^{\varphi} v = \partial_z^{\varphi^{\ensuremath{\epsilon}}}(v^{\ensuremath{\epsilon}} -v) - \partial_z^{\varphi} v \, \partial_z^{\varphi^{\ensuremath{\epsilon}}} (\eta^{\ensuremath{\epsilon}} -\eta) \\[5pt]\hspace{1.95cm} = \frac{1}{\partial_z\varphi^{\ensuremath{\epsilon}}} \cdot\partial_z(v^{\ensuremath{\epsilon}} -v) - \partial_z^{\varphi} v \, \frac{1}{\partial_z\varphi^{\ensuremath{\epsilon}}} \cdot \partial_z (\eta^{\ensuremath{\epsilon}} -\eta). \end{array} \end{equation} (ii) The energy norm $\|\cdot\|_{L^2}$ is weaker than the $L^{\infty}$ norm: even though we have the profile $\omega^{\ensuremath{\epsilon}}(t,y,z) \sim \omega(t,y,z) + \omega^{bl}(t,y,\frac{z}{\sqrt{\ensuremath{\epsilon}}})$, one has $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|\omega^{\ensuremath{\epsilon}}-\omega\|_{L^2(\mathbb{R}^3_{-})} =0$, while $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|\omega^{\ensuremath{\epsilon}}-\omega\|_{L^{\infty}(\mathbb{R}^3_{-})} \neq 0$. Thus, we use the $L^{\infty}$ norm to describe the strong vorticity layer. (iii) $\textsf{S}_n =\Pi \mathcal{S}^{\varphi} v \ensuremath{\textbf{n}}$ satisfies the forced transport equation: \begin{equation}\label{Sect1_Transport_Eq_Sn} \begin{array}{ll} \partial_t^{\varphi} \textsf{S}_n + v\cdot\nabla^{\varphi} \textsf{S}_n = -\frac{1}{2}\Pi\big((\nabla^{\varphi} v)^2+((\nabla^{\varphi} v)^{\top})^2\big) \ensuremath{\textbf{n}} - \Pi((\mathcal{D}^{\varphi})^2 q)\ensuremath{\textbf{n}}\\[8pt]\quad + (\partial_t^{\varphi}\Pi + v\cdot\nabla^{\varphi}\Pi)\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}} + \Pi\mathcal{S}^{\varphi} v(\partial_t^{\varphi}\ensuremath{\textbf{n}} + v\cdot\nabla^{\varphi}\ensuremath{\textbf{n}}), \end{array} \end{equation} where $\big((\mathcal{D}^{\varphi})^2 q\big)$ is the Hessian matrix of $q$. The equation $(\ref{Sect1_Transport_Eq_Sn})$ implies that, even if $\textsf{S}_n|_{t=0} = 0$, it is possible that $\textsf{S}_n \neq 0$ in $(0,T]$ due to the force terms of $(\ref{Sect1_Transport_Eq_Sn})$. However, velocity fields with $\textsf{S}_n|_{z=0} \equiv 0$ in $[0,T]$ can be constructed; see the example in $(\ref{Sect1_Example_1_1})$, $(\ref{Sect1_Example_1_2})$, $(\ref{Sect1_Example_1_3})$. (iv) $\Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}|_{z=0} = 0$ at $t=0$ implies that $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}(\nabla^{\varphi^{\ensuremath{\epsilon}}}\times v_0^{\ensuremath{\epsilon}})|_{z=0} - \nabla^{\varphi}\times\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0} v_0^{\ensuremath{\epsilon}}|_{z=0} = 0$. 
But it does not contradict with $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}(\nabla^{\varphi^{\ensuremath{\epsilon}}}\times v_0^{\ensuremath{\epsilon}}) - \nabla^{\varphi}\times\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0} v_0^{\ensuremath{\epsilon}} \neq 0$ in the initial set $\mathcal{A}_0$, see $(\ref{Sect1_Example_1_0})$ where $\mathcal{A}_0 = \{x|z= -\sqrt{\ensuremath{\epsilon}}\}$ in local coordinates. If $\Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}|_{z=0} \neq 0$ in $[0,T]$ and $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}(\nabla^{\varphi^{\ensuremath{\epsilon}}}\times v_0^{\ensuremath{\epsilon}}) - \nabla^{\varphi}\times\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0} v_0^{\ensuremath{\epsilon}} \neq 0$ in the initial set $\mathcal{A}_0$, then it is easy to know the results are the union of $(\ref{Sect1_Thm_StrongLayer_1})$ and $(\ref{Sect1_Thm_WeakLayer_1})$. (v) $\ensuremath{\textbf{N}}\cdot \partial_z^{\varphi} v$ and $\ensuremath{\textbf{N}}\cdot \partial_z v$ do not have boundary layer, but $\partial_z v^3$ has boundary layer in general. Similarly, $\ensuremath{\textbf{N}}\cdot \omega$ does not have boundary layer, but $\omega^3$ has boundary layer in general. The reason is that both $v|_{z=0}$ and $\omega|_{z=0}$ are not perpendicular to the free surface in general. \end{remark} The proof of Theorem $\ref{Sect1_Thm_StrongLayer}$ is based on the analysis of the limit of $\hat{\omega} = \omega^{\ensuremath{\epsilon}} -\omega$ which satisfies the following equations: \begin{equation}\label{Sect1_N_Derivatives_Difference_Eq} \left\{\begin{array}{ll} \partial_t^{\varphi^{\ensuremath{\epsilon}}} \hat{\omega}_h + v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} \hat{\omega}_h - \ensuremath{\epsilon}\triangle^{\varphi^{\ensuremath{\epsilon}}}\hat{\omega}_h = \vec{\textsf{F}}^0[\nabla\varphi^{\ensuremath{\epsilon}}](\omega_h^{\ensuremath{\epsilon}},\partial_j v^{\ensuremath{\epsilon},i}) - \vec{\textsf{F}}^0[\nabla\varphi](\omega_h,\partial_j v^i) \\[5pt]\quad + \ensuremath{\epsilon}\triangle^{\varphi^{\ensuremath{\epsilon}}}\omega_h + \partial_z^{\varphi}\omega_h \partial_t^{\varphi^{\ensuremath{\epsilon}}} \hat{\eta} + \partial_z^{\varphi} \omega_h\, v^{\ensuremath{\epsilon}}\cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta} - \hat{v}\cdot\nabla^{\varphi} \omega_h , \\[8pt] \hat{\omega}_h|_{z=0} =\textsf{F}^{1,2} [\nabla\varphi^{\ensuremath{\epsilon}}](\partial_j v^{\ensuremath{\epsilon},i}) - \omega^b_h, \\[6pt] \hat{\omega}_h|_{t=0} = (\hat{\omega}_0^1, \hat{\omega}_0^2)^{\top}, \end{array}\right. \end{equation} where $\vec{\textsf{F}}^0[\nabla\varphi](\omega_h,\partial_j v^i)$ and $\textsf{F}^{1,2} [\nabla\varphi^{\ensuremath{\epsilon}}](\partial_j v^{\ensuremath{\epsilon},i})$ = $(\textsf{F}^1 [\nabla\varphi](\partial_j v^i), \textsf{F}^2 [\nabla\varphi](\partial_j v^i))^{\top}$ are defined in $(\ref{Sect1_Vorticity_H_Eq})$. Note that in $\vec{\textsf{F}}^0[\nabla\varphi](\omega_h,\partial_j v^i)$, $\omega_h$ has degree one. By introducing Lagrangian coordinates $(\ref{Sect3_Preliminaries_Vorticity_Eq_3})$, the equations $(\ref{Sect1_N_Derivatives_Difference_Eq})$ can be transformed into the heat equation with damping and force terms. By splitting $(\ref{Sect3_BoundarLayer_Initial_Eq_1})$ and estimating $(\ref{Sect3_BoundarLayer_Initial_Eq_Force})$ and $(\ref{Sect3_BoundarLayer_Initial_Eq_Initial})$, we investigate the effect of the initial vorticity layer. 
If $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\big\|\hat{\omega}_h|_{t=0}\big\|_{L^{\infty}(\mathcal{A}_0)} \neq 0$, we prove that the limit of $\hat{\omega}_h$ is equal to that of the initial vorticity layer in Lagrangian coordinates; thus the limit of the initial vorticity layer is transported in Eulerian coordinates. Namely, $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|\hat{\omega}\|_{L^{\infty}(\mathcal{X}(\mathcal{A}_0)\times (0,T])} \neq 0$. By splitting $(\ref{Sect4_BoundarLayer_Boundary_Eq_1})$ and estimating $(\ref{Sect4_BoundarLayer_Boundary_Force})$ and $(\ref{Sect4_BoundarLayer_Boundary_BC})$, we investigate the effect of the discrepancy of boundary values of the vorticities on the inviscid limit. If $\hat{\omega}_h|_{z=0} \neq 0$ and $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\big\|\hat{\omega}_h|_{t=0}\big\|_{L^{\infty}} = 0$, there is a discrepancy between the N-S vorticity and the Euler vorticity, and we prove that $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|\omega^{\ensuremath{\epsilon}} -\omega\|_{L^{\infty}(\mathbb{R}^2\times [0, O(\ensuremath{\epsilon}^{\frac{1}{2} -\delta_z}))\times (0,T])} \neq 0$ by using symbolic analysis. The following theorem concerns the convergence rates of the inviscid limit of $(\ref{Sect1_NS_Eq})$. Note that if some functional space has negative indices, then such an estimate does not exist. \begin{theorem}\label{Sect1_Thm_ConvergenceRates} Assume that $T>0$ is finite, fixed and independent of $\ensuremath{\epsilon}$, that $(v^{\ensuremath{\epsilon}},h^{\ensuremath{\epsilon}})$ is the solution in $[0,T]$ of the Navier-Stokes equations $(\ref{Sect1_NS_Eq})$ with initial data $(v^{\ensuremath{\epsilon}}_0,h^{\ensuremath{\epsilon}}_0)$ satisfying $(\ref{Sect1_Proposition_TimeRegularity_1})$, with vorticity $\omega^{\ensuremath{\epsilon}}$, and that $(v,h)$ is the solution in $[0,T]$ of the Euler equations $(\ref{Sect1_Euler_Eq})$ with initial data $(v_0,h_0)\in X^{m-1,1}(\mathbb{R}^3_{-}) \times X^{m-1,1}(\mathbb{R}^2)$, with vorticity $\omega$. Assume the Taylor sign conditions $g-\partial_z^{\varphi^{\ensuremath{\epsilon}}} q^{\ensuremath{\epsilon}} |_{z=0} \geq c_0 >0$ and $g-\partial_z^{\varphi} q |_{z=0} \geq c_0 >0$ hold. Assume further that there exists an integer $k$ with $1\leq k\leq m-2$ such that $\|v^{\ensuremath{\epsilon}}_0 -v_0\|_{X^{k-1,1}(\mathbb{R}^3_{-})} =O(\ensuremath{\epsilon}^{\lambda^v})$, $|h^{\ensuremath{\epsilon}}_0 -h_0|_{X^{k-1,1}(\mathbb{R}^2)} =O(\ensuremath{\epsilon}^{\lambda^h})$, $\|\omega^{\ensuremath{\epsilon}}_0 - \omega_0\|_{X^{k-1}(\mathbb{R}^3_{-})} =O(\ensuremath{\epsilon}^{\lambda^{\omega}_1})$, where $\lambda^v>0, \lambda^h>0, \lambda^{\omega}_1>0$. 
If the Euler boundary data satisfies $\Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}|_{z=0}\neq 0$ in $[0,T]$, then the convergence rates of the inviscid limit satisfy \begin{equation}\label{Sect1_Thm4_ConvergenceRates_1} \begin{array}{ll} \|v^{\ensuremath{\epsilon}} -v\|_{X_{tan}^{k-1,1}} + |h^{\ensuremath{\epsilon}} -h|_{X^{k-1,1}} = O(\ensuremath{\epsilon}^{\min\{\frac{1}{4},\lambda^v,\lambda^h, \lambda^{\omega}_1\}}), \\[7pt] \|\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot \partial_z^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}} - \ensuremath{\textbf{N}}\cdot \partial_z^{\varphi} v\|_{X_{tan}^{k-1}} + \|\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot \omega^{\ensuremath{\epsilon}} - \ensuremath{\textbf{N}}\cdot \omega\|_{X_{tan}^{k-1}} = O(\ensuremath{\epsilon}^{\min\{\frac{1}{4},\lambda^v,\lambda^h, \lambda^{\omega}_1\}}), \\[7pt] \|\partial_z^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}} -\partial_z^{\varphi} v\|_{X_{tan}^{k-2}} + \|\omega^{\ensuremath{\epsilon}} -\omega\|_{X_{tan}^{k-2}} = O(\ensuremath{\epsilon}^{\min\{\frac{1}{8},\frac{\lambda^v}{2},\frac{\lambda^h}{2}, \frac{\lambda^{\omega}_1}{2}\}}), \\[7pt] \|\nabla^{\varphi^{\ensuremath{\epsilon}}} q^{\ensuremath{\epsilon}} - \nabla^{\varphi} q\|_{X_{tan}^{k-2}} + \|\triangle^{\varphi^{\ensuremath{\epsilon}}} q^{\ensuremath{\epsilon}} - \triangle^{\varphi} q\|_{X_{tan}^{k-2}} = O(\ensuremath{\epsilon}^{\min\{\frac{1}{8},\frac{\lambda^v}{2},\frac{\lambda^h}{2}, \frac{\lambda^{\omega}_1}{2}\}}), \end{array} \end{equation} \begin{equation*} \begin{array}{ll} \|v^{\ensuremath{\epsilon}} -v\|_{Y_{tan}^{k-3}} + |h^{\ensuremath{\epsilon}} -h|_{Y^{k-3}} = O(\ensuremath{\epsilon}^{\min\{\frac{1}{8},\frac{\lambda^v}{2},\frac{\lambda^h}{2}, \frac{\lambda^{\omega}_1}{2}\}}), \\[7pt] \|\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot \partial_z^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}} - \ensuremath{\textbf{N}}\cdot \partial_z^{\varphi} v\|_{Y_{tan}^{k-4}} + \|\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot \omega^{\ensuremath{\epsilon}} - \ensuremath{\textbf{N}}\cdot \omega\|_{Y_{tan}^{k-4}} = O(\ensuremath{\epsilon}^{\min\{\frac{1}{8},\frac{\lambda^v}{2},\frac{\lambda^h}{2}, \frac{\lambda^{\omega}_1}{2}\}}). 
\end{array} \end{equation*} If the Euler boundary data satisfies $\Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}|_{z=0} = 0$ in $[0,T]$, assume $\|\omega^{\ensuremath{\epsilon}}_0 - \omega_0\|_{X^{k-2}(\mathbb{R}^3_{-})} =O(\ensuremath{\epsilon}^{\lambda^{\omega}_2})$ with $\lambda^{\omega}_2>0$; then the convergence rates of the inviscid limit satisfy
\begin{equation}\label{Sect1_Thm4_ConvergenceRates_2} \begin{array}{ll} \|v^{\ensuremath{\epsilon}} -v\|_{X_{tan}^{k-2,1}} + |h^{\ensuremath{\epsilon}} -h|_{X^{k-2,1}} = O(\ensuremath{\epsilon}^{\min\{\frac{1}{2},\lambda^v,\lambda^h, \lambda^{\omega}_2\}}), \\[7pt] \|\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot \partial_z^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}} - \ensuremath{\textbf{N}}\cdot \partial_z^{\varphi}v\|_{X_{tan}^{k-2}} + \|\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot \omega^{\ensuremath{\epsilon}} - \ensuremath{\textbf{N}}\cdot \omega\|_{X_{tan}^{k-2}} = O(\ensuremath{\epsilon}^{\min\{\frac{1}{2},\lambda^v,\lambda^h, \lambda^{\omega}_2\}}), \\[7pt] \|\partial_z^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}} -\partial_z^{\varphi} v\|_{X_{tan}^{k-3}} + \|\omega^{\ensuremath{\epsilon}} -\omega\|_{X_{tan}^{k-3}} = O(\ensuremath{\epsilon}^{\min\{\frac{1}{4},\frac{\lambda^v}{2},\frac{\lambda^h}{2}, \frac{\lambda^{\omega}_2}{2}\}}), \\[7pt] \|\nabla^{\varphi^{\ensuremath{\epsilon}}} q^{\ensuremath{\epsilon}} - \nabla^{\varphi} q\|_{X_{tan}^{k-3}} + \|\triangle^{\varphi^{\ensuremath{\epsilon}}} q^{\ensuremath{\epsilon}} - \triangle^{\varphi} q\|_{X_{tan}^{k-3}} = O(\ensuremath{\epsilon}^{\min\{\frac{1}{4},\frac{\lambda^v}{2},\frac{\lambda^h}{2}, \frac{\lambda^{\omega}_2}{2}\}}), \\[7pt] \|v^{\ensuremath{\epsilon}} -v\|_{Y_{tan}^{k-4}} + |h^{\ensuremath{\epsilon}} -h|_{Y^{k-4}} = O(\ensuremath{\epsilon}^{\min\{\frac{1}{4},\frac{\lambda^v}{2},\frac{\lambda^h}{2}, \frac{\lambda^{\omega}_2}{2}\}}), \\[7pt] \|\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot \partial_z^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}} - \ensuremath{\textbf{N}}\cdot \partial_z^{\varphi} v\|_{Y_{tan}^{k-5}} + \|\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot \omega^{\ensuremath{\epsilon}} - \ensuremath{\textbf{N}}\cdot \omega\|_{Y_{tan}^{k-5}} = O(\ensuremath{\epsilon}^{\min\{\frac{1}{4},\frac{\lambda^v}{2},\frac{\lambda^h}{2}, \frac{\lambda^{\omega}_2}{2}\}}). \end{array} \end{equation} \end{theorem}
We give some remarks on Theorem $\ref{Sect1_Thm_ConvergenceRates}$: \begin{remark}\label{Sect1_Remark_ConvergenceRates} (i) Convergence rates are represented in functional spaces containing only tangential derivatives, such as $X_{tan}^{k-1,1}$ and $Y_{tan}^{k-3}$, because we are actually interested in standard derivatives rather than conormal derivatives. Theorem $\ref{Sect1_Thm_StrongLayer}$ has already described the behaviors of the standard normal derivatives. However, we have to use conormal Sobolev spaces to estimate the convergence rates, and the initial data are also required to converge in conormal Sobolev spaces; thus the index $k$ depends on the strength of the initial vorticity layer. The weaker the initial vorticity layer is, the larger $k$ is. (ii) In general, $\nabla\cdot (v^{\ensuremath{\epsilon}} -v) \neq 0$ and $\|q^{\ensuremath{\epsilon}} -q\|_{L^2}$ is infinite for the infinite fluid depth. Thus, we cannot even obtain the $L^2$ estimate of $(v^{\ensuremath{\epsilon}}-v,h^{\ensuremath{\epsilon}}-h)$ without involving $\partial_z v^{\ensuremath{\epsilon}} - \partial_z v$.
The estimates of tangential derivatives and the estimates of normal derivatives cannot be decoupled, but tangential derivatives and normal derivatives have different convergence rates. (iii) The convergence rate of the initial vorticity is related to whether the Euler boundary data satisfies $\Pi\mathcal{S}^{\varphi}v \ensuremath{\textbf{n}}|_{z=0,t=0} = 0$ or not. In general, $\lambda^{\omega}_1 \neq \lambda^{\omega}_2$. The convergence rates of the inviscid limit for the free boundary problem are slower than those of the Navier-slip boundary case. (iv) It is more natural to work with $\partial_z^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}} - \partial_z^{\varphi} v$ than with $\partial_z v^{\ensuremath{\epsilon}} - \partial_z v$. However, the estimate of $\|\partial_z^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}} - \partial_z^{\varphi} v\|_{X_{tan}^{k-2}}$ results from the estimates of $\|\partial_z v^{\ensuremath{\epsilon}} - \partial_z v\|_{X_{tan}^{k-2}}$ and $\|\partial_z(\eta^{\ensuremath{\epsilon}} -\eta)\|_{X_{tan}^{k-2}}$, due to the formula $(\ref{Sect1_Transport_DifferenceEq})$. The $L^{\infty}$ type estimates in $(\ref{Sect1_Thm4_ConvergenceRates_1})$ are based on the formula $\|f\|_{L^{\infty}}^2 \ensuremath{\lesssim} \|f\|_{H_{tan}^{s_1}}\|\partial_z f\|_{H_{tan}^{s_2}}$ with $s_1+s_2>2$ (see \cite{Masmoudi_Rousset_2012_FreeBC}). $\|v^{\ensuremath{\epsilon}} -v\|_{X_{tan}^k}$ and $|h^{\ensuremath{\epsilon}}-h|_{X^k}$ cannot be estimated because we cannot control $\|\partial_t^k(q^{\ensuremath{\epsilon}} -q)\|_{L^2}$. (v) Consider the finite fluid depth $\mathbb{R}^2\times [-L,0]$, where $L>0$ can be very small, and assume the initial data satisfy $\|v^{\ensuremath{\epsilon}}_0 -v_0\|_{X^{m-1,1}(\mathbb{R}^2\times [-L,0])} =O(\ensuremath{\epsilon}^{\lambda^v},L)$, $|h^{\ensuremath{\epsilon}}_0 -h_0|_{X^{m-1,1}(\mathbb{R}^2)} =O(\ensuremath{\epsilon}^{\lambda^h})$ and $\|\omega^{\ensuremath{\epsilon}}_0 - \omega_0\|_{X^{m-1}(\mathbb{R}^2\times [-L,0])} =O(\ensuremath{\epsilon}^{\lambda^{\omega}_1},L)$. If $\Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}|_{z=0} \neq 0$ in $[0,T]$, then the pressure itself has $L^2$ type estimates:
\begin{equation*} \begin{array}{ll} \|q^{\ensuremath{\epsilon}}(\cdot,z) - q(\cdot,z)\|_{X_{tan}^{k-2}(\mathbb{R}^2)} \ensuremath{\lesssim} \big|q^{\ensuremath{\epsilon}}|_{z=0} - q|_{z=0}\big|_{X_{tan}^{k-2}} + \|\partial_z q^{\ensuremath{\epsilon}} -\partial_z q\|_{X_{tan}^{k-2}} \\[7pt]\hspace{4.02cm} \ensuremath{\lesssim} O(\ensuremath{\epsilon}^{\min\{\frac{1}{8},\frac{\lambda^v}{2},\frac{\lambda^h}{2}, \frac{\lambda^{\omega}_1}{2}\}}), \\[8pt] \|q^{\ensuremath{\epsilon}} - q\|_{X_{tan}^{k-2}(\mathbb{R}^2\times [-L,0])} \ensuremath{\lesssim} O(\ensuremath{\epsilon}^{\min\{\frac{1}{8},\frac{\lambda^v}{2},\frac{\lambda^h}{2}, \frac{\lambda^{\omega}_1}{2}\}},L), \\[6pt] \|\omega^{\ensuremath{\epsilon}} - \omega\|_{X_{tan}^{k-2}(\mathbb{R}^2\times [-L,0])} \ensuremath{\lesssim} O(\ensuremath{\epsilon}^{\min\{\frac{1}{8},\frac{\lambda^v}{2},\frac{\lambda^h}{2}, \frac{\lambda^{\omega}_1}{2}\}}), \\[8pt] \|v^{\ensuremath{\epsilon}} - v\|_{X_{tan}^{k-2}(\mathbb{R}^2\times [-L,0])} \ensuremath{\lesssim} O(\ensuremath{\epsilon}^{\min\{\frac{1}{4},\lambda^v,\lambda^h, \lambda^{\omega}_1\}},L), \\[8pt] \|h^{\ensuremath{\epsilon}} - h\|_{X_{tan}^{k-1}(\mathbb{R}^2)} \ensuremath{\lesssim} O(\ensuremath{\epsilon}^{\min\{\frac{1}{4},\lambda^v,\lambda^h, \lambda^{\omega}_1\}}).
\end{array} \end{equation*} If $\Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}|_{z=0} = 0$ in $[0,T]$, only the powers of $\ensuremath{\epsilon}$ in the above convergence rates need to be adjusted, and the results are similar. \end{remark} We now outline the strategy of the proofs. Denote $\hat{v} = v^{\ensuremath{\epsilon}} -v$, $\hat{h}= h^{\ensuremath{\epsilon}} -h$, $\hat{q} = q^{\ensuremath{\epsilon}} -q$; then $\hat{v},\hat{h},\hat{q}$ satisfy the following equations:
\begin{equation}\label{Sect1_T_Derivatives_Difference_Eq} \left\{\begin{array}{ll} \partial_t^{\varphi^{\ensuremath{\epsilon}}}\hat{v} -\partial_z^{\varphi} v \partial_t^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta} + v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} \hat{v} -v^{\ensuremath{\epsilon}}\cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta}\, \partial_z^{\varphi} v + \nabla^{\varphi^{\ensuremath{\epsilon}}} \hat{q} -\partial_z^{\varphi} q\nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta} \\[5pt]\quad = 2\ensuremath{\epsilon} \nabla^{\varphi^{\ensuremath{\epsilon}}} \cdot\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \hat{v} + \ensuremath{\epsilon} \triangle^{\varphi^{\ensuremath{\epsilon}}} v - \hat{v}\cdot\nabla^{\varphi} v, \hspace{3.96cm} x\in\mathbb{R}^3_{-}, \\[8pt] \nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot \hat{v} = \partial_z^{\varphi}v \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta}, \hspace{6.38cm} x\in\mathbb{R}^3_{-},\\[8pt] \partial_t \hat{h} + v_y\cdot \nabla_y \hat{h} = \hat{v}\cdot\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}, \hspace{6cm} \{z=0\}, \\[7pt] (\hat{q} -g \hat{h})\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} -2\ensuremath{\epsilon} \mathcal{S}^{\varphi^\ensuremath{\epsilon}}\hat{v}\,\ensuremath{\textbf{N}}^\ensuremath{\epsilon} = 2\ensuremath{\epsilon} \mathcal{S}^{\varphi^\ensuremath{\epsilon}}v\,\ensuremath{\textbf{N}}^\ensuremath{\epsilon}, \hspace{3.8cm} \{z=0\}, \\[6pt] (\hat{v},\hat{h})|_{t=0} = (v_0^\ensuremath{\epsilon} -v_0,h_0^\ensuremath{\epsilon} -h_0). \end{array}\right. \end{equation}
In order to close the estimates for $(\ref{Sect1_T_Derivatives_Difference_Eq})$, we define the following variables, which are similar to Alinhac's good unknowns (see \cite{Alinhac_1989,Masmoudi_Rousset_2012_FreeBC}):
\begin{equation}\label{Sect1_Good_Unknown_2} \begin{array}{ll} \hat{V}^{\ell,\alpha} = \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} - \partial_z^{\varphi}v \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\eta}, \quad\ \hat{Q}^{\ell,\alpha} = \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{q} - \partial_z^{\varphi}q \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\eta}. \end{array} \end{equation}
When $|\alpha|>0$, we study the equations $(\ref{Sect5_TangentialEstimates_Diff_Eq})$ and estimate $\hat{V}^{\ell,\alpha}, \hat{Q}^{\ell,\alpha}$. When $|\alpha|=0$, we study the equations $(\ref{Sect5_Tangential_Estimates_Time})$ and estimate $\hat{V}^{\ell,0}$ and $\partial_t^{\ell}\nabla \hat{q}$. When we prove the estimates, we always rewrite the viscous terms by using the formula $\triangle^{\varphi} v = 2\nabla^{\varphi}\cdot\mathcal{S}^{\varphi} v -\nabla^{\varphi}(\nabla^{\varphi}\cdot v) = \nabla^{\varphi}(\nabla^{\varphi}\cdot v) -\nabla^{\varphi}\times(\nabla^{\varphi}\times v)$.
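For the reader's convenience, we recall why this formula holds (a standard computation; we display the flat case $\nabla^{\varphi}=\nabla$ with summation over repeated indices, the case of $\nabla^{\varphi}$ being identical since the operators $\partial_i^{\varphi}$ are the Cartesian derivatives expressed in the fixed domain and hence commute):
\begin{equation*}
2\big(\nabla\cdot\mathcal{S} v\big)^i = \partial_j\big(\partial_j v^i + \partial_i v^j\big) = \triangle v^i + \partial_i(\nabla\cdot v), \qquad \big(\nabla\times(\nabla\times v)\big)^i = \partial_i(\nabla\cdot v) - \triangle v^i,
\end{equation*}
so that $\triangle v = 2\nabla\cdot\mathcal{S} v - \nabla(\nabla\cdot v) = \nabla(\nabla\cdot v) - \nabla\times(\nabla\times v)$. In particular, when $\nabla^{\varphi}\cdot v = 0$ the viscous term can be written either as $2\nabla^{\varphi}\cdot\mathcal{S}^{\varphi} v$ or as $-\nabla^{\varphi}\times(\nabla^{\varphi}\times v)$.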
The dissipation of the velocity and of the vorticity is controlled by using the following inequalities:
\begin{equation}\label{Sect1_Good_Useful_Inequalities} \begin{array}{ll} \|\nabla v\|_{L^2}^2 \ensuremath{\lesssim} \int\limits_{\mathbb{R}^3_{-}}|\mathcal{S}^{\varphi} v|^2 \,\mathrm{d}\mathcal{V}_t + \|v\|_{L^2}^2, \\[8pt] \|\nabla \omega\|_{L^2}^2 \ensuremath{\lesssim} \int\limits_{\mathbb{R}^3_{-}}|\nabla^{\varphi}\times \omega|^2 \,\mathrm{d}\mathcal{V}_t + \|\omega\|_{L^2}^2 + |\omega\cdot \ensuremath{\textbf{n}}|_{\frac{1}{2}}, \end{array} \end{equation}
where $(\ref{Sect1_Good_Useful_Inequalities})_1$ is Korn's inequality (see \cite{Masmoudi_Rousset_2012_FreeBC,Wang_2003}), and $(\ref{Sect1_Good_Useful_Inequalities})_2$ can be proved by using the Hodge decomposition and $\nabla^{\varphi}\cdot \omega =0$. \subsection{Main Results for N-S Equations with Surface Tension} We now study the free surface Navier-Stokes equations with surface tension. For fixed $\sigma>0$, the free surface Navier-Stokes equations $(\ref{Sect1_NavierStokes_Equation})$ are equivalent to the following system:
\begin{equation}\label{Sect1_NS_Eq_ST} \left\{\begin{array}{ll} \partial_t^{\varphi} v + v\cdot\nabla^{\varphi} v + \nabla^{\varphi} q = \ensuremath{\epsilon}\triangle^{\varphi} v, \hspace{1.04cm} x\in\mathbb{R}^3_{-}, \\[6pt] \nabla^{\varphi}\cdot v =0, \hspace{4cm} x\in\mathbb{R}^3_{-},\\[6pt] \partial_t h = v(t,y,0)\cdot N, \hspace{2.77cm} z=0,\\[6pt] q\ensuremath{\textbf{n}} -2\ensuremath{\epsilon} \mathcal{S}^{\varphi}v\,\ensuremath{\textbf{n}} =gh\ensuremath{\textbf{n}} - \sigma H\ensuremath{\textbf{n}}, \hspace{1.25cm} z=0,\\[6pt] (v,h)|_{t=0} = (v_0^{\ensuremath{\epsilon}},h_0^{\ensuremath{\epsilon}}), \end{array}\right. \end{equation}
where $H = - \nabla_x \cdot \big(\frac{(-\nabla_y h,1)}{\sqrt{1+|\nabla_y h|^2}}\big) = \nabla_y\cdot \big(\frac{\nabla_y h}{\sqrt{1+|\nabla_y h|^2}}\big)$. Letting $\ensuremath{\epsilon} \ensuremath{\rightarrow} 0$, we formally obtain the free surface Euler equations with surface tension:
\begin{equation}\label{Sect1_Euler_Eq_ST} \left\{\begin{array}{ll} \partial_t^{\varphi} v + v\cdot\nabla^{\varphi} v + \nabla^{\varphi} q = 0, \hspace{1.69cm} x\in\mathbb{R}^3_{-}, \\[6pt] \nabla^{\varphi}\cdot v =0, \hspace{4cm} x\in\mathbb{R}^3_{-},\\[6pt] \partial_t h = v(t,y,0)\cdot N, \hspace{2.78cm} z=0,\\[6pt] q =gh -\sigma H, \hspace{3.62cm} z=0,\\[6pt] (v,h)|_{t=0} = (v_0,h_0), \end{array}\right. \end{equation}
where $(v_0,h_0) = \lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}(v_0^{\ensuremath{\epsilon}},h_0^{\ensuremath{\epsilon}})$; we do not need the Taylor sign condition $(\ref{Sect1_TaylorSign_2})$ for $(\ref{Sect1_Euler_Eq_ST})$ when $\sigma>0$ is fixed. We next show the regularity structure of the free surface N-S equations $(\ref{Sect1_NS_Eq_ST})$ and of the Euler equations $(\ref{Sect1_Euler_Eq_ST})$ with $\sigma>0$. \begin{proposition}\label{Sect1_Proposition_Regularity_Tension} Fix $\sigma>0$.
For $m\geq 6$, assume the initial data $(v_0^{\ensuremath{\epsilon}},h_0^{\ensuremath{\epsilon}})$ satisfy the compatibility condition $\Pi\mathcal{S}^{\varphi} v_0^{\ensuremath{\epsilon}}\ensuremath{\textbf{n}}|_{z=0} =0$ and the regularities: \begin{equation}\label{Sect1_Proposition_Regularity_Tension_1} \begin{array}{ll} \sup\limits_{\ensuremath{\epsilon}\in (0,1]} \big( |h_0^{\ensuremath{\epsilon}}|_{X^{m}} + \ensuremath{\epsilon}^{\frac{1}{2}}|h_0^{\ensuremath{\epsilon}}|_{X^{m,\frac{1}{2}}} + \sigma|h_0^{\ensuremath{\epsilon}}|_{X^{m,1}} + \|v_0^{\ensuremath{\epsilon}}\|_{X^m} + \|\omega_0^{\ensuremath{\epsilon}}\|_{X^{m-1}} \\[7pt]\quad + \ensuremath{\epsilon}\|\nabla v_0\|_{X^{m-1,1}}^2 + \ensuremath{\epsilon}\|\nabla\omega_0\|_{X^{m-1}}^2 + \|\omega_0^{\ensuremath{\epsilon}}\|_{1,\infty} + \ensuremath{\epsilon}^{\frac{1}{2}}\|\partial_z \omega_0^{\ensuremath{\epsilon}}\|_{L^{\infty}}\big) \leq C_0, \end{array} \end{equation} then the unique Navier-Stokes solution to $(\ref{Sect1_NS_Eq})$ satisfies \begin{equation}\label{Sect1_Proposition_Regularity_Tension_2} \begin{array}{ll} \sup\limits_{t\in [0,T]} \big( |h^{\ensuremath{\epsilon}}|_{X^{m-1,1}}^2 + \ensuremath{\epsilon}^{\frac{1}{2}}|h^{\ensuremath{\epsilon}}|_{X^{m-1,\frac{3}{2}}}^2 + \sigma |h^{\ensuremath{\epsilon}}|_{X^{m-1,2}}^2 + \|v^{\ensuremath{\epsilon}}\|_{X^{m-1,1}}^2 + \|\partial_z v^{\ensuremath{\epsilon}}\|_{X^{m-2}}^2 \\[9pt]\quad + \|\omega^{\ensuremath{\epsilon}}\|_{X^{m-2}}^2 + \|\partial_z v^{\ensuremath{\epsilon}}\|_{1,\infty}^2 + \ensuremath{\epsilon}^{\frac{1}{2}}\|\partial_{zz}v^{\ensuremath{\epsilon}}\|_{L^{\infty}}^2 \big) + \|\partial_z v\|_{L^4([0,T],X^{m-1})}^2 \\[8pt]\quad + \|\partial_t^m v\|_{L^4([0,T],L^2)}^2 + \|\partial_t^m h\|_{L^4([0,T],L^2)}^2 + \ensuremath{\epsilon}\|\partial_t^m h\|_{L^4([0,T],X^{0,\frac{1}{2}})}^2 \\[6pt]\quad + \sigma\|\partial_t^m h\|_{L^4([0,T],X^{0,1})}^2 + \ensuremath{\epsilon}\int\limits_0^T \|\nabla v^{\ensuremath{\epsilon}}\|_{X^{m-1,1}}^2 + \|\nabla\partial_z v^{\ensuremath{\epsilon}}\|_{X^{m-2}}^2 \,\mathrm{d}t \leq C. \end{array} \end{equation} As $\ensuremath{\epsilon}\ensuremath{\rightarrow} 0$, the Euler solution to $(\ref{Sect1_Euler_Eq})$ with the initial data $(v^0,h^0)$ satisfies the following regularities: \begin{equation}\label{Sect1_Proposition_TimeRegularity_Tension_3} \begin{array}{ll} \sup\limits_{t\in [0,T]} \big( |h|_{X^{m-1,1}} + \sigma |h|_{X^{m-1,2}}^2 + \|v\|_{X^{m-1,1}} + \|\partial_z v\|_{X^{m-2}} + \|\omega\|_{X^{m-2}} \big) \\[9pt]\quad + \|\partial_z v\|_{L^4([0,T],X^{m-1})}^2 + \|\partial_t^m v\|_{L^4([0,T],L^2)}^2 + \|\partial_t^m h\|_{L^4([0,T],L^2)}^2 \\[7pt]\quad + \sigma\|\partial_t^m h\|_{L^4([0,T],X^{0,1})}^2 \leq C. \end{array} \end{equation} \end{proposition} For any $\sigma\geq 0$, the equation of vorticity and its equivalent boundary condition $\Pi\mathcal{S}^{\varphi}v \ensuremath{\textbf{n}}|_{z=0} =0$ are the same, thus the estimates of normal derivatives are also the same. Thus, our proof of Proposition $\ref{Sect1_Proposition_Regularity_Tension}$ is the same as the $\sigma=0$ case. Note that our proof is different from \cite{Masmoudi_Rousset_2012_FreeBC,Wang_Xin_2015}. However, the estimates of the pressure are very different from the $\sigma=0$ case. 
If we couple $\triangle^{\varphi} q = -\partial_j^{\varphi} v^i \partial_i^{\varphi} v^j$ with its nonhomogeneous Dirichlet boundary condition $q|_{z=0} = g h - \sigma H + 2\ensuremath{\epsilon}\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}\cdot\ensuremath{\textbf{n}}$, the estimates cannot be closed because of the limited regularity of $h$. The elliptic equation of the pressure coupled with its Neumann boundary condition is as follows (see \cite{Wang_Xin_2015,Elgindi_Lee_2014,Masmoudi_Rousset_2012_NavierBC}):
\begin{equation}\label{Sect1_Pressure_Neumann} \left\{\begin{array}{ll} \triangle^{\varphi} q = -\partial_j^{\varphi} v^i \partial_i^{\varphi} v^j, \\[5pt] \nabla^{\varphi} q\cdot\ensuremath{\textbf{N}}|_{z=0} = - \partial_t^{\varphi} v\cdot\ensuremath{\textbf{N}} - v\cdot\nabla^{\varphi} v\cdot\ensuremath{\textbf{N}} + \ensuremath{\epsilon}\triangle^{\varphi} v\cdot\ensuremath{\textbf{N}}. \end{array}\right. \end{equation}
Using $(\ref{Sect1_Pressure_Neumann})$ to estimate the pressure, we have to prove $\partial_t^m v\in L^4([0,T],L^2)$. When we estimate $\partial_t^m v$, we have to overcome the difficulties generated by $\partial_t^m q$. Besides integrating the energy estimates in time twice (see \cite{Wang_Xin_2015}), we apply the following Hardy inequality (see \cite{Masmoudi_Wong_2015}) to the terms $\partial_z (\partial_t^{m-1}q \cdot f)$, where $\|f\|_{L^2(\mathbb{R}^3_{-})}$ and $\|(1-z)\partial_z f\|_{L^2(\mathbb{R}^3_{-})}$ are bounded:
\begin{equation}\label{Sect1_HardyIneq} \begin{array}{ll} \|\frac{1}{1-z} \partial_t^{\ell} q\|_{L^2(\mathbb{R}^3_{-})} \ensuremath{\lesssim} \big|\partial_t^{\ell} q|_{z=0}\big|_{L^2(\mathbb{R}^2)} + \|\partial_z \partial_t^{\ell} q\|_{L^2(\mathbb{R}^3_{-})}, \quad 0\leq\ell\leq m-1. \end{array} \end{equation}
Note that \cite{Wang_Xin_2015} considered the finite fluid depth, for which $\|\partial_t^{\ell} q\|_{L^2(\mathbb{R}^2\times [-L,0])}$ is bounded, while we consider the infinite fluid depth in this paper, where $\|\partial_t^{\ell} q\|_{L^2}$ has no bound in general. Another difference is that \cite{Wang_Xin_2015} needs the Taylor sign condition and Alinhac's good unknown $(\ref{Sect1_Good_Unknown_1})$ to estimate tangential derivatives, while we do not need them for fixed $\sigma>0$. D. Coutand and S. Shkoller (see \cite{Coutand_Shkoller_2007}) proved the well-posedness of the free surface incompressible Euler equations $(\ref{Sect1_Euler_Eq_ST})$ with surface tension. We state their results in our formulation as follows: {\it Suppose that $\sigma>0$ is fixed and $h_0\in H^{5.5}(\mathbb{R}^2), v_0\in H^{4.5}(\mathbb{R}^3_{-})$; then there exist $T>0$ and a solution $(v,q,h)$ of $(\ref{Sect1_Euler_Eq_ST})$ with $v\in L^{\infty}([0,T],H^{4.5}(\mathbb{R}^3_{-})), \nabla q\in L^{\infty}([0,T],H^3(\mathbb{R}^3_{-})), h\in L^{\infty}([0,T],H^{5.5}(\mathbb{R}^2))$. The solution is unique if $h_0\in H^{6.5}(\mathbb{R}^2), v_0\in H^{5.5}(\mathbb{R}^3_{-})$. } Since the equation of the vorticity and its equivalent boundary condition $\Pi\mathcal{S}^{\varphi}v\ensuremath{\textbf{n}}|_{z=0} =0$ are the same as in the $\sigma=0$ case, Theorem $\ref{Sect1_Thm_StrongLayer}$ is also valid for the equations $(\ref{Sect1_NS_Eq})$ with $\sigma>0$. Thus, we are mainly concerned with the convergence rates of the inviscid limit.
The results are stated in the following theorem: \begin{theorem}\label{Sect1_Thm_StrongLayer_ST} Assume $T>0$ is finite, fixed and independent of $\ensuremath{\epsilon}$, $(v^{\ensuremath{\epsilon}},h^{\ensuremath{\epsilon}})$ is the solution in $[0,T]$ of Navier-Stokes equations $(\ref{Sect1_NS_Eq})$ with initial data $(v^{\ensuremath{\epsilon}}_0,h^{\ensuremath{\epsilon}}_0)$ satisfying $(\ref{Sect1_Proposition_TimeRegularity_1})$, $\omega^{\ensuremath{\epsilon}}$ is its vorticity. $(v,h)$ is the solution in $[0,T]$ of Euler equations $(\ref{Sect1_Euler_Eq})$ with initial data $(v_0,h_0)\in X^{m-1,1}(\mathbb{R}^3_{-}) \times X^{m-1,1}(\mathbb{R}^2)$, $\omega$ is its vorticity. Assume there exists an integer $k$ where $1\leq k\leq m-2$, such that $\|v^{\ensuremath{\epsilon}}_0 -v_0\|_{X^{k}(\mathbb{R}^3_{-})} =O(\ensuremath{\epsilon}^{\lambda^v})$, $|h^{\ensuremath{\epsilon}}_0 -h_0|_{X^{k}(\mathbb{R}^2)} =O(\ensuremath{\epsilon}^{\lambda^h})$, $\|\omega^{\ensuremath{\epsilon}}_0 - \omega_0\|_{X^{k-1}(\mathbb{R}^3_{-})} =O(\ensuremath{\epsilon}^{\lambda^{\omega}_1})$, where $\lambda^v>0, \lambda^h>0, \lambda^{\omega}_1>0$. If the Euler boundary data satisfies $\Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}|_{z=0}\neq 0$ in $[0,T]$, then the convergence rates of the inviscid limit satisfy \begin{equation}\label{Sect1_Thm7_ConvergenceRates_1} \begin{array}{ll} \|v^{\ensuremath{\epsilon}} -v\|_{X_{tan}^{k-1,1}} + |h^{\ensuremath{\epsilon}} -h|_{X^{k-1,2}} = O(\ensuremath{\epsilon}^{\min\{\frac{1}{4},\lambda^v,\lambda^h, \lambda^{\omega}_1\}}), \\[7pt] \|\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot \partial_z^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}} - \ensuremath{\textbf{N}}\cdot \partial_z^{\varphi} v\|_{X_{tan}^{k-1}} + \|\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot \omega^{\ensuremath{\epsilon}} - \ensuremath{\textbf{N}}\cdot \omega\|_{X_{tan}^{k-1}} = O(\ensuremath{\epsilon}^{\min\{\frac{1}{4},\lambda^v,\lambda^h, \lambda^{\omega}_1\}}), \\[7pt] \|\partial_z^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}} -\partial_z^{\varphi} v\|_{X_{tan}^{k-2}} + \|\omega^{\ensuremath{\epsilon}} -\omega\|_{X_{tan}^{k-2}} = O(\ensuremath{\epsilon}^{\min\{\frac{1}{8},\frac{\lambda^v}{2},\frac{\lambda^h}{2}, \frac{\lambda^{\omega}_1}{2}\}}), \\[7pt] \|\nabla^{\varphi^{\ensuremath{\epsilon}}} q^{\ensuremath{\epsilon}} - \nabla^{\varphi} q\|_{X_{tan}^{k-2}} + \|\triangle^{\varphi^{\ensuremath{\epsilon}}} q^{\ensuremath{\epsilon}} - \triangle^{\varphi} q\|_{X_{tan}^{k-2}} = O(\ensuremath{\epsilon}^{\min\{\frac{1}{8},\frac{\lambda^v}{2},\frac{\lambda^h}{2}, \frac{\lambda^{\omega}_1}{2}\}}), \\[7pt] \|v^{\ensuremath{\epsilon}} -v\|_{Y_{tan}^{k-3}} + |h^{\ensuremath{\epsilon}} -h|_{Y^{k-2}} = O(\ensuremath{\epsilon}^{\min\{\frac{1}{8},\frac{\lambda^v}{2},\frac{\lambda^h}{2}, \frac{\lambda^{\omega}_1}{2}\}}), \\[7pt] \|\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot \partial_z^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}} - \ensuremath{\textbf{N}}\cdot \partial_z^{\varphi} v\|_{Y_{tan}^{k-4}} + \|\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot \omega^{\ensuremath{\epsilon}} - \ensuremath{\textbf{N}}\cdot \omega\|_{Y_{tan}^{k-4}} = O(\ensuremath{\epsilon}^{\min\{\frac{1}{8},\frac{\lambda^v}{2},\frac{\lambda^h}{2}, \frac{\lambda^{\omega}_1}{2}\}}). 
\end{array} \end{equation} If the Euler boundary data satisfies $\Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}|_{z=0} = 0$ in $[0,T]$, assume $\|\omega^{\ensuremath{\epsilon}}_0 - \omega_0\|_{X^{k-2}(\mathbb{R}^3_{-})} =O(\ensuremath{\epsilon}^{\lambda^{\omega}_2})$ with $\lambda^{\omega}_2>0$; then the convergence rates of the inviscid limit satisfy
\begin{equation}\label{Sect1_Thm7_ConvergenceRates_2} \begin{array}{ll} \|v^{\ensuremath{\epsilon}} -v\|_{X_{tan}^{k-2,1}} + |h^{\ensuremath{\epsilon}} -h|_{X^{k-2,2}} = O(\ensuremath{\epsilon}^{\min\{\frac{1}{2},\lambda^v,\lambda^h, \lambda^{\omega}_2\}}), \\[7pt] \|\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot \partial_z^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}} - \ensuremath{\textbf{N}}\cdot \partial_z^{\varphi} v\|_{X_{tan}^{k-2}} + \|\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot \omega^{\ensuremath{\epsilon}} - \ensuremath{\textbf{N}}\cdot \omega\|_{X_{tan}^{k-2}} = O(\ensuremath{\epsilon}^{\min\{\frac{1}{2},\lambda^v,\lambda^h, \lambda^{\omega}_2\}}), \\[7pt] \|\partial_z^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}} -\partial_z^{\varphi} v\|_{X_{tan}^{k-3}} + \|\omega^{\ensuremath{\epsilon}} -\omega\|_{X_{tan}^{k-3}} = O(\ensuremath{\epsilon}^{\min\{\frac{1}{4},\frac{\lambda^v}{2},\frac{\lambda^h}{2}, \frac{\lambda^{\omega}_2}{2}\}}), \\[7pt] \|\nabla^{\varphi^{\ensuremath{\epsilon}}} q^{\ensuremath{\epsilon}} - \nabla^{\varphi} q\|_{X_{tan}^{k-3}} + \|\triangle^{\varphi^{\ensuremath{\epsilon}}} q^{\ensuremath{\epsilon}} - \triangle^{\varphi} q\|_{X_{tan}^{k-3}} = O(\ensuremath{\epsilon}^{\min\{\frac{1}{4},\frac{\lambda^v}{2},\frac{\lambda^h}{2}, \frac{\lambda^{\omega}_2}{2}\}}), \\[7pt] \|v^{\ensuremath{\epsilon}} -v\|_{Y_{tan}^{k-4}} + |h^{\ensuremath{\epsilon}} -h|_{Y^{k-3}} = O(\ensuremath{\epsilon}^{\min\{\frac{1}{4},\frac{\lambda^v}{2},\frac{\lambda^h}{2}, \frac{\lambda^{\omega}_2}{2}\}}), \\[7pt] \|\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot \partial_z^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}} - \ensuremath{\textbf{N}}\cdot \partial_z^{\varphi} v\|_{Y_{tan}^{k-5}} + \|\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot \omega^{\ensuremath{\epsilon}} - \ensuremath{\textbf{N}}\cdot \omega\|_{Y_{tan}^{k-5}} = O(\ensuremath{\epsilon}^{\min\{\frac{1}{4},\frac{\lambda^v}{2},\frac{\lambda^h}{2}, \frac{\lambda^{\omega}_2}{2}\}}). \end{array} \end{equation} \end{theorem}
Besides Remark $\ref{Sect1_Remark_ConvergenceRates}$, we add the following remarks: \begin{remark}\label{Sect1_Remark_ST} (i) When $\sigma>0$, the surface tension changes the regularity structure of the Navier-Stokes solutions and the Euler solutions, but it does not change the convergence rates of the inviscid limit. For fixed $\sigma>0$, we need neither the Taylor sign condition nor Alinhac's good unknown. However, if $\sigma\ensuremath{\rightarrow} 0$ is allowed, we need both the Taylor sign condition and Alinhac's good unknown to close the a priori estimates. (ii) \cite{Ambrose_Masmoudi_2005,Ambrose_Masmoudi_2009,Wang_Xin_2015} studied the zero surface tension limit of water waves or of the free surface N-S equations, but the convergence rates of the zero surface tension limit are unknown.
For the equations $(\ref{Sect1_NS_Eq_ST})$, $\sigma\ensuremath{\rightarrow} 0$ is very different from $\ensuremath{\epsilon}\ensuremath{\rightarrow} 0$, because $\ensuremath{\epsilon}\ensuremath{\rightarrow} 0$ implies that boundary layers may be generated, while $\sigma\ensuremath{\rightarrow} 0$ implies that the height function $h$ loses some regularity. By using the variables $(\ref{Sect1_Good_Unknown_2})$, it is not difficult to extend our estimates to obtain the convergence rates. Assume $1\leq k\leq m-2$, $\|v^{\ensuremath{\epsilon}}_0 -v_0\|_{X^{k}(\mathbb{R}^3_{-})} =O(\sigma^{\mu^v}, \ensuremath{\epsilon}^{\lambda^v})$, $|h^{\ensuremath{\epsilon}}_0 -h_0|_{X^{k}(\mathbb{R}^2)} =O(\sigma^{\mu^h}, \ensuremath{\epsilon}^{\lambda^h})$, $\|\omega^{\ensuremath{\epsilon}}_0 - \omega_0\|_{X^{k-1}(\mathbb{R}^3_{-})} =O(\sigma^{\mu^{\omega}}, \ensuremath{\epsilon}^{\lambda^{\omega}_1})$, and $g- \partial_z^{\varphi^{\ensuremath{\epsilon}}} q^{\ensuremath{\epsilon}} \geq c_0>0$. If $\Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}|_{z=0} \neq 0$ in $[0,T]$, we have the following convergence rates of the inviscid limit:
\begin{equation*} \begin{array}{ll} \|v^{\ensuremath{\epsilon}} -v\|_{X_{tan}^{k-1,1}} + |h^{\ensuremath{\epsilon}} -h|_{X^{k-1,1}} = O(\sigma^{\min\{\mu^v,\mu^h,\mu^{\omega}\}}, \ensuremath{\epsilon}^{\min\{\frac{1}{4},\lambda^v,\lambda^h, \lambda^{\omega}_1\}}), \\[6pt] \|\partial_z^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}} -\partial_z^{\varphi} v\|_{X_{tan}^{k-2}} + \|\omega^{\ensuremath{\epsilon}} -\omega\|_{X_{tan}^{k-2}} = O(\sigma^{\min\{\mu^v,\mu^h,\mu^{\omega}\}}, \ensuremath{\epsilon}^{\min\{\frac{1}{8},\frac{\lambda^v}{2},\frac{\lambda^h}{2}, \frac{\lambda^{\omega}_1}{2}\}}), \\[6pt] \|\nabla^{\varphi^{\ensuremath{\epsilon}}} q^{\ensuremath{\epsilon}} - \nabla^{\varphi} q\|_{X_{tan}^{k-2}} + \|\triangle^{\varphi^{\ensuremath{\epsilon}}} q^{\ensuremath{\epsilon}} - \triangle^{\varphi} q\|_{X_{tan}^{k-2}} \\[5pt]\hspace{3.2cm} = O(\sigma^{\min\{\mu^v,\mu^h,\mu^{\omega}\}}, \ensuremath{\epsilon}^{\min\{\frac{1}{8},\frac{\lambda^v}{2},\frac{\lambda^h}{2}, \frac{\lambda^{\omega}_1}{2}\}}), \\[6pt] \|v^{\ensuremath{\epsilon}} -v\|_{Y_{tan}^{k-3}} + |h^{\ensuremath{\epsilon}} -h|_{Y^{k-3}} = O(\sigma^{\min\{\mu^v,\mu^h,\mu^{\omega}\}}, \ensuremath{\epsilon}^{\min\{\frac{1}{8},\frac{\lambda^v}{2},\frac{\lambda^h}{2}, \frac{\lambda^{\omega}_1}{2}\}}). \end{array} \end{equation*}
If $\Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}|_{z=0}= 0$ in $[0,T]$, only the powers of $\ensuremath{\epsilon}$ in the above convergence rates need to be adjusted, and the results are similar. \end{remark} The rest of the paper is organized as follows: In Section 2, we study the boundary value of the vorticity and determine the regularity structure of N-S solutions with $\sigma=0$. In Section 3, we study the strong vorticity layer caused by the strong initial vorticity layer. In Section 4, we study the strong vorticity layer caused by the discrepancy between the boundary values of the vorticities. In Section 5, we estimate the convergence rates of the inviscid limit for $\sigma=0$. In Section 6, we determine the regularity structure of N-S solutions with $\sigma>0$. In Section 7, we estimate the convergence rates of the inviscid limit for $\sigma>0$. In Appendices A and B, we derive the equations and their boundary conditions which are useful for the a priori estimates.
\section{Vorticity, Normal Derivatives and Regularity Structure of Navier-Stokes Solutions for $\sigma=0$} In this section, we determine the relationship between the vorticity on the free boundary and the normal derivatives of the velocity on the free boundary, and derive the equations of $\omega_h =(\omega^1,\omega^2)$ and their boundary conditions. When $\sigma=0$, we prove Proposition $\ref{Sect1_Proposition_TimeRegularity}$ on the regularities of Navier-Stokes solutions and Euler solutions. For simplicity, in this section we omit the superscript ${}^{\ensuremath{\epsilon}}$, which denotes Navier-Stokes solutions. \subsection{Vorticity and Normal Derivatives on the Free Boundary} The following lemma states that the normal derivatives $(\partial_z v^1,\partial_z v^2)$ can be estimated by the tangential vorticity $\omega_h$. \begin{lemma}\label{Sect2_NormalDer_Vorticity_Lemma} Assume $v$ and $\omega$ are the velocity and vorticity of the free surface Navier-Stokes equations $(\ref{Sect1_NS_Eq})$ respectively, and $\|v\|_{X^{m-1,1}} + |h|_{X^{m-1,1}} <+\infty$. Then
\begin{equation}\label{Sect2_NormalDer_Vorticity_Estimate} \begin{array}{ll} \|\partial_z v^1\|_{X^k} + \|\partial_z v^2\|_{X^k} \ensuremath{\lesssim} \|\omega_h\|_{X^k} + \|v\|_{X^{k,1}} + |h|_{X^{k,\frac{1}{2}}}, \quad k\leq m-1. \end{array} \end{equation} \end{lemma}
\begin{proof} We calculate the vorticity:
\begin{equation}\label{Sect2_NormalDer_Vorticity_Estimate_1} \left\{\begin{array}{ll} \omega^1 = \partial_2^{\varphi} v^3 - \partial_z^{\varphi} v^2 = \partial_2 v^3 - \frac{\partial_2\varphi}{\partial_z\varphi}\partial_z v^3 - \frac{1}{\partial_z\varphi}\partial_z v^2, \\[8pt] \omega^2 = \partial_z^{\varphi} v^1 -\partial_1^{\varphi} v^3 = - \partial_1 v^3 + \frac{\partial_1\varphi}{\partial_z\varphi}\partial_z v^3 + \frac{1}{\partial_z\varphi}\partial_z v^1, \\[8pt] \omega^3 = \partial_1^{\varphi} v^2 - \partial_2^{\varphi} v^1 = \partial_1 v^2 - \frac{\partial_1\varphi}{\partial_z\varphi}\partial_z v^2 - \partial_2 v^1 + \frac{\partial_2\varphi}{\partial_z\varphi}\partial_z v^1. \end{array}\right. \end{equation}
Plugging the following divergence free condition
\begin{equation}\label{Sect2_DivFreeCondition} \begin{array}{ll} \partial_z v^3 = \partial_1\varphi\partial_z v^1 + \partial_2\varphi\partial_z v^2 - \partial_z\varphi(\partial_1 v^1 + \partial_2 v^2), \end{array} \end{equation}
into $(\ref{Sect2_NormalDer_Vorticity_Estimate_1})$, we get
\begin{equation}\label{Sect2_NormalDer_Vorticity_Estimate_2} \left\{\begin{array}{ll} \omega^1 = - \frac{\partial_1\varphi\partial_2\varphi}{\partial_z\varphi}\partial_z v^1 - \frac{1 + (\partial_2\varphi)^2}{\partial_z\varphi}\partial_z v^2 + \partial_2 v^3 + \partial_2\varphi(\partial_1 v^1 + \partial_2 v^2), \\[8pt] \omega^2 = \frac{1+(\partial_1\varphi)^2}{\partial_z\varphi}\partial_z v^1 + \frac{\partial_1\varphi\partial_2\varphi}{\partial_z\varphi}\partial_z v^2 - \partial_1 v^3 - \partial_1\varphi(\partial_1 v^1 + \partial_2 v^2). \end{array}\right.
\end{equation} It follows from $(\ref{Sect2_NormalDer_Vorticity_Estimate_2})$ that
\begin{equation}\label{Sect2_NormalDer_Vorticity_Estimate_3} \left\{\begin{array}{ll} \frac{\partial_1\varphi\partial_2\varphi}{\partial_z\varphi}\partial_z v^1 + \frac{1 + (\partial_2\varphi)^2}{\partial_z\varphi}\partial_z v^2 = - \omega^1 + \partial_2 v^3 + \partial_2\varphi(\partial_1 v^1 + \partial_2 v^2), \\[8pt] \frac{1+(\partial_1\varphi)^2}{\partial_z\varphi}\partial_z v^1 + \frac{\partial_1\varphi\partial_2\varphi}{\partial_z\varphi}\partial_z v^2 = \omega^2 + \partial_1 v^3 + \partial_1\varphi(\partial_1 v^1 + \partial_2 v^2). \end{array}\right. \end{equation}
For $(\ref{Sect2_NormalDer_Vorticity_Estimate_3})$, the determinant of the coefficient matrix of $(\partial_z v^1, \partial_z v^2)^{\top}$ is
\begin{equation}\label{Sect2_NormalDer_Vorticity_Estimate_4} \begin{array}{ll} \left|\begin{array}{cc} \frac{\partial_1\varphi\partial_2\varphi}{\partial_z\varphi} & \frac{1 + (\partial_2\varphi)^2}{\partial_z\varphi} \\[4pt] \frac{1+(\partial_1\varphi)^2}{\partial_z\varphi} & \frac{\partial_1\varphi\partial_2\varphi}{\partial_z\varphi} \end{array}\right| = -\frac{1+(\partial_1\varphi)^2 + (\partial_2\varphi)^2}{(\partial_z\varphi)^2} \neq 0, \end{array} \end{equation}
thus we can solve for $\partial_z v^1$ and $\partial_z v^2$ from $(\ref{Sect2_NormalDer_Vorticity_Estimate_3})$; namely, there exist four homogeneous polynomials $f^k[\nabla\varphi](\partial_j v^i),\, k=1,2,3,4$, which are of degree one with respect to $\partial_j v^i$ and whose coefficients are rational functions of $\nabla\varphi$:
\begin{equation}\label{Sect2_NormalDer_Vorticity_Estimate_5} \left\{\begin{array}{ll} \partial_z v^1 = f^1[\nabla\varphi](\omega^1,\omega^2) + f^2[\nabla\varphi](\partial_j v^i),\ j=1,2,\ i=1,2,3, \\[8pt] \partial_z v^2 = f^3[\nabla\varphi](\omega^1,\omega^2) + f^4[\nabla\varphi](\partial_j v^i),\ j=1,2,\ i=1,2,3, \end{array}\right. \end{equation}
Then we have the estimates:
\begin{equation}\label{Sect2_NormalDer_Vorticity_Estimate_6} \begin{array}{ll} \|\partial_z v^1\|_{X^k} + \|\partial_z v^2\|_{X^k} \ensuremath{\lesssim} \|\omega_h\|_{X^k} + \sum\limits_{i,j}\|\partial_j v^i\|_{X^k} + \|\nabla\varphi\|_{X^k}. \end{array} \end{equation}
Thus, Lemma $\ref{Sect2_NormalDer_Vorticity_Lemma}$ is proved. \end{proof} Because the Navier-Stokes boundary data satisfy $\Pi\mathcal{S}^{\varphi} v \ensuremath{\textbf{n}}|_{z=0} =0$, the following lemma claims that the boundary values of the normal derivatives and of the tangential vorticity can be expressed in terms of tangential derivatives, and that the tangential vorticity $\omega_h$ satisfies $(\ref{Sect1_Vorticity_H_Eq})$. \begin{lemma}\label{Sect2_Vorticity_H_Eq_BC_Lemma} Assume $v$ and $\omega$ are the velocity and vorticity of the free surface Navier-Stokes equations $(\ref{Sect1_NS_Eq})$ respectively. If $\ensuremath{\epsilon}>0$, then there exist polynomials $\vec{\textsf{F}}^0[\nabla\varphi](\omega_h,\partial_j v^i)$, $\textsf{F}^1 [\nabla\varphi](\partial_j v^i)$, $\textsf{F}^2 [\nabla\varphi](\partial_j v^i)$ such that $\omega_h$ satisfies $(\ref{Sect1_Vorticity_H_Eq})$, where $\vec{\textsf{F}}^0[\nabla\varphi](\omega_h,\partial_j v^i)$ is a quadratic polynomial vector with respect to $\omega_h$ and $\partial_j v^i$, $\textsf{F}^1 [\nabla\varphi](\partial_j v^i)$ and $\textsf{F}^2 [\nabla\varphi](\partial_j v^i)$ are polynomials with respect to $\partial_j v^i$, and all the coefficients are rational functions of $\nabla\varphi$.
\end{lemma} \begin{proof} Firstly, we investigate the following quantity on the free boundary: \begin{equation}\label{Sect2_Vorticity_H_BC_1} \begin{array}{ll} \mathcal{S}^{\varphi}v \ensuremath{\textbf{n}} = \left(\begin{array}{c} n^1\partial_1^{\varphi} v^1 + \frac{n^2}{2}(\partial_1^{\varphi} v^2 + \partial_2^{\varphi} v^1) + \frac{n^3}{2}(\partial_1^{\varphi} v^3 + \partial_z^{\varphi} v^1) \\[4pt] \frac{n^1}{2}(\partial_1^{\varphi} v^2 + \partial_2^{\varphi} v^1) + n^2\partial_2^{\varphi} v^2 + \frac{n^3}{2}(\partial_2^{\varphi} v^3 + \partial_z^{\varphi} v^2) \\[4pt] \frac{n^1}{2}(\partial_1^{\varphi} v^3 + \partial_z^{\varphi} v^1) + \frac{n^2}{2}(\partial_2^{\varphi} v^3 + \partial_z^{\varphi} v^2) - n^3\partial_1^{\varphi} v^1 - n^3\partial_2^{\varphi} v^2 \end{array}\right). \end{array} \end{equation} Since $\Pi \mathcal{S}^{\varphi} v \ensuremath{\textbf{n}} =0$, $\mathcal{S}^{\varphi}v\ensuremath{\textbf{n}} = (\mathcal{S}^{\varphi}v \ensuremath{\textbf{n}}\cdot\ensuremath{\textbf{n}})\ensuremath{\textbf{n}}$, then $\mathcal{S}^{\varphi}v \ensuremath{\textbf{n}}$ is parallel to $\ensuremath{\textbf{n}}$. By using $\mathcal{S}^{\varphi}v \ensuremath{\textbf{n}}\times\ensuremath{\textbf{n}} =0$, we have \begin{equation}\label{Sect2_Vorticity_H_BC_2} \left\{\begin{array}{ll} n^3[n^1\partial_1^{\varphi} v^1 + \frac{n^2}{2}(\partial_1^{\varphi} v^2 + \partial_2^{\varphi} v^1) + \frac{n^3}{2}(\partial_1^{\varphi} v^3 + \partial_z^{\varphi} v^1)] \\[5pt]\quad = n^1[\frac{n^1}{2}(\partial_1^{\varphi} v^3 + \partial_z^{\varphi} v^1) + \frac{n^2}{2}(\partial_2^{\varphi} v^3 + \partial_z^{\varphi} v^2) - n^3\partial_1^{\varphi} v^1 - n^3\partial_2^{\varphi} v^2], \\[10pt] n^3[\frac{n^1}{2}(\partial_1^{\varphi} v^2 + \partial_2^{\varphi} v^1) + n^2\partial_2^{\varphi} v^2 + \frac{n^3}{2}(\partial_2^{\varphi} v^3 + \partial_z^{\varphi} v^2)] \\[5pt]\quad = n^2[\frac{n^1}{2}(\partial_1^{\varphi} v^3 + \partial_z^{\varphi} v^1) + \frac{n^2}{2}(\partial_2^{\varphi} v^3 + \partial_z^{\varphi} v^2) - n^3\partial_1^{\varphi} v^1 - n^3\partial_2^{\varphi} v^2], \\[10pt] n^2[n^1\partial_1^{\varphi} v^1 + \frac{n^2}{2}(\partial_1^{\varphi} v^2 + \partial_2^{\varphi} v^1) + \frac{n^3}{2}(\partial_1^{\varphi} v^3 + \partial_z^{\varphi} v^1)] \\[5pt]\quad = n^1[\frac{n^1}{2}(\partial_1^{\varphi} v^2 + \partial_2^{\varphi} v^1) + n^2\partial_2^{\varphi} v^2 + \frac{n^3}{2}(\partial_2^{\varphi} v^3 + \partial_z^{\varphi} v^2)]. \end{array}\right. 
\end{equation} Firstly, we solve $\partial_z^{\varphi} v^1$ from $(\ref{Sect2_Vorticity_H_BC_2})$: \begin{equation}\label{Sect2_Vorticity_H_BC_4} \begin{array}{ll} [\frac{(n^3)^2}{2} - \frac{(n^1)^2}{2} - \frac{(n^2)^2}{2}]\partial_z^{\varphi} v^1 = -[\frac{(n^3)^2}{2} - \frac{(n^1)^2}{2} - \frac{(n^2)^2}{2}]\partial_1^{\varphi} v^3 \\[6pt]\quad + (\frac{n^1(n^2)^2}{n^3} - 2n^1n^3)\partial_1^{\varphi} v^1 - (n^1n^3 + \frac{n^1(n^2)^2}{n^3})\partial_2^{\varphi} v^2 \\[6pt]\quad + [\frac{(n^2)^2}{n^3}\frac{n^2}{2}- \frac{n^1n^2}{n^3}\frac{n^1}{2} -\frac{n^2n^3}{2}](\partial_1^{\varphi} v^2 + \partial_2^{\varphi} v^1), \end{array} \end{equation} Secondly, we solve $\partial_z^{\varphi} v^2$ from $(\ref{Sect2_Vorticity_H_BC_2})$: \begin{equation}\label{Sect2_Vorticity_H_BC_6} \begin{array}{ll} [\frac{(n^3)^2}{2} - \frac{(n^2)^2}{2} - \frac{(n^1)^2}{2}]\partial_z^{\varphi} v^2 = -[\frac{(n^3)^2}{2} - \frac{(n^2)^2}{2} - \frac{(n^1)^2}{2}]\partial_2^{\varphi} v^3 \\[6pt]\quad - (n^2n^3 + \frac{(n^1)^2n^2}{n^3})\partial_1^{\varphi} v^1 + (\frac{(n^1)^2n^2}{n^3} -2n^2n^3)\partial_2^{\varphi} v^2 \\[6pt]\quad + (\frac{(n^1)^2}{n^3}\frac{n^1}{2} -\frac{n^1n^3}{2} - \frac{n^1n^2}{n^3}\frac{n^2}{2}) (\partial_1^{\varphi} v^2 + \partial_2^{\varphi} v^1). \end{array} \end{equation} It follows from $(\ref{Sect2_Vorticity_H_BC_4})$ and $(\ref{Sect2_Vorticity_H_BC_6})$ that \begin{equation*} \begin{array}{ll} \big[(n^1)^2 + \frac{(n^3)^2}{2} + \frac{1}{2}\frac{(n^1)^4}{(n^3)^2} - \frac{1}{2}\frac{(n^2)^4}{(n^3)^2} \big]\partial_z v^1 + \big[n^1n^2 + \frac{(n^1)^3n^2}{(n^3)^2} + \frac{n^1(n^2)^3}{(n^3)^2} \big]\partial_z v^2 \\[8pt] = -[\frac{(n^3)^2}{2} - \frac{(n^1)^2}{2} - \frac{(n^2)^2}{2}]\big[\partial_z\varphi\partial_1 v^3 - \partial_1\varphi [- \partial_z\varphi(\partial_1 v^1 + \partial_2 v^2)] \big] \\[5pt]\quad + (\frac{n^1(n^2)^2}{n^3} - 2n^1n^3)(\partial_z\varphi\partial_1 v^1) - (n^1n^3 + \frac{n^1(n^2)^2}{n^3})(\partial_z\varphi\partial_2 v^2) \\[5pt]\quad + [\frac{(n^2)^2}{n^3}\frac{n^2}{2}- \frac{n^1n^2}{n^3}\frac{n^1}{2} -\frac{n^2n^3}{2}] (\partial_z\varphi\partial_1 v^2 + \partial_z\varphi\partial_2 v^1), \end{array} \end{equation*} \begin{equation*} \begin{array}{ll} \big[n^1n^2 + \frac{(n^1)^3n^2}{(n^3)^2} + \frac{n^1(n^2)^3}{(n^3)^2} \big]\partial_z v^1 + \big[(n^2)^2 + \frac{1}{2}(n^3)^2 + \frac{(n^2)^4}{2(n^3)^2} - \frac{(n^1)^4}{2(n^3)^2}\big]\partial_z v^2 \\[8pt] = -[\frac{(n^3)^2}{2} - \frac{(n^2)^2}{2} - \frac{(n^1)^2}{2}]\big[\partial_z\varphi\partial_2 v^3 - {\partial_z\varphi} [- \partial_z\varphi(\partial_1 v^1 + \partial_2 v^2)]\big] \\[5pt]\quad - (n^2n^3 + \frac{(n^1)^2n^2}{n^3})(\partial_z\varphi\partial_1 v^1) + (\frac{(n^1)^2n^2}{n^3} -2n^2n^3)(\partial_z\varphi\partial_2 v^2) \\[5pt]\quad + (\frac{(n^1)^2}{n^3}\frac{n^1}{2} -\frac{n^1n^3}{2} - \frac{n^1n^2}{n^3}\frac{n^2}{2}) (\partial_z\varphi\partial_1 v^2 + \partial_z\varphi\partial_2 v^1). \end{array} \end{equation*} where the coefficient matrix of $(\partial_z v^1, \partial_z v^2)^{\top}$ is \begin{equation}\label{Sect2_Vorticity_H_BC_9} \begin{array}{ll} \textsf{M} = \left(\begin{array}{cc} (n^1)^2 + \frac{(n^3)^2}{2} + \frac{1}{2}\frac{(n^1)^4}{(n^3)^2} - \frac{1}{2}\frac{(n^2)^4}{(n^3)^2} & n^1n^2 + \frac{(n^1)^3n^2}{(n^3)^2} + \frac{n^1(n^2)^3}{(n^3)^2} \\[4pt] n^1n^2 + \frac{(n^1)^3n^2}{(n^3)^2} + \frac{n^1(n^2)^3}{(n^3)^2} & (n^2)^2 + \frac{1}{2}(n^3)^2 + \frac{(n^2)^4}{2(n^3)^2} - \frac{(n^1)^4}{2(n^3)^2} \end{array}\right). 
\end{array} \end{equation} Assume $|\nabla h|_{\infty}$ is suitably small; then $n^3$ is suitably large and $|n^1|+|n^2|$ is suitably small, so that $\textsf{M}$ is a strictly diagonally dominant matrix. By the L\'evy-Desplanques theorem, $\textsf{M}$ is nondegenerate; thus we can solve for $\partial_z v^1$ and $\partial_z v^2$, namely there exist two homogeneous polynomials $f^5[\nabla\varphi](\partial_j v^i)$ and $f^6[\nabla\varphi](\partial_j v^i)$, which are of degree one with respect to $\partial_j v^i$ and whose coefficients are rational functions of $\nabla\varphi$:
\begin{equation}\label{Sect2_Vorticity_H_BC_10} \left\{\begin{array}{ll} \partial_z v^1 = f^5[\nabla\varphi](\partial_j v^i), \ j=1,2,\ i=1,2,3,\\[6pt] \partial_z v^2 = f^6[\nabla\varphi](\partial_j v^i), \ j=1,2,\ i=1,2,3. \end{array}\right. \end{equation}
By $(\ref{Sect2_NormalDer_Vorticity_Estimate_2})$, we have the boundary values of $\omega_h = (\omega^1,\omega^2)$:
\begin{equation}\label{Sect2_Vorticity_H_BC_11} \left\{\begin{array}{ll} \omega^1 = - \frac{\partial_1\varphi\partial_2\varphi}{\partial_z\varphi}\partial_z v^1 - \frac{1 + (\partial_2\varphi)^2}{\partial_z\varphi}\partial_z v^2 + \partial_2 v^3 + \partial_2\varphi(\partial_1 v^1 + \partial_2 v^2) \\[6pt]\hspace{0.47cm} = - \frac{\partial_1\varphi\partial_2\varphi}{\partial_z\varphi}f^5[\nabla\varphi](\partial_j v^i) - \frac{1 + (\partial_2\varphi)^2}{\partial_z\varphi}f^6[\nabla\varphi](\partial_j v^i) + \partial_2 v^3 + \partial_2\varphi(\partial_1 v^1 + \partial_2 v^2) \\[6pt]\hspace{0.39cm} := \textsf{F}^1 [\nabla\varphi](\partial_j v^i), \\[8pt] \omega^2 = \frac{1+(\partial_1\varphi)^2}{\partial_z\varphi}\partial_z v^1 + \frac{\partial_1\varphi\partial_2\varphi}{\partial_z\varphi}\partial_z v^2 - \partial_1 v^3 - \partial_1\varphi(\partial_1 v^1 + \partial_2 v^2) \\[6pt]\hspace{0.47cm} = \frac{1+(\partial_1\varphi)^2}{\partial_z\varphi}f^5[\nabla\varphi](\partial_j v^i) + \frac{\partial_1\varphi\partial_2\varphi}{\partial_z\varphi}f^6[\nabla\varphi](\partial_j v^i) - \partial_1 v^3 - \partial_1\varphi(\partial_1 v^1 + \partial_2 v^2) \\[6pt]\hspace{0.39cm} := \textsf{F}^2 [\nabla\varphi](\partial_j v^i). \end{array}\right.
\end{equation} Since $\omega_h$ satisfies the equation: \begin{equation}\label{Sect2_Vorticity_H_Eq_1} \begin{array}{ll} \partial_t^{\varphi} \omega_h + v\cdot\nabla^{\varphi}\omega_h - \ensuremath{\epsilon}\triangle^{\varphi}\omega_h = \omega_h\cdot\nabla^{\varphi}_h v_h + \omega^3\partial_z^{\varphi} v_h, \end{array} \end{equation} where the force term can be transformed as follows: \begin{equation}\label{Sect2_Vorticity_H_Eq_2} \begin{array}{ll} \omega_h\cdot\nabla^{\varphi}_h v_h + \omega^3\partial_z^{\varphi} v_h \\[7pt] = \omega_1 (\partial_1 v_h - \frac{\partial_1\varphi}{\partial_z\varphi}\partial_z v_h) + \omega_2 (\partial_2 v_h - \frac{\partial_2\varphi}{\partial_z\varphi}\partial_z v_h) \\[6pt]\quad + (\partial_1 v^2 - \frac{\partial_1\varphi}{\partial_z\varphi}\partial_z v^2 - \partial_2 v^1 + \frac{\partial_2\varphi}{\partial_z\varphi}\partial_z v^1) \frac{1}{\partial_z\varphi}\partial_z v_h \\[7pt] = \omega_1 \partial_1 v_h + \omega_2 \partial_2 v_h - \omega_1 \frac{\partial_1\varphi}{\partial_z\varphi}\vec{f}^{5,6}[\nabla\varphi](\partial_j v^i) - \omega_2 \frac{\partial_2\varphi}{\partial_z\varphi}\vec{f}^{5,6}[\nabla\varphi](\partial_j v^i) \\[6pt]\quad + [\partial_1 v^2 - \frac{\partial_1\varphi}{\partial_z\varphi}f^6[\nabla\varphi](\partial_j v^i) - \partial_2 v^1 + \frac{\partial_2\varphi}{\partial_z\varphi}f^5[\nabla\varphi](\partial_j v^i)] \frac{1}{\partial_z\varphi}\vec{f}^{5,6}[\nabla\varphi](\partial_j v^i) \\[7pt] = \vec{\textsf{F}}^0[\nabla\varphi](\omega_h,\partial_j v^i), \ j=1,2,\ i=1,2,3, \end{array} \end{equation} where $\vec{\textsf{F}}^0[\nabla\varphi](\omega_h,\partial_j v^i)$ is a quadratic polynomial vector with $\omega_h$ and $\partial_j v^i$, the coefficients are fractions of $\nabla\varphi$, $\omega_h$ has degree one. Namely, $\omega_h$ satisfies the equation $(\ref{Sect1_Vorticity_H_Eq})_1$. Thus, Lemma $\ref{Sect2_Vorticity_H_Eq_BC_Lemma}$ is proved. \end{proof} \subsection{Estimates of Derivatives including Time Derivatives} For the free surface N-S equations $(\ref{Sect1_NS_Eq})$, we develop a priori estimates of tangential derivatives including time derivatives. The estimates for normal derivatives are very different from \cite{Masmoudi_Rousset_2012_FreeBC}. \cite{Masmoudi_Rousset_2012_FreeBC} used the variable $\textsf{S}_n =\Pi \mathcal{S}^{\varphi} v \ensuremath{\textbf{n}}$, while we investigate the vorticity in this paper. $q$ satisfies the elliptic equation with its nonhomogeneous Dirichlet boundary condition \begin{equation}\label{Sect2_Tangential_Estimate_Pressure_1} \left\{\begin{array}{ll} \triangle^{\varphi} q = -\partial_j^{\varphi} v^i \partial_i^{\varphi} v^j, \\[5pt] q|_{z=0} = gh +2\ensuremath{\epsilon}\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}\cdot \ensuremath{\textbf{n}}, \end{array}\right. \end{equation} then it is standard to prove the gradient estimate of $q$: \begin{equation}\label{Sect2_Tangential_Estimate_Pressure_2} \begin{array}{ll} \|\nabla q\|_{X^{m-1}} \ensuremath{\lesssim} \|\partial_i^{\varphi} v^j \partial_j^{\varphi} v^i\|_{X^{m-1}} + \big|q|_{z=0}\big|_{X^{m-1,\frac{1}{2}}} \\[8pt] \ensuremath{\lesssim} \|v\|_{X^{m-1,1}} + \|\partial_z v\|_{X^{m-1}} + g|h|_{X^{m-1,\frac{1}{2}}} + \ensuremath{\epsilon}|v_{z=0}|_{X^{m-1,\frac{3}{2}}} + \ensuremath{\epsilon}|h|_{X^{m-1,\frac{3}{2}}}. 
\end{array} \end{equation} Note that \cite{Masmoudi_Rousset_2012_FreeBC} estimated the pressure by the decomposition $q^{\ensuremath{\epsilon}} = q^{\ensuremath{\epsilon},E} + q^{\ensuremath{\epsilon},NS}$, which satisfy the two systems $(\ref{Sect1_Pressure_EulerPart})$, while our estimate is standard. In order to close the estimates of tangential derivatives of $v$, that is, to bound $\|\partial_t^{\ell} v\|_{L^2}$ and $\sqrt{\ensuremath{\epsilon}}\|\nabla \partial_t^{\ell}\mathcal{Z}^{\alpha}v\|_{L^2}$, we must prove two preliminary lemmas on $h$ by using the kinematic boundary condition $(\ref{Sect1_NS_Eq})_3$. The first preliminary lemma concerns $|\partial_t^{\ell} h|_{L^2}$ with $0\leq\ell\leq m-1$. Note that the estimates of the mixed derivatives $\partial_t^{\ell}\mathcal{Z}^{\alpha} h$ will be obtained when we estimate the mixed derivatives $\partial_t^{\ell}\mathcal{Z}^{\alpha} v$, where $|\alpha|>0$. \begin{lemma}\label{Sect2_Height_Estimates_Lemma} Assume $0\leq\ell\leq m-1$. Then $|\partial_t^{\ell}h|_{L^2}$ satisfies the estimate:
\begin{equation}\label{Sect2_Height_Estimates_Lemma_Eq} \begin{array}{ll} |\partial_t^{\ell}h|_{L^2}^2 \ensuremath{\lesssim} |h_0|_{X^{m-1}}^2 + \int\limits_0^t |h|_{X^{m-1,1}}^2 + \|v\|_{X^{m-1,1}}^2 \,\mathrm{d}t + \|\partial_z v\|_{L^4([0,T],X^{m-1})}^2. \end{array} \end{equation} \end{lemma}
\begin{proof} By the kinematic boundary condition $(\ref{Sect1_NS_Eq})_3$, we have $\partial_t h + v_y\cdot\nabla_y h = v^3$. Applying $\partial_t^{\ell}$ to the above equation, we get
\begin{equation}\label{Sect2_Height_Estimates_Lemma_Eq_1} \begin{array}{ll} \partial_t \partial_t^{\ell}h + v_y\cdot \nabla_y \partial_t^{\ell}h = \partial_t^{\ell}v^3 - [\partial_t^{\ell}, v_y\cdot \nabla_y]h. \end{array} \end{equation}
Multiplying $(\ref{Sect2_Height_Estimates_Lemma_Eq_1})$ by $\partial_t^{\ell}h$ and integrating over $\mathbb{R}^2$, we have
\begin{equation}\label{Sect2_Height_Estimates_Lemma_Eq_2} \begin{array}{ll} \frac{\mathrm{d}}{\mathrm{d}t}\int\limits_{\mathbb{R}^2} |\partial_t^{\ell}h|^2 \,\mathrm{d}y = 2\int\limits_{\mathbb{R}^2}\big( \partial_t^{\ell}v^3 - [\partial_t^{\ell}, v_y\cdot \nabla_y]h \big) \partial_t^{\ell}h \,\mathrm{d}y + \int\limits_{\mathbb{R}^2} |\partial_t^{\ell}h|^2 \nabla_y\cdot v_y \,\mathrm{d}y \\[7pt] \ensuremath{\lesssim} |\partial_t^{\ell}h|_{L^2}^2 + |h|_{X^{\ell,1}}^2 + \big|v|_{z=0}\big|_{X^{\ell}}^2 \\[7pt] \ensuremath{\lesssim} |\partial_t^{\ell}h|_{L^2}^2 + |h|_{X^{m-1,1}}^2 + \|v\|_{X^{m-1,1}}^2 + \|\partial_z v\|_{X^{m-1}}^2. \end{array} \end{equation}
Summing over $\ell$, integrating $(\ref{Sect2_Height_Estimates_Lemma_Eq_2})$ in time and applying the integral form of Gronwall's inequality, we have
\begin{equation}\label{Sect2_Height_Estimates_Lemma_Eq_3} \begin{array}{ll} \int\limits_{\mathbb{R}^2} |\partial_t^{\ell}h|^2 \,\mathrm{d}y \ensuremath{\lesssim} |h_0|_{X^{m-1}}^2 + \int\limits_0^t |h|_{X^{m-1,1}}^2 + \|v\|_{X^{m-1,1}}^2 + \|\partial_z v\|_{X^{m-1}}^2 \,\mathrm{d}t \\[6pt] \ensuremath{\lesssim} |h_0|_{X^{m-1}}^2 + \int\limits_0^t |h|_{X^{m-1,1}}^2 + \|v\|_{X^{m-1,1}}^2 \,\mathrm{d}t + \|\partial_z v\|_{L^4([0,T],X^{m-1})}^2. \end{array} \end{equation}
Thus, Lemma $\ref{Sect2_Height_Estimates_Lemma}$ is proved.
\end{proof} The second preliminary lemma concerns $\sqrt{\ensuremath{\epsilon}}|\partial_t^{\ell}\mathcal{Z}^{\alpha} h|_{\frac{1}{2}}$, by which we bound $\sqrt{\ensuremath{\epsilon}}\|\mathcal{S}^{\varphi}\partial_t^{\ell}\mathcal{Z}^{\alpha}\eta\|_{L^2}$ and then $\sqrt{\ensuremath{\epsilon}}\|\mathcal{S}^{\varphi}\partial_t^{\ell}\mathcal{Z}^{\alpha} v\|_{L^2}$. \begin{lemma}\label{Sect2_Height_Viscous_Estimates_Lemma} Assume $0\leq\ell\leq m-1$. Then $\sqrt{\ensuremath{\epsilon}}|\partial_t^{\ell}\mathcal{Z}^{\alpha}h|_{\frac{1}{2}}$ satisfies the estimate:
\begin{equation}\label{Sect2_Height_Viscous_Estimates_Lemma_Eq} \begin{array}{ll} \ensuremath{\epsilon}|h|_{X^{m-1,\frac{3}{2}}}^2 \leq \ensuremath{\epsilon}|h_0|_{X^{m-1,\frac{3}{2}}}^2 + \int\limits_0^t|h|_{X^{m-1,1}}^2 + \ensuremath{\epsilon}\sum\limits_{\ell\leq m-1,\ell+|\alpha|\leq m}|\nabla V^{\ell,\alpha}|_{L^2}^2 \,\mathrm{d}t. \end{array} \end{equation} \end{lemma}
\begin{proof} Let $\Lambda$ be the differential operator in $y$ whose Fourier multiplier is $(1+|\xi|^2)^{\frac{1}{2}}$, so that $|\Lambda^{\frac{1}{2}} h|_{L^2} = |h|_{\frac{1}{2}}$. By the kinematic boundary condition $(\ref{Sect1_NS_Eq})_3$, we have $\partial_t h + v_y\cdot\nabla_y h = v^3$. Applying $\partial_t^{\ell}\mathcal{Z}^{\alpha}\Lambda^{\frac{1}{2}}$ to the above equation, we get
\begin{equation}\label{Sect2_Height_Viscous_Estimates_Lemma_Eq_1} \begin{array}{ll} \partial_t \partial_t^{\ell}\mathcal{Z}^{\alpha}\Lambda^{\frac{1}{2}} h + v_y\cdot \nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha}\Lambda^{\frac{1}{2}} h = \partial_t^{\ell}\mathcal{Z}^{\alpha}\Lambda^{\frac{1}{2}}v^3 - [\partial_t^{\ell}\mathcal{Z}^{\alpha}\Lambda^{\frac{1}{2}}, v_y\cdot \nabla_y]h. \end{array} \end{equation}
Multiplying $(\ref{Sect2_Height_Viscous_Estimates_Lemma_Eq_1})$ by $\ensuremath{\epsilon}\partial_t^{\ell}\mathcal{Z}^{\alpha}\Lambda^{\frac{1}{2}} h$ and integrating over $\mathbb{R}^2$, we have
\begin{equation}\label{Sect2_Height_Viscous_Estimates_Lemma_Eq_2} \begin{array}{ll} \ensuremath{\epsilon}\frac{\mathrm{d}}{\mathrm{d}t}\int\limits_{\mathbb{R}^2} |\partial_t^{\ell}\mathcal{Z}^{\alpha}\Lambda^{\frac{1}{2}} h|^2 \,\mathrm{d}y = 2\ensuremath{\epsilon}\int\limits_{\mathbb{R}^2}\big(\partial_t^{\ell}\mathcal{Z}^{\alpha}\Lambda^{\frac{1}{2}} v^3 - [\partial_t^{\ell}\mathcal{Z}^{\alpha}\Lambda^{\frac{1}{2}}, v_y\cdot \nabla_y]h \big) \partial_t^{\ell}\mathcal{Z}^{\alpha}\Lambda^{\frac{1}{2}} h \,\mathrm{d}y \\[7pt]\quad + \ensuremath{\epsilon}\int\limits_{\mathbb{R}^2} |\partial_t^{\ell}\mathcal{Z}^{\alpha}\Lambda^{\frac{1}{2}}h|^2 \nabla_y\cdot v_y \,\mathrm{d}y \\[7pt] \ensuremath{\lesssim} \ensuremath{\epsilon}|h|_{X^{m-1,\frac{3}{2}}}^2 + \ensuremath{\epsilon}\big|v|_{z=0}\big|_{X^{m-1,\frac{3}{2}}}^2 + \ensuremath{\epsilon}|\partial_t^{\ell}\mathcal{Z}^{\alpha}\Lambda^{\frac{1}{2}} h|_{L^2}^2\\[7pt] \ensuremath{\lesssim} \ensuremath{\epsilon}|h|_{X^{m-1,\frac{3}{2}}}^2 + \ensuremath{\epsilon}\|v\|_{X_{tan}^{m-1,2}}^2 + \ensuremath{\epsilon}\|\partial_z v\|_{X_{tan}^{m-1,1}}^2 + \ensuremath{\epsilon}|\partial_t^{\ell}\mathcal{Z}^{\alpha}\Lambda^{\frac{1}{2}} h|_{L^2}^2\\[8pt] \ensuremath{\lesssim} \ensuremath{\epsilon}|h|_{X^{m-1,\frac{3}{2}}}^2 + \sum\limits_{\ell\leq m-1,\ell+|\alpha|\leq m}(\ensuremath{\epsilon}\|\nabla_y V^{\ell,\alpha}\|_{L^2}^2 + \ensuremath{\epsilon}\|\nabla_y\partial_t^{\ell}\mathcal{Z}^{\alpha}\eta\|_{L^2}^2) \\[12pt]\quad + \sum\limits_{\ell\leq m-1,\ell+|\alpha|\leq m}\big[\ensuremath{\epsilon}\|\partial_z V^{\ell,\alpha}\|_{L^2}^2 +
\|\partial_z^{\varphi} v\|_{L^{\infty}}^2 \cdot \ensuremath{\epsilon}\|\partial_z \partial_t^{\ell}\mathcal{Z}^{\alpha}\eta\|_{L^2}^2 \\[12pt]\quad + (\sqrt{\ensuremath{\epsilon}}\|\partial_{zz}^{\varphi} v\|_{L^{\infty}})^2 \|\partial_t^{\ell}\mathcal{Z}^{\alpha}\eta\|_{L^2}^2 + \ensuremath{\epsilon}|\partial_t^{\ell}\mathcal{Z}^{\alpha}\Lambda^{\frac{1}{2}} h|_{L^2}^2 \big] \\[7pt] \ensuremath{\lesssim} \ensuremath{\epsilon}|h|_{X^{m-1,\frac{3}{2}}}^2 +|h|_{X^{m-1,1}}^2 + \ensuremath{\epsilon}\sum\limits_{\ell\leq m-1,\ell+|\alpha|\leq m}|\nabla V^{\ell,\alpha}|_{L^2}^2. \end{array} \end{equation} Sum $\ell,\alpha$, integrate $(\ref{Sect2_Height_Viscous_Estimates_Lemma_Eq_2})$ in time and apply the integral form of Gronwall's inequality, we have $(\ref{Sect2_Height_Viscous_Estimates_Lemma_Eq})$. Thus, Lemma $\ref{Sect2_Height_Viscous_Estimates_Lemma}$ is proved. \end{proof} The following lemma concerns the estimates of tangential derivatives. The proof is different from \cite{Masmoudi_Rousset_2012_FreeBC} when we estimate $\partial_t^{\ell} v$ where $1\leq \ell\leq m-1$, since $\|\partial_t^{\ell} q\|$ has no bound for infinite fluid depth. \begin{lemma}\label{Sect2_Tangential_Estimate_Lemma} Assume the conditions are the same with those of Proposition $\ref{Sect1_Proposition_TimeRegularity}$, then $v$ and $h$ satisfy the a priori estimate: \begin{equation}\label{Sect2_Tangential_Estimate} \begin{array}{ll} \|v\|_{X^{m-1,1}}^2 + |h|_{X^{m-1,1}}^2 + \ensuremath{\epsilon} |h|_{X^{m-1,\frac{3}{2}}}^2 + \ensuremath{\epsilon}\int\limits_0^t \|\nabla v\|_{X^{m-1,1}}^2 \,\mathrm{d}t \\[5pt] \ensuremath{\lesssim} \|v_0\|_{X^{m-1,1}}^2 + |h_0|_{X^{m-1,1}}^2 + \ensuremath{\epsilon} |h_0|_{X^{m-1,\frac{3}{2}}}^2 + \|\partial_z v\|_{L^4([0,T],X^{m-1})}^2. \end{array} \end{equation} \end{lemma} \begin{proof} Apply $\partial_t^{\ell}\mathcal{Z}^{\alpha}$ to $(\ref{Sect1_NS_Eq})$, we use the following commutator (see \cite{Masmoudi_Rousset_2012_FreeBC}): \begin{equation}\label{Sect2_Tangential_Estimate_1} \begin{array}{ll} [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \partial_i^{\varphi}] f = - \partial_z^{\varphi} f\, \partial_i^{\varphi} (\partial_t^{\ell}\mathcal{Z}^{\alpha}\eta) + b.t.\, ,\quad i=t,1,2,3. \end{array} \end{equation} where the abbreviation $b.t.$ represents bounded terms in this paper. Note that $\partial_t^m \varphi$ and $\partial_t^m h$ are bounded in $L^4([0,T],L^2)$, thus they are also represented by $b.t.$ in $(\ref{Sect2_Tangential_Estimate_1})$. Similar to \cite{Masmoudi_Rousset_2012_FreeBC}, we choose the Alinhac's good unknown $(\ref{Sect1_Good_Unknown_1})$ as our variable, then $V^{\ell,\alpha}$ and $Q^{\ell,\alpha}$ satisfies \begin{equation}\label{Sect2_Tangential_Estimate_5} \left\{\begin{array}{ll} \partial_t^{\varphi} V^{\ell,\alpha} + v\cdot\nabla^{\varphi} V^{\ell,\alpha} + \nabla^{\varphi} Q^{\ell,\alpha} -2\ensuremath{\epsilon}\nabla^{\varphi}\cdot\mathcal{S}^{\varphi} V^{\ell,\alpha} \\[5pt]\quad = -\partial_t^{\varphi}\partial_z^{\varphi} v \, \partial_t^{\ell}\mathcal{Z}^{\alpha} \eta - v\cdot\nabla^{\varphi} \partial_z^{\varphi} v \, \partial_t^{\ell}\mathcal{Z}^{\alpha} \eta - \nabla^{\varphi} \partial_z^{\varphi} q \, \partial_t^{\ell}\mathcal{Z}^{\alpha} \eta \\[7pt]\quad + 2\ensuremath{\epsilon}\nabla^{\varphi}\cdot (\mathcal{S}^{\varphi} \partial_z^{\varphi} v \, \partial_t^{\ell}\mathcal{Z}^{\alpha} \eta) - 2\ensuremath{\epsilon} \partial_z^{\varphi}\mathcal{S}^{\varphi} v_y\cdot \nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} \eta + b.t. 
\, , \\[10pt] \nabla^{\varphi}\cdot V^{\ell,\alpha} = - (\nabla^{\varphi}\cdot\partial_z^{\varphi} v) \, \partial_t^{\ell}\mathcal{Z}^{\alpha} \eta + b.t.\, = b.t. \, , \\[10pt] \partial_t \partial_t^{\ell}\mathcal{Z}^{\alpha} h + v_y \cdot\nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} h = V^{\ell,\alpha} \cdot \ensuremath{\textbf{N}} + b.t.\, , \\[10pt] Q^{\ell,\alpha}\ensuremath{\textbf{N}} -2\ensuremath{\epsilon} \mathcal{S}^{\varphi} V^{\ell,\alpha}\,\ensuremath{\textbf{N}} \\[7pt]\quad = (g - \partial_z^{\varphi}q) \partial_t^{\ell}\mathcal{Z}^{\alpha}h\ensuremath{\textbf{N}} + 2\ensuremath{\epsilon} (\mathcal{S}^{\varphi}\partial_z^{\varphi} v \,\ensuremath{\textbf{N}})\, \partial_t^{\ell}\mathcal{Z}^{\alpha} h - [\partial_t^{\ell}\mathcal{Z}^{\alpha},2\ensuremath{\epsilon} \mathcal{S}^{\varphi}v\ensuremath{\textbf{n}}\cdot\ensuremath{\textbf{n}},\ensuremath{\textbf{N}}] \\[8pt]\quad + (2\ensuremath{\epsilon} \mathcal{S}^{\varphi}v - 2\ensuremath{\epsilon} \mathcal{S}^{\varphi}v\ensuremath{\textbf{n}}\cdot\ensuremath{\textbf{n}})\,\partial_t^{\ell}\mathcal{Z}^{\alpha}\ensuremath{\textbf{N}} +2\ensuremath{\epsilon} [\partial_t^{\ell}\mathcal{Z}^{\alpha},\mathcal{S}^{\varphi}v, \ensuremath{\textbf{N}}] + b.t. \, , \\[10pt] (\partial_t^{\ell}\mathcal{Z}^{\alpha}v, \partial_t^{\ell}\mathcal{Z}^{\alpha}h)|_{t=0} = (\partial_t^{\ell}\mathcal{Z}^{\alpha}v_0, \partial_t^{\ell}\mathcal{Z}^{\alpha}h_0). \end{array}\right. \end{equation} When $|\alpha|\geq 1, \, 1\leq \ell+|\alpha|\leq m$, we develop the $L^2$ estimate of $V^{\ell,\alpha}$. The estimates are similar to \cite{Masmoudi_Rousset_2012_FreeBC}, but we do not use $g-\partial_z^{\varphi} q^{E}$. \begin{equation}\label{Sect2_Tangential_Estimate_6} \begin{array}{ll} \frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int\limits_{\mathbb{R}^3_{-}} |V^{\ell,\alpha}|^2 \,\mathrm{d}\mathcal{V}_t - \int\limits_{\mathbb{R}^3_{-}} Q^{\ell,\alpha} \, \nabla^{\varphi}\cdot V^{\ell,\alpha} \,\mathrm{d}\mathcal{V}_t + 2\ensuremath{\epsilon} \int\limits_{\mathbb{R}^3_{-}} |\mathcal{S}^{\varphi}V^{\ell,\alpha}|^2 \,\mathrm{d}\mathcal{V}_t \\[14pt] \leq \int\limits_{\{z=0\}} (2\ensuremath{\epsilon} \mathcal{S}^{\varphi}V^{\ell,\alpha}\ensuremath{\textbf{N}} - Q^{\ell,\alpha}\ensuremath{\textbf{N}})\cdot V^{\ell,\alpha} \mathrm{d}y + \|\partial_z v\|_{X^{m-1}}^2 + \|\nabla q\|_{X^{m-1}}^2 + \text{b.t.} \\[14pt] \leq -\int\limits_{\{z=0\}} (g - \partial_z^{\varphi}q) \partial_t^{\ell}\mathcal{Z}^{\alpha}h\ensuremath{\textbf{N}}\cdot V^{\ell,\alpha} \mathrm{d}y + \|\partial_z v\|_{X^{m-1}}^2 + \|\nabla q\|_{X^{m-1}}^2 + \text{b.t.} \\[14pt] \leq -\int\limits_{\{z=0\}} (g - \partial_z^{\varphi}q) \partial_t^{\ell}\mathcal{Z}^{\alpha}h (\partial_t \partial_t^{\ell}\mathcal{Z}^{\alpha} h + v_y \cdot\nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} h) \mathrm{d}y + \|\partial_z v\|_{X^{m-1}}^2 \\[11pt]\quad + \|\nabla q\|_{X^{m-1}}^2 + \text{b.t.} \\[10pt] \leq - \frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t} \int\limits_{\{z=0\}} (g - \partial_z^{\varphi}q) |\partial_t^{\ell}\mathcal{Z}^{\alpha}h|^2 \mathrm{d}y + \|\partial_z v\|_{X^{m-1}}^2 + \|\nabla q\|_{X^{m-1}}^2 + \text{b.t.}, \end{array} \end{equation} then \begin{equation}\label{Sect2_Tangential_Estimate_7} \begin{array}{ll} \frac{\mathrm{d}}{\mathrm{d}t}\int\limits_{\mathbb{R}^3_{-}} |V^{\ell,\alpha}|^2 \,\mathrm{d}\mathcal{V}_t + \frac{\mathrm{d}}{\mathrm{d}t} \int\limits_{\{z=0\}} (g - \partial_z^{\varphi}q) |\partial_t^{\ell}\mathcal{Z}^{\alpha}h|^2 \mathrm{d}y + \ensuremath{\epsilon} 
\int\limits_{\mathbb{R}^3_{-}} |\mathcal{S}^{\varphi}V^{\ell,\alpha}|^2 \,\mathrm{d}\mathcal{V}_t \\[14pt] \ensuremath{\lesssim} \|\partial_z v\|_{X^{m-1}}^2 + \|\nabla q\|_{X^{m-1}}^2 + \text{b.t.} \end{array} \end{equation} Since $(g - \partial_z^{\varphi}q)|_{z=0} \geq c_0>0$, the a priori estimates can be closed. Thus, \begin{equation}\label{Sect2_Tangential_Estimate_8} \begin{array}{ll} \|\partial_t^{\ell}\mathcal{Z}^{\alpha}v\|^2 + |\partial_t^{\ell}\mathcal{Z}^{\alpha} h|^2 + \ensuremath{\epsilon} |\partial_t^{\ell}\mathcal{Z}^{\alpha} h|_{\frac{1}{2}}^2 + \ensuremath{\epsilon}\int\limits_0^t \|\nabla \partial_t^{\ell}\mathcal{Z}^{\alpha}v\|^2 \,\mathrm{d}t \\[7pt] \ensuremath{\lesssim} \|v_0\|_{X^{m-1,1}}^2 + |h_0|_{X^{m-1,1}}^2 + \ensuremath{\epsilon} |h_0|_{X^{m-1,\frac{3}{2}}}^2 + \int\limits_0^T \|\partial_z v\|_{X^{m-1}}^2 + \|\nabla q\|_{X^{m-1}}^2 \,\mathrm{d}t, \end{array} \end{equation} where we have used the estimate of $\ensuremath{\epsilon} |\partial_t^{\ell}\mathcal{Z}^{\alpha} h|_{\frac{1}{2}}^2$ proved in Lemma $\ref{Sect2_Height_Viscous_Estimates_Lemma}$. When $|\alpha|=0$ and $0\leq\ell\leq m-1$, we have no bound on $q$ and $\partial_t^{\ell} q$. Instead of using Hardy's inequality $(\ref{Sect1_HardyIneq})$, we estimate $V^{\ell,0}$ by a simpler method: we use neither the variable $Q^{\ell,\alpha}$ nor integration by parts in the pressure terms. Moreover, the divergence-free condition and the dynamic boundary condition are not used here. Then \begin{equation}\label{Sect2_Tangential_Estimate_10} \begin{array}{ll} \frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int\limits_{\mathbb{R}^3_{-}} |V^{\ell,0}|^2 \,\mathrm{d}\mathcal{V}_t + 2\ensuremath{\epsilon} \int\limits_{\mathbb{R}^3_{-}} |\mathcal{S}^{\varphi}V^{\ell,0}|^2 \,\mathrm{d}\mathcal{V}_t \\[14pt] \leq - \int\limits_{\mathbb{R}^3_{-}} \partial_t^{\ell}\nabla^{\varphi} q \cdot V^{\ell,0} \,\mathrm{d}\mathcal{V}_t + \int\limits_{\{z=0\}} 2\ensuremath{\epsilon} \mathcal{S}^{\varphi}V^{\ell,0}\ensuremath{\textbf{N}} \cdot V^{\ell,0} \mathrm{d}y + \|\partial_z v\|_{X^{m-1}}^2 + \text{b.t.} \\[14pt] \ensuremath{\lesssim} \|\partial_t^{\ell}\nabla q\|_{L^2}^2 + \ensuremath{\epsilon}\int\limits_{\{z=0\}} |V^{\ell,0}|^2 \mathrm{d}y + 4\ensuremath{\epsilon}\int\limits_{\{z=0\}} |\mathcal{S}^{\varphi}V^{\ell,0}|^2 \mathrm{d}y + \|\partial_z v\|_{X^{m-1}}^2 + \text{b.t.} \\[14pt] \ensuremath{\lesssim} \|\partial_t^{\ell}\nabla q\|_{L^2}^2 + \ensuremath{\epsilon} \|\partial_t^{\ell} v|_{z=0}\|_{L^2}^2 + \ensuremath{\epsilon} |\partial_t^{\ell} h|_{L^2}^2 + \ensuremath{\epsilon}\big|\partial_t^{\ell}\partial_y v|_{z=0}\big|_{L^2}^2 + \ensuremath{\epsilon}\big|\partial_t^{\ell}\partial_z v|_{z=0}\big|_{L^2}^2 \\[8pt]\quad + \ensuremath{\epsilon} |\partial_t^{\ell} h|_{X^{0,\frac{1}{2}}}^2 + \|\partial_z v\|_{X^{m-1}}^2 + \text{b.t.} \end{array} \end{equation} Since $\partial_z v|_{z=0}$ can be expressed in terms of tangential derivatives (see $(\ref{Sect2_Vorticity_H_BC_10})$), namely
\begin{equation}\label{Sect2_Tangential_Estimate_11} \begin{array}{ll} \partial_z v^1 = f^5[\nabla\varphi](\partial_j v^i), \ j=1,2,\ i=1,2,3, \\[7pt] \partial_z v^2 = f^6[\nabla\varphi](\partial_j v^i), \ j=1,2,\ i=1,2,3, \\[7pt] \partial_z v^3 = \partial_1\varphi\partial_z v^1 + \partial_2\varphi\partial_z v^2 - \partial_z\varphi(\partial_1 v^1 + \partial_2 v^2) , \end{array} \end{equation} then \begin{equation}\label{Sect2_Tangential_Estimate_12} \begin{array}{ll} \frac{\mathrm{d}}{\mathrm{d}t}\int\limits_{\mathbb{R}^3_{-}} |V^{\ell,0}|^2 \,\mathrm{d}\mathcal{V}_t + \ensuremath{\epsilon} \int\limits_{\mathbb{R}^3_{-}} |\mathcal{S}^{\varphi}V^{\ell,0}|^2 \,\mathrm{d}\mathcal{V}_t \\[9pt] \ensuremath{\lesssim} \|\partial_t^{\ell}\nabla q\|_{L^2}^2 + \ensuremath{\epsilon}\big|\partial_t^{\ell}\partial_y v|_{z=0}\big|_{L^2}^2 + \ensuremath{\epsilon} |\partial_t^{\ell} h|_{X^{0,\frac{1}{2}}}^2 + \|\partial_z v\|_{X^{m-1}}^2 + \text{b.t.} \\[9pt] \ensuremath{\lesssim} \|\partial_t^{\ell}\nabla q\|_{L^2}^2 + \ensuremath{\epsilon}\|\nabla v\|_{X^{m-1,1}}^2 + \ensuremath{\epsilon} |\partial_t^{\ell} h|_{X^{0,\frac{1}{2}}}^2 + \|\partial_z v\|_{X^{m-1}}^2 + \text{b.t.} \ . \end{array} \end{equation} Similarly, we have \begin{equation}\label{Sect2_Tangential_Estimate_13} \begin{array}{ll} \|V^{\ell,0}\|^2 + \ensuremath{\epsilon}\int\limits_0^t \|\nabla V^{\ell,0}\|^2 \,\mathrm{d}t \ensuremath{\lesssim} \|v_0\|_{X^{m-1,1}}^2 + |h_0|_{X^{m-1,1}}^2 + \ensuremath{\epsilon} |h_0|_{X^{m-1,\frac{3}{2}}}^2 \\[6pt]\quad + \|\nabla q\|_{L^4([0,T],X^{m-1})}^2 + \|\partial_z v\|_{L^4([0,T],X^{m-1})}^2 + \ensuremath{\epsilon}\int\limits_0^T \|\nabla v\|_{X^{m-1,1}}^2 \,\mathrm{d}t. \end{array} \end{equation} Combining $(\ref{Sect2_Tangential_Estimate_13})$ and Lemma $\ref{Sect2_Height_Estimates_Lemma}$, we obtain the estimate of $\partial_t^{\ell} v$: \begin{equation}\label{Sect2_Tangential_Estimate_14} \begin{array}{ll} \int\limits_{\mathbb{R}^2} |\partial_t^{\ell}v|^2 \,\mathrm{d}y \ensuremath{\lesssim} \|v_0\|_{X^{m-1,1}}^2 + |h_0|_{X^{m-1,1}}^2 + \ensuremath{\epsilon} |h_0|_{X^{m-1,\frac{3}{2}}}^2 + \|\partial_z v\|_{L^4([0,T],X^{m-1})}^2 \\[7pt]\quad + \|\nabla q\|_{L^4([0,T],X^{m-1})}^2 + \int\limits_0^t |h|_{X^{m-1,1}}^2 + \|v\|_{X^{m-1,1}}^2 \,\mathrm{d}t + \ensuremath{\epsilon}\int\limits_0^T \|\nabla v\|_{X^{m-1,1}}^2 \,\mathrm{d}t. \end{array} \end{equation} Sum $\ell$ and $\alpha$ in $(\ref{Sect2_Tangential_Estimate_8}),(\ref{Sect2_Tangential_Estimate_14})$ and Lemma $\ref{Sect2_Height_Estimates_Lemma}$, we get the estimate $(\ref{Sect2_Tangential_Estimate})$. Thus, Lemma $\ref{Sect2_Tangential_Estimate_Lemma}$ is proved. \end{proof} In order to study $\partial_z v$, \cite{Masmoudi_Rousset_2012_NavierBC} estimated the quantity $\omega_h - 2\alpha u_h^{\bot}$ and got $\|\partial_z v\|_{L^{\infty}([0,T],H_{co}^{m-1})}$. However, for the free surface Navier-Stokes equations $(\ref{Sect1_NS_Eq})$, it is impossible to obtain such a higher regularity of $\partial_z v$. Similar to \cite{Masmoudi_Rousset_2012_FreeBC}, we estimate $\|\partial_z v\|_{L^4([0,T],X^{m-1})}^2$ to close energy estimates. \begin{lemma}\label{Sect2_Vorticity_Lemma} Assume $v$ and $\omega$ are the velocity and vorticity of the free surface Navier-Stokes equations $(\ref{Sect1_NS_Eq})$ respectively. 
$\omega_h$ satisfies the following estimate: \begin{equation}\label{Sect2_Vorticity_Estimate} \begin{array}{ll} \|\omega_h\|_{L^4([0,T],X^{m-1})}^2 + \|\partial_z v\|_{L^4([0,T],X^{m-1})}^2\\[7pt] \ensuremath{\lesssim} \big\|\omega_h|_{t=0}\big\|_{X^{m-1}}^2 + \int\limits_0^T\|v\|_{X^{m-1,1}}^2 + |h|_{X^{m-1,1}}^2 \,\mathrm{d}t + \ensuremath{\epsilon}\int\limits_0^t\|\partial_z v\|_{X^{m-1,1}}^2\,\mathrm{d}t. \end{array} \end{equation} \end{lemma} \begin{proof} By Lemma $\ref{Sect2_Vorticity_H_Eq_BC_Lemma}$, we have the equations of $\omega_h$: \begin{equation}\label{Sect2_Vorticity_Estimate_1} \left\{\begin{array}{ll} \partial_t^{\varphi} \omega_h + v\cdot\nabla^{\varphi}\omega_h - \ensuremath{\epsilon}\triangle^{\varphi}\omega_h = \vec{\textsf{F}}^0[\nabla\varphi](\omega_h,\partial_j v^i), \\[8pt] \omega_h|_{z=0} = \vec{\textsf{F}}^{1,2}[\nabla\varphi](\partial_j v^i), \\[9pt] \omega_h|_{t=0} = (\omega_0^1, \omega_0^2)^{\top}. \end{array}\right. \end{equation} where $j=1,2,\ i=1,2,3$. Similar to \cite{Masmoudi_Rousset_2012_FreeBC}, we decompose $\omega_h = \omega_h^{nhom} + \omega_h^{hom}$, such that $\omega_h^{nhom}$ satisfies the nonhomogeneous equations: \begin{equation}\label{Sect2_Vorticity_Estimate_2} \left\{\begin{array}{ll} \partial_t^{\varphi} \omega_h^{nhom} + v\cdot\nabla^{\varphi}\omega_h^{nhom} - \ensuremath{\epsilon}\triangle^{\varphi}\omega_h^{nhom} = \vec{\textsf{F}}^0[\nabla\varphi](\omega_h,\partial_j v^i), \\[8pt] \omega_h^{nhom}|_{z=0} = 0, \\[6pt] \omega_h^{nhom}|_{t=0} = (\omega_0^1, \omega_0^2)^{\top}, \end{array}\right. \end{equation} and $\omega_h^{hom}$ satisfies the homogeneous equations: \begin{equation}\label{Sect2_Vorticity_Estimate_3} \left\{\begin{array}{ll} \partial_t^{\varphi} \omega_h^{hom} + v\cdot\nabla^{\varphi}\omega_h^{hom} - \ensuremath{\epsilon}\triangle^{\varphi}\omega_h^{hom} = 0, \\[7pt] \omega_h^{hom}|_{z=0} = \vec{\textsf{F}}^{1,2}[\nabla\varphi](\partial_j v^i), \\[6pt] \omega_h^{hom}|_{t=0} = 0. \end{array}\right. \end{equation} $(\ref{Sect2_Vorticity_Estimate_2})_1$ is equivalent to \begin{equation}\label{Sect2_Vorticity_Estimate_4} \begin{array}{ll} \partial_t \omega_h^{nhom} + v_y\cdot\nabla_y\omega_h^{nhom} + V_z \partial_z \omega_h^{nhom} - \ensuremath{\epsilon}\triangle^{\varphi}\omega_h^{nhom} = \vec{\textsf{F}}^0[\nabla\varphi](\omega_h,\partial_j v^i). \end{array} \end{equation} where $V_z = \frac{1}{\partial_z\varphi}(v\cdot\ensuremath{\textbf{N}} -\partial_t\varphi) = \frac{1}{\partial_z\varphi}(v^3 -\partial_t\eta -v_y\cdot\nabla_y\eta)$, see \cite{Masmoudi_Rousset_2012_FreeBC}. 
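Let us also record a standard observation behind the conormal structure of the transport term $V_z\partial_z$ (a short verification, assuming, as is usual for this type of extension, that $\eta|_{z=0}=h$): by the kinematic boundary condition $\partial_t h + v_y\cdot\nabla_y h = v^3$,
\begin{equation*}
V_z\big|_{z=0} = \frac{1}{\partial_z\varphi}\big(v^3 - \partial_t h - v_y\cdot\nabla_y h\big)\Big|_{z=0} = 0,
\end{equation*}
so $V_z$ vanishes on the free surface.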
Apply $\partial_t^{\ell}\mathcal{Z}^{\alpha}$, where $\ell+|\alpha|\leq m-1$, to the equations $(\ref{Sect2_Vorticity_Estimate_4})$, we get \begin{equation}\label{Sect2_Vorticity_Estimate_6} \left\{\begin{array}{ll} \partial_t \partial_t^{\ell}\mathcal{Z}^{\alpha}\omega_h^{nhom} + v_y\cdot\nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha}\omega_h^{nhom} + V_z\partial_z \partial_t^{\ell}\mathcal{Z}^{\alpha}\omega_h^{nhom} - \ensuremath{\epsilon}\triangle^{\varphi}\partial_t^{\ell}\mathcal{Z}^{\alpha}\omega_h^{nhom} \\[9pt]\quad = \partial_t^{\ell}\mathcal{Z}^{\alpha}\vec{\textsf{F}}^0[\nabla\varphi](\omega_h,\partial_j v^i) - [\partial_t^{\ell}\mathcal{Z}^{\alpha}, v_y\cdot\nabla_y] \omega_h^{nhom} - [\partial_t^{\ell}\mathcal{Z}^{\alpha}, V_z\partial_z] \omega_h^{nhom} \\[9pt]\quad + \ensuremath{\epsilon}\nabla^{\varphi} \cdot [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \nabla^{\varphi}]\omega_h^{nhom} + \ensuremath{\epsilon}[\partial_t^{\ell}\mathcal{Z}^{\alpha}, \nabla^{\varphi}\cdot]\nabla^{\varphi}\omega_h^{nhom} , \\[12pt] \partial_t^{\ell}\mathcal{Z}^{\alpha}\omega_h^{nhom}|_{z=0} = 0, \\[9pt] \partial_t^{\ell}\mathcal{Z}^{\alpha}\omega_h^{nhom}|_{t=0} = (\partial_t^{\ell}\mathcal{Z}^{\alpha}\omega_0^1, \partial_t^{\ell}\mathcal{Z}^{\alpha}\omega_0^2)^{\top}. \end{array}\right. \end{equation} Develop the $L^2$ estimate of $(\ref{Sect2_Vorticity_Estimate_6})$, we get \begin{equation}\label{Sect2_Vorticity_Estimate_7} \begin{array}{ll} \frac{\mathrm{d}}{\mathrm{d}t} \|\partial_t^{\ell}\mathcal{Z}^{\alpha} \omega_h^{nhom}\|_{L^2}^2 + 2\ensuremath{\epsilon}\|\nabla^{\varphi} \partial_t^{\ell}\mathcal{Z}^{\alpha} \omega_h^{nhom}\|_{L^2}^2 \\[10pt] \ensuremath{\lesssim} \|\partial_t^{\ell}\mathcal{Z}^{\alpha}\omega_h\|_{L^2}^2 + \|\partial_t^{\ell}\mathcal{Z}^{\alpha}\partial_j v^i\|_{L^2}^2 + \|\partial_t^{\ell}\mathcal{Z}^{\alpha}\nabla\varphi\|_{L^2}^2 + \big\|[\partial_t^{\ell}\mathcal{Z}^{\alpha}, V_z\partial_z] \omega_h^{nhom}\big\|_{L^2}^2\\[10pt]\quad + \ensuremath{\epsilon}\int\limits_{\mathbb{R}^3_{-}} \nabla^{\varphi} \cdot [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \nabla^{\varphi}]\omega_h^{nhom} \cdot \partial_t^{\ell}\mathcal{Z}^{\alpha} \omega_h^{nhom}\,\mathrm{d}\mathcal{V}_t \\[10pt]\quad + \ensuremath{\epsilon}\int\limits_{\mathbb{R}^3_{-}}[\partial_t^{\ell}\mathcal{Z}^{\alpha}, \nabla^{\varphi}\cdot]\nabla^{\varphi}\omega_h^{nhom} \cdot \partial_t^{\ell}\mathcal{Z}^{\alpha} \omega_h^{nhom}\,\mathrm{d}\mathcal{V}_t + b.t.\, . \end{array} \end{equation} Now we estimate the last three terms on the right hand of $(\ref{Sect2_Vorticity_Estimate_7})$, the first term is \begin{equation}\label{Sect2_Vorticity_Estimate_7_1} \begin{array}{ll} \big\|[\partial_t^{\ell}\mathcal{Z}^{\alpha}, V_z\partial_z] \omega_h^{nhom}\big\|_{L^2}^2\\[6pt] = \sum\limits_{\ell_1 + |\alpha_1|>0}\big\|\frac{1-z}{z}\partial_t^{\ell_1}\mathcal{Z}^{\alpha_1} [\frac{1}{\partial_z\varphi}(v^3 -\eta_t - v\cdot\nabla_y \eta)] \cdot \frac{z}{1-z}\partial_t^{\ell_2}\mathcal{Z}^{\alpha_2}\partial_z \omega_h^{nhom}\big\|_{L^2}^2\\[11pt] \ensuremath{\lesssim} \big\|\partial_z\partial_t^{\ell}\mathcal{Z}^{\alpha} [\frac{1}{A + \partial_z(\psi\ast h)}(v^3 - \psi\ast h_t - v\cdot\nabla_y (\psi\ast h))\big\|_{L^2}^2 + b.t. \\[10pt] \ensuremath{\lesssim} \big\|\partial_t^{\ell}\mathcal{Z}^{\alpha} \partial_z[\frac{1}{A + \partial_z(\psi\ast h)}(v^3 - \psi\ast (v^3 - v_y\cdot\nabla_y h) - v\cdot\nabla_y (\psi\ast h))\big\|_{L^2}^2 + b.t. 
\\[12pt] \ensuremath{\lesssim} |\partial_z^{m+1}\psi|_{L^1(\mathrm{d}z)}^2 |\partial_t^{\ell}\partial_y^{\alpha_y} h|_{L^2}^2 + \|\partial_t^{\ell}\mathcal{Z}^{\alpha}\partial_z(\frac{1}{\partial_z\varphi} v)\|_{L^2}^2 \\[10pt]\quad + |\partial_z^{m}\psi\|_{L^1(\mathrm{d}z)}^2 \|\partial_t^{\ell}\partial_y^{\alpha_y}\partial_y h\|_{L^2}^2 + b.t. \\[10pt] \ensuremath{\lesssim} |h|_{X^{m-1,1}}^2 + \|\partial_z v\|_{X^{m-1}}^2 + b.t. \,, \end{array} \end{equation} the second term is \begin{equation}\label{Sect2_Vorticity_Estimate_7_2} \begin{array}{ll} \ensuremath{\epsilon}\int\limits_{\mathbb{R}^3_{-}} \nabla^{\varphi} \cdot [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \nabla^{\varphi}]\omega_h^{nhom} \cdot \partial_t^{\ell}\mathcal{Z}^{\alpha} \omega_h^{nhom}\,\mathrm{d}\mathcal{V}_t \\[8pt] = \sum\limits_{i=1}^3 \ensuremath{\epsilon}\int\limits_{\mathbb{R}^3_{-}} [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \partial_i^{\varphi}]\omega_h^{nhom} \cdot \partial_i^{\varphi}\partial_t^{\ell}\mathcal{Z}^{\alpha} \omega_h^{nhom}\,\mathrm{d}\mathcal{V}_t \\[10pt] = \sum\limits_{i=1}^3 \ensuremath{\epsilon}\int\limits_{\mathbb{R}^3_{-}} \partial_i^{\varphi}\partial_t^{\ell}\mathcal{Z}^{\alpha}\eta\, \partial_z^{\varphi}\omega_h^{nhom} \cdot \partial_i^{\varphi}\partial_t^{\ell}\mathcal{Z}^{\alpha} \omega_h^{nhom}\,\mathrm{d}\mathcal{V}_t + b.t. \hspace{0.7cm} \end{array} \end{equation} \begin{equation*} \begin{array}{ll} = \sum\limits_{i=1}^3 \ensuremath{\epsilon}\int\limits_{\mathbb{R}^3_{-}} \frac{1}{z}\partial_i^{\varphi}\partial_t^{\ell}\mathcal{Z}^{\alpha}(\psi\ast h)\, \frac{1}{\partial_z\varphi}\mathcal{Z}^3\omega_h^{nhom} \cdot \partial_i^{\varphi}\partial_t^{\ell}\mathcal{Z}^{\alpha} \omega_h^{nhom}\,\mathrm{d}\mathcal{V}_t + b.t.\\[12pt] \ensuremath{\lesssim} |\partial_z^{m+1} \psi|_{L^1(\mathrm{d}z)}^2 |h|_{X^{m-1,1}}^2 + \ensuremath{\epsilon}\|\nabla^{\varphi}\partial_t^{\ell}\mathcal{Z}^{\alpha} \omega_h^{nhom}\|_{L^2}^2 + b.t. \\[10pt] \ensuremath{\lesssim} |h|_{X^{m-1,1}}^2 + \ensuremath{\epsilon}\|\nabla^{\varphi}\partial_t^{\ell}\mathcal{Z}^{\alpha} \omega_h^{nhom}\|_{L^2}^2 + b.t. \,. \end{array} \end{equation*} and the third term is \begin{equation}\label{Sect2_Vorticity_Estimate_7_3} \begin{array}{ll} \ensuremath{\epsilon}\int\limits_{\mathbb{R}^3_{-}}[\partial_t^{\ell}\mathcal{Z}^{\alpha}, \nabla^{\varphi}\cdot]\nabla^{\varphi}\omega_h^{nhom} \cdot \partial_t^{\ell}\mathcal{Z}^{\alpha} \omega_h^{nhom}\,\mathrm{d}\mathcal{V}_t \\[9pt] \ensuremath{\lesssim} \ensuremath{\epsilon}\sum\limits_{i=1}^3 \int\limits_{\mathbb{R}^3_{-}}\partial_i^{\varphi}\partial_t^{\ell}\mathcal{Z}^{\alpha}\eta\partial_z^{\varphi}\partial_i^{\varphi}\omega_h^{nhom} \cdot \partial_t^{\ell}\mathcal{Z}^{\alpha} \omega_h^{nhom}\,\mathrm{d}\mathcal{V}_t + b.t. \\[10pt] \ensuremath{\lesssim} - \ensuremath{\epsilon}\sum\limits_{i=1}^3 \int\limits_{\mathbb{R}^3_{-}}\partial_z^{\varphi}\partial_i^{\varphi}\partial_t^{\ell}\mathcal{Z}^{\alpha}\eta\partial_i^{\varphi}\omega_h^{nhom} \cdot \partial_t^{\ell}\mathcal{Z}^{\alpha} \omega_h^{nhom}\,\mathrm{d}\mathcal{V}_t \\[10pt]\quad - \ensuremath{\epsilon}\sum\limits_{i=1}^3 \int\limits_{\mathbb{R}^3_{-}}\partial_i^{\varphi}\partial_t^{\ell}\mathcal{Z}^{\alpha}\eta\partial_i^{\varphi}\omega_h^{nhom} \cdot \partial_z^{\varphi}\partial_t^{\ell}\mathcal{Z}^{\alpha} \omega_h^{nhom}\,\mathrm{d}\mathcal{V}_t + b.t. 
\\[10pt] \ensuremath{\lesssim} |\partial_z^{m+1} \psi|_{L^1(\mathrm{d}z)}^2 |h|_{X^{m-1,1}}^2 + \ensuremath{\epsilon}\|\nabla^{\varphi}\partial_t^{\ell}\mathcal{Z}^{\alpha} \omega_h^{nhom}\|_{L^2}^2 + b.t. \hspace{1.4cm} \\[9pt] \ensuremath{\lesssim} |h|_{X^{m-1,1}}^2 + \ensuremath{\epsilon}\|\nabla^{\varphi}\partial_t^{\ell}\mathcal{Z}^{\alpha} \omega_h^{nhom}\|_{L^2}^2 + b.t.. \end{array} \end{equation} Plug $(\ref{Sect2_Vorticity_Estimate_7_1}),(\ref{Sect2_Vorticity_Estimate_7_2}),(\ref{Sect2_Vorticity_Estimate_7_3})$ into $(\ref{Sect2_Vorticity_Estimate_7})$, we get \begin{equation}\label{Sect2_Vorticity_Estimate_8} \begin{array}{ll} \frac{\mathrm{d}}{\mathrm{d}t} \|\partial_t^{\ell}\mathcal{Z}^{\alpha} \omega_h^{nhom}\|_{L^2}^2 + 2\ensuremath{\epsilon}\|\nabla^{\varphi} \partial_t^{\ell}\mathcal{Z}^{\alpha} \omega_h^{nhom}\|_{L^2}^2 \\[10pt] \ensuremath{\lesssim} \|\partial_t^{\ell}\mathcal{Z}^{\alpha}\omega_h\|_{L^2}^2 + \|\partial_z v\|_{X^{m-1}}^2 + |h|_{X^{m-1,1}}^2 + \ensuremath{\epsilon}\|\nabla^{\varphi}\partial_t^{\ell}\mathcal{Z}^{\alpha} \omega_h^{nhom}\|_{L^2}^2 + b.t.\, . \end{array} \end{equation} Sum $\ell$ and $\alpha$ in $(\ref{Sect2_Vorticity_Estimate_8})$, and integrate $(\ref{Sect2_Vorticity_Estimate_8})$ from $0$ to $t$, we get \begin{equation}\label{Sect2_Vorticity_Estimate_9} \begin{array}{ll} \|\omega_h^{nhom}\|_{X^{m-1}}^2 + \ensuremath{\epsilon} \int\limits_0^t \|\nabla \omega_h^{nhom}\|_{X^{m-1}}^2 \,\mathrm{d}t \\[7pt] \ensuremath{\lesssim} \big\|\omega_h^{nhom}|_{t=0}\big\|_{X^{m-1}}^2 + \int\limits_0^t\|\omega_h\|_{X^{m-1}}^2 \,\mathrm{d}t + \int\limits_0^t\|v\|_{X^{m-1,1}}^2 + |h|_{X^{m-1,1}}^2 \,\mathrm{d}t \\[7pt] \ensuremath{\lesssim} \big\|\omega_h|_{t=0}\big\|_{X^{m-1}}^2 + \sqrt{t}\|\omega_h\|_{L^4([0,t],X^{m-1})}^2 + \int\limits_0^t\|v\|_{X^{m-1,1}}^2 \,\mathrm{d}t + \int\limits_0^t|h|_{X^{m-1,1}}^2 \,\mathrm{d}t. \end{array} \end{equation} It follows from $(\ref{Sect2_Vorticity_Estimate_9})$ that \begin{equation}\label{Sect2_Vorticity_Estimate_10} \begin{array}{ll} \|\omega_h^{nhom}\|_{X^{m-1}}^4 \ensuremath{\lesssim} \big\|\omega_h|_{t=0}\big\|_{X^{m-1}}^4 + T\|\omega_h\|_{L^4([0,t],X^{m-1})}^4 \\[7pt]\hspace{2.6cm} + \big(\int\limits_0^T\|v\|_{X^{m-1,1}}^2 \,\mathrm{d}t\big)^2 + \big(\int\limits_0^T|h|_{X^{m-1,1}}^2 \,\mathrm{d}t\big)^2, \\[14pt] \int\limits_0^t\|\omega_h^{nhom}\|_{X^{m-1}}^4 \,\mathrm{d}t \ensuremath{\lesssim} T\big\|\omega_h|_{t=0}\big\|_{X^{m-1}}^4 + T\int\limits_0^t \|\omega_h\|_{L^4([0,t],X^{m-1})}^4 \,\mathrm{d}t \\[7pt]\hspace{3.28cm} + T\big(\int\limits_0^T\|v\|_{X^{m-1,1}}^2 \,\mathrm{d}t\big)^2 + T\big(\int\limits_0^T|h|_{X^{m-1,1}}^2 \,\mathrm{d}t\big)^2. 
\end{array} \end{equation} For the homogeneous equations $(\ref{Sect2_Vorticity_Estimate_3})$, the same as the $L^{4}([0,T],L^2)$ estimate in \cite{Masmoudi_Rousset_2012_FreeBC} and paradifferential calculus (see Theorem 10.6 in \cite{Masmoudi_Rousset_2012_FreeBC}), when $\ell +|\alpha|\leq m-1$, we have \begin{equation}\label{Sect2_Vorticity_Estimate_11} \begin{array}{ll} \|\partial_t^{\ell}\mathcal{Z}^{\alpha}\omega_h^{hom}\|_{L^4([0,T],L^2(\mathbb{R}^3_{-}))}^2 \\[8pt] \ensuremath{\lesssim} \|\partial_t^{\ell}\mathcal{Z}^{\alpha}\omega_h^{hom}\|_{H^{\frac{1}{4}}([0,T],L^2(\mathbb{R}^3_{-}))}^2 \\[8pt] \ensuremath{\lesssim} \sqrt{\ensuremath{\epsilon}}\int\limits_0^T\big|\partial_t^{\ell}\mathcal{Z}^{\alpha}\omega_h^{hom}|_{z=0}\big|_{L^2(\mathbb{R}^2)}^2 \,\mathrm{d}t \\[8pt] \ensuremath{\lesssim} \sqrt{\ensuremath{\epsilon}}\int\limits_0^T\big|\partial_t^{\ell}\mathcal{Z}^{\alpha}\big(\vec{\textsf{F}}^{1,2}[\nabla\varphi](\partial_j v^i)\big)|_{z=0}\big|_{L^2(\mathbb{R}^2)}^2 \,\mathrm{d}t \\[8pt] \ensuremath{\lesssim} \sqrt{\ensuremath{\epsilon}}\int\limits_0^T|h|_{X^{m-1,1}}^2\,\mathrm{d}t + \sqrt{\ensuremath{\epsilon}}\int\limits_0^T\big|\partial_j v^i|_{z=0}\big|_{X^{m-1}}^2\,\mathrm{d}t \\[8pt] \ensuremath{\lesssim} \sqrt{\ensuremath{\epsilon}}\int\limits_0^T|h|_{X^{m-1,1}}^2\,\mathrm{d}t + \sqrt{\ensuremath{\epsilon}}\int\limits_0^T\big|v|_{z=0}\big|_{X_{tan}^{m-1,1}}^2\,\mathrm{d}t \\[8pt] \ensuremath{\lesssim} \sqrt{\ensuremath{\epsilon}}\int\limits_0^T|h|_{X^{m-1,1}}^2\,\mathrm{d}t + \sqrt{\ensuremath{\epsilon}}\int\limits_0^T\|\partial_z v\|_{X_{tan}^{m-1,1}} \|v\|_{X_{tan}^{m-1,1}}\,\mathrm{d}t \\[8pt] \ensuremath{\lesssim} \sqrt{\ensuremath{\epsilon}}\int\limits_0^T|h|_{X^{m-1,1}}^2\,\mathrm{d}t + \int\limits_0^T\|v\|_{X^{m-1,1}}^2\,\mathrm{d}t + \ensuremath{\epsilon}\int\limits_0^T\|\partial_z v\|_{X^{m-1,1}}^2\,\mathrm{d}t, \end{array} \end{equation} where $\big|\partial_j v^i|_{z=0}\big|_{X^{m-1}} = \big|v|_{z=0}\big|_{H^m}$ since $j=1,2$. Sum $\alpha$ in $(\ref{Sect2_Vorticity_Estimate_11})$, we get \begin{equation}\label{Sect2_Vorticity_Estimate_12} \begin{array}{ll} \|\omega_h^{hom}\|_{L^4([0,T],X^{m-1})}^2 \ensuremath{\lesssim} \int\limits_0^T|h|_{X^{m-1,1}}^2 + \|v\|_{X^{m-1,1}}^2\,\mathrm{d}t + \ensuremath{\epsilon}\int\limits_0^T\|\partial_z v\|_{X^{m-1,1}}^2\,\mathrm{d}t. \end{array} \end{equation} Square $(\ref{Sect2_Vorticity_Estimate_12})$, we have \begin{equation}\label{Sect2_Vorticity_Estimate_12_Square} \begin{array}{ll} \|\omega_h^{hom}\|_{L^4([0,t],X^{m-1})}^4 \ensuremath{\lesssim} \|\omega_h^{hom}\|_{L^4([0,T],X^{m-1})}^4 \\[6pt] \ensuremath{\lesssim} \big(\int\limits_0^T|h|_{X^{m-1,1}}^2\,\mathrm{d}t\big)^2 + \big(\int\limits_0^T\|v\|_{X^{m-1,1}}^2\,\mathrm{d}t\big)^2 + \big(\ensuremath{\epsilon}\int\limits_0^T\|\partial_z v\|_{X^{m-1,1}}^2\,\mathrm{d}t\big)^2. 
\end{array} \end{equation} By $(\ref{Sect2_Vorticity_Estimate_10})$ and $(\ref{Sect2_Vorticity_Estimate_12_Square})$, we have \begin{equation}\label{Sect2_Vorticity_Estimate_13} \begin{array}{ll} \|\omega_h\|_{L^4([0,t],X^{m-1})}^4 \ensuremath{\lesssim} \|\omega_h^{nhom}\|_{L^4([0,t],X^{m-1})}^4 + \|\omega_h^{hom}\|_{L^4([0,t],X^{m-1})}^4 \\[7pt] \ensuremath{\lesssim} \big\|\omega_h|_{t=0}\big\|_{X^{m-1}}^4 + \int\limits_0^t \|\omega_h\|_{L^4([0,t],X^{m-1})}^4 \,\mathrm{d}t + \big(\int\limits_0^T\|v\|_{X^{m-1,1}}^2 \,\mathrm{d}t\big)^2 \\[7pt]\quad + \big(\int\limits_0^T|h|_{X^{m-1,1}}^2 \,\mathrm{d}t\big)^2 + \big(\ensuremath{\epsilon}\int\limits_0^T\|\partial_z v\|_{X^{m-1,1}}^2\,\mathrm{d}t\big)^2. \end{array} \end{equation} By the integral form of Gronwall's inequality, we obtain \begin{equation}\label{Sect2_Vorticity_Estimate_14} \begin{array}{ll} \|\omega_h\|_{L^4([0,T],X^{m-1})}^2 \\[5pt] \ensuremath{\lesssim} \big\|\omega_h|_{t=0}\big\|_{X^{m-1}}^2 + \int\limits_0^T\|v\|_{X^{m-1,1}}^2 \,\mathrm{d}t + \int\limits_0^T|h|_{X^{m-1,1}}^2 \,\mathrm{d}t + \ensuremath{\epsilon}\int\limits_0^T\|\partial_z v\|_{X^{m-1,1}}^2\,\mathrm{d}t. \end{array} \end{equation} Meanwhile, by $(\ref{Sect2_NormalDer_Vorticity_Estimate_5})$, we have \begin{equation}\label{Sect2_Vorticity_Estimate_15} \begin{array}{ll} \|\partial_z v_h\|_{L^4([0,T],X^{m-1})}^2 \\[7pt] \ensuremath{\lesssim} \|\omega_h\|_{L^4([0,T],X^{m-1})}^2 + |h|_{L^4([0,T],X^{m-1,\frac{1}{2}})}^2 + \|v\|_{L^4([0,T],X^{m-1,1})}^2. \end{array} \end{equation} By the divergence free condition $(\ref{Sect2_DivFreeCondition})$, we have \begin{equation}\label{Sect2_Vorticity_Estimate_17} \begin{array}{ll} \|\partial_z v^3\|_{L^4([0,T],X^{m-1})}^2 \\[7pt] \ensuremath{\lesssim} \|\partial_z v_h\|_{L^4([0,T],X^{m-1})}^2 + \|\nabla\varphi\|_{L^4([0,T],X^{m-1})}^2 + \|\partial_j v^i\|_{L^4([0,T],X^{m-1})}^2 \\[7pt] \ensuremath{\lesssim} \|\omega_h\|_{L^4([0,T],X^{m-1})}^2 + |h|_{L^4([0,T],X^{m-1,\frac{1}{2}})}^2 + \|v\|_{L^4([0,T],X^{m-1,1})}^2. \end{array} \end{equation} Thus, Lemma $\ref{Sect2_Vorticity_Lemma}$ is proved. \end{proof} We refer to \cite{Masmoudi_Rousset_2012_FreeBC} for the $L^{\infty}$ estimates, which imply $\partial_z v, \mathcal{Z}^3\partial_z v, \sqrt{\ensuremath{\epsilon}}\partial_{zz}v \in L^{\infty}$. The argument there is based on the analysis of a one-dimensional Fokker-Planck equation, which has an explicit Green's function. In the following lemma, we estimate $\|\partial_z v\|_{L^{\infty}([0,T],X^{m-2})}$. Note that we cannot expect $\partial_z v\in L^{\infty}([0,T],X^{m-1})$ because of the boundary condition $\omega|_{z=0} =\textsf{F}^{1,2} [\nabla\varphi](\partial_j v^i)$, see $(\ref{Sect1_Vorticity_H_Eq})$. \begin{lemma}\label{Sect2_NormalDer_Lemma} Assume $v$ and $\omega$ are the velocity and vorticity of the free surface Navier-Stokes equations $(\ref{Sect1_NS_Eq})$, respectively. Then $\partial_z v$ and $\omega$ satisfy the following estimate: \begin{equation}\label{Sect2_NormalDer_Estimate} \begin{array}{ll} \|\partial_z v\|_{X^{m-2}}^2 + \ensuremath{\epsilon}\int\limits_0^t\|\partial_{zz}v\|_{X^{m-2}}^2 \mathrm{d}\mathcal{V}_t\mathrm{d}t + \|\omega\|_{X^{m-2}}^2 + \ensuremath{\epsilon}\int\limits_0^t\|\nabla\omega\|_{X^{m-2}}^2 \mathrm{d}\mathcal{V}_t\mathrm{d}t \\[6pt] \ensuremath{\lesssim} \|\partial_z v_0\|_{X^{m-2}}^2 + \int\limits_0^t \|v\|_{X^{m-1,1}}^2 + |h|_{X^{m-1}}^2\,\mathrm{d}t + \|\partial_z v\|_{L^4([0,T],X^{m-1})}^2.
\end{array} \end{equation} \end{lemma} \begin{proof} By the divergence free condition $\nabla^{\varphi}\cdot v =0$, we have \begin{equation}\label{Sect2_NormalDer_Estimate_Laplacian} \begin{array}{ll} \triangle^{\varphi} v = \nabla^{\varphi}(\nabla^{\varphi}\cdot v) - \nabla^{\varphi}\times (\nabla^{\varphi}\times v) = - \nabla^{\varphi}\times\omega. \end{array} \end{equation} First, we develop the $L^2$ estimate of $\omega$. We multiply $(\ref{Sect1_NS_Eq})$ by $\nabla^{\varphi}\times \omega$, integrate over $\mathbb{R}^3_{-}$, and use the integration by parts formula $(\ref{Sect1_Formulas_CanNotUse})_3$ to get \begin{equation}\label{Sect2_NormalDer_Estimate_L2_1} \begin{array}{ll} \int\limits_{\mathbb{R}^3_{-}} \big(\partial_t^{\varphi} v + v\cdot\nabla^{\varphi} v + \nabla^{\varphi} q + \ensuremath{\epsilon}\nabla^{\varphi}\times \omega \big) \cdot \nabla^{\varphi}\times \omega \mathrm{d}\mathcal{V}_t =0, \\[16pt] \int\limits_{\mathbb{R}^3_{-}} \big(\partial_t^{\varphi} \omega + v\cdot\nabla^{\varphi} \omega + \nabla^{\varphi}\times\nabla^{\varphi} q \big) \cdot \omega \mathrm{d}\mathcal{V}_t + \ensuremath{\epsilon}\int\limits_{\mathbb{R}^3_{-}} |\nabla^{\varphi}\times \omega|^2 \mathrm{d}\mathcal{V}_t \\[2pt]\quad = - \int\limits_{z=0} \big(\partial_t v + v_y\cdot\nabla_y v + \nabla^{\varphi} q \big) \cdot \ensuremath{\textbf{N}}\times \omega \mathrm{d}\mathcal{V}_t - \int\limits_{\mathbb{R}^3_{-}} \big(\sum\limits_{i=1}^3 \nabla^{\varphi} v^i \partial_i^{\varphi} v\big) \cdot \omega \,\mathrm{d}\mathcal{V}_t, \\[13pt] \|\omega\|_{L^2}^2 + \ensuremath{\epsilon}\int\limits_0^t \|\nabla \omega\|_{L^2}^2 \,\mathrm{d}t \leq \big\|\omega|_{t=0}\big\|_{L^2}^2 + b.t.\, . \end{array} \end{equation} Note that $\int\limits_{z=0} \nabla^{\varphi} q \cdot \ensuremath{\textbf{N}}\times \omega \mathrm{d}\mathcal{V}_t \ensuremath{\lesssim} \big|\nabla^{\varphi} q|_{z=0}\big|_{-\frac{1}{2}} + \big|\ensuremath{\textbf{N}}\times \omega|_{z=0}\big|_{\frac{1}{2}} =b.t. \, .$ When $1\leq \ell+|\alpha|\leq m-2$, we apply $\partial_t^{\ell}\mathcal{Z}^{\alpha}$ to $(\ref{Sect1_NS_Eq})$ and rewrite the viscous terms, which gives \begin{equation}\label{Sect2_NormalDer_Estimate_1} \begin{array}{ll} \partial_t^{\varphi} \partial_t^{\ell}\mathcal{Z}^{\alpha}v + v\cdot\nabla^{\varphi} \partial_t^{\ell}\mathcal{Z}^{\alpha} v + \nabla^{\varphi} \partial_t^{\ell}\mathcal{Z}^{\alpha} q + \ensuremath{\epsilon}\nabla^{\varphi}\times(\nabla^{\varphi}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}v) = \ensuremath{\epsilon}\, \mathcal{I}_{1,1} + \mathcal{I}_{1,2}, \end{array} \end{equation} where \begin{equation*} \begin{array}{ll} \mathcal{I}_{1,1} = - [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \nabla^{\varphi} \times] \omega - \nabla^{\varphi} \times [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \nabla^{\varphi}\times] v, \\[8pt] \mathcal{I}_{1,2} = - [\partial_t^{\ell}\mathcal{Z}^{\alpha}, v_y \cdot \nabla_y + V_z \partial_z] v - [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \ensuremath{\textbf{N}}\partial_z^{\varphi}] q.
\end{array} \end{equation*} Multiplying $(\ref{Sect2_NormalDer_Estimate_1})$ by $\nabla^{\varphi}\times (\nabla^{\varphi}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}v)$ and integrating over $\mathbb{R}^3_{-}$, we have \begin{equation}\label{Sect2_NormalDer_Estimate_2} \begin{array}{ll} \int\limits_{\mathbb{R}^3_{-}} \partial_t^{\varphi} \partial_t^{\ell}\mathcal{Z}^{\alpha}v \cdot \nabla^{\varphi}\times (\nabla^{\varphi}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}v) \mathrm{d}\mathcal{V}_t \\[7pt]\ + \int\limits_{\mathbb{R}^3_{-}}v\cdot\nabla^{\varphi} \partial_t^{\ell}\mathcal{Z}^{\alpha} v \cdot \nabla^{\varphi}\times (\nabla^{\varphi}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}v) \mathrm{d}\mathcal{V}_t \\[7pt]\ + \int\limits_{\mathbb{R}^3_{-}}\nabla^{\varphi} \partial_t^{\ell}\mathcal{Z}^{\alpha} q \cdot \nabla^{\varphi}\times (\nabla^{\varphi}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}v) \mathrm{d}\mathcal{V}_t + \ensuremath{\epsilon}\int\limits_{\mathbb{R}^3_{-}} |\nabla^{\varphi}\times (\nabla^{\varphi}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}v)|^2 \mathrm{d}\mathcal{V}_t \\[7pt]\ = \int\limits_{\mathbb{R}^3_{-}}(\ensuremath{\epsilon}\, \mathcal{I}_{1,1} + \mathcal{I}_{1,2}) \cdot \nabla^{\varphi}\times (\nabla^{\varphi}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}v) \mathrm{d}\mathcal{V}_t. \end{array} \end{equation} Using the integration by parts formula $(\ref{Sect1_Formulas_CanNotUse})_3$ and noting that $[\partial_t^{\varphi}, \nabla^{\varphi}] =0$, we have \begin{equation}\label{Sect2_NormalDer_Estimate_3} \begin{array}{ll} \int\limits_{\mathbb{R}^3_{-}} \partial_t^{\varphi} |\nabla^{\varphi}\times\partial_t^{\ell}\mathcal{Z}^{\alpha}v|^2 \mathrm{d}\mathcal{V}_t + \int\limits_{\mathbb{R}^3_{-}}v\cdot\nabla^{\varphi} |\nabla^{\varphi}\times \partial_t^{\ell}\mathcal{Z}^{\alpha} v |^2 \mathrm{d}\mathcal{V}_t \\[8pt]\quad + \int\limits_{\mathbb{R}^3_{-}} (\nabla^{\varphi}\times \nabla^{\varphi} \partial_t^{\ell}\mathcal{Z}^{\alpha} q) \cdot (\nabla^{\varphi}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}v) \mathrm{d}\mathcal{V}_t + \ensuremath{\epsilon}\int\limits_{\mathbb{R}^3_{-}} |\nabla^{\varphi}\times (\nabla^{\varphi}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}v)|^2 \mathrm{d}\mathcal{V}_t \\[8pt] = - \int\limits_{z=0} (\partial_t^{\varphi} \partial_t^{\ell}\mathcal{Z}^{\alpha}v + v\cdot\nabla^{\varphi} \partial_t^{\ell}\mathcal{Z}^{\alpha} v) \cdot \ensuremath{\textbf{N}}\times (\nabla^{\varphi}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}v) \mathrm{d}y \\[6pt]\quad - \int\limits_{\mathbb{R}^3_{-}} [(\sum\limits_{i=1}^3 \nabla^{\varphi}v^i\cdot\partial_i^{\varphi})\times \partial_t^{\ell}\mathcal{Z}^{\alpha} v] \cdot (\nabla^{\varphi}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}v) \mathrm{d}\mathcal{V}_t \\[6pt]\quad - \int\limits_{z=0}\nabla^{\varphi} \partial_t^{\ell}\mathcal{Z}^{\alpha} q \cdot \ensuremath{\textbf{N}}\times (\nabla^{\varphi}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}v) \mathrm{d}y + \int\limits_{\mathbb{R}^3_{-}} \ensuremath{\epsilon}\,\mathcal{I}_{1,1} \cdot \nabla^{\varphi}\times (\nabla^{\varphi}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}v) \mathrm{d}\mathcal{V}_t\\[8pt]\quad + \int\limits_{z=0}\mathcal{I}_{1,2} \cdot \ensuremath{\textbf{N}}\times (\nabla^{\varphi}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}v) \mathrm{d}y + \int\limits_{\mathbb{R}^3_{-}} \nabla^{\varphi}\times\mathcal{I}_{1,2} \cdot (\nabla^{\varphi}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}v) \mathrm{d}\mathcal{V}_t.
\end{array} \end{equation} By $\nabla^{\varphi}\times \nabla^{\varphi} \partial_t^{\ell}\mathcal{Z}^{\alpha} q =0$, we have \begin{equation}\label{Sect2_NormalDer_Estimate_4} \begin{array}{ll} \frac{\mathrm{d}}{\mathrm{d}t} \int\limits_{\mathbb{R}^3_{-}} |\nabla^{\varphi}\times\partial_t^{\ell}\mathcal{Z}^{\alpha}v|^2 \mathrm{d}\mathcal{V}_t + 2\ensuremath{\epsilon}\int\limits_{\mathbb{R}^3_{-}} |\nabla^{\varphi}\times (\nabla^{\varphi}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}v)|^2 \mathrm{d}\mathcal{V}_t \\[7pt] \ensuremath{\lesssim} \|\nabla^{\varphi}\times\partial_t^{\ell}\mathcal{Z}^{\alpha}v\|_{L^2}^2 + \ensuremath{\epsilon}\|\nabla^{\varphi}\times (\nabla^{\varphi}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}v)\|_{L^2}^2 + \big|v |_{z=0}\big|_{X_{tan}^{m-1}}^2 \\[7pt]\quad + \big|\nabla^{\varphi}\times\partial_t^{\ell}\mathcal{Z}^{\alpha}v |_{z=0}\big|_{\frac{1}{2}}^2 + \big|\nabla^{\varphi}\partial_t^{\ell}\mathcal{Z}^{\alpha} q|_{z=0} \big|_{-\frac{1}{2}}^2 + \|\nabla \partial_t^{\ell}\mathcal{Z}^{\alpha}v\|_{L^2}^2 \\[7pt]\quad + \sum\limits_{\ell_1 +|\alpha|_1 \leq m-3}\big|\nabla^{\varphi}\partial_t^{\ell_1}\mathcal{Z}^{\alpha_1} q|_{z=0} \big|_{-\frac{1}{2}}^2 + \ensuremath{\epsilon}\|\mathcal{I}_{1,1}\|_{L^2}^2 + \|\nabla^{\varphi}\times \mathcal{I}_{1,2}\|_{L^2}^2. \end{array} \end{equation} It is easy to prove that $\|\mathcal{I}_{1,1}\|_{L^2} \ensuremath{\lesssim} \|\nabla \omega\|_{X^{m-2}}$. Then we estimate $\nabla^{\varphi}\times \mathcal{I}_{1,2}$. \begin{equation}\label{Sect2_NormalDer_Estimate_5} \begin{array}{ll} \|\nabla^{\varphi}\times [\partial_t^{\ell}\mathcal{Z}^{\alpha}, V_z \partial_z] v\|_{L^2} \ensuremath{\lesssim} \sum\limits_{\ell^1+|\alpha^1|>0} \big(\|\frac{1-z}{z}\partial_t^{\ell^1}\mathcal{Z}^{\alpha^1} V_z \cdot \frac{z}{1-z}\nabla^{\varphi}\times \partial_t^{\ell^2}\mathcal{Z}^{\alpha^2} \partial_z v \|_{L^2} \\[7pt]\qquad + \|\nabla^{\varphi}\partial_t^{\ell^1}\mathcal{Z}^{\alpha^1} V_z \times \partial_t^{\ell^2}\mathcal{Z}^{\alpha^2} \partial_z v \|_{L^2} \big) \end{array} \end{equation} \begin{equation*} \begin{array}{ll} \quad \ensuremath{\lesssim} \sum\limits_{\ell^1+|\alpha^1|>0} \big(\|\nabla\partial_t^{\ell^1}\mathcal{Z}^{\alpha^1} V_z \|_{L^2} + \|\nabla^{\varphi}\times \partial_t^{\ell^2}\mathcal{Z}^{\alpha^2} \mathcal{Z}^3 v \|_{L^2} + \|\partial_t^{\ell^2}\mathcal{Z}^{\alpha^2} \partial_z v \|_{L^2} \big) \\[10pt]\quad \ensuremath{\lesssim} \|\omega\|_{X^{m-2}} + \|v\|_{X^{m-1}} + \|\partial_z v\|_{X^{m-2}} + \|\partial_{zz} \eta\|_{X^{m-2}} + \|\partial_{zt} \eta\|_{X^{m-2}} \\[6pt]\quad \ensuremath{\lesssim} \|\omega\|_{X^{m-2}} + \|v\|_{X^{m-1}} + |h|_{X^{m-1}}. 
\\[16pt] \|\nabla^{\varphi}\times [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \ensuremath{\textbf{N}}\partial_z^{\varphi}] q\|_{L^2} \\[6pt]\quad \ensuremath{\lesssim} \|(\nabla_y, 0)^{\top}\times [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \ensuremath{\textbf{N}}\partial_z^{\varphi}] q\|_{L^2} + \|\ensuremath{\textbf{N}}\partial_z^{\varphi}\times [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \ensuremath{\textbf{N}}\partial_z^{\varphi}] q\|_{L^2} \\[7pt]\quad \ensuremath{\lesssim} \|\nabla q\|_{X^{m-1}} + \sum\limits_{\ell^1+|\alpha^1|>0} \big(\|\ensuremath{\textbf{N}}\partial_z^{\varphi}\times \partial_t^{\ell^1}\mathcal{Z}^{\alpha^1}\ensuremath{\textbf{N}} \cdot \partial_t^{\ell^2}\mathcal{Z}^{\alpha^2} \partial_z^{\varphi} q \|_{L^2} \\[11pt]\qquad + \|\ensuremath{\textbf{N}}\times \partial_t^{\ell^1}\mathcal{Z}^{\alpha^1}\ensuremath{\textbf{N}} \cdot \partial_z^{\varphi}\partial_t^{\ell^2}\mathcal{Z}^{\alpha^2} \partial_z^{\varphi} q \|_{L^2} \\[7pt]\quad \ensuremath{\lesssim} \|\nabla q\|_{X^{m-1}} + \|\partial_{zz}^{\varphi} q \|_{X^{m-3}} \ensuremath{\lesssim} \|\nabla q\|_{X^{m-1}} + \|\triangle^{\varphi} q \|_{X^{m-3}} \\[7pt]\quad \ensuremath{\lesssim} \|\nabla q\|_{X^{m-1}} + \|\partial_i^{\varphi} v^j \partial_j^{\varphi} v^i\|_{X^{m-3}} \ensuremath{\lesssim} \|\nabla q\|_{X^{m-1}} + \|v\|_{X^{m-2}} + \|\omega\|_{X^{m-2}}. \end{array} \end{equation*} Plug $(\ref{Sect2_NormalDer_Estimate_5})$ into $(\ref{Sect2_NormalDer_Estimate_4})$, integrate in time and apply the integral form of Gronwall's inequality, then we have \begin{equation}\label{Sect2_NormalDer_Estimate_6} \begin{array}{ll} \|\nabla^{\varphi}\times\partial_t^{\ell}\mathcal{Z}^{\alpha}v\|_{L^2}^2 + \ensuremath{\epsilon}\int\limits_0^t\|\nabla^{\varphi}\times (\nabla^{\varphi}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}v)\|_{L^2}^2 \mathrm{d}t \\[6pt] \ensuremath{\lesssim} \big\|\nabla^{\varphi}\times\partial_t^{\ell}\mathcal{Z}^{\alpha}v |_{t=0}\big\|_{L^2}^2 + \int\limits_0^t \|\omega\|_{X^{m-2}}^2 + \|v\|_{X^{m-1,1}}^2 + |h|_{X^{m-1}}^2 \\[9pt]\quad + \|\partial_z v\|_{X^{m-1}}^2 + \|\nabla q\|_{X^{m-1}}^2\,\mathrm{d}t + b.t. \end{array} \end{equation} $\partial_t^{\ell}\mathcal{Z}^{\alpha} \omega$ is equivalent to $\nabla^{\varphi} \times \partial_t^{\ell}\mathcal{Z}^{\alpha} v$, due to $\ell +|\alpha|\leq m-2$ and \begin{equation}\label{Sect2_NormalDer_Estimate_2_Formula} \begin{array}{ll} \partial_t^{\ell}\mathcal{Z}^{\alpha} \omega - \partial_t^{\ell}\mathcal{Z}^{\alpha} (\nabla^{\varphi} \times v) = \sum\limits_{\ell_1 +|\alpha_1| >0} \partial_t^{\ell_1}\mathcal{Z}^{\alpha_1} (\frac{\ensuremath{\textbf{N}}}{\partial_z\varphi})\partial_z \times \partial_t^{\ell_2}\mathcal{Z}^{\alpha_2} v, \\[14pt] \|\partial_t^{\ell}\mathcal{Z}^{\alpha} \omega - \partial_t^{\ell}\mathcal{Z}^{\alpha} (\nabla^{\varphi} \times v)\|_{L^2} \ensuremath{\lesssim} \|\partial_z v\|_{X^{m-3}} + |h|_{X^{m-2,\frac{1}{2}}}. \end{array} \end{equation} Then we have the estimate of the vorticity: \begin{equation}\label{Sect2_NormalDer_Estimate_7} \begin{array}{ll} \|\omega\|_{X^{m-2}}^2 + \ensuremath{\epsilon}\int\limits_0^t\|\nabla\omega\|_{X^{m-2}}^2 \mathrm{d}\mathcal{V}_t\mathrm{d}t \\[6pt] \ensuremath{\lesssim} \|\omega_0 \|_{X^{m-2}}^2 + \int\limits_0^t \|v\|_{X^{m-1,1}}^2 + |h|_{X^{m-1}}^2 + \|\partial_z v\|_{X^{m-1}}^2 + \|\nabla q\|_{X^{m-1}}^2\,\mathrm{d}t. \end{array} \end{equation} Thus, Lemma $\ref{Sect2_NormalDer_Lemma}$ is proved. 
\end{proof} \begin{remark}\label{Sect2_NormalDer_Remark} There is another approach to estimating $\|\omega_h\|_{X^{m-2}}$ and $\|\partial_z v\|_{X^{m-2}}$: we define the variables \begin{equation}\label{Sect2_NormalDer_Remark_1} \left\{\begin{array}{ll} \zeta^1 = \omega^1 -\textsf{F}^1 [\nabla\varphi](\partial_j v^i), \ j=1,2,\ i=1,2,3, \\[9pt] \zeta^2 = \omega^2 -\textsf{F}^2 [\nabla\varphi](\partial_j v^i), \ j=1,2,\ i=1,2,3, \end{array}\right. \end{equation} then $\zeta$ satisfies the following equation: \begin{equation}\label{Sect2_NormalDer_Remark_2} \left\{\begin{array}{ll} \partial_t \zeta + v_y\cdot\nabla_y\zeta + V_z\partial_z\zeta - \ensuremath{\epsilon}\triangle^{\varphi}\zeta = \vec{\textsf{F}}^0[\nabla\varphi](\zeta + \vec{\textsf{F}}^{1,2}[\nabla\varphi](\partial_j v^i),\partial_j v^i) \\[6pt]\quad + \vec{\textsf{F}}^{1,2}[\nabla\varphi](\partial_j\partial_i^{\varphi} q + [\partial_j, \partial_t^{\varphi} + v\cdot\nabla^{\varphi} - \ensuremath{\epsilon}\triangle^{\varphi}]v^i) - \partial_j v^i(\partial_t^{\varphi} \vec{\textsf{F}}^{1,2}[\nabla\varphi] \\[6pt]\quad + v\cdot\nabla^{\varphi} \vec{\textsf{F}}^{1,2}[\nabla\varphi] - \ensuremath{\epsilon}\triangle^{\varphi} \vec{\textsf{F}}^{1,2}[\nabla\varphi]) + \ensuremath{\epsilon}\nabla^{\varphi} \vec{\textsf{F}}^{1,2}[\nabla\varphi] \cdot \nabla^{\varphi}\partial_j v^i, \\[8pt] \zeta|_{z=0} =0, \\[6pt] \zeta|_{t=0} = \omega_{h,0} -\textsf{F}^{1,2} [\nabla\varphi](\partial_j v^i)|_{t=0}. \end{array}\right. \end{equation} Then we have the estimate of $\zeta$: \begin{equation}\label{Sect2_NormalDer_Remark_3} \begin{array}{ll} \|\zeta\|_{X^{m-2}}^2 + \ensuremath{\epsilon} \int\limits_0^t\|\nabla \zeta\|_{X^{m-2}}^2 \,\mathrm{d}t \\[7pt] \ensuremath{\lesssim} \big\|\zeta|_{t=0}\big\|_{X^{m-2}}^2 + \int\limits_0^t\|v\|_{X^{m-2,2}}^2 + \|\nabla q\|_{X^{m-2,1}}^2 + |h|_{X^{m}}^2 \,\mathrm{d}t \\[10pt] \ensuremath{\lesssim} \big\|\omega|_{t=0}\big\|_{X^{m-2}}^2 + \big\|h|_{t=0}\big\|_{X^{m-2,\frac{1}{2}}}^2 + \big\|v|_{t=0}\big\|_{X^{m-2,1}}^2 + \int\limits_0^t \cdots \,\mathrm{d}t. \end{array} \end{equation} By using $(\ref{Sect2_NormalDer_Remark_1})$ and $(\ref{Sect2_NormalDer_Remark_3})$, we can estimate $\|\omega\|_{X^{m-2}}$. However, we cannot use this method and the variables $(\ref{Sect2_NormalDer_Remark_1})$ to estimate convergence rates of the inviscid limit; see Remark $\ref{Sect5_NormalDer_Remark}$. \end{remark} By the estimates proved in Lemmas $\ref{Sect2_Tangential_Estimate_Lemma}, \ref{Sect2_Vorticity_Lemma}, \ref{Sect2_NormalDer_Lemma}$, it is standard to prove Proposition $\ref{Sect1_Proposition_TimeRegularity}$. \section{Strong Vorticity Layer Caused by Strong Initial Vorticity Layer} In this section, we study the strong vorticity layer for the free surface N-S equations $(\ref{Sect1_NS_Eq})$, which arises from a strong initial vorticity layer. \subsection{The Equations Transformed in Lagrangian Coordinates} In this preliminary subsection, we derive the evolution equations of $\hat{\omega}_h = \omega_h^{\ensuremath{\epsilon}} -\omega_h$ and construct a variable which satisfies a heat equation with damping.
$\hat{\omega}$ satisfies the equations $(\ref{Sect1_N_Derivatives_Difference_Eq})$. We plug the following equality into $(\ref{Sect1_N_Derivatives_Difference_Eq})$: \begin{equation}\label{Sect3_Preliminaries_Vorticity_Eq_2} \begin{array}{ll} \vec{\textsf{F}}^0[\nabla\varphi^{\ensuremath{\epsilon}}](\omega_h^{\ensuremath{\epsilon}},\partial_j v^{\ensuremath{\epsilon},i}) - \vec{\textsf{F}}^0[\nabla\varphi](\omega_h,\partial_j v^i) \\[7pt] = f^7[\nabla\varphi^{\ensuremath{\epsilon}},\nabla\varphi,\partial_j v^{\ensuremath{\epsilon},i},\partial_j v^i]\hat{\omega}_h + f^8[\nabla\varphi^{\ensuremath{\epsilon}},\nabla\varphi,\partial_j v^{\ensuremath{\epsilon},i},\partial_j v^i,\omega_h^{\ensuremath{\epsilon}},\omega_h]\partial_j\hat{v}^i \\[7pt]\quad + f^9[\nabla\varphi^{\ensuremath{\epsilon}},\nabla\varphi,\partial_j v^{\ensuremath{\epsilon},i},\partial_j v^i,\omega_h^{\ensuremath{\epsilon}},\omega_h]\nabla\hat{\varphi}, \end{array} \end{equation} where the coefficients $f^7[\cdots],f^8[\cdots],f^9[\cdots]$ are uniformly bounded with respect to $\ensuremath{\epsilon}$. We thus obtain the following system for $\hat{\omega}_h$: \begin{equation}\label{Sect3_Vorticity_Eq} \left\{\begin{array}{ll} \partial_t^{\varphi^{\ensuremath{\epsilon}}} \hat{\omega}_h + v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} \hat{\omega}_h - \ensuremath{\epsilon}\triangle^{\varphi^{\ensuremath{\epsilon}}}\hat{\omega}_h - f^7[\nabla\varphi^{\ensuremath{\epsilon}},\nabla\varphi,\partial_j v^{\ensuremath{\epsilon},i},\partial_j v^i]\hat{\omega}_h \\[8pt]\ = f^8[\nabla\varphi^{\ensuremath{\epsilon}},\nabla\varphi,\partial_j v^{\ensuremath{\epsilon},i},\partial_j v^i,\omega_h^{\ensuremath{\epsilon}},\omega_h]\partial_j\hat{v}^i + f^9[\nabla\varphi^{\ensuremath{\epsilon}},\nabla\varphi,\partial_j v^{\ensuremath{\epsilon},i},\partial_j v^i,\omega_h^{\ensuremath{\epsilon}},\omega_h]\nabla\hat{\varphi} \\[7pt]\quad\ + \ensuremath{\epsilon}\triangle^{\varphi^{\ensuremath{\epsilon}}}\omega_h + \partial_z^{\varphi}\omega_h \partial_t^{\varphi^{\ensuremath{\epsilon}}} \hat{\varphi} + \partial_z^{\varphi} \omega_h\, v^{\ensuremath{\epsilon}}\cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\varphi} - \hat{v}\cdot\nabla^{\varphi} \omega_h , \\[8pt] \hat{\omega}_h|_{z=0} =\textsf{F}^{1,2} [\nabla\varphi^{\ensuremath{\epsilon}}](\partial_j v^{\ensuremath{\epsilon},i}) - \omega_h^b := \hat{\omega}_h^b, \\[6pt] \hat{\omega}_h|_{t=0} =\omega^{\ensuremath{\epsilon}}_{h,0} -\omega_{h,0} := \hat{\omega}_{h,0}. \end{array}\right. \end{equation} As in \cite{Masmoudi_Rousset_2012_FreeBC}, we eliminate the convection term by using the Lagrangian parametrization of $\Omega_t$: \begin{equation}\label{Sect3_Preliminaries_Vorticity_Eq_3} \begin{array}{ll} \partial_t \mathcal{X}(t,x) = u^{\ensuremath{\epsilon}}(t,\mathcal{X}(t,x)) = v^{\ensuremath{\epsilon}}(t,\, \Phi^{-1} \circ \mathcal{X}), \quad \mathcal{X}(0,x) =\Phi(0,x). \end{array} \end{equation} Define the Jacobian of the change of variables $J(t,x) = |\det\nabla \mathcal{X}(t,x)|$; then $J(t,x) = J(0,x):=J_0(x)$ thanks to the divergence-free condition. Denote $a_0 = |J_0(x)|^{\frac{1}{2}}$ and define the matrix $(a_{ij}) = |J_0|^{\frac{1}{2}} P^{-1}$, where the matrix $P$ satisfies $P_{ij} =\partial_i \mathcal{X}\cdot \partial_j \mathcal{X}$.
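For completeness, let us sketch why $J$ is independent of time (a standard computation, under the assumption that $u^{\ensuremath{\epsilon}}$ in $(\ref{Sect3_Preliminaries_Vorticity_Eq_3})$ is the divergence-free Eulerian velocity associated with $v^{\ensuremath{\epsilon}}$): by the Jacobi formula,
\begin{equation*}
\frac{\mathrm{d}}{\mathrm{d}t}\,\det\nabla \mathcal{X}(t,x) = \big(\nabla\cdot u^{\ensuremath{\epsilon}}\big)(t,\mathcal{X}(t,x))\,\det\nabla \mathcal{X}(t,x) = 0,
\end{equation*}
so that $J(t,x)=J(0,x)$ for every $t$.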
Define $W = e^{-\gamma t} \hat{\omega}_h(t,\, \Phi^{-1}\circ \mathcal{X})$; then $W$ satisfies the equation \begin{equation}\label{Sect3_Preliminaries_HeatEq_Damping} \begin{array}{ll} a_0\partial_t W - \ensuremath{\epsilon}\partial_i(a_{ij}\partial_j W) + \big(\gamma a_0 - f^7[\nabla\varphi^{\ensuremath{\epsilon}},\nabla\varphi,\partial_j v^{\ensuremath{\epsilon},i},\partial_j v^i]\big) W \\[9pt] = \ensuremath{\epsilon}\, e^{-\gamma t}\triangle^{\varphi^{\ensuremath{\epsilon}}}\omega_h + e^{-\gamma t}\partial_z^{\varphi}\omega_h \partial_t^{\varphi^{\ensuremath{\epsilon}}} \hat{\varphi} + e^{-\gamma t}\partial_z^{\varphi} \omega_h\, v^{\ensuremath{\epsilon}}\cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\varphi} \\[7pt]\quad\ - e^{-\gamma t}\hat{v}\cdot\nabla^{\varphi} \omega_h + e^{-\gamma t}f^8[\nabla\varphi^{\ensuremath{\epsilon}},\nabla\varphi,\partial_j v^{\ensuremath{\epsilon},i},\partial_j v^i,\omega_h^{\ensuremath{\epsilon}},\omega_h]\partial_j\hat{v}^i \\[7pt]\quad\ + e^{-\gamma t}f^9[\nabla\varphi^{\ensuremath{\epsilon}},\nabla\varphi,\partial_j v^{\ensuremath{\epsilon},i},\partial_j v^i,\omega_h^{\ensuremath{\epsilon}},\omega_h]\nabla\hat{\varphi} :=\mathcal{I}_2, \end{array} \end{equation} where $\|\mathcal{I}_2\|_{L^{\infty}} \ensuremath{\rightarrow} 0$ as $\ensuremath{\epsilon}\ensuremath{\rightarrow} 0$. Since $a_0>0$, we can choose a suitably large $\gamma>0$ such that \begin{equation}\label{Sect3_Preliminaries_Vorticity_Eq_5} \begin{array}{ll} \gamma a_0 - f^7[\nabla\varphi^{\ensuremath{\epsilon}},\nabla\varphi,\partial_j v^{\ensuremath{\epsilon},i},\partial_j v^i] >0, \end{array} \end{equation} so that $\big(\gamma a_0 - f^7[\nabla\varphi^{\ensuremath{\epsilon}},\nabla\varphi,\partial_j v^{\ensuremath{\epsilon},i},\partial_j v^i]\big) W$ is a damping term. Since the matrix $(a_{ij})$ is positive definite, $- \ensuremath{\epsilon}\partial_i(a_{ij}\partial_j W)$ is the diffusion term. \subsection{$L^{\infty}$ Estimate of Strong Vorticity Layer} In this subsection, we prove that if the initial vorticity layer is strong, then the vorticity layer is strong. Before proving our results, let us investigate the simplest model by means of the heat kernel, namely the following heat equation with damping, Dirichlet boundary condition, and constant coefficients. \begin{proposition}\label{Sect3_HeatEq_Diffusion_Proposition} Assume that $\|W\|_{X_{tan}^2} <+\infty$, that $w^{ini} \neq 0$, that $\gamma>0$ is constant, and that $W$ is the solution of the following heat equation with damping: \begin{equation}\label{Sect3_HeatEq_Diffusion_1} \left\{\begin{array}{ll} \partial_t W - \ensuremath{\epsilon}\triangle W + \gamma W =0, \\[5pt] W|_{z=0} = 0, \\[5pt] W|_{t=0} = w^{ini} \nrightarrow 0, \end{array}\right. \end{equation} then $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|W\|_{L^{\infty}(\mathbb{R}^3_{-} \times (0,T])} \neq 0$. \end{proposition} \begin{proof} Define $\tilde{W} = e^{\gamma t}W$; then the equations $(\ref{Sect3_HeatEq_Diffusion_1})$ are rewritten as \begin{equation}\label{Sect3_HeatEq_Diffusion_2} \left\{\begin{array}{ll} \partial_t \tilde{W} - \ensuremath{\epsilon}\triangle \tilde{W} = 0, \\[5pt] \tilde{W}|_{z=0} = 0, \\[5pt] \tilde{W}|_{t=0} = w^{ini} \neq 0. \end{array}\right. \end{equation} Note that $\ensuremath{\epsilon}\triangle \tilde{W} \nrightarrow 0$ as $\ensuremath{\epsilon}\ensuremath{\rightarrow} 0$. Otherwise, we would have $\partial_t \tilde{W}=0$, and hence $\tilde{W} = w^{ini}(y,\frac{z}{\sqrt{\ensuremath{\epsilon}}})$.
However, $\ensuremath{\epsilon}\partial_{zz} \tilde{W} = (\partial_{zz} w^{ini})(y,\frac{z}{\sqrt{\ensuremath{\epsilon}}}) \neq 0$, which is a contradiction. Since $\ensuremath{\epsilon}\ensuremath{\rightarrow} 0$ implies $\ensuremath{\epsilon} t\ensuremath{\rightarrow} 0$, the solution $\tilde{W}$ satisfies, with $y^{*}=(y_1,y_2,-y_3)$ denoting the reflection of $y$ across $\{z=0\}$, \begin{equation}\label{Sect3_HeatEq_Diffusion_3} \begin{array}{ll} \tilde{W}(t,x) = \frac{1}{(4\pi \ensuremath{\epsilon} t)^{\frac{3}{2}}} \int\limits_{\mathbb{R}^3_{-}} w^{ini}(y) \big(\exp\{-\frac{|x-y|^2}{4\ensuremath{\epsilon} t}\} -\exp\{-\frac{|x-y^{*}|^2}{4\ensuremath{\epsilon} t}\}\big) \,\mathrm{d}y \\[8pt]\hspace{1.22cm} \ensuremath{\rightarrow} w^{ini}(x), \text{\ as\ } \ensuremath{\epsilon} t\ensuremath{\rightarrow} 0. \end{array} \end{equation} The convergence $(\ref{Sect3_HeatEq_Diffusion_3})$ is strong. Since $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|w^{ini}\|_{L^{\infty}(\mathbb{R}^3_{-})} \neq 0$, we have $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|W\|_{L^{\infty}(\mathbb{R}^3_{-} \times (0,T])} \neq 0$. Thus, Proposition $\ref{Sect3_HeatEq_Diffusion_Proposition}$ is proved. \end{proof} In order to prove that a strong initial vorticity layer is a sufficient condition for the existence of a strong vorticity layer, we assume that the Euler boundary data satisfy $\Pi\mathcal{S}^{\varphi}v\ensuremath{\textbf{n}}|_{z=0} =0$. \begin{theorem}\label{Sect3_BoundarLayer_Initial_Thm} Assume $\omega^{\ensuremath{\epsilon}},v^{\ensuremath{\epsilon}}$ are the vorticity and velocity of the Navier-Stokes equations $(\ref{Sect1_NS_Eq})$, and $\omega,v,\ensuremath{\textbf{n}}$ are the vorticity, velocity and normal vector of the Euler equations $(\ref{Sect1_Euler_Eq})$. If the initial Navier-Stokes velocity satisfies $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}(\nabla^{\varphi^{\ensuremath{\epsilon}}}\times v_0^{\ensuremath{\epsilon}}) - \nabla^{\varphi}\times\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0} v_0^{\ensuremath{\epsilon}} \neq 0$ in the initial set $\mathcal{A}_0$ and the Euler solution satisfies $\Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}|_{z=0} = 0$ on $[0,T]$, then $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|\omega^{\ensuremath{\epsilon}} -\omega\|_{L^{\infty}(\mathcal{X}(\mathcal{A}_0)\times (0,T])} \neq 0$. \end{theorem} \begin{proof} Since $\Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}|_{z=0} = 0$ on $[0,T]$, we have $\big|\omega_0^{\ensuremath{\epsilon}}|_{z=0} -\omega_0|_{z=0}\big|_{\infty} \ensuremath{\rightarrow} 0$ as $\ensuremath{\epsilon}\ensuremath{\rightarrow} 0$. Then there exists a set $\mathcal{A}_0$ with $\mathcal{A}_0\cap \{x \,|\, z<0\} \neq \emptyset$ such that $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}(\nabla^{\varphi^{\ensuremath{\epsilon}}}\times v_0^{\ensuremath{\epsilon}}) - \nabla^{\varphi}\times\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0} v_0^{\ensuremath{\epsilon}} \neq 0$ in the initial set $\mathcal{A}_0$.
We study the equations $(\ref{Sect3_Vorticity_Eq})$ in Lagrangian coordinates: \begin{equation}\label{Sect3_BoundarLayer_Initial_Eq_1} \left\{\begin{array}{ll} a_0\partial_t W - \ensuremath{\epsilon}\partial_i(a_{ij}\partial_j W) + \big(\gamma a_0 - f^7[\nabla\varphi^{\ensuremath{\epsilon}},\nabla\varphi,\partial_j v^{\ensuremath{\epsilon},i},\partial_j v^i]\big) W =\mathcal{I}_2, \\[8pt] W|_{z=0} = \hat{\omega}_h^b := \omega_0^{\ensuremath{\epsilon}}|_{z=0} -\omega_0|_{z=0} \ensuremath{\rightarrow} 0, \\[7pt] W|_{t=0} = \hat{\omega}_{h,0} \nrightarrow 0. \end{array}\right. \end{equation} We decompose $W = W^{f\!o} + W^{bdy} + W^{ini}$, such that $W^{f\!o}$ satisfies the nonhomogeneous equations: \begin{equation}\label{Sect3_BoundarLayer_Initial_Eq_Force} \left\{\begin{array}{ll} a_0\partial_t W^{f\!o} - \ensuremath{\epsilon}\partial_i(a_{ij}\partial_j W^{f\!o}) + \big(\gamma a_0 - f^7[\nabla\varphi^{\ensuremath{\epsilon}},\nabla\varphi,\partial_j v^{\ensuremath{\epsilon},i},\partial_j v^i]\big) W^{f\!o} =\mathcal{I}_2, \\[8pt] W^{f\!o}|_{z=0} = 0, \\[7pt] W^{f\!o}|_{t=0} = 0, \end{array}\right. \end{equation} $W^{bdy}$ satisfies the following equations: \begin{equation}\label{Sect3_BoundarLayer_Initial_Eq_Boundary} \left\{\begin{array}{ll} a_0\partial_t W^{bdy} - \ensuremath{\epsilon}\partial_i(a_{ij}\partial_j W^{bdy}) + \big(\gamma a_0 - f^7[\nabla\varphi^{\ensuremath{\epsilon}},\nabla\varphi,\partial_j v^{\ensuremath{\epsilon},i},\partial_j v^i]\big) W^{bdy} =0,\\[8pt] W^{bdy}|_{z=0} = \hat{\omega}_h^b, \\[8pt] W^{bdy}|_{t=0} = 0, \end{array}\right. \end{equation} and $W^{ini}$ satisfies the homogeneous equations: \begin{equation}\label{Sect3_BoundarLayer_Initial_Eq_Initial} \left\{\begin{array}{ll} a_0\partial_t W^{ini} - \ensuremath{\epsilon}\partial_i(a_{ij}\partial_j W^{ini}) + \big(\gamma a_0 - f^7[\nabla\varphi^{\ensuremath{\epsilon}},\nabla\varphi,\partial_j v^{\ensuremath{\epsilon},i},\partial_j v^i]\big) W^{ini} =0,\\[8pt] W^{ini}|_{z=0} = 0, \\[8pt] W^{ini}|_{t=0} = \hat{\omega}_{h,0}, \end{array}\right. \end{equation} where $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|\hat{\omega}_{h,0}\|_{L^{\infty}(\mathcal{A}_0)} \neq 0$. Thanks to the diffusion term and the damping term in $(\ref{Sect3_BoundarLayer_Initial_Eq_Force})$, it is easy to use the maximum principle to prove that $\|W^{f\!o}\|_{L^{\infty}} \ensuremath{\rightarrow} 0$ as $\ensuremath{\epsilon}\ensuremath{\rightarrow} 0$, since the force term vanishes as $\ensuremath{\epsilon}\ensuremath{\rightarrow} 0$.
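To make the maximum-principle step explicit, here is a minimal sketch, under the additional mild assumptions (introduced only for this sketch) that $\gamma a_0 - f^7[\nabla\varphi^{\ensuremath{\epsilon}},\nabla\varphi,\partial_j v^{\ensuremath{\epsilon},i},\partial_j v^i] \geq c_1 a_0$ for some constant $c_1>0$ and that $\inf a_0>0$. The constant $\|\mathcal{I}_2\|_{L^{\infty}}/(c_1 \inf a_0)$ is then a supersolution of $(\ref{Sect3_BoundarLayer_Initial_Eq_Force})$ dominating the zero initial and boundary data, so the comparison principle yields
\begin{equation*}
\|W^{f\!o}\|_{L^{\infty}(\mathbb{R}^3_{-}\times[0,T])} \leq \frac{\|\mathcal{I}_2\|_{L^{\infty}(\mathbb{R}^3_{-}\times[0,T])}}{c_1\inf a_0} \ensuremath{\rightarrow} 0, \quad \text{as\ } \ensuremath{\epsilon}\ensuremath{\rightarrow} 0.
\end{equation*}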
For $(\ref{Sect3_BoundarLayer_Initial_Eq_Boundary})$, we define \begin{equation}\label{Sect3_BoundarLayer_Initial_Eq_Boundary_1} \begin{array}{ll} \phi = W^{bdy} - \big(\textsf{F}^{1,2} [\nabla\varphi^{\ensuremath{\epsilon}}](\partial_j v^{\ensuremath{\epsilon},i}) - \textsf{F}^{1,2} [\nabla\varphi](\partial_j v^i)\big), \end{array} \end{equation} then $\phi$ satisfies the following equations: \begin{equation}\label{Sect3_BoundarLayer_Initial_Eq_Boundary_2} \left\{\begin{array}{ll} a_0\partial_t \phi - \ensuremath{\epsilon}\partial_i(a_{ij}\partial_j \phi) + \big(\gamma a_0 - f^7[\nabla\varphi^{\ensuremath{\epsilon}},\nabla\varphi,\partial_j v^{\ensuremath{\epsilon},i},\partial_j v^i]\big) \phi \\[7pt]\quad = - a_0\partial_t \big(\textsf{F}^{1,2} [\nabla\varphi^{\ensuremath{\epsilon}}](\partial_j v^{\ensuremath{\epsilon},i}) - \textsf{F}^{1,2} [\nabla\varphi](\partial_j v^i)\big) \\[7pt]\qquad + \ensuremath{\epsilon}\partial_i \big[a_{ij}\partial_j \big(\textsf{F}^{1,2} [\nabla\varphi^{\ensuremath{\epsilon}}](\partial_j v^{\ensuremath{\epsilon},i}) - \textsf{F}^{1,2} [\nabla\varphi](\partial_j v^i)\big)\big] \\[7pt]\qquad - \big(\gamma a_0 - f^7[\nabla\varphi^{\ensuremath{\epsilon}},\nabla\varphi,\partial_j v^{\ensuremath{\epsilon},i},\partial_j v^i]\big) \big(\textsf{F}^{1,2} [\nabla\varphi^{\ensuremath{\epsilon}}](\partial_j v^{\ensuremath{\epsilon},i}) - \textsf{F}^{1,2} [\nabla\varphi](\partial_j v^i)\big),\\[8pt] \phi|_{z=0} = 0, \\[8pt] \phi|_{t=0} = - \textsf{F}^{1,2} [\nabla\varphi^{\ensuremath{\epsilon}}](\partial_j v^{\ensuremath{\epsilon},i})|_{t=0} + \textsf{F}^{1,2} [\nabla\varphi](\partial_j v^i)|_{t=0}. \end{array}\right. \end{equation} It is easy to prove that $\|\phi\|_{L^{\infty}} \ensuremath{\rightarrow} 0$ as $\ensuremath{\epsilon}\ensuremath{\rightarrow} 0$, since the force term and the initial data $\phi|_{t=0}$ vanish as $\ensuremath{\epsilon}\ensuremath{\rightarrow} 0$. Thus, it follows from $(\ref{Sect3_BoundarLayer_Initial_Eq_Boundary_1})$ that $\|W^{bdy}\|_{L^{\infty}} \ensuremath{\rightarrow} 0$ as $\ensuremath{\epsilon}\ensuremath{\rightarrow} 0$. Next, we study the equations $(\ref{Sect3_BoundarLayer_Initial_Eq_Initial})$, which are already expressed in Lagrangian coordinates. In order to prove $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|\omega^{\ensuremath{\epsilon}} -\omega\|_{L^{\infty}(\mathcal{X}(\mathcal{A}_0)\times (0,T])} \neq 0$, it suffices to prove $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|W^{ini}\|_{L^{\infty}(\mathcal{A}_0\times (0,T])} \neq 0$. By defining the variable \begin{equation}\label{Sect3_BoundarLayer_Initial_Eq_Initial_1} \begin{array}{ll} \tilde{W}^{ini} = W^{ini} \exp\{\int_0^t \frac{1}{a_0} (\gamma a_0 - f^7[\nabla\varphi^{\ensuremath{\epsilon}},\nabla\varphi,\partial_j v^{\ensuremath{\epsilon},i},\partial_j v^i]) \,\mathrm{d}t\}, \end{array} \end{equation} $(\ref{Sect3_BoundarLayer_Initial_Eq_Initial})$ can be rewritten as \begin{equation}\label{Sect3_BoundarLayer_Initial_Eq_Initial_2} \left\{\begin{array}{ll} \partial_t \tilde{W}^{ini} - \frac{\ensuremath{\epsilon}}{a_0} \exp\{-\int_0^t \frac{1}{a_0} (\gamma a_0 - f^7[\cdots]) \,\mathrm{d}t\} \cdot a_{ij} \partial_{ij} \tilde{W}^{ini} = \sqrt{\ensuremath{\epsilon}}\,\mathcal{I}_3,\\[8pt] \tilde{W}^{ini}|_{z=0} = 0, \\[8pt] \tilde{W}^{ini}|_{t=0} = \hat{\omega}_{h,0}, \end{array}\right.
\end{equation} where \begin{equation}\label{Sect3_BoundarLayer_Initial_Eq_Initial_3} \begin{array}{ll} \mathcal{I}_3 = \frac{\sqrt{\ensuremath{\epsilon}}}{a_0}\sum\limits_{i,j =1}^3\partial_i a_{ij} \cdot \partial_j \big(\tilde{W}^{ini} \exp\{-\int_0^t \frac{1}{a_0} (\gamma a_0 - f^7[\cdots]) \,\mathrm{d}t\} \big) \\[11pt]\hspace{0.85cm} + \frac{\sqrt{\ensuremath{\epsilon}}}{a_0}\sum\limits_{i,j=1}^3 a_{ij} \partial_i \tilde{W}^{ini} \cdot \partial_j \big(\exp\{-\int_0^t \frac{1}{a_0} (\gamma a_0 - f^7[\cdots]) \,\mathrm{d}t\}\big). \end{array} \end{equation} Note that $\|\mathcal{I}_3\|_{L^{\infty}} <+\infty$, because $\mathcal{I}_3$ involves the normal derivative $\partial_z$ only up to first order. Equation $(\ref{Sect3_BoundarLayer_Initial_Eq_Initial_2})$ is parabolic and its fundamental solution obeys the parabolic scaling. Let $\textsf{H}(\frac{x}{\sqrt{\ensuremath{\epsilon} t}})$ denote the fundamental solution of the following homogeneous parabolic equation in $\mathbb{R}^3$: \begin{equation}\label{Sect3_BoundarLayer_Initial_Eq_Initial_4} \begin{array}{ll} \partial_t f - \frac{\ensuremath{\epsilon}}{a_0} a_{ij} \exp\{-\int_0^t \frac{1}{a_0} (\gamma a_0 - f^7[\cdots]) \,\mathrm{d}t\} \cdot \partial_{ij} f =0. \end{array} \end{equation} In terms of the fundamental solution $\textsf{H}$, Duhamel's principle gives the explicit representation \begin{equation}\label{Sect3_BoundarLayer_Initial_Eq_Initial_5} \begin{array}{ll} \tilde{W}^{ini}(t,x) = \int\limits_{\mathbb{R}^3_{-}} \hat{\omega}_{h,0}(y) (\textsf{H}(\frac{x-y}{\sqrt{\ensuremath{\epsilon} t}}) - \textsf{H}(\frac{x+y}{\sqrt{\ensuremath{\epsilon} t}})) \,\mathrm{d}y \\[7pt]\hspace{2cm} + \int\limits_0^t \int\limits_{\mathbb{R}^3_{-}} \sqrt{\ensuremath{\epsilon}}\,\mathcal{I}_3(t- s,y) (\textsf{H}(\frac{x-y}{\sqrt{\ensuremath{\epsilon} s}}) - \textsf{H}(\frac{x+y}{\sqrt{\ensuremath{\epsilon} s}})) \,\mathrm{d}y \mathrm{d}s. \\[-0.3cm] \end{array} \end{equation} Since the kernel concentrates at $y=x$ on the parabolic scale $O(\sqrt{\ensuremath{\epsilon} t})$ and the Duhamel term carries the factor $\sqrt{\ensuremath{\epsilon}}$, we get $\tilde{W}^{ini}(t,x) = \hat{\omega}_{h,0}(x) + \sqrt{\ensuremath{\epsilon}}\, O(1) \ensuremath{\rightarrow} \hat{\omega}_{h,0}(x)$ pointwise as $\ensuremath{\epsilon}\ensuremath{\rightarrow} 0$; in particular, $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0} \tilde{W}^{ini}(t,x)$ and $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0} \hat{\omega}_{h,0}(x)$ have the same support. In Lagrangian coordinates, the limit of the solution therefore coincides with the limit of the initial data; in Eulerian coordinates this means that the limiting vorticity is transported by the velocity field. By $(\ref{Sect3_BoundarLayer_Initial_Eq_Initial_1})$, we have \begin{equation}\label{Sect3_BoundarLayer_Initial_Eq_Initial_6} \begin{array}{ll} \lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|W^{ini}\|_{L^{\infty}(\mathcal{A}_0)} = \lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|\tilde{W}^{ini}\exp\{- \int_0^t \frac{1}{a_0} (\gamma a_0 - f^7[\cdots]) \,\mathrm{d}t\}\|_{L^{\infty}(\mathcal{A}_0)} \neq 0. \end{array} \end{equation} Hence $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|\hat{\omega}_h\|_{L^{\infty}(\mathcal{X}(\mathcal{A}_0)\times (0,T])} \neq 0$. Thus, Theorem $\ref{Sect3_BoundarLayer_Initial_Thm}$ is proved.
\end{proof} It is easy to show that the strong vorticity layer implies strong boundary layers for the following quantities: \begin{equation}\label{Sect3_StrongVorticityLayer_Corollary} \begin{array}{ll} \lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|\partial_z v^{\ensuremath{\epsilon}} -\partial_z v\|_{L^{\infty}(\mathcal{X}(\mathcal{A}_0)\times (0,T])} \neq 0, \\[9pt] \lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|\mathcal{S}v^{\ensuremath{\epsilon}} -\mathcal{S}v\|_{L^{\infty}(\mathcal{X}(\mathcal{A}_0)\times (0,T])} \neq 0,\\[9pt] \lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|\nabla q^{\ensuremath{\epsilon}} - \nabla q\|_{L^{\infty}(\mathcal{X}(\mathcal{A}_0)\times (0,T])} \neq 0. \end{array} \end{equation} \section{Strong Vorticity Layer Caused by the Discrepancy between Boundary Values of Vorticities} In this section, we study the strong vorticity layer for the free surface N-S equations $(\ref{Sect1_NS_Eq})$, which arises from the discrepancy between the boundary values of the Navier-Stokes vorticity and of the Euler vorticity. \subsection{Discrepancy of the Vorticity on the Free Boundary} The following lemma shows that if the tangential projection on the free surface of the Euler strain tensor applied to the normal vector does not vanish, then there is a discrepancy between the Navier-Stokes vorticity and the Euler vorticity. \begin{lemma}\label{Sect4_Vorticity_Discrepancy_Lemma} Assume that $\omega^{\ensuremath{\epsilon}},v^{\ensuremath{\epsilon}},\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}$ are the vorticity, velocity and normal vector of the Navier-Stokes equations $(\ref{Sect1_NS_Eq})$, that $\omega,v,\ensuremath{\textbf{N}}$ are the vorticity, velocity and normal vector of the Euler equations $(\ref{Sect1_Euler_Eq})$, and that $\omega^{\ensuremath{\epsilon},b}$ and $\omega^b$ are the boundary values of $\omega^{\ensuremath{\epsilon}}$ and $\omega$, respectively. If $\Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}|_{z=0} \neq 0$ in $(0,T]$, then $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}|\omega^{\ensuremath{\epsilon},b}-\omega^b| _{L^{\infty}(\mathbb{R}^2\times (0,T])} \neq 0$. \end{lemma} \begin{proof} We denote $\textsf{S}_n =\Pi \mathcal{S}^{\varphi} v \ensuremath{\textbf{n}}$. Since $\textsf{S}_n \neq 0$ and $\mathcal{S}^{\varphi}v\ensuremath{\textbf{n}} = (\mathcal{S}^{\varphi}v \ensuremath{\textbf{n}}\cdot\ensuremath{\textbf{n}})\ensuremath{\textbf{n}} + \Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}$, the vector $\mathcal{S}^{\varphi}v \ensuremath{\textbf{n}}$ is not parallel to $\ensuremath{\textbf{n}}$; namely, \begin{equation}\label{Sect4_Vorticity_Discrepancy_1} \begin{array}{ll} \mathcal{S}^{\varphi}v \ensuremath{\textbf{n}}\times\ensuremath{\textbf{n}} = (\mathcal{S}^{\varphi}v \ensuremath{\textbf{n}}\cdot\ensuremath{\textbf{n}})\ensuremath{\textbf{n}} \times\ensuremath{\textbf{n}} + \Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}} \times\ensuremath{\textbf{n}} = \Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}} \times\ensuremath{\textbf{n}} \neq 0. \end{array} \end{equation} Denote $\Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}} \times\ensuremath{\textbf{n}} := (\Theta^1,\Theta^2,\Theta^3)^{\top}$, which is a nonzero vector.
By $\mathcal{S}^{\varphi}v \ensuremath{\textbf{n}}\times\ensuremath{\textbf{n}} = (\Theta^1,\Theta^2,\Theta^3)^{\top}$, similar to $(\ref{Sect2_Vorticity_H_BC_2})$, we have \begin{equation}\label{Sect4_Vorticity_Discrepancy_2} \left\{\begin{array}{ll} n^3[n^1\partial_1^{\varphi} v^1 + \frac{n^2}{2}(\partial_1^{\varphi} v^2 + \partial_2^{\varphi} v^1) + \frac{n^3}{2}(\partial_1^{\varphi} v^3 + \partial_z^{\varphi} v^1)] \\[5pt]\quad = n^1[\frac{n^1}{2}(\partial_1^{\varphi} v^3 + \partial_z^{\varphi} v^1) + \frac{n^2}{2}(\partial_2^{\varphi} v^3 + \partial_z^{\varphi} v^2) - n^3\partial_1^{\varphi} v^1 - n^3\partial_2^{\varphi} v^2] -\Theta^2, \\[10pt] n^3[\frac{n^1}{2}(\partial_1^{\varphi} v^2 + \partial_2^{\varphi} v^1) + n^2\partial_2^{\varphi} v^2 + \frac{n^3}{2}(\partial_2^{\varphi} v^3 + \partial_z^{\varphi} v^2)] \\[5pt]\quad = n^2[\frac{n^1}{2}(\partial_1^{\varphi} v^3 + \partial_z^{\varphi} v^1) + \frac{n^2}{2}(\partial_2^{\varphi} v^3 + \partial_z^{\varphi} v^2) - n^3\partial_1^{\varphi} v^1 - n^3\partial_2^{\varphi} v^2] +\Theta^1, \\[10pt] n^2[n^1\partial_1^{\varphi} v^1 + \frac{n^2}{2}(\partial_1^{\varphi} v^2 + \partial_2^{\varphi} v^1) + \frac{n^3}{2}(\partial_1^{\varphi} v^3 + \partial_z^{\varphi} v^1)] \\[5pt]\quad = n^1[\frac{n^1}{2}(\partial_1^{\varphi} v^2 + \partial_2^{\varphi} v^1) + n^2\partial_2^{\varphi} v^2 + \frac{n^3}{2}(\partial_2^{\varphi} v^3 + \partial_z^{\varphi} v^2)] + \Theta^3. \end{array}\right. \end{equation} Then we have the following two equations involving $\partial_z v^1$ and $\partial_z v^2$: \begin{equation}\label{Sect4_Vorticity_Discrepancy_3} \begin{array}{ll} \big[(n^1)^2 + \frac{(n^3)^2}{2} + \frac{1}{2}\frac{(n^1)^4}{(n^3)^2} - \frac{1}{2}\frac{(n^2)^4}{(n^3)^2} \big]\partial_z v^1 + \big[n^1n^2 + \frac{(n^1)^3n^2}{(n^3)^2} + \frac{n^1(n^2)^3}{(n^3)^2} \big]\partial_z v^2 \\[10pt] = -[\frac{(n^3)^2}{2} - \frac{(n^1)^2}{2} - \frac{(n^2)^2}{2}]\big[\partial_z\varphi\partial_1 v^3 - \partial_1\varphi [- \partial_z\varphi(\partial_1 v^1 + \partial_2 v^2)] \big] \\[5pt]\quad + (\frac{n^1(n^2)^2}{n^3} - 2n^1n^3)(\partial_z\varphi\partial_1 v^1) - (n^1n^3 + \frac{n^1(n^2)^2}{n^3})(\partial_z\varphi\partial_2 v^2) \\[5pt]\quad + [\frac{(n^2)^2}{n^3}\frac{n^2}{2}- \frac{n^1n^2}{n^3}\frac{n^1}{2} -\frac{n^2n^3}{2}] (\partial_z\varphi\partial_1 v^2 + \partial_z\varphi\partial_2 v^1)-\partial_z\varphi\Theta^2 - \frac{n^2}{n^3}\partial_z\varphi\Theta^3, \\[15pt] \big[n^1n^2 + \frac{(n^1)^3n^2}{(n^3)^2} + \frac{n^1(n^2)^3}{(n^3)^2} \big]\partial_z v^1 + \big[(n^2)^2 + \frac{1}{2}(n^3)^2 + \frac{(n^2)^4}{2(n^3)^2} - \frac{(n^1)^4}{2(n^3)^2}\big]\partial_z v^2 \\[10pt] = -[\frac{(n^3)^2}{2} - \frac{(n^2)^2}{2} - \frac{(n^1)^2}{2}]\big[\partial_z\varphi\partial_2 v^3 - {\partial_z\varphi} [- \partial_z\varphi(\partial_1 v^1 + \partial_2 v^2)]\big] \\[5pt]\quad - (n^2n^3 + \frac{(n^1)^2n^2}{n^3})(\partial_z\varphi\partial_1 v^1) + (\frac{(n^1)^2n^2}{n^3} -2n^2n^3)(\partial_z\varphi\partial_2 v^2) \\[5pt]\quad + (\frac{(n^1)^2}{n^3}\frac{n^1}{2} -\frac{n^1n^3}{2} - \frac{n^1n^2}{n^3}\frac{n^2}{2}) (\partial_z\varphi\partial_1 v^2 + \partial_z\varphi\partial_2 v^1)+\partial_z\varphi\Theta^1 + \frac{n^1}{n^3}\partial_z\varphi\Theta^3. 
\end{array} \end{equation} When $|\nabla h|_{\infty}$ is suitably small, the coefficient matrix of $(\partial_z v^1, \partial_z v^2)^{\top}$ is nondegenerate, then we solve \begin{equation}\label{Sect4_Vorticity_Discrepancy_4} \left\{\begin{array}{ll} \partial_z v^1 = f^5[\nabla\varphi](\partial_j v^i) -\textsf{M}^{11}(\partial_z\varphi\Theta^2 + \frac{n^2}{n^3}\partial_z\varphi\Theta^3) +\textsf{M}^{12}(\partial_z\varphi\Theta^1 + \frac{n^1}{n^3}\partial_z\varphi\Theta^3), \\[6pt]\hspace{1.2cm} j=1,2,\ i=1,2,3,\\[8pt] \partial_z v^2 = f^6[\nabla\varphi](\partial_j v^i) -\textsf{M}^{21}(\partial_z\varphi\Theta^2 + \frac{n^2}{n^3}\partial_z\varphi\Theta^3) +\textsf{M}^{22}(\partial_z\varphi\Theta^1 + \frac{n^1}{n^3}\partial_z\varphi\Theta^3), \\[6pt]\hspace{1.2cm} j=1,2,\ i=1,2,3, \end{array}\right. \end{equation} where the matrix $\textsf{M} = (\textsf{M}_{ij})$ is defined in $(\ref{Sect2_Vorticity_H_BC_9})$, $(\textsf{M}^{ij}) = (\textsf{M}_{ij})^{-1}$. By $(\ref{Sect2_NormalDer_Vorticity_Estimate_2})$ and $(\ref{Sect4_Vorticity_Discrepancy_4})$, we have the boundary values of $\omega_h = (\omega^1,\omega^2)$: \begin{equation}\label{Sect4_Vorticity_Discrepancy_5} \begin{array}{ll} \omega^1 = - \frac{\partial_1\varphi\partial_2\varphi}{\partial_z\varphi}\partial_z v^1 - \frac{1 + (\partial_2\varphi)^2}{\partial_z\varphi}\partial_z v^2 + \partial_2 v^3 + \partial_2\varphi(\partial_1 v^1 + \partial_2 v^2) \\[8pt]\hspace{0.47cm} := \textsf{F}^1 [\nabla\varphi](\partial_j v^i) + \varsigma_1\Theta^1 + \varsigma_2\Theta^2 + \varsigma_3\Theta^3, \\[9pt] \omega^2 = \frac{1+(\partial_1\varphi)^2}{\partial_z\varphi}\partial_z v^1 + \frac{\partial_1\varphi\partial_2\varphi}{\partial_z\varphi}\partial_z v^2 - \partial_1 v^3 - \partial_1\varphi(\partial_1 v^1 + \partial_2 v^2) \\[8pt]\hspace{0.47cm} := \textsf{F}^2 [\nabla\varphi](\partial_j v^i) + \varsigma_4\Theta^1 + \varsigma_5\Theta^2 + \varsigma_6\Theta^3, \end{array} \end{equation} where the coefficients $\varsigma_i$ are as follows: \begin{equation}\label{Sect4_Vorticity_Discrepancy_6} \begin{array}{ll} \varsigma_1 =\partial_z\varphi[\partial_1\varphi\partial_2\varphi\textsf{M}^{12} + (1+(\partial_2\varphi)^2)\textsf{M}^{22}], \\[8pt] \varsigma_2 =\partial_1\varphi\partial_2\varphi\textsf{M}^{11} + (1 + (\partial_2\varphi)^2)\textsf{M}^{21}, \\[8pt] \varsigma_3 =\big[(1+(\partial_1\varphi)^2) \big(-\textsf{M}^{11}\frac{n^2}{n^3} +\textsf{M}^{12}\frac{n^1}{n^3}\partial_z\varphi\big) \\[7pt]\hspace{0.85cm} + \partial_1\varphi\partial_2\varphi \big(-\textsf{M}^{21}\frac{n^2}{n^3} +\textsf{M}^{22}\frac{n^1}{n^3}\partial_z\varphi\big)\big], \\[9pt] \varsigma_4 = \partial_z\varphi[(1+(\partial_1\varphi)^2)\textsf{M}^{12} + \partial_1\varphi\partial_2\varphi\textsf{M}^{22}], \\[8pt] \varsigma_5 = -(1+(\partial_1\varphi)^2)\textsf{M}^{11} - \partial_1\varphi\partial_2\varphi \textsf{M}^{21}, \\[8pt] \varsigma_6 = (1+(\partial_1\varphi)^2) \big(-\textsf{M}^{11}\frac{n^2}{n^3} +\textsf{M}^{12}\frac{n^1}{n^3}\partial_z\varphi\big) \\[7pt]\hspace{0.85cm} + \partial_1\varphi\partial_2\varphi \big(-\textsf{M}^{21}\frac{n^2}{n^3} +\textsf{M}^{22}\frac{n^1}{n^3}\partial_z\varphi\big). 
\end{array} \end{equation} If $|\varsigma_1\Theta^1 + \varsigma_2\Theta^2 + \varsigma_3\Theta^3|_{\infty} = |\varsigma_4\Theta^1 + \varsigma_5\Theta^2 + \varsigma_6\Theta^3|_{\infty} = 0$, then \begin{equation}\label{Sect4_Vorticity_Discrepancy_7} \left\{\begin{array}{ll} \partial_z v^1 = f^5[\nabla\varphi](\partial_j v^i),\ j=1,2,\ i=1,2,3,\\[8pt] \partial_z v^2 = f^6[\nabla\varphi](\partial_j v^i),\ j=1,2,\ i=1,2,3. \end{array}\right. \end{equation} Since the argument in the proof of Lemma $\ref{Sect2_Vorticity_H_Eq_BC_Lemma}$ is reversible, $(\ref{Sect4_Vorticity_Discrepancy_7})$ implies $\mathcal{S}^{\varphi}v \ensuremath{\textbf{n}}\times\ensuremath{\textbf{n}} =0$, which contradicts $(\ref{Sect4_Vorticity_Discrepancy_1})$. Thus, either $|\varsigma_1\Theta^1 + \varsigma_2\Theta^2 + \varsigma_3\Theta^3|_{\infty}\neq 0$ or $|\varsigma_4\Theta^1 + \varsigma_5\Theta^2 + \varsigma_6\Theta^3|_{\infty}\neq 0$. Without loss of generality, we assume the former holds. As $\ensuremath{\epsilon}\ensuremath{\rightarrow} 0$, $|\textsf{F}^1 [\nabla\varphi^{\ensuremath{\epsilon}}](\partial_j v^{\ensuremath{\epsilon},i}) - \textsf{F}^1 [\nabla\varphi](\partial_j v^i)|_{L^{\infty}}\ensuremath{\rightarrow} 0$; this convergence holds thanks to the uniform regularity of the Navier-Stokes solutions and their tangential derivatives in conormal Sobolev spaces (see \cite{Masmoudi_Rousset_2012_FreeBC}). Thus, when $\ensuremath{\epsilon}$ is sufficiently small, $|\textsf{F}^1 [\nabla\varphi^{\ensuremath{\epsilon}}](\partial_j v^{\ensuremath{\epsilon},i}) - \textsf{F}^1 [\nabla\varphi](\partial_j v^i)|_{\infty} \leq \frac{1}{2}|\varsigma_1\Theta^1 + \varsigma_2\Theta^2 + \varsigma_3\Theta^3|_{\infty}$. It follows from $(\ref{Sect4_Vorticity_Discrepancy_5})$ that \begin{equation}\label{Sect4_Vorticity_Discrepancy_8} \begin{array}{ll} |\omega^{\ensuremath{\epsilon},1} -\omega^1|_{\infty} \geq |\varsigma_1\Theta^1 + \varsigma_2\Theta^2 + \varsigma_3\Theta^3|_{\infty} - \big|\textsf{F}^1 [\nabla\varphi^{\ensuremath{\epsilon}}](\partial_j v^{\ensuremath{\epsilon},i}) - \textsf{F}^1 [\nabla\varphi](\partial_j v^i)\big|_{\infty} \\[8pt]\hspace{2cm} \geq \frac{1}{2}|\varsigma_1\Theta^1 + \varsigma_2\Theta^2 + \varsigma_3\Theta^3|_{\infty}. \end{array} \end{equation} Then \begin{equation}\label{Sect4_Vorticity_Discrepancy_9} \begin{array}{ll} |\omega_h^{\ensuremath{\epsilon},b} -\omega_h^b|_{L^{\infty}(\mathbb{R}^2\times (0,T])} \geq \max\{ |\omega^{\ensuremath{\epsilon},1} -\omega^1|_{\infty} ,\, |\omega^{\ensuremath{\epsilon},2} -\omega^2|_{\infty}\} \\[8pt] \geq \frac{1}{2}\max\{|\varsigma_1\Theta^1 + \varsigma_2\Theta^2 + \varsigma_3\Theta^3|_{\infty}, \, |\varsigma_4\Theta^1 + \varsigma_5\Theta^2 + \varsigma_6\Theta^3|_{\infty}\} >0. \end{array} \end{equation} Thus, Lemma $\ref{Sect4_Vorticity_Discrepancy_Lemma}$ is proved. \end{proof} \subsection{$L^{\infty}$ Estimate of Strong Vorticity Layer} In this subsection, we prove the existence of a strong vorticity layer when the Euler boundary data satisfy $\Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}|_{z=0} \neq 0$ in $(0,T]$. Before proving our result, let us investigate the simplest model, namely the following heat equation with damping and constant coefficients.
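As a preliminary orientation (a steady-state sketch only, under the simplifying assumption that the boundary datum $w^b$ is a constant), the bounded solution on $\{z<0\}$ of the reduced stationary problem $-\ensuremath{\epsilon} W'' + \gamma W = 0$ with $W(0) = w^b$ is \begin{equation*} \begin{array}{ll} W(z) = w^b \exp\{z\sqrt{\gamma/\ensuremath{\epsilon}}\}, \qquad z\leq 0, \end{array} \end{equation*} which is of size $w^b$ in the layer $|z| = O(\sqrt{\ensuremath{\epsilon}})$ and exponentially small for $|z| \gg \sqrt{\ensuremath{\epsilon}}$. Proposition $\ref{Sect4_HeatEq_Diffusion_Proposition}$ below makes this precise for general boundary data.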
\begin{proposition}\label{Sect4_HeatEq_Diffusion_Proposition} Assume that $w^b \in H^4(\mathbb{R}^2\times[0,T])$ with $w^b \nrightarrow 0$, that $\gamma>0$ is a constant, and that $W$ is the solution of the following heat equation with damping: \begin{equation}\label{Sect4_HeatEq_Diffusion_1} \left\{\begin{array}{ll} \partial_t W - \ensuremath{\epsilon}\triangle W + \gamma W =0, \\[5pt] W|_{z=0} = w^b \nrightarrow 0, \\[5pt] W|_{t=0} = 0. \end{array}\right. \end{equation} Then $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|W\|_{L^{\infty}(\mathbb{R}^3_{-} \times (0,T])} \neq 0$. \end{proposition} \begin{proof} We define the following Fourier transform with respect to $(t,y)\in\mathbb{R}_{+}\times\mathbb{R}^2$: \begin{equation}\label{Sect4_BoundarLayer_Boundary_Fourier} \begin{array}{ll} \mathcal{F}[W](\tau,\xi,z) = \int\limits_0^{+\infty}\int\limits_{\mathbb{R}^2} e^{-i \tau t - i\xi\cdot y} W(t,y,z) \,\mathrm{d}y\,\mathrm{d}t. \end{array} \end{equation} Since $W|_{t=0} = 0$, no term involving $W|_{t=0}$ appears as a forcing term. Applying the Fourier transform $(\ref{Sect4_BoundarLayer_Boundary_Fourier})$ to $(\ref{Sect4_HeatEq_Diffusion_1})$, we get a second-order ordinary differential equation in $z$: \begin{equation}\label{Sect4_HeatEq_Diffusion_2} \begin{array}{ll} i\tau \mathcal{F}[W] -\ensuremath{\epsilon}\partial_{zz} \mathcal{F}[W] + \ensuremath{\epsilon}|\xi|^2\mathcal{F}[W] + \gamma \mathcal{F}[W] =0, \\[9pt] \partial_{zz} \mathcal{F}[W] - \frac{1}{\ensuremath{\epsilon}}(i\tau + \ensuremath{\epsilon}|\xi|^2 + \gamma)\mathcal{F}[W] =0, \\[9pt] \mathcal{F}[W](\tau,\xi,z) = \exp\{(i\tau + \ensuremath{\epsilon}|\xi|^2 + \gamma)^{\frac{1}{2}}\frac{z}{\sqrt{\ensuremath{\epsilon}}}\} \mathcal{F}[w^b](\tau,\xi), \end{array} \end{equation} where the complex root $(i\tau + \ensuremath{\epsilon}|\xi|^2 + \gamma)^{\frac{1}{2}}$ has two branches, one of which always has positive real part since $\ensuremath{\epsilon}|\xi|^2 + \gamma>0$; we choose this branch. If $|z| =O(\ensuremath{\epsilon}^{\frac{1}{2} -\delta_z})$ with $\delta_z > 0$, we may simply take $z = - \ensuremath{\epsilon}^{\frac{1}{2} -\delta_z}$; then, since the real part of $(i\tau + \ensuremath{\epsilon}|\xi|^2 + \gamma)^{\frac{1}{2}}$ is at least $(\ensuremath{\epsilon}|\xi|^2 + \gamma)^{\frac{1}{2}}$, as $\ensuremath{\epsilon}\ensuremath{\rightarrow} 0$, \begin{equation}\label{Sect4_HeatEq_Diffusion_3} \begin{array}{ll} |\exp\{(i\tau + \ensuremath{\epsilon}|\xi|^2 + \gamma)^{\frac{1}{2}}\frac{z}{\sqrt{\ensuremath{\epsilon}}}\}| \leq \exp\{-(\ensuremath{\epsilon}|\xi|^2 + \gamma)^{\frac{1}{2}} \ensuremath{\epsilon}^{-\delta_z}\} \leq \exp\{-\gamma^{\frac{1}{2}} \ensuremath{\epsilon}^{-\delta_z}\} \ensuremath{\rightarrow} 0, \\[7pt] \|\mathcal{F}[W](\tau,\xi,z)\|_{L^1(\mathrm{d}\tau\mathrm{d}\xi)} \leq \|\exp\{-(\ensuremath{\epsilon}|\xi|^2 + \gamma)^{\frac{1}{2}}\ensuremath{\epsilon}^{-\delta_z}\} \mathcal{F}[w^b](\tau,\xi)\|_{L^1 (\mathrm{d}\tau\mathrm{d}\xi)} \ensuremath{\rightarrow} 0; \end{array} \end{equation} note that $\mathcal{F}[w^b]\in L^1 (\mathrm{d}\tau\mathrm{d}\xi)$ is guaranteed by $w^b \in H^4(\mathbb{R}^2\times [0,T])$. Then \begin{equation}\label{Sect4_HeatEq_Diffusion_4} \begin{array}{ll} \lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0} \|W\|_{L^{\infty}(t,y,z)} = \lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|\mathcal{F}^{-1}[\mathcal{F}[W]]\|_{L^{\infty}(t,y,z)} \\[7pt] \ensuremath{\lesssim} \|\mathcal{F}[W](\tau,\xi,z)\|_{L^1(\mathrm{d}\tau\mathrm{d}\xi)} \ensuremath{\rightarrow} 0.
\end{array} \end{equation} If $|z| =O(\ensuremath{\epsilon}^{\frac{1}{2} +\delta_z})$ with $\delta_z \geq 0$, we may simply take $z = - \ensuremath{\epsilon}^{\frac{1}{2} +\delta_z}$; then, as $\ensuremath{\epsilon}\ensuremath{\rightarrow} 0$, \begin{equation}\label{Sect4_HeatEq_Diffusion_5} \begin{array}{ll} |\exp\{(i\tau + \ensuremath{\epsilon}|\xi|^2 + \gamma)^{\frac{1}{2}}\frac{z}{\sqrt{\ensuremath{\epsilon}}}\}| \geq \exp\{-\big|i\tau + \ensuremath{\epsilon}|\xi|^2 + \gamma\big|^{\frac{1}{2}} \ensuremath{\epsilon}^{\delta_z}\} \ensuremath{\rightarrow} 1 \text{\ or\ } \exp\{-\big|i\tau + \gamma\big|^{\frac{1}{2}}\} >0, \end{array} \end{equation} for each fixed $(\tau,\xi)$, and hence $\|\mathcal{F}[W](\tau,\xi,z)\|_{L^{\infty} (\mathrm{d}\tau\mathrm{d}\xi)} \neq 0$. We argue by contradiction: if $\|W\|_{L^{\infty}(t,y,z)} = 0$, then $\|W\|_{L^1(t,y,z)} = 0$, and hence $\|\mathcal{F}[W](\tau,\xi,z)\|_{L^{\infty}} = 0$, a contradiction. Thus, if $z =O(\ensuremath{\epsilon}^{\frac{1}{2} +\delta_z})$, then $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|W\|_{L^{\infty}(t,y,z)} \neq 0$. Thus, Proposition $\ref{Sect4_HeatEq_Diffusion_Proposition}$ is proved. \end{proof} The proof of the following theorem is considerably more involved than that of Proposition $\ref{Sect4_HeatEq_Diffusion_Proposition}$, since it relies on symbolic analysis and paradifferential calculus. In order to show that the discrepancy between the boundary values of the vorticities is, by itself, a sufficient condition for a strong vorticity layer, we assume that the initial vorticity layer is weak. \begin{theorem}\label{Sect4_BoundarLayer_Boundary_Thm} Assume the same conditions as in Lemma $\ref{Sect4_Vorticity_Discrepancy_Lemma}$. If the Euler solution satisfies $\Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}|_{z=0} \neq 0$ in $(0,T]$ and $\Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}|_{z=0} \in H^4(\mathbb{R}^2\times[0,T])$, and if the initial Navier-Stokes velocity satisfies $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}(\nabla^{\varphi^{\ensuremath{\epsilon}}}\times v_0^{\ensuremath{\epsilon}}) - \nabla^{\varphi}\times\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0} v_0^{\ensuremath{\epsilon}} = 0$, then $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|\omega^{\ensuremath{\epsilon}} -\omega\|_{L^{\infty}(\mathbb{R}^2\times [0, O(\sqrt{\ensuremath{\epsilon}}))\times (0,T])} \neq 0$. \end{theorem} \begin{proof} Since the initial Navier-Stokes velocity satisfies $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}(\nabla^{\varphi^{\ensuremath{\epsilon}}}\times v_0^{\ensuremath{\epsilon}}) - \nabla^{\varphi}\times\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0} v_0^{\ensuremath{\epsilon}} = 0$, we have $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}|\omega^{\ensuremath{\epsilon}}_0 -\omega_0|_{L^{\infty}} = 0$, and hence $\Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}|_{z=0,t=0} = 0$, which does not contradict $\Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}|_{z=0} \neq 0$ in $(0,T]$.
We study the equations $(\ref{Sect3_Vorticity_Eq})$ with small initial data: \begin{equation}\label{Sect4_BoundarLayer_Boundary_Eq_1} \left\{\begin{array}{ll} a_0\partial_t W - \ensuremath{\epsilon}\partial_i(a_{ij}\partial_j W) + \big(\gamma a_0 - f^7[\nabla\varphi^{\ensuremath{\epsilon}},\nabla\varphi,\partial_j v^{\ensuremath{\epsilon},i},\partial_j v^i]\big) W =\mathcal{I}_2, \\[10pt] W|_{z=0} = e^{-\gamma t}\hat{\omega}_h^b \nrightarrow 0, \\[9pt] W|_{t=0} = \hat{\omega}_{h,0} \ensuremath{\rightarrow} 0. \end{array}\right. \end{equation} We decompose $W = W^{bdy} + W^{f\!o}$, where $W^{f\!o}$ satisfies the nonhomogeneous equations: \begin{equation}\label{Sect4_BoundarLayer_Boundary_Force} \left\{\begin{array}{ll} a_0\partial_t W^{f\!o} - \ensuremath{\epsilon}\partial_i(a_{ij}\partial_j W^{f\!o}) + \big(\gamma a_0 - f^7[\nabla\varphi^{\ensuremath{\epsilon}},\nabla\varphi,\partial_j v^{\ensuremath{\epsilon},i},\partial_j v^i]\big) W^{f\!o} =\mathcal{I}_2, \\[10pt] W^{f\!o}|_{z=0} = 0, \\[9pt] W^{f\!o}|_{t=0} = \hat{\omega}_{h,0} \ensuremath{\rightarrow} 0, \end{array}\right. \end{equation} and $W^{bdy}$ satisfies the homogeneous equations: \begin{equation}\label{Sect4_BoundarLayer_Boundary_BC} \left\{\begin{array}{ll} a_0\partial_t W^{bdy} - \ensuremath{\epsilon}\partial_i(a_{ij}\partial_j W^{bdy}) + \big(\gamma a_0 - f^7[\nabla\varphi^{\ensuremath{\epsilon}},\nabla\varphi,\partial_j v^{\ensuremath{\epsilon},i},\partial_j v^i]\big) W^{bdy} =0, \\[10pt] W^{bdy}|_{z=0} = e^{-\gamma t}\hat{\omega}_h^b \nrightarrow 0, \\[11pt] W^{bdy}|_{t=0} = 0. \end{array}\right. \end{equation} Thanks to the diffusion and damping terms in $(\ref{Sect4_BoundarLayer_Boundary_Force})$, it is easy to prove that $\|W^{f\!o}\|_{L^{\infty}} \leq \|W^{f\!o}|_{t=0}\|_{L^{\infty}} + \int\limits_0^t \|\mathcal{I}_2\|_{\infty}\,\mathrm{d}t \ensuremath{\rightarrow} 0.$ Next, we study the homogeneous equations $(\ref{Sect4_BoundarLayer_Boundary_BC})$ with variable coefficients, which differ from $(\ref{Sect4_HeatEq_Diffusion_1})$ only in their coefficients. Using symbolic analysis, it is standard to prove that the limit of the solution of $(\ref{Sect4_BoundarLayer_Boundary_BC})$ behaves similarly to that of $(\ref{Sect4_HeatEq_Diffusion_1})$. We nevertheless indicate the key points. We rewrite $(\ref{Sect4_BoundarLayer_Boundary_BC})$ in the following form: \begin{equation}\label{Sect4_BoundarLayer_Boundary_BC_1} \left\{\begin{array}{ll} \ensuremath{\epsilon} \partial_{zz} W^{bdy} + \ensuremath{\epsilon}(\frac{\partial_z a_{33}}{a_{33}}+ \sum\limits_{j=1,2}\frac{\partial_j a_{j3}}{a_{33}})\partial_z W^{bdy} + 2\ensuremath{\epsilon} \sum\limits_{j=1,2}\frac{a_{j3}}{a_{33}}\partial_{jz} W^{bdy} \\[11pt]\quad + \ensuremath{\epsilon} \sum\limits_{j=1,2}\frac{\partial_z a_{j3}}{a_{33}}\partial_j W^{bdy} + \ensuremath{\epsilon} \sum\limits_{j=1,2}\frac{\partial_i a_{ij}}{a_{33}}\partial_j W^{bdy} + \ensuremath{\epsilon} \sum\limits_{j=1,2}\frac{a_{ij}}{a_{33}}\partial_{ij} W^{bdy} \\[13pt]\quad - \frac{a_0}{a_{33}}\partial_t W^{bdy} - \frac{1}{a_{33}}\big(\gamma a_0 - f^7[\nabla\varphi^{\ensuremath{\epsilon}},\nabla\varphi,\partial_j v^{\ensuremath{\epsilon},i},\partial_j v^i]\big) W^{bdy} =0, \\[11pt] W^{bdy}|_{z=0} = e^{-\gamma t}\hat{\omega}_h^b \nrightarrow 0, \\[8pt] W^{bdy}|_{t=0} = 0. \end{array}\right.
\end{equation} Taking $z$ as a parameter, the symbolic version of $(\ref{Sect4_BoundarLayer_Boundary_BC_1})$ is \begin{equation}\label{Sect4_BoundarLayer_Boundary_BC_2} \left\{\begin{array}{ll} \ensuremath{\epsilon} \partial_{zz} \tilde{W}^{bdy} + A_1 \sqrt{\ensuremath{\epsilon}} \partial_z \tilde{W}^{bdy} + A_0 \tilde{W}^{bdy} =0, \\[11pt] \tilde{W}^{bdy}|_{z=0} = \mathcal{F}[e^{-\gamma t}\hat{\omega}_h^b] \nrightarrow 0, \\[8pt] \tilde{W}^{bdy}|_{t=0} = 0, \end{array}\right. \end{equation} where the Fourier multipliers are as follows: \begin{equation}\label{Sect4_BoundarLayer_Boundary_BC_3} \begin{array}{ll} A_1 = \sqrt{\ensuremath{\epsilon}}(\frac{\partial_z a_{33}}{a_{33}}+ \sum\limits_{j=1,2}\frac{\partial_j a_{j3}}{a_{33}} + 2i \sum\limits_{j=1,2}\frac{a_{j3}}{a_{33}} \xi^j), \\[14pt] A_0 = i\ensuremath{\epsilon} \sum\limits_{j=1,2}\frac{\partial_z a_{j3}}{a_{33}}\xi^j + i\ensuremath{\epsilon} \sum\limits_{j=1,2}\frac{\partial_i a_{ij}}{a_{33}}\xi^j - \ensuremath{\epsilon} \sum\limits_{j=1,2}\frac{a_{ij}}{a_{33}}\xi^i\xi^j - i\tau \frac{a_0}{a_{33}} \\[12pt]\hspace{0.87cm} - \frac{1}{a_{33}}\big(\gamma a_0 - f^7[\nabla\varphi^{\ensuremath{\epsilon}},\nabla\varphi,\partial_j v^{\ensuremath{\epsilon},i},\partial_j v^i]\big). \end{array} \end{equation} Due to $|a_0| + |a_{ij}| + \sqrt{\ensuremath{\epsilon}}|\partial_z a_{ij}| \leq C$ for some $C>0$ (see \cite{Masmoudi_Rousset_2012_FreeBC}), as $\ensuremath{\epsilon}\ensuremath{\rightarrow} 0$, \begin{equation*} \begin{array}{ll} A_1 \ensuremath{\rightarrow} \sqrt{\ensuremath{\epsilon}}\frac{\partial_z a_{33}}{a_{33}}, \\[7pt] - A_0 \ensuremath{\rightarrow} \frac{1}{a_{33}}\big(\gamma a_0 - f^7[\nabla\varphi^{\ensuremath{\epsilon}},\nabla\varphi,\partial_j v^{\ensuremath{\epsilon},i},\partial_j v^i]\big) + i\tau \frac{a_0}{a_{33}} - i\ensuremath{\epsilon} \sum\limits_{j=1,2}\frac{\partial_z a_{j3}}{a_{33}}\xi^j . \end{array} \end{equation*} When $\ensuremath{\epsilon}>0$ is sufficiently small, $A_1$ and $A_0$ stay close to these limiting expressions. The solution of the ODE $(\ref{Sect4_BoundarLayer_Boundary_BC_2})$ is \begin{equation}\label{Sect4_BoundarLayer_Boundary_BC_4} \begin{array}{ll} \tilde{W}^{bdy} = \exp\{\frac{- A_1 + \sqrt{A_1^2 -4 A_0}}{2} \frac{z}{\sqrt{\ensuremath{\epsilon}}}\}\tilde{W}^{bdy}|_{z=0}. \end{array} \end{equation} The complex root $\sqrt{A_1^2 -4 A_0}$ has two branches, one of which always has positive real part, since $\Re (A_1^2 -4 A_0)>0$ when $\ensuremath{\epsilon}$ is sufficiently small, where $\Re$ denotes the real part; we choose this branch. Since $\Re (- 4 A_0) >0$ when $\ensuremath{\epsilon}$ is sufficiently small, we have $|\Re \sqrt{A_1^2 -4 A_0}| >| - A_1|$, hence $\Re\frac{- A_1 + \sqrt{A_1^2 -4 A_0}}{2} >0$ and $\big\|\frac{- A_1 + \sqrt{A_1^2 -4 A_0}}{2}\big\|_{L^{\infty}}<+\infty$. Define $\mathcal{T}[W^{bdy}] = \mathcal{F}^{-1}[\tilde{W}^{bdy}]$. Note that $(\ref{Sect4_BoundarLayer_Boundary_BC_4})$ has the same form as $(\ref{Sect4_HeatEq_Diffusion_2})_3$. Applying the same argument as in Proposition $\ref{Sect4_HeatEq_Diffusion_Proposition}$ to $(\ref{Sect4_BoundarLayer_Boundary_BC_4})$, we can prove that if $z =O(\ensuremath{\epsilon}^{\frac{1}{2} +\delta_z})$ with $\delta_z \geq 0$, then $\|\tilde{W}^{bdy}\|_{L^1} \nrightarrow 0$ as $\ensuremath{\epsilon}\ensuremath{\rightarrow} 0$, and hence $\|\mathcal{T}[W^{bdy}]\|_{L^{\infty}} \neq 0$.
If $z =O(\ensuremath{\epsilon}^{\frac{1}{2} -\delta_z})$ with $\delta_z > 0$, then $\|\tilde{W}^{bdy}\|_{L^1} \ensuremath{\rightarrow} 0$ as $\ensuremath{\epsilon}\ensuremath{\rightarrow} 0$, and hence $\|\mathcal{T}[W^{bdy}]\|_{L^{\infty}} \ensuremath{\rightarrow} 0$. The difference between $\mathcal{T}[W^{bdy}]$ and $W^{bdy}$ is controlled by $W^{bdy}$ (see the paradifferential calculus results in \cite{Masmoudi_Rousset_2012_FreeBC,Metivier_Zumbrun_2005}). If we assume $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|W^{bdy}\|_{L^{\infty}} =0$, then $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|\mathcal{T}[W^{bdy}]\|_{L^{\infty}} =0$, which contradicts the fact that $\|\mathcal{T}[W^{bdy}]\|_{L^{\infty}} \neq 0$ for $z =O(\ensuremath{\epsilon}^{\frac{1}{2} +\delta_z})$. So $\lim\limits_{\ensuremath{\epsilon}\ensuremath{\rightarrow} 0}\|W^{bdy}\|_{L^{\infty}} \neq 0$ on some set located in the interior of the fluid domain. Thus, Theorem $\ref{Sect4_BoundarLayer_Boundary_Thm}$ is proved. \end{proof} \section{Convergence Rates of Inviscid Limit for $\sigma=0$} In this section, we estimate convergence rates for the inviscid limit when $\sigma=0$. We denote $\hat{v} =v^{\ensuremath{\epsilon}} -v$, $\hat{q} =q^{\ensuremath{\epsilon}} -q$, $\hat{h} =h^{\ensuremath{\epsilon}} -h$, and we denote the $i$-th components of $v^{\ensuremath{\epsilon}}$ and $v$ by $v^{\ensuremath{\epsilon},i}$ and $v^i$, respectively. \subsection{Estimates for the Pressure Gradient} \begin{lemma}\label{Sect5_Pressure_Lemma} Assume $0\leq s\leq k-1$ and $k \leq m-2$. Then the pressure difference $\hat{q}$ satisfies the following gradient estimate: \begin{equation}\label{Sect5_Pressure_Lemma_Eq} \begin{array}{ll} \|\nabla \hat{q}\|_{X^s} \ensuremath{\lesssim} \|\hat{v}\|_{X^{s,1}} + \|\partial_z\hat{v}\|_{X^s} + |\hat{h}|_{X^{s,\frac{1}{2}}} + O(\ensuremath{\epsilon}). \end{array} \end{equation} \end{lemma} \begin{proof} In \cite{Masmoudi_Rousset_2012_FreeBC}, the following matrices $\textsf{E}$ and $\textsf{P}$, satisfying $\textsf{E} = \frac{1}{\partial_z\varphi} \textsf{P} \textsf{P}^{\top}$, were introduced: \begin{equation*} \begin{array}{ll} \textsf{E} = \left(\begin{array}{ccc} \partial_z \varphi & 0 & -\partial_1\varphi \\[1pt] 0 & \partial_z \varphi & -\partial_2\varphi \\[2pt] -\partial_1 \varphi & -\partial_2 \varphi & \frac{1+(\partial_1\varphi)^2 + (\partial_2\varphi)^2}{\partial_z\varphi} \end{array}\right) \, , \quad \textsf{P} = \left(\begin{array}{ccc} \partial_z \varphi & 0 & 0 \\[2pt] 0 & \partial_z \varphi & 0 \\[2pt] -\partial_1 \varphi & -\partial_2 \varphi & 1 \end{array}\right)\, . \end{array} \end{equation*} Applying the divergence operator $\nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot$ to $(\ref{Sect1_NS_Eq})_1$ and $\nabla^{\varphi}\cdot$ to $(\ref{Sect1_Euler_Eq})_1$, we get \begin{equation}\label{Sect5_Pressure_Estimates_1} \left\{\begin{array}{ll} \nabla\cdot(\textsf{E}^{\ensuremath{\epsilon}}\nabla q^{\ensuremath{\epsilon}}) = \partial_z \varphi^{\ensuremath{\epsilon}}\triangle^{\varphi^{\ensuremath{\epsilon}}} q^{\ensuremath{\epsilon}} = -\partial_z \varphi^{\ensuremath{\epsilon}}\nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot (v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}}), \\[6pt] \nabla\cdot(\textsf{E}\nabla q) = \partial_z \varphi\triangle^{\varphi} q = -\partial_z \varphi\nabla^{\varphi}\cdot (v \cdot\nabla^{\varphi} v). \end{array}\right.
\end{equation} It follows from $(\ref{Sect5_Pressure_Estimates_1})$ and $(\ref{SectA_Difference_Transform_2})$ that \begin{equation}\label{Sect5_Pressure_Estimates_2} \begin{array}{ll} \nabla\cdot(\textsf{E}^{\ensuremath{\epsilon}}\nabla \hat{q}) + \nabla\cdot((\textsf{E}^{\ensuremath{\epsilon}} -\textsf{E}) \nabla q) = \nabla\cdot(\textsf{E}^{\ensuremath{\epsilon}}\nabla q^{\ensuremath{\epsilon}}) - \nabla\cdot(\textsf{E}\nabla q) \\[7pt] = - \partial_z \varphi^{\ensuremath{\epsilon}}\nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot (v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}}) + \partial_z \varphi \nabla^{\varphi}\cdot (v \cdot\nabla^{\varphi} v) \\[7pt] = -\nabla\cdot \big[\textsf{P}^{\ensuremath{\epsilon}} (v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}})\big] + \nabla\cdot \big[\textsf{P} (v \cdot\nabla^{\varphi} v)\big] \\[7pt] = -\nabla\cdot \big[\textsf{P}^{\ensuremath{\epsilon}} (v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}} - v \cdot\nabla^{\varphi} v)\big] - \nabla\cdot \big[(\textsf{P}^{\ensuremath{\epsilon}} - \textsf{P}) (v \cdot\nabla^{\varphi} v)\big] \\[7pt] = -\nabla\cdot \big[\textsf{P}^{\ensuremath{\epsilon}} (v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} \hat{v} - v^{\ensuremath{\epsilon}}\cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\varphi}\partial_z^{\varphi} v + \hat{v}\cdot\nabla^{\varphi} v)\big] - \nabla\cdot \big[(\textsf{P}^{\ensuremath{\epsilon}} - \textsf{P}) (v \cdot\nabla^{\varphi} v)\big]. \end{array} \end{equation} Namely, $\hat{q}$ satisfies the following elliptic equation: \begin{equation}\label{Sect5_Pressure_Estimates_3} \left\{\begin{array}{ll} \nabla\cdot(\textsf{E}^{\ensuremath{\epsilon}}\nabla \hat{q}) = -\nabla\cdot((\textsf{E}^{\ensuremath{\epsilon}} -\textsf{E}) \nabla q) - \nabla\cdot [(\textsf{P}^{\ensuremath{\epsilon}} - \textsf{P}) (v \cdot\nabla^{\varphi} v)] \\[5pt]\hspace{2.1cm} -\nabla\cdot [\textsf{P}^{\ensuremath{\epsilon}} (v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} \hat{v} - v^{\ensuremath{\epsilon}}\cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\varphi}\partial_z^{\varphi} v + \hat{v}\cdot\nabla^{\varphi} v)], \\[7pt] \hat{q}|_{z=0} = g\hat{h} + 2\ensuremath{\epsilon}\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}} \ensuremath{\textbf{n}}^{\ensuremath{\epsilon}}\cdot\ensuremath{\textbf{n}}^{\ensuremath{\epsilon}}. \end{array}\right.
\end{equation} The matrix $\textsf{E}^{\ensuremath{\epsilon}}$ is positive definite, so it is standard to prove that $\hat{q}$ satisfies the following gradient estimate: \begin{equation}\label{Sect5_Pressure_Estimates_6} \begin{array}{ll} \|\nabla \hat{q}\|_{X^s} \ensuremath{\lesssim} \|(\textsf{E}^{\ensuremath{\epsilon}} -\textsf{E}) \nabla q \|_{X^s} + \|(\textsf{P}^{\ensuremath{\epsilon}} - \textsf{P}) (v \cdot\nabla^{\varphi} v)\|_{X^s} \\[6pt]\hspace{1.74cm} + \|\textsf{P}^{\ensuremath{\epsilon}} (v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} \hat{v} - v^{\ensuremath{\epsilon}}\cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\varphi}\partial_z^{\varphi} v + \hat{v}\cdot\nabla^{\varphi} v)\|_{X^s} \\[5pt]\hspace{1.74cm} + |g\hat{h} + 2\ensuremath{\epsilon}\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}} \ensuremath{\textbf{n}}^{\ensuremath{\epsilon}}\cdot\ensuremath{\textbf{n}}^{\ensuremath{\epsilon}}|_{X^{s,\frac{1}{2}}} \\[9pt]\hspace{1.3cm} \ensuremath{\lesssim} \|\textsf{E}^{\ensuremath{\epsilon}} -\textsf{E}\|_{X^s} + \|\textsf{P}^{\ensuremath{\epsilon}} - \textsf{P}\|_{X^s} + \|\hat{v}\|_{X^s} + \|\nabla\hat{v}\|_{X^s} + \|\nabla\hat{\varphi}\|_{X^s}\\[7pt]\hspace{1.74cm} + g|\hat{h}|_{X^{s,\frac{1}{2}}} + 2\ensuremath{\epsilon} \big|\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}}|_{z=0}\big|_{X^{s,\frac{1}{2}}} \\[9pt]\hspace{1.3cm} \ensuremath{\lesssim} \|\hat{v}\|_{X^{s,1}} + \|\partial_z\hat{v}\|_{X^s} + \|\nabla\hat{\eta}\|_{X^s} + g|\hat{h}|_{X^{s,\frac{1}{2}}} + 2\ensuremath{\epsilon} \big|\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}}|_{z=0}\big|_{X^{s,\frac{1}{2}}} \\[9pt]\hspace{1.3cm} \ensuremath{\lesssim} \|\hat{v}\|_{X^{s,1}} + \|\partial_z\hat{v}\|_{X^s} + |\hat{h}|_{X^{s,\frac{1}{2}}} + O(\ensuremath{\epsilon}), \end{array} \end{equation} where $\big|\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}}|_{z=0}\big|_{X^{s,\frac{1}{2}}} \ensuremath{\lesssim} \|\partial_z\partial_j v^{\ensuremath{\epsilon}}\|_{X^s}^{\frac{1}{2}}\|\partial_j v^{\ensuremath{\epsilon}}\|_{X^{s+1}}^{\frac{1}{2}} <+\infty$. Thus, Lemma $\ref{Sect5_Pressure_Lemma}$ is proved. \end{proof} \subsection{Estimates for Tangential Derivatives} In order to close the estimates of the tangential derivatives of $\hat{v}$, that is, to bound $\|\partial_t^{\ell} \hat{v}\|_{L^2}$ and $\sqrt{\ensuremath{\epsilon}}\|\nabla \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}\|_{L^2}$, we must first prove two preliminary lemmas for $\hat{h}$ by using the kinematic boundary condition $(\ref{Sect1_T_Derivatives_Difference_Eq})_3$. The first preliminary lemma concerns $|\partial_t^{\ell} \hat{h}|_{L^2}$ where $0\leq\ell\leq k-1$. Note that the estimates of the mixed derivatives $\partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{h}$ with $|\alpha|>0$ will be obtained when we estimate the mixed derivatives $\partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{v}$. \begin{lemma}\label{Sect5_Height_Estimates_Lemma} Assume $0\leq k\leq m-2$ and $0\leq\ell\leq k-1$. Then $|\partial_t^{\ell}\hat{h}|_{L^2}$ satisfies the estimate: \begin{equation}\label{Sect5_Height_Estimates_Lemma_Eq} \begin{array}{ll} |\partial_t^{\ell}\hat{h}|_{L^2}^2 \ensuremath{\lesssim} |\hat{h}_0|_{X^{k-1}}^2 + \int\limits_0^t |\hat{h}|_{X^{k-1,1}}^2 + \|\hat{v}\|_{X^{k-1,1}}^2 \,\mathrm{d}t + \|\partial_z\hat{v}\|_{L^4([0,T],X^{k-1})}^2.
\end{array} \end{equation} \end{lemma} \begin{proof} Applying $\partial_t^{\ell}$ to the kinematic boundary condition $(\ref{Sect1_T_Derivatives_Difference_Eq})_3$, we get \begin{equation}\label{Sect5_Height_Estimates_Lemma_Eq_1} \begin{array}{ll} \partial_t \partial_t^{\ell}\hat{h} + v_y\cdot \nabla_y \partial_t^{\ell}\hat{h} = \partial_t^{\ell}\hat{v}\cdot\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} + [\partial_t^{\ell}, \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot]\hat{v} - [\partial_t^{\ell}, v_y\cdot \nabla_y]\hat{h}. \end{array} \end{equation} Multiplying $(\ref{Sect5_Height_Estimates_Lemma_Eq_1})$ by $\partial_t^{\ell}\hat{h}$ and integrating over $\mathbb{R}^2$, we have \begin{equation}\label{Sect5_Height_Estimates_Lemma_Eq_2} \begin{array}{ll} \frac{\mathrm{d}}{\mathrm{d}t}\int\limits_{\mathbb{R}^2} |\partial_t^{\ell}\hat{h}|^2 \,\mathrm{d}y = 2\int\limits_{\mathbb{R}^2}\big( \partial_t^{\ell}\hat{v}\cdot\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} + [\partial_t^{\ell}, \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot]\hat{v} - [\partial_t^{\ell}, v_y\cdot \nabla_y]\hat{h} \big) \partial_t^{\ell}\hat{h} \,\mathrm{d}y \\[7pt]\quad + \int\limits_{\mathbb{R}^2} |\partial_t^{\ell}\hat{h}|^2 \nabla_y\cdot v_y \,\mathrm{d}y \ensuremath{\lesssim} |\partial_t^{\ell}\hat{h}|_{L^2}^2 + |\hat{h}|_{X^{\ell,1}}^2 + \big|\hat{v}|_{z=0}\big|_{X^{\ell}}^2 \\[7pt] \ensuremath{\lesssim} |\partial_t^{\ell}\hat{h}|_{L^2}^2 + |\hat{h}|_{X^{k-1,1}}^2 + \|\hat{v}\|_{X^{k-1,1}}^2 + \|\partial_z\hat{v}\|_{X^{k-1}}^2. \end{array} \end{equation} Summing over $\ell$, integrating $(\ref{Sect5_Height_Estimates_Lemma_Eq_2})$ in time and applying the integral form of Gronwall's inequality, we have \begin{equation}\label{Sect5_Height_Estimates_Lemma_Eq_3} \begin{array}{ll} \int\limits_{\mathbb{R}^2} |\partial_t^{\ell}\hat{h}|^2 \,\mathrm{d}y \ensuremath{\lesssim} |\hat{h}_0|_{X^{k-1}}^2 + \int\limits_0^t |\hat{h}|_{X^{k-1,1}}^2 + \|\hat{v}\|_{X^{k-1,1}}^2 + \|\partial_z\hat{v}\|_{X^{k-1}}^2 \,\mathrm{d}t \\[8pt] \ensuremath{\lesssim} |\hat{h}_0|_{X^{k-1}}^2 + \int\limits_0^t |\hat{h}|_{X^{k-1,1}}^2 + \|\hat{v}\|_{X^{k-1,1}}^2 \,\mathrm{d}t + \|\partial_z\hat{v}\|_{L^4([0,T],X^{k-1})}^2. \end{array} \end{equation} Thus, Lemma $\ref{Sect5_Height_Estimates_Lemma}$ is proved. \end{proof} The second preliminary lemma concerns $\sqrt{\ensuremath{\epsilon}}|\partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{h}|_{\frac{1}{2}}$, by which we bound $\sqrt{\ensuremath{\epsilon}}\|\mathcal{S}^{\varphi}\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\eta}\|_{L^2}$ and then $\sqrt{\ensuremath{\epsilon}}\|\mathcal{S}^{\varphi}\partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{v}\|_{L^2}$. \begin{lemma}\label{Sect5_Height_Viscous_Estimates_Lemma} Assume $0\leq k\leq m-2$, $0\leq\ell\leq k-1$ and $\ell+|\alpha| \leq k$. Then $\sqrt{\ensuremath{\epsilon}}|\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h}|_{\frac{1}{2}}$ satisfies the estimate: \begin{equation}\label{Sect5_Height_Viscous_Estimates_Lemma_Eq} \begin{array}{ll} \ensuremath{\epsilon}|\hat{h}|_{X^{k-1,\frac{3}{2}}}^2 \leq \ensuremath{\epsilon}|\hat{h}_0|_{X^{k-1,\frac{3}{2}}}^2 + \int\limits_0^t|\hat{h}|_{X^{k-1,1}}^2 + \ensuremath{\epsilon}\|\nabla \hat{v}\|_{X^{k-1,1}}^2 \,\mathrm{d}t. \end{array} \end{equation} \end{lemma} \begin{proof} The differential operator $\Lambda$ is defined in the proof of Lemma $\ref{Sect2_Height_Viscous_Estimates_Lemma}$.
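For the reader's convenience we recall its role here (a standard choice, stated only as a reminder and under the assumption that the boundary norms $|\cdot|_{s}$ are the usual Sobolev norms on $\mathbb{R}^2$; the precise definition remains the one given in the proof of Lemma $\ref{Sect2_Height_Viscous_Estimates_Lemma}$): $\Lambda$ acts as the tangential Fourier multiplier \begin{equation*} \begin{array}{ll} \widehat{\Lambda^{s} f}(\xi) = (1+|\xi|^2)^{\frac{s}{2}} \hat{f}(\xi), \qquad \xi\in\mathbb{R}^2, \qquad \text{so that} \quad |f|_{s} \simeq |\Lambda^{s} f|_{L^2(\mathbb{R}^2)}, \end{array} \end{equation*} and in particular $\sqrt{\ensuremath{\epsilon}}|\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h}|_{\frac{1}{2}}$ is controlled by $\sqrt{\ensuremath{\epsilon}}|\partial_t^{\ell}\mathcal{Z}^{\alpha}\Lambda^{\frac{1}{2}}\hat{h}|_{L^2}$.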
Applying $\partial_t^{\ell}\mathcal{Z}^{\alpha}\Lambda^{\frac{1}{2}}$ to the kinematic boundary condition $(\ref{Sect1_T_Derivatives_Difference_Eq})_3$, we get \begin{equation}\label{Sect5_Height_Viscous_Estimates_Lemma_Eq_1} \begin{array}{ll} \partial_t \partial_t^{\ell}\mathcal{Z}^{\alpha}\Lambda^{\frac{1}{2}}\hat{h} + v_y\cdot \nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha}\Lambda^{\frac{1}{2}}\hat{h} \\[6pt] = \partial_t^{\ell}\mathcal{Z}^{\alpha}\Lambda^{\frac{1}{2}}\hat{v}\cdot\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} + [\partial_t^{\ell}\mathcal{Z}^{\alpha}\Lambda^{\frac{1}{2}}, \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot]\hat{v} - [\partial_t^{\ell}\mathcal{Z}^{\alpha}\Lambda^{\frac{1}{2}}, v_y\cdot \nabla_y]\hat{h}. \end{array} \end{equation} Multiplying $(\ref{Sect5_Height_Viscous_Estimates_Lemma_Eq_1})$ by $\ensuremath{\epsilon}\partial_t^{\ell}\mathcal{Z}^{\alpha}\Lambda^{\frac{1}{2}} \hat{h}$ and integrating over $\mathbb{R}^2$, we have \begin{equation}\label{Sect5_Height_Viscous_Estimates_Lemma_Eq_2} \begin{array}{ll} \ensuremath{\epsilon}\frac{\mathrm{d}}{\mathrm{d}t}\int\limits_{\mathbb{R}^2} |\partial_t^{\ell}\mathcal{Z}^{\alpha}\Lambda^{\frac{1}{2}} \hat{h}|^2 \,\mathrm{d}y = \ensuremath{\epsilon}\int\limits_{\mathbb{R}^2} |\partial_t^{\ell}\mathcal{Z}^{\alpha}\Lambda^{\frac{1}{2}}\hat{h}|^2 \nabla_y\cdot v_y \,\mathrm{d}y \\[7pt]\quad + 2\ensuremath{\epsilon}\int\limits_{\mathbb{R}^2}\big(\partial_t^{\ell}\mathcal{Z}^{\alpha}\Lambda^{\frac{1}{2}}\hat{v}\cdot\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} + [\partial_t^{\ell}\mathcal{Z}^{\alpha}\Lambda^{\frac{1}{2}}, \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot]\hat{v} - [\partial_t^{\ell}\mathcal{Z}^{\alpha}\Lambda^{\frac{1}{2}}, v_y\cdot \nabla_y]\hat{h}\big) \partial_t^{\ell}\mathcal{Z}^{\alpha}\Lambda^{\frac{1}{2}} \hat{h} \,\mathrm{d}y \end{array} \end{equation} \begin{equation*} \begin{array}{ll} \ensuremath{\lesssim} \ensuremath{\epsilon}|\hat{h}|_{X^{k-1,\frac{3}{2}}}^2 + \ensuremath{\epsilon}\big|\hat{v}|_{z=0}\big|_{X^{k-1,\frac{3}{2}}}^2 + \ensuremath{\epsilon}|\partial_t^{\ell}\mathcal{Z}^{\alpha}\Lambda^{\frac{1}{2}} \hat{h}|_{L^2}^2\\[7pt] \ensuremath{\lesssim} \ensuremath{\epsilon}|\hat{h}|_{X^{k-1,\frac{3}{2}}}^2 + \ensuremath{\epsilon}\|\hat{v}\|_{X_{tan}^{k-1,2}}^2 + \ensuremath{\epsilon}\|\partial_z \hat{v}\|_{X_{tan}^{k-1,1}}^2 + \ensuremath{\epsilon}|\partial_t^{\ell}\mathcal{Z}^{\alpha}\Lambda^{\frac{1}{2}} \hat{h}|_{L^2}^2\\[8pt] \ensuremath{\lesssim} \ensuremath{\epsilon}|\hat{h}|_{X^{k-1,\frac{3}{2}}}^2 +|\hat{h}|_{X^{k-1,1}}^2 + \ensuremath{\epsilon}\|\nabla \hat{v}\|_{X^{k-1,1}}^2. \hspace{5.cm} \end{array} \end{equation*} Summing over $\ell$ and $\alpha$, integrating $(\ref{Sect5_Height_Viscous_Estimates_Lemma_Eq_2})$ in time and applying the integral form of Gronwall's inequality, we obtain $(\ref{Sect5_Height_Viscous_Estimates_Lemma_Eq})$. Thus, Lemma $\ref{Sect5_Height_Viscous_Estimates_Lemma}$ is proved. \end{proof} We remark that $\partial_z \hat{v}^3$ can be estimated in terms of $\partial_z \hat{v}_h$, namely $\|\partial_z \hat{v}^3\|_{X^s} \ensuremath{\lesssim} \|\hat{v}_h\|_{X^{s,1}} + \|\partial_z \hat{v}_h\|_{X^s} + |\hat{h}|_{X^{s,\frac{1}{2}}}$. The proof is based on the following identity, which follows from the divergence-free condition $(\ref{Sect2_DivFreeCondition})$.
\begin{equation}\label{Sect5_NDerivatives_Lemma_Eq_1} \begin{array}{ll} \partial_z \hat{v}^3 = -\partial_z\varphi^{\ensuremath{\epsilon}}(\partial_1 \hat{v}^1 + \partial_2 \hat{v}^2) - \partial_z\hat{\varphi}(\partial_1 v^1 + \partial_2 v^2) \\[7pt]\hspace{1.15cm} +\partial_1\varphi^{\ensuremath{\epsilon}}\partial_z \hat{v}^1 +\partial_1\hat{\varphi}\partial_z v^1 + \partial_2\varphi^{\ensuremath{\epsilon}}\partial_z \hat{v}^2 +\partial_2\hat{\varphi}\partial_z v^2. \end{array} \end{equation} Now we develop the estimates for tangential derivatives. \begin{lemma}\label{Sect5_Tangential_Estimates_Lemma} Assume $0\leq k\leq m-2$, $\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}$ and $\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h}$ have the estimates: \begin{equation}\label{Sect5_Tangential_Estimates_Lemma_Eq} \begin{array}{ll} \|\hat{v}\|_{X^{k-1,1}}^2 + |\hat{h}|_{X^{k-1,1}}^2 + \ensuremath{\epsilon}|\hat{h}|_{X^{k-1,\frac{3}{2}}}^2 + \ensuremath{\epsilon}\int\limits_0^t\|\nabla\hat{v}\|_{X^{k-1,1}}^2 \,\mathrm{d}t \\[5pt] \ensuremath{\lesssim} \|\hat{v}_0\|_{X^{k-1,1}}^2 + |\hat{h}_0|_{X^{k-1,1}}^2 + \ensuremath{\epsilon}|\hat{h}_0|_{X^{k-1,\frac{3}{2}}}^2 +\|\partial_z \hat{v}\|_{L^4([0,T],X^{k-1})}^2 \\[6pt]\quad + |\partial_t^k\hat{h}|_{L^4([0,T],L^2)}^2 + \|\nabla\hat{q}\|_{L^4([0,T],X^{k-1})}^2 + O(\ensuremath{\epsilon}). \end{array} \end{equation} \end{lemma} \begin{proof} $(\hat{V}^{\ell,\alpha}, \hat{Q}^{\ell,\alpha})$ satisfy the following equations: \begin{equation}\label{Sect5_TangentialEstimates_Diff_Eq} \left\{\begin{array}{ll} \partial_t^{\varphi^{\ensuremath{\epsilon}}} \hat{V}^{\ell,\alpha} + v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} \hat{V}^{\ell,\alpha} + \nabla^{\varphi^{\ensuremath{\epsilon}}} \hat{Q}^{\ell,\alpha} - 2\ensuremath{\epsilon}\nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} \\[8pt]\quad = 2\ensuremath{\epsilon}[\partial_t^{\ell}\mathcal{Z}^{\alpha},\nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot]\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \hat{v} + 2\ensuremath{\epsilon}\nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot[\partial_t^{\ell}\mathcal{Z}^{\alpha},\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}] \hat{v} + \ensuremath{\epsilon}\partial_t^{\ell}\mathcal{Z}^{\alpha}\triangle^{\varphi^\ensuremath{\epsilon}} v \\[8pt]\quad - \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\varphi}\partial_t^{\varphi^{\ensuremath{\epsilon}}}\partial_z^{\varphi} v - \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\varphi}\, v^{\ensuremath{\epsilon}}\cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}\partial_z^{\varphi} v - \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}\cdot\nabla^{\varphi} v - \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\varphi} \nabla^{\varphi^{\ensuremath{\epsilon}}}\partial_z^{\varphi} q \\[8pt]\quad - [\partial_t^{\ell}\mathcal{Z}^{\alpha},\partial_t^{\varphi^{\ensuremath{\epsilon}}}]\hat{v} + [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \partial_z^{\varphi} v \partial_t^{\varphi^{\ensuremath{\epsilon}}}]\hat{\varphi} - [\partial_t^{\ell}\mathcal{Z}^{\alpha}, v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}] \hat{v} - [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \nabla^{\varphi} v\cdot]\hat{v}\\[8pt]\quad + [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \partial_z^{\varphi} v \, v^{\ensuremath{\epsilon}}\cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}]\hat{\varphi} - [\partial_t^{\ell}\mathcal{Z}^{\alpha},\nabla^{\varphi^{\ensuremath{\epsilon}}}] \hat{q} + 
[\partial_t^{\ell}\mathcal{Z}^{\alpha},\partial_z^{\varphi} q\nabla^{\varphi^{\ensuremath{\epsilon}}}]\hat{\varphi} := \mathcal{I}_4 , \\[12pt] \nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot \hat{V}^{\ell,\alpha} = -[\partial_t^{\ell}\mathcal{Z}^{\alpha},\nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot] \hat{v} + [\partial_t^{\ell}\mathcal{Z}^{\alpha},\partial_z^{\varphi}v \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}]\hat{\eta} - \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\eta} \nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot \partial_z^{\varphi}v, \\[12pt] \partial_t \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h} + v_y^{\ensuremath{\epsilon}}\cdot \nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h} - \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot \hat{V}^{\ell,\alpha} = \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot \partial_z^{\varphi} v \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\eta} \\[7pt]\quad - \hat{v}_y \cdot \nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} h - \partial_y \hat{h}\cdot \partial_t^{\ell}\mathcal{Z}^{\alpha}v_y + [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \hat{v},\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}] - [\partial_t^{\ell}\mathcal{Z}^{\alpha}, v_y, \partial_y \hat{h}], \\[12pt] \hat{Q}^{\ell,\alpha}\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} -2\ensuremath{\epsilon} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}\,\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} - (g-\partial_z^{\varphi}q)\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h} \,\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} \\[8pt]\quad = 2\ensuremath{\epsilon} [\partial_t^{\ell}\mathcal{Z}^{\alpha},\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}] v^{\ensuremath{\epsilon}}\,\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} + (2\ensuremath{\epsilon} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}v^{\ensuremath{\epsilon}} - 2\ensuremath{\epsilon} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}v^{\ensuremath{\epsilon}}\ensuremath{\textbf{n}}^{\ensuremath{\epsilon}}\cdot\ensuremath{\textbf{n}}^{\ensuremath{\epsilon}}) \,\partial_t^{\ell}\mathcal{Z}^{\alpha}\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} \\[8pt]\quad - [\partial_t^{\ell}\mathcal{Z}^{\alpha},2\ensuremath{\epsilon} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}v^{\ensuremath{\epsilon}}\ensuremath{\textbf{n}}^{\ensuremath{\epsilon}}\cdot\ensuremath{\textbf{n}}^{\ensuremath{\epsilon}},\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}] +2\ensuremath{\epsilon} [\partial_t^{\ell}\mathcal{Z}^{\alpha},\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}v^{\ensuremath{\epsilon}}, \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}] + 2\ensuremath{\epsilon} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}v\,\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}, \\[12pt] (\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v},\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h})|_{t=0} = (\partial_t^{\ell}\mathcal{Z}^{\alpha}v_0^\ensuremath{\epsilon} -\partial_t^{\ell}\mathcal{Z}^{\alpha}v_0, \partial_t^{\ell}\mathcal{Z}^{\alpha}h_0^\ensuremath{\epsilon} -\partial_t^{\ell}\mathcal{Z}^{\alpha}h_0), \end{array}\right. 
\end{equation} When $|\alpha|\geq 1$, $\ell\leq k-1$ and $1\leq \ell+|\alpha|\leq k$, we carry out the $L^2$ estimate of $\hat{V}^{\ell,\alpha}$ and obtain \begin{equation}\label{Sect5_Tangential_Estimates_1} \begin{array}{ll} \frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t} \int\limits_{\mathbb{R}^3_{-}} |\hat{V}^{\ell,\alpha}|^2 \,\mathrm{d}\mathcal{V}_t - \int\limits_{\mathbb{R}^3_{-}} \hat{Q}^{\ell,\alpha} \nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot \hat{V}^{\ell,\alpha} \,\mathrm{d}\mathcal{V}_t + 2\ensuremath{\epsilon} \int\limits_{\mathbb{R}^3_{-}} |\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}\partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{v}|^2 \,\mathrm{d}\mathcal{V}_t \\[10pt] = -\int\limits_{\{z=0\}} \big(\hat{Q}^{\ell,\alpha} \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} - 2\ensuremath{\epsilon} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}\partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{v} \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} \big) \cdot \hat{V}^{\ell,\alpha} \,\mathrm{d}y + \int\limits_{\mathbb{R}^3_{-}} \mathcal{I}_4 \cdot \hat{V}^{\ell,\alpha} \,\mathrm{d}\mathcal{V}_t \\[10pt]\quad + 2\ensuremath{\epsilon} \int\limits_{\mathbb{R}^3_{-}} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}\partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{v} \cdot \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} (\partial_z^{\varphi}v \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\eta})\,\mathrm{d}\mathcal{V}_t \\[10pt] \ensuremath{\lesssim} -\int\limits_{\{z=0\}} \big(\hat{Q}^{\ell,\alpha} \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} - 2\ensuremath{\epsilon} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}\partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{v} \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} \big) \cdot \hat{V}^{\ell,\alpha} \,\mathrm{d}y + \|\hat{V}^{\ell,\alpha}\|_{L^2}^2 + \|\partial_z \hat{v}\|_{X^{k-1}}^2 \\[10pt]\quad + \|\hat{v}\|_{X^{k-1,1}}^2 + \|\hat{\eta}\|_{X^{k-1,1}}^2 + \ensuremath{\epsilon}|\hat{h}|_{X^{k-1,\frac{3}{2}}}^2 + \|\partial_t^k\hat{\eta}\|_{L^2}^2 + \|\nabla\hat{q}\|_{X^{k-1}}^2 + O(\ensuremath{\epsilon}).
\end{array} \end{equation} We now estimate the boundary term in $(\ref{Sect5_Tangential_Estimates_1})$: \begin{equation}\label{Sect5_Tangential_Estimates_2} \begin{array}{ll} -\int\limits_{\{z=0\}} \big(\hat{Q}^{\ell,\alpha} \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} - 2\ensuremath{\epsilon} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}\partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{v} \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} \big) \cdot \hat{V}^{\ell,\alpha} \,\mathrm{d}y \\[10pt] = \int\limits_{\{z=0\}} -(g -\partial_z^{\varphi}q) \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h}\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot \hat{V}^{\ell,\alpha} - \big(2\ensuremath{\epsilon} [\partial_t^{\ell}\mathcal{Z}^{\alpha},\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}] v^{\ensuremath{\epsilon}}\,\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} \\[9pt]\qquad + (2\ensuremath{\epsilon} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}v^{\ensuremath{\epsilon}} - 2\ensuremath{\epsilon} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}v^{\ensuremath{\epsilon}}\ensuremath{\textbf{n}}^{\ensuremath{\epsilon}}\cdot\ensuremath{\textbf{n}}^{\ensuremath{\epsilon}})\,\partial_t^{\ell}\mathcal{Z}^{\alpha}\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} - [\partial_t^{\ell}\mathcal{Z}^{\alpha},2\ensuremath{\epsilon} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}v^{\ensuremath{\epsilon}}\ensuremath{\textbf{n}}^{\ensuremath{\epsilon}}\cdot\ensuremath{\textbf{n}}^{\ensuremath{\epsilon}},\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}] \\[7pt]\qquad + 2\ensuremath{\epsilon} [\partial_t^{\ell}\mathcal{Z}^{\alpha},\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}v^{\ensuremath{\epsilon}}, \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}] + 2\ensuremath{\epsilon} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}v\,\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\big)\cdot \hat{V}^{\ell,\alpha} \,\mathrm{d}y \\[8pt] \ensuremath{\lesssim} \int\limits_{\{z=0\}} -(g -\partial_z^{\varphi}q) \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h}\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot \hat{V}^{\ell,\alpha} \,\mathrm{d}y + O(\ensuremath{\epsilon}), \end{array} \end{equation} where we used that the normal derivatives $\partial_z v^{\ensuremath{\epsilon}}$ on the boundary can be expressed in terms of tangential derivatives of $v^{\ensuremath{\epsilon}}$; this is why we obtain $O(\ensuremath{\epsilon})$ rather than $O(\sqrt{\ensuremath{\epsilon}})$.
Namely, \begin{equation}\label{Sect5_Tangential_Estimates_3} \begin{array}{ll} -\int\limits_{\{z=0\}} \big(\hat{Q}^{\ell,\alpha} \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} - 2\ensuremath{\epsilon} \partial_t^{\ell}\mathcal{Z}^{\alpha}\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \hat{v} \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} \big) \cdot \hat{V}^{\ell,\alpha} \,\mathrm{d}y \\[13pt] \ensuremath{\lesssim} \int\limits_{\{z=0\}} -(g -\partial_z^{\varphi}q) \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h} \big(\partial_t \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h} + v_y^{\ensuremath{\epsilon}}\cdot \nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h} - \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot \partial_z^{\varphi} v \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\eta} \\[11pt]\quad + \hat{v}_y \cdot \nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} h + \partial_y \hat{h}\cdot \partial_t^{\ell}\mathcal{Z}^{\alpha}v_y - [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \hat{v},\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}] + [\partial_t^{\ell}\mathcal{Z}^{\alpha}, v_y, \partial_y \hat{h}]\big)\,\mathrm{d}y + O(\ensuremath{\epsilon}) \\[11pt] \ensuremath{\lesssim} - \frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int\limits_{\{z=0\}} (g -\partial_z^{\varphi}q) |\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h}|^2 \,\mathrm{d}y + \frac{1}{2}\int\limits_{\{z=0\}} \nabla\cdot (g v_y -v_y \partial_z^{\varphi}q) |\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h}|^2 \,\mathrm{d}y \\[11pt]\quad - \frac{1}{2}\int\limits_{\{z=0\}} \partial_t\partial_z^{\varphi}v |\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h}|^2 \,\mathrm{d}y - \int\limits_{\{z=0\}} (g -\partial_z^{\varphi}q) \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h} \big(- \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot \partial_z^{\varphi} v \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\eta} \\[11pt]\quad + \hat{v}_y \cdot \nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} h + \partial_y \hat{h}\cdot \partial_t^{\ell}\mathcal{Z}^{\alpha}v_y - [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \hat{v},\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}] + [\partial_t^{\ell}\mathcal{Z}^{\alpha}, v_y, \partial_y \hat{h}]\big)\,\mathrm{d}y + O(\ensuremath{\epsilon}) \\[11pt] \ensuremath{\lesssim} - \frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int\limits_{\{z=0\}} (g -\partial_z^{\varphi}q) |\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h}|^2 \,\mathrm{d}y + \|\hat{v}\|_{X^{k-1}} + \|\partial_z\hat{v}\|_{X^{k-1}} + |\hat{h}|_{X^{k-1,1}} + O(\ensuremath{\epsilon}). \end{array} \end{equation} By $(\ref{Sect5_Tangential_Estimates_1})$ and $(\ref{Sect5_Tangential_Estimates_3})$, we have \begin{equation}\label{Sect5_Tangential_Estimates_4} \begin{array}{ll} \frac{\mathrm{d}}{\mathrm{d}t} \int\limits_{\mathbb{R}^3_{-}} |\hat{V}^{\ell,\alpha}|^2 \,\mathrm{d}\mathcal{V}_t + \frac{\mathrm{d}}{\mathrm{d}t}\int\limits_{\{z=0\}} (g -\partial_z^{\varphi}q) |\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h}|^2 \,\mathrm{d}y + \ensuremath{\epsilon} \int\limits_{\mathbb{R}^3_{-}} |\partial_t^{\ell}\mathcal{Z}^{\alpha}\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \hat{v}|^2 \,\mathrm{d}\mathcal{V}_t \\[13pt] \ensuremath{\lesssim} \|\hat{V}^{\ell,\alpha}\|_{L^2}^2 + \|\partial_z \hat{v}\|_{X^{k-1}}^2 + \|\hat{v}\|_{X^{k-1,1}}^2 + |\hat{h}|_{X^{k-1,1}}^2 + |\partial_t^k\hat{h}|_{L^2}^2 + \ensuremath{\epsilon}|\hat{h}|_{X^{k-1,\frac{3}{2}}}^2 \\[5pt]\quad + \|\nabla\hat{q}\|_{X^{k-1}}^2 + O(\ensuremath{\epsilon}). 
\end{array} \end{equation} Since $g -\partial_z^{\varphi}q \geq c_0>0$, integrating $(\ref{Sect5_Tangential_Estimates_4})$ in time and applying the integral form of Gronwall's inequality, we get \begin{equation}\label{Sect5_Tangential_Estimates_5} \begin{array}{ll} \|\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}\|_{L^2}^2 + |\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h}|^2 +\ensuremath{\epsilon}|\hat{h}|_{X^{k-1,\frac{3}{2}}}^2 + \ensuremath{\epsilon} \int\limits_0^t\|\nabla\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}\|_{L^2}^2 \,\mathrm{d}t \\[8pt] \ensuremath{\lesssim} \|\hat{v}_0\|_{X^{k-1,1}}^2 + |\hat{h}_0|_{X^{k-1,1}}^2 + \int\limits_0^t \|\hat{v}\|_{X^{k-1,1}}^2 + |\hat{h}|_{X^{k-1,1}}^2 +\ensuremath{\epsilon}|\hat{h}|_{X^{k-1,\frac{3}{2}}}^2 \,\mathrm{d}t \\[8pt]\quad + \|\partial_z \hat{v}\|_{L^4([0,T],X^{k-1})}^2 + |\partial_t^k\hat{h}|_{L^4([0,T],L^2)}^2 + \|\nabla\hat{q}\|_{L^4([0,T],X^{k-1})}^2 + O(\ensuremath{\epsilon}). \end{array} \end{equation} When $|\alpha|=0, \, 0\leq\ell\leq k-1$, we have no bounds on $\hat{q}$ and $\partial_t^{\ell} \hat{q}$, so we neither use the variable $\hat{Q}^{\ell,\alpha}$ nor apply integration by parts to the pressure terms. Also, the dynamical boundary condition will not be used. Since the main equation of $\hat{V}^{\ell,0}$ and its kinematic boundary condition satisfy \begin{equation}\label{Sect5_Tangential_Estimates_Time} \left\{\begin{array}{ll} \partial_t^{\varphi^{\ensuremath{\epsilon}}} \hat{V}^{\ell,0} + v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{V}^{\ell,0} - 2\ensuremath{\epsilon}\nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\hat{v} \\[8pt]\quad = \ensuremath{\epsilon}\partial_t^{\ell}\triangle^{\varphi^\ensuremath{\epsilon}} v^\ensuremath{\epsilon} + 2\ensuremath{\epsilon} [\partial_t^{\ell},\nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot]\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \hat{v} + 2\ensuremath{\epsilon} \nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot[\partial_t^{\ell},\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}] \hat{v} - \partial_t^{\ell}\nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{q} \\[8pt]\quad + \partial_z^{\varphi} q\nabla^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\hat{\varphi} - \partial_t^{\ell}\hat{\varphi}\partial_t^{\varphi^{\ensuremath{\epsilon}}}\partial_z^{\varphi} v - \partial_t^{\ell}\hat{v}\cdot\nabla^{\varphi} v - \partial_t^{\ell}\hat{\varphi}\, v^{\ensuremath{\epsilon}}\cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}\partial_z^{\varphi} v \\[8pt]\quad - [\partial_t^{\ell},\partial_t^{\varphi^{\ensuremath{\epsilon}}}]\hat{v} + [\partial_t^{\ell}, \partial_z^{\varphi} v \partial_t^{\varphi^{\ensuremath{\epsilon}}}]\hat{\varphi} - [\partial_t^{\ell}, v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}] \hat{v} + [\partial_t^{\ell}, \partial_z^{\varphi} v \, v^{\ensuremath{\epsilon}}\cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}]\hat{\varphi} \\[8pt]\quad - [\partial_t^{\ell}, \nabla^{\varphi} v\cdot]\hat{v} + [\partial_t^{\ell},\partial_z^{\varphi} q\nabla^{\varphi^{\ensuremath{\epsilon}}}]\hat{\varphi} :=\mathcal{I}_5, \\[13pt] \partial_t \partial_t^{\ell}\hat{h} + v_y^{\ensuremath{\epsilon}}\cdot \nabla_y \partial_t^{\ell}\hat{h} - \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot \hat{V}^{\ell,0} = \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot \partial_z^{\varphi} v \partial_t^{\ell}\hat{\eta} \\[7pt]\quad - \hat{v}_y \cdot \nabla_y \partial_t^{\ell} h - \partial_y
\hat{h}\cdot \partial_t^{\ell}v_y + [\partial_t^{\ell}, \hat{v},\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}] - [\partial_t^{\ell}, v_y, \partial_y \hat{h}], \\[13pt] (\partial_t^{\ell}\hat{v},\partial_t^{\ell}\hat{h})|_{t=0} = (\partial_t^{\ell}v_0^\ensuremath{\epsilon} -\partial_t^{\ell}v_0, \partial_t^{\ell}h_0^\ensuremath{\epsilon} -\partial_t^{\ell}h_0), \end{array}\right. \end{equation} then we have $L^2$ estimate of $\hat{V}^{\ell,0}$: \begin{equation}\label{Sect5_Tangential_Estimates_6} \begin{array}{ll} \frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t} \int\limits_{\mathbb{R}^3_{-}} |\hat{V}^{\ell,0}|^2 \,\mathrm{d}\mathcal{V}_t + 2\ensuremath{\epsilon} \int\limits_{\mathbb{R}^3_{-}} |\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}\partial_t^{\ell} \hat{v}|^2 \,\mathrm{d}\mathcal{V}_t = 2\ensuremath{\epsilon} \int\limits_{\{z=0\}} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}\partial_t^{\ell} \hat{v} \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} \cdot \hat{V}^{\ell,0} \,\mathrm{d}y \\[12pt]\quad + 2\ensuremath{\epsilon} \int\limits_{\mathbb{R}^3_{-}} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}\partial_t^{\ell} \hat{v} \cdot \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} (\partial_z^{\varphi}v\partial_t^{\ell} \hat{\eta}) \,\mathrm{d}\mathcal{V}_t + \int\limits_{\mathbb{R}^3_{-}}\mathcal{I}_5 \cdot \hat{V}^{\ell,0} \,\mathrm{d}\mathcal{V}_t. \end{array} \end{equation} Now we estimate the right hand side of $(\ref{Sect5_Tangential_Estimates_6})$: \begin{equation}\label{Sect5_Tangential_Estimates_7} \begin{array}{ll} 2\ensuremath{\epsilon}\int\limits_{\{z=0\}} \mathcal{S}^{\varphi^\ensuremath{\epsilon}} \partial_t^{\ell} v^\ensuremath{\epsilon} \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} \cdot \hat{V}^{\ell,0} \,\mathrm{d}y = 2\ensuremath{\epsilon}\int\limits_{\{z=0\}} \mathcal{S}^{\varphi^\ensuremath{\epsilon}} \partial_t^{\ell} v^\ensuremath{\epsilon} \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} \cdot (\partial_t^{\ell}\hat{v} - \partial_z^{\varphi} v \partial_t^{\ell}\hat{\eta}) \,\mathrm{d}y \\[12pt] \ensuremath{\lesssim} \big|\partial_t^{\ell}\hat{v}|_{z=0}\big|_{L^2}^2 + |\partial_t^{\ell}\hat{h}|_{L^2}^2 + O(\ensuremath{\epsilon}) \ensuremath{\lesssim} \|\partial_t^{\ell}\hat{v}\|_{L^2}^2 + \|\partial_t^{\ell}\partial_z \hat{v}\|_{L^2}^2 + |\partial_t^{\ell}\hat{h}|_{L^2}^2 + O(\ensuremath{\epsilon}). \end{array} \end{equation} It is easy to check that \begin{equation}\label{Sect5_Tangential_Estimates_8} \begin{array}{ll} \int\limits_{\mathbb{R}^3_{-}}\mathcal{I}_5 \cdot \hat{V}^{\ell,0} \,\mathrm{d}\mathcal{V}_t \ensuremath{\lesssim} \|\hat{V}^{\ell,0}\|_{L^2}^2 + \|\partial_t^{\ell-1}\partial_z \hat{v}\|_{L^2}^2 + \|\partial_t^{\ell-1}\partial_y \hat{v}\|_{L^2}^2 + \|\partial_t^{\ell-1} \hat{v}\|_{L^2}^2 \\[8pt]\quad + \|\partial_t^{\ell-1}\nabla\hat{\eta}\|_{L^2}^2 + \|\partial_t^{\ell}\hat{\eta}\|_{L^2}^2 + \|\partial_t^{\ell}\nabla\hat{\eta}\|_{L^2}^2 + \|\partial_t^{\ell}\nabla\hat{q}\|_{L^2}^2 +O(\ensuremath{\epsilon}). 
\end{array} \end{equation} By $(\ref{Sect5_Tangential_Estimates_6})$, $(\ref{Sect5_Tangential_Estimates_7})$ and $(\ref{Sect5_Tangential_Estimates_8})$, we have \begin{equation}\label{Sect5_Tangential_Estimates_9} \begin{array}{ll} \frac{\mathrm{d}}{\mathrm{d}t} \int\limits_{\mathbb{R}^3_{-}} |\hat{V}^{\ell,0}|^2 \,\mathrm{d}\mathcal{V}_t + \ensuremath{\epsilon} \int\limits_{\mathbb{R}^3_{-}} |\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}\partial_t^{\ell} \hat{v}|^2 \,\mathrm{d}\mathcal{V}_t \ensuremath{\lesssim} \|\hat{V}^{\ell,0}\|_{L^2}^2 + \|\partial_z \hat{v}\|_{X^{k-1}}^2 + \|\hat{v}\|_{X^{k-1,1}}^2 \\[8pt]\quad + |\hat{h}|_{X^{k-1,1}}^2 + \ensuremath{\epsilon}|\hat{h}|_{X^{k-1,\frac{3}{2}}}^2 + \|\nabla\hat{q}\|_{X^{k-1}}^2 +O(\ensuremath{\epsilon}). \end{array} \end{equation} Integrating $(\ref{Sect5_Tangential_Estimates_9})$ in time and applying the integral form of Gronwall's inequality, we have \begin{equation}\label{Sect5_Tangential_Estimates_10} \begin{array}{ll} \|\hat{V}^{\ell,0}\|_{L^2}^2 + \ensuremath{\epsilon}\|\nabla\partial_t^{\ell}\hat{v}\|_{L^2}^2 \\[5pt] \ensuremath{\lesssim} \|\hat{v}_0\|_{X^{k-1}}^2 +|\hat{h}_0|_{X^{k-1}}^2 + \int\limits_0^t \|\partial_z \hat{v}\|_{X^{k-1}}^2 + \|\hat{v}\|_{X^{k-1,1}}^2 + |\hat{h}|_{X^{k-1,1}}^2 \\[7pt]\quad + \|\nabla\hat{q}\|_{X^{k-1}}^2 + \ensuremath{\epsilon}|\hat{h}|_{X^{k-1,\frac{3}{2}}}^2\,\mathrm{d}t +O(\ensuremath{\epsilon}) \\[5pt] \ensuremath{\lesssim} \|\hat{v}_0\|_{X^{k-1}}^2 +|\hat{h}_0|_{X^{k-1}}^2 + \int\limits_0^t \|\hat{v}\|_{X^{k-1,1}}^2 + |\hat{h}|_{X^{k-1,1}}^2 + \ensuremath{\epsilon}|\hat{h}|_{X^{k-1,\frac{3}{2}}}^2\,\mathrm{d}t \\[9pt]\quad + \|\partial_z \hat{v}\|_{L^4([0,T],X^{k-1})}^2 + \|\nabla\hat{q}\|_{L^4([0,T],X^{k-1})}^2 +O(\ensuremath{\epsilon}). \end{array} \end{equation} Combining $(\ref{Sect5_Tangential_Estimates_10})$ and Lemma $\ref{Sect5_Height_Estimates_Lemma}$, we have \begin{equation}\label{Sect5_Tangential_Estimates_11} \begin{array}{ll} \|\partial_t^{\ell}\hat{v}\|_{L^2}^2 + |\partial_t^{\ell}\hat{h}|_{L^2}^2 + \ensuremath{\epsilon}|\hat{h}|_{X^{k-1,\frac{3}{2}}}^2 + \ensuremath{\epsilon} \int\limits_0^t\|\nabla\partial_t^{\ell}\hat{v}\|_{L^2}^2 \,\mathrm{d}t \\[5pt] \ensuremath{\lesssim} \|\hat{v}_0\|_{X^{k-1}}^2 +|\hat{h}_0|_{X^{k-1}}^2 + \int\limits_0^t \|\hat{v}\|_{X^{k-1,1}}^2 + |\hat{h}|_{X^{k-1,1}}^2 + \ensuremath{\epsilon}|\hat{h}|_{X^{k-1,\frac{3}{2}}}^2\,\mathrm{d}t \\[9pt]\quad + \|\partial_z \hat{v}\|_{L^4([0,T],X^{k-1})}^2 + \|\nabla\hat{q}\|_{L^4([0,T],X^{k-1})}^2 +O(\ensuremath{\epsilon}). \end{array} \end{equation} Applying the integral form of Gronwall's inequality to $(\ref{Sect5_Tangential_Estimates_5})$ and $(\ref{Sect5_Tangential_Estimates_11})$, we get $(\ref{Sect5_Tangential_Estimates_Lemma_Eq})$. Thus, Lemma $\ref{Sect5_Tangential_Estimates_Lemma}$ is proved. \end{proof} \subsection{Estimates for Normal Derivatives when $\Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}|_{z=0} \neq 0$} In this subsection, we develop the estimates for the normal derivatives $\partial_z \hat{v}$. In the following lemma, we estimate $\|\partial_z\hat{v}\|_{L^4([0,T],X^{k-1})}^2$ by studying the equations of $\hat{\omega}_h$.
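Let us indicate why a bound on $\hat{\omega}_h$ controls $\partial_z\hat{v}_h$. Reading off the horizontal components of $\omega=\nabla^{\varphi}\times v$ (cf. $(\ref{Sect5_Vorticity_Multiple_N})$ below),
\begin{equation*}
\omega^1 = \partial_2 v^3 - \frac{\partial_2\varphi}{\partial_z\varphi}\partial_z v^3 - \frac{1}{\partial_z\varphi}\partial_z v^2,
\qquad
\omega^2 = - \partial_1 v^3 + \frac{\partial_1\varphi}{\partial_z\varphi}\partial_z v^3 + \frac{1}{\partial_z\varphi}\partial_z v^1,
\end{equation*}
so $\partial_z v^1$ and $\partial_z v^2$ are combinations of $\omega_h$, $\partial_z v^3$ and tangential derivatives, while $\partial_z v^3$ is itself tangential by the divergence-free condition; the corresponding statement for the differences uses in addition the identity $\hat{\omega} = \nabla^{\varphi^{\ensuremath{\epsilon}}} \times \hat{v} - \nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta} \times \partial_z^{\varphi} v$ recalled below.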
\begin{lemma}\label{Sect5_Vorticity_Lemma} Assume $k\leq m-2$, if $\Pi\mathcal{S}^{\varphi} v \ensuremath{\textbf{n}}|_{z=0} \neq 0$, then the vorticity has the following estimate: \begin{equation}\label{Sect5_Vorticity_Lemma_Eq} \begin{array}{ll} \|\partial_z\hat{v}_h\|_{L^4([0,T],X^{k-1})}^2 + \|\hat{\omega}_h\|_{L^4([0,T],X^{k-1})}^2 \\[6pt] \ensuremath{\lesssim} \big\|\hat{\omega}_0\big\|_{X^{k-1}}^2 + \int\limits_0^T\|\hat{v}\|_{X^{k-1,1}}^2 \,\mathrm{d}t + \int\limits_0^T|\hat{h}|_{X^{k-1,1}}^2 \,\mathrm{d}t + \|\partial_t^k\hat{h}\|_{L^4([0,T],L^2)}^2 + O(\sqrt{\ensuremath{\epsilon}}). \end{array} \end{equation} \end{lemma} \begin{proof} Assume $\ell+|\alpha|\leq k-1$, we study the equations $(\ref{Sect1_N_Derivatives_Difference_Eq})$ and decompose $\hat{\omega}_h = \hat{\omega}_h^{nhom} + \hat{\omega}_h^{hom}$, such that $\hat{\omega}_h^{nhom}$ satisfies the following nonhomogeneous equations: \begin{equation}\label{Sect5_N_Derivatives_Difference_Eq_Nonhom} \left\{\begin{array}{ll} \partial_t^{\varphi^{\ensuremath{\epsilon}}} \hat{\omega}_h^{nhom} + v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} \hat{\omega}_h^{nhom} - \ensuremath{\epsilon}\triangle^{\varphi^{\ensuremath{\epsilon}}}\hat{\omega}_h^{nhom} \\[6pt]\quad = \vec{\textsf{F}}^0[\nabla\varphi^{\ensuremath{\epsilon}}](\omega_h^{\ensuremath{\epsilon}},\partial_j v^{\ensuremath{\epsilon},i}) - \vec{\textsf{F}}^0[\nabla\varphi](\omega_h,\partial_j v^i) + \ensuremath{\epsilon}\triangle^{\varphi^{\ensuremath{\epsilon}}}\omega_h \\[6pt]\qquad + \partial_z^{\varphi}\omega_h \partial_t^{\varphi^{\ensuremath{\epsilon}}} \hat{\eta} + \partial_z^{\varphi} \omega_h\, v^{\ensuremath{\epsilon}}\cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta} - \hat{v}\cdot\nabla^{\varphi} \omega_h , \\[11pt] \hat{\omega}_h^{nhom}|_{z=0} =0, \\[7pt] \hat{\omega}_h^{nhom}|_{t=0} = (\hat{\omega}_0^1, \hat{\omega}_0^2)^{\top}, \end{array}\right. \end{equation} and $\hat{\omega}_h^{hom}$ satisfies the following homogeneous equations: \begin{equation}\label{Sect5_N_Derivatives_Difference_Eq_Hom} \left\{\begin{array}{ll} \partial_t^{\varphi^{\ensuremath{\epsilon}}} \hat{\omega}_h^{hom} + v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} \hat{\omega}_h^{hom} - \ensuremath{\epsilon}\triangle^{\varphi^{\ensuremath{\epsilon}}}\hat{\omega}_h^{hom} =0, \\[9pt] \hat{\omega}_h^{hom}|_{z=0} =\textsf{F}^{1,2} [\nabla\varphi^{\ensuremath{\epsilon}}](\partial_j v^{\ensuremath{\epsilon},i}) - \omega^b, \\[7pt] \hat{\omega}_h^{hom}|_{t=0} = 0, \end{array}\right. 
\end{equation} By using $(\ref{Sect3_Preliminaries_Vorticity_Eq_2})$ and $\partial_t^{\varphi^{\ensuremath{\epsilon}}} + v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} = \partial_t + v_y^{\ensuremath{\epsilon}}\cdot\nabla_y + V_z^{\ensuremath{\epsilon}}\partial_z$, $(\ref{Sect5_N_Derivatives_Difference_Eq_Nonhom})$ is equivalent to the following equations: \begin{equation}\label{Sect5_N_Derivatives_Difference_Eq_Nonhom_1} \left\{\begin{array}{ll} \partial_t \hat{\omega}_h^{nhom} + v_y^{\ensuremath{\epsilon}}\cdot\nabla_y\hat{\omega}_h^{nhom} + V_z^{\ensuremath{\epsilon}}\partial_z \hat{\omega}_h^{nhom} - \ensuremath{\epsilon}\triangle^{\varphi^{\ensuremath{\epsilon}}}\hat{\omega}_h^{nhom} \\[8pt]\quad = f^7[\nabla\varphi^{\ensuremath{\epsilon}},\nabla\varphi,\partial_j v^{\ensuremath{\epsilon},i},\partial_j v^i]\hat{\omega}_h + f^8[\nabla\varphi^{\ensuremath{\epsilon}},\nabla\varphi,\partial_j v^{\ensuremath{\epsilon},i},\partial_j v^i,\omega_h^{\ensuremath{\epsilon}},\omega_h]\partial_j\hat{v}^i \\[7pt]\qquad + f^9[\nabla\varphi^{\ensuremath{\epsilon}},\nabla\varphi,\partial_j v^{\ensuremath{\epsilon},i},\partial_j v^i,\omega_h^{\ensuremath{\epsilon}},\omega_h]\nabla\hat{\varphi} + \ensuremath{\epsilon}\triangle^{\varphi^{\ensuremath{\epsilon}}}\omega_h \\[7pt]\qquad + \partial_z^{\varphi}\omega_h \partial_t^{\varphi^{\ensuremath{\epsilon}}} \hat{\eta} + \partial_z^{\varphi} \omega_h\, v^{\ensuremath{\epsilon}}\cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta} - \hat{v}\cdot\nabla^{\varphi} \omega_h := \mathcal{I}_6, \\[9pt] \hat{\omega}_h^{nhom}|_{z=0} =0, \\[7pt] \hat{\omega}_h^{nhom}|_{t=0} = (\hat{\omega}_0^1, \hat{\omega}_0^2)^{\top}, \end{array}\right. \end{equation} Apply $\partial_t^{\ell}\mathcal{Z}^{\alpha}$ to $(\ref{Sect5_N_Derivatives_Difference_Eq_Nonhom_1})$, we get \begin{equation}\label{Sect5_N_Derivatives_Difference_Eq_Nonhom_2} \left\{\begin{array}{ll} \partial_t \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\omega}_h^{nhom} + v_y^{\ensuremath{\epsilon}}\cdot\nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\omega}_h^{nhom} + V_z^{\ensuremath{\epsilon}}\partial_z \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\omega}_h^{nhom} - \ensuremath{\epsilon}\triangle^{\varphi^{\ensuremath{\epsilon}}}\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\omega}_h^{nhom} \\[8pt]\quad = \partial_t^{\ell}\mathcal{Z}^{\alpha} \mathcal{I}_6 -[\partial_t^{\ell}\mathcal{Z}^{\alpha}, v_y^{\ensuremath{\epsilon}}\cdot\nabla_y]\hat{\omega}_h^{nhom} -[\partial_t^{\ell}\mathcal{Z}^{\alpha}, V_z\partial_z]\hat{\omega}_h^{nhom} \\[8pt]\qquad + \ensuremath{\epsilon}\nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \nabla^{\varphi}]\hat{\omega}_h^{nhom} + \ensuremath{\epsilon}[\partial_t^{\ell}\mathcal{Z}^{\alpha}, \nabla^{\varphi}\cdot]\nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\omega}_h^{nhom}, \\[9pt] \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\omega}_h^{nhom}|_{z=0} =0, \\[7pt] \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\omega}_h^{nhom}|_{t=0} = (\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\omega}_0^1, \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\omega}_0^2)^{\top}, \end{array}\right. 
\end{equation} Developing the $L^2$ estimate of $\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\omega}_h^{nhom}$, we get \begin{equation}\label{Sect5_N_Derivatives_Difference_Eq_Nonhom_3} \begin{array}{ll} \frac{\mathrm{d}}{\mathrm{d}t} \|\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\omega}_h^{nhom}\|_{L^2}^2 + 2\ensuremath{\epsilon} \|\nabla^{\varphi^{\ensuremath{\epsilon}}}\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\omega}_h^{nhom}\|_{L^2}^2 \\[10pt] \ensuremath{\lesssim} \|\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\omega}_h^{nhom}\|_{L^2}^2 + \|\partial_t^{\ell}\mathcal{Z}^{\alpha} \mathcal{I}_6\|_{L^2}^2 + \|[\partial_t^{\ell}\mathcal{Z}^{\alpha}, V_z\partial_z]\hat{\omega}_h^{nhom}\|_{L^2}^2 \\[8pt]\quad + \ensuremath{\epsilon} \int\limits_{\mathbb{R}^3_{-}}\nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \nabla^{\varphi}]\hat{\omega}_h^{nhom} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\omega}_h^{nhom} \,\mathrm{d}\mathcal{V}_t \\[8pt]\quad + \ensuremath{\epsilon} \int\limits_{\mathbb{R}^3_{-}}[\partial_t^{\ell}\mathcal{Z}^{\alpha}, \nabla^{\varphi}\cdot]\nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\omega}_h^{nhom} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\omega}_h^{nhom} \,\mathrm{d}\mathcal{V}_t \hspace{3.5cm} \end{array} \end{equation} \begin{equation*} \begin{array}{ll} \ensuremath{\lesssim} \|\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\omega}_h^{nhom}\|_{L^2}^2 + \|\hat{\omega}_h\|_{X^{k-1}}^2 + \|\hat{v}\|_{X^{k-1,1}}^2 + \|\nabla\hat{\eta}\|_{X^{k-1}}^2 + \|\partial_t^k\hat{\eta}\|_{L^2}^2 \\[8pt]\ + \sum\limits_{\ell_1+|\alpha_1|>0} \|\frac{1-z}{z}\partial_t^{\ell}\mathcal{Z}^{\alpha} V_z \cdot \partial_t^{\ell}\mathcal{Z}^{\alpha} \frac{z}{1-z}\partial_z\hat{\omega}_h^{nhom}\|_{L^2}^2 \\[8pt]\ - \ensuremath{\epsilon} \int\limits_{\mathbb{R}^3_{-}}[\partial_t^{\ell}\mathcal{Z}^{\alpha}, \ensuremath{\textbf{N}}\partial_z^{\varphi}]\hat{\omega}_h^{nhom} \cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\omega}_h^{nhom} \,\mathrm{d}\mathcal{V}_t \\[8pt]\ + \ensuremath{\epsilon} \int\limits_{\mathbb{R}^3_{-}} \sum\limits_{\ell_1+|\alpha_1|>0}\big[(\partial_z^{\varphi})^{-1} \partial_t^{\ell_1}\mathcal{Z}^{\alpha_1}(\frac{\ensuremath{\textbf{N}}}{\partial_z\varphi}) \partial_t^{\ell_2}\mathcal{Z}^{\alpha_2}\partial_z\big]\cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\omega}_h^{nhom} \, \partial_z^{\varphi}\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\omega}_h^{nhom} \,\mathrm{d}\mathcal{V}_t. \end{array} \end{equation*} Here the notation $(\partial_z^{\varphi})^{-1}$ means a cancellation such that $(\partial_z^{\varphi})^{-1}(\partial_z^{\varphi}) =1$. Integrating $(\ref{Sect5_N_Derivatives_Difference_Eq_Nonhom_3})$ in time and applying the integral form of Gronwall's inequality, we easily obtain \begin{equation}\label{Sect5_N_Derivatives_Difference_Eq_Nonhom_4} \begin{array}{ll} \|\hat{\omega}_h^{nhom}\|_{X^{k-1}}^2 + 2\ensuremath{\epsilon} \int\limits_0^t\|\nabla\hat{\omega}_h^{nhom}\|_{X^{k-1}}^2 \,\mathrm{d}t \\[7pt] \leq \|\hat{\omega}_{0,h}\|_{X^{k-1}}^2 + \int\limits_0^t\|\hat{\omega}_h\|_{X^{k-1}}^2 + |\hat{h}|_{X^{k-1,1}}^2 + |\partial_t^k\hat{h}|_{L^2}^2 \,\mathrm{d}t + O(\ensuremath{\epsilon}).
\end{array} \end{equation} Similar to $(\ref{Sect2_Vorticity_Estimate_10})$, we have \begin{equation}\label{Sect5_N_Derivatives_Difference_Eq_Nonhom_5} \begin{array}{ll} \|\hat{\omega}_h^{nhom}\|_{L^4([0,T],X^{k-1})}^2 \ensuremath{\lesssim} \sqrt{T}\big\|\hat{\omega}_{0,h}\big\|_{X^{k-1}}^2 + T \|\hat{\omega}_h\|_{L^4([0,T],X^{k-1})}^2 \\[9pt]\quad + \sqrt{T}\int\limits_0^T\|\hat{v}\|_{X^{k-1,1}}^2 \,\mathrm{d}t + \sqrt{T}\int\limits_0^T|\hat{h}|_{X^{k-1,1}}^2 \,\mathrm{d}t + \sqrt{T}|\partial_t^k \hat{h}|_{L^4([0,T],L^2)}^2 + O(\ensuremath{\epsilon}). \end{array} \end{equation} For the homogeneous equations $(\ref{Sect5_N_Derivatives_Difference_Eq_Hom})$, similar to the estimates of the equations $(\ref{Sect2_Vorticity_Estimate_3})$ or \cite{Masmoudi_Rousset_2012_FreeBC}, we have \begin{equation}\label{Sect5_N_Derivatives_Difference_Eq_Hom_1} \begin{array}{ll} \|\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\omega}_h^{hom}\|_{L^4([0,T],L^2(\mathbb{R}^3_{-}))}^2 \ensuremath{\lesssim} \|\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\omega}_h^{hom}\|_{H^{\frac{1}{4}}([0,T],L^2(\mathbb{R}^3_{-}))}^2 \\[10pt] \ensuremath{\lesssim} \sqrt{\ensuremath{\epsilon}}\int\limits_0^T\big|\hat{\omega}_h^{hom}|_{z=0}\big|_{X^{k-1}(\mathbb{R}^2)}^2 \,\mathrm{d}t \\[10pt] \ensuremath{\lesssim} \sqrt{\ensuremath{\epsilon}}\int\limits_0^T\big|\textsf{F}^{1,2}[\nabla\varphi^{\ensuremath{\epsilon}}](\partial_j v^{\ensuremath{\epsilon},i}) -\textsf{F}^{1,2}[\nabla\varphi](\partial_j v^i) \big|_{X^{k-1}(\mathbb{R}^2)}^2 \,\mathrm{d}t \\[8pt]\quad + \sqrt{\ensuremath{\epsilon}}\int\limits_0^T\big|\varsigma_1\Theta^1 + \varsigma_2\Theta^2 + \varsigma_3\Theta^3\big|_{X^{k-1}(\mathbb{R}^2)}^2 \,\mathrm{d}t \\[8pt]\quad + \sqrt{\ensuremath{\epsilon}}\int\limits_0^T\big|\varsigma_4\Theta^4 + \varsigma_5\Theta^5 + \varsigma_6\Theta^6\big|_{X^{k-1}(\mathbb{R}^2)}^2 \,\mathrm{d}t \ensuremath{\lesssim} O(\sqrt{\ensuremath{\epsilon}}), \end{array} \end{equation} where $\varsigma_i$ and $\Theta^i$ are defined in the proof of Lemma $\ref{Sect4_Vorticity_Discrepancy_Lemma}$. By $(\ref{Sect5_N_Derivatives_Difference_Eq_Nonhom_5})$ and $(\ref{Sect5_N_Derivatives_Difference_Eq_Hom_1})$, we have \begin{equation}\label{Sect5_NormalEstimates_1} \begin{array}{ll} \|\hat{\omega}_h\|_{L^4([0,T],X^{k-1})}^2 \ensuremath{\lesssim} \|\hat{\omega}_h^{nhom}\|_{L^4([0,T],X^{k-1})}^2 + \|\hat{\omega}_h^{hom}\|_{L^4([0,T],X^{k-1})}^2 \\[9pt] \ensuremath{\lesssim} \big\|\hat{\omega}_{0,h}\big\|_{X^{k-1}}^2 + |\partial_t^k\hat{h}|_{L^4([0,T],L^2)}^2 + \int\limits_0^T\|\hat{v}\|_{X^{k-1,1}}^2 \,\mathrm{d}t + \int\limits_0^T|\hat{h}|_{X^{k-1,1}}^2 \,\mathrm{d}t + O(\sqrt{\ensuremath{\epsilon}}). \end{array} \end{equation} Thus, Lemma $\ref{Sect5_Vorticity_Lemma}$ is proved. 
\end{proof} \begin{remark}\label{Sect5_Euler_BoundaryData_Remark} If $\Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}|_{z=0} = 0$, then $\Theta^i =0$ for $i=1,\cdots,6$, and the estimate $(\ref{Sect5_N_Derivatives_Difference_Eq_Hom_1})$ reduces to \begin{equation}\label{Sect5_Euler_BoundaryData_Remark_Estimate} \begin{array}{ll} \|\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\omega}_h^{hom}\|_{L^4([0,T],L^2(\mathbb{R}^3_{-}))}^2 \\[7pt] \ensuremath{\lesssim} \sqrt{\ensuremath{\epsilon}}\int\limits_0^T\big|\textsf{F}^{1,2}[\nabla\varphi^{\ensuremath{\epsilon}}](\partial_j v^{\ensuremath{\epsilon},i}) -\textsf{F}^{1,2}[\nabla\varphi](\partial_j v^i) \big|_{X^{k-1}(\mathbb{R}^2)}^2 \,\mathrm{d}t \ensuremath{\lesssim} O(\sqrt{\ensuremath{\epsilon}}). \end{array} \end{equation} Since we do not have convergence rates for $|\partial_j v^{\ensuremath{\epsilon},i} - \partial_j v^i|_{X^{k-1}(\mathbb{R}^2)}$, we cannot improve the convergence rate of $\|\omega\|_{L^4([0,T],X^{k-1})}^2$. However, we can improve the convergence rate of $\|\omega\|_{L^4([0,T],X^{k-2})}^2$; see subsection 5.4. \end{remark} If $\Pi\mathcal{S}^{\varphi}v\ensuremath{\textbf{n}}|_{z=0} \neq 0$, we estimate $\|\partial_z \hat{v}\|_{L^{\infty}([0,T],X^{m-4})}$ and $\|\hat{\omega}\|_{L^{\infty}([0,T],X^{m-4})}$. Note that when $\Pi\mathcal{S}^{\varphi}v\ensuremath{\textbf{n}}|_{z=0} \neq 0$, not only $\big|\nabla^{\varphi^{\ensuremath{\epsilon}}} \times \partial_t^{\ell}\mathcal{Z}^{\alpha}(v^{\ensuremath{\epsilon}} -v)|_{z=0}\big|_{L^2} \neq 0$ but also $\big|\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\times(\nabla^{\varphi^{\ensuremath{\epsilon}}} \times \partial_t^{\ell}\mathcal{Z}^{\alpha}(v^{\ensuremath{\epsilon}} -v))|_{z=0}\big|_{L^2} \neq 0$. \begin{lemma}\label{Sect5_NormalDer_Lemma} Assume $0\leq k\leq m-2$ and let $\hat{\omega}_h =\omega_h^{\ensuremath{\epsilon}} -\omega_h$, $\partial_z\hat{v} =\partial_z v^{\ensuremath{\epsilon}} -\partial_z v$. Then $\hat{\omega}_h$ and $\partial_z\hat{v}$ satisfy the following estimate: \begin{equation}\label{Sect5_NormalDer_Estimate} \begin{array}{ll} \|\hat{\omega}\|_{X^{k-2}}^2 + \|\partial_z\hat{v}\|_{X^{k-2}}^2 \ensuremath{\lesssim} \|\hat{\omega}_0\|_{X^{k-2}}^2 + \int\limits_0^t\|\hat{v}\|_{X^{k-2}} + \|\partial_z \hat{v}\|_{X^{k-2}} \\[6pt]\quad + \|\nabla\hat{q} \|_{X^{k-2}} + \|\hat{h}\|_{X^{k-1}}\,\mathrm{d}t + O(\ensuremath{\epsilon}). \end{array} \end{equation} \end{lemma} \begin{proof} By using $(\ref{Sect2_NormalDer_Estimate_Laplacian})$, we rewrite $(\ref{Sect1_T_Derivatives_Difference_Eq})_1$ as \begin{equation}\label{Sect5_NormalDer_Estimate_L2_1} \begin{array}{ll} \partial_t^{\varphi^{\ensuremath{\epsilon}}}\hat{v}-\partial_z^{\varphi} v \partial_t^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta} + v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} \hat{v} - v^{\ensuremath{\epsilon}}\cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta}\, \partial_z^{\varphi} v + \hat{v}\cdot\nabla^{\varphi} v \\[7pt]\quad + \nabla^{\varphi^{\ensuremath{\epsilon}}} \hat{q} - \partial_z^{\varphi} q\nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta} = -\ensuremath{\epsilon}\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \hat{\omega} -\ensuremath{\epsilon}\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \omega. \end{array} \end{equation} Firstly, we develop the $L^2$ estimate of $\hat{\omega}$.
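Schematically, in the flat case $\varphi=z$ and for sufficiently decaying vector fields $a,b$ on $\mathbb{R}^3_{-}$, the integration by parts formula used below reads
\begin{equation*}
\int\limits_{\mathbb{R}^3_{-}} (\nabla\times a)\cdot b \,\mathrm{d}x
= \int\limits_{\mathbb{R}^3_{-}} a\cdot (\nabla\times b) \,\mathrm{d}x
+ \int\limits_{\{z=0\}} (a\times b)\cdot e_3 \,\mathrm{d}y;
\end{equation*}
its $\varphi$-dependent analogue is $(\ref{Sect1_Formulas_CanNotUse})_3$, and it is the source of the boundary terms involving $\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\times(\nabla^{\varphi^{\ensuremath{\epsilon}}}\times\hat{v})$ below.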
Multiplying $(\ref{Sect5_NormalDer_Estimate_L2_1})$ by $\nabla^{\varphi^{\ensuremath{\epsilon}}}\times(\nabla^{\varphi^{\ensuremath{\epsilon}}}\times\hat{v})$, integrating over $\mathbb{R}^3_{-}$ and using the integration by parts formula $(\ref{Sect1_Formulas_CanNotUse})_3$, we get \begin{equation}\label{Sect5_NormalDer_Estimate_L2_2} \begin{array}{ll} \int\limits_{\mathbb{R}^3_{-}} \big(\partial_t^{\varphi^{\ensuremath{\epsilon}}}\hat{v}-\partial_z^{\varphi} v \partial_t^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta} + v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} \hat{v} - v^{\ensuremath{\epsilon}}\cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta}\, \partial_z^{\varphi} v + \hat{v}\cdot\nabla^{\varphi} v \\[7pt]\ + \nabla^{\varphi^{\ensuremath{\epsilon}}} \hat{q} - \partial_z^{\varphi} q\nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta} + \ensuremath{\epsilon}\nabla^{\varphi^{\ensuremath{\epsilon}}}\times (\nabla^{\varphi^{\ensuremath{\epsilon}}}\times\hat{v}) +\ensuremath{\epsilon}\triangle^{\varphi^{\ensuremath{\epsilon}}}v \big) \cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}\times(\nabla^{\varphi^{\ensuremath{\epsilon}}}\times\hat{v}) \,\mathrm{d}\mathcal{V}_t =0, \\[13pt] \int\limits_{\mathbb{R}^3_{-}} \nabla^{\varphi^{\ensuremath{\epsilon}}}\times\big(\partial_t^{\varphi^{\ensuremath{\epsilon}}}\hat{v} + v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} \hat{v} + \nabla^{\varphi^{\ensuremath{\epsilon}}} \hat{q}\big) \,\mathrm{d}\mathcal{V}_t + \ensuremath{\epsilon}\int\limits_{\mathbb{R}^3_{-}} |\nabla^{\varphi^{\ensuremath{\epsilon}}}\times(\nabla^{\varphi^{\ensuremath{\epsilon}}}\times\hat{v})|^2 \,\mathrm{d}\mathcal{V}_t\\[8pt]\quad = \int\limits_{\mathbb{R}^3_{-}} \nabla^{\varphi^{\ensuremath{\epsilon}}}\times\big(\partial_z^{\varphi} v \partial_t^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta} + v^{\ensuremath{\epsilon}}\cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta}\, \partial_z^{\varphi} v - \hat{v}\cdot\nabla^{\varphi} v + \partial_z^{\varphi} q\nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta}\big)\cdot \hat{\omega} \,\mathrm{d}\mathcal{V}_t \\[8pt]\qquad - \int\limits_{z=0} \big(\partial_t\hat{v} + v^{\ensuremath{\epsilon}}_y \cdot\nabla_y \hat{v} + \hat{v}\cdot\nabla^{\varphi} v -\partial_z^{\varphi} v \partial_t\hat{\eta} - v^{\ensuremath{\epsilon}}_y\cdot \nabla_y \hat{\eta}\, \partial_z^{\varphi} v + \nabla^{\varphi^{\ensuremath{\epsilon}}} \hat{q} \\[11pt]\qquad - \partial_z^{\varphi} q\nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta} \big)\cdot \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\times (\nabla^{\varphi^{\ensuremath{\epsilon}}}\times\hat{v}) \,\mathrm{d}y - \ensuremath{\epsilon}\int\limits_{\mathbb{R}^3_{-}} \nabla^{\varphi^{\ensuremath{\epsilon}}}\times \omega \cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}\times (\nabla^{\varphi^{\ensuremath{\epsilon}}}\times\hat{v}) \,\mathrm{d}\mathcal{V}_t \\[8pt]\quad \ensuremath{\lesssim} \|\hat{\omega}\|_{L^2}^2 + |\hat{h}|_{X^{1,\frac{1}{2}}}^2 + \|\hat{v}\|_{X^1}^2 + \|\partial_z \hat{v}\|_{L^2}^2 + \frac{\ensuremath{\epsilon}}{2}\int\limits_{\mathbb{R}^3_{-}} |\nabla^{\varphi^{\ensuremath{\epsilon}}}\times\hat{\omega}|^2 \,\mathrm{d}\mathcal{V}_t + O(\ensuremath{\epsilon})\\[8pt]\qquad + \big|\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\times (\nabla^{\varphi^{\ensuremath{\epsilon}}}\times\hat{v})|_{z=0}\big|_{L^2} \big(\big|\hat{v}|_{z=0}\big|_{X_{tan}^1} + \big|\hat{h}|_{z=0}\big|_{X^1} \big) \\[8pt]\qquad +
\big|\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\times (\nabla^{\varphi^{\ensuremath{\epsilon}}}\times\hat{v})|_{z=0}\big|_{\frac{1}{2}} \big|\nabla \hat{q}|_{z=0}\big|_{-\frac{1}{2}}. \end{array} \end{equation} Since $\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \nabla^{\varphi^{\ensuremath{\epsilon}}} \hat{q} =0$, $\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \omega$ is bounded and $\big|\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\times (\nabla^{\varphi^{\ensuremath{\epsilon}}}\times\hat{v})|_{z=0}\big|_{L^2}\neq 0$, we have \begin{equation}\label{Sect5_NormalDer_Estimate_L2_3} \begin{array}{ll} \|\hat{\omega}\|_{L^2}^2 + \ensuremath{\epsilon}\int\limits_0^t \|\nabla\hat{\omega}\|^2 \,\mathrm{d}t \ensuremath{\lesssim} \|\hat{\omega}_0\|_{L^2}^2 + \int\limits_0^t |\hat{h}|_{X^{1,1}}^2 + \|\hat{v}\|_{X^1}^2 + \|\partial_z \hat{v}\|_{L^2}^2 \\[8pt]\quad + \big|\hat{v}|_{z=0}\big|_{X_{tan}^1} + \big|\hat{h}|_{z=0}\big|_{X^1} + \big|\nabla \hat{q}|_{z=0}\big|_{-\frac{1}{2}} \,\mathrm{d}t + O(\ensuremath{\epsilon}) \\[9pt] \ensuremath{\lesssim} \|\hat{\omega}_0\|_{L^2}^2 + \int\limits_0^t |\hat{h}|_{X^{1,1}}^2 + \|\hat{v}\|_{X^1}^2 + \|\partial_z \hat{v}\|_{L^2}^2 + \|\hat{v}\|_{X_{tan}^1} + \|\partial_z\hat{v}\|_{X_{tan}^1} \\[8pt]\quad + \|\hat{h}\|_{X^1} + \|\nabla \hat{q}\|_{L^2} \,\mathrm{d}t + O(\ensuremath{\epsilon}). \end{array} \end{equation} When $\ell+|\alpha|\leq k-2$, we study the quantity $\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}$. The equation $(\ref{SectA_Difference_Eq2_1})_2$ is rewritten as \begin{equation}\label{Sect5_NormalDer_Estimate_1} \begin{array}{ll} \partial_t^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} + v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} + \nabla^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{q} + \ensuremath{\epsilon}\nabla^{\varphi^\ensuremath{\epsilon}}\times \nabla^{\varphi^\ensuremath{\epsilon}} \partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{v} = \ensuremath{\epsilon} \, \mathcal{I}_{7,1} + \mathcal{I}_{7,2}, \end{array} \end{equation} where \begin{equation}\label{Sect5_NormalDer_Estimate_2} \begin{array}{ll} \mathcal{I}_{7,1} = -[\partial_t^{\ell}\mathcal{Z}^{\alpha}, \nabla^{\varphi^\ensuremath{\epsilon}}\times]\nabla^{\varphi^\ensuremath{\epsilon}}\times\hat{v} - \nabla^{\varphi^\ensuremath{\epsilon}}\times[\partial_t^{\ell}\mathcal{Z}^{\alpha}, \nabla^{\varphi^\ensuremath{\epsilon}}\times] \hat{v} + \partial_t^{\ell}\mathcal{Z}^{\alpha}\triangle^{\varphi^{\ensuremath{\epsilon}}}v, \\[9pt] \mathcal{I}_{7,2} := \partial_z^{\varphi} v (\partial_t + v^{\ensuremath{\epsilon}}_y\cdot \nabla_y + V_z^{\ensuremath{\epsilon}}\partial_z) \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\varphi} - \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}\cdot\nabla^{\varphi} v + \partial_z^{\varphi} q\nabla^{\varphi^{\ensuremath{\epsilon}}}\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\varphi} \\[8pt]\quad - [\partial_t^{\ell}\mathcal{Z}^{\alpha},\partial_t + v^{\ensuremath{\epsilon}}\partial_y + V_z^{\ensuremath{\epsilon}}\partial_z]\hat{v} + [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \partial_z^{\varphi} v (\partial_t + v^{\ensuremath{\epsilon}}_y\cdot \nabla_y + V_z^{\ensuremath{\epsilon}}\partial_z)]\hat{\varphi} \\[8pt]\quad - [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \nabla^{\varphi} v\cdot]\hat{v} - [\partial_t^{\ell}\mathcal{Z}^{\alpha},\nabla^{\varphi^{\ensuremath{\epsilon}}}] \hat{q} +
[\partial_t^{\ell}\mathcal{Z}^{\alpha},\partial_z^{\varphi} q\nabla^{\varphi^{\ensuremath{\epsilon}}}]\hat{\varphi}. \end{array} \end{equation} Multiply $(\ref{Sect5_NormalDer_Estimate_1})$ with $\nabla^{\varphi^{\ensuremath{\epsilon}}}\times (\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v})$, integrate in $\mathbb{R}^3_{-}$, we get \begin{equation}\label{Sect5_NormalDer_Estimate_3} \begin{array}{ll} \int\limits_{\mathbb{R}^3_{-}}\partial_t^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} \cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}\times (\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}) \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}} \\[10pt]\quad + \int\limits_{\mathbb{R}^3_{-}}v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} \cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}\times (\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}) \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}} \\[10pt]\quad + \int\limits_{\mathbb{R}^3_{-}}\nabla^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{q} \cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}\times (\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}) \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}} \\[11pt]\quad + \ensuremath{\epsilon}\int\limits_{\mathbb{R}^3_{-}}|\nabla^{\varphi^\ensuremath{\epsilon}}\times \nabla^{\varphi^\ensuremath{\epsilon}} \partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{v}|^2 \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}} \\[11pt] = \int\limits_{\mathbb{R}^3_{-}}(\ensuremath{\epsilon} \, \mathcal{I}_{7,1} + \mathcal{I}_{7,2}) \cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}\times (\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}) \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}}. 
\end{array} \end{equation} Use the integration by parts formula $(\ref{Sect1_Formulas_CanNotUse})_3$ and note that $[\partial_t^{\varphi^{\ensuremath{\epsilon}}}, \nabla^{\varphi^{\ensuremath{\epsilon}}}] =0$, we have \begin{equation}\label{Sect5_NormalDer_Estimate_4} \begin{array}{ll} \int\limits_{z=0}\partial_t^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} \cdot \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\times (\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}) \,\mathrm{d}y \\[10pt]\quad + \int\limits_{\mathbb{R}^3_{-}}\partial_t^{\varphi^{\ensuremath{\epsilon}}} (\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}) \cdot (\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}) \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}} \\[10pt]\quad + \int\limits_{z=0}v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} \cdot \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\times (\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}) \,\mathrm{d}y \\[10pt]\quad + \int\limits_{\mathbb{R}^3_{-}} v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} (\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}) \cdot (\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}) \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}} \hspace{2.5cm} \end{array} \end{equation} \begin{equation*} \begin{array}{ll} \quad + \int\limits_{\mathbb{R}^3_{-}} [(\sum\limits_{i=1}^3 \nabla^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon},i} \cdot\partial_i^{\varphi^{\ensuremath{\epsilon}}}) \times \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}] \cdot (\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}) \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}} \\[10pt]\quad + \int\limits_{z=0}\nabla^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{q} \cdot \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\times (\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}) \,\mathrm{d}y \\[10pt]\quad + \int\limits_{\mathbb{R}^3_{-}}\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \nabla^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{q} \cdot (\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}) \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}} \\[11pt]\quad + \ensuremath{\epsilon}\int\limits_{\mathbb{R}^3_{-}}|\nabla^{\varphi^\ensuremath{\epsilon}}\times \nabla^{\varphi^\ensuremath{\epsilon}} \partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{v}|^2 \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}} = \ensuremath{\epsilon}\int\limits_{\mathbb{R}^3_{-}}\mathcal{I}_{7,1} \cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}\times (\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}) \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}} \\[11pt]\quad + \int\limits_{z=0}\mathcal{I}_{7,2} \cdot \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\times (\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}) \,\mathrm{d}y + \int\limits_{\mathbb{R}^3_{-}} \nabla^{\varphi^{\ensuremath{\epsilon}}}\times \mathcal{I}_{7,2} \cdot (\nabla^{\varphi^{\ensuremath{\epsilon}}}\times 
\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}) \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}}. \end{array} \end{equation*} Note that $\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \nabla^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{q} =0$ and $(\partial_t^{\varphi^{\ensuremath{\epsilon}}} + v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}})|_{z=0} = (\partial_t + v_y^{\ensuremath{\epsilon}}\cdot\nabla_y)$, we have \begin{equation}\label{Sect5_NormalDer_Estimate_5} \begin{array}{ll} \frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t} \int\limits_{\mathbb{R}^3_{-}} |\nabla^{\varphi^{\ensuremath{\epsilon}}}\times\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}|^2 \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}} + \ensuremath{\epsilon}\int\limits_{\mathbb{R}^3_{-}}|\nabla^{\varphi^\ensuremath{\epsilon}}\times \nabla^{\varphi^\ensuremath{\epsilon}} \partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{v}|^2 \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}} \\[10pt] = - \int\limits_{z=0}(\partial_t + v_y^{\ensuremath{\epsilon}}\cdot\nabla_y) \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} \cdot \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\times (\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}) \,\mathrm{d}y \\[10pt]\quad - \int\limits_{\mathbb{R}^3_{-}} [(\sum\limits_{i=1}^3 \nabla^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon},i} \cdot\partial_i^{\varphi^{\ensuremath{\epsilon}}}) \times \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}] \cdot (\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}) \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}} \\[10pt]\quad - \int\limits_{z=0}\nabla^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{q} \cdot \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\times (\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}) \,\mathrm{d}y \\[10pt]\quad + \ensuremath{\epsilon}\int\limits_{\mathbb{R}^3_{-}}\mathcal{I}_{7,1} \cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}\times (\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}) \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}} + \int\limits_{z=0}\mathcal{I}_{7,2} \cdot \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\times (\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}) \,\mathrm{d}y \\[11pt]\quad + \int\limits_{\mathbb{R}^3_{-}} \nabla^{\varphi^{\ensuremath{\epsilon}}}\times \mathcal{I}_{7,2} \cdot (\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}) \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}} \\[12pt] \ensuremath{\lesssim} \|\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}\|_{L^2}^2 + |\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\times (\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v})|_{\frac{1}{2}} \big|\nabla^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{q}|_{z=0} \big|_{-\frac{1}{2}} \\[10pt]\quad + |\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\times (\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v})|_{L^2} \big(\big|\mathcal{I}_{7,2}|_{z=0}\big|_{L^2} + \big|\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}|_{z=0} \big|_{X_{tan}^1} \big) \\[10pt]\quad + \|\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}\|_{X^1}^2 + \|\partial_z 
\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}\|_{L^2}^2 + \ensuremath{\epsilon}\|\mathcal{I}_{7,1}\|_{L^2}^2 + \|\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \mathcal{I}_{7,2}\|_{L^2}^2. \end{array} \end{equation} It is easy to prove that \begin{equation}\label{Sect5_NormalDer_Estimate_6} \begin{array}{ll} \big|\mathcal{I}_{7,2}|_{z=0}\big|_{L^2} \ensuremath{\lesssim} |\hat{h}|_{X^{k-1}} + \big|\hat{v}|_{z=0}\big|_{X^{k-2}} + \big|\nabla\hat{q}|_{z=0}\big|_{X^{k-3}} \\[6pt]\hspace{1.85cm} \ensuremath{\lesssim} |\hat{h}|_{X^{k-1}} + \|\hat{v}\|_{X^{k-2}} + \|\partial_z\hat{v}\|_{X^{k-2}} + \|\nabla\hat{q}\|_{X^{k-2}}, \\[9pt] \ensuremath{\epsilon}\|\mathcal{I}_{7,1}\|_{L^2}^2 \ensuremath{\lesssim} \ensuremath{\epsilon}\sum\limits_{\ell+|\alpha|\leq k-2} \|\nabla^{\varphi^\ensuremath{\epsilon}}\times \nabla^{\varphi^\ensuremath{\epsilon}} \partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{v}\|_{L^2}^2 + O(\ensuremath{\epsilon}), \\[13pt] \|\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \mathcal{I}_{7,2}\|_{L^2}^2 \ensuremath{\lesssim} \|\hat{\eta}\|_{X^{k-1,1}}^2 + \|\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \hat{v}\|_{X_{tan}^{k-2}}^2 + \|\nabla^{\varphi^{\ensuremath{\epsilon}}}\times [\partial_t^{\ell}\mathcal{Z}^{\alpha},V_z^{\ensuremath{\epsilon}}\partial_z]\hat{v}\|_{L^2}^2 \\[6pt]\hspace{2.8cm} + \|\nabla^{\varphi^{\ensuremath{\epsilon}}}\times[\partial_t^{\ell}\mathcal{Z}^{\alpha},\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\partial_z^{\varphi^{\ensuremath{\epsilon}}}] \hat{q}\|_{L^2}^2, \end{array} \end{equation} where the estimates for the last two terms are similar to $(\ref{Sect2_NormalDer_Estimate_5}), (\ref{Sect2_NormalDer_Estimate_6})$. Integrate $(\ref{Sect5_NormalDer_Estimate_5})$ in time, apply the integral form of Gronwall's inequality, we have \begin{equation}\label{Sect5_NormalDer_Estimate_7} \begin{array}{ll} \|\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}\|_{L^2}^2 + \ensuremath{\epsilon}\int\limits_{\mathbb{R}^3_{-}}|\nabla^{\varphi^\ensuremath{\epsilon}}\times \nabla^{\varphi^\ensuremath{\epsilon}} \partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{v}|^2 \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}} \\[5pt] \ensuremath{\lesssim} \big\|\nabla^{\varphi^{\ensuremath{\epsilon}}}\times \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}|_{t=0}\big\|_{L^2}^2 + \int\limits_0^t \|\hat{v}\|_{X^{k-2}} + \|\partial_z \hat{v}\|_{X^{k-2}} + \|\nabla\hat{q} \|_{X^{k-2}} + \|\hat{h}\|_{X^{k-1}} \,\mathrm{d}t \\[7pt]\quad + \int\limits_0^t \|\hat{v}\|_{X^{k-1}}^2 + \|\partial_z \hat{v}\|_{X^{k-1}}^2 + \|\hat{h}\|_{X^{k-1,1}}^2 \,\mathrm{d}t + O(\ensuremath{\epsilon}) \\[8pt] \ensuremath{\lesssim} \big\|\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\omega}|_{t=0}\big\|_{L^2}^2 + \int\limits_0^t\|\hat{v}\|_{X^{k-2}} + \|\partial_z \hat{v}\|_{X^{k-2}} + \|\nabla\hat{q} \|_{X^{k-2}} + \|\hat{h}\|_{X^{k-1}}\,\mathrm{d}t + O(\ensuremath{\epsilon}). \end{array} \end{equation} Since $\hat{\omega} = \nabla^{\varphi^{\ensuremath{\epsilon}}} \times \hat{v} - \nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta} \times \partial_z^{\varphi} v$, we have \begin{equation}\label{Sect5_NormalDer_Estimate_8} \begin{array}{ll} \|\hat{\omega}\|_{X^{k-2}}^2 + \|\partial_z\hat{v}\|_{X^{k-2}}^2 \ensuremath{\lesssim} \|\hat{\omega}|_{t=0}\|_{X^{k-2}}^2 + \int\limits_0^t\|\hat{v}\|_{X^{k-2}} + \|\partial_z \hat{v}\|_{X^{k-2}} \\[6pt]\quad + \|\nabla\hat{q} \|_{X^{k-2}} + \|\hat{h}\|_{X^{k-1}}\,\mathrm{d}t + O(\ensuremath{\epsilon}). 
\end{array} \end{equation} Thus, Lemma $\ref{Sect5_NormalDer_Lemma}$ is proved. \end{proof} \begin{remark}\label{Sect5_NormalDer_Remark} We cannot use the following variables to estimate $\hat{\omega}_h$: \begin{equation}\label{Sect5_NormalDer_Remark_1} \left\{\begin{array}{ll} \hat{\zeta}^1 = \hat{\omega}^1 -\textsf{F}^1 [\nabla\varphi^{\ensuremath{\epsilon}}](\partial_j v^{\ensuremath{\epsilon},i}) + \omega^{b,1}, \ j=1,2,\ i=1,2,3, \\[6pt] \hat{\zeta}^2 = \hat{\omega}^2 -\textsf{F}^2 [\nabla\varphi^{\ensuremath{\epsilon}}](\partial_j v^{\ensuremath{\epsilon},i}) + \omega^{b,2}, \ j=1,2,\ i=1,2,3, \end{array}\right. \end{equation} because $\textsf{F}^{1,2} [\nabla\varphi^{\ensuremath{\epsilon}}](\partial_j v^{\ensuremath{\epsilon},i})$ may not converge to the extension of $\omega^b$. That is, $\hat{\zeta}$ may not be a small quantity. \end{remark} \subsection{Estimates for Normal Derivatives when $\Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}|_{z=0} = 0$} When $\Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}|_{z=0} = 0$, the boundary value of the Navier-Stokes vorticity converges to that of the Euler vorticity, so the convergence rates of the inviscid limit can be improved. In the following lemma, we estimate normal derivatives for this special Euler boundary data. \begin{lemma}\label{Sect5_Special_EularData_Lemma} Assume $k\leq m-2$. If $\Pi\mathcal{S}^{\varphi} v \ensuremath{\textbf{n}}|_{z=0} = 0$, then the vorticity has the following estimate: \begin{equation}\label{Sect5_Special_EularData_Lemma_Eq} \begin{array}{ll} \|\partial_z\hat{v}_h\|_{L^4([0,T],X^{k-2})}^2 + \|\hat{\omega}_h\|_{L^4([0,T],X^{k-2})}^2 \\[6pt] \ensuremath{\lesssim} \big\|\hat{\omega}_0\big\|_{X^{k-2}}^2 + \int\limits_0^T\|\hat{v}\|_{X^{k-1,1}}^2 \,\mathrm{d}t + \int\limits_0^T|\hat{h}|_{X^{k-2,1}}^2 \,\mathrm{d}t + \|\partial_t^{k-1}\hat{h}\|_{L^4([0,T],L^2)}^2 \\[8pt]\quad + \sqrt{\ensuremath{\epsilon}}\|\partial_z\hat{v}\|_{L^4([0,T],X^{k-1})}^2 + O(\ensuremath{\epsilon}). \end{array} \end{equation} \end{lemma} \begin{proof} If $\Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}|_{z=0} = 0$, then $\Theta^i =0$ for $i=1,\cdots,6$; see Remark $\ref{Sect5_Euler_BoundaryData_Remark}$. When $\ell+|\alpha|\leq k-2$, we study the equations $(\ref{Sect1_N_Derivatives_Difference_Eq})$ and decompose $\hat{\omega}_h = \hat{\omega}_h^{nhom} + \hat{\omega}_h^{hom}$, such that $\hat{\omega}_h^{nhom}$ satisfies the nonhomogeneous equations $(\ref{Sect5_N_Derivatives_Difference_Eq_Nonhom})$ and $\hat{\omega}_h^{hom}$ satisfies the homogeneous equations $(\ref{Sect5_N_Derivatives_Difference_Eq_Hom})$. Then $\hat{\omega}_h^{nhom}$ satisfies the following estimate: \begin{equation}\label{Sect5_Special_EularData_1} \begin{array}{ll} \|\hat{\omega}_h^{nhom}\|_{L^4([0,T],X^{k-2})}^2 \ensuremath{\lesssim} \sqrt{T}\big\|\hat{\omega}_{0,h}\big\|_{X^{k-2}}^2 + T \|\hat{\omega}_h\|_{L^4([0,T],X^{k-2})}^2 \\[9pt]\quad + \sqrt{T}\int\limits_0^T\|\hat{v}\|_{X^{k-2,1}}^2 \,\mathrm{d}t + \sqrt{T}\int\limits_0^T|\hat{h}|_{X^{k-2,1}}^2 \,\mathrm{d}t + \sqrt{T}|\partial_t^{k-1} \hat{h}|_{L^4([0,T],L^2)}^2 + O(\ensuremath{\epsilon}).
\end{array} \end{equation} When $\ell+|\alpha|\leq k-2$, the estimate $(\ref{Sect5_N_Derivatives_Difference_Eq_Hom_1})$ is reduced as follows: \begin{equation}\label{Sect5_Special_EularData_2} \begin{array}{ll} \|\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\omega}_h^{hom}\|_{L^4([0,T],L^2(\mathbb{R}^3_{-}))}^2 \\[7pt] \ensuremath{\lesssim} \sqrt{\ensuremath{\epsilon}}\int\limits_0^T\big|\textsf{F}^{1,2}[\nabla\varphi^{\ensuremath{\epsilon}}](\partial_j v^{\ensuremath{\epsilon},i}) -\textsf{F}^{1,2}[\nabla\varphi](\partial_j v^i) \big|_{X^{k-2}(\mathbb{R}^2)}^2 \,\mathrm{d}t \\[7pt] \ensuremath{\lesssim} \sqrt{\ensuremath{\epsilon}}\int\limits_0^T\big|\hat{v}|_{z=0}\big|_{X^{k-1}(\mathbb{R}^2)}^2 \,\mathrm{d}t + \sqrt{\ensuremath{\epsilon}}\int\limits_0^T|\hat{h}|_{X^{k-2,1}(\mathbb{R}^2)}^2 \,\mathrm{d}t \\[7pt] \ensuremath{\lesssim} \sqrt{\ensuremath{\epsilon}}\int\limits_0^T\|\hat{v}\|_{X^{k-1,1}(\mathbb{R}^2)}^2 \,\mathrm{d}t + \sqrt{\ensuremath{\epsilon}}\sqrt{T}\|\partial_z\hat{v}\|_{L^4([0,T],X^{k-1})}^2 + \sqrt{\ensuremath{\epsilon}}\int\limits_0^T|\hat{h}|_{X^{k-2,1}(\mathbb{R}^2)}^2 \,\mathrm{d}t. \end{array} \end{equation} By $(\ref{Sect5_Special_EularData_1})$ and $(\ref{Sect5_Special_EularData_2})$, we have \begin{equation}\label{Sect5_Special_EularData_3} \begin{array}{ll} \|\partial_z\hat{v}_h\|_{L^4([0,T],X^{k-2})}^2 + \|\hat{\omega}_h\|_{L^4([0,T],X^{k-2})}^2 \\[10pt] \ensuremath{\lesssim} \|\hat{\omega}_h^{nhom}\|_{L^4([0,T],X^{k-2})}^2 + \|\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\omega}_h^{hom}\|_{L^4([0,T],L^2(\mathbb{R}^3_{-}))}^2 \\[7pt] \ensuremath{\lesssim} \big\|\hat{\omega}_0\big\|_{X^{k-2}}^2 + \int\limits_0^T\|\hat{v}\|_{X^{k-1,1}}^2 \,\mathrm{d}t + \int\limits_0^T|\hat{h}|_{X^{k-2,1}}^2 \,\mathrm{d}t + \|\partial_t^{k-1}\hat{h}\|_{L^4([0,T],L^2)}^2 \\[8pt]\quad + \sqrt{\ensuremath{\epsilon}}\|\partial_z\hat{v}\|_{L^4([0,T],X^{k-1})}^2 + O(\ensuremath{\epsilon}). \end{array} \end{equation} Thus, Lemma $\ref{Sect5_Special_EularData_Lemma}$ is proved. \end{proof} \subsection{Convergence Rates of the Inviscid Limit} In this subsection, we calculate convergence rates of the inviscid limit. \begin{theorem}\label{Sect5_ConvergenceRates_Thm} Assume $T>0$ is finite, fixed and independent of $\ensuremath{\epsilon}$, $(v^{\ensuremath{\epsilon}},h^{\ensuremath{\epsilon}})$ is the solution in $[0,T]$ of Navier-Stokes equations $(\ref{Sect1_NS_Eq})$ with initial data $(v^{\ensuremath{\epsilon}}_0,h^{\ensuremath{\epsilon}}_0)$ satisfying $(\ref{Sect1_Proposition_TimeRegularity_1})$, $\omega^{\ensuremath{\epsilon}}$ is its vorticity. $(v,h)$ is the solution in $[0,T]$ of Euler equations $(\ref{Sect1_Euler_Eq})$ with initial data $(v_0,h_0)\in X^{m-1,1}(\mathbb{R}^3_{-}) \times X^{m-1,1}(\mathbb{R}^2)$, $\omega$ is its vorticity. Assume there exists an integer $k$ where $1\leq k\leq m-2$, such that $\|v^{\ensuremath{\epsilon}}_0 -v_0\|_{X^{k-1,1}(\mathbb{R}^3_{-})} =O(\ensuremath{\epsilon}^{\lambda^v})$, $|h^{\ensuremath{\epsilon}}_0 -h_0|_{X^{k-1,1}(\mathbb{R}^2)} =O(\ensuremath{\epsilon}^{\lambda^h})$, $\|\omega^{\ensuremath{\epsilon}}_0 - \omega_0\|_{X^{k-1}(\mathbb{R}^3_{-})} =O(\ensuremath{\epsilon}^{\lambda^{\omega}_1})$, where $\lambda^v>0, \lambda^h>0, \lambda^{\omega}_1>0$. If the Euler boundary data satisfies $\Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}|_{z=0}\neq 0$ in $[0,T]$, then the convergence rates of the inviscid limit satisfy $(\ref{Sect1_Thm4_ConvergenceRates_1})$. 
If the Euler boundary data satisfies $\Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}|_{z=0} = 0$ in $[0,T]$ and, in addition, $\|\omega^{\ensuremath{\epsilon}}_0 - \omega_0\|_{X^{k-2}(\mathbb{R}^3_{-})} =O(\ensuremath{\epsilon}^{\lambda^{\omega}_2})$ with $\lambda^{\omega}_2>0$, then the convergence rates of the inviscid limit satisfy $(\ref{Sect1_Thm4_ConvergenceRates_2})$. \end{theorem} \begin{proof} If $\Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}|_{z=0}\neq 0$, we prove the convergence rates of the inviscid limit as follows. By Lemmas $\ref{Sect5_Pressure_Lemma}, \ref{Sect5_Tangential_Estimates_Lemma}, \ref{Sect5_Vorticity_Lemma}$, we have \begin{equation}\label{Sect5_ConvergenceRates_Thm_Eq_1} \begin{array}{ll} \|\hat{v}\|_{X^{k-1,1}}^2 + |\hat{h}|_{X^{k-1,1}}^2 \ensuremath{\lesssim} \|\hat{v}_0\|_{X^{k-1,1}}^2 + |\hat{h}_0|_{X^{k-1,1}}^2 + |\hat{\omega}_0|_{X^{k-1}}^2 \\[5pt]\quad + \int\limits_0^t\|\hat{v}\|_{X^{k-1,1}}^2 \,\mathrm{d}t + \int\limits_0^t|\hat{h}|_{X^{k-1,1}}^2 \,\mathrm{d}t + O(\sqrt{\ensuremath{\epsilon}}). \end{array} \end{equation} Applying the integral form of Gronwall's inequality to $(\ref{Sect5_ConvergenceRates_Thm_Eq_1})$, we get \begin{equation}\label{Sect5_ConvergenceRates_Thm_Eq_2} \begin{array}{ll} \|\hat{v}\|_{X^{k-1,1}}^2 + |\hat{h}|_{X^{k-1,1}}^2 \ensuremath{\lesssim} \|\hat{v}_0\|_{X^{k-1,1}}^2 + |\hat{h}_0|_{X^{k-1,1}}^2 + \big\|\hat{\omega}_0\big\|_{X^{k-1}}^2 + O(\sqrt{\ensuremath{\epsilon}}) \\[7pt]\hspace{3.3cm} \ensuremath{\lesssim} O(\ensuremath{\epsilon}^{\min\{\frac{1}{2}, 2\lambda^v, 2\lambda^h, 2\lambda^{\omega}_1\}}). \end{array} \end{equation} By Lemma $\ref{Sect5_Vorticity_Lemma}$, we have \begin{equation}\label{Sect5_ConvergenceRates_Thm_Eq_3} \begin{array}{ll} \|\partial_z\hat{v}_h\|_{L^4([0,T],X^{k-1})}^2 + \|\hat{\omega}_h\|_{L^4([0,T],X^{k-1})}^2 \ensuremath{\lesssim} O(\ensuremath{\epsilon}^{\min\{\frac{1}{2}, 2\lambda^v, 2\lambda^h, 2\lambda^{\omega}_1\}}). \end{array} \end{equation} By Lemmas $\ref{Sect5_Pressure_Lemma}, \ref{Sect5_Vorticity_Lemma}, \ref{Sect5_NormalDer_Lemma}$, we have \begin{equation}\label{Sect5_ConvergenceRates_Thm_Eq_4} \begin{array}{ll} \|\hat{\omega}\|_{X^{k-2}}^2 + \|\partial_z\hat{v}\|_{X^{k-2}}^2 \ensuremath{\lesssim} \|\hat{\omega}_0\|_{X^{k-2}}^2 + \int\limits_0^t\|\hat{v}\|_{X^{k-2}} + \|\partial_z \hat{v}\|_{X^{k-2}} \\[6pt]\quad + \|\nabla\hat{q} \|_{X^{k-2}} + \|\hat{h}\|_{X^{k-1}}\,\mathrm{d}t + O(\ensuremath{\epsilon}) \ensuremath{\lesssim} O(\ensuremath{\epsilon}^{\min\{\frac{1}{4}, \lambda^v, \lambda^h, \lambda^{\omega}_1\}}). \end{array} \end{equation} If $\Pi\mathcal{S}^{\varphi} v\ensuremath{\textbf{n}}|_{z=0}= 0$, we prove the convergence rates of the inviscid limit as follows. By Lemma $\ref{Sect5_Special_EularData_Lemma}$, we have \begin{equation}\label{Sect5_ConvergenceRates_Thm_Eq_5} \begin{array}{ll} \|\partial_z\hat{v}_h\|_{L^4([0,T],X^{k-2})}^2 + \|\hat{\omega}_h\|_{L^4([0,T],X^{k-2})}^2 \\[6pt] \ensuremath{\lesssim} \big\|\hat{\omega}_0\big\|_{X^{k-2}}^2 + \int\limits_0^T\|\hat{v}\|_{X^{k-1,1}}^2 \,\mathrm{d}t + \int\limits_0^T|\hat{h}|_{X^{k-2,1}}^2 \,\mathrm{d}t + \sqrt{\ensuremath{\epsilon}}O(\ensuremath{\epsilon}^{\min\{\frac{1}{2}, 2\lambda^v, 2\lambda^h, 2\lambda^{\omega}_1\}}).
\end{array} \end{equation} Coupling $(\ref{Sect5_ConvergenceRates_Thm_Eq_5})$ with the following tangential estimates, \begin{equation}\label{Sect5_ConvergenceRates_Thm_Eq_6} \begin{array}{ll} \|\hat{v}\|_{X^{k-2,1}}^2 + |\hat{h}|_{X^{k-2,1}}^2 \ensuremath{\lesssim} \|\hat{v}_0\|_{X^{k-2,1}}^2 + |\hat{h}_0|_{X^{k-2,1}}^2 +\|\partial_z \hat{v}\|_{L^4([0,T],X^{k-2})}^2 \\[8pt]\quad + \int\limits_0^t\|\hat{v}\|_{X^{k-2,1}}^2 \,\mathrm{d}t + \int\limits_0^t|\hat{h}|_{X^{k-2,1}}^2 \,\mathrm{d}t + O(\ensuremath{\epsilon}), \end{array} \end{equation} and applying the integral form of Gronwall's inequality, we get \begin{equation}\label{Sect5_ConvergenceRates_Thm_Eq_7} \begin{array}{ll} \|\hat{v}\|_{X^{k-2,1}}^2 + |\hat{h}|_{X^{k-2,1}}^2 \\[6pt] \ensuremath{\lesssim} \big\|\hat{\omega}_0\big\|_{X^{k-2}}^2 + \|\hat{v}_0\|_{X^{k-2,1}}^2 + |\hat{h}_0|_{X^{k-2,1}}^2 + \sqrt{\ensuremath{\epsilon}}O(\ensuremath{\epsilon}^{\min\{\frac{1}{2}, 2\lambda^v, 2\lambda^h, 2\lambda^{\omega}_1\}}) \\[8pt] \ensuremath{\lesssim} O(\ensuremath{\epsilon}^{\min\{1, 2\lambda^v, 2\lambda^h, 2\lambda^{\omega}_2, 2\lambda^{\omega}_1+1\}}) = O(\ensuremath{\epsilon}^{\min\{1, 2\lambda^v, 2\lambda^h, 2\lambda^{\omega}_2\}}). \end{array} \end{equation} Similar to $(\ref{Sect5_ConvergenceRates_Thm_Eq_4})$, we have \begin{equation}\label{Sect5_ConvergenceRates_Thm_Eq_8} \begin{array}{ll} \|\hat{\omega}\|_{X^{k-3}}^2 + \|\partial_z\hat{v}\|_{X^{k-3}}^2 \ensuremath{\lesssim} O(\ensuremath{\epsilon}^{\min\{\frac{1}{2}, \lambda^v, \lambda^h, \lambda^{\omega}_2\}}). \end{array} \end{equation} Thus, Theorem $\ref{Sect5_ConvergenceRates_Thm}$ is proved. \end{proof} To estimate $\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot \partial_z^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}} - \ensuremath{\textbf{N}} \cdot \partial_z^{\varphi} v$, we use the equality $\ensuremath{\textbf{N}}\cdot\partial_z^{\varphi} v = - (\partial_1 v^1 + \partial_2 v^2)$. To estimate $\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot {\omega}^{\ensuremath{\epsilon}} -\ensuremath{\textbf{N}}\cdot \omega$, we use the following equality: \begin{equation}\label{Sect5_Vorticity_Multiple_N} \begin{array}{ll} \ensuremath{\textbf{N}}\cdot\omega = -\partial_1\varphi(\partial_2 v^3 - \frac{\partial_2\varphi}{\partial_z\varphi}\partial_z v^3 - \frac{1}{\partial_z\varphi}\partial_z v^2) -\partial_2\varphi(- \partial_1 v^3 + \frac{\partial_1\varphi}{\partial_z\varphi}\partial_z v^3 + \frac{1}{\partial_z\varphi}\partial_z v^1) \\[8pt]\hspace{1.3cm} + \partial_1 v^2 - \frac{\partial_1\varphi}{\partial_z\varphi}\partial_z v^2 - \partial_2 v^1 + \frac{\partial_2\varphi}{\partial_z\varphi}\partial_z v^1 \\[8pt]\hspace{0.9cm} = -\partial_1\varphi\partial_2 v^3 +\partial_2\varphi\partial_1 v^3 + \partial_1 v^2 - \partial_2 v^1. \end{array} \end{equation} \section{Regularity Structure of Navier-Stokes Solutions for Fixed $\sigma>0$} In this section we take $\sigma>0$ and prove Proposition $\ref{Sect1_Proposition_Regularity_Tension}$ on the regularities of the Navier-Stokes and Euler solutions. For simplicity, in this section we omit the superscript ${}^{\ensuremath{\epsilon}}$ that marks Navier-Stokes solutions. Since the estimates of the normal derivatives are the same as in the $\sigma=0$ case, we only focus on the estimates of the pressure gradient and the tangential derivatives when $\sigma>0$. The following lemma concerns the estimate of the pressure gradient.
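Before stating it, we recall schematically where the problem comes from: taking $\nabla^{\varphi}\cdot$ of the momentum equation and using $\nabla^{\varphi}\cdot v=0$, the pressure solves an elliptic problem of the form
\begin{equation*}
\triangle^{\varphi} q = -\nabla^{\varphi}\cdot\big(v\cdot\nabla^{\varphi} v\big) \ \ \mbox{in}\ \mathbb{R}^3_{-},
\qquad
\nabla^{\varphi} q\cdot\ensuremath{\textbf{N}}\big|_{z=0} = -\big(\partial_t^{\varphi} v + v\cdot\nabla^{\varphi} v -\ensuremath{\epsilon}\triangle^{\varphi} v\big)\cdot\ensuremath{\textbf{N}}\big|_{z=0}
\end{equation*}
(see $(\ref{Sect1_Pressure_Neumann})$ for the precise form); this is the structure behind the estimates in the proof below.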
\begin{lemma}\label{Sect6_Pressure_Estimates} Assume the pressure $q$ satisfies the elliptic equation with Neumann boundary condition $(\ref{Sect1_Pressure_Neumann})$, then $q$ has the following gradient estimate: \begin{equation}\label{Sect6_Pressure_Estimate_Eq} \begin{array}{ll} \|\nabla q\|_{X^{m-1}} \ensuremath{\lesssim} \|\partial_t^m v\|_{X^m} + \|v\|_{X^{m-1,1}} + \|\partial_z v\|_{X^{m-1}} + |h|_{X^{m,1}} \\[6pt]\quad + \ensuremath{\epsilon} \|\nabla_y v\|_{X^{m-1,1}} + \ensuremath{\epsilon} \|\partial_z v\|_{X^{m-1}} + \ensuremath{\epsilon} |h|_{X^{m-1,\frac{3}{2}}}. \end{array} \end{equation} \end{lemma} \begin{proof} The $L^2$ estimate of the elliptic equation with its Neumann boundary condition $(\ref{Sect1_Pressure_Neumann})$ is standard, that is \begin{equation}\label{Sect6_Pressure_Estimate_L^2} \begin{array}{ll} \|\nabla q\|_{L^2} \ensuremath{\lesssim} \|v\cdot \nabla^{\varphi} v\|_{L^2} + \big|\nabla^{\varphi} q\cdot\ensuremath{\textbf{N}}|_{z=0} \big|_{-\frac{1}{2}} \\[8pt] \ensuremath{\lesssim} \|v\|_{X^{0,1}}+ \|\partial_z v\|_{L^2} + |h|_{X^{0,1}} + \big|\partial_t^{\varphi} v\cdot\ensuremath{\textbf{N}}|_{z=0}\big|_{-\frac{1}{2}} \\[6pt]\quad + \big|v\cdot\nabla^{\varphi} v\cdot\ensuremath{\textbf{N}}|_{z=0} \big|_{-\frac{1}{2}} + \ensuremath{\epsilon}\big|\triangle^{\varphi} v\cdot\ensuremath{\textbf{N}}|_{z=0}\big|_{-\frac{1}{2}} \\[8pt] \ensuremath{\lesssim} \|v\|_{X^{0,1}}+ \|\partial_z v\|_{L^2} + |h|_{X^{0,1}} + \|\partial_t^{\varphi} v\|_{L^2} + \|\nabla^{\varphi}\cdot\partial_t^{\varphi} v\|_{L^2} \\[6pt]\quad + \|v\cdot\nabla^{\varphi} v\|_{L^2} + \|\nabla\cdot(v\cdot\nabla^{\varphi} v)\|_{L^2} + \ensuremath{\epsilon}\big|v|_{z=0}\big|_{\frac{3}{2}} + \ensuremath{\epsilon}|h|_{\frac{3}{2}}. \end{array} \end{equation} where we used the inequality $|v\cdot\ensuremath{\textbf{N}}|_{-\frac{1}{2}} \ensuremath{\lesssim} \|v\| + \|\nabla^{\varphi}\cdot v\|$ (see \cite{Wang_Xin_2015}). Similar to \cite{Wang_Xin_2015,Masmoudi_Rousset_2012_NavierBC}, we have higher order estimates for $(\ref{Sect1_Pressure_Neumann})$: \begin{equation}\label{Sect6_Pressure_Estimate_1} \begin{array}{ll} \|\nabla q\|_{X^{m-1}} \ensuremath{\lesssim} \|v\cdot \nabla^{\varphi} v\|_{X^{m-1}} + \big|\nabla^{\varphi} q\cdot\ensuremath{\textbf{N}}|_{z=0} \big|_{X^{m-1, -\frac{1}{2}}} \\[9pt] \ensuremath{\lesssim} \|v\|_{X^{m-1,1}}+ \|\partial_z v\|_{X^{m-1}} + |h|_{X^{m-1,1}} + \big|\nabla^{\varphi} q\cdot\ensuremath{\textbf{N}}|_{z=0} \big|_{X^{m-1, -\frac{1}{2}}}. \end{array} \end{equation} Next, we estimate $\big|\nabla^{\varphi} q\cdot\ensuremath{\textbf{N}}|_{z=0} \big|_{X^{m-1, -\frac{1}{2}}}$. Firstly, it is easy to estimate $\ensuremath{\epsilon} \big|\triangle^{\varphi} v\cdot\ensuremath{\textbf{N}}|_{z=0} \big|_{X^{m-1,-\frac{1}{2}}}$. 
It follows from the divergence-free condition $\nabla^{\varphi}\cdot v=0$ that
\begin{equation}\label{Sect6_Pressure_Estimate_2}
\begin{array}{ll}
\partial_z v \cdot\ensuremath{\textbf{N}} = -\partial_z\varphi \nabla_y\cdot v_y, \quad z\leq 0, \\[11pt]
\partial_{zz} v\cdot\ensuremath{\textbf{N}}|_{z=0} = \partial_z(\partial_z v \cdot\ensuremath{\textbf{N}}) -\partial_z v \cdot \partial_z\ensuremath{\textbf{N}} = - \partial_z(\partial_z\varphi \nabla_y\cdot v_y) + \partial_z v_y \cdot \partial_z\partial_y\varphi \\[5pt]
= - \partial_z^2\varphi \nabla_y\cdot v_y - \partial_z\varphi \nabla_y\cdot\partial_z v_y + \partial_z v_y \cdot \partial_z\partial_y\varphi \\[5pt]
= - \partial_z^2\varphi \nabla_y\cdot v_y - \partial_z\varphi \nabla_y\cdot[f^{5,6}[\nabla\varphi](\partial_j v^i)] + [f^{5,6}[\nabla\varphi](\partial_j v^i)] \cdot \partial_z\partial_y\varphi ,
\end{array}
\end{equation}
where $\partial_z v_h = f^{5,6}[\nabla\varphi](\partial_j v^i)$ is proved in $(\ref{Sect2_Vorticity_H_BC_10})$. Thus,
\begin{equation}\label{Sect6_Pressure_Estimate_3}
\begin{array}{ll}
\ensuremath{\epsilon} \big|\triangle^{\varphi} v\cdot\ensuremath{\textbf{N}}|_{z=0} \big|_{X^{m-1,-\frac{1}{2}}} \ensuremath{\lesssim} \ensuremath{\epsilon} \big|\nabla_y v|_{z=0}\big|_{X^{m-1,\frac{1}{2}}} + \ensuremath{\epsilon} |h|_{X^{m-1,\frac{3}{2}}} \\[7pt]
\ensuremath{\lesssim} \ensuremath{\epsilon} \|\nabla_y v\|_{X^{m-1,1}} + \ensuremath{\epsilon} \|\partial_z v\|_{X^{m-1}} + \ensuremath{\epsilon} |h|_{X^{m-1,\frac{3}{2}}}.
\end{array}
\end{equation}
Secondly, we estimate $\big|(\partial_t^{\varphi} v + v\cdot\nabla^{\varphi} v)\cdot\ensuremath{\textbf{N}}|_{z=0} \big|_{X^{m-1, -\frac{1}{2}}}$.
\begin{equation}\label{Sect6_Pressure_Estimate_4}
\begin{array}{ll}
\big|(\partial_t^{\varphi} v + v\cdot\nabla^{\varphi} v)\cdot\ensuremath{\textbf{N}}|_{z=0} \big|_{X^{m-1, -\frac{1}{2}}} = \big|(\partial_t v + v_y\cdot\nabla_y v)\cdot\ensuremath{\textbf{N}}|_{z=0} \big|_{X^{m-1, -\frac{1}{2}}} \\[8pt]
\ensuremath{\lesssim} \sum\limits_{\ell+|\alpha|\leq m-1} \big(\big|\partial_t^{\ell}\mathcal{Z}^{\alpha}(\partial_t v + v_y\cdot\nabla_y v)\cdot\ensuremath{\textbf{N}}|_{z=0} \big|_{-\frac{1}{2}} + \big|\partial_t^{\ell}\mathcal{Z}^{\alpha}\ensuremath{\textbf{N}}|_{z=0} \big|_{-\frac{1}{2}}\big) \\[12pt]
\ensuremath{\lesssim} \sum\limits_{\ell+|\alpha|\leq m-1} \big(\|\partial_t^{\ell}\mathcal{Z}^{\alpha} \partial_t v\|_{L^2} + \|\nabla^{\varphi}\cdot\partial_t^{\ell}\mathcal{Z}^{\alpha} \partial_t v\|_{L^2}\big) \\[10pt]\quad
+ \sum\limits_{\ell+|\alpha|\leq m-1} \big(\|\partial_t^{\ell}\mathcal{Z}^{\alpha} \nabla_y v\|_{L^2} + \|\nabla^{\varphi}\cdot\partial_t^{\ell}\mathcal{Z}^{\alpha} \nabla_y v\|_{L^2}\big) + |h|_{X^{m-1, \frac{1}{2}}} \\[10pt]
\ensuremath{\lesssim} \|v\|_{X^m} + \|\partial_z v\|_{X^{m-1}} + |h|_{X^{m,1}}.
\end{array}
\end{equation}
Thus, Lemma $\ref{Sect6_Pressure_Estimates}$ is proved.
\end{proof}
Before estimating the tangential derivatives of $v$, we bound $\partial_t^{\ell} h$ by using the kinematic boundary condition $(\ref{Sect1_NS_Eq_ST})_3$, which is the same as $(\ref{Sect1_NS_Eq})_3$. Since the proof is identical to that of Lemma $\ref{Sect2_Height_Estimates_Lemma}$, we state the following lemma without proof.
\begin{lemma}\label{Sect6_Height_Estimates_Lemma}
For $0\leq\ell\leq m-1$, $\partial_t^{\ell}h$ satisfies the estimate:
\begin{equation}\label{Sect6_Height_Estimates_Lemma_Eq}
\begin{array}{ll}
\int\limits_{\mathbb{R}^2} |\partial_t^{\ell}h|^2 \,\mathrm{d}y \ensuremath{\lesssim} |h_0|_{X^{m-1}}^2 + \int\limits_0^t |h|_{X^{m-1,1}}^2 + \|v\|_{X^{m-1,1}}^2 \,\mathrm{d}t + \|\partial_z v\|_{L^4([0,T],X^{m-1})}^2.
\end{array}
\end{equation}
\end{lemma}
Now we develop a priori estimates for tangential derivatives, including time derivatives. Our equations and variables differ from those of \cite{Wang_Xin_2015}, which used Alinhac's good unknown.
\begin{lemma}\label{Sect6_Tangential_Estimate_Lemma}
Assume the conditions are the same as those of Proposition $\ref{Sect1_Proposition_Regularity_Tension}$; then $v$ and $h$ satisfy the a priori estimate:
\begin{equation}\label{Sect6_Tangential_Estimate}
\begin{array}{ll}
\|v\|_{X^{m-1,1}}^2 + |h|_{X^{m-1,1}}^2 + \ensuremath{\epsilon} |h|_{X^{m-1,\frac{3}{2}}}^2 + \sigma |h|_{X^{m-1,2}}^2 + \ensuremath{\epsilon}\int\limits_0^t \|\nabla v\|_{X^{m-1,1}}^2 \,\mathrm{d}t \\[9pt]
\ensuremath{\lesssim} \|v_0\|_{X^{m-1,1}}^2 + |h_0|_{X^{m-1,1}}^2 + \ensuremath{\epsilon} |h_0|_{X^{m-1,\frac{3}{2}}}^2 + \sigma |h_0|_{X^{m-1,2}}^2 \\[6pt]\quad
+ \|\partial_z v\|_{L^4([0,T],X^{m-1})}^2 + \|\partial_t^m v\|_{L^4([0,T],L^2)}^2 + \|\partial_t^m h\|_{L^4([0,T],X^{0,1})}^2.
\end{array}
\end{equation}
\end{lemma}
\begin{proof}
For fixed $\sigma>0$, we do not need Alinhac's good unknown $(\ref{Sect1_Good_Unknown_1})$. Applying $\partial_t^{\ell}\mathcal{Z}^{\alpha}$ to $(\ref{Sect1_NS_Eq_ST})$, we find that $\partial_t^{\ell}\mathcal{Z}^{\alpha}v$ and $\partial_t^{\ell}\mathcal{Z}^{\alpha}q$ satisfy the following equations:
\begin{equation}\label{Sect6_Tangential_Estimate_2}
\left\{\begin{array}{ll}
\partial_t^{\varphi} \partial_t^{\ell}\mathcal{Z}^{\alpha}v + v\cdot\nabla^{\varphi} \partial_t^{\ell}\mathcal{Z}^{\alpha} v + \nabla^{\varphi} \partial_t^{\ell}\mathcal{Z}^{\alpha} q - 2\ensuremath{\epsilon}\nabla^{\varphi}\cdot \mathcal{S}^{\varphi} \partial_t^{\ell}\mathcal{Z}^{\alpha}v \\[8pt]\quad
= \partial_t^{\ell+1}\mathcal{Z}^{\alpha}\eta \partial_z^{\varphi}v + \partial_t^{\ell}\mathcal{Z}^{\alpha}\nabla \eta \cdot v \partial_z^{\varphi} v + \partial_t^{\ell}\mathcal{Z}^{\alpha}\nabla \eta \cdot \partial_z^{\varphi} q \\[8pt]\quad
+ 2\ensuremath{\epsilon}\nabla^{\varphi}\cdot [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \mathcal{S}^{\varphi}]v + 2\ensuremath{\epsilon}[\partial_t^{\ell}\mathcal{Z}^{\alpha}, \nabla^{\varphi}\cdot] \mathcal{S}^{\varphi}v + b.t., \\[10pt]
\nabla^{\varphi}\cdot \partial_t^{\ell}\mathcal{Z}^{\alpha}v = \partial_t^{\ell}\mathcal{Z}^{\alpha}\nabla\eta \cdot \partial_z^{\varphi}v + b.t.
, \\[10pt] \partial_t \partial_t^{\ell}\mathcal{Z}^{\alpha} h + v_y \cdot\nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} h = \partial_t^{\ell}\mathcal{Z}^{\alpha}v \cdot \ensuremath{\textbf{N}} + [\partial_t^{\ell}\mathcal{Z}^{\alpha}, v,\ensuremath{\textbf{N}}], \\[10pt] \partial_t^{\ell}\mathcal{Z}^{\alpha}q \ensuremath{\textbf{N}} -2\ensuremath{\epsilon} \mathcal{S}^{\varphi} \partial_t^{\ell}\mathcal{Z}^{\alpha}v\,\ensuremath{\textbf{N}} \\[8pt]\quad = g \partial_t^{\ell}\mathcal{Z}^{\alpha}h\ensuremath{\textbf{N}} - \sigma \nabla_y\cdot\frac{1}{\sqrt{1+|\nabla_y h|^2}} \big(\nabla_y\partial_t^{\ell}\mathcal{Z}^{\alpha} h - \frac{\nabla_y h(\nabla_y h\cdot \nabla_y\partial_t^{\ell}\mathcal{Z}^{\alpha}h)}{1+|\nabla_y h|^2}\big) \ensuremath{\textbf{N}} \\[10pt]\quad + 2\ensuremath{\epsilon} [\partial_t^{\ell}\mathcal{Z}^{\alpha},\mathcal{S}^{\varphi}] v\,\ensuremath{\textbf{N}} + (2\ensuremath{\epsilon} \mathcal{S}^{\varphi}v - (q-g h))\,\partial_t^{\ell}\mathcal{Z}^{\alpha}\ensuremath{\textbf{N}} \\[8pt]\quad - [\partial_t^{\ell}\mathcal{Z}^{\alpha},q-g h,\ensuremath{\textbf{N}}] +2\ensuremath{\epsilon} [\partial_t^{\ell}\mathcal{Z}^{\alpha},\mathcal{S}^{\varphi}v, \ensuremath{\textbf{N}}] -\sigma [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \ensuremath{\textbf{N}}]H \\[8pt]\quad - \sigma \nabla_y\cdot [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \nabla_y h, \frac{1}{\sqrt{1+|\nabla_y h|^2}}] \ensuremath{\textbf{N}}, \\[12pt] (\partial_t^{\ell}\mathcal{Z}^{\alpha}v, \partial_t^{\ell}\mathcal{Z}^{\alpha}h)|_{t=0} = (\partial_t^{\ell}\mathcal{Z}^{\alpha}v_0, \partial_t^{\ell}\mathcal{Z}^{\alpha}h_0). \end{array}\right. \end{equation} When $|\alpha|\geq 1, \, 1\leq \ell+|\alpha|\leq m, 0\leq\ell\leq m-1$, we develop the $L^2$ estimate $\partial_t^{\ell}\mathcal{Z}^{\alpha} v$ and $\partial_t^{\ell}\mathcal{Z}^{\alpha} h$, we get \begin{equation}\label{Sect6_Tangential_Estimate_3} \begin{array}{ll} \frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int\limits_{\mathbb{R}^3_{-}} |\partial_t^{\ell}\mathcal{Z}^{\alpha}v|^2 \,\mathrm{d}\mathcal{V}_t - \int\limits_{\mathbb{R}^3_{-}} \partial_t^{\ell}\mathcal{Z}^{\alpha}q \, \nabla^{\varphi}\cdot \partial_t^{\ell}\mathcal{Z}^{\alpha}v \,\mathrm{d}\mathcal{V}_t + 2\ensuremath{\epsilon} \int\limits_{\mathbb{R}^3_{-}} |\mathcal{S}^{\varphi}\partial_t^{\ell}\mathcal{Z}^{\alpha}v|^2 \,\mathrm{d}\mathcal{V}_t \\[14pt] \leq \int\limits_{\{z=0\}} (2\ensuremath{\epsilon} \mathcal{S}^{\varphi}\partial_t^{\ell}\mathcal{Z}^{\alpha}v\ensuremath{\textbf{N}} - \partial_t^{\ell}\mathcal{Z}^{\alpha}q\ensuremath{\textbf{N}})\cdot \partial_t^{\ell}\mathcal{Z}^{\alpha}v \mathrm{d}y + \|\partial_z v\|_{X^{m-1}}^2 + \|\nabla q\|_{X^{m-1}}^2 \\[11pt]\quad + |h|_{X^{m-1,2}}^2 + |\partial_t^m h|_{L^2}^2 + \text{b.t.} \\[7pt] \leq -\int\limits_{\{z=0\}} \big[ g \partial_t^{\ell}\mathcal{Z}^{\alpha}h - \sigma \nabla_y\cdot\frac{1}{\sqrt{1+|\nabla_y h|^2}} \big(\nabla_y\partial_t^{\ell}\mathcal{Z}^{\alpha} h - \frac{\nabla_y h(\nabla_y h\cdot \nabla_y\partial_t^{\ell}\mathcal{Z}^{\alpha}h)}{1+|\nabla_y h|^2}\big) \big]\ensuremath{\textbf{N}} \\[13pt]\quad \cdot \partial_t^{\ell}\mathcal{Z}^{\alpha}v \mathrm{d}y + \|\partial_z v\|_{X^{m-1}}^2 + \|\nabla q\|_{X^{m-1}}^2 + |h|_{X^{m-1,2}}^2 + |\partial_t^m h|_{L^2}^2 + \text{b.t.} \\[8pt] \leq \sigma\int\limits_{\{z=0\}} \nabla_y\cdot\frac{1}{\sqrt{1+|\nabla_y h|^2}} \big(\nabla_y\partial_t^{\ell}\mathcal{Z}^{\alpha} h - \frac{\nabla_y h(\nabla_y h\cdot \nabla_y\partial_t^{\ell}\mathcal{Z}^{\alpha}h)}{1+|\nabla_y h|^2}\big) \cdot(\partial_t 
\partial_t^{\ell}\mathcal{Z}^{\alpha} h \\[13pt]\quad + v_y \cdot\nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} h) \mathrm{d}y -\int\limits_{\{z=0\}} g \partial_t^{\ell}\mathcal{Z}^{\alpha}h \cdot(\partial_t \partial_t^{\ell}\mathcal{Z}^{\alpha} h + v_y \cdot\nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} h) \mathrm{d}y \\[10pt]\quad + \|\partial_z v\|_{X^{m-1}}^2 + \|\nabla q\|_{X^{m-1}}^2 + |h|_{X^{m-1,2}}^2 + |\partial_t^m h|_{L^2}^2 + \text{b.t.} \\[7pt] \leq - \sigma \int\limits_{\{z=0\}} \frac{1}{\sqrt{1+|\nabla_y h|^2}} \big(\nabla_y\partial_t^{\ell}\mathcal{Z}^{\alpha} h - \frac{\nabla_y h(\nabla_y h\cdot \nabla_y\partial_t^{\ell}\mathcal{Z}^{\alpha}h)}{1+|\nabla h|^2}\big) \cdot(\partial_t \nabla_y\partial_t^{\ell}\mathcal{Z}^{\alpha} h \\[13pt]\quad + v_y \cdot\nabla_y \nabla_y\partial_t^{\ell}\mathcal{Z}^{\alpha} h) \mathrm{d}y - \frac{g}{2}\frac{\mathrm{d}}{\mathrm{d}t} \int\limits_{\{z=0\}} |\partial_t^{\ell}\mathcal{Z}^{\alpha}h|^2 \mathrm{d}y + \|\partial_z v\|_{X^{m-1}}^2 \\[10pt]\quad + \|\nabla q\|_{X^{m-1}}^2 + |h|_{X^{m-1,2}}^2 + |\partial_t^m h|_{L^2}^2 + \text{b.t.} \end{array} \end{equation} \begin{equation*} \begin{array}{ll} \leq - \frac{g}{2}\frac{\mathrm{d}}{\mathrm{d}t} \int\limits_{\{z=0\}} |\partial_t^{\ell}\mathcal{Z}^{\alpha}h|^2 \mathrm{d}y - \frac{\sigma}{2}\frac{\mathrm{d}}{\mathrm{d}t} \int\limits_{\{z=0\}} \frac{1}{\sqrt{1+|\nabla_y h|^2}} \big(|\nabla_y\partial_t^{\ell}\mathcal{Z}^{\alpha} h|^2 \\[12pt]\quad - \frac{|\nabla_y h\cdot \nabla_y\partial_t^{\ell}\mathcal{Z}^{\alpha}h|^2}{1+|\nabla_y h|^2}\big) \mathrm{d}y + \|\partial_z v\|_{X^{m-1}}^2 + \|\nabla q\|_{X^{m-1}}^2 + |h|_{X^{m-1,2}}^2 + |\partial_t^m h|_{L^2}^2 + \text{b.t.} \end{array} \end{equation*} then \begin{equation}\label{Sect6_Tangential_Estimate_4} \begin{array}{ll} \frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int\limits_{\mathbb{R}^3_{-}} |\partial_t^{\ell}\mathcal{Z}^{\alpha}v|^2 \,\mathrm{d}\mathcal{V}_t + \frac{g}{2}\frac{\mathrm{d}}{\mathrm{d}t} \int\limits_{\{z=0\}} |\partial_t^{\ell}\mathcal{Z}^{\alpha}h|^2 \mathrm{d}y + 2\ensuremath{\epsilon} \int\limits_{\mathbb{R}^3_{-}} |\mathcal{S}^{\varphi}\partial_t^{\ell}\mathcal{Z}^{\alpha}v|^2 \,\mathrm{d}\mathcal{V}_t \\[10pt]\quad + \frac{\sigma}{2}\frac{\mathrm{d}}{\mathrm{d}t} \int\limits_{\{z=0\}} \frac{1}{\sqrt{1+|\nabla_y h|^2}} \big(|\nabla_y\partial_t^{\ell}\mathcal{Z}^{\alpha} h|^2 - \frac{|\nabla_y h\cdot \nabla_y\partial_t^{\ell}\mathcal{Z}^{\alpha}h|^2}{1+|\nabla_y h|^2}\big) \mathrm{d}y \\[12pt] \leq \|\partial_z v\|_{X^{m-1}}^2 + \|\nabla q\|_{X^{m-1}}^2 + |h|_{X^{m-1,2}}^2 + |\partial_t^m h|_{L^2}^2 + \text{b.t.} \end{array} \end{equation} Integrate $(\ref{Sect6_Tangential_Estimate_4})$ in time, apply the integral form of Gronwall's inequality, we have \begin{equation}\label{Sect6_Tangential_Estimate_5} \begin{array}{ll} \|\partial_t^{\ell}\mathcal{Z}^{\alpha}v\|^2 + |\partial_t^{\ell}\mathcal{Z}^{\alpha} h|^2 + \ensuremath{\epsilon} |\partial_t^{\ell}\mathcal{Z}^{\alpha} h|_{\frac{1}{2}}^2 + \frac{\sigma}{4} |\partial_t^{\ell}\mathcal{Z}^{\alpha} h|_{1}^2 + \ensuremath{\epsilon}\int\limits_0^t \|\nabla \partial_t^{\ell}\mathcal{Z}^{\alpha}v\|^2 \,\mathrm{d}t \\[9pt] \ensuremath{\lesssim} \|v_0\|_{X^{m-1,1}}^2 + |h_0|_{X^{m-1,1}}^2 + \ensuremath{\epsilon} |h_0|_{X^{m-1,\frac{3}{2}}}^2 + \sigma |h_0|_{X^{m-1,2}}^2 \\[7pt]\quad + \int\limits_0^T \|\partial_z v\|_{X^{m-2,1}}^2 + \|\nabla q\|_{X^{m-2,1}}^2 + |\partial_t^m h|_{L^2}^2\,\mathrm{d}t. 
\end{array} \end{equation} Note that we use the following inequality to control the surface tension term: \begin{equation}\label{Sect6_Tangential_Estimate_6} \begin{array}{ll} \frac{\sigma}{2}\int\limits_{\{z=0\}} \frac{1}{\sqrt{1+|\nabla_y h|^2}} \big(|\nabla_y\partial_t^{\ell}\mathcal{Z}^{\alpha} h|^2 - \frac{|\nabla_y h\cdot \nabla_y\partial_t^{\ell}\mathcal{Z}^{\alpha}h|^2}{1+|\nabla_y h|^2}\big) \mathrm{d}y \\[15pt] \geq \frac{\sigma}{2}\int\limits_{\{z=0\}}\frac{1}{2(1+|\nabla_y h|^2)^{\frac{3}{2}}} |\nabla_y\partial_t^{\ell}\mathcal{Z}^{\alpha} h|^2 \,\mathrm{d}y \geq \frac{\sigma}{4}\int\limits_{\{z=0\}}|\nabla_y\partial_t^{\ell}\mathcal{Z}^{\alpha} h|^2 \,\mathrm{d}y. \end{array} \end{equation} When $|\alpha|=0$ and $0\leq\ell\leq m-1$, we have no bounds of $q$ and $\partial_t^{\ell} q$, so we can not apply the integration by parts to the pressure terms. The divergence free condition and the dynamical boundary condition will not be used here. Then \begin{equation}\label{Sect6_Tangential_Estimate_7} \begin{array}{ll} \frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int\limits_{\mathbb{R}^3_{-}} |\partial_t^{\ell} v|^2 \,\mathrm{d}\mathcal{V}_t + 2\ensuremath{\epsilon} \int\limits_{\mathbb{R}^3_{-}} |\mathcal{S}^{\varphi}\partial_t^{\ell} v|^2 \,\mathrm{d}\mathcal{V}_t \\[12pt] \leq - \int\limits_{\mathbb{R}^3_{-}} \partial_t^{\ell}\nabla^{\varphi} q \cdot \partial_t^{\ell} v \,\mathrm{d}\mathcal{V}_t + \int\limits_{\{z=0\}} 2\ensuremath{\epsilon} \mathcal{S}^{\varphi}\partial_t^{\ell} v\ensuremath{\textbf{N}} \cdot \partial_t^{\ell} v \mathrm{d}y + \|\partial_z v\|_{X^{m-2}}^2 \\[6pt]\quad + \|\partial_z q\|_{X^{m-2}}^2 + \sum\limits_{\ell=0}^{m-1}|\partial_t^{\ell+1} h|_{L^2}^2 + \text{b.t.} \\[11pt] \ensuremath{\lesssim} \|\nabla q\|_{X^{m-1}}^2 + \|\partial_t^{\ell} v\|_{L^2}^2 + \|\partial_z v\|_{X^{m-2}}^2 + \ensuremath{\epsilon}\|\nabla_y\partial_t^{\ell} v\|_{X^{0,1}}^2 + \ensuremath{\epsilon}\|\partial_z\nabla_y\partial_t^{\ell} v\|_{L^2}^2 \\[4pt]\quad + \sum\limits_{\ell=0}^m|\partial_t^{\ell} h|_{L^2}^2 + \text{b.t.} \end{array} \end{equation} Combining $(\ref{Sect6_Tangential_Estimate_7})$ and $(\ref{Sect6_Height_Estimates_Lemma_Eq})$, we have \begin{equation}\label{Sect6_Tangential_Estimate_8} \begin{array}{ll} \|\partial_t^{\ell} v\|^2 + \|\partial_t^{\ell} h\|^2 + \ensuremath{\epsilon}\int\limits_0^t \|\nabla \partial_t^{\ell} v\|^2 \,\mathrm{d}t \ensuremath{\lesssim} \|v_0\|_{X^{m-1}}^2 + \|\nabla q\|_{L^4([0,T],X^{m-1})}^2 \\[6pt]\quad + \|\partial_z v\|_{L^4([0,T],X^{m-1})}^2 + |\partial_t^m h|_{L^4([0,T],L^2)}^2 + b.t. \end{array} \end{equation} Sum $\ell$ and $\alpha$. By $(\ref{Sect6_Tangential_Estimate_5}),(\ref{Sect6_Tangential_Estimate_8})$ and Lemma $\ref{Sect6_Height_Estimates_Lemma}$, we get the estimate $(\ref{Sect6_Tangential_Estimate})$. Thus, Lemma $\ref{Sect6_Tangential_Estimate_Lemma}$ is proved. \end{proof} In order to close our estimates of tangential derivatives, we need to bound $\|\partial_t^m v\|_{L^4([0,T],L^2)}^2$ and $\|\partial_t^m h\|_{L^4([0,T],X^{0,1})}^2$, which appear in Lemma $\ref{Sect6_Tangential_Estimate_Lemma}$. Thus, we estimate $\partial_t^m v$ and $\partial_t^m h$. 
\begin{lemma}\label{Sect6_TimeDer_Estimate_Lemma} $\partial_t^m v, \partial_t^m h, \partial_t^{m+1}h$ satisfies the following estimate: \begin{equation}\label{Sect6_TimeDer_Estimate} \begin{array}{ll} \|\partial_t^m v\|_{L^4([0,T],L^2)}^2 + |\partial_t^m h|_{L^4([0,T],X^{0,1})}^2 + |\partial_t^{m+1}\nabla h|_{L^4([0,T],L^2)}^2 \\[6pt] \ensuremath{\lesssim} \|\partial_t^m v_0\|_{L^2}^2 + g|\partial_t^m h_0|_{L^2}^2 + \sigma|\partial_t^m\nabla h_0|_{L^2}^2 + \|\partial_z\partial_t^{m-1} v_0\|_{L^2}^2 \\[6pt]\quad + \|\partial_z v\|_{L^4([0,T],X^{m-1})}^2 + \text{b.t.} \end{array} \end{equation} \end{lemma} \begin{proof} In $(\ref{Sect6_Tangential_Estimate_2})$, let $\alpha=0$ and $\ell=m$. Then multiply with $\partial_t^m v$, integrate in $\mathbb{R}^3_{-}$, then we get \begin{equation}\label{Sect6_TimeDer_Estimate_1} \begin{array}{ll} \frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int\limits_{\mathbb{R}^3_{-}} |\partial_t^m v|^2 \,\mathrm{d}\mathcal{V}_t - \int\limits_{\mathbb{R}^3_{-}} \partial_t^m q \, \nabla^{\varphi}\cdot \partial_t^m v \,\mathrm{d}\mathcal{V}_t + 2\ensuremath{\epsilon} \int\limits_{\mathbb{R}^3_{-}} |\mathcal{S}^{\varphi}\partial_t^m v|^2 \,\mathrm{d}\mathcal{V}_t \\[14pt] \leq \int\limits_{\{z=0\}} (2\ensuremath{\epsilon} \mathcal{S}^{\varphi}\partial_t^m v\ensuremath{\textbf{N}} - \partial_t^m q\ensuremath{\textbf{N}})\cdot \partial_t^m v \mathrm{d}y + \|\partial_z v\|_{X^{m-1}}^2 + \|\nabla q\|_{X^{m-1}}^2 \\[11pt]\quad + |h|_{X^{m-1,2}}^2 + |\partial_t^m h|_{X^{0,1}}^2 + |\partial_t^{m+1} h|_{L^2}^2 + \text{b.t.} \\[7pt] \leq -\int\limits_{\{z=0\}} \big[ g \partial_t^m h - \sigma \nabla_y\cdot\frac{1}{\sqrt{1+|\nabla_y h|^2}} \big(\nabla_y\partial_t^m h - \frac{\nabla_y h(\nabla_y h\cdot \nabla_y\partial_t^m h)}{1+|\nabla_y h|^2}\big) \big]\ensuremath{\textbf{N}} \\[13pt]\quad \cdot \partial_t^m v \mathrm{d}y + \|\partial_z v\|_{X^{m-1}}^2 + \|\nabla q\|_{X^{m-1}}^2 + |h|_{X^{m-1,2}}^2 + |\partial_t^m h|_{X^{0,1}}^2 \\[6pt]\quad + |\partial_t^{m+1} h|_{L^2}^2 + \text{b.t.} \\[6pt] \leq \sigma\int\limits_{\{z=0\}} \nabla_y\cdot\frac{1}{\sqrt{1+|\nabla_y h|^2}} \big(\nabla_y\partial_t^m h - \frac{\nabla_y h(\nabla_y h\cdot \nabla_y\partial_t^m h)}{1+|\nabla_y h|^2}\big) \cdot(\partial_t \partial_t^m h \\[13pt]\quad + v_y \cdot\nabla_y \partial_t^m h) \mathrm{d}y -\int\limits_{\{z=0\}} g \partial_t^m h \cdot(\partial_t \partial_t^m h + v_y \cdot\nabla_y \partial_t^m h) \mathrm{d}y \\[10pt]\quad + \|\partial_z v\|_{X^{m-1}}^2 + \|\nabla q\|_{X^{m-1}}^2 + |h|_{X^{m-1,2}}^2 + |\partial_t^m h|_{X^{0,1}}^2 + |\partial_t^{m+1} h|_{L^2}^2 + \text{b.t.} \\[7pt] \leq - \sigma \int\limits_{\{z=0\}} \frac{1}{\sqrt{1+|\nabla_y h|^2}} \big(\nabla_y\partial_t^m h - \frac{\nabla_y h(\nabla_y h\cdot \nabla_y\partial_t^m h)}{1+|\nabla_y h|^2}\big) \cdot(\partial_t \nabla_y\partial_t^m h \\[13pt]\quad + v_y \cdot\nabla_y \nabla_y\partial_t^m h) \mathrm{d}y - \frac{g}{2}\frac{\mathrm{d}}{\mathrm{d}t} \int\limits_{\{z=0\}} |\partial_t^m h|^2 \mathrm{d}y + \|\partial_z v\|_{X^{m-1}}^2 \\[10pt]\quad + \|\nabla q\|_{X^{m-1}}^2 + |h|_{X^{m-1,2}}^2 + |\partial_t^m h|_{X^{0,1}}^2 + |\partial_t^{m+1} h|_{L^2}^2 + \text{b.t.} \\[8pt] \leq - \frac{g}{2}\frac{\mathrm{d}}{\mathrm{d}t} \int\limits_{\{z=0\}} |\partial_t^m h|^2 \mathrm{d}y - \frac{\sigma}{2}\frac{\mathrm{d}}{\mathrm{d}t} \int\limits_{\{z=0\}} \frac{1}{\sqrt{1+|\nabla_y h|^2}} \big(|\nabla_y\partial_t^m h|^2 - \frac{|\nabla_y h\cdot \nabla_y\partial_t^m h|^2}{1+|\nabla_y h|^2}\big) \mathrm{d}y \\[12pt]\quad + \|\partial_z v\|_{X^{m-1}}^2 + 
\|\nabla q\|_{X^{m-1}}^2 + |h|_{X^{m-1,2}}^2 + |\partial_t^m h|_{X^{0,1}}^2 + |\partial_t^{m+1} h|_{L^2}^2 + \text{b.t.} \end{array} \end{equation} The same as \cite{Wang_Xin_2015}, we will integrate in time twice, we get the $L^4([0,T],L^2)$ type estimate. After the first integration in time, we have \begin{equation}\label{Sect6_TimeDer_Estimate_3} \begin{array}{ll} \|\partial_t^m v\|_{L^2}^2 + g|\partial_t^m h|_{L^2}^2 + \frac{\sigma}{2}|\partial_t^m\nabla h|_{L^2}^2 + 4\ensuremath{\epsilon} \int\limits_{\mathbb{R}^3_{-}} |\mathcal{S}^{\varphi}\partial_t^m v|^2 \,\mathrm{d}\mathcal{V}_t \\[6pt] \ensuremath{\lesssim} \|\partial_t^m v_0\|_{L^2}^2 + g|\partial_t^m h_0|_{L^2}^2 + \sigma|\partial_t^m\nabla h_0|_{L^2}^2 + \int\limits_0^t\int\limits_{\mathbb{R}^3_{-}} \partial_t^m q \, \nabla^{\varphi}\cdot \partial_t^m v \,\mathrm{d}\mathcal{V}_t\mathrm{d}t \end{array} \end{equation} \begin{equation*} \begin{array}{ll} \quad + \int\limits_0^t\|\partial_z v\|_{X^{m-1}}^2 + \|\nabla q\|_{X^{m-1}}^2 + |h|_{X^{m-1,2}}^2 + |\partial_t^m h|_{X^{0,1}}^2 + |\partial_t^{m+1} h|_{L^2}^2\mathrm{d}t + \text{b.t.} \\[6pt] \ensuremath{\lesssim} \|\partial_t^m v_0\|_{L^2}^2 + g|\partial_t^m h_0|_{L^2}^2 + \sigma|\partial_t^m\nabla h_0|_{L^2}^2 + \int\limits_0^t\int\limits_{\mathbb{R}^3_{-}} \partial_t^m q \, \nabla^{\varphi}\cdot \partial_t^m v \,\mathrm{d}\mathcal{V}_t\mathrm{d}t \\[6pt]\quad + \|\partial_z v\|_{L^4([0,T],X^{m-1})}^2 + |\partial_t^m h|_{L^4([0,T],X^{0,1})}^2 + |\partial_t^m v|_{L^4([0,T],L^2)}^2 + \text{b.t.}. \end{array} \end{equation*} Now we deal with the pressure term: \begin{equation}\label{Sect6_TimeDer_Estimate_4} \begin{array}{ll} \int\limits_0^t\int\limits_{\mathbb{R}^3_{-}} \partial_t^m q \, \nabla^{\varphi}\cdot \partial_t^m v \,\mathrm{d}\mathcal{V}_t\mathrm{d}t = -\int\limits_0^t\int\limits_{\mathbb{R}^3_{-}} \partial_t^m q \, [\partial_t^m, \nabla^{\varphi}\cdot] v \,\mathrm{d}\mathcal{V}_t\mathrm{d}t \\[7pt] = \sum\limits_{\ell_1>0}\int\limits_0^t \int\limits_{\mathbb{R}^3_{-}} \partial_t^m q \, \big(\partial_z \varphi\partial_t^{\ell_1}(\frac{\ensuremath{\textbf{N}}}{\partial_z \varphi}) \big) \cdot \partial_t^{\ell_2}\partial_z v \,\mathrm{d}x\mathrm{d}t \\[11pt] = \sum\limits_{\ell_1>0}\int\limits_0^t\frac{\mathrm{d}}{\mathrm{d}t}\int\limits_{\mathbb{R}^3_{-}} \partial_t^{m-1} q \, \big(\partial_z \varphi\partial_t^{\ell_1}(\frac{\ensuremath{\textbf{N}}}{\partial_z \varphi}) \big) \cdot \partial_t^{\ell_2}\partial_z v \,\mathrm{d}x\mathrm{d}t \\[11pt]\quad - \sum\limits_{\ell_1>0}\int\limits_0^t\int\limits_{\mathbb{R}^3_{-}} \partial_t^{m-1} q \, \partial_t\big(\partial_z \varphi\partial_t^{\ell_1}(\frac{\ensuremath{\textbf{N}}}{\partial_z \varphi}) \big) \cdot \partial_t^{\ell_2}\partial_z v \,\mathrm{d}x\mathrm{d}t \\[11pt]\quad - \sum\limits_{\ell_1>0}\int\limits_0^t\int\limits_{\mathbb{R}^3_{-}} \partial_t^{m-1} q \, \big(\partial_z \varphi\partial_t^{\ell_1}(\frac{\ensuremath{\textbf{N}}}{\partial_z \varphi}) \big) \cdot \partial_t^{\ell_2+1}\partial_z v \,\mathrm{d}x\mathrm{d}t \\[11pt] = \sum\limits_{\ell_1>0}\int\limits_{\{z=0\}} \partial_t^{m-1} q \, \big(\partial_z \varphi\partial_t^{\ell_1}(\frac{\ensuremath{\textbf{N}}}{\partial_z \varphi}) \big) \cdot \partial_t^{\ell_2}v \,\mathrm{d}y \\[11pt]\quad -\sum\limits_{\ell_1>0}\int\limits_{\mathbb{R}^3_{-}} \partial_z\big[\partial_t^{m-1} q \, \big(\partial_z \varphi\partial_t^{\ell_1}(\frac{\ensuremath{\textbf{N}}}{\partial_z \varphi}) \big)\big] \cdot \partial_t^{\ell_2} v \,\mathrm{d}x \\[11pt]\quad - 
\sum\limits_{\ell_1>0} \int\limits_{\{z=0\}} \partial_t^{m-1} q|_{t=0} \, \big(\partial_z \varphi\partial_t^{\ell_1}(\frac{\ensuremath{\textbf{N}}}{\partial_z \varphi})|_{t=0} \big) \cdot \partial_t^{\ell_2} v|_{t=0} \,\mathrm{d}y \\[11pt]\quad + \sum\limits_{\ell_1>0} \int\limits_{\mathbb{R}^3_{-}} \partial_z\big[\partial_t^{m-1} q|_{t=0} \, \big(\partial_z \varphi\partial_t^{\ell_1}(\frac{\ensuremath{\textbf{N}}}{\partial_z \varphi}) \big)|_{t=0}\big] \cdot \partial_t^{\ell_2} v|_{t=0} \,\mathrm{d}x \\[11pt]\quad - \sum\limits_{\ell_1>0}\int\limits_0^t\int\limits_{\{z=0\}} \partial_t^{m-1} q \, \partial_t\big(\partial_z \varphi\partial_t^{\ell_1}(\frac{\ensuremath{\textbf{N}}}{\partial_z \varphi}) \big) \cdot \partial_t^{\ell_2} v \,\mathrm{d}y\mathrm{d}t \\[11pt]\quad + \sum\limits_{\ell_1>0}\int\limits_0^t\int\limits_{\mathbb{R}^3_{-}} \partial_z\big[\partial_t^{m-1} q \, \partial_t\big(\partial_z \varphi\partial_t^{\ell_1}(\frac{\ensuremath{\textbf{N}}}{\partial_z \varphi}) \big)\big] \cdot \partial_t^{\ell_2}v \,\mathrm{d}x\mathrm{d}t \\[11pt]\quad - \sum\limits_{\ell_1>0} \int\limits_0^t\int\limits_{\{z=0\}} \partial_t^{m-1} q \, \big(\partial_z \varphi\partial_t^{\ell_1}(\frac{\ensuremath{\textbf{N}}}{\partial_z \varphi}) \big) \cdot \partial_t^{\ell_2+1} v \,\mathrm{d}y\mathrm{d}t \\[11pt]\quad + \sum\limits_{\ell_1>0} \int\limits_0^t\int\limits_{\mathbb{R}^3_{-}} \partial_z\big[\partial_t^{m-1} q \, \big(\partial_z \varphi\partial_t^{\ell_1}(\frac{\ensuremath{\textbf{N}}}{\partial_z \varphi}) \big)\big] \cdot \partial_t^{\ell_2+1} v \,\mathrm{d}x\mathrm{d}t . \end{array} \end{equation} $(\ref{Sect6_TimeDer_Estimate_4})$ contains $\partial_z\big[\partial_t^{m-1} q f \big]$, where $f$ represents the terms $\big(\partial_z \varphi\partial_t^{\ell_1}(\frac{\ensuremath{\textbf{N}}}{\partial_z \varphi}) \big)$, $\big(\partial_z \varphi\partial_t^{\ell_1}(\frac{\ensuremath{\textbf{N}}}{\partial_z \varphi}) \big)|_{t=0}$, $\partial_t\big(\partial_z \varphi\partial_t^{\ell_1}(\frac{\ensuremath{\textbf{N}}}{\partial_z \varphi}) \big)$. By using Hardy's inequality $(\ref{Sect1_HardyIneq})$, we get \begin{equation}\label{Sect6_TimeDer_Estimate_5} \begin{array}{ll} \int\limits_{\mathbb{R}^3_{-}}\partial_z\big[\partial_t^{m-1} q f \big]\cdot \partial_t^{\ell_2}v \,\mathrm{d}x \\[7pt] = \int\limits_{\mathbb{R}^3_{-}}(\partial_z\partial_t^{m-1} q) f \cdot \partial_t^{\ell_2}v \,\mathrm{d}x + \int\limits_{\mathbb{R}^3_{-}}\frac{1}{1-z}\partial_t^{m-1} q[(1-z)\partial_z f]\cdot \partial_t^{\ell_2}v \,\mathrm{d}x \end{array} \end{equation} \begin{equation*} \begin{array}{ll} \ensuremath{\lesssim} \|\partial_z \partial_t^{m-1} q\|_{L^2}^2 + \|\frac{1}{1-z}\partial_t^{m-1} q\|_{L^2}^2 + \|(1-z)\partial_z f\|_{L^2}^2 + \|\partial_t^{\ell_2}v\|_{L^2}^2 \\[7pt] \ensuremath{\lesssim} \|\partial_z \partial_t^{m-1} q\|_{L^2}^2 + \big|\partial_t^{m-1} q|_{z=0}\big|_{L^2}^2 + |\partial_t^m h|_{X^{0,1}}^2 + \|\partial_t^{m-1}v\|_{L^2}^2, \hspace{1.3cm} \end{array} \end{equation*} where $(1-z)\partial_z f \sim (1-z)\partial_z \psi\ast \partial_t^{\ell} h$ and $(1-z)\partial_z\psi \in L^1(\mathrm{d}z)$. 
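For the reader's convenience, we sketch the Hardy step just used; this is a standard computation, assuming $(\ref{Sect1_HardyIneq})$ denotes the usual one-dimensional Hardy inequality on the half-line. Writing
$\partial_t^{m-1} q(y,z) = \partial_t^{m-1} q(y,0) - \int_z^0 \partial_z\partial_t^{m-1} q(y,s)\,\mathrm{d}s$ for $z\leq 0$, we have
\begin{equation*}
\begin{array}{ll}
\Big\|\frac{1}{1-z}\partial_t^{m-1} q\Big\|_{L^2(\mathbb{R}^3_{-})} \leq \Big\|\frac{1}{1-z}\,\partial_t^{m-1} q|_{z=0}\Big\|_{L^2(\mathbb{R}^3_{-})} + \Big\|\frac{1}{1-z}\int\limits_z^0 |\partial_z\partial_t^{m-1} q|\,\mathrm{d}s\Big\|_{L^2(\mathbb{R}^3_{-})} \\[10pt]
\hspace{3.1cm} \ensuremath{\lesssim} \big|\partial_t^{m-1} q|_{z=0}\big|_{L^2} + \|\partial_z\partial_t^{m-1} q\|_{L^2},
\end{array}
\end{equation*}
since $\int_{-\infty}^0 (1-z)^{-2}\,\mathrm{d}z = 1$ and the second term is controlled by the one-dimensional Hardy inequality in the $z$ variable. This is how the weighted term in $(\ref{Sect6_TimeDer_Estimate_5})$ is absorbed into the quantity $\mathcal{I}_8$ defined below.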
Denote
\begin{equation}\label{Sect6_TimeDer_Estimate_6}
\begin{array}{ll}
\mathcal{I}_8 := \|\partial_z \partial_t^{m-1} q\|_{L^2}^2 + \big|\partial_t^{m-1} q|_{z=0}\big|_{L^2}^2 + |\partial_t^m h|_{X^{0,1}}^2 \\[5pt]\hspace{0.94cm}
+ \|\partial_t^{m-1}v\|_{L^2}^2 + \big|\partial_t^{m-1}v|_{z=0}\big|_{L^2}^2 \\[8pt]\hspace{0.53cm}
\ensuremath{\lesssim} \|\partial_t^m v\|_{L^2}^2 + \|\partial_z\partial_t^{m-1} v\|_{L^2}^2 + |\partial_t^m h|_{X^{0,1}}^2 + b.t.
\end{array}
\end{equation}
Plugging $(\ref{Sect6_TimeDer_Estimate_5})$ into $(\ref{Sect6_TimeDer_Estimate_4})$, we get
\begin{equation}\label{Sect6_TimeDer_Estimate_7}
\begin{array}{ll}
\int\limits_0^t\int\limits_{\mathbb{R}^3_{-}} \partial_t^m q \, \nabla^{\varphi}\cdot \partial_t^m v \,\mathrm{d}\mathcal{V}_t\mathrm{d}t \ensuremath{\lesssim} \mathcal{I}_8|_{t=0} + \mathcal{I}_8 + \int\limits_0^T \mathcal{I}_8 \,\mathrm{d}s \\[10pt]
\ensuremath{\lesssim} \|\partial_t^m v_0\|_{L^2}^2 + \|\partial_z\partial_t^{m-1} v_0\|_{L^2}^2 + |\partial_t^m h_0|_{X^{0,1}}^2 + \|\partial_t^m v\|_{L^2}^2 + \|\partial_z\partial_t^{m-1} v\|_{L^2}^2 \\[6pt]\quad
+ |\partial_t^m h|_{X^{0,1}}^2 + \|\partial_t^m v\|_{L^4([0,T],L^2)}^2 + \|\partial_z\partial_t^{m-1} v\|_{L^4([0,T],L^2)}^2 + |\partial_t^m h|_{L^4([0,T],L^2)}^2.
\end{array}
\end{equation}
By $(\ref{Sect6_TimeDer_Estimate_3})$ and $(\ref{Sect6_TimeDer_Estimate_7})$, we get
\begin{equation}\label{Sect6_TimeDer_Estimate_8}
\begin{array}{ll}
\|\partial_t^m v\|_{L^2}^2 + g|\partial_t^m h|_{L^2}^2 + \frac{\sigma}{2}|\partial_t^m\nabla_y h|_{L^2}^2 \\[6pt]
\ensuremath{\lesssim} \|\partial_t^m v_0\|_{L^2}^2 + g|\partial_t^m h_0|_{L^2}^2 + \sigma|\partial_t^m\nabla_y h_0|_{L^2}^2 + \|\partial_z\partial_t^{m-1} v_0\|_{L^2}^2 + \|\partial_t^m v\|_{L^2}^2 \\[6pt]\quad
+ \|\partial_z\partial_t^{m-1} v\|_{L^2}^2 + |\partial_t^m h|_{X^{0,1}}^2 + \|\partial_t^m v\|_{L^4([0,T],L^2)}^2 + \|\partial_z v\|_{L^4([0,T],X^{m-1})}^2 \\[6pt]\quad
+ |\partial_t^m h|_{L^4([0,T],X^{0,1})}^2 + \text{b.t.}
\end{array}
\end{equation}
Squaring $(\ref{Sect6_TimeDer_Estimate_8})$, integrating in time again, and applying the integral form of Gronwall's inequality, we have
\begin{equation}\label{Sect6_TimeDer_Estimate_9}
\begin{array}{ll}
\|\partial_t^m v\|_{L^4([0,T],L^2)}^2 + g|\partial_t^m h|_{L^4([0,T],L^2)}^2 + \frac{\sigma}{2}|\partial_t^m\nabla_y h|_{L^4([0,T],L^2)}^2 \\[6pt]
\ensuremath{\lesssim} \|\partial_t^m v_0\|_{L^2}^2 + g|\partial_t^m h_0|_{L^2}^2 + \sigma|\partial_t^m\nabla_y h_0|_{L^2}^2 + \|\partial_z\partial_t^{m-1} v_0\|_{L^2}^2 \\[6pt]\quad
+ \|\partial_z v\|_{L^4([0,T],X^{m-1})}^2 + \text{b.t.}
\end{array}
\end{equation}
Thus, Lemma $\ref{Sect6_TimeDer_Estimate_Lemma}$ is proved.
\end{proof}
As in the $\sigma=0$ case, we have the following estimates of the normal derivatives:
\begin{equation}\label{Sect6_NormalDer_Estimates}
\begin{array}{ll}
\partial_z v,\, \omega \in L^4([0,T],X^{m-1}) \cap L^{\infty}([0,T],X^{m-2}).
\end{array}
\end{equation}
Coupling Lemmas $\ref{Sect6_Pressure_Estimates}, \ref{Sect6_Tangential_Estimate_Lemma}, \ref{Sect6_TimeDer_Estimate_Lemma}$ with the normal derivative estimates $(\ref{Sect6_NormalDer_Estimates})$, it is standard to prove Proposition $\ref{Sect1_Proposition_Regularity_Tension}$.
\section{Convergence Rates of Inviscid Limit for Fixed $\sigma>0$}
In this section, we estimate the convergence rates of the inviscid limit for the $\sigma>0$ case.
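As in the previous sections, the closing arguments below repeatedly invoke the integral form of Gronwall's inequality; we recall the standard statement we have in mind, stated here only for convenience:
\begin{equation*}
f(t) \leq a + C\int\limits_0^t f(s)\,\mathrm{d}s \ \text{ for all } t\in[0,T] \quad \Longrightarrow \quad f(t) \leq a\, e^{Ct} \ \text{ for all } t\in[0,T],
\end{equation*}
for a nonnegative integrable $f$ and constants $a, C\geq 0$. It is applied, for instance, with $f(t) = \|\hat{v}(t)\|_{X^{k-1,1}}^2 + |\hat{h}(t)|_{X^{k-1,1}}^2$, the constant $a$ collecting the initial data and the $O(\ensuremath{\epsilon})$ remainders.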
We denote $\hat{v} =v^{\ensuremath{\epsilon}} -v,\, \hat{q} =q^{\ensuremath{\epsilon}} -q,\, \hat{h} =h^{\ensuremath{\epsilon}} -h$, and we denote the $i$-th components of $v^{\ensuremath{\epsilon}}$ and $v$ by $v^{\ensuremath{\epsilon},i}$ and $v^i$, respectively. Then $\hat{v},\hat{h},\hat{q}$ satisfy the following equations:
\begin{equation}\label{Sect7_DifferenceEq_1}
\left\{\begin{array}{ll}
\partial_t^{\varphi^{\ensuremath{\epsilon}}}\hat{v} + v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} \hat{v} + \nabla^{\varphi^{\ensuremath{\epsilon}}} \hat{q} - 2\ensuremath{\epsilon} \nabla^{\varphi^{\ensuremath{\epsilon}}} \cdot\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \hat{v} \\[6pt]\quad
= \partial_z^{\varphi} v \partial_t^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta} + v^{\ensuremath{\epsilon}}\cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta}\, \partial_z^{\varphi} v - \hat{v}\cdot\nabla^{\varphi} v + \partial_z^{\varphi} q\nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta} + \ensuremath{\epsilon} \triangle^{\varphi^{\ensuremath{\epsilon}}} v, \quad x\in\mathbb{R}^3_{-}, \\[9pt]
\nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot \hat{v} = \partial_z^{\varphi}v \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta}, \hspace{6.42cm} x\in\mathbb{R}^3_{-},\\[7pt]
\partial_t \hat{h} + v_y\cdot \nabla \hat{h} = \hat{v}\cdot\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}, \hspace{6.2cm} \{z=0\}, \\[8pt]
\hat{q}\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} -2\ensuremath{\epsilon} \mathcal{S}^{\varphi^\ensuremath{\epsilon}} \hat{v} \,\ensuremath{\textbf{N}}^\ensuremath{\epsilon} = g \hat{h} \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} -\sigma \nabla_y \cdot \big( \mathfrak{H}_1\nabla_y\hat{h} \\[5pt]\quad
+ \mathfrak{H}_2 \nabla_y\hat{h}\cdot\nabla_y(h^{\ensuremath{\epsilon}}+h) \nabla_y (h^{\ensuremath{\epsilon}} +h)\big)\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} + 2\ensuremath{\epsilon} \mathcal{S}^{\varphi^\ensuremath{\epsilon}}v\,\ensuremath{\textbf{N}}^\ensuremath{\epsilon}, \hspace{1.47cm} \{z=0\}, \\[9pt]
(\hat{v},\hat{h})|_{t=0} = (v_0^\ensuremath{\epsilon} -v_0,h_0^\ensuremath{\epsilon} -h_0),
\end{array}\right.
\end{equation}
where the quantities $\mathfrak{H}_1$ and $\mathfrak{H}_2$ are defined as
\begin{equation}\label{Sect7_DifferenceEq_2}
\begin{array}{ll}
\mathfrak{H}_1 = \frac{1}{2\sqrt{1+|\nabla_y h^{\ensuremath{\epsilon}}|^2}}+ \frac{1}{2\sqrt{1+|\nabla_y h|^2}}, \\[12pt]
\mathfrak{H}_2 = \frac{-1} {2\sqrt{1+|\nabla_y h^{\ensuremath{\epsilon}}|^2}\sqrt{1+|\nabla_y h|^2}(\sqrt{1+|\nabla_y h^{\ensuremath{\epsilon}}|^2} + \sqrt{1+|\nabla_y h|^2})}.
\end{array}
\end{equation}
Since the estimates for the normal derivatives are the same as in the $\sigma=0$ case, we focus on the estimates of the pressure and tangential derivatives. The following lemma concerns the estimate of $\nabla\hat{q} = \nabla q^{\ensuremath{\epsilon}} -\nabla q$:
\begin{lemma}\label{Sect7_Pressure_Lemma}
Assume $0\leq s\leq k-1$ and $k \leq m-1$; then the pressure difference $\hat{q}$ has the following gradient estimate:
\begin{equation}\label{Sect7_Pressure_Lemma_Eq}
\begin{array}{ll}
\|\nabla \hat{q}\|_{X^s} \ensuremath{\lesssim} \|\hat{v}\|_{X^{s,1}} + \|\partial_z\hat{v}\|_{X^s} + \|\partial_t^{s+1}\hat{v}\|_{L^2} + |\partial_t^s\hat{h}|_{X^{0,\frac{1}{2}}} + |\hat{h}|_{X^{s,\frac{3}{2}}} +O(\ensuremath{\epsilon}).
\end{array} \end{equation} \end{lemma} \begin{proof} The Navier-Stokes pressure $q^{\ensuremath{\epsilon}}$ satisfies the elliptic equations $(\ref{Sect1_Pressure_Neumann})$, while the Euler pressure satisfies the following equations: \begin{equation}\label{Sect1_Pressure_Neumann_EulerEq} \left\{\begin{array}{ll} \triangle^{\varphi} q = -\partial_j^{\varphi} v^i \partial_i^{\varphi} v^j, \\[6pt] \nabla^{\varphi} q\cdot\ensuremath{\textbf{N}}|_{z=0} = - \partial_t^{\varphi} v\cdot\ensuremath{\textbf{N}} - v\cdot\nabla^{\varphi} v\cdot\ensuremath{\textbf{N}}, \end{array}\right. \end{equation} Then the difference between boundary values is \begin{equation}\label{Sect7_Pressure_Estimates_1} \begin{array}{ll} \nabla^{\varphi^{\ensuremath{\epsilon}}} q^{\ensuremath{\epsilon}}\cdot\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} |_{z=0} - \nabla^{\varphi} q\cdot\ensuremath{\textbf{N}} |_{z=0} \\[5pt] = \ensuremath{\epsilon}\triangle^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}}\cdot\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} - (\partial_t^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}}\cdot\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}-\partial_t^{\varphi} v\cdot\ensuremath{\textbf{N}}) - (v^{\ensuremath{\epsilon}}\cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}}\cdot\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} - v\cdot\nabla^{\varphi} v\cdot\ensuremath{\textbf{N}}), \end{array} \end{equation} \begin{equation*} \begin{array}{ll} (\nabla^{\varphi^{\ensuremath{\epsilon}}} \hat{q} -\partial_z^{\varphi}q \nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta}) \cdot\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} |_{z=0} + \nabla^{\varphi} q\cdot \hat{\ensuremath{\textbf{N}}}|_{z=0} \\[6pt] = \ensuremath{\epsilon}\triangle^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}}\cdot\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} - (\partial_t^{\varphi^{\ensuremath{\epsilon}}} \hat{v} - \partial_z^{\varphi} v\partial_t^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta})\cdot\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} - \partial_t^{\varphi} v \cdot\hat{\ensuremath{\textbf{N}}} \\[6pt]\quad - (v^{\ensuremath{\epsilon}}\cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} \hat{v} - v^{\ensuremath{\epsilon}}\cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} \hat{\eta}\, \partial_z^{\varphi}v + \hat{v}\cdot\nabla^{\varphi} v)\cdot\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} - v\cdot\nabla^{\varphi} v\cdot\hat{\ensuremath{\textbf{N}}}, \\[10pt] \nabla^{\varphi^{\ensuremath{\epsilon}}} \hat{q} \cdot\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} |_{z=0} =\partial_z^{\varphi}q \nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta} \cdot\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} |_{z=0} - \nabla^{\varphi} q\cdot \hat{\ensuremath{\textbf{N}}}|_{z=0} \\[7pt]\quad + \ensuremath{\epsilon}\triangle^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}}\cdot\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} - (\partial_t \hat{v} + v^{\ensuremath{\epsilon}}_y\cdot\nabla_y \hat{v})\cdot\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} - (\partial_t v + v_y\cdot\nabla_y v) \cdot\hat{\ensuremath{\textbf{N}}} \\[7pt]\quad + [(\partial_t\hat{\eta} + v_y^{\ensuremath{\epsilon}}\cdot\nabla_y \hat{\eta})\, \partial_z^{\varphi}v - \hat{v}\cdot\nabla^{\varphi} v]\cdot\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} :=\mathcal{I}_9. 
\end{array}
\end{equation*}
Similar to $(\ref{Sect5_Pressure_Estimates_3})$, $\hat{q}$ satisfies the following elliptic equation:
\begin{equation}\label{Sect7_Pressure_Estimates_2}
\left\{\begin{array}{ll}
\nabla\cdot(\textsf{E}^{\ensuremath{\epsilon}}\nabla \hat{q}) = -\nabla\cdot((\textsf{E}^{\ensuremath{\epsilon}} -\textsf{E}) \nabla q) - \nabla\cdot [(\textsf{P}^{\ensuremath{\epsilon}} - \textsf{P}) (v \cdot\nabla^{\varphi} v)] \\[5pt]\hspace{2.07cm}
-\nabla\cdot [\textsf{P}^{\ensuremath{\epsilon}} (v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} \hat{v} - v^{\ensuremath{\epsilon}}\cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\varphi}\partial_z^{\varphi} v + \hat{v}\cdot\nabla^{\varphi} v)], \\[7pt]
\nabla^{\varphi^{\ensuremath{\epsilon}}} \hat{q} \cdot\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} |_{z=0} = \mathcal{I}_9.
\end{array}\right.
\end{equation}
The matrix $\textsf{E}^{\ensuremath{\epsilon}}$ is positive definite, so it is standard to prove that $\hat{q}$ satisfies the following gradient estimate:
\begin{equation}\label{Sect7_Pressure_Estimates_3}
\begin{array}{ll}
\|\nabla \hat{q}\|_{X^s} \ensuremath{\lesssim} \|(\textsf{E}^{\ensuremath{\epsilon}} -\textsf{E}) \nabla q \|_{X^s} + \|(\textsf{P}^{\ensuremath{\epsilon}} - \textsf{P}) (v \cdot\nabla^{\varphi} v)\|_{X^s} \\[5pt]\quad
+ \|\textsf{P}^{\ensuremath{\epsilon}} (v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} \hat{v} - v^{\ensuremath{\epsilon}}\cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\varphi}\partial_z^{\varphi} v + \hat{v}\cdot\nabla^{\varphi} v)\|_{X^s} + |\mathcal{I}_9|_{X^{s,-\frac{1}{2}}} \\[9pt]
\ensuremath{\lesssim} \|\textsf{E}^{\ensuremath{\epsilon}} -\textsf{E}\|_{X^s} + \|\textsf{P}^{\ensuremath{\epsilon}} - \textsf{P}\|_{X^s} + \|\hat{v}\|_{X^s} + \|\nabla\hat{v}\|_{X^s} + \|\nabla\hat{\varphi}\|_{X^s} + |\mathcal{I}_9|_{X^{s,-\frac{1}{2}}} \\[9pt]
\ensuremath{\lesssim} \|\hat{v}\|_{X^{s,1}} + \|\partial_z\hat{v}\|_{X^s} + |\hat{h}|_{X^{s,\frac{1}{2}}} + |\mathcal{I}_9|_{X^{s,-\frac{1}{2}}}.
\end{array}
\end{equation}
Now we estimate the boundary terms.
\begin{equation}\label{Sect7_Pressure_Estimates_4}
\begin{array}{ll}
|\mathcal{I}_9|_{X^{s,-\frac{1}{2}}} \ensuremath{\lesssim} \big|\partial_t\hat{v}\cdot\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}|_{z=0}\big|_{X^{s,-\frac{1}{2}}} + \big|v_y^{\ensuremath{\epsilon}}\cdot\nabla_y \hat{v}\cdot\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}|_{z=0}\big|_{X^{s,-\frac{1}{2}}} + \big|\hat{v}\cdot\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}|_{z=0}\big|_{X^{s,-\frac{1}{2}}} \\[9pt]\quad
+ \big|\partial_t\hat{\eta}\cdot\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}|_{z=0}\big|_{X^{s,-\frac{1}{2}}} + \big|\nabla_y \hat{\eta}\cdot\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}|_{z=0}\big|_{X^{s,-\frac{1}{2}}} +|\hat{h}|_{X^{s,\frac{1}{2}}} +O(\ensuremath{\epsilon}) \\[10pt]
\ensuremath{\lesssim} |\hat{h}|_{X^{s,\frac{1}{2}}} + \|\partial_t\hat{v}\|_{X^s} + \|\hat{v}\|_{X^{s,1}} + \|\partial_t\hat{\eta}\|_{X^s} + \|\partial_t\hat{\eta}\|_{X^{s,1}} + \|\nabla\hat{\eta}\|_{X^{s,1}} +O(\ensuremath{\epsilon}) \\[10pt]
\ensuremath{\lesssim} \|\partial_t^{s+1}\hat{v}\|_{L^2} + \|\hat{v}\|_{X^{s,1}} + |\partial_t\hat{h}|_{X^{s,\frac{1}{2}}} + |\hat{h}|_{X^{s,\frac{3}{2}}} +O(\ensuremath{\epsilon}),
\end{array}
\end{equation}
where we refer to $(\ref{Sect6_Pressure_Estimate_2})$ for the estimate of $\ensuremath{\epsilon}\triangle^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}}\cdot\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}$. By $(\ref{Sect7_Pressure_Estimates_3})$ and $(\ref{Sect7_Pressure_Estimates_4})$, we obtain $(\ref{Sect7_Pressure_Lemma_Eq})$. Thus, Lemma $\ref{Sect7_Pressure_Lemma}$ is proved.
\end{proof}
Before estimating the tangential derivatives of $\hat{v}$, we bound $\partial_t^{\ell} \hat{h}$ by using the kinematic boundary condition $(\ref{Sect7_DifferenceEq_1})_3$, which is the same as $(\ref{Sect1_T_Derivatives_Difference_Eq})_3$. Since the proof is identical to that of Lemma $\ref{Sect5_Height_Estimates_Lemma}$, we state the following lemma without proof.
\begin{lemma}\label{Sect7_Height_Estimates_Lemma}
For $0\leq k\leq m-2$ and $0\leq\ell\leq k-1$, $\partial_t^{\ell}\hat{h}$ satisfies the estimate:
\begin{equation}\label{Sect7_Height_Estimates_Lemma_Eq}
\begin{array}{ll}
\int\limits_{\mathbb{R}^2} |\partial_t^{\ell}\hat{h}|^2 \,\mathrm{d}y \ensuremath{\lesssim} |\hat{h}_0|_{X^{k-1}}^2 + \int\limits_0^t |\hat{h}|_{X^{k-1,1}}^2 + \|\hat{v}\|_{X^{k-1,1}}^2 \,\mathrm{d}t + \|\partial_z\hat{v}\|_{L^4([0,T],X^{k-1})}^2.
\end{array}
\end{equation}
\end{lemma}
Next, we develop the estimates for tangential derivatives.
\begin{lemma}\label{Sect7_Tangential_Estimates_Lemma}
Assume $0\leq k\leq m-2$; then $\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}$ and $\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h}$ satisfy the estimate:
\begin{equation}\label{Sect7_Tangential_Estimates_Lemma_Eq}
\begin{array}{ll}
\|\hat{v}\|_{X^{k-1,1}}^2 + |\hat{h}|_{X^{k-1,1}}^2 \ensuremath{\lesssim} \|\hat{v}_0\|_{X^{k-1,1}}^2 + |\hat{h}_0|_{X^{k-1,1}}^2 + \int\limits_0^t \|\partial_z \hat{v}\|_{X^{k-1}}^2 \\[8pt]\quad
+ \|\hat{v}\|_{X^{k-1,1}}^2 + \|\hat{h}\|_{X^{k-1,1}}^2 \,\mathrm{d}t + O(\ensuremath{\epsilon}).
\end{array} \end{equation} \end{lemma} \begin{proof} $(\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}, \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h}, \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{q})$ satisfy the following equations: \begin{equation}\label{Sect7_TangentialEstimates_Diff_Eq} \left\{\begin{array}{ll} \partial_t^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} + v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} + \nabla^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{q} - 2\ensuremath{\epsilon} \nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} \\[8pt]\quad = \ensuremath{\epsilon}\partial_t^{\ell}\mathcal{Z}^{\alpha}\triangle^{\varphi^\ensuremath{\epsilon}} v + 2\ensuremath{\epsilon} [\partial_t^{\ell}\mathcal{Z}^{\alpha},\nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot]\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \hat{v} + 2\ensuremath{\epsilon} \nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot[\partial_t^{\ell}\mathcal{Z}^{\alpha}, \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}] \hat{v} \\[8pt]\quad + \partial_z^{\varphi} v \partial_t^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\varphi} + \partial_z^{\varphi} v \, v^{\ensuremath{\epsilon}}\cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\varphi} - \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}\cdot\nabla^{\varphi} v + \partial_z^{\varphi} q\nabla^{\varphi^{\ensuremath{\epsilon}}}\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\varphi} \\[8pt]\quad - [\partial_t^{\ell}\mathcal{Z}^{\alpha},\partial_t^{\varphi^{\ensuremath{\epsilon}}}]\hat{v} + [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \partial_z^{\varphi} v \partial_t^{\varphi^{\ensuremath{\epsilon}}}]\hat{\varphi} - [\partial_t^{\ell}\mathcal{Z}^{\alpha}, v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}] \hat{v} - [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \nabla^{\varphi} v\cdot]\hat{v}\\[8pt]\quad + [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \partial_z^{\varphi} v \, v^{\ensuremath{\epsilon}}\cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}]\hat{\varphi} - [\partial_t^{\ell}\mathcal{Z}^{\alpha},\nabla^{\varphi^{\ensuremath{\epsilon}}}] \hat{q} + [\partial_t^{\ell}\mathcal{Z}^{\alpha},\partial_z^{\varphi} q\nabla^{\varphi^{\ensuremath{\epsilon}}}]\hat{\varphi} :=\mathcal{I}_{10}, \\[11pt] \nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} = \partial_z^{\varphi}v \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\eta} -[\partial_t^{\ell}\mathcal{Z}^{\alpha},\nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot] \hat{v} + [\partial_t^{\ell}\mathcal{Z}^{\alpha},\partial_z^{\varphi}v \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}]\hat{\eta}, \\[11pt] \partial_t \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h} + v_y^{\ensuremath{\epsilon}}\cdot \nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h} - \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} = - \hat{v}_y \cdot \nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} h - \partial_y \hat{h}\cdot \partial_t^{\ell}\mathcal{Z}^{\alpha}v_y \\[7pt]\quad + [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \hat{v},\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}] - [\partial_t^{\ell}\mathcal{Z}^{\alpha}, v_y, \partial_y \hat{h}], \\[11pt] 
\partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{q}\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} -2\ensuremath{\epsilon} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{v}\,\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} - g\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h}\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} + \sigma \nabla_y \cdot \big( \mathfrak{H}_1\nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{h} \big) \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} \\[8pt]\quad + \sigma\nabla_y \cdot \big( \mathfrak{H}_2 \nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{h}\cdot\nabla_y(h^{\ensuremath{\epsilon}}+h) \nabla_y (h^{\ensuremath{\epsilon}} +h)\big)\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} = \mathcal{I}_{11,1} + \mathcal{I}_{11,2}, \\[11pt] (\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v},\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h})|_{t=0} = (\partial_t^{\ell}\mathcal{Z}^{\alpha}v_0^\ensuremath{\epsilon} -\partial_t^{\ell}\mathcal{Z}^{\alpha}v_0, \partial_t^{\ell}\mathcal{Z}^{\alpha}h_0^\ensuremath{\epsilon} -\partial_t^{\ell}\mathcal{Z}^{\alpha}h_0), \end{array}\right. \end{equation} where \begin{equation}\label{Sect7_TangentialEstimates_Diff_Eq_Appendix} \begin{array}{ll} \mathcal{I}_{11,1} := 2\ensuremath{\epsilon} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}v\,\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} + 2\ensuremath{\epsilon}(\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}v^{\ensuremath{\epsilon}} - \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}v^{\ensuremath{\epsilon}}\ensuremath{\textbf{n}}^{\ensuremath{\epsilon}}\cdot\ensuremath{\textbf{n}}^{\ensuremath{\epsilon}})\,\partial_t^{\ell}\mathcal{Z}^{\alpha}\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} \\[4pt]\hspace{1.25cm} + 2\ensuremath{\epsilon}[\partial_t^{\ell}\mathcal{Z}^{\alpha}, \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}v^{\ensuremath{\epsilon}} - \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}v^{\ensuremath{\epsilon}}\ensuremath{\textbf{n}}^{\ensuremath{\epsilon}}\cdot\ensuremath{\textbf{n}}^{\ensuremath{\epsilon}}, \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}] - 2\ensuremath{\epsilon} [\partial_t^{\ell}\mathcal{Z}^{\alpha},\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}] v^{\ensuremath{\epsilon}}\,\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}, \\[10pt] \mathcal{I}_{11,2} := - \sigma\nabla_y \cdot \big([\partial_t^{\ell}\mathcal{Z}^{\alpha}, \mathfrak{H}_1\nabla_y] \hat{h}\big) \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} \\[4pt]\hspace{1.25cm} - \sigma\nabla_y \cdot \big([\partial_t^{\ell}\mathcal{Z}^{\alpha}, \mathfrak{H}_2 \nabla_y (h^{\ensuremath{\epsilon}} +h) \nabla_y(h^{\ensuremath{\epsilon}}+h)\cdot \nabla_y] \hat{h} \big)\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}. 
\end{array} \end{equation} When $|\alpha|\geq 1$ and $1\leq \ell+|\alpha|\leq k$, we develop the $L^2$ estimate of $\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}$, we have \begin{equation}\label{Sect7_Tangential_Estimates_1} \begin{array}{ll} \frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t} \int\limits_{\mathbb{R}^3_{-}} |\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}|^2 \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}} - \int\limits_{\mathbb{R}^3_{-}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{q} \nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}} + 2\ensuremath{\epsilon}\int\limits_{\mathbb{R}^3_{-}} |\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}|^2 \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}} \\[10pt] \ensuremath{\lesssim} -\int\limits_{\{z=0\}} \big(\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{q} \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} - 2\ensuremath{\epsilon} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}\partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{v} \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} \big) \cdot \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} \,\mathrm{d}y + \int\limits_{\mathbb{R}^3_{-}} \mathcal{I}_{10} \cdot \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}} \\[11pt] \ensuremath{\lesssim} \int\limits_{\{z=0\}} \big[- g\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h} + \sigma \nabla_y \cdot \big( \mathfrak{H}_1\nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{h} \big) \\[9pt]\quad + \sigma\nabla_y \cdot \big( \mathfrak{H}_2 \nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{h}\cdot\nabla_y(h^{\ensuremath{\epsilon}}+h) \nabla_y (h^{\ensuremath{\epsilon}} +h)\big) \big] \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} \,\mathrm{d}y \\[9pt]\quad - \int\limits_{\{z=0\}} (\mathcal{I}_{11,1} + \mathcal{I}_{11,2}) \cdot \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} \,\mathrm{d}y + \int\limits_{\mathbb{R}^3_{-}} \mathcal{I}_{10}\cdot \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}} +O(\ensuremath{\epsilon}) \end{array} \end{equation} \begin{equation*} \begin{array}{ll} \ensuremath{\lesssim} \int\limits_{\{z=0\}} \big[- g\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h} + \sigma \nabla_y \cdot \big( \mathfrak{H}_1\nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{h} \big) \\[8pt]\quad + \sigma\nabla_y \cdot \big( \mathfrak{H}_2 \nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{h}\cdot\nabla_y(h^{\ensuremath{\epsilon}}+h) \nabla_y (h^{\ensuremath{\epsilon}} +h)\big)\big] \\[8pt]\quad \cdot\Big(\partial_t \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h} + v_y^{\ensuremath{\epsilon}}\cdot \nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h} + \hat{v}_y \cdot \nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} h + \partial_y \hat{h}\cdot \partial_t^{\ell}\mathcal{Z}^{\alpha}v_y \\[7pt]\quad - [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \hat{v},\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}] + [\partial_t^{\ell}\mathcal{Z}^{\alpha}, v_y, \partial_y \hat{h}] \Big) \,\mathrm{d}y + \int\limits_{\mathbb{R}^3_{-}} \mathcal{I}_{10}\cdot \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}} \\[7pt]\quad - \int\limits_{\{z=0\}} \mathcal{I}_{11,1} \cdot \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} \,\mathrm{d}y - \int\limits_{\{z=0\}} \mathcal{I}_{11,2} \cdot \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} 
\,\mathrm{d}y +O(\ensuremath{\epsilon}). \hspace{2.3cm} \end{array} \end{equation*} We develop the following boundary estimates in $(\ref{Sect7_Tangential_Estimates_1})$, \begin{equation}\label{Sect7_Tangential_Estimates_2_1} \begin{array}{ll} \int\limits_{\{z=0\}} \big(\partial_t \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h} + v_y^{\ensuremath{\epsilon}}\cdot \nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h}\big) \big(- g\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h} + \sigma \nabla_y \cdot \big( \mathfrak{H}_1\nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{h} \big) \\[8pt]\quad + \sigma\nabla_y \cdot \big( \mathfrak{H}_2 \nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{h}\cdot\nabla_y(h^{\ensuremath{\epsilon}}+h) \nabla_y (h^{\ensuremath{\epsilon}} +h)\big)\big) \,\mathrm{d}y \\[9pt] = -\frac{g}{2} \frac{\mathrm{d}}{\mathrm{d}t} \int\limits_{\{z=0\}} |\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h}|^2 \,\mathrm{d}y + \frac{g}{2} \int\limits_{\{z=0\}} |\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h}|^2 \nabla_y\cdot v^{\ensuremath{\epsilon}}_y \,\mathrm{d}y \\[9pt]\quad - \sigma\int\limits_{\{z=0\}} \big(\partial_t \nabla_y\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h} + v_y^{\ensuremath{\epsilon}}\cdot \nabla_y \nabla_y\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h} + \nabla_y v_y^{\ensuremath{\epsilon},j}\cdot \partial_j \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h}\big) \\[8pt]\quad \cdot \big[\mathfrak{H}_1\nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{h} + \mathfrak{H}_2 \nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{h}\cdot\nabla_y(h^{\ensuremath{\epsilon}}+h) \nabla_y (h^{\ensuremath{\epsilon}} +h)\big] \,\mathrm{d}y \\[10pt] \ensuremath{\lesssim} -\frac{g}{2} \frac{\mathrm{d}}{\mathrm{d}t} \int\limits_{\{z=0\}} |\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h}|^2 \,\mathrm{d}y - \frac{\sigma}{2}\frac{\mathrm{d}}{\mathrm{d}t} \int\limits_{\{z=0\}} \mathfrak{H}_1|\nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{h}|^2 \\[11pt]\quad + \mathfrak{H}_2 |\nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{h}\cdot\nabla_y(h^{\ensuremath{\epsilon}}+h)|^2 \,\mathrm{d}y + \|\nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{h}\|_{L^2}^2 + \|\partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{h}\|_{L^2}^2. \end{array} \end{equation} It is easy to check that \begin{equation}\label{Sect7_Tangential_Estimates_2_2} \begin{array}{ll} - \int\limits_{\{z=0\}} \mathcal{I}_{11,1} \cdot \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} \,\mathrm{d}y =O(\ensuremath{\epsilon}). 
\end{array} \end{equation} Another boundary estimate is that \begin{equation}\label{Sect7_Tangential_Estimates_2_3} \begin{array}{ll} - \int\limits_{\{z=0\}} \mathcal{I}_{11,2} \cdot \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} \,\mathrm{d}y = \sigma\int\limits_{\{z=0\}} \nabla_y \cdot \Big( [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \mathfrak{H}_1\nabla_y] \hat{h} \\[12pt]\quad + [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \mathfrak{H}_2 \nabla_y (h^{\ensuremath{\epsilon}} +h) \nabla_y(h^{\ensuremath{\epsilon}}+h)\cdot \nabla_y] \hat{h} \Big) \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}\,\mathrm{d}y \\[10pt] = \sigma\int\limits_{\{z=0\}} \nabla_y \cdot \Big( [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \mathfrak{H}_1\nabla_y] \hat{h} + [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \mathfrak{H}_2 \nabla_y (h^{\ensuremath{\epsilon}} +h) \nabla_y(h^{\ensuremath{\epsilon}}+h)\cdot \nabla_y] \hat{h} \Big)\\[12pt]\quad \cdot\Big(\partial_t \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h} + v_y^{\ensuremath{\epsilon}}\cdot \nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h} + \hat{v}_y \cdot \nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} h + \partial_y \hat{h}\cdot \partial_t^{\ell}\mathcal{Z}^{\alpha}v_y \\[7pt]\quad - [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \hat{v},\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}] + [\partial_t^{\ell}\mathcal{Z}^{\alpha}, v_y, \partial_y \hat{h}] \Big)\,\mathrm{d}y \\[10pt] \ensuremath{\lesssim} \sigma |\hat{h}|_{X^{k-1,2}}^2 + \sigma |\partial_t^k \hat{h}|_{X^{0,1}}^2 + \big|\hat{v}|_{z=0}\big|_{X^{k-1}}^2 \\[8pt] \ensuremath{\lesssim} \sigma |\hat{h}|_{X^{k-1,2}}^2 + \sigma |\partial_t^k \hat{h}|_{X^{0,1}}^2 + \|\hat{v}\big|_{X^{k-1,1}}^2 + \|\partial_z\hat{v}\big|_{X^{k-1}}^2 . \end{array} \end{equation} Plug $(\ref{Sect7_Tangential_Estimates_2_1}),(\ref{Sect7_Tangential_Estimates_2_2}),(\ref{Sect7_Tangential_Estimates_2_3})$ into $(\ref{Sect7_Tangential_Estimates_1})$, we have \begin{equation}\label{Sect7_Tangential_Estimates_3} \begin{array}{ll} \frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t} \int\limits_{\mathbb{R}^3_{-}} |\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}|^2 \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}} + 2\ensuremath{\epsilon}\int\limits_{\mathbb{R}^3_{-}} |\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}|^2 \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}} + \frac{g}{2} \frac{\mathrm{d}}{\mathrm{d}t} \int\limits_{\{z=0\}} |\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h}|^2 \,\mathrm{d}y \\[10pt]\quad + \frac{\sigma}{2}\frac{\mathrm{d}}{\mathrm{d}t} \int\limits_{\{z=0\}} \mathfrak{H}_1|\nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{h}|^2 + \mathfrak{H}_2 |\nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{h}\cdot\nabla_y(h^{\ensuremath{\epsilon}}+h)|^2\big] \,\mathrm{d}y \\[10pt] \ensuremath{\lesssim} \|\hat{v}\|_{X^{k-1,1}}^2 + \|\partial_t^k\hat{v}\|_{L^2}^2 + |\hat{h}|_{X^{k-1,2}}^2 + |\partial_t^k \hat{h}|_{X^{0,1}}^2 + \|\hat{q}\|_{X^{k-1}}^2 + O(\ensuremath{\epsilon}). 
\end{array} \end{equation} Since \begin{equation}\label{Sect7_Tangential_Estimates_4} \begin{array}{ll} \int\limits_{\{z=0\}} \mathfrak{H}_1|\nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{h}|^2 + \mathfrak{H}_2 |\nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{h}\cdot\nabla_y(h^{\ensuremath{\epsilon}}+h)|^2\big] \,\mathrm{d}y \\[10pt] \geq \int\limits_{\{z=0\}} |\nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{h}|^2 (\mathfrak{H}_1 - |\mathfrak{H}_2| |\nabla_y(h^{\ensuremath{\epsilon}}+h)|^2) \,\mathrm{d}y \geq \int\limits_{\{z=0\}} 4|\mathfrak{H}_2| |\nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{h}|^2 \,\mathrm{d}y. \end{array} \end{equation} where $4|\mathfrak{H}_2| \geq \delta_{\sigma}>0$. Integrate $(\ref{Sect7_Tangential_Estimates_3})$ in time, apply the integral form of Gronwall's inequality, note that $(\ref{Sect7_Tangential_Estimates_4})$, we get \begin{equation}\label{Sect7_Tangential_Estimates_5} \begin{array}{ll} \int\limits_{\mathbb{R}^3_{-}} |\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}|^2 \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}} + g \int\limits_{\{z=0\}} |\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h}|^2 \,\mathrm{d}y + \sigma\int\limits_{\{z=0\}} |\nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{h}|^2 \,\mathrm{d}y \\[10pt] \ensuremath{\lesssim} \|\hat{v}_0\|_{X^{k-1,1}}^2 + |\hat{h}_0|_{X^{k-1,1}}^2 + \int\limits_0^t \|\partial_t^k\hat{v}\|_{L^2}^2 + |\partial_t^k \hat{h}|_{X^{0,1}}^2 + \|\hat{q}\|_{X^{k-1}}^2 \,\mathrm{d}t + O(\ensuremath{\epsilon}). \end{array} \end{equation} When $|\alpha|=0, \, 0\leq\ell\leq k-1$, we have no bounds of $\hat{q}$ and $\partial_t^{\ell} \hat{q}$, so we can not apply the integration by parts to the pressure terms. Also, the dynamical boundary condition will not be used. Since the main equation of $\partial_t^{\ell}\hat{v}$ and its kinetical boundary condition satisfy \begin{equation}\label{Sect7_Tangential_Estimates_Time} \left\{\begin{array}{ll} \partial_t^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\hat{v} + v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\hat{v} - 2\ensuremath{\epsilon}\nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\hat{v} \\[8pt]\quad = - \nabla^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\hat{q} + 2\ensuremath{\epsilon} [\partial_t^{\ell},\nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot]\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \hat{v} + 2\ensuremath{\epsilon}\nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot[\partial_t^{\ell}, \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}] \hat{v} + \ensuremath{\epsilon}\partial_t^{\ell}\triangle^{\varphi^\ensuremath{\epsilon}} v \\[8pt]\quad + \partial_z^{\varphi} v \partial_t^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\hat{\varphi} + \partial_z^{\varphi} v \, v^{\ensuremath{\epsilon}}\cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}\partial_t^{\ell}\hat{\varphi} - \partial_t^{\ell}\hat{v}\cdot\nabla^{\varphi} v + \partial_z^{\varphi} q\nabla^{\varphi^{\ensuremath{\epsilon}}}\partial_t^{\ell}\hat{\varphi} \\[8pt]\quad - [\partial_t^{\ell},\partial_t^{\varphi^{\ensuremath{\epsilon}}}]\hat{v} + [\partial_t^{\ell}, \partial_z^{\varphi} v \partial_t^{\varphi^{\ensuremath{\epsilon}}}]\hat{\varphi} - [\partial_t^{\ell}, v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}] \hat{v} - [\partial_t^{\ell}, \nabla^{\varphi} v\cdot]\hat{v}\\[8pt]\quad + [\partial_t^{\ell}, \partial_z^{\varphi} v \, v^{\ensuremath{\epsilon}}\cdot 
\nabla^{\varphi^{\ensuremath{\epsilon}}}]\hat{\varphi} - [\partial_t^{\ell},\nabla^{\varphi^{\ensuremath{\epsilon}}}] \hat{q} + [\partial_t^{\ell},\partial_z^{\varphi} q\nabla^{\varphi^{\ensuremath{\epsilon}}}]\hat{\varphi} :=\mathcal{I}_{12}, \\[11pt] \partial_t \partial_t^{\ell}\hat{h} + v_y^{\ensuremath{\epsilon}}\cdot \nabla_y \partial_t^{\ell}\hat{h} - \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot \partial_t^{\ell}\hat{v} = - \hat{v}_y \cdot \nabla_y \partial_t^{\ell} h - \partial_y \hat{h}\cdot \partial_t^{\ell}v_y \\[6pt]\quad + [\partial_t^{\ell}, \hat{v},\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}] - [\partial_t^{\ell}, v_y, \partial_y \hat{h}], \\[11pt] (\partial_t^{\ell}\hat{v},\partial_t^{\ell}\hat{h})|_{t=0} = (\partial_t^{\ell}v_0^\ensuremath{\epsilon} -\partial_t^{\ell}v_0, \partial_t^{\ell}h_0^\ensuremath{\epsilon} -\partial_t^{\ell}h_0), \end{array}\right. \end{equation} we have the following $L^2$ estimate of $\partial_t^{\ell}\hat{v}$: \begin{equation}\label{Sect7_Tangential_Estimates_6} \begin{array}{ll} \frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t} \int\limits_{\mathbb{R}^3_{-}} |\partial_t^{\ell}\hat{v}|^2 \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}} + 2\ensuremath{\epsilon}\int\limits_{\mathbb{R}^3_{-}}|\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\hat{v}|^2 \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}} \\[6pt] \ensuremath{\lesssim} 2\ensuremath{\epsilon}\int\limits_{\{z=0\}} \mathcal{S}^{\varphi^\ensuremath{\epsilon}}\partial_t^{\ell} \hat{v} \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} \cdot \partial_t^{\ell}\hat{v} \,\mathrm{d}y + \int\limits_{\mathbb{R}^3_{-}}\mathcal{I}_{12}\cdot \partial_t^{\ell} \hat{v} \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}} \\[6pt] \ensuremath{\lesssim} \|\mathcal{I}_{12}\|_{L^2}^2 + \|\partial_t^{\ell} \hat{v}\|_{L^2}^2 + O(\ensuremath{\epsilon}). \end{array} \end{equation} It is easy to check that the source term $\mathcal{I}_{12}$ in $(\ref{Sect7_Tangential_Estimates_6})$ satisfies \begin{equation}\label{Sect7_Tangential_Estimates_7} \begin{array}{ll} \|\mathcal{I}_{12}\|_{L^2} \ensuremath{\lesssim} \|\partial_z \hat{v}\|_{X^{k-1}} + \|\hat{v}\|_{X^{k-1,1}} + \|\partial_t^k\hat{v}\|_{L^2} \\[5pt]\hspace{1.67cm} + |\partial_t^k \hat{h}|_{L^2} + |\hat{h}|_{X^{k-1,\frac{1}{2}}} + \|\nabla\hat{q}\|_{X^{k-1}} + O(\ensuremath{\epsilon}). \end{array} \end{equation} Plugging $(\ref{Sect7_Tangential_Estimates_7})$ into $(\ref{Sect7_Tangential_Estimates_6})$, integrating in time and applying the integral form of Gronwall's inequality, we have \begin{equation}\label{Sect7_Tangential_Estimates_8} \begin{array}{ll} \|\partial_t^{\ell}\hat{v}\|_{L^2}^2 + \ensuremath{\epsilon}\int\limits_0^t \|\nabla \partial_t^{\ell}\hat{v}\|_{L^2}^2 \\[7pt] \ensuremath{\lesssim} \|\partial_t^{\ell}\hat{v}_0\|_{L^2}^2 + \int\limits_0^t\|\partial_z \hat{v}\|_{X^{k-1}}^2 + \|\hat{v}\|_{X^{k-1,1}}^2 + \|\partial_t^k\hat{v}\|_{L^2}^2 + |\partial_t^k \hat{h}|_{L^2}^2 \\[6pt]\quad + |\hat{h}|_{X^{k-1,1}}^2 + \|\nabla\hat{q}\|_{X^{k-1}}^2\,\mathrm{d}t + O(\ensuremath{\epsilon}) \\[6pt] \ensuremath{\lesssim} \|\hat{v}_0\|_{X^{k-1}}^2 + \|\partial_z \hat{v}\|_{L^4([0,T],X^{k-2})}^2 + |\partial_t^k \hat{h}|_{L^4([0,T],L^2)}^2 \\[5pt]\quad + \|\nabla\hat{q}\|_{L^4([0,T],X^{k-1})}^2 + \int\limits_0^t\|\hat{v}\|_{X^{k-1}}^2 + |\hat{h}|_{X^{k-1,1}}^2 \,\mathrm{d}t + O(\ensuremath{\epsilon}). \end{array} \end{equation} Finally, we sum over $\ell$ and $\alpha$.
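For the reader's convenience, the integral form of Gronwall's inequality invoked in $(\ref{Sect7_Tangential_Estimates_5})$ and $(\ref{Sect7_Tangential_Estimates_8})$ is the standard one: if a nonnegative function $f$ satisfies
\begin{equation*}
f(t) \leq A + C\int\limits_0^t f(s) \,\mathrm{d}s, \qquad 0\leq t\leq T,
\end{equation*}
with constants $A,\, C\geq 0$, then $f(t) \leq A e^{Ct}$, hence $f \ensuremath{\lesssim} A$ on $[0,T]$; here $A$ collects the initial data and the $O(\ensuremath{\epsilon})$ terms.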
By $(\ref{Sect7_Tangential_Estimates_5})$, $(\ref{Sect7_Tangential_Estimates_8})$ and Lemma $\ref{Sect7_Height_Estimates_Lemma}$, we have $(\ref{Sect7_Tangential_Estimates_Lemma_Eq})$. Thus, Lemma $\ref{Sect7_Tangential_Estimates_Lemma}$ is proved. \end{proof} In order to close our estimates of tangential derivatives, we need to bound $\|\partial_t^k \hat{v}\|_{L^4([0,T],L^2)}^2$ and $\|\partial_t^k \hat{h}\|_{L^4([0,T],X^{0,1})}^2$, which appear in Lemma $\ref{Sect7_Tangential_Estimates_Lemma}$. Thus, we estimate $\partial_t^k \hat{v}$ and $\partial_t^k \hat{h}$. \begin{lemma}\label{Sect7_TimeDer_Estimate_Lemma} $\partial_t^k \hat{v}, \partial_t^k \hat{h}, \partial_t^{k+1}\hat{h}$ satisfies the following estimate: \begin{equation}\label{Sect7_TimeDer_Estimate} \begin{array}{ll} \|\partial_t^k \hat{v}\|_{L^4([0,T],L^2)}^2 + |\partial_t^k \hat{h}|_{L^4([0,T],X^{0,1})}^2 + |\partial_t^{k+1}\nabla \hat{h}|_{L^4([0,T],L^2)}^2 \\[6pt] \ensuremath{\lesssim} \|\partial_t^k \hat{v}_0\|_{L^2}^2 + g|\partial_t^k \hat{h}_0|_{L^2}^2 + \sigma|\partial_t^k\nabla \hat{h}_0|_{L^2}^2 + \|\partial_z\partial_t^{k-1} \hat{v}_0\|_{L^2}^2 \\[6pt]\quad + \|\partial_z \hat{v}\|_{L^4([0,T],X^{k-1})}^2 + O(\ensuremath{\epsilon}). \end{array} \end{equation} \end{lemma} \begin{proof} $(\partial_t^k \hat{v},\partial_t^k \hat{h},\partial_t^k \hat{q})$ satisfy the following equations: \begin{equation}\label{Sect7_TimeDer_Eq} \left\{\begin{array}{ll} \partial_t^{\varphi^{\ensuremath{\epsilon}}} \partial_t^k\hat{v} + v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} \partial_t^k\hat{v} + \nabla^{\varphi^{\ensuremath{\epsilon}}} \partial_t^k\hat{q} - 2\ensuremath{\epsilon} \nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \partial_t^k\hat{v} = \mathcal{I}_{10}|_{\ell=k,|\alpha|=0}, \\[11pt] \nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot \partial_t^k\hat{v} = \partial_z^{\varphi}v \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}\partial_t^k\hat{\eta} -[\partial_t^k,\nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot] \hat{v} + [\partial_t^k,\partial_z^{\varphi}v \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}]\hat{\eta}, \\[11pt] \partial_t \partial_t^k\hat{h} + v_y^{\ensuremath{\epsilon}}\cdot \nabla_y \partial_t^k\hat{h} - \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot \partial_t^k\hat{v} = - \hat{v}_y \cdot \nabla_y \partial_t^k h - \partial_y \hat{h}\cdot \partial_t^k v_y \\[5pt]\quad + [\partial_t^k, \hat{v},\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}] - [\partial_t^k, v_y, \partial_y \hat{h}], \\[11pt] \partial_t^k \hat{q}\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} -2\ensuremath{\epsilon} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \partial_t^k \hat{v}\,\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} - g\partial_t^k\hat{h}\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} + \sigma \nabla_y \cdot \big( \mathfrak{H}_1\nabla_y \partial_t^k \hat{h} \big) \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} \\[5pt]\quad + \sigma\nabla_y \cdot \big( \mathfrak{H}_2 \nabla_y \partial_t^k \hat{h}\cdot\nabla_y(h^{\ensuremath{\epsilon}}+h) \nabla_y (h^{\ensuremath{\epsilon}} +h)\big)\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} \\[5pt]\quad = \mathcal{I}_{11,1}|_{\ell=k,|\alpha|=0} + \mathcal{I}_{11,2}|_{\ell=k,|\alpha|=0}, \\[11pt] (\partial_t^k\hat{v},\partial_t^k\hat{h})|_{t=0} = (\partial_t^k v_0^\ensuremath{\epsilon} -\partial_t^k v_0, \partial_t^k h_0^\ensuremath{\epsilon} -\partial_t^k h_0), \end{array}\right. 
\end{equation} Then multiply $(\ref{Sect7_TimeDer_Eq})$ with $\partial_t^k \hat{v}$, integrate in $\mathbb{R}^3_{-}$, then we get \begin{equation}\label{Sect7_TimeDer_Estimate_1} \begin{array}{ll} \frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int\limits_{\mathbb{R}^3_{-}} |\partial_t^k \hat{v}|^2 \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}} - \int\limits_{\mathbb{R}^3_{-}} \partial_t^k \hat{q} \, \nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot \partial_t^k \hat{v} \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}} + 2\ensuremath{\epsilon}\int\limits_{\mathbb{R}^3_{-}} |\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \partial_t^k\hat{v}|^2 \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}} \\[14pt] \leq \int\limits_{\{z=0\}} (2\ensuremath{\epsilon} \mathcal{S}^{\varphi}\partial_t^k \hat{v} \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} - \partial_t^k \hat{q}\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}})\cdot \partial_t^k \hat{v} \mathrm{d}y + \|\partial_z \hat{v}\|_{X^{k-1}}^2 + \|\nabla \hat{q}\|_{X^{k-1}}^2 \\[11pt]\quad + |\hat{h}|_{X^{k-1,2}}^2 + |\partial_t^k \hat{h}|_{X^{0,1}}^2 + |\partial_t^{k+1} \hat{h}|_{L^2}^2 + O(\ensuremath{\epsilon}) \\[7pt] \leq -\int\limits_{\{z=0\}} \Big[ g\partial_t^{\ell}\hat{h} - \sigma \nabla_y \cdot \big( \mathfrak{H}_1\nabla_y \partial_t^{\ell} \hat{h} \big) \\[6pt]\quad - \sigma\nabla_y \cdot \big( \mathfrak{H}_2 \nabla_y \partial_t^{\ell} \hat{h} \cdot\nabla_y(h^{\ensuremath{\epsilon}}+h) \nabla_y (h^{\ensuremath{\epsilon}} +h)\big) \Big]\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} \cdot \partial_t^k \hat{v} \mathrm{d}y \\[6pt]\quad + \|\partial_z \hat{v}\|_{X^{k-1}}^2 + \|\nabla \hat{q}\|_{X^{k-1}}^2 + |\hat{h}|_{X^{k-1,2}}^2 + |\partial_t^k \hat{h}|_{X^{0,1}}^2 + |\partial_t^{k+1} \hat{h}|_{L^2}^2 + O(\ensuremath{\epsilon}) \\[6pt] \leq \sigma\int\limits_{\{z=0\}} \nabla_y\cdot \big(\mathfrak{H}_1\nabla_y \partial_t^{\ell} \hat{h} + \mathfrak{H}_2 \nabla_y \partial_t^{\ell} \hat{h}\cdot\nabla_y(h^{\ensuremath{\epsilon}}+h) \nabla_y (h^{\ensuremath{\epsilon}} +h) \big) \cdot(\partial_t \partial_t^k \hat{h} \\[13pt]\quad + v_y \cdot\nabla_y \partial_t^k \hat{h}) \mathrm{d}y -\int\limits_{\{z=0\}} g \partial_t^k \hat{h} \cdot(\partial_t \partial_t^k \hat{h} + v_y \cdot\nabla_y \partial_t^k \hat{h}) \mathrm{d}y \\[10pt]\quad + \|\partial_z \hat{v}\|_{X^{k-1}}^2 + \|\nabla \hat{q}\|_{X^{k-1}}^2 + |\hat{h}|_{X^{k-1,2}}^2 + |\partial_t^k \hat{h}|_{X^{0,1}}^2 + |\partial_t^{k+1} \hat{h}|_{L^2}^2 + O(\ensuremath{\epsilon}) \\[7pt] \leq - \sigma \int\limits_{\{z=0\}} \big(\mathfrak{H}_1\nabla_y \partial_t^{\ell} \hat{h} + \mathfrak{H}_2 \nabla_y \partial_t^{\ell} \hat{h}\cdot\nabla_y(h^{\ensuremath{\epsilon}}+h) \nabla_y (h^{\ensuremath{\epsilon}} +h) \big) \cdot(\partial_t \nabla_y\partial_t^k \hat{h} \\[13pt]\quad + v_y \cdot\nabla_y \nabla_y\partial_t^k \hat{h}) \mathrm{d}y - \frac{g}{2}\frac{\mathrm{d}}{\mathrm{d}t} \int\limits_{\{z=0\}} |\partial_t^k \hat{h}|^2 \mathrm{d}y + \|\partial_z \hat{v}\|_{X^{k-1}}^2 + \|\nabla \hat{q}\|_{X^{k-1}}^2 \\[7pt]\quad + |\hat{h}|_{X^{k-1,2}}^2 + |\partial_t^k \hat{h}|_{X^{0,1}}^2 + |\partial_t^{k+1} \hat{h}|_{L^2}^2 + O(\ensuremath{\epsilon}) \\[8pt] \leq - \frac{g}{2}\frac{\mathrm{d}}{\mathrm{d}t} \int\limits_{\{z=0\}} |\partial_t^k \hat{h}|^2 \mathrm{d}y - \frac{\sigma}{2}\frac{\mathrm{d}}{\mathrm{d}t} \int\limits_{\{z=0\}} \big(\mathfrak{H}_1|\nabla_y\partial_t^k \hat{h}|^2 + \mathfrak{H}_2 |\nabla_y \partial_t^{\ell} \hat{h}\cdot\nabla_y(h^{\ensuremath{\epsilon}}+h)|^2 \big) \mathrm{d}y \\[12pt]\quad + 
\|\partial_z \hat{v}\|_{X^{k-1}}^2 + \|\nabla \hat{q}\|_{X^{k-1}}^2 + |\hat{h}|_{X^{k-1,2}}^2 + |\partial_t^k \hat{h}|_{X^{0,1}}^2 + |\partial_t^{k+1} \hat{h}|_{L^2}^2 + O(\ensuremath{\epsilon}). \end{array} \end{equation} As in $(\ref{Sect7_Tangential_Estimates_4})$, we have \begin{equation}\label{Sect7_TimeDer_Estimate_1_Inequality} \begin{array}{ll} \int\limits_{\{z=0\}} \big[\mathfrak{H}_1|\nabla_y \partial_t^k \hat{h}|^2 + \mathfrak{H}_2 |\nabla_y \partial_t^k \hat{h}\cdot\nabla_y(h^{\ensuremath{\epsilon}}+h)|^2\big] \,\mathrm{d}y \\[10pt] \geq \int\limits_{\{z=0\}} |\nabla_y \partial_t^k \hat{h}|^2 (\mathfrak{H}_1 - |\mathfrak{H}_2| |\nabla_y(h^{\ensuremath{\epsilon}}+h)|^2) \,\mathrm{d}y \geq \int\limits_{\{z=0\}} 4|\mathfrak{H}_2| |\nabla_y \partial_t^k \hat{h}|^2 \,\mathrm{d}y, \end{array} \end{equation} where $4|\mathfrak{H}_2| \geq \delta_{\sigma}>0$, since $|\nabla_y h^{\ensuremath{\epsilon}}|_{\infty}$ and $|\nabla_y h|_{\infty}$ are bounded from above. As in \cite{Wang_Xin_2015}, we integrate in time twice to get the $L^4([0,T],L^2)$-type estimate. After the first integration in time, we have \begin{equation}\label{Sect7_TimeDer_Estimate_3} \begin{array}{ll} \|\partial_t^k \hat{v}\|_{L^2}^2 + g|\partial_t^k \hat{h}|_{L^2}^2 + \sigma|\partial_t^k\nabla_y \hat{h}|_{L^2}^2 +\ensuremath{\epsilon}\int\limits_0^t \|\nabla\partial_t^k \hat{v}\|_{L^2}^2 \,\mathrm{d}t \\[6pt] \ensuremath{\lesssim} \|\partial_t^k \hat{v}_0\|_{L^2}^2 + g|\partial_t^k \hat{h}_0|_{L^2}^2 + \sigma|\partial_t^k\nabla_y \hat{h}_0|_{L^2}^2 + \int\limits_0^t\int\limits_{\mathbb{R}^3_{-}} \partial_t^k \hat{q} \, \nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot \partial_t^k \hat{v} \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}}\mathrm{d}t \\[6pt]\quad + \int\limits_0^t\|\partial_z \hat{v}\|_{X^{k-1}}^2 + \|\nabla \hat{q}\|_{X^{k-1}}^2 + |\hat{h}|_{X^{k-1,2}}^2 + |\partial_t^k \hat{h}|_{X^{0,1}}^2 + |\partial_t^{k+1} \hat{h}|_{L^2}^2\mathrm{d}t + O(\ensuremath{\epsilon}) \end{array} \end{equation} \begin{equation*} \begin{array}{ll} \ensuremath{\lesssim} \|\partial_t^k \hat{v}_0\|_{L^2}^2 + g|\partial_t^k \hat{h}_0|_{L^2}^2 + \sigma|\partial_t^k\nabla \hat{h}_0|_{L^2}^2 + \int\limits_0^t\int\limits_{\mathbb{R}^3_{-}} \partial_t^k \hat{q} \, \nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot \partial_t^k \hat{v} \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}}\mathrm{d}t \\[6pt]\quad + \|\partial_z \hat{v}\|_{L^4([0,T],X^{k-1})}^2 + |\partial_t^k \hat{h}|_{L^4([0,T],X^{0,1})}^2 + |\partial_t^k \hat{v}|_{L^4([0,T],L^2)}^2 + O(\ensuremath{\epsilon}). \hspace{1cm} \end{array} \end{equation*} Similarly to the procedures $(\ref{Sect6_TimeDer_Estimate_4})$, $(\ref{Sect6_TimeDer_Estimate_5})$, $(\ref{Sect6_TimeDer_Estimate_6})$, $(\ref{Sect6_TimeDer_Estimate_7})$, we deal with the pressure term $\int\limits_0^t\int\limits_{\mathbb{R}^3_{-}} \partial_t^k \hat{q} \, \nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot \partial_t^k \hat{v} \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}}\mathrm{d}t$ by using Hardy's inequality.
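For completeness, let us indicate why the pointwise bound $\mathfrak{H}_1 - |\mathfrak{H}_2|\, |\nabla_y(h^{\ensuremath{\epsilon}}+h)|^2 \geq 4|\mathfrak{H}_2|$ behind $(\ref{Sect7_Tangential_Estimates_4})$ and $(\ref{Sect7_TimeDer_Estimate_1_Inequality})$ holds. This is a direct verification, assuming (as the expression for $\hat{H}$ in $(\ref{SectB_DifferenceEq1_Corollary_3})$ suggests) that $\mathfrak{H}_1 = \frac{1}{2a}+\frac{1}{2b}$ and $|\mathfrak{H}_2| = \frac{1}{2ab(a+b)}$, where $a=\sqrt{1+|\nabla_y h^{\ensuremath{\epsilon}}|^2}$ and $b=\sqrt{1+|\nabla_y h|^2}$. Then
\begin{equation*}
\frac{\mathfrak{H}_1 - |\mathfrak{H}_2|\, |\nabla_y(h^{\ensuremath{\epsilon}}+h)|^2}{|\mathfrak{H}_2|} = (a+b)^2 - |\nabla_y(h^{\ensuremath{\epsilon}}+h)|^2 \geq 2 + 2\big(ab - |\nabla_y h^{\ensuremath{\epsilon}}|\,|\nabla_y h|\big) \geq 4,
\end{equation*}
since $(a+b)^2 = 2 + |\nabla_y h^{\ensuremath{\epsilon}}|^2 + |\nabla_y h|^2 + 2ab$, $|\nabla_y(h^{\ensuremath{\epsilon}}+h)|^2 \leq |\nabla_y h^{\ensuremath{\epsilon}}|^2 + |\nabla_y h|^2 + 2|\nabla_y h^{\ensuremath{\epsilon}}|\,|\nabla_y h|$, and $ab \geq 1 + |\nabla_y h^{\ensuremath{\epsilon}}|\,|\nabla_y h|$ by the Cauchy--Schwarz inequality. Moreover, $4|\mathfrak{H}_2| = \frac{2}{ab(a+b)} \geq \delta_{\sigma} > 0$ precisely because $a$ and $b$ are bounded from above.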
Denote \begin{equation}\label{Sect7_TimeDer_Estimate_6} \begin{array}{ll} \mathcal{I}_{13} := \|\partial_z \partial_t^{k-1} \hat{q}\|_{L^2}^2 + \big|\partial_t^{k-1} \hat{q}|_{z=0}\big|_{L^2}^2 + |\partial_t^k \hat{h}|_{X^{0,1}}^2 \\[4pt]\hspace{1.15cm} + \|\partial_t^{k-1}\hat{v}\|_{L^2}^2 + \big|\partial_t^{k-1}\hat{v}|_{z=0}\big|_{L^2}^2 \\[8pt]\hspace{0.7cm} \ensuremath{\lesssim} \|\partial_t^k \hat{v}\|_{L^2}^2 + \|\partial_z\partial_t^{k-1} \hat{v}\|_{L^2}^2 + |\partial_t^k \hat{h}|_{X^{0,1}}^2 + O(\ensuremath{\epsilon}), \end{array} \end{equation} then we have \begin{equation}\label{Sect7_TimeDer_Estimate_7} \begin{array}{ll} \int\limits_0^t\int\limits_{\mathbb{R}^3_{-}} \partial_t^k \hat{q} \, \nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot \partial_t^k \hat{v} \,\mathrm{d}\mathcal{V}_t^{\ensuremath{\epsilon}}\mathrm{d}t \ensuremath{\lesssim} \mathcal{I}_{13}|_{t=0} + \mathcal{I}_{13} + \int\limits_0^T \mathcal{I}_{13} \,\mathrm{d}s \\[10pt] \ensuremath{\lesssim} \|\partial_t^k \hat{v}_0\|_{L^2}^2 + \|\partial_z\partial_t^{k-1} \hat{v}_0\|_{L^2}^2 + |\partial_t^k \hat{h}_0|_{X^{0,1}}^2 + \|\partial_t^k \hat{v}\|_{L^2}^2 + \|\partial_z\partial_t^{k-1} \hat{v}\|_{L^2}^2 \\[6pt]\quad + |\partial_t^k \hat{h}|_{X^{0,1}}^2 + \|\partial_t^k \hat{v}\|_{L^4([0,T],L^2)}^2 + \|\partial_z\partial_t^{k-1} \hat{v}\|_{L^4([0,T],L^2)}^2 + |\partial_t^k \hat{h}|_{L^4([0,T],L^2)}^2. \end{array} \end{equation} By $(\ref{Sect7_TimeDer_Estimate_3})$ and $(\ref{Sect7_TimeDer_Estimate_7})$, we get \begin{equation}\label{Sect7_TimeDer_Estimate_8} \begin{array}{ll} \|\partial_t^k \hat{v}\|_{L^2}^2 + g|\partial_t^k \hat{h}|_{L^2}^2 + \sigma|\partial_t^k\nabla_y \hat{h}|_{L^2}^2 \\[6pt] \ensuremath{\lesssim} \|\partial_t^k \hat{v}_0\|_{L^2}^2 + g|\partial_t^k \hat{h}_0|_{L^2}^2 + \sigma|\partial_t^k\nabla_y \hat{h}_0|_{L^2}^2 + \|\partial_z\partial_t^{k-1} \hat{v}_0\|_{L^2}^2 + \|\partial_t^k \hat{v}\|_{L^2}^2 \\[6pt]\quad + \|\partial_z\partial_t^{k-1} \hat{v}\|_{L^2}^2 + |\partial_t^k \hat{h}|_{X^{0,1}}^2 + \|\partial_t^k \hat{v}\|_{L^4([0,T],L^2)}^2 + \|\partial_z \hat{v}\|_{L^4([0,T],X^{k-1})}^2 \\[6pt]\quad + |\partial_t^k \hat{h}|_{L^4([0,T],X^{0,1})}^2 + O(\ensuremath{\epsilon}). \end{array} \end{equation} Squaring $(\ref{Sect7_TimeDer_Estimate_8})$, integrating in time again (see \cite{Wang_Xin_2015}) and applying the integral form of Gronwall's inequality, we have \begin{equation}\label{Sect7_TimeDer_Estimate_9} \begin{array}{ll} \|\partial_t^k \hat{v}\|_{L^4([0,T],L^2)}^2 + g|\partial_t^k \hat{h}|_{L^4([0,T],L^2)}^2 + \sigma|\partial_t^k\nabla_y \hat{h}|_{L^4([0,T],L^2)}^2 \\[6pt] \ensuremath{\lesssim} \|\partial_t^k \hat{v}_0\|_{L^2}^2 + g|\partial_t^k \hat{h}_0|_{L^2}^2 + \sigma|\partial_t^k\nabla_y \hat{h}_0|_{L^2}^2 + \|\partial_z\partial_t^{k-1} \hat{v}_0\|_{L^2}^2 \\[6pt]\quad + \|\partial_z \hat{v}\|_{L^4([0,T],X^{k-1})}^2 + O(\ensuremath{\epsilon}). \end{array} \end{equation} Thus, Lemma $\ref{Sect7_TimeDer_Estimate_Lemma}$ is proved. \end{proof} Based on Lemmas $\ref{Sect7_Height_Estimates_Lemma}$, $\ref{Sect7_Tangential_Estimates_Lemma}$ and $\ref{Sect7_TimeDer_Estimate_Lemma}$, the estimates of tangential derivatives can be closed. The estimates of normal derivatives are the same as in the $\sigma=0$ case. Finally, it is standard to establish the estimates $(\ref{Sect1_Thm7_ConvergenceRates_1})$ and $(\ref{Sect1_Thm7_ConvergenceRates_2})$ in Theorem $\ref{Sect1_Thm_StrongLayer_ST}$.
\appendix \section{Derivation of the Equations and Boundary Conditions} In this appendix, we derive the equations and their boundary conditions for the $\sigma=0$ case. Since $\partial_i^{\varphi^{\ensuremath{\epsilon}}}\varphi^{\ensuremath{\epsilon}} = \partial_i \varphi^{\ensuremath{\epsilon}} - \frac{\partial_i\varphi^{\ensuremath{\epsilon}}}{\partial_z \varphi^{\ensuremath{\epsilon}}}\partial_z \varphi^{\ensuremath{\epsilon}} =0$ and $\partial_z^{\varphi^{\ensuremath{\epsilon}}}\varphi^{\ensuremath{\epsilon}} = \frac{1}{\partial_z\varphi^{\ensuremath{\epsilon}}}\partial_z \varphi^{\ensuremath{\epsilon}} =1$, \begin{equation}\label{SectA_Difference_Transform_1} \begin{array}{ll} \partial_i^{\varphi^{\ensuremath{\epsilon}}}v^{\ensuremath{\epsilon}} - \partial_i^{\varphi} v = \partial_i^{\varphi^{\ensuremath{\epsilon}}}\hat{v} - (\frac{\partial_i \varphi^{\ensuremath{\epsilon}}}{\partial_z \varphi^{\ensuremath{\epsilon}}} - \frac{\partial_i \varphi}{\partial_z \varphi})\partial_z v = \partial_i^{\varphi^{\ensuremath{\epsilon}}}\hat{v} + (\partial_i \varphi - \frac{\partial_i \varphi^{\ensuremath{\epsilon}}}{\partial_z \varphi^{\ensuremath{\epsilon}}}\partial_z \varphi)\frac{1}{\partial_z \varphi}\partial_z v \\[8pt]\hspace{1.95cm} = \partial_i^{\varphi^{\ensuremath{\epsilon}}}\hat{v} + \partial_z^{\varphi} v \partial_i^{\varphi^{\ensuremath{\epsilon}}}\varphi = \partial_i^{\varphi^{\ensuremath{\epsilon}}}\hat{v} + \partial_z^{\varphi} v \partial_i^{\varphi^{\ensuremath{\epsilon}}}\varphi -\partial_z^{\varphi} v \partial_i^{\varphi^{\ensuremath{\epsilon}}}\varphi^{\ensuremath{\epsilon}} \\[8pt]\hspace{1.95cm} = \partial_i^{\varphi^{\ensuremath{\epsilon}}}\hat{v} -\partial_z^{\varphi} v \partial_i^{\varphi^{\ensuremath{\epsilon}}}\hat{\varphi} = \partial_i^{\varphi^{\ensuremath{\epsilon}}}\hat{v} - \partial_i^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta} \, \partial_z^{\varphi} v, \hspace{0.5cm} i=t,1,2,\\[11pt] \partial_z^{\varphi^{\ensuremath{\epsilon}}}v^{\ensuremath{\epsilon}} - \partial_z^{\varphi} v = \partial_z^{\varphi^{\ensuremath{\epsilon}}} \hat{v} + (\frac{1}{\partial_z \varphi^{\ensuremath{\epsilon}}} - \frac{1}{\partial_z \varphi}) \partial_z v \\[8pt]\hspace{1.95cm} = \partial_z^{\varphi^{\ensuremath{\epsilon}}} \hat{v} + (\frac{1}{\partial_z \varphi^{\ensuremath{\epsilon}}}\partial_z \varphi - 1) \frac{1}{\partial_z \varphi}\partial_z v = \partial_z^{\varphi^{\ensuremath{\epsilon}}} \hat{v} + (\partial_z^{\varphi^{\ensuremath{\epsilon}}} \varphi - 1) \frac{1}{\partial_z \varphi}\partial_z v \\[8pt]\hspace{1.95cm} = \partial_z^{\varphi^{\ensuremath{\epsilon}}} \hat{v} + (\partial_z^{\varphi^{\ensuremath{\epsilon}}} \varphi - \partial_z^{\varphi^{\ensuremath{\epsilon}}} \varphi^{\ensuremath{\epsilon}}) \partial_z^{\varphi} v \\[8pt]\hspace{1.95cm} = \partial_z^{\varphi^{\ensuremath{\epsilon}}} \hat{v} - \partial_z^{\varphi} v\partial_z^{\varphi^{\ensuremath{\epsilon}}}\hat{\varphi} = \partial_z^{\varphi^{\ensuremath{\epsilon}}} \hat{v} - \partial_z^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta} \, \partial_z^{\varphi} v. 
\end{array} \end{equation} Similarly, we have \begin{equation}\label{SectA_Difference_Transform_2} \begin{array}{ll} \partial_i^{\varphi^{\ensuremath{\epsilon}}}q^{\ensuremath{\epsilon}} - \partial_i^{\varphi} q = \partial_i^{\varphi^{\ensuremath{\epsilon}}}\hat{q} - \partial_i^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta} \, \partial_z^{\varphi} q, \hspace{0.5cm} i=t,1,2, 3 \\[8pt] v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} v^{\ensuremath{\epsilon}} - v\cdot\nabla^{\varphi} v = v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} \hat{v} - v^{\ensuremath{\epsilon}}\cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta}\, \partial_z^{\varphi} v + \hat{v}\cdot\nabla^{\varphi} v, \\[8pt] \omega^{\ensuremath{\epsilon}} -\omega = \nabla^{\varphi^{\ensuremath{\epsilon}}}\times v^{\ensuremath{\epsilon}} - \nabla^{\varphi}\times v = \nabla^{\varphi^{\ensuremath{\epsilon}}} \times \hat{v} - \nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta} \times \partial_z^{\varphi} v. \end{array} \end{equation} \begin{lemma}\label{SectA_DifferenceEq1_Lemma} $(\hat{v} = v^{\ensuremath{\epsilon}} -v,\ \hat{h} =h^{\ensuremath{\epsilon}} -h,\ \hat{q} = q^{\ensuremath{\epsilon}} -q)$ satisfy the equations $(\ref{Sect1_T_Derivatives_Difference_Eq})$. \end{lemma} \begin{proof} Plug $(\ref{SectA_Difference_Transform_1}),(\ref{SectA_Difference_Transform_2})$ into \begin{equation}\label{SectA_Difference_Eq1_4} \begin{array}{ll} \partial_t^{\varphi^\ensuremath{\epsilon}} v^\ensuremath{\epsilon} -\partial_t^{\varphi} v + v^\ensuremath{\epsilon}\cdot\nabla^{\varphi^\ensuremath{\epsilon}} v^\ensuremath{\epsilon} - v\cdot\nabla^{\varphi} v + \nabla^{\varphi^\ensuremath{\epsilon}} q^\ensuremath{\epsilon} - \nabla^{\varphi} q \\[5pt] = 2\ensuremath{\epsilon} \nabla^{\varphi^{\ensuremath{\epsilon}}} \cdot\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} (v^{\ensuremath{\epsilon}}-v) + \ensuremath{\epsilon} \triangle^{\varphi^{\ensuremath{\epsilon}}} v, \end{array} \end{equation} then we get the equation $(\ref{Sect1_T_Derivatives_Difference_Eq})_1$. It follows from the divergence free condition that \begin{equation}\label{SectA_Difference_Eq1_5} \begin{array}{ll} 0 = \nabla^{\varphi^\ensuremath{\epsilon}}\cdot v^\ensuremath{\epsilon} - \nabla^{\varphi}\cdot v = \sum\limits_{i=1}^3(\partial_i^{\varphi^{\ensuremath{\epsilon}}}\hat{v}^i -\partial_z^{\varphi} v^i \partial_i^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta}) = \nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot \hat{v} - \partial_z^{\varphi}v \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta}. \end{array} \end{equation} It follows from the kinetical boundary condition that \begin{equation}\label{SectA_Difference_Eq1_6} \begin{array}{ll} \partial_t\hat{h} = \partial_t h^{\ensuremath{\epsilon}} -\partial_t h = v^\ensuremath{\epsilon}(t,y,0)\cdot \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} -v(t,y,0)\cdot \ensuremath{\textbf{N}}, \\[6pt] v^\ensuremath{\epsilon}\cdot \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} -v\cdot \ensuremath{\textbf{N}} = \hat{v}\cdot \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} +v\cdot \hat{\ensuremath{\textbf{N}}} = v\cdot (-\nabla_y \hat{h},0) + \hat{v}\cdot \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}, \\[6pt] \partial_t \hat{h} + v_y\cdot \nabla_y \hat{h} = \hat{v}\cdot\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}. \end{array} \end{equation} The dynamical boundary condition for the Euler equation with $\sigma=0$ is a scalar equation, that is $q= gh$. 
Multiplying it by any vector, in particular by $\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}$, we may write it as $q \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}= gh \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}$. It follows from the dynamical boundary condition that \begin{equation}\label{SectA_Difference_Eq1_7} \begin{array}{ll} q^\ensuremath{\epsilon} \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} -q\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} -2\ensuremath{\epsilon} \mathcal{S}^{\varphi^\ensuremath{\epsilon}}(v^\ensuremath{\epsilon} -v)\,\ensuremath{\textbf{N}}^\ensuremath{\epsilon} =gh^{\ensuremath{\epsilon}} \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} -gh \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} + 2\ensuremath{\epsilon} \mathcal{S}^{\varphi^\ensuremath{\epsilon}}v\,\ensuremath{\textbf{N}}^\ensuremath{\epsilon}, \\[6pt] \hat{q}\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} -2\ensuremath{\epsilon} \mathcal{S}^{\varphi^\ensuremath{\epsilon}}\hat{v}\,\ensuremath{\textbf{N}}^\ensuremath{\epsilon} =g \hat{h} \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} + 2\ensuremath{\epsilon} \mathcal{S}^{\varphi^\ensuremath{\epsilon}}v\,\ensuremath{\textbf{N}}^\ensuremath{\epsilon}, \\[6pt] (\hat{q} -g \hat{h})\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} -2\ensuremath{\epsilon} \mathcal{S}^{\varphi^\ensuremath{\epsilon}}\hat{v}\,\ensuremath{\textbf{N}}^\ensuremath{\epsilon} = 2\ensuremath{\epsilon} \mathcal{S}^{\varphi^\ensuremath{\epsilon}}v\,\ensuremath{\textbf{N}}^\ensuremath{\epsilon}. \end{array} \end{equation} Thus, Lemma $\ref{SectA_DifferenceEq1_Lemma}$ is proved. \end{proof} \begin{lemma}\label{SectA_DifferenceEq2_Lemma} Assume $0\leq \ell +|\alpha|\leq k, \ 0\leq \ell\leq k-1, \ |\alpha|\geq 1$, and let $\hat{V}^{\ell,\alpha} = \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} - \partial_z^{\varphi}v \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\varphi}$, $\hat{Q}^{\ell,\alpha} = \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{q} - \partial_z^{\varphi}q \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\varphi}$. Then $\hat{V}^{\ell,\alpha}, \hat{Q}^{\ell,\alpha}$ satisfy the equations $(\ref{Sect5_TangentialEstimates_Diff_Eq})$. \end{lemma} \begin{proof} Applying $\partial_t^{\ell}\mathcal{Z}^{\alpha}$ to the equations $(\ref{Sect1_T_Derivatives_Difference_Eq})$, we obtain $(\ref{Sect5_TangentialEstimates_Diff_Eq})$.
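Let us note, for orientation, that $\hat{V}^{\ell,\alpha}$ and $\hat{Q}^{\ell,\alpha}$ play the role of Alinhac's good unknowns for the difference $(\hat{v},\hat{q})$: subtracting $\partial_z^{\varphi}v\, \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\varphi}$ and $\partial_z^{\varphi}q\, \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\varphi}$ cancels the highest-order contributions that arise when $\partial_t^{\ell}\mathcal{Z}^{\alpha}$ falls on the change of variables, so that the commutator terms collected in $\mathcal{I}_4$ below can be treated as lower-order terms.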
The derivation of the main equation $(\ref{Sect5_TangentialEstimates_Diff_Eq})_1$ is as follows: \begin{equation}\label{SectA_Difference_Eq2_1} \begin{array}{ll} \partial_t^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} + [\partial_t^{\ell}\mathcal{Z}^{\alpha},\partial_t^{\varphi^{\ensuremath{\epsilon}}}]\hat{v} -\partial_z^{\varphi} v \partial_t^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\varphi} - [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \partial_z^{\varphi} v \partial_t^{\varphi^{\ensuremath{\epsilon}}}]\hat{\varphi} \\[7pt]\quad + v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} + [\partial_t^{\ell}\mathcal{Z}^{\alpha}, v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}] \hat{v} - \partial_z^{\varphi} v \, v^{\ensuremath{\epsilon}}\cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\varphi} \\[7pt]\quad - [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \partial_z^{\varphi} v \, v^{\ensuremath{\epsilon}}\cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}]\hat{\varphi} + \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}\cdot\nabla^{\varphi} v + [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \nabla^{\varphi} v\cdot]\hat{v} + \nabla^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{q} \\[7pt]\quad + [\partial_t^{\ell}\mathcal{Z}^{\alpha},\nabla^{\varphi^{\ensuremath{\epsilon}}}] \hat{q} - \partial_z^{\varphi} q\nabla^{\varphi^{\ensuremath{\epsilon}}}\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\varphi} - [\partial_t^{\ell}\mathcal{Z}^{\alpha},\partial_z^{\varphi} q\nabla^{\varphi^{\ensuremath{\epsilon}}}]\hat{\varphi} \\[7pt]\quad = \ensuremath{\epsilon}\partial_t^{\ell}\mathcal{Z}^{\alpha}\triangle^{\varphi^\ensuremath{\epsilon}} \hat{v} + \ensuremath{\epsilon}\partial_t^{\ell}\mathcal{Z}^{\alpha}\triangle^{\varphi^\ensuremath{\epsilon}} v, \\[13pt] \partial_t^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} -\partial_z^{\varphi} v \partial_t^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\varphi} + v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} - \partial_z^{\varphi} v \, v^{\ensuremath{\epsilon}}\cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\varphi} \\[8pt]\quad + \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}\cdot\nabla^{\varphi} v + \nabla^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{q} - \partial_z^{\varphi} q\nabla^{\varphi^{\ensuremath{\epsilon}}}\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\varphi} - 2\ensuremath{\epsilon}\partial_t^{\ell}\mathcal{Z}^{\alpha}\nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \hat{v} \\[8pt]\quad = \ensuremath{\epsilon}\partial_t^{\ell}\mathcal{Z}^{\alpha}\triangle^{\varphi^\ensuremath{\epsilon}} v - [\partial_t^{\ell}\mathcal{Z}^{\alpha},\partial_t^{\varphi^{\ensuremath{\epsilon}}}]\hat{v} + [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \partial_z^{\varphi} v \partial_t^{\varphi^{\ensuremath{\epsilon}}}]\hat{\varphi} - [\partial_t^{\ell}\mathcal{Z}^{\alpha}, v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}] \hat{v} \\[8pt]\quad + [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \partial_z^{\varphi} v \, v^{\ensuremath{\epsilon}}\cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}]\hat{\varphi} - 
[\partial_t^{\ell}\mathcal{Z}^{\alpha}, \nabla^{\varphi} v\cdot]\hat{v} - [\partial_t^{\ell}\mathcal{Z}^{\alpha},\nabla^{\varphi^{\ensuremath{\epsilon}}}] \hat{q} + [\partial_t^{\ell}\mathcal{Z}^{\alpha},\partial_z^{\varphi} q\nabla^{\varphi^{\ensuremath{\epsilon}}}]\hat{\varphi}, \\[13pt] \partial_t^{\varphi^{\ensuremath{\epsilon}}} (\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} -\partial_z^{\varphi} v \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\varphi}) + v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}(\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} - \partial_z^{\varphi} v \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\varphi}) \\[8pt]\quad + \nabla^{\varphi^{\ensuremath{\epsilon}}} (\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{q} - \partial_z^{\varphi} q\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\varphi}) - 2\ensuremath{\epsilon}\nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} = \mathcal{I}_4, \\[13pt] \partial_t^{\varphi^{\ensuremath{\epsilon}}} \hat{V}^{\ell,\alpha} + v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} \hat{V}^{\ell,\alpha} + \nabla^{\varphi^{\ensuremath{\epsilon}}} \hat{Q}^{\ell,\alpha} - 2\ensuremath{\epsilon}\nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} = \mathcal{I}_4. \end{array} \end{equation} The derivation of the divergence free condition $(\ref{Sect5_TangentialEstimates_Diff_Eq})_2$ is as follows: \begin{equation}\label{SectA_Difference_Eq2_2} \begin{array}{ll} \nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} + [\partial_t^{\ell}\mathcal{Z}^{\alpha},\nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot] \hat{v} - \partial_z^{\varphi}v \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\eta} - [\partial_t^{\ell}\mathcal{Z}^{\alpha},\partial_z^{\varphi}v \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}]\hat{\eta} =0, \\[12pt] \nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} - \partial_z^{\varphi}v \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\eta} \\[8pt]\quad = -[\partial_t^{\ell}\mathcal{Z}^{\alpha},\nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot] \hat{v} + [\partial_t^{\ell}\mathcal{Z}^{\alpha},\partial_z^{\varphi}v \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}]\hat{\eta}, \\[12pt] \nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot (\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} - \partial_z^{\varphi}v \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\eta}) + \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\eta} \nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot \partial_z^{\varphi}v \\[8pt]\quad = -[\partial_t^{\ell}\mathcal{Z}^{\alpha},\nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot] \hat{v} + [\partial_t^{\ell}\mathcal{Z}^{\alpha},\partial_z^{\varphi}v \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}]\hat{\eta}, \end{array} \end{equation} \begin{equation*} \begin{array}{ll} \nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot \hat{V}^{\ell,\alpha} = -[\partial_t^{\ell}\mathcal{Z}^{\alpha},\nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot] \hat{v} + [\partial_t^{\ell}\mathcal{Z}^{\alpha},\partial_z^{\varphi}v \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}]\hat{\eta} - \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\eta} \nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot \partial_z^{\varphi}v, \end{array} \end{equation*} Next, we derive the kinetical 
boundary condition $(\ref{Sect5_TangentialEstimates_Diff_Eq})_3$. Apply $\partial_t^{\ell}\mathcal{Z}^{\alpha}$ to Navier-Stokes and Euler kinetical boundary conditions, we get \begin{equation}\label{SectA_Difference_Eq2_3} \begin{array}{ll} \partial_t \partial_t^{\ell}\mathcal{Z}^{\alpha} h^{\ensuremath{\epsilon}} + v_y^{\ensuremath{\epsilon}} \cdot\nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} h^{\ensuremath{\epsilon}} = \partial_t^{\ell}\mathcal{Z}^{\alpha}v^{\ensuremath{\epsilon}}\cdot \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} + [\partial_t^{\ell}\mathcal{Z}^{\alpha}, v^{\ensuremath{\epsilon}},\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}], \\[8pt] \partial_t \partial_t^{\ell}\mathcal{Z}^{\alpha} h + v_y \cdot\nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} h = \partial_t^{\ell}\mathcal{Z}^{\alpha}v\cdot \ensuremath{\textbf{N}} + [\partial_t^{\ell}\mathcal{Z}^{\alpha}, v,\ensuremath{\textbf{N}}], \end{array} \end{equation} then the kinetical boundary condition $(\ref{Sect5_TangentialEstimates_Diff_Eq})_3$ is derived as follows: \begin{equation}\label{SectA_Difference_Eq2_4} \begin{array}{ll} \partial_t \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h} + v_y^{\ensuremath{\epsilon}}\cdot \nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h} + \hat{v}_y \cdot \nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} h = \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} - \partial_y \hat{h}\cdot \partial_t^{\ell}\mathcal{Z}^{\alpha}v_y \\[7pt]\quad + [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \hat{v},\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}] - [\partial_t^{\ell}\mathcal{Z}^{\alpha}, v_y, \partial_y \hat{h}], \\[12pt] \partial_t \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h} + v_y^{\ensuremath{\epsilon}}\cdot \nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h} - \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} = - \hat{v}_y \cdot \nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} h - \partial_y \hat{h}\cdot \partial_t^{\ell}\mathcal{Z}^{\alpha}v_y \\[7pt]\quad + [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \hat{v},\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}] - [\partial_t^{\ell}\mathcal{Z}^{\alpha}, v_y, \partial_y \hat{h}], \\[12pt] \partial_t \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h} + v_y^{\ensuremath{\epsilon}}\cdot \nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h} - \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot \hat{V}^{\ell,\alpha} = \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}\cdot \partial_z^{\varphi} v \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\eta} \\[8pt]\quad - \hat{v}_y \cdot \nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} h - \partial_y \hat{h}\cdot \partial_t^{\ell}\mathcal{Z}^{\alpha}v_y + [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \hat{v},\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}] - [\partial_t^{\ell}\mathcal{Z}^{\alpha}, v_y, \partial_y \hat{h}], \end{array} \end{equation} Finally, we derive the dynamical boundary condition $(\ref{Sect5_TangentialEstimates_Diff_Eq})_4$. 
Apply $\partial_t^{\ell}\mathcal{Z}^{\alpha}$ to Navier-Stokes and Euler dynamical boundary conditions, we get \begin{equation}\label{SectA_Difference_Eq2_5} \begin{array}{ll} (\partial_t^{\ell}\mathcal{Z}^{\alpha}q^{\ensuremath{\epsilon}} - g\partial_t^{\ell}\mathcal{Z}^{\alpha}h^{\ensuremath{\epsilon}})\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} -2\ensuremath{\epsilon} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}v^{\ensuremath{\epsilon}}\,\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} \\[8pt]\quad = 2\ensuremath{\epsilon} [\partial_t^{\ell}\mathcal{Z}^{\alpha},\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}] v^{\ensuremath{\epsilon}}\,\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} + (2\ensuremath{\epsilon} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}v^{\ensuremath{\epsilon}} - (q^{\ensuremath{\epsilon}}-g h^{\ensuremath{\epsilon}}))\,\partial_t^{\ell}\mathcal{Z}^{\alpha}\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} \\[8pt]\qquad - [\partial_t^{\ell}\mathcal{Z}^{\alpha},q^{\ensuremath{\epsilon}}-g h^{\ensuremath{\epsilon}},\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}] +2\ensuremath{\epsilon} [\partial_t^{\ell}\mathcal{Z}^{\alpha},\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}v^{\ensuremath{\epsilon}}, \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}], \\[11pt] \partial_t^{\ell}\mathcal{Z}^{\alpha}q = g\partial_t^{\ell}\mathcal{Z}^{\alpha}h, \end{array} \end{equation} then the dynamical boundary condition $(\ref{Sect5_TangentialEstimates_Diff_Eq})_4$ is derived as follows: \begin{equation}\label{SectA_Difference_Eq2_6} \begin{array}{ll} (\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{q} - g\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h})\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} -2\ensuremath{\epsilon} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}\,\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} \\[8pt]\quad = 2\ensuremath{\epsilon} [\partial_t^{\ell}\mathcal{Z}^{\alpha},\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}] v^{\ensuremath{\epsilon}}\,\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} + (2\ensuremath{\epsilon} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}v^{\ensuremath{\epsilon}} - 2\ensuremath{\epsilon} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}v^{\ensuremath{\epsilon}}\ensuremath{\textbf{n}}^{\ensuremath{\epsilon}}\cdot\ensuremath{\textbf{n}}^{\ensuremath{\epsilon}})\,\partial_t^{\ell}\mathcal{Z}^{\alpha}\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} \\[8pt]\qquad - [\partial_t^{\ell}\mathcal{Z}^{\alpha},2\ensuremath{\epsilon} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}v^{\ensuremath{\epsilon}}\ensuremath{\textbf{n}}^{\ensuremath{\epsilon}}\cdot\ensuremath{\textbf{n}}^{\ensuremath{\epsilon}},\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}] +2\ensuremath{\epsilon} [\partial_t^{\ell}\mathcal{Z}^{\alpha},\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}v^{\ensuremath{\epsilon}}, \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}] + 2\ensuremath{\epsilon} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}v\,\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}, \\[12pt] \hat{Q}^{\ell,\alpha}\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} -2\ensuremath{\epsilon} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}\,\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} - (g-\partial_z^{\varphi}q)\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h} \,\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} \\[8pt]\quad = 2\ensuremath{\epsilon} 
[\partial_t^{\ell}\mathcal{Z}^{\alpha},\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}] v^{\ensuremath{\epsilon}}\,\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} + (2\ensuremath{\epsilon} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}v^{\ensuremath{\epsilon}} - 2\ensuremath{\epsilon} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}v^{\ensuremath{\epsilon}}\ensuremath{\textbf{n}}^{\ensuremath{\epsilon}}\cdot\ensuremath{\textbf{n}}^{\ensuremath{\epsilon}})\,\partial_t^{\ell}\mathcal{Z}^{\alpha}\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} \\[8pt]\qquad - [\partial_t^{\ell}\mathcal{Z}^{\alpha},2\ensuremath{\epsilon} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}v^{\ensuremath{\epsilon}}\ensuremath{\textbf{n}}^{\ensuremath{\epsilon}}\cdot\ensuremath{\textbf{n}}^{\ensuremath{\epsilon}},\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}] +2\ensuremath{\epsilon} [\partial_t^{\ell}\mathcal{Z}^{\alpha},\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}v^{\ensuremath{\epsilon}}, \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}] + 2\ensuremath{\epsilon} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}v\,\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}, \end{array} \end{equation} Thus, Lemma $\ref{SectA_DifferenceEq2_Lemma}$ is proved. \end{proof} \begin{lemma}\label{SectA_DifferenceEq2_Lemma_Time} Assume $0\leq \ell \leq k-1, |\alpha|=0$, let $\hat{V}^{\ell,0} = \partial_t^{\ell}\hat{v} - \partial_z^{\varphi}v \partial_t^{\ell}\hat{\varphi}$, then the main equation of $\hat{V}^{\ell,0}$ and its kinetical boundary condition satisfy $(\ref{Sect5_Tangential_Estimates_Time})$. \end{lemma} \begin{proof} The derivation of the main equation of $\hat{V}^{\ell,0}$ is as follows: \begin{equation}\label{SectA_DifferenceEq2_Lemma_Time_1} \begin{array}{ll} \partial_t^{\varphi^{\ensuremath{\epsilon}}} (\partial_t^{\ell}\hat{v} -\partial_z^{\varphi} v \partial_t^{\ell}\hat{\varphi}) + \partial_t^{\ell}\hat{\varphi}\partial_t^{\varphi^{\ensuremath{\epsilon}}}\partial_z^{\varphi} v + v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}(\partial_t^{\ell}\hat{v} - \partial_z^{\varphi} v \partial_t^{\ell}\hat{\varphi}) + \partial_t^{\ell}\hat{v}\cdot\nabla^{\varphi} v\\[8pt]\quad + \partial_t^{\ell}\hat{\varphi}\, v^{\ensuremath{\epsilon}}\cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}\partial_z^{\varphi} v + \partial_t^{\ell}\nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{q} - \partial_z^{\varphi} q\nabla^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\hat{\varphi} - 2\ensuremath{\epsilon}\partial_t^{\ell}\nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \hat{v} \\[8pt]\quad = \ensuremath{\epsilon}\partial_t^{\ell}\triangle^{\varphi^\ensuremath{\epsilon}} v^\ensuremath{\epsilon} - [\partial_t^{\ell},\partial_t^{\varphi^{\ensuremath{\epsilon}}}]\hat{v} + [\partial_t^{\ell}, \partial_z^{\varphi} v \partial_t^{\varphi^{\ensuremath{\epsilon}}}]\hat{\varphi} - [\partial_t^{\ell}, v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}] \hat{v} \\[8pt]\quad + [\partial_t^{\ell}, \partial_z^{\varphi} v \, v^{\ensuremath{\epsilon}}\cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}]\hat{\varphi} - [\partial_t^{\ell}, \nabla^{\varphi} v\cdot]\hat{v} + [\partial_t^{\ell},\partial_z^{\varphi} q\nabla^{\varphi^{\ensuremath{\epsilon}}}]\hat{\varphi}, \\[12pt] \partial_t^{\varphi^{\ensuremath{\epsilon}}} \hat{V}^{\ell,0} + v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{V}^{\ell,0} - 
2\ensuremath{\epsilon}\nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\hat{v} = \mathcal{I}_5. \end{array} \end{equation} The derivation of the kinetical boundary condition is the same as in Lemma $\ref{SectA_DifferenceEq2_Lemma}$, but with $\alpha =0$ in $(\ref{Sect5_TangentialEstimates_Diff_Eq})_3$. Thus, Lemma $\ref{SectA_DifferenceEq2_Lemma_Time}$ is proved. \end{proof} \begin{lemma}\label{SectA_Vorticity_Eq_Lemma} $\hat{\omega}_h =\omega_h^{\ensuremath{\epsilon}} -\omega_h$ satisfies the equations $(\ref{Sect1_N_Derivatives_Difference_Eq})$. \end{lemma} \begin{proof} By Lemma $\ref{Sect2_Vorticity_H_Eq_BC_Lemma}$ and $(\ref{Sect2_Vorticity_Estimate_4})$, the tangential components of the Navier-Stokes vorticity $\omega_h^{\ensuremath{\epsilon}}$ satisfy \begin{equation}\label{SectA_Vorticity_Lemma_Eq_1} \left\{\begin{array}{ll} \partial_t^{\varphi^{\ensuremath{\epsilon}}} \omega_h^{\ensuremath{\epsilon}} + v^{\ensuremath{\epsilon}}\cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}\omega_h^{\ensuremath{\epsilon}} - \ensuremath{\epsilon}\triangle^{\varphi^{\ensuremath{\epsilon}}}\omega_h^{\ensuremath{\epsilon}} = \vec{\textsf{F}}^0[\nabla\varphi^{\ensuremath{\epsilon}}](\omega_h^{\ensuremath{\epsilon}},\partial_j v^{\ensuremath{\epsilon},i}), \\[7pt] \omega^{\ensuremath{\epsilon},1}|_{z=0} =\textsf{F}^1 [\nabla\varphi^{\ensuremath{\epsilon}}](\partial_j v^{\ensuremath{\epsilon},i}), \\[7pt] \omega^{\ensuremath{\epsilon},2}|_{z=0} =\textsf{F}^2 [\nabla\varphi^{\ensuremath{\epsilon}}](\partial_j v^{\ensuremath{\epsilon},i}), \end{array}\right. \end{equation} Similarly to the arguments in $(\ref{Sect2_Vorticity_H_Eq_1}), (\ref{Sect2_Vorticity_H_Eq_2})$, the tangential components of the Euler vorticity $\omega_h$ satisfy \begin{equation}\label{SectA_Vorticity_Lemma_Eq_2} \left\{\begin{array}{ll} \partial_t^{\varphi} \omega_h + v\cdot\nabla^{\varphi}\omega_h = \vec{\textsf{F}}^0[\nabla\varphi](\omega_h,\partial_j v^i), \\[7pt] \omega^1|_{z=0} = \partial_2^{\varphi} v^3 - \partial_z^{\varphi} v^2 = \partial_2 v^3 -\frac{\partial_2\varphi}{\partial_z\varphi}\partial_z v^3 - \frac{1}{\partial_z\varphi}\partial_z v^2 := \omega^{b,1}, \\[7pt] \omega^2|_{z=0} = \partial_z^{\varphi} v^1 - \partial_1^{\varphi} v^3 = \frac{1}{\partial_z\varphi}\partial_z v^1 - \partial_1 v^3 +\frac{\partial_1\varphi}{\partial_z\varphi}\partial_z v^3 := \omega^{b,2}, \end{array}\right.
\end{equation} By $(\ref{SectA_Vorticity_Lemma_Eq_1})-(\ref{SectA_Vorticity_Lemma_Eq_2})$, we get \begin{equation}\label{SectA_Vorticity_Lemma_Eq_3} \left\{\begin{array}{ll} \partial_t^{\varphi^{\ensuremath{\epsilon}}} \hat{\omega}_h - \partial_z^{\varphi} \omega_h \partial_t^{\varphi^{\ensuremath{\epsilon}}} \hat{\eta} + v^{\ensuremath{\epsilon}}\cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\omega}_h - \partial_z^{\varphi} \omega_h v^{\ensuremath{\epsilon}}\cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta} + \hat{v}\cdot\nabla^{\varphi}\omega_h - \ensuremath{\epsilon}\triangle^{\varphi^{\ensuremath{\epsilon}}}\hat{\omega}_h \\[6pt]\quad = \vec{\textsf{F}}^0[\nabla\varphi^{\ensuremath{\epsilon}}](\omega_h^{\ensuremath{\epsilon}},\partial_j v^{\ensuremath{\epsilon},i}) - \vec{\textsf{F}}^0[\nabla\varphi](\omega_h,\partial_j v^i) + \ensuremath{\epsilon}\triangle^{\varphi^{\ensuremath{\epsilon}}}\omega_h, \\[7pt] \hat{\omega}^1|_{z=0} =\textsf{F}^1 [\nabla\varphi^{\ensuremath{\epsilon}}](\partial_j v^{\ensuremath{\epsilon},i}) - \omega^{b,1}, \\[6pt] \hat{\omega}^2|_{z=0} =\textsf{F}^2 [\nabla\varphi^{\ensuremath{\epsilon}}](\partial_j v^{\ensuremath{\epsilon},i}) - \omega^{b,2}. \end{array}\right. \end{equation} Thus, Lemma $\ref{SectA_Vorticity_Eq_Lemma}$ is proved. \end{proof} \section{Derivation of the Equations for the Surface Tension} In this appendix, we derive the equations and their boundary conditions for the $\sigma>0$ case. \begin{lemma}\label{SectB_DifferenceEq1_Corollary} $(\hat{v} = v^{\ensuremath{\epsilon}} -v,\ \hat{h} =h^{\ensuremath{\epsilon}} -h,\ \hat{q} = q^{\ensuremath{\epsilon}} -q)$ satisfy the equations $(\ref{Sect7_DifferenceEq_1})$. \end{lemma} \begin{proof} The surface tension term appears only in the dynamical boundary condition, so we only need to derive the difference equation of the dynamical boundary condition; the other equations and the kinetical boundary condition are the same as in the $\sigma=0$ case, see $(\ref{Sect1_T_Derivatives_Difference_Eq})$. The dynamical boundary condition for the Euler equation with $\sigma>0$ is a scalar equation, that is $q= gh -\sigma H$. Multiplying it by any vector, in particular by $\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}$, we may write it as $q \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}= gh \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} -\sigma H \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}$. Denoting $\hat{H} = H^{\ensuremath{\epsilon}} -H$, it follows from the dynamical boundary condition that \begin{equation}\label{SectB_DifferenceEq1_Corollary_1} \begin{array}{ll} q^\ensuremath{\epsilon} \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} -q\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} -2\ensuremath{\epsilon} \mathcal{S}^{\varphi^\ensuremath{\epsilon}}v^{\ensuremath{\epsilon}}\,\ensuremath{\textbf{N}}^\ensuremath{\epsilon} =gh^{\ensuremath{\epsilon}} \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} -gh \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} -\sigma H^{\ensuremath{\epsilon}} \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} + \sigma H \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}, \\[7pt] \hat{q}\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} -2\ensuremath{\epsilon} \mathcal{S}^{\varphi^\ensuremath{\epsilon}}\hat{v}\,\ensuremath{\textbf{N}}^\ensuremath{\epsilon} =(g \hat{h} -\sigma \hat{H})\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} + 2\ensuremath{\epsilon} \mathcal{S}^{\varphi^\ensuremath{\epsilon}}v\,\ensuremath{\textbf{N}}^\ensuremath{\epsilon}.
\end{array} \end{equation} $\hat{v},\hat{h},\hat{q}$ satisfy the following equations \begin{equation}\label{SectB_DifferenceEq1_Corollary_2} \left\{\begin{array}{ll} \partial_t^{\varphi^{\ensuremath{\epsilon}}}\hat{v}-\partial_z^{\varphi} v \partial_t^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta} + v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} \hat{v} - v^{\ensuremath{\epsilon}}\cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta}\, \partial_z^{\varphi} v + \hat{v}\cdot\nabla^{\varphi} v \\[6pt]\quad + \nabla^{\varphi^{\ensuremath{\epsilon}}} \hat{q} - \partial_z^{\varphi} q\nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta} = 2\ensuremath{\epsilon} \nabla^{\varphi^{\ensuremath{\epsilon}}} \cdot\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \hat{v} + \ensuremath{\epsilon}\triangle^{\varphi^{\ensuremath{\epsilon}}} v, \hspace{2cm} x\in\mathbb{R}^3_{-}, \\[7pt] \nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot \hat{v} - \partial_z^{\varphi}v \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}\hat{\eta} =0, \hspace{5.27cm} x\in\mathbb{R}^3_{-},\\[7pt] \partial_t \hat{h} + v_y\cdot \nabla \hat{h} = \hat{v}\cdot\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}, \hspace{5.66cm} \{z=0\}, \\[6pt] \hat{q}\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} -2\ensuremath{\epsilon} \mathcal{S}^{\varphi^\ensuremath{\epsilon}}\hat{v}\,\ensuremath{\textbf{N}}^\ensuremath{\epsilon} =(g \hat{h} -\sigma \hat{H})\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} + 2\ensuremath{\epsilon} \mathcal{S}^{\varphi^\ensuremath{\epsilon}}v\,\ensuremath{\textbf{N}}^\ensuremath{\epsilon}, \hspace{1.88cm} \{z=0\}, \\[6pt] (\hat{v},\hat{h})|_{t=0} = (v_0^\ensuremath{\epsilon} -v_0,h_0^\ensuremath{\epsilon} -h_0), \end{array}\right. \end{equation} where \begin{equation}\label{SectB_DifferenceEq1_Corollary_3} \begin{array}{ll} \hat{H} = \nabla_y \cdot \Big(\frac{\nabla_y h^{\ensuremath{\epsilon}}}{\sqrt{1+|\nabla_y h^{\ensuremath{\epsilon}}|^2}}- \frac{\nabla_y h}{\sqrt{1+|\nabla_y h|^2}}\Big) \\[9pt]\quad = \nabla_y \cdot \Big(\nabla_y \hat{h} \big(\frac{1}{2\sqrt{1+|\nabla_y h^{\ensuremath{\epsilon}}|^2}}+ \frac{1}{2\sqrt{1+|\nabla_y h|^2}}\big)\Big) \\[11pt]\qquad + \nabla_y \cdot \Big(\frac{\sqrt{1+|\nabla_y h|^2} - \sqrt{1+|\nabla_y h^{\ensuremath{\epsilon}}|^2}}{\sqrt{1+|\nabla_y h^{\ensuremath{\epsilon}}|^2}\sqrt{1+|\nabla_y h|^2}}\cdot \frac{\nabla_y h^{\ensuremath{\epsilon}} +\nabla_y h}{2}\Big) \\[10pt]\quad = \nabla_y \cdot \Big(\nabla_y\hat{h} \big(\frac{1}{2\sqrt{1+|\nabla_y h^{\ensuremath{\epsilon}}|^2}}+ \frac{1}{2\sqrt{1+|\nabla_y h|^2}}\big)\Big) \\[10pt]\qquad - \nabla_y \cdot \Big(\big(\frac{\nabla_y\hat{h}\cdot\nabla_y(h^{\ensuremath{\epsilon}}+h)} {2\sqrt{1+|\nabla_y h^{\ensuremath{\epsilon}}|^2}\sqrt{1+|\nabla_y h|^2}(\sqrt{1+|\nabla_y h^{\ensuremath{\epsilon}}|^2} + \sqrt{1+|\nabla_y h|^2})}\big) \nabla_y (h^{\ensuremath{\epsilon}} + h)\Big). \end{array} \end{equation} Plug $(\ref{SectB_DifferenceEq1_Corollary_3})$ into $(\ref{SectB_DifferenceEq1_Corollary_2})$, we obtain $(\ref{Sect7_DifferenceEq_1})$ and $(\ref{Sect7_DifferenceEq_2})$. Thus, Lemma $\ref{SectB_DifferenceEq1_Corollary}$ is proved. \end{proof} \begin{lemma}\label{SectB_DifferenceEq2_Corollary} Assume $0\leq \ell +|\alpha|\leq k, \ 0\leq \ell\leq k-1, \ |\alpha|\geq 1$, then $(\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v},\ \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h},\ \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{q})$ satisfy the equations $(\ref{Sect7_TangentialEstimates_Diff_Eq})$. 
\end{lemma} \begin{proof} Apply $\partial_t^{\ell}\mathcal{Z}^{\alpha}$ to the equations $(\ref{Sect7_DifferenceEq_1})$, we prove $(\ref{Sect7_TangentialEstimates_Diff_Eq})$. The main equation $(\ref{Sect7_TangentialEstimates_Diff_Eq})_1$ follows from $(\ref{SectA_Difference_Eq2_1})_2$, \begin{equation}\label{SectB_DifferenceEq2_Corollary_1} \begin{array}{ll} \partial_t^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} + v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} + \nabla^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{q} - 2\ensuremath{\epsilon}\partial_t^{\ell}\mathcal{Z}^{\alpha}\nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \hat{v} \\[8pt]\quad = \partial_z^{\varphi} v \partial_t^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\varphi} + \partial_z^{\varphi} v \, v^{\ensuremath{\epsilon}}\cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\varphi} - \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v}\cdot\nabla^{\varphi} v + \partial_z^{\varphi} q\nabla^{\varphi^{\ensuremath{\epsilon}}}\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\varphi} \\[8pt]\quad + \ensuremath{\epsilon}\partial_t^{\ell}\mathcal{Z}^{\alpha}\triangle^{\varphi^\ensuremath{\epsilon}} v - [\partial_t^{\ell}\mathcal{Z}^{\alpha},\partial_t^{\varphi^{\ensuremath{\epsilon}}}]\hat{v} + [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \partial_z^{\varphi} v \partial_t^{\varphi^{\ensuremath{\epsilon}}}]\hat{\varphi} - [\partial_t^{\ell}\mathcal{Z}^{\alpha}, v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}] \hat{v} \\[8pt]\quad + [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \partial_z^{\varphi} v \, v^{\ensuremath{\epsilon}}\cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}]\hat{\varphi} - [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \nabla^{\varphi} v\cdot]\hat{v} - [\partial_t^{\ell}\mathcal{Z}^{\alpha},\nabla^{\varphi^{\ensuremath{\epsilon}}}] \hat{q} + [\partial_t^{\ell}\mathcal{Z}^{\alpha},\partial_z^{\varphi} q\nabla^{\varphi^{\ensuremath{\epsilon}}}]\hat{\varphi}, \\[13pt] \partial_t^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} + v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} + \nabla^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{q} - 2\ensuremath{\epsilon} \nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} = \mathcal{I}_{10}. \end{array} \end{equation} The divergence free condition $(\ref{Sect7_TangentialEstimates_Diff_Eq})_2$ follows from $(\ref{SectA_Difference_Eq2_2})$, \begin{equation}\label{SectB_DifferenceEq2_Corollary_2} \begin{array}{ll} \nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} = \partial_z^{\varphi}v \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{\eta} -[\partial_t^{\ell}\mathcal{Z}^{\alpha},\nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot] \hat{v} + [\partial_t^{\ell}\mathcal{Z}^{\alpha},\partial_z^{\varphi}v \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}]\hat{\eta}. \end{array} \end{equation} Next, the kinetical boundary condition $(\ref{Sect7_TangentialEstimates_Diff_Eq})_3$ is exactly $(\ref{SectA_Difference_Eq2_4})_2$. 
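Here and below, the bracket terms in $(\ref{SectB_DifferenceEq2_Corollary_1})$ are understood in the standard commutator sense (we state the convention only for clarity, as an assumption on the notation of the main text): for operators $A$, $B$ acting on a function $u$, and for functions $f$, $g$,
\begin{equation*}
[A,B]u = A(Bu) - B(Au), \qquad [A,f,g] = A(fg) - f\,Ag - g\,Af,
\end{equation*}
where a function appearing in the second slot of $[\,\cdot\,,\,\cdot\,]$ acts by multiplication.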
Finally, we derive the dynamical boundary condition $(\ref{Sect7_TangentialEstimates_Diff_Eq})_4$. Apply $\partial_t^{\ell}\mathcal{Z}^{\alpha}$ to Navier-Stokes and Euler dynamical boundary conditions, we get \begin{equation}\label{SectB_DifferenceEq2_Corollary_3} \begin{array}{ll} (\partial_t^{\ell}\mathcal{Z}^{\alpha}q^{\ensuremath{\epsilon}} - g\partial_t^{\ell}\mathcal{Z}^{\alpha}h^{\ensuremath{\epsilon}} + \sigma \partial_t^{\ell}\mathcal{Z}^{\alpha}H^{\ensuremath{\epsilon}})\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} -2\ensuremath{\epsilon} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}v^{\ensuremath{\epsilon}}\,\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} \\[8pt]\quad = 2\ensuremath{\epsilon} [\partial_t^{\ell}\mathcal{Z}^{\alpha},\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}] v^{\ensuremath{\epsilon}}\,\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} + (2\ensuremath{\epsilon} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}v^{\ensuremath{\epsilon}} - (q^{\ensuremath{\epsilon}}-g h^{\ensuremath{\epsilon}} + \sigma H^{\ensuremath{\epsilon}}))\,\partial_t^{\ell}\mathcal{Z}^{\alpha}\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} \\[8pt]\qquad - [\partial_t^{\ell}\mathcal{Z}^{\alpha},q^{\ensuremath{\epsilon}}-g h^{\ensuremath{\epsilon}} + \sigma H^{\ensuremath{\epsilon}},\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}] +2\ensuremath{\epsilon} [\partial_t^{\ell}\mathcal{Z}^{\alpha},\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}v^{\ensuremath{\epsilon}}, \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}], \\[9pt] (\partial_t^{\ell}\mathcal{Z}^{\alpha}q - g\partial_t^{\ell}\mathcal{Z}^{\alpha}h + \sigma \partial_t^{\ell}\mathcal{Z}^{\alpha}H)\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} =0. \end{array} \end{equation} By $(\ref{SectB_DifferenceEq2_Corollary_3})_1 -(\ref{SectB_DifferenceEq2_Corollary_3})_2$, we get \begin{equation}\label{SectB_DifferenceEq2_Corollary_4} \begin{array}{ll} (\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{q} - g\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h} + \sigma \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{H})\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} - 2\ensuremath{\epsilon} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{v} \,\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} \\[8pt]\quad = 2\ensuremath{\epsilon} [\partial_t^{\ell}\mathcal{Z}^{\alpha},\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}] v^{\ensuremath{\epsilon}}\,\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} + (2\ensuremath{\epsilon} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}v^{\ensuremath{\epsilon}} - (q^{\ensuremath{\epsilon}}-g h^{\ensuremath{\epsilon}} + \sigma H^{\ensuremath{\epsilon}}))\,\partial_t^{\ell}\mathcal{Z}^{\alpha}\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} \\[8pt]\qquad - [\partial_t^{\ell}\mathcal{Z}^{\alpha},q^{\ensuremath{\epsilon}}-g h^{\ensuremath{\epsilon}} + \sigma H^{\ensuremath{\epsilon}},\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}] +2\ensuremath{\epsilon} [\partial_t^{\ell}\mathcal{Z}^{\alpha},\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}v^{\ensuremath{\epsilon}}, \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}] + 2\ensuremath{\epsilon} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}v\,\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}, \end{array} \end{equation} and then we calculate $\partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{H}$, \begin{equation}\label{SectB_DifferenceEq2_Corollary_5} \begin{array}{ll} \partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{H} = 
\partial_t^{\ell}\mathcal{Z}^{\alpha} \nabla_y \cdot \big[ \mathfrak{H}_1\nabla_y\hat{h} + \mathfrak{H}_2 \nabla_y\hat{h}\cdot\nabla_y(h^{\ensuremath{\epsilon}}+h) \nabla_y (h^{\ensuremath{\epsilon}} +h)\big] \\[6pt] = \nabla_y \cdot \big[ \mathfrak{H}_1\nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{h} + \mathfrak{H}_2 \nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{h}\cdot\nabla_y(h^{\ensuremath{\epsilon}}+h) \nabla_y (h^{\ensuremath{\epsilon}} +h)\big] \\[6pt]\quad + \nabla_y \cdot \big[ [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \mathfrak{H}_1\nabla_y] \hat{h} + [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \mathfrak{H}_2 \nabla_y (h^{\ensuremath{\epsilon}} +h) \nabla_y(h^{\ensuremath{\epsilon}}+h)\cdot \nabla_y] \hat{h} \big]. \end{array} \end{equation} Plug $(\ref{SectB_DifferenceEq2_Corollary_5})$ into $(\ref{SectB_DifferenceEq2_Corollary_4})$, we get \begin{equation}\label{SectB_DifferenceEq2_Corollary_6} \begin{array}{ll} \partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{q}\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} -2\ensuremath{\epsilon} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{v}\,\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} = g\partial_t^{\ell}\mathcal{Z}^{\alpha}\hat{h}\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} - \sigma \nabla_y \cdot \big( \mathfrak{H}_1\nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{h} \big) \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} \\[8pt]\quad - \sigma\nabla_y \cdot \big( \mathfrak{H}_2 \nabla_y \partial_t^{\ell}\mathcal{Z}^{\alpha} \hat{h}\cdot\nabla_y(h^{\ensuremath{\epsilon}}+h) \nabla_y (h^{\ensuremath{\epsilon}} +h)\big)\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} + 2\ensuremath{\epsilon} [\partial_t^{\ell}\mathcal{Z}^{\alpha},\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}] v^{\ensuremath{\epsilon}}\,\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} \\[6pt]\quad + 2\ensuremath{\epsilon}(\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}v^{\ensuremath{\epsilon}} - \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}v^{\ensuremath{\epsilon}}\ensuremath{\textbf{n}}^{\ensuremath{\epsilon}}\cdot\ensuremath{\textbf{n}}^{\ensuremath{\epsilon}})\,\partial_t^{\ell}\mathcal{Z}^{\alpha}\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} + 2\ensuremath{\epsilon}[\partial_t^{\ell}\mathcal{Z}^{\alpha}, \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}v^{\ensuremath{\epsilon}} - \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}}v^{\ensuremath{\epsilon}}\ensuremath{\textbf{n}}^{\ensuremath{\epsilon}}\cdot\ensuremath{\textbf{n}}^{\ensuremath{\epsilon}}, \ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}] \end{array} \end{equation} \begin{equation*} \begin{array}{ll} \quad + 2\ensuremath{\epsilon} \mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\mathcal{Z}^{\alpha}v\,\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} - \sigma\nabla_y \cdot \big[ [\partial_t^{\ell}\mathcal{Z}^{\alpha}, \mathfrak{H}_1\nabla_y]\big] \hat{h}\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}} \\[6pt]\quad - \sigma\nabla_y \cdot \big[[\partial_t^{\ell}\mathcal{Z}^{\alpha}, \mathfrak{H}_2 \nabla_y (h^{\ensuremath{\epsilon}} +h) \nabla_y(h^{\ensuremath{\epsilon}}+h)\cdot \nabla_y] \hat{h} \big]\ensuremath{\textbf{N}}^{\ensuremath{\epsilon}}. \hspace{3cm} \end{array} \end{equation*} Thus, Lemma $\ref{SectB_DifferenceEq2_Corollary}$ is proved. 
\end{proof} \begin{lemma}\label{SectB_DifferenceEq3_Corollary} Assume $0\leq \ell\leq k-1, \ |\alpha|=0$, then the main equation of $\partial_t^{\ell}\hat{v}$ and its kinetical boundary condition satisfy the equations $(\ref{Sect7_Tangential_Estimates_Time})$. \end{lemma} \begin{proof} The derivation of the main equation of $\partial_t^{\ell} \hat{v}$ is as follows: \begin{equation}\label{SectB_DifferenceEq3_Corollary_1} \begin{array}{ll} \partial_t^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\hat{v} + v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\hat{v} + \nabla^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\hat{q} - 2\ensuremath{\epsilon}\partial_t^{\ell}\nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \hat{v} \\[8pt]\quad = \partial_z^{\varphi} v \partial_t^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\hat{\varphi} + \partial_z^{\varphi} v \, v^{\ensuremath{\epsilon}}\cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}\partial_t^{\ell}\hat{\varphi} - \partial_t^{\ell}\hat{v}\cdot\nabla^{\varphi} v + \partial_z^{\varphi} q\nabla^{\varphi^{\ensuremath{\epsilon}}}\partial_t^{\ell}\hat{\varphi} \\[8pt]\quad + \ensuremath{\epsilon}\partial_t^{\ell}\triangle^{\varphi^\ensuremath{\epsilon}} v - [\partial_t^{\ell},\partial_t^{\varphi^{\ensuremath{\epsilon}}}]\hat{v} + [\partial_t^{\ell}, \partial_z^{\varphi} v \partial_t^{\varphi^{\ensuremath{\epsilon}}}]\hat{\varphi} - [\partial_t^{\ell}, v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}}] \hat{v} \\[8pt]\quad + [\partial_t^{\ell}, \partial_z^{\varphi} v \, v^{\ensuremath{\epsilon}}\cdot \nabla^{\varphi^{\ensuremath{\epsilon}}}]\hat{\varphi} - [\partial_t^{\ell}, \nabla^{\varphi} v\cdot]\hat{v} - [\partial_t^{\ell},\nabla^{\varphi^{\ensuremath{\epsilon}}}] \hat{q} + [\partial_t^{\ell},\partial_z^{\varphi} q\nabla^{\varphi^{\ensuremath{\epsilon}}}]\hat{\varphi}, \\[11pt] \partial_t^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\hat{v} + v^{\ensuremath{\epsilon}} \cdot\nabla^{\varphi^{\ensuremath{\epsilon}}} \partial_t^{\ell}\hat{v} - 2\ensuremath{\epsilon}\partial_t^{\ell}\nabla^{\varphi^{\ensuremath{\epsilon}}}\cdot\mathcal{S}^{\varphi^{\ensuremath{\epsilon}}} \hat{v} = \mathcal{I}_{12}. \end{array} \end{equation} The derivation of the kinetical boundary condition is the same as Lemma $\ref{SectA_DifferenceEq2_Lemma}$, but let $\alpha =0$ in $(\ref{Sect7_TangentialEstimates_Diff_Eq})_3$. Thus, Lemma $\ref{SectB_DifferenceEq3_Corollary}$ is proved. \end{proof} \addcontentsline{toc}{section}{References} \end{document}
\begin{document} \title{Forcing clique immersions through chromatic number \thanks{This work was supported by the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013)/ERC Grant Agreement no. 279558.}} \author{ Gregory Gauthier \thanks{ Princeton University, Princeton, NJ, USA, \texttt{[email protected]}}, Tien-Nam Le \thanks{Laboratoire d'Informatique du Parall\'elisme, \'Ecole Normale Sup\'erieure de Lyon, France, \texttt{[email protected]}}, \\ and Paul Wollan \thanks{Department of Computer Science, University of Rome, ``La Sapienza'', Rome, Italy, \texttt{[email protected]}}} \date{} \maketitle \begin{abstract} Building on recent work of Dvo\v{r}\'ak and Yepremyan, we show that every simple graph of minimum degree $7t+7$ contains $K_t$ as an immersion and that every graph with chromatic number at least $3.54t + 4$ contains $K_t$ as an immersion. We also show that every graph on $n$ vertices with no stable set of size three contains $K_{2\lfloor n/5 \rfloor}$ as an immersion. \end{abstract} Keywords: Graph immersion, Hadwiger conjecture, chromatic number. \section{Introduction} \label{section:introduction} \subsection{Hadwiger's conjecture} The graphs in this paper are simple and finite, while multigraphs may have loops and multiple edges. A fundamental question in graph theory is the relationship between the chromatic number of a graph $G$ and the presence of certain structures in $G$. One of the most well-known specific examples of this type of question is the Four Color Theorem, which states that every planar graph is 4-colorable. Hadwiger \cite{Had} in 1943 proposed a far-reaching generalization of the Four Color Theorem, which asserts that for all positive integers $t$, every graph of chromatic number $t$ contains $K_t$, the clique on $t$ vertices, as a minor. In 1937, Wagner \cite{wa} proved that Hadwiger's conjecture for $t=5$ is equivalent to the Four Color Theorem. Robertson, Seymour, and Thomas \cite{RST} settled the conjecture for $t=6$, while the conjecture is still open for $t\ge 7$. On the other hand, it was independently proved in 1984 by Kostochka and Thomason \cite{Ko,Tho} that a graph without a $K_t$-minor is $O(t\sqrt{\log t})$-colorable for every $t\ge 1$, and there has been no improvement in the order $t\sqrt{\log t}$ since then. For graphs with no stable set of size three (i.e. there do not exist three vertices, all pairwise nonadjacent), Duchet and Meyniel \cite{DM} proposed an analogue of Hadwiger's conjecture, namely that every graph with $n$ vertices and no stable set of size three contains a $K_{\lceil n/2\rceil}$-minor, and proved that such graphs contain $K_{\lceil n/3\rceil}$ as a minor, which remains the best bound to date. Plummer, Stiebitz, and Toft \cite{PST} showed that the conjecture of Duchet and Meyniel is indeed equivalent to Hadwiger's conjecture for graphs with no stable set of size three. \subsection{Graph immersion} In this paper, we focus on the immersion relation on graphs, which is a variant of the minor relation (see \cite{RS}). We follow the definitions in \cite{Wo}.
Given loopless multigraphs $G,H$, we say that $G$ admits an \emph{immersion} of $H$ if there exist functions $\pi_1:V(H)\to V(G)$ and $\pi_2$ mapping the edges of $H$ to paths of $G$ satisfying the following: \begin{itemize} \item the map $\pi_1$ is an injection; \item for every edge $e\in E(H)$ with endpoints $x$ and $y$, $\pi_2(e)$ is a path with endpoints equal to $\pi_1(x)$ and $\pi_1(y)$; and \item for edges $e,e'\in E(H)$, $e\ne e'$, $\pi_2(e)$ and $\pi_2(e')$ have no edge in common. \end{itemize} We say that $G$ admits a \emph{strong immersion} of $H$ if the following condition holds as well. \begin{itemize} \item For every edge $e\in E(H)$ with endpoints $x$ and $y$, the path $\pi_2(e)$ intersects the set $\pi_1(V(H))$ only in its endpoints. \end{itemize} The vertices $\{\pi_1(x) : x \in V(H)\}$ are the \emph{branch vertices} of the immersion. We will also say that $G$ (strongly) immerses $H$ or alternatively that $G$ contains $H$ as a (strong) immersion. We can alternately define immersions as follows. Let $e_1$ and $e_2$ be distinct edges in $G$ such that the endpoints of $e_1$ are $x, y$ and the endpoints of $e_2$ are $y, z$. To \emph{split off} the edges $e_1$ and $e_2$, we delete the edges $e_1$ and $e_2$ from $G$ and add a new edge $e$ with endpoints $x$ and $z$ (note that this might result in a multi-edge or a loop). Then $G$ contains $H$ as an immersion if and only if $H$ can be obtained from a subgraph of $G$ by repeatedly splitting off pairs of edges and deleting isolated vertices. We consider a variant of Hadwiger's conjecture for graph immersions, due to Lescure and Meyniel \cite{LM} in 1989 and, independently, to Abu-Khzam and Langston \cite{AL} in 2003. The conjecture explicitly states the following. \begin{conjecture}[\cite{AL}, \cite{LM}]\label{conj:main} For every positive integer $t$, every graph with no $K_t$ immersion is properly colorable with at most $t-1$ colors. \end{conjecture} Conjecture \ref{conj:main} is trivial for $t \le 4$, and was independently proved by Lescure and Meyniel \cite{LM} and DeVos et al. \cite{DKMO10} for $5 \le t \le 7$. One can immediately show that a minimum counterexample to Conjecture \ref{conj:main} has minimum degree at least $t-1$. Thus, the conjecture provides additional motivation for the natural question of determining the smallest minimum degree that forces a clique immersion. DeVos et al. \cite{mohar} showed that minimum degree $200t$ suffices to force a $K_t$ immersion in a simple graph. This implies that every graph without a $K_t$-immersion is $200t$-colorable, providing the first linear bound for Conjecture \ref{conj:main}, while, as we discussed above, the best known bound for Hadwiger's conjecture is superlinear. The bound $200t$ was recently improved by Dvo\v{r}\'ak and Yepremyan \cite{dvo} to $11t+7$. \begin{theorem}[Dvo\v{r}\'ak--Yepremyan, \cite{dvo}]\label{theorem:mindeg11} Every graph with minimum degree at least $11t+7$ contains an immersion of $K_t$. \end{theorem} We give a new result on clique immersions in dense graphs; we leave the exact statement for Section \ref{section:dense} below. As a consequence, it is possible to improve the analysis in \cite{dvo} and obtain the following bound. \begin{theorem}\label{theorem:mindeg} Every graph with minimum degree at least $7t+7$ contains an immersion of $K_t$. \end{theorem} Conjecture \ref{conj:main} can be relaxed to consider the following question.
\begin{problem} What is the smallest function $f$ such that for all positive integers $t$ and all graphs $G$ with $\chi(G) \ge f(t)$, it holds that $G$ contains $K_t$ as an immersion? \end{problem} As observed above, a minimum counterexample to Conjecture \ref{conj:main} has minimum degree at least $t-1$. Thus by Theorem \ref{theorem:mindeg}, we get that chromatic number at least $f(t) = 7t+8$ forces a $K_t$ immersion. By combining our results for dense graphs with arguments based on analyzing Kempe chains in proper colorings of graphs, we obtain the following improved bound. \begin{theorem}\label{theorem:chromatic} Every graph with chromatic number at least $3.54t+4$ contains an immersion of $K_t$. \end{theorem} For graphs with no stable set of size three, Vergara \cite{V17} proposed a conjecture analogous to that of Duchet and Meyniel, namely that every graph with $n$ vertices and no stable set of size three contains a strong $K_{\lceil n/2\rceil}$-immersion, and proved that it is equivalent to Conjecture \ref{conj:main} for graphs with no stable set of size three. In the same paper, Vergara showed that a relaxation to $K_{\lceil n/3\rceil}$-immersion holds. We improve this to $K_{2\lfloor n/5 \rfloor}$. \begin{theorem}\label{theorem:2.5} For every integer $n\ge 1$, every graph $G$ with $n$ vertices and no stable set of size three has a strong immersion of $K_{2\lfloor n/5 \rfloor}$. \end{theorem} An extended abstract presenting Theorems \ref{theorem:mindeg} and \ref{theorem:chromatic} appeared in 2016 \cite{LW}. \subsection{Notation} Given a multigraph $G$ and distinct vertices $u,v\in V(G)$, if there are $k\ge 2$ edges between $u$ and $v$, we say that $uv$ is a \emph{multi-edge} with \emph{multiplicity} $k$, and if $u$ is not adjacent to $v$, we say that $uv$ is a \emph{missing edge}. We denote by $N_G(v)$ the (non-repeated) set of neighbors of $v$ in $G$, and by $d_G(v)$ the \emph{degree} of $v$ in $G$ (where a loop is counted twice and a multi-edge with multiplicity $k$ is counted $k$ times). We denote by $E_G(v)$ the multi-set of edges (loops excluded) incident with $v$ (if $uv$ is a multi-edge of multiplicity $k$ then there are $k$ edges $uv$ in $E_G(v)$). Given $X\subseteq V(G)$, we denote by $f_G(v|X)$ the number of vertices in $X\backslash\{v\}$ which are not adjacent to $v$ in $G$, and we write $f_G(v)=f_G(v|V(G))$ for short. When it is clear from the context, we omit the subscript $G$ in this notation. Note that if $G$ is simple, then $d(v)=|N(v)|=|E(v)|=|V(G)|-f(v)-1$, but this need not be the case if $G$ is a multigraph. Given a multigraph $G$ and a subset $M$ of $V(G)$, let $G[M]$ denote the subgraph of $G$ induced by $M$. Given a path linking vertices $u$ and $v$, to \emph{split off the path}, we delete the edges of the path and add an edge $uv$ to $G$. Given a vertex $v$ with $|E_G(v)|$ even, to \emph{suppress} $v$, we first match all edges of $E_G(v)$ into pairs; then we split off every pair $\{vu,vw\}$ of the matching, and finally delete $v$ and its loops (if any). Note that after suppressing a vertex, the degrees of the other vertices are unchanged. Both operations (splitting off a path and suppressing a vertex) can be expressed as a sequence of splitting off pairs of edges. Given two multigraphs $G$ and $G'$, we define the \emph{union} of $G$ and $G'$, denoted $G \cup G'=G^*$, to be the multigraph with vertex set $V(G) \cup V(G')$ and the following edge set. For every two vertices $u$ and $v$ in $V(G) \cup V(G')$, the number of edges $uv$ in $G^*$ is equal to the sum of the number of edges $uv$ in $G$ and $G'$.
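As a small illustration of these operations and of the function $f$, consider the $5$-cycle $C_5$ with vertices $v_1,\dots,v_5$ and edges $v_1v_2,\,v_2v_3,\,v_3v_4,\,v_4v_5,\,v_5v_1$. Here $f(v_1)=2$, the two non-neighbors of $v_1$ being $v_3$ and $v_4$. To suppress $v_2$, we match its two incident edges into the single pair $\{v_2v_1,v_2v_3\}$, split this pair off into the new edge $v_1v_3$, and delete $v_2$; the result is the $4$-cycle on $v_1,v_3,v_4,v_5$, and the degrees of the remaining vertices are unchanged. Equivalently, suppressing $v_2$ amounts to splitting off the path $v_1v_2v_3$ and then deleting the isolated vertex $v_2$.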
The structure of the paper is as follows. In sections \ref{section:dense}, we give some results on clique immersion in dense graphs, which are necessary for the proofs of Theorems \ref{theorem:mindeg} and \ref{theorem:chromatic}. Then we prove Theorems \ref{theorem:mindeg}, \ref{theorem:chromatic}, and \ref{theorem:2.5} in Sections \ref{section:minimum}, \ref{section:chromatic}, and \ref{section:2.5}, respectively. \section{Clique immersion in dense graphs} \label{section:dense} In the following lemma, we show that if $G$ contains a set $M$ of $t$ vertices where the total sum of ``missing degree'' is small, then $G$ immerses a $K_t$ on $M$. \begin{lemma} \label{lemma:average1} Let $G=(V,E)$ be a graph with $n$ vertices and $M$ be a subset of $V$ with $t$ vertices. If \begin{equation}\label{eq:dense1} \sum_{v\in M} f_G(v)\le \Big(n-t-\max_{v\in M}f_G(v)\Big)t, \end{equation} then $G$ contains an immersion of $K_t$. \end{lemma} \begin{proof} Let $\overline{M}=V\backslash M$ and let $b=\max_{v\in M}f_{G}(v)$. Suppose that there are distinct vertices $v,v'\in M$ and $w\in \overline{M}$ such that $vv'\notin E(G)$ and $vw,wv'\in E(G)$. By splitting off the path $vwv'$, we obtain the edge $vv'$ while $f(v)$ and $f(v')$ are unchanged, and so (\ref{eq:dense1}) still holds for the new graph. Thus by repeatedly finding such triples and splitting off, we obtain new graphs satisfying (\ref{eq:dense1}) while the number of edges strictly decreases after each step. Therefore the process must halt and return a graph $G_1=(V,E_1)$ satisfying \begin{equation}\label{en:dense1} \sum_{v\in M} f_{G_1}(v)\le \big(n-t-b\big)t, \text{ and} \end{equation} \begin{enumerate}[label=(\roman*)] \item \label{en:dense2} there are no $v,v'\in M$ and $w\in \overline{M}$ such that $vv'\notin E_1$ and $vw,wv'\in E_1$. \end{enumerate} For the rest of the proof, we write $f$ instead of $f_{G_1}$. Let $r$ be the number of missing edges of $G_1$ with two endpoints in $M$, and $X$ be the set of endpoints of these missing edges. If $r=0$, then $G_1[M]$ is a copy of $K_t$, which proves the lemma. Hence we may suppose that $r\ge 1$. For every $v\in X$, there is $v'\in M$ such that $vv'\notin E_1$. From \ref{en:dense2} we have $f(v|\overline{M})+f(v'|\overline{M})\ge |\overline{M}|=n-t;$ otherwise, there exists $w\in \overline{M}$ such that $vw,wv'\in E_1$. Hence $$n-t\le f(v|\overline{M})+f(v'|\overline{M})\le f(v|\overline{M})+f(v')\le f(v|\overline{M})+b,$$ and so $f(v|\overline{M})\ge n-t-b$ for every $v\in X$. This gives \begin{equation} \sum_{v\in X} f(v)= \sum_{v\in X}f(v|\overline{M})+\sum_{v\in X} f(v|M)\ge (n-t-b)|X| + 2r.\label{equation:XY} \end{equation} We will construct a $K_t$ immersion in $G_1$ as follows: for every non-adjacent pair of vertices $v,v'$ in $X$, we will obtain the edge $vv'$ by splitting off path $vwuw'v'$ for some $u\in Y= M \setminus X$ and $w,w' \in \overline{M}$. As a first step to finding such 4-edge paths, for all $u \in Y$, define $$h(u)=\max\Big(0,\Big\lfloor\frac{n-t-b-f(u)+1}{2}\Big\rfloor\Big).$$ It holds that $2h(u)\ge n-t-b-f(u)$. Hence $$2\sum_{u\in Y}h(u) \ge (n-t-b)|Y|-\sum_{u\in Y}f(u).$$ Combining with \eqref{equation:XY}, and then with (\ref{en:dense1}) yields \begin{align*} 2\sum_{u\in Y}h(u) -2r&\ge \Big((n-t-b)|Y|-\sum_{u\in Y}f(u)\Big)+\Big( (n-t-b)|X|-\sum_{v\in X}f(v)\Big)\\ &\ge (n-t-b)(|X|+|Y|)-\sum_{v\in M}f(v)\\ &\ge (n-t-b)t-(n-b-t)t= 0. \end{align*} Hence $\sum_{u\in Y}h(u) \ge r$. 
Choose arbitrarily two non-adjacent vertices $v,v'$ in $M$ (clearly $v,v'\in X$), and an arbitrary vertex $u\in Y$ such that $h(u)\ge 1$. Such a vertex $u$ always exists as $\sum_{u\in Y}h(u)\ge r\ge 1$ and $h(u)$ is an integer for every $u$. By definition of function $h$, we have $$f(u|\overline{M})\le f(u)\le n-t-b+1-2h(u)\le n-t-b-1.$$ From $f(v)\le b$, we have $$f(u|\overline{M})+f(v|\overline{M})\le (n-t-b-1)+f(v)\le n-t-1<|\overline{M}|,$$ so $u$ and $v$ have a common neighbor $w\in \overline{M}$. Similarly $u$ and $v'$ have a common neighbor $w'\in \overline{M}$. If $w=w'$ then $vw,wv'\in E_1$, contrary to \ref{en:dense2}. By splitting off the path $vwuw'v'$, we get the edge $vv'$. In doing so, we have that $f(v)$ and $f(v')$ remain unchanged while $f(u)$ increases by 2, i.e., $h(u)$ decreases by 1. Thus $\sum_{u\in Y}h(u)$ decreases by 1. However, the number of missing edges in $G_1[M]$ also decreases by 1, so we still have that $\sum_{u\in Y}h(u)$ is at least the number of missing edges in $G_1[M]$. We repeat the process above until we link all pairs of non-adjacent vertices in $M$, and so obtain a complete graph on $M$. Thus $G_1$ contains an immersion of $K_t$, and consequently, $G$ contains $K_t$ as an immersion as well. This proves the lemma. \end{proof} As a corollary of Lemma \ref{lemma:average1}, the following lemma provides a more general bound for clique immersion of a graph by its average ``missing degree''. \begin{lemma}\label{lemma:average2} Let $G$ be a graph on $n$ vertices, and let $\gamma=\sum_{v\in V(G)}f_G(v)/n$ be the average ``missing degree" of $G$. If $\gamma \le n/2$, then $G$ contains an immersion of $K_t$ where $t=\min\big( \lfloor n/2 \rfloor,\lfloor n -2\gamma\rfloor\big)$. \end{lemma} \begin{proof} Let $M$ be a set of $t=\min\big( \lfloor n/2 \rfloor,\lfloor n -2\gamma\rfloor\big)$ vertices minimizing $\sum_{v\in M}f(v)$. Let $b=\max_{v\in M}f(v)$ and $\overline{M}=V(G)\backslash M$. If $2b\le n-t$, note that $f(v)\le b$ for every $v\in M$, and so $\sum_{v\in M}f(v)\le bt\le (n-t-b)t$, and we apply Lemma \ref{lemma:average1} to complete the proof. Otherwise, $2b> n-t$. By the minimality of $f$ on $M$, we have $f(w)\ge b$ for every $w\in \overline{M}$. Hence \begin{equation} \sum_{v\in M}f(v)= \sum_{v\in V(G)}f(v)- \sum_{w\in \overline{M}}f(w) \le \gamma n-b(n-t).\label{equation:0} \end{equation} We now show that $\gamma n-b(n-t) \le (n-t-b)t$. Indeed, \begin{align} \gamma n-b(n-t) &\le (n-t-b)t\nonumber \\ \Longleftrightarrow 2(\gamma n-bn+bt) &\le 2(n-t-b)t\nonumber \\ \Longleftrightarrow\ \ 2\gamma n-n^2+tn &\le 2b(n-2t)-(n-t)(n-2t) \nonumber\\ \Longleftrightarrow\ \ \ (2\gamma+t-n)n &\le (2b-n+t)(n-2t). \label{equation:1} \end{align} Since $t=\min\big( \lfloor n/2 \rfloor,\lfloor n -2\gamma\rfloor\big)$, we have $2\gamma \le n-t$ and $2t\le n$. Combining with $2b>n-t$ yields $$(2\gamma +t-n)n\le 0\le (2b-n+t)(n-2t).$$ Hence (\ref{equation:1}) holds, and so $\gamma n-b(n-t) \le (n-t-b)t$. This, together with equality (\ref{equation:0}), implies that $\sum_{v\in M}f(v)\le (n-t-b)t$, and we apply Lemma \ref{lemma:average1} to complete the proof. \end{proof} In the case $n/4\le \gamma\le n/2$, by tightening the analysis, we can slightly improve the bound in Lemma \ref{lemma:average2} to $t=\lfloor n-2\gamma \rfloor+1$, which is sharp even if $\gamma$ is the maximum missing degree (see \cite{FW16}, Lemma 2.1). 
In the case $\gamma<n/4$, the above technique could yield $t=\max\big(\lfloor n/2\rfloor,\lfloor n-\sqrt{2\gamma n}\rfloor\big)$; however, $t=\lfloor n/2\rfloor$ is enough for our purpose. \section{Forcing a clique immersion via minimum degree} \label{section:minimum} In this section, we show how the proof of Theorem \ref{theorem:mindeg11} can be refined to give the proof of Theorem \ref{theorem:mindeg}. The main idea is as follows. Suppose, to reach a contradiction, that there is a graph with high minimum degree which does not contain a $K_t$-immersion. We choose such a graph $G$ with as few vertices as possible. If $G$ is dense, then we can find a $K_t$ immersion, a contradiction. Otherwise, $G$ is sparse, and so we can suppress a vertex to get a smaller graph, which still has high minimum degree and does not contain a $K_t$-immersion, a contradiction again. The main difficulty is how to suppress a vertex of $G$ so that the new graph is still simple. We first state several results from \cite{dvo}. \begin{proposition}[\cite{dvo}, Lemma 6]\label{lemma:completemul} Every complete multipartite graph of minimum degree at least $t$ contains an immersion of $K_t$. \end{proposition} A graph on an odd number of vertices is \emph{hypomatchable} if deleting any vertex results in a graph with a perfect matching. \begin{proposition}[\cite{dvo}, Lemma 8]\label{lemma:edmonds} Fix $t$ and let $H$ be a graph not containing any complete multipartite subgraph with minimum degree at least $t$. Suppose that the complement graph $\overline{H}$ of $H$ neither has a perfect matching nor is hypomatchable. Then there exist disjoint subsets $W,L$ of $V(H)$ such that \begin{itemize} \item $|W|\le t-1$ and $|L|\ge |V(H)|-2|W|$; \item $f_H(v)\le |W|$ for every $v\in W$; and \item $uv\in E(H)$ for every $u\in W$ and $v\in L$. \end{itemize} \end{proposition} Given a multigraph $G$, we say that a vertex $v$ of $G$ can be well-suppressed (in $G$) if we can suppress $v$ without creating any new loop or multi-edge in $G$. Precisely, $v$ can be \emph{well-suppressed} if there is a matching of edges of $E_G(v)$ such that \begin{itemize} \item for every pair $\{vu_1,vu_2\}$, we have $u_1\ne u_2$ and $u_1u_2\notin E(G)$, and \item for every two pairs $\{vu_1,vu_2\}$ and $\{vu'_1,vu'_2\}$ we have $\{u_1,u_2\}\ne \{u_1',u_2'\}$. \end{itemize} A vertex $v$ can be \emph{nearly well-suppressed} if for all edges $e \in E_G(v)$, the vertex $v$ can be well-suppressed after deleting $e$. Given a simple graph $G$, it is straightforward that if a vertex $v$ can be well-suppressed (nearly well-suppressed), then the complement graph of the induced subgraph $G[N(v)]$ has a perfect matching (is hypomatchable, respectively). The situation is more complex when $G$ is a multigraph. In the next lemma, we consider the case where some multi-edges are allowed. \begin{lemma}\label{lem:hypo-inside} Fix $t\ge 1$ and let $G'$ be a loopless multigraph with vertex set $V\cup \{z\}$ (where $z\notin V$) such that for every $v\in V$, $zv$ is either an edge or a multi-edge with multiplicity $2$. Let $R$ be the set of vertices incident with $z$ by a multi-edge. If \begin{itemize} \item $|V|-2|R|\ge 3t$, \item $G:=G'[V]$ is simple and does not contain $K_t$ as an immersion, and \item $z$ cannot be well-suppressed or nearly well-suppressed in $G'$, \end{itemize} then there is a set $W\subseteq V$ such that $|W|\le t-1$ and $f_G(v)\le |W|+|R|$ for every $v\in W$. \end{lemma} \begin{proof} We define an auxiliary (simple) graph $H$ as follows.
Beginning with $G$, for every vertex $v\in R$, we add a \emph{clone} vertex $v_c$ to $H$ which has the following neighbors: all the vertices of $R$, all the neighbors of $v$ in $G$, and every other clone vertex $u_c$. Explicitly, $H$ has vertex set $V \cup \{v_c| v \in R\}$ and edge set $$E(H)=E(G) \cup \{u_cv| u,v \in R\}\cup \{u_cv_c| u, v \in R\} \cup \{v_cx| v\in R, vx\in E(G)\} .$$ Each vertex in $H$ indeed corresponds to an edge of $E_{G'}(z)$, where each clone vertex $v_c$ represents the additional edge in the multi-edge $zv$. Let $\overline{H}$ be the complement graph of $H$. We will show that $\overline{H}$ neither has a perfect matching nor is hypomatchable. If $\overline{H}$ has a perfect matching, then by the construction of $H$, that perfect matching corresponds to a matching of edges in $E_{G'}(z)$ such that \begin{itemize} \item for every pair $\{zu_1,zu_2\}$, we have $u_1\ne u_2$ and $u_1u_2\notin E(G')$, and \item for every two pairs $\{zu_1,zu_2\}$ and $\{zu'_1,zu'_2\}$ we have $\{u_1,u_2\}\ne \{u_1',u_2'\}$. \end{itemize} Thus we can well-suppress $z$ in $G'$, a contradiction to the third assumption of the lemma. If $\overline{H}$ is hypomatchable, then for every $v\in V(H)$, there is a perfect matching of $V(H)\backslash \{v\}$ in $\overline{H}$. The same argument shows that $z$ can be nearly well-suppressed in $G'$, a contradiction. We conclude that $\overline{H}$ neither has a perfect matching nor is it hypomatchable. Observe that removing a vertex of a complete multipartite graph with minimum degree $d$ results in a complete multipartite graph with minimum degree at least $d-1$. Now suppose for a contradiction that $H$ contains a complete multipartite subgraph of minimum degree at least $|R|+t$. By removing all clone vertices of $H$, we obtain $G$, which still contains a complete multipartite subgraph with minimum degree at least $(|R|+t)-|R|=t$. By Proposition \ref{lemma:completemul}, $G$ contains $K_t$ as an immersion, a contradiction. We conclude that $H$ does not contain any complete multipartite subgraph of minimum degree at least $|R|+t$. Applying Proposition \ref{lemma:edmonds} to $H$, we obtain disjoint subsets $W',L'$ of $V(H)$ such that \begin{enumerate}[label=(\alph*)] \item \label{en:3.1} $|W'|\le |R|+t-1$ and $|L'|\ge |V(H)|-2|W'|$; \item \label{en:3.2} $f_H(v)\le |W'|$ for every $v\in W'$; and \item \label{en:3.3} $uv\in E(H)$ for every $u\in W'$ and $v\in L'$. \end{enumerate} Let $R_c$ be the set of clone vertices of $H$ and $W=W'\backslash R_c$ and $L=L'\backslash R_c$. We will show that $W$ is the desired set. By \ref{en:3.1} we have $$|L'|\ge |V(H)|-2|W'|> (|V|+|R|)-2(|R|+t)\ge |V|-|R|-2t.$$ Thus $|L'|-|R|\ge|V|-2|R|-2t$. Recall from the hypothesis that $|V|-2|R|\ge 3t$, and so $|L|\ge |L'|-|R|\ge t$. Note that by \ref{en:3.3}, $uv\in E(H)$ for every $u\in W$ and $v\in L$, and hence $uv\in E(G)$ for every $u\in W$ and $v\in L$. If $|W|\ge t$, then $G[W\cup L]$ contains a complete bipartite graph with minimum degree at least $t$, and so contains $K_t$ as an immersion by Proposition \ref{lemma:completemul}, a contradiction. Thus it holds that $|W|\le t-1$. Note that $f_G(v)\le f_H(v)$ since $G$ is an induced subgraph of $H$. It follows from \ref{en:3.2} that $f_G(v)\le f_H(v)\le |W'|\le |W|+|R|$ for every $v\in W$. This completes the proof of the lemma. \end{proof} Given an integer $t>1$, we call a graph $t$-deficient if it can be obtained from a graph with minimum degree $t$ by removing a few edges.
Precisely, a graph $G$ is \emph{$t$-deficient} if $\sum_{v\in V(G)}\max(0,t-d_{G}(v))<t$. \begin{proposition}[\cite{dvo}, Lemma 13]\label{lemma:eulerian} If $G$ is a graph of minimum degree at least $7t+7$ that does not contain an immersion of $K_t$, then $G$ contains an immersion of some $7t$-deficient eulerian graph $G'$. \end{proposition} \begin{proposition}[\cite{dvo}, Lemma 15]\label{lemma:eulerian2} Every $7t$-deficient eulerian graph contains a vertex of degree at least $7t$. \end{proposition} The main technical step in the proof of Theorem \ref{theorem:mindeg} is the following lemma. Dvo\v{r}\'ak and Yepremyan proved a similar result for $11t + 7$-deficient eulerian graphs in \cite{dvo}. \begin{lemma}\label{lemma:mainmindeg} Every $7t$-deficient eulerian graph contains an immersion of $K_t$. \end{lemma} Theorem \ref{theorem:mindeg} follows easily from Lemma \ref{lemma:mainmindeg}. Suppose for a contradiction that there exists a graph $G$ of minimum degree at least $7t+7$ and does not have an immersion of $K_t$. By Proposition \ref{lemma:eulerian}, $G$ contains an immersion of a $7t$-deficient eulerian graph $G'$. By Lemma \ref{lemma:mainmindeg}, $G'$ contains an immersion of $K_t$, a contradiction. \begin{proof}[Proof of Lemma \ref{lemma:mainmindeg}] Suppose that there exists a $7t$-deficient simple eulerian graph which does not contain an immersion of $K_t$. Let $G=(V,E)$ be such a graph with as few vertices as possible. The idea of the proof is as follows. If $G$ has few edges, we show it would be possible to well-suppress some vertex of $G$ to get a smaller counterexample, a contradiction. Hence $G$ has many edges. We are then able to find in $G$ two disjoint sets of vertices $A$ and $B$ of size around $t$ and $6t$, respectively, such that there are very few missing edges between $A$ and $B$. We apply Lemma \ref{lemma:average1} to obtain an immersion of $K_t$ and so reach a contradiction. Let $z_1$ be a vertex in $G$ with $d(z_1)\ge 7t$, as guaranteed by Proposition \ref{lemma:eulerian2}. Let $1\le p< t$ be the maximum integer such that there exists an ordered set $A=\{z_1,z_2,...,z_p\}$ satisfying \begin{equation} f(z_i|B)\le p+i+r_i,\ \text {for all } i\ge 2. \label{equation:induction} \end{equation} where $B=N(z_1)\backslash A$ and $r_i=\big|\{j\le i: z_j\notin N(z_1)\}\big|$ for every $i\ge 2$. Such number $p$ clearly exists since \eqref{equation:induction} trivially holds for $A=\{z_1\}$. Since $|N(z_1)\cap A|=p-r_p$, we have \begin{equation}\label{eq:B} |B|=|N(z_1)\backslash A|=d(z_1)-|N(z_1)\cap A|\ge 7t-p+r_p. \end{equation} Let $\overline{A}=V\backslash A$. Starting with $G_p = G$, we will attempt to sequentially split off the vertices of $A$ in order $z_p, z_{p-1},\dots, z_1$ to create graphs $G_{p-1},G_{p-2}, \dots, G_0$. At each step, if we could find the complement of a perfect matching in $N_{G_i}(z_i)$, we could split off $z_i$ to obtain $G_{i-1}$ and maintaining the property that $G_{i-1}$ is simple. However, the requirement that $N_{G_i}(z_i)$ have the complement of a perfect matching is too strong and so we will have to slightly relax it. In doing so, we will need to introduce parallel edges into the graphs $G_i$, but we will want to do so in a tightly controlled manner. This leads us to the following definition. Fix $q$, $0 \le q \le p$ and multigraphs $G_i$, $q \le i \le p$ which satisfy the following. \begin{enumerate}[label=(\roman*)] \item \label{en:e.1} $G_p =G$ and for all $i$, $q \le i < p$, $G_i$ is obtained from $G_{i+1}$ by suppressing $z_{i+1}$. 
\item \label{en:e.2} For all $i$, $q \le i \le p$, $G_i[\overline{A}]$ is simple. \item \label{en:e.3} For all $i$, every multi-edge of $G_i$ with an endpoint in $\overline{A}$ has multiplicity 2. \item \label{en:e.5} For all $j,2\le j \le q$, there are at most $r_p - r_q$ multi-edges from $z_j$ to vertices of $\overline{A}$ in $G_q$, and there are at most $p - q$ multi-edges from $z_1$ to vertices of $\overline{A}$ in $G_q$. \item \label{en:e.4} There are at least $|\overline{A}|-p+q$ vertices in $\overline{A}$ not incident with any multi-edge in any $G_i,q \le i \le p$. \item \label{en:e.6} Given $v\in \overline{A}$ and $z\in A$, if $vz$ is a multi-edge in some $G_i,q\le i \le p$, then for every $z'\in A, z'\ne z$ and every $j, q\le j \le p$, $vz'$ is not a multi-edge in $G_j$. \item \label{en:e.7} Subject to \ref{en:e.1} -- \ref{en:e.6}, we choose $q$ and $G_i$, $q \le i \le p$ to minimize $q$. \end{enumerate} Such a number $q$ and multigraphs $G_i$, $q \le i \le p$ trivially exist, given the observation that $q = p$ and $G_p = G$ satisfy \ref{en:e.1} -- \ref{en:e.6} as $G$ is simple. We begin with the observation that $q>0$. Otherwise, the graph $G_0$ does not contain $K_t$ as an immersion because $G$ immerses $G_0$ by construction. Moreover, $G_0$ is simple by \ref{en:e.2}, and for all $v \in V(G_0)$, $d_{G_0}(v) = d_{G}(v)$. We conclude that $G_0$ is both eulerian and $7t$-deficient, contrary to our choice of $G$ to be a counterexample with a minimum number of vertices. We now consider the graph $G_q$ and keep in mind that by the minimality of $q$ in \ref{en:e.7}, we cannot suppress $z_q$ to obtain $G_{q-1}$ which satisfies all of \ref{en:e.1} -- \ref{en:e.6}. Let $X=N_{G_{q}}(z_q)\cap \overline{A}$. We will show that $G':=G_q[X\cup\{z_q\}]$ satisfies all hypotheses of Lemma \ref{lem:hypo-inside}. From \ref{en:e.2} and \ref{en:e.3}, we have that $G'$ is a loopless multigraph with vertex set $X\cup \{z_q\}$ such that for every $v\in X$, $z_qv$ is either an edge or a multi-edge with multiplicity 2. Let $R$ be the set of vertices in $X$ incident with $z_q$ by a multi-edge. Then by \ref{en:e.5} we have \begin{equation} \label{eq:R} \left\{ \begin{array}{ll} |R|\le p-1 \ \ \ \ \ \ \ \ \ \text{ if } q=1,\\ |R|\le r_p-r_q \ \ \ \ \ \ \ \text{ if } q>1. \end{array} \right. \end{equation} \begin{claim} $G'$ satisfies all hypotheses of Lemma \ref{lem:hypo-inside}. \end{claim} \begin{cproof} We verify the hypotheses one by one. \begin{itemize} \item $G'[X]=G_q[X]$ is simple and does not contain $K_t$ as an immersion. \end{itemize} $G_q[X]$ is simple by \ref{en:e.2}, and does not contain $K_t$ as an immersion by \ref{en:e.1} and the assumption that $G$ does not contain $K_t$ as an immersion. \begin{itemize} \item $|X|-2|R|\ge 3t$. \end{itemize} To prove $|X|-2|R|\ge 3t$, note that $|B\backslash X|$ is the number of vertices in $B$ not adjacent to $z_q$ in $G_q$, which is at most the number of vertices in $B$ not adjacent to $z_q$ in $G$ since no edges between $z_q$ and $B$ have been removed in suppressing $z_p, \dots, z_{q+1}$. Thus $|B\backslash X|\le f_G(z_q|B)$. Combining with \eqref{equation:induction} we have \begin{equation} \label{eq:BX} \left\{ \begin{array}{ll} |B\backslash X|=0 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{ if } q=1,\\ |B\backslash X|\le p+q+r_q \ \ \ \text{ if } q>1. \end{array} \right.
\end{equation} In the case $q>1$, by \eqref{eq:B}, $$|X|\ge |B|-|B\backslash X|\ge (7t-p+r_p)-(p+q+r_q).$$ From the fact that $t\ge \max(p,q,r_p)$ and \eqref{eq:R}, we have $$|X|-2|R|\ge 7t-2p-q-r_p+r_q\ge 3t.$$ In the case $q=1$, by \eqref{eq:B}, $$|X|\ge |B|-|B\backslash X|\ge |B|\ge 7t-p+r_p.$$ Hence from \eqref{eq:R} we have $|X|-2|R|\ge 7t-3p+r_p\ge 3t$. \begin{itemize} \item $z_q$ cannot be well-suppressed or nearly well-suppressed in $G'$. \end{itemize} Suppose that $z_q$ can be well-suppressed in $G'$. We first split off all edges from $z_q$ to $X$ according to the matching witnessing this. Then an even number of edges incident with $z_q$ remain in $G_q$, all from $z_q$ to $A$ since $X=N_{G_{q}}(z_q)\cap \overline{A}$. We now suppress $z_q$ in $G_q$ arbitrarily to obtain $G_{q-1}$. Since we do not create any new edge between $A$ and $\overline{A}$, \ref{en:e.1} -- \ref{en:e.6} hold trivially for $G_{q-1}$, which contradicts \ref{en:e.7}. As the second case, suppose that $z_q$ can be nearly well-suppressed in $G'$. Pick a vertex $v\in X$ which is not incident with any multi-edge in $G_i$, for all $q \le i \le p$. Such a vertex $v$ exists since, by \ref{en:e.4}, there are at most $p-q$ distinct vertices of $\overline{A}$ incident with some multi-edge over all $G_i$, $q \le i \le p$, while $|X|\ge 3t>p-q$ (as we showed above that $|X|-2|R|\ge 3t$). Since $z_q$ can be nearly well-suppressed in $G'$, if we remove the edge $z_qv$ in $G'$, we can well-suppress $z_q$ (in $G'$), and we do so. Since $d_{G_q}(z_q)$ is even, $z_q$ must be adjacent to some vertex $z_s$ with $s<q$. We choose such an $s$ as small as possible and split off the path $z_sz_qv$. We now suppress $z_q$ in $G_q$ arbitrarily to obtain $G_{q-1}$ and will show that $G_{q-1}$ satisfies \ref{en:e.1} -- \ref{en:e.6} and hence violates \ref{en:e.7}. Properties \ref{en:e.1} and \ref{en:e.2} hold trivially. The only possible new multi-edge that we have created is $z_sv$. Since $v$ is not incident with any multi-edge in $G_i$ for all $q \le i \le p$, \ref{en:e.3}, \ref{en:e.4} and \ref{en:e.6} hold for $G_{q-1}$. To prove \ref{en:e.5}, first observe that \ref{en:e.5} clearly holds if $z_s=z_1$. If $z_s\ne z_1$, then $z_1$ is not adjacent to $z_q$ by the choice of $s$, and so $r_{q-1}= r_{q}-1$ by the definition of the function $r$. Thus $r_p - r_{q-1}= r_p - r_q +1$ and therefore \ref{en:e.5} holds. \end{cproof} Hence $G'$ satisfies the hypotheses of Lemma \ref{lem:hypo-inside}, and so there is a set $W\subseteq X$ such that $|W|\le t-1$ and $f_{G_q[X]}(v)\le |W|+|R|$ for every $v\in W$. We next show that $|W|\ge t-p$. To do so, we need the following claim. \begin{claim}\label{cl:W-main} $f_G(v|B)\le |W|+2p+r_p$ for every $v\in W$. \end{claim} \begin{cproof} We first show that $f_{G_q}(v|B)\le|W|+p+q+r_p.$ Note that $f_{G_q}(v|X)= f_{G_q[X]}(v)$ for every $v\in X$, and so $$f_{G_q}(v|B) \le f_{G_q}(v|X)+f_{G_q}(v|B\backslash X)\le (|W|+|R|)+|B\backslash X|.$$ If $q>1$, recall that $|B\backslash X|\le p+q+r_q$ from \eqref{eq:BX} and $|R|\le r_p-r_q$ from \eqref{eq:R}. Hence we have $$f_{G_q}(v|B)\le (|W|+r_p-r_q)+(p+q+r_q)\le |W|+p+q+r_p.$$ If $q=1$, recall that $|B\backslash X|=0$ from \eqref{eq:BX} and $|R|\le p-1$ from \eqref{eq:R}. Thus we have $$f_{G_q}(v|B)\le |W|+p<|W|+p+q+r_p.$$ We conclude that $f_{G_q}(v|B)\le|W|+p+q+r_p$ in all cases. To complete the claim, it suffices to show that $$f_G(v|B)\le f_{G_q}(v|B)+(p-q)$$ for every $v\in X$. Fix $v\in X$.
By property \ref{en:e.6}, there exists a value $s$ such that $z_{i}v$ is not a multi-edge in $G_i$ for every $i\ne s$, $q\le i\le p$. Thus for every $i\ne s$, $q< i\le p$, there is at most one edge $z_iv$ in $G_i$, and so when we suppress $z_i$ in $G_i$ to obtain $G_{i-1}$, we add at most one edge between $v$ and $B$ into $G_{i-1}$. If $s>q$, note that there are at most two edges $z_sv$ in $G_s$ by property \ref{en:e.3}. Hence when we suppress $z_s$ in $G_s$ to obtain $G_{s-1}$, we add at most two edges between $v$ and $B$ into $G_{s-1}$. Thus from $G=G_p$, when we suppress $z_p,...,z_{q+1}$ to get $G_q$, we add in total at most $p-q-1+1=p-q$ edges from $v$ to $B$, and so $$f_G(v|B)=f_{G_p}(v|B)\le f_{G_q}(v|B)+(p-q).$$ This proves the claim. \end{cproof} \begin{claim} $|W|\ge t-p$. \end{claim} \begin{cproof} Suppose for a contradiction that $|W|+p=p^*<t$. Let $A^*=A\cup W$ where elements in $W$ are enumerated $z_{p+1},...,z_{p^*}$, and let $B^*=N(z_1)\backslash A^*=B\backslash W.$ Then \begin{itemize} \item $f_{G}(z_i|B^*) \le f_{G}(z_i|B) \le p+i+r_i\le p^*+i+r_i$ for every $i,2\le i\le p$. \item $f_{G}(z_i|B^*) \le f_{G}(z_i|B) \le |W|+2p+r_p\le p^*+i+r_i$ for every $i> p$ (note that $r_i\ge r_p$ since by definition $r$ is a non-decreasing function). \end{itemize} Hence \eqref{equation:induction} holds for $p^*$ and $A^*$, contrary to the maximality of $p$. Thus $|W|\ge t-p$. \end{cproof} Let $\hat{A}$ be an arbitrary set of $t-p$ vertices in $W$ and enumerate them $z_{p+1},...,z_t$. Let $M=A\cup \hat{A}$ and $\overline{M}=B\backslash \hat{A}$. Let $U=M\cup \overline{M}$ and $H=G[U]$. We will apply Lemma \ref{lemma:average1} to $H$ and deduce that $H$ must contain an immersion of $K_t$, which contradicts the assumption that $G$ does not contain an immersion of $K_t$, and so completes the proof of Lemma \ref{lemma:mainmindeg}. We first give some bounds for the function $f$ in $H$. Observe that $f_H(z_i|\overline{M}) = f_{G}(z_i|\overline{M}) \le f_{G}(z_i|B)$ for every $i,1\le i\le p$. Note also that $f_{G}(z_1|B) =0$, and $f_{G}(z_i|B) \le 2p+i$ for every $i,1<i\le p$, and by Claim \ref{cl:W-main}, $$ f_{G}(z_i|B)\le |W|+2p+r_p\le t+2p+r_p$$ for every $i,p<i\le t$ (recall that $|W|\le t$). Thus \begin{equation} \label{eq:HH} \left\{ \begin{array}{ll} f_H(z_i|\overline{M})\le 2p+i \ \ \ \ \ \ \ \ \ \text{ if } i\le p,\\ f_H(z_i|\overline{M})\le t+2p+r_p \ \ \ \text{ if } i>p. \end{array} \right. \end{equation} Also note that $|M|=t$, and from \eqref{eq:B}, $$|\overline{M}|\ge |B|-|\hat{A}|\ge 7t-p+r_p-(t-p)=6t+r_p.$$ \begin{claim}\label{cl:HH} $H$ contains an immersion of $K_t$. \end{claim} \begin{cproof}We consider two cases. \textbf{Case 1:} $p\le t/2$. We have $f_H(z_i|M)\le |M|\le t$ for every $z_i$, and so \begin{align*} \sum_{z_i\in M}f_H(z_i)&\le \sum_{z_i\in M}f_H(z_i|M)+\sum_{z_i\in M}f_H(z_i|\overline{M})\\ &\le t^2+\sum_{1\le i\le p}f_H(z_i|\overline{M})+\sum_{p< i\le t}f_H(z_i|\overline{M})\\ &\le t^2+\sum_{i\le p}(2p+i)+\sum_{p< i\le t}(t+2p+r_p)\\ &\le t^2+3p^2+(t-p)(t+3p)\\ &\le 2t^2+2tp \le 3t^2. \end{align*} Since $2p\le t$, we have $$\max_{z_i\in M}f_H(z_i)\le t+\max_{z_i\in M}f_H(z_i|\overline{M})\le t+(t+2p+r_p)\le 3t+r_p.$$ Note that $|U|=|M|+|\overline{M}|\ge 7t+r_p$. Hence $$\sum_{z_i\in M}f_H(z_i)\le 3t^2\le \Big(|U|-t-\max_{z_i\in M}f(z_i)\Big)t.$$ Applying Lemma \ref{lemma:average1}, we obtain an immersion of $K_t$ in ${H}$. \textbf{Case 2:} $p>t/2$. Set $q=|\hat{A}|=t-p$, and so $p>q$. The analysis of this case is more involved.
Even though $\sum_{z_i\in M}f_H(z_i)$ is small, $\max_{z_i\in M}f_H(z_i)$ could be very large, and so we cannot apply Lemma \ref{lemma:average1} directly. However, we can still use a similar argument to that in the proof of Lemma \ref{lemma:average1}. We present the argument as an algorithm to explicitly find a series of splitting off of edges to yield a $K_t$ immersion by finding edge disjoint paths of length two or four linking the desired pairs of vertices. Consider an arbitrary loopless multigraph $H'$ with vertex set $U$ and distinct vertices $z_i,z_j\in M$. We first define a subroutine called \textsc{Link$(H',z_i,z_j)$}: the algorithm finds $w\in \overline{M}$ such that $z_iw,wz_j\in E(H')$ and then split off the path $z_iwz_j$ to obtain an edge $z_iz_j$. The algorithm then returns $H'$ after splitting off the path. Such a $w$ can be found by checking all possible choices for $w$. In the case that multiple choices exist for $w$, the algorithm arbitrarily chooses one. In order to successfully run, the algorithm \textsc{Link$(H',z_i,z_j)$} assumes that the input satisfies: \begin{equation} f_{H'}(z_i|\overline{M})+f_{H'}(z_j|\overline{M})< 6t+r_p\le |\overline{M}|, \label{equation:link} \end{equation} Under assumption (\ref{equation:link}), such a $w \in \overline{M}$ must exist and therefore, the algorithm correctly terminates. Note also that $z_i,z_j$ are adjacent after performing \textsc{Link$(H',z_i,z_j)$}, and that the input $H'$ contains the output graph as an immersion. We now present the main algorithm to split off edges of $H$ to obtain a complete graph on $M=A\cup \hat{A}$. Set $H':=H$. The algorithm proceeds in stages. In stage 1, we link all vertices between $\{z_{q+1},...,z_p\}$ and $\hat{A}$. In stage 2, we link each pair of vertices between $\{z_{1},...,z_q\}$ and $\hat{A}$ with multi-edges of order two. Thus after stages 1 and 2, we obtain two edge-disjoint complete bipartite subgraphs, one between $A$ and $\hat{A}$ and another between $\{z_{1},...,z_q\}$ and $\hat{A}$ (the latter will be used later to obtain a complete graph on $\hat{A}$). In stage 3, we link all vertices inside $A$, and then obtain a complete graph on $M$. \begin{mdframed} {\sc Main}($H'$) \begin{enumerate} \item {Start with $s:=p$ and repeat the following whenever $s>q$. \begin{enumerate} \item[] Start with $i:=p+1$ and repeat the following whenever $i\le t$. \textbf{\ \ \ \ \ } \textsc{Link$(H',z_s,z_i)$}, $i:=i+1$. \item[] $s:=s-1$. \end{enumerate} } \item {Start with $s:=q$ and repeat the following whenever $s\ge 1$. \begin{enumerate} \item[] Start with $i:=p+1$ and repeat the following whenever $i\le t$. \textbf{\ \ \ \ \ } \textsc{Link$(H',z_s,z_i)$}, \textsc{Link$(H',z_s,z_i)$}, $i:=i+1$. \item[] $s:=s-1$. \end{enumerate} } \item {Start with $s:=p$ and repeat the following whenever $s\ge 1$. \begin{enumerate} \item[] Start with $i:=s-1$ and repeat the following whenever $i\ge 1$. \textbf{\ \ \ \ \ } \textsc{Link$(H',z_s,z_i)$}, $i:=i-1$. \item[] $s:=s-1$. \end{enumerate} } \item Return $H'$. \end{enumerate} \end{mdframed} Suppose that we have performed \textsc{Main($H'$)} successfully. The output $H'$ contains two edge-disjoint complete bipartite subgraphs, $H_1$ from $A$ to $\hat{A}$, and $H_2$ from $\{z_{1},...,z_q\}$ to $\hat{A}$, and a complete graph $H_3$ on $A$. We now show how to obtain from $H_2$ a complete graph $H_4$ on $\hat{A}$. 
Since $|\hat{A}|=q$, by Vizing Theorem, we can color the edges of an imagined complete graph on $\hat{A}$ by $q$ colors $\{1,2,...,q\}$ so that any two incident edges have different color. Now for every $z_i,z_j\in \hat{A}$, if the edge $z_iz_j$ in that imagined graph has color $s$, then we split off edges $z_iz_sz_j$ in the complete bipartite graph $H_2$ to get an edge $z_iz_j$, and so obtain a complete graph $H_4$ on $\hat{A}$. Hence $H_1\cup H_3\cup H_4$ is a complete graph on $M$. Thus the output $H'$ contains $K_t$ as an immersion, which implies that $H$ contains $K_t$ as an immersion. It only remains to show that we can perform \textsc{Main($H'$)} successfully, which is equivalent to verifying that for each call to the subroutine \textsc{Link($H',z_i,z_j$)} we have that \eqref{equation:link} is satisfied. We omit the subscript $H'$ of $f$ in the rest of this proof. Observe that after performing \textsc{Link($H',z_i,z_j$)}, $f(z_i|\overline{M})$ and $f(z_j|\overline{M})$ each increases by at most 1. Consider step $(s,i)$ of stage 1. The vertex $z_s$ has been linked $i-p-1$ times and so from \eqref{eq:HH} we have $f(z_s|\overline{M})< p+s+i$, and $z_i$ has been linked $p-s-1$ times and so $f(z_i|\overline{M})< t+3p+r_p-s$. Then $$f(z_s|\overline{M})+f(z_i|\overline{M})< t+4p+i+r_p\le 6t+r_p,$$ and so \eqref{equation:link} holds for every step $(s,i)$ of stage 1. From \eqref{eq:HH} and the definition of the algorithm, we have that at the end of stage 1: \begin{equation}\notag \begin{array}{ll} f(z_s|\overline{M})\le 2p+s\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{ if } s\le q,\\ f(z_s|\overline{M})\le (2p+s) + q\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{ if } q<s\le p,\\ f(z_i|\overline{M})\le (t+2p+r_p)+(p-q) \ \ \ \text{ if } i>p. \end{array} \end{equation} Consider step $(s, i)$ of stage 2. The vertex $z_s$ has been linked $2(i-p-1)$ times during stage 2 and so $f(z_s|\overline{M})\le 2p+s+2q-2=2t+s-2$ (since $t=p+q$), and $z_i$ has been linked $2(q-s-1)$ times. Thus $f(z_i|\overline{M})\le 2t+2p+r_p-2s-2$. It follows that $$f(z_s|\overline{M})+f(z_i|\overline{M})\le 4t+2p+r_p-s-4\le 6t+r_p-4.$$ We can perform \textsc{Link$(H',z_s,z_i)$} twice. At the end of stage 2, we have \begin{equation}\notag \begin{array}{ll} f(z_s|\overline{M})\le (2p+s)+2q\le 2t+s\ \ \ \ \text{ if } s\le q,\\ f(z_s|\overline{M})\le (2p+s)+ q\le 2t+s\ \ \ \ \ \ \text{ if } q<s\le p. \end{array} \end{equation} Consider step $(s,i)$ of stage 3. The vertex $z_s$ has been linked $(p-s)+(s-i-1)$ times during stage 3 (in which $p-s$ times with $z_{r},s<r\le p$ and $s-i-1$ with $z_{j},i<j\le s$) and so $f(z_s|\overline{M})< (2t+s)+p-i$, and $z_i$ has been linked $p-s-1$ times and so $f(z_i|\overline{M})<(2t+i)+p-s$. Then $$f(z_s|\overline{M})+f(z_i|\overline{M})< 4t+2p\le 6t+r_p,$$ and so \eqref{equation:link} holds for every step $(s,i)$ of stage 3. Claim \ref{cl:HH} now follows. \end{cproof} This proves Lemma \ref{lemma:mainmindeg}, and so prove Theorem \ref{theorem:mindeg}. \end{proof} \section{Forcing a clique immersion via the chromatic number} \label{section:chromatic} In this section we shall prove Theorem \ref{theorem:chromatic}. Recall that given $\ell \ge 1$, a graph $G$ is \emph{$\ell$-critical} if the chromatic number of $G$ is $\ell$, and deleting any vertex of $G$ results in a subgraph with chromatic number $\ell-1$. A well-known property of critical graphs is that if $G$ is a graph with chromatic number $\ell$, then $G$ contains an $\ell$-critical subgraph. 
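For instance, the complete graph $K_\ell$ is $\ell$-critical and the odd cycles are $3$-critical. Note also that every vertex of an $\ell$-critical graph has degree at least $\ell-1$: if some vertex $v$ had degree at most $\ell-2$, then an $(\ell-1)$-coloring of the graph minus $v$ would leave some color unused on the neighborhood of $v$ and could be extended to $v$, contradicting that the chromatic number is $\ell$.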
Let us restate Theorem \ref{theorem:chromatic}. \begin{theorem}\label{theorem:chromatic2} Every graph with chromatic number at least $3.54t+4$ contains an immersion of $K_t$. \end{theorem} \begin{proof} Assume the theorem is false, and suppose that there exists a graph of chromatic number $\ell \ge 3.54t+4$ which does not immerse $K_t$. Let $G^*$ be an $\ell$-critical subgraph of that graph. Let $v_0$ be a vertex of $G^*$ of minimum degree. By Theorem \ref{theorem:mindeg}, $d_{G^*}(v_0)\le 7t+6$. Let $N=N_{G^*}(v_0)$, and let $G$ be the graph obtained from $G^*$ by deleting $v_0$. It follows that $G$ does not immerse $K_t$. The graph $G^*$ is $\ell$-critical, so $G$ has chromatic number $\ell-1$. Furthermore, for any coloring of $G$, the set $N$ contains at least one vertex of each of the $\ell-1$ colors, since otherwise we could color $G^*$ with $\ell-1$ colors. The proof uses Kempe-chains, introduced by Alfred Kempe in an 1879 attempt to prove that planar graphs are 4-colorable, to build a clique immersion with branch vertices in $N$. Given a coloring of $G$, we call a vertex $v\in N$ a \textit{singleton} if $v$ is the unique vertex in $N$ with its color. Two vertices $v,v'\in N$ of the same color form a \textit{doubleton} if they are the only two vertices with that color in $N$. Let $\mathcal{C}$ be an $(\ell - 1)$-coloring of $G$ which maximizes the number of singletons. Let $X=\{x_1,...,x_\alpha\}$ and $Y=\{y_1,y_1',...,y_\beta,y'_\beta\}$ be the sets of singletons and doubletons, respectively, where $x_i$ has color $a_i$ and $y_i,y'_i$ share color $b_i$. All other colors appear at least three times in $N$. Thus, $\ell - 1$, the number of colors appearing in $N$, is at most $$\alpha +\beta+ \frac{|N|-\alpha-2\beta}{3}=\frac{|N|+2\alpha+\beta}{3}.$$ Since $|N|= d_{G^*}(v_0)\le 7t+6$, we have $$3.54t+3\le \ell-1 \le \frac{7t+6+2\alpha+\beta}{3},$$ and hence, since $3(3.54t+3)-(7t+6)=3.62t+3$, \begin{equation} 2\alpha+\beta\ge 3.62t+3 \label{equation:2alpha}. \end{equation} Given colors $a,b$, an \textit{$(a,b)$-chain} is a path with vertices colored alternately by colors $a$ and $b$. Clearly, if $\{a,b\}\ne \{a',b'\}$, then any $(a,b)$-chain and any $(a',b')$-chain are edge-disjoint. The idea is as follows. We first show that there are many chains with endpoints in $X\cup Y$. Since these chains are edge-disjoint, we can split them off to get a dense graph on $X\cup Y$, and then apply Lemma \ref{lemma:average2} to obtain a $K_t$ immersion, which yields the desired contradiction. \begin{claim}\label{claim:chains} The following hold. \begin{enumerate}[label=(\alph*)] \item For all pairs of distinct colors $a_i, a_j$, there is an $(a_i,a_j)$-chain from $x_i$ to $x_j$. \label{enumerate:sing} \item For any colors $a_i,b_j$, there is an $(a_i,b_j)$-chain from $x_i$ to $y_j$, or from $x_i$ to $y'_j$. \label{enumerate:sing-doub} \item \label{enumerate:doub} For all pairs of distinct colors $b_i,b_j$, one of the following holds: \begin{enumerate}[label=(\roman*)] \item \label{enumerate:alph1} there exist two edge-disjoint $(b_i,b_j)$-chains linking $y_i$ to $y_j$ and $y'_i$ to $y'_j$; \item \label{enumerate:alph2} there exist two edge-disjoint $(b_i,b_j)$-chains linking $y_i$ to $y'_j$ and $y'_i$ to $y_j$; \item \label{enumerate:alph3} there exist $(b_i,b_j)$-chains from each of $y_i,y_i'$ to each of $y_j,y_j'$, but they cannot be chosen edge-disjoint. \end{enumerate} \end{enumerate} \end{claim} \begin{cproof} For every color $a$, let $V_{a}\subseteq V(G)$ be the set of all vertices of color $a$ in $\mathcal{C}$.
To prove \ref{enumerate:sing}, suppose that there exist two distinct colors $a_i,a_j$ such that there is no $(a_i,a_j)$-chain from $x_i$ to $x_j$. Then $x_i,x_j$ are disconnected in $G[V_{a_i}\cup V_{a_j}]$. Let $U$ be the connected component containing $x_i$ in $G[V_{a_i}\cup V_{a_j}]$. We exchange the colors $a_i$ and $a_j$ on all vertices in $U$ and obtain a new coloring $\mathcal{C}'$ of $G$. Clearly $\mathcal{C}'$ is a proper coloring of $G[V_{a_i}\cup V_{a_j}]$, and so a proper coloring of $G$. Now both $x_i$ and $x_j$ have color $a_j$, so no vertex of $N$ has color $a_i$ in $\mathcal{C}'$, contrary to the fact that $N$ contains all colors for every $(\ell-1)$-coloring of $G$. To prove \ref{enumerate:sing-doub}, the same argument works. Suppose that there exist two colors $a_i,b_j$ such that there is no $(a_i,b_j)$-chain from $x_i$ to $\{y_j,y_j'\}$. Then $x_i$ is disconnected from $\{y_j,y_j'\}$ in $G[V_{a_i}\cup V_{b_j}]$. Let $U$ be the connected component containing $x_i$ in $G[V_{a_i}\cup V_{b_j}]$. We exchange the colors $a_i$ and $b_j$ on all vertices in $U$ and obtain a new coloring $\mathcal{C}'$ of $G$. Then $\mathcal{C}'$ is a proper coloring of $G$ in which fewer colors appear on $N$ than in $\mathcal{C}$, contrary to the fact that $N$ contains all colors for every $(\ell-1)$-coloring of $G$. To prove \ref{enumerate:doub}, we first prove that \begin{enumerate}[label=(\alph*)]\setcounter{enumi}{3} \item \label{en:d} for every pair of distinct colors $b_i,b_j$, there is a $(b_i,b_j)$-chain from $y_i$ to $y_j$, or from $y_i$ to $y'_j$. \end{enumerate} Suppose that there exist two distinct colors $b_i,b_j$ such that there is no $(b_i,b_j)$-chain from $y_i$ to $\{y_j,y_j'\}$. Then $y_i$ is disconnected from $\{y_j,y_j'\}$ in $G[V_{b_i}\cup V_{b_j}]$. Let $U$ be the connected component containing $y_i$ in $G[V_{b_i}\cup V_{b_j}]$. We exchange the colors $b_i$ and $b_j$ on all vertices in $U$ and obtain a new coloring $\mathcal{C}'$ of $G$. Then $\mathcal{C}'$ is a proper coloring of $G$. If $y_i'\in U$, then $\mathcal{C}'$ has no vertex of color $b_i$ in $N$, contrary to the fact that $N$ contains all colors for every $(\ell-1)$-coloring of $G$. If $y_i'\notin U$, then $\mathcal{C}'$ has exactly one vertex of color $b_i$ in $N$, and so has more singletons than $\mathcal{C}$, which contradicts our choice of $\mathcal{C}$ to maximize the number of singletons. We now show how \ref{en:d} implies \ref{enumerate:doub}. By \ref{en:d}, for every pair of distinct colors $b_i,b_j$ there is a $(b_i,b_j)$-chain from $y_i$ to $\{y_j,y_j'\}$ and another $(b_i,b_j)$-chain from $y_i'$ to $\{y_j,y_j'\}$. If one chain goes to $y_j$ and the other goes to $y_j'$, then there are three possibilities. First, these chains are edge-disjoint and link $y_i,y_j$ and $y_i',y_j'$; then \ref{enumerate:alph1} holds. Second, these chains are edge-disjoint and link $y_i,y_j'$ and $y_i',y_j$; then \ref{enumerate:alph2} holds. Third, they are not edge-disjoint; then all of $\{y_i,y_i',y_j,y_j'\}$ are connected by these two chains, and \ref{enumerate:alph3} holds. Otherwise, say both chains go from $y_i,y_i'$ to $y_j$. Then by \ref{en:d}, there is a $(b_i,b_j)$-chain from $y_j'$ to either $y_i$ or $y_i'$. Hence all of $\{y_i,y_i',y_j,y_j'\}$ are connected by some $(b_i,b_j)$-chains, and \ref{enumerate:alph3} holds. \end{cproof} For every pair of colors, we fix a subgraph based on the appropriate outcome of Claim \ref{claim:chains}.
For every $i, j$, $1 \le i < j \le \alpha$, fix $C_a(i,j)$ to be an $(a_i, a_j)$-chain from $x_i$ to $x_j$. For all $i, j$, $1 \le i \le \alpha$, $1 \le j \le \beta$, fix $C_b(i,j)$ to be an $(a_i, b_j)$-chain from $x_i$ to either $y_j$ or $y_j'$. Let $i, j$ be such that $1 \le i < j \le \beta$; one of \ref{enumerate:alph1} - \ref{enumerate:alph3} holds for the colors $b_i$ and $b_j$. If either \ref{enumerate:alph1} or \ref{enumerate:alph2} holds, fix $C_c(i, j)$ to be the subgraph consisting of two edge disjoint $(b_i, b_j)$-chains linking $\{y_i, y_i'\}$ and $\{y_j, y_j'\}$. If \ref{enumerate:alph3} holds, fix $C(i,j)$ to be an edge minimal subgraph containing $(b_i, b_j)$-chains linking each of $y_i, y_i'$ to each of $y_j, y_j'$. For $i, j$, $1 \le i < j \le \beta$, we say that $C_c(i,j)$ has one of 3 \emph{types}, namely \ref{enumerate:alph1}, \ref{enumerate:alph2}, or \ref{enumerate:alph3}, depending on which outcome of \ref{enumerate:doub} holds. Note that $C_a(i,j)$, $C_b(i,j)$, and $C_c(i,j)$ are all pairwise edge disjoint. If we split off all the possible edge disjoint paths contained in subgraphs from the previous paragraph, it will not necessarily be the case that we will have sufficient edges on $X\cup Y$ to apply Lemma \ref{lemma:average2}. To get around this problem, we focus instead on the vertex set $X \cup \{y_1, \dots, y_\beta\}$. The subgraphs $C_c(i,j)$ of type \ref{enumerate:alph1} or type \ref{enumerate:alph3} contain a path which can be split off to yield the edge $y_iy_j$. Moreover, if we flip the labels $y_i$ and $y_i'$, every $C_c(i,j)$ subgraph of type \ref{enumerate:alph2} becomes a $C_c(i,j)$ subgraph of type \ref{enumerate:alph1} (and vice versa). Thus, we can increase the density of the resulting graph on $X \cup \{y_1, \dots, y_\beta\}$ by flipping the appropriate pairs of labeles $y_i$, $y_i'$. Unfortunately, this greedy approach will still not yield enough edges on $X \cup \{y_1, \dots, y_\beta\}$ to apply Lemma \ref{lemma:average2}. To further increase the final edge density, we will group together multiple $C_c(i,j)$ subgraphs of type \ref{enumerate:alph2} to split off paths and add further edges to the set $\{y_1, \dots, y_\beta\}$. The remainder of the argument carefully orders how the subgraphs are grouped together so that when we split them off and get as dense a subgraph as possible on the vertex set $X \cup \{y_1, \dots, y_\beta\}$. We begin by defining the subgraphs $G_1$, $G_2$ and the auxiliary graph $H$ as follows. Split off all paths of the form $C_a(i, j)$, $C_b(i, j)$, and the two edge disjoint $\{y_iy_i'\}-\{y_j,y_j'\}$-paths contained in the subgraphs $C_c(i,j)$ of type \ref{enumerate:alph1} and \ref{enumerate:alph2}. Let $G_1$ the graph with vertex set $V(G)$ and edge set the set of all new edges arising from splitting off these paths. Let $G_2$ be the subgraph of $G$ with vertex set $V(G)$ edge set the union of $E(C_c(i,j))$ for all subgraphs $C_c(i,j)$ of type \ref{enumerate:alph3}. Observe that $G_1\cup G_2$ is an immersion of $G$ and therefore does not immerse $K_t$. Clearly, $G_1[X]$ is a complete graph obtained from splitting off all the subgraphs $C_a(i,j)$, and so \begin{equation}\label{claim:alpha} \alpha=|X|\le t-1. \end{equation} We define an auxiliary graph $H$ by replacing each pair of vertices $y_i,y_i'$ with a single vertex $z_i$, and we color edges of incident with $z_i$ to describe the behavior of $y_i,y_i'$. 
Precisely, let $H$ be a graph with vertex set $X \cup Z$ where $Z=\{z_1,...,z_\beta\}$ and edge set \begin{align*} E(H) = \{x_iz_j:&\ 1\le i \le \alpha,\ 1 \le j \le \beta\}\\ \cup\ \{z_iz_j:&\ 1 \le i < j \le \beta \text{ and $C_c(i,j)$ is not of type \ref{enumerate:alph3}}\}. \end{align*} The edges of $H$ are improperly colored by two colors \textit{odd, even} as follows: \begin{itemize} \item $x_iz_j$ is even if $x_iy_j\in E(G_1)$, and is odd if $x_iy'_j\in E(G_1)$. \item $z_iz_j$ is even if $y_iy_j,y_i'y_j'\in E(G_1)$, and is odd if $y_iy_j',y_i'y_j\in E(G_1)$. \end{itemize} To perform a \emph{swap} at a vertex $z_i$, we exchange the colors of all edges incident with $z_i$ in $H$; a swap is equivalent to switching the labels of $y_i$ and $y_i'$ in $G_1\cup G_2$. To \emph{swap} a set $S\subseteq Z$, we swap the vertices in $S$ sequentially in an arbitrarily chosen order. One easily checks that swapping a set $S$ is equivalent to switching the color of every edge between $S$ and $V(H)\backslash S$. A triangle in $H$ is \textit{odd} if it has an odd number of odd-edges. A key property of odd-triangles is that an odd-triangle remains odd after any swap. Each odd-triangle has either three vertices in $Z$ or exactly two vertices in $Z$ -- call them type 1 and type 2 odd-triangles, respectively. Given a type 1 odd-triangle $z_iz_jz_k$, the set of edges in $G_1$ with endpoints in $\{y_i,y_i',y_j,y'_j,y_k,y'_k\}$ are called the \emph{corresponding edges} of $z_iz_jz_k$. Similarly, given a type 2 odd-triangle $x_iz_jz_k$, the set of edges in $G_1$ with endpoints in $\{x_i,y_j,y_j',y_k,y'_k\}$ are called the \emph{corresponding edges} of $x_iz_jz_k$. Clearly, the sets of corresponding edges of two edge-disjoint odd-triangles are disjoint. In Figure \ref{fig}, we describe all possibilities (up to permutation of indices) for the set of corresponding edges of a type 1 odd-triangle (upper figures) and of a type 2 odd-triangle (lower figures). \begin{figure} \caption{Possibilities of corresponding edges of odd-triangles.} \label{fig} \end{figure} Looking at Figure \ref{fig}, we can easily verify the following. \begin{enumerate}[label=(\Alph*)] \item \label{enumerate:Alph1} If $z_iz_jz_k$ is an odd-triangle of type 1, we can split off its corresponding edges to obtain the edges $y_iy_j,y_jy_k,y_ky_i$. \item \label{enumerate:Alph3} If $x_iz_jz_k$ is an odd-triangle of type 2, we can split off its corresponding edges to obtain the edge $y_jy_k$. \item \label{enumerate:Alph2} If $x_iz_jz_k$ is an odd-triangle of type 2, we can alternatively split off its corresponding edges to obtain two edges from the set $\{x_iy_j,y_jy_k,y_kx_i\}$ (exactly which two edges depends on which case from Figure \ref{fig} we find ourselves in). \end{enumerate} Let $H_1$ be a graph obtained from $H$ by removing an (inclusion-wise) maximal set $\mathcal{T}_1$ of pairwise edge-disjoint odd-triangles of type 1, and let $H_2$ be a graph obtained from $H_1$ by removing an (inclusion-wise) maximal set $\mathcal{T}_2$ of pairwise edge-disjoint odd-triangles of type 2. In the following claims, we employ the assumption that $G$ does not contain a $K_t$-immersion to bound the degrees of vertices in $H_1[Z]$ and $H_2$. \begin{claim}\label{claim:odd1} $d_{H_1[Z]}(z)< t$ for every $z\in Z$. \end{claim} \begin{cproof} Suppose for a contradiction that there exists $z\in Z$ such that $d_{H_1[Z]}(z)\ge t$. Let $M_o$ (respectively, $M_e$) be the set of vertices adjacent to $z$ in $H_1[Z]$ by an odd-edge (respectively, by an even-edge). Then $|M_o|+|M_e|=d_{H_1[Z]}(z)\ge t$.
Every edge $uv$ in $H_1[Z]$ with $u,v\in M_o$ ($u,v\in M_e$, respectively) must be even; otherwise, $uvz$ is an odd-triangle of type 1, contradicting the maximality assumption on ${\cal T} _1$. Similarly, every edge $uv$ in $H_1[Z]$ with $u\in M_o$ and $v\in M_e$ must be odd. We now swap $M_o$, and then the new graph $H_1[M_o\cup M_e]$ contains only even-edges. Let $M=\{y_i:z_i\in M_o\cup M_e\}$. Then $|M|=|M_o|+|M_e|\ge t$. For every odd-triangle in $\mathcal{T}_1$, we split off corresponding edges in $G_1$ by method \ref{enumerate:Alph1} to get $y_iy_j,y_jy_k,y_ky_i$. Then for any distinct vertices $y_i,y_j\in M$, we have \begin{itemize} \item if $z_iz_j\in H_1$, then $z_iz_j$ is even, and hence $y_iy_j\in G_1$. \item if $z_iz_j\in H\backslash H_1$, then $z_iz_j$ belongs to some odd-triangle in $\mathcal{T}_1$, and we showed above that we can obtain $y_iy_j$ by splitting off edges of $G_1$ by method \ref{enumerate:Alph1}. \item if $z_iz_j\notin H$, then $C_c(i,j)$ is of type \ref{enumerate:alph3} and so there exists a $y_i - y_j$ path in $C_c(i,j)$ which can be split off to yield the edge $y_iy_j$. \end{itemize} We end up with a complete graph on $M$, and so conclude that $G_1\cup G_2$ contains $K_t$ as an immersion (since $|M|\ge t$), which is a contradiction. \end{cproof} \begin{claim}\label{claim:odd2} $d_{H_2}(x)<t$ for every $x\in X$. \end{claim} \begin{cproof} The proof is quite similar to the proof of Claim \ref{claim:odd1}. We suppose that there exists $x\in X$ such that $d_{H_2}(x)\ge t$. Note that $X$ is a stable set in $H$, and so all neighbors of $x$ in $H_2$ are in $Z$. Let $M_o$ ($M_e$) be the set of vertices adjacent to $x$ in $H_2$ by an odd-edge (by an even-edge, respectively). Then $M_o\cup M_e\subseteq Z$ and $|M_o|+|M_e|\ge t$. Every edge $uv$ in $H_2$ with $u,v\in M_o$ ($u,v\in M_e$, respectively) must be even; otherwise, $uvx$ is an odd-triangle of type 2, contradicting the maximality assumption of ${\cal T} _2$. Similarly, every edge $uv$ in $H_2$ with $u\in M_o$ and $v\in M_e$ must be odd. We now swap $M_o$. The new graph $H_2[M_o\cup M_e]$ contains only even-edges. Let $M=\{y_i:z_i\in M_o\cup M_e\}$. Then $|M|=|M_o|+|M_e|\ge t$. For every odd-triangle in $\mathcal{T}_1$, we split off corresponding edges in $G_1$ by method \ref{enumerate:Alph1} to get $y_iy_j,y_jy_k,y_ky_i$. For every odd-triangle in $\mathcal{T}_2$, we split off corresponding edges in $G_1$ by method \ref{enumerate:Alph3} to get $y_iy_j$. Then for any distinct vertices $y_i,y_j\in M$, we have \begin{itemize} \item if $z_iz_j\in H_2$, then $z_iz_j$ is even, and hence $y_iy_j\in G_1$. \item if $z_iz_j\in H_1 - E(H_2)$, then $z_iz_j$ belongs to some odd-triangle in $\mathcal{T}_2$, and we showed above that we can obtain $y_iy_j$ by splitting off edges of $G_1$ by method \ref{enumerate:Alph3}. \item if $z_iz_j\in H - E(H_1)$, then $z_iz_j$ belongs to some odd-triangle in $\mathcal{T}_1$, and we showed above that we can obtain $y_iy_j$ by splitting off edges of $G_1$ by method \ref{enumerate:Alph1}. \item if $z_iz_j\notin H$, we split off a $y_i - y_j$ path in $C_c(i,j)$ in $G_2$ to obtain $y_iy_j$. \end{itemize} We end up with a complete on $M$, and so $G_1\cup G_2$ contains $K_t$ as an immersion (since $|M|\ge t$), which is a contradiction. \end{cproof} The next claim guarantees that at least half of edges in $H_2$ are even. 
\begin{claim}\label{claim:bigswap} There exists a subset $S$ of vertices such that after swapping $S$ in $H_2$, the number of even-edges in $H_2$ is at least the number of odd-edges. \end{claim} \begin{cproof} We first show that there exits a sequence of swaps resulting in the number of even-edges in $H_2[Z]$ being at least the number of odd-edges $H_2[Z]$. If there is $z\in Z$ such that $z$ is incident with more odd-edges than even-edges in $H_2[Z]$, we swap $z$, then repeat. The process will halt since the number of even-edges in $H_2[Z]$ strictly increases after each swap. When the process halts, every $z\in Z$ is incident with at least as many even-edges as with odd-edges in $H_2[Z]$, and so in total, the number of even-edges in $H_2[Z]$ at least the number of odd-edges $H_2[Z]$. If the number of even-edges from $Z$ to $X$ in $H_2$ is less than the number of odd-edges from $Z$ to $X$ in $H_2$, we swap the set $Z$. After the switch, the number of even-edges from $Z$ to $X$ in $H_2$ is at least the number of odd-edges from $Z$ to $X$ in $H_2$. Moreover, edges in $H_2[Z]$ are not affected by swapping $Z$. Finally, note that there is no edge in $H_2[X]$. Thus at the end of this series of swaps, the number of even-edges in $H_2$ is at least the number of odd-edges in $H_2$, proving the claim. \end{cproof} By Claim \ref{claim:bigswap}, we may assume that at least half of edges in $H_2$ are even. We now split off edges in $G_1\cup G_2$ to obtain a dense graph on $X\cup \{y_1,...,y_\beta\}$ as follows. For every odd-triangle in $\mathcal{T}_1$, we split off its corresponding edges in $G_1$ by method \ref{enumerate:Alph1}. For every odd-triangle in $\mathcal{T}_2$, we split off its corresponding edges in $G_1$ by method \ref{enumerate:Alph2}, which implies that we obtain two of three edges in the set $\{x_iy_j, y_jy_k, y_kx_i\}$. For every pairs $b_i,b_j$ in case \ref{enumerate:alph3}, we also split off a path in $C_c(i,j)$ to get the edge $y_iy_j$ as guaranteed by \ref{enumerate:alph3}. We denote by $\hat G$ the induced subgraph of the new graph on $X\cup \{y_1,...,y_\beta\}$. Note that $\hat G$ is an immersion of $G_1\cup G_2$, and so does not contain an immersion of $K_t$. We will show that $\hat G$ is dense, specifically by counting the number of non-edges in $\hat G$. We first observe that by construction, $\hat G[X]$ is complete. Thus, all non-edges in $\hat G$ arise from odd-edges of $H$ for which the corresponding edge of $\hat G$ cannot be reconstructed through odd-triangles. Observe that $H=(H - E(H_1)) \cup (H_1 - E(H_2)) \cup H_2$. We consider each of the subgraphs $H - E(H_1)$, $H_1 - E(H_2)$, and $H_2$ and how they can contribute non-edges to $\hat G$ separately. \begin{itemize} \item Each odd-triangle $z_iz_jz_k\in \mathcal{T}_1$ contributes zero missing edge to $\hat G$ since we obtain $y_iy_j,y_jy_k,y_ky_i$ by method \ref{enumerate:Alph1}. Hence $H - E(H_1)$ (the union of odd-triangles in $\mathcal{T}_1$) contributes zero missing edge to $\hat G$. \item Each odd-triangle $x_iz_jz_k\in \mathcal{T}_2$ contributes exactly one missing edge to $\hat G$ since we obtain two edges among $x_iy_j,y_jy_k,y_kx_i$ by method \ref{enumerate:Alph2}. Hence $H_1 - E( H_2)$ (the union of odd-triangles in $\mathcal{T}_2$) contributes $|\mathcal{T}_2|$ missing edge to $\hat G$. \item Each odd-edge (even-edge) in $H_2$ contributes exactly one (zero, respectively) missing edge to $\hat G$. Hence $H_2$ contributes at most $|E(H_2)|/2$ missing edges to $\hat G$ by Claim \ref{claim:bigswap}. 
\end{itemize} We conclude that the number of missing edges in $\hat G$ is at most $|E(H_2)|/2+ |\mathcal{T}_2|$. We next give an explicit bound for the number of missing edges in $\hat G$. \begin{claim}\label{cl:df} The number of missing edges in $\hat G$ is at most $(\alpha\beta+\alpha t+\beta t)/4$. \end{claim} \begin{cproof} Let $p= |E(H_2)|/2$ and $q=|\mathcal{T}_2|$. Then the number of missing edges in $\hat G$ is at most $p+q$. By Claim \ref{claim:odd1}, $H_1[Z]$ has $\beta$ vertices and maximum degree less than $t$, and so $|E(H_1[Z])|< \beta t/2$. Hence \begin{align*} 2p+3q &\le |E(H_2)|+|E(H_1 - E(H_2))|\\ &= |E(H_1)|\\ &= |X||Z|+\big|E(H_1[Z])\big|\\ &\le \alpha\beta+\beta t/2. \end{align*} Claim \ref{claim:odd2} states that every $x\in X$ is adjacent to fewer than $t$ vertices of $Z$ in $H_2$; since $x$ is adjacent to all of $Z$ in $H_1$, it is adjacent to at least $|Z|-t$ vertices of $Z$ in $H_1 - E( H_2)$. This implies that for every $x\in X$, there are at least $(|Z|-t)/2=(\beta-t)/2$ odd-triangles in $\mathcal{T}_2$ containing $x$ (since $H_1 - E(H_2)$ is the union of the odd-triangles in $\mathcal{T}_2$, and each such triangle contains exactly two edges incident with $x$). This means that $$q=|\mathcal{T}_2|\ge |X|(\beta-t)/2= \alpha(\beta-t)/2.$$ Hence $$p+q = \frac{(2p+3q)-q}{2}\le \frac{(\alpha\beta+\beta t/2)-\alpha(\beta-t)/2}{2}=\frac{\alpha\beta+\alpha t+\beta t}{4}.$$ Hence the number of missing edges in $\hat G$ is at most $(\alpha\beta+\alpha t+\beta t)/4$. \end{cproof} We next show that if $|V(\hat G)|=\alpha+\beta$ is large, then we can apply Lemma \ref{lemma:average2} to conclude that $\hat G$ contains an immersion of $K_t$, a contradiction. Hence $\alpha+\beta$ is small, which, together with \eqref{claim:alpha}, contradicts \eqref{equation:2alpha}, and the proof of Theorem \ref{theorem:chromatic} is complete. \begin{claim}\label{claim:alphabeta} $\alpha+\beta<2.62(t+1)$. \end{claim} \begin{cproof} Let $n=|V(\hat G)|=\alpha+\beta$ and suppose for a contradiction that $n\ge2.62(t+1)$. Let $\gamma=\frac{1}{n}\sum_{v\in V(\hat G)}f_{\hat G}(v)$. Then $n\gamma/2$ is the number of missing edges in $\hat G$, and so by Claim \ref{cl:df} we have \begin{equation} 2\gamma\le\frac{\alpha\beta+\alpha t+\beta t}{n}=\frac{\alpha\beta}{n} +t \label{equation:ff}. \end{equation} If $\gamma<n/4$, then by Lemma \ref{lemma:average2}, $\hat G$ contains an immersion of $K_{t'}$, where $t'=\lfloor n/2\rfloor\ge t$. Hence $\hat G$ contains an immersion of $K_t$, a contradiction. Otherwise, since $\alpha\beta \le (\alpha+\beta)^2/4=n^2/4$, we have $2\gamma\le n/4 +t<n$ (the last inequality holds since $n\ge 2.62(t+1)>4t/3$). Thus by applying Lemma \ref{lemma:average2}, $\hat G$ contains an immersion of $K_{t'}$, where $t'=\lfloor n-2\gamma \rfloor> n-2\gamma-1$. We conclude that $n-2\gamma-1<t$, since $\hat G$ does not contain $K_t$ as an immersion. Recall that by \eqref{claim:alpha} and $n\ge 2.62(t+1)$, we have $\alpha\le t-1<n/2$. For every $x$ such that $\alpha<x<n/2$, we have $$\alpha\beta =\alpha(n-\alpha) <x(n-x).$$ Since $\alpha<t+1<n/2$, we can choose $x:=t+1$, and so \begin{align*} (n-2\gamma-1)-t &\ge n-\bigg(\frac{\alpha\beta}{n}+t\bigg)-t-1\\ &\ge n-\frac{\alpha\beta}{n}-2x\\ &> \frac{n^2-3nx+x^2}{n}. \end{align*} The assumption of the claim is that $n\ge 2.62x$, and hence $n^2-3nx+x^2\ge 0$ (the larger root of $n^2-3nx+x^2=0$, viewed as a quadratic in $n$, is $\frac{3+\sqrt 5}{2}x<2.62x$). This gives $n-2\gamma-1> t$, which contradicts the inequality $n-2\gamma-1<t$ obtained above. This proves the claim. \end{cproof} Combining Claim \ref{claim:alphabeta} with \eqref{claim:alpha}, we obtain $$2\alpha+\beta=\alpha+(\alpha+\beta)< (t-1)+2.62(t+1)<3.62t+3,$$ which contradicts \eqref{equation:2alpha}. This completes the proof of Theorem \ref{theorem:chromatic}.
\end{proof} \section{Immersion in graphs with no stable set of size 3} \label{section:2.5} We begin by reformulating Theorem \ref{theorem:2.5}. \begin{theorem}\label{theorem:2.5.2} For all $t \ge 1$, every graph $G$ with at least $5t$ vertices and no stable set of size three has a strong immersion of $K_{2t}$. \end{theorem} \begin{proof} Assume that the theorem is false, and pick a counterexample $G$ which minimizes $|V(G)| + |E(G)|$; thus, for some $t\ge 0$, the graph $G$ has at least $5t+5$ vertices, no stable set of size three, and no strong immersion of $K_{2t+2}$. Since every graph on at least five vertices with no independent set of size three contains an edge, we may assume that $t \ge 1$. By minimality, we may assume that $n = |V(G)| = 5t+5$. Furthermore, since $G-e$ does not contain a strong immersion of $K_{2t+2}$ for any edge $e$, by minimality it follows that deleting any edge creates a stable set of size three. All index arithmetic in the following proof is done mod 5. \begin{claim} $G$ contains an induced cycle of length $5$. \end{claim} \begin{cproof} If $G$ were a disjoint union of cliques then, since it contains no stable set of size three, it would be a disjoint union of at most two cliques. One of the two cliques has at least $\lceil n/2\rceil \ge 2t+2$ vertices, and so $G$ contains a strong $K_{2t+2}$-immersion, a contradiction. Thus $G$ is not a disjoint union of cliques, and there exist two adjacent vertices $a_1,a_2$ such that $N(a_1)\ne N(a_2)$. Without loss of generality, we may suppose that $N(a_2)\backslash N(a_1)\ne \emptyset$ and let $a_3\in N_G(a_2)\backslash N_G(a_1)$. This gives $a_1a_3\notin E$ and $a_2a_3\in E$. Observe that there is $a_4$ with $a_1a_4,a_2a_4\notin E$; otherwise, we could remove the edge $a_1a_2$ without creating any stable set of size three, which contradicts the minimality of $G$. By the same argument, there is $a_5$ with $a_2a_5,a_3a_5\notin E$. Since $G$ does not contain any stable set of size three and $a_1a_4,a_1a_3\notin E$, we have $a_3a_4\in E$. Similarly, $a_1a_5,a_4a_5\in E$. Thus $a_1a_2a_3a_4a_5$ forms an induced cycle of length 5 in $G$. \end{cproof} Let $C=\{a_i:1\le i\le 5\}$ induce a cycle of length five, and let $U=V\backslash C$. Then $G[U]$ has $n-5 = 5t$ vertices and $G[U]$ contains no stable set of size three. By minimality of $G$, $G[U]$ contains a strong immersion of $K_{2t}$ with some set of branch vertices $M$. Let $Q=U\backslash M$, and for every $i, 1\le i\le 5$, let $M_i$ be the set of vertices in $M$ not adjacent to $a_i$. In the following claim, we show that if there are two large disjoint sets $X_1,X_3$ in $Q$ with suitable properties, then for every $v\in M$ we can split off a path of the form $a_1xa_iv$ (of length three) with $x\in X_1$ and $i\in\{2,4,5\}$ to get the edge $a_1v$, and similarly to get the edge $a_3v$, and so get a strong clique immersion of size $2t+2$ on $M\cup \{a_1,a_3\}$, which is a contradiction. \begin{claim}\label{claim:2.5X} Suppose that there are disjoint sets $X_1,X_3\subseteq Q$ satisfying \begin{enumerate}[label=(\roman*)] \item \label{enum:2.5.2} $|X_1|\ge |M_1|$, and $|X_3|\ge |M_3|$; \item \label{enum:2.5.1} for every $x\in X_1$, we have $xa_1,xa_5\in E$ and either $xa_2\in E$ or $xa_4\in E$; and \item \label{enum:2.5.3} for every $x\in X_3$, we have $xa_3,xa_4\in E$ and either $xa_2\in E$ or $xa_5\in E$. \end{enumerate} Then $G$ has a strong immersion of $K_{2t+2}$, where the set of branch vertices is $M\cup \{a_1,a_3\}$. \end{claim} \begin{cproof} Let $E_1$ be the set of edges in $G$ from $C$ to $M_1\cup X_1$.
We wish to split off paths in $E_1$ to obtain edges from $a_1$ to every vertex in $M_1$. The process of splitting off is as follows. \begin{itemize} \item Arbitrarily pair each vertex $v\in M_1$ with a vertex $x_v\in X_1$ such that $x_v\ne x_{v'}$ for every $v\ne v'$ (such a choice of $x_v$ exists by \ref{enum:2.5.2}). \item For every $v\in M_1$, note that $va_4\in E$ (otherwise, $\{a_1,v,a_4\}$ is a stable set of size three) and either $va_2\in E$ or $va_5\in E$ (otherwise, $\{a_2,v,a_5\}$ is a stable set of size three). If $va_5\in E$, we split off the path $va_5x_va_1$ to get the edge $va_1$. \item Otherwise, $va_4\in E$ and $va_2\in E$. By \ref{enum:2.5.1}, either $x_va_2\in E$ or $x_va_4\in E$. If $x_va_2\in E$, we split off the path $va_2x_va_1$ to get the edge $va_1$. Otherwise, we split off the path $va_4x_va_1$ to get the edge $va_1$. \end{itemize} Note that in this process we only use edges of $E_1$, and at the end we obtain all edges from $a_1$ to $M_1$, and so all edges from $a_1$ to $M$. Let $E_3$ be the set of edges in $G$ from $C$ to $M_3\cup X_3$. Note that $E_1\cap E_3=\emptyset$ (indeed, $M_1\cap M_3=\emptyset$ since $a_1a_3\notin E$ and $G$ has no stable set of size three, while $X_1$ and $X_3$ are disjoint subsets of $Q$), and hence we can split off paths in $E_3$ in the same manner to obtain all edges from $a_3$ to $M_3$, and so all edges from $a_3$ to $M$. By minimality, we can split off edges of $G[U]$ to obtain a $K_{2t}$ on $M$. Note that $E_1$, $E_3$ and $E(G[U])$ are pairwise disjoint, so we never split off an edge twice. By splitting off $a_1a_2,a_2a_3$, we obtain $a_1a_3$, and hence obtain a complete graph on $M\cup\{a_1,a_3\}$. Clearly, all split off paths are internally edge-disjoint from $M\cup\{a_1,a_3\}$. Hence $G$ contains a strong immersion of $K_{2t+2}$, where the set of branch vertices is $M\cup\{a_1,a_3\}$, a contradiction. \end{cproof} To reach the contradiction, it only remains to show that such sets $X_1,X_3$ exist, up to shifting indices. For every $i,1\le i\le 5$, let $A_i$ be the set of non-neighbors of $a_i$ in $G[U]$. Note that $A_i\cup\{a_{i-2},a_{i+2}\}$ is a clique, and so $|A_i|\le 2t$ since $G$ does not contain any $K_{2t+2}$-immersion. Also, since $G$ contains no stable set of size three and $a_ia_{i+2}\notin E$, we have $A_i\cap A_{i+2}=\emptyset$ for every $i$. As discussed above, we wish to find sets $X_1,X_3$ satisfying Claim \ref{claim:2.5X}. One might hope to choose $X_1:=A_3\cap Q$ and $X_3:=A_1\cap Q$; these sets indeed satisfy \ref{enum:2.5.1} and \ref{enum:2.5.3} but may fail to meet \ref{enum:2.5.2} in the case that either $|A_1|$ or $|A_3|$ is small. This problem can be avoided by enlarging $A_1$ and $A_3$, which leads to the following definition of $A_1',\dots,A_5'$. Let $A_1',...,A_5'$ be subsets of $U$ such that $\sum_{i=1}^{5}|A'_i|$ is as large as possible, and \begin{equation} \label{eq:2.5} \left\{\begin{array}{l} A_i\subseteq A_i',\\ |A_i'|\le 2t,\\ A_i'\cap A'_{i+2}=\emptyset, \end{array}\right. \ \ \ \forall 1\le i\le 5. \end{equation} \begin{claim}\label{claim:sum2t} There exists $i$ such that $|A_i'|=|A_{i+2}'|=2t$. \end{claim} \begin{cproof} Assume the claim is false. Then no two indices $i,i+2$ satisfy $|A'_i|=|A'_{i+2}|=2t$; since any two distinct indices of $\{1,\dots,5\}$ are of the form $\{i,i+1\}$ or $\{i,i+2\}$ modulo 5, at most two indices satisfy $|A'_i|=2t$, and if there are two they are consecutive. In either case there exists $j$ such that $|A'_{j}|,|A'_{j+1}|,|A'_{j+2}|<2t$. Without loss of generality, assume $|A_1'|,|A'_2|,|A'_3|<2t$. For every $i$, let $B_{i,i+1}=A'_i\cap A'_{i+1}$, and $D_i=A'_i\backslash (A'_{i-1}\cup A'_{i+1})$. Then all 10 sets $D_i, B_{i,i+1}$ are pairwise disjoint, and $A'_i=B_{i-1,i}\cup D_i \cup B_{i,i+1}$. Note also that $D_i\cap A'_{i+1}=\emptyset$ and $D_i\cap A'_{i-1}=\emptyset$ for every $i$. Suppose that there exists $v\in U$ such that $v\notin \bigcup_{i=1}^{5}A'_i$.
Then $A_1'\cup\{v\},A_2',...,A_5'$ satisfy (\ref{eq:2.5}), while the sum of their cardinalities is larger, a contradiction. This gives $\bigcup_{i=1}^{5}A'_i=U$. In other words, $$\Big(\bigcup_{i=1}^{5}D_i\Big)\cup \Big(\bigcup_{i=1}^{5}B_{i,i+1}\Big)=U.$$ Since all these sets are pairwise disjoint, we have \begin{equation}\label{eq:2.5.2} \sum_{i=1}^{5}|D_i|+\sum_{i=1}^{5}|B_{i,i+1}|=|U|\ge 5t. \end{equation} Observe that if $|A'_i|<2t$ and there exists $v\in D_{i-1}\cup D_{i+1}$, then $(A'_i\cup\{v\})\cap A'_{i+2}=\emptyset$ and $(A'_i\cup\{v\})\cap A'_{i-2}=\emptyset$. Hence $A'_i\cup\{v\},A'_{i+1},...,A'_{i+4}$ satisfy (\ref{eq:2.5}), violating our choice to maximize the sum of their cardinalities. Hence if $|A_i'|<2t$, then $D_{i-1}=\emptyset$ and $D_{i+1}=\emptyset$. Recall the assumption that $|A'_1|,|A'_2|,|A'_3|<2t$. By the observation in the previous paragraph, we have $D_j=\emptyset$ for every $j$. Hence from (\ref{eq:2.5.2}) we have $\sum_{i=1}^{5}|B_{i,i+1}|\ge 5t$. Also note that $|A_i'|=|B_{i,i-1}|+|D_i|+|B_{i,i+1}|=|B_{i,i-1}|+|B_{i,i+1}|$ for every $i$. This gives $$10t\le2\sum_{i=1}^{5}|B_{i,i+1}|=\sum_{i=1}^{5}|A'_i|<10t,$$ a contradiction. This proves the claim. \end{cproof} Without loss of generality, we may suppose that $|A'_1|=|A'_3|=2t$. \begin{claim}\label{cl:2.5s} Let $X_1=A_3'\cap Q$ and $X_3=A_1'\cap Q$. Then $X_1,X_3$ satisfy conditions in Claim \ref{claim:2.5X}. \end{claim} \begin{cproof} We first show that \ref{enum:2.5.3} holds for $X_3$. Recall that $A_3\subseteq A_3'$, $A_4\subseteq A_4'$ and $A_1'\cap (A_3'\cup A_4')=\emptyset$. Then $A_1'\cap (A_3\cup A_4)=\emptyset$, and so $X_3\cap (A_3\cup A_4)=\emptyset$ since $X_3\subseteq A_1'$. Hence for every $v\in X_3$, we have $va_3\in E$ and $va_4\in E$ (otherwise, $G$ contains a stable set of size three). Note that $B_{5,1}\cap B_{1,2}=\emptyset$. Hence for every $v\in X_3$, either $v\notin B_{5,1}$ or $v\notin B_{1,2}$. If $v\notin B_{5,1}$ then $v\notin A_5'$ (since $v\in A'_1$), and so $v \notin A_5$. This means that $v$ is adjacent to $a_5$. Otherwise, $v\notin B_{1,2}$, and by the same argument, $v$ is adjacent to $a_2$. Hence, \ref{enum:2.5.3} holds for $X_3$. We now show that \ref{enum:2.5.2} holds for $X_3$. Let $M_1'=A_1'\cap M$ and $M_3'=A_3'\cap M$. Then by (\ref{eq:2.5}), we have $M_1\subseteq M_1'$, $M_3\subseteq M_3'$, and $M_1'\cap M_3'=\emptyset$. Besides, $X_3\cap M_1'\subseteq Q\cap M=\emptyset$ and $$X_3\cup M_1'=(A_1'\cap Q)\cup(A_1'\cap M)=A_1'\cap U=A_1'.$$ This gives $|X_3|+|M_1'|=|A_1'|=2t$, and so $$|X_3|=2t-|M_1'|= |M|-|M_1'|=|M\backslash M_1'|\ge |M_3'|\ge |M_3|.$$ Hence \ref{enum:2.5.2} holds for $X_3$. By the same arguments, \ref{enum:2.5.2} and \ref{enum:2.5.1} hold for $X_1$. This proves the claim. \end{cproof} Claims \ref{claim:2.5X} and \ref{cl:2.5s} complete the proof of Theorem \ref{theorem:2.5}. \end{proof} \end{document}
\begin{document} \title{Beals characterization of pseudodifferential operators\ in Wiener spaces} \begin{abstract} \noindent The aim of this article is to prove a Beals type characterization theorem for pseudodifferential operators in Wiener spaces. The definition of pseudodifferential operators in Wiener spaces and a Calder\'on-Vaillancourt type result appear in \cite{AJN}. The set of symbols considered here is the one of \cite{AJN}. The Weyl calculus in infinite dimension considered here emphasizes the role of the Wick bi-symbols. \end{abstract} \parindent=0pt \tableofcontents \parindent = 0 cm \parskip 10pt \baselineskip 15pt \section{Statement of the main result.}\label{s1} In quantum field theory, such as quantum electrodynamics which will be considered in a forthcoming article, the set of states of the quantized field may be chosen as a symmetrized Fock space ${\cal F}_s (H_{\bf C})$ over an Hilbert space $H$. Among the operators acting in such spaces, those coming from the Weyl calculus in infinite dimension and recently introduced in \cite{AJN} (see also in \cite{AJN2} the case of the large but finite dimension) may have applications to modelling the interaction of the quantized field with a fixed particle of spin $1/2$. These applications will be developed in a next article, but we need some properties which are not in \cite{AJN} and that we present it here. We note by $H$ a real separable space and by $H_{\bf C}$ the complexified. The norm of $H$ is noted by $|\cdot |$ and the scalar product of two elements $a$ and $b$ of $H$ is by $a \cdot b$. The norm of an element of $H^2$ is denoted by $|\cdot |$. For all $X = (x ,\xi)$ and $Y = (y ,\eta)$ in $H^2$, we set \begin{equation}\label{1.1} X \cdot \overline Y = (x+i \xi) \cdot (y-i\eta), \qquad \sigma (X , Y) = y\cdot \xi - x\cdot \eta. \end{equation} We recall that ${\cal F}_s (H_{\bf C})$ is the completion of the direct sum of the subspaces ${\cal F}_n$ ($n\geq 0$) where ${\cal F}_0$ is one dimensional and represents the vacuum, while ${\cal F}_1 = H_{\bf C} $ and ${\cal F}_n$ ($n\geq 2$) is the $n-$ fold symmetrized tensor product representing the $n$ particles states. This space is not very convenient for the Weyl calculus since we have to write down integrals but it is isomorphic to some $L^2$ space on a suitable Banach $B$ endowed with a gaussian measure. It is known that, for any separable real Hilbert space $H$ there exists, \hskip 1cm - a Banach space $B$ containing $H$, \hskip 1cm - a gaussian measure $\mu _{B , h }$ with variance $h$ on the $\sigma-$algebra of the Borel sets of $B$, for all $h>0$, satisfying some assumptions we formulate here in saying that $(i, H, B)$ is an abstract Wiener space (where $i$ is the injection from $H$ into $B$). See \cite{G1}\cite{G2}\cite{KU} and \cite{AJN} for precise conditions which should be fullfilled by $B$. See also \cite{G3} (example 2, p. 92) for a standard way of construction of a space $B$ satisfying the assumptions. Identifying $H$ with its dual, one has, \begin{equation}\label{1.2} B' \subset H' = H \subset B. \end{equation} If $H$ is finite dimensional, we have $B= H$ and for all Borel sets $\Omega$ in $H$, \begin{equation}\label{1.3} \mu _{H , h} ( \Omega) = (2\pi h)^{-{\rm dim} (E)/2} \int _{\Omega} e^{-{|y|^2 \over 2h}} dy. \end{equation} In the general case, the symmetrized Fock space ${\cal F}_s (H_{\bf C})$ (\cite{SE},\cite{RS}) is isomorphic to the space $L^2(B, \mu_{B , h/2})$ (see \cite{J}\cite{SI}). 
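A classical example, recalled here only for orientation and not used in the sequel: one may take for $H$ the Cameron-Martin space of absolutely continuous functions $a : [0,1]\rightarrow \R$ with $a(0)=0$ and $a'\in L^2([0,1])$, equipped with the scalar product $a\cdot b = \int_0^1 a'(s)\, b'(s)\, ds$, for $B$ the space of continuous functions on $[0,1]$ vanishing at $0$ with the supremum norm, and for $\mu_{B,h}$ the law of Brownian motion with variance parameter $h$.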
The complexification $H_{\bf C} \subset {\cal F}_s (H_{\bf C}) $ is identified with a closed subspace of $L^2(B, \mu_{B , h/2})$, which in field theory is the subspace corresponding to the states of the field with exactly one particle. The Weyl calculus in infinite dimension of \cite{AJN} associates with suitable functions $F$ on the Hilbert space $H^2$ bounded and unbounded operators in ${\cal F}_s (H_{\bf C})$ (or in $L^2(B, \mu_{B , h/2})$). Let us first recall the assumptions satisfied by the functions $F$. \begin{defi}\label{d1.1} Let $(i, H, B)$ be a Wiener space satisfying (\ref{1.2}). We choose a Hilbert basis $(e_j )_{(j\in \Gamma)}$ of $H$, each vector belonging to $B'$, indexed by a countable set $\Gamma$. Set $u_j = (e_j , 0)$ and $v_j = (0, e_j)$ $(j\in \Gamma)$. A multi-index is a map $(\alpha , \beta )$ from $\Gamma $ into $\N \times \N$ such that $\alpha_j = \beta _j = 0$ except for a finite number of indices. Let $M$ be a nonnegative real number, $m$ a nonnegative integer and $\varepsilon = (\varepsilon_j )_{(j \in \Gamma)}$ a family of nonnegative real numbers. One denotes by $ S_m(M, \varepsilon)$ the set of bounded continuous functions $ F:H^2\rightarrow {\bf C}$ satisfying the following conditions. For every multi-index $(\alpha , \beta)$ such that $0 \leq \alpha_j \leq m$ and $0 \leq \beta_j \leq m$ for all $j\in \Gamma$, the following derivative, \begin{equation}\label{1.4}\partial_x^{\alpha}\partial_{\xi}^{\beta} F = \left [\prod _{j\in \Gamma } \partial _{u_j} ^{\alpha_j} \partial _{v_j} ^{\beta_j}\right ] F \end{equation} is well defined, continuous on $H^2$ and satisfies, for every $(x , \xi)$ in $H^2$, \begin{equation}\label{1.5}\left |\partial_x^{\alpha}\partial_{\xi}^{\beta} F(x , \xi) \right | \leq M \prod _{j\in \Gamma } \varepsilon_j ^{\alpha_j + \beta_j}. \end{equation} \end{defi} For each summable sequence $(\varepsilon_j)$, the first step in \cite{AJN} is to associate with each function $F$ in $S_2(M, \varepsilon)$ a quadratic form $Q_h^{weyl} (F)$ on a dense subspace ${\cal D}$ (see Definition \ref{d2.1} below), and not an operator on the above Hilbert spaces. One may also associate a quadratic form $Q_h^{weyl} (F)$ on ${\cal D}$ with symbols $F$ which are not in the above set, in particular if they are not bounded. To do so, it is sufficient that the two conditions below are satisfied: (H1) The function $F: H^2 \rightarrow {\bf C} $ has a stochastic extension $\widetilde F : B^2 \rightarrow {\bf C} $ in $L^1 ( B^2 , \mu _{B^2 , h/2})$ (see Definition 4.4 of \cite{AJN}, which recalls and adapts a previous definition of L. Gross \cite{G1}). (H2) The action on $|\widetilde F|$ of the following heat operator \begin{equation}\label{1.6} ( H_{h/2} |\widetilde F | ) (X) = \int _{B^2} |\widetilde F(X+Y) | d\mu _{B^2 , h/2} (Y) \hskip 2cm X\in H^2\end{equation} is polynomially bounded, i.e., it satisfies, for some $m\geq 0$ and $C>0$, \begin{equation}\label{1.7} ( H_{h/2} |\widetilde F | ) (X) \leq C (1+ |X|)^m \end{equation} (that is to say that the norm in formula (12) in \cite{AJN} is finite). In Theorem \ref{t2.2}, we recall the construction of $Q_h^{weyl} (F)$ in a slightly simplified way; the construction in \cite{AJN} uses the infinite dimensional analog of Wigner functions, which may be of independent interest. Hypotheses (H1) and (H2) are satisfied if $F$ belongs to $S_2(M, \varepsilon)$ and the sequence $(\varepsilon _j)$ is summable. Inequality (\ref{1.7}) is then satisfied with $C=M$ and $m=0$. See other examples in Section \ref{s2}.
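As a simple illustration of Definition \ref{d1.1} (this example is ours and is not taken from \cite{AJN}), if the sequence $(\varepsilon_j)$ is summable then the function $$ F(x , \xi) = \exp \Big ( i \sum _{j\in \Gamma} \varepsilon_j \, ( e_j \cdot x + e_j \cdot \xi ) \Big ) $$ is bounded and continuous on $H^2$, and $\partial_x^{\alpha}\partial_{\xi}^{\beta} F = \prod_{j\in \Gamma} (i\varepsilon_j)^{\alpha_j+\beta_j} F$, so that (\ref{1.5}) holds with $M=1$; hence $F$ belongs to $S_m(1, \varepsilon)$ for every $m$.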
Next, as shown in \cite{AJN} (Theorem 1.4), if $F$ belongs to $S_2(M, \varepsilon)$ then $Q_h^{weyl} (F)$ is the quadratic form of a bounded operator in $L^2(B, \mu_{B , h/2})$ or, equivalently, of a bounded operator in ${\cal F}_s(H_{\bf C})$. In addition, this operator satisfies, if $0 < h < 1$, \begin{equation}\label{1.12} \Vert Op_h^{weyl} (F) \Vert \leq M \prod _{j\in \Gamma} (1 + 81\pi h S_{\varepsilon} \varepsilon_j ^2) \end{equation} where \begin{equation}\label{1.13} S_{\varepsilon} = \sup_{j\in \Gamma} \max(1, \varepsilon_j ^2 ).\end{equation} The hypothesis (H2) of Theorem 1.4 in \cite{AJN}, which is not mentioned here, is always satisfied if $F$ belongs to $S_2(M, \varepsilon)$ and if the sequence $(\varepsilon_j )$ is summable (Proposition 8.4 in \cite{AJN}). We now have to define and compute commutators of these operators with momentum and position operators. In finite dimension $n$, their compositions and commutators are classically defined as operators from ${\cal S} (\R^n)$ into ${\cal S}' (\R^n)$. In our case, ${\cal S} (\R^n)$ is replaced by the space ${\cal D}$ of Definition \ref{d2.1}. In the absence of an analog of ${\cal S}' (\R^n)$, we prefer instead to use quadratic forms on ${\cal D}$ (see \cite{RS}). We then consider mappings $(f , g) \rightarrow A(f , g)$ on ${\cal D} \times {\cal D}$ that are linear in $f$ and antilinear in $g$. A notion of continuity is given in Section \ref{s2}. One may define two compositions (left and right) of a quadratic form $Q$ on the space ${\cal D}$ of Definition \ref{d2.1} with an operator $A : {\cal D} \rightarrow {\cal D}$ whose formal adjoint $A^{\star}$ also maps ${\cal D}$ into ${\cal D}$. One sets, for all $f$ and $g$ in ${\cal D}$, \begin{equation}\label{a2}( Q \circ A ) (f , g) = Q ( Af , g),\qquad( A \circ Q ) (f , g) = Q ( f , A^{\star} g).\end{equation} One then defines the commutator $[A, Q ]$ and $({\rm ad} A) Q$ as the following quadratic form, \begin{equation}\label{a3}[A, Q ] (f , g) = Q ( f , A^{\star} g ) - Q ( Af , g). \end{equation} Thus, one can define the iterated bracket $({\rm ad} A_1) \dots ({\rm ad} A_n) Q$ if $A_1$, \dots, $A_n$ are operators from ${\cal D}$ into ${\cal D}$. We shall see in Proposition \ref{p2.4} that one may associate with each continuous linear form $G$ on $H^2$ not only a quadratic form $Q_h^{weyl} (G)$, but also an operator $Op_h^{weyl} (G)$ from ${\cal D}$ to ${\cal D}$. This Weyl operator is the Segal field, up to a numerical factor, and may be directly defined in ${\cal F}_s (H_{\bf C})$ using creation and annihilation operators, without using the Weyl calculus. In particular, when $F(x , \xi) = a \cdot x$ with $a$ in $H$, the corresponding Weyl operator will be denoted $Q_h (a)$ (position operator). When $F(x , \xi) = b \cdot \xi $ with $b$ in $H$, the operator will be denoted $P_h(b)$ (momentum operator). If $F$ belongs to $S_m (M, \varepsilon)$ and $G$ is a continuous linear form on $H^2$, then Proposition \ref{p2.6} allows us to extend the following result, which is well known in finite dimension, \begin{equation}\label{a1} [Q_h^{weyl } (F) , Op_h^{weyl } (G)] = {h\over i} Q_h^{weyl } ( \{ F , G \} ).\end{equation} In particular, if $(e_j)$ is the Hilbertian basis of $H$ chosen to define our sets of symbols, then equality (\ref{a1}) gives, $$ [Q_h (e_j) , Q_h^{weyl } (F) ] = - {h\over i} Q_h^{weyl } \left ( {\partial F \over \partial \xi_j} \right ), $$ $$ [P_h (e_j) , Q_h^{weyl } (F) ] = {h\over i} Q_h^{weyl } \left ( {\partial F \over \partial x_j} \right ).
$$ One may iterate and consider iterated commutators, while restricting ourselves to a suitable set of multi-indices. We denote by ${\cal M}_m$ the set of pairs $(\alpha , \beta)$ where $\alpha = (\alpha_j )_{(j\in \Gamma)}$ and $\beta = (\beta_j )_{(j\in \Gamma)}$ are sequences of nonnegative integers such that $\alpha_j = \beta _j =0$ except for a finite number of indices $j$, and such that $\alpha _j \leq m$ and $\beta_j \leq m$ for all $j\in \Gamma$. One associates to each multi-index $(\alpha , \beta)$ the following iterated commutator, $$ ({\rm ad}P_h )^{\alpha} ({\rm ad}Q_h )^{\beta} Q_h ^{weyl} (F) = \prod _{j\in \Gamma} ( ad P_h(e_j) ) ^{\alpha _j} \prod _{k\in \Gamma} ( ad Q_h(e_k) ) ^{\beta _k}Q_h ^{weyl} (F). $$ Iterating the above commutation relations, if $F$ is in $S_m(M, \varepsilon)$ and if $(\alpha , \beta)$ is in ${\cal M}_p$, $p \leq m-2$, $$ ({\rm ad}P_h )^{\alpha} ({\rm ad}Q_h )^{\beta} Q_h ^{weyl} (F) = (-1)^{|\beta |} (h/i) ^{|\alpha + \beta |} Q_h ^{weyl} (\partial_x^{\alpha } \partial_{\xi}^{\beta } F). $$ From Theorem 1.4 in \cite{AJN}, the above Weyl quadratic form is associated with a bounded operator in $L^2(B , \mu_{B , h/2})$, denoted as below and verifying, \begin{equation}\label{1.15}\Vert ({\rm ad}P )^{\alpha} ({\rm ad}Q )^{\beta} Op_h^{weyl}(F) \Vert \leq M \prod _{j\in \Gamma} (1 + 81\pi h S_{\varepsilon} \varepsilon_j ^2) \prod _{j\in \Gamma} (h\varepsilon _j )^{\alpha _j +\beta _j}.\end{equation} Indeed, by (\ref{1.5}), the function $\partial_x^{\alpha } \partial_{\xi}^{\beta } F$ belongs to $S_{m-p}\big(M\prod _{j\in \Gamma}\varepsilon_j^{\alpha_j+\beta_j}, \varepsilon\big)$ with $m-p\geq 2$, so that (\ref{1.12}) applied to this symbol, together with the factor $(h/i)^{|\alpha+\beta|}$ above, gives (\ref{1.15}). The purpose of this work is to prove the converse statement, as Beals \cite{Bea} did in finite dimension (see also \cite{BO1}\cite{BO2} and \cite{BO-C} for adaptations to other classes of symbols in finite dimension). \begin{theo}\label{t1.2} Let $(i, H, B)$ be a Wiener space satisfying (\ref{1.2}). Let $A_h$ be a bounded operator in $L^2(B, \mu_{B, h/2})$. Let $(e_j)$ $(j\in \Gamma)$ be a Hilbertian basis of $H$ consisting of elements of $B'$. Let $M > 0$ and let $(\varepsilon _j ) _{(j \in \Gamma)}$ be a summable sequence of nonnegative real numbers. Let $m \geq 2$. Suppose that, for all $(\alpha , \beta )$ in ${\cal M}_{m+4}$, the commutator $({\rm ad}P )^{\alpha} ({\rm ad}Q )^{\beta} A_h$ (a priori defined as a quadratic form on ${\cal D}$) is bounded in $L^2(B, \mu_{B, h/2})$ and that, \begin{equation}\label{1.15b} \Vert ({\rm ad}P )^{\alpha} ({\rm ad}Q )^{\beta} A_h \Vert \leq M \prod _{j\in \Gamma} (h\varepsilon _j )^{\alpha _j +\beta _j}.\end{equation} Then, if $0 < h < 1$, there exists a function $F_h $ in $S_m(M', \varepsilon)$ with, \begin{equation}\label{1.16} M' = M \prod _{j\in \Gamma} (1 + K S_{\varepsilon} ^ 2 h \varepsilon _j ^2) \end{equation} where $K$ is a universal constant and $S_{\varepsilon}$ is defined in (\ref{1.13}), such that the Weyl operator $Op_h^{weyl} (F_h )$ associated with $F_h$ is equal to $A_h$. \end{theo} Section \ref{s2} introduces various results concerning the Weyl calculus in infinite dimension, intended to be used in an upcoming work. Sections \ref{s3} to \ref{s7} are devoted to the proof of Theorem \ref{t1.2}. Section \ref{s8} applies this theorem to the composition of two operators defined by the Weyl calculus. We show that the composition is again defined by this calculus, but we do not give any result on a possible asymptotic expansion of its symbol, the composition result being used in a forthcoming article.
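Let us finally note, for orientation, the two simplest instances of hypothesis (\ref{1.15b}): $$ \Vert A_h \Vert \leq M \qquad \hbox{and} \qquad \Vert ({\rm ad}\, P_h(e_j) )\, A_h \Vert \leq M h \varepsilon_j \quad (j \in \Gamma), $$ corresponding respectively to $(\alpha , \beta)=(0,0)$ and to $\alpha$ supported on the single index $j$ with $\alpha_j =1$ and $\beta = 0$.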
\section{Weyl calculus in infinite dimension.}\label{s2} \subsection{Coherent states.}\label{s2.A} For $X=(a , b)$ in $H^2$, and all $h>0$, one defines $\Psi_{X ,h}$ the corresponding coherent state (\cite{Ber}\cite{C-R}\cite{F}), they belong to ${\cal F}_s (H_{\bf C})$ and are defined by, \begin{equation}\label{2.1} \Psi_{(a , b) , h} = \sum _{n\geq 0} {e^{-{|a|^2+ |b|^2 \over 4h}} \over (2h)^{n/2} \sqrt {n!} } (a+ib) \otimes \cdots \otimes (a+ib).\end{equation} In view of the isomorphism from ${\cal F}_s (H_{\bf C})$ in $ L^2(B , \mu_{B , h/2})$, each element $a$ of $H \subset {\cal F}_s (H_{\bf C})$ is seen as a function in $L^2(B , \mu_{B , h/2})$ denoted $x \rightarrow \sqrt {h} \ell_a(x)$. When $a$ is in $B' \subset H$, one has $\ell_a(x) = a(x)$. When $a$ is in $H$, it is approximated by a sequence $(a_j)$ in $B'$, we then show that the sequence $\ell_{a_j}$ is a Cauchy sequence in $ L^2(B , \mu_{B , h/2})$ and we denote by $\ell _a$ its limit. With the same isomorphism, the coherent state $ \Psi_{(a , b) , h}$ defined in (\ref{2.1}) becomes, \begin{equation}\label{2.2}\Psi_{X , h} (u) = e^{{1\over h} \ell _{ (a+ib)} (u) -{1\over 2h}|a|^2 - {i\over 2h} a\cdot b},\quad X = (a , b) \in H^2,\quad {\rm a.e.}\ u\in B. \end{equation} We see, for all $X = (x , \xi)$ and $Y = (y, \eta)$, with the notation (\ref{1.1}), that \begin{equation}\label{2.3}< \Psi_{X h} , \Psi _{Yh}> =e^{-{1\over 4h}(|X|^2 +|Y|^2) + {1\over 2h} X \cdot \overline Y }.\end{equation} In particular, \begin{equation}\label{2.4}|< \Psi_{X h} , \Psi _{Yh}>| =e^{-{1\over 4h}|X-Y|^2 }.\end{equation} We call Segal Bargmann transform (\cite{HA}) of $f$ the function \begin{equation}\label{2.5}(T_hf) (X) = { < f , \Psi_{Xh} > \over < \Psi_{0h} , \Psi_{Xh} >},\qquad X\in H^2.\end{equation} We know that $T_hf$ admits a stochastic extension $\widetilde T_hf $ in $L^2(B^2 , \mu _{B^2, h})$ and we know that, $\widetilde T_h$ is a partial isometry from $L^2(B , \mu _{B, h/2})$ into $L^2(B^2 , \mu _{B^2, h})$. \subsection{The space ${\cal D}$ and Wick symbols.}\label{s2B} \begin{defi}\label{d2.1} For all subspaces $E$ of finite dimension in $H$, ${\cal D}_E$ denotes the space of functions $f : B \rightarrow {\bf C}$ such that, i) the function $f$ is written under the form $ \widehat f \circ P_E$, where $\widehat f $ is a continuous function from $E$ in ${\bf C}$ and $P_E$ is the mapping from $B$ in $E$ defined as follows, choosing an orthonormal basis $\{ u_1 , ... u_n \}$ of $E$, \begin{equation}\label{2.6}P_E(x) = \sum _{j=1}^n \ell _{u_j} (x) u_j,\quad a.e. \ x\in B\end{equation} (the map $P_E$ is independent of the chosen basis). ii) the function $E^2 \ni X \rightarrow < f, \Psi_{X h}>$ (scalar product in $L^2(B , \mu_{B , h/2})$) is in the Schwartz space ${\cal S} (E^2)$. We shall denote by ${\cal D}$ the union of all spaces ${\cal D}_E$. \end{defi} We observe that the coherent states belong to ${\cal D}$. The condition ii) is equivalent to say that the function $ \widehat f $ of i) is such that the function \begin{equation}\label{2.7} E \ni u \rightarrow \widehat f (u) e^{-{|u|^2 \over 2h}}\end{equation} belongs to ${\cal S} (E)$. 
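As a quick verification of the observation that coherent states belong to ${\cal D}$: for $X=(a , b)$ with $a$ and $b$ in a finite dimensional subspace $E$ of $H$, formula (\ref{2.2}) shows that $\Psi_{X , h} = \widehat \Psi \circ P_E$ with $\widehat \Psi (u) = e^{{1\over h} (a+ib)\cdot u - {1\over 2h} |a|^2 - {i\over 2h} a \cdot b}$ for $u\in E$, and $$ \widehat \Psi (u)\, e^{-{|u|^2 \over 2h}} = e^{-{1\over 2h} |u-a|^2}\, e^{{i\over h} b\cdot u - {i\over 2h} a\cdot b}, $$ a Gaussian multiplied by a bounded oscillating factor, which belongs to ${\cal S}(E)$. Hence conditions i) and ii) of Definition \ref{d2.1} hold and $\Psi_{X , h}$ is in ${\cal D}_E$.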
One says that a quadratic form $Q$ on ${\cal D}$ is {\it continuous} if, for all $E \subset H$ of finite dimension, there exists $C>0$ and $m\geq 0$ such that, for all $f$ and $g$ in ${\cal D}_E$, \begin{equation}\label{2.8} |Q(f, g)| \leq C I(E, m) (f) I(E, m) (g)\end{equation} where \begin{equation}\label{2.9} I(E, m) (f) = \int _{E^2} |< f, \Psi_{X h}>| (1+|X|)^m dX.\end{equation} One says that a linear mapping $T $ in ${\cal D}$ is continuous if, for all $E\subset H$ of finite dimension, there exists $F\subset H$ of finite dimension such that $f\in {\cal D}_E$ implies $Tf \in {\cal D}_F$ and if, for all integer $m$, there exists $C$ and $m'$ such that, \begin{equation}\label{2.10} I(F, m) (Tf) \leq C I(E , m') (f).\end{equation} We shall recall the definition of the Wick symbol and bi-symbol. If $Q$ is a quadratic form on ${\cal D}$, we denote by $S_h (Q)$ the function defined on $H^2$ by, \begin{equation}\label{2.11} S_h (Q) (X , Y)= { Q( \Psi_{X , h} , \Psi_{Y , h}) \over < \Psi_{X , h} , \Psi_{Y , h} >}.\end{equation} If $Q(f , g) = < Af, g>$, where $A$ is an bounded operator in the Fock space ${\cal F}_s (H_{\bf C})$, or equivalently in $L^2(B, \mu_{B , h/2})$, then the symbol $S_h (Q)$ will be also denoted $S_h (A)$. Let us recall that, if $X = (x , \xi)$ is identified with $x+i \xi$, then the function $S_h(A)$ is Gateaux holomorphic in $X$ and antiholomorpic in $Y$. We denote by $\sigma_h^{wick} (Q)$ the restriction to the diagonal of the above function, \begin{equation}\label{2.12}\sigma_h^{wick} (Q) (X) = Q( \Psi_{X , h} , \Psi_{X , h}).\end{equation} \subsection{Definition of the Weyl calculus in infinite dimension.}\label{s2.C} If $H= B =\R^n$ and if, say, $F$ is a $C^{\infty }$ function on $\R^{2n}$ bounded together with all its derivatives, one associates with $F$ an operator $Op_h^{weyl} (F)$ satisfying, \begin{equation}\label{2.13} S_h ( Op_h^{weyl} (F) ) (X, Y) = \int _{\R^{2n}}F(Z) e^{{1 \over h}( X \cdot \overline Z + \overline Y \cdot Z - X \cdot \overline Y) } d\mu _{h/2} (Z) \end{equation} $$=e^{{1\over 4h}|X - Y|^2} \int _{\R^{2n}} F\left (Z + {X+Y\over 2} \right ) e^{{i\over 2h}( (\xi - \eta ) \cdot z - (x- y) \cdot \zeta )} d\mu _{\R^{2n} , h/2} (Z). $$ This equality is proved in Unterberger \cite{U} and we use it for an extension to the infinite dimensional spaces. The first issue is that, the function $F$ is defined on $H^2$ according the Definition \ref{d1.1}, and giving a meaning in infinite dimension to an integral such as the one in (\ref{2.13}), we have to integrate over $B^2$, where $(i, H, B)$ is a Wiener space. Indeed, in infinite dimension, $H^2$ cannot be endowed with a gaussian measure which corresponds to its own norm. We have to be able to extend the function $F$, defined on $H^2$, to a function $\widetilde F$ defined on $B^2$. In general it is not a density extension but a type of extension introduced by L. Gross and named {\it stochastic extension}. It may be found in \cite{AJN} (Definition 4.4) where we recall a definition of this notion adapted to our purposes. From Proposition 8.4 of \cite{AJN}, we know that each function $F$ in $S_1 (M, \varepsilon)$ admits a stochastic extension $\widetilde F$ in $L^1 (B^2 , \mu _{B^2 , h/2})$ at least if the sequence $(\varepsilon_j)$ is summable. Moreover, the proof of Proposition 8.4 of \cite{AJN} shows that any linear form $F$ on $H^2$ has a stochastic extension $\widetilde F$ in $L^1 (B^2 , \mu _{B^2 , h/2})$. 
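Concretely, for a continuous linear form $F(x , \xi) = a\cdot x + b \cdot \xi$ with $a$ and $b$ in $H$, the stochastic extension just mentioned may be taken to be $\widetilde F (x , \xi) = \ell_a (x) + \ell_b(\xi)$, with $\ell_a$ and $\ell_b$ as recalled in Subsection \ref{s2.A}; this function belongs to $L^2(B^2 , \mu_{B^2 , h/2})$ and hence to $L^1(B^2 , \mu_{B^2 , h/2})$.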
By analogy with (\ref{2.13}), one expect to associate with each function $F$ satisfying the hypotheses (H1) and (H2) of Section \ref{s1}, a quadratic form $Q_h^{weyl} (F)$ on ${\cal D}$, with bi-symbol $S_h ( Q_h^{weyl} (F)) $ of form, \begin{equation}\label{2.14}\Phi (X , Y)=e^{{1\over 4h}|X - Y|^2} \int _{B^2}\widetilde F\left (Z + {X+Y\over 2} \right ) e^{{i\over 2h}( \ell _{\xi - \eta} (z) -\ell _{ x- y} (\zeta ) )} d\mu _{B^2 , h/2} (Z).\end{equation} \begin{theo}\label{t2.2} Let $F: H^2 \rightarrow {\bf C}$ be a function satisfying the hypotheses (H1) and (H2) of Section \ref{s1} with $m\geq 0$. Let $\widetilde F$ be the stochastic extension of $F$ in $L^1 (B^2 , \mu _{B^2 , h/2})$. Then, i) The integral (\ref{2.14}) converges and verifies, \begin{equation}\label{2.15} |\Phi (X , Y)| \leq C e^{{1\over 4h}|X-Y|^2} \left ( 1+ { |X+Y| \over 2} \right )^m.\end{equation} In addition, this function is Gateaux holomorphic in $X$ and anti-holomorphic in $Y$. ii) There is a continuous quadratic form $Q_h^{weyl} (F)$ on ${\cal D}$ such that $S_h ( Q_h^{weyl} (F)) = \Phi$, i.e., \begin{equation}\label{2.16} S_h ( Q_h^{weyl} (F)) (X , Y) = e^{{1\over 4h}|X - Y|^2} \int _{B^2}\widetilde F\left (Z + {X+Y\over 2} \right ) e^{{i\over 2h}( \ell _{\xi - \eta} (z) -\ell _{ x- y} (\zeta ) )} d\mu _{B^2 , h/2} (Z).\end{equation} \end{theo} {\it Proof.} i) The convergence of the integral (\ref{2.14}) and the estimate (\ref{2.15}) follow from hypothesis (H2). By a change of variables (cf \cite{AJN}\cite{KU}), the function $\Phi$ may be also written as, \begin{equation}\label{2.17}\Phi (X , Y)= \int _{B^2} \widetilde F(Z) e^{{1 \over h}( \ell _X ( \overline Z) + \ell _{\overline Y} ( Z ) - X \cdot \overline Y) } d\mu _{h/2} (Z). \end{equation} We deduce that it is holomorphic in $X$ and anti-holomorphic in $Y$. ii) For all $f$ and $g$ in ${\cal D}_E$, where $E\subset H$ is a subspace of finite dimension, set \begin{equation}\label{2.18} Q (f , g) = \int_{E^4} \Phi (X , Y) e^{ {1\over 2h} X\cdot \overline Y } (T_hf) (X) \overline {(T_hg) (Y)} d\mu _{E^4 , h}(X , Y).\end{equation} Using (\ref{2.15}) we see that, for all $f$ and $g$ in ${\cal D}_E$, $$ | Q(f , g) | \leq C (2\pi h)^{-2 {\rm dim} E} \int_{E^4} |< f , \Psi_{X , h}>| |< g , \Psi_{Y , h}>| (1+|X|) (1+|Y|) d\lambda (X , Y)$$ where $\lambda $ is the Lebesgue measure. Consequently, for all $f$ in ${\cal D}_E$, the integral defining $Q(f, g)$ converges. When $f$ and $g$ belongs to ${\cal D}_E$, they also are in ${\cal D}_F$, for all subspace $F$ containing $E$. If $F$ contains $E$, then we denote by $S$ the orthogonal set to $F$ in $E$, and $(X_E, X_S)$ the variable of $F^2$. The transform $T_h f$ is a function on $F^2$, independent of the variable $X_S$. We remark that, $$\int_{S^4} \Phi (X_E + X_S , Y_E + Y_S)e^{ {1\over 2h} X_S\cdot \overline Y_S } d\mu _{S^4 , h}(X_S , Y_S) = \Phi (X_E , Y_E ).$$ Indeed, the function in the integral is holomorphic in $X_S$, anti-holomorphic in $Y_S$, and its integral is equal to its value at $X_S = Y_S = 0$. Consequently the definition of $Q(f, g)$ is indeed coherent, whether that $f$ and $g$ are seen as functions in ${\cal D}_E$ or in ${\cal D}_F$. Let us show that the bi-symbol of $Q$ is $\Phi$. 
We have, for all $X= (x , \xi)$ and $Y = (y, \eta)$ in $H^2$, if $E$ is the subspace spanned by $x$, $\xi$, $y$ and $\eta$, $$ {Q (\Psi_{X h} , \Psi _{Yh} ) \over < \Psi_{X h} , \Psi _{Yh}> } = \int_{E^4} \Phi (U , V) {\cal B}_h(X , Y, U, V) d\mu _{E^4 , h}(U , V)$$ where ${\cal B}_h$ is a kind of reproducing kernel, \begin{equation}\label{2.19}{\cal B}_h(X , Y, U, V) = e^{ {1\over 2h} (X \cdot \overline U + U\cdot \overline V + V \cdot \overline Y - X\cdot \overline Y )}.\end{equation} In a standard way, we have, if $\Phi$ is holomorphic in $X$, anti-holomorphic in $Y$, \begin{equation}\label{2.20} \int_{E^4} \Phi (U , V) {\cal B}_h(X , Y, U, V) d\mu _{E^4 , h}(U , V) = \Phi (X , Y).\end{equation} It suffice to make the change of variables $U = X +S$, $V = Y + T$, and to apply the mean formula. We then deduce that the bi-symbol of $Q$ is indeed $\Phi$. \hbox{\vrule \vbox to 7pt{\hrule width 6pt \hrule}\vrule } When $F$ belongs to $S_2 (M, \varepsilon)$, where the sequence $(\varepsilon_j)$ is summable, we have proved in \cite{AJN} that the quadratic form $Q_h^{weyl} (F)$ is associated with a bounded operator. \subsection{Weyl symbol and Wick symbol.}\label{s2.D} It is sufficient to restrict equality (\ref{2.16}) to the diagonal $Y= X$ to see that, \begin{equation}\label{2.21}\sigma_h ^{wick} ( Q_h^{weyl} (F))(X) = \int _{B^2}\widetilde F(Z + X ) d\mu _{B^2 , h/2} (Z).\end{equation} For all $t>0$, the operator \begin{equation}\label{2.22}(H_tF) (X) = \int _{B^2} \widetilde F( X +Y ) d\mu _{B^2, t} (Y)\end{equation} is considered as the heat operator. In the above and below integrals on $B^2$, $\widetilde F(X+Y)$ denotes the stochastic extension on $B^2$ of $H^2 \ni Y \rightarrow F(X+Y)$ for each $X$ in $H^2$, which exists since it satisfies the same hypotheses as $F$. We then can write, \begin{equation}\label{2.23} \sigma_h ^{wick} ( Q_h^{weyl} (F)) =H_{h/2}F.\end{equation} Equality (\ref{2.23}) extends the standard fact in finite dimension, that the Wick symbol is obtained from the Weyl symbol by the action of the heat operator. From Kuo \cite{KU} (Theorem 6.2) or Gross \cite{G4} (Proposition 9), the function $H_tF$ is continuous on $H^2$. If $H$ is of finite dimension, we have $B= H$, $\widetilde F =F$, and $H_t F = e^{ (t/2) \Delta} F$. Note that, \begin{equation}\label{2.24} \sup _{X \in H^2} |(H_tF) (X)| \leq \sup _{Z \in B^2}| \widetilde F( Z )| = \sup _{X \in H^2} |F(X)|.\end{equation} \begin{prop}\label{p2.3} If $F$ is in $S_4(M, \varepsilon)$ with some chosen basis $(e_j)$ and if the sequence $(\varepsilon_j)$ is summable, then there exists $C>0$ such that, for all $X$ in $H^2$ and $t$ in $(0, 1)$, \begin{equation}\label{2.25} |(H_tF) (X) - F(X) | \leq C t.\end{equation} \end{prop} {\it Proof.} Let $E_m$ be the subspace spanned by the $e_j$ ($j\leq m$). We apply (\ref{2.24}) to the function $F_m = F - F \circ \pi _{E_m}$. We obtain, for all $X$ in $H^2$, $$ \int _{B^2 } |( F \circ P _{E_m} ) (X+Y )) - (\widetilde F_t (X+ Y )) | d\mu _{B^2, t} (Y) \leq \Vert F - F \circ \pi _{E_m} \Vert _{\infty}$$ where $\pi _{E_m} : H^2 \rightarrow E_m^2$ is the orthogonal projection and $P _{E_m} : B^2 \rightarrow E_m^2$ is its stochastic extension, defined as in (\ref{2.6}). 
If $F$ is in $S_1 (M, \varepsilon)$, we have, \begin{equation}\label{2.26}\Vert F - F \circ \pi _{E_m} \Vert _{\infty} \leq 2M \sum _{j=p}^{\infty} \varepsilon_j.\end{equation} For all $m>0$ and for all $X$ in $H^2$, we have, $$ \int _{B^2} F ( P _{E_m} (X+Y )) d\mu _{B^2, t} (Y) = \int _{E_m^2} F ( (\pi _{E_m} X) +Y ) d\mu _{E_m^2, t} (Y). $$ According to standard results in finite dimension, we have for all $a$ in $E_m^2$, $$ \left | \int _{E_m^2} F ( a +Y ) d\mu _{E_m^2, t} (Y) - F(a) \right | \leq t \Vert \Delta _m F \Vert _{\infty}$$ where $$ \Delta_m = \sum _{j=1}^m \left ( {\partial ^2 \over \partial _{x_j}^2} + {\partial ^2 \over \partial _{\xi_j}^2 } \right ).$$ We apply this inequality to $a = \pi _{E_m} (X)$ using again (\ref{2.26}). Consequently, for all $t\in (0, 1)$ and $m\geq 1$, $$ |(H_tF) (X) - F(X) | \leq 2M t\sum _{j=1}^m \varepsilon_j^2 + 4M \sum _{m+1}^{\infty } \varepsilon_j.$$ We deduce (\ref{2.25}) when $m$ goes to infinity. \hbox{\vrule \vbox to 7pt{\hrule width 6pt \hrule}\vrule } \subsection{Operators with linear symbol. Composition.}\label{s2.E} \begin{prop}\label{p2.4} Let $F$ be a continuous linear form on $H^2$. Let $Q_h^{weyl}(F)$ be the quadratic form on ${\cal D}$ defined in Theorem \ref{t2.2}. Then, there exists an operator denoted $Op_h ^{weyl} (F)$ from ${\cal D}$ into itself, such that \begin{equation}\label{2.27} Q_h^{weyl} ( F) (f , g) = < Op_h ^{weyl} (F) f, g>,\qquad (f , g)\in {\cal D}^2.\end{equation} \end{prop} {\it Proof.} Let $f$ be in ${\cal D}_E$, where $E \subset H$ is of finite dimension. As in Definition \ref{d2.1}, we may write, $f = \widehat f \circ P_E$, where the function (\ref{2.7}) is in ${\cal S} (E)$. Let $a$ and $b$ in $H$ be such that $F(x , \xi ) = a \cdot x + b \cdot \xi$. Let $E_1$ be the subspace spanned by $E$, $a$ and $b$. Set $f_1 : E_1 \rightarrow {\bf C}$ the function defined by, $$ f_1(u) = (a+i b) \cdot u \widehat f(\pi (u)) + {h\over i} (\pi (b) \cdot \nabla \widehat f) (\pi (u)),\qquad u\in E_1$$ where $\pi : E_1 \rightarrow E$ is the orthogonal projection. We have $OP_h^{weyl} (F) f = f_1 \circ P_{E_1}$ and this function is in ${\cal D}_{E_1}$. Thus, if $F$ is linear, the quadratic form $Q_h^{weyl} (F)$ is associated with a continuous operator $Op_h^{weyl}(F)$ from ${\cal D}$ into ${\cal D}$. The set of linear functions is invariant by the operator $H_{h/2}$. Consequently, the Wick symbol of $Q_h^{weyl} (F)$ is also $F$. We may write $F(x , \xi) = P(X ) + Q(\overline X)$. Then, the bi-symbol of $Q_h^{weyl} (F)$ is $P(X) + Q(\overline Y)$. We have, for all $f$ in ${\cal D}_E$, for all $Y \in (E_1)^2$, $$ < Op_h^{weyl} (F) f, \Psi_{Yh} > = (2\pi h) ^{-n} \int _{E^2} <f, \Psi_{Xh} > < Op_h^{weyl} (F) \Psi_{Xh} , \Psi_{Yh} > dX $$ $$ = (2\pi h) ^{-n} \int _{E^2} <f, \Psi_{Xh} > [P(X) + Q(\overline Y)] <\Psi_{Xh} , \Psi_{Yh} > dX.$$ Consequently, for all integer $m$, $$ (1+|Y|)^m | < Op_h^{weyl} (F) f, \Psi_{Yh} > | \hskip 8cm$$ $$ \hskip 2cm \leq C(E , E_1, h) \int _{E^2} (1+|X|)^{m+1} | <f, \Psi_{Xh} > | (1+|X-Y|)^{m+1} e^{-{1\over 4h} |Y-X|^2} dX. $$ Therefore, $$ I(E_1 , m) (Op_h^{weyl} (F) f) \leq C(E , E_1, m, h) I(E , m+1) (f)$$ which proves the continuity of $Op_h^{weyl} (F) $ in ${\cal D}$. \hbox{\vrule \vbox to 7pt{\hrule width 6pt \hrule}\vrule } Let $A$ be a continuous quadratic form on ${\cal D}$. Let $B : {\cal D} \rightarrow {\cal D}$ be a continuous linear mapping with a linear Wick symbol. 
We recall that the quadratic forms $ A \circ B $, $ B \circ A $ and $[A , B]$ are defined in (\ref{a2}) and (\ref{a3}). \begin{theo}\label{t2.5} Let $A_h$ be a bounded operator in $L^2(B , \mu _{B , h/2})$, and set $L_h $ an operator from ${\cal D}$ into ${\cal D}$ with a Wick symbol being a linear form $L(x , \xi) $ on $H^2$. Let $A_h \circ B_h $ be the quadratic form on ${\cal D}$ of their composition defined as in Section \ref{s1}. Then, we have, $$ \sigma_h ^{wick} (A_h \circ L_h) = \sigma_h ^{wick} (A_h) \sigma_h ^{wick} (L_h)+ \hskip 4cm$$ $$ \hskip 4cm{h\over 2} \sum _{j\in \Gamma} \left ( {\partial \over \partial x_j} - i {\partial \over \partial \xi_j}\right ) \sigma_h^{wick} (A_h) \left ( {\partial \over \partial x_j} + i {\partial \over \partial \xi_j}\right ) \sigma_h^{wick} (L_h) $$ This result is valid when exchanging the roles of $A_h$ and $L_h$. \end{theo} {\it Proof.} Set $L(x , \xi ) = a \cdot x + b \cdot \xi$ with $a$ and $b$ in $H$. Let $X $ be in $H^2$. There exists an unitary operator $W_{X , h}$ such that $\Psi _{X , h} = W _{X , h} \Psi _{0 , h} $. We have, $$ \sigma_h ^{wick} (A_h \circ L_h) (X) = < L_h \Psi_{X , h}, A_h^{\star} \Psi_{X , h } > = < f, g> $$ with $f= W_{X , h} ^{\star} L_h W_{X , h} \Psi_{0 , h} $ and $g= W_{X , h} ^{\star} A_h^{\star} W_{X , h} \Psi_{0 , h} $. Let $T_hf$ and $T_hg $ be the Segal Bargmann transforms of $f$ and $g$ defined in (\ref{2.5}), $\widetilde T_hf $ and $\widetilde T_hg$ being their stochastic extensions in $L^2(B^2 , \mu _{B^2, h})$. Since $\widetilde T_h$ is a partial isometry from $L^2(B , \mu _{B, h/2})$ into $L^2(B^2 , \mu _{B^2, h})$, we have $$ \sigma_h ^{wick} (A_h \circ L_h) (X) = \int _{B^2} \widetilde T_h f(Z) \overline { \widetilde T_h g (Z)} d \mu _{B^2, h} (Z).$$ We also have, $$\widetilde T_h f(Z) = L(X) + \ell _{a+ib} (z-i \zeta).$$ Since $T_hg$ is antiholomorphic then the mean formula gives, $$ \int _{B^2}\overline { \widetilde T_h g (Z)} d \mu _{B^2, h} (Z)= \overline {T_hg} (0) = < \Psi_{0, h}, g> = \sigma _h^{wick} (A_h ) (X). $$ Similarly, integrating by parts (see Theorem 6.2 of Kuo \cite{KU}), for all $\gamma $ in the complexified of $H$, $$ \int _{B^2}\ell _{\gamma } ( z - i \zeta) \overline { \widetilde T_h g (Z)} d \mu _{B^2, h} (Z)= h \gamma \cdot ( \partial_z - i \partial_{\zeta} ) \overline {T_hg} (0) = h \gamma \cdot ( \partial_x - i \partial_{\xi } ) \sigma _h^{wick} (A_h ) (X). $$ The proof of Theorem then follows. \hbox{\vrule \vbox to 7pt{\hrule width 6pt \hrule}\vrule } \begin{prop}\label{p2.6} Let $F$ be a function in $S_2(M, \varepsilon)$ where the sequence $(\varepsilon _j) $ is summable and let $L$ be a continuous linear form on $H^2$. Let \begin{equation}\label{b1} \Phi = FL + {h\over 2i} \{ F , L \},\qquad\Psi = FL - {h\over 2i} \{ F , L \}. \end{equation} Then, i) The functions $\Phi $ and $\Psi$ satisfy hypotheses (H1) and (H2) in Section \ref{s1} ii) The corresponding Weyl forms using the Theorem \ref{t2.2} satisfy, for all $f$ and $g$ in ${\cal D}$, $$ Q_h^{weyl } (\Phi ) (f , g) = < Op_h^{weyl } (L) f ,Op_h^{weyl } (F) ^{\star} g >, $$ $$ Q_h^{weyl } (\Psi ) (f , g) = < Op_h^{weyl } (F) f ,Op_h^{weyl } (L) ^{\star} g >.$$ \end{prop} {\it Proof. i) } Using the linearity of $G$ and the estimates $\int_B |\ell_a(X)| |\ell_b(X)| d\mu_{B,h/2}(X) \leq C |a| |b|$, the existence of $L^1$ stochastic extensions are obtained similarly as in the proof of the Proposition 8.4 of \cite{AJN}. 
The polynomial estimate on the semigroup uses that the stochastic extension of $X \rightarrow F(X) a.X$ is $\widetilde{F} \ell_a$ with $\int_B |\ell_a(X)| d\mu_{B,h/2}(X) \leq C |a|$. {\it ii) } We may write $L(x, \xi)= a \cdot x + b \cdot \xi$ with $a$ and $b$ in $H$. From (\ref{2.22}), $$ ( H_{h/2} F L) (X) = ( H_{h/2} F ) (X) L(X) + \int _{B^2} \widetilde F( X +Y ) ( \ell _a (y) + \ell _b (\eta) ) d\mu _{B^2, h/2} (Y). $$ Integrating by parts, $$ ( H_{h/2} F L) (X) = ( H_{h/2} F ) (X) L(X) +{h\over 2} \int _{B^2} \widetilde G( X +Y ) d\mu _{B^2, h/2} (Y) $$ where $G(x, \xi) = \Big ( a \cdot \partial _x + b \cdot \partial _{\xi} \Big ) F$. In other words, $$ ( H_{h/2} F L) (X) = ( H_{h/2} F ) (X) L(X) +{h\over 2} \Big ( a \cdot \partial _x + b \cdot \partial _{\xi} \Big ) ( H_{h/2} F ) (X). $$ Since $H_{h/2}$ leaves $F$ invariant, this may be written as, $$ ( H_{h/2} F ) ( H_{h/2} L) + {h\over 2} \sum _{j\in \Gamma} \left [ {\partial H_{h/2} F \over dx_j} {\partial H_{h/2} L \over dx_j} + {\partial H_{h/2} F \over d\xi_j} {\partial H_{h/2} L \over d\xi_j} \right ]. $$ Similarly, $$ H_{h/2} \{ F , L \} = \{ H_{h/2} F , L \} = \{ H_{h/2} F , H_{h/2} L \}.$$ Consequently, if $\Phi$ is defined in (\ref{b1}) then $$ H_{h/2} \Phi = ( H_{h/2} F) ( H_{h/2} L) +\hskip 4cm$$ $$\hskip 4cm {h\over 2} \sum _{j\in \Gamma} \left ( {\partial \over \partial x_j} - i {\partial \over \partial \xi_j}\right ) ( H_{h/2} F) \left ( {\partial \over \partial x_j} + i {\partial \over \partial \xi_j}\right ) ( H_{h/2} L). $$ From Theorem \ref{t2.5}, $ H_{h/2} \Phi$ is the Wick symbol of the composition of the two operators with Wick symbols being $H_{h/2} F$ and $H_{h/2} L$, that is to say, $Op_h^{weyl} (F)$ and $Op_h^{weyl} (G)$. The proposition is then a consequence of the following Lemma. \hbox{\vrule \vbox to 7pt{\hrule width 6pt \hrule}\vrule } \begin{lemm}\label{l2.7} Two continuous quadratic forms on ${\cal D}$ with the same Wick symbol are equal. \end{lemm} {\it Proof.} Let $A$ be a continuous quadratic form on ${\cal D}$ which Wick symbol vanishes identically. Let $X$ and $Y$ be in $H^2$. Set, $$ \varphi ( \lambda , \mu ) = S_h (A) \left ( {X+Y \over 2} + \lambda {X-Y \over 2} , {X+Y \over 2} + \mu {X-Y \over 2} \right ). $$ This function on ${\bf C}^2$ is holomorphic in $\lambda $, anti-holomorphic in $\mu$, and identically vanishing if $\lambda = \mu$. It is then identically vanishing and the equality $\varphi (1 , -1)=0$ shows that $S_h (A) (X , Y)=0$. The bi-symbol of $A$ is identically vanishing. Let $f$ and $g$ in ${\cal D}_E$ where $E \subset H$ is a subspace of finite dimension $n$. Let $C$ and $m$ be the constants such that we have (\ref{2.8}) for all $f$ and $g$ in ${\cal D}_E$. Denote by $D(E,m)$ of functions $f$ such that the integral $I(E,m)(f)$ is finite, where $I(E,m)(f)$ is given in (\ref{2.9}). We also have $$ f = ( 2\pi h)^{-n} \int _{E^2} < f , \Psi_{Xh} > \Psi_{Xh} dX $$ and similarly for $g$. Then applying \cite{Y} (Section V.5) one obtains $A(f,g)$ vanishes. \hbox{\vrule \vbox to 7pt{\hrule width 6pt \hrule}\vrule } \subsection{Unbounded operators. Sobolev spaces.}\label{s2.F} We denote by $W$ the completion of ${\cal D}$ for the following norm, $$ \Vert u\Vert_W ^2 = \Vert u \Vert ^ 2 + \sum _{j\in \Gamma } \Vert ( Q_h(e_j ) + i P_h(e_j )) u\Vert ^2. $$ Using annihilation operators, one has $Q_h(e_j ) + i P_h(e_j ) = \sqrt {2h} a_h (e_j)$. 
Using the number operator $N = \sum a_h ^{\star } (e_j) a_h (e_j)$, one has $ \Vert u\Vert_W ^2 = < (I + 2h N) u , u>$ (See also \cite{KR} and \cite{LA2} for other Sobolev spaces in infinite dimension). \begin{prop}\label{p2.8.} i) For all $(a, b)$ in $H^2$, let $F_{a , b} (q, p) = a \cdot q + b \cdot p$. Then the operator $Op_h^{weyl} (F_{a b})$ from ${\cal D}$ into itself, may be extended to an operator from $W$ in $L^2(B , \mu _{B , h/2})$ and we have, \begin{equation}\label{2.?} \Vert Op_h^{weyl} (F_{a b}) u \Vert \leq C (|a| + |b|) \ \Vert u \Vert _W.\end{equation} ii) Let $F$ in $ S_3(M , \varepsilon )$. Then the operator $A_h = Op_h^{weyl} (F )$ is bounded from $W $ into $W$. \end{prop} {\it Proof.} i) Point i) follows from estimates in Derezi\'nski-G\'erard \cite{DG} , Lemma 2.1 or Lemma 2.3. The operator $Op_h^{weyl} (F_{a b})$ is then denoted by $\Phi_S (a+i b)$. ii) For all $u$ in $W$ and for all $j$ in $\Gamma$, we have from Proposition \ref{p2.6}, $$ ( Q_h(e_j ) + i P_h(e_j )) A_h u = A_h ( Q_h(e_j ) + i P_h(e_j ))u + h Op_h^{weyl } (G_j) u $$ with $G_j (x , \xi) = {\partial F \over \partial x_j} + i {\partial F \over \partial \xi_j}$. This function belongs to a set $S_2 ( M \varepsilon_j , \varepsilon)$. From Theorem 1.4 of \cite{AJN}, $$\Vert A_h \Vert \leq M',\qquad\Vert Op_h^{weyl } (G_j) \Vert \leq M' \varepsilon _j$$ where $M'$ is independent of $j$. The proposition then follows. \hbox{\vrule \vbox to 7pt{\hrule width 6pt \hrule}\vrule } \section{Reduction to finite dimension.}\label{s3} With a given bounded operator $A$ in $L^2 (B, \mu _{B , h/2})$, one always may associate a Wick symbol $\sigma_h^{wick}(A)$. If $A$ verifies the hypotheses of Theorem \ref{t1.2}, we shall associate a Weyl symbol $F$ (which will depend on $h$). Functions $F$ will satisfy $H_{h/2} F = \sigma_h^{wick}(A)$. We bring this study to issues related to subspaces $E$ of finite dimension in $B' \subset H$. One associates two partial heat operators with each subspace $E \subset B'$. For any bounded continuous function $F$ on $H^2$ and for all $t > 0$, one set, \begin{equation}\label{3.1} (H_{E , t} F )(X) = \int _{E^2} F (X + Y_E ) d\mu _{ E^2 , t } (Y_E ).\end{equation} One can also define a partial heat operator acting, not on the variables of $E^2$, but on those of its orthogonal. The notation $E^{\perp}$ now denotes, \begin{equation}\label{3.2} E^{\perp} = \{ x \in B, \ \ \ u(x) = 0 \ \ \ \ u \in E \}.\end{equation} This heat operator related to the variables of $(E^{\perp})^2$ can only act on bounded continuous functions $F$ on $H^2$ with a stochastic extension $\widetilde F$ (bounded measurable function on $B^2$). One set \begin{equation}\label{3.3} (H_{ E^{\perp} , t } F )(X) = \int _{( E^{\perp})^2} \widetilde F (X + Y_{E^{\perp}} ) d\mu _{ (E^{\perp})^2 , t } (Y_{E^{\perp}} ).\end{equation} Indeed, we know from Ramer \cite{RA} (Section 1.B), that the space $E^{\perp}$ defined in (\ref{3.2}) is also endowed with a gaussian measure. Similarly to $H_t$, we note that, \begin{equation}\label{3.4} \sup_{X\in H^2} | (H_{ E^{\perp} , t } F )(X) | \leq \sup_{X\in H^2} |F(X)|.\end{equation} If $F$ is bounded and continuous on $H^2$ and if its stochastic extension $\widetilde F$ exits, then we have, from \cite{RA} (Section 1.B,), \begin{equation}\label{3.5} H_{h/2} F = H_{E , h/2 } H_{E^{\perp} , h/2 } F.\end{equation} We then consider an increasing sequence $(\Lambda _n)$ of finite subspaces in $\Gamma$ whose union is $\Gamma$. 
We set, $$E(\Lambda _ n) = {\rm Vect} (e_j, \ j \in \Lambda _ n).$$ In Sections \ref{s4} to \ref{s7}, we shall prove the following propositions. \begin{prop}\label{p3.1} Let $A$ be a bounded operator in $L^2 (B, \mu _{B , h/2})$ satisfying the hypotheses of Theorem \ref{t1.2}. Then, i) the function $\sigma _h^{wick} (A)$ is in the set $S_{m+4}(M, \varepsilon )$. ii) Setting, $$ P_{E(\Lambda _n)} (x, \xi ) = \left ( \sum _{j\in \Lambda _n } e_j (x)e_j, \sum _{k\in \Lambda _n } e_k( \xi )e_k \right ),\qquad (x, \xi ) \in B^2 $$ and by denoting $ \Vert \cdot \Vert _{\infty } $ the supremum norm on $H^2$, we have, \begin{equation}\label{a10} \Vert \sigma_h^{wick} (A) - \sigma_h^{wick} (A) \circ P_{E(\Lambda _n)} \Vert _{\infty} \leq 2M \sum _{j\notin \Lambda _n } \varepsilon_j. \end{equation} \end{prop} \begin{prop}\label{p3.2} Let $A$ be a bounded operator in $L^2 (B, \mu _{B , h/2})$ satisfying the hypotheses in Theorem \ref{t1.2}. Then, for all $n$, there exists a continuous bounded function $F_n$ on $H^2$ such that, if $0 < h < 1$, i) We have \begin{equation}\label{a4} H_{ E(\Lambda _n), h/2} F_n = \sigma_h^{wick} (A).\end{equation} ii) The function $F_n$ is in $S_m ( M_n, \varepsilon)$ with \begin{equation}\label{a5} M_n = M \prod _{j\in \Lambda _n} (1 + K S_{\varepsilon } ^2 h \varepsilon _j ^2 )\end{equation} where $K$ is a numerical constant and $S_{\varepsilon} $ is defined in (\ref{1.13}). iii) If $n < p$ then the function $F_n - F_p$ is in $S_m ( M_{np}, \varepsilon)$ where \begin{equation}\label{a6} M_{np} = M \left [ \sum _{j\in \Lambda _p \setminus \Lambda _n} K (1 + hS_{\varepsilon} ^2 )^2 h \varepsilon _j^2 \right ] \prod _{j\in \Lambda _p} (1 + K S_{\varepsilon} ^2 h\varepsilon _j^2 ).\end{equation} \end{prop} These propositions will be proved in Sections \ref{s4} to \ref{s7}. Let us verify that Theorem \ref{t1.2} follows from these propositions. From Proposition \ref{p3.2}, the sequence $(F_n)$ converges to a function $F$ in $S_m ( M' , \varepsilon)$ where $M'$ is defined in (\ref{1.16}). Let us show that $H_{h/2} F = \sigma_h^{wick} (A)$. From Proposition 8.4 in \cite{AJN}, the functions $F_n$ have stochastic extensions $\widetilde F_n$. Then, we may apply the operator $H_{ E(\Lambda _n )^{\perp} , h/2}$ to both sides of equality (\ref{a4}). We obtain from (\ref{a4}) and (\ref{3.5}), \begin{equation}\label{a7} H_{h/2} F_n = H_{ E(\Lambda_n)^{\perp} , h/2} \sigma_h^{wick} (A).\end{equation} Let us now take the limit as $n$ goes to infinity. We have from the point iii) of Proposition \ref{p3.2}, $$| F_n (X) - F (X)| \leq M \left [ \sum _{j\notin \Lambda _n} K h \varepsilon_j^2 \right ] \prod _{j\in \Gamma } (1 + K h\varepsilon_j^2 ). $$ From (\ref{3.4}) we see that, in the sense of the uniform convergence, \begin{equation}\label{a8} \lim _{n\rightarrow \infty } H_{h/2} F_n = H_{h/2} F.\end{equation} We shall also check that, \begin{equation}\label{a9} \lim _{n\rightarrow \infty } H_{ E(\Lambda _n)^{\perp} , h/2} \sigma_h^{wick} (A) = \sigma_h^{wick} (A).\end{equation} Indeed, setting, $\Psi = \sigma_h^{wick} (A)$, we have $$ \Vert \Psi - H_{ E(\Lambda _n)^{\perp} , h/2} \Psi \Vert _{\infty } \leq \Vert \Psi - \Psi \circ P_{E(\Lambda _n ) } \Vert _{\infty } + \Vert H_{ E(\Lambda _n)^{\perp} , h/2} ( \Psi - \Psi \circ P_{E(\Lambda _n ) } ) \Vert _{\infty }.$$ We have used the fact that $H_{ E(\Lambda_n)^{\perp} , h/2} (\Psi \circ P_{E(\Lambda _n)}) = \Psi \circ P_{E(\Lambda _n)}$. 
The limit in (\ref{a9}) follows from (\ref{3.4})(\ref{a10}) and of point ii) in Proposition \ref{p3.1}. Using (\ref{a7})(\ref{a8})(\ref{a9}) we obtain $H_{h/2}F = \sigma_h^{wick} (A)$. Since the function $F$ is in $S_m(M', K \varepsilon )$ then a Weyl quadratic form is associated with, by Theorem \ref{t2.2}, and a bounded operator $ Op_h^{weyl} (F )$ associated with, by Theorem 1.4 of \cite{AJN}. From (\ref{2.23}), the Wick symbol of this operator is $H_{h/2}F$. Consequently the operators $Op_h^{weyl} (F )$ and $A$ have the same Wick symbol. From Lemma \ref{l2.7}, these two operators are equal. Once Propositions \ref{p3.1} and \ref{p3.2} proved, we have indeed found a function $F$ in $S_m(M', K \varepsilon )$ whose corresponding Weyl operator equals to $ A$. Theorem \ref{t1.2} is then a consequence of Propositions \ref{p3.1} and \ref{p3.2}. \hbox{\vrule \vbox to 7pt{\hrule width 6pt \hrule}\vrule } \section{Proof of Proposition \ref{p3.1}.}\label{s4} Let $A$ be a bounded operator $A$ in $L^2 (B, \mu _{B , h/2})$ satisfying the hypotheses of Theorem \ref{t1.2}. From Theorem \ref{t2.5}, we have, \begin{equation}\label{4.1} \sigma_h^{wick} ( [Q_h(ej ), A] ) = ih {\partial \over \partial \xi _j } \sigma_h^{wick} (A),\qquad \sigma_h^{wick} ( [P_h(ej ), A] ) = - ih {\partial \over \partial \xi _j } \sigma_h^{wick} (A).\end{equation} For all bounded operator $B$, one has, $| \sigma_h^{wick} (B)(X)| \leq \Vert B \Vert $. Consequently, if $A$ verifies the hypotheses of Theorem \ref{t1.2} one deduces estimates, for each multi-index $(\alpha, \beta)$ in ${\cal M} _{m+4}$, $$ | \partial_x^{\alpha}\partial_{\xi}^{\beta} \sigma_h^{wick} (A)(x, \xi)| \leq M \prod _{j\in \Gamma } \varepsilon_j ^{\alpha _j + \beta _j}, $$ which prove point i) of Proposition \ref{p3.1}. We deduce, $$ | \sigma_h^{wick}(A)(x, \xi ) - \sigma_h^{wick} (A)(P_{E(\Lambda _n)} (x, \xi)) | \leq 2 M \sum _{j\notin \Lambda _n } \varepsilon_j ^{\alpha _j + \beta _j}, $$ which proves Proposition \ref{p3.1}. We shall also need analogous estimates on the bi-symbol. One deduces from (\ref{4.1}) these estimates by setting, for all $j\in \Gamma$, $$ {\partial \over \partial X_j} = {1\over 2} \left ( {\partial \over \partial x_j} - i {\partial \over \partial \xi _j} \right ),\qquad {\partial \over \partial \overline Y_j} = {1\over 2} \left ( {\partial \over \partial y_j} + i {\partial \over \partial \eta _j} \right ). $$ With these notations, one has, \begin{equation}\label{4.2} S_h([Q_h(e_j ), A])(X, Y ) = - h \left ( {\partial \over \partial X_j} - {\partial \over \partial \overline Y_j} \right ) (S_hA)(X, Y ),\end{equation} \begin{equation}\label{4.3} S_h([P_h(e_j ), A])(X, Y ) = -i h \left ( {\partial \over \partial X_j} + {\partial \over \partial \overline Y_j} \right ) (S_hA)(X, Y ).\end{equation} Consequently, for all multi-indices $(\alpha, \beta )$, \begin{equation}\label{4.4} S_h ( ({\rm ad} P_h)^{\alpha} ({\rm ad} Q_h)^{\beta} A )(X, Y ) = c_{\alpha \beta} h^{|\alpha + \beta |} (\partial _x + \partial _y)^{\alpha} (\partial _{\xi} + \partial _{\eta} )^{\beta } S_h(A)(X, Y )\end{equation} where $|c_{\alpha \beta} | =1$. 
With (\ref{2.4}), we deduce that \begin{equation}\label{4.5} | (\partial _x + \partial _y)^{\alpha} (\partial _{\xi} + \partial _{\eta} )^{\beta } S_h(A)(X, Y ) | \leq h^{-|\alpha + \beta |} e^{-{1\over 4h} |X-Y|^2} \Vert ({\rm ad} P_h)^{\alpha} ({\rm ad} Q_h)^{\beta} A \Vert.\end{equation} \section{Finite dimensional analysis.}\label{s5} We consider here the case where $H$ is a real Hilbert space with finite dimension $n$. Let $A$ be an operator satisfying hypothesis of Theorem \ref{t1.2}. Let $\Phi = S_hA$ its bi-symbol, defined in (\ref{2.11}). We have seen that $\Phi (X , Y)$ is holomorphic in $X$, anti-holomorphic in $Y$. From (\ref{4.5}), the following norm is finite, \begin{equation}\label{5.1} N_h^{(2)} (\Phi) = \sum _{(\alpha , \beta) \in {\cal M} _2} \Vert e^{-{1 \over 4h} |X - Y |^2 } (\partial _x + \partial _y)^{\alpha } (\partial _{\xi } + \partial _{\eta })^{\beta } \Phi \Vert_{\infty }, \end{equation} where $\Vert \cdot \Vert _{\infty }$ is the supremum norm. Note again that a choice of particular basis has been made. One introduces in distributions sense an integral transform giving the Weyl symbol $F$ of $A$ starting from the bi-symbol $\Phi$, and give estimates on $F$. This integral is not converging but has to be understood as an oscillatory integrals (see H\"ormander \cite{HO}). This leads to a proof of Beals's theorem in finite dimension (see Unterberger \cite{U}). Setting, \begin{equation}\label{5.2} K_h^{Beals} (X , Y, Z) = e^{- {1\over h}(Z-Y)\cdot (\overline Z- \overline X) - {1\over 2h}|X-Y|^2 }.\end{equation} \begin{theo}\label{t5.1} Let $H$ be a real Hilbert space of finite dimension $n$. Set $(X , Y) \rightarrow \Phi (X , Y)$ a function on $H^2 \times H^2$ which is holomorphic in $X$ and anti-holomorphic in $Y$, such that the norm $ N_h^{(2)} (\Phi)$ defined in (\ref{5.1}) is finite (for some orthonormal basis). Then, i) The following integral transform defines, a priori in the sense of distributions, a function $B _h \Phi $ which is bounded and continuous on $H^2$, \begin{equation}\label{5.3} (B_h \Phi ) (Z) = 2^n (2 \pi h)^{-2n} \int _{H^4} \Phi (X , Y) K_h^{Beals} (X , Y, Z) dX dY.\end{equation} Moreover, this function satisfies, \begin{equation}\label{5.4} \Vert B_h \Phi \Vert _{\infty } \leq K^n N_h^{(2)} (\Phi) \end{equation} ii) Moreover, one has, \begin{equation}\label{5.5} ( H_{h/2} B_h \Phi ) (Z) = \Phi (Z , Z).\end{equation} \end{theo} {\it Proof of i).} We follow the method of Unterberger \cite{U}. The change of variables $$ X = Z + S + {T\over 2},\qquad Y = Z + S - {T\over 2} $$ allows to rewrite (\ref{5.3}) as, \begin{equation}\label{5.6} (B_h \Phi )(Z) = 2^n (2\pi h)^{-2n} \int _{H^4} \Psi (S, T , Z) K_h(S, T )dSdT\end{equation} with \begin{equation}\label{5.7} \Psi (S, T , Z ) = \Phi \left ( Z + S +{T\over 2} , Z + S - {T\over 2} \right ) \end{equation} \begin{equation}\label{5.8} K_h(S, T ) = e^{-{1\over h} |S|^2 - {i \over h}\sigma (S , T) - {1\over 4h} |T|^2}.\end{equation} Set $S_j = (s_j, \sigma _j )$, $T_j = (t_j, \tau _j)$. Let $L_j$ and $M_j$ be the operators defined, for each function $G(S, T )$, by $$ L_j G = \left ( 1 + {\tau _j ^2 \over h } \right ) ^{-1} e^{-{1\over h} s_j^2 } \left ( 1 - h {\partial ^2 \over \partial s_j^2 } \right ) e^{{1\over h} s_j^2 } G $$ $$ M_j G = \left ( 1 + {t _j ^2 \over h } \right ) ^{-1} e^{-{1\over h} \sigma _j^2 } \left ( 1 - h {\partial ^2 \over \partial \sigma _j^2 } \right ) e^{{1\over h} \sigma _j^2 } G. 
$$ One verifies that, $$ L_jK_h = K_h,\qquad M_jK_h = K_h \hskip 2cm j\leq n $$ where the function $K_h$ defined in (\ref{5.8}). Consequently, $$ (B_h \Phi )(Z) = 2^n (2\pi h)^{-2n} \int_{H^4} K_h(S, T ) \left [ \prod _{j\leq n} ^tL_j \ ^ t M_j \right ] \Psi (S, T , Z ) dS dT. $$ We see that, $$ ^tL_j = \left ( 1 + {\tau _j ^2 \over h } \right ) ^{-1} \Big [ a_0(s_j / \sqrt h) + h^{1/2} a_1(s_j / \sqrt h)\partial _{s_j} + h a_2(s_j / \sqrt h) \partial _{s_j} ^2 \Big ] $$ with $$ a_0(s) = 3 - 4s^2,\qquad a_1(s) = 4s,\qquad a_2(s) = -1.$$ Similarly, $$ ^ tM_j = \left ( 1 + {t _j ^2 \over h } \right ) ^{-1} \Big [ a_0(\sigma _j / \sqrt h) + h^{1/2} a_1(\sigma _j / \sqrt h)\partial _{\sigma_j} + h a_2(\sigma _j / \sqrt h) \partial _{\sigma _j} ^2 \Big ] $$ Consequently, $$ | (B_h \Phi )(Z)| \leq \sum _{ (\alpha , \beta) \in {\cal M}_2 } h^{|\alpha + \beta |/2} F_{\alpha \beta } (Z) $$ $$ F_{\alpha \beta } (Z) = 2^n (2\pi h)^{-2n} \int _{H^4} e ^{ -{1\over h} |S|^2} \prod _{j\leq n} \left ( 1 + {t _j ^2 \over h } \right ) ^{-1} \left ( 1 + {\tau _j ^2 \over h } \right ) ^{-1} \left | a^{\alpha } (s / \sqrt h) a^{\beta } (\sigma / \sqrt h) \right | $$ $$ | e ^{ -{1\over 4h} |T|^2} \partial _s ^{\alpha } \partial _{\sigma} ^{\beta } \Psi (S, T , Z )| dS dT $$ where we have set $$ a ^{\alpha } (s ) = \prod _{j\leq n} a_{\alpha _j} (s_j ). $$ There exists $K > 0$ such that, $$ \pi ^{-1/2} \int _{\R}e^{-s^2} |a_j(s)| ds \leq K,\qquad 0 \leq j \leq 2 $$ and also $$ (2 \pi ) ^{-1/2} \int _{\R} (1 + x^ 2)^{-1} dx \leq K. $$ Consequently, $$ | (B_h \Phi )(Z)| \leq K^n \sum _{ (\alpha , \beta) \in {\cal M}_2 } h^{|\alpha + \beta |/2} \sup _{(S , T) \in H^4} \left | e^{-{1\over 4h} |T|^2} \partial_s ^{\alpha } \partial_{\sigma } ^{\beta } \Psi (S , T, Z) \right |. $$ From the defintion of $\Psi $ in (\ref{5.7}), $$ | (B_h \Phi )(Z)| \leq K^n \sum _{ (\alpha , \beta) \in {\cal M}_2 } h^{|\alpha + \beta |/2} \sup _{(X , Y) \in H^4} \left | e^{-{1\over 4h} |X-Y|^2 } ( \partial_x + \partial_y) ^{\alpha } ( \partial_{\xi } + \partial_{\eta} ) ^{\beta } \Phi (X Y) \right |.$$ We then deduce (\ref{5.5}) with another constant $K$. {\it Proof of ii).} If a function $\Psi$ on $H^2$ is written as $$ \Psi (Z) = e^{ -{1\over h} |Z|^2 + {1\over h} (A \cdot Z + B \cdot \overline Z )}$$ where $A$ and $B$ are in $H^2$, and $A \cdot Z $ denotes the bi-{\bf C}-linear scalar product, then the action of the heat operator on $\Psi$ verifies, $$ ( H_{h/2} \Psi ) (Z) = \Big ( e^{{h\over 4} \Delta } \Psi \Big ) (Z) = 2^{-n} e^{{1\over 2h} |Z|^2 + {1\over 2 h} (A \cdot Z + B \cdot \overline Z ) {1\over 2 h} A \cdot B }.$$ Thus, $$ ( H_{h/2} K_h^{Beals} (X , Y, \cdot ) ) (Z) = 2^{-n} {\cal B}_h (Z , Z , X , Y) e^{-{1\over 2h} (|X|^2 + |Y|^2)} $$ where ${\cal B}_h$ is our type of reproducing kernel introduced in (\ref{2.19}). Consequently, as in (\ref{2.20}) $$ ( H_{h/2} B_h \Phi ) (Z) = \int _{H^4} \Phi (X , Y) {\cal B}_h (Z , Z , X , Y) d\mu _{H^4 , h } (X , Y) = \Phi (Z , Z).$$ \section{Proof of Proposition 3.2: first step. }\label{s6} For all operators $A$ satisfying the hypotheses of Theorem \ref{t1.2} and for some subsets $E $ of finite dimension in $B' \subset H$, we shall find a bounded continuous function $\tau_{E,h} (A)$ on $H^2$ such that \begin{equation}\label{6.1} H_{E, h/2} \tau_{E,h} (A) = \sigma_h^{wick} (A).\end{equation} It is point i) of Proposition \ref{p3.2}. Moreover, we shall give estimations on this function. 
For all finite subsets $I$ in $\Gamma $, let $E(I)$ be the subspace of $B' \subset H$ spanned by the $e_j $, $j \in I$. Recall that the elements $e_j$ $(j \in \Gamma)$ of our Hilbertian basis are in $B'$. Let ${\cal M}_2(I)$ be the set of all multi-indices $(\alpha, \beta)$ such that $\alpha _j = \beta _j = 0$ if $j \notin I$, and $\alpha _j \leq 2$ and $\beta _j \leq 2$ if $j \in I$. \begin{prop}\label{p6.1} Let $A$ be an operator satisfying the hypotheses in Theorem \ref{t1.2}. Set $I$ a finite subspace of $\Gamma$. Then, there exists a bounded continuous function $ \tau_{E(I),h} (A)$ on $H^2$ satisfying (\ref{6.1}). Moreover, \begin{equation}\label{6.2} \Vert \tau_{E(I),h} (A) \Vert _{\infty } \leq K^{|I|} \sum _{(\alpha , \beta)\in {\cal M}_2(I) } h^{-|\alpha +\beta|/2} \Vert ({\rm ad} P_h)^{\alpha} ({\rm ad} Q_h)^{\beta} A \Vert \end{equation} where $ K$ is a numerical constant. \end{prop} {\it Proof.} We denote $E = E(I)$, $E^{\perp}$ the orthogonal complement of $E$ in $H$, and $Z = (Z_E, Z_{E^{\perp}} )$ the variable in $H^2$. For all $Z_{E^{\perp}} $ in $(E^{\perp})^2$, we shall apply Proposition \ref{t5.1} replacing $H$ by $E$, with the following function $\Phi $ defined on $E^2$, $$ \Phi_ { Z_{E^{\perp}} } (X_E , Y_E) = (S_hA) ( X_E , Z_{E^{\perp}} , Y_E, Z_{E^{\perp}} ). $$ Using again notation (\ref{5.3}), which a priori only makes sense as an oscillatory integral on $E^2$, one set for all $Z = (Z_E, Z_{E^{\perp}} )$ in $H^2$, $$ \tau_{E(I),h} (A) (Z) = 2^{ {\rm dim} (E)} (2 \pi h)^{-2{\rm dim} (E) } \int _{E^4} (S_hA) ( X_E , Z_{E^{\perp}} , Y_E, Z_{E^{\perp}} ) K_h^{Beals} (X_E , Y_E, Z_E) dX_E dY_E$$ where $ K_h^{Beals} $ is defined in (\ref{5.2}). One may apply Theorem \ref{t5.1}, choosing as an orthonormal basis of $E = E(I)$, the one constituted with the $e_j$ $j\in I$. With this choice, we have from (\ref{4.5}), $$ N_h^{(2)} (\Phi_ { Z_{E^{\perp}} } ) \leq \sum _{(\alpha , \beta)\in {\cal M}_2(I) } h^{-|\alpha +\beta|/2} \Vert ({\rm ad} P_h)^{\alpha} ({\rm ad} Q_h)^{\beta} A \Vert $$ and the term in the right hand side is finite under hypothesis of Theorem \ref{t1.2}. From Theorem \ref{t5.1}, the function $ \tau_{E(I),h} (A)$ is well-defined, continuous and bounded on $H^2$ and satisfies (\ref{6.1}) and (\ref{6.2}). \hbox{\vrule \vbox to 7pt{\hrule width 6pt \hrule}\vrule } \section{Proof of Proposition \ref{p3.2}: second step.}\label{s7} For all finite subsets $I$ of $\Gamma$, let us set \begin{equation}\label{7.1} T_{I, h} = \prod _{j\in I} (I - H_{D_j , h/2} )\end{equation} where $D_j$ is spanned by the vector $e_j$ of our Hilbertian basis of $H$, and $H_{D_j , h/2}$ is the operator defined in (\ref{3.1}), with $E$ replaced by $D_j$, thus with an integral on $D_j ^2 $. When $I =\emptyset$, we set $T _{I , h} = I d$. We denote by $E(I)$ the subspace of $B'$ spanned by the $e_j$, $j\in I$. Recall that the elements $e_j$ $(j \in \Gamma)$ of our Hilbertian basis of $H$ are in $B'$. If $I = \emptyset$ then set $E(I) = \{ 0 \}$. For any operator $A$ satisfying the hypotheses in Theorem \ref{t1.2} and for all subspaces $E \subset B' \subset H$ of finite dimension, set $\tau _{E(I) , h} (A)$ the function on $H^2$ defined in the Proposition \ref{p6.1}. In particular, we may have $E = E(I)$ with $I$ being a finite subset of $\Gamma$. We choose an increasing sequence $(\Lambda_n)$ of finite subsets of $\Gamma$ with its union equals to $\Gamma$. 
For all $ n$, one defines a function $F_n $ on $H^2$ by, \begin{equation}\label{7.2} F_n = \sum _{I\subset \Lambda _n} T _{I , h} \tau_{E(I) , h} (A).\end{equation} The above sum is running over all the subsets $I$ of $\Lambda _n$ including the empty set. We shall show that this sequence of functions has indeed the properties announced Proposition \ref{p3.2}. {\it Point i) } One has, for all subsets $I \subset \Lambda _n$, $$ H_{E(\Lambda _n), h/2} = H_{E(I), h/2} H_{E(\Lambda _n \setminus I), h/2} $$ and these operators commutes with each other and with $T _{I , h}$. Consequently, $$ H_{E(\Lambda _n), h/2} F_n = \sum _{I\subset \Lambda _n} T _{I, h} H_{E(\Lambda _n \setminus I), h/2} H_{E(I), h/2} \tau_{E(I) , h} (A). $$ From equality (\ref{6.1}) applied to set $E(I)$, one has, $$ H_{E(\Lambda _n), h/2} F_n = \sum _{I\subset \Lambda _n} T _{I , h} H_{E(\Lambda _n \setminus I), h/2} \sigma_h^{wick} (A). $$ The following equality is a variant of the binomial formula, $$\sum _{I\subset \Lambda _n} T _{I , h} H_{E(\Lambda _n \setminus I), h/2} = I d. $$ So, we have proved equality (\ref{a4}), point i) of the Proposition \ref{p3.2}. Points ii) and iii) will both be a direct consequence of the following inequality. If $A$ satisfies hypothesis in Theorem \ref{t1.2}, for all $(\alpha , \beta)$ in $M_m$, for any finite subset $I$ in $\Gamma$ and for all $h$ in $(0, 1)$, \begin{equation}\label{7.3} \Vert \partial_z ^{\alpha } \partial_{\zeta} ^{\beta } T_{I, h} \tau_{E(I) , h} (A) \Vert _{\infty} \leq M (K S_{\varepsilon} ^ 2) ^{|I|} \prod _{j\in I} h \varepsilon_j^2 \prod _{j\in \Gamma } \varepsilon _j ^{\alpha _j + \beta _j} \end{equation} where $K$ is a numerical constant and $S_{\varepsilon}$ is defined in (\ref{1.13}). It remains to prove (\ref{7.3}). If $H_{D_j,h/2}$ is defined in (\ref{3.1}), with $E$ replaced by $D_j = {\rm Vect}\,(ej )$, we may write, $$ I - H_{ D_j, h/2} = {h\over 4} V_j (\partial _{z_j} ^2 + \partial _{\zeta _j} ^2 ) $$ where the operators $V_j$ are bounded in the space $ C_b$ of continuous bounded functions on $H^2$, and are commuting with partial derivatives operators. Moreover, $$ \Vert V_j \Vert _{{\cal L} (C_b)} \leq 1. $$ Therefore, one may rewrite the operator $T_{I, h}$ defined in (\ref{7.1}) under the following form, $$ T_{I, h} = \prod _{j\in I} (h/4) V_j (\partial _{z_j} ^2 + \partial _{\zeta _j} ^2 ).$$ Let $ {\cal N} (I) $ be the set of multi-indices $(\alpha , \beta )$ such that $\alpha _j = \beta _j = 0$ if $ j \notin I$, and if $j \in I$, either we have $\alpha _ j = 2$ and $\beta _j = 0$, or $\alpha _j = 0$ and $\beta _j = 2$. Consequently, $$\Vert \partial_z ^{\alpha } \partial_{\zeta} ^{\beta } T_{I, h} \tau_{E(I) , h} (A) \Vert _{\infty} \leq (h/4)^{|I|} \sum _{ (\gamma , \delta) \in {\cal N} (I) } \Vert \partial_z ^{\alpha + \gamma } \partial_{\zeta} ^{\beta + \delta } \tau_{E(I) , h} (A) \Vert _{\infty}.$$ On verifies that, $$ \left [ {\partial \over \partial x_j} + {\partial \over \partial y_j} + {\partial \over \partial z_j} \right ] K_h^{Beals} (X, Y, Z ) = 0,\qquad \left [ {\partial \over \partial {\xi}_j} + {\partial \over \partial {\eta}_j} + {\partial \over \partial \zeta _j} \right ] K_h^{Beals} (X, Y, Z ) = 0. 
$$ Consequently, $$ \partial_z ^{\alpha } \partial_{\zeta} ^{\beta } \tau _{E(I),h} (A) = \tau _{E(I),h} A_{\alpha \beta} $$ where $A_{\alpha \beta}$ is such that, $$ (S_h A_{\alpha \beta} ) (X, Y ) = ( \partial _x +\partial _y)^{\alpha } ( \partial _{\xi} +\partial _{\eta} )^{\beta } (S_hA)(X, Y ).$$ From (\ref{4.4}), $$A_{\alpha \beta} = c_{\alpha \beta} h^{-|\alpha +\beta |} ({\rm ad} P_h)^{\alpha} ({\rm ad} Q_h)^{\beta} A$$ where $|c_{\alpha \beta} | = 1$. Then, $$ \Vert \partial_z ^{\alpha } \partial_{\zeta} ^{\beta } T _{I , h} \tau _{E(I),h} (A) \Vert _{\infty } \leq (h/4)^{|I|} \sum _{(\gamma , \delta ) \in {\cal N} (I)} h^{ -|\alpha + \beta + \gamma + \delta | } \Vert \tau _{E(I),h} ( ({\rm ad} P_h)^{\alpha + \gamma } ({\rm ad} Q_h)^{\beta + \delta } A \Vert _{\infty }. $$ From the Proposition \ref{p6.1}, $$ \Vert \partial_z ^{\alpha } \partial_{\zeta} ^{\beta } T _{I , h} \tau _{E(I),h} (A) \Vert _{\infty } \leq (K h/4)^{|I|} \sum _{(\gamma , \delta ) \in {\cal N} (I)} \sum _{(\lambda , \mu ) \in {\cal M}_2 (I)} h^{ -|\alpha + \beta + \gamma + \delta | - | \lambda + \mu |/2 }$$ $$ \hskip 3cm \Vert ( ({\rm ad} P_h)^{\alpha + \gamma + \lambda } ({\rm ad} Q_h)^{\beta + \delta + \mu } A \Vert. $$ If $(\alpha , \beta) \in {\cal M} _m$, $(\gamma , \delta ) \in {\cal N} (I)$ and $(\lambda , \mu) \in {\cal M} _2(I)$, then the sum $ (\alpha + \gamma + \lambda , \beta + \delta + \mu)$ belongs to $ {\cal M} _{m+4}$. From assumptions of Theorem \ref{t1.2}, $$ \Vert \partial_z ^{\alpha } \partial_{\zeta} ^{\beta } T _{I , h} \tau _{E(I),h} (A) \Vert _{\infty } \leq M(K h/4)^{|I|} \sum _{(\gamma , \delta ) \in {\cal N} (I)} \sum _{(\lambda , \mu ) \in {\cal M}_2 (I)} h^{ |\lambda + \mu | /2} \prod _{j\in \Gamma} \varepsilon _j^{ \alpha _j + \beta _j + \gamma _j + \delta _j + \lambda _j + \mu_j}. $$ The number of multi-indices in ${\cal N} (I)$ is $2^{|I|}$, and the number of multi-indices in ${\cal M}_2(I)$ is $9^{|I|}$. For all multi-indices $ (\gamma , \delta) \in {\cal N} (I)$, we have $ \gamma _j + \delta _j = 2$ if $j\in I$. If $0 < h < 1$, for all multi-indices $ (\lambda , \mu ) \in {\cal M}_2(I)$, we have $ ( \sqrt h \varepsilon _j )^{\lambda _j+ \mu _j} \leq S_{\varepsilon}^ 2$, where $ S_{\varepsilon}$ is defined in (\ref{1.13}). Consequently, we have indeed proved (\ref{7.3}) with another universal constant $K$. From (\ref{7.2}), we deduce the points ii) and iii) of the Proposition \ref{p3.2}, which complete the proof of Theorem \ref{t1.2}. \hbox{\vrule \vbox to 7pt{\hrule width 6pt \hrule}\vrule } \section{Composition of operators.}\label{s8} \begin{theo}\label{t8.1} Let $F$ in $S_{m+6}(M , \varepsilon )$ and $G$ in $S_{m+6}(M' , \varepsilon )$ ($m \geq 0$). Then there exists a function $ H_h$ in $S_{m}(M'' ,(m+4) \varepsilon ) $ such that, \begin{equation}\label{8.1} Op_h^{weyl} (F ) \circ Op_h^{weyl}(G) = Op_h^{weyl}(H_h).\end{equation} We have set, $$ M'' = M M' \prod _{j\in \Gamma} (1 + K(m + 4)^2 S_{\varepsilon}^2 h \varepsilon_j ^2)^3 \leqno (8.2) $$ where $K$ is a universal constant and $S_{\varepsilon}$ is defined in (\ref{1.13}). \end{theo} {\it Proof.} For any multi-index $(\alpha, \beta)$ in ${\cal M}_{m+4}$ we have, $$ ({\rm ad}P_h )^{\alpha} ({\rm ad}Q_h )^{\beta} \Big ( Op_h^{weyl} (F ) \circ Op_h^{weyl} (G) \Big ) = $$ $$ \sum_{ \alpha ' + \alpha '' = \alpha \atop \beta ' + \beta '' = \beta } \Big ( ({\rm ad}P_h )^{\alpha '} ({\rm ad}Q_h )^{\beta' } Op_h^{weyl} (F ) \Big ) \circ \Big ( ({\rm ad}P_h )^{\alpha ''} ({\rm ad}Q_h )^{\beta'' } Op_h^{weyl} (G ) \Big ). 
$$ From (\ref{1.15}) (with $m$ replaced by $m +6$) and similarly for $G$, we have, for each multi-index $(\alpha, \beta)$ in ${\cal M}_{m+4}$, $$ ({\rm ad}P_h )^{\alpha} ({\rm ad}Q_h )^{\beta} \Big ( Op_h^{weyl} (F ) \circ Op_h^{weyl} (G) \Big ) \Vert \leq M M' N(\alpha , \beta ) \prod _{j\in \Gamma} (1+ 81 \pi h S_{\varepsilon} \varepsilon _j^2)^2 \prod _{j\in \Gamma} (h\varepsilon _j)^{\alpha _j + \beta _j} $$ where $N(\alpha, \beta )$ is the number of decompositions of $(\alpha, \beta)$ as a sum of two multi-indices $(\alpha ', \beta ')$ and $(\alpha '', \beta '')$. If $(\alpha, \beta)$ is in ${\cal M}_{m+4}$ then this number equals is smaller than $ (m + 4)^{|\alpha + \beta|}$. Consequently, $Op_h^{weyl} (F ) \circ Op_h^{weyl} (G)$ satisfies a condition similar to (\ref{1.15b}) with $\varepsilon _j$ remplaced by $(m + 4)\varepsilon _j$. So our Theorem \ref{t8.1} is is a consequence of Theorem \ref{t1.2}. \hbox{\vrule \vbox to 7pt{\hrule width 6pt \hrule}\vrule } [email protected]\newline LMR EA 4535 and FR CNRS 3399, Universit\'e de Reims Champagne-Ardenne, Moulin de la Housse, BP 1039, 51687 REIMS Cedex 2, France. [email protected]\newline Institut Mathématique de Jussieu UMR CNRS 7586, Analyse Algébrique, 4 Place Jussieu, 75005 Paris, France. [email protected]\newline LMR EA 4535 and FR CNRS 3399, Universit\'e de Reims Champagne-Ardenne, Moulin de la Housse, BP 1039, 51687 REIMS Cedex 2, France. \end{document}
\begin{document} \title{Asymptotic behavior and rigidity results for symmetric solutions of the elliptic system $\Delta u=W_u(u)$ \\} \author{Nicholas D. Alikakos\footnote{\rm The first author was partially supported through the project PDEGE – Partial Differential Equations Motivated by Geometric Evolution, co-financed by the European Union – European Social Fund (ESF) and national resources, in the framework of the program Aristeia of the ‘Operational Program Education and Lifelong Learning’ of the National Strategic Reference Framework (NSRF).} \footnote{The research of N. Alikakos has been co-financed by the European Union – European Social Fund (ESF) and Greek national funds through the �� Operational Program Education and Lifelong Learning’ of the National Strategic Reference Framework (NSRF) - Research Funding Program: THALES}\ \ \ and Giorgio Fusco} \date{} \maketitle \begin{abstract} We study symmetric vector minimizers of the Allen-Cahn energy and establish various results concerning their structure and their asymptotic behavior. \end{abstract} 2010 {\it Mathematical Subject Classification} {Primary: 35J47, 35J50; Secondary: 35J20} \section{Introduction} The problem of describing the structure of bounded solutions $u:\Omega\rightarrow\mathbb{R}^m$ of the equation \begin{eqnarray}\label{system-0} \left\{\begin{array}{l} \Delta u=f(u),\quad x\in\Omega \\ u=u_0,\quad x\in\partial\Omega, \end{array}\right. \end{eqnarray} where $f:\mathbb{R}^m\rightarrow\mathbb{R}^m$ is a smooth map and $\Omega\subset\mathbb{R}^n$ is a smooth domain that can be bounded or unbounded and may also enjoy symmetry properties, is a difficult and important problem which has attracted the interest of many authors in the last twenty five years see \cite{gnn}, \cite{bcn}, \cite{bcn1} and \cite{eh} just to mention a few. Questions concerning monotonicity, symmetry and asymptotic behavior are the main objectives of these investigations. Most of the existing literature concerns the scalar case $m=1$ where a systematic use of the maximum principle and its consequences are the main tools at hand. For the vector case $m\geq 2$ we mention the works \cite{bgs} and \cite{gs} where the control of the asymptotic behavior of solutions was basic for proving existence. In this paper we are interested in the case where $f(u)=W_u(u)$ is the gradient of a potential $W:\mathbb{R}^m\rightarrow\mathbb{R}$ and $u$ is a minimizer for the action functional $\int\frac{1}{2}\vert\nabla v\vert^2+W(v)$ in the sense of the following \begin{definition}\label{definition-stable} A map $u\in C^2(\Omega;\mathbb{R}^m)\cap L^\infty(\Omega;\mathbb{R}^m)$, $\Omega\subset\mathbb{R}^n$ an open set, is said to be a \underline{{\it minimizer}} or \underline{{\it minimal}} if for each bounded open lipshitz set $\Omega^\prime\subset\Omega$ it results \begin{eqnarray} J_{\Omega^\prime}(u)=\min_{v\in W_0^{1,2}(\Omega^\prime;\mathbb{R}^m)} J_{\Omega^\prime}(u+v),\quad\quad J_{\Omega^\prime}(v)=\int_{\Omega^\prime}\frac{1}{2}\vert\nabla v\vert^2+W(v), \end{eqnarray} that is $u|_{\Omega^\prime}$ is an absolute minimizers in the set of $W^{1,2}(\Omega^\prime;\mathbb{R}^m)$ maps which coincide with $u$ on $\partial\Omega^\prime$. \end{definition} Clearly if $u:\Omega\rightarrow\mathbb{R}^m$ is minimal then it is a solution of the Euler-Lagrange equation associated to the functional $J_{\Omega^\prime}$ which is the vector Allen-Cahn equation \begin{equation}\label{system} \Delta u=W_u(u),\quad x\in\Omega. 
\end{equation} We will work in the context of reflection symmetries. Our main results are Theorem \ref{main} on the asymptotic behavior of symmetric minimizers and Theorem \ref{main-1} and Theorem \ref{triple} on the {\it rigidity} of symmetric minimizers. Rigidity meaning that, under suitable assumptions, a symmetric minimizer $u:\mathbb{R}^n\rightarrow\mathbb{R}^m$ must in effect depend on a number of variables $k<n$ strictly less than the dimension $n$ of the domain space. These theorems, in the symmetric setting, are vector counterparts of analogous results which are well known in the scalar case $m=1$ \cite{bar} \cite{far}. However in the vector case there is more structure as we explain after the statement of Theorem \ref{main-2}. In \cite{af3} we discuss a rigidity theorem where the assumption of symmetry is removed. We let $G$ a reflection group acting both on the domain space $\Omega\subseteq\mathbb{R}^n$ and on the target space $\mathbb{R}^m$. We assume that $W:\mathbb{R}^m\rightarrow\mathbb{R}$ a $C^3$ potential such that \begin{description} \item[${\bf H}_1$]$W$ is symmetric with respect to $G$: $W(g u)=W(u),\;\text{ for }\;g\in G,\; u\in\mathbb{R}^m$. \end{description} For Theorem \ref{main} and Theorem \ref{main-1} $G=S$ the group of order $2$ generated by the reflection $\mathbb{R}^d\ni z\mapsto\hat{z}\in\mathbb{R}^d$ in the plane $\{z_1=0\}$: \[\hat{z}=(-z_1,z_2,\ldots,z_d),\;d=n,\,m.\] In this case the symmetry of $W$ is expressed by $W(\hat{u})=W(u),\;u\in\mathbb{R}^m$. For Theorem \ref{triple} $G=T$ the group of order $6$ of the symmetries of the equilateral triangle. $T$ is generated by the reflection $\gamma$ in the plane $\{z_2=0\}$ and $\gamma_\pm$ in the plane $\{z_2=\pm\sqrt{3}z_1\}$. We let $F\subset\mathbb{R}^d$, $d=n$ or $d=m$ a fundamental region for the action of $G$ on $\mathbb{R}^d$. If $G=S$ we take $F=\mathbb{R}_+^d=\{z:z_1>0\}$. If $G=T$ we take $F=\{z:0<z_2<\sqrt{3}z_1,\;z_1>0\}$. \begin{description} \item[${\bf H}_2$] There exists $a\in\overline{F}$ such that: \begin{eqnarray} 0=W(a)\leq W(u),\; u\in\overline{F}. \end{eqnarray} Moreover $a$ is nondegenerate in the sense that the quadratic form $D^2W(a)(z,z)$ is positive definite. \end{description} In the symmetric setting we assume minimality in the class of symmetric variations: \begin{definition}\label{definition-stable-s} Assume that $\Omega\subset\mathbb{R}^n$ and $u\in C^2(\Omega;\mathbb{R}^m)\cap L^\infty(\Omega;\mathbb{R}^m)$, are symmetric \begin{equation}\label{symmetric-equiv} \begin{split} &x\in\Omega\Rightarrow\;g x \in\Omega,\;\text{ for }\;g\in G,\\ &u(g x )= g u(x),\;\text{ for }\;g\in G,\;x\in\Omega. \end{split} \end{equation} Then $u$ is said to be a symmetric minimizer if for each bounded open symmetric lipschitz set $\Omega^\prime\subset\Omega$ and for each symmetric $v\in W_0^{1,2}(\Omega^\prime;\mathbb{R}^m)$ it results \begin{eqnarray} J_{\Omega^\prime}(u)\leq J_{\Omega^\prime}(u+v). \end{eqnarray} \end{definition} In the following by a minimizer we will always mean a symmetric minimizer in the sense of the definition above. \begin{theorem}\label{theorem-1} Assume $G=S$ and assume that $W$ satisfies ${\bf H}_1-{\bf H}_2$. Assume that $\Omega\subseteq\mathbb{R}^n$ is {\it convex-symmetric} in the sense that \begin{eqnarray} x=(x_1,\dots,x_n)\in\Omega\Rightarrow(t x_1,\dots,x_n)\in\Omega, \text{ for } \vert t\vert\leq 1. 
\end{eqnarray} Let $\mathcal{Z}=\{z\in\mathbb{R}^m:z\neq a, W(z)=0\}$ and let $u:\Omega\rightarrow\mathbb{R}^m$ a minimizer that satisfies \begin{eqnarray} \vert u(x)-z\vert>\delta,\;\text{ for }\;z\in\mathcal{Z},\; d(x,\partial\Omega^+)\geq d_0,\;x\in\Omega^+, \end{eqnarray} $\Omega^+=\{x\in\Omega:x_1>0\}$, and \begin{equation}\label{assumed-bound} \vert u\vert+\vert\nabla u\vert\leq M,\;\text{ for }\;x\in\Omega, \end{equation} for some $M>0$ Then there exist $k_0, K_0>0$ such that \begin{eqnarray}\label{exponential-0} \vert u-a\vert\leq K_0e^{-k_0 d(x,\partial\Omega^+)},\;\text{ for }\;x\in\Omega^+. \end{eqnarray} \end{theorem} \begin{proof} A minimizer $u$ satisfies the assumptions of Theorem $1.2$ in \cite{fu} that implies the result. \end{proof} Examples of minimizers that satisfy the hypothesis of Theorem \ref{theorem-1} are provided (see \cite{af2}) by the entire equivariant solutions of (\ref{system}) constructed in \cite{af1}, \cite{a1}, \cite{f}. The gradient bound in (\ref{assumed-bound}) is a consequence of the smoothness of $\Omega$ or, as in the case of the entire solutions referred to above, follows from the fact that $u$ is the restriction to a non smooth set of a smooth map. We denote $C_S^{0,1}(\overline{\Omega},\mathbb{R}^m)$ the set of lipschitz symmetric maps $v:\overline{\Omega}\rightarrow\mathbb{R}^m$ that satisfy the bounds \begin{equation}\label{bounds} \begin{split} &\|v\|_{C^{0,1}(\overline{\Omega},\mathbb{R}^m)}\leq M,\\ &\vert v-a\vert +\vert\nabla v\vert\leq K_0e^{-k_0 d(x,\partial\Omega^+)},\;x\in\Omega^+. \end{split} \end{equation} We remark that from (\ref{exponential-0}) and elliptic regularity, after redefining $k_0$ and $K_0$ if necessary, we have \begin{equation}\label{gradu-expo} u\in C_S^{0,1}(\overline{\Omega},\mathbb{R}^m), \end{equation} for the minimizer in Theorem \ref{theorem-1}. \begin{theorem}\label{main} Assume $W$, $\Omega$ and $u:\Omega\rightarrow\mathbb{R}^m$ as in Theorem \ref{theorem-1}. Assume moreover that \begin{description} \item[${\bf H}_3$] The problem \begin{eqnarray} \left\{\begin{array}{l} u^{\prime\prime}=W_u(u),\quad s\in\mathbb{R} \\ u(-s)=\hat{u}(s),\;s\in\mathbb{R},\\ \lim_{s\rightarrow+\infty}u(s)=a, \end{array}\right. \end{eqnarray} has a unique solution $\bar{u}:\mathbb{R}\rightarrow\mathbb{R}^m.$ \item[${\bf H}_4$] the operator $T$ defined by \begin{eqnarray} D(T)=W_S^{2,2}(\mathbb{R},\mathbb{R}^m),\quad\quad Tv=-v^{\prime\prime}+W_{uu}(\bar{u})v, \end{eqnarray} where $W_S^{2,2}(\mathbb{R},\mathbb{R}^m)\subset W^{2,2}(\mathbb{R},\mathbb{R}^m)$ is the subspace of symmetric maps, has a trivial kernel. \end{description} Then there exist $k, K>0$ such that \begin{eqnarray}\label{exp-baru} \vert u(x)-\bar{u}(x_1)\vert\leq Ke^{-kd(x,\partial\Omega)},\quad x\in\Omega. \end{eqnarray} \end{theorem} \begin{theorem}\label{main-1} Assume that $\Omega=\mathbb{R}^n$ and that $W$ and $u:\mathbb{R}^n\rightarrow\mathbb{R}^m$ are as in Theorem \ref{main}. Then $u$ is unidimensional: \begin{equation} u(x)=\bar{u}(x_1),\;x\in\mathbb{R}^n.\hskip3cm \end{equation} \end{theorem} \begin{theorem}\label{main-2} Assume $\Omega=\{x\in\mathbb{R}^n:\;x_n>0\}$, $W$ and $u:\Omega\rightarrow\mathbb{R}^m$ as in Theorem \ref{main}. 
Then \[u(x)=\bar{u}(x_1),\;\text{ on }\;\partial\Omega\;\Rightarrow\;u(x)=\bar{u}(x_1),\;\text{ on }\;\Omega.\] \end{theorem} From \cite{af1}, \cite{a1} and \cite{f}, we know that given a finite reflection group $G$, provided $W$ is invariant under $G$, there exists a $G$-equivariant solutions $u:\mathbb{R}^n\rightarrow\mathbb{R}^m$ of the system (\ref{system}). It is natural to ask about the asymptotic behavior of these solutions. In particular, given a unit vector $\nu=(\nu_1,\dots,\nu_n)\in\mathbb{R}^n$ one may wonder about the existence of the limit \begin{eqnarray}\label{limit} \lim_{\lambda\rightarrow +\infty}u(x^\prime+\lambda\nu)=\tilde{u}(x^\prime), \end{eqnarray} where $x^\prime$ is the projection of $x=x^\prime +\lambda\nu$ on the hyperplane orthogonal to $\nu$. One can conjecture that this limit does indeed exist and that $\tilde{u}$ is a solution of the same system equivariant with respect to the subgroup $G_\nu\subset G$ that leave $\nu$ fixed, the stabilizer of $\nu$. In \cite{af1}, \cite{a1} and \cite{f} an exponential estimate analogous to (\ref{exponential-0}) in Theorem \ref{theorem-1} was established. This gives a positive answer to this conjecture for the case where $\nu$ is inside the set $D=\text{Int}\cup_{g\in G_a}g\overline{F}$. Here $F$ is a fundamental region for the action of $G$ on $\mathbb{R}^d$, $d=n,\,m$ and $G_a\subset G$ is the subgroup that leave $a$ fixed. Under the assumptions ${\bf H}_3$ and ${\bf H}_4$ Theorem \ref{main} goes one step forward and shows that the conjecture is true when $\nu$ belongs to the interior of one of the walls of the set $D$ above and $G_\nu$ is the subgroup of order two generated by the reflection with respect to that wall. In the proof of Theorem \ref{main} the estimate (\ref{exponential-0}) is basic. Once the exponential estimate in Theorem \ref{main} is established, we conjecture that, under assumptions analogous to ${\bf H}_3$ and ${\bf H}_4$, the approach developed in the proof of Theorem \ref{main} can be used to handle the case where $\nu$ belongs to the intersection of two walls of $D$. We also expect that, under the assumption that at each step $\tilde{u}$ is unique and hyperbolic, the process can be repeated to show the whole hierarchy of limits corresponding to all possible choice of $\nu$ and always $\tilde{u}$ is a solution of the system equivariant with respect to the subgroup $G_\nu$. This program is motivated by the analogy between equivariant connection maps and minimal cones \cite{a2}. Theorem \ref{triple} below is an example of such a splitting result \cite{gmt} in the diffused interface set-up. Our next result concerns minimizers equivariant with respect to the symmetry group $T$ of the equilateral triangle. We can imagine that $T=G_\nu$ for some $\nu$ that belongs to the intersection of two walls of $D$. The following assumptions ${\bf H}^\prime_3$ and ${\bf H}^\prime_4$, in the case at hand $G=T$, correspond to the assumption ${\bf H}_3$ and ${\bf H}_4$ in Theorem \ref{main} \begin{description} \item[${\bf H}^\prime_3$] The problem \begin{eqnarray} \left\{\begin{array}{l} u^{\prime\prime}=W_u(u),\quad s\in\mathbb{R} \\ u(-s)=\gamma u(s),\;s\in\mathbb{R},\\ \lim_{s\rightarrow+\infty}u(s)=\gamma_\pm a, \end{array}\right. 
\end{eqnarray} has a unique solution $\bar{u}:\mathbb{R}\rightarrow\mathbb{R}^m.$ \item[${\bf H}^\prime_4$] the operator $T$ defined by \begin{eqnarray} D(T)=W_\gamma^{2,2}(\mathbb{R},\mathbb{R}^m),\quad\quad Tv=-v^{\prime\prime}+W_{uu}(\bar{u})v, \end{eqnarray} where $W_\gamma^{2,2}(\mathbb{R},\mathbb{R}^m)\subset W^{2,2}(\mathbb{R},\mathbb{R}^m)$ is the subspace of the maps that satisfy $u(-s)=\gamma u(s)$, has a trivial kernel. \end{description} Then we have the assumptions concerning uniqueness and hyperbolicity of $\tilde{u}$ \begin{description} \item[${\bf H}_5$] There is a unique $G$-equivariant solution $\tilde{u}:\mathbb{R}^2\rightarrow\mathbb{R}^m$ of (\ref{system}) \begin{equation}\label{g-equivariance} \tilde{u}(g s)=g \tilde u(s),\;\text{ for }\;g\in T,\;s\in\mathbb{R}^2 \end{equation} that satisfies the estimate \begin{equation}\label{exp-est-two} \vert\tilde{u}(s)-a\vert\leq K e^{-kd(s,\partial D)},\;\text{ for }\;s\in\mathbb{R}^2, \end{equation} where $D=\mathrm{Int}\overline{F}\cup \gamma\overline{F}$. \item[${\bf H}_6$] the operator $\mathcal{T}$ defined by \begin{eqnarray} D(\mathcal{T})=W_G^{2,2}(\mathbb{R}^2,\mathbb{R}^m),\quad\quad \mathcal{T}v=-\Delta v+W_{uu}(\bar{u})v, \end{eqnarray} where $W_T^{2,2}(\mathbb{R}^2,\mathbb{R}^m)\subset W^{2,2}(\mathbb{R}^2,\mathbb{R}^m)$ is the subspace of $T$-equivariant maps, has a trivial kernel. \end{description} We are now in the position of stating \begin{theorem}\label{triple} Assume that $W$ satisfies ${\bf H}_1$ and ${\bf H}_2$ with $a=(1,0)$ and moreover that $0=W(a)<W(u)$ for $u\in\overline{F}$. Assume that ${\bf H}^\prime_3$, ${\bf H}^\prime_4$ and ${\bf H}_5$, ${\bf H}_6$ hold. Let $u:\mathbb{R}^n\rightarrow\mathbb{R}^m$, $n\geq 3$ and $m\geq 2$ be a $T$-equivariant minimizer that satisfies(\ref{assumed-bound}) and, for some $\delta, d_0>0$ the condition \begin{equation}\label{stay-away} \vert u(x)-\gamma_\pm a\vert\geq\delta\;\text{ for }\;d(x,\partial D)>d_0,\;x\in D, \end{equation} where $D=\{x\in\mathbb{R}^n: \vert x_2\vert<\sqrt{3} x_1,\;x_1>0\}$. Then $u$ is two-dimensional: \begin{equation}\label{two} u(x)=\tilde{u}(x_1,x_2),\;x\in\mathbb{R}^n. \end{equation} \end{theorem} \begin{remark} If instead of a minimizers defined on $\mathbb{R}^n$ we had considered a minimizer defined on a subset $\Omega\subset\mathbb{R}^n$, instead of (\ref{two}), the conclusion of Theorem \ref{triple} would be exponential convergence of $u$ to $\tilde{u}$ similar to (\ref{exp-baru}). \end{remark} Theorem \ref{triple} is an example of a De Giorgi type result for systems where monotonicity is replaced by minimality ( see \cite{aac},\cite{jm} and section 3 in \cite{s}). It is the PDE analog of the fact that a minimal cone $\mathcal{C}$ in $\mathbb{R}^n$ with the symmetry of the equilateral triangle is necessarily of the form $\mathcal{C}=\tilde{\mathcal{C}}\times\mathbb{R}^{n-2}$, with $\tilde{\mathcal{C}}$ is the triod in the plane. For De Giorgi type results for systems, for general solutions , but under monotonicity hypotheses on the potential W, we refer to Fazly and Ghoussoub \cite{fg}. The rest of the paper is devoted to the proofs. In Section \ref{sec-main} we prove Theorem \ref{main} in Section \ref{basic} and Section \ref{replacement-lemmas} we prove a number of Lemmas that are basic for the proof of Theorem \ref{main} that we conclude in Sections \ref{proof-main} and \ref{sec-exp}. Theorems \ref{main-1} and \ref{main-2} and Theorem \ref{triple} are proved in Section \ref{main-final} and Section \ref{sec-triple}. 
\section{The proof of Theorem \ref{main}}\label{sec-main} The proof of Theorem \ref{main} that we present here, from an abstract point of view, has a lot in common with the proof of Theorem 1.2 in \cite{fu}. We will remark on this point later and spend a few words to motivate the various lemmas that compose the proof of Theorem \ref{main}. We begin with some notation and two basic lemmas. \subsection{Basic lemmas}\label{basic} In the following we use the notation $x=(s,\xi)$ with $x_1=s$ and $(x_2,\dots,x_n)=\xi$. From (\ref{bounds}) it follows that, if $(l,\xi)\in\Omega^+$ satisfies $ d((l,\xi),\partial\Omega^+)\geq l$, then the map $s\rightarrow u(s,\xi), s\in[-l,l],$ that we still denote with $u$ satisfies the bound \begin{eqnarray}\label{v-vs-exp-bound} \vert u-a\vert+\vert u_s\vert\leq K_0e^{-k_0 s}, \text{ for } s\in[0,l]. \end{eqnarray} We denote by $E_l^\mathrm{xp}\subset C^1([-l,l]:\mathbb{R}^m)$ the set of symmetric maps $v:[-l,l]\rightarrow\mathbb{R}^m$ that satisfy \begin{equation}\label{define-exp} \vert v\vert+\vert v_s\vert\leq K e^{-k s}, \text{ for } s\in[0,l] \end{equation} for some $k, K>0$. We refer to $E_l^\mathrm{xp}$ as the exponential class. \noindent We let $T_l$ the operator defined by \begin{eqnarray} D_l(T_l)=\{v\in W_S^{2,2}([-l,l],\mathbb{R}^m):v(\pm l)=0\},\quad\quad T_lv=-v^{\prime\prime}+W_{uu}(\bar{u})v. \end{eqnarray} \noindent For $l\in(0,+\infty]$ we let $\langle v,w\rangle_l=\int_{-l}^lvw$ denote the inner product in $L^2((-l,l),\mathbb{R}^m)$. We let $\|v\|_l=\langle v, v\rangle_l^{\frac{1}{2}}$ and $\|v\|_{1,l}=\|v\|_{W^{1,2}([-l,l],\mathbb{R}^m)}$. \noindent For the standard inner product in $\mathbb{R}^m$ we use the notation $(\cdot,\cdot)$. \noindent It follows directly from (\ref{define-exp}) that $\|v\|_{1,l}\leq C=\frac{K}{\sqrt{k}}$. We set \begin{eqnarray} \mathcal{B}_l^{1,2}:=\{v\in W_S^{1,2}([-l,l],\mathbb{R}^m): v(\pm l)=0;\; \|v\|_{1,l}\leq C\}, \end{eqnarray} where $W_S^{1,2}([-l,l],\mathbb{R}^m)$ is the subspace of symmetric maps. Let $\mathbb{S}$ be defined by \begin{eqnarray} \mathbb{S}=\{\nu\in W_S^{1,2}([-l,l],\mathbb{R}^m): \|\nu\|_l=1\} \end{eqnarray} and set $q_\nu=\max\{q:q\nu\in\mathcal{B}_l^{1,2}\}$. \begin{lemma}\label{strict-minimizer} Assume $H_1$ and $H_2$ as in Theorem \ref{main} and let ${\bf e}_l:\mathcal{B}_l^{1,2}\rightarrow\mathbb{R}$ be defined by \begin{eqnarray} {\bf e}_l(v):=\frac{1}{2}(\langle \bar{u}_s+v_s, \bar{u}_s+v_s\rangle_l- \langle \bar{u}_s, \bar{u}_s\rangle_l)+\int_{-l}^l (W(\bar{u}+v)-W(\bar{u})). \end{eqnarray} Then there exist $l_0>0,\, q^\circ >0 \text{ and } c>0$ such that, for all $l\geq l_0$, we have \begin{eqnarray}\label{iota-properties} \left\{\begin{array}{l} D_{qq}{\bf e}_l(q\nu)\geq c^2,\quad \text{ for } q\in[0,q^\circ]\cap[0,q_\nu],\;\nu\in\mathbb{S},\\\\ {\bf e}_l(q\nu)\geq{\bf e}_l(q^\circ\nu),\;\, \text{ for } q^\circ\leq q\leq q_\nu,\;\nu\in\mathbb{S},\\\\ {\bf e}_l(q\nu)\geq \tilde{{\bf e}}_l(p,q,\nu):={\bf e}_l(p\nu)+D_q{\bf e}_l(p\nu)(q-p) ,\\\quad\hskip4cm \text{ for } 0\leq p<q\leq q_\nu\leq q^\circ,\;\nu\in\mathbb{S},\\\\ D_p\tilde{{\bf e}}_l(p,q,\nu)\geq 0 ,\quad \text{ for } 0\leq p<q\leq q_\nu\leq q^\circ,\;\nu\in\mathbb{S}. \end{array}\right. \end{eqnarray} \end{lemma} \begin{remark} ${\bf e}_l$ is a kind of an {\it effective} potential. 
Indeed, as we shall see, in the proof of Theorem \ref{main} the map $L^2((-l,l),\mathbb{R}^m)\ni q\mapsto{\bf e}_l(q\nu)$ plays a role similar to the one of the usual potential $\mathbb{R}\ni q\mapsto W(a+q\nu)$ in the proof of Theorem 1.2 in \cite{fu}. \end{remark} \begin{proof} By differentiating twice ${\bf e}_l(q\nu)$ with respect to $q$ gives \begin{eqnarray} D_{qq}{\bf e}_l(q\nu)&=&\int_{-l}^l(\nu_s,\nu_s)+\int_{-l}^lW_{uu}(\bar{u}+q\nu)(\nu,\nu)\\\nonumber &=& D_{qq}{\bf e}_l(q\nu)|_{q=0}+\int_{-l}^l(W_{uu}(\bar{u}+q\nu)-W_{uu}(\bar{u}))(\nu,\nu). \end{eqnarray} From the interpolation inequality: \begin{equation} \begin{split} \|v\|_{L^\infty}\leq &\sqrt{2}\|v\|_{1,l}^{\frac{1}{2}}\|v\|_l^{\frac{1}{2}},\\ \leq &\sqrt{2}\|v\|_{1,l}, \end{split} \end{equation} for $q\nu\in\mathcal{B}_l^{1,2}$ we get via the second inequality \begin{equation} \|q\nu\|_{L^\infty}\leq \sqrt{2}C, \end{equation} and via the first \begin{equation} \|\nu\|_{L^\infty}\leq \sqrt{2}C^{\frac{1}{2}}q^{-\frac{1}{2}}. \end{equation} Therefore we have \begin{eqnarray}\label{w-uu-estimate} \vert W_{u_iu_j}(\bar{u}(s)+ q\nu(s))-W_{u_iu_j}(\bar{u}(s))\vert\leq \sqrt{2}C^{\frac{1}{2}}\overline{W}^{\prime\prime\prime}q^{\frac{1}{2}}, \end{eqnarray} where $\overline{W}^{\prime\prime\prime}$ is defined by \begin{eqnarray} \overline{W}^{\prime\prime\prime}:=\max_{\left.\begin{array}{l} 1\leq i,j,k\leq m\\ s\in\mathbb{R}, \vert\tau\vert\leq 1 \end{array}\right.}W_{u_iu_ju_k}(\bar{u}(s)+ \tau\sqrt{2}C). \end{eqnarray} From (\ref{w-uu-estimate}) we get \begin{eqnarray}\label{int-wuu-wuu-estimate} \vert\int_{-l}^l(W_{uu}(\bar{u}+q\nu)-W_{uu}(\bar{u}))(\nu,\nu)\vert\leq C_1q^\frac{1}{2}, \end{eqnarray} where $C_1>0$ is a constant independent of $l$. We now observe that \begin{eqnarray}\label{tl-equal-t} D_{qq}{\bf e}_l(q\nu)|_{q=0}=\langle T_l\nu,\nu\rangle_l= \langle T\tilde{\nu},\tilde{\nu}\rangle_\infty, \end{eqnarray} where $\tilde{\nu}$ is the trivial extension of $\nu$ to $\mathbb{R}$. $T$ is a self-adjoint operator which is positive by the minimality of $\bar{u}$. Therefore assumption ${\bf H}_5$ implies that the point spectrum of $T$ is bounded below by a positive number. From ${\bf H}_2$ the smallest eigenvalue $\mu$ of the matrix $W_{uu}(a)$ is positive and Persson's Theorem in \cite{ag} implies that also the remaining part of the spectrum of $T$, the essential spectrum, is bounded below by $\mu>0$. It follows that the spectrum of $T$ is bounded below by a positive constant $0<\tilde{\mu}\leq\mu$. From this (\ref{tl-equal-t}) and Theorem 13.31 in \cite{r} it follows \begin{eqnarray} D_{qq}{\bf e}_l(q\nu)|_{q=0}\geq\tilde{\mu}, \end{eqnarray} which together with (\ref{int-wuu-wuu-estimate}) implies \begin{eqnarray} D_{qq}{\bf e}_l(q\nu)|\geq\tilde{\mu}\geq c^2:=\frac{\tilde{\mu}}{2},\;\;\text{ for }\,q\in[0,\bar{q}]\cap[0,q_\nu], \end{eqnarray} where $\bar{q}=\frac{1}{4}\frac{\tilde{\mu}^2}{C_1}$. This concludes the proof of (\ref{iota-properties})$_1$. We now consider the problem \begin{eqnarray}\label{constrained-minimization} \min_{\left.\begin{array}{l} v\in\mathcal{B}_l^{1,2}\\ \|v\|_l\geq \bar{q} \end{array}\right.} {\bf e}_l(v) \end{eqnarray} Since the constraint in problem (\ref{constrained-minimization}) is closed with respect to weak convergence in $W_0^{1,2}$, if $\bar{v}_l$ is a minimizer of problem (\ref{constrained-minimization}), we have $\bar{v}_l\neq 0$. This implies \begin{eqnarray} {\bf e}_l(\bar{v}_l)=\alpha_l>0. 
\end{eqnarray} Indeed the uniqueness assumption about the minimizer $\bar{u}$ implies that $v\equiv 0$ is the unique minimizer of ${\bf e}_l$. We have \begin{eqnarray}\label{alpha-infinity} \liminf_{l\rightarrow+\infty}\alpha_l=\alpha>0. \end{eqnarray} To prove this we assume that instead there is a sequence $l_k$ such that $\lim_{k\rightarrow+\infty}\alpha_{l_k}=0$. We can also assume that the sequence $\tilde{\bar{v}}_{l_k}$ of the trivial extensions of $\bar{v}_{l_k}$ converges weakly in $W^{1,2}$ to a map $\bar{v}$ which by lower semicontinuity satisfies \begin{eqnarray} {\bf e}_\infty(\bar{v})=0. \end{eqnarray} This is in contradiction with the assumption that $v\equiv 0$ is the unique minimizer of ${\bf e}_\infty$ indeed the constraint in problem (\ref{constrained-minimization}) persists in the limit and implies $\bar{v}\neq 0$. This establishes (\ref{alpha-infinity}) and concludes the proof of (\ref{iota-properties})$_2$ with $q^\circ=\min\{\bar{q},\alpha\}$. \noindent The last two inequalities in (\ref{iota-properties}) are straightforward consequences of (\ref{iota-properties})$_1$. \end{proof} \begin{lemma}\label{l-infinity-less-l-2} Let $u$ as in Theorem \ref{theorem-1} and assume that \begin{eqnarray}\label{lemma-assumption} (l,\xi)\in\Omega^+,\; d((l,\xi),\partial\Omega^+\geq l, \end{eqnarray} then there is a constant $C_2>0$ independent of $l>1$, such that \begin{eqnarray}\label{l-infinity-less-l-2-1} \|u(\cdot,\xi)-\bar{u}\|_{L^\infty([-l,l],\mathbb{R}^m)}\leq C_2 \| u(\cdot,\xi)-\bar{u}\|_l^\frac{2}{3}. \end{eqnarray} \end{lemma} \begin{proof} From (\ref{lemma-assumption}) $u(\cdot,\xi)$ satisfies (\ref{v-vs-exp-bound}). Since also $\bar{u}$ satisfies (\ref{v-vs-exp-bound}). There is $\bar{s}\in[0,l]$ such that $\vert u(s,\xi)-\bar{u}(s)\vert\leq m=:\vert u(\bar{s},\xi)-\bar{u}(\bar{s})\vert$. From this and $\vert u(\cdot,\xi)_s-\bar{u}_s\vert\leq 2K_0$ it follows \begin{eqnarray} \vert u(s,\xi)-\bar{u}(s)\vert\geq m (1-2K_0\vert s-\bar{s}\vert),\;\,\text{ for } s\in[-l,l]\cap[\bar{s}-\frac{m}{2K_0},\bar{s}+\frac{m}{2K_0}] \end{eqnarray} and a simple computation gives (\ref{l-infinity-less-l-2-1}). \end{proof} Before continuing with the proof, we explain the meaning of the lemmas that follow. Given $l, r>0$ and $\varsigma\in\mathbb{R}^{n-1}$ we let $\mathcal{C}_l^r(\varsigma)\subset\mathbb{R}^n$ the cylinder \begin{eqnarray} \mathcal{C}_l^r(\varsigma):=\{(s,\xi):-l<s<l;\,\vert \xi-\varsigma\vert<r\}. \end{eqnarray} Lemma \ref{lemma-1}, Lemma \ref{lemma-2} and Lemma \ref{lemma-w-q} describe successive deformations through which, fixed $\lambda>0$ and $\varrho>0$ and $\bar{q}\in(0,q^\circ)$, we transform the minimizer $u$ first into a map $v$ then into $w$ and finally into a map $w^{\bar{q}}$ that satisfies the conditions \begin{equation}\label{conditions-w-q} \begin{split} & w^{\bar{q}}=u,\;\text{ on }\;\Omega\setminus\mathcal{C}_{l+\lambda}^{r+2\varrho}(\varsigma),\\ & w^{\bar{q}}(l+\frac{\lambda}{2},\xi)=\bar{u}(l+\frac{\lambda}{2}),\;\text{ for }\;\vert\xi-\varsigma\vert\leq r+\frac{\varrho}{2},\\ &\|w^{\bar{q}}(\cdot,\xi)-\bar{u}(\cdot)\|_{l+\frac{\lambda}{2}}\leq\bar{q},\;\text{ for }\;\vert\xi-\varsigma\vert\leq r+\frac{\varrho}{2} \end{split} \end{equation} The deformations described in these lemmas are complemented by precise quantitative estimates on the amount of energy required for the deformation (see (iii) in Lemma \ref{lemma-1}, (iii) in Lemma \ref{lemma-2} and (\ref{quantitative-w-wq}) in Lemma \ref{lemma-w-q}). 
Lemma \ref{lemma-1} describes the deformation of $u$ into a map $v$ that coincides with $\bar{u}$ on the lateral boundary of $\mathcal{C}_{l+\frac{\lambda}{2}}^{r+\varrho}(\varsigma)$: \begin{equation}\label{coincide-lateral} \begin{split} & v=u,\;\text{ outside }\;\mathcal{C}_{l+\lambda}^{r+2\varrho}(\varsigma)\setminus\overline{\mathcal{C}}_l^{r+2\varrho}(\varsigma)\\ &\|w (\cdot,\xi)-\bar{u}(\cdot)\|_{l+\frac{\lambda}{2}}\leq\bar{q},\;\text{ for }\;\vert\xi-\varsigma\vert= r+\frac{\varrho}{2}. \end{split} \end{equation} Lemma \ref{lemma-2} describes the deformation of $v$ into a map $w$ that satisfies \begin{equation}\label{coincide-l2} \begin{split} & w=v,\;\text{ outside }\;\mathcal{C}_{l+\frac{\lambda}{2}}^{r+\varrho}(\varsigma)\setminus\overline{\mathcal{C}}_{l+\frac{\lambda}{2}}^r(\varsigma)\\ &\|w (\cdot,\xi)-\bar{u}(\cdot)\|_{l+\frac{\lambda}{2}}\leq\bar{q},\;\text{ for }\;\vert\xi-\varsigma\vert= r+\frac{\varrho}{2}. \end{split} \end{equation} Lemma \ref{quantitative-estimate0} and Corollary \ref{corollary} show that we can replace $w^{\bar{q}}$ with a map $\omega$ that coincides with $w^{\bar{q}}$ outside $\mathcal{C}_{l+\frac{\lambda}{2}}^{r+\frac{\varrho}{2}}(\varsigma)$ and has less energy than $w^{\bar{q}}$. Moreover Corollary \ref{corollary} yields a quantitative estimate for the energy difference. In Sec.\ref{proof-main} we put together all these energy estimates and show (see Proposition \ref{l-2-bound}) that the assumption that \[\|u (\cdot,\varsigma)-\bar{u}(\cdot)\|_l\geq q^\circ\] if $r>0$ is sufficiently large, is incompatible with the minimality of $u$. Thus establishing that, if a sufficiently large cylinder $\mathcal{C}_{l+\frac{\lambda}{2}}^{r+\frac{\varrho}{2}}(\varsigma)$ is contained in $\Omega$, then we have the estimate \[\|u (\cdot,\varsigma)-\bar{u}(\cdot)\|_l< q^\circ,\] which is the main step in the proof of Theorem \ref{main}. \subsection{Replacement Lemmas}\label{replacement-lemmas} \begin{lemma}\label{lemma-1} Let $\lambda \text{ and } \varrho>0$ be fixed. Assume that $\mathcal{C}_{l+\lambda}^{r+2\varrho}(\varsigma)\subset\Omega$ satisfies \begin{eqnarray}\label{distance-m-l} d(\mathcal{C}_{l+\lambda}^{r+2\varrho}(\varsigma),\partial\Omega)\geq l+\lambda. \end{eqnarray} Then there exists a map $v\in C_S^{0,1}(\overline{\Omega},\mathbb{R}^m)$ such that \begin{description} \item[(i)] $v=u,\;\text{ on }\, \overline{\Omega}\setminus (\mathcal{C}_{l+\lambda}^{r+2\varrho}(\varsigma)\setminus \overline{\mathcal{C}}_{l}^{r+2\varrho}(\varsigma))$, \item[(ii)] $v(l+\frac{\lambda}{2},\xi)=\bar{u}(l+\frac{\lambda}{2}),\;\text{ for }\, \vert\xi-\varsigma\vert\leq r+\varrho$. \item[(iii)] $J_{\mathcal{C}_{l+\lambda}^{r+2\varrho}(\varsigma)}(v)- J_{\mathcal{C}_{l+\lambda}^{r+2\varrho}(\varsigma)}(u)\leq C_0 r^{n-1}e^{-2k l}$, \end{description} where $C_0>0$ is a constant independent of $l$ and $r$. \end{lemma} \begin{proof} For $(s,\xi)\in \overline{\mathcal{C}}_{l+\lambda}^{r+\varrho}(\varsigma) \setminus\mathcal{C}_{l}^{r+\varrho}(\varsigma)$ we define $v$ by \begin{eqnarray}\label{v-definition-1} v(s,\xi)= (1-\vert 1-2\frac{s-l}{\lambda}\vert)\bar{u}(s)+ \vert 1-2\frac{s-l}{\lambda}\vert u(s,\xi),\hskip2cm\\\nonumber\hskip4cm s\in[l,l+\lambda],\, \vert \xi-\varsigma\vert\leq r+\varrho. \end{eqnarray} It remains to define $v(s,\xi) \text{ for } (s,\xi)\in(l,l+\lambda)\times\{\xi:r+\varrho<\vert \xi-\varsigma\vert<r+2\varrho\}$. 
Set \begin{eqnarray} B u(s,\xi)=\vert\frac{s-l-\lambda}{\lambda}\vert u(l,\xi)+\frac{s-l}{\lambda}u(l+\lambda,\xi),\\\nonumber \tilde{u}(s,\xi)=u(s,\xi)-B u(s,\xi).\hskip2.5cm \end{eqnarray} Note that by (\ref{v-definition-1}) $\vert\xi-\varsigma\vert=r+\varrho$ implies $v(l,\xi)=u(l,\xi),\; v(l+\lambda,\xi)=u(l+\lambda,\xi)$ and therefore we have \begin{eqnarray} \vert\xi-\varsigma\vert=r+\varrho\Rightarrow B u(s,\xi)=B v(s,\xi), \end{eqnarray} where $v$ is defined in (\ref{v-definition-1}). Set \begin{eqnarray} \hat{v}(s,\xi)= v(s,(r+\varrho)\frac{\xi-\varsigma}{\vert\xi-\varsigma\vert}+\varsigma)- B u(s,(r+\varrho)\frac{\xi-\varsigma}{\vert\xi-\varsigma\vert}+\varsigma), \end{eqnarray} where again $v$ is defined in (\ref{v-definition-1}). With these notations we complete the definition of $v$ by setting \begin{eqnarray}\label{v-definition-2} v(s,\xi)=B u(s,\xi) +\frac{\vert \xi-\varsigma\vert-r-\varrho}{\varrho}\tilde{u}(s,\xi) +\frac{2\varrho+r-\vert \xi-\varsigma\vert}{\varrho}\hat{v}(s,\xi),\\\nonumber \text{ for } (s,\xi)\in(l,l+\lambda)\times\{\xi:r+\varrho<\vert \xi-\varsigma\vert<r+2\varrho\}. \end{eqnarray} Statement (i) and (ii) are obvious consequences of the definition of $v$. Direct inspection of (\ref{v-definition-1}) and (\ref{v-definition-2}) shows that $v$ is continuous. From (\ref{v-definition-1}) $v(s,\xi)$ is a linear combination of $\bar{u}(s)$ and $u(s,\xi)$ computed for $s\in[l,l+\lambda]$. A similar statement applies to $v(s,\xi)$ in (\ref{v-definition-2}) since $B u(s,\xi),\,\hat{v}(s,\xi)$ and $\tilde{u}(s,\xi)$ are linear combinations of $u(s,\xi)$ and $v(s,\xi)$ in (\ref{v-definition-1}) computed for $s\in[l,l+\lambda]$. From this, assumption (\ref{distance-m-l}) and (\ref{v-vs-exp-bound}) we conclude \begin{eqnarray}\label{energy-density-bound} \vert v-a\vert+\vert\nabla v\vert\leq C_3 e^{-k_0 l} \;\,\text{ for } (s,\xi)\in \mathcal{C}_{l+\lambda}^{r+2\varrho}(\varsigma)\setminus \overline{\mathcal{C}}_{l}^{r+2\varrho}(\varsigma), \end{eqnarray} where $C_3>0$ is a constant independent of $l$ and $r$. From (\ref{energy-density-bound}) and the assumptions on the potential $W$ it follows \begin{eqnarray} \frac{1}{2}\nabla v\vert^2+W(v)\leq C_4e^{-2k_0 l}, \end{eqnarray} which together with $\mathcal{H}^n(\mathcal{C}_{l+\lambda}^{r+2\varrho}(\varsigma)\setminus \overline{\mathcal{C}}_{l}^{r+2\varrho}(\varsigma))\leq C_5 r^{n-1}$ concludes the proof. \end{proof} Given a number $0<\bar{q}< q^\circ$, let $A_{\bar{q}}$ be the set \begin{eqnarray} A_{\bar{q}}:=\{\xi: \|v(\cdot,\xi)-\bar{u}(\cdot)\|_{l+\frac{\lambda}{2}}>\bar{q},\,\vert \xi-\varsigma\vert< r+\varrho\}, \end{eqnarray} where $v$ is the map constructed in Lemma \ref{lemma-1}. \begin{lemma}\label{lemma-2} Let $v$ as before and let $S:=A_{\bar{q}}\cap\{\xi: r<\vert \xi-\varsigma\vert< r+\varrho\}$. 
Then there is a constant $C_1>0$ independent from $l \text{ and } r$ and a map $w\in C_S^{0,1}(\overline{\Omega},\mathbb{R}^m)$ such that \begin{description} \item[(i)] $w=v \text{ on } \overline{\Omega}\setminus(\mathcal{C}_{l+\frac{\lambda}{2}}^{r+\varrho}(\varsigma) \setminus\overline{\mathcal{C}}_{l+\frac{\lambda}{2}}^{r}(\varsigma))$ \item[(ii)] $\| w-\bar{u}\|_{l+\frac{\lambda}{2}}\leq \bar{q}, \text{ for } \vert\xi-\varsigma\vert=r+\frac{\varrho}{2}.$ \item[(iii)] $J_{\mathcal{C}_{l+\frac{\lambda}{2}}^{r+\varrho}(\varsigma) \setminus\overline{\mathcal{C}}_{l+\frac{\lambda}{2}}^{r}(\varsigma)}(w)- J_{\mathcal{C}_{l+\frac{\lambda}{2}}^{r+\varrho}(\varsigma) \setminus\overline{\mathcal{C}}_{l+\frac{\lambda}{2}}^{r}(\varsigma)}(v) \leq C_1\mathcal{H}^{n-1}(S)$. \end{description} \end{lemma} \begin{proof} Set \begin{equation}\label{qv-and-nuv} \begin{split} & q^v(\xi)=\|v(\cdot,\xi)-\bar{u}(\cdot)\|_{l+\frac{\lambda}{2}},\\ & \nu^v(s,\xi)=\frac{v(s,\xi)-\bar{u}(s)}{q^v(\xi)}, \end{split}\text{ for }\;s\in(-l-\frac{\lambda}{2},l+\frac{\lambda}{2}),\;\xi\in S. \end{equation} and, for $s\in(-l-\frac{\lambda}{2},l+\frac{\lambda}{2}),\;\xi\in S$, define \begin{equation}\label{w} \begin{split} & w(s,\xi)=\bar{u}(s)+q^w(\xi)\nu^v(s,\xi),\\ & q^w(\xi)=(1-\vert 1-2\frac{\vert\xi-\varsigma\vert-r}{\varrho}\vert)\bar{q}+ \vert 1-2\frac{\vert\xi-\varsigma\vert-r}{\varrho}\vert q^v(\xi). \end{split} \end{equation} From this definition it follows that $w$ coincides with $v=\bar{u}+q^v\nu^v$ if $\vert\xi-\varsigma\vert=r$ or $\vert\xi-\varsigma\vert=r+\varrho$ or $q^v=\bar{q}$. This shows that $w$ coincides with $v$ on the boundary of the set $(-l-\frac{\lambda}{2},l+\frac{\lambda}{2})\times S$ and proves (i). From (\ref{w}) also follows that $q^w=\bar{q}$ for $\vert\xi-\varsigma\vert=r+\frac{\varrho}{2}$ for $\xi\in S$. This and the definition of $S$ imply (ii). To prove (iii) we note that \begin{eqnarray}\label{w-bar-u} \vert w-\bar{u}\vert=\vert q^w\nu^v\vert\leq\vert q^v\nu^v\vert=\vert v-\bar{u}\vert, \text{ for } s\in(-l-\frac{\lambda}{2},l+\frac{\lambda}{2}),\;\xi\in S. \end{eqnarray} which implies \begin{eqnarray} \vert w-a\vert\leq Ke^{-k s},\;\text{ for }\;s\in(0,l+\frac{\lambda}{2}),\;\xi\in S. \end{eqnarray} Therefore we have \begin{eqnarray}\label{potential-bound} \int_{-l-\frac{\lambda}{2}}^{l+\frac{\lambda}{2}}(W(w)-W(v)) \leq\int_{-l-\frac{\lambda}{2}}^{l+\frac{\lambda}{2}}W(w)\leq C, \text{ for } \xi\in S. \end{eqnarray} We can write \[w=\frac{q^w}{q^v}(v-\bar{u}),\;\text{ for }\;s\in(0,l+\frac{\lambda}{2}),\;\xi\in S\] therefore we have, using also (\ref{energy-density-bound}) \begin{equation}\label{computation} \begin{split} & w_s=\frac{q^w}{q^v}(v_s-\bar{u}_s)\;\Rightarrow\;\vert w_s\vert\leq K e^{-k\vert s\vert},\\ & w_{\xi_j}=(\frac{q^w}{q^v})_{\xi_j}(v-\bar{u})+\frac{q^w}{q^v}v_{\xi_j}. \end{split} \end{equation} From $q^v_{\xi_j}=\langle \nu^v,v_{\xi_j}\rangle_{l+\frac{\lambda}{2}}$ and (\ref{w}) it follows \begin{equation}\label{computation1} \begin{split} & (\frac{q^w}{q^v})_{\xi_j}=\vert 1-2\frac{\vert\xi-\varsigma\vert-r}{\varrho}\vert_{\xi_j}(1-\frac{\bar{q}}{q^v}) -(1-\vert 1-2\frac{\vert\xi-\varsigma\vert-r}{\varrho}\vert)\frac{\bar{q}}{(q^v)^2}\langle \nu^v,v_{\xi_j}\rangle_{l+\frac{\lambda}{2}},\\ & \Rightarrow\;\vert (\frac{q^w}{q^v})_{\xi_j}\vert\leq\frac{2}{\varrho}+\frac{1}{q^v}\|v_{\xi_j}\|_{l+\frac{\lambda}{2}}. \end{split} \end{equation} where we have also used $\frac{\bar{q}}{q^v}\leq 1$ for $\xi\in S$. 
From (\ref{computation1}) and (\ref{computation1}) it follows \[\vert w_{\xi_j}\vert\leq(\frac{2}{\varrho}+\frac{\|v_{\xi_j}\|_{l+\frac{\lambda}{2}}}{\bar{q}})\vert v-\bar{u}\vert+ \vert v_{\xi_j}\vert\leq K e^{-k\vert,\;\text{ for }\; s\vert}s\in(-l-\frac{\lambda}{2},l+\frac{\lambda}{2}),\;\xi\in S,\] where we have also used (\ref{energy-density-bound}). From this and (\ref{computation}) we conclude \begin{eqnarray} \int_{-l-\frac{\lambda}{2}}^{l+\frac{\lambda}{2}}(\vert\nabla w\vert^2-\vert\nabla v\vert^2)\leq \int_{-l-\frac{\lambda}{2}}^{l+\frac{\lambda}{2}}\vert\nabla w\vert^2\leq C, \text{ for } \xi\in S. \end{eqnarray} This inequality together with (\ref{potential-bound}) conclude the proof. \end{proof} \begin{lemma}\label{lemma-w-q} Let $w$ the map constructed in Lemma \ref{lemma-2}. Define $w^{\bar{q}}$ by setting \begin{eqnarray} w^{\bar{q}}=\left\{\begin{array}{l} \bar{u}+\bar{q}\nu^v, \text{ for } (s,\xi)\in\mathcal{C}_{l+\frac{\lambda}{2}}^{r+\frac{\varrho}{2}}(\varsigma),\;\xi\in A_{\bar{q}},\\\\ w, \text{ for } (s,\xi)\in\mathcal{C}_{l+\frac{\lambda}{2}}^{r+\frac{\varrho}{2}}(\varsigma),\;\xi\not\in A_{\bar{q}}, \text{ and for } (s,\xi)\not\in\mathcal{C}_{l+\frac{\lambda}{2}}^{r+\frac{\varrho}{2}}(\varsigma). \end{array}\right. \end{eqnarray} Then $w^{\bar{q}}\in C_S^{0,1}(\overline{\Omega},\mathbb{R}^m)$ and \begin{eqnarray}\label{quantitative-w-wq} J_{\mathcal{C}_{l+\frac{\lambda}{2}}^{r+\frac{\varrho}{2}}(\varsigma)}(w^{\bar{q}}) -J_{\mathcal{C}_{l+\frac{\lambda}{2}}^{r+\frac{\varrho}{2}}(\varsigma)}(w)\leq 0. \end{eqnarray} \end{lemma} \begin{proof} We have $w-\bar{u}=q^w\nu^w$ and $q^w>\bar{q}$ on $A_{\bar{q}}$. Therefore, recalling the definition of ${\bf e}_l$ and Lemma \ref{strict-minimizer} we have \begin{eqnarray}\label{difference-w-wq} J_{\mathcal{C}_{l+\frac{\lambda}{2}}^{r+\frac{\varrho}{2}}(\varsigma)}(w^{\bar{q}}) -J_{\mathcal{C}_{l+\frac{\lambda}{2}}^{r+\frac{\varrho}{2}}(\varsigma)}(w)&=& \int_{\tilde{A}_{\bar{q}}} ({\bf e}_{l+\frac{\lambda}{2}}(\bar{q}\nu^w)-{\bf e}_{l+\frac{\lambda}{2}}(q^w\nu^w))d\xi\\\nonumber &&+ \frac{1}{2} \sum_j\int_{\tilde{A}_{\bar{q}}}(\langle w^{\bar{q}}_{\xi_j},w^{\bar{q}}_{\xi_j}\rangle_{l+\frac{\lambda}{2}}- \langle w_{\xi_j},w_{\xi_j}\rangle_{l+\frac{\lambda}{2}})d\xi\\\nonumber &\leq& \frac{1}{2} \sum_j\int_{\tilde{A}_{\bar{q}}}(\langle w^{\bar{q}}_{\xi_j},w^{\bar{q}}_{\xi_j}\rangle_{l+\frac{\lambda}{2}}- \langle w_{\xi_j},w_{\xi_j}\rangle_{l+\frac{\lambda}{2}})d\xi, \end{eqnarray} To conclude the proof we note that for $\xi\in\tilde{A}_{\bar{q}}$ \begin{equation}\label{wj-expressions} \begin{split} & w_{\xi_j}^{\bar{q}}=\bar{q}\nu_{\xi_j}^v,\;\Rightarrow\;\langle w_{\xi_j}^{\bar{q}},w_{\xi_j}^{\bar{q}}\rangle_{l+\frac{\lambda}{2}}=\bar{q}^2\langle \nu_{\xi_j}^v,\nu_{\xi_j}^v\rangle_{l+\frac{\lambda}{2}},\\ & w_{\xi_j}=q_{\xi_j}^w\nu+q^w\nu_{\xi_j}^v,\;\Rightarrow\;\langle w_{\xi_j},w_{\xi_j}\rangle_{l+\frac{\lambda}{2}}= (q_{\xi_j}^w)^2+(q^w)^2\langle \nu_{\xi_j}^v,\nu_{\xi_j}^v\rangle_{l+\frac{\lambda}{2}} \end{split} \end{equation} where we have also used that $\langle \nu^v,\nu_{\xi_j}^v\rangle_{l+\frac{\lambda}{2}}=0$. Form (\ref{wj-expressions}) it follows \[\langle w^{\bar{q}}_{\xi_j},w^{\bar{q}}_{\xi_j}\rangle_{l+\frac{\lambda}{2}}- \langle w_{\xi_j},w_{\xi_j}\rangle_{l+\frac{\lambda}{2}}=-(q_{\xi_j}^v)^2+(\bar{q}^2-(q^w)^2)\langle \nu_{\xi_j}^v,\nu_{\xi_j}^v\rangle_{l+\frac{\lambda}{2}}\leq 0,\] for $\xi\in\tilde{A}_{\bar{q}}$. This and (\ref{difference-w-wq}) prove (\ref{quantitative-w-wq}). 
\end{proof} Next we show that we can associate to $w^{\bar{q}}$ a map $\omega$ which coincides with $w^{\bar{q}}$ on $\Omega\setminus \mathcal{C}_{l+\frac{\lambda}{2}}^{r+\frac{\varrho}{2}}(\varsigma)$ and has less energy than $w^{\bar{q}}$. Moreover we derive a quantitative estimate of the energy difference. We follow closely the argument in \cite{fu}. First we observe that, if we define $q^\ast:=q^{w^{\bar{q}}}$, we can represent $J_{\mathcal{C}_{l+\frac{\lambda}{2}}^{r+\frac{\varrho}{2}}(\varsigma)}(w^{\bar{q}})$ in the {\it polar} form \begin{eqnarray} J_{\mathcal{C}_{l+\frac{\lambda}{2}}^{r+\frac{\varrho}{2}}(\varsigma)}(w^{\bar{q}})- J_{\mathcal{C}_{l+\frac{\lambda}{2}}^{r+\frac{\varrho}{2}}(\varsigma)}(\bar{u}) \hskip7cm\\\nonumber \hskip2cm= \int_{B_{\varsigma,r +\frac{\varrho}{2}}\cap\{q^\ast>0\}}\frac{1}{2}(\vert\nabla q^\ast\vert^2+{q^\ast}^2\sum_j\langle \nu_{\xi_j}^w,\nu_{\xi_j}^w\rangle_{l+\frac{\lambda}{2}})+{\bf e}_{l+\frac{\lambda}{2}}(q^\ast\nu^w). \end{eqnarray} This follows from $\nu^w=\nu^v$ and from $\langle \nu^v,\nu_{\xi_j}^v\rangle_{l+\frac{\lambda}{2}}=0$ that implies \[\sum_j\langle w_{\xi_j}^{\bar{q}},w_{\xi_j}^{\bar{q}}\rangle_{l+\frac{\lambda}{2}}=\vert\nabla q^\ast\vert^2+{q^\ast}^2\sum_j\langle \nu_{\xi_j}^w,\nu_{\xi_j}^w\rangle_{l+\frac{\lambda}{2}}\] and from the definition of ${\bf e}_l$ in Lemma \ref{strict-minimizer}. We remark that the definition of $q^\ast \text{ and } w^{\bar{q}}$ imply \begin{eqnarray} q^\ast&\leq& \bar{q}, \text{ on } B_{\varsigma,r +\frac{\varrho}{2}},\\\nonumber q^\ast&=& \bar{q}, \text{ on } A_{\bar{q}}\cap B_{\varsigma,r +\frac{\varrho}{2}}. \end{eqnarray} \begin{lemma}\label{quantitative-estimate0} Let $\varphi:B_{\varsigma,r +\frac{\varrho}{2}}\rightarrow\mathbb{R}$ the solution of \begin{eqnarray}\label{phi-comparison} \left\{\begin{array}{l} \Delta\varphi=c^2\varphi, \text{ in } B_{\varsigma,r +\frac{\varrho}{2}}\\ \varphi=\bar{q}, \text{ on } \partial B_{\varsigma,r +\frac{\varrho}{2}}. \end{array}\right. \end{eqnarray} Then there is a map $\omega\in C_S^{0,1}(\overline{\Omega},\mathbb{R}^m)$ with the following properties \begin{eqnarray} \left\{\begin{array}{l} \omega=w^{\bar{q}},\; \text{ on }\; \Omega\setminus \mathcal{C}_{l+\frac{\lambda}{2}}^{r+\frac{\varrho}{2}}(\varsigma),\\\\ \omega=q^\omega\nu^w+ \bar{u},\; \text{ on }\; \mathcal{C}_{l+\frac{\lambda}{2}}^{r+\frac{\varrho}{2}}(\varsigma),\\\\ q^\omega\leq\varphi\leq\bar{q},\; \text{ on }\; \mathcal{C}_{l+\frac{\lambda}{2}}^{r+\frac{\varrho}{2}}(\varsigma). \end{array}\right. \end{eqnarray} Moreover \begin{eqnarray}\label{quantitative-estimate} J_{\mathcal{C}_{l+\frac{\lambda}{2}}^{r+\frac{\varrho}{2}}(\varsigma)}( w^{\bar{q}})- J_{\mathcal{C}_{l+\frac{\lambda}{2}}^{r+\frac{\varrho}{2}}(\varsigma)}(\omega)\hskip7.5cm \\\nonumber\hskip2cm\geq \int_{B_{\varsigma,r +\frac{\varrho}{2}}\cap\{q^\ast>\varphi\}} ({\bf e}_{l+\frac{\lambda}{2}}(q^\ast\nu^w)-{\bf e}_{l+\frac{\lambda}{2}}(\varphi\nu^w) -D_q{\bf e}_{l+\frac{\lambda}{2}}(\varphi\nu^w)(q^\ast-\varphi))d\xi. \end{eqnarray} \end{lemma} \begin{proof} Let $b>0$, $b\leq\min_{\xi\in B_{\varsigma,r +\frac{\varrho}{2}}}\varphi$ be fixed and let $A_b\subset B_{\varsigma,r +\frac{\varrho}{2}}$ the set $A_b:=\{\xi\in B_{\varsigma,r +\frac{\varrho}{2}}:q^\ast>b\}$. $A_b$ is an open set since $w^{\bar{q}}=\bar{u}+q^\ast\nu^w$ is continuous by construction. 
Let \begin{eqnarray}\label{reduced-action} \mathcal{J}_{A_b}(p)=\int_{A_b}(\frac{1}{2}\vert\nabla p\vert^2+{\bf e}_{l+\frac{\lambda}{2}}(\vert p\vert\nu^w))d\xi, \end{eqnarray} Since $A_b$ is open and $q^\ast\in L^\infty(A_b,\mathbb{R})$ there exists a minimizer $p^\ast\in q^\ast+W_0^{1,2}(A_b,\mathbb{R})$ of the problem \begin{eqnarray} \mathcal{J}_{A_b}(p^\ast)=\min_{q^\ast+W_0^{1,2}(A_b,\mathbb{R})}{\mathcal{J}_{A_b}}. \end{eqnarray} We also have \begin{eqnarray} 0\,\leq\,p^\ast\,\leq\,\bar{q}. \end{eqnarray} This follows from (\ref{iota-properties}) that implies $\mathcal{J}_{A_b}(\frac{p^\ast+\vert p^\ast\vert}{2})\leq\mathcal{J}_{A_b}(p^\ast)$ and therefore $p^\ast\geq 0$. The other inequality is a consequence of $\mathcal{J}_{A_b}(\min\{p^\ast,\bar{q}\})\leq\mathcal{J}_{A_b}(p^\ast)$ which follows from $\int_{A_b}\vert\nabla(\min\{p^\ast,\bar{q}\})\vert^2\leq \int_{A_b}\vert\nabla p^\ast\vert^2$ and from (\ref{iota-properties}). Since the map $q\rightarrow {\bf e}_{l+\frac{\lambda}{2}}(\vert q\vert\nu^w))$ is a $C^1$ map, we can write the variational equation \begin{eqnarray}\label{rho-variation0} \int_{A_b}((\nabla p^\ast,\nabla\gamma)+D_q{\bf e}_{l+\frac{\lambda}{2}}( p^\ast\nu^w)\gamma)d\xi=0, \end{eqnarray} for all $\gamma\in W_0^{1,2}(A_b,\mathbb{R})\cap L^\infty(A_b)$. In particular, if we define $A_b^*:=\{x\in A_b: p^\ast>\varphi\}$, we have \begin{eqnarray}\label{rho-variation} \int_{A_b^*}((\nabla p^\ast,\nabla\gamma)+D_q{\bf e}_{l+\frac{\lambda}{2}}( p^\ast\nu^w)\gamma)d\xi=0, \end{eqnarray} for all $\gamma\in W_0^{1,2}(A_b,\mathbb{R})\cap L^\infty(A_b)$ that vanish on $A_b\setminus A_b^*$. If we take $\gamma=(p^\ast-\varphi)^+$ in (\ref{rho-variation}) and use (\ref{iota-properties})$_2$ which implies $D_q{\bf e}_{l+\frac{\lambda}{2}}( p^\ast\nu^w)\geq c^2 p^\ast$ we get \begin{eqnarray}\label{rho-variation1} \int_{A_b^*}((\nabla p^\ast,\nabla(p^\ast-\varphi))+c^2 p^\ast(p^\ast-\varphi))d\xi\leq 0, \end{eqnarray} This inequality and \begin{eqnarray}\label{phi-variation1} \int_{A_b^*}((\nabla\varphi,\nabla(p^\ast-\varphi))+c^2\varphi(p^\ast-\varphi))dx=0, \end{eqnarray} that follows from (\ref{phi-comparison}) imply \begin{eqnarray}\label{rho-variation2} \int_{A_b^*}(\vert\nabla(p^\ast-\varphi)\vert^2+c^2(p^\ast-\varphi)^2)d\xi\leq 0. \end{eqnarray} That is $\mathcal{H}^n(A_b^*)=0$ which together with $p^\ast\leq\varphi$ on $A_b\setminus A_b^*$ shows that \begin{eqnarray}\label{rho-min-phi} p^\ast\leq\varphi, \text{ for } \xi\in A_b. \end{eqnarray} Let $\omega$ be the map defined by setting \begin{eqnarray}\label{v-definition} \omega=\left\{\begin{array}{l} w^{\bar{q}},\text{ for } (s,\xi)\in \Omega\setminus (-l-\frac{\lambda}{2}, l+\frac{\lambda}{2})\times A_b,\\\\ \bar{u}+q^\omega\nu^w=\bar{u}+\min\{p^\ast,q^\ast\}\nu^w, \text{ for } \xi\in A_b. \end{array}\right. \end{eqnarray} Note that this definition, the definition of $A_b$ and (\ref{rho-min-phi}) imply \begin{eqnarray}\label{q-omega-min-phi} q^\omega\leq\varphi, \text{ for } \xi\in B_{\varsigma,r+\frac{\varrho}{2}}. 
\end{eqnarray} From (\ref{v-definition}) we have \begin{eqnarray}\label{diff-energy} J_{\mathcal{C}_{l+\frac{\lambda}{2}}^{r+\frac{\varrho}{2}}(\varsigma)}( w^{\bar{q}})- J_{\mathcal{C}_{l+\frac{\lambda}{2}}^{r+\frac{\varrho}{2}}(\varsigma)}(\omega) \hskip8cm\\\nonumber\geq \int_{A_b\cap\{p^\ast<q^\ast\}}(\frac{1}{2}(\vert\nabla q^\ast\vert^2-\vert\nabla p^\ast\vert^2+((q^\ast)^2-(p^\ast)^2)\sum_{j=1}^{n} \langle \nu_{\xi_j}^w, \nu_{\xi_j}^w \rangle_{l+\frac{\lambda}{2}})\\\nonumber+{\bf e}_{l+\frac{\lambda}{2}}( q^\ast\nu^w)-{\bf e}_{l+\frac{\lambda}{2}}( p^\ast\nu^w))d\xi\\\nonumber \geq\int_{A_b\cap\{p^\ast<q^\ast\}}(\frac{1}{2}(\vert\nabla q^\ast\vert^2-\vert\nabla p^\ast\vert^2 +{\bf e}_{l+\frac{\lambda}{2}}( q^\ast\nu^w)-{\bf e}_{l+\frac{\lambda}{2}}( p^\ast\nu^w))d\xi\\\nonumber \geq\int_{A_b\cap\{p^\ast<q^\ast\}}(\frac{1}{2}\vert\nabla q^\ast-\nabla p^\ast\vert^2\hskip5.5cm\\\nonumber \hskip1.7cm+{\bf e}_{l+\frac{\lambda}{2}}( q^\ast\nu^w)-{\bf e}_{l+\frac{\lambda}{2}}( p^\ast\nu^w)d\xi-D_q{\bf e}_{l+\frac{\lambda}{2}}( p^\ast\nu^w)(q^\ast-p^\ast))d\xi\geq 0. \end{eqnarray} where we have used \begin{eqnarray} \frac{1}{2}(\vert\nabla q^\ast\vert^2-\vert\nabla p^\ast\vert^2)&=&\frac{1}{2}\vert\nabla q^\ast-\nabla p^\ast\vert^2 +(\nabla p^\ast,\nabla( q^\ast-p^\ast)),\\\nonumber\\\nonumber \text{ and }\quad\quad\quad\quad\quad&&\\\nonumber\\\nonumber \int_{A_b\cap\{p^\ast<q^\ast\}}(\nabla p^\ast,\nabla(q^\ast-p^\ast)) &=&-\int_{A_b\cap\{p^\ast<q^\ast\}}D_q{\bf e}_{l+\frac{\lambda}{2}}( p^\ast\nu^w)(q^\ast-p^\ast)d\xi, \end{eqnarray} which follows from (\ref{rho-variation0}) with $\gamma=(q^\ast-p^\ast)^+$. From (\ref{iota-properties}$_3$) and (\ref{rho-min-phi}) we have \begin{eqnarray} {\bf e}_{l+\frac{\lambda}{2}}( q^\ast\nu^w)-\tilde{{\bf e}}_{l+\frac{\lambda}{2}}(p^\ast,q^\ast,\nu^w)\geq {\bf e}_{l+\frac{\lambda}{2}}( q^\ast\nu^w)-\tilde{{\bf e}}_{l+\frac{\lambda}{2}}(\varphi, q^\ast,\nu^w). \end{eqnarray} From this and (\ref{q-omega-min-phi}) which implies \begin{eqnarray} B_{\varsigma,r+\frac{\varrho}{2}}\cap\{\phi<q^\ast\}= A_b\cap\{\phi<q^\ast\}\subset A_b\cap\{p^\ast<q^\ast\}, \end{eqnarray} we have \begin{eqnarray} \int_{A_b\cap\{p^\ast<q^\ast\}} {\bf e}_{l+\frac{\lambda}{2}}( q^\ast\nu^w)-{\bf e}_{l+\frac{\lambda}{2}}( p^\ast\nu^w)-D_q{\bf e}_{l+\frac{\lambda}{2}}( p^\ast\nu^w)(q^\ast-p^\ast)d\xi\\\nonumber \geq \int_{B_{\varsigma,r+\frac{\varrho}{2}}\cap\{\varphi<q^\ast\}} {\bf e}_{l+\frac{\lambda}{2}}( q^\ast\nu^w)-{\bf e}_{l+\frac{\lambda}{2}}( p^\ast\nu^w)-D_q{\bf e}_{l+\frac{\lambda}{2}}( p^\ast\nu^w)(q^\ast-p^\ast)d\xi\\\nonumber \geq \int_{B_{\varsigma,r+\frac{\varrho}{2}}\cap\{\varphi<q^\ast\}} {\bf e}_{l+\frac{\lambda}{2}}( q^\ast\nu^w)-{\bf e}_{l+\frac{\lambda}{2}}( \varphi\nu^w)-D_q{\bf e}_{l+\frac{\lambda}{2}}(\varphi\nu^w)(q^\ast-\varphi))d\xi. \end{eqnarray} The inequality (\ref{quantitative-estimate}) follows from this and (\ref{diff-energy}). \end{proof} \begin{corollary}\label{corollary} Let $w^{\bar{q}}$ as before and let $\omega\in C_S^{0,1}(\overline{\Omega},\mathbb{R}^m)$ the map constructed in Lemma \ref{quantitative-estimate0}. Then there is a number $c_1>0$ independent from $l, r, \lambda$ and $\varrho$ such that \begin{eqnarray} J_{\mathcal{C}_{l+\frac{\lambda}{2}}^{r+\frac{\varrho}{2}}(\varsigma)}( w^{\bar{q}})- J_{\mathcal{C}_{l+\frac{\lambda}{2}}^{r+\frac{\varrho}{2}}(\varsigma)}(\omega) \geq c_1\mathcal{H}^{n-1}(A_{\bar{q}}\cap B_{\varsigma,r}). 
\end{eqnarray} \end{corollary} \begin{proof} Set $R=r+\frac{\varrho}{2}$, then we have $\varphi(\xi)=\bar{q}\phi(\vert \xi-\varsigma\vert,R)$ with $\phi(\cdot,R):[0,R]\rightarrow\mathbb{R}$ a positive function which is strictly increasing in $(0,R]$. Moreover we have $\phi(R,R)=1$ and \begin{eqnarray}\label{phi-l} R_1<R_2,\;t\in(0, R_1)\;\Rightarrow\;\;\phi(R_1-t,R_1)>\phi(R_2-t,R_2). \end{eqnarray} Note that $\xi\in B_{\varsigma,r}$ implies $\varphi(\xi)\leq \bar{q}\phi(r,r+\frac{\varrho}{2})$. Therefore for $\xi\in B_{\varsigma,r}\cap A_{\bar{q}}$ we have \begin{eqnarray}\label{diff-potential}\\\nonumber \hskip.5cm{\bf e}_{l+\frac{\lambda}{2}}(\bar{q}\nu^w) -{\bf e}_{l+\frac{\lambda}{2}}(\varphi\nu^w) -D_q{\bf e}_{l+\frac{\lambda}{2}}(\varphi\nu^w)(\bar{q}-\varphi)\hskip5cm\\\nonumber =\int_\varphi^{\bar{q}}(D_q{\bf e}_{l+\frac{\lambda}{2}}(s\nu^w) -D_q{\bf e}_{l+\frac{\lambda}{2}}(\varphi\nu^w))ds\hskip4.5cm\\\nonumber \hskip2cm\geq c^2\int_\varphi^{\bar{q}}(s-\varphi)ds=\frac{1}{2}c^2(\bar{q}-\varphi)^2\geq \frac{1}{2}c^2\bar{q}^2(1-\phi(r,r+\frac{\varrho}{2}))^2, \end{eqnarray} where we have also used (\ref{iota-properties})$_1$. The corollary follows from this inequality, from (\ref{quantitative-estimate}) and from the fact that, by (\ref{phi-l}), the last expression in (\ref{diff-potential}) is increasing with $r$. Therefore, for $r\geq r_0$, for some $r_0>0$, we can assume \begin{eqnarray} c_1=\frac{1}{2}c^2\bar{q}^2(1 -\phi(r_0,r_0+\frac{\varrho}{2}))^2. \end{eqnarray} \end{proof} \subsection{Conclusion of the proof of Theorem \ref{main}}\label{proof-main} Let $u$ as in Theorem \ref{main} and $l_0,\,q^\circ$ as in Lemma \ref{strict-minimizer} and assume that $\varsigma$ is such that \begin{eqnarray} \| u(\cdot,\varsigma)-\bar{u}\|_l\geq q^\circ, \end{eqnarray} for some $l\geq l_0$. Then $u\in C_S^{0,1}(\overline{\Omega},\mathbb{R}^m)$ implies that, there is $r_0>0$ independent from $l\geq l_0$ such that, \begin{eqnarray}\label{sigma-0} \| u(\cdot,\xi)-\bar{u}\|_l\geq \bar{q},\, \text{ for } \vert \xi-\varsigma\vert\leq r_0. \end{eqnarray} Let $j_0\geq 0,$ be minimum value of $j$ that violated the inequality \begin{eqnarray}\label{inequality} c_1\frac{r_0^{n-1}}{2}(1+\frac{c_1}{C_1})^j\leq C_1((r_0+(j+1)\varrho)^{n-1}-(r_0+j\varrho)^{n-1}), \end{eqnarray} where $c_1 \text{ and } C_2$ are the constants in Corollary \ref{corollary} and Lemma \ref{lemma-2}. Let $l^\circ\geq l_0$ be fixed so that \begin{eqnarray}\label{l-0-sufficiently-large} C_0(r_0+j_0\varrho)^{n-1}e^{-k l^\circ}\leq c_1\theta_{n-1}\frac{r_0^{n-1}}{2}, \end{eqnarray} where $C_0$ is defined in Lemma \ref{lemma-1} and $\theta_n$ is the measure of the unit ball in $\mathbb{R}^n$, \begin{proposition}\label{l-2-bound} Let $\lambda, \varrho, \bar{q}\in(0,q^\circ) \text{ and } l^\circ\geq l_0$ fixed as before and let $r^\circ=r_0+j_0\varrho$ where $j_0\geq 0$ is the minimum value of $j$ that violates (\ref{inequality}). Assume $l\geq l^\circ$ and assume that $\mathcal{C}_{l+\lambda}^{r^\circ+2\varrho}(\varsigma)\subset\Omega$ satisfies \begin{eqnarray} d(\mathcal{C}_{l+\lambda}^{r^\circ+2\varrho}(\varsigma),\partial\Omega)\geq l+\lambda. \end{eqnarray} Then \begin{eqnarray} q^u(\varsigma)=\| u(\cdot,\varsigma)-\bar{u}\|_{l+\frac{\lambda}{2}}< q^\circ. \end{eqnarray} \end{proposition} \begin{proof} Suppose instead that \begin{eqnarray} \| u(\cdot,\varsigma)-\bar{u}\|_{l+\frac{\lambda}{2}} \geq q^\circ, \end{eqnarray} and set \begin{eqnarray}\label{sigma-0-definition} \sigma_0:=\theta_{n-1}\frac{r_0^{n-1}}{2}. 
\end{eqnarray} Then $l^\circ\geq l_0$ and (\ref{sigma-0})) imply \begin{eqnarray} \mathcal{H}^{n-1}(A_{\bar{q}}\cap B_{\varsigma,r_0})\geq 2\sigma_0. \end{eqnarray} For each $0\leq j\leq j_0$ let $r_j:=r_0+j\varrho$ and let $v_j,\,w_j,\,w_j^{\bar{q}} \text{ and } \omega_j$ the maps $v,\,w,\,w^{\bar{q}} \text{ and } \omega$ defined in Lemma \ref{lemma-1},\,Lemma \ref{lemma-2},\, Lemma \ref{lemma-w-q} and Lemma \ref{quantitative-estimate0} with $l\geq l^\circ \text{ and } r=r_j$. Then from these Lemmas and Corollary \ref{corollary} we have \begin{eqnarray} \left.\begin{array}{l} J(u)_{\mathcal{C}_{l+\lambda}^{r_j^\circ+2\varrho}(\varsigma)} -J(v_j)_{\mathcal{C}_{l+\lambda}^{r_j^\circ+2\varrho}(\varsigma)}\geq-C_0r_j^{n-1}e^{-k l^\circ},\\\\ J(v_j)_{\mathcal{C}_{l+\lambda}^{r_j^\circ+2\varrho}(\varsigma)} -J(w_j)_{\mathcal{C}_{l+\lambda}^{r_j^\circ+2\varrho}(\varsigma)}\geq-C_1\mathcal{H}^{n-1}(A_{\bar{q}}\cap (\overline{B}_{\varsigma,r_{j+1}}\setminus B_{\varsigma,r_j})),\\\\ J(w_j)_{\mathcal{C}_{l+\lambda}^{r_j^\circ+2\varrho}(\varsigma)} -J(w_j^{\bar{q}})_{\mathcal{C}_{l+\lambda}^{r_j^\circ+2\varrho}(\varsigma)}\geq 0,\\\\ J(w_j^{\bar{q}})_{\mathcal{C}_{l+\lambda}^{r_j^\circ+2\varrho}(\varsigma)} -J(\omega_j)_{\mathcal{C}_{l+\lambda}^{r_j^\circ+2\varrho}(\varsigma)}\geq c_1\mathcal{H}^{n-1}(A_{\bar{q}}\cap \overline{B}_{\varsigma,r_j}). \end{array}\right. \end{eqnarray} From this and the minimality of $u$ it follows \begin{eqnarray}\label{inequality-1} \hskip1.5cm 0\geq -C_0r_j^{n-1}e^{-k l^\circ}-C_1\mathcal{H}^{n-1}(A_{\bar{q}}\cap (\overline{B}_{\varsigma,r_{j+1}}\setminus B_{\varsigma,r_j}))+ c_1\mathcal{H}^{n-1}(A_{\bar{q}}\cap \overline{B}_{\varsigma,r_j}). \end{eqnarray} Define \begin{eqnarray} \sigma_j:=\mathcal{H}^{n-1}(A_{\bar{q}}\cap B_{\varsigma,r_j})-\sigma_0, \text{ for } j\geq 1. \end{eqnarray} If $j_0=0$ the inequality (\ref{inequality-1}), using also (\ref{l-0-sufficiently-large}), implies \begin{eqnarray}\label{inequality-2} 0\geq -c_1\sigma_0-C_1\sigma_1+2C_1\sigma_0+2c_1\sigma_0\geq c_1\sigma_0-C_1(\sigma_1-\sigma_0). \end{eqnarray} If $j_0>0$ in a similar way we get \begin{eqnarray}\label{inequality-3} 0\geq -c_1\sigma_0-C_1(\sigma_{j-1}-\sigma_j)+c_1(\sigma_j+\sigma_0)= c_1\sigma_j-C_1(\sigma_{j+1}-\sigma_j). \end{eqnarray} From (\ref{inequality-2}) and (\ref{inequality-3}) it follows \begin{eqnarray} \sigma_j\geq (1+\frac{c_1}{C_1})^j\sigma_0, \end{eqnarray} and therefore, using also (\ref{sigma-0-definition}) \begin{eqnarray}\label{inequality-4} c_1(1+\frac{c_1}{C_1})^j\theta_{n-1}\frac{r_0^{n-1}}{2}\leq C_1(\sigma_{j+1}-\sigma{j})\leq C_1\theta_{n-1}(r_{j+1}^{n-1}-r_j^{n-1}). \end{eqnarray} This inequality is equivalent to (\ref{inequality}). It follows that, on the basis of the definition of $j_0$, putting $j=j_0$ in (\ref{inequality-4}) leads to a contradiction with the minimality of $u$. \end{proof} \subsection{The exponential estimate}\label{sec-exp} \begin{lemma}\label{lemma-case-2} Assume $r>r^\circ+2\varrho$ and $l>l^\circ+\lambda$ and assume that $\mathcal{C}_l^r(\varsigma_0)\subset\Omega$ satisfies \begin{eqnarray} d(\mathcal{C}_l^r(\varsigma_0),\partial\Omega)\geq l. \end{eqnarray} Then there are constants $K_1 \text{ and } k_1>0$ independent of $r>r^\circ+2\varrho$ and $l>l^\circ+\lambda$ such that \begin{eqnarray} \| u(\cdot,\varsigma_0)-\bar{u}\|_l^\frac{1}{2} \leq K_1e^{-k_1r}. 
\end{eqnarray} \end{lemma} \begin{proof} From $r>r^\circ+2\varrho$ it follows that $\vert\varsigma-\varsigma_0\vert\leq r-(r^\circ+2\varrho)$ implies \begin{eqnarray} d(\mathcal{C}_l^{r^\circ+2\varrho}(\varsigma),\partial\Omega)\geq l. \end{eqnarray} Therefore we can invoke Proposition \ref{l-2-bound} to conclude that \begin{eqnarray} \| u(\cdot,\varsigma)-\bar{u}\|\leq \bar{q}, \text{ for } \vert\varsigma-\varsigma_0\vert\leq r-(r^\circ+2\varrho). \end{eqnarray} Let $\varphi:B_{\varsigma_0,r -(r^\circ+2\varrho)}\rightarrow\mathbb{R}$ the solution of \begin{eqnarray}\label{phi-comparison-1} \left\{\begin{array}{l} \Delta\varphi=c^2\varphi, \text{ in } B_{\varsigma_0,r -(r^\circ+2\varrho)}\\\\ \varphi=\bar{q}, \text{ on } \partial B_{\varsigma_0,r -(r^\circ+2\varrho)}. \end{array}\right. \end{eqnarray} Then we have \begin{eqnarray}\label{q-omega-min-phi-1} \| u(\cdot,\varsigma)-\bar{u}\|\leq \varphi(\varsigma), \text{ for } \varsigma\in B_{\varsigma_0,r -(r^\circ+2\varrho)}. \end{eqnarray} This follows by the same argument leading to (\ref{q-omega-min-phi}) in the proof of Lemma \ref{quantitative-estimate0}. Indeed, if (\ref{q-omega-min-phi-1}) does not hold, then by proceeding as in the proof of Lemma \ref{quantitative-estimate0} we can construct a competing map $\omega$ that satisfies (\ref{q-omega-min-phi-1}) and has less energy than $u$ contradicting its minimality property. In particular (\ref{q-omega-min-phi-1}) implies \begin{eqnarray}\label{q-omega-min-phi-2} \|u(\cdot,\varsigma_0)-\bar{u}\|\leq \varphi(\varsigma_0). \end{eqnarray} On the other hand it can be shown, see Lemma 2.4 in \cite{flp}, that there is a constant $h_0>0$ such that \[\phi(0,r)\leq e^{-h_0 r};\;\text{ for }\;r\geq r_0\] From this and (\ref{q-omega-min-phi-2}) we get \begin{eqnarray} \varphi(\varsigma_0)=\bar{q}\phi(0,r-(r^\circ+2\varrho))\leq \bar{q}e^{h_0(r^\circ+2\varrho)}e^{-h_0r}=K_1e^{-k_1r}. \end{eqnarray} This concludes the proof with $K_1=\bar{q}e^{h_0(r^\circ+2\varrho)}$ and $k_1=h_0$. \end{proof} We are now in the position of proving the exponential estimate (i) in Theorem \ref{main}. We distinguish two cases: \begin{description} \item[Case $1$] $x=(s,\xi)\in\Omega \text{ satisfies } s>\frac{1}{2}d(x,\partial\Omega)$. In this case, taking also into account that $\Omega$ satisfies $\bf{(i)}$, we have \begin{eqnarray} d(x,\partial\Omega^+)\geq \frac{1}{2}d(x,\partial\Omega). \end{eqnarray} From this and Theorem \ref{theorem-1} it follows \begin{eqnarray}\label{case-1-estimate} \vert u(s,\xi)-\bar{u}(s)\vert\leq \vert u(s,\xi)-a\vert+\vert \bar{u}(s)-a\vert\hskip4.5cm\\\nonumber\hskip2.5cm\leq K_0e^{-k_0d(x,\partial\Omega^+)}+\bar{K}e^{-\bar{k}s} \leq(K_0+\bar{K})e^{-\frac{1}{2}\min\{k_0,\bar{k}\}d(x,\partial\Omega)}, \end{eqnarray} where we have also used \begin{eqnarray} \vert\bar{u}(s)-a\vert\leq\bar{K}e^{-\bar{k}s}. \end{eqnarray} \item[Case $2$] $x=(s,\xi)\in\Omega \text{ satisfies } 0\leq s\leq\frac{1}{2}d(x,\partial\Omega)$. In this case, elementary geometric considerations and the assumption $\bf{(i)}$ on $\Omega$ imply the existence of $\alpha\in(0,1)$ ($\alpha=\frac{1}{4}$ will do) such that \begin{eqnarray}\label{case-2-distance} \mathcal{C}_{s+\alpha d(x)}^{\alpha d(x)}(\xi)&\subset&\Omega\hskip.8cm \text{ and }\\\nonumber d(\mathcal{C}_{s+\alpha d(x)}^{\alpha d(x)}(\xi),\partial\Omega)&\geq& s+\alpha d(x), \end{eqnarray} where we have set $d(x):=d(x,\partial\Omega)$. 
From (\ref{case-2-distance}) and Lemma \ref{lemma-case-2} it follows \begin{eqnarray} \| u(\cdot,\xi)-\bar{u}\|_l \leq K_1e^{-k_1\alpha d(x)},\, \text{ for } d(x)>r^\circ+2\varrho. \end{eqnarray} This and Lemma \ref{l-infinity-less-l-2} imply, recalling $d(x)=d(x,\partial\Omega)$, \begin{eqnarray}\label{case-2-exponential} \vert u(s,\xi)-\bar{u}(s)\vert\leq K_1^\frac{2}{3}e^{-\frac{2}{3}k_1\alpha d(x,\partial\Omega)}. \end{eqnarray} \end{description} The exponential estimate follows from (\ref{case-2-exponential}) and (\ref{case-2-exponential}). \subsection{The proof of Theorems \ref{main-1} and \ref{main-2}}\label{main-final} If $\Omega=\mathbb{R}^n$ the proof of Theorem \ref{main} simplifies since we can avoid the technicalities needed in the case that $\Omega$ is bounded in the $s=x_1$ direction and assume $l=+\infty$. The possibility of working with $l=+\infty$ is based on the following lemma \begin{lemma}\label{l-infty} Let $u:\mathbb{R}^n\rightarrow\mathbb{R}^m$ the symmetric minimizer in Theorem \ref{theorem-1}. Given a smooth open set $O\subset\mathbb{R}^{n-1}$ let $\mathbb{R}\times O$ the cylinder $\mathbb{R}\times O=\{(s,\xi): s\in\mathbb{R},\;\xi\in\ O\}$. Then \begin{equation}\label{min-infty} J_{\mathbb{R}\times O}(u)=\min_{v\in u+W_{0 S}^{1,2}(\mathbb{R}\times O;\mathbb{R}^m)}J_{\mathbb{R}\times O}(v), \end{equation} where $W_{0 S}^{1,2}(\mathbb{R}\times O;\mathbb{R}^m)$ is the subset of $W_S^{1,2}(\mathbb{R}\times O;\mathbb{R}^m)$ of the maps that satisfy $v=0$ on $\partial\mathbb{R}\times O$. \end{lemma} \begin{proof} Assume there are $\eta>0$ and $v\in W_{0 S}^{1,2}(\mathbb{R}\times O;\mathbb{R}^m)$ such that \begin{equation}\label{cont-assum} J_{\mathbb{R}\times O}(u)-J_{\mathbb{R}\times O}(v)\geq\eta. \end{equation} For each $l>0$ define $\tilde{v}\in W_{0 S}^{1,2}(\mathbb{R}\times O;\mathbb{R}^m)$ by \[ \tilde{v}=\left\{\begin{array}{l} v,\quad\text{ for }\;s\in[0,l],\;\xi\in O,\\ (1+l-s)v+(s-l)u, \;s\in[l,l+1],\;\xi\in O,\\ u,\quad\text{ for } \;s\in[l,+\infty),\;\xi\in O. \end{array}\right.\] The minimality of $u$ implies \begin{equation}\label{before-limit} 0\geq J_{[-l-1,l+1]\times O}(u)-J_{[-l-1,l+1]\times O}(\tilde{v})=J_{[-l-1,l+1]\times O}(u)-J_{[-l,l]\times O}(v)+ \mathrm{O}(e^{-k l}), \end{equation} where we have also used the fact that both $u$ and $v$ belong to $W_S^{1,2}(\mathbb{R}\times O;\mathbb{R}^m)$. Taking the limit for $l\rightarrow +\infty$ in (\ref{before-limit}) yields \[0\geq J_{\mathbb{R}\times O}(u)-J_{\mathbb{R}\times O}(v)\] in contradiction with (\ref{cont-assum}). \end{proof} Once we know that $u$ satisfies (\ref{min-infty}) the same arguments leading to Proposition \ref{l-2-bound} imply the existence of $r^\circ>0$ such that \begin{equation}\label{stay-below-q0} \mathbb{R}\times B_{r^\circ}(\xi)\subset\mathbb{R}^n\;\Rightarrow\;\|u(\cdot,\xi)-\bar{u}\|_{\infty}<q^\circ, \end{equation} where $B_{r^\circ}(\xi)\subset\mathbb{R}^{n-1}$ is the ball of center $\xi$ and radius $r^\circ$. Since the condition $\mathbb{R}\times B_{r^\circ}(\xi)\subset\mathbb{R}^n$ is trivially satisfied for each $\xi\in\mathbb{R}^{n-1}$ we have \[\|u(\cdot,\xi)-\bar{u}\|_{\infty}<q^\circ,\;\text{ for every }\;\xi\in\mathbb{R}^{n-1}.\] To conclude the proof we observe that everything has been said concerning $q^\circ$ can be repeated verbatim for each $q\in(0,q^\circ)$. It follows that for each $q\in(0,q^\circ]$ there is a $r(q)>0$ such that (\ref{stay-below-q0}) holds with $q$ in place of $q^\circ$ and $r(q)$ in place of $r^\circ$. 
Therefore we have \[\|u(\cdot,\xi)-\bar{u}\|_{\infty}<q,\;\text{ for every }\;\xi\in\mathbb{R}^{n-1}.\] Since this holds for each $q\in(0,q^\circ]$ we conclude \[u(\cdot,\xi)=\bar{u},\;\text{ for every }\;\xi\in\mathbb{R}^{n-1}\] which complete the proof of Theorem \ref{main-1}. To prove Theorem \ref{main-2} we note that, if $\Omega=\{x\in\mathbb{R}^n: x_n>0\}$, then arguing as in the proof of Theorem \ref{main-1} above, we get that, given $q>0$ there exists $l_q>0$ such that \[\xi_n> l_q,\quad\Rightarrow\quad\|u(\cdot,\xi)-\bar{u}\|_{L^\infty}<q.\] From this, the boundary condition \[\xi_n=0,\quad\Rightarrow\quad\|u(\cdot,\xi)-\bar{u}\|_{L^\infty}=0,\] and the reasoning in the proof of Lemma \ref{lemma-w-q} it follows \[\|u(\cdot,\xi)-\bar{u}\|_{L^\infty}<q,\;\text{ for each }\;\xi_n\geq 0,\,q>0.\] The proof of Theorem \ref{main-2} is complete. \section{The proof of Theorem \ref{triple}}\label{sec-triple} From an abstract point of view the proof of Theorem \ref{triple} is essentially the same as the proof of Theorem \ref{main-1} after quantities like $q^u$ and $\nu^u$ are reinterpreted and properly redefined in the context of maps equivariant with respect to the group $G$ of the equilateral triangle. We divide the proof in steps pointing out the correspondence with the corresponding steps in the proof of Theorem \ref{main-1}. We write $x\in\mathbb{R}^n$ in the form $x=(s,\xi)$ with $s=(s_1,s_2)\in\mathbb{R}^2$ and $\xi=(x_2,\ldots,x_n)\in\mathbb{R}^{n-2}$. \begin{description} \item[Step 1] \end{description} From assumption (\ref{stay-away}) in Theorem \ref{triple} and equivariance it follows \begin{equation}\label{stay-away-1} \begin{split} & \vert u(x)-a\vert\geq\delta,\;\vert u(x)-g_-a\vert>\delta,\;\text{ for }\;x\in g_+D,\;d(x,\partial g_+D)\geq d_0,\\ & \vert u(x)-a\vert\geq\delta,\;\vert u(x)-g_+a\vert>\delta,\;\text{ for }\;x\in g_-D,\;d(x,\partial g_-D)\geq d_0. \end{split} \end{equation} From this and assumptions ${\bf H}^\prime_3$ and ${\bf H}^\prime_4$ it follows that we can apply Theorem \ref{main} with $\Omega=\mathbb{R}^n\setminus\overline{D}$ and $a_\pm=g_\pm a$ to conclude that there exist $k, K>0$ such that \begin{equation}\label{exp-est-t} \vert u(s_1,s_2,\xi)-\bar{u}(s_2)\vert\leq K e^{-k d(x,\partial(\mathbb{R}^n\setminus\overline{D}))},\;x\in\mathbb{R}^n\setminus\overline{D}. \end{equation} In exactly the same way we establish that \begin{equation}\label{exp-est-2} \vert \tilde{u}(s_1,s_2)-\bar{u}(s_2)\vert\leq K e^{-k d(s,\partial(\mathbb{R}^2\setminus\overline{D_2}))},\;s\in\mathbb{R}^2\setminus\overline{D_2}, \end{equation} where $D_2\subset\mathbb{R}^2=\{s:\vert s_2\vert<\sqrt{3}s_1,\;s_1>0\}$. From (\ref{exp-est-t}), (\ref{exp-est-2}) and equivariance it follows \begin{equation}\label{exp-est-3} \vert u(s,\xi)- \tilde{u}(s)\vert\leq K e^{-k\vert s\vert},\;\text{ for }\;s\in\mathbb{R}^2,\;\xi\in\mathbb{R}^{n-2}. 
\end{equation} \begin{description} \item[Step 2] \end{description} Let $C_G^{0,1}(\mathbb{R}^n;\mathbb{R}^m)$ the set of lipshizt maps $v:\mathbb{R}^n\rightarrow\mathbb{R}^m$ which are equivariant under $G$ and satisfy \begin{equation}\label{exp-est-4} \begin{split} &\vert v(s,\xi)- \tilde{u}(s)\vert\leq K e^{-k\vert s\vert},\\ &\vert\nabla_s v(s,\xi)-\nabla_s\tilde{u}(s)\vert\leq K e^{-k\vert s\vert},\\ &\vert\nabla_\xi v(s,\xi)\vert\leq K e^{-k\vert s\vert}, \end{split}\;\text{ for }\;s\in\mathbb{R}^2,\;\xi\in\mathbb{R}^{n-2}, \end{equation} We remark that from (\ref{exp-est-3}) we have $u\in C_G^{0,1}(\mathbb{R}^n;\mathbb{R}^m)$ for the minimizer $u$ in Theorem \ref{triple}. If $O\subset\mathbb{R}^{n-2}$ is an open bounded set with a lipshitz boundary we let $C_G^{0,1}(\mathbb{R}^2\times O;\mathbb{R}^m)$ the set of equivariant maps that satisfy (\ref{exp-est-4}) for $\xi\in O$. We denote $C_{0,G}^{0,1}(\mathbb{R}^2\times O;\mathbb{R}^m)$ the subset of $C_G^{0,1}(\mathbb{R}^2\times O;\mathbb{R}^m)$ of the maps the vanish on the boundary of $\mathbb{R}^2\times O$. The spaces $W_G^{1,2}(\mathbb{R}^2\times O;\mathbb{R}^m)$ and $W_{0,G}^{1,2}(\mathbb{R}^2\times O;\mathbb{R}^m)$ are defined in the obvious way. The exponential estimates in the definition of these function spaces and the same argument in the proof of Lemma \ref{l-infty} imply \begin{lemma}\label{l-infty-t} Let $u:\mathbb{R}^n\rightarrow\mathbb{R}^m$ the $G$-equivariant minimizer in Theorem \ref{triple}. Given an open bounded lipshitz set $O\subset\mathbb{R}^{n-2}$ we have \begin{equation}\label{min-infty-t} J_{\mathbb{R}^2\times O}(u)=\min_{v\in u+W_{0,G}^{1,2}(\mathbb{R}^2\times O;\mathbb{R}^m)}J_{\mathbb{R}^2\times O}(v), \end{equation} \end{lemma} \begin{description} \item[Step 3] \end{description} In analogy with the definition of ${\bf e}(v)$ in Lemma \ref{strict-minimizer}, for $v\in W_G^{1,2}(\mathbb{R}^n;\mathbb{R}^m)$, we define the {\it effective} potential ${\bf E}(v)$ for the case at hand. We set \begin{equation}\label{energy-t} {\bf E}(v)=\frac{1}{2}(\langle\nabla_s\tilde{u}+\nabla_s v,\nabla_s\tilde{u}+\nabla_s v\rangle-\langle\nabla_s\tilde{u},\nabla_s\tilde{u}\rangle)+\int_{\mathbb{R}^2}(W(\tilde{u}+v)-W(\tilde{u}))ds,\;\xi\in\mathbb{R}^{n-2}. \end{equation} With this definition we can represent the energy $J_{\mathbb{R}^2\times O}(v)$ of a generic map $v\in W_G^{1,2}(\mathbb{R}^2\times O;\mathbb{R}^m)$ in the {\it polar} form \begin{equation}\label{polar-form-t} J_{\mathbb{R}^2\times O}(v)=\int_O\frac{1}{2}\big((\vert\nabla_\xi q^v\vert^2+(q^v)^2\sum_j\langle\nu_{\xi_j}^v,\nu_{\xi_j}^v\rangle)+ {\bf E}(q^v\nu^v)\big)d\xi, \end{equation} where $\langle,\rangle$ denotes the standard inner product in $L^2(\mathbb{R}^2;\mathbb{R}^m)$ and $q^v$ and $\nu^v$ are defined by \begin{equation}\label{qn-nuv-t} \begin{split} & q^v(\xi)=\| v(\cdot,\xi)-\tilde{u}\|_{L^2(\mathbb{R}^2;\mathbb{R}^m)},\;\text{ for }\;\xi\in O\\ &\nu^v(s,\xi)=\frac{v(s,\xi)-\tilde{u}(s)}{q^v(\xi)},\;\text{ if }\;q^v(\xi)>0. \end{split} \end{equation} From and assumptions ${\bf H}^\prime_5$ and ${\bf H}^\prime_5$, arguing exactly as in the proof of Lemma \ref{strict-minimizer} we prove \begin{lemma}\label{strict-minimizer-t} ${\bf H}^\prime_5$ and ${\bf H}^\prime_5$. 
Then there exist $q^\circ >0 \text{ and } c>0$ such that \begin{eqnarray}\label{iota-properties-t} \left\{\begin{array}{l} D_{qq}{\bf E}(q\nu)\geq c^2,\quad \text{ for } q\in[0,q^\circ]\cap[0,q_\nu],\;\nu\in\mathbb{S},\\\\ {\bf E}(q\nu)\geq{\bf E}(q^\circ\nu),\;\, \text{ for } q^\circ\leq q\leq q_\nu,\;\nu\in\mathbb{S},\\\\ {\bf E}(q\nu)\geq \tilde{{\bf E}}(p,q,\nu):={\bf E}(p\nu)+D_q{\bf E}(p\nu)(q-p) ,\\\quad\hskip4cm \text{ for } 0\leq p<q\leq q_\nu\leq q^\circ,\;\nu\in\mathbb{S},\\\\ D_p\tilde{{\bf E}}(p,q,\nu)\geq 0 ,\quad \text{ for } 0\leq p<q\leq q_\nu\leq q^\circ,\;\nu\in\mathbb{S}. \end{array}\right. \end{eqnarray} \end{lemma} \begin{description} \item[Step 4] \end{description} Based on this lemma and on the polar representation of the energy (\ref{polar-form-t}) we can follow step by step the arguments in Sec. 2 to establish the analogous of Proposition \ref{l-2-bound}. Actually the argument simplifies since by Lemma \ref{l-infty-t} we can work directly in $\mathbb{R}^2\times O$ rather then in bounded cylinders as in Sec. 2. For example the analogous of Lemma \ref{lemma-1} is not needed. In conclusion, by arguing as in Sec .2, we prove that, given $q\in(0,q^\circ]$, there is $r(q)>0$ such that \begin{equation}\label{t-t} \mathbb{R}^2\times B_{r(q)}(\xi)\subset\mathbb{R}^n\;\;\Rightarrow\;\;q^u(\xi)=\|u(\cdot,\xi)\tilde{u}\|_{L^2(\mathbb{R}^2;\mathbb{R}^m)}<q, \end{equation} where $B_{r(q)}(\xi)\subset\mathbb{R}^{n-2}$ is the ball of center $\xi$ and radius $r(q)$. Since the condition on the l.h.s. of (\ref{t-t}) is trivially satisfied for all $\xi\in\mathbb{R}^{n-2}$ and for all $q\in(0,q^\circ]$ we have \[u(s,\xi)=\tilde{u}(s),\;\text{ for }\;s\in\mathbb{R}^2,\;\xi\in\mathbb{R}^{n-2}\] which concludes the proof. \vskip.2cm Department of Mathematics, University of Athens, Panepistemiopolis, 15784 Athens, Greece; e-mail: {\texttt{[email protected]}} \vskip.2cm \noindent Universit\`a degli Studi dell'Aquila, Via Vetoio, 67010 Coppito, L'Aquila, Italy; e-mail:{\texttt{[email protected]}} \end{document}
\begin{document} \title{On Antipodes Of Hom-Hopf algebras} \author {Mohammad Hassanzadeh} \date{University of Windsor\\ Windsor, Ontario, Canada\\ [email protected] } \maketitle \begin{abstract} In the recent definition of Hom-Hopf algebras the antipode $S$ is the relative Hom-inverse of the identity map with respect to the convolution product. We observe that some fundamental properties of the antipode of Hopf algebras and Hom-Hopf algebras, with the original definition, do not hold generally in the new setting. We show that the antipode is a relative Hom-anti algebra and a relative anti-coalgebra morphism. It is also relative Hom-unital, and relative Hom-counital. Furthermore if the twisting maps of multiplications and comultiplications are invertible then $S$ is an anti-algebra and an anti-coalgebra map. We show that any Hom-bialgebra map between two Hom-Hopf algebras is a relative Hom-morphism of Hom-Hopf alegbras. Specially if the corresponding twisting maps are all invertible then it is a Hom-Hopf algebra map. If the Hom-Hopf algebra is commutative or cocommutative we observe that $S^2$ is equal to the identity map in some sense. At the end we study the images of primitive and group-like elements under the antipode. \end{abstract} \section{ Introduction} The examples of Hom-Lie algebras were first appeared in $q$-deformations of algebras of vector fields, such as Witt and Virasoro algebras \cite{as}, \cite{ckl}, \cite{cz}. The concept of Hom-Lie algebras generalizes the one for Lie algebras where the Jocobi identity is twisted by a homomorphism \cite{hls}, \cite{ls}. Hom-associative algebras were introduced and studied in \cite{ms1}. Moreover Hom-coalgebras and Hom-bialgebras were studied in \cite{ms2}, \cite{ms3}, \cite{ya2}, \cite{ya3}, \cite{ya4}. In the last years, many classical algebraic concepts have been extended to the framework of Hom-structures. For examples see \cite{hls}, \cite{gw}, \cite{pss}, \cite{hss}, \cite{aem}, \cite{gmmp}, \cite{gr}, \cite{cq}, \cite{cs}, \cite{zz}. The Hom-Hopf algebras first introduced in \cite{ms2} and \cite{ms3}. In these works they defined a Hom-Hopf algebra $H$ to be a Hom-bialgebra $(H, \mu, \alpha, \eta, \Delta, \beta, \varepsilon)$, endowed with a map $S: H\longrightarrow H$, where it is the inverse of the identity map $\mathop{\rm Id}\nolimits_H$ with respect to the convolution product $\star$, i.e, $$ S \star \mathop{\rm Id}\nolimits = \mathop{\rm Id}\nolimits \star S= \eta\circ \varepsilon.$$ This definition of antipode is the same as the one for Hopf algebras. The universal enveloping algebra of a Hom-Lie algebra introduced in \cite{ya4}. It has been shown that it is a Hom-bialgebra. However it is not a Hom-Hopf algebra in the sense of \cite{ms2}, since it is shown in \cite{lmt} that the antipode is not an inverse of the identity map with respect to the the convolution product. This was a motivation to change definition of the antipode such that a Hom-Hopf algebra is a Hom-bialgebra which satisfies a weakened condition. For every $h\in H$ there exists $k\in \mathbb{N}$ satisfying the weakened condition $$ \alpha^k(S \star \mathop{\rm Id}\nolimits )(h)= \alpha^k(\mathop{\rm Id}\nolimits \star S)(h)= \eta\circ \varepsilon(h). $$ This naturally suggests to change the definition of invertible elements of a Hom-algebra $A$ as being elements $a\in A$ such that there exists $b\in A$ and $k \in \mathbb{N}$ where $ \alpha^k( ab) = \alpha^k(ba) =1_A.$ This means the antipode is the relative Hom-inverse of the identity map. 
In this paper we study this recent notion of Hom-Hopf algebras. More precisely, by Definition \ref{Hom-Hopf} a Hom-Hopf algebra in the new setting is a Hom-bialgebra endowed with a unital, counital, anti-algebra and anti-coalgebra map $S: H\longrightarrow H$ which is a relative Hom-inverse of the identity map $\mathop{\rm Id}\nolimits_H$ and which commutes with $\alpha$. The Hom-Hopf algebras in Examples \ref{Sweedler} and \ref{2-dimensional Hopf} satisfy the conditions of both definitions. The sets of group-like elements and primitive elements are important in the study of Hopf type objects. The group-like elements give a relation between Hom-Hopf algebras and Hom-groups, while the primitive elements connect Hom-Hopf algebras to Hom-Lie algebras. The authors in \cite{lmt} showed that the set of group-like elements in a Hom-Hopf algebra is a Hom-group where the inverse elements are given by the antipode. In Example \ref{polynomial}, we introduce a Hom-bialgebra containing a group-like element which does not have any inverse. Therefore it does not have any antipode or Hom-Hopf algebra structure. The main aim of this paper is to find out, if one removes the conditions of unitality, counitality, being an anti-algebra map, being an anti-coalgebra map, and $S\circ \alpha=\alpha\circ S$ from Definition \ref{Hom-Hopf} and only keeps the relative Hom-invertibility condition on $S$, how much of these properties can be recovered and what the other properties of the antipode are. To investigate this, we consider a Hom-bialgebra endowed with a map $S$ which is a relative Hom-inverse of the identity map. First we establish, in Proposition \ref{important}, the relation between relative Hom-inverses with respect to the convolution product. In Propositions \ref{relative anti algebra} and \ref{relative anti coalgebra}, we show that the antipode is a relative Hom-anti-algebra and a relative Hom-anti-coalgebra morphism. It is also shown in Propositions \ref{hom unitality prop} and \ref{relative counitality} that the antipode is relative Hom-unital and relative Hom-counital. Furthermore, if the twisting maps $\alpha$ and $\beta$ are invertible then $S$ is an anti-algebra and an anti-coalgebra map. Then in Proposition \ref{Hopf map} we prove that any Hom-bialgebra map between two Hom-Hopf algebras is a relative Hom-morphism of Hom-Hopf algebras. By Corollary \ref{hopf morphism}, if the corresponding twisting maps are all invertible then it is a Hom-Hopf algebra map. Furthermore, we observe that if $\alpha=\beta$ then $S$ commutes with powers of $\alpha$. Later we study $S^2$ for commutative and cocommutative Hom-Hopf algebras. In these cases we prove that $S^2$ is equal to the identity map in some sense. If $\alpha$ and $\beta$ are invertible then $S^2=\mathop{\rm Id}\nolimits$. At the end we study the images of primitive and group-like elements under the antipode. \textbf{Notations}: In this paper all (Hom-)algebras, (Hom-)coalgebras, (Hom-)bialgebras and (Hom-)Hopf algebras are defined over a field $\mathbb{K}$. All tensor products $\otimes$ are over $\mathbb{K}$. We denote the set of natural numbers by $\mathbb{N}$. \tableofcontents \section{ Hom-Hopf algebras} In this section we recall the basics of Hom-algebras, Hom-coalgebras, Hom-bialgebras and Hom-Hopf algebras. To understand these structures we introduce some examples.
By \cite{ms1}, a Hom-associative algebra $A$ over a field $\mathbb{K}$ is a $\mathbb{K}$-vector space with a bilinear map $m: A\otimes A\longrightarrow A$, called multiplication, and a linear homomorphism $\alpha: A\longrightarrow A$ satisfying the Hom-associativity condition $$m \circ (m \otimes \alpha)= m \circ ( \alpha \otimes m).$$ In terms of elements $a,b,c\in A$, this can be written as $\alpha(a)(bc) = (ab)\alpha(c)$. The Hom-associativity property in terms of a commutative diagram is $$ \xymatrix{ A\otimes A\otimes A \ar[r]^{m\otimes \alpha} \ar[d]_{\alpha\otimes m} & A\otimes A \ar[d]^{m} \\ A \otimes A\ar[r]^{m} & A } $$ A Hom-associative algebra $A$ is called unital if there exists a linear map $\eta: \mathbb{K}\longrightarrow A$ such that $\alpha \circ \eta=\eta$, and $$m \circ (\mathop{\rm Id}\nolimits \otimes \eta)= m \circ ( \eta\otimes\mathop{\rm Id}\nolimits) =\alpha.$$ The unit element of $A$ is $\eta(1_{\mathbb{K}})=1_A$. These conditions in terms of an element $a\in A$ can be written as $\alpha(1_A)=1_A$ and $a 1_A= 1_A a =\alpha(a)$. The unitality condition in terms of a commutative diagram is $$ \xymatrix{ A \ar[r]^{\eta\otimes Id} \ar[rd]_{\alpha} & A\otimes A \ar[d]^{m} & A \ar[l]_{Id\otimes \eta} \ar[ld]^{\alpha}\\ &A} $$ In many examples $\alpha$ is an algebra map, i.e., $\alpha(xy)= \alpha(x) \alpha(y)$ for all $x, y\in A$. When $\alpha=\mathop{\rm Id}\nolimits$, we recover the definition of associative algebras. \begin{example}{\rm Let $A$ be an algebra with multiplication $m: A\otimes A\longrightarrow A$, and let $\alpha: A \longrightarrow A$ be an algebra map. We twist the multiplication of $A$ by $\alpha$ to obtain a new multiplication $m_{\alpha}(x,y)= m( \alpha(x), \alpha(y))$. Then $(A, m_{\alpha}, \alpha)$ is a Hom-algebra.} \end{example} \begin{example}\label{2d} {\rm This example is a special case of the last example in \cite{ms2}. In this example we define a $2$-dimensional Hom-algebra $A$ with a basis $B=\{ e_1, e_2\}$. We define the multiplication by $$m(e_1, e_1)=e_1, ~~~~~m(e_1, e_2)=m(e_2, e_1)=e_2, ~~~~~m(e_2, e_2)=e_2.$$ We set $\alpha(e_1)= 2e_1-e_2$ and $\alpha(e_2)=e_2$. This Hom-algebra is unital and commutative with the unit element $\eta(1)=e_1$. } \end{example} An element $x$ in a unital Hom-associative algebra $ (A, \alpha)$ is called Hom-invertible \cite{lmt} if there exist an element $x^{-1}$ and a non-negative integer $k\in \mathbb{N}$ such that $$\alpha^k(x x^{-1}) = \alpha^k(x^{-1} x)= 1.$$ The element $x^{-1}$ is called a Hom-inverse and the smallest such $k$ is the invertibility index of $x$. The Hom-inverse, if it exists, may not be unique. However, the authors in \cite{lmt} showed that the unit element $1_A$ is Hom-invertible, that the product of any two Hom-invertible elements is Hom-invertible, and that every inverse of a Hom-invertible element is Hom-invertible. For two Hom-algebras $(A,\mu ,\alpha )$ and $(A^{\prime },\mu ^{\prime },\alpha ^{\prime })$ a linear map $f:A\rightarrow A^{\prime }$ is called a Hom-algebra morphism if $$f(xy)= f(x) f(y), ~~~~~\hbox{and} ~~~ f(\alpha(x))= \alpha'(f(x)), ~~~\forall x, y\in A.$$ Now we recall the dual notion of a Hom-algebra, which is called a Hom-coalgebra \cite{ms2}, \cite{ms3}.
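Before doing so, let us illustrate the Hom-associativity condition on the basis of Example \ref{2d}; for instance, for the triples $(e_1,e_2,e_2)$ and $(e_1,e_1,e_1)$ one checks directly that $$\alpha(e_1)(e_2e_2)=(2e_1-e_2)e_2=2e_2-e_2=e_2=(e_1e_2)\alpha(e_2), \qquad \alpha(e_1)(e_1e_1)=(2e_1-e_2)e_1=2e_1-e_2=(e_1e_1)\alpha(e_1),$$ and the remaining basis triples are verified in the same way.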
A Hom-coalgebra is a triple $(C, \Delta, \beta)$, where $C$ is a $\mathbb{K}$-vector space, $\Delta: C\longrightarrow C\otimes C$ is a linear map, called comultiplication, and $\beta: C\longrightarrow C $ is a linear map, satisfying the Hom-coassociativity condition $$(\Delta\otimes \beta ) \circ \Delta = ( \beta \otimes \Delta) \circ \Delta .$$ If we use the Sweedler notation $ \Delta (c) = c^{(1)} \otimes c^{(2)}$, then the coassociativity condition can be written as $$ \beta( c^{(1)}) \otimes c^{(2)(1)} \otimes c^{(2)(2)} = c^{(1)(1)} \otimes c^{(1)(2)}\otimes \beta(c^{(2)}).$$ The coassociativity property in terms of a commutative diagram is the dual of the one for the Hom-associativity of Hom-algebras, as follows: $$ \xymatrix{ C\otimes C\otimes C & C\otimes C\ar[l]_{\Delta\otimes \beta} \\ C \otimes C \ar[u]^{\beta\otimes \Delta}& C\ar[l]_{\Delta}\ar[u]_{\Delta} } $$ A Hom-coassociative coalgebra is said to be counital if there exists a linear map $\varepsilon: C \longrightarrow \mathbb{K}$ such that $$(\mathop{\rm Id}\nolimits\otimes \varepsilon ) \circ \Delta = ( \varepsilon \otimes \mathop{\rm Id}\nolimits) \circ \Delta =\beta .$$ This means $$c^{(1)} \varepsilon (c^{(2)})= \varepsilon (c^{(1)})c^{(2)} = \beta(c).$$ Furthermore, the map $\beta$ is counital, i.e., $\varepsilon (\beta(c))= \varepsilon (c)$. The counitality condition in terms of a commutative diagram is $$ \xymatrix{ C \ar[r]^{\Delta} \ar[rd]_{\beta} & C\otimes C \ar[d]^{\mathop{\rm Id}\nolimits \otimes \varepsilon}_{\varepsilon \otimes \mathop{\rm Id}\nolimits} & C \ar[l]_{\Delta} \ar[ld]^{\beta}\\ &C} $$ Moreover, if the map $\beta$ is a coalgebra map then we have $ \Delta \circ \beta= (\beta \otimes \beta)\circ \Delta$. \begin{example}\label{2dco}{\rm This example is a special case of the last example in \cite{ms2}. The Hom-algebra introduced in Example \ref{2d} is a Hom-coalgebra by \begin{align*} &\Delta(e_1)=e_1\otimes e_1, ~~~~~~ \Delta(e_2)= e_1\otimes e_2 + e_2\otimes e_1 -2 e_2\otimes e_2\\ &\varepsilon(e_1)=1, ~~~~~~~~~~~~~~~ \varepsilon(e_2)=0. \end{align*} We set $\beta(e_1)= e_1+e_2$ and $\beta(e_2)= e_2$. } \end{example} Let $(C, \Delta, \varepsilon, \beta)$ and $(C', \Delta', \varepsilon', \beta')$ be two Hom-coalgebras. A morphism $f: C\longrightarrow C'$ is called a Hom-coalgebra map if for all $x\in C$ we have $$ f(x)^{(1)}\otimes f(x)^{(2)}= f(x^{(1)})\otimes f(x^{(2)}), ~~~~~ f\circ \beta= \beta' \circ f. $$ A $(\alpha, \beta)$-Hom-bialgebra is a tuple $(B, m, \eta, \alpha, \Delta, \varepsilon, \beta)$ where $(B, m, \eta, \alpha)$ is a Hom-algebra and $(B, \Delta, \varepsilon, \beta)$ is a Hom-coalgebra such that $\Delta$ and $\varepsilon$ are morphisms of Hom-algebras, that is\\ i) $\Delta$ is a Hom-algebra map, $\Delta (hk)= \Delta(h) \Delta(k)$, that is, $$(hk)^{(1)}\otimes (hk)^{(2)}= h^{(1)}k^{(1)}\otimes h^{(2)}k^{(2)}, \quad \forall~~ h,k\in B,$$ ii) $\Delta$ is unital; $\Delta(1)= 1\otimes 1.$ iii) $\varepsilon$ is a Hom-algebra map; $\varepsilon(xy)= \varepsilon(x) \varepsilon (y)$. iv) $\varepsilon$ is unital; $\varepsilon(1)=1$. v) $\varepsilon (\alpha(x))= \varepsilon(x)$.
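We note in passing, dually to the twisting of an algebra by an algebra morphism recalled above, that a coalgebra can be twisted along a coalgebra morphism; this is a routine check which we sketch only for the reader's convenience. If $(C,\Delta,\varepsilon)$ is a coalgebra and $\beta: C\longrightarrow C$ is a coalgebra map, set $\Delta_{\beta}:=\Delta\circ\beta$. Then $(C,\Delta_{\beta},\varepsilon,\beta)$ is a counital Hom-coalgebra: since $\beta$ is a coalgebra map, $$(\Delta_{\beta}\otimes\beta)\circ\Delta_{\beta}(c)=\beta^2(c^{(1)(1)})\otimes\beta^2(c^{(1)(2)})\otimes\beta^2(c^{(2)})=\beta^2(c^{(1)})\otimes\beta^2(c^{(2)(1)})\otimes\beta^2(c^{(2)(2)})=(\beta\otimes\Delta_{\beta})\circ\Delta_{\beta}(c),$$ where the middle equality is the coassociativity of $\Delta$, and $(\mathop{\rm Id}\nolimits\otimes\varepsilon)\circ\Delta_{\beta}(c)=\beta(c^{(1)})\,\varepsilon(\beta(c^{(2)}))=\beta(c)$, and similarly on the other side. Combined with the algebra twist above, this leads to Example \ref{twisted Hom-bialgebra} below.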
The algebra map property of $\Delta$ in terms of commutative diagrams is \[ \underset{ { \hbox{{\bf{ }}}}} { \xymatrixcolsep{5pc}\xymatrix{ B\otimes B \ar[r]^-{\Delta \circ m} \ar[d]_-{\Delta \otimes \Delta} &B\otimes B \\ B\otimes B\otimes B\otimes B \ar[r]_-{id \otimes \tau \otimes id} & B\otimes B \otimes B\otimes B \ar[u]_-{m \otimes m} } } \] Here the linear map $\tau:B\otimes B \rightarrow B\otimes B$ is given by $\tau(h\otimes k)=k\otimes h$. The map $\varepsilon$ being an algebra morphism in terms of a commutative diagram means \[ \underset{ { \hbox{{\bf{ }}}}} { \xymatrixcolsep{5pc}\xymatrix{ B \otimes B \ar[r]^-{m} \ar[d]_-{\varepsilon \otimes \varepsilon}&B\ \ar[ld]^-{\varepsilon}\\ \mathbb{K}\otimes \mathbb{K} =\mathbb{K} }} \] \begin{remark}{\rm It can be proved that $\Delta$ and $\varepsilon$ are morphisms of unital Hom-algebras if and only if $m$ and $\eta$ are morphisms of Hom-coalgebras.} \end{remark} \begin{example}\label{2dbi}{\rm The $2$-dimensional Hom-algebra in Example \ref{2d} is a $(\alpha, \beta)$-Hom-bialgebra with the coalgebra structure given in Example \ref{2dco}. See \cite{ms2}. } \end{example} \begin{example}\label{twisted Hom-bialgebra} {\rm Let $ (B, m, \eta, \Delta, \varepsilon)$ be a bialgebra and $\alpha: B\longrightarrow B$ be a bialgebra map. Then $ (B, m_{\alpha}=\alpha\circ m, \alpha, \eta, \Delta_{\alpha}=\Delta\circ \alpha, \varepsilon, \alpha)$ is a $(\alpha, \alpha)$-Hom-bialgebra. } \end{example} Let $(B, m, \eta, \alpha, \Delta, \varepsilon, \beta)$ and $(B', m', \eta', \alpha', \Delta', \varepsilon', \beta')$ be two Hom-bialgebras. A morphism $f: B\longrightarrow B'$ is called a map of Hom-bialgebras if it is both a morphism of Hom-algebras and a morphism of Hom-coalgebras. Let $(B, m, \eta, \alpha, \Delta, \varepsilon, \beta)$ be a Hom-bialgebra. The authors in \cite{ms2}, \cite{ms3}, showed that $(\mathop{\rm Hom}\nolimits(B, B), \star, \gamma)$ is a unital Hom-algebra, where $\star$ is the convolution product $$f \star g = m \circ (f\otimes g) \circ \Delta,$$ and $ \gamma: \mathop{\rm Hom}\nolimits(B, B)\longrightarrow \mathop{\rm Hom}\nolimits(B, B)$ is defined by $\gamma(f)= \alpha \circ f \circ \beta.$ The unit is $\eta \circ \varepsilon$. Similarly, if $(A, m, \eta, \alpha)$ and $(C, \Delta, \varepsilon, \beta)$ are a Hom-algebra and a Hom-coalgebra, respectively, then $(\mathop{\rm Hom}\nolimits(C, A), \star, \gamma)$ is a unital Hom-algebra where $\star$ is the convolution product.\\ Here we recall the original definition of Hom-Hopf algebras. \begin{remark}\label{old hom-hopf}{\rm The notion of Hom-Hopf algebras first appeared in \cite{ms2} and \cite{ms3} as follows. A $(\alpha, \beta)$-Hom-bialgebra $(H, m, \eta, \alpha, \Delta, \varepsilon, \beta)$ with an antipode $S:H\longrightarrow H$ is called a $(\alpha, \beta)$-Hom-Hopf algebra. A map $S$ is called an antipode if it is an inverse of the identity map $\mathop{\rm Id}\nolimits: H\longrightarrow H$ in the Hom-associative algebra $\mathop{\rm Hom}\nolimits(H,H) $ with respect to the multiplication given by the convolution product, i.e. $S \star \mathop{\rm Id}\nolimits = \mathop{\rm Id}\nolimits \star S= \eta \circ \varepsilon$. In fact, for all $h\in H$ we have $$S(h^{(1)}) h^{(2)}= h^{(1)} S(h^{(2)})= \varepsilon(h) 1.$$ This is the same as the usual definition of an antipode for Hopf algebras. The following properties of the antipode of Hom-Hopf algebras with this definition were proved in \cite{cg} and \cite{ms2}. For all $x, y\in H$ we have:\\ i) If $\alpha=\beta$ then $S\circ \alpha=\alpha\circ S$. ii) The antipode $S$ of a Hom-Hopf algebra is unique.
iii) $S$ is an anti-algebra map, i.e., $S(xy)= S(y) S(x)$. iv) $S$ is an anti-coalgebra map, i.e., $S(x)^{(1)}\otimes S(x)^{(2)}= S(x^{(2)})\otimes S(x^{(1)})$. v) $S$ is unital, i.e., $S(1)=1$. vi) $S$ is counital, i.e., $ \varepsilon(S(x))= \varepsilon(x)$. } \end{remark} In this paper we use the recent notion of Hom-Hopf algebras introduced in \cite{lmt}. \begin{definition}\label{Hom-Hopf}\cite{lmt} Let $(B, m, \eta, \alpha, \Delta, \varepsilon, \beta)$ be a $(\alpha, \beta)$-Hom-bialgebra. An anti-algebra, anti-coalgebra morphism $S: B\longrightarrow B$ is said to be an antipode if\\ a) $S\circ \alpha= \alpha \circ S$. b) $S \circ \eta = \eta$ and $\varepsilon \circ S= \varepsilon$. c) $S$ is a relative Hom-inverse of the identity map $\mathop{\rm Id}\nolimits : B \longrightarrow B$ for the convolution product, i.e., for any $x\in B$, there exists $k\in \mathbb{N}$ such that \begin{equation} \alpha^k \circ m\circ(S\otimes \mathop{\rm Id}\nolimits)\circ\Delta(x) = \alpha^k \circ m\circ(\mathop{\rm Id}\nolimits \otimes S)\circ\Delta(x)= \eta \circ \varepsilon (x). \end{equation} A $(\alpha, \beta)$-Hom-bialgebra with an antipode is called a $(\alpha, \beta)$-Hom-Hopf algebra. \end{definition} One notes that Definition \ref{Hom-Hopf}(c) in terms of Sweedler notation can be written as follows: \begin{equation} \alpha^k(S(x^{(1)}) x^{(2)}) = \alpha^k (x^{(1)} S(x^{(2)}))= \varepsilon(x)1_B. \end{equation} \begin{remark}{\rm There are some differences between the old definition of Hom-Hopf algebras in Remark \ref{old hom-hopf} and the recent one in Definition \ref{Hom-Hopf}. In the special case $\alpha=\beta$, Definition \ref{Hom-Hopf}(a) follows from the definition of Hom-Hopf algebras in Remark \ref{old hom-hopf} by property (i). Also Definition \ref{Hom-Hopf}(b) follows from the old definition by Remark \ref{old hom-hopf}(v)(vi). Furthermore, the antipode of a Hom-Hopf algebra in Definition \ref{Hom-Hopf} is a relative Hom-inverse of the identity map, whereas the antipode in Remark \ref{old hom-hopf} is actually the inverse of the identity map. Finally, the antipode in Remark \ref{old hom-hopf} is unique; however, the antipode in Definition \ref{Hom-Hopf} is not necessarily unique. In fact, the authors in \cite{lmt} proved that if $S$ and $S'$ are two antipodes for the Hom-Hopf algebra $H$ in the sense of Definition \ref{Hom-Hopf}, then for every $x\in H$ there exists $k\in \mathbb{N}$ such that $$\alpha^{k+2} \circ S \circ \beta^2(x) = \alpha^{k+2} \circ S' \circ \beta^2(x).$$ In the special case when $\alpha$ and $\beta$ are both invertible, $S=S'$ and the antipode is unique.} \end{remark} \begin{proposition} Any Hom-Hopf algebra $(H, m, \eta, \alpha, \Delta, \varepsilon, \beta, S)$ in the sense of Remark \ref{old hom-hopf} which satisfies the extra condition $S\circ \alpha= \alpha \circ S$ is a Hom-Hopf algebra in the sense of Definition \ref{Hom-Hopf}, with $k=1$ for all elements $x\in H$. \end{proposition} \begin{proof} By Remark \ref{old hom-hopf} the antipode $S$ is a unital, counital, anti-algebra, and anti-coalgebra map. Moreover, since $\alpha\circ \eta=\eta$, applying $\alpha$ to $S \star \mathop{\rm Id}\nolimits = \mathop{\rm Id}\nolimits \star S= \eta \circ \varepsilon$ gives $\alpha(S(x^{(1)}) x^{(2)}) = \alpha (x^{(1)} S(x^{(2)}))= \varepsilon(x)1$ for every $x\in H$. Therefore, with $k=1$, $S$ satisfies all the conditions of Hom-Hopf algebras in Definition \ref{Hom-Hopf}. \end{proof} \begin{corollary} If a Hom-Hopf algebra $(H, m, \eta, \alpha, \Delta, \varepsilon, \beta, S)$ in the sense of Remark \ref{old hom-hopf} satisfies $\alpha=\beta$, then $H$ is a Hom-Hopf algebra in the sense of Definition \ref{Hom-Hopf}.
\end{corollary} \begin{proof} If $\alpha=\beta$ then $\alpha$ is a map of Hom-bialgebras and by Remark \ref{old hom-hopf} we have $\alpha \circ S= S\circ \alpha$. Now the result follows from the previous Proposition. \end{proof} A Hom-Hopf algebra is called commutative if it is commutative as a Hom-algebra and it is called cocommutative if it is cocommutative as a Hom-coalgebra. \begin{example}\label{HH} {\rm Let $(H, m, \eta, \alpha, \Delta, \varepsilon, \beta, S)$ and $(K, m', \eta', \alpha', \Delta', \varepsilon', \beta', S')$ be two Hom-Hopf algebras. Then $H\otimes K$ is also a Hom-Hopf algebra with the multiplication induced by $m\otimes m'$, the unit $\eta\otimes \eta'$, the map $\alpha\otimes \alpha': H\otimes K \longrightarrow H\otimes K$, the coproduct $\Delta\otimes \Delta'$, the counit $\varepsilon \otimes \varepsilon'$, the linear map $\beta\otimes \beta': H\otimes K \longrightarrow H\otimes K$, and the antipode $S\otimes S'$. } \end{example} \begin{definition} Let $(H, m, \eta, \alpha, \Delta, \varepsilon, \beta, S)$ be a $(\alpha, \beta)$-Hom-Hopf algebra. An element $h\in H$ is called a group-like element if $\Delta(h)=h\otimes h$ and $\beta(h)=h$. \end{definition} \begin{remark}{\rm If $h\in H$ is a group-like element then $ \varepsilon(h)h=\beta(h)=h$. Therefore $\varepsilon(h)=1$.} \end{remark} One notes that the authors in \cite{lmt} introduced group-like elements with the condition $\varepsilon(h)=1$, which in fact implies $\beta(h)=h$. Therefore their definition is equivalent to the one in this paper. However, we prefer to take $\beta(h)=h$ as the definition so that, as for ordinary Hopf algebras, the condition $\varepsilon(h)=1$ is a consequence of the fact that $h$ is a group-like element. The notion of Hom-groups was introduced in \cite{lmt} and studied in \cite{hassan}. For any Hom-group $(G, \alpha)$, the author in \cite{hassan} introduces the Hom-group algebra $\mathbb{K}G$. \begin{proposition} For any Hom-group $(G, \alpha) $, the Hom-group algebra $\mathbb{K}G$ is a $(\alpha, \mathop{\rm Id}\nolimits)$-Hom-Hopf algebra. \end{proposition} \begin{proof} We define the coproduct by $\Delta (g)= g\otimes g$, the counit by $\varepsilon(g)=1$, and the antipode by $S(g)=g^{-1}$. Since $\beta=\mathop{\rm Id}\nolimits$, one verifies that $\mathbb{K}G$ is a $(\alpha, \mathop{\rm Id}\nolimits)$-Hom-bialgebra. Since for any Hom-group we have $\alpha(g)^{-1}= \alpha(g^{-1})$, we get $ S(\alpha(g))=\alpha(S(g))$. The unit element of $\mathbb{K}G$ is $1_G$ and therefore $S(1_G)=1_G^{-1}= 1_G$. Furthermore, $\varepsilon (S(g))= \varepsilon (g^{-1})=1$. Finally, if the invertibility index of $g\in G$ is $k$ then $$\alpha^k( S(g) g)= \alpha^k(g^{-1} g)=1.$$ \end{proof} One notes that all elements $g\in G\subset\mathbb{K}G$ are group-like elements. Also $\mathbb{K}G$ is a cocommutative Hom-Hopf algebra. If $G$ is an abelian Hom-group then $\mathbb{K}G$ is a commutative Hom-Hopf algebra. The authors in \cite{lmt} proved that the set of group-like elements of a Hom-Hopf algebra is a Hom-group. In the Hom-bialgebra structure of $\mathbb{K}G$ one can also define the comultiplication by $\Delta(g) = \alpha(g)\otimes \alpha(g)$ to obtain a $(\alpha, \alpha)$-Hom-bialgebra. \begin{example}\label{polynomial} {\rm (Hom-bialgebra of quantum matrices) In this example we study a Hom-bialgebra, generated by $4$ elements, which is not a Hom-Hopf algebra. First we recall the construction of quantum matrices from \cite{es}, \cite{k}, \cite{m}, \cite{s}. Let $q\in \mathbb{K}$ where $q\neq 0$ and $q^2\neq -1$.
Let $\mathcal{O}_q(M_2)= \mathbb{K}[ a, b, c, d]$ be the polynomial algebra with variables $a,b,c,d$ satisfying the following relations \begin{align*} &ab = q^{-1} ba, ~~~~~~ bd= q^{-1}db, ~~~~ac = q^{-1}ca, ~~~~ cd=q^{-1}dc\\ &bc=cb, ~~~~~ ad-da= (q^{-1}-q)bc. \end{align*} Clearly $\mathcal{O}_q(M_2)$ is not commutative except when $q=1$. We define a coproduct as follows. \begin{align*} &\Delta(a) = a\otimes a + b\otimes c, ~~~~~~~ \Delta(b)= a\otimes b+ b\otimes d\\ & \Delta (c) = c\otimes a+ d\otimes c, ~~~~~~~ \Delta(d) = c\otimes b+ d\otimes d \end{align*} If we arrange the generators $a,b,c,d$ of $\mathcal{O}_q(M_2)$ in a $2 \times 2$ matrix, then \\ $~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\begin{bmatrix} \Delta(a)& \Delta(b)\\ \Delta(c) &\Delta(d) \end{bmatrix}_{\mathcal{O}(M_2)}= \begin{bmatrix} a&b\\ c&d \end{bmatrix} \otimes \begin{bmatrix} a&b\\ c&d \end{bmatrix} $\\ This comultiplication is not cocommutative. We define the counit by \begin{align*} \varepsilon(a)=\varepsilon(d)=1, ~~~~~~~~~~~~~~~~~~~~ \varepsilon(b)=\varepsilon(c)=0. \end{align*} This coproduct and counit define a bialgebra structure on $\mathcal{O}(M_2)$. Now we explain the Hom-bialgebra structure from \cite{ya1}. We define a bialgebra map $\alpha : \mathcal{O}(M_2)\longrightarrow \mathcal{O}(M_2)$ by \begin{align*} \alpha(a)=a, ~~~ \alpha(b)=\lambda b, ~~~ \alpha(c)=\lambda^{-1}c, ~~~ \alpha(d)=d, \end{align*} where $\lambda \in \mathbb{K}$ is any invertible element. In fact\\ $$ \alpha(\begin{bmatrix} a&b\\ c&d \end{bmatrix}) = \begin{bmatrix} \alpha(a)&\alpha(b)\\\alpha(c)&\alpha(d) \end{bmatrix}= \begin{bmatrix} a&\lambda b\\\lambda^{-1}c&d \end{bmatrix} $$\\ It can be verified that $\alpha$ is a bialgebra morphism. One notes that $\varepsilon\circ \alpha=\varepsilon$. Now we use $\alpha$ to twist both the product and the coproduct of $\mathcal{O}(M_2)$ as explained in Example \ref{twisted Hom-bialgebra} to obtain a $(\alpha, \alpha)$-Hom-bialgebra $\mathcal{O}_q(M_2)_{\alpha}$. Therefore the coproduct of $\mathcal{O}_q(M_2)_{\alpha}$ is \begin{align*} &\Delta(a) = a\otimes a + b\otimes c, ~~~~~~~~~~~~~~ \Delta(b)= \lambda a\otimes b+ \lambda b\otimes d\\ & \Delta (c) = \lambda^{-1}c\otimes a+ \lambda^{-1} d\otimes c, ~~~~~~~ \Delta(d) = c\otimes b+ d\otimes d \end{align*} In fact $~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\begin{bmatrix} \Delta(a)& \Delta(b)\\ \Delta(c) &\Delta(d) \end{bmatrix}_{\mathcal{O}_q(M_2)_{\alpha}}= \begin{bmatrix} a&\lambda b\\\lambda^{-1}c&d \end{bmatrix} \otimes \begin{bmatrix} a&\lambda b\\\lambda^{-1}c&d \end{bmatrix} $\\ Now we consider the quantum determinant element $$det_q= \mu_{\alpha}(a, d) -q^{-1}\mu_{\alpha}(b, c )\in \mathcal{O}_q(M_2)_{\alpha}. $$ One notes that $$det_q= \alpha(a)\alpha(d)- q^{-1} \alpha(b)\alpha(c)= ad - q^{-1} (\lambda b) ( \lambda^{-1} c)= ad - q^{-1} bc.$$ Similarly $\alpha( det_q)= det_q$. Therefore $$\Delta_{\mathcal{O}_q(M_2)_{\alpha}}(det_q) = \Delta \alpha(det_q) = \Delta (det_q).$$ It is shown in \cite{k} and \cite{s} that $\Delta(det_q) = det_q \otimes det_q$, which means $det_q$ is a group-like element. Also $\varepsilon_{ \mathcal{O}(M_2)_{\alpha}}= \varepsilon_{\mathcal{O}(M_2)}.$ Therefore $\varepsilon(ad-q^{-1}bc)= 1$. Then $det_q$ is a group-like element of the Hom-bialgebra $\mathcal{O}_q(M_2)_{\alpha}$. Since the set of group-like elements of a Hom-Hopf algebra is a Hom-group \cite{lmt}, every group-like element of a Hom-Hopf algebra is relative Hom-invertible. However $det_q$ is clearly not relative Hom-invertible: since $\alpha$ rescales the generators, it preserves the grading of $\mathcal{O}_q(M_2)$ by total degree in $a,b,c,d$, so $\alpha^k(\mu_{\alpha}(det_q, x))$ has no constant term for any $x$ and any $k\in \mathbb{N}$.
Therefore $\mathcal{O}_q(M_2)_{\alpha}$ is not a Hom-Hopf algebra. } \end{example} \begin{example}\label{2-dimensional Hopf}{\rm The $2$-dimensional Hom-bialgebra $H$ in Example \ref{2dbi} is a $(\alpha, \beta)$-Hom-Hopf algebra with \begin{align*} S(e_1)=e_1, ~~~~~~~~ S(e_2)=e_2. \end{align*} It is straightforward to check that $S(h^{(1)}) h^{(2)}= h^{(1)} S(h^{(2)}) = \varepsilon(h) 1$ for all $h\in H$. Therefore it is a Hom-Hopf algebra in the sense of Remark \ref{old hom-hopf}. Since $S=\mathop{\rm Id}\nolimits$, we have $S\circ \alpha= \alpha \circ S$ and $H$ is also a Hom-Hopf algebra in the sense of Definition \ref{Hom-Hopf}. } \end{example} \section{Properties of antipodes} In this section we study the properties of antipodes of Hom-Hopf algebras. We recall that we are using Definition \ref{Hom-Hopf}. We need the following basic properties of the convolution product for later results. \begin{remark}\label{convolution} Let $(H, m, \eta, \alpha, \Delta, \varepsilon, \beta)$ be a $(\alpha, \beta)$-Hom-bialgebra. We consider the convolution Hom-algebra $\mathop{\rm Hom}\nolimits( H, H)$. If $f, g\in \mathop{\rm Hom}\nolimits(H, H)$, then the authors in \cite{lmt} showed that \\ i) $ \alpha^n ( f\star g) = \alpha^n f \star \alpha^n g$. ii) $f \star (\eta \circ\varepsilon )= \alpha \circ f \circ \beta= (\eta \circ\varepsilon ) \star f$. \end{remark} The following Proposition will play an important role in the further results of this paper. \begin{proposition}\label{important} Let $(H, m, \eta, \alpha, \Delta, \varepsilon, \beta)$ and $(A, m', \eta', \alpha', \Delta', \varepsilon', \beta')$ be $(\alpha, \beta)$- and $(\alpha', \beta')$-Hom-bialgebras and let $\mathop{\rm Hom}\nolimits(H, A)$ be the convolution Hom-algebra. If $f, g, \varphi \in \mathop{\rm Hom}\nolimits(H, A)$ where $f$ and $g$ are relative Hom-inverses of $\varphi$, then for every $x\in H$ there exists $k\in\mathbb{N}$ such that $$\alpha'^{k+2} \circ f\circ \beta^2 (x)= \alpha'^{k+2} \circ g \circ \beta^2(x).$$ \end{proposition} \begin{proof} For $x\in H$ there exist $k', k''\in \mathbb{N}$ such that $$ \alpha'^{k'} ( f\star \varphi)(x) = \alpha'^{k'} (\varphi \star f)(x)= \varepsilon(x)1 = \eta' \circ \varepsilon(x) ,$$ and $$ \alpha'^{k''} ( g\star \varphi)(x) = \alpha'^{k''} (\varphi \star g)(x)=\varepsilon(x)1 =\eta' \circ \varepsilon(x). $$ Let $k= \max (k', k'')$. We omit the composition sign to ease the computation. We have \begin{align*} \alpha'^{k+2} f \beta^2 &= \alpha' (\alpha'^{k+1} f\beta) \beta\\ &=(\alpha'^{k+1} f\beta) \star (\eta' \varepsilon)= (\alpha'^{k+1} f\beta)\star \alpha'^k(\varphi \star g)\\ &=(\alpha'^{k+1} f\beta)\star (\alpha'^k\varphi \star \alpha'^kg)= ((\alpha'^{k} f\beta)\star \alpha'^k \varphi) \star \alpha'^{k+1}g\\ &=(\alpha'^k f \star \alpha'^k \varphi)\star \alpha'^{k+1}g\beta= \alpha'^k ( f\star \varphi) \star \alpha'^{k+1} g\beta\\ &=\alpha'^k(\eta' \varepsilon) \star \alpha'^{k+1} g\beta=\eta' \varepsilon \star \alpha'^{k+1} g\beta\\ &= \alpha'^{k+2}g\beta^2. \end{align*} We used Remark \ref{convolution}(ii) in the second equality, Remark \ref{convolution}(i) in the fourth equality, the Hom-associativity in the fifth equality, the Hom-coassociativity in the sixth equality, Remark \ref{convolution}(i) in the seventh equality, the unitality of $\alpha'$ in the eighth equality, and Remark \ref{convolution}(ii) in the last equality. \end{proof} The previous proposition shows that relative Hom-inverses are unique in some sense. In fact, if $\alpha'$ and $\beta$ are invertible then $f=g$.
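For the reader's convenience, here is a short Sweedler-notation sketch of Remark \ref{convolution}(ii), which was used repeatedly above; it relies only on the Hom-unitality and Hom-counitality axioms: $$(f\star(\eta\circ\varepsilon))(x)= f(x^{(1)})\,\varepsilon(x^{(2)})1 = \varepsilon(x^{(2)})\,\alpha\big(f(x^{(1)})\big)=\alpha\big(f(x^{(1)}\varepsilon(x^{(2)}))\big)=\alpha\big(f(\beta(x))\big),$$ and similarly $((\eta\circ\varepsilon)\star f)(x)=\varepsilon(x^{(1)})1\, f(x^{(2)})=\alpha\big(f(\varepsilon(x^{(1)})x^{(2)})\big)=\alpha\big(f(\beta(x))\big)$.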
\begin{proposition}\label{relative anti algebra} Let $(H, m, \eta, \alpha, \Delta, \varepsilon, \beta)$ be a Hom-bialgebra with multiplicative $\alpha$, endowed with a map $S: H\longrightarrow H$ where $S$ is a relative Hom-inverse of the identity map $\mathop{\rm Id}\nolimits : H \longrightarrow H$ in $\mathop{\rm Hom}\nolimits(H, H)$. Let $P(x, y)=S(x y)$, $N(x,y)= S(y) S(x)$, and $ M(x, y)= xy$. Then $S(xy)$ and $S(y)S(x)$ are both relative Hom-inverses of the multiplication $M(x, y)=xy$ in $\mathop{\rm Hom}\nolimits(H\otimes H, H)$, with respect to the convolution product, and for every $x, y\in H$ there exists $K\in \mathbb{N}$ such that \begin{equation}\label{relative hom-anti algebra} \alpha^{K+2}( S[\beta^2(x) \beta^2(y)])= \alpha^{K+2} [S(\beta^2(y)) S(\beta^2(x))]. \end{equation} \end{proposition} \begin{proof} For any $x, y\in H$ there exist $k', k''\in \mathbb{N}$ such that $$ \alpha^{k'}(S(x^{(1)}) x^{(2)}) = \alpha^{k'} (x^{(1)} S(x^{(2)}))= \varepsilon(x)1_H, $$ and $$ \alpha^{k''}(S(y^{(1)}) y^{(2)}) = \alpha^{k''} (y^{(1)} S(y^{(2)}))= \varepsilon(y)1_H. $$ Let $k=\max(k', k'')+2$. Then \begin{align*} \alpha^k(M\star N)( x,y) = &\alpha^k[M(x^{(1)}, y^{(1)}) N(x^{(2)},y^{(2)})]\\ &=\alpha^k([x^{(1)} y^{(1)}][S(y^{(2)}) S(x^{(2)})])\\ &=[\alpha^k(x^{(1)})\alpha^k(y^{(1)})] \alpha^k ( S(y^{(2)}) S(x^{(2)}))\\ &=\alpha^{k+1} (x^{(1)}) [\alpha^k (y^{(1)}) \alpha^{k-1} ( S(y^{(2)}) S(x^{(2)})) ]\\ &=\alpha^{k+1} (x^{(1)}) [\alpha^k (y^{(1)}) [ \alpha^{k-1} ( S(y^{(2)}))\alpha^{k-1}( S(x^{(2)}))] ]\\ &=\alpha^{k+1} (x^{(1)}) [[\alpha^{k -1}(y^{(1)}) \alpha^{k-1}( S(y^{(2)}))]\alpha^{k}( S(x^{(2)})) ]\\ &=\alpha^{k+1} (x^{(1)}) [\alpha^{k -1}(y^{(1)} S(y^{(2)}))\alpha^{k}( S(x^{(2)})) ]\\ &=\alpha^{k+1} (x^{(1)})[\varepsilon(y) 1\,\alpha^{k}( S(x^{(2)}))]\\ &=\alpha^{k+1} (x^{(1)})\alpha^{k+1}( S(x^{(2)})) \varepsilon(y)\\ &=\alpha^{k+1} [x^{(1)} S(x^{(2)})] \varepsilon(y)\\ &=\alpha^{k+1}( \varepsilon (x) 1) \varepsilon(y)\\ &= \varepsilon(x) \varepsilon(y)1. \end{align*} We used the Hom-associativity property in the fourth equality and the Hom-unitality in the ninth equality. Therefore $ N(x,y)= S(y) S(x)$ is a relative Hom-inverse of $ M(x,y)= xy$. Now for $xy\in H$ there exists $n\in \mathbb{N}$ such that $$ \alpha^{n}(S((xy)^{(1)}) (xy)^{(2)}) = \alpha^{n} ((xy)^{(1)} S((xy)^{(2)}))= \varepsilon(xy)1_H. $$ Then \begin{align*} &\alpha^n( P\star M)(x, y)= \alpha^n(P(x^{(1)}, y^{(1)}) M(x^{(2)},y^{(2)}))\\ &= \alpha^n [S(x^{(1)} y^{(1)}) x^{(2)}y^{(2)}]= \alpha^n [S ((xy)^{(1)}) (xy)^{(2)}]= \varepsilon (xy) 1. \end{align*} Therefore $ P(x, y) = S(xy)$ is a relative Hom-inverse of $ M(x,y)=xy$. Then $S(xy)$ and $S(y)S(x)$ are both relative Hom-inverses of the multiplication $M(x, y)=xy$ in $\mathop{\rm Hom}\nolimits(H\otimes H, H)$ with respect to the convolution product. Therefore by Proposition \ref{important} there exists $K\in \mathbb{N}$ such that $$ \alpha^{K+2} \circ P \circ \beta_{H\otimes H}^2 (x,y)= \alpha^{K+2} \circ N \circ \beta_{H\otimes H}^2(x,y) .$$ By Example \ref{HH}, we have $\beta_{H\otimes H}= \beta_H\otimes \beta_H$ and therefore we obtain the result. \end{proof} Relation (\ref{relative hom-anti algebra}) is called the relative Hom-anti-algebra map property of $S$. \begin{corollary} Let $(H, m, \eta, \alpha, \Delta, \varepsilon, \beta)$ be a $(\alpha, \beta)$-Hom-bialgebra, where $\alpha$ is multiplicative and $\alpha$ and $\beta$ are invertible.
If $H$ is endowed with a linear map $S: H \longrightarrow H$ where $S$ is a relative Hom-inverse of the identity map $\mathop{\rm Id}\nolimits : H \longrightarrow H$ in $\mathop{\rm Hom}\nolimits(H, H)$, then $S$ is an anti-algebra map. \end{corollary} \begin{proof} By the previous Proposition we have $ \alpha^{K+2} \circ P \circ \beta_{H\otimes H}^2 (x,y)= \alpha^{K+2} \circ N \circ \beta_{H\otimes H}^2(x,y) .$ Since $\alpha$ and $\beta$ are invertible, $P= N$, i.e., $S(xy)=S(y) S(x)$. \end{proof} Similarly we have the following proposition. \begin{proposition}\label{relative anti coalgebra} Let $(H, m, \eta, \alpha, \Delta, \varepsilon, \beta)$ be a Hom-bialgebra where $\beta$ is a coalgebra map. Assume $H$ is endowed with a map $S: H\longrightarrow H$ where $S$ is a relative Hom-inverse of the identity map $\mathop{\rm Id}\nolimits : H\longrightarrow H$ with respect to the convolution product. Let $P(x)=\Delta(S(x))$, $N= \tau(S\otimes S)\Delta$, and $ \Delta(x)= x^{(1)} \otimes x^{(2)}$, where $\tau(x\otimes y)= y\otimes x$ for all $x, y\in H$. Then $P$ and $N$ are both relative Hom-inverses of the comultiplication $\Delta$ in $\mathop{\rm Hom}\nolimits(H, H\otimes H)$ with respect to the convolution product and for every $x\in H$ there exists $k\in \mathbb{N}$ such that \begin{equation}\label{relative hom-anti coalgebra} \alpha^{k+2}[ S(\beta^2(x) )^{(1)}] \otimes \alpha^{k+2}[ S(\beta^2(x) )^{(2)}] = \alpha^{k+2} [S(\beta^2(x^{(2)}))]\otimes \alpha^{k+2} [S(\beta^2(x^{(1)}))]. \end{equation} \end{proposition} \begin{proof} Using Proposition \ref{important} and Example \ref{HH}, the proof is similar to that of the previous Proposition. \end{proof} Relation (\ref{relative hom-anti coalgebra}) is called the relative Hom-anti-coalgebra map property of $S$. As a special case we have the following. \begin{corollary} Let $(H, m, \eta, \alpha, \Delta, \varepsilon, \beta)$ be a $(\alpha, \beta)$-Hom-bialgebra, where $\beta$ is a coalgebra morphism and $\alpha$ and $\beta$ are invertible. Assume $H$ is endowed with a linear map $S: H \longrightarrow H$ where $S$ is a relative Hom-inverse of the identity map $\mathop{\rm Id}\nolimits : H \longrightarrow H$ with respect to the convolution product. Then $S$ is an anti-coalgebra map. \end{corollary} \begin{proposition}\label{hom unitality prop} Let $(H, m, \eta, \alpha, \Delta, \varepsilon, \beta)$ be a Hom-bialgebra endowed with a map $S: H\longrightarrow H$ where $S$ is a relative Hom-inverse of the identity map $\mathop{\rm Id}\nolimits : H \longrightarrow H$ with respect to the convolution product. Then there exists $k\in \mathbb{N}$ such that \begin{equation}\label{hom-unitality} \alpha^{k+1}( S(1))=1. \end{equation} \end{proposition} \begin{proof} We apply the relative Hom-invertibility of $S$ for $h=1$. So there exists $k\in \mathbb{N}$ such that $$ 1= \varepsilon(1) 1= \alpha^k( \mathop{\rm Id}\nolimits \star S)(1)= \alpha^k(1S(1)) = \alpha^{k+1}(S(1)). $$ \end{proof} Condition (\ref{hom-unitality}) is called the relative Hom-unitality property of $S$. \begin{proposition}\label{relative counitality} Let $(H, m, \eta, \alpha, \Delta, \varepsilon, \beta)$ be a Hom-bialgebra endowed with a map $S: H\longrightarrow H$ where $S$ is a relative Hom-inverse of the identity map $\mathop{\rm Id}\nolimits : H \longrightarrow H$ with respect to the convolution product. Then for every $h\in H$ there exists $k\in \mathbb{N}$ such that \begin{equation}\label{hom-counitality} \varepsilon (\alpha^k(S(h)))=\varepsilon(h).
\end{equation} \end{proposition} \begin{proof} For any $h\in H$ there exists $k\in \mathbb{N}$ such that $$\varepsilon(h) 1= \alpha^k( S(h^{(1)})h^{(2)}).$$ Therefore $$\varepsilon (\varepsilon(h)1) = \varepsilon(\alpha^k( S(h^{(1)})h^{(2)})).$$ Since $\varepsilon$ is unital and it commutes with $\alpha$, then $$\varepsilon(h) = \alpha^k ( \varepsilon(S(h^{(1)})h^{(2)}) ) = \alpha^k (\varepsilon(S(h^{(1)})) \varepsilon(h^{(2)}) ). $$ Therefore $$\varepsilon(h) = \alpha^k( \varepsilon(S(h^{(1)}\varepsilon(h^{(2)}))) ) = \alpha^k \varepsilon(S(h))= \varepsilon (\alpha^k(S(h))).$$ \end{proof} Condition (\ref{hom-counitality}) is called the relative Hom-counitality property of $S$. \begin{lemma} Let $(H, m, \eta, \alpha, \Delta, \varepsilon, \beta, S )$ be a Hom-Hopf algebra, $(A, m', \eta', \alpha' )$ be a Hom-algebra and $f: H\longrightarrow A$ be a Hom-algebra map. Then $f \circ S$ is a relative Hom-inverse of $f$ in $\mathop{\rm Hom}\nolimits (H, A)$. \end{lemma} \begin{proof} We show that $f\circ S$ is a relative Hom-inverse of $f$ in $ \mathop{\rm Hom}\nolimits( H, A)$. For any $h\in H$ there exists $k\in \mathbb{N}$ such that $\alpha^k(S(h^{(1)}) h^{(2)}) = \alpha^k (h^{(1)} S(h^{(2)}))= \varepsilon(h)1.$ Therefore \begin{align*} &\alpha'^k( (f\circ S)\star f)(h) = \alpha'^k (f( S(h^{(1)})) f(h^{(2)})) = \alpha'^k(f(S(h^{(1)})h^{(2)}) )\\ & = f (\alpha^k(S(h^{(1)})h^{(2)}))= f(\varepsilon(h)1)=\varepsilon(h)1. \end{align*} Similarly, since $ \alpha^k (h^{(1)} S(h^{(2)}))= \varepsilon(h)1$, we have $\alpha'^k( f \star(f\circ S) )(h)=\varepsilon(h)1$. \end{proof} \begin{lemma} Let $(H, m, \eta, \alpha, \Delta, \varepsilon, \beta, S )$ be a Hom-Hopf algebra, $(C, \Delta', \varepsilon', \beta' )$ be a Hom-coalgebra and $f: C\longrightarrow H$ be a Hom-coalgebra map. Then $ S\circ f$ is a relative Hom-inverse of $f$ in $\mathop{\rm Hom}\nolimits (C, H)$. \end{lemma} \begin{proof} Similar to that of the previous Lemma. \end{proof} \begin{proposition}\label{Hopf map} Let $(H, m, \eta, \alpha, \Delta, \varepsilon, \beta, S)$ and $(K, m', \eta', \alpha', \Delta', \varepsilon', \beta', S')$ be $(\alpha, \beta)$- and $(\alpha', \beta')$-Hom-Hopf algebras. If $f: H \longrightarrow K$ is a map of Hom-bialgebras then for every $h\in H$ there exists $n\in \mathbb{N}$ such that \begin{align}\label{Hopf map condition} \alpha'^n \circ ( f\circ S)\circ \beta^2(h) = \alpha'^n\circ ( S'\circ f)\circ\beta^2(h). \end{align} \end{proposition} \begin{proof} By the previous Lemmas, $f \circ S$ and $ S' \circ f $ are relative Hom-inverses of $f$ in $ \mathop{\rm Hom}\nolimits( H, K)$. The result then follows from Proposition \ref{important}. \end{proof} Condition (\ref{Hopf map condition}) is called the relative Hom-Hopf algebra map property of $f$. As a special case of the previous Proposition we have the following result. \begin{corollary}\label{hopf morphism} Let $(H, m, \eta, \alpha, \Delta, \varepsilon, \beta, S)$ and $(K, m', \eta', \alpha', \Delta', \varepsilon', \beta', S')$ be Hom-Hopf algebras where $\alpha, \beta, \alpha',\beta'$ are invertible. Then any Hom-bialgebra map $f: H \longrightarrow K$ is a Hom-Hopf algebra map, i.e., $$f\circ S= S' \circ f.$$ \end{corollary} \begin{corollary} Let $(H, m, \eta, \alpha, \Delta, \varepsilon, \beta, S)$ be a Hom-Hopf algebra where $\alpha=\beta$. Then for every $h\in H$ there exists $k\in\mathbb{N}$ such that \begin{equation} \alpha^k \circ(\alpha\circ S)\circ \alpha^2 (h) = \alpha^k\circ( S\circ \alpha)\circ\alpha^2 (h). \end{equation} If $\alpha$ is invertible then $ \alpha \circ S = S\circ \alpha$.
\end{corollary} \begin{proof} Since $\alpha=\beta$, the map $\alpha$ is a map of Hom-bialgebras and the result follows from Proposition \ref{Hopf map}. \end{proof} Here we summarize some of the results in this section. \begin{theorem} Let $(H, m, \eta, \alpha, \Delta, \varepsilon, \beta)$ be a Hom-bialgebra where $\alpha$ and $\beta$ are morphisms of algebras and coalgebras, respectively. Assume $H$ is endowed with a map $S: H\longrightarrow H$ where $S$ is a relative Hom-inverse of the identity map $\mathop{\rm Id}\nolimits : H \longrightarrow H$ with respect to the convolution product. Then $S$ is a relative Hom-anti-algebra map, a relative Hom-anti-coalgebra map, relative Hom-unital, and relative Hom-counital, i.e., for every $x, y, h\in H$ there exists $k\in\mathbb{N}$ such that\\ i) $\alpha^{k+2}( S(\beta^2(x) \beta^2(y)))= \alpha^{k+2} (S(\beta^2(y)) S(\beta^2(x))).$ ii) $\alpha^{k+2}[ S(\beta^2(x) )^{(1)}] \otimes \alpha^{k+2}[ S(\beta^2(x) )^{(2)}] = \alpha^{k+2} [S(\beta^2(x^{(2)}))]\otimes \alpha^{k+2} [S(\beta^2(x^{(1)}))].$ iii) $\alpha^{k+1}( S(1))=1.$ iv) $ \varepsilon (\alpha^k(S(h)))=\varepsilon(h)$. v) If $\alpha=\beta$ then $\alpha^k \circ(\alpha\circ S)\circ \alpha^2 (h) = \alpha^k\circ( S\circ \alpha)\circ\alpha^2(h) .$\\ Furthermore, if $\alpha$ and $\beta$ are invertible then $S$ is an anti-algebra and an anti-coalgebra map and $\alpha \circ S= S \circ \alpha$. \end{theorem} \begin{proposition} Let $(H, m, \eta, \alpha, \Delta, \varepsilon, \beta, S)$ be a commutative Hom-Hopf algebra. Then for every $x\in H$ there exists $k\in \mathbb{N}$ such that $$\alpha^{k+2} \circ S^2\circ \beta^2 (x)= \alpha^{k+2} \circ \mathop{\rm Id}\nolimits \circ \beta^2(x).$$ \end{proposition} \begin{proof} We show that $S^2$ is a relative Hom-inverse of $S$. For any $h\in H$ there exists $k\in \mathbb{N}$ such that $\alpha^k(S(h^{(1)}) h^{(2)}) = \alpha^k (h^{(1)} S(h^{(2)}))= \varepsilon(h)1.$ Therefore \begin{align*} \alpha^k(S\star S^2)(h)= & \alpha^k [S(h^{(1)}) S^2(h^{(2)})] \\ &= \alpha^k [S[S(h^{(2)}) h^{(1)} ]] = \alpha^k [S[ h^{(1)}S(h^{(2)}) ]]\\ &= S [\alpha^k[ h^{(1)}S(h^{(2)})] ]= S(\varepsilon(h) 1)= \varepsilon(h)1. \end{align*} We used the anti-algebra map property of $S$ in the second equality, the commutativity of $H$ in the third equality, the commutativity of $S$ and $\alpha$ in the fourth equality, and the unitality of $S$ in the last equality. Similarly it can be shown that $\alpha^k (S^2 \star S)(h)= \varepsilon(h)1$. Therefore $S^2$ and the identity map $\mathop{\rm Id}\nolimits_H$ are both relative Hom-inverses of $S$. Now the result follows from Proposition \ref{important}. \end{proof} As a special case of the previous Proposition, we have the following result. \begin{corollary} If $(H, m, \eta, \alpha, \Delta, \varepsilon, \beta, S)$ is a commutative Hom-Hopf algebra with invertible $\alpha$ and $\beta$ then $$S^2=\mathop{\rm Id}\nolimits.$$ \end{corollary} \begin{proposition} Let $(H, m, \eta, \alpha, \Delta, \varepsilon, \beta, S)$ be a cocommutative Hom-Hopf algebra. Then for every $x\in H$ there exists $k\in \mathbb{N}$ such that $$\alpha^{k+2} \circ S^2\circ \beta^2 (x)= \alpha^{k+2} \circ \mathop{\rm Id}\nolimits \circ \beta^2(x).$$ \end{proposition} \begin{proof} We show that $S^2$ is a relative Hom-inverse of $S$. For any $h\in H$ there exist $k', k''\in \mathbb{N}$ such that $\alpha^{k'}(S(h^{(1)}) h^{(2)}) = \alpha^{k'} (h^{(1)} S(h^{(2)}))= \varepsilon(h)1,$ and $\alpha^{k''}(S(S(h)^{(1)}) S(h)^{(2)}) = \alpha^{k''} (S(h)^{(1)} S(S(h)^{(2)}))= \varepsilon(S(h))1.$ Let $k=\max (k', k'')$.
Therefore \begin{align*} \alpha^k(S\star S^2)(h)= & \alpha^k [S(h^{(1)}) S^2(h^{(2)})] \\ &= \alpha^k [S(h)^{(2)} S (S(h)^{(1)} )] = \alpha^k [S(h)^{(1)} S ( S(h)^{(2)}) ]\\ &= \alpha^k(\varepsilon(S(h))1)= \varepsilon(S(h))\alpha^k(1)= \varepsilon(h) 1. \end{align*} We used the anti-coalgebra map property of $S$ in the second equality, the cocommutativity of $H$ in the third equality, and the counitality of the antipode in the fifth equality. Similarly $\alpha^k (S^2 \star S)(h)= \varepsilon(h) 1$. Therefore $S^2$ and the identity map $\mathop{\rm Id}\nolimits_H$ are both relative Hom-inverses of $S$. The result then follows from Proposition \ref{important}. \end{proof} As a special case of the previous Proposition we have the following result. \begin{corollary} If $(H, m, \eta, \alpha, \Delta, \varepsilon, \beta, S)$ is a cocommutative Hom-Hopf algebra with invertible $\alpha$ and $\beta$ then $$S^2=\mathop{\rm Id}\nolimits.$$ \end{corollary} \begin{proposition} Let $(H, m, \eta, \alpha, \Delta, \varepsilon, \beta, S)$ be a Hom-Hopf algebra and $h\in H$ a primitive element, i.e., $\Delta(h)= 1\otimes h+ h\otimes 1$. Then there exists $k\in \mathbb{N}$ such that \begin{equation} \alpha^{k+1} (S(h))= -\alpha^{k+1} (h). \end{equation} \end{proposition} \begin{proof} There exists $k\in \mathbb{N}$ such that $\alpha^{k}(S(h^{(1)}) h^{(2)}) = \alpha^{k} (h^{(1)} S(h^{(2)}))= \varepsilon(h)1.$ Therefore $$\alpha^k( hS(1)) + \alpha^k(1S(h))=\varepsilon(h)1.$$ Since $S$ is unital and for any $x\in H$ we have $1x=x1=\alpha(x)$, then $$\alpha^{k+1}(h) + \alpha^{k+1}(S(h)) = \varepsilon(h)1.$$ By \cite{lmt}, for any primitive element $h$, we have $\varepsilon(h)=0$. Therefore $\alpha^{k+1} (S(h))= -\alpha^{k+1} (h).$ \end{proof} \begin{proposition} Let $(H, m, \eta, \alpha, \Delta, \varepsilon, \beta, S)$ be a Hom-Hopf algebra and $h\in H$ a group-like element, i.e., $\Delta(h)= h\otimes h$. Then there exists $k\in \mathbb{N}$ such that \begin{equation} \alpha^{k} (S(h)h)= \alpha^k ( h S(h))=1. \end{equation} \end{proposition} \begin{proof} There exists $k\in \mathbb{N}$ such that $\alpha^{k}(S(h^{(1)}) h^{(2)}) = \alpha^{k} (h^{(1)} S(h^{(2)}))= \varepsilon(h)1.$ Then $$\alpha^k(S(h)h)= \alpha^k( hS(h))=\varepsilon(h)1.$$ Now the result follows from the fact that $\varepsilon(h)=1$. \end{proof} By the previous Proposition, the relative Hom-inverse of a group-like element $h$ is $S(h)$. \end{document}
\begin{document} \title[Local Finiteness of the Twisted Bruhat Orders]{Local Finiteness of the Twisted Bruhat Orders\\ on Affine Weyl Groups} \author{Weijia Wang} \address{School of Mathematics (Zhuhai) \\ Sun Yat-sen University \\ Zhuhai, Guangdong, 519082 \\ China} \email{[email protected]} \date{\today} \begin{abstract} In this paper we investigate various properties of strong and weak twisted Bruhat orders on a Coxeter group. In particular we prove that any twisted strong Bruhat order on an affine Weyl group is locally finite, strengthening a result of Dyer in J. Algebra, 163, 861--879 (1994). We also show that for a non-finite and non-cofinite biclosed set $B$ in the positive system of an affine root system with rank greater than 2, the set of elements having a fixed $B$-twisted length is infinite. This implies that the twisted strong and weak Bruhat orders have an infinite antichain in those cases. Finally we show that twisted weak Bruhat order can be applied to the study of the tope poset of an infinite oriented matroid arising from an affine root system. \end{abstract} \maketitle \section{Introduction} Twisted strong and weak Bruhat orders on a Coxeter group were introduced by Dyer and the author respectively. These generalizations of the ordinary strong and weak Bruhat orders have found connections with problems related to reflection orders, representation theory and combinatorics of infinite reduced words. In \cite{DyerTwistedBruhat} it is shown that Kazhdan-Lusztig type polynomials can be defined for certain intervals of the twisted strong Bruhat orders and these polynomials are used to formulate a conjecture regarding the representation of Kac-Moody Lie algebras. In \cite{Gobet}, twisted strong Bruhat order is studied for the twisted filtration of the Soergel bimodules. In \cite{chenyu}, twisted strong Bruhat order is related to the poset of $B\times B$-orbits on the wonderful compactification of algebraic groups. In \cite{orderpaper}, the semilattice property of twisted weak Bruhat order is shown to characterize the biclosed sets arising from the infinite reduced words in affine cases. In this paper, we prove several properties regarding the structure of twisted Bruhat orders which are not previously known. First we study the finite intervals of twisted strong Bruhat orders. Finite intervals are of particular importance as the Kazhdan-Lusztig type polynomials in \cite{DyerTwistedBruhat} can only be defined for finite intervals with other favorable properties. It is not previously known whether for any infinite biclosed set $B$, the $B$-twisted strong Bruhat order on an affine Weyl group is locally finite, i.e. any interval of this poset is finite. In \cite{quotient}, Dyer gives a partial answer to the question, providing a technical condition guaranteeing the local finiteness. In section \ref{localfinite} we solve the problem in the positive, showing that such a poset is always locally finite. Indeed we prove a stronger fact for affine Weyl groups: the set $\{y|y\leq_B x, l_B(x)-l_B(y)=n\}$ is finite for any $x$ and biclosed set $B$. The proof exploits the explicit description of biclosed sets in the affine root system first given in \cite{DyerReflOrder}. In Section \ref{hyperinterval}, we consider the question whether for some biclosed set $B$, the strong $B$-twisted Bruhat order on a nonaffine Coxeter group is not locally finite. 
We propose a procedure to obtain an infinite interval for certain $B.$ In Section \ref{fixlength}, we show that while a twisted strong Bruhat order on an affine Weyl group is locally finite, the set of elements with a fixed twisted length is always infinite provided the twisting biclosed set is neither finite nor cofinite and the rank of the affine Weyl group is greater than 2. This result implies that the twisted strong and weak Bruhat orders have an infinite antichain in those cases. The proof makes use of the properties of twisted weak Bruhat order and an explicit description of the biclosed sets in the affine root system. In Section \ref{examplesec}, we present the structure of a twisted strong Bruhat order on the affine Weyl group $\widetilde{W}$ of type $\widetilde{A}_2$. Such a description of the structure of a twisted Bruhat order (which is not isomorphic to the ordinary Bruhat order or its opposite) was previously known only for $\widetilde{A}_1$. In Section \ref{omsec}, we study the tope poset of the infinite oriented matroid from an affine root system. We first describe all hemispaces (topes) of such an oriented matroid and then show that twisted weak Bruhat orders show up in the tope poset. These results allow us to prove that certain finite intervals in the tope poset are lattices. \section{Preliminaries} Let $B$ be a set. Denote by $|B|$ the cardinality of $B$. We denote the disjoint union of sets by $\uplus$. We refer the reader to \cite{bjornerbrenti} and \cite{Hum} for the basic notions of Coxeter groups and their root systems. We call a Coxeter system without braid relations a universal Coxeter system. \subsection{Biclosed Sets of Coxeter Groups} Let $(W,S)$ be a Coxeter system. Denote by $T$ the set of reflections. Given a root $\alpha$, denote by $s_{\alpha}$ the corresponding reflection. Let $t\in T$ be a reflection. Denote by $\alpha_t$ the corresponding positive root. For $w\in W$, the inversion set of $w^{-1}$ is defined to be $\{\alpha\in \Phi^+|w^{-1}(\alpha)\in \Phi^-\}$ and is denoted by $N(w)$. For a general Coxeter system $(W,S)$, $\Phi,\Phi^+,\Phi^-$ denote the set of roots, positive roots and negative roots respectively. We denote by $l(w)$ the (usual) length of $w\in W.$ A set $\Gamma\subset\Phi$ is closed if for any $\alpha,\beta\in \Gamma$ and $k_1\alpha+k_2\beta\in\Phi$ with $k_1,k_2\in \mathbb{R}_{\geq 0}$, one has that $k_1\alpha+k_2\beta\in\Gamma$. A set $B\subset \Gamma$ such that both $B$ and $\Gamma\backslash B$ are closed is called a biclosed set in $\Gamma$. Finite biclosed sets in $\Phi^+$ are precisely the inversion sets $N(x)$ for $x\in W$ (\cite{DyerWeakOrder} Lemma 4.1(d)). If $s_1s_2\cdots s_k$ is a reduced expression of $x$, then $N(x)=\{\alpha_{s_1},s_1(\alpha_{s_2}),\cdots,s_1s_2\cdots s_{k-1}(\alpha_{s_k})\}$. Denote $\widetilde{N}(x)=\{t\in T|\alpha_t\in N(x)\}$. We say a positive root $\beta$ dominates another positive root $\alpha$ if, for every $w\in W$, $\beta\in N(w)$ implies $\alpha\in N(w).$ Suppose that $W$ is infinite. An infinite sequence $s_1s_2s_3\cdots, s_i\in S$ is called an infinite reduced word of $W$ provided that $s_1s_2\cdots s_j$ is reduced for any $j\geq 1$. For an infinite reduced word $x=s_1s_2\cdots$, define the inversion set $N(x)=\cup_{i=1}^{\infty}N(s_1s_2\cdots s_i)$. Two infinite reduced words $x,y$ are considered equal if $N(x)=N(y)$. The inversion set of an infinite reduced word is biclosed in $\Phi^+$.
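For instance, in type $A_2$ with simple roots $\alpha,\beta$, so that $\Phi^+=\{\alpha,\beta,\alpha+\beta\}$, one has $$N(s_\alpha s_\beta)=\{\alpha_{s_\alpha},\,s_\alpha(\alpha_{s_\beta})\}=\{\alpha,\,\alpha+\beta\},\qquad N(s_\alpha s_\beta s_\alpha)=\{\alpha,\,\alpha+\beta,\,\beta\}=\Phi^+,$$ and both of these sets are biclosed in $\Phi^+$.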
A (finite) prefix of an infinite reduced word $w$ is an element $u$ in the Coxeter group $W$ such that $N(u)\subset N(w).$ The set of infinite reduced words of $(W,S)$ is denoted by $W_l$. For an inversion set $N(w)$ we denote by $N(w)'$ the complement of $N(w)$ in $\Phi^+$. An element $w\in W$ is said to be straight if $l(w^n)=|n|\,l(w)$ for all $n\in \mathbb{Z}.$ It is shown in \cite{speyer} that if $W$ is infinite, any Coxeter element is straight. Choose a reduced expression $\underline{w}$ of a straight element $w$; then $\underline{w}\underline{w}\underline{w}\cdots$ defines an infinite reduced word independent of the choice of $\underline{w}$, and we denote it by $w^{\infty}$. There exists a $W$-action on the set of all biclosed sets in $\Phi^+$ given by $w\cdot B:=(N(w)\backslash w(-B))\cup (w(B)\backslash (-N(w)))$. In particular $u\cdot N(v)=N(uv)$ for $u\in W, v\in W\cup W_l.$ We shall call a set $B$ in $\Phi^+$ cofinite if $\Phi^+\backslash B$ is finite. \subsection{Twisted Bruhat Order and Twisted Weak Bruhat Order} Let $B$ be a biclosed set in $\Phi^+$. The left (resp. right) ($B$-)twisted length of an element $w\in W$ is defined as $l(w)-2|N(w^{-1})\cap B|$ (resp. $l(w)-2|N(w)\cap B|$) and is denoted by $l_B(w)$ (resp. $l'_B(w)$). Clearly $l_B(w)=l_B'(w^{-1})$ and $l_{\emptyset}(w)=l'_{\emptyset}(w)=l(w)$. Fix a biclosed set $B$ in $\Phi^+$; the left (resp. right) ($B$-)twisted strong Bruhat order $\leq_B$ on $W$ is defined as follows: $x\leq_B y$ if and only if $y=t_kt_{k-1}\cdots t_1x$ with $l_B(t_i\cdots t_1x)=l_B(t_{i-1}\cdots t_1x)+1, t_i\in T, 1\leq i\leq k$ (resp. $l_B'(xt_1\cdots t_{i})=l_B'(xt_1\cdots t_{i-1})+1, t_i\in T, 1\leq i\leq k$). One easily sees that the left twisted strong Bruhat order is isomorphic to the right twisted strong Bruhat order under the map $w\mapsto w^{-1}$. The left (resp. right) $\emptyset-$twisted strong Bruhat order is the ordinary Bruhat order. For $x\in W$ it is known that the left (resp. right) $N(x)-$twisted strong Bruhat order is isomorphic to the ordinary Bruhat order. Following the convention in the literature, we consider the left twisted strong Bruhat orders instead of the right ones and often refer to left twisted strong Bruhat orders as twisted Bruhat orders. The left $B-$twisted strong Bruhat order can also be characterized equivalently as the unique partial order on $W$ with the following property: for $t\in T, w\in W$, if $\alpha_t\in w\cdot B$ then $tw\leq_B w$, and if $\alpha_t\not\in w\cdot B$ then $tw\geq_B w.$ A closed interval $[x,y]$ in a twisted (strong) Bruhat order is said to be spherical if any length 2 subinterval of it contains 4 elements. It is known that for a closed spherical interval $[x,y]$, the order complex of the corresponding open interval $(x,y)$ is a sphere. The left (resp. right) ($B$-)twisted weak Bruhat order $\leq_B'$ on $W$ is defined as follows: $x\leq_B' y$ if and only if $N(x^{-1})\backslash N(y^{-1})\subset B$ and $N(y^{-1})\backslash N(x^{-1})\subset \Phi^+\backslash B$ (resp. $N(x)\backslash N(y)\subset B$ and $N(y)\backslash N(x)\subset \Phi^+\backslash B$). The left $B$-twisted weak Bruhat order is isomorphic to the right $B$-twisted weak Bruhat order under the map $w\mapsto w^{-1}$. For $x\in W$ it is known that the left (resp. right) $N(x)-$twisted weak Bruhat order is isomorphic to the ordinary left (resp. right) weak order. If $x\leq_B' y$ (under the left (resp. right) $B$-twisted weak Bruhat order), one has that $x\leq_B y$ (under the left (resp. right) $B$-twisted strong Bruhat order). The left (resp.
right) $\emptyset-$twisted weak Bruhat order is the ordinary left (resp. right) weak Bruhat order. Following the convention in the literature, we consider the right twisted weak Bruhat orders instead of the left ones and often refer to right twisted weak Bruhat orders as twisted weak orders. The (left) twisted (strong) Bruhat order (resp. (right) twisted weak (Bruhat) order) is graded by the left twisted length function (resp. right twisted length function). Suppose that $u\leq_B' v$ (under the right $B$-twisted weak order). It is shown in \cite{orderpaper} that one has a chain: $$u=u_0<_B'u_1<_B'u_2<_B'\cdots <_B' u_t=v$$ such that $u_{i}s=u_{i+1}$ for some $s\in S$ and $l_B'(u_i)+1=l_B'(u_{i+1})$. We will refer to this property as the chain property of the twisted weak order and write $u_i\vartriangleleft u_{i+1}$. Let $B$ be an inversion set of an element in $W$. Then $(W,\leq_B')$ is isomorphic to $(W,\leq_{\emptyset}')$. It is known that $(W,\leq_{\emptyset}')$ is a complete meet semilattice (Chapter 3 of \cite{bjornerbrenti}). Let $B$ be the inversion set of an infinite reduced word. It is shown in \cite{orderpaper} that $(W,\leq_B')$ is a non-complete meet semilattice and for an affine Weyl group $W$, $(W,\leq_B')$ is a non-complete meet semilattice only if $B$ is the inversion set of an infinite reduced word. \subsection{Construction of Affine Weyl Groups and Their Root Systems} Let $W$ be an irreducible Weyl group with the crystallographic root system $\Phi$ contained in the Euclidean space $V$. For these notions, see Chapter 1 of \cite{Hum}. The root system of an (irreducible) affine Weyl group $\widetilde{W}$ (corresponding to $W$) can be constructed as the ``loop extension" of $\Phi$. We describe such a construction. Let $\Phi^+$ be the chosen standard positive system of $\Phi$ and let $\Delta$ be the simple system of $\Phi^+.$ Define a $\mathbb{R}-$vector space $V'=V\oplus\mathbb{R}\delta$ where $\delta$ is an indeterminate. Extend the inner product on $V$ to $V'$ by requiring $(\delta,v)=0$ for any $v\in V'$. If $\alpha\in \Phi^+$, define $\{\alpha\}^{\wedge}=\{\alpha+n\delta|n\in \mathbb{Z}_{\geq 0}\}\subset V'$. If $\alpha\in \Phi^-$, define $\{\alpha\}^{\wedge}=\{\alpha+(n+1)\delta|n\in \mathbb{Z}_{\geq 0}\}\subset V'$. For a set $\Lambda\subset \Phi$, define $\Lambda^{\wedge}=\bigcup_{\alpha\in\Lambda}\{\alpha\}^{\wedge}\subset V'$. The set of roots of the affine Weyl group $\widetilde{W}$, denoted by $\widetilde{\Phi}$, is $\Phi^{\wedge}\uplus-\Phi^{\wedge}$. The set of positive roots and the set of negative roots are $\Phi^{\wedge}$ and $-\Phi^{\wedge}$ respectively. The set of simple roots is $\{\alpha|\alpha\in \Delta\}\cup\{\delta-\rho\}$ where $\rho$ is the highest root in $\Phi^+$. Let $\alpha\in \widetilde{\Phi}$ be a root. The reflection in $\alpha$, denoted by $s_{\alpha}$, is an $\mathbb{R}-$linear map $V'\rightarrow V'$ defined by $v\mapsto v-2\frac{(v,\alpha)}{(\alpha,\alpha)}\alpha.$ The (irreducible) affine Weyl group $\widetilde{W}$ is generated by $s_{\alpha},\alpha\in \widetilde{\Phi}$. The simple reflections are the reflections in the simple roots of $\Phi^{\wedge}$. For $v\in V$, define the $\mathbb{R}-$linear map $t_v$ which acts on $V'$ by $t_v(u)=u+(u,v)\delta.$ For $\alpha\in \Phi,$ define the coroot $\alpha^{\vee}=2\alpha/(\alpha,\alpha)$. Note that $s_{\alpha+n\delta}=s_{\alpha}t_{-n\alpha^{\vee}}$. Let $T$ be the free Abelian group generated by $\{t_{\gamma^{\vee}}|\gamma\in \Delta\}$. 
Then it is known that $\widetilde{W}=W\ltimes T.$ Let $\pi$ be the canonical projection from $\widetilde{W}$ to $W$. If $\alpha\in \Phi^+$, let $(\alpha)_0=\alpha\in \Phi^{\wedge}$. If $\alpha\in \Phi^-$, let $(\alpha)_0=\alpha+\delta\in \Phi^{\wedge}$. We also introduce the following compact notation: $\alpha_a^b:=\{\alpha+k\delta|a\leq k\leq b\}$ for $a,b\in \mathbb{Z}\cup \{\pm\infty\}.$ We also denote $\cup_{i=1}^t(\alpha_i)_{a_i}^{b_i}$ by $(\alpha_1)_{a_1}^{b_1}(\alpha_2)_{a_2}^{b_2}\cdots(\alpha_t)_{a_t}^{b_t}$. \subsection{Biclosed Sets of an Affine Weyl Group}\label{biclosedaffine} Let $\Phi$ be a finite irreducible crystallographic root system with $\Phi^+$ the standard positive system and $\Delta$ the simple system. Suppose that $\Delta'\subset \Phi$. Denote by $\Phi_{\Delta'}$ the root subsystem generated by $\Delta'$. It is shown in \cite{biclosedphi} that the biclosed sets in $\Phi$ are those $(\Psi^+\backslash \Phi_{\Delta_1})\cup \Phi_{\Delta_2}$ where $\Psi^+$ is a positive system of $\Phi$ and $\Delta_1,\Delta_2$ are two orthogonal subsets (i.e. $(\alpha,\beta)=0$ for any $\alpha\in \Delta_1,\beta\in \Delta_2$) of the simple system of $\Psi^+.$ For simplicity we denote the set $(\Psi^+\backslash \Phi_{\Delta_1})\cup \Phi_{\Delta_2}$ by $P(\Psi^+,\Delta_1,\Delta_2)$. The biclosed sets in $\Phi^{\wedge}$ are determined in \cite{DyerReflOrder}. Let $W'$ be the reflection subgroup of $\widetilde{W}$ generated by the reflections $s_{\alpha}, \alpha\in (\Delta_1\cup\Delta_2)^{\wedge}$. Then any biclosed set in $\widetilde{\Phi}^+(=\Phi^{\wedge})$ is of the form $w\cdot P(\Psi^+,\Delta_1,\Delta_2)^{\wedge}$ for some $\Psi^+,\Delta_1,\Delta_2$ and $w\in \widetilde{W}.$ In particular a biclosed set $w\cdot P(\Psi^+,\Delta_1,\Delta_2)^{\wedge}$ where $w\in W'$ differs from $P(\Psi^+,\Delta_1,\Delta_2)^{\wedge}$ by finitely many roots. Suppose that $B$ is a biclosed set in $\widetilde{\Phi}^+=\Phi^{\wedge}$. Let $I_B=\{\alpha\in \Phi|\,\{\alpha\}^{\wedge}\cap B$ is infinite$\}$ and $A_B=\{\alpha\in \Phi|\,\{\alpha\}^{\wedge}\cap B\neq \emptyset\}$. It is known that $I_B$ is biclosed in $\Phi$. Those biclosed sets $B$ such that $I_B=P(\Psi^+,\Delta_1,\Delta_2)$ are precisely the biclosed sets of the form $w\cdot P(\Psi^+,\Delta_1,\Delta_2)^{\wedge}$ where $w\in W'$. It is shown in \cite{wang} that if $B$ is an inversion set of an element of the affine Weyl group $\widetilde{W}$ or an infinite reduced word in $\widetilde{W}_l$, then $A_B\cap -A_B=\emptyset.$ The following theorem is proved in \cite{wang}. \begin{theorem}\label{infiniteword} For an affine Weyl group, the biclosed sets which are inversion sets of infinite reduced words are precisely those of the form $w\cdot P(\Psi^+,\Delta_1,\emptyset)^{\wedge}, w\in \widetilde{W},\Delta_1\subsetneq \Delta$. \end{theorem} \section{Twisted Bruhat Orders on an Affine Weyl Group Are Locally Finite}\label{localfinite} Throughout this paper, unless otherwise specified, $W$ is a finite irreducible Weyl group and $\widetilde{W}$ is the corresponding irreducible affine Weyl group. Denote by $\Phi$ the crystallographic root system of $W$ and denote by $\Phi^+$ a chosen standard positive system. Let $\alpha\in \Phi$. A finite $\delta$-chain through $\alpha$ is a set $\{\alpha+d\delta|p\leq d\leq q\}$ where $p\leq q$ are two integers. \begin{lemma}\label{dominance} (1) Suppose that $\alpha\in \Phi$ and $k\in \mathbb{Z}_{\geq 0}$. Then in $\widetilde{\Phi}^+$, $\alpha_0+k\delta$ dominates the roots $\alpha_0,\alpha_0+\delta,\cdots,\alpha_0+(k-1)\delta.$ (2) Let $w\in \widetilde{W}$. 
Then $w$ carries a finite $\delta$-chain through $\alpha\in \Phi$ to another finite $\delta$-chain through some $\beta\in \Phi.$ In particular $\beta=\pi(w)(\alpha).$ \end{lemma} \begin{proof} (1) is well known. See for example Lemma 4.1 in \cite{wang}. (2) follows from the formula \begin{equation}\label{action} s_{\alpha+p\delta}(\beta+q\delta)=s_{\alpha}(\beta)+(q-(\beta,\alpha^{\vee})p)\delta \end{equation} and the fact $\pi(s_{\alpha+p\delta})=\pi(s_{\alpha}t_{-p\alpha^{\vee}})=s_{\alpha}$. \end{proof} \begin{lemma}\label{structure} (1) $N(s_{\alpha+n\delta})$ has the following structure: (i) Suppose that $\beta\in \Phi^+, s_{\alpha}(\beta)\in \Phi^+$. Then for $0\leq k<\frac{2(\alpha,\beta)}{(\alpha,\alpha)}n$, $\beta+k\delta\in N(s_{\alpha+n\delta}).$ (ii) Suppose that $\beta\in \Phi^+, s_{\alpha}(\beta)\in \Phi^-$. Then for $0\leq k\leq\frac{2(\alpha,\beta)}{(\alpha,\alpha)}n$, $\beta+k\delta\in N(s_{\alpha+n\delta}).$ (iii) Suppose that $\beta\in \Phi^-, s_{\alpha}(\beta)\in \Phi^+$. Then for $0<k<\frac{2(\alpha,\beta)}{(\alpha,\alpha)}n$, $\beta+k\delta\in N(s_{\alpha+n\delta}).$ (iv) Suppose that $\beta\in \Phi^-, s_{\alpha}(\beta)\in \Phi^-$. Then for $0<k\leq\frac{2(\alpha,\beta)}{(\alpha,\alpha)}n$, $\beta+k\delta\in N(s_{\alpha+n\delta}).$ (2) If there exists a $\delta$-chain through $\beta$ beginning from $\beta_0$ in $N(s_{\alpha+n\delta})$ then there also exists a $\delta$-chain through $-s_{\alpha}(\beta)$ beginning from $(-s_{\alpha}(\beta))_0$ in $N(s_{\alpha+n\delta})$. In addition, these two chains are of equal length. \end{lemma} \begin{proof} (1) follows from Equation \eqref{action}. To see (2), one notes that $(-s_{\alpha}(\beta),\alpha)=(s_{\alpha}(\beta),-\alpha)=(s_{\alpha}(\beta),s_{\alpha}(\alpha))=(\beta,\alpha)$ and then uses (1). \end{proof} \begin{lemma} When $n$ is sufficiently large, the inversion set $N(ws_{\alpha+n\delta})$ consists of some finite $\delta$-chains (through roots in $\Phi$) whose lengths are independent of $n$ and the following $\delta$-chains: $$(\pi(w)(\alpha))_0, (\pi(w)(\alpha))_0+\delta,\cdots, (\pi(w)(\alpha))_0+k_0\delta,$$ $$(\beta_1)_0, (\beta_1)_0+\delta,\cdots, (\beta_1)_0+k_1\delta,$$ $$(\beta_2)_0, (\beta_2)_0+\delta,\cdots, (\beta_2)_0+k_2\delta,$$ $$\cdots$$ $$(\beta_t)_0, (\beta_t)_0+\delta,\cdots, (\beta_t)_0+k_t\delta$$ where $\beta_i\in \Phi, 1\leq i\leq t$, and one can pair each $\beta_i$ with some $\beta_j, j\neq i$, such that $p\beta_i+p\beta_j=\pi(w)(\alpha)$ for some $p>0$. Furthermore when $n$ is changed to $n+1$, the increments of the lengths of the $\delta$-chains through $\beta_i$ and through $\beta_j$ coincide. \end{lemma} \begin{proof} By Lemma 4.1(e) of \cite{DyerWeakOrder} $$N(ws_{\alpha+n\delta})=(N(w)\backslash (-wN(s_{\alpha+n\delta})))\uplus (wN(s_{\alpha+n\delta})\backslash (-N(w))).$$ By Lemma \ref{dominance} (1) $N(w)$ consists of some finite $\delta$-chains through certain roots in $\Phi$. Therefore we conclude that $(N(w)\backslash (-wN(s_{\alpha+n\delta})))$ consists of some finite $\delta$-chains (through roots in $\Phi$) whose lengths are independent of $n$ (when $n$ is sufficiently large). By Lemma \ref{structure} (1) and Lemma \ref{dominance} (2), one sees that $wN(s_{\alpha+n\delta})\cap \widetilde{\Phi}^+=wN(s_{\alpha+n\delta})\backslash (-N(w))$ consists of $\delta$-chains through certain roots in $\Phi$ and that when $n$ grows the upper bound of the coefficient of $\delta$ in each of these chains grows accordingly. 
Again Lemma \ref{dominance} (1) forces that any $\delta$-chain through a root $\beta$ must start from $\beta_0$ in $N(ws_{\alpha+n\delta})$. Therefore we have shown that $N(ws_{\alpha+n\delta})$ consists of roots of the form listed in the lemma. We still need to show the existence of the pairing described in the lemma. Now suppose $\beta_i=\pi(w)(\beta)$. Then the root $\beta_j=\pi(w)(-s_{\alpha}(\beta))$ satisfies the condition in this lemma thanks to Lemma \ref{structure} (2). The positivity of $p$ follows from the fact that $(\beta,\alpha)>0$, which holds because otherwise the $\delta$-chain through $\beta$ would not appear in the inversion set of $s_{\alpha+n\delta}$ by Lemma \ref{structure} (1). \end{proof} \begin{lemma}\label{limit} $$\lim_{n\rightarrow \infty} |l_B(s_{\alpha+n\delta}w)|=+\infty $$ for any $B$ biclosed in $\widetilde{\Phi}^+$, $w\in \widetilde{W}, \alpha\in \Phi.$ \end{lemma} \begin{proof} We examine the set $N(ws_{\alpha+n\delta})$ for sufficiently large $n$. By the preceding lemma, besides some finite $\delta$-chains which stay stationary as $n$ grows, this set consists of some finite $\delta$-chains through roots in $\Phi$ which grow as $n$ grows, i.e. $$(\pi(w)(\alpha))_0, (\pi(w)(\alpha))_0+\delta,\cdots, (\pi(w)(\alpha))_0+k_0\delta,$$ $$(\beta_1)_0, (\beta_1)_0+\delta,\cdots, (\beta_1)_0+k_1\delta,$$ $$(\beta_2)_0, (\beta_2)_0+\delta,\cdots, (\beta_2)_0+k_2\delta,$$ $$\cdots$$ $$(\beta_t)_0, (\beta_t)_0+\delta,\cdots, (\beta_t)_0+k_t\delta$$ Also one can pair those $\beta_i$'s: for each $\beta_i$ one can find $\beta_j, j\neq i$, such that $p\beta_i+p\beta_j=\pi(w)(\alpha), p>0$. Furthermore when $n$ is changed to $n+1$ the increments of the lengths of the $\delta$-chains through $\beta_i$ and through $\beta_j$ coincide. Now we consider the set $I_B=\{\alpha\in \Phi|\{\alpha\}^{\wedge}\cap B$ is infinite$\}$. Then $I_B$ is a biclosed set in $\Phi.$ Also by Lemma 5.10 in \cite{DyerReflOrder} for any $\alpha\in I_B$, there exists $N_{\alpha}\in \mathbb{Z}_{>0}$ such that $\alpha+n\delta\in B$ for all $n>N_{\alpha}$. Suppose that $\pi(w)(\alpha)\in I_B$. Then for a pair of $\beta_i,\beta_j$ either one of them is in $I_B$ and the other is in $\Phi\backslash I_B$, or both of them are in $I_B$, since otherwise $p\beta_i+p\beta_j=\pi(w)(\alpha)$ would have to be in $\Phi\backslash I_B$ by coclosedness of $I_B.$ So when $n$ is changed to $n+1$, more of the added roots in the inversion set $N(ws_{\alpha+n\delta})$ are in $B$ than in $\widetilde{\Phi}^+\backslash B$, making the twisted length decrease. Similarly if $\pi(w)(\alpha)\not\in I_B$ one can deduce that when $n$ is changed to $n+1$, more of the added roots in the inversion set $N(ws_{\alpha+n\delta})$ are in $\widetilde{\Phi}^+\backslash B$ than in $B$, making the twisted length increase. Therefore we see that the absolute value of the twisted length tends to infinity as $n\rightarrow \infty.$ \end{proof} \begin{proposition}\label{finiteunder} For any $x\in \widetilde{W}$, $n\in \mathbb{N}$ and a biclosed set $B$ in $\widetilde{\Phi}^+$, the set $$\{y\in \widetilde{W}|y\leq_B x, l_B(x)-l_B(y)=n\}$$ is finite. \end{proposition} \begin{proof} We prove this by induction on $n$. Let $n=1$ and $\alpha\in \Phi$. The set $\{k|l_B(s_{\alpha+k\delta}x)=l_B(x)-1\}$ is finite by Lemma \ref{limit}. Note that $\Phi$ is finite and therefore the proposition holds when $n=1.$ Now let $n=k$. 
Then $$\{y\in \widetilde{W}|y\leq_B x, l_B(x)-l_B(y)=k\}=$$$$\bigcup_{\alpha\in\Phi,\,m:\,l_B(s_{\alpha+m\delta}x)=l_B(x)-1}\{y\in \widetilde{W}|y\leq_B s_{\alpha+m\delta}x, l_B(s_{\alpha+m\delta}x)-l_B(y)=k-1\}$$ since the twisted Bruhat order is graded by the twisted length function. Now the proposition follows from induction. \end{proof} \begin{theorem} For any biclosed set $B$ in $\widetilde{\Phi}^+$ and any $x,y\in \widetilde{W}$ such that $x<_B y$, the interval $[x,y]$ in the poset $(\widetilde{W},\leq_B)$ is finite. \end{theorem} \begin{proof} If $l_B(y)-l_B(x)=1$ then the assertion is trivial (as $[x,y]=\{x,y\}$). Now suppose that the assertion holds for all pairs of $x,y$ with the properties $x<_B y$ and $l_B(y)-l_B(x)\leq k-1$. Take $x,y\in \widetilde{W}, x<_B y$ and $l_B(y)-l_B(x)=k$. By Proposition \ref{finiteunder}, the set $\{z|x<_B z<_B y, l_B(y)-l_B(z)=1\}$ is finite. For any $z$ such that $x<_B z<_B y, l_B(y)-l_B(z)=1$ the interval $[x,z]$ is finite by the induction assumption. Then $[x,y]$ is also finite since the twisted Bruhat order is graded by the left twisted length function. \end{proof} \section{Construction of an infinite interval in a non-affine Coxeter system}\label{hyperinterval} It is natural to ask whether, given a non-affine infinite Coxeter group $W$, infinite intervals always show up in $(W,\leq_B)$ for some biclosed set $B\subset \Phi^+.$ In \cite{DyerTwistedBruhat} subsection 1.10, it is shown that this is true for a universal Coxeter group of rank $>2$ as follows. Let $(W,S)$ be a universal Coxeter system of rank $3$ with $S=\{s_1,s_2,s_3\}$. Let $B=N((s_1s_2s_3)^{\infty})$. Then the closed interval $[1,s_1s_2s_3]$ is infinite in $(W,\leq_B)$. By \cite{UniversalRefl}, every irreducible infinite non-affine Coxeter group $W$ has a reflection subgroup $W'$ which is universal of rank $3$. Therefore, by the above, for a certain biclosed set $B\subset \Phi_{W'}^+$ (the set of positive roots of $W'$) the left $B-$twisted strong Bruhat order on such a reflection subgroup (denoted $\leq_{W',B}$) has an infinite interval (with bottom element $e$), which we denote by $[e,w]$. Suppose that there exists a biclosed set $A\subset \Phi^+$ such that $A\cap \Phi_{W'}^+=B$. Then by Proposition 1.4 in \cite{DyerTwistedBruhat} (or see subsection 1.12 of \cite{quotient}) we have a poset injection $([e,w],\leq_{W',B})\rightarrow ([e,w],\leq_A)$. Therefore the closed interval $[e,w]$ is also infinite in $(W,\leq_A)$. Now we illustrate this process through a concrete example. Let $(W,S)$ be of $(2,3,\infty)$ type, i.e. $W=\langle s_1, s_2, s_3\mid s_1^2=s_2^2=s_3^2=(s_1s_2)^3=(s_1s_3)^2=e\rangle$ and $S=\{s_1, s_2, s_3\}.$ Denote $\alpha_{s_i}$ by $\alpha_i$. For a reflection subgroup $W'$ of $W$, denote $T\cap W'$ by $T_{W'}$. The root system of $W'$ is $\Phi_{W'}=\{\alpha\in \Phi|s_{\alpha}\in W'\}.$ Consider the reflection subgroup $W'$ generated by $S':=\{s_1, s_2s_3s_2, s_3s_2s_3s_2s_3\}$. Write $r_1=s_1, r_2=s_2s_3s_2, r_3=s_3s_2s_3s_2s_3.$ Note that $(\alpha_{r_i},\alpha_{r_j})=-1$ for $i,j\in \{1,2,3\},i\neq j.$ Also we note that $$\widetilde{N}(s_1)\backslash \{s_1\}=\emptyset,$$ $$\widetilde{N}(s_2s_3s_2)\backslash \{s_2s_3s_2\}=\{s_2,s_2s_3s_2s_3s_2\},$$ $$\widetilde{N}(s_3s_2s_3s_2s_3)\backslash\{s_3s_2s_3s_2s_3\}=\{s_3,s_3s_2s_3, s_3s_2s_3s_2s_3s_2s_3, s_3s_2s_3s_2s_3s_2s_3s_2s_3\}.$$ One easily checks that $r_i$ and $r_j, i\neq j, i,j\in \{1,2,3\}$ generate an infinite dihedral subgroup with $r_i$ and $r_j$ being the canonical simple generators. 
Then applying Proposition 3.5 of \cite{DyerReflSubgrp}, we see that $\widetilde{N}(s_1)\cap T_{W'}=\{s_1\}, \widetilde{N}(s_2s_3s_2)\cap T_{W'}=\{s_2s_3s_2\},$ $\widetilde{N}(s_3s_2s_3s_2s_3)\cap T_{W'}=\{s_3s_2s_3s_2s_3\}.$ Therefore $(W',S')$ is indeed a rank 3 universal Coxeter system. Since the only braid moves that can be applied to $(s_1(s_2s_3s_2)(s_3s_2s_3s_2s_3))^k$ are short braid moves between $s_1$ and $s_3$, the element $s_1(s_2s_3s_2)(s_3s_2s_3s_2s_3)$ is straight. One can also check that $\widetilde{N}(s_1(s_2s_3s_2)(s_3s_2s_3s_2s_3))\cap T_{W'}=\{r_1, r_1r_2r_1, r_1r_2r_3r_2r_1\}$. Consequently $\widetilde{N}((s_1(s_2s_3s_2)(s_3s_2s_3s_2s_3))^{\infty})\cap T_{W'}=\{r_1,r_1r_2r_1,r_1r_2r_3r_2r_1, r_1r_2r_3r_1r_3r_2r_1,r_1r_2r_3r_1r_2r_1r_3r_2r_1,\cdots\}$. Based on the above discussion, one concludes that the interval $[1, s_1(s_2s_3s_2)(s_3s_2s_3s_2s_3)]$ in $(W,\leq_A)$ is infinite where $A=N((s_1(s_2s_3s_2)(s_3s_2s_3s_2s_3))^{\infty})$. \section{Set of elements of a given twisted length}\label{fixlength} In this section we consider the set $\{w\in \widetilde{W}|l_B'(w)=n\}$. In contrast to the situation for the ordinary Bruhat order, we will show that such a set is infinite in most cases. Since $l_B(w^{-1})=l_B'(w)$, our result also shows that the set $\{w\in \widetilde{W}|\,l_B(w)=n\}$ is infinite. It is convenient to investigate this question in the context of the (right) $B$-twisted weak Bruhat order $<_B'$. In the following two lemmas $(W,S)$ is a general finite rank infinite Coxeter system and $\Phi,\Phi^+$ are its root system and its set of positive roots. \begin{lemma}\label{nomaxminone} Let $w\in W_l$ (the set of infinite reduced words). Then $(W,\leq_{N(w)}')$ (resp. $(W,\leq_{\Phi^+\backslash N(w)}')$) has no maximal element or minimal element. \end{lemma} \begin{proof} Suppose to the contrary that $u$ is a maximal element of $(W,\leq_{N(w)}')$ (i.e. for any element $p\in W$ either $u$ and $p$ are not comparable or $u\geq'_{N(w)} p$). Let $S=\{s_1,s_2,\cdots,s_n\}$ be the set of simple reflections. Then $us_i\lhd u$ for all $s_i\in S$. Let $U_i=N(us_i)\backslash N(u)$. These sets are contained in $N(w)$ by definition. Therefore there exists a (finite) prefix $v$ of $w$ such that $U_i\subset N(v)$ for all $i$. Also note that $N(u)\backslash N(us_i)\subset \Phi^+\backslash N(w)\subset \Phi^+\backslash N(v)$. Hence under the twisted weak order $(W,\leq_{N(v)}')$, one has $us_i\lhd u$ for all $i$. Therefore $u$ is a maximal element in $(W,\leq_{N(v)}')$ since if there exists some $u'>u$ then by the chain property of the twisted weak order there must be some $s_i$ such that $us_i\vartriangleright u.$ But $\leq_{N(v)}'$ is isomorphic to the standard right weak order, which has no maximal element when $W$ is infinite. A contradiction. Therefore we have shown that $(W,\leq_{N(w)}')$ has no maximal elements. Now we claim that it cannot happen that there exists $u\in W$ such that $u\leq_{N(w)}' x, \forall x\in W$. Since $N(u)$ is finite, there exists $\alpha\in \Phi^+$ such that $\alpha\in N(w)\backslash N(u)$. Then $N(s_{\alpha})\backslash N(u)\not\subset (\Phi^+\backslash N(w))$. So $u\nleq_{N(w)}' s_{\alpha}$ and we establish the claim. Now for any $u\in W$ we take $x\in W$ such that $u\not\leq_{N(w)}' x$, and then consider the meet of $u$ and $x$ (it exists since $(W,\leq_{N(w)}')$ is a meet semilattice). Such a meet is clearly not equal to $u$ as $u\not\leq_{N(w)}' x$. Therefore $(W,\leq_{N(w)}')$ has no minimal element. 
The dual assertion about $(W,\leq_{\Phi^+\backslash N(w)}')$ can be proved in the same fashion and thus is omitted. \end{proof} \begin{lemma}\label{dotactioniso} Let $B$ be a biclosed set in $\Phi^+$ and $w\in W$. Then as posets $(W,\leq_B')\simeq (W,\leq_{w\cdot B}')$ under the map $u\mapsto wu.$ \end{lemma} \begin{proof} Suppose that $u\leq_B'v$. We show that $N(wu)\backslash N(wv)$ is contained in $w\cdot B=(N(w)\backslash -w(B))\uplus (w(B)\backslash -N(w))$. Note that $N(wu)\backslash N(wv)=((N(w)\backslash -wN(u))\backslash (N(w)\backslash -wN(v)))\uplus ((wN(u)\backslash -N(w))\backslash (wN(v)\backslash -N(w))).$ Take $\alpha\in (N(w)\backslash -wN(u))\backslash (N(w)\backslash -wN(v))$. This implies that $\alpha\in N(w)$ and $-w^{-1}(\alpha)\in N(v)\backslash N(u)$. Since $N(v)\backslash N(u)\subset \Phi^+\backslash B$, $\alpha\not\in -w(B).$ So $\alpha\in N(w)\backslash -w(B).$ Take $\alpha\in (wN(u)\backslash -N(w))\backslash (wN(v)\backslash -N(w))$. This implies that $\alpha\not\in -N(w)$ and $\alpha\in w(N(u)\backslash N(v))$. Therefore $\alpha\in w(B)\backslash -N(w).$ Therefore we see that $N(wu)\backslash N(wv)\subset w\cdot B.$ Next we show that $N(wv)\backslash N(wu)\subset \Phi^+\backslash w\cdot B$. Since $v\leq_{\Phi^+\backslash B}'u$, the above argument shows that $N(wv)\backslash N(wu)\subset w\cdot (\Phi^+\backslash B)$. It follows from the formula $w\cdot B:=(N(w)\backslash w(-B))\cup (w(B)\backslash (-N(w)))$ that $w\cdot (\Phi^+\backslash B)=\Phi^+\backslash w\cdot B.$ Therefore we conclude that $wu\leq_{w\cdot B}' wv$. The above arguments also prove that the inverse map $u\mapsto w^{-1}u$ preserves the order since $w^{-1}\cdot(w\cdot B)=B$ and hence the lemma is proved. \end{proof} Now let $\Phi,\Psi^+$ and $\Delta$ be a finite crystallographic root system, a positive system of it and the corresponding simple system respectively. Suppose that $\Delta_1\subset \Delta$. Let $\Phi_{\Delta_1}=\mathbb{R}\Delta_1\cap \Phi$ be the root subsystem generated by $\Delta_1$. Then $\mathbb{R}_{\geq 0}\Delta_1\cap \Phi=\Phi_{\Delta_1}^+$ is a positive system of $\Phi_{\Delta_1}$. Let $W$ be the finite Weyl group with the root system $\Phi$ and $\widetilde{W}$ be the corresponding affine Weyl group. A subset $\Gamma$ of $\Phi$ is said to be $\mathbb{Z}$-closed if for $\alpha,\beta\in \Gamma$ such that $\alpha+\beta\in \Phi$ one has $\alpha+\beta\in \Gamma.$ \begin{lemma}\label{assemble} Let $\Gamma^+$ be another positive system of $\Phi_{\Delta_1}$. Then $(\Psi^+\backslash \Phi_{\Delta_1}^+)\cup \Gamma^+$ is a positive system of $\Phi$. If $\Delta_2\subset \Delta$ is such that $(\Delta_1,\Delta_2)=0$, then $\Delta_2$ is also a subset of the simple system of $(\Psi^+\backslash \Phi_{\Delta_1}^+)\cup \Gamma^+$. \end{lemma} \begin{proof} Find the element $w$ in the parabolic subgroup generated by $s_{\alpha},\alpha\in \Delta_1$ such that $w\Phi_{\Delta_1}^+=\Gamma^+$. Acting by $w$ on $\Psi^+$, we get $(\Psi^+\backslash \Phi_{\Delta_1}^+)\cup \Gamma^+$. Note that we have $w(\Delta_2)=\Delta_2.$ \end{proof} Let $B$ be a biclosed set in $\widetilde{\Phi}^+$. \begin{lemma}\label{nomaxmintwo} If $B$ and $\widetilde{\Phi}^+\backslash B$ are both infinite, then $(\widetilde{W},\leq_B')$ has no maximal element or minimal element. \end{lemma} \begin{proof} By the characterization of the biclosed sets in $\widetilde{\Phi}$, $I_B=P(\Psi^+,\Delta_1,\Delta_2)$ for some positive system $\Psi^+$ and some orthogonal subsets $\Delta_1, \Delta_2$ of its simple system. 
By Lemma \ref{nomaxminone} and Theorem \ref{infiniteword}, it suffices to treat the case where $I_B=P(\Psi^+,\Delta_1,\Delta_2)$ where $\Delta_1$ and $\Delta_2$ are both nonempty. Further it suffices to check the case where $B=P(\Psi^+,\Delta_1,\Delta_2)^{\wedge}$ as all $(\widetilde{W},\leq_B')$ with $I_B=P(\Psi^+,\Delta_1,\Delta_2)$ are isomorphic by Lemma \ref{dotactioniso}. We suppose to the contrary that there exists $u\in \widetilde{W}$ such that $us_i\rhd u$ for all simple reflections $s_i$ of $\widetilde{W}$ (so $u$ is minimal by the chain property of the twisted weak order). We write $A$ for $A_{N(u)}$. It is $\mathbb{Z}-$closed in $\Phi$ by Lemma 3.13 in \cite{wang}. We also have $A\cap -A=\emptyset$ by subsection \ref{biclosedaffine}. Then we deduce that $A':=\Phi_{\Delta_2}\cap A$ is $\mathbb{Z}$-closed in $\Phi_{\Delta_2}$ and that $A'\cap -A'=\emptyset$. Then by \cite{Bourbaki} Chapter IV, $\S 1$, Proposition 22, $A'$ is contained in a positive system $\Gamma^+$ in $\Phi_{\Delta_2}.$ Let $\mathbb{R}_{\geq 0}\Delta_2\cap \Phi=\Phi_{\Delta_2}^+.$ Then $A'$ is contained in $((\Psi^+\backslash \Phi_{\Delta_2}^+)\cup \Gamma^+)\backslash \Phi_{\Delta_1}$ where $(\Psi^+\backslash \Phi_{\Delta_2}^+)\cup \Gamma^+$ is a positive system in $\Phi$ by Lemma \ref{assemble}. We denote this positive system by $\Xi^+$. Also by Lemma \ref{assemble} $\Delta_1$ is a subset of $\Xi^+$'s simple system. Since $A_{N(u)\backslash N(us_i)}\cap \Phi_{\Delta_2}\subset A_{N(u)}\cap \Phi_{\Delta_2}(=A')$, we have \begin{equation}\label{partone} A_{N(u)\backslash N(us_i)}\cap \Phi_{\Delta_2}\subset \Xi^+\backslash \Phi_{\Delta_1}. \end{equation} Because $N(u)\backslash N(us_i)\subset B$, $A_{N(u)\backslash N(us_i)}\subset P(\Psi^+,\Delta_1,\Delta_2)=(\Psi^+\backslash \Phi_{\Delta_1})\cup \Phi_{\Delta_2}$. Hence \begin{equation}\label{parttwo} A_{N(u)\backslash N(us_i)}\backslash \Phi_{\Delta_2}\subset (\Psi^+\backslash \Phi_{\Delta_1})\backslash \Phi_{\Delta_2}\subset ((\Psi^+\backslash \Phi_{\Delta_2}^+)\cup \Gamma^+)\backslash \Phi_{\Delta_1}=\Xi^+\backslash \Phi_{\Delta_1}. \end{equation} Hence by equations \eqref{partone} and \eqref{parttwo} we conclude that $A_{N(u)\backslash N(us_i)}\subset \Xi^+\backslash \Phi_{\Delta_1}$. Consequently $N(u)\backslash N(us_i)\subset (\Xi^+\backslash \Phi_{\Delta_1})^{\wedge}$ for all $i$. The right hand side is the inversion set of an infinite reduced word $w$ by Theorem \ref{infiniteword}. On the other hand, one has the inclusion $N(us_i)\backslash N(u)\subset \widetilde{\Phi}^+\backslash P(\Psi^+,\Delta_1,\Delta_2)^{\wedge}\subset \widetilde{\Phi}^+\backslash (\Xi^+\backslash \Phi_{\Delta_1})^{\wedge}$. Thus we conclude $u$ is minimal under $(\widetilde{W},\leq_{N(w)}')$. This contradicts Lemma \ref{nomaxminone}. Similarly one can also prove that the poset has no maximal element. \end{proof} \begin{theorem}\label{setisinfinite} Suppose that $\widetilde{W}$ is irreducible and of rank $\geq 3.$ Let $k\in \mathbb{Z}$ and let $B$ be an infinite biclosed set in $\widetilde{\Phi}^+$ such that $\widetilde{\Phi}^+\backslash B$ is also infinite. The set $\{w\in \widetilde{W}|\,l_B'(w)=k\}$ is infinite. \end{theorem} \begin{proof} Suppose to the contrary that for some $k\in \mathbb{Z}$ the set $\{w\in \widetilde{W}|\,l_B'(w)=k\}$ is finite. Suppose that this set is $\{w_1,w_2,\cdots,w_t\}$. We show that under the right $B$-twisted weak order $\leq_B'$ any element is either greater than one of the $w_i$ or less than one of the $w_i$ (depending on whether its twisted length is $\geq k$ or $\leq k$). 
Let $u$ be an element of $\widetilde{W}$ and assume without loss of generality that $l_B'(u)>k.$ By Lemma \ref{nomaxmintwo} $u$ is not minimal. So one can find $s\in S$ such that $us<_B'u$ and $l_B'(us)=l_B'(u)-1$. Proceeding in this way, one sees that $u$ must be greater than some element with twisted length $k$. Now by Lemma \ref{dotactioniso} we only need to consider the case where $B=P(\Psi^+,\Delta_1,\Delta_2)^{\wedge}$. If $\Delta_1=\Delta$ then $B$ is empty, contrary to the assumption that $B$ is infinite; hence $\Delta_1\subsetneq \Delta$. Let $\alpha\in \Delta\backslash \Delta_1$ and $\rho$ be the highest root in $\Psi^+$. Since $\cup_{i}N(w_i)$ is finite, there must exist some positive roots $\alpha+p\delta\not\in \cup_{i}N(w_i)$ and $-\rho+q \delta\not\in \cup_{i}N(w_i)$. We claim that there exists some $v$ such that $N(v)$ contains $-\rho+q\delta, \alpha+p\delta.$ Proof of the claim. Since the rank of $\Phi$ is at least two, $\alpha\neq \rho$. So $-\rho,\alpha\in \Omega^+:=(\Psi^-\backslash \{-\alpha\})\cup \{\alpha\}$, which is another positive system. So $-\rho+q\delta, \alpha+p\delta$ are contained in $(\Omega^+)^{\wedge}$, which is the inversion set of an infinite reduced word. Consequently they are contained in the inversion set of a finite prefix $v$ of this infinite reduced word. Note that $\Delta_2\neq \Delta$ since that would force $B=\widetilde{\Phi}^+=\Phi^{\wedge}$, making $B$ cofinite, contrary to our assumptions. Therefore $-\rho+q\delta\not\in B$. However $\alpha+p\delta\in B$. Therefore $v$ is not comparable to any of the $w_i$ under the right $B$-twisted weak order. Contradiction. \end{proof} \begin{corollary} Suppose that $\widetilde{W}$ is irreducible and of rank $\geq 3.$ Let $B$ be an infinite biclosed set in $\widetilde{\Phi}^+$ such that $\widetilde{\Phi}^+\backslash B$ is also infinite. Then the $B$-twisted weak Bruhat order $(\widetilde{W},\leq_B')$ (resp. the $B$-twisted strong Bruhat order $(\widetilde{W},\leq_B)$) has an infinite antichain. \end{corollary} \begin{proof} Note that if $u\lneq'_B v$ (resp. $u\lneq_B v$) then $l_B'(v)>l_B'(u)$ (resp. $l_B(v)>l_B(u)$). Therefore the set $\{w\in \widetilde{W}|\,l_B'(w)=k\}$ (resp. $\{w\in \widetilde{W}|\,l_B(w)=k\}$) forms an infinite antichain by Theorem \ref{setisinfinite}. \end{proof} We remark that in \cite{antichain} it is proved that if $B$ is finite or cofinite, the $B$-twisted weak order (and consequently the $B$-twisted strong Bruhat order) on an affine Weyl group does not have an infinite antichain. \section{An example: alcove order of $\widetilde{A}_2$}\label{examplesec} In this section we provide a clear picture of a specific twisted (strong) Bruhat order. Let $W$ and $\Phi$ be of type $A_2$. The two simple roots are denoted by $\alpha, \beta$. We consider the $B$-twisted (strong) Bruhat order $(\widetilde{W}, \leq_B)$ where $B=\{\alpha,\alpha+\beta,\beta\}^{\wedge}$. Such an order is also called the alcove order in \cite{quotient}. We describe all of its spherical intervals. In the following lemma we characterize all covering relations in $(\widetilde{W}, \leq_B)$. \begin{lemma}\label{sixtype} For $(\widetilde{W},\leq_B)$ one has the following results: (1) Suppose that $w\in T$. Then $l_B(s_{\alpha+k\delta}w)=l_B(w)-2k-1, l_B(s_{\beta+k\delta}w)=l_B(w)-2k-1, l_B(s_{\alpha+\beta+k\delta}w)=l_B(w)-4k-3.$ (2) Suppose that $w\in s_{\alpha}s_{\beta}T$. Then $l_B(s_{\alpha+k\delta}w)=l_B(w)+4k+1, l_B(s_{\beta+k\delta}w)=l_B(w)-2k-1, l_B(s_{\alpha+\beta+k\delta}w)=l_B(w)+2k+1.$ (3) Suppose that $w\in s_{\beta}s_{\alpha}T$. 
Then $l_B(s_{\alpha+k\delta}w)=l_B(w)-2k-1, l_B(s_{\alpha+\beta+k\delta}w)=l_B(w)+2k+1, l_B(s_{\beta+k\delta}w)=l_B(w)+4k+1.$ (4) Suppose that $w\in s_{\beta}s_{\alpha}s_{\beta}T$. Then $l_B(s_{\alpha+k\delta}w)=l_B(w)+2k+1, l_B(s_{\beta+k\delta}w)=l_B(w)+2k+1, l_B(s_{\alpha+\beta+k\delta}w)=l_B(w)+4k+3.$ (5) Suppose that $w\in s_{\beta}T$. Then $l_B(s_{\alpha+k\delta}w)=l_B(w)-4k-1, l_B(s_{\beta+k\delta}w)=l_B(w)+2k+1, l_B(s_{\alpha+\beta+k\delta}w)=l_B(w)-2k-1.$ (6) Suppose that $w\in s_{\alpha}T$. Then $l_B(s_{\alpha+k\delta}w)=l_B(w)+2k+1, l_B(s_{\alpha+\beta+k\delta}w)=l_B(w)-2k-1, l_B(s_{\beta+k\delta}w)=l_B(w)-4k-1.$ \end{lemma} \begin{proof} It follows from the fact $\widetilde{W}=W\ltimes T$ that every element is in one of the six subsets and these six subsets are pairwise disjoint. One can verify by induction easily that $$N(s_{\alpha+k\delta})=\alpha_0^{2k}(-\beta)_1^k(\alpha+\beta)_0^{k-1}, k\geq 0,$$ $$N(s_{\alpha+k\delta})=(-\alpha)_1^{-2k-1}(\beta)_0^{-k-1}(-\alpha-\beta)_1^{-k}, k<0,$$ $$N(s_{\alpha+\beta+k\delta})=(\alpha+\beta)_0^{2k}\alpha_0^k\beta_0^k, k\geq 0,$$ $$N(s_{\alpha+\beta+k\delta})=(-\alpha-\beta)_1^{-2k-1}(-\alpha)_1^{-k-1}(-\beta)_1^{-k-1}, k<0,$$ $$N(s_{\beta+k\delta})=\beta_0^{2k}(-\alpha)_1^k(\alpha+\beta)_0^{k-1}, k\geq 0,$$ $$N(s_{\beta+k\delta})=(-\beta)_1^{-2k-1}(\alpha)_0^{-k-1}(-\alpha-\beta)_1^{-k}, k<0.$$ Now by using the identity $N(wu)=(N(w)\backslash w(-N(u)))\cup (wN(u)\backslash (-N(w)))$, we compute $$N(t_{k_1\alpha+k_2\beta})$$ $$=(\alpha_0^{k_1-\frac{k_2}{2}-1}(-\alpha)_1^{-k_1+\frac{k_2}{2}}\beta_0^{-\frac{k_1}{2}+k_2-1}(-\beta)_1^{\frac{k_1}{2}-k_2} (\alpha+\beta)_0^{\frac{k_1}{2}+\frac{k_2}{2}-1}(-\alpha-\beta)_1^{-\frac{k_1}{2}-\frac{k_2}{2}}).$$ $$N(t_{k_1\alpha+k_2\beta}s_{\alpha+k\delta})$$ $$=\alpha_{0}^{k_1-\frac{k_2}{2}+2k}(-\alpha)_1^{-2k-1-k_1+\frac{k_2}{2}}\beta_0^{-k-1-\frac{k_1}{2}+k_2}(-\beta)_1^{k+\frac{k_1}{2}-k_2}$$ $$(\alpha+\beta)_0^{k-1+\frac{k_1+k_2}{2}}(-\alpha-\beta)_1^{-k-\frac{k_1+k_2}{2}}.$$ $l_B(s_{\alpha+k\delta}t_{k_1\alpha+k_2\beta})=-2k-1-k_1+\frac{k_2}{2}+k+\frac{k_1}{2}-k_2-k-\frac{k_1+k_2}{2}=-2k-k_1-k_2-1.$ $l_B(t_{k_1\alpha+k_2\beta})=-k_1+\frac{k_2}{2}+\frac{k_1}{2}-k_2-\frac{k_1}{2}-\frac{k_2}{2}=-k_1-k_2$. Hence $l_B(t_{k_1\alpha+k_2\beta})-2k-1=l_B(s_{\alpha+k\delta}t_{k_1\alpha+k_2\beta})$. The map $\tau: s_{\alpha}\mapsto s_{\beta}, s_{\beta}\mapsto s_{\alpha}, s_{\delta-\alpha-\beta}\mapsto s_{\delta-\alpha-\beta}$ induces an automorphism of $\widetilde{W}$. From this and the above calculation, one sees that $l_B(t_{k_1\alpha+k_2\beta})-2k-1=l_B(t_{k_1\alpha+k_2\beta}s_{\beta+k\delta})$. We compute $$N(t_{k_1\alpha+k_2\beta}s_{\alpha+\beta+k\delta})$$ $$=\alpha_0^{k+k_1-\frac{k_2}{2}}(-\alpha)_1^{-k-1-k_1+\frac{k_2}{2}}\beta_0^{k-\frac{k_1}{2}+k_2}(-\beta)_1^{-k-1+\frac{k_1}{2}-k_2}$$ $$(\alpha+\beta)_0^{2k+\frac{k_1+k_2}{2}}(-\alpha-\beta)_1^{-2k-1-\frac{k_1+k_2}{2}}.$$ $l_B(s_{\alpha+\beta+k\delta}t_{k_1\alpha+k_2\beta})=-k-1-k_1+\frac{k_2}{2}-k-1+\frac{k_1}{2}-k_2-2k-1-\frac{k_1+k_2}{2}=-4k-k_1-k_2-3.$ Hence $l_B(s_{\alpha+\beta+k\delta}t_{k_1\alpha+k_2\beta})=l_B(t_{k_1\alpha+k_2\beta})-4k-3.$ So we have proved (1). 
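As a concrete illustration of (1) (a sanity check only, not needed for the argument), take $k_1=k_2=0$ and $k=1$: the formula above gives $N(s_{\alpha+\delta})=\alpha_0^{2}(-\beta)_1^{1}(\alpha+\beta)_0^{0}=\{\alpha,\alpha+\delta,\alpha+2\delta,-\beta+\delta,\alpha+\beta\}$, of which exactly four roots lie in $B=\{\alpha,\alpha+\beta,\beta\}^{\wedge}$, so $$l_B(s_{\alpha+\delta})=5-2\cdot 4=-3=l_B(e)-2\cdot 1-1,$$ as predicted by (1). 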
We compute $$N(t_{k_1\alpha+k_2\beta}s_{\alpha})$$ $$=(\alpha_0^{k_1-\frac{k_2}{2}}(-\alpha)_1^{-k_1+\frac{k_2}{2}-1}\beta_0^{-\frac{k_1}{2}+k_2-1}(-\beta)_1^{\frac{k_1}{2}-k_2} (\alpha+\beta)_0^{\frac{k_1}{2}+\frac{k_2}{2}-1}(-\alpha-\beta)_1^{-\frac{k_1}{2}-\frac{k_2}{2}}).$$ $$N(t_{k_1\alpha+k_2\beta}s_{\alpha}s_{\alpha+k\delta})$$ $$=\alpha_0^{-2k-1+k_1-\frac{k_2}{2}}(-\alpha)_1^{2k-k_1+\frac{k_2}{2}}\beta_0^{k-1-\frac{k_1}{2}+k_2}(-\beta)_1^{-k+\frac{k_1}{2}-k_2}$$ $$(\alpha+\beta)_0^{-k-1+\frac{k_1+k_2}{2}}(-\alpha-\beta)_1^{k-\frac{k_1+k_2}{2}}.$$ So $l_B(s_{\alpha+k\delta}s_{\alpha}t_{k_1\alpha+k_2\beta})=2k-k_1+\frac{k_2}{2}-k+\frac{k_1}{2}-k_2+k-\frac{k_1+k_2}{2}=2k-k_1-k_2$. $l_B(s_{\alpha}t_{k_1\alpha+k_2\beta})=-k_1+\frac{k_2}{2}-1+\frac{k_1}{2}-k_2-\frac{k_1}{2}-\frac{k_2}{2}=-k_1-k_2-1$. Therefore $l_B(s_{\alpha+k\delta}s_{\alpha}t_{k_1\alpha+k_2\beta})=l_B(s_{\alpha}t_{k_1\alpha+k_2\beta})+2k+1.$ $$N(t_{k_1\alpha+k_2\beta}s_{\alpha}s_{\beta+k\delta})$$ $$=\alpha_0^{k+k_1-\frac{k_2}{2}}(-\alpha)_1^{-k-1-k_1+\frac{k_2}{2}}(\beta)_0^{k-1-\frac{k_1}{2}+k_2}(-\beta)_1^{-k+\frac{k_1}{2}-k_2}$$ $$(\alpha+\beta)_0^{2k+\frac{k_1+k_2}{2}}(-\alpha-\beta)_1^{-2k-1-\frac{k_1+k_2}{2}}.$$ So $l_B(s_{\beta+k\delta}s_{\alpha}t_{k_1\alpha+k_2\beta})=-k-1-k_1+\frac{k_2}{2}-k+\frac{k_1}{2}-k_2-2k-1-\frac{k_1+k_2}{2}=-4k-2-k_1-k_2=l_B(s_{\alpha}t_{k_1\alpha+k_2\beta})-4k-1.$ $$N(t_{k_1\alpha+k_2\beta}s_{\alpha}s_{\alpha+\beta+k\delta})$$ $$=\alpha_0^{-k-1+k_1-\frac{k_2}{2}}(-\alpha)_1^{k-k_1+\frac{k_2}{2}}\beta_0^{2k-\frac{k_1}{2}+k_2}(-\beta)_1^{-2k-1+\frac{k_1}{2}-k_2}$$ $$(\alpha+\beta)_0^{k+\frac{k_1+k_2}{2}}(-\alpha-\beta)_1^{-k-1-\frac{k_1+k_2}{2}}.$$ So $l_B(s_{\alpha+\beta+k\delta}s_{\alpha}t_{k_1\alpha+k_2\beta})=k-k_1+\frac{k_2}{2}-2k-1+\frac{k_1}{2}-k_2-k-1-\frac{k_1+k_2}{2}=-2k-k_1-k_2-2=l_B(s_{\alpha}t_{k_1\alpha+k_2\beta})-2k-1.$ So we have proved (6). Using the map $\tau$ and the calculation for the case (6), one sees (5). Now we prove (3). We compute $$N(t_{k_1\alpha+k_2\beta}s_{\alpha}s_{\beta})$$ $$=(\alpha_0^{k_1-\frac{k_2}{2}}(-\alpha)_1^{-k_1+\frac{k_2}{2}-1}\beta_0^{-\frac{k_1}{2}+k_2-1}(-\beta)_1^{\frac{k_1}{2}-k_2} (\alpha+\beta)_0^{\frac{k_1}{2}+\frac{k_2}{2}}(-\alpha-\beta)_1^{-\frac{k_1}{2}-\frac{k_2}{2}-1}).$$ $$N(t_{k_1\alpha+k_2\beta}s_{\alpha}s_{\beta}s_{\alpha+k\delta})$$ $$=\alpha_0^{-k+k_1-\frac{k_2}{2}}(-\alpha)_1^{k-1-k_1+\frac{k_2}{2}}\beta_{0}^{2k-\frac{k_1}{2}+k_2}(-\beta)_1^{-2k-1+\frac{k_1}{2}-k_2}$$ $$(\alpha+\beta)_0^{k+\frac{k_1+k_2}{2}}(-\alpha-\beta)_1^{-k-1-\frac{k_1+k_2}{2}}.$$ So $l_B(s_{\alpha+k\delta}s_{\beta}s_{\alpha}t_{k_1\alpha+k_2\beta})=k-1-k_1+\frac{k_2}{2}-2k-1+\frac{k_1}{2}-k_2-k-1-\frac{k_1+k_2}{2}=-2k-k_1-k_2-3.$ $l_B(s_{\beta}s_{\alpha}t_{k_1\alpha+k_2\beta})=-k_1+\frac{k_2}{2}-1+\frac{k_1}{2}-k_2-\frac{k_1}{2}-\frac{k_2}{2}-1=-k_1-k_2-2$. Therefore $l_B(s_{\alpha+k\delta}s_{\beta}s_{\alpha}t_{k_1\alpha+k_2\beta})=l_B(s_{\beta}s_{\alpha}t_{k_1\alpha+k_2\beta})-2k-1.$ We compute $$N(t_{k_1\alpha+k_2\beta}s_{\alpha}s_{\beta}s_{\beta+k\delta})$$ $$=\alpha_0^{-k+k_1-\frac{k_2}{2}}(-\alpha)_1^{k-1-k_1+\frac{k_2}{2}}\beta_0^{-k-1-\frac{k_1}{2}+k_2}(-\beta)_1^{k+\frac{k_1}{2}-k_2}$$ $$(\alpha+\beta)_0^{-2k-1+\frac{k_1+k_2}{2}}(-\alpha-\beta)_1^{2k-\frac{k_1+k_2}{2}}.$$ So $l_B(s_{\beta+k\delta}s_{\beta}s_{\alpha}t_{k_1\alpha+k_2\beta})=k-1-k_1+\frac{k_2}{2}+k+\frac{k_1}{2}-k_2+2k-\frac{k_1+k_2}{2}=4k-k_1-k_2-1.$ Therefore $l_B(s_{\beta+k\delta}s_{\beta}s_{\alpha}t_{k_1\alpha+k_2\beta})=l_B(s_{\beta}s_{\alpha}t_{k_1\alpha+k_2\beta})+4k+1$. 
We compute $$N(t_{k_1\alpha+k_2\beta}s_{\alpha}s_{\beta}s_{\alpha+\beta+k\delta})$$ $$=\alpha_0^{-2k-1+k_1-\frac{k_2}{2}}(-\alpha)_1^{2k-k_1+\frac{k_2}{2}}\beta_0^{k-\frac{k_1}{2}+k_2}(-\beta)_1^{-k-1+\frac{k_1}{2}-k_2}$$ $$(\alpha+\beta)_0^{-k-1+\frac{k_1+k_2}{2}}(-\alpha-\beta)_1^{k-\frac{k_1+k_2}{2}}.$$ So $l_B(s_{\alpha+\beta+k\delta}s_{\beta}s_{\alpha}t_{k_1\alpha+k_2\beta})=2k-k_1+\frac{k_2}{2}-k-1+\frac{k_1}{2}-k_2+k-\frac{k_1+k_2}{2}=2k-k_1-k_2-1.$ Therefore $l_B(s_{\alpha+\beta+k\delta}s_{\beta}s_{\alpha}t_{k_1\alpha+k_2\beta})=l_B(s_{\beta}s_{\alpha}t_{k_1\alpha+k_2\beta})+2k+1$. Using the map $\tau$ and the calculation for the $s_{\beta}s_{\alpha}T$ case, one sees (2). Now we prove (4). We compute $$N(t_{k_1\alpha+k_2\beta}s_{\alpha}s_{\beta}s_{\alpha})$$ $$=(\alpha_0^{k_1-\frac{k_2}{2}}(-\alpha)_1^{-k_1+\frac{k_2}{2}-1}\beta_0^{-\frac{k_1}{2}+k_2}(-\beta)_1^{\frac{k_1}{2}-k_2-1} (\alpha+\beta)_0^{\frac{k_1}{2}+\frac{k_2}{2}}(-\alpha-\beta)_1^{-\frac{k_1}{2}-\frac{k_2}{2}-1}).$$ $$N(t_{k_1\alpha+k_2\beta}s_{\alpha}s_{\beta}s_{\alpha}s_{\alpha+k\delta})$$ $$=\alpha_0^{k+k_1-\frac{k_2}{2}}(-\alpha)_1^{-k-1-k_1+\frac{k_2}{2}}\beta_0^{-2k-1-\frac{k_1}{2}+k_2}(-\beta)_1^{2k+\frac{k_1}{2}-k_2}$$ $$(\alpha+\beta)_0^{-k+\frac{k_1+k_2}{2}}(-\alpha-\beta)_1^{k-1-\frac{k_1+k_2}{2}}.$$ So $l_B(s_{\alpha+k\delta}s_{\alpha}s_{\beta}s_{\alpha}t_{k_1\alpha+k_2\beta})=-k-1-k_1+\frac{k_2}{2}+2k+\frac{k_1}{2}-k_2+k-1-\frac{k_1+k_2}{2}=2k-2-k_1-k_2.$ $l_B(s_{\alpha}s_{\beta}s_{\alpha}t_{k_1\alpha+k_2\beta})=-k_1+\frac{k_2}{2}-1+\frac{k_1}{2}-k_2-1-\frac{k_1}{2}-\frac{k_2}{2}-1=-k_1-k_2-3.$ Therefore $l_B(s_{\alpha+k\delta}s_{\alpha}s_{\beta}s_{\alpha}t_{k_1\alpha+k_2\beta})=l_B(s_{\alpha}s_{\beta}s_{\alpha}t_{k_1\alpha+k_2\beta})+2k+1.$ Using the map $\tau$ and the above calculation, one sees that $l_B(s_{\beta+k\delta}s_{\alpha}s_{\beta}s_{\alpha}t_{k_1\alpha+k_2\beta})=l_B(s_{\alpha}s_{\beta}s_{\alpha}t_{k_1\alpha+k_2\beta})+2k+1.$ We compute $$N(t_{k_1\alpha+k_2\beta}s_{\alpha}s_{\beta}s_{\alpha}s_{\alpha+\beta+k\delta})$$ $$=\alpha_0^{-k-1+k_1-\frac{k_2}{2}}(-\alpha)_1^{k-k_1+\frac{k_2}{2}}\beta_0^{-k-1-\frac{k_1}{2}+k_2}(-\beta)_1^{k+\frac{k_1}{2}-k_2}$$ $$(\alpha+\beta)_0^{-2k-1+\frac{k_1+k_2}{2}}(-\alpha-\beta)_1^{2k-\frac{k_1+k_2}{2}}.$$ So $l_B(s_{\alpha+\beta+k\delta}s_{\alpha}s_{\beta}s_{\alpha}t_{k_1\alpha+k_2\beta})=k-k_1+\frac{k_2}{2}+k+\frac{k_1}{2}-k_2+2k-\frac{k_1+k_2}{2}=4k-k_1-k_2.$ Therefore $l_B(s_{\alpha+\beta+k\delta}s_{\alpha}s_{\beta}s_{\alpha}t_{k_1\alpha+k_2\beta})=l_B(s_{\alpha}s_{\beta}s_{\alpha}t_{k_1\alpha+k_2\beta})+4k+3.$ \end{proof} We draw below part of the Hasse diagram of the poset $(\widetilde{W},\leq_B)$. The numeric sequence $a_1a_2\cdots a_t, a_i\in \{1,2,3\}$ stands for the reduced expression $s_{a_1}s_{a_2}\cdots s_{a_t}$ where $s_1:=s_{\alpha}, s_2:=s_{\beta}$ and $s_3:=s_{\delta-\alpha-\beta}.$ The black edge corresponds to a covering relation in the corresponding $B$-twisted left weak Bruhat order. The blue edge corresponds to the additional covering relation in $B$-twisted strong Bruhat order. 
\tikzset{every picture/.style={line width=0.75pt}} \begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1] \draw (125,108) -- (146.65,120.5) -- (146.65,145.5) -- (125,158) -- (103.35,145.5) -- (103.35,120.5) -- cycle ; \draw (147,71) -- (168.65,83.5) -- (168.65,108.5) -- (147,121) -- (125.35,108.5) -- (125.35,83.5) -- cycle ; \draw (147,71) -- (168.65,83.5) -- (168.65,108.5) -- (147,121) -- (125.35,108.5) -- (125.35,83.5) -- cycle ; \draw (147,71) -- (168.65,83.5) -- (168.65,108.5) -- (147,121) -- (125.35,108.5) -- (125.35,83.5) -- cycle ; \draw (147,71) -- (168.65,83.5) -- (168.65,108.5) -- (147,121) -- (125.35,108.5) -- (125.35,83.5) -- cycle ; \draw (168.65,108.5) -- (190.3,121) -- (190.3,146) -- (168.65,158.5) -- (147,146) -- (147,121) -- cycle ; \draw (103.35,70.5) -- (125,83) -- (125,108) -- (103.35,120.5) -- (81.7,108) -- (81.7,83) -- cycle ; \draw (190.3,71) -- (211.95,83.5) -- (211.95,108.5) -- (190.3,121) -- (168.65,108.5) -- (168.65,83.5) -- cycle ; \draw (211.95,108.5) -- (233.6,121) -- (233.6,146) -- (211.95,158.5) -- (190.3,146) -- (190.3,121) -- cycle ; \draw (81.7,108) -- (103.35,120.5) -- (103.35,145.5) -- (81.7,158) -- (60.05,145.5) -- (60.05,120.5) -- cycle ; \draw (233.6,71) -- (255.25,83.5) -- (255.25,108.5) -- (233.6,121) -- (211.95,108.5) -- (211.95,83.5) -- cycle ; \draw (255.25,108.5) -- (276.9,121) -- (276.9,146) -- (255.25,158.5) -- (233.6,146) -- (233.6,121) -- cycle ; \draw (60.05,70.5) -- (81.7,83) -- (81.7,108) -- (60.05,120.5) -- (38.4,108) -- (38.4,83) -- cycle ; \draw (233.6,146) -- (255.25,158.5) -- (255.25,183.5) -- (233.6,196) -- (211.95,183.5) -- (211.95,158.5) -- cycle ; \draw (190.3,146) -- (211.95,158.5) -- (211.95,183.5) -- (190.3,196) -- (168.65,183.5) -- (168.65,158.5) -- cycle ; \draw (146.65,145.5) -- (168.3,158) -- (168.3,183) -- (146.65,195.5) -- (125,183) -- (125,158) -- cycle ; \draw (103.35,145.5) -- (125,158) -- (125,183) -- (103.35,195.5) -- (81.7,183) -- (81.7,158) -- cycle ; \draw (60.05,145.5) -- (81.7,158) -- (81.7,183) -- (60.05,195.5) -- (38.4,183) -- (38.4,158) -- cycle ; \draw (276.9,71) -- (298.55,83.5) -- (298.55,108.5) -- (276.9,121) -- (255.25,108.5) -- (255.25,83.5) -- cycle ; \draw (276.9,146) -- (298.55,158.5) -- (298.55,183.5) -- (276.9,196) -- (255.25,183.5) -- (255.25,158.5) -- cycle ; \draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (168.65,158.5) -- (211.95,183.5) ; \draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (125.35,158.5) -- (168.65,183.5) ; \draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (103.7,121) -- (147,146) ; \draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (81.7,158) -- (125,183) ; \draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (38.4,158) -- (81.7,183) ; \draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (211.95,158.5) -- (255.25,183.5) ; \draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (255.25,158.5) -- (298.55,183.5) ; \draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (60.05,120.5) -- (103.35,145.5) ; \draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (147,121) -- (190.3,146) ; \draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (190.3,121) -- (233.6,146) ; \draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (233.6,121) -- (276.9,146) ; \draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (211.95,83.5) -- 
(255.25,108.5) ; \draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (255.25,83.5) -- (298.55,108.5) ; \draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (168.65,83.5) -- (211.95,108.5) ; \draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (125.35,83.5) -- (168.65,108.5) ; \draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (82.05,83.5) -- (125.35,108.5) ; \draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (38.4,83) -- (81.7,108) ; \draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (298.55,158.5) -- (255.25,183.5) ; \draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (211.95,183.5) -- (255.25,158.5) ; \draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (168.65,183.5) -- (211.95,158.5) ; \draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (125.35,183.5) -- (168.65,158.5) ; \draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (82.05,183.5) -- (125.35,158.5) ; \draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (38.4,183) -- (81.7,158) ; \draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (233.6,146) -- (276.9,121) ; \draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (190.3,146) -- (233.6,121) ; \draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (147,146) -- (190.3,121) ; \draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (103.7,146) -- (147,121) ; \draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (60.4,146) -- (103.7,121) ; \draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (211.95,108.5) -- (255.25,83.5) ; \draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (168.65,108.5) -- (211.95,83.5) ; \draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (125.35,108.5) -- (168.65,83.5) ; \draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (81.7,108) -- (125,83) ; \draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (38.75,108.5) -- (82.05,83.5) ; \draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (255.25,108.5) -- (298.55,83.5) ; \draw (82,176.5) node [align=left] {{\tiny 2132}}; \draw (58.5,199) node [align=left] {{\tiny 32132}}; \draw (99.2,199.8) node [align=left] {{\tiny 132}}; \draw (124.8,176) node [align=left] {{\tiny 32}}; \draw (148,199.8) node [align=left] {{\tiny 2}}; \draw (166.8,175.8) node [align=left] {{\tiny e}}; \draw (191.6,198.8) node [align=left] {{\tiny 1}}; \draw (211.2,177.2) node [align=left] {{\tiny 31}}; \draw (166,158.8) node [align=left] {{\tiny 3}}; \draw (189.2,150.6) node [align=left] {{\tiny 13}}; \draw (204.4,161) node [align=left] {{\tiny 131}}; \draw (146.65,145.5) node [align=left] {{\tiny 23}}; \draw (126.4,154) node [align=left] {{\tiny 232}}; \draw (184,123.6) node [align=left] {{\tiny 213}}; \draw (149.6,124.8) node [align=left] {{\tiny 123}}; \draw (168.65,108.5) node [align=left] {{\tiny 1213}}; \draw (226.8,142.6) node [align=left] {{\tiny 2131}}; \draw (225.2,124.2) node [align=left] {{\tiny 32131}}; \draw (206.4,107.2) node [align=left] {{\tiny 3213}}; \draw (105,143.5) node [align=left] {{\tiny 1232}}; \draw (78.5,158) node [align=left] {{\tiny 12132}}; \draw (127.5,112) node [align=left] {{\tiny 3123}}; \draw (111,124.5) node [align=left] {{\tiny 13123}}; \draw (239.5,196.5) 
node [align=left] {{\tiny 231}}; \draw (79,113.5) node [align=left] {{\tiny 213123}}; \draw (249,179) node [align=left] {{\tiny 1231}}; \draw (247,159) node [align=left] {{\tiny 21231}}; \end{tikzpicture} We collect in the following proposition some properties and the symmetry of this poset. \begin{proposition}\label{posetprop} (1) The sphericity of the intervals is characterized as follows. (a) The length 2 intervals of $(\widetilde{W},\leq_B)$ are either of the form $[x,s_1s_2x]$ where $s_1\neq s_2, s_1,s_2\in \{s_{\alpha},s_{\beta},s_{\delta-\alpha-\beta}\}$ or of the form $[x,s_{\delta-\gamma}s_{\gamma}x]$ where $\gamma\in \{\alpha,\beta,\alpha+\beta, \delta-\alpha, \delta-\beta, \delta-\alpha-\beta\}$. The former is spherical and the latter is non-spherical. (b) The length 3 intervals of $(\widetilde{W},\leq_B)$ are all non-spherical except those of the following types: $[x,s_{\alpha}s_{\beta}s_{\alpha}x]$ where $x\in s_{\alpha}s_{\beta}s_{\alpha}T$; $[x, s_{\delta-\alpha-\beta}s_{\beta}s_{\delta-\alpha-\beta}x]$ where $x\in s_{\beta}T$; $[x, s_{\alpha}s_{\delta-\alpha-\beta}s_{\alpha}x]$ where $x\in s_{\alpha}T$. (c) All intervals of $(\widetilde{W},\leq_B)$ of length greater than 3 are non-spherical. (2) For $i\geq 0$, denote by $w(i)$ and $w(-i)$ the unique length $i$ prefixes of the infinite reduced words $(s_{\beta}s_{\delta-\alpha-\beta}s_{\alpha})^{\infty}$ and $(s_{\alpha}s_{\delta-\alpha-\beta}s_{\beta})^{\infty}$ respectively. Let $U$ be the (infinite dihedral) reflection subgroup of $\widetilde{W}$ generated by $v:=s_{\delta-\alpha-\beta}$ and $u:=s_{\alpha}s_{\beta}s_{\alpha}$. For $x\in \widetilde{W}$, $y$ is the unique minimal element of the left coset $xU$ if and only if $y=w(i)^{-1}$ for some $i\in \mathbb{Z}$. Consequently every element $w\in \widetilde{W}$ can be written in the form $w(i)^{-1}u$ for some $u\in U$ and $i\in \mathbb{Z}$. Furthermore for every $w\in \widetilde{W}$, $l_B(w)$ is determined as follows: (a) $l_B(w(i)^{-1}u(vu)^k)=-4k-3$ if $i$ is even and $l_B(w(i)^{-1}u(vu)^k)=-4k-2$ if $i$ is odd. (b) $l_B(w(i)^{-1}(vu)^k)=-4k$ if $i$ is even and $l_B(w(i)^{-1}(vu)^k)=-4k-1$ if $i$ is odd. (c) $l_B(w(i)^{-1}v(uv)^k)=4k+1$ if $i$ is even and $l_B(w(i)^{-1}v(uv)^k)=4k+2$ if $i$ is odd. (d) $l_B(w(i)^{-1}(uv)^k)=4k$ if $i$ is even and $l_B(w(i)^{-1}(uv)^k)=4k-1$ if $i$ is odd. (3) Suppose that $x,y\in \widetilde{W}$ are such that $l_B(x)$ and $l_B(y)$ are of the same parity. Then there exists a poset automorphism $\gamma: (\widetilde{W},\leq_B)\rightarrow (\widetilde{W},\leq_B)$ such that $\gamma(x)=y.$ \end{proposition} \begin{proof} (1) This can be verified routinely by using the covering relations in Lemma \ref{sixtype}. (2) By \cite{DyerReflSubgrp}, the minimal element $y$ of the coset $xU$ is characterized by $N(y^{-1})\cap\Phi_{U}^+=\emptyset.$ Hence $N(y^{-1})\subset \{\alpha,-\beta\}^{\wedge}\uplus \{-\alpha,\beta\}^{\wedge}$. Note that $\{\alpha,-\beta\}^{\wedge}=N((s_{\beta}s_{\delta-\alpha-\beta}s_{\alpha})^{\infty})$ and $\{-\alpha,\beta\}^{\wedge}=N((s_{\alpha}s_{\delta-\alpha-\beta}s_{\beta})^{\infty})$. Assume that $N(y^{-1})\not\subset N((s_{\beta}s_{\delta-\alpha-\beta}s_{\alpha})^{\infty})$ and $N(y^{-1})\not\subset N((s_{\alpha}s_{\delta-\alpha-\beta}s_{\beta})^{\infty})$. Without loss of generality assume that $\alpha\in N(y^{-1})$. Then $\{-\alpha\}^{\wedge}\cap N(y^{-1})=\emptyset$. This forces some $\beta+k\delta\in N(y^{-1})$. 
Then $\alpha+\beta+k\delta\in N(y^{-1})$, contradicting $N(y^{-1})\cap\Phi_{U}^+=\emptyset.$ Hence either $N(y^{-1})\subset N((s_{\beta}s_{\delta-\alpha-\beta}s_{\alpha})^{\infty})$ or $N(y^{-1})\subset N((s_{\alpha}s_{\delta-\alpha-\beta}s_{\beta})^{\infty})$, so $y^{-1}$ is a prefix of one of these two infinite reduced words, i.e. $y=w(i)^{-1}$ for some $i\in \mathbb{Z}$. Therefore, given $w\in \widetilde{W}$, we have $w(i)^{-1}=wu$ for some $i\in \mathbb{Z}$ and $u\in U.$ Now we check the results on twisted length. We prove (a); (b)-(d) can be verified similarly. One computes that $N((uv)^kuw(i))=\alpha_0^{k-\lceil\frac{i}{2}\rceil}\beta_0^{k+\lfloor\frac{i}{2}\rfloor}(\alpha+\beta)_0^{2k}(-\alpha)_1^{\lceil\frac{i}{2}\rceil-k-1}(-\beta)_1^{\lceil-\frac{i}{2}\rceil-k-1}.$ There are six possibilities depending on $k$ and $i$. (i) $N((uv)^kuw(i))=\alpha_0^{k-\lceil\frac{i}{2}\rceil}\beta_0^{k+\lfloor\frac{i}{2}\rfloor}(\alpha+\beta)_0^{2k}.$ Then $l_B((uv)^kuw(i))=-4k-3+\lceil\frac{i}{2}\rceil-\lfloor\frac{i}{2}\rfloor$. If $i$ is even the twisted length is $-4k-3$. If $i$ is odd, the twisted length is $-4k-2.$ (ii) $N((uv)^kuw(i))=\alpha_0^{k-\lceil\frac{i}{2}\rceil}(\alpha+\beta)_0^{2k}(-\beta)_1^{\lceil-\frac{i}{2}\rceil-k-1}.$ Then $l_B((uv)^kuw(i))=-4k-3+\lceil\frac{i}{2}\rceil+\lceil-\frac{i}{2}\rceil$. If $i$ is even the twisted length is $-4k-3$. If $i$ is odd, the twisted length is $-4k-2.$ (iii) $N((uv)^kuw(i))=\beta_0^{k+\lfloor\frac{i}{2}\rfloor}(\alpha+\beta)_0^{2k}(-\alpha)_1^{\lceil\frac{i}{2}\rceil-k-1}.$ Then $l_B((uv)^kuw(i))=-4k-3+\lceil\frac{i}{2}\rceil-\lfloor\frac{i}{2}\rfloor$. If $i$ is even the twisted length is $-4k-3$. If $i$ is odd, the twisted length is $-4k-2.$ (iv) $N((uv)^kuw(i))=(\alpha+\beta)_0^{2k}(-\alpha)_1^{\lceil\frac{i}{2}\rceil-k-1}(-\beta)_1^{\lceil-\frac{i}{2}\rceil-k-1}.$ Then $l_B((uv)^kuw(i))=-4k-3+\lceil\frac{i}{2}\rceil+\lceil-\frac{i}{2}\rceil$. If $i$ is even the twisted length is $-4k-3$. If $i$ is odd, the twisted length is $-4k-2.$ (v) If $k-\lceil\frac{i}{2}\rceil=-1$, $N((uv)^kuw(i))=\beta_0^{k+\lfloor\frac{i}{2}\rfloor}(\alpha+\beta)_0^{2k}.$ Then $l_B((uv)^kuw(i))=-3k-2-\lfloor\frac{i}{2}\rfloor=-4k-3+\lceil\frac{i}{2}\rceil-\lfloor\frac{i}{2}\rfloor$. If $i$ is even the twisted length is $-4k-3$. If $i$ is odd, the twisted length is $-4k-2.$ (vi) If $k+\lfloor\frac{i}{2}\rfloor=-1$, $N((uv)^kuw(i))=\alpha_0^{k-\lceil\frac{i}{2}\rceil}(\alpha+\beta)_0^{2k}.$ Then $l_B((uv)^kuw(i))=-3k-2+\lceil\frac{i}{2}\rceil=-4k-3+\lceil\frac{i}{2}\rceil-\lfloor\frac{i}{2}\rfloor$. If $i$ is even the twisted length is $-4k-3$. If $i$ is odd, the twisted length is $-4k-2.$ (3) Let $\sigma$ be the group automorphism of $\widetilde{W}$ induced by the map $s_{\delta-\alpha-\beta}\mapsto s_{\alpha}, s_{\beta}\mapsto s_{\delta-\alpha-\beta}, s_{\alpha}\mapsto s_{\beta}$. Note that $\sigma(t_{\alpha^{\vee}})=\sigma(s_{\alpha}s_{\beta}s_{\delta-\alpha-\beta}s_{\beta})=s_{\beta}s_{\delta-\alpha-\beta}s_{\alpha}s_{\delta-\alpha-\beta}=t_{\beta^{\vee}}$ and $\sigma(t_{\beta^{\vee}})=\sigma(s_{\beta}s_{\alpha}s_{\delta-\alpha-\beta}s_{\alpha})=s_{\delta-\alpha-\beta}s_{\beta}s_{\alpha}s_{\beta}=t_{-(\alpha+\beta)^{\vee}}$. So we conclude $\sigma(T)=T$. We show that there exists a poset automorphism $\eta$ of $(\widetilde{W},\leq_B)$ given by $w\mapsto \sigma(w)s_{\alpha}s_{\beta}$ and a poset automorphism $\eta'$ of $(\widetilde{W},\leq_B)$ given by $w\mapsto \sigma^{-1}(w)s_{\beta}s_{\alpha}$. 
Using the fact that $T$ is a normal subgroup one obtains that, for $w\in T$, $$\eta(w)\in s_{\alpha}s_{\beta}T,$$ $$\eta(s_{\alpha}s_{\beta}w)\in s_{\beta}s_{\alpha}T,$$ $$\eta(s_{\beta}s_{\alpha}w)\in T,$$ $$\eta(s_{\beta}s_{\alpha}s_{\beta}w)\in s_{\beta}T,$$ $$\eta(s_{\beta}w)\in s_{\alpha}T,$$ $$\eta(s_{\alpha}w)\in s_{\beta}s_{\alpha}s_{\beta}T.$$ By the above and the covering relations given in Lemma \ref{sixtype}, one easily checks that $\eta$ preserves covering relations. Similarly one verifies that $\eta'$ also preserves covering relations. Let $\rho=\eta(\eta')^{-1}$ be the resulting poset automorphism of $(\widetilde{W},\leq_B)$. For $k\in \mathbb{Z},$ $\rho^k(w)=\sigma^{2k}(w)(w(2k))^{-1}$. For any $z\in U$ (as in (2)) we have $$\rho^2(w(i)^{-1}z)=\sigma^{-1}(w(i)^{-1}z)w(2)^{-1}=w(i+2)^{-1}z.$$ By (2) if $l_B(x)=l_B(y)$, then $x=w(j)^{-1}z$ and $y=w(k)^{-1}z$ for some $z\in U$, where $j$ and $k$ are of the same parity. Then for some $t$, $\rho^t(x)=y$. Now assume $l_B(x)\neq l_B(y)$ but they are of the same parity. Let $v,u$ be as in (2). We compute $$\eta(w(i)^{-1}u(vu)^k)=w(i+1)^{-1}(vu)^{k+1},$$ $$\eta(w(i)^{-1}(vu)^k)=w(i+1)^{-1}u(vu)^k,$$ $$\eta(w(i)^{-1}v(uv)^k)=w(i+1)^{-1}(uv)^k,$$ $$\eta(w(i)^{-1}(uv)^k)=w(i+1)^{-1}v(uv)^{k-1}.$$ Hence for any $z\in \widetilde{W}$, $l_B(\eta(z))=l_B(z)-2.$ Therefore for some $t$, $l_B(x)=l_B(\eta^t(y))$, and we are reduced to the case already treated. \end{proof} \begin{remark} One of the classical themes in the study of Coxeter groups (or more generally in combinatorics) is the enumeration problem. Given the local finiteness of the twisted Bruhat order on affine Weyl groups, one can define the Poincar\'e series associated to the twisted Bruhat order $\leq_B$ and an element $w$ to be $F_{w,B}(t)=\sum_{u\leq_B w}t^{l_B(w)-l_B(u)}$. This generalizes the usual Poincar\'e polynomial associated to the ordinary Bruhat order. For type $\widetilde{A}_2$ and the above biclosed set $B$, we have the following result. If $l(w)$ is even, $F_{w,B}(t)=\frac{t^3+2t^2+2t+1}{t^4-2t^2+1}$. If $l(w)$ is odd, $F_{w,B}(t)=\frac{2t^2+3t+1}{t^4-2t^2+1}$. To see this, one can apply the Principle of Inclusion-Exclusion and Proposition \ref{posetprop} (3) to obtain $$F_1(t)=1+2tF_2(t)-2t^2F_1(t)+t^3F_2(t)$$ $$F_2(t)=1+3tF_1(t)-2t^2F_2(t)$$ where $F_1(t)=F_{w,B}(t)$ for $w$ even and $F_2(t)=F_{w,B}(t)$ for $w$ odd. Solving for $F_1(t)$ and $F_2(t)$, one obtains the closed formulae. \end{remark} \section{Tope Poset of the affine root system and a Bj\"{o}rner-Edelman-Ziegler type theorem}\label{omsec} The root system of a Coxeter group can be naturally regarded as an oriented matroid. The affine root systems provide interesting and accessible examples of infinite oriented matroids in the sense of \cite{largeconvex}. In this section we will use the twisted weak order as a tool to investigate the tope poset of the oriented matroid arising from an affine root system. A (possibly infinite) oriented matroid is a triple $(E,*,cx)$ where $E$ is a set with an involution map $*: E\rightarrow E$ (i.e. $x^{**}=x, x\neq x^*$) and $cx$ is a closure operator on $E$ such that (i) if $x\in cx(X)$ there exists a finite set $Y\subseteq X$ such that $x\in cx(Y)$, (ii) $cx(X)^*=cx(X^*)$, (iii) if $x\in cx(X\cup \{x^*\})$ then $x\in cx(X)$, (iv) if $x\in cx(X\cup \{y^*\})$ and $x\not\in cx(X)$ then $y\in cx(X\backslash\{y\}\cup\{x^*\})$. Given a root system $\Phi$, $(\Phi,-,\text{cone}_{\Phi})$ is an oriented matroid where $\text{cone}_{\Phi}(\Gamma)=\text{cone}(\Gamma)\cap \Phi$ and $\cone(\Gamma)=\{\sum_{i\in I}k_iv_i|v_i\in \Gamma\cup\{0\}, k_i\in \mathbb{R}_{\geq 0}, |I|<\infty\}$. 
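To fix ideas, here is a small example of this closure operator, stated only as an illustration. For the root system $\Phi=\{\pm\alpha,\pm\beta,\pm(\alpha+\beta)\}$ of type $A_2$ one has $$\mathrm{cone}_{\Phi}(\{\alpha,\beta\})=\{\alpha,\beta,\alpha+\beta\},\qquad \mathrm{cone}_{\Phi}(\{\alpha,-\alpha\})=\{\alpha,-\alpha\},$$ and the positive system $\{\alpha,\beta,\alpha+\beta\}$ is a convex set $H$ with $H\cap -H=\emptyset$ and $H\cup -H=\Phi$, i.e. a hemispace (tope) in the sense defined next. 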
A $cx$-closed set in $E$ is said to be convex. A convex set $H\subset E$ is called a hemispace or a tope if $H\cap H^*=\emptyset, H\cup H^*=E$. Suppose that $\Phi$ is the root system of a finite Coxeter system. Then every positive system is a hemispace of the oriented matroid $(\Phi,-,\text{cone}_{\Phi})$ and vice versa. We denote the set of hemispaces by $\mathcal{H}(E).$ For a finite oriented matroid $(E,*,cx)$ and a hemispace $H$, there exists a unique minimal subset of $H$ whose $cx-$closure is $H$. We denote such a set by $\mathrm{ex}(H)$ and call it the set of extreme elements of $H.$ A hemispace is said to be simplicial if $|\mathrm{ex}(H)|$ equals the rank of the underlying (unoriented) matroid of $(E,*,cx)$. A finite oriented matroid is said to be simplicial if all of its hemispaces are simplicial. Denote by $\Delta$ the set symmetric difference. Fix a hemispace $H$ of $(E,*,cx)$; one can then define a poset structure on the set of hemispaces by $F\leq G\Longleftrightarrow F\Delta H \subset G\Delta H, F,G\in \mathcal{H}(E).$ We call such a poset the tope poset based at $H.$ In a tope poset $P$ of an infinite oriented matroid, we say two hemispaces $H_1, H_2$ are in the same block of $P$ if their symmetric difference is finite. In \cite{hyperplane}, Bj\"{o}rner, Edelman and Ziegler proved the following. \begin{theorem} For a finite simplicial oriented matroid, the tope poset for any choice of base hemispace is a lattice. \end{theorem} It is well known that for the root system of a finite Coxeter system, the corresponding oriented matroid is simplicial. In fact all tope posets are isomorphic to the ordinary right weak order of the Coxeter group (which is a lattice). In this section we prove the following analogous theorem for irreducible affine root systems. \begin{theorem}\label{intervallattice} Let $\Phi$ be a finite irreducible crystallographic root system and let $H$ be any hemispace of the oriented matroid $(\widetilde{\Phi},-,cx)$ where $cx:=\mathrm{cone}_{\widetilde{\Phi}}$. Let $H_1, H_2$ be two hemispaces in the same block with $H_1\leq H_2$ in the tope poset based at $H$. Then $[H_1, H_2]$ is a lattice. \end{theorem} We will first classify all hemispaces of $(\widetilde{\Phi},-,cx)$. Let $\Gamma$ be a subset of a real vector space $V$. A subset $A$ of $\Gamma$ is called biconvex in $\Gamma$ if $A$ and $\Gamma\backslash A$ are both convex in the oriented matroid $(\Gamma,-,\mathrm{cone}_{\Gamma})$. For the next lemma, let $\Phi$ be a finite irreducible crystallographic root system and let $\Psi^+$ be a positive system of $\Phi$. \begin{lemma}\label{lem:coneintersectzero} Let $P(\Psi^+,\Delta_1,\Delta_2)$ be a 2-closure biclosed set in $\Phi$. Then $\mathbb{R}\Delta_1\cap \mathrm{cone}(P(\Psi^+,\Delta_1,\Delta_2))=\{0\}$ and the 2-closure biclosed set $P(\Psi^+,\Delta_1,\Delta_2)$ is biconvex in $\Phi$. \end{lemma} \begin{proof} Take a linear function $f:\mathbb{R}\Phi\rightarrow \mathbb{R}$ such that $f$ is positive on the simple system of $\Psi^+$ outside $\Delta_1\cup\Delta_2$ and zero on $\Delta_1\cup \Delta_2.$ Then $f$ is non-negative on $P(\Psi^+,\Delta_1,\Delta_2)$. 
If $\gamma_1,\cdots,\gamma_k\in P(\Psi^+,\Delta_1,\Delta_2)$, $c_1,\cdots,c_k\in \mathbb{R}_{>0}$ and $\gamma=\sum_{i=1}^kc_i\gamma_i\in \mathbb{R}\Delta_1,$ then, applying $f$ to both sides, we see that all $\gamma_i$ are in $\mathbb{R}(\Delta_1\cup \Delta_2)$, since $f(\gamma)=0.$ Then each of them lies in either $\mathbb{R}\Delta_1$ or $\mathbb{R}\Delta_2$, as $\Delta_1\perp\Delta_2.$ But since they are in $P(\Psi^+,\Delta_1,\Delta_2)$, they have to be in $\mathbb{R}\Delta_2.$ This forces $\gamma=0.$ To prove the second assertion, again let $\gamma_1,\cdots,\gamma_k\in P(\Psi^+,\Delta_1,\Delta_2)$, $c_1,\cdots,c_k\in \mathbb{R}_{>0}$ and $\gamma=\sum_{i=1}^kc_i\gamma_i$. Note that $\Phi\backslash P(\Psi^+,\Delta_1,\Delta_2)=(\mathbb{R}\Delta_1\cap\Phi)\uplus -P(\Psi^+,\Delta_1\cup\Delta_2,\emptyset).$ Suppose $\gamma\in -P(\Psi^+,\Delta_1\cup\Delta_2,\emptyset)$; applying $f$ to $\gamma$ one gets a negative number, which is a contradiction. Therefore, combining this with the first assertion, one sees that $P(\Psi^+,\Delta_1,\Delta_2)$ is convex. Note that $\Phi\backslash P(\Psi^+,\Delta_1,\Delta_2)=P(\Psi^-,-\Delta_1,-\Delta_2)$, which is therefore convex as well. Hence we see that $P(\Psi^+,\Delta_1,\Delta_2)$ is biconvex in $\Phi$. \end{proof} Similarly one can define the notion of a 2 closure hemispace in a root system $\Phi$. A 2 closure hemispace in $\Phi$ is a 2 closure closed set $H$ such that $H\uplus -H=\Phi$. A hemispace of the oriented matroid $(\Phi,-,cx)$ is necessarily a 2 closure hemispace of $\Phi$, but not vice versa. There exists a bijection between the set of 2 closure biclosed sets in $\Phi^+$ and the set of 2 closure hemispaces, given by the map $B\mapsto B\uplus -(\Phi^+\backslash B)$ for $B$ a 2 closure biclosed set in $\Phi^+.$ Since the 2 closure biclosed sets in $\widetilde{\Phi}^+$ are classified, the corresponding 2 closure hemispaces are therefore known via this bijection. We will next examine which of these 2 closure hemispaces are indeed hemispaces of the oriented matroid $(\widetilde{\Phi},-,cx)$. \begin{lemma}\label{lem:finitehemispace} For any Coxeter system $(W,S)$ and $w\in W$, (1) the 2-closure hemispace $N(w)\uplus -(\Phi^+\backslash N(w))$ is convex, (2) the 2-closure hemispace $N(w)'\uplus -(\Phi^+\backslash N(w)')$ is convex. \end{lemma} \begin{proof} (1) follows from the fact that $N(w)\uplus -(\Phi^+\backslash N(w))=-w\Phi^+$ and that $\Phi^+$ is convex. (2) follows from the fact that $N(w)'\uplus -(\Phi^+\backslash N(w)')=w\Phi^+$ and that $\Phi^+$ is convex. \end{proof} Now let $P(\Psi^+,\Delta_1,\Delta_2)$ be a 2 closure biclosed set (also biconvex by Lemma \ref{lem:coneintersectzero}) in $\Phi$. Any 2 closure biclosed set in $(\Phi)^{\wedge}$ is of the form $B:=(P(\Psi^+,\Delta_1,\Delta_2)^{\wedge}\backslash B_2)\cup B_1$ where $B_1$ (resp. $B_2$) is a finite 2 closure biclosed set in $(\Phi_1)^{\wedge}$ (resp. $(\Phi_2)^{\wedge}$) and $\Phi_i=\mathbb{R}\Delta_i\cap \Phi$. Then we can associate to it the 2 closure hemispace $H:=B\uplus -((\Phi)^{\wedge}\backslash B)$. We investigate when this 2 closure hemispace is a cone closure (i.e. oriented matroidal) hemispace. \begin{proposition}\label{hemispacecoincide} (1) $H$ is convex if $\Delta_2$ is empty or $\Delta_1$ is empty. (2) If both $\Delta_1$ and $\Delta_2$ are non-empty, $H$ is not convex. \end{proposition} \begin{proof} (1) Assume $\Delta_2=\emptyset.$ Suppose $\{\pm\alpha_1,\pm\alpha_2,\cdots,\pm\alpha_m\}=\mathbb{R}\Delta_1\cap \Phi=\Phi_1.$ Let $\{\gamma_1,\gamma_2,\cdots,\gamma_p\}=\Psi^+\backslash \Phi_1$. 
By construction $H$ contains $(\gamma_1)_{-\infty}^{\infty},(\gamma_2)_{-\infty}^{\infty},\cdots,(\gamma_p)_{-\infty}^{\infty}$. Suppose $\alpha_{i_1}+k_1\delta, \alpha_{i_2}+k_2\delta, \cdots, \alpha_{i_t}+k_t\delta, \gamma_{j_1}+l_1\delta, \gamma_{j_2}+l_2\delta,\cdots, \gamma_{j_s}+l_s\delta\in H$ where $\alpha_{i_p}\in \Phi_1, 1\leq p\leq t.$ Take $a_1, a_2, \cdots, a_t, b_1, b_2, \cdots, b_s\in \mathbb{R}_{>0}$. Assume that \begin{equation}\label{imp} \sum_{u=1}^t a_u(\alpha_{i_u}+k_u\delta)+\sum_{v=1}^s b_v(\gamma_{j_v}+l_v\delta)=\alpha'+q\delta \end{equation} where $\alpha'\in \Phi_1$ and $\alpha'+q\delta\not\in H.$ Then one has $$\alpha'-\sum_{u=1}^t a_u\alpha_{i_u}=\sum_{v=1}^s b_v\gamma_{j_v}.$$ By Lemma \ref{lem:coneintersectzero} both sides have to be $0$. This forces all $b_v=0$ (as all $\gamma_{j_v}$ are in a positive system). So we have $$\sum_{u=1}^t a_u(\alpha_{i_u}+k_u\delta)=\alpha'+q\delta.$$ On the other hand, note that $H\cap \widetilde{\Phi_1}$ is a hemispace in $(\widetilde{\Phi_1},-,\mathrm{cone}_{\widetilde{\Phi_1}})$. Indeed it is $B_1\cup -((\Phi_1)^{\wedge}\backslash B_1)$. Since $B_1$ is finite and biclosed in $(\Phi_1)^{\wedge}$, it is of the form $\Phi_w$ where $w$ is an element of the reflection subgroup generated by the reflections corresponding to the roots in $(\Phi_1)^{\wedge}$. So Lemma \ref{lem:finitehemispace} (1) ensures that the above equation \ref{imp} is not possible. Now assume that \begin{equation}\label{imp2} \sum_{u=1}^t a_u(\alpha_{i_u}+k_u\delta)+\sum_{v=1}^s b_v(\gamma_{j_v}+l_v\delta)=-\gamma_w+q\delta. \end{equation} Then one has $$-\sum_{u=1}^t a_u\alpha_{i_u}=\sum_{v=1}^s b_v\gamma_{j_v}+\gamma_w.$$ Again by Lemma \ref{lem:coneintersectzero} both sides have to be $0$. But all $\gamma_{j_v}$ and $\gamma_w$ lie in a positive system of $\Phi$, so the right-hand side cannot be $0$. So the equation \ref{imp2} is not possible. Hence $H$ is indeed convex. Now assume $\Delta_1=\emptyset$. Then the same type of reasoning as above (using Lemma \ref{lem:finitehemispace} (2)) shows that $H$ is convex. (2) Now we assume that both $\Delta_1$ and $\Delta_2$ are non-empty. Again suppose that $\{\pm\alpha_1,\pm\alpha_2,\cdots,\pm\alpha_m\}=\mathbb{R}\Delta_1\cap \Phi=\Phi_1$ and assume that $\alpha_1,\alpha_2,\cdots, \alpha_m\in \Phi^+.$ Suppose that $\{\pm\beta_1, \pm\beta_2, \cdots, \pm\beta_l\}=\mathbb{R}\Delta_2\cap \Phi=\Phi_2$ and assume that $\beta_1, \beta_2, \cdots, \beta_l\in \Phi^+.$ By Lemma 4.1 of \cite{wang}, $(\{\beta_i+t\delta|t\in \mathbb{Z}\}\cup \{-\beta_i+t\delta|t\in \mathbb{Z}\})\cap \big((P(\Psi^+,\Delta_1,\Delta_2)^{\wedge}\backslash B_2)\cup B_1\big)$ has four possibilities: (i) $(\beta_i)_p^{\infty}\cup (-\beta_i)_1^{\infty}, p\geq 0,$ (ii) $(\beta_i)_p^{\infty}\cup (-\beta_i)_0^{\infty}, p\geq 1,$ (iii) $(-\beta_i)_p^{\infty}\cup (\beta_i)_1^{\infty}, p\geq 0,$ (iv) $(-\beta_i)_p^{\infty}\cup (\beta_i)_0^{\infty}, p\geq 1.$ Correspondingly $(\{\beta_i+t\delta|t\in \mathbb{Z}\}\cup \{-\beta_i+t\delta|t\in \mathbb{Z}\})\cap H$ can be one of the following four possibilities: (i) $(\beta_i)_p^{\infty}\cup (-\beta_i)_{1-p}^{\infty}, p\geq 0,$ (ii) $(\beta_i)_p^{\infty}\cup (-\beta_i)_{1-p}^{\infty}, p\geq 1,$ (iii) $(-\beta_i)_p^{\infty}\cup (\beta_i)_{1-p}^{\infty}, p\geq 0,$ (iv) $(-\beta_i)_p^{\infty}\cup (\beta_i)_{1-p}^{\infty}, p\geq 1.$ We note that in each of these four cases, for sufficiently large $s,t\in \mathbb{Z}_{>0}$, both $\beta_i+s\delta$ and $-\beta_i+t\delta$ are in $H$. It follows that $\delta\in \cone(H)$. 
Similarly, by Lemma 4.1 of \cite{wang}, $(\{\alpha_i+t\delta|t\in \mathbb{Z}\}\cup \{-\alpha_i+t\delta|t\in \mathbb{Z}\})\cap \big((P(\Psi^+,\Delta_1,\Delta_2)^{\wedge}\backslash B_2)\cup B_1\big)$ has four possibilities: (i) $(\alpha_i)_0^p, p\geq 0$, (ii) $(-\alpha_i)_0^p, p\geq 0,$ (iii) $(\alpha_i)_1^p, p\geq 1,$ (iv) $(-\alpha_i)_1^p, p\geq 1.$ Correspondingly $(\{\alpha_i+t\delta|t\in \mathbb{Z}\}\cup \{-\alpha_i+t\delta|t\in \mathbb{Z}\})\cap H$ can be one of the following four possibilities: (i) $(\alpha_i)_{-\infty}^p\cup (-\alpha_i)_{-\infty}^{-p-1}, p\geq 0,$ (ii) $(-\alpha_i)_{-\infty}^p\cup (\alpha_i)_{-\infty}^{-p-1}, p\geq 0,$ (iii) $(\alpha_i)_{-\infty}^p\cup (-\alpha_i)_{-\infty}^{-p-1}, p\geq 1,$ (iv) $(-\alpha_i)_{-\infty}^p\cup (\alpha_i)_{-\infty}^{-p-1}, p\geq 1.$ We note that in each of these four cases there exist $s,t\in \mathbb{Z}_{<0}$ such that $\alpha_i+s\delta$ and $-\alpha_i+t\delta$ are in $H$. Hence $-\delta\in \cone(H)$. Thus we see that $(\{\beta_i+t\delta|t\in \mathbb{Z}\}\cup\{-\beta_i+t\delta|t\in \mathbb{Z}\})$ (resp. $(\{\alpha_i+t\delta|t\in \mathbb{Z}\}\cup\{-\alpha_i+t\delta|t\in \mathbb{Z}\})$) is completely contained in $\cone(H).$ Therefore $H$ is not convex. \end{proof} Now by Theorem \ref{infiniteword} we immediately obtain \begin{corollary}\label{twotypes} For an irreducible affine Weyl group $\widetilde{W}$, any hemispace of the oriented matroid $(\widetilde{\Phi},-,\mathrm{cone}_{\widetilde{\Phi}})$ is of the form $N(w)\uplus -(\Phi^{\wedge}\backslash N(w))$ or $-N(w)\uplus (\Phi^{\wedge}\backslash N(w))$ where $w\in \widetilde{W}$ or is an infinite reduced word. \end{corollary} \begin{remark} Let $(W,S)$ be a rank 3 universal Coxeter system. The three simple roots are denoted by $\alpha_1,\alpha_2,\alpha_3.$ Then $B:=\{\alpha\in \Phi^+|\alpha=k_1\alpha_1+k_2\alpha_2\}$ is a biclosed set in $\Phi^+$. One easily sees that $H=B\uplus -(\Phi^+\backslash B)=\{\alpha\in \Phi^+|\alpha=k_1\alpha_1+k_2\alpha_2\}\uplus \{\alpha\in \Phi^-|\alpha=k_1\alpha_1+k_2\alpha_2+k_3\alpha_3, k_3<0\}$ is a matroidal hemispace of the oriented matroid $(\Phi,-,\mathrm{cone}_{\Phi})$. However, in this case neither $B$ nor $\Phi^+\backslash B$ is an inversion set. \end{remark} \begin{lemma}\label{torder} Let $B=P(\Psi^+,\Delta_1,\emptyset)^{\wedge}$ be a biclosed set in $(\Phi)^{\wedge}$. Let $H=B\uplus -((\Phi)^{\wedge}\backslash B)$ be a hemispace of the oriented matroid $(\widetilde{\Phi},-,cx)$ and let $G$ be an arbitrary hemispace of $(\widetilde{\Phi},-,cx)$. Then the block containing $H$ of the tope poset based at $G$ is isomorphic to the twisted weak order $(W',G\cap \widetilde{\Phi_1})$, where $W'$ is the reflection subgroup generated by $\widetilde{\Phi_1}$ and $\widetilde{\Phi_1}=\pm(\mathbb{R}\Delta_1)^{\wedge}.$ \end{lemma} \begin{proof} First note that since $G$ is a hemispace in the oriented matroid, it is biclosed in $\widetilde{\Phi}$. Therefore $G\cap \widetilde{\Phi_1}$ is indeed a biclosed set in $\widetilde{\Phi_1}.$ Let $H_1=B_1\uplus -(\Phi^{\wedge}\backslash B_1)$ be another hemispace. If $H$ and $H_1$ are in the same block, then clearly the symmetric difference of $B$ and $B_1$ is finite. By Proposition 5.11 and Corollary 5.12 of \cite{DyerReflOrder} (see also Theorem 1.3 of \cite{wang}), $B_1=w\cdot B$ for some unique $w\in W'$. Then $H_1=w\cdot B\uplus -(\Phi^{\wedge}\backslash w\cdot B)=wH$ by \cite{BPK} 3.2(e). Now define the map from the block containing $H$ of the tope poset based at $G$ to $(W',G\cap \widetilde{\Phi_1})$ by $wH\mapsto w$. 
We must show that this map and its inverse preserve the orders. Take $w_1H, w_2H$ such that $w_1H<w_2H$ in the block containing $H$ of the tope poset based at $G$. Note that $w_iH=w_i\cdot P(\Psi^+,\Delta_1,\emptyset)^{\wedge} \uplus -((\Phi)^{\wedge}\backslash w_i\cdot P(\Psi^+,\Delta_1,\emptyset)^{\wedge})$, $i=1,2$, by \cite{BPK} 3.2(e). Further one has that $w_i\cdot P(\Psi^+,\Delta_1,\emptyset)^{\wedge}=P(\Psi^+,\Delta_1,\emptyset)^{\wedge}\uplus (N(w_i)\cap \widetilde{\Phi_1})$ by Proposition 5.11 of \cite{DyerReflOrder}. Hence $w_iH=\big(H\uplus (N(w_i)\cap \widetilde{\Phi_1})\big)\backslash \big(-(N(w_i)\cap \widetilde{\Phi_1})\big)$. Then $w_1H\Delta G\subset w_2H\Delta G$ if and only if the following two containments hold: $$(N(w_1)\cap \widetilde{\Phi_1})\backslash G\subset (N(w_2)\cap \widetilde{\Phi_1})\backslash G,$$ $$(N(w_2)\cap \widetilde{\Phi_1})\cap G\subset (N(w_1)\cap \widetilde{\Phi_1})\cap G.$$ One sees that these two containments are equivalent to the conditions $(N(w_1)\cap \widetilde{\Phi_1})\backslash (N(w_2)\cap \widetilde{\Phi_1})\subset G$ and $(N(w_2)\cap \widetilde{\Phi_1})\backslash (N(w_1)\cap \widetilde{\Phi_1})\subset -G$, i.e. $w_1\leq_{G\cap \widetilde{\Phi_1}} w_2$. \end{proof} \begin{corollary}\label{block} Let $P$ be a block of a tope poset of $(\widetilde{\Phi},-,cx)$. Then $P$ is isomorphic to the twisted weak order on some Coxeter group $W'$. \end{corollary} \begin{proof} By Corollary \ref{twotypes}, there are two possible types of the base tope (hemispace). Case I: the base hemispace is of the form $N(w)\uplus -(\widehat{\Phi}\backslash N(w))$ where $w\in \widetilde{W}$ or $w\in \widetilde{W}_l$. Then by Theorem \ref{infiniteword}, $N(w)=u\cdot P(\Psi^+,\Delta_1,\emptyset)^{\wedge}$ where $u\in \widetilde{W}$. (Note that if $w\in \widetilde{W}$, then $\Delta_1=\Delta$ and $P(\Psi^+,\Delta_1,\emptyset)^{\wedge}=\emptyset.$) If $u=e$, the assertion follows from Lemma \ref{torder}. Now assume that $u\neq e$. Note that the map $H\mapsto uH$ is a bijection from the set of hemispaces to itself. One easily sees that the tope poset based at $uH$ is isomorphic to the tope poset based at $H$ by noting that $G_1\Delta H\subset G_2\Delta H\Leftrightarrow uG_1\Delta uH\subset uG_2\Delta uH$. Therefore the assertion also holds in this situation. Case II: the base hemispace is of the form $-N(w)\uplus (\widehat{\Phi}\backslash N(w))$ where $w\in \widetilde{W}$ or $w\in \widetilde{W}_l$. One easily sees that such a tope poset is the opposite poset of the tope poset based at $N(w)\uplus -(\widehat{\Phi}\backslash N(w))$. Since the opposite poset of the twisted weak order $(W,\leq_{B'})$ is isomorphic to $(W,\leq_{\Phi^+\backslash B'})$, the assertion also holds in this case. \end{proof} \noindent \emph{Proof of Theorem \ref{intervallattice}}. By Corollary \ref{block}, $[H_1, H_2]$ is isomorphic to a closed interval of the twisted weak order of some Coxeter group $W'$. By Corollary 4.2 of \cite{orderpaper}, it is also isomorphic to a closed interval of the ordinary weak order of $W'$. Since the ordinary weak order $(W',\leq_{\emptyset}')$ is a complete meet semilattice, such an interval is a lattice. \begin{remark} Note that the example in Section 4 of \cite{wang} shows that in general the tope poset of the oriented matroid coming from an affine root system is not a lattice. \end{remark} To end this section, we sketch (part of) the Hasse diagram of a specific tope poset of $\widetilde{\Phi}$ of type $\widetilde{A}_2$ with the base hemispace $\widetilde{\Phi}^+$. 
This particular tope poset is isomorphic to the poset of 2 closure biclosed sets in $\widetilde{\Phi}^+$ (but in general there is no such isomorphism by our results above) and thus is a lattice by \cite{wang}. The two simple roots in $\Phi^+$ are denoted by $\alpha,\beta.$ \begin{tikzpicture}[scale=.15] \tiny \node (a) at (0,0) {$H_1$}; \node (b) at (-12,4) {$H_2$}; \node (c) at (0,4) {$H_3$}; \node (d) at (16,4) {$H_4$}; \node (e) at (-20,8) {$H_5$}; \node (f) at (-10,8) {$H_6$}; \node (g) at (-2,8) {$H_7$}; \node (h) at (3,8) {$H_8$}; \node (i) at (10,8) {$H_9$}; \node (j) at (20,8) {$H_{10}$}; \node (k) at (-20,12) {$H_{11}$}; \node (l) at (-14,12) {$H_{12}$}; \node (m) at (-8,12) {$H_{13}$}; \node (n) at (-2,12) {$H_{14}$}; \node (o) at (4,12) {$H_{15}$}; \node (p) at (10,12) {$H_{16}$}; \node (q) at (16,12) {$H_{17}$}; \node (r) at (22,12) {$H_{18}$}; \node (s) at (28,12) {$H_{19}$}; \node (t) at (0,16) {$\cdots\cdots\cdots$}; \node (u) at (-17,20) {$T_1$}; \node (v) at (-6,20) {$T_2$}; \node (w) at (4,20) {$T_3$}; \node (x) at (12,20) {$T_4$}; \node (y) at (22,20) {$T_5$}; \node (z) at (31,20) {$T_6$}; \node (aa) at (-20,24) {$T_{11}$}; \node (ab) at (-14,24) {$T_{12}$}; \node (ac) at (-20,28) {$T_{13}$}; \node (ad) at (-14,28) {$T_{14}$}; \node (ae) at (-10,24) {$T_{21}$}; \node (af) at (-3,24) {$T_{22}$}; \node (ag) at (-10,28) {$T_{23}$}; \node (ah) at (-3,28) {$T_{24}$}; \node (ai) at (1,24) {$T_{31}$}; \node (aj) at (6,24) {$T_{32}$}; \node (ak) at (1,28) {$T_{33}$}; \node (al) at (6,28) {$T_{34}$}; \node (am) at (10,24) {$T_{41}$}; \node (an) at (15,24) {$T_{42}$}; \node (ao) at (10,28) {$T_{43}$}; \node (ap) at (15,28) {$T_{44}$}; \node (aq) at (19,24) {$T_{51}$}; \node (ar) at (24,24) {$T_{52}$}; \node (as) at (19,28) {$T_{53}$}; \node (at) at (24,28) {$T_{54}$}; \node (au) at (28,24) {$T_{61}$}; \node (av) at (33,24) {$T_{62}$}; \node (aw) at (28,28) {$T_{63}$}; \node (ax) at (33,28) {$T_{64}$}; \node (ay) at (0,32) {$\cdots\cdots\cdots$}; \node (az) at (-17,36) {$U_1$}; \node (ba) at (-6,36) {$U_2$}; \node (bb) at (4,36) {$U_3$}; \node (bc) at (12,36) {$U_4$}; \node (bd) at (20,36) {$U_5$}; \node (be) at (27,36) {$U_6$}; \node (bf) at (0,40) {$\cdots\cdots\cdots$}; \node (bg) at (-20,48) {$-T_{11}$}; \node (bh) at (-14,48) {$-T_{12}$}; \node (bi) at (-20,44) {$-T_{13}$}; \node (bj) at (-14,44) {$-T_{14}$}; \node (bk) at (-10,48) {$-T_{21}$}; \node (bl) at (-3,48) {$-T_{22}$}; \node (bm) at (-10,44) {$-T_{23}$}; \node (bn) at (-3,44) {$-T_{24}$}; \node (bo) at (1,48) {$-T_{31}$}; \node (bp) at (6,48) {$-T_{32}$}; \node (bq) at (1,44) {$-T_{33}$}; \node (br) at (6,44) {$-T_{34}$}; \node (bs) at (10,48) {$-T_{41}$}; \node (bt) at (15,48) {$-T_{42}$}; \node (bu) at (10,44) {$-T_{43}$}; \node (bv) at (15,44) {$-T_{44}$}; \node (bw) at (19,48) {$-T_{51}$}; \node (bx) at (24,48) {$-T_{52}$}; \node (by) at (19,44) {$-T_{53}$}; \node (bz) at (24,44) {$-T_{54}$}; \node (ca) at (28,48) {$-T_{61}$}; \node (cb) at (33,48) {$-T_{62}$}; \node (cc) at (28,44) {$-T_{63}$}; \node (cd) at (33,44) {$-T_{64}$}; \node (uu) at (-17,52) {$-T_1$}; \node (vv) at (-6,52) {$-T_2$}; \node (ww) at (4,52) {$-T_3$}; \node (xx) at (12,52) {$-T_4$}; \node (yy) at (22,52) {$-T_5$}; \node (zz) at (31,52) {$-T_6$}; \node (dots) at (0,56) {$\cdots\cdots\cdots$}; \node (aaa) at (0,72) {$-H_1$}; \node (bbb) at (-12,68) {$-H_2$}; \node (ccc) at (0,68) {$-H_3$}; \node (ddd) at (16,68) {$-H_4$}; \node (eee) at (-20,64) {$-H_5$}; \node (fff) at (-10,64) {$-H_6$}; \node (ggg) at (-2,64) {$-H_7$}; \node (hhh) at (3,64) 
{$-H_8$}; \node (iii) at (10,64) {$-H_9$}; \node (jjj) at (20,64) {$-H_{10}$}; \node (kkk) at (-20,60) {$-H_{11}$}; \node (lll) at (-14,60) {$-H_{12}$}; \node (mmm) at (-8,60) {$-H_{13}$}; \node (nnn) at (-2,60) {$-H_{14}$}; \node (ooo) at (4,60) {$-H_{15}$}; \node (ppp) at (10,60) {$-H_{16}$}; \node (qqq) at (16,60) {$-H_{17}$}; \node (rrr) at (22,60) {$-H_{18}$}; \node (sss) at (28,60) {$-H_{19}$}; \draw (ccc) -- (aaa); \draw (bbb) -- (aaa); \draw (ddd) -- (aaa); \draw (eee)--(bbb); \draw (fff)--(ccc); \draw (ggg)--(bbb); \draw (hhh)--(ddd); \draw (iii)--(ccc); \draw (jjj)--(ddd); \draw (kkk)--(eee); \draw (kkk)--(fff); \draw (lll)--(eee); \draw (mmm)--(fff); \draw (nnn)--(ggg); \draw (ooo)--(ggg); \draw (ooo)--(hhh); \draw (ppp)--(hhh); \draw (qqq)--(iii); \draw (rrr)--(iii); \draw (rrr)--(jjj); \draw (sss)--(jjj); \draw (ca)--(zz); \draw (cb)--(zz); \draw (bw)--(yy); \draw (bx)--(yy); \draw (bs)--(xx); \draw (bt)--(xx); \draw (bo)--(ww); \draw (bp)--(ww); \draw (bk)--(vv); \draw (bl)--(vv); \draw (bg)--(uu); \draw (bh)--(uu); \draw (by)--(bw); \draw (bz)--(bx); \draw (cc)--(ca); \draw (cd)--(cb); \draw (bu)--(bs); \draw (bv)--(bt); \draw (bq)--(bo); \draw (br)--(bp); \draw (bm)--(bk); \draw (bn)--(bl); \draw (bi)--(bg); \draw (bj)--(bh); \draw (c) -- (a); \draw (b) -- (a); \draw (d) -- (a); \draw (e)--(b); \draw (f)--(c); \draw (g)--(b); \draw (h)--(d); \draw (i)--(c); \draw (j)--(d); \draw (k)--(e); \draw (k)--(f); \draw (l)--(e); \draw (m)--(f); \draw (n)--(g); \draw (o)--(g); \draw (o)--(h); \draw (p)--(h); \draw (q)--(i); \draw (r)--(i); \draw (r)--(j); \draw (s)--(j); \draw (aa)--(u); \draw (ab)--(u); \draw (ac)--(aa); \draw (ad)--(ab); \draw (ae)--(v); \draw (af)--(v); \draw (ag)--(ae); \draw (ah)--(af); \draw (ai)--(w); \draw (aj)--(w); \draw (ak)--(ai); \draw (al)--(aj); \draw (am)--(x); \draw (an)--(x); \draw (ao)--(am); \draw (ap)--(an); \draw (aq)--(y); \draw (ar)--(y); \draw (as)--(aq); \draw (at)--(ar); \draw (au)--(z); \draw (av)--(z); \draw (aw)--(au); \draw (ax)--(av); \end{tikzpicture} \tiny $H_1=-\widehat{\Phi};$ $H_2=\{\alpha\}\uplus -(\widehat{\Phi}\backslash \{\alpha\})$; $H_3=\{\beta\}\uplus -(\widehat{\Phi}\backslash \{\beta\});$ $H_4=\{\delta-\alpha-\beta\}\uplus -(\widehat{\Phi}\backslash \{\delta-\alpha-\beta\})$; $H_5=\{\alpha, \alpha+\beta\}\uplus -(\widehat{\Phi}\backslash \{\alpha,\alpha+\beta\})$; $H_6=\{\beta, \alpha+\beta\}\uplus -(\widehat{\Phi}\backslash \{\beta,\alpha+\beta\});$ $H_7=\{\alpha, \delta-\beta\}\uplus -(\widehat{\Phi}\backslash \{\alpha,\delta-\beta\})$; $H_8=\{\delta-\alpha-\beta,\delta-\beta\}\uplus -(\widehat{\Phi}\backslash \{\delta-\alpha-\beta,\delta-\beta\})$; $H_9=\{\beta, \delta-\alpha\}\uplus -(\widehat{\Phi}\backslash \{\beta,\delta-\alpha\})$; $H_{10}=\{\delta-\alpha-\beta,\delta-\alpha\}\uplus -(\widehat{\Phi}\backslash \{\delta-\alpha-\beta,\delta-\alpha\})$; $H_{11}=\{\alpha, \alpha+\beta, \beta\}\uplus -(\widehat{\Phi}\backslash \{\alpha,\alpha+\beta, \beta\})$; $H_{12}=\{\alpha, \alpha+\beta, \alpha+\delta\}\uplus -(\widehat{\Phi}\backslash \{\alpha,\alpha+\beta, \alpha+\delta\})$; $H_{13}=\{\beta, \alpha+\beta, \beta+\delta\}\uplus -(\widehat{\Phi}\backslash \{\beta,\alpha+\beta, \beta+\delta\})$; $H_{14}=\{\alpha, \delta-\beta, \alpha+\delta\}\uplus -(\widehat{\Phi}\backslash \{\alpha,\delta-\beta, \alpha+\delta\})$; $H_{15}=\{\alpha, \delta-\beta, \delta-\alpha-\delta\}\uplus -(\widehat{\Phi}\backslash \{\alpha,\delta-\beta,\delta-\alpha-\delta\})$; $H_{16}=\{2\delta-\alpha-\delta, \delta-\beta, 
\delta-\alpha-\delta\}\uplus -(\widehat{\Phi}\backslash \{2\delta-\alpha-\delta,\delta-\beta,\delta-\alpha-\delta\})$; $H_{17}=\{\beta,\delta-\alpha, \beta+\delta\}\uplus -(\widehat{\Phi}\backslash \{\beta,\delta-\alpha, \beta+\delta\})$; $H_{18}=\{\beta,\delta-\alpha, \delta-\beta-\delta\}\uplus -(\widehat{\Phi}\backslash \{\beta,\delta-\alpha, \delta-\beta-\delta\})$; $H_{19}=\{2\delta-\beta-\delta,\delta-\alpha, \delta-\beta-\delta\}\uplus -(\widehat{\Phi}\backslash \{2\delta-\beta-\delta,\delta-\alpha, \delta-\beta-\delta\})$; $T_1=\widehat{\alpha}\uplus\widehat{\alpha+\beta}\uplus -(\widehat{\Phi}\backslash (\widehat{\alpha}\uplus\widehat{\alpha+\beta}))$; $T_2=\widehat{\beta}\uplus\widehat{\alpha+\beta}\uplus -(\widehat{\Phi}\backslash (\widehat{\beta}\uplus\widehat{\alpha+\beta}))$; $T_3=\widehat{\beta}\uplus\widehat{-\alpha}\uplus -(\widehat{\Phi}\backslash (\widehat{\beta}\uplus\widehat{-\alpha}))$; $T_4=\widehat{-\beta-\alpha}\uplus\widehat{-\alpha}\uplus -(\widehat{\Phi}\backslash (\widehat{-\beta-\alpha}\uplus\widehat{-\alpha}))$; $T_5=\widehat{-\beta-\alpha}\uplus\widehat{-\beta}\uplus -(\widehat{\Phi}\backslash (\widehat{-\beta-\alpha}\uplus\widehat{-\beta}))$; $T_6=\widehat{\alpha}\uplus\widehat{-\beta}\uplus -(\widehat{\Phi}\backslash (\widehat{\alpha}\uplus\widehat{-\beta}))$; $T_{11}=\widehat{\alpha}\uplus\widehat{\alpha+\beta}\uplus \{\beta\}\uplus -(\widehat{\Phi}\backslash (\widehat{\alpha}\uplus\widehat{\alpha+\beta}\uplus \{\beta\}))$; $T_{12}=\widehat{\alpha}\uplus\widehat{\alpha+\beta}\uplus \{\delta-\beta\}\uplus -(\widehat{\Phi}\backslash (\widehat{\alpha}\uplus\widehat{\alpha+\beta}\uplus \{\delta-\beta\}))$ $T_{13}=\widehat{\alpha}\uplus\widehat{\alpha+\beta}\uplus \{\beta, \beta+\delta\}\uplus -(\widehat{\Phi}\backslash (\widehat{\alpha}\uplus\widehat{\alpha+\beta}\uplus \{\beta, \beta+\delta\}))$; $T_{14}=\widehat{\alpha}\uplus\widehat{\alpha+\beta}\uplus \{\delta-\beta, 2\delta-\beta\}\uplus -(\widehat{\Phi}\backslash (\widehat{\alpha}\uplus\widehat{\alpha+\beta}\uplus \{\delta-\beta, 2\delta-\beta\}))$; $T_{21}=\widehat{\beta}\uplus\widehat{\alpha+\beta}\uplus \{\alpha\}\uplus -(\widehat{\Phi}\backslash (\widehat{\beta}\uplus\widehat{\alpha+\beta}\uplus \{\alpha\}))$; $T_{22}=\widehat{\beta}\uplus\widehat{\alpha+\beta}\uplus \{\delta-\alpha\}\uplus -(\widehat{\Phi}\backslash (\widehat{\beta}\uplus\widehat{\alpha+\beta}\uplus \{\delta-\alpha\}))$; $T_{23}=\widehat{\beta}\uplus\widehat{\alpha+\beta}\uplus \{\alpha, \alpha+\delta\}\uplus -(\widehat{\Phi}\backslash (\widehat{\beta}\uplus\widehat{\alpha+\beta}\uplus \{\alpha, \alpha+\delta\}))$; $T_{24}=\widehat{\beta}\uplus\widehat{\alpha+\beta}\uplus \{\delta-\alpha, 2\delta-\alpha\}\uplus -(\widehat{\Phi}\backslash (\widehat{\beta}\uplus\widehat{\alpha+\beta}\uplus \{\delta-\alpha, 2\delta-\alpha\}))$; $T_{31}=\widehat{\beta}\uplus\widehat{-\alpha}\uplus \{\alpha+\beta\}\uplus -(\widehat{\Phi}\backslash (\widehat{\beta}\uplus\widehat{-\alpha}\uplus \{\alpha+\beta\}))$; $T_{32}=\widehat{\beta}\uplus\widehat{-\alpha}\uplus \{\delta-\alpha-\beta\}\uplus -(\widehat{\Phi}\backslash (\widehat{\beta}\uplus\widehat{-\alpha}\uplus \{\delta-\alpha-\beta\}))$; $T_{33}=\widehat{\beta}\uplus\widehat{-\alpha}\uplus \{\alpha+\beta, \alpha+\beta+\delta\}\uplus -(\widehat{\Phi}\backslash (\widehat{\beta}\uplus\widehat{-\alpha}\uplus \{\alpha+\beta, \alpha+\beta+\delta\}))$; $T_{34}=\widehat{\beta}\uplus\widehat{-\alpha}\uplus \{\delta-\alpha-\beta, 2\delta-\alpha-\beta\}\uplus -(\widehat{\Phi}\backslash 
(\widehat{\beta}\uplus\widehat{-\alpha}\uplus \{\delta-\alpha-\beta, 2\delta-\alpha-\beta\}))$; $T_{41}=\widehat{-\alpha-\beta}\uplus\widehat{-\alpha}\uplus \{\beta\}\uplus -(\widehat{\Phi}\backslash (\widehat{-\alpha-\beta}\uplus\widehat{-\alpha}\uplus \{\beta\}))$; $T_{42}=\widehat{-\alpha-\beta}\uplus\widehat{-\alpha}\uplus \{\delta-\beta\}\uplus -(\widehat{\Phi}\backslash (\widehat{\-\alpha-\beta}\uplus\widehat{-\alpha}\uplus \{\delta-\beta\}))$; $T_{43}=\widehat{-\alpha-\beta}\uplus\widehat{-\alpha}\uplus \{\beta, \beta+\delta\}\uplus -(\widehat{\Phi}\backslash (\widehat{-\alpha-\beta}\uplus\widehat{-\alpha}\uplus \{\beta, \beta+\delta\}))$; $T_{44}=\widehat{-\alpha-\beta}\uplus\widehat{-\alpha}\uplus \{\delta-\beta, 2\delta-\beta\}\uplus -(\widehat{\Phi}\backslash (\widehat{-\alpha-\beta}\uplus\widehat{-\alpha}\uplus \{\delta-\beta, 2\delta-\beta\}))$; $T_{51}=\widehat{-\alpha-\beta}\uplus\widehat{-\beta}\uplus \{\alpha\}\uplus -(\widehat{\Phi}\backslash (\widehat{-\alpha-\beta}\uplus\widehat{-\beta}\uplus \{\alpha\}))$; $T_{52}=\widehat{-\alpha-\beta}\uplus\widehat{-\beta}\uplus \{\delta-\alpha\}\uplus -(\widehat{\Phi}\backslash (\widehat{\-\alpha-\beta}\uplus\widehat{-\beta}\uplus \{\delta-\alpha\}))$; $T_{53}=\widehat{-\alpha-\beta}\uplus\widehat{-\beta}\uplus \{\alpha, \alpha+\delta\}\uplus -(\widehat{\Phi}\backslash (\widehat{-\alpha-\beta}\uplus\widehat{-\beta}\uplus \{\alpha, \alpha+\delta\}))$; $T_{54}=\widehat{-\alpha-\beta}\uplus\widehat{-\beta}\uplus \{\delta-\alpha, 2\delta-\alpha\}\uplus -(\widehat{\Phi}\backslash (\widehat{-\alpha-\beta}\uplus\widehat{-\beta}\uplus \{\delta-\alpha, 2\delta-\alpha\}))$; $T_{61}=\widehat{\alpha}\uplus\widehat{-\beta}\uplus \{\alpha+\beta\}\uplus -(\widehat{\Phi}\backslash (\widehat{\alpha}\uplus\widehat{-\beta}\uplus \{\alpha+\beta\}))$; $T_{62}=\widehat{\alpha}\uplus\widehat{-\beta}\uplus \{\delta-\alpha-\beta\}\uplus -(\widehat{\Phi}\backslash (\widehat{\alpha}\uplus\widehat{-\beta}\uplus \{\delta-\alpha-\beta\}))$; $T_{63}=\widehat{\alpha}\uplus\widehat{-\beta}\uplus \{\alpha+\beta, \alpha+\beta+\delta\}\uplus -(\widehat{\Phi}\backslash (\widehat{\alpha}\uplus\widehat{-\beta}\uplus \{\alpha+\beta, \alpha+\beta+\delta\}))$; $T_{64}=\widehat{\alpha}\uplus\widehat{-\beta}\uplus \{\delta-\alpha-\beta, 2\delta-\alpha-\beta\}\uplus -(\widehat{\Phi}\backslash (\widehat{\alpha}\uplus\widehat{-\beta}\uplus \{\delta-\alpha-\beta, 2\delta-\alpha-\beta\}))$; $U_1=\widehat{\{\alpha,\beta,\alpha+\beta\}}\uplus -(\widehat{\Phi}\backslash \widehat{\{\alpha,\beta,\alpha+\beta\}})$; $U_2=\widehat{\{-\alpha,\beta,\alpha+\beta\}}\uplus -(\widehat{\Phi}\backslash \widehat{\{-\alpha,\beta,\alpha+\beta\}})$; $U_3=\widehat{\{-\alpha,\beta,-\alpha-\beta\}}\uplus -(\widehat{\Phi}\backslash \widehat{\{-\alpha,\beta,-\alpha-\beta\}})$; $U_4=\widehat{\{-\alpha,-\beta,-\alpha-\beta\}}\uplus -(\widehat{\Phi}\backslash \widehat{\{-\alpha,-\beta,-\alpha-\beta\}})$; $U_5=\widehat{\{\alpha,-\beta,-\alpha-\beta\}}\uplus -(\widehat{\Phi}\backslash \widehat{\{\alpha,-\beta,-\alpha-\beta\}})$; $U_6=\widehat{\{\alpha,-\beta,\alpha+\beta\}}\uplus -(\widehat{\Phi}\backslash \widehat{\{\alpha,-\beta,\alpha+\beta\}})$; \normalsize \end{document}
\begin{document} \maketitle {\it Dedicated to Franco Giannessi for his 85th birthday} \begin{abstract} We consider shape optimization problems involving functionals depending on the perimeter, the torsional rigidity and the Lebesgue measure. The scaling free cost functionals are of the form $P(\Omega)T^q(\Omega)|\Omega|^{-2q-1/2}$ and the class of admissible domains consists of two-dimensional open sets $\Omega$ satisfying the topological constraint of having a prescribed number $k$ of bounded connected components of the complementary set. A relaxation procedure is needed to have a well-posed problem and we show that when $q<1/2$ an optimal relaxed domain exists. When $q>1/2$ the problem is ill-posed, and for $q=1/2$ the explicit value of the infimum is provided in the cases $k=0$ and $k=1$. \end{abstract} \textbf{Keywords:} torsional rigidity, shape optimization, perimeter, planar sets, topological genus. \textbf{2010 Mathematics Subject Classification:} 49Q10, 49J45, 49R05, 35P15, 35J25. \section{Introduction\label{sintro}} In the present paper we aim to study some particular shape optimization problems in classes of planar domains having a prescribed topology. The quantities we are going to consider for a general bounded open set $\Omega$ are the distributional perimeter $P(\Omega)$ and the torsional rigidity $T(\Omega)$. More precisely, we deal with a scaling free functional $F_q$ which is expressed as the product of the perimeter and of suitable powers of the torsional rigidity and of the Lebesgue measure of $\Omega$, depending on a positive parameter $q$. The restriction to the planar case is essential and is not made merely for the sake of simplicity; indeed, in higher dimensions stronger topological constraints have to be imposed to make the problems well posed. In a previous paper \cite{BBP20} we treated the problem above in every space dimension and, after discussing it for general open sets, we focused on the class of convex open sets. In the following we consider the optimization problems for $F_q$ in the classes $\mathcal{A}_k$ of planar domains having at most $k$ ``holes''. While the maximization problems are always ill posed, even in the class of smooth open sets in $\mathcal{A}_k$, it turns out that the minimization problems are interesting if $q\le 1/2$ and some regularity constraints are imposed on the sets $\Omega\in\mathcal{A}_k$. In this case, we provide an explicit lower bound for $F_q$ in the class of Lipschitz sets in $\mathcal{A}_k$, which turns out to be sharp when $k=0,1$ and $q=1/2$ and coincides with the infimum of $F_q$ in the class of convex sets, as pointed out by Polya in \cite{polya60}. When $q<1/2$ we study the existence of minimizers for $F_q$; our approach is that of the direct methods of the calculus of variations, which consists of the following steps: \begin{itemize} \item[-]defining the functional $F_q$ only for Lipschitz domains of the class $\mathcal{A}_k$; \item[-]relaxing the functional $F_q$ on the whole class $\mathcal{A}_k$, with respect to a suitable topology; \item[-]showing that the relaxed functional admits an optimal domain in $\mathcal{A}_k$; \item[-]proving that such a domain is Lipschitz. \end{itemize} The relaxation procedure above is necessary to avoid trivial counterexamples, due to the fact that the perimeter is Lebesgue measure sensitive, while the torsional rigidity is capacity sensitive. 
As in most free boundary problems, the last regularity step presents strong difficulties and, even though the regularity of optimal domains is to be expected, we are unable to give a full proof of this fact. It would be very interesting to establish whether an optimal domain fulfills some kind of regularity, or at least whether its perimeter coincides with the Hausdorff measure of the boundary, which amounts to excluding the presence of internal fractures. This paper is organized as follows. In Section \ref{spre}, after recalling the definitions of perimeter and torsional rigidity, we summarize the main results of this paper. In Section \ref{sapp} we describe the key tools necessary to apply the so-called method of \textit{interior parallels}, introduced by Makai in \cite{Ma}, \cite{Ma59} and by Polya in \cite{polya60}, to our setting. Section \ref{shau} contains a review of some basic facts concerning the complementary Hausdorff convergence, with respect to which we perform the relaxation procedure. Although Sections \ref{sapp} and \ref{shau} may be seen as preliminary, we believe they contain some interesting results that, as far as we know, are new in the literature. Finally, in Section \ref{sexis} we discuss the optimization problem: we extend a well-known inequality due to Polya (Theorem \ref{theo.Polya} and Remark \ref{rem.polya}), and we prove the main results (Corollary \ref{coro.polya} and Theorem \ref{theo.exis}). \section{Preliminaries}\label{spre} The shape functionals we consider in this paper are of the form \begin{equation}\label{Fq}F_q(\Omega)=\frac{P(\Omega)T^q(\Omega)}{|\Omega|^{2q+1/2}} \end{equation} where $q>0$, $\Omega\subset\mathbb{R}^2$ is a general bounded open set and $|\Omega|$ denotes its Lebesgue measure. For the reader's convenience, in the following we report the definitions and the basic properties of the perimeter and of the torsional rigidity. According to the De Giorgi formula, the perimeter is given by $$P(\Omega)=\sup\left\{\int_\Omega\dive\phi\,dx\ :\ \phi\in C^1_c(\mathbb{R}^2;\mathbb{R}^2),\ \|\phi\|_{L^\infty(\mathbb{R}^2)}\le1\right\},$$ and satisfies: \begin{itemize} \item[-]the {\it scaling property} $$P(t\Omega)=tP(\Omega)\qquad\text{for every }t>0;$$ \item[-]the lower semicontinuity with respect to the $L^1$-convergence, that is, the convergence of characteristic functions; \item[-]the {\it isoperimetric inequality} \begin{equation}\label{isoper} \frac{P(\Omega)}{|\Omega|^{1/2}}\ge\frac{P(B)}{|B|^{1/2}} \end{equation} where $B$ is any disc in $\mathbb{R}^2$. In addition, the inequality above becomes an equality if and only if $\Omega$ is a disc (up to sets of Lebesgue measure zero). \end{itemize} The torsional rigidity $T(\Omega)$ is defined as $$T(\Omega)=\int_\Omega u\,dx$$ where $u$ is the unique solution of the PDE \begin{equation}\label{pdetorsion}\begin{cases} -\Delta u=1&\text{in }\Omega,\\ u\in H^1_0(\Omega). \end{cases} \end{equation} By means of an integration by parts we can equivalently express the torsional rigidity as \begin{equation} \label{vartor} T(\Omega)=\max\Big\{\Big[\int_\Omega u\,dx\Big]^2\Big[\int_\Omega|\nabla u|^2\,dx\Big]^{-1}\ :\ u\in H^1_0(\Omega)\setminus\{0\}\Big\}. 
\end{equation} The main properties we use for the torsional rigidity are: \begin{itemize} \item[-]the monotonicity with respect to set inclusion $$\Omega_1\subset\Omega_2\Longrightarrow T(\Omega_1)\le T(\Omega_2);$$ \item[-]the additivity on disjoint families of open sets $$T\Big(\bigcup_n\Omega_n\Big)=\sum_n T(\Omega_n)\qquad\text{whenever the $\Omega_n$ are pairwise disjoint;}$$ \item[-]the scaling property $$T(t\Omega)=t^4T(\Omega),\qquad\text{for every }t>0;$$ \item[-]the relation between torsional rigidity and Lebesgue measure (known as the {\it Saint-Venant inequality}) \begin{equation}\label{stven} \frac{T(\Omega)}{|\Omega|^2}\le\frac{T(B)}{|B|^2}. \end{equation} In addition, the inequality above becomes an equality if and only if $\Omega$ is a disc (up to sets of capacity zero). \end{itemize} If we denote by $B_1$ the unit disc of $\mathbb{R}^2$, then the solution of \eqref{pdetorsion}, with $\Omega=B_1$, is $$u(x)=\frac{1-|x|^2}{4}$$ which provides $$T(B_1)=\frac{\pi}{8}.$$ Thanks to the scaling properties of the perimeter and of the torsional rigidity, the functional $F_q$ defined by \eqref{Fq} is {\it scaling free} and optimizing it in a suitable class $\mathcal{A}$ is equivalent to optimizing the product $P(\Omega)T^q(\Omega)$ over $\mathcal{A}$ with the additional measure constraint $|\Omega|=m$, for a fixed $m>0$. In a previous paper \cite{BBP20} we considered the minimum and the maximum problem for $F_q$ (in every space dimension) in the classes \[\begin{split} &\mathcal{A}_{all}:=\big\{\Omega\subset\mathbb{R}^d\ :\ \Omega\ne\emptyset\big\}\\ &\mathcal{A}_{convex}:=\big\{\Omega\subset\mathbb{R}^d\ :\ \Omega\ne\emptyset,\ \Omega\text{ convex}\big\}. \end{split}\] We summarize below the results available in dimension 2: \begin{itemize} \item[-]for every $q>0$ $$\inf\big\{F_q(\Omega)\ :\ \Omega\in\mathcal{A}_{all},\ \Omega\text{ smooth}\big\}=0;$$ \item[-]for every $q>0$ $$\sup\big\{F_q(\Omega)\ :\ \Omega\in\mathcal{A}_{all},\ \Omega\text{ smooth}\big\}=+\infty;$$ \item[-]for every $q>1/2$ $$\begin{cases} \inf\big\{F_q(\Omega)\ :\ \Omega\in\mathcal{A}_{convex}\big\}=0\\ \max\big\{F_q(\Omega)\ :\ \Omega\in\mathcal{A}_{convex}\big\}\quad\text{is attained}; \end{cases}$$ \item[-]for every $q<1/2$ $$\begin{cases} \sup\big\{F_q(\Omega)\ :\ \Omega\in\mathcal{A}_{convex}\big\}=+\infty\\ \min\big\{F_q(\Omega)\ :\ \Omega\in\mathcal{A}_{convex}\big\}\quad\text{is attained};\\ \end{cases}$$ \item[-]for $q=1/2$ $$\begin{cases} \inf\big\{F_{1/2}(\Omega)\ :\ \Omega\in\mathcal{A}_{convex}\big\}=(1/3)^{1/2}\\ \sup\big\{F_{1/2}(\Omega)\ :\ \Omega\in\mathcal{A}_{convex}\big\}=(2/3)^{1/2}, \end{cases}$$ asymptotically attained, respectively, when $\Omega$ is a long thin rectangle and when $\Omega$ is a long thin triangle. \end{itemize} Here we discuss the optimization problems for $F_q$ on the classes of planar domains $$\mathcal{A}_k:=\big\{\Omega\subset\mathbb{R}^2\ :\ \Omega\ne\emptyset,\ \Omega\text{ bounded, }\#\Omega^c\le k\big\},$$ where, for every set $E$, we denote by $\#E$ the number of bounded connected components of $E$ and $\Omega^c=\mathbb{R}^2\setminus\Omega$. In particular $\mathcal{A}_0$ denotes the class of simply connected domains (not necessarily connected). 
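Before moving to the optimization problems on $\mathcal{A}_k$, the following short symbolic sketch (ours, not part of the paper's argument; besides the explicit torsion function of the disc and the scaling properties listed above, it uses an elementary one-dimensional approximation of the torsion function on a thin strip, introduced here only for illustration) double-checks the value $T(B_1)=\pi/8$, the scale invariance of $F_q$, and the fact that long thin rectangles give, for $q=1/2$, the value $(1/3)^{1/2}$ recalled above for convex sets.
\begin{verbatim}
# Sanity checks (illustrative sketch, not part of the paper's argument).
import sympy as sp

# (1) Torsional rigidity of the unit disc: u = (1-|x|^2)/4 gives T(B_1) = pi/8.
r = sp.symbols('r', positive=True)
print(sp.integrate((1 - r**2) / 4 * 2 * sp.pi * r, (r, 0, 1)))   # pi/8

# (2) F_q is scaling free: P(tB)=t P(B), T(tB)=t^4 T(B), |tB|=t^2 |B|.
t, q, P, T, A = sp.symbols('t q P T A', positive=True)
F = lambda P_, T_, A_: P_ * T_**q / A_**(2 * q + sp.Rational(1, 2))
diff_log = sp.log(F(t * P, t**4 * T, t**2 * A)) - sp.log(F(P, T, A))
print(sp.simplify(sp.expand_log(diff_log, force=True)))          # 0

# (3) Long thin rectangle (0,a) x (0,b), a -> oo: with the one-dimensional
# (strip) approximation of the torsion function, T is about a*b^3/12 and
# F_{1/2} tends to 1/sqrt(3), the value recalled above for convex sets.
a, b, y = sp.symbols('a b y', positive=True)
T_rect = a * sp.integrate(y * (b - y) / 2, (y, 0, b))            # a*b^3/12
F_half = 2 * (a + b) * sp.sqrt(T_rect) / (a * b)**sp.Rational(3, 2)
print(sp.limit(F_half, a, sp.oo))                                # sqrt(3)/3
\end{verbatim}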
From what we have seen above, the only interesting cases to consider are: $$\begin{cases} \text{the maximum problem for $F_q$ on $\mathcal{A}_k$ when $q\ge1/2$;}\\ \text{the minimum problem for $F_q$ on $\mathcal{A}_k$ when $q\le1/2$.} \end{cases}$$ We notice that the maximum problem is not well posed, since for every $q>0$ and every $k\ge0$ $$\sup\big\{F_q(\Omega)\ :\ \Omega\text{ smooth},\ \Omega\in\mathcal{A}_k\big\}=+\infty.$$ Indeed, it is enough to take as $\Omega_n$ a smooth perturbation of the unit disc $B_1$ such that $$B_{1/2}\subset\Omega_n\subset B_2\qquad\text{and}\qquad P(\Omega_n)\to+\infty.$$ All the domains $\Omega_n$ are simply connected, so they belong to $\mathcal{A}_k$ for every $k\ge0$, and $$|\Omega_n|\le|B_2|,\qquad T(\Omega_n)\ge T(B_{1/2}),$$ where we used the monotonicity of the torsional rigidity. Therefore $$F_q(\Omega_n)\ge\frac{P(\Omega_n)T^q(B_{1/2})}{|B_2|^{2q+1/2}}\to+\infty.$$ Moreover $$\inf\big\{F_q(\Omega)\ :\ \Omega\in\mathcal{A}_k\big\}=0,$$ as we can easily see by taking as $\Omega_n$ the unit disc of $\mathbb{R}^2$ from which we remove the $n$ segments (in polar coordinates $r,\theta$) $$S_i=\big\{\theta=2\pi i/n,\ r\in[1/n,1]\big\}\qquad i=1,\dots,n.$$ We have that all the $\Omega_n$ are simply connected, and $$|\Omega_n|=\pi,\qquad P(\Omega_n)=2\pi,\qquad T(\Omega_n)\to0,$$ so that $F_q(\Omega_n)\to0$. Therefore, the problems we study in the sequel are $$\inf\big\{F_q(\Omega)\ :\ \Omega\in \mathcal{A}_k,\text{ $\Omega$ Lipschitz}\},$$ when $q\le1/2$ and $k\in\mathbb{N}$. Denoting by $m_{q,k}$ the infimum above, we summarize our main results below. \begin{itemize} \item[-] For every $q\le1/2$ the values $m_{q,k}$ are decreasing with respect to $k$ and $$\lim_{k\to\infty}m_{q,k}=0.$$ \item[-]When $k=0,1$ it holds $$m_{1/2,0}=m_{1/2,1}=3^{-1/2}=\inf\big\{F_{1/2}(\Omega)\ :\ \Omega\text{ convex}\big\};$$ in particular, for $q=1/2$ there is no gap for $\inf F_{1/2}$ between the classes $\mathcal{A}_{convex}$, $\mathcal{A}_0$, $\mathcal{A}_1$, and the infimum is asymptotically reached by a sequence of long and thin rectangles. \item[-]For every $q\le1/2$ and $k\in\mathbb{N}$, we have $$m_{q,k}\ge\begin{cases}(8\pi)^{1/2-q}3^{-1/2}&\text{if }k=0,1, \\ (8\pi)^{1/2-q}(3^{1/2}k)^{-1}&\text{if }k>1. \end{cases}$$ \item[-]For $q<1/2$, we define a relaxed functional $\mathcal{F}_{q,k}$, which coincides with $F_q$ in the class of the sets $\Omega\in\mathcal{A}_k$ satisfying $P(\Omega)=\mathcal{H}^1(\partial\Omega)$, where $\mathcal{H}^1$ denotes the $1$-dimensional Hausdorff measure. We also prove that $\mathcal{F}_{q,k}$ admits an optimal domain $\Omega^{\star}\in\mathcal{A}_k$ with $\mathcal{H}^{1}(\partial\Omega^\star)<\infty$. \end{itemize} \section{Approximation by interior parallel sets}\label{sapp} For a given bounded nonempty open set $\Omega$ we denote by $\rho(\Omega)$ its \textit{inradius}, defined as $$\rho(\Omega):=\sup\big\{d(x,\partial\Omega)\ :\ x\in\Omega\big\},$$ where, as usual, $d(x,E):=\inf\big\{d(x,y)\ :\ y\in E\big\}$. For every $t\ge 0$, we denote by $\Omega(t)$ the \textit{interior parallel set} at distance $t$ from $\partial\Omega$, i.e. $$\Omega(t):=\big\{x\in\Omega\ :\ d(x,\partial\Omega)>t\big\},$$ and by $A(t):=|\Omega(t)|$. Moreover we denote by $L(t)$ the length of the \textit{interior parallel}, that is, of the set of points in $\Omega$ whose distance from $\partial\Omega$ equals $t$. 
More precisely we set $$L(t):=\mathcal{H}^1 (\{x\in\Omega\ :\ d(x,\partial\Omega)=t \}).$$ Notice that $\partial \Omega(t)\subseteq \{x\in\Omega\ :\ d(x,\partial\Omega)=t \}$. Using the coarea formula (see \cite{EvGa}, Theorem 3.13) we can write the following identity: \begin{equation}\label{eq.Evans} A(t)=\int_t^{\rho(\Omega)}L(s)\,ds\qquad\forall t\in(0,\rho(\Omega)). \end{equation} As a consequence it is easy to verify that for a.e. $t\in(0,\rho(\Omega))$ the derivative $A'(t)$ exists and coincides with $-L(t)$. The interior parallel sets $\Omega(t)$ belong to $\mathcal{A}_k$ as soon as $\Omega\in\mathcal{A}_k$, as the next elementary argument shows. \begin{lemm}\label{lemm.innerA_k} Let $\Omega\in\mathcal{A}_k$. Then $\Omega(t)\in \mathcal{A}_k$ for every $t\in [0,\rho(\Omega))$. \end{lemm} \begin{proof} Let $\alpha:=\#\Omega^c$ ($\le k$), let $C^1,C^2,\cdots,C^\alpha$ be the (closed) bounded connected components of $\Omega^c$ and let $C^0$ be the unbounded one. Define $$C^i(t):=\big\{x\in\mathbb{R}^2\ : \ d(x,C^i)\le t\big\}.$$ Since $C^i$ is connected, then $C^i(t)$ is connected and the set $\bigcup_{i=0}^\alpha C^i(t)$ has at most $\alpha+1$ connected components. Since we have $(\Omega(t))^c=\bigcup_{i=0}^\alpha C^i(t)$, the lemma is proved. \end{proof} In the planar case, even without any regularity assumptions on $\partial\Omega$, the sets $\Omega(t)$ are a slightly smoothed version of $\Omega$. In particular the following result (see \cite{Fu85}), which we state only in the two-dimensional case, shows that $\Omega(t)$ has a Lipschitz boundary for a.e. $t\in(0,\rho(\Omega))$. \begin{theo}[Fu]\label{theo.Fu} Let $K\subseteq \mathbb{R}^2$ be a compact set. There exists a compact set $C=C(K)\subseteq [0, 3^{-1/2}\mathrm{diam}(K)]$ such that $|C|=0$ and if $t\notin C$ then the boundary of $\{x\in\mathbb{R}^2\ :\ d(x,K)>t\}$ is a Lipschitz manifold. \end{theo} We now recall some general facts of geometric measure theory. Let $E\subset \mathbb{R}^2$; for $t\in[0,1]$ we denote by $E^{(t)}$ the set of the points where the density of $E$ equals $t$, that is $$E^{(t)}:=\{ x\in\mathbb{R}^2: \lim_{r\to 0^+} (\pi r^2)^{-1}|E\cap B_r(x)|=t\}.$$ It is well known (see \cite{AFP}, Theorem 3.61) that if $E$ is a set of finite perimeter, then $P(E)=\mathcal{H}^1(E^{(1/2)})$ and $E$ has density either $0$ or $1/2$ or $1$ at $\mathcal{H}^1$-a.e. $x\in\mathbb{R}^2$. In particular it holds \begin{equation}\label{eq.decomp}\mathcal{H}^1(\partial E)= \mathcal{H}^1(\partial E\cap E^{(0)})+ \mathcal{H}^1(\partial E\cap E^{(1)})+ P(E), \end{equation} which implies \begin{equation} \label{eq.decomp2} P(E)+2\mathcal{H}^1(\partial E\cap E^{(1)})\le 2 \mathcal{H}^1(\partial E)-P(E). \end{equation} The Minkowski content and the outer Minkowski content of $E$ are, respectively, defined as $$\mathcal{M}(E):=\lim_{t\to 0}\frac {|\{x\in\mathbb{R}^2\ :\ d(x,E)\le t\}|}{2t},$$ and $$\mathcal{SM}(E):=\lim_{t\to 0}\frac {|\{x\in\mathbb{R}^2\ :\ d(x,E)\le t\}\setminus E|}{t},$$ whenever the limits above exist. We say that a compact set $E\subset\mathbb{R}^2$ is $1$-rectifiable if there exist a compact set $K\subset\mathbb{R}$ and a Lipschitz map $f:\mathbb{R}\to \mathbb{R}^2$ such that $f(K)=E$. Any compact connected set of $\mathbb{R}^2$, namely a \textit{continuum}, with finite $\mathcal{H}^1$-measure is $1$-rectifiable (see, for instance, Theorem 4.4 in \cite{AO}). 
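To fix ideas with a concrete case (this example is not taken from the paper, but it only uses the definitions just given), consider the slit disc $\Omega=B_1\setminus S$, where $S=\{(x,0)\ :\ 0\le x\le 1/2\}$. Since $S$ is Lebesgue negligible, $\Omega$ coincides with $B_1$ up to a null set, so $|\Omega|=\pi$ and $P(\Omega)=2\pi$, while $\partial\Omega=\partial B_1\cup S$ and every point of $S$ has density $1$ for $\Omega$. Hence $\mathcal{H}^1(\partial\Omega\cap \Omega^{(0)})=0$, $\mathcal{H}^1(\partial\Omega\cap \Omega^{(1)})=1/2$ and \eqref{eq.decomp} reads $\mathcal{H}^1(\partial\Omega)=2\pi+1/2$. A direct computation on the tubes gives $\mathcal{M}(\partial\Omega)=2\pi+1/2$ and $\mathcal{SM}(\Omega)=2\pi$: the slit is an internal fracture, invisible to the perimeter and to the outer Minkowski content, while it is counted once by $\mathcal{H}^1(\partial\Omega)$ and twice by the quantity $P(\Omega)+2\mathcal{H}^1(\partial\Omega\cap \Omega^{(1)})$ appearing in \eqref{eq.decomp2} and in the estimates below.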
Finally, if $E$ is $1$-rectifiable then \begin{equation}\label{eq.Amb} \mathcal{M}(E)=\mathcal{H}^1(E) \end{equation} (see Theorem $2.106$ in \cite{AFP}) and, by Proposition 4.1 of \cite{V}, if $E$ is a Borel set and $\partial E$ is $1$-rectifiable it holds \begin{equation}\label{eq.Villa} \mathcal{SM}(E)=P(E)+2\mathcal{H}^1(\partial E\cap E^{(0)}). \end{equation} The next two results are easy consequences of \eqref{eq.Amb} and \eqref{eq.Villa}. \begin{theo}\label{theo.min} Let $\Omega$ be a bounded open set with $\mathcal{H}^1(\partial \Omega)<\infty$ and $\#\partial\Omega<+\infty$. Then $\mathcal{M}(\partial\Omega)=\mathcal{H}^{1}(\partial \Omega)$ and $\mathcal{SM}(\Omega)=P(\Omega)+2 \mathcal{H}^1(\partial \Omega\cap \Omega^{(0)}).$ \end{theo} \begin{proof} Since $\mathcal{H}^1(\partial\Omega)<\infty$, each connected component of $\partial\Omega$ is $1$-rectifiable. Since the connected components are pairwise disjoint and compact, their finite union is easily seen to be $1$-rectifiable. Then, applying \eqref{eq.Amb} and \eqref{eq.Villa}, we get the thesis. \end{proof} \begin{coro}\label{coro.Mink} Let $\Omega$ be an open set such that $\mathcal{H}^1(\partial\Omega)<\infty$ and $\#\partial\Omega<+\infty$. Then the following limit exists: $$\lim_{r\to 0^+}\frac 1 r\int_{0}^r L(t)dt= P(\Omega)+2 \mathcal{H}^1(\partial \Omega\cap \Omega^{(1)}).$$ \end{coro} \begin{proof} We denote by $L^c(t)$ the quantity $$L^c(t):=\mathcal{H}^1(\{x\in\Omega^c\ :\ d(x,\partial\Omega)=t\}).$$ By applying the coarea formula and Theorem \ref{theo.min}, it holds \begin{equation}\label{eq.g1} \lim_{r\to 0^+} \frac{1}{r}\int_{0}^{r}L^c(t)dt=\mathcal{SM}(\Omega)=P(\Omega)+2 \mathcal{H}^1(\partial \Omega\cap \Omega^{(0)}) \end{equation} and \begin{equation}\label{eq.min} \lim_{r\to 0^+}\frac{1}{r}\int_{0}^{r}\left[L(t)+L^c(t)\right]dt=2\mathcal{M}(\partial\Omega)=2\mathcal{H}^1(\partial \Omega). \end{equation} Combining \eqref{eq.decomp}, \eqref{eq.g1} and \eqref{eq.min} we get $$ \lim_{r\to 0^+}\frac{1}{r}\int_{0}^{r}L(t) dt= \lim_{r\to 0^+}\left(\frac{1}{r}\int_{0}^{r}L(t) dt+\frac{1}{r}\int_{0}^{r}L^c(t) dt-\frac{1}{r}\int_{0}^{r}L^c(t) dt\right)$$ $$= 2\mathcal{H}^1(\partial \Omega)-P(\Omega)-2 \mathcal{H}^1(\partial \Omega\cap \Omega^{(0)})= P(\Omega)+2 \mathcal{H}^1(\partial \Omega\cap \Omega^{(1)}) $$ and the thesis is achieved. \end{proof} Most of the results we present rely on a geometrical theorem proved by Sz. Nagy in \cite{Nagy59}, concerning the behavior of the function $t\mapsto A(t)=|\Omega(t)|$ for a given set $\Omega\in\mathcal{A}_k$. \begin{theo}[Sz. Nagy]\label{theo.Na} Let $\Omega\in\mathcal{A}_k$ and let $\alpha:=\#\Omega^c$. Then the function $$t\mapsto-A(t)-(\alpha-1)\pi t^{2}$$ is concave in $[0,\rho(\Omega))$. \end{theo} As a consequence of Corollary \ref{coro.Mink} and Theorem \ref{theo.Na} we have the following result. \begin{theo}\label{theo.Nareg} Let $\Omega\in\mathcal{A}_k$ with $\mathcal{H}^1(\partial \Omega)<\infty$ and $\#\Omega<+\infty$. Then, for a.e. $t\in(0,\rho(\Omega))$, it holds: \begin{align}\label{eq.boundL} &L(t)\le P(\Omega)+2 \mathcal{H}^1(\partial \Omega\cap \Omega^{(1)}) +2\pi(k-1)t;\\ \label{eq.boundA} &A(t)\le (P(\Omega)+2 \mathcal{H}^1(\partial \Omega\cap \Omega^{(1)}))(\rho(\Omega)-t)+\pi(k-1)(\rho(\Omega)-t)^2. \end{align} In particular $A\in W^{1,\infty}(0,\rho(\Omega))$. \end{theo} \begin{proof} We denote by $g(t)$ the right derivative of the function $t\mapsto -A(t)-(\alpha-1)\pi t^2$, where $\alpha:=\#\Omega^c$ $(\le k)$. 
By Theorem \ref{theo.Na}, $g$ is a decreasing function in $(0,\rho(\Omega))$ and an easy computation through \eqref{eq.Evans} shows that \begin{equation}\label{Lg} g(t)=L(t)-2\pi(\alpha-1)t\qquad\hbox {for a.e. }t\in(0,\rho(\Omega)). \end{equation} Thus, $$\lim_{r\to 0^+}\frac 1 r\int_{0}^r L(t)dt=\lim_{r\to 0^+}\frac 1 r\int_{0}^rg(t)dt=\sup_{(0,\rho(\Omega))}g(t).$$ Since $\Omega\in\mathcal{A}_k$ and $\#\Omega<\infty$ we have also $\#\partial\Omega<\infty$. Hence we can apply Corollary \ref{coro.Mink} to get \begin{equation}\label{eq.g} P(\Omega)+2 \mathcal{H}^1(\partial \Omega\cap \Omega^{(1)})=\sup_{(0,\rho(\Omega))}g(t). \end{equation} By using \eqref{Lg} and \eqref{eq.g}, inequality \eqref{eq.boundL} easily follows. Finally, by applying \eqref{eq.Evans}, we get both $A\in W^{1,\infty}(0,\rho(\Omega))$ and formula \eqref{eq.boundA}. \end{proof} The following lemma can be easily proved by lower semicontinuity property of the perimeter. \begin{lemm}\label{lem.top1} Let $\Omega\subset\mathbb{R}^2$ be an open set. Let $(\Omega^i)$ be its connected components and $\Omega_n:=\bigcup_{i=1}^n\Omega^i$. Then we have: \begin{enumerate} \item[(i)]$\partial\Omega_n=\bigcup_{i=1}^n\partial\Omega^i\subseteq\partial\Omega$ and $\mathcal{H}^1(\partial\Omega_n)\le \mathcal{H}^1(\partial\Omega)$; \item [(ii)]$\displaystyle P(\Omega)\le\liminf_{n\to\infty}P(\Omega_n)\le\limsup_{n\to\infty}P(\Omega_n)\le\limsup_{n\to\infty}\mathcal{H}^1(\partial\Omega_n)\le \mathcal{H}^1(\partial\Omega)$. \end{enumerate} \end{lemm} We are now in a position to prove the main results of this section. In Theorem 1.1 of \cite{Sc15} it is shown that, given any set $\Omega$ of finite perimeter satisfying $\mathcal{H}^1(\partial\Omega)=P(\Omega)$, it is possible to approximate $P(\Omega)$ with the perimeters of smooth open sets compactly contained in $\Omega$. Here we show that, if we assume the further hypothesis $\Omega\in\mathcal{A}_k$, then we can construct an approximation sequence made up of Lipschitz sets in $\mathcal{A}_k$. \begin{theo}\label{theo.approxim} Let $\Omega\in\mathcal{A}_k$ be a set of finite perimeter. Then there exists an increasing sequence $(A_n)\subset \mathcal{A}_k$ such that: \begin{enumerate} \item [(i)] $\overline A_n\subset \Omega$; \item[(ii)] $\bigcup_{n} A_n=\Omega$; \item[(iii)] $A_n$ is a Lipschitz set; \item[(iv)] $\displaystyle P(\Omega)\le\liminf_{n\to\infty}P(A_n)\le\limsup_{n\to\infty}P(A_n)\le2\mathcal{H}^1(\partial\Omega)-P(\Omega)$. \end{enumerate} In addition, if $\# \Omega<\infty$, then $$\lim_{n\to\infty}P(A_n)=P(\Omega)+2 \mathcal{H}^1(\partial\Omega\cap\Omega^{(1)}).$$ \end{theo} \begin{proof} Let $\Omega_n$ be defined as in Lemma \ref{lem.top1}. Clearly $\Omega_n\in\mathcal{A}_k$. Since $\Omega_n(t)$ converges to $\Omega_n$ in $L^1$ when $t\to0^+$, it follows that, for every $n$, $$\liminf_{t\to 0^+} P(\Omega_n(t))\ge P(\Omega_n).$$ Then there exists $0<\delta_n<1/n\wedge \rho(\Omega_n)$ such that \begin{equation}\label{primacond}P(\Omega_n(t))\ge P(\Omega_n)-\frac1n\qquad\forall t<\delta_n.\end{equation} Since $\#\Omega_n\le n$, by applying Theorem \ref{theo.Fu}, Lemma \ref{lemm.innerA_k} and Theorem \ref{theo.Nareg} to the set $\Omega_n$, we can choose a decreasing sequence $(t_n)$ with $0<t_n< \delta_n$ such that the set $A_n:=\Omega_n(t_n)$ is in $\mathcal{A}_k$, has Lipschitz boundary, and \begin{equation}\label{secondacond} \mathcal{H}^1(\{x\in\Omega_n\ :\ d(x,\partial\Omega_n)=t_n \})\le P(\Omega_n)+2\mathcal{H}^1(\partial\Omega_n\cap\Omega_n^{(1)})+2\pi(k-1)t_n. 
\end{equation} It is easy to prove that the sequence $(A_n)$ is increasing and satisfies (i) and (ii). By putting together \eqref{primacond} and \eqref{secondacond}, we get \begin{equation*} P(\Omega_n)-\frac1n\le P(A_n)\le P(\Omega_n)+2\mathcal{H}^1(\partial\Omega_n\cap\Omega_n^{(1)})+2\pi(k-1)t_n. \end{equation*} By Lemma \ref{lem.top1}, taking also into account \eqref{eq.decomp2}, the previous inequality implies $$P(\Omega)\le\liminf_n P(A_n)\le\limsup_n P(A_n)\le2\mathcal{H}^1(\partial\Omega)-P(\Omega)$$ which proves $(iv)$. To conclude, consider the case $\#\Omega<+\infty$. We can choose $n$ large enough that $\Omega_n=\Omega$, $A_n=\Omega(t_n)$ and $\alpha:=\# \Omega^c=\# A_n^c$. For simplicity we denote $\rho_n:=\rho(A_n)$ and $\rho:=\rho(\Omega)$. By applying equality \eqref{eq.g} to the Lipschitz set $A_n$, we get \begin{equation}\label{terza} P(A_n)=\sup_{ (0,\rho_n)}g_n(t) \end{equation} where $g_n$ is the right derivative of the function $t\mapsto -|A_n(t)|-(\alpha-1)\pi t^2$. Now, exploiting the equality $A_n(t)=\Omega(t+t_n)$, we obtain $$g_n(t)= g(t+t_n)+2\pi(\alpha-1)t_n$$ for all $0<t<(\rho-t_n)\wedge \rho_n$. Thus, letting $t\to 0^+$ and applying \eqref{terza}, we conclude that, for every $n$, $$\lim_{t\to 0^+}g(t+t_n)+2\pi(\alpha-1)t_n=\sup_{(0,\rho_n)}g_n(t)=P(A_n).$$ Passing to the limit as $n\to\infty$ in the equality above and taking into account \eqref{eq.g} we achieve the thesis. \end{proof} \section{Continuity of volume for co-Hausdorff convergence}\label{shau} The Hausdorff distance between closed sets $C_1, C_2$ of $\mathbb{R}^2$ is defined by $$d_H(C_1,C_2):=\sup_{x\in C_1}d(x,C_2)\vee\sup_{x\in C_2}d(x,C_1).$$ Through $d_H$ we can define the so-called co-Hausdorff distance $d_{H^c}$ between a pair of bounded open subsets $\Omega_1,\Omega_2$ of $\mathbb{R}^2$ by $$d_{H^c}(\Omega_1,\Omega_2):=d_H(\Omega_1^c,\Omega_2^c).$$ We say that a sequence of compact sets $(K_n)$ converges in the sense of Hausdorff to some compact set $K$ if $(d_H(K_n,K))$ converges to zero. In this case we write $K_n\overset{H}{\to}K$. Similarly we say that a sequence of open sets $(\Omega_n)$ converges in the sense of co-Hausdorff to some open set $\Omega$ if $(d_{H^c}(\Omega_n,\Omega))$ converges to zero, and we write $\Omega_n\overset{H^c}{\to}\Omega$. In the rest of the paper we use some elementary properties of the Hausdorff distance and the co-Hausdorff distance, for which we refer to \cite{bubu05} and \cite{He} (see, for instance, Proposition 4.6.1 of \cite{bubu05}). In particular we recall that if $(\Omega_n)$ is a sequence of equi-bounded sets in $\mathcal{A}_k$ and $\Omega_n\overset{H^c}{\to}\Omega$, then $\Omega$ still belongs to $\mathcal{A}_k$ (see Remark 2.2.20 of \cite{He}). The introduction of co-Hausdorff convergence is motivated by Sver\'ak's Theorem (see \cite{sv93}), which ensures the continuity of the torsional rigidity in the class $\mathcal{A}_k$. Actually the result is stronger and gives the continuity with respect to the $\gamma$-convergence (we refer to \cite{bubu05} for its precise definition and the related details). \begin{theo}[Sver\'ak]\label{theo.Sve} Let $(\Omega_n)\subset\mathcal{A}_k$ be a sequence of equi-bounded open sets. If $\Omega_n\overset{H^c}{\to}\Omega$, then $\Omega_n\to\Omega$ in the $\gamma$-convergence. In particular $T(\Omega_n)\to T(\Omega)$. 
\end{theo} Combining Sver\'ak's theorem and Theorem \ref{theo.approxim}, we prove that we can equivalently minimize the functional $F_q$ either in the class of Lipschitz sets in $\mathcal{A}_k$ or in the larger class of those sets $\Omega\in\mathcal{A}_k$ satisfying $P(\Omega)=\mathcal{H}^1(\partial\Omega)$. \begin{prop} The following identity holds: $$m_{q,k}=\inf\{F_q(\Omega)\ :\ \Omega\in\mathcal{A}_k,\ P(\Omega)=\mathcal{H}^1(\partial\Omega)\}.$$ \end{prop} \begin{proof} By Theorem \ref{theo.approxim}, for every $\Omega\in\mathcal{A}_k$ such that $P(\Omega)=\mathcal{H}^1(\partial\Omega)<\infty$, there exists a sequence $(A_n)\subset\mathcal{A}_k$ of Lipschitz sets satisfying $\lim_n P(A_n)=P(\Omega)$. By construction $(A_n)$ is an equi-bounded sequence which converges both in the co-Hausdorff and in the $L^1$ sense. By Theorem \ref{theo.Sve} we have $$\lim_{n\to\infty} F_q(A_n)=F_q(\Omega),$$ so that $$m_{q,k}\le\inf\{F_q(\Omega)\ :\ \Omega\in\mathcal{A}_k,\ P(\Omega)=\mathcal{H}^1(\partial\Omega)\}.$$ The thesis is then achieved since the opposite inequality is trivial. \end{proof} In general the volume is only lower semicontinuous with respect to the $H^c$-convergence, as simple counterexamples show. In this section we prove that $L^1$-convergence is guaranteed in the class $\mathcal{A}_k$ under some further hypotheses; see Theorem \ref{theo.convmeas}. The proof of this result requires several lemmas and relies on Go\l ab's classical semicontinuity theorem, which deals with the lower semicontinuity of the Hausdorff measure $\mathcal{H}^1$ (see, for instance, \cite{AO}, \cite{amti04}). \begin{theo}[Go\l ab]\label{theo.Golab} Let $X$ be a complete metric space and, for $k\in\mathbb{N}$, let $$\mathcal{C}_k:=\{K\ :\ K\subset X, \ K\text{ is closed},\ \#K\le k\}.$$ Then the function $K\mapsto\mathcal{H}^1(K)$ is lower semicontinuous on $\mathcal{C}_k$ endowed with the Hausdorff distance. \end{theo} \begin{lemm}\label{lem.inradcon} Let $(\Omega_n)$ be a sequence of equi-bounded open sets. If $\Omega_n\overset{H^c}{\to}\Omega$, then $\rho(\Omega_n)\to\rho(\Omega)$. \end{lemm} \begin{proof} For simplicity we denote $\rho:=\rho(\Omega)$ and $\rho_n:=\rho(\Omega_n)$. First we show that \begin{equation}\label{basso}\rho\le\liminf_n\rho_n.\end{equation} Indeed, without loss of generality let us assume $\rho>0$. Then for any $0<{\varepsilon}<\rho$, there exists a ball $B_{\varepsilon}$ whose radius is $\rho-{\varepsilon}$ and whose closure is contained in $\Omega$. By elementary properties of co-Hausdorff convergence, there exists $\nu$ such that $B_{\varepsilon}\subset\Omega_n$ for $n>\nu$, which implies $\rho_n\ge\rho-{\varepsilon}$. Since ${\varepsilon}>0$ is arbitrary, we get \eqref{basso}. In order to prove the upper semicontinuity, assume by contradiction that there exist ${\varepsilon}>0$ and a subsequence $(n_k)$ such that $\rho_{n_k}>\rho+{\varepsilon}$ for every $k\in \mathbb{N}$. Then there exists a sequence of balls $B_{n_k}=B_{ \rho_{n_k} }(x_{n_k})\subseteq\Omega_{n_k}$. Possibly passing to a subsequence, the sequence $(x_{n_k})$ converges to a point $x_{\infty}$ and the sequence of translated open sets $\Omega_{n_k}-x_{n_k}$ converges to $\Omega-x_{\infty}$. Since $B_{r}(0)\subseteq \Omega_{n_k}-x_{n_k}$ for $r=\rho+{\varepsilon}$, it turns out that $B_{r}(0)\subseteq \Omega-x_{\infty}$, i.e. $B_{r}(x_{\infty})\subseteq\Omega$, which leads to a contradiction. \end{proof} \begin{lemm}\label{lem.top2} Let $\Omega$ be a connected bounded open set of $\mathbb{R}^n$. 
There exists a sequence of connected bounded open sets $(\Omega_n)$ such that $\overline\Omega_n\subset\Omega_{n+1}$ and $\bigcup_n\Omega_n=\Omega$. \end{lemm} \begin{proof} We construct the sequence by induction. First of all we notice that there exists an integer $\nu_1>0$ such that $\Omega(\nu_1^{-1})$ has at least one connected component with Lebesgue measure greater than $\pi\nu_1^{-2}$. Indeed, it suffices to choose $$\nu_1^{-1}\le\min\{ d(y,\partial\Omega)\ :\ y\in\partial B_r(x)\}\wedge r$$ where $B_r(x)$ is any ball with closure contained in $\Omega$. Now let $M$ be the number of connected components of $\Omega(\nu_1^{-1})$ with Lebesgue measure greater than $\pi\nu_1^{-2}$. If $M=1$ we define $\Omega_{1}:=\Omega(\nu_1^{-1})$. Otherwise, since $\Omega$ is pathwise connected, we can connect the closures of the $M$ connected components with finitely many arcs to define a connected compact set $K\subset\Omega$. Then, we choose $m$ such that $m>\nu_1$ and $m^{-1}<\inf\{d(x,\partial\Omega) : x\in K\}$ and we set $$\Omega_{1}:=\{ x\in \Omega: \ d(x,K)< (2m)^{-1} \}.$$ In both cases $\Omega_1$ is a connected open set which contains all the connected components of $\Omega(\nu_1^{-1})$ having Lebesgue measure greater than $\pi\nu_1^{-2}$. Moreover, by construction there exists $\nu_2>\nu_1$ such that $\overline \Omega_1\subseteq \Omega(\nu^{-1}_2)$. Replacing $\nu_1$ with $\nu_2$ we can use the previous argument to define $\Omega_2$ such that $\overline\Omega_1\subset \Omega_2$. Iterating this argument, we define an increasing sequence $\nu_n$ and a sequence of connected open sets $(\Omega_n)$ such that $\overline \Omega_n\subset\Omega_{n+1}\subset\Omega$ and $\Omega_n$ contains all the connected components of $\Omega(\nu_n^{-1})$ of Lebesgue measure greater than $\pi\nu_{n}^{-2}$. Since for any $x\in\Omega$ there exists $r>0$ such that $\overline{B}_r(x)\subset\Omega$, choosing $\nu_n^{-1}\le \min\{ d(y,\partial\Omega)\ : y\in\partial B_r(x)\}\wedge r$, it is easy to show that $x\in\Omega_n$. Thus $\bigcup_{n} \Omega_{n}=\Omega$. \end{proof} In the following lemma we establish a Bonnesen-type inequality for sets $\Omega\in \mathcal{A}_k$ satisfying $\mathcal{H}^1(\partial \Omega)<\infty$ (see Theorem 2 in \cite{Oss79} when $\Omega$ is a simply connected plane domain bounded by a rectifiable Jordan curve). \begin{lemm}\label{lem.coarea2} Let $\Omega\in\mathcal{A}_k$ with $\mathcal{H}^1(\partial \Omega)<\infty$. Then \begin{equation}\label{eq.coarea} |\Omega| \le [2\mathcal{H}^1(\partial \Omega)-P(\Omega)+\pi ( k-1) \rho(\Omega)]\rho(\Omega). \end{equation} \end{lemm} \begin{proof} If $\#\Omega<\infty$, by Theorem \ref{theo.Nareg} and \eqref{eq.Evans}, $$|\Omega|\le \left(P(\Omega)+2\mathcal{H}^1(\partial\Omega \cap \Omega^{(1)})+\pi(k-1)\rho(\Omega)\right)\rho(\Omega),$$ and we conclude by \eqref{eq.decomp2}. To prove the general case we denote by $(\Omega^i)$ the connected components of $\Omega$ and we set $\Omega_n:=\bigcup_{i=1}^{n}\Omega^i$.
By the previous step we have $$|\Omega_n|\le \big(2\mathcal{H}^1(\partial \Omega_n)-P(\Omega_n)+\pi (k-1) \rho(\Omega_n)\big)\rho(\Omega_n).$$ Since $\Omega_n\overset{H^c}{\to}\Omega$ and $\Omega_n\to\Omega$ in the $L^1$-convergence, taking into account Lemma \ref{lem.inradcon} and Lemma \ref{lem.top1}, we can conclude that \begin{align*} |\Omega|=\lim_{n\to\infty}|\Omega_n|&\le\big(2\mathcal{H}^1(\partial\Omega)-\limsup_n P(\Omega_n)+\pi(k-1)\rho(\Omega)\big)\rho(\Omega)\\ &\le\big(2 \mathcal{H}^1(\partial\Omega)-P(\Omega)+\pi (k-1)\rho(\Omega)\big)\rho(\Omega), \end{align*} from which the thesis is achieved. \end{proof} \begin{theo}\label{theo.convmeas} Let $(\Omega_n)\subset\mathcal{A}_k$ be a sequence of equi-bounded open sets with $$\sup_n\mathcal{H}^1(\partial\Omega_n)<\infty.$$ If $\Omega_n\overset{H^c}{\to}\Omega$ then $\Omega\in \mathcal{A}_k$ and $\Omega_n\to \Omega$ in the $L^1$-convergence. If, in addition, either $\sup_n\#\partial\Omega_n< \infty$ or $\#\Omega<\infty$ then \begin{equation} \label{eq.golab} \mathcal{H}^1(\partial\Omega)\le \liminf_n \mathcal{H}^1(\partial\Omega_n). \end{equation} \end{theo} \begin{proof} We first deal with the case when $\sup_{n}\#\partial\Omega_n<\infty$, already considered in \cite{ChDo} and \cite{bubu05}. By compactness we can suppose that $\partial\Omega_n$ converges to some nonempty compact set $K$ which contains $\partial\Omega$. Then it is easy to show that $\bar\Omega_n\overset{H}{\to}\Omega\cup K$, which implies $\chi_{\Omega_n}\to\chi_\Omega$ pointwise in $\mathbb{R}^2\setminus K$, where $\chi_E$ denotes the characteristic function of a set $E$. By Theorem \ref{theo.Golab} we also have \begin{equation}\label{eq.golabK} \mathcal{H}^1(\partial \Omega)\le\mathcal{H}^1(K)\le\liminf_{n\to\infty}\mathcal{H}^1(\partial \Omega_n)<+\infty, \end{equation} which implies \eqref{eq.golab}. In particular, we have $|K|=0$, and $\Omega_n\to \Omega$ in the $L^1$ convergence. We now consider the general case. Let $(\Omega^i)$ be the connected components of $\Omega$ and ${\varepsilon}>0$. There exists an integer $\nu({\varepsilon})$ such that $$|\Omega|-{\varepsilon}<|\bigcup_{i=1}^{\nu({\varepsilon})}\Omega^i|\le |\Omega|$$ (when $\#\Omega<\infty$ we simply choose $\nu({\varepsilon})=\#\Omega$). For each $i\le \nu({\varepsilon})$ we consider the sequence $(\Omega^i_n)$ given by Lemma \ref{lem.top2} applied to $\Omega^i$. By elementary properties of co-Hausdorff convergence there exists $l:=l(n)$ such that $$\bigcup_{i=1}^{\nu({\varepsilon})}\overline{\Omega^i_n}\subset\Omega_{l}.$$ Let us denote by $\widetilde\Omega^i_{l}$ the connected component of $\Omega_{l}$ which contains $\overline{\Omega^i_n}$ (possibly $\widetilde{\Omega}^h_{l}=\widetilde{\Omega}^s_{l}$ for $h\neq s$), and define $$\widetilde\Omega_{l}:=\bigcup_{i=1}^{\nu({\varepsilon})} \widetilde\Omega^i_{l}.$$ By compactness, possibly passing to a subsequence, there exists $\widetilde{\Omega}\in\mathcal{A}_k$ such that $\widetilde\Omega_{l}\overset{H^c}{\to}\widetilde\Omega$. Moreover, since $\widetilde\Omega_l\in\mathcal{A}_k$, $\sup_l\#\widetilde\Omega_l\le\nu({\varepsilon})$, and by Lemma \ref{lem.top1} we have $$\sup_l \mathcal{H}^1( \partial \widetilde \Omega_{l})\le\sup_l \mathcal{H}^1(\partial\Omega_{l})<\infty,$$ we can apply the first part of the proof to conclude that $\widetilde\Omega_l\to \widetilde\Omega$ in the $L^1$-convergence.
If $\#\Omega<\infty$, an easy argument shows that $\widetilde\Omega$ must be equal to $\Omega$ and that \eqref{eq.golabK} holds with $K$ the Hausdorff limit of $(\partial\widetilde\Omega_l)$. In particular \eqref{eq.golab} holds. Otherwise, we consider the union $\Omega^R_l$ of those connected components of $\Omega_{l}$ that have been neglected in the definition of $\widetilde\Omega_l$, that is $$\Omega^R_{l}:=\Omega_{l}\setminus\widetilde\Omega_{l}.$$ Passing to a subsequence, we can suppose that $\Omega^R_l\overset{H^c}{\to}\Omega^R$, for some open set $\Omega^R\in\mathcal{A}_k$. Moreover, since $|\widetilde\Omega|>|\Omega|-{\varepsilon}$, $\Omega^R\cap\widetilde\Omega=\emptyset$ and $\Omega^R\subset\Omega$, we also have $|\Omega^R|\le{\varepsilon}$. This implies $\rho(\Omega^R)\le\sqrt{\pi^{-1}{\varepsilon}}$ and, by Lemma \ref{lem.inradcon}, $$\lim_{l\to\infty}\rho(\Omega^R_{l})\le \sqrt{\pi^{-1}{\varepsilon}}.$$ Finally, by Lemma \ref{lem.coarea2}, we have \begin{align*} |\Omega|&\le\liminf_{n\to\infty}|\Omega_{n}|\le \limsup_{l\to\infty} ( |\widetilde\Omega_{l}|+|\Omega^R_{l}|)= |\widetilde \Omega|+\limsup_{l\to\infty}|\Omega^R_{l}|\le |\Omega|+o(1), \end{align*} where $o(1)$ denotes a quantity which vanishes as ${\varepsilon}\to0$. Since ${\varepsilon}$ was arbitrary, this shows that $$ \liminf_{n\to\infty}|\Omega_n|=|\Omega|, $$ and the thesis is easily achieved. \end{proof} As an application of the previous theorem we prove the following fact. \begin{coro} Let $\Omega\in\mathcal{A}_k$ with $\mathcal{H}^1(\partial\Omega)<\infty$ and $\#\Omega<\infty$. Then it holds $$\mathcal{H}^1(\partial\Omega\cap\Omega^{(0)})\le\mathcal{H}^1(\partial\Omega\cap\Omega^{(1)}).$$ \end{coro} \begin{proof} By Theorem \ref{theo.approxim} we can consider a sequence $(A_n)\subset\mathcal{A}_k$ of Lipschitz sets such that $A_n\overset{H^c}{\to}\Omega$ and $P(A_n)\to P(\Omega)+2\mathcal{H}^1(\partial\Omega\cap \Omega^{(1)})<\infty$. Then, by Theorem \ref{theo.convmeas}, we conclude $$\mathcal{H}^1(\partial\Omega)\le \lim_{n\to\infty} P(A_n)\le P(\Omega)+2\mathcal{H}^1(\partial\Omega\cap \Omega^{(1)}),$$ which easily implies the thesis, using \eqref{eq.decomp}. \end{proof} \begin{rem} We remark that the inequality $$\lim_{n\to\infty}P(A_n)\ge\mathcal{H}^1(\partial \Omega)$$ is in general not satisfied when $\#\Omega=\infty$, see also Remark \ref{ex.ce}. \end{rem} \section{Existence of relaxed solutions}\label{sexis} Our next result generalizes the estimate $F_{1/2}(\Omega)\ge3^{-1/2}$, proved in \cite{polya60} for the class $\mathcal{A}_{convex}$, to the class $\mathcal{A}_k$. \begin{theo}\label{theo.Polya} For every set $\Omega\in\mathcal{A}_k$ of finite perimeter we have \begin{equation}\label{eq.polro} \frac{T^{1/2}(\Omega)}{|\Omega|^{3/2}}\ge\frac{3^{-1/2}}{\left(2\mathcal{H}^1(\partial\Omega)-P(\Omega)+2\pi(k-1)\rho(\Omega)\right)}. \end{equation} \end{theo} \begin{proof} Without loss of generality we may assume that $\mathcal{H}^1(\partial\Omega)<\infty$ and we set $\rho:=\rho(\Omega)$. First we consider the case $\#\Omega<\infty$.
We define $$G(t):=\int_{0}^{t}\frac{A(s)}{L(s)}\,ds, \quad u(x):=G(d(x,\partial\Omega)).$$ Notice that, since for any $t\in (0,\rho)$ it holds $L(t)\ge\mathcal{H}^1(\partial \Omega(t))\ge P(\Omega(t))$, by the isoperimetric inequality \eqref{isoper} we have $$\frac{A(t)}{L(t)}=\frac{|\Omega(t)|^{1/2}}{L(t)}A^{1/2}(t)\le\frac{|\Omega(t)|^{1/2}}{P(\Omega(t))}A^{1/2}(t)\le\frac{|B_1|^{1/2}}{P(B_1)}A^{1/2}(t).$$ In particular, since $A$ is bounded, we get that $L^{-1}A$ is summable on $(0,\rho)$ and $G$ is a Lipschitz function on the interval $(0,\rho)$. Thus $u\in H^1_{0}(\Omega)$. Using \eqref{vartor} and \eqref{eq.boundL} we have \begin{align*} T(\Omega)&\ge \frac{\left(\int_{\Omega}udx\right)^{2}}{\int_{\Omega}|\nabla u|^{2}dx}\ge\frac{\left(\int_{0}^{\rho}G(t)L(t)dt\right)^2}{\int_0^\rho (G'(t))^{2}L(t)dt}\ge\int_0^\rho\frac{(A(t))^{2}}{L(t)}\,dt=\int_{0}^\rho \frac{A^2(t)L(t)}{L^{2}(t)}dt\\ &\ge\frac{1}{(P(\Omega)+2\mathcal{H}^1(\partial \Omega\cap\Omega^{(1)})+2\pi(k-1)\rho)^2}\int_0^\rho A^2(t)L(t)\,dt. \end{align*} Since $A\in W^{1,\infty}(0,\rho(\Omega))$ by Corollary \ref{theo.Nareg}, setting $\psi(s)=s^2$ we have that the function $\psi\circ A\in W^{1,\infty}(0,\rho(\Omega))$, so that $$\int_0^\rho A^2(t)L(t)\,dt=-\int_0^{\rho}A^2(t)A'(t)\,dt=-\frac13\left[A^3(t)\right]_0^{\rho(\Omega)}=\frac13|\Omega|^3.$$ Thus \begin{equation}\label{eq.polro1} \frac{T(\Omega)}{|\Omega|^3}\ge\frac{1}{3(P(\Omega)+2\mathcal{H}^1(\partial \Omega\cap\Omega^{(1)})+2\pi(k-1)\rho)^2}. \end{equation} Taking into account \eqref{eq.decomp2} we get $$ \frac{T(\Omega)}{|\Omega|^3}\ge\frac{1}{3(2\mathcal{H}^1(\partial\Omega)-P(\Omega)+2\pi(k-1)\rho)^2}. $$ To prove the general case, let $\Omega_n$ be defined as in Lemma \ref{lem.top1}. Since $\#\Omega_n<\infty$ and $\Omega_n\in \mathcal{A}_k$, by the first part of this proof we have that $$\frac{T(\Omega_n)}{|\Omega_n|^3}\left(2\mathcal{H}^1(\partial\Omega_n)-P(\Omega_n)+2\pi(k-1)\rho_n\right)^2\ge\frac1{3},$$ where $\rho_n:=\rho(\Omega_n)$. As $n\to\infty$ we have $|\Omega_n|\to |\Omega|$, $\rho_n\to\rho$ by Lemma \ref{lem.inradcon} and $T(\Omega_n)\to T(\Omega)$ by Theorem \ref{theo.Sve}. Hence, passing to the $\limsup$ in the previous inequality and using Lemma \ref{lem.top1}, we get \eqref{eq.polro}. \end{proof} \begin{rem}\label{rem.polya} Note that, in the special case of $\Omega\in\mathcal{A}_k$ and $\#\Omega<\infty$, we have the improved estimate \eqref{eq.polro1}. Moreover, if $k=0,1$, \eqref{eq.polro} implies \begin{equation}\label{eq.polyagen} F_{1/2}(\Omega)\ge\frac{3^{-1/2}P(\Omega)}{2\mathcal{H}^1(\partial\Omega)-P(\Omega)}\, , \end{equation} while, if $k>1$, we can use the inequality $2\pi\rho(\Omega)\le P(\Omega)$ (which can be easily derived from \eqref{isoper}), to obtain \begin{equation}\label{eq.polyagenk} F_{1/2}(\Omega)\ge\frac{3^{-1/2}P(\Omega)}{2\mathcal{H}^1(\partial\Omega)+(k-2)P(\Omega)}\;. \end{equation} \end{rem} As a consequence of Theorem \ref{theo.Polya}, and using the well known fact that for a Lipschitz open set $\Omega$ it holds $P(\Omega)=\mathcal{H}^1(\partial\Omega)$, we have the following main results. \begin{coro}\label{coro.polya} We have \begin{equation}\label{eq.m01} m_{1/2,0}=m_{1/2,1}=3^{-1/2} \end{equation} and the value $3^{-1/2}$ is asymptotically reached by a sequence of long thin rectangles. More generally, for every $q\le 1/2$ and $k\ge 1$, it holds \begin{equation}\label{eq.boundkq} m_{q,k}\ge (8\pi)^{1/2-q}(3^{1/2}k)^{-1} \end{equation} and the sequence $(m_{q,k})$ decreases to zero as $k\to \infty$.
\end{coro} \begin{proof} By inequality \eqref{eq.polyagen} we have that $m_{1/2,0}, m_{1/2,1}\ge 3^{-1/2}$. Moreover the computations made in \cite{BBP20} show that the value $3^{-1/2}$ is asymptotically reached by a sequence of long thin rectangles, that are clearly in $\mathcal{A}_0$. Thus, being $\mathcal{A}_0\subset\mathcal{A}_1$, \eqref{eq.m01} holds. To prove \eqref{eq.boundkq} it is enough to notice that $$F_q(\Omega)=F_{1/2}(\Omega)\left(\frac{T(\Omega)}{|\Omega|^{2}}\right)^{q-1/2}$$ and apply \eqref{eq.polyagenk} together with the Saint-Venant inequality \eqref{stven}. Finally, to prove that $m_{q,k}\to 0$ as $k\to \infty$, it is enough to consider the sequence $(\Omega_{1,n})$ defined in Theorem 2.1 of \cite{BBP20}, taking into account that $\Omega_{1,n}\in \mathcal{A}_k$ for $k$ big enough. \end{proof} We now introduce a relaxed functional $\mathcal{F}_{q,k}$. More precisely, for $\Omega\in\mathcal{A}_k$ we denote by $\mathcal{O}_k(\Omega)$ the class of equi-bounded sequences of Lipschitz sets in $\mathcal{A}_k$ which converge to $\Omega$ in the sense of co-Hausdorff and we define $\mathcal{F}_{q,k}$ as follows: $$\mathcal{F}_{q,k}(\Omega):=\inf\left\{\liminf_{n\to\infty} F_q(\Omega_{n}): \ (\Omega_n)\in\mathcal{O}_k(\Omega) \right\}.$$ It is straightforward to verify that $\mathcal{F}_{q,k}$ is translation and scaling invariant. As already mentioned in the introduction, when $q<1/2$, we prove the existence of a minimizer for $\mathcal{F}_{q,k}$. We notice that this relaxation procedure can be performed on the perimeter term only. More precisely, defining $$\mathcal{P}_k(\Omega):=\inf \left\{\liminf_{n\to\infty} P(\Omega_{n})\ :\ (\Omega_n)\in\mathcal{O}_k(\Omega)\right\},$$ the following proposition holds. \begin{prop}\label{prop.PPkF} For every $\Omega\in\mathcal{A}_k$ we have $$\mathcal{F}_{q,k}(\Omega)=\frac{\mathcal{P}_k(\Omega)T^{q}(\Omega)}{|\Omega|^{2q+1/2}}.$$ \end{prop} \begin{proof} Fix ${\varepsilon}>0$. Suppose that $\infty>\mathcal{P}_k(\Omega)+{\varepsilon}\ge\lim_n P(\Omega_n)$, for some $(\Omega_n)\in\mathcal{O}_{k}(\Omega)$. By Theorems \ref{theo.Sve} and \ref{theo.convmeas}, we have $$ \frac{(\mathcal{P}_k(\Omega)+{\varepsilon})T^{q}(\Omega)}{|\Omega|^{2q+1/2}}\ge\lim_n\left(\frac{P(\Omega_n)T^q(\Omega_n)}{|\Omega_n|^{2q+1/2}}\right)\ge \mathcal{F}_{q,k}(\Omega), $$ and since ${\varepsilon}$ is arbitrary we obtain the $\le$ inequality. Similarly, to prove the opposite inequality, assume $\lim_n F_q(\Omega_n)\le \mathcal{F}_{q,k}(\Omega)+{\varepsilon}<\infty$, for some sequence $(\Omega_n)\in \mathcal{O}_k(\Omega)$. Let $D$ be a compact set which contains each $\Omega_n$. Thanks to Theorem \ref{theo.Sve}, we have that $T(\Omega_n)\to T(\Omega)$ and, since $P(\Omega_n)=\mathcal{H}^1(\partial\Omega_n)$, we also have $$\sup_n\mathcal{H}^1(\partial\Omega_n)=\sup_n\left( \frac{F_q(\Omega_n)|\Omega_n|^{2q+1/2}}{\displaystyle{T^q}(\Omega_n)}\right)\le \sup_n \left(\frac{F_q(\Omega_n)|D|^{2q+1/2}}{\displaystyle{T^q}(\Omega_n)}\right)<+\infty.$$ Applying again Theorem \ref{theo.convmeas} we have $|\Omega_n|\to|\Omega|$ and we can conclude $$\frac{\mathcal{P}_k(\Omega)T^q(\Omega)}{|\Omega|^{2q+1/2}}\le\lim_n F_q(\Omega_n)\le\mathcal{F}_{q,k}(\Omega)+{\varepsilon},$$ which implies the $\ge$ inequality as ${\varepsilon}\to 0$. \end{proof} The perimeter $\mathcal{P}_k$ satisfies the following properties.
\begin{prop}\label{prop.Pk} For every $\Omega\in\mathcal{A}_k$ of finite perimeter we have \begin{equation}\label{eq.PPk} P(\Omega)\le\mathcal{P}_k(\Omega)\le 2\mathcal{H}^1(\partial\Omega)-P(\Omega). \end{equation} Moreover, if $\#\Omega<\infty$ and $\mathcal{H}^1(\partial\Omega)<+\infty$ it holds \begin{equation}\label{eq.PPkH1} \mathcal{H}^1(\partial\Omega)\le\mathcal{P}_k(\Omega)\le P(\Omega)+2\mathcal{H}^1(\partial \Omega \cap \Omega^{(1)}) \end{equation} and $P(\Omega)=\mathcal{P}_k(\Omega)$ if and only if $P(\Omega)=\mathcal{H}^1(\partial\Omega)$. \end{prop} \begin{proof} Taking into account Theorem \ref{theo.convmeas} and the lower semicontinuity of the perimeter with respect to the $L^1$-convergence, we have $\mathcal{P}_k(\Omega)\ge P(\Omega)$. To prove the right-hand inequalities in \eqref{eq.PPk} and \eqref{eq.PPkH1} it is sufficient to take the sequence $(A_n)$ given by Theorem \ref{theo.approxim}. Finally, when $\#\Omega<\infty$, the inequality $\mathcal{H}^1(\partial\Omega)\le \mathcal{P}_k(\Omega)$ follows by Theorem \ref{theo.convmeas}. \end{proof} \begin{rem} \label{ex.ce} If we remove the assumption $\#\Omega<\infty$, then \eqref{eq.PPkH1} is no longer true. For instance, we can slightly modify Example $3.53$ in \cite{AFP} to define $\Omega\in\mathcal{A}_0$ such that $P(\Omega),\mathcal{P}_0(\Omega)<\infty$ while $\mathcal{H}^1(\partial\Omega)=\infty$. More precisely let $(q_n)$ be an enumeration of $\mathbb{Q}^2\cap B_1(0)$ and $(r_n)\subset(0,{\varepsilon})$ be a decreasing sequence such that $\sum_n 2\pi r_n\le 1$. We recursively define the following sequence of open sets. Let $$\Omega_0:=B_{r_0}(q_0),\ \Omega_{n+1}:=\Omega_n\cup B_{s_n}(q_{h_n}),$$ where $$h_n:=\inf\{k: q_k \in\overline\Omega_n^c\},\quad s_n:=r_{n+1}\wedge\sup\{r_k: B_{r_k}(q_{h_n})\cap \Omega_n=\emptyset\}.$$ Finally let $\Omega=\bigcup_n\Omega_n$. By construction $\Omega_n\overset{H^c}{\to}\Omega$ and, since $\Omega_n\in\mathcal{A}_0$ for all $n$, we also have $\Omega\in\mathcal{A}_0$. Moreover we notice that $P(\Omega)\le 1$ and it is easy to verify that the two dimensional Lebesgue measure of $\partial\Omega$ is positive, which implies $\mathcal{H}^1(\partial\Omega)=\infty$. Finally, since the sequence $(\Omega_n)\in \mathcal{O}_0(\Omega)$, we also have $\mathcal{P}_0(\Omega)\le 1$. \end{rem} Next we prove that the relaxed functional $\mathcal{F}_{q,k}$ agrees with $F_q$ on the class of Lipschitz open sets in $\mathcal{A}_k$. \begin{coro}\label{coro.Frel} For every $\Omega\in\mathcal{A}_k$ we have \begin{equation}\label{eq.FgF} \mathcal{F}_{q,k}(\Omega)\ge F_q(\Omega). \end{equation} If, in addition, $P(\Omega)=\mathcal{H}^1(\partial\Omega)$ then we have \begin{equation} \label{eq.FgF1} F_q(\Omega)=\mathcal{F}_{q,k}(\Omega). \end{equation} In particular $\mathcal{F}_{q,k}$ and $F_q$ coincide on the class of Lipschitz sets and it holds \begin{equation}\label{eq.infrel} m_{q,k}=\inf\{\mathcal{F}_{q,k}(\Omega)\ :\ \Omega\in\mathcal{A}_k\}. \end{equation} \end{coro} \begin{proof} The inequalities \eqref{eq.FgF} and \eqref{eq.FgF1} follow by Proposition \ref{prop.PPkF} and \eqref{eq.PPk}. The last part of the statement follows as a general property of relaxed functionals.
\end{proof} \begin{lemm}\label{lem.coninf} For every Lipschitz set $\Omega\in\mathcal{A}_k$, there exists a sequence of connected open sets $(\Omega_n)\subset\mathcal{A}_k$ such that $$P(\Omega_n)=\mathcal{H}^1(\partial\Omega_n)\qquad\text{and}\qquad\lim_{n\to\infty}F_q(\Omega_n)=F_q(\Omega).$$ \end{lemm} \begin{proof} Since $\Omega$ is a bounded Lipschitz set we necessarily have $\#\Omega<\infty$. If $\Omega$ is connected we can take $\Omega_n$ to be constantly equal to $\Omega$. Suppose instead that $\#\Omega=2$ and let $\Omega^1$ and $\Omega^2$ be the connected components of $\Omega$. Since $\Omega$ is Lipschitz there exist $x_1\in\partial \Omega^1, x_2\in\partial\Omega^2$ such that $$ 0<d:=d(x_1,x_2)=\inf\{d(w,v): \ v\in\Omega^1,\ w\in\Omega^2\}. $$ Define $$\Omega^{2}_n:=\Omega^2-\left(1-\frac 1 n\right)(x_2-x_1).$$ Clearly we have $\overline{\Omega^2_{n}}\cap \overline{\Omega^1}=\emptyset$ for every $n\ge 1$ and $\Omega^{2}_1=\Omega^2$. We set $$x_n=x_2-\left(1-\frac 1 n\right)(x_2-x_1).$$ Now we can join $x_1$ and $x_n$ through a segment $\Sigma_n$. Using the fact that both $\partial\Omega^1$ and $\partial\Omega^2_n$ are represented as graphs of Lipschitz functions in a neighborhood of $x_1$ and $x_n$ respectively, the thin open channel $$C_{\varepsilon}:=\{ x\in\mathbb{R}^2\setminus(\overline\Omega^1\cup\overline\Omega^2_n)\ :\ d(x,\Sigma_n)<{\varepsilon}\}$$ of thickness ${\varepsilon}:={\varepsilon}(n)$ is such that the set $$\Omega_n:=\Omega^1\cup \Omega^2_n\cup C_{{\varepsilon}}$$ belongs to $\mathcal{A}_k$, is connected and satisfies $P(\Omega_n)=\mathcal{H}^1(\partial\Omega_n)$. The following relations then hold: $$|\Omega_n|\to|\Omega|,\quad T(\Omega_n)\to T(\Omega),\quad P(\Omega_n)\approx P(\Omega^1)+P(\Omega^2)+\frac{2{\varepsilon}}{n},$$ so that $F_q(\Omega_n)\to F_q(\Omega)$ (notice that this does not imply $\Omega_n\to\Omega$). The general case is achieved by induction on $\#\Omega$. More precisely, suppose $\#\Omega=N+1$. Let $(\Omega^i)$ be the connected components of $\Omega$. By induction we have $$F_q(\Omega^1\cup\dots \cup\Omega^N)=\lim_{n\to\infty}F_q(\Omega'_n),$$ for a sequence $(\Omega'_n)\subset\mathcal{A}_k$ of connected open sets satisfying $P(\Omega'_n)=\mathcal{H}^1(\partial\Omega'_n)$. Using the fact that, being $\Omega$ Lipschitz, the value of $F_q(\Omega)$ does not change if we translate (possibly in different directions and with different magnitudes) each connected component of $\Omega$, being careful to avoid intersections, we can suppose $\overline{\Omega}^{N+1}$ to have a positive distance from $\overline{\Omega}'_n$ for $n$ large enough. We then apply the previous step to define a sequence of connected open sets $\Omega_{n,m}\in\mathcal{A}_k$ such that $P(\Omega_{n,m})=\mathcal{H}^1(\partial\Omega_{n,m})$ and $$F_q(\Omega_{n,m})\to F_q(\Omega'_n\cup \Omega^{N+1}),$$ as $m\to\infty$. Using a diagonal argument we achieve the thesis. \end{proof} We finally show the existence of a relaxed solution to the minimization problem of $\mathcal{F}_{q,k}$ in $\mathcal{A}_k$ when $q<1/2$. \begin{theo}\label{theo.exis} For $q<1/2$ there exists a nonempty bounded open set $\Omega^{\star}\in\mathcal{A}_k$ minimizing the functional $\mathcal{F}_{q,k}$ such that $\mathcal{H}^1(\partial\Omega^\star)<\infty$. \end{theo} \begin{proof} Let $(\widetilde\Omega_n)\subset\mathcal{A}_k$ be a sequence of Lipschitz sets such that $$ \lim_{n\to\infty} F_q(\widetilde\Omega_n)=m_{q,k}.
$$ Applying Lemma \ref{lem.coninf} and \eqref{eq.FgF1}, we can easily replace the sequence $(\widetilde\Omega_n)$ with a sequence $(\Omega_n)\subset\mathcal{A}_k$ of connected (not necessarily Lipschitz) open sets, satisfying $\mathcal{H}^1(\partial\Omega_n)=P(\Omega_n)$ and such that $$\lim_{n\to\infty} F_q(\Omega_n)=\lim_{n\to\infty} F_q(\widetilde\Omega_n)=m_{q,k}.$$ Using the translation invariance of $F_q$ and possibly rescaling the sequence $(\Omega_n)$, we can assume that $(\Omega_n)$ is equi-bounded and \begin{equation}\label{ipinf} \mathcal{H}^1(\partial\Omega_n)=P(\Omega_n)=1. \end{equation} By compactness, up to subsequences, there exists an open set $\Omega^\star\in\mathcal{A}_k$ such that $\Omega_n\overset{H^c}{\to}\Omega^\star$. By \eqref{eq.infrel} we have $$m_{q,k}\le \mathcal{F}_{q,k}(\Omega^\star).$$ Let us prove the opposite inequality. We notice that, by Theorem \ref{theo.approxim} and \eqref{ipinf}, for every $n$ there exists a sequence $(A_{n,m})_m\subset\mathcal{A}_k$ of Lipschitz sets, such that, as $m\to\infty$, $$ P(A_{n,m})\to P(\Omega_n)\ \text{and}\ |A_{n,m}|\to |\Omega_n|. $$ By Theorem \ref{theo.Sve}, we also have $T(A_{n,m})\to T(\Omega_n)$ as $m\to \infty$. Thus $$F_q(\Omega_n)=\lim_{m\to\infty}F_q(A_{n,m}).$$ A standard diagonal argument allows us to define a sequence $(A_{n,m_n})_n\in\mathcal{O}_k(\Omega^\star)$. Then we have $$\mathcal{F}_{q,k}(\Omega^\star)\le \lim_n F_{q}(A_{n,m_n})=\lim_{n}F_{q}(\Omega_n)= m_{q,k}. $$ Hence $\Omega^\star$ is a minimum for $\mathcal{F}_{q,k}$. Moreover, notice that there exists a compact set $K$ containing $\partial\Omega^\star$ such that, up to a subsequence, $\partial\Omega_n\overset{H}{\to} K$. In addition, since each $\Omega_n$ is connected, we have $$ \sup_n\#\partial\Omega_n<\infty, $$ and by Theorem \ref{theo.Golab}, $$ \mathcal{H}^1(\partial\Omega^\star)\le \mathcal{H}^1(K)\le \liminf_{n\to\infty}\mathcal{H}^1(\partial\Omega_n)\le 1. $$ To conclude, we only have to show that $\Omega^\star$ is nonempty. Notice that there exists $C>0$ such that $F_q(\Omega_n)< C$ for $n$ big enough. Thus we have \begin{equation} \label{eq.final1} C>F_q(\Omega_n)=\frac{T^q(\Omega_n)}{|\Omega_n|^{2q+1/2}}=\left(\frac{T(\Omega_n)}{|\Omega_n|^{3}}\right)^{q}|\Omega_n|^{q-1/2}\ge \frac{1}{|\Omega_n|^{1/2-q}(\sqrt{3}k)^{2q}}\;, \end{equation} where the last inequality follows by \eqref{eq.polro}, using \eqref{ipinf} and the inequality $2\pi\rho(\Omega_n)\le P(\Omega_n)$. By \eqref{eq.coarea} we have also \begin{equation} \label{eq.final2} |\Omega_n|\le (1+\pi(k-1)\rho(\Omega_n))\rho(\Omega_n). \end{equation} Combining \eqref{eq.final1}, \eqref{eq.final2} and the assumption $q<1/2$, we conclude that the sequence of inradii $(\rho(\Omega_n))$ must be bounded from below by some positive constant. By Lemma \ref{lem.inradcon}, $\Omega^{\star}$ is nonempty. \end{proof} \section{Conclusions}\label{sconc} We have seen that in the planar case the topological constraint present in the classes $\mathcal{A}_k$ is strong enough to ensure the existence of at least one relaxed optimizer. In higher dimensions this is no longer true and easy examples show that it is possible to construct sequences $(\Omega_n)$ in $\mathcal{A}_k$ with $P(\Omega_n)$ bounded and $T(\Omega_n)\to0$. This suggests that in higher dimensions stronger constraints need to be imposed in order to have well posed optimization problems.
Another interesting issue is the analysis of the same kind of questions when the exponent $2$ is replaced by a general $p>1$ in \eqref{vartor}; the torsional rigidity $T(\Omega)$ then becomes the $p$-torsion $T_p(\Omega)$ and it would be interesting to see how our results depend on the exponent $p$ and whether in this case the analysis in dimensions higher than two is possible. Finally, shape functionals $F(\Omega)$ involving quantities other than perimeter and torsional rigidity would be interesting to study: we point out some recent results in \cite{bbp20}, \cite{FtLa} and the references therein. However, to our knowledge, the study of these shape functionals under topological constraints such as those defining the classes $\mathcal{A}_k$ is still missing. \noindent{\bf Acknowledgments.} The work of GB is part of the project 2017TEXA3H {\it``Gradient flows, Optimal Transport and Metric Measure Structures''} funded by the Italian Ministry of Research and University. The authors are members of the Gruppo Nazionale per l'Analisi Matematica, la Probabilit\`a e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM). {\small\noindent Luca Briani: Dipartimento di Matematica, Universit\`a di Pisa\\ Largo B. Pontecorvo 5, 56127 Pisa - ITALY\\ {\tt [email protected]} \noindent Giuseppe Buttazzo: Dipartimento di Matematica, Universit\`a di Pisa\\ Largo B. Pontecorvo 5, 56127 Pisa - ITALY\\ {\tt [email protected]}\\ {\tt http://www.dm.unipi.it/pages/buttazzo/} \noindent Francesca Prinari: Dipartimento di Matematica e Informatica, Universit\`a di Ferrara\\ Via Machiavelli 30, 44121 Ferrara - ITALY\\ {\tt [email protected]}\\ {\tt http://docente.unife.it/francescaagnese.prinari/}} \end{document}
\begin{document} \title{Distributed quantum sensing with a mode-entangled network of spin-squeezed atomic states} \author{Benjamin K. Malia} \affiliation{Department of Physics, Stanford University, Stanford, CA, USA} \affiliation{School of Applied and Engineering Physics, Cornell University, Ithaca, NY, USA} \author{Yunfan Wu} \affiliation{Department of Applied Physics, Stanford University, Stanford, CA, USA} \author{Juli\'{a}n Mart\'{i}nez-Rinc\'{o}n} \affiliation{Department of Physics, Stanford University, Stanford, CA, USA} \affiliation{Quantum Information Science and Technology Laboratory, Instrumentation Division, Brookhaven National Laboratory, Upton, NY, USA} \author{Mark A. Kasevich} \email[]{[email protected]} \affiliation{Department of Physics, Stanford University, Stanford, CA, USA} \affiliation{Department of Applied Physics, Stanford University, Stanford, CA, USA} \date{\today} \begin{abstract} Quantum sensors are used for precision timekeeping, field sensing, and quantum communication~\cite{Grotti2018,McGrew2018,Guo2020}. Comparisons among a distributed network of these sensors are capable of, for example, synchronizing clocks at different locations~\cite{Zhao2021,Zhang2021,Giovannetti2001,Beloy2021,Bothwell2022}. The performance of a sensor network is limited by technical challenges as well as the inherent noise associated with the quantum states used to realize the network~\cite{Pedrozo2020}. For networks with only local entanglement at each node, the noise performance of the network improves at best with square root of the number of nodes~\cite{Zheng2022}. Here, we demonstrate that nonlocal entanglement between network nodes offers better scaling with network size. A shared quantum nondemolition measurement entangles a clock network with up to four nodes. This network provides up to 4.5 dB better precision than one without nonlocal entanglement, and 11.6 dB improvement as compared to a network of sensors operating at the quantum projection noise limit. We demonstrate the generality of the approach with atomic clock and atomic interferometer protocols, in scientific and technologically relevant configurations optimized for intrinsically differential comparisons of sensor outputs. \end{abstract} \pacs{} \maketitle Distributed quantum sensors detect and compare phase shifts between spatially distinct modes of quantum systems with high precision~\cite{Zhao2021,Zhang2021,Giovannetti2001}. For example, the gravitational potential can induce relative phase shifts between spatially separated atomic clocks \cite{Grotti2018} or atom interferometers~\cite{Overstreet2022}. Quantum systems are an attractive platform for networks as they have the unique ability to directly benefit from both local and nonlocal entanglement. Experiments have demonstrated entanglement-enhanced networks in both discrete~\cite{Liu2021} and continuous variable~\cite{Xia2020} configurations. In general, quantum networks will play an important role in future technologies. Significant progress has been made with networks of quantum systems~\cite{Lu2019,McGrew2018,Bodine2020,Beloy2021,Matsukevich2006,Chou2005,Simon2007} for enhanced communication~\cite{Muralidharan2016,Gundogan2021} and timekeeping~\cite{Komar2014,Polzik2016} at the global scale. At small length scales, optical atomic clocks have pushed precision to record levels. In the work of Zheng et al.~\cite{Zheng2022}, up to six multiplexed Sr atomic clocks spaced over 1 cm are implemented to achieve a fractional frequency precision at the $10^{-20}$ level. 
Another work by Bothwell et al.~\cite{Bothwell2022} has measured the gravitational redshift over 1 mm within a single, spatially distributed sample of atomic Sr. In these systems, each clock's precision is limited by the Quantum Projection Noise (QPN) limit. In these mode-separable (MS) systems, the absence of correlation between the modes causes the total precision to scale as $1/\sqrt{M}$ where M is the number of identical clocks being compared. Through entanglement, a spin-squeezed clock or sensor is able to achieve precision beyond the QPN limit~\cite{Leroux2010a,Pedrozo2020}. However, if a network of squeezed clocks is MS, then the total precision still scales as $1/\sqrt{M}$. If nonlocal entanglement does exist, then the total precision of such a mode-entangled (ME) system has the potential to scale with the Heisenberg limit, 1/M~\cite{Komar2014,Polzik2016,Gessner2018,Zhuang2018,Eckert2006}. Guo et al. have demonstrated this scaling in a photonic system~\cite{Guo2020}, and Nichol et al. have measured this in a system of two Sr${}^+$ ions connected by a photonic link~\cite{Nichol2021}. Our work addresses a spin-squeezed ${}^{87}$Rb ME network whose noise scales better than a MS network. \begin{figure*} \caption{\textbf{Atomic sensor sequence}. \textbf{(Single-mode preparation)} A localized ensemble of atoms (purple circles) are prepared in a $J_z=0$ CSS. The purple distribution on the Bloch sphere is the CSS's Wigner function with a variance of $N/4$. \textbf{(Two-mode preparation)} Counter-propagating Raman lasers split the ensemble into two spatially distinct modes (red and blue circles). Each mode is located opposite each other on their respective Bloch spheres (red and blue distributions). Note that since the modes are separable here, the two distributions are not dependent on each other. At this stage, a $\pi/2$ microwave pulse brings both of these states to $J_z=0$. \textbf{(Entanglement)} A probe laser performs a QND measurement to create mode-entanglement. Measuring each mode independently does not give enhanced precision on the total measurement (gray shaded distribution represents the CSS of each mode). To show how simultaneous measurement improves precision, an example of marginal (light distributions) and conditional (black outlined distributions) Wigner functions~\cite{Jing2019} are shown on the Bloch spheres. Here, the red mode squeezed above the equator is conditional on the blue mode being found below the equator. \textbf{(Sensor operation)} The sensor requires an initial application of a $\pi/2$ microwave pulse which rotates the SSS to a vertical (phase sensitive) orientation on the Bloch sphere. The observable being measured dictates the series of microwave and Raman pulses applied during the sensor sequence. The atomic interferometer sequence is pictured here, with a detailed description of its sequence described in Fig.~\ref{fig:pulsesAI} in Methods. Mean trajectories of spin down (up) states are represented by solid (dotted) lines. (Relative times are not to scale.) In the presence of a field gradient, the phases of the modes shift by $\delta\theta^{(m)}$ (dashed arrows). \textbf{(Readout)} A $\pi/2$ microwave pulse then rotates the states back to a horizontal orientation and a second measurement (either QND or fluorescence population spectroscopy) is performed to measure the shift in the sum of all spin values.} \label{fig:sequence} \end{figure*} Several methods exist for generating spin-squeezing between spatially separate modes. 
In the pioneering work of Julsgaard et al.~\cite{Julsgaard2001}, two spatially separated Rb vapor cells were probed via a photonic quantum non-demolition (QND) measurement. In Bose-Einstein condensates (BECs), on the other hand, spin-squeezing can be generated through local collisions before the state is allowed to expand to several micrometers~\cite{Fadel2018,Lange2018,Kunkel2018}. Each part of the cloud can then be imaged separately. More recently, Anders et al. separated an entangled BEC state even further, to 80 $\mu$m, with the application of velocity-dependent Raman transitions~\cite{Anders2021}. Not only does the spin system now occupy separate spatial modes, but the modes consist of states with differing momenta. Finally, atom-cavity interactions can entangle two momentum states with different spin states~\cite{Greve2021}. In this work, we demonstrate a spatially distributed multimode atomic clock network with noise below the QPN limit. Velocity-dependent Raman transitions create up to four modes, each separated from an adjacent mode by $\sim$ 20 $\mu$m, before a nonlocal QND measurement is performed to entangle the modes. Nonlocal entanglement-enhanced precision is verified with networks of identical clocks with 45,000 atoms per mode. The ME four-mode network exhibits noise roughly 4.5(0.8) dB lower than that of an equivalent MS network of spin-squeezed states (SSS) and 11.6(1.1) dB lower than a network of coherent spin states (CSS) operating at the QPN limit. Finally, we employ an $M=2$ node network to demonstrate an entangled differential atom interferometer. The methods and apparatus used to generate and detect SSS are detailed in Refs.~\cite{Hosten2016a,Malia2020}. In summary, ensembles of up to 220,000 ${}^{87}$Rb atoms are cooled to 25 $\mu$K and trapped in a 1,560 nm 1D lattice within a dual wavelength optical cavity. This cavity enables QND measurements via a 780 nm probe detuned from the $D_2$ transition. These projective measurements detect and squeeze the ensemble's collective spin, $J_z = (N_\uparrow-N_\downarrow)/2$, where $N_i$ are the populations of atoms in each state after the measurement. This spin-1/2 system is defined by the hyperfine ground states of ${}^{87}$Rb, $\ket{\downarrow} = 5^2S_{1/2}\ket{F=1,m_F=0}$ and $\ket{\uparrow}= 5^2S_{1/2}\ket{F=2,m_F=0}$. \begin{figure*} \caption{\textbf{Differential phase shift detection.} \textbf{(a)} For the two-mode case depicted in the space-time diagram, the expected value of $\bar{\theta}$ is measured after the final microwave pulse is phase shifted. Solid lines are linear fits to the expected values for the positive momentum mode (blue), negative mode (red), and ME (green) cases. $\bar{\theta}$ in the single-mode cases are offset by 1 and -1 mrad respectively for visual clarity. The enlarged region contains the average of the single momentum modes, i.e. MS states (purple), which are offset by -0.4 mrad for visual clarity. In all subfigures, error bars represent the standard error of the mean (SEM) for a set of 200 samples and shaded areas represent 68\% confidence intervals of the fits. \textbf{(b)} Distributions of 200 sample measurements for the two-mode sensor with coherent states (black), single-mode states (blue and red), MS states (purple), and ME states (green). Corresponding curves are Gaussian fits. \textbf{(c)} Response of a two-mode, ME sensor to a magnetic field gradient applied in the second half of the echo sequence (green circles).
For reference, when the sensor sequence's microwave pulses are not performed (black circles), there is negligible change in $\bar{\theta}$ as the applied field increased. The relative magnetic field strength was determined by the relative voltage applied to the MOT coils.} \label{fig:shift} \end{figure*} In order to generate spatially separate modes, velocity-dependent stimulated Raman transitions couple these spin states to momentum $\mathbf{p}$, where eigenstates are denoted as $\ket{\text{spin},\mathbf{p}}$. The relevant transitions are driven by $\pi$ pulses that take $\ket{\downarrow,\mathbf{p}_I} \rightarrow \ket{\uparrow,\mathbf{p}_I+2\hbar \mathbf{k}}$ and $\ket{\uparrow,\mathbf{p}_I} \rightarrow \ket{\downarrow,\mathbf{p}_I-2\hbar \mathbf{k}}$, where $\mathbf{k}$ is the effective wavevector associated with the Raman transition. Without loss of generality, the initial momentum $\mathbf{p}_I$ can be set to zero. A laser system drives Raman transitions between groundstate hyperfine levels (see Methods). The $\pi$ pulse time is short enough to address nearly the entire velocity distribution of the atom source. The transitions occur with a Rabi frequency of $\Omega_R = 2\pi\times 500$ kHz and the maximum transition probability for a single Raman $\pi$ pulse is 88\%. When a spin state in an equal superposition of $\ket{\downarrow,0}$ and $\ket{\uparrow,0}$ experiences a Raman $\pi$ pulse, it coherently evolves into a linear superposition of the two modes $\ket{\uparrow,+2\hbar \mathbf{k}}$ and $\ket{\downarrow,-2\hbar \mathbf{k}}$ (see Fig.~\ref{fig:sequence}). To determine the coherence between the two modes, we apply a second Raman $\pi$ pulse a time $T$ after the first Raman pulse such that the states have drifted apart by a distance $v_\text{rel}T$ where $v_\text{rel} = 4 \hbar |\mathbf{k}|/m_{\text{Rb}} = 2.4$ cm/s is the relative velocity induced by the stimulated Raman interaction and $m_{\text{Rb}}$ is the mass of an atom. A final microwave $\pi/2$ pulse is then used to probe the coherence between the two modes (the microwave Rabi frequency is $\sim~2\pi \times 3$ kHz here and in the work described below). As $T$ increases, the coherence is observed to decay as $e^{-T/\beta}$ with a time constant $\beta=0.46~\mu$s due to the velocity spread ($\sim$~6.9 cm/sec) of the atomic source (see Methods). After roughly $T = 1.5~\mu$s (36 nm of separation) the contrast becomes negligible, indicating mode separation. If no effort is made to coherently recombine these modes, the system can now be treated as a two-mode quantum network, where each mode $m$ has a collective spin length of $\big<J^{(m)}_x\big> = CN^{(m)}/2$ and a QPN limited variance ($\Delta J_z^{(m)})^2 = N^{(m)}/4$. Modes with nearly equal mean atom number $N$ can each be represented on composite Bloch spheres with radii $CN/2$, where the contrast $C = 78(3)\%$ is determined by fluorescence imaging. Spin-squeezing improves the measurement of a linear combination of the polar angle shifts $\delta\theta^{(m)}=\delta J^{(m)}_z/(C N/2)$, where $\delta J_z^{(m)}$ are the differences in spin values between a first and second measurement (see Fig.~\ref{fig:sequence}). 
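As a kinematic cross-check of the numbers quoted above (an illustrative aside, not part of the data analysis), the relative velocity and mode separations follow directly from $v_\text{rel} = 4 \hbar |\mathbf{k}|/m_{\text{Rb}}$; the short Python sketch below assumes the Raman wavevector corresponds to light near the 780 nm $D_2$ line (the actual Raman laser parameters are specified in Methods):
\begin{verbatim}
import numpy as np

hbar  = 1.054571817e-34          # J s
m_Rb  = 86.909 * 1.660539e-27    # kg, mass of 87Rb
lam   = 780e-9                   # m; assumed single-photon wavelength near the D2 line
k     = 2 * np.pi / lam          # single-photon wavevector

v_rel = 4 * hbar * k / m_Rb      # relative velocity of the +/-2*hbar*k modes
print(v_rel * 1e2)               # ~2.4 cm/s relative velocity
print(v_rel * 1.5e-6 * 1e9)      # separation after T = 1.5 us (~36 nm in the text)
print(v_rel * 0.9e-3 * 1e6)      # separation after 0.9 ms (~20 um in the text)
\end{verbatim}
With this assumed wavelength the sketch recovers the $\sim$2.4 cm/s relative velocity and, to within rounding, the quoted separations.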
In the remainder of this work the measurable quantity $\bar{\theta}$, determined from the shift in the collective $\delta J_z = \sum \delta J_z^{(m)}$ value, will refer to the mean of the angles \begin{equation} \bar{\theta} = \frac{1}{M}\sum^M_{m=1}\delta \theta^{(m)}= \frac{1}{CMN/2}\sum^M_{m=1}\delta J^{(m)}_z, \end{equation} and $\Delta\bar{\theta}$ to the square root of its variance. The second observation of the collective $J_z$ is accomplished with a second cavity QND measurement in the case of a clock network demonstration (following Ref.~\cite{Hosten2016a}) or a precision fluorescence measurement in the case of an atom interferometer demonstration (following Ref.~\cite{Malia2020}). To first demonstrate the effect of phase shifts on each mode, ensembles of 80,000 atoms are prepared in three different initial states: $\ket{\uparrow,2\hbar \mathbf{k}}$, $\ket{\downarrow,-2\hbar \mathbf{k}}$, or a superposition of the two. In the superposition case, waiting 0.9 ms separates the modes by roughly 20 $\mu$m, a significantly greater distance than the 36 nm coherence length identified above. The two modes' spins now point towards opposite poles on their respective Bloch spheres. As shown in Fig.~\ref{fig:sequence}, a $\pi/2$ microwave pulse is then applied to the atoms. The pulse brings their vectors to the equator of their Bloch spheres (with radius 20,000 in the third case). The mode with positive momentum, for example, is now in a superposition of $\ket{\downarrow,2\hbar \mathbf{k}}$ and $\ket{\uparrow,2\hbar\mathbf{k}}$. Since the microwave $\pi/2$ pulse simultaneously addresses both modes, the Bloch vectors remain anti-parallel. Finally, a (now nonlocal) QND measurement is performed to projectively squeeze the distributed states. This operation leads to a nonlocal correlation of $J_z$ values between the modes while increasing the variance of the spin distributions in the x-y plane, as illustrated in Fig.~\ref{fig:sequence}. Once the ME state has been prepared, a microwave $\pi/2-\pi-\pi/2$ spin echo sequence with $T_{\text{int}}=110~\mu$s between consecutive pulses is performed. The phases of the microwave pulses are adjusted to accommodate the AC Stark shift ($\sim$ 1 rad) induced by the entangling QND pulse so that the $J_z$ distributions are in metrologically sensitive configurations, as illustrated in Fig.~\ref{fig:sequence}. This is accomplished through observation of $J_z$ for the independently prepared modes $\ket{\uparrow,2\hbar\mathbf{k}}$ and $\ket{\downarrow,-2\hbar\mathbf{k}}$. A second QND measurement then determines the response to the phase shift applied to the last microwave pulse. The two single-mode cases, $\bar{\theta}=\delta\theta^{(m)}$, experience nearly equal and opposite responses due to their anti-parallel spins (see Fig.~\ref{fig:shift}a). $\bar{\theta}$ in the ME case is consistent with the mean of the single-mode cases, indicating that each mode reacts oppositely to the applied shift. Therefore, a sensor utilizing this method will suppress phase noise associated with the pulses used for coherent spin manipulation. This property is useful for suppressing local oscillator noise in clock comparisons and optical phase noise in light-pulse atom interferometry applications (as demonstrated below). On the other hand, this type of sensor will measure a differential phase shift between the two modes due to, for example, position dependent fields~\cite{Fadel2022}.
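The common-mode rejection described above can be made explicit with a linearized toy model (an illustration only, not the measured spin dynamics; the signs $s=\pm1$ encode the anti-parallel orientation of the two modes and the phase values below are hypothetical small angles): a phase common to both modes cancels in $\bar{\theta}$, while a differential clock phase between the modes survives.
\begin{verbatim}
import numpy as np

# Linearized small-angle toy model: the anti-parallel modes respond with
# opposite sign (s = +1, -1) to a phase common to both, while a field
# gradient adds clock phases of opposite sign to the two modes.
s = np.array([+1.0, -1.0])

def theta_bar(phi_common, phi_diff):
    phi_mode = np.array([+phi_diff / 2, -phi_diff / 2])   # gradient-induced phases
    dtheta = s * (phi_common + phi_mode)                  # polar-angle shifts
    return dtheta.mean()                                  # mean over the M = 2 modes

print(theta_bar(phi_common=0.010, phi_diff=0.000))   # 0.0   : common phase rejected
print(theta_bar(phi_common=0.010, phi_diff=0.002))   # 0.001 : half the differential phase
\end{verbatim}
In this simple picture the surviving signal is half of the mode-to-mode phase difference, consistent with $\bar{\theta}$ being defined as the mean of the two polar-angle shifts.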
We demonstrate a non-zero differential measurement via the application of a magnetic field gradient across the 20 $\mu$m separation between the two modes. To introduce a clock frequency imbalance between the two modes, the magnetic field coils of the magneto-optical trap (MOT) are pulsed on during the second half of the echo sequence. As the magnetic field gradient (determined by the MOT coil current) increases, $\bar{\theta}$ is observed to shift away from zero (see Fig.~\ref{fig:shift}c). The measured shift of 1.7(0.3) mrad/A corresponds to an average clock frequency shift of $\delta\omega = \bar{\theta}/T_{\text{int}} = 2\pi\times15.7(2.8)$ Hz/A. Second order Zeeman shifts of this magnitude require 4.0(0.8) G/cm/A while the ${}^{87}$Rb atoms are in the presence of the 600 mG bias field. This value is consistent with the gradient estimated from the geometry of the MOT coils. We observed no significant increase in the width of the detection histograms (as shown in Fig.~\ref{fig:shift}b) for the relatively small ($\sim$ 1 mrad) differential phase shifts used in this work. These data demonstrate that this protocol can be used to measure the frequency difference between two distant entangled clocks through the observed (conditional) shift in $J_z$. Additional modes can be added, as described below, to detect higher order spatial correlations between modes. Next, we evaluate the noise performance of entangled, multimode clock networks with $N=45,000$ atoms per mode. The metrological improvement, relative to the corresponding M-mode network of N-atom coherent states, can be quantified by a parameter $\xi_\text{net}^2$ derived from the generalized version~\cite{Gessner2020} of the Wineland squeezing parameter~\cite{Wineland1994}. For example, when M = 2, \begin{equation} \xi_\text{net}^2 \equiv \frac{1}{C^2}\frac{\text{Var}\big(\delta J^{(1)}_z+\delta J^{(2)}_z\big)}{\text{Var}\big(\delta J^{(1),\text{CSS}}_z\big)+\text{Var}\big(\delta J^{(2),\text{CSS}}_z\big)}, \label{eq:sqzparam} \end{equation} where the CSS variances are $N/4$ (see Methods). This parameter accounts for both the local and nonlocal entanglement since $\text{Var}\big(\delta J^{(2)}_z+\delta J^{(1)}_z\big)$ is the sum of the two individual variances and twice the covariance between the two modes. A two-mode SSS is prepared and a pair of $\pi/2$ microwave pulses separated by a time $T_{\text{int}}=100~\mu$sec constitute a standard Ramsey sequence. Squeezing reduces the uncertainty of the joint measurement to $\Delta \bar{\theta} = 1.3(0.1)$ mrad (as shown in Fig.~\ref{fig:scaling}), which corresponds to $\xi_\text{net}^2=-8.6(1.0)$ dB. This precision is near that of a two-mode SSS in the absence of the Ramsey sequence ($\Delta \bar{\theta}=1.2(0.1)$ mrad without technical noise from the sensor sequence). A single-mode clock, on the other hand, has 3.6(0.6) mrad of technical noise. In this differential clock configuration, low measurement variance can be achieved without the need for high performance local oscillators, thus circumventing a limit of previous SSS sensor demonstrations~\cite{Hosten2016a}. This configuration will also suppress environmental noise common to both modes, such as a time varying bias magnetic field. This suppression is achieved with a single collective read-out measurement. \begin{figure} \caption{\textbf{Clock network sensitivity}. Measured sensitivity for a clock network utilizing SSS. The QPN limit for $N = 45,000$ atoms is given by the black line, which is proportional to $1/\sqrt{M}$.
A MS network is realized by taking independent measurements of a single-mode clock for each $\delta\theta^{(m)}$ component of $\bar{\theta}$. When observing the second QND measurement only (black squares) the sensitivity is above the QPN limit due to the QPN and local oscillator noise. The difference in QND measurements (blue circles) detects the spin-squeezing and brings the sensitivity below the QPN, however, the local oscillator noise remains. A one-parameter fit to these data (dashed blue line) is consistent with $1/\sqrt{M}$ scaling. Reducing the local oscillator noise would push the sensitivity lower in the blue shaded area, where the lower limit (solid blue line) is determined by the $M=1$ sensitivity measured without a Ramsey sequence (purple point). The ME networks (green circles) exist below this limit, where the green area represents sensitivities obtainable exclusively through simultaneous measurements of a ME network. Error bars represent the pooled variance of three sets of 200 measurements. \textbf{(Inset)} Space-time diagram of the state preparation for a four-mode ME network.} \label{fig:scaling} \end{figure} This method can be extended to $M=2^P$ clocks by further dividing the atomic ensemble into smaller subsets with $P$ Raman $\pi$ pulses. For example, we demonstrate a four-mode system by inserting an additional Raman $\pi$ pulse, followed by a microwave $\pi/2$ pulse, before the first QND measurement to generate spatially distinct modes. In this case, we adjust the total initial number of atoms to maintain $N=45,000$ atoms per mode. With four modes, the metrological enhancement is $\xi^2_\text{net} = 11.6(1.1)$ dB (see Methods). For comparison, this is a 4.5(0.8) dB relative improvement over the projected MS limit (see Fig.~\ref{fig:scaling}). Here, the network gain is driven by the improved squeezing efficiency for larger numbers of atoms since the total number of atoms initially entangled is $MN$. The observed network gain is consistent with the measured atom number dependence of squeezing efficacy observed in Ref.~\cite{Hosten2016a} for this system. This four-mode network could be used to search, for example, for spatially periodic clock frequency shifts. Finally, we apply this method to an atom interferometer configuration, as illustrated in Figs.~\ref{fig:sequence} and~\ref{fig:pulsesAI}. In this case, two atomic modes are initially entangled as described above in an optical lattice. Atoms are then released from the lattice and subjected to an atom interferometer pulse sequence after an interval of approximately 7 msec, after which they have separated by $\sim$ 0.16 mm. Specifically, a Raman $\pi$ pulse acts as a beamsplitter by simultaneously imparting opposite momentum to the spin states in each branch (resulting in a relative momentum between interfering wavepackets of $4 \hbar \mathbf{k}$, as depicted in Fig.~\ref{fig:sequence} and~\ref{fig:pulsesAI}). $T_{\text{int}}=50~\mu$s later, a sequence of a Raman $\pi$ pulse, microwave $\pi$ pulse and Raman $\pi$ pulse acts as a mirror, while a final Raman $\pi$ pulse recombines the states. The duration of the interferometer pulse sequence is 270 $\mu$sec, dominated by the $\sim$ 160 $\mu$sec microwave $\pi$ pulse time. Each mode of N = 110,000 atoms accumulates a phase proportional to its local acceleration (see Methods). Fluorescence imaging (Ref.~\cite{Malia2020}) detects the sum of the final $J_z^{(m)}$ (the modes are too closely spaced to resolve individually on the camera).
This differential method suppresses large common mode optical phase fluctuations associated with the optical stimulated Raman transitions (measured to be 10 mrad, or roughly 15 dB above the projection noise limit, see Fig.~\ref{fig:intHist}a). \begin{figure} \caption{\textbf{Interferometer performance}. \textbf{(a)} Distributions of 200 sample measurements for the two-mode sequence with ME squeezed states (green), MS coherent states (black), and for the single-mode sequence (purple). \textbf{(b)} The fractional stability for a two-mode interferometer with ME squeezed states (green) and MS coherent states (black) is calculated from a single data set of 200 samples. Error bars represent 68\% confidence intervals. \textbf{(c)} Response of a single-mode interferometer to a phase shift $\phi$ in the final Raman $\pi$ pulse of the sequence. The coherence is determined from the peak-to-peak value of a sinusoidal fit.} \label{fig:intHist} \end{figure} The smallest observed single-shot phase uncertainty with a ME interferometer is 4.9(0.4) mrad (Fig.~\ref{fig:intHist}b), which corresponds to an inferred differential acceleration sensitivity of $1.4(0.1)\times10^{-2}~\text{m/sec}^2$ (see Methods). This sensitivity is limited by the relatively poor contrast (40\%, see Fig.~\ref{fig:intHist}c) associated with the interferometer pulse sequence. Entanglement-enhanced noise performance can be characterized by comparing the observed ME sensor noise to the noise observed for the same sensor sequence implemented without the entangling probe, as shown in Figs.~\ref{fig:intHist}a,b. With respect to the sequence which does not employ entanglement, we observe an average metrological improvement of 1.6(0.9) dB. The absolute noise is 0.1(0.7) dB above the QPN limit for the non-entangled sensor which is likely due to imperfect suppression of Raman laser phase noise. This configuration extrapolates directly to high performance, single source, differential gravity sensors (for example, Ref~\cite{Overstreet2022}). In the future, a distributed array of cavities sharing a common QND measurement~\cite{Polzik2016}, possibly via photonic links and shared probe light~\cite{Nichol2021,Chaudhary2022}, would enable entanglement across longer distances. Adapting this method to squeezed optical clocks~\cite{Pedrozo2020} would further push the limits of precision measurements of time~\cite{Beloy2021,Zheng2022} and gravity~\cite{Bothwell2022}. Applications in secure time transfer and quantum communications can benefit from a distributed entangled state~\cite{Komar2014} since an eavesdropper could not deduce the correlations through observation of one clock alone. For example, information encoded by rotations on one network node would only be detectable through a collective measurement of all nodes. Finally, the atomic interferometer protocol is technologically useful for future high performance gravity gradient sensors and differential configurations designed for gravitational wave detection~\cite{Abe2021,Zhan2019} and dark matter searches~\cite{Wcislo2018,Safronova2018,Tino2021}. \begin{acknowledgments} We acknowledge support from Department of Energy award DE-SC0019174-0001, the Department of Energy Q-NEXT NQI, a Vannevar Bush Faculty Fellowship and NSF QLCI Award OMA-2016244. B.K.M., Y.W., and J.M. designed, constructed, and characterized the experiment. B.K.M. and Y.W. performed data collection and analysis. M.A.K. supervised the research. All authors contributed to the manuscript. 
\end{acknowledgments} \begin{thebibliography}{10} \expandafter\ifx\csname url\endcsname\relax \def\url#1{\texttt{#1}}\fi \expandafter\ifx\csname urlprefix\endcsname\relax\defURL {URL }\fi \providecommand{\bibinfo}[2]{#2} \providecommand{\eprint}[2][]{\url{#2}} \bibitem{Grotti2018} \bibinfo{author}{Grotti, J.} \emph{et~al.} \newblock \bibinfo{title}{Geodesy and metrology with a transportable optical clock}. \newblock \emph{\bibinfo{journal}{Nature Physics}} \textbf{\bibinfo{volume}{14}}, \bibinfo{pages}{437--441} (\bibinfo{year}{2018}). \bibitem{McGrew2018} \bibinfo{author}{McGrew, W.~F.} \emph{et~al.} \newblock \bibinfo{title}{Atomic clock performance enabling geodesy below the centimetre level}. \newblock \emph{\bibinfo{journal}{Nature}} \textbf{\bibinfo{volume}{564}}, \bibinfo{pages}{87--90} (\bibinfo{year}{2018}). \bibitem{Guo2020} \bibinfo{author}{Guo, X.} \emph{et~al.} \newblock \bibinfo{title}{Distributed quantum sensing in a continuous-variable entangled network}. \newblock \emph{\bibinfo{journal}{Nature Physics}} \textbf{\bibinfo{volume}{16}}, \bibinfo{pages}{281--284} (\bibinfo{year}{2020}). \bibitem{Zhao2021} \bibinfo{author}{Zhao, S.-R.} \emph{et~al.} \newblock \bibinfo{title}{Field demonstration of distributed quantum sensing without post-selection}. \newblock \emph{\bibinfo{journal}{Physical Review X}} \textbf{\bibinfo{volume}{11}}, \bibinfo{pages}{031009} (\bibinfo{year}{2021}). \bibitem{Zhang2021} \bibinfo{author}{Zhang, Z.} \& \bibinfo{author}{Zhuang, Q.} \newblock \bibinfo{title}{Distributed quantum sensing}. \newblock \emph{\bibinfo{journal}{Quantum Science and Technology}} \textbf{\bibinfo{volume}{6}}, \bibinfo{pages}{043001} (\bibinfo{year}{2021}). \bibitem{Giovannetti2001} \bibinfo{author}{Giovannetti, V.}, \bibinfo{author}{Lloyd, S.} \& \bibinfo{author}{Maccone, L.} \newblock \bibinfo{title}{Quantum-enhanced positioning and clock synchronization}. \newblock \emph{\bibinfo{journal}{Nature}} \textbf{\bibinfo{volume}{412}}, \bibinfo{pages}{417--419} (\bibinfo{year}{2001}). \bibitem{Beloy2021} \bibinfo{author}{Beloy, K.} \emph{et~al.} \newblock \bibinfo{title}{Frequency ratio measurements at 18-digit accuracy using an optical clock network}. \newblock \emph{\bibinfo{journal}{Nature}} \textbf{\bibinfo{volume}{591}}, \bibinfo{pages}{564--569} (\bibinfo{year}{2021}). \bibitem{Bothwell2022} \bibinfo{author}{Bothwell, T.} \emph{et~al.} \newblock \bibinfo{title}{Resolving the gravitational redshift across a millimetre-scale atomic sample}. \newblock \emph{\bibinfo{journal}{Nature}} \textbf{\bibinfo{volume}{602}}, \bibinfo{pages}{420--424} (\bibinfo{year}{2022}). \bibitem{Pedrozo2020} \bibinfo{author}{Pedrozo-Pe{\~{n}}afiel, E.} \emph{et~al.} \newblock \bibinfo{title}{Entanglement on an optical atomic-clock transition}. \newblock \emph{\bibinfo{journal}{Nature}} \textbf{\bibinfo{volume}{588}}, \bibinfo{pages}{414--418} (\bibinfo{year}{2020}). \bibitem{Zheng2022} \bibinfo{author}{Zheng, X.} \emph{et~al.} \newblock \bibinfo{title}{Differential clock comparisons with a multiplexed optical lattice clock}. \newblock \emph{\bibinfo{journal}{Nature}} \textbf{\bibinfo{volume}{602}}, \bibinfo{pages}{425--430} (\bibinfo{year}{2022}). \newblock \eprint{2109.12237}. \bibitem{Overstreet2022} \bibinfo{author}{Overstreet, C.}, \bibinfo{author}{Asenbaum, P.}, \bibinfo{author}{Curti, J.}, \bibinfo{author}{Kim, M.} \& \bibinfo{author}{Kasevich, M.~A.} \newblock \bibinfo{title}{Observation of a gravitational aharonov-bohm effect}. 
\newblock \emph{\bibinfo{journal}{Science}} \textbf{\bibinfo{volume}{375}}, \bibinfo{pages}{226--229} (\bibinfo{year}{2022}). \bibitem{Liu2021} \bibinfo{author}{Liu, L.-Z.} \emph{et~al.} \newblock \bibinfo{title}{Distributed quantum phase estimation with entangled photons}. \newblock \emph{\bibinfo{journal}{Nature Photonics}} \textbf{\bibinfo{volume}{15}}, \bibinfo{pages}{137--142} (\bibinfo{year}{2021}). \bibitem{Xia2020} \bibinfo{author}{Xia, Y.} \emph{et~al.} \newblock \bibinfo{title}{Demonstration of a reconfigurable entangled radio-frequency photonic sensor network}. \newblock \emph{\bibinfo{journal}{Physical Review Letters}} \textbf{\bibinfo{volume}{124}}, \bibinfo{pages}{150502} (\bibinfo{year}{2020}). \bibitem{Lu2019} \bibinfo{author}{Lu, H.} \emph{et~al.} \newblock \bibinfo{title}{Experimental quantum network coding}. \newblock \emph{\bibinfo{journal}{npj Quantum Information}} \textbf{\bibinfo{volume}{5}} (\bibinfo{year}{2019}). \bibitem{Bodine2020} \bibinfo{author}{Bodine, M.~I.} \emph{et~al.} \newblock \bibinfo{title}{Optical atomic clock comparison through turbulent air}. \newblock \emph{\bibinfo{journal}{Physical Review Research}} \textbf{\bibinfo{volume}{2}}, \bibinfo{pages}{033395} (\bibinfo{year}{2020}). \bibitem{Matsukevich2006} \bibinfo{author}{Matsukevich, D.~N.} \emph{et~al.} \newblock \bibinfo{title}{Entanglement of remote atomic qubits}. \newblock \emph{\bibinfo{journal}{Physical Review Letters}} \textbf{\bibinfo{volume}{96}}, \bibinfo{pages}{030405} (\bibinfo{year}{2006}). \bibitem{Chou2005} \bibinfo{author}{Chou, C.~W.} \emph{et~al.} \newblock \bibinfo{title}{Measurement-induced entanglement for excitation stored in remote atomic ensembles}. \newblock \emph{\bibinfo{journal}{Nature}} \textbf{\bibinfo{volume}{438}}, \bibinfo{pages}{828--832} (\bibinfo{year}{2005}). \bibitem{Simon2007} \bibinfo{author}{Simon, J.}, \bibinfo{author}{Tanji, H.}, \bibinfo{author}{Ghosh, S.} \& \bibinfo{author}{Vuleti{\'{c}}, V.} \newblock \bibinfo{title}{Single-photon bus connecting spin-wave quantum memories}. \newblock \emph{\bibinfo{journal}{Nature Physics}} \textbf{\bibinfo{volume}{3}}, \bibinfo{pages}{765--769} (\bibinfo{year}{2007}). \bibitem{Muralidharan2016} \bibinfo{author}{Muralidharan, S.} \emph{et~al.} \newblock \bibinfo{title}{Optimal architectures for long distance quantum communication}. \newblock \emph{\bibinfo{journal}{Scientific Reports}} \textbf{\bibinfo{volume}{6}} (\bibinfo{year}{2016}). \bibitem{Gundogan2021} \bibinfo{author}{Gündo{\u{g}}an, M.} \emph{et~al.} \newblock \bibinfo{title}{Proposal for space-borne quantum memories for global quantum networking}. \newblock \emph{\bibinfo{journal}{npj Quantum Information}} \textbf{\bibinfo{volume}{7}} (\bibinfo{year}{2021}). \bibitem{Komar2014} \bibinfo{author}{K{\'{o}}m{\'{a}}r, P.} \emph{et~al.} \newblock \bibinfo{title}{A quantum network of clocks}. \newblock \emph{\bibinfo{journal}{Nature Physics}} \textbf{\bibinfo{volume}{10}}, \bibinfo{pages}{582--587} (\bibinfo{year}{2014}). \bibitem{Polzik2016} \bibinfo{author}{Polzik, E.~S.} \& \bibinfo{author}{Ye, J.} \newblock \bibinfo{title}{Entanglement and spin squeezing in a network of distant optical lattice clocks}. \newblock \emph{\bibinfo{journal}{Physical Review A}} \textbf{\bibinfo{volume}{93}}, \bibinfo{pages}{021404} (\bibinfo{year}{2016}). 
\bibitem{Leroux2010a} \bibinfo{author}{Leroux, I.~D.}, \bibinfo{author}{Schleier-Smith, M.~H.} \& \bibinfo{author}{Vuleti{\'{c}}, V.} \newblock \bibinfo{title}{Orientation-dependent entanglement lifetime in a squeezed atomic clock}. \newblock \emph{\bibinfo{journal}{Physical Review Letters}} \textbf{\bibinfo{volume}{104}} (\bibinfo{year}{2010}). \bibitem{Gessner2018} \bibinfo{author}{Gessner, M.}, \bibinfo{author}{Pezz{\`{e}}, L.} \& \bibinfo{author}{Smerzi, A.} \newblock \bibinfo{title}{Sensitivity bounds for multiparameter quantum metrology}. \newblock \emph{\bibinfo{journal}{Physical Review Letters}} \textbf{\bibinfo{volume}{121}}, \bibinfo{pages}{130503} (\bibinfo{year}{2018}). \bibitem{Zhuang2018} \bibinfo{author}{Zhuang, Q.}, \bibinfo{author}{Zhang, Z.} \& \bibinfo{author}{Shapiro, J.~H.} \newblock \bibinfo{title}{Distributed quantum sensing using continuous-variable multipartite entanglement}. \newblock \emph{\bibinfo{journal}{Physical Review A}} \textbf{\bibinfo{volume}{97}}, \bibinfo{pages}{032329} (\bibinfo{year}{2018}). \bibitem{Eckert2006} \bibinfo{author}{Eckert, K.} \emph{et~al.} \newblock \bibinfo{title}{Differential atom interferometry beyond the standard quantum limit}. \newblock \emph{\bibinfo{journal}{Physical Review A}} \textbf{\bibinfo{volume}{73}}, \bibinfo{pages}{013814} (\bibinfo{year}{2006}). \bibitem{Nichol2021} \bibinfo{author}{Nichol, B.~C.} \emph{et~al.} \newblock \bibinfo{title}{A quantum network of entangled optical atomic clocks}. \newblock \emph{\bibinfo{journal}{Preprint at https://arxiv.org/abs/2111.10336}} (\bibinfo{year}{2021}). \bibitem{Jing2019} \bibinfo{author}{Jing, Y.}, \bibinfo{author}{Fadel, M.}, \bibinfo{author}{Ivannikov, V.} \& \bibinfo{author}{Byrnes, T.} \newblock \bibinfo{title}{Split spin-squeezed bose{\textendash}einstein condensates}. \newblock \emph{\bibinfo{journal}{New Journal of Physics}} \textbf{\bibinfo{volume}{21}}, \bibinfo{pages}{093038} (\bibinfo{year}{2019}). \bibitem{Julsgaard2001} \bibinfo{author}{Julsgaard, B.}, \bibinfo{author}{Kozhekin, A.} \& \bibinfo{author}{Polzik, E.~S.} \newblock \bibinfo{title}{Experimental long-lived entanglement of two macroscopic objects}. \newblock \emph{\bibinfo{journal}{Nature}} \textbf{\bibinfo{volume}{413}} (\bibinfo{year}{2001}). \bibitem{Fadel2018} \bibinfo{author}{Fadel, M.}, \bibinfo{author}{Zibold, T.}, \bibinfo{author}{D{\'{e}}camps, B.} \& \bibinfo{author}{Treutlein, P.} \newblock \bibinfo{title}{Spatial entanglement patterns and einstein-podolsky-rosen steering in bose-einstein condensates}. \newblock \emph{\bibinfo{journal}{Science}} \textbf{\bibinfo{volume}{360}}, \bibinfo{pages}{409--413} (\bibinfo{year}{2018}). \bibitem{Lange2018} \bibinfo{author}{Lange, K.} \emph{et~al.} \newblock \bibinfo{title}{Entanglement between two spatially separated atomic modes}. \newblock \emph{\bibinfo{journal}{Science}} \textbf{\bibinfo{volume}{360}}, \bibinfo{pages}{416--418} (\bibinfo{year}{2018}). \bibitem{Kunkel2018} \bibinfo{author}{Kunkel, P.} \emph{et~al.} \newblock \bibinfo{title}{Spatially distributed multipartite entanglement enables {EPR} steering of atomic clouds}. \newblock \emph{\bibinfo{journal}{Science}} \textbf{\bibinfo{volume}{360}}, \bibinfo{pages}{413--416} (\bibinfo{year}{2018}). \bibitem{Anders2021} \bibinfo{author}{Anders, F.} \emph{et~al.} \newblock \bibinfo{title}{Momentum entanglement for atom interferometry}. \newblock \emph{\bibinfo{journal}{Physical Review Letters}} \textbf{\bibinfo{volume}{127}}, \bibinfo{pages}{140402} (\bibinfo{year}{2021}). 
\bibitem{Greve2021} \bibinfo{author}{Greve, G.~P.}, \bibinfo{author}{Luo, C.}, \bibinfo{author}{Wu, B.} \& \bibinfo{author}{Thompson, J.~K.} \newblock \bibinfo{title}{Entanglement-enhanced matter-wave interferometry in a high-finesse cavity}. \newblock \emph{\bibinfo{journal}{Preprint at http://arxiv.org/abs/2110.14027}} (\bibinfo{year}{2021}). \bibitem{Hosten2016a} \bibinfo{author}{Hosten, O.}, \bibinfo{author}{Engelsen, N.~J.}, \bibinfo{author}{Krishnakumar, R.} \& \bibinfo{author}{Kasevich, M.~A.} \newblock \bibinfo{title}{Measurement noise 100 times lower than the quantum-projection limit using entangled atoms}. \newblock \emph{\bibinfo{journal}{Nature}} \textbf{\bibinfo{volume}{529}}, \bibinfo{pages}{505} (\bibinfo{year}{2016}). \bibitem{Malia2020} \bibinfo{author}{Malia, B.}, \bibinfo{author}{Martínez-Rincón, J.}, \bibinfo{author}{Wu, Y.}, \bibinfo{author}{Hosten, O.} \& \bibinfo{author}{Kasevich, M.~A.} \newblock \bibinfo{title}{Free space ramsey spectroscopy in rubidium with noise below the quantum projection limit}. \newblock \emph{\bibinfo{journal}{Physical Review Letters}} \textbf{\bibinfo{volume}{125}} (\bibinfo{year}{2020}). \bibitem{Fadel2022} \bibinfo{author}{Fadel, M.}, \bibinfo{author}{Yadin, B.}, \bibinfo{author}{Mao, Y.}, \bibinfo{author}{Byrnes, T.} \& \bibinfo{author}{Gessner, M.} \newblock \bibinfo{title}{Multiparameter quantum metrology and mode entanglement with spatially split nonclassical spin states}. \newblock \emph{\bibinfo{journal}{Preprint at https://arxiv.org/abs/2201.11081}} (\bibinfo{year}{2022}). \bibitem{Gessner2020} \bibinfo{author}{Gessner, M.}, \bibinfo{author}{Smerzi, A.} \& \bibinfo{author}{Pezz{\`{e}}, L.} \newblock \bibinfo{title}{Multiparameter squeezing for optimal quantum enhancements in sensor networks}. \newblock \emph{\bibinfo{journal}{Nature Communications}} \textbf{\bibinfo{volume}{11}} (\bibinfo{year}{2020}). \bibitem{Wineland1994} \bibinfo{author}{Wineland, D.~J.}, \bibinfo{author}{Bollinger, J.~J.}, \bibinfo{author}{Itano, W.~M.} \& \bibinfo{author}{Heinzen, D.~J.} \newblock \bibinfo{title}{Squeezed atomic states and projection noise in spectroscopy}. \newblock \emph{\bibinfo{journal}{Physical Review A}} \textbf{\bibinfo{volume}{50}}, \bibinfo{pages}{67--88} (\bibinfo{year}{1994}). \bibitem{Chaudhary2022} \bibinfo{author}{Chaudhary, M.} \emph{et~al.} \newblock \bibinfo{title}{Stroboscopic quantum nondemolition measurements for enhanced entanglement generation between atomic ensembles}. \newblock \emph{\bibinfo{journal}{Physical Review A}} \textbf{\bibinfo{volume}{105}}, \bibinfo{pages}{022443} (\bibinfo{year}{2022}). \bibitem{Abe2021} \bibinfo{author}{Abe, M.} \emph{et~al.} \newblock \bibinfo{title}{Matter-wave atomic gradiometer interferometric sensor ({MAGIS}-100)}. \newblock \emph{\bibinfo{journal}{Quantum Science and Technology}} \textbf{\bibinfo{volume}{6}}, \bibinfo{pages}{044003} (\bibinfo{year}{2021}). \bibitem{Zhan2019} \bibinfo{author}{Zhan, M.-S.} \emph{et~al.} \newblock \bibinfo{title}{{ZAIGA}: Zhaoshan long-baseline atom interferometer gravitation antenna}. \newblock \emph{\bibinfo{journal}{International Journal of Modern Physics D}} \textbf{\bibinfo{volume}{29}}, \bibinfo{pages}{1940005} (\bibinfo{year}{2019}). \bibitem{Wcislo2018} \bibinfo{author}{Wcis{\l}o, P.} \emph{et~al.} \newblock \bibinfo{title}{New bounds on dark matter coupling from a global network of optical atomic clocks}. \newblock \emph{\bibinfo{journal}{Science Advances}} \textbf{\bibinfo{volume}{4}} (\bibinfo{year}{2018}). 
\bibitem{Safronova2018} \bibinfo{author}{Safronova, M.~S.}, \bibinfo{author}{Porsev, S.~G.}, \bibinfo{author}{Sanner, C.} \& \bibinfo{author}{Ye, J.} \newblock \bibinfo{title}{Two clock transitions in neutral yb for the highest sensitivity to variations of the fine-structure constant}. \newblock \emph{\bibinfo{journal}{Physical Review Letters}} \textbf{\bibinfo{volume}{120}}, \bibinfo{pages}{173001} (\bibinfo{year}{2018}). \bibitem{Tino2021} \bibinfo{author}{Tino, G.~M.} \newblock \bibinfo{title}{Testing gravity with cold atom interferometry: results and prospects}. \newblock \emph{\bibinfo{journal}{Quantum Science and Technology}} \textbf{\bibinfo{volume}{6}}, \bibinfo{pages}{024014} (\bibinfo{year}{2021}). \bibitem{Kasevich1991} \bibinfo{author}{Kasevich, M.} \& \bibinfo{author}{Chu, S.} \newblock \bibinfo{title}{Atomic interferometry using stimulated raman transitions}. \newblock \emph{\bibinfo{journal}{Physical Review Letters}} \textbf{\bibinfo{volume}{67}}, \bibinfo{pages}{181--184} (\bibinfo{year}{1991}). \bibitem{Malia2021_thesis} \bibinfo{author}{Malia, B.} \newblock \emph{\bibinfo{title}{Integration of spin squeezed states into free space atomic sensors}}. \newblock Ph.D. thesis, \bibinfo{school}{Stanford University} (\bibinfo{year}{2021}). \end{thebibliography} \section{Methods} \subsection{Contrast} When determining mode separation, a second Raman pulse removes the relative momentum after time $T$ to maintain the mode separation distance until detection takes place. A $\pi/2$ microwave pulse with varying phase addresses both modes simultaneously and the remaining contrast is determined by the peak-to-peak $J_z$ values from fluorescence imaging (see Fig.~\ref{fig:contrast}). The coherence falls to zero after roughly one thermal de Broglie wavelength $\lambda_\text{th}=h/\sqrt{2\pi m_\text{Rb}k_BT_\text{ens}} = 36$ nm, where $k_B$ is the Boltzmann constant, $T_\text{ens}$ is the temperature of the ensemble, and $m_\text{Rb}$ is the mass of the ${}^{87}$Rb atom. In the absence of Raman transitions, the contrast is $C = 79(1)\%$ due to decoherence in the lattice both before and after the sensing times. Adding in the two Raman transitions with $T=0$ decreases the contrast to $C = 73(1)\%$. To determine $C$ for the clock measurement, a final microwave $\pi/2$ pulse temporarily introduced into the single-mode sequence yields $C = 78(3)\%$. Therefore, introducing a single Raman transition before the QND measurement does not significantly reduce the final coherence of the ensemble. The gravity gradiometer has a lower final coherence, roughly 40\%, due to four additional Raman pulses. This is consistent with the expected $C = (88\%~\text{population transfer})^4\times(79\%~\text{contrast without gradiometer})$. \begin{figure} \caption{\textbf{Mode separation.} Contrast of the collective fluorescent measurement as a function of separation time between two $0.33~\mu$s Raman $\pi$ pulses. The solid curve is an exponential fit to the data with a decay time constant of 0.46 $\mu$s. Note that $T=0$ corresponds to a single pulse with a total pulse area of $2\pi$. Error bars represent a 95\% confidence interval.} \label{fig:contrast} \end{figure} \subsection{Squeezing matrix for multiparameter discrete-variable squeezing} The form of the metrological squeezing parameter $\xi_\text{net}^2$ is derived from Equation 13 in Gessner et al.~\cite{Gessner2020}. For a general multimode system, the squeezing matrix $\Xi^2$ characterizes the level of metrological improvement due to the entangled quantum network. 
In other words, it compares the covariances between each mode to the QPN limit. The matrix elements can be defined as \begin{equation} \Xi^2_{kl}= \frac{\sqrt{N^{(k)} N^{(l)}} \text{Cov}\big(\hat{J}^{(k)}_z,\hat{J}^{(l)}_z\big)}{\big<\hat{J}^{(k)}_x\big>\big<\hat{J}^{(l)}_x\big>}, \end{equation} where $\hat{J}^{(m)}_x$ are the spin operators for each mode and the mean spins are in the $\mathbf{\hat{x}}$ direction. The metrological improvement in the multiparameter estimation can be written as the ratio of variance of the squeezed network to that of a network comprised of coherent states: $\mathbf{n}^T\mathbf{\Sigma}^2\mathbf{n}/\mathbf{n}^T\mathbf{\Sigma_\text{SN}}^2\mathbf{n}$, where $\mathbf{\Sigma}$ and $\mathbf{\Sigma_\text{SN}}$ are the covariance matrices of the squeezed and coherent states respectively, and $\mathbf{n}$ is the vector of coefficients for the linear combination of parameters being measured. In the case of equally populated [$N = N^{(m)}$] modes, the expected length values are $\big<\hat{J}^{(m)}_x\big>=CN/2$. For a measurement of the average angular shift ($n_m = 1/M$), it can be shown that the metrological improvement reduces to $\xi_\text{net}^2=M\mathbf{n}^T\mathbf{\Xi^2}\mathbf{n}$. More explicitly, in terms of the measured observables, it can be written as \begin{equation} \xi_\text{net}^2 \equiv \frac{1}{C^2}\frac{\text{Var}\big(\sum^M_{m=1}\delta J^{(m)}_z\big)}{M\times\text{Var}\big(\delta J^{\text{CSS}}_z\big)}. \label{eq:gensqzparam} \end{equation} \subsection{Measurement Sensitivity} Since the QND measurement addresses all modes simultaneously, it cannot distinguish between spin states with different momenta. The measured $\delta J_z$ is simply the sum of $\delta J^{(m)}_z$, with expectation value \begin{equation} \big<\delta J_z\big> = \sum_{m=1}^M \delta J^{(m)}_z = C\frac{N}{2}\sum_{m=1}^M \delta \theta^{(m)} =C\frac{N}{2}M\bar{\theta}. \end{equation} The sensitivity, $\sigma$, of this measurement to changes in $\bar{\theta}$ is given by standard error propagation~\cite{Guo2020}: \begin{equation} \sigma = \frac{\sqrt{\text{Var}(\delta J_z)}}{\partial\big<\delta J_z\big>/\partial\bar{\theta}} = \frac{\Delta (\delta J_z)}{CMN/2}=\Delta\bar{\theta}. \end{equation} \begin{figure} \caption{\textbf{Interferometer sequence timing.} Space time diagram in the inertial frame of a single-mode interferometer. Solid (dashed) lines represent the trajectory of the spin down (up) state. White (gray) waves represent the finite time of the microwave (Raman) pulses.} \label{fig:pulsesAI} \end{figure} \subsection{Laser System} A low phase noise, 1,560 nm laser is frequency doubled to 780 nm. This light is split and one mode passes through an electro-optic modulator (EOM) driven at 6.434 GHz, 400 MHz lower than the hyperfine transition frequency, $\omega_{\text{HF}}$. The driving signal is created by a low phase noise crystal oscillator mixed with a direct digital synthesizer (DDS), which allows for power, frequency and phase control. Next, both modes are amplified by semiconductor-based optical amplifiers to 2.8 W each. One mode is now up-shifted by a 200 MHz acousto-optic modulator (AOM) and the other is down-shifted by the same amount. Both AOMs are driven by a common signal from a low noise 200 MHz crystal oscillator. The pulsed signal controls the time the AOMs couple the light to optical fibers which deliver the light to the atoms. 
The fibers launch the light into 5.4 mm diameter, counter-propagating free-space beams at a 45 deg angle to the vertical and a 45 deg angle to the cavity axis. The shifting places one sideband of the modulated beam $\omega_{\text{HF}}$ away from the un-modulated beam frequency. These two frequencies drive the Raman transition between the two hyperfine states. The two participating frequencies create a transition which is red-detuned by 3.5 GHz from the excited state. The other sidebands are used to balance the AC-Stark shift and do not contribute significantly to the population change. \subsection{Interferometer Phase Shift} The sequence used in this work differs from a standard Mach-Zehnder configuration~\cite{Kasevich1991} in that both spin states receive a momentum kick instead of just one. In addition, the microwave pulses are longer than the interrogation time, so terms involving the pulse durations must be retained. The total phase shifts, $\delta \theta^{(m)}$, of an interferometer can be derived from the sensitivity function~\cite{Malia2021_thesis}: \begin{equation} \begin{split} \delta \theta^{(m)} = 2|\mathbf{k}|a^{(m)}(2T_{\text{int}}^2 + 4 T_{\text{int}} T_0 + 4 T_{\text{int}}\tau_0 \\+ 6 T_{\text{int}} \tau_k + 4 T_0 \tau_k + 4 \tau_0 \tau_k + 4 \tau_k^2), \end{split} \end{equation} where $a^{(m)}$ is the acceleration in mode $m$ projected along $\mathbf{k}$, $T_0 = 1 ~\mu$sec is the time between sequential pulses, $\tau_0=80$ $\mu$sec is the duration of a microwave $\pi/2$ pulse, and $\tau_k=2~\mu$sec is the duration of a Raman $\pi$ pulse (see Fig.~\ref{fig:pulsesAI}). For the data of Fig.~\ref{fig:intHist}b, where $T_{\text{int}} = 50~\mu$sec, we infer a single-shot statistical sensitivity to the average acceleration $\bar{a} = (\sum^M_{m=1} a^{(m)})/M$ of $\Delta \bar{a} = 1.4(0.1)\times10^{-2}~\text{m/sec}^2$. \end{document}
\begin{document} \begin{center} \begin{LARGE} \textbf{Entanglement and nonlocality versus spontaneous emission\\[6mm] in two -- atom system} \end{LARGE}\\[12mm] L. Jak{\'o}bczyk and A. Jamr{\'o}z\\[2mm] Institute of Theoretical Physics\\ University of Wroc{\l}aw\\ Pl. M. Borna 9, 50-204 Wroc{\l}aw, Poland \end{center} \vskip 12mm \noindent {\sc Abstract}: We study the evolution of entanglement of two two-level atoms in the presence of dissipation caused by spontaneous emission. We find explicit formulas for the amount of entanglement as a function of time, covering both the destruction of the initial entanglement and the possible creation of a transient entanglement between the atoms. We also discuss how spontaneous emission influences the nonlocality of states, expressed by the violation of the Bell - CHSH inequality. It is shown that the evolving system very quickly becomes local, even if entanglement is still present or being produced. \section{Introduction} The process of spontaneous emission by a system of two - level atoms was extensively studied by several authors (see e.g. \cite{Dicke,dill,lemb1,lemb2}). In particular, in the case of spontaneous emission by two trapped atoms separated by a distance small compared to the radiation wavelength, where there is a substantial probability that a photon emitted by one atom will be absorbed by the other, there are states of the system in which photon exchange can enhance or diminish spontaneous decay rates. The states with an enhanced decay rate are called superradiant and, analogously, states with a diminished decay rate are called subradiant \cite{Dicke}. It was also shown by Dicke that the system of two coupled two-level atoms can be treated as a single four-level system with modified decay rates. \par Other aspects of the model of spontaneous emission are studied in the present paper. When the compound system of two atoms is in an entangled state, the irreversible process of radiative decay usually destroys correlations and the state becomes unentangled. In the model studied here, the photon exchange produces correlations between the atoms which can partially overcome decoherence caused by spontaneous radiation. As a result, some amount of entanglement can survive, and moreover there is a possibility that this process can entangle separable states of the two atoms. The idea that dissipation can create entanglement in physical systems was recently developed in several papers \cite{plenio,kim,milb,kni}. In particular, the effect of spontaneous emission on the destruction and production of entanglement in a two - atom system was discussed \cite{J,FT3,FT2}. Possible production of robust entanglement for closely separated atoms was shown in Ref. \cite{J}, and the existence of transient entanglement induced by this process in a system of two atoms separated by an arbitrary distance was also studied \cite{FT3,FT2}. \par In this paper we also concentrate on arbitrarily separated atoms and consider the dynamics of entanglement. As in \cite{FT2}, we take a class of initial states, including interesting pure and mixed states, and discuss in detail its time evolution as well as the evolution of its entanglement. Note that our initial states are different from those considered in Ref. \cite{FT2}. We also study the interesting problem of how the dissipative process of spontaneous emission influences the nonlocal properties of initial states. 
Nonlocality of quantum theory manifests itself in the violation of Bell inequalities, and in the case of two two-level systems it can be quantified by a numerical parameter ranging from $0$ for local states to $1$ for states maximally violating some Bell inequality. The atomic dynamics studied in this paper also enables us to consider the time evolution of this parameter. \par The model considered in the present paper consists of two two-level atoms coupled to a common thermostat at zero temperature, and the reduced dynamics (in the Markovian approximation) is given by the semi-group $\{ T_{t} \}_{t\geq 0}$ of completely positive linear mappings acting on density matrices (see e.g. \cite{Alicki}). The dynamics takes into account only spontaneous emission and possible photon exchange between atoms \cite{Agar,FT1}, and the generator $L_{D}$ of $\{ T_{t} \}_{t\geq 0}$ is parametrized in terms of the spontaneous emission rate of the single atom $\gamma_{0}$ and the photon exchange rate $\gamma$. In the case of atoms separated by an arbitrary distance $R$, $\gamma$ is strictly smaller than $\gamma_{0}$ and one can check that the relaxation process brings all initial states into the unique asymptotic state in which both atoms are in their ground states. In contrast to the small separation regime ($\gamma=\gamma_{0}$) studied in Ref. \cite{J}, where the robust entanglement of non-trivial asymptotic states can be analysed, in the present case only the transient entanglement of some states can exist. To consider transient entanglement we need to know in detail the time evolution of initial states, not only their asymptotic behaviour, so the analysis of possible generation of entanglement is much more involved. \par In this paper we calculate the time evolution of an arbitrary initial density matrix. To obtain an analytic expression for entanglement as a function of time, we concentrate on a class of states which is left invariant during the evolution and admits an explicit formula for the measure of entanglement. Next we discuss in detail how pure initial states, both unentangled and entangled, evolve. We show that entanglement as a function of time can behave very differently depending on the initial conditions: it may monotonically decrease to zero, increase to a maximal value and then decrease to zero, or even have a local minimum and a local maximum. In particular, there are states for which the induced transient entanglement is larger than the initial entanglement. Our solution also enables us to study the nonlocality of the evolving system of two atoms. It turns out that the natural measure of nonlocality very quickly becomes equal to zero, even if entanglement is increasing in some time interval. \noindent \section{Entanglement and Nonlocality for a Pair of Two-Level Atoms} \subsection{Measure of entanglement} \noindent Consider a two-level atom $A$ with ground state $\ket{0}$ and excited state $\ket{1}$. This quantum system can be described in terms of the Hilbert space ${\mathcal H}_{A}=\mathbb C^{2}$ and the algebra $\mathfrak A_{A}$ of $2\times 2$ complex matrices. 
If we identify $\ket{1}$ and $\ket{0}$ with vectors $\bigl( \begin{smallmatrix} 1\\0 \end{smallmatrix}\bigr)$ and $\bigl( \begin{smallmatrix} 0 \\ 1 \end{smallmatrix}\bigr)$ respectively, then the raising and lowering operators $\si{+},\;\si{-}$ defined by \begin{equation} \si{+}=\ket{1}\bra{0},\quad \si{-}=\ket{0}\bra{1} \end{equation} can be expressed in terms of Pauli matrices $\si{1},\; \si{2}$ \begin{equation} \si{+}=\frac{1}{2}\,(\si{1}+i\,\si{2}),\quad \si{-}=\frac{1}{2}\,(\si{1}-i\,\si{2}) \end{equation} For a joint system $AB$ of two two-level atoms $A$ and $B$, the algebra $\mathfrak A_{AB}$ is equal to $4\times 4$ complex matrices and the Hilbert space ${\mathcal H}_{AB}={\mathcal H}_{A}\otimes {\mathcal H}_{B}=\mathbb C^{4}$. Let $\mathcal E_{AB}$ be the set of all states of the compound system i.e. \begin{equation} \mathcal E_{AB}=\{ \rho\in \mathfrak A_{AB}\, : \, \rho\geq 0\quad\text{and}\quad\mathrm{tr}\, \rho =1 \} \end{equation} The state $\rho\in \mathcal E_{AB}$ is \textit{separable} \cite{Werner}, if it has the form \begin{equation} \rho=\sum\limits_{k}\lambda_{k}\rho_{k}^{A}\otimes \rho_{k}^{B},\quad \rho_{k}^{A}\in \mathcal E_{A},\;\rho_{k}^{B}\in \mathcal E_{B},\; \lambda_{k}\geq 0\quad\text{and}\quad \sum\limits_{k}\lambda_{k}=1 \end{equation} The set $\mathcal E_{AB}^{\,\rm sep}$ of all separable states forms a convex subset of $\mathcal E_{AB}$. When $\rho$ is not separable, it is called \textit{inseparable} or \textit{entangled}. Thus \begin{equation} \mathcal E_{AB}^{\,\rm ent}=\mathcal E_{AB}\setminus \mathcal E_{AB}^{\,\rm sep} \end{equation} As a measure of the amount of entanglement a given state contains we take the entanglement of formation \cite{Bennett} \begin{equation} E(\rho)=\min \, \sum\limits_{k}\lambda_{k}E(P_{k}) \end{equation} where the minimum is taken over all possible decompositions \begin{equation} \rho=\sum\limits_{k}\lambda_{k}P_{k} \end{equation} and \begin{equation} E(P)=-\mathrm{tr}\,[ (\ptr{A}P)\,\log_{2}\, (\ptr{A}P)] \end{equation} In the case of two two-level atoms, $E(\rho)$ is the function of another useful quantity $C(\rho)$ called \textit{concurrence}, which also can be taken as a measure of entanglement \cite{HW, W}. Now we pass to the definition of $C(\rho)$. Let \begin{equation} \rho^{\dag}=(\si{2}\otimes \si{2})\,\overline{\rho}\,(\si{2}\otimes \si{2}) \end{equation} where $\overline{\rho}$ is the complex conjugation of the matrix $\rho$. Define also \begin{equation} \widehat{\rho}=(\rho^{1/2}\rho^{\dag}\rho^{1/2})^{1/2} \end{equation} Then the concurrence $C(\rho)$ is given by \cite{HW,W} \begin{equation} C(\rho)=\max\; (\,0, 2p_{\mathrm{max}}(\widehat{\rho})-\mathrm{tr}\, \widehat{\rho}\,) \end{equation} where $p_{\mathrm{max}}(\widehat{\rho})$ denotes the maximal eigenvalue of $\widehat{\rho}$. The value of the number $C(\rho)$ varies from $0$ for separable states, to $1$ for maximally entangled pure states. \par Consider now the class of density matrices $\rho$ \begin{equation} \rho=\begin{pmatrix} 0&0&0&0\\ 0&\rho_{22}&\rho_{23}&\rho_{24}\\ 0&\rho_{32}&\rho_{33}&\rho_{34}\\ 0&\rho_{42}&\rho_{43}&\rho_{44} \end{pmatrix} \end{equation} where the matrix elements are taken with respect to the basis $\ket{1}\otimes\ket{1},\, \ket{1}\otimes\ket{0},\,\ket{0}\otimes\ket{1}$ and $\ket{0}\otimes\ket{0}$. 
One can check that for density matrices of the form (12) \begin{equation} C(\rho)=|\rho_{23}|+\sqrt{\rho_{22}\rho_{33}}-|\,|\rho_{23}|-\sqrt{\rho_{22}\rho_{33}}\,| \end{equation} By the positive-definiteness of $\rho$, $|\rho_{23}|\leq \sqrt{\rho_{22}\rho_{33}}$, and we have the result:\\[2mm] \textit{The concurrence of a density matrix of the form} (12) \textit{is given by} \begin{equation} C(\rho)=2\,|\rho_{23}| \end{equation} As we will show, the class of density matrices given by (12) is invariant with respect to the time evolution considered in the paper, and formula (14) can be used to analyse the evolution of entanglement of states which initially have the form (12). \subsection{Violation of Bell inequalities} The contradiction between quantum theory and local realism expressed by the violation of the Bell - CHSH inequality \cite{CHSH} can be studied in the case of a two - qubit system using a simple necessary and sufficient condition \cite{HHH,H}. Any state $\rho\in\mathcal E_{AB}$ can be written as \begin{equation} \rho=\frac{1}{4}\left(\mathbb I_{2}\otimes\mathbb I_{2}+\tl{r}\cdot\tl{\sigma}\otimes\mathbb I_{2}+\mathbb I_{2}\otimes \tl{s}\cdot \tl{\sigma}+\sum\limits_{n,m=1}^{3}t_{nm}\,\si{n}\otimes\si{m}\right) \end{equation} where $\mathbb I_{2}$ is the identity matrix in two dimensions, $\si{1},\si{2},\si{3}$ are Pauli matrices, $\tl{r},\tl{s}$ are vectors in $\mathbb R^{3}$ and $\tl{r}\cdot\tl{\sigma}=\sum\limits_{j=1}^{3}r_{j}\si{j}$. The coefficients \begin{equation} t_{nm}=\mathrm{tr}\, (\rho\,\si{n}\otimes\si{m}) \end{equation} form a real matrix $T_{\rho}$. Define also the real symmetric matrix \begin{equation} U_{\rho}=T_{\rho}^{T}\,T_{\rho} \end{equation} where $T_{\rho}^{T}$ is the transpose of $T_{\rho}$. Consider now the family of Bell operators \begin{equation} B_{CHSH}=\tl{a}\cdot\tl{\sigma}\otimes (\tl{b}+\tl{b}^{\prime})\cdot\tl{\sigma}+\tl{a}^{\prime}\cdot\tl{\sigma}\otimes (\tl{b}-\tl{b}^{\prime})\cdot\tl{\sigma} \end{equation} where $\tl{a},\,\tl{a}^{\prime},\,\tl{b},\,\tl{b}^{\prime}$ are unit vectors in $\mathbb R^{3}$. Then the CHSH inequality reads \begin{equation} |\mathrm{tr}\, (\rho\, B_{CHSH})|\leq 2 \end{equation} Violation of inequality (19) by the density matrix (15) and some Bell operator (18) can be checked by the following criterion: Let \begin{equation} m(\rho)=\max_{j<k}\; (u_{j}+u_{k}) \end{equation} where $u_{j}$, $j=1,2,3$, are the eigenvalues of $U_{\rho}$. As was shown in \cite{HHH,H} \begin{equation} \max_{B_{CHSH}}\,\mathrm{tr}\, (\rho\, B_{CHSH})=2\,\sqrt{m(\rho)} \end{equation} Thus (19) is violated by some choice of $\tl{a},\tl{a}^{\prime},\tl{b},\tl{b}^{\prime}$ iff $m(\rho)>1$. \par \noindent If we consider the subclass of the class of states (12) consisting of density matrices of the form \begin{equation} \rho=\begin{pmatrix}0&0&0&0\\ 0&\rho_{22}&\rho_{23}&0\\ 0&\rho_{32}&\rho_{33}&0\\ 0&0&0&\rho_{44} \end{pmatrix} \end{equation} then we obtain the following expression for $m(\rho)$ \begin{equation} m(\rho)=\max\; (2\,C^{2}(\rho),\,(1-2\,\rho_{44})^{2}+C^{2}(\rho)\,) \end{equation} where $C(\rho)=2|\rho_{23}| $ is the concurrence of the state $\rho$. 
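For instance, for a maximally entangled state of the form (22) with $\rho_{22}=\rho_{33}=|\rho_{23}|=\frac{1}{2}$ and $\rho_{44}=0$, formula (14) gives $C(\rho)=1$ and formula (23) gives $m(\rho)=\max\,(2,2)=2$, so by (21) the left hand side of (19) can reach $2\sqrt{2}$, i.e. the CHSH inequality is violated maximally.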
Notice that the inequality $$ (1-2\rho_{44})^{2}+C^{2}(\rho)>1 $$ is equivalent to $$ |\rho_{23}|^{2}>\rho_{44}\,(1-\rho_{44}) $$ Let us introduce the linear entropy of the state $\rho$ $$ S_{L}(\rho)=1-\mathrm{tr}\,\,\rho^{2} $$ For states (22) $$ \mathrm{tr}\,\,\rho^{2}=\rho_{22}^{2}+\rho_{33}^{2}+\rho_{44}^{2}+2\,|\rho_{23}\,|^{2} $$ so using $(\rho_{22}+\rho_{33}+\rho_{44})^{2}=1$ (which holds since $\mathrm{tr}\,\rho=1$ and $\rho_{11}=0$) we obtain $$ S_{L}(\rho)=2\,(\rho_{22}\rho_{33}+\rho_{22}\rho_{44}+\rho_{33}\rho_{44}-|\rho_{23}|^{2}\,) $$ On the other hand $$ \rho_{22}\rho_{33}-\frac{1}{2}\,S_{L}(\rho)=|\rho_{23}|^{2}-\rho_{44} \,(\rho_{22}+\rho_{33}\,)=|\rho_{23}|^{2}-\rho_{44}\,(1-\rho_{44}\,) $$ so the inequality $(1-2\rho_{44})^{2}+C^{2}(\rho)>1$ holds if and only if $\rho_{22}\rho_{33}>\frac{1}{2}\,S_{L}(\rho)$, and we obtain the following result: \\[4mm] \textit{The states} (22) \textit{ violate some Bell - CHSH inequality if and only if $|\rho_{23}|>\frac{1}{2\sqrt{2}}$ or $\rho_{22}\rho_{33}>\frac{1}{2}\,S_{L}(\rho)$.} \noindent \section{Spontaneous Emission and Evolution of Entanglement} We study the time evolution of the system of two two-level atoms separated by a distance $R$ when we take into account only the dissipative process of spontaneous emission. The dynamics of such a system is given by the master equation \cite{Agar,FT1} \begin{equation} \frac{d\rho}{dt}=L_{D}\rho,\quad \rho\in \mathcal E_{AB} \end{equation} with the following generator $L_{D}$ \begin{equation} L_{D}\rho=\frac{1}{2}\sum\limits_{k,l=A,B}\gamma_{kl}\, (\,2\sa{-}{k}\rho\sa{+}{l}-\sa{+}{k}\sa{-}{l}\rho-\rho\sa{+}{k}\sa{-}{l}\,) \end{equation} where \begin{equation} \sa{\pm}{A}=\si{\pm}\otimes\mathbb I,\; \sa{\pm}{ B}=\mathbb I\otimes \si{\pm},\; \si{\pm}=\frac{1}{2}(\si{1}\pm i \si{2}) \end{equation} and $\gamma_{AA}=\gamma_{BB}=\gamma_{0},\;\gamma_{AB}=\gamma_{BA}=\gamma=g\gamma_{0}$. Here $\gamma_{0}$ is the single atom spontaneous emission rate, and $\gamma=g\gamma_{0}$ is a relaxation constant of photon exchange. In the model, $g$ is a function of the distance $R$ between the atoms and $g\to 1$ when $R\to 0$. In this section we investigate the time evolution of the initial density matrix $\rho$ of the compound system, governed by the semi - group $\{T_{t}\}_{t\geq 0}$ generated by $L_{D}$. In particular, we will study the time development of the entanglement of $\rho$, measured by concurrence. When $\gamma <\gamma_{0}$, the semi-group $\{T_{t}\}_{t\geq 0}$ is uniquely relaxing, with the asymptotic state $\ket{0}\otimes\ket{0}$. Thus, for any initial state $\rho$, the concurrence $C(\rho_{t})$ approaches $0$ when $t\to \infty$. But still there can be some transient entanglement between the atoms \cite{J,FT3,FT2}. In this section we study in detail the time evolution of a given initial state $\rho= (\rho_{jk})$. 
Direct calculations show that the state $\rho(t)$ at time $t$ has the following matrix elements with respect to the basis $\ket{1}\otimes \ket{1},\, \ket{1}\otimes\ket{0},\, \ket{0}\otimes\ket{1},\, \ket{0}\otimes\ket{0}$ \begin{equation*} \begin{split} \rho_{11}(t)=&e^{-2\gamma_{0} t}\rho_{11}\\[2mm] \rho_{12}(t)=&e^{-\frac{3}{2}\gamma_{0} t}\,\big(\rho_{12}\cosh \frac{\gamma t}{2}-\rho_{13}\sinh\frac{\gamma t}{2}\,\big)\\[2mm] \rho_{13}(t)=&e^{-\frac{3}{2}\gamma_{0} t}\,\big(\rho_{13}\cosh \frac{\gamma t}{2}-\rho_{12}\sinh\frac{\gamma t}{2}\,\big)\\[2mm] \rho_{14}(t)=&e^{-\gamma_{0} t}\rho_{14}\\[2mm] \rho_{22}(t)=&-e^{-2\gamma_{0} t}\frac{\gamma^{2}+\gamma_{0}^{2}}{\gamma_{0}^{2}-\gamma^{2}}\,\rho_{11}+ e^{-\gamma_{0} t}\bigg[ \frac{1}{2}(\rho_{22}-\rho_{33})+ \big(\frac{\gamma^{2}+\gamma_{0}^{2}}{\gamma_{0}^{2}-\gamma^{2}}\,\rho_{11}+\frac{1}{2}\,(\rho_{22}+\rho_{33})\big)\cosh \gamma t \\[2mm] &-\big(\frac{2\gamma \gamma_{0}}{\gamma_{0}^{2}-\gamma^{2}}\,\rho_{11}+\mathrm{Re}\,\rho_{23}\big)\sinh \gamma t\bigg ]\\[2mm] \rho_{33}(t)=&-e^{-2\gamma_{0} t}\frac{\gamma^{2}+\gamma_{0}^{2}}{\gamma_{0}^{2}-\gamma^{2}}\,\rho_{11}+ e^{-\gamma_{0} t}\bigg[ \frac{1}{2}(\rho_{33}-\rho_{22})+ \big(\frac{\gamma^{2}+\gamma_{0}^{2}}{\gamma_{0}^{2}-\gamma^{2}}\,\rho_{11}+\frac{1}{2}\,(\rho_{22}+\rho_{33})\big)\cosh \gamma t \\[2mm] &-\big(\frac{2\gamma \gamma_{0}}{\gamma_{0}^{2}-\gamma^{2}}\,\rho_{11}+\mathrm{Re}\,\rho_{23}\big)\sinh \gamma t\bigg ] \end{split} \end{equation*} \begin{equation*} \begin{split} \rho_{23}(t)=&-e^{-2\gamma_{0} t}\frac{2\gamma \gamma_{0}}{\gamma_{0}^{2}-\gamma^{2}}\,\rho_{11}+e^{-\gamma_{0} t}\,\bigg [ \big(\frac{2\gamma\gamma_{0}}{\gamma_{0}^{2}-\gamma^{2}}\cosh \gamma t-\frac{\gamma^{2}+\gamma_{0}^{2}}{\gamma_{0}^{2}-\gamma^{2}}\sinh \gamma t\big)\,\rho_{11}+ i\,\mathrm{Im}\,\rho_{23}+\mathrm{Re}\,\rho_{23}\cosh\gamma t\\[2mm] & -\frac{1}{2}(\rho_{22}+\rho_{33})\sinh \gamma t\bigg]\\[2mm] \rho_{24}(t)=&\big[e^{-\frac{3}{2}\gamma_{0} t}\,\big(\sinh \frac{\gamma t}{2}-\frac{\gamma}{\gamma_{0}} \cosh\frac{\gamma t}{2}\big)+e^{-\frac{\gamma_{0}}{2}t}\,\big(\frac{\gamma}{\gamma_{0}} \cosh\frac{\gamma t}{2}-\sinh\frac{\gamma t}{2}\big)\big]\rho_{12}+\\[2mm] &\big[e^{-\frac{3}{2}\gamma_{0} t}\,\big(\frac{\gamma}{\gamma_{0}}\sinh \frac{\gamma t}{2}- \cosh\frac{\gamma t}{2}\big)+e^{-\frac{\gamma_{0}}{2}t}\,\big( \cosh\frac{\gamma t}{2}-\frac{\gamma}{\gamma_{0}}\sinh\frac{\gamma t}{2}\big)\big]\rho_{13}+\\[2mm] &e^{-\frac{\gamma_{0}}{2}t}\big(\rho_{24}\, \cosh\frac{\gamma t}{2} -\rho_{34}\,\sinh\frac{\gamma t}{2}\big)\\[2mm] \rho_{34}(t)=&\big[e^{-\frac{3}{2}\gamma_{0} t}\,\big(\frac{\gamma}{\gamma_{0}}\sinh \frac{\gamma t}{2}- \cosh\frac{\gamma t}{2}\big)+e^{-\frac{\gamma_{0}}{2}t}\,\big( \cosh\frac{\gamma t}{2}-\frac{\gamma}{\gamma_{0}}\sinh\frac{\gamma t}{2}\big)\big]\rho_{12}+\\[2mm] &\big[e^{-\frac{3}{2}\gamma_{0} t}\,\big(\sinh \frac{\gamma t}{2}- \frac{\gamma}{\gamma_{0}}\cosh\frac{\gamma t}{2}\big)+e^{-\frac{\gamma_{0}}{2}t}\,\big( \frac{\gamma}{\gamma_{0}}\cosh\frac{\gamma t}{2}-\sinh\frac{\gamma t}{2}\big)\big]\rho_{13}+\\[2mm] &e^{-\frac{\gamma_{0}}{2}t}\big(\rho_{34}\, \cosh\frac{\gamma t}{2} -\rho_{24}\,\sinh\frac{\gamma t}{2}\big)\\[2mm] \rho_{44}(t)=&1+e^{-2\gamma_{0} t}\frac{\gamma_{0}^{2}+3\gamma^{2}}{\gamma_{0}^{2}-\gamma^{2}}\rho_{11}+e^{-\gamma_{0} t}\,\bigg[\big(\frac{4\gamma\gamma_{0}}{\gamma_{0}^{2}-\gamma^{2}}\sinh \gamma t-\frac{2(\gamma^{2}+\gamma_{0}^{2})}{\gamma_{0}^{2}-\gamma^{2}}\cosh \gamma t\big )\rho_{11}\\[2mm] &-(\rho_{22}+\rho_{33})\cosh\gamma t +2\,\mathrm{Re}\,\rho_{23}\,\sinh\gamma t\bigg] \end{split} \end{equation*} The remaining matrix elements can be obtained by the Hermiticity condition. One can simply check that the classes of states (12) and (22) are invariant with respect to the above time evolution. 
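These expressions can also be checked numerically. The following short Python sketch is only a minimal illustration (it assumes NumPy and SciPy, and the initial state and rates are merely illustrative): it builds the matrix of the generator $L_{D}$ of (25) acting on the column-stacked density matrix, propagates an initial state with the matrix exponential, and compares the resulting Wootters concurrence with the analytic expression for $\rho_{23}(t)$ given above.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Basis ordering |1>|1>, |1>|0>, |0>|1>, |0>|0> with |1> = (1,0)^T, |0> = (0,1)^T.
sm = np.array([[0., 0.], [1., 0.]])        # sigma_- = |0><1|
sp = sm.T                                  # sigma_+ = |1><0|
I2, I4 = np.eye(2), np.eye(4)
S = {('A', '-'): np.kron(sm, I2), ('A', '+'): np.kron(sp, I2),
     ('B', '-'): np.kron(I2, sm), ('B', '+'): np.kron(I2, sp)}

def liouvillian(gamma0, gamma):
    """Matrix of L_D, Eq. (25), using vec(A rho B) = kron(B.T, A) vec(rho)."""
    g = {('A', 'A'): gamma0, ('B', 'B'): gamma0, ('A', 'B'): gamma, ('B', 'A'): gamma}
    L = np.zeros((16, 16))
    for k in ('A', 'B'):
        for l in ('A', 'B'):
            anti = S[(k, '+')] @ S[(l, '-')]
            L += 0.5 * g[(k, l)] * (2.0 * np.kron(S[(l, '+')].T, S[(k, '-')])
                                    - np.kron(I4, anti) - np.kron(anti.T, I4))
    return L

def concurrence(rho):
    """Wootters concurrence, equivalent to Eq. (11)."""
    sy = np.array([[0., -1j], [1j, 0.]])
    YY = np.kron(sy, sy)
    ev = np.linalg.eigvals(rho @ (YY @ rho.conj() @ YY))
    lam = np.sqrt(np.sort(np.abs(ev.real))[::-1])
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

gamma0, gamma, phi = 1.0, 0.75, np.pi / 40        # illustrative parameters
psi = np.zeros(4, dtype=complex)
psi[1], psi[2] = np.cos(phi), -np.sin(phi)        # cos(phi)|1>|0> - sin(phi)|0>|1>
rho0 = np.outer(psi, psi.conj())
L = liouvillian(gamma0, gamma)
for t in np.linspace(0.0, 4.0, 9):
    rho_t = (expm(L * t) @ rho0.reshape(-1, order='F')).reshape((4, 4), order='F')
    analytic = 2 * np.exp(-gamma0 * t) * abs(rho0[1, 2].real * np.cosh(gamma * t)
               + 1j * rho0[1, 2].imag
               - 0.5 * (rho0[1, 1] + rho0[2, 2]).real * np.sinh(gamma * t))
    print(f"t = {t:4.1f}   numerical C = {concurrence(rho_t):.6f}"
          f"   analytic C = {analytic:.6f}")
\end{verbatim}
For initial states from the classes (12) or (22), for which the analytic concurrence formula below applies, the two values should coincide up to numerical accuracy.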
In particular, in both cases \begin{equation} \rho_{23}(t)=e^{-\gamma_{0} t}\,\big[\,\mathrm{Re}\, \rho_{23}\cosh\gamma t+i\,\mathrm{Im}\,\rho_{23}-\frac{1}{2}\,(\rho_{22}+\rho_{33})\sinh\gamma t\,\big] \end{equation} So we obtain the result:\\[2mm] \textit{For initial states} (12) \textit{or} (22) \textit{the concurrence at time $t$ is given by \begin{equation} C(\rho(t))=2\,e^{-\gamma_{0} t}\,\big|\,\big(\, \mathrm{Re}\, \rho_{23}\cosh\gamma t+i\,\mathrm{Im}\,\rho_{23}-\frac{1}{2}\,(\rho_{22}+\rho_{33})\sinh\gamma t\,\big)\big| \end{equation} where $\rho_{jk}$ are matrix elements of the initial state.}\\[2mm] Let us consider pure initial states $\Psi\in \mathbb C^{4}$ belonging to the class (12). The most general pure state of this type can be written as \begin{equation} \Psi=\cos\phi\cos\psi\,\ket{1}\otimes\ket{0}+ \sin\phi\cos\psi\,e^{i\Theta}\,\ket{0}\otimes\ket{1}+ \sin\psi\,e^{i\Xi}\,\ket{0}\otimes\ket{0} \end{equation} with $\phi,\,\psi\in [0,\frac{\pi}{2}],\; \Theta,\,\Xi\in [0,2\pi]$. Using (14) we see that concurrence of (29) is given by \begin{equation} C(\Psi)=\cos^{2}\psi\,\sin 2\phi \end{equation} By (28), the time evolution of this initial concurrence is described by the following function \begin{equation} C(\rho(t))=e^{-\gamma_{0} t}\,\cos^{2}\psi\, \big| \sin 2\phi\, \cos\Theta \,\cosh \gamma t -\sinh\gamma t -i \sin2\phi\,\sin \Theta\,\big| \end{equation} The function (31) is simple to analyse when $\phi=0$ or $\frac{\pi}{2}$. In this case \begin{equation} \Psi=e^{i\Theta}\,\cos\psi\,\ket{0}\otimes\ket{1}+ e^{i\Xi}\,\sin\psi\,\ket{0}\otimes\ket{0} \end{equation} and \begin{equation} C(\rho(t))=e^{-\gamma_{0} t}\, \cos^{2}\psi\,\sinh\gamma t \end{equation} From the formula (33) we see that initial concurrence equal to zero, increases in the time interval $[0,t_{\mathrm{max}}]$, where \begin{equation} t_{\mathrm{max}}=\frac{1}{2\gamma}\ln\frac{\gamma_{0}+\gamma}{\gamma_{0}-\gamma} \end{equation} to the maximal value \begin{equation} C_{\mathrm{max}}=\cos^{2}\psi\,\frac{\gamma}{\gamma_{0}-\gamma} \left(\frac{\gamma_{0}+\gamma}{\gamma_{0}-\gamma}\right)^{-\frac{\gamma_{0}+\gamma}{2\gamma}} \end{equation} and then asymptotically goes to zero (see \textbf{Fig. 1}).\\[8mm] \begin{picture}(300,300) \put(50,70){\begin{picture}(180,180) \epsffile{fj1.eps} \end{picture}} \put(350,75){$\gamma_{0} t$} \put(60,255){$C(\rho(t))$} \put(120,150){$0.5$} \put(130,180){$0.7$} \put(140,220){$0.9$} \end{picture} \vskip -18mm \noindent \centerline{\textbf{Fig. 1.} $C(\rho(t))$ for initial states (32) with $\psi=0$ and different values of $\gamma/\gamma_{0}$} \vskip 8mm \noindent If $\phi$ is arbitrary, we can put for simplicity $\psi=0$. Then \begin{equation} \Psi=\cos\phi \,\ket{1}\otimes\ket{0}+\sin\phi\,e^{i\Theta}\,\ket{0}\otimes\ket{1} \end{equation} and $C(\Psi)=\sin 2\phi$. The evolution of this initial concurrence is given by \begin{equation} C(\rho(t))=e^{-\gamma_{0} t}\,\big |\,\sin2\phi\,\cos\Theta\,\cosh\gamma t -\sinh\gamma t -i\,\sin2\phi\,\sin\Theta\,\big| \end{equation} Depending on values of $\phi$ and $\Theta$, the function (37) can be strictly decreasing to zero, or can have one maximal value for some $t$, or even can have one minimal and one maximal value. Let us discuss all these possibilities by choosing some special initial states from the class (36).\\[2mm] \textbf{a.} Let $\Theta=0$. 
Then \begin{equation} \Psi=\cos\phi\, \ket{1}\otimes\ket{0}+\sin\phi\,\ket{0}\otimes\ket{1} \end{equation} and \begin{equation} C(\rho(t))=e^{-\gamma_{0} t}\,\big|\, \sin 2\phi\, \cosh\gamma t -\sinh \gamma t\,\big| \end{equation} The function (39) is decreasing to zero in the interval $[0,t_{\mr{min}}]$ where \begin{equation} t_{\mr{min}}=\frac{1}{2\gamma}\,\ln\frac{1+\sin 2\phi}{1-\sin 2\phi} \end{equation} Then in the interval $[t_{\mr{min}},t_{\mr{max}}]$, with \begin{equation} t_{\mr{max}}=\frac{1}{2\gamma}\,\ln\frac{(1+\sin 2\phi)(\gamma_{0}+\gamma)}{(1-\sin 2\phi)(\gamma_{0}-\gamma)} \end{equation} (39) increases to the maximal value \begin{equation} C_{\mr{max}}=\frac{\gamma\,|\cos 2\phi|}{\sqrt{\gamma_{0}^{2}-\gamma^{2}}}\,\left(\frac{(1+\sin 2\phi)(\gamma_{0}+\gamma)}{(1-\sin 2\phi)(\gamma_{0}-\gamma)}\right)^{-\frac{\gamma_{0}}{2\gamma}} \end{equation} For $t>t_{\mr{max}}$, (39) goes asymptotically to $0$. For sufficiently small initial concurrence, $C_{\mr{max}}>C(\Psi)$, but for larger entanglement of the initial state the maximal entanglement produced during the evolution is smaller than $C(\Psi)$ (see \textbf{Fig. 2.} below). \\[8mm] \begin{picture}(300,300) \put(50,70){\begin{picture}(180,180) \epsffile{fj2.eps} \end{picture}} \put(350,75){$\gamma_{0} t$} \put(60,255){$C(\rho(t))$} \put(120,120){$\phi=\pi/20$} \put(140,165){$\phi=\pi/40$} \end{picture} \vskip -18mm \noindent \centerline{\textbf{Fig. 2.} $C(\rho(t))$ for initial states (38), with $\phi=\pi/40,\,\pi/20$ and $\gamma/\gamma_{0}=0.75$} \vskip 8mm \noindent \textbf{b.} Let $\Theta=\pi$. Then \begin{equation} \Psi=\cos\phi\,\ket{1}\otimes\ket{0}-\sin\phi\,\ket{0}\otimes\ket{1} \end{equation} and \begin{equation} C(\rho(t))=e^{-\gamma_{0} t}\, \big|\, \sin 2\phi\,\cosh\gamma t+\sinh\gamma t\,\big| \end{equation} If $\sin 2\phi\geq \frac{\gamma}{\gamma_{0}}$ then the function (44) is monotonically decreasing to $0$. On the other hand, if $\sin 2\phi <\frac{\gamma}{\gamma_{0}}$ then at time \begin{equation} t_{\mr{max}}=\frac{1}{2\gamma}\,\ln\frac{(1-\sin 2\phi)(\gamma_{0}+\gamma)}{(1+\sin 2\phi)(\gamma_{0}-\gamma)} \end{equation} (44) attains a local maximum \begin{equation} C_{\mr{max}}=\frac{\gamma\,|\cos 2\phi|}{\sqrt{\gamma_{0}^{2}-\gamma^{2}}}\,\left(\frac{(1-\sin 2\phi)(\gamma_{0}+\gamma)}{(1+\sin 2\phi)(\gamma_{0}-\gamma)}\right)^{-\frac{\gamma_{0}}{2\gamma}} \end{equation} $C_{\mr{max}}$ is always greater than the initial concurrence $C(\Psi)$ and \begin{equation} C_{\mr{max}}\to\frac{1+\sin 2\phi}{2}\quad\text{when}\quad \gamma\to\gamma_{0} \end{equation} and \begin{equation} C_{\mr{max}}\to \frac{\gamma}{\gamma_{0}}\quad\text{when}\quad \sin 2\phi\to \frac{\gamma}{\gamma_{0}} \end{equation} Thus, for entangled pure initial states (43) the dissipative process of spontaneous emission increases entanglement, provided that the initial entanglement was smaller than $\frac{\gamma}{\gamma_{0}}$ (see \textbf{Fig. 3}).\\[8mm] \noindent \begin{picture}(300,300) \put(50,70){\begin{picture}(180,180) \epsffile{fj3.eps} \end{picture}} \put(350,75){$\gamma_{0} t$} \put(60,255){$ C(\rho(t))$} \put(85,112){$0.1$} \put(82,145){$0.4$} \put(79,180){$0.7$} \put(85,210){$1$} \end{picture} \vskip -18mm \noindent \centerline{\textbf{Fig. 3.} $C(\rho(t))$ for initial states (43) with $C(\Psi)=0.1,\,0.4,\,0.7,\,1$ and $\gamma/\gamma_{0}=0.75$} \vskip 8mm \noindent \textbf{c.} Let $\Theta=\pi/2$. 
Then \begin{equation} \Psi=\cos\phi\,\ket{1}\otimes\ket{0}+i\,\sin\phi\,\ket{0}\otimes\ket{1} \end{equation} and \begin{equation} C(\rho(t))=e^{-\gamma_{0} t}\, \big|\, \sinh\gamma t +i\,\sin 2\phi\,\big| \end{equation} One can show that if $|\sin 4\phi|<\gamma/\gamma_{0}$ then (50) achieves local minimum at \begin{equation} t_{\mr{min}}=\frac{1}{2\gamma}\,\ln\frac{\gamma_{0}\cos 4\phi -\sqrt{\gamma^{2}-\gamma_{0}^{2}\sin^{2} 4\phi}}{\gamma_{0}-\gamma} \end{equation} and local maximum at \begin{equation} t_{\mr{max}}=\frac{1}{2\gamma}\,\ln\frac{\gamma_{0}\cos 4\phi +\sqrt{\gamma^{2}-\gamma_{0}^{2}\sin^{2} 4\phi}}{\gamma_{0}-\gamma} \end{equation} with the corresponding values of concurrence \begin{equation} C_{\mr{min}}=\left(\frac{\gamma_{0}\cos 4\phi -\sqrt{\gamma^{2}-\gamma_{0}^{2}\sin^{2} 4\phi}}{\gamma_{0}-\gamma}\right)^{-\frac{\gamma_{0}}{2\gamma}}\,\sqrt{\frac{\gamma^{2}\cos 4\phi -\gamma\,\sqrt{\gamma^{2}-\gamma_{0}^{2}\sin^{2} 4\phi}}{2(\gamma_{0}^{2}-\gamma^{2})}} \end{equation} and \begin{equation} C_{\mr{max}}=\left(\frac{\gamma_{0}\cos 4\phi +\sqrt{\gamma^{2}-\gamma_{0}^{2}\sin^{2} 4\phi}}{\gamma_{0}-\gamma}\right)^{-\frac{\gamma_{0}}{2\gamma}}\,\sqrt{\frac{\gamma^{2}\cos 4\phi +\gamma\,\sqrt{\gamma^{2}-\gamma_{0}^{2}\sin^{2} 4\phi}}{2(\gamma_{0}^{2}-\gamma^{2})}} \end{equation} For other cases, the function (50) is monotonically decreasing to $0$ (see \textbf{Fig. 4.} below). \vskip 8mm \noindent \begin{picture}(300,300) \put(50,70){\begin{picture}(180,180) \epsffile{fj4.eps} \end{picture}} \put(350,75){$\gamma_{0} t$} \put(60,255){$ C(\rho(t))$} \put(90,110){$\phi=\pi/20$} \put(110,160){$\phi=\pi/8$} \end{picture} \vskip -18mm \noindent \centerline{\textbf{Fig. 4.} $C(\rho(t))$ for initial states (49) with $\phi=\pi/20,\,\pi/40$ and $\gamma/\gamma_{0}=0.75$} \vskip 8mm \noindent \noindent \section{Evolution of Nonlocality} Nonlocality of quantum theory manifesting by violation of Bell inequalities is strictly connected with the existence of entangled states. It is known that every pure entangled state violates some Bell inequality (see e.g. \cite{Gisin}). But for mixed entangled states it is no longer true \cite{Werner}. So it is interesting to discuss how dissipative process of spontaneous emission influences nonlocal properties of initial states. For simplicity we restrict the class of states considered below to density matrices of the form (22). For that class we can apply the results of Sect. 2.2. \par \noindent As the initial states we take the states (43). Note that at time $t$, the state $\rho(t)$ will have the form (22). The initial entanglement is non-zero, so violation of some Bell - CHSH inequality occurs at time $t=0$. What happens during the evolution? Consider the inequality \begin{equation} \rho_{22}(t)\rho_{33}(t)>\frac{1}{2}S_{L}(\rho(t)) \end{equation} which is sufficient to nonlocality of the state $\rho(t)$. Observe that $S_{L}(\rho(0))=0$ and $S_{L}(\rho(t))$ is increasing in some time - interval. On the other hand, $$ \rho_{22}(0)\rho_{33}(0)\geq |\rho_{23}(0)|^{2}>0 $$ and $\rho_{22}(t)\rho_{33}(t)$ asymptotically goes to $0$, so there is some non - empty interval $0\leq t <t_{1}$ for which the inequality (55) is satisfied. Thus for $0\leq t <t_{1}$, all states $\rho(t)$ will still have nonlocal properties. We may also introduce the time $t_{\mr{n}}$ after which nonlocality is lost. 
To this end, besides $t_{1}$, consider $t_{2}$ such that $$ |\rho_{23}(t_{2})|=\frac{1}{2\sqrt{2}} $$ Then \begin{equation} t_{\mr{n}}=\max\, (t_{1},t_{2}) \end{equation} By the results of Sect. 2.2, all states $\rho(t)$ for $t\geq t_{\mr{n}}$ will admit a local hidden variable model. To illustrate this concept, consider as initial states $\Psi^{+}$ and $\Psi^{-}$ (the symmetric and antisymmetric states) $$ \Psi^{\pm}=\frac{1}{\sqrt{2}}\,\left( \ket{1}\otimes\ket{0}\pm\ket{0}\otimes\ket{1}\right) $$ Let $\rho^{\pm}(t)$ denote the corresponding density matrices at time $t$. Then $$ |\rho_{23}^{\pm}(t)|=\frac{1}{2}\,e^{-(\gamma_{0} \pm\gamma)t} $$ and $$ \rho_{22}^{\pm}(t)\rho_{33}^{\pm}(t)=\frac{1}{4}\,e^{-2(\gamma_{0}\pm\gamma)t},\quad S_{L}(\rho^{\pm}(t))=2\left(e^{-(\gamma_{0}\pm\gamma)t}-e^{-2(\gamma_{0}\pm\gamma)t}\right) $$ We see that $$ t_{1}=\frac{1}{\gamma_{0}\pm \gamma}\;\ln\frac{5}{4},\quad t_{2}=\frac{1}{\gamma_{0}\pm \gamma}\;\frac{\ln 2}{2} $$ Thus $$ t_{\mr{n}}=\frac{1}{\gamma_{0}\pm \gamma}\;\frac{\ln 2}{2} $$ Note that for the antisymmetric state $\Psi^{-}$ and $\gamma$ close to $\gamma_{0}$, $t_{\mr{n}}$ goes to infinity. \par \noindent It is also interesting to study in more detail the time evolution of the measure of nonlocality, which may be defined as follows \begin{equation} n(\rho)=\max\, (\,0,\,m(\rho)-1\,) \end{equation} As is well known, $m(\rho)\leq 2$, so $0\leq n(\rho)\leq 1$, and a larger value of $n(\rho)$ means violation of the CHSH inequality to a larger extent. Since $m(\rho)=2$ for maximally entangled pure states, which maximally violate the CHSH inequality, for them $n(\rho)=1$. To obtain an analytic expression for $n(\rho(t))$ we can utilize formula (23). Formula (23) is further simplified if we take such initial states that \begin{equation} |\rho_{23}(t)|<\frac{1}{2\sqrt{2}} \end{equation} for all $t$. This condition can be achieved if $$ \frac{\gamma}{\gamma_{0}}<\frac{1}{\sqrt{2}} $$ Then inequality (58) is satisfied and $m(\rho(t))>1$ if and only if \begin{equation} (1-2\rho_{44}(t))^{2}+C^{2}(\rho(t))>1 \end{equation} and to study the time evolution of nonlocality, we only need to know the time-dependence of the left hand side of (59). If we take an initial state with $\sin 2\phi<\frac{\gamma}{\gamma_{0}}<\frac{1}{\sqrt{2}}$, then during the time evolution $C(\rho(t))$ increases to the maximal value $C_{\mr{max}}<\frac{1}{\sqrt{2}}$ in the interval $[0,t_{\mr{max}}]$. At the same time, \begin{equation} (1-2\rho_{44})^{2}=\left(\,2e^{-\gamma_{0} t}\,( \cosh\gamma t+\sin 2\phi\,\sinh\gamma t) -1\,\right)^{2} \end{equation} decreases so fast that the left hand side of (59) is a decreasing function of $t$ (see \textbf{Fig. 5, 6} below). Thus we obtain the result:\\[2mm] \textit{For initial states} (43) \textit{ and $\sin 2\phi<\frac{\gamma}{\gamma_{0}}<\frac{1}{\sqrt{2}}$, nonlocality of $\rho(t)$ given by $n(\rho(t))$ decreases during the time evolution even if the entanglement increases.} \\[4mm] \begin{picture}(300,300) \put(50,70){\begin{picture}(180,180) \epsffile{fj5.eps} \end{picture}} \put(350,75){$\gamma_{0} t$} \end{picture} \vskip -18mm \noindent \centerline{\textbf{Fig. 5.} $C(\rho(t))$ (dotted line) and $(1-2\rho_{44}(t))^{2}$ (solid line) for initial state (43) with $C(\Psi)=0.3$ and $\gamma/\gamma_{0}=0.7$}\vskip 4mm \noindent \begin{picture}(300,300) \put(50,70){\begin{picture}(180,180) \epsffile{fj6.eps} \end{picture}} \put(350,75){$\gamma_{0} t$} \put(60,255){$ n(\rho(t))$} \end{picture} \vskip -18mm \noindent \centerline{\textbf{Fig. 
6.} $n(\rho(t))$ for initial state (43) with $C(\Psi)=0.3$ and $\gamma/\gamma_{0}=0.7$} \vskip 4mm \noindent Numerical analysis indicates that for other initial states (43) for which $m(\rho(t))$ is defined by the whole expression (23), the nonlocality $n(\rho(t))$ also monotonically goes to $0$ irrespective of the evolution of entanglement (see \textbf{Fig. 7} below). \vskip 4mm \noindent \begin{picture}(300,300) \put(50,70){\begin{picture}(180,180) \epsffile{fj7.eps} \end{picture}} \put(350,75){$\gamma_{0} t$} \put(60,255){$ n(\rho(t))$} \put(110,130){$0.9$} \put(90,110){$0.8$} \put(130,145){$1.0$} \end{picture} \vskip -18mm \noindent \centerline{\textbf{Fig. 7.} $n(\rho(t))$ for initial states (43) with $C(\Psi)=0.8,\,0.9,\,1.0$ and $\gamma/\gamma_{0}=0.7$} \vskip 8mm \noindent \noindent \end{document}
\begin{document} \maketitle \begin{abstract} We produce the family of Calabi-Yau hypersurfaces $X_{n}$ of $(\mathbb P^{1})^{n+1}$ in higher dimension whose inertia group contains non commutative free groups. This is completely different from Takahashi's result \cite{ta98} for Calabi-Yau hypersurfaces $M_{n}$ of $\mathbb P^{n+1}$. \end{abstract} \section{Introduction} Throughout this paper, we work over $\mathbb C$. Given an algebraic variety $X$, it is natural to consider its birational automorphisms $\varphi \colon X \dashrightarrow X$. The set of these birational automorphisms forms a group $\operatorname{Bir}(X)$ with respect to the composition. When $X$ is a projective space $\mathbb P^{n}$ or equivalently an $n$-dimensional rational variety, this group is called the Cremona group. In higher dimensional case ($n \geq 3$), though many elements of the Cremona group have been described, its whole structure is little known. Let $V$ be an $(n+1)$-dimensional smooth projective rational manifold. In this paper, we treat subgroups called the ``inertia group" (defined below \eqref{inertia}) of some hypersurface $X \subset V$ originated in \cite{gi94}. It consists of those elements of the Cremona group that act on $X$ as identity. In Section \ref{cyn}, we mention the result (Theorem \ref{tak}) of Takahashi \cite{ta98} about the smooth Calabi-Yau hypersurfaces $M_{n}$ of $\mathbb P^{n+1}$ of degree $n+2$ (that is, $M_{n}$ is a hypersurface such that it is simply connected, there is no holomorphic $k$-form on $M_{n}$ for $0<k<n$, and there is a nowhere vanishing holomorphic $n$-form $\omega_{M_{n}}$). It turns out that the inertia group of $M_{n}$ is trivial (Theorem \ref{intro2}). Takahashi's result (Theorem \ref{tak}) is proved by using the ``Noether-Fano inequality". It is the useful result that tells us when two Mori fiber spaces are isomorphic. Theorem \ref{intro2} is a direct consequence of Takahashi's result. In Section \ref{cy1n}, we consider Calabi-Yau hypersurfaces \[ X_{n} = (2, 2, \ldots , 2) \subset (\mathbb P^{1})^{n+1}. \] Let \[ \operatorname{UC}(N) \coloneqq \overbrace{\mathbb Z/2\mathbb Z * \mathbb Z/2\mathbb Z * \cdots * \mathbb Z/2\mathbb Z}^{N} = \bigast_{i=1}^{N}\langle t_{i}\rangle \] be the \textit{universal Coxeter group} of rank $N$ where $\mathbb Z/2\mathbb Z$ is the cyclic group of order 2. There is no non-trivial relation between its $N$ natural generators $t_{i}$. Let \[ p_{i} \colon X_{n} \to (\mathbb P^{1})^{n}\ \ \ (i=1, \ldots , n+1) \] be the natural projections which are obtained by forgetting the $i$-th factor of $(\mathbb P^{1})^{n+1}$. Then, the $n+1$ projections $p_{i}$ are generically finite morphism of degree 2. Thus, for each index $i$, there is a birational transformation \[ \iota_{i} \colon X_{n} \dashrightarrow X_{n} \] that permutes the two points of general fibers of $p_{i}$ and this provides a group homomorphism \[ \Phi \colon \operatorname{UC}(n+1) \to \operatorname{Bir}(X_{n}). \] From now, we set $P(n+1) \coloneqq (\mathbb P^{1})^{n+1}$. Cantat-Oguiso proved the following theorem in \cite{co11}. \begin{thm}$($\cite[Theorem 1.3 (2)]{co11}$)$\label{iota} Let $X_{n}$ be a generic hypersurface of multidegree $(2,2,\ldots,2)$ in $P(n+1)$ with $n \geq 3$. Then the morphism $\Phi$ that maps each generator $t_{j}$ of $\operatorname{UC}(n+1)$ to the involution $\iota_{j}$ of $X_{n}$ is an isomorphism from $\operatorname{UC}(n+1)$ to $\operatorname{Bir}(X_{n})$. 
\end{thm} Here ``generic'' means $X_{n}$ belongs to the complement of some countable union of proper closed subvarieties of the complete linear system $\big| (2, 2, \ldots , 2)\big|$. Let $X \subset V$ be a projective variety. The \textit{decomposition group} of $X$ is the group \begin{align*} \operatorname{Dec}(V, X) \coloneqq \{f \in \operatorname{Bir}(V)\ |\ f(X) =X \text{ and } f|_{X} \in \operatorname{Bir}(X) \}. \end{align*} The \textit{inertia group} of $X$ is the group \begin{align}\label{inertia} \operatorname{Ine}(V, X) \coloneqq \{f \in \operatorname{Dec}(V, X)\ |\ f|_{X} = \operatorname{id}_{X}\}. \end{align} Then it is natural to consider the following question: \begin{que}\label{qu} Is the sequence \begin{align}\label{se} 1 \longrightarrow \operatorname{Ine}(V, X) \longrightarrow \operatorname{Dec}(V, X) \overset{\gamma}{\longrightarrow} \operatorname{Bir}(X) \longrightarrow 1 \end{align} exact, i.e., is $\gamma$ surjective? \end{que} Note that, in general, this sequence is not exact, i.e., $\gamma$ is not surjective (see Remark \ref{k3}). When the sequence \eqref{se} is exact, the group $\operatorname{Ine}(V, X)$ measures how many ways one can extend $\operatorname{Bir}(X)$ to the birational automorphisms of the ambient space $V$. Our main result is following theorem, answering a question asked by Ludmil Katzarkov: \begin{thm}\label{intro} Let $X_{n} \subset P(n+1)$ be a smooth hypersurface of multidegree $(2, 2, \ldots, 2)$ and $n \geq 3$. Then: \begin{itemize} \item[(1)] $\gamma \colon \operatorname{Dec}(P(n+1), X_{n}) \to \operatorname{Bir}(X_{n})$ is surjective, in particular Question $\ref{qu}$ is affirmative for $X_{n}$. \item[(2)] If, in addition, $X_{n}$ is generic, there are $n+1$ elements $\rho_{i}$ $(1 \leq i \leq n+1)$ of $\operatorname{Ine}(P(n+1), X_{n})$ such that \[ \langle \rho_{1}, \rho_{2}, \ldots , \rho_{n+1} \rangle \simeq \underbrace{\mathbb Z * \mathbb Z * \cdots * \mathbb Z}_{n+1} \subset \operatorname{Ine}(P(n+1), X_{n}). \] In particular, $\operatorname{Ine}(P(n+1), X_{n})$ is an infinite non-commutative group. \end{itemize} \end{thm} Our proof of Theorem \ref{intro} is based on an explicit computation of elementary flavour. We also consider another type of Calabi-Yau manifolds, namely smooth hypersurfaces of degree $n+2$ in $\mathbb P^{n+1}$ and obtain the following result: \begin{thm}\label{intro2} Suppose $n \geq 3$. Let $M_{n} = (n+2) \subset \mathbb P^{n+1}$ be a smooth hypersurface of degree $n+2$. Then Question $\ref{qu}$ is also affirmative for $M_{n}$. More precisely: \begin{itemize} \item[(1)] $\operatorname{Dec}(\mathbb P^{n+1}, M_{n}) = \{ f \in \operatorname{PGL}(n+2, \mathbb C) = \operatorname{Aut}(\mathbb P^{n+1})\ |\ f(M_{n}) = M_{n}\}$. \item[(2)] $\operatorname{Ine}(\mathbb P^{n+1}, M_{n}) = \{\operatorname{id}_{\mathbb P^{n+1}}\}$, and $\gamma \colon \operatorname{Dec}(\mathbb P^{n+1}, M_{n}) \overset{\simeq}{\longrightarrow} \operatorname{Bir}(M_{n}) = \operatorname{Aut}(M_{n})$. \end{itemize} \end{thm} It is interesting that the inertia groups of $X_{n} \subset P(n+1) = (\mathbb P^{1})^{n+1}$ and $M_{n} \subset \mathbb P^{n+1}$ have completely different structures though both $X_{n}$ and $M_{n}$ are Calabi-Yau hypersurfaces in rational Fano manifolds. \begin{rem}\label{k3} There is a smooth quartic $K3$ surface $M_{2} \subset \mathbb P^{3}$ such that $\gamma$ is not surjective (see \cite[Theorem 1.2 (2)]{og13}). In particular, Theorem \ref{intro2} is not true for $n = 2$. 
\end{rem} \section{Preliminaries} In this section, we prepare some definitions and properties of birational geometry and introduce the Cremona group. \subsection{Divisors and singularities} Let $X$ be a projective variety. A \textit{prime divisor} on $X$ is an irreducible subvariety of codimension one, and a \textit{divisor} (resp. \textit{$\mathbb Q$-divisor} or \textit{$\mathbb R$-divisor}) on $X$ is a formal linear combination $D = \sum d_{i}D_{i}$ of prime divisors where $d_{i} \in \mathbb Z$ (resp. $\mathbb Q$ or $\mathbb R$). A divisor $D$ is called \textit{effective} if $d_{i}$ $\geq$ 0 for every $i$ and denote $D \geq 0$. The closed set $\bigcup_{i}D_{i}$ of the union of prime divisors is called the \textit{support} of $D$ and denote Supp$(D)$. A $\mathbb Q$-divisor $D$ is called \textit{$\mathbb Q$-Cartier} if, for some $0 \neq m \in \mathbb Z$, $mD$ is a Cartier divisor (i.e. a divisor whose divisorial sheaf $\mathcal O_{X}(mD)$ is an invertible sheaf), and $X$ is called $\mathbb Q$-\textit{factorial} if every divisor is $\mathbb Q$-Cartier. Note that, since the regular local ring is the unique factorization domain, every divisor automatically becomes the Cartier divisor on the smooth variety. Let $f \colon X \dashrightarrow Y$ be a birational map between normal projective varieties, $D$ a prime divisor, and $U$ the domain of definition of $f$; that is, the maximal subset of $X$ such that there exists a morphism $f \colon U \to Y$. Then $\operatorname{codim}(X\setminus U) \geq 2$ and $D \cap U \neq \emptyset$, the image $(f|_{U})(D \cap U)$ is a locally closed subvariety of $Y$. If the closure of that image is a prime divisor of $Y$, we call it the \textit{strict transform} of $D$ (also called the \textit{proper transform} or \textit{birational transform}) and denote $f_{*}D$. We define $f_{*}D = 0$ if the codimension of the image $(f|_{U})(D \cap U)$ is $\geq$ 2 in $Y$. We can also define the strict transform $f_{*}Z$ for subvariety $Z$ of large codimension; if $Z \cap U \neq \emptyset$ and dimension of the image $(f|_{U})(Z \cap U)$ is equal to $\dim Z$, then we define $f_{*}Z$ as the closure of that image, otherwise $f_{*}Z$ = 0. Let $(X, D)$ is a \textit{log pair} which is a pair of a normal projective variety $X$ and a $\mathbb R$-divisor $D \geq 0$. For a log pair $(X, D)$, it is more natural to consider a \textit{log canonical divisor} $K_{X} + D$ instead of a canonical divisor $K_{X}$. A projective birational morphism $g \colon Y \to X$ is a \textit{log resolution} of the pair $(X, D)$ if $Y$ is smooth, $\operatorname{Exc}(g)$ is a divisor, and $g_{*}^{-1}(D) \cup \operatorname{Exc}(g)$ has simple normal crossing support (i.e. each components is a smooth divisor and all components meet transversely) where $\operatorname{Exc}(g)$ is an exceptional set of $g$, and a divisor $over$ $X$ is a divisor $E$ on some smooth variety $Y$ endowed with a proper birational morphism $g \colon Y \to X$. If we write \[ K_{Y} + \Gamma + \sum E_{i} = g^{*}(K_{X}+D) + \sum a_{E_{i}}(X, D)E_{i}, \] where $\Gamma$ is the strict transform of $D$ and $E_{i}$ runs through all prime exceptional divisors, then the numbers $a_{E_{i}}(X, D)$ is called the \textit{discrepancies of $(X, D)$ along $E_{i}$}. The \textit{discrepancy of} $(X, D)$ is given by \[ \operatorname{discrep}(X, D) \coloneqq \inf\{ a_{E_{i}}(X, D)\ |\ E_{i} \text{ is a prime exceptional divisor over } X\}. 
\] The discrepancy $a_{E_{i}}(X, D)$ along $E_{i}$ is independent of the choice of birational maps $g$ and only depends on $E_{i}$. Let us denote $\operatorname{discrep}(X, D) = a_{E}$. A pair $(X, D)$ is \textit{log canonical} (resp. \textit{Kawamata log terminal} ($klt$)) if $a_{E} \geq 0$ (resp. $a_{E} > 0$). A pair $(X, D)$ is \textit{canonical} (resp. \textit{terminal}) if $a_{E} \geq 1$ (resp. $a_{E} > 1$). \subsection{Cremona groups} Let $n$ be a positive integer. The \textit{Cremona group} $\operatorname{Cr}(n)$ is the group of automorphisms of $\mathbb C(X_{1}, \ldots, X_{n})$, the $\mathbb C$-algebra of rational functions in $n$ independent variables. Given $n$ rational functions $F_{i} \in \mathbb C(X_{1}, \ldots, X_{n})$, $1 \leq i \leq n$, there is a unique endomorphism of this algebra maps $X_{i}$ onto $F_{i}$ and this is an automorphism if and only if the rational transformation $f$ defined by $f(X_{1}, \ldots, X_{n}) = (F_{1}, \ldots, F_{n})$ is a birational transformation of the affine space $\mathbb A^{n}$. Compactifying $\mathbb A^{n}$, we get \[ \operatorname{Cr}(n) = \operatorname{Bir}(\mathbb A^{n}) = \operatorname{Bir}(\mathbb P^{n}) \] where Bir$(X)$ denotes the group of all birational transformations of $X$. In the end of this section, we define two subgroups in $\operatorname{Cr}(n)$ introduced by Gizatullin \cite{gi94}. \begin{dfn} Let $V$ be an $(n+1)$-dimensional smooth projective rational manifold and $X \subset V$ a projective variety. The \textit{decomposition group} of $X$ is the group \[ \operatorname{Dec}(V, X) \coloneqq \{f \in \operatorname{Bir}(V)\ |\ f(X) =X \text{ and } f|_{X} \in \operatorname{Bir}(X) \}. \] The \textit{inertia group} of $X$ is the group \[ \operatorname{Ine}(V, X) \coloneqq \{f \in \operatorname{Dec}(V, X)\ |\ f|_{X} = \operatorname{id}_{X}\}. \] \end{dfn} The decomposition group is also denoted by Bir$(V, X)$. By the definition, the correspondence \[ \gamma \colon \operatorname{Dec}(V, X) \ni f \mapsto f|_{X} \in \operatorname{Bir}(X) \] defines the exact sequence: \begin{align}\label{seq} 1 \longrightarrow \operatorname{Ine}(V, X) = \ker \gamma \longrightarrow \operatorname{Dec}(V, X) \overset{\gamma}{\longrightarrow} \operatorname{Bir}(X). \end{align} So, it is natural to consider the following question (which is same as Question \ref{qu}) asked by Ludmil Katzarkov: \begin{que}\label{qexact} Is the sequence \begin{align}\label{exact} 1 \longrightarrow \operatorname{Ine}(V, X) \longrightarrow \operatorname{Dec}(V, X) \overset{\gamma}{\longrightarrow} \operatorname{Bir}(X) \longrightarrow 1 \end{align} exact, i.e., is $\gamma$ surjective? \end{que} \begin{rem} In general, the above sequence \eqref{exact} is not exact, i.e., $\gamma$ is not surjective. In fact, there is a smooth quartic $K3$ surface $M_{2} \subset \mathbb P^{3}$ such that $\gamma$ is not surjective (\cite[Theorem 1.2 (2)]{og13}). \end{rem} \section{Calabi-Yau hypersurface in $\mathbb P^{n+1}$}\label{cyn} Our goal, in this section, is to prove Theorem \ref{intro2} (i.e. Theorem \ref{ta}). Before that, we introduce the result of Takahashi \cite{ta98}. \begin{dfn} Let $X$ be a normal $\mathbb Q$-factorial projective variety. The 1\textit{-cycle} is a formal linear combination $C = \sum a_{i}C_{i}$ of proper curves $C_{i} \subset X$ which are irreducible and reduced. 
By the theorem of the base of N\'eron-Severi (see \cite{kl66}), the group of numerical equivalence classes of 1-cycles with real coefficients is a finite-dimensional $\mathbb R$-vector space, denoted $N_{1}(X)$. The dimension of $N_{1}(X)$, or of its dual $N^{1}(X)$ with respect to the intersection form, is called the \textit{Picard number} and is denoted $\rho(X)$. \end{dfn} \begin{thm}$($\cite[Theorem 2.3]{ta98}$)$\label{tak} Let $X$ be a Fano manifold $($i.e. a manifold whose anti-canonical divisor $-K_{X}$ is ample$)$ with $\dim X \geq 3$ and $\rho(X) = 1$, and let $S \in |-K_{X}|$ be a smooth hypersurface with $\operatorname{Pic}(X) \to \operatorname{Pic}(S)$ surjective. Let $\Phi \colon X \dashrightarrow X'$ be a birational map to a $\mathbb Q$-factorial terminal variety $X'$ with $\rho(X') = 1$ which is not an isomorphism, and $S' = \Phi_{*}S$. Then $K_{X'} + S'$ is ample. \end{thm} This theorem is proved by using the \textit{Noether-Fano inequality}, one of the most important tools in birational geometry: it gives a precise bound on the singularities of the indeterminacy locus of a birational map and conditions under which the map becomes an isomorphism. This inequality is essentially due to \cite{im71}, and Corti proved the general case of an arbitrary Mori fiber space of dimension three \cite{co95}. It was extended to all dimensions in \cite{ta95}, \cite{bm97}, \cite{is01}, and \cite{df02} (see also \cite{ma02}). In particular, a generalized log version, obtained independently in \cite{bm97} and \cite{ta95}, is used in the proof of Theorem \ref{tak}. Next, we recall the notion of an $n$-dimensional \textit{Calabi-Yau manifold} $X$: it is a simply connected projective manifold with \[ H^{0}(X, \Omega_{X}^{i}) = 0\ \ \ (0<i<\dim X = n),\ \ \textrm{and \ } H^{0}(X, \Omega_{X}^{n}) = \mathbb C \omega_{X}, \] where $\omega_{X}$ is a nowhere vanishing holomorphic $n$-form. The following theorem, which is the same as Theorem \ref{intro2}, is a consequence of Theorem \ref{tak}. It provides an example of a Calabi-Yau hypersurface $M_{n}$ whose inertia group consists only of the identity transformation. \begin{thm}\label{ta} Suppose $n \geq 3$. Let $M_{n} = (n+2) \subset \mathbb P^{n+1}$ be a smooth hypersurface of degree $n+2$. Then $M_{n}$ is a Calabi-Yau manifold of dimension $n$ and Question $\ref{qexact}$ is affirmative for $M_{n}$. More precisely: \begin{itemize} \item[(1)] $\operatorname{Dec}(\mathbb P^{n+1}, M_{n}) = \{ f \in \operatorname{PGL}(n+2, \mathbb C) = \operatorname{Aut}(\mathbb P^{n+1})\ |\ f(M_{n}) = M_{n}\}$. \item[(2)] $\operatorname{Ine}(\mathbb P^{n+1}, M_{n}) = \{\operatorname{id}_{\mathbb P^{n+1}}\}$, and $\gamma \colon \operatorname{Dec}(\mathbb P^{n+1}, M_{n}) \overset{\simeq}{\longrightarrow} \operatorname{Bir}(M_{n}) = \operatorname{Aut}(M_{n})$. \end{itemize} \end{thm} \begin{proof} By the Lefschetz hyperplane section theorem, for $n \geq 3$ we have $\pi_{1}(M_{n}) \simeq \pi_{1}(\mathbb P^{n+1}) = \{\operatorname{id}\}$ and $\operatorname{Pic}(M_{n}) = \mathbb Z h$, where $h$ is the hyperplane class. By the adjunction formula, \[ K_{M_{n}} = (K_{\mathbb P^{n+1}} + M_{n})|_{M_{n}} = -(n+2)h + (n+2)h = 0 \] in Pic$(M_{n})$.
By the exact sequence \[ 0 \longrightarrow \mathcal O_{\mathbb P^{n+1}}(-(n+2)) \longrightarrow \mathcal O_{\mathbb P^{n+1}} \longrightarrow \mathcal O_{M_{n}} \longrightarrow 0 \] and \[ h^{k}(\mathcal O_{\mathbb P^{n+1}}(-(n+2))) = 0\ \ \text{for}\ \ 1 \leq k \leq n, \] we obtain \[ H^{k}(\mathcal O_{M_{n}}) \simeq H^{k}(\mathcal O_{\mathbb P^{n+1}}) = 0\ \ \text{for}\ \ 1 \leq k \leq n-1. \] Hence $H^{0}(\Omega^{k}_{M_{n}}) = 0$ for $1 \leq k \leq n-1$ by the Hodge symmetry, and therefore $M_{n}$ is a Calabi-Yau manifold of dimension $n$. Since $\operatorname{Pic}(M_{n}) = \mathbb Z h$, there is no small projective contraction of $M_{n}$; in particular, $M_{n}$ admits no flop. Thus by Kawamata \cite{ka08}, we get $\operatorname{Bir}(M_{n}) = \operatorname{Aut}(M_{n})$, and $g^{*}h = h$ for every $g \in \operatorname{Aut}(M_{n}) = \operatorname{Bir}(M_{n})$. So we have $g = \tilde{g}|_{M_{n}}$ for some $\tilde{g} \in \operatorname{PGL}(n+2, \mathbb C)$. Assume that $f \in \operatorname{Dec}(\mathbb P^{n+1}, M_{n})$. Then $f_{*}(M_{n}) = M_{n}$ and $K_{\mathbb P^{n+1}} + M_{n} = 0$. Thus by Theorem \ref{tak}, $f \in \operatorname{Aut}(\mathbb P^{n+1}) = \operatorname{PGL}(n+2, \mathbb C)$. This proves (1) and the surjectivity of $\gamma$. Now let $f|_{M_{n}} = \operatorname{id}_{M_{n}}$ for some $f \in \operatorname{Dec}(\mathbb P^{n+1}, M_{n})$. Since $f \in \operatorname{PGL}(n+2, \mathbb C)$ by (1) and $M_{n}$ generates $\mathbb P^{n+1}$, i.e., the projective hull of $M_{n}$ is $\mathbb P^{n+1}$, it follows that $f = \operatorname{id}_{\mathbb P^{n+1}}$. Hence $\operatorname{Ine}(\mathbb P^{n+1}, M_{n}) = \{\operatorname{id}_{\mathbb P^{n+1}}\}$, i.e., $\gamma$ is injective. So, $\gamma \colon \operatorname{Dec}(\mathbb P^{n+1}, M_{n}) \overset{\simeq}{\longrightarrow} \operatorname{Bir}(M_{n}) = \operatorname{Aut}(M_{n})$. \end{proof} \section{Calabi-Yau hypersurface in $(\mathbb P^{1})^{n+1}$}\label{cy1n} As shown in the previous section, the Calabi-Yau hypersurface $M_{n}$ of $\mathbb P^{n+1}$ with $n \geq 3$ has only the identity transformation in its inertia group. However, there exist Calabi-Yau hypersurfaces in the product $(\mathbb P^{1})^{n+1}$ which do not satisfy this property, as our result (Theorem \ref{main}) shows. To simplify, we denote \begin{align*} P(n+1) &\coloneqq (\mathbb P^{1})^{n+1} = \mathbb P^{1}_{1} \times \mathbb P^{1}_{2} \times \cdots \times \mathbb P^{1}_{n+1},\\ P(n+1)_{i} &\coloneqq \mathbb P^{1}_{1} \times \cdots \times \mathbb P^{1}_{i-1} \times \mathbb P^{1}_{i+1} \times \cdots \times \mathbb P^{1}_{n+1} \simeq P(n), \end{align*} and let \begin{align*} p^{i} \colon P(n+1) &\to \mathbb P^{1}_{i} \simeq \mathbb P^{1},\\ p_{i} \colon P(n+1) &\to P(n+1)_{i} \end{align*} be the natural projections. Let $H_{i}$ be the divisor class of $(p^{i})^{*}(\mathcal O_{\mathbb P^{1}}(1))$; then $P(n+1)$ is a Fano manifold of dimension $n+1$ and its canonical divisor has the form $\displaystyle{-K_{P(n+1)} = \sum^{n+1}_{i=1}2H_{i}}$. Therefore, by the adjunction formula, the generic hypersurface $X_{n} \subset P(n+1)$ has trivial canonical divisor if and only if it has multidegree $(2, 2, \ldots, 2)$ (the computation is spelled out below). More strongly, for $n \geq 3$, $X_{n} = (2, 2, \ldots, 2)$ becomes a Calabi-Yau manifold of dimension $n$ and, for $n=2$, a $K3$ surface (i.e. a 2-dimensional Calabi-Yau manifold). This is shown by the same method as in the proof of Theorem \ref{ta}. From now on, $X_{n}$ denotes a generic hypersurface of $P(n+1)$ of multidegree $(2, 2, \ldots , 2)$ with $n \geq 3$.
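Before proceeding, it may help to spell out the adjunction computation used above (a routine check, recorded only for the reader's convenience). If $X \subset P(n+1)$ is a smooth hypersurface of multidegree $(d_{1}, \ldots , d_{n+1})$, then
\[
K_{X} = \bigl(K_{P(n+1)} + X\bigr)\big|_{X} = \Bigl(\sum^{n+1}_{i=1}(d_{i}-2)H_{i}\Bigr)\Big|_{X},
\]
so for multidegree $(2, 2, \ldots , 2)$ the canonical divisor $K_{X}$ is indeed trivial; conversely, when the restriction map on Picard groups is injective (as it is for $n \geq 3$ and ample $X$ by the Lefschetz theorem), triviality of $K_{X}$ forces $d_{i} = 2$ for every $i$.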
Let us write $P(n+1) = \mathbb P^{1}_{i} \times P(n+1)_{i}$. Let $[x_{i1} : x_{i2}]$ be the homogeneous coordinates of $\mathbb P^{1}_{i}$. Hereafter, we consider the affine locus and denote by $\displaystyle x_{i} = \frac{x_{i2}}{x_{i1}}$ the affine coordinate of $\mathbb P^{1}_{i}$ and by ${\bf z}_{i}$ the affine coordinates of $P(n+1)_{i}$. When we pay attention to $x_{i}$, $X_{n}$ can be written by the following equation \begin{align}\label{xn} X_{n} = \{ F_{i,0}({\bf z}_{i})x_{i}^{2} + F_{i,1}({\bf z}_{i})x_{i} + F_{i,2}({\bf z}_{i}) = 0 \} \end{align} where each $F_{i,j}({\bf z}_{i})$ $(j = 0, 1, 2)$ is a quadratic polynomial in ${\bf z}_{i}$. Now we consider the following two involutions of $P(n+1)$: \begin{align} \tau_{i} \colon (x_{i}, {\bf z}_{i}) &\to \left(-x_{i}- \frac{F_{i,1}({\bf z}_{i})}{F_{i,0}({\bf z}_{i})}, {\bf z}_{i} \right)\label{tau}\\ \sigma_{i} \colon (x_{i}, {\bf z}_{i}) &\to \left(\frac{F_{i,2}({\bf z}_{i})}{x_{i} \cdot F_{i,0}({\bf z}_{i})}, {\bf z}_{i} \right).\label{sigma} \end{align} Then $\tau_{i}|_{X_{n}} = \sigma_{i}|_{X_{n}} = \iota_{i}$ by definition of $\iota_{i}$ (cf. Theorem \ref{iota}). We get two birational automorphisms of $P(n+1)$ \begin{align*} \rho_{i} = \sigma_{i} \circ \tau_{i} \colon (x_{i}, {\bf z}_{i}) &\to \left( \frac{F_{i,2}({\bf z}_{i})}{-x_{i} \cdot F_{i,0}({\bf z}_{i}) - F_{i,1}({\bf z}_{i})}, \ {\bf z}_{i} \right)\\ \rho'_{i} = \tau_{i} \circ \sigma_{i} \colon (x_{i}, {\bf z}_{i}) &\to \left( -\frac{x_{i} \cdot F_{i,1}({\bf z}_{i}) + F_{i,2}({\bf z}_{i})}{x_{i}\cdot F_{i,0}({\bf z}_{i})}, \ {\bf z}_{i} \right). \end{align*} Obviously, both $\rho_{i}$ and $\rho'_{i}$ are in Ine$(P(n+1), X_{n})$, map points not in $X_{n}$ to other points also not in $X_{n}$, and $\rho_{i}^{-1} = \rho'_{i}$ by $\tau_{i}^{2} = \sigma_{i}^{2} = \operatorname{id}_{P(n+1)}$. \begin{prp}\label{order} Each $\rho_{i}$ has infinite order. \end{prp} \begin{proof} By the definition of $\rho_{i}$ and $\rho'_{i} = \rho_{i}^{-1}$, it suffices to show \begin{align*} {\begin{pmatrix} 0 & F_{i,2}\\ -F_{i,0} & -F_{i,1} \end{pmatrix}}^{k} \neq \alpha I \end{align*} for any $k \in \mathbb Z \setminus \{0\}$ where $I$ is an identity matrix and $\alpha \in \mathbb C^{\times}$. The eigenvalues of this matrix are \[ \frac{-F_{i,1} \pm \sqrt{F_{i,1}^{2} - 4F_{i,0}F_{i,2}}}{2}. \] Here $F_{i,1}^{2} - 4F_{i,0}F_{i,2} \neq 0$ as $X_{n}$ is general (for all $i$). If $\begin{pmatrix} 0 & F_{i,2}\\ -F_{i,0} & -F_{i,1} \end{pmatrix}^{k} = \alpha I$ for some $k \in \mathbb Z \setminus \{0\}$ and $\alpha \in \mathbb C^{\times}$, then \[ \left(\frac{-F_{i,1} + \sqrt{F_{i,1}^{2} - 4F_{i,0}F_{i,2}}}{-F_{i,1} - \sqrt{F_{i,1}^{2} - 4F_{i,0}F_{i,2}}}\right)^{k} = 1, \] a contradiction to the assumption that $X_{n}$ is generic. \end{proof} We remark that Proposition \ref{order} is also implicitly proved in the proof of Theorem \ref{main}. Our main result is the following (which is the same as Theorem \ref{intro}): \begin{thm}\label{main} Let $X_{n} \subset P(n+1)$ be a smooth hypersurface of multidegree $(2, 2, \ldots, 2)$ and $n \geq 3$. Then: \begin{itemize} \item[(1)] $\gamma \colon \operatorname{Dec}(P(n+1), X_{n}) \to \operatorname{Bir}(X_{n})$ is surjective, in particular Question $\ref{qexact}$ is affirmative for $X_{n}$. \item[(2)] If, in addition, $X_{n}$ is generic, the $n+1$ elements $\rho_{i} \in \operatorname{Ine}(P(n+1), X_{n})$ $(1 \leq i \leq n+1)$ satisfy \[ \langle \rho_{1}, \rho_{2}, \ldots , \rho_{n+1} \rangle \simeq \underbrace{\mathbb Z * \mathbb Z * \cdots * \mathbb Z}_{n+1} \subset \operatorname{Ine}(P(n+1), X_{n}).
\] In particular, $\operatorname{Ine}(P(n+1), X_{n})$ is an infinite non-commutative group. \end{itemize} \end{thm} Let $\operatorname{Ind}(\rho)$ be the union of the indeterminacy loci of the $\rho_{i}$ and $\rho^{-1}_{i}$; that is, $\displaystyle \operatorname{Ind}(\rho) = \bigcup_{i=1}^{n+1}\big(\operatorname{Ind}(\rho_{i}) \cup \operatorname{Ind}(\rho^{-1}_{i})\big)$ where $\operatorname{Ind}(\rho_{i})$ is the indeterminacy locus of $\rho_{i}$. Clearly, $\operatorname{Ind}(\rho)$ has codimension $\geq 2$ in $P(n+1)$. \begin{proof} Let us show Theorem \ref{main} (1). Suppose $X_{n}$ is generic. For a general point $x \in P(n+1)_{i}$, the set $p_{i}^{-1}(x)$ consists of two points. Writing these two points as $y$ and $y'$, the correspondence $y \leftrightarrow y'$ defines a natural birational involution of $X_{n}$, and this is the involution $\iota_{i}$. Then, by Cantat-Oguiso's result \cite[Theorem 3.3 (4)]{co11}, $\operatorname{Bir}(X_{n})$ $(n\geq 3)$ coincides with the group $\langle \iota_{1}, \iota_{2}, \ldots , \iota_{n+1} \rangle \simeq \underbrace{\mathbb Z/2\mathbb Z * \mathbb Z/2\mathbb Z * \cdots * \mathbb Z/2\mathbb Z}_{n+1}$. The two involutions $\tau_{j}$ and $\sigma_{j}$ of $P(n+1)$ which we constructed in \eqref{tau} and \eqref{sigma} are extensions of the covering involution $\iota_{j}$. Hence, $\tau_{j}|_{X_{n}} = \sigma_{j}|_{X_{n}} = \iota_{j}$. Thus $\gamma$ is surjective. Since automorphisms of $X_{n}$ come from those of the total space $P(n+1)$, the claim also holds in the case where $X_{n}$ is not generic. This completes the proof of Theorem \ref{main} (1). Next, we show Theorem \ref{main} (2). By Proposition \ref{order}, the order of each $\rho_{i}$ is infinite. Thus it is sufficient to show that there is no non-trivial relation among the $n + 1$ elements $\rho_{i}$. We argue by contradiction. Suppose to the contrary that there is a non-trivial relation among the $n+1$ elements $\rho_{i}$, that is, an identity \begin{align}\label{rho} \rho_{i_{1}}^{n_{1}} \circ \rho_{i_{2}}^{n_{2}} \circ \cdots \circ \rho_{i_{l}}^{n_{l}} = \operatorname{id}_{P(n+1)} \end{align} where $l$ is a positive integer, $n_{k} \in \mathbb Z\setminus\{0\}$ $(1\leq k \leq l)$, and each $\rho_{i_{k}}$ denotes one of the $n + 1$ elements $\rho_{i}$ $(1 \leq i \leq n+1)$ and satisfies $\rho_{i_{k}} \neq \rho_{i_{k+1}}$ $(1 \leq k \leq l-1)$. Put $N = |n_{1}| + \cdots + |n_{l}|$. In the affine coordinates $(x_{i_{1}}, {\bf z}_{i_{1}})$, where $x_{i_{1}}$ is the affine coordinate of the $i_{1}$-th factor $\mathbb P^{1}_{i_{1}}$, we can choose two distinct points $(\alpha_{1}, {\bf z}_{i_{1}})$ and $(\alpha_{2}, {\bf z}_{i_{1}})$, $\alpha_{1} \neq \alpha_{2}$, which are included in neither $X_{n}$ nor $\operatorname{Ind}(\rho)$. By a suitable projective linear coordinate change of $\mathbb P^{1}_{i_{1}}$, we can set $\alpha_{1} = 0$ and $\alpha_{2} = \infty$. When we pay attention to the $i_{1}$-th coordinate $x_{i_{1}}$ of the new coordinates, we keep the same letters $F_{i_{1},j}({\bf z}_{i_{1}})$ for the defining equation of $X_{n}$; that is, $X_{n}$ can be written as \[ X_{n} = \{ F_{i_{1},0}({\bf z}_{i_{1}})x_{i_{1}}^{2} + F_{i_{1},1}({\bf z}_{i_{1}})x_{i_{1}} + F_{i_{1},2}({\bf z}_{i_{1}}) = 0 \}. \] Here the two points $(0, {\bf z}_{i_{1}})$ and $(\infty, {\bf z}_{i_{1}})$ are not included in $X_{n} \cup \operatorname{Ind}(\rho)$.
From the assumption, the following two equalities hold: \begin{numcases} {} \rho_{i_{1}}^{n_{1}} \circ \cdots \circ \rho_{i_{l}}^{n_{l}}(0, {\bf z}_{i_{1}}) = (0, {\bf z}_{i_{1}}) & \\ \rho_{i_{1}}^{n_{1}} \circ \cdots \circ \rho_{i_{l}}^{n_{l}}(\infty, {\bf z}_{i_{1}}) = (\infty, {\bf z}_{i_{1}}).\label{infty} \end{numcases} We divide the argument into the following two cases. {\flushleft (i). The case where $n_{1} > 0$. Write $\rho_{i_{1}} \circ \rho_{i_{1}}^{n_{1}-1} \circ \rho_{i_{2}}^{n_{2}} \circ \cdots \circ \rho_{i_{l}}^{n_{l}} = \operatorname{id}_{P(n+1)}$. } Write $\rho_{i_{1}}^{n_{1}-1} \circ \cdots \circ \rho_{i_{l}}^{n_{l}}(0, {\bf z}_{i_{1}}) = (p, {\bf z}_{i_{1}}')$. Then $\rho_{i_{1}}(p, {\bf z}_{i_{1}}') = (0, {\bf z}_{i_{1}})$, and by the definition of $\rho_{i_{1}}$ this forces $F_{i_{1},2}({\bf z}'_{i_{1}}) = 0$. On the other hand, the intersection of $X_{n}$ and the hyperplane $(x_{i_{1}}=0)$ is given by \[ X_{n} \cap (x_{i_{1}}=0) = \{F_{i_{1},2}({\bf z}_{i_{1}}) = 0\}. \] This implies that $(0, {\bf z}'_{i_{1}}) = \rho_{i_{1}}(p, {\bf z}'_{i_{1}}) = (0, {\bf z}_{i_{1}})$ is a point on $X_{n}$, contradicting the fact that $(0, {\bf z}_{i_{1}}) \notin X_{n}$. {\flushleft (ii). The case where $n_{1} < 0$. Write $\rho^{-1}_{i_{1}} \circ \rho_{i_{1}}^{n_{1}+1} \circ \rho_{i_{2}}^{n_{2}} \circ \cdots \circ \rho_{i_{l}}^{n_{l}} = \operatorname{id}_{P(n+1)}$. } Using the assumption \eqref{infty}, we derive a contradiction in the same way as in (i). Precisely, we argue as follows. Let us write $\displaystyle x_{i_{1}} = \frac{1}{y_{i_{1}}}$; then $(x_{i_{1}} = \infty, {\bf z}_{i_{1}}) = (y_{i_{1}} = 0, {\bf z}_{i_{1}})$, and $X_{n}$ and $\rho^{-1}_{i_{1}}$ can be written as \[ X_{n} = \{F_{i_{1},0}({\bf z}_{i_{1}}) + F_{i_{1},1}({\bf z}_{i_{1}})y_{i_{1}} + F_{i_{1},2}({\bf z}_{i_{1}})y_{i_{1}}^{2} = 0\}, \] \[ \rho^{-1}_{i_{1}} \colon (y_{i_{1}}, {\bf z}_{i_{1}}) \to \left(\ -\frac{F_{i_{1},0}({\bf z}_{i_{1}})}{F_{i_{1},1}({\bf z}_{i_{1}}) + y_{i_{1}}\cdot F_{i_{1},2}({\bf z}_{i_{1}})},\ {\bf z}_{i_{1}} \right). \] Write $ \rho_{i_{1}}^{n_{1}+1} \circ \rho_{i_{2}}^{n_{2}} \circ \cdots \circ \rho_{i_{l}}^{n_{l}} (y_{i_{1}} =0, {\bf z}_{i_{1}}) = (y_{i_{1}} = q, {\bf z}_{i_{1}}'')$. Then $\rho^{-1}_{i_{1}}$ maps $q$ to $0$, that is, $F_{i_{1},0}({\bf z}''_{i_{1}}) = 0$. On the other hand, the intersection of $X_{n}$ and the hyperplane $(y_{i_{1}} = 0)$ is given by \[ X_{n}\cap (y_{i_{1}} = 0) = \{F_{i_{1},0}({\bf z}_{i_{1}}) = 0\}. \] This implies that $(y_{i_{1}}=0, {\bf z}''_{i_{1}}) = \rho^{-1}_{i_{1}}(y_{i_{1}} = q, {\bf z}_{i_{1}}'') = (x_{i_{1}}=\infty, {\bf z}_{i_{1}})$ is a point on $X_{n}$; that is, $(x_{i_{1}}=\infty, {\bf z}_{i_{1}}) \in X_{n} \cap (x_{i_{1}}=\infty)$. This is a contradiction. \par From (i) and (ii), we conclude that no such relation \eqref{rho} exists. This completes the proof of Theorem \ref{main} (2). \end{proof} Note that, for the cases $n = 2$ and $n = 1$, Theorem \ref{main} (2) still holds although (1) does not. {\bf Acknowledgements: } The authors would like to express their sincere gratitude to their supervisor, Professor Keiji Oguiso, who suggested this subject and has given them much encouragement and invaluable advice. \end{document}
\begin{document} \begin{frontmatter} \title{Rejoinder} \runtitle{Rejoinder} \begin{aug} \author[a]{\fnms{J. N. K.} \snm{Rao}\corref{}\ead[label=e1]{[email protected]}} \runauthor{J. N. K. Rao} \affiliation{Carleton University} \address[a]{J. N. K. Rao is Distinguished Research Professor, School of Mathematics and Statistics, Carleton University, Ottawa, Ontario K1S 5B6, Canada \printead{e1}.} \end{aug} \vspace*{-6pt} \end{frontmatter} First, I would like to thank the three discussants (Glen Meeden, Joe Sedransk and Eric Slud) for constructive comments on my paper and for providing additional relevant references, particularly on frequentist model diagnostics (Slud) and Bayesian model checking (Sedransk). I totally agree with Sedransk that studying alternative methods of making inference for finite populations is an ``underserved field of research.'' I will first address the constructive comments of the discussants on the comparison of methods for handling sampling errors in the context of estimation with fairly large domain samples. Subsequently, I will respond to the discussions on small area estimation. \vspace*{-1pt} \section*{Hansen et al. Example} \vspace*{-1pt} In Section 3.2, I cited the well-known Hansen,~Ma\-dow and Tepping (HMT) example illustrating the dangers of using model-dependent methods with fair\-ly large samples even under minor model misspecifications. Sedransk argues in his discussion that new advances in model diagnostics, such as model averaging, might remedy the difficulty noted by HMT and provide improvements over the ``straw man, the usual ratio estimator.'' I agree with Sedransk that it would be worthwhile analyzing this example and other examples to show how one can make valid model-dependent inferences routinely with fairly lar\-ge domain samples that can provide significant improvements over the design-based (possibly model-assisted) methods, particularly in the context of official statistics with many variables of interest. If this goal can be achieved, then I believe model-dependent methods (frequentist or Bayesian) will have significant impact on practice, similar to their current use in small area estimation with small domain samples. The HMT example showed the\vadjust{\eject} importance of using design weights under their design with deep stratification by size and disproportional sample allocation. The usual design unbiased weighted estimator is almost as efficient as the usual combined weighted ratio estimator under the HMT design because of deep stratification by size, so I~do not agree with Sedransk's comment on the importance of ratio estimator in the HMT example. It is interesting to note that under proportional sample allocation, the BLUP estimator (unweighted ratio estimator) under the incorrectly specified ratio model is identical to the combined weighted ratio estimator and hence it performs well because it is design consistent, unlike under disproportional sample allocation. The HMT example demonstrated the importance of design consistency, and in fact as noted in Section 3.2, Little (\citeyear{LIT83}) proposed restricting attention to models that hold for the sample and for which the corresponding BLUP estimator is design consistent. I have noted some limitations of this proposal in Section 3.2. It should be noted that the HMT illustration of the poor performance of the BLUP estimator used the repeated sampling design-based approach to evaluate confidence interval coverage. 
On the other hand, model-based inference~is based on the distribution induced by the model conditional on the particular sample that has been drawn. However, Rao (\citeyear{Rao97}) showed that the HMT conclusions still hold in the conditional framework because of the effective use of size information through size stratification. \section*{Role of Design Weights} I will now turn to Meeden's useful comments on the role of design weights and the use of Polya posterior (PP) for making inferences after the sample is observed. As noted in Section 4.2, the PP approach when applicable permits routine interval estimation for any finite population parameter of interest through simulation of many finite populations from PP and this general interval estimation feature of PP is indeed attractive. Meeden notes in his discussion that an R package is also available for simulating many complete populations. However, so far the PP methodology considered only simple designs that may satisfy the \mbox{assumption} that the un-sampled units are like the sampled units\vadjust{\eject} (exchangeability) which limits its applicability in practice. Meeden agrees with my comment that the PP approach needs extension to more complex designs \mbox{before} it becomes attractive to users. Even for the simple designs where it is applicable, it would be useful to identify scenarios where the PP can perform significantly better than the routine design-based methods in terms of confidence interval coverage, especially in cases where the traditional methods do not perform well; for example, the Woodruff interval on quantiles under size stratification noted in Section 1. Meeden notes the work of Lazar, Meeden and Nelson (\citeyear{l-m-n08}) on the constrained PP which incorporates known population information about auxiliary variables without any model assumptions about how the auxiliary variables are related to the variables of interest, similar to calibration estimation. It appears that the constraints allowed by this method are more flexible than those in the usual calibration estimation, such as the population median falls in some known interval, and this feature might prove attractive to the user, especially due to the availability of an R package. However, the constrained PP could run into problems when the number of population constraints is large, similar to traditional calibration estimation. In his concluding remarks, Meeden says that one should not focus on estimating the variance of an estimator, but this is a customary practice as it allows reporting estimated coefficient of variation (CV) of the estimator as a quality measure and the user can compute confidence interval from this variance estimator for any desired confidence level using normal approximation. Meeden also expresses concerns that the frequentist practice is often ``obscured by the prominent and unnecessary role played by the design weights after the sample has been selected.'' But design weights or calibration weights are needed for asymptotically valid design-based inferences, although it is often necessary to modify the weights to handle special situations, such as outlier weights. In fact, the PP-based \mbox{estimators} of a population mean are often close to the traditional weighted estimators, for example under stratified random sampling. 
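For orientation only, the ``traditional weighted estimator'' referred to here is, under stratified simple random sampling with strata of sizes $N_{h}$ ($N = \sum_{h} N_{h}$) and stratum sample means $\bar{y}_{h}$, the usual stratified mean; this is the standard textbook formula rather than a new proposal:
\[
\bar{y}_{\mathrm{st}} = \sum_{h=1}^{H} \frac{N_{h}}{N}\,\bar{y}_{h},
\]
and it is this quantity that the PP-based point estimators tend to reproduce closely.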
\section*{Calibration Estimators} Slud and I seem to agree on the limitations of model-dependent approaches (frequentist or Baye\-sian) when the sample size in a domain of interest is sufficiently large: possible design inconsistency of the resulting estimators under minor model\vadjust{\eject} misspecifications, leading to erroneous inferences. In Section~3.1 I noted the popularity of model-free calibration estimators in the large-scale production of official statistics from complex surveys because of their ability to produce common calibration weights and accommodate arbitrary number of user-specified calibration constraints. In practice, design weights are adjusted first for unit nonresponse and then calibrated to known user-specified totals. The calibration weights are often modified to satisfy specified range restrictions and calibration constraints simultaneously, but there is no guarantee that such modified weights can be found. Rao and Singh (\citeyear{RAOSIN}, \citeyear{RAOSIN09}) proposed a ``ridge shrinkage'' approach (assuming complete response) to get around the latter problem by relaxing some calibration constraints incrementally while satisfying the range restrictions. Slud mentions a new method he has developed recently (Slud and Thibaudeau, \citeyear{SluThi}) that can do simultaneous weight adjustment for nonresponse, calibration and weight compression. This method looks very interesting and his empirical results are encouraging. But a solution satisfying specified range restrictions on the weights may not exist and it would be interesting to extend the Rao--Singh approach to handle simultaneous nonresponse adjustment and calibration. I agree with Slud that if the weights and calibration totals are correctly specified, the resulting calibration estimator is design consistent even if the underlying working linear regression model uses an incorrect or incomplete set of predictor variables, as in the example of Section 3.1. The effect of gross misspecification of the working model is on the coverage performance of the associated confidence intervals and hence it is ``more subtle than design-consistency'' as noted by Slud. Incidentally, Dorfman (\citeyear{autokey2}) used this example to question the contention of Hansen and Tepping (\citeyear{HANTEP90}) that ``design-based estimators that happen to incorporate a model are inferentially satisfactory, despite failure of the model'' and concluded that the results on coverage for the linear regression estimator calibrated on the population size $N$ and the population total $X$ ``dramatically call this contention into question.'' Dorfman's statement may be correct in regard to calibration estimators based solely on user-specified totals $Z$, but as noted in Section 3.1 a model-assisted approach based on a working model obtained after some model checking to eliminate gross misspecification of the working model can lead to good confidence interval coverage in the Dorfman example.\vadjust{\eject} \section*{Analysis of Survey Data} Section 3.3 of my paper on the analysis of complex survey data is somewhat brief due to my focus on estimating totals and means, but I should have mentioned goodness-of-fit tests that take account of survey design. I am thankful to Slud for pointing this out and making reference to my own work (Rao and Scott, \citeyear{RaoSco84}) on goodness-of-fit chi-squared tests for cross-classified survey data based on log-linear models. 
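As background to this section, the generic calibration setup (in the sense of Deville and S\"arndal) can be sketched as follows; this is the standard formulation, not the specific weighting scheme of any particular study cited above. Given design weights $d_{i}$, auxiliary vectors $x_{i}$ with known population total $X$, and positive scale factors $q_{i}$, the calibrated weights solve
\[
\min_{w}\ \sum_{i \in s} \frac{(w_{i}-d_{i})^{2}}{d_{i}q_{i}} \qquad \text{subject to} \qquad \sum_{i \in s} w_{i}x_{i} = X,
\]
and the resulting estimator $\hat{Y}_{\mathrm{cal}} = \sum_{i \in s} w_{i}y_{i}$ coincides with the generalized regression (GREG) estimator; the range restrictions and nonresponse adjustments discussed above are imposed on top of these constraints, which is where the existence problems mentioned earlier arise.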
I might add that Roberts, Rao and Kumar (\citeyear{RobRaoKum87}) considered goodness-of-fit tests of logistic regression models with categorical predictor variables and binary response. Graubard, Korn and Midthune (\citeyear{GRA}) extended the well-known Hosmer and Lemeshow (\citeyear{HOSLEM80}) grouping method of goodness-of-fit for logistic regression to complex survey data. Roberts, Ren and Rao (\citeyear{ROB}) studied goodness-of-fit tests for mean specification in marginal models for longitudinal survey data and obtained an adjusted Hosmer and Lemeshow test using Rao--Scott corrections as well as a quasi-score test obtained by extending the method of Horton et al. (\citeyear{HORetal99}) to survey data. Multilevel models for analysis of survey data are more complex than the marginal models for estimating regression parameters because of the presence of random effects in the models. Goodness-of-fit methods for two-level models, when the model holds for the sample, are available in the literature (e.g., Pan and Lin, \citeyear{PanLin05}) but very little is known for survey data in the presence of sample selection bias. I am presently studying model-checking methods for two-level models taking account of the survey design. \section*{Small Area Estimation} Turning now to small area estimation, Slud notes ``But one serious objection is that each response~va\-riable would require its own Bayesian model'' unlike direct calibration estimators using common weights. Yet model-dependent small area methods (either HB or EB) are gaining acceptability because direct calibration estimators are unreliable due to small sample sizes. However, practitioners often prefer benchmarking the small area estimators to agree with a~reliable direct calibration estimator at a higher level. Sedransk notes that ``almost all of the applications use an area-level model'' even though it makes strong assumptions such as known sampling variances, as noted in Section 5. I agree with him that the quality of the smoothing methods used in practice to get around the assumption of known sampling variances is questionable although smoothed sampling variance estimates may be satisfactory for point estimation. However, as noted in Section 5, area-level models remain attractive because the sampling design is taken into account through the direct estimators, and the direct estimators and the associated area-level covariates are more readily available to the users than the corresponding unit-level sample data. Also, in using unit-level models one need to ensure that the population model holds for the sample and this could be problematic, although more complex methods have been proposed recently to handle sample selection bias in unit-level models (Pfeffermann and Sverchkov, \citeyear{PfeSve07}). Nevertheless, I~agree with Sedransk that unit-level models should receive more attention in the future. Turning to HB model diagnostics, I have noted in Section 5 some difficulties with the commonly used posterior predictive $p$-value (PPP) for checking goodness-of-fit of a model because of ``double use'' of data. Alternative methods that have been proposed to avoid double use of data are more difficult to implement, especially in the context of small area models as noted. Sedransk mentioned three additional references (Yan and Sedransk, \citeyear{YANSED06}, \citeyear{YanSed07}, \citeyear{YANSED10}) that studied alternative measures in the context of detecting unknown hierarchical structures under somewhat simplified assumptions. 
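For reference, the PPP-value discussed here and below is, for a chosen discrepancy measure $T$, the quantity
\[
p_{\mathrm{PPP}} = \Pr\bigl\{T(y^{\mathrm{rep}},\theta) \geq T(y^{\mathrm{obs}},\theta) \,\big|\, y^{\mathrm{obs}}\bigr\},
\]
computed under the joint posterior distribution of the parameters $\theta$ and the replicated data $y^{\mathrm{rep}}$; the ``double use'' of the data arises because $y^{\mathrm{obs}}$ enters both the posterior and the discrepancy. This is the standard definition and is recalled here only to fix notation.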
In particular, Yan and Sedransk demonstrated that the unit-specific PPP-values act like uniformly distributed random variables under the simple mean null model (without random area effects) and hence a Q--Q plot should reveal departures from the model. They assumed normality and absence of outliers in their study, but it would be interesting to see if their unit-specific P-values can in fact detect nonnormality of random effects, studied by Sinharay and Stern (\citeyear{SinSte03}). The use of unit-specific PPP-values might be more attractive than using the traditional PPP-function because it does not require the selection of an appropriate checking function, but further work is needed including the detection of nonnormality as noted above. Yan and Sedransk showed that the PPP-function, based on the F-statistic as the checking function, is very effective for detecting hierarchical structure when the true model is correctly guessed as the mean model with random area effects. This seems to imply that the PPP-function is chosen to reject the null model and yet Sedransk criticizes the frequentist goodness-of-fit tests by saying that\vadjust{\eject} ``such tests are constructed to \textit{reject} null hypotheses whereas one would like to accept a postulated model if the data are concordant with it.'' In the simulation study of Yan and Sedransk (\citeyear{YanSed07}) the F-statistic based PPP-value detected even small correlations when the sample size is large and the corresponding frequentist test would also lead to similar results. I do not agree with Sedransk that global frequentist goodness-of-fit tests necessarily reject the null model when the data are concordant with the model. In fact, many published papers have identified models from real data, using frequentist tests. For example, Datta, Hall and Mandal (\citeyear{DATHALMAN}) developed a frequentist model selection method by testing for the presence of small area random effects and applied the method to two real data sets involving 13 and 23 areas, respectively. Their test is based on simple bootstrap methods and it is free of normality assumption. The null model in both applications is a regression model without random area effects and they showed that the frequentist $p$-value is as large as 0.2, suggesting that the data are concordant with the simpler null model. Slud mentioned the work of Jiang, Lahiri and Wu (\citeyear{JiaLahWu01}) and Jiang (\citeyear{Jia01}) on mixed linear model diagnostics in the frequentist framework. I personally prefer using prior-free frequentist methods for model checking because they can handle a variety of model deviations including selection of variables and random effects selection in linear or generalized linear mixed models (e.g., Jiang et al., \citeyear{Jiaetal08}) and detection of outliers in multilevel models (Shi and Chen, \citeyear{ShiChe08}). A model selected by the frequentist methods can be further subjected to Bayesian selection methods if necessary before using HB methods for inference. Slud notes difficulties with model checking in the context of SAIPE for sample counties where no poor children were seen. This is also the case for counties or areas not sampled. Model checking in those cases is indeed challenging. 
Finally, Slud makes an important observation on goodness-of-fit tests when the primary interest is prediction: ``excellent predictions can be provided through estimating models which are too simple to pass goodness-of-fit checks.'' Slud notes that this observation ``has not yet been formulated with mathematical care'' and that both frequentists and Baye\-sians will benefit by characterizing ``which target parameters and which combinations of true and oversimplified models could work in this way.'' In this context, the recent work of Jiang, Nguyen and Rao (\citeyear{JIANGURAO}) on best predictive small area estimation is relevant. This paper develops a new prediction procedure, called observed best prediction (OBP), and shows that it can significantly outperform the traditional EBLUP. \section*{Acknowledgments} Again, I am thankful to the discussants for their insightful comments. I also wish to thank the guest editor, Partha Lahiri, for inviting me to submit this paper to \textit{Statistical Science}. This work was supported by a research grant from the Natural Sciences and Engineering Research Council of Canada. \begin{thebibliography}{17} \bibitem[\protect\citeauthoryear{Datta, Hall and Mandal}{2011}]{DATHALMAN} \begin{barticle}[auto:STB|2011-03-03|12:04:44] \bauthor{\bsnm{Datta},~\bfnm{G.~S.}\binits{G.~S.}}, \bauthor{\bsnm{Hall},~\bfnm{P.}\binits{P.}} \AND \bauthor{\bsnm{Mandal},~\bfnm{A.}\binits{A.}} (\byear{2011}). \btitle{Model selection by testing for the presence of small-area effects, and application to area-level data}. \bjournal{J. Amer. Statist. Assoc.} \bvolume{106} \bpages{362--374}. \end{barticle} \endbibitem \bibitem[\protect\citeauthoryear{Dorfman}{1994}]{autokey2} \begin{barticle}[auto:STB|2011-03-03|12:04:44] \bauthor{\bsnm{Dorfman},~\bfnm{A.~H.}\binits{A.~H.}} (\byear{1994}). \btitle{A note on variance estimation for the regression estimator in double sampling}. \bjournal{J. Amer. Statist. Assoc.} \bvolume{89} \bpages{137--140}. \end{barticle} \endbibitem \bibitem[\protect\citeauthoryear{Graubard, Korn and Midthune}{1997}]{GRA} \begin{bmisc}[auto:STB|2011-03-03|12:04:44] \bauthor{\bsnm{Graubard},~\bfnm{B.~I.}\binits{B.~I.}}, \bauthor{\bsnm{Korn},~\bfnm{E.~L.}\binits{E.~L.}} \AND \bauthor{\bsnm{Midthune},~\bfnm{D.}\binits{D.}} (\byear{1997}). \bhowpublished{Testing goodness-of-fit for logistic regression with survey data. In \textit{Proceedings of the Section on Survey Research Methods} 170--174. Amer. Statist. Assoc., Alexandria, VA}. \end{bmisc} \endbibitem \bibitem[\protect\citeauthoryear{Hansen and Tepping}{1990}]{HANTEP90} \begin{barticle}[auto:STB|2011-03-03|12:04:44] \bauthor{\bsnm{Hansen},~\bfnm{M.~N.}\binits{M.~N.}} \AND \bauthor{\bsnm{Tepping},~\bfnm{B.~J.}\binits{B.~J.}} (\byear{1990}). \btitle{Regression estimates in federal welfare quality control programs}. \bjournal{J. Amer. Statist. Assoc.} \bvolume{85} \bpages{856--864}. \end{barticle} \endbibitem \bibitem[\protect\citeauthoryear{Horton et~al.}{1999}]{HORetal99} \begin{barticle}[auto:STB|2011-03-03|12:04:44] \bauthor{\bsnm{Horton},~\bfnm{N.~J.}\binits{N.~J.}}, \bauthor{\bsnm{Bebchuk},~\bfnm{J.~D.}\binits{J.~D.}}, \bauthor{\bsnm{Jones},~\bfnm{C.~L.}\binits{C.~L.}}, \bauthor{\bsnm{Lipsitz},~\bfnm{S.~R.}\binits{S.~R.}}, \bauthor{\bsnm{Catalano},~\bfnm{P.~J.}\binits{P.~J.}}, \bauthor{\bsnm{Zahner},~\bfnm{G.~E.~P.}\binits{G.~E.~P.}} \AND \bauthor{\bsnm{Fitzmaurice},~\bfnm{G.~M.}\binits{G.~M.}} (\byear{1999}). \btitle{Goodness-of-fit for GEE: An example with mental service utilization}. \bjournal{Stat. 
Med.} \bvolume{18} \bpages{213--222}. \end{barticle} \endbibitem \bibitem[\protect\citeauthoryear{Hosmer and Lemeshow}{1980}]{HOSLEM80} \begin{barticle}[auto:STB|2011-03-03|12:04:44] \bauthor{\bsnm{Hosmer},~\bfnm{D.~W.}\binits{D.~W.}} \AND \bauthor{\bsnm{Lemeshow},~\bfnm{S.}\binits{S.}} (\byear{1980}). \btitle{A goodness-of-fit test for the multiple logistic regression}. \bjournal{Comm. Statist.} \bvolume{A10} \bpages{1043--1069}. \end{barticle} \endbibitem \bibitem[\protect\citeauthoryear{Jiang}{2001}]{Jia01} \begin{barticle}[mr] \bauthor{\bsnm{Jiang},~\bfnm{Jiming}\binits{J.}} (\byear{2001}). \btitle{Goodness-of-fit tests for mixed model diagnostics}. \bjournal{Ann. Statist.} \bvolume{29} \bpages{1137--1164}. \bid{doi={10.1214/aos/1013699997}, issn={0090-5364}, mr={1869244}} \end{barticle} \endbibitem \bibitem[\protect\citeauthoryear{Jiang, Lahiri and Wu}{2001}]{JiaLahWu01} \begin{barticle}[mr] \bauthor{\bsnm{Jiang},~\bfnm{Jiming}\binits{J.}}, \bauthor{\bsnm{Lahiri},~\bfnm{P.}\binits{P.}} \AND \bauthor{\bsnm{Wu},~\bfnm{Chien-Hua}\binits{C.-H.}} (\byear{2001}). \btitle{A generalization of the {P}earson's {$\chi\sp 2$} goodness-of-fit test with estimated cell frequencies}. \bjournal{Sankhy\=a Ser. A} \bvolume{63} \bpages{260--276}. \bid{issn={0581-572X}, mr={1897453}} \end{barticle} \endbibitem \bibitem[\protect\citeauthoryear{Jiang, Nguyen and Rao}{2011}]{JIANGURAO} \begin{bmisc}[auto:STB|2011-03-03|12:04:44] \bauthor{\bsnm{Jiang},~\bfnm{J.}\binits{J.}}, \bauthor{\bsnm{Nguyen},~\bfnm{T.}\binits{T.}} \AND \bauthor{\bsnm{Rao},~\bfnm{J.~S.}\binits{J.~S.}} (\byear{2011}). \bhowpublished{Best predictive small area estimation. \textit{J. Amer. Statist. Assoc.} \textbf{106} 732--745}. \end{bmisc} \endbibitem \bibitem[\protect\citeauthoryear{Jiang et~al.}{2008}]{Jiaetal08} \begin{barticle}[mr] \bauthor{\bsnm{Jiang},~\bfnm{Jiming}\binits{J.}}, \bauthor{\bsnm{Rao},~\bfnm{J.~Sunil}\binits{J.~S.}}, \bauthor{\bsnm{Gu},~\bfnm{Zhonghua}\binits{Z.}} \AND \bauthor{\bsnm{Nguyen},~\bfnm{Thuan}\binits{T.}} (\byear{2008}). \btitle{Fence methods for mixed model selection}. \bjournal{Ann. Statist.} \bvolume{36} \bpages{1669--1692}. \bid{doi={10.1214/07-AOS517}, issn={0090-5364}, mr={2435452}} \end{barticle} \endbibitem \bibitem[\protect\citeauthoryear{Lazar, Meeden and Nelson}{2008}]{l-m-n08} \begin{barticle}[auto:STB|2011-03-03|12:04:44] \bauthor{\bsnm{Lazar},~\bfnm{R.}\binits{R.}}, \bauthor{\bsnm{Meeden},~\bfnm{G.}\binits{G.}} \AND \bauthor{\bsnm{Nelson},~\bfnm{D.}\binits{D.}} (\byear{2008}). \btitle{A noninformative Bayesian approach to finite population sampling using auxiliary variables}. \bjournal{Survey Methodol.} \bvolume{34} \bpages{51--64}. \end{barticle} \endbibitem \bibitem[\protect\citeauthoryear{Little}{1983}]{LIT83} \begin{barticle}[auto:STB|2010-11-18|09:18:59] \bauthor{\bsnm{Little},~\bfnm{R.~J.~A.}\binits{R.~J.~A.}} (\byear{1983}). \btitle{Estimating a finite population mean from unequal probability samples}. \bjournal{J. Amer. Statist. Assoc.} \bvolume{78} \bpages{596--604}. \end{barticle} \endbibitem \bibitem[\protect\citeauthoryear{Pan and Lin}{2005}]{PanLin05} \begin{barticle}[mr] \bauthor{\bsnm{Pan},~\bfnm{Zhiying}\binits{Z.}} \AND \bauthor{\bsnm{Lin},~\bfnm{D.~Y.}\binits{D.~Y.}} (\byear{2005}). \btitle{Goodness-of-fit methods for generalized linear mixed models}. \bjournal{Biometrics} \bvolume{61} \bpages{1000--1009}. 
\bid{doi={10.1111/j.1541-0420.2005.00365.x}, issn={0006-341X}, mr={2216193}} \end{barticle} \endbibitem \bibitem[\protect\citeauthoryear{Pfeffermann and Sverchkov}{2007}]{PfeSve07} \begin{barticle}[mr] \bauthor{\bsnm{Pfeffermann},~\bfnm{Danny}\binits{D.}} \AND \bauthor{\bsnm{Sverchkov},~\bfnm{Michail}\binits{M.}} (\byear{2007}). \btitle{Small-area estimation under informative probability sampling of areas and within the selected areas}. \bjournal{J. Amer. Statist. Assoc.} \bvolume{102} \bpages{1427--1439}. \bid{doi={10.1198/016214507000001094}, issn={0162-1459}, mr={2412558}} \end{barticle} \endbibitem \bibitem[\protect\citeauthoryear{Rao}{1997}]{Rao97} \begin{barticle}[mr] \bauthor{\bsnm{Rao},~\bfnm{J.~N.~K.}\binits{J.~N.~K.}} (\byear{1997}). \btitle{Developments in sample survey theory: An appraisal}. \bjournal{Canad. J. Statist.} \bvolume{25} \bpages{1--21}. \bid{doi={10.2307/3315352}, issn={0319-5724}, mr={1451670}} \end{barticle} \endbibitem \bibitem[\protect\citeauthoryear{Rao and Scott}{1984}]{RaoSco84} \begin{barticle}[mr] \bauthor{\bsnm{Rao},~\bfnm{J.~N.~K.}\binits{J.~N.~K.}} \AND \bauthor{\bsnm{Scott},~\bfnm{A.~J.}\binits{A.~J.}} (\byear{1984}). \btitle{On chi-squared tests for multiway contingency tables with cell proportions estimated from survey data}. \bjournal{Ann. Statist.} \bvolume{12} \bpages{46--60}. \bid{doi={10.1214/aos/1176346391}, issn={0090-5364}, mr={0733498}} \end{barticle} \endbibitem \bibitem[\protect\citeauthoryear{Rao and Singh}{1997}]{RAOSIN} \begin{bmisc}[auto:STB|2011-03-03|12:04:44] \bauthor{\bsnm{Rao},~\bfnm{J.~N.~K.}\binits{J.~N.~K.}} \AND \bauthor{\bsnm{Singh},~\bfnm{A.~C.}\binits{A.~C.}} (\byear{1997}). \bhowpublished{A ridge-shrinkage method for range-restricted weight estimation in survey sampling. In \textit{Proceedings of the Section on Survey Research Methods} 57--65. Amer. Statist. Assoc., Alexandria, VA}. \end{bmisc} \endbibitem \bibitem[\protect\citeauthoryear{Rao and Singh}{2009}]{RAOSIN09} \begin{barticle}[auto:STB|2011-03-03|12:04:44] \bauthor{\bsnm{Rao},~\bfnm{J.~N.~K.}\binits{J.~N.~K.}} \AND \bauthor{\bsnm{Singh},~\bfnm{A.~C.}\binits{A.~C.}} (\byear{2009}). \btitle{Range-restricted weight calibration for survey data using ridge regression}. \bjournal{Pak. J. Statist.} \bvolume{25} \bpages{371--384}. \end{barticle} \endbibitem \bibitem[\protect\citeauthoryear{Roberts, Rao and Kumar}{1987}]{RobRaoKum87} \begin{barticle}[mr] \bauthor{\bsnm{Roberts},~\bfnm{G.}\binits{G.}}, \bauthor{\bsnm{Rao},~\bfnm{J.~N.~K.}\binits{J.~N.~K.}} \AND \bauthor{\bsnm{Kumar},~\bfnm{S.}\binits{S.}} (\byear{1987}). \btitle{Logistic regression analysis of sample survey data}. \bjournal{Biometrika} \bvolume{74} \bpages{1--12}. \bid{doi={10.1093/biomet/74.1.1}, issn={0006-3444}, mr={0885914}} \end{barticle} \endbibitem \bibitem[\protect\citeauthoryear{Roberts, Ren and Rao}{2009}]{ROB} \begin{bmisc}[auto:STB|2011-03-03|12:04:44] \bauthor{\bsnm{Roberts},~\bfnm{G.}\binits{G.}}, \bauthor{\bsnm{Ren},~\bfnm{Q.}\binits{Q.}} \AND \bauthor{\bsnm{Rao},~\bfnm{J.~N.~K.}\binits{J.~N.~K.}} (\byear{2009}). \bhowpublished{Using marginal mean models for data from longitudinal surveys with a complex design: Some advances in methods. In \textit{Methodology of Longitudinal Surveys} (P. Lynn, ed.) 351--266. Wiley, Chichester}. \end{bmisc} \endbibitem \bibitem[\protect\citeauthoryear{Shi and Chen}{2008}]{ShiChe08} \begin{barticle}[mr] \bauthor{\bsnm{Shi},~\bfnm{Lei}\binits{L.}} \AND \bauthor{\bsnm{Chen},~\bfnm{Gemai}\binits{G.}} (\byear{2008}). 
\btitle{Detection of outliers in multilevel models}. \bjournal{J. Statist. Plann. Inference} \bvolume{138} \bpages{3189--3199}. \bid{doi={10.1016/j.jspi.2008.01.004}, issn={0378-3758}, mr={2526230}} \end{barticle} \endbibitem \bibitem[\protect\citeauthoryear{Sinharay and Stern}{2003}]{SinSte03} \begin{barticle}[mr] \bauthor{\bsnm{Sinharay},~\bfnm{Sandip}\binits{S.}} \AND \bauthor{\bsnm{Stern},~\bfnm{Hal~S.}\binits{H.~S.}} (\byear{2003}). \btitle{Posterior predictive model checking in hierarchical models}. \bjournal{J. Statist. Plann. Inference} \bvolume{111} \bpages{209--221}. \bid{doi={10.1016/S0378-3758(02)00303-8}, mr={1955882}} \end{barticle} \endbibitem \bibitem[\protect\citeauthoryear{Slud and Thibaudeau}{2010}]{SluThi} \begin{bmisc}[auto:STB|2011-03-03|12:04:44] \bauthor{\bsnm{Slud},~\bfnm{E.}\binits{E.}} \AND \bauthor{\bsnm{Thibaudeau},~\bfnm{Y.}\binits{Y.}} (\byear{2010}). \bhowpublished{Simultaneous calibration and nonresponse adjustment.\ Census Bureau CSRM Research Report \#2010-03. Available at \href{http://www.census.gov/srd/papers/pdf/rrs2010-03.pdf}{http://} \href{http://www.census.gov/srd/papers/pdf/rrs2010-03.pdf}{www.census.gov/srd/papers/pdf/rrs2010-03.pdf}}. \end{bmisc} \endbibitem \bibitem[\protect\citeauthoryear{Yan and Sedransk}{2006}]{YANSED06} \begin{barticle}[auto:STB|2011-03-03|12:04:44] \bauthor{\bsnm{Yan},~\bfnm{G.}\binits{G.}} \AND \bauthor{\bsnm{Sedransk},~\bfnm{J.}\binits{J.}} (\byear{2006}). \btitle{Exploring the use of subpopulation membership in Bayesian hierarchical model assessment}. \bjournal{J. Data Sci.} \bvolume{4} \bpages{413--424}. \end{barticle} \endbibitem \bibitem[\protect\citeauthoryear{Yan and Sedransk}{2007}]{YanSed07} \begin{barticle}[mr] \bauthor{\bsnm{Yan},~\bfnm{Guofen}\binits{G.}} \AND \bauthor{\bsnm{Sedransk},~\bfnm{J.}\binits{J.}} (\byear{2007}). \btitle{Bayesian diagnostic techniques for detecting hierarchical structure}. \bjournal{Bayesian Anal.} \bvolume{2} \bpages{735--760}. \bid{issn={1931-6690}, mr={2361973}} \end{barticle} \endbibitem \bibitem[\protect\citeauthoryear{Yan and Sedransk}{2010}]{YANSED10} \begin{barticle}[auto:STB|2011-03-03|12:04:44] \bauthor{\bsnm{Yan},~\bfnm{G.}\binits{G.}} \AND \bauthor{\bsnm{Sedransk},~\bfnm{J.}\binits{J.}} (\byear{2010}). \btitle{A note on Bayesian residuals as a hierarchical model diagnostic technique}. \bjournal{Statist. Papers} \bvolume{51} \bpages{1--10}. \end{barticle} \endbibitem \end{thebibliography}\vspace*{-4pt} \end{document}
\begin{document} \author{Gergely Harcos} \address{Alfr\'ed R\'enyi Institute of Mathematics, Hungarian Academy of Sciences, POB 127, Budapest H-1364, Hungary}\email{[email protected]} \address{Central European University, Nador u. 9, Budapest H-1051, Hungary}\email{[email protected]} \title{Twisted Hilbert modular $L$-functions and spectral theory} \thanks{The author was supported by OTKA grants K~101855 and K~104183 and ERC Advanced Grant~228005.} \begin{abstract} These are notes for four lectures given at the 2010 CIMPA Research School ``Automorphic Forms and $L$-functions'' in Weihai, China. The lectures focused on a Burgess-like subconvex bound for twisted Hilbert modular $L$-functions published jointly with Valentin Blomer in the same year. They discussed the proof in some detail, especially how spectral theory can be used to estimate the relevant shifted convolution sums efficiently. They also discussed briefly an application for the number of representations by a totally positive ternary quadratic form over a totally real number field. The notes below follow the leisurely style of the lectures, hence they do not constitute a comprehensive survey of the subject. \end{abstract} \maketitle \section{Lecture One: Some Quadratic Forms} In the lectures we shall discuss a state-of-the-art bound for twisted Hilbert modular $L$-functions, namely an analogue of Burgess' bound~\cite{Bu} for Dirichlet $L$-functions. For motivation and context, we start with an application to quadratic forms. For a positive integral ternary quadratic form $Q$ let us denote by $r^*(n,Q)$ the number of \emph{primitive} integral representations of $n$ by $Q$, that is \[r^*(n,Q):=\#\left\{(x,y,z)\in\mathbb{Z}^3:\text{ $Q(x,y,z)=n$ and $\gcd(x,y,z)=1$}\right\}.\] Geometrically, this is the number of \emph{visible} lattice points on an ellipsoid of size $\sqrt{n}$. We would like to understand this quantity, in particular determine which positive integers $n$ are primitively represented by $Q$ over $\mathbb{Z}$. Note that when $n$ is square-free, all integral representations are primitive. \subsection{Sums of three squares} Let us look at the simplest example, \[Q(x,y,z):=x^2+y^2+z^2.\] Clearly, for $n\equiv 0,4,7\pmod{8}$ there are no primitive representations. However, for $n\equiv 1,2,3,5,6\pmod{8}$ there are, and we have the following elegant formula for their number: \begin{equation}\label{eq1} r^*(n,Q)=\frac{24}{\pi}\,\sqrt{n}\,L\left(1,\left(\frac{D}{\cdot}\right)\right), \end{equation} where \[D=\begin{cases} -4n,&n\equiv 1,2,5,6\pmod{8},\\ -n,&n\equiv 3\pmod{8}. \end{cases}\] This formula arises naturally from the work of Gauss~(1801) and Dirichlet~(1839) on the class number. For a negative discriminant $D$ let $h(D)$ denote the number of equivalence classes of positive \emph{primitive} binary quadratic forms of discriminant $D$, and let $w$ denote the number of automorphs\footnote{In these notes, equivalence classes and automorphs are meant in the narrow/strict/proper sense, i.e. 
they are defined in terms of the action of $\SL_n$ on $n$-ary quadratic forms.} of such a form: \[w=\begin{cases}6,&D=-3,\\4,&D=-4,\\2,&D<-4.\end{cases}\] In this notation a fundamental result of Gauss~\cite{Ga} states that \begin{equation}\label{eq2} r^*(n,Q)=\begin{cases} \frac{24}{w}h(-4n),&n\equiv 1,2,5,6\pmod{8},\\ \frac{48}{w}h(-n),&n\equiv 3\pmod{8}, \end{cases} \end{equation} while the class number formula of Dirichlet~\cite{D} reads \begin{equation}\label{eq3} h(D)=\frac{w}{2\pi}\,|D|^{1/2}\,L\left(1,\left(\frac{D}{\cdot}\right)\right). \end{equation} For a modern account of these results we refer the reader to \cite[\S 4]{G} and \cite[\S 8]{Z}, including the exercises. Combining \eqref{eq2} and \eqref{eq3}, we obtain \eqref{eq1}. It is not hard to see that the special $L$-value involved in \eqref{eq1} is not too large, while a deeper result of Siegel~\cite{S2} shows that it is not too small either: \begin{equation}\label{eq4} L\left(1,\left(\frac{D}{\cdot}\right)\right)=|D|^{o(1)}. \end{equation} We infer that $r^*(n,Q)$ is either zero or about $\sqrt{n}$: \begin{equation}\label{eq5} r^*(n,Q)=n^{\frac{1}{2}+o(1)},\qquad n\equiv 1,2,3,5,6\pmod{8}. \end{equation} That is, for the case of $Q(x,y,z)=x^2+y^2+z^2$, we know precisely when there is at least one visible lattice point on our ellipsoid (the sphere of radius $\sqrt{n}$ centered at the origin), and we also know that one visible lattice point implies many others. We can look at \eqref{eq5} as a quantitative local-to-global principle: \[\boxed{\text{$n$ is primitively represented by $Q$ over any $\mathbb{Z}_p$}\quad\Longrightarrow\quad\text{$r^*(n,Q)$ is large}}\] Here we could restrict the left hand side to $p=2$, because for $p>2$ the condition is automatically met. This formulation, of course, does not reveal that $r^*(n,Q)$ has a nice analytic description as in \eqref{eq1}. The two viewpoints are unified by the celebrated mass formula\footnote{The original formula concerns all integral representations, but it is easy to reformulate it in terms of primitive representations for the cases considered here. For the general case see \cite[\S 6.8]{K}.} of Siegel~\cite{S1}. Namely, let us assume that $n\equiv 1,2,3,5,6\pmod{8}$ so that $x^2+y^2+z^2=n$ has a solution in $\mathbb{R}$ and a primitive solution in each $\mathbb{Z}_p$. Then we can calculate the right hand side of \eqref{eq1} as a product of densities of the primitive local solutions over $\mathbb{R}$ and the various completions $\mathbb{Z}_p$: \[\beta_\infty=2\pi\sqrt{n},\qquad\beta_2=\frac{\frac{3}{2}}{1-(\frac{D}{2})\frac{1}{2}}, \qquad\beta_p=\frac{1-\frac{1}{p^2}}{1-(\frac{D}{p})\frac{1}{p}}\quad (p\neq 2),\] whose product is indeed \[\beta_\infty\prod_p\beta_p=2\pi\sqrt{n}\,\frac{\frac{3}{2}}{1-\frac{1}{2^2}}\,\frac{1}{\zeta(2)}\,L\left(1,\left(\frac{D}{\cdot}\right)\right)= \frac{24}{\pi}\,\sqrt{n}\,L\left(1,\left(\frac{D}{\cdot}\right)\right).\] \subsection{Ramanujan's ternary quadratic form} What about more general ternary quadratic forms? As a second example let us consider\footnote{This section was inspired by a nice paper of Ono and Soundararajan~\cite{OS}.} \[Q(x,y,z):=x^2+y^2+10z^2.\] Is there a similar elegant formula as \eqref{eq1} for the number of primitive representations? Well, almost. 
What happens is that there is another ternary quadratic form which is equivalent to $Q$ over $\mathbb{R}$ and all the completions $\mathbb{Z}_p$, but not over $\mathbb{Z}$: \[Q'(x,y,z):=2x^2+2y^2+3z^2-2xz.\] So $Q$ and $Q'$ produce the same primitive local densities of representations over the reals and the $p$-adic integers, yet they are really different over the integers. A quadratic form that is locally equivalent to $Q$ is globally equivalent to $Q$ or $Q'$, hence perhaps it is not surprising that the product of primitive local densities is related to a combination of $r^*(n,Q)$ and $r^*(n,Q')$. For this case Siegel's mass formula~\cite{S1} precisely tells us that \begin{equation}\label{eq6} \frac{1}{3}r^*(n,Q)+\frac{2}{3}r^*(n,Q')=\frac{4\sqrt{10}}{3\pi}\,\sqrt{n}\,L\left(1,\left(\frac{-10n}{\cdot}\right)\right), \end{equation} at least when $\gcd(n,10)=1$ as we assume for simplicity. Here the weight of $r^*(n,Q)$ is one-half of the weight of $r^*(n,Q')$, because $Q$ has twice as many automorphs as $Q'$ (eight vs. four). The right hand side manifests as the product of the common primitive local densities of $Q$ and $Q'$, \[\beta_\infty=\frac{2\pi\sqrt{n}}{\sqrt{10}},\qquad\beta_2=1,\qquad\beta_5=\frac{4}{5}, \qquad\beta_p=\frac{1-\frac{1}{p^2}}{1-(\frac{-10n}{p})\frac{1}{p}}\quad (p\neq 2,5).\] Indeed, the product of these quantities equals \[\frac{2\pi\sqrt{n}}{\sqrt{10}}\ \frac{1}{1-\frac{1}{2^2}}\,\frac{\frac{4}{5}}{1-\frac{1}{5^2}}\,\frac{1}{\zeta(2)}\,L\left(1,\left(\frac{-10n}{\cdot}\right)\right)= \frac{4\sqrt{10}}{3\pi}\,\sqrt{n}\,L\left(1,\left(\frac{-10n}{\cdot}\right)\right).\] We are still assuming that $\gcd(n,10)=1$. Comparing \eqref{eq6} with \eqref{eq4}, we see that $n$ has many primitive representations by $Q$ or $Q'$. However, we would like to know if $n$ has any or many primitive representations by $Q$ alone! Using automorphic forms and $L$-functions one can show that $r^*(n,Q)\approx r^*(n,Q')$ with great precision, so that both $r^*(n,Q)$ and $r^*(n,Q')$ are close to the right hand side of \eqref{eq6}. Specifically, by the work of Schulze-Pillot~\cite{SP1} and Waldspurger~\cite{W} based on the Shimura correspondence~\cite{Sh}, there exists a primitive holomorphic cusp form $f$ of weight $2$, level $20$, and trivial nebentypus such that \begin{equation}\label{eq7} \bigl(r^*(n,Q)-r^*(n,Q')\bigr)^2\ll_\varepsilon n^{\frac{1}{2}+\varepsilon}\,L\left(\frac{1}{2},f\otimes\biggl(\frac{-10 n}{\cdot}\biggr)\right). \end{equation} If we assume the Grand Riemann Hypothesis (GRH) for automorphic $L$-functions, then the central $L$-value on the right hand side of \eqref{eq7} is $\ll_\varepsilon n^\varepsilon$, whence from \eqref{eq6} we infer that \begin{equation}\label{eq8} r^*(n,Q)=\frac{4\sqrt{10}}{3\pi}\,\sqrt{n}\,L\left(1,\left(\frac{-10n}{\cdot}\right)\right)+O_\varepsilon\left(n^{\frac{1}{4}+\varepsilon}\right). \end{equation} The main term here is $\gg_\varepsilon n^{\frac{1}{2}-\varepsilon}$ by \eqref{eq4}, and all the implied constants are effective under GRH. Roughly, this means that \[\boxed{\text{GRH and $\gcd(n,10)=1$}\quad\Longrightarrow\quad\text{$r^*(n,Q)$ is large}}\] This is another instance (albeit conditionally) of a quantitative local-to-global principle, because the integers coprime with $10$ are primitively represented by $Q$ over any $\mathbb{Z}_p$. In particular, under GRH, there are only finitely many positive integers such that $\gcd(n,10)=1$ and $r^*(n,Q)=0$, and GRH even provides a \emph{theoretical} algorithm to determine them. 
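To make the counting side of such an algorithm concrete, here is a minimal brute-force sketch (the code and the name \texttt{r\_star} are ours, added purely for illustration) which computes $r^*(n,Q)$ for $Q=x^2+y^2+10z^2$ by exhausting the finitely many lattice points on the ellipsoid; the entire difficulty, of course, lies in knowing in advance for which $n$ the search must succeed.
\begin{verbatim}
from math import gcd, isqrt

def r_star(n):
    # Count primitive solutions of x^2 + y^2 + 10*z^2 = n by brute force.
    count = 0
    for z in range(-isqrt(n // 10), isqrt(n // 10) + 1):
        m = n - 10 * z * z
        for x in range(-isqrt(m), isqrt(m) + 1):
            y2 = m - x * x
            y = isqrt(y2)
            if y * y != y2:
                continue
            for yy in ({y, -y} if y else {0}):
                if gcd(gcd(abs(x), abs(yy)), abs(z)) == 1:
                    count += 1
    return count

# For example, r_star(3) == 0 although gcd(3, 10) = 1, while r_star(11) == 8.
\end{verbatim}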
It is a challenging task to turn GRH into a \emph{practical} algorithm for this problem, solved for $n$ square-free by Ono and Soundararajan~\cite{OS}. We can also estimate the right hand side of \eqref{eq7} without GRH. The functional equation for the twisted $L$-function coupled with the Phragm\'en-Lindel\"of convexity principle shows that the $L$-value in \eqref{eq7} is $\ll_\varepsilon n^{\frac{1}{2}+\varepsilon}$. This \emph{convexity bound}, however, yields the error term $O_\varepsilon\bigl(n^{\frac{1}{2}+\varepsilon}\bigr)$ in \eqref{eq8}, which is too weak to imply $r^*(n,Q)>0$. If, instead, we employ the \emph{subconvexity bound} \[L\left(\frac{1}{2},f\otimes\biggl(\frac{-10 n}{\cdot}\biggr)\right)\ll_\varepsilon n^{\frac{1}{2}-\delta+\varepsilon}\] for some $\delta>0$, then we arrive at a useful variant of \eqref{eq8} for $\gcd(n,10)=1$: \[r^*(n,Q)=\frac{4\sqrt{10}}{3\pi}\,\sqrt{n}\,L\left(1,\left(\frac{-10n}{\cdot}\right)\right)+O_\varepsilon\left(n^{\frac{1-\delta}{2}+\varepsilon}\right).\] Such a conclusion was first established unconditionally with $\delta=\frac{1}{14}$ by Duke~\cite{Du} using a method of Iwaniec~\cite{Iw} (see also \cite{DSP}), and then with $\delta=\frac{1}{8}$ by Bykovski\u\i~\cite{By} using a method of Duke, Friedlander, Iwaniec~\cite{DFI} (see also \cite{BH1} and \cite{HH}). Furthermore, it seems likely that the technical assumptions in a deep result of Conrey and Iwaniec~\cite{CI} can be relaxed so as to yield the conclusion with $\delta=\frac{1}{6}$. To summarize, \[\boxed{\text{Subconvexity and $\gcd(n,10)=1$}\quad\Longrightarrow\quad\text{$r^*(n,Q)$ is large}}\] \section{Lecture Two: More Quadratic Forms} The examples presented in the first lecture can be generalized to a large extent. Let $K$ be a totally real number field with discriminant $D$ and ring of integers $\mathfrak{o}$. Let $(a_{ij})$ be a $3\times 3$ matrix with $a_{ij}=a_{ji}\in\mathfrak{o}$ and $a_{ii}\in 2\mathfrak{o}$ such that the corresponding integral quadratic form \[Q(x_1,x_2,x_3):=\frac{1}{2}\sum_{i,j}a_{ij}x_ix_j\] is totally positive. Then the determinant $d:=\det(a_{ij})$ is totally positive and lies in $2\mathfrak{o}$ by \cite[Lemma~2.1]{H}. We are interested in the number of \emph{primitive} integral representations of a totally positive integer $n\in\mathfrak{o}$ by $Q$, that is \[r^*(n,Q):=\#\left\{(x,y,z)\in\mathfrak{o}^3:\text{ $Q(x,y,z)=n$ and $\gcd(x,y,z)=\mathfrak{o}$}\right\}.\] An obvious necessary condition for $r^*(n,Q)>0$ is that $n$ is primitively represented by $Q$ over any non-archimedean completion $\mathfrak{o}_\mathfrak{p}$ of $\mathfrak{o}$. This condition is invariant under replacing $Q$ by any form in its genus, that is, by any totally positive ternary quadratic form over $\mathfrak{o}$ which is locally equivalent to $Q$ over any $\mathfrak{o}_\mathfrak{p}$. In fact the primitive local densities of the representations only depend on the genus, and by Siegel's mass formula~\cite{S1} their product equals a weighted average over the finitely many equivalence classes contained in the genus: \[r^*(n,\gen Q):=\left(\sum_{[Q']\in\gen Q}\frac{r^*(n,Q')}{\aut(Q')}\right)\left(\sum_{[Q']\in\gen Q}\frac{1}{\aut(Q')}\right)^{-1} = \beta_\infty\prod_\mathfrak{p}\beta_\mathfrak{p}.\] Here $r^*(n,Q')$ and $\aut(Q')$, the number of automorphs of $Q'$, only depend on the class $[Q']$ of $Q'$, hence the sums are well-defined. 
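As a quick consistency check, for $K=\mathbb{Q}$ and the pair of forms from the first lecture one has $\aut(Q)=8$ and $\aut(Q')=4$ (the automorph counts quoted there), and the genus of $Q$ consists of the two classes $[Q]$ and $[Q']$, so the weighted average specializes to \[r^*(n,\gen Q)=\frac{\tfrac{1}{8}\,r^*(n,Q)+\tfrac{1}{4}\,r^*(n,Q')}{\tfrac{1}{8}+\tfrac{1}{4}}=\frac{1}{3}\,r^*(n,Q)+\frac{2}{3}\,r^*(n,Q'),\] which is precisely the left hand side of \eqref{eq6}.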
For $\gcd(n,d)=\mathfrak{o}$ we can simplify the right hand side using \cite[\S 7 of Part III]{S1} and \cite[Lemma~3.2]{H} to obtain \begin{equation}\label{eq9} r^*(n,\gen Q)=c(n,Q)\,(\mathcal{N} n)^{\frac{1}{2}}\,L\left(1,\left(\frac{-2dn}{\cdot}\right)\right), \end{equation} where $\mathcal{N}$ stands for the norm and $c(n,Q)$ equals, up to a positive constant depending on $K$, the density of primitive solutions of the congruence $Q(x,y,z)\equiv n\pmod{4d}$. In particular, $c(n,Q)>0$ is equivalent to primitive local representability modulo~$4d$, in which case $r^*(n,\gen Q)=(\mathcal{N} n)^{\frac{1}{2}+o(1)}$ by a straightforward extension of \eqref{eq4}. Surprisingly, $r^*(n,\gen Q)$ being large does not ensure that $n$ is primitively represented by $Q$, even when $\mathcal{N} n$ is large and coprime with $d$. Here are two examples, borrowed from \cite{EHH} and \cite{SP2}. When $K=\mathbb{Q}$, the forms $x^2+3y^2+36z^2$ and $3x^2+4y^2+9z^2$ are in the same genus, yet any square number coprime with $6$ is primitively represented by exactly one of them. When $K=\mathbb{Q}(\sqrt{35})$, an integer of the form $7p^2$ with a rational prime $p\equiv 1\pmod{7}$ is not a sum of three squares in $\mathfrak{o}$, although a sum of three coprime squares in $\mathfrak{o}_\mathfrak{p}$ for any prime ideal $\mathfrak{p}$. The proper discussion of this phenomenon would lead us too far as it relies on the theory of spinor genera and theta series, see the recent surveys \cite{H2,SP2} and the references therein. Let us just say that in the modern theory one considers lattices and their representations in a quadratic space over $K$, and the restriction to free lattices yields the classical theory of integral quadratic forms and their representations \cite[\S 82]{M}. A class is an orbit of lattices under the group of rotations of the space. A genus is an orbit under the group of adelic rotations, while a spinor genus is an orbit under a certain normal subgroup of adelic rotations \cite[\S 102]{M}. Lattices in the same genus are isomorphic as $\mathfrak{o}$-modules\footnote{This was kindly explained to me by Rainer Schulze-Pillot, here is a variant of his argument. Let $L$ be a lattice in a fixed quadratic space $V$ over $K$. By \cite[81:3]{M}, there is a basis $(x_i)$ of $V$ and fractional ideals $\mathfrak{a}_i$ in $K$ such that $L=\mathfrak{a}_1x_1+\dots+\mathfrak{a}_nx_n$. Writing $\mathfrak{a}=\mathfrak{a}_1\dots \mathfrak{a}_n$, the volume of $L$ equals $\mathfrak{a}^2\disc(x_1,\dots,x_r)$, which is $\mathfrak{a}^2\disc(V)$ modulo $(F^\times)^2$. Therefore the volume of $L$ (hence also the genus of $L$) determines $\mathfrak{a}$ modulo $F^\times$, which is the Steinitz class of $L$ as an $\mathfrak{o}$-module.}, hence the set of free lattices is a disjoint union of genera (resp. spinor genera) in the modern sense. We shall conveniently avoid the above type of exceptions by restricting to totally positive \emph{non-square} integers $n\in\mathfrak{o}$ coprime with $dD$. Let $r^*(n,\spn Q)$ be defined similarly as $r^*(n,\gen Q)$, but over the spinor genus of $Q$. By the work of Kneser~\cite{Kn} and Hsia~\cite{Hs}, the two averages agree now\footnote{Let $L$ be a free ternary lattice corresponding to the class of $Q$. It suffices to show that $r(n,\spn\mathfrak{a} L)=r(n,\gen\mathfrak{a} L)$ for any ideal $\mathfrak{a}$ in $\mathfrak{o}$. Take a prime ideal $\mathfrak{p}\nmid dD$ such that $\ord_\mathfrak{p}(n)$ is odd. 
The quadratic extension $K_\mathfrak{p}(\sqrt{-2dn})/K_\mathfrak{p}$ is ramified, while $L_\mathfrak{p}$ is unimodular, hence local class field theory combined with \cite[92:5]{M} shows that \cite[(4)]{SP4} fails at $\mathfrak{p}$ in the present setting.}, so that the bounds usually proved for $r^*(n,Q)-r^*(n,\spn Q)$ apply for $r^*(n,Q)-r^*(n,\gen Q)$. By the work of Baruch--Mao~\cite{BM}, Blasius~\cite{Bl}, Schulze-Pillot~\cite{SP3}, Waldspurger~\cite{W2}, there exist finitely many primitive holomorphic Hilbert cusp forms $f_1,\dots, f_r$ over $K$ depending on $Q$, each of weight $(2,\dots,2)$ and trivial nebentypus, such that \begin{equation}\label{eq10} \bigl(r^*(n,Q)-r^*(n,\gen Q)\bigr)^2\ll_{Q,K,\varepsilon} (\mathcal{N} n)^{\frac{1}{2}+\varepsilon}\, \max_{1\leqslant i\leqslant r}L\left(\frac{1}{2},f_i\otimes\biggl(\frac{-2dn}{\cdot}\biggr)\right). \end{equation} As before, any subconvex bound for the central $L$-values on the right hand side yields an asymptotic formula for $r^*(n,Q)$ with a power saving error term. More specifically and generally, let us assume the bound \begin{equation}\label{eq11} L\left(\frac{1}{2},\pi\otimes \chi\right) \ll_{\pi,\chi_\infty,K,\varepsilon} (\mathcal{N}\mathfrak{q})^{\frac{1}{2}-\delta+\varepsilon}, \end{equation} where $\delta$ is a positive constant, $\pi$ is any irreducible unitary cuspidal representation of $\GL_2$ over $K$, and $\chi$ is any Hecke character of $K$ of conductor $\mathfrak{q}$. Then from \eqref{eq9} and \eqref{eq10} we infer, for totally positive non-square $n\in\mathfrak{o}$ coprime with $dD$, \begin{equation}\label{eq12} r^*(n,Q)=c(n,Q)\,(\mathcal{N} n)^{\frac{1}{2}}\,L\left(1,\left(\frac{-2dn}{\cdot}\right)\right)+O_{Q,K,\varepsilon}\left((\mathcal{N} n)^{\frac{1-\delta}{2}+\varepsilon}\right). \end{equation} Recall that $c(n,Q)>0$ is equivalent to primitive local representability of $n$ by $Q$ modulo~$4d$, in which case the main term dominates the error term for $\mathcal{N} n$ sufficiently large: \[\boxed{\text{$\gcd(n,dD)=\mathfrak{o}$ and $n\neq\square$ and $c(n,Q)>0$}\quad\Longrightarrow\quad\text{$r^*(n,Q)$ is large}}\] The conclusion \eqref{eq11} was first established by Cogdell, Piatetski-Shapiro, Sarnak~\cite{CPSS} with the value $\delta=\frac{1-2\theta}{14+4\theta}$, at least for $\pi$ induced by a holomorphic Hilbert cusp form, which suffices for the application \eqref{eq12}. For general $\pi$ (and arbitrary $K$), the breakthrough is due to Venkatesh~\cite{Ve} who achieved $\delta=\frac{(1-2\theta)^2}{14-2\theta}$. In these results, $0\leqslant\theta\leqslant \frac{1}{2}$ is an approximation towards the Ramanujan--Petersson conjecture, the current record being $\theta=\frac{7}{64}$ due to Blomer and Brumley~\cite{BB}. Interestingly, under the Ramanujan--Petersson conjecture $\theta=0$ both expressions become $\delta=\frac{1}{14}$, matching the already mentioned results of Duke~\cite{Du} and Iwaniec~\cite{Iw}. For totally real $K$ and general $\pi$, Blomer and Harcos~\cite{BH2} established the Burgess-like subconvexity saving $\delta=\frac{1-2\theta}{8}$, and the proof of this result is outlined in the rest of these notes. Recently Maga~\cite{Ma,Ma2} extended the method of \cite{BH2} to arbitrary number fields. Wu~\cite{Wu} obtained the same $\delta=\frac{1-2\theta}{8}$ in full generality, even uniformly in $\chi_\infty$, by a different method based on the deep work of Michel and Venkatesh~\cite{MV}. \begin{theorem}[\cite{BH2}]\label{thm1} Let $K$ be a totally real number field. 
Let $\pi$ be an irreducible unitary cuspidal representation of $\GL_2$ over $K$, and $\chi$ a Hecke character of $K$ of conductor $\mathfrak{q}$. Then for any $\varepsilon>0$ one has \[L\left(\frac{1}{2},\pi\otimes \chi\right) \ll_{\pi,\chi_\infty,K,\varepsilon} (\mathcal{N}\mathfrak{q})^{\frac{3+2\theta}{8}+\varepsilon}.\] \end{theorem} \begin{theorem}[\cite{BM,Bl,BH2,SP3,S1,W2}]\label{thm2} Let $K$ be a totally real number field with discriminant $D$. Let $Q$ be a totally positive integral ternary quadratic form over $K$ with determinant $d$. Let $n$ be a totally positive non-square integer in $K$ coprime to $dD$. Then the number of primitive representations $Q(x,y,z)=n$ equals \[r^*(n,Q)=c(n,Q)\,(\mathcal{N} n)^{\frac{1}{2}}\,L\left(1,\left(\frac{-2dn}{\cdot}\right)\right)+O_{Q,K,\varepsilon}\left((\mathcal{N} n)^{\frac{7+2\theta}{16}+\varepsilon}\right),\] where $c(n,Q)$ is a constant times the density of primitive solutions of the congruence $Q(x,y,z)\equiv n\pmod{4d}$. \end{theorem} \section{Lecture Three: Preliminaries from Number Theory} We collect some preliminaries for the proof of Theorem~\ref{thm1}, to be outlined in the fourth lecture. As explained in the first two lectures, Theorem~\ref{thm2} follows from Theorem~\ref{thm1}. For more detail concerning the preliminaries the reader should consult \cite{BH2,GJ,N,We}. \subsection{Adeles and ideles} The adele ring of $K$ is a restricted direct product of the completions $K_v$ at the various places of $K$: \[\mathbb{A}:=K_\infty \times \mathbb{A}_{\text{fin}}=\prod_{v\mid\infty}K_v\times{\prod_\mathfrak{p}}' K_\mathfrak{p},\] where $\mathfrak{p}$ runs through the prime ideals of $\mathfrak{o}$. The topology of the additive group $(\mathbb{A},+)$ is determined by the fundamental neighborhoods of zero in $\mathbb{A}_{\text{fin}}$: \[U^\mathfrak{m}:=\prod_\mathfrak{p} U_\mathfrak{p}^{(\ord_\mathfrak{p}\mathfrak{m})},\] where $\mathfrak{m}\subseteq\mathfrak{o}$ is any nonzero ideal, $\ord_\mathfrak{p}\mathfrak{m}$ denotes the exponent of $\mathfrak{p}$ in $\mathfrak{m}$, and \[U_\mathfrak{p}^{(n)}:=\mathfrak{p}^n\mathfrak{o}_\mathfrak{p},\qquad n\in\mathbb{N}.\] Then $(\mathbb{A},+)$ is a locally compact Hausdorff topological group such that \[K\overset{\text{diag}}\hookrightarrow\mathbb{A}\text{ is discrete},\qquad K\backslash\mathbb{A}\text{ is compact}.\] Similarly, the group of ideles of $K$ is a restricted direct product \[\mathbb{A}^\times:=K_\infty^\times \times \mathbb{A}_{\text{fin}}^\times=\prod_{v\mid\infty}K_v^\times\times{\prod_\mathfrak{p}}' K_\mathfrak{p}^\times,\] whose topology is determined by the fundamental neighborhoods of one in $\mathbb{A}_{\text{fin}}^\times$: \[V^\mathfrak{m}:=\prod_\mathfrak{p} V_\mathfrak{p}^{(\ord_\mathfrak{p}\mathfrak{m})},\] where $\mathfrak{m}\subseteq\mathfrak{o}$ is any nonzero ideal, $\ord_\mathfrak{p}\mathfrak{m}$ denotes the exponent of $\mathfrak{p}$ in $\mathfrak{m}$, and \[V_\mathfrak{p}^{(n)}:=\begin{cases}\mathfrak{o}_\mathfrak{p}^\times,&n=0;\\1+\mathfrak{p}^n\mathfrak{o}_\mathfrak{p},&n>0.\end{cases}\] Then $(\mathbb{A}^\times,\cdot)$ is a locally compact Hausdorff topological group such that \[\mathbb{A}^\times=K_{\infty,+}^{\text{diag}}\mathbb{A}^1\cong \mathbb{R}_{>0}\times\mathbb{A}^1,\qquad K^\times\overset{\text{diag}}\hookrightarrow\mathbb{A}^1\text{ is discrete},\qquad K^\times\backslash\mathbb{A}^1\text{ is compact}.\] Here $\mathbb{A}^1$ is the subgroup of ideles of module $1$, and $K_{\infty,+}^{\text{diag}}$ denotes $\mathbb{R}_{>0}$ diagonally embedded into 
$K_\infty^\times$. \subsection{Hecke characters and Gr\"ossencharacters} A \emph{Hecke character} is a continuous homomorphism $\chi:\mathbb{A}^\times\to S^1$ which is trivial on $K^\times$. The kernel of such a character $\chi$ always contains a subgroup of the form $\{(1,\dots,1)\}\times V^\mathfrak{m}$. Let $I^\mathfrak{m}$ denote the group of fractional ideals of $K$ coprime with $\mathfrak{m}$, this is a free abelian group generated by the prime ideals $\mathfrak{p}\nmid\mathfrak{m}$. Choose a prime element $p_\mathfrak{p}\in\mathfrak{p}-\mathfrak{p}^2$ for each $\mathfrak{p}\nmid\mathfrak{m}$, and define the character $\tilde\chi:I^\mathfrak{m}\to S^1$ via \[\tilde\chi(\mathfrak{p}):=\chi(\dots,1,1,p_\mathfrak{p},1,1,\dots),\qquad\mathfrak{p}\nmid\mathfrak{m}.\] Observe that $\tilde\chi$ is independent of the choice of the $p_\mathfrak{p}$'s. Moreover, for any \emph{principal} ideal $(a)\in I^\mathfrak{m}$ have \begin{align*} \tilde\chi((a)) &=\chi\bigl(\underbrace{1,\dots,1}_{\text{arch}},\underbrace{\dots,p_\mathfrak{p}^{\ord_\mathfrak{p}(a)},\dots}_{\text{non-arch}}\bigr)\\ &=\chi\bigl(\underbrace{1,\dots,1}_{v\mid\infty},\underbrace{1,\dots,1}_{\mathfrak{p}\mid\mathfrak{m}},\underbrace{a,a,\dots}_{\mathfrak{p}\nmid\mathfrak{m}}\bigr)\\ &=\chi\bigl(\underbrace{a^{-1},\dots,a^{-1}}_{v\mid\infty},\underbrace{a^{-1},\dots,a^{-1}}_{\mathfrak{p}\mid\mathfrak{m}},\underbrace{1,1,\dots}_{\mathfrak{p}\nmid\mathfrak{m}}\bigr). \end{align*} If we interpret $a\in K^\times$ as an element of $K_\infty^\times$ by embedding $a$ diagonally, and as an element of $(\mathfrak{o}/\mathfrak{m})^\times$ by reducing $a$ modulo~$\mathfrak{m}$, then we infer that \[\tilde\chi((a))=\tilde\chi_\infty(a)\tilde\chi_\text{fin}(a),\qquad (a)\in I^\mathfrak{m},\] where $\tilde\chi_\infty:K_\infty^\times\to S^1$ and $\tilde\chi_\text{fin}:(\mathfrak{o}/\mathfrak{m})^\times\to S^1$ are uniquely determined characters\footnote{In these notes, all characters are continuous.}. A character $\tilde\chi:I^\mathfrak{m}\to S^1$ with this property is called a \emph{Gr\"ossencharacter} modulo~$\mathfrak{m}$. So any Hecke character can be regarded as a Gr\"ossencharacter. Conversely, any Gr\"ossencharacter $\tilde\chi:I^\mathfrak{m}\to S^1$ arises in this way from a Hecke character $\chi:\mathbb{A}^\times\to S^1$ trivial on $\{(1,\dots,1)\}\times V^\mathfrak{m}$. To see this, define \begin{align*} V&:=K_\infty^\times\times\prod_{\mathfrak{p}\mid\mathfrak{m}}V_\mathfrak{p}^{(\ord_\mathfrak{p}\mathfrak{m})}\prod_{\mathfrak{p}\nmid\mathfrak{m}}K_\mathfrak{p}^\times;\\ \chi(a)&:=\tilde\chi_\infty^{-1}\left(a_\infty\right)\tilde\chi\bigl(\prod_{\mathfrak{p}\nmid\mathfrak{m}}\mathfrak{p}^{\ord_\mathfrak{p} a_\mathfrak{p}}\bigr),\qquad a\in V. \end{align*} Observe that $V$ is a subgroup of $\mathbb{A}^\times$, and $\chi:V\to S^1$ is a character trivial on $\{(1,\dots,1)\}\times V^\mathfrak{m}$. This character extends uniquely to a character $\chi:K^\times V\to S^1$ trivial on $K^\times$, because for $a\in K^\times\cap V$ we have $a\equiv 1\pmod{\mathfrak{m}}$, whence \[\chi(a)=\tilde\chi_\infty^{-1}(a)\tilde\chi((a))=\tilde\chi_\text{fin}(a)=1,\qquad a\in K^\times\cap V.\] On the other hand, $K^\times V$ equals $\mathbb{A}^\times$ by weak approximation in $K$, so we are done. If, for given $\chi$, we choose the largest $\mathfrak{m}$ (called the \emph{conductor}), then $\tilde\chi$ will be primitive, and vice versa. 
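For orientation, consider the familiar case $K=\mathbb{Q}$ (a standard example, recorded only for concreteness): if $\chi_{D}$ is a primitive Dirichlet character modulo $m$, so that $\mathfrak{m}=(m)$, then $\tilde\chi((n)):=\chi_{D}(n)$ for positive integers $n$ coprime with $m$ defines a primitive Gr\"ossencharacter with $\tilde\chi_{\text{fin}}=\chi_{D}$, and $\tilde\chi_\infty$ is trivial or the sign character according as $\chi_{D}(-1)=1$ or $\chi_{D}(-1)=-1$. In this way the finite-order Hecke characters of $\mathbb{Q}$ correspond to the primitive Dirichlet characters, and a general Hecke character of $\mathbb{Q}$ differs from one of these by a twist $|\cdot|^{it}$ of the idele norm.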
To summarize, we have a natural bijection \[\boxed{\text{$\chi$ a Hecke character}\quad\longleftrightarrow\quad\text{$\tilde\chi$ a primitive Gr\"ossencharacter}}\] In the rest of these notes, $\chi$ will always stand for a Hecke character, and $\tilde\chi$ for the corresponding primitive Gr\"ossencharacter. \subsection{The automorphic spectrum of $\GL_2$} Let us restrict to trivial central character for simplicity. Then the corresponding $L^2$ space of automorphic functions on $\GL_2$ over $K$ decomposes as a direct integral of irreducible automorphic representations, \[L^2(\GL_2(K)\backslash \GL_2(\mathbb{A}))= \left(\bigoplus_{\pi} V_{\pi}\right)\oplus \left(\bigoplus_{\chi^2=1} V_\chi\right)\oplus \left(\int\limits_{\{\chi,\chi^{-1}\}}V_{\chi,\chi^{-1}}\,d\{\chi,\chi^{-1}\}\right),\] in the sense that each function in the $L^2$ space is a convergent integral of functions from each subspace, and a Plancherel formula holds. In this decomposition, which is compatible with the right action of $\GL_2(\mathbb{A})$, \begin{itemize} \item $V_\pi$ is an irreducible subspace of the cuspidal space defined by \[\int_{K\backslash\mathbb{A}}\phi\left(\left(\begin{matrix}1 & x\\0 & 1\end{matrix}\right) g\right) dx = 0 \quad\text{for almost all } g \in \GL_2(\mathbb{A}).\]\vskip 6pt \item $V_\chi$ is the one-dimensional subspace spanned by the function $g\mapsto\chi(\det g)$.\vskip 8pt \item $V_{\chi,\chi^{-1}}$ consists of the Eisenstein series \[E(\varphi,g)\doteq \sum_{\gamma \in P(K)\backslash \GL_2(K)} \varphi(\gamma g),\] where $P$ stands for upper triangular matrices, and $\varphi : \GL_2(\mathbb{A}) \to \mathbb{C}$ satisfies \[\int_{\mathcal{K}} |\varphi(k)|^2 dk < \infty,\qquad \mathcal{K}:=\SO_2(K_\infty)\times\mathcal{K}(\mathfrak{o});\] \[\varphi\left(\begin{pmatrix} a & x\\0 & b\end{pmatrix}g\right) = \chi\left(\frac{a}{b}\right) \left|\frac{a}{b}\right|^{1/2}\varphi(g), \qquad \begin{pmatrix}a & x\\0 & b\end{pmatrix}\in P(\mathbb{A}).\] Here $\mathcal{K}(\mathfrak{o})$ is a maximal compact subgroup of $\GL_2(\mathbb{A}_{\text{fin}})$ to be defined in the next subsection. More precisely, the sum converges only when the exponent $1/2$ is replaced by any $s\in\mathbb{C}$ with $\Re s>1$. The symbol $\doteq$ stands for evaluation at $s=1/2$ of the function obtained by meromorphic continuation to $s\in\mathbb{C}$, keeping the restriction of $\varphi$ to $\mathcal{K}$ fixed all the way. \end{itemize} \subsection{Conductor, $L$-function, Kirillov model} Let $\mathfrak{d}$ denote the different ideal of $K$. For each cuspidal space $V_\pi$ (and also for each Eisenstein space $V_{\chi,\chi^{-1}}$) there is a largest congruence subgroup \[\mathcal{K}(\mathfrak{m}):=\prod_\mathfrak{p}\biggl\{\begin{pmatrix}a & b\\c & d\end{pmatrix}\in\GL_2(K_\mathfrak{p}):\ a,d \in \mathfrak{o}_\mathfrak{p},\ b \in \mathfrak{d}_\mathfrak{p}^{-1}, \ c \in \mathfrak{m}\mathfrak{d}_\mathfrak{p},\ ad-bc\in\mathfrak{o}_\mathfrak{p}^\times\biggr\},\] regarded as a subgroup of $\GL_2(\mathbb{A})$ in the obvious way, such that $V_\pi$ contains a nonzero vector fixed by the right action of $\mathcal{K}(\mathfrak{m})$. The corresponding nonzero ideal $\mathfrak{m}$ is called the \emph{conductor} of $\pi$, denoted $\mathfrak{c}_\pi$. It is known that $\mathfrak{c}_{\chi,\chi^{-1}}=\mathfrak{c}_\chi^2$. If $V_\pi(\mathfrak{m})$ denotes the $\mathcal{K}(\mathfrak{m})$-fixed subspace of $V_\pi$, then $V_\pi(\mathfrak{c}_\pi)$ (the space of \emph{newforms}) is particularly nice. 
Namely, for each nonzero ideal $\mathfrak{m}$, the Hecke operator $T(\mathfrak{m})$ acts by some scalar $\lambda_{\pi}(\mathfrak{m})\in\mathbb{C}$ on this space. These Hecke eigenvalues satisfy $\lambda_{\pi}(\mathfrak{m})\ll_{\varepsilon}(\mathcal{N}\mathfrak{m})^{\theta+\varepsilon}$ with $\theta=\frac{7}{64}$ by~\cite{BB}, and they determine the $L$-function of~$\pi$: \begin{align*} L(s,\pi)&=\sum_{\{0 \}\neq \mathfrak{m} \subseteq \mathfrak{o}} \frac{\lambda_{\pi}(\mathfrak{m})}{(\mathcal{N}\mathfrak{m})^{s}}\\ &=\prod_{\mathfrak{p}\mid\mathfrak{c}_\pi}\frac{1}{1-\lambda_\pi(\mathfrak{p})(\mathcal{N}\mathfrak{p})^{-s}}\prod_{\mathfrak{p}\nmid\mathfrak{c}_\pi}\frac{1}{1-\lambda_\pi(\mathfrak{p})(\mathcal{N}\mathfrak{p})^{-s}+(\mathcal{N}\mathfrak{p})^{-2s}}. \end{align*} Fixing a nontrivial character $\psi:K\backslash\mathbb{A}\to S^1$, any newform $\phi\in V_\pi(\mathfrak{c}_\pi)$ has a Fourier expansion \[\phi\left(\left(\begin{matrix} y & x\\ 0 & 1\end{matrix}\right)\right) = \sum_{r \in K^\times} \frac{\lambda_{\pi}(r y_{\text{fin}})}{\sqrt{\mathcal{N} (r y_{\text{fin}})}} W_{\phi}(r y_{\infty}) \psi(rx),\qquad y\in \mathbb{A}^{\times},\ \ x \in \mathbb{A},\] where $W_\phi\in L^2(K_\infty^\times,d^\times y)$ is given by \[W_{\phi}(y) := \int_{K\backslash\mathbb{A}} \phi\left(\begin{pmatrix} y & x\\ 0 & 1\end{pmatrix}\right) \psi(-x) \,dx,\qquad y\in K_\infty^\times.\] So we restricted here the Whittaker model to upper triangular matrices (this is called the Kirillov model), and we separated the archimedean and non-archimedan parts. An important feature is that any Whittaker function $W_\phi\in L^2(K_\infty^\times,d^\times y)$ occurs for some newform $\phi\in V_\pi(\mathfrak{c}_\pi)$, and $\|W_\phi\|$ is proportional to $\|\phi\|$ with a constant depending very mildly on~$\pi$. What about ``oldforms of level $\mathfrak{c}$'', i.e.\ the elements of $V_\pi(\mathfrak{c})$ for $\mathfrak{c}$ divisible by $\mathfrak{c}_\pi$? For any nonzero ideal $\mathfrak{t}\mid\mathfrak{c}\mathfrak{c}_\pi^{-1}$ we have an isometric embedding of complex vector spaces \[R_\mathfrak{t}:V_\pi(\mathfrak{c}_\pi)\hookrightarrow V_\pi(\mathfrak{c}),\qquad (R_\mathfrak{t}\phi)(g):= \phi\left(g\begin{pmatrix}t^{-1}&0\\0&1\end{pmatrix}\right),\] where $t\in\mathbb{A}_\text{fin}^\times$ is any finite idele representing $\mathfrak{t}$. Then it follows from the local result of Casselman~\cite{Ca} or the global result of Miyake~\cite{Mi} that \[V_\pi(\mathfrak{c})=\bigoplus_{\mathfrak{t}\mid\mathfrak{c}\mathfrak{c}_{\pi}^{-1}} R_\mathfrak{t} V_\pi(\mathfrak{c}_\pi),\qquad \mathfrak{c}_\pi\mid\mathfrak{c}.\] A technical difficulty here is that the spaces $R_\mathfrak{t} V_\pi(\mathfrak{c}_\pi)$ are in general not orthogonal. 
Nevertheless, using a Gram--Schmidt orthogonalization process, we can obtain an orthogonal decomposition \[V_\pi(\mathfrak{c})=\bigoplus_{\mathfrak{t}\mid\mathfrak{c}\mathfrak{c}_{\pi}^{-1}}R^{(\mathfrak{t})}V_\pi(\mathfrak{c}_\pi),\qquad \mathfrak{c}_\pi\mid\mathfrak{c},\] and for every $\phi\in R_\mathfrak{t} V_\pi(\mathfrak{c}_\pi)$ a Fourier expansion \[\phi\left(\left(\begin{matrix} y & x\\ 0 & 1\end{matrix}\right)\right) = \sum_{r \in K^{\times}} \frac{\lambda^{(\mathfrak{t})}_{\pi}(r y_{\text{fin}}) }{\sqrt{\mathcal{N}(r y_{\text{fin}})}} W_{\phi}(r y_{\infty}) \psi(rx) ,\qquad y\in \mathbb{A}^{\times},\ \ x \in \mathbb{A},\] \[W_\phi:=W_{(R^{(\mathfrak{t})})^{-1}\phi}\quad\text{and}\quad \lambda^{(\mathfrak{t})}_{\pi}(\mathfrak{m}) := \sum_{\mathfrak{s} \mid\gcd(\mathfrak{t},\mathfrak{m})} \alpha_{\mathfrak{t}, \mathfrak{s}} (\mathcal{N}\mathfrak{s})^{1/2} \lambda_{\pi}(\mathfrak{m}\mathfrak{s}^{-1}).\] The constants $\alpha_{\mathfrak{t}, \mathfrak{s}}\in\mathbb{C}$ are explicit, although difficult to estimate in general. Similarly, the Eisenstein spaces have an orthogonal decomposition \[V_{\chi,\chi^{-1}}(\mathfrak{c})=\bigoplus_{\mathfrak{t}\mid\mathfrak{c}\mathfrak{c}_\chi^{-2}} R^{(\mathfrak{t})} V_{\chi,\chi^{-1}}(\mathfrak{c}_\chi^2), \qquad \mathfrak{c}_\chi^2\mid\mathfrak{c},\] such that every $\phi \in R^{(\mathfrak{t})}V_{\chi,\chi^{-1}}(\mathfrak{c}_\chi^2)$ has a Fourier expansion \[\phi\left( \left(\begin{matrix} y & x\\ 0 & 1\end{matrix}\right)\right) = \phi_{\text{const}}(y) + \sum_{r \in K^\times} \frac{\lambda^{(\mathfrak{t})}_{\chi,\chi^{-1}}(r y_{\text{fin}})}{\sqrt{\mathcal{N} (r y_{\text{fin}})}} W_{\phi}(r y_{\infty}) \psi(rx),\qquad y\in \mathbb{A}^{\times},\ \ x \in \mathbb{A},\] \[\lambda^{(\mathfrak{t})}_{\chi,\chi^{-1}}(\mathfrak{m})\ll_{K,\varepsilon} (\mathcal{N}\gcd(\mathfrak{t},\mathfrak{m}))(\mathcal{N}\mathfrak{m})^\varepsilon.\] \section{Lecture Four: Subconvexity of Twisted $L$-functions} In this final lecture we highlight the main ideas in the proof of Theorem~\ref{thm1}, following closely the original source \cite{BH2}. The proof is inspired by and builds on important earlier work by several researchers: see \cite{BH3,BH2} for references and history. For the sake of readability, we omit some technicalities, and we do not indicate the dependence of implied constants on $\pi$, $\chi_\infty$, $K$, $\varepsilon$. By an approximate functional equation it suffices to bound finite sums \begin{equation}\label{eq13} \mathcal{L}_{\chi_\text{fin}}:= \sum_{0 << r \in \mathfrak{y}}\frac{\lambda_{\pi}(r\mathfrak{y}^{-1})\tilde\chi_\text{fin}(r)}{\sqrt{\mathcal{N}(r\mathfrak{y}^{-1})}}W(r), \end{equation} where $W:K_{\infty,+}^{\times}\to\mathbb{C}$ is a smooth weight function of compact support cutting off at about $\mathcal{N} r\approx(\mathcal{N}\mathfrak{q})^{1+\varepsilon}$, and $\mathfrak{y}$ represents a narrow ideal class. Recall that $\tilde\chi_\text{fin}:(\mathfrak{o}/\mathfrak{q})^\times\to S^1$ is determined by the Hecke character $\chi:\mathbb{A}^\times\to S^1$. We shall estimate \eqref{eq13} by a general principle of analytic number theory: sums in a harmonic family are easier to bound together than individually. 
To illustrate this point, let us assume that we are back in school, and we need to prove the inequality \[|\sin x+\cos x|\leqslant\sqrt{2}.\] Of course, we can accomplish this task in many ways, but a particularly nice way is to ``invent the harmonic complement'' $|\sin x-\cos x|$ and observe the identity \[(\sin x+\cos x)^2+(\sin x-\cos x)^2=2.\] In order to estimate \eqref{eq13}, we consider \emph{all the sums} \[\mathcal{L}_{\xi}:= \sum_{0 << r \in \mathfrak{y}} \frac{\lambda_{\pi}(r\mathfrak{y}^{-1})\xi(r)}{\sqrt{\mathcal{N}(r\mathfrak{y}^{-1})}}W(r),\] where $\xi$ is any character of $(\mathfrak{o}/\mathfrak{q})^{\times}$, and then the \emph{amplified second moment} \[\mathcal{S}:=\sum_{\xi \in \widehat{(\mathfrak{o}/\mathfrak{q})^{\times}}}\left|\sum_{\ell} \xi(\ell)\ov{\tilde\chi_\text{fin}(\ell)}\right|^2 \bigl|\mathcal{L}_{\xi}\bigr|^2,\] where $\ell\in\mathfrak{o}$ runs through certain elements of norm $\mathcal{N}\ell\in[L,2L]$ generating prime ideals $(\ell)\nmid\mathfrak{q}$. The quantity $L>1$ is the \emph{amplifier length}, and the previous sum is \[\mathcal{S}\gg(\mathcal{N}\mathfrak{q})^{-\varepsilon}L^2\bigl|\mathcal{L}_{\tilde\chi_\text{fin}}\bigr|^2\] by positivity. Applying Plancherel and some easy estimates for the diagonal contribution, we arrive at \begin{equation}\label{eq14} \frac{|\mathcal{L}_{\tilde\chi_\text{fin}}|^2}{(\mathcal{N}\mathfrak{q})^{1+\varepsilon}}\ll \frac{1}{L}+ \sum_{0\neq q \in\mathfrak{q}\mathfrak{y}\cap\mathcal{B}}\ \sum_{\substack{\ell_1r_1 - \ell_2r_2 =q \\ 0\neq r_1, r_2 \in \mathfrak{y}}} \frac{\lambda_{\pi}(r_1\mathfrak{y}^{-1})\ov{\lambda_\pi(r_2\mathfrak{y}^{-1})}}{\sqrt{\mathcal{N} (r_1r_2\mathfrak{y}^{-2})}}W(r_1)\ov{W(r_2)}, \end{equation} where $(\ell_1)$, $(\ell_2)$ are some prime ideals of norm about $L$, and $\mathcal{B}\subset K_\infty$ is some ball of volume at most $L(\mathcal{N}\mathfrak{q})^{1+\varepsilon}$ centered at the origin. To handle the \emph{shifted convolution sum} inside \eqref{eq14}, we write it as \begin{equation}\label{eq15} \sum_{\substack{r_1-r_2=q \\ r_1, r_2 \in K^\times}} \frac{\lambda_{\pi}(r_1\ell_1^{-1}\mathfrak{y}^{-1})\ov{\lambda_\pi(r_2\ell_2^{-1}\mathfrak{y}^{-1})}} {\sqrt{\mathcal{N} (r_1\ell_1^{-1}r_2\ell_2^{-1}\mathfrak{y}^{-2})}}W_1(r_1)\ov{W_2(r_2)}. \end{equation} The weight functions $W_1,W_2:K_\infty^\times\to\mathbb{C}$ are nice, hence they determine vectors $\phi_1,\phi_2\in V_\pi(\mathfrak{c}_\pi)$ such that \[\phi_j\left(\left(\begin{matrix} y & x\\ 0 &1\end{matrix}\right)\right)= \sum_{r \in K^\times}\frac{\lambda_{\pi}(r y_{\text{fin}})}{\sqrt{\mathcal{N} (r y_{\text{fin}})}} W_j(r y_{\infty}) \psi(rx),\qquad y\in \mathbb{A}^{\times},\ \ x \in \mathbb{A}.\] Let us fix $y\in \mathbb{A}^{\times}$ such that $y_\infty=(1,\dots,1)$ and $(y_{\text{fin}})=\mathfrak{y}^{-1}$. 
Let us also define $\Phi\in L^2(\GL_2(K)\backslash \GL_2(\mathbb{A}))$ via \[\Phi(g):=\phi_1\left(g\begin{pmatrix}\ell_1^{-1}&0\\0& 1\end{pmatrix}\right) \ov{\phi_2}\left(g\begin{pmatrix}\ell_2^{-1}&0\\0& 1\end{pmatrix}\right),\qquad g\in\GL_2(\mathbb{A}).\] It is straightforward to check that $\Phi$ is fixed by the right action of $\mathcal{K}(\mathfrak{c})$ for \[\mathfrak{c}:=\mathfrak{c}_\pi\lcm((\ell_1),(\ell_2)).\] Moreover, \[\Phi\left(\left(\begin{matrix} y & x\\ 0 &1\end{matrix}\right)\right)= \sum_{r_1,r_2\in K^\times}\frac{\lambda_{\pi}(r_1\ell_1^{-1}\mathfrak{y}^{-1})\ov{\lambda_\pi(r_2\ell_2^{-1}\mathfrak{y}^{-1})}} {\sqrt{\mathcal{N} (r_1\ell_1^{-1}r_2\ell_2^{-1}\mathfrak{y}^{-2})}}W_1(r_1)\ov{W_2(r_2)}\psi((r_1-r_2)x),\] whence \eqref{eq15} really equals \[\int_{K\backslash\mathbb{A}} \Phi\left(\begin{pmatrix} y & x\\ 0 &1\end{pmatrix}\right) \psi(-qx) \,dx.\] We decompose $\Phi$ in the level $\mathfrak{c}$ spectrum of $L^2(\GL_2(K)\backslash \GL_2(\mathbb{A}))$, following the discussion of the third lecture. We obtain \[\Phi = \Phi_\text{sp}+\int_{(\mathfrak{c})}\sum_{\mathfrak{t}\mid\mathfrak{c}\mathfrak{c}_\varpi^{-1}}\Phi_{\varpi,\mathfrak{t}}\,d\varpi,\] where $\Phi_\text{sp}$ lies in the span of the functions $g\mapsto\chi(\det g)$ for quadratic Hecke characters $\chi$, the representations $V_\varpi$ run through the cuspidal spaces $V_\pi$ and the Eisenstein spaces $V_{\chi,\chi^{-1}}$ of conductor $\mathfrak{c}_\varpi\mid\mathfrak{c}$, and \[\Phi_{\varpi,\mathfrak{t}}\in R^{(\mathfrak{t})}V_\varpi(\mathfrak{c}_\varpi),\qquad \mathfrak{t}\mid\mathfrak{c}\mathfrak{c}_\varpi^{-1}.\] Upon defining $W_{\varpi,\mathfrak{t}}:=W_{\Phi_{\varpi,\mathfrak{t}}}$, we can now rewrite \eqref{eq14} as \begin{equation}\label{eq16} \frac{|\mathcal{L}_{\tilde\chi_\text{fin}}|^2}{(\mathcal{N}\mathfrak{q})^{1+\varepsilon}}\ll \frac{1}{L}+ \sum_{0\neq q \in\mathfrak{q}\mathfrak{y}\cap\mathcal{B}}\ \int_{(\mathfrak{c})} \sum_{\mathfrak{t}\mid\mathfrak{c}\mathfrak{c}_{\varpi}^{-1}} \frac{\lambda_{\varpi}^{(\mathfrak{t} )}(q\mathfrak{y}^{-1})}{\sqrt{\mathcal{N}(q\mathfrak{y}^{-1})}} W_{\varpi,\mathfrak{t}}(q)\,d\varpi. \end{equation} The advantage of this expression is that it is not quadratic but linear in the Hecke eigenvalues. The price to pay is the spectral averaging in $\varpi$. The Whittaker functions $W_{\varpi,\mathfrak{t}}:K_\infty^\times\to\mathbb{C}$ depend on the original weight function $W:K_{\infty,+}^{\times}\to\mathbb{C}$ included in \eqref{eq13}. In order to proceed further, one needs to understand the ``size'' of these functions, at least on average over the spectrum. We carry this out via Sobolev type norms, the process schematized as follows: \[\|W\|_\ast\ \rightsquigarrow\ \|\phi_1\|_\ast\|\phi_2\|_\ast\ \rightsquigarrow\ \|\Phi\|_\ast\ \rightsquigarrow\ \|\Phi_{\varpi,\mathfrak{t}}\|_\ast\ \rightsquigarrow\ \|W_{\varpi,\mathfrak{t}}\|_\ast\] For example, over the Eisenstein spectrum we can readily derive the bound \[\int\limits_{\varpi\in\mathcal{E}(\mathfrak{c})} \sum_{\mathfrak{t} \mid \mathfrak{c}\mathfrak{c}_{\varpi}^{-1}} \left|W_{\varpi, \mathfrak{t} }(y)\right| \,d\varpi \ll(\mathcal{N}(\ell_1\ell_2))^{\varepsilon},\] using that the number of cusps in level $\mathfrak{c}$ is $\ll(\mathcal{N}(\ell_1\ell_2))^{\varepsilon}$, because $\mathfrak{c}\mathfrak{c}_\pi^{-1}$ is square-free. 
This implies that the contribution of the Eisenstein spectrum in \eqref{eq16} is \[\ll\frac{1}{\mathcal{N}\mathfrak{q}}\sqrt{L(\mathcal{N}\mathfrak{q})^{1+\varepsilon}}(\mathcal{N}(\ell_1\ell_2))^{\varepsilon}\ll\frac{L^\frac{1}{2}}{(\mathcal{N}\mathfrak{q})^{\frac{1}{2}-\varepsilon}}.\] Bounding the cuspidal contribution in \eqref{eq16} is much harder, but at least we can initially restrict to very small spectral parameters, namely $\mathcal{N}\tilde{\lambda}_{\varpi}\leqslant(\mathcal{N}\mathfrak{q})^\varepsilon$, thanks to the bound (valid for any $A>0$) \[\int\limits_{\varpi\in\mathcal{C}(\mathfrak{c})}(\mathcal{N}\tilde{\lambda}_{\varpi})^A \sum_{\mathfrak{t} \mid \mathfrak{c}\mathfrak{c}_{\varpi}^{-1}}\left|W_{\varpi, \mathfrak{t}}(y)\right| \,d\varpi \ll_A|\mathcal{N}(\ell_1\ell_2)|^{\frac{1}{2}+\varepsilon}.\] Using this observation we find that the contribution of the cuspidal spectrum in \eqref{eq16} is, for some values $f(\mathfrak{a})\ll (\mathcal{N}\mathfrak{q})^\varepsilon$, \[\ll(\mathcal{N}\mathfrak{q})^\varepsilon\left(\sum_{\varpi\in\mathcal{C}(\mathfrak{c},\varepsilon)}\sum_{\mathfrak{t}\mid\mathfrak{c}\mathfrak{c}_{\varpi}^{-1}} \left|\sum_{\mathcal{N}\mathfrak{m} \ll L(N\mathfrak{q})^\varepsilon} \frac{\lambda_{\varpi}^{(\mathfrak{t} )}(\mathfrak{m}\mathfrak{q})}{\sqrt{\mathcal{N}(\mathfrak{m}\mathfrak{q})}}f(\mathfrak{m}\mathfrak{q})\right|^2\right)^{1/2}.\] Here we essentially factor out $\lambda_{\varpi}(\mathfrak{q} )$ and bound it individually by $(\mathcal{N}\mathfrak{q})^{\theta+\varepsilon}$, which is why the parameter $\theta$ appears in Theorems~\ref{thm1} and~\ref{thm2}. Now we arrive at the endgame. We majorize the remaining sum by a smooth spectral sum involving an analogous Eisenstein contribution and rapidly decaying spectral weights. We open the square and apply a variant of the Bruggeman--Kuznetsov formula originally developed by Venkatesh~\cite{Ve2} and further extended by Maga~\cite{Ma}. Finally we apply familiar bounds for the resulting Kloosterman sums and Bessel transforms to infer that the contribution of the cuspidal spectrum in \eqref{eq16} is \[\ll(\mathcal{N}\mathfrak{q})^\varepsilon\Biggl(\underbrace{\frac{L^2}{(\mathcal{N}\mathfrak{q})^{1-2\theta-\varepsilon}}}_{\text{diagonal}} +\underbrace{\frac{L^{3/2}}{(\mathcal{N}\mathfrak{q})^{1-2\theta-\varepsilon}}}_{\text{off-diagonal}}\Biggr)^{1/2}\ll\frac{L}{(\mathcal{N}\mathfrak{q})^{\frac{1}{2}-\theta-\varepsilon}}.\] Collecting terms, we deduce from \eqref{eq16} that \[\frac{|\mathcal{L}_{\tilde\chi_\text{fin}}|^2}{(\mathcal{N}\mathfrak{q})^{1+\varepsilon}}\ll \frac{1}{L}+\frac{L}{(\mathcal{N}\mathfrak{q})^{\frac{1}{2}-\theta}}.\] The right hand side is smallest when $L:=(\mathcal{N}\mathfrak{q})^{\frac{1-2\theta}{4}}$, in which case we get \[\mathcal{L}_{\chi_\text{fin}}\ll(\mathcal{N}\mathfrak{q})^{\frac{3+2\theta}{8}+\varepsilon}.\] This concludes the proof of Theorem~\ref{thm1}. \end{document}
\begin{document} \begin{abstract} In this work, we use the classical moment method to find a practical and simple criterion to determine if a family of linearized dispersive equations on a periodic domain is exactly controllable and exponentially stabilizable with any given decay rate in $H_{p}^{s}(\mathbb{T})$ with $s\in \mathbb{R}.$ We apply these results to prove that the linearized Smith equation, the linearized dispersion-generalized Benjamin-Ono equation, the linearized fourth-order Schr\"odinger equation, and the higher-order Schr\"odinger equations are exactly controllable and exponentially stabilizable with any given decay rate in $H_{p}^{s}(\mathbb{T})$ with $s\in \mathbb{R}.$ \end{abstract} \maketitle \section{Introduction} In this work, we consider a family of linear one-dimensional dispersive equations on the periodic domain $\mathbb{T}:=\mathbb{R}/(2\pi \mathbb{Z})$, and investigate its control properties from the point of view of distributed control. Specifically, we consider the family of equations \begin{equation}\label{BO} \partial_{t}u- \partial_{x}\mathcal{A}u=f(x,t), \;\;\;\;x\in \mathbb{T},\;\;t\in \mathbb{R}, \end{equation} where $u=u(x,t)$ denotes a real or complex-valued function of two real variables $x$ and $t,$ the forcing term $f=f(x,t)$ is added to the equation as a control input supported in a given open set $\omega\subset \mathbb{T}$, and $\mathcal{A}$ denotes a linear Fourier multiplier operator. We assume that the multiplier $\mathcal{A}$ is of order $r-1,$ for some $r\in \mathbb{R}$ with $r \geq 1$, that is, the symbol $a:\mathbb{Z} \rightarrow \mathbb{R}$ is given by \begin{equation}\label{dispeop} \widehat {\mathcal{A}u}(k):=a(k)\widehat{u}(k), \;\;k\in \mathbb{Z}, \end{equation} where $\widehat{u}$ stands for the Fourier transform of $u$ (see \eqref{foutrans}), and \begin{equation}\label{limite} |a(k)|\leq C |k|^{r-1}, \qquad|k|\geq k_0, \end{equation} for some $k_0\geq0$ and some positive constant $C$. Equation \eqref{BO} encompasses a wide class of linear dispersive equations. Examples include the well-known linearized Korteweg-de Vries equation ($\mathcal{A}=-\partial_{x}^2$), the Schr\"odinger equation ($\mathcal{A}=i\partial_{x}$), the Benjamin-Ono equation ($\mathcal{A}=\mathcal{H}\partial_{x}$, where $\mathcal{H}$ stands for the Hilbert transform), and the Benjamin equation ($\mathcal{A}=\partial_{x}^2+\alpha\mathcal{H}\partial_{x}$, where $\alpha$ is a positive constant). In the literature there is a wide range of references studying controllability and stabilization properties of linear and nonlinear dispersive equations. Specifically, for the Korteweg-de Vries (KdV) equation, results regarding controllability and stabilization can be found in \cite{14,10, Zhang 1, Russell and Zhang, Rosier 1, Coron Crepau, Menzala Vasconcellos Zuazua, Rosier and Zhang 2}. For the Schr\"odinger equation we refer the reader to \cite{Laurent Camille, Laurent, Dehman Gerard Lebeau, Rosier Lionel Zhang, Rosier Lionel Zhang 2}. Also, the controllability and stabilization of the Benjamin and Benjamin-Ono (BO) equations have received attention in the last decade; see \cite{Manhendra and Francisco, Manhendra and Francisco 2} and \cite{1,Laurent Linares and Rosier, Linares Rosier}, respectively. Our main goal in this paper is to study all these equations in a unified way.
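For later reference, let us record the symbols attached to the examples above, under the conventions $\widehat{\partial_{x}u}(k)=ik\widehat{u}(k)$ and $\widehat{\mathcal{H}u}(k)=-i\,\mathrm{sgn}(k)\widehat{u}(k)$ (other sign conventions only change $a$ by a harmless normalization): the linearized KdV equation corresponds to $a(k)=k^{2}$, the Schr\"odinger equation to $a(k)=-k$, the Benjamin-Ono equation to $a(k)=|k|$, and the Benjamin equation to $a(k)=-k^{2}+\alpha |k|$. In each case $a$ is real-valued and satisfies \eqref{limite} with $r=3$, $2$, $2$, and $3$, respectively.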
Under the above conditions, the linear operator $\mathcal{A}$ commutes with derivatives and may be seen as a self-adjoint operator on $L^{2}_{p}(\mathbb{T})$ (see Section \ref{preliminares} for notations). Note also that solutions of the homogeneous equation \eqref{BO} ($f=0$) with initial data $u(0)=u_{0}$ conserve the ``mass'' in the sense that $$2\pi \widehat{u}(0,t)=2 \pi \widehat{u_{0}}(0),\;\;\text{for all}\;t\in \mathbb{R},$$ where $\widehat{u}$ stands for the Fourier transform of $u$ in the space variable (see \eqref{foutrans}). Before proceeding, let us make precise the problems we are interested in.\\ \noindent \textbf{\emph{Exact controllability problem:}} Let $s\in \mathbb{R}$ and $T>0$ be given. Let $u_{0}$ and $u_{1}$ in $H_{p}^{s}(\mathbb{T})$ be given with $\widehat{u_{0}}(0)=\widehat{u_{1}}(0).$ Can one find a control input $f$ such that the unique solution $u$ of the initial-value problem (IVP) \begin{equation}\label{2D-BO1} \begin{cases} \partial_{t}u - \partial_{x}\mathcal{A}u =f(x,t), \;\;x\in \mathbb{T},\;\;t\in \mathbb{R},\\ u(x,0)=u_{0}(x) \end{cases} \end{equation} is defined until time $T$ and satisfies $u(x,T)=u_{1}(x) \;\;\text{for all}\;\; x\in \mathbb{T}$?\\ \noindent \textbf{\emph{Asymptotic stabilizability problem:}} Let $s\in \mathbb{R}$ and $u_{0}\in H_{p}^{s}(\mathbb{T})$ be given. Can one define a feedback control law $f=Ku$, for some linear operator $K$, such that the resulting closed-loop system \begin{equation}\label{2D-BO2} \begin{cases} \partial_{t}u - \partial_{x}\mathcal{A}u =Ku, \;\;x\in \mathbb{T},\;\;t\in \mathbb{R}^{+},\\ u(x,0)=u_{0}(x), \end{cases} \end{equation} is globally well-defined and asymptotically stable around an equilibrium point as $t\rightarrow +\infty$?\\ In the present manuscript we use the classical moment method (see \cite{Russell}) and a generalization of Ingham's inequality (see {\cite[Theorem 4.6]{7}} and \cite{8}) to find a practical criterion, in terms of the eigenvalues associated with the operator $\partial_{x}\mathcal{A}$, to determine whether equation \eqref{BO} is exactly controllable and exponentially stabilizable. In this way, we extend the techniques used by the authors in \cite{1, Manhendra and Francisco, Rosier Lionel Zhang} to a wide class of linearized dispersive equations on a periodic domain. \begin{rem}\label{systemsnot} Generalizing these techniques to linear systems of two or more equations requires additional effort because the mixed dispersive terms present in the equations generally induce a modification of the orthogonal basis we are considering on $L^{2}_{p}(\mathbb{T})\times L^{2}_{p}(\mathbb{T})$ (see for instance \cite[Proposition 2.2]{Capistrano y Andressa}). This usually implies a loss of regularity of the considered controls (see also \cite[Theorem 2.23]{Micu Ortega Rosier and Zhang}). \end{rem} As usual in control theory for dispersive models (see \cite{1, 10, 14, Manhendra and Francisco}), in order to keep the mass of \eqref{2D-BO1} conserved, we define a bounded linear operator $G:H_{p}^{s}(\mathbb{T})\to H_{p}^{s}(\mathbb{T})$ in the following way: let $g$ be a real non-negative function in $C_p^\infty(\mathbb{T})$ such that \begin{equation}\label{gcondition} 2\pi\widehat{g}(0)=\int_{0}^{2\pi}g(x)\;dx=1, \end{equation} and assume $\text{supp} \;g=\omega \subset \mathbb{T},$ where $\omega=\{x\in \mathbb{T}: g(x)>0 \}$ is an open interval.
The operator $G$ is then defined as \begin{equation}\label{EQ1} G(\phi):=g\phi-g\,\langle\phi,g\rangle, \qquad \phi\in H_{p}^{s}(\mathbb{T}), \end{equation} where the first product must be understood in the periodic distributional sense and $\langle\cdot,\cdot\rangle$ denotes the pairing between $\mathscr{P}'$ and $\mathscr{P}$ (see notations below). The control input $f$ is then chosen to be of the form $f(\cdot,t)=G(h(\cdot,t)),$ $t\in [0,T]$. As a consequence, the function $h \in L^{2}([0,T];H_{p}^{s}(\mathbb{T}))$ is now viewed as the new control function. \begin{rem} Some remarks concerning the operator $G$ are in order. \begin{enumerate} \item It is not difficult to see that $G$ is self-adjoint in $L^{2}_{p}(\mathbb{T})$ (see \cite[Proposition 3.2]{Manhendra and Francisco}). In addition, the authors in \cite[Remark 2.1]{1} and \cite[Lemma 2.20]{Micu Ortega Rosier and Zhang} showed that for any $s\in \mathbb{R}$ the operator $G$ acting from $L^{2}\left([0,T]; H^{s}_{p}(\mathbb{T})\right)$ into $L^{2}\left([0,T]; H^{s}_{p}(\mathbb{T})\right)$ is linear and bounded. \item When $s\geq0$ we may write $$ G(\phi)(x)=g(x)\left[ \phi(x)-\int_0^{2\pi} \phi(y)g(y)\,dy \right], $$ which is exactly the operator defined, for instance, in \cite{1, 10, 14, Manhendra and Francisco}. \item By recalling that for any $\phi\in \mathscr{P}'$ and $g\in \mathscr{P}$ (see \cite[Corollary 3.167]{6}) $$ \langle\phi,g\rangle= 2\pi \sum_{k \in \mathbb{Z}}\widehat{\phi}(k)\widehat{g}(-k), $$ in view of \eqref{gcondition}, we obtain \[ \begin{split} \widehat{G(\phi)}(0)=\widehat{\phi}\ast \widehat{g}(0)- \widehat{g}(0)\,\langle\phi,g\rangle =\sum_{j \in \mathbb{Z}}\widehat{\phi}(j)\widehat{g}(-j)-\widehat{g}(0)\,\langle\phi,g\rangle =0, \end{split} \] where the convolution of two sequences of complex numbers $(\alpha_k)_{k\in\mathbb{Z}}$ and $(\beta_k)_{k\in\mathbb{Z}}$ is the sequence $((\alpha\ast\beta)_k)_{k\in\mathbb{Z}}$ defined by $$ (\alpha\ast\beta)_k=\sum_{j\in \mathbb{Z}}\alpha_j\beta_{k-j}. $$ This implies that any solution $u$ of \eqref{2D-BO1} (with $f(x,t)=G(h(x,t))$) conserves the quantity $2\pi \widehat{u}(0,t)$. In particular, if $\widehat{u_0}(0)=0$ then $2\pi \widehat{u}(0,t)=0$, for any $t\in[0,T]$. \end{enumerate} \end{rem} Next, we turn attention to our criteria to obtain the controllability and stabilization of equation \eqref{BO}. As we will see, they directly link these problems with some specific properties of the eigenvalues and eigenfunctions associated to the operator $\partial_{x}\mathcal{A}$. To derive our first criterion regarding exact controllability, we assume that $\partial_{x}\mathcal{A}$ has a countable number of eigenvalues that are all simple, except by a finite number that have finite multiplicity. Specifically, we will assume that the following hypotheses hold: \vskip.2cm \begin{itemize} \item [$(H1)$] $\partial_{x}\mathcal{A}\psi_{k}=i\lambda_{k} \psi_{k},$ where $\psi_{k}$ is defined in \eqref{spi} and $\lambda_{k}=k a(k),$ for all $k\in \mathbb{Z}.$ \end{itemize} \vskip.2cm Note we are counting multiplicities, implying that the eigenvalues in the sequence $\{i \lambda_{k}\}_{k\in \mathbb{Z}}$ are not necessarily distinct. 
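For instance, with the conventions of the Introduction (recorded here only as an illustration): for the linearized KdV equation $a(k)=k^{2}$, so $\lambda_{k}=k\,a(k)=k^{3}$ and all the eigenvalues $i\lambda_{k}$ are simple, whereas for the linear Schr\"odinger equation $a(k)=-k$, so $\lambda_{k}=-k^{2}=\lambda_{-k}$ and every eigenvalue $i\lambda_{k}$ with $k\neq 0$ is double.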
For each $k_{1}\in \mathbb{Z},$ we set $I(k_{1}):=\{k\in \mathbb{Z}: \lambda_{k}=\lambda_{k_{1}}\}$ and $m(k_{1}):=|I(k_{1})|,$ where $|I(k_{1})|$ denotes the number of elements in $I(k_{1}).$ Concerning the quantity $m(k_{1})$, we assume the following: \vskip.2cm \begin{itemize} \item [$(H2)$] $m(k_{1})\leq n_{0},$ for some $n_{0}\in \mathbb{N}$ and for all $k_{1}\in \mathbb{Z},$ \end{itemize} \vskip.2cm and \vskip.2cm \begin{itemize} \item[$(H3)$] there exists $k_{1}^{\ast} \in \mathbb{N}$ such that $m(k_{1})=1,$ for all $k_{1}\in \mathbb{Z}$ with $|k_{1}|\geq k_{1}^{\ast}.$ \end{itemize} \vskip.2cm Assumptions $(H2)$ and $(H3)$ together say that all eigenvalues $i\lambda_k$ have finite multiplicity. In addition, they are simple eigenvalues for sufficiently large indices. If we count only distinct eigenvalues, we may obtain a sequence $\{\lambda_{k}\}_{k \in \mathbb{I}}$, $\mathbb{I}\subseteq\mathbb{Z}$, with the property that $\lambda_{k_{1}}\neq \lambda_{k_{2}}$, for any $k_{1},k_{2}\in \mathbb{I}$ with $k_{1}\neq k_{2}$. Our main result at this point reads as follows. \begin{thm}[Criterion I]\label{ControlLa} Let $s\in \mathbb{R}$ and assume $(H1), \;(H2),$ and $(H3).$ Suppose that \begin{equation}\label{gammaseg} \begin{split} \gamma&:=\inf_{\substack{k,n\in \mathbb{I}\\k\neq n}}|\lambda_{k}-\lambda_{n}|>0 \end{split} \end{equation} and \begin{equation}\label{gammalinha} \begin{split} \gamma'&:=\underset{S\subset \mathbb{I}}{\sup}\; \underset{k\neq n}{\underset{k,n \in \mathbb{I}\backslash S}{\inf}} |\lambda_{k}-\lambda_{n}|>0, \end{split} \end{equation} where $S$ runs over all finite subsets of $\mathbb{I}.$ Then for any $T> \frac{2\pi}{\gamma'}$ and for each $u_{0},\; u_{1}\in H_{p}^{s}(\mathbb{T})$ with $\widehat{u_{0}}(0)=\widehat{u_{1}}(0),$ there exists a function $h\in L^{2}([0,T];H_{p}^{s}(\mathbb{T}))$ such that the unique solution $u$ of the non-homogeneous system \begin{equation}\label{introduc2} \begin{cases} u\in C([0,T];H^{s}_{p}(\mathbb{T})), \hbox{}\\ \partial_{t}u(t)= \partial_{x}\mathcal{A}u(t)+ G(h)(t)\in H_{p}^{s-r}(\mathbb{T}), t\in (0,T),\\ u(0)=u_{0}\in H_{p}^{s}(\mathbb{T}), \end{cases} \end{equation} satisfies $u(T)=u_{1}.$ Furthermore, \begin{equation}\label{hboun} \|h\|_{L^{2}([0,T];H_{p}^{s}(\mathbb{T}))} \leq \nu\; (\|u_{0}\|_{H_{p}^{s}(\mathbb{T})} +\|u_{1}\|_{H_{p}^{s}(\mathbb{T})}), \end{equation} for some positive constant $\nu \equiv \nu(s,g,T).$ \end{thm} \begin{rem}\label{partic1} Note that if $\gamma'$ defined in \eqref{gammalinha} is infinite ($\gamma'=+\infty$), then \eqref{introduc2} is exactly controllable for any positive time $T.$ In particular, if $(H1)$ holds and \begin{itemize} \item [(i)] $a(-k)=a(k),$ for all $k\in \mathbb{I}$; \item[(ii)] $\displaystyle{\lim_{|k|\rightarrow +\infty}| (k+1) a(k+1)- k a(k)|=+\infty},$ where $k$ runs over $\mathbb{I},$ \end{itemize} then system \eqref{introduc2} is exactly controllable for any $T>0.$ In fact, from (i) we infer that $\lambda_{k}=-\lambda_{-k}$ for all $k\in \mathbb{I}.$ On the other hand, property (ii) yields that $\gamma'=+\infty$ and the real sequence $\{\lambda_{k}\}_{k\in \mathbb{I}}$ is strictly increasing/decreasing for $k\in \mathbb{I}$ with $|k|>k_{1}^{\ast}$ for some $k_{1}^{\ast}$, implying that$(H2)$-$(H3)$ hold. Also, since terms of the sequence $\{\lambda_{k}\}_{k\in \mathbb{I}}$ are distinct, it is clear that \eqref{gammaseg} holds. 
\end{rem} Property (ii) in Remark \ref{partic1} implies the so-called \textquotedblleft asymptotic gap condition\textquotedblright{} for the eigenvalues $\{i \lambda_{k}\}_{k\in \mathbb{I}}$ associated with the operator $\partial_{x}\mathcal{A}.$ This property is crucial to obtain the exact controllability for any $T>0.$ Many dispersive models satisfy properties (i) and (ii); for instance, the linearized KdV equation \cite{10,14}, the linearized Benjamin-Ono equation \cite{1}, and the linearized Benjamin equation \cite{Manhendra and Francisco}. See Figure \ref{fig1} for an illustration. \begin{figure} \caption{Dispersion of $\lambda_{k}$'s for the KdV, Benjamin-Ono and Benjamin equations} \label{fig1} \end{figure} Next we shall prove that even when infinitely many of the eigenvalues associated with $\partial_{x}\mathcal{A}$ are repeated, in a particular form, we can still obtain an exact controllability result. This will provide our second criterion. For this, we will assume $(H1)$ and \vskip.2cm \begin{itemize} \item [$(H4)$] there are $n_{0},k_{1}^{\ast}\in \mathbb{N}$ such that $m(k_{1})\leq n_{0},$ for all $k_{1}\in \mathbb{Z}$ with $|k_{1}|< k_{1}^{\ast}.$ In addition, $m(k_1)=2$ for all $|k_{1}|\geq k_{1}^{\ast}$. \end{itemize} \vskip.2cm and \vskip.2cm \begin{itemize} \item [$(H5)$] $a(-k)=-a(k),$ for all $k\in \mathbb{Z}$ with $|k|\geq k_{1}^{\ast}.$ \end{itemize} \vskip.2cm Assumption $(H4)$ says that, except near the origin, all eigenvalues are double. Moreover, in view of $(H5)$, $\lambda_{k}=\lambda_{-k},$ for all $|k|\geq k_{1}^{\ast}.$ This implies that $I(k_1)=\{-k_1,k_1\}$ for $|k_1|\geq k_1^*$. As before, if we are interested in counting only the distinct eigenvalues we can obtain a set $$ \mathbb{J}\subset\{-k_1^*+1,-k_1^*+2, \ldots\} $$ such that the sequence $\{\lambda_{k}\}_{k \in \mathbb{J}}$ has the property that $\lambda_{k_{1}}\neq \lambda_{k_{2}}$, for any $k_{1},k_{2}\in \mathbb{J}$ with $k_{1}\neq k_{2}.$ Our second result regarding controllability reads as follows. \begin{thm}[Criterion II]\label{ControlLag} Let $s\in \mathbb{R}$ and assume $(H1), \;(H4),$ and $(H5).$ Suppose \begin{equation}\label{gammasegg} \begin{split} \tilde{\gamma} &:=\inf_{\substack{k,n\in \mathbb{J}\\k\neq n}}|\lambda_{k}-\lambda_{n}|>0 \end{split} \end{equation} and \begin{equation}\label{gammalinhag} \begin{split} \tilde{\gamma}'&:=\underset{S\subset \mathbb{J}}{\sup}\; \underset{k\neq n}{\underset{k,n \in \mathbb{J}\backslash S}{\inf}} |\lambda_{k}-\lambda_{n}|>0, \end{split} \end{equation} where $S$ runs over the finite subsets of $\mathbb{J}.$ Then for any $T> \frac{2\pi}{\tilde{\gamma}'}$ and for each $u_{0},\; u_{1}\in H_{p}^{s}(\mathbb{T})$ with $\widehat{u_{0}}(0)=\widehat{u_{1}}(0),$ there exists a function $h\in L^{2}([0,T];H_{p}^{s}(\mathbb{T}))$ such that the unique solution $u$ of the non-homogeneous system \eqref{introduc2} satisfies $u(T)=u_{1}.$ Moreover, there exists a positive constant $\nu \equiv \nu(s,g,T)$ such that \eqref{hboun} holds. \end{thm} \begin{rem}\label{partic} If hypotheses $(H1),$ $(H4)$ and $(H5)$ hold with $$ \displaystyle{\lim_{k\rightarrow +\infty}|(k+1) a(k+1)- k a(k)|=+\infty},$$ where $k$ takes values in $\mathbb{J},$ then the system \eqref{introduc2} is exactly controllable for any $T>0.$ \end{rem} It is not difficult to see that the linear Schr\"odinger equation satisfies the assumptions of Theorem \ref{ControlLag} and Remark \ref{partic}. See Figure \ref{fig2} for an illustration of the eigenvalues.
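Indeed, as a quick verification of the last claim: for the linear Schr\"odinger equation $a(k)=-k$, hence $a(-k)=k=-a(k)$ and $\lambda_{k}=k\,a(k)=-k^{2}=\lambda_{-k}$, so that $I(k_{1})=\{-k_{1},k_{1}\}$ and $m(k_{1})=2$ for every $k_{1}\neq 0$; thus $(H1)$ holds with $\lambda_{k}=-k^{2}$, and $(H4)$, $(H5)$ hold with $k_{1}^{\ast}=1$. Moreover, $|(k+1)a(k+1)-k\,a(k)|=|k^{2}-(k+1)^{2}|=2k+1\rightarrow +\infty$ as $k\rightarrow+\infty$, so Remark \ref{partic} applies and the system \eqref{introduc2} associated with the linear Schr\"odinger equation is exactly controllable for any $T>0$.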
Actually, the exact controllability and exponential stabilization for the linear (and nonlinear cubic) Schr\"odinger equations were proved in \cite{Rosier Lionel Zhang}, where the authors used $f(x,t)=Gh(x,t):=g(x)h(x,t)$ as a control input. Here we show that the control input described in \eqref{EQ1} also serves to prove the exact controllability. The advantage of using this control input is that it allows us to obtain a controllability and stabilization result for the linear Schr\"odinger equation in the Sobolev space $H^{s}_{p}(\mathbb{T})$ for any $s\in \mathbb{R}$. \begin{figure} \caption{Dispersion of $\lambda_{k}$'s for the Schr\"odinger equation} \label{fig2} \end{figure} Attention is now turned to our stabilization results. In what follows, $G^\ast$ denotes the adjoint operator of $G$. We will prove that if one chooses the feedback law $Ku=-GG^{\ast}u$ then the closed-loop system \eqref{2D-BO2} is exponentially stable. More precisely, we have the following. \begin{thm}\label{st351} Let $g$ be as in \eqref{gcondition} and let $s\in \mathbb{R}$ be given. Under the assumptions of Theorem \ref{ControlLa} or Theorem \ref{ControlLag}, there exist positive constants $M=M(g, s)$ and $\alpha=\alpha(g)$ such that for any $u_{0}\in H_{p}^{s}(\mathbb{T})$ the unique solution $u$ of the closed-loop system \begin{equation*} \begin{cases} u\in C([0,+\infty);H^{s}_{p}(\mathbb{T})), \hbox{}\\ \partial_{t}u(t)= \partial_{x}\mathcal{A}u(t)- GG^{\ast}u(t)\in H_{p}^{s-r}(\mathbb{T}), t> 0,\\ u(0)=u_{0}\in H_{p}^{s}(\mathbb{T}), \end{cases} \end{equation*} satisfies $$\|u(\cdot,t)-\widehat{u_{0}}(0)\|_{H_{p}^{s}(\mathbb{T})}\leq M e^{-\alpha t}\|u_{0}-\widehat{u_{0}}(0)\|_{H_{p}^{s}(\mathbb{T})},\;\;\;\text{for all}\;\; t\geq 0.$$ \end{thm} The feedback law $Ku=-GG^{\ast}u$ in Theorem \ref{st351} is the simplest one providing exponential decay, but only with a fixed exponential rate. However, by changing the feedback law one is able to show that the resulting closed-loop system actually has an arbitrary exponential decay rate. More precisely, \begin{thm}\label{estabilization} Let $s\in \mathbb{R},$ $\lambda>0,$ and $u_{0}\in H_{p}^{s}(\mathbb{T})$ be given. Under the assumptions of Theorem \ref{ControlLa} or Theorem \ref{ControlLag}, there exists a bounded linear operator $K_{\lambda}$ from $H_{p}^{s}(\mathbb{T})$ to $H_{p}^{s}(\mathbb{T})$ such that the unique solution $u$ of the closed-loop system \begin{equation}\label{estag} \begin{cases} u\in C([0,+\infty);H^{s}_{p}(\mathbb{T})), \hbox{}\\ \partial_{t}u(t)= \partial_{x}\mathcal{A}u(t)+ K_{\lambda}u(t)\in H_{p}^{s-r}(\mathbb{T}), t> 0,\\ u(0)=u_{0}\in H_{p}^{s}(\mathbb{T}), \end{cases} \end{equation} satisfies $$\|u(\cdot,t)-\widehat{u_{0}}(0)\|_{H_{p}^{s}(\mathbb{T})}\leq M\;e^{-\lambda\;t}\|u_{0}-\widehat{u_{0}}(0)\|_{H_{p}^{s}(\mathbb{T})},$$ for all $t\geq0,$ and some positive constant $M=M(g,\lambda, s).$ \end{thm} The paper is organized as follows: in Section \ref{preliminares} we recall a series of preliminary results that will be used throughout this work. In Section \ref{section4} we prove well-posedness results. The main results regarding controllability and stabilization are proved in Sections \ref{section5} and \ref{section6}, respectively.
In Section \ref{section6g}, we apply our general criteria to establish the corresponding results regarding exact controllability and exponential stabilization for the linearized Smith equation, the linearized dispersion-generalized Benjamin-Ono equation, the fourth-order Schr\"odinger equation, and a higher-order Schr\"odinger equation. Finally, in Section \ref{conc-rem} some concluding remarks and directions for future work are presented. \section{Preliminaries}\label{preliminares} In this section we introduce some basic notation and recall the main tools needed to obtain our results. We denote by $\mathscr{P}$ the space $C^\infty_p(\mathbb{T})$ of all $C^\infty$ functions that are $2\pi$-periodic. By $\mathscr{P}'$ (the dual of $\mathscr{P}$) we denote the space of all periodic distributions. By $L^2_p(\mathbb{T})$ we denote the standard space of square integrable $2\pi$-periodic functions. It is well-known that the sequence $\{ \psi_{k} \}_{k\in \mathbb{Z}}$ given by \begin{equation}\label{spi} \psi_{k}(x):=\frac{e^{ikx}}{\sqrt{2 \pi}},\;\; k\in \mathbb{Z},\;\;x\in \mathbb{T} \end{equation} is an orthonormal basis for $L_{p}^{2}(\mathbb{T})$. The Fourier transform of $v\in \mathscr{P}'$ is defined as \begin{equation}\label{foutrans} \widehat{v}(k)=\frac{1}{2 \pi}\langle v,e^{-ikx}\rangle,\;\; \;k \in \mathbb{Z}. \end{equation} Next we introduce the periodic Sobolev spaces. For a more detailed description and properties of these spaces, we refer the reader to \cite{6}. Given $s\in\mathbb{R}$, the (periodic) Sobolev space of order $s$ is defined as $$H^{s}_{p}(\mathbb{T})=\left\{ v\in \mathscr{P}' \left| \right. \|v\|_{H^{s}_{p}(\mathbb{T})}^{2}:=2\pi\sum_{k=-\infty}^{\infty} (1+|k|)^{2s}|\widehat{v}(k)|^{2}<\infty \right\}.$$ We consider the space $H^{s}_{p}(\mathbb{T}) $ as a Hilbert space endowed with the inner product \begin{equation}\label{innerprod} \displaystyle{(h\,, \,v)_{H^{s}_{p}(\mathbb{T})}= 2\pi \sum_{k\in \mathbb{Z}} (1+|k|)^{2s}\widehat{h}(k)\;\overline{\widehat{v}(k)}.} \end{equation} For any $s\in \mathbb{R}$, $(H^{s}_{p}(\mathbb{T}))'$, the topological dual of $H^{s}_{p}(\mathbb{T})$, is isometrically isomorphic to $H^{-s}_{p}(\mathbb{T})$, where the duality is implemented by the pairing $$ \displaystyle{\langle h, v\rangle_{H^{-s}_{p}(\mathbb{T})\times H_{p}^{s}(\mathbb{T})}= 2\pi \sum_{k\in \mathbb{Z}} \widehat{h}(k)\;\overline{\widehat{v}(k)}},\;\;\text{for all}\; v \in H^{s}_{p}(\mathbb{T}),\;h\in H^{-s}_{p}(\mathbb{T}). $$ \begin{rem} It is well-known that any distribution $v\in \mathscr{P}'$ may be written as (see, for instance, \cite[page 188]{6}) \begin{equation}\label{prep} v=\sqrt{2\pi}\sum_{k\in\mathbb{Z}}\widehat{v}(k)\psi_{k}, \end{equation} where the series converges in the sense of $\mathscr{P}'$. In particular, any $v\in H^{s}_{p}(\mathbb{T})$, $s\in\mathbb{R}$, can be written in the form \eqref{prep}. \end{rem} We also consider the closed subspace $$H^{s}_{0}(\mathbb{T}):=\left\{ v\in H^{s}_{p}(\mathbb{T})\left|\right. \;\; \widehat{v}(0)=0\right\}.$$ It can be seen that if $s_{1}, \;s_{2}\in \mathbb{R}$ with $s_{1}\geq s_{2}$ then $H_{0}^{s_{1}}(\mathbb{T}) \hookrightarrow H_{0}^{s_{2}}(\mathbb{T}),$ where the embedding is dense. We denote $H_{0}^{0}(\mathbb{T})$ by $L_{0}^{2}(\mathbb{T}).$ In particular, $L_{0}^{2}(\mathbb{T})$ is a closed subspace of $L^{2}_{p}(\mathbb{T})$. We continue with a characterization of Riesz bases in Hilbert spaces (see \cite{9} for more details). In what follows, $J$ represents a countable set of indices which could be finite or infinite.
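\begin{rem} The following elementary computation, recorded only for illustration, shows how the basis \eqref{spi} interacts with the conventions above. By \eqref{foutrans}, $\widehat{\psi_{j}}(k)=\frac{1}{\sqrt{2\pi}}\delta_{jk}$, so \eqref{innerprod} gives $(\psi_{j},\psi_{l})_{H^{s}_{p}(\mathbb{T})}=(1+|j|)^{2s}\delta_{jl}$ and, in particular, $\|\psi_{j}\|_{H^{s}_{p}(\mathbb{T})}=(1+|j|)^{s}$. Consequently, the rescaled family $\{(1+|k|)^{-s}\psi_{k}\}_{k\in \mathbb{Z}}$ is an orthonormal basis of $H^{s}_{p}(\mathbb{T})$, the simplest instance of a Riesz basis in the sense of the next theorem. \end{rem}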
\begin{thm}\label{BasRieszTheo} Let $\{x_{n}\}_{n\in J}$ be a sequence in a Hilbert space $H.$ Then the following statements are equivalent. \begin{enumerate} \item $\{x_{n}\}_{n\in J}$ is a Riesz basis for $H$ (see \cite[Definition 7.9]{9}). \item $\{x_{n}\}_{n\in J}$ is complete in $H$ (see {\cite[Definition 1.25]{9}}) and there exist constants $A,\;B>0$ such that $$\text{for all}\;\;c_{1},...,c_{N}\;\;\text{scalars,}\;\;A\sum_{n=1}^{N}|c_{n}|^{2}\leq \|\sum_{n=1}^{N}c_{n}x_{n}\|^{2}_{H} \leq B\sum_{n=1}^{N}|c_{n}|^{2}.$$ \item There is an equivalent inner product $(\cdot, \cdot)$ for $H$ such that $\{x_{n}\}_{n\in J}$ is an orthonormal basis for $H$ with respect to $(\cdot, \cdot).$ \item $\{x_{n}\}_{n\in J}$ is a complete Bessel sequence (see {\cite[Definition 7.1]{9}}) and possesses a biorthogonal system $\{y_{n}\}_{n\in J}$ (see {\cite[Definition 4.10]{9}}) that is also a complete Bessel sequence. \end{enumerate} \end{thm} \begin{proof} See {\cite[Theorem 7.13]{9}}. \end{proof} Finally, we recall the generalized Ingham inequality. \begin{thm}\label{InghamG2} Let $\{\lambda_{k}\}_{k\in J}$ be a family of real numbers satisfying the uniform gap condition $$\gamma= \underset{k\neq n}{\underset{k,n \in J}{\inf}} |\lambda_{k}-\lambda_{n}|>0.$$ Set $$\gamma'=\underset{S\subset J}{\sup}\; \underset{k\neq n}{\underset{k,n \in J\backslash S}{\inf}} |\lambda_{k}-\lambda_{n}|>0,$$ where $S$ runs over all finite subsets of $J.$ If $I$ is a bounded interval of length $|I|> \frac{2\pi}{\gamma'},$ then there exist positive constants $A$ and $B$ such that $$A\sum_{k \in J}|c_{k}|^{2}\leq \int_{I}|f(t)|^{2}dt\leq B\sum_{k \in J}|c_{k}|^{2},$$ for all functions of the form $f(t)=\sum\limits_{k \in J}c_{k}e^{i\lambda_{k}t}$ with square-summable complex coefficients $c_{k}.$ \end{thm} \begin{proof} See {\cite[page 67]{7}}. \end{proof} For the classical Ingham inequality see \cite{8}; for further generalizations we refer the reader to \cite{Ball and Slemrod} and \cite{7}. \section{Well-posedness} \label{section4} In this section we establish a global well-posedness result for system \eqref{introduc2}. We start with some results concerning the homogeneous equation. These results are quite standard, but for the sake of completeness we present the main steps. \begin{prop}\label{OGU} Let $r$ be as in \eqref{limite}. For any $u_{0}\in H_{p}^{r}(\mathbb{T})$, the homogeneous problem \begin{equation}\label{introduc} \begin{cases} u\in C(\mathbb{R};H^{r}_{p}(\mathbb{T}))\cap C^{1}(\mathbb{R},L^{2}_{p}(\mathbb{T})), \\ \partial_{t}u = \partial_{x}\mathcal{A}u \in L_{p}^{2}(\mathbb{T}),\quad t\in \mathbb{R},\\ u(0)=u_{0}, \end{cases} \end{equation} has a unique solution. \end{prop} \begin{proof} First note that from Plancherel's identity, for any $\varphi, \psi \in D(\partial_{x}\mathcal{A})=H^{r}_{p}(\mathbb{T}) $, we have \begin{align*} ( \partial_{x}\mathcal{A}\varphi, \psi)_{L^{2}_{p}(\mathbb{T})} &= 2\pi \sum_{k=-\infty}^{+\infty}\widehat{\partial_{x}\mathcal{A}\varphi}(k) \overline{\widehat{\psi}(k)}\\ &= 2\pi \sum_{k=-\infty}^{+\infty} i k a(k)\widehat{\varphi}(k) \overline{\widehat{\psi}(k)} \\ &=-2\pi \sum_{k=-\infty}^{+\infty} \widehat{\varphi}(k) \overline{ i k a(k) \widehat{\psi}(k)}\\ &=-(\varphi, \partial_{x}\mathcal{A}\psi)_{L^{2}(\mathbb{T})}, \end{align*} which implies that $\partial_{x}\mathcal{A}$ is skew-adjoint.
Hence, Stone's theorem gives that $\partial_{x}\mathcal{A}$ generates a strongly continuous unitary group $\{U(t)\}_{t\in \mathbb{R}}$ on $L^{2}_{p}(\mathbb{T}).$ Therefore, Theorem 3.2.3 in \cite{3} yields the desired result. \end{proof} Proposition \ref{OGU} provides the well-posedness theory for \eqref{introduc} only for initial data in $H_{p}^{r}(\mathbb{T})$. However, we can still obtain the well-posedness for initial data in $H_{p}^{s}(\mathbb{T})$ for any $s\in \mathbb{R}.$ To do so, one needs a more accurate description of the unitary group $\{U(t)\}_{t\in \mathbb{R}}$. At least at a formal level, by taking the Fourier transform in the spatial variable, it is not difficult to see that the solution of \eqref{introduc} may be written as \begin{equation}\label{prop2} \widehat{u}(t)(k)=e^{ik a(k) t}\widehat{u_{0}}(k),\;\; k\in \mathbb{Z}, \end{equation} or, by taking the inverse Fourier transform, \begin{equation}\label{solf} u(t)=\left(e^{ik a(k) t}\widehat{u_{0}}(k)\right)^{\vee}, \;\;\;t\in \mathbb{R}. \end{equation} This means that \begin{equation}\label{solf2} u(x,t)=\sum_{k\in \mathbb{Z}} e^{ik a(k) t}\widehat{u_{0}}(k)e^{ikx},\;\;\;t\in \mathbb{R}, \end{equation} must be the unique solution of \eqref{introduc}. The above calculation suggests that, in a rigorous way, we may define the family of linear operators \;$U:\mathbb{R}\rightarrow \mathcal{L}(H^{s}_{p}(\mathbb{T}))$ by \begin{equation}\label{semi2} \begin{split} t\mapsto U(t)\varphi:=e^{\partial_{x}\mathcal{A}t}\varphi &=(e^{ik a(k) t}\widehat{\varphi}(k))^{\vee}, \end{split} \end{equation} in such a way that the solution of \eqref{introduc} now becomes $u(t)=U(t)u_{0},\;t\in \mathbb{R}.$ From the growth condition \eqref{limite} and classical results of semigroup theory (see for instance \cite{3}, \cite{4} or \cite{6} for additional details), we can show that the family of operators $\{U(t)\}_{t\in\mathbb{R}}$ given by \eqref{semi2} indeed defines a strongly continuous one-parameter unitary group on $H^{s}_{p}(\mathbb{T})$, for any $s\in\mathbb{R}$. Additionally, if $u(t)=U(t)u_{0}$ with $u_{0}\in H^{s}_{p}(\mathbb{T}),$ then \begin{equation*} \lim_{h\rightarrow 0}\left\|\frac{u(t+h)-u(t)}{h}-\partial_{x}\mathcal{A}u \right\|_{H_{p}^{s-r}(\mathbb{T})}=0, \end{equation*} uniformly with respect to $t\in\mathbb{R}.$ In particular, the following result holds. \begin{thm}\label{EU1} Let $s\in\mathbb{R}$ and $u_{0}\in H_{p}^{s}(\mathbb{T})$ be given. Then the homogeneous problem \begin{equation*} \begin{cases} u\in C(\mathbb{R};H^{s}_{p}(\mathbb{T})), \\ \partial_{t}u = \partial_{x}\mathcal{A}u \in H_{p}^{s-r}(\mathbb{T}), \quad t\in \mathbb{R},\\ u(0)=u_{0}, \end{cases} \end{equation*} has a unique solution. \end{thm} Next, we deal with the well-posedness of the non-homogeneous linear problem \eqref{introduc2}. \begin{lem}\label{EUNH} Let $0< T<\infty,$ $s\in \mathbb{R},$ $u_{0}\in H_{p}^{s}(\mathbb{T}),$ and $h\in L^{2}([0,T];H_{p}^{s}(\mathbb{T})).$ Then, there exists a unique mild solution $u\in C([0,T],H_{p}^{s}(\mathbb{T})) $ for the IVP \eqref{introduc2}. \end{lem} \begin{proof} This is a consequence of Corollary 2.2 and Definition 2.3 in \cite[page 106]{4}, and the fact that $G(h)\in L^{1}([0,T];H_{p}^{s}(\mathbb{T})).$ Furthermore, the unique (mild) solution of \eqref{introduc2} is given by \begin{equation}\label{inteeq} u(t)=U(t)u_{0}+\int_{0}^{t}U(t-t')Gh(t')dt', \qquad t\in[0,T]. \end{equation} This completes the proof of the lemma.
\end{proof} \section{Proof of the Control Results} \label{section5} In this section we use the classical moment method (see \cite{Russell}) to show the criteria I and II regarding exact controllability for \eqref{introduc2}. First of all, by replacing $u_1$ by $u_1-U(T)u_0$ if necessary, we may assume without loss of generality that $u_{0}=0$ (see \cite[page 10]{Manhendra and Francisco}), implying that $\widehat{u_{1}}(0)=\widehat{u_{0}}(0)=0.$ Consequently, if we write $u_{1}(x)=\sum\limits_{k\in \mathbb{Z}}c_{k}\;\psi_{k}(x)$ with $\psi_{k}$ as in \eqref{spi} then $c_0=0$. Our first result is a characterization to get the exact controllability for \eqref{introduc2}. Its proof is similar to the proof of Lemma 4.1 in \cite{Manhendra and Francisco}, passing to the frequency space when necessary; so we omit the details. \begin{lem}\label{caractc} Let $ s\in \mathbb{R}$ and $T>0$ be given. Assume $u_{1}\in H^{s}_{p}(\mathbb{T})$ with $\widehat{u_{1}}(0)=0.$ Then, there exists $h\in L^{2}([0,T],H^{s}_{p}(\mathbb{T}))$ such that the solution of the IVP \eqref{introduc2} with initial data $u_{0}=0$ satisfies $u(T)=u_{1}$ if and only if \begin{equation}\label{CEQ} \int_{0}^{T}\left\langle Gh(\cdot,t),\varphi(\cdot,t)\right\rangle_{H^{s}_{p}\times(H^{s}_{p})'}dt =\left\langle u_{1},\varphi_{0}\right\rangle_{H^{s}_{p}\times(H^{s}_{p})'}, \end{equation} for any $\varphi_{0}\in (H^{s}_{p}(\mathbb{T}))'$, and $\varphi$ is the solution of the adjoint system \begin{equation}\label{adsis} \begin{cases} \varphi\in C([0,T]:\left(H_{p}^{s}(\mathbb{T})\right)'),\\ \partial_{t}\varphi=\partial_{x}\mathcal{A}\varphi \in H_{p}^{-s-r}(\mathbb{T}), \quad t>0,\\ \varphi(T)=\varphi_{0}. \end{cases} \end{equation} \end{lem} Next corollary is a consequence of Lemma \ref{caractc}. Having in mind its importance, we write the proof. \begin{cor} \label{controloperator1} Let $s\in \mathbb{R},$ $T> 0,$ and $u_{1}\in H^{s}_{p}(\mathbb{T})$ with $\widehat{u_{1}}(0)=0$ be given. Then, there exists $h\in L^{2}([0,T];H_{p}^{s}(\mathbb{T})),$ such that the unique solution of the IVP \eqref{introduc2} with initial data $u_{0}=0$ satisfies $u(T)=u_{1}$ if and only if there exists $\delta>0$ such that \begin{equation}\label{ob1} \int_{0}^{T}\|G^{\ast}U(\tau)^{\ast}\phi^{\ast}\| ^{2}_{(H_{p}^{s}(\mathbb{T}))'}(\tau) \; d\tau \geq \delta^{2} \|\phi^{\ast}\|^{2}_{(H^{s}_{p}(\mathbb{T}))'}, \end{equation} for any $\phi^{\ast} \in (H^{s}_{p}(\mathbb{T}))'.$ \end{cor} \begin{proof}$(\Rightarrow)$ Let $T>0$ and define the linear map $F_{T}:L^{2}([0,T]; H^{s}_{p}(\mathbb{T}))\rightarrow H^{s}_{p}(\mathbb{T})$ by $F_{T}(h)=u(T)$, where $u$ is the (mild) solution of \eqref{introduc2} with $u(0)=0.$ From the hypothesis, the map $F_{T}$ is onto and, given $u_1\in H^{s}_{p}(\mathbb{T})$, \begin{equation}\label{cont1} F_T(h)= u_{1}=\int_{0}^{T}U(T-s)(G(h))(s)\;ds, \end{equation} for some $h\in L^{2}([0,T]; H^{s}_{p}(\mathbb{T}))$. Therefore, $$ {\footnotesize \begin{split} \|F_{T}(h)\|_{H^{s}_{p}(\mathbb{T})} &\leq \int\limits_{0}^{T}\left\|U(T-s)(G(h))(s) \right\|_{H^{s}_{p}(\mathbb{T})}\;ds \leq c\int\limits_{0}^{T}\|h\|_{H^{s}_{p}(\mathbb{T})}\;ds \leq cT^{\frac{1}{2}} \|h\|_{L^{2}([0,T];H^{s}_{p}(\mathbb{T}))}, \end{split}} $$ for some constant $c$ depending on $g$. So, $F_{T}$ is a bounded linear operator. Thus, $F_{T}^{\ast}$ exists, is a bounded linear operator, and it is one-to-one (see Rudin \cite[Corollary b) page 99]{Rudin}). 
Also, from Theorem 4.13 in \cite{Rudin} (see also \cite[page 35]{Coron}), we have that there exists $\delta >0$ such that \begin{equation}\label{caractc12} \left\|F_{T}^{\ast}(\phi^{\ast}) \right\|_{\left(L^{2}([0,T];H^{s}_{p}(\mathbb{T}))\right)'} \geq \delta \;\|\phi^{\ast}\|_{\left(H^{s}_{p}(\mathbb{T})\right)'}, \;\;\;\text{ for all}\;\; \phi^{\ast}\in \left(H^{s}_{p}(\mathbb{T})\right)'. \end{equation} From Lemma \ref{caractc}, we have that the solution $u$ of \eqref{introduc2} with $u_{0}=0$ satisfies \begin{equation}\label{caractc6} \int_{0}^{T}\left\langle Gh(\cdot,t),\varphi(\cdot,t)\right\rangle_{H^{s}_{p} \times(H^{s}_{p})'}dt -\left\langle u_{1},\varphi_{0}\right\rangle_{H^{s}_{p}\times(H^{s}_{p})'}=0, \end{equation} for any $\varphi_{0}\in (H^{s}_{p}(\mathbb{T}))',$ and $\varphi$ the solution of the adjoint system \eqref{adsis}. By noting that $\varphi(\cdot,t)=U(T-t)^{\ast}\varphi_{0}$, it follows from \eqref{caractc6} that $${\small \begin{split} \int\limits_{0}^{T}\left\langle h(\cdot,t),G^{\ast}U(T-t)^{\ast}\varphi_{0} \right\rangle_{H^{s}_{p}(\mathbb{T})\times (H^{s}_{p}(\mathbb{T}))'}dt &=\left\langle u(T),\varphi_{0}\right \rangle_{H^{s}_{p}(\mathbb{T})\times (H^{s}_{p}(\mathbb{T}))'}\\ &=\left\langle F_{T}(h),\varphi_{0}\right \rangle_{H^{s}_{p}(\mathbb{T})\times (H^{s}_{p}(\mathbb{T}))'}\\ &=\left\langle h\;,\;F_{T}^{\ast}\varphi_{0}\right\rangle _{L^{2}([0,T];H^{s}_{p}(\mathbb{T})) \times \left(L^{2}([0,T];H_{p}^{s}(\mathbb{T}))\right)'}. \end{split}} $$ Identifying $L^{2}([0,T];H_{p}^{s}(\mathbb{T}))$ with its dual one infers $F_{T}^{\ast}=G^{\ast}U(T-t)^{\ast},$ and using \eqref{caractc12}, we have $$\left\|G^{\ast}U(T-t)^{\ast}(\phi^{\ast}) \right\|_{L^{2}([0,T];(H^{s}_{p}(\mathbb{T}))')} \geq \delta \;\|\phi^{\ast}\|_{\left(H^{s}_{p}(\mathbb{T})\right)'}, \;\;\;\text{ for all}\;\; \phi^{\ast}\in \left(H^{s}_{p}(\mathbb{T})\right)',$$ or, equivalently, $$\int_{0}^{T}\|G^{\ast}U(T-t)^{\ast}(\phi^{\ast}(x)) \|^{2}_{(H^{s}_{p}(\mathbb{T}))'}\;dt \geq \delta^{2}\; \|\phi^{\ast}\|^{2}_{(H_{p}^{s}(\mathbb{T}))'},\;\;\;\text{ for all}\;\; \phi^{\ast}\in (H^{s}_{p}(\mathbb{T}))'.$$ The change of variables $ \tau=T-t$ yields \eqref{ob1}. $(\Leftarrow)$ If \eqref{ob1} holds, then $F_{T}^{\ast}=G^{\ast}U(T-t)^{\ast}$ is onto. It is easy to prove that $F_{T}^{\ast}$ is bounded from $(H_{p}^{s}(\mathbb{T}))'$ into $(L^{2}([0,T];H_{p}^{s}(\mathbb{T})))'.$ Therefore, $F_{T}$ is onto. From computations similar to those above we obtain that \eqref{caractc6} holds. Then Lemma \ref{caractc} implies the result and we conclude the proof of the corollary. \end{proof} The following characterization is fundamental to prove the existence of control for \eqref{introduc2} with initial data $u_{0}=0.$ It provides a method to find the control function $h$ explicitly. \begin{lem}[Moment Equation]\label{coef} Let $ s\in \mathbb{R}$ and $T>0$ be given. 
If $$u_{1}(x)=\sum_{l\in \mathbb{Z}} c_{l}\psi_{l}(x)\;\;\in H^{s}_{p}(\mathbb{T})$$ is a function such that $\widehat{u_{1}}(0)=0,$ then the solution $u$ of \eqref{introduc2} with initial data $u_{0}=0$ satisfies $u(T)=u_{1}$ if and only if there exists $h\in L^{2}([0,T];H^{s}_{p}(\mathbb{T}))$ such that \begin{equation}\label{caract5} \int_{0}^{T}\left(Gh(x,t),\; \;e^{-i \lambda_{k} (T-t)} \psi_{k}(x)\right)_{L^{2}_{p}(\mathbb{T})}dt=c_{k},\;\forall\;k\in \mathbb{Z}, \end{equation} where $\lambda_{k}:=k a(k).$ \end{lem} \begin{proof} $(\Rightarrow)$ By taking $\varphi_{0}=\psi_{k}\in (H^{s}_{p}(\mathbb{T}))'$ in \eqref{adsis}, identity \eqref{solf} implies that \begin{equation*} \begin{split} \varphi(x,t) =\left(e^{-i\lambda_l (T-t)}\widehat{\psi_k}(l)\right)^{\vee} =\sum_{l\in \mathbb{Z}} e^{-i\lambda_{l}(T-t)}\widehat{\psi_k}(l)e^{ilx}=e^{-i\lambda_{k}(T-t)}\psi_{k}(x), \end{split} \end{equation*} where in the last identity we used that $\widehat{\psi_{k}}(l)=\frac{1}{\sqrt{2\pi}}\delta_{kl}$, with $\delta_{kl}$ being the Kronecker delta. Now, using \eqref{CEQ} one gets \begin{align*} \int_{0}^{T}\left(Gh(x,t),\;\varphi(x,t)\right)_{L^{2}_{p}(\mathbb{T})}dt& - \left(\sum_{l\in \mathbb{Z}} c_{l}\psi_{l}(x),\;\varphi_{0}(x)\right)_{L^{2}_{p}(\mathbb{T})}=0. \end{align*} Therefore, for any $k\in \mathbb{Z}$, \begin{align*} \int_{0}^{T}\left(Gh(x,t),\;e^{-i\lambda_{k}(T-t)}\psi_{k}(x)\right)_{L^{2}_{p}(\mathbb{T})}dt&= 2\pi \sum_{j\in \mathbb{Z}} \left(\sum_{l\in \mathbb{Z}} c_{l}\psi_{l}(x)\right)^{\wedge}(j)\;\overline{\widehat{\psi_{k}}(j)} \\ &= 2\pi \sum_{l\in \mathbb{Z}} c_{l} \widehat{\psi_{l}}(k)\;\frac{1}{\sqrt{2\pi}}\\ &=c_{k}, \end{align*} as required.\\ \noindent $(\Leftarrow)$ Now, suppose that there exists $h\in L^{2}([0,T];H^{s}_{p}(\mathbb{T}))$ such that \eqref{caract5} holds. With similar calculations as above, we obtain \begin{equation}\label{inte4} \int_{0}^{T}\left(Gh(x,t),\; \;e^{-i\lambda_{k}(T-t)}\;\psi_{k}(x)\right)_{L^{2}_{p}(\mathbb{T})}dt-\left( u_{1}(x),\;\psi_{k}(x)\right)_{L^{2}_{p}(\mathbb{T})}=0,\; \;k\in \mathbb{Z}. \end{equation} For any $\varphi_{0}\in C_{p}^{\infty}(\mathbb{T})$ we may write $$\varphi_{0}(x)=\sum_{k\in \mathbb{Z}}\sqrt{2\pi}\widehat{\varphi_{0}}(k)\;\psi_{k}(x),$$ where the series converges uniformly. Thus, using the properties of the inner product and \eqref{inte4}, we get \begin{equation}\label{inte5} \int_{0}^{T}\left(Gh(x,t),\; \varphi(x,t)\right)_{L^{2}_{p}(\mathbb{T})}dt= \left( u_{1}(x),\;\varphi_{0}(x)\right)_{L^{2}_{p}(\mathbb{T})}, \end{equation} where we used that the solution of \eqref{adsis} may be expressed as \begin{align*} \varphi(x,t)&=\sum_{k\in \mathbb{Z}} e^{-i\lambda_{k}(T-t)}\widehat{\varphi_{0}}(k)e^{ikx} \end{align*} with the series converging uniformly. By density, \eqref{inte5} holds for any $\varphi_{0}\in (H^{s}_{p}(\mathbb{T}))'$. An application of Lemma \ref{caractc} then gives the desired result. \end{proof} \begin{lem}\label{invertmatrix} For $ \psi_{k} $ as in \eqref{spi} and $G$ as in \eqref{EQ1}, define \begin{equation}\label{invertmatrix1} m_{j,k}:=\widehat{G(e^{ijx})}(k)=\int_{0}^{2\pi}G(\psi_{j})(x) \overline{\psi_{k}}(x)\;dx,\;\;\;\;j,k\in\mathbb{Z}. \end{equation} Given any finite sequence of nonzero integers $k_{j}$, $j=1,2,\ldots,n,$ let $M_n$ be the $n\times n$ matrix $$M_{n}:= \begin{pmatrix} m_{k_{1},k_{1}} & \cdots & m_{k_{1},k_{n}} \\ m_{k_{2},k_{1}} & \cdots & m_{k_{2},k_{n}} \\ \vdots & \ddots & \vdots \\ m_{k_{n},k_{1}} & \cdots & m_{k_{n},k_{n}} \\ \end{pmatrix}.
$$ Then \begin{itemize} \item [(i)] there exists a constant $\beta>0,$ depending only on $g$, such that $$m_{k,k}\geq \beta,\;\;\;\text{ for any}\; k\in\mathbb{Z}-\{0\}.$$ \item[(ii)] $m_{j,0}=0$, $j\in\mathbb{Z}$. \item [(iii)]$M_{n}$ is invertible and hermitian. \item [(iv)] there exists $\delta>0,$ depending only on $g$, such that \begin{equation}\label{posit1} \delta_{k}=\|G(\psi_{k})\|^{2}_{L^{2}(\mathbb{T})}> \delta >0,\;\;\text{for all}\;k\in \mathbb{Z}-\{0\}. \end{equation} \item [(v)] $m_{-k,k}=\overline{m_{k,-k}}$ and $m_{-k,-k}=\overline{m_{k,k}}.$ \end{itemize} \end{lem} \begin{proof} The proof of parts (i) and (iii) can be found in \cite[page 296]{Micu Ortega Rosier and Zhang}. Part (iv) was proved in \cite[page 3650]{10} (see also \cite[page 213]{1}). Parts (ii) and (v) are direct consequences of the definition in \eqref{invertmatrix1}. \end{proof} Now we give the proof of our first criterion regarding controllability of non-homogenous linear system \eqref{introduc2} stated in Theorem \ref{ControlLa}. \begin{proof}[Proof of Theorem \ref{ControlLa}] As we already discussed, it suffices to assume $u_{0}=0.$ Let us start by performing a suitable decomposition of $\mathbb{Z}$. Indeed, in view of $(H3)$ there are only finitely many integers in $\mathbb{I},$ say, $k_{j},$ $j=1,2,\cdots,n_{0}^{\ast},$ for some $n_{0}^{\ast}\in \mathbb{N},$ such that one can find another integer $k\neq k_{j}$ with $\lambda_{k}=\lambda_{k_{j}}.$ By setting $$\mathbb{I}_{j}:=\{k\in \mathbb{Z}: k\neq k_{j}, \lambda_{k}=\lambda_{k_{j}}\},\;\;\;\;j=1,2,\cdots,n_{0}^{\ast}, $$ we then get the pairwise disjoint union, \begin{equation}\label{zdecomp} \mathbb{Z}=\mathbb{I}\cup\mathbb{I}_{1}\cup\mathbb{I}_{2}\cup\cdots\cup\mathbb{I}_{n_{0}^{\ast}}. \end{equation} We now prove the theorem in six steps. \noindent {\bf{Step 1.}} The family $\{e^{-i\lambda_{k}t}\}_{k\in \mathbb{I}}$, with $\lambda_{k}=k a(k)$, is a Riesz basis for $H:=\overline{\text{span}\{e^{-i\lambda_{k}t}: k\in \mathbb{I}\}}$ in $L^{2}([0,T]).$ In fact, since $L^{2}([0,T])$ is a reflexive separable Hilbert space so is $H$. In addition, by definition, it is clear that $\{e^{-i\lambda_{k}t}\}_{k\in \mathbb{I}}$ is complete in $H$. On the other hand, from \eqref{gammaseg}-\eqref{gammalinha} and Theorem \ref{InghamG2}, there exist positive constants $A$ and $B$ such that \begin{equation}\label{indesg} A\sum_{n \in \mathbb{I}}|b_{n}|^{2}\leq \int_{0}^{T}|f(t)|^{2}dt\leq B\sum_{n \in \mathbb{I}}|b_{n}|^{2}, \end{equation} for all functions of the form $f(t)=\sum\limits_{ n \in \mathbb{I}}b_{n}e^{-i\lambda_{n}t},$ $t\in [0,T]$, with square-summable complex coefficients $b_{n}.$ In particular, if $b_{1},...,b_{N}$ are $N$ arbitrary constants we have $$\displaystyle{A\sum_{n=1}^{N}|b_{n}|^{2}\leq \left\|\sum_{n=1}^Nb_{n}e^{-i\lambda_{n}t}\right\|^{2}_{H} \leq B\sum_{n=1}^{N}|b_{n}|^{2}.}$$ Hence, an application of Theorem \ref{BasRieszTheo} gives the desired property. \noindent {\bf{Step 2.}} There exists a unique biorthogonal basis $\{q_{j}\}_{j\in \mathbb{I}}\subseteq H^{\ast}$ to $\{e^{-i\lambda_{k}t}\}_{k\in \mathbb{I}}$. Indeed, Step 1 and Theorem \ref{BasRieszTheo} implies that $\{e^{-i\lambda_{k}t}\}_{k\in \mathbb{I}}$ is a complete Bessel sequence and possesses a biorthogonal system $\{q_{j}\}_{j\in \mathbb{I}}$ which is also a complete Bessel sequence. Moreover, Corollary 5.22 in {\cite[page 171]{9}} implies that $\{q_{j}\}_{j\in \mathbb{I}}$ is also a basis for $H$ (after identifying $H^*$ and $H$). 
So, from Lemma 5.4 {\cite[page 155]{9}}, we get that $\{e^{-i\lambda_{k}t}\}_{k\in \mathbb{I}}$ is a minimal sequence in $H$; and, hence, exact (see \cite[Definition 5.3]{9}). Finally, Lemma 5.4 in {\cite[page 155]{9}} gives that $\{q_{j}\}_{j\in \mathbb{I}}$ is the unique biorthogonal basis to $\{e^{-i\lambda_{k}t}\}_{k\in \mathbb{I}}$. Note that an immediate consequence is that \begin{equation}\label{dualbg} (e^{-i\lambda_{k}t}\;,\;q_{j})_{H}=\int_{0}^{T}e^{-i\lambda_{k}t}\overline{q_{j}}(t)\;dt=\delta_{kj},\;\;\;k,j\in \mathbb{I}, \end{equation} where $\delta_{kj}$ represents the Kronecker delta. \noindent {\bf{Step 3.}} Here we will define the appropriate control function $h.$ In fact, let $\{q_{j}\}_{j\in \mathbb{I}}$ be the sequence obtained in Step 2. The next step is to extend the sequence $q_{j}$ for $j$ running on $\mathbb{Z}$. In view of \eqref{zdecomp} it remains to define this sequence for indices in $\mathbb{I}_j$, $j=1,\cdots, n_{0}^{\ast}$. Furthermore, $(H2)$ gives that $\mathbb{I}_{j}$ contains at most $n_{0}-1$ elements. Without loss of generality, we may assume that all multiple eigenvalues have multiplicity $n_0$; otherwise we may repeat the procedure below according to the multiplicity of each eigenvalue. Thus we write \begin{equation}\label{Ijdef} \mathbb{I}_{j}=\{k_{j,1}, k_{j,2}, k_{j,3}, \cdots,k_{j,n_{0}-1}\},\;\;\;\;j=1,2,\cdots,n_{0}^{\ast}. \end{equation} To simplify notation, here and in what follows we use $k_{j,0}$ for $k_j$. Given $k_{j,l}\in \mathbb{I}_{j}$ we define $q_{k_{j,l}}:=q_{k_{j,0}}=q_{k_{j}}$. At this point recall that $\lambda_{k_{j,l}}=\lambda_{k_{j}}$ for any $j=1,2,\cdots,n_{0}^{\ast}$ and $l=0,1,2, \cdots, n_{0}-1.$ Having defined $q_j$ for all $j\in\mathbb{Z}$, we now define the control function $h$ by \begin{equation}\label{thecontrol} h (x,t)=\sum_{j \in \mathbb{Z}} h_{j}\;\overline{q_{j}}(t)\;\psi_{j}(x), \end{equation} for suitable coefficients $h_{j}$'s to be determined later. From the definition of $G$, we obtain \begin{equation}\label{thecontrol1g} {\footnotesize \begin{split} \int\limits_{0}^{T} \left(G(h)(x,t),\; e^{-i\lambda_{k}(T-t)} \psi_{k}(x)\right)_{L^{2}_{p}(\mathbb{T})}dt&= \int\limits_{0}^{T}\left(\sum_{j\in \mathbb{Z}}h_{j}\overline{q_{j}}(t)G(\psi_{j})(x,t),\; e^{-i\lambda_{k}(T-t)} \psi_{k}(x) \right)_{L^{2}_{p}(\mathbb{T})}dt\\ &= \sum_{j\in \mathbb{Z}}h_{j}\int\limits_{0}^{T}\overline{q_{j}}(t) e^{i\lambda_{k}(T-t)} \;dt\left(G(\psi_{j})(x),\; \psi_{k}(x)\right)_{L^{2}_{p}(\mathbb{T})}\\ &=\sum_{j\in \mathbb{Z}}h_{j} e^{i\lambda_{k}T} m_{j,k} \int\limits_{0}^{T}\overline{q_{j}}(t)e^{-i\lambda_{k}t}\,dt, \end{split}} \end{equation} with $m_{j,k}$ defined in \eqref{invertmatrix1}. \noindent {\bf{Step 4.}} Here we find $h_{j}$'s such that $h$ defined by \eqref{thecontrol} serves as the required control function. First of all, note that in order to prove the first part of the theorem, identity \eqref{thecontrol1g} and Lemma \ref{coef} yield that it suffices to choose $h_{j}$'s such that \begin{equation}\label{h formg} c_{k}=\sum_{j\in \mathbb{Z}}h_{j}e^{i\lambda_{k}T}m_{j,k} \int_{0}^{T}\overline{q_{j}}(t)e^{-i\lambda_{k}t}\;dt, \end{equation} where $u_{1}(x)=\sum_{n\in \mathbb{Z}} c_{n}\psi_{n}(x)$. We will show now that we may indeed choose $h_j$'s satisfying \eqref{h formg}. To see this, first observe that, since $c_0=0$, part (ii) in Lemma \ref{invertmatrix} implies that \eqref{h formg} holds for $k=0$ independently of $h_{j}$'s. In particular, we may choose $h_0=0$. 
Next, from \eqref{dualbg}, if $$k\in \widetilde{\mathbb{I}}:=\mathbb{I}-\{k_1,\ldots,k_{n_0^*}\}$$ we see that \eqref{h formg} reduces to $$c_{k}=h_{k}m_{k,k}\;e^{i\lambda_{k}T}.$$ Hence, in view of part (iii) in Lemma \ref{invertmatrix}, we have \begin{equation}\label{hform3g} h_{k}=\frac{c_{k}\;\; e^{-i\lambda_{k}T}}{m_{k,k}}, \qquad k\in\widetilde{\mathbb{I}}. \end{equation} On the other hand, if $k\in\mathbb{Z}-\widetilde{\mathbb{I}}$ then $k=k_{j,l_0}$ for some $j\in\{1, \ldots,n_0^*\}$ and $l_0\in\{0,1,\ldots,n_0-1\}$. Since $\lambda_{k}=\lambda_{k_{j,l_0}}=\lambda_{k_j}$, the integral in \eqref{h formg} is zero, except for those indices in $\mathbb{I}_{j}\cup\{k_j\}$. In particular, \eqref{h formg} reduces to \begin{equation}\label{ckl} \displaystyle{c_{k_{j,l_0}}=c_k=\sum_{l=0}^{n_{0}-1}h_{k_{j,l}} m_{k_{j,l},k_{j,l_0}}e^{i\lambda_{k_{j,l_0}}T}}. \end{equation} When $l_0$ runs over the set $\{0,1,\ldots,n_0-1\}$, the equations in \eqref{ckl} may be seen as a linear system for $h_{k_{j,l}}$ (with $j$ fixed) whose unique solution is \begin{equation}\label{hform4g} { \begin{pmatrix} h_{k_{j,0}}\\ h_{k_{j,1}}\\ \vdots\\ h_{k_{j,n_{0}-1}} \end{pmatrix}^{\top}= \begin{pmatrix} c_{k_{j,0}} e^{-i\lambda_{k_{j,0}}T}\\ c_{k_{j,1}} e^{-i\lambda_{k_{j,1}}T}\\ \vdots\\ c_{k_{j,n_{0}-1}} e^{-i\lambda_{k_{j,n_{0}-1}}T}\\ \end{pmatrix}^{\top} M_{j}^{-1},\;\;\text{for} \; j=1,2,\cdots,n_{0}^{\ast},} \end{equation} where $$M_{j}= \begin{pmatrix} m_{k_{j,0},k_{j,0}} & m_{k_{j,0},k_{j,1}} & \cdots& m_{k_{j,0},k_{j,n_{0}-1}} \\ m_{k_{j,1},k_{j,0}} & m_{k_{j,1},k_{j,1}} & \cdots & m_{k_{j,1},k_{j,n_{0}-1}} \\ \vdots& & \ddots &\vdots\\ m_{k_{j,n_{0}-1},k_{j,0}} & m_{k_{j,n_{0}-1},k_{j,1}} & \cdots & m_{k_{j,n_{0}-1},k_{j,n_{0}-1}} \\ \end{pmatrix} .$$ Since from Lemma \ref{invertmatrix} the matrix $M_j$ is invertible, equation \eqref{hform4g} makes sense. Consequently, for any $j\in\mathbb{Z}=\mathbb{I}\cup\mathbb{I}_{1}\cup\mathbb{I}_{2}\cup\cdots\cup\mathbb{I}_{n_{0}^{\ast}}$, we may choose $h_{j}$'s according to \eqref{hform3g} and \eqref{hform4g}. \noindent {\bf{Step 5.}} The function $h$ defined by \eqref{thecontrol} with $h_{0}=0$ and $h_{k}$, $k\neq 0$, given by \eqref{hform3g} and \eqref{hform4g} belongs to $L^{2}([0,T];H_{p}^{s}(\mathbb{T}))$. Indeed, recall from Step 2 that $\{q_{j}\}_{j\in \mathbb{I}}$ is a Riesz basis for $H$. Thus, from Theorem \ref{BasRieszTheo} part (3), it follows that $\{q_{j}\}_{j\in \mathbb{I}}$ is a bounded sequence in $L^2([0,T])$. Consequently, $\{q_{j}\}_{j\in \mathbb{Z}}$ is also bounded in $L^2([0,T])$. Hence, by using the explicit representation in \eqref{thecontrol}, we deduce \begin{align}\label{hes} \begin{split} \|h\|^{2}_{L^{2}([0,T];H_{p}^{s}(\mathbb{T}))} &=\frac{1}{2\pi}\sum_{k\in \mathbb{Z}} (1+|k|)^{2s}|h_{k}|^2 \int_{0}^{T}|q_{k}(t)|^{2}\;dt\\ &\leq C \sum_{k\in \mathbb{Z}}(1+|k|)^{2s}|h_{k}|^{2}, \end{split} \end{align} for some positive constant $C$. Thus, from identity \eqref{hform3g} and Lemma \ref{invertmatrix} part (ii) we obtain \begin{equation}\label{hform5g} \begin{split} \|h\|^{2}_{L^{2}([0,T];H_{p}^{s}(\mathbb{T}))} &\leq C {\sum_{ k\in \widetilde{\mathbb{I}},k\neq 0}} (1+|k|)^{2s}\left|\frac{c_{k} e^{-i\lambda_{k}T}}{m_{k,k}}\right|^{2} + C \sum_{k\in\mathbb{Z}-\widetilde{\mathbb{I}}}(1+|k|)^{2s}|h_{k}|^{2} \\ &\leq \frac{C}{\beta^{2}}{\sum_{ k\in \widetilde{\mathbb{I}},k\neq 0}} (1+|k|)^{2s}\left|c_{k}\right|^{2} + C \sum_{k\in\mathbb{Z}-\widetilde{\mathbb{I}}}(1+|k|)^{2s}|h_{k}|^{2}. 
\end{split} \end{equation} Since $u_1\in H^s_p(\mathbb{T})$ the above series converges. In addition, since the set $\mathbb{Z}-\widetilde{\mathbb{I}}$ is finite we conclude that the right-hand side of \eqref{hform5g} is finite, implying that $h$ belongs to $L^{2}([0,T];H_{p}^{s}(\mathbb{T}))$. In order to complete the proof of the theorem it remains to establish \eqref{hboun}. \noindent {\bf{Step 6.}} Estimate \eqref{hboun} holds. From Step 5 we see that we need to estimate the second term on the right-hand side of \eqref{hform5g}. So, fix some nonzero $k\in\mathbb{Z}-\widetilde{\mathbb{I}}$. We may write $k=k_{j,l}$ for some $l=0,1,2,\cdots,n_{0}-1$ and $j=1,2,\cdots,n_{0}^{\ast}$. From \eqref{hform4g} we infer \begin{align*} \begin{split} |h_{k_{j,l}}|^{2}&\leq \sum_{m=0}^{n_{0}-1}|h_{k_{j,m}}|^{2} \leq \left( \sum_{m=0}^{n_{0}-1}\left|c_{k_{j,m}}e^{-i\lambda_{k_{j,m}}T}\right|^{2} \right)\|M_{j}^{-1}\|^{2} \leq \|M_{j}^{-1}\|^{2}\sum_{m=0}^{n_{0}-1}|c_{k_{j,m}}|^{2}, \end{split} \end{align*} where $\|M_{j}^{-1}\|$ is the Euclidean norm of the matrix $M_{j}^{-1}.$ This implies that \begin{equation*} \begin{split} (1+|k_{j,l}|)^{2s}|h_{k_{j,l}}|^{2} & \leq\sum_{m=0}^{n_{0}-1}\|M_{j}^{-1}\|^{2} \frac{(1+|k_{j,l}|)^{2s}}{(1+|k_{j,m}|)^{2s}}(1+|k_{j,m}|)^{2s}|c_{k_{j,m}}|^{2}\\ &\leq C(s)\sum_{m=0}^{n_{0}-1}(1+|k_{j,m}|)^{2s}|c_{k_{j,m}}|^{2}, \end{split} \end{equation*} with $$\displaystyle{C(s)=\underset{m,l=0,1,2,\cdots, n_{0}-1}{\max_{j=1,2,...,n_{0}^{\ast}}} \left\{\|M_{j}^{-1}\|^{2}\frac{(1+|k_{j,l}|)^{2s}}{(1+|k_{j,m}|)^{2s}}\right\} }.$$ Therefore, \begin{equation}\label{hform7g} \begin{split} \sum_{k\in\mathbb{Z}-\widetilde{\mathbb{I}}}(1+|k|)^{2s}|h_{k}|^{2}&=\sum_{j=1}^{n_0^*}\sum_{l=0}^{n_0-1}(1+|k_{j,l}|)^{2s}|h_{k_{j,l}}|^2\\ &\leq C(s)n_0\sum_{j=1}^{n_0^*}\sum_{m=0}^{n_0-1}(1+|k_{j,m}|)^{2s}|c_{k_{j,m}}|^2\\ &=C(s)n_0\sum_{k\in\mathbb{Z}-\widetilde{\mathbb{I}}}(1+|k|)^{2s}|c_{k}|^{2}. \end{split} \end{equation} Gathering together \eqref{hform5g} and \eqref{hform7g}, we obtain \begin{equation}\label{cota} \begin{split} \|h\|^{2}_{L^{2}([0,T];H_{p}^{s}(\mathbb{T}))} &\leq \frac{C}{\beta^{2}}{\sum_{ k\in \widetilde{\mathbb{I}},k\neq 0}} (1+|k|)^{2s}\left|c_{k}\right|^{2} + C C(s)n_0\sum_{k\in\mathbb{Z}-\widetilde{\mathbb{I}}}(1+|k|)^{2s}|c_{k}|^{2}\\ &\leq \nu^2\|u_1\|^2_{H^s_p(\mathbb{T})}, \end{split} \end{equation} where $\displaystyle{\nu^{2}=\max\left\{\frac{ C }{\beta^{2}},\; n_{0}CC(s)\right\}}$. This completes the proof of the theorem. \end{proof} Now we prove our second criterion regarding controllability of the non-homogeneous linear system \eqref{introduc2}, stated in Theorem \ref{ControlLag}. \begin{proof}[Proof of Theorem \ref{ControlLag}] The proof is similar to that of Theorem \ref{ControlLa}. So we present only the necessary changes and estimates. As before, we assume $u_{0}=0.$ In view of $(H4),\;(H5)$ we may find finitely many integers in $\mathbb{J},$ say, $k_{j},$ $j=1,2,\cdots,n_{0}^{\ast},$ for some $n_{0}^{\ast}\in \mathbb{N},$ with $n_{0}^{\ast}\leq 2 k_{1}^{\ast}-1,$ such that one can find another integer $k\neq k_{j}$ with $\lambda_{k}=\lambda_{k_{j}}.$ Let $$\mathbb{J}_{j}:=\{k\in \mathbb{Z}: k\neq k_{j}, \lambda_{k}=\lambda_{k_{j}}\},\;\;\;\;j=1,2,\cdots,n_{0}^{\ast},$$ and $$\mathbb{J}^{-}:= \{ k\in \mathbb{Z}: k\leq -k_{1}^{\ast} \}.$$ Then we obtain the pairwise disjoint decomposition \begin{equation}\label{zde1} \mathbb{Z}=\mathbb{J}^{-} \cup\mathbb{J}\cup\mathbb{J}_{1}\cup\mathbb{J}_{2}\cup\cdots\cup\mathbb{J}_{n_{0}^{\ast}}.
\end{equation} Again, we may prove the theorem into six steps. \noindent {\bf{Step 1.}} The family $\{e^{-i\lambda_{k}t}\}_{k\in \mathbb{J}}$, with $\lambda_{k}=k a(k)$, is a Riesz basis for $H:=\overline{\text{span}\{e^{-i\lambda_{k}t}: k\in \mathbb{J}\}}$ in $L^{2}([0,T])$. In fact, this is a consequence of \eqref{gammasegg}-\eqref{gammalinhag}, Theorem \ref{InghamG2}, and Theorem \ref{BasRieszTheo}. \noindent {\bf{Step 2.}} There exists a unique biorthogonal basis $\{q_{j}\}_{j\in \mathbb{J}}\subseteq H^{\ast}$ to $\{e^{-i\lambda_{k}t}\}_{k\in \mathbb{J}}$. This is a consequence of Theorem \ref{BasRieszTheo}, Corollary 5.22 in \cite{9}, and Lemma 5.4 \cite{9}. Furthermore, we have that \begin{equation}\label{dualbgg} (e^{-i\lambda_{k}t}\;,\;q_{j})_{H}=\int_{0}^{T}e^{-i\lambda_{k}t}\overline{q_{j}}(t)\;dt=\delta_{kj},\;\;\;k,j\in \mathbb{J}. \end{equation} \noindent {\bf{Step 3.}} Here we will define an adequate control function $h.$ As in \eqref{thecontrol}, for suitable coefficients $h_j$ to be determined later we set \begin{equation}\label{thecontrolg} h(x,t)=\sum_{j \in \mathbb{Z}} h_{j}\;\overline{q_{j}}(t)\;\psi_{j}(x), \end{equation} where, according to the decomposition \eqref{zde1}, the sequence $\{q_k\}_{k\in\mathbb{Z}}$ is defined as follows: if $k\in \mathbb{J}$ then $q_k$ is given in Step 2; if $k\in \mathbb{J}_j$ for some $j\in\{1,\ldots,n_0^*\}$ then by writing (assuming that all multiple eigenvalues have multiplicity $n_0$) $$ \mathbb{J}_{j}=\{k_{j,1}, k_{j,2}, k_{j,3}, \cdots,k_{j,n_{0}-1}\}, $$ and denoting $k_{j}$ by $k_{j,0}$ we set $$ q_k=q_{k_{j,l}}:=q_{k_{j,0}}=q_{k_{j}}. $$ Finally, if $k\in \mathbb{J}^-$ then we set $$ q_k=q_{-k}. $$ With this choice of $\{q_k\}_{k\in\mathbb{Z}}$, as in \eqref{thecontrol1g} we have \begin{equation}\label{thecontrol1gg} \int_{0}^{T}\left(G(h)(x,t),\; e^{-i\lambda_{k}(T-t)} \psi_{k}(x)\right)_{L^{2}_{p}(\mathbb{T})}dt =\sum_{j\in \mathbb{Z}}h_{j} e^{i\lambda_{k}T} m_{j,k} \int_{0}^{T}\overline{q_{j}}(t)e^{-i\lambda_{k}t}\;dt. \end{equation} \noindent {\bf{Step 4.}} In this step we find $h_{j}$'s such that $h$ defined by \eqref{thecontrolg} serves as the required control function. By writing $u_{1}(x)=\sum_{n\in \mathbb{Z}} c_{n}\psi_{n}(x)$, it is enough to consider $h_{j}$'s satisfying \begin{equation}\label{h formgg} c_{k}=\sum_{j\in \mathbb{Z}}h_{j}e^{i\lambda_{k}T}m_{j,k} \int_{0}^{T}\overline{q_{j}}(t)e^{-i\lambda_{k}t}\;dt. \end{equation} From Lemma \ref{invertmatrix} part (ii) we may take $h_0=0$. To see that we can choose $h_j$ such that \eqref{h formgg} holds let us start by defining the following sets of indices $$ \mathbb{J}^+:=\left\{k\in \mathbb{Z}\,:\, k\geq k_1^*\right\}, $$ $$ \widetilde{\mathbb{J}}=\left\{k\in \mathbb{Z}\,: k= k_{j,l};\;\; l=0,1,2,\cdots,n_{0}-1,\;\;\;j=1,2,\cdots,n_{0}^{\ast}\right\}, $$ and $$ \widetilde{\mathbb{I}}=\left\{k\in \mathbb{Z}\,: \, k\notin \mathbb{J}^{+}\cup \mathbb{J}^{-} \;\text{and}\;k\notin\widetilde{\mathbb{J}} \right\}. $$ It is clear that $\mathbb{Z}=\widetilde{\mathbb{I}}\cup \widetilde{\mathbb{J}}\cup \mathbb{J}^+\cup \mathbb{J}^-$. In addition, note that $\widetilde{\mathbb{I}}$ is nothing but the set of those indices for which the corresponding eigenvalue is simple. Without loss of generality we will assume that $\widetilde{\mathbb{I}}$ is nonempty; otherwise, this part has no contribution and these indices do not appear in \eqref{h formgg}. 
The idea now is to obtain $h_k$ according to $k\in \widetilde{\mathbb{I}}$, $k\in \widetilde{\mathbb{J}}$, or $k\in \mathbb{J}^+\cup \mathbb{J}^-$. From \eqref{dualbgg} we see that \eqref{h formgg} reduces to $$c_{k}=h_{k}m_{k,k}\;e^{i\lambda_{k}T},\;\;\;k\in \widetilde{\mathbb{I}}.$$ Therefore, \begin{equation}\label{hform3gg} h_{k}=\frac{c_{k}\;\; e^{-i\lambda_{k}T}}{m_{k,k}},\;\;k\in \widetilde{\mathbb{I}}. \end{equation} Next, if $k\in \widetilde{\mathbb{J}}$, then $k=k_{j,l_0}$ for some $j\in\{1, \ldots,n_0^*\}$ and $l_0\in\{0,1,\ldots,n_0-1\}$. Thus, as in Step 4 of Theorem \ref{ControlLa}, we see that $${c_{k_{j,l_0}}=\sum_{l=0}^{n_{0}-1}h_{k_{j,l}} m_{k_{j,l},k_{j,l_0}}e^{i\lambda_{k_{j,l_0}}T}}. $$ By solving the above system for $h_{k_{j,l}}$ (with $j$ fixed and $l_0$ running over $\{0,1,\ldots,n_0-1\}$) we find \begin{equation}\label{hform4gg} \begin{pmatrix} h_{k_{j,0}}\\ h_{k_{j,1}}\\ \vdots\\ h_{k_{j,n_{0}-1}} \end{pmatrix}^{\top}= \begin{pmatrix} c_{k_{j,0}} e^{-i\lambda_{k_{j,0}}T}\\ c_{k_{j,1}} e^{-i\lambda_{k_{j,1}}T}\\ \vdots\\ c_{k_{j,n_{0}-1}} e^{-i\lambda_{k_{j,n_{0}-1}}T}\\ \end{pmatrix}^{\top} \tilde{M}_{j}^{-1},\;\; \; j=1,2,\cdots,n_{0}^{\ast}, \end{equation} where $$\tilde{M}_{j}= \begin{pmatrix} m_{k_{j,0},k_{j,0}} & m_{k_{j,0},k_{j,1}} & \cdots& m_{k_{j,0},k_{j,n_{0}-1}} \\ m_{k_{j,1},k_{j,0}} & m_{k_{j,1},k_{j,1}} & \cdots & m_{k_{j,1},k_{j,n_{0}-1}} \\ \vdots& & \ddots &\vdots\\ m_{k_{j,n_{0}-1},k_{j,0}} & m_{k_{j,n_{0}-1},k_{j,1}} & \cdots & m_{k_{j,n_{0}-1},k_{j,n_{0}-1}} \\ \end{pmatrix}.$$ Finally, if $k\in \mathbb{J}^{+}$ we have $-k \in \mathbb{J}^{-}$ and $I(k)=\{k,-k\}$. We deduce from \eqref{h formgg} that \[ \begin{cases} \displaystyle{c_{k}=h_{k} e^{i\lambda_{k}T} m_{k,k} + h_{-k} e^{i\lambda_{k}T} m_{-k,k}}, \\ \displaystyle{c_{-k}=h_{k} e^{i\lambda_{-k}T} m_{k,-k} + h_{-k} e^{i\lambda_{-k}T} m_{-k,-k}}. \end{cases} \] Solving this system for $h_k$ and $h_{-k}$ we obtain \begin{equation}\label{even} \begin{pmatrix} h_{k}\\ h_{-k}\\ \end{pmatrix}^{\top} =\begin{pmatrix} c_{k} e^{-i\lambda_{k}T}\\ c_{-k} e^{-i\lambda_{-k}T}\\ \end{pmatrix}^{\top}M^{-1},\quad k\in \mathbb{J}^{+}, \end{equation} where $$M^{-1}=\frac{1}{d_k} \begin{pmatrix} { m_{-k,-k} } & -{ m_{k,-k} } \\ -{ m_{-k,k} } & { m_{k,k} } \\ \end{pmatrix}, \qquad d_{k}=m_{k,k}m_{-k,-k}-m_{k,-k}m_{-k,k}. $$ Summarizing the above construction, we see that, for any $k\in \mathbb{Z}$, we may choose $h_{k}$ according to \eqref{hform3gg}, \eqref{hform4gg}, and \eqref{even}. Next we observe that the matrix $M^{-1}$ is bounded uniformly with respect to $k$. Indeed, from Lemma \ref{invertmatrix} part (v) we infer that $d_{k}=|m_{k,k}|^{2}-|m_{k,-k}|^{2}.$ Now, from the definition of $G$, $$ {\normalsize \begin{split} m_{k,-k}&= \frac{1}{2\pi}\left[\int_{0}^{2\pi}g(x)e^{i2kx}dx- \left(\int_{0}^{2\pi}g(x)e^{ikx}dx\right) \left(\int_{0}^{2\pi}g(y)e^{iky}dy\right)\right]\\ &=\widehat{g}(-2k)-2\pi[\widehat{g}(-k)]^2. \end{split}} $$ Since $g$ is smooth, using the Riemann-Lebesgue lemma we obtain $$ \lim_{k\rightarrow +\infty}|m_{k,-k}|=0. $$ On the other hand, in view of \eqref{gcondition}, for any $k\neq0$, $$m_{k,k}=\frac{1}{2\pi}\left(\int_{0}^{2\pi}g(x)dx-\left|\int_{0}^{2\pi}g(x)e^{ikx}dx\right|^{2}\right)=\frac{1}{2\pi}-|\widehat{g}(-k)|^2,$$ and, hence, $$ \lim_{k\rightarrow +\infty}m_{k,k}=\frac{1}{2\pi}. $$ Since $$ \lim_{k\rightarrow +\infty}d_k=\frac{1}{4\pi^2}, $$ we may assume, without loss of generality, that $d_k>\frac{1}{8\pi^2}$, for any $k\geq k_1^*$.
Therefore, there exists $D>0$, independent of $k\in \mathbb{J}^+$, such that \begin{equation}\label{determ2} \|M^{-1}\|\leq D, \end{equation} where $\|M^{-1}\|$ is the Euclidean norm of the matrix $M^{-1}.$ \noindent {\bf{Step 5.}} The control function $h$ defined by \eqref{thecontrolg} with $h_{0}=0,$ and $h_{k}$, $k\neq0$, given by \eqref{hform3gg}, \eqref{hform4gg}, and \eqref{even} belongs to $L^{2}([0,T];H_{p}^{s}(\mathbb{T}))$. Indeed, as in \eqref{hes} we obtain \begin{align*} \begin{split} \|h\|^{2}_{L^{2}([0,T];H_{p}^{s}(\mathbb{T}))} &\leq C \sum_{k\in \mathbb{Z}}(1+|k|)^{2s}\;|h_{k}|^{2}, \end{split} \end{align*} for some positive constant $C$. Next, in the series above we split the sum according to $k\in \widetilde{\mathbb{I}}$, $k\in \widetilde{\mathbb{J}}$ or $k\in \mathbb{J}^{+}\cup \mathbb{J}^{-}$. Thus, we may write \begin{equation}\label{hform5gg} \begin{split} \|h\|^{2}_{L^{2}([0,T];H_{p}^{s}(\mathbb{T}))} &\leq C {\sum_{ k\in \widetilde{\mathbb{I}}}} (1+|k|)^{2s}\left|h_{k}\right|^{2} + C{\sum_{ k\in \widetilde{\mathbb{J}}}} (1+|k|)^{2s}\left|h_{k}\right|^{2}\\ &\quad + C \sum_{k\in \mathbb{J}^{+}\cup\mathbb{J}^{-} }(1+|k|)^{2s}|h_{k}|^{2}. \end{split} \end{equation} The first two terms on the right-hand side of \eqref{hform5gg} may be estimated as in Theorem \ref{ControlLa} (see \eqref{hform7g}). Thus, \begin{equation}\label{hform5ggg} \begin{split} \|h\|^{2}_{L^{2}([0,T];H_{p}^{s}(\mathbb{T}))} &\leq \frac{C}{\beta^2} {\sum_{ k\in \widetilde{\mathbb{I}}}} (1+|k|)^{2s}\left|c_{k}\right|^{2} + CC(s)n_0 {\sum_{ k\in \widetilde{\mathbb{J}}}} (1+|k|)^{2s}\left|c_{k}\right|^{2}\\ &\quad + C \sum_{k\in \mathbb{J}^{+}\cup\mathbb{J}^{-} }(1+|k|)^{2s}|h_{k}|^{2}, \end{split} \end{equation} where $\displaystyle{C(s)=\underset{m,l=0,1,2,\cdots, n_{0}-1}{\max_{j=1,2,...,n_{0}^{\ast}}} \left\{\|\tilde{M}_{j}^{-1}\|^{2}\frac{(1+|k_{j,l}|)^{2s}}{(1+|k_{j,m}|)^{2s}}\right\} }.$\\ For the last term in \eqref{hform5ggg}, identity \eqref{even} and \eqref{determ2} imply that for any $ k\in \mathbb{J}^{+}$, \begin{align*} \begin{split} |h_{k}|^{2}& \leq \left( \left|c_{k}\right|^{2} + \left|c_{-k}\right|^{2}\right)\|M^{-1}\|^{2} \leq \left( \left|c_{k}\right|^{2} + \left|c_{-k}\right|^{2}\right)D^{2}, \end{split} \end{align*} and \begin{equation}\label{even1} \begin{split} (1+|k|)^{2s}|h_{k}|^{2}&\leq D^{2} (1+|k|)^{2s}\left|c_{k}\right|^{2} + D^{2} (1+|-k|)^{2s} \left|c_{-k}\right|^{2}. \end{split} \end{equation} Since the right-hand side of \eqref{even1} is symmetric with respect to $k$, the same estimate holds for $k\in \mathbb{J}^{-},$ from which we deduce that \begin{equation}\label{even2} \sum_{k\in \mathbb{J}^{+}\cup\mathbb{J}^{-} }(1+|k|)^{2s}|h_{k}|^{2}\leq 2D^2 \sum_{k\in \mathbb{J}^{+}\cup\mathbb{J}^{-} }(1+|k|)^{2s}|c_{k}|^{2}. \end{equation} Combining \eqref{hform5ggg} with \eqref{even2} we get that $h$ belongs to $L^{2}([0,T];H_{p}^{s}(\mathbb{T}))$. \noindent {\bf{Step 6.}} Estimate \eqref{hboun} holds. In view of \eqref{hform5ggg} and \eqref{even2}, we obtain \begin{equation*} \begin{split} \|h\|^{2}_{L^{2}([0,T];H_{p}^{s}(\mathbb{T}))} &\leq \frac{C}{\beta^2} {\sum_{ k\in \widetilde{\mathbb{I}}}} (1+|k|)^{2s}\left|c_{k}\right|^{2} + CC(s)n_0 {\sum_{ k\in \widetilde{\mathbb{J}}}} (1+|k|)^{2s}\left|c_{k}\right|^{2}\\ &\quad + 2D^2C \sum_{k\in \mathbb{J}^{+}\cup\mathbb{J}^{-} }(1+|k|)^{2s}|c_{k}|^{2}\\ &\leq \nu^2\,\|u_1\|_{H_{p}^{s}(\mathbb{T})}^2, \end{split} \end{equation*} where $\nu^{2}=\max\left\{\frac{C}{\beta^{2}}, \;n_{0}CC(s),\; 2CD^{2}\right\}$.
This completes the proof of the theorem. \end{proof} \begin{rem} The dependence of $\nu$ with respect to $T$ is implicit in the constant $C$, which may depend on the time $T$. \end{rem} As an immediate consequence of Theorems \ref{ControlLa} and \ref{ControlLag} we get the following corollary. \begin{cor} \label{controloperator} For $s\in \mathbb{R}$ and $T> \frac{2\pi}{\gamma'}$ given, there exists a unique bounded linear operator $$\left\{\begin{array}{lclc} \Phi:& H_{p}^{s}(\mathbb{T})\times H_{p}^{s}(\mathbb{T}) & \longrightarrow & L^{2}([0,T];H_{p}^{s}(\mathbb{T}))\\ & (u_0,u_1)& \longmapsto &\Phi(u_{0},u_{1}):=h\\ \end{array} \right. $$ such that \begin{equation}\label{cont} u_{1}=U(T)u_{0}+\int_{0}^{T}U(T-s)(G(\Phi(u_{0},u_{1})))(\cdot,s)\;ds \end{equation} and \begin{equation}\label{oprestima} \|\Phi(u_{0},u_{1})\|_{L^{2}([0,T];H_{p}^{s}(\mathbb{T}))} \leq \nu\; (\|u_{0}\|_{H_{p}^{s}(\mathbb{T})} +\|u_{1}\|_{H_{p}^{s}(\mathbb{T})}), \end{equation} for some positive constant $\nu$. \end{cor} We end this section by recalling Corollary \ref{controloperator1} to obtain the observability inequality, which in turn plays a fundamental role in obtaining the exponential stabilization with an arbitrary decay rate. \begin{cor} \label{controloperator1g} Let $s\in\mathbb{R}$ and $T> \frac{2\pi}{\gamma'}$ be given. There exists $\delta>0$ such that $$\int_{0}^{T}\|G^{\ast}U(\tau)^{\ast}\phi^*\| ^{2}_{(H^{s}_{p}(\mathbb{T}))'} \; d\tau \geq \delta^{2} \|\phi^*\|^{2}_{(H^{s}_{p}(\mathbb{T}))'}, $$ for any $\phi^* \in (H^{s}_{p}(\mathbb{T}))'.$ \end{cor} \begin{rem} If $\gamma'=+\infty$ or $\tilde{\gamma}'=+\infty$ then Corollaries \ref{controloperator} and \ref{controloperator1g} are valid for any positive time $T.$ \end{rem} \section{Proof of Theorems \ref{st351} and \ref{estabilization}}\label{section6} This section is devoted to proving the exponential stabilization results. Once the observability inequality in Corollary \ref{controloperator1g} is available, it is well known that it implies the stabilization; so we just give the main steps. First, recall that we are dealing with the equation \begin{equation}\label{eq1} \partial_{t}u= \partial_{x}\mathcal{A}u+Gh. \end{equation} Since any solution of \eqref{eq1} preserves its mass, without loss of generality, one can assume that the initial data $u_{0}$ satisfies $\widehat{u_{0}}(0)=0$ (otherwise, we perform the change of variables $\tilde{u}=u-\widehat{u_{0}}(0)$). Thus, it is enough to study the stabilization problem in $H_{0}^{s}(\mathbb{T})$, $s\in \mathbb{R}.$ The idea to prove Theorems \ref{st351} and \ref{estabilization} is to show the existence of a bounded linear operator, say, $K_1$ on $H_{0}^{s}(\mathbb{T})$ such that $$ h=K_1u $$ serves as the feedback control law. So, we study the stabilization problem for the system \begin{equation}\label{atabilizationL2} \begin{cases} u\in C([0,+\infty);H_{0}^{s}(\mathbb{T})),\\ \partial_{t}u=\partial_{x}\mathcal{A}u+GK_1u \in H_{0}^{s-r}(\mathbb{T}),\quad t>0,\\ u(0)=u_{0}\in H_{0}^{s}(\mathbb{T}). \end{cases} \end{equation} First, we prove that system \eqref{atabilizationL2} is globally well-posed in $H_{0}^{s}(\mathbb{T})$, $s\in \mathbb{R}$. \begin{thm}\label{solsta1} Let $u_{0}\in H_{0}^{r}(\mathbb{T}),$ with $r$ as in \eqref{limite}.
Then the IVP \eqref{atabilizationL2} has a unique solution $$u\in C([0,\infty);H_{0}^{r}(\mathbb{T})) \cap C^{1}([0,\infty);L^{2}_{0}(\mathbb{T})).$$ Moreover, if $u_{0}\in H_{0}^{s}(\mathbb{T}),$ then we have $u\in C([0,\infty);H_{0}^{s}(\mathbb{T})),$ for any $s \in \mathbb{R}.$ \end{thm} \begin{proof} Since $\partial_{x}\mathcal{A}$ is the infinitesimal generator of a $C_{0}$-semigroup $\{U(t)\}_{t\geq 0}$ in $H_{0}^{s}(\mathbb{T})$ and $GK_1$ is a bounded linear operator on $H^{s}_{0}(\mathbb{T})$, we have that $\partial_{x}\mathcal{A}+GK_1$ is also the infinitesimal generator of a $C_{0}$-semigroup on $H^{s}_{0}(\mathbb{T})$ (see, for instance, \cite[page 76]{4}). Thus the result is a consequence of semigroup theory. \end{proof} As we will see, Theorems \ref{st351} and \ref{estabilization} are consequences of the following result. \begin{thm}\label{st35} Let $s\in\mathbb{R}$ be given and let $g$ be as in \eqref{gcondition}. For any given $\lambda>0$, there exists a bounded linear operator $K_{1}$ on $H_{0}^{s}(\mathbb{T})$ such that the unique solution of the closed-loop system \begin{equation}\label{atabilizationL2gg} \begin{cases} \partial_{t}u=\partial_{x}\mathcal{A}u+GK_1u, \\ u(0)=u_{0}, \end{cases} \end{equation} satisfies \begin{equation}\label{c5} \|u(\cdot,t)\|_{H_{0}^{s}(\mathbb{T})}\leq M e^{-\lambda t}\|u_{0}\|_{H_{0}^{s}(\mathbb{T})},\;\;\;\text{for all}\;\; t\geq 0, \end{equation} where the positive constant $M$ depends on $s$ and $G$ but is independent of $u_{0}.$ \end{thm} \begin{proof} This is a consequence of Corollary \ref{controloperator1g} and the classical principle that exact controllability implies exponential stabilizability for conservative control systems (see Theorem 2.3/Theorem 2.4 in \cite{Liu} and Theorem 2.1 in \cite{Slemrod}). To be more precise, according to \cite{Slemrod, Liu}, one can choose $$K_{1}=-G^{\ast}L^{-1}_{T,\lambda},$$ where, for some $T>\frac{2\pi}{\gamma'}$, \begin{equation*} L_{T,\lambda}\phi=\int_{0}^{T}e^{-2\lambda \tau}\;U(-\tau)GG^{\ast} U(-\tau)^{\ast}\phi\;d\tau, \;\;\;\;\;\phi\in H_{0}^{s}(\mathbb{T}), \end{equation*} and $U(t)$ is the $C_0$-semigroup generated by $\partial_{x}\mathcal{A}$ (see Lemma 2.4 in \cite{14} for more details). In addition, if one simply chooses $K_{1}=-G^{\ast}$ then there exists $\alpha>0$ such that estimate \eqref{c5} holds with $\lambda$ replaced by $\alpha.$ \end{proof} Finally, observe that Theorem \ref{st351} and Theorem \ref{estabilization} are direct consequences of Theorem \ref{st35} just by taking $K_{1}=-G^{\ast}$ and $K_{1}=-G^{\ast}L^{-1}_{T,\lambda}$, respectively. \section{Applications}\label{section6g} As an application of our results, we will establish the controllability and stabilization for some linearized dispersive equations of the form \eqref{BO}. \subsection{The linearized Smith equation} The nonlinear Smith equation posed on the entire real line reads as \begin{equation}\label{ILW} \partial_{t}u-\partial_{x}\mathcal{A}u+u\partial_{x}u=0, \;\;\;\;x\in \mathbb{R},\;\;t\in \mathbb{R}, \end{equation} where $u=u(x,t)$ denotes a real-valued function and $\mathcal{A}$ is the nonlocal operator defined by $$ \widehat{\mathcal{A}u}(\xi):=2\pi\left(\sqrt{\xi^{2}+1}-1\right)\widehat{u}(\xi).$$ Here the hat stands for the Fourier transform on the line. Equation \eqref{ILW} was derived by Smith in \cite{Smith} and it governs certain types of continental-shelf waves.
From the mathematical viewpoint, the well-posedness of the IVP associated to \eqref{ILW} in $H^{s}(\mathbb{R})$ has been studied for instance in \cite{Abdelouhab Bona Felland and Saut}, \cite{iorio}, and \cite{6}. In \cite[Theorems 7.1 and 7.7]{Abdelouhab Bona Felland and Saut} the authors proved that \eqref{ILW} is globally well-posed in $H^{s}(\mathbb{R})$ for $s=1$ and $s\geq 3/2$. In \cite{iorio} the author established a global well-posedness result in the weighted Sobolev space $H^s(\mathbb{R})\cap L^2((1+|x|^2)^sdx)$ for $s>3/2$. The control equation associated with the linearized Smith equation on the periodic setting reads as \begin{equation}\label{ILWP} \partial_{t}u-\partial_{x}\mathcal{A}u=Gh, \;\;\;\;x\in \mathbb{T},\;\;t\in \mathbb{R}, \end{equation} where $\mathcal{A}$ is such that \begin{equation}\label{ILWP1} \widehat{\mathcal{A}u}(k):= 2\pi\left(\sqrt{k^{2}+1}-1\right) \widehat{u}(k),\;\;\;\;k\in \mathbb{Z}, \end{equation} so that $a(k)=2\pi\left(\sqrt{k^{2}+1}-1\right)$. In what follows we will show that Criterion I can be applied to prove that \eqref{ILWP} is exactly controllable in any positive time $T>0$ and exponentially stabilizable with any given decay rate in the Sobolev space $H_{p}^{s}(\mathbb{T}),$ with $s\in \mathbb{R}$. Indeed, first of all note that clearly, $$ |a(k)|\leq C|k|, $$ for some positive constant $C$ and any $k\in\mathbb{Z}$. In addition the quantity $2\pi\widehat{u}(0,t)$ is invariant by the flow of \eqref{ILWP}. Using the Fourier transform, it is easy to check that $(H1)$ holds with $\lambda_{k}=2\pi\left(k\sqrt{k^{2}+1}-k\right)$. See Figure \ref{fig3} for an illustrative picture. By noting that $y\mapsto 2\pi\left(y\sqrt{y^{2}+1}-y\right)$ is a strictly increasing function we then deduce that all eigenvalues $\lambda_{k}$ are simple, giving $(H2)$ and $(H3)$. Additionally, observe that $a(-k)=a(k)$ for any $k\in \mathbb{Z}$ and $$ \displaystyle{\lim_{|k|\rightarrow +\infty}| (k+1) a(k+1)- k a(k)|=+\infty}. $$ Thus we may apply Remark \ref{partic1} to conclude that Theorem \ref{ControlLa} holds for any $T>0$. Consequently, Theorems \ref{st351} and \ref{estabilization} also hold. \subsection{The fourth-order Schr\"odinger equation} Here we consider the control equation associated with the linear fourth-order Schr\"odinger equation \begin{equation}\label{4nls} i\partial_{t}u+\partial_{x}^2u+\mu\partial_{x}^4u=0, \end{equation} where $u$ is a complex-valued function and $\mu\neq0$ is a real constant. Equation \eqref{4nls} is the linearized version, for instance, of the fourth-order cubic nonlinear equation \begin{equation}\label{4nlsn} i\partial_{t}u+\partial_{x}^2u+\mu\partial_{x}^4u+|u|^2u=0, \end{equation} which was introduced in \cite{kar} and \cite{kar1} to describe the propagation of intense laser beams in a bulk medium with Kerr nonlinearity when small fourth-order dispersion are taken into account. Several results concerning well-posedness for \eqref{4nlsn} may be found in \cite{fib} (see also subsequent references). Control and stabilization for \eqref{4nlsn} have already appeared in \cite{caca}. 
Equation \eqref{4nls} also serves as the linear version of the more general equation \begin{equation}\label{4nlsn1} i\partial_{t}u+\partial_{x}^2u+\mu\partial_{x}^4u+F=0, \end{equation} with $$ F=\frac{1}{2}|u|^2u+\mu\left( \frac{3}{8}|u|^4u+\frac{3}{2}(\partial_{x}u)^2\overline{u}+|\partial_{x}u|^2u+\frac{1}{2}u^2\partial_{x}^2\overline{u}+2|u|^2\partial_{x}^2u\right), $$ which describes the 3-dimensional motion of an isolated vortex filament embedded in an inviscid incompressible fluid filling an infinite region. Sharp results concerning local well-posedness in Sobolev spaces were proved in \cite{huo}. In order to set \eqref{4nls} as in \eqref{BO} we define $\mathcal{A}=i(\partial_{x}+\mu\partial_{x}^3)$, so that $a(k)=-k+\mu k^3$. Thus we may consider the equation \begin{equation}\label{33} \partial_{t}u-\partial_{x}\mathcal{A}u=Gh. \end{equation} We promptly see that the mass is also conserved by the flow of \eqref{33} and $$ |a(k)|\leq C|k|^3 $$ for some constant $C>0$ and any $k\in\mathbb{Z}$. Also, we easily check that $(H1)$ holds and $\lambda_{k}=-k^2+\mu k^4$. See Figure \ref{fig3}. \begin{figure} \caption{Dispersion of $\lambda_{k}$'s for the Smith and fourth-order Schr\"odinger equations with $\mu>0.$} \label{fig3} \end{figure} Note that if $\mu<0$ then the even polynomial $p(y)= -y^2+\mu y^4$ has no nontrivial roots, implying that the $\lambda_{k}$, $k\neq0$, are double eigenvalues and $(H4)$ holds with $n_0=1$ and $k_1^*=1$. On the other hand, if $\mu>0$ then $p(y)$ has the nontrivial roots $\pm 1/\sqrt{\mu}$; hence, if $k_1^*$ is the least integer satisfying $1/\sqrt{\mu}\leq k_1^*$, we see that $(H4)$ holds with $n_0=4$. It is clear that $(H5)$ also holds. Moreover, we may check that $$ \displaystyle{\lim_{k\rightarrow +\infty}| (k+1) a(k+1)- k a(k)|=+\infty}. $$ As a consequence, we may now apply Remark \ref{partic} to conclude that Theorem \ref{ControlLag} holds for any $T>0$. Consequently, Theorems \ref{st351} and \ref{estabilization} also hold. \subsection{The linearized dispersion-generalized Benjamin-Ono equation} In this subsection we investigate the control and stabilization properties of the linearized dispersion-generalized Benjamin-Ono (LDGBO) equation, which contains fractional-order spatial derivatives on a periodic domain, \begin{equation}\label{dgBO} \partial_{t}u+\partial_{x}D^{\alpha}u=0,\;\;\;x\in \mathbb{T},\;\;t\in\mathbb{R}, \end{equation} where $\alpha>0,$ $u$ is a real-valued function and the Fourier multiplier operator $D^{\alpha}$ is defined as \begin{equation}\label{OdgBO} \widehat{D^{\alpha}u}(k)=|k|^{\alpha}\widehat{u}(k),\;\;\;\text{for all}\;k\in \mathbb{Z}. \end{equation} When $\alpha\in (1,2),$ the dispersion generalized Benjamin-Ono (DGBO) equation \begin{equation}\label{DGBO} \partial_{t}u+\partial_{x}D^{\alpha}u+u\partial_{x}u=0,\;\;\;x\in \mathbb{R},\;\;t>0, \end{equation} defines a family of equations which models vorticity waves in coastal zones \cite{Shrira}. The end points $\alpha=1$ and $\alpha=2$ correspond to the well-known Benjamin-Ono and KdV equations, respectively. In this sense \eqref{DGBO} defines a continuum of equations of dispersive strength intermediate to two celebrated models.
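Merely for orientation, and anticipating the choice $\mathcal{A}=-D^{\alpha}$ made below, we note that at the endpoint $\alpha=2$ one has $a(k)=-k^{2}$ and $\lambda_{k}=k\,a(k)=-k^{3}$, so that
$$|(k+1)a(k+1)-k\,a(k)|=(k+1)^{3}-k^{3}=3k^{2}+3k+1\longrightarrow +\infty,$$
which is the same asymptotic gap behavior already mentioned for the linearized KdV equation; the computation for general $\alpha>0$ is carried out next.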
Regarding control and stabilization properties, the author in \cite{Flores} proved that the LDGBO equation with $\alpha \in (1,2)$ is exactly controllable in $H^{s}_{p}(\mathbb{T})$ with $s\geq 0$ and exponentially stabilizable in $L^{2}_{p}(\mathbb{T}).$ Here we extend these results to the (periodic) Sobolev space $H^{s}_{p}(\mathbb{T})$ with $s\in \mathbb{R},$ for any $\alpha>0.$ In fact, we consider the operator $\mathcal{A}$ in \eqref{BO} defined by $\mathcal{A}=-D^{\alpha}.$ Therefore, $a(k)=-|k|^{\alpha}$ and it is easy to verify that $$|a(k)|\leq |k|^{\alpha},$$ and $$a(k)=a(-k),$$ for any $k\in \mathbb{Z}.$ Hence, $(H1)$ holds with $\lambda_{k}=-k|k|^{\alpha}.$ Using L'H\^opital's rule (or simply the expansion $(1+1/y)^{\alpha+1}-1\sim(\alpha+1)/y$ as $y\to+\infty$) we can prove that $$\lim_{y\to +\infty}y^{\alpha+1}\left(\left(1+\frac{1}{y}\right)^{\alpha+1}-1\right)=+\infty,\;\;\text{for any}\;\alpha>0.$$ From this, we conclude that $$ \displaystyle{\lim_{k\rightarrow +\infty}| (k+1) a(k+1)- k a(k)|=\lim_{k\rightarrow +\infty}\left( (k+1)^{\alpha+1}- k^{\alpha+1}\right)= +\infty}. $$ Thus, we can apply Remark \ref{partic1} to infer that Theorem \ref{ControlLa} holds for any $T>0$. Consequently, Theorems \ref{st351} and \ref{estabilization} also hold in this particular case. Finally, we point out that the authors in \cite{Flores Oh and Smith} developed a dissipation-normalized Bourgain-type space, which simultaneously gains smoothing properties from the dissipation and dispersion present in the equation, to show that the nonlinear DGBO equation in the periodic setting is well-posed and locally exponentially stable in $L^{2}_{p}(\mathbb{T}).$ Extending these results to the Sobolev space $H^{s}_{p}(\mathbb{T})$ with $s >0$ is a challenging task and remains an open problem. \subsection{Higher-order Schr\"odinger equation} In this subsection we consider the following higher-order Schr\"odinger equations \begin{equation}\label{hnls1} i\partial_{t}u+\alpha_2\partial_{x}^2u+\alpha_4\partial_{x}^4u+\ldots+\alpha_{2m}\partial_{x}^{2m}u=0 \end{equation} and \begin{equation}\label{hnls} i\partial_{t}u+\alpha_2\partial_{x}^2u-i\alpha_3\partial_{x}^3u+\alpha_4\partial_{x}^4u-i\alpha_5\partial_{x}^5u+\ldots-i\alpha_{2m+1}\partial_{x}^{2m+1}u=0, \end{equation} where $m$ is a positive integer and $\alpha_j$ are real constants with $\alpha_2\neq0$, $\alpha_{2m}\neq0$, and $\alpha_{2m+1}\neq0$. Equations \eqref{hnls1} and \eqref{hnls} are the linearized versions of an infinite hierarchy of nonlinear Schr\"odinger equations (see \cite{ank}). Thus, here we consider the control equation \begin{equation} \partial_{t}u-\partial_{x}\mathcal{A}_{2m+j}u=Gh, \end{equation} where $$ \mathcal{A}_{2m+j}= \begin{cases} i\alpha_2\partial_{x}+i\alpha_4\partial_{x}^3+\ldots+i\alpha_{2m}\partial_{x}^{2m-1}, \quad \mbox{if}\, j=0,\\ i\alpha_2\partial_{x}+\alpha_3\partial_{x}^2+i\alpha_4\partial_{x}^3+\alpha_5\partial_{x}^4+\ldots+\alpha_{2m+1}\partial_{x}^{2m}, \quad \mbox{if}\, j=1. \end{cases} $$ The symbol associated with $\mathcal{A}_{2m+j}$ is $$ a_{2m+j}(k)= \begin{cases} -\alpha_2k+\alpha_4k^3+\ldots+\alpha_{2m}(-1)^mk^{2m-1},\quad \mbox{if}\, j=0,\\ -\alpha_2k-\alpha_3k^2+\alpha_4k^3+\alpha_5k^4+\ldots+\alpha_{2m+1}(-1)^mk^{2m}, \quad \mbox{if}\, j=1. \end{cases} $$ It is clear that $$ |a_{2m+j}(k)|\leq C|k|^{2m-1+j}, $$ for some $C>0$ and $|k|$ large enough. Let us show that in the cases $j=0$ and $j=1$ we can apply Theorems \ref{ControlLag} and \ref{ControlLa}, respectively. Indeed, assume first $j=0$.
It is easy to see that $(H1)$ holds where the eigenvalues $i\lambda_{k}$ are such that $$ \lambda_{k}=-\alpha_2k^2+\alpha_4k^4+\ldots+(-1)^m\alpha_{2m}k^{2m}. $$ The polynomial $p_{2m}(y)=-\alpha_2y^2+\alpha_4y^4+\ldots+(-1)^m\alpha_{2m}y^{2m}$ is even and goes to either $+\infty$ or $-\infty$ as $|y|\to+\infty$ (according to $m$ and the sign of $\alpha_{2m})$. Thus $(H4)$ and $(H5)$ hold with $n_0=2m$ and $k_1^*$ sufficiently large. Assume now $j=1$. In this case we have $$ \lambda_k=-\alpha_2k^2-\alpha_3k^3+\alpha_4k^4+\alpha_5k^5+\ldots+\alpha_{2m+1}(-1)^mk^{2m+1}. $$ Note that the polynomial $$p_{2m+1}(y)=-\alpha_2y^2-\alpha_3y^3+\alpha_4y^4+\alpha_5y^5+\ldots+\alpha_{2m+1}(-1)^my^{2m+1}$$ has different limits ($+\infty$ or $-\infty$) as $y\to+\infty$ or $y\to-\infty$ (according to $m$ and the sign of $\alpha_{2m+1}$). Hence $(H2)$ and $(H3)$ hold with $n_0=2m+1$ and $k_1^*$ sufficiently large. \\ Furthermore, in either case $j=0$ or $j=1,$ it can be shown that the eigenvalues $\{i \lambda_{k}\}$ satisfy the \textquotedblleft asymptotic gap condition\textquotedblright. Hence, we obtain the exact controllability for any $T>0$ and Theorems \ref{st351}$-$\ref{estabilization} hold as well. \section{Concluding Remarks}\label{conc-rem} In this work, we have presented two different criteria to prove that a linearized family of dispersive equations on a periodic domain is exactly controllable and exponentially stabilizable with any given decay rate in the Sobolev space $H_{p}^{s}(\mathbb{T})$ with $s\in \mathbb{R}.$ We have applied these results to prove exact controllability and exponential stabilization for the linearized Smith equation and Schr\"odinger-type equations on a periodic domain. In a forthcoming paper we plan to use these results to prove some fundamental properties, such as the propagation of compactness, the unique continuation property and the propagation of smoothness, for the solutions of the nonlinear Smith equation in order to show that it is exactly controllable and exponentially stabilizable on a periodic domain. This is the standard approach to proving exact controllability and exponential stabilization for nonlinear PDEs of dispersive type (see \cite{14, Laurent, Linares Rosier, Laurent Linares and Rosier, Manhendra and Francisco 2}). However, the symbol of the linear part associated to the Smith equation creates extra difficulties in proving the unique continuation property on a periodic domain. This work is in progress. \subsection*{\textbf{Acknowledgment}} F. J. Vielma Leal is partially supported by FAPESP/Brazil grant 2020/14226-4. A. Pastor is partially supported by FAPESP/Brazil grant 2019/02512-5 and CNPq/Brazil grant 303762/2019-5. The authors would like to thank Prof. Roberto Capistrano-Filho for many helpful discussions and suggestions to complete this work. \end{document}
\begin{document} \title[Two New Convex Dominated Functions]{Hermite-Hadamard-Type Inequalities for New Different Kinds of Convex Dominated Functions} \author{M. Emin \"{O}zdemir$^{\blacktriangle }$} \address{$^{\blacktriangle }$Atat\"{u}rk University, K.K. Education Faculty, Department of Mathematics, 25240, Campus, Erzurum, Turkey} \email{[email protected]} \author{Havva Kavurmac\i $^{\blacktriangle ,\blacksquare }$} \email{[email protected]} \thanks{$^{\blacksquare }$Corresponding Author} \author{Mevl\"{u}t Tun\c{c}} \address{Department of Mathematics, Faculty of Art and Sciences, Kilis 7 Aralik University, Kilis, 79000, Turkey} \email{[email protected]} \date{February 2, 2012} \subjclass[2000]{ Primary 26D15, Secondary 26D10, 05C38} \keywords{$m-$convex dominated function, Hermite-Hadamard's inequality, $ \left( \alpha ,m\right) -$convex function, $r-$convex function.} \begin{abstract} In this paper, we introduce several new classes of convex dominated functions and then obtain new Hadamard-type inequalities for them. \end{abstract} \maketitle \section{Introduction} The inequality \begin{equation} f\left( \frac{a+b}{2}\right) \leq \frac{1}{b-a}\int_{a}^{b}f\left( x\right) dx\leq \frac{f\left( a\right) +f\left( b\right) }{2} \label{h} \end{equation} which holds for all convex functions $f:[a,b]\rightarrow \mathbb{R} $, is known in the literature as Hermite-Hadamard's inequality. In \cite{G}, Toader defined $m-$convexity as follows: \begin{definition} The function $f:[0,b]\rightarrow \mathbb{R} ,$ $b>0$, is said to be $m-$convex where $m\in \lbrack 0,1],$ if we have \begin{equation*} f(tx+m(1-t)y)\leq tf(x)+m(1-t)f(y) \end{equation*} for all $x,y\in \lbrack 0,b]$ and $t\in \lbrack 0,1].$ We say that $f$ is $ m- $concave if $\left( -f\right) $ is $m-$convex. \end{definition} In \cite{D}, Dragomir proved the following theorem. Let $f:\left[ 0,\infty \right) \rightarrow \mathbb{R} $ be an $m-$convex function with $m\in \left( 0,1\right] $ and $0\leq a<b.$ If $f\in L_{1}\left[ a,b\right] ,$ then the following inequalities hold: \begin{eqnarray} f\left( \frac{a+b}{2}\right) &\leq &\frac{1}{b-a}\int_{a}^{b}\frac{f\left( x\right) +mf\left( \frac{x}{m}\right) }{2}dx \label{h1} \\ &\leq &\frac{1}{2}\left[ \frac{f\left( a\right) +mf\left( \dfrac{a}{m} \right) }{2}+m\frac{f\left( \frac{b}{m}\right) +mf\left( \dfrac{b}{m^{2}} \right) }{2}\right] . \notag \end{eqnarray} In \cite{DI} and \cite{DPP}, the authors connect some disparate threads through a Hermite-Hadamard motif. The first of these threads is the unifying concept of a $g-$convex dominated function. Similarly, in \cite{KOS}, Kavurmac\i\ et al. introduced the following class of functions and then proved a theorem related to (\ref{h1}) for this class. \begin{definition} Let $g:\left[ 0,b\right] \rightarrow \mathbb{R} $ be a given $m-$convex function on the interval $\left[ 0,b\right] $.
The real function $f:\left[ 0,b\right] \rightarrow \mathbb{R} $ is called $\left( g,m\right) -$convex dominated on $\left[ 0,b\right] $ if the following condition is satisfied \begin{eqnarray*} &&\left\vert \lambda f(x)+m(1-\lambda )f(y)-f\left( \lambda x+m\left( 1-\lambda \right) y\right) \right\vert \\ &\leq &\lambda g(x)+m(1-\lambda )g(y)-g\left( \lambda x+m\left( 1-\lambda \right) y\right) \end{eqnarray*} for all $x,y\in \left[ 0,b\right] $, $\lambda \in \left[ 0,1\right] $ and $ m\in \left[ 0,1\right] .$ \end{definition} \begin{theorem} \label{a} Let $g:\left[ 0,\infty \right) \rightarrow \mathbb{R} $ be an $m-$convex function with $m\in \left( 0,1\right] $, let $f:\left[ 0,\infty \right) \rightarrow \mathbb{R} $ be a $\left( g,m\right) -$convex dominated mapping and $0\leq a<b.$ If $ f\in L_{1}\left[ a,b\right] ,$ then one has the inequalities: \begin{eqnarray*} &&\left\vert \frac{1}{b-a}\int_{a}^{b}\frac{f\left( x\right) +mf\left( \frac{ x}{m}\right) }{2}dx-f\left( \frac{a+b}{2}\right) \right\vert \\ && \\ &\leq &\frac{1}{b-a}\int_{a}^{b}\frac{g\left( x\right) +mg\left( \frac{x}{m} \right) }{2}dx-g\left( \frac{a+b}{2}\right) \end{eqnarray*} and \begin{eqnarray*} &&\left\vert \frac{1}{2}\left[ \frac{f\left( a\right) +mf\left( \dfrac{a}{m} \right) }{2}+m\frac{f\left( \frac{b}{m}\right) +mf\left( \dfrac{b}{m^{2}} \right) }{2}\right] -\frac{1}{b-a}\int_{a}^{b}\frac{f\left( x\right) +mf\left( \frac{x}{m}\right) }{2}dx\right\vert \\ && \\ &\leq &\frac{1}{2}\left[ \frac{g\left( a\right) +mg\left( \dfrac{a}{m} \right) }{2}+m\frac{g\left( \frac{b}{m}\right) +mg\left( \dfrac{b}{m^{2}} \right) }{2}\right] -\frac{1}{b-a}\int_{a}^{b}\frac{g\left( x\right) +mg\left( \frac{x}{m}\right) }{2}dx. \end{eqnarray*} \end{theorem} In \cite{MIH}, the definition of $\left( \alpha ,m\right) -$convexity was introduced by Mihe\c{s}an as follows. \begin{definition} \label{d1} The function $f:[0,b]\rightarrow \mathbb{R} ,$ $b>0$, is said to be $\left( \alpha ,m\right) -$convex, where $\left( \alpha ,m\right) \in \lbrack 0,1]^{2},$ if we have \begin{equation*} f(tx+m(1-t)y)\leq t^{\alpha }f(x)+m(1-t^{\alpha })f(y) \end{equation*} for all $x,y\in \lbrack 0,b]$ and $t\in \lbrack 0,1].$ \end{definition} Denote by $K_{m}^{\alpha }\left( b\right) $ the class of all $\left( \alpha ,m\right) -$convex functions on $\left[ 0,b\right] $ for which $f\left( 0\right) \leq 0.$ If we take $\left( \alpha ,m\right) \in \left\{ \left( 0,0\right) ,\left( \alpha ,0\right) ,\left( 1,0\right) ,\left( 1,m\right) ,\left( 1,1\right) ,\left( \alpha ,1\right) \right\} ,$ it can easily be seen that $\left( \alpha ,m\right) -$convexity reduces to increasing, $\alpha -$starshaped, starshaped, $m-$convex, convex and $\alpha -$convex functions, respectively. In \cite{SSOR}, Set et al. proved the following Hadamard type inequalities for $\left( \alpha ,m\right) -$convex functions. \begin{theorem} Let $f:\left[ 0,\infty \right) \rightarrow \mathbb{R} $ be an $\left( \alpha ,m\right) -$convex function with $\left( \alpha ,m\right) \in \left( 0,1\right] ^{2}.$ If \ $0\leq a<b<\infty $ and $f\in $ $ L_{1}\left[ a,b\right] \cap L_{1}\left[ \frac{a}{m},\frac{b}{m}\right] ,$ then the following inequality holds: \begin{equation} f\left( \frac{a+b}{2}\right) \leq \frac{1}{b-a}\int_{a}^{b}\frac{f\left( x\right) +m\left( 2^{\alpha }-1\right) f\left( \frac{x}{m}\right) }{ 2^{\alpha }}dx.
\label{h2} \end{equation} \end{theorem} \begin{theorem} Let $f:\left[ 0,\infty \right) \rightarrow \mathbb{R} $ be an $\left( \alpha ,m\right) -$convex function with $\left( \alpha ,m\right) \in \left( 0,1\right] ^{2}.$ If \ $0\leq a<b<\infty $ and $f\in $ $ L_{1}\left[ a,b\right] ,$ then the following inequality holds: \begin{equation} \frac{1}{b-a}\int_{a}^{b}f\left( x\right) dx\leq \frac{1}{2}\left[ \frac{ f\left( a\right) +f\left( b\right) +m\alpha f\left( \frac{a}{m}\right) +m\alpha f\left( \frac{b}{m}\right) }{\alpha +1}\right] . \label{h3} \end{equation} \end{theorem} For the recent results based on the above definition see the papers \cite {BOP}, \cite{BPR}, \cite{OKS}, \cite{OAK} and \cite{SSO}. In \cite{OSA}, the power mean $M_{r}(x,y;\lambda )$ of order $r$ of positive numbers $x,y$ is defined by \begin{equation*} M_{r}(x,y;\lambda )=\left\{ \begin{array}{cc} \left( \lambda x^{r}+\left( 1-\lambda \right) y^{r}\right) ^{\frac{1}{r}}, & r\neq 0 \\ x^{\lambda }y^{1-\lambda }, & r=0. \end{array} \right. \end{equation*} A positive function $f$ is $r-$convex on $[a,b]$ if for all $x,y\in \lbrack a,b]$ and $\lambda \in \lbrack 0,1]$ \begin{equation} f(\lambda x+(1-\lambda )y)\leq M_{r}(f\left( x\right) ,f\left( y\right) ;\lambda ). \label{h4} \end{equation} The generalized logarithmic mean of order $r$ of positive numbers $x,y$ is defined by \begin{equation} L_{r}\left( x,y\right) =\left\{ \begin{array}{cc} \frac{r}{r+1}\frac{x^{r+1}-y^{r+1}}{x^{r}-y^{r}}, & r\neq 0,-1,\ x\neq y \\ & \\ \frac{x-y}{\ln x-\ln y}, & r=0,x\neq y \\ & \\ xy\frac{\ln x-\ln y}{x-y}, & r=-1,x\neq y \\ & \\ x, & x=y \end{array} \right. \label{L} \end{equation} In \cite{GPP}, the following theorem was proved by Gill et al. for $r-$ convex functions. \begin{theorem} Suppose $f$ is a positive $r-$convex function on $\left[ a,b\right] .$ Then \begin{equation} \frac{1}{b-a}\int_{a}^{b}f\left( x\right) dx\leq L_{r}\left( f\left( a\right) ,f\left( b\right) \right) . \label{h5} \end{equation} If $f$ is a positive $r-$concave function, then the inequality is reversed. \end{theorem} In the following sections our main results are given: we introduce several new classes of convex dominated functions and then obtain new Hadamard-type inequalities for them. \section{$\left( g-\left( \protect\alpha ,m\right) \right) $-convex dominated functions} \begin{definition} \label{d2} Let $g:\left[ 0,b\right] \rightarrow \mathbb{R} ,$ $b>0$ be a given $\left( \alpha ,m\right) $-convex function on the interval $\left[ 0,b\right] $. The real function $f:\left[ 0,b\right] \rightarrow \mathbb{R} $ is called $\left( g-\left( \alpha ,m\right) \right) $-convex dominated on $ \left[ 0,b\right] $ if the following condition is satisfied \begin{eqnarray} &&\left\vert \lambda ^{\alpha }f(x)+m(1-\lambda ^{\alpha })f(y)-f\left( \lambda x+m\left( 1-\lambda \right) y\right) \right\vert \label{h6} \\ &\leq &\lambda ^{\alpha }g(x)+m(1-\lambda ^{\alpha })g(y)-g\left( \lambda x+m\left( 1-\lambda \right) y\right) \notag \end{eqnarray} for all $x,y\in \left[ 0,b\right] $, $\lambda \in \left[ 0,1\right] $ and $ \left( \alpha ,m\right) \in \left[ 0,1\right] ^{2}.$ \end{definition} The next simple characterisation of $\left( g-\left( \alpha ,m\right) \right) $-convex dominated functions holds.
\begin{lemma} \label{l1} Let $g:\left[ 0,b\right] \rightarrow \mathbb{R} $ be an $\left( \alpha ,m\right) $-convex function on the interval $\left[ 0,b\right] $ and the function $f:\left[ 0,b\right] \rightarrow \mathbb{R} .$ The following statements are equivalent: \end{lemma} \begin{enumerate} \item $f$ is $\left( g-\left( \alpha ,m\right) \right) $-convex dominated on $\left[ 0,b\right] .$ \item The mappings $g-f$ and $g+f$ are $\left( \alpha ,m\right) $-convex functions on $\left[ 0,b\right] .$ \item There exist two $\left( \alpha ,m\right) $-convex mappings $h,k$ defined on $\left[ 0,b\right] $ such that \begin{equation*} \begin{array}{ccc} f=\frac{1}{2}\left( h-k\right) & \text{and} & g=\frac{1}{2}\left( h+k\right) \end{array} . \end{equation*} \end{enumerate} \begin{proof} 1$\Longleftrightarrow $2 The condition (\ref{h6}) is equivalent to \begin{eqnarray*} &&g\left( \lambda x+m\left( 1-\lambda \right) y\right) -\lambda ^{\alpha }g(x)-m(1-\lambda ^{\alpha })g(y) \\ &\leq &\lambda ^{\alpha }f(x)+m(1-\lambda ^{\alpha })f(y)-f\left( \lambda x+m\left( 1-\lambda \right) y\right) \\ &\leq &\lambda ^{\alpha }g(x)+m(1-\lambda ^{\alpha })g(y)-g\left( \lambda x+m\left( 1-\lambda \right) y\right) \end{eqnarray*} for all $x,y\in I$, $\lambda \in \left[ 0,1\right] $ and $\left( \alpha ,m\right) \in \left[ 0,1\right] ^{2}.$ The two inequalities may be rearranged as \begin{equation*} \left( g+f\right) \left( \lambda x+m\left( 1-\lambda \right) y\right) \leq \lambda ^{\alpha }\left( g+f\right) (x)+m(1-\lambda ^{\alpha })\left( g+f\right) (y) \end{equation*} and \begin{equation*} \left( g-f\right) \left( \lambda x+m\left( 1-\lambda \right) y\right) \leq \lambda ^{\alpha }\left( g-f\right) (x)+m(1-\lambda ^{\alpha })\left( g-f\right) (y) \end{equation*} which are equivalent to the $\left( \alpha ,m\right) $-convexity of $g+f$ and $g-f,$ respectively. 2$\Longleftrightarrow $3 We define the mappings $f,g$ as $f=\frac{1}{2} \left( h-k\right) $ and $g=\frac{1}{2}\left( h+k\right) $. Then, if we sum and subtract $f,g,$ respectively, we have $g+f=h$ and $g-f=k.$ By the condition 2 of Lemma 1, the mappings $g-f$ and $g+f$ are $\left( \alpha ,m\right) $-convex on $\left[ 0,b\right] ,$ so $h,k$ are $\left( \alpha ,m\right) $-convex mappings too. \end{proof} \begin{theorem} \label{t1} Let $g:\left[ 0,\infty \right) \rightarrow \mathbb{R} $ be an $\left( \alpha ,m\right) -$convex function with $\left( \alpha ,m\right) \in \left( 0,1\right] ^{2}$. 
Let $f:\left[ 0,\infty \right) \rightarrow \mathbb{R} $ be a $\left( g-\left( \alpha ,m\right) \right) -$convex dominated mapping and $0\leq a<b.$ If $f\in L_{1}\left[ a,b\right] \cap L_{1}\left[ \frac{a}{m} ,\frac{b}{m}\right] ,$ then the first inequality holds: \begin{eqnarray*} &&\left\vert \frac{1}{b-a}\int_{a}^{b}\frac{f\left( x\right) +m\left( 2^{\alpha }-1\right) f\left( \frac{x}{m}\right) }{2^{\alpha }}dx-f\left( \frac{a+b}{2}\right) \right\vert \\ && \\ &\leq &\frac{1}{b-a}\int_{a}^{b}\frac{g\left( x\right) +m\left( 2^{\alpha }-1\right) g\left( \frac{x}{m}\right) }{2^{\alpha }}dx-g\left( \frac{a+b}{2} \right) \end{eqnarray*} and if $f\in L_{1}\left[ a,b\right] $ then the second inequality holds: \begin{eqnarray*} &&\left\vert \frac{1}{2}\left[ \frac{f\left( a\right) +mf\left( \dfrac{a}{m} \right) }{\alpha +1}+m\alpha \frac{f\left( \frac{b}{m}\right) +mf\left( \dfrac{b}{m^{2}}\right) }{\alpha +1}\right] -\frac{1}{b-a}\int_{a}^{b}\frac{ f\left( x\right) +mf\left( \frac{x}{m}\right) }{2}dx\right\vert \\ && \\ &\leq &\frac{1}{2}\left[ \frac{g\left( a\right) +mg\left( \dfrac{a}{m} \right) }{\alpha +1}+m\alpha \frac{g\left( \frac{b}{m}\right) +mg\left( \dfrac{b}{m^{2}}\right) }{\alpha +1}\right] -\frac{1}{b-a}\int_{a}^{b}\frac{ g\left( x\right) +mg\left( \frac{x}{m}\right) }{2}dx. \end{eqnarray*} \end{theorem} \begin{proof} Since the mapping $f$ is a $\left( g-\left( \alpha ,m\right) \right) -$convex dominated function, Definition \ref{d2} with $\lambda =\frac{1}{2}$ gives \begin{equation*} \left\vert \frac{f\left( x\right) +m\left( 2^{\alpha }-1\right) f\left( y\right) }{2^{\alpha }}-f\left( \frac{x+my}{2}\right) \right\vert \leq \frac{ g\left( x\right) +m\left( 2^{\alpha }-1\right) g\left( y\right) }{2^{\alpha } }-g\left( \frac{x+my}{2}\right) \end{equation*} for all $x,y\in \left[ 0,\infty \right) $ and $\left( \alpha ,m\right) \in \left( 0,1\right] ^{2}.$ If we choose $x=ta+(1-t)b,$ $y=\left( 1-t\right) \frac{a}{m}+t\frac{b}{m}$ and $t\in \left[ 0,1\right] ,$ then we get \begin{eqnarray*} &&\left\vert \frac{f\left( ta+(1-t)b\right) +m\left( 2^{\alpha }-1\right) f\left( \frac{\left( 1-t\right) a+tb}{m}\right) }{2^{\alpha }}-f\left( \frac{ a+b}{2}\right) \right\vert \\ && \\ &\leq &\frac{g\left( ta+(1-t)b\right) +m\left( 2^{\alpha }-1\right) g\left( \frac{\left( 1-t\right) a+tb}{m}\right) }{2^{\alpha }}-g\left( \frac{a+b}{2} \right) . \end{eqnarray*} Integrating over $t$ on $\left[ 0,1\right] $ we deduce that \begin{eqnarray*} &&\left\vert \frac{\int_{0}^{1}f\left( ta+(1-t)b\right) dt+m\left( 2^{\alpha }-1\right) \int_{0}^{1}f\left( \frac{\left( 1-t\right) a+tb}{m}\right) dt}{ 2^{\alpha }}-f\left( \frac{a+b}{2}\right) \right\vert \\ && \\ &\leq &\frac{\int_{0}^{1}g\left( ta+(1-t)b\right) dt+m\left( 2^{\alpha }-1\right) \int_{0}^{1}g\left( \frac{\left( 1-t\right) a+tb}{m}\right) dt}{ 2^{\alpha }}-g\left( \frac{a+b}{2}\right) \end{eqnarray*} and so the first inequality is proved.
Since $f$ is a $\left( g-\left( \alpha ,m\right) \right) -$convex dominated function, we have \begin{eqnarray*} &&\left\vert t^{\alpha }f\left( x\right) +m(1-t^{\alpha })f\left( y\right) -f\left( tx+m(1-t)y\right) \right\vert \\ && \\ &\leq &t^{\alpha }g\left( x\right) +m(1-t^{\alpha })g\left( y\right) -g\left( tx+m(1-t)y\right) ,\text{ for all }x,y>0 \end{eqnarray*} which gives for $x=a$ and $y=\frac{b}{m}$ \begin{eqnarray} &&\left\vert t^{\alpha }f\left( a\right) +m(1-t^{\alpha })f\left( \frac{b}{m} \right) -f\left( ta+m(1-t)\frac{b}{m}\right) \right\vert \label{h7} \\ && \notag \\ &\leq &t^{\alpha }g\left( a\right) +m(1-t^{\alpha })g\left( \frac{b}{m} \right) -g\left( ta+m(1-t)\frac{b}{m}\right) \notag \end{eqnarray} and, for $x=\frac{a}{m}$ and $y=\frac{b}{m^{2}}$, after multiplying by $m$, \begin{eqnarray} &&\left\vert mt^{\alpha }f\left( \frac{a}{m}\right) +m^{2}(1-t^{\alpha })f\left( \frac{b}{m^{2}} \right) -mf\left( t\frac{a}{m}+(1-t)\frac{b}{m}\right) \right\vert \label{h8} \\ && \notag \\ &\leq &mt^{\alpha }g\left( \frac{a}{m}\right) +m^{2}(1-t^{\alpha })g\left( \frac{b}{m^{2}} \right) -mg\left( t\frac{a}{m}+(1-t)\frac{b}{m}\right) \notag \end{eqnarray} for all $t\in \left[ 0,1\right] .$ By the triangle inequality, adding the inequalities in $\left( \text{\ref{h7}}\right) $ and $\left( \text{\ref{h8}} \right) $ we get \begin{eqnarray*} &&\left\vert t^{\alpha }\left[ f\left( a\right) +mf\left( \frac{a}{m}\right) \right] +m(1-t^{\alpha })\left[ f\left( \frac{b}{m}\right) +mf\left( \frac{b }{m^{2}}\right) \right] \right. \\ && \\ &&-\left. \left[ f\left( ta+m(1-t)\frac{b}{m}\right) +mf\left( t\frac{a}{m} +(1-t)\frac{b}{m}\right) \right] \right\vert \\ && \\ &\leq &t^{\alpha }\left[ g\left( a\right) +mg\left( \frac{a}{m}\right) \right] +m(1-t^{\alpha })\left[ g\left( \frac{b}{m}\right) +mg\left( \frac{b }{m^{2}}\right) \right] \\ && \\ &&-\left[ g\left( ta+m(1-t)\frac{b}{m}\right) +mg\left( t\frac{a}{m}+(1-t) \frac{b}{m}\right) \right] . \end{eqnarray*} Thus, integrating over $t$ on $\left[ 0,1\right] $ we obtain the second inequality. The proof is completed. \end{proof} \begin{remark} If we choose $\alpha =1$ in Theorem \ref{t1}, we recover the two inequalities of Hermite-Hadamard type for $\left( g,m\right) -$convex dominated functions given in Theorem \ref{a}. \end{remark} \begin{theorem} \label{t2} Let $g:\left[ 0,\infty \right) \rightarrow \mathbb{R} $ be an $\left( \alpha ,m\right) -$convex function with $\left( \alpha ,m\right) \in \left( 0,1\right] ^{2}$.
Let $f:\left[ 0,\infty \right) \rightarrow \mathbb{R} $ be a $\left( g-\left( \alpha ,m\right) \right) -$convex dominated mapping and $0\leq a<b.$ If $f\in L_{1}\left[ a,b\right] ,$ then the following inequality holds: \begin{eqnarray} &&\left\vert \frac{1}{2}\left[ \frac{f\left( a\right) +f\left( b\right) +m\alpha f\left( \frac{a}{m}\right) +m\alpha f\left( \frac{b}{m}\right) }{ \alpha +1}\right] -\frac{1}{b-a}\int_{a}^{b}f\left( x\right) dx\right\vert \label{h9} \\ && \notag \\ &\leq &\frac{1}{2}\left[ \frac{g\left( a\right) +g\left( b\right) +m\alpha g\left( \frac{a}{m}\right) +m\alpha g\left( \frac{b}{m}\right) }{\alpha +1} \right] -\frac{1}{b-a}\int_{a}^{b}g\left( x\right) dx \notag \end{eqnarray} \end{theorem} \begin{proof} Since $f$ is a $\left( g-\left( \alpha ,m\right) \right) -$convex dominated function, we have \begin{eqnarray*} &&\left\vert t^{\alpha }f\left( a\right) +m(1-t^{\alpha })f\left( \frac{b}{m} \right) -f\left( ta+m(1-t)\frac{b}{m}\right) \right\vert \\ && \\ &\leq &t^{\alpha }g\left( a\right) +m(1-t^{\alpha })g\left( \frac{b}{m} \right) -g\left( ta+m(1-t)\frac{b}{m}\right) \end{eqnarray*} and \begin{eqnarray*} &&\left\vert t^{\alpha }f\left( b\right) +m(1-t^{\alpha })f\left( \frac{a}{m} \right) -f\left( tb+m(1-t)\frac{a}{m}\right) \right\vert \\ && \\ &\leq &t^{\alpha }g\left( b\right) +m(1-t^{\alpha })g\left( \frac{a}{m} \right) -g\left( tb+m(1-t)\frac{a}{m}\right) \end{eqnarray*} for all $t\in \left[ 0,1\right] .$ Adding the above inequalities, we get \begin{eqnarray*} &&\left\vert t^{\alpha }\left[ f\left( a\right) +f\left( b\right) \right] +m(1-t^{\alpha })\left[ f\left( \frac{a}{m}\right) +f\left( \frac{b}{m} \right) \right] -f\left( ta+m(1-t)\frac{b}{m}\right) -f\left( tb+m(1-t)\frac{ a}{m}\right) \right\vert \\ && \\ &\leq &t^{\alpha }\left[ g\left( a\right) +g\left( b\right) \right] +m(1-t^{\alpha })\left[ g\left( \frac{a}{m}\right) +g\left( \frac{b}{m} \right) \right] -g\left( ta+m(1-t)\frac{b}{m}\right) -g\left( tb+m(1-t)\frac{ a}{m}\right) . \end{eqnarray*} Integrating over $t\in \left[ 0,1\right] $ and then dividing the resulting inequality by $2,$ we get the desired result. The proof is completed. An alternative proof runs as follows. Since $f$ is $\left( g-\left( \alpha ,m\right) \right) -$convex dominated, we have by Lemma \ref{l1} that $g+f$ and $g-f$ are $\left( \alpha ,m\right) - $convex on $\left[ 0,b\right] ,$ and so, by the Hadamard-type inequality for $\left( \alpha ,m\right) -$convex functions in $\left( \text{\ref{h3}} \right) ,$ \begin{eqnarray} &&\frac{1}{b-a}\int_{a}^{b}\left( g+f\right) \left( x\right) dx \label{h10} \\ &\leq &\frac{1}{2}\left[ \frac{\left( g+f\right) \left( a\right) +\left( g+f\right) \left( b\right) +m\alpha \left( g+f\right) \left( \frac{a}{m} \right) +m\alpha \left( g+f\right) \left( \frac{b}{m}\right) }{\alpha +1} \right] \notag \end{eqnarray} and \begin{eqnarray} &&\frac{1}{b-a}\int_{a}^{b}\left( g-f\right) \left( x\right) dx \label{h11} \\ &\leq &\frac{1}{2}\left[ \frac{\left( g-f\right) \left( a\right) +\left( g-f\right) \left( b\right) +m\alpha \left( g-f\right) \left( \frac{a}{m} \right) +m\alpha \left( g-f\right) \left( \frac{b}{m}\right) }{\alpha +1} \right] \notag \end{eqnarray} Expanding $\left( \text{\ref{h10}}\right) $ and $\left( \text{\ref{h11}}\right) $ and rearranging, the first gives an upper bound for one sign of the difference in $\left( \text{\ref{h9}}\right) $ and the second for the opposite sign; combining the two bounds, we get the inequality in $\left( \text{\ref{h9}} \right) $.
\end{proof} \section{$\left( g,r\right) -$convex dominated functions} \begin{definition} \label{d3} Let the positive function $g:\left[ a,b\right] \rightarrow \mathbb{R} $ be a given $r-$convex function on $\left[ a,b\right] $. The real function $ f:\left[ a,b\right] \rightarrow \mathbb{R} $ is called $\left( g,r\right) -$convex dominated on $\left[ a,b\right] $ if the following condition is satisfied: \begin{eqnarray*} &&\left\vert M_{r}(f\left( x\right) ,f\left( y\right) ;\lambda )-f\left( \lambda x+\left( 1-\lambda \right) y\right) \right\vert \\ &\leq &M_{r}(g\left( x\right) ,g\left( y\right) ;\lambda )-g\left( \lambda x+\left( 1-\lambda \right) y\right) \end{eqnarray*} for all $x,y\in \left[ a,b\right] $ and $\lambda \in \left[ 0,1\right] .$ \end{definition} \begin{theorem} Let the positive function $g:\left[ a,b\right] \rightarrow \mathbb{R} $ be an $r-$convex function on $\left[ a,b\right] $ and let $f:\left[ a,b\right] \rightarrow \mathbb{R} $ be a $\left( g,r\right) -$convex dominated mapping with $0\leq a<b.$ If $ f\in L_{1}\left[ a,b\right] ,$ then the following inequality holds: \begin{equation*} \left\vert L_{r}\left( f\left( a\right) ,f\left( b\right) \right) -\frac{1}{ b-a}\int_{a}^{b}f\left( x\right) dx\right\vert \leq L_{r}\left( g\left( a\right) ,g\left( b\right) \right) -\frac{1}{b-a}\int_{a}^{b}g\left( x\right) dx \end{equation*} where $L_{r}\left( \cdot ,\cdot \right) $ is defined as in (\ref{L}). \end{theorem} \begin{proof} By Definition \ref{d3} with $r=0$ and $f\left( a\right) \neq f\left( b\right) ,$ we have \begin{eqnarray*} &&\left\vert f^{\lambda }\left( a\right) f^{1-\lambda }\left( b\right) -f\left( \lambda a+\left( 1-\lambda \right) b\right) \right\vert \\ &\leq &g^{\lambda }\left( a\right) g^{1-\lambda }\left( b\right) -g\left( \lambda a+\left( 1-\lambda \right) b\right) . \end{eqnarray*} Integrating the above inequality over $\lambda $ on $\left[ 0,1\right] ,$ we have \begin{eqnarray*} &&\left\vert f\left( b\right) \int_{0}^{1}\left[ \frac{f\left( a\right) }{ f\left( b\right) }\right] ^{\lambda }d\lambda -\int_{0}^{1}f\left( \lambda a+\left( 1-\lambda \right) b\right) d\lambda \right\vert \\ && \\ &\leq &g\left( b\right) \int_{0}^{1}\left[ \frac{g\left( a\right) }{g\left( b\right) }\right] ^{\lambda }d\lambda -\int_{0}^{1}g\left( \lambda a+\left( 1-\lambda \right) b\right) d\lambda . \end{eqnarray*} By a simple calculation we have \begin{eqnarray*} &&\left\vert \frac{f\left( b\right) -f\left( a\right) }{\ln f\left( b\right) -\ln f\left( a\right) }-\frac{1}{b-a}\int_{a}^{b}f\left( x\right) dx\right\vert \\ && \\ &\leq &\frac{g\left( b\right) -g\left( a\right) }{\ln g\left( b\right) -\ln g\left( a\right) }-\frac{1}{b-a}\int_{a}^{b}g\left( x\right) dx. \end{eqnarray*} The above inequality can be written as \begin{equation*} \left\vert L_{r}\left( f\left( a\right) ,f\left( b\right) \right) -\frac{1}{ b-a}\int_{a}^{b}f\left( x\right) dx\right\vert \leq L_{r}\left( g\left( a\right) ,g\left( b\right) \right) -\frac{1}{b-a}\int_{a}^{b}g\left( x\right) dx.
\end{equation*} For $r=0,\ f\left( a\right) =f\left( b\right) ,$ we have with the same development \begin{eqnarray*} &&\left\vert f\left( a\right) -f\left( \lambda a+\left( 1-\lambda \right) b\right) \right\vert \\ &\leq &g\left( a\right) -g\left( \lambda a+\left( 1-\lambda \right) b\right) \end{eqnarray*} and this inequality can be written as \begin{equation*} \left\vert L_{r}\left( f\left( a\right) ,f\left( b\right) \right) -\frac{1}{ b-a}\int_{a}^{b}f\left( x\right) dx\right\vert \leq L_{r}\left( g\left( a\right) ,g\left( b\right) \right) -\frac{1}{b-a}\int_{a}^{b}g\left( x\right) dx. \end{equation*} By the Definition \ref{d3} with $r\neq 0,-1,\ f\left( a\right) \neq f\left( b\right) ,$ we have \begin{eqnarray*} &&\left\vert \left( \lambda f^{r}\left( a\right) +\left( 1-\lambda \right) f^{r}\left( b\right) \right) ^{\frac{1}{r}}-f\left( \lambda a+\left( 1-\lambda \right) b\right) \right\vert \\ && \\ &\leq &\left( \lambda g^{r}\left( a\right) +\left( 1-\lambda \right) g^{r}\left( b\right) \right) ^{\frac{1}{r}}-g\left( \lambda a+\left( 1-\lambda \right) b\right) . \end{eqnarray*} Integrating the above inequality over $\lambda $ on $\left[ 0,1\right] ,$ we have \begin{eqnarray*} &&\left\vert \frac{r}{r+1}\frac{f^{r+1}\left( a\right) -f^{r+1}\left( b\right) }{f^{r}\left( a\right) -f^{r}\left( b\right) }-\frac{1}{b-a} \int_{a}^{b}f\left( x\right) dx\right\vert \\ && \\ &\leq &\frac{r}{r+1}\frac{g^{r+1}\left( a\right) -g^{r+1}\left( b\right) }{ g^{r}\left( a\right) -g^{r}\left( b\right) }-\frac{1}{b-a} \int_{a}^{b}g\left( x\right) dx. \end{eqnarray*} The above inequality can be written as \begin{eqnarray*} &&\left\vert L_{r}\left( f\left( a\right) ,f\left( b\right) \right) -\frac{1 }{b-a}\int_{a}^{b}f\left( x\right) dx\right\vert \\ &\leq &L_{r}\left( g\left( a\right) ,g\left( b\right) \right) -\frac{1}{b-a} \int_{a}^{b}g\left( x\right) dx. \end{eqnarray*} For$\ r\neq 0$ and $f\left( a\right) =f\left( b\right) ,$ we have similarly \begin{eqnarray*} &&\left\vert \left( f^{r}\left( a\right) \right) ^{\frac{1}{r}}-f\left( \lambda a+\left( 1-\lambda \right) b\right) \right\vert \\ &\leq &\left( g^{r}\left( a\right) \right) ^{\frac{1}{r}}-g\left( \lambda a+\left( 1-\lambda \right) b\right) . \end{eqnarray*} Then integrating the above inequality over $\lambda $ on $\left[ 0,1\right] , $ we have \begin{eqnarray*} &&\left\vert L_{r}\left( f\left( a\right) ,f\left( b\right) \right) -\frac{1 }{b-a}\int_{a}^{b}f\left( x\right) dx\right\vert \\ &\leq &L_{r}\left( g\left( a\right) ,g\left( b\right) \right) -\frac{1}{b-a} \int_{a}^{b}g\left( x\right) dx. \end{eqnarray*} Finally, let $r=-1.$ For$\ f\left( a\right) \neq f\left( b\right) $ we have again \begin{eqnarray*} &&\left\vert \left( \lambda f^{-1}\left( a\right) +\left( 1-\lambda \right) f^{-1}\left( b\right) \right) ^{-1}-f\left( \lambda a+\left( 1-\lambda \right) b\right) \right\vert \\ && \\ &\leq &\left( \lambda g^{-1}\left( a\right) +\left( 1-\lambda \right) g^{-1}\left( b\right) \right) ^{-1}-g\left( \lambda a+\left( 1-\lambda \right) b\right) . 
\end{eqnarray*} Integrating the above inequality over $\lambda $ on $\left[ 0,1\right] ,$ we have \begin{eqnarray*} &&\left\vert \frac{f\left( a\right) f\left( b\right) }{f\left( b\right) -f\left( a\right) }\int_{\frac{1}{f\left( b\right) }}^{\frac{1}{f\left( a\right) }}\lambda ^{-1}d\lambda -\frac{1}{b-a}\int_{a}^{b}f\left( x\right) dx\right\vert \\ && \\ &\leq &\frac{g\left( a\right) g\left( b\right) }{g\left( b\right) -g\left( a\right) }\int_{\frac{1}{g\left( b\right) }}^{\frac{1}{g\left( a\right) } }\lambda ^{-1}d\lambda -\frac{1}{b-a}\int_{a}^{b}g\left( x\right) dx. \end{eqnarray*} The above inequality can be written as \begin{eqnarray*} &&\left\vert L_{-1}\left( f\left( a\right) ,f\left( b\right) \right) -\frac{1 }{b-a}\int_{a}^{b}f\left( x\right) dx\right\vert \\ &\leq &L_{-1}\left( g\left( a\right) ,g\left( b\right) \right) -\frac{1}{b-a} \int_{a}^{b}g\left( x\right) dx. \end{eqnarray*} The proof is completed. \end{proof} \end{document}
\begin{document} \title{Transporting Experimental Results with Entropy Balancing} \author[1]{Kevin P. Josey} \author[2]{Seth A. Berkowitz} \author[1]{Debashis Ghosh} \author[3,4]{Sridharan Raghavan*} \authormark{Josey \textsc{et al}.} \address[1]{\orgdiv{Department of Biostatistics and Informatics}, \orgname{Colorado School of Public Health, University of Colorado Anschutz Medical Campus}, \orgaddress{\state{Colorado}, \country{USA}}} \address[2]{\orgdiv{Division of General Medicine and Clinical Epidemiology}, \orgname{University of North Carolina at Chapel Hill School of Medicine}, \orgaddress{\state{North Carolina}, \country{USA}}} \address[3]{\orgname{Rocky Mountain Regional VA Medical Center}, \orgaddress{\state{Colorado}, \country{USA}}} \address[4]{\orgdiv{Division of Hospital Medicine}, \orgname{University of Colorado School of Medicine}, \orgaddress{\state{Colorado}, \country{USA}}} \corres{*Sridharan Raghavan, Rocky Mountain Regional VA Medical Center, \email{[email protected]}} \presentaddress{1700 N Wheeling St, Aurora, CO 80045} \abstract[Summary]{We show how entropy balancing can be used for transporting experimental treatment effects from a trial population onto a target population. This method is doubly-robust in the sense that if either the outcome model or the probability of trial participation is correctly specified, then the estimate of the target population average treatment effect is consistent. Furthermore, we only require the sample moments of the effect modifiers drawn from the target population to consistently estimate the target population average treatment effect. We compared the finite-sample performance of entropy balancing with several alternative methods for transporting treatment effects between populations. Entropy balancing techniques are efficient and robust to violations of model misspecification. We also examine the results of our proposed method in an applied analysis of the Action to Control Cardiovascular Risk in Diabetes Blood Pressure (ACCORD-BP) trial transported to a sample of US adults with diabetes taken from the National Health and Nutrition Examination Survey (NHANES) cohort.} \keywords{Calibration, Causal Inference, Generalizablity, Effect Modification} \jnlcitation{} \maketitle \footnotetext{} \section{Introduction} In a randomized controlled trial (RCT), the population from which the sample is collected, the \textit{trial population}, often differs from the population of interest, the \textit{target population}. This scenario becomes problematic when the true causal effect is heterogeneous, implying the existence of effect modifying covariates - \textit{effect modifiers} - which alter the average treatment effect. If the distribution of the effect modifiers is different in the trial and target populations, the average treatment effect observed in the trial will likely differ from what would be observed within the target population, limiting the conclusions that can be drawn from an otherwise well-designed study. It is worth noting that effect modifiers are specific to the scale of the target estimand. Throughout, we will refer to effect modifiers as variables that modify the average treatment effect which we will define later on. The recent literature on the subject of transportability is divided into two scenarios determined by the nature of the trial and target populations, and the desired causal estimand. 
If the trial population is nested within the target population, we can extend the results of an RCT using a sample from the target population in a process called \textit{generalizability}. If the target and trial populations are subpopulations drawn from some superpopulation, then the problem is one of \textit{transportability}. Figure \ref{generalize} provides a diagram relating the data to the corresponding populations in both the generalizability and transportability problems. Note that for generalizability, the trial population is a subpopulation of the target population, while in transportability the target and trial populations are not nested. We will discuss the difference between these two scenarios in more detail in Section \ref{assumptions}. The work herein, however, will focus primarily on the issue of transportability. Some articles have approached the problem of transportability from the setting in which the investigator is provided the individual-level data from the trial population along with individual-level covariate data from the target population.\cite{rudolph_robust_2017} Another setting provides the individual-level data from the trial population, but only the covariate sample moments (e.g., the mean and standard deviation) from the target population, which can often be found in a so-called Table 1 throughout the medical literature.\cite{signorovitch_comparative_2010} One property that is often sought while developing estimators for causal inference is called double-robustness.\cite{bang_doubly_2005} In the context of transporting experimental results, this means that if either the model for the probability of trial participation or the outcome model is correctly specified, then the resulting average treatment effect estimator is consistent. We propose using entropy balancing to solve transportability problems. The procedure we propose builds upon several other causal effect estimators which employ convex optimization techniques to estimate a vector of \textit{sampling weights}.\cite{signorovitch_comparative_2010, hartman_sample_2015, zhang_new_2016, phillippo_methods_2018} These sampling weights would otherwise be uniform if the RCT data were randomly sampled from the target population. The literature on convex optimization in the context of causal inference has grown considerably in recent years.\cite{hainmueller_entropy_2012, imai_covariate_2014, wang_minimal_2020} Rather than using these methods to exactly balance the covariate distribution between the treated and control units within an observational study, convex optimization techniques applied to transportability can be used to estimate weights which balance the covariate distribution of the trial participants and non-participants. Entropy balancing is flexible in that it can be applied both when the complete individual-level covariate data are provided and when only the covariate sample moments of the target population are provided, such as what might appear in the Table 1 of a clinical paper. Furthermore, the specific entropy balancing procedure we develop can be shown to be doubly-robust for estimating the target population average treatment effect given the complete individual-level covariate data in the context of transportability. The entropy balancing solution we propose can also be viewed as a solution for indirectly comparing experimental results from two separate randomized trials with an anchored treatment arm, a problem not too dissimilar from that of transportability.
However, transportability as we describe it does not require a second randomized trial. The sample drawn from the target population, which is subsequently used in our transportability formulations, does not require any information about the outcome or treatment assignment. Moreover, based on new results identified in the indirect comparison literature, we can relax a rather strong assumption about the nature of the potential outcomes typically made in the transportability literature.\cite{phillippo_methods_2018} In addition, we take a more comprehensive view of entropy balancing through the lens of causal inference, motivating this work through the potential outcomes framework and describe several properties about entropy balancing for transportability that are sought for estimators in the causal inference literature. This includes the property of double-robustness, semiparametric efficiency, and considerations between finite-sample and superpopulation settings. We also compare the entropy balancing approach with several other methods in an effort to showcase the strengths of doubly-robust estimators more generally in transportability problems. The contents of the article are as follows. In Section \ref{setting} we define the notation, setting, and assumptions necessary for transporting experimental results between populations and describe several existing methods for transportation, including two methods that can be applied in the setting where we are given only the sample covariate moments of the target population and two methods that require individual-level covariate data from the target population, one of which is doubly-robust. In Section \ref{entropy}, we introduce entropy balancing and describe the difference between conducting inference upon the target population average treatment effect versus the target sample average treatment effect. In Section \ref{simulation} we compare the five methods considered in Sections \ref{setting} and \ref{entropy} using numerical studies. We also illustrate through a secondary simulation how entropy balancing and other methods that do not require individual-level data from the target population only allow for inference upon the target sample average treatment effect and not the target population average treatment effect. In Section \ref{illustrative} we compare entropy balancing and inverse odds of sampling weights in a real-data example: transporting results from a clinical trial of blood pressure treatment intensity in diabetes patients to a representative sample of the US population. Section \ref{discussion} concludes with a discussion. \begin{figure} \caption{Square nodes represent populations whereas circular nodes represent samples. The solid arrow represents a subsetting of the origin node. The dashed line represents the process of generalizability (A) and transportability (B).} \label{generalize} \end{figure} \section{Setting and Preliminaries}\label{setting} \subsection{Notation and Potential Outcomes}\label{notation} Suppose we have two random samples from different populations. For independent sampling units $i = 1,2,\ldots,n$, let $S_i \in \{0,1\}$ denote a random sampling indicator. Indexed by $\{i : S_i = 1\}$, the \textit{trial sample} evaluates the efficacy of some treatment on the trial population. The second sample is randomly selected from the target population and indexed by $\{i : S_i = 0\}$. We refer to this sample as the \textit{target sample}. 
We denote $n_1 = \sum_{i = 1}^n S_i$, $n_0 = \sum_{i = 1}^n (1 - S_i)$, and $n = n_1 + n_0$. Both $\mathbb{E}(\cdot)$ and $\Pr\{\cdot\}$ will be evaluated over the \textit{superpopulation}, which is the combined trial and target population. For $i = 1,2,\ldots, n$, let $\mathbf{X}_{i} \in \mathcal{X}$ denote a vector of measured baseline (i.e. pretreatment) covariates. For $i\in\{i:S_i = 1\}$, let $Y_{i} \in \Re$ denote the real-valued outcome, and $Z_i \in \{0,1\}$ denote the random treatment assignment. We assume throughout that $\mathbf{X}_i$ contains an intercept term. The probability density function for $\mathbf{X}_i$ is denoted $f(\mathbf{x}_i)$ for $\mathbf{x}_i \in \mathcal{X}$. Indexed by $j = 1,2,\ldots,m$, we denote the vector-valued balance function $\mathbf{c}(\mathbf{X}_i) \equiv [c_1(\mathbf{X}_i), c_2(\mathbf{X}_i), \ldots, c_j(\mathbf{X}_i), \ldots, c_m(\mathbf{X}_i)]^T$, whose components are the effect modifiers included in the models of $S_i$ and $Y_i$ along with $Z_i$. We suppose $c_1(\mathbf{X}_i) = 1$ for all $i = 1,2,\ldots,n$. Some examples for $c_j(\mathbf{X}_i)$, $j = 2,3,\ldots,m$ include polynomial transformations of the covariates and interaction terms: anything that might modify the effect of the treatment on the outcome. It is common practice to balance the covariates as well as the variance (i.e. second moments) of the covariates when more detailed knowledge about the effect modifying process is unavailable.\cite{signorovitch_comparative_2010} We employ the potential outcomes framework for a binary treatment.\cite{rubin_estimating_1974} This framework allows us to construct the observed outcome in terms of the factual and counterfactual variables $Y_{i}(0)$ and $Y_{i}(1)$, $i = 1,2,\ldots,n$. $Y_{i}(0)$ and $Y_{i}(1)$ correspond to each unit's outcomes when $Z_{i} = 0$ and $Z_{i} = 1$, respectively. The observed responses are then defined as $Y_{i} \equiv Z_{i} Y_{i}(1) + (1 - Z_{i})Y_{i}(0)$. The potential outcomes framework also allows us to define the target population average treatment effect, $\tau_{\text{TATE}} \equiv \mathbb{E}[Y_i(1) - Y_i(0)|S_i = 0]$ and the target sample average treatment effect \[ \tau_{\text{TSATE}} \equiv \frac{1}{n_0}\sum_{\{i : S_i = 0 \}} \left[Y_i(1) - Y_i(0)\right]. \] The target sample average treatment effect only concerns the effects among units within the target sample whereas the target population average treatment effect concerns the average effect for all units that make up the target population.\cite{imai_misunderstandings_2008} We also define $\rho(\mathbf{X}_i) \equiv \Pr\{S_i = 1|\mathbf{X}_i\}$ and $\pi \equiv \Pr\{Z_i = 1|\mathbf{X}_i\}$. Recall that in an RCT, $\pi \in (0,1)$ should be independent of $\mathbf{X}_i$. Another way to write $\tau_{\text{TATE}}$ is \[ \tau_{\text{TATE}} \equiv \frac{ \mathbb{E}\left\{[1 - \rho(\mathbf{X}_i)][Y_i(1) - Y_i(0)]\right\}}{\mathbb{E}[1 - \rho(\mathbf{X}_i)]}. \] This alternative definition identifies the target population average treatment effect as a type of weighted average treatment effect. A corollary to Theorem 4 of Hirano \textit{et al}.
can be used to derive the semiparametric efficiency bound for any estimator of $\tau_{\text{TATE}}$ as \begin{equation}\label{semi} \Sigma_{\text{semi}} \equiv \frac{\mathbb{E}\left([1 - \rho(\mathbf{X}_i)]^2 \left\{\frac{\mathbb{V}[Y_i(1)|\mathbf{X}_i]}{\pi} + \frac{\mathbb{V}[Y_i(0)|\mathbf{X}_i]}{1 - \pi} + \left[\tau_{\text{TATE}}(\mathbf{X}_i) - \tau_{\text{TATE}} \right]^2 \right\} \right)}{\mathbb{E}[1 - \rho(\mathbf{X}_i)]^2} \end{equation} where $\tau_{\text{TATE}}(\mathbf{X}_i) \equiv \mathbb{E}[Y_i(1) - Y_i(0)|\mathbf{X}_i, S_i = 0]$.\cite{hirano_efficient_2003} This setup allows us to utilize the asymptotic results about weighted average treatment effects derived previously using convex optimization techniques such as those employed by entropy balancing.\cite{chan_globally_2016} We denote the population moments of the target covariate distribution as $\mathbb{E}[\mathbf{c}(\mathbf{X}_i)|S_i = 0] = \boldsymbol{\theta}_0$. For much of this paper, we will describe methods for transporting experimental results which weight the responses $\mathbf{Y}_i$ for $i \in \{i:S_i = 1\}$ so that the weighted trial sample moments are the same as the population moments of the target population.\cite{deville_calibration_1992} We will denote the sample weights as $\boldsymbol{\gamma} \equiv (\gamma_1, \gamma_2, \ldots, \gamma_{n_1})$. Since $\boldsymbol{\theta}_0$ is usually unknown, we will need to make use of the estimator $\hat{\boldsymbol{\theta}}_0 \equiv n_0^{-1}\sum_{\{i:S_i = 0\}} \mathbf{c}(\mathbf{X}_i)$. In practice, we usually set $\mathbf{c}(\mathbf{X}_i) = \mathbf{X}_i$ unless more is known about the data generating mechanisms. In those cases, $\hat{\boldsymbol{\theta}}_0$ typically appears in the so-called Table 1 of many publications. \subsection{Assumptions for Transportability}\label{assumptions} The following assumptions facilitate our ability to transport experimental results onto a target population. These assumptions represent the necessary conditions required for the transporting experimental results. They are also adapted from similar articles on the subject. \cite{pearl_external_2014, rudolph_robust_2017, dahabreh_extending_2020} Furthermore, we invoke the stable unit treatment value, which comprises the no interference and consistency assumptions. \begin{assumption}[Mean Difference Exchangeability]\label{exchange} The target population average treatment effect conditional on the baseline covariates is exchangeable between samples: \[ \mathbb{E}[Y_{i}(1) - Y_{i}(0)|\mathbf{X}_i, S_i = 1] = \mathbb{E}[Y_{i}(1) - Y_{i}(0)|\mathbf{X}_i, S_i = 0] = \mathbb{E}[Y_{i}(1) - Y_{i}(0)|\mathbf{X}_i]. \] \end{assumption} \begin{assumption}[Sampling Positivity]\label{positivity} The probability of trial participation, conditioned on the baseline covariates necessary to ensure Assumption \ref{exchange}, is bounded away from zero and one: \[ 0 < \Pr\{S_i = 1|\mathbf{X}_i = \mathbf{x}_i \} < 1 \ \text{for all} \ \mathbf{x}_i \in \mathcal{X} \ \text{where} \ f(\mathbf{x}_i|S_i = 0) > 0. \] \end{assumption} \begin{assumption}[Strongly Ignorable Treatment Assignment]\label{sita} The potential outcomes among the trial participants are independent of the treatment assignment given $\mathbf{X}_i$: \[ [Y_{i}(0), Y_{i}(1)]^T \independent Z_{i} | \mathbf{X}_{i} \ \text{for all} \ i \in \{i:S_i = 1\}. 
\] \end{assumption} Assumption \ref{sita} is a standard assumption in the potential outcomes literature.\cite{rosenbaum_central_1983} This assumption can be further simplified in an RCT setting to assume \[ [Y_{i}(0), Y_{i}(1)]^T \independent Z_{i} \ \text{for all} \ i \in \{i:S_i = 1\} \] since there should be no association between the treatment assignment and the covariates. The covariate imbalance that requires amelioration in transportability instead appears between $\mathbf{X}_i$ and $S_i$. As noted in the Introduction, there are subtle distinctions between generalizability and transportability. The main difference occurs with the causal estimand of interest. In transportability, the target estimand is $\tau_{\text{TATE}}$. For generalizability, the causal estimand of interest is $\tau_{\text{ATE}} \equiv \mathbb{E}\left[Y_i(1) - Y_i(0) \right]$. This is on account of the trial population being nested within the target population, so the superpopulation and the target population are identical. Under our notation, generalizability further assumes that the units $\{i:S_i = 0\}$ are sampled from the target population and the complement of the trial population. As a result, we would need to rewrite Assumption \ref{positivity} for generalizability to state \[ 0 < \Pr\{S_i = 1|\mathbf{X}_i = \mathbf{x}_i \} < 1 \ \text{for all} \ \mathbf{x}_i \in \mathcal{X} \ \text{where} \ f(\mathbf{x}_i) > 0. \] We avoid this setup and instead focus on methods developed for transportability and inference on $\tau_{\text{TATE}}$. In addition to Assumptions \ref{exchange}-\ref{sita}, we require the following assumptions to establish the double-robustness property of entropy balancing. We will show that if either assumption is met, then the entropy balancing methods introduced in Section \ref{entropy} will be consistent for the target population average treatment effect. We also use these assumptions to establish consistency for some of the other methods we describe in Section \ref{previous} when standard regression methods are employed. Note that an underlying requirement implied by these two assumptions is that there is no unmeasured effect modification present given the known balance functions. If an effect modifier is missing, then any estimator we present will likely produce biased estimates of the target population average treatment effect. \begin{assumption}[Conditional Linearity]\label{linear} The expected value of the potential outcomes, conditioned on $\mathbf{X}_i$, is linear across the span of the covariates. That is, $\mathbb{E}[Y_{i}(1) - Y_{i}(0)|\mathbf{X}_i] = \mathbf{c}(\mathbf{X}_i)^{T}\boldsymbol{\alpha}$ and $\mathbb{E}[Y_{i}(0)|\mathbf{X}_i] = \mathbf{c}(\mathbf{X}_i)^{T}\boldsymbol{\beta}$ for all $i = 1,2,\ldots,n$ and some $\boldsymbol{\alpha},\boldsymbol{\beta} \in \Re^m$. \end{assumption} \begin{assumption}[Linear Conditional Log-Odds]\label{odds} The log-odds of trial participation are linear across the span of the covariates. That is, $\text{logit}[\rho(\mathbf{X}_i)] = \mathbf{c}(\mathbf{X}_i)^{T} \boldsymbol{\lambda}$ for all $i = 1,2,\ldots,n$ and some $\boldsymbol{\lambda} \in \Re^m$.
\end{assumption} Assumption \ref{exchange} is substantially relaxed from what appears in more recent literature.\cite{rudolph_robust_2017, dahabreh_extending_2020} These other articles require the expected value of the potential outcomes to be exchangeable between populations: \begin{equation}\label{absolute} \mathbb{E}[Y_{i}(1)|\mathbf{X}_i, S_i = 1] = \mathbb{E}[Y_{i}(1)|\mathbf{X}_i, S_i = 0] \quad \text{and} \quad \mathbb{E}[Y_{i}(0)|\mathbf{X}_i, S_i = 1] = \mathbb{E}[Y_{i}(0)|\mathbf{X}_i, S_i = 0]. \end{equation} The indirect comparison literature refers to this assumption as the conditional constancy of \textit{absolute} effects whereas Assumption \ref{exchange} is commonly referred to as the conditional constancy of \textit{relative} effects. Whereas the conditional constancy of absolute effects requires adjustment for all prognostic and effect modifying covariates, the conditional constancy of relative effects only requires adjustments for the effect modifiers. Note that the much stronger conditional constancy of absolute effects assumption is enforced in Assumption \ref{linear}. However, the less stringent Assumption \ref{exchange}, in addition to Assumptions \ref{positivity} and \ref{sita}, is all that is required to obtain consistent estimates when Assumption \ref{odds} holds. This assumption relaxation result can be made using arguments of anchored indirect comparisons.\cite{phillippo_methods_2018} Suppose the target sample is a randomized control trial comparing $Z = 1$ with $Z = 0$, similar to the data contained within the trial sample, only we do not observe either the outcome or the treatment assignment. Then this target ``trial" is figuratively anchored by both treatment groups. If the target ``trial" were to be conducted, and the outcome and treatment data were collected, then a resulting indirect comparison estimator should yield estimates of zero as both ``trials" examine the same estimand.\cite{phillippo_equivalence_2020} This is precisely what transportability methods are targeting – what the causal effect would be if the trial were conducted on a different population. Only requiring the conditional constancy of relative effects assumption versus the conditional constancy of absolute effects assumption adds incentive to focus on correctly specifying the sampling model over the outcome model through the entropy balancing techniques that we will describe in Section \ref{entropy}. \subsection{Alternative Methods for Transportability}\label{previous} In this section we present four different methods for transporting experimental results to estimate $\tau_{\text{TATE}}$. For each method, we assume we are given Assumptions \ref{exchange}-\ref{sita}. The first method weights responses of the trial sample with the inverse odds of sampling.\cite{westreich_transportability_2017} Define the inverse odds of sampling weights as \[ \hat{\gamma}^{\text{PS}}_i = \begin{cases} \frac{1 - \hat{\rho}(\mathbf{X}_i)}{\hat{\rho}(\mathbf{X}_i)\hat{\pi}}, & \text{when} \ S_i = 1, Z_i = 1 \\ \frac{1 - \hat{\rho}(\mathbf{X}_i)}{\hat{\rho}(\mathbf{X}_i)(1 - \hat{\pi})}, & \text{when} \ S_i = 1, Z_i = 0 \\ 0, & \text{when} \ S_i = 0 \end{cases} \] where $\hat{\pi}$ is a consistent estimator of the probability of treatment and $\hat{\rho}(\mathbf{X}_i)$ is a consistent estimator of the probability of trial participation. 
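As a point of reference, the following minimal sketch (in Python) illustrates one way the weights above might be computed when the sampling model is fit by logistic regression; the function and variable names are ours and purely illustrative, and other specifications of $\hat{\rho}(\mathbf{X}_i)$ could be substituted.

\begin{verbatim}
import numpy as np
import statsmodels.api as sm

def iosw_weights(C, S, Z):
    # C : (n, m) matrix of balance functions c(X_i), first column equal to 1
    # S : (n,) 0/1 trial participation indicator
    # Z : (n,) 0/1 treatment indicator (only used where S == 1)
    # Sampling model: logistic regression of S on c(X), one way to encode
    # the linear conditional log-odds assumption.
    rho_hat = sm.Logit(S, C).fit(disp=0).predict(C)
    # Treatment probability within the trial; constant under randomization.
    pi_hat = Z[S == 1].mean()
    gamma = np.zeros(len(S))
    trial = S == 1
    odds = (1.0 - rho_hat[trial]) / rho_hat[trial]
    gamma[trial] = np.where(Z[trial] == 1, odds / pi_hat, odds / (1.0 - pi_hat))
    return gamma
\end{verbatim}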
The target population average treatment effect is then estimated by computing \[ \hat{\tau}_{\text{IOSW}} = \sum_{\{i:S_i = 1\}} \frac{\hat{\gamma}^{\text{PS}}_iZ_iY_i}{\sum_{\{i:S_i = 1\}} \hat{\gamma}^{\text{PS}}_iZ_i} - \sum_{\{i:S_i = 1\}} \frac{\hat{\gamma}^{\text{PS}}_i(1 - Z_i)Y_i}{\sum_{\{i:S_i = 1\}} \hat{\gamma}^{\text{PS}}_i(1 - Z_i)}. \] If Assumption \ref{odds} holds, we may use logistic regression to obtain a consistent estimator $\hat{\rho}(\mathbf{X}_i)$ of $\rho(\mathbf{X}_i)$. A consistent estimator for $\rho(\mathbf{X}_i)$ in turn renders $\hat{\tau}_{\text{IOSW}}$ consistent for $\tau_{\text{TATE}}$. Given the extended conditional constancy of absolute effects assumption in (\ref{absolute}), another proposed solution is to find consistent estimators of the conditional means of the potential outcomes with the trial sample data; $\mu_1(\mathbf{X}_i) \equiv \mathbb{E}[Y_i(1)|\mathbf{X}_i, S_i = 1]$ and $\mu_0(\mathbf{X}_i) \equiv \mathbb{E}[Y_i(0)|\mathbf{X}_i, S_i = 1]$. We will refer to this method as the outcome modeling (OM) approach. The consistent estimators are denoted as $\hat{\mu}_1(\mathbf{X}_i)$ and $\hat{\mu}_0(\mathbf{X}_i)$, respectively. Under Assumption \ref{exchange}, $\tau_{\text{TATE}}$ can be estimated by solving for \[ \hat{\tau}_{\text{OM}} = \frac{1}{n_0} \sum_{\{i: S_i = 0\}} \left[\hat{\mu}_1(\mathbf{X}_i) - \hat{\mu}_0(\mathbf{X}_i)\right]. \] In the causal inference literature, this method follows the framework for computing causal effects known as g-computation.\cite{robins_new_1986} If Assumption \ref{linear} is given, we can estimate $\tau_{\text{TATE}}$ with the OM approach even if we are only given $\hat{\boldsymbol{\theta}}_0$ instead of $\mathbf{X}_i$ for all $i \in \{i:S_i = 0\}$. To do so, we would regress $Y_i$ onto $\mathbf{c}(\mathbf{X}_i)$ for all $\{i:S_i = 1, Z_i = 1\}$ and $\{i:S_i = 1, Z_i = 0\}$ to get $\hat{\boldsymbol{\alpha}}$ and $\hat{\boldsymbol{\beta}}$, respectively. We then compute \[ \hat{\tau}_{\text{OM}} = \hat{\boldsymbol{\theta}}^T_0\left(\hat{\boldsymbol{\alpha}} - \hat{\boldsymbol{\beta}}\right) \] where $\hat{\boldsymbol{\alpha}}$ and $\hat{\boldsymbol{\beta}}$ can be fit with ordinary least squares. The OM approach and the inverse odds of sampling weights may be combined into a so-called doubly-robust estimator. A doubly robust (DR) estimator combines estimators of the model components, in this case the models for $[Y_i(1), Y_i(0)]$ and $S_i$, so as to remain consistent when at most one of the two models is misspecified. The conventional doubly-robust estimator for a binary treatment was first proposed as a semiparametric technique for missing-data problems.\cite{robins_estimation_1994} There have been extensive modifications to this conventional doubly-robust estimator, including alterations for transporting experimental results of a binary treatment.\cite{zhang_new_2016} Using the same notation outlined for the outcome model approach and the inverse odds of sampling weights, the target population average treatment effect can be estimated by solving for \begin{equation}\label{dr} \hat{\tau}_{\text{DR}} = \sum_{\{i:S_i = 1\}} \frac{\hat{\gamma}^{\text{PS}}_iZ_i[Y_i - \hat{\mu}_1(\mathbf{X}_i)]}{\sum_{\{i:S_i = 1\}} \hat{\gamma}^{\text{PS}}_iZ_i} - \sum_{\{i:S_i = 1\}} \frac{\hat{\gamma}^{\text{PS}}_i(1 - Z_i)[Y_i - \hat{\mu}_0(\mathbf{X}_i)]}{\sum_{\{i:S_i = 1\}} \hat{\gamma}^{\text{PS}}_i(1 - Z_i)} + \frac{1}{n_0} \sum_{\{i: S_i = 0\}} \left[\hat{\mu}_1(\mathbf{X}_i) - \hat{\mu}_0(\mathbf{X}_i)\right].
\end{equation} It is easy to see that if $\hat{\mu}_0(\mathbf{X}_i)$ and $\hat{\mu}_1(\mathbf{X}_i)$ are consistent for $\mu_0(\mathbf{X}_i)$ and $\mu_1(\mathbf{X}_i)$, respectively, then the last summation in (\ref{dr}) is consistent for the target sample average treatment effect (and therefore also for the target population average treatment effect) while the first two summations converge in probability to zero. Similarly, if $\hat{\gamma}^{\text{PS}}_i$ is consistent for $\rho(\mathbf{X}_i)[1 - \rho(\mathbf{X}_i)]^{-1}$, then the first two summations of (\ref{dr}) will consistently estimate the negative bias induced by the last summation.\cite{dahabreh_extending_2020} Another doubly-robust estimator closely related to the augmented estimator in (\ref{dr}) uses targeted maximum likelihood estimation (TMLE).\cite{laan_targeted_2006} TMLE is a popular choice among causal practitioners due to its flexibility for estimating a variety of different estimands, including the target population average treatment effect.\cite{rudolph_robust_2017} For transportability, the targeted maximum likelihood estimator is constructed as follows. First, the initial estimates of $\hat{\mu}_1(\mathbf{X}_i)$ and $\hat{\mu}_0(\mathbf{X}_i)$ are fit using the trial sample data. We then update the predictions of the potential outcomes on the trial sample with \begin{equation}\label{offset} \begin{split} \tilde{\mu}_0(\mathbf{X}_i) &= \hat{\mu}_0(\mathbf{X}_i) + \hat{\epsilon}_0 (1 - Z_i) \hat{\gamma}^{\text{PS}}_i \enskip \\ \tilde{\mu}_1(\mathbf{X}_i) &= \hat{\mu}_1(\mathbf{X}_i) + \hat{\epsilon}_1 Z_i \hat{\gamma}^{\text{PS}}_i. \end{split} \end{equation} The estimates of $\epsilon_0$ and $\epsilon_1$ are obtained by regressing the outcome $Y_i$ onto the clever covariates, $(1 - Z_i)\hat{\gamma}^{\text{PS}}_i$ and $Z_i \hat{\gamma}^{\text{PS}}_i$, with $\hat{\mu}_0(\mathbf{X}_i)$ and $\hat{\mu}_1(\mathbf{X}_i)$ serving as offsets for all $i \in \{i:S_i = 1\}$. The estimator of $\tau_{\text{TATE}}$ under the TMLE framework solves for \begin{equation}\label{tmle} \hat{\tau}_{\text{TMLE}} = \frac{1}{n_0} \sum_{\{i: S_i = 0\}} \left[\tilde{\mu}_1(\mathbf{X}_i) - \tilde{\mu}_0(\mathbf{X}_i) \right] \end{equation} in a similar manner to the OM approach. Equation (\ref{tmle}) is doubly-robust for estimating $\tau_{\text{TATE}}$ in the sense that if either the sampling model or the potential outcomes models are consistently estimated, then $\hat{\tau}_{\text{TMLE}}$ is also consistent.\cite{rudolph_robust_2017} TMLE also requires individual-level covariate data from the target sample to estimate some of the components in (\ref{offset}) and (\ref{tmle}). For the DR and TMLE methods, it is unclear to us whether the more relaxed Assumption \ref{exchange} remains applicable in cases where the sampling model is correctly specified. Note that both of these methods are heavily geared toward the outcome regression model being correctly specified. To avoid any distraction from this potential discrepancy, we ensure the conditional constancy of absolute effects assumption is satisfied in the simulation study found in Section \ref{sim_1}. Furthermore, we will compare the entropy balancing methods described in the next section with IOSW alone in the data analysis found in Section \ref{illustrative}, since, unlike in a simulation study, the conditional constancy of absolute effects assumption cannot be guaranteed there.
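Before moving on, we give a minimal \texttt{R} sketch (ours, with hypothetical object names, continuing the \texttt{dat} and \texttt{gamma\_ps} objects from the sketch above) of how $\hat{\tau}_{\text{IOSW}}$, $\hat{\tau}_{\text{OM}}$, and $\hat{\tau}_{\text{DR}}$ could be assembled from the inverse odds of sampling weights and ordinary least squares fits of the outcome model; the variance estimation discussed in the cited literature is omitted.
\begin{verbatim}
trial  <- subset(dat, S == 1)
target <- subset(dat, S == 0)
w <- gamma_ps[dat$S == 1]

# outcome models fit separately on each trial arm
fit1 <- lm(Y ~ X0 + X1 + X2 + X3, data = subset(trial, Z == 1))
fit0 <- lm(Y ~ X0 + X1 + X2 + X3, data = subset(trial, Z == 0))
mu1_trial <- predict(fit1, newdata = trial)
mu0_trial <- predict(fit0, newdata = trial)

tau_om   <- mean(predict(fit1, newdata = target) - predict(fit0, newdata = target))
tau_iosw <- with(trial, sum(w * Z * Y) / sum(w * Z) -
                        sum(w * (1 - Z) * Y) / sum(w * (1 - Z)))
tau_dr   <- with(trial, sum(w * Z * (Y - mu1_trial)) / sum(w * Z) -
                        sum(w * (1 - Z) * (Y - mu0_trial)) / sum(w * (1 - Z))) + tau_om
\end{verbatim}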
\section{Entropy Balancing}\label{entropy} Entropy balancing has emerged as a popular method for estimating weights in a variety of contexts, particularly for estimating the average treatment effect of the treated.\cite{hainmueller_entropy_2012,zhao_entropy_2017} Entropy balancing has also previously been introduced for evaluating indirect comparisons of randomized trials, though in this case it is referred to as a method of moments estimator for the inverse odds of sampling when the probability of trial participation follows a logit model.\cite{signorovitch_comparative_2010} This method of moments estimator happens to be the dual solution to an entropy balancing primal problem. We prefer to use the term entropy balancing as it more concisely describes the underlying constrained convex optimization problem that must be solved in order to balance the covariate distribution: \begin{equation}\label{primal} \begin{split} \text{minimize} &\quad \sum_{\{i:S_i = 1\}} \gamma_i\log{\gamma_i} - \gamma_i\\ \text{subject to} &\quad \sum_{\{i:S_i = 1\}} \mathbf{A}_{i}\gamma_i = \mathbf{B} \end{split} \end{equation} Note that the criterion distance in (\ref{primal}) is the entropy function, hence the ``entropy'' in entropy balancing. The constraints in (\ref{primal}), represented by the vectors $\mathbf{A}_i$ and $\mathbf{B}$, can be constructed to satisfy some moment balancing conditions, hence the ``balancing'' aspect of entropy balancing. For example, we can set $\mathbf{A}_{i} = \mathbf{c}(\mathbf{X}_i)$ and $\mathbf{B} = \hat{\boldsymbol{\theta}}_0$ so that the weighted sample moments of $\mathbf{c}(\mathbf{X}_i)$ for all $i \in \{i:S_i = 1\}$ are equal to the sample moments of $\mathbf{c}(\mathbf{X}_i)$ for all $i \in \{i:S_i = 0\}$. This specific choice of $\mathbf{A}_i$ and $\mathbf{B}$ is the primal problem for the previously proposed method of moments estimator.\cite{signorovitch_comparative_2010} Entropy balancing and the method of moments estimator for evaluating indirect comparisons are often conflated because of the different Lagrangian dual solutions one can arrive at when solving (\ref{primal}); we return to one of these later in this section. Nevertheless, due to the strict convexity of the criterion function, the solution to (\ref{primal}) is unique and hence the dual solution must also be unique.\cite{josey_framework_2020} This result was also made explicit specifically when the convex criterion function is the entropy function.\cite{phillippo_equivalence_2020} The dual solution for the method of moments estimator only requires the target sample moments of the covariates. For this balancing solution, denote $\tilde{\mathbf{c}}(\mathbf{X}_i) = \left(2Z_i - 1, \mathbf{c}(\mathbf{X}_i)^T\right)^T$ and $\tilde{\boldsymbol{\theta}}_0 = \left(0, \hat{\boldsymbol{\theta}}^T_0\right)^T$. The method of moments estimator first solves the Lagrangian dual problem \begin{equation}\label{MOM} \hat{\boldsymbol{\lambda}} = \argmax_{\boldsymbol{\lambda} \in \Re^{m + 1}} \ \sum_{\{i:S_i = 1\}} \left[ -\exp\left(-\tilde{\mathbf{c}}(\mathbf{X}_i)^T\boldsymbol{\lambda} \right) - \tilde{\boldsymbol{\theta}}^{T}_0 \boldsymbol{\lambda} \right], \end{equation} which in turn is used to estimate the sampling weights, $\hat{\gamma}^{\text{MOM}}_i = \exp\left[-\tilde{\mathbf{c}}(\mathbf{X}_i)^T \hat{\boldsymbol{\lambda}} \right]$ for all $i \in \{i:S_i = 1\}$.
We can then use a Horvitz-Thompson type estimator similar to the inverse odds of sampling weights to estimate $\tau_{\text{TATE}}$, \[ \hat{\tau}_{\text{MOM}} = \sum_{\{i:S_i = 1\}} \frac{\hat{\gamma}^{\text{MOM}}_i(2Z_i - 1)Y_i}{\sum_{\{i:S_i = 1\}} \hat{\gamma}^{\text{MOM}}_iZ_i}. \] In Signorovitch \textit{et al}., Assumptions \ref{exchange}-\ref{sita}, along with Assumption \ref{odds}, are used to establish the consistency of $\hat{\tau}_{\text{MOM}}$ for $\tau_{\text{TATE}}$.\cite{signorovitch_comparative_2010} More recent work can be adapted to show that this estimator is also consistent when Assumption \ref{linear} holds, thus achieving the doubly-robust property.\cite{dong_integrative_2020} Our proposed adaptation to this entropy balancing solution instead sets $\mathbf{A}_{i} = \left(\mathbf{A}_{i0}^T, \mathbf{A}_{i1}^T\right)^T$ with $\mathbf{A}_{i0} = (1 - Z_i) \mathbf{c}(\mathbf{X}_i)$ and $\mathbf{A}_{i1} = Z_i\mathbf{c}(\mathbf{X}_i)$ while $\mathbf{B} = \left(\hat{\boldsymbol{\theta}}^T_0, \hat{\boldsymbol{\theta}}^T_{0}\right)^T$ to solve (\ref{primal}) using the following separable Lagrangian dual problem, \begin{equation}\label{dual} \begin{split} \hat{\boldsymbol{\lambda}}_0 &= \argmax_{\boldsymbol{\lambda} \in \Re^{m}} \ \sum_{\{i:S_i = 1\}} \left\{ -\exp\left[-(1 - Z_i) \mathbf{c}(\mathbf{X}_i)^T \boldsymbol{\lambda}\right] - \hat{\boldsymbol{\theta}}^T_0 \boldsymbol{\lambda} \right\} \enskip \text{and} \\ \hat{\boldsymbol{\lambda}}_1 &= \argmax_{\boldsymbol{\lambda} \in \Re^{m}} \ \sum_{\{i:S_i = 1\}} \left[ -\exp\left(-Z_i\mathbf{c}(\mathbf{X}_i)^T \boldsymbol{\lambda}\right) - \hat{\boldsymbol{\theta}}^{T}_0 \boldsymbol{\lambda} \right]. \end{split} \end{equation} The empirical sampling weights are subsequently found with \begin{equation}\label{weights} \hat{\gamma}^{\text{CAL}}_i = \exp\left[-(1 - Z_i) \mathbf{c}(\mathbf{X}_i)^T \hat{\boldsymbol{\lambda}}_{0} -Z_i\mathbf{c}(\mathbf{X}_i)^T \hat{\boldsymbol{\lambda}}_{1}\right] \ \text{for all} \ i \in \ \{i:S_i = 1\} . \end{equation} The estimator for $\tau_{\text{TATE}}$ using these estimated sampling weights is the same Horvitz-Thompson type estimator used by both the MOM and the IOSW approaches, \begin{equation}\label{HT} \hat{\tau}_{\text{CAL}} = \sum_{\{i:S_i = 1\}} \frac{\hat{\gamma}^{\text{CAL}}_i(2Z_i - 1)Y_i}{\sum_{\{i:S_i = 1\}}\hat{\gamma}^{\text{CAL}}_iZ_i}. \end{equation} Notice that the covariate distributions are balanced both between treatment groups and between the target sample and the trial participants. This is in contrast with the MOM estimator, which only balances the covariate distribution between the target sample and the trial participants. This alteration to $\hat{\tau}_{\text{MOM}}$ remains doubly-robust for estimating $\tau_{\text{TATE}}$ given either Assumption \ref{linear} or \ref{odds}. The double-robustness property of $\hat{\tau}_{\text{CAL}}$ is examined more closely in the Supplementary Material. The alteration to the MOM estimator is also motivated by the equivalence of (\ref{dual})-(\ref{HT}) to the exponential tilting estimator implicitly proposed by Chan \textit{et al}., which is why we refer to it as the calibration (CAL) estimator.\cite{chan_globally_2016}
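As a sketch of how the separable dual problems in (\ref{dual}), the weights in (\ref{weights}), and the estimator in (\ref{HT}) might be computed, the following \texttt{R} fragment maximizes each dual objective with \texttt{optim}; the object names are ours, with \texttt{Cmat} holding $\mathbf{c}(\mathbf{X}_i)$ for the trial sample, \texttt{Z} and \texttt{Y} the trial treatment assignments and outcomes, and \texttt{theta0} the target sample moments $\hat{\boldsymbol{\theta}}_0$.
\begin{verbatim}
dual_obj <- function(lambda, a) {
  # a = (1 - Z) for lambda_0 and a = Z for lambda_1, as in the separable dual
  sum(-exp(-a * drop(Cmat %*% lambda))) - length(a) * sum(theta0 * lambda)
}
lam0 <- optim(rep(0, ncol(Cmat)), dual_obj, a = 1 - Z, method = "BFGS",
              control = list(fnscale = -1, maxit = 1000))$par
lam1 <- optim(rep(0, ncol(Cmat)), dual_obj, a = Z, method = "BFGS",
              control = list(fnscale = -1, maxit = 1000))$par
gamma_cal <- exp(-(1 - Z) * drop(Cmat %*% lam0) - Z * drop(Cmat %*% lam1))
tau_cal   <- sum(gamma_cal * (2 * Z - 1) * Y) / sum(gamma_cal * Z)
\end{verbatim}
In practice a Newton-type solver with the analytic gradient would be preferable to this bare-bones call, and convergence should always be checked, a point we return to in Section \ref{sim_1}.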
Recall that $\tau_{\text{TATE}}$ is a special case of a weighted average treatment effect. According to Theorem 3 of Chan \textit{et al}., if we can uniformly approximate $\rho(\mathbf{X}_i)$, $\mu_1(\mathbf{X}_i)$, and $\mu_0(\mathbf{X}_i)$ using a sufficiently rich basis represented by the balance functions $c_1(\mathbf{X}_i), c_2(\mathbf{X}_i), \ldots, c_m(\mathbf{X}_i)$ (i.e., the number of balance functions $m$ increases with $n$) while assuming mild conditions about the data generating processes, then the estimate of $\tau_{\text{TATE}}$ with $\hat{\tau}_{\text{CAL}}$ will achieve the efficiency bound in (\ref{semi}).\cite{chan_globally_2016} This efficiency property is not shared by the method of moments version of entropy balancing, further motivating the calibration version.\cite{dong_integrative_2020} Additional details on how to estimate the variance appear in the Supplementary Material. There are a few reasons why we use the relative entropy over other criterion distance functions for transporting experimental results. The first is the resemblance of (\ref{weights}) to the inverse odds of sampling prescribed under Assumption \ref{odds}. This relationship has been noted in several other articles.\cite{signorovitch_comparative_2010, zhao_entropy_2017} Another reason for using the relative entropy is the guarantee that the estimated sampling weights will always be positive. Another suggestion might be to construct a Lagrangian dual using the Euclidean distance as the criterion function to get a different version of (\ref{primal}) and (\ref{dual}). However, the support for the Euclidean distance is the real numbers, implying that negative weights are feasible in such a setup. Adding the necessary constraint that $\gamma_i > 0$ for all $i = 1,2,\ldots,n$ makes the optimization problems in (\ref{primal}) and (\ref{dual}) less straightforward to solve. Now consider the setting where we are provided only the sample covariate moments from the target sample. Assuming that $\hat{\boldsymbol{\theta}}_0$ is fixed results in an inflated Type I error rate for inferences about $\tau_{\text{TATE}}$. The one exception to this rule is when $\hat{\boldsymbol{\theta}}_0 = \boldsymbol{\theta}_0$ with zero variability. In other words, we would need to compute $\hat{\boldsymbol{\theta}}_0$ over the entire target population. If we are provided individual-level covariate data from the target sample, then we may derive a variance estimator for estimates of $\tau_{\text{TATE}}$ as opposed to $\tau_{\text{TSATE}}$. Despite this shortfall, the estimators (\ref{dual})-(\ref{HT}) remain consistent for $\tau_{\text{TATE}}$ in either setting. The same rule applies for both the OM approach and the MOM estimator, since neither of these methods necessarily requires the complete individual-level covariate data. A more concrete demonstration of this phenomenon is shown in Section \ref{sim_2}. \section{Numerical Examples}\label{simulation} \subsection{Simulation Study}\label{sim_1} In this section we present a simulation study to better understand the performance of entropy balancing techniques compared with the alternative methods illustrated in Section \ref{previous}. We consider six experimental scenarios that test the consistency and efficiency of the estimators in finite samples by altering the data generating processes. All of these scenarios satisfy the conditional constancy of absolute effects assumption defined in (\ref{absolute}) to ensure compatibility among all of the methods under consideration. The first scenario establishes a baseline.
For $i = 1,2,\ldots,n$, let $(X_{i0}|S_i = 0) \sim \mathcal{N}(-1, 4)$, $(X_{i1}|S_i = 0) \sim \text{Bin}(1, 0.6)$, $(X_{i2}|S_i = 0) \sim \mathcal{N}(0, 1)$, and $(X_{i3}|S_i = 0) \sim \text{Bin}(1, 0.5)$. Let $(X_{i0}|S_i = 1) \sim \mathcal{N}(1, 4)$, $(X_{i1}|S_i = 1) \sim \text{Bin}(1, 0.4)$, $(X_{i2}|S_i = 1) \sim \mathcal{N}(0, 1)$, and $(X_{i3}|S_i = 1) \sim \text{Bin}(1, 0.5)$. We generate the treatment assignment by sampling $Z_i \sim \text{Bin}(1, 0.5)$. The conditional means of the potential outcomes are constructed as \begin{equation}\label{mu} \begin{split} \mu_0(\mathbf{X}_i) &= 10 - 3X_{i0} - X_{i1} + X_{i2} + 3X_{i3} \enskip \text{and} \\ \mu_1(\mathbf{X}_i) &= \mu_0(\mathbf{X}_i) + 5 + 3X_{i0} - X_{i1} + X_{i2} - 3X_{i3}. \end{split} \end{equation} Gaussian potential outcomes for each experimental scenario are generated by sampling $Y_{i}(0) \sim \mathcal{N}[\mu_0(\mathbf{X}_i), \sigma^2]$ and $Y_{i}(1) \sim \mathcal{N}[\mu_1(\mathbf{X}_i), \sigma^2]$, with the observed outcome determined by $Y_i = Z_iY_i(1) + (1 - Z_i)Y_i(0)$ for each $i = 1,2,\ldots,n$. We discard the $n_0$ values of $Y_i$ and $Z_i$ for all $i \in \{i:S_i = 0\}$. We will refer to this set of conditions with the label ``baseline''. Unless stated otherwise, $S_i$, $\mathbf{X}_i = (X_{i0}, X_{i1}, X_{i2}, X_{i3})$, $Y_i$, and $Z_i$ are provided to estimate $\hat{\gamma}_i^{\text{PS}}$, $\hat{\gamma}_i^{\text{MOM}}$, $\hat{\gamma}_i^{\text{CAL}}$, $\hat{\mu}_1(\mathbf{X}_i)$, $\hat{\mu}_0(\mathbf{X}_i)$, $\tilde{\mu}_1(\mathbf{X}_i)$, and $\tilde{\mu}_0(\mathbf{X}_i)$, which are required by the estimators described in Sections \ref{previous} and \ref{entropy}. In the scenarios labeled ``positivity'', we increase the difference between the two covariate distributions by substituting $(X_{i0}|S_i = 0) \sim \mathcal{N}(1, 2)$, $(X_{i0}|S_i = 1) \sim \mathcal{N}(-1, 1)$, $(X_{i1}|S_i = 0) \sim \text{Bin}(1, 0.2)$, and $(X_{i1}|S_i = 1) \sim \text{Bin}(1, 0.8)$ for the respective covariates into the data generating mechanisms. This alteration will test the sensitivity of each method to the intrinsic limitations associated with Assumption \ref{positivity}. For the scenario labeled ``sparse'', we provide each method an additional set of covariates that do not affect the responses. The potential outcomes are still determined from (\ref{mu}) with the original covariate values, yet the different estimators must also accommodate the additional noise covariates of $(X_{ir}|S_i = 0) \sim (X_{i(r-4)}|S_i = 1)$ and $(X_{ir}|S_i = 1) \sim (X_{i(r-4)}|S_i = 0)$ for $r \in \{4,5,6,7\}$. Each of the estimators is provided data for $(X_{i0}, X_{i1}, \ldots, X_{i7})$ in addition to $Y_i$, $Z_i$, and $S_i$. In the scenarios labeled ``missing'', we generate the outcomes according to (\ref{mu}) yet we provide each method only $(X_{i0}, X_{i2}, X_{i3})$ and omit $X_{i1}$ while estimating $\tau_{\text{TATE}}$. Note that this means we omit one of the effect modifiers, $X_{i1}$. Next, we formulate scenarios which misspecify the outcome model (``outcome''). To do so, we generate outcomes according to the model \begin{equation}\label{mu_int} \begin{split} \mu_0(\mathbf{X}_i) &= 10 - 3U_{i0} - X_{i1} + U_{i2} + 3X_{i3} \enskip \text{and} \\ \mu_1(\mathbf{X}_i) &= \mu_0(\mathbf{X}_i) + 5 + 3U_{i0} - X_{i1} + U_{i2} - 3X_{i3} \end{split} \end{equation} where $U_{i0} = \exp(-X_{i0}/4 + X_{i2}/4)$ and $U_{i2} = (X_{i0}/2 - X_{i2}/2)^2$.
Both $U_{i0}$ and $U_{i2}$ are standardized to have mean 0 and variance 1 in the combined trial and target samples. We then provide each method the original covariate values $(X_{i0}, X_{i1},X_{i2},X_{i3})$ for all $i = 1,2,\ldots,n$. On the other hand, in the sampling misspecification scenario (``sampling''), we provide each method data for $(U_{i0}, X_{i1}, U_{i2}, X_{i3})$ while the outcomes are still generated by the model in (\ref{mu_int}). The standardization step is key to ensure that the true magnitude of the differences between the sample covariate distributions is never fully expressed by the sampling model. In addition to varying the scenarios that test violations of the assumptions in Section \ref{assumptions}, we also vary $n_0 \in \{500,1000\}$ and $n_1 \in \{500,1000\}$, creating $24$ different conditions for which we will generate $1000$ replications. We report the average bias and empirical mean squared error of the average treatment effect estimates across the $1000$ iterations for each scenario. The average model and empirical standard errors are provided in Tables S1 and S2 in the Supplementary Materials. The model standard errors are obtained using a sandwich variance estimator for each estimate in every iteration of the simulation. The empirical standard errors are the standard deviations of the estimates from each estimator pooled across the iterations of a given scenario. The six methods we compare for estimating the target average treatment effect are: Inverse Odds of Sampling Weights (IOSW), G-Computation (OM), Augmented Inverse Odds of Sampling Weights (DR), Targeted Maximum Likelihood Estimation (TMLE), Method of Moments (MOM), and Calibration (CAL). Additionally, for IOSW, OM, DR, and TMLE, both $\hat{\mu}_1(\mathbf{X}_i)$ and $\hat{\mu}_0(\mathbf{X}_i)$ are fit by regressing $Y_i$ onto the covariates provided in each scenario with data from $S_i = 1$ and stratified by $Z_i$. $\hat{\rho}(\mathbf{X}_i)$ is fit with logistic regression using covariates that predict $S_i$. The standard errors are estimated using robust sandwich variance estimators.
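For concreteness, a minimal \texttt{R} sketch of one replication of the baseline data-generating process might look as follows; the variable names are ours, and we assume the second parameter of $\mathcal{N}(\cdot,\cdot)$ above denotes a variance.
\begin{verbatim}
n0 <- 500; n1 <- 1000; sigma <- 1
S  <- rep(c(0, 1), c(n0, n1))
X0 <- rnorm(n0 + n1, mean = ifelse(S == 1, 1, -1), sd = 2)  # variance 4
X1 <- rbinom(n0 + n1, 1, ifelse(S == 1, 0.4, 0.6))
X2 <- rnorm(n0 + n1)
X3 <- rbinom(n0 + n1, 1, 0.5)
Z  <- rbinom(n0 + n1, 1, 0.5)
mu0 <- 10 - 3 * X0 - X1 + X2 + 3 * X3
mu1 <- mu0 + 5 + 3 * X0 - X1 + X2 - 3 * X3
Y  <- Z * rnorm(n0 + n1, mu1, sigma) + (1 - Z) * rnorm(n0 + n1, mu0, sigma)
Y[S == 0] <- NA; Z[S == 0] <- NA   # outcomes and assignments discarded off-trial
dat <- data.frame(S, Z, Y, X0, X1, X2, X3)
\end{verbatim}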
For the non-entropy balancing type estimators, \texttt{R} code for finding variance estimates is provided in the existing literature.\cite{rudolph_robust_2017, dahabreh_extending_2020} \begin{table} \centering \scriptsize \begin{tabular}{cccccccccc} \hline $n_0$ & $n_1$ & Scenario & $\tau_{\text{TATE}}$ & IOSW & OM & DR & TMLE & MOM & CAL \\ \hline
500 & 1000 & baseline & -0.1 & 0.03 (0.47) & 0.00 (0.09) & 0.00 (0.10) & 0.00 (0.10) & 0.00 (0.33) & 0.00 (0.10) \\
500 & 1000 & missing & -0.1 & 0.23 (0.48) & 0.19 (0.16) & 0.19 (0.17) & 0.19 (0.17) & 0.19 (0.36) & 0.19 (0.17) \\
500 & 1000 & outcome & 8.9 & -0.03 (0.41) & -1.44 (2.23) & -0.08 (0.72) & -0.03 (0.26) & -0.05 (0.24) & -0.08 (0.20) \\
500 & 1000 & positivity & -0.3 & 0.27 (1.51) & -0.01 (0.08) & -0.01 (0.19) & -0.01 (0.38) & 0.00 (0.81) & -0.01 (0.19) \\
500 & 1000 & sample & 4.5 & 0.64 (3.61) & 0.00 (0.03) & 0.00 (0.05) & -0.02 (0.59) & 0.00 (0.06) & 0.00 (0.04) \\
500 & 1000 & sparse & -0.1 & 0.08 (1.18) & -0.01 (0.11) & -0.01 (0.16) & -0.02 (0.20) & -0.03 (0.63) & -0.01 (0.16) \\
1000 & 200 & baseline & -0.1 & 0.15 (2.04) & 0.00 (0.14) & 0.00 (0.18) & -0.01 (0.18) & -0.02 (1.18) & -0.01 (0.18) \\
1000 & 200 & missing & -0.1 & 0.30 (1.94) & 0.18 (0.27) & 0.20 (0.33) & 0.20 (0.33) & 0.19 (1.07) & 0.20 (0.33) \\
1000 & 200 & outcome & 8.9 & -0.10 (1.66) & -1.45 (2.87) & -0.22 (2.87) & -0.15 (1.14) & -0.15 (0.87) & -0.26 (0.76) \\
1000 & 200 & positivity & -0.3 & 0.91 (4.80) & 0.03 (0.24) & 0.05 (0.49) & 0.05 (3.10) & 0.11 (3.17) & 0.08 (0.62) \\
1000 & 200 & sample & 3.2 & 0.37 (2.63) & -0.01 (0.10) & -0.01 (0.14) & 0.07 (1.57) & 0.00 (0.20) & -0.01 (0.13) \\
1000 & 200 & sparse & -0.1 & 0.31 (4.14) & -0.01 (0.20) & -0.01 (0.37) & -0.01 (0.84) & -0.05 (2.67) & 0.00 (0.41) \\
1000 & 1000 & baseline & -0.1 & 0.01 (0.48) & 0.00 (0.06) & 0.00 (0.07) & 0.00 (0.07) & -0.02 (0.29) & 0.00 (0.07) \\
1000 & 1000 & missing & -0.1 & 0.21 (0.46) & 0.19 (0.11) & 0.19 (0.13) & 0.19 (0.13) & 0.19 (0.30) & 0.19 (0.13) \\
1000 & 1000 & outcome & 8.9 & -0.03 (0.43) & -1.41 (2.14) & -0.06 (0.75) & -0.03 (0.23) & -0.05 (0.23) & -0.08 (0.20) \\
1000 & 1000 & positivity & -0.3 & 0.19 (1.74) & 0.00 (0.06) & 0.00 (0.16) & -0.01 (0.31) & -0.01 (0.81) & 0.00 (0.16) \\
1000 & 1000 & sample & 4.0 & 0.78 (4.22) & 0.00 (0.02) & -0.01 (0.04) & -0.12 (1.89) & -0.01 (0.04) & -0.01 (0.03) \\
1000 & 1000 & sparse & -0.1 & 0.12 (1.03) & -0.01 (0.07) & -0.01 (0.12) & -0.01 (0.14) & 0.00 (0.57) & -0.01 (0.11) \\
\hline \end{tabular} \caption{Average bias and (mean squared error) of $\tau_{\text{TATE}}$ estimates. The scenarios where $n_1 = 200$ and $n_0 = 500$ are detailed in Figure \ref{ATE-plot}.}\label{table-1} \end{table} The average bias and mean squared errors of the experiment are summarized in Table \ref{table-1}. A visual comparison for a subset of the results featured in Table \ref{table-1}, where $n_1 = 200$ and $n_0 = 500$, appears in the boxplots of Figure \ref{ATE-plot}. Each method produces consistent estimates under the baseline scenario. However, each method also has its shortcomings. First, we can see that IOSW produces highly variable estimates in cases where the positivity assumption (Assumption \ref{positivity}) is practically violated and is biased when the sampling model is misspecified. On the other hand, the OM approach is biased when the outcome model is misspecified. TMLE, DR, MOM, and CAL all appear to produce unbiased estimates of the average treatment effect in every scenario.
This is interesting for the MOM estimator since this would imply that it is also doubly-robust in terms of consistency. Some insight into why this might be is provided elsewhere.\cite{dong_integrative_2020} However, we can see in Table \ref{table-1} that CAL had either the same or smaller mean squared errors than TMLE and MOM. In some scenarios DR did have smaller errors than CAL. Nevertheless, the OM approach had the smallest mean squared errors across most scenarios, other than, naturally, in the scenarios where the outcome model is misspecified. When the models miss (or ignore) some of the effect modifiers, we see that every method we test produces biased estimates of the target population average treatment effect. This particular scenario illustrates how these estimators behave when both the outcome and the sampling models are misspecified, even by a single missing effect modifier. \begin{figure} \caption{Estimates of the target population average treatment effects over the 1000 iterations of the simulation study described in Section \ref{sim_1}. The dashed line demarcates the true target population average treatment effect for each scenario while the x is the average of the estimates. These estimates are drawn from cases when $n_1 = 200$ and $n_0 = 500$.} \label{ATE-plot} \end{figure} There is a downside to the so-called calibration version of entropy balancing. In the sparse and positivity violation scenarios, the number of models that converge decreases considerably. When $n_1 = 200$ and $n_0 = 500$, CAL was only able to find a solution in 64.2\% of the iterations under the sparse scenario and 49.0\% of the iterations when positivity is practically violated. When $n_1 = 200$ and $n_0 = 1000$ we observe a 66.0\% and 48.1\% rate of convergence in the sparse and positivity scenarios, respectively. Otherwise, the calibration approach to entropy balancing converged in each iteration for every other scenario. Meanwhile, the method of moments approach to entropy balancing also failed to converge in approximately 7.5\% of the iterations in both of the positivity scenarios where the calibration estimator often failed to converge. \subsection{Coverage Probabilities of $\tau_{\text{TATE}}$ and $\tau_{\text{TSATE}}$}\label{sim_2} Consider the baseline scenario in the previous set of simulations. Using the individual-level data from the trial sample and the target sample covariate moments, we demonstrate how inferences for $\tau_{\text{TATE}}$ will have an inflated Type I error rate. We do so by finding the empirical coverage probability of both $\tau_{\text{TSATE}}$ and $\tau_{\text{TATE}}$ with both of the entropy balancing approaches described in Section \ref{entropy}. Robust sandwich variance estimators are used to construct the confidence intervals. The coverage probability is obtained by averaging, over iterations, the indicator of whether the resulting confidence interval about the average treatment effect estimate covers $\tau_{\text{TSATE}}$ or $\tau_{\text{TATE}}$. This demonstrates why entropy balancing can only be used to infer upon the target sample average treatment effect, rather than the target population average treatment effect, unless individual-level data on the balance functions are available for both the target and trial samples. For this simulation experiment, we set the target and trial sample sizes at $n_0 \in \{500,1000,10000\}$ and $n_1 \in \{1000,10000\}$, respectively. We use large sample sizes to ensure the accuracy of the robust variance estimator.
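A rough sketch of how the empirical coverage might be tallied for one estimator is given below; \texttt{est} and \texttt{se} are hypothetical vectors holding the estimate and sandwich standard error from each replication, \texttt{tau\_tsate} the replication-specific target sample effects, and \texttt{tau\_tate} the fixed population effect.
\begin{verbatim}
lower <- est - qnorm(0.975) * se
upper <- est + qnorm(0.975) * se
coverage_tsate <- mean(lower <= tau_tsate & tau_tsate <= upper)
coverage_tate  <- mean(lower <= tau_tate  & tau_tate  <= upper)
\end{verbatim}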
\begin{table} \centering \scriptsize \begin{tabular}{cccccccccc} \hline \multirow{2}{*}{$n_0$} & \multirow{2}{*}{$n_1$} & & \multicolumn{4}{c}{Without Individual Level Data} & & \multicolumn{2}{c}{With Individual Level Data} \\ \cline{4-7} \cline{9-10} & & & MOM $\tau_{\text{TSATE}}$ & MOM $\tau_{\text{TATE}}$ & CAL $\tau_{\text{TSATE}}$ & CAL $\tau_{\text{TATE}}$ & & MOM $\tau_{\text{TATE}}$ & CAL $\tau_{\text{TATE}}$ \\ \hline
500 & 1000 & & 0.926 & 0.882 & 0.929 & 0.623 & & 0.937 & 0.943 \\
500 & 10000 & & 0.956 & 0.667 & 0.936 & 0.289 & & 0.948 & 0.960 \\
1000 & 1000 & & 0.922 & 0.897 & 0.938 & 0.761 & & 0.928 & 0.951 \\
1000 & 10000 & & 0.929 & 0.770 & 0.938 & 0.372 & & 0.945 & 0.942 \\
10000 & 1000 & & 0.928 & 0.923 & 0.930 & 0.902 & & 0.923 & 0.929 \\
10000 & 10000 & & 0.930 & 0.912 & 0.960 & 0.791 & & 0.935 & 0.958 \\
\hline \end{tabular} \caption{Coverage probabilities of $\tau_{\text{TSATE}}$ and $\tau_{\text{TATE}}$ using the two entropy balancing approaches.}\label{table-2} \end{table} The results in Table \ref{table-2} show how modifying $n_1$ and $n_0$ affects the coverage probabilities for $\tau_{\text{TSATE}}$ and $\tau_{\text{TATE}}$ in the setting where we are given only the target sample covariate moments. Observe that the coverage probability of $\tau_{\text{TSATE}}$ depends on $n_1$ alone: as $n_1$ increases, the coverage probabilities increase. The coverage probability of $\tau_{\text{TATE}}$, on the other hand, appears to depend on the ratio between $n_0$ and $n_1$. For inference on $\tau_{\text{TATE}}$, we see the best results occur when $n_1$ is small relative to $n_0$. When $n_1 = 1000$ and $n_0 = 10000$, the variation of $\hat{\boldsymbol{\theta}}_0$ has less impact on the total variance, producing the best results. In contrast, when $n_1 = 10000$ and $n_0 = 10000$, the variation of $\hat{\boldsymbol{\theta}}_0$ has a greater impact, resulting in a decreased probability of coverage. This observation is only compounded in cases where $n_1 > n_0$. This leads us to believe that $n_1$ needs to be sufficiently large while also remaining small compared to $n_0$ in order to be effective for inferring on $\tau_{\text{TATE}}$. When we adjust the sandwich estimator to incorporate individual-level covariate data from the target sample, we see that the accuracy of the coverage probability is now tied to the total sample size $n = n_0 + n_1$, which is typical for robust variance estimators as they are derived under asymptotic conditions. \section{Transporting Results of the ACCORD-BP Study to the US Population}\label{illustrative} Translating clinical trial results to clinical care is particularly challenging when the results of two studies conducted for similar indications and treatments yield conflicting conclusions. For example, the optimal approach to hypertension treatment remains unclear, partly due to conflicting clinical trial results. The Systolic Blood Pressure Intervention Trial (SPRINT) and the Action to Control Cardiovascular Risk in Diabetes Blood Pressure (ACCORD-BP) trial both randomized participants with hypertension to intensive ($<120$ mmHg) or conventional ($<140$ mmHg) blood pressure control targets. The study populations differed in that ACCORD-BP was limited to diabetes patients while SPRINT excluded diabetes patients.
The two similarly designed studies in differing populations had different results: SPRINT, but not ACCORD-BP, found an association of intensive blood pressure control with several clinically meaningful outcomes, including cardiovascular disease events.\cite{the_accord_study_group_effects_2010, the_sprint_study_group_randomized_2015} Importantly, the ACCORD-BP trial was enriched for individuals at high cardiovascular disease risk aside from the presence of diabetes, raising the question of whether the result of the trial applies to a more general diabetes patient population. Thus, transporting the ACCORD-BP trial to the general US population of diabetes patients may provide insight into hypertension management for individuals with diabetes and help reconcile the discrepant trial results. To address this question, Berkowitz \textit{et al}. used inverse odds of sampling weights (IOSW) to transport the ACCORD-BP trial to a sample of US diabetes patients drawn from the US National Health and Nutrition Examination Survey (NHANES).\cite{berkowitz_generalizing_2018} They found that weighting the ACCORD-BP sample to reflect the diabetes patient sample in NHANES yielded intervention effects more similar to those observed in the SPRINT trial of non-diabetes patients than in the unweighted ACCORD-BP trial. We revisit this previously demonstrated application of transportability methods to ACCORD-BP as a real-world illustration of the entropy balancing (CAL) methods described here. In our applied example, we transport four-year post-randomization risk difference estimates of total mortality observed in the ACCORD-BP trial\cite{the_accord_study_group_effects_2010} to a sample of US diabetes patients drawn from the NHANES cohort.\cite{berkowitz_generalizing_2018} We use two methods for transporting the results of ACCORD-BP to NHANES: IOSW and entropy balancing (CAL). Furthermore, using entropy balancing we provide confidence intervals about both the target sample average treatment effect and the target population average treatment effect. Recall that the former estimand does not require any individual-level data from the NHANES sample.
\begin{table} \centering \scriptsize \begin{tabular}{lcccc} \hline Variables & NHANES & ACCORD-BP & IOSW ACCORD-BP & CAL ACCORD-BP \\ \hline
Baseline age, yrs & $59.65 \pm 13.70$ & $62.84 \pm 6.74$ & $61.50 \pm 6.66$ & $59.61 \pm 6.91$ \\
Female & $48.9$ & $47.1$ & $48.0$ & $48.9$ \\
Race/Ethnicity & & & & \\
\multicolumn{1}{r}{Non-Hispanic white} & $62.6$ & $58.7$ & $51.6$ & $62.5$ \\
\multicolumn{1}{r}{Non-Hispanic black} & $15.2$ & $24.0$ & $19.6$ & $15.2$ \\
\multicolumn{1}{r}{Hispanic} & $15.2$ & $6.8$ & $22.3$ & $15.2$ \\
\multicolumn{1}{r}{Asian/multi/other} & $7.0$ & $10.5$ & $6.5$ & $7.1$ \\
Insurance & $86.8$ & $85.0$ & $84.6$ & $86.6$ \\
Smoking status & & & & \\
\multicolumn{1}{r}{Never} & $51.4$ & $44.6$ & $52.9$ & $51.4$ \\
\multicolumn{1}{r}{Former} & $33.1$ & $42.6$ & $30.9$ & $33.1$ \\
\multicolumn{1}{r}{Current} & $15.5$ & $12.8$ & $16.2$ & $15.5$ \\
Education & & & & \\
\multicolumn{1}{r}{Less than HS} & $25.7$ & $16.3$ & $30.9$ & $25.7$ \\
\multicolumn{1}{r}{HS diploma} & $27.1$ & $27.0$ & $26.4$ & $27.1$ \\
\multicolumn{1}{r}{Some college} & $29.3$ & $32.4$ & $26.5$ & $29.3$ \\
\multicolumn{1}{r}{College diploma or higher} & $17.9$ & $24.3$ & $16.2$ & $17.9$ \\
History of CHF & $7.7$ & $4.2$ & $11.0$ & $7.7$ \\
History of MI & $10.5$ & $13.6$ & $11.4$ & $10.5$ \\
History of stroke & $7.9$ & $6.4$ & $7.5$ & $7.8$ \\
Years with diabetes & $7.49 \pm 9.20$ & $10.88 \pm 7.83$ & $10.05 \pm 7.26$ & $7.50 \pm 6.51$ \\
BMI, kg/$\text{m}^2$ & $32.80 \pm 7.31$ & $32.10 \pm 5.47$ & $32.07 \pm 5.52$ & $32.80 \pm 5.78$ \\
SBP, mm Hg & $130.05 \pm 19.15$ & $139.33 \pm 15.61$ & $133.94 \pm 14.42$ & $129.67 \pm 13.98$ \\
DBP, mm Hg & $69.50 \pm 12.96$ & $75.86 \pm 10.28$ & $71.53 \pm 9.57$ & $69.44 \pm 9.55$ \\
HDL, mg/dl & $49.11 \pm 13.46$ & $46.049 \pm 13.68$ & $51.60 \pm 17.24$ & $49.08 \pm 17.72$ \\
LDL, mg/dl & $103.83 \pm 36.03$ & $110.70 \pm 36.52$ & $105.42 \pm 31.33$ & $103.59 \pm 33.31$ \\
Triglycerides, mg/dl & $148.93 \pm 76.13$ & $193.36 \pm 174.21$ & $125.40 \pm 68.01$ & $147.21 \pm 95.52$ \\
FPG, mg/dl & $151.88 \pm 54.62$ & $174.81 \pm 57.66$ & $171.54 \pm 57.47$ & $151.11 \pm 47.30$ \\
$\text{HbA}_{1\text{c}}$, \% & $7.16 \pm 1.64$ & $8.34 \pm 1.09$ & $7.94 \pm 0.95$ & $7.16 \pm 0.75$ \\
Estimated GFR, ml/min & $87.46 \pm 28.11$ & $91.64 \pm 29.83$ & $84.99 \pm 21.16$ & $87.31 \pm 22.66$ \\
Urine albumin to creatinine ratio & $75.44 \pm 481.68$ & $93.84 \pm 333.81$ & $105.89 \pm 427.60$ & $45.32 \pm 204.57$ \\
\hline \end{tabular} \caption{Values are mean $\pm$ SD or \%. Means and percentages for NHANES are nationally representative using NHANES sampling weights.}\label{table-3} \end{table} Table \ref{table-3} shows covariates (which we set as the balance functions) balanced between ACCORD-BP and NHANES, their unweighted sample covariate moments from the NHANES and from the ACCORD-BP data, and the subsequent weighted covariate sample moments of the ACCORD-BP sample after balancing. Compared to ACCORD-BP, the NHANES diabetes sample was younger, more likely to be Hispanic and less likely to be black, more likely to be never smokers, less likely to have a history of myocardial infarction (MI) but more likely to have a history of congestive heart failure (CHF), and had a shorter duration of diabetes and better glycemic control (indicated by hemoglobin A1c; Table \ref{table-3}).
Many of the differences in covariate distributions reflect the fact that the ACCORD trial eligibility criteria focused on those with relatively long duration of diabetes and high prevalence of cardiovascular risk factors. Of note, the intensive blood pressure control intervention had a smaller benefit in individuals with pre-existing cardiovascular disease in the SPRINT trial, making it plausible that differences between the ACCORD-BP population and a general population of diabetes patients might modify the effect of the blood pressure intervention.\cite{the_sprint_study_group_randomized_2015} In another study using data from NHANES, hemoglobin A1c was associated with increased risk of all-cause and cause-specific mortality.\cite{palta_hemoglobin_2017} Zoungas \textit{et al.}\cite{zoungas_impact_2014} showed that diabetes duration is associated with death, while McEwen \textit{et al}.\cite{mcewen_predictors_2012} identified multiple predictors of total mortality, such as race, age, and previous cardiovascular events, among diabetic patients. These previous findings imply that numerous factors have the potential to confound the relationship between sampling and the outcome. The differences in baseline covariates between ACCORD-BP and NHANES are reduced after balancing with both CAL and IOSW. However, the covariate sample moments after CAL weighting consistently matched the NHANES sample more closely than after IOSW weighting (Table \ref{table-3}, Figure \ref{bal-plot}). Small residual differences remain between NHANES and the weighted ACCORD-BP sample, for example with triglycerides and high density lipoproteins (Figure \ref{bal-plot}). \begin{figure} \caption{Absolute standardized mean differences for various weighting estimators between NHANES and ACCORD. The red dotted line demarcates an absolute standardized mean difference of 0.1.} \label{bal-plot} \end{figure} The ACCORD-BP study originally found an increase in four-year mortality of 0.59\% [95\% CI: (-0.75\%, 1.93\%)] in the intensive treatment group. After weighting the ACCORD-BP responses with inverse odds of sampling weights estimated with maximum likelihood, the estimated risk difference on the NHANES population is -1.35\% [95\% CI: (-3.5\%, 0.8\%)]. Using CAL, we observe a risk difference of -0.04\% [95\% CI: (-1.80\%, 1.71\%)], where the confidence interval corresponds to the NHANES sample average treatment effect. The 95\% confidence interval for the NHANES population average treatment effect is (-1.94\%, 1.86\%) when using the individual-level covariate data from the NHANES sample. Though the risk difference for total mortality is not statistically significant at the $0.05$ level for any method, we see changes in the risk difference estimate. The original analysis found an increase in mortality among the intensively treated patients. IOSW weights yielded a decreased total mortality among intensively treated patients in the NHANES population, while CAL weights yielded a nearly null result. These differences seem to indicate the presence of effect modifiers contributing to the effect of blood pressure treatment intensity on mortality. Notice also that the population-level estimate is the same as the sample-level estimate using entropy balancing. However, the confidence interval is wider for the population-level estimate. Nevertheless, the confidence interval for the population-level estimate from the CAL approach is still narrower than that produced by the IOSW approach, indicating an increase in efficiency.
\section{Discussion}\label{discussion} In this article we have described a doubly-robust method, borrowed from the entropy optimization literature, for transporting experimental results. We also borrow results from the indirect comparison literature, which allow us to relax the conditional constancy of absolute effects assumption typically applied in the transportability literature and to focus on modeling relative effects, which require only the effect modifiers, rather than absolute effects, which require both the effect modifiers and any prognostic variables.\cite{rudolph_robust_2017, dahabreh_extending_2020} However, if the sampling model is incorrect, then we would need the conditional constancy of absolute effects assumption to hold in order to obtain consistent estimates from the doubly-robust property. As a result, more emphasis should be placed on correctly specifying the sampling model than the outcome model. The entropy balancing methodology may operate in two settings: in both we are presented with the complete individual-level data of the trial sample, together with either the individual-level covariate data or the covariate sample moments of the target sample. The distinction between the two settings amounts to inferring upon the target population average treatment effect versus the target sample average treatment effect. We showed entropy balancing to be an efficient causal effect estimator in finite samples through simulation. We also compared two methods for transporting the ACCORD-BP study to the NHANES population. These numerical examples demonstrate some of the practical implications of our work. The drawback to using entropy balancing for transportability is the possibility of convergence failure. In small samples, the probability that a feasible weighting solution exists decreases, particularly when the positivity assumption is practically violated. One solution applied to covariate balance problems is to use inequality constraints to mitigate treatment group heterogeneity.\cite{wang_minimal_2020} This solution is most useful in high-dimensional settings. There may also be a way to incorporate the method of moments balancing weights into the TMLE framework by substituting $\hat{\boldsymbol{\gamma}}^{\text{MOM}}$ for $\hat{\boldsymbol{\gamma}}^{\text{PS}}$ in (\ref{offset}). This could eventually set up a targeted maximum likelihood-type estimator that can operate in the setting where we do not have any individual-level data from the target population. In the case where convergence failure occurs due to a high dimension of potential effect modifiers relative to the trial sample size, one should carefully consider balance diagnostics between trial participants and non-participants in much the same way as one identifies potential confounders in an observational study.\cite{brookhart_variable_2006, westreich_role_2011} Additional sensitivity analyses should also be employed to assess whether every effect modifier is accounted for.\cite{nguyen_sensitivity_2018} In the case of positivity violations, it seems that methods that employ more direct implementations of an outcome model, such as the G-computation and DR approaches, fare better given their ability to extrapolate over the covariate space. Violations of Assumptions \ref{exchange} and \ref{sita} pose a more difficult challenge to evaluate as these assumptions are untestable.
Expert-level knowledge of the domain area is necessary to ensure that these assumptions hold with the preferred transportability model. Future work will address two additional data settings not evaluated here. The first is the setting where the target sample contains data from a second randomized experiment, including both the individual-level outcomes and the treatment assignments. The process of combining experiments, termed data fusion, is beyond what we discuss in this paper but is nevertheless an important problem which we would like to approach with entropy balancing in future research. A second direction for future work is to examine methods for transportability between two observational samples, rather than assuming availability of randomized clinical trial data for the trial sample.\cite{josey2020calibration} In this situation, we would also need to model the probability of treatment within the observational study representing the ``trial'' sample. We might also seek to relax Assumptions \ref{linear} and \ref{odds} using a nonparametric setup similar to the sieve approach, applied instead to transportability.\cite{hirano_efficient_2003, chan_globally_2016} Finally, while the average treatment effect estimands under consideration in this manuscript are applicable to various outcomes, including binary outcomes, more work is needed to generalize many of these estimators to accommodate a non-linear link function for the outcome model. In summary, entropy balancing provides an approach to transportability that is flexible regarding the applicable data settings and exhibits double robustness in specific scenarios. In particular, entropy balancing yields more precise effect estimates than alternative methods across a range of simulation scenarios when the target population is large, while using only covariate sample moments from the target population. \section*{Acknowledgments} \noindent\textbf{Funding information:} This research was supported in part by the US Department of Veterans Affairs Award IK2-CX001907-01. S.A. Berkowitz's role in the research reported in this publication was supported by the National Institute of Diabetes and Digestive and Kidney Diseases of the National Institutes of Health under Award Number K23DK109200. D. Ghosh's role in the research reported in this publication was supported by NSF DMS-1914937. \noindent\textbf{Disclaimer:} This manuscript will be submitted to the Department of Biostatistics and Informatics in the Colorado School of Public Health, University of Colorado Anschutz Medical Campus, in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Biostatistics for Kevin P. Josey. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health or the United States Department of Veterans Affairs. \appendix \section{Simulation Code} Code for reproducing the simulation experiment conducted in Section \ref{sim_1} is available at the following address: \texttt{https://github.com/kevjosey/transport-sim}. All data analyzed in this study are publicly available to investigators with human subjects approval via the US National Institutes of Health, National Heart, Lung, and Blood Institute, Biologic Specimen and Data Repository Information Coordinating Center (ACCORD Study, \texttt{https://biolincc.nhlbi.nih.gov/studies/accord/}) or the US National Center for Health Statistics (NHANES, \texttt{https://www.cdc.gov/nchs/nhanes/}).
Statistical code for creating analytic datasets and for performing analyses is available from the authors upon request. \end{document}
\begin{document} \title{ Orthogonal trades in complete sets of MOLS } \author{ Nicholas J. Cavenagh\\ [email protected]\\ Department of Mathematics and Statistics\\ University of Waikato, Private Bag 3105, New Zealand \\ \\ Diane M. Donovan \\ [email protected]\\ Centre for Discrete Mathematics and Computing,\\ School of Mathematics and Physics,\\ University of Queensland, St Lucia 4072 Australia \\ \\ Fatih Demirkale \\ [email protected]\\ Department of Mathematics\\ Y\i ld\i z Technical University\\ Esenler, 34220, \.{I}stanbul, Turkey\\ } \maketitle \begin{abstract} Let $B_p$ be the Latin square given by the addition table for the integers modulo an odd prime $p$. Here we consider the properties of Latin trades in $B_p$ which preserve orthogonality with one of the $p-1$ MOLS given by the finite field construction. We show that for certain choices of the orthogonal mate, there is a lower bound logarithmic in $p$ for the number of times each symbol occurs in such a trade, with an overall lower bound of $(\log{p})^2/\log\log{p}$ for the size of such a trade. Such trades imply the existence of orthomorphisms of the cyclic group which differ from a linear orthomorphism by a small amount. We also show that any transversal in $B_p$ hits the main diagonal either $p$ times or at most $p-\log_2{p}-1$ times. Finally, if $p\equiv 1\mod{6}$, we show the existence of a Latin square containing a $2\times 2$ subsquare which is orthogonal to $B_p$. \end{abstract} \textbf{Keywords:} Orthogonal array, MOLS, trade, orthomorphism, transversal. \section{Introduction and Definitions} Let $p$ be an odd prime. Consider the ``complete'' set of $p-1$ MOLS of order $p$, constructed via the finite field of order $p$. (It is conjectured, but not yet proven, that a complete set of MOLS of order $p$ is unique up to isomorphism.) The problem considered in this paper is how to change a ``small'' number of entries in one of these Latin squares so that it maintains orthogonality with at least one other Latin square in the complete set of MOLS. To this end, for each $k$, $1\leqslant k\leqslant p-1$, define $B_p(k)$ to be the Latin square where the entry in cell $(i,j)$ of $B_p(k)$ is given by $ki+j$, for each $i,j\in {\mathbb Z}_p$. (In the above and throughout this paper, arithmetic is performed modulo $p$ with residues in ${\mathbb Z}_p$ whenever the context makes this clear.) Then it is well-known that $${\mathcal B}_p:=\{B_p(1),B_p(2),\dots ,B_p(p-1)\}$$ is a set of $p-1$ MOLS of order $p$. For convenience we often write $B_p$ instead of $B_p(1)$. The Latin squares $B_7$ and $B_7(3)$ are given in Figure \ref{figg1}. Observe that after each symbol is replaced by its subscript in $B_7$, the Latin squares remain orthogonal. We will refer to this change as an {\em orthogonal trade}. We are interested in determining general properties of orthogonal trades; in particular, lower bounds for the size of an orthogonal trade. \begin{figure} \caption{An orthogonal trade in $B_7$} \label{figg1} \end{figure} Considering a Latin square of order $n$ to be a set of ordered $($row, column, entry$)$ triples (in this paper a subset of $\Bbb{Z}_n\times \Bbb{Z}_n \times\Bbb{Z}_n$), a {\em Latin trade} is a subset $T$ of a Latin square $L$ such that there exists a partially filled-in Latin square $T'$ (called a {\em disjoint mate} of $T$) such that for each $(i,j,k)\in T$ (respectively, $T'$), there exists unique $i'\neq i$, $j'\neq j$ and $k'\neq k$ such that $(i',j,k),(i,j',k)$ and $(i,j,k')\in T'$ (respectively, $T$).
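For example (a standard illustration, not specific to $B_p$), if a Latin square $L$ contains a $2\times 2$ subsquare on rows $r\neq r'$ and columns $c\neq c'$ with symbols $e\neq e'$, then $$T=\{(r,c,e),(r,c',e'),(r',c,e'),(r',c',e)\}$$ is a Latin trade of size $4$ (an {\em intercalate}), with disjoint mate obtained by interchanging the two symbols: $$T'=\{(r,c,e'),(r,c',e),(r',c,e),(r',c',e')\}.$$ As the results of Section 2 show, Latin trades in $B_p$ itself must be larger than this when $p$ is an odd prime.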
It follows that $(L\setminus T)\cup T'$ is a Latin square not equal to $L$. In fact, Latin trades describe differences between Latin squares of the same order; see \cite{cavsurvey} for more details. We define an {\em orthogonal trade} (in ${\mathcal B}_p$) of index $(\ell,k)$ to be a Latin trade $T\subset B_p(\ell)$ such that there exists a disjoint mate $T'$ such that $(B_p(\ell)\setminus T)\cup T'$ is orthogonal to $B_p(k)$. Thus Figure \ref{figg1} gives an example of an orthogonal trade in ${\mathcal B}_p$ of index $(1,3)$. Using symmetries of ${\mathcal B}_p$, we may assume certain properties of an orthogonal trade therein. In this paper, $k^{-1}$ is always taken to be the least non-negative integer representing the congruence class of $k^{-1}$ (mod $p$). \begin{lemma} Let $T$ be an orthogonal trade in ${\mathcal B}_p$ of index $(\ell,k)$. Then we may assume, without loss of generality, that $\ell=1$, $k\leqslant k^{-1}$ and $(0,0,0)\in T$. \label{iceice} \end{lemma} \begin{proof} Let $1\leqslant x\leqslant p-1$. The mapping $\phi:(a,b,ax+b)\rightarrow (a,b/\ell,(ax+b)/\ell)$ maps $B_p(x)$ onto $B_p(x/\ell)$ and thus acts as a bijection on the set ${\mathcal B}_p$. We may thus assume that $\ell=1$. Next, the mapping $\phi':(a,b,ax+b)\rightarrow (b,-a/x,(b-a)/x)$ maps $B_p(x)$ to $B_p(x^{-1})$ (again as part of a bijection on the set ${\mathcal B}_p$), fixing $B_p(1)$ and mapping $B_p(k)$ to $B_p(k^{-1})$. We may thus assume $k\leqslant k^{-1}$. Finally, if $0\leqslant i\leqslant p-1$, the map $\phi'':(a,b,ax+b)\rightarrow (a,b+i,ax+b+i)$ maps each element of ${\mathcal B}_p$ to itself, allowing us to assume that $(0,0,0)\in T$. \end{proof} It is possible, of course, to consider Latin trades which preserve orthogonality within pairs of MOLS that do not necessarily belong to ${\mathcal B}_p$. The spectrum of possible sizes of such Latin trades is explored in \cite{DDSS16}. However for the rest of the paper we assume that any orthogonal trade is always in $B_p$ with the assumptions of the previous lemma. \section{The theory of Latin trades in $B_p$} In this section we give relevant known results and theory of Latin trades in $B_p$, that is, the operation table for the integers modulo $p$, also known as the {\em back circulant Latin square}. Since an orthogonal trade is necessarily also a Latin trade in $B_p$, this theory will be useful in later sections. A {\em trade matrix} $A=[a_{ij}]$ is an $m\times m$ matrix with integer entries such that for all $1\leqslant i,j\leqslant m$: (1) $a_{ii}>0$; (2) $a_{ij}\leqslant 0$ whenever $i\neq j$ and (3) $\sum_{j=1}^m a_{ij}\geqslant 0$. \begin{lemma} {\rm (Lemma 7 of \cite{Cav1})}:\ If $A=[a_{ij}]$ is an $m\times m$ trade matrix, $\det(A)\leqslant \Pi_{i=1}^{m} a_{ii}$. \label{bcc0} \end{lemma} The following lemmas are implied by the theory in \cite{Cav1}. The results therein are expressed in terms of symbols rather than rows; however statements about rows, columns and symbols are equivalent due to equivalences of $B_p$. \begin{lemma} Let $x_1,x_2,\dots ,x_{m},x_{m+1}$ be the non-empty rows of a Latin trade $T$ in $B_p$. Then there exists an $(m+1)\times (m+1)$ trade matrix $A$ such that $AX=B$, where $X=(x_1,x_2, \dots ,x_m,x_{m+1})^T$, $a_{ii}$ gives the number of entries in row $x_i$ of $T$ and $B$ is an $(m+1)\times 1$ vector of integers, each a multiple of $p$. Moreover, the row and column sums of $A$ are each equal to $0$.
\label{bcc1} \end{lemma} \begin{lemma} Let $A$ be an $m\times m$ trade matrix such that $\det(A)\neq 0$ and there exist $m\times 1$ vectors $X$ and $B$ such that $AX=B$, where each entry of $B$ is divisible by $p$ but each entry of $X$ is not divisible by $p$. Then $\det(A)$ is divisible by $p$. \label{bcc} \end{lemma} \begin{lemma} If $T$ is a Latin trade in $B_p$, then $|T| \geqslant mp^{1/m}+2$. \label{bcc2} \end{lemma} We will also need the following consequence of the theory in \cite{Cav1}. \begin{lemma} There does not exist a row $i$ of a trade matrix $A$ such that $a_{ii}=2$ and $a_{ij}=-2$ where $j\neq i$. \label{nointerc} \end{lemma} \begin{proof} If such a row exists, Equation (1) of \cite{Cav1} becomes $2x_i\equiv 2x_j$ (mod $p$), which implies $x_i=x_j$ since $p$ is odd, a contradiction to the rows being distinct. \end{proof} Those readers who refer back to the details in \cite{Cav1} may notice that the step of proving that a trade matrix has a non-zero determinant is omitted there. However, Theorem \ref{irrr} in the next section addresses the original oversight from that paper. \section{Smallest orthogonal trade} In this section we give a lower bound on the number of times each symbol occurs in an orthogonal trade (Theorem \ref{lowa}) and an overall lower bound for the size of an orthogonal trade (Theorem \ref{lowa2}). Suppose that $k\neq 1$ and symbol $s$ occurs in the rows in the set $R=\{r_1,r_2,\dots ,r_m\}$ of an orthogonal trade $T$ of index $(1,k)$. Then clearly the set of columns of $T$ which include $s$ is equal to $\{s-r_1,s-r_2,\dots ,s-r_m\}$. Let $\phi$ be the devolution (that is, fixed-point-free permutation) on $R$ such that $s$ occurs in the set of cells $$\{(r_i,s-\phi(r_i))\mid r_i\in R\}$$ in $T'$. Observe that $(k-1)r_i+s$ occurs in cell $(r_i,s-r_i)$ of $B_p(k)$. Thus, considering orthogonality, the set of orthogonal ordered pairs $\{(s,(k-1)r_i+s)\mid r_i\in R\}$ must be covered after $T$ is replaced by $T'$; it follows that \begin{eqnarray} \{(k-1)r_i+s\mid r_i\in R\} & = & \{kr_i+s-\phi(r_i)\mid r_i\in R\}. \label{roro} \end{eqnarray} We thus may define another permutation $\phi'$ on $R$ such that $\phi'(r_i)=(kr_i-\phi(r_i))/(k-1)$ for each $r_i\in R$. If $\phi'(r_i)=r_i$ for some $r_i\in R$, then $\phi(r_i)=r_i$ and $\phi$ is not a devolution, a contradiction. Similarly, $\phi'(r_i)\neq \phi(r_i)$ for each $r_i\in R$. We thus obtain a linear system of the form $A{\bf u}={\bf 0}$ (mod $p$), where ${\bf u}=(r_1,r_2,\dots ,r_{m})^T$ and $A$ is a square matrix of dimensions $m\times m$ with the following properties: \begin{enumerate} \item[{\rm (P1)}] Each entry of the main diagonal of $A$ is $k$. \item[{\rm (P2)}] Each off-diagonal entry of $A$ is either $0$, $-1$ or $1-k$. \item[{\rm (P3)}] The sum of each row and column of $A$ is $0$. \end{enumerate} In the example in Figure \ref{figg1} with $s=0$, we have $R=\{0,4,5\}$, $\phi=(045)$, $\phi'=(054)$, ${\bf u}=(0,4,5)^T$ and $$A=\left[\begin{array}{ccc} 3 & -1 & -2 \\ -2 & 3 & -1 \\ -1 & -2 & 3 \end{array}\right].$$ The following lemma is immediate. \begin{lemma} Any symbol in an orthogonal trade occurs at least $3$ times. \label{th3} \end{lemma} Next, property (P3) above implies that $\det(A)=0$. From Lemma \ref{iceice}, we may assume without loss of generality that $r_1=0$. Let $A'$ be the $(m-1)\times (m-1)$ matrix obtained by deleting the first row and column of $A$ and let ${\bf u}'=(r_2,\dots ,r_{m})^T$.
Then $A'{\bf u}'={\bf 0}$, where $A'$ satisfies (P1), (P2) and the following properties: \begin{enumerate} \item[{\rm (P4)}] The sum of each row of $A'$ is $0$ except for at least two rows which have a positive sum. \item[{\rm (P5)}] The sum of each column of $A'$ is $0$ except for at least two columns which have a positive sum. \end{enumerate} An $m\times m$ matrix $A=(a_{ij})$ is said to be {\em diagonally dominant} if $$2|a_{ii}|\geqslant \sum_{j=1}^m |a_{ij}|$$ for each $i\in [m]$. Clearly $A'$ above is diagonally dominant. \begin{theorem} {\rm (\cite{HoJo, Taus})} If $A$ is diagonally dominant and irreducible and there is an index $i\in [m]$ such that \begin{eqnarray} 2|a_{ii}| & > & \sum_{j=1}^m |a_{ij}|, \label{eqqq} \end{eqnarray} then $A$ is non-singular. \label{irrr} \end{theorem} Thus if $A'$ is irreducible, we have from the previous theorem that $\det(A')\neq 0$. However, the case when $A'$ is reducible can be dealt with in the following lemma, which is easy to prove. \begin{lemma} Let $A'$ be an $m\times m$ diagonally dominant matrix satisfying {\rm (P1)}, {\rm (P2)}, {\rm (P4)} and {\rm (P5)} above. Then there exists an irreducible, diagonally dominant $m'\times m'$ matrix $A''$ with $m'\leqslant m$ satisfying {\rm (P1)}, {\rm (P2)} and Equation $(\ref{eqqq})$ above. \end{lemma} Thus there exists an $m'\times m'$ matrix $A''$, satisfying {\rm (P1)}, {\rm (P2)} and Equation $(\ref{eqqq})$ above, with non-zero determinant, where $m'\leqslant m-1$. Moreover, $A''$ is a type of {\em trade matrix} as defined in the previous section. From Lemma \ref{bcc0}, the determinant of $A''$ is bounded above by $k^{m-1}$. Thus from Lemma \ref{bcc}, $p< k^{m-1}$ and we have shown the following. \begin{theorem} Let $K=${\rm\ min}$\{k,k^{-1}\}$. The number of times each symbol occurs in an orthogonal trade is greater than $\log_K{p}+1$. \label{lowa} \end{theorem} We next find a lower bound on the size of $T$. \begin{theorem} If $T$ is an orthogonal trade of index $(1,k)$, then $$|T|> \frac{\log{p}\,\log_K{p}}{\log\log_K{p}},$$ where $K=${\rm\ min}$\{k,k^{-1}\}$. \label{lowa2} \end{theorem} \begin{proof} Let $T$ contain $m$ distinct symbols and let $s_i$ be the number of times symbol $i$ occurs in $T$, where $1\leqslant i\leqslant m$. From Lemma \ref{bcc2}, for any Latin trade in $B_p$, $\sum_{i=1}^{m} s_i=|T|> mp^{1/m}$. Let $x=|T|/m= (\sum_{i=1}^{m} s_i)/m$. From Lemma \ref{th3}, $x\geqslant 3$. Also, from above, $x> p^{1/m}$ which implies that $m> (\log{p})/(\log{x})$. Thus $|T|> (x/\log{x})\log{p}$. But the function $x/\log{x}$ is strictly increasing for $x>e$; thus the result follows from the previous theorem. \end{proof} \section{Orthogonal trades permuting entire rows} \label{section:entirerows} In this section we investigate the case when $T$ and $T'$ are constructed by taking complete rows of $B_p$ and permuting them. It turns out that such orthogonal trades arise from considering a symbol from an arbitrary orthogonal trade. \begin{theorem} Let $T$ be an orthogonal trade in $B_p$. Let $R$ be the set of rows that contain a particular symbol $s$ in $T$. Then there exists an orthogonal trade of size $p|R|$ constructed by permuting the rows of $R$. \label{rara} \end{theorem} \begin{proof} Fix $s\in {\Bbb Z}_p$. Equation (\ref{roro}) implies that $$\{((k-1)r_i+s+j)\mid r_i\in R,j\in {\mathbb Z}_p\} = \{(kr_i+s-\phi(r_i)+j)\mid r_i\in R,j\in {\mathbb Z}_p\}.$$ Thus if we replace row $r_i$ with row $\phi(r_i)$ for each $r_i\in R$ we obtain an orthogonal trade.
\end{proof} \begin{figure} \caption{An orthogonal trade derived from Figure \ref{figg1} as in Theorem \ref{rara}.} \end{figure} In fact, the existence of an orthogonal trade permuting entire rows is equivalent to the existence of a matrix $A$ satisfying the properties from Section 2. \begin{corollary} Let $A$ be an $m\times m$ matrix satisfying properties {\rm (P1)}, {\rm (P2)} and {\rm (P3)} from Section $2$. Suppose furthermore there is a solution to $A{\bf u}={\bf 0}$ (mod $p$) where ${\bf u}=(r_1,r_2,\dots ,r_m)^T$ and $r_1,r_2,\dots ,r_m$ are distinct residues in ${\mathbb Z}_p$. Then there exists an orthogonal trade $T$ of index $(1,k)$ whose disjoint mate $T'$ is formed by permuting the rows $r_1,r_2,\dots ,r_m$ of $T$. \label{striker} \end{corollary} \begin{proof} Define $\phi(r_i)=r_j$ if and only if $A_{ij}=-1$ and define $\phi'(r_i)=r_j$ if and only if $A_{ij}=-(k-1)$. Then $\phi$ and $\phi'$ are disjoint derangements on the set $\{r_1,r_2,\dots ,r_m\}$ and $\phi'(r_i)(k-1)=kr_i-\phi(r_i)$ for each $i$, $1\leqslant i\leqslant m$. In turn, Equation (\ref{roro}) is satisfied. The proof then follows by Theorem \ref{rara}. \end{proof} From the previous theorem and Theorem \ref{lowa}, we have the following. \begin{corollary} Let $T$ be an orthogonal trade of index $(1,k)$ with disjoint mate $T'$ such that $T'$ is the permutation of $m$ entire rows of $T$. Then $m\geqslant \log_K{p}+1$, where $K=${\rm\ min}$\{k,k^{-1}\}$. \label{permm} \end{corollary} \begin{theorem} There exists an orthogonal trade $T$ with disjoint mate $T'$ such that $T'$ is the permutation of $3$ entire rows of $T$ if and only if $p\equiv 1\mod{6}$. \end{theorem} \begin{proof} From the theory in the previous section, the determinant of $A'$ is equal to $k^2-k+1$, and by Lemma \ref{bcc} it must be divisible by $p$. Now $k^2-k+1\equiv 0$ (mod $p$) has a solution if and only if $-3$ is a square mod $p$. Elementary number theory can be used to show that $-3$ is a square mod $p$ if and only if $p\equiv 1\mod{6}$. Finally, if $p\equiv 1\mod{6}$, replacing row $0$ with row $1$, row $1$ with row $k$ and row $k$ with row $0$ in $B_p$ creates a Latin square which remains orthogonal to $B_p(k)$. \end{proof} It is an open problem to determine whether there exists an orthogonal trade permuting a bounded number of rows for any odd prime $p$. \section{Orthogonal trades via Latin trades in $B_p$} Our aim in this section is to construct an orthogonal trade $T$ of index $(1,2)$ with disjoint mate $T'$ such that $T'$ permutes $O(\log{p})$ entire rows of $T$. We do this by showing the existence of suitable Latin trades in $B_p$ of size $O(\log{p})$. \begin{theorem} For each prime $p$ there exists a Latin trade $T$ of size $O(\log{p})$ within $B_p$ such that each symbol occurs either twice in $T$ or not at all. \label{szz} \end{theorem} \begin{theorem} For each prime $p$ there exists an orthogonal trade of index $(1,2)$ permuting $O(\log{p})$ rows. \label{yoyo} \end{theorem} \begin{proof} From Section 2, the trade matrix $A$ corresponding to the trade $T$ given by the previous theorem has the following properties. Firstly, the number of rows (and columns) of $A$ is $O(\log{p})$. Secondly, each entry of the main diagonal is $2$, every other entry is either $-2$, $-1$ or $0$ and the row and column sums are at least $0$. From Lemma \ref{nointerc}, there are no entries equal to $-2$. Moreover, from Lemma \ref{bcc1}, $A{\bf u}={\bf 0}$ (mod $p$) has a solution in ${\mathbb Z}_p$ in which the entries of ${\bf u}$ are distinct. The result follows by Corollary \ref{striker}.
\end{proof} In order to prove Theorem \ref{szz} we modify a construction given by Szabados \cite{sz}, which proved the following. \begin{theorem} {\rm (Szabados, \cite{sz})} For each prime $p$ there exists a Latin trade of size at most $5\log_2{p}$ within $B_p$. \end{theorem} Since our proof is a modification of that given in \cite{sz} (which was in turn inspired by classic results on dissections of squares by Brooks, Smith, Stone and Tutte \cite{BSST,tut} and Trustum \cite{tru}) we borrow from the notation given in \cite{sz}. A {\em dissection} of order $k$ of a rectangle $R$ with integer sides is a set of $k$ squares of integral side which partition the area of the rectangle (i.e. they cover the rectangle and overlap at most on their boundaries). A dissection is said to be $\oplus$-free if no four of the squares share a common point. For the following definition we position our rectangle $R$ with a corner at the origin, its longest side along the positive $x$-axis and another side along the negative $y$-axis. We say that a dissection is {\em good} if it is: \begin{enumerate} \item[(G1)] $\oplus$-free; \item[(G2)] the square with the origin as a corner point has side at least $3$; \item[(G3)] there is no line of gradient $-1$ intersecting corner points of more than one square; and \item[(G4)] the lines $y=1-x$ and $y=2-x$ do not intersect corner points of any square. \end{enumerate} We wish to construct a good dissection of a rectangle of dimensions $n\times (n+3)$ for any $n\geqslant 3$. We first deal with small values of $n$. \begin{lemma} There exists a good dissection of the rectangle $n\times (n+3)$ for $3\leqslant n\leqslant 14$ with at most $8$ squares. \end{lemma} \begin{proof} In every case, make one of the squares an $n\times n$ square with the origin as a corner point and make another square a $3\times 3$ square with $(n+3,-n)$ as a corner point. Then (G2) and (G4) are satisfied. It is then easy to find a dissection of the remaining $(n-3)\times 3$ rectangle satisfying (G1) and (G3), using at most $6$ squares for each case (one can simply use a greedy algorithm, cutting off a largest possible square at each step). \end{proof} Figure \ref{good5} displays a good dissection of the $5\times 8$ rectangle into $5$ squares. \begin{figure} \caption{An example of a good dissection into squares} \label{good5} \end{figure} For $n$ of the form $4k+z$ with $k\geqslant 3$, $z\in \{3,4,5,6\}$, we may dissect an $n\times (n+3)$ rectangle into at most $5$ squares and a rectangle of size $2k\times 2(k+3)$, as shown in Figure \ref{sz}. \begin{figure} \caption{ Dissecting a rectangle of size $n\times (n+3)$ \ (Figure 1 from \cite{sz}) } \label{sz} \end{figure} \begin{lemma} For each $n\geqslant 3$, there exists a good dissection of an $n\times (n+3)$ rectangle using at most $3+5\log_4{(n+1)}$ squares. \label{goodies} \end{lemma} \begin{proof} From the previous lemma, the result holds for $3\leqslant n\leqslant 14$. If $n\geqslant 15$, write $n=4k+z$ where $k\geqslant 3$ and $z\in \{3,4,5,6\}$ and use a dissection as in Figure \ref{sz}, recursively using a good dissection of the $k\times (k+3)$ rectangle with the length of each square doubled. Property (G2) of the smaller rectangle ensures that (G1) holds for the larger rectangle. Property (G2) clearly holds for the larger rectangle as $k\geqslant 3>0$. Next, property (G4) (avoiding the line $y=1-x$) for the smaller rectangle ensures that (G3) holds for the larger rectangle.
Finally, property (G4) (avoiding the line $y=2-x$) for the smaller rectangle ensures that (G4) holds for the larger rectangle. Note in the previous that $y=2-x$ with respect to the larger rectangle cannot hit any corners of squares in the smaller rectangle because each square has even length side. Suppose such a recursion is applied $\alpha$ times to an initial rectangle of size $m\times (m+3)$ where $3\leqslant m\leqslant 14$. Then $n\geqslant g^{\alpha}(m)$, where $g(m)=4m+3$ and $g^{\alpha}$ is the function $g$ composed with itself ${\alpha}$ times. Observe that $g^{\alpha}(m)=4^{\alpha}m+4^{\alpha}-1$. Thus $(n-4^{\alpha}+1)/4^{\alpha}\geqslant 3$ and $\alpha\leqslant \log_4{(n+1)}-1$. Each recursive step gives at most $5$ extra squares; with at most $8$ squares in the initial step, this gives at most $8+5\alpha\leqslant 3+5\log_4{(n+1)}$ squares, as required. \end{proof} The proof of Theorem \ref{szz} now follows from the following theorem, which is outlined in \cite{sz} and first established in \cite{Dr}. \begin{theorem} Suppose there exists a good dissection of order $k$ of an $n\times m$ rectangle. Then there exists a Latin trade $T$ in the addition table for the integers modulo $m+n$ (i.e. $B_{m+n}$ if $m+n$ is prime) such that each symbol of $T$ appears exactly twice and $T$ has size $2k+2$. \end{theorem} \begin{proof} The proof follows from the construction, first given in \cite{Dr}, showing that a dissection of a right-angled isosceles triangle (with two sides of length $p$) into smaller, integer-sided right-angled isosceles triangles gives rise to a Latin trade $T$ in $B_p$, provided that no point is the vertex of $6$ of the smaller triangles. In such a construction, the number of smaller triangles gives the size of the Latin trade. Reposition the triangle on the Euclidean plane so that its vertices have positions $(0,0)$, $(0,p)$ and $(p,0)$. Then the coordinates of the vertices of the smaller triangles give precisely the cells of $B_p$ which $T$ occupies. Next, reposition the $n\times m$ rectangle so that its vertices have coordinates $(0,0),(0,m),(n,0)$ and $(n,m)$. Embed this rectangle into an isosceles right-angled triangle as above (with two equal sides of length $n+m$). Dissect each square in the good dissection into two triangles so that the sides of each triangle are parallel to the sides of the larger triangle. This gives a dissection of the right-angled triangle into $2k+2$ smaller right-angled isosceles triangles. Reposition the triangle as above. Then, in our construction, each line segment of gradient $-1$ corresponds to a single symbol in $B_p$. Each such line segment intersects only two corners of squares and thus only two vertices of triangles. Together with condition (G3), this ensures that each symbol occurs exactly twice in the Latin trade. \end{proof} Apply the process in the above theorem to the example in Figure \ref{good5}. This results in the following Latin trade $T$ in $B_{13}$ (with a unique disjoint mate $T'$): $$ \begin{array}{l} T:=\{(0,0,0),(0,5,5),(5,0,5),(5,3,8),(8,0,8),(5,5,10),(7,3,10), \\ (7,4,11),(8,3,11),(7,5,12),(8,4,12),(8,5,0)\}. \\ T':=\{(0,0,5),(0,5,0),(5,0,8),(5,3,10),(8,0,0),(5,5,5),(7,3,11), \\ (7,4,12),(8,3,8),(7,5,10),(8,4,11),(8,5,12)\}. \\ \end{array} $$ Note that each symbol occurs twice. For a general proof of why this construction gives a Latin trade, see \cite{Dr}.
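The claimed properties of this example can be verified mechanically. The following short Python check is an illustration only and not part of the construction; it confirms that $T$ and $T'$ occupy the same cells, are disjoint, agree row-wise, column-wise and symbol-wise, and that each symbol occurs exactly twice in $T$.
\begin{verbatim}
# Verify the Latin trade T with disjoint mate T' inside B_13 (illustration only).
from collections import Counter

T  = [(0,0,0),(0,5,5),(5,0,5),(5,3,8),(8,0,8),(5,5,10),(7,3,10),
      (7,4,11),(8,3,11),(7,5,12),(8,4,12),(8,5,0)]
Tp = [(0,0,5),(0,5,0),(5,0,8),(5,3,10),(8,0,0),(5,5,5),(7,3,11),
      (7,4,12),(8,3,8),(7,5,10),(8,4,11),(8,5,12)]

# T sits inside B_13 = B_13(1): the symbol in cell (r, c) is r + c (mod 13).
assert all((r + c) % 13 == s for (r, c, s) in T)

# Same occupied cells, but no cell keeps its symbol (T and T' are disjoint).
assert {(r, c) for (r, c, _) in T} == {(r, c) for (r, c, _) in Tp}
symTp = {(r, c): v for (r, c, v) in Tp}
assert all(symTp[(r, c)] != s for (r, c, s) in T)

# Each row and each column of T carries the same multiset of symbols as T'.
for idx in (0, 1):                      # 0: group by row, 1: group by column
    for key in {t[idx] for t in T}:
        assert Counter(t[2] for t in T  if t[idx] == key) == \
               Counter(t[2] for t in Tp if t[idx] == key)

# Each symbol occurs exactly twice in T, as claimed.
assert set(Counter(s for (_, _, s) in T).values()) == {2}
print("all checks passed")
\end{verbatim}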
An {\em orthomorphism} of the cyclic group ${\mathbb Z}_p$ is a permutation $\phi$ of the elements of ${\mathbb Z}_p$ such that $\phi(x)-x$ is also a permutation. We note that orthomorphisms have a more general definition for arbitrary groups. However, in this section we consider orthomorphisms of the cyclic group only. Trivial examples of orthomorphisms are given by $\phi(x)=kx$ for any $k$, $2\leqslant k\leqslant p-1$. Given any orthomorphism $\phi$, construct a Latin square $L_{\phi}$ by placing $\phi(r) +c$ in cell $(r,c)$. Then by definition $L_{\phi}$ is orthogonal to $B_p$. Given two orthomorphisms $\phi$ and $\phi'$, the {\em distance} between $\phi$ and $\phi'$ is defined to be the number of values $x$ for which $\phi(x)\neq \phi'(x)$. Corollary \ref{permm} implies the following result about orthomorphisms. \begin{theorem} Let $\phi(x)=kx$ and let $\phi'$ be an orthomorphism not equal to $\phi$. Then the distance between $\phi$ and $\phi'$ is at least $\log_K{p}+1$ where $K={\rm\ min}\{k,k^{-1}\}$. \end{theorem} A {\em transversal} of a Latin square of order $n$ is a set of $n$ ordered triples that includes each row, column and symbol exactly once. Given any orthomorphism $\phi$, the set of triples $(x,\phi(x)-x,\phi(x))$ is a transversal of $B_p$. For example, if $\phi(x)=2x$ we obtain a transversal on the main diagonal of $B_p$. So we have the following corollary. \begin{corollary} Any transversal of $B_p$ not equal to the main diagonal has at least $\log_2{p}+1$ elements off the main diagonal. \end{corollary} From Theorem \ref{yoyo}, we also have the following. \begin{theorem} There exists a transversal of $B_p$ not equal to the main diagonal which has $O(\log{p})$ elements not on the main diagonal. \end{theorem} \section{A construction for an orthogonal trade with size not divisible by $p$} In this section we construct an orthogonal trade of size not divisible by $p$ whenever $p\equiv 1$ (mod $6$). Figure \ref{figg1} gives the construction for $p=7$. Figure \ref{figg3(p-1)trade} is an example of the construction for $p=13$, where the disjoint mate is shown via subscripts. \begin{figure} \caption{An orthogonal trade of index $(1,4)$ and size $36$ in $B_{13}$} \label{figg3(p-1)trade} \end{figure} Since $p\equiv 1$ (mod $6$), there exists an integer $k\geqslant 2$ such that $k^2-k+1$ is divisible by $p$ (because $-3$ is a square modulo $p$ if and only if $p-1$ is divisible by $6$). Note that if $k$ is a solution then $1-k$ is also a solution modulo $p$; thus we assume in this section that $k$ is an integer such that $2\leqslant k\leqslant (p+1)/2$. We remind the reader that all values are evaluated modulo $p$ with a residue between $0$ and $p-1$. We define the following subsets $T_0,T_1,\dots ,T_{k-1}$ of $B_{p}$: $$T_0:=\{(0,j,j),(0,k+j,k+j)\mid 0\leqslant j\leqslant k-2\}$$ and if $1\leqslant i\leqslant k-1$, $$T_i:=\{(i(k-1),j,i(k-1)+j)\mid i\leqslant j\leqslant 2(k-1)\}\cup$$ $$\{(i(k-1)+1,j,i(k-1)+j+1)\mid 0\leqslant j\leqslant k+i-2\}.$$ We then define $T$ to be the union of all these sets; i.e. $$T:=\bigcup_{i=0}^{k-1} T_i.$$ The condition $p>2k-2$ ensures that the above sets are disjoint. Observe that $|T_0|=2(k-1)$ and for each $1\leqslant i\leqslant k-1$, $|T_i|=3k-2$. Thus $|T|=3k(k-1)$, which is not divisible by $p$ (since $p$ divides $k^2-k+1$, it divides none of $3$, $k$ and $k-1$). In the case where $p=k^2-k+1$ (where $p$ and $k$ are integers), the size of $T$ is $3(p-1)$, but in general it may be larger relative to $p$. We will show that $T$ is a Latin trade which preserves orthogonality between the Latin squares $B_{p}$ and $B_p(k)$.
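Before defining the disjoint mate, we note that the construction of $T$ is easy to experiment with computationally. The following Python sketch (an illustration only, for the small case $p=7$, $k=3$; it is not part of the proof) builds $T$ from the defining formulas and checks the claimed size.
\begin{verbatim}
# Build T from the defining formulas of this section (illustration for p = 7, k = 3,
# where k^2 - k + 1 = 7 is divisible by p) and check the claimed size.
p, k = 7, 3
assert (k * k - k + 1) % p == 0 and p > 2 * k - 2

T = {(0, j, j) for j in range(k - 1)} | {(0, k + j, k + j) for j in range(k - 1)}
for i in range(1, k):
    r = (i * (k - 1)) % p
    T |= {(r, j % p, (r + j) % p) for j in range(i, 2 * (k - 1) + 1)}
    T |= {((r + 1) % p, j % p, (r + j + 1) % p) for j in range(k + i - 1)}

assert len(T) == 3 * k * (k - 1)                 # the sets T_0, ..., T_{k-1} are disjoint
assert len(T) % p != 0                           # |T| is not divisible by p
assert all(s == (r + c) % p for (r, c, s) in T)  # T lies inside B_p(1)
print(sorted(T))
\end{verbatim}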
With this aim in view, we define a partial Latin square $T'$ which we will show is a disjoint mate of $T$. Let \begin{eqnarray*} T_0':=\{(0,j,k+j),(0,k+j,j)\mid 0\leqslant j\leqslant k-2\} \end{eqnarray*} and if $1\leqslant i\leqslant k-1$, \begin{eqnarray*}T_i':=&&\{(i(k-1),j,i(k-1)+j+k)\mid i\leqslant j\leqslant k-2\}\cup\\ &&\{(i(k-1),j,i(k-1)+j+1)\mid k-1\leqslant j\leqslant k+i-2 \}\cup\\ &&\{(i(k-1),j,(i-1)(k-1)+j)\mid k+i-1\leqslant j\leqslant 2(k-1)\}\cup\\ &&\{(i(k-1)+1,j,i(k-1)+j+k)\mid 0\leqslant j\leqslant i-1\}\cup\\ &&\{(i(k-1)+1,j,i(k-1)+j)\mid i\leqslant j\leqslant k-1\}\cup\\ &&\{(i(k-1)+1,j,i(k-1)+j-k+1)\mid k\leqslant j\leqslant k+i-2\}. \end{eqnarray*} Note that for $i=k-1$ the first set in $T_i'$ is empty and for $i=1$ the last set is empty. We define $T'$ to be the union of the above sets; i.e. $$T':=\bigcup_{i=0}^{k-1} T_i'.$$ By observation, $T$ and $T'$ occupy the same set of cells and are disjoint. We next check that corresponding rows contain the same set of symbols. This is easy to check for row $0$. Let $1\leqslant i\leqslant k-1$. Row $i(k-1)$ of $T'$ contains the symbols $\{i(k-1)+j+k\mid i\leqslant j\leqslant k-2\}=\{i(k-1)+j \mid i+k\leqslant j\leqslant 2(k-1)\}$, $\{i(k-1)+j+1\mid k-1\leqslant j\leqslant k+i-2\}=\{i(k-1)+j \mid k\leqslant j\leqslant k+i-1 \}$ and $\{(i-1)(k-1)+j\mid k+i-1\leqslant j\leqslant 2(k-1)\}=\{i(k-1)+j \mid i\leqslant j\leqslant k-1 \}$. Thus row $i(k-1)$ of $T'$ contains the same set of symbols as the corresponding row of $T$. Next, row $i(k-1)+1$ of $T'$ contains the symbols $\{i(k-1)+j+k\mid 0\leqslant j\leqslant i-1\}= \{i(k-1)+j+1\mid k-1\leqslant j\leqslant i+k-2\}$, $\{i(k-1)+j\mid i\leqslant j\leqslant k-1\}= \{i(k-1)+j+1\mid i-1\leqslant j\leqslant k-2\}$ and $\{i(k-1)+j-k+1\mid k\leqslant j\leqslant k+i-2\}= \{i(k-1)+j+1\mid 0\leqslant j\leqslant i-2\}$. Thus row $i(k-1)+1$ of $T'$ contains the same set of symbols as the corresponding row of $T$. We have shown that $T$ and $T'$ share the same sets of symbols in corresponding rows. We now show this property for the columns. It suffices to show that each symbol in a column of $T'$ occurs within the same column of $T$. First consider elements of $T_0'$. Let $0\leqslant j\leqslant k-2$. Then symbol $j+k$ in cell $(0,j)$ of $T_0'$ belongs also to cell $(k,j)$ of $T_1$. Moreover symbol $j$ in cell $(0,k+j)$ of $T_0'$ belongs also to cell $((k-1)^2,k+j)$ of $T_{k-1}$ since $(k-1)^2+k$ is divisible by $p$. In this paragraph we deal with symbols which occur in row $i(k-1)$ of $T_i'$ for some $1\leqslant i\leqslant k-1$. Consider symbol $i(k-1)+j+k$ in column $j$ of $T_i'$ where $i\leqslant j\leqslant k-2$. This symbol also lies in cell $((i+1)(k-1)+1,j)$ of $T_{i+1}$. Consider symbol $i(k-1)+j+1$ in column $j$ of $T_i'$ where $k-1\leqslant j\leqslant k+i-2$. This symbol lies in cell $(i(k-1)+1,j)$ of $T_i$. Consider symbol $(i-1)(k-1)+j$ in column $j$ of $T_i'$ where $k+i-1\leqslant j\leqslant 2(k-1)$. This symbol lies in cell $((i-1)(k-1),j)$ of $T_{i-1}$. Finally, to verify that $T'$ is indeed a disjoint mate of $T$, we look at symbols which occur in row $i(k-1)+1$ of $T_i'$ for some $1\leqslant i\leqslant k-1$. Consider symbol $i(k-1)+j+k$ which occurs in column $j$ of $T_i'$ where $0\leqslant j\leqslant i-1$. This symbol occurs in cell $((i+1)(k-1)+1,j)$ of $T_{i+1}$ (if $i<k-1$) or $T_0$ (if $i=k-1$). Next consider symbol $i(k-1)+j$ which occurs in column $j$ of $T_i'$ where $i\leqslant j\leqslant k-1$. This symbol occurs in cell $(i(k-1),j)$ of $T_i$.
Thirdly, consider symbol $i(k-1)+j-k+1$ in column $j$ of $T_i'$ where $k\leqslant j\leqslant k+i-2$. This symbol occurs in cell $((i-1)(k-1),j)$ of $T_{i-1}$. We have shown that $T$ is a Latin trade in $B_p$ with disjoint mate $T'$. Next we show orthogonality. It suffices to show that for each element $(r,c,r+c)\in T$, there is a cell $(r',c')\in T'$ containing $r+c$ such that $(r',c')$ contains $rk+ c$ in $B_p(k)$ (equivalently, $r'k+c'=rk+c$). Firstly, let $(0,j,j)\in T_0$ where $0\leqslant j\leqslant k-2$. Then $(p-k,j+k-1,j)\in T_{k-1}'$. Next let $1\leqslant i\leqslant k-1$. Let $(i(k-1),j,i(k-1)+j)\in T_i$ where $i\leqslant j\leqslant k-1$ ($i\leqslant j\leqslant k-2$ when $i=0$). Then $((i-1)(k-1),j-1,i(k-1)+j)\in T_{i-1}'$. Next let $1\leqslant i\leqslant k-1$. Let $(i(k-1),j,i(k-1)+j)\in T_i$ where $k\leqslant j\leqslant k+i-1$. Then $(i(k-1)+1,j-k,i(k-1)+j)\in T_i'$. Let $0\leqslant i\leqslant k-2$. Let $(i(k-1),j,i(k-1)+j)\in T_i$ where $k+i\leqslant j\leqslant 2(k-1)$. Then $((i+1)(k-1)+1,j-k+1,i(k-1)+j)\in T_{i+1}'$. Next let $1\leqslant i\leqslant k-1$. Let $(i(k-1)+1,j,i(k-1)+1+j)\in T_i$ where $i-1\leqslant j\leqslant k-2$. Then $(i(k-1),j+k,i(k-1)+1+j)\in T_i'$. Let $2\leqslant i\leqslant k-1$. Let $(i(k-1)+1,j,i(k-1)+1+j)\in T_i$ where $0\leqslant j\leqslant i-2$. Then $((i-1)(k-1),j+k-1,i(k-1)+1+j)\in T_{i-1}'$. Finally let $1\leqslant i\leqslant k-1$. Let $(i(k-1)+1,j,i(k-1)+1+j)\in T_i$ where $k\leqslant j\leqslant k+i-2$. Then $((i+1)(k-1)+1,j+1,i(k-1)+1+j)\in T_{i+1}'$. An {\em intercalate} in a Latin square is a $2\times 2$ subsquare. The construction in this section shows the potential of using trades to construct MOLS with particular properties. We demonstrate this with the following theorem. \begin{theorem} Let $p$ be a prime such that $p\equiv 1$ {\rm(mod $6$)}. Then there exists a Latin square $L$, orthogonal to $B_p(k)$ where $k^2-k+1\equiv 0$ {\rm(mod $p$)}, such that $L$ contains an intercalate. \end{theorem} \begin{proof} Let $L:=(B_p\setminus T)\cup T'$, where $T$ and $T'$ are defined as in this section. We have shown above that $L$ is orthogonal to $B_p(k)$. Observe that $(k-1,1,2k),(k-1,k,k)$ and $(k,1,k)$ are each elements of $T_1'$ and thus $L$. Finally, cell $(k,k)$ is not included in $T$ so $(k,k,2k)\in L$. \end{proof} \section{Computational results} In this section, we give some computational results on the spectrum of the possible sizes of orthogonal trades mentioned in the previous sections. These orthogonal trades can be found as ancillary files in \cite{CDD}. Let $S_p$ be the set of sizes for which an orthogonal trade in $B_p$ of index $(1,k)$ exists for some $k$. For $p=5$, $S_5 = \{0, 10, 15, 20, 25\}$. The results for $p=7$ and $p=11$ are summarised in the following lemma. \begin{lemma} The spectra of the sizes of orthogonal trades for $p=7$ and $p=11$ are $S_7 = \{0, 14, 18, 21, 24, 25, \dots, 49\}$ and $S_{11} = \{0, 22, 33, 36, 37, \dots, 121\}$, respectively. \end{lemma} Note that an orthogonal trade in $B_7$ of size $18$ is given in Figure \ref{figg1}. Our theoretical results only considered orthogonal trades when $p$ is prime. A similar question can be studied for odd values of $p$ in general. Here $B_p(1)$ is orthogonal to $B_p(k)$ if and only if $k-1$ is invertible modulo $p$ (for prime $p$ this is simply $k\not\equiv 1\mod{p}$). Then the spectrum of the sizes of orthogonal trades in $B_9$ is the set $\{0, 6, 9, 12, 15, 16, 18, 19, \dots, 81\}$. In Section \ref{section:entirerows}, we investigated the orthogonal trades in $B_p$ which are constructed by permuting entire rows. These trades preserve orthogonality with one of the $p-1$ MOLS.
The possible numbers of rows that need to be permuted are the elements of the sets $\{4, 5\}$, $\{3, 5, 6, 7\}$, $\{5, 6, 7, 8, 9, 10, 11\}$ and $\{3, 4, 6, 7, 8, 9, 10, 11, 12, 13\}$ for $p=5, 7, 11$ and $13$, respectively. This idea can be generalised to trades in $B_p$ which preserve orthogonality with more than one of the $p-1$ MOLS. We analyse this question for orders $p=5, 7, 11$ and $13$. We start by considering the orthogonal trades in $B_p$ which preserve orthogonality with {\em two} other MOLS from the complete set of size $p-1$, considering only those formed by permuting entire rows. So these orthogonal trades are formed within three MOLS of order $p$. The possible numbers of rows that need to be permuted are the elements of the sets $\{4, 5\}$, $\{6, 7\}$, $\{5, 6, 8, 9, 10, 11\}$ and $\{4, 6, 8, 9, 10, 11, 12, 13\}$ for $p=5, 7, 11$ and $13$, respectively. Here the non-trivial cases occur when the number of rows is not $p-1$ or $p$, so we continue with only those cases. Next, we consider the orthogonal trades in $B_p$ which preserve orthogonality with {\em three} other MOLS from the complete set of size $p-1$. The possible numbers of rows that need to be permuted are the elements of the sets $\{5, 9\}$ and $\{6, 11\}$ for $p=11$ and $13$, respectively. The orthogonal trades which preserve orthogonality with four of the $p-1$ MOLS can be constructed by permuting $6$ or $11$ rows for $p=13$. Lastly, an orthogonal trade which preserves orthogonality with five of the $p-1$ MOLS cannot be constructed by permuting entire rows for these orders. \end{document}
\begin{document} \section{Introduction} The Markov chain Monte Carlo (MCMC) methods are a set of algorithms used for optimization, integral approximation, dynamic simulation, efficient counting, and sampling. These methods are widely used in problems arising in physics, chemistry, statistics, probability, combinatorial optimization and numerical analysis, among other areas. MCMC methods are preferred by statisticians and applied mathematicians because of their easy implementation, in some cases fast convergence, and numerical stability. However, because of their complex theoretical background and the lack of theoretical convergence diagnostics, MCMC methods are sometimes referred to as black boxes for sampling and posterior estimation \cite{Brooks2011}. The aim of this work is to present a simple review of the theoretical background and computational complexity of MCMC methods, in particular of the Metropolis-Hastings algorithm, one of the most popular algorithms to date.\\ The plan of this work is as follows. In Section 2, a historical background of MCMC is presented. In Section 3, the preliminary concepts of Markov chains, convergence and complexity classes are given as the necessary theoretical background. In Section 4, a review of the Bayesian and classical inference schemes, the computational problems related to statistical inference and the Metropolis-Hastings algorithm are presented. Finally, in Section 5, a review of the correspondence between MCMC methods and probabilistic Turing machines and their applications to counting problems is given. \section{A short history of MCMC} Markov chain Monte Carlo (MCMC) was invented after the Monte Carlo methods at \textit{Los Alamos}, where \cite{Metropolis1953} simulated a liquid in equilibrium with its gas phase. The obvious way to find out about the thermodynamic equilibrium is to simulate the dynamics of the system and let it run until it reaches equilibrium. To do so, they simulated Markov chains having the same stationary distribution as the proposed dynamical system. This methodology is currently known as the Metropolis algorithm; it was heavily used in physical and chemical experiments until it was discovered by statisticians in the 1990s and used for posterior distribution approximation in Bayesian inference. Around the same time, \cite{Sinclair1988} used such chains to approximate solutions to combinatorial counting and uniform generation problems.\\ There are many MCMC variants: in 1970 Hastings generalized the Metropolis algorithm using asymmetric proposal distributions, and in 1984 Geman and Geman introduced a particular case of Metropolis-Hastings known as the Gibbs sampler. Other MCMC methodologies are Reversible Jump MCMC, Neural Network-HMC, Sequential Monte Carlo, Quasi-Monte Carlo (QMC), and Hamiltonian Monte Carlo (HMC) \cite{DUANE1987216}. Because of its fast convergence, Hamiltonian (or Hybrid) Monte Carlo is widely used for the approximation of high dimensional posterior distributions in probabilistic models \cite{betancourt2017}, increasing the popularity of Bayesian methods in health care, social science, and physical applications; see \cite{Stan} and \cite{Paul2017}. \section{Preliminary concepts} This section is divided into three parts. The first gives preliminary definitions of Markov chains and some of their important properties. In the second part we present some definitions for analyzing the convergence time to the stationary distribution. Finally, in the third part, we provide an overview of complexity classes and Turing machines.
\subsection*{An overview of Markov chains} In this section we provide definitions of stochastic processes, the Markov property and some other notions needed for designing the Metropolis algorithm. For more information see \cite{Levin2006}. \begin{Def} A stochastic process is a sequence of random variables $X = \{X_t \mid t \in T\}$, where $T$ is the index set, usually interpreted as time, and the random variables take values in the state space $\Omega$. \end{Def} Unlike random samples, stochastic processes do not assume independence between variables, and most of them are classified by their dependence structure. One of the simplest stochastic models is the Markov chain. \begin{Def} Let $X = \{X_t \mid t \in T\}$ be a stochastic process that satisfies: \begin{equation} P(X_t|X_{t-1},X_{t-2}\ldots, X_1,X_0) = P(X_t | X_{t-1}) \end{equation} Then $X$ is called a Markov process, and if the process $X$ takes values in a finite set $B \subset \Omega$, then $X$ is called a Markov chain. \end{Def} Equation (1) is usually called the \textit{Markov property} and can be interpreted as follows:\\ \textit{A process is Markovian if the present depends only on the immediately preceding time}.\\ We consider only discrete time stochastic processes ($T \subset \mathbb{Z}$), principally because it is impossible to simulate infinitely many values on a computer. Every value $x \in \Omega$ is called a state, and $P(x,y)$ is the probability that the process moves from state $x$ to state $y$ in one step, $P(x,y) = P(X_t = y|X_{t-1} = x)$, called the transition probability. Using this notation, $P^n(x,y)$ represents the probability of moving from $x$ to $y$ in exactly $n$ steps. \begin{Def} A Markov process $X$ is called irreducible if for any two states $x,y \in \Omega$ there exists an integer $k > 0$ such that $P^k(x,y) > 0$. \end{Def} This means that it is possible to move from any state to any other in a finite amount of time. With the same ideas we can define the periodicity of a state $x$. \begin{Def} Let $\tau = \{ t \geq 1 \mid P^t(x,x) > 0 \}$ denote the set of times at which the chain can return to its starting position. Then the period of the state $x$ is the greatest common divisor of $\tau$. \end{Def} A process is \textit{aperiodic} if all states have period 1; otherwise the process is called \textit{periodic}. Some distributions of a stochastic process are invariant over time; they are defined as follows. \begin{Def} A stochastic process has a stationary distribution $\pi$ if $$\pi(X_t) = \pi(X_{m}) \text{ for all }t,m \in T.$$ \end{Def} For Markov chains whose transition probabilities admit a stochastic matrix representation $P$, the stationary distribution satisfies $$\pi = \pi P.$$ Another important property of Markov chains is \textit{time reversibility}; this property guarantees the existence of a stationary distribution. \begin{Def} A Markov process is reversible in time if it satisfies $$\pi(X_t =x) = \pi(X_{t-1}= x)$$ for all $x \in \Omega$, where $\pi$ is a stationary distribution. \end{Def} An interesting result about reversible chains is the following proposition. \begin{prop} Let $X$ be an irreducible Markov chain with transition matrix $P$. If a distribution $\pi$ satisfies \begin{equation} \pi(x)P(x,y) = \pi(y)P(y,x) \text{ for all } x,y \in \Omega \end{equation} then $\pi$ is a stationary distribution.
\end{prop} \begin{dem} Sum over all $y$: $$(\pi P)(x) = \sum_{y \in \Omega}\pi(y)P(y,x) =\sum_{y \in \Omega}\pi(x)P(x,y) = \pi(x)\sum_{y \in \Omega}P(x,y) = \pi(x),$$ so $\pi P = \pi$. \end{dem} Equation (2) is usually called the detailed balance equation. Checking (2) is often the simplest way to verify that a particular distribution is stationary.\\ The key fact behind MCMC methods is that the simulated chain converges to the stationary \textit{target} distribution. Before presenting the methodology itself, some natural questions arise: \begin{itemize} \item How do we know whether the chain converges to the target distribution? \item Is there a methodology to prove the chain's convergence? \item How many simulations are needed to reach the stationary distribution? \end{itemize} The following definitions and propositions address the first two questions; the last one is a hard problem (\textit{coNP-hard, to be exact}), and some results and references are presented later in this work. \subsection*{Markov chain's mixing time} To measure the chain's convergence we need a parameter which measures the time required by a Markov chain to \textit{reach} its stationary distribution \cite{hsu2017mixing}. \begin{Def}[Total variation distance] The total variation distance between two probability distributions $f$ and $g$ on $\Omega$ is defined by: $$||f -g||_{TV} = \sup_{A \subseteq \Omega}|f(A) -g(A)|$$ \end{Def} In terms of the total variation distance we can define the distance between the distribution of the chain at time $t$ and the stationary distribution as follows: $$d(t) = \sup_{x \in\Omega} ||P^t(x,\cdot) -\pi||_{TV}$$ \begin{Def}[Mixing time] Let $X$ be a Markov chain that converges to a stationary distribution $\pi$. The mixing time is defined by: $$t_{mix}(\epsilon) = \min\{t \mid d(t) < \epsilon\} $$ \end{Def} The mixing time is just the minimum time the chain needs to be \textit{``close''} to the stationary distribution, within a real $\epsilon$. Using the previous definitions we can state the following results. \begin{teo}[Convergence theorem] Suppose that a Markov chain is irreducible and aperiodic with stationary distribution $\pi$. Then there exist constants $\alpha \in (0,1)$ and $C > 0$ such that: $$d(t) \leq C\alpha^t$$ \end{teo} \begin{dem} The proof of this theorem is omitted; \cite{Levin2006}, page 54, provides a full proof. \end{dem} This result guarantees that the simulated chain will eventually converge to the target distribution. The next theorem has a great impact in statistical applications where the target distribution is the posterior distribution of an unknown quantity and an expected value is the estimated value. \begin{teo}[Ergodic Theorem] Let $g$ be a real-valued function defined on $\Omega$. If $X$ is an irreducible Markov chain with stationary distribution $\pi$, then for any starting distribution $f$, $$P_f \left[\lim_{t \to \infty}\dfrac{1}{t}\sum_{s=0}^{t-1}g(X_s) = E_\pi(g)\right] = 1$$ \end{teo} \begin{dem} The proof of this theorem is omitted; \cite{Levin2006}, page 59, provides a full proof. \end{dem} The idea of the \textit{ergodic theorem} for Markov chains is that time averages equal state-space averages. In other words, the expected value under the stationary distribution can be approximated by the average of the chain's simulated values. Mixing times will be important in the next sections, since the complexity of an MCMC algorithm is measured through the chain's mixing time.
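As a concrete illustration of these definitions (an example of our own, not taken from \cite{Levin2006}), the following Python sketch computes $d(t)$ and the resulting mixing time for a small two-state chain; the particular transition matrix is an arbitrary choice.
\begin{verbatim}
import numpy as np

# Two-state chain: transition matrix P and its stationary distribution pi (pi P = pi).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi = np.array([2/3, 1/3])          # solves pi = pi P for this particular P
assert np.allclose(pi @ P, pi)

def d(t):
    """Worst-case total variation distance between P^t(x, .) and pi over start states x."""
    Pt = np.linalg.matrix_power(P, t)
    return max(0.5 * np.abs(Pt[x] - pi).sum() for x in range(P.shape[0]))

def t_mix(eps):
    """Smallest t with d(t) < eps."""
    t = 1
    while d(t) >= eps:
        t += 1
    return t

print(d(1), d(10), t_mix(0.01))    # d(t) decays geometrically, as in the convergence theorem
\end{verbatim}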
\subsection*{An overview of complexity classes} Let P be the class of decision problems that can be decided by a deterministic Turing machine (TM) in time polynomial in the size of the input. In the same way, the class NP consists of the decision problems that can be decided by a nondeterministic Turing machine (NTM) in polynomial time. The class NP of decision problems can be extended to the class $\#P$, which consists of all counting problems associated with the decision problems in NP. A general definition of $\#P$ is as follows. \begin{Def} A problem $\Pi \in \#P$ if there is a nondeterministic polynomial time Turing machine that, for any instance $I$ of $\Pi$, has a number of accepting computations exactly equal to the number of distinct solutions of $I$. Moreover, $\Pi$ is $\#P$-complete if any problem $\Gamma \in \#P$ can be reduced to $\Pi$ by a polynomial time Turing reduction. \end{Def} One example of a $\#P$-complete problem is \textit{0,1-Perm}, provided by \cite{Valiant}: \textit{determine the number of perfect matchings in a bipartite graph $G$}. There is no known efficient deterministic approximation algorithm for any $\#P$-complete problem, but there exist efficient randomized algorithms that provide good approximations \cite{Jerrum}. \\ \begin{Def} A probabilistic Turing machine (PTM) is a kind of nondeterministic Turing machine with coin flip choices instead of nondeterministic choices. A coin flip choice is an unbiased random choice between two successor states. \end{Def} A PTM follows only one possible branch of a nondeterministic choice, whereas an NTM follows them all. The probability that $M$ will follow a given computation branch $b$ is $$P[b] = 2^{-k},$$ where $k$ is the number of coin flip choices on branch $b$. The probability that $M$ accepts $w$ is $$P[\text{PTM accepts } w] =\sum_{b \in A}P[b],$$ where $A$ is the set of all accepting branches. $M$ is said to reject $w$ if and only if it does not accept $w$: $$P[\text{PTM rejects } w] = 1 - P[\text{PTM accepts } w].$$ \begin{Def} Let PP be the complexity class of languages $S$ accepted by a probabilistic Turing machine $M$ in the following sense, when $M$ is run on $w \in \Sigma^*$: $$ w \in S \Longrightarrow P[\text{PTM accepts } w] > 0.5 $$ $$ w \not\in S \Longrightarrow P[\text{PTM accepts } w] \leq 0.5$$ \end{Def} It can be proved that $NP \subset PP$ (\cite{Daria} provides a full proof). Another important class derived from PP is the bounded-error class BPP, defined as follows. \begin{Def} Let BPP be the complexity class of languages $S$ accepted by a probabilistic Turing machine $M$ in the following sense, when $M$ is run on $w \in \Sigma^*$: $$ w \in S \Longrightarrow P[\text{PTM accepts } w] > 0.5 +\epsilon $$ $$ w \not\in S \Longrightarrow P[\text{PTM accepts } w] \leq 0.5 -\epsilon$$ where $\epsilon$ is a constant with $0 < \epsilon < 0.5$. \end{Def} By definition $BPP \subset PP$. Another useful class is the randomized polynomial time class RP, which differs from BPP in that it has only one-sided error. \begin{Def} Let RP be the complexity class of languages $S$ accepted by a probabilistic Turing machine $M$ in the following sense, when $M$ is run on $w \in \Sigma^*$: $$ w \in S \Longrightarrow P[\text{PTM accepts } w] > \epsilon $$ $$ w \not\in S \Longrightarrow P[\text{PTM accepts } w] = 0$$ where $0 < \epsilon < 1.$ \end{Def} The class coRP consists of the complements of languages in RP. Some interesting relationships between classes that can be proved directly are $RP \subseteq NP$, $coRP \subseteq coNP$ and $coRP \cup RP \subseteq BPP$. A further discussion is provided by \cite{Daria}.
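The one-sided error that characterises RP and coRP can be illustrated with Freivalds' classical randomized check for matrix multiplication. The following Python sketch is an outside illustration (it is not from the references above): a correct product $AB=C$ is always accepted, while a wrong product is rejected in each independent trial with probability at least $1/2$.
\begin{verbatim}
# Freivalds' randomized verification of a matrix product (one-sided error, coRP-style).
import numpy as np

def freivalds(A, B, C, trials=20, seed=0):
    """Return True if A @ B == C is accepted by all random trials.
    A correct product is always accepted; a wrong one is rejected in a single
    trial with probability >= 1/2, so the error probability after `trials`
    independent trials is at most 2 ** (-trials)."""
    rng = np.random.default_rng(seed)
    n = C.shape[1]
    for _ in range(trials):
        r = rng.integers(0, 2, size=n)              # random 0/1 vector
        if not np.array_equal(A @ (B @ r), C @ r):  # O(n^2) work instead of O(n^3)
            return False                            # certainly a wrong product
    return True                                     # probably a correct product

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
print(freivalds(A, B, A @ B))      # True: correct products are always accepted
print(freivalds(A, B, A @ B + 1))  # almost surely False
\end{verbatim}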
\section{MCMC and statistical inference} The basic idea of statistical inference is to estimate important quantities of a given model using the available data. There are two principal schemes in statistical inference, classical and Bayesian. In the first scheme, some restrictions are imposed, such as treating all the unknown quantities as constant real values. In the Bayesian scheme, every unknown value is considered a random variable. \subsection*{Elements of inference} \begin{Def} Let $X =\{X_1,X_2,\ldots,X_n\}$ be a set of independent and identically distributed random variables with a probability distribution $P$, where $P$ depends on an unknown quantity $\theta$. Then $X$ is called a random sample, and $\theta$ an unknown parameter. \end{Def} Most classical statistical work estimates $\theta$, regarded as a constant real value, using only the available data via the likelihood function. \begin{Def} Let $X$ be a random sample with unknown parameter $\theta$. The likelihood function $L:\mathbb{R} \rightarrow \mathbb{R}^+$ is defined as follows: $$L(\theta) = L(\theta;X) = P(X/\theta) = \prod_{i=1}^nf(X_i/\theta)$$ \end{Def} A point estimator is simply a real value used to approximate $\theta$; the most famous method is the maximum likelihood estimator. \begin{Def} Let $X =\{X_1,X_2,\ldots,X_n\}$ be a random sample with unknown quantity $\theta$. The maximum likelihood estimator (MLE) of $\theta$ is a real value that maximizes the likelihood function: $$MLE = \arg \max_{\theta} \{L(\theta;X)\} $$ \end{Def} In the Bayesian inference scheme the parameter $\theta$ is estimated using the available data and all the external information provided by the expert. The basic idea of Bayesian statistics is to assume that every unknown parameter $\theta$ is a random variable on a probability space $\Theta$, and to estimate its conditional distribution using Bayes' theorem as follows: $$P(\theta/X) = \dfrac{P(X/\theta)P(\theta)}{P(X)}$$ The previous equation can be simplified as follows: $$P(\theta/X) = k P(X/\theta)P(\theta)$$ $$P(\theta/X) \propto P(X/\theta)P(\theta)$$ where: \begin{itemize} \item $P(\theta/X)$ is the posterior distribution for $\theta$ \item $P(X/\theta)$ is the likelihood of the model \item $P(\theta)$ is the prior distribution for $\theta$ \item $k^{-1} = P(X)$ is a proportionality constant that does not directly affect the model \end{itemize} The major problem of Bayesian inference is finding the proportionality constant $k$, which can be computed directly by $$P(X) = \int_{\Theta}P(X,\theta)d\theta = \int_{\Theta}P(X/\theta)P(\theta)d\theta.$$ For most prior distributions, finding $k$ is a genuinely hard task, which made this scheme impractical for many years until MCMC methods were discovered by statisticians. For more details see \cite{Miggon} and \cite{degroot19886}. Before introducing the Metropolis-Hastings algorithm we present the Bayesian analogues of point estimation. \begin{Def} Let $L(\theta,\delta(X))$ be the loss of estimating $\theta$ using the decision rule $\delta$. We define the risk $$R(\delta) = \int_{\Theta}L(\theta,\delta)P(\theta/X)d\theta$$ as the expected posterior loss. \end{Def} We say that a decision rule is optimal if it has minimum risk. In what follows we derive the Bayesian point estimates associated with two different loss functions. \begin{prop} Let $L_2 = (\delta -\theta)^2$ be the loss associated with the estimation of $\theta$ by $\delta$. Then the optimal estimator of $\theta$ is the posterior expected value $E_{\theta/X}[\theta]$.
\end{prop} \begin{dem} Let $\delta_2 = E[\theta]$. Then $$R(\delta) = E[(\delta -\theta)^2] = E[((\delta -\delta_2) +(\delta_2-\theta))^2]$$ $$R(\delta) =(\delta -\delta_2)^2 + E[(\delta_2 -\theta)^2] +2(\delta -\delta_2)E[\delta_2 -\theta]$$ $$R(\delta) =(\delta -\delta_2)^2 +V(\theta),$$ since $E[\delta_2 -\theta]=0$. Then for any value of $\delta$ different from $\delta_2$, $$R(\delta_2) = V(\theta) \leq (\delta- \delta_2)^2 + V(\theta) = R(\delta).$$ Therefore $\delta_2 = E[\theta]$ is the point estimate associated to $L_2$. \end{dem} This estimator is known as the posterior mean. Another important estimator is the maximum a posteriori (MAP) estimator, also called the GMLE (generalized maximum likelihood estimator) because it maximizes the likelihood function penalized by the prior distribution. \begin{prop} Let $L_\infty = \lim_{\epsilon \to 0}I_{|\theta -\delta|}([\epsilon,\infty[)$ be the loss associated with the estimation of $\theta$ by $\delta$. Then the optimal estimator of $\theta$ is the posterior mode. \end{prop} \begin{dem} $$R(\delta) = E[L_\infty] = \lim_{\epsilon \to 0}\left[\int_{-\infty}^{\delta -\epsilon}P(\theta/X)d\theta + \int_{\delta + \epsilon}^{\infty}P(\theta/X)d\theta \right] = 1 -\lim_{\epsilon \to 0}P(\delta -\epsilon < \theta < \delta +\epsilon).$$ For small $\epsilon$ we have $P(\delta -\epsilon < \theta < \delta +\epsilon)\approx 2\epsilon P(\delta/X)$, so minimizing $R(\delta)$ amounts to maximizing $P(\delta/X)$. Hence the mode is the point estimate associated to $L_\infty$. \end{dem} \subsection*{Computational complexity of statistical inference} We now present the computational complexity of some statistical problems. \cite{Arora2012} provides a demonstration that computing the MLE estimator of $\theta$ is NP-hard, where the problem can be written as follows:\\ \begin{algorithm}[H] \SetAlgoLined \caption{ML ESTIMATION: $MLE(p,\Theta)$} \KwInput{A random sample $X = \{X_1,\ldots,X_n\}$ with distribution $P$} \KwOutput{A parameter $\theta = \arg \max_{\theta_0 \in\Theta}L(\theta_0;X) $} \end{algorithm} It is easy to see that estimating the generalized maximum likelihood (GMLE) or maximum a posteriori estimator is just an MLE problem penalized by the prior $P(\theta)$: $$MAP(\theta) = \arg \max_{\theta_0 \in\Theta}P(\theta_0/X) = \arg \max_{\theta_0 \in\Theta}L(\theta_0;X)P(\theta_0),$$ since the constant $k$ does not change the maximizer. Then, estimating the maximum a posteriori estimator can be written as follows: \\ \begin{algorithm}[H] \SetAlgoLined \caption{MAP ESTIMATION: $MAP(p,\Theta)$} \KwInput{A random sample $X = \{X_1,\ldots,X_n\}$ with distribution $P$} \KwOutput{A parameter $\theta = \arg \max_{\theta_0 \in\Theta}P(\theta_0/X) $} \end{algorithm} \cite{Tosh} proves that the $MAP(p,\Theta)$ problem is reducible in polynomial time to the $MLE(p,\Theta)$ problem and is therefore NP-hard. In the same work, \cite{Tosh} shows that approximate sampling problems for finding the sampling distribution of an estimator\footnote{Approximate sampling problems are usually treated with resampling methods such as the bootstrap and jackknife algorithms, which are equivalent to randomized Turing machines.} do not admit a polynomial time Turing machine unless NP = RP. \subsection*{The Metropolis-Hastings algorithm} We now present the most common algorithm used in Bayesian inference, physics applications and some other theoretical problems.
Let $X = \{X_1,X_2,\ldots,X_n\}$ be a random sample with unknown parameter $\theta$, and let $P(\theta)$ be a chosen prior, where: \begin{itemize} \item $m$ is the number of simulations \item $f(\theta/X) = P(X/\theta)P(\theta)$ is the non-normalized posterior distribution (the target distribution) \item $J(\theta_t/\theta_{t-1})$ is the Markov chain's jump (proposal) distribution \item the Metropolis-Hastings ratio is $$r =\dfrac{f(\theta_*/X)J(\theta_{t-1}/\theta_*)}{f(\theta_{t-1}/X)J(\theta_*/\theta_{t-1})}$$ \end{itemize} The Metropolis-Hastings algorithm goes as follows:\\ \begin{algorithm}[H] \caption{Metropolis-Hastings algorithm} \SetAlgoLined \KwResult{A sample $\theta_1,\theta_2,\ldots,\theta_n$ of $P(\theta/X)$} Draw a random value $\theta_0$ such that $f(\theta_0/X) > 0 $\; \For{$t \in \{1,2,3,\ldots,m\}$}{ Draw a candidate value $\theta_*$ from $J(\theta/\theta_{t-1})$\; Calculate the Metropolis-Hastings ratio $r$\; Set $\alpha = \min(r,1)$\; Draw a value $U$ from a $U(0,1)$ distribution\; \eIf{$U < \alpha$}{ Set $\theta_t = \theta_*$\; }{ Set $\theta_t =\theta_{t-1} $\; } } \end{algorithm} Using the fact that $r(\theta_t,\theta_{t-1}) = 1/r(\theta_{t-1},\theta_t)$, it is easy to prove the detailed balance identity \begin{equation} f(\theta_{t-1}/X)J(\theta_t/\theta_{t-1})\alpha(\theta_{t-1},\theta_t) = f(\theta_t/X)J(\theta_{t-1}/\theta_t)\alpha(\theta_t,\theta_{t-1}). \end{equation} The algorithm simulates an irreducible, time-reversible Markov chain whose transition kernel is built from the proposal $J$ and the acceptance probability $\alpha$; by \textit{Proposition 1} it has a stationary distribution $\pi$. Dividing equation (3) by $$k = P(X) = \int_{\Theta}f(\theta/X)d\theta$$ shows that the stationary distribution $\pi$ is $P(\theta/X)$. The simulated Markov chain $\{\theta_i\}_{i=1}^m$ is a sample of the posterior distribution $P(\theta/X)$, and by the \textit{Ergodic theorem}, the posterior mean is approximated by: $$E[\theta/X] \approx \dfrac{1}{m}\sum_{t=1}^{m}\theta_t$$ A further discussion and a complete proof are provided by \cite{Brooks2011}. There exist many Metropolis-Hastings (MH) variants; most of them are improvements or particular cases of the algorithm presented above. Random-walk Metropolis simply uses a random walk as the jump distribution \cite{Sherlock}. \cite{Brooks2011} provides a demonstration that the Gibbs sampler is just a particular case of MH. The Metropolis-adjusted Langevin algorithm (MALA) \cite{papamarkou2016} and Hamiltonian Monte Carlo (HMC) \cite{betancourt2017} are MH with an optimally adjusted random jump. The No-U-Turn Sampler (NUTS) proposed by \cite{hoffman14a} and implemented in \cite{Stan} is just an automatically tuned HMC. \subsection*{Computational complexity of MCMC} Giving a complexity bound for an MCMC method by counting the number of operations is a difficult task, mainly because it depends on the number of estimated parameters $\theta$, the selected proposal distribution $f$, and the number of simulations $m$. Another perspective is to bound the complexity in terms of the chain's mixing time to its stationary distribution. \cite{Nayantara2011} prove that establishing whether a Markov chain at time $t$ is close to stationarity is an NP-hard problem. Even so, \cite{roberts2014}, using diffusion limits, give a lower bound of order $d$ for a Metropolis-Hastings chain to converge to the stationary distribution, where $d$ is the dimension of the parameter space $\Theta$. On the other hand, in the case of large data sets, \cite{Belloni2011} provides an upper bound of $O(d^2)$ for MH to converge to its stationary distribution.
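Before continuing with convergence bounds for more refined samplers, the following minimal Python sketch makes the Metropolis-Hastings algorithm above concrete on an assumed toy model (a normal likelihood with a normal prior); the model, step size and numbers of samples are illustrative choices and not part of the reviewed material. With a symmetric random-walk proposal, the jump distribution $J$ cancels in the ratio $r$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(loc=2.0, scale=1.0, size=50)      # toy data: N(theta, 1), true theta = 2

def log_f(theta):
    """Log of the unnormalized posterior f(theta/X) = L(theta;X) * P(theta),
    with a N(0, 10^2) prior on theta and known unit variance for the data."""
    log_lik = -0.5 * np.sum((X - theta) ** 2)
    log_prior = -0.5 * theta ** 2 / 10.0 ** 2
    return log_lik + log_prior

def metropolis_hastings(m=20000, step=0.5, theta0=0.0):
    chain = np.empty(m)
    theta = theta0
    for t in range(m):
        proposal = theta + step * rng.normal()       # symmetric random-walk jump J
        log_r = log_f(proposal) - log_f(theta)       # J cancels, so r uses only f
        if np.log(rng.uniform()) < min(log_r, 0.0):  # accept with probability min(r, 1)
            theta = proposal
        chain[t] = theta
    return chain

chain = metropolis_hastings()
print("posterior mean estimate:", chain[2000:].mean())  # ergodic average after burn-in
\end{verbatim}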
For MCMC methods with optimal jump adjustment such as MALA, \cite{roberts2014} provide a lower bound of order $d^{1/3}$ for convergence, and \cite{papamarkou2016} provides a comparison of convergence bounds for MALA, HMC, MMALA (Manifold-MALA) and SMMALA (simplified MMALA).\\ \section{MCMC and counting problems} Following the work of \cite{Sinclair1988}, we show that MCMC methods are equivalent to probabilistic Turing machines, and provide an overview of how these methods can be used for efficient approximation of some counting problems in the complexity class $\#P$.\\ The basic idea is to present a fully-polynomial almost uniform sampler (FPAUS) algorithm as a simulated finite Markov chain that converges to a uniform stationary distribution (an MCMC method on a finite state space $\Omega$). Finally, we show the utility of MCMC for self-reducible counting problems that admit randomized algorithms. Let $R$ be a counting problem with no known exact solution in polynomial time; an approximate counting solution may still be possible if $R$ admits a randomized Turing machine as follows. \begin{Def}[Fully polynomial randomized approximation scheme] A randomized approximate counter for a relation $R$ over a finite alphabet $\Sigma$, with ratio given by a real valued function $$\rho:\mathbb{N} \rightarrow \mathbb{R}^+,$$ is a probabilistic Turing machine $C$ whose output for every $x \in \Sigma^*$ and parameters $\epsilon> 0$ and $\delta < 1$ is a non-negative real-valued random variable $X(x,\epsilon)$ satisfying $$P\left(X(x,\epsilon)\rho(|x|)^{-1} \leq \#R(x) \leq X(x,\epsilon) \rho(|x|) \right) \geq 1 -\delta.$$ The algorithm is fully polynomial in $|x|$, $\epsilon^{-1}$ and $-\log(\delta)$. \end{Def} The significance of the lower bound $1 - \delta$ in the previous definition lies in the fact that it allows a count approximation in polynomial time for which the probability of producing a bad estimate is low. \begin{Def}[Fully-polynomial almost uniform sampler] An almost uniform sampler for a relation $R$ over a finite alphabet $\Sigma$ is a probabilistic Turing machine $C$ whose output for every $x \in \Sigma^*$ and a parameter $\epsilon > 0$ is a random variable $X(x,\epsilon)$ satisfying: \begin{itemize} \item $X(x,\epsilon)$ takes values in a set $R \cup \{y_0\}$, where $y_0 \not\in \Sigma$ and $$R \neq \emptyset \Longrightarrow P(X = y_0) \leq 1/2; $$ \item there exists a function $\phi:\Sigma^*\times\mathbb{R}^+ \rightarrow\, ]0,1]$ such that for all $y \in \Sigma^*$ $$y \not\in R \Longleftrightarrow P(X = y) = 0,$$ $$y \in R \Longleftrightarrow (1+ \epsilon)^{-1}\phi(x,\epsilon) \leq P(X = y) \leq (1+ \epsilon)\phi(x,\epsilon). $$ \end{itemize} The algorithm is fully polynomial in $|x|$ and $-\log(\epsilon)$. \end{Def} It is not obvious that an FPAUS algorithm is equivalent to a Markov chain. For this reason, we present the concept of self-reducibility, which allows us to visualize the problem as a tree $T$; constructing a Markov chain as a random path in $T$ is then intuitive.
\begin{Def} A relation $R$ is polynomial time self-reducible if: \begin{itemize} \item There exists a polynomial time computable length function $I_R:\Sigma^* \rightarrow \mathbb{N}$ such that $I_R(x) = O(|x|^{k})$ for some $k > 0$, and $(x,y) \in R \Longrightarrow |y| = I_R(x)$ for all $x,y \in \Sigma^*$. \item For all $x \in \Sigma^*$ with $I_R(x) = 0$, the predicate $(x,\Lambda) \in R$ can be tested in polynomial time. \item There exist polynomial time computable functions $\phi:\Sigma^*\times\Sigma^* \rightarrow \Sigma^*$ and $\sigma: \Sigma^* \rightarrow \mathbb{N}$ satisfying: $$\sigma(x) = O(\log|x|), $$ $$I_R(x) > 0 \text{ if and only if } \sigma(x) > 0,$$ $$|\phi(x,w)| \leq |x| \text{ for all } x,w \in \Sigma^*,$$ $$I_R(\phi(x,w)) = \max\{I_R(x)-|w|,0\} \text{ for all } x,w \in \Sigma^*.$$ \end{itemize} \end{Def} The solutions of a self-reducible relation can be constructed inductively, and this inductive construction can be organised explicitly in a tree structure. For each $x \in \Sigma^*$ with $R(x) \neq \emptyset$, the tree of derivations $T(x)$ is a rooted tree in which each vertex $v$ bears both a problem instance label and a partial solution label. The constructed sampler then views the vertices of the tree of derivations as the states of a Markov chain, in which there is a non-zero transition probability between two states if they are adjacent in the tree.\\ Finally, a result from \cite{Sinclair1988} is presented; it shows that every self-reducible counting problem that admits an FPRAS also admits an FPAUS. Therefore, an approximate solution can be obtained by simulating a Markov chain with an almost uniform stationary distribution in polynomial time. \begin{teo} Let $R$ be self-reducible. If there exists a polynomial-time approximate counting scheme for $R$ within ratio $1 + O(n^2)$, where $n$ is the input size, then there exists a fully polynomial almost uniform sampler for $R$. \end{teo} \begin{dem} A full proof is given by \cite{Sinclair1988}, Theorems 4.4 and 4.5. \end{dem} \end{document}
\begin{document} \title[Rigidity of the \'{A}lvarez class]{Rigidity of the \'{A}lvarez class of Riemannian foliations with nilpotent structure Lie algebras} \author{Hiraku Nozawa} \address{Graduate School of Mathematical Sciences, University of Tokyo, 3-8-1 Komaba, Meguro, Tokyo, 153-8914, Japan} \email{[email protected]} \begin{abstract} We show that if the structure algebra of a Riemannian foliation $\mathcal{F}$ on a closed manifold $M$ is nilpotent, the exponential of the integral of the \'{A}lvarez class of $(M,\mathcal{F})$ along every closed path is an algebraic number. As a corollary, we prove that the \'{A}lvarez class and the geometrical tautness of Riemannian foliations on a closed manifold $M$ are invariant under deformation if the fundamental group of $M$ has polynomial growth. \end{abstract} \maketitle \section{Introduction} A Riemannian foliation $\mathcal{F}$ on a closed manifold $M$ is geometrically taut if there exists a bundle-like metric $g$ on $M$ such that every leaf of $\mathcal{F}$ is a minimal submanifold of $(M,g)$. Geometrical tautness of Riemannian foliations is a purely differential geometric property, but it is known to have remarkable relations with the cohomological properties of $(M,\mathcal{F})$. For example, Masa~\cite{Mas} characterized the geometrical tautness of $(M,\mathcal{F})$ by the nontriviality of the top degree part of the basic cohomology of $(M,\mathcal{F})$, and \'{A}lvarez L\'{o}pez~\cite{Alv} defined a cohomology class $[\kappa_b]$ of degree $1$ for $(M,\mathcal{F})$ which vanishes if and only if $(M,\mathcal{F})$ is geometrically taut. We call $[\kappa_b]$ the \'{A}lvarez class of $(M,\mathcal{F})$. In this paper, we show that \'{A}lvarez classes of Riemannian foliations have a rigidity property if the structure Lie algebra is nilpotent. As a corollary of the rigidity of \'{A}lvarez classes, we obtain the invariance of geometrical tautness of Riemannian foliations under deformation if the fundamental group of the ambient manifold has polynomial growth. The main result in this paper is the following theorem: \begin{thm}\label{expalg} Let $M$ be a closed manifold and $\mathcal{F}$ be a Riemannian foliation on $M$ with nilpotent structure Lie algebra. Then $e^{\int_{\gamma} [\kappa_b]}$ is an algebraic number for every $\gamma$ in $\pi_1(M)$, where $[\kappa_b]$ is the \'{A}lvarez class of $(M,\mathcal{F})$. \end{thm} \noindent Theorem~\ref{expalg} is proved by a computation of \'{A}lvarez classes in terms of the holonomy of the basic fibration in Section 2 and an application of Mal'cev theory in Section 3. We state two corollaries of Theorem~\ref{expalg}. Let $M$ be a closed manifold whose fundamental group has polynomial growth. Then the structure Lie algebra of every Riemannian foliation on $M$ is nilpotent according to Carri\`{e}re~\cite{Car}. By Theorem~\ref{expalg}, the \'{A}lvarez classes of Riemannian foliations on $M$ are contained in a countable subset of $H^1(M;\mathbb{R})$ which is independent of the foliation. On the other hand, if we have a smooth family $\{\mathcal{F}^{t}\}_{t \in [0,1]}$ of Riemannian foliations on $M$, their \'{A}lvarez classes vary continuously in $H^1(M;\mathbb{R})$ as shown in~\cite{Noz}. Hence we have the following corollary: \begin{cor}\label{alvinv} Let $M$ be a closed manifold whose fundamental group has polynomial growth and $\{\mathcal{F}^{t}\}_{t \in [0,1]}$ be a smooth family of Riemannian foliations on $M$ over $[0,1]$.
Then we have $[\kappa_{b}^{t}]=[\kappa_{b}^{t'}]$ in $H^1(M;\mathbb{R})$ for any $t$ and $t'$ in $[0,1]$ where $[\kappa_{b}^{t}]$ is the \'{A}lvarez class of $(M,\mathcal{F}^{t})$. \end{cor} Since $(M,\mathcal{F}^{t})$ is geometrically taut if and only if the \'{A}lvarez class of $(M,\mathcal{F}^{t})$ vanishes according to \'{A}lvarez L\'{o}pez~\cite{Alv}, we have the following corollary of Corollary~\ref{alvinv}: \begin{cor}\label{definv} Let $M$ be a closed manifold whose fundamental group has polynomial growth and $\{\mathcal{F}^{t}\}_{t \in [0,1]}$ be a smooth family of Riemannian foliation on $M$. Then one of the following is true: \begin{enumerate} \item $(M,\mathcal{F}^{t})$ is geometrically taut for every $t$ in $[0,1]$. \item $(M,\mathcal{F}^{t})$ is not geometrically taut for every $t$ in $[0,1]$. \end{enumerate} \end{cor} If the dimension of $\mathcal{F}$ is $1$, the structure Lie algebra of $\mathcal{F}$ is always abelian according to Caron-Carri\`{e}re~\cite{CaCa} or Carri\`{e}re~\cite{Car}. Hence the conclusion of Corollary~\ref{alvinv} and Corollary~\ref{definv} follow from Theorem~\ref{expalg} without the assumption on the growth of the fundamental group of $M$ for the cases where $\mathcal{F}$ is $1$-dimensional. Note that we can show Theorem~\ref{expalg} by more direct computation in~\cite{Noz} if the dimension of $\mathcal{F}$ is $1$, using a theorem of Caron-Carri\`{e}re~\cite{CaCa} which claims $1$-dimensional Lie foliations with dense leaves are linear flows on tori with irrational slopes and computation of the mapping class group of the group of diffeomorphisms preserving a linear foliation with dense leaves on tori by Molino and Sergiescu~\cite{MoSe}. The author expresses his gratitude to Jes\'{u}s Antonio \'{A}lvarez L\'{o}pez, Jos\'{e} Royo Prieto Ignacio and Steven Hurder. The conversation with \'{A}lvarez L\'{o}pez and Royo Prieto on \'{A}lvarez classes gave the author a definite clue to carry out the computation of \'{A}lvarez classes in Section 2. The idea on conditions for the growth of the fundamental group of the ambient manifold came from the conversation with Hurder. \section{Computation of \'{A}lvarez classes in terms of the basic fibration} We prepare the notation to state the main result in this section. Let $(M,\mathcal{F})$ be a closed manifold with an orientable transversely parallelizable foliation. Let $\pi \colon M \longrightarrow W$ be the basic fibration of $(M,\mathcal{F})$. Fix a point $x$ on $M$. We denote a fiber of $\pi$ which contains $x$ by $N$ and the restriction of $\mathcal{F}$ to $N$ by $\mathcal{G}$. Then $(N,\mathcal{G})$ is a Lie foliation by Molino's structure theorem. We denote the dimension and codimension of $(N,\mathcal{G})$ by $k$ and $l$ respectively. Let $\mathfrak{g}$ be the structure algebra of $(N,\mathcal{G})$, which has dimension $l$. Throughout Section 2, we assume the unimodularity of $\mathfrak{g}$, that is, \begin{equation}\label{unimod} H^{l}(\mathfrak{g}) \cong \mathbb{R}. \end{equation} Note that assumption~\eqref{unimod} is satisfied if $\mathfrak{g}$ is nilpotent and may not be satisfied if $\mathfrak{g}$ is solvable. For the basic definition of the spectral sequences of foliated manifolds, we refer to a paper by Kamber and Tondeur~\cite{KaTo2}. Let $E_{2}^{0,k}$ be the $(0,k)$-th $E_2$ term of the spectral sequence of $(N,\mathcal{G})$. 
Then the dimension of $E_{2}^{0,k}$ is $1$ by the assumption~\eqref{unimod}, since we have \begin{equation} H^{l}(\mathfrak{g}) \cong H_{B}^{l}(N/\mathcal{G}) \cong E_{2}^{0,k} \end{equation} where the first isomorphism follows from the denseness of the leaves of $(N,\mathcal{G})$ and the second isomorphism follows from the duality theorem of Masa in~\cite{Mas}. Hence $\Aut (E_{2}^{0,k})$ is canonically identified with $\mathbb{R}-\{0\}$. We denote the orientation preserving automorphism group of $E_{2}^{0,k}$ by $\Aut_{+} (E_{2}^{0,k})$, which is identified with the set of positive real numbers $\mathbb{R}_{>0}$. Let $\Diff(N,\mathcal{G})$ be the group of diffeomorphisms on $N$ which map each leaf of $\mathcal{G}$ to a leaf of $\mathcal{G}$ and denote its mapping class group in the $C^{\infty}$ topology by $\pi_{0}(\Diff(N,\mathcal{G}))$. We have a canonical action $\Phi \colon \pi_{0}(\Diff(N,\mathcal{G})) \longrightarrow \Aut(E_{2}^{0,k})$ defined in the following way: Let $H^{k}(\mathcal{G})$ be the $k$-th leafwise cohomology group of $(N,\mathcal{G})$, which is identified with $E_{1}^{0,k}$ in the spectral sequence of $(N,\mathcal{G})$. $E_{2}^{0,k}$ is identified with the kernel of $d_{1,0} \colon H^{k}(\mathcal{G}) \longrightarrow E_{1}^{1,k}$ where $d_{1,0}$ is the map induced on $E_1$ terms from the composition of the de Rham differential and the projection $C^{\infty}(\wedge^{k+1} T^{*}N) \longrightarrow C^{\infty}((T\mathcal{G}^{\perp})^{*} \otimes \wedge^{k} T^{*}\mathcal{G})$ determined by a Riemann metric $g$ of $(N,\mathcal{G})$ (See~\cite{KaTo2}.). $\Diff(N,\mathcal{G})$ acts to $E_{2}^{0,k}$, since $\Diff(N,\mathcal{G})$ acts to the leafwise cohomology group by pulling back leafwise volume forms and this action preserves $E_{2}^{0,k}$. This action descends to an action of $\pi_{0}(\Diff(N,\mathcal{G}))$, since the action of the identity component of $\Diff(N,\mathcal{G})$ to the leafwise cohomology group is trivial according to the integrable homotopy invariance of leafwise cohomology shown by El Kacimi Alaoui~\cite{ElK}. We show the following proposition which computes the period of the \'{A}lvarez class $[\kappa_b]$ of $(M,\mathcal{F})$ in terms of the holonomy of the basic fibration: \begin{prop}\label{holonomyformula} The diagram \begin{equation}\label{diag1} \xymatrix{ \pi_1(M,x) \ar[rr]^{\int [\kappa_b]} \ar[d]_{\pi_{*}} & & \mathbb{R} \\ \pi_1(W,\pi(x)) \ar[r]^{hol_{\pi}} & \pi_{0}(\Diff(N,\mathcal{G})) \ar[r]^{\Phi} & \Aut_{+}(E_{2}^{0,k}) \ar[u]_{\log}} \end{equation} is commutative where $\int [\kappa_b]$ is the period map of $[\kappa_b]$, $hol_{\pi}$ is the holonomy map of the basic fibration $\pi \colon M \longrightarrow W$ , $\log$ is defined through the identification of $\Aut(E_{2}^{0,k})$ with $\mathbb{R}_{>0}$ and $\Phi$ is the canonical action described above. \end{prop} \begin{proof} To show the commutativity of the diagram~\eqref{diag1} for an element $[\gamma]$ of $\pi_1(M,x)$ which is represented by a smooth path $\gamma$, it suffices to show the case of $W=S^1$ pulling back the fibration $\pi$ by $\gamma$. By the assumption~\eqref{unimod}, the \'{A}lvarez class of $(N,\mathcal{G})$ cannot be nontrivial. Hence the restriction of \'{A}lvarez class of $(M,\mathcal{F})$ to a fiber of $\pi$ is zero. We will compute the integration of \'{A}lvarez class of $(M,\mathcal{F})$ along a path which gives a generator of $\pi_1(S^1,\pi(x))$. 
We denote the holonomy of the $(N,\mathcal{G})$-bundle $\pi$ over $S^1$ along a path which gives a generator of $\pi_1(S^1,\pi(x))$ by $f$ and its action on $E_{2}^{0,k}$ by $f^*$. We write $\overline{\mathcal{F}}$ for the foliation defined by the fibers of $\pi$. We fix a bundle-like metric $g'$ on $(M,\mathcal{F})$. Let $E$ be the vector bundle of rank $1$ over $S^1$ whose fiber $E_t$ over $t$ in $S^1$ is the $(0,k)$-th $E_2$ term of the spectral sequence of $(\pi^{-1}(t),\mathcal{F}|_{\pi^{-1}(t)})$. We define an affine connection $\nabla$ on $E$ by \begin{equation} \nabla s = \int_{\pi} (d_{1,0} s) \end{equation} for $s$ in $C^{\infty}(E)$ where $d_{1,0}$ is the map induced on $E_1$ terms from the composition of the de Rham differential and the projection $C^{\infty}(\wedge^{k+1} T^{*}M) \longrightarrow C^{\infty}((T\mathcal{F}^{\perp})^{*} \otimes \wedge^{k} T^{*}\mathcal{F})$ determined by a Riemann metric $g$ of $(M,\mathcal{F})$, and $\int_{\pi}$ is the integration on fibers on $\pi$ with respect to the first component of $C^{\infty}(T^{*}M) \otimes C^{\infty}(E)$ using the fiberwise volume form $vol_{\pi}$ of $\pi$ determined by $g'$ which is defined by $\int_{\pi} (\alpha \otimes h[\chi]) = \big(\int_{\pi} h\alpha \wedge vol_{\pi} \big) \otimes [\chi]$ for $\alpha$ in $C^{\infty}(T^{*}M)$ and $h$ in $C^{\infty}(M)$. The Rummler's formula \begin{equation}\label{Rum} d_{1,0} \chi = -\kappa \wedge \chi \end{equation} in~\cite{Rum} implies $\nabla$ is an connection on $E$. Note that $d_{1,0}$ coincides with the differential on $E_1$ terms of the spectral sequence of $(M,\mathcal{F})$ which is determined only by $\mathcal{F}$ and hence $\nabla$ is independent of the metric $g'$. The connection $\nabla$ is flat, since every connection on a vector bundle over $S^1$ is flat. The holonomy of $(E,\nabla)$ which corresponds to a generator of $\pi_1(S^1,\pi(x))$ is shown to be equal to $f^{*}$ in the following way: We pull back the $(N,\mathcal{G})$ bundle $\pi$ by the canonical map $\iota \colon [0,1] \longrightarrow [0,1]/ \{0\} \sim \{1\}=S^1$ and denote the total space of $\iota^{*}\pi$ by $(M',\mathcal{F}')$. Fix a trivialization $(M',\mathcal{F}') \cong (N,\mathcal{G}) \times [0,1]$ as a $(N,\mathcal{G})$ bundle, then we have an induced trivialization of $\iota^{*}E$. Since $\iota^{*}\nabla$ is independent of the metric, we can assume that $\iota^{*}\nabla$ is defined by a product metric. Then the parallel section of $(\iota^{*}E,\iota^{*}\nabla)$ is the constant sections with respect to the trivialization. Since $(E,\nabla)$ is obtained by identifying the boundaries of $(\iota^{*}E,\iota^{*}\nabla)$ by $f^*$, the holonomy of $(E,\nabla)$ is equal to $f^{*}$. We will show that the holonomy of $(E,\nabla)$ which corresponds to a generator of $\pi_1(S^1,\pi(x))$ is equal to $e^{\int_{S^1} \kappa_b}$ where the holonomy of $(E,\nabla)$ is regarded as a real number. For this purpose, we construct a bundle-like metric $g$ on $(M,\mathcal{F})$ such that each leaf of $\overline{\mathcal{F}}$ is a minimal manifold of $(M,g)$ and the leafwise volume form $\chi^{t}$ of $(\pi^{-1}(t),\mathcal{F}|_{\pi^{-1}(t)})$ determined by $g$ satisfies $d_{1,0}^{t} \chi^{t}=0$ where $d_{1,0}^{t}$ is the transverse component of de Rham differential of $({\pi^{-1}(t)},\mathcal{F}|_{{\pi^{-1}(t)}})$ defined in the same way as $d_{1,0}$ for $(N,\mathcal{G})$. 
By the duality theorem of Masa in~\cite{Mas} and the assumption~\eqref{unimod}, we have a bundle-like metric $g_0$ on $(N,\mathcal{G})$ such that each leaf of $\mathcal{G}$ is a minimal submanifold of $(N,g_0)$. Note that the leafwise cohomology class $[\chi_0]$ of the characteristic form $\chi_0$ of $(N,\mathcal{G},g_0)$ is an eigenvector of $f^*$, since $[\chi_0]$ is a generator of $E_{2}^{0,k}$ and $f^*$ preserves $E_{2}^{0,k}$. Hence we can write $f^*[\chi_0]=c[\chi_0]$ for some real number $c$. By Moser's argument in~\cite{Ghy}, we can isotope $f$ to $f_1$ in $\Diff(N,\mathcal{G})$ so that $f^{*}_{1}\chi_0=c\chi_0$. Let $\omega_0$ be a basic transverse volume form of $(N,\mathcal{G})$. Then we have $f^{*}_{1}\omega_0=b\omega_0$ for some real number $b$, since the leaves of $(N,\mathcal{G})$ are dense. Since $f_{1}$ induces the identity map on $H^{k+l}(N ; \mathbb{R})$ and a pairing \begin{equation}\label{pair} E_{2}^{0,k} \times E_{2}^{l,0} \longrightarrow E_{2}^{l,k}=H^{k+l}(N ; \mathbb{R}) \end{equation} is natural with respect to $\Diff(N,\mathcal{G})$, we have $b=\frac{1}{c}$. Hence we have $f^{*}_{1} (\chi_0 \wedge \omega_0)=\chi_0 \wedge \omega_0$. We define a bundle-like metric $g$ of $(M,\mathcal{F})$ by $g= \rho(t)g_{0} +(1-\rho(t))f^{*}_{1}g_{0} + dt \otimes dt$ where $\rho$ is a smooth function on $[0,1]$ which satisfies $\rho(t)=0$ near $0$ and $\rho(t)=1$ near $1$. We denote the characteristic form of $(M,\mathcal{F},g)$ by $\chi$. Then we have $\chi = \rho(t)\chi_{0} +(1-\rho(t))f^{*}_{1}\chi_{0}$. By Rummler's formula, we have $d_{1,0}\chi_0=0$ and hence $d_{1,0}^{t}(\chi|_{\pi^{-1}(t)})=0$ for every $t$ in $S^1$. Note that each leaf of $\overline{\mathcal{F}}$ is a minimal submanifold of $(M,g)$, since $f_1$ preserves the volume form of $(N,g_0)$. We calculate the holonomy of $(E,\nabla)$ using $g$. We denote the map which assigns the leafwise cohomology class $[\chi|_{\pi^{-1}(t)}]$ to $t$ in $S^1$ by $[\chi]$. Then $[\chi]$ is a global section of $E$, since $d_{1,0}^{t} \chi|_{\pi^{-1}(t)}=0$ is satisfied by the construction of $\chi$. By Rummler's formula~\eqref{Rum}, we have \begin{equation}\label{conn} \nabla [\chi] = -\Big(\int_{\pi} \kappa \Big) \otimes [\chi]. \end{equation} We show that $\int_{\pi} \kappa$ is a closed $1$-form on $S^1$ which satisfies $\kappa_b(\frac{\partial}{\partial t})=\int_{\pi} \kappa(\frac{\partial}{\partial t})$ on $S^1$. By the minimality of the fibers of $\pi$ with respect to $g$ and Rummler's formula for $(M,\overline{\mathcal{F}})$, $dvol_{\pi}$ has no component which contains an $m$-form tangent to the fibers of $\pi$, where $m$ is the dimension of the fibers of $\pi$. Hence we have \begin{equation} \begin{array}{rl} d\int_{\pi} \kappa & = d\int_{\pi} \kappa_b \\ & =\int_{\pi} \Big(d\kappa_b \wedge vol_{\pi} - \kappa_b \wedge dvol_{\pi} \Big) \\ & = \int_{\pi} \Big(d\kappa_b \wedge vol_{\pi}\Big) \\ & = \int_{\pi} d\kappa_b. \end{array} \end{equation} Since $\kappa_b$ is closed by Corollary~3.5 of \'{A}lvarez L\'{o}pez~\cite{Alv}, we have $d\int_{\pi} \kappa=0$. $\kappa_b(\frac{\partial}{\partial t})=\int_{\pi} \kappa(\frac{\partial}{\partial t})$ is clear by the definition of $\kappa_b$. Hence $\nabla$ is a flat connection defined by a closed form $\int_{\pi} \kappa$ and we can show that the holonomy of $(E,\nabla)$ is equal to $e^{\int_{S^1} \int_{\pi} \kappa}=e^{\int_{S^1} \kappa_b}$ in a standard way.
\end{proof} \section{Application of Mal'cev Theory} Let $(M,\mathcal{F})$ be a closed manifold with a Riemannian foliation with nilpotent structure Lie algebra. Since the \'{A}lvarez class of $(M,\mathcal{F})$ is defined by the integration along fibers of the \'{A}lvarez class of $(M^{1},\mathcal{F}^{1})$ where $M^{1}$ is the transverse orthonormal frame bundle of $(M,\mathcal{F})$ and $\mathcal{F}^{1}$ is the lift of $\mathcal{F}$, which is transversely parallelizable, Theorem~\ref{expalg} is reduced to the case where $(M,\mathcal{F})$ is transversely parallelizable. Moreover we can assume the orientability of $\mathcal{F}$, since a square root of an algebraic number is algebraic. To show Theorem~\ref{expalg} in the cases where $(M,\mathcal{F})$ is orientable and transversely parallelizable, it suffices to show the following Proposition~by Proposition~\ref{holonomyformula}: \begin{prop}\label{algebraicaction} Let $(N,\mathcal{G})$ be a closed manifold with a Lie foliation with nilpotent structure Lie algebra of which every leaf is dense in $N$. Then the image of $\Phi \colon \pi_{0}(\Diff(N,\mathcal{G})) \longrightarrow \Aut(E_{2}^{0,k})$ is contained in the set of algebraic numbers where $\Aut(E_{2}^{0,k})$ is canonically identified with $\mathbb{R}-\{0\}$. \end{prop} We show a lemma which reduces Proposition~\ref{algebraicaction} to a problem on nilpotent Lie groups and will apply Mal'cev theory to complete the proof of Proposition~\ref{algebraicaction}. Let $N$ be a closed manifold and $\mathcal{G}$ be a Lie foliation on $N$ of dimension $k$ and codimension $l$ of which every leaf is dense in $N$. We denote the structure Lie algebra of $(N,\mathcal{G})$ by $\mathfrak{g}$. We fix a point $x$ on $N$. Then the holonomy homomorphism $hol \colon \pi_1(N,x) \longrightarrow G$ of $(N,\mathcal{G})$ is determined where $G$ is the simply connected structure Lie group determined by $\mathfrak{g}$. We denote the canonical action $\Diff(N,\mathcal{G}) \longrightarrow \Aut(E_{2}^{0,l})$ by $\Psi$ where $E_{2}^{l,0}$ is the $(0,l)$-th $E_2$ term of the spectral sequence of $(N,\mathcal{G})$. Note that $E_{2}^{l,0}$ is isomorphic to the $l$-th basic cohomology group of $(N,\mathcal{G})$~\cite{Hae} and hence isomorphic to $H^{l}(\mathfrak{g})$, since the leaves of $\mathcal{G}$ are dense in $N$. \begin{lem}\label{liefol} If we assume \begin{equation}\label{unimod2} H^{l}(\mathfrak{g}) \cong \mathbb{R}, \end{equation} then we have the following: \begin{enumerate} \item The diagram \begin{equation} \xymatrix{ \Diff(N,\mathcal{G}) \ar[r]^{\Phi} \ar[rd]_{\Psi} & \Aut(E_{2}^{0,k}) \\ & \Aut(E_{2}^{l,0}) \ar[u]_{i}} \end{equation} commutes where $i$ is the map which takes the inverse through identification of $\Aut(E_{2}^{0,k})$ and $\Aut(E_{2}^{l,0})$ with $\mathbb{R}-\{0\}$. \item \label{2} We denote the image of $hol$ by $\Gamma$. A foliation preserving diffeomorphism $f \colon (N,\mathcal{G}) \longrightarrow (N,\mathcal{G})$ which fixes $x$ induces an automorphism of $G$ which preserves $\Gamma$. \item We denote the group of diffeomorphisms which fix $x$ and preserve $\mathcal{G}$ by $\Diff(N,x,\mathcal{G})$, the group of automorphisms of $G$ which preserve $\Gamma$ by $\Aut(G, \Gamma)$ and the homomorphism $\Diff(N,x,\mathcal{G}) \longrightarrow \Aut(G)$ which is obtained by~\eqref{2} by $\iota$. 
Then the diagram \begin{equation} \xymatrix{ \Diff(N,x,\mathcal{G}) \ar[r]^{\Phi} \ar[rd]_{\iota} & \Aut(E_{2}^{l,0}) \\ & \Aut(G,\Gamma) \ar[u]_{A} } \end{equation} commutes where $A$ is the canonical action of the automorphism group of Lie group to the Lie algebra cohomology under the identification of $E_{2}^{l,0}$ with $H^{l}(\mathfrak{g})$. \end{enumerate} \end{lem} \begin{proof} We prove (1). Let $f$ be an element of $\Diff(N,\mathcal{G})$. Since the pairing~\eqref{pair} is natural with respect to $\Diff(N,\mathcal{G})$ and the action of $f$ to $H^{k+l}(N;\mathbb{R})$ is trivial, we have the conclusion. We prove (2). Let $X$ be a basic transverse vector field on $(N,\mathcal{G})$ which we regard as an element of $\mathfrak{g}$. Then $f_{*}X$ is again an element of $\mathfrak{g}$ where $f_{*} \colon C^{\infty}(TN/T\mathcal{G}) \longrightarrow C^{\infty}(TN/T\mathcal{G})$ is the map induced by $f$, since every leafwise constant function of $(N,\mathcal{G})$ is constant. Hence we have an automorphism of $\mathfrak{g}$ induced by $f_{*}$, which we denote by $f_{*}$ again. Let $(\tilde{N},\tilde{\mathcal{G}})$ be the universal cover of $(N,\mathcal{G})$. We fix a point $\tilde{x}$ on $\tilde{N}$ such that $u(\tilde{x})=x$ where $u$ is the projection of the universal covering $\tilde{N} \longrightarrow N$. We can lift $f$ to $\tilde{f} \colon (\tilde{N},\tilde{\mathcal{G}}) \longrightarrow (\tilde{N},\tilde{\mathcal{G}})$ so that $\tilde{f}$ fixes $\tilde{x}$. $\tilde{f}$ induces a diffeomorphism $\overline{f}$ on $G=\tilde{N}/\tilde{\mathcal{G}}$. Let $g$ be an automorphism of $G$ which is induced by an element $(f_{*})^{-1}$ of $\Aut(\mathfrak{g})$. Then $\overline{f} \circ g = L_{h}$ for some $h$ in $G$, since $d(\overline{f} \circ g)$ preserves every left-invariant vector field on $G$. But since $\overline{f} \circ g$ fixes $e$, we have $\overline{f} \circ g = id_{G}$. We prove the latter part. By the definition of $hol$, we have $hol(\gamma)=p \circ \tilde{\gamma}(1)$ for each element $\gamma$ in $\pi_1(N,x)$ where $p$ is the canonical projection $p \colon \tilde{N} \longrightarrow \tilde{N}/\tilde{\mathcal{G}}=G$ and $\tilde{\gamma}$ is the lift of $\gamma$ to $\tilde{N}$ such that $\tilde{\gamma}(0)=\tilde{x}$. Then we have $\overline{f}(hol(\gamma))=\overline{f} \circ p \circ \tilde{\gamma}(1)= p \circ \tilde{f} \circ \tilde{\gamma}(1) = p \circ \tilde{f \circ \gamma}(1) = hol(f \circ \gamma)$, since $\tilde{f}$ fixes $\tilde{x}$. (3) is clear from the construction. \end{proof} Note that every element of $\pi_{0}(\Diff(N,\mathcal{G}))$ is represented by an element of $\Diff(N,x,\mathcal{G})$. In fact, $(N,\mathcal{F})$ is a Lie foliation and the identity component of $\Diff(N,\mathcal{G})$ acts to $N$ transitively. Hence Proposition~\ref{algebraicaction} is reduced to the algebraicity of $A \colon \Aut(G,\Gamma) \longrightarrow \Aut(E^{0,l}_2)$. We prove the following proposition applying Mal'cev theory. \begin{prop}\label{malcev} Let $G$ be an $l$-dimensional simply connected nilpotent Lie group and $\Gamma$ be a finitely generated dense subgroup of $G$. We put $\mathfrak{g}=\Lie(G)$. If we denote the group of automorphisms of $G$ which preserves $\Gamma$ by $\Aut(G,\Gamma)$, then the image of the canonical action \begin{equation} A \colon \Aut(G,\Gamma) \longrightarrow \Aut(H^{l}(\mathfrak{g})) \end{equation} is contained in the set of algebraic numbers where $\Aut(H^{l}(\mathfrak{g}))$ is canonically identified with $\mathbb{R}-\{0\}$. 
\end{prop} \begin{proof} At first, we apply Mal'cev theory following Ghys~\cite{Ghy}. We refer to~\cite{Mal} and Chapter II of~\cite{Rag} on Mal'cev theory. We have a simply connected nilpotent Lie group $H$ and an embedding $i \colon \Gamma \longrightarrow H$ such that $i(\Gamma)$ is a uniform lattice of $H$ by Mal'cev theory. The homomorphism $i^{-1} \colon i(\Gamma) \longrightarrow G$ can be extend to a homomorphism $\pi \colon H \longrightarrow G$ again by Mal'cev theory. $\pi$ is clearly surjective, since $\Gamma$ is dense in $G$. We also have a lift $\tilde{f}$ of $f$ which is an automorphism of $H$ and preserves $i(\Gamma)$ again by Mal'cev theory, since $i(\Gamma)$ is a uniform lattice of $H$. Then we have a diagram: \begin{equation}\label{diag2} \xymatrix{ H \ar[dd]_{\pi} \ar[rrr]^{\tilde{f}} & & & H \ar[dd]^{\pi} \\ & \Gamma \ar[ul]^{i} \ar[ld] \ar[r]^{f} & \Gamma \ar[ur]_{i} \ar[rd] \\ G \ar[rrr]^{f} & & & G } \end{equation} which satisfies the following conditions: \begin{description} \item[(a)] $H$ is nilpotent, \item[(b)] $\pi$ is surjective and \item[(c)] $i$ is an embedding of a uniform lattice. \end{description} Proposition~\ref{malcev} follows from the following lemma: \begin{lem}\label{ind} If we have a diagram~\eqref{diag2} which satisfies conditions (a), (b) and (c), then $A(f) \colon H^{l}(\mathfrak{g}) \longrightarrow H^{l}(\mathfrak{g})$ is algebraic under the canonical identification of $\Aut(H^{l}(\mathfrak{g}))=\mathbb{R}-\{0\}$. \end{lem} We prove Lemma~\ref{ind} inductively on the rank of $H$ as a nilpotent Lie group. If $H$ is abelian, $(H,\Gamma)$ is isomorphic to $(\mathbb{R}^{m},\mathbb{Z}^{m})$. Then $f$ is an element of $\SL(m;\mathbb{Z})$. $f$ induces a homomorphism $\hat{f} \colon \wedge^{l} (\mathbb{R}^{m})^{*} \longrightarrow \wedge^{l} (\mathbb{R}^{m})^{*}$ and $\hat{f}$ has integral entries with respect to the standard basis, since the entries of $\hat{f}$ are minor determinants of $f$. $H^{l}(\mathfrak{g})$ is generated by an element represented by left invariant volume forms of $G$ and left invariant volume forms of $G$ are eigenvectors of $\hat{f}$ by the diagram~\eqref{diag2}. Then $A(f)$ is algebraic, since $A(f)$ is the eigenvalue of $\hat{f}$ with respect to left invariant volume forms of $G$. Assume that we have the diagram~\eqref{diag2} which satisfies the conditions (a),(b) and (c) where the rank of $H$ is $n$ and the claim of Lemma~\ref{ind} is correct if the rank of $H$ is less than $n$. Then $A([f,f])$ and $A(H_1(f))$ are algebraic under the identification of $H^{l'}([\mathfrak{g},\mathfrak{g}])$ and $H^{l''}(\mathfrak{g}/[\mathfrak{g},\mathfrak{g}])$ with $\mathbb{R}-\{0\}$ where $l'=\dim [\mathfrak{g},\mathfrak{g}]$ and $l''=\dim \mathfrak{g}/[\mathfrak{g},\mathfrak{g}]$ by the following diagrams: \begin{equation} \xymatrix{ [H,H] \ar[dd]_{[\pi,\pi]} \ar[rrr]^{[\tilde{f},\tilde{f}]} & & & [H,H] \ar[dd]^{[\pi,\pi]} \\ & \Gamma \cap [H,H] \ar[ul]^{[i,i]} \ar[ld] \ar[r]^{[f,f]} & \Gamma \cap [H,H] \ar[ur]_{[i,i]} \ar[rd] \\ [G,G] \ar[rrr]^{[f,f]} & & & [G,G] } \end{equation} and \begin{equation} \xymatrix{ H/[H,H] \ar[dd]_{H_1(\pi)} \ar[rrr]^{H_1(\tilde{f})} & & & H/[H,H] \ar[dd]^{H_1(\pi)} \\ & p(\Gamma) \ar[ul]^{H_1(i)} \ar[ld] \ar[r]^{H_1(f)} & p(\Gamma) \ar[ur]_{H_1(i)} \ar[rd] \\ G/[G,G] \ar[rrr]^{H_1(f)} & & & G/[G,G] } \end{equation} where $p$ is the canonical projection $H \longrightarrow [H,H]$. 
Then we have the algebraicity of $A(f)$, since we have $A(f)=A([f,f])\cdot A(H_1(f))$ under the identification of $\Aut(H^{l}(\mathfrak{g})), H^{l'}([\mathfrak{g},\mathfrak{g}])$ and $H^{l''}(\mathfrak{g}/[\mathfrak{g},\mathfrak{g}])$ with $\mathbb{R}-\{0\}$ by the following diagram: \begin{equation} \xymatrix{ 0 \ar[r] & [G,G] \ar[r] \ar[d]^{[f,f]} & G \ar[r] \ar[d]^{f} & G/[G,G] \ar[r] \ar[d]^{H_1(f)} & 0 \\ 0 \ar[r] & [G,G] \ar[r] & G \ar[r] & G/[G,G] \ar[r] & 0.} \end{equation} Hence Lemma~\ref{ind} and Proposition~\ref{malcev} are proved. \end{proof} \section{Examples} \subsection{Torus fibration over $S^1$} Let $A$ be an element of $\SL(n;\mathbb{Z})$ with an eigenvector $v$ with eigenvalue $\lambda$. Assume that the components of $v$ are linearly independent over $\mathbb{Q}$ and $\lambda$ is a positive real number. For example, take \begin{equation} A=\begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}, v= \begin{pmatrix} \frac{\sqrt{5}-1}{2} \\ -1 \end{pmatrix}, \lambda=\frac{3-\sqrt{5}}{2}. \end{equation} $A$ induces a diffeomorphism $\overline{A}$ on $T^{n}=\mathbb{R}^{n}/\mathbb{Z}^{n}$. We denote the mapping torus $T^{n} \times [0,1]/(\overline{A}w,0) \sim (w,1)$ of $\overline{A}$ by $M$ and define a map $\pi \colon M \longrightarrow S^1$ by $\pi([(w,t)])=t$, which gives a $T^n$ fibration over $S^1$. Since $v$ is an eigenvector of $A$, we have a foliation $\mathcal{F}$ on $M$ formed by the lines parallel to $v$ in each $T^{n}$ fiber of $\pi$. By the assumption on $v$, the leaves of $\mathcal{F}$ are dense in the fibers of $\pi$. $(M,\mathcal{F})$ is Riemannian since we can construct a bundle-like metric $g$ of $(M,\mathcal{F})$ by $g = \rho(t)g_0 + (1-\rho(t))(\overline{A}^{-1})^{*}g_0 + dt \otimes dt$ where $g_0$ is the flat metric on $T^n$ and $\rho(t)$ is a smooth function which satisfies $\rho(t)=0$ near $0$ and $\rho(t)=1$ near $1$. Note that the structure Lie algebra of $(M,\mathcal{F})$ is abelian. In this case, the mean curvature form of $(M,\mathcal{F},g)$ is a closed form on $S^1$ and we can calculate the \'{A}lvarez class $[\kappa_b]$ of $(M,\mathcal{F})$ directly to obtain \begin{equation} \int_{S^1}[\kappa_b]= \log \lambda. \end{equation} \subsection{A Riemannian foliation on a solvmanifold} We present an example of a $2$-dimensional Riemannian foliation on a $6$-dimensional nilmanifold bundle over $S^1$ with nonabelian nilpotent structure Lie algebra and nontrivial \'{A}lvarez class. Let $p$ be a prime and $\alpha$ be an element of $\mathbb{Z}(\sqrt{p})$ which has an inverse $\beta$ in $\mathbb{Z}(\sqrt{p})$. We put $k=\GCD(\alpha_2,\beta_2)$ where $\alpha_2$ and $\beta_2$ are integers which satisfy $\alpha=\alpha_1+\alpha_2\sqrt{p}$ and $\beta=\beta_1+\beta_2\sqrt{p}$ for some integers $\alpha_1$ and $\beta_1$. Let $G$ be a nilpotent Lie group which is defined by \begin{equation} G=\Big\{ \begin{pmatrix} 1 & x & z \\ 0 & 1 & y \\ 0 & 0 & 1 \end{pmatrix} \Big| x,y,z \in \mathbb{R} \Big\} \end{equation} and $\Gamma$ be a subgroup of $G$ which is generated by \begin{equation} A_1=\begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, A_2=\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}, A_3=\begin{pmatrix} 1 & k\sqrt{p} & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, A_4=\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & k\sqrt{p} \\ 0 & 0 & 1 \end{pmatrix}.
\end{equation} $\Gamma$ is dense in $G$, since we have $[A_1,A_2]=\begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$, $[A_1,A_4]=\begin{pmatrix} 1 & 0 & k\sqrt{p} \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$ and the Lie algebra of the closure of $\Gamma$ is equal to the Lie algebra of $G$. We define a Lie group $H$ by $H=G \oplus \mathbb{R}^{2}$. Let $\iota'$ be a homomorphism $\iota' \colon \Gamma \longrightarrow \mathbb{R}^{2}$ which is defined by \begin{equation} \iota'\Big(\begin{pmatrix} 1 & x_1 + x_2 \sqrt{p} & z \\ 0 & 1 & y_1 + y_2 \sqrt{p} \\ 0 & 0 & 1 \end{pmatrix} \Big)=(x_2,y_2) \end{equation} where $x_1,x_2,y_1$ and $y_2$ are integers and we define an embedding $\iota \colon \Gamma \longrightarrow H$ by $\iota(g)=(g,\iota'(g))$ for every $g$. Then $\iota(\Gamma)$ is a uniform lattice of $H$. The fibers of the first projection $H \longrightarrow G$ are preserved by the right multiplication of $\iota(\Gamma)$ to $H$ and define a $G$-Lie foliation $\mathcal{G}$ of dimension $2$ and codimension $3$ on $H/\iota(\Gamma)$. Let $\begin{pmatrix} a' & b' \\ c' & d' \end{pmatrix}$ be an element of $\SL(2;\mathbb{Z})$ and put $\begin{pmatrix} a & b \\ c & d \end{pmatrix}=\begin{pmatrix} a'\alpha & b'\alpha \\ c'\alpha & d'\alpha \end{pmatrix}$. Let $f$ be a map $G \longrightarrow G$ defined by \begin{equation} f\begin{pmatrix} 1 & x & z \\ 0 & 1 & y \\ 0 & 0 & 1 \end{pmatrix}=\begin{pmatrix} 1 & ax+by & \alpha^{2} z+acx^2+bdy^2+bcxy \\ 0 & 1 & cx+dy \\ 0 & 0 & 1 \end{pmatrix}. \end{equation} Then $f$ is a homomorphism from $G$ to $G$ by the definition of $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ and $\alpha$. Since $f$ is bijective, $f$ is an automorphism. Clearly $f$ preserves $\Gamma$. Moreover $f(\Gamma)=\Gamma$ follows from the definition of $k$ and $\Gamma$. Note that there exists an element $g$ of $\Gamma$ which satisfies $g=\begin{pmatrix} 1 & 0 & \delta \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$ for every $\delta$ in $\mathbb{Z}(\sqrt{p})$ which has a form $\delta=\delta_1+\delta_2 k\sqrt{p}$ where $\delta_1$ and $\delta_2$ are integers. Since $\iota(\Gamma)$ is a uniform lattice of $H$, the automorphism $f|_{\Gamma}$ of $\Gamma$ is uniquely extended to an automorphism $\tilde{f}$ of $H$ and induces a diffeomorphism $\overline{f}$ on $H/\iota(\Gamma)$. Since $\overline{f}$ preserves $\mathcal{G}$, we have a foliation $\mathcal{F}$ on the mapping torus $M$ of $\overline{f}$ which is Riemannian. By $f^{*}(dx \wedge dy \wedge dz)=\alpha^{3} dx \wedge dy \wedge dz$ and Proposition~\ref{holonomyformula}, we have \begin{equation} \int_{S^1}[\kappa_b]= \log \alpha^3 \end{equation} where $S^1$ is the base space of the canonical fibration $M \longrightarrow S^1$ of the mapping torus and $[\kappa_b]$ is the \'{A}lvarez class of $(M,\mathcal{F})$. \end{document}
\begin{document} \title{Approximation Algorithms for Connected Maximum Cut and Related Problems} \begin{abstract} An instance of the \textsf{Connected Maximum Cut}\ problem consists of an undirected graph $G=(V,E)$ and the goal is to find a subset of vertices $S \subseteq V$ that maximizes the number of edges in the cut $\delta(S)$ such that the induced graph $G[S]$ is connected. We present the first non-trivial $\Omega(\frac{1}{\log n})$ approximation algorithm for the \textsf{Connected Maximum Cut}\ problem in general graphs using novel techniques. We then extend our algorithm to edge weighted case and obtain a poly-logarithmic approximation algorithm. Interestingly, in contrast to the classical Max-Cut problem that can be solved in polynomial time on planar graphs, we show that the \textsf{Connected Maximum Cut}\ problem remains NP-hard on unweighted, planar graphs. On the positive side, we obtain a polynomial time approximation scheme for the \textsf{Connected Maximum Cut}\ problem on planar graphs and more generally on bounded genus graphs. \end{abstract} \section{Introduction} \label{sec:introduction} Submodular optimization problems have, in recent years, received a considerable amount of attention \cite{badanidiyuru2014fast, buchbinder2012tight, buchbinder2014submodular, calinescu2007maximizing,chekuri2011submodular, feige2011maximizing, nemhauser1978analysis} in algorithmic research. In a general \textsf{Submodular Maximization} problem, we are given a non-negative submodular\footnote{A function $f$ is called submodular if $f(S) + f(T) \geq f(S \cup T) + f(S \cap T)$ for all $S, T \subseteq U$.} function over the power set of a universe $U$ of elements, $f:2^U\rightarrow \mathbb{R}^+\cup \{0\}$ and the goal is to find a subset $S\subseteq U$ that maximizes $f(S)$ so that $S$ satisfies certain pre-specified constraints. In addition to their practical relevance, the study of submodular maximization problems has led to the development of several important theoretical techniques such as the continuous greedy method and multi-linear extensions~\cite{calinescu2007maximizing} and the double greedy~\cite{buchbinder2012tight} algorithm, among others. In this study, we are interested in the problem of maximizing a submodular set function over vertices of a graph, such that the selected vertices induce a connected subgraph. Motivated by applications in coverage over wireless networks, Kuo et al.~\cite{kuo2013maximizing} consider the problem of maximizing a monotone, submodular function $f$ subject to connectivity and cardinality constraints of the form $|S|\leq k$ and provide an $\Omega(\frac{1}{\sqrt{k}})$ approximation algorithm. For a restricted class of monotone, submodular functions that includes the covering function\footnote{In this context, a covering function is defined as $f(S) = \sum_{v \in \mathbb{N}^+(S)} weight(v)$ where $\mathbb{N}^+(S)$ is the closed neighborhood of the set of vertices $S$}, Khuller et al.~\cite{khuller2014analyzing} give a constant factor approximation to the problem of maximizing $f$ subject to connectivity and cardinality constraints. In the light of these results, it is rather surprising that no non-trivial approximation algorithms are known for the case of general (non-monotone) submodular functions. 
Formally, we are interested in the following problem, which we refer to as \textsf{Connected Submodular Maximization (CSM)}: Given a simple, undirected graph $G=(V,E)$ and a non-negative submodular set function $f:2^V\rightarrow \mathbb{R}^+\cup\{0\}$, find a subset of vertices $S\subseteq V$ that maximizes $f(S)$ such that $G[S]$ is connected. We take the first but important step in this direction and study the problem in the case of one of the most important non-monotone submodular functions, namely the \textsf{Cut} function. Formally, given an undirected graph $G=(V,E)$, the goal is to find a subset $S \subseteq V$, such that $G[S]$ is connected and the number of edges that have exactly one end point in $S$, referred to as the cut function $\delta(S)$, is maximized. We refer to this as the \textsf{Connected Maximum Cut}\ problem. Further, we also consider an edge weighted variant of this problem, called the \textsf{Weighted }\textsf{Connected Maximum Cut}\ problem, where function to be maximized is the total weight of edges in the cut $\delta(S)$. We now outline an application to the image segmentation problem that seeks to identify ``objects'' in an image. Graph based approaches for image segmentation \cite{felzenszwalb2004efficient, petrov2005image} represent each pixel as a vertex and weighted edges represent the dissimilarity (or similarity depending on the application) between adjacent pixels. Given such a graph, a connected set of pixels with a large weighted cut naturally corresponds to an object in the image. Vicente et al.~\cite{vicente2008graph} show that even for interactive image segmentation, techniques that require connectivity also perform significantly better that cut based methods alone. \subsection*{Related Work} \textsf{Max-Cut}\ is a fundamental problem in combinatorial optimization that finds applications in diverse areas. A simple randomized algorithm that adds each vertex to $S$ independently with probability $\slfrac{1}{2}$ gives a $0.5$-approximate solution in expectation. In a breakthrough result, Goemans and Williamson \cite{goemans1995improved} gave a $0.878$-approximation algorithm using semidefinite programming and randomized rounding. Further, Khot et al.~\cite{khot2007optimal} showed that this factor is optimal assuming the Unique Games Conjecture. Interestingly, the \textsf{Max-Cut}\ problem can be optimally solved in polynomial time in planar graphs by a curious connection to the matching problem in the dual graph~\cite{hadlock1975finding}. To the best of our knowledge, the \textsf{Connected Maximum Cut}\ problem has not been considered before our work. Haglin and Venkatesan~\cite{haglin1991approximation} showed that a related problem, where we require both sides of the cut, namely $S$ and $V \setminus S$, to be connected, is NP-hard in planar graphs. We note that the well studied \textsf{Maximum Leaf Spanning Tree} (MLST) problem (e.g. see \cite{solis19982}) is a special case of the \textsf{Connected Submodular Maximization} problem. We also note that recent work on graph connectivity under vertex sampling leads to a simple constant approximation to the \textsf{Connected Submodular Maximization} for highly connected graphs, i.e., for graphs with $\Omega(\log n)$ vertex connectivity. Proofs of these claims are presented in the Appendix~\ref{app:mlst} and \ref{app:conn} respectively. 
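To make the objective in the formal definition above concrete, the following is a minimal brute-force sketch (an illustration only: it enumerates all vertex subsets and therefore runs in exponential time, and the instance encoding, a vertex count together with an edge list, is an assumption chosen purely for simplicity). It reports a connected set with the largest unweighted cut.

\begin{verbatim}
from itertools import combinations

def is_connected(S, edges):
    """Check that the subgraph induced by the vertex set S is connected."""
    S = set(S)
    if not S:
        return False
    adj = {v: set() for v in S}
    for u, v in edges:
        if u in S and v in S:
            adj[u].add(v)
            adj[v].add(u)
    seen, stack = set(), [next(iter(S))]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(adj[u] - seen)
    return seen == S

def brute_force_cmc(n, edges):
    """Exhaustive Connected Maximum Cut on a tiny graph with vertices
    0..n-1: maximize |delta(S)| over all connected subsets S."""
    best, best_cut = None, -1
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            if is_connected(S, edges):
                cut = sum(1 for u, v in edges if (u in S) != (v in S))
                if cut > best_cut:
                    best, best_cut = set(S), cut
    return best, best_cut

# Example: on a 4-cycle any single vertex already achieves the optimum cut of 2.
print(brute_force_cmc(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))
\end{verbatim}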
We conclude this section by noting that connected variants of many classical combinatorial problems have been extensively studied in the literature and have been found to be useful. The best example for this is the \textsf{Connected Dominating Set} problem. Following the seminal work of Guha and Khuller~\cite{guha1998approximation}, the problem has found extensive applications (with more than a thousand citations) in the domain of wireless ad hoc networks as a \emph{virtual backbone} (e.g. see~\cite{das1997routing, du2013connected}). Few other examples of connected variants of classic optimization problems include \textsf{Group Steiner Tree}~\cite{garg1998polylogarithmic} (which can be seen as a generalization of a connected variant of \textsf{Set Cover}), \textsf{Connected Domatic Partition}~\cite{censor2015tight, censor2014new}, \textsf{Connected Facility Location}~\cite{eisenbrand2008approximating,swamy2004primal}, and \textsf{Connected Vertex Cover}~\cite{cygan2012deterministic}. \subsection*{Contribution and Techniques} \label{sec:our-contribution} Our key results can be summarized as follows. 1. We obtain the first $\Omega(\frac{1}{\log n})$ approximation algorithm for the \textsf{Connected Maximum Cut}\ (\textsf{CMC}) problem in general graphs. Often, for basic connectivity problems on graphs, one can obtain simple $O(\log n)$ approximation algorithms using a probabilistic embedding into trees with $O(\log n)$ stretch~\cite{fakcharoenphol2003tight}. Similarly, using the cut-based decompositions given by R\"{a}cke~\cite{racke2008optimal}, one can obtain $O(\log n)$ approximation algorithms for cut problems (e.g. Minimum Bisection). Interestingly, since the \textsf{CMC}\ problem has the flavors of both cut and connectivity problems simultaneously, neither of these approaches are applicable. Our novel approach is to look for \textsf{$\alpha$-thick trees}, which are basically sub-trees with ``high'' degree sum on the leaves. 2. For the \textsf{Weighted }\textsf{Connected Maximum Cut}\ problem, we obtain an $\Omega(\frac{1}{\log^2n})$ approximation algorithm. The basic idea is to group the edges into logarithmic number of weight classes and show that the problem on each weight class boils down to the special case where the weight of every edge is either $0$ or $1$. 3. We obtain a polynomial time approximation scheme for the \textsf{CMC}\ problem in planar graphs and more generally in bounded genus graphs. This requires the application of a stronger form of the edge contraction theorem by Demaine, Hajiaghayi and Kawarabayashi~\cite{demaine2011contraction} that may be of independent interest. 4. We show that the \textsf{CMC}\ problem remains NP-hard even on unweighted, planar graphs. This is in stark contrast with the regular \textsf{Max-Cut}\ problem that can be solved optimally in planar graphs in polynomial time. We obtain a polynomial time reduction from a special case of \textsf{3-SAT} called the \textsf{Planar Monotone 3-SAT} (\textsf{PM-3SAT}), to the \textsf{CMC}\ problem in planar graphs. This entails a delicate construction, exploiting the so called ``rectilinear representation'' of a \textsf{PM-3SAT}\ instance, to maintain planarity of the resulting \textsf{CMC}\ instance. \section{Approximation Algorithms for General Graphs} \label{sec:appr-algor-gener} In this section, we consider the \textsf{Connected Maximum Cut}\ problem in general graphs. 
In fact, we provide an $\Omega(\frac{1}{\log n})$ approximation algorithm for the more general problem in which edges can have weight 0 or 1 and the objective is to maximize the number of edges of weight 1 in the cut. This generalization will be useful later in obtaining a poly-logarithmic approximation algorithm for arbitrary weighted graphs. We denote the cut of a subset of vertices $S$ in a graph $G$, i.e., the set of edges in $G$ that are incident on exactly one vertex of $S$ by $\delta_G(S)$ or when $G$ is clear from context, just $\delta(S)$. Further, for two disjoint subsets of vertices $S_1$ and $S_2$ in $G$, we denote the set of edges that have one end point in each of $S_1$ and $S_2$, by $\delta_G(S_1, S_2)$ or simply $\delta(S_1,S_2)$. The formal problem definition follows - \noindent {\bf Problem Definition. }\textsf{\{0,1\}-Connected Maximum Cut} (\textsf{b-CMC}): Given a graph $G = (V,E)$ and a weight function $w:E \rightarrow \{0,1\}$, find a set $S \subset V$ that maximizes $\sum_{e \in \delta(S)} w(e)$ such that $G[S]$ induces a connected subgraph. We call an edge of weight $0$, a \textsf{0-edge} and that of weight $1$, a \textsf{1-edge}. Further, let $w(\delta(S)) = \sum_{e \in \delta(S)} w(e)$ denote the weight of the cut, i.e., the number of \textsf{1-edges} in the cut. We first start with a simple reduction rule that ensures that every vertex $v \in V$ has at least one \textsf{1-edge} incident on it. \begin{restatable}{clm}{onlygood} Given a graph $G = (V,E)$, we can construct a graph $G' = (V',E')$ in polynomial time, such that every $v' \in V'$ has at least one \textsf{1-edge} incident on it and $G'$ has a \textsf{b-CMC}\ solution $S'$ of weight at least $\psi$ if and only if $G$ has a \textsf{b-CMC}\ solution $S$ of weight at least $\psi$. \label{lem:only-good} \end{restatable} \begin{proof} Let $v \in V$ be a vertex in $G$ that has only \textsf{0-edges} incident on it and let $\{v_1,v_2,\ldots,v_l\}$ denote the set of its neighbors. Consider the graph $G'$ obtained from $G$ by deleting $v$ along with all its incident edges and adding \textsf{0-edges} between every pair of its neighbors $\{v_i,v_j\}$ such that $\{v_i, v_j\} \notin E$. Let $S$ denote a feasible solution of weight $\psi$ in $G$. If $v \notin S$, then clearly $S' = S$ is the required solution in $G'$. If $v \in S$, we set $S' = S \setminus \{v\}$ and we claim that $G'[S']$ is connected if $G[S]$ is connected and $\sum_{e \in \delta_{G'}(S')} w(e) = \sum_{e \in \delta_G(S)} w(e)$. The latter part of the claim is true since all the edges that we delete and add are \textsf{0-edges}. To prove the former part, notice that if $v$ is not a cut vertex in $G[S]$ then $G[S']$ must be connected. On the other hand, even if $v$ is a cut vertex, the new edges added among all pairs of $v$'s neighbors ensure that $G[S']$ is connected. Finally, to prove the other direction, suppose we have a feasible solution $S'$ of weight $\psi$ in $G'$. Now, if $G[S']$ is connected, then $S = S'$ is a feasible solution in $G$ of weight $\psi$. Otherwise, set $S = S' \cup \{v\}$. Since $v$ creates a path between all pairs of its neighbors, $G[S]$ is connected if $G'[S']$ is connected and is thus a feasible solution of the same weight. The proof of the lemma follows from induction. \qed \end{proof} From now on, we will assume, without loss of generality, that every vertex of $G$ has at least one \textsf{1-edge} incident on it. We now introduce some new definitions that would help us to present the main algorithmic ideas. 
We denote by $W_G(v)$ the total weight of edges incident on a vertex $v$ in $G$, i.e., $W_G(v) = \sum_{e:v\in e} w(e)$. In other words, $W_G(v)$ is total number of \textsf{1-edges} incident on $v$. Further let ${\eta}$ be the total number of \textsf{1-edges} in the graph. The following notion of an $\alpha$-thick tree is a crucial component of our algorithm. \begin{definition}[$\alpha$-Thick Tree] Let $G=(V,E)$ be a graph with $n$ vertices and ${\eta}$ \textsf{1-edges}. A subtree $T \subseteq G$ (not necessarily spanning), with leaf set $L$, is said to be $\alpha$-thick if $\sum_{v \in L} W_G(v) \geq \alpha {\eta}$. \end{definition} The following lemma shows that this notion of an $\alpha$-thick tree is intimately connected with the \textsf{b-CMC}\ problem. \begin{restatable}{lem}{alphaalpha} \label{lem:alphatt} For any $\alpha > 0$, given a polynomial time algorithm $A$ that computes an $\alpha$-thick tree $T$ of a graph $G$, we can obtain an $\frac{\alpha}{4}$-approximation algorithm for the \textsf{b-CMC}\ problem on $G$. \end{restatable} \begin{proof} Given a graph $G = (V,E)$ and weight function $w:E\rightarrow \{0,1\}$, we use Algorithm $A$ to compute an $\alpha$-thick tree $T$, with leaf set $L$. Let $m_L$ denote the number of \textsf{1-edges} in $G[L]$, the subgraph induced by $L$ in the graph $G$. We now partition $L$ into two disjoint sets $L_1$ and $L_2$ such that the number of \textsf{1-edges} in $\delta(L_1,L_2) \geq \frac{m_L}{2}$. This can be done by applying the standard randomized algorithm for \textsf{Max-Cut}\ (e.g. see ~\cite{motwani1995randomized}) on $G[L]$ after deleting all the \textsf{0-edges}. Now, consider the two connected subgraphs $T \setminus L_1$ and $T \setminus L_2$. We first claim that every \textsf{1-edge} in $\delta(L)$ belongs to either $\delta(T \setminus L_1)$ or $\delta(T \setminus L_2)$. Indeed, any \textsf{1-edge} $e$ in $\delta(L)$, belongs to one of the four possible sets, namely $\delta(L_2, T\setminus L)$, $\delta(L_1, V\setminus T)$, $\delta(L_1, T\setminus L)$ and $\delta(L_2, V\setminus T)$. In the first two cases, $e$ belongs to $\delta(T\setminus L_2)$ while in the last two cases, $e$ belongs $\delta(T\setminus L_1)$, hence the claim. Further, every \textsf{1-edge} in $\delta(L_1, L_2)$ belongs to both $\delta(T \setminus L_1)$ and $\delta(T \setminus L_2)$. Hence, we have - \begin{align} \label{eq:1} \sum_{e\in \delta(T \setminus L_1)} w(e) + \sum_{e\in \delta(T \setminus L_2)} w(e) &= \sum_{e\in \delta(L)} w(e) + 2\sum_{e\in \delta(L_1,L_2)} w(e) \\ \geq \sum_{e\in \delta(L)} w(e) + m_L &\geq \frac{1}{2} \sum_{v \in L} W_G(v) \geq \frac{\alpha {\eta}}{2} \end{align} Hence, the better of the two solutions $T \setminus L_1$ or $T \setminus L_2$ is guaranteed to have a cut of weight at least $\frac{\alpha {\eta}}{4}$, where ${\eta}$ is the total number of \textsf{1-edges} in $G$. To complete the proof we note that for any optimal solution $OPT$, $w(\delta(OPT)) \leq {\eta}$. \qed \end{proof} Thus, if we have an algorithm to compute $\alpha$-thick trees, Lemma \ref{lem:alphatt} provides an $\Omega(\alpha)$-approximation algorithm for the \textsf{b-CMC}\ problem. Unfortunately, there exist graphs that do not contain $\alpha$-thick trees for any non-trivial value of $\alpha$. For example, let $G$ be a \emph{path graph} with $n$ vertices and $m = n-1$ \textsf{1-edges}. It is easy to see that for any subtree $T$, the sum of degrees of the leaves is at most 4. 
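To make the reduction in the proof of Lemma~\ref{lem:alphatt} above concrete, the following is a minimal sketch under simplifying assumptions (the graph is given as an edge list with a $\{0,1\}$ weight map, and the thick tree is given by its vertex set and leaf set). It splits the leaves by the standard randomized \textsf{Max-Cut}\ and returns the better of the two connected candidates; the expected guarantee is the one computed in the proof, and the random split can be derandomized by the usual method of conditional expectations.

\begin{verbatim}
import random

def cut_weight(S, edges, w):
    """Total weight of the edges with exactly one endpoint in S."""
    S = set(S)
    return sum(w[e] for e in edges if (e[0] in S) != (e[1] in S))

def cmc_from_thick_tree(tree_vertices, leaves, edges, w):
    """Given an alpha-thick tree (vertex set tree_vertices, leaf set
    leaves) of a {0,1}-weighted graph, split the leaves uniformly at
    random (randomized Max-Cut on G[L]) and return the better of the
    two connected candidates T \ L1 and T \ L2."""
    L1 = {v for v in leaves if random.random() < 0.5}
    L2 = set(leaves) - L1
    cand1 = set(tree_vertices) - L1   # removing leaves keeps the tree connected
    cand2 = set(tree_vertices) - L2
    return max(cand1, cand2, key=lambda S: cut_weight(S, edges, w))
\end{verbatim}

(The path graph above shows that such a tree need not exist for any useful value of $\alpha$; this is the setback addressed next.)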
In spite of this setback, we show that the notion of $\alpha$-thick trees is still useful in obtaining a good approximation algorithm for the \textsf{b-CMC}\ problem. In particular, Lemma \ref{lem:good-leaves} and Theorem \ref{thm:logn} show that path graph is the \emph{only} bad case, i.e., if the graph $G$ does not have a long induced path, then one can find an $\Omega(\frac{1}{\log n})$-thick tree. Lemma \ref{lem:deg2} shows that we can assume without loss of generality that the \textsf{b-CMC}\ instance does not have such a long induced path. \\ \subsubsection*{Shrinking Thin Paths.} A natural idea to handle the above ``bad'' case is to get rid of such long paths that contain only vertices of degree two by contracting the edges. We refer to a path that only contains vertices of degree two as a \textsf{d-2} path. Further, we define the length of a \textsf{d-2} path as the number of \emph{vertices} (of degree two) that it contains. The following lemma shows that we can assume without loss of generality that the graph $G$ contains no ``long'' \textsf{d-2} paths. \begin{restatable}{lem}{degtwo} Given a graph $G$, we can construct, in polynomial time, a graph $G'$ with no \textsf{d-2} paths of length $\geq 3$ such that $G'$ has a \textsf{b-CMC}\ solution $S'$ of cut weight ($w(\delta(S'))$) at least $\psi$ if and only if $G$ has a \textsf{b-CMC}\ solution $S$ of cut weight at least $\psi$. Further, given the solution $S'$ of $G'$, we can recover $S$ in polynomial time. \label{lem:deg2} \end{restatable} \begin{proof} We may assume that $G$ is connected, because otherwise we can handle each component separately. We further assume that $G$ is not a simple cycle, otherwise it is trivial to solve such an instance. If $G$ does not have a \textsf{d-2} path of length $\geq 3$, then trivially we have $G' = G$. Otherwise, let $\wp = [v_0,e_0,v_1,e_1,v_2,e_2,v_3]$ be a path in $G$ such that $v_1, v_2$ and $v_3$ have degree two and $deg(v_0) \neq 2$. Note that such a path $\wp$ must exist as $G$ is not a simple cycle. We now perform the following operation on $G$ to obtain a new graph $G_{new}$: Delete these elements $\{e_0, v_1, e_1, v_2, e_2\}$. Add a new vertex $v_{new}$ and edges $e_0' =(v_0,v_{new})$ and $e_1' = (v_{new}, v_3)$. Since $deg(v_0) \neq 2$ and $deg(v_3) = 2$, we are guaranteed that $v_0 \neq v_3$ and hence we do not introduce any multi-edges. The weights on the new edges are determined as follows - Let $n_\wp$ denote the number of \textsf{1-edges} in $E_\wp = \{e_0, e_1,e_2\}$. If $n_\wp \geq 2$, we set $w(e'_0) =w(e'_1) = 1$. If $n_\wp = 1$, then we set $w(e'_0) = 0$ and $w(e'_1) = 1$. Otherwise, we set $w(e'_0) = w(e'_1) = 0$. We claim that $G_{new}$ has a \textsf{b-CMC}\ solution $S'$ of cut weight at least $\psi$ if and only if $G$ has a solution $S$ of cut weight at least $\psi$. Let us first assume that there is a set $S$ in $G$ that is a solution to the \textsf{b-CMC}\ problem with cut weight $\psi$. We now show that there exists a $S'$ in $G_{new}$ that is a solution to the \textsf{b-CMC}\ problem with cut weight at least $\psi$. The proof in this direction is done for three possible cases, based on the cardinality of $\delta(S)\cap E_\wp$. We note that $|\delta(S)\cap E_\wp|$ is $\leq 2$, since $G[S]$ must be connected. {\bf Case 1.} $|\delta_G(S)\cap E_\wp| = 2$. Note that since $S$ is connected, we must have either (i) $S \subseteq \{v_1,v_2\}$ or (ii) $\{v_0,v_3\} \subseteq S$. 
In the former case, we set $S' = \{v_{new}\}$ and the claim follows by the definition of $w(e'_0)$ and $w(e'_1)$. In the latter case, we set $S' = S \setminus \{v_1, v_2\}$. Since $v_1$ and $v_2$ are vertices of degree two, $G_{new}[S']$ is connected. Further, every edge $e \in \delta_G(S) \setminus E_\wp$ also belongs to $\delta_{G_{new}}(S')$. The claim follows once we observe that both $e'_0$ and $e'_1$ are in $\delta_{G_{new}}(S')$. {\bf Case 2.} $|\delta_G(S)\cap E_\wp| = 1$. In this case, we must have either $v_0 \in S$ or $v_3 \in S$ but not both. Let us first assume $v_0\in S$. We set $S' = (S\cup \{v_{new}\})\setminus \{v_1, v_2\}$. It is clear that if $G[S]$ is connected, so is $G_{new}[S']$. Due to the removal of $v_1$ and $v_2$, we have $\delta_G(S) \setminus \delta_{G_{new}}(S') = \{e_i\}$ for some edge $e_i \in E_\wp$. On the other hand, due to the addition of $v_{new}$, we have $\delta_{G_{new}}(S') \setminus \delta_{G}(S) = \{e'_1\}$ and the claim follows since $w(e'_1) \geq w(e_i)$ for any $e_i \in E_\wp$. Now assume that $v_3\in S$. In this case, we set $S' = S\setminus \{v_1, v_2\}$. Since $v_{new}\notin S'$, we again have $e_1' \in \delta_{G_{new}}(S')$ and the proof follows as above. {\bf Case 3.} $|\delta_G(S)\cap E_\wp| = 0$. In this case, one of the following holds, either (i) $\{v_0,v_1,v_2,v_3\} \subseteq S$ or (ii) $\{v_0,v_1,v_2,v_3\} \cap S = \phi$. If the latter is true, the proof is trivial by setting $S' = S$. In the former case, we set $S' = S \setminus \{v_1,v_2\} \cup \{v_{new}\}$. The addition of $v_{new}$ maintains connectivity between $v_0$ and $v_3$ and hence since $S$ is connected, so is $S'$. Further, we have $\delta_G(S) = \delta_{G_{new}}(S')$ since no edge in $\delta_G(S)$ in incident on $v_1$ or $v_2$. In order to prove the other direction, we assume that $S'$ is a solution to the \textsf{b-CMC}\ problem on $G_{new}$ with a cut weight of $\psi$. We now construct a set $S$ that is a solution to \textsf{b-CMC}\ on $G$ of weight at least $\psi$. The proof proceeds in three cases similarly. {\bf Case 1.} Both $e_0' \in \delta_{G_{new}}(S')$ and $e'_1 \in \delta_{G_{new}}(S')$. One of the following holds - (i) $S' = \{v_{new}\}$ or (ii) $\{v_0, v_3\} \subseteq S'$. In the former case, let $S$ be the subset of $\{v_1, v_2\}$ having the largest weight cut. By construction, we have that weight of the cut $\delta(S)$ is at least the sum of weights of $e_0'$ and $e_1'$. For the latter, let $S$ to be the best among $S', S' \cup \{v_1\},$ and $S' \cup \{v_2\}$ and the proof follows as above. {\bf Case 2.} Either $e_0'\in \delta_{G_{new}}(S')$ or $e_1'\in \delta_{G_{new}}(S')$ but not both. Let $e_{max}$ be the edge of maximum weight in $E_\wp$. The edge $e_{max}$ splits the path $\wp$ into two connected components one containing $v_0$, call it $\wp_0$ and the other containing $v_3$, call it $\wp_3$. Now to construct $S$, we delete $v_{new}$ from $S'$ (if it contains it) and add the component $\wp_0$ if $v_0\in S'$ or the component $\wp_3$ if $v_3\in S'$. Again connectivity is clearly preserved. We now argue that the cut weight is also preserved. Indeed, this is true since we have that $w(e_{max}) \geq max(w_{e_0'}, w_{e_1'})$ and the rest of the cut edges in $S'$ remain as they are in $S$. {\bf Case 3.} None of $e_0', e_1'$ belong to $\delta_{G_{new}}(S')$. In this case, if $v_{new}\notin S'$, then trivially $S = S'$ works. Otherwise, we set $S = S' \cup \{v_1, v_2\}$. 
It is easy to observe that both connectivity and all the cut edges are preserved in this case. Now, to construct $G'$, we repeatedly apply the above contraction as long as possible. This will clearly take polynomial time as in each iteration, we reduce the number of degree-2 vertices by 1. Hence we have the claim. \qed \end{proof} \subsubsection*{Spanning Tree with Many Leaves.} Assuming that the graph has no long \textsf{d-2} paths, the following lemma shows that we can find a spanning tree $T$ that has $\Omega(n)$ leaves. Note that Claim \ref{lem:only-good} now guarantees that there are $\Omega(n)$ \textsf{1-edges} incident on the leaves of $T$. \begin{restatable}{lem}{goodleaves} Given a graph $G=(V,E)$ with no \textsf{d-2} paths of length $\geq 3$, we can obtain, in polynomial time, a spanning tree $T = (V,E_T)$ with at least $\frac{n}{14}$ leaves. \label{lem:good-leaves} \end{restatable} \begin{proof} Let $T$ be any spanning tree of $G$. We note that although $G$ does not have \textsf{d-2} paths of length $\geq$ 3, such a guarantee does not hold for paths in $T$. Suppose that there is a \textsf{d-2} path $\wp$ of length 7 in $T$. Let the vertices of this path be numbered $v_1, v_2, \ldots, v_7$ and consider the vertices $v_3, v_4, v_5$. Since $G$ does not have any \textsf{d-2} path of length 3, there is a vertex $v_i, i \in \{3,4,5\}$ such that $deg_G(v_i) \geq 3$. We now add an edge $e = \{v_i, w\}$ in $G \setminus T$ to the tree $T$. The cycle $C$ that is created as a result must contain either the edge $\{v_1,v_2\}$ or the edge $\{v_6,v_7\}$. We delete this edge to obtain a new spanning tree $T'$. It is easy to observe that the number of vertices of degree two in $T'$ is strictly less than that in $T$. This is because, although the new edge $\{v_i, w\}$ can cause $w$ to have degree two in $T'$, we are guaranteed that the vertex $v_i$ will have degree three and vertices $v_1$ and $v_2$ (or $v_6$ and $v_7$) will have degree one. Hence, as long as there are \textsf{d-2} paths of length 7 in $T$, the number of vertices of degree two can be strictly decreased. Thus this process must terminate in at most $n$ steps and the final tree $T^{(1)}$ obtained does not have any \textsf{d-2} paths of length $\geq 7$. We now show that the tree $T^{(1)}$ contains $\Omega(n)$ leaves by a simple charging argument. Let the tree $T^{(1)}$ be rooted at an arbitrary vertex. We assign each vertex of $T^{(1)}$ a token and redistribute them in the following way : Every vertex $v$ of degree two in $T^{(1)}$ gives its token to its first non degree two descendant, breaking ties arbitrarily. Since there is no \textsf{d-2} path of length $\geq 7$, each non degree two vertex collects at most 7 tokens. Hence, the number of vertices not having degree two in $T^{(1)}$ is at least $\frac{n}{7}$. Further, since the average degree of all vertices in a tree is at most 2, a simple averaging argument shows that $T^{(1)}$ must contain at least $\frac{n}{14}$ vertices of degree one, i.e., $\frac{n}{14}$ leaves. \qed \end{proof} \subsection*{Obtaining an $\Omega(\frac{1}{\log n})$ Approximation} We now have all the ingredients required to obtain the $\Omega(\frac{1}{\log n})$ approximation algorithm. We observe that if the graph $G$ is sparse, i.e. ${\eta} \leq c n \log n$ (for a suitable constant $c$), then the tree obtained by using Lemma \ref{lem:good-leaves} is an $\Omega(\frac{1}{\log n})$-thick tree and thus we obtain the required approximate solution in this case. 
On the other hand, if the graph $G$ is dense, i.e., ${\eta} > c n \log n$, then we use Lemma \ref{lem:good-leaves} to obtain a spanning tree, delete the leaves of this tree, and then repeat this procedure until no vertices are left. Since we delete a constant fraction of the vertices in each iteration, the total number of iterations is $O(\log n)$. We then choose the ``best'' tree out of the $O(\log n)$ trees so obtained and show that it must be an $\alpha$-thick tree, with $\alpha=\Omega(\frac{1}{\log n})$. Finally, using Lemma \ref{lem:alphatt}, we obtain an $\Omega(\frac{1}{\log n})$ approximate solution as desired. We refer to Algorithm \ref{alg:logn} for the detailed algorithm.
\begin{algorithm}[htbp] \DontPrintSemicolon {\bf Input}: Graph $G = (V, E)$\; {\bf Output:} A subset $S\subseteq V$, such that $G[S]$ is connected\; Set $G_1(V_1,E_1) = G$, $n_1 = |V_1|$\; Let ${\eta} \leftarrow$ Number of \textsf{1-edges} in $G$\; Use Lemma~\ref{lem:good-leaves} to obtain a spanning tree $T_1$ of $G_1$ with leaf set $L_1$\; \If{${\eta} \leq c n \log n$} { Use Lemma~\ref{lem:alphatt} on $T_1$ to obtain a connected set $S$\; \Return $S$\; } $i = 1$\; \While{$G_i \neq \phi$}{\label{algm:line:alphatt} $E_{i+1} \leftarrow E_i \setminus (E[L_i] \cup \delta(L_i))$ \label{algm:line:delete_edges} \; $V_{i+1} \leftarrow V_i \setminus L_i$, $n_{i+1} = |V_{i+1}|$\; Contract degree-2 vertices in $G_{i+1}$ \label{algm:line:contract_edges}\; Use Lemma~\ref{lem:good-leaves} to obtain a spanning tree $T_{i+1}$ of $G_{i+1}$ with leaf set $L_{i+1}$\; $i = i+1$\; } Choose $j = \argmax_i(\sum_{v \in L_i} deg_G(v))$\; Use Lemma~\ref{lem:alphatt} on $T_j$ to obtain a connected set $S$\; \Return $S$\; \caption{Finding $\alpha$-\textsf{thick}\ trees} \label{alg:logn} \end{algorithm}
\begin{restatable}{thm}{main} \label{thm:logn} Algorithm \ref{alg:logn} gives an $\Omega(\frac{1}{\log n})$ approximate solution for the \textsf{b-CMC}\ problem. \end{restatable}
\begin{proof} Let us assume that ${\eta} \leq c n \log n$ (for some constant $c$). Now, Lemma \ref{lem:good-leaves} and Claim \ref{lem:only-good} together imply that $\sum_{v \in L_1} W_G(v) = \Omega(n)$. Further, since we have $w(\delta(OPT)) \leq {\eta} \leq c n \log n$, $T_1$ is an $\alpha$-thick tree for some $\alpha = \Omega(\frac{1}{\log n})$. Hence, we obtain an $\Omega(\frac{1}{\log n})$ approximate solution using Lemma \ref{lem:alphatt}. On the other hand, if ${\eta} > c n \log n$, we show that at least one of the trees $T_i$ obtained by the repeated applications of Lemma \ref{lem:good-leaves} is an $\alpha$-thick tree $T$ of $G$ for $\alpha = \Omega(\frac{1}{\log n})$. We first observe that the \textsf{While} loop in Step \ref{algm:line:alphatt} runs for at most $O(\log n)$ iterations. This is because we delete $\Omega(n_i)$ leaves in each iteration and hence after $k = O(\log n)$ iterations, we get $G_k = \phi$. We now count the number of \textsf{1-edges} ``lost'' in each iteration. We recall that $W_G(v)$ is the total number of \textsf{1-edges} incident on $v$ in a graph $G$. In iteration $i$, the number of \textsf{1-edges} lost at Step \ref{algm:line:delete_edges} is at most $\sum_{v \in L_i} W_{G_{i}}(v)$. In addition, we may lose a total of at most $2n \leq \frac{2{\eta}}{c \log n}$ edges due to the contraction of degree-2 vertices in Step \ref{algm:line:contract_edges}. Suppose for the sake of contradiction that $\sum_{v \in L_i} W_{G}(v) < \frac{{\eta}}{d \log n}$ for all $1 \leq i \leq k$, where $d$ is a suitable constant.
Then the total number of \textsf{1-edges} lost in $k = O(\log n)$ iterations is at most \[ \sum_{i = 1}^k \Big(\sum_{v \in L_i} W_{G_{i}}(v)\Big) + \frac{2{\eta}}{c \log n} < \sum_{i = 1}^k \frac{{\eta}}{d \log n} + \frac{2{\eta}}{c \log n} = \frac{{\eta}}{\hat{d}} + \frac{2{\eta}}{c \log n} < {\eta}. \] The equality follows for a suitable constant $\hat{d}$ as $k = O(\log n)$, and the final inequality holds for a suitable choice of the constants $c$ and $d$. But this is a contradiction, since all ${\eta}$ \textsf{1-edges} must have been lost once $G_k = \phi$. Since we choose $j$ to be the best iteration, we have $\sum_{v \in L_j}W_{G}(v) \geq \frac{{\eta}}{d \log n}$ for some constant $d$. Hence the tree $T_j$ is an $\alpha$-thick tree of $G$ for $\alpha = \frac{1}{d\log n}$ and the theorem follows by Lemma \ref{lem:alphatt}. \qed \end{proof}
\subsection*{General Weighted Graphs} \label{sec:weighted-graphs} We now consider the \textsf{Weighted Connected Maximum Cut} (\textsf{WCMC}) problem. Formally, we are given a graph $G=(V,E)$ and a weight function $w:E\rightarrow \mathbb{R}^+\cup\{0\}$. The goal is to find a subset $S$ of vertices that induces a connected subgraph and maximizes the quantity $\sum_{e\in \delta(S)} w(e)$. We obtain an $\Omega(\frac{1}{\log^2n})$ approximation algorithm for this problem. Our basic strategy is to group edges having nearly the same weight into a class and thus create $O(\log n)$ classes. We then solve the \textsf{b-CMC}\ problem for each class independently and return the best solution.
\begin{algorithm}[htbp] {\bf Input}: Connected graph $G = (V, E)$ with $|V| = n$ and $|E| = m$; weight function $w:E\rightarrow \mathbb{R}^+\cup\{0\}$; $\epsilon > 0$\; {\bf Output:} A subset $S\subseteq V$, such that $G[S]$ is connected\; Let $w_{max}$ be the maximum weight over any edge of the graph\; Define $w_0 = \frac{\epsilon w_{max}}{m}$ and $w_i = w_0(1+\epsilon)^i$, for $i\in [\log_{1+\epsilon} \frac{m}{\epsilon}]$\; \For{$i\in [0, \log_{1+\epsilon} \frac{m}{\epsilon}]$} { \For{$e \in E$}{ \If{$ w_i \leq w(e) < w_{i+1}$}{ $w'_i(e) = 1$\; } \Else{ $w'_i(e)=0$\; } } Using Theorem~\ref{thm:logn}, solve for the connected subset $S_i$\; } \Return $S_{best}$, such that $best = \displaystyle \argmax_{i\in[0,\log_{1+\epsilon} \frac{m}{\epsilon}]} \sum_{e\in \delta(S_i)}w(e)$\; \caption{Algorithm for the \textsf{Weighted} \textsf{Connected Maximum Cut}\ problem.} \label{alg:log2n} \end{algorithm}
\begin{theorem} Algorithm~\ref{alg:log2n} gives an $\Omega(\frac{1}{\log^2n})$ approximation guarantee for the \textsf{Weighted} \textsf{Connected Maximum Cut}\ problem. \end{theorem}
\begin{proof} Let $OPT$ be an optimal solution for a given instance of the problem and let $\psi = \sum_{e\in \delta(OPT)} w(e)$. Also, let $\epsilon\in (0,1]$. Since we have that $\psi \geq w_{max}$, we can reset the weights of those edges with weight $< \frac{\epsilon w_{max}}{m}$ to 0 and assume that $w_{min} \geq \frac{\epsilon w_{max}}{m}$, where $w_{min}$ denotes the weight of the minimum (non-zero) weight edge. Let $E_i$ be the set of edges $e$ such that $w_i\leq w(e) < w_{i+1}$ and finally let $OPT_i = \delta(OPT)\cap E_i$. We now claim that $\sum_{e\in OPT_i} w(e) = O((1+\epsilon)\log n\sum_{e\in \delta(S_i)}w(e))$. This immediately gives us that $\sum_{e\in \delta(S_{best})}w(e) = \Omega(\frac{\sum_{e\in \delta(OPT)}w(e)}{(1+\epsilon)\log n\log_{1+\epsilon}\frac{m}{\epsilon}}) = \Omega(\frac{1}{\log^2n}\sum_{e\in \delta(OPT)}w(e))$. We now prove the claim. Consider solving the \textsf{b-CMC}\ instance with weight function $w_i'$.
Clearly $OPT$ is a feasible solution to this instance and we have $\sum_{e\in \delta(OPT)}w'_i(e) = \sum_{e\in OPT_i} w'_i(e) \leq O(\log n \sum_{e\in \delta(S_i)}w'_i(e))$. The previous inequality holds as $S_i$ is guaranteed to be an $\Omega(\frac{1}{\log n})$-approximate solution by Theorem \ref{thm:logn}. Now, we have $\sum_{e\in OPT_i} w(e) \leq (1+\epsilon)w_i\sum_{e\in OPT_i} w'_i(e) \leq O((1+\epsilon)w_i \log n\sum_{e\in \delta(S_i)} w'_i(e)) \leq O((1+\epsilon)\log n\sum_{e\in \delta(S_i)} w(e))$. Hence, the claim. \qed \end{proof} \section{CMC in Planar and Bounded Genus Graphs} In this section, we consider the \textsf{CMC}\ problem in planar graphs and more generally, in graphs with genus bounded by a constant. We show that the \textsf{CMC}\ problem has a PTAS in bounded genus graphs. \subsection*{PTAS for Bounded Genus Graphs.} \label{sec:bounded-genus-graphs} We use the following (paraphrased) contraction decomposition theorem by Demaine, Hajiaghayi and Kawarabayashi~\cite{demaine2011contraction}. \begin{restatable}{thm}{demaine}\emph{(\cite{demaine2011contraction})}\label{thm:demaine} For a bounded-genus graph $G$ and an integer $k$, the edges of $G$ can be partitioned into $k$ color classes such that contracting all the edges in any color class leads to a graph with treewidth $O(k)$. Further, the color classes are obtained by a radial coloring and have the following property: If edge $e=(u,v)$ is in class $i$, then every edge $e'$ such that $e' \cap e \neq \phi$ is in class $i-1$ or $i$ or $i+1$. \end{restatable} Given a graph $G$ of constant genus, we use Theorem \ref{thm:demaine} appropriately to obtain a graph $H$ with constant treewidth. In Appendix \ref{sec:dptw}, we show that one can solve the \textsf{CMC}\ problem optimally in polynomial time on graphs with constant treewidth. \begin{restatable}{thm}{ptas} If the \textsf{CMC}\ problem can be solved optimally on graphs of constant treewidth, then there exists a polynomial time $(1-\epsilon)$ approximation algorithm for the \textsf{CMC}\ problem on bounded genus graphs (and hence on planar graphs). \end{restatable} \begin{proof} Let $G = (V,E)$ be the graph of genus bounded by a constant and let $S$ denote the optimal \textsf{CMC}\ of $G$ and $\psi = |\delta(S)|$ be its size. Using Theorem \ref{thm:demaine} with $k = \frac{3}{\epsilon}$, we obtain a partition of the edges $E$ into $\frac{3}{\epsilon}$ color classes namely $C_1, C_2, \ldots, C_{\frac{3}{\epsilon}}$. We further group three consecutive color classes into $\frac{1}{\epsilon}$ groups $G_1, \ldots, G_{\frac{1}{\epsilon}}$ where $G_j = C_{3j-2} \cup C_{3j-1} \cup C_{3j}$. Let $G_{j^*}$ denote the group that intersects the least with the optimal connected max cut of $G$, i.e., $j^* = \argmin_j(|G_j \cap \delta(S)|)$\footnote{We ``guess'' $j^*$ by trying out all the $\frac{1}{\epsilon}$ possibilities}. As the $\frac{1}{\epsilon}$ groups partition the edges, we have $|G_{j^*} \cap \delta(S)| \leq \epsilon \psi$. Let $i = 3j^* - 1$, so that $G_{j^*} = C_{i-1} \cup C_i \cup C_{i+1}$. Let $H = (V_H, E_H)$ denote the graph of treewidth $O(\frac{1}{\epsilon})$ obtained by contracting all edges of color $C_i$. We first show that $H$ has a \textsf{CMC}\ of size at least $(1-\epsilon) \psi$. For a vertex $v \in V_H$, let $\mu(v) \subseteq V$ denote the set of vertices of $G$ that have merged together to form $v$ due to the contraction. We define a subset $S' \subset V_H$ as $S' = \{v \in V_H \ |\ \mu(v) \cap S \neq \phi\}$. 
Note that because we contract edges (and not delete them), $S'$ remains connected. We claim that $|\delta(S')| \geq (1 - \epsilon) \psi$. Let $e = (u,v)$ be an edge in $\delta(S)$. Now $e \notin \delta(S')$ implies that at least one edge $e'$ such that $e' \cap e \neq \phi$ has been contracted. By the property guaranteed by Theorem \ref{thm:demaine}, we have that $e \in G_{j^*}$. Hence we have, $|\delta(S')| \geq |\delta(S) \setminus G_{j^*}| = |\delta(S)| - |G_{j^*} \cap \delta(S)| \geq (1-\epsilon) \psi$. Finally, given a connected max cut of size $\psi$ in $H$, we can recover a connected max cut of size at least $\psi$ in $G$ by simply un-contracting all the contracted edges. Hence, by solving the \textsf{CMC}\ problem on $H$ optimally, we obtain a $(1-\epsilon)$ approximate solution in $G$. \qed \end{proof} \subsection*{NP-hardness in planar graphs} \label{sec:np-hardness} We now describe a non-trivial polynomial time reduction of a \textsf{3-SAT} variant known as \textsf{Planar Monotone 3-SAT} (\textsf{PM-3SAT}) to the \textsf{CMC}\ problem on a planar graph, thereby proving that the latter is NP-hard. The following reduction is interesting as the classical \textsf{Max-Cut}\ problem can be solved optimally in polynomial time on planar graphs using duality. In fact, it was earlier claimed that even \textsf{CMC}\ can be solved similarly~\cite{haglin1991approximation}. An instance of \textsf{PM-3SAT}\ is a 3-CNF boolean formula $\phi$ such that - \begin{enumerate} \item[a)] A clause contains either all positive literals or all negative literals. \item[b)] The associated bipartite graph $G_{\phi}$\footnote{$G_{\phi}$ has a vertex for each clause and each variable and an edge between a clause and the variables that it contains} is planar. \item[c)] Furthermore, $G_\phi$ has monotone, rectilinear representation. We refer the reader to Berg and Khosravi~\cite{de2008finding} for a complete description. Figure~\ref{rectilinear} illustrates the rectilinear representation by a simple example. \end{enumerate} \begin{figure} \caption{Monotone Rectilinear Representation} \label{rectilinear} \caption{Reduction of PM-3SAT to a \textsf{Planar } \textsf{CMC}\ instance} \label{reduction} \caption{Example illustrating the rectilinear representation and the reduction to a \textsf{Planar }\textsf{CMC}\ instance of the formula $(x_1\vee x_2 \vee x_5) \wedge (x_2\vee x_3 \vee x_4) \wedge (\bar{x_1}\vee \bar{x_2}\vee \bar{x_3})\wedge (\bar{x_3}\vee \bar{x_4}\vee \bar{x_5}) \wedge (\bar{x_1}\vee \bar{x_3}\vee \bar{x_5}). $} \label{figall} \end{figure} Given such an instance, the \textsf{PM-3SAT}\ problem is to decide whether the boolean formula is satisfiable or not. Berg and Khosravi~\cite{de2008finding} show that the \textsf{PM-3SAT}\ problem is NP-complete.\\ \noindent {\bf The Reduction.} Given a \textsf{PM-3SAT}\ formula $\phi$, with a rectilinear representation, we obtain a polynomial time reduction to a \textsf{Planar }\textsf{CMC}\ instance, there by showing that the latter is NP-hard. Let $\{x_i\}_{i=1}^n$ denote the variables of the \textsf{PM-3SAT}\ instance and $\{C_j\}_{j=1}^m$ denote the clauses. We construct a planar graph $H_{\phi}$ as follows. For every variable $x_i$, we construct the following gadget: We create two vertices $v(x_i)$ and $v(\bar{x_i})$ corresponding to the literals $x_i$ and $\bar{x_i}$. Additionally, we have $K > m^2 $ ``helper" vertices, $h^i_1, h^i_2, \ldots, h^i_K$ such that each $h^i_k$ is adjacent to both $x_i$ and $\bar{x_i}$. 
Further, for every $h^i_k$ we add a set $L^i_k$ of $K$ new vertices that are adjacent only to $h^i_k$. Now, in the rectilinear representation of the \textsf{PM-3SAT}, we replace each variable rectangle by the above gadget. For two adjacent variable rectangles in the rectilinear representation, say $x_i$ and $x_{i+1}$, we connect the helpers $h^i_K$ and $h^{i+1}_1$. For every clause $C_j$, $H_{\phi}$ has a corresponding vertex $v(C_j)$ with edges to the three literals in the clause. Finally, for each vertex $v(C_j)$, we add a set $L_j$ of $\sqrt{K}$ new vertices adjacent only to $v(C_j)$. It is easy to observe that the reduction maintains the planarity of the graph. Figure~\ref{reduction} illustrates the reduction by an example. We show the following theorem that proves the \textsf{Planar} \textsf{Connected Maximum Cut}\ problem is NP-hard. \begin{restatable}{thm}{nphard} Let $H_{\phi}$ denote an instance of the planar \textsf{CMC}\ problem corresponding to an instance $\phi$ of \textsf{PM-3SAT}\ obtained as per the reduction above. Then, the formula $\phi$ is satisfiable if and only if there is a solution $S$ to the \textsf{CMC}\ problem on $H_{\phi}$ with $|\delta_{H_\phi}(S)| \geq m\sqrt{K} + nK+ nK^2$. \end{restatable} \begin{proof} For brevity, we denote $\delta_{H_\phi}(S)$ as $\delta(S)$ in the rest of the proof. {\bf Forward direction.} Assume that $\phi$ is satisfiable under an assignment $A$. We now show that we can construct a set $S$ with the required properties. Let $\{l_i\}_{i\in [n]}$ be the set of literals that are true in $A$. We define $S = \{v(l_i)\}_{i \in [n]} \cup \{C_j\}_{j \in [m]} \cup \{h^i_k\}_{i \in [n], k \in [K]}$, i.e., the set of vertices corresponding to the true literals, all the clauses and all the helper vertices. By construction, the set of all helper vertices and one literal of each variable induces a connected subgraph. Further, since in a satisfying assignment every clause has at least one true literal, the constructed set $S$ is connected. We now show that $|\delta(S)|\geq m\sqrt{K} + nK +nK^2$. Indeed, $\delta(S)$ contains all the edges corresponding to the one degree vertices incident on clauses and all the helpers. This contributes a profit of $m\sqrt{K} + nK^2$. Also, since no vertex corresponding to a false literal is included in $S$ but all helpers are in $S$, we get an additional profit of $K$ for each variable. Hence, we have the claim. {\bf Reverse direction.} Assume that $S$ is a subset of vertices in $H_\phi$ such that $H_\phi[S]$ is connected and $|\delta(S)| \geq m\sqrt{K} + nK+ nK^2$. We now show that $\phi$ is satisfiable. We may assume that $S$ is an optimal solution (since optimal solution will satisfy these properties, if a sub-optimal solution does). We first observe that at least one of the (two) literals for each variable must be chosen into $S$. Indeed, if this is not the case for some variable, for $H_\phi[S]$ to be remain connected, none of the helper vertices corresponding to that variable can be chosen. This implies that the maximum possible value for $|\delta(S)| \leq (n-1)K^2 + m\sqrt{K} + 3m + 2(n-1)K$ (this is the number of remaining edges)$< nK^2$ (since $K> m^2$) $ < m\sqrt{K} + nK + nK^2$, a contradiction. We now show that every helper vertex must be included in $S$. Assume that this is not true and let $h^i_k$ be some helper vertex not added to $S$. We note that none of the $K$ degree one vertices in $L^i_k$ can be in $S$ because $H_\phi[S]$ must be connected. 
Now, consider the solution $S'$ formed by adding $h^i_k$ to $S$. Since at least one vertices $v(x_i)$ or $v(\bar{x_i})$ is in $S$, if $H_\phi[S]$ is connected, so is $H_\phi[S']$. Further, the total number of edges in the cut increases by $K-2$. This is a contradiction to the fact that $S$ is an optimal solution. Hence, every helper vertex $h^i_k$ belongs to the solution $S$. We now show that, no two literals of the same variable are chosen into $S$. Assume the contrary and let $v(x_i)$, $v(\bar{x_i})$ both be chosen into $S$. We claim that removing one of these two literals will strictly improve the solution. Indeed, consider removing $v(x_i)$ from $S$. Clearly, we gain all the edges from $v(x_i)$ to all the helper vertices corresponding to this variable. Thus we gain at least $K$ edges. We now bound the loss incurred. In the worst case, removing $v(x_i)$ from $S$ might force the removal of all the clause vertices due to the connectivity restriction. But this would lead to a loss of at most $m\sqrt{K} + 3m < K$. Hence, we arrive at a contradiction that $S$ is an optimal solution. Therefore, exactly one literal vertex corresponding to each variable is included in $S$. Finally, we observe that all the clauses must be included in $S$. Assume this is not true and that $m'< m$ clause vertices are in $S$. Now the total cut is $nK + nK^2 + m'\sqrt{K} < nK+nK^2+m\sqrt{K}$, which is again a contradiction. Now, the optimal solution $S$ gives a natural assignment to the \textsf{PM-3SAT}\ instance: a literal is set to \textsf{TRUE} if its corresponding vertex is included in $S$. Since, every clause vertex belongs to $S$, which in turn is connected, it must contain a \textsf{TRUE} literal and hence the assignment satisfies $\phi$. \qed \end{proof} \appendix \section{Dynamic program for constant tree-width graphs} \label{sec:dptw} The notion of \textsf{tree decomposition} and \textsf{tree-width} was first introduced by Robertson and Seymour~\cite{robertson1984graph}. Given a graph $G=(V,E)$, its tree decomposition is a tree representation $T = (\mathcal{B}, \mathcal{E})$, where each $b\in \mathcal{B}$ (called as a \emph{bag}) is associated with a subset $B_b\subseteq V$ such that the following properties hold: \begin{enumerate} \item $\bigcup_{b\in B} B_b = V$. \item For every edge $u,v\in E$, there is a bag $b\in \mathcal{B}$, such that $u,v\in B_b$. \item For every $u\in V$, the subgraph $T_u$ of $T$, induced by bags that contain $u$, is connected. \end{enumerate} The width of a decomposition is defined as the size of the largest bag $b \in B$ minus one. Treewidth of a graph is the minimum width over all the possible tree decompositions. In this section, we show that the \textsf{CMC}\ problem can be solved optimally in polynomial time on graphs with constant treewidth $t$. \noindent {\bf Notation.} We denote the tree decomposition of a graph $G=(V,E)$ by $\mathcal{T} = (\mathcal{B}, \mathcal{E})$. For a given bag of the decomposition $b\in \mathcal{B}$, let $B_b$ denote the set of vertices of $G$ contained in $b$ and $V_b$ denote the set of vertices in the subtree of $\mathcal{T}$ rooted at $b$. As shown by Kloks~\cite{kloks1994treewidth}, we may assume that $\mathcal{T}$ is \textsf{nice tree decomposition}, that has the following additional properties. \begin{enumerate} \item Any node of the tree has at most $2$ children. \item A node $b$ with no children is called a \textsf{leaf} node and has $|B_b| = 1$. \item A node $b$ with two children $c_1$ and $c_2$ is called a \textsf{join} node. 
For such a node, we have $B_b = B_{c_1} = B_{c_2}$. \item A node $b$ with exactly one child $c$ is either a \textsf{forget} node or an \textsf{introduce} node. If $b$ is a \textsf{forget} node then $B_b = B_c\setminus \{v\}$ for some $v\in B_c$. One the other hand, if $b$ is an \textsf{introduce} node then $B_b = B_c\cup \{v\}$ for $v\notin B_c$. \end{enumerate} We now describe a dynamic program to obtain the optimal solution for the \textsf{CMC}\ problem. Let $OPT$ denote the optimal solution. We first prove the following simple claim that helps us define the dynamic program variable.\\ \begin{claim} For any bag $b\in \mathcal{B}$, the number of components induced by $OPT\cap V_b$ in $G$ is at most $t$. \end{claim} \begin{proof} Consider the induced subgraph $G[OPT\cap V_b]$ and let $C$ be one of its components. We observe that $C$ has at least one vertex in $B_b$, i.e., $C\cap B_b \neq \phi$. Assume this is not true and $C\cap B_b = \phi$. Now consider an edge $e =(u,v)$ such that $u\in C$ and $v\in OPT\setminus C$. Such an edge is guaranteed to exist owing to the connectivity of $G[OPT]$. By our assumptions, $v\notin V_b$. This implies there is some bag $b'$ not in the subtree of $\mathcal{T}$ rooted at $b$, that contains both $u$ and $v$. But this in turn implies $u\in B_b$, a contradiction to the assumption $C\cap B_b = \phi$. Now, since each vertex in $B_b$ belongs to at most one component, there can be at most $|B_b|\leq t$ components in $G[OPT\cap V_b]$. Hence, the claim. \end{proof} \noindent For a given $b\in \mathcal{B}$, let $S_b = OPT\cap B_b$ be the set of vertices chosen by the optimal solution from the bag $b$. Further, let $P_b = (C_1, C_2\ldots C_t)$ be a partition, of size $t$, of the vertices in $S_b$, such that each non-empty $C_i$ (some of the $C_i$'s could possibly be empty) is a subset of a unique component of the subgraph induced by $V_b\cap OPT$. We now define the variable of the dynamic program $M_b(P_b,S_b)$ in the following way: Consider the subgraph induced by $V_b$ in $G$ and let $S$ be a subset of $V_b$ with maximum cut $\delta_{G[V_b]}(S)$, such that every $C_i\in P_b$ is completely contained in a distinct component of $G[S]$. We set $M_b(P_b, S_b) = |\delta_{G[V_b]}(S)|$. From this definition, it follows that the optimal solution can be obtained by computing $M_r(P_r = (S,\phi,\phi, \ldots, \phi), S)$, for every subset $S$ of $B_r$, where $r$ is the root bag of the tree $\mathcal{T}$ and picking the best possible solution. We now describe the dynamic program to compute the above variable $M_b(P_b, S_b)$ for a given bag $b$. \noindent {\bf Case 1:} \textsf{Node $b\in \mathcal{T}$ is a leaf node}. In this case, $B_b= V_b = \{v\}$, for some vertex $v$. \begin{align} M_b((\{v\}, \phi, \ldots \phi), \{v\}) &= 0 \nonumber \\ M_b((\phi, \phi, \ldots \phi),\phi) &= 0 \nonumber \end{align} \noindent {\bf Case 2:} \textsf{Node $b\in \mathcal{T}$ is an introduce node}. In this case, $b$ has exactly one child node $c$ and $B_b = B_c\cup \{v\}$, for some vertex $v$. Let $P_b = (C_1,C_2\ldots C_t)$ be some partition of $S_b\subseteq B_b$. We compute $M_b(P_b, S_b)$ as follows. 
\begin{align*} \intertext{If $v\notin S_b$, then} M_b(P_b,S_b) &= M_c(P_b, S_b) + |\delta_{G[B_b]}(v, S_b)| \intertext{If $v\in C_i$ and $v$ is adjacent to some vertex in $C_i\setminus \{v_i\}$ but not to any vertex in $C_j, j\neq i$, then} M_b(P_b,S_b) &= M_c((C_1,C_2, \ldots, C_i\setminus \{v\}, \ldots, C_t),S_b\setminus\{v\}) + |\delta_{G[B_b]}(v,B_b\setminus S_b)| \intertext {In all other cases, we set} M_b(P_b,S_b) &= -\infty \end{align*} We now argue about the correctness of this case. First assume that $v\notin S_b$, that is $v$ is not chosen into our solution. If $S \subseteq V_c$ is the set of vertices chosen into our solution so far, the total cut size increases by $|\delta_{G[V_c]}(v,S)|$, which we claim is equal to $|\delta_{G[B_b]}(v,S_b)|$. In other words, none of the edges incident on $v$ are adjacent to any vertices in $S\setminus S_b$. Suppose for the sake of contradiction that there exists a vertex $w\in S\setminus S_b$ be such that $\{v,w\}\in E$. This implies that there exists a bag $b'$, not in the subtree of $b$, that contains both $v$ and $w$. This in turn implies that $w\in S_b$, which is a contradiction to the fact that $w\in S\setminus S_b$. Hence, in this case the total cut increases by $|\delta_{G[B_b]}(v,S_b)|$. Now assume that $v\in S_b$, more specifically, let $v\in C_i$, for some $i$. From the above argument there are no edges between $v$ and vertices in $S\setminus S_b$. Since we must have all vertices in $C_i$ in a single component of $G[V_b]$ and all edges incident on $v$ in $G[V_b]$ have the other end in $B_b$, $v$ must have an edge to some vertex in $C_i\setminus \{v\}$. Further, since any $C_i$ and $C_j$ must belong to distinct components, they must not share any vertices. Thus, if $v$ either has no edges to $C_i\setminus \{v\}$ or has an edge to some $C_j$, there is no feasible solution and we assign $-\infty$ to this variable. On the other hand if both these conditions are satisfied, our solution is valid and the increase in the cut size is $|\delta_{G[V_b]}(v, B_b\setminus S_b)|$. \noindent{\bf Case 3:} \textsf{Node $b\in \mathcal{T}$ is a forget node}. In this case, again, $b$ has exactly one child node $c$ with $B_b= B_c \setminus \{v\}$. Let $P_b = (C_1,C_2\ldots C_t)$. It is easy to see that: \[ M_b(P_b, S_b) = \max \left\{ \begin{array} {ll} M_c(P_b, S_b) &\\ M_c((C_1,C_2, \ldots, C_i\cup \{v\}, \ldots, C_t), S_b\cup\{v\}), &\text{}\forall i\in[t] \end{array} \right. \] \noindent {\bf Case 4:}\textsf{ Node $b\in \mathcal{T}$ is a join node}. In this case, $b$ has two children $c_1, c_2$ with $B_b = B_{c_1} = B_{c_2}$. Let $P_b = (C_1, C_2\ldots, C_t)$ be the partition of vertices in $S_b$ such that each $C_i$ belongs to a distinct component in $G[OPT\cap V_b]$. Similarly, define $P_{c_1} = (C_1^1, C_2^1 \ldots C_t^1)$ and $P_{c_2} = (C_1^2, C_2^2 \ldots C_t^2)$ as partitions of $S_{c_1} = S_b$ and $S_{c_2} = S_b$ respectively. Consider the construction of following auxiliary graph, that we refer to as ``merge graph'', denoted by $M$. For every $C_i^1$ and $C_i^2$, we have a corresponding vertex $v(C_i^1)$ and $v(C_i^2)$ respectively in $M$. Further if two sets $C_i^1$ and $C_j^2$ intersect, i.e., $C_i^1\cap C_j^2 \neq \phi$, then we add an edge between $v(C_i^1),v(C_j^2)$ in $M$. It is easy to observe that for a given component $C$ of $M$, the union of all subsets of vertices corresponding to the vertices of $C$ must belong to the same component of $G[OPT\cap V_b]$. 
This in turn implies that there is a one-to-one correspondence between $C_i\in P_b$ and the components of $M$. For a given partition $P_b$ of $S_b$, we call two partitions $P_{c_1}$ and $P_{c_2}$ of $S_{c_1}$ and $S_{c_2}$ ``valid'' if there is a one-to-one correspondence between $C_i\in P_b$ and the components of $M$, as described above. We now prove the following simple claim.\\ \begin{claim} For any $S\subseteq V_b$, $\delta_{G[V_b]}(S, V_b\setminus S) = \delta_{G[V_{c_1}]}(S_1, V_{c_1}\setminus S_1) + \delta_{G[V_{c_2}]}(S_2, V_{c_2}\setminus S_2) - \delta_{G[V_b]}(S_b,B_b\setminus S_b)$, where $S_1 = S\cap V_{c_1}$ and $S_2 = S\cap V_{c_2}$. \end{claim} \begin{proof} From the properties of tree decomposition, it follows that for any two vertices $u\in V_{c_1}\setminus B_{b}$ and $w\in V_{c_2}\setminus B_b$, $uw\notin E$. Further, all the edges in $\delta(V_{c_1}\setminus B_{b})$ and $\delta(V_{c_2}\setminus B_{b})$ are incident on the vertices in $B_b$. Thus we have the following equation, \begin{align} \delta_{G[V_b]}(S,V_b\setminus S) =& \delta_{G[V_{c_1}]}(S_1\setminus B_b, V_{c_1}\setminus S_1) + \delta_{G[V_{c_2}]}(S_2\setminus B_b, V_{c_2}\setminus S_2)+ \delta_{G[V_b]}(S_b,B_b\setminus S_b) \nonumber\\ =&(\delta_{G[V_{c_1}]}(S_1\setminus B_b, V_{c_1}\setminus S_1) + \delta_{G[V_{b}]}(S_b,B_b\setminus S_b))\nonumber \\ &+ (\delta_{G[V_{c_2}]}(S_2\setminus B_b, V_{c_2}\setminus S_2) + \delta_{G[V_{b}]}(S_b,B_b\setminus S_b)) - \delta_{G[V_{b}]}(S_b,B_b\setminus S_b) \nonumber\\ =& \delta_{G[V_{c_1}]}(S_1, V_{c_1}\setminus S_1) + \delta_{G[V_{c_2}]}(S_2, V_{c_2}\setminus S_2) - \delta_{G[V_{b}]}(S_b,B_b\setminus S_b) \nonumber \end{align} \end{proof} We can now compute the dynamic program variable, in this case, as follows. \[ M_b(P_b, S_b) = \displaystyle \max_{\substack{\text{$(P_{c_1},P_{c_2})$ are valid}\\ \textsf{with respect to $P_b$}}} M_{c_1}(P_{c_1}, S_b) + M_{c_2}(P_{c_2}, S_b) - |\delta_{G[V_b]}(S_b,B_b\setminus S_b)| \]
\section{Maximum Leaf Spanning Tree} \label{app:mlst} In the Maximum Leaf Spanning Tree problem, we are given an undirected graph $G=(V,E)$ and the goal is to find a spanning tree with the maximum number of leaves. We now show that this well-studied problem is a special case of the \textsf{Connected Submodular Maximization} problem. Consider maximizing the submodular function $f(S) = |\{v\ |\ v \in \mathbb{N}(S) \setminus S\}|$ where $\mathbb{N}(S)$ is the set of vertices that have a neighbor in $S$. Now, it is easy to observe that there is a tree with $L$ leaves if and only if there is a connected set $S$ such that $f(S) = L$. Further, we show that, without loss of generality, any solution $S$ to the \textsf{Connected Submodular Maximization} problem satisfies $S \cup \mathbb{N}(S) = V$, and hence the corresponding tree is spanning. Suppose $S$ is a feasible solution and $V \neq S \cup \mathbb{N}(S)$; then there must exist an edge $(u,v)$ such that $u \in \mathbb{N}(S)\setminus S$ and $v \notin S \cup \mathbb{N}(S)$. Now $S' = S \cup \{u\}$ is also a feasible solution and $f(S') \geq f(S)$.
\section{High Connectivity} \label{app:conn} We observe that the \textsf{Connected Submodular Maximization} problem has a constant factor approximation algorithm if the graph has high connectivity. Let $S \subseteq V$ be a random set of vertices such that every vertex $v$ is chosen in $S$ independently with probability $\frac{1}{2}$. As shown by Feige et al.~\cite{feige2011maximizing}, $\mathbb{E}[f(S)] \geq \frac{1}{4} f(OPT)$, where $f(OPT)$ is the maximum value of $f$.
Further, as shown by Censor-Hillel et al.~\cite{censor2015tight} if the graph $G$ has $\Omega(\log n)$ vertex connectivity, the set $S$ obtained above is connected with high probability. Hence, $S$ is a $\frac{1}{4}$ approximate solution to the \textsf{Connected Submodular Maximization} problem. \end{document}
\begin{document} \title{Long-distance entanglement purification for quantum communication} \author{Xiao-Min Hu,$^{1,4}$ Cen-Xiao Huang,$^{1,4}$ Yu-Bo Sheng,$^{2,3,5}$\footnote{[email protected]} Lan Zhou,$^{2,3,5}$ Bi-Heng Liu,$^{1,4}$\footnote{[email protected]} Yu Guo,$^{1,4}$ Chao Zhang,$^{1,4}$ Wen-Bo Xing,$^{1,4}$ Yun-Feng Huang,$^{1,4}$ Chuan-Feng Li,$^{1,4}$\footnote{[email protected]} Guang-Can Guo$^{1,2,4,5}$ } \address{$^1$CAS Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei, 230026, People's Republic of China\\ $^2$Institute of Quantum Information and Technology, Nanjing University of Posts and Telecommunications, Nanjing, 210003, People's Republic of China\\ $^3$College of Mathematics \& Physics, Nanjing University of Posts and Telecommunications, Nanjing, 210003, People's Republic of China\\ $^4$CAS Center For Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, 230026, People's Republic of China\\ $^5$Key Lab of Broadband Wireless Communication and Sensor Network Technology, Nanjing University of Posts and Telecommunications, Ministry of Education, Nanjing, 210003, People's Republic of China\\} \date{\today }
\begin{abstract} High-quality long-distance entanglement is essential for both quantum communication and scalable quantum networks. Entanglement purification distills high-quality entanglement from low-quality entanglement in a noisy environment and plays a key role in quantum repeaters. Previous significant entanglement purification experiments required two pairs of low-quality entangled states and were demonstrated only in table-top settings. Here we propose and report high-efficiency, long-distance entanglement purification using only one pair of hyperentangled photons. We also demonstrate its practical application in entanglement-based quantum key distribution (QKD). One pair of polarization--spatial-mode hyperentanglement was distributed over an 11~km multicore fiber (a noisy channel). After purification, the fidelity of the polarization entanglement rises from 0.771 to 0.887, and the effective key rate in entanglement-based QKD increases from 0 to 0.332. The Clauser-Horne-Shimony-Holt (CHSH) value of the polarization entanglement rises from 1.829 to 2.128. Moreover, by using one pair of hyperentanglement and a deterministic controlled-NOT gate, the total purification efficiency can be estimated to be $6.6\times 10^{3}$ times that of the experiment using two pairs of entangled states with spontaneous parametric down-conversion (SPDC) sources. Our results offer the potential to be implemented as part of full quantum repeaters and large-scale quantum networks. \end{abstract} \maketitle
Quantum entanglement~\cite{Horodecki2009} plays an essential role in both quantum communication~\cite{teleportation,Guo2019,QKD,QSDC} and scalable quantum networks~\cite{network}. However, the unavoidable environmental noise degrades entanglement quality. Entanglement purification is a powerful tool to distill high-quality entanglement from low-quality entanglement ensembles~\cite{purification1,purification2} and lies at the heart of quantum repeaters~\cite{repeater1}. Several significant entanglement purification experiments using photons~\cite{experiment1,experiment4}, atoms~\cite{experiment2}, and electron-spin qubits~\cite{experiment3} have been reported. These experiments were all table-top and did not distribute entanglement over a long distance.
Moreover, these experiments based on Ref.~\cite{purification1} were low efficiency for they require two copies of low-quality entangled states and consume at least one pair of low-quality entangled states even if the purification is successful. In optical systems, a spontaneous parametric down conversion (SPDC) source is commonly used to generate entangled states. The probabilistic nature of SPDC makes it still challenging to generate two clean pairs of entangled states simultaneously because of double-pair emission noise \cite{experiment4}. Hyperentanglement~\cite{hyper1}, simultaneous entanglement with more than one degree of freedom (DOF), is more powerful and can be used to increase the channel capacity~\cite{highdimension1,highdimension2}. Hyperentanglement also fulfills quantum teleportation of a single photon with multiple DOFs~\cite{highdimension4,multiteleportation,Graham15}. The distribution of hyperentanglement were also reported~\cite{highdimension6,highdimension7}. Some entanglement purification protocols (EPPs) assisted by spatial mode DOF have been proposed~\cite{hyperpurification1,hyperpurification2,hyperpurification3}. Such deterministic entanglement purification usually requires the spatial or other entanglement to be more robust. The fidelity of purified polarization entanglement equals the fidelity of spatial entanglement, and this is essentially the transformation from spatial entanglement to polarization entanglement. \begin{figure} \caption{\textbf{Protocol of entanglement purification using hyperentanglement.} \textbf{a}, General protocol of long-distance entanglement purification using hyperentanglement. After long-distance distribution, deterministic controlled-NOT (CNOT) operations are performed between different degrees of freedom of single particles. Then the target qubits (encoded in one degree of freedom) are measured. If we only keep the same measurement results, then the entanglement encoded in the other degree of freedom (control qubits) is purified. \textbf{b}, Purification protocol based on polarization-spatial mode hyperentanglement. The hyperentangled state $|\Phi^{+}\rangle_{ab}\otimes|\phi^{+}\rangle_{ab}$ is first generated by the entanglement source (S) and then distributed to Alice and Bob in the channel ($a_{1}b_{1}$, $a_{2}b_{2}$). After suffering from the channel noise, the entanglement purification is performed. The entanglement purification operation uses an HWP and a PBS. This essentially acts as a CNOT gate with a success probability of 100$\%$ between the polarization (target qubits) and spatial mode (control qubits) degree of freedom. After PBS1 and PBS2, we convert the spatial mode to polarization. Thus, we can verify its success by selecting the cases in which the two photons are in the output mode state D$_{1}$D$_{2}$ or D$_{3}$D$_{4}$. PBS--polarizing beam splitter, which transmits the H-polarized photon and reflects the V-polarized photon. BD--beam displacer, which can couple H- and V-polarized components from different spatial modes. HWP--half-wave plate set at $45^\circ$.} \label{fig1} \end{figure} Here we propose and report the first high-efficiency long-distance polarization entanglement purification using only one pair of hyperentanglement. We also demonstrate its practical application in entanglement-based QKD~\cite{E91}. 
We show that the EPP using two copies~\cite{purification1} and subsequent experiments~\cite{experiment1,experiment2,experiment3,experiment4} is not necessary and polarization entanglement can be purified using entanglement in other DOF. Moreover, the double-pair emission noise using spontaneous parametric down-conversion (SPDC) source is removed automatically and the purification efficiency can be greatly increased in a second time. A general protocol is shown in Fig.~\ref{fig1}a. In our experiment, we use hyperentanglement encoded on polarization and spatial modes. As shown in Fig.~\ref{fig1}b, a hyperentangled state $|\phi\rangle=|\Phi^{+}\rangle_{ab}\otimes|\phi^{+}\rangle_{ab}$ is distributed to Alice and Bob. $|\Phi^{+}\rangle_{ab}$ is one of the polarization Bell states $ |\Phi^{\pm}\rangle_{ab}=\frac{1}{\sqrt{2}}(|H\rangle_{a}|H\rangle_{b}\pm|V\rangle_{a}|V\rangle_{b})$, and $|\Psi^{\pm}\rangle_{ab}=\frac{1}{\sqrt{2}}(|H\rangle_{a}|V\rangle_{b}\pm|V\rangle_{a}|H\rangle_{b})$. $|\phi^{+}\rangle_{ab}$ is one of the spatial mode Bell states $|\phi^{\pm}\rangle_{ab}=\frac{1}{\sqrt{2}}(|a_{1}\rangle|b_{1}\rangle\pm|a_{2}\rangle|b_{2}\rangle)$, and $|\psi^{\pm}\rangle_{ab}=\frac{1}{\sqrt{2}}(|a_{1}\rangle|b_{2}\rangle\pm|a_{2}\rangle|b_{1}\rangle)$, where $H (V)$ denotes horizontal (vertical) polarization, and $a_{1}$, $b_{1}$, $a_{2}$, and $b_{2}$ are the spatial modes. The noise channel makes the hyperentangled state become a mixed state as $\rho_{ab}=\rho^{P}_{ab}\otimes\rho^{S}_{ab}$ with \begin{eqnarray} \rho^{P}_{ab}=F_{1}|\Phi^{+}\rangle_{ab}\langle\Phi^{+}|+(1-F_{1})|\Psi^{+}\rangle_{ab}\langle\Psi^{+}|,\label{mixed1} \end{eqnarray} and \begin{eqnarray} \rho^{S}_{ab}=F_{2}|\phi^{+}\rangle_{ab}\langle\phi^{+}|+(1-F_{2})|\psi^{+}\rangle_{ab}\langle\psi^{+}|.\label{mixed2} \end{eqnarray} The principle of purification is to select the cases in which the photons are in the output modes D$_{1}$D$_{2}$ or D$_{3}$D$_{4}$~\cite{Supplementary} and we can obtain a new mixed state \begin{eqnarray} \rho'_{ab}=F'|\Phi^{+}\rangle_{ab}\langle\Phi^{+}|+(1-F')|\Psi^{+}\rangle_{ab}\langle\Psi^{+}|\label{mixed3}. \end{eqnarray} Here $F'=\frac{F_{1}F_{2}}{F_{1}F_{2}+(1-F_{1})(1-F_{2})}$. If $F_{1}> \frac{1}{2}$ and $F_{2}> \frac{1}{2}$, we can obtain $F'> F_{1}$ and $F'> F_{2}$. \begin{figure*} \caption{\textbf{Experimental Setup.} \textbf{a}, Preparation of hyperentangled state. The pump light beam is separated into two spatial modes by the BD. These two beams are injected into a Sagnac interferometer to pump a type-II cut periodically poled potassium titanyl phosphate (PPKTP) crystal (1mm$\times$7mm$\times$10mm) and generate the two-photon polarization entanglement $1/\sqrt{2}(|HV\rangle+|VH\rangle)$ in each spatial mode. After HWP1 (set at $45^\circ$), the hyperentangled state $|\Phi^{+}\rangle_{ab}\otimes|\phi^{+}\rangle_{ab}=1/\sqrt{2}(|HH\rangle+|VV\rangle)\otimes 1/\sqrt{2}(|a_{1}b_{1}\rangle+|a_{2}b_{2}\rangle)$ is generated. \textbf{b}, Quantum noisy channel. This channel is divided into two parts. The first part is a controllable spatial mode and polarization bit flip (BF) or phase flip (PF) loading noise setup. The other part is an 11~km multicore fiber (MCF). \textbf{c}, Purification process. This process is also divided into two steps. The first step is a controlled-NOT (CNOT) gate between the spatial mode and polarization DOF. The setup consists of an HWP (set at $45^{\circ}$) placed on spatial modes $a_{2}$ and $b_{2}$. 
The PBS is used for post-selection of the polarization qubit, and the spatial mode states of the same polarization are preserved while different polarization states are ignored. Hadamard operations are needed to convert the PF noise to BF noise before purification. \textbf{d}, Quantum state tomography setup. BD--beam displacer, DM--dichroic mirror, HWP--half-wave plate, QWP--quarter-wave plate, PBS--polarizing beam splitter. } \label{fig2} \end{figure*}
\begin{figure*} \caption{\textbf{Experimental results.} \textbf{a}, Results before and after BF noise purification. On the left are the density matrices of polarization and path-entangled states before purification, and on the right are the density matrices of path states after purification. The yellow column represents a value greater than 0, and the blue column represents a value less than 0. The transparent column represents the value of the ideal maximally entangled state. The fidelity of the purified quantum state is clearly improved significantly. \textbf{b}, Results before and after PF noise purification. The first half is the result of loading $20\%$ PF noise. The second half is the result of noise introduced only by the MCF.} \label{fig3} \end{figure*}
To demonstrate the purification, we first generated one pair of hyperentangled photons. As shown in Fig.~\ref{fig2}a, a continuous-wave (CW) laser operated at 775~nm is separated into two spatial modes ($p_{1}$ and $p_{2}$) by a beam displacer and then injected into a polarization Sagnac interferometer to generate a polarization-entangled photon pair~\cite{Kim2006} in each spatial mode~\cite{highdimension2,highdimension7,Guo2018}. Note that since we use a CW laser, the final state is the superposition of the states in the two modes. Thus we can generate the hyperentanglement $|\phi\rangle=|\Phi^{+}\rangle_{ab}\otimes|\phi^{+}\rangle_{ab}$ by tuning the relative phase between the two spatial modes. We used 200~mW of pump light to generate 2400 photon pairs/s. The coincidence efficiency of the entangled source is $18\%$. To show the performance of entanglement purification in the noisy channel, we distributed the hyperentangled state over an 11~km multicore fiber (MCF)~\cite{highdimension7,Xavier2020,Canas2017,Bacco2019}. The difficulty of long-distance distribution of polarization and spatial mode hyperentanglement is maintaining the coherence and phase stability between different paths. The MCF provides an ideal platform for distributing spatial-mode states over a long distance. The distance between the nearest two cores of the MCF is very small (approximately 41.5~$\mu$m), and the noise in the different paths is very similar, so coherence can be maintained~\cite{highdimension7,Xavier2020,Canas2017,Bacco2019}. However, there are still other difficulties, such as polarization maintenance and group-delay mismatch. To overcome these obstacles, we used a phase-locking system to ensure the effective distribution of hyperentanglement~\cite{highdimension7,Supplementary}. In Fig.~\ref{fig2}b, the hyperentangled state $|\phi\rangle$ was distributed over 11~km in the MCF. During distribution, a small bit-flip (BF) error ($|\Psi^{+}\rangle_{ab}$ and $|\psi^{+}\rangle_{ab}$) and a small phase-flip (PF) error ($|\Phi^{-}\rangle_{ab}$ and $|\phi^{-}\rangle_{ab}$) were introduced by the fiber noise environment. The fidelities of the hyperentangled state in the polarization and spatial modes were $F_{P}=0.961\pm0.001$ and $F_{S}=0.952\pm0.001$, respectively.
Here, we use superconducting single photon detectors to detect each photon, the efficiency is 80\%, and the dark count rate is approximate 300~Hz. Including all the losses, the coincidence efficiency was $\sim8.1\%$, and the coincident count rate was 600~Hz. In our experiment, we added symmetrical BF noise to both the polarization and spatial mode DOFs, so that $F_{P}\approx F_{S}\approx F$. The BF noise loading setup~\cite{Supplementary} can add any proportion of BF noise ($|\Psi^{+}\rangle_{ab}$ and $|\psi^{+}\rangle_{ab}$) to the hyperentangled state of the polarization and spatial modes. We loaded $20\%$ BF noise into the ideal state, and when it was combined with the MCF, the fidelities of the polarization and spatial mode states were $F_{P}^{0.8}=0.771\pm0.002$ and $F_{S}^{0.8}=0.771\pm0.002$, respectively. When $30\%$ BF noise was added, the fidelities of the polarization and spatial mode states were $F_{P}^{0.7}=0.666\pm0.002$ and $F_{S}^{0.7}=0.664\pm0.002$, respectively. The purification setup is rather simple and only contains a PBS and an HWP (Fig.~\ref{fig2}c). It is essentially the controlled-NOT (CNOT) gate between the polarization and spatial mode DOFs for a single photon. Unlike the CNOT gate between two photons in polarization, such a CNOT gate works in a deterministic way and does not exploit the auxiliary single photon. The control qubit can be regarded as a spatial mode qubit ($|a_{1}\rangle=|0\rangle$,$|a_{2}\rangle=|1\rangle$), and the target qubit can be regarded as the polarization qubit. The CNOT gate makes $|0\rangle|H\rangle\rightarrow|0\rangle|H\rangle$, $|0\rangle|V\rangle\rightarrow|0\rangle|V\rangle$, $|1\rangle|H\rangle\rightarrow|1\rangle|V\rangle$, and $|1\rangle|V\rangle\rightarrow|1\rangle|H\rangle$. After the CNOT operation, the second operation is to postselect the polarization qubit. Through the PBS, the spatial mode states with the same polarization are retained, and different polarization states are ignored. The purification process is completed. The experimental results show that the fidelity of the purified state was significantly improved for BF noise (Fig.~\ref{fig3}a). For $20\%$ BF noise, the fidelity after purification became $F^{0.8}=0.887\pm0.001$, which is very close to the theoretically predicted value $F=0.896$. For $30\%$ BF noise, the fidelity after purification became $F^{0.7}=0.774\pm0.002$, which is also very close to the theoretical value $F=0.778$. BF and PF noise can be converted to each other through the Hadamard operations~\cite{Supplementary}. We also show that our protocol is still feasible in the case of PF noise. A PF noise proportion of $20\%$ ($|\Phi^{-}\rangle_{ab}$ and $|\phi^{-}\rangle_{ab}$) was loaded into the hyperentangled state. When this was combined with the MCF noise, the fidelities of the polarization and spatial mode states were $F_{P}^{0.8}=0.793\pm0.002$ and $F_{S}^{0.8}=0.796\pm0.002$, respectively. Differently from the case of BF noise, we first converted PF noise into BF noise through Hadamard operations and then completed entanglement purification. The fidelity after purification is $F^{0.8}=0.903\pm0.001$, which is also very close to the theoretical value $F=0.932$. For hyperentangled states with only MCF noise, we found that PF noise ($\sim3.3\%$) was much higher than BF noise ($\sim1.1\%$). After the purification, the fidelity was $F=0.974\pm0.001$. 
This is higher than the fidelity of the polarization or spatial mode state before purification, which shows that our purification was efficient in fiber distribution.
\begin{figure*} \caption{\textbf{QKD based on entanglement purification.} \textbf{a},\textbf{b},\textbf{c} represent the correlations for mutually unbiased bases required for QKD (computational bases ($|0\rangle_{Z},|1\rangle_{Z}$) and Fourier bases ($|0\rangle_{F},|1\rangle_{F}$)) before and after purification in the case of loading $20\%$ BF noise, $20\%$ PF noise and using the MCF only. The corresponding quantum bit error rate (QBER) and quantum key rate ($R$) are also presented. } \label{fig4} \end{figure*}
Finally, let us show the practical applications of this purification experiment. The first is to increase the secure key rate~\cite{Shor2000} in entanglement-based QKD~\cite{E91}. Secure QKD requires that the quantum bit error rate (QBER) is less than $11\%$ (QBER $=1-F$, where $F$ is the fidelity)~\cite{QKD} to generate an effective key rate ($R$). In the $20\%$ BF noise and $20\%$ PF noise cases, as shown in Fig.~\ref{fig4}a and Fig.~\ref{fig4}b, after purification, $R$ increases significantly from $0$ or nearly $0$ to $0.332\pm 0.010$ and $0.371\pm 0.009$. Here $R$ is defined as $R=1-2H(e)$ with $e$ the QBER, where $H(e)$ is the binary Shannon entropy, $H\left(e\right)=-\left(1-e\right) \log _{2}\left(1-e\right)-e \log _{2}\left(e\right)$. We also show the improvement of $R$ along a real noise channel in Fig.~\ref{fig4}c. The second is to distill nonlocality from nonlocal resources~\cite{nonlocal}, which has potential application in improving the noise tolerance in future device-independent QKD (DI-QKD)~\cite{diqkd}. Using the reconstructed density matrix, we can calculate the Clauser-Horne-Shimony-Holt (CHSH) values of these nonlocal quantum states. Initially, for $30\%$ BF noise, $S_{S}=1.837\pm0.006<2$ for spatial mode entanglement and $S_{P}=1.829\pm0.006<2$ for polarization entanglement. After purification, $S=2.128\pm0.006>2$~\cite{Supplementary}. The integration time of each data point is 60~s, and the count rate of the entangled source after fiber distribution is $\sim600$/s (before purification). After purification, due to the influence of postselection, the successful events are retained and the failed events are discarded; thus the count rate of the entangled source after purification is correspondingly reduced. For $30\%$ BF noise, the count rate of the purified entangled source is $\sim350$/s; for $20\%$ BF and PF noise, it is $\sim410$/s. We propose and demonstrate the first long-distance polarization entanglement purification and show its practical application to increase the secure key rate in entanglement-based QKD and improve the noise tolerance in DI-QKD. Compared with all two-copy EPPs based on Ref.~\cite{purification1}, our EPP using one pair of hyperentanglement has several advantages. Firstly, this protocol halves the consumption of entangled pairs. Secondly, benefiting from the CNOT gate with $100\%$ success probability between the polarization and spatial DOFs of a single photon, the purification efficiency of this EPP is four times that of the two-copy EPPs in Refs.~\cite{purification2,experiment1,Supplementary}.
Thirdly, if we consider the experimental implementation (SPDC sources), the double-pair emission noise in generating two clean pairs can be removed automatically and the purification efficiency can be estimated to be $1.65\times 10^{3}$ times that of the EPPs using two pairs of entangled states with SPDC sources. The total purification efficiency can thus be calculated as $4\times 1.65\times 10^{3}=6.6 \times 10^{3}$ times that of the EPPs using two pairs of entangled states with SPDC sources. It is worth noting that since both outcomes of the PBSs are used for postselection, we need two sets of measurement setups on each of Alice's and Bob's sides. However, in the two-copy EPPs~\cite{experiment1}, two photons act as triggers, so two additional measurement setups are also needed. Our protocol is general and can be effectively extended to other DOFs of photons, such as the time bin~\cite{Martin2017}, frequency~\cite{Kues2017}, and orbital angular momentum~\cite{multiteleportation}, to perform multi-step purification to improve the fidelity of entanglement further. Moreover, if combined with suitable high-capacity and high-fidelity quantum memories~\cite{memory2} and entanglement swapping~\cite{multiteleportation,highdimension5}, the approach presented here could be extended to implement the full repeater protocol and large-scale quantum networks, enabling polynomial scaling of the communication rate with distance.
\begin{acknowledgments} This work was supported by the National Key Research and Development Program of China (No.\ 2017YFA0304100, No. 2016YFA0301300 and No. 2016YFA0301700), National Natural Science Foundation of China (Nos. 11774335, 11734015, 11874345, 11821404, 11904357, 11974189), the Key Research Program of Frontier Sciences, CAS (No.\ QYZDY-SSW-SLH003), Science Foundation of the CAS (ZDRW-XH-2019-1), the Fundamental Research Funds for the Central Universities, Science and Technological Fund of Anhui Province for Outstanding Youth (2008085J02), Anhui Initiative in Quantum Information Technologies (Nos.\ AHY020100, AHY060300). \end{acknowledgments}
\begin{thebibliography}{99} \bibitem{Horodecki2009} R. Horodecki, P. Horodecki, M. Horodecki, and K. Horodecki, \textit{Rev. Mod. Phys.} \textbf{81}, 865 (2009). \bibitem{QKD} V. Scarani, H. Bechmann-Pasquinucci, N. J. Cerf, M. Du\v{s}ek, N. L\"{u}tkenhaus, and M. Peev, \textit{Rev. Mod. Phys.} \textbf{81}, 1301 (2009). \bibitem{teleportation} S. Pirandola, J. Eisert, C. Weedbrook, A. Furusawa, and S. L. Braunstein, Advances in quantum teleportation. \textit{Nat. Photonics} \textbf{9}, 641 (2015). \bibitem{Guo2019} Y. Guo, B.-H. Liu, C.-F. Li, and G.-C. Guo, \textit{Adv. Quantum Technol.} \textbf{2}, 1900011 (2019). \bibitem{QSDC} G. L. Long and X. S. Liu, \textit{Phys. Rev. A} \textbf{65}, 032302 (2002). \bibitem{network} H. J. Kimble, \textit{Nature} \textbf{453}, 1023 (2008). \bibitem{purification1} C. H. Bennett, G. Brassard, S. Popescu, B. Schumacher, J. A. Smolin, and W. K. Wootters, \textit{Phys. Rev. Lett.} \textbf{76}, 722 (1996). \bibitem{purification2} J.-W. Pan, C. Simon, C. Brukner, and A. Zeilinger, \textit{Nature} \textbf{410}, 1067 (2001). \bibitem{repeater1} H. J. Briegel, W. D\"{u}r, J. I. Cirac, and P. Zoller, \textit{Phys. Rev. Lett.} \textbf{81}, 5932 (1998). \bibitem{experiment1} J.-W. Pan, S. Gasparoni, R. Ursin, G. Weihs, and A. Zeilinger, \textit{Nature} \textbf{423}, 417 (2003). \bibitem{experiment4} L.-K. Chen, H.-L. Yong, P. Xu, X.-C. Yao, T. Xiang, Z.-D. Li, C. Liu, H. Lu, N.-L. Liu, L. Li \textit{et al.}, \textit{Nat.
Photonics} \textbf{11}, 695 (2017). \bibitem{experiment2} R. Reichle, D. Leibfried, E. Knill, J. Britton, R. B. Blakestad, J. D. Jost, C. Langer, R. Ozeri, S. Seidelin, and D. J. Wineland, \textit{Nature} \textbf{443}, 838 (2006). \bibitem{experiment3} N. Kalb, A. A. Reiserer, P. C. Humphreys, J. J. W. Bakermans, S. J. Kamerling, N. H. Nickerson, S. C. Benjamin, D. J. Twitchen, M. Markham, and R. Hanson, \textit{Science} \textbf{356}, 928 (2017). \bibitem{hyper1} J. T. Barreiro, N. K. Langford, N. A. Peters, and P. G. Kwiat, \textit{Phys. Rev. Lett.} \textbf{95}, 260501 (2005). \bibitem{highdimension1} J. T. Barreiro, T. C. Wei, and P. G. Kwiat, \textit{Nat. Phys.} \textbf{4}, 282 (2008). \bibitem{highdimension2} X.-M. Hu, Y. Guo, B.-H. Liu, Y.-F. Huang, C.-F. Li, and G.-C. Guo, \textit{Sci. Adv.} \textbf{4}, eaat9304 (2018). \bibitem{highdimension4} Y. B. Sheng, F. G. Deng, and G. L. Long, \textit{Phys. Rev. A} \textbf{82}, 032318 (2010). \bibitem{multiteleportation} X.-L. Wang, X.-D. Cai, Z.-E. Su, M.-C. Chen, D. Wu, L. Li, N.-L. Liu, C.-Y. Lu, and J.-W. Pan, \textit{Nature} \textbf{518}, 516 (2015). \bibitem{Graham15} T. M. Graham, H. J. Bernstein, T. C. Wei, M. Junge, and P. G. Kwiat, \textit{Nat. Commun.} \textbf{6}, 7185 (2015). \bibitem{highdimension6} F. Steinlechner, S. Ecker, M. Fink, B. Liu, J. Bavaresco, M. Huber, T. Scheidl, and R. Ursin, \textit{Nat. Commun.} \textbf{8}, 15971 (2017). \bibitem{highdimension7} X.-M. Hu, W.-B. Xing, B.-H. Liu, D.-Y. He, H. Cao, Y. Guo, C. Zhang, H. Zhang, Y.-F. Huang, C.-F. Li, and G.-C. Guo, \textit{Optica} \textbf{7}, 2334 (2020). \bibitem{hyperpurification1} C. Simon, and J. W. Pan, \textit{Phys. Rev. Lett.} \textbf{89}, 257901 (2002). \bibitem{hyperpurification2} Y. B. Sheng, and F. G. Deng, \textit{Phys. Rev. A} \textbf{82}, 044305 (2010). \bibitem{hyperpurification3} X. H. Li, \textit{Phys. Rev. A} \textbf{82}, 044304 (2010). \bibitem{E91} A. K. Ekert, \textit{Phys. Rev. Lett.} \textbf{67}, 661 (1991). \bibitem{Supplementary} See Supplementary Materials for details, which includes Ref.~\cite{Hu2019,DIQKD,DIQSDC,Brukner2004}. \bibitem{Hu2019} X.-M. Hu, C. Zhang, B.-H. Liu, Y. Cai, X.-J. Ye, Y. Guo, W.-B. Xing, C.-X. Huang, Y.-F. Huang, C.-F. Li, and G.-C. Guo, \textit{Phys. Rev. Lett.} \textbf{125}, 230501 (2020). \bibitem{DIQKD} A. Ac\'{\i}n, N. Brunner, N. Gisin, S. Massar, S. Pironio, and V. Scarani, \textit{ Phys. Rev. Lett.} \textbf{98}, 230501 (2007). \bibitem{DIQSDC} L. Zhou, Y. B. Sheng, and G. L. Long, \textit{Sci. Bull.} \textbf{65}, 12 (2020). \bibitem{Brukner2004} C. Brukner, M. Zukowski, J.-W. Pan, and A. Zeilinger, \textit{Phys. Rev. Lett.} \textbf{92}, 127901 (2004). \bibitem{Kim2006} T. Kim, M. Fiorentino, and F. N. C. Wong, \textit{Phys. Rev. A} \textbf{73}, 012316 (2006). \bibitem{Guo2018} Y. Guo, X.-M. Hu, B.-H. Liu, Y.-F. Huang, C.-F. Li, and G.-C. Guo, \textit{Phys. Rev. A} \textbf{97}, 062309 (2018). \bibitem{Xavier2020} G. B. Xavier and G. Lima, \textit{Commun. Phys.} \textbf{3}, 9 (2020). \bibitem{Canas2017} G. Ca\~{n}as, N. Vera, J. Cari\~{n}e, P. Gonz\'{a}lez, J. Cardenas, P. W. R. Connolly, A. Przysiezna, E. S. G\'{o}mez, M. Figueroa, G. Vallone \textit{et al.} \textit{Phys. Rev. A} \textbf{96}, 022317 (2017). \bibitem{Bacco2019} D. Bacco, B. D. Lio, D. Cozzolino, F. D. Ros, X. Guo, Y. Ding, Y. Sasaki, K. Aikawa, S. Miki, H. Terai \textit{et al.} \textit{Commun. Phys.} \textbf{2}, 140 (2019). \bibitem{Shor2000} P. W. Shor and J. Preskill, \textit{Phys. Rev. Lett.} \textbf{85}, 441 (2000). \bibitem{nonlocal} P. 
Walther, K. J. Resch, C. Brukner, A. M. Steinberg, J. W. Pan, and A. Zeilinger, \textit{Phys. Rev. Lett.} \textbf{94}, 040504 (2005). \bibitem{diqkd} E. Y. Z. Tan, C. C. W. Lim, and R. Renner, \textit{Phys. Rev. Lett.} \textbf{124}, 020502 (2020). \bibitem{Martin2017} A. Martin, T. Guerreiro, A. Tiranov, S. Designolle, F. Frowis, N. Brunner, M. Huber, and N. Gisin, \textit{Phys. Rev. Lett.} \textbf{118}, 110501 (2017). \bibitem{Kues2017} M. Kues, C. Reimer, P. Roztocki, L. R. Cort\'{e}s, S. Sciara, B. Wetzel, Y. Zhang, A. Cino, S. T. Chu, B. E. Little \textit{et al.}, \textit{Nature} \textbf{546}, 622 (2017). \bibitem{memory2} Y. Yu, F. Ma, X.-Y. Luo, B. Jing, P.-F. Sun, R.-Z. Fang, C.-W. Yang, H. Liu, M.-Y. Zheng, X.-P. Xie \textit{et al.}, \textit{Nature} \textbf{578}, 240 (2020). \bibitem{highdimension5} M. K. Bhaskar, R. Riedinger, B. Machielse, D. S. Levonian, C. T. Nguyen, E. N. Knall, H. Park, D. Englund, M. Lon\v{c}ar, D. D. Sukachev, and M. D. Lukin, \textit{Nature} \textbf{580}, 60 (2020). \end{thebibliography} \setcounter{table}{0} \renewcommand{\thetable}{S\arabic{table}} \setcounter{figure}{0} \renewcommand{\thefigure}{S\arabic{figure}} \setcounter{equation}{0} \renewcommand{\theequation}{S\arabic{equation}} \textbf{SUPPLEMENTARY INFORMATION}\\ \noindent\textbf{The protocol of entanglement purification using hyperentanglement}\\ As shown in Fig.~1, the hyperentanglement source (S) generates one pair of photons in the hyperentangled state $|\phi\rangle=|\Phi^{+}\rangle_{ab}\otimes|\phi^{+}\rangle_{ab}$. After the photons are transmitted through the channel, the polarization and spatial-mode entanglement becomes a mixed state $\rho_{ab}=\rho^{P}_{ab}\otimes\rho^{S}_{ab}$. Here $|\Psi^{+}\rangle_{ab}$ and $|\psi^{+}\rangle_{ab}$ denote the states in which a bit-flip error has occurred in the polarization and the spatial-mode entanglement, respectively. Thus, the initial hyperentangled state becomes a probabilistic mixture of four states: $|\Phi^{+}\rangle_{ab}\otimes|\phi^{+}\rangle_{ab}$ with probability $F_{1}F_{2}$, $|\Phi^{+}\rangle_{ab}\otimes|\psi^{+}\rangle_{ab}$ with probability $F_{1}(1-F_{2})$, $|\Psi^{+}\rangle_{ab}\otimes|\phi^{+}\rangle_{ab}$ with probability $(1-F_{1})F_{2}$, and $|\Psi^{+}\rangle_{ab}\otimes|\psi^{+}\rangle_{ab}$ with probability $(1-F_{1})(1-F_{2})$. The first state, $|\Phi^{+}\rangle_{ab}\otimes|\phi^{+}\rangle_{ab}$, leads the two photons to the output modes D$_{1}$D$_{2}$, in the state $\frac{1}{\sqrt{2}}(|H\rangle_{D_{1}}|H\rangle_{D_{2}}+|V\rangle_{D_{1}}|V\rangle_{D_{2}})$, or to the output modes D$_{3}$D$_{4}$, in the state $\frac{1}{\sqrt{2}}(|H\rangle_{D_{3}}|H\rangle_{D_{4}}+|V\rangle_{D_{3}}|V\rangle_{D_{4}})$. The state $|\Psi^{+}\rangle_{ab}\otimes|\psi^{+}\rangle_{ab}$ also leads the two photons to the output modes D$_{1}$D$_{2}$ or D$_{3}$D$_{4}$; in this case they are in the state $\frac{1}{\sqrt{2}}(|H\rangle_{D_{1}}|V\rangle_{D_{2}}+|V\rangle_{D_{1}}|H\rangle_{D_{2}})$ or $\frac{1}{\sqrt{2}}(|H\rangle_{D_{3}}|V\rangle_{D_{4}}+|V\rangle_{D_{3}}|H\rangle_{D_{4}})$. The other two states, $|\Phi^{+}\rangle_{ab}\otimes|\psi^{+}\rangle_{ab}$ and $|\Psi^{+}\rangle_{ab}\otimes|\phi^{+}\rangle_{ab}$, cannot lead the two photons to the output modes D$_{1}$D$_{2}$ or D$_{3}$D$_{4}$; instead, the photons exit in the modes D$_{1}$D$_{4}$ or D$_{2}$D$_{3}$. 
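As a quick consistency check, the success probability and post-selected fidelity implied by this mode selection can be reproduced numerically. The following short Python sketch is illustrative only; the input fidelities $F_1=0.8$ and $F_2=0.7$ are arbitrary example values, not experimental data. It enumerates the four branches of the mixture, keeps only those that reach D$_{1}$D$_{2}$ or D$_{3}$D$_{4}$, and recovers the closed-form expression derived below.
\begin{verbatim}
# Sketch: post-selected fidelity of the hyperentanglement purification step.
F1, F2 = 0.8, 0.7  # example input fidelities (polarization, spatial mode)

# (branch label, probability, kept by the D1D2/D3D4 postselection?)
branches = [
    ("Phi+ x phi+", F1 * F2,             True),   # error-free branch
    ("Phi+ x psi+", F1 * (1 - F2),       False),  # exits in D1D4 or D2D3
    ("Psi+ x phi+", (1 - F1) * F2,       False),  # exits in D1D4 or D2D3
    ("Psi+ x psi+", (1 - F1) * (1 - F2), True),   # kept, carries a bit-flip error
]

p_success = sum(p for _, p, kept in branches if kept)
F_prime = (F1 * F2) / p_success  # weight of the error-free branch among kept events

print(f"success probability = {p_success:.3f}")
print(f"post-selected fidelity F' = {F_prime:.3f}")
assert abs(F_prime - F1 * F2 / (F1 * F2 + (1 - F1) * (1 - F2))) < 1e-12
\end{verbatim}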
Finally, by selecting the cases in which the photons are in the output modes D$_{1}$D$_{2}$ or D$_{3}$D$_{4}$, we can obtain a new mixed state: \begin{eqnarray} \rho'_{ab}=F'|\Phi^{+}\rangle_{ab}\langle\Phi^{+}|+(1-F')|\Psi^{+}\rangle_{ab}\langle\Psi^{+}|.\label{mixed3} \end{eqnarray} Here $F'=\frac{F_{1}F_{2}}{F_{1}F_{2}+(1-F_{1})(1-F_{2})}$. If $F_{1}> \frac{1}{2}$ and $F_{2}> \frac{1}{2}$, we obtain $F'> F_{1}$ and $F'> F_{2}$. In general, an error model contains not only bit-flip errors but also phase-flip errors. The phase-flip error can be converted into a bit-flip error by applying suitable unitary operations and can thus also be purified.\\ \noindent\textbf{Multicore fiber (MCF) and optical fiber locking system} \begin{figure*} \caption{\textbf{Optical fiber phase-locking system.} To reduce the effect of the reference light (orange solid line) on the signal light (red solid line) of the system, they propagate in opposite directions. The light intensities from both are weak. BD1 and BD2 constitute an MZ interferometer used to lock the phase. } \label{figs1} \end{figure*} The MCF we used here was 11~km long and had a core diameter of 8~$\mu$m and seven cores with a core-to-core distance of 41.5~$\mu$m. Each core supported a single optical mode and had transmission characteristics similar to those of a standard single-mode fiber (SMF). The crosstalk between the cores was -45~dB/100~km. The MCF was divided into seven separate SMFs through fan-in and fan-out devices. The average loss (including the insertion loss) of the fiber was 5.13~dB. The technical challenge of distributing spatial-mode entanglement over long distances is to maintain phase stability between different spatial modes. In our protocol, the phase between different MCF cores is stable. However, considering the fan-in and fan-out, the relative phase still needs to be locked. We used a fiber-locking system to lock the relative phase of the two cores. To reduce the disturbance of the quantum state by the optical-fiber phase locking, the reference light (1570~nm CW light) was counter-propagating with respect to the system light. The reference light passed through BD1 and was divided into two beams (Fig.~\ref{figs1}), which were coupled to the fan-out fibers using mirrors. The light in the lower path was reflected by the mirror of a piezoelectric ceramic material (PZT), which was used to change the length of the optical path to stabilize the phase. The reference light was then emitted from the other end through the fan-in of the optical fiber, and a Mach-Zehnder (MZ) interferometer was formed by BD1 and BD2. The phase between the BDs was stable, and all the phase changes were due to the instability of the two MCF cores. Hence, we only needed to use the PZT to adjust the position of the mirror according to the signal of the MZ interferometer composed of BD1 and BD2 to lock the phase of the optical fiber.\\ \noindent\textbf{BF and PF loading setup} In this section, we introduce the BF and PF noise loading setups in detail. These setups are composed of HWPs, liquid crystals (LCs), PBSs, and BDs, which can achieve any proportion of BF (Fig.~\ref{figs2}) or PF (Fig.~\ref{figs3}) noise loading. When voltage V$_{\pi}$ or V$_{I}$ is applied to LC1 in Fig.~\ref{figs2}(a), the operation $\sigma^{P}_{x}\equiv|H\rangle\langle V|+|V\rangle\langle H|$ or $\sigma^{P}_{I}\equiv|H\rangle\langle H|+|V\rangle\langle V|$ is applied to the polarization qubit, respectively. 
Meanwhile, when voltage V$_{I}$ or V$_{\pi}$ is applied to LC2, the operation $\sigma^{S}_{x}\equiv|b_{1}\rangle\langle b_{2}|+|b_{2}\rangle\langle b_{1}|$ or $\sigma^{S}_{I}\equiv|b_{1}\rangle\langle b_{1}|+|b_{2}\rangle\langle b_{2}|$ is applied to the spatial mode qubit, respectively. We adjust the temporal delay between the intervals corresponding to V$_{I}$ and V$_{\pi}$ applied to the two LCs. In Fig.~\ref{figs2}(b), we define $T$ as the period of the LC activation cycle, and $t_{1}$, $t_{2}$, $t_{3}$, and $T-t_{1}-t_{2}-t_{3}$ as the activation times of the operations $\sigma^{S}_{x}\otimes \sigma^{P}_{I}$, $\sigma^{S}_{x}\otimes \sigma^{P}_{x}$, $\sigma^{S}_{I}\otimes \sigma^{P}_{x}$, and $\sigma^{S}_{I}\otimes \sigma^{P}_{I}$. In the interval between $t_{2}$ and $t_{3}$, V$_{\pi}$ is added to LC1 to realize the operation $\sigma^{P}_{x}$ of the polarization qubit. In the interval between $t_{1}$ and $t_{2}$, V$_{I}$ is applied to LC2 to realize $\sigma^{S}_{x}$ for the spatial mode qubit. In this way, we can change the noise ratio by adjusting the interval times $t_{1}$, $t_{2}$, and $t_{3}$ for a fixed period $T$. In the actual experiment, we take a period $T=15 s$. \begin{figure} \caption{\textbf{Bit-flip noise setup.} \textbf{a}, The setup consists of HWPs, compensations, BDs, LC1($@45^{\circ}$), and LC2($@45^{\circ}$). When the voltage $V_{\pi}$ or $V_{I}$ is applied to the LC1, a $\sigma_{x}^{P}$ or $\sigma_{I}^{P}$ operation is performed on the polarization qubit, respectively. When V$_{I}$ or V$_{\pi}$ is applied to the LC2, a $\sigma_{x}^{S}$ or $\sigma_{I}^{S}$ operation is performed on the spatial mode qubit, respectively. \textbf{b}, The LC activation times, $t_{1}$, $t_{2}$, and $t_{3}$, correspond to the three kinds of BF noise, $\sigma_{I}^{P}\otimes\sigma_{x}^{S} $, $\sigma_{x}^{P}\otimes \sigma_{x}^{S}$, and $\sigma_{x}^{P}\otimes \sigma_{I}^{S}$, respectively. By adjusting $t_{1}$, $t_{2}$, and $t_{3}$, any proportion of BF noise can be loaded.} \label{figs2} \end{figure} \begin{figure} \caption{\textbf{Phase flip noise Setup.} \textbf{a}, The setup consists of HWPs, compensations, BDs, LC1($@0^{\circ}$), and LC2($@0^{\circ}$). When the voltage $V_{\pi}$ or $V_{I}$ is applied to the LC1, the operation $\sigma_{z}^{P}$ or $\sigma_{I}^{P}$ is performed on the polarization qubit, respectively. When the voltage $V_{\pi}$ or $V_{I}$ is applied to the LC2, the operation $\sigma_{z}^{S}$ or $\sigma_{I}^{S}$ is performed on the spatial mode qubit, respectively. \textbf{b}, Photon B is transmitted through a phase flip noise channel. This is implemented by liquid crystals LC1 and LC2. The LC activation times, $t_{1}$, $t_{2}$, and $t_{3}$, correspond to the three kinds of phase flip noise, $\sigma_{I}^{P}\otimes\sigma_{z}^{S} $, $\sigma_{z}^{P}\otimes \sigma_{z}^{S}$, and $\sigma_{z}^{P}\otimes \sigma_{I}^{S}$, respectively. } \label{figs3} \end{figure} The PF noise loading setup is similar to that of BF noise, as shown in Fig.~\ref{figs3}(a). When V$_{\pi}$ or V$_{I}$ is applied in LC1, $\sigma^{P}_{z}\equiv|H\rangle\langle H|-|V\rangle\langle V|$ or $\sigma^{P}_{I}\equiv|H\rangle\langle H|+|V\rangle\langle V|$ is added to the polarization qubits, respectively. When V$_{\pi}$ or V$_{I}$ is applied to LC2, $\sigma^{S}_{z}\equiv|b_{1}\rangle\langle b_{1}|-|b_{2}\rangle\langle b_{2}|$ or $\sigma^{S}_{I}\equiv|b_{1}\rangle\langle b_{1}|+|b_{2}\rangle\langle b_{2}|$ is added on the spatial mode qubit, respectively. 
We can use the same loading time sequence as for BF noise, as shown in Fig.~\ref{figs3}(b). In this way, we can load any proportion of PF error on the qubit in both the polarization and spatial mode DOFs.\\ \noindent\textbf{Hadamard operation for BF noise and PF noise conversion} \begin{figure} \caption{\textbf{Setup for Hadamard operation.} The Hadamard operation on the polarization entanglement and spatial mode entanglement makes $|\phi^{+}\rangle\leftrightarrow|\phi^{+}\rangle$, $|\phi^{-}\rangle\leftrightarrow|\psi^{+}\rangle$, $|\psi^{+}\rangle\leftrightarrow|\phi^{-}\rangle$, and $|\psi^{-}\rangle\leftrightarrow|\psi^{-}\rangle$. The setup for the Hadamard operation for both polarization and spatial mode qubits is similar to the setup for BF noise. Here, we replace the LCs with HWPs ($@22.5^{\circ}$). The fidelity of this Hadamard operation is $F\sim0.997$. } \label{figs4} \end{figure} Single-qubit unitary operations are important in entanglement purification. If suitable unitary operations (such as the Hadamard operation) are performed on both Alice's and Bob's sides, BF noise and PF noise can be transformed into each other ($|\phi^{+}\rangle\leftrightarrow|\phi^{+}\rangle$, $|\phi^{-}\rangle\leftrightarrow|\psi^{+}\rangle$, $|\psi^{+}\rangle\leftrightarrow|\phi^{-}\rangle$, and $|\psi^{-}\rangle\leftrightarrow|\psi^{-}\rangle$). As shown in Fig.~\ref{figs4}, the experimental setup for the Hadamard operation on the polarization and spatial modes includes HWPs and BDs. Through the setup, the Hadamard operations on the polarization and path qubits can be expressed as: \begin{equation} H_{P}\otimes H_{S}=\frac{1}{\sqrt{2}}\left( \begin{array}{cc} 1 & 1 \\ 1 & -1 \\ \end{array} \right)\otimes \frac{1}{\sqrt{2}}\left( \begin{array}{cc} 1 & 1 \\ 1 & -1\\ \end{array} \right). \end{equation}\\ \noindent\textbf{Polarization-spatial mode hybrid tomography} \begin{figure} \caption{\textbf{Measurement setup for the polarization and spatial mode qubits.} } \label{figs5} \end{figure} As shown in Fig.~\ref{figs5}, we use a BD, HWPs, QWPs, and a PBS to perform tomographic analysis of the polarization and spatial mode states. When we need to measure one of the DOFs (polarization or spatial mode), we need to trace out the other DOF and then conduct standard qubit tomography. If we need to do tomography of the spatial mode DOF, we set HWP1-2 and QWP1-2 appropriately so that the upper and lower $|H\rangle$ polarization states (or $|V\rangle$ polarization states) pass through the BD. Then we use HWP3, QWP3, and the PBS to complete the tomography of the path qubit. If we need to measure the polarization qubit state, we first select one of the paths $|0\rangle$ (or $|1\rangle$) through HWP3, QWP3, and the PBS and then complete the tomography of the polarization qubit through HWP1-2, QWP1-2, and the BD. \\ \noindent\textbf{Fidelity estimation} We can estimate the fidelity of the state after purification from the density matrix of the state before purification. We take the purification process with $20\%$ loaded BF noise as an example. During distribution, small amounts of BF and PF noise are introduced by the fiber noise environment. The fidelities of the hyperentangled state in the polarization and spatial modes are $F_{P}=0.965\pm0.001$ and $F_{S}=0.953\pm0.001$ after distribution. 
After loading $20\%$ BF noise, the density matrices of the polarization state $\rho^{0.8}_{P}$ and the spatial mode state $\rho^{0.8}_{S}$ become \begin{widetext} \begin{eqnarray} \rho^{0.8}_{P}=\left( \begin{array}{cccc} 0.397 + 0.000i & -0.001 - 0.003i & 0.004 - 0.005i & 0.377 - 0.003i \\ -0.001 + 0.003i & 0.111 + 0.000i & 0.090 + 0.001i & -0.006 + 0.004i \\ 0.004 + 0.005i & 0.090 - 0.001i & 0.102 + 0.000i & -0.003 + 0.001i \\ 0.377 + 0.003i & -0.006 - 0.004i & -0.003 - 0.001i & 0.390 + 0.000i \\ \end{array} \right), \\ \rho^{0.8}_{S}=\left( \begin{array}{cccc} 0.400 + 0.000i & 0.002 + 0.008i & 0.005 + 0.001i & 0.379 - 0.002i \\ 0.002 - 0.008i & 0.109 + 0.000i & 0.092 + 0.003i & -0.006 + 0.001i \\ 0.005 - 0.001i & 0.092 - 0.003i & 0.107 + 0.000i & -0.000 - 0.002i \\ 0.377 + 0.002i & -0.006 - 0.001i & -0.000 + 0.002i & 0.384 + 0.000i \\ \end{array} \right). \end{eqnarray} \end{widetext} We then apply the CNOT operation between the polarization and spatial-mode DOFs at Alice's and Bob's sites, respectively. After that, the spatial-mode states associated with the same polarization (HH or VV) are preserved, and those associated with different polarizations are discarded: \begin{eqnarray} \rho_{P}=\mathrm{Tr}_{(HH,\,VV)}\left(\mathrm{CNOT}\,\rho^{0.8}_{P}\otimes\rho^{0.8}_{S}\right). \end{eqnarray} The estimated theoretical fidelity after purification is $F=0.896$, which is very close to the fidelity of our experiment, $F^{0.8}=0.887$. A similar method can be used to estimate the fidelity after purification under other noise conditions. In the cases of $30\%$ BF noise, $20\%$ PF noise, and an MCF only, the estimated fidelities after purification are 0.778, 0.932, and 0.985, respectively.\\ \noindent\textbf{Purification efficiency} We analyse the efficiency of this protocol in detail. The efficiency consists of three parts. The first one comes from the protocol itself. The success probability of the protocol is $P_{P}=F_{1}F_{2}+(1-F_{1})(1-F_{2})$. The second part comes from the contribution of the entanglement sources, i.e., the generation efficiency of the SPDC implementation. We denote this part of the efficiency as $P_{s}$. The third part comes from transmission losses. In this protocol, the entanglement is distributed to distant locations. To perform the purification, we should ensure the entangled state does not experience loss. Therefore, the transmission efficiency in the optical fiber is $\eta=10^{\frac{-\alpha L}{10}}$ with $\alpha\simeq0.2$~dB/km at 1550~nm \cite{optical}, where $L$ is the entanglement distribution distance. The total efficiency $P_{one}$ can be written as \begin{eqnarray} P_{one}&=&P_{P} \ast P_{s}\ast\eta\nonumber\\ &=&(F_{1}F_{2}+(1-F_{1})(1-F_{2}))\ast P_{s}\ast 10^{\frac{-\alpha L}{10}}. \end{eqnarray} We can also estimate the purification efficiency using two pairs of mixed states with linear optics. In linear optics, the CNOT gate succeeds only with a probability of 1/4. Each purification round works in a heralded way, and at least one pair of the mixed states has to be measured. In this way, the success probability of the protocol is $P'_{P}=\frac{1}{4}(F_{1}F_{2}+(1-F_{1})(1-F_{2}))$. For the efficiency of the SPDC implementation, the advantages of our protocol and of the two-copy EPPs need to be compared under the same conditions. An ultrafast pulsed laser is usually used in multiphoton experiments~\cite{experiment1}, while a CW laser is used in our experiment. In principle, after strict compensation, our experimental setup can also be pumped by an ultrafast pulsed laser~\cite{Hu2019}. 
For comparison with the two-copy SPDC implementation, we assume that our photon source is also pumped by an ultrafast pulsed laser (76~MHz). The success probability of generating two pairs of entangled states is $P^{2}_{s}$. Both pairs experience the noise during the entanglement distribution, and the success probability of obtaining two pairs of entangled states is $\eta^{2}$. Finally, we can estimate the efficiency as \begin{eqnarray} P_{two}&=&\frac{1}{4}P_{P} \ast P^{2}_{s}\ast \eta^{2}\nonumber\\ &=&\frac{1}{4}(F_{1}F_{2}+(1-F_{1})(1-F_{2}))\ast P^{2}_{s}\ast (10^{\frac{-\alpha L}{10}})^{2}.\nonumber\\ \end{eqnarray} The efficiency ratio of the two protocols can be written as \begin{eqnarray} \frac{P_{one}}{P_{two}}=\frac{4}{P_{s}\eta}. \end{eqnarray} In the current entanglement source, $P_{s}=C/(\varepsilon^2\times \mathrm{Rep})=2.4\times10^{3}/(0.18^{2}\times76\times10^{6})\sim0.001$ ($C$ is the coincidence count per second before the fiber, $\varepsilon$ is the photon coupling efficiency, and $\mathrm{Rep}$ is the repetition rate of the pump light). For an 11~km fiber, we can estimate $\eta\sim 0.602$. We obtain $\frac{P_{one}}{P_{two}}\sim 6.6\times10^{3}$. \\ \noindent\textbf{Purifying nonlocal quantum states from local quantum states} Quantum nonlocality is an important feature of entangled states. Many important quantum information tasks rely on quantum nonlocality, such as device-independent quantum key distribution \cite{DIQKD}, device-independent quantum secure direct communication \cite{DIQSDC}, and quantum communication complexity \cite{Brukner2004}. However, not all entangled states can show quantum nonlocality \cite{Horodecki2009}. In our experiment, we also demonstrate the ability to distill nonlocality from local resources. We use the Clauser-Horne-Shimony-Holt (CHSH) inequality to verify the nonlocality of quantum states. Alice and Bob measure the observables $A_{1}$, $A_{2}$ and $B_{1}$, $B_{2}$, respectively. For local hidden variable models, the correlations satisfy \begin{eqnarray} S=\langle A_{1}B_{1}\rangle+\langle A_{1}B_{2}\rangle+\langle A_{2}B_{1}\rangle-\langle A_{2}B_{2}\rangle\leq 2. \end{eqnarray} When $S>2$, the correlations cannot be explained by any local hidden variable theory, which demonstrates quantum nonlocality. For a maximally entangled state, the maximal value is $S=2\sqrt{2}$. In our experiment, when the PF noise is $30\%$, we can calculate the violation of the CHSH inequality before and after purification from the density matrices. Before purification, $S_{S}=1.837\pm0.006 < 2$ for the spatial-mode entanglement and $S_{P}=1.829\pm0.006 < 2$ for the polarization entanglement. After purification, $S=2.128\pm0.006>2$. This proves that we distill nonlocality from local quantum states. \\ \end{document}
\begin{document} \title{General properties of Nonsignaling Theories} \author{Ll. Masanes$^1$, A. Acin$^2$ and N. Gisin$^3$} \affiliation{ $^{1}$Department of Mathematics, University of Bristol, BS8 1TW Bristol, UK\\ $^{2}$ICFO-Institut de Ci\`encies Fot\`oniques, 08034 Barcelona, Spain\\ $^{3}$GAP-Optique, University of Geneva, 20 Rue de l'\'Ecole de M\'edicine, CH-1211 Geneva 4, Switzerland} \date{\today} \begin{abstract} This article identifies a series of properties common to all theories that do not allow for superluminal signaling and predict the violation of Bell inequalities. Intrinsic randomness, uncertainty due to the incompatibility of two observables, monogamy of correlations, impossibility of perfect cloning, privacy of correlations, bounds on the shareability of some states; all these phenomena are solely a consequence of the no-signaling principle and nonlocality. In particular, it is shown that, for any distribution, the properties of (i) nonlocality, (ii) not being arbitrarily shareable, and (iii) positive secrecy content are equivalent. \end{abstract} \maketitle \section{Introduction} There are two experimental facts that, when considered together, significantly restrict any possible physical theory that aims at accounting for them. The first one is the constancy of the speed of light in any reference frame. This implies that no signal carrying information can propagate faster than light. More generally, we refer to the impossibility of sending information arbitrarily fast as {\em the no-signaling principle}. The second fact is the existence of correlations between space-like separated events that violate Bell inequalities \cite{Bell,aspect}. This means that such correlations cannot be explained by strategies arranged in the past. Models accounting for such correlations can be constructed by assuming some signaling between the correlated events. But this seems to contradict the first experimental fact. This is the reason why such correlations are called {\em nonlocal}. Despite this, physical theories exist that predict the violation of Bell inequalities and are nonsignaling, an example being Quantum Mechanics (QM). QM is not the unique theory consistent with the two mentioned experimental facts. It is well known that there exist nonsignaling correlations that are more nonlocal than the ones predicted by QM. Indeed, Popescu and Rohrlich proved that there are nonsignaling correlations giving a Bell inequality violation larger than the quantum mechanical prediction \cite{pr}. This suggests the possible existence of theories, different from QM, that allow for Bell inequality violation without contradicting the no-signaling principle. Although there is no experimental reason to reject QM, it is highly desirable to know the nature of these alternative theories in order to ``study quantum physics from the outside''. In this article, we aim at providing a unified picture for the {\em static} part [we do not consider {\em dynamics}] of all such theories, identifying a series of features common to all of them. Analyzing these common properties can be very useful in gaining a better understanding of QM. It is often said that the postulates of QM do not have a clear physical meaning, especially when compared with the postulates of other theories, like Relativity or Thermodynamics. The postulates of QM imply no-signaling [if we assume locality of interactions] and nonlocality. 
It was proposed by Popescu and Rohrlich to consider no-signaling and the existence of nonlocal correlations as proper physical principles. Could these two principles, together with other {\em independent} postulates, imply QM? What would these other postulates look like? For such an enterprise, it is very important to learn all the consequences that follow from these two principles without any extra assumption. From an information-theoretical point of view, it is also worth looking at a framework more general than QM, as illustrated by several recent works analyzing the use of nonlocal correlations as an information-theoretical resource \cite{NSC,infres}. This is of particular interest in the case of secret communication: there, the security of a protocol relies on some assumptions on the eavesdropper's capabilities. Usually, it is assumed that her computational power is bounded, or that her action is constrained by QM laws. It is then desirable to weaken the strength of these assumptions as much as possible. In this sense, a secret key distribution protocol was recently proposed in \cite{crypto} and its security was proven solely using the no-signaling principle. In this article, we extend the connection between nonlocality and secrecy to the level of an equivalence. Notice that the fact that a probability distribution contains secrecy does not imply that it can be distilled into a secret key (see below). \subsection{Summary and results} The article is organized as follows: in section 2 nonsignaling correlations are introduced, and local and nonlocal ones are distinguished. Special emphasis is placed on a particular family of distributions that we call isotropic, which will prove very useful in later reasonings. In section 3, different aspects of monogamy in nonlocal correlations are presented. In particular, the complete equivalence between locality and infinite shareability is proven (section 3.1). In section 3.2, through some examples, we survey the complex structure of the monogamy relations. In section 4 we prove that any nonsignaling theory that predicts the violation of at least one Bell inequality has a No-Cloning Theorem. Some additional analysis is made for the case of QM. In section 5 we prove that nonsignaling correlations contain secrecy (in the sense of cost) if and only if they are nonlocal. In section 6 we review the fact that all nonlocal correlations must have nondeterministic outcomes. In section 6.1 we show that the more incompatible two observables are, the more uncertain their outcomes are. Finally, we conclude with some final remarks, exposing some open questions. Some additional material and proofs are contained in the appendices. \section{Definitions and general frame} Consider $n$ parties ---Alice, Bob, Clare\ldots--- each possessing a physical system, which can be measured with different observables. Denote by $x_k$ the observable chosen by party $k$, and by $a_k$ the corresponding measurement outcome. The joint probability distribution for the outcomes, conditioned on the observables chosen by the $n$ parties, is \begin{equation}\label{correlation} P(a_1,\ldots, a_n|x_1,\ldots, x_n). \end{equation} One can formulate this scenario in an equivalent and slightly more abstract way. Imagine that each of the $n$ parties has a physical device with an input and an output. Just after the $k^{\mbox{\scriptsize th}}$ party inputs $x_k$, the device outputs $a_k$, and it cannot be used anymore. 
Throughout this article, we assume that inputs and outputs take values from finite, but arbitrarily large, alphabets: $x_k\in\{0,1,\ldots, X_k-1\}$ and $a_k\in\{0,1,\ldots, A_k-1\}$. Notice that, without loss of generality, we assume that all observables belonging to one party have the same number of outcomes. It is useful to look at these conditioned probability distributions (\ref{correlation}) as points in a high-dimensional space. The set of all these points (\ref{correlation}) is a convex polytope. If no other constraints are imposed, (\ref{correlation}) can be any vector of positive numbers satisfying the normalization conditions \begin{equation} \sum_{a_1,\ldots a_n} P(a_1,\ldots a_n|x_1,\ldots x_n)=1 \end{equation} for all input values $x_1,\ldots x_n$. \subsection{Nonsignaling correlations} The $n$-partite distribution $P(a_1,\ldots a_n|x_1,\ldots x_n)$ is nonsignaling when the marginal distribution for each subset of parties $\{a_{k_1},\ldots a_{k_m}\}$ only depends on its corresponding inputs \begin{equation}\label{totes} P(a_{k_1},\ldots a_{k_m}|x_1,\ldots x_n)= P(a_{k_1},\ldots a_{k_m}|x_{k_1},\ldots x_{k_m}). \end{equation} It turns out that very few of these conditions are linearly independent. It was proved in \cite{NSC} that all conditions of the form (\ref{totes}) can be derived from the following {\bf Condition:} For each $k\in\{1,\ldots n\}$ the marginal distribution obtained when tracing out $a_k$ is independent of $x_k$: \begin{eqnarray}\label{ns} \sum_{a_k} P(a_1,\ldots a_k\ldots a_n |x_1,\ldots x_k\ldots x_n) \\ =\sum_{a_k} P(a_1,\ldots a_k\ldots a_n |x_1,\ldots x'_k\ldots x_n), \nonumber \end{eqnarray} for all values of $a_1,\ldots a_{k-1},a_{k+1}\ldots a_n$ and $x_1,\ldots x_k, x'_k, x_{k+1}\ldots x_n$. These linear constraints characterize an affine set. The intersection of this set with the polytope of distributions (\ref{correlation}) gives another convex polytope. Throughout this article, whenever we refer to distributions, correlations, states or points, we always assume they belong to the nonsignaling polytope. \subsection{Local correlations} Local correlations are the ones that can be generated if the parties share classical information, or equivalently, the ones that can be written as \begin{eqnarray}\label{local} P(a_1,\ldots a_n|x_1,\ldots x_n) \\ = \sum_e P(e) P(a_1|x_1,e)\cdots P(a_n|x_n,e). \nonumber \end{eqnarray} This subset of correlations is a convex polytope delimited by two kinds of facets. The first kind guarantees that all the components of (\ref{local}) are positive and is thus not interesting. Actually, these are already facets of the nonsignaling (and also of the more general) polytope. The second kind are the Bell inequalities, which can be violated by nonlocal correlations. Throughout this article we assume that all Bell inequalities have been normalized [with a transformation of the form $\mathcal{B} \rightarrow \alpha\mathcal{B}+\beta$, where $\alpha$ and $\beta$ are real numbers], such that the local bound is $\mathcal{B}[P_{\mbox{\tiny{LOCAL}}}]\leq0$, and the maximal violation compatible with no-signaling is $\mathcal{B}[P_{\mbox{\tiny{MAX}}}]=1$. As said above, local correlations can be generated with shared randomness and local operations. In expression (\ref{local}), the random variable $e$ stands for the information shared among the parties, sometimes called {\em local hidden variable}. Depending on its value, the $k^{\mbox{\scriptsize th}}$ party locally generates $P(a_k|x_k,e)$. 
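For concreteness, the following Python sketch (illustrative only; the specific hidden-variable model is an arbitrary random choice, not taken from the text) builds a bipartite distribution of the form (\ref{local}) from a shared hidden variable and verifies numerically that it satisfies the no-signaling conditions (\ref{ns}).
\begin{verbatim}
import numpy as np

# Sketch: build P(a,b|x,y) = sum_e P(e) P(a|x,e) P(b|y,e) and check
# that the marginals are independent of the remote party's input.
rng = np.random.default_rng(0)
X = Y = A = B = 2          # binary inputs and outputs
E = 4                      # number of values of the shared hidden variable

P_e = rng.dirichlet(np.ones(E))                # distribution of the hidden variable
P_a = rng.dirichlet(np.ones(A), size=(X, E))   # P(a|x,e), shape (X, E, A)
P_b = rng.dirichlet(np.ones(B), size=(Y, E))   # P(b|y,e), shape (Y, E, B)

# P[a, b, x, y] = sum_e P(e) P(a|x,e) P(b|y,e)
P = np.einsum('e,xea,yeb->abxy', P_e, P_a, P_b)

marg_a = P.sum(axis=1)     # Alice's marginal P(a|x,y), shape (A, X, Y)
marg_b = P.sum(axis=0)     # Bob's marginal   P(b|x,y), shape (B, X, Y)
assert np.allclose(marg_a[:, :, 0], marg_a[:, :, 1])   # independent of y
assert np.allclose(marg_b[:, 0, :], marg_b[:, 1, :])   # independent of x
print("local model satisfies the no-signaling conditions")
\end{verbatim}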
The distributions that cannot be written like (\ref{local}) are called nonlocal. \subsection{Quantum correlations} We call quantum those correlations that can be generated if the parties share quantum information [entanglement], or equivalently, those correlations that can be written as \begin{equation}\label{quantum} P(a_1,\ldots a_n|x_1,\ldots x_n) =\mbox{tr}\!\left[ F_{a_1}^{(x_1)}\!\otimes\cdots\otimes F_{a_n}^{(x_n)} \rho \right], \end{equation} where $\rho$ is a quantum state, namely a unit-trace, semi-definite positive matrix, and $\{F_{0}^{(x_k)},\ldots F_{A_k-1}^{(x_k)}\}$ define what is called a {\em positive operator valued measure} \cite{POVM}. That is, a set of positive operators $\{F_{a_k}^{(x_k)}\}$ satisfying $\sum_{a_k}F_{a_k}^{(x_k)}=\leavevmode\hbox{\small1\normalsize\kern-.33em1},\,\forall\,x_k$. \subsection{Isotropic correlations} Let us define a particular family of bipartite distributions with binary input/output. In the case where the marginal distributions for $a$ and $b$ are unbiased, all the information of $P(a,b|x,y)$ is contained in the four correlation functions: \begin{eqnarray}\label{} C_{xy}=&+& P(0,0|x,y)+P(1,1|x,y) \\ &-& P(0,1|x,y)-P(1,0|x,y),\nonumber \end{eqnarray} for $xy=00,01,10,11$. One can always fix $C_{00},C_{01},C_{10}\geq 0$ by performing local reversible transformations. Once we have a distribution in this canonical form, its nonlocality is decided by the CHSH inequality \cite{chsh} in standard form: \begin{equation}\label{chsh} \mathcal{B}_{\mbox{\tiny CHSH}}=\frac{1}{2}\left[ C_{00}+C_{01}+C_{10}-C_{11} \right]-1. \end{equation} We call isotropic, denoted by $P_{\mbox{\tiny ISO}}(a,b|x,y)$, those correlations with unbiased marginal distributions for $a$ and $b$ that satisfy \begin{equation}\label{isoP} C_{00}=C_{01}=C_{10}=-C_{11} \geq 0. \end{equation} This family depends on a unique parameter $C=C_{00}$, whose relation to the CHSH violation is \begin{equation} \mathcal{B}_{\mbox{\tiny CHSH}}[P_{\mbox{\tiny ISO}}]=2 C-1. \end{equation} In figure \ref{chshfig} we can see for which values of $C$ the distribution $P_{\mbox{\tiny ISO}}$ belongs to the local and quantum set. \begin{figure} \caption{Value of $C$ for isotropic correlations (\ref{iso2}).} \label{chshfig} \end{figure} When $C=1$, this distribution is known as PR-box \cite{pr,NSC}, and is usually written as \begin{equation}\label{iso} P_{\mbox{\tiny PR}}(a,b|x,y)= \left\{ \begin{array}{ll} 1/2 & \mbox{if }\ a+b \bmod 2=xy \\ 0 & \mbox{otherwise} \\ \end{array} \right. . \end{equation} This distribution can be considered the paradigm of nonlocal and nonsignaling correlations (see \cite{conversion}). With this definition, we can express any $P_{\mbox{\tiny ISO}}$ as the following mixture \begin{equation}\label{iso2} P_{\mbox{\tiny ISO}}=C P_{\mbox{\tiny PR}}+ (1-C) P^A_{\mbox{\tiny N}}P^B_{\mbox{\tiny N}}, \end{equation} where $P^A_{\mbox{\tiny N}}$ is the local noise distribution for Alice, independently of the inputs. Thus, one can interpret $C$ as the probability of sharing a PR-box instead of local noise. \section{Monogamy of nonlocal correlations} While classical correlations can be shared among an indefinite number of parties, it is well known that quantum correlations cannot. This fact is often called {\em monogamy of entanglement} \cite{monogamy}. In this section we prove that this is a generic feature of all non-signaling theories. First, let us recall a result already mentioned in \cite{NSC}. 
{\em All Bell inequalities for which the maximal violation consistent with no-signaling is attained by a unique distribution have monogamy constraints}. Suppose that $\mathcal{B}$ is a Bell inequality with unique maximal violator $P_{\mbox{\tiny{MAX}}}$. If Alice and Bob maximally violate this inequality, $\mathcal{B}[P(a,b|x,y)]=1$, then Alice and Clare are completely uncorrelated. To prove this, first notice that because all Bell inequalities $\mathcal{B}[P]$ are linear in $P$, $P_{\mbox{\tiny{MAX}}}$ must be an extreme point of the Alice-Bob polytope. Otherwise, the maximal violator would not be unique. Second, using the definition of marginal distribution and the no-signaling condition we have \begin{eqnarray} P_{\mbox{\tiny{MAX}}}(a,b|x,y) &=& \sum_c P(a,b,c|x,y,z)\nonumber\\ &=& \sum_c P(a,b|x,y,z,c)P(c|x,y,z)\nonumber\\ &=&\sum_c P(a,b|x,y,z,c)P(c|z), \end{eqnarray} for all $z$. But, because $P_{\mbox{\tiny{MAX}}}(a,b|x,y)$ is extremal, any such decomposition must consist of only one term. This implies that Clare is uncorrelated with Alice and Bob. Actually, one can prove that all the CGLMP inequalities have a unique nonsignaling probability distribution achieving their algebraic maximum. This well-known set of inequalities was first proposed in \cite{CGMLP} for the case of two inputs with $d$ possible outputs. One can easily see that imposing no-signaling and maximal violation of a CGLMP inequality identifies a unique probability distribution $P(a,b|x,y)$. This means that this set of Bell inequalities has the previous monogamy condition. \subsection{$m$-shareability and locality} Shareability represents a natural property in the analysis of the monogamy of correlations. A bipartite probability distribution $P(a,b|x,y)$ is said to be $m$-shareable with respect to Bob if there exists an $(m+1)$-partite distribution $P(a,b_1,\ldots b_m|x,y_1,\ldots y_m)$ that is symmetric with respect to $(b_1,y_1)\cdots (b_m,y_m)$, with marginals $P(a,b_i|x,y_i)$ equal to the original distribution $P(a,b|x,y)$. The following result shows the relation between shareability and nonlocality. {\bf Result 1:} {\em If $P(a,b|x,y)$ is $m$-shareable with respect to Bob, then it satisfies all Bell inequalities with $m$ (or fewer) different values for the input $y$.} {\em Proof:} To prove this statement, we construct a local model for $P(a,b|x,y)$ when $y$ is constrained to $y=1,\ldots m$ [without loss of generality]. By assumption $P(a,b_1,\ldots b_m|x,y_1,\ldots y_m)$ exists, and so do $P(b_1,\ldots b_m|y_1,\ldots y_m)$ and $P(a|x,b_1,\ldots b_m,y_1,\ldots y_m)$. In this local model, the information shared by the parties is the string $(b_1,\ldots b_m)$, when the corresponding inputs are fixed to $y_1=1, \ldots y_m=m$. Thus, using the definition of conditional probabilities, we can decompose $P(a,b|x,y)$ in the following way \begin{eqnarray} P(a,b|x,y)= \sum_{b_1,\ldots b_m} P(b_1,\ldots b_m|1,\ldots m) \\ \times P(a|x,b_1,\ldots b_m,1,\ldots m) \delta_{b,b_y} \nonumber. \end{eqnarray} Here the three factors in each term of the sum have to be interpreted as the $P(e)$, $P(a|x,e)$ and $P(b|y,e)$ appearing in the decomposition (\ref{local}), respectively. Note that this result represents the extension of Theorem 2 in \cite{terhal}, derived for quantum states, to the more general nonlocal scenario. It also implies that if a state is $X/Y$-shareable with respect to Alice/Bob, then it is local. In particular, two-shareable states do not violate the CHSH nor the CGLMP inequalities. 
A converse of the previous result is also true: {\em if a state is local, then it is $\infty$-shareable with respect to any party}. To show the last statement, we explicitly construct the extension [to $m$ Bobs] for the arbitrary local correlations written in (\ref{local}): \begin{eqnarray}\label{extension} P(a,b_1,\ldots b_m|x,y_1,\ldots y_m) \\ =\sum_e P(e) P(a|x,e) P(b_1|y_1,e) \cdots P(b_m|y_m,e), \nonumber \end{eqnarray} with each distribution $P(b_i|y_i,e)$ being equal to the $P(b|y,e)$ that appears in (\ref{local}). We can merge the previous two statements into the following one: {\bf Result 2: }{\em locality and $\infty$-shareability are equivalent properties.} This result is analogous to what happens in QM: a bipartite quantum state is $\infty$-shareable if and only if it is separable \cite{share}. \subsection{Examples} In what follows, we show that the CHSH inequality presents an even stronger kind of monogamy. {\bf Result 3: }{\em Consider a binary input/output tripartite distribution $P(a,b,c|x,y,z)$. If Alice and Bob's marginal is nonlocal, then Alice and Clare's marginal must be local.} \begin{equation} \mathcal{B}_{\mbox{\tiny CHSH}}[P(a,b|x,y)]>0\ \Rightarrow\ \mathcal{B}_{\mbox{\tiny CHSH}}[P(a,c|x,z)]\leq 0 \end{equation} {\em Proof.} We prove this statement by contradiction. Suppose that there exists a tripartite distribution $P(a,b,c|x,y,z)$ such that both $P(a,b|x,y)$ and $P(a,c|x,z)$ are nonlocal. Then Alice-Bob, and simultaneously Alice-Clare, can depolarize their bipartite correlations and transform them into isotropic ones, without decreasing the Bell violation. This procedure is shown in Appendix A. Then, if Alice-Bob have a larger $C$ than Alice-Clare, Bob decreases it until both are equal (this procedure is also explained in Appendix A). The analogous thing is done in the opposite situation. After these manipulations, both marginals are isotropic and have the same value of $C$. This implies that the two marginals are equal, and thus two-shareable. In section 3.1 we have seen that a two-shareable state cannot violate CHSH. This completes the contradiction. In more general situations strict monogamy no longer holds. Indeed, one can easily design a situation where Alice shares a PR-box with Bob, and another with Clare. This corresponds to a case where Alice can choose between 4 inputs with 4 outputs, while Bob and Clare are restricted to the simplest case of $Y=Z=B=C=2$. Clearly, the corresponding Alice-Bob and Alice-Clare distributions violate the CHSH inequality. A nicer and more symmetric example, with only two inputs for each party, is given by the following tripartite distribution \begin{equation}\label{polygame} P^{ABC}=\frac{1}{2} P^{AB}_{\mbox{\tiny PR}\{0,1\}} P^C_{\mbox{\tiny N}\{0,1\}} + \frac{1}{2} P^{AC}_{\mbox{\tiny PR}\{2,3\}} P^B_{\mbox{\tiny N}\{2,3\}}, \end{equation} where $P_{\mbox{\tiny PR}\{\alpha,\beta\}}$ is a PR-box with outputs restricted to $a,b\in\{\alpha,\beta\}$, $P_{\mbox{\tiny N}\{\alpha,\beta\}}$ is a local noise distribution with outputs restricted to $a,b\in\{\alpha,\beta\}$, and the superindices label the parties. In what follows, we prove that the Alice-Bob marginal \begin{equation} P^{AB}=\frac{1}{2} P^{AB}_{\mbox{\tiny PR}\{0,1\}}+ \frac{1}{2} P^{A}_{\mbox{\tiny N}\{2,3\}} P^B_{\mbox{\tiny N}\{2,3\}}, \end{equation} is nonlocal. Assume the opposite: $P^{AB}$ can be expressed as a mixture of local extreme points (\ref{local}). 
Because each local extreme point has determined outcomes, we can split the local mixture into a part with outcomes $\{2,3\}$ and a part with outcomes $\{0,1\}$. The latter would correspond to a local decomposition of $P^{AB}_{\mbox{\tiny PR}\{0,1\}}$, but we know that no such decomposition exists. Now, using the symmetry of (\ref{polygame}), we conclude that its marginals $P^{AB}$ and $P^{AC}$ are both nonlocal. In the case $X=Y=2$ and $A,B$ arbitrary, there is a situation where strong monogamy still holds: when the reduced states of Alice-Bob and Alice-Clare both consist of isotropic correlations with non-uniform noise [independent of the inputs]. First, let us generalize the idea of isotropic distributions to arbitrary output alphabets. The generalization of the PR-box is \cite{NSC} \begin{equation}\label{prd} P_{\mbox{\tiny PR}}(a,b|x,y)= \left\{ \begin{array}{ll} 1/A & \mbox{if }\ a-b\bmod A=xy \\ 0 & \mbox{otherwise} \\ \end{array} \right. . \end{equation} In a natural way, we define \begin{equation}\label{isod} P_{\mbox{\tiny ISO}}^{AB}=C P_{\mbox{\tiny PR}}^{AB}+ (1-C) P^A_{\mbox{\tiny IND}} P^B_{\mbox{\tiny IND}}, \end{equation} where $P^A_{\mbox{\tiny IND}}$ is an arbitrary local distribution for Alice, independent of the inputs. It is clear that if Alice and Bob add to their outputs a shared random number modulo $A$: \begin{eqnarray} a \rightarrow a + r \bmod A \\ b \rightarrow b + r \bmod A , \end{eqnarray} their distribution becomes: \begin{equation} P_{\mbox{\tiny ISO}}^{AB} \rightarrow C P_{\mbox{\tiny PR}}^{AB}+ (1-C) P^A_{\mbox{\tiny N}} P^B_{\mbox{\tiny N}}, \end{equation} where $P^{A/B}_{\mbox{\tiny N}}$ is the (local) uniform distribution independent of the input $x/y$. As in the case $A=B=2$, if $C$ is positive, one of the parties can decrease its value by performing a local operation. Using the same trick as before, one can prove that all tripartite distributions where the marginals Alice-Bob and Alice-Clare are both isotropic with non-uniform noise (\ref{isod}) show strong monogamy. \section{No-Cloning} The Quantum No-Cloning Theorem represents one of the cornerstones of Quantum Information Theory. It is usually explained as a consequence of the nonorthogonality of quantum states and the linearity of quantum time evolution. The relation between quantum cloning and no-signaling has also been studied by several authors. Indeed, if one assumes that (i) states are described by vectors in Hilbert spaces, (ii) probabilities are obtained according to the usual trace rule, and (iii) no-signaling, the optimal fidelity of a cloning machine cannot be larger than the one allowed by quantum dynamics \cite{gisin}. In what follows, we formulate the problem independently of QM and show that {\bf Result 4:} {\em All nonsignaling theories predicting the violation of Bell inequalities have a no-cloning theorem.} A similar result was proved for the case of the CHSH inequality by R. F. Werner in \cite{qit}. Here we prove it for general nonlocal theories, not necessarily violating the CHSH inequality. Suppose that there exists a machine to which we can input a physical system [in an arbitrary state], and it outputs two systems in exactly the same state as the original one. We call such an engine a {\em perfect cloning machine}. Let us consider the following situation: Alice and Bob share the nonlocal distribution $P(a,b|x,y)$, and perform the following two space-like separated events. On one site, Alice chooses the input $x_0$ and obtains the output $a_0$. 
On the other site, Bob performs $m$ clones of its original system. For an observer who first sees the event on Alice's site, the description of Bob's input system is $P(b|y,x_0,a_0)$. For this observer, Bob's system is completely uncorrelated with the rest of the universe, and the functioning of the perfect cloning machine is unambiguous: \begin{equation}\label{cln} P(b|y,x_0,a_0) \rightarrow P(b_1,\ldots b_m|y_1,\ldots y_m,x_0,a_0) \end{equation} Obviously, the joint state of all clones $P(b_1,\ldots b_m|y_1,\ldots y_m,x_0,a_0)$ is such that when we trace out all but one, $P(b_i|y_i,x_0,a_0)$, this distribution is the same as the original one, $P(b|y,x_0,a_0)$. Because we consider a perfect cloning machine, there is no distinction between pure and mixed states: all are perfectly cloned. For an observer who first sees Bob's operation, the description of the physical situation is \begin{equation}\label{cln2} P(a,b_1,\ldots b_m|x,y_1,\ldots y_m)\ . \end{equation} But, because all descriptions must give consistent predictions, the descriptions from the point of view of the two mentioned observers (\ref{cln}) and (\ref{cln2}) must be the same, up to conditioning on $a$. This implies that the original distribution $P(a,b|x,y)$ is $m$-shareable. More concretely, because $m$ is arbitrary, we can say that $P(a,b|x,y)$ is $\infty$-shareable. According to the result of section 3.1, the original distribution $P(a,b|x,y)$ must be local, in contradiction with the initial assumption. \subsection{Phase covariant cloning machine} Once we have ruled out the existence of a perfect cloning machine, it is interesting to look for the optimal imperfect one. Suppose that its action is \begin{equation} P(a,b|x,y) \quad\longrightarrow\quad P(a,b_1,b_2|x,y_1,y_2), \end{equation} where, without loss of generality, we can assume that the final distribution is symmetric with respect to $(b_1,y_1)$ and $(b_2,y_2)$. By definition, the reduced distribution $P(a,b_i|x,y_i)$ is two-shareable. This implies that it cannot violate any two-input Bell inequality. In particular, if the initial distribution $P(a,b|x,y)$ has $Y=2$, the resulting clones are correlated with Alice's system in a local way. Let us consider a particular case in the binary input/output scenario. Consider that Alice and Bob share an isotropic distribution with parameter $C$. Bob clones his subsystem, and, according to the previous paragraph, the resulting clones are locally correlated with Alice's subsystem. If we suppose that the clones are isotropically correlated with Alice, the maximum value for their parameter is $C_{\mbox{\tiny CLN}}=1/2$. Thus, the {\em shrinking factor} associated to this cloning operation is \begin{equation}\label{sf} \frac{C_{\mbox{\tiny CLN}}}{C}=\frac{1}{2\,C}. \end{equation} Now, consider the isotropic correlations that arise when measuring a singlet with the observables that maximize the CHSH violation, that is, $P_{\mbox{\tiny ISO}}$ with $C=1/\sqrt{2}$. In this case, the shrinking factor (\ref{sf}) coincides with that of the {\em phase covariant quantum cloning machine}, $1/\sqrt{2}$ \cite{phasecovariant}; that is, QM attains this maximum value for the cloning of nonlocal correlations. In this sense, QM clones the quantum correlations achieving the Cirelson bound in an optimal way. \section{Non-locality and privacy} The monogamy of correlations and the impossibility of perfect cloning seem immediately related to the concept of privacy. 
If two honest parties know that they share correlations with some degree of monogamy, they can estimate and possibly bound their correlations with a third dishonest party, the eavesdropper. In this section we strengthen this intuitive idea, proving that under the no-signaling assumption, a probability distribution contains secrecy if and only if it is nonlocal. Recall that this does not mean that this probability distribution can be transformed into a secret key. For the sake of simplicity we consider the bipartite case. In a cryptographic scenario, one usually considers two honest parties (Alice and Bob), each possessing a random variable $A$ and $B$, and an eavesdropper (Eve) having $E$. The correlations among the three random variables are described by a probability distribution $P_{ABE}$. On the other hand, by nonlocal correlations we mean those probability distributions conditioned on some inputs, $P(a,b|x,y)$, that cannot be written in the form of Eq.~(\ref{local}). It is in principle not so evident how to relate the two scenarios. For instance, how to add (i) the third party in the nonlocal scenario or (ii) the missing inputs for Alice and Bob in the cryptographic scenario. Therefore, before proving the equivalence between privacy and nonlocality one has to connect the two considered scenarios. \subsection{Secret correlations} A tripartite probability distribution [without inputs] $P_{ABE}$ among two honest parties and an eavesdropper contains secrecy when it cannot be generated by local operations and public communication (LOPC), i.e. its formation requires the use of a private channel or secret bits \cite{LOPC}. On the other hand, $P_{ABE}$ can be generated by LOPC if there exists a stochastic map $E\rightarrow E'$ such that \begin{equation}\label{LOPC} P_{AB|E'}=P_{A|E'}P_{B|E'}. \end{equation} We say that $P_{ABE}$ contains secrecy \cite{LOPC} when this is not possible. We stress that this does not mean that many copies of $P_{ABE}$ can later be used to obtain a secret key by LOPC. Indeed, there are probability distributions with positive secrecy content, which cannot be distilled into a secret key by LOPC \cite{bound}. Now, suppose Alice and Bob share a distribution $P(a,b|x,y)$. They decide the inputs according to uniform distributions: $p(x)=1/X$ and $p(y)=1/Y$ \cite{noteinp}. Then, Alice's and Bob's information is respectively $A=(a,x)$ and $B=(b,y)$. The random variables $A$ and $B$ are correlated according to \begin{equation}\label{probab} P_{AB}=P(a,b|x,y)\frac{1}{XY} . \end{equation} Can Alice and Bob bound Eve's information on their outcomes from their observed correlations? Can one prove that all possible extensions $P_{ABE}$ of $P_{AB}$, derived from $P(a,b|x,y)$ through Eq.~(\ref{probab}), contain secrecy? This is of course impossible if no assumptions on the possible extensions are made. In general, Alice and Bob can never exclude that Eve has a perfect copy of their outcomes, unless some constraints are imposed. However, if it is assumed that no faster-than-light communication is possible, not all possible extensions of the initial bipartite probability distribution are allowed. Let us only consider extensions $P(a,b,e|x,y)$ compatible with no-signaling. Thus, to each $P(a,b|x,y)$ we can associate a family of tripartite distributions \begin{equation}\label{ug} P_{ABE}=P(a,b,e|x,y)\frac{1}{XY}, \end{equation} where $E=e$. 
We say that {\em $P(a,b|x,y)$ contains secrecy if all its associated $P_{ABE}$ contain secrecy.} \subsection{All nonlocal correlations contain secrecy} The aim of this section is to show the link between the nonlocal properties of $P(a,b|x,y)$ and the secrecy content of any possible extension $P_{ABE}$, defined through (\ref{ug}). Before proceeding, note that an equivalent way of defining local correlations is as follows: a probability distribution $P(a,b|x,y)$ is local (\ref{local}) when there exists a [nonsignaling] extension $P(a,b,e|x,y)$ such that \begin{equation}\label{local2} P(a,b|x,y,e)=P(a|x,e)P(b|y,e) . \end{equation} Now, assume one has a bipartite distribution $P(a,b|x,y)$ for which there exists an extension $P_{ABE}$ with no secrecy content, that is \begin{equation}\label{L} P_{AB|E}=P_{A|E}P_{B|E}. \end{equation} Because processing the outcomes of a nonsignaling distribution gives another nonsignaling distribution, any transformation $E\rightarrow E'$ is included in the arbitrariness of the extension $P(a,b,e|x,y)$. By using the definition of conditional probabilities, one can see that (\ref{L}) is equivalent to (\ref{local2}). That is, $P_{ABE}$ has no secrecy if and only if there exists an extension of $P(a,b|x,y)$ satisfying (\ref{local2}), which is to say that $P(a,b|x,y)$ is local. This establishes the following equivalence. {\bf Result 5: }{\em A distribution contains secrecy if and only if it is nonlocal.} It was already proven in \cite{crypto}, that all local correlations (\ref{local}) can be distributed by LOPC. The public message that one of the parties, say Alice, should send to the rest in order to create the correlations, is precisely the (hidden) variable $e$ that appears in (\ref{local}). Therefore, if Alice and Bob's probability distribution is local, they cannot exclude that the global probability distribution including Eve does not contain any secrecy. The following natural question is to identify those nonlocal correlations distillable to a secret key and whether they can be distributed using quantum states \cite{prep}. This will define those quantum correlations secure against an eavesdropper only limited by the no-signaling principle \cite{crypto}. \section{Nonlocality and randomness} We first start by showing that all nonlocal correlations have random outcomes (see also \cite{pr}). Consider a deterministic bipartite distribution $P_{\mbox{\tiny{DET}}}(a,b|x,y)$. That is, $a$ and $b$ are deterministic functions of $(x,y)$. Using this and no-signaling, we can get the following equalities \begin{eqnarray} P_{\mbox{\tiny{DET}}}(a,b|x,y) &=& \delta_{(a,b),(f[x,y],g[x,y])}\nonumber\\ &=& \delta_{a,f[x,y]}\ \delta_{b,g[x,y]}\nonumber\\ &=& P(a|x,y)P(b|x,y)\nonumber\\ &=& P(a|x)P(b|y). \end{eqnarray} The last line is a distribution of the form (\ref{local}). Therefore, all deterministic distributions are local. Or in other words, all nonlocal states have uncertain outcomes. This fact can be straightforwardly extended to the $n$-party case. Thus, there are two kinds of randomness in any nonsignaling theory with nonlocal correlations. The first one reflects our ignorance and corresponds to those probability distributions that can be written as the convex combination of extreme points. But, like in QM, there is also an intrinsic randomness even for extreme points, or pure states. The PR-box (\ref{iso}) is an example of a pure state with uncertain outcomes. 
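To make the discussion concrete, the following Python sketch (illustrative only; the chosen values of $C$ are arbitrary examples) builds the PR-box (\ref{iso}) and the isotropic family (\ref{iso2}) as arrays, evaluates the normalized CHSH functional (\ref{chsh}), and checks that the nonlocal members of the family have unbiased, hence nondeterministic, outcomes.
\begin{verbatim}
import numpy as np

# Sketch: PR-box, isotropic family, and the normalized CHSH functional.
def pr_box():
    """P(a,b|x,y) = 1/2 if a+b mod 2 = xy, else 0; indexed as P[a,b,x,y]."""
    P = np.zeros((2, 2, 2, 2))
    for a, b, x, y in np.ndindex(2, 2, 2, 2):
        if (a + b) % 2 == x * y:
            P[a, b, x, y] = 0.5
    return P

def isotropic(C):
    """Mixture of a PR-box (weight C) and uniform local noise (weight 1-C)."""
    noise = np.full((2, 2, 2, 2), 0.25)
    return C * pr_box() + (1 - C) * noise

def chsh(P):
    """Normalized CHSH: (C00 + C01 + C10 - C11)/2 - 1, local bound 0, max 1."""
    corr = np.zeros((2, 2))
    for a, b, x, y in np.ndindex(2, 2, 2, 2):
        corr[x, y] += (-1) ** (a + b) * P[a, b, x, y]
    return 0.5 * (corr[0, 0] + corr[0, 1] + corr[1, 0] - corr[1, 1]) - 1

for C in (1.0, 1 / np.sqrt(2), 0.5):
    P = isotropic(C)
    marg_a = P[:, :, 0, 0].sum(axis=1)   # Alice's marginal for x = 0
    print(f"C={C:.3f}  CHSH={chsh(P):+.3f}  P(a|x=0)={marg_a}")
# CHSH equals 2C-1: positive (nonlocal) for C > 1/2, while the marginals
# remain uniform, i.e. the outcomes of the nonlocal boxes are never deterministic.
\end{verbatim}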
\subsection{Incompatible observables and uncertainty} Finally, within QM it is said that two observables $(O_0,O_1)$ are compatible if there exists a more complete one $O$ of which both are functions: $(O_0,O_1)=f(O)$. Consider $P(a,b|x,y)$; we say that the two observables on Bob's site, $b_{0}$ and $b_{1}$ [corresponding to the inputs $y=0,1$], are compatible if there exists a joint distribution $P'(a,b_0,b_1|x)$ for both. That is, \begin{eqnarray} \sum_{b_0} P'(a,b_0,b_1|x) &=& P(a,b_1|x,y=1)\ , \\ \sum_{b_1} P'(a,b_0,b_1|x) &=& P(a,b_0|x,y=0)\ . \end{eqnarray} Or in other words, $P(a,b|x,y)$ is two-shareable with respect to Bob if we restrict to $y=0,1$. When the observables $(b_{0},b_{1})$ are not compatible, a possible way of quantifying the degree of incompatibility is \begin{eqnarray}\label{incomp} && \mbox{inc}[b_{0},b_{1}]= \min\!\big\{\eta>0: P(a,b|x,y)\\ && =\eta P_{\mbox{\tiny INC}}(a,b|x,y)\, +(1-\eta)P_{\mbox{\tiny COM}}(a,b|x,y)\big\}, \nonumber \end{eqnarray} where $P_{\mbox{\tiny COM}}(a,b|x,y)$ is a distribution in which $b_{0}$ and $b_{1}$ are compatible, and $P_{\mbox{\tiny INC}}(a,b|x,y)$ is an arbitrary one. It is clear that the range of $\mbox{inc}[b_{0},b_{1}]$ is $[0,1]$, and $\mbox{inc}[b_{0},b_{1}]=0$ if and only if $b_{0}$ and $b_{1}$ are compatible. In Appendix B it is proven that in the binary input/output case, this minimization yields the CHSH violation: \begin{equation} \mbox{inc}[b_0,b_1]=\mathcal{B}_{\mbox{\tiny CHSH}}[P(a,b|x,y)]\ . \end{equation} In the case of binary outputs or inputs, we are able to establish a direct relation between $\mbox{inc}[b_{0},b_{1}]$ and the uncertainty of $b_{0}$ and $b_{1}$: {\bf Result 6: } {\em In the binary output case [$A=B=2$] the following constraints hold:} \begin{eqnarray}\label{heisenberg0} H(b_0)&\geq& h\!\left(\frac{1}{2}\mbox{inc}[b_0,b_1]\right),\\ \label{heisenberg1} H(b_1)&\geq& h\!\left(\frac{1}{2}\mbox{inc}[b_0,b_1]\right), \end{eqnarray} {\em where $H(b)$ is the entropy of the output $b$, and $h(x)$ is the binary entropy of $x$ \cite{entropy}. These inequalities also hold in the binary input case [$X=Y=2$], and are still tight.} The proof of this result is in Appendix B. Although this has the flavor of the Heisenberg uncertainty relations, it differs in the fact that here we do not have a trade-off between the uncertainties of the two observables. In particular, if $b_0$ is deterministic, inequality (\ref{heisenberg0}) implies $\mbox{inc}[b_0,b_1]=0$, and hence, nothing prevents $b_1$ from being deterministic too. It is also remarkable that a deterministic observable is compatible with any other. Notice that in some of the proofs in this article, we express distributions in terms of nonlocal extreme points. But some nonsignaling theories may not include them; for example, QM does not include PR correlations (\ref{iso}). It is important to stress that this does not affect the validity of the proofs when applied to any particular theory. For instance, although QM does not predict PR correlations, we can always write some quantum mechanical correlations as a mixture of PR and local ones. \section{Conclusions} In this work, we have identified a series of features common to all physical theories that do not allow for instantaneous transmission of information, and predict the violation of Bell inequalities. 
As shown, these two assumptions are sufficient to prove: \begin{itemize} \item Constraints on how nonlocality is distributed among the correlations of different pairs of particles in multipartite scenarios. \item Impossibility of perfect cloning of states. \item Strict equivalence of the following properties: \begin{enumerate} \item nonlocality \item bounded shareability \item positive secrecy content \end{enumerate} \item A relation between the incompatibility of two observables and the uncertainty of their outcomes. \end{itemize} Hence, some properties traditionally attributed to QM are generic within this family of physical theories. For example, the fact that two observables cannot be simultaneously measured on the same system (incompatibility) becomes necessary to explain the correlations observed in some experiments [violation of CHSH \cite{aspect}], independently of the fact that we use models based on noncommuting operators to explain such experiments (see also \cite{qit}). Moreover, a no-cloning theorem can be derived without invoking any nonorthogonality of states or linearity of the evolution. This indicates how constraining it is to demand that a theory compatible with special relativity predict the violation of Bell inequalities. One could actually say that there is not much room left out of QM. From a more fundamental point of view, this work proposes a different approach to the study of quantum properties. In general, QM has been studied in comparison with Classical Mechanics, that is, starting from a more restrictive theory. Here, the idea is to start from a more general family of theories, and to study ``quantum'' properties common to all of them. It is then an open research project to identify those additional postulates that allow one to recover the whole quantum structure. \section*{Appendix A. Depolarization and shrinking} In this appendix it is shown that, in the case $X=Y=A=B=2$, any distribution can be transformed into an isotropic one maintaining the CHSH violation (\ref{chsh}) invariant. We call this process {\em depolarization}. We also show that the parameter $C$ of an isotropic distribution can be decreased with local operations. We call this operation {\em shrinking}. {\bf Depolarization:} this transformation can be implemented by using 3 bits of shared randomness and local operations, in the following two steps. First step: Alice and Bob perform with probability $1/2$ one of the following two operations: \begin{enumerate} \item Nothing \item Flip $a$ and $b$ \end{enumerate} This makes the correlations locally unbiased. Second step: with probability $1/4$ both parties perform one of the following four operations: \begin{enumerate} \item Nothing \item Flip $a_{x=1}$ and $y$ \item Flip $x$ and $b_{y=1}$ \item Flip $x$, $a_{x=0}$, $y$ and $b_{y=1}$ \end{enumerate} where flipping $a_{x=1}$ means that $a$ is only flipped when $x=1$, that is, $a \rightarrow a+x \bmod 2$. After the second step, the resulting correlations satisfy (\ref{isoP}). It can be seen that both steps keep invariant the violation of the CHSH inequality. {\bf Shrinking:} a useful observation is that when $C>0$, the value of $\mathcal{B}_{\mbox{\tiny CHSH}}$ can always be decreased by performing an operation at one site. This is accomplished when one party, say Bob, outputs $b$ with probability $1-\epsilon$, and an unbiased random bit with probability $\epsilon$. This operation implements the transformation $C\rightarrow (1-\epsilon)C$. \section*{Appendix B.
Proofs of section 6} {\bf Result:} {\em In the case $A=B=X=Y=2$ the degree of incompatibility of two observables is} \begin{equation}\label{ruru} \mbox{inc}[b_0,b_1]=\mathcal{B}_{\mbox{\tiny CHSH}}[P]\ . \end{equation} {\em Proof.} The minimization in the definition of $\mbox{inc}[b_0,b_1]$ in (\ref{incomp}) is completely equivalent to the minimization of $p_{\mbox{\tiny NL}}$ in the optimal eavesdropping extension (Appendix B). Then, we just have to substitute $\mu$ by $p_{\mbox{\tiny NL}}$, which gives the equality (\ref{ruru}). {\bf Result 6: } {\em In the binary output case [$A=B=2$] the following constraints hold:} \begin{eqnarray}\label{h0} H(b_0)&\geq& h\!\left(\frac{1}{2}\mbox{inc}[b_0,b_1]\right),\\ \label{h1} H(b_1)&\geq& h\!\left(\frac{1}{2}\mbox{inc}[b_0,b_1]\right), \end{eqnarray} {\em where $H(b)$ is the entropy of the output $b$, and $h(x)$ is the binary entropy of $x$ \cite{entropy}. These inequalities also hold in the binary input case [$X=Y=2$], and are still tight.} {\em Proof.} Let us prove the above inequalities (\ref{h0},\ref{h1}) for the binary output case. It is shown in this case \cite{conversion} that, for all extreme points, the one-party marginals are deterministic or unbiased: $\left[P(b=0|y),P(b=1|y)\right]\in\left\{[0,1],[1,0],[1/2,1/2]\right\}$. In what follows we see that, if one observable, say $y=0$, is deterministic [$P(b_0|0)\in\{0,1\}$], then it is compatible with all the rest. To see this, suppose that the outcome of $b_0$ is always $b_0=\beta$; then, for any $y$, the joint distribution $P(a,b_0,b_y|x,y)=P(a,b_y|x,y)\, \delta_{b_0,\beta}$ exists. Then, $b_0$ and $b_y$ are compatible by definition. Now, let us decompose $P_{\mbox{\tiny INC}}$ as a mixture of extreme points. This mixture must not contain extreme points having the marginal of $b_0$ or the marginal of $b_1$ deterministic. Otherwise, one could move this extreme point to the mixture of compatible ones $P_{\mbox{\tiny COM}}$, decreasing the value of $\eta$. Thus, the marginals for $b_0$ and $b_1$ taken from $P_{\mbox{\tiny INC}}$ are always unbiased. Therefore, $\mbox{inc}[b_0,b_1]$ is the probability with which the outcome is drawn from an unbiased distribution. The situation where $b_0$ and $b_1$ have minimal entropy is when $P_{\mbox{\tiny COM}}$ is deterministic. Suppose that $P_{\mbox{\tiny COM}}(b=0|y=0)=1$; then, recalling (\ref{incomp}), \begin{eqnarray} P(b=1|y=0) &=& \mbox{inc}[b_0,b_1]\,P_{\mbox{\tiny INC}}(b=1|y=0) \nonumber\\ &=& \frac{1}{2}\mbox{inc}[b_0,b_1], \end{eqnarray} and thus the entropy of $b_0$ is $H(b_0)=h(\mbox{inc}[b_0,b_1]/2)$. The same holds for $b_1$. In general, when $P_{\mbox{\tiny COM}}$ is not deterministic, the entropies will be larger than the bounds (\ref{h0},\ref{h1}). Let us prove that the bounds (\ref{h0},\ref{h1}) also hold in the case where inputs are binary, and the outputs belong to larger alphabets. In that case, all extreme points have been classified in \cite{NSC}. There, it is shown that all extreme points have local marginals in which all outcomes with nonzero probability are equiprobable. As discussed before, if we write $P_{\mbox{\tiny INC}}$ as a mixture of extreme points, the marginals for $b_0$ and $b_1$ given by these extreme points must have at least two outcomes with nonzero probability. Otherwise, the two observables are compatible for that extreme point and we can attach it to $P_{\mbox{\tiny COM}}$, decreasing $\eta$.
The situation where $b_0$ and $b_1$ have minimal entropy is when $P_{\mbox{\tiny COM}}$ is deterministic, and $P_{\mbox{\tiny INC}}$ has only two outcomes with nonzero probability for $b_0$ and $b_1$. In such a case, the inequalities (\ref{h0},\ref{h1}) are saturated. When $P_{\mbox{\tiny INC}}$ has more than two outcomes with nonzero probability for $b_0$ and $b_1$, the entropies will be larger. \end{document}
\begin{document} \title{Poisson brackets with prescribed Casimirs} \author{Pantelis A. Damianou and Fani Petalidou} \date{} \maketitle \vskip 1 cm \begin{center} \emph{Dedicated to Giuseppe Marmo, on the occasion of his 65$^{th}$ birthday.} \end{center} \begin{abstract} We consider the problem of constructing Poisson brackets on smooth manifolds $M$ with prescribed Casimir functions. If $M$ is of even dimension, we achieve our construction by considering a suitable almost symplectic structure on $M$, while, in the case where $M$ is of odd dimension, our objective is achieved by using a convenient almost cosymplectic structure. Several examples and applications are presented. \end{abstract} \noindent {\bf{Keywords: }}{Poisson bracket, Casimir function, almost symplectic structure, almost cosymplectic structure.} \noindent {\bf{MSC (2010):}} 53D17, 53D15. \section{Introduction} A \emph{Poisson bracket} on the space $C^{\infty}(M)$ of smooth functions on a smooth manifold $M$ is a skew-symmetric, bilinear map, \begin{equation*} \{\cdot,\cdot\} : C^{\infty}(M) \times C^{\infty}(M) \to C^{\infty}(M), \end{equation*} that verifies the Jacobi identity and is a biderivation. Thus, $(C^{\infty}(M), \{\cdot,\cdot\})$ has the structure of a Lie algebra. This notion has been introduced in the framework of classical mechanics by S. D. Poisson, who discovered the natural symplectic bracket on $\mathbb{R}^{2n}$ \cite{poi}, a notion that was later generalized to manifolds of arbitrary dimension by S. Lie \cite{lie}. The interest in this subject, motivated by the important role of Poisson structures in Hamiltonian dynamics, was revived during the last 35 years, after the publication of the fundamental works of A. Lichn\'erowicz \cite{damianou:lch1}, A. Kirillov \cite{kir} and A. Weinstein \cite{wei}, and Poisson geometry has emerged as a major branch of modern differential geometry. The pair $(M, \{\cdot,\cdot\})$ is called a \emph{Poisson manifold} and is foliated by symplectic immersed submanifolds, the \emph{symplectic leaves}. The functions in the center of $(C^{\infty}(M), \{\cdot,\cdot\})$, i.e., the elements $f\in C^{\infty}(M)$ such that $\{f,\cdot\}=0$, are called the \emph{Casimirs} of the Poisson bracket $\{\cdot,\cdot\}$ and they form the space of first integrals of the symplectic leaves. For this reason, Casimir invariants have acquired a dominant role in the study of integrable systems defined on a manifold $M$ and in the theory of the local structure of Poisson manifolds \cite{wei}. To introduce the problem we remark that, for an arbitrary smooth function $f$ on $\mathbb{R}^3$, the bracket \begin{equation} \label{damianou:equation1} \{x, y\}={\partial f \over \partial z}, \qquad \{x, z \}=-{\partial f \over \partial y} \qquad \mathrm{and} \qquad \{y, z\}= {\partial f \over \partial x} \end{equation} is Poisson and it admits $f$ as Casimir. Clearly, if $\Omega = dx\wedge dy\wedge dz$ is the standard volume element on $\mathbb{R}^3$, then the bracket (\ref{damianou:equation1}) can be written as \begin{equation*} \{x,y\}\Omega = dx\wedge dy \wedge df, \quad \{x,z\}\Omega = dx\wedge dz \wedge df, \quad \{y,z\}\Omega = dy\wedge dz \wedge df. \end{equation*} More generally, let $f_1,f_2,\ldots, f_l$ be functionally independent smooth functions on $\mathbb{R}^{l+2}$ and $\Omega$ a non-vanishing $(l+2)$-smooth form on $\mathbb{R}^{l+2}$. 
Then, the formula \begin{equation} \label{damianou:equation2} \{ g, h \} \Omega =f\,dg \wedge dh \wedge df_1 \wedge \ldots \wedge df_l, \quad \quad g,\,h \in C^\infty(\mathbb{R}^{l+2}), \end{equation} defines a Poisson bracket on $\mathbb{R}^{l+2}$ with $f_1, \ldots, f_l $ as Casimir invariants. In addition, the symplectic leaves of (\ref{damianou:equation2}) have dimension at most $2$. The Jacobian Poisson structure (\ref{damianou:equation2}) (the bracket $\{g,h\}$ is equal, up to the coefficient function $f$, to the usual Jacobian determinant of $(g,h,f_1,\ldots,f_l)$) appeared in \cite{damianou:Dam89} in 1989, where it was attributed to H. Flaschka and T. Ratiu. The first explicit proof of this result was given in \cite{damianou:Grab93}, while the first application of formula (\ref{damianou:equation2}) is presented in \cite{damianou:Dam89, damianou:Dam96} in conjunction with transverse Poisson structures to subregular nilpotent orbits of $\mathfrak{gl}(n,\mathbb{C})$, $n\leq 7$. It was shown that these transverse Poisson structures, which are usually computed using Dirac's constraint formula, can be calculated much more easily using the Jacobian Poisson structure (\ref{damianou:equation2}). This fact was extended to any semisimple Lie algebra in \cite{damianou:Dam07}. In the same paper it is also proved that, after a suitable change of coordinates, the transverse Poisson structures referred to above are reduced to a 3-dimensional structure of type (\ref{damianou:equation1}). We believe that for the other types of orbits, e.g., the minimal orbit and all the other intermediate orbits, one can compute the transverse Poisson structures using the results of the present paper. However, this study will be the subject of a future work. Another interesting application of formula (\ref{damianou:equation2}) appears in \cite{or}, where polynomial Poisson algebras with some regularity conditions are studied. We also mention the study of a family of rank 2 Poisson structures in \cite{bermejo}. The purpose of this paper is to extend the formula of type (\ref{damianou:equation2}) to the more general case of higher-rank Poisson brackets. The problem can be formulated as follows: \emph{Given $(m-2k)$ smooth functions $f_1,\ldots,f_{m-2k}$ on an $m$-dimensional smooth manifold $M$, functionally independent almost everywhere, describe the Poisson brackets $\{\cdot,\cdot\}$ on $C^{\infty}(M)$ of rank at most $2k$ which have $f_1,\ldots,f_{m-2k}$ as Casimirs.} Firstly, we investigate this problem in the case where $m=2n$, i.e., $M$ is of even dimension. We assume that $M$ is endowed with a suitable almost symplectic structure $\omega_0$ and we prove (Theorem \ref{damianou:THEOREM}) that a Poisson bracket $\{\cdot,\cdot\}$ on $C^{\infty}(M)$ with the required properties is defined, for any $h_1,h_2 \in C^{\infty}(M)$, by the formula \begin{equation*} \{h_1,h_2\} \Omega = -\frac{1}{f} dh_1 \wedge dh_2 \wedge (\sigma + \frac{g}{k-1}\omega_0) \wedge \frac{\omega_0^{k-2}}{(k-2)!}\wedge df_1\wedge\ldots \wedge df_{2n-2k}, \end{equation*} where $\Omega=\displaystyle{\frac{\omega_0^n}{n!}}$ is a volume element on $M$, $f$ satisfies $f^2 = \det \big(\{f_i,f_j\}_{_0}\big)\neq 0$ ($\{\cdot,\cdot\}_{_0}$ being the bracket defined by $\omega_0$ on $C^{\infty}(M)$), $\sigma$ is a $2$-form on $M$ satisfying certain special requirements (see Proposition \ref{damianou:theorem-cond-delta-sigma}) and $g = i_{\Lambda_0}\sigma$ \footnote{$\Lambda_0$ being the bivector field on $M$ associated to $\omega_0$.}.
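As a minimal illustration of the prototype formula (\ref{damianou:equation2}), take $l=1$, $f\equiv 1$, $\Omega = dx\wedge dy\wedge dz$ and $f_1 = \frac{1}{2}(x^2+y^2+z^2)$ on $\mathbb{R}^3$. Then \begin{equation*} \{x,y\}\,\Omega = dx\wedge dy\wedge df_1 = z\,\Omega, \qquad \{y,z\}\,\Omega = x\,\Omega, \qquad \{z,x\}\,\Omega = y\,\Omega, \end{equation*} i.e., one recovers the bracket $\{x,y\}=z$, $\{y,z\}=x$, $\{z,x\}=y$, which is precisely of type (\ref{damianou:equation1}) with $f=f_1$, and whose symplectic leaves are (generically) the spheres $f_1=\mathrm{const}$.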
We proceed by considering the case where $M$ is an odd-dimensional manifold, i.e., $m=2n+1$, and we establish a similar formula for the Poisson brackets on $C^{\infty}(M)$ with the prescribed properties. For this construction, we assume that $M$ is equipped with a suitable almost cosymplectic structure $(\vartheta_0,\Theta_0)$ and with the volume form $\Omega = \vartheta_0 \wedge \displaystyle{\frac{\Theta_0^n}{n!}}$. Then, we show that (Theorem \ref{THEOREM-ODD}) a Poisson bracket $\{\cdot,\cdot\}$ on $C^{\infty}(M)$ with $f_1,\ldots,f_{2n+1-2k}$ as Casimirs functions is defined, for any $h_1,h_2 \in C^{\infty}(M)$, by the formula \begin{equation*} \{h_1,h_2\} \Omega = -\frac{1}{f} dh_1 \wedge dh_2 \wedge (\sigma + \frac{g}{k-1}\Theta_0) \wedge \frac{\Theta_0^{k-2}}{(k-2)!}\wedge df_1\wedge\ldots \wedge df_{2n+1-2k}, \end{equation*} where $f$ is given by (\ref{f-odd}), $\sigma$ is a $2$-form on $M$ satisfying certain particular conditions (see, Proposition \ref{prop-sigma-odd}), and $g=i_{\Lambda_0}\sigma$ \footnote{$\Lambda_0$ being the bivector field on $M$ associated to $(\vartheta_0,\Theta_0)$.}. The proofs of the main results are given in section 3. Section 2 consists of preliminaries and fixing the notation, while in section 4 we present several applications of our formul{\ae} on Dirac brackets, on almost Poisson brackets associated to nonholonomic systems and on Toda and Volterra lattices. \section{Preliminaries} We start by fixing our notation and by recalling the most important notions and formul{\ae} needed in this paper. Let $M$ be a real smooth $m$-dimensional manifold, $TM$ and $T^\ast M$ its tangent and cotangent bundles and $C^{\infty}(M)$ the space of smooth functions on $M$. For each $p\in \mathbb{Z}$, we denote by $\mathcal{V}^p(M)$ and $\Omega^p(M)$ the spaces of smooth sections, respectively, of $\bigwedge^p TM$ and $\bigwedge^p T^\ast M$. By convention, we set $\mathcal{V}^p(M) = \Omega^p(M) = \{0\}$, for $p<0$, $\mathcal{V}^0(M) = \Omega^0(M) = C^{\infty}(M)$, and, taking into account the skew-symmetry, we have $\mathcal{V}^p(M) = \Omega^p(M) = \{0\}$, for $p>m$. Finally, we set $\mathcal{V}(M)=\oplus_{p\in \mathbb{Z}}\mathcal{V}^p(M)$ and $\Omega(M) = \oplus_{p\in \mathbb{Z}}\Omega^p(M)$. \subsection{From multivector fields to differential forms and back} There is a natural \emph{pairing} between the elements of $\Omega(M)$ and $\mathcal{V}(M)$, i.e., the $C^{\infty}(M)$-bilinear map $\langle \cdot, \cdot \rangle : \Omega(M) \times \mathcal{V}(M) \to C^{\infty}(M)$, $(\eta, P) \mapsto \langle \eta,P\rangle$, defined as follows: For any $\eta \in \Omega^q(M)$ and $P\in \mathcal{V}^p(M)$ with $p \neq q$, $\langle \eta, P\rangle =0$; for any $f,g\in \Omega^0(M)$, $\langle f, g\rangle =fg$; while, if $\eta = \eta_1\wedge \eta_2\wedge \ldots \wedge \eta_p \in \Omega^p(M)$ is a decomposable $p$-form ($\eta_i\in \Omega^1(M)$) and $P = X_1\wedge X_2\wedge \ldots \wedge X_p$ is a decomposable $p$-vector field ($X_i\in \mathcal{V}^1(M)$), \begin{equation*} \langle \eta, P\rangle = \langle \eta_1\wedge \eta_2\wedge \ldots \wedge\eta_p, X_1\wedge X_2\wedge \ldots \wedge X_p \rangle = \det(\langle \eta_i,X_j\rangle). \end{equation*} The above definition is extended to the nondecomposable forms and multivector fields by bilinearity in a unique way. Precisely, for any $\eta\in \Omega^p(M)$ and $X_1,\ldots,X_p \in \mathcal{V}^1(M)$, \begin{equation*} \langle \eta, X_1\wedge X_2\wedge \ldots \wedge X_p\rangle = \eta (X_1,X_2, \ldots, X_p). 
\end{equation*} Similarly, for $P\in \mathcal{V}^p(M)$ and $\eta_1,\eta_2,\ldots,\eta_p \in \Omega^1(M)$, \begin{equation*} \langle \eta_1\wedge \eta_2 \wedge \ldots \wedge \eta_p, P \rangle = P(\eta_1,\eta_2,\ldots,\eta_p). \end{equation*} We adopt the following convention for the \emph{interior product $i_P: \Omega(M) \to \Omega(M)$ of differential forms by a $p$-vector field} $P$, viewed as a $C^{\infty}(M)$-linear endomorphism of $\Omega(M)$ of degree $-p$. If $P=X \in \mathcal{V}^1(M)$ and $\eta$ is a $q$-form, $i_X\eta$ is the element of $\Omega^{q-1}(M)$ defined, for any $X_1,\ldots,X_{q-1}\in \mathcal{V}^1(M)$, by \begin{equation*} (i_X\eta)(X_1,\ldots,X_{q-1}) = \eta (X,X_1,\ldots,X_{q-1}). \end{equation*} If $P=X_1\wedge X_2\wedge \ldots \wedge X_p$ is a decomposable $p$-vector field, we set \begin{equation*} i_P\eta = i_{X_1\wedge X_2\wedge \ldots \wedge X_p}\eta = i_{X_1}i_{X_2}\ldots i_{X_p}\eta. \end{equation*} More generally, recalling that each $P\in \mathcal{V}^p(M)$ can be locally written as a sum of decomposable $p$-vector fields, we define $i_P\eta$, with $\eta \in \Omega^{q}(M)$ and $q\geq p$, to be the unique element of $\Omega^{q-p}(M)$ such that, for any $Q\in \mathcal{V}^{q-p}(M)$, \begin{equation}\label{damianou:sign-inter. prod} \langle i_P\eta, Q\rangle = (-1)^{(p-1)p/2}\langle \eta, P\wedge Q\rangle. \end{equation} Similarly, we define the \emph{interior product $j_{\eta} : \mathcal{V}(M) \to \mathcal{V}(M)$ of multivector fields by a $q$-form $\eta$}. If $\eta = \alpha \in \Omega^1(M)$ and $P\in \mathcal{V}^p(M)$, then $j_{\alpha}P$ is the unique $(p-1)$-vector field on $M$ given, for any $\alpha_1,\ldots,\alpha_{p-1}\in \Omega^1(M)$, by \begin{equation*} (j_{\alpha}P)(\alpha_1,\ldots,\alpha_{p-1})= P(\alpha_1,\ldots,\alpha_{p-1}, \alpha). \end{equation*} Moreover, if $\eta = \alpha_1\wedge \alpha_2 \wedge \ldots \wedge \alpha_q$ is a decomposable $q$-form, we set \begin{equation*} j_{\eta}P = j_{\alpha_1\wedge \alpha_2 \wedge \ldots \wedge \alpha_q}P = j_{\alpha_1}j_{\alpha_2}\ldots j_{\alpha_q}P. \end{equation*} Hence, using the fact that any $\eta \in \Omega^q(M)$ can be locally written as a sum of decomposable $q$-forms, we define $j_{\eta}$ to be the $C^{\infty}(M)$-linear endomorphism of $\mathcal{V}(M)$ of degree $-q$ which associates with each $P\in \mathcal{V}^p(M)$ ($p\geq q$) the unique $(p-q)$-vector field $j_{\eta}P$ defined, for any $\zeta\in \Omega^{p-q}(M)$, by \begin{equation*} \langle \zeta, j_{\eta}P\rangle = \langle \zeta\wedge \eta, P\rangle. \end{equation*} If the degrees of $\eta$ and $P$ are equal, i.e., $q=p$, the interior products $j_{\eta}P$ and $i_P\eta$ are, up to sign, equal: \begin{equation*} j_{\eta}P = (-1)^{(p-1)p/2}i_P\eta = \langle \eta, P\rangle. \end{equation*} The \emph{Schouten bracket} $[\cdot,\cdot] : \mathcal{V}(M)\times \mathcal{V}(M) \to \mathcal{V}(M)$, which is a natural extension of the usual Lie bracket of vector fields on the space $\mathcal{V}(M)$ \cite{damianou:duf-zung, damianou:kz}, is related to the operator $i$ through the following useful formula, due to Koszul \cite{damianou:kz}.
For any $P\in \mathcal{V}^p(M)$ and $Q\in \mathcal{V}^q(M)$, \begin{equation}\label{damianou:koszul-formula} i_{[P,Q]} = [[i_P,d],i_Q], \end{equation} where the brackets on the right hand side of (\ref{damianou:koszul-formula}) denote the graded commutator of graded endomorphisms of $\Omega(M)$, i.e., for any two endomorphisms $E_1$ and $E_2$ of $\Omega(M)$ of degrees $e_1$ and $e_2$, respectively, $[E_1,E_2] = E_1\circ E_2 - (-1)^{e_1e_2}E_2\circ E_1$. Hence, we have \begin{eqnarray}\label{damianou:koszul-formula-2} \lefteqn{i_{[P,Q]} = i_P\circ d \circ i_Q - (-1)^p\,d\circ i_P\circ i_Q} \nonumber \\ & & - (-1)^{(p-1)q}\,i_Q\circ i_P\circ d + (-1)^{(p-1)q-p}\,i_Q\circ d \circ i_P. \end{eqnarray} Furthermore, given a smooth \emph{volume form} $\Omega$ on $M$, i.e., a nowhere vanishing element of $\Omega^m(M)$, the interior product of $p$-vector fields on $M$, $p=0,1,\ldots,m$, with $\Omega$ yields a $C^{\infty}(M)$-linear isomorphism $\Psi$ of $\mathcal{V}(M)$ onto $\Omega(M)$ such that, for each degree $p$, $0\leq p \leq m$, \begin{eqnarray*} \Psi : \mathcal{V}^p(M) & \to & \Omega^{m-p}(M) \\ P & \mapsto & \Psi(P) = \Psi_P = (-1)^{(p-1)p/2}i_P\Omega. \end{eqnarray*} Its inverse map $\Psi^{-1}: \Omega^{m-p}(M) \to \mathcal{V}^p(M)$ is defined, for any $\eta \in \Omega^{m-p}(M)$, by $\Psi^{-1}({\eta}) = j_{\eta} \tilde{\Omega}$, where $\tilde{\Omega}$ denotes the dual $m$-vector field of $\Omega$, i.e., $\langle \Omega, \tilde{\Omega}\rangle =1$. By composing $\Psi$ with the exterior derivative $d$ on $\Omega(M)$ and $\Psi^{-1}$, we obtain the operator $D=-\Psi^{-1}\circ d \circ \Psi$, which was introduced by Koszul \cite{damianou:kz}. It is of degree $-1$, of square $0$, and it generates the Schouten bracket. For any $P\in \mathcal{V}^p(M)$ and $Q\in \mathcal{V}(M)$, \begin{equation}\label{damianou:schouten-D} [P,Q] = (-1)^p\big(D(P\wedge Q)-D(P)\wedge Q - (-1)^pP\wedge D(Q) \big). \end{equation} \subsection{Poisson manifolds} We recall the notion of \emph{Poisson manifold} and some of its properties whose proofs may be found, for example, in the books \cite{damianou:lm, damianou:duf-zung, damianou:vai-b}. A \emph{Poisson structure} on a smooth manifold $M$ is a Lie algebra structure on $C^{\infty}(M)$ whose bracket $\{\cdot,\cdot\} : C^{\infty}(M) \times C^{\infty}(M) \to C^{\infty}(M)$ satisfies the Leibniz rule: \begin{equation*} \{f,gh\} = \{f,g\}h + g\{f,h\}, \quad \quad \forall \, f,g,h \in C^{\infty}(M). \end{equation*} In \cite{damianou:lch1}, Lichn\'erowicz remarks that $\{\cdot,\cdot\}$ gives rise to a contravariant antisymmetric tensor field $\Lambda$ of order $2$ such that $\Lambda(df,dg) = \{f,g\}$, for $f,g\in C^{\infty}(M)$. Conversely, each such bivector field $\Lambda$ on $M$ gives rise to a bilinear and antisymmetric bracket $\{\cdot,\cdot\}$ on $C^{\infty}(M)$, $\{f,g\} = \Lambda(df,dg)$, $f,g\in C^{\infty}(M)$, which satisfies the Jacobi identity, i.e., for any $f,g,h \in C^{\infty}(M)$, $\{f,\{g,h\}\} + \{g,\{h,f\}\} + \{h,\{f,g\}\} = 0$, if and only if $[\Lambda,\Lambda]=0$, where $[\cdot , \cdot]$ denotes the Schouten bracket on $\mathcal{V}(M)$. In this case $\Lambda$ is called a \emph{Poisson tensor} and the manifold $(M,\Lambda)$ a \emph{Poisson manifold}. In the case where $[\Lambda,\Lambda]\neq 0$, we say that $\Lambda$ is an \emph{almost Poisson tensor}.
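In local coordinates $(x^1,\ldots,x^m)$, writing $\Lambda = \frac{1}{2}\sum_{i,j}\Lambda^{ij}\,\frac{\partial}{\partial x^i}\wedge \frac{\partial}{\partial x^j}$ with $\Lambda^{ij}=\{x^i,x^j\}$, the condition $[\Lambda,\Lambda]=0$ takes (up to an overall nonzero constant factor, depending on the convention chosen for the Schouten bracket) the familiar form \begin{equation*} \sum_{l=1}^{m}\Big(\Lambda^{il}\,\frac{\partial \Lambda^{jk}}{\partial x^l} + \Lambda^{jl}\,\frac{\partial \Lambda^{ki}}{\partial x^l} + \Lambda^{kl}\,\frac{\partial \Lambda^{ij}}{\partial x^l}\Big)=0, \qquad 1\leq i<j<k\leq m, \end{equation*} which is nothing but the Jacobi identity written for the coordinate functions $x^i$, $x^j$, $x^k$.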
As was proved in \cite{damianou:Grab93}, it is a consequence of expression (\ref{damianou:koszul-formula-2}) of $[\cdot,\cdot]$ that an element $\Lambda \in \mathcal{V}^2(M)$ defines a Poisson structure on $M$ if and only if \begin{equation*} 2i_{\Lambda} d \Psi_{\Lambda} + d \Psi_{\Lambda \wedge \Lambda}=0.\footnote{Because we have adopted a different convention of sign for the interior product $i$ from that in \cite{damianou:Grab93}, our condition differs up to a sign from the one in \cite{damianou:Grab93}.} \end{equation*} Equivalently, using formula (\ref{damianou:schouten-D}) of $[\cdot, \cdot]$ and the fact that, for any $P\in \mathcal{V}^p(M)$, \begin{equation*} \Psi^{-1}\circ i_P = (-1)^{(p-1)p/2}P\wedge \Psi^{-1}, \end{equation*} the last condition can be written as \begin{equation}\label{damianou:cond-D-Lambda} 2 \Lambda \wedge D(\Lambda) = D(\Lambda \wedge \Lambda). \end{equation} Given a Poisson tensor $\Lambda$ on $M$, we can associate to it a natural homomorphism $\Lambda^\# : \Omega^1(M) \to \mathcal{V}^1(M)$, which maps each element $\alpha$ of $\Omega^1(M)$ to a unique vector field $\Lambda^\#(\alpha)$ such that, for any $\beta\in \Omega^1(M)$, \begin{equation*} \langle \alpha \wedge \beta, \Lambda\rangle = \langle \beta, \Lambda^\#(\alpha)\rangle = \Lambda(\alpha,\beta). \end{equation*} If $\alpha = df$, for some $f\in C^{\infty}(M)$, the vector field $\Lambda^\#(df)$ is called the \emph{hamiltonian vector field of $f$ with respect to $\Lambda$} and it is denoted by $X_f$. The image $\mathrm{Im}\Lambda^\#$ of $\Lambda^\#$ is a completely integrable distribution on $M$ and defines the \emph{symplectic foliation} of $(M,\Lambda)$ whose space of first integrals is the space of \emph{Casimir functions of} $\Lambda$, i.e., the space of the functions $f\in C^{\infty}(M)$ which are solutions of $\Lambda^\#(df) =0$. Moreover, $\Lambda^\#$ can be extended to a homomorphism, also denoted by $\Lambda^\#$, from $\Omega^p(M)$ to $\mathcal{V}^p(M)$, $p\in \mathbb{N}$, by setting, for any $f\in C^{\infty}(M)$, $\Lambda^\#(f) = f$, and, for any $\zeta \in \Omega^p(M)$ and $\alpha_1,\ldots,\alpha_p\in \Omega^1(M)$, \begin{equation}\label{damianou:def-extension} \Lambda^\#(\zeta)(\alpha_1,\ldots,\alpha_p) = (-1)^p\zeta(\Lambda^\#(\alpha_1),\ldots,\Lambda^\#(\alpha_p)). \end{equation} Thus, $\Lambda^\#(\zeta \wedge \eta) = \Lambda^\#(\zeta) \wedge \Lambda^\#(\eta)$, for all $\eta\in \Omega(M)$. When $\Omega(M)$ is equipped with the Koszul bracket $\{\!\! \{ \cdot, \cdot \}\!\! \}$ defined, for any $\zeta \in \Omega^p(M)$ and $\eta \in \Omega(M)$, by \begin{equation}\label{damianou:bracket-forms} \{\!\! \{\zeta, \eta \}\!\! \} = (-1)^p \big(\Delta(\zeta \wedge \eta) - \Delta(\zeta)\wedge \eta - (-1)^p\zeta \wedge \Delta(\eta) \big), \end{equation} where $\Delta = i_{\Lambda}\circ d - d\circ i_{\Lambda}$, $\Lambda^\#$ becomes a graded Lie algebras homomorphism. Explicitly, \begin{equation*} \Lambda^\#(\{\!\! \{ \zeta, \eta \}\!\! \}) = [\Lambda^\#(\zeta),\Lambda^\#(\eta)] \ , \end{equation*} where the bracket on the right hand side is the Schouten bracket. \begin{example}\label{example-Poisson} {\rm Any symplectic manifold $(M,\omega_0)$, where $\omega_0$ is a nondegenerate closed smooth $2$-form on $M$, is equipped with a Poisson structure $\Lambda_0$ defined by $\omega_0$ as follows. 
Define the tensor field $\Lambda_0$ to be the image of $\omega_0$ by the extension of the isomorphism $\Lambda_0^\# : \Omega^1(M) \to \mathcal{V}^1(M)$, (inverse of $\omega_0^\flat : \mathcal{V}^1(M) \to \Omega^1(M)$, $X\mapsto \omega_0^\flat (X) = - \omega_0(X, \cdot)$), to $\Omega^2(M)$, given by (\ref{damianou:def-extension}).} \end{example} \subsection{Decomposition theorem for exterior differential forms} In this subsection, we begin by reviewing some important results concerning the decomposition theorem for exterior differential forms on almost symplectic manifolds. The complete study of these results can be found in \cite{damianou:lm} and \cite{damianou:lib-th}. Let $(M,\omega_0)$ be a $2n$-dimensional almost symplectic manifold, i.e., $\omega_0$ is a nondegenerate smooth $2$-form on $M$, $\Lambda_0$ the bivector field on $M$ associated with $\omega_0$ (see, Example \ref{example-Poisson}), $\Omega = \displaystyle{\frac{\omega_0^n}{n!}}$ the corresponding volume form on $M$, and $\tilde{\Omega} =\displaystyle{\frac{\Lambda_0^n}{n!}}$ the dual $2n$-vector field of $\Omega$. We define an isomorphism $\ast : \Omega^p(M) \to \Omega^{2n-p}(M)$ by setting, for any $\varphi \in \Omega^p(M)$, \begin{equation}\label{def-ast} \ast \, \varphi = (-1)^{(p-1)p/2}\, i_{\Lambda_0^\#(\varphi)}\frac{\omega_0^n}{n!} \ . \end{equation} \begin{remark} {\rm In order to be in agreement with the convention of sign adopted in (\ref{damianou:sign-inter. prod}) for the interior product, we make a sign convention for $\ast$ different from the one given in \cite{damianou:lm}.} \end{remark} The $(2n-p)$-form $\ast \, \varphi$ is called the \emph{adjoint of $\varphi$ relative to $\omega_0$}. The isomorphism $\ast$ has the following properties: \begin{enumerate} \item[i)] $\ast \, \ast = Id$. \item[ii)] For any $\varphi\in \Omega^p(M)$ and $\psi \in \Omega^q(M)$, \begin{eqnarray}\label{damianou:property-ii} \ast\,(\varphi \wedge \psi) & = & (-1)^{(p+q-1)(p+q)/2}\,i_{\Lambda_0^\#(\varphi)\wedge\Lambda_0^\#(\psi)}\frac{\omega_0^n}{n!} \nonumber \\ & = & (-1)^{(p-1)p/2}\,i_{\Lambda_0^\#(\varphi)}(\ast\, \psi) = (-1)^{pq + (q-1)q/2} i_{\Lambda_0^\#(\psi)}(\ast \, \varphi). \end{eqnarray} \item[iii)] For any $k\leq n$, \begin{equation*} \ast\, \frac{\omega_0^k}{k!} = \frac{\omega_0^{n-k}}{(n-k)!}. \end{equation*} \end{enumerate} \begin{definition} A smooth form $\psi \in \Omega(M)$ such that $i_{\Lambda_0}\psi = 0$ everywhere on $M$ is said to be \emph{effective}. On the other hand, a smooth form $\varphi$ on $M$ is said to be \emph{simple} if it can be written as \begin{equation*} \varphi = \psi \wedge \frac{\omega_0^k}{k!}, \end{equation*} where $\psi$ is effective. \end{definition} \begin{proposition} The adjoint of an effective differential form $\psi$ of degree $p\leq n$ is \begin{equation*} \ast \, \psi = (-1)^{p(p+1)/2}\, \psi \wedge \frac{\omega_0^{n-p}}{(n-p)!}. \end{equation*} The adjoint $\ast \, \varphi$ of a smooth $(p+2k)$-simple form $\varphi = \psi \wedge \displaystyle{\frac{\omega_0^k}{k!}}$ is \begin{equation}\label{damianou:adjoint-simple} \ast \,\varphi = (-1)^{p(p+1)/2}\, \psi \wedge \frac{\omega_0^{n-p-k}}{(n-p-k)!}. 
\end{equation} \end{proposition} \begin{th-lep} Every differential form $\varphi\in \Omega(M)$, of degree $p\leq n$, may be uniquely decomposed as the sum \begin{equation*} \varphi = \psi_p + \psi_{p-2}\wedge \omega_0 + \ldots + \psi_{p-2q}\wedge \frac{\omega_0^q}{q!}, \end{equation*} with $q\leq [p/2]$ ($[p/2]$ being the largest integer less than or equal to $p/2$), where, for $s = 0, \ldots, q$, the differential forms $\psi_{p-2s}$ are effective and may be calculated from $\varphi$ by means of iteration of the operator $i_{\Lambda_0}$. Then, its adjoint $\ast \, \varphi$ may be uniquely written as the sum \begin{equation*} \ast\, \varphi = (-1)^{p(p+1)/2}\big(\psi_p - \psi_{p-2}\wedge \frac{\omega_0}{n-p+1} + \ldots + (-1)^q\frac{(n-p)!}{(n-p+q)!}\psi_{p-2q}\wedge \omega_0^q \big)\wedge \frac{\omega_0^{n-p}}{(n-p)!}. \end{equation*} \end{th-lep} We continue by indicating the relation which links $\ast$ with $\Psi$ and its effect on Poisson structures. Since $\Lambda_0^\# : \Omega^p(M) \to \mathcal{V}^p(M)$, $p \in \mathbb{N}$, defined by (\ref{damianou:def-extension}), is an isomorphism, for any smooth $p$-vector field $P$ on $M$ there exists an unique $p$-form $\sigma_p \in \Omega^p(M)$ such that $P = \Lambda_0^\#(\sigma_p)$. So, it is clear that \begin{equation}\label{damianou:Psi-sigma} \Psi (P) = \ast \, \sigma_p. \end{equation} In particular, a bivector field $\Lambda$ on $(M,\omega_0)$ can be viewed as the image $\Lambda_0^\#(\sigma)$ of a $2$-form $\sigma$ on $M$ by the isomorphism $\Lambda_0^\#$. We want to establish the condition on $\sigma$ under which $\Lambda = \Lambda_0^\#(\sigma)$ is a Poisson tensor. For this reason, we consider the \emph{codifferential operator} \begin{equation*} \delta = \ast \, d\,\ast \end{equation*} introduced in \cite{damianou:lib-th}, which is of degree $-1$ and satisfies the relation $\delta^2 = 0$, and we prove: \begin{lemma} For any differential form $\zeta$ on $(M,\omega_0)$ of degree $p\leq n$, \begin{equation}\label{damianou:Psi-1,ast} \Psi^{-1}(\zeta) = \Lambda_0^\#(\ast\,\zeta). \end{equation} \end{lemma} \begin{proof} Let $\eta$ be a smooth $(2n-p)$-form on $M$. We have \begin{eqnarray*} \langle \eta,\, \Psi^{-1}(\zeta) \rangle & = & \langle \eta, \,j_{\zeta}\frac{\Lambda_0^n}{n!} \rangle = \langle \eta \wedge \zeta, \, \frac{\Lambda_0^n}{n!}\rangle \\ & = & (-1)^{2n} \langle \frac{\omega_0^n}{n!},\, \Lambda_0^\#(\eta)\wedge \Lambda_0^\#(\zeta)\rangle = (-1)^{p(2n-p)}\langle \frac{\omega_0^n}{n!}, \, \Lambda_0^\#(\zeta)\wedge\Lambda_0^\#(\eta) \rangle \\ & = & (-1)^{p(2n-p)}(-1)^{(p-1)p/2}\,\langle i_{\Lambda_0^\#(\zeta)}\frac{\omega_0^n}{n!}, \,\Lambda_0^\#(\eta)\rangle \\ & = & (-1)^{p(2n-p)}(-1)^{(p-1)p/2}(-1)^{2n-p}\langle \eta, \,\Lambda_0^\#(i_{\Lambda_0^\#(\zeta)}\frac{\omega_0^n}{n!}) \rangle \\ & = & (-1)^{(p-1)p/2}\,\langle \eta,\, \Lambda_0^\#(i_{\Lambda_0^\#(\zeta)}\frac{\omega_0^n}{n!}) \rangle = \langle \eta,\, \Lambda_0^\#(\ast\, \zeta)\rangle, \end{eqnarray*} whence (\ref{damianou:Psi-1,ast}) follows. (We remark that the number $p(2n-p)+(2n-p) = (2n-p)(p+1)$ is even for any $p\in \mathbb{N}$.) \end{proof} \begin{proposition}\label{damianou:theorem-cond-delta-sigma} Using the same notation, $\Lambda = \Lambda_0^\#(\sigma)$ defines a Poisson structure on $(M,\omega_0)$ if and only if \begin{equation}\label{damianou:cond-delta-sigma} 2\sigma \wedge \delta(\sigma) = \delta (\sigma \wedge \sigma). 
\end{equation} \end{proposition} \begin{proof} We have seen that $\Lambda$ is a Poisson tensor if and only if (\ref{damianou:cond-D-Lambda}) holds. But, in our case $\Lambda = \Lambda_0^\#(\sigma)$, so $\Lambda \wedge \Lambda = \Lambda_0^\#(\sigma \wedge \sigma)$, and $\Lambda_0^\#$ is an isomorphism. Therefore, \begin{eqnarray*} 2 \Lambda \wedge D(\Lambda) = D(\Lambda \wedge \Lambda) & \Leftrightarrow & - 2 \Lambda \wedge \big((\Psi^{-1}\circ d \circ \Psi)(\Lambda) \big) = -(\Psi^{-1}\circ d \circ \Psi)(\Lambda \wedge \Lambda) \nonumber \\ & \stackrel{(\ref{damianou:Psi-sigma})}{\Leftrightarrow} & 2\Lambda_0^\#(\sigma) \wedge \big(\Psi^{-1}(d\,\ast \sigma) \big) = \Psi^{-1}\big(d\,\ast (\sigma \wedge \sigma)\big) \nonumber \\ & \stackrel{(\ref{damianou:Psi-1,ast})}{\Leftrightarrow} & 2\Lambda_0^\#(\sigma) \wedge \Lambda_0^\# (\ast \,d\,\ast (\sigma)) = \Lambda_0^\#(\ast \,d \, \ast (\sigma \wedge \sigma))\nonumber \\ & \Leftrightarrow & \Lambda_0^\# (2 \sigma \wedge \delta \sigma ) = \Lambda_0^\# \big(\delta(\sigma \wedge \sigma)\big) \nonumber \\ & \Leftrightarrow & 2\sigma \wedge \delta(\sigma) = \delta (\sigma \wedge \sigma), \end{eqnarray*} and we are done. \end{proof} \begin{remark} {\rm Brylinski \cite{damianou:brl} observed that, when the manifold is symplectic, i.e., $d\omega_0 = 0$, $\delta$ is equal, up to sign, to $\Delta = i_{\Lambda_0}\circ d - d \circ i_{\Lambda_0}$. Then, in this framework, (\ref{damianou:cond-delta-sigma}) is equivalent to $\{\!\! \{ \sigma, \sigma\}\!\! \}_{_0} =0$, ($\{\!\! \{ \cdot, \cdot \}\!\! \}_{_0}$ being the Koszul bracket (\ref{damianou:bracket-forms}) associated to $\Lambda_0$), which means that $\sigma$ is a complementary $2$-form on $(M,\Lambda_0)$ in the sense of Vaisman \cite{damianou:vai}.} \end{remark} \section{Poisson structures with prescribed Casimir functions}\label{damianou:section-theorem} Let $M$ be a $m$-dimensional smooth manifold and $f_1,\ldots,f_{m-2k}$ smooth functions on $M$ which are functionally independent almost everywhere. We want to construct Poisson structures $\Lambda$ on $M$ with symplectic leaves of dimension at most $2k$ which have as Casimirs the given functions $f_1, f_2, \ldots, f_{m-2k}$. We start by discussing the problem on even-dimensional manifolds. In the next subsection we extend the results to odd-dimensional manifolds. \subsection{On even-dimensional manifolds}\label{section-even} We suppose that $\dim M = 2n$ and we begin our study by remarking \begin{lemma} Given $(M,\,f_1,\ldots,f_{2n-2k})$, with $f_1,\ldots,f_{2n-2k}$ functionally independent almost everywhere on $M$, then there exists, at least locally, $\Lambda_0 \in \mathcal{V}^2(M)$, with $rank\,\Lambda_0=2n$, such that \begin{equation*} \big\langle df_1\wedge \ldots \wedge df_{2n-2k},\; \frac{\Lambda_0^{n-k}}{(n-k)!}\big\rangle \neq 0. \end{equation*} \end{lemma} \begin{proof} In fact, let $p \in M$ and $U$ an open neighborhood of $p$ such that $f_1,\ldots,f_{2n-2k}$ are functionally independent at each point $x\in U$. That means that $df_1\wedge \ldots \wedge df_{2n-2k}(x) \neq 0$ on $U$. We select $1$-forms $\beta_1,\ldots,\beta_{2k}$ on $U$ so that $(df_1, \ldots, df_{2n-2k},\beta_1,\ldots,\beta_{2k})$ become a basis of the cotangent space at each point. Let $(Y_1,\ldots,Y_{2n-2k},Z_1,\ldots,Z_{2k})$ be a family of vector fields on $U$ dual to $(df_1, \ldots, df_{2n-2k},\beta_1,\ldots,\beta_{2k})$. That is, they satisfy $df_i(Y_j) = \delta_{ij}$, $\beta_i(Z_j)=\delta_{ij}$, and all other pairings are zero. 
We consider the bivector field \begin{equation*} \Lambda_0 = \sum_{i=1}^{n-k}Y_{2i-1}\wedge Y_{2i} + \sum_{j=1}^kZ_{2j-1}\wedge Z_{2j} \end{equation*} which is of maximal rank on $U$. It is clear that \begin{equation*} \langle df_1\wedge \ldots \wedge df_{2n-2k}, \,\frac{\Lambda_0^{n-k}}{(n-k)!}\rangle = 1 \neq 0. \end{equation*} \end{proof} Consider now $(M,\,f_1,\ldots,f_{2n-2k})$ and a nondegenerate bivector field $\Lambda_0$ on $M$ such that \begin{equation}\label{damianou:f} f = \big\langle df_1\wedge \ldots \wedge df_{2n-2k},\; \frac{\Lambda_0^{n-k}}{(n-k)!}\big\rangle =\big \langle \frac{\omega_0^{n-k}}{(n-k)!}, \;X_{f_1}\wedge \ldots \wedge X_{f_{2n-2k}}\big\rangle \neq 0 \end{equation} on an open and dense subset $\mathcal{U}$ of $M$. In (\ref{damianou:f}), $\omega_0$ denotes the almost symplectic form on $M$ defined by $\Lambda_0$ and $X_{f_i} = \Lambda_0^\#(df_i)$ are the hamiltonian vector fields of $f_i$, $i = 1,\ldots, 2n-2k$, with respect to $\Lambda_0$. Let $D = \langle X_{f_1}, \ldots, X_{f_{2n-2k}}\rangle$ be the distribution on $M$ generated by $X_{f_i}$, $i=1,\ldots,2n-2k$, $D^\circ$ its annihilator, and $\mathrm{orth}_{\omega_0}D$ the symplectic orthogonal of $D$ with respect to $\omega_0$. Since $\det\big(\{f_i,f_j\}_{_0} \big) = f^2 \neq 0$ on $\mathcal{U}$, $D_x = D \cap T_xM$ is a symplectic subspace of $T_xM$ with respect to $\omega_{0_x}$ at each point $x\in \mathcal{U}$. Thus, $T_xM = D_x \oplus \mathrm{orth}_{\omega_{0_x}}D_x = D_x \oplus \Lambda_{0_x}^\#(D_x^\circ)$, where $D_x^\circ = D^\circ \cap T_x^\ast M$, and $T_x^\ast M = D_x^\circ \oplus (\Lambda_{0_x}^\#(D_x^\circ))^\circ = D_x^\circ \oplus \langle df_1,\ldots,df_{2n-2k}\rangle_x$. Finally, we denote by $\sigma$ the smooth $2$-form on $M$ which corresponds, via the isomorphism $\Lambda_0^\#$, to an element $\Lambda$ of $\mathcal{V}^2(M)$. \begin{proposition}\label{damianou:prop-Lambda-sigma} Under the above assumptions, a bivector field $\Lambda$ on $(M,\omega_0)$, of rank at most $2k$ on $M$, admits as unique Casimirs the functions $f_1,\ldots, f_{2n-2k}$ if and only if its corresponding $2$-form $\sigma$ is a smooth section of $\bigwedge^2D^\circ$ of maximal rank on $\mathcal{U}$. \end{proposition} \begin{proof} Effectively, for any $f_i$, $i=1,\ldots, 2n-2k$, \begin{equation}\label{damianou:prop-Lambda-sigma-fi} \Lambda(df_i, \cdot) = 0 \Leftrightarrow \Lambda_0^\#(\sigma)(df_i, \cdot) = 0 \Leftrightarrow \sigma (X_{f_i}, \Lambda_0^\#(\cdot)) = 0. \end{equation} Thus, $f_1,\ldots,f_{2n-2k}$ are the unique Casimir functions of $\Lambda$ on $\mathcal{U}$ if and only if the vector fields $X_{f_1},\ldots,X_{f_{2n-2k}}$ with functionally independent hamiltonians on $\mathcal{U}$ generate $\ker \sigma$, i.e., for any $x\in \mathcal{U}$, $D_x = \ker \sigma_x^\flat$. The last relation means that $\sigma$ is a section of $\bigwedge^2D^\circ$ of maximal rank on $\mathcal{U}$. \end{proof} Still using the same notation, we can formulate the following main theorem. \begin{theorem}\label{damianou:THEOREM} Let $f_1,\ldots,f_{2n-2k}$ be smooth functions on a $2n$-dimensional differentiable manifold $M$ which are functionally independent almost everywhere, $\omega_0$ an almost symplectic structure on $M$ such that (\ref{damianou:f}) holds on an open and dense subset $\mathcal{U}$ of $M$, $\Omega = \displaystyle{\frac{\omega_0^n}{n!}}$ the corresponding volume form on $M$, and $\sigma$ a section of $\bigwedge^2 D^{\circ}$ of maximal rank on $\mathcal{U}$ that satisfies (\ref{damianou:cond-delta-sigma}). 
Then, the $(2n-2)$-form \begin{equation}\label{damianou:expression-Phi} \Phi = - \frac{1}{f}(\sigma + \frac{g}{k-1}\omega_0)\wedge \frac{\omega_0^{k-2}}{(k-2)!}\wedge df_1\wedge\ldots \wedge df_{2n-2k}, \end{equation} where $f$ is given by (\ref{damianou:f}) and $g = i_{\Lambda_0}\sigma$, corresponds, via the isomorphism $\Psi^{-1}$, to a Poisson tensor $\Lambda$ with orbits of dimension at most $2k$ for which $f_1,\ldots,f_{2n-2k}$ are Casimirs. Precisely, $\Lambda = \Lambda_0^\#(\sigma)$ and the associated bracket of $\Lambda$ on $C^{\infty}(M)$ is given, for any $h_1,h_2 \in C^{\infty}(M)$, by \begin{equation}\label{damianou:bracket-Lambda-Omega} \{h_1,h_2\} \Omega = -\frac{1}{f} dh_1 \wedge dh_2 \wedge (\sigma + \frac{g}{k-1}\omega_0) \wedge \frac{\omega_0^{k-2}}{(k-2)!}\wedge df_1\wedge\ldots \wedge df_{2n-2k}. \end{equation} Conversely, if $\Lambda \in \mathcal{V}^2(M)$ is a Poisson tensor of rank $2k$ on an open and dense subset $\mathcal{U}$ of $M$, then there are $2n-2k$ functionally independent smooth functions $f_1,\ldots,f_{2n-2k}$ on $\mathcal{U}$ and a section $\sigma$ of $\bigwedge^2 D^{\circ}$ of maximal rank on $\mathcal{U}$ satisfying (\ref{damianou:cond-delta-sigma}), such that $\Psi_{\Lambda}$ and $\{\cdot,\cdot\}$ are given, respectively, by (\ref{damianou:expression-Phi}) and (\ref{damianou:bracket-Lambda-Omega}). \end{theorem} \begin{proof} We denote by $\tilde{\Omega}= \displaystyle{\frac{\Lambda_0^n}{n!}}$ the dual $2n$-vector field of $\Omega$ on $M$ and we set $\Lambda = j_{\Phi}\tilde{\Omega}$. For any $f_i$, $i = 1,\ldots,2n-2k$, we have \begin{equation*} \Lambda^\#(df_i) =- j_{df_i}\Lambda = -j_{df_i}j_{\Phi}\tilde{\Omega} = -j_{df_i\wedge \Phi}\tilde{\Omega} = -j_0\tilde{\Omega} = 0, \end{equation*} which means that $f_1,\ldots, f_{2n-2k}$ are Casimir functions of $\Lambda$. We shall see that $\Lambda = \Lambda_0^\#(\sigma)$. Thus, $\Lambda$ will define a Poisson structure on $M$ having the required properties. We calculate the adjoint form $\ast\, \Phi$ of $\Phi$ relative to $\omega_0$: \begin{eqnarray*} \ast\,\Phi & = & - \frac{1}{f} \ast\big((\sigma + \frac{g}{k-1}\omega_0)\wedge \frac{\omega_0^{k-2}}{(k-2)!}\wedge df_1 \ldots \wedge df_{2n-2k}\big) \nonumber \\ & \stackrel{(\ref{damianou:property-ii})}{=} & - (-1)^{(2n-2k-1)(2n-2k)/2}\,\frac{1}{f}i_{X_{f_1}\wedge \ldots \wedge X_{f_{2n-2k}}} \big[\ast \big((\sigma + \frac{g}{k-1}\omega_0)\wedge \frac{\omega_0^{k-2}}{(k-2)!}\big) \big]. \end{eqnarray*} But, from Lepage's decomposition theorem, $\sigma$ can be written as $\sigma = \psi_2 + \psi_0\omega_0$, where $\psi_2$ is an effective $2$-form on $M$ with respect to $\Lambda_0$ and $\psi_0 = \displaystyle{\frac{i_{\Lambda_0}\sigma}{i_{\Lambda_0}\omega_0} = -\frac{g}{n}}$. 
(It is easy to check that $i_{\Lambda_0}\omega_0 = - \langle \omega_0,\Lambda_0\rangle = -\displaystyle{ \frac{Tr(\omega_0^\flat \circ \Lambda_0^\#)}{2}}= - \frac{Tr(I_{2n})}{2}=-n.$) Hence, \begin{eqnarray*} (\sigma +\frac{g}{k-1}\omega_0)\wedge \frac{\omega_0^{k-2}}{(k-2)!} & = & (\psi_2 - \frac{g}{n}\omega_0 + \frac{g}{k-1}\omega_0)\wedge \frac{\omega_0^{k-2}}{(k-2)!} \\ & = & \psi_2\wedge \frac{\omega_0^{k-2}}{(k-2)!} + \frac{n-k+1}{n}g\frac{\omega_0^{k-1}}{(k-1)!} \end{eqnarray*} and \begin{eqnarray}\label{damianou:ast-sigma-k-1} \ast \big((\sigma + \frac{g}{k-1}\omega_0)\wedge \frac{\omega_0^{k-2}}{(k-2)!}\big) & = & \ast\, \big(\psi_2\wedge \frac{\omega_0^{k-2}}{(k-2)!}\big) + \frac{n-k+1}{n}g \big(\ast\,\frac{\omega_0^{k-1}}{(k-1)!}\big) \nonumber \\ & \stackrel{(\ref{damianou:adjoint-simple})}{=} & - \psi_2 \wedge \frac{\omega_0^{n-(k-2)-2}}{(n-(k-2)-2)!} + \frac{n-k+1}{n}g \frac{\omega_0^{n-(k-1)}}{(n-(k-1))!} \nonumber \\ & =& - (\psi_2 - \frac{g}{n}\omega_0)\wedge \frac{\omega_0^{n-k}}{(n-k)!}\, = \,-\sigma \wedge \frac{\omega_0^{n-k}}{(n-k)!}. \end{eqnarray} Consequently, \begin{eqnarray}\label{damianou:ast-Phi-sigma} \ast\, \Phi & = & - (-1)^{(2n-2k-1)(2n-2k)/2}\,\frac{1}{f}i_{X_{f_1}\wedge \ldots \wedge X_{f_{2n-2k}}} \big[ -\sigma \wedge \frac{\omega_0^{n-k}}{(n-k)!}\big] \nonumber \\ & \stackrel{(\ref{damianou:sign-inter. prod})(\ref{damianou:prop-Lambda-sigma-fi})}{=} & \frac{1}{f} \big \langle \frac{\omega_0^{n-k}}{(n-k)!},\, X_{f_1}\wedge \ldots \wedge X_{f_{2n-2k}}\big \rangle \, \sigma \,= \frac{1}{f}f\,\sigma = \sigma. \end{eqnarray} By applying (\ref{damianou:Psi-1,ast}) to the above relation, we obtain \begin{equation*} \Lambda_0^\#(\sigma) = \Lambda_0^\#(\ast\,\Phi) = \Psi^{-1}(\Phi) = j_\Phi \tilde{\Omega} = \Lambda. \end{equation*} Thus, according to Proposition \ref{damianou:theorem-cond-delta-sigma}, $\Lambda$ defines a Poisson structure on $M$, with orbits of dimension at most $2k$, for which $f_1, \ldots, f_{2n-2k}$ are Casimir functions. Obviously, the associated bracket of $\Lambda$ on $C^{\infty}(M)$ is given by (\ref{damianou:bracket-Lambda-Omega}). For any $h_1,h_2 \in C^{\infty}(M)$, \begin{eqnarray*} \{h_1,h_2\}& = & j_{dh_1 \wedge dh_2}\Lambda = j_{dh_1 \wedge dh_2}j_\Phi \tilde{\Omega} = j_{dh_1\wedge dh_2 \wedge \Phi}\tilde{\Omega} \;\;\; \Leftrightarrow \nonumber \\ \{h_1,h_2\}\Omega &= & -\frac{1}{f}dh_1\wedge dh_2 \wedge (\sigma + \frac{g}{k-1}\omega_0) \wedge \frac{\omega_0^{k-2}}{(k-2)!}\wedge df_1\wedge\ldots \wedge df_{2n-2k}. \end{eqnarray*} Conversely, if $\Lambda$ is a Poisson tensor on $M$ with symplectic leaves of dimension at most $2k$, then in a neighborhood $U$ of a nonsingular point there are coordinates $(z_1,\ldots,z_{2k},f_1,\ldots,f_{2n-2k})$ such that the symplectic leaves of $\Lambda$ are defined by $f_l = const.$, $l= 1,\ldots, 2n-2k$. Let $\Lambda_0$ be a nondegenerate bivector field on $U$ such that $f=\langle df_1\wedge \ldots \wedge df_{2n-2k},\: \displaystyle{\frac{\Lambda_0^{n-k}}{(n-k)!}} \rangle \neq 0$ on $U$ and $\sigma$ the $2$-form on $U$ which corresponds, via the isomorphism $\Lambda_0^\#$, to $\Lambda$. As we did earlier, we construct the distribution $D$ on $U$ and its annihilator $D^\circ$. According to Propositions \ref{damianou:prop-Lambda-sigma} and \ref{damianou:theorem-cond-delta-sigma}, $\sigma$ is a section of $\bigwedge^2 D^\circ$ of maximal rank on $U$ satisfying (\ref{damianou:cond-delta-sigma}). 
We will prove that the $(2n-2)$-form $\Psi_{\Lambda} = -i_{\Lambda_0^\#(\sigma)}\Omega = \ast\,\sigma$, where $\Omega = \displaystyle{\frac{\omega_0^n}{n!}}$ is the volume element on $U$ defined by the almost symplectic form $\omega_0$, the inverse of $\Lambda_0$, can be written in the form (\ref{damianou:expression-Phi}). Since (\ref{damianou:f}) holds on $U$, $\Omega$ can be written on $U$ as \begin{equation*} \Omega = \frac{1}{f}\frac{\omega_0^k}{k!}\wedge df_1\wedge \ldots \wedge df_{2n-2k} \end{equation*} and \begin{equation}\label{damianou:eq-Lambda-fi} \Psi_{\Lambda} = - i_{\Lambda}\Omega = - \frac{1}{f}\big(i_{\Lambda} \frac{\omega_0^k}{k!}\big) \wedge df_1 \wedge \ldots \wedge df_{2n-2k}. \end{equation} We now proceed to calculate the $(2k-2)$-form $\displaystyle{-i_{\Lambda}\frac{\omega_0^k}{k!}}$. We remark that $\displaystyle{\frac{\omega_0^k}{k!} = \ast \, \frac{\omega_0^{n-k}}{(n-k)!}}$. So, from (\ref{damianou:property-ii}) we get that \begin{equation}\label{damianou:eq-Lambda-k} -i_{\Lambda}\frac{\omega_0^k}{k!} = \ast \, (\sigma \wedge \frac{\omega_0^{n-k}}{(n-k)!}). \end{equation} Repeating the calculation of (\ref{damianou:ast-sigma-k-1}) in the inverse direction, we have \begin{equation}\label{damianou:ast-sigma-k} \ast \, (\sigma \wedge \frac{\omega_0^{n-k}}{(n-k)!}) = - \ast \ast \big((\sigma + \frac{g}{k-1}\omega_0)\wedge \frac{\omega_0^{k-2}}{(k-2)!}\big) = - (\sigma + \frac{g}{k-1}\omega_0)\wedge \frac{\omega_0^{k-2}}{(k-2)!}. \end{equation} Therefore, by replacing (\ref{damianou:ast-sigma-k}) in (\ref{damianou:eq-Lambda-k}) and the obtained relation in (\ref{damianou:eq-Lambda-fi}), we prove that $\Psi_{\Lambda}$ is given by the expression (\ref{damianou:expression-Phi}). Then, it is clear that $\{\cdot,\cdot\}$ is given by (\ref{damianou:bracket-Lambda-Omega}). \end{proof} \noindent \textbf{The case of almost Poisson structures with prescribed kernel.} Theorem \ref{damianou:THEOREM} can be generalized by replacing the exact $1$-forms $df_1,\ldots,df_{2n-2k}$ with $1$-forms $\alpha_1,\ldots,\alpha_{2n-2k}$ which are linearly independent at each point of an open and dense subset of $M$. It suffices to consider a nondegenerate almost Poisson structure $\Lambda_0$ on $M$ such that \begin{equation*} f = \big\langle \alpha_1\wedge \ldots \wedge \alpha_{2n-2k},\; \frac{\Lambda_0^{n-k}}{(n-k)!}\big\rangle \neq 0 \end{equation*} holds on an open and dense subset $\mathcal{U}$ of $M$ and to construct the distribution $D = \langle X_{\alpha_1},\ldots,X_{\alpha_{2n-2k}}\rangle$, $X_{\alpha_i} = \Lambda_0^\#(\alpha_i)$, and its annihilator $D^\circ$. Then, to each section $\sigma$ of $\bigwedge^2 D^\circ$ of maximal rank on $\mathcal{U}$ corresponds an almost Poisson structure $\Lambda \in \mathcal{V}^2(M)$ of rank at most $2k$ whose kernel coincides with the space $\langle \alpha_1,\ldots,\alpha_{2n-2k}\rangle$ almost everywhere on $M$ and its associated bracket on $C^{\infty}(M)$ is given by \begin{equation}\label{br-almostPoisson} \{h_1,h_2\} \Omega = - \frac{1}{f}dh_1\wedge dh_2\wedge(\sigma + \frac{g}{k-1}\omega_0)\wedge \frac{\omega_0^{k-2}}{(k-2)!}\wedge \alpha_1\wedge\ldots \wedge \alpha_{2n-2k}, \end{equation} $\omega_0$ being the almost symplectic structure on $M$ defined by $\Lambda_0$, $g = i_{\Lambda_0}\sigma$ and $\Omega = \displaystyle{\frac{\omega_0^n}{n!}}$. \subsection{On odd-dimensional manifolds} Let $M$ be a $(2n+1)$-dimensional manifold. 
We remark that any Poisson tensor $\Lambda$ on $M$ admitting $f_1,\ldots,f_{2n+1-2k} \in C^{\infty}(M)$ as Casimir functions can be viewed as a Poisson tensor on $M'=M\times \mathbb{R}$ admitting $f_1,\ldots,f_{2n+1-2k}$ and $f_{2n+2-2k}(x,s)=s$ ($s$ being the canonical coordinate on the factor $\mathbb{R}$) as Casimir functions, and conversely. Thus, the problem of constructing Poisson brackets on $C^{\infty}(M)$ having as center the space of functions generated by $(f_1,\ldots,f_{2n+1-2k})$ is equivalent to that of constructing Poisson brackets on $C^\infty(M')$ having as center the space of functions generated by $(f_1,\ldots,f_{2n+1-2k}, s)$, a setting which was completely studied in subsection \ref{section-even}. In what follows, using the results of subsection 3.1, we establish a formula analogous to (\ref{damianou:bracket-Lambda-Omega}) for Poisson brackets on odd-dimensional manifolds. But, before we proceed, let us recall the notion of \emph{almost cosymplectic} structures on $M$ and some of their properties \cite{lib1, lch2}. An \emph{almost cosymplectic} structure on a smooth manifold $M$, with $\dim M = 2n+1$, is defined by a pair $(\vartheta_0,\Theta_0)\in \Omega^1(M)\times \Omega^2(M)$ such that $\vartheta_0 \wedge \Theta_0^n \neq 0$ everywhere on $M$. The last condition means that $\vartheta_0 \wedge \Theta_0^n$ is a volume form on $M$ and that $\Theta_0$ is of constant rank $2n$ on $M$. Thus, $\ker \vartheta_0$ and $\ker \Theta_0$ are complementary subbundles of $TM$ called, respectively, the \emph{horizontal bundle} and the \emph{vertical bundle}. Of course, their annihilators are complementary subbundles of $T^\ast M$. Moreover, it is well known \cite{lch2} that $(\vartheta_0,\Theta_0)$ gives rise to a transitive \emph{almost Jacobi} structure $(\Lambda_0,E_0) \in \mathcal{V}^2(M)\times \mathcal{V}^1(M)$ on $M$ such that \begin{equation*} i(E_0)\vartheta_0 = 1 \quad \mathrm{and} \quad i(E_0)\Theta_0 = 0, \end{equation*} \begin{equation*} \Lambda_0^\#(\vartheta_0) = 0 \quad \mathrm{and} \quad i(\Lambda_0^\#(\zeta))\Theta_0 = -(\zeta - \langle \zeta, E_0\rangle \vartheta_0), \quad \mathrm{for}\;\;\mathrm{all}\;\; \zeta \in \Omega^1(M). \end{equation*} We have $\ker \vartheta_0 = \mathrm{Im}\Lambda_0^\#$ and $\ker \Theta_0 = \langle E_0\rangle$. So, $TM =\mathrm{Im}\Lambda_0^\# \oplus \langle E_0\rangle$ and $T^\ast M = \langle E_0\rangle^\circ \oplus \langle \vartheta_0\rangle$. The sections of $\langle E_0\rangle^\circ$ are called \emph{semi-basic} forms and $\Lambda_0^\#$ is an isomorphism from the $C^{\infty}(M)$-module of semi-basic $1$-forms to the $C^{\infty}(M)$-module of horizontal vector fields. This isomorphism can be extended, as in (\ref{damianou:def-extension}), to an isomorphism, also denoted by $\Lambda_0^\#$, from the $C^{\infty}(M)$-module of semi-basic $p$-forms onto the $C^{\infty}(M)$-module of horizontal $p$-vector fields. Finally, we note that $(\vartheta_0,\Theta_0)$ determines on $M'=M\times \mathbb{R}$ an almost symplectic structure $\omega'_0 = \Theta_0 + ds \wedge \vartheta_0$ whose corresponding nondegenerate almost Poisson tensor is $\Lambda'_0 = \Lambda_0 + \displaystyle{\frac{\partial}{\partial s}}\wedge E_0$.
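A basic example to keep in mind is $M=\mathbb{R}^{2n+1}$ with coordinates $(q^1,\ldots,q^n,p_1,\ldots,p_n,z)$, $\vartheta_0 = dz$ and $\Theta_0 = \sum_{i=1}^n dq^i\wedge dp_i$. Then $\vartheta_0\wedge \Theta_0^n \neq 0$, and one checks directly that the defining relations above are satisfied by \begin{equation*} E_0 = \frac{\partial}{\partial z}, \qquad \Lambda_0 = \sum_{i=1}^n \frac{\partial}{\partial q^i}\wedge \frac{\partial}{\partial p_i}; \end{equation*} the semi-basic forms are exactly those containing no $dz$ term, and the induced structure on $M'=M\times \mathbb{R}$ is the symplectic form $\omega_0' = \sum_{i=1}^n dq^i\wedge dp_i + ds\wedge dz$.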
Now, we consider $(M, f_1,\ldots,f_{2n+1-2k})$, with $f_1,\ldots,f_{2n+1-2k}$ functionally independent almost everywhere on $M$, and an almost cosymplectic structure $(\vartheta_0,\Theta_0)$ on $M$ whose associated nondegenerate almost Jacobi structure $(\Lambda_0, E_0)$ verifies the condition \begin{equation}\label{f-odd} f = \langle df_1\wedge \ldots \wedge df_{2n+1-2k},\; E_0\wedge \frac{\Lambda_0^{n-k}}{(n-k)!}\rangle \neq 0 \end{equation} on an open and dense subset $\mathcal{U}$ of $M$.\footnote{As in the case of even-dimensional manifolds, such a structure $(\Lambda_0,E_0)$ always exists at least locally.} Let $\omega_0'= \Theta_0 + ds \wedge \vartheta_0$ and $\Lambda'_0 = \Lambda_0 + \displaystyle{\frac{\partial}{\partial s}}\wedge E_0$ be, respectively, the associated almost symplectic and almost Poisson structure on $M'=M\times \mathbb{R}$. Since, for any $m = 1,\ldots,n+1$, \begin{equation}\label{volume'} \frac{\omega_0'\,^m}{m!} = \frac{\Theta_0^m}{m!} + ds\wedge \vartheta_0 \wedge \frac{\Theta_0^{m-1}}{(m-1)!} \quad \mathrm{and} \quad \frac{\Lambda_0'\,^m}{m!} = \frac{\Lambda_0^m}{m!} + \frac{\partial}{\partial s}\wedge E_0 \wedge \frac{\Lambda_0^{m-1}}{(m-1)!}, \end{equation} it is clear that \begin{eqnarray}\label{f-odd-even} \lefteqn{\langle df_1\wedge \ldots \wedge df_{2n+1-2k}\wedge ds,\; \frac{\Lambda_0'\,^{n+1-k}}{(n+1-k)!} \rangle }\nonumber \\ & = & \langle df_1\wedge \ldots \wedge df_{2n+1-2k}\wedge ds,\; \frac{\Lambda_0^{n+1-k}}{(n+1-k)!} + \frac{\partial}{\partial s}\wedge E_0 \wedge \frac{\Lambda_0^{n-k}}{(n-k)!}\rangle \nonumber \\ & = & \langle df_1\wedge \ldots \wedge df_{2n+1-2k}\wedge ds,\; \frac{\partial}{\partial s}\wedge E_0 \wedge \frac{\Lambda_0^{n-k}}{(n-k)!}\rangle = - f \neq 0 \end{eqnarray} on the open and dense subset $\mathcal{U}'= \mathcal{U} \times \mathbb{R}$ of $M'$. Furthermore, we view any bivector field $\Lambda$ on $(M,\vartheta_0,\Theta_0)$ having as Casimirs the given functions as a bivector field on $(M',\omega_0')$ having $f_1,\ldots,f_{2n+1-2k}$ and $f_{2n+2-2k}(x,s)=s$ as Casimirs. Let $D'^\circ$ be the annihilator of the distribution $D'=\langle X_{f_1}',\ldots,X_{f_{2n+2-2k}}' \rangle$ on $M'$ generated by the hamiltonian vector fields $X_{f_i}'= \Lambda_0'^{\#}(df_i) = \Lambda_0^\#(df_i)-\langle df_i,E_0\rangle \displaystyle{\frac{\partial}{\partial s}}$, $i = 1,\ldots,2n+1-2k$, and $X'_{f_{2n+2-2k}} = \Lambda_0'^{\#}(ds)=E_0$ of $f_1,\ldots,f_{2n+1-2k}$ and $f_{2n+2-2k}(x,s) = s$ with respect to $\Lambda_0'$. Then, from Proposition \ref{damianou:prop-Lambda-sigma} we get that there exists a unique $2$-form $\sigma'$ on $M'$, which is a section of $\bigwedge^2 D'^\circ$ of maximal rank $2k$ on $\mathcal{U}'=\mathcal{U}\times \mathbb{R}$, such that $\Lambda = \Lambda_0'^{\#}(\sigma')$.
Moreover, since $\Lambda$ is independent of $s$ and without a term of type $X\wedge \displaystyle{\frac{\partial}{\partial s}}$, $\sigma'$ must be of type \begin{equation}\label{sigma'} \sigma' = \sigma + \tau \wedge ds, \end{equation} where $\sigma$ and $\tau$ are, respectively, a $2$-form and a $1$-form on $M$ having the following additional properties: \begin{enumerate} \item [i)] $\sigma$ is a section $\bigwedge^2 \langle E_0\rangle^\circ$, i.e., $\sigma$ is a semi-basic $2$-form on $M$ with respect to $(\Lambda_0,E_0)$; \item[ii)] $\tau$ is a section of $D^\circ = \langle X_{f_1},\ldots,X_{f_{2n+1-2k}},E_0\rangle^\circ$, where $X_{f_i} = \Lambda_0^\#(df_i)$, i.e., $\tau$ is a semi-basic $1$-form on $(M,\Lambda_0,E_0)$ which is also semi-basic with respect to $X_{f_1},\ldots, X_{f_{2n+1-2k}}$; \item[iii)] for any $f_i$, $i=1,\ldots,2n+1-2k$, $\sigma(X_{f_i},\cdot) + \langle df_i,E_0\rangle \tau =0$. \end{enumerate} Consequently, $\Lambda$ is written, in an unique way, as $\Lambda = \Lambda_0^\#(\sigma) + \Lambda_0^\#(\tau)\wedge E_0$. Summarizing, we may formulate the next Proposition. \begin{proposition} Under the above notations and assumptions, a bivector field $\Lambda$ on $(M,\vartheta_0,\Theta_0)$, of rank at most $2k$, has as unique Casimirs the functions $f_1,\ldots, f_{2n+1-2k}$ if and only if its corresponding pair of forms $(\sigma,\tau)$ has the properties (i)-(iii) and $(rank\,\sigma,\, rank\,\tau)=(2k,0)$ or $(2k,1)$ or $(2k-2,1)$ on $\mathcal{U}$. \end{proposition} On the other hand, it follows from Theorem \ref{damianou:THEOREM} that the bracket $\{\cdot,\cdot\}$ of $\Lambda$ on $C^{\infty}(M)$ is calculated, for any $h_1,h_2\in C^{\infty}(M)$, viewed as elements of $C^\infty(M')$, by the formula \begin{equation*} \{h_1,h_2\}\Omega'\stackrel{(\ref{f-odd-even})}{=} \frac{1}{f}dh_1\wedge dh_2\wedge (\sigma'+ \frac{g'}{k-1}\omega_0')\wedge \frac{\omega_0'\,^{k-2}}{(k-2)!}\wedge df_1\wedge \ldots \wedge df_{2n+1-2k}\wedge ds, \end{equation*} where $\Omega'=\displaystyle{\frac{\omega_0'\,^{n+1}}{(n+1)!}}$ and $g'=i_{\Lambda_0'}\sigma'$. But, $\Omega'\stackrel{(\ref{volume'})}{=}-\Omega\wedge ds$, $\Omega = \vartheta_0\wedge \displaystyle{\frac{\Theta_0^n}{n!}}$ being a volume form on $M$, and $g'=i_{\Lambda_0'}\sigma' = i_{\Lambda_0 + \partial/\partial s\wedge E_0}(\sigma + \tau \wedge ds) = i_{\Lambda_0}\sigma = g$. Thus, taking into account (\ref{volume'}) and (\ref{sigma'}), we have \begin{equation*} \{h_1,h_2\}\Omega\wedge ds = - \frac{1}{f}dh_1\wedge dh_2\wedge (\sigma+ \frac{g}{k-1}\Theta_0)\wedge \frac{\Theta_0^{k-2}}{(k-2)!}\wedge df_1\wedge \ldots \wedge df_{2n+1-2k}\wedge ds \end{equation*} which is equivalent to \begin{equation*} \{h_1,h_2\}\Omega = - \frac{1}{f}dh_1\wedge dh_2\wedge (\sigma+ \frac{g}{k-1}\Theta_0)\wedge \frac{\Theta_0^{k-2}}{(k-2)!}\wedge df_1\wedge \ldots \wedge df_{2n+1-2k}. \end{equation*} However, according to Proposition \ref{damianou:theorem-cond-delta-sigma}, $\{\cdot,\cdot\}$ is a Poisson bracket on $C^{\infty}(M) \subset C^\infty(M')$ if and only if \begin{equation}\label{cond-sigma'} 2\sigma'\wedge \delta'(\sigma')=\delta'(\sigma'\wedge \sigma'), \end{equation} where $\delta'= \ast\,'d\,\ast'$ is the codifferential on $\Omega(M')$ of $(M',\omega_0')$ defined by the isomorphism $\ast': \Omega^p(M') \to \Omega^{2n+2-p}(M')$ of (\ref{def-ast}). We want to translate (\ref{cond-sigma'}) to a condition on $(\sigma,\tau)$. 
Let $\Omega_{sb}^p(M)$ be the space of semi-basic $p$-forms on $(M,\Lambda_0,E_0)$, $\ast$ the isomorphism between $\Omega_{sb}^p(M)$ and $\Omega_{sb}^{2n-p}(M)$ given, for any $\varphi \in \Omega_{sb}^p(M)$, by \begin{equation*} \ast \, \varphi = (-1)^{(p-1)p/2}i_{\Lambda_0^\#(\varphi)}\frac{\Theta_0^n}{n!}, \end{equation*} $d_{sb} : \Omega_{sb}^p(M) \to \Omega_{sb}^{p+1}(M)$ the operator which assigns to each semi-basic form $\varphi$ the semi-basic part of its differential $d\varphi$, and $\delta = \ast \,d_{sb} \,\ast$ the associated ``codifferential'' operator on $\Omega_{sb}(M)=\oplus_{p\in \mathbb{Z}}\Omega_{sb}^p(M)$. By a straightforward, but long, computation, we show that (\ref{cond-sigma'}) is equivalent to the system \begin{equation}\label{cond-sigma-tau} \left\{ \begin{array}{l} 2\sigma\wedge \delta(\sigma)=\delta(\sigma\wedge \sigma)\\ \\ \delta(\sigma\wedge \tau) + \delta(\sigma)\wedge\tau - \sigma\wedge \delta(\tau) = (i_{\Lambda_0^\#(d\vartheta_0)}\sigma)\sigma -\frac{1}{2}i_{\Lambda_0^\#(d\vartheta_0)}(\sigma \wedge \sigma). \end{array} \right. \end{equation} Hence, we deduce: \begin{proposition}\label{prop-sigma-odd} Under the above assumptions and notations, $\Lambda = \Lambda_0^\#(\sigma) + \Lambda_0^\#(\tau)\wedge E_0$ defines a Poisson structure on $(M, \vartheta_0,\Theta_0)$ if and only if $(\sigma,\tau)$ satisfies (\ref{cond-sigma-tau}). \end{proposition} Concluding, we can state the following theorem. \begin{theorem}\label{THEOREM-ODD} Let $f_1,\ldots,f_{2n+1-2k}$ be smooth functions on a $(2n+1)$-dimensional smooth manifold $M$ which are functionally independent almost everywhere, $(\vartheta_0,\Theta_0)$ an almost cosymplectic structure on $M$ such that (\ref{f-odd}) holds on an open and dense subset $\mathcal{U}$ of $M$, $\Omega = \vartheta_0\wedge\displaystyle{\frac{\Theta_0^n}{n!}}$ the corresponding volume form on $M$, and $(\sigma,\tau)$ an element of $\Omega^2_{sb}(M)\times \Omega^1_{sb}(M)$, with $(rank\,\sigma,\, rank\,\tau)=(2k,0)$ or $(2k,1)$ or $(2k-2,1)$ on $\mathcal{U}$, that has the properties (ii)-(iii) and satisfies (\ref{cond-sigma-tau}). Then, the bracket $\{\cdot,\cdot\}$ on $C^{\infty}(M)$ given, for any $h_1,h_2 \in C^{\infty}(M)$, by \begin{equation}\label{br-odd} \{h_1,h_2\}\Omega = - \frac{1}{f}dh_1\wedge dh_2\wedge (\sigma+ \frac{g}{k-1}\Theta_0)\wedge \frac{\Theta_0^{k-2}}{(k-2)!}\wedge df_1\wedge \ldots \wedge df_{2n+1-2k}, \end{equation} where $f$ is that of (\ref{f-odd}) and $g = i_{\Lambda_0}\sigma$, defines a Poisson structure $\Lambda$ on $M$, $\Lambda = \Lambda_0^\#(\sigma) + \Lambda_0^\#(\tau)\wedge E_0$, with symplectic leaves of dimension at most $2k$ for which $f_1,\ldots,f_{2n+1-2k}$ are Casimirs. The converse is also true. \end{theorem} \begin{remark} {\rm We remark that, in both cases (of even dimension $m=2n$ and of odd dimension $m=2n+1$), when $k=1$, the brackets (\ref{damianou:bracket-Lambda-Omega}) and (\ref{br-odd}) are reduced to a bracket of type (\ref{damianou:equation2}). Precisely, \begin{equation*} \{h_1,h_2\}\Omega = -\frac{g}{f}dh_1\wedge dh_2\wedge df_1\wedge \ldots \wedge df_{m-2}. \end{equation*}} \end{remark} \section{Some Examples} \subsection{Dirac Brackets} Let $(M,\omega_0)$ be a symplectic manifold of dimension $2n$, $\Lambda_0$ its associated Poisson structure, and $f_1,\ldots, f_{2n-2k}$ smooth functions on $M$ whose differentials are linearly independent at each point in the submanifold $M_0$ of $M$ defined by the equations $f_1(x) =0, \,\ldots, \, f_{2n-2k}(x) =0$. 
We assume that the matrix $\big(\{f_i,f_j\}_{_0} \big)$ is invertible on an open neighborhood $\mathcal{W}$ of $M_0$ in $M$ and we denote by $c_{ij}$ the coefficients of its inverse matrix which are smooth functions on $\mathcal{W}$ such that $\sum_{j=1}^{2n-2k}\{f_i,f_j\}_{_0}c_{jk} = \delta_{ik}$. We consider on $\mathcal{W}$ the $2$-form \begin{equation}\label{damianou:sigma-dirac} \sigma = \omega_0 + \sum_{i<j}c_{ij}df_i\wedge df_j. \end{equation} We will prove that it is a section of $\bigwedge^2 D^\circ$ of maximal rank on $\mathcal{W}$ which verifies (\ref{damianou:cond-delta-sigma}). As in subsection \ref{section-even}, $D$ denotes the subbundle of $TM$ generated by the hamiltonian vector fields $X_{f_i}$ of $f_i$, $i=1,\ldots,2n-2k$, with respect to $\Lambda_0$ and $D^\circ$ its annihilator. For any $X_{f_l}$, $l=1,\ldots,2n-2k$, we have \begin{eqnarray*} \sigma (X_{f_l},\cdot)& = & \omega_0(X_{f_l},\cdot) + \sum_{i<j}c_{ij}\langle df_i, X_{f_l}\rangle df_j - \sum_{i<j}c_{ij}\langle df_j, X_{f_l}\rangle df_i \\ & = & - df_l + \sum_{i<j}c_{ij}\{f_l,f_i\}_{_0}df_j - \sum_{i<j}c_{ij}\{f_l,f_j\}_{_0}df_i \\ & = & -df_l + \sum_j\delta_{lj}df_j \,=\, -df_l + df_l\, =\, 0, \end{eqnarray*} which means that $\sigma$ is a section of $\bigwedge^2 D^\circ \to \mathcal{W}$. The assumption that $\big(\{f_i,f_j\}_{_0}\big)$ is invertible ensures that $D$ is a symplectic subbundle of $T_{\mathcal{W}}M$. So, for any $x \in \mathcal{W}$, $T^\ast_x M = D_x^\circ \oplus \langle df_1,\ldots,df_{2n-2k} \rangle_x$ and $\bigwedge^2 T_x^\ast M = \bigwedge^2 D_x^\circ + \bigwedge^2 \langle df_1,\ldots,df_{2n-2k}\rangle_x + D_x^\circ \wedge \langle df_1,\ldots,df_{2n-2k}\rangle_x$. But, $\omega_0$ is a nondegenerate section of $\bigwedge^2 T^\ast M$ and the part $\sum_{i<j}c_{ij}df_i\wedge df_j$ of $\sigma$ is a smooth section of $\bigwedge^2 \langle df_1,\ldots,df_{2n-2k}\rangle$ of maximal rank on $\mathcal{W}$, because $\det(c_{ij})\neq 0$ on $\mathcal{W}$. Thus, $\sigma$ is of maximal rank on $\mathcal{W}$. Also, we have \begin{equation*} g = i_{\Lambda_0}\sigma = - \langle \omega_0 + \sum_{i<j}c_{ij}df_i\wedge df_j, \Lambda_0\rangle = - n - \sum_{i<j}c_{ij} \{f_i,f_j\}_{_0} = -n + (n-k) = -k, \end{equation*} and \begin{eqnarray}\label{damianou:dirac-sigma} \ast \, \sigma & \stackrel{(\ref{damianou:ast-Phi-sigma})(\ref{damianou:expression-Phi})}{=} & - \frac{1}{f}(\sigma + \frac{g}{k-1}\omega_0)\wedge \frac{\omega_0^{k-2}}{(k-2)!}\wedge df_1\wedge\ldots \wedge df_{2n-2k} \nonumber \\ & = & - \frac{1}{f} (\omega_0 + \sum_{i<j}c_{ij}df_i\wedge df_j - \frac{k}{k-1}\omega_0) \wedge \frac{\omega_0^{k-2}}{(k-2)!}\wedge df_1\wedge\ldots \wedge df_{2n-2k} \nonumber \\ & = & \frac{1}{f} \frac{\omega_0^{k-1}}{(k-1)!}\wedge df_1 \wedge \ldots \wedge df_{2n-2k}. \end{eqnarray} Consequently, \begin{equation*} \delta \sigma = (\ast \,d \,\ast)\sigma \stackrel{(\ref{damianou:dirac-sigma})}{=} \ast (- \frac{df}{f}\wedge (\ast \,\sigma)) \stackrel{(\ref{damianou:property-ii})}{=} - \frac{1}{f} i_{X_f} \sigma \end{equation*} and \begin{equation}\label{damianou:1} \quad 2\sigma \wedge \delta(\sigma) = -\frac{2}{f}\sigma \wedge (i_{X_f}\sigma) = - \frac{1}{f}i_{X_f}(\sigma \wedge \sigma). 
\end{equation} On the other hand, \begin{eqnarray}\label{damianou:dirac-sigma-sigma} \ast \, (\sigma \wedge \sigma) & \stackrel{(\ref{damianou:property-ii})}{=} & - i_{\Lambda_0^\#(\sigma)} (\ast \, \sigma) \stackrel{(\ref{damianou:dirac-sigma})}{=} - \frac{1}{f} \big(i_{\Lambda_0^\#(\sigma)}\frac{\omega_0^{k-1}}{(k-1)!} \big)\wedge df_1 \wedge \ldots \wedge df_{2n-2k} \nonumber \\ & \stackrel{(\ref{damianou:eq-Lambda-k})}{=} & \frac{1}{f}\,[\ast (\sigma \wedge \frac{\omega_0^{n-k+1}}{(n-k+1)!})]\wedge df_1 \wedge \ldots \wedge df_{2n-2k} \nonumber \\ &\stackrel{(\ref{damianou:ast-sigma-k-1})(\ref{damianou:sigma-dirac})}{=} & - \frac{1}{f}(\omega_0 + \sum_{i<j}c_{ij}df_i\wedge df_j - \frac{k}{k-2}\omega_0)\wedge \frac{\omega_0^{k-3}}{(k-3)!}\wedge df_1 \wedge \ldots \wedge df_{2n-2k} \nonumber \\ & = & \frac{2}{f}\, \frac{\omega_0^{k-2}}{(k-2)!}\wedge df_1 \wedge \ldots \wedge df_{2n-2k} \end{eqnarray} and \begin{equation}\label{damianou:2} \delta (\sigma \wedge \sigma) = \ast \,d \,\ast (\sigma \wedge \sigma) \stackrel{(\ref{damianou:dirac-sigma-sigma})}{=} \ast \, \big(-\frac{df}{f}\wedge \ast\,(\sigma \wedge \sigma) \big) \stackrel{(\ref{damianou:property-ii})}{=} - \frac{1}{f}i_{X_f}(\sigma \wedge \sigma). \end{equation} From (\ref{damianou:1}) and (\ref{damianou:2}) we conclude that $\sigma$ verifies (\ref{damianou:cond-delta-sigma}). Thus, according to Theorem \ref{damianou:THEOREM}, the bivector field \begin{equation*} \Lambda = \Lambda_0^\#(\sigma) = \Lambda_0 + \sum_{i<j}c_{ij}X_{f_i}\wedge X_{f_j} \end{equation*} defines a Poisson structure on $\mathcal{W}$ whose corresponding bracket $\{\cdot,\cdot\}$ on $C^\infty (\mathcal{W}, \mathbb{R})$ is given, for any $h_1,h_2 \in C^\infty (\mathcal{W},\mathbb{R})$, by \begin{equation}\label{damianou:br-dirac} \{h_1,h_2\}\Omega = \frac{1}{f} dh_1 \wedge dh_2 \wedge \frac{\omega_0^{k-1}}{(k-1)!}\wedge df_1 \wedge \ldots \wedge df_{2n-2k}. \end{equation} In the above expression of $\Lambda$ we recognize the Poisson structure defined by Dirac \cite{damianou:dr} on an open neighborhood $\mathcal{W}$ of the constrained submanifold $M_0$ of $M$ and in (\ref{damianou:br-dirac}) the expression of the Dirac bracket given in \cite{damianou:Grab2}. \subsection{Almost Poisson brackets for nonholonomic systems} Let $Q$ be the configuration space of a Lagrangian system with Lagrangian function $L:TQ\to \mathbb{R}$, subjected to nonholonomic, homogeneous, constraints defined by a distribution $C \subset TQ$ on $Q$. In a local coordinate system $(q^1,\ldots,q^n,\dot{q}^1,\ldots,\dot{q}^n)$ of $TQ$, $C$ is described by the independent equations \begin{equation}\label{nh-C} \zeta_s^i(q)\dot{q}^s =0,\footnote{In this subsection, the Einstein convention of sum over repeated indices holds.} \quad \quad i = 1,\ldots,n-k, \end{equation} where $\zeta^i_s$, $s=1,\ldots,n$, are smooth functions on $Q$, and the equations of motion of the nonholonomic system are given by \begin{equation}\label{nh-eqmotion} \frac{d}{dt}(\frac{\partial L}{\partial \dot{q}^s}) - \frac{\partial L}{\partial q^s} = \lambda_i\zeta^i_s, \quad s= 1,\ldots,n, \end{equation} ($\lambda_i$ being the Lagrange multipliers) together with the constraint equations (\ref{nh-C}). We now turn to the Hamiltonian formulation of our system on the cotangent bundle $T^\ast Q$ of $Q$. 
We suppose that $T^\ast Q$ is equipped with the standard, nondegenerate, Poisson structure $\Lambda_0 = \frac{\partial}{\partial p_s}\wedge \frac{\partial}{\partial q^s}$ associated with the symplectic form $\omega_0=dp_s\wedge dq^s$. Let $\mathcal{L}:TQ \to T^\ast Q$, $(q^s,\dot{q}^s)\mapsto (q^s,p_s=\frac{\partial L}{\partial \dot{q}^s})$, be the Legendre transformation associated with $L$. Assuming that $L$ is regular, we have that $\mathcal{L}$ is a diffeomorphism which maps the equations of motion (\ref{nh-eqmotion}) to the system \begin{eqnarray}\label{nh-hamilton} \dot{q}^s & = & \frac{\partial H}{\partial p_s} \nonumber \\ \dot{p}_s & = & -\frac{\partial H}{\partial q^s} + \lambda_i\zeta^i_s,\quad \quad s=1,\ldots,n, \end{eqnarray} where $H : T^\ast Q \to \mathbb{R}$ is the Hamiltonian given by $H=(\dot{q}^s\frac{\partial L}{\partial \dot{q}^s} - L)\circ \mathcal{L}^{-1}$, and the constraint distribution $C$ to the constraint submanifold $\mathcal{M}$ of $T^\ast Q$, which is defined by the equations \begin{equation*} f^i(q,p) = \zeta^i_s(q)\frac{\partial H}{\partial p_s} =0, \quad \quad i=1,\ldots,n-k. \end{equation*} Also, the regularity assumption on $L$ implies that, at each point $(q,p)\in \mathcal{M}$, $T_{(q,p)}T^\ast Q$ splits into a direct sum of subspaces and that the matrix $\mathcal{C} = \big(\mathcal{C}^{ij}\big)=\big(\Lambda_0(df^i,\mathbf{q}^\ast \zeta^j)\big)=\big(\zeta^i_s\displaystyle{\frac{\partial^2H}{\partial p_s \partial p_t}}\zeta^j_t\big)$, which is symmetric, is invertible on $\mathcal{M}$. Precisely, \begin{equation*} T_{(q,p)}T^\ast Q = T_{(q,p)}\mathcal{M}\oplus \mathcal{Z}, \end{equation*} where $\mathcal{Z}\subset TT^\ast Q$ is the distribution on $T^\ast Q$ spanned by the vector fields \begin{equation*} Z^i = \zeta_s^i\frac{\partial}{\partial p_s} = \Lambda_0^\#(-\mathbf{q}^\ast \zeta^i), \end{equation*} where $\zeta^i = \zeta^i_s(q)dq^s$, $i=1,\ldots,n-k$, are the constraint $1$-forms on $Q$ and $\mathbf{q} : T^\ast Q \to Q$ is the canonical projection. Hence, in view of (\ref{nh-hamilton}), the Hamiltonian vector field $X_H = \Lambda_0^\#(dH)$ admits, along $\mathcal{M}$, the decomposition $X_H = X_{nh}-\lambda_iZ^i$. The part $X_{nh}$ is tangent to $\mathcal{M}$ and $\lambda_iZ^i$ lies on $\mathcal{Z}$, along $\mathcal{M}$. According to the results of \cite{c-mdl-d, mrl-nh, vm}, the dynamical equations of $X_{nh}$ on $\mathcal{M}$ are expressed in Hamiltonian form with respect to the restriction $\{\cdot,\cdot\}_{nh}^{\mathcal{M}}$ on $C^\infty(\mathcal{M})$ of the bracket $\{\cdot,\cdot\}_{nh}$ given, for any $H_1,H_2\in C^\infty(T^\ast Q)$, by \begin{eqnarray}\label{nh-bracket} \lefteqn{\{H_1,H_2\}_{nh} = \{H_1,H_2\}_{_0} +\mathcal{C}_{lm}\{f^l,H_1\}_{_0}\langle dH_2,Z^m\rangle} \nonumber \\ & & - \mathcal{C}_{lm}\{f^l,H_2\}_{_0}\langle dH_1,Z^m\rangle + \mathcal{C}_{ij}\{f^j,f^l\}_{_0}\mathcal{C}_{lm}\langle dH_1,Z^i\rangle \langle dH_2,Z^m\rangle, \end{eqnarray} where $\{\cdot,\cdot\}_{_0}$ is the bracket of $\Lambda_0$ on $C^\infty(T^\ast Q)$ and $\big(\mathcal{C}_{ij}\big)$ is the inverse matrix of $\mathcal{C}$. In other words, for functions $h_1,h_2 \in C^\infty(\mathcal{M})$, the value of $\{h_1,h_2\}_{nh}^{\mathcal{M}}$ is equal to the value of $\{H_1,H_2\}_{nh}$ along $\mathcal{M}$, where $H_1$ and $H_2$ are, respectively, arbitrary smooth extensions of $h_1$ and $h_2$ on $T^\ast Q$. We will show that (\ref{nh-bracket}), and hence $\{\cdot,\cdot\}_{nh}^{\mathcal{M}}$, can be calculated by (\ref{br-almostPoisson}). 
We remark that \begin{equation*} \Lambda_{nh} = \Lambda_0 + \mathcal{C}_{lm}X_{f^l}\wedge Z^m + \frac{1}{2}\mathcal{C}_{ij}\{f^j,f^l\}_{_0} \ \mathcal{C}_{lm}Z^i\wedge Z^m, \end{equation*} where $X_{f^l}=\Lambda_0^\#(df^l)$, is the bivector field on $T^\ast Q$ associated to (\ref{nh-bracket}) whose kernel along $\mathcal{M}$ coincides with the space $\langle df^1,\ldots, df^{n-k}, \mathbf{q}^\ast \zeta^1,\ldots,\mathbf{q}^\ast \zeta^{n-k}\rangle \vert_{\mathcal{M}}$. In fact, \begin{eqnarray*} \Lambda_{nh}(df^s) & = & X_{f^s} + \mathcal{C}_{lm}\{f^l,f^s\}_{_0}Z^m - \mathcal{C}_{lm}\langle df^s,Z^m\rangle X_{f^l} \nonumber \\ & & + \, \frac{1}{2}\mathcal{C}_{ij}\{f^j,f^l\}_{_0}\mathcal{C}_{lm} \langle df^s, Z^i\rangle Z^m - \frac{1}{2}\mathcal{C}_{ij}\{f^j,f^l\}_{_0}\mathcal{C}_{lm}\langle df^s,Z^m\rangle Z^i \nonumber \\ & = & X_{f^s} + \mathcal{C}_{lm}\{f^l,f^s\}_{_0}Z^m - \mathcal{C}_{lm}\mathcal{C}^{sm} X_{f^l} \nonumber \\ & & +\, \frac{1}{2}\mathcal{C}_{ij}\{f^j,f^l\}_{_0}\mathcal{C}_{lm}\mathcal{C}^{si}Z^m - \frac{1}{2}\mathcal{C}_{ij}\{f^j,f^l\}_{_0}\mathcal{C}_{lm}\mathcal{C}^{sm}Z^i \nonumber \\ & = & X_{f^s} + \mathcal{C}_{lm}\{f^l,f^s\}_{_0}Z^m - X_{f^s} + \frac{1}{2}\{f^s,f^l\}_{_0}\mathcal{C}_{lm}Z^m - \frac{1}{2}\mathcal{C}_{ij}\{f^j,f^s\}_{_0}Z^i\\ & = & 0 \end{eqnarray*} and \begin{equation*} \Lambda_{nh}(\mathbf{q}^\ast \zeta^s) = \Lambda_0^\#(\mathbf{q}^\ast \zeta^s) + \mathcal{C}_{lm}\langle \mathbf{q}^\ast \zeta^s, X_{f^l}\rangle Z^m = - Z^s + \mathcal{C}_{lm}\mathcal{C}^{ls}Z^m = - Z^s + Z^s = 0, \end{equation*} while $\mathrm{rank}\, \Lambda_{nh} = 2k$ everywhere on $\mathcal{M}$ \cite{vm}. On the other hand, $\Lambda_{nh}$ can be viewed as the image, via the isomorphism $\Lambda_0^\#$, of the $2$-form \begin{equation*} \sigma = \omega_0 - \mathcal{C}_{lm}df^l \wedge \mathbf{q}^\ast \zeta^m + \frac{1}{2}\mathcal{C}_{ij}\{f^j,f^l\}_{_0}\mathcal{C}_{lm}\mathbf{q}^\ast \zeta^i\wedge \mathbf{q}^\ast \zeta^m \end{equation*} on $T^\ast Q$ with $\mathrm{rank}\,\sigma = 2k$ on $\mathcal{M}$. Also, \begin{equation*} f= \langle df^1\wedge\ldots \wedge df^{n-k}\wedge \mathbf{q}^\ast \zeta^1 \wedge \ldots \wedge \mathbf{q}^\ast \zeta^{n-k},\; \frac{\Lambda_0^{n-k}}{(n-k)!} \rangle \neq 0 \end{equation*} on $\mathcal{M}$, because $f^2 = \det J = (\det \mathcal{C})^2 \neq 0$ on $\mathcal{M}$, where \begin{equation*} J= \left(\begin{array}{cc} \{f^i,f^j\}_{_0} & \Lambda_0(df^i,\mathbf{q}^\ast \zeta^j) \\ \Lambda_0(\mathbf{q}^\ast \zeta^i, df^j) & \Lambda_0(\mathbf{q}^\ast \zeta^i,\mathbf{q}^\ast \zeta^j) \end{array} \right) = \left( \begin{array}{cc} \{f^i,f^j\}_{_0} & \mathcal{C} \\ -\mathcal{C} & 0 \end{array} \right), \end{equation*} and \begin{eqnarray*} g & = & i_{\Lambda_0}\sigma = - \langle \omega_0 - \mathcal{C}_{lm}df^l \wedge \mathbf{q}^\ast \zeta^m + \frac{1}{2}\mathcal{C}_{ij}\{f^j,f^l\}_{_0}\mathcal{C}_{lm}\mathbf{q}^\ast \zeta^i\wedge \mathbf{q}^\ast \zeta^m, \, \Lambda_0 \rangle \\ & = & - (n- \mathcal{C}_{lm}\mathcal{C}^{lm}) = - n + (n-k) = -k. \end{eqnarray*} Hence, we can apply (\ref{br-almostPoisson}) for the calculation of $\{\cdot,\cdot\}_{nh}$ on $C^\infty(T^\ast Q)$ and, by restriction, on $C^\infty(\mathcal{M})$. 
For any $H_1,H_2 \in C^\infty(T^\ast Q)$, \begin{equation*} \{H_1,H_2\}_{nh} \Omega = \frac{1}{f}dH_1 \wedge dH_2 \wedge \frac{\omega_0^{k-1}}{(k-1)!}\wedge df^1\wedge \ldots \wedge df^{n-k}\wedge \mathbf{q}^\ast \zeta^1 \wedge \ldots \wedge \mathbf{q}^\ast \zeta^{n-k}, \end{equation*} where $\Omega = \displaystyle{\frac{\omega_0^n}{n!}}$ is the corresponding volume element on $T^\ast Q$. \begin{remark} {\rm Without doubt, $\Lambda_{nh}$ is Poisson if and only if $\sigma$ satisfies (\ref{damianou:cond-delta-sigma}). But, van der Schaft and Maschke proved \cite{vm} that $\{\cdot,\cdot\}_{nh}$ satisfies the Jacobi identity if and only if the constraints (\ref{nh-C}) are holonomic. Hence, we conclude that $\sigma$ satisfies (\ref{damianou:cond-delta-sigma}) if and only if the constraint distribution $C$ is completely integrable. These facts have an interesting geometric interpretation observed by Koon and Marsden \cite{km}; the vanishing of the Schouten bracket $[\Lambda_{nh},\Lambda_{nh}]$ is equivalent to the vanishing of the curvature of an Ehresmann connection associated with the constraint distribution $C$.} \end{remark} \subsection{Periodic Toda and Volterra lattices} In this paragraph we study the linear Poisson structure $\Lambda_{_T}$ associated with the periodic Toda lattice of $n$ particles. This Poisson structure has two well-known Casimir functions. Using Theorem \ref{damianou:THEOREM} we construct another Poisson structure having the same Casimir invariants as $\Lambda_{_T}$. It turns out that this structure decomposes as a direct sum of two Poisson tensors, one of which (involving only the $a$ variables in Flaschka's coordinates) is the quadratic Poisson bracket of the Volterra lattice (also known as the KM-system). It agrees with the general philosophy (see \cite{damianou:Dam02}) that one obtains the Volterra lattice from the Toda lattice by restricting to the $a$ variables. The periodic Toda lattice of $n$ particles ($n\geq 2$) is the system of ordinary differential equations on $\mathbb{R}^{2n}$ which in Flaschka's \cite{damianou:fl} coordinate system $(a_1,\ldots,a_n, b_1,\ldots,b_n)$ takes the form \begin{equation*} \dot{a}_i = a_i(b_{i+1} - b_i) \quad \mathrm{and} \quad \dot{b}_i = 2(a_i^2 - a_{i-1}^2) \quad \quad (i \in \mathbb{Z} \;\;\; \mathrm{and} \;\;\; (a_{i+n},b_{i+n})=(a_i,b_i)). \end{equation*} This system is hamiltonian with respect to the nonstandard Lie-Poisson structure \begin{equation*}\label{damianou:Poisson-Toda} \Lambda_T = \sum_{i=1}^n a_i \frac{\partial}{\partial a_i}\wedge(\frac{\partial}{\partial b_i} - \frac{\partial}{\partial b_{i+1}}) \end{equation*} on $\mathbb{R}^{2n}$ and it has as hamiltonian the function $H = \sum_{i = 1}^n(a_i^2 + \displaystyle{\frac{1}{2}b_i^2})$. The structure $\Lambda_T$ is of rank $2n-2$ on $\mathcal{U} = \{(a_1,\ldots,a_n,b_1,\ldots,b_n) \in \mathbb{R}^{2n} \; / \; \sum_{i=1}^n a_1\ldots a_{i-1}a_{i+1}\ldots a_n \neq 0\}$ and it admits two Casimir functions: \begin{equation*}\label{damianou:Toda-Casimir} C_1 = b_1 + b_2 + \ldots + b_n \quad \quad \mathrm{and} \quad \quad C_2 = a_1a_2\ldots a_n. \end{equation*} We consider on $\mathbb{R}^{2n}$ the standard symplectic form $\omega_0 = \sum_{i=1}^n da_i \wedge db_i$, its associated Poisson tensor $\Lambda_0 = \sum_{i=1}^n\displaystyle{\frac{\partial}{\partial a_i}\wedge \frac{\partial}{\partial b_i}}$, and the corresponding volume element $\Omega = \displaystyle{\frac{\omega_0^n}{n!}}=da_1\wedge db_1\wedge \ldots \wedge da_n \wedge db_n$. 
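As an aside, the assertions just made about $\Lambda_T$ can be checked symbolically in small dimension. The following sketch (assuming SymPy; it is only a consistency check and plays no role in the construction that follows) encodes the structure matrix of $\Lambda_T$ for $n=3$, verifies that $C_1$ and $C_2$ Poisson-commute with all coordinate functions, and recovers Flaschka's equations with the sign convention $\dot{x} = \{H, x\}_{_T}$.
\begin{verbatim}
# Consistency check (SymPy) for the periodic Toda lattice, n = 3.
import sympy as sp

n = 3
a = sp.symbols(f'a1:{n+1}')
b = sp.symbols(f'b1:{n+1}')
x = list(a) + list(b)
H = sum(ai**2 + sp.Rational(1, 2)*bi**2 for ai, bi in zip(a, b))

P = sp.zeros(2*n, 2*n)                      # structure matrix of Lambda_T
for i in range(n):
    j = (i + 1) % n
    P[i, n+i], P[n+i, i] = a[i], -a[i]      # {a_i, b_i}     =  a_i
    P[i, n+j], P[n+j, i] = -a[i], a[i]      # {a_i, b_{i+1}} = -a_i

def br(f, g):
    """Poisson bracket {f, g}_T in the coordinates (a_1,...,a_n,b_1,...,b_n)."""
    return sp.expand(sum(P[r, c]*sp.diff(f, x[r])*sp.diff(g, x[c])
                         for r in range(2*n) for c in range(2*n)))

C1, C2 = sum(b), sp.Mul(*a)
assert all(br(xi, C) == 0 for xi in x for C in (C1, C2))   # Casimirs
for i in range(n):                                         # Flaschka's equations
    assert br(H, a[i]) == sp.expand(a[i]*(b[(i + 1) % n] - b[i]))
    assert br(H, b[i]) == sp.expand(2*(a[i]**2 - a[(i - 1) % n]**2))
\end{verbatim}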
The hamiltonian vector fields of $C_1$ and $C_2$ with respect to $\Lambda_0$ are \begin{equation*} X_{_{C_1}} = - \sum_{i=1}^n \frac{\partial}{\partial a_i} \quad \quad \mathrm{and} \quad \quad X_{_{C_2}} =\sum_{i=1}^n a_1\ldots a_{i-1}a_{i+1}\ldots a_n \frac{\partial}{\partial b_i}. \end{equation*} So, $D = \langle X_{_{C_1}}, X_{_{C_2}}\rangle$ and \begin{equation*} D^\circ = \big\{\sum_{i=1}^n(\alpha_i da_i + \beta_i db_i)\in \Omega^1(\mathbb{R}^{2n})\; / \; \sum_{i=1}^n\alpha_i = 0 \;\;\; \mathrm{and} \;\;\; \sum_{i=1}^n a_1\ldots a_{i-1}\beta_ia_{i+1}\ldots a_n = 0\big\}. \end{equation*} The family of $1$-forms $(\sigma_1,\ldots,\sigma_{n-1},\sigma'_1,\ldots,\sigma'_{n-1})$, \begin{equation*} \sigma_j = da_j - da_{j+1} \quad \quad \mathrm{and} \quad \quad \sigma'_{j} = a_jdb_j - a_{j+1}db_{j+1}, \quad \quad j = 1,\ldots, n-1, \end{equation*} provides, at every point $(a,b) \in \mathcal{U}$, a basis of $D^\circ_{(a,b)}$. The section of maximal rank $\sigma_{_T}$ of $\bigwedge^2D^\circ \to \mathcal{U}$, which corresponds to $\Lambda_T$, via the isomorphism $\Lambda_0^\#$, and verifies (\ref{damianou:cond-delta-sigma}), is written, in this basis, as \begin{equation*} \sigma_{_T} = \sum_{j = 1}^{n-1} \sigma_j \wedge \big(\sum_{l=j}^{n-1}\sigma'_l\big). \end{equation*} Now, we consider on $\mathbb{R}^{2n}$ the $2$-form \begin{eqnarray*} \sigma & = & \sum_{j=1}^{n-2} \sigma_j \wedge \big(\sum_{l=j+1}^{n-1}\sigma_l\big) \,+ \,\sum_{j=1}^{n-2} \sigma'_j \wedge \big(\sum_{l=j+1}^{n-1}\sigma'_l\big) \nonumber \\ & = & \sum_{j=1}^{n-2} \big[(da_j - da_{j+1}) \wedge (da_{j+1} - da_n) + (a_jdb_j - a_{j+1}db_{j+1}) \wedge (a_{j+1}db_{j+1} - a_ndb_n)\big] \nonumber \\ & = & \sum_{j=1}^{n}\big(da_j \wedge da_{j+1} + a_ja_{j+1}db_j \wedge db_{j+1}\big). \end{eqnarray*} It is a section of $\bigwedge^2D^\circ$ whose rank depends on the parity of $n$; if $n$ is odd, its rank is $2n-2$ on $\mathcal{U}$, while, if $n$ is even, its rank is $2n-4$ almost everywhere on $\mathbb{R}^{2n}$. Also, after a long computation, we can confirm that it satisfies (\ref{damianou:cond-delta-sigma}). Thus, its image via $\Lambda_0^\#$, i.e., the bivector field \begin{equation}\label{new-poisson-vol} \Lambda = \sum_{j=1}^{n}\big(a_ja_{j+1}\frac{\partial}{\partial a_j}\wedge \frac{\partial}{\partial a_{j+1}} + \frac{\partial}{\partial b_j}\wedge \frac{\partial}{\partial b_{j+1}}\big), \end{equation} defines a Poisson structure on $\mathbb{R}^{2n}$ with symplectic leaves of dimension at most $2n-2$, when $n$ is odd, that has $C_1$ and $C_2$ as Casimir functions. (When $n$ is even, $\Lambda$ has two more Casimir functions.) We remark that $(\mathbb{R}^{2n},\Lambda)$ can be viewed as the product of Poisson manifolds $(\mathbb{R}^n, \Lambda_{_V})\times (\mathbb{R}^n, \Lambda')$, where \begin{equation*} \Lambda_{_V} = \sum_{j=1}^{n}a_ja_{j+1}\frac{\partial}{\partial a_j}\wedge \frac{\partial}{\partial a_{j+1}} \quad \quad \mathrm{and} \quad \quad \Lambda'= \sum_{j=1}^{n}\frac{\partial}{\partial b_j}\wedge \frac{\partial}{\partial b_{j+1}}. \end{equation*} The Poisson tensor $\Lambda_{_V}$ is the quadratic bracket of the periodic Volterra lattice on $\mathbb{R}^n$ and it has $C_2$ as unique Casimir function, when $n$ is odd. In the following, using (\ref{damianou:bracket-Lambda-Omega}), we illustrate the explicit formul{\ae} of the brackets of $\Lambda_{_T}$ and $\Lambda$ in the special case $n=3$. 
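The same kind of symbolic check applies to the bivector field (\ref{new-poisson-vol}); the sketch below (again assuming SymPy, as a consistency check only) verifies, for small $n$, the Jacobi identity, the Casimir property of $C_1$ and $C_2$, and the parity-dependent generic rank stated above.
\begin{verbatim}
# Consistency check (SymPy) of the bivector field (new-poisson-vol).
import itertools
import sympy as sp

def check(n):
    a = sp.symbols(f'a1:{n+1}')
    b = sp.symbols(f'b1:{n+1}')
    x = list(a) + list(b)
    P = sp.zeros(2*n, 2*n)                        # structure matrix of Lambda
    for j in range(n):
        k = (j + 1) % n
        P[j, k] += a[j]*a[k]; P[k, j] -= a[j]*a[k]    # {a_j, a_{j+1}} = a_j a_{j+1}
        P[n+j, n+k] += 1;     P[n+k, n+j] -= 1        # {b_j, b_{j+1}} = 1
    def br(f, g):
        return sp.expand(sum(P[r, c]*sp.diff(f, x[r])*sp.diff(g, x[c])
                             for r in range(2*n) for c in range(2*n)))
    C1, C2 = sum(b), sp.Mul(*a)
    assert all(br(xi, C) == 0 for xi in x for C in (C1, C2))      # Casimirs
    for f, g, h in itertools.combinations(x, 3):                  # Jacobi identity
        assert sp.expand(br(f, br(g, h)) + br(g, br(h, f)) + br(h, br(f, g))) == 0
    generic = {ai: i + 2 for i, ai in enumerate(a)}   # generic point, a_i nonzero
    return P.subs(generic).rank()

print([check(n) for n in (3, 4, 5)])   # expected ranks: [4, 4, 8], i.e. 2n-2 / 2n-4
\end{verbatim}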
We have $C_1 = b_1 + b_2 + b_3$, $C_2 = a_1a_2a_3$, $k = 2$, $\Lambda_0 = \sum_{i=1}^3\displaystyle{\frac{\partial}{\partial a_i}\wedge \frac{\partial}{\partial b_i}}$, and $\Omega = da_1\wedge db_1 \wedge da_2 \wedge db_2 \wedge da_3 \wedge db_3$. Consequently, $f = \langle dC_1\wedge dC_2, \,\Lambda_0\rangle = -(a_1a_2 + a_2a_3 +a_1a_3)$, which is a nonvanishing function on $\mathcal{U}$. For the periodic Toda lattice of $3$ particles, we have $\sigma_{_T} = (da_1 - da_2)\wedge (a_1db_1 -a_3db_3) + (da_2 - da_3)\wedge (a_2db_2 - a_3db_3)$, $g_{_T} = i_{\Lambda_0}\sigma_{_T} = -(a_1 + a_2 +a_3)$ and \begin{eqnarray*} \Phi_{_T} & = & - \frac{1}{f}(\sigma_{_T} + g_{_T}\omega_0)\wedge dC_1 \wedge dC_2 \\ & = & -a_1 db_1 \wedge da_2 \wedge da_3 \wedge db_3 + a_1 da_2 \wedge db_2 \wedge da_3 \wedge db_3 + a_2 da_1 \wedge db_1 \wedge da_3 \wedge db_3 \\ & & -a_2 da_1 \wedge db_1 \wedge db_2 \wedge da_3 + a_3 da_1 \wedge da_2 \wedge db_2 \wedge db_3 + a_3da_1 \wedge db_1 \wedge da_2 \wedge db_2. \end{eqnarray*} Thus, \begin{equation*} \begin{array}{ll} \{a_1,b_1\}_{_T}\Omega = da_1 \wedge db_1 \wedge \Phi_{_T} = a_1 \Omega, & \{a_1,b_2\}_{_T}\Omega = da_1 \wedge db_2 \wedge \Phi_{_T} = - a_1 \Omega, \\ \\ \{a_2,b_2\}_{_T}\Omega = da_2 \wedge db_2 \wedge \Phi_{_T} = a_2 \Omega, & \{a_2,b_3\}_{_T}\Omega = da_2 \wedge db_3 \wedge \Phi_{_T} = - a_2 \Omega,\\ \\ \{a_3,b_3\}_{_T}\Omega = da_3 \wedge db_3 \wedge \Phi_{_T} = a_3 \Omega, & \{a_3,b_1\}_{_T}\Omega = da_3 \wedge db_1 \wedge \Phi_{_T} = - a_3 \Omega, \end{array} \end{equation*} and all other brackets are zero. For the Poisson structure (\ref{new-poisson-vol}) on $\mathbb{R}^6$, we have $\sigma = (da_1 - da_2)\wedge (da_2 - da_3) + (a_1db_1 -a_2db_2)\wedge (a_2db_2 -a_3db_3)$, $g = i_{\Lambda_0}\sigma = 0$ and \begin{eqnarray*} \Phi & = & - \frac{1}{f}\sigma \wedge dC_1 \wedge dC_2 \\ & = & -a_1a_2db_1 \wedge db_2 \wedge da_3 \wedge db_3 + a_1a_3 db_1 \wedge da_2 \wedge db_2 \wedge db_3 - a_2a_3 da_1 \wedge db_1 \wedge db_2 \wedge db_3 \\ & & - da_1 \wedge db_1 \wedge da_2 \wedge da_3 - da_1 \wedge da_2 \wedge da_3 \wedge db_3 + da_1 \wedge da_2 \wedge db_2 \wedge da_3. \end{eqnarray*} Thus, \begin{equation*} \begin{array}{ll} \{a_1,a_2\}\Omega = da_1 \wedge da_2 \wedge \Phi = a_1a_2 \Omega, & \{a_1,a_3\}\Omega = da_1 \wedge da_3 \wedge \Phi = - a_1a_3 \Omega,\\ \\ \{a_2,a_3\}\Omega = da_2 \wedge da_3 \wedge \Phi = a_2a_3 \Omega, & \{b_1,b_2\}\Omega = db_1 \wedge db_2 \wedge \Phi = \Omega,\\ \\ \{b_1,b_3\}\Omega = db_1 \wedge db_3 \wedge \Phi = - \Omega, & \{b_2,b_3\}\Omega = db_2 \wedge db_3 \wedge \Phi = \Omega, \end{array} \end{equation*} and all other brackets are zero. \subsection{A Lie-Poisson bracket on $\mathbf{\mathfrak{gl}(3,\mathbb{R})}$} On the $9$-dimensional space $\mathfrak{gl}(3,\mathbb{R})$ of $3\times 3$ matrices \begin{equation*} \left( \begin{array}{ccc} x_1 & z_2 & y_3 \\ y_1 & x_2 & z_3 \\ z_1 & y_2 & x_3 \end{array} \right), \end{equation*} which is isomorphic to $\mathbb{R}^9$, we consider the functions \begin{equation*} C_1(x,y,z) = x_1 + x_2 + x_3, \quad C_2(x,y,z) = y_1z_2 + y_2z_3 + y_3z_1 \quad \mathrm{and} \quad C_3(x,y,z) = z_1z_2z_3. \end{equation*} Using Theorem \ref{THEOREM-ODD}, we are able to construct a linear Poisson structure $\Lambda$ on $\mathfrak{gl}(3,\mathbb{R})$, with symplectic leaves of dimension at most $6$, having $C_1$, $C_2$ and $C_3$ as Casimir functions. 
For this, we consider on $\mathfrak{gl}(3,\mathbb{R})\cong \mathbb{R}^9$ the cosymplectic structure $(\vartheta_0,\Theta_0)$, \begin{equation*} \vartheta_0 = dz_3 \quad \mathrm{and} \quad \Theta_0 = dx_1 \wedge dy_1 + dx_2\wedge dy_2 + dx_3 \wedge dy_3 + dz_1 \wedge dz_2, \end{equation*} whose corresponding transitive Jacobi structure $(\Lambda_0,E_0)$ is: \begin{equation*} \Lambda_0 = \frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial y_1} + \frac{\partial}{\partial x_2}\wedge \frac{\partial}{\partial y_2} + \frac{\partial}{\partial x_3}\wedge \frac{\partial}{\partial y_3} + \frac{\partial}{\partial z_1}\wedge \frac{\partial}{\partial z_2} \quad \mathrm{and} \quad E_0 = \frac{\partial}{\partial z_3}. \end{equation*} Clearly, \begin{equation*} f = \langle dC_1\wedge dC_2\wedge dC_3,\, E_0\wedge\Lambda_0\rangle = -z_1z_2^2 - z_1^2z_2 - z_1z_2z_3 \end{equation*} is nonzero on the open and dense subset $\mathcal{U} = \{(x,y,z)\in \mathbb{R}^9 \,/\, z_1z_2^2 + z_1^2z_2 + z_1z_2z_3 \neq 0\}$ of $\mathfrak{gl}(3,\mathbb{R})\cong \mathbb{R}^9$ and \begin{equation*} \Omega = \vartheta_0 \wedge \frac{\Theta_0^4}{4!} = dx_1\wedge dy_1\wedge dx_2\wedge dy_2 \wedge dx_3\wedge dy_3 \wedge dz_1\wedge dz_2 \wedge dz_3 \end{equation*} is a volume form of $\mathfrak{gl}(3,\mathbb{R})$. Furthermore, we consider on $\mathfrak{gl}(3,\mathbb{R})$ the pair of semi-basic forms $(\sigma, \tau)$, \begin{eqnarray*} \sigma & = & -z_1dx_1\wedge dx_2 -z_2 dx_2\wedge dx_3 + z_3 dx_1 \wedge dx_3 -y_1dx_1\wedge dy_1 + y_1dx_1\wedge dy_2 \\ & & - y_2dx_2 \wedge dy_2 + y_2dx_2\wedge dy_3 -y_3dx_3\wedge dy_3 + y_3dx_3\wedge dy_1 \\ & & - z_2dy_1\wedge dz_1 -z_1dy_1\wedge dz_2 + z_2 dy_2\wedge dz_1 + z_1 dy_3\wedge dz_2 \end{eqnarray*} and \begin{equation*} \tau = -z_3dy_2 + z_3 dy_3, \end{equation*} which has the properties (ii)-(iii) and verifies the system (\ref{cond-sigma-tau}). Thus, the bracket $\{\cdot,\cdot \}$ on $C^\infty(\mathfrak{gl}(3,\mathbb{R}))$ given by (\ref{br-odd}) defines a Poisson structure $\Lambda$ on $\mathfrak{gl}(3,\mathbb{R})$. We have $g = i_{\Lambda_0}\sigma = y_1 + y_2 + y_3$ and \begin{eqnarray*} \Phi & = & -\frac{1}{f}(\sigma + \frac{g}{2}\Theta_0)\wedge \Theta_0 \wedge dC_1 \wedge dC_2 \wedge dC_3 \\ & = & z_1dx_1\wedge dy_1\wedge dx_2\wedge dy_2 \wedge dy_3 \wedge dz_2 \wedge dz_3 -z_1 dy_1\wedge dx_2\wedge dy_2 \wedge dx_3\wedge dy_3 \wedge dz_2 \wedge dz_3 \\ &- & z_1 dx_1\wedge dx_2\wedge dx_3\wedge dy_3 \wedge dz_1\wedge dz_2 \wedge dz_3 - z_2 dx_1 \wedge dy_1 \wedge dx_2 \wedge dx_3 \wedge dz_1 \wedge dz_2 \wedge dz_3 \\ & -& z_2 dy_1\wedge dx_2\wedge dy_2\wedge dx_3\wedge dy_3\wedge dz_1\wedge dz_3 +z_2 dy_1\wedge dz_1\wedge dx_3\wedge dy_3\wedge dz_3\wedge dy_2\wedge dx_1 \\ & -& y_1 dx_3\wedge dy_3\wedge dz_1\wedge dz_2\wedge dz_3\wedge dy_2\wedge dx_2 -y_3 dy_1\wedge dz_1\wedge dx_2\wedge dy_2\wedge dz_3\wedge dz_2\wedge dx_3 \\ &- & y_1 dx_1\wedge dy_2\wedge dz_1\wedge dz_2\wedge dz_3\wedge dy_3\wedge dx_3 -z_3 dy_2\wedge dz_1\wedge dx_1\wedge dy_1\wedge dz_2\wedge dy_3\wedge dx_2 \\ &- & y_2 dx_2\wedge dy_3\wedge dz_1\wedge dz_2\wedge dz_3\wedge dy_1\wedge dx_1+z_3 dx_1\wedge dx_2\wedge dz_1\wedge dz_2\wedge dz_3\wedge dy_2\wedge dx_3\\ &- & y_3 dy_1\wedge dz_1\wedge dx_2\wedge dy_2\wedge dz_3\wedge dz_2\wedge dx_1-z_3 dy_2\wedge dz_1\wedge dx_3\wedge dy_3\wedge dz_2\wedge dy_1\wedge dx_1\\ &- & y_2 dy_3\wedge dz_2\wedge dx_1\wedge dy_1\wedge dz_1\wedge dz_3\wedge dx_3. 
\end{eqnarray*} So, \begin{equation*} \begin{array}{ll} \{x_1,y_1\}\Omega = dx_1\wedge dy_1\wedge \Phi = -y_1\Omega, & \{x_1,y_3\}\Omega = dx_1\wedge dy_3\wedge \Phi = y_3\Omega,\\ \\ \{x_1,z_1\}\Omega = dx_1\wedge dz_1\wedge \Phi = -z_1\Omega, & \{x_1,z_2\}\Omega = dx_1\wedge dz_2\wedge \Phi = z_2\Omega,\\ \\ \{x_2,y_1\}\Omega = dx_2\wedge dy_1\wedge \Phi = y_1\Omega, & \{x_2,y_2\}\Omega = dx_2\wedge dy_2\wedge \Phi = -y_2\Omega,\\ \\ \{x_2,z_2\}\Omega = dx_2\wedge dz_2\wedge \Phi = -z_2\Omega, & \{x_2,z_3\}\Omega = dx_2\wedge dz_3\wedge \Phi = z_3\Omega,\\ \\ \{x_3,y_2\}\Omega = dx_3\wedge dy_2\wedge \Phi = y_2\Omega, & \{x_3,y_3\}\Omega = dx_3\wedge dy_3\wedge \Phi = -y_3\Omega,\\ \\ \{x_3,z_1\}\Omega = dx_3\wedge dz_1\wedge \Phi = z_1\Omega, & \{x_3,z_3\}\Omega = dx_3\wedge dz_3\wedge \Phi = -z_3\Omega,\\ \\ \{y_1,y_2\}\Omega = dy_1\wedge dy_2\wedge \Phi = -z_1\Omega, & \{y_1,y_3\}\Omega = dy_1\wedge dy_3\wedge \Phi = z_3\Omega,\\ \\ \{y_2,y_3\}\Omega = dy_2\wedge dy_3\wedge \Phi = -z_2\Omega,& \end{array} \end{equation*} and all other brackets are zero. The Lie-Poisson bracket in this example coincides with the one of the bi--Hamiltonian pair formulated by Meucci \cite{meu} for $\mathrm{Toda}_3$ system, a dynamical system studied by Kupershmidt in \cite{kup} as a reduction of the KP hierarchy. Meucci derives this structure by a suitable restriction of a related pair of Lie algebroids on the set of maps from the cyclic group $\mathbb{Z}_3$ to $\mathrm{GL}(3, \mathbb{R})$. Explicit formul{\ae} for the above bracket can also be found in \cite{dam-magri} where the $\mathrm{Toda}_3$ system is reduced to the phase space of the full Kostant--Toda lattice. \noindent Pantelis A. DAMIANOU \\ \emph{Department of Mathematics and Statistics}, \emph{University of Cyprus} \\ \emph{P.O. Box 20537, 1678 Nicosia, Cyprus} \\ \noindent\emph{E-mail: [email protected]} \\ \\ \noindent Fani PETALIDOU \\ \emph{Department of Mathematics and Statistics}, \emph{University of Cyprus} \\ \emph{P.O. Box 20537, 1678 Nicosia, Cyprus} \\ \noindent\emph{E-mail: [email protected]} \end{document}
\begin{document} \title{Time-Frequency Analysis and Coorbit Spaces of Operators} \begin{abstract} We introduce an operator-valued Short-Time Fourier Transform for certain classes of operators with operator windows, and show that the transform acts in an analogous way to the Short-Time Fourier Transform for functions, in particular giving rise to a family of vector-valued reproducing kernel Banach spaces, the so-called coorbit spaces, as spaces of operators. As a result of this structure the operators generating equivalent norms on the function modulation spaces are fully classified. We show that these operator spaces have the same atomic decomposition properties as the function spaces, and use this to give a characterisation of the spaces using localisation operators. \end{abstract} \section{Introduction} In time-frequency analysis, the modulation spaces $M^{p,q}_m(\mathbb{R}^d)$, first introduced by Feichtinger in 1983 \cite{feich83}, play a central role; they define spaces of functions with certain desirable time-frequency decay. In particular the Feichtinger algebra, $M^1(\mathbb{R}^d)$ or $\mathbf{S}_0(\mathbb{R}^d)$ \cite{feich81} \cite{feich79}, consists of functions which are well concentrated in the time-frequency sense, and which are for many purposes the ideal atoms for Gabor analysis. The modulation spaces are usually defined in terms of the Short-Time Fourier Transform (STFT), namely as the spaces \begin{align*} M^{p,q}_m(\mathbb{R}^{d}) := \{\psi\in\mathcal{S}'(\mathbb{R}^d): \Big( \int_{\mathbb{R}^d}\Big(\int_{\mathbb{R}^d} |V_{\varphi_0} \psi(z)|^p m(x,\omega)^p dx\Big)^{q/p}d\omega\Big)^{1/q} < \infty \}, \end{align*} where $\varphi_0$ is the Gaussian. Modulation spaces and their various generalisations have been studied extensively, and surveys and monographs can be found in \cite{jakob18} \cite{benyimodspaces} \cite{grochenigtfa}. The properties and utility of these function spaces are too broad to hope to cover, but of particular interest to our work is that these spaces are the coorbit spaces \cite{feich89i} \cite{feich89ii} of the projective unitary representation of the reduced Weyl-Heisenberg group, and as such have (among others) the following properties: \begin{enumerate} \item All $g\in L^2(\mathbb{R}^d)$ that satisfy the condition $V_g g \in L^1_v(\mathbb{R}^{2d})$, generate the same modulation spaces $M^{p,q}_m(\mathbb{R}^{d})$ as windows, and their norms are equivalent. \item (\textit{Correspondence Principle}) Given an atom $g$ as above, there is an isometric isomorphism $M^{p,q}_m(\mathbb{R}^{d}) \cong \{F\in L^{p,q}_m(\mathbb{R}^{2d}): F = F \natural V_g g\}$ (where $\natural$ is the twisted convolution discussed below), given by $V_g$. Note that the latter are reproducing kernel Banach spaces. \end{enumerate} There is a vast body of contributions to the theory of coorbit spaces, e.g. \cite{bagr17,chol11,hovo21}. In this work we examine spaces of operators exhibiting similar properties, by introducing an STFT with operator window and argument, returning an operator-valued function on phase space. One motivation comes from \cite{Dorf21}, where local structures of a data set $\mathcal{D}=\{f_1,...,f_N\}$ were identified by mapping the data points, functions $f_i$ on $\mathbb{R}^d$, to rank-one operators $f_i\otimes f_i$, and constructing the data operator $S_{\mathcal{D}}=\sum_{i=1}^N f_i\otimes f_i$. Hence, it would be of interest to compare two data sets $\mathcal{D}$ and $\mathcal{D}^\prime$ via their respective data operators $S_\mathcal{D}$ and $S_{\mathcal{D}^\prime}$. 
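In finite dimensions the data operator has a transparent matrix realisation, which is convenient for experimentation. The following sketch (assuming NumPy, and using randomly generated vectors in place of an actual data set from \cite{Dorf21}) forms $S_{\mathcal{D}}$ as a sum of outer products and confirms that it is positive semi-definite of rank at most $N$.
\begin{verbatim}
# Finite-dimensional sketch (NumPy) of the data operator S_D = sum_i f_i (x) f_i.
import numpy as np

rng = np.random.default_rng(0)
d, N = 32, 10
data = rng.standard_normal((N, d)) + 1j*rng.standard_normal((N, d))

S_D = sum(np.outer(f, f.conj()) for f in data)   # matrix of sum_i <., f_i> f_i
eigenvalues = np.linalg.eigvalsh(S_D)            # S_D is Hermitian
assert np.all(eigenvalues > -1e-10)              # positive semi-definite
assert np.linalg.matrix_rank(S_D) <= N           # rank at most N
\end{verbatim}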
Another source of inspiration is the work \cite{keyl16}, where operator analogues of the Schwartz class of functions and of the space of tempered distributions have been introduced and their basic theory has been developed along the lines of the function/distribution case. The concept of an STFT for operators is not a new one. In \cite{balasz2019}, the authors consider the wavelet transform for the representation $\pi(w)\otimes\pi(z)$ on $\mathcal{HS}=L^2(\mathbb{R}^d)\otimes L^2(\mathbb{R}^d)$ to examine kernel theorems for coorbit spaces. This entails using the standard scalar-valued construction for the coorbit spaces defined by the wavelet transform, giving spaces different from those of our approach. Nor, on the other hand, are vector-valued reproducing kernel Hilbert spaces a new concept in time-frequency analysis. In \cite{balan00} and \cite{abreau10} an STFT is constructed for vectors of functions, which results in a direct sum of Gabor spaces. Our work differs from these in that windows, arguments and resulting output of the operator STFT are all operators. In \cite{skrett22}, the author introduced an equivalent notion of an STFT with an operator window, given by \begin{align} \label{functionstft} \mathfrak{V}_S \psi(z) := S\pi(z)^* \psi \end{align} for some appropriate operator $S$ and function $\psi$. In particular, the question was considered of which operators would define equivalent norms on $M^{p,q}_m(\mathbb{R}^{d})$ under this STFT, that is, for which operators \begin{align*} \|\psi\|_{M^{p,q}_m} \asymp \|S\pi(z)^*\psi\|_{L^{p,q}_m(\mathbb{R}^{2d};L^2)}. \end{align*} In further work by Guo and Zhao \cite{Guo22}, several conditions equivalent to this norm equivalence were given. In both works a class of operators with adjoints in a certain class of nuclear operators was discussed, along with the open question in the latter of whether these operators exhausted all possible operators generating equivalent norms on $M^{p,q}_m(\mathbb{R}^{d})$. In this work we present an extension of the operator window STFT \eqref{functionstft}, which acts on operators instead of functions. We initially define such a transform for $S,T\in\mathcal{HS}$ in the following manner: \begin{definition}{(Operator STFT)} For $S,T\in\mathcal{HS}$, the STFT of $T$ with window $S$ is given by \begin{align} \mathfrak{V}_S T(z) := S^*\pi(z)^* T. \end{align} \end{definition} Note that in the case of rank-one operators $S=g\otimes e$ and $T=f\otimes e$ for $e,f,g\in L^2(\mathbb{R}^d)$ the operator STFT becomes $V_gf(z)e\otimes e$, which is the STFT of functions embedded into the space of Hilbert-Schmidt operator-valued functions. We examine the behaviour of this transform, e.g. Moyal's identity, paying particular attention to the spaces it produces as images. In this respect the first result of this paper demonstrates a parallel to the STFT of functions, regarding the reproducing structure of the image of the Hilbert space of Hilbert-Schmidt operators: \begin{theorem} For any Hilbert-Schmidt operator $S$, the space defined by \begin{align*} \mathfrak{V}_S (\mathcal{HS}) := \{\mathfrak{V}_S T(z): T\in \mathcal{HS}\} \end{align*} is a vector-valued uniform reproducing kernel Hilbert space as a subspace of the Bochner-Lebesgue space $L^2(\mathbb{R}^{2d};\mathcal{HS})$. 
\end{theorem} Motivated by this, we extend the reproducing properties of this space to the ``coorbit spaces'', and consider the spaces $\mathfrak{A}_v := \{S\in \mathcal{HS}: \mathfrak{V}_S S \in L^1_v(\mathbb{R}^{2d};\mathcal{HS})\}$, and $\mathfrak{M}^{p,q}_m := \{T\in \mathfrak{S}': \mathfrak{V}_{S} T \in L^{p,q}_m(\mathbb{R}^{2d};\mathcal{HS})\}$, where $\mathfrak{S}'$ are operators with Weyl symbols in $\mathscr{S}^\prime$ and $S\in\mathfrak{A}_v$, to derive the result \begin{theorem} For any $S\in \mathfrak{M}^1_v$, we have an isometric isomorphism \begin{align*} \mathfrak{M}^{p,q}_m \cong \{\Psi\in L^{p,q}_m(\mathbb{R}^{2d};\mathcal{HS}): \Psi = \Psi \natural \mathfrak{V}_S S\} \end{align*} under the mapping \begin{align*} T \mapsto \mathfrak{V}_S T. \end{align*} \end{theorem} The twisted convolution $\natural$ will be defined in \cref{rkhs}. Furthermore, for all $S\in\mathfrak{A}_v$ the resulting spaces coincide, and the associated norms are equivalent. The dual space of $\mathfrak{M}^{p,q}_m$ is $\mathfrak{M}^{p',q'}_{1/m}$, where $\frac{1}{p}+\frac{1}{p'}=1$, $\frac{1}{q}+\frac{1}{q'}=1$ with the usual adjustment for $p,q=1,\infty$. As a corollary of the coorbit structure and independence of windows, we characterise operators satisfying the equivalent norm condition: \begin{corollary} The operators which define equivalent norms on the spaces $M^{p,q}_m(\mathbb{R}^{d})$ by \begin{align*} \|S^*\pi(z)^*\psi\|_{L^{p,q}_m(\mathbb{R}^{2d};L^2(\mathbb{R}^d))} \end{align*} for every $1\leq p,q \leq \infty$ and $v$-moderate $m$, are precisely the admissible operators \begin{align*} \mathfrak{A}_v := \{S: \mathfrak{V}_S S \in L^1_v(\mathbb{R}^{2d};\mathcal{HS})\}. \end{align*} \end{corollary} We finally consider the atomic decomposition of operators in the spaces $\mathfrak{M}^{p,q}_m$, which follows from the same arguments as the function case given the coorbit structure. Using this we can characterise the spaces using localisation operators: \begin{corollary} Let $\varphi\in L^2(\mathbb{R}^d)$ be non-zero and $h\in L^1_v(\mathbb{R}^{2d})$ be some non-negative symbol satisfying \begin{align*} A \leq \sum_{\lambda\in\Lambda} h(z-\lambda) \leq B \end{align*} for positive constants $A,B$, and almost all $z\in\mathbb{R}^{2d}$. Then for every $v$-moderate weight $m$ and $1\leq p < \infty$ the operator $T\in\mathfrak{M}^{\infty}_{1/v}$ belongs to $\mathfrak{M}^{p,q}_m$ if and only if \begin{align*} \big\{ A_{\overline{h}}^{\varphi} \pi(\lambda)^* T \big\}_{\lambda\in\Lambda} \in l^{p,q}_m(\Lambda;\mathcal{HS}), \end{align*} where $\Lambda = \alpha\mathbb{Z}^{d} \times \beta\mathbb{Z}^{d}$ is some full rank lattice. \end{corollary} \section{Preliminaries} \subsection{Time-Frequency Analysis Basics} While coorbit spaces are defined in general for integrable representations of locally compact groups, modulation spaces of functions and the spaces discussed in this work arise from the particular case of the time-frequency shifts $\pi(z)$, the projective unitary representation of the reduced Weyl-Heisenberg group on the Hilbert space $L^2(\mathbb{R}^d)$. Such shifts can be defined as the composition of the translation operator $T_x:f(t)\mapsto f(t-x)$, and the modulation operator $M_{\omega}:f(t)\mapsto e^{2\pi i \omega t}f(t)$, by the identity \begin{align*} \pi(z) = M_{\omega}T_x \end{align*} where $z=(x,\omega)\in \mathbb{R}^{2d}$. 
Direct calculations show that $\pi(z)$ is unitary on $L^2(\mathbb{R}^d)$, and that we have \begin{align*} \pi(z)\pi(z') &= e^{-2\pi i \omega' x}\pi(z+z') \\ \pi(z)^* &= e^{-2\pi i x \omega}\pi(-z). \end{align*} The Short-Time Fourier Transform (STFT) for functions is then defined, for two functions $f,g\in L^2(\mathbb{R}^d)$, by \begin{align} V_g f(z) := \langle f, \pi(z) g\rangle_{L^2}. \end{align} The window function $g$ is usually chosen to have compact support, or be concentrated around the origin, such as in the case of the normalised Gaussian $\varphi_0(t)=2^{d/4}e^{-\pi t^2}$. For $f,g\in L^2(\mathbb{R}^{d})$, $V_g f$ is a uniformly continuous function in $L^2(\mathbb{R}^{2d})$, which will be instructive when considering reproducing kernel Hilbert spaces later. One has for the STFT \textit{Moyal's Identity} (see for example Theorem 3.2.1 of \cite{grochenigtfa}), giving an understanding of the basic properties of the STFT in terms of its window: \begin{lemma}{(Moyal's Identity)} Given functions $f_1,f_2,g_1,g_2\in L^2(\mathbb{R}^d)$, we have $V_{g_1} f_1,\,V_{g_2} f_2\in L^2(\mathbb{R}^{2d})$, and in addition: \begin{align*} \langle V_{g_1} f_1, V_{g_2} f_2 \rangle_{L^2(\mathbb{R}^{2d})} = \langle f_1, f_2 \rangle_{L^2(\mathbb{R}^d)} \overline{\langle g_1, g_2 \rangle_{L^2(\mathbb{R}^d)}}. \end{align*} \end{lemma} As a direct consequence, we have that for any $g \in L^2(\mathbb{R}^d)$ such that $\|g\|_{L^2}=1$, the map $V_g: L^2(\mathbb{R}^d)\to L^2(\mathbb{R}^{2d})$ is an isometry. As such, we can consider the inverse mapping. Rearranging Moyal's identity shows the reconstruction formula \begin{align} f = \int_{\mathbb{R}^{2d}} V_g f(z) \pi(z)g\, dz, \end{align} for any $g\in L^2(\mathbb{R}^d)$ with $\|g\|=1$. A direct calculation then shows that the adjoint $V_g^*$ is given by \begin{align} V_g^*(F) := \int_{\mathbb{R}^{2d}} F(z) \pi(z)g\, dz, \end{align} where the integral can be interpreted in the weak sense, and so from the reconstruction formula \begin{align*} V_g^* V_g = I_{L^2(\mathbb{R}^d)}. \end{align*} \subsection{Weight functions and mixed-norm spaces} We begin by defining a sub-multiplicative weight $v$ as a non-negative, locally integrable function on phase space $\mathbb{R}^{2d}$ satisfying the condition \begin{align*} v(z_1+z_2) \leq v(z_1)v(z_2) \end{align*} for all $z_1,z_2\in\mathbb{R}^{2d}$. As a direct result, $v(0)\geq 1$. A $v$-moderate weight $m$ is then a non-negative, locally integrable function on phase space such that \begin{align*} m(z_1 + z_2) \leq v(z_1)m(z_2) \end{align*} for all $z_1,z_2\in\mathbb{R}^{2d}$. As a particular consequence, we have for such a $v,m$ that \begin{align*} \frac{1}{C_{v,m}v(z)} \leq m(z) \leq C_{v,m}v(z). \end{align*} In this work we consider weights of at most polynomial growth. We define the weighted, mixed-norm space $L^{p,q}_m(\mathbb{R}^{2d})$, for $1\leq p,q < \infty$, as the functions for which the norm \begin{align*} \|F\|_{L^{p,q}_m} := \Big(\int_{\mathbb{R}^d}\Big(\int_{\mathbb{R}^d} |F(x,\omega)|^p m(x,\omega)^p\, dx\Big)^{q/p}\, d\omega \Big)^{1/q} \end{align*} is finite. In the case where $p$ or $q$ is infinite, we replace the corresponding integral with an essential supremum. For such spaces we have the duality $(L^{p,q}_m(\mathbb{R}^{2d}))' = L^{p',q'}_{1/m}(\mathbb{R}^{2d})$, where $\frac{1}{p}+\frac{1}{p'} = 1$, $\frac{1}{q}+\frac{1}{q'} = 1$. Further details on weights and mixed-norm spaces can be found in Chapter 11 of \cite{grochenigtfa}. 
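The STFT and Moyal's identity above have a finite-dimensional analogue on $\mathbb{C}^N$, built from cyclic translations and modulations, which is convenient for quick numerical experiments. The following sketch (assuming NumPy; the normalisation $\sum_{x,\omega}|V_g f(x,\omega)|^2 = N\|f\|_2^2\|g\|_2^2$ is particular to this discrete model and is not used elsewhere) illustrates the construction.
\begin{verbatim}
# Finite model (NumPy) of V_g f(x, w) = <f, M_w T_x g> on C^N and a check of
# the discrete analogue of Moyal's identity.
import numpy as np

N = 64
rng = np.random.default_rng(1)
f = rng.standard_normal(N) + 1j*rng.standard_normal(N)
t = np.arange(N)
g = np.exp(-np.pi*((t - N/2)**2)/N)          # discretised Gaussian window

def stft(f, g):
    # V[x, w] = sum_t f[t] * conj(g[t - x]) * exp(-2 pi i w t / N)
    return np.array([np.fft.fft(f*np.conj(np.roll(g, x))) for x in range(N)])

V = stft(f, g)
lhs = np.sum(np.abs(V)**2)
rhs = N*np.linalg.norm(f)**2*np.linalg.norm(g)**2
assert np.isclose(lhs, rhs)                  # discrete Moyal identity
\end{verbatim}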
In this work we consider discretisation over the full rank lattice $\Lambda = \alpha\mathbb{Z}^{d}\times \beta\mathbb{Z}^{d}$. An arbitrary lattice $\Lambda = A\mathbb{Z}^{2d}$, $A\in GL(2d;\mathbb{R})$ can also be used, but the notion of mixed-norm becomes less clear. We define the mixed-norm weighted sequence space $l^{p,q}_m(\Lambda;\mathcal{HS})$ as the sequences $a_{(k,l)}$ such that \begin{align*} \|a\|_{l^{p,q}_m(\Lambda;\mathcal{HS})} := \Big(\sum_{l\in\mathbb{Z}^d} \big(\sum_{k\in\mathbb{Z}^d} m(\alpha k,\beta l)^p \|a_{\alpha k,\beta l}\|_{\mathcal{HS}}^p \big)^{q/p} \Big)^{1/q} < \infty. \end{align*} The Wiener Amalgam spaces introduced in \cite{feich83i} provide the required framework for sampling estimates on the lattice. To that end we define for a given function $\Psi: \mathbb{R}^{2d}\to\mathcal{HS}$ the sequence \begin{align*} a^{\Psi}_{(k,l)} = \Big(\esssup_{x,\omega\in [0,1]^d} \|\Psi(x+k,\omega+l)\|_{\mathcal{HS}}\Big)_{(k,l)}. \end{align*} \begin{definition}\label{Wieneramalg} Let $1\leq p,q \leq \infty$ and $m$ be some weight function. The Wiener Amalgam space $W(L^{p,q}_m(\mathbb{R}^{2d};\mathcal{HS}))$ consists of all functions $\Psi: \mathbb{R}^{2d}\to\mathcal{HS}$ such that \begin{align*} \|a^{\Psi}_{(k,l)}\|_{l^{p,q}_m} < \infty, \end{align*} with the norm $\|\Psi\|_{W(L^{p,q}_m(\mathbb{R}^{2d};\mathcal{HS}))} := \|a^{\Psi}_{(k,l)}\|_{l^{p,q}_m}$. \end{definition} One feature of the Wiener Amalgam spaces we use (see for example Proposition 11.1.4 of \cite{grochenigtfa}) is the following: \begin{proposition}\label{wieneramalg} Let $\Lambda = \alpha\mathbb{Z}^{d}\times \beta\mathbb{Z}^{d}$ and $\Psi\in W(L^{p,q}_m(\mathbb{R}^{2d};\mathcal{HS}))$ be continuous. Then \begin{align*} \|\Psi|_{\Lambda}\|_{l^{p,q}_{\Tilde{m}}(\Lambda;\mathcal{HS})} \leq c\|\Psi\|_{W(L^{p,q}_m(\mathbb{R}^{2d};\mathcal{HS}))} \end{align*} where $\Tilde{m}(k,l)=m(\alpha k, \beta l)$, and $c$ depends on the lattice $\Lambda$. \end{proposition} While stated for scalar-valued functions in \cite{grochenigtfa}, the same argument gives the vector-valued case. \subsection{Reproducing Kernel Hilbert Spaces}\label{rkhs} \subsubsection{Vector-Valued RKHS} We recall definitions and identities in this section in terms of \textit{vector-valued} reproducing kernel Hilbert spaces, following the formalism of Paulsen and Raghupathi in chapter 6 of \cite{paulsenrkhs}. The familiar scalar case follows simply by taking the vector space in which the functions take their values to be $\mathbb{C}$. \begin{definition}{} Let $\mathcal{C}$ be a Hilbert space, and $X$ some set. We denote by $\mathcal{F}(X, \mathcal{C})$ the vector space of $\mathcal{C}$-valued functions under the usual pointwise sum and scalar multiplication. A subspace $\mathcal{H}\subseteq \mathcal{F}$ is a $\mathcal{C}$-valued reproducing kernel Hilbert space (RKHS) if it is a Hilbert space, and for every $x\in X$, the evaluation map $E_x:f\mapsto f(x)$ is a bounded operator. If the set $\{E_x\}_{x\in X}$ is uniformly bounded in norm, then $\mathcal{H}$ is referred to as \textit{uniform}. \end{definition} Since $\mathcal{H}$ is a Hilbert space, it follows from Riesz' representation theorem that for each $E_x$, there is some $k_x\in \mathcal{H}$ such that $E_x(f) = \langle f, k_x\rangle_{\mathcal{H}}$. 
It follows from the definition that \begin{align*} |f(x)-g(x)| = |\langle f, k_x\rangle_{\mathcal{H}} - \langle g, k_x\rangle_{\mathcal{H}}| = |\langle f - g, k_x\rangle_{\mathcal{H}}| \leq \|f-g\|_{\mathcal{H}} \|k_x\|_{\mathcal{H}}, \end{align*} so unlike in the general Hilbert space setting, we have pointwise bounds in terms of norms in the RKHS setting. The \textit{kernel function} $K:X\times X \to \mathcal{L}(\mathcal{C})$ is defined as $K(x,y) = E_x E_y^*$, and has the property $K(x,y) = K(y,x)^*$. The kernel function uniquely defines the RKHS, that is to say given two RKHS' $\mathcal{H}_1,\mathcal{H}_2$, if $K_1(x,y)=K_2(x,y)$ then $\mathcal{H}_1 = \mathcal{H}_2$ and $\|\cdot\|_{\mathcal{H}_1} = \|\cdot \|_{\mathcal{H}_2}$, and vice versa. \begin{example} \normalfont Given $g\in L^2(\mathbb{R}^d)$ such that $\|g\|_{L^2}=1$, the \textit{Gabor space} $V_g(L^2(\mathbb{R}^d)) \subset L^2(\mathbb{R}^{2d})$ with norm $\|V_g f\|_{V_g(L^2)}=\|f\|_{L^2}$ is an RKHS with kernel $K(z,z') = \langle \pi(z')g, \pi(z) g\rangle$. \end{example} This result can be deduced by noting that $V_g$ is an isometry onto its image, then proceeding with the adjoint as defined above. \subsubsection{Twisted Convolutions} Reproducing properties of Gabor spaces are intimately connected to the \textit{twisted convolution}, which is defined in terms of the $2$-cocycle of $\pi(z)$, which we define as $c(z,z')=e^{-2\pi ix'(\omega-\omega')}$, such that $\pi(z)^*\pi(z')=c(z,z')\pi(z+z')^*$. We define the twisted convolution for a Lebesgue-Bochner space (for details see for example \cite{DincVecMeas}). The twisted convolution presented here swaps the arguments, but this is only in order to fit our construction of the operator STFT with the standard notation in coorbit theory. \begin{definition} Given two operator-valued functions $F,H\in L^2(\mathbb{R}^{2d};\mathcal{HS})$, we define the twisted convolution $\natural$ as \begin{align*} F\natural H(x) = \int_{\mathbb{R}^{2d}} H(x-y)F(y)c(x-y,y)\, dy, \end{align*} where the integral can be interpreted in the sense of a Bochner integral. \end{definition} In the concrete setting of Gabor spaces, a direct calculation shows for functions $f_1,f_2,g_1,g_2\in L^2(\mathbb{R}^{d})$, that $V_{g_1} f_1 \natural V_{g_2} f_2 = \langle f_2, g_1 \rangle V_{g_2} f_1$. Clearly then for some $F\in V_g(L^2)$, the identity $F\natural V_g g = F$ holds when $\|g\|_{L^2}=1$; however, a fundamental result of coorbit theory is that the converse also holds, giving the following: \begin{proposition} \label{rkhsclassification} Given some $g\in L^2(\mathbb{R}^d)$ with $\|g\|_{L^2}=1$, a function $F\in L^2(\mathbb{R}^{2d})$ is in $V_g (L^2)$ if and only if $F\natural V_g g = F$. \end{proposition} We note an application of the weighted, mixed-norm Young inequality to Lebesgue-Bochner spaces of Banach algebras to be used in the sequel. A proof of the scalar-valued case can be found for example in Proposition 11.1.3 of \cite{grochenigtfa}; the vector-valued case follows by the same argument. \begin{lemma} \label{youngsineq} Given functions $F\in L^1_v(\mathbb{R}^{2d};\mathcal{HS})$ and $H\in L^{p,q}_m(\mathbb{R}^{2d};\mathcal{HS})$ we have \begin{align*} \|F\natural H \|_{L^{p,q}_m} \leq C_{m,v}\|F\|_{L^1_v} \|H\|_{L^{p,q}_m}, \end{align*} where $v$ is some sub-multiplicative function and $m$ a $v$-moderate weight, and $C_{m,v}$ a constant depending on $v$ and $m$. \end{lemma} \subsection{Modulation Spaces} We begin by considering the space $M^1_v(\mathbb{R}^{d})$. 
For a sub-multiplicative $v$, we define the modulation space as functions whose image under the STFT with Gaussian window is in $L^1_v(\mathbb{R}^{2d})$: \begin{align*} M^1_v(\mathbb{R}^{d}) := \{f\in L^2(\mathbb{R}^d): V_{\varphi_0} f \in L^1_v(\mathbb{R}^{2d})\}. \end{align*} Such a space is always non-empty, as $\varphi_0$ itself is contained in it, and for weights of polynomial growth it contains the Schwartz functions $\mathscr{S}$. In addition, it is closed under pointwise multiplication, time-frequency shifts, and is a Banach space under the norm $\|f\|_{M^1_v} = \|V_{\varphi_0} f\|_{L^1_v}$. The unweighted $M^1(\mathbb{R}^{d})$ is Feichtinger's algebra, which has been studied extensively and provides for many avenues of time-frequency analysis the ideal set of test functions. We refer to the early paper \cite{feich81} and recent survey \cite{jakob18} for more details on the space. General modulation spaces are then defined, for any $v$-moderate weight $m$, by \begin{align*} M^{p,q}_m(\mathbb{R}^{d}) := \{f\in (M^1_v(\mathbb{R}^{d}))': V_{\varphi_0} f \in L^{p,q}_m(\mathbb{R}^{2d})\}, \end{align*} with the associated norm $\|f\|_{M^{p,q}_m}=\|V_{\varphi_0} f\|_{L^{p,q}_m}$. For any $g\in M^1_v(\mathbb{R}^{d})$, the space $\{f\in (M^1_v(\mathbb{R}^{d}))': V_{g} f \in L^{p,q}_m(\mathbb{R}^{2d})\}$ is equal to the space $M^{p,q}_m(\mathbb{R}^{d})$ and the associated norms are equivalent. It is not hard to see that $M^{2,2}(\mathbb{R}^{d})=L^2(\mathbb{R}^{d})$, from the properties of the STFT with window $\varphi_0 \in L^2(\mathbb{R}^{d})$. The modulation spaces form coorbit spaces of the unitary representation $\pi$, and as such have the property: \begin{theorem}{(Correspondence Principle)} Given some $g\in M^1_v(\mathbb{R}^{d})$, for every $1\leq p,q \leq \infty$, the STFT defines an isomorphism \begin{align*} V_g: M^{p,q}_v(\mathbb{R}^{d}) \to \{F\in L^{p,q}_v(\mathbb{R}^{2d}): F \natural V_g g = F\}. \end{align*} \end{theorem} \begin{remark} \normalfont In this work we consider the case of the operator STFT $\mathfrak{V}_S$. One might therefore ask why we do not refer simply to an operator modulation space. We believe this would be misleading, since the term modulation space refers to the construction of the spaces by the $M^1(\mathbb{R}^{d})$ condition $\int_{\hat{G}} \|M_\omega f * f\|_1 d\omega < \infty$. We do not work with the analogous concept of modulation for operators, so we choose to refer to them as coorbit spaces. Although not coorbit spaces in the ``strict sense'' \cite{voigt15}, since the elements are not in the representation space of $\pi$, they can be considered as generalised coorbit spaces in the sense that instead of our transforms being a functional, they are now simply maps between spaces of operators, for example $\mathcal{HS}\to\mathcal{HS}$. \end{remark} \subsection{Spaces of Operators} \subsubsection{Schatten class and nuclear operators} In this work we consider several spaces of operators. We begin by defining the \textit{trace class} operators as \begin{align*} \mathcal{S}^1:= \{T\in\mathcal{L}(L^2(\mathbb{R}^{d})): \sum_{n\in \mathbb{N}} \langle |T|e_n, e_n\rangle < \infty\} \end{align*} for any orthonormal basis $\{e_n\}_{n\in\mathbb{N}}$ of $L^2(\mathbb{R}^{d})$. For operators satisfying this condition, the sum $\sum_{n\in \mathbb{N}} \langle Te_n, e_n\rangle$ in fact converges for all orthonormal bases to the same value, the trace of $T$, given by $\mathrm{tr}(T)$. The set of such operators is then a Banach space when equipped with the norm $\|T\|_{\mathcal{S}^1} = \mathrm{tr}(|T|)$. 
The \textit{Hilbert-Schmidt} operators $\mathcal{HS}$, are the operators \begin{align*} \mathcal{HS}:= \{T\in\mathcal{L}(L^2(\mathbb{R}^{d})): T^*T\in\mathcal{S}^1 \}. \end{align*} The space $\mathcal{HS}$ is a Hilbert-Schmidt space with the inner product $\langle S,T\rangle_{\mathcal{HS}} = \mathrm{tr}(ST^*)$, and contains $\mathcal{S}^1$ as a proper ideal. We will often use that every compact operator, and therefore every Hilbert-Schmidt and trace class operator, admits a spectral decomposition \begin{align*} S = \sum_{n\in\mathbb{N}} \lambda_n \psi_n \otimes \phi_n, \end{align*} where $\lambda_n$ are the singular values of $S$, $\{\psi_n\}_{n\in \mathbb{N}}$ and $\{\phi_n\}_{n\in \mathbb{N}}$ are orthonormal sets and the sum converges in operator norm. Both these spaces are Banach algebras with their respective norms and two-sided ideals in $\mathcal{L}(L^2(\mathbb{R}^{d}))$, with $\mathcal{S}^1\subset\mathcal{HS}\subset \mathcal{L}(L^2(\mathbb{R}^{d}))$. The further \textit{Schatten class operators}, $\mathcal{S}^p$, are defined by the decay of their singular values; \begin{align*} \mathcal{S}^p := \{T\in\mathcal{L}(L^2(\mathbb{R}^{d})): \{\lambda_n\}_{n\in\mathbb{N}} \in l^p \} \end{align*} where $\lambda_n$ are again the singular values of $T$. Clearly $\mathcal{HS}=\mathcal{S}^2$. We also introduce a space of \textit{nuclear operators}, a concept which generalises the concept of trace to operators between Banach spaces. In particular for two Banach spaces $X,Y$, the nuclear operators $\mathcal{N}(X,Y)$ are the linear operators $T$ which have an expansion \todo{M: $x_n$} $T=\sum_n y_n\otimes x_n$, where $y_n\in Y$, $x_n\in X'$ such that $\sum_n \|y_n\|_Y\|x_n\|_{X'} < \infty$. These operators become a Banach space when endowed with the norm $\|T\|_{\mathcal{N}(X,Y)} = \inf \sum_n \|y_n\|_Y\|x_n\|_{X'}$ where the infimum is taken over all possible decompositions of $T$. In our case we are interested in the nuclear operators $\mathcal{N}(L^2(\mathbb{R}^{d});M^1_v(\mathbb{R}^{d}))$. Such operators may be defined as the projective tensor product $\mathcal{N}(L^2(\mathbb{R}^{d});M^1_v(\mathbb{R}^{d})) := M^1_v(\mathbb{R}^{d}) \Hat{\otimes}_{\pi} L^2(\mathbb{R}^{d})$, the completion of the algebraic tensor product $M^1_v(\mathbb{R}^{d}) \otimes L^2(\mathbb{R}^{d})$ with respect to the nuclear norm \begin{align*} \|h\|_{M^1_v \otimes L^2} = \inf \left\{ \sum_{n=1}^N \|g_n\|_{M^1_v}\|f_n\|_{L^2}: h=\sum_{n=1}^N g_n \otimes f_n \right\}. \end{align*} Finally we introduce the \textit{Schwartz operators} $\mathfrak{S}$, as the space of bounded integral operators with kernel $k\in \mathscr{S}(\mathbb{R}^{2d})$. Such operators form a Frechet space as detailed in \cite{keyl16}, and the topological dual $\mathfrak{S}'$ consists of integral operators with kernels in $\mathscr{S}'(\mathbb{R}^{2d})$, which by the Schwartz kernel theorem is the space of operators from $\mathscr{S}(\mathbb{R}^{2d})$ to $\mathscr{S}'(\mathbb{R}^{2d})$. For polynomial sub-multiplicative weight $v$, we use the sequence of inclusions $\mathfrak{S}\subset \mathcal{N}(L^2(\mathbb{R}^{d});M^1_v(\mathbb{R}^{d})) \subset \mathcal{HS} \subset \mathfrak{S}'$. \subsubsection{G-frames for Operators} In the operator setting, we will consider g-frames as introduced in \cite{sun06} as an analogue to frames in the function setting. 
In particular, given a Hilbert space $\mathcal{U}$, and a sequence of Hilbert spaces $\{\mathcal{V}_i\}_{i\in I}$, then a sequence of operators $\{S_i\in \mathcal{L}(\mathcal{U};\mathcal{V}_i)\}_{i\in I}$ is called a g-frame of $\mathcal{U}$ with respect to $\{\mathcal{V}_i\}_{i\in I}$ if there exists positive constants $A,B$ such that the \textit{g-frame condition} \begin{align} A\|u\|^2_{\mathcal{U}} \leq \sum_{i\in I} \|S_i u\|^2_{\mathcal{V}_i} \leq B\|u\|^2_{\mathcal{U}} \end{align} holds for all $u\in\mathcal{U}$. We call $\{S_i\}_{i\in I}$ a tight frame when $A=B$, and a Parseval frame when $A=B=1$. In our work we consider the case where $\mathcal{V}_i$ coincide for all $i$. When the g-frame condition holds, the g-frame operator \begin{align*} \mathfrak{O}_S = \sum_{i\in I} S_i^* S_i \end{align*} is positive, bounded and invertible on $\mathcal{U}$. In \cite{skrett21}, g-frame operators of the type \begin{align*} \mathfrak{O}_S = \sum_{\lambda\in\Lambda} \pi(\lambda)S^*S\pi(\lambda)^* \end{align*} for some lattice $\Lambda$, were considered on the Hilbert space $L^2(\mathbb{R}^d)$. In this work, we say an operator $S\in\mathcal{L}(L^2(\mathbb{R}^{d}))$ generates a \textit{Gabor g-frame} if $\{S^*\pi(\lambda)^*\}_{\lambda\in\Lambda}$ is a frame for $\mathcal{HS}$. \begin{proposition} If $S\in\mathcal{L}(L^2(\mathbb{R}^{d}))$ generates a Gabor g-frame of $\mathcal{HS}$ for some lattice $\Lambda$, then $S\in\mathcal{HS}$. \end{proposition} This follows from the same argument as \todo{M: should be capitalized?}Proposition 5.7 in \cite{skrett21}, taking a rank-one $T$. For $S\in\mathcal{HS}$ which generates a Gabor g-frame, we define the analysis operator $C_S: \mathcal{HS} \to l^2(\Lambda;\mathcal{HS})$ by \begin{align*} C_S T = \{S^*\pi(\lambda)^*T\}_{\lambda\in\Lambda}, \end{align*} and the synthesis operator $D_S: l^2(\Lambda;\mathcal{HS}) \to \mathcal{HS}$ by \begin{align*} D_S (\{T_{\lambda}\}_{\lambda\in\Lambda}) = \sum_{\lambda\in\Lambda} \pi(\lambda)S T_{\lambda}. \end{align*} If $S$ generates a Gabor g-frame, then general g-frame theory \cite{sun06} tells us there exists a \textit{canonical dual frame} \begin{align*} \{\Tilde{S}_{\lambda}\}_{\lambda\in\Lambda} := \{S^*\pi(\lambda)^*\mathfrak{O}_S^{-1}\}_{\lambda\in\Lambda} \end{align*} since $T=\mathfrak{O}_S^{-1}\mathfrak{O}_S T = \mathfrak{O}_S\mathfrak{O}_S^{-1} T$. $\Tilde{S}$ can be shown to be a Gabor g-frame generated by $(S^*\mathfrak{O}_S^{-1})$. We say in general that two operators $S,T\in\mathcal{L}(\mathcal{HS})$ generate \textit{dual Gabor g-frames} if $S$ and $T$ generate Gabor g-frames, and $\mathfrak{O}_{S,T} := D_S C_T = I_{\mathcal{HS}}$. \subsection{Quantum Harmonic Analysis} As a final prerequisite we present some theorems of quantum harmonic analysis, based on the convolutions introduced by Werner in \cite{werner84}, and recently applied to time-frequency anlysis in \cite{skrett18} \cite{skrett19} \cite{skrett20}, where it is used to generalise known results and provide more concise proofs by extending the mechanics of harmonic analysis to operators. We will on occasion use the framework of quantum harmonic analysis to simplify a proof or give an alternative framing. 
Convolutions between operators and functions are defined in the following manner; \begin{definition} For $f\in L^p(\mathbb{R}^{2d})$, $S\in\mathcal{S}^q$ and $T\in\mathcal{S}^p$, where $\frac{1}{p}+\frac{1}{q}=1+\frac{1}{r}$, convolutions are defined by \begin{align*} f \star S &:= \int_{\mathbb{R}^{2d}} f(z) \alpha_z(S)\, dz \\ S \star T &:= \mathrm{tr}(S\alpha_z(\Check{T})) \end{align*} where $\alpha_z(S)=\pi(z)S\pi(z)^*$ is a representation of the Weyl-Heisenberg group on $\mathcal{HS}$ and $\Check{T}=PTP$ where $P$ is the parity operator. The first integral is to be interpreted as a Bochner integral. \end{definition} We will use a generalised version of Moyal's identity from \cite{werner84}; \begin{lemma}{(Generalised Moyal's Identity)}\label{generalmoyal} For two operators $S,T\in\mathcal{S}^1$, the mapping $z\mapsto S\alpha_z(T)$ is integrable over $\mathbb{R}^{2d}$, and \begin{align*} \int_{\mathbb{R}^{2d}} S\alpha_z(T)\, dz = \mathrm{tr}(S)\mathrm{tr}(T). \end{align*} \end{lemma} Taking rank one operators returns precisely the original Moyal's identity, hence the name. We also note that the above holds when $T$ is replaced with $\Check{T}$, since tr$(T)=$tr$(PTP)$. We also make frequent use of the fact that for $S\in\mathcal{S}^1$; \begin{align} \label{convidentity} 1 \star S = \mathrm{tr}(S)I_{L^2}, \end{align} which can be seen by using the spectral decomposition of $S$ and the reconstruction formula for $V_g$. \section{An Operator STFT} We start by defining the operator valued STFT. \begin{definition}{(Operator STFT)} For two $\mathcal{HS}$ operators $S,T$ on $L^2(\mathbb{R}^d)$, the operator short-time Fourier transform, $\mathfrak{V}_S T$, is given by \begin{equation} \mathfrak{V}_S T(z) = S^*\pi(z)^*T. \end{equation} \end{definition} The operator STFT thus defines an operator valued function in phase space. We will see that this operator valued function is in many respects an analogue to the scalar function of the function STFT. To motivate such a definition, we consider the following: \begin{example} \normalfont For operators $S=\sum_n g_n \otimes e_n $ and $T=\sum_n f_n \otimes e_n$ with $f_n, g_n \in L^2(\mathbb{R}^d)$ and $\{e_n\}_n$ some orthonormal basis in $L^2(\mathbb{R}^d)$; \begin{align*} \mathfrak{V}_S T(z) &= \sum_{n,m} V_{g_n}f_m(z) e_n \otimes e_m. \end{align*} \end{example} \begin{remark} \normalfont Here and in the sequel, we will often consider an operator $S=\sum_n f_n \otimes e_n$ where only the $e_n$ are assumed to be orthonormal. This is done because we will later consider different norms on the $f_n$. In the above example we could of course assume the $f_n$ to have $\|f_n\|_{L^2} = s_n$, where $s_n$ are the singular values of $S$. \end{remark} This definition is clearly equivalent to the definition in \cite{skrett22}, \cite{Guo22} in the case of a rank one $T=\psi \otimes \xi$, where we have $\mathfrak{V}_S T = (S^*\pi(z)^*\psi) \otimes \xi$, except that we consider the adjoint $S^*$. This adjustment is to make formulae in the sequel cleaner, and we note that there is no material difference in the two formulations. The STFT can thus be considered to encode information about time frequency correlations over functions. 
\begin{example}\label{locopex} \normalfont Let $A_f^{\varphi_1,\varphi_2}$ and $A_g^{\psi_1,\psi_2}$ be standard single window localisation operators given by \begin{align*} A_f^{\varphi_1,\varphi_2}&=\int_{\mathbb{R}^{2d}} f(z) \pi(z)\varphi_1\otimes\pi(z)\varphi_2\, dz \\ A_g^{\psi_1,\psi_2}&=\int_{\mathbb{R}^{2d}} g(z) \pi(z)\psi_1\otimes\pi(z)\psi_2\, dz \end{align*} where $f,g\in L^2(\mathbb{R}^{2d})$ and $\varphi_i,\psi_i\in L^2(\mathbb{R}^d)$. The operator STFT of $A_g^{\psi_1,\psi_2}$ with window $A_f^{\varphi_1,\varphi_2}$ is then \begin{align*} \mathfrak{V}_{A_f^{\varphi_1,\varphi_2}} A_g^{\psi_1,\psi_2}(z) = \int_{\mathbb{R}^{4d}} \overline{f(z')}g(z'')\langle \pi(z'')\psi_1,\pi(z)\pi(z')\varphi_1\rangle \pi(z')\varphi_2\otimes \pi(z'')\psi_2 \,dz' \,dz''. \end{align*} This expression has an intuitive interpretation; if windows $\psi_1$ and $\varphi_1$ are concentrated in time-frequency around the origin, then the inner product in the integrand is negligible outside of the region around $z=z''-z'$. To illustrate this we consider the simple situation where $f,g$ are characteristic functions, and all windows are the Gaussian $\varphi_0$: \begin{align*} \mathfrak{V}_{A_{\Omega_1}} A_{\Omega_2}(z) = \int_{\Omega_2} \int_{\Omega_1} \langle \pi(z'')\varphi_0,\pi(z)\pi(z')\varphi_0\rangle \pi(z')\varphi_0\otimes \pi(z'')\varphi_0 \,dz' \,dz'', \end{align*} where $A_{\Omega_i}:= A_{\chi_{\Omega_i}}^{\varphi_0,\varphi_0}$\todo{M: indices?}. \cref{fig:masks} shows some simple domains $\Omega_i$ in the time-frequency plane. In \cref{fig:spectrograms}, the resulting Hilbert-Schmidt norm of the operator STFTs $\mathfrak{V}_{A_{\Omega_i}} A_{\Omega_j}$ are shown as a function of $z$: \begin{figure} \caption{From left to right: $\Omega_1$, $\Omega_2$, $\Omega_3$} \label{fig:masks} \end{figure} \begin{figure}\label{fig:spectrograms} \end{figure} These examples illustrates how the Hilbert-Schmidt norm of the operator STFT acts, but we also have the interpretation of $\pi(z')\varphi_0\otimes \pi(z'')\varphi_0$, as the operator sending time-frequency energy of a function from around $z''$ to around $z''-z$. \end{example} \begin{example}\label{dataopex} \normalfont For a data operator $S=\sum_n f_n \otimes e_n$, \begin{align*} \mathfrak{V}_S S(z) = \sum_{n,m} V_{f_n} f_m(z) e_n\otimes e_m. \end{align*} Upon taking the taking the Hilbert-Schmidt norm, we recover the total correlation function from \cite{Dorf21}; \begin{align*} \|\mathfrak{V}_S S(z)\|_{\mathcal{HS}}^2 = \sum_{n,m} |V_{f_n} f_m(z)|^2. \end{align*} Hence the structure of the resulting operator can be seen to provide more information regarding the correlations within the dataset, as it relates where in the dataset the correlation occurs, for example on the diagonal versus off. To see this we compare two operators with identical total correlation functions, but one generated by functions determined by stationary process, while the other is generated by functions drawn from a non-stationary process. Both operators are of the form \begin{align*} S_i = \sum_{n=1}^{200} f_i \otimes e_i \end{align*} where $e_i$ form an orthonormal basis, and $f_i$ are of the form \begin{align*} f_i = a_i \cdot sin(\mathrm{freq}_i\cdot t) \cdot g_i(t) \end{align*} where $a_i$ is a constant from a random normal distribution, $g_i$ a Bartlett-Hann window translated by a random $x$ from a normal distribution. The length of the signal is $200$, and so the resulting matrix $S_i$ has dimensions $200\times 200$. 
For the operator $S_1$, the frequencies $\mathrm{freq}_i$ are given by a base frequency with the addition of random noise from both a normal and sinoidal distribution. The operator $S_2$ on the other hand, is generated by the same base frequency, with random noise from a normal distribution, but with an $i$-dependent modulation. The two operators have an identical total correlation function, shown in \cref{fig:tc}, but comparing the operator-value of the STFT for different $z$'s shows the non-stationary structure of the function data set generating the operator $S_2$, \cref{fig:timedepstft}, when compared to $S_1$ \cref{fig:randstft}. In this respect the operator STFT can be seen to reflect the structure of an ordered data set, such as a functional time series. \begin{figure} \caption{The common total correlation function of both $S_1$ and $S_2$} \label{fig:tc} \end{figure} \begin{figure} \caption{The matrix given by operator STFT $\mathfrak{V}_{S_1} S_1$ at $z=(0,0)$, $z=(0.5,0.5)$, $z=(1,1)$ and $z=(1.5,1.5)$.} \label{fig:randstft} \end{figure} \begin{figure} \caption{The matrix given by operator STFT $\mathfrak{V}_{S_2} S_2$ at $z=(0,0)$, $z=(0.5,0.5)$, $z=(1,1)$ and $z=(1.5,1.5)$.} \label{fig:timedepstft} \end{figure} \end{example} \todo{mention length of signal/size of matrix} We collect some simple properties of the operator STFT: \begin{proposition} \label{stftbasics} For operators $Q,R,S,T\in\mathcal{HS}$; \begin{enumerate} \item $\mathfrak{V}_S T(z) = e^{-2\pi i \omega x}(\mathfrak{V}_T S(-z))^*$ \item $\int_{\mathbb{R}^{2d}} \langle \mathfrak{V}_S T(z), \mathfrak{V}_Q R(z)\rangle_{\mathcal{HS}}dz = \langle Q,S \rangle_{\mathcal{HS}} \langle T,R \rangle_{\mathcal{HS}}$ \item $\int_{\mathbb{R}^{2d}} \| \mathfrak{V}_S T(z)\|^2_{\mathcal{HS}} = \|S\|^2_{\mathcal{HS}}\|T\|^2_{\mathcal{HS}}$ \end{enumerate} \end{proposition} \begin{proof} The first claim is merely a restatement of the property $\pi(z)^* = e^{-2\pi i x \omega}\pi(-z)$, and the third a special case of the second. To prove the second claim; \begin{align*} \int_{\mathbb{R}^{2d}} \langle \mathfrak{V}_S T(z), \mathfrak{V}_Q R(z) \rangle_{\mathcal{HS}}\, dz &= \int_{\mathbb{R}^{2d}} \mathrm{tr}(S^*\pi(z)^*TR^*\pi(z)Q)\, dz \\ &= \int_{\mathbb{R}^{2d}} \mathrm{tr}(TR^*\pi(z)QS^*\pi(z)^*)\, dz \\ &= \int_{\mathbb{R}^{2d}} (TR^*)\star (PQS^*P) \, dz \\ &= \langle Q,S \rangle_{\mathcal{HS}} \langle T,R \rangle_{\mathcal{HS}} \end{align*} where we have used \Cref{generalmoyal} in moving from the third to fourth line. \end{proof} In particular, the third statement gives us that $\mathfrak{V}_S:\mathcal{HS}\to L^2(\mathbb{R}^{2d};\mathcal{HS})$, and the mapping is continuous and injective. It is then natural to consider the Hilbert space adjoint, $\mathfrak{V}_S^*:L^2(\mathbb{R}^{2d};\mathcal{HS})\to\mathcal{HS}$, which is given by \begin{align*} \mathfrak{V}_S^* \Psi = \int_{\mathbb{R}^{2d}} \pi(z)S\Psi(z)\, dz \end{align*} for $\Psi(z)\in L^2(\mathbb{R}^{2d};\mathcal{HS})$. The integral can be interpreted in the weak sense in $\mathcal{HS}$. This can be seen directly; \begin{align*} \langle \mathfrak{V}_S T, \Psi \rangle_{L^2(\mathbb{R}^{2d},\mathcal{HS})} &= \int_{\mathbb{R}^{2d}} \langle S^*\pi(z)^* T,\Psi(z)\rangle_{\mathcal{HS}}\, dz \\ &= \int_{\mathbb{R}^{2d}} \mathrm{tr}(T\Psi(z)^*S^*\pi(z)^*)\, dz \\ &= \int_{\mathbb{R}^{2d}} \langle T, \pi(z)S\Psi(z)\rangle_{\mathcal{HS}} \, dz. 
\end{align*} The operator STFT and its adjoint shares the reconstruction property with the function case, namely \begin{align*} \mathfrak{V}_S^* \mathfrak{V}_R T &= \int_{\mathbb{R}^{2d}} \pi(z)SR^*\pi(z)^* T\, dz \\ &= \int_{\mathbb{R}^{2d}} \alpha_z (SR^*) \, dz T\\ &= (1\star SR^*) \cdot T = \langle S, R \rangle_{\mathcal{HS}} T \end{align*} where we use \eqref{convidentity}. We have as a result \begin{align}\label{isometry} \mathfrak{V}_S^* \mathfrak{V}_S = I_{\mathcal{HS}} \end{align} for any $S\in\mathcal{HS}$ such that $\|S\|_{\mathcal{HS}}=1$. The converse then follows immediately, namely that for such an $S$, \begin{align}\label{adjisometry} \mathfrak{V}_S \mathfrak{V}_S^* = I_{\mathfrak{V}_S (L^2)}. \end{align} \section{Reproducing Kernel Structure}\label{repkernstruc} In this section we examine the structure of the spaces generated by the operator STFT. We begin with the RKHS structure: \begin{proposition} For any $S\in\mathcal{HS}$, the space \begin{align*} \mathfrak{V}_S(\mathcal{HS}) := \{\mathfrak{V}_S T : T\in\mathcal{HS} \} \end{align*} is a uniform reproducing kernel Hilbert space as a subspace of $L^2(\mathbb{R}^{2d};\mathcal{HS})$. \end{proposition} \begin{proof} We start by confirming that the space is closed, since \begin{align*} \|\mathfrak{V}_S T\|_{L^2(\mathbb{R}^{2d};\mathcal{HS})} = \int_{\mathbb{R}^{2d}}\|\mathfrak{V}_S T(z)\|_{\mathcal{HS}}^2 \, dz = \|S\|_{\mathcal{HS}}^2 \|T\|_{\mathcal{HS}}^2 \end{align*} from \Cref{stftbasics}. Uniform boundedness of evaluation is quite straightforward; \begin{align*} \|\mathfrak{V}_S T(z)\|_{\mathcal{HS}} = \|S^*\pi(z)^*T\|_{\mathcal{HS}} \leq \|S^*\pi(z)^*\|_{\mathcal{HS}}\|T\|_{\mathcal{HS}} \end{align*} \end{proof} In fact we have already seen from \eqref{adjisometry} that the evaluation operator $E_z$ is given explicitly by $E_z = S^*\pi(z)^* \mathfrak{V}_S^*$, and so we must have that $E_z^* = \mathfrak{V}_S \pi(z)S$. By definition of the kernel function, we have for $S\in\mathcal{HS}$, with $\|S\|_{\mathcal{HS}}=1$, that \begin{align*} K(z,z') &= E_z E_{z'}^* \\ &= S^*\pi(z)^*\pi(z')S. \end{align*} \begin{remark} \normalfont It should be noted that in the vector-valued RKHS setting the appearance of the operators $S$ and $\pi(z)$ (and their respective adjoints), in the definition of evaluation operator $E_z$ and its adjoint $E_z^*$, denotes the conjugation with these operators. As such we have that $E_z: \mathfrak{V}_S(\mathcal{HS})\to\mathcal{HS}$ and $E_z^*: \mathcal{HS} \to \mathfrak{V}_S(\mathcal{HS})$. \end{remark} This kernel is the integral kernel of the projection from $L^2(\mathbb{R}^{2d};\mathcal{HS})$ to $\mathfrak{V}_S(\mathcal{HS})$; \begin{align} \label{rkhsproj} P_S \Psi(z) = \int_{\mathbb{R}^{2d}} K(z,z') \Psi(z') dz' \end{align} for $\Psi\in L^2(\mathbb{R}^{2d};\mathcal{HS})$. This defines\todo{formulation?} a projection, which can be seen from a simple calculation of $P_S^2$, and for any $T\in\mathcal{HS}$ \begin{align*} \int_{\mathbb{R}^{2d}} K(z,z') \mathfrak{V}_S T(z') dz' &= \mathfrak{V}_S\mathfrak{V}_S^*\mathfrak{V}_S T(z). \end{align*} Decomposing $S=\sum_n g_n \otimes e_n$, for orthonormal set $\{e_n\}_{n\in\mathbb{N}}$ and orthogonal set $\{g_n\}_{n\in\mathbb{N}}$ (where $g_i$ may be $0$), we find \begin{align*} K(z,z') = \sum_{n,m\geq 0} \langle \pi(z')g_n, \pi(z)g_m\rangle_{L^2} \, e_m \otimes e_n. 
\end{align*} On the diagonals we have precisely the reproducing kernels of the scalar-valued Gabor spaces with windows $g_n$, that is to say kernels of the projections $V_{g_n} V_{g_n}^*$, but we have in addition the off-diagonal terms corresponding to the kernels of the maps $V_{g_n} V_{g_m}^*$. As a general property of RKHS', we have the inclusion \begin{align*} \mathfrak{V}_S (\mathcal{HS}) \subset L^2(\mathbb{R}^{2d};\mathcal{HS}) \cap L^{\infty}(\mathbb{R}^{2d};\mathcal{HS}), \end{align*} since \begin{align*} \|\mathfrak{V}_S (T)(z)\|^2_{\mathcal{HS}} &\leq \langle \mathfrak{V}_S (T), E_z^*E_z \mathfrak{V}_S (T)\rangle_{L^2(\mathbb{R}^{2d};\mathcal{HS})} \\ &= \|\mathfrak{V}_S (T)\|^2_{L^2(\mathbb{R}^{2d};\mathcal{HS})}. \end{align*} \subsection{Characterisation from Twisted Convolution} In an analogue way to the characterisation of Gabor space in terms of the twisted convolution, we can characterise the RKHS $\mathfrak{V}_S(\mathcal{HS})$ by the equivalent condition. \begin{proposition} Given $\Psi\in L^2(\mathbb{R}^{2d};\mathcal{HS})$, and $S\in\mathcal{HS}$ such that $\|S\|_{\mathcal{HS}}=1$; \begin{align*} \Psi\natural\mathfrak{V}_S S = \Psi \iff \Psi \in \mathfrak{V}_S (\mathcal{HS}). \end{align*} \end{proposition} \begin{proof} On the one hand we have that for $Q,R,S,T\in\mathcal{HS}$; \begin{align} \label{twistedconv} \mathfrak{V}_Q T \natural \mathfrak{V}_S R(z) &= \int_{\mathbb{R}^{2d}} S^*\pi(z-z')^*R Q^*\pi(z')^*Te^{-2\pi i x(\omega-\omega')}dz' \nonumber \\ &= S^*\pi(z)^*\int_{\mathbb{R}^{2d}}\pi(z')R Q^*\pi(z')^*dz' T \nonumber \\ &= \langle R, Q \rangle_{\mathcal{HS}} \mathfrak{V}_S T(z), \end{align} where the last inequality follows from \eqref{convidentity}, and hence the one direction follows in the case $Q=R=S$. On the other, from \eqref{rkhsproj}, \begin{align*} \Psi \natural \mathfrak{V}_S S(z) &= \int_{\mathbb{R}^{2d}} S^*\pi(z-z')^*S \Psi(z') e^{-2\pi i x(\omega-\omega')}\, dz' \\ &= \int_{\mathbb{R}^{2d}} K(z,z') \Psi(z')\, dz' \\ &= \big(P_S \Psi\big)(z)= \Psi(z) \end{align*} implies $\Psi\in \mathfrak{V}_S(\mathcal{HS})$. \end{proof} \subsection{Toeplitz operators} With a RKHS structure, it is natural to consider what the corresponding Toeplitz operators on the space look like. Toeplitz operators are of the form $T_f = P_V M_f$, that is to say a pointwise multiplication by some $f\in L^{\infty}(\mathbb{R}^{2d})$, followed by a projection back onto the RKHS. In the case of Gabor spaces these are precisely the localisation or anti-Wick operators, which are accordingly also called Gabor-Toeplitz operators \cite{feich14}. Considering the Toeplitz operators on $\mathfrak{V}_S (\mathcal{HS})$, we have operators of the type \begin{align*} T_f (\mathfrak{V}_S T) = \mathfrak{V}_S \mathfrak{V}_S^* (f\cdot\mathfrak{V}_S T) \end{align*} where $f\in L^{\infty}(\mathbb{R}^{2d})$ and $f\cdot\mathfrak{V}_S T$ is pointwise multiplication. We then define the unitarily equivalent operator $\Theta(T_f) := \mathfrak{V}_S^*T_f\mathfrak{V}_S$ on $\mathcal{HS}$: \begin{align*} \Theta (T_f)(T) &= \mathfrak{V}_S^*\mathfrak{V}_S \mathfrak{V}_S^* (f\cdot\mathfrak{V}_S T) \\ &= \mathfrak{V}_S^* (f\cdot\mathfrak{V}_S T) \\ &= \int_{\mathbb{R}^{2d}} f(z) \pi(z)SS^*\pi(z)^*T\, dz \\ &= f \star (SS^*) T, \end{align*} and hence Toeplitz operators in the operator case correspond to the composition with the mixed-state localisation operators discussed in \cite{skrett19}. 
\section{Coorbit Spaces for Operators}\label{opcoorbit} From the previous section, we have a characterisation of the space $\mathfrak{V}_S(\mathcal{HS})$. We now turn to other classes which can be similarly characterised. In particular, from \Cref{stftbasics} the Hilbert-Schmidt operators are precisely the operators $\{T\in\mathcal{L}(L^2(\mathbb{R}^d)): \mathfrak{V}_S T\in L^2(\mathbb{R}^{2d};\mathcal{HS}) \}$ for $S \in \mathcal{HS}$, similarly to the function case of $L^2(\mathbb{R}^d)=M^2(\mathbb{R}^d)$. We therefore set out to define what we refer to as \textit{operator coorbit spaces}. In the sequel, $v(z)$ will be a sub-multiplicative weight function of polynomial growth on phase space, and $m(z)$ will be a $v$-moderate weight function on phase space. \subsection{The $\mathfrak{M}^1_v$ case} In a similar vein to the function case we define the admissible operators, for a weight function $v$, to be \begin{align}\label{admcond} \mathfrak{A}_v := \{S\in\mathcal{HS}: \mathfrak{V}_S S \in L^1_v(\mathbb{R}^{2d};\mathcal{HS}) \}. \end{align} \begin{example} \normalfont Clearly any rank one operator which can be written as $T=f\otimes \psi$, where $f\in M^1_v(\mathbb{R}^d)$ and $\psi\in L^2(\mathbb{R}^d)$, is in $\mathfrak{A}_v$. \end{example} We set $S_0 = \varphi_0 \otimes \varphi_0$, and define the space \begin{align}\label{m1cond} \mathfrak{M}^1_v := \{T \in \mathcal{HS}: \mathfrak{V}_{S_0} T \in L^1_v(\mathbb{R}^{2d};\mathcal{HS}) \} \end{align} with corresponding norm $\|T\|_{\mathfrak{M}^1_v} = \|\mathfrak{V}_{S_0} T\|_{L^1_v(\mathbb{R}^{2d};\mathcal{HS})}$, and we denote the unweighted version $v(z)\equiv 1$ by $\mathfrak{M}^1$. \begin{remark}\label{tfintuition} \normalfont Considering $\|(\varphi_0\otimes\varphi_0)\pi(z)^*T\|_{\mathcal{HS}}$, it is easy to see how the $\mathfrak{M}^{1}_v$ condition (and later the $\mathfrak{M}^{p,q}_m$ conditions) can be seen to measure the time-frequency localisation of an operator. In this case, the $\mathfrak{M}^1_v$ condition is simply a measure of how time-frequency translations of $\varphi_0$ decay as arguments of $T^*$: $\int v(z)\|T^*(\pi(z)\varphi_0)\|_{L^2} dz$. Following this line of reasoning, we consider appropriate localisation operators of the type in \cref{locopex}: \end{remark} \todo{Say what happens next} \begin{example} \normalfont For a localisation operator $A_h^{\psi}$, if $h\in L^1(\mathbb{R}^{2d})$ and $\psi\in M^1(\mathbb{R}^d)$, then $A_h^{\psi}\in\mathfrak{M}^1$;\todo{something is unclear in first step- what is $A_{\overline{h}}^{\psi}(z)$? } \begin{align*} \int_{\mathbb{R}^{2d}} \|\mathfrak{V}_{S_0} A_h^{\psi}(z)\|_{\mathcal{HS}}dz &= \int_{\mathbb{R}^{2d}} \| \varphi_0\otimes \varphi_0 \pi(z)^* A_{h}^{\psi}\|_{\mathcal{HS}}dz \\ &= \int_{\mathbb{R}^{2d}} \| A_{\overline{h}}^{\psi}\pi(z)\varphi_0\|_{L^2}dz \\ &= \int_{\mathbb{R}^{2d}} \| \int_{\mathbb{R}^{2d}} \overline{h}(z')\langle \pi(z)\varphi_0,\pi(z')\psi\rangle \pi(z')\psi\, dz' \|_{L^2} dz \\ &\leq \int_{\mathbb{R}^{2d}} |\overline{h}(z')| \int_{\mathbb{R}^{2d}} |V_{\psi} \varphi_0 (z-z')|\, dz\, dz'. \end{align*} \end{example} Since $\mathfrak{V}_{S_0}(\mathcal{HS})$ is a RKHS, it is clear that $\mathfrak{M}^1_v \subset \mathcal{HS}$. 
This inclusion is continuous, since \begin{align*} \|T\|_{\mathcal{HS}}^2 &= \|\mathfrak{V}_{S_0} T\|_{L^2(\mathbb{R}^{2d};\mathcal{HS})}^2 \\ &= \int_{\mathbb{R}^{2d}} v(z)\|\mathfrak{V}_{S_0} T(z) \|_{\mathcal{HS}}\|\mathfrak{V}_{S_0} T(z) \|_{\mathcal{HS}}\, dz \\ &\leq \|T\|_{\mathcal{HS}} \|T\|_{\mathfrak{M}^1_v} \end{align*} where we have used that $\|\mathfrak{V}_S T(z)\|_{\mathcal{HS}}\leq\|S\|_{\mathcal{HS}}\|T\|_{\mathcal{HS}}$ for every $z$. We can hence decompose every $T \in \mathfrak{M}^1_v$ as $T=\sum_{n\geq 0} f_n \otimes e_n$ for some orthonormal system $\{e_n\}_n$ and orthogonal $\{f_n\}_n$. The $\mathfrak{M}^1_v$ condition \Cref{m1cond} is then equivalent to \begin{align*} \mathfrak{M}^1_v = \{ T = \sum_n f_n \otimes e_n \in \mathcal{HS}: \int_{\mathbb{R}^{2d}} v(z)\|V_{\varphi_0} f_n (z)\|_{l^2(\mathbb{N})} dz<\infty \}. \end{align*} Noting that \begin{align*} \|T\|_{\mathfrak{M}^1_v} \geq \int_{\mathbb{R}^{2d}} v(z) |V_{\varphi_0} f_n (z)| dz \end{align*} for each $n$, we find that $f_n \in M^1_v(\mathbb{R}^d)$ for all $n$ when $T\in \mathfrak{M}^1_v$, with $\|f\|_{M^1_v} \leq \|T\|_{\mathfrak{M}^1_v}$. \begin{claim} \label{nuclearinc} The space of nuclear operators $\mathcal{N}(L^2(\mathbb{R}^d);M^1_v(\mathbb{R}^d))$ is contained in $\mathfrak{M}^1_v$. \end{claim} \begin{proof} \normalfont Taking some $T\in \mathcal{N}(L^2(\mathbb{R}^d);M^1_v(\mathbb{R}^d))$, we can decompose $T=\sum_n f_n\otimes g_n$, with $\sum_n \|f_n\|_{M^1_v}\|g_n\|_{L^2} < \infty$, we assume without loss of generality that $\|g\|_{L^2}=1$ for all $n$. Here neither the $f_n$ or $g_n$ are necessarily orthogonal. We have that \begin{align*} \mathfrak{V}_{S_0} T(z) =\sum_{n} V_{\varphi_0} f_n(z) \varphi_0 \otimes g_m. \end{align*} It then follows that \begin{align*} \|\mathfrak{V}_{S_0} T(z) \|_{\mathcal{HS}} &= \big(\sum_{n,m} |\langle g_m, g_n \rangle V_{\varphi_0} f_n(z)V_{\varphi_0} f_m(z)|\big)^{1/2} \\ &\leq \big(\sum_{n,m} |V_{\varphi_0} f_n(z)V_{\varphi_0} f_m(z)|\big)^{1/2} \\ &= \sum_{n} |V_{\varphi_0} f_n(z)|, \end{align*} since we assumed $\|g_n\|_{L^2}=1$. The nuclear condition thus gives that \begin{align*} \int_{\mathbb{R}^{2d}} v(z) \|\mathfrak{V}_{S_0} T(z) \|_{\mathcal{HS}} \, dz &\leq \int_{\mathbb{R}^{2d}} \sum_n v(z) |V_{\varphi_0} f_n(z)|\, dz \\ &= \sum_n \int_{\mathbb{R}^{2d}} v(z) |V_{\varphi_0} f_n(z)|\, dz \\ &= \sum_n \|f_n\|_{M^1_v} \\ &< \infty. \end{align*} We conclude that $\mathcal{N}(L^2(\mathbb{R}^d);M^1_v(\mathbb{R}^d)) \subset \mathfrak{M}^1_v$ \end{proof} \begin{remark} \label{nuclear2inc} \normalfont Any $T\in \mathfrak{M}^1_v$ can be written in the form $\sum_n f_n \otimes e_n$, where $\|e_n\|_{L^2}=1$ for all $n$, and $\{\|f_n\|_{M^1_v}\}_n\in l^2$. This follows from the inequality \begin{align*} \big\| \int_{\mathbb{R}^{2d}} |V_{\varphi_0} f_n(z)| \, dz \big\|_{l^2} \leq \int_{\mathbb{R}^{2d}} \|V_{\varphi_0} f_n(z)\|_{l^2}\, dz. \end{align*} As a result, for the unweighted case, $S\in\mathfrak{M}^1 \implies SS^*\in \mathcal{N}(M^1(\mathbb{R}^2);M^1(\mathbb{R}^2))$, or alternatively $\sigma_{SS^*}\in M^1(\mathbb{R}^{2d})$ where $\sigma_{SS^*}$ is the Weyl symbol of $SS^*$ \cite{jakob22}. \end{remark} \begin{remark} \label{rankoneschatten} \normalfont An operator $T=\sum_n f_n\otimes e_n$ in the space $\mathfrak{M}^1_v$ also satisfies the condition \begin{align*} \mathfrak{V}_{S_0} T \in L^1(\mathbb{R}^{2d};\mathcal{S}^p), \end{align*} since $\mathfrak{V}_{S_0} T$ takes the values of rank one operators, and so all Schatten class norms coincide. 
However, as we will later see, using the Hilbert-Schmidt norm is required when considering results for a wider class of window operators. Hence for the sake of consistency we define the operator coorbit spaces in terms of the Hilbert-Schmidt norm. \end{remark} As a corollary of \Cref{nuclearinc}, operators $T\in M^1_v(\mathbb{R}^d) \hat{\otimes}_{\pi} M^1_v(\mathbb{R}^d)$, and in the case of polynomial growth of $v$ the Schwartz operators $\mathfrak{S}$, are contained in $\mathfrak{M}^1_v$. We will use this to give a suitably large reservoir for defining general coorbit spaces. \subsection{The general $\mathfrak{M}^{p,q}_m$ case} We then define the operator coorbit spaces for $1\leq p,q \leq \infty$ and $v$-moderate weight $m$ by \begin{align*} \mathfrak{M}^{p,q}_m := \{T \in \mathfrak{S}': S_0^* \pi(z)^* T \in L^{p,q}_m(\mathbb{R}^{2d};\mathcal{HS}) \}. \end{align*} with norms $\|T\|_{\mathfrak{M}^{p,q}_m} = \|\mathfrak{V}_{S_0} T\|_{L^{p,q}_m}$. \begin{example} \normalfont As in the $\mathfrak{M}^1_v$ case, any rank one operator which can be written as $T=f\otimes \psi$, where $f\in M^{p,q}_v(\mathbb{R}^d)$ and $\psi\in L^2$, is in $\mathfrak{M}^{p,q}_v$. \end{example} \begin{remark} \normalfont Since we restrict our focus to weights of polynomial growth, the Schwartz operator dual is a sufficiently large reservoir, although if we wished to extend to a larger class of weights this may fail. For weights of exponential growth one has to use ultradistributions \cite{pite04,cato17} and it seems to be a promising topic for future research to study these kind of objects in our setting, too. \end{remark} We use the notation $\mathfrak{V}_S T(z) = S^*\pi(z)^* T$ for $S\in\mathfrak{M}^1_v$ and $T\in \mathfrak{M}^{p,q}_m$, and similarly $\mathfrak{V}_S^* \Psi = \int_{\mathbb{R}^{2d}} \pi(z)S\Psi(z)dz$ for $\Psi\in L^{p,q}_m(\mathbb{R}^{2d};\mathcal{HS})$. The map $\mathfrak{V}_S$ is injective, as for any non-zero $R\in\mathfrak{M}^{p,q}_m$ there exists some $f\in L^2(\mathbb{R}^d)$ such that $Rf$ is non-zero, and so the injectivity of $\mathfrak{V}_S$ follows from the properties of the function STFT. The $\mathfrak{M}^{p,q}_m$ spaces are clearly closed under addition and scalar multiplication. To show that they are in fact Banach spaces, we use the following lemma: \begin{lemma}\label{projidentity} For $1\leq p \leq \infty$ and $S\in\mathfrak{A}_v$, the map $\mathfrak{V}_S \mathfrak{V}_S^*$ is a bounded operator on $L^{p,q}_m(\mathbb{R}^{2d};\mathcal{HS})$, and if $\|S\|_{\mathcal{HS}}=1$ then its restriction to $\mathfrak{V}_S (\mathfrak{M}^{p,q}_m)$ is the identity. \end{lemma} \begin{proof} We begin by noting that for $\Psi \in L^{p,q}_m(\mathbb{R}^{2d};\mathcal{HS})$; \begin{align*} \mathfrak{V}_S \mathfrak{V}_S^*\Psi(z) &= \int_{\mathbb{R}^{2d}} K(z,z') \Psi(z') dz' \\ &= \int_{\mathbb{R}^{2d}} S^*\pi(z)^*\pi(z')S \Psi(z') dz' \\ &= \Psi\natural \mathfrak{V}_S S(z). \end{align*} Hence from \Cref{youngsineq}; \begin{align*} \| \mathfrak{V}_S \mathfrak{V}_S^*\Psi \|_{L^{p,q}_m} &= \|\Psi\natural \mathfrak{V}_S S(z)\|_{L^{p,q}_m} \\ &\leq C_{m,v}\|\Psi\|_{L^{p,q}_m} \|\mathfrak{V}_S S\|_{L^1_v} \end{align*} and so $\mathfrak{V}_S \mathfrak{V}_S^*$ is bounded, since $S\in\mathfrak{A}_v$. 
For $\mathfrak{V}_S T \in L^{p,q}_m(\mathbb{R}^{2d};\mathcal{HS})$, as in the $\mathcal{HS}$ case we observe \begin{align*} \mathfrak{V}_S \mathfrak{V}_S^* \mathfrak{V}_S T &= \mathfrak{V}_S \int_{\mathbb{R}^{2d}} \pi(z)S S^*\pi(z)^*T\, dz \\ &= \mathfrak{V}_S \int_{\mathbb{R}^{2d}} \alpha_z(S^*S)\, dz\, T = \mathfrak{V}_S T. \end{align*} \end{proof} \begin{corollary}\label{stftinvmod} For $1\leq p \leq \infty$ and $S\in\mathfrak{A}_v$, $\mathfrak{V}_S^*$ is a bounded map from $L^{p,q}_m(\mathbb{R}^{2d};\mathcal{HS})$ to $\mathfrak{M}^{p,q}_m$. \end{corollary} \begin{proposition} For $1\leq p \leq \infty$, $1\leq q \leq \infty$, and $v$-moderate $m$, $\mathfrak{M}^{p,q}_m$ is a Banach space. \end{proposition} \begin{proof} We consider a Cauchy sequence $\{T_n\}_{n\in\mathbb{N}} \subset \mathfrak{M}^{p,q}_m$. The sequence $\{\mathfrak{V}_{S_0} T_n\}_{n\in\mathbb{N}}$ is a Cauchy sequence in $L^{p,q}_m(\mathbb{R}^{2d})$ by definition of the norm, and since $L^{p,q}_m(\mathbb{R}^{2d};\mathcal{HS})$ is a Banach space we denote the limit of this sequence $\Psi$. From \cref{stftinvmod} $\mathfrak{V}_{S_0}^* \Psi\in \mathfrak{M}^{p,q}_m$ and $T_n \to \mathfrak{V}_{S_0}^* \Psi$ by boundedness, so $\mathfrak{M}^{p,q}_m$ are Banach spaces. \end{proof} As in the function case we have the embedding of our spaces: \begin{claim} For $1\leq p \leq p' \leq \infty$, $1\leq q \leq q' \leq \infty$, and $m(z)\geq m'(z)$ , $\mathfrak{M}^{p,q}_m \subset \mathfrak{M}^{p',q'}_{m'}$. \end{claim} This follows from the reproducing formula for $\mathfrak{M}^{p,q}_m$ and the previous lemma; $\mathfrak{V}_{S_0} \mathfrak{V}_{S_0}^* \mathfrak{V}_{S_0} T (z) = S_0^*\pi(z)^*\mathfrak{V}_{S_0}^* \mathfrak{V}_{S_0} T$. Hence $\mathfrak{V}_{S_0}\mathfrak{M}^{p,q}_m \subset L^{p,q}_{m'}(\mathbb{R}^{2d};\mathcal{HS}) \cap L^{\infty}_{m'}(\mathbb{R}^{2d};\mathcal{HS})$ and the claim follows. \subsection{Equivalent Norms} \label{equivnorms} The twisted convolution structure can be used to show that as in the function case, different operators in $\mathfrak{M}^1_v$ generate the same $\mathfrak{M}^{p,q}_m$ spaces, with equivalent norms. \begin{proposition}\label{equivnorms} Given some $R \in \mathfrak{M}^1_v$, the space $\{T\in\mathcal{HS}: \mathfrak{V}_R T\in L^{p,q}_m(\mathbb{R}^{2d};\mathcal{HS})\}$ is equal to the space $\mathfrak{M}^{p,q}_m$, and the associated norms are equivalent. \end{proposition} \begin{proof} Given $R\in\mathfrak{M}^1_v$, and $T\in\mathfrak{M}^{p,q}_m$, we aim to show that $\mathfrak{V}_R T\in L^{p,q}_m(\mathbb{R}^{2d};\mathcal{HS})$. To that end we have from \Cref{youngsineq} that \begin{align*} \|\mathfrak{V}_R T\|_{L^{p,q}_m} &= \frac{1}{\|S_0\|^2_{\mathcal{HS}}} \|\mathfrak{V}_{S_0} T \natural \mathfrak{V}_R S_0\|_{L^{p,q}_m} \\ &\leq C_{v,m}\|\mathfrak{V}_{S_0} T\|_{L^{p,q}_m} \|\mathfrak{V}_{S_0} R\|_{L^1_v} \\ &< \infty. \end{align*} where $C_{v,m}$ is the $v$-moderate constant of $m$, and we have used \Cref{stftbasics}(i). We have also used the formula $\mathfrak{V}_Q T \natural \mathfrak{V}_S R(z) = \langle R, Q \rangle_{\mathcal{HS}} \mathfrak{V}_S T(z)$, which we initially defined only for $T\in\mathcal{HS}$. However examining the argument confirms we are justified in using this for general $T$. Hence $\mathfrak{M}^{p,q}_m \subset \{T\in\mathcal{HS}: \mathfrak{V}_R T\in L^{p,q}_m(\mathbb{R}^{2d};\mathcal{HS})\}$. Conversely, repeating the above argument with $T$ such that $\mathfrak{V}_R T\in L^{p,q}_m(\mathbb{R}^d)$ gives the reverse inclusion. 
Equivalence of norms is also clear from these symmetric arguments, namely \begin{align*} \frac{\|R\|_{\mathcal{HS}}^2}{C_{v,m}\|R\|_{\mathfrak{M}^1_v}} \|T\|_{\mathfrak{M}^{p,q}_m} \leq \|\mathfrak{V}_R T\|_{L^{p,q}_m} \leq \frac{C_{v,m}\|R\|_{\mathfrak{M}^1_v}}{\|S_0\|_{\mathcal{HS}}^2} \|T\|_{\mathfrak{M}^{p,q}_m}. \end{align*} \end{proof} \begin{corollary}\label{admissableops} $\mathfrak{M}^1_v = \mathfrak{A}_v$. \end{corollary} \begin{proof} It is clear that $\mathfrak{M}^1_v \subset \mathfrak{A}_v$. The inclusion $\mathfrak{A}_v \subset \mathfrak{M}^1_v$ follows from a similar argument as above. Given some $T\in\mathfrak{A}_v$ such that $\langle S, T\rangle_{\mathcal{HS}} \neq 0$; \begin{align*} \|\mathfrak{V}_{S_0} T\|_{L^1_v} \leq \frac{1}{\langle S_0, T\rangle_{\mathcal{HS}}}\|\mathfrak{V}_T T\|_{L^1_v} \|\mathfrak{V}_{S_0} S_0 \|_{L^1_v}. \end{align*} In the case that $\langle S, T\rangle_{\mathcal{HS}} = 0$, we simply take some $R\in\mathfrak{M}^1_v$ with $\langle S, R\rangle_{\mathcal{HS}} \neq 0$ and $\langle T, R\rangle_{\mathcal{HS}} \neq 0$ and repeat the above expansion twice with respect to $R$ to derive \begin{align*} \|\mathfrak{V}_{S_0} T\|_{L^1_v} \leq \frac{1}{\langle T, S_0\rangle_{\mathcal{HS}}\langle R, T\rangle_{\mathcal{HS}}}\|\mathfrak{V}_T T\|_{L^1_v} \|\mathfrak{V}_{S_0} S_0 \|_{L^1_v}\|\mathfrak{V}_{R} R \|_{L^1_v}. \end{align*} \end{proof} \begin{corollary} A Hilbert-Schmidt operator $S$ belongs to the space $\mathfrak{A}_v$ if and only if the following norm equivalence holds: \begin{align}\label{equivnormfunc} \|S^*\pi(z)^*f\|_{L^{p,q}_m(\mathbb{R}^{2d};L^2)} \asymp \|f\|_{M^{p,q}_m} \end{align} For every $1\leq p,q \leq \infty$, $v$-multiplicative $m$. \end{corollary} \begin{proof} The $M^{p,q}_m(\mathbb{R}^d)$ condition $\|V_{\varphi_0}f\|_{L^{p,q}_m(\mathbb{R}^{2d})} < \infty$ is equivalent to the $\mathfrak{M}^{p,q}_m$ condition $\|\mathfrak{V}_{S_0} (f \otimes \varphi_0)\|_{L^{p,q}_m(\mathbb{R}^{2d};\mathcal{HS})} < \infty$. From \Cref{equivnorms}, all $S\in\mathfrak{A}_v$ determine equivalent norms on these spaces. Conversely for any operator $S$ satisfying \Cref{equivnormfunc}, for all $1\leq p,q \leq \infty$ and all $v$-multiplicative $m$, satisfies $\|\mathfrak{V}_S (f \otimes \varphi_0)\|_{L^{p,q}_m(\mathbb{R}^{2d};\mathcal{HS})}<\infty$, and in particular $\|\mathfrak{V}_S S_0\|_{L^1_v(\mathbb{R}^{2d};\mathcal{HS})} < \infty$, so $S$ must be in $\mathfrak{A}_v$ by \cref{admissableops}. \end{proof} \subsection{Duality} To show the duality property $(\mathfrak{M}^{p,q}_m)' \cong \mathfrak{M}^{p',q'}_{1/m}$ where $\frac{1}{p} + \frac{1}{p'} = 1$ and similarly for $q$, we follow a similar approach to the function case proof in \cite{grochenigtfa}. We will however need a result of \cite{Gret72} for Lebesgue-Bochner spaces: \begin{lemma}\label{bochnerdual} For a Banach space $B$, and $\sigma$-finite measure space $(\Omega,\mathcal{A},\mu)$, $B$ has the Radon-Nikodym property (RNP) if and only if \begin{align*} L^p(\Omega;B)' = L^q(\Omega;B') \end{align*} with dual action \begin{align*} \langle a, a^* \rangle_{B,B'} = \int_{\Omega} a^*(a)\, d\mu \end{align*} where $\frac{1}{p}+\frac{1}{q} = 1$ for $1\leq p < \infty$. \end{lemma} Since $\mathcal{HS}$ has the RNP this gives that $\big(L^{p,q}_{m}(\mathbb{R}^{2d};\mathcal{HS})\big)' \cong L^{p',q'}_{1/m}(\mathbb{R}^{2d};\mathcal{HS})$, with the dual action $\langle \Psi, \Phi\rangle_{L^{p,q}_{m},L^{p',q'}_{1/m}} = \int_{\mathbb{R}^{2d}} \langle \Psi(z), \Phi(z)\rangle_{\mathcal{HS}}\, dz$. 
\begin{proposition} \label{dualspaces} For $S\in \mathfrak{A}_v$ and $1\leq p < \infty$, we have the duality identity \begin{align*} (\mathfrak{M}^{p,q}_m)' \cong \mathfrak{M}^{p',q'}_{1/m} \end{align*} with the dual action given by \begin{align*} \langle T, R \rangle_{\mathfrak{M}^{p,q}_m,\mathfrak{M}^{p',q'}_{1/m}} = \int_{\mathbb{R}^{2d}} \langle \mathfrak{V}_S T(z), \mathfrak{V}_S R(z)\rangle_{\mathcal{HS}}\, dz. \end{align*} \end{proposition} \begin{proof} On the one hand, the inclusion $\mathfrak{M}^{p',q'}_{1/m} \subset (\mathfrak{M}^{p,q}_m)'$ is clear from Hölder's inequality for weighted mixed norm spaces; \begin{align*} \Big| \int_{\mathbb{R}^{2d}} \langle \mathfrak{V}_S T(z), \mathfrak{V}_S R(z)\rangle_{\mathcal{HS}}\, dz \Big| \leq \|T\|_{\mathfrak{M}^{p,q}_m}\|R\|_{\mathfrak{M}^{p',q'}_{1/m}}. \end{align*} To demonstrate the converse, take $R\in (\mathfrak{M}^{p,q}_m)'$. The composition $\Tilde{R} := R\circ \mathfrak{V}_S^*$ then defines a functional on $L^{p,q}_m(\mathbb{R}^{2d};\mathcal{HS})$, by \Cref{stftinvmod}. There exists then some $\Theta(z) \in L^{p',q'}_{1/m}(\mathbb{R}^{2d};\mathcal{HS})$, due to \Cref{bochnerdual}, such that \begin{align*} \Tilde{R}(\Psi) = \int_{\mathbb{R}^{2d}} \langle \Psi(z), \Theta(z)\rangle_{\mathcal{HS}}\, dz \end{align*} for $\Psi\in L^{p,q}_m((\mathbb{R}^{2d};\mathcal{HS})$. From \Cref{stftinvmod} it follows that \begin{align*} \mathfrak{V}_S^*\Theta=\int_{\mathbb{R}^{2d}}\pi(z) S \Theta(z)\, dz \in \mathfrak{M}^{p',q'}_{1/m} \end{align*} and we denote this element $\theta$. We then conclude by confirming that \begin{align*} \langle T, \theta \rangle_{\mathfrak{M}^{p,q}_m,\mathfrak{M}^{p',q'}_{1/m}} &= \int_{\mathbb{R}^{2d}} \langle \mathfrak{V}_S T(z), \mathfrak{V}_S\mathfrak{V}_S^*\Theta(z) \rangle_{\mathcal{HS}} dz \\ &= \int_{\mathbb{R}^{2d}} \langle \mathfrak{V}_S T(z), \Theta(z) \rangle_{\mathcal{HS}} dz \\ &= \Tilde{R}(\mathfrak{V}_S T) = R(T), \end{align*} i.e. that an arbitrary functional $R \in (\mathfrak{M}^{p,q}_m)'$ corresponds to an element $\theta\in\mathfrak{M}^{p',q'}_{1/m}$ with the dual action defined above. Thus we have shown the reverse inclusion of $(\mathfrak{M}^{p,q}_m)^* \subset \mathfrak{M}^{p',q'}_{1/m}$, and conclude $(\mathfrak{M}^{p,q}_m)' \cong \mathfrak{M}^{p',q'}_{1/m}$. \end{proof} \begin{remark}\label{Schatten behaviour} \normalfont In examples so far of $\mathfrak{M}^{p,q}_m$ operators, we have considered the rank one case, where one retrieves the familiar functions in the $M^{p,q}_m$ spaces and their associated relations. However, the $\mathfrak{M}^{p,q}_m(\mathbb{R}^d)$ spaces can also be related to the Schatten properties of operators, as seen by the inclusions \begin{align*} \mathcal{N}(L^2;M^1)\subseteq \mathfrak{M}^1 \subset \mathcal{S}^1 \subset \mathcal{HS} \subset \mathcal{L}(L^2) \subset \mathfrak{M}^{\infty} \subseteq \mathcal{L}(L^2;M^{\infty}). \end{align*} \end{remark} In particular we have a Gelfand triple $\mathfrak{M}^1_m \subset \mathcal{HS} \subset \mathfrak{M}^{\infty}_{1/m}$, where the embeddings are continuous. 
\subsection{Correspondence Principle for Operators} Finally we can give a characterisation of the spaces in terms of a coorbit structure: \begin{theorem} For any $S\in \mathfrak{A}_v$ such that $\|S\|_{\mathcal{HS}}=1$, we have an isometric isomorphism \begin{align*} \mathfrak{M}^{p,q}_m := \{ T\in \mathfrak{S}': \mathfrak{V}_S T \in L^{p,q}_m(\mathbb{R}^{2d};\mathcal{HS}) \} \cong \{\Psi\in L^{p,q}_m(\mathbb{R}^{2d};\mathcal{HS}): \Psi = \Psi \natural \mathfrak{V}_S S\}, \end{align*} under the mapping \begin{align*} T \mapsto \mathfrak{V}_S T. \end{align*}\todo{$\Psi$?} \end{theorem} \begin{proof} The inclusion $\mathfrak{V}_S(\mathfrak{M}_S^p) \subset \{\Psi\in L^p(\mathbb{R}^{2d};\mathcal{HS}): \Psi = \Psi \natural \mathfrak{V}_S S\}$ follows from \Cref{projidentity}. It remains to show the converse. We have that $\Psi \natural \mathfrak{V}_S S = \mathfrak{V}_S\mathfrak{V}_S^* \Psi$ for any $\Psi\in L^{p,q}_m(\mathbb{R}^{2d};\mathcal{HS})$. Hence if $\Psi = \Psi \natural \mathfrak{V}_S S$, then $\Psi = \mathfrak{V}_S R$, where $R=\mathfrak{V}_S^* \Psi \in\mathfrak{M}^{p,q}_m$, since $\mathfrak{V}_S^*:L^{p,q}_m\to\mathfrak{M}^{p,q}_m$. We recall that $\mathfrak{V}_S$ is injective on $\mathfrak{M}^{p,q}_m$, and the isometry property follows simply as a result of definitions of $\mathfrak{M}^{p,q}_m$ norms for a normalised $S$. Hence we have the correspondence principle; \begin{align*} \{ T\in \mathfrak{S}': \mathfrak{V}_S T \in L^{p,q}_m(\mathbb{R}^{2d};\mathcal{HS}) \} \cong \{\Psi\in L^{p,q}_m(\mathbb{R}^{2d};\mathcal{HS}): \Psi = \Psi \natural \mathfrak{V}_S S\}, \end{align*} for any $S\in\mathfrak{A}_v$ with $\|S\|_{\mathcal{HS}}=1$. \end{proof} \section{Atomic Decomposition}\label{atomdecomp} Coorbit spaces were introduced as a means of giving atomic decompositions with respect to unitary representations, and are fundamental to the field of time-frequency analysis for this reason. It is therefore natural, once one has such spaces, to consider the resulting discretisation. In particular we are interested in the discretisation of the identity \begin{align*} T = \mathfrak{V}_S^* \mathfrak{V}_S T \end{align*} for $T\in\mathfrak{M}^{p,q}_m$ and $S\in\mathfrak{A}_v$, and the g-frame condition \begin{align} A\|T\|_{\mathcal{HS}} \leq \sum_{\lambda \in \Lambda} \|S^*\pi(\lambda)^*T\|_{\mathcal{HS}} \leq B\|T\|_{\mathcal{HS}} \end{align} for some lattice $\Lambda\subset \mathbb{R}^{2d}$. We then proceed to consider the corresponding statement for operator modulation spaces, and interpret this as the statement that for $T$ an operator with poor time-frequency concentration, in some $\mathfrak{M}^{p,q}_m$ for large $p,q$, we can nonetheless decompose $T$ into well localised operators in the above manner. In \cite{skrett21}, a similar problem was considered, of the conditions for which decompositions of functions $\psi\in M^p_m(\mathbb{R}^d)$ of the form \begin{align*} \sum_{\lambda\in\Lambda} \alpha_{\lambda}(SS^*)\psi \end{align*} converge in a given norm. In that work the primary operators of interest were those of the form $S\in M^1_v(\mathbb{R}^d) \otimes M^1_v(\mathbb{R}^d)$, although as discussed in Remark 7.8 of that work, the same results hold for operators $S = \sum_n f_n \otimes g_n$ where $\{g_n\}_n$ is an orthonormal system in $L^2(\mathbb{R}^{2d})$, and $\{f_n\}_n\subset M^1_v$, with the condition $\sum_n \|f_n\|_{M^1_v} < \infty$. 
With the twisted convolution structure of our coorbit spaces already in place, atomic decomposition results can be derived in an almost identical way to the function case, as presented in chapter 12 of \cite{grochenigtfa}, with some slight changes to accommodate the operator setting, based on the Wiener Amalgam spaces defined in \cref{Wieneramalg}. We present the proofs here for completeness. \todo{Recall def of Wiener spaces?} \begin{lemma} \label{amalgconv} Given $G\in W(L^1_v(\mathbb{R}^{2d};\mathcal{HS}))$ and $F\in L^{p,q}_m(\mathbb{R}^{2d};\mathcal{HS})$ continuous functions, where $m$ is a $v$-moderate weight, we have \begin{align*} \|F\natural G\|_{W(L^{p,q}_m)} \leq C \|F\|_{L^{p,q}_m} \|G\|_{W(L^1_v)}. \end{align*} \end{lemma} \begin{proof} We construct the function $G_s(z)=\sum_{\lambda\in\Lambda} S_{\lambda}\cdot \chi_{\Omega_{\lambda}}(z)$, where $\Omega_{\lambda} = \lambda + [0,1]^{2d}$ and $S_{\lambda}$ is a value of $G$ in $\Omega_{\lambda}$ which maximises $\|G(z)\|_{\mathcal{HS}}$, which exists since $G$ is assumed to be continuous. Then $\|G(z)\|_{\mathcal{HS}} \leq \|G_s(z)\|_{\mathcal{HS}}$ and $\|G\|_{W(L^1_v)} = \|G_s\|_{W(L^1_v)}$. We then have \begin{align*} \|F\natural G\|_{W(L^{p,q}_m)} &\leq \sum_{\lambda\in\Lambda} \|S_{\lambda}\|_{\mathcal{HS}} \| F\natural T_{\lambda} \chi_{\Omega_0}\|_{W(L^{p,q}_m)} \\ &\leq \sum_{\lambda\in\Lambda} v(\lambda)\|S_{\lambda}\|_{\mathcal{HS}} \| F\natural \chi_{\Omega_0}\|_{W(L^{p,q}_m)} \\ &= \| F\natural \chi_{\Omega_0}\|_{W(L^{p,q}_m)} \|G\|_{W(L^1_v)}. \end{align*} We abuse notation here by taking the twisted convolution of a vector valued and scalar valued function, but we interpret $F\natural \chi_{\Omega_0}(z)$ simply as $\int_{z-\Omega_0} F(z')c(z,z')dz'$. We also comment that while $S_{\lambda}$ may not be the value of $G$ maximising $F\natural G$, it nonetheless provides the upper bound in the first line. We consider the sequence \begin{align*} a_{\lambda} &= \esssup_{z\in\Omega_{\lambda}} \|F\natural \chi_{\Omega_0}(z+\lambda)\|_{\mathcal{HS}} \\ &\leq \esssup_{z\in\Omega_{\lambda}} \int_{z + \lambda-\Omega_0} \|F(z')\|_{\mathcal{HS}}dz' \\ &\leq \int_{\lambda-\Tilde{\Omega}_0} \|F(z')\|_{\mathcal{HS}}dz' \\ &= (\|F\|_{\mathcal{HS}}*\chi_{\Tilde{\Omega}_0})(z + \lambda) \end{align*} where $\Tilde{\Omega}_0 = [-1,1]^{2d}$, and $\|F\|_{\mathcal{HS}}$ is considered a scalar valued $L^{p,q}_m$ function. Moreover, we see that $a_{\lambda}\chi_{\Omega_{\lambda}}(z) \leq (\|F\|_{\mathcal{HS}}*\chi_{\Check{\Omega}_0})(\lambda+z)$ for $z\in [0,1]^{2d}$, where here $\Check{\Omega}_0=[-2,2]^{2d}$, and so \begin{align*} \sum_{\lambda\in\Lambda} a_{\lambda}\chi_{\Omega_{\lambda}}(z) \leq (\|F\|_{\mathcal{HS}}*\chi_{\Check{\Omega}_0})(z). \end{align*} Finally we conclude \begin{align*} \| F\natural \chi_{\Omega_0}\|_{W(L^{p,q}_m)} &= \|a\|_{l^{p,q}_m} \\ &\leq C^\prime\|\sum_{\lambda\in\Lambda} a_{\lambda}\chi_{\Omega_{\lambda}} \|_{L^{p,q}_m} \\ &\leq C^\prime\| \|F\|_{\mathcal{HS}}*\chi_{\Check{\Omega}_0} \|_{L^{p,q}_m} \\ &\leq C\|F\|_{L^{p,q}_m} \|\chi_{\Check{\Omega}_0}\|_{L^1_v} \end{align*} where we have used Young's inequality for mixed norm spaces in the last line. The claim follows. \end{proof} \todo{Since, for $\varphi_0$....we have}In the function case, $V_{\varphi_0} \varphi_0 \in W(L^1_v(\mathbb{R}^{2d}))$, from which it follows that $\mathfrak{V}_{S_0} S_0 \in W(L^1_v(\mathbb{R}^{2d};\mathcal{HS}))$, since $\|\mathfrak{V}_{S_0} S_0(z)\|_{\mathcal{HS}}=|V_{\varphi_0} \varphi_0(z)|$. 
\begin{corollary}\label{l1amalg}\label{atomicwiener} If $T\in \mathfrak{M}^{p,q}_m$ and $S\in\mathfrak{A}_v$, then $\mathfrak{V}_S T \in W(L^{p,q}_m(\mathbb{R}^{2d};\mathcal{HS}))$. \end{corollary} \begin{proof} From \cref{amalgconv}, for any $S\in \mathfrak{A}_v$, $\mathfrak{V}_{S_0} S\in W(L^1_v(\mathbb{R}^{2d};\mathcal{HS}))$. By then considering $\mathfrak{V}_{S_0} S \natural \mathfrak{V}_S S_0$ in the equation \eqref{twistedconv}, it follows that $\mathfrak{V}_S S\in W(L^1_v(\mathbb{R}^{2d};\mathcal{HS}))$ again by \cref{amalgconv}. The corollary then follows for general $T\in \mathfrak{M}^{p,q}_m$ from \Cref{youngsineq} and \cref{amalgconv}. \end{proof} With these preliminaries the boundedness of the analysis operator follows painlessly as in the function case presented in \cite{grochenigtfa}; \begin{proposition} For $S\in\mathfrak{A}_v$, the analysis operator $C_S: \mathfrak{M}^{p,q}_{{m}}\to l^{p,q}_{\Tilde{m}}(\Lambda,\mathcal{HS})$, defined by \begin{align*} C_S(T) = \{S^*\pi(\lambda)^*T\}_{\lambda\in\Lambda}, \end{align*} is a bounded operator with norm \begin{align*} \|C_S\| \leq C\|\mathfrak{V}_S S\|_{W(L^1_v)}, \end{align*} where the constant $C$ depends only on the lattice $\Lambda$ and weight $v(z)$ \end{proposition} \begin{proof} By \cref{atomicwiener}, $\mathfrak{V}_S S \in W(L^1_v(\mathbb{R}^{2d};\mathcal{HS}))$. Since $\mathfrak{V}_S T$ is continuous, we have from \cref{atomicwiener} and \Cref{wieneramalg} that \begin{align*} \|C_S(T)\|_{l^{p,q}_{\Tilde{m}}(\Lambda,\mathcal{HS})} &= \|\mathfrak{V}_S T|_{\Lambda}\|_{l^{p,q}_{\Tilde{m}}(\Lambda,\mathcal{HS})} \\ &\leq C' \|\mathfrak{V}_S T\|_{W(L^{p,q}_m)} \\ &\leq C \|\mathfrak{V}_S S\|_{W(L^1_v)} \|T\|_{\mathfrak{M}^{p,q}_m}. \end{align*} \end{proof} On the other hand, we find that the synthesis operator is similarly bounded, again in the same manner as the function case of \cite{grochenigtfa}: \begin{proposition} For $S\in\mathfrak{A}_v$, the synthesis operator $D_S: l^{p,q}_{\Tilde{m}}(\Lambda,\mathcal{HS}) \to \mathfrak{M}^{p,q}_{{m}}$, defined by \begin{align*} D_S((T_{\lambda})_{\lambda\in\Lambda}) = \sum_{\lambda\in\Lambda} \pi(\lambda)S T_{\lambda} \end{align*} is a bounded operator with norm \begin{align*} \|D_S\| \leq C \|\mathfrak{V}_S S\|_{W(L^1_v)}. \end{align*} Convergence is interpreted to be unconditional for $p,q< \infty$, otherwise weak*, and the constant $C$ depends only on the lattice $\Lambda$ and weight $v(z)$. \end{proposition} \begin{proof} We are required to show that $\mathfrak{V}_S D_S ((T_{\lambda})_{\lambda\in\Lambda}) \in L^{p,q}_m$. By definition; \begin{align*} \|\mathfrak{V}_S D_S ((T_{\lambda})_{\lambda\in\Lambda})(z)\|_{\mathcal{HS}} &= \|\sum_{\lambda\in\Lambda} S^*\pi(z)^* \pi(\lambda)S T_{\lambda}\|_{\mathcal{HS}} \\ &\leq \sum_{\lambda\in\Lambda} \|\mathfrak{V}_S S(z-\lambda)\|_{\mathcal{HS}}\|T_{\lambda}\|_{\mathcal{HS}} \end{align*} We have from \Cref{l1amalg} that $\mathfrak{V}_S S\in W(L^1_v)$, so we denote once more $G(z) = \sum_{\lambda \in \Lambda} S_{\lambda}\cdot\xi_{\Omega_{\lambda}} (z)$, where $S_{\lambda}$ is the value of $\mathfrak{V}_S S$ maximising the norm over $\lambda+[0,1]^{2d}$ as in \Cref{amalgconv}. 
We see then that the $L^{p,q}_m$ norm is bounded (up to a constant) by the discrete $l^{p,q}_{\Tilde{m}}$ norm of the convolution of sequences $s=(\|S_{\lambda}\|_{\mathcal{HS}})$ and $t=(\|T_{\lambda}\|_{\mathcal{HS}})$, and hence \begin{align*} \|\mathfrak{V}_S D_S ((T_{\lambda})_{\lambda\in\Lambda})\|_{L^{p,q}_m} &\leq C' \|s*t\|_{l^{p,q}_{\Tilde{m}}} \\ &\leq C'' \|s\|_{l^1_{\Tilde{v}}}\|t\|_{l^{p,q}_{\Tilde{m}}}, \end{align*} and since $\|\mathfrak{V}_S S\|_{W(L^1_v)} = \|s\|_{l^1_{\Tilde{v}}}$, it follows that \begin{align*} \|D_S\| \leq C \|\mathfrak{V}_S S\|_{W(L^1_v)}. \end{align*} Unconditional convergence for $p,q<\infty$ follows from the boundedness of $D_S$, since finite sequences are dense in $l^{p,q}_m$. For the case $p=\infty$ or $q=\infty$, the same fact can be used for the series $\langle R, \sum_{\lambda} \pi(\lambda)S T_{\lambda}\rangle_{\mathfrak{M}^1_{v},\mathfrak{M}^{\infty}_{1/v}}$, for all $R\in\mathfrak{M}^1_v$. \end{proof} \begin{corollary} Given $S, R \in\mathfrak{A}_v$, the frame operator $\mathfrak{O}_{S,R}:= D_S C_R$ is a bounded operator on $\mathfrak{M}^{p,q}_m$ for all $1\leq p,q\leq \infty$ and $v$-moderate weights $m$, with operator norm \begin{align*} \|\mathfrak{O}_S\| \leq C \|\mathfrak{V}_S S\|_{W(L^1_v)} \|\mathfrak{V}_R R\|_{W(L^1_v)}. \end{align*} \end{corollary} As a final corollary, we see that Gabor g-frames for operators in $\mathfrak{A}_v$ generate equivalent norms on $\mathfrak{M}^{p,q}_m$. We note that while stated for general $S,R$, we can always consider the canonical dual frame $\{S^*\pi(\lambda)^*\mathfrak{O}^{-1}\}$ given a Gabor g-frame $S\in\mathfrak{A}_v$. \begin{corollary} If $S,R\in\mathfrak{A}_v$ are dual Gabor g-frames, so $\mathfrak{O}_{S,R} = I_{\mathcal{HS}}$, then $\mathfrak{O}_{S,R} = \mathfrak{O}_{R,S} = I_{\mathfrak{M}^{p,q}_m}$ where the sum is unconditional for all $1\leq p,q < \infty$, and weak* otherwise. Furthermore, there are constants $A,B$ such that \begin{align*} A\|T\|_{\mathfrak{M}^{p,q}_m} \leq \|S^*\pi(\lambda)^* T\|_{l^{p,q}_m} \leq B \|T\|_{\mathfrak{M}^{p,q}_m} \end{align*} (and similarly for $R$). \end{corollary} \begin{remark} \normalfont It would be nice to be able to decompose an operator $T\in\mathcal{HS}$ solely in terms of $\alpha_{\lambda}$ shifts of some window $S\in\mathcal{HS}$, that is, in the form $T=\sum_{\lambda} c_{\lambda} \alpha_{\lambda}(S)$. In general this is impossible, since there does not exist and operator $S\in\mathcal{HS}$ and lattice $\Lambda$ such that the linear span of $\{\alpha_{\lambda}(S)\}_{\lambda\in\Lambda}$ is dense in $\mathcal{HS}$ (Proposition 7.2, \cite{skrett20b}). This is roughly due to the fact that one must have control of both sides of the tensor product $L^2(\mathbb{R}^d)\otimes L^2(\mathbb{R}^d)$, such as in the case presented in \cite{balasz2008} wherein frames for $\mathcal{HS}$ are generated by two frames for $L^2(\mathbb{R}^d)$. Restricting to the set of positive $\mathcal{HS}$ operators, shifts of $S=\varphi_0 \otimes \varphi_0$ span a dense subset for any lattice for which $\varphi_0$ is a frame for $L^2(\mathbb{R}^d)$. However a positive $S$ may fail to be generate a dense subset for greater rank even when constructed from functions forming a multi-window Gabor frame, since for example even a rank-one $T=f\otimes f$ can only be reconstructed if the coefficients for all functions making up the multi-window Gabor frame in S are the coincide for $f$, ie $f=\sum_{\lambda} c_{\lambda} \sum_j \pi(\lambda)g_j$ where $S=\sum_j g_j \otimes g_j$. 
This inability to decompose $\mathcal{HS}$ operators solely as $\alpha_{\lambda}$ shifts of some window shows the necessity of g-frames in our setting. \end{remark} \subsection{Modulation Space Characterisation by Localisation Operators} In \cite{dor06} and \cite{dor11}, the authors consider the characterisation of modulation spaces by g-frames of translated localisation operators, initially for the Gelfand triple $(M^1(\mathbb{R}^d),L^2(\mathbb{R}^d),M^{\infty}(\mathbb{R}^d))$, and later for general $M^p_m(\mathbb{R}^d)$: \begin{theorem}[Theorem 8, \cite{dor11}] Let $\varphi\in M^1_v(\mathbb{R}^d)$ be non-zero and $h\in L^1_v(\mathbb{R}^{2d})$ be some non-negative symbol satisfying \begin{align}\label{locopframecond} A \leq \sum_{\lambda\in\Lambda} h(z-\lambda) \leq B \end{align} for positive constants $A,B$, and almost all $z\in\mathbb{R}^{2d}$. Then for every $v$-moderate weight $m$ and $1\leq p < \infty$, the function $f\in M^{\infty}_{1/v}(\mathbb{R}^d)$ belongs to $M^p_m(\mathbb{R}^d)$ if and only if \begin{align*} (\sum_{\lambda\in\Lambda} \|A_h^{\varphi} \pi(\lambda)^*f\|^p_{L^2}m(\lambda))^{1/p} < \infty, \end{align*} where $A_h^{\varphi}:f \mapsto V_{\varphi}^*(h\cdot V_{\varphi}f)$ is the localisation operator with symbol $h$. In this case the left-hand side is an equivalent norm to $\|\cdot\|_{M^p_m}$. Similarly for $p=\infty$: \begin{align*} \|f\|_{M^{\infty}_m} \asymp \sup_{\lambda\in\Lambda} \|A_h^{\varphi} \pi(\lambda)^*f\|_{L^2}m(\lambda). \end{align*} \end{theorem} We note in particular that condition \eqref{locopframecond} gives criteria for $A_h^{\varphi}$ to generate a Gabor g-frame, which one sees by considering $p=2$, $m\equiv 1$. We can characterise operator coorbit spaces similarly. We confirm that we can consider Gabor g-frames on $L^2(\mathbb{R}^d)$ in the operator setting: \begin{proposition} $S$ generates a Gabor g-frame on $L^2(\mathbb{R}^d)$ if and only if $S$ generates a Gabor g-frame on $\mathcal{HS}$, with the same frame constants. \end{proposition} \begin{proof} Assume $S$ generates a Gabor g-frame on $L^2(\mathbb{R}^d)$, that is \begin{align} A\|f\|_{L^2}^2 \leq \sum_{\lambda\in\Lambda} \|S^*\pi(\lambda)^*f\|_{L^2}^2 \leq B\|f\|_{L^2}^2 \end{align} for all $f\in L^2(\mathbb{R}^d)$. Then any $T\in\mathcal{HS}$ can be decomposed as $T=\sum_n f_n \otimes e_n$ for some orthonormal set $\{e_n\}_n$, and the trace is taken with respect to $\{e_n\}_n$ (which can be extended to an orthonormal basis if $T$ is not of full rank): \begin{align*} \sum_{\lambda\in\Lambda}\|S^*\pi(\lambda)^*T\|_{\mathcal{HS}}^2 &= \sum_{\lambda\in\Lambda} \sum_{n\in\mathbb{N}} \langle S^*\pi(\lambda)^*T e_n, S^*\pi(\lambda)^*T e_n\rangle_{L^2} \\ &= \sum_{\lambda\in\Lambda} \sum_{n\in\mathbb{N}} \|S^*\pi(\lambda)^*f_n\|_{L^2}^2. \end{align*} The Gabor g-frame condition on $L^2(\mathbb{R}^d)$ then gives \begin{align*} \sum_{n} A\|f_n\|_{L^2}^2 \leq \sum_{n\in\mathbb{N}} \sum_{\lambda\in\Lambda} \|S^*\pi(\lambda)^*f_n\|_{L^2}^2 \leq \sum_{n} B\|f_n\|_{L^2}^2, \end{align*} and so \begin{align} A\|T\|_{\mathcal{HS}}^2 \leq \sum_{\lambda\in\Lambda}\|S^*\pi(\lambda)^*T\|_{\mathcal{HS}}^2 \leq B\|T\|_{\mathcal{HS}}^2. \end{align} The opposite direction follows by the same expansion with a rank-one operator. \end{proof} The following characterisation then uses Proposition 7.14 of \cite{skrett21}, which states that given $h\in L^1_{v^2}(\mathbb{R}^{2d})$, $A_h^{\varphi}\in M^1_v(\mathbb{R}^d)\otimes M^1_v(\mathbb{R}^d)$, which in particular tells us $A_h^{\varphi}\in\mathfrak{A}_v$.
\begin{corollary} Given $h\in L^1_{v^2}(\mathbb{R}^{2d})$ satisfying \eqref{locopframecond} and some $v$-moderate $m$, the operator $T\in\mathfrak{M}^{\infty}_{1/v}$ belongs to $\mathfrak{M}^{p,q}_m$ if and only if \begin{align*} \big\{ A_{\overline{h}}^{\varphi} \pi(\lambda)^* T \big\}_{\lambda\in\Lambda} \in l^{p,q}_m(\Lambda;\mathcal{HS}). \end{align*} \end{corollary} In a similar manner to \Cref{tfintuition}, this corollary supports the intuition that the $\mathfrak{M}^{p,q}_m$ condition measures time-frequency decay in the operator sense. We often consider localisation operators with symbol $h$ having essential support concentrated in some domain $\Omega$, such as the characteristic function $\chi_{\Omega}$. Hence $A_h^{\varphi}$ can be seen as measuring the time-frequency concentration of a function in $\Omega$. With this intuition we can consider $ A_{\overline{h}}^{\varphi}\pi(\lambda)^*T$ as measuring how much $T$ concentrates a function in some domain $\Omega + \lambda$, and thus we interpret the sum over $\lambda$ as a measure of the extent to which $T$ spreads out functions in the time-frequency plane. \section{Final Remarks} This paper introduces an operator STFT, a novel concept with potential both in theoretical settings and in applications to data analysis and quantum harmonic analysis. The main results of the paper arise from representing an ensemble of data points or signals with respect to a joint time-frequency representation, which captures correlations between data points in the time-frequency plane (\Cref{dataopex}). We show that the operator STFT has many of the familiar properties of the function STFT, and in particular that the spaces produced by the operator STFT with a fixed window are reproducing kernel Hilbert spaces. The Toeplitz operators associated with such spaces are the mixed-state localisation operators, an observation that supports the notion of the operator STFT extending the function STFT to an appropriate object for quantum harmonic analysis (\Cref{repkernstruc}). From a functional data analysis point of view, stable representations of continuous data are the ideal, and so having a reproducing structure when analysing data sets ensures stability with respect to noise and small perturbations in the incoming data. Furthermore, by extending the spaces of operators one considers, we are able to define coorbit spaces for operators. It turns out that we thus obtain Banach spaces of operators behaving analogously to the function coorbit spaces, with regard to duality, the equivalence of windows in the definition, and even the correspondence principle for the coorbit spaces (\Cref{opcoorbit}). If we interpret function coorbit spaces as those functions appropriately concentrated in a region of the time-frequency plane, we can consider the operator coorbit spaces as those operators which \textit{act} on functions in a concentrated region of the time-frequency plane (\cref{tfintuition}). These coorbit spaces of operators turn out to have the remarkable property of decomposition via Gabor g-frames, which says that given an operator in a coorbit space, we can write the operator as a sum of translations of well-localised operators (\Cref{atomdecomp}). \pagebreak \Addresses \end{document}
\begin{document} \begin{center} {\bf Optimal Stencils in Sobolev Spaces}\\[2mm] Oleg Davydov\footnote{ Univ. Gie\ss{}en, [email protected]\\ https://www.staff.uni-giessen.de/odavydov/} and Robert Schaback\footnote{ Univ. G\"ottingen, [email protected]\\ http://num.math.uni-goettingen.de/schaback/research/group.html}\\ Draft of \today \end{center} {\bf Abstract}: This paper proves that the approximation of pointwise derivatives of order $s$ of functions in Sobolev space $W_2^m(\ensuremath{\mathbb{R}}^d)$ by linear combinations of function values cannot have a convergence rate better than $m-s-d/2$, no matter how many nodes are used for approximation and where they are placed. These convergence rates are attained by {\em scalable} approximations that are exact on polynomials of order at least $\lfloor m-d/2\rfloor +1$, proving that the rates are optimal for given $m,\,s,$ and $d$. And, for a fixed node set $X\subset\ensuremath{\mathbb{R}}^d$, the convergence rate in any Sobolev space $W_2^m(\Omega)$ cannot be better than $q-s$ where $q$ is the maximal possible order of polynomial exactness of approximations based on $X$, no matter how large $m$ is. In particular, scalable stencil constructions via polyharmonic kernels are shown to realize the optimal convergence rates, and good approximations of their error in Sobolev space can be calculated via their error in Beppo-Levi spaces. This allows one to construct near-optimal stencils in Sobolev spaces stably and efficiently, for use in meshless methods to solve partial differential equations via generalized finite differences (RBF-FD). Numerical examples are included for illustration. \section{Introduction}\label{SecIntro} We consider discretizations of continuous linear functionals $\lambda\;:\;U\to\ensuremath{\mathbb{R}}$ on some normed linear space $U$ of real-valued functions on some bounded domain $\Omega\subset\ensuremath{\mathbb{R}}^d$. The discretizations are {\em nodal}, i.e. they work with values $u(x_j)$ of functions $u\in U$ on a set $X=\{x_1,\ldots,x_M\} \subset\Omega$ of {\em nodes} by \begin{equation}\label{eqapp} \lambda(u)\approx \lambda_{a,X}(u):= \displaystyle{\sum_{j=1}^Ma_ju(x_j) }\hbox{ for all } u\in U. \end{equation} The background is that most operator equations can be written as infinitely many linear equations $$ \lambda(u)=f_\lambda \hbox{ for all } \lambda\in \Lambda\subset U^*, $$ where the functionals evaluate weak or strong derivatives or differential operators like the Laplacian or take boundary values. This means that the classical approach of {\em meshless methods} is taken, namely to write the approximations {\it entirely in terms of nodes} \cite{belytschko-et-al:1996-1}. Our concern is to find {\em optimal} approximations in Sobolev space $W_2^m(\Omega)$ for domains $\Omega\subset \ensuremath{\mathbb{R}}^d$. Their calculation is computationally costly and very unstable, but we shall prove that there are {\em suboptimal} approximations that can be calculated cheaply and stably, namely via {\em scalable} approximations that have a certain exactness on polynomials (Section \ref{SecOCiSS}) and may be constructed via polyharmonic kernels (Section \ref{SecPHS}). In particular, we shall show that they can have the same convergence rate as the optimal approximations, and we present the minimal assumptions on the node sets to reach that optimal rate.
The application for all of this is that error bounds and convergence rates for nodal approximations to linear functionals enter into the consistency part of the error analysis \cite{schaback:2016-1} of nodal meshless methods. These occur in many papers in Science and Engineering, e.g. \cite{aboiyar-et-al:2010-1,agarwal-basu:2012-1, bayona-et-al:2012-1,chandhini-sanyarisanu:2007-1, flyer-et-al:2015-1,flyer-et-al:2016-1, gerace-et-al:2009,hoang-et-al:2012-1,hosseini-hashemi:2011-1,iske:2013-1, sarler:2007-1,shankar-et-al:2015-1,shu-et-al:2003-1,shu-et-al:2005-1, stevens-et-al:2011-1,thai-et-al:2012-1,tolstykh:2000-1, vertnik-sarler:2011-1,yao-et-al:2011-1,yao-et-al:2012-1 }, and several authors have analyzed the construction of nodal approximations mathematically, e.g. \cite{davydov-oanh:2011-1,davydov-oanh:2011-2, davydov-et-al:2016-1,iske:1995-1, iske:2003-1, larsson-et-al:2013-1,wright-fornberg:2006-1}, but without considering optimal convergence rates. To get started, we present a suitable notion of {\em scalability} in Section \ref{SecSca} that allows to define error functionals $\epsilon_h\in U^*$ based on the scaled point set $hX$ for small $h>0$ and to prove {\em convergence rates} $k$ in the sense that error bounds of the form $\|\epsilon_h\|_{U^*}\leq Ch^k$ hold for $h\to 0$. The standard {\em derivative order} $|\alpha|$ of a pointwise multivariate derivative functional $\lambda(u):=D^\alpha u(0)$ will reappear as a {\em scaling} order $s(\lambda)$ that governs how the approximations of a functional $\lambda$ scale for $h\to 0$. Of course, optimal error bounds will crucially depend on the space $U$ and the node set $X$. If $U$ contains all real-valued polynomials, the achievable convergence rate of an approximation of a functional $\lambda$ based on a node set $X$ is limited by the maximal convergence rate on the subspace of polynomials. Section \ref{SecEAoP} will prove that the upper limit of the convergence rate on polynomials is $q_{max}(\lambda,X)-s(\lambda)$ where $q_{max}$ is the maximal order of polynomials on which the approximation is exact, and that this rate can be reached by {\em scalable} approximations constructed via exactness on polynomials. But even if the node set $X$ is large enough to let approximations be exact on high-order polynomials, the convergence rate may be restricted by limited smoothness of the functions in $U$. In Sobolev spaces $W_2^m(\ensuremath{\mathbb{R}}^d)$ or $W_2^m(\Omega)$ with $\Omega\subset\ensuremath{\mathbb{R}}^d$ the achievable rate for arbitrarily large node sets $X$ turns out to be bounded above by $m-d/2-s(\lambda)$ in Section \ref{SecOCiSS}, so that \begin{equation}\label{eqOptRate} \min\left(m-d/2-s(\lambda),q_{max}(\lambda,X)-s(\lambda)\right), \end{equation} is a general formula for an upper bound on the convergence rate in Sobolev space $W_2^m(\ensuremath{\mathbb{R}}^d)$, and this is confirmed by numerical experiments in Section \ref{SecExa}. Then Sections \ref{SecEAoP}, \ref{SecOCiSS}, and \ref{SecPHS} prove that the convergence rate \eref{eqOptRate} is {\em optimal}, and it can be achieved by {\em scalable} stencils based solely on exactness on polynomials. Furthermore, Section \ref{SecSC} gives a sufficient condition for the convergence of optimal stencils to scalable stencils. A particularly interesting case is the {\em best compromise} case where the two constraints on the convergence rate are equal, i.e. \begin{equation}\label{eqcompro} q_{max}(\lambda,X)=\lceil m-d/2\rceil. 
\end{equation} For a given smoothness $m$ it yields the sparsest approximation that has the optimal convergence rate (or comes arbitrarily close to it if $m-d/2$ is an integer), and for a given sparsity via $X$ it provides the minimal smoothness that is required to realize the maximal possible rate of convergence using that node set. The numerical examples are collected in Section \ref{SecExa}, while the final Section \ref{SecU} summarizes our results and points out a few open problems for further research. \section{Scalability}\label{SecSca} We now study the behavior of functionals and their approximations under {\em scaling}. \begin{definition}\label{defScaOrd} \begin{enumerate} \item A domain $\Omega\subset\ensuremath{\mathbb{R}}^d$ is {\em scalable}, if it contains the origin as an interior point and satisfies $h\Omega\subseteq\Omega$ for all $0\leq h\leq 1$, i.e. if $\Omega$ is star-shaped with respect to the origin. \item A space $U$ of functions on a scalable domain $\Omega$ is {\em scalable}, if $u(h\cdot)$ is in $U$ for all $0<h\leq 1$ and all $u\in U$. \item A functional $\lambda\in U^*$ on a scalable space $U$ has {\em scaling order} or {\em homogeneity order} $s$ if $$ \lambda(u(h\cdot)) =h^{s}\lambda(u) \hbox{ for all } u\in U. $$ \end{enumerate} \end{definition} Of course, this means that the functional $\lambda$ must be local in or near the origin. For example, the standard strong functionals are modelled by multivariate derivatives $$ \lambda_\alpha(u)=\dfrac{\partial^\alpha u}{\partial x^\alpha}(0) $$ at zero, with the scaling behaviour $$ \begin{array}{rcl} \lambda_\alpha(u(h\cdot)) &=&h^{|\alpha|}\lambda_\alpha(u) \end{array} $$ showing that the scaling order coincides with the order of differentiation here. This generalizes to all linear homogeneous differential operators, e.g. the Laplacian. Having dealt with scalability of $\lambda$, we now turn to scalability of the nodal approximation $\lambda_{a,X}$ of \eref{eqapp}. To match the scalability order $s$ of $\lambda$, we should assume the same $h$ power for $\lambda_{a,X}$, and consider $$ h^{-s}\lambda_{a,X}(u(h\,\cdot))=\displaystyle{\sum_{j=1}^Ma_jh^{-s}u(hx_j)} = \lambda_{ah^{-s},hX}(u) $$ for all $u\in U$ and $0<h\leq 1$. This is the right notion of scalability for the approximation, but now we need the $h$ dependence and refrain from setting this equal to $\lambda_{a,X}(u)$ as in Definition \ref{defScaOrd}. \begin{definition}\label{defErrScaOrd} \begin{enumerate} \item An approximation \eref{eqapp} to a scalable functional $\lambda$ of scaling order $s$ is {\em scalable} of the same order, if the error functional is scalable of order $s$, i.e. \begin{equation}\label{eqerrfunct} \epsilon_h(u):=\lambda(u)-\lambda_{ah^{-s},hX}(u) =h^{-s}(\lambda-\lambda_{a,X})(u(h\,\cdot))=h^{-s}\epsilon_1(u(h\cdot)) \end{equation} for all $u\in U,\;0<h\leq 1$. \item A scalable approximation \eref{eqerrfunct} will be called a {\em stencil}. \item If an approximation \eref{eqapp} is given for $h=1$, and if the functional $\lambda$ has scaling order $s$, the transition to \eref{eqerrfunct} by using weights $a_jh^{-s}$ in the scaled case will be called {\em enforced} scaling. \end{enumerate} \end{definition} Standard examples are the five-point star approximation $$ -\Delta u(0,0)\approx \dfrac{1}{h^2}(4u(0,0)-u(0,h)-u(0,-h)-u(h,0)-u(-h,0)) $$ to the Laplacian in 2D, and all other notions of generalized divided differences that apply to scaled node sets $hX$.
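To make the notion of enforced scaling concrete, the following small Python sketch (an illustration added here, not part of the original text) applies the weights $a_jh^{-s}$ of the five-point star to the scaled nodes $hX$ and checks that the error for a smooth test function decays at the expected rate; the test function $u(x,y)=\cos x\cos y$ and the sequence of $h$ values are arbitrary choices made for the illustration.
\begin{verbatim}
import numpy as np

# Five-point star for -Delta u(0,0): nodes X and unscaled weights a (case h = 1)
X = np.array([[0.0, 0.0], [1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
a = np.array([4.0, -1.0, -1.0, -1.0, -1.0])
s = 2                                    # scaling order of -Delta at the origin

u = lambda x, y: np.cos(x) * np.cos(y)   # smooth test function, -Delta u(0,0) = 2
for h in [0.1, 0.05, 0.025, 0.0125]:
    # enforced scaling: weights a_j * h^{-s} on the scaled nodes h * x_j
    approx = h ** (-s) * (a @ u(h * X[:, 0], h * X[:, 1]))
    print(h, abs(approx - 2.0))          # errors decay like h^2 here
\end{verbatim}
The observed quadratic decay anticipates the rate $q-s=4-2=2$ on polynomials discussed in the next section; smoothness is not a limiting factor for this analytic test function.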
The scaled form in \eref{eqerrfunct} allows the very simple error bound $$ |\epsilon_h(u)|\leq h^{-s}\|\lambda- \lambda_{a,X}\|_{U^*}\|u(h\cdot)\|_{U} \hbox{ for all } u\in U $$ that is useful if $\|u(h\cdot)\|_{U}$ is accessible and behaves nicely for $h\to 0$. Weights of scalable approximations can be calculated at large scales and then scaled down by multiplication. This bypasses instabilities for small $h$ and saves a lot of computational work, in particular if applications work on multiple scales or if meshless methods use the same geometric pattern of nodes repeatedly, e.g. in Meshless Local Petrov Galerkin \cite{atluri:2005-1} techniques. However, {\em optimal} approximations in Sobolev spaces will not be scalable. This is why the rest of the paper studies how close scalable approximations come to the optimal ones analyzed in \cite{davydov-schaback:2016-2}. \section{Optimal Convergence on Polynomials}\label{SecEAoP} We first relate the approximation error of nodal approximations to exactness on polynomials and assume that a scalable functional $\lambda$ of scaling order $s$ is given that is applicable to all $d$-variate polynomials. This will be true, for instance, in all Sobolev spaces $W_2^m(\Omega)$ for bounded scalable domains $\Omega\subset\ensuremath{\mathbb{R}}^d$. The space of all real-valued $d$-variate polynomials up to order $q$ will be denoted by ${\cal P}_q^d$, and for a given node set $X\subset\ensuremath{\mathbb{R}}^d$ and a functional $\lambda$ we define $$ q_{max}(\lambda,X)=\max\{q\;:\; \lambda- \lambda_{a,X}=0 \hbox{ on } {\cal P}_q^d \hbox{ for some } a \in\ensuremath{\mathbb{R}}^{|X|}\} $$ to be the maximal possible {\em polynomial exactness order} (abbreviated by PEO in the figures of the examples) of a nodal approximation \eref{eqapp} to $\lambda$ based on $X$. \begin{theorem}\label{theqsbnd} Consider a fixed set $X\subset\ensuremath{\mathbb{R}}^d$ and a functional $\lambda$. If a sequence of general nodal approximations $\lambda_{a(h),hX}$ converges to $\lambda$ on a space spanned by finitely many monomials, then $X$ admits an approximation to $\lambda$ that is exact on these monomials. \end{theorem} \begin{proof} Due to \begin{equation}\label{eqpolscale} \lambda(x^\alpha)- \lambda_{a(h),hX}(x^\alpha) = \lambda(x^\alpha)- \lambda_{a(h)h^{|\alpha|},X}(x^\alpha), \end{equation} convergence of functionals $\lambda_{a(h),hX}$ to $\lambda$ on a set of monomials implies that the error of the best approximation to $\lambda$ by functionals $\lambda_{a,X}$, restricted to the space spanned by those monomials, is zero. \end{proof} We now know an upper bound for the maximal order of polynomials for which approximations can be convergent, if $X$ and $\lambda$ are fixed. This order can be achieved for scalable stencils: \begin{theorem}\label{thePolOrd} If all polynomials are in $U$, the convergence rate of a scalable stencil of scaling order $s$ based on a point set $X$ on all polynomials is exactly $q_{max}(\lambda,X)-s$ if the stencil is exact on ${\cal P}_q^d$ for $q=q_{max}(\lambda,X)$. The convergence rate on all of $U$ is bounded above by $q_{max}(\lambda,X)-s$. \end{theorem} \begin{proof} We apply \eref{eqpolscale} in the scalable situation and get $$ h^{-s}\lambda((hx)^\alpha)- h^{-s} \lambda_{a,X}((hx)^\alpha) = h^{-s+|\alpha|}\left(\lambda(x^\alpha)- \lambda_{a,X}(x^\alpha)\right), $$ proving the assertion. 
\end{proof} Consequently, if a node set $X=\{x_1,\ldots,x_M\}$ is given, if the application allows all polynomials, and if one wants a scalable stencil, the best one can do is to take a stencil with maximal order $q_{max}(\lambda,X)$ of polynomial exactness. It will lead to a scalable stencil with the optimal convergence rate among all approximations. Additional tricks cannot improve that rate, but it can be smaller due to restricted smoothness of functions in $U$. This will be the topic of Section \ref{SecOCiSS}. If exactness of order $q$ is required in applications, one takes a basis $p_1,\ldots,p_Q$ of the space $\mathcal{P}_q^d$ of $d$-variate polynomials of order $q$ with $Q=\dim \mathcal{P}_q^d={q+d-1\choose d}$ and has to find a solution of the linear system \begin{equation}\label{eqlinsys} \lambda(p_k)=\displaystyle{\sum_{j=1}^Ma_jp_k(x_j),\;1\leq k\leq Q }. \end{equation} Such a solution may exist even in the case $M<Q$, the simplest example being the five-point star in 2D for $\lambda(u)=\Delta u(0)$ which is exact of order 4, while $M=5<Q=10$. For general point sets, there is no way around setting up and solving the above linear system. If the system has a solution, we get a stencil by enforced scaling and with error $$ h^{-s}\lambda(u(h\cdot))-h^{-s}\displaystyle{\sum_{j=1}^Ma_ju(hx_j)} $$ which then is polynomially exact of order $q$ and has convergence rate $k=q-s$, but only on polynomials. If $U$ contains functions of limited smoothness, this convergence rate will not be attained for all functions in $U$. We shall prove in Section \ref{SecOCiSS} that the convergence rate in $W_2^m(\Omega)$ for $\Omega\subseteq\ensuremath{\mathbb{R}}^d$ is limited by $m-s-d/2$, no matter how large the order $q$ of polynomial exactness on $X$ is. To make this construction partially independent of the functionals, we add \begin{definition}\label{defqX} A finite point set $X=\{x_1,\ldots,x_M\}\subset\ensuremath{\mathbb{R}}^d$ has {\em polynomial reproduction} of order $q$, if all polynomials in $\mathcal{P}_q^d$ can be recovered from their values on $X$. \end{definition} \begin{theorem}\label{theExOnPolRep} If the set $X$ allows polynomial reproduction of order $q$, then all admissible linear functionals of scaling order $s\leq q$ have a stencil that is exact at least of order $q$, by applying $\lambda$ to a Lagrange basis of $\mathcal{P}_q^d$. This stencil has convergence rate at least $q-s$ on polynomials. \end{theorem} \begin{proof} Let the set $X$ allow polynomial reproduction of order $q$. Then, for $Q=\dim \mathcal{P}_q^d$, there are polynomials $p_1,\ldots,p_Q$ and a subset $Y=\{y_1,\ldots,y_Q\}\subseteq X$ such that the representation $$ p(x)=\displaystyle{\sum_{j=1}^Qp(y_j)p_j(x)\hbox{ for all } p\in \mathcal{P}_q^d } $$ holds, and the matrix of values $p_k(y_j),\;1\leq j,k\leq Q$ is the identity. This implies $Q\leq M$, and the stencil satisfying $$ \lambda(p)=\displaystyle{\sum_{j=1}^Qp(y_j)\lambda(p_j)\hbox{ for all } p\in \mathcal{P}_q^d } $$ with weights $a_j:=\lambda(p_j)$ is exact on $\mathcal{P}_q^d$. The rest follows as above. \end{proof} But note that the five-point star is an example of an approximation on a set that has polynomial reproduction only of order $2$, while it has a scalable stencil for the Laplacian that is exact on polynomials of order up to $4$ and convergent of rate 2. The application of Theorem \ref{theExOnPolRep} would require polynomial reproduction of order $4$ for the same convergence rate.
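As an illustration of the linear system \eref{eqlinsys} (a sketch added here, not the authors' code), the following Python snippet assembles the system for $\lambda(u)=\Delta u(0)$ in 2D with the ten monomials of ${\cal P}_4^2$ on the five nodes of the five-point star and solves it by least squares; since the system is consistent and has full column rank, its unique solution reproduces the classical weights.
\begin{verbatim}
import numpy as np

X = np.array([[0.0, 0.0], [1.0, 0.0], [-1.0, 0.0],
              [0.0, 1.0], [0.0, -1.0]])          # five-point star nodes, h = 1
q = 4                                            # polynomial exactness order

# monomial basis x^i y^j of P_q^2 (total degree < q) and lambda(p) = Delta p(0)
exps = [(i, j) for i in range(q) for j in range(q) if i + j < q]
lam = lambda i, j: 2.0 if (i, j) in [(2, 0), (0, 2)] else 0.0

A = np.array([[x ** i * y ** j for (x, y) in X] for (i, j) in exps])  # Q x M
b = np.array([lam(i, j) for (i, j) in exps])                          # Q entries
a, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(a, 12))   # -> [-4.  1.  1.  1.  1.], the five-point star for Delta
\end{verbatim}
Enforced scaling of these weights by $h^{-2}$ then yields the stencil on the scaled nodes $hX$ as in Section \ref{SecSca}.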
In general, one can use the $M$ given nodes for getting exactness on polynomials of maximal order, and then there can be additional degrees of freedom because the $Q\times M$ linear system \eref{eqlinsys} may be nonuniquely solvable. The paper \cite{davydov-schaback:2016-1} deals with various techniques to use the additional degrees of freedom, e.g. for minimizing the $\ell_1$ norm of the weights. In all cases the result is scalable and then this paper applies as well. On the other hand, the paper \cite{davydov-schaback:2016-2} focuses on non-scalable approximations induced by kernels. Both papers perform their convergence analysis mainly for single approximations. While this paper focuses on convergence rates in Sobolev spaces, \cite{davydov-schaback:2016-2} considers Hölder spaces and Sobolev spaces $W_\infty^r$. A third way to use additional degrees of freedom is to take optimal stencils for polyharmonic kernels in Beppo-Levi spaces, see Section \ref{SecPHS}. But before we go over from polynomials to these spaces, we remark that many application papers use meshless methods to solve problems that have true solutions $u^*$ with rapidly convergent power series representations (see e.g \cite{kansa:2015-1} for a recent example with $u^*(x,y)=\exp(ax+by)$). In such cases, a high order of polynomial exactness pays off, but as soon as the problem is treated in Sobolev space, this advantage is gone. A truly worst-case analysis of nodal meshless methods is in \cite{schaback:2016-1}. This discussion showed that on polynomials one can get stencils of arbitrarily high convergence rates, provided that there are enough nodes to ensure exactness on high-degree polynomials. For working on spaces of functions with limited smoothness, the latter will limit the convergence rate of the stencil, and we want to show how. \section{Optimal Convergence in Sobolev Spaces}\label{SecOCiSS} Our goal is to reach the optimal convergence rates in Sobolev spaces via cheap, scalable, and stable stencils, and for this we need to know those rates. But before that, we want to eliminate the difference between local and global Sobolev spaces, as far as convergence {\em rates} are concerned. Local Sobolev functionals are global ones due to $W_2^m(\Omega)^*\subset W_2^m(\ensuremath{\mathbb{R}}^d)^*$ that follows from $W_2^m(\Omega)\supset W_2^m(\ensuremath{\mathbb{R}}^d)$ for Lipschitz domains. This implies that we can evaluate the norm of each functional $\lambda\in W_2^m(\Omega)^*$ in $W_2^m(\ensuremath{\mathbb{R}}^d)^*$ via the kernel, up to a fixed multiplicative constant. For the other way round and in the scalable case, we consider the subspace $L_\Omega$ of all point-based functionals $ \lambda_{a,X} \in W_2^m(\ensuremath{\mathbb{R}}^d)^*$ with sets $X\subset\Omega$ and $a\in\ensuremath{\mathbb{R}}^{|X|}$ for a scalable domain $\Omega\subset\ensuremath{\mathbb{R}}^d$ and form its closure $\mathcal{L}_\Omega$ under the kernel-based $W_2^m(\ensuremath{\mathbb{R}}^d)^*$ norm. Exactly these functionals are those that we study here. Since the spaces $W_2^m(\ensuremath{\mathbb{R}}^d)$ and $W_2^m(\Omega)$ are norm-equivalent, the limit process is the same in $W_2^m(\Omega)$, and therefore we have that $\mathcal{L}_\Omega\subset W_2^m(\Omega)^*$. \begin{theorem}\label{theCREqui} The functionals considered here are always in the space $\mathcal{L}_\Omega\subset W_2^m(\Omega)^*$, and their norm can be evaluated in $W_2^m(\ensuremath{\mathbb{R}}^d)^*$ up to a space- and domain- dependent constant. 
The convergence rates in $W_2^m(\Omega)^*$ and $W_2^m(\ensuremath{\mathbb{R}}^d)^*$ are the same.\qed \end{theorem} In Section \ref{SecPHS} we shall extend this argument to Beppo-Levi spaces. \begin{theorem}\label{theopt} The convergence rate of any nodal approximation to a scalable functional $\lambda$ of scalability order $s$ on $W_2^m(\ensuremath{\mathbb{R}}^d)$ with $m>d/2$ is at most $m-s-d/2$. \end{theorem} \begin{proof} We need $m>d/2$ to let the nodal approximations $\lambda_{a,X}$ of \eref{eqapp} be well-defined. Then we take a ``bump'' function $v\in W_2^m(\ensuremath{\mathbb{R}}^d)$ that vanishes on $X$ and has $\lambda(v)\neq 0$. Now we scale and consider $\lambda_{a(h),hX}$ as an approximation on $hX$ with error functional $$ \epsilon_h=\lambda-\lambda_{a(h),hX}. $$ Then $$ \begin{array}{rcl} \epsilon_h(v(\cdot/h)) &=& \lambda(v(\cdot/h))-\lambda_{a(h),hX}(v(\cdot/h))\\ &=& h^{-s}\lambda(v)-0\\ \end{array} $$ and $$ \begin{array}{rcl} \|v(\cdot/h)\|^2_{W_2^m(\ensuremath{\mathbb{R}}^d)} &=& \displaystyle{\sum_{|\alpha|\leq m}\int_{\ensuremath{\mathbb{R}}^d}|D^\alpha(v(\cdot/h))|^2\,dx}\\ &=& \displaystyle{\sum_{|\alpha|\leq m}h^{-2|\alpha|}\int_{\ensuremath{\mathbb{R}}^d}|D^\alpha(v)(x/h)|^2dx}\\ &=& \displaystyle{h^d\sum_{|\alpha|\leq m}h^{-2|\alpha|}\int_{\ensuremath{\mathbb{R}}^d}|D^\alpha(v)(y)|^2dy}\\ &\leq& \displaystyle{h^{d-2m}\|v\|^2_{W_2^m(\ensuremath{\mathbb{R}}^d)}}\\ \end{array} $$ leading to $$ \begin{array}{rcl} \|\epsilon_h\|_{W_2^m(\ensuremath{\mathbb{R}}^d)^*} &=& \displaystyle{ \sup_{u\in W_2^m(\ensuremath{\mathbb{R}}^d)\setminus \{0\}} \dfrac{|\epsilon_h(u)|}{\|u\|_{W_2^m(\ensuremath{\mathbb{R}}^d)}}}\\[0.5cm] &\geq& \displaystyle{\dfrac{|\epsilon_h(v(\cdot/h))|}{\|v(\cdot/h)\|_{W_2^m(\ensuremath{\mathbb{R}}^d)}}}\\[0.5cm] &\geq&h^{-s} \displaystyle{\dfrac{|\lambda(v)|}{\|v(\cdot/h)\|_{W_2^m(\ensuremath{\mathbb{R}}^d)}}}\\[0.5cm] &\geq&h^{m-s-d/2} \displaystyle{\dfrac{|\lambda(v)|}{\|v\|_{W_2^m(\ensuremath{\mathbb{R}}^d)}}}. \end{array} $$ \end{proof} This holds for all weights, including the non-scalable optimal ones, and for all nodal point sets $X$. Our next goal is to show that this rate is attainable for scalable stencils with sufficient polynomial exactness, in particular for optimal stencils calculated via polyharmonic kernels. \begin{theorem}\label{theSobbound} Let $\lambda$ be a functional of scaling order $s$ that is continuous on $W_2^\mu(\Omega)$ for some $\mu>d/2$, and let $X$ allow a polynomially exact approximation to $\lambda$ of some order $q\geq \mu>d/2$. Then any scalable stencil for approximation of $\lambda$ on $X$ with that exactness has the optimal convergence rate $m-s-d/2$ in $W_2^m(\Omega)$ for all $m$ with $\mu\leq m < q+d/2$. In case $m=q+d/2$, the rate is at least $m-s-d/2-\epsilon=q-s-\epsilon$ for arbitrarily small $\epsilon >0$. \end{theorem} \begin{proof} We first treat the case $m\leq q$. By the Bramble-Hilbert lemma \cite{bramble-hilbert:1970-1}, the error functional defined by $$ \epsilon(u)=\lambda(u)- \lambda_{a,X}(u) $$ is continuous on $W_2^m(\Omega)$ and vanishes on $\mathcal{P}_m^d$. Then it has an error bound $$ |\epsilon(u)| \leq \|\epsilon\|_{W_2^m(\Omega)^*}|u|_{W_2^m(\Omega)}\hbox{ for all } u\in W_2^m(\Omega).
$$ This leads to $$ \begin{array}{rcl} |h^{-s}\lambda(u(h\cdot))- h^{-s} \lambda_{a,X}(u(h\cdot))| &=& h^{-s}|\epsilon(u(h\cdot))|\\ &\leq & h^{-s}\|\epsilon\|_{W_2^m(\Omega)^*}|u(h\cdot)|_{W_2^m(\Omega)}\\ &=& h^{-s}\|\epsilon\|_{W_2^m(\Omega)^*}h^{m-d/2}|u|_{W_2^m(h\Omega)}\\ &\leq& h^{-s}\|\epsilon\|_{W_2^m(\Omega)^*}h^{m-d/2}|u|_{W_2^m(\Omega)} \end{array} $$ where we used \begin{equation}\label{equhhu} \begin{array}{rcl} |u(h\cdot)|^2_{W_2^m(\Omega)} &=& \displaystyle{\sum_{|\alpha|=m}\int_\Omega \left|D^\alpha (u(h\cdot))(x) \right|^2dx}\\ &=& \displaystyle{h^{2m}\sum_{|\alpha|=m}\int_\Omega \left|D^\alpha (u)(hx) \right|^2dx}\\ &=& \displaystyle{h^{2m-d}\sum_{|\alpha|=m}\int_{h\Omega} \left|D^\alpha (u)(y) \right|^2dy}\\ &=& h^{2m-d}|u|^2_{W_2^m(h\Omega)}. \end{array} \end{equation} For the case $q\leq m <q+d/2$ we repeat the argument, but now in $W_p^q(\Omega)\supseteq W_2^m(\Omega)$ for $p\in [2,\infty)$ with $q-d/p=m-d/2$. Because of $q\geq \mu$ we also have $W_p^q(\Omega)\subseteq W_2^\mu(\Omega)$, guaranteeing continuity on $W_p^q(\Omega)$. The corresponding proof steps are $$ \begin{array}{rcl} |h^{-s}\lambda(u(h\cdot))- h^{-s} \lambda_{a,X}(u(h\cdot))| &\leq & h^{-s}\|\epsilon\|_{W_p^q(\Omega)^*}h^{q-d/p}|u|_{W_p^q(\Omega)},\\ |u(h\cdot)|^p_{W_p^q(\Omega)} &=& h^{pq-d}|u|^p_{W_p^q(\Omega)}. \end{array} $$ For $m=q+d/2$, the space $W_2^m(\Omega)$ is embedded in $W_p^q(\Omega)$ for arbitrary $p\in [2,\infty)$, and on that space we get the rate $q-s-d/p=m-s-d/2-d/p$. \end{proof} Theorem \ref{theSobbound} proves optimality of the convergence rate \eref{eqOptRate}, and it shows that the optimal rate is attained by {\em scalable} stencils whose point sets allow polynomial exactness of some order larger than $m-d/2$. In view of the {\em best compromise} situation, one can ask for the minimal polynomial exactness order $q$ that allows the optimal convergence rate for fixed $m$ and $d$. If $m-d/2$ is not an integer, this is $q:=\lceil m-d/2\rceil$ as in \eref{eqcompro}. In the exceptional case $m-d/2\in \ensuremath{\mathbb{N}}$, the order $m-d/2+1$ is sufficient for the optimal rate, but order $m-d/2$ can come arbitrarily close to it. We shall deal with this situation in Sections \ref{SecPHS} and \ref{SecExa}. Consequently, large orders of polynomial exactness will not pay off, if smoothness is the limiting factor. If the size of the point set $X$ is the limiting factor, we get \begin{corollary}\label{cortheCbound} Let $\lambda$ be a functional of scaling order $s$ which is continuous on $W_2^\mu(\Omega)$ with integer $\mu>d/2$, and let $X$ allow a polynomially exact approximation to $\lambda$ of some order $q\geq \mu$. Then any scalable stencil for approximation of $\lambda$ on $X$ with that exactness has convergence rate at least $q-s$ in $W_2^m(\Omega)$ for all $m>q+d/2$. \end{corollary} \begin{proof} We repeat the proof of Theorem \ref{theSobbound}, but now on $W_2^q(\Omega)$ and get $$ \begin{array}{rcl} |h^{-s}\lambda(u(h\cdot))- h^{-s} \lambda_{a,X}(u(h\cdot))| &=& h^{-s}|\epsilon(u(h\cdot))|\\ &\leq & h^{-s}\|\epsilon\|_{W_2^q(\Omega)^*}|u(h\cdot)|_{W_2^q(\Omega)}. \end{array} $$ Then we use \eref{equhhu} replacing $m$ by $q$ there, but insert functions $u\in W_2^m(\Omega)$ for $m>q+d/2$. Then the $q$-th derivatives in \eref{equhhu} will be continuous, proving $$ \begin{array}{rcl} |u(h\cdot)|^2_{W_2^q(\Omega)} &=& \displaystyle{h^{2q}\sum_{|\alpha|=q}\int_\Omega \left|D^\alpha (u)(hx) \right|^2dx}\\ &\leq &C h^{2q}\|u\|^2_{C^q(\Omega)}.
\end{array} $$ Thus the convergence rate in $W_2^m(\Omega)$ is at least $q-s$. \end{proof} This argument used continuity of higher derivatives to bound local integrals, as in \cite{davydov-schaback:2016-2}. Note that Corollary \ref{cortheCbound} produces only integer or half-integer convergence rates while Theorem \ref{theSobbound} allows general non-integer rates. We shall give examples in Section \ref{SecExa}. To summarize, we get convergence rates for scalable stencils as in Table \ref{tabSobRates}. For the case in the second row, the optimal convergence behavior is not reached for order $q$, but for order $q+1$ by applying the first row. For given $m$ and $d$, a scalable stencil with polynomial exactness order $\lfloor m-d/2\rfloor+1$ is sufficient for optimal convergence in $W_2^m(\Omega),\;\Omega\subset\ensuremath{\mathbb{R}}^d.$ By solving the system \eref{eqlinsys}, such stencils are easy to calculate, but if the system is underdetermined, one should make good use of the additional degrees of freedom. This topic is treated in \cite{davydov-schaback:2016-1} by applying optimization techniques, while the next sections will focus on unique stencils obtained by polyharmonic kernels. Because the latter come close to the kernels reproducing Sobolev spaces, they should provide good approximations to the non-scalable optimal approximations in Sobolev spaces. \begin{table}[hbt]\centering \begin{tabular}{||c||c|c|}\hline $m$ and $q$ & minimal rate & optimal rate\\\hline $m <q+d/2$ & $m-s-d/2$ & yes \\ $m=q+d/2$ & $m-s-d/2-\epsilon,\;\epsilon>0 $ & no, $m-s-d/2=q-s$ \\ $m>q+d/2$ & $q-s$& yes for $q= q_{max}(\lambda,X)$\\ \hline \end{tabular} \caption{Convergence rates in $W_2^m(\ensuremath{\mathbb{R}}^d)$ for scalable stencils defined on $W_2^\mu(\ensuremath{\mathbb{R}}^d)$ with polynomial exactness $q\geq \mu>d/2$. \label{tabSobRates}} \end{table} \section{Polyharmonic Kernels}\label{SecPHS} For $m-d/2>0$ real, we define the polyharmonic kernel \begin{equation}\label{eqKmd} H_{m,d}(r):=(-1)^{\lfloor m-d/2 \rfloor +1} \left\{ \begin{array}{ll} r^{2m-d}\log r, &2m-d \hbox{ even integer }\\ r^{2m-d},& \hbox{ else } \end{array} \right\} \end{equation} up to a positive scalar multiple. This kernel is conditionally positive definite of order $$ q(m-d/2):=\lfloor m-d/2 \rfloor +1. $$ For comparison, the Whittle-Mat\'ern kernel generating Sobolev space $W_2^m(\ensuremath{\mathbb{R}}^d)$ is, up to a positive constant, $$ S_{m,d}(r):= K_{m-d/2}(r)r^{m-d/2} $$ with the modified Bessel function of second kind. The generalized $d$-variate Fourier transforms then are $$ \begin{array}{rcl} \hat H_{m,d}(\omega)&=&\|\omega\|_2^{-2m},\\ \hat S_{m,d}(\omega)&=&(1+\|\omega\|_2^2)^{-m}, \end{array} $$ up to positive constants, showing a similarity that we will not explore further at this point. While $S_{m,d}$ reproduces $W_2^m(\ensuremath{\mathbb{R}}^d)$, the polyharmonic kernel $H_{m,d}$ reproduces the {\em Beppo-Levi} space $BL_{m,d}$. This has a long history, see e.g. \cite{iske:1995-1, schaback:1997-3,iske:2003-1, wendland:2005-1,beatson-et-al:2005-1,iske:2011-1}, but we take a shortcut here and refer the reader to the background literature. From the paper \cite{iske:2003-1} of A. Iske we take the very useful fact that optimal approximations in Beppo-Levi spaces using polyharmonic kernels are always scalable and can be stably and efficiently calculated. 
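For readers who want to experiment, the following Python sketch shows one standard way of computing such a stencil, namely the saddle-point system that combines the kernel matrix of $H_{m,d}$ with a polynomial block of order $q(m-d/2)$ (this concrete formulation, often used under the name polyharmonic RBF-FD, is our illustration; for the theory we rely on \cite{iske:2003-1} and the references above). The choices $m=3.5$, $d=2$, i.e. $H_{m,d}(r)=-r^5$ with quadratic augmentation, and the random node set are assumptions made purely for the example.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M, s = 12, 2                               # number of nodes, scaling order of Delta
X = rng.uniform(-1.0, 1.0, size=(M, 2))    # illustrative nodes around the origin

# H_{m,d} for m = 3.5, d = 2: H(r) = -r^5 (sign as in (eqKmd)), conditionally
# positive definite of order q(m-d/2) = 3, so we augment with quadratics.
H = lambda r: -r ** 5
exps = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]      # basis of P_3^2
lap_H = lambda x: -25.0 * np.linalg.norm(x) ** 3   # Delta_y H(||x-y||) at y = 0
lap_p = lambda i, j: 2.0 if (i, j) in [(2, 0), (0, 2)] else 0.0

A = H(np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2))          # M x M
P = np.array([[x ** i * y ** j for (i, j) in exps] for (x, y) in X])  # M x 6
K = np.block([[A, P], [P.T, np.zeros((6, 6))]])
rhs = np.concatenate([[lap_H(x) for x in X], [lap_p(i, j) for (i, j) in exps]])
a = np.linalg.solve(K, rhs)[:M]            # stencil weights for Delta u(0)

print(np.max(np.abs(P.T @ a - rhs[M:])))   # exactness residual, near machine precision
\end{verbatim}
Since the construction is scalable, as stated above via \cite{iske:2003-1}, the weights for the scaled nodes $hX$ are simply $h^{-s}$ times these weights, so the system never has to be solved at small $h$.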
We shall investigate the optimal convergence rate in Sobolev and Beppo-Levi space here, while \cite{iske:2003-1} contains convergence rates in $C^m(\Omega)$. A typical scale-invariance property of Beppo-Levi spaces is \begin{equation}\label{equhBL} \|u(h\cdot)\|_{BL_{m,d}}=h^{m-d/2}\|u\|_{BL_{m,d}} \hbox{ for all } u \in BL_{m,d}. \end{equation} Note the similarity between the above formula and \eref{equhhu} used in the proof of Theorem \ref{theSobbound}, because the classical $W_2^m(\ensuremath{\mathbb{R}}^d)$ seminorm coincides with the norm in $BL_{m,d}$. \begin{theorem}\label{theGenPHSOpt} Let a scalable approximation \eref{eqapp} of scaling order $s$ be exact on the polynomials of some order $q\geq q(m-d/2)=\lfloor m-d/2 \rfloor +1$ and assume that $\lambda- \lambda_{a,X}$ is in $BL_{m,d}^*$. Then this stencil has the exact convergence rate $m-s-d/2$ in $BL_{m,d}$. \end{theorem} \begin{proof} We evaluate the norm of the error functional after scaling via $$ \begin{array}{rcl} \|\lambda-h^{-s} \lambda_{a,hX}\|_{BL_{m,d}^*} &=&\displaystyle{ \sup_{\|u\|_{BL_{m,d}}\leq 1}|\lambda(u)-h^{-s} \lambda_{a,hX}(u)|}\\ &=&h^{-s}\displaystyle{ \sup_{\|u\|_{BL_{m,d}}\leq 1}|\lambda(u(h\cdot))- \lambda_{a,X}(u(h\cdot))|}\\ &=&h^{-s+m-d/2}\displaystyle{ \sup_{\|u(h\cdot)\|_{BL_{m,d}}\leq 1}|\lambda(u(h\cdot))- \lambda_{a,X}(u(h\cdot))|}\\ &=&h^{-s+m-d/2}\|\lambda- \lambda_{a,X}\|_{BL_{m,d}^*} \end{array} $$ using that \eref{equhBL} implies that the unit balls of all $u$ and all $u(h\cdot)$ are the same up to a factor. \end{proof} \begin{corollary}\label{corPHS1} Polynomial exactness of more than order $\lfloor m-d/2 \rfloor +1$ does not pay off in a higher convergence rate in Beppo-Levi space $BL_{m,d}$. \qed \end{corollary} \begin{corollary}\label{corPHS2} Let a point set $X=\{x_1,\ldots,x_M\}\subset\Omega\subset\ensuremath{\mathbb{R}}^d$ be given such that there is some approximation \eref{eqapp} that is exact on polynomials of order $\lfloor m-d/2 \rfloor +1$ and that has $\lambda- \lambda_{a,X}\in BL_{m,d}^*$. Then there is a weight vector $a^*\in\ensuremath{\mathbb{R}}^M$ that minimizes $\|\lambda- \lambda_{a,X}\|_{BL_{m,d}^*}$ under all competing approximations, and the resulting stencil is $BL_{m,d}$-optimal under all stencils of at least that polynomial exactness. \qed \end{corollary} By applying Theorem \ref{theSobbound}, we get \begin{corollary}\label{corPH4Scal} One can use optimal scalable stencils obtained via polyharmonic kernels $H_{m,d}$ to get optimal convergence rates in $W_2^m(\Omega)$ for $\Omega\subset\ensuremath{\mathbb{R}}^d$, provided that the underlying sets allow exactness on polynomials of order $q(m-d/2)=\lfloor m-d/2\rfloor+1$.\qed \end{corollary} If $m-d/2$ is not an integer, the above order is the smallest possible for optimal convergence. For $m-d/2$ integer, we have $$ q(m-d/2)=\lfloor m-d/2\rfloor+1=m-d/2+1, $$ and Theorem \ref{theSobbound} suggests that we could come arbitrarily close to the optimal convergence rate if we use order $q=m-d/2$. But then we cannot use the polyharmonic kernel $H_{m,d}$. However, there is a workaround. We construct a scalable stencil via the polyharmonic kernel $H_{m',d}$ for $m-1\leq m'<m$ using polynomial exactness of order $q(m'-d/2)=q$. By Theorem \ref{theSobbound} this yields a convergence rate at least $m-s-d/2-\epsilon$ for all $\epsilon >0$, no matter how $m'$ was chosen.
\begin{corollary}\label{corSecCase2} For the special situation $m=q+d/2$ in Table \ref{tabSobRates} there is a scalable stencil with polynomial exactness order $q$, based on a polyharmonic kernel, that has convergence rate at least $m-s-d/2-\epsilon$ for all $\epsilon>0$. \qed \end{corollary} \section{Stable Error Evaluation}\label{SecStEE} In the most interesting cases, the leading term of the error of a scalable stencil in Sobolev space can be stably calculated via polyharmonic kernels. To prove this, we show now that the polyharmonic kernels $H_{m,d}$ arise naturally as part of the kernels $S_{m,d}$ reproducing Sobolev space $H^m(\ensuremath{\mathbb{R}}^d)$. The latter have expansions as series in $r$, beginning with a finite number of even powers with alternating signs. Such even powers, when written as $r^{2k}=\|x-y\|_2^{2k}$ are polynomials in $x$ and $y$. After these even powers, the next term is a polyharmonic kernel: \begin{theorem}\label{theExp} The first non-even term in the expansion of $\sqrt{\frac{2}{\pi}} K_{n+1/2}(r)r^{n+1/2}$ into powers of $r$ for integer $n\geq 0$ is the polyharmonic kernel $$ r^{2n+1}\dfrac{(-1)^{n+1}}{(2n+1)(2n-1)(2n-3)\cdots 1}= r^{2n+1}\dfrac{(-1)^{n+1}2^n\,n!}{(2n+1)!}. $$ The first non-even term in the expansion of $K_n(r)r^n$ for integer $n\geq 0$ is the polyharmonic kernel $(-1)^{n+1}r^{2n}\log(r)\frac{2^{-n}}{n!}$. \end{theorem} \begin{proof} Equation 10.39.2 of \cite{NIST:2015-1} has $n=0$ of $$ \sqrt{\frac{2}{\pi}} K_{n+1/2}(r)r^{n+1/2}=q_n(r)=e^{-r}p_n(r) $$ with a polynomial $p_n$ of degree at most $n, \;p_0(r)=1,\;q_0(r)=e^{-r}$. It can easily be shown that $rp_{n-1}(r)+p_n'(r)=p_n(r)$ holds, using the derivative of the above expression, and similarly one gets $$ -rq_{n-1}(r)= q_n'(r) $$ from that derivative formula. If we make it explicit by $$ q_n(r)=:\displaystyle{\sum_{j=0}^\infty q_{j,n}r^j }, $$ we get $$ \begin{array}{rcl} -q_{k-1,n-1} &=& q_{k+1,n}(k+1),\, k,n\geq 1\\ 0&=&q_{1,n},\;n\geq 1.\\ \end{array} $$ The assertion $q_{2k-1,n}=0$ for $1\leq k\leq n$ is true for $k=1$ and all $n\geq 1$. Assume it to be true for $k$ and all $n\geq k$. Then for all $n\geq k\geq 1$, $$ \begin{array}{rcl} 0=-q_{2k-1,n} &=& q_{2k+1,n+1}(2k+1),\, 2k\geq 1, n\geq 0\\ \end{array} $$ proves the assertion. The first odd term of the kernel expansion is $q_{2n+1,n}r^{2n+1}$, and its coefficient has the recursion $$ \begin{array}{rcl} -q_{2n-1,n-1} &=& q_{2n+1,n}(2n+1),\, n\geq 1.\\ \end{array} $$ For the other case we use equation (10.31.1) of \cite{NIST:2015-1} in shortened form as $$ K_n(z)z^n=p_n(z^2)+(-1)^{n+1}z^n\log(z/2)I_n(z) $$ with an even power series $p_n(z^2)$, and due to (10.25.2) of \cite{NIST:2015-1} we have $I_n(z)=z^nq_n(z^2)$ with an even power series $q_n(z^2)$ with $q_n(0)=\frac{2^{-n}}{n!}$. Thus $$ K_n(z)z^n=p_n(z^2)+(-1)^{n+1}z^{2n}\log(z/2)q_n(z^2), $$ and the first non-even term of the expansion of $K_n(r)r^n$ is the polyharmonic kernel $$ (-1)^{n+1}r^{2n}\log(r)q_n(0)=(-1)^{n+1}r^{2n}\log(r)\frac{2^{-n}}{n!}. $$ \end{proof} We now are ready to show that a good approximation of the error in Sobolev space can be calculated stably via the error in Beppo-Levi space, i.e. via polyharmonic kernels: \begin{theorem}\label{theComp} Assume a scalable stencil of scalability order $s$ on a set $X\subset\ensuremath{\mathbb{R}}^d$ to be given with polynomial exactness $q$. 
For all integers $m$ with $\lfloor m-d/2\rfloor+1\leq q$, its error norm can be evaluated on all Beppo-Levi spaces $BL_{m,d}$ and on Sobolev space $W_2^m(\ensuremath{\mathbb{R}}^d)$. The convergence rate in both cases then is $m-s-d/2$, and the quotient of errors converges to 1 for $h\to 0$, if the scalar factors in the Sobolev and polyharmonic kernel are aligned properly, namely as given in Theorem \ref{theExp}. \end{theorem} \begin{proof}\label{ProtheComp} The squared norm of the stencil's error functional can be evaluated on Sobolev space $W_2^m(\ensuremath{\mathbb{R}}^d)$ by $$ \begin{array}{rcl} && \epsilon(h)^x\epsilon(h)^yK(x,y)\\ &=& \displaystyle{ h^{-2s}\left(\lambda^x\lambda^yK(hx,hy) -2\sum_{j=1}^Ma_j\lambda^yK(hx_j,hy) \right.}\\ && +\displaystyle{ \left. \sum_{j,k=1}^Ma_ja_kK(hx_j,hx_k) \right)}\\ \end{array} $$ where we used $K(x,y)$ as a shortcut for $K_{m-d/2}(\|x-y\|_2)\|x-y\|_2^{m-d/2}$ and ignore scalar multiples. Now we insert the series expansions of Theorem \ref{theExp}. For odd $d$ and $m-d/2=n+1/2$ we have, up to constant factors, $$ K_{m-d/2}(r)r^{m-d/2}=\displaystyle{\sum_{j=0}^{m-d/2-1/2}f_{2j}r^{2j}} +f_{2m-d}r^{2m-d} +\sum_{k>2m-d}f_kr^k $$ and $$ K_{m-d/2}(hr)(hr)^{m-d/2}=\displaystyle{\sum_{j=0}^{m-d/2-1/2}f_{2j}h^{2j}r^{2j}} +f_{2m-d}h^{2m-d}r^{2m-d} +\sum_{k>2m-d}f_kh^kr^k. $$ If we hit this twice with $\epsilon(h)$, i.e. forming $$ \|\epsilon(h)\|^2_{W_2^m(\ensuremath{\mathbb{R}}^d)^*}= \epsilon(h)^x\epsilon(h)^yK(h\|x-y\|_2), $$ all even terms with exponents $2j<2q=2p+2s>2m-d$ go away \cite{schaback:2005-2}, and we are left with the polyharmonic part and higher-order terms. The odd ones are all polyharmonic, and the even ones remain only from exponent $2q=2p+2s>2m-d$ on, i.e. they behave like $h^{2m-d+1}$ or higher-order terms. The polyharmonic terms $f_{2m-d+2k}h^{2m-d+2k}r^{2m-d+2k}$ representing $BL_{m+k,d}$ require polynomial exactness of order $m-d/2+1/2+k$ which is satisfied for $0\leq k<q-m+d/2$, and double action of the error functional on these terms has a scaling law of $h^{2m+2k-2s-d}$. This means that the dominating term is the one with $k=0$, and the squared error norm behaves like $h^{2m-d-2s}$ as in the $BL_{m,d}$ case. Now we treat even dimensions, and use the expansion $$ K_{m-d/2}(r)r^{m-d/2}=\displaystyle{\sum_{j=0}^{\infty}f_{2j}r^{2j}} +g_{2m-d}\log(r)r^{2m-d}+ \log(r)\displaystyle{\sum_{2k>2m-d}g_{2k}r^{2k}} $$ up to constant factors. With scaling, it reads as $$ \begin{array}{rcl} &&K_{m-d/2}(hr)h^{m-d/2}r^{m-d/2}\\ &=& \displaystyle{\sum_{j=0}^{\infty}f_{2j}h^{2j}r^{2j}} +g_{2m-d}\log(hr)h^{2m-d}r^{2m-d} + \log(hr)\displaystyle{\sum_{2k>2m-d}g_{2k}h^{2k}r^{2k}}\\ &=& \displaystyle{\sum_{j=0}^{\infty}f_{2j}h^{2j}r^{2j}} +g_{2m-d}h^{2m-d}\log(r)r^{2m-d}+g_{2m-d}\log(h)h^{2m-d}r^{2m-d}\\ && + \displaystyle{\sum_{2k>2m-d}g_{2k}h^{2k}r^{2k}\log(r)} +\displaystyle{\sum_{2k>2m-d}g_{2k}h^{2k}\log(h)r^{2k}}\\ \end{array} $$ We now have $q=p+s\geq 2m-d+2$ and hitting the scaled kernel twice will annihilate all even powers up to and including exponents $2j<2q=2p+2s\geq 2m-d+2$, i.e. the remaining even powers scale like $h^{2m-d+2}\log(h)$ or higher. The rest is a sum of polyharmonic kernels $H_{m+k,d}$ for $k\geq 0$, and we know the scaling laws of them, if the stencil has enough polynomial exactness. Again, the term with $k=0$ is the worst case, leading to a summand of type $h^{2m-d-2s}$ in the squared norm of the error that cannot be cancelled by the other terms of higher order.
\end{proof} \section{Stencil Convergence}\label{SecSC} Here, we prove that the renormalized weights of the optimal non-scalable approximations in Sobolev space converge to the weights of a scalable stencil. \begin{theorem}\label{theASA} Consider the $W_2^m(\ensuremath{\mathbb{R}}^d)$-optimal approximation weights $a^*(h)$ on a set $X\subset\ensuremath{\mathbb{R}}^d$ for a functional of scaling order $s$. Assume that $X$ allows a unique scalable stencil with weights $\hat a$ that is exact on polynomials of order $q$. Then $$ \|a^*(h)h^s-\hat a\|_\infty \leq Ch^{m-q+1-d/2} $$ if $m-d/2 < q$, and $$ \|a^*(h)h^s-\hat a\|_\infty \leq Ch^{1} $$ if $m-d/2 \geq q$. \end{theorem} \begin{proof} We consider the uniquely solvable system of polynomial exactness as $$ \sum_{j=1}^M\hat a_jx_j^\alpha=\lambda(x^\alpha),\;0\leq |\alpha|<q $$ and in scaled form as $$ \sum_{j=1}^Mh^{-s}\hat a_j(hx_j)^\alpha=\lambda(x^\alpha) ,\;0\leq |\alpha|<q $$ which is the unscaled system where the equation for $x^\alpha$ is multiplied by $h^{|\alpha|-s}$, namely $$ \sum_{j=1}^Mh^{-s}\hat a_j(hx_j)^\alpha=h^{|\alpha|-s}\lambda(x^\alpha)=\lambda(x^\alpha) ,\;0\leq |\alpha|<q $$ which is no contradiction because scaling order $s$ implies $\lambda(x^\alpha)=0$ for $|\alpha|\neq s$. Then we insert the rescaled optimal Sobolev weights into the unscaled system to get \begin{equation}\label{eqlinsyscheck} \begin{array}{rcl} && h^s\sum_{j=1}^Ma_j^*(h)x_j^\alpha\\ &=& h^{s-|\alpha|}\sum_{j=1}^Ma_j^*(h)(hx_j)^\alpha\\ &=& h^{s-|\alpha|}\lambda_{a^*(h),hX}(x^\alpha)\\ &=& h^{s-|\alpha|}(\lambda_{a^*(h),hX}(x^\alpha)-\lambda(x^\alpha))+h^{s-|\alpha|} \lambda(x^\alpha)\\ &=& h^{s-|\alpha|}(\lambda_{a^*(h),hX}(x^\alpha)-\lambda(x^\alpha))+ \lambda(x^\alpha) \end{array} \end{equation} and $$ \begin{array}{rcl} \sum_{j=1}^M(h^sa_j^*(h)-\hat a_j)x_j^\alpha &=&h^{s-|\alpha|}(\lambda_{a^*(h),hX}(x^\alpha)-\lambda(x^\alpha)). \end{array} $$ If we insert the convergence rate $m-s-d/2$ for the optimal Sobolev approximation in the case $m-s-d/2 < q-s$ or $m-d/2 < q$, the right-hand side of this system converges to zero with rate $m-|\alpha|-d/2\geq m-(q-1)-d/2\geq 1$ and this implies \begin{equation}\label{eqstenrate} h^sa_j^*(h)-\hat a_j ={\cal O}(h^{m-(q-1)-d/2}) \hbox{ for }h\to 0. \end{equation} If we have $m-d/2\geq q$, we insert the rate $q-s$ and get the rate $q-|\alpha|\geq 1$ for the right-hand side. \end{proof} \section{Examples}\label{SecExa} First, we demonstrate numerically that the convergence rate $$ \min(m-d/2-s,q_{max}(\lambda,X)-s) $$ for approximations in $W_2^m(\ensuremath{\mathbb{R}}^d)$ to functionals $\lambda\in W_2^m(\ensuremath{\mathbb{R}}^d)^*$ with scaling order $s$ is optimal, even among unscaled approximations. This was verified in many cases including dimensions 2 and 3 using MAPLE$^\copyright$ with extended precision. The number of decimal digits had to be beyond 100 in extreme situations. All the loglog plots of $\|\epsilon(h)\|_{W_2^m(\ensuremath{\mathbb{R}}^d)}$ versus $h$ show the standard linear behaviour for $h\to 0$, if enough decimal digits are used and if started with small $h$ values. Therefore, they are suppressed here. Instead, we present convergence rate estimates by plotting $$ \dfrac{\log(\|\epsilon_{h_{i+1}}\|_{W_2^m(\ensuremath{\mathbb{R}}^d)})-\log(\|\epsilon_{h_i}\|_{W_2^m(\ensuremath{\mathbb{R}}^d)})} {\log(h_{i+1})-\log(h_i)} $$ against $h_i$. For a specific case, we take $M=18$ random points in 2D and approximate the Laplacian. 
Then $s=2$ and $q_{max}(\lambda,X)=5$ leading to the expected convergence rate $\min(m-3,3)$ as a function of smoothness. Figure \ref{figsobopton18pts} shows the cases $m=3.75$ and $m=6.25$ with the expected rates $0.75$ and 2, respectively. These correspond to situations where either smoothness $m$ or size of $X$ restrict the convergence rate. \begin{figure}\label{figsobopton18pts} \end{figure} For illustration of the optimal compromise situation in \eref{eqcompro}, Figure \ref{figcompro1} shows the convergence rate 1 for approximation of the Laplacian in 3D on only 10 points in general position assuming smoothness $m=4.5$. By Table \ref{tabSobRates} we expect a convergence rate between $m-s-d/2-\epsilon=1-\epsilon$ and 1 for all $\epsilon >0$ when using polynomial exactness order $q=m-d/2=3$, but the true optimal convergence could be like $h\log(h)$. The issue cannot be visually decided. \begin{figure}\label{figcompro1} \end{figure} Test runs with the scalable approximations based on polynomial exactness show exactly the same behaviour, since they have the same convergence rate. To illustrate the ratio between the errors of scalable polyharmonic stencils and unscaled optimal approximations, Figure \ref{Figquot} shows the error ratio in the 2D equilibrium case with 10 points and $m=q=4$, tending to 1 for $h\to 0$. The same remark as for the $m=4.5,\,d=3$ case applies here. \begin{figure} \caption{Quotient between errors of polyharmonic and optimal Sobolev approximations as functions of $h$ } \label{Figquot} \end{figure} To deal with the special situation of $m-d/2$ being an integer in Corollary \ref{corSecCase2} via polyharmonic kernels, we take 6 points in $\ensuremath{\mathbb{R}}^2$ with $q=q_{max}=3$ for the Laplacian with optimal convergence rate $m-2-d/2=1$ for $m=4$. Working in $BL_{4,2}$ would need 10 points. A unique scalable stencil is obtained from $BL_{m',2}$ with polynomial exactness order $q(m',2)=3$ for all $3\leq m'<4$ and the convergence rate is at least $m-s-d/2-\epsilon=1-\epsilon$ for all $\epsilon >0$ by Table \ref{tabSobRates}. The corresponding convergence rate estimate for $m'=3.5$ is in Figure \ref{Figsoboptm4ph3p5on6pts}, and there is no visible $\log(h)$ factor. \begin{figure}\label{Figsoboptm4ph3p5on6pts} \end{figure} To see whether a $\log(h)$ term can be present in the situation of integer $q=m-d/2$, we take $m=d=2,\;q=1,\;s=0,$ i.e. interpolation. We need just a single point $x\in\ensuremath{\mathbb{R}}^2$ with $\|x\|_2=1$ for exactness on constants. The kernel is $\phi(r)=rK_1(r)=1+\frac{1}{2}r^2\log r +{\cal O}(r^2)$ with $\phi(0)=1$. The optimal recovery for $\lambda(u)=u(0)$ from $u(hx)$ is the kernel interpolant, i.e. $u(hx)\phi(\|\cdot-hx\|_2)$, and the approximation error is $$ u(0)-u(hx)\phi(\|hx\|_2)=u(0)-u(hx)\phi(h). $$ In the dual of $W_2^2(\ensuremath{\mathbb{R}}^2)$ the square of the norm of the error functional is $$ \begin{array}{rcl} \|\delta_0-\phi(h)\delta_{hx}\|^2_{{W_2^2}^*(\ensuremath{\mathbb{R}}^2)} &=& \phi(0)-\phi(h)^2\\ &=& -h^2\log(h)+{\cal O}(h^2)\\ \end{array} $$ due to MAPLE. Since the standard error bound $$ |u(0)-u(hx)\phi(h)|\leq \|\delta_0-\phi(h)\delta_{hx}\|_{{W_2^2}^*(\ensuremath{\mathbb{R}}^2)} \|u\|_{W_2^2(\ensuremath{\mathbb{R}}^2)} $$ is sharp, and since we constructed the optimal recovery, we have that the convergence for $q=1$ is only $h|\log(h)|^{1/2}$ and not like the optimal behaviour $h^{m-0-d/2}=h$ in Sobolev space $W_2^2(\ensuremath{\mathbb{R}}^2)$. 
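As a quick numerical cross-check of this expansion (a sketch using SciPy's modified Bessel function; the paper's own computation was done in MAPLE), one can confirm that $\phi(0)-\phi(h)^2$ behaves like $-h^2\log h$:
\begin{verbatim}
import numpy as np
from scipy.special import kv          # modified Bessel function of the second kind

phi = lambda r: r * kv(1, r)          # phi(r) = r K_1(r), with phi(0) = 1
for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    err2 = 1.0 - phi(h) ** 2                 # squared error norm phi(0) - phi(h)^2
    print(h, err2 / (-h ** 2 * np.log(h)))   # ratio tends to 1 (slowly) as h -> 0
\end{verbatim}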
To reach the optimal rate, we need a polynomial exactness order $q\geq 2$ by Table \ref{tabSobRates}, i.e. at least three non-collinear points. As a curiosity, note that the above analysis works for all even dimensions, provided that the smoothness $m=1+d/2$ varies accordingly. The suboptimal nearest-neighbor interpolation by constants has $$ \begin{array}{rcl} \|\delta_0-\delta_{hx}\|^2_{{W_2^2}^*(\ensuremath{\mathbb{R}}^2)} &=& 2-2\phi(h)\\ &=& -h^2\log(h)+{\cal O}(h^2) \end{array} $$ and a more exact expansion via MAPLE shows that this is larger than the squared error for optimal one-point interpolation in $W_2^{1+d/2}(\ensuremath{\mathbb{R}}^d)$ by ${\cal O}(\log^2(h)h^4)$. In several numerical examples we verified the stencil convergence proven in Theorem \ref{theASA}, but the observed convergence rates turned out to be better than the proven ones. In particular, choosing 15 points in general position in $\ensuremath{\mathbb{R}}^2$ with $q=5$ led to a convergence rate $\min(2,2m-10)$ for $m\geq 5$ instead of $\min(1,m-5)$ in Theorem \ref{theASA}. This seems to be a consequence of {\em superconvergence} \cite{schaback:1999-1,schaback:2016-5}, but needs further work. We now check approximation of the Laplacian in the native space of the Gaussian in Figure \ref{figgauss30}. This should behave like $m=\infty$ in \eref{eqOptRate} and thus show a convergence rate $q_{max}(\lambda,X)-s$. We used 256 decimal digits for that example and took a set of 30 random points in 2D. Then $q_{max}(\Delta,X)=7$ and the observed convergence rate is indeed $q_{max}-s=5$. Furthermore, this rate is attained already for a scalable stencil that is polynomially exact of order $7$ on these points. We chose the optimal scalable polyharmonic stencil in $BL_{7,2}$ for this, and the ratio of the error norms was about 5. See \cite{larsson-et-al:2013-1} for a sophisticated way to circumvent the instability of calculating optimal non-scalable stencils for Gaussian kernels, but this paper suggests using scalable stencils calculated via polyharmonic kernels instead. \begin{figure} \caption{Gaussian native space convergence rate estimates for the error norms of the optimal and a polynomially exact stencil of order $7$, approximating the Laplacian on $30$ general points, as a function of $h$} \label{figgauss30} \end{figure} We finally compare with approximations that optimize weights under the constraint of a fixed polynomial exactness \cite{davydov-schaback:2016-1}. The three point sets $X1,\;X2$, and $X3$ of \cite{davydov-schaback:2016-1} have 32 points in $[-1,+1]^2$ each, and the maximal possible order of polynomial reproduction in 2D is 7, if the geometry of the point set allows it. If everything works fine, this would result in convergence of optimal order $5$ for the approximation of the Laplacian in Sobolev spaces of order $m\geq 8$, while the optimal rate for smaller $m$ is $m-3$. A simple Singular Value Decomposition of the $28\times 32$ value matrix of polynomials of order 7 on these points reveals that the small singular values in the three cases are as in Table \ref{svdtable}. This means that only $X1$ allows working for exactness order 7 without problems, while $X2$ suggests order $6$ and $X3$ should still work with order $5$. If users require higher polynomial exactness orders (PEO), there is a risk of numerical instabilities. To demonstrate this effect, Figure \ref{FigX2m8O7P7} shows what happens if both the polyharmonic and the minimal-weight approximations are kept at order 7 for the set $X2$.
As Figure \ref{FigX2m8O6P7runs20} will show, the optimal Sobolev approximation stays at rate 4 for larger $h$ and needs rather small $h$ to show its optimal rate 5. In Figure \ref{FigX2m8O7P7}, both the polyharmonic and the minimal-weight approximations perform considerably worse than the optimum. If we go to polynomial exactness order 6, we get Figure \ref{FigX2m8O6P6}, and now both approximations are close to what the Sobolev approximation does, though the latter is not at its optimal rate yet. In Figure \ref{FigX2m8O6P7runs20}, the polyharmonic approximation is forced to stay at exactness order 7, while the weight-minimal approximation is taken at order 6 to allow more leeway for weight optimization. Now, in the same range as before, the weight-optimal approximation clearly outperforms the polyharmonic approximation. The same situation occurs on the set $X3$ under these circumstances, see Figure \ref{FigX3m8O6P7runs20}. Thus, for problematic point sets, the polyharmonic approximation should get as much leeway as the minimal-weight approximation. The most sensible choice on $X3$ is to fix the exactness orders to 5, and the results are in Figure \ref{FigX3m8O5P5}. Both approximations cannot compete with the convergence rate 4 that the Sobolev approximation shows in this range of $h$. The latter is calculated using 128 digits and can still use the point set as one that allows polynomial reproduction of order 6. The other two approximations are calculated at 32 decimal digits and see the set $X3$ as one that allows reproduction of order 5 only. To get back to a stable situation, we should lower the Sobolev smoothness to $m=6$ to get Figure \ref{FigX3m6O5P5}. We then are back to a convergence rate like $h^3$ in all cases. \begin{table}[hbt] \begin{center} \begin{tabular}{|r||c|c|c|} \hline Set & $>0.002$ & $\in [2.0e-8,3.6e-7]$ & $<5.0e-14$\\ \hline X1 & 28 & 0 & 0 \\ X2 & 25 & 3 & 0 \\ X3 & 18 & 9 & 1 \\ \hline \end{tabular} \end{center} \caption{Singular values for three point sets, for polynomial reproduction of order 7 \label{svdtable}} \end{table} \begin{figure}\label{FigX2m8O7P7} \end{figure} \begin{figure}\label{FigX2m8O6P6} \end{figure} \begin{figure}\label{FigX2m8O6P7runs20} \end{figure} \begin{figure}\label{FigX3m8O6P7runs20} \end{figure} \begin{figure}\label{FigX3m8O5P5} \end{figure} \begin{figure}\label{FigX3m6O5P5} \end{figure} \section{Summary and Outlook}\label{SecU} We established the optimal convergence rate \eref{eqOptRate} of nodal approximations in Sobolev spaces and proved that it can be attained for {\em scalable} approximations with sufficient polynomial exactness. But we did not investigate the factors in front of the rates. For highly irregular nodes, it might be reasonable to go for a smaller convergence rate, if the factor is much smaller than the one for the highest possible rate for that node configuration. This requires an analysis of how to use the additional degrees of freedom, and various possibilities for this are in \cite{davydov-schaback:2016-1}. On point sets that are badly distributed, it pays off to avoid the highest possible order of polynomial exactness, and to use the additional degrees of freedom for minimization of weights along the lines of \cite{davydov-schaback:2016-1} or to use optimal approximations by polyharmonic kernels at a smaller order of polynomial exactness. 
The kernels reproducing Sobolev spaces $W_2^m(\ensuremath{\mathbb{R}}^d)$ have expansions into power series in $r=\|x-y\|_2$ that start with even powers of $r$ until the polyharmonic kernel $H_{m,d}$ occurs. This shows that error evaluation in Sobolev spaces can be replaced asymptotically by evaluation in Beppo-Levi spaces, and it suggests that the errors of optimal kernel-based approximations should be close to the errors of optimal scalable stencils based on polyharmonic kernels. This occurred in various experiments (see Figure \ref{Figquot}), but a more thorough investigation is needed. Finally, the exceptional case $m-d/2\in \ensuremath{\mathbb{N}}$ of the second row of Table \ref{tabSobRates} needs more attention. Approximating a functional with scaling order $s$ by scalable stencils with the minimal polynomial exactness order $q=m-d/2$ leads to an unknown convergence behavior between rates $m-s-d/2-\epsilon$ and the optimal rate $m-s-d/2$ that is guaranteed for order $q+1=m-d/2+1$. The convergence could be like ${\cal O}(h^{m-s-d/2}|\log(h)|^p)$, for instance, and we presented an example with $p=1/2$ for $m=d=2,\;s=0$. \end{document}
\begin{document} \title{A Survival Copula Mixture Model for Comparing Two Genomic Rank Lists} \markboth {YY Wei and HK Ji} {Genomic Rank Lists Comparison} \author{Yingying Wei, Hongkai Ji\\ Johns Hopkins University Bloomberg School of Public Health} \date{} \maketitle \begin{abstract} Analyses of high-throughput genomic data often lead to ranked lists of genomic loci. How to characterize concordant signals between two rank lists is a common problem with many applications. One example is measuring the reproducibility between two replicate experiments. Another is to characterize the interaction and co-binding between two transcription factors (TF) based on the overlap between their binding sites. As an exploratory tool, the simple Venn diagram approach can be used to show the common loci between two lists. However, this approach does not account for changes in overlap with decreasing ranks, which may contain useful information for studying similarities or dissimilarities of the two lists. The recently proposed irreproducible discovery rate (IDR) approach compares two rank lists using a copula mixture model. This model considers the rank correlation between two lists. However, it only analyzes the genomic loci that appear in both lists, thereby only measuring signal concordance in the overlapping set of the two lists. When two lists have little overlap but loci in their overlapping set have high concordance in terms of rank, the original IDR approach may misleadingly claim that the two rank lists are highly reproducible when they are indeed not. In this article, we propose to address the various issues above by translating the problem into a bivariate survival problem. A survival copula mixture model is developed to characterize concordant signals in two rank lists. The effectiveness of this approach is demonstrated using both simulations and real data. \end{abstract} {\bf Keywords} Genomics; High-throughput experiments; Mixture model; Survival copula; EM algorithm; Reproducibility; Co-binding of transcription factors. \section{Introduction} \label{s:intro} Analyses of high-throughput genomic data often produce ranked lists of genomic loci. Two examples are lists of differentially expressed genes from an RNA-seq or microarray experiments \cite{Reflimma} and lists of transcription factor (TF) binding peaks from ChIP-seq data \cite{RefCisgenome}. In each list, loci are ranked based on scores such as p-values, false discovery rates (FDR) \cite{RefFDR,rStorey1,rStorey2} or other summary statistics. When two such lists are available, a common problem is to characterize the degree of concordance between them. Below are two examples. \begin{itemize} \item \textit{Characterizing co-binding of two transcription factors}: ChIP-seq data are collected for two different TFs. For each TF, an initial data analysis yields a list of peaks along the genome representing its putative binding regions. In order to characterize whether the two TFs collaborate and how they interact with each other, one wants to compare the two peak lists to answer the following questions: (1) What proportion of the true binding sites are shared by the two TFs? (2) How does this proportion change as one moves from high quality peaks to low quality ones? \item \textit{Assessing reproducibility of scientific findings}: Gene expression data for the same biological system are collected independently by two different laboratories. Each lab collects the data using its own platform and protocol. 
The data from each lab contain gene expression profiles for two biological conditions, each with multiple replicate samples. Each lab analyzes its own data to generate a list of differentially expressed genes. One wants to compare the differential gene lists from the two labs to determine which differential genes are likely to be reproducible by other labs. \end{itemize} In both these scenarios, perhaps the best way to compare two datasets is to model it at the raw data level. Whenever possible, directly comparing or modeling the raw data may allow one to keep most of the information. However, this is not always easy or feasible. For instance, sometimes genomic rank lists are published without releasing the raw data to protect confidentiality of research subjects. Sometimes, one may want to compare his/her own data with thousands of other datasets in public repositories such as ENCODE \cite{RefEncode}, modENCODE \cite{RefmodENCODE}, and Gene Expression Omnibus (GEO) \cite{RefGEO}. Analyzing all the raw data in these databases is a huge undertaking that requires significant amount of resources. This is often beyond the capacity of an individual investigator, and it may not be justified based on the return. In those situations, comparing two datasets based on the readily available rank lists may be preferred. Sometimes, this may be the only solution. This article considers analysis issues in this scenario. As an exploratory tool, the simple Venn diagram approach is widely used to show the overlap between two genomic loci lists. However, this approach does not consider the concordance or correlation of ranks between the two lists. A feature commonly seen in genomic rank lists is that the top ranked loci are more likely to be true signals. Signals are more likely to be reproduced in independent studies than noise; therefore, they tend to be correlated between different datasets. Because of this, the concordance between the two rank list is a function that changes with the rank of the loci. This information is not reflected in a Venn diagram. To address this limitation, Li et al. recently proposed a method to measure the concordance of two rank lists as a function of rank. They developed a Gaussian copula mixture model to assign a reproducibility index, irreproducible discovery rate (IDR), to each locus. The IDR analysis produces a concordance curve rather than a scalar number to measure the overlap between two lists. This approach is semiparametric and invariant to monotone transformations of the scores used for ranking \cite{RefLi}. In principle, IDR is a model based version for one minus the correspondence at the top (CAT) plot proposed by Irizarry et al.\cite{RefCAT}. The original authors of IDR demonstrated their method using an application where they evaluated the reproducibility of different ChIP-seq peak callers by comparing the peak calling results from two replicate experiments. Although the IDR approach represents a significant advance compared to the simple Venn diagram analysis, it also has limitations. Importantly, the Gaussian copula mixture model in the original IDR approach requires one to know the ranks of each locus in both lists. However, many loci occur only in one list. As a result, to perform the IDR analysis, Li et al. first filtered out all loci that were reported in only one rank list. Loci are included in the IDR analysis only if they appear in both lists. As Li et al. 
reported, for the real data they analyzed (which are peak lists from replicate ChIP-seq experiments), only ``23-78\% of peaks are retained for this analysis'' \cite{RefLi}. As such, the original IDR analysis only characterizes the signal concordance for a subset of loci that are reported in both lists. Attempting to interpret the resulting IDR as a reproducibility measure for the whole dataset could be misleading. It is possible that the two original loci lists have little overlap and, therefore, low reproducibility, but the loci in their overlapping set (i.e., the loci shared by both lists) are highly correlated in terms of their relative ranking. In such a situation, the IDR computed using only the overlapping loci may misleadingly suggest high reproducibility of the two datasets. This is a limitation caused by ignoring list-specific loci, and it can only be addressed by bringing them back into the analysis. Here we propose a Survival COPula mixture model, SCOP, to tackle the general problem of comparing two genomic rank lists. This new approach allows one to include the list-specific loci in the analysis when evaluating the signal concordance between the two datasets. For loci that occur only in one list, we treat the scores used for ranking (e.g., p-values or FDRs) in the other list as censored data. In this way, we translate the problem into a bivariate survival problem. Although much work has been done on estimating the correlation structure of bivariate failure times in survival analysis \cite{RefFrailty,rFan,RefMultSurv,RefHougaard,RefLiYi,RefNan,RefCopula,RefOakes,rLouis}, none of it addresses the issues specific to genomic data. In genomic applications, the higher ranked loci are more likely to be true signals. Thus, in traditional survival analysis terminology, earlier failure times are of greater interest. Building upon Li et al., our survival copula mixture model borrows strength from both the copula mixture model and survival analysis. The benefit is that it can better characterize the overlap and concordance between two rank lists. The article is organized as follows. In Section 2, we introduce the survival copula mixture model and discuss its connection to survival analysis. Section 3 uses simulations to demonstrate our method and compare it with alternative approaches. We apply our method to two real ChIP-seq data examples in Section 4. We then conclude the article with a discussion in Section 5. \section{Method} \label{s:model} \subsection{Data structure} Consider two genomic loci lists such as lists of differentially expressed genes from two RNA-seq experiments or lists of transcription factor binding regions from two ChIP-seq experiments. In each list $j$ ($\in \left\{1,2\right\}$), loci are rank ordered based on a score such as a p-value or an FDR. Let $T_{i,j}$ be the score for locus $i$ in list $j$. Without loss of generality, we assume that a smaller score (e.g., a smaller FDR) represents higher significance. Often, a locus is reported in list $j$ only when its score passes a cutoff $C_{j}$. Thus, all loci in list 1 satisfy $T_{i,1} \leq C_{1}$, and any locus with $T_{i,1} > C_{1}$ is not reported. Similarly, list 2 contains loci for which $T_{i,2} \leq C_{2}$. A locus may be reported in both lists, in one list only, or in neither list. Each list may contain a certain amount of noise or false positives in addition to signals.
By comparing the two lists, the goal is to characterize the degree of concordance of the signals from the two datasets, and how the concordance varies as one moves from the top ranked loci to those lower ranked. To analyze these data, we borrow the idea of IDR. However, instead of excluding loci that occur in only one list from the analysis, we retain all loci that occur in any of the two lists. If a locus does not appear in one list, its score in that list is labeled as missing. This creates missing data, but the data here are not missing completely at random. For example, if rank list 1 uses an FDR cutoff of 0.1, then we know that for any loci in list 2 but not in list 1, their missing FDR in list 1 are indeed greater than 0.1. In other words, the data we observe are right truncated. This naturally translates the problem into a survival problem with right censoring data. Figure 1(a) shows a numerical example. The figure displays two ChIP-seq peak lists ranked according to FDR. Region 2 passes the FDR cutoff for both lists, but region 1 only appears in the peak list for TF A. It is absent in the peak list for TF B since its FDR in that dataset is higher than the 0.1 cutoff. Rather than excluding region 1 from the analysis, we retain it and encode the data using ``observed survival time'' and ``censoring indicator'' adopting the terminology in survival analysis. The ``observed survival time'' is defined as $X_{i,j}=min\{T_{i,j},C_{j}\}$, and the ``censoring indicator'' is defined as $\delta_{i,j}=I(T_{i,j}\leq C_{j})$. In this example, the observed survival time for region 1 in peak list B is 0.1, and the censoring indicator is equal to zero indicating that the data is censored. Intuitively, the original IDR approach by \cite{RefLi} only models the red points (i.e., cases with complete data) in Figure 1(b), whereas our new approach attempts to use information from all data points regions II, III, and IV. Later we will show that compared to the original IDR calculation which excludes the list-specific loci, including them as censored data in our model will provide more information. \subsection{The SCOP Model} Let $f_j(t)$ be the probability density function for $T_{i,j}$. $S_j(t)=P(T_{i,j}\geq t)$ is the corresponding survival function. For any bivariate random variables, there exists a copula, which is invariant under monotone transformation for the marginal distribution \cite{RefCopula}. Based on this, we use two latent random variables $Z_{i,1}$ and $Z_{i,2}$ to characterize the relationship between $T_{i,1}$ and $T_{i,2}$. For each $j$, $Z_{i,j}$ is assumed to follow a Gaussian mixture distribution. $G_j(z)=P(Z_{i,j}\geq z)$ represents the survival function for the latent variable $Z_{i,j}$. The latent variables ($Z_{i,1}$ and $Z_{i,2}$) and the observed scores ($T_{i,1}$ and $T_{i,2}$) are linked through a monotone transformation $S_j(t_{i,j})=G_j(z_{i,j})$. Let $g_1(z)$ denote the density function for $Z_{i,1}$. It is assumed that this density is a mixture of a noise component $g_{10}\sim N(0,1)$ and a signal component $g_{11}\sim N(\mu_1,\sigma_1^2)$, where $\mu_1<0$. Similarly, the density function for $Z_{i,2}$, $g_2(t)$, is assumed to be a mixture of noise $g_{20}\sim N(0,1)$ and signal $g_{21}\sim N(\mu_2,\sigma_2^2)$ where $\mu_2<0$. \begin{figure} \caption{The connection between the comparison of two genomic rank lists and the bivariate survival analysis. 
(a) An illustrative example showing how two rank lists when combined together can be represented using ``observed survival time'' and ``censoring indicators'' according to the survival terminology. (b) An illustration of how information is used in the complete case analysis by the original IDR versus the survival model in this article. (c) A cartoon illustration of the data generation process assumed by the survival copula mixture model.} \end{figure} The data are assumed to be generated as below (see Figure 1(c) for a cartoon illustration): \begin{enumerate} \item A random indicator $b_i$ is first assigned to each locus $i$. \begin{itemize} \item If $b_i=0$, then locus $i$ is noise in both lists. \item If $b_i=1$, then locus $i$ is signal in list 1 but noise in list 2. \item If $b_i=2$, then locus $i$ is signal in list 2 but noise in list 1. \item If $b_i=3$, then locus $i$ is signal in both lists. \end{itemize} Thus, $b_i$ represents the co-existing pattern of signals. It is usually called ``frailty'' in survival analysis. The $b_i$ is assumed to be assigned according to the probability vector $\mbox{\boldmath{$\pi$}}=(\pi_0,\pi_1,\pi_2,\pi_3)$, where $\pi_k \equiv Pr(b_i=k)$. \item Given $b_i$, latent variables $\tilde{z}_{i,1}$ and $\tilde{z}_{i,2}$ are generated according to $g_1(z)$ and $g_2(z)$, respectively. \item $\tilde{z}_{i,1}$ and $\tilde{z}_{i,2}$ are truncated using $K_{1}$ and $K_{2}$ as cutoffs. \item The truncated pseudo data $z_{i,1}$ and $z_{i,2}$ are monotone transformed to observed data $x_{i,1}$ and $x_{i,2}$ based on $S_j(x_{i,j})=G_j(z_{i,j})$, which yields $x_{i,j}=S_j^{-1}(G_j(z_{i,j}))$. Correspondingly, $C_{j} = S_j^{-1}(G_j(K_{j}))$. Also note that $x_{i,j}=min\{t_{i,j},C_{j}\}$ where and $t_{i,j} = S_j^{-1}(G_j(\tilde{z}_{i,j}))$. \end{enumerate} Since $T_{i,j}$ is truncated at $C_j$ and the censoring time $C_j$ is a constant, the censoring time is independent of the underlying true failure time $T_{i,j}$ and contains no information about $f_j(t)$ and $S_j(t)$. As a result, the contribution of each locus $i$ to the likelihood can be represented by one of the four formulas below: \begin{itemize} \item $b_i=0$: $h_{0i} \equiv g_{10}^{\delta_{i,1}}(z_{i,1})G_{10}^{1-\delta_{i,1}}(z_{i,1})g_{20}^{\delta_{i,2}}(z_{i,2})G_{20}^{1-\delta_{i,2}}(z_{i,2})$. \item $b_i=1$: $h_{1i} \equiv g_{11}^{\delta_{i,1}}(z_{i,1})G_{11}^{1-\delta_{i,1}}(z_{i,1})g_{20}^{\delta_{i,2}}(z_{i,2})G_{20}^{1-\delta_{i,2}}(z_{i,2})$. \item $b_i=2$: $h_{2i} \equiv g_{10}^{\delta_{i,1}}(z_{i,1})G_{10}^{1-\delta_{i,1}}(z_{i,1})g_{21}^{\delta_{i,2}}(z_{i,2})G_{21}^{1-\delta_{i,2}}(z_{i,2})$. \item $b_i=3$: $h_{3i} \equiv g_{11}^{\delta_{i,1}}(z_{i,1})G_{11}^{1-\delta_{i,1}}(z_{i,1})g_{21}^{\delta_{i,2}}(z_{i,2})G_{21}^{1-\delta_{i,2}}(z_{i,2})$. \end{itemize} Collect the data and latent variables into three sets $\mbox{\boldmath{$Z$}}=\{z_{i,j}\}$, $\mbox{\boldmath{$\Delta$}}=\{\delta_{i,j}\}$ and $\mbox{\boldmath{$B$}}=\{b_i\}$, and define $\mbox{\boldmath{$\theta$}}=\left\{ \mbox{\boldmath{$\pi$}},\mu_1,\mu_2,\sigma_1^2,\sigma_2^2 \right\}$. 
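Before deriving the likelihood, the generative scheme above can be made concrete with a short simulation sketch (ours, not the authors' implementation; the parameter values mirror the first simulation setting of Section 3):
\begin{verbatim}
# A minimal simulation of steps 1-4 above (illustrative values).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 10000
pi = np.array([0.9, 0.0, 0.0, 0.1])       # P(b_i = k) for k = 0, 1, 2, 3
mu, sigma, K = -5.0, 1.0, -1.65           # signal component and latent cutoff

b = rng.choice(4, size=n, p=pi)           # step 1: co-existence pattern
sig1 = np.isin(b, (1, 3))                 # signal in list 1?
sig2 = np.isin(b, (2, 3))                 # signal in list 2?
z1 = np.where(sig1, rng.normal(mu, sigma, n), rng.normal(0.0, 1.0, n))  # step 2
z2 = np.where(sig2, rng.normal(mu, sigma, n), rng.normal(0.0, 1.0, n))

# steps 3-4: truncate at K and map latent values to p-values T = Phi(z);
# x = min(T, C) is the observed "survival time", delta the censoring indicator
delta1, delta2 = z1 <= K, z2 <= K
x1, x2 = norm.cdf(np.minimum(z1, K)), norm.cdf(np.minimum(z2, K))
keep = delta1 | delta2                    # loci reported in at least one list
print(keep.sum(), "loci observed,", (delta1 & delta2).sum(), "in both lists")
\end{verbatim}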
The full likelihood can be derived as: \begin{eqnarray} Pr(\mbox{\boldmath{$Z$}},\mbox{\boldmath{$\Delta$}},\mbox{\boldmath{$B$}}| \mbox{\boldmath{$\theta$}})=\prod_{i=1}^n\{\pi_0g_{10}^{\delta_{i,1}}(z_{i,1})G_{10}^{1-\delta_{i,1}}(z_{i,1})g_{20}^{\delta_{i,2}}(z_{i,2})G_{20}^{1-\delta_{i,2}}(z_{i,2})\}^{I(b_i=0)}\nonumber\\ *\{\pi_1g_{11}^{\delta_{i,1}}(z_{i,1})G_{11}^{1-\delta_{i,1}}(z_{i,1})g_{20}^{\delta_{i,2}}(z_{i,2})G_{20}^{1-\delta_{i,2}}(z_{i,2})\}^{I(b_i=1)}\nonumber\\ *\{\pi_2g_{10}^{\delta_{i,1}}(z_{i,1})G_{10}^{1-\delta_{i,1}}(z_{i,1})g_{21}^{\delta_{i,2}}(z_{i,2})G_{21}^{1-\delta_{i,2}}(z_{i,2})\}^{I(b_i=2)}\nonumber\\ *\{\pi_3g_{11}^{\delta_{i,1}}(z_{i,1})G_{11}^{1-\delta_{i,1}}(z_{i,1})g_{21}^{\delta_{i,2}}(z_{i,2})G_{21}^{1-\delta_{i,2}}(z_{i,2})\}^{I(b_i=3)}\nonumber\\ =\prod_{i=1}^n\{\pi_0h_{0i}\}^{I(b_i=0)}\{\pi_1h_{1i}\}^{I(b_i=1)}\{\pi_2h_{2i}\}^{I(b_i=2)}\{\pi_3h_{3i}\}^{I(b_i=3)}. \end{eqnarray} \subsection{Model fitting} To fit the model, we use an iterative EM algorithm similar to the one proposed by Li et al.\ \cite{RefLi} to estimate $\mbox{\boldmath{$\theta$}}$. \begin{enumerate} \item Initialize $\mbox{\boldmath{$\theta$}}$ using random values $\mbox{\boldmath{$\theta$}}_0$. \item Use the Kaplan-Meier \cite{RefKMest} estimator to estimate the marginal survival functions for $X_{i,1}$ and $X_{i,2}$. \item Given the initial $\mbox{\boldmath{$\theta$}}_0$, obtain pseudo-data $z_{i,1}=\hat{G}_1^{-1}(\hat{S}_1(x_{i,1}))$, $z_{i,2}=\hat{G}_2^{-1}(\hat{S}_2(x_{i,2}))$. \item Estimate parameters $\mbox{\boldmath{$\theta$}}$ based on the pseudo-data $z_{i,1}$ and $z_{i,2}$ using an EM algorithm \cite{RefEM}. \item Update $\hat{G}_1$ and $\hat{G}_2$ using the newly estimated $\mbox{\boldmath{$\theta$}}$, and update the pseudo-data $z_{i,1}$ and $z_{i,2}$ using the new $\hat{G}_1$ and $\hat{G}_2$. \item Repeat the previous two steps until the change in log-likelihood between two successive iterations is less than a pre-specified threshold. \end{enumerate} Details of the algorithm are given in the Appendix. \subsection{Statistical Inference} \label{s:inf} Once the model parameters are estimated, a coexistence probability (also called the probability of having reproducible signals) can be computed for each locus $i$ as: \begin{equation} cop(x_{i,1},x_{i,2})=Pr(b_i=3|(x_{i,1},x_{i,2}),\mbox{\boldmath{$\theta$}})=\frac{\pi_3h_{3i}}{\sum_{k=0}^3\pi_kh_{ki}}. \end{equation} Using these coexistence probabilities, we define two coexistence curves (COP curves) as: \begin{equation} COP_1(x_{i,1})=mean_{\left\{l: x_{l,1}\leq x_{i,1}\right\}}(cop(x_{l,1},x_{l,2})). \end{equation} \begin{equation} COP_2(x_{i,2})=mean_{\left\{l: x_{l,2}\leq x_{i,2}\right\}}(cop(x_{l,1},x_{l,2})). \end{equation} Intuitively, $COP_1(x_{i,1})$ gives, among the loci whose scores in list 1 are no greater than $x_{i,1}$, the proportion that are true signals in both lists. Similarly, $COP_2(x_{i,2})$ gives, among the loci ranked at least as high as locus $i$ in list 2, the fraction that represent signals reproducible in both lists. From these two COP curves, one can see how the co-existence strength between the two lists changes from the most significant loci to the least significant ones.
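A compact sketch of this inference step, given fitted parameters, is below (ours, not the authors' R package; the function names are illustrative). It evaluates the four component likelihoods $h_{ki}$ at the pseudo-data, forms the posterior coexistence probabilities, and accumulates the running mean that defines $COP_1$:
\begin{verbatim}
# Posterior coexistence probabilities and the COP_1 curve, given pseudo-data
# (z1, z2), censoring indicators (d1, d2) and fitted parameters (pi, mu, sigma).
import numpy as np
from scipy.stats import norm

def comp_lik(z, d, mean, sd):
    # g^delta * G^(1-delta): density if observed, survival function if censored
    return np.where(d, norm.pdf(z, mean, sd), norm.sf(z, mean, sd))

def coexistence_prob(z1, d1, z2, d2, pi, mu, sigma):
    n1, s1 = comp_lik(z1, d1, 0.0, 1.0), comp_lik(z1, d1, mu[0], sigma[0])
    n2, s2 = comp_lik(z2, d2, 0.0, 1.0), comp_lik(z2, d2, mu[1], sigma[1])
    h = np.stack([n1 * n2, s1 * n2, n1 * s2, s1 * s2])   # h_{0i}, ..., h_{3i}
    w = pi[:, None] * h
    return w[3] / w.sum(axis=0)            # P(b_i = 3 | data, theta)

def cop1_curve(x1, cop):
    # COP_1(x) = mean of cop over loci with observed score in list 1 <= x
    order = np.argsort(x1)
    return x1[order], np.cumsum(cop[order]) / np.arange(1, cop.size + 1)
\end{verbatim}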
To facilitate the comparison with the IDR approach in \cite{RefLi}, we also define: \begin{equation} IDR_1(x_{i,1})=1-COP_1(x_{i,1}) \end{equation} \begin{equation} IDR_2(x_{i,2})=1-COP_2(x_{i,2}) \end{equation} $IDR_1(x_{i,1})$ represents the fraction of noise or non-concordant (non-reproducible) signals among loci whose score in list 1 does not exceed $x_{i,1}$. $IDR_2(x_{i,2})$ can be interpreted similarly. Our model and measures allow for asymmetry of the signals in the two lists. For instance, if one list is obtained from a poor quality experiment with low signal-to-noise ratio and the other list is from a high-quality experiment with high signal-to-noise ratio, the two COP curves will be different. In contrast, the original IDR approach only produces one IDR curve to show the concordance. As a result, it cannot show the difference between two asymmetric datasets. \section{Simulations} In this section, we use simulations to illustrate SCOP and compare it with the Venn diagram and IDR approaches. \subsection{Characterization of degree of concordance between two rank lists} \subsubsection{Case I} Case I illustrates why SCOP is better at characterizing the degree of concordance between two rank lists. Consider two lists, each with 10,000 loci. Since the copula model is invariant to monotone transformations of the marginal scores, we generated the simulation data by first generating latent random variables $Z_{i,1}$ and $Z_{i,2}$ and then transforming them to p-values, denoted as $T_{i,j}$, from a one-sided $z$-test of $H_0: \mu=0$ vs $H_1: \mu<0$. Specifically, $T_{i,j}=P(Z<Z_{i,j})$ where $Z$ follows the standard normal distribution $N(0,1)$. For both lists, a normal distribution $N(-5, 1)$ was used as the signal component for the latent variable $Z_{i,j}$, and $N(0,1)$ was used as the noise component. The mixture proportions of the four possible co-existence patterns were $\mbox{\boldmath{$\pi$}}=(0.9,0,0,0.1)$. In other words, for the full lists without truncation, only 10\% of the loci represent signals in both lists, and the other 90\% of the loci are noise. All $Z_{i,j}$ values greater than $-1.65$, corresponding to p-values $>0.05$ in a one-sided z-test, were truncated. The p-values $T_{i,j}$ were then generated according to the process described in Section 2.2. Under this setting, the two lists are symmetric in terms of their signal-to-noise ratio. To reflect the scenario in real applications, loci whose p-values were greater than 0.05 in both lists were excluded from the analysis. Meanwhile, all the other loci, i.e., those censored in only one of the two lists or in neither, were retained. As shown in Figure 2 (a), a total of 1,872 loci passed the p-value cutoff in either list 1 or list 2. Among them, 56.1\% (1,050) were reported in both lists. Nevertheless, the Venn diagram does not characterize the rank concordance between the two lists. \begin{figure} \caption{(a) The Venn diagram for Case I. (b) IDR by Li et al.\ (2011) for Case I. (c) $IDR_1$ for Case I. (d) $IDR_2$ for Case I. (e)-(h) The Venn diagram, IDR by Li et al.\ (2011), $IDR_1$ and $IDR_2$ for Case II.} \end{figure} We then applied the IDR approach to the 1,050 loci reported in both lists, consistent with how the IDR analysis was performed by \cite{RefLi}. Figure 2 (b) shows the corresponding IDR curve. Based on the curve, the IDR analysis would claim high reproducibility between the two datasets.
However, this is clearly not the case, since Figure 2 (a) shows that 43.9\% of the 1,872 reported loci were not shared by the two lists. This illustrates why ignoring the list-specific loci in the IDR analysis can be misleading. The high reproducibility that the method reports only describes the degree of concordance among the loci common to both lists. It does not characterize the concordance or the reproducibility of the whole lists. This has important implications. IDR is widely used in the ENCODE project to measure the reproducibility of replicate experiments, and to evaluate the performance of data analysis algorithms in terms of how consistently they perform when applied to replicate experiments \cite{RefEncodeGuid}. The example here shows that IDR can be very misleading if one wants to measure the global reproducibility of two replicate experiments, or to evaluate whether a data analysis algorithm is stable. This is caused by ignoring the list-specific loci, which the original IDR model cannot accommodate. Finally, we applied SCOP to the simulated data. Figures 2 (c) and (d) show the corresponding $IDR_1$ and $IDR_2$ curves, together with the underlying truth curves. The $IDR_1$ and $IDR_2$ curves (red dashed lines) match the underlying truth curves (black solid lines) very well. As a benchmark comparison, we also counted, among the top ranked $k$ loci in one list, how many were absent from the other list, and created the corresponding curves, called ``NaiveVenn'' (dark green dotted lines) hereafter. From another perspective, NaiveVenn curves were constructed by fixing one circle in the Venn diagram, varying the other circle with different rank cutoffs, and counting the overlap proportions. To a certain extent, ``NaiveVenn'' can be viewed as a naive estimate of $IDR_1$ and $IDR_2$. However, NaiveVenn underestimates the irreproducibility for loci occurring in only one list, whereas SCOP is able to borrow information from all loci, whether complete or missing in one list, to better estimate the signal and noise proportions in the data. The $IDR_1$ and $IDR_2$ curves clearly demonstrate that the fraction of concordant signal in the two lists is not high, and that irreproducible loci make up 40\% of both observed lists. The $IDR_1$ and $IDR_2$ curves also show that the signal concordance decreases as one moves from top ranked loci to lower ranked loci, a trend not directly revealed by the Venn diagram approach. With these curves, one may adjust the cutoff for calling signals based on the degree of reproducibility between two independent experiments, a functionality not usually provided by the Venn diagram approach. \subsubsection{Case II} In Case II, each rank list contains 1,000 loci. The mixture proportions of the co-existence patterns were $\mbox{\boldmath{$\pi$}}=(0.1,0,0,0.9)$. For both lists, the noise and signal components for generating the latent random variables $Z_{i,j}$ were again assumed to follow $N(0,1)$ and $N(-5,1)$ respectively. The $Z_{i,j}$ values were again truncated at $-1.65$. Among the 1,083 loci that passed the cutoff in either list, 98.8\% (1,070) were found in both lists. Figures 2(g) and (h) show that SCOP accurately characterizes the degree of signal concordance between the two lists (compare the red and the black curves). Comparing Figure 2(b) in Case I and Figure 2(f) in Case II, one can see that the original IDR approach would claim high reproducibility in both cases. However, Figures 2(c)(d) and 2(g)(h) clearly demonstrate that these two cases are different.
In Case I, 40\% of all loci are claimed as noise (Figure 2 (c)(d)) for the observed lists, whereas only 1.5\% of all loci in Case II are estimated as noise (Figure 2 (g)(h)). In summary, the two simulations above show that the overlap revealed by Venn diagrams does not contain all information about the degree of signal concordance and how it changes with rank. They also show that the IDR computed using the loci present in both lists can be misleading for characterizing global concordance or reproducibility. By incorporating the censored data into the analysis, SCOP addresses both issues and can provide a better characterization of concordance or reproducibility. \subsection{Uncovering list-specific characteristics} \begin{figure} \caption{(a) The Venn diagram for Case III. (b) IDR by Li et al.\ (2011) for Case III. (c) $IDR_1$ for Case III. (d) $IDR_2$ for Case III.} \end{figure} Unlike the IDR approach which only produces one IDR curve, SCOP creates two curves, one for each rank list. Using these two curves, one can explore characteristics specific to each rank list. For instance, besides measuring the overall concordance of two rank lists, IDR is also used to determine where to cut the rank lists to keep only the loci that are likely to be reproducible in independent experiments, that is, it serves a role similar to the false discovery rate (FDR). In many real applications, one rank list is obtained from a high-quality experiment, whereas the other list is obtained from a noisy dataset; thus, the two rank lists may be asymmetric in terms of their signal-to-noise ratio. In such a scenario, one may want to have a more detailed view of each list. Since the IDR approach produces only one IDR curve for loci shared by both lists, it does not reveal the asymmetry between the two lists. Using this IDR curve to choose a cutoff forces the same cutoff to be applied to both lists. This may result in decreased power in detecting reproducible loci. In contrast, SCOP allows one to estimate IDR separately for each list and to observe the asymmetry of data quality for the two lists. To demonstrate, we generated scores for 10,000 loci and created two rank lists using a procedure similar to Case I in Section 3.1. For both lists, the signal and noise components were $N(-3,1)$ and $N(0,1)$ respectively. The latent variables in both lists were censored at $-1.65$. The mixture proportions of the four co-existence patterns were set as $\mbox{\boldmath{$\pi$}}=(0.3,0.5,0,0.2)$. In this case, 70\% of the loci in the complete list 1 are signals, but only 20\% of all loci are signals in both complete lists, corresponding to signal proportions of 96\% and 28\% in the observed lists 1 and 2. This simulation is referred to as Case III. When the IDR approach was applied to analyze the loci present in both lists, the IDR estimates were very conservative compared to the true FDR (Figure 3(b)). This is because the asymmetry of the two lists leads to high variability, inflating the error rate estimates. In contrast, the $IDR_1$ and $IDR_2$ curves produced by SCOP accurately estimated the proportion of irreproducible signals in each list (Figure 3(c)(d)) and indicated that the two lists have asymmetric signals. Thus, a separate analysis rather than a pooled one is needed for these two lists. Moreover, Figure 3(d) once again illustrates that NaiveVenn can underestimate the irreproducibility.
The reason is that it fails to distinguish between the signals and noise in the overlap part of the Venn diagram, and hence count both of them in the calculation. \section{Real Datasets} \subsection{Assessing reproducibility of replicate experiments} We downloaded two replicate ChIP-seq experiments for transcription factor NF-kB in cell line Gm10847 from the ENCODE \cite{RefEncode} together with their corresponding input data (Table 1). Peak list A was called using CisGenome \cite{RefCisgenome} by comparing sample 1 with sample 3 and 4 at cutoff of FDR=0.01; similarly, peak list B was called by comparing sample 2 with sample 3 and 4. For each peak, we extracted the 150bp window centered at the peak summit to ensure the same length in peaks. We then compared the two peak lists. The Venn diagram in Figure 4(a) shows that the majority of the loci in these two peak lists were different. Only 39.0\% of loci in list A and 20.5\% of loci in list B were found in the other list. Nevertheless, the IDR analysis of the shared loci gives a low IDR estimate of 0.015, misleadingly suggesting high reproducibility between the two replicates (Figure 4(b)). In contrast, SCOP was able to show that the two replicate experiments have low reproducibility and high IDRs (Figure 4(c)(d)). \begin{table} \scalebox{0.9}{ \begin{tabular}{cllll} \hline Id& Section & List name & File name & Experiment type\\ \hline 1&$4.1$ & RepA &wgEncodeSydhTfbsGm10847NfkbTnfaIggrabAlnRep2.bam & NF-kB \\ 2&&RepB & wgEncodeSydhTfbsGm10847NfkbTnfaIggrabAlnRep4.bam & NF-kB \\ 3&& &wgEncodeSydhTfbsGm10847InputIggmusAlnRep1.bam & Input \\ 4&& & wgEncodeSydhTfbsGm10847InputIggmusAlnRep2.bam & Input \\ 5&$4.2$ & FoxA1 &wgEncodeHaibTfbsHepg2Foxa1sc6553V0416101AlnRep1.bam & FoxA1 \\ 6&& FoxA1 &wgEncodeHaibTfbsHepg2Foxa1sc6553V0416101AlnRep2.bam & FoxA1 \\ 7&& FoxA2 &wgEncodeHaibTfbsHepg2Foxa2sc6554V0416101AlnRep1.bam& FoxA2 \\ 8&& FoxA2 &wgEncodeHaibTfbsHepg2Foxa2sc6554V0416101AlnRep2.bam& FoxA2 \\ 9&& &wgEncodeHaibTfbsHepg2RxlchV0416101AlnRep1.bam& Input \\ 10&& &wgEncodeHaibTfbsHepg2RxlchV0416101AlnRep2.bam& Input \\ \hline \end{tabular}} \caption{Data description for real ENCODE ChIP-seq datasets.} \end{table} \begin{figure} \caption{Different measures for two peak lists constructed from NF-kB ChIP-seq datasets in cell line Gm10847 from the ENCODE project. (a) The Venn diagram. (b) IDR by Li et al(2011).(c) $IDR_1$ for Replicate A. (d) $IDR_2$ for Replicate B.} \end{figure} \subsection{Characterizing co-binding of two transcription factors (TF)} FoxA transcription factors are a key family of TFs that regulate gene activities in liver cancer. Biologists are interested in how members in this TF family interact with each other and whether different members bind to the same genomic loci in liver cancer cells. The ENCODE project has generated ChIP-seq data for both FoxA1 and FoxA2 in a liver cancer cell line Hepg2. These data can be used to answer the questions raised in Section 1. Using CisGenome \cite{RefCisgenome}, we called 65,535 binding peaks for FoxA1 (comparing sample 5 and 6 with sample 9 and 10) and 48,503 peaks for FoxA2 (comparing sample 7 and 8 with sample 9 and 10), respectively at the FDR=0.01 cutoff. \begin{figure} \caption{COP curves for FoxA1 and FoxA2 peak lists constructed from ChIP-seq datasets in Hepg2 cell line from the ENCODE project.} \end{figure} Finally, we applied SCOP to characterize the concordance between the two lists. 
Figure 5(a) shows that among the top ranked FoxA1 peak regions, about 60\% are also bound by FoxA2. As one moves to the lower ranked FoxA1 peaks, a lower percentage are simultaneously bound by FoxA2. Thus, robust FoxA1 binding seems to require FoxA2 binding at the same location. In contrast, the very top ranked FoxA2 peaks are more likely to be FoxA2 specific and less likely to be shared for FoxA1 binding. The middle ranked FoxA2 peaks are more often bound by FoxA1. The co-binding proportion drops again for FoxA2 peaks with low ranking which are increasingly more likely to be noise. This suggests that FoxA2 may play its regulatory role in a different mode compared to FoxA1. This information is not immediately revealed by the Venn diagram and IDR approach. \section{Discussion} \label{s:discuss} In summary, SCOP offers a new solution for comparing two rank lists. SCOP takes into account both the overall proportion of overlap shared by the two lists and the consistency of ranks along them. This overcomes the shortcomings of the Venn diagram and the IDR approach, and allows better characterizing of the concordance and global reproducibility between two datasets. Our simulation studies show that drawing conclusions on concordance from Venn diagrams may not reveal all the information in the data. The same degree of overlap may correspond to different signal-to-noise ratio. IDR, on the other hand, is limited in terms of characterizing the global reproducibility between two datasets since it focuses on analyzing loci shared by both lists. In light of these results, the SCOP curves should provide a better solution to assessing data quality (e.g., reproducibility between replicate ChIP-seq samples) and computational algorithms (e.g., evaluate consistency of the results when a method is applied to two replicate experiments) in projects such as ENCODE. Our current model considers the problem of comparing two rank lists. An interesting future research topic is how to extend it to comparing multiple rank lists. Currently, one can apply SCOP to compare each pair of lists. However, this pairwise comparison approach does not directly reveal higher order relationships. For instance, with three datasets, one can also ask how many loci are shared by all three lists in addition to asking how many loci are shared by each pair of lists. For $D$ rank lists, there are $2^D$ combinatorial signal coexistence patterns. As $D$ increases, the complexity of the problem increases exponentially. Efficient ways to perform the comparison and summarize results, similar to those in \cite{RefiASeq}, need to be developed in order to solve this problem. Currently, an R package for SCOP is available upon request. The package will soon be submitted to Bioconductor. \setcounter{section}{0} \setcounter{equation}{0} \setcounter{table}{0} \renewcommand{A.\arabic{equation}}{A.\arabic{equation}} \renewcommand{A.\arabic{table}}{A.\arabic{table}} \renewcommand{A.\arabic{figure}}{A.\arabic{figure}} \section*{Appendix} \subsection*{Iterative algorithm for model fitting}\label{app} Here we present the details of the iterative algorithms used to estimate $\mbox{\boldmath{$\theta$}}=(\mbox{\boldmath{$\pi$}},\mu_1,\mu_2,\sigma_1^2,\sigma_2^2)$. \begin{enumerate} \item Initialize parameters $\mbox{\boldmath{$\theta$}}=\mbox{\boldmath{$\theta_0$}}$. \item Estimate the survival function $S_j(x_{i,j})$ using the Kaplan-Meier estimator. \item Compute the pseudo-data $\hat{z}_{i,j}=G^{-1}_j(\hat{S}_{j}(x_{i,j})|\mbox{\boldmath{$\theta$}})$. 
Since $G^{-1}_j$ does not have a closed form, $G_j$ is first computed on a grid of 5,000 points over the range $[min(-5,\mu_j-5*\sigma_j),max(5,\mu_j+5*\sigma_j)]$. $\hat{z}_{i,j}$ is then obtained through linear interpolation on the grid. \item Run EM algorithm to search for $\hat{\mbox{\boldmath{$\theta$}}}$ that maximizes the log-likelihood of pseudo data $Pr(\hat{\mbox{\boldmath{$Z$}}},\mbox{\boldmath{$\Delta$}}| \mbox{\boldmath{$\theta$}})=\sum_{\mbox{\boldmath{$B$}}}Pr(\hat{\mbox{\boldmath{$Z$}}},\mbox{\boldmath{$\Delta$}},\mbox{\boldmath{$B$}}| \mbox{\boldmath{$\theta$}})$. The resulting $\hat{\mbox{\boldmath{$\theta$}}}$ is denoted as $\mbox{\boldmath{$\theta^t$}}$. 5. Iterate between steps 3 and 4 until the change in log-likelihood between the two nearby iterations is less than a pre-specified threshold. \end{enumerate} Below are details of the EM algorithm in step 4. In the E-step, one evaluates the Q-function \begin{equation} Q(\mbox{\boldmath{$\theta$}}|\mbox{\boldmath{$\theta^{old}$}})=Q(\mbox{\boldmath{$\pi$}},\mu_1,\mu_2,\sigma_1^2,\sigma_2^2| \mbox{\boldmath{$\pi^{old}$}},\mu_1^{old},\mu_2^{old},\sigma_1^{2old},\sigma_2^{2old})=E_{old}(\ln Pr(\hat{\mbox{\boldmath{$Z$}}},\mbox{\boldmath{$\Delta$}},\mbox{\boldmath{$B$}}| \mbox{\boldmath{$\theta^{old}$}})) \end{equation} Here the expectation is taken with respect to probability distribution $Pr(\mbox{\boldmath{$B$}}| \hat{\mbox{\boldmath{$Z$}}},\mbox{\boldmath{$\Delta$}},\mbox{\boldmath{$\theta^{old}$}})$. \begin{eqnarray} \ln Pr(\hat{\mbox{\boldmath{$Z$}}},\mbox{\boldmath{$\Delta$}},\mbox{\boldmath{$B$}}| \mbox{\boldmath{$\theta$}})=\sum_{i=1}^nI(b_i=0)*\{\ln\pi_0+\delta_{i,1}\ln g_{10}(\hat{z}_{i,1})+(1-\delta_{i,1})\ln G_{10}(\hat{z}_{i,1})\nonumber\\ +\delta_{i,2}\ln g_{20}(\hat{z}_{i,2})+(1-\delta_{i,2})\ln G_{20}(\hat{z}_{i,2})\}\nonumber\\ +(b_i=1)*\{\ln\pi_1+\delta_{i,1}\ln g_{11}(\hat{z}_{i,1})+(1-\delta_{i,1})\ln G_{11}(\hat{z}_{i,1})\nonumber\\ +\delta_{i,2}\ln g_{20}(\hat{z}_{i,2})+(1-\delta_{i,2})\ln G_{20}(\hat{z}_{i,2})\}\nonumber\\ +I(b_i=2)*\{\ln\pi_2+\delta_{i,1}\ln g_{10}(\hat{z}_{i,1})+(1-\delta_{i,1})\ln G_{10}(\hat{z}_{i,1})\nonumber\\ +\delta_{i,2}\ln g_{21}(\hat{z}_{i,2})+(1-\delta_{i,2})\ln G_{21}(\hat{z}_{i,2})\}\nonumber\\ +I(b_i=3)*\{\ln\pi_3+\delta_{i,1}\ln g_{11}(\hat{z}_{i,1})+(1-\delta_{i,1})\ln G_{11}(\hat{z}_{i,1})\nonumber\\ +\delta_{i,2}\ln g_{21}(\hat{z}_{i,2})+(1-\delta_{i,2})\ln G_{21}(\hat{z}_{i,2})\}. \end{eqnarray} Therefore, \begin{eqnarray} Q(\mbox{\boldmath{$\theta$}}|\mbox{\boldmath{$\theta^{old}$}})&=&\sum_{i=1}^n\sum_{k=0}^3P_{old}(b_i=k)\ln\pi_k\nonumber\\ &+&\sum_{i=1}^n\{(Pr_{old}(b_i=1)+Pr_{old}(b_i=3))(\delta_{i,1}\ln g_{11}(\hat{z}_{i,1})+(1-\delta_{i,1})\ln G_{11}(\hat{z}_{i,1}))\nonumber\\ &&+(Pr_{old}(b_i=0)+Pr_{old}(b_i=2))(\delta_{i,1}\ln g_{10}(\hat{z}_{i,1})+(1-\delta_{i,1})\ln G_{10}(\hat{z}_{i,1}))\}\nonumber\\ &&+(Pr_{old}(b_i=2)+Pr_{old}(b_i=3))(\delta_{i,2}\ln g_{21}(\hat{z}_{i,2})+(1-\delta_{i,2})\ln G_{21}(\hat{z}_{i,2}))\nonumber\\ &&+(Pr_{old}(b_i=0)+Pr_{old}(b_i=1))(\delta_{i,2}\ln g_{20}(\hat{z}_{i,2})+(1-\delta_{i,2})\ln G_{20}(\hat{z}_{i,2}))\nonumber\\. \end{eqnarray} In the M-step, one finds $\mbox{\boldmath{$\theta$}}$ that maximize the Q-function $Q(\mbox{\boldmath{$\theta$}}|\mbox{\boldmath{$\theta^{old}$}})$. Denote them by $\mbox{\boldmath{$\hat{\theta}^{new}$}}=(\mbox{\boldmath{$\pi^{new}$}},\mu_1^{new},\mu_2^{new},\sigma_1^{2new},\sigma_2^{2new})$. 
Solving \begin{equation} \frac{\partial Q(\mbox{\boldmath{$\theta$}}|\mbox{\boldmath{$\theta^{old}$}})}{\partial \pi_k}=0 \end{equation} We have: \begin{equation} \hat{\pi}_k^{new}=\frac{\sum_{i=1}^nPr_{old}(b_i=k)}{n} \end{equation} Recall: \begin{eqnarray} Pr_{old}(b_i=k)&=&Pr(b_i=k|\hat{z}_{i,1},\hat{z}_{i,2},\delta_{i,1},\delta_{i,2},\mbox{\boldmath{$\hat{\theta}^{old}$}})\\ &=&\frac{Pr(b_i=k,\hat{z}_{i,1},\hat{z}_{i,2}|\delta_{i,1},\delta_{i,2},\mbox{\boldmath{$\hat{\theta}^{old}$}})}{Pr(\hat{z}_{i,1},\hat{z}_{i,2}|\delta_{i,1},\delta_{i,2},\mbox{\boldmath{$\hat{\theta}^{old}$}})}\nonumber\\ \nonumber \end{eqnarray} and \begin{equation} Pr(b_i=0,\hat{z}_{i,1},\hat{z}_{i,2}|\delta_{i,1},\delta_{i,2},\mbox{\boldmath{$\hat{\theta}$}})=\pi_0g_{10}^{\delta_{i,1}}(\hat{z}_{i,1})G_{10}^{1-\delta_{i,1}}(\hat{z}_{i,1})g_{20}^{\delta_{i,2}}(\hat{z}_{i,2})G_{20}^{1-\delta_{i,2}}(\hat{z}_{i,2}) \end{equation} \begin{equation} Pr(b_i=1,\hat{z}_{i,1},\hat{z}_{i,2}|\delta_{i,1},\delta_{i,2},\mbox{\boldmath{$\hat{\theta}$}})=\pi_1g_{11}^{\delta_{i,1}}(\hat{z}_{i,1})G_{11}^{1-\delta_{i,1}}(\hat{z}_{i,1})g_{20}^{\delta_{i,2}}(\hat{z}_{i,2})G_{20}^{1-\delta_{i,2}}(\hat{z}_{i,2}) \end{equation} \begin{equation} Pr(b_i=2,\hat{z}_{i,1},\hat{z}_{i,2}|\delta_{i,1},\delta_{i,2},\mbox{\boldmath{$\hat{\theta}$}})=\pi_2g_{10}^{\delta_{i,1}}(\hat{z}_{i,1})G_{10}^{1-\delta_{i,1}}(\hat{z}_{i,1})g_{21}^{\delta_{i,2}}(\hat{z}_{i,2})G_{21}^{1-\delta_{i,2}}(\hat{z}_{i,2}) \end{equation} \begin{equation} Pr(b_i=3,\hat{z}_{i,1},\hat{z}_{i,2}|\delta_{i,1},\delta_{i,2},\mbox{\boldmath{$\hat{\theta}$}})=\pi_3g_{11}^{\delta_{i,1}}(\hat{z}_{i,1})G_{11}^{1-\delta_{i,1}}(\hat{z}_{i,1})g_{21}^{\delta_{i,2}}(\hat{z}_{i,2})G_{21}^{1-\delta_{i,2}}(\hat{z}_{i,2}) \end{equation} $Pr_{old}(b_i=k)$ can be computed by replacing $\mbox{\boldmath{$\hat{\theta}$}}$ with $\mbox{\boldmath{$\hat{\theta}^{old}$}}$ accordingly. Only $\sum_{i=1}^n(Pr_{old}(b_i=1)+Pr_{old}(b_i=3))(\delta_{i,1}\ln g_{11}(\hat{z}_{i,1})+(1-\delta_{i,1})\ln G_{11}(\hat{z}_{i,1}))$ in Equation A.3 involves $(\mu_1,\sigma_1)$. Because $G_{11}(\hat{z}_{i,1})$, the tail probability of a normal distribution, has no close form, we use the R function $optim$ with the ``L-BFGS-B'' option to obtain the values that maximize $\sum_{i=1}^n(Pr_{old}(b_i=1)+Pr_{old}(b_i=3))(\delta_{i,1}\ln g_{11}(\hat{z}_{i,1})+(1-\delta_{i,1})\ln G_{11}(\hat{z}_{i,1}))$. $(\mu_2,\sigma_2)$ are searched in a similar fashion to maximize $\sum_{i=1}^n(Pr_{old}(b_i=2)+Pr_{old}(b_i=3))(\delta_{i,2}\ln g_{21}(\hat{z}_{i,2})+(1-\delta_{i,2})\ln G_{21}(\hat{z}_{i,2}))$. \end{document}
\begin{document} \title[Real operator spaces and algebras]{Real structure in operator spaces, injective envelopes and $G$-spaces} \author{David P. Blecher} \address{Department of Mathematics, University of Houston, Houston, TX 77204-3008, USA} \email{[email protected]} \author{Arianna Cecco} \address{Department of Mathematics, University of Houston, Houston, TX 77204-3008, USA} \email{[email protected]} \author{Mehrdad Kalantar} \address{Department of Mathematics, University of Houston, Houston, TX 77204-3008, USA} \email{[email protected]} \date{3/29/2023} \thanks{MK and AC are supported by the NSF Grant DMS-2155162. DB is supported by a Simons Foundation Collaboration Grant.} \subjclass[2020]{Primary 46L07, 47L05, 47L25, 47L30; Secondary: 37A55, 46L55, 46M10, 47L75} \keywords{Operator space, operator algebra, real operator space, group action, injective envelope, complexification} \begin{abstract} We present some more foundations for a theory of real structure in operator spaces and algebras, in particular concerning the real case of the theory of injectivity, and the injective, ternary, and $C^*$-envelope. We consider the interaction between these topics and the complexification. We also generalize many of these results to the setting of operator spaces and systems acted upon by a group. \end{abstract} \maketitle \section{Introduction} Ruan initiated the study of real operator spaces in \cite{ROnr,RComp}, and this study was continued in \cite{Sharma} and \cite{BT}. Recently there has been an increased interest in this topic, and in its links to quantum physics (see e.g.\ \cite{Ch1, Ch2} and references therein). Indeed real structure occurs naturally in very many areas of mathematics, as is mentioned also for example in the first paragraphs of \cite{BT} and \cite{Sharma}, or in \cite{Ros}. In the present paper we examine some aspects of real and complex structure in operator spaces, establishing the real case of several important aspects of the theory of injectivity, and the injective, ternary, and $C^*$-envelope, advancing on Sharma's work in \cite{Sharma}. We also discuss the interaction between these topics and the complexification, and establish some natural relations between injectivity in the real and the complex categories. In fact, we undertake much of this study in a more general setting where the given operator space $X$ is equipped with a completely isometric action of a discrete group $G$. Then the results in the usual setting of operator spaces follow from the case where the group $G$ is trivial. Recently, the theory of equivariant injective envelopes has found very many striking applications in various problems concerning the structure theory of group $C^*$-algebras (see e.g.\ \cite{KK,BKKO,Bry,KSg,KS,KKLRU} and references therein). In the light of this, and considering the recent emergence of interest in real $C^*$-algebras alluded to above, it is natural to investigate $G$-spaces in the real case. Turning to the structure of our paper, in Section 2 we review the basic theory of operator space complexifications, including a short proof of Ruan's uniqueness of a reasonable complexification. In Section \ref{chcvn} we give a characterization of commutative real $W^*$-algebras in the most general case. (We stated this result in \cite{BT}, and it will be used in \cite{BReal} to prove a couple of nice facts about real operator spaces and algebras.) We begin Section \ref{Coen} gently by considering the relations between the complexification and injective and $C^*$-envelopes. 
As pointed out in \cite{BWinv} many aspects of the study of real operator spaces or algebras are equivalent to the study of real structure in complex spaces or algebras, in particular the features of conjugate linear completely isometric period 2 automorphisms on complex operator spaces or algebras. We continue Section \ref{Coen} by considering real and complex operator spaces and systems with a $G$-action by a discrete group $G$, and generalize much of our earlier theory (which in some sense is the case that $G = \Zdb_2$) to this setting. In particular we study the $G$-$C^*$-envelope and $G$-injective envelope, following Hamana's work in the complex case \cite{Hamiecds, Hamiods}. The second author has undertaken a very systematic investigation of injective envelopes in various categories in \cite{Cecco,CeccoTh}, and part of this (and several of the preliminary results needed for this) is established in this section. For example we prove that the real and complex $G$-injective envelopes of a complex $G$-operator space coincide, and that the $G$-injective envelope of the complexification of a real $G$-operator space $X$ is canonically identified with the complexification of the $G$-injective envelope of $X$. We also prove similar facts for $G$-$C^*$ and $G$-ternary envelopes. We remark that Hamana works in a great generality in e.g.\ \cite{Hamiods} that includes all locally compact groups, and indeed Hopf von Neumann algebras, which are more general still. Most of our results in Section \ref{Coen} seem to be true in the real case of Hamana's hugely general setting, by appropriate modification of the same basic proof ideas, and using (the real case of) the matching facts in \cite{Hamiods}. However for simplicity we will stick to discrete groups since the idea of the proofs would be essentially unchanged in the general case. An exception to this remark is our results on finite groups. These seem to be easily generalizable to actions by a compact group, but seem to us to be difficult to generalize much further than this (see Remark below Theorem \ref{Ginj}). In Section \ref{fe} we discuss extending real structure on an algebra to containing von Neumann algebras. The reader will need to be familiar with the basics of operator spaces and von Neumann algebras as may be found in early chapters of \cite{BLM,ER,Pau, Pisbk}, and e.g.\ \cite{P}. With familiarity with the very basics the reader will have no problems following Sections 2, \ref{chcvn}, and \ref{fe}. Section \ref{Coen} will assume more familiarity with (the usual complex) operator space theory. For the injective envelope, $C^*$-envelope, and ternary envelope in the complex case we refer to \cite[Sections 4.2--4.4]{BLM}, \cite[Section 6.2]{ER}, \cite[Chapter 15]{Pau}, or the papers of Hamana and Ruan referenced there. Then \cite{Good, Li,ARU} are texts on the theory of real $C^*$-algebras and real $W^*$-algebras. We recall that a real $C^*$-algebra is a closed real $*$-subalgebra of $B(H)$ for a real Hilbert space $H$. Abstractly it is a real Banach $*$-algebra whose $*$-algebra complexification has a $C^*$-norm making it a complex $C^*$-algebra. Equivalently, it is a real Banach $*$-algebra satisfying the $C^*$-identity whose selfadjoint elements have real spectrum (or alternatively, with $a^* a$ having positive spectrum, or $1+a^*a$ being invertible, for $a \in A$). See \cite[Chapter 5]{Li}. 
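A standard example shows why such an extra spectral condition is needed: $\Cdb$, viewed as a real Banach $*$-algebra with the trivial involution $z^* = z$, satisfies the $C^*$-identity, yet $1 + i^* i = 1 + i^2 = 0$ is not invertible, so it is not a real $C^*$-algebra.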
A space $Z$ in a category will be called {\em injective} if whenever $E \subset F$ as subobjects in the category then any morphism (e.g.\ real linear complete contraction in the category of real operator spaces) $T : E \to Z$ has an extending morphism $\tilde{T} : F \to Z$. E.g.\ $B(H)$ is injective in the category of real operator spaces, if $H$ is a real Hilbert space \cite{ROnr}. A preliminary study of injective spaces, the injective envelope $I(X)$ and $C^*$-envelope $C^*_e(X)$ in the real case may be found in \cite{Sharma}. Indeed we will be using selected results and notation from the existing theory of real operator spaces \cite{ROnr,RComp,Sharma,BT}. Section \ref{Coen} will also use basic ideas from Hamana's theory of $G$-injective envelopes of $G$-spaces from \cite{Hamiecds}. The letters $H, K$ are reserved for real Hilbert spaces. We sometimes write the complex number $i$ as $\iota$ to avoid confusion with matrix subscripting. For us a {\em projection} in a $*$-algebra is always an orthogonal projection (so $p = p^2 = p^*$). A normed algebra $A$ is {\em unital} if it has an identity $1$ of norm $1$, and a map $T$ is unital if $T(1) = 1$. We say that $A$ is {\em approximately unital} if it has a contractive approximate identity (cai). We write $X_+$ for the positive operators (in the usual sense) that happen to belong to $X$. If $X$ is a subspace of a (real or complex) $C^*$-algebra $B$ then we write $C^*(X)$ or $C^*_B(X)$ for the $C^*$-subalgebra of $B$ generated by $X$. If $T : X \to Y$ we write $T^{(n)}$ for the canonical `entrywise' amplification taking $M_n(X)$ to $M_n(Y)$. The completely bounded norm is $\| T \|_{\rm cb} = \sup_n \, \| T^{(n)} \|$, and $T$ is completely contractive if $\| T \|_{\rm cb} \leq 1$. A map $T$ is said to be {\em positive} if it takes positive elements to positive elements, and {\em completely positive} if $T^{(n)}$ is positive for all $n \in \Ndb$. A UCP map is unital and completely positive. A real operator space may either be viewed as a real subspace of $B(H)$ for a real Hilbert space $H$, or abstractly as a vector space with a norm $\| \cdot \|_n$ on $M_n(X)$ for each $n \in \Ndb$ satisfying the conditions of Ruan's characterization \cite{ROnr}. Sometimes the sequence of norms $(\| \cdot \|_n)$ is called the {\em operator space structure}. All spaces in the present paper are such operator spaces at the least, but often have more structure. Then $X_c = X + i X$, this is the complexification that will be discussed in detail in Section 2. If $T : X \to Y$ then we write $T_c$ for the complexified map $x + i y \mapsto T(x) + i T(y)$ for $x, y \in X$. A {\em unital operator space} (resp.\ {\em real operator system}) is a real subspace (resp.\ selfadjoint subspace) of $B(H)$ for a real Hilbert space $H$, containing $I_H$. The basics of the theory of real operator systems are much the same as in the complex case (see \cite{RComp,ROnr, Sharma,BT,BReal}, most results being proved in the same way as the complex case, or simply following from that case by complexification. One does need to be careful about a couple of issues though. Arguments involving states or positive or selfadjoint elements often do not work in the real case, as pointed out e.g.\ in \cite{BT}. Indeed there may be not enough, or there may be too many, selfadjoint elements. It is shown however in \cite[Lemma 2.3]{BT} that a completely positive map on a real operator system is selfadjoint (i.e.\ $T(x)^* = T(x^*)$) and $T_c$ is completely positive. 
Also a unital map between real operator systems is completely positive if and only if it is completely contractive. One direction of this follows from the just cited lemma. Conversely if $T$ is unital and completely contractive then so is $T_c$, so by the complex theory $T_c$ (and hence $T$) is UCP. A unital completely contractive map between real unital operator spaces of course extends to a completely contractive, hence UCP, map on any containing operator system. A homomorphism between real $C^*$-algebras is contractive if and only if it is a $*$-homomorphism, and in this case it is completely positive and completely contractive \cite[Theorem 2.6]{BT}. We remark that \cite{BReal} includes a systematic discussion of the real variants of results in Chapters 1--4 and 8 of \cite{BLM} (results which are not in the present paper), and checks the behaviour of the complexification of very many standard constructions in the theory, etc. An injective envelope $I(X)$ of $X$ in any of our categories is a pair $(E,j)$ consisting of a space $E$ which is injective in our category and a completely isometric morphism $j : X \to E$ which is {\em rigid}: that is, $I_E$ is the only completely contractive morphism $u : E \to E$ extending the identity map on $j(X)$. We will not use this, but the injective envelope may also be characterized in terms of the `essential' or `envelope' property as in e.g.\ \cite[Lemma 4.2.4]{BLM}. We sometimes write $I(X)$ as $I_{\Fdb}(X)$, where $\Fdb = \Rdb$ or $\Cdb$, if it is important to distinguish between the real and complex case. Often we do not do this though, and leave it to the reader to distinguish what is meant from the context. The injective envelope may be given the structure of a {\em ternary system}, or TRO for short. We mention the definition and the real versions of some basic results on TRO's from e.g.\ \cite{BLM} that we will need in Section 4. Much of this is already in the real $JB$-triple literature or in e.g.\ \cite{Sharma}. A real TRO is a closed linear subspace $Z \subset B(K, H)$, for real Hilbert spaces $K$ and $H$, satisfying $Z Z^* Z \subset Z$. A {\em ternary morphism} between TROs is a linear map satisfying $T(x y^* z) = T(x) T(y)^* T(z)$. If $Z \subset B_{\Rdb}(K,H)$ is a real TRO then (the closures of) $Z^* Z$ and $Z Z^*$ are real $C^*$-subalgebras of $B(K)$ and $B(H)$ respectively. As in \cite[p.\ 1052]{RComp} we have identifications $$Z_c = Z + i Z \subset B(K,H) + i B(K,H) = B(K,H)_c = B_{\Cdb}(K_c,H_c).$$ Then $Z_c Z_c^* Z_c \subset Z_c$, so that $Z_c$ is a complex TRO. The real case of \cite[Lemma 8.3.2]{BLM} holds: a ternary morphism between real TRO's is completely contractive, and is completely isometric if it is also one-to-one. Indeed if $T : Z \to W$ is a real ternary morphism then $T_c$ is a complex ternary morphism. So by the complex theory (see e.g.\ \cite[Lemma 8.3.2]{BLM}) $T_c$, and hence $T$, is completely contractive. If also $T$ is one-to-one then $T_c$ is one-to-one, so completely isometric. Hence $T$ is completely isometric. It is proved in e.g.\ \cite{Sharma} that conversely a completely isometric surjection between real TRO's is a ternary morphism (or this follows from the complex case by passing to the complexifications). \section{Complexifications of real operator spaces} One of the cornerstones of the study of real operator spaces is Ruan's unique complexification theorem.
By an {\em operator space complexification} of a real operator space $X$ we mean a pair $(X_c, \kappa)$ consisting of a complex operator space $X_c$ and a real linear complete isometry $\kappa : X \to X_c$ such that $X_c = \kappa(X) \oplus i \, \kappa(X)$ as a vector space. For simplicity we usually identify $X$ and $\kappa(X)$ and write $X_c = X + i \, X$. {\bf Remarks.} 1)\ By the closed graph theorem the projection $P$ onto $i \, X$ is bounded, and so $X_c$ is real isomorphic to a Banach space direct sum of $X$ with itself. Some authors may also assume that the projection $P$ onto $i \, X$ is contractive (in our case we would want completely contractive here), but for `reasonable' complexifications (see below) this will be automatic. Note that $x = P(x) - i P(ix)$ for all $x \in X_c$. 2)\ Not every complex operator space is the complexification of a real operator space. Indeed on a complex operator space $X$ there need not exist any closed subspace $Y$ with $X = Y \oplus iY$. Dramatic examples of this may be found in e.g.\ \cite{FerR}, which exhibits infinite dimensional complex Banach spaces that are not the direct sum of any two infinite dimensional real subspaces (indeed the examples there are much more startling). For a simpler example one may use for example any complex Banach space $X$ which is not complex isomorphic to its complex conjugate $\bar{X}$ (see e.g.\ \cite{Kalton} for an elementary example). If $X = Y + i \, Y$ as above then the map $\theta(y_1 + i y_2) = y_1 - i y_2$ is a bicontinuous isomorphism onto $\bar{X}$. Indeed if $x = y_1 + i y_2$ with $y_1, y_2 \in Y$ then $\| \theta(x) \|_{\bar{X}} = \| y_1 - i y_2 \| \leq \| y_1 \| + \| y_2 \| = \| P(x) \| + \| P(ix) \|$, which is dominated by a constant times $\| x \|_X$. So the map is bicontinuous by the open mapping theorem. This is a contradiction (to obtain an operator space example one may assign any compatible operator space structure to $X$, such as Min$(X)$). The above also shows that complex operator spaces which are real linearly completely isometric actually need not have any complex linear completely isometric surjection between them. For a $C^*$-algebra example of this, if $A$ is a unital $C^*$-algebra not $*$-isomorphic to its opposite $C^*$-algebra $A^\circ$ (see e.g.\ \cite{C}), consider the map $\theta(a) = (a^*)^\circ$. This is a surjective real linear unital complete isometry $A \to A^\circ$ (even positive since if $a = x^2$ for $x = x^*$ then $a^\circ = (x^\circ)^2$). However by an operator space version of the Kadison-Banach-Stone theorem, if there exists a surjective complex linear complete isometry $A \to A^\circ$ then $A$ is $*$-isomorphic to $A^\circ$ (see e.g.\ \cite[Theorem 4.4]{RComp}). We say that an operator space complexification $X_c = X + iX$ of an operator space $X$ is {\em (completely) reasonable} if the map $\theta_X : x+iy \mapsto x - iy$ is a complete isometry, for $x, y \in X$. Any reasonable complexification of a Banach space possesses a canonical conjugate linear isometric period 2 automorphism. Conversely, given any complex Banach space $X$ with a conjugate linear isometric period 2 automorphism $\theta$, it is well known (or an exercise) that $X$ is the reasonable complexification of the set of fixed points of $\theta$. A similar result holds for operator spaces. A completely reasonable operator space complexification $X_c$ possesses a canonical conjugate linear completely isometric period 2 automorphism, namely the map $\theta_X$ above. This gives an action of the group $\Zdb_2$ as complete isometries on $X_c$, via the identity map and $\theta_X$.
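For completeness we record the elementary computation behind the conjugate linearity of $\theta_X$: if $\lambda = a + ib \in \Cdb$ and $x, y \in X$ then $\lambda (x + iy) = (ax - by) + i(bx + ay)$, so that $$\theta_X(\lambda (x + iy)) = (ax - by) - i(bx + ay) = (a - ib)(x - iy) = \bar{\lambda} \, \theta_X(x + iy) ,$$ and clearly $\theta_X^2$ is the identity map, so $\theta_X$ is indeed a period 2 automorphism.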
We will use this notation many times in our paper. Henceforth we shall shorten `completely reasonable' to `reasonable' or `reasonable operator space complexification', since we will want all complexifications hereafter to be completely reasonable. \begin{proposition} \label{chco} Let $X$ be a real operator space with a complete isometry $\kappa : X \to Y$ into a complex operator space. Then $(Y,\kappa)$ is a reasonable operator space complexification of $X$ if and only if $Y$ possesses a conjugate linear completely isometric period 2 automorphism whose fixed points are $\kappa(X)$. \end{proposition} \begin{proof} The one direction is easy and mentioned above. Conversely, given such an automorphism $\theta$, the decomposition $Y = \kappa(X) \oplus i \, \kappa(X)$ is obtained just as in the Banach space case. Indeed any $z \in Y$ may be written as $z = x + i y \in \kappa(X) + i \, \kappa(X),$ where $x = \frac{z + \theta(z)}{2}$ and $y = \frac{z - \theta(z)}{2i}$ are in $\kappa(X)$. Since $\theta$ is conjugate linear we have $\theta(x + iy) = x - iy$, and since $\theta$ is a complete isometry we have that $(Y,\kappa)$ is a reasonable complexification. \end{proof} Ruan showed that a real operator space $X$ possesses a reasonable operator space complexification, which is unique up to complete isometry. The `existence' of such a complexification follows quickly from Proposition \ref{chco}. Indeed suppose that $X \subset B(H)$ for a real Hilbert space $H$, and let $H_c$ be the canonical Hilbert space complexification of $H$. Let $\kappa : B(H) \to B_{\Cdb}(H_c)$ be $\kappa(T) = T_c$. It is easy to check that $\kappa$ is a faithful $*$-homomorphism, and that $\theta(T) = \theta_H \circ T \circ \theta_H$ is a period 2 conjugate linear $*$-automorphism of the $C^*$-algebra $B(H_c)$ whose fixed points are $\kappa(B(H))$; here $\theta_H$ is the canonical conjugation $\xi + i \eta \mapsto \xi - i \eta$ on $H_c$, for $\xi, \eta \in H$. Hence $\kappa$ and $\theta$ are completely isometric. So $(B_{\Cdb}(H_c), \kappa)$ is a reasonable operator space complexification of $B(H)$ by Proposition \ref{chco}. Hence $\kappa(X) + i \kappa(X)$ is a reasonable operator space complexification of $X$. This complexification may be identified up to real complete isometry with the operator subspace $V_X$ of $M_2(X)$ consisting of matrices of the form \begin{equation} \label{ofr} \begin{bmatrix} x & -y \\ y & x \end{bmatrix} \end{equation} for $x, y \in X$. Note that $H_c \cong H^{(2)}$ as real Hilbert spaces and we have $$B_{\Cdb}(H_c) \subset B_{\Rdb}(H_c) \cong B(H^{(2)}) \cong M_2(B(H)).$$ These identifications are easily checked to be real complete isometries. The canonical embedding $\kappa : B(H) \to B_{\Cdb}(H_c)$ above and the associated embedding $B(H) + i B(H) \to \kappa(B(H)) + i \kappa(B(H)) = B_{\Cdb}(H_c)$, when viewed as a map into $M_2(B(H))$ by the identifications above, correspond to the map $$x + iy \mapsto \begin{bmatrix} x & -y \\ y & x \end{bmatrix} \in V_{B(H)} \subset M_2(B(H)) \; , \qquad x, y \in B(H).$$ Thus $B_{\Cdb}(H_c)$ is real completely isometric to $V_{B(H)}$. The operator $i I$ in $B_{\Cdb}(H_c)$ corresponds to the matrix $$u = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix},$$ while the real linear conjugation $\theta_H$ corresponds to the selfadjoint unitary $$v = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \in M_2(B(H)).$$ Then the map $\theta_{B(H)}(x + iy) = x - iy$, for $x, y \in B(H)$, corresponds to the completely isometric operation $z \mapsto v^* z v = v z v$ on $V_{B(H)}$ (conjugation by $u$ itself fixes $V_{B(H)}$ pointwise, since elements of $V_{B(H)}$ commute with $u$). Restricting to $X$, we see that $\theta_{X}$ is completely isometric, and $X_c$ is real completely isometric to $V_X$. \begin{theorem} \label{rcth} {\rm (Ruan's unique complexification theorem) } \ Let $X$ be a real operator space.
Then $X$ possesses a reasonable operator space complexification, which is unique up to complete isometry. That is, if $Y_1, Y_2$ are complex operator spaces, and for $k = 1, 2$, $u_k : X \to Y_k$ is a real linear complete isometry with $u_k(X) + i u_k(X) = Y_k$ such that $u_k(x) + i u_k(y) \mapsto u_k(x) - i u_k(y)$ is a complete isometry, then there exists a unique surjective complex linear complete isometry $\rho : Y_1 \to Y_2$ with $\rho \circ u_1 = u_2$. \end{theorem} \begin{proof} A proof of the `easy direction', the existence, was given above. For the uniqueness, fix $k$ and let $\kappa = u_k$. It is enough to show that the map taking $\kappa(x) + i \kappa(y)$ to the matrix in (\ref{ofr}) is a complete isometry into $M_2(X)$. Let $x = [x_{ij}], y = [ y_{ij}] \in M_n(X)$. Then $$\| [ \kappa(x_{ij})+\iota \, \kappa(y_{ij})] \| = \left\| \begin{bmatrix} \kappa(x_{ij}) & -\kappa(y_{ij})\\ \kappa(y_{ij}) & \kappa(x_{ij}) \end{bmatrix} \right\| = \left\| \begin{bmatrix} x & -y \\ y & x \end{bmatrix} \right\| $$ by the simple fact (2.2) in \cite{BNmetric2} and the fact that $\kappa$ is a complete isometry. This concludes the proof. \end{proof} \begin{theorem} \label{crcb} If $X$ and $Y$ are real operator spaces, then $(CB_{\Cdb}(X_c,Y_c ), \kappa)$ is a reasonable operator space complexification of $CB_{\Rdb}(X,Y)$, where $\kappa(T) = T_c$. \end{theorem} \begin{proof} Clearly $T \mapsto T_c$ is a complete isometry from $CB_{\Rdb}(X,Y)$ into $CB_{\Cdb}(X_c,Y_c )$. On the other hand, there is a conjugate linear completely isometric period 2 automorphism on $CB_{\Cdb}(X_c,Y_c)$ defined by $T \mapsto \theta_Y \circ T \circ \theta_X$. Its fixed points are precisely the $S_c$ for $S \in CB_{\Rdb}(X,Y)$. Indeed suppose that $T \circ \theta_X = \theta_Y \circ T$. Then $T$ takes the fixed points of $\theta_X$ into the fixed points of $\theta_Y$. Since these fixed point spaces are (the copies of) $X$ and $Y$ respectively, this gives a map $S : X \to Y$. Then $T$ and $S_c$ agree on $X$ and hence on $X_c = X + i X$. Thus $CB_{\Cdb}(X_c,Y_c) = CB_{\Rdb}(X,Y)_c$, a reasonable complexification. \end{proof} \begin{corollary} \label{recc} If $X$ and $Y$ are real operator spaces, then $T \in CB_{\Cdb}(X_c,Y_c)$ equals $S_c$ for $S \in CB_{\Rdb}(X,Y)$ if and only if $T(X) \subset Y$, and if and only if $T = \theta_Y \circ T \circ \theta_X$. If $X = Y$ the last condition becomes $T \circ \theta_X = \theta_X \circ T$, which is equivalent to $T$ being $\Zdb_2$-equivariant with respect to the $\Zdb_2$-action mentioned above Proposition {\rm \ref{chco}}. \end{corollary} \begin{proof} If $T(X) \subset Y$ then $T = (T_{|X})_c$ since both these complex linear maps agree on $X$. The rest is contained in the last proof. \end{proof} \begin{corollary} \label{ducr} {\rm (Ruan)}\ If $X$ is a real operator space then the operator space dual $(X_c)^*$ is a reasonable operator space complexification of $X^*$, with embedding $\kappa(\varphi) = \varphi_c$. \end{corollary} The complexification of a real unital operator space or operator system may obviously be taken to be a complex unital operator space or operator system. \section{The characterization of commutative real $W^*$-algebras} \label{chcvn} The characterization in this section will be used in \cite{BReal} to prove several deeper facts about real operator spaces and algebras. We will improve the characterization of commutative real $W^*$-algebras from \cite{Li} (see e.g.\ Section 6.3 there, and e.g.\ \cite{IP}). Indeed the commutative real $W^*$-algebras come from the commutative complex $W^*$-algebras regarded as real algebras.
Ideas in the proof may be interpreted as matching certain facts in `measurable dynamics' as we will remark after the theorem. In the following we allow $(0)$ to be considered as a $W^*$-algebra. \begin{theorem} \label{coisre} Any commutative real $W^*$-algebra $M$ is (real) $*$-isomorphic to the direct sum of a commutative complex $W^*$-algebra, and the set of selfadjoint elements in a commutative complex $W^*$-algebra. That is $M \cong L^\infty(X,\mu,\Cdb) \oplus L^\infty(Y,\nu,\Rdb)$. Here $(X,\mu)$ and $(Y,\nu)$ are localizable (or semifinite) measure spaces (and we allow one of these to be the empty set, and in this case interpret the $L^\infty$-space to be $(0)$). \end{theorem} \begin{proof} Let $M$ be a commutative real $W^*$-algebra. Then $M_c$ is a commutative $W^*$-algebra possessing a period 2 conjugate linear $*$-isomorphism $\pi$ with $M = \{ x \in M_c : \pi(x) = x \}$. Consider the projections $p \in M_c$ such that $\pi(q) = q$ for all projections $q \leq p$. We claim there is a largest such projection. If $(p_i)$ is an increasing chain of such projections, let $p = \sup \; p_i$. Then $\pi(p) = p$. For a projection $q \in M_c$ such that $q \leq p$ we have $p_i q \leq p_i$ for each $i$. For any spectral projection $r$ of $p_i q$ we have $r \leq p_i$ and $\pi(r) = r$. So $\pi(p_i q) = p_i q = p_i \pi(q)$. In the limit $\pi(q) = q$. Thus chains have upper bounds, and so by Zorn's lemma there is a maximal such projection $p$. If $r$ is another such projection, then we claim that $e = p \vee r = p + r -pr$ is also such a projection. Indeed for a projection $q \leq p \vee r$ we have $\pi(q p) = qp$ since $q \wedge p \leq p$. Similarly $\pi(qr) = qr$ and $\pi(qpr) = qpr$, and so $\pi(q) = q$. By maximality $p = e$ and $r \leq p$. Let $N_0 = p^\perp \, M_c$. Assume that $p \neq 1$. Note that $\pi(p^\perp) = p^\perp$ and $\pi(N_0) = N_0$. If $r$ is any nonzero projection in $N_0$ there must exist a projection $q \leq r$ with $\pi(q) \neq q$. We claim that this implies that there exists some nonzero projection $q$ in $N_0$ with $\pi(q) = 1 - q \neq 0$. Here $1$ denotes the identity of $N_0$, namely $p^\perp$. We consider the class ${\mathcal C}$ of nonzero projections $r$ in $N_0$ such that $\pi(r) \perp r$. Given any projection $q$ in $N_0$ with $q \neq \pi(q)$ we claim that $q$ has a nonzero subprojection $r$ in ${\mathcal C}$. Indeed set $r = q - q \pi(q)$. Note $r \neq 0$ or else $q \leq \pi(q)$ and (applying $\pi$) $\pi(q) = q$. This contradicts that $q \neq \pi(q)$. Hence $\pi(r) \neq 0$. Also $$r \pi(r) = q \pi(q) - q \pi(q) - q \pi(q) + q \pi(q) = 0.$$ Thus $r \in {\mathcal C}$. Let $q$ be a maximal projection in ${\mathcal C}$. That this exists follows by Zorn's lemma again: If $(p_i)$ is an increasing chain of projections in ${\mathcal C}$, let $r = \sup \; p_i$. Since $p_i \, \pi(p_i) = 0$ we get $r \pi(r) = 0$. We now show that $\pi(q) = 1 - q$. If this were false then since $q \pi(q) = 0$ we have that $r = 1 - q - \pi(q)$ is a nonzero projection in $N_0$. So there exists a projection $e \leq r$ with $\pi(e) \neq e$. Let $q' = e - e \pi(e)$. Note that $q' \leq e \leq r \leq 1-q$. By the above $0 \neq q' \in {\mathcal C}$. Then $q + q'$ is a projection dominating $q$, and $\pi(q+q') (q+q') = \pi(q) q' + \pi(q') q$. Now $\pi(q) q' = 0$ since $q' \leq r \leq 1 - \pi(q)$. Applying $\pi$, we see that $\pi(q') q = 0$. So $\pi(q+q') (q+q') = 0$, so that $q + q' \in {\mathcal C}$. This contradicts the maximality of $q$.
Next note that $x \mapsto x + \pi(x)$ is a real $*$-isomorphism from $q M_c$ onto $\{ a \in N_0 : \pi(a) = a \}$. Indeed this is the map $q x \mapsto qx \oplus (1-q) \pi (qx)$, which is a homomorphism since $q$ is central. And if $a \in N_0$ with $\pi(a) = a$ then $qa \in q M_c = qN_0$ with $qa + \pi(qa) = qa + (1-q) \pi(a) = a$. Claim: $\pi(x) = x^*$ on $p M_c$, so that for any $x \in p M_c$ we have $\pi(x) = x$ if and only if $x = x^*$. Thus $\{ x \in pM_c : \pi(x) = x \} = (pM_c)_{\rm sa}$. Note that $x \mapsto \pi(x)^*$ is a weak* continuous complex linear isometry on the complex $W^*$-algebra $p M_c$ which is the identity on projections in $p M_c$. Hence $\pi(x) = x^*$ on $p M_c$, by the spectral theorem and the fact that this is true for any projection in $p M_c$. Finally, we claim that $M$ is real $*$-isomorphic to the real $W^*$-algebra $(pM_c)_{\rm sa} \oplus q M_c$. If $x \in M_c$ with $\pi(x) = x$, then $x = xp + x p^\perp$, where $xp = \pi(x p)$ and $x p^\perp = \pi(x p^\perp) \in N_0$. So $$M = \{ x \in pM_c : \pi(x) = x \} \oplus \{ x \in N_0 : \pi(x) = x \} \cong (pM_c)_{\rm sa} \oplus q M_c.$$ Since $pM_c$ and $q M_c$ are commutative complex von Neumann algebras, we are done (for the connection with localizable/semifinite measure spaces see e.g.\ \cite{BGL}). \end{proof} {\bf Remarks.} 1)\ The separable case of the last result is in \cite{Li} (see e.g.\ Theorem 6.3.6 there) and the work of Ayupov and his coauthors (see e.g.\ \cite{ARU} and references therein), but we could not find a full proof of the general case of our statement above in those sources. In the noncommutative case the result fails: for a start $M_{\rm sa}$ need not even be an algebra. For an explicit low dimensional example, the quaternions $\Hdb$ are not of the form $M_{\rm sa} \oplus N$ for complex von Neumann algebras $M$ and $N$. Indeed the center of $\Hdb$ is $\Rdb 1$, so that there is no such nontrivial splitting, and $\Hdb$ is neither a complex von Neumann algebra nor the selfadjoint part of one. 2)\ Notice that in the proof of the claim of existence of the projection $q$ with $\pi(q) = 1-q$, we only used the fact that $\pi$ was a real $*$-automorphism of the complex von Neumann algebra $M_c$. Suppose that $M_c = L^\infty(X, \mu)$ for a measure space $(X, \mu)$, and $\tau: L^\infty(X, \mu)\to L^\infty(X, \mu)$ is a complex $*$-automorphism with $\tau^2 = I$. If $(X, \mu)$ is a standard measure space for example then by von Neumann's lifting theorem there is a 2-periodic measurable automorphism $\varphi$ of $(X, \mu)$ (this means that the pushforward of $\mu$ under $\varphi$ is in the same class as $\mu$) that yields $\tau$ in the sense that $\tau(f) = f\circ \varphi$ for every $f\in L^\infty(X, \mu)$. In this case the projections $p$ and $q$ from the above proof have the following interpretations. The projection $p$ is the characteristic function of the set $X_0:=\{x\in X : \varphi(x) = x\}$ of fixed points of $\varphi$. The complement $X-X_0$ is then decomposed into a disjoint sum $X_1 \sqcup \varphi(X_1)$, where $X_1$ is a measurable subset of $X$ whose characteristic function is the projection $q$ in the proof above. In general, it is a simple fact that an action of $\Zdb$ or $\Zdb_n$, when $n$ is prime, is (essentially) free if and only if it has trivial kernel (i.e. is faithful). Thus, any such action on a measure space $(X, \mu)$ yields a decomposition $X= E \sqcup F$ where the restriction of the action on $E$ is trivial and on $F$ is (essentially) free.
Arguing similarly to the one in the proof above, one can show that for any action $\Zdb_n \curvearrowright (X, \mu)$, with $n\in\Ndb$ prime, given by the measure automorphism $\varphi: X\to X$, there is a ``periodic decomposition'' of the form $$X= X_0 \sqcup X_1 \sqcup \varphi(X_1) \sqcup \varphi^2(X_1) \sqcup \cdots \varphi^{n-1}(X_1),$$ where the restriction of $\varphi$ to $X_0$ is the identity map. These dynamical facts are surely well-known (although through a relatively quick search we were not able to locate the precise statements in standard literature). One should also note that the automorphism $\pi$ in the proof of the Theorem \ref{coisre} is not $\Cdb$-linear, so that it does not come directly from a map on the underlying sets; and moreover there is no obvious appropriate map on the underlying sets without restrictions on the measure space such as being standard. Probably one may obtain such a map using 6.3.3 in \cite{Li}, and this would give an alternative `dynamics route' to the theorem. \section{Complexifications, envelopes, and $G$-spaces} \label{Coen} We begin the section gently with some of the basic ideas in the operator space case, before throwing in the $G$-space technology which for non-experts may take some time to absorb. Ruan shows \cite{RComp, Sharma} that a real operator space $X$ is injective if and only if $X_c$ is complex injective. Indeed an injective real operator space is real complemented in $B(H)$ so that $X_c$ is complemented in $B(H_c)$. The other direction can be deduced from the next result and the fact that $X$ is real complemented in $X_c$. \begin{lemma} \label{lem1} A complex operator space is real injective if and only if it is complex injective. \end{lemma} \begin{proof} A complex injective operator space $X$ is (complex, hence real) completely isometrically embedded and complemented in $B_{\Cdb}(H)$ for a complex Hilbert space $H$. Define a projection $P : B_{\Rdb}(H) \to B_{\Cdb}(H)$ by $P(T)(x) = \frac{1}{2} (T(x) - i T(ix))$. It is an exercise to check that this is well defined and completely contractive. It follows that $X$ is real completely isometrically embedded and complemented in the real injective space $B_{\Rdb}(H)$, and so is real injective. Note that $B_{\Rdb}(H)$ is real injective since we may view $H$ as the real Hilbert space $\ell^2_{\Cdb}(I) \cong \ell^2_{\Rdb}(I \times \{ 1, 2 \})$. A real injective complex operator subspace $X \subset B_{\Cdb}(H)$ for a complex Hilbert space $H$, is real linear completely contractively complemented in $B_{\Cdb}(H)$ via a projection $P : B_{\Cdb}(H) \to X$. Define $Q(x) = \frac{1}{2} (P(x) - i P(ix))$, which is completely contractive and maps into $X$. We have $Q(ix) = i Q(x)$, so that $Q$ is complex linear. Note that $Qx = \frac{1}{2} (x - i(ix)) = x$ for $x \in X$, so $Q$ is a projection. It follows that $X$ is complex completely isometrically embedded and complemented in the complex injective space $B_{\Cdb}(H)$, and so is complex injective. \end{proof} There is no need to state real operator system versions of the above results, since an operator system ${\mathcal S}$ is injective in the category of operator systems and (necessarily selfadjoint--see \cite[Lemma 2.3]{BT}) UCP maps if and only if it is injective in the category of operator spaces and completely contractive maps. Indeed if ${\mathcal S}$ is an operator system in $B(H)$ and is injective as an operator system (resp.\ space) then its complementability in $B(H)$ ensures that it is injective as an operator space (resp.\ system). 
We are using the fact mentioned in the introduction that a map on a unital real or complex $C^*$-algebra is UCP if and only if it is completely contractive and unital. Indeed injective operator systems have a $C^*$-algebra structure (see e.g.\ \cite[Proposition 4.4]{ROnr}, and \cite[Theorem 4.9]{Sharma} in the real case), and so can be represented as a $C^*$-subalgebra of $B(H)$. For a real operator space $X$ the next result, like many results involving the injective envelope, may also be proved by first proving the result in the case that $X$ is an operator system, and second, applying the first case to the Paulsen operator system ${\mathcal S}(X)$. See for example \cite{BP}, or 4.4.2 in \cite{BLM} or Chapter 16 of \cite{Pau}, and the applications in those sources. We also remark that some parts of this result and the next are generalized later for $G$-spaces; however we do the `base case' first because of its importance and clarity, and also to give the model that makes the later proofs quicker to grasp. \begin{theorem} \label{ijco} Let ${\mathcal S}$ be a real operator space with injective envelope $(I({\mathcal S}), j)$. \begin{itemize} \item [(1)] $I({\mathcal S})_c = I({\mathcal S}_c)$ (that is, $(I_{\Rdb}({\mathcal S})_c, j_c)$ is a complex injective envelope of ${\mathcal S}_c$). \item [(2)] If ${\mathcal S}$ is a real operator system or unital real operator space then $I({\mathcal S})$ may be taken to be a real $C^*$-algebra, indeed a real $C^*$-subalgebra of its $C^*$-algebra complexification, which is an injective envelope of ${\mathcal S}_c$. Also $j$ may be taken to be unital. \item [(3)] If ${\mathcal S}$ is an approximately unital real operator algebra (or an approximately unital real Jordan operator algebra) then the conclusions in the first sentence of {\rm (2)} hold, but in addition $j$ may be taken to be a homomorphism. \end{itemize} \end{theorem} \begin{proof} (1)\ Let $Z = I({\mathcal S})$. To see that $(I({\mathcal S})_c, j_c)$ is an injective envelope of ${\mathcal S}_c$, note that by the lines above Lemma \ref{lem1} we have that $I({\mathcal S})_c$ is an injective extension of ${\mathcal S}_c$. To see that it is rigid, suppose that $u : Z_c \to Z_c$ is a (complex linear) complete contraction extending $I_{{\mathcal S}_c}$. Then $R = \frac{1}{2} (u + (\theta_Z \circ u \circ \theta_Z))$ is a $\Zdb_2$-equivariant (as in Corollary \ref{recc}) complex linear complete contraction extending $I_{{\mathcal S}_c}$. Thus $R = v_c$ by the just cited Corollary \ref{recc}, for a real linear complete contraction $v$ on $I({\mathcal S})$ extending $I_{{\mathcal S}}$. By rigidity we have $v = I$ and so $R = I$. Now the identity is well known to be an extreme point of the unit ball of any unital (real or complex) Banach algebra (see e.g.\ \cite[Corollary 2.1.42]{Rod}, although there is a well known simple and direct Krein-Milman argument). Since $u$ and $\theta_Z \circ u \circ \theta_Z$ are contractions averaging to $I$, we deduce that $u = I$. Thus $I({\mathcal S})_c$ is an injective and rigid extension of ${\mathcal S}_c$, and hence it is an injective envelope of ${\mathcal S}_c$. (2)\ If ${\mathcal S}$ is a real operator system or unital real operator space, then as in the complex case $I({\mathcal S})$ may be taken to be a real unital $C^*$-algebra $A$ with $j(1) = 1$ (see \cite[Theorem 4.9]{Sharma} and the proof of \cite[Corollary 4.2.8 (1)]{BLM}). The $C^*$-algebra $(A_c,j_c)$ is a complexification of $A$ with the same identity, and $I({\mathcal S}) = A$ is a real $C^*$-subalgebra of $A_c$, which is an injective envelope of ${\mathcal S}_c$ by {\rm (1)}.
If ${\mathcal S}$ is a real unital operator algebra then as in the complex case \cite[Corollary 4.2.8 (1)]{BLM} we may further take $j$ to be a homomorphism. (3)\ Let $A$ be an approximately unital real (Jordan) operator algebra, represented completely isometrically as a Jordan subalgebra of $B(H)$. Its unitization $A^1 = A + \Rdb I_H$ is (as an operator space and (Jordan) algebra) independent of $H$ by \cite[Lemma 4.10]{BT}. Using \cite[Proposition 4.3]{BT} and taking a compression, we may assume that a partial cai $(e_t)$ of $A$ has SOT limit $1$. Then $I(A)$ may be identified with the range of a completely contractive projection $\Phi$ on $B(H)$ fixing $A$. Let $\varphi = \langle \Phi(\cdot) \zeta, \zeta \rangle$, where $\| \zeta \| = 1$. As in the proof of \cite[Corollary 4.2.8]{BLM} this is a contractive functional on $A + \Rdb I$, and the fact that $\varphi(e_t) \to 1$ implies that $\| \varphi \| = \| \varphi_{|A} \|$. This implies by \cite[Lemma 4.11]{BT} that $\varphi(I) = \langle \Phi(I) \zeta, \zeta \rangle = 1.$ By the converse to the Cauchy-Schwarz inequality, $\Phi(I) \zeta = \zeta$. So $\Phi(I) = I$. This implies as in \cite[Corollary 4.2.8]{BLM} that $(I(A^1),j)$ is a $C^*$-algebra and injective envelope of $A$, and $j$ is a homomorphism. By the unital case above, $I(A)$ is a unital real $C^*$-subalgebra of $I(A^1_c)$ and we have $I(A)_c = I(A^1)_c = I(A^1_c) = I(A_c)$ as real $C^*$-algebras. \end{proof} \begin{corollary} \label{tocenv} Let $A$ be a real operator system, unital operator space, or approximately unital real operator algebra (or Jordan operator algebra), with $I(A)$ taken to be a $C^*$-algebra as in Theorem {\rm \ref{ijco}}. \begin{itemize} \item [(1)] If $C^*_e(A)$ is the $C^*$-subalgebra of $I(A)$ generated by $A$ then $C^*_e(A)_c = C^*_e(A_c)$. \item [(2)] $C^*_e(A)$ has the universal property expected of the $C^*$-envelope: given any unital complete isometry $j : A \to D$ into a real $C^*$-algebra $D$ such that $j(A)$ generates $D$ as a real $C^*$-algebra, there exists a (necessarily unique and surjective) $*$-epimorphism $\pi : D \to C^*_e(A)$ such that $\pi \circ j$ is the canonical inclusion of $A$ in $C^*_e(A)$. \end{itemize} \end{corollary} \begin{proof} (1)\ The real operator system and unital operator space result here follows easily from Theorem \ref{ijco} (1) and (2). It also follows from the theory of involutions similarly to e.g.\ \cite[Theorem 2.4 or Theorem 2.8]{BWinv} or \cite[Proposition 1.16]{BKM}. It will also be generalized in Corollary \ref{tocenvG}. In the approximately unital case we know $(A^1)_c = (A_c)^1$ by the uniqueness of unitization (e.g.\ \cite[Lemma 4.10]{BT}). By the proof in Theorem \ref{ijco}, $I(A) \subset I(A_c)$ are unital $C^*$-algebras and agree with the injective envelope of $A^1$ and $(A_c)^1$. In the complex case (4.3.4 in \cite{BLM}), $C^*_e(A_c)$ is defined to be the $C^*$-algebra generated by $A_c$ inside $C^*_e(A^1_c)$. Since $$C^*_e(A^1_c) = C^*_e(A^1)_c = C^*_e(A^1) + i \, C^*_e(A^1) \subset I(A^1)_c = I(A)_c,$$ we have $C^*_e(A_c) = C^*_e(A) + i \, C^*_e(A) = C^*_e(A)_c$. (2)\ This is already mentioned in \cite[Section 4]{Sharma}, and also follows immediately from the complex case by complexification. \end{proof} {\bf Remark.} For a nonunital real operator algebra $A$ with no kind of approximate identity one defines $C^*_e(A)$ to be the $C^*$-algebra generated by $A$ inside $C^*_e(A^1)$. 
Then as in \cite[Proposition 4.3.5]{BLM} it is easy to check that $C^*_e(A)$ has the universal property of the $C^*$-envelope: given any completely isometric homomorphism $j : A \to D$ into a real $C^*$-algebra $D$ such that $j(A)$ generates $D$ as a real $C^*$-algebra, there exists a (necessarily unique and surjective) $*$-epimorphism $\pi : D \to C^*_e(A)$ such that $\pi \circ j$ is the canonical inclusion of $A$ in $C^*_e(A)$. Again we use the uniqueness of unitization (e.g.\ \cite[Lemma 4.10]{BT}). We now turn to the more general setting of $G$-operator spaces. The reader who only cares about real spaces, for example, may take $G = (0)$. For a given group $G$, a $G$\textit{-set} is a set $X$ equipped with an action of $G$. Given two $G$-sets $X$ and $Y$, a \textit{$G$-equivariant map} (or \textit{$G$-map}) is a map $f:X\to Y$ such that $$f(gx)=gf(x),\quad\text{ for all }g\in G\text{ and }x\in X.$$ Henceforth let $G$ be a discrete group (although as we said in the Introduction most of the results in this section hold in far greater generality). As in Hamana's notion of $G$-module from \cite{Hamiecds,Hamiods} we define a real $G$-{\em operator space} (resp.\ $G$-{\em operator system}) to be a real operator space (resp.\ operator system) $X$ such that $G \curvearrowright X$ by real linear surjective complete isometries (resp.\ unital complete order real linear isomorphisms). Sometimes we write these complete isometries as $u_g : X \to X$ for $g \in G$. We say that a complex operator space $X$ is a {\em complex} $G$-{\em operator space} if $G\curvearrowright X$ by complex linear surjective complete isometries. A $G$-{\em unital operator space} is a unital operator space and a $G$-operator space with a $G$-action by linear surjective unital complete isometries. We remark that in \cite{Hamiods} Hamana studies a class of complex operator spaces much more general than the complex $G$-operator spaces and works out the injective and ternary envelope theory for these. We recall that a (real or complex) $G$-$C^*$-algebra is a (real or complex) $C^*$-algebra $B$ such that $G \curvearrowright B$ by (real or complex) $*$-automorphisms. Clearly a $G$-$C^*$-algebra (resp.\ unital $G$-$C^*$-algebra) is a $G$-operator space (resp.\ $G$-operator system). Conversely a $G$-operator system which is a $C^*$-algebra is a $G$-$C^*$-algebra. Indeed the `operator system $G$-action' on a $C^*$-algebra is necessarily by $*$-automorphisms. This is because of the operator space version of the Kadison-Banach-Stone theorem that we have seen several times before in the complex case: a surjective unital complete isometry between $C^*$-algebras is a $*$-homomorphism. The real case of this version of the Kadison-Banach-Stone theorem follows immediately from the complex version by complexification (see also \cite[Theorem 4.4]{RComp}). Similarly, we define a $G$-TRO to be a (real or complex) TRO $Z$ such that $G \curvearrowright Z$ by (real or complex) ternary automorphisms (see also \cite[Section 4]{Hamiods}). By the facts about TRO's mentioned in the introduction, a $G$-TRO is a $G$-operator space. Conversely a $G$-operator space which is a TRO is a $G$-TRO. Indeed the `operator space $G$-action' on a TRO is necessarily by ternary automorphisms, by the just cited facts.
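A basic example to keep in mind: for any discrete group $G$, left translation $$(t \cdot f)(s) = f(t^{-1} s) , \qquad f \in l^\infty(G), \; s, t \in G ,$$ is an action of $G$ by $*$-automorphisms of $l^\infty(G)$ (with real or complex scalars). Thus $l^\infty(G)$ is a $G$-$C^*$-algebra, and hence also a $G$-operator system. The same formula, for the translation action on $l^\infty(G,B(H))$, appears in the construction of the $G$-injective envelope below.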
\begin{lemma} \label{Gcom} If $X$ is a real $G$-operator space (resp.\ real $G$-operator system) then $X_c$ with action $g(x+iy) = gx + i gy$ is a complex $G$-operator space (resp.\ complex $G$-operator system), $\theta_X$ is $G$-equivariant, and the canonical projection $X_c \to X$ is a real linear $G$-equivariant complete contraction (resp.\ UCP map). Indeed the $G$-action on $X_c$ commutes with the $\Zdb_2$-action mentioned above Proposition {\rm \ref{chco}}. \end{lemma} \begin{proof} This is essentially just the fact that $(u_g)_c : X_c \to X_c$ is a surjective complete isometry. In the system case the canonical projection $X_c \to X$ is completely contractive and unital, hence UCP. \end{proof} {\bf Remarks.} 1)\ If $X, Y$ are real $G$-operator spaces (resp.\ real $G$-operator systems) and $T : X \to Y$ is $G$-equivariant then so is $T_c$. 2)\ A simple but important fact is that if $X$ and $Y$ are real $G$-operator spaces, then a $G$-equivariant $T \in CB_{\Cdb}(X_c,Y_c)$ equals $S_c$ for a $G$-equivariant $S \in CB_{\Rdb}(X,Y)$ if and only if $T$ is $\Zdb_2$-equivariant (as defined above Proposition {\rm \ref{chco}}). One can restate many of our definitions and results in this paper in terms of the $\Zdb_2$-action discussed above Proposition {\rm \ref{chco}}. For example, the maps $P$ and $Q$ in Lemma \ref{lem1} (and elsewhere in our paper) may be viewed as an average with respect to the $\Zdb_2$-action. Or, Corollary \ref{recc} may be stated as saying that $T \in CB_{\Cdb}(X_c,Y_c)$ equals $S_c$ for $S \in CB_{\Rdb}(X,Y)$ if and only if $T$ is $\Zdb_2$-equivariant. 3)\ One can often make a $G$-map out of an arbitrary linear map by ``averaging''. For example suppose that $G$ is a finite group, and $X$ and $Y$ are vector spaces, which are also $G$-sets, and $T:X\to Y$ is linear. Define $\tilde{T}:X\to Y$ by $$\tilde{T}(x)=\frac{1}{| G |}\sum_{g\in G}g^{-1}T(gx),$$ for $x\in X$. Then $\tilde{T}$ is linear and $G$-equivariant. Hamana proves in \cite{Hamiecds} and \cite{Hamiods} that injective envelopes and ternary envelopes exist in his categories of (complex) $G$-modules and $G$-morphisms. This theory parallels the usual injective envelope theory, with $G$-{\em rigid} and $G$-{\em essential} playing their requisite roles just as before. (Note for example that $G$-rigid is defined just as in the definition of rigid, but with all morphisms $G$-equivariant.) The same theory also exists with the same proofs for real $G$-operator spaces (resp.\ real $G$-operator systems, complex $G$-operator spaces), where the morphisms for real or complex $G$-operator spaces (resp.\ $G$-operator systems) are $G$-equivariant real or complex linear complete contractions (resp.\ UCP maps). We call these the $G$-{\em morphisms}. See \cite{CeccoTh} for a systematic development of this theory in very many categories. Hamana's complex $G$-modules in \cite{Hamiecds} are the complex $G$-operator systems in this notation. See \cite{KK} and its sequels for more modern usage of the $G$-injective envelope. \begin{theorem} \label{ath} {\rm \cite{Hamiecds,Hamiods, CeccoTh}} \ Every real or complex $G$-operator space (resp.\ $G$-operator system) $X$ has a real or complex $G$-injective envelope, written $(I_{G}(X),\kappa)$, in the category of real or complex $G$-operator spaces (resp.\ $G$-operator systems). Here $\kappa : X \to I_{G}(X)$ is a real or complex linear $G$-equivariant complete isometry (resp.\ unital complete order embedding).
This $G$-injective envelope is unique in the sense that for any $G$-injective envelope $(Z, \lambda)$ of $X$ in the categories above, there is a $G$-isomorphism $\psi: I_{G}(X) \rightarrow Z$ (that is, $\psi$ and $\psi^{-1}$ are morphisms in the category) with $\psi \circ \kappa=\lambda$. \end{theorem} We summarize briefly the main ideas, which will also be used below. For any $X$ in one of the just mentioned categories (e.g.\ real $G$-operator spaces), if $X \subset B(H)$ as an operator subspace (or subsystem), we may define a $G$-action on $l^\infty(G,X)$ and $l^\infty(G,B(H))$ by $(t f)(s) = f(t^{-1} s)$ for $s, t \in G$. There is also an involution on $l^\infty(G,B(H))$: $f^*(s) = f(s)^*$. We have $G$-equivariant inclusions $$X \; \; \subset \; \; l^\infty(G,X) \; \; \subset \; \; l^\infty(G,B(H)) ,$$ where the first of these is the map $j(x)(s) = s^{-1} x$ for $s \in G, x \in X$. We sometimes write this $j$ as $j_X$ below. It is easy to see that $l^\infty(G,B(H))$ is an injective and $G$-injective von Neumann algebra, and $l^\infty(G,X)$ is $G$-injective if $X$ is injective (see \cite[Lemma 2.2]{Hamiecds} for the idea of proof). The $G$-injective envelope may be constructed as a subspace of $W = l^\infty(G,B(H))$, namely as the range of a certain completely contractive projection $\Phi : W \to W$ that fixes $j(X)$. Using these ideas (see \cite[Lemma 2.4]{Hamiecds} and the lines above it) one adapts the usual construction of the injective envelope \cite{BLM,ER,Pau} to $G$-spaces to obtain Theorem \ref{ath}. See also e.g.\ \cite{Bry} for an exposition of Hamana's ideas in one particular case. Note that if $X$ is a (real or complex) unital $C^*$-subalgebra (resp.\ von Neumann subalgebra, unital subalgebra, unital subspace, operator subsystem, subTRO) of $B(H)$ then $l^\infty(G,X)$ is a unital $C^*$-subalgebra (resp.\ von Neumann subalgebra, unital subalgebra, unital subspace, operator subsystem, subTRO) of $l^\infty(G,B(H))$. It is easy to see that $j_X : X \to l^\infty(G,X)$ is a unital $*$-homomorphism (resp.\ normal $*$-homomorphism, unital homomorphism, unital map, UCP map, ternary morphism). (The von Neumann algebra case of this will not be used in this paper, but can be deduced from a fact in 1.6.6 of \cite{BLM}.) As in \cite[Remark 2.3]{Hamiecds} we have: \begin{lemma} \label{Hrem} Let $X$ be a real or complex $G$-operator space (resp.\ real or complex $G$-operator system). Then $X$ is $G$-injective in its given category if and only if $X$ is injective as a real or complex operator space (resp.\ real or complex operator system) and there is a $G$-morphism $\phi : l^\infty(G,X) \to X$ such that $\phi \circ j = I_X$. \end{lemma} It follows that a $G$-injective real or complex $G$-operator space $Z$ is a TRO, since injective operator spaces are TRO's (we recall from e.g.\ 4.4.2--4.4.3 in \cite{BLM} and \cite{Sharma} (the paragraph before Theorem 4.13 there) that the $1$-$2$-corner $p I({\mathcal S}(X)) p^\perp$ may be taken to be a TRO and an injective envelope of $X$). Hence $Z$ is a $G$-TRO by the lines above Lemma \ref{Gcom}. As mentioned, $G$-{\em rigidity} and $G$-{\em essentiality} play their requisite roles just as before: A $G$-injective extension is a $G$-injective envelope if and only if it is a $G$-rigid extension, and if and only if it is a $G$-essential extension. This follows a well-trodden route, see also \cite{CeccoTh} for precise details if needed in certain particular categories.
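Returning to the construction above, we note the simple computation verifying that the embedding $j$ is indeed $G$-equivariant: for $s, t \in G$ and $x \in X$, $$j(tx)(s) = s^{-1} (tx) = (s^{-1} t) x = j(x)(t^{-1} s) = (t \, j(x))(s) ,$$ so that $j(tx) = t \, j(x)$. (A similar computation for the involution appears in the proof of Proposition \ref{Ginjuos} below.)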
For a $G$-operator system ${\mathcal S}$ one may take the $G$-injective envelope to be a unital $G$-$C^*$-algebra (this is done as usual with the Choi-Effros product, as in the lines above \cite[Theorem 2.5]{Hamiecds}). \begin{lemma} \label{Hreml} Let $X$ be a real $G$-operator space (resp.\ $G$-operator system, unital $G$-operator space). Then $l^\infty(G,X)_c = l^\infty(G,X_c)$ via a completely isometric complex $G$-morphism $\kappa$ (that is, a $G$-isomorphism) with $\kappa \circ (j_X)_c = j_{X_c}$. \end{lemma} \begin{proof} The automorphism $\theta_X$ induces a conjugate linear period 2 completely isometric $G$-equivariant automorphism on $l^\infty(G,X_c)$ whose fixed points correspond to $l^\infty(G,X)$. Here $$\kappa(f + ig)(t) = f(t) + i g(t) , \qquad f, g \in l^\infty(G,X), \; t \in G .$$ For $x,y \in X$ we have $$(\kappa((j_{X})_c(x + iy)))(t) = (\kappa(j_{X}(x) + i \, j_{X}(y)))(t) = j_{X}(x) (t) + i \, j_{X}(y)(t) ,$$ which is $t^{-1} x + i \,t^{-1} y = j_{X_c}(x+iy)(t).$ So $\kappa \circ (j_X)_c = j_{X_c}$. \end{proof} \begin{corollary} \label{Gin} If $X$ is a real $G$-operator space (resp.\ real $G$-operator system) then $X_c$ is complex $G$-injective if and only if $X$ is real $G$-injective. \end{corollary} \begin{proof} By Lemma \ref{Hrem} we have that $X$ is real $G$-injective if and only if $X$ is injective as a real operator space and there is a real $G$-morphism $\phi : l^\infty(G,X) \to X$ such that $\phi \circ j = I_X$. By the fact above Lemma \ref{lem1} this implies that $X_c$ is injective as a real operator space. Also by Lemma \ref{Hreml} from $\phi_c : l^\infty(G,X)_c \to X_c$ we obtain a complex $G$-morphism $\psi = \phi_c \circ \kappa^{-1} : l^\infty(G,X_c) \to X_c$ with $\psi \circ j_{X_c} = I_{X_c}$. So $X_c$ is complex $G$-injective by Lemma \ref{Hrem} again. Conversely if $X_c$ is complex $G$-injective, then it is real $G$-injective by e.g.\ a simpler variant of the arguments in the last paragraph (briefly, it is complex injective so real injective, so $l^\infty(G,X_c)$ is real $G$-injective and hence so is $X_c$). So $X$ is real $G$-injective since it is `real $G$-complemented' in $X_c$ (see Lemma \ref{Gcom}). \end{proof} \begin{proposition} \label{Ginjuos} If $X$ is a real (resp.\ complex) $G$-unital operator space then the space $X^*$ of adjoints is a real (resp.\ complex) $G$-unital operator space, $S = X + X^*$ is a real (resp.\ complex) $G$-operator system, and $$I_G(X + X^*) = I_G(X) \; , \; \; \; C^*_{e,G}(X + X^*) = C^*_{e,G}(X).$$ Moreover any completely contractive unital $G$-equivariant map $T : X \to Y$ between real (resp.\ complex) unital $G$-operator spaces extends uniquely to a selfadjoint $G$-equivariant UCP map $\tilde{T} : X + X^* \to Y + Y^*$, which is also completely isometric if $T$ is. \end{proposition} \begin{proof} For complex unital operator spaces that $X + X^*$ is well defined as an operator system independent of representation is well known \cite{Pau,BLM,ER} , as is the last assertion. In the real case this is the unital space assertion in Proposition 2.4 and Corollary 2.5 in \cite{BT}, and the lines after that. The $G$-equivariance of the extension to $X + X^*$ is an exercise (some of the lines below may help). As in the construction below Theorem \ref{ath}, we have $G$-equivariant unital completely isometric inclusions $$X \; \; \subset \; \; l^\infty(G,X) \; \; \subset \; \; l^\infty(G,B(H)) , $$ where the first of these is the map $j(x)(s) = s^{-1} x$ for $s \in G, x \in X$. 
Then $$j(X)^* \; \; \subset \; \; l^\infty(G,B(H)) \; , \; \; \; \; j(X) \, + \, j(X)^* \; \; \subset \; \; l^\infty(G,B(H)) $$ are $G$-equivariant inclusions. For example, for $s, t \in G, x \in X$ we have $$(t j(x)^*)(s) = j(x)^*(t^{-1} s) = (j(x) (t^{-1} s))^* = ((s^{-1} t) x)^* = (s^{-1} t) x^* ,$$ while $$j(tx)^*(s) = (j(tx)(s))^* = (s^{-1} (tx))^* = s^{-1} (t x^*) = (s^{-1} t) x^* .$$ Thus $t j(x)^* = j(tx)^*$. Hence $j(X)^*$, and so $X^*$, is a real (resp.\ complex) $G$-unital operator space. Similarly $j(X) + j(X)^*$, and so $X + X^*$, is a real (resp.\ complex) $G$-operator system. We are also using the fact in the first line of the proof. Choose $I_G(X + X^*)$ to be a unital $G$-$C^*$-algebra as above with unital $G$-$C^*$-subalgebra $C^*_{e,G}(X + X^*)$. These are clearly $G$-injective (resp.\ $G$-$C^*$-) extensions of $X$ which are $G$-rigid since they are $G$-rigid as extensions of $X + X^*$, and $X \subset X + X^*$. \end{proof} Because of the last proposition many results about unital $G$-operator spaces follow from the $G$-operator system case. For example Lemma \ref{Hrem} and Corollary \ref{Gin} have appropriate variants for unital $G$-operator spaces. The (real or complex) ternary $G$-envelope ${\mathcal T}_G(X)$ of a (real or complex) $G$-operator space may be defined to be the (real or complex) ternary subspace of $I_G(X)$ generated by the copy of $X$. Since $X$ is a $G$-subspace of $I_G(X)$ it is clear that ${\mathcal T}_G(X)$ is also a $G$-subspace, hence is a $G$-operator space. \begin{theorem} \label{ijcoG} Let ${\mathcal S}$ be a real $G$-operator space. Then $I_G({\mathcal S})_c = I_G({\mathcal S}_c)$ and ${\mathcal T}_G({\mathcal S})_c = {\mathcal T}_G({\mathcal S}_c)$. If ${\mathcal S}$ is a real $G$-operator system or unital real $G$-operator space then $(I_G({\mathcal S}),j)$ may be taken to be a real $G$-$C^*$-algebra with $j$ unital, indeed $I_G({\mathcal S})$ is a real $G$-$C^*$-subalgebra of its complexification $I_G({\mathcal S}_c)$. \end{theorem} \begin{proof} The first part follows as in the proof of Theorem \ref{ijco}. For example, by Corollary \ref{Gin}, $I_G({\mathcal S})_c$ is a $G$-injective extension of ${\mathcal S}_c$. If $u : I_G({\mathcal S})_c \to I_G({\mathcal S})_c$ is a (complex linear) $G$-morphism extending $I_{{\mathcal S}_c}$ then as in the proof of Theorem \ref{ijco} we have that $R = \frac{1}{2} (u + (\theta_{I_G({\mathcal S})} \circ u \circ \theta_{I_G({\mathcal S})})) = I$ and hence $u = I$. Thus $I_G({\mathcal S})_c$ is a $G$-injective $G$-rigid extension of ${\mathcal S}_c$, and hence it is a $G$-injective envelope of ${\mathcal S}_c$. As stated after Lemma \ref{Hrem}, $I_G({\mathcal S})$ may be taken to be a $G$-TRO. Let $Z = {\mathcal T}_G({\mathcal S})$, the real $G$-subTRO of $I_G({\mathcal S})$ generated by ${\mathcal S}$. Then $Z_c = Z + i Z$ is a complex $G$-TRO and $G$-extension of ${\mathcal S}_c$, and $Z_c$ is easily seen to be the $G$-TRO generated by ${\mathcal S}_c$ in $I_G({\mathcal S})_c$. So ${\mathcal T}_G({\mathcal S})_c = {\mathcal T}_G({\mathcal S}_c)$. That $(I_G({\mathcal S}),j)$ may be taken to be a $G$-$C^*$-algebra for $G$-unital spaces uses the same ideas as \cite[Corollary 4.2.8 (1)]{BLM} or the lines above \cite[Theorem 2.5]{Hamiecds}.
Indeed the projection $\Phi$ from $W = l^\infty(G,B(H))$ onto $I_G({\mathcal S})$ in the construction of the $G$-injective envelope discussed after Theorem \ref{ath} above is unital, so UCP, and so $I_G({\mathcal S})$ is a $C^*$-algebra in the Choi-Effros product (see the next proof for a few more details if needed). \end{proof} {\bf Remarks.} 1)\ Similarly there will be `$G$-versions' of the statements in Theorem \ref{ijco} for unital real operator algebras or Jordan operator algebras. 2)\ For a finite group $G$ one may use Theorem \ref{Ginj} to give shortened proofs of Theorem \ref{ijcoG} and Corollaries \ref{Gin}, \ref{lem1G}, and \ref{corcpxG}. The $G$-$C^*$-envelope $C^*_{e,G}(X)$ of a $G$-unital operator space or system $X$ is defined to be the (real or complex) $C^*$-algebra generated by the copy of $X$ in $I_G(X)$. Nearly all of the following result is due to Hamana in the complex case (who proved it in \cite{Hamiods} for general locally compact group actions and indeed much more generally than that). \begin{corollary} \label{tocenvG} Let $X$ be a real $G$-operator system or $G$-unital operator space, with $I_G(X)$ taken to be a $C^*$-algebra as in Theorem {\rm \ref{ijcoG}}. \begin{itemize} \item [(1)] $C^*_{e,G}(X)_c = C^*_{e,G}(X_c)$. \item [(2)] $C^*_{e,G}(X)$ has the universal property: given any unital $G$-equivariant complete isometry $\kappa : X \to D$ into a real $G$-$C^*$-algebra $D$ such that $\kappa(X)$ generates $D$ as a real $C^*$-algebra, there exists a (necessarily unique and surjective) $G$-equivariant $*$-epimorphism $\pi : D \to C^*_{e,G}(X)$ such that $\pi \circ \kappa$ is the canonical inclusion of $X$ in $C^*_{e,G}(X)$. \item [(3)] ${\mathcal T}_G({\mathcal S})$ in Theorem {\rm \ref{ijcoG}} has the desired universal property of the ternary $G$-envelope. Namely: given any $G$-equivariant complete isometry $\rho : {\mathcal S} \to W$ into a real $G$-TRO $W$ such that $\rho({\mathcal S})$ generates $W$ as a real $G$-TRO, there exists a $G$-equivariant ternary morphism $\pi : W \to {\mathcal T}_G({\mathcal S})$ such that $\pi \circ \rho = j$, the canonical inclusion of ${\mathcal S}$ in ${\mathcal T}_G({\mathcal S})$. \end{itemize} \end{corollary} \begin{proof} Essentially we just need to explicate Hamana's method in the present setting. (1)\ Let $A$ be a real unital $G$-operator space or system, and $I_G(A_c)$ a unital $G$-$C^*$-algebra as in the last result. Since the $C^*$-subalgebra of a unital $C^*$-algebra generated by a subspace containing the identity is the TRO generated by that subspace, $C^*_{e,G}(A)_c = C^*_{e,G}(A_c)$ as complex $G$-$C^*$-algebras and as $G$-extensions of $A_c$, by the analogous statement for the $G$-ternary envelope in Theorem \ref{ijcoG}. (2)\ For the universal property we follow the argument in 4.3.3 in \cite{BLM}. First we choose a $G$-injective envelope $I_G(X)$ of $X$ which is a unital $G$-$C^*$-algebra containing $X$ unitally, as in Theorem \ref{ijcoG}. Then $C^*_{e,G}(X)$ is a $C^*$-subalgebra of $I_G(X)$. Suppose that $(D,\kappa)$ is any $G$-$C^*$-extension of a (real or complex) $G$-operator system $X$, with $D$ generated by $\kappa(X)$, and suppose that $D$ is a $G$-$C^*$-algebra which is a unital $*$-subalgebra of $B(H)$. Then $j_D(\kappa(X)) \subset l^\infty(G,B(H))$. By the construction of the $G$-injective envelope discussed after Theorem \ref{ath}, there is a $G$-equivariant completely positive idempotent map $\Phi$ on $l^\infty(G,B(H))$ whose range is a $G$-injective envelope $R$ of $j_D(\kappa(X))$. Also, $R$ is a $C^*$-algebra with respect to a new product, the Choi-Effros product.
With respect to the usual product on $l^\infty(G,B(H))$, the $C^*$-subalgebra of $l^\infty(G,B(H))$ generated by $R$ contains $j_D(D)$, the $C^*$-subalgebra of $l^\infty(G,B(H))$ generated by $j_D(\kappa(X))$. We let $B$ be the $C^*$-subalgebra (in the new product) of $R$ generated by $j_D(\kappa(X))$. As in 4.3.3 in \cite{BLM}, a formula related to the Choi-Effros product shows that $\pi = \Phi_{\vert D}$ is a $*$-homomorphism from $D$ to $R$, with respect to the new product on $R$. Since $\pi$ extends the identity map on $j_D(\kappa(X))$, it clearly also maps into $B$. Since $\kappa$ and $j_D$ are $G$-equivariant and $\pi$ extends the identity map on $j_D(\kappa(X))$, we have $\pi(\kappa(g x) )= g \pi(\kappa(x) )$ for $x \in X$. Thus $\pi(\kappa(g x) y)= g \pi(\kappa(x) y)$ for $x \in X$ and $y$ a product of terms from $\kappa(X)$ and $\kappa(X)^*$. So $\pi$ is $G$-equivariant. As in the usual argument (e.g.\ 4.3.3 in \cite{BLM}), the natural unital $G$-equivariant completely isometric surjection $R \rightarrow I_G(X)$ is a $*$-homomorphism. Hence it is clear that $(B, j_D \circ \kappa)$, as a $G$-$C^*$-extension of $X$, is identifiable with $C^*_{e,G}(X)$. Putting these facts together, we see that $C^*_{e,G}(X)$ has the desired universal property. (3)\ The universal property of ${\mathcal T}_{G}({\mathcal S})$ can be proved either similarly to our proof in the last paragraphs but using Youngson's theorem (\cite[Theorem 4.4.9]{BLM}, and its real variant in \cite{BReal}) in place of the Choi-Effros product and formula referred to above; or similarly to the proof of 8.3.11 in \cite{BLM} but with all morphisms $G$-equivariant. The latter is the approach taken in the complex case in \cite[Theorem 4.3]{Hamiods}. The real case is immediate from the complex case applied to $\rho_c : {\mathcal S}_c \to W_c$ and $j_c$. \end{proof} {\bf Remarks.} 1)\ As usual, the real $G$-$C^*$-envelope of $A$ is a $G$-rigid and $G$-essential extension in the appropriate sense for the category we are working in. This is because it sits between $A$ and its $G$-injective envelope, and the latter is $G$-rigid and $G$-essential. Similarly, ${\mathcal T}_G(X)$ is a $G$-rigid and $G$-essential extension of the real or complex $G$-operator space $X$. 2)\ In particular if $(Z,j)$ is a ternary $G$-extension of ${\mathcal S}$ with the universal property in (3), then the usual commutative diagram argument shows that there is a $G$-equivariant ternary {\em isomorphism} $\theta : Z \to {\mathcal T}_G({\mathcal S})$ such that $\theta \circ j$ is the canonical inclusion of ${\mathcal S}$ in ${\mathcal T}_G({\mathcal S})$. So $Z$ `equals' ${\mathcal T}_G({\mathcal S})$ as ternary $G$-extensions of ${\mathcal S}$. \begin{corollary} \label{lem1G} A complex $G$-operator space or system $X$ is real $G$-injective if and only if $X$ is complex $G$-injective. \end{corollary} \begin{proof} By Lemma \ref{Hrem}, $X$ is real $G$-injective if and only if $X$ is injective as a real operator space or system and there is a real $G$-morphism $\phi : l^\infty(G,X) \to X$ such that $\phi \circ j = I_X$. Now $X$ is injective as a real operator space or system if and only if $X$ is injective as a complex operator space or system, by Lemma \ref{lem1}. Let $Q(f) = \frac{1}{2} (\phi(f) - i \phi(if))$ for $f \in l^\infty(G,X)$. As in the proof of Lemma \ref{lem1} we see that $Q$ is complex linear and $Q \circ j = I_X$. It is also $G$-equivariant and completely contractive. In the operator system case $Q(1) = Q(j(1)) = 1$, so that $Q$ is UCP and a $G$-morphism.
Using Lemma \ref{Hrem} again we see that $X$ is complex $G$-injective. The converse is similar, but easier, and as in the last part of Corollary \ref{Gin}. \end{proof} {\bf Remark.} A similar result holds in the unital $G$-operator space case, with a slight variant of the proof which is left to the reader. The same goes for the first statement in the next result. \begin{corollary} \label{corcpxG} The real and complex $G$-injective envelopes of a complex $G$-operator space or system coincide. The real and complex ternary $G$-envelopes of a complex $G$-operator space coincide. The real and complex $G$-$C^*$-envelopes of a complex unital $G$-operator space coincide. \end{corollary} \begin{proof} Let $I_G(X)$ be the complex $G$-injective envelope of the complex $G$-operator space or system $X$. Then $I_G(X)$ is complex $G$-injective and $G$-rigid. By Corollary \ref{lem1G} we have that $I_G(X)$ is real $G$-injective. So to show that it is a real $G$-injective envelope, it is enough to show that it is real $G$-rigid. Consider $X$ and $I_G(X)$ as real spaces and consider a real linear $G$-morphism $u : I_G(X) \to I_G(X)$ extending $I_{X}$. As in the proof of Corollary \ref{lem1G} we have that $Q(z)= \frac{1}{2} (u(z) - i u(iz))$, for $z \in I_G(X)$, is a complex linear $G$-morphism extending $I_{X}$. By complex rigidity and the extreme point argument in the proof of Theorem \ref{ijco} we have $Q = I =u$. Thus $I_G(X)$ is a real $G$-injective $G$-rigid extension of $X$, and hence it is a real $G$-injective envelope of $X$. The real ternary $G$-envelope is the real $G$-TRO in the real $G$-injective envelope, hence in $I_{G}(X)$, generated by $X$. Since $X$ is closed in $I_{G}(X)$ under multiplication by $i$, this agrees with the complex $G$-TRO in $I_{\Cdb}(X)$ generated by $X$. So the real and complex ternary $G$-envelopes coincide. The $G$-$C^*$-envelope assertion follows from this if we take $I_{G}(X)$ a $G$-$C^*$-algebra with the same identity as $X$ as in Theorem \ref{ijcoG}. \end{proof} \begin{theorem} \label{cist} A complex operator space which is a real TRO is a complex TRO. A complex operator space which is a real $C^*$-algebra is a complex $C^*$-algebra. A complex operator space which is a real approximately unital operator algebra is a complex operator algebra. \end{theorem} \begin{proof} To see the first statement apply Corollary \ref{corcpxG} with $G = (0)$ to the ternary envelope. The third statement follows from the `BRS characterization' of operator algebras \cite[Theorem 2.3.2]{BLM}. Indeed if an approximately unital algebra $A$ is a complex operator space, and in addition satisfies the `real condition' $\| x y \| \leq \| x \| \| y \|$ for $x, y \in M_n(A)$, then it satisfies all the hypotheses of the cited characterization of complex operator algebras. If $A$ is a real $C^*$-algebra then it is a complex operator algebra $B$ by the third statement. We have $A = A \cap A^* = \Delta(B)$, which is a complex $C^*$-algebra. We remark that the `diagonal' $\Delta( \cdot )$ of an operator algebra is well defined independently of representation in both the real and complex cases by 2.1.2 in \cite{BLM} and \cite{BReal}. \end{proof} \begin{theorem} \label{Ginj} Let $G$ be a finite group. If $S$ is a real (resp.\ complex) operator space or system which is also a real (resp.\ complex) $G$-operator space or system then $S$ is real (resp.\ complex) $G$-injective if and only if $S$ is real (resp.\ complex) injective. 
\end{theorem} \begin{proof} We just do the operator space case; the result for $G$-operator systems is similar. Suppose that $S$ is injective. Let $V$ and $W$ be $G$-operator spaces, with $V$ a $G$-subspace of $W$ (that is, the inclusion map is $G$-equivariant). Let $\phi: V \to S$ be a real (resp.\ complex) $G$-equivariant complete contraction. Then since $S$ is injective, we may extend $\phi$ to a real (resp.\ complex) $\psi: W \to S$, which may not be $G$-equivariant. For each $g\in G,$ the map $\psi_g : W \to S$ defined by $\psi_g (x) = g \psi(g^{-1}x)$ is also completely contractive, and extends $\phi$ by $G$-equivariance of $\phi$. If we take the average $\rho = \frac{1}{|G|} \sum_{g\in G} \psi_g$, we get an extension of $\phi$ to a $G$-equivariant complete contraction from $W$ to $S$. Thus, $S$ is $G$-injective. Now suppose that $S$ is $G$-injective. By the next theorem (whose proof only uses the last paragraph) we have $I(S) = I_G(S) = S$, so that $S$ is injective. \end{proof} {\bf Remark.} A similar result holds in the unital $G$-operator space case, with basically the same proof. In the last result and in Theorem \ref{mkb} we focus on finite groups. However we remark that the results will still be valid for continuous actions of a compact group $G$, with some modifications in proof. Indeed the average $\rho$ in the last proof may be replaced by a Bochner integral $\rho = \int_G \, \psi_g \, dg$ with respect to normalized Haar measure, which is easily argued to exist and be $G$-invariant. We leave the details to the reader. \begin{theorem} \label{mkb} Let $G$ be a finite group and $X$ a real or complex $G$-operator space (resp.\ $G$-operator system). \begin{itemize} \item [(1)] The $G$-injective envelope of $X$ in the category of real or complex $G$-operator spaces (resp.\ $G$-operator systems, $G$-unital operator spaces) is an injective envelope of $X$ in that category. That is, $I_G(X) = I(X)$. \item [(2)] ${\mathcal T}_G(X)$ is a real (resp.\ complex) ternary envelope of $X$. \item [(3)] The $G$-$C^*$-envelope $C^*_{e,G}(X)$ is a real (resp.\ complex) $C^*$-envelope $C^*_{e}(X)$ for a real (resp.\ complex) $G$-unital operator space $X$. \end{itemize} \end{theorem} \begin{proof} We just prove the operator space cases, the others being similar. For (1) let $X$ be a real (resp.\ complex) $G$-operator space. By rigidity, for each $g \in G$ the action of $g$ on $X$ extends uniquely to a complete isometry of $I(X)$, whose inverse is the extended action of $g^{-1}$. Again by rigidity, we have $g^{-1} (h^{-1} ((hg) x)) = x$ on $I(X)$, so that $(hg) x = h(gx)$. So we have a group homomorphism into the group of surjective complete isometries on $I(X)$. So $I(X)$ is a real (resp.\ complex) $G$-operator space. It is $G$-rigid since it is rigid. Then $I(X)$ is $G$-injective by the first paragraph of the proof of Theorem \ref{Ginj} and hence is the $G$-injective envelope. Since ${\mathcal T}_G(X)$ (resp.\ $C^*_{e,G}(X)$) is the (real or complex) ternary subspace (resp.\ $C^*$-algebra) of $I_G(X) = I(X)$ generated by the copy of $X$, it is a ternary envelope (resp.\ $C^*$-envelope) of $X$. \end{proof} {\bf Remark.} There are analogous results to the operator algebra results early in Section 4 for $G$-operator algebras. We say that an operator algebra $A$ is a $G$-{\em operator algebra} if $G\curvearrowright A$ by surjective completely isometric homomorphisms (automorphisms). 
Again if $A$ is also unital then by a well known Banach-Stone theorem (e.g.\ 4.5.13 in \cite{BLM} in the complex case, which gives the real case by complexification; see also \cite[Theorem 4.4]{RComp}), this is equivalent to $G$ acting by linear surjective unital complete isometries. \section{Further extension of real structure} \label{fe} We have seen that real structure in a complex operator system (resp.\ operator space) $\cS$ forces a compatible real structure in the $C^*$-envelope (resp.\ ternary envelope) and injective envelope. A natural question is whether it forces real structure in the bicommutant of the $C^*$-envelope. It certainly gives real structure in the bidual of the $C^*$-envelope (by Proposition \ref{chco} applied to $(\theta_{\cS})^{**}$). We give an example showing that this is not true in general. Consider the conjugate linear period 2 involution $f^*(z) = \overline{f(\bar{z})}$ on the disk algebra and on its $C^*$-envelope $C(\Tdb)$. The question is if these algebras are represented nondegenerately on a Hilbert space, then does the involution extend to the bicommutant $C(\Tdb)''$ in $B(H)$? By von Neumann's bicommutant theorem this is also the weak* closure (and the strong and weak operator topology closure) of the copy of $C(\Tdb)$. Consider a countable dense set $E$ in $\Tdb$ with $E \cap \bar{E} = \emptyset$. Note that $C(\Tdb) \subset l^\infty(E)$ isometrically via the canonical map $f \mapsto f_{|E}$. This corresponds to a canonical representation $C(\Tdb) \to B(l^2(E))$, coming from the canonical representation $\lambda : l^\infty(E) \to B(l^2(E))$. We claim that $\lambda(C(\Tdb))''$ is the von Neumann algebra $\lambda(l^\infty(E))$. Indeed $C(\Tdb)$ is weak* dense in $l^\infty(E)$. To see this suppose that $g \in l^1(E)$ annihilates $C(\Tdb)$. Fix $x \in E$ and choose a decreasing sequence of positive `tent functions' $f_n \in C(\Tdb)$ with $f_n(x) = 1$ and $f_n = 0$ outside of an arc centered at $x$ of length $1/n$. So $f_n$ converges pointwise on $E$ to $\chi_{\{ x \}}$. By Lebesgue's theorem we have $$0 = \langle f_n , g \rangle \to \langle \chi_{\{ x \}} , g \rangle = g(x) .$$ Thus $g = 0$ and $C(\Tdb)$ is weak* dense in $l^\infty(E)$, and $\lambda(C(\Tdb))'' = \lambda(l^\infty(E))$ as claimed. There is no conjugate linear $*$-automorphism of $l^\infty(E)$ extending the given involution on $C(\Tdb)$. Indeed such a $*$-automorphism would be given by a period 2 permutation (bijection) $\alpha$ of $E$. This yields $$\overline{f(\bar{z})} = \overline{f(\alpha(z))} , \qquad f \in C(\Tdb),$$ which implies the contradiction that $\alpha(z) \in E \cap \bar{E}$. Phrasing this in different language: let $N$ be the real $C^*$-algebra of fixed points of the given involution on $C(\Tdb)$. Then $C(\Tdb) = N_c$, but $(\bar{N}^{w*})_c \neq \overline{N_c}^{w*}$. Indeed $\overline{N_c}^{w*} = l^\infty(E)$ has no conjugate linear $*$-automorphism extending the involution on $N_c = C(\Tdb)$. In this example $N = \{ f \in C(\Tdb) : f(z) = \overline{f(\bar{z})} , z \in \Tdb \}$ is real $*$-isomorphic to the continuous complex-valued functions on the upper semicircle that are real at the endpoints $\pm1$. This example also shows that one cannot extend the real structure to every injective $C^*$-algebra containing $C^*_e(A)$. For $l^\infty(E)$ is an injective $C^*$-algebra containing $C(\Tdb)$. On the other hand, note that the given involution on $C(\Tdb)$ does extend to the universal von Neumann algebra $C(\Tdb)^{**}$ (by e.g.\ \cite[Lemma 2.12]{BWinv}), and to $L^\infty(\Tdb)$. 
The latter may be viewed as the GNS representation of the Lebesgue integral on $C(\Tdb)$. More generally, we have: \begin{proposition} \label{tgns} Suppose that we have an involution $\dagger$ on a complex $C^*$-algebra $A$ given by a conjugate linear completely isometric period 2 $*$-automorphism $\theta : A \to A$ (so that $\theta(a) = a^\dagger$, and $A$ is the complexification of a real $C^*$-algebra $B$). If $\tau$ is a faithful $\dagger$-preserving state on $A$, and if $\tau$ has GNS representation $\pi_\tau$ on $H_\tau$, then the involution $\theta$ extends to the von Neumann algebra $\pi_\tau(A)''$, and indeed further to $B(H_\tau)$. \end{proposition} \begin{proof} This may be seen by first defining a complex linear involution $c_\tau$ on the GNS Hilbert space $H_\tau$ by $c_\tau (a) = \theta(a)$ for $a \in A$. Note that $$\| c_\tau (a) \|^2_{H_\tau} = \tau(\theta(a^*) \theta(a)) = \tau(a^* a) = \| a \|^2_{H_\tau}, \qquad a \in A .$$ So $c_\tau$ extends to a selfadjoint unitary (a symmetry) $U$ on $H_\tau$. Then $T \mapsto U T U$ is a conjugate linear weak* continuous automorphism on $B(H_\tau)$ which extends the canonical involution on $\pi_\tau(A)$. Indeed $$U \pi_\tau(a) U (b) = U \pi_\tau(a) \theta(b) = U a \theta(b) = \theta(a \theta(b)) = a^{\dagger} b , \qquad a, b \in A , $$ so that $U \pi_\tau(a) U = \pi_\tau(a^{\dagger})$ since $A$ is dense in $H_\tau$. By continuity and density the restriction of this automorphism to $\pi_\tau(A)'' = \overline{\pi_\tau(A)}^{w*}$ is a conjugate linear completely isometric period 2 $*$-automorphism there. \end{proof} Thus if $A$ is the complexification of a real $C^*$-algebra $B$ as above then we conclude that $\pi_\tau(B_c)''$ is the complexification of $\pi_\tau(B)''$. \end{document}
\begin{document} \title{Generalized dynamical theories in phase space and the hydrogen atom} \author{Martin Pl\'{a}vala} \email{[email protected]} \affiliation{Naturwissenschaftlich-Technische Fakultät, Universität Siegen, 57068 Siegen, Germany} \author{Matthias Kleinmann} \email{[email protected]} \affiliation{Naturwissenschaftlich-Technische Fakultät, Universität Siegen, 57068 Siegen, Germany} \begin{abstract} The success of quantum theory is founded on the ability to model a wide range of physical systems and extract corresponding predictions. While the quantum models are subject to experimental tests and subsequent refinements, the language of quantum theory itself is generally taken as fixed. But quantum theory is only a distinct instance within the class of all probabilistic theories, an insight that enables rudimentary experimental tests of the language of quantum theory. However, thorough tests are inconceivable because general probabilistic theories cannot describe concrete physical systems. We show that the phase-space formulation of probabilistic theories can be extended to include dynamics and that it can describe a nonquantum hydrogen-like system which is stable, has discrete energy levels, includes the Zeeman effect, consistently predicts excitations by a resonant laser, and exhibits Rutherford scattering. Our construction demonstrates that the language of quantum theory is a specific choice and that other probabilistic theories also lead to measurable predictions. \end{abstract} \maketitle \section{Introduction} Quantum theory is an extremely successful theory with a long track record of applications \cite{NawaregMuhammadHorodeckiBourennane-superadditivity,Zhang_2022,Nadlinger_2022} and precise experimental predictions \cite{Christensen_2015,Hu_2018,Mazurek_2021,RenouTrilloWeilenmannLeTavakoliGisinAcinNavascues-realQT}. But there are parts of physics where quantum theory has not yet given a satisfactory answer, for example for macroscopic systems or an experimentally relevant theory of gravity. Given that classical and quantum theory are the only established fundamental physical theories, one may be led to the implication that if a theory is nonclassical, then it should be described by quantum theory or a minor modification thereof. However, assuming that a theory is quantum already fixes many of its fundamental properties, for example, the structure of uncertainty relations \cite{Robertson-UR} and Tsirelson's bound on nonlocal correlations \cite{Tsirelson:1993HJS}. We argue here against the validity of the aforementioned implication by presenting an intuitive construction of a nonclassical and nonquantum theory of a fundamental type of interacting systems, namely two nonrelativistic particles coupled by a $\frac{1}{r}$-potential. The most common example of this system is the hydrogen atom, but it applies also to any other pair of interacting systems where the far field gives rise to a $\frac{1}{r}$ effective potential. Our construction shows that when searching for unknown nonclassical theories, we cannot restrict our search to theories based on quantum theory and its underlying mathematical language of Hilbert spaces. The analysis of the hydrogen atom itself is one of the most intriguing early successes of quantum theory. An application of the uncertainty principle allows one to explain the stability of the hydrogen atom, and from Schrödinger's equation one can compute the spectral lines with high precision as well as the correct behavior in Rutherford scattering. 
In some sense, however, this great success may even hinder further development in theoretical and applied physics. This is so because, on the one hand, this striking explanatory power of quantum theory makes it difficult to imagine a completely different theory which makes the same predictions. On the other hand, without comparison to any alternative, it is notoriously difficult to identify why quantum theory is so successful and hence to harness possible resources that are still hidden in the theory. We consider theories in which the states are not necessarily representable in terms of (ensembles of) wave-functions but rather by pseudo-probability densities over the phase space. A widely accepted mathematical framework to describe theories with a modified state space is that of generalized probabilistic theories (GPTs) \cite{JanottaHinrichsen-review,Lami-thesis,Mueller-review,Plavala-review,Leppajarvi-thesis}. Starting from the states, by adding a notion of measurements and further constructive elements to the theory, one arrives at theories that can be vastly different from quantum theory, for example, by predicting beyond-quantum correlations or by admitting information-theoretical tasks not possible according to quantum theory \cite{Barrett-GPTinformation,MatsumotoKimura-superInformationStorability,SelbySikora-money,HeinosaariLeppajarviPlavala-noFreeInformation,SikoraSelby-coinFlipping,DArianoErbaPerinotti-classicalEntanglement,CavalcantiSelbySikoraGalleySainz-witworld,GilliganLee-computation}. One can also add additional postulates in order to derive finite-dimensional quantum theory \cite{Hardy-derivationQT,MasanesMuller-derivationQT,ChiribellaDArianoPerinotti-derivationQT,Wilce-derivationQT,Buffenoir-derivationQT}. However, one now faces two main difficulties when one aims to describe physical systems like the hydrogen atom within a GPT. First, one needs to construct a model in the formalism of GPTs for which one can reasonably argue that it constitutes a model of the hydrogen atom. The problem of model building is addressed by a phase-space formulation generalizing the Wigner-Weyl formalism in quantum theory \cite{Spekkens-epistricted,LinDahlsten-tunneling,PlavalaKleinmann-oscillator}. Second, in order to obtain predictions beyond simple static predictions, one needs to formulate the time-evolution of the system. In quantum theory, a general Markovian time-evolution is given by the Lindblad equation. In fact, besides Markovianity, the Lindblad equation is a consequence of the assertion that the system is described by a quantum state, for all times. While Markovianity can be directly formulated in GPTs \cite{GrossMuellerColbeckDahlsten-dynamicsGPT,AlSafiShort-boxworldDynamics,AlSafiRichens-reversibleDynamics,BranfordDahlstenGarner-hamiltonDynamics}, the positivity conditions are more difficult to implement. This difficulty has at least two sources. First, the set of states may not be fully specified, for example, when no exhaustive set of observables has been described. Second, even if the set of states is known, positivity alone may not be a sufficient condition, for similar reasons as in quantum theory, where one needs to consider complete positivity \cite{HeinosaariZiman-MLQT}. We avoid these difficulties by imposing a general form of the dynamical equations and subordinating positivity to the time evolution. The time-evolution is based on the Moyal bracket, but has a significantly more general form. 
Despite its generality, we show that any GPT under this time-evolution exhibits a generalized version of Ehrenfest's theorem. Returning to the hydrogen atom, we first provide a spectral decomposition of the energy observable, study the ground state, determine the conserved quantities, and discuss the degeneracy of the spectrum. Then we consider two dynamical situations: first, the interaction of the electron with an external electric field, and second, the scattering of a charged particle in a Rutherford-type scenario. In the first situation, there are deviations from the quantum predictions, albeit quite small. For the scattering, we use a Green's function approach. We find that, in the far field, the scattering cannot be different from the classical and quantum case, thus yielding the Rutherford scattering in any generalized dynamical theory in phase-space. \section{Operational theories in phase space} We build on the results of Ref.~\cite{PlavalaKleinmann-oscillator}. In this framework, the state of a physical system is described by a real-valued phase-space function $\rho(\vec{q}, \vec{p})$. This function generalizes ensembles in statistical mechanics by allowing $\rho$ to attain negative values in some regions of phase space. It follows that $\rho$ no longer has an interpretation as a probability density over phase space. But, in analogy to Wigner functions in quantum theory, the marginals of $\rho$ are required to have an interpretation as a probability density. That is, \begin{equation} \rho_q(\vec{q}) = \int_{\RR^3} \rho(\vec{q},\vec{p})\ddd{3} p \end{equation} is a proper probability density for position obeying $\rho_q(\vec{q}) \geq 0$ for all $\vec{q}$ and $\int \rho_q(\vec{q})\ddd{3} q = 1$, and similarly for the momentum marginal $\rho_p(\vec{p})$. Note that we always consider a phase-space of dimension $3+3$, with generalizations to other dimensions being straightforward. In addition to the marginal distributions, the state of a system determines the expectation value of any observable. An observable itself is described by the phase-space function known from classical mechanics, for example, $H(\vec{q},\vec{p})=\frac{\abs{\vec{p}}^2}{2 \mu}$ for the energy of a free nonrelativistic particle or $L_3(\vec{q},\vec{p})=q_1 p_2 - q_2 p_1$ for the $z$-component of the angular momentum vector. In order to be more concrete about the ``expectation value of an observable,'' we describe the outcome of a measurement of an observable $A$ as a random variable $\tilde{A}$. The expectation value of $\tilde{A}$ is then computed via \begin{equation} \braket{\tilde{A}}= \int_{\RR^6} A(\vec q,\vec{p})\rho(\vec q,\vec{p})\ddd{3} q \ddd{3} p= \braket{A,\rho}, \end{equation} where we introduced the short-hand notation $\braket{A,\rho}$. We mention that although the phase-space observables remain the same as in classical theory, their interpretation does change: In classical theory, $A(\vec{q},\vec{p})$ can be understood as the value of $A$ if the system is at the phase-space point $(\vec{q},\vec{p})$. This interpretation does not hold in the generalized framework, since $\rho$ is not a probability density over the phase space. Rather $A(\vec{q},\vec{p})$ should be seen as an abstract representation of an observable as a function over the phase space. As a consequence, the second moment of the random variable $\tilde{A}$, that is, the mean square, cannot be computed as an integral over $A^2(\vec{q},\vec{p})$. In other words, in general we have $\braket{A^2, \rho} \neq \braket{(\tilde{A})^2}$. 
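To make the pairing $\braket{A,\rho}$ concrete, the following short numerical sketch may be helpful. It is our own illustration and not part of the original construction: the grid, the widths, and the reduction to a $1+1$-dimensional phase space are arbitrary choices, and the example only evaluates the pairing itself, not the second moment discussed above.
\begin{verbatim}
# Minimal sketch (assumption-laden): evaluate the pairing <A, rho> defined
# above on a 1+1-dimensional phase-space grid.  The state is an everywhere
# nonnegative Gaussian, so it is also a valid classical ensemble.
import numpy as np

mu, sigma_q, sigma_p = 1.0, 1.0, 0.5          # arbitrary illustrative values
q = np.linspace(-10.0, 10.0, 401)
p = np.linspace(-10.0, 10.0, 401)
Q, P = np.meshgrid(q, p, indexing="ij")

rho = np.exp(-Q**2 / (2 * sigma_q**2) - P**2 / (2 * sigma_p**2)) \
      / (2 * np.pi * sigma_q * sigma_p)       # normalized phase-space function
A = P**2 / (2 * mu)                           # kinetic-energy observable

dq, dp = q[1] - q[0], p[1] - p[0]
print(np.sum(A * rho) * dq * dp)              # <A, rho>; analytically sigma_p^2/(2 mu) = 0.125
\end{verbatim}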
We can already deduce a first implication of the phase-space formalism: It is a known fact stated in textbooks on quantum theory \cite{Cohen-TannoudjiDiuLaloe-quantum,Bohm-quantum,GottfriedYan-quantum} that the electron in the hydrogen atom cannot collapse into the nucleus, because that would violate uncertainty relations. Hence the ground state energy of the quantum hydrogen atom must be finite. Here we derive the other implication, showing that uncertainty relations between position and momentum are necessary to prevent the hydrogen atom from collapsing in operational theories on phase space. For this, we compare the minimal value of the energy over all phase-space points, $E_\mathrm{min} = \inf_{\vec{q},\vec{p}} H(\vec{q}, \vec{p})$, with the lowest energy reachable among all states of a given theory, $E_0 = \inf_\rho \braket{H,\rho}$. We say that a theory exhibits preparation uncertainty if the set of states is restricted in such a way that $\rho(\vec{q},\vec{p})= \delta^{(3)}(\vec{q}_0-\vec{q})\delta^{(3)}(\vec{p}_0-\vec{p})$ is not a valid state for some phase-space point $(\vec{q}_0, \vec{p}_0)$. This kind of uncertainty is clearly a necessary precondition for $E_0 > E_\mathrm{min}$, thus proving our claim. Finally, we are interested in the probability distribution of the random variable $\tilde{A}$. For this one invokes a ``spectral decomposition'' of $A(\vec{q},\vec{p})$ in the form of the phase-space spectral measure $g_A(I;\vec{q},\vec{p})$. The spectral measure then yields the probability distribution of $\tilde{A}$ via \begin{equation} \Pr[\tilde{A}\in I] = \braket{g_A(I),\rho}. \end{equation} Here, $I$ is any measurable subset of the range of $\tilde{A}$. The phase space spectral measure must satisfy two properties: $\Pr[\tilde{A}\in I]$ must be a probability distribution, and the spectral measure must also reproduce the expectation value of $\tilde{A}$, that is, $\braket{\tilde{A}}= \int_\RR a \braket{g_A(a),\rho}\dd a$, see Ref.~\cite{PlavalaKleinmann-oscillator}. It is sufficient to require that $\int_\RR g_A(a) \dd a = 1$ and $\int_\RR a g_A(a; \vec{q}, \vec{p}) \dd a = A(\vec{q}, \vec{p})$, while the positivity condition $\braket{g_A(I),\rho}\ge 0$ must hold for all states. Comparing the definition of the spectral measure with the initial definition of the marginals of $\rho$, we identify the spectral measure of the position observable $\vec{Q}(\vec{q},\vec{p})=\vec{q}$ to be \begin{equation} g_{\vec{Q}}(I;\vec{q},\vec{p})= \int_I \delta^{(3)}(\vec{q}_0-\vec{q}) \ddd{3} q_0 \end{equation} and analogously for momentum. For other observables there is no unique way to define the phase-space spectral measure. Already the phase-space spectral measure for the energy of the harmonic oscillator differs between classical and quantum theory, and other theories for the harmonic oscillator that are neither classical nor quantum have been proposed \cite{PlavalaKleinmann-oscillator}. \section{Time-evolution} We now consider the time evolution of a system in the sense that the random variable $\tilde{A}$ associated to an observable $A$ evolves over time. For the moment we assume that this time dependence can be solely attributed to a change of the state of the system, \begin{equation} \label{eq:time-Schrodinger} \Pr[\,\tilde{A}(t)\in I\,] = \braket{g_A(I), \rho(t)}. 
\end{equation} The time evolution of the state is Markovian if, for any $t>0$ and any $t_0$, $\rho(t+t_0)$ is fully determined by $\rho(t_0)$ and $t$, that is, \begin{equation} \rho(t+t_0)= \mathcal{R}(t)\rho(t_0) \end{equation} for some linear map $\mathcal{R}(t)$ mapping states to states. Under mild assumptions such a Markovian time evolution yields the equation of motion $\dot{\rho}(t_0) = \mathcal{G}\rho(t_0)$, where $\mathcal{G}= \dot{\mathcal{R}}(0)$ is the generator of time shifts and $\mathcal R(t)= \e^{t\mathcal G}$. Here, we assume that $\mathcal{R}$ and $\mathcal{G}$ are linear maps in order to achieve that convex combinations are preserved by the time evolution, that is, if $\rho(t_0) = p\rho_1(t_0)+(1-p)\rho_2(t_0)$ then $\rho(t+t_0) = p\rho_1(t+t_0)+(1-p)\rho_2(t+t_0)$ for $0\le p\le 1$. The situation that we described so far corresponds to the Schrödinger picture in quantum theory, since only the state evolves in time. We now switch to the Heisenberg picture and assume that, equivalently, all of the time-dependence of $\tilde{A}(t)$ can be accounted for by the spectral measure while the state remains constant, \begin{equation} \label{eq:time-Heisenberg} \Pr[\, \tilde{A}(t)\in I\,]= \braket{g_A(I;t), \rho}. \end{equation} Equating the time derivatives of the right-hand sides of Eq.~\eqref{eq:time-Heisenberg} and Eq.~\eqref{eq:time-Schrodinger} gives us $\braket{\dot{g}_A(I;t),\rho}= \braket{g_A(I),\dot{\rho}(t)}= \braket{g_A(I), \mathcal{G} \rho} = \braket{\mathcal{G}^\dag g_A(I),\rho}$, and hence the adjoint map $\mathcal{G}^\dag$ is the generator for the time shifts of observables. In summary, we arrived at the dynamical equations \begin{equation} \label{eq:time-dynG} \dot{\rho} = \mathcal{G} \rho \quad \text{and} \quad \dot{g}_A(I) = \mathcal{G}^\dagger g_A(I), \end{equation} where $\mathcal{G}$ and $\mathcal{G}^\dagger$ are linear maps with the latter being the adjoint map of the former with respect to $\braket{\cdot,\cdot}$. In the following we choose a generator $\mathcal{G}$ that generalizes both the classical and quantum case. The time evolution in classical mechanics is given by the Liouville equation $\dot{\rho} = \{H,\rho\}$. Here, \begin{equation} \{f,g\} = \sum_{i=1}^3 \left(\dfrac{\partial f}{\partial q_i} \dfrac{\partial g}{\partial p_i} - \dfrac{\partial f}{\partial p_i} \dfrac{\partial g}{\partial q_i}\right) = f D_\omega g \end{equation} denotes the Poisson bracket, and we define the operator $D_\omega$, which acts both to the left and to the right. In contrast, using the Wigner function formalism for quantum theory \cite{Wigner-WignerFunctions,Case-wignerFunctions}, the time-evolution of a Wigner function is given as $\dot{\rho} = \mb{H,\rho}_{\QT}$, where \begin{equation} \mb{f, g}_{\QT} = \dfrac{2}{\hbar} f \sin\left( \frac{\hbar}{2} D_\omega \right) g \end{equation} is the Moyal bracket \cite{Groenewold-QM,Moyal-WignerFunctions}. As a generalization of both the Poisson bracket and the Moyal bracket, we define the generalized Moyal bracket as \begin{equation} \label{eq:time-generalizedMoyal} \mb{f,g} = \{f,g\} + \sum_{n=1}^\infty a_n \hbar^{2n} f D_\omega^{2n+1} g, \end{equation} where $a_n$ are some dimensionless constants. We recover the Poisson bracket for $a_n = 0$, while $a_n = \frac{(-1)^n}{2^{2n} (2n+1)!}$ yields back the Moyal bracket. Moreover we will assume that, just like in quantum theory, $\hbar$ is a small constant and so the terms proportional to $\hbar^{2n}$ are microscopic corrections to the classical time evolution obtained in the limit $\hbar \to 0$. 
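For readers who want to experiment with the definition, the following symbolic sketch implements the generalized bracket on a $1+1$-dimensional phase space. It is our own illustration: the truncation order and the test functions are arbitrary choices, and the recursion used for $f D_\omega^{k} g$ is the same induction step as in Appendix~\ref{appendix:propertiesGMB}.
\begin{verbatim}
# Symbolic sketch (our own illustration): the generalized Moyal bracket
# defined above, truncated at a finite order, on a 1+1-dimensional phase space.
import sympy as sp

q, p, hbar = sp.symbols("q p hbar", real=True)

def D_pow(f, g, k):
    # f D_omega^k g via the recursion
    # f D^k g = d_q f D^{k-1} d_p g - d_p f D^{k-1} d_q g
    if k == 0:
        return f * g
    return (D_pow(sp.diff(f, q), sp.diff(g, p), k - 1)
            - D_pow(sp.diff(f, p), sp.diff(g, q), k - 1))

def bracket(f, g, a, n_max=3):
    # generalized bracket, truncated after the hbar^(2*n_max) term
    out = D_pow(f, g, 1)                              # Poisson bracket
    for n in range(1, n_max + 1):
        out += a(n) * hbar**(2 * n) * D_pow(f, g, 2 * n + 1)
    return sp.expand(out)

poisson = lambda f, g: sp.expand(D_pow(f, g, 1))
moyal_a = lambda n: sp.Rational((-1)**n, 2**(2 * n)) / sp.factorial(2 * n + 1)

f, g = q**4, p**4
assert bracket(f, g, lambda n: 0) == poisson(f, g)    # a_n = 0 gives the classical bracket
print(bracket(f, g, moyal_a) - poisson(f, g))         # Moyal correction: -24*hbar**2*q*p
# property (iii): for a quadratic second argument the bracket is the Poisson bracket
assert bracket(q**3 * p**2, p**2 + q * p, moyal_a) == poisson(q**3 * p**2, p**2 + q * p)
\end{verbatim}
For polynomial arguments the truncation is harmless, since high powers of $D_\omega$ annihilate them; for general functions the infinite sum has to be handled with care.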
Nevertheless, the coefficients $a_n$ are experimentally accessible in anharmonic systems. Consider the Hamiltonian $H(t)=\frac{p^2}{2m} + \frac{m\omega^2}2 q^2 +\lambda(t)\frac{m^2\omega^3}{2\hbar} q^4$, where $\lambda(t)$ is a function of time. Then $\lambda(t)\frac{m^2\omega^3}{2\hbar} q^4$ will contribute terms proportional to $a_1$ to the time-evolution. Thus one can determine $a_1$ by introducing the anharmonic term and measuring $p^2$ at a later time, see Appendix~\ref{appendix:timedep} for the explicit calculation. We summarize key properties of the generalized Moyal bracket. \begin{enumerate}[(i)] \item $\mb{\cdot,\cdot}$ is linear in both arguments. \item $\mb{\cdot,\cdot}$ is antisymmetric, $\mb{f,g}=-\mb{g,f}$. \item\label{item:time-GMB-polynomials} For $g$ a polynomial on phase space of at most second order and $f$ an arbitrary phase-space function, $\mb{f,g}=\{f,g\}$. \item $\Lie_f: g\mapsto \mb{f,g}$ is skew-adjoint, $\Lie_f^\dagger = -\Lie_f$. \item $\mb{\cdot,\cdot}$ satisfies the Jacobi identity if and only if it coincides with the Moyal bracket (up to the value of $\hbar$). \end{enumerate} The proofs of Properties~(i)--(iv) can be found in Appendix~\ref{appendix:propertiesGMB}; Property~(v) was proved in Ref.~\cite{Fletcher-moyalBracket}. We mention that antisymmetry and skew-adjointness follow from the fact that only odd powers of $D_\omega$ occur in the generalized bracket; including even powers would either contradict Markovianity of the time-evolution, or the generator of time-translations would be different from $\Lie_H$, see Appendix~\ref{appendix:propertiesGMB}. Other properties that are familiar from the Poisson bracket are not fulfilled by the Moyal bracket and the generalized bracket. In particular, $\Lie_f$ obeys neither Leibniz's rule, $\Lie_f(gh)\neq h\Lie_fg + g\Lie_f h$, nor the chain rule, $\Lie_f g(h) \neq g'(h) \Lie_f h$. Also there exist functions $f,g$ such that $\{f,g\} = 0$ but $\mb{f,g} \neq 0$, or such that $\mb{f,g} = 0$ but $\{f,g\} \neq 0$, see Appendix~\ref{appendix:propertiesGMB}. We use the generator $\Lie_H: f \mapsto \mb{H,f}$ in order to refine the dynamical equations in Eq.~\eqref{eq:time-dynG}, that is, \begin{equation} \label{eq:time-timeEvolution} \dot{\rho} = \Lie_H \rho \quad \text{and} \quad \dot{g}_A(I) = -\Lie_H g_A(I). \end{equation} For time-independent Hamiltonians, these equations have the solutions \begin{equation} \rho(t) = \e^{t \Lie_H} \rho \quad \text{and} \quad g_A(I;t) = \e^{-t \Lie_H} g_A(I), \end{equation} respectively. Note that the simplification of Property~\eqref{item:time-GMB-polynomials} may hold at time $t_0$ but not at later times. For example, in order to obtain the time-evolution of position $q_i$ in the Heisenberg picture we have to solve the equation $\dot{Q}_i(t) = - \Lie_H Q_i(t)$ with the initial condition $Q_i(0; \vec{q}, \vec{p}) = q_i$, and in general $\Lie_H Q_i(0) = \{H, Q_i(0)\}$ does not imply that the same holds at later times. \section{Properties of the time-evolution} For a Markovian time-evolution, the generator $\mathcal G$ must be constant in time. This is consistent with our dynamical equations, since the generalized bracket is antisymmetric and thus $\dot{H} = - \Lie_H H = 0$. In particular, the energy is conserved on average, that is, $\frac{d}{dt}\braket{\tilde H}= \braket{\dot H,\rho} = 0$. However, this does not yet imply that the probability distribution of $\tilde H$ is constant, since $\dot{H} = 0$ does not establish $\dot{g}_H(I) = 0$. 
This property is further distinct from the question of whether an eigenstate, that is, a state with a sharp energy distribution, is constant in time. We mention that in classical and quantum theory, the distribution of $\tilde H$ is constant in time. For classical theory this follows at once from $g_H(E)=\delta(E-H)$ and $\dot g_H(I)=-\{H,g_H(I)\}= 0$, and in quantum theory from $[\hat H,\Pi(I)]=0$, where $I\mapsto \Pi(I)$ is the projection-valued spectral measure of the Hamilton operator $\hat H$. Similarly, any eigenstate in quantum theory is stationary, while this is not generally the case in classical mechanics or operational theories. While it is possible to construct a spectral measure such that $\Lie_H g_H(I) \neq 0$, one can prove that for a discrete energy spectrum and for states such that $\Pr[\tilde{H} = E_n] \neq 0$ for at most two values of $n$ we have $\frac{d}{dt} \Pr[\tilde{H} \in I] = 0$, see Appendix~\ref{appendix:propertiesGMB}. This has two immediate consequences: First, the probability distribution of $\tilde{H}$ is constant in time if the system is in an eigenstate. Second, observing the effects of the time-evolution of $g_H(I)$ requires the existence of a state with contributions to more than two energy levels. Since these contributions must not be merely due to convex combinations, such states are in a generalized superposition state \cite{AubrunLamiPalazuelosPlavala-superposition}. A hallmark of quantum theory is Ehrenfest's theorem, which shows that the mean values of position and momentum follow classical equations of motion. We can establish an analogous result in our general framework. For this we assume a Hamiltonian of the form \begin{equation} H(\vec{q}, \vec{p})=T(\vec{p}) + V(\vec{q}). \end{equation} Then the observables of position $\vec{Q}(\vec{q},\vec{p})= \vec{q}$, momentum $\vec{P}(\vec{q},\vec{p})=\vec{p}$, velocity $\vec v(\vec{q},\vec{p})= \vec{\nabla}_p T(\vec{p})$, and force $\vec F(\vec{q}, \vec{p})=- \vec{\nabla}_q V(\vec{q})$, where $\vec{\nabla}_q$ and $\vec{\nabla}_p$ are the gradients in position and momentum respectively, obey classical equations of motion on average, \begin{equation} \label{eq:ehrenfest} \dfrac{d}{dt}\braket {\tilde Q_i} = \braket {\tilde v_i} \quad \text{and} \quad \dfrac{d}{dt}\braket {\tilde P_j} = \braket{\tilde F_j}. \end{equation} In order to see this, we first observe that $-\Lie_H\vec{q}= \vec v$ and $-\Lie_H \vec{p}=\vec F$, due to property~\eqref{item:time-GMB-polynomials} of the generalized bracket. Hence, \begin{equation} \begin{split} \dot{\vec{Q}}(t) &= -\Lie_H \e^{-t \Lie_H}\vec{q} = \e^{-t \Lie_H}(-\Lie_H \vec{q}) \\ &= \e^{-t \Lie_H}\vec v = \vec{v}(t), \end{split} \end{equation} and using analogous steps, one finds $\dot{\vec{P}}(t)= \e^{-t \Lie_H}\vec{F} = \vec{F}(t)$. Taking the mean value of these relations yields Eq.~\eqref{eq:ehrenfest}. Despite this result, the evolving observables $\vec{Q}(t;\vec{q},\vec{p})$ and $\vec{P}(t;\vec{q},\vec{p})$ cannot in general be understood as phase-space trajectories \cite{SteuernagelKakofengitisRitter-wignerTrajectories,OlivaKakofengitisSteuernagel-wignerTrajectories}. In particular, Liouville's theorem, $\frac{d}{dt}\rho(t;\vec{Q}(t;\vec{q},\vec{p}), \vec{P}(t;\vec{q},\vec{p}))= 0$, fails already for the Moyal bracket. Similarly, for an observable $A$, in general we have $A(t;\vec{q},\vec{p})\ne A(\vec{Q}(t;\vec{q},\vec{p}),\vec{P}(t;\vec{q},\vec{p}))$. An important practical tool in quantum theory is the interaction picture. 
This is applicable to Hamiltonians of the form $H(t)=H_0+H_1(t)$, where $H_0$ is the free Hamiltonian and $H_1(t)$ is an interaction term which may depend explicitly on time. In the interaction picture, observables evolve according to the free Hamiltonian $H_0$, while the time-evolution of the state also involves the interaction $H_1(t)$. Accordingly, the state and any observable $A$ in the interaction picture are defined as \begin{equation} \rho_\mathrm{int}(t)= \e^{-t \Lie_{H_0}}\rho(t) \quad \text{and}\quad A_\mathrm{int}(t)= \e^{-t \Lie_{H_0}}A, \end{equation} respectively, where $\rho(t)$ and $A$ are understood in the Schrödinger picture with respect to the full Hamiltonian $H(t)$, that is, $\dot{\rho}(t) = \mb{H(t),\rho(t)}$. Consequently, in the interaction picture we have $\braket{\tilde{A}(t)} = \braket{A_\mathrm{int}(t),\rho_\mathrm{int}(t)}$ and the dynamical equation \begin{equation} \label{eq:nonstationary-timeEvolution} \begin{split} \dot\rho_\mathrm{int}(t) &= -\Lie_{H_0} \e^{-t \Lie_{H_0}} \rho(t) + \e^{-t \Lie_{H_0}} \Lie_{H(t)} \rho(t) \\ &= \left( -\Lie_{H_0} + \e^{-t \Lie_{H_0}} \Lie_{H(t)} \e^{t \Lie_{H_0}} \right) \rho_\mathrm{int}(t) \\ &= \e^{-t \Lie_{H_0}} \Lie_{H_1(t)} \e^{t \Lie_{H_0}} \rho_\mathrm{int}(t). \end{split} \end{equation} Interestingly, if the generalized bracket satisfies the Jacobi identity, then Eq.~\eqref{eq:nonstationary-timeEvolution} simplifies to $\dot{\rho}_\mathrm{int}(t) = \mb{H_{1,\mathrm{int}}(t),\rho_\mathrm{int}(t)}$ and thus the time-evolution of $\rho_\mathrm{int}$ is generated by the interaction term in the interaction picture, $H_{1,\mathrm{int}}(t) = \e^{-t\Lie_{H_0}}H_1(t)$. Although this can be a significantly more convenient expression, in our view it does not constitute a sufficient argument to conclude that the generalized bracket has to satisfy the Jacobi identity. \section{The hydrogen atom} We now turn to a concrete physical system, the nonrelativistic hydrogen atom with the Hamiltonian \begin{equation} H = \dfrac{\abs{\vec{p}}^2}{2 \mu} - \dfrac{\kappa}{\abs{\vec{q}}}. \end{equation} Here, $\mu$ is the reduced mass of the electron and $\kappa = \frac{e^2}{4\pi\epsilon_0}$ with $e$ the elementary charge. In this notation, the Bohr radius is given by $a_0 = \frac{\hbar^2}{\kappa\mu}$. Let us first consider the conserved quantities. In classical and quantum theory, besides energy, also the angular momentum $\vec L=\vec q\times \vec p$ and the Runge--Lenz vector $\vec{A} = \vec{p} \times \vec{L} - \mu \kappa \vec{q}\,\abs{\vec{q}}^{-1}$ are constant in time. This turns out to be the case also for the generalized bracket, as can be verified in a straightforward calculation. \begin{figure} \caption{The functions $T^H_n(x)$ used for the spectral measure of the energy of the hydrogen atom in Eq.~\eqref{eq:hydrogen-PSSM}. The functions are plotted for $n = 1,2, \ldots, 8$ and $x$ is in units of $\frac{\kappa}{2 a_0}$.} \label{fig:hydrogen-TH} \end{figure} Quantum theory can explain the stability of the hydrogen atom as well as its spectrum, in contrast to classical theory. But these two properties are by no means special to quantum theory per se. In the following we construct a phase space spectral measure $g_H(I)$ for the energy observable that is different from the one of quantum theory, but has similar features. 
We first fix the energy spectrum to the one known from nonrelativistic quantum theory, \begin{equation} \label{eq:hydrogen-En} E_n = - \dfrac{\kappa}{2 a_0} \dfrac{1}{n^2} \end{equation} where $n = 1,2,\ldots$ is the principal quantum number. For any set of negative energies $I$ we then define the phase space spectral measure as \begin{equation} \label{eq:hydrogen-PSSM} g_H(I; \vec{q}, \vec{p}) = \sum_{n\colon E_n\in I} T^H_n(H(\vec{q},\vec{p})) \end{equation} where $T^H_n$ are the functions depicted in Figure~\ref{fig:hydrogen-TH}, see Appendix~\ref{appendix:hydrogen} for details. This measure is uniquely specified by the following conditions: \begin{enumerate}[(i)] \item $g_H(I; \vec{q}, \vec{p})$ is a piecewise linear function of $H(\vec{q}, \vec{p})$. \item $g_H(E_n; \vec{q}, \vec{p}) \neq 0$ only if $H(\vec{q}, \vec{p}) \in [E_{n-1}, E_{n+1}]$. \end{enumerate} The second condition means that only the region of phase-space ``close'' to $H(\vec{q}, \vec{p}) = E_n$ contributes to the probability of observing the particle in the $n$\textsuperscript{th} energy level. This is a straightforward and natural choice. \begin{figure}\label{fig:hydrogen-gHE} \end{figure} It follows from the definition of $T^H_2$ that $g_H(E_2; \vec{q}, \vec{p}) < 0$ everywhere in phase-space where $H(\vec{q}, \vec{p}) < E_1$. From this we can conclude that the hydrogen atom must be stable in the sense that no state can be supported solely where $H(\vec{q}, \vec{p}) \le E_1-\epsilon$ with $\epsilon>0$, see also Figure~\ref{fig:hydrogen-gHE}. Indeed, if $\supp(\rho)$ were the support of such a state, then the continuity of $g_H(E_2;\vec q,\vec p)$ implies that $M = \sup\set{H(\vec q,\vec p) : (\vec q,\vec p) \in \supp(\rho) } < 0$ and hence \begin{equation} \begin{split} \braket{g_H(E_2),\rho} &= \int_{\supp(\rho)} g_H(E_2;\vec q,\vec p)\rho(\vec q,\vec p) \ddd3q \ddd3p \\ &\le M \int_{\supp(\rho)} \rho(\vec q,\vec p) \ddd{3}q \ddd{3}p = M < 0, \end{split} \end{equation} which is a contradiction with our requirement that all probabilities must be nonnegative. The measure $g_H(I; \vec{q}, \vec{p})$ is not stationary, $\Lie_H g_H(I) \neq 0$, because of discontinuities in the derivatives of $g_H(I)$. But we have $\{H, g_H(I)\} = 0$, so the measure is stationary if we approximate the time-evolution by the Poisson bracket, up to order $\hbar^2$. In a similar way one can construct the observable of angular momentum. For the $i$\textsuperscript{th} component of $\vec{L}$ we let \begin{equation} g_{L_i}(I; \vec{q}, \vec{p}) = \sum_{m \colon m\hbar \in I} T^{L_i}_m(L_i(\vec{q}, \vec{p})) \end{equation} where $T^{L_i}_m$ are the functions depicted in Figure~\ref{fig:hydrogen-TLz} and defined in Appendix~\ref{appendix:hydrogen}. \begin{figure} \caption{The functions $T_m^{L_i}$ used in the definition of the phase space spectral measure of the angular momentum. The functions are plotted for $m = -3, \ldots, 3$ and $x$ is in units of $\hbar$.} \label{fig:hydrogen-TLz} \end{figure} For possible states of the hydrogen atom we consider the Gaussian distribution \begin{equation} \label{eq:hydrogen-state-Gaussian} \rho_G(\vec{q},\vec{p})= \dfrac{1}{(2\pi)^3 \sigma_q^3 \sigma_p^3} \e^{-\frac{\abs{\vec{q}}^2}{2\sigma_q^2}-\frac{\abs{\vec{p}}^2}{2\sigma_p^2}}. \end{equation} Not all choices of $\sigma_p$ and $\sigma_q$ give a valid state, since $\braket{g_H(E_2),\rho_G}$ can be negative, in particular if $\sigma_q$ is too small. 
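The following Monte Carlo sketch illustrates how such a positivity check can be set up numerically. It is our own illustration: the exact tent functions $T^H_n$ are defined in Appendix~\ref{appendix:hydrogen} and are not reproduced here, so the sketch substitutes the simplest piecewise-linear choice compatible with conditions (i) and (ii) above (hat functions on the nodes $E_n$, extended linearly, and hence negatively, below $E_1$); the resulting numbers are therefore only indicative.
\begin{verbatim}
# Monte Carlo sketch (our own illustration) of the check <g_H(E_2), rho_G> >= 0
# for the Gaussian family rho_G.  T2 below is a hypothetical stand-in for the
# tent function T^H_2 of the appendix: piecewise linear, peaked at E_2,
# supported on [E_1, E_3] and extended linearly (hence negative) below E_1.
import numpy as np

rng = np.random.default_rng(1)
E = lambda n: -1.0 / n**2            # energies in units of kappa / (2 a_0)

def T2(x):
    rising = (x - E(1)) / (E(2) - E(1))                # also the extension below E_1
    falling = np.clip((E(3) - x) / (E(3) - E(2)), 0.0, None)
    return np.where(x <= E(2), rising, falling)

def pairing(sigma_q, sigma_p, samples=500_000):
    # <g_H(E_2), rho_G> with q in units of a_0 and p in units of hbar / a_0
    qv = rng.normal(scale=sigma_q, size=(samples, 3))
    pv = rng.normal(scale=sigma_p, size=(samples, 3))
    H = np.sum(pv**2, axis=1) - 2.0 / np.linalg.norm(qv, axis=1)
    return T2(H).mean()

for sq in (0.5, 1.0, 1.6, 2.5):      # a sign change marks the boundary of the
    print(sq, pairing(sq, 0.1))      # region of valid Gaussian states
\end{verbatim}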
One can compute the value of $\braket{g_H(E_2),\rho_G}$ numerically for different pairs of $\sigma_q$ and $\sigma_p$ to find a region where $\rho_G$ gives positive probabilities, see Figure~\ref{fig:hydrogen-state-UR}. Note that we verify the positivity condition only at time $t = 0$. In principle we should verify whether $\braket{g_H(E_n),\rho(t)}$ is nonnegative for all later times. For the time-evolution generated by the Poisson bracket no violation can occur, because the Leibniz rule renders the spectral measure constant in time. It follows that any violation $\braket{g_H(E_n),\rho(t)}<0$ at $t>0$ must be at least of order $\hbar^2$. The same consideration also holds for the positivity of the marginals $\rho_q$ and $\rho_p$. \begin{figure} \caption{Value of $\braket{g_H(E_2),\rho_G}$ for the family of states $\rho_G$ given by Eq.~\eqref{eq:hydrogen-state-Gaussian}. For a valid state this value has to be nonnegative. The positivity of $\braket{g_H(E_2),\rho_G}$ is significantly influenced by $\sigma_q$; therefore, sufficient position uncertainty is necessary. $\sigma_q$ is in units of $a_0$ and $\sigma_p$ is in units of $\hbar/a_0$. The numerical uncertainty of the calculation is equal to the width of the white line separating the positive and negative region.} \label{fig:hydrogen-state-UR} \end{figure} For $\sigma_p \to 0$ and $\sigma_q=\sigma_{\gnd} \approx 1.59577048804 \, a_0$ the state $\rho_G$ becomes \begin{equation} \label{eq:hydrogen-state-deltaP} \rho_{\gnd} (\vec{q},\vec{p})= \dfrac{1}{(2\pi)^{\frac{3}{2}} \sigma_{\gnd}^3} \e^{-\frac{\abs{\vec{q}}^2}{2\sigma_{\gnd}^2}} \delta^{(3)}(\vec{p}). \end{equation} This state is a good approximation of a ground state, having $\braket{g_H(E_2),\rho_{\gnd}}=0$ and $\braket{\tilde H}\approx (1-10^{-6})E_1$, and subsequently we use it as if it were a proper ground state. We mention that $\rho_\gnd$ is clearly incompatible with quantum theory because the preparation uncertainty relation of position and momentum is violated. For the ground state one obtains the most probable distance from the center at $t=0$ as $\sqrt{2} \sigma_{\gnd} \approx 2.26\, a_0$. But the state is not stationary since already $\{H,\rho_{\gnd}\}\ne 0$ and, for example, the most probable distance changes with time. However, the energy distribution $\Pr[\tilde{H} \in I]$ is constant in time since the state is an eigenstate, see Appendix~\ref{appendix:propertiesGMB}, Proposition~\ref{prop:propertiesGMB-max2Energy}. \section{External magnetic field} We briefly treat the influence of an external magnetic field on our model of the hydrogen atom. For simplicity we assume that the electron is spinless and so the external magnetic field only interacts with the angular momentum. Hence, for a homogeneous magnetic field in the $z$-direction, the new Hamiltonian is \begin{equation} H_B(\vec{q}, \vec{p}) = H(\vec{q}, \vec{p}) + \frac{\mu_B}{\hbar} B L_3(\vec{q}, \vec{p}), \end{equation} where $\mu_B$ denotes the Bohr magneton. We construct the phase space spectral measure for $H_B(\vec{q}, \vec{p})$ using a product of functions. In quantum theory we would get a similar result using the Moyal product \cite{Moyal-WignerFunctions,Fairlie-starProduct,Zachos-starProduct}, but for our purposes, the ordinary point-wise product is sufficient. We get \begin{equation}\label{eq:magnetic-pssmhb} g_{H_B} (I) = \sum_{n,m\colon E_{n,m}\in I} T_n^H(H) T_m^{L_3}(L_3) \end{equation} with $E_{n,m}=E_n+\mu_B Bm$ and $m$ the magnetic quantum number. 
Thus we get a splitting of the energy levels due to the external magnetic field. At first sight it may seem that there is no restriction on the magnetic quantum number $m$, quite in contrast to quantum theory, where $\abs{m}<n$ holds. However, certain combinations of quantum numbers cannot occur according to Eq.~\eqref{eq:magnetic-pssmhb}, simply because $T_n^H(H)$ and $T_m^{L_3}(L_3)$ have disjoint support in phase space. Using this we find that $\Pr[\, \tilde H_B=E_{n,m}\,] > 0$ only if $\abs{m} \leq 2(n+1)$, see Appendix~\ref{appendix:hydrogen} for a proof. \section{Non-stationary external electric field} We now investigate the case when the atom is perturbed by a non-stationary external electric field, in particular, by an external electromagnetic wave. We take into account only the electric field corresponding to the electromagnetic wave, and we assume that the wavelength of the electromagnetic wave is significantly larger than the size of the atom. Thus the new Hamiltonian is \begin{equation} H_E(\vec{q}, \vec{p}, t) = H(\vec{q}, \vec{p}) + H_e(\vec{q}, \vec{p}, t) \end{equation} where \begin{equation} H_e(\vec{q}, \vec{p}, t) = - 2 e E \sin (\omega t) q_3, \end{equation} assuming that the electric field is oriented in the $z$-direction. We proceed in the interaction picture as discussed above. Expanding the exponentials in Eq.~\eqref{eq:nonstationary-timeEvolution} we get \begin{equation} \dfrac{d}{dt} \rho_\mathrm{int} = \Lie_{H_e} \rho_\mathrm{int} + t ( \Lie_{H_e} \Lie_{H} - \Lie_{H} \Lie_{H_e}) \rho_\mathrm{int} + \dotsm, \end{equation} where we omitted terms which are of second order in $t \Lie_H$ and higher. One finds $\Lie_{H_e} \Lie_{H} - \Lie_{H} \Lie_{H_e} = \Lie_{G}$ with \begin{equation} G = - \dfrac{2eE}{\mu} \sin(\omega t) p_3, \end{equation} see Appendix~\ref{appendix:electric}. Thus we get the approximation \begin{equation} \dfrac{d}{dt} \rho_\mathrm{int} \approx \Lie_{(H_e + t G)} \rho_\mathrm{int}. \label{eq:nonstationary-finalTimeEvolution} \end{equation} Since the effective Hamiltonian $H_{\eff} = H_e + t G$ is only a linear function of position and momentum, the generalized bracket reduces to the Poisson bracket. Then $\rho(t;\vec q,\vec p)=\rho(0;\vec q+\vec s(t),\vec p+\vec u(t))$ is a solution of Eq.~\eqref{eq:nonstationary-finalTimeEvolution} for \begin{equation} \begin{split} \vec s(t)&=\int_0^t \{ H_{\eff}(\tau),\vec q\}\dd\tau\\& =\dfrac{2 e E}{\mu \omega^2} ( \sin(\omega t) - \omega t \cos(\omega t) )\vec e_3,\\ \vec u(t)&=\int_0^t \{H_{\eff}(\tau),\vec p\}\dd\tau=\dfrac{2eE}{\omega}(\cos(\omega t)- 1)\vec e_3. \end{split} \end{equation} It is now straightforward to compute $\Pr[\tilde{H}(t) = E_n]$ numerically. In this computation the phase space spectral measure in the interaction picture depends on time; however, this introduces only corrections of the order $\hbar^2$, which we neglect. The results of our computations are plotted in Figure~\ref{fig:nonstatioanry-compareQT} for $\omega = (E_2 - E_1)/\hbar$ and $\rho_{\gnd}$ as initial state. Importantly, we see that there is a non-zero probability of exciting the atom to the second energy level. \begin{figure}\label{fig:nonstatioanry-compareQT} \end{figure} For comparison, we also compute the transition probability $\Pr_\QT[\tilde{H}(t) = E_n]$ in quantum theory. We use the same approximation and assume as initial state $\rho$ the ground state of the hydrogen atom with the wave function $\psi_{100}$. 
The Schrödinger equation with Hamiltonian $\hat H_{\eff}$ yields \begin{equation} \psi_{100}(t;\vec x)= \e^{i\phi(t)}\e^{-\frac i\hbar \vec x\cdot \vec u(t)} \psi_{100}(0;\vec x+\vec s(t)), \end{equation} where $\phi(t)$ is an irrelevant phase. The results are plotted in Figure~\ref{fig:nonstatioanry-compareQT}. We see that also in quantum theory there is a nonzero probability of exciting the atom in our approximation, and that $\Pr[\tilde{H}(t) = E_1] \approx \Pr_\QT[\tilde{H}(t) = E_1]$ up to the numerical precision of our calculations. We finally mention that this excitation is of a different origin than Rabi oscillations in quantum optics. This is because we consider here the limit of short times, while in quantum optics one usually applies the rotating-wave approximation, which disregards fast oscillations. \section{Scattering theory} In both classical and quantum theory the scattering of charged particles on a Coulomb potential leads to the same differential cross section given by the Rutherford formula \cite{Rutherford-scattering} \begin{equation} \dfrac{d \sigma}{d \Omega} = \dfrac{\kappa^2 \mu^2}{4 p_0^4} \dfrac{1}{\sin^4(\vartheta / 2)} \end{equation} where $p_0$ is the momentum of the incoming particles and $\vartheta$ is the scattering angle. In this section we show that this result holds in all operational theories of the hydrogen atom where the time-evolution is described by the generalized Moyal bracket \eqref{eq:time-generalizedMoyal}. Moreover this result is independent of the phase space spectral measure of the energy observable, since we only use the Hamiltonian as the generator of the time-evolution. The Wigner function representation has been used before to investigate scattering \cite{Remler-WignerFunctionsScattering,CarruthersZachariasen-WignerFunctionsScattering,KarlovetsSerbo-WignerFunctionsScattering}, and our approach is in particular based on Ref.~\cite{CarruthersZachariasen-WignerFunctionsScattering}. We assume that in the asymptotic past, $t\to -\infty$, the scattering particles have a uniform spatial density $\nu$ and a fixed momentum $\vec p_0$. Dropping the normalization condition of the state, we write \begin{equation} \rho_{\ini}(\vec{q}, \vec{p})= \lim_{t\to-\infty}\rho(t;\vec q,\vec p)= \nu\delta^{(3)} (\vec{p} - \vec{p}_0). \end{equation} Note that this is the same initial condition as one uses in quantum scattering theory. The particle density at later times is given by \begin{equation} \label{eq:scattering-density} D(t;\vec{q}) = \int_{\RR^3} \rho(t; \vec{q}, \vec{p}) \ddd{3} p, \end{equation} and for computing the cross-section we aim to obtain this density in the far field for the asymptotic future, that is, for $t\to +\infty$ and $\abs{\vec q}\to \infty$. Using a Green's function approach \cite{CarruthersZachariasen-WignerFunctionsScattering}, the formal solution of the dynamical equations in Eq.~\eqref{eq:time-timeEvolution} is given by \begin{widetext} \begin{equation} \label{eq:scattering-solution} \rho(t; \vec{q}, \vec{p}) = \rho_{\ini}(t; \vec{q}, \vec{p}) + \int\limits_{\RR^3} \int\limits_{-\infty}^t K(\vec{q} - \dfrac{\vec{p}}{\mu}(t - \tau), \vec{p}, \vec{p'}) \rho(\tau; \vec{q} - \dfrac{\vec{p}}{\mu}(t - \tau), \vec{p'}) \dd \tau \ddd{3} p' \end{equation} \end{widetext} where \begin{equation} \label{eq:scattering-K} K(\vec{q}, \vec{p}, \vec{p'}) = \mb{ V(\vec{q}), \delta^{(3)}_p(\vec{p} - \vec{p'})}, \end{equation} see Appendix~\ref{appendix:scattering} for the full derivation. 
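As a quick illustration of how the integral equation~\eqref{eq:scattering-solution} is iterated (this step is our own rewriting and is not taken from the appendix), note that free streaming leaves $\rho_{\ini}$ unchanged because it does not depend on position; inserting $\rho_{\ini}$ once on the right-hand side and carrying out the $\vec{p'}$ integral over the momentum delta function gives the iterate
\begin{equation}
\rho^{(1)}(t; \vec{q}, \vec{p}) = \rho_{\ini}(\vec{q}, \vec{p}) + \nu \int_{-\infty}^t K(\vec{q} - \dfrac{\vec{p}}{\mu}(t - \tau), \vec{p}, \vec{p}_0) \dd \tau ,
\end{equation}
which is of first order in the potential and, through the generalized bracket in $K$, contains contributions of all orders in $\hbar$.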
The solution can be found in a perturbative manner via $V(\vec{q}) \mapsto \lambda V(\vec{q})$ and the expansion \begin{equation} \label{eq:scattering-rhoSeries} \rho(t; \vec{q}, \vec{p}) = \sum_{n=0}^\infty\sum_{k=0}^\infty \hbar^{2n} \lambda^k \rho_{n,k}(t; \vec{q}, \vec{p}), \end{equation} by comparing coefficients in $\hbar$ and $\lambda$ in Eq.~\eqref{eq:scattering-K}. Using these techniques, we show in Appendix~\ref{appendix:scattering} that \begin{equation} D_{n,k}(t;\vec{q}) = \int_{\RR^3} \rho_{n,k}(t;\vec q,\vec p)\ddd{3}p = \dfrac{f_{n,k}(t;\vartheta)}{\abs{\vec{q}}^{2n+k}}, \end{equation} for some functions $f_{n,k}(t;\vartheta)$. The only terms that can contribute to the far-field differential cross section are those with $n = 0$ and $k = 0, 1, 2$, and with $n = 1$ and $k = 0$. The terms with $n = 0$ are the classical terms, which are obtained by replacing the generalized bracket in Eq.~\eqref{eq:scattering-K} by the Poisson bracket, and hence they give us the same prediction for the differential cross section as classical theory. We never get terms of the order $\hbar^{2} \lambda^0$ on the right-hand side of Eq.~\eqref{eq:scattering-solution} because the terms of order $\lambda^0$ are the terms that do not include the potential but only the initial state $\rho_{\ini}$. Thus the differential cross section must be given by Rutherford's formula in all operational theories where the time-evolution is given by the generalized Moyal bracket. \section{Conclusions} We constructed a toy model for the hydrogen atom, which does not fall into the formalism of either quantum theory or classical theory and as such clearly does not satisfy the constraints from modern experimental data. But, as we showed here, this model is internally consistent, and conceptually it is not obvious why the toy model is incorrect, other than that it is not a quantum model. Moreover the toy model is in accordance with experimental predictions and theoretical paradigms of early quantum theory: The collapse of the atom is prevented due to the uncertainty principle, the model features a discrete energy spectrum in accordance with experimental observations, the angular momentum is quantized as predicted by Bohr, perturbations by a nonstationary electric field lead to excitations, and Rutherford's formula for the scattering cross-section holds. Moreover there is a meaningful classical limit: $\hbar \to 0$ recovers the classical dynamics given by the Poisson bracket, and localized particles distant enough from the center of the potential are possible within our toy model. After having established a formal background for dynamical theories on phase space, a conceptually rather straightforward construction already gives our toy model. The model does not even remotely resemble quantum theory, but still makes precise, measurable predictions. Besides the fact that this shows how little is known about the space of theories in which quantum theory resides as a special case, we also found evidence that quantum theory is a strikingly simple theory with curious mathematical coincidences. For example, since the Moyal bracket satisfies the Jacobi identity, the interaction picture is especially simple to handle. We found that although conservation of energy always holds on average, the distribution of the energy can change over time. 
In quantum theory and classical theory the energy distribution is constant roughly because in both theories $H$ and $H^2$ ``commute.'' Generally, while it clearly is possible to identify experiments that invalidate our toy model in favor of quantum theory, it is an upcoming theoretical challenge to find operational postulates that are obeyed by quantum theory but violated in the toy model. While such postulates are known for finite-dimensional systems, it is not straightforward to generalize them to continuous variable systems. \begin{acknowledgments} We acknowledge support from the Deutsche For\-schungs\-gemeinschaft (DFG, German Research Foundation, project numbers 447948357 and 440958198), the Sino-German Center for Research Promotion (Project M-0294), the ERC (Consolidator Grant 683107/TempoQ), and the German Ministry of Education and Research (Project QuKuK, BMBF Grant No. 16KIS1618K). M.P. is thankful for the financial support from the Alexander von Humboldt Foundation. The OMNI cluster of the University of Siegen was used for the computations. \end{acknowledgments} \onecolumngrid \appendix \section{Experimental accessibility of the coefficients of the generalized bracket} \label{appendix:timedep} The generalized bracket $\mb{\cdot,\cdot}$ is defined in terms of dimensionless coefficients $a_n$, see Eq.~\eqref{eq:time-generalizedMoyal}. In quantum theory these coefficients take specific values, for example, $a_1=-\frac1{24}$. One may take the perspective that each of these coefficients is a constant of nature and therefore must be verified in an experiment. However, the coefficients are not accessible by measuring the spectrum of an observable, because this spectrum can be chosen freely, as we have seen for the hydrogen atom. But the coefficients do become experimentally accessible in a time-dependent potential. In order to obtain $a_1$, a possible way is to determine the variance of the momentum for the time-dependent Hamiltonian \begin{equation} \label{eq:timedep-anharmonic} H(t)=\dfrac{p^2}{2m} + \dfrac{m\omega^2}2 q^2 +\lambda(t)\dfrac{m^2\omega^3}{2\hbar} q^4. \end{equation} Importantly, this Hamiltonian does not ``commute'' with itself at different times, that is, $\mb{H(t),H(t')}\ne 0$. Hence the formal solution of the dynamical equation for an observable $A$ is not simply given by $A(t+t_0)=\exp(-\Lie_Ht)A(t_0)$ but rather by the time-ordered exponential, which can be approximated as \begin{equation} A(t+t_0) \approx (\mathrm{id}-\tfrac tn\Lie_{H(t_{n-1})}) (\mathrm{id}-\tfrac tn\Lie_{H(t_{n-2})}) \dotsm (\mathrm{id}-\tfrac tn\Lie_{H(t_{0})})A(t_0), \end{equation} with $t_k=\frac knt+t_0$ and $n \in \NN$. For $A(t_0)=p^2$, the first dependence on $a_1$ occurs at the fifth step, with \begin{equation} (\mathrm{id}-\tfrac tn\Lie_{H(t_{4})}) \dotsm (\mathrm{id}-\tfrac tn\Lie_{H(t_{0})})p^2 = K +\frac{1728}{n^4} m^2\omega^6 \, a_1 \lambda(t_1)\lambda(t_4) q^2 t^4, \end{equation} assuming that $\lambda(t_0)=0$ and where $K$ is a term independent of $a_1$, that is, corresponding to the case $a_1 = 0$. Hence it is possible to determine $a_1$ by measuring $p^2$. \begin{figure} \caption{Time-evolved state for a time-dependent anharmonic potential in Eq.~\eqref{eq:timedep-anharmonic}. The left panel is for the classical time evolution using the Poisson bracket, and the right panel is for the quantum time evolution using the Moyal bracket. 
In either case, the initial state is the Wigner function of the ground state of the quantum harmonic oscillator and the time-dependence $\lambda(t)$ of the anharmonic term is chosen in a wedge-like form. The left axis is $p$ in units of $\sqrt{m\omega\hbar}$ and the right axis is $q$ in units of $\sqrt{\hbar/m\omega}$. The height is in units of $1/\hbar$.} \label{fig:time-state} \end{figure} For a quantitative evaluation, we assume that $\lambda(t)$ has the wedge-like shape $\Lambda(\tau)=\max(0,1-\abs{1-2 \tau})$; more specifically, we choose $\lambda(t)=\frac13\Lambda(\frac{4\omega}\pi t)$. We assume that the initial state is the ground state of the quantum harmonic oscillator, $\rho_0 =\frac1{\pi\hbar}\exp(-H(0)/\frac{\hbar\omega}2)$. Due to its Gaussian shape, it is numerically more stable to solve the time-evolution for the state than for $p^2$. We use a numerical solver for the corresponding partial differential equation and obtain the time-evolved state at $t=\frac\pi{4\omega}$ as displayed in Figure~\ref{fig:time-state}, both for the classical ($a_1=0$) and the quantum ($a_1=-\frac1{24}$) case. Since the mean value of the momentum vanishes for symmetry reasons, $\braket{\tilde p}=0$, and since we have $\tilde p^2=\widetilde{p^2}$, the variance of the momentum is simply given by $(\Delta\tilde p)^2=\braket{\widetilde{p^2}}$. Evaluating the corresponding phase-space integral on our numerical evolution, we find \begin{equation} \frac{(\Delta\tilde p)^2}{\hbar\omega m} \approx 0.6795+0.0823\, a_1 \end{equation} for the variance at $t=\frac\pi{4\omega}$. \section{Properties of the generalized Moyal bracket} \label{appendix:propertiesGMB} In this section we prove several properties of the generalized Moyal bracket, defined as \begin{equation} \mb{f,g} = \sum_{n=0}^\infty a_n \hbar^{2n} f D_\omega^{2n+1} g, \end{equation} where $a_0=1$ and \begin{equation} D_\omega = \sum_{i=1}^N \dfrac{\overleftarrow{\partial}}{\partial q_i} \dfrac{\overrightarrow{\partial}}{\partial p_i} - \dfrac{\overleftarrow{\partial}}{\partial p_i} \dfrac{\overrightarrow{\partial}}{\partial q_i}. \end{equation} \begin{proposition} The generalized Moyal bracket is linear in each argument, i.e., we have \begin{align} \mb{\alpha_1 f_1 + \alpha_2 f_2, g} & = \alpha_1 \mb{f_1, g} + \alpha_2 \mb{f_2, g}, \\ \mb{f, \beta_1 g_1 + \beta_2 g_2} & = \beta_1 \mb{f, g_1} + \beta_2 \mb{f, g_2}. \end{align} \end{proposition} \begin{proof} The proof follows by induction; the case $n=1$ is immediate from the linearity of the partial derivatives. Assume that \begin{equation} (f_1 + \alpha f_2) D_\omega^n g = f_1 D_\omega^n g + \alpha f_2 D_\omega^n g, \end{equation} then \begin{align} (f_1 + \alpha f_2) D_\omega^{n+1} g & = \sum_{i=1}^N \left( \dfrac{\partial (f_1 + \alpha f_2)}{\partial q_i} D_\omega^{n} \dfrac{\partial g}{\partial p_i} - \dfrac{\partial (f_1 + \alpha f_2)}{\partial p_i} D_\omega^{n} \dfrac{\partial g}{\partial q_i} \right) \\ &= \sum_{i=1}^N \left( \dfrac{\partial f_1}{\partial q_i} D_\omega^{n} \dfrac{\partial g}{\partial p_i} - \dfrac{\partial f_1}{\partial p_i} D_\omega^{n} \dfrac{\partial g}{\partial q_i} \right) + \alpha \sum_{i=1}^N \left( \dfrac{\partial f_2}{\partial q_i} D_\omega^{n} \dfrac{\partial g}{\partial p_i} - \dfrac{\partial f_2}{\partial p_i} D_\omega^{n} \dfrac{\partial g}{\partial q_i} \right) \\ &= f_1 D_\omega^{n+1} g + \alpha f_2 D_\omega^{n+1} g. \end{align} Linearity of the generalized Moyal bracket in the first argument then follows from its definition; the proof of linearity in the second argument is analogous.
\end{proof} In order to show that $\mb{\cdot,\cdot}$ is anti-symmetric we will need the following result. \begin{proposition} \label{prop:propertiesGMB-symAntisym} Let $k \in \{0, 1, 2, \ldots \}$, then \begin{align} f D_\omega^{2k} g & = g D_\omega^{2k} f, \\ f D_\omega^{2k+1} g & = - g D_\omega^{2k+1} f. \end{align} \end{proposition} \begin{proof} The proof follows by induction: we will prove that if $f D_\omega^{2k} g = g D_\omega^{2k} f$ then $f D_\omega^{2k+1} g = -g D_\omega^{2k+1} f$ and that if $f D_\omega^{2k-1} g = -g D_\omega^{2k-1} f$, then $f D_\omega^{2k} g = g D_\omega^{2k} f$. The result then follows from $f D_\omega g = -g D_\omega f$ by alternating between the induction steps. So assume that $f D_\omega^{2k} g = g D_\omega^{2k} f$, then we have \begin{align} f D_\omega^{2k+1} g & = \sum_{i=1}^N \left( \dfrac{\partial f}{\partial q_i} D_\omega^{2k} \dfrac{\partial g}{\partial p_i} - \dfrac{\partial f}{\partial p_i} D_\omega^{2k} \dfrac{\partial g}{\partial q_i} \right) \\ &= \sum_{i=1}^N \left( \dfrac{\partial g}{\partial p_i} D_\omega^{2k} \dfrac{\partial f}{\partial q_i} - \dfrac{\partial g}{\partial q_i} D_\omega^{2k} \dfrac{\partial f}{\partial p_i} \right) \\ &= - g D_\omega^{2k+1} f. \end{align} Assuming that $f D_\omega^{2k-1} g = -g D_\omega^{2k-1} f$, we get \begin{align} f D_\omega^{2k} g & = \sum_{i=1}^N \left( \dfrac{\partial f}{\partial q_i} D_\omega^{2k-1} \dfrac{\partial g}{\partial p_i} - \dfrac{\partial f}{\partial p_i} D_\omega^{2k-1} \dfrac{\partial g}{\partial q_i} \right) \\ &= \sum_{i=1}^N \left( - \dfrac{\partial g}{\partial p_i} D_\omega^{2k-1} \dfrac{\partial f}{\partial q_i} + \dfrac{\partial g}{\partial q_i} D_\omega^{2k-1} \dfrac{\partial f}{\partial p_i} \right) \\ &= g D_\omega^{2k} f. \end{align} \end{proof} \begin{corollary} $\mb{f,g} = -\mb{g,f}$ \end{corollary} \begin{proof} The result follows from Proposition~\ref{prop:propertiesGMB-symAntisym} since $\mb{\cdot,\cdot}$ contains only odd powers of $D_\omega$. \end{proof} \begin{proposition} Let $P_2$ be a polynomial of at most second order, then $\mb{f,P_2} = \{f, P_2\}$. \end{proposition} \begin{proof} Any polynomial of at most second order can be written in the form \begin{equation} P_2(\vec{q},\vec{p}) = \sum_{i,j=1}^N ( J_{ij} q_i q_j + K_{ij} p_i p_j + L_{ij} q_i p_j) + \sum_{i=1}^N (a_i q_i + b_i p_i) + c. \end{equation} Clearly any third derivative of $P_2$ is zero, hence we have \begin{equation} f D_\omega^{2k+1} P_2 = 0 \end{equation} for $k \geq 1$. It thus follows that $\mb{f,P_2} = \{f, P_2\}$. \end{proof} \begin{proposition} Let $f,g,h$ be functions on the phase space such that their product and products of their derivatives vanish at infinity. Then \begin{equation} \label{eq:propertiesGMB-DomegaPerPartes} \int_{\RR^{2N}} f \left( g D^k_\omega h \right) \ddd{N} q \ddd{N} p = \int_{\RR^{2N}} \left( f D^k_\omega g \right) h \ddd{N} q \ddd{N} p \end{equation} \end{proposition} \begin{proof} The result follows by induction.
We have: \begin{align} \int_{\RR^{2N}} f (g D^{k+1}_\omega h) \ddd{N} q \ddd{N} p & = \sum_{i=1}^N \left( \int_{\RR^{2N}} f \left( \dfrac{\partial g}{\partial q_i} D^k_\omega \dfrac{\partial h}{\partial p_i} \right) \ddd{N} q \ddd{N} p - \int_{\RR^{2N}} f \left( \dfrac{\partial g}{\partial p_i} D^k_\omega \dfrac{\partial h}{\partial q_i} \right) \ddd{N} q \ddd{N} p \right) \\ &= - \sum_{i=1}^N \left( \int_{\RR^{2N}} \dfrac{\partial f}{\partial p_i} \left( \dfrac{\partial g}{\partial q_i} D^k_\omega h \right) \ddd{N} q \ddd{N} p - \int_{\RR^{2N}} \dfrac{\partial f}{\partial q_i} \left( \dfrac{\partial g}{\partial p_i} D^k_\omega h \right) \ddd{N} q \ddd{N} p \right) \\ &= - \sum_{i=1}^N \left( \int_{\RR^{2N}} \left( \dfrac{\partial f}{\partial p_i} D^k_\omega \dfrac{\partial g}{\partial q_i} \right) h \ddd{N} q \ddd{N} p - \int_{\RR^{2N}} \left( \dfrac{\partial f}{\partial q_i} D^k_\omega \dfrac{\partial g}{\partial p_i} \right) h \ddd{N} q \ddd{N} p \right) \\ &= \int_{\RR^{2N}} \left( f D^{k+1}_\omega g \right) h \ddd{N} q \ddd{N} p \end{align} where in the first step we have used integration by parts; the terms containing second derivatives of $g$ cancel each other. In the second step we have used the induction assumption. One may conclude the proof either by checking that \eqref{eq:propertiesGMB-DomegaPerPartes} holds for $k=1$, or by defining $f D^0_\omega g = fg$, in which case it is trivial to check that \eqref{eq:propertiesGMB-DomegaPerPartes} holds for $k=0$. \end{proof} \begin{corollary} Let $f,g,h$ be functions on the phase space such that their product and products of their derivatives vanish at infinity. Then \begin{equation} \label{eq:propertiesGMB-LhPerPartes} \int_{\RR^{2N}} f (\Lie_h g) \ddd{N} q \ddd{N} p = - \int_{\RR^{2N}} (\Lie_h f) g \ddd{N} q \ddd{N} p. \end{equation} \end{corollary} \begin{proof} The result follows by expressing the generalized Moyal bracket in powers of the operator $D_\omega$ and using the antisymmetry of the bracket. \end{proof} We will use the following result in order to argue that only odd powers of $D_\omega$ can be included in the definition of $\mb{\cdot,\cdot}$. \begin{proposition} \label{prop:propertiesGMB-evenPolynomials} $(qp)^n D^{2n}_\omega (qp)^n = (-1)^n (2n)! (n!)^2$. \end{proposition} \begin{proof} All terms in the expansion of $D^{2n}_\omega$ will contain $2n$ derivatives and we get a non-zero contribution only from terms with the same number of position and momentum derivatives. Counting the number of such terms is the same as counting the number of orderings of $n$ identical black and $n$ identical white balls, since we are essentially counting the number of ways in which we obtain the correct derivative by applying derivatives with respect to either position or momentum. Thus one can see that there are exactly $\frac{(2n)!}{(n!)^2}$ such terms. Moreover all of these terms contain the factor $(-1)^n$ coming from the momentum derivatives. We thus get: \begin{equation} (qp)^n D^{2n}_\omega (qp)^n = (-1)^n \dfrac{(2n)!}{(n!)^2} \left( \dfrac{\partial^{2n} (qp)^n}{\partial q^n \partial p^n} \right)^2 = (-1)^n (2n)! (n!)^2. \end{equation} \end{proof} \begin{corollary} Assume that $\mb{\cdot,\cdot}$ includes even powers of $D_\omega$, then there is a function $f$ such that $\mb{f,f} \neq 0$. \end{corollary} \begin{proof} Let $\mb{\cdot,\cdot}$ be defined as: \begin{equation} \mb{f,g} = \sum_{k=0}^\infty b_k \hbar^{k-1} f D_\omega^{k} g \end{equation} and let $n \in \NN$ be the smallest number such that $b_{2n} \neq 0$.
We then have \begin{equation} \mb{(qp)^n, (qp)^n} = b_{2n} \hbar^{2n-1} (qp)^n D^{2n}_\omega (qp)^n, \end{equation} because $(qp)^n D^{2 \ell + 1}_\omega (qp)^n = 0$ due to Proposition~\ref{prop:propertiesGMB-symAntisym} and $(qp)^n D^{2 \ell}_\omega (qp)^n = 0$ for $\ell > n$ because then more than $n$ derivatives with respect to $q$ or with respect to $p$ act on $(qp)^n$, which is of degree $n$ in each variable. The result follows by Proposition~\ref{prop:propertiesGMB-evenPolynomials}. \end{proof} \begin{corollary} Assume that $\mb{\cdot,\cdot}$ includes even powers of $D_\omega$, then either the resulting time-evolution is not Markovian, or the generator of time-translations and the energy observable coincide only for $t=0$. \end{corollary} \begin{proof} Assume that $\mb{\cdot,\cdot}$ includes even powers of $D_\omega$ and let $H$ be a function such that $\mb{H,H} \neq 0$. But then in the Heisenberg picture we have $\dot{H} \neq 0$ and so $H = H(t)$ depends on time. Let $\mathcal{G}$ be the generator of the time-evolution constructed from the Hamiltonian using the generalized Moyal bracket. We now have two options: we either allow the generator to evolve itself in time and we have $\mathcal{G} = \Lie_{H(t)}$, or we keep the generator constant in time and we have $\mathcal{G} = \Lie_H = \Lie_{H(0)}$. In the first case the time-evolution is no longer Markovian since the generator depends on time; in the second case the energy observable $H(t)$ corresponds to the generator $\mathcal{G}$ only for $t=0$. \end{proof} \begin{example}[Example of functions such that $\mb{f,g} = 0$ but $\{f,g\} \neq 0$] For the Pöschl--Teller potential we have the Hamiltonian \begin{equation} H(q,p) = \frac{p^2}{2m} + \frac{\eta^2}{2m}\left(1 - \dfrac{2}{\cosh^2(q\eta / \hbar)}\right) \end{equation} where $\eta$ is some constant with units of momentum. The ground state has energy $0$ and the Wigner function \cite{CurtrightFairlieZachos-timeIndependentWignerFunction} \begin{equation} \rho(q,p) = \frac1\hbar\dfrac{\sin(2qp / \hbar)}{\sinh(2q\eta / \hbar) \sinh(\pi p/\eta)}. \end{equation} Since the ground state is stationary, we must have $\mb{H, \rho} = 0$ for the Moyal bracket used in quantum theory, but it is straightforward to check that $\{H,\rho\} \neq 0$. We mention that the occurrence of $\hbar$ in the Hamiltonian is essential and it cannot be easily replaced by another constant with the same units. \end{example} \begin{example}[Example of functions such that $\{f,g\} = 0$ but $\mb{f,g} \neq 0$] Observe that for any function $f: \RR^2 \to \RR$ we have $\{f, f^2\} = 0$; we will show that the analogous result does not hold for the generalized Moyal bracket. One can show that \begin{equation} (q^n p^{n+1}) D_\omega^{2n+1} (q^{2n} p^{2n+2}) = (-1)^{n+1} \dfrac{(2n)! (2n+1)! (2n+2)!}{(n-1)! (n+2)!} q^{n-1} p^{n+2}. \end{equation} Now let $n \geq 1$ be the lowest index such that $a_n \neq 0$, where $a_n$ are the coefficients used in the definition of the generalized Moyal bracket in Eq.~\eqref{eq:time-generalizedMoyal}, and let $f=q^np^{n+1}$. The Poisson-bracket term $\{f,f^2\}$ vanishes, the terms with $0 < k < n$ vanish because $a_k = 0$, and the terms with $k > n$ vanish because more than $2n+1$ derivatives then act on $f$, which has total degree $2n+1$. Consequently, we have $\mb{f,f^2}\ne 0$ for $f=q^np^{n+1}$. \end{example} \begin{proposition} \label{prop:propertiesGMB-max2Energy} Let $H:\RR^{2N} \to \RR$ be a Hamiltonian, let $g_H(I)$ be the corresponding spectral measure, and assume that the energy spectrum is discrete, i.e., that there are energies $E_n \in \RR$, $n \in \NN$, such that $g_H(I;t) = \sum_{n: E_n \in I}g_H(E_n;t)$. Let $\rho$ be a state such that $\Pr[\tilde{H} = E_k] \neq 0$ for at most two $k$, that is, $\Pr[\tilde{H} = E_k] \neq 0$ only for $k \in \{n_1, n_2\}$.
Then $\frac{d}{dt} \Pr[\tilde{H} = E_k] = 0$. \end{proposition} \begin{proof} From the assumptions it follows that at $t=0$, \begin{align} 1 & = \braket{g_H(E_{n_1}), \rho} + \braket{g_H(E_{n_2}), \rho}, \\ \braket{\tilde{H}} & = E_{n_1} \braket{g_H(E_{n_1}), \rho} + E_{n_2} \braket{g_H(E_{n_2}), \rho}. \end{align} Taking the time derivative in the Heisenberg picture at $t=0$ we get \begin{align} 0 & = \braket{\dot{g}_H(E_{n_1}), \rho} + \braket{\dot{g}_H(E_{n_2}), \rho}, \\ 0 & = E_{n_1} \braket{\dot{g}_H(E_{n_1}), \rho} + E_{n_2} \braket{\dot{g}_H(E_{n_2}), \rho} \end{align} and since $E_{n_1} \neq E_{n_2}$ we get $\braket{\dot{g}_H(E_{n_1}), \rho}=0$ and $\braket{\dot{g}_H(E_{n_2}), \rho}=0$. \end{proof} \section{Phase space spectral measures of energy and angular momentum of the hydrogen atom} \label{appendix:hydrogen} The spectrum of energies of bound states is discrete and we set $E_n = -\frac{\kappa}{2 a_0} \frac{1}{n^2}$. The phase space spectral measure of energy, $g_H(I)$, is given as \begin{equation} g_H(I; \vec{q}, \vec{p}) = \sum_{n: E_n \in I} T_n^H(H(\vec{q}, \vec{p})) \end{equation} where $T_n^H$ are piecewise linear functions specified below. If we assume that $g_H(E_n; \vec{q}, \vec{p}) \neq 0$ only if $H(\vec{q}, \vec{p}) \in [E_{n-1}, E_{n+1}]$, then we get $T_n^H(x) \neq 0$ only if $x \in [E_{n-1}, E_{n+1}]$. For $x \in [E_n, E_{n+1}]$ the normalization and expectation value conditions on $g_H(I; \vec{q}, \vec{p})$ become \begin{align} T_n^H(x) + T_{n+1}^H(x) & = 1, \\ E_n T_n^H(x) + E_{n+1} T_{n+1}^H(x) & = x. \end{align} This system of linear equations has a unique solution, given by the sawtooth functions: \begin{align} T_1^H (x) & = \begin{cases} \frac{E_2 - x}{E_2 - E_1} & x \leq E_2 \\ 0 & x \geq E_2 \end{cases} \\ T_2^H (x) & = \begin{cases} \frac{x - E_1}{E_2 - E_1} & x \leq E_2 \\ \frac{E_3 - x}{E_3 - E_2} & x \in [E_2,E_3] \\ 0 & x \geq E_3 \end{cases} \end{align} and for $n \geq 3$ as \begin{equation} T_n^H (x) = \begin{cases} \frac{x - E_{n-1}}{E_n - E_{n-1}} & x \in [E_{n-1},E_n] \\ \frac{E_{n+1} - x}{E_{n+1} - E_n} & x \in [E_n,E_{n+1}] \\ 0 & x \notin [E_{n-1},E_{n+1}] \end{cases} \end{equation} Note that for $n \neq 2$ we have $T_n^H(x) \geq 0$ for all $x$, while $T_2^H(x) \leq 0$ for $x \leq E_1$ and $T_2^H(x) \geq 0$ for $E_1 \leq x \leq E_2$. Since this is only the spectral measure for bound states, we only need to check the conditions that the phase space spectral measure has to satisfy for negative energies, i.e., only for $I \subset \RR_-$, where $\RR_-$ is the set of negative real numbers. The normalization condition $g_H(\RR_-) = 1$ follows from $\sum_{n=1}^\infty T_n^H(x) = 1$, while the expectation value condition $\sum_{n=1}^\infty E_n g_H(E_n; \vec{q}, \vec{p}) = H(\vec{q}, \vec{p})$ follows from $\sum_{n=1}^\infty E_n T_n^H(x) = x$. The phase space spectral measure for angular momentum is constructed in a similar way, \begin{equation} g_{L_i}(I; \vec{q}, \vec{p}) = \sum_{m: m \hbar \in I} T_m^{L_i}(L_i(\vec{q}, \vec{p})) \end{equation} for $m \in \ZZ$.
Here $T_m^{L_i}$ are the sawtooth functions given as \begin{equation} T_m^{L_i} (x) = \begin{cases} \frac{x - (m-1)\hbar}{\hbar} & x \in [(m-1)\hbar, m\hbar] \\ \frac{(m+1)\hbar - x}{\hbar} & x \in [m\hbar, (m+1)\hbar] \\ 0 & x \notin [(m-1)\hbar, (m+1)\hbar] \end{cases} \end{equation} The normalization condition $g_{L_i}(\RR) = 1$ follows from $\sum_{m \in \ZZ} T_m^{L_i}(x) = 1$ and the expectation value condition $\sum_{m \in \ZZ} m \hbar g_{L_i}(m \hbar; \vec{q}, \vec{p}) = L_i(\vec{q}, \vec{p})$ follows from $\sum_{m \in \ZZ} m \hbar T_m^{L_i}(x) = x$. \begin{proposition} We have $g_H(E_n) g_{L_i}(m \hbar) \neq 0$ only if $\abs{m} \leq 2(n+1)$. \end{proposition} \begin{proof} Let $n \in \NN$ and $m \in \ZZ$ be such that $g_H(E_n) g_{L_i}(m \hbar) \neq 0$, which implies $g_H(E_n) \neq 0$ and $g_{L_i}(m \hbar) \neq 0$. We have $g_H(E_n) \neq 0$ only if $H(\vec{q}, \vec{p}) \in (E_{n-1}, E_{n+1})$, which implies \begin{equation} \label{eq:hydrogen-nmRel-EnBound} - \dfrac{\abs{\vec{p}}^2}{2 \mu} + \dfrac{\kappa}{\abs{\vec{q}}} > \abs{E_{n+1}}. \end{equation} We have \begin{equation} \abs{E_{n+1}} < - \dfrac{\abs{\vec{p}}^2}{2 \mu} + \dfrac{\kappa}{\abs{\vec{q}}} \leq \dfrac{\kappa}{\abs{\vec{q}}} \end{equation} which yields \begin{equation} \label{eq:hydrogen-nmRel-qBound} \abs{\vec{q}} < \dfrac{\kappa}{\abs{E_{n+1}}}. \end{equation} Using Eq.~\eqref{eq:hydrogen-nmRel-EnBound} we also get \begin{equation} \abs{E_{n+1}} + \dfrac{\abs{\vec{p}}^2}{2 \mu} < \dfrac{\kappa}{\abs{\vec{q}}} \end{equation} from which we get \begin{equation} \label{eq:hydrogen-nmRel-p2qBound} \abs{\vec{p}}^2 \abs{\vec{q}} < 2\mu ( \kappa - \abs{E_{n+1}} \abs{\vec{q}} ) \leq 2\mu \kappa. \end{equation} We have $g_{L_i}(m \hbar) \neq 0$ only if $L_i(\vec{q}, \vec{p}) \in ((m-1)\hbar, (m+1)\hbar)$ which implies \begin{equation} (\abs{m} - 1) \hbar < \abs{L_i}. \end{equation} Using the formula for the norm of the cross product we get $\abs{L_i} \leq \abs{\vec{L}} \leq \abs{\vec{q}} \abs{\vec{p}}$, and hence \begin{equation} (\abs{m} - 1) \hbar < \abs{\vec{q}} \abs{\vec{p}}. \end{equation} By squaring both sides and using Eq.~\eqref{eq:hydrogen-nmRel-p2qBound} we get \begin{equation} (\abs{m} - 1)^2 \hbar^2 < \abs{\vec{q}}^2 \abs{\vec{p}}^2 < 2 \mu \kappa \abs{\vec{q}} \end{equation} and finally using Eq.~\eqref{eq:hydrogen-nmRel-qBound} we obtain \begin{equation} (\abs{m} - 1)^2 \hbar^2 < 2 \mu \kappa \abs{\vec{q}} < 2 \mu \dfrac{\kappa^2}{\abs{E_{n+1}}} = 4 \mu \kappa a_0 (n+1)^2 = 4 \hbar^2 (n+1)^2. \end{equation} Taking the square root and using that $m$ is an integer we obtain the final expression \begin{equation} \abs{m} \leq 2(n+1). \end{equation} \end{proof} \section{Hydrogen atom in a non-stationary electric field} \label{appendix:electric} Let \begin{align} H(\vec{q}, \vec{p}) & = \dfrac{\abs{\vec{p}}^2}{2 \mu} - \dfrac{\kappa}{\abs{\vec{q}}}, \\ H_e(\vec{q}, \vec{p}, t) & = -2 eE \sin(\omega t) q_3, \end{align} be the corresponding Hamiltonians. Let $f: \RR^6 \to \RR$ be a function on the phase space; then we have \begin{equation} \Lie_{H_e} f(\vec{q}, \vec{p}) = -2 eE \sin(\omega t) \dfrac{\partial}{\partial p_3} f(\vec{q}, \vec{p}) \end{equation} and \begin{equation} \Lie_H f(\vec{q}, \vec{p}) = - \dfrac{\vec{p}}{\mu} \cdot \vec{\nabla}_q f(\vec{q}, \vec{p}) - \mb{\dfrac{\kappa}{\abs{\vec{q}}}, f(\vec{q}, \vec{p})}.
\end{equation} We then have \begin{align} \Lie_{H_e} \Lie_H f(\vec{q}, \vec{p}) & = -2 eE \sin(\omega t) \dfrac{\partial}{\partial p_3} \left( - \dfrac{\vec{p}}{\mu} \cdot \vec{\nabla}_q f(\vec{q}, \vec{p}) - \mb{\dfrac{\kappa}{\abs{\vec{q}}}, f(\vec{q}, \vec{p})} \right) \\ &= -2 eE \sin(\omega t) \left( - \dfrac{1}{\mu} \dfrac{\partial}{\partial q_3} f(\vec{q}, \vec{p}) - \dfrac{\vec{p}}{\mu} \cdot \vec{\nabla}_q \dfrac{\partial}{\partial p_3} f(\vec{q}, \vec{p}) - \mb{\dfrac{\kappa}{\abs{\vec{q}}}, \dfrac{\partial}{\partial p_3} f(\vec{q}, \vec{p})} \right) \\ &= 2 eE \sin(\omega t) \dfrac{1}{\mu} \dfrac{\partial}{\partial q_3} f(\vec{q}, \vec{p}) + \Lie_H \Lie_{H_e} f(\vec{q}, \vec{p}) \end{align} and we get \begin{equation} \Lie_{H_e} \Lie_H - \Lie_H \Lie_{H_e} = 2 eE \sin(\omega t) \dfrac{1}{\mu} \dfrac{\partial}{\partial q_3} = \Lie_{G} \end{equation} for \begin{equation} G = - \dfrac{2 eE}{\mu} \sin(\omega t) p_3. \end{equation} \section{Scattering theory of the hydrogen atom} \label{appendix:scattering} Let $H(\vec{q}, \vec{p}) = \frac{\abs{\vec{p}}^2}{2 \mu} + V(\vec{q})$ be a Hamiltonian; we will decompose the time-evolution equation for a state $\rho$ as follows: \begin{align} \dot{\rho} = \mb{H, \rho} & = \{\frac{\abs{\vec{p}}^2}{2 \mu}, \rho(t; \vec{q}, \vec{p})\} + \mb{V(\vec{q}), \rho(t; \vec{q}, \vec{p})} \\ & = \{\frac{\abs{\vec{p}}^2}{2 \mu}, \rho(t; \vec{q}, \vec{p}) \} + \int\limits_{\RR^3} \mb{V(\vec{q}), \delta^{(3)}(\vec{p} - \vec{p'})} \rho(t; \vec{q}, \vec{p'}) \ddd{3} p', \end{align} where we have used that in $\mb{V(\vec{q}), \rho(t; \vec{q}, \vec{p})}$ only partial derivatives with respect to $\vec{p}$ act on $\rho(t; \vec{q}, \vec{p})$. Let us denote \begin{equation} K(\vec{q}, \vec{p}, \vec{p'}) = \mb{V(\vec{q}), \delta^{(3)}(\vec{p} - \vec{p'})}. \end{equation} Also note that \begin{equation} \{\frac{\abs{\vec{p}}^2}{2 \mu}, \rho(t; \vec{q}, \vec{p})\} = - \frac{\vec{p}}{\mu} \cdot \vec{\nabla}_q \rho(t; \vec{q}, \vec{p}) \end{equation} where $\vec{\nabla}_q$ is the gradient in $\vec{q}$. Putting all this together we get a new form of the time-evolution equation: \begin{equation} \label{eq:scattering-timeEvolution} \dot{\rho}(t; \vec{q}, \vec{p}) + \frac{\vec{p}}{\mu} \cdot \vec{\nabla}_q \rho(t; \vec{q}, \vec{p}) = \int\limits_{\RR^3} K(\vec{q}, \vec{p}, \vec{p'}) \rho(t; \vec{q}, \vec{p'}) \ddd{3} p'. \end{equation} In order to solve this equation we will use the fact that the retarded Green's function for the differential operator on the left-hand side is known. That is, let \begin{equation} \label{eq:scattering-greens} G(\vec{q}, \vec{p}, t) = \theta(t) \delta^{(3)} (\vec{q} - \frac{\vec{p} t}{\mu}), \end{equation} where $\theta(t)$ is the Heaviside step function and $\delta^{(3)} (\vec{q} - \frac{\vec{p} t}{\mu})$ is to be integrated over $\vec{q}$. Then we have \begin{equation} \dot{G}(\vec{q}, \vec{p}, t) + \frac{\vec{p}}{\mu} \cdot \vec{\nabla}_q G(\vec{q}, \vec{p}, t) = \delta(t) \delta^{(3)}(\vec{q}). \end{equation} Before we construct the solution of the time-evolution that we aim for, we have to discuss initial conditions. In this case we want to have initial conditions formally at $t = -\infty$. In order to define these initial conditions in a meaningful way let $\rho_{\ini}$ be the solution of the free time-evolution equation, that is, \begin{equation} \label{eq:scattering-indyn} \dot{\rho}_{\ini} = \{ \dfrac{\abs{\vec{p}}^2}{2 \mu}, \rho_{\ini} \}.
\end{equation} Then as an initial condition we require that, pointwise, \begin{equation} \label{eq:scattering-initial} \lim_{t \to -\infty}(\rho - \rho_{\ini}) = 0. \end{equation} One finds easily that \begin{equation} \label{eq:scattering-sol} \rho(t; \vec{q}, \vec{p}) = \rho_{\ini}(t; \vec{q}, \vec{p}) + \int\limits_{\RR} \int\limits_{\RR^3} \int\limits_{\RR^3} G(\vec{q} - \vec{q'}, \vec{p}, t - \tau) K(\vec{q'}, \vec{p}, \vec{p'}) \rho(\tau; \vec{q'}, \vec{p'}) \ddd{3} p' \ddd{3} q' \dd \tau \end{equation} satisfies the initial condition \eqref{eq:scattering-initial}. In addition, inserting Eq.~\eqref{eq:scattering-sol} into the left-hand side of Eq.~\eqref{eq:scattering-timeEvolution} immediately yields the right-hand side of Eq.~\eqref{eq:scattering-timeEvolution}, by virtue of Eq.~\eqref{eq:scattering-greens} and Eq.~\eqref{eq:scattering-indyn}. Eq.~\eqref{eq:scattering-sol} is still an integral equation but we will be able to solve it iteratively. Plugging in the expression for $G$ and integrating we get: \begin{align} \rho(t; \vec{q}, \vec{p}) & = \rho_{\ini}(t; \vec{q}, \vec{p}) + \int\limits_{\RR} \int\limits_{\RR^3} \int\limits_{\RR^3} \theta(t - \tau) \delta^{(3)}(\vec{q} - \vec{q'} - \frac{\vec{p}(t - \tau)}{\mu}) K(\vec{q'}, \vec{p}, \vec{p'}) \rho(\tau; \vec{q'}, \vec{p'}) \ddd{3} p' \ddd{3} q' \dd \tau \\ & = \rho_{\ini}(t; \vec{q}, \vec{p}) + \int\limits_{\RR^3} \int\limits_{-\infty}^{t} K(\vec{q} - \frac{\vec{p}}{\mu}(t - \tau), \vec{p}, \vec{p'}) \rho(\tau; \vec{q} - \frac{\vec{p}}{\mu}(t - \tau), \vec{p'}) \dd \tau \ddd{3} p'. \label{eq:scattering-EQ} \end{align} We can solve the equation above in a perturbative manner via $V(\vec{q}) \to \lambda V(\vec{q})$ and the expansion \begin{equation} \rho(t;\vec{q},\vec{p}) = \sum_{n=0}^\infty \sum_{k=0}^\infty \hbar^{2n} \lambda^k \rho_{n,k}(t; \vec{q},\vec{p}). \end{equation} Our strategy is now to determine how $\rho_{n,k}(t; \vec{q},\vec{p})$ depends on $\abs{\vec{q}}$. We do this since we assume that the detector is positioned far from the center of the potential, so we are interested only in the limit $\abs{\vec{q}} \to \infty$. Recalling that $a_0 = 1$, after plugging this expansion into Eq.~\eqref{eq:scattering-EQ} we get \begin{equation} \begin{split} \sum_{n=0}^\infty \sum_{k=0}^\infty \hbar^{2n} \lambda^k \rho_{n,k}(t; \vec{q},\vec{p}) &= \rho_{\ini}(t; \vec{q}, \vec{p}) + \sum_{n=0}^\infty \sum_{n'=0}^\infty \sum_{k=0}^\infty a_{n'} \hbar^{2(n+n')} \lambda^{k+1} \\ &\int\limits_{\RR^3} \int\limits_{-\infty}^{t} \left( V(\vec{q} - \frac{\vec{p}}{\mu}(t - \tau)) D^{2n'+1}_\omega \delta^{(3)}(\vec{p} - \vec{p'}) \right) \rho_{n,k}(\tau; \vec{q} - \frac{\vec{p}}{\mu}(t - \tau), \vec{p'}) \dd \tau \ddd{3} p'. \end{split} \end{equation} We proceed by comparing terms with the same power of $\lambda$ on both sides. First of all observe that for $k = 0$ we have $\rho_{0,0} = \rho_{\ini}$ and $\rho_{n,0} = 0$ for $n > 0$. For $k \geq 1$ we get \begin{equation} \begin{split} \sum_{n=0}^\infty \hbar^{2n} \rho_{n,k}(t; \vec{q},\vec{p}) = \sum_{n=0}^\infty \sum_{n'=0}^\infty a_{n'} \hbar^{2(n+n')} \int\limits_{\RR^3} \int\limits_{-\infty}^{t} &\left( V(\vec{q} - \frac{\vec{p}}{\mu}(t - \tau)) D^{2n'+1}_\omega \delta^{(3)}(\vec{p} - \vec{p'}) \right) \\ &\rho_{n,k-1}(\tau; \vec{q} - \frac{\vec{p}}{\mu}(t - \tau), \vec{p'}) \dd \tau \ddd{3} p'.
\end{split} \end{equation} Using the general identity $\sum_{n=0}^\infty \sum_{n'=0}^\infty f(n,n') = \sum_{N=0}^\infty \sum_{\tilde{n}=0}^N f(\tilde{n}, N-\tilde{n})$ we get \begin{equation} \begin{split} \sum_{n=0}^\infty \hbar^{2n} \rho_{n,k}(t; \vec{q},\vec{p}) = \sum_{N=0}^\infty \sum_{\tilde{n}=0}^N a_{(N-\tilde{n})} \hbar^{2 N} \int\limits_{\RR^3} \int\limits_{-\infty}^{t} &\left( V(\vec{q} - \frac{\vec{p}}{\mu}(t - \tau) ) D^{2(N-\tilde{n})+1}_\omega \delta^{(3)}(\vec{p} - \vec{p'}) \right) \\ &\rho_{\tilde{n},k-1}(\tau; \vec{q} - \frac{\vec{p}}{\mu}(t - \tau), \vec{p'}) \dd \tau \ddd{3} p' \end{split} \end{equation} and by comparing the terms with the same power of $\hbar$ we obtain \begin{equation} \rho_{n,k}(t; \vec{q},\vec{p}) = \sum_{\tilde{n}=0}^n a_{(n-\tilde{n})} \int\limits_{\RR^3} \int\limits_{-\infty}^{t} \left( V(\vec{q} - \frac{\vec{p}}{\mu}(t - \tau) ) D^{2(n-\tilde{n})+1}_\omega \delta^{(3)}(\vec{p} - \vec{p'}) \right) \rho_{\tilde{n},k-1}(\tau; \vec{q} - \frac{\vec{p}}{\mu}(t - \tau), \vec{p'}) \dd \tau \ddd{3} p'. \end{equation} Performing the substitution $t - \tau = u$, we get \begin{equation} \label{eq:scattering-rhonk} \rho_{n,k}(t; \vec{q},\vec{p}) = \sum_{\tilde{n}=0}^n a_{(n-\tilde{n})} \int\limits_{\RR^3} \int\limits_{0}^{\infty} \left( V(\vec{q} - \frac{\vec{p}}{\mu} u) D^{2(n-\tilde{n})+1}_\omega \delta^{(3)}(\vec{p} - \vec{p'}) \right) \rho_{\tilde{n},k-1}(t-u; \vec{q} - \frac{\vec{p}}{\mu} u, \vec{p'}) \dd u \ddd{3} p'. \end{equation} We next prove that $\rho_{n,k}$ does not explicitly depend on time $t$. The physical intuition for this is that the initial state $\rho_{\ini}$ does not depend on time and, since the experiment starts at $t = -\infty$, the flow of the scattered particles must have stabilized at any finite time. Mathematically we see from Eq.~\eqref{eq:scattering-rhonk} that $\rho_{n,k}$ depends on $t$ only if $\rho_{\tilde{n},k-1}$ depends on $t$ for some $\tilde{n} \in \{0, \ldots, n\}$. Since we already argued that $\rho_{\tilde{n}, 0}$ does not depend on $t$, we get that $\rho_{\tilde{n}, 1}$ does not depend on $t$. Proceeding by induction we get that $\rho_{n,k}$ does not depend on $t$ for all $k$ and $n$. For this reason we will drop the explicit time dependence and we will write \begin{equation} \rho_{n,k}(\vec{q}, \vec{p}) = \rho_{n,k}(t; \vec{q}, \vec{p}) \end{equation} and \begin{equation} \rho(\vec{q}, \vec{p}) = \rho(t; \vec{q}, \vec{p}). \end{equation} Observe that in the expression $V(\vec{q} - \frac{\vec{p}}{\mu} u) D^{2(n-\tilde{n})+1}_\omega \delta^{(3)}(\vec{p} - \vec{p'})$ the nonzero contributions come only from terms in which the derivatives $\frac{\partial}{\partial q_i}$ act on $V(\vec{q} - \frac{\vec{p}}{\mu} u)$, because $\delta^{(3)}(\vec{p} - \vec{p'})$ does not depend on $\vec{q}$, and so we have \begin{equation} V(\vec{q} - \frac{\vec{p}}{\mu} u) D^{n'}_\omega \delta^{(3)}(\vec{p} - \vec{p'}) = \sum_{\substack{i_1 = 1\\ \ldots \\ i_{n'} = 1}}^3 \left( \dfrac{\partial}{\partial q_{i_{n'}}} \cdots \dfrac{\partial}{\partial q_{i_1}} V(\vec{q} - \frac{\vec{p}}{\mu} u) \right) \left( \dfrac{\partial}{\partial p_{i_{n'}}} \cdots \dfrac{\partial}{\partial p_{i_1}} \delta^{(3)}(\vec{p} - \vec{p'}) \right), \end{equation} where $n' \in \NN$ is an odd number.
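As a brief worked instance (added here for orientation; it is not needed for the argument), consider the one-dimensional case $N=1$: since $\delta(p - p')$ does not depend on $q$, the recursion defining $D_\omega$ leaves only the terms in which all $q$-derivatives hit the potential, so that \[ V\!\left(q - \tfrac{p}{\mu} u\right) D_\omega^{n'}\, \delta(p - p') = V^{(n')}\!\left(q - \tfrac{p}{\mu} u\right) \dfrac{\partial^{n'}}{\partial p^{n'}}\, \delta(p - p'). \] Each application of $D_\omega$ thus contributes one derivative of the potential and one derivative of the delta distribution.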
Since we have \begin{equation} \dfrac{\partial}{\partial q_i} \dfrac{1}{\abs{\vec{q} - \dfrac{\vec{p}}{\mu} u}^k} = \dfrac{-k (q_i-\frac{p_i}{\mu} u)}{\abs{\vec{q} - \dfrac{\vec{p}}{\mu} u}^{k+2}} \end{equation} and every derivative either acts on the term in the denominator in this way, or acts on the polynomial in the numerator, we see that every derivative decreases the order in $\abs{\vec{q}}$ by one. We thus have \begin{equation} V(\vec{q} - \frac{\vec{p}}{\mu} u) D^{2(n-\tilde{n})+1}_\omega \delta^{(3)}(\vec{p} - \vec{p'}) = \dfrac{\tilde{f}_{2(n-\tilde{n})+1}(\frac{\vec{q}}{\abs{\vec{q}}} - \frac{\vec{p}}{\mu} \frac{u}{\abs{\vec{q}}}, \vec{p}, \vec{p'}) }{\abs{\vec{q}}^{2(n - \tilde{n}) + 2}} \end{equation} where $\tilde{f}_{2(n-\tilde{n})+1}$ is some suitable function which also contains derivatives of the Dirac distribution. Plugging this expression into Eq.~\eqref{eq:scattering-rhonk} we get \begin{equation} \rho_{n,k}(\vec{q},\vec{p}) = \sum_{\tilde{n}=0}^n a_{(n-\tilde{n})} \int\limits_{\RR^3} \int\limits_{0}^{\infty} \dfrac{\tilde{f}_{2(n-\tilde{n})+1}(\frac{\vec{q}}{\abs{\vec{q}}} - \frac{\vec{p}}{\mu} \frac{u}{\abs{\vec{q}}}, \vec{p}, \vec{p'}) }{\abs{\vec{q}}^{2(n - \tilde{n}) + 2}} \rho_{\tilde{n},k-1}(\vec{q} - \frac{\vec{p}}{\mu} u, \vec{p'}) \dd u \ddd{3} p'. \end{equation} Using the substitution $u = \abs{\vec{q}} v$ we get \begin{equation} \label{eq:scattering-rhonkAbsq} \rho_{n,k}(\vec{q},\vec{p}) = \sum_{\tilde{n}=0}^n \dfrac{a_{(n-\tilde{n})}}{\abs{\vec{q}}^{2(n - \tilde{n}) + 1}} \int\limits_{\RR^3} \int\limits_{0}^{\infty} \tilde{f}_{2(n-\tilde{n})+1}(\frac{\vec{q}}{\abs{\vec{q}}} - \frac{\vec{p}}{\mu} v, \vec{p}, \vec{p'}) \rho_{\tilde{n},k-1}(\vec{q} - \frac{\vec{p}}{\mu} \abs{\vec{q}} v, \vec{p'}) \dd v \ddd{3} p' \end{equation} and we see that the only term inside the integral that still depends on $\abs{\vec{q}}$ is $\rho_{\tilde{n},k-1}(\vec{q} - \frac{\vec{p}}{\mu} \abs{\vec{q}} v, \vec{p'})$. At this point it is useful to compute $\rho_{n,1}(\vec{q},\vec{p})$ to get some intuition for the following calculations. Using that $\rho_{0,0} = \rho_{\ini}$ and $\rho_{n,0} = 0$ for $n \geq 1$ we get \begin{equation} \label{eq:scattering-rhon1Absq} \rho_{n,1}(\vec{q},\vec{p}) = \dfrac{a_n}{\abs{\vec{q}}^{2n + 1}} \int\limits_{\RR^3} \int\limits_{0}^{\infty} \tilde{f}_{2n+1}(\frac{\vec{q}}{\abs{\vec{q}}}, \frac{\vec{q}}{\abs{\vec{q}}} - \frac{\vec{p}}{\mu} v, \vec{p}, \vec{p'}) \nu \delta^{(3)}(\vec{p'} - \vec{p}_0) \dd v \ddd{3} p' \end{equation} and so we see that the dependence on $\abs{\vec{q}}$ becomes explicit. We will proceed as follows: assume that for a given $k \in \NN$ and for all $n \in \NN$ we have \begin{equation} \rho_{n,k-1}(\vec{q}, \vec{p}) = \dfrac{\tilde{\rho}_{n,k-1}(\frac{\vec{q}}{\abs{\vec{q}}}, \vec{p})}{\abs{\vec{q}}^{\alpha(n,k-1)}} \end{equation} where $\alpha(n,k-1) \in \NN$.
Then, using Eq.~\eqref{eq:scattering-rhonkAbsq} we get \begin{equation} \rho_{n,k}(\vec{q},\vec{p}) = \sum_{\tilde{n}=0}^n \dfrac{a_{(n-\tilde{n})}}{\abs{\vec{q}}^{2(n - \tilde{n}) + 1}} \int\limits_{\RR^3} \int\limits_{0}^{\infty} \tilde{f}_{2(n-\tilde{n})+1}(\frac{\vec{q}}{\abs{\vec{q}}} - \frac{\vec{p}}{\mu} v, \vec{p}, \vec{p'}) \dfrac{\tilde{\rho}_{\tilde{n},k-1} \left( \frac{\vec{q} - \frac{\vec{p}}{\mu} \abs{\vec{q}} v}{\abs{\vec{q} - \frac{\vec{p}}{\mu} \abs{\vec{q}} v}}, \vec{p'} \right)}{\abs{ \vec{q} - \frac{\vec{p}}{\mu} \abs{\vec{q}} v}^{\alpha(\tilde{n},k-1)}} \dd v \ddd{3} p' \end{equation} which leads to \begin{equation} \rho_{n,k}(\vec{q},\vec{p}) = \sum_{\tilde{n}=0}^n \dfrac{a_{(n-\tilde{n})}}{\abs{\vec{q}}^{2(n - \tilde{n}) + 1 + \alpha(\tilde{n},k-1)}} \int\limits_{\RR^3} \int\limits_{0}^{\infty} \tilde{f}_{2(n-\tilde{n})+1}(\frac{\vec{q}}{\abs{\vec{q}}} - \frac{\vec{p}}{\mu} v, \vec{p}, \vec{p'}) \dfrac{\tilde{\rho}_{\tilde{n},k-1} \left( \frac{ \frac{\vec{q}}{\abs{\vec{q}}} - \frac{\vec{p}}{\mu} v}{\abs{\frac{\vec{q}}{\abs{\vec{q}}} - \frac{\vec{p}}{\mu} v}}, \vec{p'} \right)}{\abs{\frac{\vec{q}}{\abs{\vec{q}}} - \frac{\vec{p}}{\mu} v}^{\alpha(\tilde{n},k-1)}} \dd v \ddd{3} p'. \end{equation} We now want to show that $2(n - \tilde{n}) + 1 + \alpha(\tilde{n},k-1)$ does not depend on $\tilde{n}$. For $k = 2$ this is straightforward as we see from Eq.~\eqref{eq:scattering-rhon1Absq} that $\alpha(\tilde{n},1) = 2\tilde{n}+1$, and thus \begin{equation} 2(n - \tilde{n}) + 1 + \alpha(\tilde{n},1) = 2n+2. \end{equation} Generalizing this, assume that $\alpha(\tilde{n}, k-1) = 2\tilde{n} + k - 1$ for some $k \in \NN$; then \begin{equation} 2(n - \tilde{n}) + 1 + \alpha(\tilde{n},k-1) = 2n + k. \end{equation} Since all of our assumptions hold for $k=1$, we get using induction that $\alpha(n,k) = 2n + k$ and \begin{equation} \label{eq:scattering-rhonkFinal} \rho_{n,k}(\vec{q},\vec{p}) = \dfrac{\tilde{\rho}_{n,k}(\frac{\vec{q}}{\abs{\vec{q}}}, \vec{p})}{\abs{\vec{q}}^{2n + k}} \end{equation} where \begin{equation} \tilde{\rho}_{n,k}(\frac{\vec{q}}{\abs{\vec{q}}}, \vec{p}) = \sum_{\tilde{n}=0}^n a_{(n-\tilde{n})} \int\limits_{\RR^3} \int\limits_{0}^{\infty} \tilde{f}_{2(n-\tilde{n})+1}(\frac{\vec{q}}{\abs{\vec{q}}} - \frac{\vec{p}}{\mu} v, \vec{p}, \vec{p'}) \dfrac{\tilde{\rho}_{\tilde{n},k-1} \left( \frac{ \frac{\vec{q}}{\abs{\vec{q}}} - \frac{\vec{p}}{\mu} v}{\abs{\frac{\vec{q}}{\abs{\vec{q}}} - \frac{\vec{p}}{\mu} v}}, \vec{p'} \right)}{\abs{\frac{\vec{q}}{\abs{\vec{q}}} - \frac{\vec{p}}{\mu} v}^{\alpha(\tilde{n},k-1)}} \dd v \ddd{3} p'. \end{equation} In order to compute the differential cross section we are only interested in the spatial density of the particles, $D(\vec{q}) = \int_{\RR^3} \rho(\vec{q}, \vec{p}) \ddd{3} p$; thus we want to compute \begin{equation} D_{n,k}(\vec{q}) = \int_{\RR^3} \rho_{n,k}(\vec{q}, \vec{p}) \ddd{3} p. \end{equation} Using Eq.~\eqref{eq:scattering-rhonkFinal} we get \begin{equation} D_{n,k}(\vec{q}) = \dfrac{f_{n,k}(\vartheta)}{\abs{\vec{q}}^{2n+k}} \end{equation} where $f_{n,k}(\vartheta)$ is some function of the scattering angle $\vartheta$. Due to the symmetry of the scattering problem the spatial density $D_{n,k}(\vec{q})$ cannot depend on the azimuthal angle $\varphi$, but only on the polar angle $\vartheta$, which coincides with the scattering angle. That is why $f_{n,k}(\vartheta)$ depends only on the scattering angle.
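The induction for the exponent can also be checked mechanically. The following short sketch (ours, not part of the derivation) assumes only the base case $\alpha(n,1) = 2n+1$ and the recursion $\alpha(n,k) = 2(n-\tilde{n}) + 1 + \alpha(\tilde{n},k-1)$ for every $\tilde{n} \in \{0,\ldots,n\}$, as read off from Eq.~\eqref{eq:scattering-rhonkAbsq}, and confirms $\alpha(n,k) = 2n+k$ for small $n$ and $k$:
\begin{verbatim}
# Sketch: check that the exponent alpha(n, k) of 1/|q| in rho_{n,k} equals
# 2n + k, assuming the base case alpha(n, 1) = 2n + 1 and the recursion
# alpha(n, k) = 2(n - m) + 1 + alpha(m, k - 1) for every m in {0, ..., n}.

def alpha(n, k):
    if k == 1:
        return 2 * n + 1
    exponents = {2 * (n - m) + 1 + alpha(m, k - 1) for m in range(n + 1)}
    assert len(exponents) == 1  # the exponent must not depend on m
    return exponents.pop()

assert all(alpha(n, k) == 2 * n + k for n in range(4) for k in range(1, 5))
\end{verbatim}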
The density of particles that reach the detector is given as $\lim_{\abs{\vec{q}} \to \infty} D(\vec{q}) \abs{\vec{q}}^2 \dd \Omega$, where $\abs{\vec{q}}^2 \dd \Omega$ is the infinitesimal surface element, $\dd \Omega = \sin(\vartheta) \dd \vartheta \dd \varphi$. We get for $\lambda=1$ \begin{equation} \lim_{\abs{\vec{q}} \to \infty} D(\vec{q}) \abs{\vec{q}}^2 \dd \Omega = \lim_{\abs{\vec{q}} \to \infty} \sum_{n=0}^\infty \sum_{k=0}^\infty \hbar^{2n} D_{n,k}(\vec{q}) \abs{\vec{q}}^2 \dd \Omega = \lim_{\abs{\vec{q}} \to \infty} \sum_{n=0}^\infty \sum_{k=0}^\infty \hbar^{2n} \dfrac{f_{n,k}(\vartheta)}{\abs{\vec{q}}^{2n+k-2}} \dd \Omega. \end{equation} In the limit we get nonzero contributions only from the terms where $2n+k-2 \leq 0$; these are $n = 0$, $k \leq 2$ and $n = 1$, $k = 0$. Since $\rho_{1,0} = 0$ we have nonzero contributions only from the terms with $n = 0$. But these are exactly the terms that are obtained using the Poisson bracket, hence $\lim_{\abs{\vec{q}} \to \infty} D(\vec{q}) \abs{\vec{q}}^2 \dd \Omega$ must coincide with the predictions of classical theory. \end{document}
\begin{document} \begin{abstract} We present a table of symmetric diagrams for strongly invertible knots up to 10 crossings, point out the similarity of transvergent diagrams for strongly invertible knots with symmetric union diagrams and discuss open questions. \end{abstract} \keywords{strongly invertible knots} \subjclass[2020]{57K10} \captionsetup{belowskip=10pt,aboveskip=10pt} \def0.8{0.8} \definecolor{myboxcolour}{gray}{0.8} \reversemarginpar \title{Symmetric diagrams for all strongly invertible knots up to 10 crossings} \section{Introduction} \label{sec:Introduction} A knot $K \subset \mathbb{S}^3$ is said to be \textsl{strongly invertible} if there is an orientation-preserving smooth involution $h$ of $\mathbb{S}^3$ such that $h(K) = K$ and $h$ reverses the orientation on $K$. Sakuma's article `On strongly invertible knots' \cite{Sakuma}, which appeared in 1986, contains a table of symmetric diagrams of strongly invertible knots up to crossing number 9. Strongly invertible knots currently receive a lot of attention, see for instance \cite{Boyle}, \cite{BoyleIssa}, \cite{LobbWatson} and \cite{Snape}. Our own motivation to extend the table of symmetric diagrams to knots with crossing number 10 stems from the related symmetry type of symmetric unions of knots and a project to study strongly positive amphicheiral knots. Strongly invertible knots can be depicted as \textsl{transvergent} diagrams (in which the axis of rotation lies in the diagram plane) or as \textsl{intravergent} diagrams (where the axis is perpendicular to the diagram plane). The purpose of this article is to give one transvergent diagram for each (prime) strongly invertible knot with $c(K) \le 10$. \begin{definition} For a knot $K$ we denote, as usual, by $c(K)$ the minimal crossing number of a diagram of $K$. If $K$ is strongly invertible we denote by $c_t(K)$ the minimal crossing number of all transvergent diagrams of $K$. \end{definition} Symmetric equivalence can be defined for strongly invertible knots either using the conjugacy class of $h$ in the symmetry group of $K$ or by symmetric diagram moves (see \cite{LobbWatson}, Section 2). Interestingly, especially when compared to the case of symmetric unions \cite{EisermannLamm2007}, there are always one or two equivalence classes (two equivalence classes occurring for knots with period 2). Whereas Sakuma lists diagrams showing both classes simultaneously (for knots having two of them), in this article we content ourselves with only one diagram for each knot. We hope to extend this in a later version to a complete list of diagrams for knots with two equivalence classes. We focus on the crossing number of the symmetric diagrams and try to find minimal (transvergent) diagrams. In many cases symmetric diagrams exist with $c(K)$ crossings, so that $c_t(K)=c(K)$. These symmetric diagrams with minimal crossing number are occasionally also contained in knot tables (for instance in Rolfsen's table the knots $9_{39}$, $9_{40}$, $9_{41}$ are shown with a vertical symmetry axis). For our list we cannot guarantee that the crossing number of a diagram is minimal if it exceeds $c(K)$ because we did not exhaustively check all symmetric diagrams. \section{Motivation} \label{sec:Motivation} Symmetric unions of knots were introduced by Kinoshita and Terasaka in 1957 \cite{KinoshitaTerasaka}. They consist of the connected sum of a knot and its mirror image with reversed orientation, with crossings inserted on the symmetry axis.
The building blocks of transvergent diagrams of strongly invertible knots are very similar: A connected sum of a knot diagram and a rotated copy, with crossings inserted on the symmetry axis. Figure \ref{9_46_st_inv_symm_union} shows an example for this similarity. In a future study we will compare methods for symmetric unions and strongly invertible knots and check whether results can be transferred. \begin{figure} \caption{The knot $9_{46}$. Left: as a symmetric union (mirror symmetry), right: as a strongly invertible diagram (rotational symmetry with inversion of orientation).} \label{9_46_st_inv_symm_union} \end{figure} Another aim is the construction of strongly positive amphicheiral knots as symmetric unions with strongly invertible partial knots. The diagrams used for this construction possess two symmetry axes and a rotation around an axis perpendicular to the diagram plane yields the strongly positive amphicheiral property, see Figure \ref{12a1019}. \begin{figure} \caption{A strongly positive amphicheiral knot diagram with two axes of symmetry (showing the knot 12a1019)} \label{12a1019} \end{figure} A knot is called strongly positive amphicheiral if it has a diagram which is mapped to its mirror image by a rotation of $\pi$, preserving the orientation. Obviously, composite knots consisting of a knot and its mirror image are strongly positive amphicheiral. Prime strongly positive amphicheiral knots are rare, however: until recently only three prime strongly positive amphicheiral knots with 12 or fewer crossings were known: $10_{99}$, $10_{123}$ and 12a427. In \cite{Lamm} we found additional examples, so that currently the list consists of $10_{99}$, $10_{123}$, 12a427, 12a1019, 12a1105, 12a1202, and 12n706. A future study will extend this list to strongly positive amphicheiral knots with crossing numbers up to 16. \section{Template notation and generation of the table} \label{sec:Notation} All two-bridge knots are strongly invertible and because they are also 2-periodic they have two equivalence classes of strong invertibility, see \cite{Sakuma}. Sakuma shows how to find representatives for each knot and class. Therefore we do not need to list diagrams of two-bridge knots in detail and we concentrate on the remaining knots up to crossing number 10 (which all have bridge number 3). For two-bridge knots we will discuss the question of whether symmetric diagrams exist with $c(K)$ crossings, however. Our tabulating approach uses templates, in a similar way as in \cite{Lamm}. As half of a diagram already contains the information for the whole diagram, a template only shows the upper half, together with integer markings for the twists inserted on the axis. The axis is placed horizontally. Figure \ref{diagram_to_template} shows how a template diagram is constructed from a transvergent diagram. \begin{figure} \caption{Constructing a template representation from a transvergent diagram (the example shows $B_1(-2,-1,-1) = 10_{76}$)} \label{diagram_to_template} \end{figure} In the other direction we proceed as in Figure \ref{template_explained}: If a template diagram is given we need to reconstruct the other half and insert the crossings on the axis. \begin{figure} \caption{Converting a template into a knot diagram (the example shows $C_1(1,-2,1) = 10_{122}$)} \label{template_explained} \end{figure} The list of all strongly invertible 3-bridge knots in the Appendix was compiled in the following way: We started with Sakuma's table and converted his diagrams into templates.
Variation of the twist attributes on the axis then yielded many 10-crossing knots (using Knotscape \cite{Knotscape}). Other sources, such as Rolfsen's table and the examples in \cite{BoyleIssa} and \cite{LobbWatson}, were used in the same way. The remaining 8 cases were manually transformed into symmetric diagrams with the help of the software KLO \cite{KLO}, see the Acknowledgments. From the collection of all examples we chose one diagram with the smallest crossing number for our table. As we remarked, it is expected that transvergent diagrams exist with smaller crossing number for some of them. \section{Two-bridge knots} For two-bridge knots we use the Conway notation $C(a_1, \ldots, a_n)$ (and a shorter version $[a_1, \ldots, a_n]$ in Figure \ref{twobridge}). As usual, the plat diagram's closing patterns are different for even and odd $n$, see Figures \ref{twobridge_def} and \ref{twobridge_def_v}. \begin{figure} \caption{Horizontal symmetry for alternating two-bridge knot diagrams} \label{twobridge_def} \end{figure} \begin{wrapfigure}{R}{0.32\textwidth} \centering \includegraphics[width=0.27\textwidth]{twobridge_def_v} \caption{Vertical symmetry for alternating two-bridge knot diagrams} \label{twobridge_def_v} \end{wrapfigure} In these figures we give examples for $n=6$ (horizontal placement and axis) and $n=5$ (vertical case, but still with horizontal axis). Sakuma's representatives for the two equivalence classes use `even' continued fractions. Each two-bridge knot can be written as $C(a_1, \ldots, a_n)$ with even numbers $a_i$. In this case $n$ is also even and half of $n$ is the knot's genus. If all $a_i$ are even then the flyping modification shown in Figure \ref{twobridge_def} is possible. However, to proceed as in the figure, only every second half-twist number needs to be even. In order to find diagrams with $c_t(K)=c(K)$ we therefore try to transform the Conway notation of each 2-bridge knot so that it realizes the minimal crossing number and has even entries for every second $a_i$. For example, for 2-bridge torus knots we may use $T(2,2n+1) = C(2n+1) = C(1,2n)$. This condition is the first part of the following proposition. The second part uses a symmetric diagram which is placed vertically and is illustrated in Figure \ref{twobridge_def_v}. \begin{proposition} A two-bridge knot $K$ has a transvergent diagram with $c(K)$ crossings, if \begin{itemize} \item $K$ can be written as $C(a_1, \ldots, a_n)$ with even $n$ and all $a_i > 0$ where the twist-numbers $a_2$, $a_4$, \ldots are even, or \item $K$ can be written as $C(a_1, \ldots, a_n)$ with odd $n$ and all $a_i > 0$ and symmetric twist-numbers $a_1 = a_n$, $a_2 = a_{n-1}$, \ldots. In this case $a_{\frac{n+1}{2}}$ is necessarily odd. \end{itemize} \label{twobridgeprop} \end{proposition} \begin{conjecture} \label{two-bridge-conjecture} A two-bridge knot $K$ has a transvergent diagram with $c(K)$ crossings if and only if one of the above conditions is fulfilled. \end{conjecture} For crossing numbers less than 8 we illustrate this in Figure \ref{twobridge}. Note that for the knot $6_3 = C(2,1,1,2)$ neither of the two conditions of Proposition \ref{twobridgeprop} is realized so that we conjecture that no transvergent diagram with 6 crossings exists for this knot.\footnote{This should be easy to show by enumerating all alternating diagrams with 6 crossings. Because of the Tait conjecture for alternating knots the proof of Conjecture \ref{two-bridge-conjecture} shouldn't be too difficult either.
An extended project would then try to characterize 2-bridge knots with $c_t(K) = c(K)+1$.} \begin{figure} \caption{Two-bridge knots with $c \le 7$: Symmetric diagrams with minimal crossing number} \label{twobridge} \end{figure} There are five 2-bridge knots with 8 crossings which do not satisfy one of the two conditions: $8_7 = C(4,1,1,2)$, $8_8 = C(2,3,1,2)$, $8_9 = C(3,1,1,3)$, $8_{13} = C(3,1,1,1,2)$, $8_{14} = C(2,2,1,1,2)$. \section{Three-bridge knots} The result of the generation of transvergent diagrams as described in Section \ref{sec:Notation} is shown in Table \ref{tab:knotList}. The template names require some explanation: We use $B$, $C$, $D$ and $E$ to denote template families. Family $B$ collects curves which can be simplified by Reidemeister 1 moves. (The even simpler Family $A$ is used for the standard representations of two-bridge knots and is not yet depicted.) Curves in Family $C$ contain a trefoil or, as in case $C_4$, a related diagram with a trefoil shadow. Family $D$ features the figure eight knot as half-diagram and Family $E$ the shadow of the torus knot $T(2,5)$. In all three cases of Family $E$ the curve is a trefoil because the crossings are not alternating. \small \noindent \begin{table}[htbp] \begin{tabular}{|l|llr|l|llr|l|llr|l|llr|} \hline 8 a& $8_{5}$ & $B_1$ & 8 & 10 a& $10_{46}$ & $B_4$ & 10 & 10 a& $10_{77}$ & $B_1$ & 11 & 10 n& $10_{132}$ & $C_{3a}$ & 11 \\ & $8_{10}$ & $B_1$ & 9 & & $10_{47}$ & $B_4$ & 11 & & $10_{78}$ & $C_{3a}$ & 10 & & $10_{133}$ & $D_1$ & 11 \\ & $8_{15}$ & $C_1$ & 8 & & $10_{48}$ & $B_3$ & 11 & & $10_{89}$ & $D_1$ & 10 & & $10_{134}$ & $B_3$ & 11 \\ & $8_{16}$ & $C_1$ & 8 & & $10_{49}$ & $B_3$ & 12 & & $10_{96}$ & $E_1$ & 13 & & $10_{135}$ & $B_3$ & 12 \\ & $8_{18}$ & $C_1$ & 8 & & $10_{50}$ & $B_1$ & 10 & & $10_{97}$ & $E_2$ & 13 & & $10_{136}$ & $C_1$ & 10 \\ & & & & & $10_{51}$ & $B_1$ & 11 & & $10_{99}$ & $D_1$ & 10 & & $10_{137}$ & $B_2$ & 10 \\ 8 n& $8_{19}$ & $C_1$ & 8 & & $10_{52}$ & $B_1$ & 11 & & $10_{100}$ & $C_{3a}$ & 10 & & $10_{138}$ & $D_1$ & 10 \\ & $8_{20}$ & $B_2$ & 8 & & $10_{53}$ & $B_1$ & 12 & & $10_{101}$ & $C_{3a}$ & 11 & & $10_{139}$ & $C_4$ & 10 \\ & $8_{21}$ & $C_1$ & 8 & & $10_{54}$ & $B_4$ & 11 & & $10_{103}$ & $C_1$ & 10 & & $10_{140}$ & $B_5$ & 10 \\ & & & & & $10_{55}$ & $B_4$ & 12 & & $10_{104}$ & $C_{2a}$ & 10 & & $10_{141}$ & $C_{3b}$ & 10 \\ 9 a& $9_{16}$ & $B_1$ & 9 & & $10_{56}$ & $B_4$ & 12 & & $10_{105}$ & $C_{2a}$ & 11 & & $10_{142}$ & $C_1$ & 10 \\ & $9_{22}$ & $B_1$ & 10 & & $10_{57}$ & $B_4$ & 13 & & $10_{108}$ & $C_1$ & 10 & & $10_{143}$ & $B_5$ & 11 \\ & $9_{24}$ & $B_1$ & 10 & & $10_{58}$ & $B_1$ & 10 & & $10_{111}$ & $C_1$ & 10 & & $10_{144}$ & $C_1$ & 10 \\ & $9_{25}$ & $B_1$ & 10 & & $10_{59}$ & $B_1$ & 11 & & $10_{112}$ & $C_{3a}$ & 10 & & $10_{145}$ & $C_{2a}$ & 11 \\ & $9_{28}$ & $C_6$ & 10 & & $10_{60}$ & $D_1$ & 10 & & $10_{113}$ & $C_{3a}$ & 11 & & $10_{146}$ & $C_{3a}$ & 11 \\ & $9_{29}$ & $E_2$ & 12 & & $10_{61}$ & $C_1$ & 10 & & $10_{114}$ & $C_1$ & 10 & & $10_{152}$ & $E_3$ & 11 \\ & $9_{30}$ & $B_1$ & 11 & & $10_{62}$ & $B_5$ & 11 & & $10_{116}$ & $D_1$ & 10 & & $10_{154}$ & $E_3$ & 11 \\ & $9_{34}$ & $C_1$ & 9 & & $10_{63}$ & $C_1$ & 10 & & $10_{120}$ & $C_1$ & 10 & & $10_{155}$ & $C_{3b}$ & 10 \\ & $9_{35}$ & $C_1$ & 9 & & $10_{64}$ & $C_{2a}$ & 10 & & $10_{121}$ & $C_1$ & 10 & & $10_{156}$ & $C_{2a}$ & 11 \\ & $9_{36}$ & $B_1$ & 9 & & $10_{65}$ & $B_5$ & 12 & & $10_{122}$ & $C_1$ & 10 & & $10_{157}$ & $C_{2b}$ & 10 \\ & $9_{37}$ & $C_1$ & 9 & & $10_{66}$ & $C_{3a}$ & 
10 & & $10_{123}$ & $D_1$ & 10 & & $10_{158}$ & $C_1$ & 10 \\ & $9_{38}$ & $E_1$ & 12 & & $10_{68}$ & $C_{2a}$ & 11 & & & & & & $10_{159}$ & $C_{3b}$ & 10 \\ & $9_{39}$ & $C_1$ & 9 & & $10_{69}$ & $C_{3a}$ & 11 & 10 n& $10_{124}$ & $B_6$ & 10 & & $10_{160}$ & $C_1$ & 10 \\ & $9_{40}$ & $C_1$ & 9 & & $10_{70}$ & $B_1$ & 10 & & $10_{125}$ & $B_3$ & 11 & & $10_{161}$ & $C_4$ & 10 \\ & $9_{41}$ & $C_1$ & 9 & & $10_{71}$ & $B_1$ & 11 & & $10_{126}$ & $B_3$ & 10 & & $10_{162}$ & $C_1$ & 10 \\ & & & & & $10_{72}$ & $B_1$ & 11 & & $10_{127}$ & $B_3$ & 11 & & $10_{163}$ & $C_1$ & 10 \\ 9 n& $9_{42}$ & $C_1$ & 9 & & $10_{73}$ & $B_1$ & 12 & & $10_{128}$ & $C_1$ & 10 & & $10_{164}$ & $C_1$ & 10 \\ & $9_{43}$ & $B_2$ & 9 & & $10_{74}$ & $C_5$ & 10 & & $10_{129}$ & $B_2$ & 10 & & $10_{165}$ & $C_1$ & 10 \\ & $9_{44}$ & $B_2$ & 9 & & $10_{75}$ & $C_6$ & 10 & & $10_{130}$ & $B_2$ & 10 & & & & \\ & $9_{45}$ & $C_1$ & 10 & & $10_{76}$ & $C_{2a}$ & 10 & & $10_{131}$ & $B_2$ & 11 & & & & \\ & $9_{46}$ & $C_1$ & 9 & & & & & & & & & & & & \\ & $9_{47}$ & $C_1$ & 9 & & & & & & & & & & & & \\ & $9_{48}$ & $C_1$ & 9 & & & & & & & & & & & & \\ & $9_{49}$ & $C_1$ & 9 & & & & & & & & & & & & \\ \hline \end{tabular} \caption{A list of all 118 prime strongly invertible knots with bridge number 3. The second column indicates the template where the diagram information is found and the third column contains the crossing number of the diagram.} \label{tab:knotList} \end{table} \normalsize Note, that some templates generate many diagrams and other just a few. A discussion of some of these cases will be inserted here in a later version. As remarked before, we chose one diagram for each knot for this list. For instance, the knot $10_{76}$ is assigned to template $C_{2a}$ with a symmetric diagram with 10 crossings, $10_{76} = C_{2a}(0,0,-2)$. We could also have assigned it to template $B_1$, because, as shown in Figure \ref{diagram_to_template}, $10_{76} = B_1(-2,-1,-1)$ in a 10 crossing diagram. \section{A comparison of symmetries} In this section we will compare several involutive symmetries of knots, see Figures \ref{symmetries_periodic_invertible_union} and \ref{symmetries_amphicheiral}. For a diagram $D$ we denote by $D^r$ the rotated diagram, by $-D$ the mirrored diagram (at a plane perpendicular to the diagram plane) with reversed orientation, and by $D^*$ the diagram mirrored in the diagram plane. \begin{figure} \caption{Three symmetries: Intravergent diagrams are shown on the left and transvergent diagrams on the right. Symmetric unions are symmetric to a plane of reflection and all diagrams are transvergent.} \label{symmetries_periodic_invertible_union} \end{figure} \begin{figure} \caption{Two amphicheiral symmetries: Strongly positive and negative amphicheiral knots are point symmetric and all diagrams are intravergent.} \label{symmetries_amphicheiral} \end{figure} \section{Comparison of ribbons spanning symmetric unions and strongly invertible knots} As a first topic in the comparison of methods for symmetric unions and strongly invertible knots we look at the symmetric ribbon which spans a symmetric union \cite{Lamm}. Because one half in a corresponding diagram for a strongly invertible knot differs from the symmetric union just by switching crossings we obtain a similar ribbon but with clasp singularities, see Figure \ref{band_diagram}. The clasp number of a knot $K$ is the minimum number of clasp singularities among all disks spanning $K$. 
The article \cite{KadokamiKawamura} contains a table of the clasp numbers of knots up to crossing number 10 (some of the values are not yet known). Example: For $10_{142}$, shown in Figure \ref{band_diagram} on the left, we find in \cite{KadokamiKawamura} that the clasp number is 3 and the diagram contains 3 clasp singularities. For another knot, $10_{139}$, the clasp number is 4 and it is listed in our table with template $C_4$ which generates 4 clasp singularities in the ribbon (because there are 4 crossings in the template). We conclude that this knot cannot be generated with templates $B_1$, $B_2$ or $C_3$. Therefore, transferring the use of the ribbon from the symmetric union case to the strongly invertible case may give some interesting results and, as mentioned, we propose to do that in more detail in a future article. \begin{figure} \caption{A comparison of the singularities in a band (a) left: spanning a strongly invertible knot, $10_{142}=C_1(0,4,0)$, with clasp singularities and (b) right: a symmetric union, $10_{140}$, with ribbon singularities.} \label{band_diagram} \end{figure} \section{Plausibility check for alternating diagrams and related examples} We distinguish alternating and non-alternating templates -- defined in the same way as for knot diagrams. Only alternating templates can generate transvergent diagrams for alternating knots with $c(K)$ crossings. On the other hand, non-alternating templates may generate transvergent diagrams for alternating knots, which then have necessarily more than $c(K)$ crossings. \subsection{Alternating diagrams} Table \ref{tab:knotList} and the Appendix contain 8 alternating and 10 non-alternating templates. For the 8 alternating templates we list in Table \ref{tab:alternating_templates} the alternating knots with transvergent diagrams having $c(K)$ crossings. For each template there is a specific condition on the twist numbers: the diagram generated from the template is alternating if and only if this condition is satisfied. This is a useful plausibility check for the computational results. \small \begin{table}[htbp] \begin{tabular}{|l|l|l|} \hline $B_1$ & $t_1 \le 0$, $t_2 \le 0$, $t_3 \le 0$ & $8_5$, $9_{16}$, $9_{36}$, $10_{50}$, $10_{58}$, $10_{70}$ \\ $B_4$ & $t_1 \le 0$, $t_2 \le 0$, $t_3 \le 0$ & $10_{46}$ \\ $C_1$ & $t_1 \ge 0$, $t_2 \le 0$, $t_3 \ge 0$ & all 18 alternating knots listed with template $C_1$ \\ $C_{2a}$ & $t_1 \ge 0$, $t_2 \le 0$, $t_3 \le 0$ & $10_{64}$, $10_{76}$, $10_{104}$ \\ $C_{3a}$ & $t_1 \ge 0$, $t_2 \le 0$, $t_3 \ge 0$, $t_4 \ge 0$ & $10_{66}$, $10_{78}$, $10_{100}$, $10_{112}$ \\ $C_5$ & $t_1 \le 0$, $t_2 \le 0$ & $10_{74}$ \\ $C_6$ & $t_1 \ge 0$, $t_2 \ge 0$ & $10_{75}$ \\ $D_1$ & $t_1 \ge 0$, $t_2 \ge 0$, $t_3 \le 0$, $t_4 \le 0$ & $10_{60}$, $10_{89}$, $10_{99}$, $10_{116}$, $10_{123}$ \\ \hline \end{tabular} \caption{The alternating templates, the conditions on the twist numbers for alternating diagrams and the respective alternating knots} \label{tab:alternating_templates} \end{table} \normalsize \begin{remark} There are alternating knots for which Table \ref{tab:knotList} contains non-alternating diagrams and these obviously do not satisfy the conditions in Table \ref{tab:alternating_templates}. For instance $10_{72} = B_1(-1,-1,3)$ is an alternating knot and listed with template $B_1$ as a diagram with 11 crossings. The twist numbers $t_1 = -1$ and $t_2 = -1$ satisfy the condition $t_1 \le 0$, $t_2 \le 0$ but the third twist number does not, since $t_3 > 0$. 
The similar cases $10_{47}$, $10_{54}$ and $10_{55}$ are illustrated in the next section in Figure \ref{reducible_diagrams}. \end{remark} \subsection{Alternating knots with `nearly alternating diagrams'} Table \ref{tab:knotList} and the Appendix contain some diagrams for alternating knots with $c(K)+1$ crossings which can be transformed by a `flip' to alternating diagrams with $c(K)$ crossings, see Figure \ref{reducible_diagrams} (and there is even the case $10_{55}$ with two flips, transforming a symmetric diagram with 12 crossings to a minimal alternating one). Since these flips are especially simple, we list the cases in Table \ref{tab:nearly_alternating}. In each of these cases a twist number of 2 occurs in a template where in the last section we noted the condition $t_i \le 0$, or a twist number of -2 where we noted the condition $t_i \ge 0$. This corresponds to the fact that the diagrams are not alternating inside the areas marked in gray. \small \begin{table}[htbp] \begin{tabular}{|l|l|l|} \hline $B_1$ & $8_{10}$, $9_{24}$, $9_{25}$, $9_{30}$, $10_{51}$, $10_{53}$, $10_{71}$, $10_{77}$ \\ $B_4$ & $10_{47}$, $10_{54}$, $10_{55}$ \\ $C_{2a}$ & $10_{68}$, $10_{105}$ \\ $C_{3a}$ & $10_{69}$, $10_{101}$, $10_{113}$ \\ \hline \end{tabular} \caption{Symmetric diagrams in alternating templates which can be transformed by `flips' to alternating diagrams} \label{tab:nearly_alternating} \end{table} \normalsize \begin{figure} \caption{In each case flipping the gray area(s) yields an alternating diagram: $10_{47} = B_4(-1,0,2)$, $10_{54} = B_4(2,0,-1)$ and $10_{55} = B_4(2,0,2)$} \label{reducible_diagrams} \end{figure} \begin{remark} Similar `nearly minimal' diagrams exist for non-alternating knots, for instance $10_{124}$, $10_{134}$ in the non-alternating template $B_3$, and $10_{146}$ in the alternating template $C_{3a}$. \end{remark} \section{Open questions and suggested projects} The knot invariant $c_t(K)$ is currently only known for knots for which transvergent diagrams with $c(K)$ crossings were experimentally found, and in this case $c_t(K) = c(K)$. This situation can be improved in computational or theoretical ways. \begin{project} Generate all transvergent diagrams up to $b$ crossings and determine their knot types. Since for 3-bridge knots our list contains diagrams with $\le 13$ crossings, it is enough to set $b=12$ in order to obtain $c_t(K)$ for all (strongly invertible, prime) 3-bridge knots up to crossing number 10. \end{project} \begin{project} Find a knot invariant which allows the computation of a lower bound for $c_t(K)$ for each (strongly invertible) knot $K$. \end{project} On the other hand, we would like to know more about the difference $c_t(K)-c(K)$: \begin{project} Find an upper bound for $d(n):=\max_K\{c_t(K)-c(K)|c(K) = n\}$, for strongly invertible knots $K$. It might be reasonable to study $d(n)$ separately for 2-bridge knots and knots with a larger bridge number. \end{project} \section*{Acknowledgments} I thank Marc Kegel for his contribution of 8 symmetric diagrams in the Mathoverflow discussion `Diagrams for strongly invertible knots with 10 crossings' \cite{mathoverflow} which I started in May 2022. \section{Appendix} The Appendix contains the 18 templates generating 118 diagrams of prime strongly invertible knots with bridge number 3 and crossing number $\le 10$. The twist numbers correspond to the dotted lines in the templates read from left to right. 
\small \hspace{-2.0cm} \colorbox{myboxcolour}{Family B} \noindent \parbox[t]{4.0cm}{ \centering \mbox{} \\ \includegraphics[scale=0.8]{t_B1} \\ template $B_1$ \\ \mbox{} \\ \begin{tabular}{rcr@{, }r@{, }r} $8_{5\phantom{0}}$ &=&( 0& -1& -1) \\ $8_{10}$ &=&( 0& -1& 2) \\ $9_{16}$ &=&(-1& -1& -1) \\ $9_{22}$ &=&( 0& -1& 3) \\ $9_{24}$ &=&(-1& -1& 2) \\ $9_{25}$ &=&( 0& -2& 2) \\ \end{tabular} } \parbox[t]{4.0cm}{ \mbox{} \\ \begin{tabular}{rcr@{, }r@{, }r} $9_{30}$ &=&( 0& 2& 3) \\ $9_{36}$ &=&( 0& -2& -1) \\ $10_{50}$ &=&( 0& -3& -1) \\ $10_{51}$ &=&( 0& -3& 2) \\ $10_{52}$ &=&( 0& -1& 4) \\ $10_{53}$ &=&( 0& 2& 4) \\ $10_{58}$ &=&( 0& -2& -2) \\ $10_{59}$ &=&( 0& -2& 3) \\ $10_{70}$ &=&(-1& -2& -1) \\ $10_{71}$ &=&(-1& -2& 2) \\ $10_{72}$ &=&(-1& -1& 3) \\ $10_{73}$ &=&(-1& 2& 3) \\ $10_{77}$ &=&(-2& -1& 2) \\ \end{tabular} } \noindent \parbox[t]{3.5cm}{ \centering \mbox{} \\ \includegraphics[scale=0.8]{t_B2} \\ template $B_2$ } \parbox[t]{3.5cm}{ \mbox{} \\ \begin{tabular}{lcr@{, }r} $\phantom{1}8_{20}$ &=&(-1& 1) \\ $\phantom{1}9_{43}$ &=&(-1& 2) \\ $\phantom{1}9_{44}$ &=&(-2& 1) \\ $10_{129}$ &=&(-3& 1) \\ $10_{130}$ &=&(-1& 3) \\ $10_{131}$ &=&( 2& 3) \\ $10_{137}$ &=&(-2& 2) \\ \end{tabular} } \hspace{7.0cm} \noindent \parbox[t]{4.2cm}{ \centering \mbox{} \\ \includegraphics[scale=0.8]{t_B3} \\ template $B_3$ } \parbox[t]{3.5cm}{ \mbox{} \\ \begin{tabular}{lcr@{, }r@{, }r} $10_{48}$ &=&(-1& 1& -1) \\ $10_{49}$ &=&(-1& 1& 2) \\ $10_{125}$ &=&( 1& 1& -1) \\ $10_{126}$ &=&( 0& 1& -1) \\ $10_{127}$ &=&( 0& 1& 2) \\ $10_{134}$ &=&( 0& -2& -1) \\ $10_{135}$ &=&( 0& -2& 2) \\ \end{tabular} } \noindent \parbox[t]{4.0cm}{ \centering \mbox{} \\ \includegraphics[scale=0.8]{t_B4} \\ template $B_4$ } \parbox[t]{4.0cm}{ \mbox{} \\ \begin{tabular}{rcr@{, }r@{, }r} $10_{46}$ &=&(-1& 0& -1) \\ $10_{47}$ &=&(-1& 0& 2) \\ $10_{54}$ &=&( 2& 0& -1) \\ $10_{55}$ &=&( 2& 0& 2) \\ $10_{56}$ &=&(-1& 2& -1) \\ $10_{57}$ &=&(-1& 2& 2) \\ \end{tabular} } \parbox[t]{3.5cm}{ \centering \mbox{} \\ \includegraphics[scale=0.8]{t_B5} \\ template $B_5$ } \parbox[t]{3.5cm}{ \mbox{} \\ \begin{tabular}{lcr@{, }r@{, }r} $10_{62}$ &=&(1& 1& -1) \\ $10_{65}$ &=&(1& -2& -1) \\ $10_{140}$ &=&(1& 0& -1) \\ $10_{143}$ &=&(1& -1& -1) \\ \end{tabular} } \noindent \parbox[t]{4.0cm}{ \centering \mbox{} \\ \includegraphics[scale=0.8]{t_B6} \\ template $B_6$ } \parbox[t]{3.5cm}{ \mbox{} \\ \begin{tabular}{lcr@{, }r} $10_{124}$ &=&( 1& 1) \\ \end{tabular} } \hspace{-2.0cm} \colorbox{myboxcolour}{Family C} \noindent \parbox[t]{4.0cm}{ \centering \mbox{} \\ \includegraphics[scale=0.8]{t_C1} \\ template $C_1$ \\ \mbox{} \\ \begin{tabular}{lcr@{, }r@{, }r} $\phantom{1}8_{15}$ &=&( 0& 0& 2) \\ $\phantom{1}8_{16}$ &=&( 0& -1& 1) \\ $\phantom{1}8_{18}$ &=&( 1& 0& 1) \\ $\phantom{1}8_{19}$ &=&( 0& 2& 0) \\ $\phantom{1}8_{21}$ &=&(-2& 0& 0) \\ $\phantom{1}9_{34}$ &=&( 1& 0& 2) \\ $\phantom{1}9_{35}$ &=&( 0& -3& 0) \\ $\phantom{1}9_{37}$ &=&( 0& 0& 3) \\ $\phantom{1}9_{39}$ &=&( 0& -1& 2) \\ $\phantom{1}9_{40}$ &=&( 1& -1& 1) \\ $\phantom{1}9_{41}$ &=&( 0& -2& 1) \\ $\phantom{1}9_{42}$ &=&( 0& 2& 1) \\ $\phantom{1}9_{45}$ &=&( 1& 1& 2) \\ $\phantom{1}9_{46}$ &=&( 0& 3& 0) \\ \end{tabular} } \parbox[t]{4.2cm}{ \centering \mbox{} \\ \begin{tabular}{lcr@{, }r@{, }r} $\phantom{1}9_{47}$ &=&(-2& 0& 1) \\ $\phantom{1}9_{48}$ &=&(-3& 0& 0) \\ $\phantom{1}9_{49}$ &=&(-2& -1& 0) \\ $10_{61}$ &=&( 0& -4& 0) \\ $10_{63}$ &=&( 0& 0& 4) \\ $10_{103}$ &=&( 0& -1& 3) \\ $10_{108}$ &=&( 0& -3& 1) \\ $10_{111}$ &=&( 0& -2& 2) \\ $10_{114}$ &=&( 1& 0& 3) \\ 
$10_{120}$ &=&( 2& 0& 2) \\ $10_{121}$ &=&( 1& -1& 2) \\ $10_{122}$ &=&( 1& -2& 1) \\ $10_{128}$ &=&( 0& 2& 2) \\ $10_{136}$ &=&( 1& 2& 1) \\ $10_{142}$ &=&( 0& 4& 0) \\ $10_{144}$ &=&(-4& 0& 0) \\ $10_{158}$ &=&(-3& -1& 0) \\ $10_{160}$ &=&( 0& 3& 1) \\ $10_{162}$ &=&(-2& -2& 0) \\ $10_{163}$ &=&(-3& 0& 1) \\ $10_{164}$ &=&(-2& -1& 1) \\ $10_{165}$ &=&(-2& 0& 2) \\ \end{tabular} } \noindent \parbox[t]{3.5cm}{ \centering \mbox{} \\ \includegraphics[scale=0.8]{t_C2a} \\ template $C_{2a}$ } \parbox[t]{3.5cm}{ \mbox{} \\ \begin{tabular}{lcr@{, }r@{, }r} $10_{64}$ &=&( 0& -1& -1) \\ $10_{68}$ &=&( 0& -1& 2) \\ $10_{76}$ &=&( 0& 0& -2) \\ $10_{104}$ &=&( 1& 0& -1) \\ $10_{105}$ &=&( 1& 0& 2) \\ $10_{145}$ &=&( 0& 2& -1) \\ $10_{156}$ &=&( 1& 1& -1) \\ \end{tabular} } \hspace{7.9cm} \noindent \parbox[t]{3.5cm}{ \centering \mbox{} \\ \includegraphics[scale=0.8]{t_C2b} \\ template $C_{2b}$ } \parbox[t]{3.5cm}{ \mbox{} \\ \begin{tabular}{lcr@{, }r@{, }r} $10_{157}$ &=&( 1& 0& 1) \\ \end{tabular} } \hspace{-1.5cm} \noindent \parbox[t]{5.5cm}{ \centering \mbox{} \\ \includegraphics[scale=0.8]{t_C3a} \\ template $C_{3a}$ } \parbox[t]{4.2cm}{ \mbox{} \\ \begin{tabular}{lcr@{, }r@{, }r@{, }r} $10_{66}$ &=&( 0& 0& 1& 1) \\ $10_{69}$ &=&( 0& 0& 1& -2) \\ $10_{78}$ &=&( 0& 0& 0& 2) \\ $10_{100}$ &=&( 0& -1& 0& 1) \\ $10_{101}$ &=&( 0& -1& 0& -2) \\ $10_{112}$ &=&( 1& 0& 0& 1) \\ $10_{113}$ &=&( 1& 0& 0& -2) \\ $10_{132}$ &=&( 0& -1& -1& 1) \\ $10_{146}$ &=&( 0& 0& -2& 1) \\ \end{tabular} } \noindent \parbox[t]{5.5cm}{ \centering \mbox{} \\ \includegraphics[scale=0.8]{t_C3b} \\ template $C_{3b}$ \\ \mbox{} \\ \begin{tabular}{lcr@{, }r@{, }r@{, }r} $10_{141}$ &=&( 0& 0& -1& -1) \\ $10_{155}$ &=&( 0& -1& 0& -1) \\ $10_{159}$ &=&( 1& 0& 0& -1) \\ \end{tabular} } \noindent \parbox[b]{4.0cm}{ \centering \mbox{} \\ \includegraphics[scale=0.8]{t_C4} \\ template $C_4$ \\ \mbox{} \\ \begin{tabular}{rcr@{, }r} $10_{139}$ &=&( 1& 1) \\ $10_{161}$ &=&(-1& 1) \\ \end{tabular} } \hspace{0.8cm} \parbox[b]{3.8cm}{ \centering \mbox{} \\ \includegraphics[scale=0.8]{t_C5} \\ template $C_5$ \\ \mbox{} \\ \begin{tabular}{rcr@{, }r} $10_{74}$ &=&(-1&-1) \\ \end{tabular} } \hspace{0.8cm} \parbox[b]{3.8cm}{ \centering \mbox{} \\ \includegraphics[scale=0.8]{t_C6} \\ template $C_6$ \\ \mbox{} \\ \begin{tabular}{rcr@{, }r} $9_{28}$ &=&(-2& 0) \\ $10_{75}$ &=&( 1& 1) \\ \end{tabular} } \hspace{-2.0cm} \colorbox{myboxcolour}{Family D} \noindent \parbox[t]{4.5cm}{ \centering \mbox{} \\ \includegraphics[scale=0.8]{t_D1} \\ template $D_1$ } \parbox[t]{3.5cm}{ \mbox{} \\ \begin{tabular}{lcr@{, }r@{, }r@{, }r} $10_{60}$ &=&( 2& 0& 0& 0) \\ $10_{89}$ &=&( 1& 0& 0& -1) \\ $10_{99}$ &=&( 0& 1& 0& -1) \\ $10_{116}$ &=&( 1& 1& 0& 0) \\ $10_{123}$ &=&( 1& 0&-1& 0) \\ $10_{133}$ &=&( 1& 1& 0& 1) \\ $10_{138}$ &=&( 0& 0& 2& 0) \\ \end{tabular} } \hspace{-2.0cm} \colorbox{myboxcolour}{Family E} \noindent \parbox[t]{4.5cm}{ \centering \mbox{} \\ \includegraphics[scale=0.8]{t_E1} \\ template $E_1$ } \parbox[t]{3.5cm}{ \mbox{} \\ \begin{tabular}{rcr@{, }r} $9_{38}$ &=&(-1& 1) \\ $10_{96}$ &=&(-1& 2) \\ \end{tabular} } \noindent \parbox[t]{4.5cm}{ \centering \mbox{} \\ \includegraphics[scale=0.8]{t_E2} \\ template $E_2$ } \parbox[t]{3.5cm}{ \mbox{} \\ \begin{tabular}{rcr@{, }r} $9_{29}$ &=&( 1& 1) \\ $10_{97}$ &=&( 2& 1) \\ \end{tabular} } \noindent \parbox[t]{4.5cm}{ \centering \mbox{} \\ \includegraphics[scale=0.8]{t_E3} \\ template $E_3$ } \parbox[t]{3.0cm}{ \mbox{} \\ \begin{tabular}{rcr} $10_{152}$ &=&( 1) \\ $10_{154}$ &=&(-1) \\ \end{tabular} } 
\noindent Christoph Lamm \\ \noindent R\"{u}ckertstr. 3, 65187 Wiesbaden \\ \noindent Germany \\ \noindent e-mail: [email protected] \end{document}
\begin{document} \newtheorem{theorem}{Theorem} \newtheorem{lemma}{Lemma} \newtheorem{corollary}{Corollary} \title{Unambiguous discrimination of mixed states: A description based on system-ancilla coupling} \author{Xiang-Fa Zhou} \email{[email protected]} \author{Yong-Sheng Zhang} \email{[email protected]} \author{Guang-Can Guo} \email{[email protected]} \affiliation{\textit{Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China}} \begin{abstract} We propose a general description of the unambiguous discrimination of mixed states based on the system-environment coupling, and present a procedure to reduce this problem to a standard semidefinite programming problem. In the two-state case, we introduce canonical vectors and partly reduce the problem to the discrimination between pairs of canonical vectors. By considering the positivity of two-by-two matrices, we obtain a series of new upper bounds on the total success probability which depend on both the prior probabilities and the specific state structures. \end{abstract} \pacs{03.67.-a, 03.65.Ta} \maketitle \section{Introduction} Quantum state discrimination (QSD) is one of the fundamentally important problems in quantum information science. Especially in quantum communication and quantum cryptography, many novel schemes are based on the fact that nonorthogonal states cannot be discriminated with certainty. Hence the study of quantum state discrimination is closely related to the security of quantum cryptographic protocols. On the other hand, since there is no measurement that can perform a perfect identification, several strategies have been proposed in QSD based on different criteria. One of these is minimum-error discrimination \cite{mim-error}, which permits incorrect outcomes during the measurement procedure. The other one is unambiguous discrimination (UD) of quantum states. This sort of discrimination procedure never gives an erroneous result, but sometimes it fails. Here we consider the latter case, which has received much attention recently. In the pure-state case, UD has been widely considered \cite{early1,early2}. In the mixed-state case, however, it appears to be a hard problem. In many earlier works, some useful bounds on the total success probability $P$, together with several useful reduction theorems, have been presented \cite{mix-early, Rudolph, mix-condition, Raynal1, Hergou, Raynal2,Raynal3, bergou}. However, solving this problem completely is far from easy. Moreover, even for the simplest case, UD between two mixed states, which has been widely studied recently, many questions remain open. The standard formulation of UD among mixed states is the following: given a set of mixed states $\{\rho_1, \rho_2, \ldots, \rho_N \}$ with the corresponding prior probabilities $\{ \eta_1, \eta_2, \ldots, \eta_N \}$, the aim of discriminating these states unambiguously is to find an $(N+1)$-element positive operator-valued measure (POVM) $\{E_0, E_1, E_2, \ldots, E_N \}$ with $\sum_{i=0}^N E_i=I$ and $\mbox{Tr}(E_k\rho_l)=p_k\delta_{kl}$ $(k,l\neq0)$, such that the measurement operator $E_k$ gives an outcome, with probability $p_k$, only when the input state is $\rho_k$. Here $E_0$ denotes the inconclusive measurement for which the identification fails. 
The average failure probability is described by $Q=\sum_{k=1}^N Q_k$ with $Q_k=\eta_k \mbox{Tr}(E_0 \rho_k)$ being the failure probability of identifying $\rho_k$. Equivalently, one can also concentrate on the total success probability $P=1-Q=\sum_{k=1}^N \eta_k \mbox{Tr}(E_k\rho_k)$. From a general viewpoint, UD can be regarded as a physically accessible transformation on a finite number of input states $\xi: \{\rho_1, \rho_2, \ldots, \rho_N \}\rightarrow\{\sigma_1, \sigma_2, \ldots, \sigma_N \}$ \cite{uniform}. The essential features of this transformation are that it is probabilistic and error-free, and that the output states $\sigma_k$ must be orthogonal to each other so that they can be identified perfectly. There are several equivalent approaches to describe a completely positive (CP) map \cite{book}. For example, it can be represented in a Kraus operator-sum form. It can also be implemented by a unitary transformation on the system plus an ancilla, i.e. $\xi(\rho)=\mbox{Tr}_{E'}[U \rho \otimes \rho_E U^\dag (I \otimes P_{E'})]$, where $\rho_E$ is the initial state of the ancilla system, $I$ denotes the identity operator in the output Hilbert space ${\cal H}_2$, $P_{E'}$ is a projector in ${\cal H}_{E'}$, and ${\cal H}_1 \otimes {\cal H}_E = {\cal H}_2 \otimes {\cal H}_{E'}$. In this paper, we study the unambiguous discrimination of mixed states from the system-ancilla coupling viewpoint. By constructing the overall unitary transformation on the combination of the input and the auxiliary system, we obtain necessary and sufficient conditions for the existence of a UD strategy. We point out that, in the general case, finding the optimal UD strategy can be reduced to a standard semidefinite programming problem. In particular, in the case of UD between two mixed states, we obtain a series of new upper bounds on the success probability which are closely related to the structure of the input quantum states together with the ratio of the prior probabilities. In some sense our result confirms the conjecture made by Bergou \emph{et al.} \cite{bergou}. \section{General description of unambiguous discrimination} Let us start with the general UD of $N$ mixed states. Any mixed state $\rho_k$ can always be regarded as a mixture of pure states, i.e. $\rho_k=\sum_m|\tilde{\psi}_m^{(k)}\rangle \langle \tilde{\psi}_m^{(k)}|$, where $|\tilde{\psi}_m^{(k)}\rangle$ are nonnormalized state vectors. Here, for simplicity, we assume that the $|\tilde{\psi}_m^{(k)}\rangle$ are linearly independent. First, we suppose that the intersection of the supports of any two density matrices $\rho_k$ and $\rho_l$ is trivial (it contains only the zero vector). This also indicates that all vectors $\{|\tilde{\psi}_m^{(k)}\rangle, \ldots, |\tilde{\psi}_n^{(l)}\rangle \}$ are linearly independent. By introducing a suitable auxiliary system, we consider the following unitary realization of the CP map $\xi$ \begin{eqnarray} U |\tilde{\psi}^{(k)}_{m} \rangle_1 |0\rangle_E = |\tilde{\phi}^{(k)}_{m} \rangle_{2a} |P_0\rangle_p + |\tilde{\beta}^{(k)}_{m} \rangle_{2ap}. \end{eqnarray} Here ${\cal H}_{E'}={\cal H}_{a} \otimes {\cal H}_{p}$, $|0\rangle$ is the fixed initial state of the environment, $|P_0\rangle$ is the state of the probe system satisfying $\langle P_0|\widetilde{\beta}^{(k)}_{m}\rangle=0$, and we also use the tilde $\widetilde{\mbox{ }}$ to denote a nonnormalized state vector. 
The output state $\sigma_k$ can be obtained by tracing over the subsystem $a$ after we get a measurement outcome corresponding to the probe $|P_0\rangle$, i.e. $\sigma_k=\sum_m\mbox{Tr}_a[|\tilde{\phi}^{(k)}_{m} \rangle \langle \tilde{\phi}^{(k)}_{m}| ]$. On the other hand, if the intersection of the supports of two density matrices $\rho_k$ and $\rho_l$ is not trivial, there exists at least one nonzero state vector $|\psi\rangle \in supp(\rho_k) \cap supp(\rho_l)$. From the definition of the CP map, we have \begin{eqnarray} U |\psi\rangle |0\rangle &=& |\tilde{\phi}^{(k)}\rangle |P_0\rangle + |\tilde{\beta}^{(k)} \rangle \nonumber \\ &=& |\tilde{\phi}^{(l)}\rangle |P_0\rangle + |\tilde{\beta}^{(l)}\rangle. \label{supprot} \end{eqnarray} Since $|\tilde{\phi}^{(k)}\rangle \neq |\tilde{\phi}^{(l)} \rangle$ (the output states are different from each other), Eq. ($\ref{supprot}$) is satisfied only when $|\tilde{\phi}^{(k)}\rangle = |\tilde{\phi}^{(l)} \rangle=0$, hence any state contained in $supp(\rho_k) \cap supp(\rho_l)$ makes no contribution to the desired transformation. Thus it is enough to consider the case $supp(\rho_k) \cap supp(\rho_l)= \{ 0 \}$, which reproduces the known results \cite{mix-condition}. Preservation of inner products under the unitary transformation leads to the following equation \begin{eqnarray} \tilde{X}-\tilde{Y}=\tilde{B} \ge 0 \label{main} \end{eqnarray} with \begin{eqnarray} \tilde{w}= \left ( \begin{array}{ccc} \tilde{w}_{kk} & \ldots & \tilde{w}_{kl} \\ \vdots & \ddots & \vdots \\ \tilde{w}_{lk} & \cdots & \tilde{w}_{ll} \end{array} \right ) \,\,\,\, (w \in \{X, Y, B\}). \end{eqnarray} Here the $\tilde{w}_{kl}$ are block matrices with $(\tilde{X}_{kl})_{mn}=\langle \tilde{\psi}^{(k)}_m|\tilde{\psi}^{(l)}_n \rangle$, $(\tilde{Y}_{kl})_{mn}=\langle \tilde{\phi}^{(k)}_m|\tilde{\phi}^{(l)}_n \rangle$, and $(\tilde{B}_{kl})_{mn}=\langle \tilde{\beta}^{(k)}_m|\tilde{\beta}^{(l)}_n \rangle$ respectively. All three matrices ($\tilde{X}, \tilde{Y}, \tilde{B}$) are Hermitian and positive semidefinite. Since the $\sigma_k$ are orthogonal to each other, we have $\langle \tilde{\phi}^{(k)}_{m}| \tilde{\phi}^{(l)}_{n} \rangle=0$ $(k\neq l)$. This indicates that $\tilde{Y}$ is quasi-diagonal and can be written as $ \tilde{Y}= diag\{\tilde{Y}_{kk}, \ldots, \tilde{Y}_{ll} \}$. Conversely, if there exists a positive semidefinite matrix $\tilde{Y}$ satisfying Eq. ($\ref{main}$), we can always choose suitable state vectors $|\tilde{\phi}^{(k)}_{m} \rangle$ and $|\tilde{\beta}^{(k)}_{m} \rangle$ such that $\tilde{X}=\tilde{B}+\tilde{Y}$. With the standard Gram-Schmidt orthogonalization procedure, the desired unitary transformation can then be easily obtained. We conclude the above discussion with the following theorem. \begin{theorem} $N$ mixed states $\{\rho_1, \rho_2, \cdots \rho_N \}$ can be unambiguously discriminated if and only if there exists a positive semidefinite quasi-diagonal matrix $\tilde{Y}$ such that $\tilde{X}- \tilde{Y} \geq 0$. Moreover, if the input states are chosen with prior probabilities $\{ \eta_1, \eta_2,\cdots \eta_N\}$ and $\sum_k \eta_k=1$, the total success probability is $P=\sum_k \eta_k \mbox{Tr}(\tilde{Y}_{kk})$. \end{theorem} This theorem characterizes the general properties of UD among $N$ mixed states in the system-ancilla framework. One can also easily check that it is consistent with earlier works \cite{mix-early, Rudolph, mix-condition, Raynal1, Hergou, Raynal2,Raynal3, bergou}. 
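To make the criterion of Theorem 1 concrete, the following minimal numerical sketch (not part of the original analysis; it assumes the third-party Python packages \texttt{numpy} and \texttt{cvxpy}, and all parameter values are purely illustrative) maximizes $\sum_k \eta_k \mbox{Tr}(\tilde{Y}_{kk})$ over positive semidefinite quasi-diagonal matrices $\tilde{Y}$ subject to $\tilde{X}-\tilde{Y}\geq0$, for the rank-two example considered later in the Examples section.
\begin{verbatim}
import numpy as np
import cvxpy as cp

eta1, eta2 = 0.3, 0.7              # prior probabilities (illustrative)
c1, c2 = np.cos(0.9), np.cos(0.4)  # cos(theta_1), cos(theta_2)

# Gram matrix X~ of the non-normalized vectors (|r~_1>, |r~_2>, |s~_1>, |s~_2>),
# with |r~_m> = |r_m>/sqrt(2), |s~_m> = |s_m>/sqrt(2), <r_m|s_n> = cos(theta_m) delta_mn.
X = 0.5 * np.array([[1, 0, c1, 0],
                    [0, 1, 0, c2],
                    [c1, 0, 1, 0],
                    [0, c2, 0, 1]])

Y = cp.Variable((4, 4), symmetric=True)   # the quasi-diagonal matrix Y~
constraints = [X - Y >> 0,                # X~ - Y~ is positive semidefinite
               Y >> 0,                    # Y~ is positive semidefinite
               Y[0:2, 2:4] == 0]          # vanishing off-diagonal blocks (quasi-diagonal)
objective = cp.Maximize(eta1 * cp.trace(Y[0:2, 0:2]) + eta2 * cp.trace(Y[2:4, 2:4]))
prob = cp.Problem(objective, constraints)
prob.solve()

F = (c1 + c2) / 2                         # fidelity of the two input states
print("optimal success probability P:", prob.value)
print("fidelity bound 1 - 2*sqrt(eta1*eta2)*F:", 1 - 2 * np.sqrt(eta1 * eta2) * F)
\end{verbatim}
For priors with $\mbox{cos}\theta_2 \le \sqrt{\eta_1/\eta_2} \le 1/\mbox{cos}\theta_2$ the two printed numbers are expected to coincide, while outside this region the optimum should stay strictly below the fidelity-based bound, in line with the discussion in the following sections.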
In a more realistic situation, one often concentrates on the total success probability of such a physical transformation. In other words, the probability $P$ should be made as high as possible. Mathematically, this is equivalent to maximizing \begin{eqnarray} P=\sum_k \eta_k \mbox{Tr}(\tilde{Y}_{kk}) \end{eqnarray} under the constraints \begin{eqnarray} \label{sdp2} \tilde{X} - \tilde{Y} \geq 0 \quad \mbox{and} \quad \tilde{Y} \geq 0. \end{eqnarray} Given the input mixed states, the matrix $\tilde{X}$ is known exactly. Therefore the only task is to find the optimal positive semidefinite matrix $\tilde{Y}$ which maximizes the success probability $P$. By defining a series of new matrices $ F_0=diag\{\tilde{X},0 \}$, $ F_{k}^{pq} = diag\{ E_k^{pq}, -E_k^{pq}\}$, and $G_k^{pq} = diag \{i E_k^{pq}, -i E_k^{pq} \}$, where $i$ is the imaginary unit, and $E_k^{pq}$ are matrices corresponding to the block matrices $\tilde{Y}_{kk}$ with $(E_k^{pq})_{mn}=\delta_{mp}\delta_{nq}$, the problem under consideration can be reformulated as \begin{eqnarray} {\hbox{\space \raise-2mm\hbox{$\textstyle \max \atop \scriptstyle \tilde{Y}$} \space}} \sum_k \eta_k \mbox{Tr}(\tilde{Y}_{kk}) \end{eqnarray} subject to \begin{eqnarray}\label{sdp} F_0 &-& \sum_{k,p,q } \left \{ Re[(\tilde{Y}_{kk})_{pq}] F_k^{pq} + Im[(\tilde{Y}_{kk})_{pq}] G_k^{pq} \right \} \geq 0, \end{eqnarray} where $Re[(\tilde{Y}_{kk})_{pq}]$ and $Im[(\tilde{Y}_{kk})_{pq}]$ represent the real and imaginary parts of the matrix elements $(\tilde{Y}_{kk})_{pq}$ respectively. This is a standard semidefinite programming (SDP) problem \cite{SDP}, and it can be solved efficiently by numerical methods (one can also find another way to reduce this to an SDP problem in \cite{sdp2}, which is equivalent to our result). Therefore, in principle, the optimal success probability of UD of mixed states can be found numerically. Once the optimal matrix $\tilde{Y}$ has been found, the corresponding unitary implementation of the discrimination operation can be constructed by the standard procedure. \section{Unambiguous discrimination of two mixed states} In the above discussion, we have given a general description of UD among $N$ mixed input states. To be specific, in the following we will focus on a particular case, i.e. UD between two mixed states. This is a basic and very important case in the study of UD, and much attention has been paid to it recently. In \cite{Rudolph}, Rudolph \emph{et al.} presented a lower bound on the failure probability $Q$; later, it was pointed out that there exist mixed states for which this lower bound cannot be reached for any prior probabilities. Based on these facts, Raynal \emph{et al.} \cite{Raynal2, Raynal3} investigated a large class of two-mixed-state discrimination problems and found necessary and sufficient conditions for two mixed states to saturate these bounds. In all these works, $Q$ is considered in three different regions, depending on the ratio between the two prior probabilities. Recently, Bergou \emph{et al.} \cite{bergou} considered the discrimination of two subspaces and found that, for this special case, there are several parameter regions which give different minimal failure probabilities. The regions depend on both the prior probabilities and the specific structure of the two subspaces. The lower bound $2\sqrt{\eta_1 \eta_2}F$ of the failure probability $Q$ can be reached only when the prior probabilities lie in some specific regions. 
Later they conjecture that this phenomenon occurs for any two mixed states. In the following, we will show that this result is indeed universal. When restricted to two-state case, Eq. ($\ref{main}$) can be simplified as \begin{eqnarray} \label{two} \left ( \begin{array}{cc} \tilde{X}_{11} & \tilde{X}_{12} \\ \tilde{X}_{21} & \tilde{X}_{22} \end{array} \right ) - \left ( \begin{array}{cc} \tilde{Y}_{11} & 0 \\ 0 & \tilde{Y}_{22} \end{array} \right ) \ge 0, \end{eqnarray} where $\tilde{X}_{kl}$ arise from the decompositions of $\rho_1$ and $\rho_2$. Usually there exist many other ensembles which can generate the same operators, i.e. $\rho_1=(|\tilde{\psi}_1^{(1)}\rangle, |\tilde{\psi}_2^{(1)}\rangle, \ldots)(\langle \tilde{\psi}_1^{(1)}|, \langle \tilde{\psi}_2^{(1)}|, \ldots)^T=(|\tilde{\psi}_1^{(1)}\rangle, |\tilde{\psi}_2^{(1)}\rangle, \ldots) U^\dag U (\langle \tilde{\psi}_1^{(1)}|, \langle \tilde{\psi}_2^{(1)}|, \ldots)^T=(|\tilde{r}_1\rangle, |\tilde{r}_2\rangle, \ldots)(\langle \tilde{r}_1|, \langle \tilde{r}_2|, \ldots)^T$, where $U$ ($V$ for $\rho_2$) is a unitary matrix and $T$ represents the transpose of the matrix. This is known as the unitary freedom for density matrices \cite{book2}. Hence we can also write down the correspondence of Eq. ($\ref{main}$) according to this new decomposition \begin{eqnarray} \label{two-2} \tilde{X'}-\tilde{Y'} \ge 0. \end{eqnarray} Since $\tilde{X'}=\mbox{diag}\{U,V\}\tilde{X}\mbox{diag}\{U^\dag,V^\dag\}$, we can immediately obtain that this will not affect the total success probability $P$ we consider here (This is also general for $N$ input mixed states). Keeping in mind that $U$ and $V$ can be arbitrary, we can choose the two matrices appropriately such that $U \tilde{X}_{12} V^\dag= \mbox{diag}\{\mbox{diag}\{ f_1, f_2, \ldots, f_t \}, \overrightarrow{\textbf{0}} \}$, where we assume $\tilde{X}_{11}$ and $\tilde{X}_{22}$ are $u\times u$ and $v\times v$ matrices respectively with $u\le v$, $f_m$ are the singular values of $X_{12}$, and $\overrightarrow{\textbf{0}}$ is a $(u-t)\times (v-t)$ zero matrix. This implies there exist some kinds of decompositions of $\rho_1$ and $\rho_2$, namely, $\rho_1=\sum_m|\tilde{r}_m\rangle \langle \tilde{r}_m|$ and $\rho_2=\sum_n|\tilde{s}_n\rangle \langle \tilde{s}_n|$, which satisfies the following equations \begin{eqnarray} \label{vector} \langle \tilde{r}_m| \tilde{s}_n \rangle = \left \{ \begin{array}{ll} f_m \delta_{mn} & (m,n) \le t, \\ 0 & otherwise. \end{array} \right. \end{eqnarray} The singular values $f_m$ have very interesting properties and we characterize this by the following theorem \cite{Uhlmann}. \begin{theorem} Given two mixed states density matrices $\rho_1$ and $\rho_2$, there exist two sets of canonical vectors $\{|\tilde{r}_1\rangle, |\tilde{r}_2\rangle, \ldots \}$ and $\{|\tilde{s}_1\rangle, |\tilde{s}_2\rangle, \ldots \}$, which generate $\rho_1$ and $\rho_2$ respectively, such that Eq. ($\ref{vector}$) is satisfied. And the fidelity of the two density matrices can be formulated as $F=\mbox{Tr}\sqrt{\rho_1^{1/2}\rho_2\rho_1^{1/2}}=\sum_m f_m$. \end{theorem} \emph{Proof}: The only thing we should do now is to prove the second part of this theorem. Consider the spectral decompositions of $\rho_1$ and $\rho_2$ \begin{eqnarray} \rho_1=\sum_i \alpha_i |\alpha_i \rangle \langle \alpha_i|, \,\,\,\, \rho_2=\sum_h \beta_h |\beta_h \rangle \langle \beta_h|. 
\end{eqnarray} According to the definition of the fidelity $F$, we obtain \begin{eqnarray} (\rho_1^{1/2}\rho_2\rho_1^{1/2})_{ij}=& \sum_h & \sqrt{\alpha_i}\sqrt{\beta_h}\langle \alpha_i|\beta_h\rangle \nonumber \\ &\mbox{}& \!\!\cdot \sqrt{\alpha_j}\sqrt{\beta_h}\langle \beta_h|\alpha_j \rangle. \end{eqnarray} Now we define a new matrix \begin{eqnarray} A_{ih} = \sqrt{\alpha_i}\sqrt{\beta_h}\langle \alpha_i|\beta_h\rangle. \end{eqnarray} The fidelity $F$ can then be rewritten as \begin{eqnarray} F=\mbox{Tr}\sqrt{\rho_1^{1/2}\rho_2\rho_1^{1/2}}=\mbox{Tr}\sqrt{AA^\dag}. \end{eqnarray} Since $A$ is a complex matrix, using a singular value decomposition, we have $A=U_1 \mbox{diag}\{\mbox{diag}\{ f_1, f_2, \ldots, f_t \}, \overrightarrow{\textbf{0}} \} V_1$ with $f_m \ge 0$ for all $1\le m \le t$. Thus the fidelity becomes \begin{eqnarray} F=\mbox{Tr}\sqrt{AA^\dag}=\sum_m f_m. \end{eqnarray} On the other hand, because of the unitary freedom in the ensemble representation of density matrices, we can find two unitary operations $U_2$ and $V_2$ such that $U_2 X_{12} V_2=A$. Therefore $A$ and $X_{12}$ have the same singular values, which completes the proof. Theorem 2 indicates that for any two mixed states it is always possible to find two sets of canonical vectors $\{|\tilde{r}_1\rangle, |\tilde{r}_2\rangle, \ldots \}$ and $\{|\tilde{s}_1\rangle, |\tilde{s}_2\rangle, \ldots \}$ so that $|\tilde{r}_m\rangle$ only has a nonzero overlap with $|\tilde{s}_m\rangle$. When $m, n > t$, one can easily check that $|\tilde{r}_m\rangle$ and $|\tilde{s}_n\rangle$ lie in the subspaces orthogonal to the supports of $\rho_2$ and $\rho_1$ respectively. From the reduction theorem in \cite{Raynal1}, we conclude that UD between $\rho_1$ and $\rho_2$ is equivalent to that between the two newly defined density matrices $\rho_1'=\sum_{m=1}^t |\tilde{r}_m\rangle \langle \tilde{r}_m|/N_1$ and $\rho_2'=\sum_{n=1}^t |\tilde{s}_n\rangle \langle \tilde{s}_n|/N_2$ with $N_1=\mbox{Tr}(\sum_{m=1}^t |\tilde{r}_m\rangle \langle \tilde{r}_m|)$ and $N_2=\mbox{Tr}(\sum_{n=1}^t |\tilde{s}_n\rangle \langle \tilde{s}_n|)$ being the corresponding normalization factors. According to the system-ancilla model (Theorem 1), Eq. ($\ref{two}$) can always be reduced to a $2t \times 2t$ matrix inequality \begin{eqnarray}\label{three} \left ( \begin{array}{cc} \tilde{X}_{11}-\tilde{Y}_{11} & \mbox{diag}\{f_1, \ldots, f_t\} \\ \mbox{diag}\{f_1, \ldots, f_t\} & \tilde{X}_{22}-\tilde{Y}_{22} \end{array} \right ) \ge 0, \end{eqnarray} where we have used the same notations for simplicity. Equation ($\ref{three}$) supplies enough information to establish our main results. Indeed, since $\tilde{X}-\tilde{Y}$ is positive semidefinite, standard linear algebra tells us that every principal submatrix of $\tilde{X}-\tilde{Y}$ is also positive semidefinite, i.e. \begin{eqnarray} \left ( \begin{array}{cc} r_m-y_m & f_m \\ f_m & s_m-z_m \end{array} \right ) \ge 0, \end{eqnarray} where we have used $y_m$ and $z_m$ to denote the diagonal elements of $\tilde{Y}_{11}$ and $\tilde{Y}_{22}$ respectively, and $r_m$ ($s_m$) is the modulus of the vector $|\tilde{r}_m \rangle$ ($|\tilde{s}_m \rangle$). Therefore, by introducing the canonical state vectors, the question can be partly reduced to UD between pairs of state vectors $|\tilde{r}_m \rangle$ and $|\tilde{s}_m\rangle$. This question has been solved in many earlier works, and the results are listed as follows \begin{eqnarray} \label{newbound} P_m \!\!\! &=& \!\! \eta_1 y_m + \eta_2 z_m \nonumber \\ &\le& P_m^{max}\nonumber \\ & = &\!\! \left \{ \begin{array}{ll} \eta_2(s_m - f_m^2/r_m) & 0 \le \sqrt{\frac{\eta_1}{\eta_2}} \le \frac{f_m}{r_m}, \\ \eta_1 r_m + \eta_2 s_m -2\sqrt{\eta_1 \eta_2}f_m & \frac{f_m}{r_m} \le \sqrt{\frac{\eta_1}{\eta_2}} \le \frac{s_m}{f_m}, \\ \eta_1(r_m - f_m^2/s_m) & \sqrt{\frac{\eta_1}{\eta_2}} \ge \frac{s_m}{f_m}. \end{array} \right .\nonumber\\ \mbox{ } \end{eqnarray} The above expression shows that, for every $m$, the maximal value that $P_m$ can achieve depends on the specific configuration of $\sqrt{\frac{\eta_1}{\eta_2}}$, $r_m$, $s_m$, and $f_m$. Generally, for different $m$, $P_m$ will have very different expressions. Therefore, the total success probability $P=\sum_mP_m$ cannot always be represented as a function of the fidelity $F=\sum_m f_m$. In the following we focus on some special cases. Firstly, if for all $m=1, \ldots, t$, we have $\frac{f_m}{r_m} \le \sqrt{\frac{\eta_1}{\eta_2}} \le \frac{s_m}{f_m}$, then according to the above equation, the upper bound of the total success probability can be rewritten as \begin{eqnarray} P &=& \sum_m P_m \le \sum_m P^{max}_m \nonumber \\ &=& \sum_m \eta_1 r_m + \eta_2 s_m - 2 \sqrt{\eta_1 \eta_2} f_m \nonumber \\ &=& 1 - 2 \sqrt{\eta_1 \eta_2} F. \end{eqnarray} The corresponding lower bound of the failure probability becomes $Q=1-P \ge 2 \sqrt{\eta_1 \eta_2}F$. This bound is known to be a lower bound on $Q$ for any input configuration. However, our result shows that even in this special case the lower bound of $Q$ is only possibly saturated. This occurs, for example, when the canonical vectors are orthogonal to each other. In the general case, since $\tilde{X}_{11}$ and $\tilde{X}_{22}$ are not diagonal matrices, this lower bound cannot always be reached. In the second example, we assume that $\sqrt{\frac{\eta_1}{\eta_2}} \ge \frac{s_m}{f_m}$ for all $1 \le m \le t$. Simple algebra leads to the following bound on the total success probability: $P \le \eta_1(1-\sum_m f_m^2/s_m)$. If we introduce a new operator $C_2=\sum_m |s_m\rangle \langle s_m|$ composed of the normalized canonical vectors of $\rho_2$, we can reformulate this as $P \le \eta_1(1-\mbox{Tr}(\rho_1 C_2))$, or equivalently $Q \ge \eta_2 + \eta_1 \mbox{Tr}(\rho_1 C_2)$. When the $|s_m\rangle$ are orthogonal to each other, $C_2$ is nothing but the projection onto the support of $\rho_2$. Thirdly, if we have $\sqrt{\frac{\eta_1}{\eta_2}} \le \frac{f_m}{r_m} (\forall m)$, the total success probability satisfies $P \le \eta_2(1-\sum_m f_m^2/r_m)=\eta_2(1-\mbox{Tr}(\rho_2 C_1))$ with $C_1=\sum_m |r_m\rangle \langle r_m|$. Correspondingly, the failure probability becomes $Q \ge \eta_1 + \eta_2 \mbox{Tr}(\rho_2 C_1)$. For mixed states $\rho_1$ and $\rho_2$, we always have $r_m, s_m<1$. This indicates that the failure probability $Q$ can never reach the bound $2\sqrt{\eta_1 \eta_2}F$ in the latter two cases. Generally, different canonical vectors of the input states separate the parameter space into different regions, and the lower bound of $Q$ is determined by both the prior probabilities and the structure of the states. Moreover, in each region, the lower bound of $Q$ cannot always be reached. Mathematically, judging whether the lower bound can be saturated is equivalent to determining whether there exists a positive semidefinite matrix $\tilde{Y} \ge 0$ such that Eq. ($\ref{sdp}$) is satisfied. 
This problem is often called the semidefinite feasibility problem (SDFP). Unfortunately, the complexity of the SDFP is still not known, and currently one can only say that it cannot be an NP-complete problem unless NP=co-NP. Therefore, judging whether the bound of $Q$ can be reached seems to be a hard problem. In some special cases, however (for example, when the canonical vectors are orthogonal to each other, or when $\tilde{X}-\tilde{Y}$ is a diagonally dominant matrix), known results from linear algebra are helpful for solving this problem. \section{Examples} In many related works, the upper bound of the success probability $P$ is only considered in three different intervals, which depend on the ratio of $\eta_1$ and $\eta_2$ together with the fidelity $F$ and the supports of the input states. Here, by introducing the decomposition in Theorem 2, we find a series of parameter regions related to the specific input states. In addition, from the system-ancilla coupling viewpoint, one can also derive the corresponding results of \cite{Raynal2} and \cite{Rudolph}. For example, if we define $\tilde{B} = \tilde{X'}-\tilde{Y'}$ in Eq. ($\ref{two-2}$), then since $\tilde{B}$ is positive semidefinite, we have $\sqrt{\mbox{Tr}(\tilde{B}_{11})\mbox{Tr}(\tilde{B}_{22})} \ge |\mbox{Tr}(\tilde{B}_{12})|$ for any decomposition of $\rho_1$ and $\rho_2$ (for the definitions of $\tilde{B}_{ij}$, see Eq. ($\ref{main}$)). Therefore we obtain $\sqrt{\mbox{Tr}(\tilde{B}_{11})\mbox{Tr}(\tilde{B}_{22})} \ge F$, where equality holds only when $\tilde{B}_{11}=\alpha \tilde{B}_{22}$ with $\alpha \in \mathbb{R}$. This also indicates that the output states corresponding to the failure measurement results cannot be used for further discrimination operations, which is consistent with the discussions in \cite{Raynal2}. To reveal the relations and differences between the bounds listed above and those in previous works, in the following we investigate a specific example. Consider two rank-$2$ mixed states $\rho_1=\frac{1}{2}(|r_1\rangle\langle r_1|+|r_2\rangle \langle r_2|)$ and $\rho_2=\frac{1}{2}(|s_1\rangle \langle s_1|+|s_2\rangle \langle s_2|)$ with $\langle r_1|s_2\rangle=\langle r_2|s_1\rangle=0$, $\langle r_1|s_1\rangle=\mbox{cos}\theta_1$, and $\langle r_2|s_2\rangle=\mbox{cos}\theta_2$. To simplify our consideration, we also assume $\langle r_1|r_2\rangle=\langle s_1|s_2\rangle=0$. Discrimination of this kind of mixed states has been extensively studied in \cite{bergou}. Here we use it to illustrate the differences between the upper bounds presented in several related works. Suppose $0 < \mbox{cos}\theta_1 \le \mbox{cos}\theta_2 <1$. Then, based on the discussion above, the optimal success probability $P$ can be obtained exactly in five parameter regions, $[0, \mbox{cos}\theta_1]$, $[\mbox{cos}\theta_1, \mbox{cos}\theta_2]$, $[\mbox{cos}\theta_2, 1/\mbox{cos}\theta_2]$, $[1/\mbox{cos}\theta_2, 1/\mbox{cos}\theta_1]$, $[1/\mbox{cos}\theta_1, \infty)$. Alternatively, one can also obtain the corresponding upper bound of $P$ according to \cite{Raynal2} and \cite{Rudolph}. Table 1 shows the details of $P$ in each of these regions. 
\begin{widetext} \begin{table}\label{table-I} \begin{center} \begin{tabular}{||c||c|c|}\hline $\sqrt{\eta_1/\eta_2}$ & $P$ & $P_{Ra}(P_{Ru})$ \\ \hline $[0, \mbox{cos} \theta_1]$ & $\frac{1}{2}\eta_2(\mbox{sin}^2 \theta_1 + \mbox{sin}^2 \theta_2)$ & $\frac{1}{2}\eta_2(\mbox{sin}^2\theta_1 + \mbox{sin}^2 \theta_2)+ $ \\ \cline{1-2} $[\mbox{cos}\theta_1, Ra_1(F)]$ & \multirow{2}{*}{$\frac{1}{2}-\sqrt{\eta_1 \eta_2}\mbox{cos}\theta_1+\frac{1}{2}\eta_2 \mbox{sin}^2\theta_2$} & $\frac{1}{2}\eta_1\frac{(\mbox{cos}\theta_1-\mbox{cos}\theta_2)^2}{\mbox{cos}^2\theta_1+\mbox{cos}^2\theta_2}$ (or $\eta_2(1-F^2)$) \\ \cline{1-1} \cline{3-3} $[Ra_1(F), \mbox{cos}\theta_2]$ & & \multirow{3}{*}{$1-2 \sqrt{\eta_1 \eta_2}F$ (or $1-2\sqrt{\eta_1 \eta_2}F$) } \\ \cline {1-2} $[\mbox{cos}\theta_2, \frac{1}{\mbox{cos}\theta_2}]$ & $1-2\sqrt{\eta_1 \eta_2}F$& \\ \cline{1-2} $[\frac{1}{\mbox{cos}\theta_2}, Ra_2(1/F)]$ &\multirow{2}{*}{$\frac{1}{2}-\sqrt{\eta_1 \eta_2}\mbox{cos}\theta_1+ \frac{1}{2}\eta_1 \mbox{sin}^2\theta_2$} & \\ \cline{1-1} \cline{3-3} $[Ra_2(1/F), \frac{1}{\mbox{cos}\theta_1}]$ & & $\frac{1}{2}\eta_1(\mbox{sin}^2\theta_1+\mbox{sin}^2\theta_2)+$ \\ \cline{1-2} $[\frac{1}{\mbox{cos}\theta_1}, \infty)$ & $\frac{1}{2} \eta_1 (\mbox{sin}^2\theta_1+\mbox{sin}^2\theta_2)$ & $\frac{1}{2}\eta_2 \frac{(\mbox{cos}\theta_1-\mbox{cos}\theta_2)^2}{\mbox{cos}^2\theta_1+\mbox{cos}^2\theta_2}$ (or $\eta_1(1-F^2)$) \\ \hline \end{tabular} \end{center} \caption{Bounds of the maximal success probabilities presented in several related works. Here $P$ denotes the bound according to Eq. ($\ref{newbound}$). $P_{Ra}$ and $P_{Ru}$ are the results obtained from \cite{Raynal2} and \cite{Rudolph} respectively. $F=(\mbox{cos}\theta_1+\mbox{cos}\theta_2)/2$ is the fidelity of the two input mixed states. $Ra_1=\mbox{Tr}(P_1 \rho_2)/F=(\mbox{cos}^2\theta_1+\mbox{cos}^2\theta_2)/(\mbox{cos}\theta_1+\mbox{cos}\theta_2)$ and $Ra_2=F/\mbox{Tr}(P_2 \rho_1)=(\mbox{cos}\theta_1+\mbox{cos}\theta_2)/(\mbox{cos}^2\theta_1+\mbox{cos}^2\theta_2)$ are parameters according to \cite{Raynal2} with $P_1$ and $P_2$ being the supports of $\rho_1$ and $\rho_2$ respectively.} \end{table} \end{widetext} The above table shows that when $\mbox{cos}\theta_1=\mbox{cos}\theta_2$, the three bounds $P$, $P_{Ra}$, and $P_{Ru}$ are equal to each other. However, for the general case $\mbox{cos}\theta_1 \ne \mbox{cos}\theta_2$, one can easily obtain $P \le P_{Ra}$ and $P \le P_{Ru}$, and the equalities hold only when $\mbox{cos}\theta_2 \le \sqrt{\eta_1/\eta_2} \le 1/\mbox{cos}\theta_2$. For example, if $\mbox{cos}\theta_1 \le \sqrt{\eta_1/\eta_2} \le Ra_1$, we have $P_{Ra}-P=\frac{1}{2}\eta_2[2x\mbox{cos}\theta_1-\mbox{cos}^2\theta_1-2x^2\mbox{cos}\theta_1\mbox{cos}\theta_2/(\mbox{cos}^2\theta_1+\mbox{cos}^2\theta_2)]=f(x)$, where we have set $x=\sqrt{\eta_1/\eta_2}$. The function $f(x)$ is a concave quadratic in $x$ (its leading coefficient is negative), so its nonnegativity at the two endpoints of the interval implies nonnegativity on the whole interval. Since $f(\mbox{cos}\theta_1)=\eta_2\mbox{cos}^2\theta_1(\mbox{cos}\theta_1-\mbox{cos}\theta_2)^2/[2(\mbox{cos}^2\theta_1+\mbox{cos}^2\theta_2)] \ge 0$ and $f(Ra_1)=\eta_2\mbox{cos}^2\theta_1(\mbox{cos}\theta_1-\mbox{cos}\theta_2)^2/[2(\mbox{cos}\theta_1+\mbox{cos}\theta_2)^2] \ge 0$, one immediately sees that $P \le P_{Ra}$ for any $\sqrt{\eta_1/\eta_2}$ in $[\mbox{cos}\theta_1, Ra_1]$. These observations indicate that the bound presented in this work is independent of those in former works, and that it can sometimes provide a tighter bound on the total success probability $P$, as expected. 
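As a complementary numerical cross-check (outside the scope of the paper itself; it assumes only the Python package \texttt{numpy}, and the angles and the grid of priors are arbitrary illustrative choices), one can evaluate the piecewise bound of Eq.~(\ref{newbound}) for the rank-two example of this section and compare it with the fidelity-based bound $1-2\sqrt{\eta_1\eta_2}F$:
\begin{verbatim}
import numpy as np

c1, c2 = np.cos(1.2), np.cos(0.5)   # cos(theta_1) <= cos(theta_2), illustrative
r = s = np.array([0.5, 0.5])        # moduli of the non-normalized canonical vectors
f = np.array([c1, c2]) / 2          # overlaps <r~_m|s~_m>

def P_bound(eta1, eta2):
    """Sum of the per-vector maxima P_m^max from Eq. (newbound)."""
    x = np.sqrt(eta1 / eta2)
    total = 0.0
    for rm, sm, fm in zip(r, s, f):
        if x <= fm / rm:
            total += eta2 * (sm - fm**2 / rm)
        elif x <= sm / fm:
            total += eta1 * rm + eta2 * sm - 2 * np.sqrt(eta1 * eta2) * fm
        else:
            total += eta1 * (rm - fm**2 / sm)
    return total

F = (c1 + c2) / 2                   # fidelity of the two input states
for eta1 in np.linspace(0.05, 0.95, 10):
    eta2 = 1 - eta1
    p_new = P_bound(eta1, eta2)
    p_fid = 1 - 2 * np.sqrt(eta1 * eta2) * F
    assert p_new <= p_fid + 1e-12   # the new bound is never weaker
    print(f"eta1={eta1:4.2f}  P={p_new:.4f}  1-2*sqrt(eta1*eta2)*F={p_fid:.4f}")
\end{verbatim}
The assertion never fails because, for each $m$, the expressions of the first and third regimes never exceed the middle one; for instance, $\eta_1 r_m-2\sqrt{\eta_1\eta_2}f_m+\eta_2 f_m^2/r_m=\big(\sqrt{\eta_1 r_m}-\sqrt{\eta_2}\,f_m/\sqrt{r_m}\big)^2\ge0$.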
\section{Conclusion} To summarize, we have proposed a general description of the UD of mixed states based on the system-ancilla model, and presented a procedure to reduce this problem to a standard SDP problem, which makes it numerically solvable. For UD between two mixed states, we have introduced the canonical vectors and partly reduced the original problem to UD between pairs of canonical vectors. We presented a series of new upper bounds on the total success probability which depend on both the ratio of the prior probabilities and the structure of the input states. This indicates that the results in \cite{bergou} are universal for any type of input states. It should also be mentioned that throughout the paper we mainly concentrate on the diagonal elements of the corresponding matrices. In practice, the off-diagonal elements also play important roles, which deserve further investigation. \end{document}
\begin{document} \maketitle \begin{abstract} We consider the minimum value of the first dynamical degrees, among those larger than $1$, of automorphisms of complex simple abelian varieties of prime dimension. We also calculate this minimum value for complex simple abelian varieties of each fixed dimension from $2$ to $10$. \end{abstract} \section{Introduction} Let $X$ be a compact K\"{a}hler manifold and $f\colon X\dashrightarrow X$ be a dominant meromorphic map. The first dynamical degree of $f$ is defined by \begin{align*} \lambda_1(f)=\lim_{n\to+\infty}||(f^n)^{*}:\mathrm{H}^{1,1}(X)\rightarrow\mathrm{H}^{1,1}(X)||^{\frac{1}{n}} \end{align*} where $||\cdot||$ is a norm and $\mathrm{H}^{1,1}(X)$ is the $(1,1)$-Dolbeault cohomology. In particular, if $X$ is a complex simple abelian variety and $f$ is an automorphism, then \begin{align*} \lambda_1(f)&=\lim_{n\to+\infty}||(f^n)^{*}:\mathrm{H}^{1,1}(X)\rightarrow\mathrm{H}^{1,1}(X)||^{\frac{1}{n}}\\ &=\lim_{n\to+\infty}||(f^{*})^n:\mathrm{H}^{1,1}(X)\rightarrow\mathrm{H}^{1,1}(X)||^{\frac{1}{n}}\\ &=\rho(f^{*}:\mathrm{H}^{1,1}(X)\rightarrow\mathrm{H}^{1,1}(X)) \end{align*} and we only consider this case in this paper. It is known that $\lambda_1(f)\geq1$ (cf.\ \cite{DS05} or \cite{DS17}) and that if $g=\dim X=1$, then $\lambda_1(f)=1$ (cf.\ \cite[Definition 4.1]{DS17}).\par In our previous work, we proved the following theorem. \begin{theorem}[{cf.\ \cite[Theorem 7.2]{Sug23}}] For an integer $g\geq2$, the set \begin{align*} S_g:=\left\{ \begin{array}{l} \text{the first dynamical degrees of automorphisms}\\ \text{of simple abelian varieties over $\mathbb{C}$ whose dimension is $g$} \end{array} \right\}\setminus\{1\} \end{align*} has a minimum value. \end{theorem} Denote the minimum value of $S_g$ by $m(g)$; in this paper, we study $m(g)$ for some $g$ in detail.\par In Section \ref{The minimum value of the first dynamical degrees for simple abelian varieties with prime dimensions}, we consider the minimum value for prime dimensional simple abelian varieties. \begin{thma} For a prime number $p$, we obtain the following result about $m(p)$. \begin{align*} \begin{cases} m(p)=4\mathrm{cos}^2\left(\frac{\pi}{2p+1}\right) & \text{$($if $2p+1$ is also a prime$)$} \\ 4+4^{-2p-3}<m(p)<4.3264=(2.08)^2 & \text{$($otherwise$)$} \end{cases} \end{align*} \end{thma} \begin{remark} $p$ is called a Sophie Germain prime if $p$ and $2p+1$ are both prime numbers, and it is conjectured that there are infinitely many Sophie Germain primes (cf.\ \cite[Chapter 5.5.5]{Sho09}). On the other hand, since there are infinitely many prime numbers $p$ such that $p\equiv1\ (\mathrm{mod}\ 3)$, and for such $p$ the number $2p+1$ is divisible by $3$, there are infinitely many prime numbers which are not Sophie Germain primes. \end{remark} In Section \ref{The minimum value of the first dynamical degrees for simple abelian varieties with lower dimensions}, we consider the minimum value for lower dimensions. \begin{thmb} For $g=2,\ldots,10$, the values $m(g)$ are given in the following table. 
\begin{table}[hbtp] \label{table} \begin{tabular}{|c|c|} \hline dimension $g$ & minimum value $m(g)$ \\ \hline $1$ & $(\text{all }1)$\\ $2$ & $4\mathrm{cos}^2(\pi/5)=2.6180\cdots$ \\ $3$ & $4\mathrm{cos}^2(\pi/7)=3.2469\cdots$ \\ $4$ & $2\mathrm{cos}(\pi/5)=1.6180\cdots$ \\ $5$ & $4\mathrm{cos}^2(\pi/11)=3.6825\cdots$ \\ $6$ & $2\mathrm{cos}(\pi/7)=1.8019\cdots$ \\ $7$ & $4.0333\cdots$ \\ $8$ & $2\mathrm{cos}(\pi/5)=1.6180\cdots$ \\ $9$ & $1.1509\cdots^2=1.3247\cdots$ \\ $10$& $1.1762\cdots^2=1.3836\cdots$ \\ \hline \end{tabular} \end{table} \end{thmb} \noindent {\bf Acknowledgements.} The author thanks Professor Keiji Oguiso for suggesting the direction of this paper. \section{Algebraic preliminaries} This section is devoted to the algebraic preparations for the following sections. Some of the results are also used in our past study (cf. \cite{Sug23}). \begin{definition}[{cf.\ \cite[Chapter 1.2]{Lan83}}]\label{equivalent condition for CM-field} A CM-field is a number field $K$ which satisfies the following equivalent conditions. \begin{enumerate} \item $K$ is a totally complex field, and is a quadratic extension of some totally real number field. \item $K$ is closed under the complex conjugation and all $\mathbb{Q}$-embeddings $K\hookrightarrow\mathbb{C}$ commute with the complex conjugation, and $K$ is not real. \end{enumerate} \end{definition} For a number field $K$, denote its ring of integer by $\mathcal{O}_K$ and define \begin{align*} U_K:=\{x\in\mathcal{O}_K\setminus\{0\}\mid x^{-1}\in\mathcal{O}_K\} \end{align*} as the group of algebraic units.\par The next lemma follows from the definition of CM-fields. \begin{lemma}[{cf.\ \cite[Chapter 1.2]{Lan83}}]\label{unit element} Let $L$ be a totally real number field. \begin{enumerate} \item If $L'$ is a totally complex quadratic extension of $L$, then for $x\in U_{L'}$, $\abs{x}^2$ is an element of $U_L$. \item Fix $a\in U_L\setminus\{0\}$. Then the next two conditions are equivalent. \begin{enumerate} \item There exist a totally complex quadratic extension $L'/L$ and $x\in U_{L'}$ which satisfies $\abs{x}^2=a$. \item The conjugate elements of $a\in L$ are all positive (i.e., $a$ is totally positive). \end{enumerate} Moreover, these conditions are equivalent to the totally realness of $L(\sqrt{a})$. \end{enumerate} \end{lemma} \begin{proof} We start with the proof of (1). By the assumption, $x, \frac{1}{x}\in \mathcal{O}_{L'}$. Since $[L':L]=2$ and $L$ is totally real, $x\overline{x}, \frac{1}{x\overline{x}}\in\mathcal{O}_{L}$. Thus, $\abs{x}^2=x\overline{x}\in U_L$.\par Next, consider the proof of (2). If all conjugate elements of $a\in L$ are positive, then by defining $L':=L(\sqrt{-a})$, $L'$ is a totally complex number field. This implies (ii)$\Rightarrow$(i).\par Assume the condition (i). Fix a $\mathbb{Q}$-embedding $\sigma':L\hookrightarrow\mathbb{C}$ and take a $\mathbb{Q}$-embedding $\sigma:L'\hookrightarrow\mathbb{C}$ with $\sigma'=\sigma\circ i$ where $i:L\hookrightarrow L'$ is the natural inclusion. By Definition \ref{equivalent condition for CM-field}, the complex conjugation commutes with $\sigma'$ and so \begin{align*} \sigma'(a)=\sigma'(\abs{x}^2)=\sigma'(x\overline{x})=\sigma'(x)\sigma'(\overline{x})=\sigma'(x)\overline{\sigma'(x)}=\abs{\sigma'(x)}^2. \end{align*} Thus, all the conjugates of $a$ is positive and this implies (i)$\Rightarrow$(ii) and the proof is concluded. 
\end{proof} In this paper, we denote $\zeta_n=\mathrm{cos}\left(\frac{2\pi}{n}\right)+i\cdot\mathrm{sin}\left(\frac{2\pi}{n}\right)$ for a positive integer $n$. Denote the minimal polynomial of $\zeta_n$ over $\mathbb{Q}$ by $\Phi_n(x)\in\mathbb{Z}[x]$, which is called the cyclotomic polynomial. The degree of $\Phi_n(x)$ is given by Euler's totient function $\varphi(n)$. Let $\Psi_n(x)$ be the minimal polynomial of $2\mathrm{cos}\left(\frac{2\pi}{n}\right)$ over $\mathbb{Q}$ (define $\Psi_4(x)=x$ for $n=4$). The constant term of $\Psi_n(x)$ is calculated in \cite{ACR16} as below. \begin{proposition}[{cf.\ \cite{ACR16}}]\label{constant term of minimal polynomial} The absolute value of the constant term of $\Psi_n(x)$ is equal to $1$ except in the following cases. \begin{enumerate} \renewcommand{\labelenumi}{\rm{(\roman{enumi})}} \item $\abs{\Psi_n(0)}=0$ for $n=4$ \item $\abs{\Psi_n(0)}=2$ for $n=2^m$, with $m\in\mathbb{Z}_{\geq0}\setminus\{2\}$ \item $\abs{\Psi_n(0)}=p$ for $n=4p^k$, with $k\in\mathbb{Z}_{>0},\ p\text{ is an odd prime number}$ \end{enumerate} \end{proposition} Also, for $n\geq3$, the polynomial $\Psi_n(x)$ satisfies \begin{align*} \Phi_n(x)=x^{\frac{\varphi(n)}{2}}\Psi_n\left(x+\frac{1}{x}\right) \end{align*} and $\Psi_n(x)$ is of degree $\frac{\varphi(n)}{2}$.\par The next lemma, which follows from Kronecker's theorem, is useful for determining the minimum value of the first dynamical degrees. \begin{lemma}\label{cyclotomic} Let $P(x)=x^n+a_1x^{n-1}+\cdots+a_n\in\mathbb{Z}[x]$ be an irreducible monic polynomial whose roots are all real. If the maximal absolute value of the roots of $P(x)$ is less than $2$, then $P(x)$ satisfies $x^nP\left(x+\frac{1}{x}\right)=\Phi_N(x)$ for some $N\in\mathbb{Z}_{\geq3}$. This implies $P(x)=\Psi_N(x)$. Also, the roots of $P(x)$ can be written as $\zeta_N^m+\frac{1}{\zeta_N^m}=2\mathrm{cos}(\frac{2m\pi}{N})$ for some $m\in\mathbb{Z}_{>0}$ with $m<N$, $\mathrm{gcd}(N,m)=1$. Moreover, the maximal absolute value of the roots of $P(x)$ can be written as \begin{align*} \left\{ \begin{array}{ll} 2\mathrm{cos}\left(\frac{2\pi}{N}\right) & (\text{if }N\text{ is even})\\[5pt] 2\mathrm{cos}\left(\frac{\pi}{N}\right) & (\text{if }N\text{ is odd}) \end{array} \right. \end{align*} \end{lemma} Let $L/K$ be a field extension of number fields with $[L:K]=n$ and let $\mathcal{O}_L$ and $\mathcal{O}_K$ be their rings of integers. Then $\mathcal{O}_L$ and $\mathcal{O}_K$ are Dedekind domains. For a non-zero prime ideal $\mathfrak{p}\subset\mathcal{O}_K$, there is the prime ideal factorization $\mathfrak{p}\mathcal{O}_L=\mathfrak{P}_1^{e_1}\cdots\mathfrak{P}_g^{e_g}$ since $\mathcal{O}_L$ is a Dedekind domain. We say that $\mathfrak{P}_i$ is over $\mathfrak{p}$.\par Now $L/K$ is a separable extension of degree $n$ and so \begin{align*} n=\sum_{i=1}^g e_if_i, \end{align*} where $f_i=[\mathcal{O}_L/\mathfrak{P}_i:\mathcal{O}_K/\mathfrak{p}]$ denotes the residue degree of $\mathfrak{P}_i$ over $\mathfrak{p}$. Moreover, if $L/K$ is a Galois extension, by Hilbert's ramification theory (cf.\ \cite[Chapter I, \S9]{Neu99}), \begin{align*} e_1=\cdots=e_g=e, f_1=\cdots=f_g=f \end{align*} hold and this implies $n=efg$. \begin{remark}[{cf.\ \cite[Chapter I, Proposition 8.3]{Neu99}}]\label{construction of prime ideal factorization} Let $L/K$ be a field extension of number fields and let $\theta\in\mathcal{O}_L$ be a primitive element for $L/K$ (i.e., $L=K(\theta)$).\par Denote the conductor of $\mathcal{O}_K[\theta]$ by $\mathfrak{J}\subset\mathcal{O}_L$, in other words, \begin{align*} \mathfrak{J}=\{\alpha\in\mathcal{O}_L\mid\alpha\mathcal{O}_L\subset\mathcal{O}_K[\theta]\}. 
\end{align*} Under this condition, let $\mathfrak{p}\subset\mathcal{O}_K$ be a prime ideal which is relatively prime to $\mathfrak{J}$ (i.e., $\mathfrak{p}\mathcal{O}_L+\mathfrak{J}=\mathcal{O}_L$). Denote the minimal polynomial of $\theta$ over $K$ by $p(X)\in\mathcal{O}_K[X]$ and denote the image of $p(X)$ via the map $\mathcal{O}_K[X]\rightarrow(\mathcal{O}_K/\mathfrak{p})[X]$ by $\overline{p}(X)$. The polynomial $\overline{p}(X)$ is decomposed as \begin{align*} \overline{p}(X)=\overline{p_1}(X)^{e_1}\cdots\overline{p_g}(X)^{e_g} \end{align*} in $(\mathcal{O}_K/\mathfrak{p})[X]$ for some $p_i(X)\in\mathcal{O}_K[X]$. Then, by defining \begin{align*} \mathfrak{P}_i:=\mathfrak{p}\mathcal{O}_L+p_i(\theta)\mathcal{O}_L, \end{align*} the $\mathfrak{P}_i\subset\mathcal{O}_L$ are distinct prime ideals over $\mathfrak{p}$ and \begin{align*} \mathfrak{p}\mathcal{O}_L=\mathfrak{P}_1^{e_1}\cdots\mathfrak{P}_g^{e_g} \end{align*} holds; this is the prime ideal factorization of $\mathfrak{p}$ in $L$. Also, $f_i=[\mathcal{O}_L/\mathfrak{P}_i:\mathcal{O}_K/\mathfrak{p}]=\mathrm{deg}(p_i(X))$ holds. \end{remark} Let $L/K$ be an extension of number fields with $[L:K]=n$ again. Let $\{\sigma_i\}_{1\leq i\leq n}$ be the set of all $K$-embeddings $L\hookrightarrow\mathbb{C}$.\par For $a_1,\ldots,a_n\in L$, the discriminant over $L/K$ is defined as \begin{align*} \Delta_{L/K}(a_1,\ldots,a_n):=\mathrm{det}\begin{pmatrix} \sigma_1(a_1) & \sigma_1(a_2) & \dots & \sigma_1(a_n) \\ \sigma_2(a_1) & \sigma_2(a_2) & \dots & \sigma_2(a_n) \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_n(a_1) & \sigma_n(a_2) & \dots & \sigma_n(a_n) \end{pmatrix}^2. \end{align*} Then $\Delta_{L/K}(a_1,\ldots,a_n)\in K$, and $\Delta_{L/K}(a_1,\ldots,a_n)\neq0$ if and only if $a_1,\ldots,a_n\in L$ are linearly independent over $K$. Now, the next lemma holds. \begin{lemma}[{cf.\ \cite[Chapter I, Lemma 2.9]{Neu99}}]\label{discriminant} Let $L/K$ be an extension of number fields with $[L:K]=n$ and let $a_1,\ldots,a_n\in\mathcal{O}_L$ be a basis of $L/K$. Define $\delta:=\Delta_{L/K}(a_1,\ldots,a_n)$; then \begin{align*} \mathcal{O}_L\subset\frac{a_1}{\delta}\mathcal{O}_K\oplus\cdots\oplus\frac{a_n}{\delta}\mathcal{O}_K. \end{align*} \end{lemma} \begin{remark}\label{conductor and discriminant} The conductor defined in Remark \ref{construction of prime ideal factorization} contains $\delta=\Delta_{L/K}(1,\theta,\ldots,\theta^{n-1})$, where $n=[L:K]$, by Lemma \ref{discriminant}. \end{remark} \begin{definition}[{cf.\ \cite[Definition 3.2.1, Definition 8.4.1]{Voi21}}] Let $B$ be a finite-dimensional algebra over $\mathbb{Q}$. An anti-involution is a $\mathbb{Q}$-linear map $\phi:B\rightarrow B$ such that \begin{enumerate} \item $\phi(1)=1$ \item $\phi(\phi(x))=x$\quad($x\in B$) \item $\phi(xy)=\phi(y)\phi(x)$\quad($x,y\in B$) \end{enumerate} An anti-involution $\phi:B\rightarrow B$ is called a positive anti-involution over $\mathbb{Q}$ if and only if $\mathrm{Tr}_{B/\mathbb{Q}}(\phi(x)x)>0$ for all $x\in B\setminus\{0\}$. \end{definition} Also, in this paper, we adopt the following definition of Salem numbers. \begin{definition}[{\cite[Chapter 5.2]{BDGPS92}}] A Salem number is a real algebraic integer $\lambda$ greater than $1$ whose other conjugates have modulus at most equal to $1$, at least one having a modulus equal to $1$. \end{definition} This implies that all Salem numbers have degree at least $4$. 
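Before turning to endomorphism algebras, the following small computational sanity check (not part of the paper; it assumes the Python computer algebra package \texttt{sympy}, and the choice $n=7$ is only an illustration) verifies the relation $\Phi_n(x)=x^{\frac{\varphi(n)}{2}}\Psi_n\left(x+\frac{1}{x}\right)$ and the bound of Lemma \ref{cyclotomic} in the case $n=7$, where $\frac{\varphi(7)}{2}=3$.
\begin{verbatim}
import sympy as sp

x = sp.symbols('x')
phi7 = sp.cyclotomic_poly(7, x)                       # Phi_7(x) = x^6 + x^5 + ... + 1
psi7 = sp.minimal_polynomial(2*sp.cos(2*sp.pi/7), x)  # Psi_7(x) = x^3 + x^2 - 2x - 1

# Check the identity Phi_7(x) = x^3 * Psi_7(x + 1/x).
assert sp.expand(x**3 * psi7.subs(x, x + 1/x) - phi7) == 0

# The root of Psi_7 with maximal absolute value is -2cos(6*pi/7) = 2cos(pi/7) < 2,
# as stated in the lemma for odd N; its square 4cos^2(pi/7) = 3.2469... appears
# as m(3) in the table of Theorem B.
roots = sp.Poly(psi7, x).nroots()
print(max(abs(r) for r in roots))                     # approximately 1.8019
\end{verbatim}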
\section{Endomorphism algebras of simple abelian varieties}\label{Endomorphism algebras of simple abelian varieties} Let $X$ be a $g$-dimensional simple abelian variety. Then its endomorphism algebra $B:=\mathrm{End}_{\mathbb{Q}}(X)=\mathrm{End}(X)\otimes_\mathbb{Z}\mathbb{Q}$ is a division ring. $B$ is a finitely generated algebra over $\mathbb{Q}$ with center $K$, which is a field. Moreover, $B$ admits a Rosati involution, which is a positive anti-involution $\phi:B\rightarrow B$ over $\mathbb{Q}$. Now $\phi$ can be restricted to $K$; define $K_0:=\{x\in K\mid \phi(x)=x\}$. Then $K_0$ is a totally real number field and $[K:K_0]$ is $1$ or $2$. This gives the classification of simple abelian varieties by their endomorphism algebras as below (cf.\ \cite[Chapter 5.5]{BL04}). Denote $[B:K]=d^2$, $[K:\mathbb{Q}]=e$ and $[K_0:\mathbb{Q}]=e_0$. \begin{table}[h] \caption{Classification of $X$ by its endomorphism algebra} \label{table1} \begin{tabular}{|c|c|c|c|c|c|} \hline $X$ & $B=\mathrm{End}_\mathbb{Q}(X)$ & $K$ & $d$ & $e_0$ & restriction \\ \hline Type 1 & $K$ & totally real & $1$ & $e$ & $e\mid g$\\ Type 2 & totally indefinite quaternion algebra over $K$ & totally real & $2$ & $e$ & $2e\mid g$\\ Type 3 & totally definite quaternion algebra over $K$ & totally real & $2$ & $e$ & $2e\mid g$\\ Type 4 & division ring with center $K$ & CM-field & $d$ & $\frac{e}{2}$ & $\frac{d^2 e}{2}\mid g$\\ \hline \end{tabular} \end{table}\par An endomorphism $f:X\rightarrow X$ can be identified with an element $\alpha\in B$ with minimal polynomial $p(x)\in\mathcal{O}_K[x]$ over $K$ and minimal polynomial $P(x)\in\mathbb{Z}[x]$ over $\mathbb{Q}$. If $f$ is an automorphism, the constant term of $p(x)$ is in $U_K$ and the constant term of $P(x)$ is in $\{\pm1\}$. The degree of $p(x)$ is at most $d$ and the degree of $P(x)$ is at most $de\leq 2g$. Also, the first dynamical degree of the endomorphism $f$ is the square of the maximal absolute value of the roots of $P(x)\in\mathbb{Z}[x]$ (cf.\ \cite[Section 3]{Sug23}).\par Conversely, for a division algebra as in Table \ref{table1} satisfying suitable conditions, there exists a simple abelian variety whose endomorphism algebra coincides with the given division algebra, by the following propositions (cf.\ \cite[Chapter 9]{BL04}). The same methods are also used in \cite{Sug23}. \begin{proposition}[Type 1]\label{construction for Type 1} Let $K$ be a totally real number field with $[K:\mathbb{Q}]=e$ and fix a $\mathbb{Z}$-order $\mathcal{O}$. Then, for any integer $m\in\mathbb{Z}_{>0}$, there exists an $em$-dimensional simple abelian variety $X$ with an isomorphism $K\stackrel{\simeq}{\longrightarrow}\mathrm{End}_\mathbb{Q}(X)$ which induces an injective ring homomorphism $\mathcal{O}\hookrightarrow\mathrm{End}(X)$. \end{proposition} \begin{proposition}[Type 2]\label{construction for Type 2} Let $B$ be a totally indefinite quaternion algebra over a totally real number field $K$ with $[K:\mathbb{Q}]=e$ and fix a $\mathbb{Z}$-order $\mathcal{O}$. Assume there exists a positive anti-involution $\phi:B\rightarrow B$ over $\mathbb{Q}$ and fix a positive integer $m\in\mathbb{Z}_{>0}$. Then there exists a $2em$-dimensional simple abelian variety $X$ with an isomorphism $B\stackrel{\simeq}{\longrightarrow}\mathrm{End}_\mathbb{Q}(X)$ which induces an injective ring homomorphism $\mathcal{O}\hookrightarrow\mathrm{End}(X)$. 
\end{proposition} \begin{proposition}[Type 4]\label{construction for Type 4} Let $B$ be a central simple division algebra over a CM-field $K$ with $[B:K]=d^2$, $[K:\mathbb{Q}]=e=2e_0$ and fix a $\mathbb{Z}$-order $\mathcal{O}$. Assume there exists\nolinebreak\ a positive anti-involution $\phi:B\rightarrow B$ over $\mathbb{Q}$ of the second kind (i.e., $\phi\lvert_K\neq\mathrm{id}_K$) and fix a positive integer $m\in\mathbb{Z}_{>0}$. Then there exists a $d^2e_0m$-dimensional abelian variety $X$ with an injective ring homomorphism $B\hookrightarrow\mathrm{End}_\mathbb{Q}(X)$ which induces an injective ring homomorphism $\mathcal{O}\hookrightarrow\mathrm{End}(X)$.\par In addition, assume one of the next conditions. \begin{enumerate} \item $dm\geq3$ \item $dm=2$ and $e_0\geq2$ \end{enumerate} Then the above $X$ can be taken as a simple abelian variety and $B\simeq\mathrm{End}_\mathbb{Q}(X)$. \end{proposition} \begin{remark} The construction of simple abelian varieties of Type 3 is also written in \cite[Chapter 9]{BL04}, but it is not used in this paper, so we omit here. \end{remark} The construction of a division ring is due to the next theorems. \begin{theorem}[{\cite[Theorem 2.5]{DH22}}]\label{divisional} Let $F$ be a totally real number field and $K=F(\sqrt{a})$ a quadratic extension for $a\in\mathcal{O}_F$. Then there exists a prime number $p$ such that the quaternion algebra $B=\left(\frac{a,p}{F}\right)$ is divisional. \end{theorem} \begin{theorem}[{cf.\ \cite[Chapter 15.1, Corollary d]{Pie82}}]\label{construct division algebra} Let $E/F$ be a cyclic extension of fields with $[E:F]=n$. Define $G=\mathrm{Gal}(E/F)$ and denote its generator by $\sigma$. Fix $u\in F^{\times}$ and take a symbol $v$ as $v^n=u$. Define $B=\oplus_{i=0}^{n-1} v^{i}E$ with the multiplication on $B$ as $e\cdot v=v\cdot\sigma(e)$ for $e\in E$. Assume that the order of $u$ in $F^{\times}/\mathrm{N}_{E/F}(E^{\times})$ is exactly $n$.\par Then $B$ is a division ring and also a central simple algebra over $F$ with $[B:F]=n^2$. \end{theorem} The next lemma is used for searching $u\in F^{\times}$ with $u\notin\mathrm{N}_{E/F}(E^{\times})$. \begin{lemma}[{\cite[Lemma 8.9]{Sug23}}]\label{multiplicity} Let $E/F$ be a Galois extension of number fields with $[E:F]=n$ and let $\mathfrak{q}\subset\mathcal{O}_E$ be a prime ideal and take $\alpha\in E^{\times}\setminus U_E$.\par Assume $\sigma(\mathfrak{q})=\mathfrak{q}$ for all $\sigma\in\mathrm{Gal}(E/F)$. Then, the multiplicity of $\mathfrak{q}$ of the prime ideal factorization of the ideal generated by $\mathrm{N}_{E/F}(\alpha)$ in $\mathcal{O}_E$ is a multiple of $n$. \end{lemma} Also, the construction of a positive anti-involution is reduced to the following theorems in this paper. \begin{theorem}[{cf.\ \cite[Theorem 5.5.3]{BL04}}]\label{construction of positive anti-involution} Let $B$ be a totally indefinite quaternion algebra of finite dimension over $\mathbb{Q}$ with center a totally real number field $K$. Assume that $B$ is divisional. Then, $\mathbb{Q}$-linear map $\phi:B\rightarrow B$ is a positive anti-involution over $\mathbb{Q}$ if and only if it can be written as \begin{align*} \phi(x)=c^{-1}\overline{x}c \end{align*} where $c\in B\setminus K$ with $c^2\in K$ totally negative and $x\mapsto\overline{x}$ is the quaternion conjugation. \end{theorem} \begin{theorem}[{cf.\ \cite[Chapter 21]{Mum70}, \cite[Theorem 5.5.6]{BL04}}]\label{existence of positive anti-involution} Let $B$ be a division algebra of finite dimension over $\mathbb{Q}$ with center a CM-field $K$. 
Assume that there exists an anti-involution $\phi:B\rightarrow B$ of the second kind. Then there exists a positive anti-involution $\phi':B\rightarrow B$ over $\mathbb{Q}$ of the second kind. \end{theorem} \section{The minimum value of the first dynamical degrees of automorphisms}\label{The minimum value of the first dynamical degrees of automorphisms} First we consider general results on the minimum value of the first dynamical degrees of automorphisms. The flow is repeatedly used in the following sections. Let $X$ be a $g$-dimensional simple abelian variety and consider the automorphisms and their first dynamical degrees for each type in Table \ref{table1}. Define $B$, $K$, $K_0$, $d$, $e$ and $e_0$ as in Section \ref{Endomorphism algebras of simple abelian varieties}. \begin{flushleft}{\bf{Type 1}}\end{flushleft}\par Since $B=K$ is a totally real number field, an automorphism $f:X\rightarrow X$ can be identified with an element $\alpha\in U_K$ and its conjugates are all real. The degree of $\alpha$ divides $e$ and by the restriction in Table \ref{table1}, it also divides $g=\mathrm{dim}(X)$. Assuming that the first dynamical degree is less than $4$, the conjugates of $\alpha$ are all in the interval $(-2,2)$. Thus, by Lemma \ref{cyclotomic}, the problem is reduced to the deduction of cyclotomic polynomials.\par Conversely, let $K$ be a totally real number field with $[K:\mathbb{Q}]=e$ and take $\alpha\in U_K$ and $m\in\mathbb{Z}_{>0}$. Then by Proposition \ref{construction for Type 1}, there exists an $em$-dimensional simple abelian variety $X$ with the endomorphism algebra $K=\mathrm{End}_\mathbb{Q}(X)$ with $\mathcal{O}_K\subset\mathrm{End}(X)$. Thus for the endomorphism $f:X\rightarrow X$ which corresponds to $\alpha$, the condition $\alpha\in U_K$ implies $\frac{1}{\alpha}\in\mathcal{O}_K$ and so $f:X\rightarrow X$ is an automorphism.\par Therefore, the minimum value of the first dynamical degrees for this type is calculated by \begin{align*} \mathrm{min}\left\{ \begin{array}{ll} \text{the square of the maximal absolute value of the conjugates}\\ \text{of an algebraic unit, whose conjugates are all real, of degree dividing $g$} \end{array} \right\}\setminus\{1\} \end{align*} \begin{flushleft}{\bf{Type 2, Type 3}}\end{flushleft}\par An automorphism $f$ corresponds to an element $\alpha\in B$ and since $B$ is a quaternion algebra over a totally real number field $K$, $\alpha$ has the minimal polynomial $p(x)\in\mathcal{O}_K[x]$ over $K$ of degree $1$ or $2$.\par If the degree is $1$, then $\alpha\in U_K$ and so the deduction about the minimum value is reduced to the case of Type 1.\par If the degree is $2$, then it can be written as $p(x)=x^2+ax+b\in\mathcal{O}_K[x]$ with $b\in U_K$. Now the minimal polynomial of $\alpha$ over $\mathbb{Q}$ satisfies \begin{align*} {P(x)}^m=\prod_{i=1}^e(x^2+\sigma_i(a)x+\sigma_i(b))\in\mathbb{Z}[x] \end{align*} for some $m\in\mathbb{Z}_{>0}$, where $\{\sigma_i\}$ is the set of all $\mathbb{Q}$-embeddings $K\hookrightarrow\mathbb{C}$.\par The first dynamical degree of $f$ is the square of the maximal absolute value of the roots of $P(x)$ and we consider the following three cases.\\ $\bullet$ there exists $1\leq i\leq e$ such that $\abs{\sigma_i(b)}>1$\par If $\abs{\sigma_i(b)}>1$, one of the roots of $x^2+\sigma_i(a)x+\sigma_i(b)$ has absolute value not less than $\sqrt{\abs{\sigma_i(b)}}$ and so the first dynamical degree is not less than $\abs{\sigma_i(b)}$.
$\sigma_i(b)$ are all real and so by assuming that the first dynamical degree is less than $2$, $\sigma_i(b)$ are all in the interval $(-2,2)$ and the remaining is same as the case of Type 1.\\ $\bullet$ $\sigma_i(b)=1$ for all $1\leq i\leq e$\par The roots of $x^2+\sigma_i(a)x+1$ have modulus $1$ if and only if $-2\leq\sigma_i(a)\leq2$. Thus, for small first dynamical degrees except $1$, $\mathrm{max}\{\abs{\sigma_i(a)}\}$ must be larger than $2$ and close to $2$.\\ $\bullet$ $\sigma_i(b)=-1$ for all $1\leq i\leq e$\par The roots of $x^2+\sigma_i(a)x-1$ have modulus $1$ if and only if $\sigma_i(a)=0$. Thus, for the small first dynamical degrees except $1$, $\mathrm{max}\{\abs{\sigma_i(a)}\}$ must be larger than $0$ and close to $0$. Since $\prod_{i=1}^e \sigma_i(a)\in\mathbb{Z}$, $\mathrm{max}\{\abs{\sigma_i(a)}\}\geq1$ for $a\neq0$ and the equality is always achieved by $a=1$. For this case, the minimum value of the maximal absolute value of the conjugates is $\frac{1+\sqrt{5}}{2}$. Thus, $\frac{3+\sqrt{5}}{2}$ is a lower bound of the first dynamical degrees for this case.\par The realizability would be considered independently in the following sections. \begin{flushleft}{\bf{Type 4}}\end{flushleft}\par For the case $d=1$, $B=K$ is a CM-field and an automorphism of $X$ corresponds to $\alpha\in U_{K}$.\par If $\alpha\in K_0$, the deduction is reduced to the case of Type 1.\par If $\alpha\in K\setminus K_0$, by Lemma \ref{unit element}, for denoting $a=\abs{\alpha}^2\in U_{K_0}$, $K_0(\sqrt{a})$ is a totally real number field. The first dynamical degree of $\alpha$ is the square of the maximal absolute value of the conjugate elements of $\alpha$, and so this is equal to the maximal absolute value of the conjugates of $a\in K_0$. In order to search the small first dynamical degree, assume the maximal absolute value of the conjugate elements of $\sqrt{a}$ is less than $2$. If $a=1$, then the first dynamical degree would be $1$ and so we assume $a\neq1$. Then, as in Lemma \ref{cyclotomic}, it can be written as $\sqrt{a}=\zeta_N^m+\frac{1}{\zeta_N^m}$ ($m,N\in\mathbb{Z}_{>0}$) with $m<N$, $\mathrm{gcd}(N,m)=1$ and we can assume $N\neq1,2,3,4,6$. Thus, $a=\left(\zeta_N^m+\frac{1}{\zeta_N^m}\right)^2=\zeta_N^{2m}+\frac{1}{\zeta_N^{2m}}+2$ and so \begin{align*} [\mathbb{Q}(a):\mathbb{Q}]=\left\{ \begin{array}{ll} \frac{1}{2}\varphi(\frac{N}{2}) & (N:\text{even})\\ [+5pt] \frac{1}{2}\varphi(N) & (N:\text{odd}) \end{array}. \right. \end{align*} Now $K_0\supset\mathbb{Q}(a)\supset\mathbb{Q}$ and so we get the candidates of $N$ from this condition.\par Conversely, let $K$ be a CM-field with $[K:\mathbb{Q}]=e=2e_0$ and take $\alpha\in U_K$ and fix $m\in\mathbb{Z}_{>0}$. Now $K$ admits the complex conjugate as a positive anti-involution of the second kind. If $m\in\mathbb{Z}_{\geq3}$ (or $e\geq4$ and $m\in\mathbb{Z}_{\geq2}$) holds, then by Proposition \ref{construction for Type 4}, there exist an $e_0m$-dimensional simple abelian variety $X$ and an automorphism $f:X\rightarrow X$ which corresponds to $\alpha$.\par The case $d\geq2$ would be considered independently in the following sections. 
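The degree formula $[\mathbb{Q}(a):\mathbb{Q}]=\frac{1}{2}\varphi(N)$ for $N$ odd and $\frac{1}{2}\varphi(\frac{N}{2})$ for $N$ even is used repeatedly in the following sections; it can be re-checked with a computer algebra system. The following Python sketch (only a sanity check; it assumes nothing beyond sympy's \texttt{minimal\_polynomial} and \texttt{totient}, and the function names are ours) compares both sides for the values of $N$ that appear later.
\begin{verbatim}
from sympy import cos, pi, Rational, minimal_polynomial, totient, Symbol

x = Symbol('x')

def degree_of_a(N, m=1):
    # a = (zeta_N^m + zeta_N^{-m})^2 = 2 + 2*cos(4*pi*m/N), with gcd(m, N) = 1
    a = 2 + 2 * cos(4 * pi * Rational(m, N))
    return minimal_polynomial(a, x, polys=True).degree()

def predicted(N):
    # phi(N)/2 for N odd, phi(N/2)/2 for N even
    return int(totient(N)) // 2 if N % 2 else int(totient(N // 2)) // 2

for N in [5, 7, 8, 9, 10, 12, 15, 16, 20, 24, 30]:
    assert degree_of_a(N) == predicted(N)
print("degree formula for [Q(a):Q] confirmed for the listed N")
\end{verbatim}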
\section{The minimum value of the first dynamical degrees for simple abelian varieties with prime dimensions}\label{The minimum value of the first dynamical degrees for simple abelian varieties with prime dimensions} Define \begin{align*} m(g):=\mathrm{min}\left\{ \begin{array}{l} \text{the first dynamical degrees of automorphisms}\\ \text{of simple abelian varieties over $\mathbb{C}$ whose dimension is $g$} \end{array} \right\}\setminus\{1\}. \end{align*}\par We prove the next result in this section. \begin{theorem}\label{Result1} For a prime number $p$, we get the next result about $m(p)$. \begin{align*} \begin{cases} m(p)=4\mathrm{cos}^2\left(\frac{\pi}{2p+1}\right) & \text{$(2p+1$ is also a prime$)$} \\ 4+4^{-2p-3}<m(p)<(2.08)^2 & \text{$($otherwise$)$} \end{cases} \end{align*} \end{theorem} \begin{remark} Especially, $(2.08)^2$ is an upper bound of $m(p)$ for any prime number $p$. \end{remark} First, we consider the case $p=2$. In other words, we prove the next theorem. \begin{proposition}\label{case=2} \begin{align*} m(2)=4\mathrm{cos}^2\left(\frac{\pi}{5}\right) \end{align*} \end{proposition} \begin{proof} We consider by dividing into the type of simple abelian varieties in Table \ref{table1}.\par Define $B$, $K$, $K_0$, $d$, $e$ and $e_0$ as in Section \ref{Endomorphism algebras of simple abelian varieties} for each type of $X$. \begin{flushleft}{\bf{Type 1}}\end{flushleft}\par Let $X$ be a $2$-dimensional simple abelian variety of Type 1 and $f\in\mathrm{End}(X)$ be an automorphism of $X$ which corresponds to $\alpha\in U_K$ as in Section \ref{The minimum value of the first dynamical degrees of automorphisms}. Now $[K:\mathbb{Q}]=1$ or $2$ by the restriction in Table \ref{table1}.\par Assuming that the first dynamical degree is less than $4$, the conjugates of $\alpha$ are all in the interval $(-2,2)$. If the degree of $\alpha$ is $1$, then $\alpha=\pm1$ and so the first dynamical degree is $1$ and so we assume the degree of $\alpha$ is $2$. Denote the minimal polynomial of $\alpha$ over $\mathbb{Q}$ by $P(x)$, whose degree is $2$. By Lemma \ref{cyclotomic}, $x^2P(x+\frac{1}{x})$ is an irreducible cyclotomic polynomial and its degree is $4$. Cyclotomic polynomials $\Phi_N(x)$ have degree $4$ only for $N=5,8,10,12$. By comparing with Proposition \ref{constant term of minimal polynomial}, since $P(x)$ has constant term $\pm1$, $N=5,10$ are only possible. Thus, the minimum value of the maximal absolute value of the roots of $P(x)$ is $2\mathrm{cos}\left(\frac{\pi}{5}\right)$ except $1$. The realizability is concerned in Section \ref{The minimum value of the first dynamical degrees of automorphisms}. Thus, the minimum value of the first dynamical degrees is $4\mathrm{cos}^2\left(\frac{\pi}{5}\right)=\left(\frac{1+\sqrt{5}}{2}\right)^2$ except $1$ for this type. \begin{flushleft}{\bf{Type 2, Type 3}}\end{flushleft}\par Let $X$ be a $2$-dimensional simple abelian variety of Type 2 or 3 and $f\in\mathrm{End}(X)$ be an automorphism of $X$ which corresponds to $\alpha\in B$. Now $[K:\mathbb{Q}]=1$ by the restriction in Table \ref{table1} and so $K=\mathbb{Q}$.\par Denote the minimal polynomial of $\alpha$ over $\mathbb{Q}$ by $P(x)$. If the degree of $P(x)$ is $1$, then $\alpha\in U_\mathbb{Q}=\{\pm1\}$ and so the first dynamical degree is $1$.\par If the degree of $P(x)$ is $2$, then it can be written as $P(x)=x^2+ax+b\in\mathbb{Z}[x](b\in\{\pm1\})$ and consider the following 3 cases as in Section \ref{The minimum value of the first dynamical degrees of automorphisms}. 
Let $\sigma_1$ be the $\mathbb{Q}$-embedding $\mathbb{Q}\hookrightarrow\mathbb{C}$.\\ $\bullet$ there exists $1\leq i\leq e$ such that $\abs{\sigma_i(b)}>1$\par This case would not be occured for this condition.\\ $\bullet$ $\sigma_i(b)=1$ for all $1\leq i\leq e$\par Since $\mathrm{max}\{\abs{\sigma_i(a)}\}$ must be larger than $2$ and close to $2$, $a=3$ provides the minimum value of the maximal absolute value of the conjugates of $\alpha$. Thus, $\left(\frac{3+\sqrt{5}}{2}\right)^2$ is a lower bound except $1$ for this case.\\ $\bullet$ $\sigma_i(b)=-1$ for all $1\leq i\leq e$\par $a=1$ provides the minimum value of the first dynamical degrees as in Section \ref{The minimum value of the first dynamical degrees of automorphisms}. Thus, $\left(\frac{1+\sqrt{5}}{2}\right)^2$ is a lower bound except $1$ for this case.\par Thus, $\left(\frac{1+\sqrt{5}}{2}\right)^2$ is a lower bound of the first dynamical degrees except $1$ for these types. \begin{flushleft}{\bf{Type 4}}\end{flushleft}\par Let $X$ be a $2$-dimensional simple abelian variety of Type 4 and $f\in\mathrm{End}(X)$ be an automorphism of $X$ which corresponds to $\alpha\in U_K$. Now $d=1$ and $[K:\mathbb{Q}]=2$ or $4$ by the restriction in Table \ref{table1} and then $[K_0:\mathbb{Q}]=1$ or $2$, respectively. Denote the minimal polynomial of $\alpha$ over $\mathbb{Q}$ by $P(x)$.\par If $\alpha\in K_0$, the minimum value of the square of the maximal absolute value of the roots of $P(x)$ is $\left(\frac{1+\sqrt{5}}{2}\right)^2$ except $1$, by the deduction in Type 1.\par If $\alpha\in K\setminus K_0$, by denoting $a=\abs{\alpha}^2\in U_{K_0}$, as in Section \ref{The minimum value of the first dynamical degrees of automorphisms}, $\mathbb{Q}(\sqrt{a})$ is a totally real number field. If $a=1$, then the first dynamical degree would be $1$ and so we assume $a\neq1$. Also, by assuming that the maximal absolute value of the conjugates of $\sqrt{a}$ is less than $2$, it can be written as $\sqrt{a}=\zeta_N^m+\frac{1}{\zeta_N^m}$ ($m,N\in\mathbb{Z}_{>0}$) with $m<N$, $\mathrm{gcd}(N,m)=1$ and we can assume $N\neq1,2,3,4,6$. Thus, $a=\left(\zeta_N^m+\frac{1}{\zeta_N^m}\right)^2=\zeta_N^{2m}+\frac{1}{\zeta_N^{2m}}+2$ and so \begin{align*} [\mathbb{Q}(a):\mathbb{Q}]=\left\{ \begin{array}{ll} \frac{1}{2}\varphi(\frac{N}{2}) & (N:\text{even})\\ [+5pt] \frac{1}{2}\varphi(N) & (N:\text{odd}) \end{array}. \right. \end{align*} Now $K_0\supset\mathbb{Q}(a)\supset\mathbb{Q}$ and so $[\mathbb{Q}(a):\mathbb{Q}]=1$ or $2$, and this implies $N=5,8,10,12,16,20,24$. By comparing with Proposition \ref{constant term of minimal polynomial}, since $P(x)$ has constant term $\pm1$, $N=5,10,24$ are only possible. Thus, the minimum value of the maximal absolute value of the roots of $P(x)$ is $2\mathrm{cos}\left(\frac{\pi}{5}\right)$ except $1$. Thus, $4\mathrm{cos}^2\left(\frac{\pi}{5}\right)=\left(\frac{1+\sqrt{5}}{2}\right)^2$ is a lower bound of the first dynamical degrees except $1$ for this type.\par Therefore, the minimum value of the first dynamical degrees is $\frac{3+\sqrt{5}}{2}=4\mathrm{cos}^2(\frac{\pi}{5})$ and realized on Type 1. \end{proof} Next, we consider the case that $p$ is a Sophie Germain prime except for $p=2$. \begin{proposition}\label{case for Sophie Germain prime} For a prime number $p\neq2$ such that $2p+1$ is also a prime, \begin{align*} m(p)=4\mathrm{cos}^2\left(\frac{\pi}{2p+1}\right) \end{align*} \end{proposition} \begin{proof} By the restriction in Table \ref{table1}, it suffices to consider only for Type 1 and 4. 
Define $B$, $K$, $K_0$, $d$, $e$ and $e_0$ as in Section \ref{Endomorphism algebras of simple abelian varieties} for each type of $X$. \begin{flushleft}{\bf{Type 1}}\end{flushleft}\par Let $X$ be a $p$-dimensional simple abelian variety of Type 1 and $f\in\mathrm{End}(X)$ be an automorphism of $X$ which corresponds to $\alpha\in U_K$ as in Section \ref{The minimum value of the first dynamical degrees of automorphisms}. Now $[K:\mathbb{Q}]=1$ or $p$ by the restriction in Table \ref{table1}.\par Assuming that the first dynamical degree is less than $4$, the conjugates of $\alpha$ are all in the interval $(-2,2)$. If the degree of $\alpha$ is $1$, then $\alpha=\pm1$ and so the first dynamical degree is $1$, and so we assume the degree of $\alpha$ is $p$. Denote the minimal polynomial of $\alpha$ over $\mathbb{Q}$ by $P(x)$, whose degree is $p$. By Lemma \ref{cyclotomic}, $x^pP(x+\frac{1}{x})$ is an irreducible cyclotomic polynomial and its degree is $2p$. Cyclotomic polynomials $\Phi_N(x)$ have degree $2p$ only for $N=2p+1,4p+2$ (but for $p=3$, $N=7,9,14,18$). By comparing with Proposition \ref{constant term of minimal polynomial}, since $P(x)$ has constant term $\pm1$, $N=2p+1,4p+2$ are possible (but for $p=3$, $N=7,9,14,18$ are possible). Thus, the minimum value of the maximal absolute value of the roots of $P(x)$ is $2\mathrm{cos}\left(\frac{\pi}{2p+1}\right)$ except $1$. The realizability is discussed in Section \ref{The minimum value of the first dynamical degrees of automorphisms}. Thus, the minimum value of the first dynamical degrees is $4\mathrm{cos}^2\left(\frac{\pi}{2p+1}\right)$ except $1$ for this type. \begin{flushleft}{\bf{Type 4}}\end{flushleft}\par Let $X$ be a $p$-dimensional simple abelian variety of Type 4 and $f\in\mathrm{End}(X)$ be an automorphism of $X$ which corresponds to $\alpha\in U_K$. Now $d=1$ and $[K:\mathbb{Q}]=2$ or $2p$ by the restriction in Table \ref{table1} and then $[K_0:\mathbb{Q}]=1$ or $p$, respectively. Denote the minimal polynomial of $\alpha$ over $\mathbb{Q}$ by $P(x)$.\par If $\alpha\in K_0$, the minimum value of the square of the maximal absolute value of the roots of $P(x)$ is $4\mathrm{cos}^2\left(\frac{\pi}{2p+1}\right)$ except $1$, by the deduction in Type 1.\par If $\alpha\in K\setminus K_0$, by denoting $a=\abs{\alpha}^2\in U_{K_0}$, as in Section \ref{The minimum value of the first dynamical degrees of automorphisms}, $\mathbb{Q}(\sqrt{a})$ is a totally real number field. If $a=1$, then the first dynamical degree would be $1$ and so we assume $a\neq1$. Also, by assuming that the maximal absolute value of the conjugates of $\sqrt{a}$ is less than $2$, it can be written as $\sqrt{a}=\zeta_N^m+\frac{1}{\zeta_N^m}$ ($m,N\in\mathbb{Z}_{>0}$) with $m<N$, $\mathrm{gcd}(N,m)=1$ and we can assume $N\neq1,2,3,4,6$. Thus, $a=\left(\zeta_N^m+\frac{1}{\zeta_N^m}\right)^2=\zeta_N^{2m}+\frac{1}{\zeta_N^{2m}}+2$ and so \begin{align*} [\mathbb{Q}(a):\mathbb{Q}]=\left\{ \begin{array}{ll} \frac{1}{2}\varphi(\frac{N}{2}) & (N:\text{even})\\ [+5pt] \frac{1}{2}\varphi(N) & (N:\text{odd}) \end{array}. \right. \end{align*} Now $K_0\supset\mathbb{Q}(a)\supset\mathbb{Q}$ and so $[\mathbb{Q}(a):\mathbb{Q}]=1$ or $p$, and this implies $N=8,12,2p+1,4p+2,8p+4$ (but for $p=3$, $N=7,8,9,12,14,18,28,36$). By comparing with Proposition \ref{constant term of minimal polynomial}, since $P(x)$ has constant term $\pm1$, $N=2p+1,4p+2$ are only possible (but for $p=3$, $N=7,9,14,18$ are possible).
Thus, the minimum value of the maximal absolute value of the roots of $P(x)$ is $2\mathrm{cos}\left(\frac{\pi}{2p+1}\right)$ except $1$. Thus, $4\mathrm{cos}^2\left(\frac{\pi}{2p+1}\right)$ is a lower bound of the first dynamical degrees except $1$ for this type.\par Therefore, the minimum value of the first dynamical degrees is $4\mathrm{cos}^2(\frac{\pi}{2p+1})$ and realized on Type 1. \end{proof} Next, we consider the case that $p$ is not a Sophie Germain prime. First, we prove the next lemma. \begin{lemma}\label{unrealizable first dynamical degrees} The first dynamical degree of an automorphism of a simple abelian variety is not equal to $n^2$ for an integer $n\in\mathbb{Z}_{\geq2}$. Especially, the first dynamical degree of an automorphism is not equal to $4$. \end{lemma} \begin{proof} Suppose that there is an automorphism $f:X\rightarrow X$ of a simple abelian variety $X$ which has the first dynamical degree $n^2$. Let $B$ be the endomorphism algebra of $X$. Let $\alpha\in B$ be the element corresponding to $f:X\rightarrow X$ and denote the minimal polynomial of $\alpha$ over $\mathbb{Q}$ by $P(x)\in\mathbb{Z}[x]$. Since $f$ is an automorphism, the constant term of $P(x)$ is $\pm1$.\par By the assumption that the first dynamical degree of $f$ is $n^2$, the maximal absolute value of the roots of $P(x)$ is $n$, and denote this root by $z$. Now $\abs{z}=n$ and $\abs{\frac{1}{z}}=\frac{1}{n}$; since the constant term of $P(x)$ is $\pm1$, $\frac{1}{z}$ is an algebraic integer, and so $\frac{1}{z}\cdot\frac{1}{\bar{z}}=\frac{1}{n^2}$ is also an algebraic integer, a contradiction. \end{proof} \begin{proposition}\label{case otherwise} For a prime number $p$ such that $2p+1$ is not a prime, \begin{align*} m(p)=\mathrm{min}\left\{ \begin{array}{ll} \text{the maximal absolute value of the conjugates of an}\\ \text{algebraic unit, whose conjugates are all real and positive, of degree $p$} \end{array} \right\} \end{align*} and also $m(p)$ is larger than $4+4^{-2p-3}$. \end{proposition} \begin{remark} The minimum value of the right side exists since the coefficients of the minimal polynomial are bounded in terms of the maximal absolute value of its roots, by the triangle inequality. As a result, this equation holds for all prime numbers. \end{remark} \begin{proof}[Proof of Proposition \ref{case otherwise}] By the restriction in Table \ref{table1}, it suffices to consider only Type 1 and 4. Define $B$, $K$, $K_0$, $d$, $e$ and $e_0$ as in Section \ref{Endomorphism algebras of simple abelian varieties} for each type of $X$. \begin{flushleft}{\bf{Type 1}}\end{flushleft}\par Let $X$ be a $p$-dimensional simple abelian variety of Type 1 and $f\in\mathrm{End}(X)$ be an automorphism of $X$ which corresponds to $\alpha\in U_K$ as in Section \ref{The minimum value of the first dynamical degrees of automorphisms}. Now $[K:\mathbb{Q}]=1$ or $p$ by the restriction in Table \ref{table1}.\par If $\alpha$ has degree $1$, then $\alpha=\pm1$ and the first dynamical degree is $1$ and so we assume $\alpha$ has degree $p$. Thus, by the deduction in Section \ref{The minimum value of the first dynamical degrees of automorphisms}, the set of the first dynamical degrees except $1$ for this type is \begin{align*} \mathcal{A}:=\left\{ \begin{array}{ll} \text{the square of the maximal absolute value of the conjugates}\\ \text{of an algebraic unit, whose conjugates are all real, of degree $p$} \end{array} \right\}.
\end{align*} \begin{flushleft}{\bf{Type 4}}\end{flushleft}\par Let $X$ be a $p$-dimensional simple abelian variety of Type 4 and $f\in\mathrm{End}(X)$ be an automorphism of $X$ which corresponds to $\alpha\in U_K$. Now $d=1$ and $[K:\mathbb{Q}]=2$ or $2p$ by the restriction in Table \ref{table1} and then $[K_0:\mathbb{Q}]=1$ or $p$, respectively. Denote the minimal polynomial of $\alpha$ over $\mathbb{Q}$ by $P(x)$.\par If $\alpha\in K_0$, the set of the first dynamical degrees except $1$ is in \begin{align*} \mathcal{A}:=\left\{ \begin{array}{ll} \text{the square of the maximal absolute value of the conjugates}\\ \text{of an algebraic unit, whose conjugates are all real, of degree $p$} \end{array} \right\}, \end{align*} by the deduction in Type 1.\par If $\alpha\in K\setminus K_0$, by denoting $a=\abs{\alpha}^2\in U_{K_0}$, as in Section \ref{The minimum value of the first dynamical degrees of automorphisms}, $\mathbb{Q}(\sqrt{a})$ is a totally real number field. If $a$ has degree $1$, then $a=1$ and so the first dynamical degree would be $1$ and so we assume $a$ has degree $p$. The first dynamical degree of $f$ is the square of the maximal absolute value of the conjugates of $\alpha$ and it is equal to the maximal absolute value of the conjugates of $a$. Now $a$ is a totally positive algebraic unit of degree $p$, the first dynamical degrees except $1$ are contained in \begin{align*} \mathcal{A}':=\left\{ \begin{array}{ll} \text{the maximal absolute value of the conjugates of an}\\ \text{algebraic unit, whose conjugates are all real and positive, of degree $p$} \end{array} \right\}. \end{align*}\par Let $a$ be an element of $\mathcal{A}'$. If $a$ is an element of $\mathcal{A}$, $a$ is realizable as the first dynamical degree of an automorphism of a simple abelian variety of Type 1. If $a$ is not an element of $\mathcal{A}$, then $\sqrt{a}$ is of degree $2p$. $a$ is a totally positive algebraic unit of degree $p$ and let $K=\mathbb{Q}(\sqrt{-a})$, $K_0=\mathbb{Q}(a)$, $d=1$, $e=2p$, $e_0=p$ and $m=1$. Now $K$ is a CM-field by Lemma \ref{unit element}. $K$ admits the complex conjugate as a positive anti-involution of the second kind, so by Proposition \ref{construction for Type 4}, there exist a $p$-dimensional abelian variety $X$ and an automorphism $f:X\rightarrow X$ which corresponds to $\sqrt{-a}$. Then, $a$ is the first dynamical degree of $f$ and by Lemma \ref{prove simplicity}, $X$ must be a simple abelian variety. Therefore, every element of $\mathcal{A}'$ is realizable as the first dynamical degree of an automorphism of some $p$-dimensional simple abelian variety.\par Let $r$ be an algebraic unit, whose conjugates are all real, of degree $p$. We can assume that the maximal absolute value of the conjugates of $r$ is $\abs{r}$. Then, $r^2$ is an algebraic unit, whose conjugates are all real and positive, and the maximal absolute value of the conjugate of $r^2$ is $r^2=\abs{r}^2$. Since $\mathbb{Q}(r)\supset\mathbb{Q}(r^2)\supset\mathbb{Q}$ and $[\mathbb{Q}(r):\mathbb{Q}(r^2)]=1$ or $2$, the degree of $r^2$ is $p$. Thus, $\mathcal{A}\subset\mathcal{A}'$ and so \begin{align*} m(p)=\mathrm{min}(\mathcal{A}'). \end{align*}\par It remains to show $m(p)>4+4^{-2p-3}$. If $m(p)<4$, then there exists a totally real algebraic unit $r$ of degree $p$, whose conjugates are all inside the interval $(0,4)$. Thus, $r-2$ is also a totally real algebraic integer and so by Lemma \ref{cyclotomic}, the minimal polynomial of $r-2$ is written as $\Psi_N(x)$ for some $N\geq3$. 
The degree of this polynomial is $p=\frac{1}{2}\varphi(N)$, but this cannot occur by the assumption that $2p+1$ is not a prime. Therefore, combining this with Lemma \ref{unrealizable first dynamical degrees}, $m(p)>4$. Let $r'$ be an algebraic unit of degree $p$ whose conjugates are all real and positive; since $m(p)>4$, at least one conjugate of $r'$ is larger than $4$. Then $r'-2$ is a totally real algebraic integer and the maximal absolute value of its conjugates is larger than $2$. Thus, by \cite[Theorem 2]{SZ65}, the maximal absolute value of the conjugates of $r'-2$ is larger than $2+4^{-2p-3}$, and this implies that at least one conjugate of $r'$ is larger than $4+4^{-2p-3}$. Therefore, $m(p)>4+4^{-2p-3}$. \end{proof} \begin{lemma}\label{prove simplicity} Let $p$ be an odd prime number, $a$ be a totally positive algebraic unit of degree $p$ and $X$ be a $p$-dimensional abelian variety. Assume $\sqrt{a}$ is of degree $2p$ and denote $\alpha=\sqrt{-a}$. If $\alpha\in\mathrm{End}_\mathbb{Q}(X)$, then $X$ is a simple abelian variety. \end{lemma} \begin{proof} Let $f(x)\in\mathbb{Z}[x]$ be the minimal polynomial of $a$ and so its constant term is $\pm1$. Then the minimal polynomial of $\alpha$ over $\mathbb{Q}$ is $g(x)=-f(-x^2)\in\mathbb{Z}[x]$ of degree $2p$.\par Assume $X$ is not a simple abelian variety. By Poincar\'e's complete reducibility theorem (\cite[Theorem 5.3.7]{BL04}), there is an isogeny \begin{align*} X\rightarrow X_1^{n_1}\times\cdots\times X_r^{n_r}, \end{align*} where $X_i$ are simple abelian varieties which are not isogenous to each other. Denote $\mathrm{dim}(X_i)=g_i$ and then $p=\sum_{i=1}^r g_in_i$. Moreover, by its corollary (cf.\ \cite[Corollary 5.3.8]{BL04}), \begin{align*} \mathrm{End}_\mathbb{Q}(X)\simeq\mathrm{M}_{n_1}(\mathrm{End}_\mathbb{Q}(X_1))\oplus\cdots\oplus \mathrm{M}_{n_r}(\mathrm{End}_\mathbb{Q}(X_r)), \end{align*} where $\mathrm{End}_\mathbb{Q}(X_i)$ are division rings.\par $\alpha\in\mathrm{End}_\mathbb{Q}(X)$ is decomposed as $\alpha=\alpha_1+\cdots+\alpha_r$ with $\alpha_i\in \mathrm{M}_{n_i}(\mathrm{End}_\mathbb{Q}(X_i))$. Each $\alpha_i$ has the minimal polynomial $p_i(x)$ over $\mathbb{Q}$ and its degree is at most $2g_in_i$. The minimal polynomial $g(x)\in\mathbb{Z}[x]$ of $\alpha$ is a factor of $p_1(x)\cdots p_r(x)$ and by the inequality \begin{align*} 2p=\mathrm{deg}(g(x))\leq\sum_{i=1}^r\mathrm{deg}(p_i(x))\leq\sum_{i=1}^r 2g_in_i=2p, \end{align*} $g(x)$ coincides with $p_1(x)\cdots p_r(x)$; since $g(x)$ is irreducible, this implies $r=1$. Since $X$ is not simple, the case $g_1=1$, $n_1=p$ is only possible. For this case, the minimal polynomial of $\alpha_1$ is $g(x)$.\par Now \begin{align*} \mathrm{End}_\mathbb{Q}(X)=\mathrm{M}_{p}(\mathrm{End}_\mathbb{Q}(X_1)) \end{align*} and since $\alpha_1\in \mathrm{M}_{p}(\mathrm{End}_\mathbb{Q}(X_1))$ has the minimal polynomial of degree $2p$ over $\mathbb{Q}$, $\mathrm{End}_\mathbb{Q}(X_1)\neq\mathbb{Q}$. By Table \ref{table1}, $X_1$ is of Type 4 and $\mathrm{End}_\mathbb{Q}(X_1)$ is an imaginary quadratic field. Denote $\mathrm{End}_\mathbb{Q}(X_1)=\mathbb{Q}(\sqrt{-D})$ for some square-free integer $D\in\mathbb{Z}_{>0}$. The polynomial $g(x)$ is written as \footnotesize \begin{align*} g(x)&=(x^p+(a_1+b_1\sqrt{-D})x^{p-1}+\cdots+(a_p+b_p\sqrt{-D}))(x^p+(a_1-b_1\sqrt{-D})x^{p-1}+\cdots+(a_p-b_p\sqrt{-D}))\tag*{($\ast$)}\\ &=g_1(x)g_2(x) \end{align*} \normalsize with $a_i+b_i\sqrt{-D}\in\mathcal{O}_{\mathbb{Q}(\sqrt{-D})}$. If $D\equiv1,2\ (\mathrm{mod}\ 4)$, then $a_i,b_i\in\mathbb{Z}$ and if $D\equiv3\ (\mathrm{mod}\ 4)$, then $a_i,b_i\in\frac{1}{2}\mathbb{Z}$.
Now the roots of $g(x)$ are purely imaginary and this implies $g_1(x)=(-1)^pg_2(-x)$. Thus, $a_p+b_p\sqrt{-D}=-a_p+b_p\sqrt{-D}$ and so $a_p=0$. By considering the constant term of the equation ($\ast$), \begin{align*} \pm1=a_p^2+b_p^2D=b_p^2D \end{align*} and so $D=1$.\par Thus, it can be said that $\mathbb{Q}(\alpha)\supset\mathbb{Q}(i)$ and therefore $\mathbb{Q}(\alpha)\supset\mathbb{Q}(\sqrt{a})\supset\mathbb{Q}(a)$. By the assumption, $[\mathbb{Q}(\sqrt{a}):\mathbb{Q}(a)]=2$ and this implies $\mathbb{Q}(\alpha)=\mathbb{Q}(\sqrt{a})$. This contradicts $\mathbb{Q}(\alpha)\not\subset\mathbb{R}$ and $\mathbb{Q}(\sqrt{a})\subset\mathbb{R}$, and so $X$ is a simple abelian variety. \end{proof} Next, for proving Theorem \ref{Result1}, we would like to prove $m(p)<(2.08)^2$ for a prime number $p$ which is not a Sophie Germain prime. This result follows from the next proposition. \begin{proposition}\label{Main Proposition} For any prime number $p>3$, there exists a monic irreducible polynomial $T(x)\in\mathbb{Z}[x]$ of degree $p$, whose constant term is $\pm1$ and whose roots are all real and inside the interval $(-2.08,2.08)$. \end{proposition} \begin{remark} The proposition holds also for $p=2,3$, but for simplifying the proof, we assume $p>3$. \end{remark} For proving this proposition, we use the next lemmas. \begin{lemma}\label{make Salem polynomials} Define the polynomials \begin{align*} P_n(x)= \frac{x^{n-2}(x^3-x-1)+(x^3+x^2-1)}{x-1},\ Q_m(x)=x^{m-3}(x^3-x-1)-(x^3+x^2-1) \end{align*} for $n\geq3$ and $m\geq4$. The next properties hold. \begin{enumerate} \item For any $n\geq3$, there is at most one root of modulus larger than $1$, counted with multiplicity, among the roots of $P_n(x)$, and it is a real root. \item For any $m\geq4$, there is at most one root of modulus larger than $1$, counted with multiplicity, among the roots of $Q_m(x)$, and it is a real root. \end{enumerate} \end{lemma} \begin{proof} $x^3-x-1$ is the minimal polynomial of the smallest Pisot number and the lemma holds by the proof of \cite[Theorem 6.4.1]{BDGPS92}. \end{proof} \begin{lemma}\label{cyclotomic factor} The cyclotomic polynomials which divide $Q_m(x)\in\mathbb{Z}[x]$ for some $m\geq4$ are only $\Phi_2(x), \Phi_8(x), \Phi_{12}(x), \Phi_{18}(x), \Phi_{30}(x)$. \end{lemma} \begin{proof} For $N\geq1$, if $\Phi_N(x)$ divides $Q_m(x)$ for some $m\geq4$, then $Q_m(\zeta_N)=0$ for some $m\geq4$. Thus, \begin{align*} \zeta_N^{3-m}=\frac{\zeta_N^3-\zeta_N-1}{\zeta_N^3+\zeta_N^2-1}=-\zeta_N^{-3}\frac{1+\zeta_N-\zeta_N^3}{1+\zeta_N^{-1}-\zeta_N^{-3}}\tag*{($\ast\ast$)} \end{align*} and for $N=1$ to $8$, the right-hand side is calculated as \begin{align*} &N=1:\frac{\zeta_N^3-\zeta_N-1}{\zeta_N^3+\zeta_N^2-1}=-1,\quad &N=2&:\frac{\zeta_N^3-\zeta_N-1}{\zeta_N^3+\zeta_N^2-1}=1,\\ &N=3:\frac{\zeta_N^3-\zeta_N-1}{\zeta_N^3+\zeta_N^2-1}=-\zeta_3^2,\quad &N=4&:\frac{\zeta_N^3-\zeta_N-1}{\zeta_N^3+\zeta_N^2-1}=\frac{4+3i}{5},\\ &N=5:\frac{\zeta_N^3-\zeta_N-1}{\zeta_N^3+\zeta_N^2-1}=-\zeta_5^3,\quad &N=6&:\frac{\zeta_N^3-\zeta_N-1}{\zeta_N^3+\zeta_N^2-1}=\frac{11+5\sqrt{3}i}{14},\\ &N=7:\frac{\zeta_N^3-\zeta_N-1}{\zeta_N^3+\zeta_N^2-1}=\frac{3+\sqrt{7}i}{4},\quad &N=8&:\frac{\zeta_N^3-\zeta_N-1}{\zeta_N^3+\zeta_N^2-1}=\zeta_8. \end{align*} Since the left-hand side is a power of $\zeta_N$, only $N=2,8$ are possible.
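This finite check can also be carried out numerically: the following short Python sketch (only a floating-point sanity check, using the standard \texttt{cmath} module) evaluates the right-hand side of ($\ast\ast$) for $N=1,\ldots,8$ and tests whether it is an $N$-th root of unity, which is necessary for ($\ast\ast$) to hold for some $m\geq4$.
\begin{verbatim}
import cmath

def rhs(N):
    z = cmath.exp(2j * cmath.pi / N)            # zeta_N
    return (z**3 - z - 1) / (z**3 + z**2 - 1)   # right-hand side of (**)

# (**) forces the right-hand side to be a power of zeta_N, hence an
# N-th root of unity; we test this necessary condition numerically.
for N in range(1, 9):
    val = rhs(N)
    print(N, abs(val**N - 1) < 1e-9)
# Only N = 2 and N = 8 pass the test, in agreement with the list above.
\end{verbatim}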
For $N>8$, the equation ($\ast\ast$) implies \begin{align*} \mathrm{arg}(1+\zeta_N-\zeta_N^3)=-\frac{\pi}{2N}, -\frac{\pi}{N}, -\frac{3\pi}{2N}, \ldots \end{align*} and this is equivalent to \begin{align*} \angle{PAQ}=\frac{\pi}{2N}, \frac{\pi}{N}, \frac{3\pi}{2N}, \ldots \end{align*} in Figure \ref{figure1}, Figure \ref{figure2} or Figure \ref{figure3}. By some coordinate calculation, it is also equivalent to \begin{align*} \frac{2\mathrm{sin}\left(\frac{2\pi}{N}\right)\mathrm{cos}\left(\frac{4\pi}{N}\right)}{1+2\mathrm{sin}\left(\frac{2\pi}{N}\right)\mathrm{sin}\left(\frac{4\pi}{N}\right)}=\mathrm{tan}\left(\frac{\pi}{2N}\right), \mathrm{tan}\left(\frac{\pi}{N}\right), \mathrm{tan}\left(\frac{3\pi}{2N}\right), \ldots. \end{align*} \begin{enumerate} \item{$9\leq N<18$}\\ See Figure \ref{figure1}. For this case, point A is outside of the unit circle, so by the inscribed\\ angle theorem, $\angle{PAQ}<\frac{2\pi}{N}$ and so $\angle{PAQ}=\frac{\pi}{2N},\frac{\pi}{N},\frac{3\pi}{2N}$. By observing the equations \begin{equation*} \frac{2\mathrm{sin}(\theta)\mathrm{cos}(2\theta)}{1+2\mathrm{sin}(\theta)\mathrm{sin}(2\theta)}=\mathrm{tan}\left(\frac{\theta}{4}\right), \mathrm{tan}\left(\frac{\theta}{2}\right), \mathrm{tan}\left(\frac{3\theta}{4}\right) \end{equation*} for $\frac{\pi}{9}<\theta\leq\frac{2\pi}{9}$, then $\theta=\frac{\pi}{6}$ and this correeponds to the case $N=12$. \item{$N=18$}\\ See Figure \ref{figure2}. For this case, point A is on the unit circle, so by the inscribed angle theorem, $\angle{PAQ}=\frac{2\pi}{N}$, and this is the case. \item{$18<N$}\\ See Figure \ref{figure3}. For this case, point A is inside of the unit circle, so by the inscribed angle theorem, $\angle{PAQ}>\frac{2\pi}{N}$. Also, by $\angle{PBA}=\frac{2\pi}{N}$ and $AB<1<AP$, $\angle{PAQ}=\angle{PBA}+\angle{APB}<2\angle{PBA}=\frac{4\pi}{N}$ and so $\angle{PAQ}=\frac{5\pi}{2N},\frac{3\pi}{N},\frac{7\pi}{2N}$. By observing the equations \begin{equation*} \frac{2\mathrm{sin}(\theta)\mathrm{cos}(2\theta)}{1+2\mathrm{sin}(\theta)\mathrm{sin}(2\theta)}=\mathrm{tan}\left(\frac{5\theta}{4}\right), \mathrm{tan}\left(\frac{3\theta}{2}\right), \mathrm{tan}\left(\frac{7\theta}{4}\right) \end{equation*} for $0<\theta<\frac{\pi}{9}$, then $\theta=\frac{\pi}{15}$ and this correeponds to the case $N=30$. \end{enumerate} \begin{figure} \caption{$9<N<18$} \label{figure1} \caption{$N=18$} \label{figure2} \end{figure} \begin{figure} \caption{$18<N$} \label{figure3} \end{figure} \end{proof} \begin{remark}\label{Remark} For $0<t<2$, the equation \begin{align*} \frac{2\mathrm{sin}(\theta)\mathrm{cos}(2\theta)}{1+2\mathrm{sin}(\theta)\mathrm{sin}(2\theta)}=\mathrm{tan}(t\theta) \end{align*} has at most one root in the interval $\left(0, \frac{\pi}{4}\right)$ (see Appendix \ref{appendix b}) and so the way for searching the suitable $N\in\mathbb{Z}_{>0}$ is returned to substituting integers into the equations in the proof. 
\end{remark} \begin{remark}\label{equivalent condition2} By using the equation ($\ast\ast$) in the proof of the lemma, it can be said that \begin{align*} &Q_m(x)\text{ is divided by }\Phi_2(x)=x+1\quad\Longleftrightarrow\quad m\equiv1\ (\mathrm{mod}\ 2)\\ &Q_m(x)\text{ is divided by }\Phi_8(x)=x^4+1\quad\Longleftrightarrow\quad m\equiv2\ (\mathrm{mod}\ 8)\\ &Q_m(x)\text{ is divided by }\Phi_{12}(x)=x^4-x^2+1\quad\Longleftrightarrow\quad m\equiv1\ (\mathrm{mod}\ 12)\\ &Q_m(x)\text{ is divided by }\Phi_{18}(x)=x^6-x^3+1\quad\Longleftrightarrow\quad m\equiv17\ (\mathrm{mod}\ 18)\\ &Q_m(x)\text{ is divided by }\Phi_{30}(x)=x^8+x^7-x^5-x^4-x^3+x+1\quad\Longleftrightarrow\quad m\equiv24\ (\mathrm{mod}\ 30) \end{align*} \end{remark} \begin{lemma}\label{cyclotomic factor for P_n} The cyclotomic polynomials which divide $P_n(x)\in\mathbb{Z}[x]$ for some $n\geq10$ are only $\Phi_2(x), \Phi_3(x), \Phi_5(x), \Phi_8(x), \Phi_{12}(x), \Phi_{18}(x), \Phi_{30}(x)$. \end{lemma} \begin{proof} The proof is proceeding as the proof of Lemma \ref{cyclotomic factor}. For $1\leq N\leq8$, $N=2,3,5,8$ are only possible. Also for $N>8$, the only different point is the candidates for $\angle{PAQ}$ are changed to $\angle{PAQ}=\frac{\pi}{N}, \frac{2\pi}{N}, \frac{3\pi}{N}$ and $N=12,18,30$ are possible. \end{proof} \begin{remark}\label{equivalent condition1} As in Remark \ref{equivalent condition2}, it can be said that \begin{align*} &P_n(x)\text{ is divided by }\Phi_2(x)=x+1\quad\Longleftrightarrow\quad n\equiv1\ (\mathrm{mod}\ 2)\\ &P_n(x)\text{ is divided by }\Phi_3(x)=x^2+x+1\quad\Longleftrightarrow\quad n\equiv0\ (\mathrm{mod}\ 3)\\ &P_n(x)\text{ is divided by }\Phi_5(x)=x^4+x^3+x^2+x+1\quad\Longleftrightarrow\quad n\equiv4\ (\mathrm{mod}\ 5)\\ &P_n(x)\text{ is divided by }\Phi_8(x)=x^4+1\quad\Longleftrightarrow\quad n\equiv5\ (\mathrm{mod}\ 8)\\ &P_n(x)\text{ is divided by }\Phi_{12}(x)=x^4-x^2+1\quad\Longleftrightarrow\quad n\equiv6\ (\mathrm{mod}\ 12)\\ &P_n(x)\text{ is divided by }\Phi_{18}(x)=x^6-x^3+1\quad\Longleftrightarrow\quad n\equiv7\ (\mathrm{mod}\ 18)\\ &P_n(x)\text{ is divided by }\Phi_{30}(x)=x^8+x^7-x^5-x^4-x^3+x+1\quad\Longleftrightarrow\quad n\equiv8\ (\mathrm{mod}\ 30) \end{align*} \end{remark} \begin{remark} Similar results for Lemma \ref{cyclotomic factor for P_n} and Remark \ref{equivalent condition1} are stated in \cite[Theorem 1.1]{GHM09}. \end{remark} \begin{lemma}\label{simpliity of roots} For each $n\geq10$, the roots of $P_n(x)$ are all simple. Also, for each $m\geq4$, the roots of $Q_m(x)$ are all simple. \end{lemma} \begin{proof} Every $P_n(x)$ and $Q_m(x)$ has at most one root of modulus larger than $1$, so this root and its conjugates are all simple. The simplicity of the other roots can be checked by using Remark \ref{equivalent condition1} and Remark \ref{equivalent condition2} for each cyclotomic polynomial. \end{proof} \begin{remark} Assume $n\geq10$ and $m\geq4$. Dividing $P_n(x)$ (resp. $Q_m(x)$) by its cyclotomic factor as many as possible, the remaining polynomial is constant or the polynomial which has only one root of modulus larger than $1$ by Lemma \ref{make Salem polynomials}.\par By Lemma \ref{simpliity of roots}, the dividing operation is ended at most $7$ times (resp. $5$ times) and the degree decreases at most $29$ (resp. $23$). Thus, by Table \ref{appendix 1} and Table \ref{appendix 2} in Appendix \ref{appendix a}, the remaining polynomials have degree at least $4$ and so these are Salem polynomials. Write $P_n(x)=C_n(x)S_n(x)$ (resp. 
$Q_m(x)=C'_m(x)S'_m(x)$) where $C_n(x)$ (resp. $C'_m(x)$) is a product of cyclotomic polynomials and $S_n(x)$ (resp. $S'_m(x)$) is a Salem polynomial. \end{remark} \begin{lemma}\label{convergence lemma} Assume $n\geq10$ and $m\geq4$. Let $s_n$ be a Salem number whose minimal polynomial is $S_n(x)$ and let $s'_m$ be a Salem number whose minimal polynomial is $S'_m(x)$. Then, the sequence $\{s_n\}_{n\geq10}$ is strictly increasing and the sequence $\{s'_m\}_{m\geq4}$ is strictly decreasing. Also, the two sequences have the same limit, namely $1.3247\cdots$, the smallest Pisot number. \end{lemma} \begin{proof} $1.3247\cdots$ is the real root of $x^3-x-1\in\mathbb{Z}[x]$ and this result is proved as in the proof of \cite[Theorem 6.4.1]{BDGPS92}. \end{proof} To proceed with the proof, we need the next notation. \begin{definition} For a Salem polynomial $S(x)$ of degree $2d$, the \textit{trace polynomial} of $S(x)$ is a polynomial $T(x)$ of degree $d$ such that $S(x)=x^dT(x+\frac{1}{x})$. \end{definition} \begin{remark} By the definition of Salem numbers, the roots of $T(x)$ are all real and it has only one root outside of the interval $(-2,2)$. Thus, for proving Proposition \ref{Main Proposition}, we need a Salem polynomial $S(x)$ of degree $2p$ whose associated trace polynomial has constant term $\pm1$. \end{remark} \begin{lemma}\label{constant term} For a Salem polynomial $S(x)$ and its associated trace polynomial $T(x)$, the constant term of $T(x)$ is $\pm1$ if and only if $\abs{S(i)}=1$. \end{lemma} \begin{proof} By the relation $S(x)=x^dT(x+\frac{1}{x})$, it is immediate that \begin{align*} \text{the constant term of $T(x)$ is $\pm1$}\ \Longleftrightarrow\ T(0)=\pm1\Longleftrightarrow\ \abs{S(i)}=1. \end{align*} \end{proof} For $n\geq10$ and $m\geq4$, let $T_n(x)$ be the trace polynomial of $S_n(x)$ and let $T'_m(x)$ be the trace polynomial of $S'_m(x)$. By using Remark \ref{equivalent condition1} and Remark \ref{equivalent condition2}, we get the condition for when the constant term of $T_n(x)$ or $T'_m(x)$ is $\pm1$. \begin{lemma}\label{condition for constant term} Assume $n\geq10$ and $m\geq4$. Then, \begin{align*} \text{the constant term of $T_n(x)$ is $\pm1$}\quad\Longleftrightarrow\quad n\equiv0,3,4,5,6,&7,8,11,12,13,15,\\ &16,18,19,20,21,23\ (\mathrm{mod}\ 24) \end{align*} \begin{align*} \text{the constant term of $T'_m(x)$ is $\pm1$}\quad\Longleftrightarrow\quad m\equiv1,2,3,7,10,11,13,15,18,19,23\ (\mathrm{mod}\ 24) \end{align*} \end{lemma} \begin{proof} By direct calculation, \begin{align*} \abs{\Phi_2(i)}=\sqrt{2}, \abs{\Phi_3(i)}=1, \abs{\Phi_5(i)}=1, \abs{\Phi_8(i)}=2, \abs{\Phi_{12}(i)}=3, \abs{\Phi_{18}(i)}=1, \abs{\Phi_{30}(i)}=1, \end{align*} \begin{align*} \abs{P_n(i)}=\left\{ \begin{array}{ll} 1 & (n\equiv0\ (\mathrm{mod}\ 4))\\ 2\sqrt{2} & (n\equiv1\ (\mathrm{mod}\ 4))\\ 3 & (n\equiv2\ (\mathrm{mod}\ 4))\\ \sqrt{2} & (n\equiv3\ (\mathrm{mod}\ 4)) \end{array}, \right. \abs{Q_m(i)}=\left\{ \begin{array}{ll} 4 & (m\equiv0\ (\mathrm{mod}\ 4))\\ 3\sqrt{2} & (m\equiv1\ (\mathrm{mod}\ 4))\\ 2 & (m\equiv2\ (\mathrm{mod}\ 4))\\ \sqrt{2} & (m\equiv3\ (\mathrm{mod}\ 4)) \end{array}. \right. \end{align*} By Lemma \ref{constant term}, it is enough to calculate $\abs{S(i)}$ and so by using Remark \ref{equivalent condition1} and Remark \ref{equivalent condition2}, the cyclicity of order $24$ can be found and the proof is done.
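These finitely many exact values can be re-checked mechanically; the following Python sketch (only a verification aid, written with sympy, where $P_n$ and $Q_m$ are entered as in Lemma \ref{make Salem polynomials}) prints $\abs{\Phi_N(i)}$ for the relevant $N$ and the values of $\abs{P_n(i)}$, $\abs{Q_m(i)}$ over one period modulo $4$.
\begin{verbatim}
from sympy import I, Abs, simplify, cyclotomic_poly, Symbol

x = Symbol('x')

# |Phi_N(i)| for the cyclotomic polynomials that can divide P_n or Q_m
for N in [2, 3, 5, 8, 12, 18, 30]:
    print(N, simplify(Abs(cyclotomic_poly(N, x).subs(x, I))))

def P(n):  # P_n(x) as in Lemma "make Salem polynomials"
    return (x**(n - 2) * (x**3 - x - 1) + (x**3 + x**2 - 1)) / (x - 1)

def Q(m):  # Q_m(x) as in Lemma "make Salem polynomials"
    return x**(m - 3) * (x**3 - x - 1) - (x**3 + x**2 - 1)

# |P_n(i)| and |Q_m(i)| depend only on n and m modulo 4
for n in range(12, 16):
    print('P', n % 4, simplify(Abs(P(n).subs(x, I))))
for m in range(12, 16):
    print('Q', m % 4, simplify(Abs(Q(m).subs(x, I))))
\end{verbatim}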
\end{proof} In Appendix \ref{appendix a}, Table \ref{appendix 3} enumerates the degree of $S_n(x)$ for the $n$ which satisfy the condition in Lemma \ref{condition for constant term}. Table \ref{appendix 4} enumerates the degree of $S'_m(x)$ for the $m$ which satisfy the condition in Lemma \ref{condition for constant term}. Both have cyclicity of order $360$ and so only $10\leq n\leq373$ and $7\leq m\leq378$ are filled in. \begin{proof}[Proof of Proposition \ref{Main Proposition}] Now the double $2q$ of every odd integer $q>3$ appears as the degree of $S_n(x)$ in Table \ref{appendix 3} or of $S'_m(x)$ in Table \ref{appendix 4}, by the cyclicity. Thus, for a prime number $p>3$, $2p$ appears as the degree of $S_n(x)$ or $S'_m(x)$ with the conditions in Lemma \ref{condition for constant term}.\par If $2p$ appears as the degree of $S_n(x)$ for some $n$, then the associated trace polynomial $T_n(x)$ has degree $p$ and its roots are all totally real algebraic integers.\par By Lemma \ref{convergence lemma}, the maximal absolute value of the roots of $S_n(x)$ is less than $1.3247\cdots$ and so the maximal absolute value of the roots of $T_n(x)$ is less than $1.3247\cdots+\frac{1}{1.3247\cdots}=2.0795\cdots<2.08$.\par The first $2p$ which does not appear as the degree of $S_n(x)$ is $26$; it appears as the degree of $S'_{27}(x)$, and the associated trace polynomial $T'_{27}(x)$ has degree $13$. By Lemma \ref{convergence lemma} again, the maximal absolute value of the roots of $S'_m(x)$ is strictly decreasing in $m$, and so it suffices to show that the maximal absolute value of the roots of $T'_{27}(x)$ is less than $2.08$. The Salem number for $S'_{27}(x)$ is $1.3255\cdots$ and so the maximal absolute value of the roots of $T'_{27}(x)$ is $1.3255\cdots+\frac{1}{1.3255\cdots}=2.0799\cdots<2.08$. Thus, the proof is concluded. \end{proof} \begin{proof}[Proof of Theorem \ref{Result1}] Theorem \ref{Result1} is concluded by combining Proposition \ref{case=2}, Proposition \ref{case for Sophie Germain prime}, Proposition \ref{case otherwise} and Proposition \ref{Main Proposition}. \end{proof} \section{The minimum value of the first dynamical degrees for simple abelian varieties with lower dimensions}\label{The minimum value of the first dynamical degrees for simple abelian varieties with lower dimensions} In this section, we calculate the minimum value of the first dynamical degrees except $1$ for each fixed dimension $g=2$ to $10$ of simple abelian varieties. The result is the following.
\begin{table}[hbtp] \label{table} \begin{tabular}{|c|c|} \hline dimension $g$ & minimum value \\ \hline $1$ & $(\text{all }1)$\\ $2$ & $4\mathrm{cos}^2(\pi/5)=2.6180\cdots$ \\ $3$ & $4\mathrm{cos}^2(\pi/7)=3.2469\cdots$ \\ $4$ & $2\mathrm{cos}(\pi/5)=1.6180\cdots$ \\ $5$ & $4\mathrm{cos}^2(\pi/11)=3.6825\cdots$ \\ $6$ & $2\mathrm{cos}(\pi/7)=1.8019\cdots$ \\ $7$ & $4.0333\cdots$\footnotemark[1] \\ $8$ & $2\mathrm{cos}(\pi/5)=1.6180\cdots$ \\ $9$ & $1.1509\cdots^2=1.3247\cdots$\footnotemark[2] \\ $10$& $1.1762\cdots^2=1.3836\cdots$\footnotemark[3] \\ \hline \end{tabular} \end{table} \footnotetext[1]{$4.0333\cdots$ is the maximal absolute value of the roots of $x^7-14x^6+77x^5-211x^4+301x^3-210x^2+56x-1$} \footnotetext[2]{$1.1509\cdots$ is the absolute value of a complex root of $x^3-x^2+1$} \footnotetext[3]{$1.1762\cdots$ is the Lehmer number} \begin{remark} If $g=1$, then the first dynamical degree of an automorphism is the last dynamical degree and it is always $1$.\par The cases $g=2,3,5$ are already done and the result is compatible with Theorem \ref{Result1}.\par We use a computer algebra for $g=6$ to $10$. The calculation would be proceeding as dividing into the type of simple abelian varieties. \end{remark} \subsection{dimension $g=4$} Let $X$ be a $4$-dimensional simple abelian variety and define $B$, $K$, $K_0$, $d$, $e$ and $e_0$ as in Section \ref{Endomorphism algebras of simple abelian varieties}. \begin{flushleft}{\bf{Type 1}}\end{flushleft}\par Let $X$ be a $4$-dimensional simple abelian variety of Type 1 and $f\in\mathrm{End}(X)$ be an automorphism of $X$ which corresponds to $\alpha\in U_K$ as in Section \ref{The minimum value of the first dynamical degrees of automorphisms}. Now $[K:\mathbb{Q}]=1$ or $2$ or $4$ by the restriction in Table \ref{table1}.\par Assuming that the first dynamical degree is less than $4$, the conjugates of $\alpha$ are all in the interval $(-2,2)$. If the degree of $\alpha$ is $1$, then $\alpha=\pm1$ and so the first dynamical degree is $1$ and so we assume the degree of $\alpha$ is $2$ or $4$. Denote the minimal polynomial of $\alpha$ over $\mathbb{Q}$ by $P(x)$, whose degree is $r=2$ or $4$. By Lemma \ref{cyclotomic}, $x^rP(x+\frac{1}{x})$ is an irreducible cyclotomic polynomial and its degree is $2r=4$ or $8$. Cyclotomic polynomials $\Phi_N(x)$ have degree $4$ or $8$ only for $N=5,8,10,12,15,16,20,24,30$. By comparing with Proposition \ref{constant term of minimal polynomial}, since $P(x)$ has constant term $\pm1$, $N=5,10,15,24,30$ are only possible. Thus, the minimum value of the maximal absolute value of the roots of $P(x)$ is $2\mathrm{cos}\left(\frac{\pi}{5}\right)$ except $1$. The realizability is concerned in Section \ref{The minimum value of the first dynamical degrees of automorphisms}. Thus, the minimum value of the first dynamical degrees is $4\mathrm{cos}^2\left(\frac{\pi}{5}\right)=\left(\frac{1+\sqrt{5}}{2}\right)^2$ except $1$ for this type. \begin{flushleft}{\bf{Type 2, Type 3}}\end{flushleft}\par Let $X$ be a $4$-dimensional simple abelian variety of Type 2 or 3 and $f\in\mathrm{End}(X)$ be an automorphism of $X$ which corresponds to $\alpha\in B$. Now $[K:\mathbb{Q}]=1$ or $2$ by the restriction in Table \ref{table1}.\par Denote the minimal polynomial of $\alpha$ over $K$ by $p(x)$ and the minimal polynomial of $\alpha$ over $\mathbb{Q}$ by $P(x)$. If the degree of $p(x)$ is $1$, then $\alpha\in U_K$ and so it is reduced to Type 1 of $2$-dimensional case. 
Thus, $\frac{3+\sqrt{5}}{2}$ is a lower bound except $1$ for this case.\par If the degree of $p(x)$ is $2$, then it is written as $p(x)=x^2+ax+b\in\mathcal{O}_K[x]$ ($b\in U_K$) and we consider the following 3 cases as in Section \ref{The minimum value of the first dynamical degrees of automorphisms}. Let $\{\sigma_i\}$ be the set of all $\mathbb{Q}$-embeddings $K\hookrightarrow\mathbb{C}$.\\ $\bullet$ there exists $1\leq i\leq e$ such that $\abs{\sigma_i(b)}>1$\par The maximal absolute value of all conjugates of $b\in U_K$ is at least $\frac{1+\sqrt{5}}{2}$ and so the maximal absolute value of the roots of $P(x)$ is at least $\sqrt{\frac{1+\sqrt{5}}{2}}$. This is achieved by $p(x)=x^2+\frac{1+\sqrt{5}}{2}$ and $P(x)=x^4+x^2-1$. Thus, $\frac{1+\sqrt{5}}{2}$ is a lower bound except $1$ for this case.\\ $\bullet$ $\sigma_i(b)=1$ for all $1\leq i\leq e$\par Since $\mathrm{max}\{\abs{\sigma_i(a)}\}$ must be larger than $2$ and close to $2$, $a=\sqrt{5}$ provides the minimum value of the maximal absolute value by some calculation. For this case, the maximal absolute value of the roots of $P(x)=x^4-3x^2+1$ is $\frac{1+\sqrt{5}}{2}$ and $\left(\frac{1+\sqrt{5}}{2}\right)^2$ is a lower bound except $1$ for this case.\\ $\bullet$ $\sigma_i(b)=-1$ for all $1\leq i\leq e$\par $a=1$ provides the minimum value of the first dynamical degrees as in Section \ref{The minimum value of the first dynamical degrees of automorphisms}. Thus, $\left(\frac{1+\sqrt{5}}{2}\right)^2$ is a lower bound except $1$ for this case.\par Thus, $\frac{1+\sqrt{5}}{2}$ is the minimum value of the first dynamical degrees except $1$ for these types by the next theorem. \begin{theorem}\label{4-dimensional construction} Define $\alpha=\sqrt{-\frac{1+\sqrt{5}}{2}}$. By Theorem \ref{divisional}, there exists a prime number $p$ such that $B:=\left(\frac{\alpha^2, p}{\mathbb{Q}(\sqrt{5})}\right)$ is a divisional totally indefinite quaternion algebra. Under this condition, there is a positive anti-involution on $B$ over $\mathbb{Q}$. Moreover, in the case of \textnormal{Type 2 ($d=2$, $e=2$, $m=1$)}, there is an automorphism of a $4$-dimensional simple abelian variety which corresponds to $\alpha=\sqrt{-\frac{1+\sqrt{5}}{2}}$. \end{theorem} \begin{proof} By Theorem \ref{construction of positive anti-involution}, it is enough to find $c\in B\setminus\mathbb{Q}(\sqrt{5})$ such that $c^2\in\mathbb{Q}(\sqrt{5})$ is totally negative.\par Denote $c=xi+yij$ ($x,y\in\mathbb{Q}(\sqrt{5})$) where $1,i,j,ij$ is a $\mathbb{Q}(\sqrt{5})$-basis of $B$ with $i^2=\alpha^2$ and $j^2=p$. Then $c^2=x^2\cdot\left(-\frac{1+\sqrt{5}}{2}\right)-y^2\cdot\left(-\frac{1+\sqrt{5}}{2}\right)\cdot p=\left(\frac{1+\sqrt{5}}{2}\right)(y^2p-x^2)$ by calculation. Take a pair $(u,w)$ of positive integers larger than $p$ which satisfies the Pell equation $u^2-5w^2=1$. Substituting $x=u+w\sqrt{5}$, $y=1$, we get \begin{align*} c^2=\left(\frac{1+\sqrt{5}}{2}\right)(p-(u+w\sqrt{5})^2)<0. \end{align*} Now $1=u^2-5w^2=(u+w\sqrt{5})(u-w\sqrt{5})$ and so $0<(u-w\sqrt{5})<1$.\par Thus, the other conjugate of $c^2\in\mathbb{Q}(\sqrt{5})$ is \begin{align*} \left(\frac{1-\sqrt{5}}{2}\right)(p-(u-w\sqrt{5})^2)<0.
\end{align*} Therefore, $c^2\in\mathbb{Q}(\sqrt{5})$ is totally negative and so the division ring $B$ has a positive anti-involution over $\mathbb{Q}$.\par Now there is the order $\mathcal{O}=\mathcal{O}_{\mathbb{Q}(\sqrt{5})}\oplus\mathcal{O}_{\mathbb{Q}(\sqrt{5})}i\oplus\mathcal{O}_{\mathbb{Q}(\sqrt{5})}j\oplus\mathcal{O}_{\mathbb{Q}(\sqrt{5})}ij$ in $B$, so by Proposition \ref{construction for Type 2}, there exists a $4$-dimensional simple abelian variety whose endomorphism ring contains $\mathcal{O}$. Thus, the automorphism corresponding to $i\in\mathcal{O}$ corresponds to $\alpha=\sqrt{-\frac{1+\sqrt{5}}{2}}$ and so the proof is concluded. \end{proof} \begin{flushleft}{\bf{Type 4}}\end{flushleft}\par Let $X$ be a $4$-dimensional simple abelian variety of Type 4 and $f\in\mathrm{End}(X)$ be an automorphism of $X$ which corresponds to $\alpha\in B$.\\ $\bullet$ If $d=1$, $[K:\mathbb{Q}]=2$ or $4$ or $8$ by the restriction in Table \ref{table1} and $\alpha\in U_K$.\par Now $[K_0:\mathbb{Q}]=1$ or $2$ or $4$, respectively. Denote the minimal polynomial of $\alpha$ over $\mathbb{Q}$ by $P(x)$.\par If $\alpha\in K_0$, the minimum value of the square of the maximal absolute value of the roots of $P(x)$ is $\left(\frac{1+\sqrt{5}}{2}\right)^2$ except $1$, by the deduction in Type 1.\par If $\alpha\in K\setminus K_0$, by denoting $a=\abs{\alpha}^2\in U_{K_0}$, as in Section \ref{The minimum value of the first dynamical degrees of automorphisms}, $\mathbb{Q}(\sqrt{a})$ is a totally real number field. If $a=1$, then the first dynamical degree would be $1$ and so we assume $a\neq1$. Also, by assuming that the maximal absolute value of the conjugates of $\sqrt{a}$ is less than $2$, it can be written as $\sqrt{a}=\zeta_N^m+\frac{1}{\zeta_N^m}$ ($m,N\in\mathbb{Z}_{>0}$) with $m<N$, $\mathrm{gcd}(N,m)=1$ and we can assume $N\neq1,2,3,4,6$. Thus, $a=\left(\zeta_N^m+\frac{1}{\zeta_N^m}\right)^2=\zeta_N^{2m}+\frac{1}{\zeta_N^{2m}}+2$ and so \begin{align*} [\mathbb{Q}(a):\mathbb{Q}]=\left\{ \begin{array}{ll} \frac{1}{2}\varphi(\frac{N}{2}) & (N:\text{even})\\ [+5pt] \frac{1}{2}\varphi(N) & (N:\text{odd}) \end{array}. \right. \end{align*} Now $K_0\supset\mathbb{Q}(a)\supset\mathbb{Q}$ and so $[\mathbb{Q}(a):\mathbb{Q}]=1$ or $2$ or $4$, and this implies $N=5,8,10,12$, $15,16,20,24,30,32,40,48,60$. By comparing with Proposition \ref{constant term of minimal polynomial}, since $P(x)$ has constant term $\pm1$, $N=5,10,15,24,30,40,48,60$ are only possible. Thus, the minimum value of the maximal absolute value of the roots of $P(x)$ is $2\mathrm{cos}\left(\frac{\pi}{5}\right)$ except $1$.\par Thus, $4\mathrm{cos}^2\left(\frac{\pi}{5}\right)$ is the minimum value of the first dynamical degrees except $1$ for this case.\\ $\bullet$ If $d=2$, $[K:\mathbb{Q}]=2$ and denote $K=\mathbb{Q}(\sqrt{-D})$ for some square-free integer $D>0$.\par Denote the minimal polynomial of $\alpha$ over $K$ by $p(x)\in\mathcal{O}_{K}[x]$ and the minimal polynomial of $\alpha$ over $\mathbb{Q}$ by $P(x)$.\par If the degree of $p(x)$ is $1$, then $\alpha\in U_K$ and so the maximal absolute value of the conjugates is always $1$.\par For the case that the degree of $p(x)$ is $2$, it can be written as $p(x)=x^2+sx+t\in\mathcal{O}_{K}[x]$ with $t\in U_{K}$.
By Dirichlet's unit theorem, $U_K$ is generated by some cyclotomic element $\zeta$ which has the smallest positive angle and denote $t=\zeta^n$.\par If the degree of $P(x)$ is $2$, then it can be written as $x^2+sx\pm1\in\mathbb{Z}[x]$ and the minimum value of the maximal absolte value of the roots of $P(x)$ is $\frac{1+\sqrt{5}}{2}$ except $1$.\par Otherwise, $P(x)$ has degree $4$ of the form $(x^2+sx+\zeta^n)(x^2+\overline{s}x+\zeta^{-n})\in\mathbb{Z}[x]$. The roots of this polynomial are $\alpha,\frac{\zeta^n}{\alpha},\overline{\alpha},\frac{\zeta^{-n}}{\overline{\alpha}}$ and so the maximal absolute value of these roots is $\mathrm{max}\{\abs{\alpha},\frac{1}{\abs{\alpha}}\}$. Assuming that $\mathrm{max}\{\abs{\alpha},\frac{1}{\abs{\alpha}}\}<1.5$, then $s\in\mathcal{O}_K$ satisfies $\abs{s}<1.5+\frac{1}{1.5}=\frac{13}{6}$.\par For the case $D\neq1,3$, $U_K=\{\pm1\}$ and so $t$ is $\pm1$ and this implies $s\notin\mathbb{Z}$.\\ $\bullet$ If $-D\equiv1$ $(\mathrm{mod}\ 4)$, then $\mathcal{O}_{\mathbb{Q}(\sqrt{-D})}=\mathbb{Z}\left[\frac{1+\sqrt{-D}}{2}\right]$.\par By denoting $s=p+q\left(\frac{1+\sqrt{-D}}{2}\right)\in\mathcal{O}_{\mathbb{Q}(\sqrt{-D})}$ ($p,q\in\mathbb{Z}$), \begin{align*} \abs{s}^2=\left(p+\frac{q}{2}\right)^2+\frac{q^2D}{4}=\frac{(2p+q)^2+q^2D}{4}<\frac{169}{36}. \end{align*}\par Thus, if $D\neq3$, then $q\neq0$ and so $D=7,11,15$.\\ $\bullet$ If $-D\equiv2,3$ $(\mathrm{mod}\ 4)$, then $\mathcal{O}_{\mathbb{Q}(\sqrt{-D})}=\mathbb{Z}\left[\sqrt{-D}\right]$.\par By denoting $s=p+q\sqrt{-D}\in\mathcal{O}_{\mathbb{Q}(\sqrt{-D})}$ ($p,q\in\mathbb{Z}$), \begin{align*} \abs{s}^2=p^2+q^2D<\frac{169}{36}. \end{align*}\par Thus, if $D\neq1$, then $q\neq0$ and so $D=2$.\\ For any integer $m$, the absolute values of the roots of $(x^2+sx+t)(x^2+\overline{s}x+\overline{t})$ and $(x^2+s\zeta^mx+t\zeta^{2m})(x^2+\overline{s}\zeta^{-m}x+\overline{t}\zeta^{-2m})$ are the same and so the maximal absolute value is the same too. Also, $s\neq0$ by assuming that the maximal absolute value of the roots is larger than $1$.\par Thus, only the cases $0\leq\mathrm{arg}(s)<\mathrm{arg}(\zeta)$ are worth to consider and they are enumerated in Table \ref{appendix 5} and the minimum of the square of the maximal absolute value of the roots of $P(x)$ is $1.3122\cdots^2=1.7220\cdots$ except $1$.\\ \\ Therefore, the minimum value of the first dynamical degrees of automorphisms of $4$-dimensional simple abelian varieties is $\frac{1+\sqrt{5}}{2}=1.6180\cdots$ except $1$, and realized on a simple abelian variety of Type 2. \subsection{dimension $g=6$} Let $X$ be a $6$-dimensional simple abelian variety and define $B$, $K$, $K_0$, $d$, $e$ and $e_0$ as in Section \ref{Endomorphism algebras of simple abelian varieties}. \begin{flushleft}{\bf{Type 1}}\end{flushleft}\par Let $X$ be a $6$-dimensional simple abelian variety of Type 1 and $f\in\mathrm{End}(X)$ be an automorphism of $X$ which corresponds to $\alpha\in U_K$ as in Section \ref{The minimum value of the first dynamical degrees of automorphisms}. Now $[K:\mathbb{Q}]=1$ or $2$ or $3$ or $6$ by the restriction in Table \ref{table1}.\par Assuming that the first dynamical degree is less than $4$, by using similar deduction as in the $4$-dimensional case, the minimum value of the maximal absolute value of the roots of $P(x)$ is $2\mathrm{cos}\left(\frac{\pi}{5}\right)$ except $1$. The realizability is concerned in Section \ref{The minimum value of the first dynamical degrees of automorphisms}. 
Thus, the minimum value of the first dynamical degrees is $4\mathrm{cos}^2\left(\frac{\pi}{5}\right)=\left(\frac{1+\sqrt{5}}{2}\right)^2$ except $1$ for this type. \begin{flushleft}{\bf{Type 2, Type 3}}\end{flushleft}\par Let $X$ be a $6$-dimensional simple abelian variety of Type 2 or 3 and $f\in\mathrm{End}(X)$ be an automorphism of $X$ which corresponds to $\alpha\in B$. Now $[K:\mathbb{Q}]=1$ or $3$ by the restriction in Table \ref{table1}.\par Denote the minimal polynomial of $\alpha$ over $K$ by $p(x)$ and the minimal polynomial of $\alpha$ over $\mathbb{Q}$ by $P(x)$. If the degree of $p(x)$ is $1$, then $\alpha\in U_K$ and so it reduces to Type 1 of the $3$-dimensional case. Thus, $4\mathrm{cos}^2\left(\frac{\pi}{7}\right)$ is a lower bound except $1$ for this case.\par If the degree of $p(x)$ is $2$, it can be written as $p(x)=x^2+ax+b\in\mathcal{O}_K[x]$ ($b\in U_K$), and we consider the following three cases as in Section \ref{The minimum value of the first dynamical degrees of automorphisms}. Let $\{\sigma_i\}$ be the set of all $\mathbb{Q}$-embeddings $K\hookrightarrow\mathbb{C}$.\\ $\bullet$ there exists $1\leq i\leq e$ such that $\abs{\sigma_i(b)}>1$\par The maximal absolute value of all conjugates of $b\in U_K$ is at least $2\mathrm{cos}\left(\frac{\pi}{7}\right)$ and so the maximal absolute value of the roots of $P(x)$ is at least $\sqrt{2\mathrm{cos}\left(\frac{\pi}{7}\right)}$. This is achieved by $p(x)=x^2-2\mathrm{cos}\left(\frac{2\pi}{7}\right)$ and $P(x)=(x^2-2\mathrm{cos}\left(\frac{2\pi}{7}\right))(x^2-2\mathrm{cos}\left(\frac{4\pi}{7}\right))(x^2-2\mathrm{cos}\left(\frac{6\pi}{7}\right))$. Thus, $2\mathrm{cos}\left(\frac{\pi}{7}\right)=1.8019\cdots$ is a lower bound except $1$ for this case.\\ $\bullet$ $\sigma_i(b)=1$ for all $1\leq i\leq e$\par $\mathrm{max}\{\abs{\sigma_i(a)}\}$ must be larger than $2$ and close to $2$. By using a computer algebra system, totally real algebraic integers of degree $3$ whose conjugates have modulus less than $2.25$ are enumerated in Table \ref{appendix 6}. $a=2.1149\cdots$ provides the minimum of the maximal absolute value. For this case, the maximal absolute value of the roots of $P(x)$ is $1.4012\cdots$ and so $1.4012\cdots^2=1.9635\cdots$ is a lower bound except $1$ for this case.\\ $\bullet$ $\sigma_i(b)=-1$ for all $1\leq i\leq e$\par $a=1$ provides the minimum value of the first dynamical degrees as in Section \ref{The minimum value of the first dynamical degrees of automorphisms}. Thus, $\left(\frac{1+\sqrt{5}}{2}\right)^2$ is a lower bound except $1$ for this case.\par Thus, $2\mathrm{cos}\left(\frac{\pi}{7}\right)=1.8019\cdots$ is the minimum value of the first dynamical degrees except $1$ for these types, by the next theorem. \begin{theorem} Define $\alpha:=\sqrt{\zeta_7+\frac{1}{\zeta_7}}$. Theorem \ref{divisional} implies that there exists a prime\nolinebreak\ number $p$ for which $B:=\left(\frac{\zeta_7+\frac{1}{\zeta_7},\ p}{\mathbb{Q}(\zeta_7+\frac{1}{\zeta_7})}\right)$ is a totally indefinite division quaternion algebra. Under this condition, there is a positive anti-involution on $B$ over $\mathbb{Q}$. Moreover, there is an automorphism of a $6$-dimensional simple abelian variety of \textnormal{Type 2 ($d=2$, $e=3$, $m=1$)} which corresponds to $\alpha=\sqrt{\zeta_7+\frac{1}{\zeta_7}}$.
\end{theorem} \begin{proof} By Theorem \ref{construction of positive anti-involution}, it is enough to find $c\in B\setminus\mathbb{Q}\left(\zeta_7+\frac{1}{\zeta_7}\right)$ such that $c^2\in\mathbb{Q}\left(\zeta_7+\frac{1}{\zeta_7}\right)$ is totally negative.\par Denote $c=xi+yij\in B$, where $1,i,j,ij$ is a $\mathbb{Q}\left(\zeta_7+\frac{1}{\zeta_7}\right)$-basis of $B$ with $i^2=\zeta_7+\frac{1}{\zeta_7}$ and $j^2=p$. Then $c^2=x^2\cdot\left(\zeta_7+\frac{1}{\zeta_7}\right)-y^2\cdot\left(\zeta_7+\frac{1}{\zeta_7}\right)\cdot p=\left(\zeta_7+\frac{1}{\zeta_7}\right)(x^2-y^2p)$ by calculation. Substituting $x=a+b\left(\zeta_7+\frac{1}{\zeta_7}\right)$, $y=1$ with $a,b\in\mathbb{Z}$, we obtain \begin{align*} c^2=\left(\zeta_7+\frac{1}{\zeta_7}\right)\left(\left(a+b\left(\zeta_7+\frac{1}{\zeta_7}\right)\right)^2-p\right). \end{align*} The other conjugates of $c^2\in\mathbb{Q}\left(\zeta_7+\frac{1}{\zeta_7}\right)$ are \begin{align*} \left(\zeta_7^2+\frac{1}{\zeta_7^2}\right)\left(\left(a+b\left(\zeta_7^2+\frac{1}{\zeta_7^2}\right)\right)^2-p\right) \end{align*} and \begin{align*} \left(\zeta_7^3+\frac{1}{\zeta_7^3}\right)\left(\left(a+b\left(\zeta_7^3+\frac{1}{\zeta_7^3}\right)\right)^2-p\right). \end{align*} Take $(a,b)$ with $a>0$, $b<0$ and $0<a+b\left(\zeta_7+\frac{1}{\zeta_7}\right)<1$; letting $a\to\infty$ and $b\to-\infty$ under these constraints, we have $c^2<0$, $\abs[\big]{a+b\left(\zeta_7^2+\frac{1}{\zeta_7^2}\right)}\to\infty$ and $\abs[\big]{a+b\left(\zeta_7^3+\frac{1}{\zeta_7^3}\right)}\to\infty$. Thus, all the conjugates of $c^2$ can be made negative, and so we obtain a positive anti-involution on $B$ over $\mathbb{Q}$.\par Now there is the order $\mathcal{O}=\mathcal{O}_{\mathbb{Q}\left(\zeta_7+\frac{1}{\zeta_7}\right)}\oplus\mathcal{O}_{\mathbb{Q}\left(\zeta_7+\frac{1}{\zeta_7}\right)}i\oplus\mathcal{O}_{\mathbb{Q}\left(\zeta_7+\frac{1}{\zeta_7}\right)}j\oplus\mathcal{O}_{\mathbb{Q}\left(\zeta_7+\frac{1}{\zeta_7}\right)}ij$ in $B$, so by Proposition \ref{construction for Type 2}, there exists a $6$-dimensional simple abelian variety whose endomorphism ring contains $\mathcal{O}$. Thus, the automorphism corresponding to $i\in\mathcal{O}$ corresponds to $\alpha=\sqrt{\zeta_7+\frac{1}{\zeta_7}}$ and so the proof is concluded. \end{proof} \begin{flushleft}{\bf{Type 4}}\end{flushleft}\par Let $X$ be a $6$-dimensional simple abelian variety of Type 4 and $f\in\mathrm{End}(X)$ be an automorphism of $X$ which corresponds to $\alpha\in B$.\par Now $d=1$ and $[K:\mathbb{Q}]=2$ or $4$ or $6$ or $12$ by the restriction in Table \ref{table1}, and $\alpha\in U_K$.\par Now $[K_0:\mathbb{Q}]=1$ or $2$ or $3$ or $6$, respectively. Denote the minimal polynomial of $\alpha$ over $\mathbb{Q}$ by $P(x)$.\par If $\alpha\in K_0$, the minimum value of the square of the maximal absolute value of the roots of $P(x)$ is $4\mathrm{cos}^2\left(\frac{\pi}{7}\right)$ except $1$, by the deduction in Type 1.\par If $\alpha\in K\setminus K_0$, by denoting $a=\abs{\alpha}^2\in U_{K_0}$, as in Section \ref{The minimum value of the first dynamical degrees of automorphisms}, $\mathbb{Q}(\sqrt{a})$ is a totally real number field.
By a deduction similar to the $4$-dimensional case, the minimum value of the square of the maximal absolute value of the roots of $P(x)$ is $4\mathrm{cos}^2\left(\frac{\pi}{5}\right)$ except $1$.\\ \\ Therefore, the minimum value of the first dynamical degrees of automorphisms of $6$-dimensional simple abelian varieties is $2\mathrm{cos}\left(\frac{\pi}{7}\right)$ except $1$, and it is realized on a simple abelian variety of Type 2. \subsection{dimension $g=7$} By using a computer algebra system, \begin{align*} \mathrm{min}\left\{ \begin{array}{l} \text{the maximal absolute value of the conjugates of an}\\ \text{algebraic unit, whose conjugates are all real and positive, of degree $p$} \end{array} \right\} \end{align*} is $4.0333\cdots$, the largest root of the polynomial $x^7-14x^6+77x^5-211x^4+301x^3-210x^2+56x-1$, by Table \ref{appendix 7}. By Proposition \ref{case otherwise}, $m(p)=4.0333\cdots$ and it is realized on a simple abelian variety of Type 4. \subsection{dimension $g=8$} Let $X$ be an $8$-dimensional simple abelian variety and define $B$, $K$, $K_0$, $d$, $e$ and $e_0$ as in Section \ref{Endomorphism algebras of simple abelian varieties}. \begin{flushleft}{\bf{Type 1}}\end{flushleft}\par Let $X$ be an $8$-dimensional simple abelian variety of Type 1 and $f\in\mathrm{End}(X)$ be an automorphism of $X$ which corresponds to $\alpha\in U_K$ as in Section \ref{The minimum value of the first dynamical degrees of automorphisms}. Now $[K:\mathbb{Q}]=1$ or $2$ or $4$ or $8$ by the restriction in Table \ref{table1}.\par Assuming that the first dynamical degree is less than $4$, by a deduction similar to the $4$-dimensional case, the minimum value of the maximal absolute value of the roots of $P(x)$ is $2\mathrm{cos}\left(\frac{\pi}{5}\right)$ except $1$. The realizability is discussed in Section \ref{The minimum value of the first dynamical degrees of automorphisms}. Thus, the minimum value of the first dynamical degrees is $4\mathrm{cos}^2\left(\frac{\pi}{5}\right)=\left(\frac{1+\sqrt{5}}{2}\right)^2$ except $1$ for this type. \begin{flushleft}{\bf{Type 2, Type 3}}\end{flushleft}\par Let $X$ be an $8$-dimensional simple abelian variety of Type 2 or 3 and $f\in\mathrm{End}(X)$ be an automorphism of $X$ which corresponds to $\alpha\in B$. Now $[K:\mathbb{Q}]=1$ or $2$ or $4$ by the restriction in Table \ref{table1}.\par Denote the minimal polynomial of $\alpha$ over $K$ by $p(x)$ and the minimal polynomial of $\alpha$ over $\mathbb{Q}$ by $P(x)$. If the degree of $p(x)$ is $1$, then $\alpha\in U_K$ and so it reduces to Type 1 of the $4$-dimensional case. Thus, the minimum value is $\frac{3+\sqrt{5}}{2}$ for this case.\par If the degree of $p(x)$ is $2$, it can be written as $p(x)=x^2+ax+b\in\mathcal{O}_K[x]$ ($b\in U_K$), and we consider the following three cases as in Section \ref{The minimum value of the first dynamical degrees of automorphisms}. Let $\{\sigma_i\}$ be the set of all $\mathbb{Q}$-embeddings $K\hookrightarrow\mathbb{C}$.\\ $\bullet$ there exists $1\leq i\leq e$ such that $\abs{\sigma_i(b)}>1$\par The maximal absolute value of all conjugates of $b\in U_K$ is at least $2\mathrm{cos}\left(\frac{\pi}{5}\right)$ and so the maximal absolute value of the roots of $P(x)$ is at least $\sqrt{2\mathrm{cos}\left(\frac{\pi}{5}\right)}$. This is achieved as in the $4$-dimensional case.
Thus, $2\mathrm{cos}\left(\frac{\pi}{5}\right)=1.6180\cdots$ is a lower bound except $1$ for this case.\par This is realizable for the case of Type 2 ($d=2$, $e=2$, $m=2$) as in Theorem \ref{4-dimensional construction}.\\ $\bullet$ $\sigma_i(b)=1$ for all $1\leq i\leq e$\par $\mathrm{max}\{\abs{\sigma_i(a)}\}$ must be larger than $2$ and close to $2$. By using a computer algebra, totally real algebraic integers of degree $4$ whose conjugates have modulus less than $2.1$ are enumerated in Table \ref{appendix 8}. $a=2.0614\cdots$ provides the minimum of the maximal absolute value. For this case, the maximal absolute value of the roots of $P(x)$ is $1.2806\cdots$ and so $1.2806\cdots^2=1.6400\cdots$ is a lower bound except $1$ for this case.\\ $\bullet$ $\sigma_i(b)=-1$ for all $1\leq i\leq e$\par $a=1$ provides the minimum value of the first dynamical degrees as in Section \ref{The minimum value of the first dynamical degrees of automorphisms}. Thus, $\left(\frac{1+\sqrt{5}}{2}\right)^2$ is a lower bound except $1$ for this case.\par Therefore, $2\mathrm{cos}\left(\frac{\pi}{5}\right)$ is the minimum value of the first dynamical degrees for these types. \begin{flushleft}{\bf{Type 4}}\end{flushleft}\par Let $X$ be an $8$-dimensional simple abelian variety of Type 4 and $f\in\mathrm{End}(X)$ be an automorphism of $X$ which corresponds to $\alpha\in B$.\\ $\bullet$ If $d=1$, $[K:\mathbb{Q}]=2$ or $4$ or $8$ or $16$ by the restriction in Table \ref{table1} and $\alpha\in U_K$.\par Now $[K_0:\mathbb{Q}]=1$ or $2$ or $4$ or $8$, respectively. Denote the minimal polynomial of $\alpha$ over $\mathbb{Q}$ by $P(x)$.\par If $\alpha\in K_0$, the minimum value of the square of the maximal absolute value of the roots of $P(x)$ is $\left(\frac{1+\sqrt{5}}{2}\right)^2$ except $1$, by the deduction in Type 1.\par If $\alpha\in K\setminus K_0$, by denoting $a=\abs{\alpha}^2\in U_{K_0}$, as in Section \ref{The minimum value of the first dynamical degrees of automorphisms}, $\mathbb{Q}(\sqrt{a})$ is a totally real number field. By using similar deduction as in the $4$-dimensional case, the minimum value of the maximal absolute value of the roots of $P(x)$ is $2\mathrm{cos}\left(\frac{\pi}{5}\right)$ except $1$.\par Thus, $4\mathrm{cos}^2\left(\frac{\pi}{5}\right)$ is the minimum value of the first dynamical degrees except $1$ for this case.\\ $\bullet$ If $d=2$, $[K:\mathbb{Q}]=2$ or $4$ is only possible by the restriction in Table \ref{table1}.\par If $[K:\mathbb{Q}]=2$, then it is reduced to the case of $4$-dimensional, and $1.3122\cdots^2=1.7220\cdots$ is a lower bound of the first dynamical degrees except $1$.\par For the case $[K:\mathbb{Q}]=4$, let $p(x)$ be the minimal polynomial of $\alpha$ over $K$ and let $P(x)$ be the minimal polynomial of $\alpha$ over $\mathbb{Q}$. If the degree of $p(x)$ is $1$, then $\alpha\in U_K$ and so $\frac{3+\sqrt{5}}{2}$ is a lower bound except $1$ as the case of $2$-dimensional.\par For the case that the degree of $p(x)$ is $2$, it can be written as $p(x)=x^2+sx+t\in\mathcal{O}_{K}[x]$ with $t\in U_{K}$. If $\mathrm{max}\{\text{modulus of conjugates of }t\}>1$, then the maximal absolute value of the roots of $P(x)$ is at least $\sqrt{\mathrm{max}\{\text{modulus of conjugates of }t\}}$ and so $\sqrt{\frac{1+\sqrt{5}}{2}}$ is a lower bound. 
Thus, $\frac{1+\sqrt{5}}{2}$ is a lower bound of the first dynamical degrees except $1$ for this case.\par For the case $\mathrm{max}\{\text{modulus of conjugates of }t\}=1$, by $[K:\mathbb{Q}]=4$, \begin{align*} t\in\{1,-1,\zeta_3,\zeta_3^2,i,-i,\zeta_5,\zeta_5^2,\zeta_5^3,\zeta_5^4,\zeta_6,\zeta_6^5,\zeta_8,\zeta_8^3,\zeta_8^5,\zeta_8^7,\zeta_{10},\zeta_{10}^3,\zeta_{10}^7,\zeta_{10}^9, \zeta_{12},\zeta_{12}^5,\zeta_{12}^7,\zeta_{12}^{11}\} \end{align*} For considering the absolute value of the roots of $P(x)$, by the correspondence \begin{align*} &(x^2+sx+\zeta_3)\leftrightarrow(x^2+s\zeta_3x+1)\leftrightarrow(x^2+s\zeta_3^2x+\zeta_3^2),\\ &(x^2+sx+\zeta_5)\leftrightarrow(x^2+s\zeta_5x+\zeta_5^3)\leftrightarrow(x^2+s\zeta_5^2x+1)\\ &\hspace{5cm}\leftrightarrow(x^2+s\zeta_5^3x+\zeta_5^2)\leftrightarrow(x^2+s\zeta_5^4x+\zeta_5^4),\\ &(x^2+sx+\zeta_6)\leftrightarrow(x^2+s\zeta_6x-1)\leftrightarrow(x^2+s\zeta_6^2x+\zeta_6^5),\\ &(x^2+sx+\zeta_{10})\leftrightarrow(x^2+s\zeta_{10}x+\zeta_{10}^3)\leftrightarrow(x^2+s\zeta_{10}^2x-1)\\ &\hspace{5cm}\leftrightarrow(x^2+s\zeta_{10}^3x+\zeta_{10}^7)\leftrightarrow(x^2+s\zeta_{10}^4x+\zeta_{10}^9),\\ &(x^2+sx+i)\leftrightarrow(x^2+six-i),\\ &(x^2+sx+\zeta_{8})\leftrightarrow(x^2+s\zeta_{8}x+\zeta_{8}^3)\leftrightarrow(x^2+s\zeta_{8}^2x+\zeta_{8}^5)\leftrightarrow(x^2+s\zeta_{8}^3x+\zeta_{8}^7),\\ &(x^2+sx+\zeta_{12})\leftrightarrow(x^2+s\zeta_{12}^2x+\zeta_{12}^5)\leftrightarrow(x^2+s\zeta_{12}^3x+\zeta_{12}^7)\leftrightarrow(x^2+s\zeta_{12}^5x+\zeta_{12}^{11}), \end{align*} it suffices to consider the cases $t=1,-1,i,\zeta_8,\zeta_{12}$. Denote $K_0=\mathbb{Q}(\sqrt{D})$ where $D\in\mathbb{Z}_{>0}$ is a square-free integer. $s\in\mathcal{O}_K$ is of degree at most $2$ over $K_0$. \begin{enumerate} \item{Case $t=1$}\\ It can be written as $P(x)=(x^2+sx+1)(x^2+s_1x+1)(x^2+s_2x+1)(x^2+s_3x+1)$ where $s=s_0,s_1,s_2,s_3$ are conjugates over $\mathbb{Q}$. Let \begin{align}\label{equation1} R(X)&=(X^2+(a+b\sqrt{D})X+a'+b'\sqrt{D})(X^2+(a-b\sqrt{D})X+a'-b'\sqrt{D})\nonumber\\ &=X^4+2aX^3+(a^2-b^2D+2a')X^2+(2aa'-2bb'D)X+a'^2-b'^2D \end{align} be a polynomial, with $a+b\sqrt{D},a'+b'\sqrt{D}\in\mathcal{O}_{\mathbb{Q}(\sqrt{D})}$, which has $s=s_0,s_1,s_2,s_3$ as the roots. For $N=1.28$, all roots of $P(x)$ have modulus less than $N$ if and only if $s=s_0,s_1,s_2,s_3$ are all inside the domain \footnotesize \begin{align*} E:=\left\{z\in\mathbb{C}\;\middle|\;\mathrm{Re}(z)^2\left(N-\frac{1}{N}\right)^2+\mathrm{Im}(z)^2\left(N+\frac{1}{N}\right)^2<\left(N+\frac{1}{N}\right)^2\left(N-\frac{1}{N}\right)^2\right\} \end{align*} \normalsize The quartic polynomials, whose roots are all in $E$, are showed in Table \ref{appendix 9} and considering the polynomial is of the form (\ref{equation1}), or not. Thus, $1.2720\cdots=\sqrt{\frac{1+\sqrt{5}}{2}}$ is the minimum value and so $\frac{1+\sqrt{5}}{2}$ is a lower bound except $1$ for this case. 
\item{Case $t=-1$}\\ It can be written as $P(x)=(x^2+sx-1)(x^2+s_1x-1)(x^2+s_2x-1)(x^2+s_3x-1)$ where $s=s_0,s_1,s_2,s_3$ are conjugates over $\mathbb{Q}$.\par For $N=1.28$, all roots of $P(x)$ have modulus less than $N$ if and only if $s=s_0,s_1,s_2,s_3$ are all inside the domain \footnotesize \begin{align*} E_1:=\left\{z\in\mathbb{C}\;\middle|\;\mathrm{Re}(z)^2\left(N+\frac{1}{N}\right)^2+\mathrm{Im}(z)^2\left(N-\frac{1}{N}\right)^2<\left(N+\frac{1}{N}\right)^2\left(N-\frac{1}{N}\right)^2\right\} \end{align*} \normalsize The quartic polynomials, whose roots are all in $E_1$, are showed in Table \ref{appendix 10} and considering the polynomial is of the form (\ref{equation1}), or not. Thus, $1.2720\cdots=\sqrt{\frac{1+\sqrt{5}}{2}}$ is the minimum value and so $\frac{1+\sqrt{5}}{2}$ is a lower bound except $1$ for this case. \item{Case $t=i$}\\ It can be written as $P(x)=(x^2+sx+i)(x^2+tx+i)(x^2+\overline{s}x-i)(x^2+\overline{t}x-i)$ where $s,t,\overline{s},\overline{t}$ are conjugates over $\mathbb{Q}$. Especially, $s,t$ are conjugates over $\mathbb{Q}(i)$.\par $z=s,t$ is the roots of \begin{align}\label{equation2} R_1(z)=z^2+(a+bi)z+(c+di)\in\mathbb{Z}[i][z]. \end{align} For $N=1.28$, all roots of $P(x)$ have modulus less than $N$ if and only if $s,t$ are inside the domain \begin{align*} E_2:=\left\{z\in\mathbb{C}\mid z\cdot\zeta_8\in E_1\right\} \end{align*} The quadratic polynomials of the form (\ref{equation2}), whose roots are all in $E_2$, are showed in Table \ref{appendix 11}. Thus, $1.2720\cdots=\sqrt{\frac{1+\sqrt{5}}{2}}$ is the minimum value except $1$ and so $\frac{1+\sqrt{5}}{2}$ is a lower bound except $1$ for this case. \item{Case $t=\zeta_8$}\\ $K=\mathbb{Q}(\zeta_8)$ and $K_0=\mathbb{Q}\left(\zeta_8+\frac{1}{\zeta_8}\right)=\mathbb{Q}(\sqrt{2})$. It can be written as $P(x)=(x^2+sx+\zeta_8)(x^2+tx+\zeta_8^3)(x^2+\overline{t}x+\zeta_8^5)(x^2+\overline{s}x+\zeta_8^7)$ where $s,t,\overline{t},\overline{s}$ are conjugates over $\mathbb{Q}$. Especially, $s,t$ are conjugates over $\mathbb{Q}(\sqrt{2}i)$.\par $z=s,t$ is the roots of \begin{align}\label{equation3} R_2(z)=z^2+(a+b\sqrt{2}i)z+(c+d\sqrt{2}i)\in\mathbb{Z}[\sqrt{2}i][z]. \end{align} For $N=1.28$, all roots of $P(x)$ have modulus less than $N$ if and only if $s$ is inside the domain \begin{align*} E_3:=\left\{z\in\mathbb{C}\mid z\cdot\zeta_{16}^3\in E_1\right\} \end{align*} and $t$ is inside the domain \begin{align*} E_4:=\left\{z\in\mathbb{C}\mid z\cdot\zeta_{16}\in E_1\right\}. \end{align*} The quadratic polynomials of the form (\ref{equation3}), with the roots $s,t\in\mathbb{Q}(\zeta_8)$ such that $s\in E_3$, $t\in E_4$ and $(x^2+sx+\zeta_8)(x^2+tx+\zeta_8^3)\in\mathbb{Z}[\sqrt{2}i][z]$, are showed in Table \ref{appendix 12}. Thus, $1.28^2$ is a lower bound except $1$ for this case. \item{Case $t=\zeta_{12}$}\\ $K=\mathbb{Q}(\zeta_{12})$ and $K_0=\mathbb{Q}\left(\zeta_{12}+\frac{1}{\zeta_{12}}\right)=\mathbb{Q}(\sqrt{3})$. It can be written as $P(x)=(x^2+sx+\zeta_{12})(x^2+tx+\zeta_{12}^5)(x^2+\overline{t}x+\zeta_{12}^7)(x^2+\overline{s}x+\zeta_{12}^{11})$ where $s,t,\overline{t},\overline{s}$ are conjugates over $\mathbb{Q}$. Especially, $s,t$ are conjugates over $\mathbb{Q}(i)$.\par $z=s,t$ is the roots of \begin{align}\label{equation4} R_3(z)=z^2+(a+bi)z+(c+di)\in\mathbb{Z}[i][z]. 
\end{align} For $N=1.28$, all roots of $P(x)$ have modulus less than $N$ if and only if $s$ is inside the domain \begin{align*} E_5:=\left\{z\in\mathbb{C}\mid z\cdot\zeta_{24}^5\in E_1\right\} \end{align*} and $t$ is inside the domain \begin{align*} E_6:=\left\{z\in\mathbb{C}\mid z\cdot\zeta_{24}\in E_1\right\}. \end{align*} The quadratic polynomials of the form (\ref{equation4}), with the roots $s,t\in\mathbb{Q}(\zeta_{12})$ such that $s\in E_5$, $t\in E_6$ and $(x^2+sx+\zeta_{12})(x^2+tx+\zeta_{12}^5)\in\mathbb{Z}[i][z]$, are showed in Table \ref{appendix 13}. Thus, $1.28^2$ is a lower bound except $1$ for this case. \end{enumerate}\par \noindent Therefore, the minimum value of the first dynamical degrees of automorphisms of $8$-dimensional simple abelian varieties is $\frac{1+\sqrt{5}}{2}$ except $1$, and realized on a simple abelian variety of Type 2. \subsection{dimension $g=9$} Let $X$ be a $9$-dimensional simple abelian variety and define $B$, $K$, $K_0$, $d$, $e$ and $e_0$ as in Section \ref{Endomorphism algebras of simple abelian varieties}. \begin{flushleft}{\bf{Type 1}}\end{flushleft}\par Let $X$ be a $9$-dimensional simple abelian variety of Type 1 and $f\in\mathrm{End}(X)$ be an automorphism of $X$ which corresponds to $\alpha\in U_K$ as in Section \ref{The minimum value of the first dynamical degrees of automorphisms}. Now $[K:\mathbb{Q}]=1$ or $3$ or $9$ by the restriction in Table \ref{table1}.\par Assuming that the first dynamical degree is less than $4$, by using similar deduction as in the $4$-dimensional case, the minimum value of the maximal absolute value of the roots of $P(x)$ is $2\mathrm{cos}\left(\frac{\pi}{7}\right)$ except $1$. The realizability is concerned in Section \ref{The minimum value of the first dynamical degrees of automorphisms}. Thus, the minimum value of the first dynamical degrees is $4\mathrm{cos}^2\left(\frac{\pi}{7}\right)$ except $1$ for this type. \begin{flushleft}{\bf{Type 4}}\end{flushleft}\par Let $X$ be a $9$-dimensional simple abelian variety of Type 4 and $f\in\mathrm{End}(X)$ be an automorphism of $X$ which corresponds to $\alpha\in B$.\\ $\bullet$ If $d=1$, $[K:\mathbb{Q}]=2$ or $6$ or $18$ by the restriction in Table \ref{table1} and $\alpha\in U_K$.\par Now $[K_0:\mathbb{Q}]=1$ or $3$ or $9$, respectively. Denote the minimal polynomial of $\alpha$ over $\mathbb{Q}$ by $P(x)$.\par If $\alpha\in K_0$, the minimum value of the square of the maximal absolute value of the roots of $P(x)$ is $4\mathrm{cos}^2\left(\frac{\pi}{7}\right)$ except $1$, by the deduction in Type 1.\par If $\alpha\in K\setminus K_0$, by denoting $a=\abs{\alpha}^2\in U_{K_0}$, as in Section \ref{The minimum value of the first dynamical degrees of automorphisms}, $\mathbb{Q}(\sqrt{a})$ is a totally real number field. By using similar deduction as in the $4$-dimensional case, the minimum value of the square of the maximal absolute value of the roots of $P(x)$ is $4\mathrm{cos}^2\left(\frac{\pi}{7}\right)$ except $1$.\\ $\bullet$ If $d=3$, $[K:\mathbb{Q}]=2$ is only possible by the restriction in Table \ref{table1}.\par Denote the minimal polynomial of $\alpha$ over $K$ by $p(x)\in\mathcal{O}_{K}[x]$ and the minimal polynomial of $\alpha$ over $\mathbb{Q}$ by $P(x)$.\par If the degree of $p(x)$ is $1$, then $\alpha\in U_K$ and so the maximal absolute value of the conjugates is always $1$.\par If the degree of $p(x)$ is $3$, then it can be written as $p(x)=x^3+sx^2+tx+u\in\mathcal{O}_{K}[x]$ with $u\in U_{K}$. 
By Dirichlet's unit theorem, $U_K$ consists only of roots of unity, and so by $[K:\mathbb{Q}]=2$, \begin{align*} u\in\{1,-1,i,-i,\zeta_6,\zeta_6^2,\zeta_6^4,\zeta_6^5\}. \end{align*} When considering the absolute values of the roots of $P(x)$, by the correspondence \begin{align*} &(x^3+sx^2+tx-1)\leftrightarrow(x^3-sx^2+tx+1),\\ &(x^3+sx^2+tx+\zeta_6)\leftrightarrow(x^3+s\zeta_6x^2+t\zeta_6^2x+\zeta_6^4)\leftrightarrow(x^3+\overline{s}\zeta_6^5x^2+\overline{t}\zeta_6^4x+\zeta_6^2)\\ &\hspace{10cm}\leftrightarrow(x^3+\overline{s}x^2+\overline{t}x+\zeta_6^5),\\ &(x^3+sx^2+tx+i)\leftrightarrow(x^3+six^2-tx+1)\leftrightarrow(x^3-sx^2+tx-i)\leftrightarrow(x^3-six^2-tx-1), \end{align*} it suffices to consider the cases $u=1,\zeta_6$. Denote $K=\mathbb{Q}(\sqrt{-D})$ where $D\in\mathbb{Z}_{>0}$ is a square-free integer. Write $p(x)=x^3+(a+b\sqrt{-D})x^2+(a'+b'\sqrt{-D})x+u\in\mathcal{O}_{K}[x]$, and then $P(x)=(x^3+(a+b\sqrt{-D})x^2+(a'+b'\sqrt{-D})x+u)(x^3+(a-b\sqrt{-D})x^2+(a'-b'\sqrt{-D})x+\overline{u})$. For $N=1.3$, all roots of $P(x)$ have modulus less than $N$ if and only if all roots of $p(x)$ have modulus less than $N$. \begin{enumerate} \item{Case $u=1$, $b=0$, $b'=0$}\\ $p(x)=x^3+ax^2+a'x+1\in\mathbb{Z}[x]$ and, by Table \ref{appendix 14}, the minimum value of the maximal absolute value of the conjugates is $1.1509\cdots$. Thus, $1.1509\cdots^2=1.3247\cdots$ is a lower bound of the first dynamical degrees except $1$ for this case. \item{Case $u=1$, $(b,b')\neq(0,0)$}\\ Assuming that all roots of $p(x)$ have modulus less than $N=1.3$, the minimum value of the maximal absolute value of the conjugates is $1.1509\cdots$ by Table \ref{appendix 15}. Thus, $1.1509\cdots^2=1.3247\cdots$ is a lower bound of the first dynamical degrees except $1$ for this case. \item{Case $u=\zeta_6$}\\ Assuming that all roots of $p(x)$ have modulus less than $N=1.3$, the minimum value of the maximal absolute value of the conjugates is $1.2167\cdots$ by Table \ref{appendix 16}. Thus, $1.2167\cdots^2=1.4803\cdots$ is a lower bound of the first dynamical degrees except $1$ for this case. \end{enumerate}\par Therefore, $1.3247\cdots$ is a lower bound of the first dynamical degrees except $1$.\par It remains to show that this is the minimum value.\par Denote the real root of $x^3-x^2+1\in\mathbb{Z}[x]$ by $\lambda$ and the other complex roots by $z,\overline{z}$. Define $E:=\mathbb{Q}(\lambda,z)$, $F:=\mathbb{Q}(\sqrt{-23})$, and $K:=\mathbb{Q}(\lambda)$. \begin{lemma} $E$ contains $F$ as a subfield. Moreover, $E/F$ is a Galois extension of degree $3$ and it is cyclic. \end{lemma} \begin{proof} $E\supset K\supset\mathbb{Q}$ is a sequence of field extensions and it is clear that $E/\mathbb{Q}$ is a Galois extension, $[K:\mathbb{Q}]=3$ and $[E:K]\leq2$. $E\not\subset\mathbb{R}$ and $K\subset\mathbb{R}$ imply $E\neq K$, and so $[E:K]=2$ and $[E:\mathbb{Q}]=6$.\par The equation \begin{align*} x^3-x^2+1=(x-\lambda)(x^2+(\lambda-1)x+\lambda^2-\lambda) \end{align*} implies \begin{align*} z, \overline{z}=\frac{1-\lambda\pm\sqrt{-3\lambda^2+2\lambda+1}}{2} \end{align*} and, by \begin{align*} (3\lambda^2-2\lambda)\sqrt{-3\lambda^2+2\lambda+1}&=\sqrt{(3\lambda^2-2\lambda)^2(-3\lambda^2+2\lambda+1)}\\ &=\sqrt{-23}, \end{align*} $\mathbb{Q}(\lambda,\sqrt{-23})=\mathbb{Q}(\lambda,z)=E$ and so $E\supset F$.\par Now $[F:\mathbb{Q}]=2$ and so $E/F$ is a Galois extension of degree $3$. The Galois group $\mathrm{Gal}(E/F)$ is a group of order $3$ and so it is cyclic.
\end{proof} Define \begin{align*} \begin{array}{cccc} \sigma\colon & E & \rightarrow & E\\ & \lambda & \mapsto & z \\ & z & \mapsto & \overline{z}\\ & \overline{z} & \mapsto & \lambda \end{array} \end{align*} over $\mathbb{Q}$; then $\sigma$ is a generator of $\mathrm{Gal}(E/F)$. \begin{lemma} There exists $u\in F^\times$ such that the order of $u$ in $F^\times/N_{E/F}(E^\times)$ is $3$. \end{lemma} \begin{proof} First consider the prime factorization of the prime ideal $3\mathbb{Z}$ in $E$ and $F$.\par For the field extension $F/\mathbb{Q}$ and the primitive element $\theta=\sqrt{-23}$, the conductor $\mathfrak{J}$ of $\mathbb{Z}[\sqrt{-23}]$ in $\mathcal{O}_F=\mathbb{Z}\left[\frac{1+\sqrt{-23}}{2}\right]$ contains $2$, and so $3\mathbb{Z}$ and $\mathfrak{J}$ are relatively prime. By using Remark \ref{construction of prime ideal factorization}, there is a prime ideal factorization \begin{align*} 3\mathcal{O}_F=\mathfrak{p}_1\mathfrak{p}_2\quad(\mathfrak{p}_1,\ \mathfrak{p}_2\subset\mathcal{O}_F\text{ are prime ideals}), \end{align*} which corresponds to \begin{align*} X^2+23=(X+1)(X+2)\in(\mathbb{Z}/3\mathbb{Z})[X]. \end{align*} Now $F/\mathbb{Q}$ is a Galois extension and so $[\mathcal{O}_F/\mathfrak{p}_1:\mathbb{Z}/3\mathbb{Z}]=1$. For the field extension $E/F$ and the primitive element $\theta'=\lambda$, by using Remark \ref{conductor and discriminant}, the conductor $\mathfrak{J}'$ of $\mathcal{O}_F[\lambda]$ in $\mathcal{O}_E$ contains \begin{align*} \delta=\Delta_{E/F}(1,\lambda,\lambda^2)&=\mathrm{det}\begin{pmatrix} 1 & \lambda & \lambda^2 \\ 1 & z & z^2 \\ 1 & \overline{z} & \overline{z}^2 \end{pmatrix}^2\\ &=(\lambda-z)^2(\lambda-\overline{z})^2(z-\overline{z})^2 =-23 \end{align*} by using Vandermonde's determinant and the discriminant of the polynomial $x^3-x^2+1\in\mathbb{Z}[x]$. This implies that $\mathfrak{p}_1$ is relatively prime to $\mathfrak{J}'$, and so by using Remark \ref{construction of prime ideal factorization}, \begin{align*} \mathfrak{p}_1\mathcal{O}_E=\mathfrak{q}\quad(\mathfrak{q}\subset\mathcal{O}_E\text{ is a prime ideal}), \end{align*} which corresponds to \begin{align*} X^3-X^2+1\text{ is irreducible in }(\mathcal{O}_F/\mathfrak{p}_1)[X]=(\mathbb{Z}/3\mathbb{Z})[X]. \end{align*} The triple $(p=3,\mathfrak{p}=\mathfrak{p}_1,\mathfrak{q})$ satisfies the following conditions. \begin{itemize} \item $\mathfrak{p}$ is over $p\mathbb{Z}$ and $\mathfrak{q}$ is over $\mathfrak{p}$. \item $\sigma(\mathfrak{q})=\mathfrak{q}$ for all $\sigma\in \mathrm{Gal}(E/F)$. \item the multiplicity of $\mathfrak{q}$ in the prime ideal factorization of the ideal $p\mathcal{O}_E$ is not a multiple of $3$. \end{itemize} Thus, by Lemma \ref{multiplicity}, there is no $\alpha\in E^\times\setminus U_E$ such that $N_{E/F}(\alpha)=p$. Moreover, since $p$ is not a unit of $\mathbb{Z}$, there is no $\alpha\in U_E$ such that $N_{E/F}(\alpha)=p$. Thus, $p\notin N_{E/F}(E^\times)$ and the order of $p$ in $F^\times/N_{E/F}(E^\times)$ is $3$. \end{proof} \begin{remark}\label{division algebra} Take a symbol $v$ with $v^3=u$ and define $B=\oplus_{i=0}^{2} v^{i}E$ as in Theorem \ref{construct division algebra}; then $B$ is a division ring and also a central simple algebra over $F$ with $[B:F]=9$.\par Also, the order $\mathcal{O}:=\oplus_{i=0}^{2} v^{i}\mathcal{O}_E\subset B$ contains both $\lambda$ and $\frac{1}{\lambda}$.
\end{remark} \begin{theorem} For the division algebra $B$ in Remark \ref{division algebra}, there is a positive anti-involution of the second kind on $B$ over $\mathbb{Q}$. \end{theorem} \begin{proof} Define \begin{align*} \begin{array}{ccccc} \phi\colon & B & \rightarrow & B & \\ & v^i\cdot a & \mapsto & v^i\cdot(\sigma^i\circ\tau)(a) & (a\in E) \end{array} \end{align*} over $\mathbb{Q}$, where $\tau$ is the complex conjugation map.\par By using the equation $\sigma\circ\tau\circ\sigma=\tau$, \begin{align*} &\phi(1)=1,\\ &\phi(\phi(v^i\cdot a))=\phi(v^i\cdot\sigma^i\circ\tau(a))=v^i\cdot(\sigma^i\circ\tau\circ\sigma^i\circ\tau)(a)=v^i\cdot a, \end{align*} \begin{align*} \phi(v^i\cdot a\cdot v^{i'}\cdot a')=\phi(v^i v^{i'}\cdot\sigma^{i'}(a)a')&=\phi(v^{i+i'}\cdot\sigma^{i'}(a)a')\\ &=v^{i+i'}\cdot(\sigma^{i+i'}\circ\tau)(\sigma^{i'}(a)a')\\ &=v^{i+i'}\cdot(\sigma^i\circ\tau)(a)\cdot(\sigma^{i+i'}\circ\tau)(a') \end{align*} and \begin{align*} \phi(v^{i'}\cdot a')\phi(v^i\cdot a)&=v^{i'}\cdot(\sigma^{i'}\circ\tau)(a')\cdot v^i\cdot(\sigma^i\circ\tau)(a)\\ &=v^{i'} v^i\cdot(\sigma^{i+i'}\circ\tau)(a')\cdot(\sigma^i\circ\tau)(a)\\ &=u^{i+i'}\cdot(\sigma^i\circ\tau)(a)\cdot(\sigma^{i+i'}\circ\tau)(a') \end{align*} hold and so $\phi$ is the anti-involution map over $\mathbb{Q}$. Also, the restriction map $\phi\lvert_F$ is the complex conjugation map and so $\phi$ is of the second kind. Thus, by composing with Theorem \ref{existence of positive anti-involution}, there exists a positive anti-involution map over $\mathbb{Q}$ of the second kind. \end{proof} By Proposition \ref{construction for Type 4}, there is a $9$-dimensional simple abelian variety with an automorphism corresponding to $\lambda$ for the case $d=3$, $e=2$, $e_0=1$ and $m=1$.\par The first dynamical degree of the automorphism is the square of the maximal absolute value of the conjugates of $\lambda$ and it is $1.1509\cdots^2=1.3247\cdots$.\\ \\ Therefore, the minimum value of the first dynamical degrees of automorphisms of $9$-dimensional simple abelian varieties is $1.3247\cdots$ except $1$, and realized on a simple abelian variety of Type 4. \begin{remark} By some deduction, we can check that $1.3247\cdots$ is the real root of $x^3-x-1\in\mathbb{Z}[x]$ and it is the smallest Pisot number (cf.\ \cite[Theorem 7.2.1]{BDGPS92}). \end{remark} \subsection{dimension $g=10$} Let $X$ be a $10$-dimensional simple abelian variety and define $B$, $K$, $K_0$, $d$, $e$ and $e_0$ as in Section \ref{Endomorphism algebras of simple abelian varieties}. \begin{flushleft}{\bf{Type 1}}\end{flushleft}\par Let $X$ be a $10$-dimensional simple abelian variety of Type 1 and $f\in\mathrm{End}(X)$ be an automorphism of $X$ which corresponds to $\alpha\in U_K$ as in Section \ref{The minimum value of the first dynamical degrees of automorphisms}. Now $[K:\mathbb{Q}]=1$ or $2$ or $5$ or $10$ by the restriction in Table \ref{table1}.\par Assuming that the first dynamical degree is less than $4$, by using similar deduction as in the $4$-dimensional case, the minimum value of the maximal absolute value of the roots of $P(x)$ is $2\mathrm{cos}\left(\frac{\pi}{5}\right)$ except $1$. The realizability is concerned in Section \ref{The minimum value of the first dynamical degrees of automorphisms}. Thus, the minimum value of the first dynamical degrees is $4\mathrm{cos}^2\left(\frac{\pi}{5}\right)=\left(\frac{1+\sqrt{5}}{2}\right)^2$ except $1$ for this type. 
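The numerical constants used in this and the previous subsection can be checked with a few lines of Python. The following minimal sketch assumes NumPy is available and is not part of the program of Appendix \ref{appendix a}; it computes the maximal modulus of the roots of a monic integer polynomial, confirms that for $x^3-x^2+1$ this value is $1.1509\cdots$, whose square $1.3247\cdots$ is the real root of $x^3-x-1$, and evaluates $4\mathrm{cos}^2\left(\frac{\pi}{5}\right)$.
\begin{verbatim}
import numpy as np

def max_root_modulus(coeffs):
    # coeffs: coefficients of a monic integer polynomial, highest degree first
    return max(abs(r) for r in np.roots(coeffs))

print(max_root_modulus([1, -1, 0, 1]))        # x^3 - x^2 + 1  ->  1.1509...
print(max_root_modulus([1, -1, 0, 1]) ** 2)   # 1.3247...
print(max_root_modulus([1, 0, -1, -1]))       # x^3 - x - 1    ->  1.3247... (smallest Pisot number)
print(4 * np.cos(np.pi / 5) ** 2)             # 2.6180... = ((1 + sqrt(5))/2)^2
\end{verbatim}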
\begin{flushleft}{\bf{Type 2, Type 3}}\end{flushleft}\par Let $X$ be a $10$-dimensional simple abelian variety of Type 2 or 3 and $f\in\mathrm{End}(X)$ be an automorphism of $X$ which corresponds to $\alpha\in B$. Now $[K:\mathbb{Q}]=1$ or $5$ by the restriction in Table \ref{table1}.\par Denote the minimal polynomial of $\alpha$ over $K$ by $p(x)$ and the minimal polynomial of $\alpha$ over $\mathbb{Q}$ by $P(x)$. If the degree of $p(x)$ is $1$, then $\alpha\in U_K$ and so it is reduced to Type 1 of $5$-dimensional case. Thus, $4\mathrm{cos}^2\left(\frac{\pi}{11}\right)$ is a lower bound except $1$ for this case.\par If the degree of $p(x)$ is $2$, it can be written as $p(x)=x^2+ax+b\in\mathcal{O}_K[x](b\in U_K)$ and consider the following 3 cases as in Section \ref{The minimum value of the first dynamical degrees of automorphisms}. Let $\{\sigma_i\}$ be the set of all $\mathbb{Q}$-embeddings $K\hookrightarrow\mathbb{C}$.\\ $\bullet$ there exists $1\leq i\leq e$ such that $\abs{\sigma_i(b)}>1$\par The maximal absolute value of all conjugates of $b\in U_K$ is at least $2\mathrm{cos}\left(\frac{\pi}{11}\right)$ and so the maximal absolute value of the roots of $P(x)$ is at least $\sqrt{2\mathrm{cos}\left(\frac{\pi}{11}\right)}$. Thus, $2\mathrm{cos}\left(\frac{\pi}{11}\right)=1.9189\cdots$ is a lower bound except $1$ for this case.\\ $\bullet$ $\sigma_i(b)=1$ for all $1\leq i\leq e$\par $\mathrm{max}\{\abs{\sigma_i(a)}\}$ must be larger than $2$ and close to $2$. By using a computer algebra, totally real algebraic integers of degree $5$ whose conjugates have modulus less than $2.1$ are enumerated in Table \ref{appendix 17}. $a=2.0264\cdots$ provides the minimum of the maximal absolute value. For this case, the maximal absolute value of the roots of $P(x)$ is the Lehmer number $1.1762\cdots$ and so $1.1762\cdots^2=1.3836\cdots$ is a lower bound except $1$ for this case.\\ $\bullet$ $\sigma_i(b)=-1$ for all $1\leq i\leq e$\par $a=1$ provides the minimum value of the first dynamical degrees as in Section \ref{The minimum value of the first dynamical degrees of automorphisms}. Thus, $\left(\frac{1+\sqrt{5}}{2}\right)^2$ is a lower bound except $1$ for this case.\par Thus, $1.1762\cdots^2=1.3836\cdots$ is a lower bound of the first dynamical degrees except $1$ for these types. The realizability of $1.1762\cdots^2=1.3836\cdots$ as the first dynamical degree is due to \cite[Theorem 2.1 (i)]{DH22}. \begin{flushleft}{\bf{Type 4}}\end{flushleft}\par Let $X$ be a $10$-dimensional simple abelian variety and $f\in\mathrm{End}(X)$ be an automorphism of $X$ which corresponds to $\alpha\in B$.\par Now $d=1$ and $[K:\mathbb{Q}]=2$ or $4$ or $10$ or $20$ by the restriction in Table \ref{table1} and $\alpha\in U_K$.\par Now $[K_0:\mathbb{Q}]=1$ or $2$ or $5$ or $10$, respectively. Denote the minimal polynomial of $\alpha$ over $\mathbb{Q}$ by $P(x)$.\par If $\alpha\in K_0$, the minimum value of the square of the maximal absolute value of the roots of $P(x)$ is $4\mathrm{cos}^2\left(\frac{\pi}{5}\right)$ except $1$, by the deduction in Type 1.\par If $\alpha\in K\setminus K_0$, by denoting $a=\abs{\alpha}^2\in U_{K_0}$, as in Section \ref{The minimum value of the first dynamical degrees of automorphisms}, $\mathbb{Q}(\sqrt{a})$ is a totally real number field. 
By using similar deduction as in the $4$-dimensional case, the minimum value of the square of the maximal absolute value of the roots of $P(x)$ is $4\mathrm{cos}^2\left(\frac{\pi}{5}\right)$ except $1$.\\ \\ Therefore, the minimum value of the first dynamical degrees of automorphisms of $10$-dimensional simple abelian varieties is $1.3836\cdots$ except $1$, and realized on a simple abelian variety of Type 2. \appendix \setcounter{table}{0} \renewcommand{\Alph{section}.\arabic{table}}{\Alph{section}.\arabic{table}} \section{Tables}\label{appendix a} The tables we use in this paper are lined up in this section. On creating from Table \ref{appendix 6} to Table \ref{appendix 17}, we use Python as a programming language. The program is available at \url{https://github.com/sugi0000/maximal-absolute-value}. \begin{table}[hbtp] \caption{The degree of $S_n(x)$ for small $n$} \label{appendix 1} \small \begin{tabular}{|c|c|c|} \hline degree of $P_n(x)$ & cyclotomic factor of $P_n(x)$ & degree of $S_n(x)$ \\ \hline 10 & & 10\\ \hline 11 & $\Phi_2(x)$ & 10\\ \hline 12 & $\Phi_3(x)$ & 10\\ \hline 13 & $\Phi_2(x)$, $\Phi_8(x)$ & 8\\ \hline 14 & $\Phi_5(x)$ & 10\\ \hline 15 & $\Phi_2(x)$, $\Phi_3(x)$ & 12\\ \hline 16 & & 16\\ \hline 17 & $\Phi_2(x)$ & 16\\ \hline 18 & $\Phi_3(x)$, $\Phi_{12}(x)$ & 12\\ \hline 19 & $\Phi_2(x)$, $\Phi_5(x)$ & 14\\ \hline 20 & & 20\\ \hline 21 & $\Phi_2(x)$, $\Phi_3(x)$, $\Phi_8(x)$ & 14\\ \hline 22 & & 22\\ \hline 23 & $\Phi_2(x)$ & 22\\ \hline 24 & $\Phi_3(x)$, $\Phi_5(x)$ & 18\\ \hline 25 & $\Phi_2(x)$, $\Phi_{18}(x)$ & 18\\ \hline 26 & & 26\\ \hline 27 & $\Phi_2(x)$, $\Phi_3(x)$ & 24\\ \hline 28 & & 28\\ \hline 29 & $\Phi_2(x)$, $\Phi_5(x)$, $\Phi_8(x)$ & 20\\ \hline 30 & $\Phi_3(x)$, $\Phi_{12}(x)$ & 24\\ \hline 31 & $\Phi_2(x)$ & 30\\ \hline 32 & & 32\\ \hline 33 & $\Phi_2(x)$, $\Phi_3(x)$ & 30\\ \hline \end{tabular} \end{table} \begin{table}[hbtp] \caption{The degree of $S'_m(x)$ for small $m$} \label{appendix 2} \small \begin{tabular}{|c|c|c|} \hline degree of $Q_m(x)$ & cyclotomic factor of $Q_m(x)$ & degree of $S'_m(x)$ \\ \hline 4 & & 4\\ \hline 5 & $\Phi_2(x)$ & 4\\ \hline 6 & & 6\\ \hline 7 & $\Phi_2(x)$ & 6\\ \hline 8 & & 8\\ \hline 9 & $\Phi_2(x)$ & 8\\ \hline 10 & $\Phi_8(x)$ & 6\\ \hline 11 & $\Phi_2(x)$ & 10\\ \hline 12 & & 12\\ \hline 13 & $\Phi_2(x)$, $\Phi_{12}(x)$ & 8\\ \hline 14 & & 14\\ \hline 15 & $\Phi_2(x)$ & 14\\ \hline 16 & & 16\\ \hline 17 & $\Phi_2(x)$, $\Phi_{18}(x)$ & 10\\ \hline 18 & $\Phi_8(x)$ & 14\\ \hline 19 & $\Phi_2(x)$ & 18\\ \hline 20 & & 20\\ \hline 21 & $\Phi_2(x)$ & 20\\ \hline 22 & & 22\\ \hline 23 & $\Phi_2(x)$ & 22\\ \hline 24 & $\Phi_{30}(x)$ & 16\\ \hline 25 & $\Phi_2(x)$, $\Phi_{12}(x)$ & 20\\ \hline 26 & $\Phi_8(x)$ & 22\\ \hline 27 & $\Phi_2(x)$ & 26\\ \hline 28 & & 28\\ \hline 29 & $\Phi_2(x)$ & 28\\ \hline 30 & & 30\\ \hline 31 & $\Phi_2(x)$ & 30\\ \hline \end{tabular} \end{table} \begin{table}[hbtp] \caption{The degree of $S_n(x)$} \label{appendix 3} \tiny \setlength{\tabcolsep}{3pt} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \begin{tabular}{c}degree of\\ $P_n(x)$\end{tabular} & \begin{tabular}{c}degree of\\ $S_n(x)$\end{tabular} & \begin{tabular}{c}degree of\\ $P_n(x)$\end{tabular} & \begin{tabular}{c}degree of\\ $S_n(x)$\end{tabular} & \begin{tabular}{c}degree of\\ $P_n(x)$\end{tabular} & \begin{tabular}{c}degree of\\ $S_n(x)$\end{tabular} & \begin{tabular}{c}degree of\\ $P_n(x)$\end{tabular} & \begin{tabular}{c}degree of\\ $S_n(x)$\end{tabular} & \begin{tabular}{c}degree of\\ $P_n(x)$\end{tabular} & 
\begin{tabular}{c}degree of\\ $S_n(x)$\end{tabular}\\ \hline 11 & 10 & 90 & 84 & 168 & 166 & 247 & 246 & 327 & 324 \\ \hline 12 & 10 & 91 & 90 & 171 & 168 & 248 & 240 & 328 & 328\\ \hline 13 & 8 & 92 & 92 & 172 & 172 & 251 & 250 & 330 & 324\\ \hline 15 & 12 & 93 & 86 & 173 & 168 & 252 & 250 & 331 & 324\\ \hline 16 & 16 & 95 & 94 & 174 & 164 & 253 & 248 & 332 & 332\\ \hline 18 & 12 & 96 & 94 & 175 & 174 & 255 & 252 & 333 & 326\\ \hline 19 & 14 & 99 & 92 & 176 & 176 & 256 & 256 & 335 & 334\\ \hline 20 & 20 & 100 & 100 & 179 & 174 & 258 & 252 & 336 & 334\\ \hline 21 & 14 & 101 & 96 & 180 & 178 & 259 & 248 & 339 & 332\\ \hline 23 & 22 & 102 & 96 & 181 & 176 & 260 & 260 & 340 & 340\\ \hline 24 & 18 & 103 & 102 & 183 & 180 & 261 & 254 & 341 & 336\\ \hline 27 & 24 & 104 & 100 & 184 & 180 & 263 & 262 & 342 & 336\\ \hline 28 & 28 & 107 & 106 & 186 & 180 & 264 & 258 & 343 & 342\\ \hline 29 & 20 & 108 & 106 & 187 & 180 & 267 & 264 & 344 & 340\\ \hline 30 & 24 & 109 & 100 & 188 & 180 & 268 & 268 & 347 & 346\\ \hline 31 & 30 & 111 & 108 & 189 & 178 & 269 & 260 & 348 & 346\\ \hline 32 & 32 & 112 & 112 & 191 & 190 & 270 & 264 & 349 & 334\\ \hline 35 & 34 & 114 & 104 & 192 & 190 & 271 & 270 & 351 & 348\\ \hline 36 & 34 & 115 & 108 & 195 & 192 & 272 & 272 & 352 & 352\\ \hline 37 & 32 & 116 & 116 & 196 & 196 & 275 & 274 & 354 & 344\\ \hline 39 & 32 & 117 & 110 & 197 & 192 & 276 & 274 & 355 & 354\\ \hline 40 & 40 & 119 & 114 & 198 & 192 & 277 & 266 & 356 & 356\\ \hline 42 & 36 & 120 & 118 & 199 & 194 & 279 & 272 & 357 & 350\\ \hline 43 & 36 & 123 & 120 & 200 & 200 & 280 & 280 & 359 & 354\\ \hline 44 & 40 & 124 & 120 & 203 & 202 & 282 & 276 & 360 & 358\\ \hline 45 & 38 & 125 & 120 & 204 & 198 & 283 & 282 & 363 & 360\\ \hline 47 & 46 & 126 & 120 & 205 & 194 & 284 & 280 & 364 & 360\\ \hline 48 & 46 & 127 & 126 & 207 & 204 & 285 & 278 & 365 & 360\\ \hline 51 & 48 & 128 & 120 & 208 & 208 & 287 & 286 & 366 & 360\\ \hline 52 & 52 & 131 & 130 & 210 & 204 & 288 & 286 & 367 & 360\\ \hline 53 & 48 & 132 & 130 & 211 & 210 & 291 & 288 & 368 & 360\\ \hline 54 & 44 & 133 & 122 & 212 & 212 & 292 & 292 & 371 & 370\\ \hline 55 & 54 & 135 & 132 & 213 & 206 & 293 & 288 & 372 & 370\\ \hline 56 & 56 & 136 & 136 & 215 & 214 & 294 & 284 & 373 & 368\\ \hline 59 & 54 & 138 & 132 & 216 & 214 & 295 & 288 & & \\ \hline 60 & 58 & 139 & 134 & 219 & 212 & 296 & 296 & & \\ \hline 61 & 50 & 140 & 140 & 220 & 220 & 299 & 294 & & \\ \hline 63 & 60 & 141 & 134 & 221 & 216 & 300 & 298 & & \\ \hline 64 & 60 & 143 & 142 & 222 & 216 & 301 & 296 & & \\ \hline 66 & 60 & 144 & 138 & 223 & 216 & 303 & 300 & & \\ \hline 67 & 66 & 147 & 144 & 224 & 220 & 304 & 300 & & \\ \hline 68 & 60 & 148 & 148 & 227 & 226 & 306 & 300 & & \\ \hline 69 & 58 & 149 & 140 & 228 & 226 & 307 & 306 & & \\ \hline 71 & 70 & 150 & 144 & 229 & 220 & 308 & 300 & & \\ \hline 72 & 70 & 151 & 144 & 231 & 228 & 309 & 298 & & \\ \hline 75 & 72 & 152 & 152 & 232 & 232 & 311 & 310 & & \\ \hline 76 & 76 & 155 & 154 & 234 & 224 & 312 & 310 & & \\ \hline 77 & 72 & 156 & 154 & 235 & 234 & 315 & 312 & & \\ \hline 78 & 72 & 157 & 152 & 236 & 236 & 316 & 316 & & \\ \hline 79 & 68 & 159 & 152 & 237 & 230 & 317 & 312 & & \\ \hline 80 & 80 & 160 & 160 & 239 & 234 & 318 & 312 & & \\ \hline 83 & 82 & 162 & 156 & 240 & 238 & 319 & 314 & & \\ \hline 84 & 78 & 163 & 162 & 243 & 240 & 320 & 320 & & \\ \hline 85 & 80 & 164 & 160 & 244 & 240 & 323 & 322 & & \\ \hline 87 & 84 & 165 & 158 & 245 & 240 & 324 & 318 & & \\ \hline 88 & 88 & 167 & 166 & 246 & 240 & 325 & 320 & & \\ \hline \end{tabular} 
\end{table} \begin{table}[hbtp] \caption{The degree of $S'_m(x)$} \label{appendix 4} \tiny \setlength{\tabcolsep}{3pt} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \begin{tabular}{c}degree of\\ $Q_m(x)$\end{tabular} & \begin{tabular}{c}degree of\\ $S'_m(x)$\end{tabular} & \begin{tabular}{c}degree of\\ $Q_m(x)$\end{tabular} & \begin{tabular}{c}degree of\\ $S'_m(x)$\end{tabular} & \begin{tabular}{c}degree of\\ $Q_m(x)$\end{tabular} & \begin{tabular}{c}degree of\\ $S'_m(x)$\end{tabular} & \begin{tabular}{c}degree of\\ $Q_m(x)$\end{tabular} & \begin{tabular}{c}degree of\\ $S'_m(x)$\end{tabular} &\begin{tabular}{c}degree of\\ $Q_m(x)$\end{tabular} & \begin{tabular}{c}degree of\\ $S'_m(x)$\end{tabular} \\ \hline 7 & 6 & 95 & 94 & 181 & 176 & 267 & 266 & 355 & 354\\ \hline 10 & 6 & 97 & 92 & 183 & 182 & 271 & 270 & 359 & 352\\ \hline 11 & 10 & 98 & 94 & 186 & 182 & 274 & 270 & 361 & 356\\ \hline 13 & 8 & 99 & 98 & 187 & 186 & 275 & 274 & 362 & 358\\ \hline 15 & 14 & 103 & 102 & 191 & 190 & 277 & 272 & 363 & 362\\ \hline 18 & 14 & 106 & 102 & 193 & 188 & 279 & 278 & 367 & 366\\ \hline 19 & 18 & 107 & 100 & 194 & 190 & 282 & 278 & 370 & 366\\ \hline 23 & 22 & 109 & 104 & 195 & 194 & 283 & 282 & 371 & 370\\ \hline 25 & 20 & 111 & 110 & 199 & 198 & 287 & 280 & 373 & 368\\ \hline 26 & 22 & 114 & 102 & 202 & 198 & 289 & 284 & 375 & 374\\ \hline 27 & 26 & 115 & 114 & 203 & 202 & 290 & 286 & 378 & 374\\ \hline 31 & 30 & 119 & 118 & 205 & 200 & 291 & 290 & & \\ \hline 34 & 30 & 121 & 116 & 207 & 206 & 295 & 294 & & \\ \hline 35 & 28 & 122 & 118 & 210 & 206 & 298 & 294 & & \\ \hline 37 & 32 & 123 & 122 & 211 & 210 & 299 & 298 & & \\ \hline 39 & 38 & 127 & 126 & 215 & 208 & 301 & 296 & & \\ \hline 42 & 38 & 130 & 126 & 217 & 212 & 303 & 302 & & \\ \hline 43 & 42 & 131 & 130 & 218 & 214 & 306 & 302 & & \\ \hline 47 & 46 & 133 & 128 & 219 & 218 & 307 & 306 & & \\ \hline 49 & 44 & 135 & 134 & 223 & 222 & 311 & 310 & & \\ \hline 50 & 46 & 138 & 134 & 226 & 222 & 313 & 308 & & \\ \hline 51 & 50 & 139 & 138 & 227 & 226 & 314 & 310 & & \\ \hline 55 & 54 & 143 & 136 & 229 & 224 & 315 & 314 & & \\ \hline 58 & 54 & 145 & 140 & 231 & 230 & 319 & 318 & & \\ \hline 59 & 58 & 146 & 142 & 234 & 222 & 322 & 318 & & \\ \hline 61 & 56 & 147 & 146 & 235 & 234 & 323 & 316 & & \\ \hline 63 & 62 & 151 & 150 & 239 & 238 & 325 & 320 & & \\ \hline 66 & 62 & 154 & 150 & 241 & 236 & 327 & 326 & & \\ \hline 67 & 66 & 155 & 154 & 242 & 238 & 330 & 326 & & \\ \hline 71 & 64 & 157 & 152 & 243 & 242 & 331 & 330 & & \\ \hline 73 & 68 & 159 & 158 & 247 & 246 & 335 & 334 & & \\ \hline 74 & 70 & 162 & 158 & 250 & 246 & 337 & 332 & & \\ \hline 75 & 74 & 163 & 162 & 251 & 244 & 338 & 334 & & \\ \hline 79 & 78 & 167 & 166 & 253 & 248 & 339 & 338 & & \\ \hline 82 & 78 & 169 & 164 & 255 & 254 & 343 & 342 & & \\ \hline 83 & 82 & 170 & 166 & 258 & 254 & 346 & 342 & & \\ \hline 85 & 80 & 171 & 170 & 259 & 258 & 347 & 346 & & \\ \hline 87 & 86 & 175 & 174 & 263 & 262 & 349 & 344 & & \\ \hline 90 & 86 & 178 & 174 & 265 & 260 & 351 & 350 & & \\ \hline 91 & 90 & 179 & 172 & 266 & 262 & 354 & 342 & & \\ \hline \end{tabular} \end{table} \begin{table}[hbtp] \begin{threeparttable}[h] \caption{Calculation of the maximal absolute value of the roots for the case of Type 4 of $4$-dimenisonal} \label{appendix 5} \footnotesize \begin{tabular}{|c|c|c|c|c|} \hline $D$ & $(p,q)$ & $t$ & $(x^2+sx+t)(x^2+\overline{s}x+\overline{t})$ & maximal absolute value \\ \hline\hline $7$ & $(-2,1)$ & 1 & $x^4-3x^3+6x^2-3x+1$ & $2.1022\cdots$\\ $7$ & $(-2,1)$ & -1 & 
$x^4-3x^3+2x^2+3x+1$ & $2.1889\cdots$\\ $7$ & $(-1,1)$ & 1 & $x^4-x^3+4x^2-x+1$ & $1.8832\cdots$\\ $7$ & $(-1,1)$ & -1 & $x^4-x^3+x+1$ & $1.3722\cdots$\\ $7$ & $(0,1)$ & 1 & $x^4+x^3+4x^2+x+1$ & $1.8832\cdots$\\ $7$ & $(0,1)$ & -1 & $x^4+x^3-x+1$ & $1.3722\cdots$\\ $7$ & $(1,1)$ & 1 & $x^4+3x^3+6x^2+3x+1$ & $2.1022\cdots$\\ $7$ & $(1,1)$ & -1 & $x^4+3x^3+2x^2-3x+1$ & $2.1889\cdots$\\ $11$ & $(-1,1)$ & 1 & $x^4-x^3+5x^2-x+1$ & $2.1537\cdots$\\ $11$ & $(-1,1)$ & -1 & $x^4-x^3+x^2+x+1$ & $1.4675\cdots$\\ $11$ & $(0,1)$ & 1 & $x^4+x^3+5x^2+x+1$ & $2.1537\cdots$\\ $11$ & $(0,1)$ & -1 & $x^4+x^3+x^2-x+1$ & $1.4675\cdots$\\ $15$ & $(-1,1)$ & 1 & $x^4-x^3+6x^2-x+1$ & $2.3869\cdots$\\ $15$ & $(-1,1)$ & -1 & $x^4-x^3+2x^2+x+1$ & $1.6180\cdots$\\ $15$ & $(0,1)$ & 1 & $x^4+x^3+6x^2+x+1$ & $2.3869\cdots$\\ $15$ & $(0,1)$ & -1 & $x^4+x^3+2x^2-x+1$ & $1.6180\cdots$\\ $2$ & $(-1,1)$ & 1 & $x^4-2x^3+5x^2-2x+1$ & $2.0322\cdots$\\ $2$ & $(-1,1)$ & -1 & $x^4-2x^3+x^2+2x+1$ & $1.8039\cdots$\\ $2$ & $(0,1)$ & 1 & $x^4+4x^2+1$ & $1.9318\cdots$\\ $2$ & $(0,1)$ & -1 & $x^4+1$ & $1$\\ $2$ & $(1,1)$ & 1 & $x^4+2x^3+5x^2+2x+1$ & $2.0322\cdots$\\ $2$ & $(-1,1)$ & -1 & $x^4+2x^3+x^2-2x+1$ & $1.8039\cdots$\\ $3$ & $(1,1)$ & $1$ & $x^4+3x^3+5x^2+3x+1$ & $1.7220\cdots$\\ $3$ & $(1,1)$ & $\omega$\tnote{*1} & $x^4+3x^3+4x^2+3x+1$ & $1$\\ $3$ & $(1,1)$ & $\omega^2$ & $x^4+3x^3+2x^2+1$ & $1.7220\cdots$\\ $3$ & $(1,1)$ & $\omega^3=-1$ & $x^4+3x^3+x^2-3x+1$ & $2.0758\cdots$\\ $3$ & $(1,1)$ & $\omega^4$ & $x^4+3x^3+2x^2-3x+1$ & $2.1889\cdots$\\ $3$ & $(1,1)$ & $\omega^5$ & $x^4+3x^3+4x^2+1$ & $2.0758\cdots$\\ $3$ & $(1,0)$ & $\omega$ & $x^4+2x^3+2x^2+x+1$ & $1.3122\cdots$\\ $3$ & $(1,0)$ & $\omega^2$ & $x^4+2x^3-x+1$ & $1.5392\cdots$\\ $3$ & $(1,0)$ & $\omega^4$ & $x^4+2x^3-x+1$ & $1.5392\cdots$\\ $3$ & $(1,0)$ & $\omega^5$ & $x^4+2x^3+2x^2+x+1$ & $1.3122\cdots$\\ $3$ & $(2,0)$ & $\omega$ & $x^4+4x^3+5x^2+2x+1$ & $1.9318\cdots$\\ $3$ & $(2,0)$ & $\omega^2$ & $x^4+4x^3+3x^2-2x+1$ & $2.2966\cdots$\\ $3$ & $(2,0)$ & $\omega^4$ & $x^4+4x^3+3x^2-2x+1$ & $2.2966\cdots$\\ $3$ & $(2,0)$ & $\omega^5$ & $x^4+4x^3+5x^2+2x+1$ & $1.9318\cdots$\\ $1$ & $(1,1)$ & $1$ & $x^4+2x^3+4x^2+2x+1$ & $1.7000\cdots$\\ $1$ & $(1,1)$ & $i$ & $x^4+2x^3+2x^2+2x+1$ & $1$\\ $1$ & $(1,1)$ & $-1$ & $x^4+2x^3-2x+1$ & $1.7000\cdots$\\ $1$ & $(1,1)$ & $-i$ & $x^4+2x^3+2x^2-2x+1$ & $1.9318\cdots$\\ $1$ & $(1,0)$ & $i$ & $x^4+2x^3+x^2+1$ & $1.4425\cdots$\\ $1$ & $(1,0)$ & $-i$ & $x^4+2x^3+x^2+1$ & $1.4425\cdots$\\ $1$ & $(2,0)$ & $i$ & $x^4+4x^3+4x^2+1$ & $2.1474\cdots$\\ $1$ & $(2,0)$ & $-i$ & $x^4+4x^3+4x^2+1$ & $2.1474\cdots$\\ \hline \end{tabular} \begin{tablenotes} \item{*1} Denote $\omega=\frac{1+\sqrt{3}i}{2}$ \end{tablenotes} \end{threeparttable} \end{table} \begin{table}[hbtp] \caption{totally real, cubic algebraic integer, which is larger than $2$, whose conjugates have modulus$<2.25$} \label{appendix 6} \small \begin{tabular}{|c|c|} \hline algebraic integer & polynomial \\ \hline\hline $2.1149\cdots$ & $x^3-4x-1$\\ $2.1700\cdots$ & $x^3-x^2-3x+1$\\ $2.1986\cdots$ & $x^3-x^2-4x+3$\\ $2.2143\cdots$ & $x^3-4x-2$\\ $2.2469\cdots$ & $x^3-2x^2-x+1$\\ \hline \end{tabular} \end{table} \begin{table}[hbtp] \caption{totally positive, septic algebraic unit, which is larger than $4$, whose conjugates are in $(0,4.04)$} \label{appendix 7} \small \begin{tabular}{|c|c|} \hline algebraic unit & polynomial \\ \hline\hline $4.0333\cdots$ & $x^7-14x^6+77x^5-211x^4+301x^3-210x^2+56x-1$\\ $4.0341\cdots$ & $x^7-14x^6+76x^5-200x^4+259x^3-146x^2+24x-1$\\ \hline \end{tabular} 
\end{table} \begin{table}[hbtp] \caption{totally real, quartic algebraic integer, which is larger than $2$, whose conjugates have modulus$<2.1$} \label{appendix 8} \small \begin{tabular}{|c|c|} \hline algebraic integer & minimal polynomial \\ \hline\hline $2.0614\cdots$ & $x^4-4x^2-x+1$\\ $2.0743\cdots$ & $x^4-5x^2+3$\\ $2.0952\cdots$ & $x^4-x^3-3x^2+x+1$\\ \hline \end{tabular} \end{table} \begin{table}[hbtp] \caption{irreducible quartic polynomial, all roots are in $E$, one root is not inside $[-2,2]$, identifying $R(X)$ with $R(-X)$} \label{appendix 9} \footnotesize \begin{tabular}{|c|c|c|c|} \hline \begin{tabular}{c} condition\\ (\ref{equation1})\end{tabular} & polynomial $R(X)$ & polynomial $x^4R(x+\frac{1}{x})$ & \begin{tabular}{c}maximal\\ absolute value\end{tabular} \\ \hline\hline N & $X^4-2X^3-2X^2+5X-1$ & \begin{tabular}{c}$x^8-2x^7+2x^6-x^5+x^4$\\ $-x^3+2x^2-2x+1$\end{tabular} & $1.2150\cdots$ \\ N & $X^4-X^3-4X^2+2X+5$ & \begin{tabular}{c}$x^8-x^7-x^5+3x^4-x^3-x+1$\end{tabular} & $1.1837\cdots$ \\ N & $X^4-X^3-4X^2+4X+2$ & \begin{tabular}{c}$x^8-x^7+x^5+x^3-x+1$\end{tabular} & $1.2744\cdots$ \\ N & $X^4-X^3-3X^2+X+3$ & \begin{tabular}{c}$x^8-x^7+x^6-2x^5+3x^4$\\ $-2x^3+x^2-x+1$\end{tabular} & $1.2522\cdots$ \\ N & $X^4-X^3-3X^2+3X-1$ & \begin{tabular}{c}$x^8-x^7+x^6-x^4+x^2-x+1$\end{tabular} & $1.1705\cdots$ \\ N & $X^4-X^3-3X^2+3X+1$ & \begin{tabular}{c}$x^8-x^7+x^6+x^4+x^2-x+1$\end{tabular} & $1.2196\cdots$ \\ N & $X^4-X^3-2X^2+1$ & \begin{tabular}{c}$x^8-x^7+2x^6-3x^5+3x^4$\\ $-3x^3+2x^2-x+1$\end{tabular} & $1.2408\cdots$ \\ N & $X^4-X^3-2X^2+X+2$ & \begin{tabular}{c}$x^8-x^7+2x^6-2x^5+4x^4$\\ $-2x^3+2x^2-x+1$\end{tabular} & $1.2474\cdots$ \\ N & $X^4-X^3-2X^2+2X-1$ & \begin{tabular}{c}$x^8-x^7+2x^6-x^5+x^4$\\ $-x^3+2x^2-x+1$\end{tabular} & $1.2722\cdots$ \\ N & $X^4-5X^2+7$ & \begin{tabular}{c}$x^8-x^6+3x^4-x^2+1$\end{tabular} & $1.2406\cdots$ \\ N & $X^4-4X^2-X+3$ & \begin{tabular}{c}$x^8-x^5+x^4-x^3+1$\end{tabular} & $1.1692\cdots$ \\ Y & $X^4-4X^2-1$ & \begin{tabular}{c}$x^8-3x^4+1$\end{tabular} & $1.2720\cdots$ \\ N & $X^4-4X^2+5$ & \begin{tabular}{c}$x^8+3x^4+1$\end{tabular} & $1.2720\cdots$ \\ N & $X^4-3X^2+3$ & \begin{tabular}{c}$x^8+x^6+3x^4+x^2+1$\end{tabular} & $1.2406\cdots$ \\ \hline \end{tabular} \end{table} \begin{table}[hbtp] \caption{irreducible quartic polynomial, all roots are in $E_1$, one root is not written as $si$ with $s\in[-2,2]$, identifying $R(X)$ with $R(-X)$} \label{appendix 10} \footnotesize \begin{tabular}{|c|c|c|c|} \hline \begin{tabular}{c} condition\\ (\ref{equation1})\end{tabular} & polynomial $R(X)$ & polynomial $x^4R(x-\frac{1}{x})$ & \begin{tabular}{c}maximal\\ absolute value\end{tabular} \\ \hline\hline N & $X^4-X^3+3X^2-2X+1$ & \begin{tabular}{c}$x^8-x^7-x^6+x^5+x^4$\\ $-x^3-x^2+x+1$\end{tabular} & $1.2245\cdots$ \\ N & $X^4-X^3+4X^2-3X+1$ & \begin{tabular}{c}$x^8-x^7-x^4+x+1$\end{tabular} & $1.2306\cdots$ \\ N & $X^4-X^3+4X^2-3X+2$ & \begin{tabular}{c}$x^8-x^7+x+1$\end{tabular} & $1.2612\cdots$ \\ N & $X^4-X^3+4X^2-2X+2$ & \begin{tabular}{c}$x^8-x^7+x^5-x^3+x+1$\end{tabular} & $1.2397\cdots$ \\ N & $X^4-X^3+4X^2-2X+3$ & \begin{tabular}{c}$x^8-x^7+x^5+x^4-x^3+x+1$\end{tabular} & $1.1837\cdots$ \\ N & $X^4-X^3+5X^2-4X+3$ & \begin{tabular}{c}$x^8-x^7+x^6-x^5+x^4$\\ $+x^3+x^2+x+1$\end{tabular} & $1.2788\cdots$ \\ N & $X^4-X^3+5X^2-3X+4$ & \begin{tabular}{c}$x^8-x^7+x^6+x^2+x+1$\end{tabular} & $1.2272\cdots$ \\ N & $X^4-X^3+5X^2-3X+5$ & \begin{tabular}{c}$x^8-x^7+x^6+x^4+x^2+x+1$\end{tabular} & $1.2734\cdots$ \\ N & $X^4-X^3+6X^2-3X+8$ & 
\begin{tabular}{c}$x^8-x^7+2x^6+2x^4+2x^2+x+1$\end{tabular} & $1.2474\cdots$ \\ N & $X^4+2X^2-X+1$ & \begin{tabular}{c}$x^8-2x^6-x^5+3x^4+x^3-2x^2+1$\end{tabular} & $1.2553\cdots$ \\ N & $X^4+3X^2-X+1$ & \begin{tabular}{c}$x^8-x^6-x^5+x^4+x^3-x^2+1$\end{tabular} & $1.1932\cdots$ \\ N & $X^4+3X^2-X+2$ & \begin{tabular}{c}$x^8-x^6-x^5+2x^4+x^3-x^2+1$\end{tabular} & $1.2461\cdots$ \\ N & $X^4+3X^2+3$ & \begin{tabular}{c}$x^8-x^6+3x^4-x^2+1$\end{tabular} & $1.2406\cdots$ \\ N & $X^4+4X^2-X+1$ & \begin{tabular}{c}$x^8-x^5-x^4+x^3+1$\end{tabular} & $1.2512\cdots$ \\ N & $X^4+4X^2-X+2$ & \begin{tabular}{c}$x^8-x^5+x^3+1$\end{tabular} & $1.2331\cdots$ \\ N & $X^4+4X^2-X+3$ & \begin{tabular}{c}$x^8-x^5+x^4+x^3+1$\end{tabular} & $1.2450\cdots$ \\ Y & $X^4+4X^2-1$ & \begin{tabular}{c}$x^8-3x^4+1$\end{tabular} & $1.2720\cdots$ \\ N & $X^4+4X^2+5$ & \begin{tabular}{c}$x^8+3x^4+1$\end{tabular} & $1.2720\cdots$ \\ N & $X^4+5X^2+7$ & \begin{tabular}{c}$x^8+x^6+3x^4+x^2+1$\end{tabular} & $1.2406\cdots$ \\ \hline \end{tabular} \end{table} \begin{table}[hbtp] \caption{quadratic polynomial of the form (\ref{equation2}), all roots are in $E_2$, identifying $R_1(z)$ with $R_1(-z)$} \label{appendix 11} \small \begin{tabular}{|c|c|c|c|} \hline polynomial $R_1(z)$ & polynomial $x^2R_1(x+\frac{i}{x})$ & \begin{tabular}{c}maximal\\ absolute value\end{tabular} \\ \hline\hline $z^2-(2+2i)z+2i$ & \begin{tabular}{c}$x^4-(2+2i)x^3+4ix^2+(2-2i)x-1$\end{tabular} & $1$ \\ $z^2-(1+i)z-i$ & \begin{tabular}{c}$x^4-(1+i)x^3+ix^2+(1-i)x-1$\end{tabular} & $1$ \\ $z^2+(-1-i)z$ & \begin{tabular}{c}$x^4-(1+i)x^3+2ix^2+(1-i)x-1$\end{tabular} & $1$ \\ $z^2-1-2i$ & \begin{tabular}{c}$x^4-x^2-1$\end{tabular} & $1.2720\cdots$ \\ $z^2-4i$ & \begin{tabular}{c}$x^4-2ix^2-1$\end{tabular} & $1$ \\ $z^2-3i$ & \begin{tabular}{c}$x^4-ix^2-1$\end{tabular} & $1$ \\ $z^2-2i$ & \begin{tabular}{c}$x^4-1$\end{tabular} & $1$ \\ $z^2-i$ & \begin{tabular}{c}$x^4+ix^2-1$\end{tabular} & $1$ \\ $z^2$ & \begin{tabular}{c}$x^4+2ix^2-1$\end{tabular} & $1$ \\ $z^2+1-2i$ & \begin{tabular}{c}$x^4+x^2-1$\end{tabular} & $1.2720\cdots$ \\ \hline \end{tabular} \end{table} \begin{table}[hbtp] \caption{quadratic polynomial of the form (\ref{equation3}), with the roots $s,t\in\mathbb{Q}(\zeta_8)$ satisfy $s\in E_3$, $t\in E_4$ and $(x^2+sx+\zeta_8)(x^2+tx+\zeta_8^3)\in\mathbb{Z}[\sqrt{2}i][x]$, identifying $R_2(z)$ with $R_2(-z)$} \label{appendix 12} \small \begin{tabular}{|c|c|c|c|} \hline polynomial $R_2(z)$ & \begin{tabular}{c}polynomial\\ $(x^2+sx+\zeta_8)(x^2+tx+\zeta_8^3)$\end{tabular} & \begin{tabular}{c}maximal\\ absolute value\end{tabular} \\ \hline\hline $z^2-(2+\sqrt{2}i)z+\sqrt{2}i$ & \begin{tabular}{c}$x^4+(2+\sqrt{2}i)x^3+2\sqrt{2}ix^2+(-2+\sqrt{2}i)x-1$\end{tabular} & $1$ \\ $z^2-\sqrt{2}iz-\sqrt{2}i$ & \begin{tabular}{c}$x^4+\sqrt{2}ix^3+\sqrt{2}ix-1$\end{tabular} & $1$ \\ $z^2$ & \begin{tabular}{c}$x^4+\sqrt{2}ix^2-1$\end{tabular} & $1$ \\ \hline \end{tabular} \end{table} \begin{table}[hbtp] \caption{quadratic polynomial of the form (\ref{equation4}), with the roots $s,t\in\mathbb{Q}(\zeta_{12})$ satisfy $s\in E_5$ and $t\in E_6$ and $(x^2+sx+\zeta_{12})(x^2+tx+\zeta_{12}^5)\in\mathbb{Z}[i][x]$, identifying $R_3(z)$ with $R_3(-z)$} \label{appendix 13} \small \begin{tabular}{|c|c|c|c|} \hline polynomial $R_3(z)$ & \begin{tabular}{c}polynomial\\ $(x^2+sx+\zeta_{12})(x^2+tx+\zeta_{12}^5)$\end{tabular} & \begin{tabular}{c}maximal\\ absolute value\end{tabular} \\ \hline\hline $z^2-(2+i)z+i$ & \begin{tabular}{c}$x^4+(2+i)x^3+2ix^2-(2-i)x-1$\end{tabular} & $1$ 
\\ $z^2-(1+2i)z+i$ & \begin{tabular}{c}$x^4+(1+2i)x^3+2ix^2-(1-2i)x-1$\end{tabular} & $1$ \\ $z^2-(1-i)z-2i$ & \begin{tabular}{c}$x^4+(1-i)x^3-ix^2-(1+i)x-1$\end{tabular} & $1$ \\ $z^2$ & \begin{tabular}{c}$x^4+ix^2-1$\end{tabular} & $1$ \\ \hline \end{tabular} \end{table} \begin{table}[hbtp] \caption{cubic polynomial with constant term $1$, with coefficients in $\mathbb{Z}$, whose roots have modulus less than $1.3$, at least one root has modulus$>1$} \label{appendix 14} \small \begin{tabular}{|c|c|} \hline polynomial $p(x)$ & maximal absolute value \\ \hline\hline $x^3-x^2+1$ & $1.1509\cdots$ \\ $x^3+x+1$ & $1.2106\cdots$ \\ \hline \end{tabular} \end{table} \begin{table}[hbtp] \caption{cubic polynomial with constant term $1$, with coefficients in $\mathcal{O}_{\mathbb{Q}(\sqrt{-D})}$, whose roots have modulus less than $1.3$, at least one root has modulus$>1$, identifying $p(x)$ with $\overline{p(x)}$} \label{appendix 15} \small \begin{tabular}{|c|c|} \hline polynomial $p(x)$ & maximal absolute value \\ \hline\hline $x^3+\left(\frac{1}{2}-\frac{\sqrt{3}}{2}i\right)x^2+1$ & $1.1509\cdots$ \\ $x^3-\left(\frac{1}{2}+\frac{\sqrt{3}}{2}i\right)x+1$ & $1.2106\cdots$ \\ $x^3+x^2+(1-i)x+1$ & $1.2328\cdots$ \\ $x^3-ix^2-x+1$ & $1.2878\cdots$ \\ $x^3-ix+1$ & $1.2966\cdots$ \\ $x^3-(1+i)x^2+ix+1$ & $1.2969\cdots$ \\ \hline \end{tabular} \end{table} \begin{table}[hbtp] \caption{cubic polynomial with constant term $\frac{1}{2}+\frac{\sqrt{3}}{2}i$, with coefficients in $\mathcal{O}_{\mathbb{Q}(\sqrt{-3})}$, whose roots have modulus less than $1.3$, at least one root has modulus$>1$} \label{appendix 16} \small \begin{tabular}{|c|c|} \hline polynomial $p(x)$ & maximal absolute value \\ \hline\hline $x^3-\left(\frac{1}{2}+\frac{\sqrt{3}}{2}i\right)x^2+\left(-\frac{1}{2}+\frac{\sqrt{3}}{2}i\right)x+\left(\frac{1}{2}+\frac{\sqrt{3}}{2}i\right)$ & $1.2167\cdots$ \\ $x^3-\left(\frac{1}{2}-\frac{\sqrt{3}}{2}i\right)x^2+\left(-\frac{1}{2}-\frac{\sqrt{3}}{2}i\right)x+\left(\frac{1}{2}+\frac{\sqrt{3}}{2}i\right)$ & $1.2167\cdots$ \\ $x^3+x^2+x+\left(\frac{1}{2}+\frac{\sqrt{3}}{2}i\right)$ & $1.2167\cdots$ \\ $x^3-x+\left(\frac{1}{2}+\frac{\sqrt{3}}{2}i\right)$ & $1.2746\cdots$ \\ $x^3+\left(\frac{1}{2}-\frac{\sqrt{3}}{2}i\right)x+\left(\frac{1}{2}+\frac{\sqrt{3}}{2}i\right)$ & $1.2746\cdots$ \\ $x^3+\left(\frac{1}{2}+\frac{\sqrt{3}}{2}i\right)x+\left(\frac{1}{2}+\frac{\sqrt{3}}{2}i\right)$ & $1.2746\cdots$ \\ \hline \end{tabular} \end{table} \begin{table}[hbtp] \caption{totally real, quintic algebraic integer, which is larger than $2$, whose conjugates have modulus$<2.1$} \label{appendix 17} \small \begin{tabular}{|c|c|} \hline algebraic integer & minimal polynomial \\ \hline\hline $2.0264\cdots$ & $x^5+x^4-5x^3-5x^2+4x+3$\\ $2.0384\cdots$ & $x^5-5x^3+4x-1$\\ $2.0431\cdots$ & $x^5-5x^3-x^2+5x+1$\\ $2.0541\cdots$ & $x^5-6x^3+8x-1$\\ $2.0665\cdots$ & $x^5-6x^3-x^2+8x+3$\\ $2.0850\cdots$ & $x^5-x^4-5x^3+4x^2+5x-3$\\ $2.0911\cdots$ & $x^5-x^4-5x^3+4x^2+4x-1$\\ \hline \end{tabular} \end{table} \section{the proof of the fact in Remark \ref{Remark}}\label{appendix b} In Remark \ref{Remark}, we presented that for $0<t<2$, the equation \begin{align*} \frac{2\mathrm{sin}(x)\mathrm{cos}(2x)}{1+2\mathrm{sin}(x)\mathrm{sin}(2x)}=\mathrm{tan}(tx) \end{align*} has at most one root in the interval $\left(0, \frac{\pi}{4}\right)$. We prove this fact in this section.\par Denote \begin{align*} f(x)=1+2\mathrm{sin}(x)\mathrm{sin}(2x),\ g(x)=2\mathrm{sin}(x)\mathrm{cos}(2x). 
\end{align*}
Now
\begin{align*}
\frac{g(x)}{f(x)}>0\text{ and }\mathrm{tan}(tx)>0
\end{align*}
for $0<x<\frac{\pi}{4}$. In the $xy$ coordinate plane, the curve $C:y=\frac{g(x)}{f(x)}$ and the curve $D_t:y=\mathrm{tan}(tx)$ both pass through $(x,y)=(0,0)$. Assume that $C$ and $D_t$ intersect in at least one point with $0<x<\frac{\pi}{4}$. Let $(x,y)=(x_1,y_1)$ be the rightmost intersection point of $C$ and $D_t$ with $0<x<\frac{\pi}{4}$, and denote by $L$ the line passing through $(0,0)$ and $(x_1,y_1)$. First,
\begin{align*}
\left(\frac{d}{dx}\right)^2(\mathrm{tan}(tx))=\frac{2t^2\mathrm{sin}(tx)\mathrm{cos}(tx)}{\mathrm{cos}^4(tx)}>0
\end{align*}
for $0<x<\frac{\pi}{4}$, so $\mathrm{tan}(tx)$ is convex on the interval $\left(0, \frac{\pi}{4}\right)$. Thus $L$ lies above $D_t$ for $0<x<x_1$ and below $D_t$ for $x_1<x<\frac{\pi}{4}$. Next we show that $L$ intersects $C$ only at $(x_1,y_1)$, and transversally. A direct calculation gives
\begin{align*}
\left(\frac{d}{dx}\right)(g'(x)f(x)-f'(x)g(x))&=g''(x)f(x)-f''(x)g(x)\\ &=-10\mathrm{sin}(x)\mathrm{cos}(2x)-8\mathrm{cos}(x)\mathrm{sin}(2x)\\ &\quad\quad-16\mathrm{sin}(x)\mathrm{cos}(x)\mathrm{sin}^2(2x)-16\mathrm{sin}(x)\mathrm{cos}(x)\mathrm{cos}^2(2x)\\ &<0
\end{align*}
for $0<x<\frac{\pi}{4}$, so
\begin{align*}
g'(x)f(x)-f'(x)g(x)
\end{align*}
is strictly decreasing on the interval $\left(0, \frac{\pi}{4}\right)$. Moreover, since
\begin{align*}
f'(x)=2\mathrm{cos}(x)\mathrm{sin}(2x)+4\mathrm{sin}(x)\mathrm{cos}(2x)>0,\ f(x)>0
\end{align*}
for $0<x<\frac{\pi}{4}$, $f(x)$ is strictly increasing on the interval $\left(0, \frac{\pi}{4}\right)$, and therefore
\begin{align*}
\left(\frac{d}{dx}\right)\left(\frac{g(x)}{f(x)}\right)=\frac{g'(x)f(x)-f'(x)g(x)}{f(x)^2}
\end{align*}
is strictly decreasing on $\left\{x\in\left(0, \frac{\pi}{4}\right)\mid g'(x)f(x)-f'(x)g(x)>0\right\}$.\par
Thus, since $\frac{y_1}{x_1}>0$, the equation
\begin{align*}
\left(\frac{d}{dx}\right)\left(\frac{g(x)}{f(x)}\right)=\frac{y_1}{x_1}
\end{align*}
has at most one root for $0<x<\frac{\pi}{4}$. By the mean value theorem, $L$ and $C$ intersect only at $(x,y)=(x_1,y_1)$ for $0<x<\frac{\pi}{4}$, and the intersection is transversal. Also, since $g'(0)f(0)-f'(0)g(0)>0$,
\begin{align*}
\left(\frac{d^2}{{dx}^2}\right)\left(\frac{g(x)}{f(x)}\right)<0
\end{align*}
on $(0,\epsilon)$ for some $\epsilon>0$, so $C:y=\frac{g(x)}{f(x)}$ is concave on $(0,\epsilon)$. Thus $C$ lies above $L$ for $0<x<x_1$ and below $L$ for $x_1<x<\frac{\pi}{4}$.\par
Therefore $C$ and $D_t$ intersect only at $(x_1,y_1)$, and the proof is concluded.
\renewcommand{References}{References}
\end{document}
\begin{document} \begin{frontmatter} \title{A non-cooperative Pareto-efficient solution to the one-shot Prisoner's Dilemma} \author{Haoyang Wu\corauthref{cor}} \corauth[cor]{Wan-Dou-Miao Research Lab, Suite 1002, 790 WuYi Road, Shanghai, 200051, China.} \ead{[email protected]} \ead{Tel: 86-18621753457} \begin{abstract} The Prisoner's Dilemma is a simple model that captures the essential contradiction between individual rationality and global rationality. Although the one-shot Prisoner's Dilemma is usually viewed simple, in this paper we will categorize it into five different types. For the type-4 Prisoner's Dilemma game, we will propose a self-enforcing algorithmic model to help non-cooperative agents obtain Pareto-efficient payoffs. The algorithmic model is based on an algorithm using complex numbers and can work in macro applications. \end{abstract} \begin{keyword} Prisoner's Dilemma; Non-cooperative games. \end{keyword} \end{frontmatter} \section{Introduction} The Prisoner's Dilemma (PD) is perhaps the most famous model in the field of game theory. Roughly speaking, there are two sorts of PD: one-shot PD and iterated PD. Nowadays a lot of studies on PD are focused on the latter case. For example, Axelrod \cite{Axelrod1981} investigated the evolution of cooperative behavior in well-mixed populations of selfish agents by using PD as a paradigm. Nowak and May \cite{Nowak1992} induced spatial structure in PD, i.e., agents were restricted to interact with his immediate neighbors. Santos and Pacheco \cite{Santos2005} found that when agents interacted following scale-free networks, cooperation would become a dominating trait throughout the entire range of parameters of PD. Perc and Szolnoki \cite{Perc2008} proposed that social diversity could induce cooperation as the dominating trait throughout the entire range of parameters of PD. Compared with the iterated PD, the one-shot PD is usually viewed simple. In the original version of one-shot PD, two prisoners are arrested by a policeman. Each prisoner must independently choose a strategy between ``Confessing'' (denoted as strategy ``\emph{Defect}'') and ``Not confessing'' (denoted as strategy ``\emph{Cooperate}''). The payoff matrix of prisoners is shown in Table 1. As long as two agents are rational, the unique Nash equilibrium shall be (\emph{Defect}, \emph{Defect}), which results in a Pareto-inefficient payoff $(P,P)$. That is the dilemma. \emph{Table 1: The payoff matrix of PD, where }$T>R>P>S$, and $R>(T+S)/2$. \emph{The first entry in the parenthesis denotes the payoff of agent 1 and the second entry stands for the payoff of agent 2}.\\ \begin{tabular}{|c|c|c|} \hline \backslashbox{agent 1}{agent 2} & {\emph{Cooperate}}&{\emph{Defect}} \\\hline \emph{Cooperate} & (R, R) & (S, T) \\ \emph{Defect} & (T, S) & (P, P) \\ \hline \end{tabular} In 1999, Eisert \emph{et al} \cite{Eisert1999} proposed a quantum model of one-shot PD (denoted as EWL model). The EWL model showed ``quantum advantages'' as a result of a novel quantum Nash equilibrium, which help agents reach the Pareto-efficient payoff $(R,R)$. Hence, the agents escape the dilemma. In 2002, Du \emph{et al} \cite{Du2002} gave an experiment to carry out the EWL model. So far, there are some criticisms on EWL model: 1) It is a new game which has new rules and thus has no implications on the original one-shot PD \cite{Enk2002}. 
2) The quantum state serves as a binding contract which lets the players choose one of the two possible moves (\emph{Cooperate} or \emph{Defect}) of the original game.
3) In the full three-parameter strategy space, there is no such quantum Nash equilibrium \cite{BH2001} \cite{Flitney2007}.
Besides these criticisms, here we add another one: in the EWL model, the arbitrator is required to perform quantum measurements to read out the messages of agents. This requirement is unreasonable for common macro disciplines such as politics and economics, because the arbitrator should play a neutral role in the game: his role should be limited to receiving the agents' strategies and assigning payoffs to the agents. Put differently, if the arbitrator is willing to work with additional quantum equipment which helps agents obtain the Pareto-efficient payoffs $(R, R)$, then \emph{why does he not directly assign the Pareto-efficient payoffs to the agents}?
Motivated by these criticisms, this paper aims to investigate whether a Pareto-efficient outcome can be reached by non-cooperative agents in macro applications. Note that a non-cooperative game is one in which players make decisions independently. Thus, while they may be able to cooperate, any cooperation must be self-enforcing \cite{wiki}.
The rest of this paper is organized as follows: in Section 2 we will propose an algorithmic model, where the arbitrator does not have to work with any additional quantum equipment (Note: here we do not aim to solve the first three criticisms of the EWL model, because these criticisms are irrelevant to the algorithmic model). In Section 3, we will categorize the one-shot PD into five different types, and claim that the agents can self-enforcingly reach the Pareto-efficient outcome in the case of type-4 PD by using the algorithmic model. Section 4 gives some discussions. The last section draws conclusions.
\section{An algorithmic model}
As we have pointed out above, for macro applications it is unreasonable to require that the arbitrator work with additional quantum equipment. In what follows, we will first amend the EWL model such that the arbitrator works in the same way as he does in classical environments, and then propose an algorithmic version of the amended EWL model.
\subsection{The amended EWL model}
Let the set of two agents be $N=\{1, 2\}$. Following formula (4) in Ref. \cite{Flitney2007}, two-parameter quantum strategies are drawn from the set:
\begin{equation*}
\hat{\omega}(\theta,\phi)\equiv \begin{bmatrix} e^{i\phi}\cos(\theta/2) & i\sin(\theta/2)\\ i\sin(\theta/2) & e^{-i\phi}\cos(\theta/2) \end{bmatrix},
\end{equation*}
$\hat{\Omega}\equiv\{\hat{\omega}(\theta,\phi):\theta\in[0,\pi],\phi\in[0,\pi/2]\}$, $\hat{J}\equiv\cos(\gamma/2)\hat{I}\otimes \hat{I}+i\sin(\gamma/2)\hat{\sigma_{x}}\otimes \hat{\sigma_{x}}$ (where $\gamma$ is an entanglement measure, $\hat{\sigma_{x}}$ is the Pauli matrix, $\otimes$ is the tensor product), $\hat{I}\equiv\hat{\omega}(0,0)$, $\hat{D}\equiv\hat{\omega}(\pi,\pi/2)$, $\hat{C}\equiv\hat{\omega}(0,\pi/2)$. Without loss of generality, we assume:\\
1) Each agent $j\in N$ has a quantum coin (qubit), a classical card and a channel connected to the arbitrator. The basis vectors $|C\rangle=[1,0]^{T}$, $|D\rangle=[0,1]^{T}$ of a quantum coin denote head up and tail up respectively.\\
2) Each agent $j\in N$ independently performs a local unitary operation on his/her own quantum coin. The set of agent $j$'s operations is $\hat{\Omega}_{j}=\hat{\Omega}$.
A strategic operation chosen by agent $j$ is denoted as $\hat{\omega}_{j}\in\hat{\Omega}_{j}$. If $\hat{\omega}_{j}=\hat{I}$, then $\hat{\omega}_{j}(|C\rangle)=|C\rangle$, $\hat{\omega}_{j}(|D\rangle)=|D\rangle$; if $\hat{\omega}_{j}=\hat{D}$, then $\hat{\omega}_{j}(|C\rangle)=|D\rangle$, $\hat{\omega}_{j}(|D\rangle)=|C\rangle$. $\hat{I}$ denotes ``\emph{Not flip}'', $\hat{D}$ denotes ``\emph{Flip}''. \\
3) The two sides of a card are denoted as Side 0 and Side 1. The message written on Side 0 (or Side 1) of card $j$ is denoted as $card(j,0)$ (or $card(j,1)$). $card(j,0)$ represents ``\emph{Cooperate}'', and $card(j,1)$ represents ``\emph{Defect}''.\\
4) There is a device that can measure the state of the two quantum coins and send messages to the arbitrator.
Fig. 1 shows the amended version of the EWL model (denoted as the A-EWL model). Its working steps are defined as follows: \\
Step 1: The state of each quantum coin is set as $|C\rangle$. The initial state of the two quantum coins is $|\psi_{0}\rangle=|CC\rangle$.\\
Step 2: Let the two quantum coins be entangled by $\hat{J}$. $|\psi_{1}\rangle=\hat{J}|CC\rangle$.\\
Step 3: Each agent $j$ independently performs a local unitary operation $\hat{\omega}_{j}$ on his own quantum coin. $|\psi_{2}\rangle=[\hat{\omega}_{1}\otimes\hat{\omega}_{2}]\hat{J}|CC\rangle$.\\
Step 4: Let the two quantum coins be disentangled by $\hat{J}^{+}$. $|\psi_{3}\rangle=\hat{J}^{+}[\hat{\omega}_{1}\otimes\hat{\omega}_{2}] \hat{J}|CC\rangle$.\\
Step 5: The device measures the state of the two quantum coins and sends $card(j,0)$ (or $card(j,1)$) as the message $m_{j}$ to the arbitrator if the collapsed state of quantum coin $j$ is $|C\rangle$ (or $|D\rangle$).\\
Step 6: The arbitrator receives the overall message $m=(m_{1}, m_{2})$ and assigns payoffs to the two agents according to Table 1. END.
In the A-EWL model, the assumed device performs quantum measurements and sends messages to the arbitrator on behalf of the agents. Thus, the arbitrator need not work with additional quantum equipment as the EWL model requires, i.e., the arbitrator works in the same way as before. It should be emphasized that the A-EWL model does not aim to solve the criticisms of the EWL model specified in the Introduction. We propose the A-EWL model only for the following simulation process, which is a key part of the algorithmic model. Since quantum operations can be simulated classically by using complex numbers, the A-EWL model can also be simulated. In what follows we will give matrix representations of quantum states and then propose an algorithmic version of the A-EWL model.
\subsection{Matrix representations of quantum states}
In quantum mechanics, a quantum state can be described as a vector. For a two-level system, there are two basis vectors: $[1,0]^{T}$ and $[0,1]^{T}$. In the beginning, we define:
\begin{align*}
|CC\rangle=[1,0,0,0]^{T}, |CD\rangle=[0,1,0,0]^{T}, |DC\rangle=[0,0,1,0]^{T}, |DD\rangle=[0,0,0,1]^{T}.
\end{align*}
\begin{align*}
\hat{J}=\begin{bmatrix} \cos(\gamma/2) & 0 & 0 & i\sin(\gamma/2)\\ 0 & \cos(\gamma/2) & i\sin(\gamma/2) & 0 \\ 0 & i\sin(\gamma/2)& \cos(\gamma/2) & 0 \\ i\sin(\gamma/2) & 0 & 0 & \cos(\gamma/2) \end{bmatrix}, \;\gamma\in[0,\pi/2].
\end{align*} For $\gamma=\pi/2$, \begin{align*} \hat{J}_{\pi/2}=\frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 0& 0& i\\ 0 & 1& i& 0\\ 0 & i& 1& 0\\ i & 0& 0& 1 \end{bmatrix}, \hat{J}^{+}_{\pi/2}=\frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 0& 0& -i\\ 0 & 1& -i& 0\\ 0 & -i& 1& 0\\ -i & 0& 0& 1 \end{bmatrix}, \end{align*} where $\hat{J}^{+}_{\pi/2}$ is the conjugate of $\hat{J}_{\pi/2}$. \textbf{Definition 1}: $\psi_{1}\equiv\hat{J}|CC\rangle=\begin{bmatrix} \cos(\gamma/2)\\ 0\\ 0\\ i\sin(\gamma/2) \end{bmatrix}$.\\ Since only two values in $\psi_{1}$ are non-zero, we only need to calculate the leftmost and rightmost column of $\hat{\omega}_{1}\otimes\hat{\omega}_{2}$ to derive $\psi_{2}=[\hat{\omega}_{1}\otimes\hat{\omega}_{2}]\psi_{1}$. \textbf{Definition 2}: $\psi_{3}\equiv \hat{J}^{+}\psi_{2}$. Suppose $\psi_{3}=[\eta_{1}, \cdots, \eta_{4}]^{T}$, let $\Delta=[|\eta_{1}|^{2}, \cdots, |\eta_{4}|^{2}]$. It can be easily checked that $\hat{J}$, $\hat{\omega}_{1}$, $\hat{\omega}_{2}$ and $\hat{J}^{+}$ are all unitary matrices. Hence, $|\psi_{3}|^{2}=1$. Thus, $\Delta$ can be viewed as a probability distribution over the states $\{|CC\rangle, |CD\rangle, |DC\rangle, |DD\rangle\}$. \subsection{An algorithmic model} Based on the matrix representations of quantum states, here we will propose an algorithmic model that simulates the A-EWL model. Since the entanglement measurement $\gamma$ is a control factor, it can be simply set as its maximum $\pi/2$. The input and output of the algorithmic model are shown in Fig. 2. A \emph{Matlab} program is shown in Fig. 3(a)-(d). \textbf{Input}:\\ 1) $\xi_{j}, \phi_{j}$, $j=1, 2$: the parameters of agent $j$'s local operation $\hat{\omega}_{j}$, $\xi_{j}\in[0,\pi],\phi_{j}\in[0,\pi/2]$.\\ 2) $card(j,0),card(j,1)$, $j=1,2$: the messages written on the two sides of agent $j$'s card. $card(j,0)$ and $card(j,1)$ represent \emph{Cooperate} and \emph{Defect} respectively. \textbf{Output}:\\ $m_{j}\in$ \{$card(j,0),card(j,1)$\}, $j=1, 2$: agent $j$'s message that is sent to the arbitrator. \textbf{Procedures of the algorithmic model}:\\ Step 1: Reading two parameters $\xi_{j}$ and $\phi_{j}$ from each agent $j$ (See Fig. 3(a)).\\ Step 2: Computing the leftmost and rightmost columns of $\hat{\omega}_{1}\otimes\hat{\omega}_{2}$ (See Fig. 3(b)).\\ Step 3: Computing $\psi_{2}=[\hat{\omega}_{1}\otimes\hat{\omega}_{2}]\hat{J}_{\pi/2}|CC\rangle$, $\psi_{3}=\hat{J}^{+}_{\pi/2}\psi_{2}$, and the probability distribution $\Delta$ (See Fig. 3(c)).\\ Step 4: Randomly choosing a state from the set of all four possible states $\{|CC\rangle, |CD\rangle, |DC\rangle, |DD\rangle\}$ according to the probability distribution $\Delta$.\\ Step 5: For each $j\in I$, the computer sends $card(j,0)$ (or $card(j,1)$) as message $m_{j}$ to the arbitrator through channel $j$ if the $j$-th element of the chosen state is $|C\rangle$ (or $|D\rangle$) (See Fig. 3(d)). \section{Five types of one-shot PD} Since its beginning, PD has been generalized to many disciplines such as politics, economics, sociology, biology and so on. Despite these widespread applications, people seldom care how the payoffs of agents are determined. For example, Axelrod \cite{Axelrod1981} used the word ``yield'' to describe how the agents obtained the payoffs. Nowak and May \cite{Nowak1992} used the word ``get'', and Santos and Pacheco \cite{Santos2005} used the word ``receive'' respectively. One may think that such question looks trivial at first sight. 
However, as we will show in this section, there exists an interesting story behind this question. In what follows, we will categorize the one-shot PD into five different types. \textbf{Type-1 PD}:\\ 1) There are two agents and no arbitrator in the game. \\ 2) The strategies of agents are actions performed by agents. The agents' payoffs are determined by the outcomes of these actions and satisfy Table 1. For example, let us neglect the United Nation and consider two countries (e.g., US and Russia) confronted the problem of nuclear disarmament. The strategy \emph{Cooperate} means ``Obeying disarmament'', and \emph{Defect} means ``Refusing disarmament''. If the payoff matrix confronted by the two countries satisfies Table 1, the nuclear disarmament game is a type-1 PD. \textbf{Type-2 PD}:\\ 1) There are two agents and an arbitrator in the game.\\ 2) The strategies of agents are actions performed by agents. The arbitrator observes the outcomes of actions and assign payoffs to the agents according to Table 1. For example, let us consider a taxi game. Suppose there are two taxi drivers and a manager. Two drivers drive a car in turn, one in day and the other in night. The car's status will be very good, ok or common if the number of drivers who maintain the car is two, one or zero respectively. The manager observes the car's status and assigns rewards $R_{2}$, $R_{1}$, $R_{0}$ to each driver respectively, where $R_{2}>R_{1}>R_{0}$. The whole cost of maintenance is $c$. Let the strategy \emph{Cooperate} denote ``Maintain'', and \emph{Defect} denote ``Not maintain''. The payoff matrix can be represented as Table 2. If Table 2 satisfies the conditions in Table 1, the taxi game is a type-2 PD. \emph{Table 2: The payoff matrix of type-2 PD}.\\ \begin{tabular}{|c|c|c|} \hline \backslashbox{agent 1}{agent 2} & {\emph{Cooperate}}&{\emph{Defect}} \\\hline \emph{Cooperate} & ($R_{2}-c/2, R_{2}-c/2$) & ($R_{1}-c, R_{1}$) \\ \emph{Defect} & ($R_{1}, R_{1}-c$) & ($R_{0}, R_{0}$) \\ \hline \end{tabular} \textbf{Type-3 PD}:\\ 1) There are two agents and an arbitrator in the game.\\ 2) The strategy of each agent is not an action, but a message that can be sent to the arbitrator through a channel. The arbitrator receives two messages and assign payoffs to the agents according to Table 1.\\ 3) Two agents cannot communicate with each other. For example, suppose two agents are arrested separately and required to report their crime information to the arbitrator through two channels independently. If the arbitrator assigns payoffs to agents according to Table 1, this game is a type-3 PD. \textbf{Type-4 PD}:\\ Conditions 1-2 are the same as those in type-3 PD. \\ 3) Two agents can communicate with each other. \\ 4) Before sending messages to the arbitrator, two agents can construct the algorithmic model specified in Fig. 2. Each agent $j$ can observe whether the other agent participates the algorithmic model or not: whenever the other agent takes back his channel, agent $j$ will do so and sends his message $m_{j}$ to the arbitrator directly. \emph{Remark 1}: At first sight, the conditions of type-4 PD is complicated. However, these conditions are not restrictive when the arbitrator communicate with agents indirectly and cannot separate them. For example, suppose the arbitrator and agents are connected by Internet, then all conditions of type-4 PD can be satisfied in principle. 
The type-4 PD works in the following way:\\ \emph{Stage 1: (Actions of two agents)} For each agent $j\in N$, he faces two strategies: \\ $\bullet$ $S(j,0)$: Participate the algorithmic model, i.e., leave his channel to the computer, and submit $\xi_{j}, \phi_{j}, card(j,0), card(j,1)$ to the computer;\\ $\bullet$ $S(j,1)$: Not participate the algorithmic model, i.e., take back his channel, and submit $m_{j}$ to the arbitrator directly.\\ According to condition 4, the algorithmic model is triggered if and only if both two agents participate it.\\ \emph{Stage 2: (Actions of the arbitrator)} The arbitrator receives two messages and assigns payoffs to agents according to Table 1. In type-4 PD, from the viewpoints of the arbitrator, he acts in the same way as before, i.e., nothing is changed. However, the payoff matrix confronted by two agents is now changed to Table 3. For each entry of Table 3, we give the corresponding explanation as follows: \emph{Table 3: The payoff matrix of two agents by constructing the algorithmic model, where} $R,P$ \emph{are defined in Table 1}, $R>P$.\\ \begin{tabular}{|c|c|c|} \hline \backslashbox{agent 1}{agent 2} & {$S(2,0)$}&{$S(2,1)$} \\\hline $S(1,0)$ & (R, R) & (P, P) \\ $S(1,1)$ & (P, P) & (P, P) \\ \hline \end{tabular} 1) $(S(1,0),S(2,0))$: This strategy profile means two agents both participate the algorithmic model and submit parameters to the computer. According to Ref. \cite{Eisert1999}, for each agent $j\in N$, his dominant parameters are $\xi_{j}=0$ and $\phi_{j}=\pi/2$, which result in a Pareto-efficient payoff $(R,R)$. \\ 2) $(S(1,0),S(2,1))$: This strategy profile means agent 1 participates the algorithmic model, but agent 2 takes back his channel and submits a message to the arbitrator directly. Since agent 1 can observe agent 2's action, in the end, both agents will take back their channels and submit messages to the arbitrator directly. Obviously, the dominant message of each agent $j$ is $card(j,1)$, and the arbitrator will assign the Pareto-inefficient payoff $(P,P)$ to agents. \\ 3) $(S(1,1),S(2,0))$: This strategy profile is similar to the above case. The arbitrator will assign $(P, P)$ to two agents.\\ 4) $(S(1,1),S(2,1))$: This strategy profile means two agents both take back their channels and send messages to the arbitrator directly. This case is similar to the case 2. The arbitrator will assign $(P, P)$ to two agents. From Table 3, it can be seen that $(S(1,0),S(2,0))$ and $(S(1,1),S(2,1))$ are two Nash equilibria, and the former is Pareto-efficient. As specified by Telser (Page 28, Line 2, \cite{Telser1980}), ``\emph{A party to a self-enforcing agreement calculates whether his gain from violating the agreement is greater or less than the loss of future net benefits that he would incur as a result of detection of his violation and the consequent termination of the agreement by the other party.}'' Since two channels have been controlled by the computer in Stage 1, in the end $(S(1,0),S(2,0))$ is a self-enforcing Nash equilibrium and the Pareto-efficient payoff $(R,R)$ is the unique Nash equilibrium outcome. In this sense, the two agents escape the dilemma. \textbf{Type-5 PD}:\\ Conditions 1-3 are the same as those in type-4 PD. \\ 4) The last condition of type-4 PD does not hold.\\ For this case, although the two agents can communicate before moving and agree that collaboration is good for each agent, they will definitely choose (\emph{Defect}, \emph{Defect}) as if they are separated. Thus, the agents cannot escape the dilemma. 
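Before turning to the discussion, we illustrate the first entry of Table 3 numerically. The following short \emph{Python} sketch is an illustrative re-implementation of Steps 2--4 of the algorithmic model with $\gamma=\pi/2$; it is not the \emph{Matlab} program of Fig. 3, and the function and variable names are ours. It shows that the profile $(\hat{I},\hat{I})=((0,0),(0,0))$ yields $|CC\rangle$, that the profile $(\hat{D},\hat{D})=((\pi,\pi/2),(\pi,\pi/2))$ yields $|DD\rangle$, and that the dominant parameters $\xi_{j}=0$, $\phi_{j}=\pi/2$ of Ref. \cite{Eisert1999} yield $|CC\rangle$ with probability one, so that the arbitrator assigns the Pareto-efficient payoff $(R,R)$.
\begin{verbatim}
import numpy as np

def omega(xi, phi):
    # Two-parameter operation \hat{omega}(xi, phi) of Section 2.1
    return np.array([[np.exp(1j*phi)*np.cos(xi/2), 1j*np.sin(xi/2)],
                     [1j*np.sin(xi/2),             np.exp(-1j*phi)*np.cos(xi/2)]])

# Maximally entangling operator J_{pi/2} and its conjugate J^+
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
J  = (np.kron(np.eye(2), np.eye(2)) + 1j*np.kron(sx, sx)) / np.sqrt(2)
Jd = J.conj().T

def delta(xi1, phi1, xi2, phi2):
    # Steps 2-4: psi3 = J^+ [omega_1 x omega_2] J |CC>, Delta = |psi3|^2
    CC = np.array([1, 0, 0, 0], dtype=complex)
    psi3 = Jd @ np.kron(omega(xi1, phi1), omega(xi2, phi2)) @ (J @ CC)
    return np.round(np.abs(psi3)**2, 6)   # distribution over {|CC>,|CD>,|DC>,|DD>}

print(delta(0, 0, 0, 0))                       # [1 0 0 0]: outcome CC, payoff (R, R)
print(delta(np.pi, np.pi/2, np.pi, np.pi/2))   # [0 0 0 1]: outcome DD, payoff (P, P)
print(delta(0, np.pi/2, 0, np.pi/2))           # [1 0 0 0]: outcome CC, payoff (R, R)
\end{verbatim}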
\section{Discussions} The algorithmic model revises common understanding on the one-shot PD. Here we will discuss some possible doubts about it. \emph{Q1}: The type-4 PD seems to be a cooperative game because in condition 4, the algorithmic model constructed by two agents acts as a correlation between agents. \emph{A1}: From the viewpoints of agents, the game is different from the original one-shot PD, since the payoff matrix confronted by the two agents has been changed from Table 1 to Table 3. But from the viewpoints of the arbitrator, nothing is changed. Thus, the so-called correlation between two agents is indeed \emph{unobservable} to the arbitrator. Put differently, the arbitrator cannot prevent agents from constructing the algorithmic model.\\ On the other hand, since each agent can freely choose not to participate the algorithmic model and send a message to the arbitrator directly in Stage 1, the algorithmic model is self-enforcing and thus still a non-cooperative game. \emph{Q2}: After the algorithmic model is triggered, can it simply send $(card(1,0)$, $card(2,0))$ to the arbitrator instead of running Steps 1-5? \emph{A2}: The algorithmic model enlarges each agent's strategy space from the original strategy space \emph{\{Cooperate, Defect\}} to a two-dimensional strategy space $[0,\pi]\times[0,\pi/2]$, and generates the Pareto-efficient payoff $(R,R)$ in Nash equilibrium. The enlarged strategy space includes the original strategy space of one-shot PD: the strategy (\emph{Cooperate, Cooperate}), (\emph{Cooperate, Defect}), (\emph{Defect, Cooperate}), (\emph{Defect, Defect}) in the original PD correspond to the strategy $((0, 0), (0, 0))$, $((0, 0), (\pi, \pi/2))$, $((\pi, \pi/2), (0, 0))$, $((\pi, \pi/2), (\pi, \pi/2))$ in the algorithmic model respectively, since $\hat{I}=\hat{\omega}(0,0)$, $\hat{D}= \hat{\omega}(\pi,\pi/2)$.\\ However, the idea in this question restricts each agent's strategy space from the original strategy space \emph{\{Cooperate, Defect\}} to a single strategy \emph{Cooperate}. In this sense, two agents are required to sign a binding contract to do so. This is beyond the range of non-cooperative game. \emph{Remark 2}: The algorithmic model is not suitable for type-1 and type-2 PD, because the computer cannot perform actions on behalf of agents. The algorithmic model is not applicable for type-3 PD either because two agents are separated, thereby the algorithmic model cannot be constructed. For the case of type-5 PD, the algorithmic model is not applicable because condition 4 in type-4 PD is vital and indispensable. \section{Conclusion} In this paper, we categorize the well-known one-shot PD into five types and propose an algorithmic model to help two non-cooperative agents self-enforcingly escape a special type of PD, i.e., the type-4 PD. The type-4 PD is justified when the arbitrator communicate with the agents indirectly through some channels, and each agent's strategy is not an action, but a message that can be sent to the arbitrator. With the rapid development of Internet, more and more type-4 PD games will be seen. One point is important for the novel result: Usually people may think the two payoff matrices confronted by agents and the arbitrator are the same (i.e., Table 1). However we argue that for the case of type-4 PD, the two payoff matrices can be different: The arbitrator still faces Table 1, but the agents can self-enforcingly change their payoff matrix to Table 3 by virtue of the algorithmic model, which leads to a Pareto-efficient payoff. 
\section*{Acknowledgments} The author is very grateful to Ms. Fang Chen, Hanyue Wu (\emph{Apple}), Hanxing Wu (\emph{Lily}) and Hanchen Wu (\emph{Cindy}) for their great support. \end{document}
\begin{document}
\title{The G-convex Functions Based on the Nonlinear Expectations Defined by G-BSDEs$^*$}
\footnote[0]{${}^{*}$This project was sponsored by NSFC (11301068), NSFC (11171062), NSFC (11371362) and the Fundamental Research Funds for the Central Universities No. 2232014D3-08.}
\author[K. He]{Kun He }
\date{}
\keywords{}
\maketitle
\begin{center} {\footnotesize {\it hekun\symbol{64}dhu.edu.cn\\ Department of Mathematics\\ Donghua University\\ 2999 North Renmin Rd., Songjiang\\ Shanghai 201620, P.R. China }} \end{center}
\begin{abstract}
In this paper, generalizing the definition of $G$-convex functions introduced by Peng \cite{Peng2010} during the construction of $G$-expectations, together with its related properties, we define a class of $G$-convex functions based on backward stochastic differential equations driven by $G$-Brownian motion (G-BSDEs).
\end{abstract}
{\bf Key words:} G-expectations, G-BSDEs, G-convex functions, Nonlinear expectations\\
{\bf AMS 2000 subject classifications: } 60H10, 60H30
\section{Introduction}
Jensen's inequality is an important result in the theory of linear expectations. For nonlinear expectations, Jiang \cite{CKJ1,CKJ2,J} studied a type of $g$-expectation and showed that if these $g$-expectations satisfy the related Jensen's inequality, then the corresponding generator $g$ must be positively homogeneous and subadditive. Later, in 2010, Jia and Peng \cite{JP} introduced the notion of $g$-convexity and gave a necessary and sufficient condition for a $C^2$ function to be $g$-convex.
\par After the construction of G-expectations in Peng's work from 2005 to 2010 \cite{Peng2005,Peng2007a,Peng2008a,Peng2008b,Peng2010}, another series of works \cite{STZ,Song1,Song2} aimed at an open problem: whether a G-martingale $M$ can be decomposed into the sum of a symmetric G-martingale $\bar{M}$ and a decreasing G-martingale $K$; this problem was solved in \cite{PSZ} in 2012. Then Hu, Ji, Peng and Song defined a new type of backward stochastic differential equation driven by G-Brownian motion (G-BSDE) \cite{HJPS1}, proved a related comparison theorem and defined a family of nonlinear expectations through the solutions of G-BSDEs \cite{HJPS2}. Based on this definition and the related comparison theorem for G-BSDEs, He and Hu \cite{HH} proved a representation theorem for these nonlinear expectations and established some equivalent conditions relating the generator to the corresponding nonlinear expectations. In this paper, we study the $G$-convex functions defined in Peng \cite{Peng2010} under the framework of the nonlinear expectations defined by G-BSDEs \cite{HJPS1}.
In Section \ref{Sec:Preliminary}, we recall some fundamental definitions and results about G-expectations and G-BSDEs. In Section \ref{Sec:Main}, we prove our main result, which gives an equivalent condition for a function to be G-convex under the framework of G-BSDEs.
\section{Preliminary}\label{Sec:Preliminary}
Let us recall some notation for the related spaces of random variables, as well as definitions and results in the construction of G-Brownian motions and G-expectations. The reader may refer to \cite{Peng2007a,Peng2008a,Peng2008b,Peng2010,HJPS1}. Throughout the paper, for $x\in\mathbb{R}^d$, we denote $|x|=\sqrt{x\cdot x}$ and $\langle x,x\rangle=x\cdot x$.
\begin{definition}\label{def2.1} Let $\Omega$ be a given set and let $\mathcal{H}$ be a vector lattice of real valued functions defined on $\Omega$, namely $c\in \mathcal{H}$ for each constant $c$ and $|X|\in \mathcal{H}$ if $X\in \mathcal{H}$. $\mathcal{H}$ is considered as the space of random variables. A sublinear expectation $\mathbb{\hat{E}}$ on $\mathcal{H}$ is a functional $\mathbb{\hat {E}}:\mathcal{H}\rightarrow \mathbb{R}$ satisfying the following properties: for all $X,Y\in \mathcal{H}$, we have \item[(a)] Monotonicity: If $X\geq Y$ then $\mathbb{\hat{E}}[X]\geq \mathbb{\hat{E}}[Y]$; \item[(b)] Constant preservation: $\mathbb{\hat{E}}[c]=c$; \item[(c)] Sub-additivity: $\mathbb{\hat{E}}[X+Y]\leq \mathbb{\hat{E} }[X]+\mathbb{\hat{E}}[Y]$; \item[(d)] Positive homogeneity: $\mathbb{\hat{E}}[\lambda X]=\lambda \mathbb{\hat{E}}[X]$ for each $\lambda \geq 0$. $(\Omega,\mathcal{H},\mathbb{\hat{E}})$ is called a sublinear expectation space. \end{definition} \begin{definition} \label{def2.2} Let $X_{1}$ and $X_{2}$ be two $n$-dimensional random vectors defined respectively in sublinear expectation spaces $(\Omega_{1} ,\mathcal{H}_{1},\mathbb{\hat{E}}_{1})$ and $(\Omega_{2},\mathcal{H} _{2},\mathbb{\hat{E}}_{2})$. They are identically distributed, denoted by $X_{1}\overset{d}{=}X_{2}$, if $\mathbb{\hat{E}}_{1}[\varphi(X_{1} )]=\mathbb{\hat{E}}_{2}[\varphi(X_{2})]$, for all$\ \varphi \in C_{b.Lip} (\mathbb{R}^{n})$, where $C_{b.Lip}(\mathbb{R}^{n})$ denotes the space of bounded and Lipschitz functions on $\mathbb{R}^{n}$. \end{definition} \begin{definition} \label{def2.3} In a sublinear expectation space $(\Omega,\mathcal{H} ,\mathbb{\hat{E}})$, a random vector $Y=(Y_{1},\cdot \cdot \cdot,Y_{n})$, $Y_{i}\in \mathcal{H}$, is said to be independent of another random vector $X=(X_{1},\cdot \cdot \cdot,X_{m})$, $X_{i}\in \mathcal{H}$ under $\mathbb{\hat {E}}[\cdot]$, denoted by $Y\bot X$, if for every test function $\varphi \in C_{b.Lip}(\mathbb{R}^{m}\times \mathbb{R}^{n})$ we have $\mathbb{\hat{E} }[\varphi(X,Y)]=\mathbb{\hat{E}}[\mathbb{\hat{E}}[\varphi(x,Y)]_{x=X}]$. \end{definition} \begin{definition} \label{def2.4} ($G$-normal distribution) A $d$-dimensional random vector $X=(X_{1},\cdot \cdot \cdot,X_{d})$ in a sublinear expectation space $(\Omega,\mathcal{H},\mathbb{\hat{E}})$ is called $G$-normally distributed if for each $a,b\geq0$ we have \[ aX+b\bar{X}\overset{d}{=}\sqrt{a^{2}+b^{2}}X, \] where $\bar{X}$ is an independent copy of $X$, i.e., $\bar{X}\overset{d}{=}X$ and $\bar{X}\bot X$. Here the letter $G$ denotes the function \[ G(A):=\frac{1}{2}\mathbb{\hat{E}}[\langle AX,X\rangle]:\mathbb{S} _{d}\rightarrow \mathbb{R}, \] where $\mathbb{S}_{d}$ denotes the collection of $d\times d$ symmetric matrices. \end{definition} Peng \cite{Peng2008b} showed that $X=(X_{1},\cdot \cdot \cdot,X_{d})$ is $G$-normally distributed if and only if for each $\varphi \in C_{b.Lip}(\mathbb{R}^{d})$, $u(t,x):=\mathbb{\hat{E}}[\varphi(x+\sqrt{t}X)]$, $(t,x)\in \lbrack 0,\infty)\times \mathbb{R}^{d}$, is the solution of the following $G$-heat equation: \[ \partial_{t}u-G(D_{x}^{2}u)=0,\ u(0,x)=\varphi(x). 
\] The function $G(\cdot):\mathbb{S}_{d}\rightarrow \mathbb{R}$ is a monotonic, sublinear mapping on $\mathbb{S}_{d}$ and $G(A)=\frac{1}{2}\mathbb{\hat{E} }[\langle AX,X\rangle]\leq \frac{1}{2}|A|\mathbb{\hat{E}}[|X|^{2}]$ implies that there exists a bounded, convex and closed subset $\Gamma \subset \mathbb{S}_{d}^{+}$ such that \[ G(A)=\frac{1}{2}\sup_{\gamma \in \Gamma}\mathrm{tr}[\gamma A], \] where $\mathbb{S}_{d}^{+}$ denotes the collection of non-negative elements in $\mathbb{S}_{d}$. In this paper, we only consider non-degenerate $G$-normal distribution, i.e., there exists some $\underline{\sigma}^{2}>0$ such that $G(A)-G(B)\geq \underline{\sigma}^{2}\mathrm{tr}[A-B]$ for any $A\geq B$. \begin{definition} \label{def2.5} i) Let $\Omega=C_{0}^{d}(\mathbb{R}^{+})$ denote the space of $\mathbb{R}^{d}$-valued continuous functions on $[0,\infty)$ with $\omega _{0}=0$ and let $B_{t}(\omega)=\omega_{t}$ be the canonical process. Set \[ L_{ip}(\Omega):=\{ \varphi(B_{t_{1}},...,B_{t_{n}}):n\geq1,t_{1},...,t_{n} \in \lbrack0,\infty),\varphi \in C_{b.Lip}(\mathbb{R}^{d\times n})\}. \] Let $G:\mathbb{S}_{d}\rightarrow \mathbb{R}$ be a given monotonic and sublinear function. $G$-expectation is a sublinear expectation defined by \[ \mathbb{\hat{E}}[X]=\mathbb{\tilde{E}}[\varphi(\sqrt{t_{1}-t_{0}}\xi_{1} ,\cdot \cdot \cdot,\sqrt{t_{m}-t_{m-1}}\xi_{m})], \] for all $X=\varphi(B_{t_{1}}-B_{t_{0}},B_{t_{2}}-B_{t_{1}},\cdot \cdot \cdot,B_{t_{m}}-B_{t_{m-1}})$, where $\xi_{1},\cdot \cdot \cdot,\xi_{n}$ are identically distributed $d$-dimensional $G$-normally distributed random vectors in a sublinear expectation space $(\tilde{\Omega},\tilde{\mathcal{H} },\mathbb{\tilde{E}})$ such that $\xi_{i+1}$ is independent of $(\xi_{1} ,\cdot \cdot \cdot,\xi_{i})$ for every $i=1,\cdot \cdot \cdot,m-1$. The corresponding canonical process $B_{t}=(B_{t}^{i})_{i=1}^{d}$ is called a $G$-Brownian motion. ii) For each fixed $t\in \lbrack0,\infty)$, the conditional $G$-expectation $\mathbb{\hat{E}}_{t}$ for $\xi=\varphi(B_{t_{1}}-B_{t_{0}},B_{t_{2}} -B_{t_{1}},\cdot \cdot \cdot,B_{t_{m}}-B_{t_{m-1}})\in L_{ip}(\Omega)$, without loss of generality we suppose $t_{i}=t$, is defined by \[ \mathbb{\hat{E}}_{t}[\varphi(B_{t_{1}}-B_{t_{0}},B_{t_{2}}-B_{t_{1}} ,\cdot \cdot \cdot,B_{t_{m}}-B_{t_{m-1}})] \] \[ =\psi(B_{t_{1}}-B_{t_{0}},B_{t_{2}}-B_{t_{1}},\cdot \cdot \cdot,B_{t_{i} }-B_{t_{i-1}}), \] where \[ \psi(x_{1},\cdot \cdot \cdot,x_{i})=\mathbb{\hat{E}}[\varphi(x_{1},\cdot \cdot \cdot,x_{i},B_{t_{i+1}}-B_{t_{i}},\cdot \cdot \cdot,B_{t_{m}}-B_{t_{m-1} })]. \] \end{definition} For each fixed $T>0$, we set \[ L_{ip}(\Omega_{T}):=\{ \varphi(B_{t_{1}},...,B_{t_{n}}):n\geq1,t_{1} ,...,t_{n}\in \lbrack0,T],\varphi \in C_{b.Lip}(\mathbb{R}^{d\times n})\}. \] For each $p\geq1$, we denote by $L_{G}^{p}(\Omega)$ (resp. $L_{G}^{p} (\Omega_{T})$) the completion of $L_{ip}(\Omega)$ (resp. $L_{ip}(\Omega_{T})$) under the norm $\Vert \xi \Vert_{p,G}=(\mathbb{\hat{E}}[|\xi|^{p}])^{1/p}$. It is easy to check that $L_{G}^{q}(\Omega)\subset L_{G}^{p}(\Omega)$ for $1\leq p\leq q$ and $\mathbb{\hat{E}}_{t}[\cdot]$ can be extended continuously to $L_{G}^{1}(\Omega)$. 
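As a purely illustrative numerical aside (not taken from the references above), the characterization of the $G$-normal distribution through the $G$-heat equation can be checked in dimension one, where $G(a)=\frac{1}{2}(\bar{\sigma}^{2}a^{+}-\underline{\sigma}^{2}a^{-})$ with $0<\underline{\sigma}\leq\bar{\sigma}$. The following \emph{Python} sketch (the grid sizes and the values $\underline{\sigma}^{2}=0.25$, $\bar{\sigma}^{2}=1$ are chosen only for illustration) solves $\partial_{t}u=G(\partial_{xx}u)$, $u(0,\cdot)=\varphi$, by an explicit finite-difference scheme and recovers the known fact that $\mathbb{\hat{E}}[\varphi(X)]$ is computed under the upper variance $\bar{\sigma}^{2}$ for convex $\varphi$ and under the lower variance $\underline{\sigma}^{2}$ for concave $\varphi$.
\begin{verbatim}
import numpy as np

def G(a, sig2_low=0.25, sig2_up=1.0):
    # One-dimensional G-function: G(a) = (sig2_up*a^+ - sig2_low*a^-)/2
    return 0.5*(sig2_up*np.maximum(a, 0.0) - sig2_low*np.maximum(-a, 0.0))

def g_expectation(phi, t=1.0, L=6.0, nx=601, nt=20000):
    # Explicit scheme for du/dt = G(d^2u/dx^2), u(0,x) = phi(x);
    # u(t,0) then approximates E_hat[phi(sqrt(t) X)] for the G-normal X.
    x = np.linspace(-L, L, nx)
    dx, dt = x[1] - x[0], t/nt
    u = phi(x).astype(float)
    for _ in range(nt):
        uxx = np.zeros_like(u)
        uxx[1:-1] = (u[2:] - 2.0*u[1:-1] + u[:-2]) / dx**2
        u = u + dt*G(uxx)
        u[0], u[-1] = phi(x[0]), phi(x[-1])   # freeze far-away boundary values
    return u[nx//2]

print(g_expectation(lambda x: x**2))    # ~  1.00 = upper variance (convex phi)
print(g_expectation(lambda x: -x**2))   # ~ -0.25 = minus lower variance (concave phi)
\end{verbatim}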
For each fixed $\mathbf{a}\in \mathbb{R}^{d}$, $B_{t}^{\mathbf{a}} =\langle \mathbf{a},B_{t}\rangle$ is a $1$-dimensional $G_{\mathbf{a}} $-Brownian motion, where $G_{\mathbf{a}}(\alpha)=\frac{1}{2}(\sigma _{\mathbf{aa}^{T}}^{2}\alpha^{+}-\sigma_{-\mathbf{aa}^{T}}^{2}\alpha^{-})$, $\sigma_{\mathbf{aa}^{T}}^{2}=2G(\mathbf{aa}^{T})$, $\sigma_{-\mathbf{aa}^{T} }^{2}=-2G(-\mathbf{aa}^{T})$. Let $\pi_{t}^{N}=\{t_{0}^{N},\cdots,t_{N}^{N} \}$, $N=1,2,\cdots$, be a sequence of partitions of $[0,t]$ such that $\mu (\pi_{t}^{N})=\max \{|t_{i+1}^{N}-t_{i}^{N}|:i=0,\cdots,N-1\} \rightarrow0$, the quadratic variation process of $B^{\mathbf{a}}$ is defined by \[ \langle B^{\mathbf{a}}\rangle_{t}=\lim_{\mu(\pi_{t}^{N})\rightarrow0} \sum_{j=0}^{N-1}(B_{t_{j+1}^{N}}^{\mathbf{a}}-B_{t_{j}^{N}}^{\mathbf{a}} )^{2}. \] For each fixed $\mathbf{a}$, $\mathbf{\bar{a}}\in \mathbb{R}^{d}$, the mutual variation process of $B^{\mathbf{a}}$ and $B^{\mathbf{\bar{a}}}$ is defined by \[ \langle B^{\mathbf{a}},B^{\mathbf{\bar{a}}}\rangle_{t}=\frac{1}{4}[\langle B^{\mathbf{a}+\mathbf{\bar{a}}}\rangle_{t}-\langle B^{\mathbf{a} -\mathbf{\bar{a}}}\rangle_{t}]. \] \begin{definition} \label{def2.6} For fixed $T>0$, let $M_{G}^{0}(0,T)$ be the collection of processes in the following form: for a given partition $\{t_{0},\cdot \cdot \cdot,t_{N}\}=\pi_{T}$ of $[0,T]$, \[ \eta_{t}(\omega)=\sum_{j=0}^{N-1}\xi_{j}I_{[t_{j},t_{j+1})}(t), \] where $\xi_{j}\in L_{ip}(\Omega_{t_{j}})$, $j=0,1,2,\cdot \cdot \cdot,N-1$. For $p\geq1$, we denote by $H_{G}^{p}(0,T)$, $M_{G}^{p}(0,T)$ the completion of $M_{G}^{0}(0,T)$ under the norms $\Vert \eta \Vert_{H_{G}^{p}}=\{ \mathbb{\hat {E}}[(\int_{0}^{T}|\eta_{s}|^{2}ds)^{p/2}]\}^{1/p}$, $\Vert \eta \Vert _{M_{G}^{p}}=\{ \mathbb{\hat{E}}[\int_{0}^{T}|\eta_{s}|^{p}ds]\}^{1/p}$ respectively. \end{definition} For each $\eta \in M_{G}^{1}(0,T)$, we can define the integrals $\int_{0} ^{T}\eta_{t}dt$ and $\int_{0}^{T}\eta_{t}d\langle B^{\mathbf{a}} ,B^{\mathbf{\bar{a}}}\rangle_{t}$ for each $\mathbf{a}$, $\mathbf{\bar{a}} \in \mathbb{R}^{d}$. For each $\eta \in H_{G}^{p}(0,T;\mathbb{R}^{d})$ with $p\geq1$, we can define It\^{o}'s integral $\int_{0}^{T}\eta_{t}dB_{t}$. In the following $\langle B\rangle$ denotes the quadratic variation of $B$ (refer to \cite{Peng2010,HJPS1,PSZ}). Let $S_{G}^{0}(0,T)=\{h(t,B_{t_{1}\wedge t},\cdot \cdot \cdot,B_{t_{n}\wedge t}):t_{1},\ldots,t_{n}\in \lbrack0,T],h\in C_{b,Lip}(\mathbb{R}^{n+1})\}$. For $p\geq1$ and $\eta \in S_{G}^{0}(0,T)$, set $\Vert \eta \Vert_{S_{G}^{p}}=\{ \mathbb{\hat{E}}[\sup_{t\in \lbrack0,T]}|\eta_{t}|^{p}]\}^{\frac{1}{p}}$. Denote by $S_{G}^{p}(0,T)$ the completion of $S_{G}^{0}(0,T)$ under the norm $\Vert \cdot \Vert_{S_{G}^{p}}$. We consider the following type of $G$-BSDEs (in this paper we always use Einstein convention): \begin{equation}\label{eq:GBSDE1} Y_{t} =\xi+\int_{t}^{T}g(s,Y_{s},Z_{s})ds+\int_{t}^{T}f(s,Y_{s} ,Z_{s})d\langle B\rangle_{s} -\int_{t}^{T}Z_{s}dB_{s}-(K_{T}-K_{t}), \end{equation} where \[ g(t,\omega,y,z),f(t,\omega,y,z):[0,T]\times \Omega_{T}\times \mathbb{R}\times \mathbb{R}\rightarrow \mathbb{R} \] satisfy the following properties: \begin{itemize} \item[(H1)] There exists some $\beta>1$ such that for any $y,z$, $g(\cdot,\cdot,y,z),f(\cdot,\cdot,y,z)\in M_{G}^{\beta}(0,T)$. \item[(H2)] There exists some $L>0$ such that \[ |g(t,y,z)-g(t,y^{\prime},z^{\prime})|+|f(t,y,z)-f(t,y^{\prime},z^{\prime})|\leq L(|y-y^{\prime}|+|z-z^{\prime}|). 
\] \end{itemize} For simplicity, we denote by $\mathfrak{S}_{G}^{\alpha}(0,T)$ the collection of processes $(Y,Z,K)$ such that $Y\in S_{G}^{\alpha}(0,T)$, $Z\in H_{G}^{\alpha}(0,T;\mathbb{R})$, $K$ is a decreasing $G$-martingale with $K_{0}=0$ and $K_{T}\in L_{G}^{\alpha}(\Omega_{T})$. \begin{definition} \label{def3.1} Let $\xi \in L_{G}^{\beta}(\Omega_{T})$, $g$ and $f$ satisfy (H1) and (H2) for some $\beta>1$. A triplet of processes $(Y,Z,K)$ is called a solution of equation (\ref{eq:GBSDE1}) if for some $1<\alpha \leq \beta$ the following properties hold: \begin{itemize} \item[(a)] $(Y,Z,K)\in \mathfrak{S}_{G}^{\alpha}(0,T)$; \item[(b)] $Y_{t}=\xi+\int_{t}^{T}g(s,Y_{s},Z_{s})ds+\int_{t}^{T} f(s,Y_{s},Z_{s})d\langle B\rangle_{s}-\int_{t}^{T}Z_{s} dB_{s}-(K_{T}-K_{t})$. \end{itemize} \end{definition} \begin{lemma} \label{the1.1} (\cite{HJPS1}) Assume that $\xi \in L_{G}^{\beta}(\Omega_{T})$ and $g$, $f$ satisfy (H1) and (H2) for some $\beta>1$. Then equation (\ref{eq:GBSDE1}) has a unique solution $(Y,Z,K)$. Moreover, for any $1<\alpha<\beta$ we have $Y\in S_{G}^{\alpha}(0,T)$, $Z\in H_{G}^{\alpha}(0,T;\mathbb{R})$ and $K_{T}\in L_{G}^{\alpha}(\Omega_{T})$. \end{lemma} In this paper, we also need the following assumptions for $G$-BSDE (\ref{eq:GBSDE1}). \begin{itemize} \item[(H3)] For each fixed $(\omega,y,z)\in \Omega_{T}\times \mathbb{R} \times \mathbb{R}$, $t\rightarrow g(t,\omega,y,z)$ and $t\rightarrow f(t,\omega,y,z)$ are continuous. \item[(H4)] For each fixed $(t,y,z)\in \lbrack0,T)\times \mathbb{R} \times \mathbb{R}$, $g(t,y,z)$, $f(t,y,z)\in L_{G}^{\beta}(\Omega _{t})$ and \[ \lim_{\varepsilon \rightarrow0+}\frac{1}{\varepsilon}\mathbb{\hat{E}}[\int _{t}^{t+\varepsilon}(|g(u,y,z)-g(t,y,z)|^{\beta}+ |f(u,y,z)-f(t,y,z)|^{\beta})du]=0. \] \item[(H5)] $K_t=\int_0^t\eta_sd\langle B\rangle_s-2\int_0^tG(\eta_s)ds$, where $\eta\in M_G^p(0,T)$, $p\geq 1$. \item[(H6)] For each $(t,\omega,y)\in \lbrack0,T]\times \Omega_{T} \times \mathbb{R}$, $g(t,\omega,y,0)=f(t,\omega,y,0)=0$. \end{itemize} Assume that $\xi \in L_{G}^{\beta}(\Omega_{T})$, $g$ and $f$ satisfy (H1) and (H2) for some $\beta>1$. Let $(Y^{T,\xi},Z^{T,\xi},K^{T,\xi})$ be the solution of $G$-BSDE (\ref{eq:GBSDE1}) corresponding to $\xi$, $g$ and $f$ on $[0,T]$. It is easy to check that $Y^{T,\xi}=Y^{T^{\prime},\xi}$ on $[0,T]$ for $T^{\prime}>T$. Following (\cite{HJPS2}), we define a nonlinear expectation as \[ \mathbb{\mathcal{E}}_{s,t}[\xi]=Y_{s}^{t,\xi}\text{\quad for \quad }0\leq s\leq t\leq T. \] \begin{remark} In \cite{HJPS2,HH} they both define the nonlinear expectation under the assumption(H1), (H2) and (H6), their consistent nonlinear expectation was defined by $\mathcal{{E}}_{t}[\xi]=Y_{t}^{T,\xi}\text{ for }t\in \lbrack0,T]$. As described by \cite{HJPS2}, under assumption (H6), the nonlinear expectation satisfies, for $T_1<T_2$, $\mathcal{E}_{t,T_1}[\xi]=\mathcal{E}_{t,T_2}[\xi]$. Then the $\mathcal{E}_t[\xi]=\mathcal{E}_{t,T}[\xi]=Y_t^{T,\xi}$ notation is used. \end{remark} The classical g-expectations possess many properties that are useful in finance and economics and became an important risk measure tool in financial mathematics under the complete market case. The nonlinear expectations derived by the G-BSDEs is a useful generalization of $g$-expectations defined on an incomplete market case. 
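To illustrate the nonlinear expectation $\mathcal{E}_{s,t}$ defined above, consider the following simple sanity check (it is not taken from \cite{HJPS2}; we only sketch it under the additional assumption that $\xi$ lies in a subspace of $L_{G}^{\beta}(\Omega_{T})$ on which the representation theorem of \cite{PSZ} applies). Take $f\equiv 0$ and $g(t,y,z)=ay$ for a constant $a\in\mathbb{R}$, which satisfy (H1) and (H2). If $\mathbb{\hat{E}}_{t}[\xi]=\mathbb{\hat{E}}[\xi]+\int_{0}^{t}z_{s}dB_{s}+k_{t}$ is the decomposition of the $G$-martingale $t\mapsto\mathbb{\hat{E}}_{t}[\xi]$, with $k$ a decreasing $G$-martingale of the form (H5), then one checks directly that
\[
Y_{t}=e^{a(T-t)}\mathbb{\hat{E}}_{t}[\xi],\qquad Z_{t}=e^{a(T-t)}z_{t},\qquad K_{t}=\int_{0}^{t}e^{a(T-s)}dk_{s}
\]
solves \eqref{eq:GBSDE1}; here the positive homogeneity of $G$ guarantees that $K$ is again a decreasing $G$-martingale of the form (H5). By uniqueness,
\[
\mathcal{E}_{t,T}[\xi]=e^{a(T-t)}\mathbb{\hat{E}}_{t}[\xi],
\]
and in particular, for $g\equiv f\equiv 0$, the nonlinear expectation $\mathcal{E}_{t,T}$ reduces to the conditional $G$-expectation $\mathbb{\hat{E}}_{t}$.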
\section{Main result}\label{Sec:Main}
\begin{definition}\label{def:1}
A function $h\in C^2(\mathbb{R})$ is said to be $G$-convex if, for every $0\leq t\leq s\leq T$ and every $\xi\in \mathbb{L}_G^{\infty}(\Omega_s)$, $\mathcal{E}_{t,s}[h(\xi)]\geq h[\mathcal{E}_{t,s}(\xi)]$.
\end{definition}
\begin{lemma}\label{le:1}(see \cite{HJPS1}). Let $\xi\in L_G^{\beta}(\Omega_T)$ and let $g, f$ satisfy (H1) and (H2) for some $\beta>1$. Assume that $(Y,Z,K)$ is a solution of \eqref{eq:GBSDE1} such that, for some $1<\alpha<\beta$, $(Y,Z)\in S_G^{\alpha}(0,T)\times H_G^{\alpha}(0,T;\mathbb{R}^d)$ and $K$ is a decreasing G-martingale with $K_0=0$ and $K_T\in L_G^{\alpha}(\Omega_T)$. Then there exists a constant $C_{\alpha}>0$ depending on $\alpha, T, G$ and $L$ such that
\begin{equation}\label{eq:Y-esti}
|Y_t|^{\alpha}\leq C_{\alpha}\hat{{\mathbb E}}_t\left[|\xi|^{\alpha}+\left(\int_t^T|h_s^0|ds\right)^{\alpha}\right],
\end{equation}
\begin{equation}\label{eq:Z-esti}
\hat{{\mathbb E}}\left[\left(\int_0^T|Z_s|^2ds\right)^{\alpha/2}\right]\leq C_{\alpha}\left\{\hat{{\mathbb E}}\left[\sup_{t\in[0,T]}|Y_t|^{\alpha}\right]+ \left(\hat{{\mathbb E}}\left[\sup_{t\in[0,T]}|Y_t|^{\alpha}\right]\right)^{1/2} \left(\hat{{\mathbb E}}\left[\left(\int_0^Th_s^0ds\right)^{\alpha}\right]\right)^{1/2}\right\},
\end{equation}
where $h_s^0=|g(s,0,0)|+|f(s,0,0)|$.
\end{lemma}
\begin{lemma}\label{le:2}(see \cite{HJPS1,Song1}) Let $\alpha\geq 1$ and $\delta>0$ be fixed. Then there exists a constant $C$ depending on $\alpha$ and $\delta$ such that
\begin{equation}\label{eq:xi-esti}
\hat{{\mathbb E}}\left[\sup_{t\in[0,T]}\hat{{\mathbb E}}_t[|\xi|^{\alpha}]\right]\leq C\left\{\left(\hat{{\mathbb E}}\left[|\xi|^{\alpha+\delta}\right]\right)^{\alpha/(\alpha+\delta)}+ \hat{{\mathbb E}}\left[|\xi|^{\alpha+\delta}\right]\right\},
\end{equation}
for all $\xi\in L_G^{\alpha+\delta}(\Omega_T)$.
\end{lemma}
Following Theorem 12 in \cite{HH}, we have the following representation.
\begin{lemma}\label{le:3} Suppose that (H1)-(H4) are satisfied. Let $\Phi\in C^2_b(\mathbb{R})$ be a function of polynomial growth and, for $s\in[t,t+\epsilon]$, let
$$Y_s=\Phi(B_{t+\epsilon}-B_t)+\int_s^{t+\epsilon}g(r,Y_r,Z_r)dr+\int_s^{t+\epsilon}f(r,Y_r,Z_r)d\langle B\rangle_r- \int_s^{t+\epsilon}Z_rdB_r-(K_{t+\epsilon}-K_s).$$
Then
\begin{equation}\label{eq:g-represent}
L^2_G-\lim_{\epsilon\rightarrow 0+}\frac{\mathcal{E}_{t,t+\epsilon}[\Phi(B_{t+\epsilon}-B_t)]-\Phi(0)}{\epsilon} =g(t,\Phi(0),\Phi'(0))+2G(f(t,\Phi(0),\Phi'(0))+\frac12\Phi''(0)).
\end{equation}
\end{lemma}
{\bf Proof: } Let $\tilde{Y}_s=Y_s-\Phi(B_s-B_t)$; then $\tilde{Y}_t=Y_t-\Phi(0)$ and $\tilde{Y}_{t+\epsilon}= Y_{t+\epsilon}-\Phi(B_{t+\epsilon}-B_t)=0$. By It\^{o}'s formula,
\begin{equation*}\begin{split} -d\tilde{Y}_s=&-dY_s+d\Phi(B_s-B_t)\\ =&g(s,Y_s,Z_s)ds+f(s,Y_s,Z_s)d\langle B\rangle_s-Z_sdB_s-dK_s\\ &+\Phi'(B_s-B_t)dB_s+\frac 12\Phi''(B_s-B_t)d\langle B\rangle_s \end{split}\end{equation*}
and hence
\begin{equation*}\begin{split} \tilde{Y}_s=&0+\int_s^{t+\epsilon}g(r,Y_r,Z_r)dr+\int_s^{t+\epsilon}(f(r,Y_r,Z_r)+\frac12\Phi''(B_r-B_t))d\langle B\rangle_r\\ &-\int_s^{t+\epsilon}(Z_r-\Phi'(B_r-B_t))dB_r-(K_{t+\epsilon}-K_s). \end{split}\end{equation*}
Let $\tilde{Z}_s=Z_s-\Phi'(B_s-B_t)$ and $\tilde{K}_s=K_s$.
Then $(\tilde{Y}_s,\tilde{Z}_s,\tilde{K}_s)$ satisfies the G-BSDE: \begin{equation*}\begin{split} \tilde{Y}_s=&0+\int_s^{t+\epsilon}g(r,\tilde{Y}_r+\Phi(B_r-B_t),\tilde{Z}_r+\Phi'(B_r-B_t))dr\\ &+\int_s^{t+\epsilon}(f(r,\tilde{Y}_r+\Phi(B_r-B_t),\tilde{Z}_r+\Phi'(B_r-B_t))+\frac12\Phi''(B_r-B_t))d\langle B\rangle_r\\ &-\int_s^{t+\epsilon}\tilde{Z_r}dB_r-(\tilde{K}_{t+\epsilon}-K_s). \end{split}\end{equation*} From Lemma \ref{le:1}, \begin{equation*}\begin{split} |\tilde{Y}^{\epsilon}_s|^{\alpha}&\leq C_{\alpha}\hat{{\mathbb E}}_s\left[\left(\int_s^{t+\epsilon}( |g(r,\Phi(B_r-B_t),\Phi'(B_r-B_t))|\right.\right.\\ &\left.\left.+|f(r,\Phi(B_r-B_t),\Phi'(B_r-B_t))|+\frac12|\phi''(B_r-B_t)| )dr\right)^{\alpha}\right], \end{split}\end{equation*} \begin{equation*}\begin{split} \hat{{\mathbb E}}\left[\left(\int_t^{t+\epsilon}|\tilde{Z}_r^{\epsilon}|^2dr\right)^{\alpha/2}\right]&\leq C_{\alpha}\left\{\hat{{\mathbb E}}\left[\left(\int_t^{t+\epsilon}(|g(r,\Phi(B_r-B_t),\Phi'(B_r-B_t))|+ \frac12|\Phi''(B_r-B_t)|\right.\right.\right.\\ &\left.\left.\left.+|f(r,\Phi(B_r-B_t),\Phi'(B_r-B_t))|)dr\right)^{\alpha}\right] +\hat{{\mathbb E}}\left[\sup_{s\in[t,t+\epsilon]}|\tilde{Y}^{\epsilon}_s|^{\alpha}\right]\right\} \end{split}\end{equation*} hold for some constant $C_{\alpha}>0$, only depending on $\alpha, T, G$ and $L$. \begin{multline*} \int_t^{t+\epsilon}\left(|g(r,0,0)|^{\beta}+|f(r,0,0)|^{\beta}\right)dr\leq 2^{\beta-1}\left\{ \epsilon\left(|g(t,0,0)|^{\beta}+|f(t,0,0)|^{\beta}\right)\right.\\ \left. +\int_t^{t+\epsilon}\left( |g(r,0,0)-g(t,0,0)|^{\beta}+|f(r,0,0)-f(t,0,0)|^{\beta}\right)dr\right\}. \end{multline*} Together with Lemma \ref{le:2} and assumption (H4), we get \begin{equation}\label{eq:YZ-esti} \hat{{\mathbb E}}\left[\sup_{s\in[t,t+\epsilon]}|\tilde{Y}_s^{\epsilon}|^{\alpha}+ \left(\int_t^{t+\epsilon}|\tilde{Z}^{\epsilon}_r|^2\right)^{\alpha/2}\right]\leq C_3\epsilon^{\alpha}, \end{equation} where $C_3$ depends on $x, y, p,\alpha, \beta, T, G$ and $L$. Now we prove \eqref{eq:g-represent}. Dividing $\epsilon$, take conditional G-expectations and take limits on both sides of the equation in $L_G^2$ norm, then $\forall \Phi\in C_b^2(\mathbb{R})$, \begin{equation*} \begin{split} \lim_{\epsilon\rightarrow 0+}\frac{\tilde{Y}_t}{\epsilon}=&\lim_{\epsilon\rightarrow 0+}\frac1{\epsilon} \hat{E}_t[\tilde{Y}_t+(\tilde{K}_{t+\epsilon}-\tilde{K}_t)]\\ =&\lim_{\epsilon\rightarrow 0+}\frac1{\epsilon}\hat{{\mathbb E}}_t\left[\int_t^{t+\epsilon}g(r,\tilde{Y}_r+\Phi(B_r-B_t),\tilde{Z}_r +\Phi'(B_r-B_t))dr\right. 
\\ &\left.+\int_t^{t+\epsilon}\left(f(r,\tilde{Y}_r+\Phi(B_r-B_t),\tilde{Z}_r+\Phi'(B_r-B_t)) +\frac12\Phi''(B_r-B_t)\right)d\langle B\rangle_r\right]\\ =&\lim_{\epsilon\rightarrow 0+}\frac1{\epsilon}\hat{{\mathbb E}}\left[\int_t^{t+\epsilon}g(r,\Phi(B_r-B_t),\Phi'(B_r-B_t))dr\right.\\ &\left.+\int_t^{t+\epsilon}\left(f(r,\Phi(B_r-B_t),\Phi'(B_r-B_t))+\frac12 \Phi''(B_r-B_t)\right)d\langle B\rangle_r \right]+L_{\epsilon} \end{split}\end{equation*} where \begin{equation*}\begin{split} L_{\epsilon}=&\frac1{\epsilon}\left\{\hat{{\mathbb E}}\left[\int_t^{t+\epsilon} g(r,\tilde{Y}_r+\Phi(B_r-B_t),\tilde{Z}_r+\Phi'(B_r-B_t))dr\right.\right.\\ +&\left.\int_t^{t+\epsilon}\left(f(r,\tilde{Y}_r+\Phi(B_r-B_t),\tilde{Z}_r+\Phi'(B_r-B_t))+ \frac12\Phi''(B_r-B_t)\right)d\langle B\rangle_r\right]\\ -&\hat{{\mathbb E}}\left[\int_t^{t+\epsilon}g(r,\Phi(B_r-B_t),\Phi'(B_r-B_t))dr\right.\\ +&\left.\left.\int_t^{t+\epsilon}\left(f(r,\Phi(B_r-B_t),\Phi'(B_r-B_t))+\frac12\Phi''(B_r-B_t)\right)d\langle B\rangle_r\right]\right\} \end{split}\end{equation*} It can be verified that $|L_{\epsilon}|\leq (C_4/\epsilon)\hat{{\mathbb E}}[\int_t^{t+\epsilon}(|\tilde{Y}_r|+|\tilde{Z}_r|)dr]$, where $C_4$ depends on $G, L$ and $T$. By \eqref{eq:YZ-esti}, we have \begin{equation*}\begin{split} \hat{{\mathbb E}}[|L_{\epsilon}|^{\alpha}]\leq& \frac{C_4^{\alpha}}{\epsilon^{\alpha}}\hat{{\mathbb E}}\left[\left(\int_t^{t+\epsilon}(|\tilde{Y}_r|+ |\tilde{Z}_r|)dr\right)^{\alpha}\right]\\ \leq & \frac{2^{\alpha-1}C_4^{\alpha}}{\epsilon^{\alpha}}\hat{{\mathbb E}}\left[\left(\int_t^{t+\epsilon}|\tilde{Y}_r|dr\right)^{\alpha}+ \left(\int_t^{t+\epsilon}|\tilde{Z}_r|dr\right)^{\alpha}\right]\\ \leq & 2^{\alpha-1}C_4^{\alpha}\left\{\hat{{\mathbb E}}\left[\sup_{s\in[t,t+\epsilon]}|\tilde{Y}_s|^{\alpha}\right]+ \epsilon^{-\alpha/2}\hat{{\mathbb E}}\left[\left(\int_t^{t+\epsilon}|\tilde{Z}_r|^2dr\right)^{\alpha/2}\right]\right\}\\ \leq & 2^{\alpha-1}C_4^{\alpha}C_3(\epsilon^{\alpha}+\epsilon^{\alpha/2}), \end{split}\end{equation*} then $L_G^{\alpha}-\lim_{\epsilon\rightarrow 0+}L_{\epsilon}=0$. We set \begin{equation*}\begin{split} M_{\epsilon}=&\frac1{\epsilon}\left\{\hat{{\mathbb E}}_t\left[\int_t^{t+\epsilon}g(r,\Phi(B_r-B_t),\Phi'(B_r-B_t))dr\right.\right.\\ +&\left.\int_t^{t+\epsilon}f(r,\Phi(B_r-B_t),\Phi'(B_r-B_t))+\frac12\Phi''(B_r-B_t)d\langle B\rangle_r\right]\\ -&\left.\hat{{\mathbb E}}_t\left[\int_t^{t+\epsilon}g(r,\Phi(0),\Phi'(0))dr+\int_t^{t+\epsilon}\left(f(r,\Phi(0),\Phi'(0)) +\frac12\Phi''(0)\right)d\langle B\rangle_r \right]\right\} \end{split}\end{equation*} By the Lipschitz condition of function $g$ and $f$, and the polynomial growth of $\Phi\in C_b^2(\mathbb{R})$, we have $L_{G}^{\alpha}-\lim_{\epsilon\rightarrow 0+}M_{\epsilon}=0$. Further we set \begin{equation*}\begin{split} N_{\epsilon}=&\frac1{\epsilon}\left\{\hat{{\mathbb E}}_t\left[\int_t^{t+\epsilon}g(r,\Phi(0),\Phi'(0))dr +\int_t^{t+\epsilon}\left(f(r,\Phi(0),\Phi'(0))+\frac12\Phi''(0)\right)d\langle B\rangle_r\right]\right.\\ &\left.-\hat{{\mathbb E}}_t\left[\int_t^{t+\epsilon}g(r,\Phi(0),\Phi'(0))dr+\int_t^{t+\epsilon}\left( f(r,\Phi(0),\Phi'(0))+\frac12\Phi''(0)\right) d\langle B\rangle_r\right]\right\} \end{split}\end{equation*} You can check that \begin{equation*}\begin{split} |N_{\epsilon}|\leq& (C_7/\epsilon)\hat{{\mathbb E}}_t[\int_t^{t+\epsilon}(|g(r,\Phi(0),\Phi'(0))- g(t,\Phi(0),\Phi'(0))|+\\ &|f(r,\Phi(0),\Phi'(0))-f(t,\Phi(0),\Phi'(0))|)^{\alpha}dr], \end{split}\end{equation*} where $C_7$ depends on $G$. 
Then, \begin{equation*}\begin{split} \hat{{\mathbb E}}[|N_{\epsilon}|^{\alpha}]\leq & C_7^{\alpha} \frac1{\epsilon}\hat{{\mathbb E}}\left[\int_t^{t+\epsilon}( |g(r,\Phi(0),\Phi'(0))-g(t,\Phi(0),\Phi'(0))|\right.\\ &\left.+|f(r,\Phi(0),\Phi'(0))-f(t,\Phi(0),\Phi'(0))| )^{\alpha}dr\right]\\ \leq & C_7^{\alpha}\left(\frac1{\epsilon}\hat{{\mathbb E}}\left[\int_t^{t+\epsilon}(|g(r,\Phi(0),\Phi'(0))-g(t,\Phi(0),\Phi'(0))| \right.\right.\\ &\left.\left.+|f(r,\Phi(0),\Phi'(0))-f(t,\Phi(0),\Phi'(0))|)^{\beta}dr\right]\right)^{\alpha/\beta}. \end{split}\end{equation*} Take limits from both sides of the above inequality and use assumption (H4), then we have \begin{equation*} L_G^{\alpha}-\lim_{\epsilon\rightarrow 0+}N_{\epsilon}=0. \end{equation*} At the same time, \begin{equation*}\begin{split} \hat{{\mathbb E}}\left[\int_t^{t+\epsilon}g(r,\Phi(0),\Phi'(0))dr+ \int_t^{t+\epsilon}(f(t,\Phi(0),\Phi'(0))+\frac12\Phi''(0))d\langle B\rangle_r\right]\\ =g(t,\Phi(0),\Phi'(0))\epsilon+\hat{{\mathbb E}}_t\left[f(t,\Phi(0),\Phi'(0))\left(\langle B\rangle_{t+\epsilon}-\langle B\rangle_t\right)\right]\\ =\left[g(t,\Phi(0),\Phi'(0))+2G\left((f(t,\Phi(0),\Phi'(0))+\frac12\Phi''(0))\right)\right]\epsilon. \end{split}\end{equation*} Then we have \begin{equation*}\begin{split} L_G^{\alpha}-\lim_{\epsilon\rightarrow 0+}\frac{\tilde{Y}_t}{\epsilon}=& \lim_{\epsilon\rightarrow 0+}\frac1{\epsilon}\{Y_t-\Phi(0)\}\\ =& g(t,\Phi(0),\Phi'(0))+2G\left( (f(r,\Phi(0),\Phi'(0)+\frac12\Phi''(0))\right). \end{split}\end{equation*} The proof is finished. \begin{theorem}\label{thm:main} Suppose (H1)-(H4) satisfied. Take a function $h\in C^2$, $\phi\in C_b^2(\mathbb{R})$ is polynomial growth function and $h(\phi)\in C_b^2(\mathbb{R})$. Then $h$ is a G-convex function that is equivalent with \begin{multline}\label{eq:G-convex-equivalent} g(t,h(y),h'(y)z)+2G(f(t,h(y),h'(y)z)+\frac12h''(y)z^2+\frac12h'(y)A)\geq\\ h'(y)g(t,y,z)+2h'(y)G(f(t,y,z)+\frac12A),\quad\textrm{ for all } y,z\in\mathbb{R} \textrm{ and } A\in\mathbb{R}. \end{multline} \end{theorem} {\bf Proof: Necessary condition: } Take a function $h\in C^2$ and $\phi\in C^2_b(\mathbb{R})$, with $H(\phi)\in C^2_b(\mathbb{R})$. By Lemma \ref{le:3} we have \begin{equation}\label{eq:e-h-phi}\begin{split} L^2_G-& \lim_{\epsilon\rightarrow 0+}\frac{\mathcal{E}_{t,t+\epsilon}[h(\phi(B_{t+\epsilon}-B_t))]-h(\phi(0))}{\epsilon}= g(t,h(\phi(0)),h'(\phi(0))\phi'(0))\\ &+2G\left(f(t,h(\phi(0)),h'(\phi(0))\phi'(0))+\frac12h''(\phi(0))(\phi'(0))^2+\frac12h'(\phi(0) )\phi''(0)\right) \end{split}\end{equation} and \begin{equation}\label{eq:phi-lim-rep}\begin{split} &L_G^2-\lim_{\epsilon\rightarrow 0+}\frac{\CE_{t,t+\epsilon}[\phi(B_{t+\epsilon}-B_t)]-\phi(0)}{\epsilon}\\ &=g(t,\phi(0),\phi'(0))+2G(f(t,\phi(0),\phi'(0))+\frac12\phi''(0)). \end{split}\end{equation} Based on \eqref{eq:phi-lim-rep}, we have \begin{equation}\label{eq:h-e-phi}\begin{split} &L_G^2-\lim_{\epsilon\rightarrow 0+}\frac{h(\CE_{t,t+\epsilon}[\phi(B_{t+\epsilon}-B_t)])-h(\phi(0))}{\epsilon}\\ &=h'(\phi(0))\left(g(t,\phi(0),\phi'(0))+2G(f(t,\phi(0),\phi'(0))+\frac12\phi''(0))\right). \end{split}\end{equation} If $h$ is a G-convex function, from Definition \ref{def:1}, $h$ satisfies $\mathcal{E}_{t,t+\epsilon}[h(\phi(B_{t+\epsilon}-B_t))]\geq h\{\mathcal{E}_{t,t+\epsilon}[\phi(B_{t+\epsilon}-B_t)]\}$. 
From \eqref{eq:e-h-phi} and \eqref{eq:h-e-phi} we have
\begin{equation*}\begin{split} g(t,h(\phi(0)),h'(\phi(0))\phi'(0))&+2G\left(f(t,h(\phi(0)),h'(\phi(0))\phi'(0))+ \frac12h''(\phi(0))(\phi'(0))^2+\frac12h'(\phi(0) )\phi''(0)\right)\\ &\geq h'(\phi(0))\left(g(t,\phi(0),\phi'(0))+2G(f(t,\phi(0),\phi'(0))+\frac12\phi''(0))\right) \end{split}\end{equation*}
Since $(\phi(0),\phi'(0),\phi''(0))$ can take arbitrary values in $\mathbb{R}^3$, this yields \eqref{eq:G-convex-equivalent}. \par
{\bf Sufficient condition:} Building on a series of works by Soner, Touzi and Zhang \cite{STZ} and Song \cite{Song1,Song2}, Peng, Song and Zhang \cite{PSZ} proved a representation theorem for G-martingales in a complete subspace of $L_G^{\alpha}(\Omega_T)$ $(\alpha\geq 1)$. They showed that the decreasing part $K$ in the decomposition of the G-martingale $\hat{{\mathbb E}}_t[\xi]$ can be uniquely represented as $K_t=\int_0^t\eta_sd\langle B\rangle_s-\int_0^t2G(\eta_s)ds$. Then, using a Picard approximation approach similar to the one in \cite{HJPS1}, we can obtain the corresponding version of Theorem \ref{thm:main} for a normal decreasing martingale with $K_0=0$ and $K_T\in \mathbb{L}^{\alpha}(\Omega_T)$. Take $\xi\in L^{\infty}_G(\phi(B_t))$; we need to prove $$\mathcal{E}_{s,t}[h(\xi)]\geq h[\mathcal{E}_{s,t}(\xi)].$$ Let $$Y_u=\xi+\int_u^tg(r,Y_r,Z_r)dr+\int_u^tf(r,Y_r,Z_r)d\langle B\rangle_r-\int_u^tZ_rdB_r-(K_t-K_u).$$ Applying It\^o's formula, we obtain $$-dh(Y_r)=h'(Y_r)\left[g(r,Y_r,Z_r)dr+f(r,Y_r,Z_r)d\langle B\rangle_r-Z_rdB_r-dK_r\right]-\frac12h''(Y_r)|Z_r|^2d\langle B\rangle_r,$$ and hence
\begin{equation*} \begin{split} h(Y_u)=&h(\xi)+\int_u^th'(Y_r)g(r,Y_r,Z_r)dr +\int_u^t\left[h'(Y_r)f(r,Y_r,Z_r)-\frac12h''(Y_r)|Z_r|^2\right]d\langle B\rangle_r\\ &-\int_u^th'(Y_r)Z_rdB_r-\int_u^th'(Y_r)dK_r\\ =&h(\xi)+\int_u^tg(r,h(Y_r),h'(Y_r)Z_r)dr+\int_u^tf(r,h(Y_r),h'(Y_r)Z_r)d\langle B\rangle_r-\int_u^th'(Y_r)Z_rdB_r\\ &+\int_u^t\left(h'(Y_r)g(r,Y_r,Z_r)-g(r,h(Y_r),h'(Y_r)Z_r)\right)dr\\ &+\int_u^t\left(h'(Y_r)f(r,Y_r,Z_r)-\frac12h''(Y_r)|Z_r|^2-f(r,h(Y_r),h'(Y_r)Z_r)\right)d\langle B\rangle_r -\int_u^th'(Y_r)dK_r. \end{split} \end{equation*}
Since the decreasing process $K_r$ is a G-martingale satisfying (H5), it admits the representation $dK_r=\eta_r d\langle B\rangle_r-2G(\eta_r)dr$ described above, and therefore
\begin{equation*}\begin{split} h(Y_u)=&h(\xi)+\int_u^tg(r,h(Y_r),h'(Y_r)Z_r)dr+\int_u^tf(r,h(Y_r),h'(Y_r)Z_r)d\langle B\rangle_r-\int_u^th'(Y_r)Z_rdB_r\\ &+\int_u^t\left[h'(Y_r)g(r,Y_r,Z_r)-g(r,h(Y_r),h'(Y_r)Z_r)+2h'(Y_r)G(\eta_r)\right]dr\\ &-\int_u^t\left[-h'(Y_r)f(r,Y_r,Z_r)+\frac12h''(Y_r)|Z_r|^2+f(r,h(Y_r),h'(Y_r)Z_r)+h'(Y_r)\eta_r\right]d\langle B\rangle_r \end{split}\end{equation*}
\begin{equation*}\begin{split} =&h(\xi)+\int_u^tg(r,h(Y_r),h'(Y_r)Z_r)dr+\int_u^tf(r,h(Y_r),h'(Y_r)Z_r)d\langle B\rangle_r-\int_u^th'(Y_r)Z_rdB_r\\ &+\int_u^t\left[h'(Y_r)g(r,Y_r,Z_r)-g(r,h(Y_r),h'(Y_r)Z_r)+2h'(Y_r)G(\eta_r)\right.\\ &\left.-2G\left(f(r,h(Y_r),h'(Y_r)Z_r)+ \frac12h''(Y_r)|Z_r|^2+h'(Y_r)(\eta_r-f(r,Y_r,Z_r))\right)\right]dr-(\tilde{K}_t-\tilde{K}_u), \end{split}\end{equation*}
where
\begin{equation*}\begin{split} \tilde{K}_t=&-\left\{\int_0^t\left[f(r,h(Y_r),h'(Y_r)Z_r)+\frac12h''(Y_r)|Z_r|^2+h'(Y_r)\left(\eta_r-f(r,Y_r,Z_r)\right)\right]d\langle B\rangle_r\right.\\ &\left.-2\int_0^tG\left[f(r,h(Y_r),h'(Y_r)Z_r)+\frac12h''(Y_r)|Z_r|^2+h'(Y_r)\left(\eta_r-f(r,Y_r,Z_r)\right) \right]dr \right\} \end{split}\end{equation*}
is a decreasing G-martingale.
Denote $\tilde{Y}_u=h(Y_u)$ and $\tilde{Z}_u=h'(Y_u)Z_u$; then
\begin{equation}\label{eq:G-BSDE-G-conv-1}\begin{split} \tilde{Y}_u=&h(\xi)+\int_u^tg(r,\tilde{Y}_r,\tilde{Z}_r)dr+ \int_u^tf(r,\tilde{Y}_r,\tilde{Z}_r)d\langle B\rangle_r-\int_u^t\tilde{Z}_rdB_r\\ &+\int_u^t\left[h'(Y_r)g(r,Y_r,Z_r)-g(r,\tilde{Y}_r,\tilde{Z}_r)+2h'(Y_r)G(\eta_r)\right.\\ &\left.-2G\left(f(r,\tilde{Y}_r,\tilde{Z}_r)+ \frac12h''(Y_r)|Z_r|^2+h'(Y_r)(\eta_r-f(r,Y_r,Z_r))\right)\right]dr-(\tilde{K}_t-\tilde{K}_u). \end{split}\end{equation}
From inequality \eqref{eq:G-convex-equivalent} we know that the fourth integral is less than or equal to $0$. \par
On the other hand, $\CE_{s,t}[h(\xi)]$ is given by the solution of the following G-BSDE:
\begin{equation}\label{eq:G-BSDE-G-conv-2} \bar{Y}_u=h(\xi)+\int_u^tg(r,\bar{Y}_r,\bar{Z}_r)dr+\int_u^tf(r,\bar{Y}_r,\bar{Z}_r)d\langle B\rangle_r-\int_u^t\bar{Z}_rdB_r -(\bar{K}_t-\bar{K}_u). \end{equation}
Applying the comparison theorem of G-BSDEs \cite{HJPS2}, we have $$\tilde{Y}_u\leq\bar{Y}_u.$$ Since $\tilde{Y}_u=h(Y_u)=h(\CE_{u,t}[\xi])$ and $\bar{Y}_u=\CE_{u,t}[h(\xi)]$, we conclude that $$\CE_{u,t}[h(\xi)]\geq h(\CE_{u,t}[\xi]).$$ This completes the proof.
\begin{remark} In this paper we have only considered the case of a one-dimensional G-Brownian motion. The results remain valid in the $n$-dimensional case, and the proof requires only minor modifications. \end{remark} \end{document}
\begin{document} \title{Slow Down \& Sleep for Profit in Online Deadline Scheduling\thanks{This work was partially supported by the German Research Foundation (DFG) within the Collaborative Research Center ``On-The-Fly Computing'' (SFB 901) and by the Graduate School on Applied Network Science (GSANS).}}
\begin{abstract} We present and study a new model for energy-aware and profit-oriented scheduling on a single processor. The processor features dynamic speed scaling as well as suspension to a sleep mode. Jobs arrive over time, are preemptable, and have different sizes, values, and deadlines. On the arrival of a new job, the scheduler may either accept or reject the job. Accepted jobs need a certain energy investment to be finished in time, while rejected jobs cause costs equal to their values. Here, power consumption at speed $s$ is given by $P(s)=s^{\alpha}+\beta$ and the energy investment is power integrated over time. Additionally, the scheduler may decide to suspend the processor to a sleep mode in which no energy is consumed, though awaking entails fixed transition costs $\gamma$. The objective is to minimize the total value of rejected jobs plus the total energy. Our model combines aspects from advanced energy conservation techniques (namely speed scaling and sleep states) and profit-oriented scheduling models. We show that \emph{rejection-oblivious} schedulers (whose rejection decisions are not based on former decisions) have – in contrast to the model without sleep states – an unbounded competitive ratio w.r.t.\ the processor parameters $\alpha$ and $\beta$. It turns out that the worst-case performance of such schedulers depends linearly on the jobs' value densities (the ratio between a job's value and its work). We give an algorithm whose competitiveness nearly matches this lower bound. If the maximum value density is not too large, the competitiveness becomes $\alpha^{\alpha}+2e\alpha$. Also, we show that it suffices to restrict the value density of low-value jobs only. Using a technique from \cite{Chan:2010} we transfer our results to processors with a fixed maximum speed. \end{abstract}
\section{Introduction} Over the last decade, energy usage of data centers and computers in general has become a major concern. There are various reasons for this development: the ubiquity of technical systems, the rise of mobile computing, as well as a growing ecological awareness. Also from an economic viewpoint, energy usage can no longer be ignored. Energy costs for both the actual computation and the cooling have become \emph{the} decisive cost factor in today's data centers (see, e.g., \textcite{Barroso:2007}). In combination with improvements on the technical level, algorithmic research has great potential to reduce energy consumption. \Textcite{Albers:2010} gives a good insight into the role of algorithms in fully exploiting the energy-saving mechanisms of modern systems. Two of the most prominent techniques for power saving are \emph{dynamic speed scaling} and \emph{power-down}. The former allows a system to save energy by adapting the processor's speed to the current system load, while the latter can be used to transition into a sleep mode to conserve energy. There is an extensive body of literature on both techniques (see below).
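As a rough quantitative illustration of these two techniques (a simplified calculation; the precise model with power function $P(s)=s^{\alpha}+\beta$ follows in Section~\ref{sec:preliminaries}): for a convex power function such as $P(s)=s^{\alpha}$ with $\alpha\geq2$, processing a fixed workload at half the speed for twice the time reduces the energy from $t\cdot s^{\alpha}$ to $2t\cdot(s/2)^{\alpha}=2^{1-\alpha}\,t\cdot s^{\alpha}$, so running as slowly as deadlines permit saves energy; a sleep state, in contrast, only pays off if the idle period it replaces is long enough to amortize the wake-up cost.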
From an algorithmic viewpoint, the most challenging aspect in the design of scheduling strategies is to handle the lack of knowledge about the future: should we use a high speed to free resources in anticipation of new jobs or enter sleep mode in the hope that no new jobs arrive in the near future? Given that profitability is a driving force for most modern systems and that energy consumption has gained such a high significance, it seems natural to take this relation explicitly into account. \Textcite{Pruhs:2010} consider a scheduling model that does so by introducing job values. Their scheduler controls energy usage via speed scaling and is allowed to reject jobs if their values seem too low compared to their foreseeable energy requirements. The objective is to maximize the profit, which is modeled as the total value of finished jobs minus the invested energy. Our work is based on a result by \textcite{Chan:2010}. We enhance their model by combining speed scaling and power-down mechanisms for energy management, which not only introduces non-trivial difficulties to overcome in the analysis, but also proves to be inherently more complex than the original model insofar as classical algorithms can become arbitrarily bad. \subsubsection{History \& Related Work.} There is much literature concerning energy-aware scheduling strategies in both practical and theoretical contexts. A recent survey by \textcite{Albers:2011} gives a good and compact overview of the state of the art in the dynamic speed scaling setting, also in combination with power-down mechanisms. In the following, we focus on theoretical results concerning scheduling on a single processor for jobs with deadlines. Theoretical work in this area was initiated by \textcite{Yao:1995}. They considered scheduling of jobs having different sizes and deadlines on a single variable-speed processor. When running at speed $s$, its power consumption is $P(s)=s^{\alpha}$ for some constant $\alpha\geq2$. \Citeauthor{Yao:1995} derived a polynomial-time optimal offline algorithm as well as two online algorithms known as \emph{optimal available} (\OA) and \emph{average rate} (\AVR). Up to now, \OA remains one of the most important algorithms in this area, as it is used as a basic building block by many strategies (including the strategy we present in this paper). Using an elegant amortized potential function argument, \textcite{Bansal:2007a} were able to show that \OA's competitive factor is exactly $\alpha^{\alpha}$. Moreover, the authors stated a new algorithm, named \BKP, which achieves a competitive ratio of essentially $2e^{\alpha+1}$. This improves upon \OA for large $\alpha$. The best known lower bound for deterministic algorithms is $\sfrac{e^{\alpha-1}}{\alpha}$ due to \textcite{Bansal:2009}. They also presented an algorithm (qOA) that is particularly well-suited for low powers of $\alpha$. An interesting and realistic model extension is the restriction of the maximum processor speed. In such a setting, a scheduler may not always be able to finish all jobs by their deadlines. \Textcite{Chan:2007} were the first to consider the combination of classical speed scaling with such a maximum speed. They gave an algorithm that is $\alpha^{\alpha}+\alpha^2 4^{\alpha}$-competitive on energy and $14$-competitive on throughput. \Textcite{Bansal:2008} improved this to a $4$-competitive algorithm concerning the throughput while maintaining a constant competitive ratio with respect to the energy.
Note that no algorithm – even if ignoring the energy consumption – can be better than $4$-competitive for throughput (see~\cite{Baruah:1991}). Power-down mechanisms were studied by \textcite{Baptiste:2006}. He considered a fixed-speed processor needing a certain amount of energy to stay awake, but which may switch into a sleep state to save energy. Returning from sleep needs energy $\gamma$. For jobs of unit size, he gave a polynomial-time optimal offline algorithm, which was later extended to jobs of arbitrary size~\cite{Baptiste:2007}. The first work to combine both dynamic speed scaling and sleep states in the classical YAO-model is due to \textcite{Irani:2007}. They achieved a $2$-approximation for arbitrary convex power functions. For the online setting and power function $P(s)=s^{\alpha}+\beta$ a competitive factor of $4^{\alpha-1}\alpha^{\alpha}+2^{\alpha-1}+2$ was achieved. \Textcite{Han:2010} improved upon this in two respects: they lowered the competitive factor to $\alpha^{\alpha}+2$ and transferred the result to scenarios limiting the maximum speed. Only recently, \textcite{Albers:2012} proved that the optimization problem is NP-hard and gave lower bounds for several algorithm classes. Moreover, they improved the approximation factor for general convex power functions to $\sfrac{4}{3}$. The papers most closely related to ours are due to \textcite{Pruhs:2010} and \textcite{Chan:2010}. Both considered the dynamic speed scaling model of \citeauthor{Yao:1995}. However, they extended the idea of energy-minimal schedules to a profit-oriented objective. In the simplest case, jobs have values (or priorities) and the scheduler is no longer required to finish all jobs. Instead, it can decide to reject jobs whose values do not justify the foreseeable energy investment necessary to complete them. The objective is to maximize profit~\cite{Pruhs:2010} or, similarly, minimize the loss~\cite{Chan:2010}. As argued by the authors, the latter model has the benefit of being a direct generalization of the classical model of \textcite{Yao:1995}. For maximizing the profit, \textcite{Pruhs:2010} showed that, in order to achieve a bounded competitive factor, resource augmentation is necessary and gave a scalable online algorithm. For minimizing the loss, \textcite{Chan:2010} gave an $\alpha^{\alpha}+2e\alpha$-competitive algorithm and transferred the result to the case of a bounded maximum speed. \subsubsection{Our Contribution.} We present the first model that not only takes into account two of the most prominent energy conservation techniques (namely, speed scaling and power-down) but couples the energy minimization objective with the idea of profitability. It combines aspects from both \cite{Irani:2007} and \cite{Chan:2010}. From~\cite{Irani:2007} we inherit one of the most realistic processor models considered in this area: A single variable-speed processor with power function $P(s)=s^{\alpha}+\beta$ and a sleep state. Thus, even at speed zero the system consumes energy at rate $\beta$, but it can be suspended to a sleep mode in which no energy is consumed. Waking up causes a transition cost of $\gamma$. The job model stems from~\cite{Chan:2010}: Jobs arrive in an online fashion, are preemptable, and have a deadline, size, and value. The scheduler can reject jobs (e.g., if their values do not justify the presumed energy investment). Its objective is to minimize the total energy investment plus the total value of rejected jobs.
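To make the cost accounting of this objective concrete before the formal definitions in Section~\ref{sec:preliminaries}, the following listing gives a small illustrative sketch (it is not part of the algorithm analyzed in Section~\ref{sec:algorithm+analysis}, and the function and variable names are our own); it evaluates the total cost of a schedule described by its constant-speed phases while awake, the number of wake-up transitions, and the values of rejected jobs.
\begin{lstlisting}[language=Python]
# Power function P(s) = s^alpha + beta; energy is power integrated over time.
def schedule_cost(phases, wakeups, rejected_values, alpha, beta, gamma):
    # phases: list of (duration, speed) while the processor is awake;
    # speed 0 corresponds to idling, speed > 0 to working on some job.
    energy = sum(d * (s ** alpha + beta) for (d, s) in phases)
    energy += gamma * wakeups              # transition energy per wake-up from sleep
    return energy + sum(rejected_values)   # plus the value lost through rejections

# Example: one unit of work run at the critical speed s_crit = (beta/(alpha-1))^(1/alpha)
# after a single wake-up, while a job of value 0.5 is rejected.
alpha, beta, gamma = 2.0, 1.0, 5.0
s_crit = (beta / (alpha - 1)) ** (1.0 / alpha)
print(schedule_cost([(1.0 / s_crit, s_crit)], wakeups=1, rejected_values=[0.5],
                    alpha=alpha, beta=beta, gamma=gamma))
\end{lstlisting}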
A major insight of ours is that the maximum value density \valdensMAX (i.e., the ratio between a job's value and its work) is a parameter that is inherently connected to the necessary and sufficient competitive ratio achievable for our online scheduling problem. We present an online algorithm that combines ideas from \cite{Chan:2010} and \cite{Han:2010} and analyze its competitive ratio with respect to \valdensMAX. This yields an upper bound of $\alpha^{\alpha}+2 e\alpha+\valdensMAX\frac{\scrit}{P(\scrit)}$.\footnote{The expression $\frac{\scrit}{P(\scrit)}$ depends only on $\alpha$ and $\beta$, see Section~\ref{sec:preliminaries}.} If the value density of low-valued jobs is not too large or job values are at least $\gamma$, the competitive ratio becomes $\alpha^{\alpha}+2e\alpha$. Moreover, we show that one cannot do much better: any \emph{rejection-oblivious} strategy has a competitive ratio of at least $\valdensMAX\frac{\scrit}{P(\scrit)}$. Here, rejection-oblivious means that rejection decisions are based on the \emph{current} system state and job properties only. This lower bound is in stark contrast to the setting without sleep states, where a rejection-oblivious $\LDAUOmicron{1}$-competitive algorithm exists~\cite{Chan:2010}. Using the definition of a job's penalty ratio (due to \textcite{Chan:2010}), we extend our results to processors with a bounded maximum speed. \section{Model \& Preliminaries}\label{sec:preliminaries} We are given a speed-scalable processor that can be set to any speed $s\in[0,\infty)$. When running at speed $s$ its power consumption is $P_{\alpha,\beta}(s)=s^{\alpha}+\beta$ with $\alpha\geq2$ and $\beta\geq0$. If $s(t)$ denotes the processor speed at time $t$, the total energy consumption is $\int_{0}^{\infty}P_{\alpha,\beta}(s(t))\dif{t}$. We can suspend the processor into a sleep state to save energy. In this state, it cannot process any jobs and has a power consumption of zero. Though entering the sleep state is free, waking up needs a fixed \emph{transition energy} $\gamma\geq0$. Over time, $n$ jobs $J=\set{1,2,\ldots,n}$ are released. Each job $j$ appears at its release time $r_j$, has a deadline $d_j$ and a (non-negative) value $v_j$, and requires a certain amount $w_j$ of work. The processor can process at most one job at a time. Preemption is allowed, i.e., jobs may be paused at any time and continued later on. If $I$ denotes the period of time (not necessarily an interval) when $j$ is scheduled, the amount of work processed is $\int_Is(t)\dif{t}$. A job is finished if $\int_Is(t)\dif{t}\geq w_j$. Jobs not finished by their deadline cause a cost equal to their value. We call such jobs \emph{rejected}. A schedule $S$ specifies for any time $t$ the processor's state (asleep or awake), the currently processed job (if the processor is awake), and the speed $s(t)$. W.l.o.g. we assume $s(t)=0$ when no job is being processed. Initially, the processor is assumed to be asleep. Whenever it is neither sleeping nor working we say it is \emph{idle}. A schedule's cost is the invested energy (for awaking from sleep, idling, and working on jobs) plus the loss due to rejected jobs. Let $m$ denote the number of sleep intervals, $l$ the total length of idle intervals, and $\mathcal{I}_{\text{work}}$ the collection of all working intervals (i.e., times when $s(t)>0$).
Then, the schedule's \emph{sleeping energy} is $\Esleep{S}:=(m-1)\gamma$, its \emph{idling energy} is $\Eidle{S}:=l\beta$, and its \emph{working energy} is $\Ework{S}:=\int_{\mathcal{I}_{\text{work}}}P_{\alpha,\beta}(s(t))\dif{t}$. We use $\Vrej{S}$ to denote the total value of rejected jobs. Now, the cost of schedule $S$ is \begin{equation} \cost{S}:=\Esleep{S}+\Eidle{S}+\Ework{S}+\Vrej{S} . \end{equation} We seek online strategies yielding a provably good schedule. More formally, we measure the quality of online strategies by their competitive factor: For an online algorithm $A$ and a problem instance $I$ let $A(I)$ denote the resulting schedule and $O(I)$ an optimal schedule for $I$. Then, $A$ is said to be $c$-competitive if $\sup_I\frac{\cost{A(I)}}{\cost{O(I)}}\leq c$. We define the \emph{system energy} \Esys{S} of a schedule to be the energy needed to hold the system awake (whilst idling and working). That is, if $S$ is awake for a total of $x$ time units, $\Esys{S}=x\beta$. Note that $\Esys{S}\leq\Eidle{S}+\Ework{S}$. The \emph{critical speed} of the power function is defined as $\scrit:=\arg\min_{s\geq0}\sfrac{P_{\alpha,\beta}(s)}{s}$ (cf.~also~\cite{Irani:2007,Han:2010}). If job $j$ is processed at constant speed $s$ its energy usage is $w_j\cdot\sfrac{P_{\alpha,\beta}(s)}{s}$. Thus, assuming that $j$ is the only job in the system and ignoring its deadline, \scrit is the energy-optimal speed to process $j$. One can easily check that $\scrit^{\alpha}=\frac{\beta}{\alpha-1}$. Given a job $j$, let $\valdens{j}:=\sfrac{v_j}{w_j}$ denote the job's \emph{value density}. Following~\cite{Chan:2010} and~\cite{Pruhs:2010}, we define the \emph{profitable speed} \sprofup{j} of job $j$ to be the maximum speed for which its processing may be profitable. More formally, $\sprofup{j}:=\max\set{s\geq0}[w_j\cdot\sfrac{P_{\alpha,0}(s)}{s}\leq v_j]$. Note that the definition is with respect to $P_{\alpha,0}$, i.e., it ignores the system energy. The profitable speed can be more explicitly characterized by $\sprofup{j}^{\alpha-1}=\valdens{j}$. It is easy to see that a schedule that processes $j$ at average speed faster than \sprofup{j} cannot be optimal: rejecting $j$ and idling during the former execution phase would be more profitable. See Figure~\ref{fig:basicnotions} for an illustration of these notions. \begin{figure}\label{fig:basicnotions} \end{figure} \subsubsection{Optimal Available \& Structural Properties.} One of the first online algorithms for dynamic speed scaling was Optimal Available (\OA) due to \cite{Yao:1995}. As it is an essential building block not only of our but many algorithms for speed scaling, we give a short recap on its idea (see~\cite{Bansal:2007a} for a thorough discussion and analysis). At any time, \OA computes the optimal offline schedule assuming that no further jobs arrive. This optimal offline schedule is computed as follows: Let the density of an interval $I$ be defined as $\sfrac{w(I)}{\abs{I}}$. Here, $w(I)$ denotes the total work of jobs $j$ with $\intco{r_j,d_j}\subseteq I$ and $\abs{I}$ the length of $I$. Now, whenever a job arrives \OA computes so-called \emph{critical intervals} by iteratively choosing an interval of maximum density. Jobs are then scheduled at a speed equal to the density of the corresponding critical interval using the earliest deadline first policy. Let us summarize several structural facts known about the \OA schedule. 
\begin{fact}\label{fact:structuralproperties} Let $S$ and $S'$ denote the \OA schedules just before and after $j$'s arrival. We use $S(j)$ and $S'(j)$ to denote $j$'s speed in the corresponding schedule. \begin{enumerate}[(a)] \item\label{fact:structuralproperties_a} The speed function of $S$ (and $S'$) is a non-increasing staircase function. \item\label{fact:structuralproperties_b} The minimal speed of $S'$ during \intco{r_j,d_j} is at least $S'(j)$. \item\label{fact:structuralproperties_c} Let $I$ be an arbitrary period of time during \intco{r_j,d_j} (not necessarily an interval). Moreover, let $W$ denote the total amount of work scheduled by $S$ and $W'$ the one scheduled by $S'$ during $I$. Then the inequality $W\leq W'\leq W+w_j$ holds. \item\label{fact:structuralproperties_d} The speed of any $j'\neq j$ can only increase due to $j$'s arrival: $S'(j')\geq S(j')$. \end{enumerate} \end{fact}
\section{Lower Bound for Rejection-Oblivious Algorithms}\label{sec:lowerbounds} This section considers a class of simple, deterministic online algorithms that we call \emph{rejection-oblivious}. When a job arrives, a rejection-oblivious algorithm decides whether to accept or reject the job. This decision is based solely on the processor's current state (sleeping, idling, working), its current workload, and the job's properties. In particular, it does not take former decisions into account. An example of such an algorithm is $PS(c)$ in \cite{Chan:2010}. For a suitable parameter $c$, it is $\alpha^{\alpha}+2e\alpha$-competitive in the model without a sleep state. In this section we show that in our model (i.e., with a sleep state) no rejection-oblivious algorithm can be competitive. More exactly, the competitiveness of any such algorithm can become arbitrarily large. We identify the jobs' value density as a crucial parameter for the competitiveness of these algorithms. \begin{theorem}\label{thm:lowerbound} The competitiveness of any rejection-oblivious algorithm $A$ is unbounded. More exactly, for any $A$ there is a problem instance $I$ with competitive factor $\geq\valdensMAX\frac{\scrit}{P_{\alpha,\beta}(\scrit)}$. Here, \valdensMAX is the maximum value density of jobs from $I$. \end{theorem} \begin{proof} For $A$ to be competitive, there must be some $x\in\R$ such that, while $A$ is asleep, all jobs of value at most $x$ are rejected (independent of their work and deadlines). Otherwise, we can define a sequence of $n$ identical jobs $1,2,\ldots,n$ of arbitrarily small value $\epsilon$. W.l.o.g., we release them such that $A$ goes to sleep during \intco{d_{j-1},r_j} (otherwise $A$ consumes an infinite amount of energy). Thus, $A$'s cost is at least $n\gamma$. If we instead consider the schedule $S$ that rejects all jobs, we have $\cost{S}=n\epsilon$. For $\epsilon\to0$ we see that $A$'s competitive ratio is unbounded. So, let $x\in\R$ be such that $A$ rejects any job of value at most $x$ whilst asleep. Consider $n$ jobs of identical value $x$ and work $w$. For each job, the deadline is set such that $w=\scrit(d_j-r_j)$. The jobs are released in immediate succession, i.e., $r_j=d_{j-1}$. Algorithm $A$ rejects all jobs, incurring cost $n x$. Let $S$ denote the schedule that accepts all jobs and processes them at speed \scrit. The cost of $S$ is given by $\cost{S}=\gamma+n w\frac{P_{\alpha,\beta}(\scrit)}{\scrit}$. Thus, $A$'s competitive ratio is at least \begin{equation*} \frac{n x}{\gamma+n w\frac{P_{\alpha,\beta}(\scrit)}{\scrit}}=\valdensMAX\frac{1}{\frac{\gamma}{n w}+\frac{P_{\alpha,\beta}(\scrit)}{\scrit}} .
\end{equation*} For $n\to\infty$ we get the lower bound $\valdensMAX\smash{\frac{\scrit}{P_{\alpha,\beta}(\scrit)}}$. \qed \end{proof}
\section{Algorithm \& Analysis}\label{sec:algorithm+analysis} In the following, we use $A$ to refer to both our algorithm and the schedule it produces; which is meant should be clear from the context. Like most algorithms in this area (see, e.g.,~\cite{Irani:2007,Bansal:2009,Chan:2010,Han:2010,Albers:2011a}), $A$ relies heavily on the good structural properties of \OA and its wide applicability to variants of the original energy-oriented scheduling model of \textcite{Yao:1995}. It essentially consists of two components, the \emph{rejection policy} and the \emph{scheduling policy}. The rejection policy decides which jobs to accept or reject, while the scheduling policy ensures that all accepted jobs are finished by their deadline. Our rejection policy is an extension of the one used by the algorithm \PS in~\cite{Chan:2010}. It ensures that we process only jobs that have a reasonably high value (value $>$ planned energy investment) and that we do not wake up from sleep for very cheap jobs. The scheduling policy controls the speed, the job assignment, and the current mode of the processor. It is a straightforward adaptation of the algorithm used in~\cite{Han:2010}. However, its analysis proves to be more involved because we have to take into account its interaction with the rejection policy and that the job sets scheduled by the optimal algorithm and $A$ may be quite different.
\begin{lstlisting}[language=pascal,keywords={arrival,depending,on,reject,working,idling,sleeping},caption={Rejection-oblivious online scheduler $A$.},label={lst:algorithmA},captionpos=b,mathescape=true]
{at any time $t$ and for $x$ equal to current idle cost}
on arrival of job $j$:
  {let $s_{\OA}$ be $\OA^t$'s speed for $j$ if it were accepted}
  reject if $\valdens{j}<\sfrac{\scrit^{\alpha-1}}{\alpha\crejTWO^{\alpha-1}}$ or $v_j<\crejONE x$ or $s_{\OA}>\crejTWO\sprofup{j}$
depending on current mode:
  {let $\rho_t$ denote $\OA^t$'s speed planned for the current time $t$}
  working:
    if no remaining work:$\quad$ switch to idle mode
    otherwise:$\quad$ work at speed $\max(\rho_t,\scrit)$ with earliest deadline first
  idling:
    if $\disguisemath[l]{\rho_t>\scrit}{x\geq\gamma}$:$\quad$ switch to sleep mode
    if $\rho_t>\scrit$:$\quad$ switch to work mode
  sleeping:
    if $\rho_t>\scrit$:$\quad$ switch to work mode
\end{lstlisting}
The following description assumes a continuous recomputation of the current \OA schedule. See Listing~\ref{lst:algorithmA} for the corresponding pseudocode. It is straightforward to implement $A$ such that the planned schedule is recomputed only when new jobs arrive. \paragraph{Scheduling Policy.} All accepted jobs are scheduled according to the earliest deadline first rule. At any time, the processor speed is computed based on the \OA schedule. We use $\OA^t$ to denote the schedule produced by \OA if given the remaining (accepted) work at time $t$ and the power function $P_{\alpha,0}$. Let $\rho_t$ denote the speed planned by $\OA^t$ at time $t$. $A$ puts the processor either in \emph{working}, \emph{idling}, or \emph{sleeping} mode. During working mode the processor speed is set to $\max(\rho_t,\scrit)$ until there is no more remaining work. Then, the speed is set to zero and the processor starts idling. When idling or sleeping, we switch to the working mode only when $\rho_t$ becomes larger than \scrit.
When the amount of energy spent in the current idle interval equals the transition energy $\gamma$ (i.e., after time $\sfrac{\gamma}{P_{\alpha,\beta}(0)}$) the processor is suspended to sleep. \paragraph{Rejection Policy.} Let \crejONE and $\crejTWO$ be parameters to be determined later. Consider the arrival of a new job $j$ at time $r_j$. Reject it immediately if $\valdens{j}<\sfrac{\scrit^{\alpha-1}}{\alpha\crejTWO^{\alpha-1}}$. Otherwise, define the \emph{current idle cost} $x\in\intcc{0,\gamma}$ depending on the processor's state as follows: \begin{enumerate*}[(i)] \item zero if it is working, \item the length of the current idle interval times $\beta$ if it is idle, and \item $\gamma$ if it is asleep. \end{enumerate*} If $v_j<\crejONE x$, the job is rejected. Otherwise, compute the job's speed $s_{\OA}$ which would be assigned by $\OA^{r_j}$ if it were accepted. Reject the job if $s_{\OA}>\crejTWO\sprofup{j}$; accept it otherwise. \subsection{Bounding the Different Portions of the Cost} In the following, let $O$ denote an optimal schedule. Remember that $\cost{A}=\Esleep{A}+\Eidle{A}+\Ework{A}+\Vrej{A}$. We bound each of the three terms $\Esleep{A}+\Eidle{A}$, $\Ework{A}$, and $\Vrej{A}$ separately in Lemma~\ref{lem:sleep+idle}, Lemma~\ref{lem:work}, and Lemma~\ref{lem:rejectedvalue}, respectively. Eventually, Section~\ref{sec:competitiveness} combines these bounds and yields our main result: a nearly tight competitive factor depending on the maximum value density of the problem instance. \begin{lemma}[Sleep and Idle Energy]\label{lem:sleep+idle} $\Esleep{A}+\Eidle{A}\leq6\Esleep{O}+2\Esys{O}+\frac{4}{\crejONE}\Vrej{O}$ \end{lemma} \begin{proof} Let us first consider \Eidle{A}. Partition the set of idle intervals under schedule $A$ into three disjoint subsets $\mathcal{I}_1$, $\mathcal{I}_2$, and $\mathcal{I}_3$ as follows: \begin{itemize} \item $\mathcal{I}_1$ contains idle intervals not intersecting any sleep interval of $O$. By definition, the total length of idle intervals from $\mathcal{I}_1$ is bounded by the time $O$ is awake. Thus, the total cost of $\mathcal{I}_1$ is at most \Esys{O}. \item For each sleep interval $I$ of $O$, $\mathcal{I}_2$ contains any idle interval $X$ that is not the last idle interval having a nonempty intersection with $I$ and that is completely contained within $I$ (note that the former requirement is redundant if the last intersecting idle interval is not completely contained in $I$). Consider any $X\in\mathcal{I}_2$ intersecting $I$ and let $j$ denote the first job processed by $A$ after $X$. It is easy to see that we must have $\intco{r_j,d_j}\subseteq I$. Thus, $O$ has rejected $j$. But since $A$ accepted $j$, we must have $v_j\geq\crejONE\abs{X}\beta$. This implies that the total cost of $\mathcal{I}_2$ cannot exceed $\sfrac{\Vrej{O}}{\crejONE}$. \item $\mathcal{I}_3$ contains all remaining idle intervals. By definition, the first sleep interval of $O$ can intersect at most one such idle interval, while the remaining sleep intervals of $O$ can be intersected by at most two such idle intervals. Thus, if $m$ denotes the number of sleep intervals under schedule $O$, we get $\abs{\mathcal{I}_3}\leq2m-1$. Our sleeping strategy ensures that the cost of each single idle interval is at most $\gamma$. Using this and the definition of sleeping energy, the total cost of $\mathcal{I}_3$ is upper bounded by $(2m-1)\gamma=2\Esleep{O}+\gamma$. \end{itemize} Together, we get $\Eidle{A}\leq\Esys{O}+\sfrac{\Vrej{O}}{\crejONE}+2\Esleep{O}+\gamma$.
Moreover, without loss of generality we can bound $\gamma$ by $\sfrac{\Vrej{O}}{\crejONE}+\Esleep{O}$: unless both $A$ and $O$ reject all incoming jobs (in which case $A$ would be optimal), $O$ will either accept at least one job and thus wake up ($\gamma\leq\Esleep{O}$) or reject the first job $A$ accepts ($\gamma\leq\sfrac{\Vrej{O}}{\crejONE}$). This yields $\Eidle{A}\leq\Esys{O}+2\sfrac{\Vrej{O}}{\crejONE}+3\Esleep{O}$. For \Esleep{A}, note that any but the first of $A$'s sleep intervals is preceded by an idle interval of length $\sfrac{\gamma}{P_{\alpha,\beta}(0)}$. Each such idle interval has cost $\gamma$, so we get $\Esleep{A}\leq\Eidle{A}$. The lemma's statement follows by combining the bounds for $\Eidle{A}$ and $\Esleep{A}$. \qed \end{proof} \begin{lemma}[Working Energy]\label{lem:work} $\Ework{A}\leq\alpha^{\alpha}\Ework{O}+\crejTWO^{\alpha-1}\alpha^2\Vrej{O}$ \end{lemma} The proof of Lemma~\ref{lem:work} is based on the standard amortized local competitiveness argument, first used by \textcite{Bansal:2007a}. Although technically quite similar to the typical argumentation, our proof must carefully consider the more complicated rejection policy (compared to~\cite{Chan:2010}), while simultaneously handling the different processor states. Given a schedule $S$, let $\Ework{S}(t)$ denote the working energy spent until time $t$ and $\Vrej{S}(t)$ the discarded value until time $t$. We show that at any time $t\in\R_{\geq0}$ the \emph{amortized energy inequality} \begin{equation} \Ework{A}(t)+\Phi(t)\leq\alpha^{\alpha}\Ework{O}(t)+\crejTWO^{\alpha-1}\alpha^2\Vrej{O}(t) \end{equation} holds. Here, $\Phi$ is a potential function to be defined in a suitable way. It is constructed such that the following conditions hold: \begin{enumerate}[(i)] \item \emph{Boundary Condition:} At the beginning and end we have $\Phi(t)=0$. \item \emph{Running Condition:} At any time $t$ when no job arrives we have \begin{equation}\label{eqn:differentialinequality} \od{\Ework{A}(t)}{t}+\od{\Phi(t)}{t}\leq\alpha^{\alpha}\od{\Ework{O}(t)}{t}+\crejTWO^{\alpha-1}\alpha^2\od{\Vrej{O}(t)}{t} . \end{equation} \item \emph{Arrival Condition:} At any time $t$ when a job arrives we have \begin{equation}\label{eqn:differenceinequality} \Delta\Ework{A}(t)+\Delta\Phi(t)\leq\alpha^{\alpha}\Delta\Ework{O}(t)+\crejTWO^{\alpha-1}\alpha^2\Delta\Vrej{O}(t) . \end{equation} The $\Delta$-terms denote the corresponding change caused by the job arrival. \end{enumerate} Once these are proven, the amortized energy inequality follows by induction: It obviously holds for $t=0$, and Conditions (ii) and (iii) ensure that it is never violated. Applying Condition (i) yields Lemma~\ref{lem:work}. The crucial part is to define a suitable potential function $\Phi$. Our analysis combines aspects from both \cite{Chan:2010} and \cite{Han:2010}. Different rejection decisions of our algorithm $A$ and the optimal algorithm $O$ require us to handle possibly different job sets in the analysis, while the sleep management calls for a careful handling of the processor's current state. \paragraph{Construction of $\Phi$.} Consider an arbitrary time $t\in\R_{\geq0}$. Let $\Wrem{A}{t}(t_1,t_2)$ denote the remaining work at time $t$ accepted by schedule $A$ with deadline in \intoc{t_1,t_2}. We call the expression $\frac{\smash{\Wrem{A}{t}}(t_1,t_2)}{t_2-t_1}$ the \emph{density} of the interval $\intoc{t_1,t_2}$. Next, we define \emph{critical intervals} \intoc{\tau_{i-1},\tau_i}.
For this purpose, set $\tau_0:=t$ and define $\tau_i$ iteratively to be the maximum time that maximizes the density $\rho_i:=\frac{\smash{\Wrem{A}{t}}(\tau_{i-1},\tau_i)}{\tau_i-\tau_{i-1}}$ of the interval $\intoc{\tau_{i-1},\tau_i}$. We end at the first index $l$ with $\rho_l\leq\scrit$ and set $\tau_l=\infty$ and $\rho_l=\scrit$. Note that $\rho_1>\rho_2>\ldots>\rho_l=\scrit$. Now, for a schedule $S$ let $\smash{\Wrem{S}{t}}(i)$ denote the remaining work at time $t$ with deadline in the $i$-th critical interval \intoc{\tau_{i-1},\tau_i} accepted by schedule $S$. The potential function is defined as $\Phi(t):=\alpha\sum_{i=1}^l\rho_i^{\alpha-1}\left(\Wrem{A}{t}(i)-\alpha\Wrem{O}{t}(i)\right)$. It quantifies how far $A$ is ahead or behind in terms of energy. The densities $\rho_i$ essentially correspond to \OA's speed levels, but are adjusted to $A$'s usage of \OA. Note that whenever $A$ is in working mode its speed equals $\rho_1\geq\scrit$. It remains to prove the boundary, running, and arrival conditions. The boundary condition is trivially true as both $A$ and $O$ have no remaining work at the beginning and end. For the running and arrival conditions, see Propositions~\ref{prop:runningcondition} and~\ref{prop:arrivalcondition}, respectively. \begin{proposition}\label{prop:runningcondition} The running condition holds. That is, at any time $t$ when no job arrives we have \begin{equation*} \od{\Ework{A}(t)}{t}+\od{\Phi(t)}{t}\leq\alpha^{\alpha}\od{\Ework{O}(t)}{t}+\crejTWO^{\alpha-1}\alpha^2\od{\Vrej{O}(t)}{t} . \end{equation*} \end{proposition} \begin{proof} Because no jobs arrive we have $\od{\Vrej{O}(t)}{t}=0$. Let $s_A$ denote the speed of $A$ and $s_O$ the speed of $O$. Depending on these speeds, we distinguish four cases: \begin{description} \item[Case 1:] $s_O=0$, $s_A=0$\\ In this case $\od{\Ework{A}(t)}{t}=\od{\Ework{O}(t)}{t}=\od{\Phi(t)}{t}=0$. Thus, the Running Condition~\eqref{eqn:differentialinequality} holds. \item[Case 2:] $s_O=0$, $s_A>0$\\ Since $s_A>0$, algorithm $A$ is in working mode and we have $s_A=\rho_1\geq\scrit$. Moreover, $\od{\Ework{A}(t)}{t}=P_{\alpha,\beta}(s_A)$, $\smash{\od{\Phi(t)}{t}}=-\alpha s_A^{\alpha}$, and $\od{\Ework{O}(t)}{t}=0$. We get \begin{align*} & \od{\Ework{A}(t)}{t}+\od{\Phi(t)}{t}-\alpha^{\alpha}\od{\Ework{O}(t)}{t}=P_{\alpha,\beta}(s_A)-\alpha s_A^{\alpha}\\ {}={}& \beta-(\alpha-1) s_A^{\alpha} \leq\beta-(\alpha-1) \scrit^{\alpha}=0 . \end{align*} \item[Case 3:] $s_O>0$, $s_A=0$\\ In this case $l=1$ and thus $\rho_1=\scrit$. The terms in Inequality~\eqref{eqn:differentialinequality} become $\od{\Ework{A}(t)}{t}=0$, $\od{\Phi(t)}{t}=\alpha^2\scrit^{\alpha-1} s_O$, and $\od{\Ework{O}(t)}{t}=P_{\alpha,\beta}(s_O)$. We get \begin{align*} &\textstyle \od{\Ework{A}(t)}{t}+\od{\Phi(t)}{t}-\alpha^{\alpha}\od{\Ework{O}(t)}{t}=\alpha^2\scrit^{\alpha-1} s_O-\alpha^{\alpha} P_{\alpha,\beta}(s_O)\\ {}\leq{}&\textstyle \alpha^2\scrit^{\alpha-1} s_O-\alpha^{\alpha} s_O\frac{P_{\alpha,\beta}(\scrit)}{\scrit}\leq s_O\scrit^{\alpha-1}(\alpha^2-\alpha^{\alpha})\leq0 . \end{align*} \item[Case 4:] $s_O>0$, $s_A>0$\\ Because of $s_A>0$ we know $A$ is in the working state and, thus, $s_A=\rho_1\geq\scrit$. So, this time we have $\od{\smash{\Ework{A}}(t)}{t}=P_{\alpha,\beta}(s_A)$, $\od{\Phi(t)}{t}=-\alpha s_A^{\alpha}+\alpha^2\rho_k^{\alpha-1}s_O$, and $\od{\smash{\Ework{O}}(t)}{t}=P_{\alpha,\beta}(s_O)$. 
We get \begin{align*} \od{\Ework{A}(t)}{t}+\od{\Phi(t)}{t}-\alpha^{\alpha}\od{\Ework{O}(t)}{t}&=P_{\alpha,\beta}(s_A)-\alpha s_A^{\alpha}+\alpha^2 \rho_k^{\alpha-1} s_O-\alpha^{\alpha} P_{\alpha,\beta}(s_O)\\ &\leq s_A^{\alpha}-\alpha s_A^{\alpha}+\alpha^2 s_A^{\alpha-1} s_O-\alpha^{\alpha} s_O^{\alpha}\leq0 . \end{align*} The last inequality follows from the same argument as in~\cite{Bansal:2007a}: Divide by $s_O^{\alpha}$ and substitute $z=\sfrac{s_A}{s_O}$. It becomes equivalent to $(1-\alpha)z^{\alpha}+\alpha^2z^{\alpha-1}-\alpha^{\alpha}\leq0$. Differentiating with respect to $z$ shows that the left-hand side is maximized at $z=\alpha$, where it equals zero. \end{description} \qed \end{proof} \begin{proposition}\label{prop:arrivalcondition} The arrival condition holds. That is, at any time $t$ when a job arrives we have \begin{equation*} \Delta\Ework{A}(t)+\Delta\Phi(t)\leq\alpha^{\alpha}\Delta\Ework{O}(t)+\crejTWO^{\alpha-1}\alpha^2\Delta\Vrej{O}(t) . \end{equation*} Here, the $\Delta$-terms denote the corresponding change caused by the job arrival. \end{proposition} \begin{proof} The arrival of a job $j$ does not change the energy invested so far, thus $\Delta\Ework{A}(t)=0$ and $\Delta\Ework{O}(t)=0$. If $A$ rejects $j$, we have $\Delta\Phi(t)\leq0$ and $\Delta\Vrej{O}(t)\geq0$, thus the Arrival Condition~\eqref{eqn:differenceinequality} holds. So assume $A$ accepts $j$. The arrival of $j$ may change the critical intervals and their densities significantly. However, as pointed out in \cite{Bansal:2007a}, these changes can be broken down into a series of simpler changes affecting at most two adjacent critical intervals. Thus, we first consider the effect of arrivals which do not change the critical intervals. Afterward, we use the technique from~\cite{Bansal:2007a} to reduce an arbitrary change to these simple cases. \begin{description} \item[Case 1:] The critical intervals remain unchanged and only $\rho_k$ for $k<l$ changes.\\ Let $\rho_k$ and $\rho_k'$ denote the densities of \intoc{\tau_{k-1},\tau_k} just before and after $j$'s arrival, respectively. That is, $\rho_k'=\rho_k+\smash{\frac{w_j}{\tau_k-\tau_{k-1}}}$. Note that $\rho_k'$ is the speed planned by \smash{$\OA^t$} for job $j$. Because $A$ accepted $j$ we have $\rho_k'\leq\crejTWO\sprofup{j}$. If $j$ is rejected by $O$, we have $\Delta\Vrej{O}(t)=v_j$. Since only the $k$-th critical interval is affected, the change in the potential function is given by \begin{equation*} \Delta\Phi(t)=\alpha\rho_k'^{\alpha-1}\left(\Wrem{A}{t}(k)+w_j-\alpha\Wrem{O}{t}(k)\right)-\alpha\rho_k^{\alpha-1}\left(\Wrem{A}{t}(k)-\alpha\Wrem{O}{t}(k)\right) . \end{equation*} Now, analogously to Lemma~4 in \cite{Chan:2010}, we can bound $\Delta\Phi(t)$ as follows: \begin{align*} &\textstyle \alpha\rho_k'^{\alpha-1}\left(\Wrem{A}{t}(k)+w_j-\alpha\Wrem{O}{t}(k)\right)-\alpha\rho_k^{\alpha-1}\left(\Wrem{A}{t}(k)-\alpha\Wrem{O}{t}(k)\right)\\ {}\leq{} &\textstyle \alpha\rho_k'^{\alpha-1}\left(\Wrem{A}{t}(k)+w_j\right)-\alpha\rho_k^{\alpha-1}\Wrem{A}{t}(k)=\alpha\frac{\left(\Wrem{A}{t}(k)+w_j\right)^{\alpha}-\Wrem{A}{t}(k)^{\alpha}}{(\tau_k-\tau_{k-1})^{\alpha-1}}\\ {}\leq{} &\textstyle \alpha^2\frac{\left(\Wrem{A}{t}(k)+w_j\right)^{\alpha-1}w_j}{(\tau_k-\tau_{k-1})^{\alpha-1}}=\alpha^2\rho_k'^{\alpha-1} w_j\leq\alpha^2(\crejTWO\sprofup{j})^{\alpha-1} w_j=\crejTWO^{\alpha-1}\alpha^2v_j . \end{align*} The penultimate inequality uses the fact that $f(x)=x^{\alpha}$ is convex, yielding $f(y)-f(x)\leq f'(y)\cdot(y-x)$ for all $y>x$. This implies the Arrival Condition~\eqref{eqn:differenceinequality}.
If $j$ is accepted by $O$, we have $\Delta\Vrej{O}(t)=0$ and $\Delta\Phi(t)$ becomes \begin{equation*} \alpha\rho_k'^{\alpha-1}\left(\Wrem{A}{t}(k)+w_j-\alpha\left(\Wrem{O}{t}(k)+w_j\right)\right)-\alpha\rho_k^{\alpha-1}\left(\Wrem{A}{t}(k)-\alpha\Wrem{O}{t}(k)\right) . \end{equation*} In the same way as in~\cite{Bansal:2007a}, we now get $\Delta\Phi(t)\leq0$. \item[Case 2:] Only the amount of work assigned to the last critical interval increases.\\ Remember that, by definition, $\tau_l=\infty$ and $\rho_l=\scrit$. Since $j$ is accepted by $A$ we have $\scrit^{\alpha-1}\leq\alpha\crejTWO^{\alpha-1}\valdens{j}$. If $O$ rejects $j$ we have $\Delta\Vrej{O}(t)=v_j$ and the Arrival Condition~\eqref{eqn:differenceinequality} follows from \begin{align*} \Delta\Phi(t) &= \alpha\rho_l^{\alpha-1}\left(\Wrem{A}{t}(l)+w_j-\alpha\Wrem{O}{t}(l)\right)-\alpha\rho_l^{\alpha-1}\left(\Wrem{A}{t}(l)-\alpha\Wrem{O}{t}(l)\right)\\ &= \alpha\scrit^{\alpha-1} w_j\leq\alpha^2\crejTWO^{\alpha-1}\valdens{j} w_j=\crejTWO^{\alpha-1}\alpha^2v_j . \end{align*} If $O$ accepts $j$, $\Delta\Vrej{O}(t)=0$ and the Arrival Condition~\eqref{eqn:differenceinequality} is implied by \begin{align*} \Delta\Phi(t) &= \alpha\rho_l^{\alpha-1}\left(\Wrem{A}{t}(l)+w_j-\alpha\left(\Wrem{O}{t}(l)+w_j\right)\right)-\alpha\rho_l^{\alpha-1}\left(\Wrem{A}{t}(l)-\alpha\Wrem{O}{t}(l)\right)\\ &= \alpha\rho_l^{\alpha-1}w_j(1-\alpha)\leq0 . \end{align*} \end{description} Let us now consider the arrival of an arbitrary job $j$. The idea is to split this job into two jobs $j_1$ and $j_2$ with the same release time, deadline, and value density as $j$. Their total work equals $w_j$. Let $x$ denote the size of $j_1$. We determine a suitable $x$ by continuously increasing $x$ from $0$ to $w_j$ until two critical intervals merge or one critical interval splits. The arrival of $j_1$ can then be handled by one of the above cases, while $j_2$ is treated recursively in the same way as $j$. For details, see~\cite{Bansal:2007a} or~\cite{Han:2010}. \qed \end{proof} \subsubsection{Bounding the Rejected Value.}\label{sec:rejectedvalue} In the following we bound the total value \Vrej{A} of jobs rejected by $A$. The general idea is similar to the one by \textcite{Chan:2010}. However, in contrast to the simpler model without sleep states, we must handle small-valued jobs of high density explicitly (cf.~Section~\ref{sec:lowerbounds}). Moreover, the sleeping policy introduces an additional difficulty: our algorithm does not preserve all structural properties of an \OA schedule (cf.~Fact~\ref{fact:structuralproperties}). This prohibits a direct mapping between the energy consumption of algorithm $A$ and of the intermediate \OA schedules during a \emph{fixed time interval}, as used in the corresponding proof in~\cite{Chan:2010}. Indeed, the actual energy used by $A$ during a fixed time interval may decrease compared to the energy planned by the intermediate \OA schedule, as $A$ may decide to raise the speed to \scrit at certain points in the schedule. Thus, to bound the value of a job rejected by $A$ but processed by the optimal algorithm for a relatively long time, we have to consider the energy usage for the workload \OA planned for that time (instead of the actual energy usage for the workload $A$ processed during that time, which might be quite different). \begin{lemma}[Rejected Value]\label{lem:rejectedvalue} Let \valdensMAX be the maximum value density of jobs of value less than $\crejONE\gamma$ and consider an arbitrary parameter $b\geq\sfrac{1}{\crejTWO}$. 
Then, $A$'s rejected value is at most \begin{equation*} \Vrej{A}\leq\max\left(\valdensMAX\frac{\scrit}{P_{\alpha,\beta}(\scrit)},b^{\alpha-1}\right)\Ework{O}+\frac{b^{\alpha-1}}{(\crejTWO b-1)^{\alpha}}\Ework{A}+\Vrej{O}. \end{equation*} \end{lemma} \begin{proof} Partition the jobs rejected by $A$ into two disjoint subsets $J_1$ (jobs rejected by both $A$ and $O$) and $J_2$ (jobs rejected by $A$ only). The total value of jobs in $J_1$ is at most $\Vrej{O}$. Thus, it suffices to show that the total value of $J_2$ is bounded by \begin{equation*} \textstyle \max\left(\valdensMAX\frac{\scrit}{P_{\alpha,\beta}(\scrit)},b^{\alpha-1}\right)\Ework{O}+\frac{b^{\alpha-1}}{(\crejTWO b-1)^{\alpha}}\Ework{A} . \end{equation*} To this end, let $j\in J_2$. Remember that, because of the convexity of the power function, $O$ can be assumed to process $j$ at a constant speed $s_O$. Otherwise processing $j$ at its average speed could only improve the schedule. Let us distinguish three cases, depending on the reason for which $A$ rejected $j$: \begin{description} \item[Case 1:] $j$ got rejected because of $\valdens{j}<\frac{\scrit^{\alpha-1}}{\alpha\crejTWO^{\alpha-1}}$.\\ Let $\Ework{O}(j)$ denote the working energy invested by $O$ into job $j$. Using the rejection condition we can compute \begin{equation*} \textstyle \Ework{O}(j)=\frac{P_{\alpha,\beta}(s_O)}{s_O} w_j\geq\frac{P_{\alpha,\beta}(\scrit)}{\scrit} w_j\geq\scrit^{\alpha-1} w_j>\alpha\crejTWO^{\alpha-1} v_j . \end{equation*} Together with $b\geq\sfrac{1}{\crejTWO}$ we get $v_j<b^{\alpha-1}\Ework{O}(j)$. \item[Case 2:] $j$ got rejected because of $v_j<\crejONE x$\\ As in the algorithm description, let $x\in[0,\gamma]$ denote the current idle cost at time $r_j$. Since $j$'s value is less than $\crejONE x\leq\crejONE\gamma$, we have $\valdens{j}\leq\valdensMAX$. We get \begin{equation*} \textstyle \Ework{O}(j)=\frac{P_{\alpha,\beta}(s_O)}{s_O} w_j=\frac{P_{\alpha,\beta}(s_O)}{s_O}\frac{v_j}{\valdens{j}}\geq\frac{P_{\alpha,\beta}(\scrit)}{\scrit}\frac{v_j}{\valdensMAX}, \end{equation*} which eventually yields $v_j\leq\valdensMAX\frac{\scrit}{P_{\alpha,\beta}(\scrit)}\Ework{O}(j)$. \item[Case 3:] $j$ got rejected because of $s_{\OA}>\crejTWO\sprofup{j}$\\ Here, $s_{\OA}$ denotes the speed $\OA^{r_j}$ would assign to $j$ if it were accepted. We use $\OA^{r_j}_-$ to refer to the \OA schedule at time $r_j$ without $j$. Let $b_j:=\sfrac{\sprofup{j}}{s_O}$. We bound $v_j$ in different ways, depending on $b_j$. If $b_j$ is small (i.e., $b_j\leq b$) we use \begin{equation*} \textstyle \Ework{O}(j)\geq\frac{P_{\alpha,0}(s_O)}{s_O} w_j=\frac{P_{\alpha,0}(\sfrac{\sprofup{j}}{b_j})}{\sfrac{\sprofup{j}}{b_j}} w_j=\frac{\sprofup{j}^{\alpha-1}}{b_j^{\alpha-1}} w_j=\frac{v_j}{b_j^{\alpha-1}}. \end{equation*} That is, we have $v_j\leq b_j^{\alpha-1}\Ework{O}(j)$. Otherwise, if $b_j$ is relatively large, $v_j$ is bounded by $\Ework{A}$. Let $I$ denote the period of time when $O$ processes $j$ at constant speed $s_O$ and let $W$ denote the work processed by $\OA^{r_j}_-$ during this time. Since $I\subseteq\intco{r_j,d_j}$, Fact~\ref{fact:structuralproperties}(\ref{fact:structuralproperties_b}) implies that $\OA^{r_j}$'s speed during $I$ is at least $s_{\OA}>\crejTWO\sprofup{j}$. Thus, the total amount of work processed by $\OA^{r_j}$ during $I$ is larger than $\crejTWO\sprofup{j}\abs{I}$. But then, by applying Fact~\ref{fact:structuralproperties}(\ref{fact:structuralproperties_c}), we see that $W$ must be larger than $\crejTWO\sprofup{j}\abs{I}-w_j$. 
Now, $W$ is a subset of the work processed by $A$. Moreover, Fact~\ref{fact:structuralproperties}(\ref{fact:structuralproperties_d}) and the definition of algorithm $A$ ensure that the speeds used for this work in schedule $A$ cannot be smaller than the ones used in $\OA^{r_j}_-$. In particular, the average speed $s_{\varnothing}$ used for this work in schedule $A$ is at least $\sfrac{W}{\abs{I}}$ (the average speed used by $\OA^{r_j}_-$ for this work). Let $\Ework{A}(W)$ denote the energy invested by schedule $A$ into the work $W$. Then, by exploiting the convexity of the power function, we get \begin{align*} \Ework{A}(W) &\textstyle\geq \frac{P_{\alpha,\beta}(s_{\varnothing})}{s_{\varnothing}}W\geq\frac{P_{\alpha,0}(s_{\varnothing})}{s_{\varnothing}}W={s_{\varnothing}}^{\alpha-1}W\geq\frac{W^{\alpha-1}}{\abs{I}^{\alpha-1}}W=\abs{I}\frac{W^{\alpha}}{\abs{I}^{\alpha}}\\ &\textstyle>\abs{I}(\crejTWO\sprofup{j}-s_O)^{\alpha}=\frac{w_j}{s_O}s_O^{\alpha}(\crejTWO b_j-1)^{\alpha}=\frac{(\crejTWO b_j-1)^{\alpha}}{b_j^{\alpha-1}} v_j. \end{align*} That is, we have $v_j<\frac{b_j^{\alpha-1}}{(\crejTWO b_j-1)^{\alpha}}\Ework{A}(W)$. Now, let us specify how to choose from these two bounds: \begin{itemize} \item If $b_j\leq b$, we apply the first bound: $v_j\leq b_j^{\alpha-1}\Ework{O}(j)\leq b^{\alpha-1}\Ework{O}(j)$. \item Otherwise we have $b_j>b\geq\sfrac{1}{\crejTWO}$. Note that for $x>\sfrac{1}{c}$ the function \smash{$f(x)=\frac{x^{\alpha-1}}{(cx-1)^{\alpha}}$} decreases. Thus, we get \smash{$v_j<\frac{b^{\alpha-1}}{(\crejTWO b-1)^{\alpha}}\Ework{A}(W)$}. \end{itemize} \end{description} By combining these cases we get \begin{equation*} \textstyle v_j\leq\max\left(\valdensMAX\frac{\scrit}{P_{\alpha,\beta}(\scrit)},b^{\alpha-1}\right)\Ework{O}(j)+\frac{b^{\alpha-1}}{(\crejTWO b-1)^{\alpha}}\Ework{A}(W). \end{equation*} Note that the energies referred to, $\Ework{O}(j)$ as well as $\Ework{A}(W)$, stem from disjoint parts of the respective schedules for different jobs $j$. Thus, we can combine these inequalities for all jobs $j\in J_2$ to get the desired result. \qed \end{proof} \subsection{Putting it All Together.}\label{sec:competitiveness} The following theorem combines the results of Lemma~\ref{lem:sleep+idle}, Lemma~\ref{lem:work}, and Lemma~\ref{lem:rejectedvalue}. \begin{theorem}\label{thm:competitiveness} Let $\alpha\geq2$ and let \valdensMAX be the maximum value density of jobs of value less than $\crejONE\gamma$. Moreover, define $\eta:=\max\bigl(\valdensMAX\frac{\scrit}{P_{\alpha,\beta}(\scrit)},b^{\alpha-1}\bigr)$ and $\mu:=\smash{\frac{b^{\alpha-1}}{(\crejTWO b-1)^{\alpha}}}$ for a parameter $b\geq\sfrac{1}{\crejTWO}$. Then, $A$'s competitive factor is at most \begin{equation*} \max\left(\crejTWO^{\alpha-1}\alpha^2,\alpha^{\alpha}\right)\left(1+\mu\right)+\max\left(2+\eta,1+\sfrac{4}{\crejONE}\right) . \end{equation*} \end{theorem} \begin{proof} Lemma~\ref{lem:sleep+idle} together with the relation $\Esys{O}\leq\Eidle{O}+\Ework{O}$ bounds the sleep and idle energy of $A$ with respect to $O$'s cost as $\Esleep{A}+\Eidle{A}\leq6\Esleep{O}+2\Eidle{O}+2\Ework{O}+\frac{4}{\crejONE}\Vrej{O}$. For the working energy, Lemma~\ref{lem:work} yields $\Ework{A}\leq\alpha^{\alpha}\Ework{O}+\crejTWO^{\alpha-1}\alpha^2\Vrej{O}$.
To bound the total value rejected by $A$ with respect to the cost of $O$, we apply Lemma~\ref{lem:work} to Lemma~\ref{lem:rejectedvalue} and get \begin{equation*} \Vrej{A}\leq\eta\Ework{O}+\mu\Ework{A}+\Vrej{O}\leq \left(\eta+\alpha^{\alpha}\mu\right)\Ework{O}+\left(\crejTWO^{\alpha-1}\alpha^2\mu+1\right)\Vrej{O} . \end{equation*} Using these inequalities, we can bound the cost of $A$ as follows: \begin{align*} \cost{A}\leq{} & 6\Esleep{O} + 2\Eidle{O} + \left(\alpha^{\alpha}+\alpha^{\alpha}\mu+2+\eta\right)\Ework{O}\\ & + \left(\crejTWO^{\alpha-1}\alpha^2+\crejTWO^{\alpha-1}\alpha^2\mu+1+\sfrac{4}{\crejONE}\right)\Vrej{O} . \end{align*} Since $6\leq\alpha^{\alpha}+2$ for $\alpha\geq2$, we get the following bound on $A$'s competitive factor: \begin{equation*} \textstyle \frac{\cost{A}}{\cost{O}}\leq\max\left(\crejTWO^{\alpha-1}\alpha^2,\alpha^{\alpha}\right)\left(1+\mu\right)+\max\left(2+\eta,1+\sfrac{4}{\crejONE}\right) . \end{equation*} \qed \end{proof} By a careful choice of parameters we get a constant competitive ratio if restricting the value density of small-valued jobs accordingly. So, let $\alpha\geq2$ and set $\crejTWO=\alpha^{\frac{\alpha-2}{\alpha-1}}$, $b=\frac{\alpha+1}{\crejTWO}$, and $\crejONE=\frac{4}{1+b^{\alpha-1}}\leq1$. Applying Theorem~\ref{thm:competitiveness} using these parameters yields the following results: \begin{corollary}\label{cor:nicegeneralcompetitiveness} Algorithm $A$ is $\alpha^{\alpha}+2 e\alpha+\valdensMAX\frac{\scrit}{P_{\alpha,\beta}(\scrit)}$-competitive. \end{corollary} \begin{corollary}\label{cor:competitiveness} Algorithm $A$ is $\alpha^{\alpha}+2 e\alpha$-competitive if we restrict it to instances of maximum value density $\valdensMAX:=b^{\alpha-1}\frac{P_{\alpha,\beta}(\scrit)}{\scrit}$. This competitive factor still holds if the restriction is only applied to jobs of value less than \smash{$\frac{4}{1+b^{\alpha-1}}\gamma$}. \end{corollary} \begin{proof} First note the identity $b^{\alpha-1}=\alpha(1+\sfrac{1}{\alpha})^{\alpha-1}$. Moreover, using the definitions from Theorem~\ref{thm:competitiveness}, we see that $\eta=b^{\alpha-1}$ and $\alpha^{\alpha}\mu=b^{\alpha-1}$. By applying Theorem~\ref{thm:competitiveness} to our choice of parameters, the competitive factor of $A$ becomes \begin{align*} \alpha^{\alpha}(1+\mu)+2+\eta &= \alpha^{\alpha}+2+2 b^{\alpha-1}=\alpha^{\alpha} + 2\left(1+\alpha(1+\sfrac{1}{\alpha})^{\alpha-1}\right)\\ &\leq \alpha^{\alpha} + 2\alpha(1+\sfrac{1}{\alpha})^{\alpha}\leq\alpha^{\alpha}+2 e\alpha . \end{align*} \qed \end{proof} \begin{corollary}\label{cor:competitivenessforcheapjobs} If only considering instances for which the job values are at least $\frac{8}{2+3\alpha}\gamma\leq\gamma$, $A$'s competitive factor is at most $\alpha^{\alpha}+2e\alpha$. \end{corollary} \begin{proof} Follows from Corollary~\ref{cor:competitiveness} by using that for $\alpha\geq2$ we have $\frac{4}{1+b^{\alpha-1}}=\frac{4}{1+\alpha(1+\sfrac{1}{\alpha})^{\alpha-1}}\leq\frac{4}{1+\frac{3}{2}\alpha}=\frac{8}{2+3\alpha}$. \qed \end{proof} Note that the bound from Corollary~\ref{cor:nicegeneralcompetitiveness} is nearly tight with respect to $\valdensMAX$ and the lower bound from Theorem~\ref{thm:lowerbound}. \section{The Speed-Bounded Case} As stated earlier, our model can be considered as a generalization of~\cite{Chan:2010}. It adds sleep states, leading to several structural difficulties which we solved in the previous section. A further, natural generalization to the model is to cap the speed at some maximum speed $T$. 
Algorithms based on \OA often lend themselves to such bounded speed models. In many cases, a canonical adaptation – possibly mixed with a more involved job selection rule – leads to an algorithm for the speed bounded case with similar properties (see, e.g., \cite{Chan:2010,Han:2010,Chan:2007,Bansal:2008}). A notable property of the profit-oriented scheduling model of~\cite{Chan:2010} is that limiting the maximum speed leads to a non-constant competitive factor. Instead, it becomes highly dependent on a job's penalty ratio defined as $\penrat{j}:=\sfrac{\sprofup{j}}{T}$. They derive a lower bound of $\LDAUOmega{\max(\sfrac{e^{\alpha-1}}{\alpha},\penratMAX^{\alpha-2+\sfrac{1}{\alpha}})}$ where $\penratMAX=\max\penrat{j}$. Since our model generalizes their model, this bound transfers immediately to our setting (for the case $\beta=\gamma=0$). On the positive side we can adapt our algorithm, similar to~\cite{Chan:2010}, by additionally rejecting a job if its speed planned by \OA is larger than $T$ (cf.~rejection condition in algorithm description, Section~\ref{sec:algorithm+analysis}). Our main theorem from Section~\ref{sec:algorithm+analysis} becomes \begin{theorem}\label{thm:competitiveness+boundedspeed} Let $\alpha\geq2$ and let \valdensMAX be the maximum value density of jobs of value less than $\crejONE\gamma$. Moreover, define $\eta:=\max\bigl(\valdensMAX\frac{\scrit}{P_{\alpha,\beta}(\scrit)},\penratMAX^{\alpha-1}b^{\alpha-1}\bigr)$ and $\mu:=\penratMAX^{\alpha-1}\smash{\frac{b^{\alpha-1}}{(b-1)^{\alpha}}}$ for $b\geq1$. Then, $A$'s competitive factor is at most \begin{equation*} \alpha^{\alpha}\left(1+\mu\right)+\max\left(2+\eta,1+\sfrac{4}{\crejONE}\right) . \end{equation*} \end{theorem} \begin{proof}[sketch] Note that the results from Lemmas~\ref{lem:sleep+idle} and~\ref{lem:work} remain valid without any changes, as an additional rejection rule does not influence the corresponding proofs. The only lemma affected by the changed algorithm is Lemma~\ref{lem:rejectedvalue}. In its proof, we have to consider an additional rejection case, namely that job $j$ got rejected because of $s_{\OA}>T=\frac{1}{\penrat{j}}\sprofup{j}$. This can be handled completely analogously to Case~3 in the proof, using the factor $\frac{1}{\penrat{j}}$ instead of \crejTWO. We get the bounds $v_j\leq b_j^{\alpha-1}\Ework{O}(j)$ and $v_j<\sfrac{b_j^{\alpha-1}}{(\sfrac{b_j}{\penrat{j}}-1)^{\alpha}}\Ework{A}(W)$. If $b_j\leq\penrat{j}b$ this yields $v_j\leq \penrat{j}^{\alpha-1}b^{\alpha-1}\Ework{O}(j)$. Otherwise, if $b_j>\penrat{j}b$, we have $v_j<\penrat{j}^{\alpha-1}\frac{b^{\alpha-1}}{(b-1)^{\alpha}}\Ework{A}(W)$. The remaining argumentation is the same as in the proof of Theorem~\ref{thm:competitiveness}. \qed \end{proof} For $b=\alpha+1$ and the interesting case $\Gamma>1$ we get a competitive factor of $\alpha^{\alpha}(1+2\Gamma^{\alpha-1})+\valdensMAX\frac{\scrit}{P_{\alpha,\beta}(\scrit)}$. For job values of at most $\gamma$ it is $\alpha^{\alpha}(1+2\penratMAX^{\alpha-1})$. \section{Conclusion \& Outlook} We examined a new model that combines modern energy conservation techniques with profitability. Our results show an inherent connection between the necessary and sufficient competitive ratio of rejection-oblivious algorithms and the maximum value density. A natural question is how far this connection applies to other, more involved algorithm classes. Can we find better strategies if allowed to reject jobs even after we invested some energy, or if taking former rejection decisions into account? 
Such more involved rejection policies have proven useful in other models~\cite{Pruhs:2010,Han:2010}, and we conjecture that they would prove useful in our setting as well. Other interesting directions include models for multiple processors as well as general power functions. \Textcite{Pruhs:2010} modeled job values and deadlines in a more general way, which seems especially interesting for our profit-oriented model. \printbibliography \end{document}
\begin{document} \title{An estimate for narrow operators on $L^p([0, 1])$} \author{Eugene Shargorodsky {\protect \and} Teo Sharia} \address{E. Shargorodsky \\ Department of Mathematics\\ King's College London\\ Strand, London WC2R 2LS\\ United Kingdom\quad and\quad Technische Universit\"at Dresden\\ Fakult\"at Mathematik\\ 01062 Dresden\\ Germany} \email{[email protected]} \address{T. Sharia\\ Department of Mathematics\\ Royal Holloway\\ University of London\\ Egham, Surrey TW20 0EX\\ United Kingdom} \email{[email protected]} \date{} \begin{abstract} We prove a theorem, which generalises C. Franchetti's estimate for the norm of a projection onto a rich subspace of $L^p([0, 1])$ and the authors' related estimate for compact operators on $L^p([0, 1])$, $1 \le p < \infty$. \end{abstract} \subjclass[2000]{47A30, 47B07, 47B38, 46E30} \maketitle \section{Introduction} For Banach spaces $X$ and $Y$, let $\mathcal{B}(X, Y)$ and $\mathcal{K}(X, Y)$ denote the sets of bounded linear and compact linear operators from $X$ to $Y$, respectively; $\mathcal{B}(X) := \mathcal{B}(X, X)$, $\mathcal{K}(X) := \mathcal{K}(X, X)$; $I \in \mathcal{B}(X)$ denotes the identity operator. An operator $P \in \mathcal{B}(X)$ is called a \textit{projection} if $P^2 = P$. A closed linear subspace $X_0 \subset X$ is called \textit{1-complemented} (in $X$) if there exists a projection $P \in \mathcal{B}(X)$ such that $P(X) = X_0$ and $\|P\| = 1$. Let $(\Omega, \Sigma, \mu)$ be a nonatomic measure space with $0 < \mu(\Omega) < \infty$. We will use the following notation: \begin{itemize} \item $\Sigma^+ :=\{A \in \Sigma : \ \mu(A) > 0\}$, \item $\mathbb{I}_A$ is the indicator function of $A \in \Sigma$, i.e. $\mathbb{I}_A(\omega) = 1$ if $\omega \in A$ and $\mathbb{I}_A(\omega) = 0$ if $\omega \not\in A$, \item $\mathbf{1} := \mathbb{I}_\Omega$, \item $\mathbf{ E}f := \left(\frac{1}{\mu(\Omega)} \int_\Omega f\, d\mu\right) \mathbf{1}$. \end{itemize} We will use the terminology from \cite{PR13}. A $\Sigma$-measurable function $g$ is called a \textit{sign} if it takes values in the set $\{-1, 0, 1\}$, and a \textit{sign on} $A \in \Sigma$ if it is a sign with the support equal to $A$, i.e. if $g^2 = \mathbb{I}_A$. A sign is of \textit{mean zero} if $\int_\Omega g\, d\mu = 0$. An operator $T \in \mathcal{B}(L^p(\mu), Y)$, $1 \le p < \infty$ is called \textit{narrow} if for every $A \in \Sigma^+$ and every $\varepsilon > 0$, there exists a mean zero sign $g$ on $A$ such that $\|Tg\| < \varepsilon$. Every $T \in \mathcal{K}(L^p(\mu), Y)$ is narrow (see \cite[Proposition 2.1]{PR13}), but there are noncompact narrow operators. Indeed, let $\mathcal{G}$ be a sub-$\sigma$-algebra of $\Sigma$ such that there exists a random variable $\xi$ on $\left(\Omega, \Sigma, \frac{1}{\mu(\Omega)}\,\mu\right)$, which is independent of $\mathcal{G}$ and has a nontrivial Gaussian distribution. Then the corresponding conditional expectation operator $\mathbf{E}^\mathcal{G} = \mathbf{ E}(\cdot | \mathcal{G}) \in \mathcal{B}(L^p(\mu))$ is narrow (see \cite[Corollary 4.25]{PR13}), but not compact if $\mathcal{G}$ has infinitely many pairwise disjoint elements of positive measure. Let \begin{equation}\label{Cp} C_p := \max_{0 \le \alpha \le1} \left(\alpha^{p - 1} + (1 - \alpha)^{p - 1}\right)^{\frac1p} \left(\alpha^{\frac1{p - 1}} + (1 - \alpha)^{\frac1{p - 1}}\right)^{1 - \frac1p} \end{equation} for $ 1 < p < \infty$, and $C_1 := 2$. 
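We remark, purely for illustration (the observation is not needed in what follows), that for $p = 2$ the expression under the maximum in \eqref{Cp} equals
$$
\left(\alpha + (1 - \alpha)\right)^{\frac12} \left(\alpha + (1 - \alpha)\right)^{\frac12} = 1
$$
for every $\alpha \in [0, 1]$, so that $C_2 = 1$. This is consistent with the equality $\|I - \mathbf{ E}\|_{L^p \to L^p} = C_p$ appearing in Theorem \ref{Fran} below: $I - \mathbf{ E}$ is the orthogonal projection of $L^2(\mu)$ onto the subspace of mean zero functions, so its norm is $1$.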
In the following theorems, $(\Omega, \Sigma, \mu)= ([0, 1], \mathcal{L}, \lambda)$, where $\lambda$ is the standard Lebesgue measure on $[0, 1]$ and $\mathcal{L}$ is the $\sigma$-algebra of Lebesgue measurable subsets of $[0, 1]$. Our starting point is a result due to C. Franchetti. \begin{theorem}[\cite{F90}, \cite{F92}]\label{Fran} Let $P \in \mathcal{B}(L^p([0, 1]))\setminus\{0\}$ be a narrow projection operator, $1 \le p < \infty$. Then \begin{equation}\label{P} \|I - P\|_{L^p \to L^p} \ge \|I - \mathbf{ E}\|_{L^p \to L^p} = C_p . \end{equation} \end{theorem} The following theorem was proved in \cite{SS}, where it was used to show that $C_p$ is the optimal constant in the bounded compact approximation property of $L^p([0, 1])$. It implies the inequality in \eqref{P} in the case when $P \not= 0$ is a finite-rank projection. \begin{theorem}[\cite{SS}]\label{compT} Let $1 \le p < \infty$, $\gamma \in \mathbb{C}$, and let $T \in \mathcal{K}(L^p([0, 1]))$. Then \begin{equation}\label{gammag} \|I - T\|_{L^p \to L^p} + \inf_{\|u\|_{L^p} = 1} \|(\gamma I - T)u\|_{L^p} \ge \|I - \gamma\mathbf{ E}\|_{L^p \to L^p} . \end{equation} In particular, \begin{equation}\label{gamma1} \|I - T\|_{L^p \to L^p} + \inf_{\|u\|_{L^p} = 1} \|(I - T)u\|_{L^p} \ge C_p . \end{equation} \end{theorem} The following theorem is the main result of the paper. It generalises both Theorems \ref{Fran} and \ref{compT}. \begin{theorem}\label{gammaT} Estimates \eqref{gammag} and \eqref{gamma1} hold for all narrow operators $T \in \mathcal{B}(L^p([0, 1]))$, $1 \le p < \infty$. \end{theorem} In the case $p = 1$, inequality \eqref{gamma1} (for all narrow operators) follows from a Daugavet-type result due to A.M. Plichko and M.M. Popov (\cite[\S9, Theorem 8]{PP90}, see also \cite[Corollary 6.4]{PR13}): $$ \|I - T\| = 1 + \|T\| \quad\mbox{for every narrow operator}\quad T \in \mathcal{B}(L^1([0, 1])) . $$ Indeed, \begin{align*} \|I - T\|_{L^1 \to L^1} + \inf_{\|u\|_{L^1} = 1} \|(I - T)u\|_{L^1} \ge \|I - T\|_{L^1 \to L^1} + 1 - \sup_{\|u\|_{L^1} = 1} \|Tu\|_{L^1} \\ = 1 + \|T\| _{L^1 \to L^1} + 1 - \|T\|_{L^1 \to L^1} = 2 = C_1 . \end{align*} \section{Proof of Theorem \ref{gammaT}} It follows from the definition of a narrow operator that if $T \in \mathcal{B}(L^p(\mu))$ is narrow and $S \in \mathcal{B}(L^p(\mu))$, then $ST \in \mathcal{B}(L^p(\mu))$ is narrow (see \cite[Proposition 1.8]{PR13}). On the other hand, there are $S, T \in \mathcal{B}(L^p(\mu))$ such that $T$ is narrow but $TS$ is not (see \cite[Proposition 5.1]{PR13}). The following lemma shows that the latter cannot happen if $S$ is a multiplication operator. \begin{lemma}\label{sign} Let $X = L^p(\mu)$,\, $g \in L^\infty(\mu)$,\, and $T \in \mathcal{B}(X, Y)$ be a narrow operator. Then the operator $TgI \in \mathcal{B}(X, Y)$ is also narrow. \end{lemma} \begin{proof} There is nothing to prove if $T = 0$. Suppose $T \not= 0$. Take any $A \in \Sigma^+$ and any $\varepsilon > 0$. There exists a simple function $g_0 =\sum_{k = 1}^M a_k \mathbb{I}_{B_k} \not\equiv 0$ such that $$ \|g - g_0\|_{L^\infty} < \frac{\varepsilon}{2 \|T\| (\mu(\Omega))^{1/p}}\, . $$ Here $M \in \mathbb{N}$,\, $B_k \in \Sigma$, $k = 1, \dots, M$ are pairwise disjoint, $\mu(B_k) > 0$, $\cup_{k = 1}^M B_k = \Omega$,\, $a_k \in \mathbb{C}$. Let $A_k := A\cap B_k$. If $\mu(A_k) > 0$, let $h_k$ be a mean zero sign on $A_k$ such that $$ \|Th_k\| < \frac{\varepsilon}{2\sum_{k = 1}^M |a_k|}\, . $$ Let $$ h := \sum_{\{k : \ \mu(A_k) > 0\}} h_k . 
$$ It is clear that $h$ is a mean zero sign on $A$ and \begin{align*} \|TgI h\| & \le \|Tg_0 h\| + \|T(g - g_0) h\| \\ & \le \left\|T\sum_{\{k : \ \mu(A_k) > 0\}} a_k h_k\right\|+ \|T\| \|g - g_0\|_{L^\infty} \|h\|_{L^p} \\ & < \sum_{\{k : \ \mu(A_k) > 0\}} |a_k| \|Th_k\| + \|T\|\, \frac{\varepsilon}{2 \|T\| (\mu(\Omega))^{1/p}}\, (\mu(\Omega))^{1/p} \\ &\le \frac{\varepsilon}{2\sum_{k = 1}^M |a_k|} \sum_{\{k : \ \mu(A_k) > 0\}} |a_k| + \frac{\varepsilon}{2} \le \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon . \end{align*} \end{proof} The above lemma and its proof remain valid if $X$ is a K\"othe F-space on $(\Omega, \Sigma, \mu)$ (see \cite[Section 1.3]{PR13}). Similarly, the following lemma and its proof remain valid if $X$ is a rearrangement-invariant Banach space on $([0, 1], \mathcal{L}, \lambda)$ with absolutely continuous norm. This lemma is a minor modification of \cite[Theorem 2.21]{PR13} and \cite[\S 8, Proposition 5]{PP90}. \begin{lemma}\label{one} Let $X = L^p([0, 1])$ and $T \in \mathcal{B}(X, Y)$ be a narrow operator. Then there exists a 1-complemented subspace $X_0$ of $X$ isometrically isomorphic to $X$ such that $\mathbf{1} \in X_0$ and the restriction $T|_{X_0}$ of $T$ to $X_0$ is a compact operator. \end{lemma} \begin{proof} Take a mean zero sign $\overline{g}_1$ on $[0, 1]$ and set $T_1 := T\overline{g}_1 I$. The operator $T_1$ is narrow according to Lemma \ref{sign}. The proof of \cite[Theorem 2.21]{PR13} (with $T_1$ in place of $T$ and with $\varepsilon \ge 2\|T_1 \overline{g}_1\|,\, \varepsilon_1 > \|T_1 \overline{g}_1\|$) shows that there exists a 1-complemented (see the proof of \cite[\S 8, Proposition 5]{PP90}) subspace $X_1$ of $X$ isometrically isomorphic to $X$ such that $\overline{g}_1 \in X_1$ and the restriction $T_1|_{X_1}$ of $T_1$ to $X_1$ is a compact operator. Let $X_0 := \overline{g}_1 X_1$. Since $\overline{g}_1^2 = \mathbf{1}$, the operator of multiplication $\overline{g}_1 I$ is an isometric isomorphism of $X_0$ onto $X_1$ and of $X$ onto itself. Let $P_1 \in \mathcal{B}(X)$ be a projection onto $X_1$ such that $\|P_1\| = 1$. Then $P_0 := \overline{g}_1 P_1 \overline{g}_1 I \in \mathcal{B}(X)$ is a projection onto $X_0$ such that $\|P_0\| = 1$. Hence $X_0$ is 1-complemented (this follows also from \cite[Theorem 4]{A66}, since $X_0$ is isometrically isomorphic to $X = L^p([0, 1])$). Further, $\mathbf{1} = \overline{g}_1^2 \in \overline{g}_1 X_1 = X_0$ and $T|_{X_0} = T_1|_{X_1}\, \overline{g}_1 I|_{X_0}$ is compact. \end{proof} \begin{proof}[Proof of Theorem \ref{gammaT}] Take an arbitrary $\varepsilon > 0$. Let \begin{equation}\label{delta} \delta := \inf_{\|u\|_{L^p} = 1} \|(\gamma I - T)u\|_{L^p} . \end{equation} There exists $u_0 \in L^p([0, 1])$ such that $\|u_0\|_{L^p} = 1$ and $\|(\gamma I - T)u_0\|_{L^p} < \delta + \epsilon$. Then there exists an approximation $h :=\sum_{k = 1}^M a_k \mathbb{I}_{A_k}$ of $u_0$ such that $A_k$, $k = 1, \dots, M$, $M \in \mathbb{N}$ are pairwise disjoint Borel sets of positive measure, $\cup_{k = 1}^M A_k = [0, 1]$, $a_k \in \mathbb{C}$, and \begin{equation}\label{deltaeps} \left\|\gamma h - Th\right\|_{L^p([0, 1])} \le \delta + 2\varepsilon , \quad \|h\|_{L^p([0, 1])} = 1 . \end{equation} Partition $[0, 1]$ into subintervals $I_k$ of length $\lambda(A_k)$, $k = 1, \dots, M$. 
Since $(A_k, \mathcal{L}, \lambda)$ is isomorphic (modulo sets of measure $0$) to $(I_k, \mathcal{L}, \lambda)$ (see, e.g., \cite[Theorem 9.2.2 and Corollary 6.6.7]{B07}), one can easily derive from Lemma \ref{one} the existence, for each $k$, of a 1-complemented subspace $X_k$ of $L^p(A_k)$ isometrically isomorphic to $L^p(I_k)$ such that $\mathbb{I}_{A_k} \in X_k$ and $T|_{X_k}$ is a compact operator. Let $$ X_0 := \left\{f \in L^p([0, 1]) : \ f|_{A_k} \in X_k , \, k = 1, \dots, M \right\}. $$ It is easy to see that $X_0$ is 1-complemented and isometrically isomorphic to $L^p([0, 1])$, and that $T_0 := T|_{X_0}$ is a compact operator. Let $J : L^p([0, 1]) \to X_0$ be an isometric isomorphism and $P_0 \in \mathcal{B}(L^p([0, 1]))$ be a projection onto $X_0$ such that $\|P_0\| = 1$. Then $T_1 := J^{-1}P_0 T_0 J \in \mathcal{K}(L^p([0, 1]))$ and it follows from Theorem \ref{compT} that \begin{align*} & \|I - T_0\|_{X_0 \to L^p} + \inf_{f \in X_0, \, \|f\|_{L^p} = 1} \|(\gamma I - T_0)f\|_{L^p} \\ & \ge \|P_0(I - T_0)\|_{X_0 \to X_0} + \inf_{f \in X_0, \, \|f\|_{L^p} = 1} \|P_0(\gamma I - T_0)f\|_{L^p} \\ & = \|J^{-1}P_0(I - T_0)J\|_{L^p \to L^p} + \inf_{\varphi \in L^p, \, \|\varphi\|_{L^p} = 1} \|J^{-1}P_0(\gamma I - T_0)J\varphi\|_{L^p} \\ & = \|I - T_1\|_{L^p \to L^p} + \inf_{\varphi \in L^p, \, \|\varphi\|_{L^p} = 1} \|(\gamma I - T_1)\varphi\|_{L^p} \ge \|I - \gamma\mathbf{ E}\|_{L^p \to L^p} . \end{align*} Since $h \in X_0$, it follows from \eqref{deltaeps} that $$ \delta + 2\varepsilon \ge \inf_{f \in X_0, \, \|f\|_{L^p} = 1} \|(\gamma I - T_0)f\|_{L^p} . $$ Hence \begin{align*} \label{} \|I - T\|_{L^p \to L^p} + \delta + 2\varepsilon & \ge \|I - T_0\|_{X_0 \to L^p} + \inf_{f \in X_0, \, \|f\|_{L^p} = 1} \|(\gamma I - T_0)f\|_{L^p} \\ &\ge \|I - \gamma\mathbf{ E}\|_{L^p \to L^p} \end{align*} and $$ \|I - T\|_{L^p \to L^p} + \inf_{\|u\|_{L^p} = 1} \|(\gamma I - T)u\|_{L^p} + 2\varepsilon \ge \|I - \gamma\mathbf{ E}\|_{L^p \to L^p} $$ for all $\varepsilon > 0$ (see \eqref{delta}). \end{proof} \end{document}
\begin{document} \title[Krieger's finite generator theorem for countable groups I]{Krieger's finite generator theorem for actions of countable groups I} \author{Brandon Seward} \address{Courant Institute of Mathematical Sciences, New York University, 251 Mercer Street, New York, NY 10003, U.S.A.} \email{[email protected]} \keywords{generating partition, finite generator, Krieger's finite generator theorem, entropy, sofic entropy, f-invariant, non-amenable} \subjclass[2010]{37A15, 37A35} \begin{abstract} For an ergodic {p{$.$}m{$.$}p{$.$}} action $G \curvearrowright (X, \mu)$ of a countable group $G$, we define the Rokhlin entropy $h^{\mathrm{Rok}}_G(X, \mu)$ to be the infimum of the Shannon entropies of countable generating partitions. It is known that for free ergodic actions of amenable groups this notion coincides with classical Kolmogorov--Sinai entropy. It is thus natural to view Rokhlin entropy as a close analogue to classical entropy. Under this analogy we prove that Krieger's finite generator theorem holds for all countably infinite groups. Specifically, if $h^{\mathrm{Rok}}_G(X, \mu) < \log(k)$ then there exists a generating partition consisting of $k$ sets. \end{abstract} \maketitle \section{Introduction} Let $(X, \mu)$ be a standard probability space, meaning $X$ is a standard Borel space with Borel $\sigma$-algebra $\mathcal{B}(X)$ and $\mu$ is a Borel probability measure. Let $G$ be a countable group, and let $G \curvearrowright (X, \mu)$ be a probability-measure-preserving ({p{$.$}m{$.$}p{$.$}}) action. For a collection $\xi \subseteq \mathcal{B}(X)$, we let $\sigma \text{-}\mathrm{alg}_G(\xi)$ denote the smallest $G$-invariant $\sigma$-algebra containing $\xi$. A countable Borel partition $\alpha$ is \emph{generating} if $\sigma \text{-}\mathrm{alg}_G(\alpha) = \mathcal{B}(X)$ (equality modulo $\mu$-null sets). The \emph{Shannon entropy} of a countable Borel partition $\alpha$ is $$\mathrm{H}(\alpha) = \sum_{A \in \alpha} - \mu(A) \cdot \log(\mu(A)).$$ A \emph{probability vector} is a finite or countable ordered tuple $\bar{p} = (p_i)$ of positive real numbers which sum to $1$ (a more general definition will appear in Section \ref{SECT PRELIM}). We write $|\bar{p}|$ for the length of $\bar{p}$ and $\mathrm{H}(\bar{p}) = \sum - p_i \cdot \log(p_i)$ for the Shannon entropy of $\bar{p}$. Countable Borel partitions, and generating partitions in particular, have long played an important role in classical entropy theory. Generating partitions greatly simplify the definition and computation of Kolmogorov--Sinai entropy, and the proofs of two of the most well known results in entropy theory, Sinai's factor theorem and Ornstein's isomorphism theorem, relied upon deep, intricate constructions in which partitions played a starring role. Furthermore, generating partitions are more than merely a tool in entropy theory, but in fact are intimately connected with the notion of entropy itself. This fact is demonstrated by the following fundamental theorems of Rokhlin and Krieger. 
\begin{thm*}[Rokhlin's generator theorem \cite{Roh67}, 1967] If $\mathbb{Z} \curvearrowright (X, \mu)$ is a free ergodic {p{$.$}m{$.$}p{$.$}} action then its Kolmogorov--Sinai entropy $h^{\mathrm{KS}}_\mathbb{Z}(X, \mu)$ satisfies $$h^{\mathrm{KS}}_\mathbb{Z}(X, \mu) = \inf \Big\{ \mathrm{H}(\alpha) \,:\, \alpha \text{ is a countable generating partition} \Big\}.$$ \end{thm*} \begin{thm*}[Krieger's finite generator theorem \cite{Kr70}, 1970] If $\mathbb{Z} \curvearrowright (X, \mu)$ is a free ergodic {p{$.$}m{$.$}p{$.$}} action and $h^{\mathrm{KS}}_\mathbb{Z}(X, \mu) < \log(k)$ then there exists a generating partition $\alpha$ consisting of $k$ sets. \end{thm*} Both of the above theorems were later superseded by the following result of Denker. \begin{thm*}[Denker \cite{De74}, 1974] If $\mathbb{Z} \curvearrowright (X, \mu)$ is a free ergodic {p{$.$}m{$.$}p{$.$}} action and $\bar{p}$ is a finite probability vector with $h^{\mathrm{KS}}_\mathbb{Z}(X, \mu) < \mathrm{H}(\bar{p})$, then for every $\epsilon > 0$ there is a generating partition $\alpha = \{A_0, \ldots, A_{|\bar{p}|-1}\}$ with $|\mu(A_i) - p_i| < \epsilon$ for every $0 \leq i < |\bar{p}|$. \end{thm*} Grillenberger and Krengel \cite{GK76} obtained a further strengthening of these results which roughly says that, under the assumptions of Denker's theorem, one can control the joint distribution of $\alpha$ and finitely many of its translates. In particular, they showed that under the assumptions of Denker's theorem there is a generating partition $\alpha$ with $\mu(A_i) = p_i$ for every $0 \leq i < |\bar{p}|$. Over the years, Krieger's theorem acquired much fame and underwent various generalizations. In 1972, Katznelson and Weiss \cite{KaW72} outlined a proof of Krieger's theorem for free ergodic actions of $\mathbb{Z}^d$. Roughly a decade later, \v{S}ujan \cite{Su83} stated Krieger's theorem for amenable groups but only outlined the proof. The first proof for amenable groups to appear in the literature was obtained in 1988 by Rosenthal \cite{Ros88} who proved Krieger's theorem under the more restrictive assumption that $h^{\mathrm{KS}}_G(X, \mu) < \log(k-2) < \log(k)$. This was not improved until 2002 when Danilenko and Park \cite{DP02} proved Krieger's theorem for amenable groups under the assumption $h^{\mathrm{KS}}_G(X, \mu) < \log(k-1) < \log(k)$. It is none-the-less a folklore unpublished result that Krieger's theorem holds for amenable groups, i.e. if $G \curvearrowright (X, \mu)$ is a free ergodic {p{$.$}m{$.$}p{$.$}} action of an amenable group and $h^{\mathrm{KS}}_G(X, \mu) < \log(k)$ then there is a generating partition consisting of $k$ sets. Our much more general investigations here yield this as a consequence. We believe that this is the first explicit proof of this fact. Krieger's theorem was also generalized to a relative setting. The relative version of Krieger's theorem for $\mathbb{Z}$ actions was first proven by Kifer and Weiss \cite{KiW02} in 2002. It states that if $\mathbb{Z} \curvearrowright (X, \mu)$ is a free ergodic {p{$.$}m{$.$}p{$.$}} action, $\mathcal{F}$ is a $\mathbb{Z}$-invariant sub-$\sigma$-algebra, and the relative entropy satisfies $h^{\mathrm{KS}}_\mathbb{Z}(X, \mu | \mathcal{F}) < \log(k)$, then there is a Borel partition $\alpha$ consisting of $k$ sets such that $\sigma \text{-}\mathrm{alg}_\mathbb{Z}(\alpha) \vee \mathcal{F} = \mathcal{B}(X)$. 
This result was later extended by Danilenko and Park \cite{DP02} to free ergodic actions of amenable groups under the assumption that $\mathcal{F}$ induces a class-bijective factor. Rokhlin's theorem was generalized to actions of abelian groups by Conze \cite{C72} in 1972 and was just recently extended to amenable groups by Seward and Tucker-Drob \cite{ST14}. Specifically, if $G \curvearrowright (X, \mu)$ is a free ergodic {p{$.$}m{$.$}p{$.$}} action of an amenable group then the entropy $h^{\mathrm{KS}}_G(X, \mu)$ is equal to the infimum of $\mathrm{H}(\alpha)$ over all countable generating partitions $\alpha$. Denker's theorem on the other hand has not been extended beyond actions of $\mathbb{Z}$. In this paper we consider arbitrary countable groups, but we are particularly interested in the case of non-amenable groups. This is due to the recent breathtaking development of an entropy theory for actions of certain non-amenable groups. Specifically, groundbreaking work of Bowen in 2008 \cite{B10b}, followed with improvements by Kerr and Li \cite{KL11a}, has created the notion of \emph{sofic entropy} for {p{$.$}m{$.$}p{$.$}} actions of sofic groups. We remind the reader that the class of sofic groups contains the countable amenable groups, and it is an open question if every countable group is sofic. Sofic entropy in fact extends Kolmogorov--Sinai entropy as the two notions coincide for actions of amenable groups \cite{B12,KL13}. The sofic entropy of a Bernoulli shift $(L^G, \lambda^G)$ is equal to the Shannon entropy of its base $\mathrm{H}(\lambda)$ \cite{B10b,KL11b}. Consequently, for sofic groups containing an infinite amenable subgroup, sofic entropy classifies Bernoulli shifts up to isomorphism \cite{Or70a,Or70b,OW87,St75}. Bowen has also made significant progress on classifying Bernoulli shifts over general countable groups \cite{B12b}, but a full classification does not yet exist. This paper is motivated by some of the challenges facing sofic entropy theory. For instance, is Sinai's factor theorem true: do free ergodic actions of positive sofic entropy factor onto Bernoulli shifts? Is the Ornstein isomorphism theorem true: are Bernoulli shifts over sofic groups classified up to isomorphism by their sofic entropy? For both questions, the classical proofs involving elaborate constructions of partitions cannot be carried out. This is because vital properties of actions of amenable groups such as the Rokhlin lemma, the Shannon--McMillan--Breiman theorem, and the monotone decreasing property of entropy under factor maps are simply false for non-amenable groups. Additionally, the formula for sofic entropy involves counting auxiliary objects which are external to the original action, making it unclear if the language of sofic entropy is sufficiently rich to uncover the delicate constructions of partitions which are needed for answering these questions. We also do not know if sofic entropy satisfies either the Rokhlin generator theorem or the Krieger finite generator theorem. These are pertinent questions since sofic entropy is easier to define, compute, and understand when there exists a finite generating partition. Finally, a significant challenge to sofic entropy is that it is restricted to the realm of actions of sofic groups (in fact, to the realm of ``sofic actions''). If countable, non-sofic groups exist, how will we understand and classify their Bernoulli shifts? 
Motivated by these issues, we introduce a new notion of entropy which is defined for all {p{$.$}m{$.$}p{$.$}} actions of all countable groups. For a {p{$.$}m{$.$}p{$.$}} action $G \curvearrowright (X, \mu)$ we define the \emph{Rokhlin entropy} to be $$h^{\mathrm{Rok}}_G(X, \mu) = \inf \Big\{ \mathrm{H}(\alpha | \mathscr{I}) : \alpha \text{ is a countable partition and } \sigma \text{-}\mathrm{alg}_G(\alpha) \vee \mathscr{I} = \mathcal{B}(X) \Big\},$$ where $\mathscr{I}$ is the $\sigma$-algebra of $G$-invariant Borel sets. We study this invariant in this three-part series, but in the present paper we only consider ergodic actions, in which case the Rokhlin entropy simplifies to $$h^{\mathrm{Rok}}_G(X, \mu) = \inf \Big\{ \mathrm{H}(\alpha) \,:\, \alpha \text{ is a countable Borel generating partition} \Big\}.$$ We name this invariant in honor of Rokhlin's generator theorem. For free actions of amenable groups, Rokhlin entropy coincides with Kolmogorov--Sinai entropy \cite{AS,ST14}. Thus Rokhlin entropy is a simple and natural extension of Kolmogorov--Sinai entropy. More generally, if $\mathcal{F} \subseteq \mathcal{B}(X)$ is a $G$-invariant $\sigma$-algebra, then we define the \emph{Rokhlin entropy of $G \curvearrowright (X, \mu)$ relative to $\mathcal{F}$}, denoted $h^{\mathrm{Rok}}_G(X, \mu | \mathcal{F})$, to be $$\inf \Big\{ \mathrm{H}(\alpha | \mathcal{F} \vee \mathscr{I}) \,:\, \alpha \text{ is a countable Borel partition and } \sigma \text{-}\mathrm{alg}_G(\alpha) \vee \mathcal{F} \vee \mathscr{I} = \mathcal{B}(X) \Big\}.$$ Again, $\mathscr{I}$ will always be trivial in this paper because we will only consider ergodic actions. We refer the reader to Section \ref{SECT PRELIM} for the definition of the conditional Shannon entropy $\mathrm{H}(\alpha | \mathcal{F})$, but we remark that when $\mathcal{F} = \{X, \varnothing\}$ we have $\mathrm{H}(\alpha | \mathcal{F}) = \mathrm{H}(\alpha)$. In particular $h^{\mathrm{Rok}}_G(X, \mu | \{X, \varnothing\}) = h^{\mathrm{Rok}}_G(X, \mu)$. We show in Proposition \ref{PROP RELROK} that for free ergodic actions of amenable groups relative Rokhlin entropy coincides with relative Kolmogorov--Sinai entropy (this is extended to non-ergodic actions in Part III \cite{AS}). We also observe in Proposition \ref{PROP OE} that $h^{\mathrm{Rok}}_G(X, \mu | \mathcal{F})$ is invariant under orbit equivalences for which the orbit-change cocycle is $\mathcal{F}$-measurable, generalizing a similar property of Kolmogorov--Sinai entropy discovered by Rudolph and Weiss \cite{RW00}. Our main theorem is the following finite generator theorem which applies to all ergodic actions of countably infinite groups. (We in fact prove a stronger result; see Theorem \ref{INTRO THM2}). \begin{thm} \label{INTRO THM1} Let $G$ be a countably infinite group acting ergodically, but not necessarily freely, by measure-preserving bijections on a non-atomic standard probability space $(X, \mu)$. Let $\mathcal{F}$ be a $G$-invariant sub-$\sigma$-algebra. If $\bar{p} = (p_i)$ is any finite or countable probability vector with $h^{\mathrm{Rok}}_G(X, \mu | \mathcal{F}) < \mathrm{H}(\bar{p})$, then there is a Borel partition $\alpha = \{A_i \,:\, 0 \leq i < |\bar{p}|\}$ with $\mu(A_i) = p_i$ for every $0 \leq i < |\bar{p}|$ and with $\sigma \text{-}\mathrm{alg}_G(\alpha) \vee \mathcal{F} = \mathcal{B}(X)$. 
\end{thm} This theorem greatly supersedes previous work of the author in \cite{S12} which, under the assumption $h^{\mathrm{Rok}}_G(X, \mu) < \infty$, constructed a finite generating partition without any control over its cardinality or distribution. The major difficulty which the present work overcomes is that all prior arguments for controlling the cardinality and distribution of generating partitions rely critically upon the classical Rokhlin lemma and Shannon--McMillan--Breiman theorem, and these tools do not exist for actions of general countable groups.
We remark that in order for a partition $\alpha$ to exist as described in Theorem \ref{INTRO THM1}, it is necessary that $h^{\mathrm{Rok}}_G(X, \mu | \mathcal{F}) \leq \mathrm{H}(\bar{p})$. So the above theorem is optimal since in general there are actions where the infimum $h^{\mathrm{Rok}}_G(X, \mu)$ is not achieved, such as free ergodic actions which are not isomorphic to any Bernoulli shift \cite{S14a}. If $h^{\mathrm{Rok}}_G(X, \mu) < \log(k)$ then using $\bar{p} = (p_0, \ldots, p_{k-1})$ where each $p_i = 1 / k$ we obtain the following generalization of the (relative) Krieger finite generator theorem:
\begin{cor} \label{INTRO CORK} Let $G$ be a countably infinite group acting ergodically, but not necessarily freely, by measure-preserving bijections on a non-atomic standard probability space $(X, \mu)$, and let $\mathcal{F}$ be a $G$-invariant sub-$\sigma$-algebra. If $h^{\mathrm{Rok}}_G(X, \mu | \mathcal{F}) < \log(k)$, then there is a partition $\alpha$ with $|\alpha| = k$ and $\sigma \text{-}\mathrm{alg}_G(\alpha) \vee \mathcal{F} = \mathcal{B}(X)$. \end{cor}
We mention that Corollary \ref{INTRO CORK} is the first version of Krieger's finite generator theorem for non-free actions. Furthermore, we believe that Corollary \ref{INTRO CORK} (together with the Rokhlin generator theorem for amenable groups \cite{ST14}) is the first explicit proof of Krieger's finite generator theorem for free ergodic actions of countable amenable groups. In fact, we obtain the following strong form of Denker's theorem for amenable groups:
\begin{cor} Let $G$ be a countably infinite amenable group and let $G \curvearrowright (X, \mu)$ be a free ergodic {p{$.$}m{$.$}p{$.$}} action. If $\bar{p} = (p_i)$ is any finite or countable probability vector with $h^{\mathrm{KS}}_G(X, \mu) < \mathrm{H}(\bar{p})$ then there exists a generating partition $\alpha = \{A_i \,:\, 0 \leq i < |\bar{p}|\}$ with $\mu(A_i) = p_i$ for every $0 \leq i < |\bar{p}|$. \end{cor}
Returning to our points of motivation, we point out that Theorem \ref{INTRO THM1} implies that sofic entropy will satisfy the Krieger finite generator theorem provided it satisfies the Rokhlin generator theorem (i.e. provided sofic entropy coincides with Rokhlin entropy). A simple consequence of the definitions is that sofic entropy is bounded above by Rokhlin entropy \cite{AS, B10b}, and it is an important open problem to determine if they are equal (assuming a free action whose sofic entropy is not minus infinity). Since sofic entropy is a lower bound to Rokhlin entropy, we know that Bernoulli shifts $(L^G, \lambda^G)$ over sofic groups $G$ have Rokhlin entropy $h^{\mathrm{Rok}}_G(L^G, \lambda^G) = \mathrm{H}(\lambda)$. Since the definition of Rokhlin entropy does not mention soficity, this suggests the equality $h^{\mathrm{Rok}}_G(L^G, \lambda^G) = \mathrm{H}(\lambda)$ may hold for all countably infinite groups.
If so, Rokhlin entropy could be capable of classifying Bernoulli shifts over all countably infinite groups up to isomorphism (in particular, over non-sofic groups if they exist). We further investigate this question in Part II \cite{S14a}. In the long-term, we hope that Rokhlin entropy will not only be useful in its own right, but that its study will develop in parallel with sofic entropy theory and that the two theories will be mutually beneficial to one another. In particular, we hope Rokhlin entropy will enrich the language and framework available for studying entropy-type problems and that it will progress our techniques for constructing partitions, possibly leading the way to generalizations of the many deep results of Ornstein. A hidden significance of Theorem \ref{INTRO THM1} is that it opens the door to developing a theory of Rokhlin entropy. It should be pointed out that the definition of Rokhlin entropy is both quite natural and immediately suggested by Rokhlin's generator theorem, and the idea of its definition had certainly occurred to researchers beforehand. However, the abstract nature of the definition, an infimum over {\it all} generating partitions, seems to prevent any viable means of study. Our main theorem changes this situation. It reveals, as a consequence, a sub-additive property of Rokhlin entropy. To properly state this property in its strongest form requires additional definitions and is postponed to Part II \cite{S14a}, but we mention here a simple corollary to give an indication. \begin{cor} \label{INTRO COR1} Let $G$ be a countably infinite group acting ergodically, but not necessarily freely, by measure-preserving bijections on a non-atomic standard probability space $(X, \mu)$. If $G \curvearrowright (Y, \nu)$ is a factor of $G \curvearrowright (X, \mu)$ and $\mathcal{F}$ is the sub-$\sigma$-algebra of $X$ associated to $Y$ then $$h^{\mathrm{Rok}}_G(X, \mu) \leq h^{\mathrm{Rok}}_G(Y, \nu) + h^{\mathrm{Rok}}_G(X, \mu | \mathcal{F}).$$ \end{cor} For example, if $\alpha$ and $\beta$ are partitions with $\sigma \text{-}\mathrm{alg}_G(\alpha \vee \beta) = \mathcal{B}(X)$, then the above corollary implies $h^{\mathrm{Rok}}_G(X, \mu) \leq \mathrm{H}(\alpha) + \mathrm{H}(\beta | \sigma \text{-}\mathrm{alg}_G(\alpha))$. The inequality in Corollary \ref{INTRO COR1} can be strict, such as when $h^{\mathrm{Rok}}_G(X, \mu) < h^{\mathrm{Rok}}_G(Y, \nu)$. A strict inequality is common for actions of non-amenable groups \cite{S13}. The sub-additive property turns out to be tremendously useful, and it is absolutely critical to our study of Rokhlin entropy in Parts II and III. \subsection*{Update} Between this article's first appearance on the arXiv and it reaching its final form, a few developments have occurred. Rokhlin entropy theory was ultimately successful in providing a framework for generalizing the Sinai factor theorem to all countably infinite groups. Specifically, if $G \curvearrowright (X, \mu)$ is a free ergodic action of any countably infinite group $G$, then $G \curvearrowright (X, \mu)$ factors onto the Bernoulli shift $G \curvearrowright (L^G, \lambda^G)$ whenever $h^{\mathrm{Rok}}_G(X, \mu) \geq \mathrm{H}(\lambda)$ \cite{S16a}. In particular, actions which do not admit any finite generating partitions must factor onto all Bernoulli shifts. Also, since sofic entropy is bounded above by Rokhlin entropy, this implies that sofic entropy satisfies the Sinai factor theorem as well. 
In \cite{S16} two fairly computable expressions were found which provide upper bounds to Rokhlin entropy. In particular, it was shown that if $G \curvearrowright (X, \mu)$ is a free {p{$.$}m{$.$}p{$.$}} action and $\alpha$ is a generating partition with $\mathrm{H}(\alpha) < \infty$, then the Rokhlin entropy satisfies $$h^{\mathrm{Rok}}_G(X, \mu) \leq \inf_{T \subseteq G} \frac{1}{|T|} \cdot \mathrm{H}( \textstyle{\bigvee_{t \in T}} t \cdot \alpha),$$ where the infimum is over all finite $T \subseteq G$. When $G$ is amenable, the right-hand side coincides with Kolmogorov--Sinai entropy (it is common to use a sequence of F{\o}lner sets $T$, but this isn't necessary \cite{OW87}). Thus, in some sense this upper bound should be no more difficult to compute in practice than Kolmogorov--Sinai entropy. We point out that this upper bound makes our finite generator theorem, Theorem \ref{INTRO THM1}, easier to apply. In addition to Parts II and III \cite{S14a,AS} and the papers \cite{S16, S16a} mentioned above, additional study of Rokhlin entropy has been undertaken in \cite{Al, B16, GS15}. The sub-additive property of Rokhlin entropy continues to serve as the foundation for all of these new results. \subsection*{Outline} The proof of Theorem \ref{INTRO THM1} is essentially self-contained as it relies on technical constructions carried out by hand. The proof makes significant use of the pseudo-group of the induced orbit-equivalence relation. We review basic properties of the pseudo-group in Section \ref{SECT PSEUDO}. In Section \ref{SECT INFT} we review and strengthen a construction of the author used in \cite{S12} for replacing countably infinite partitions with finite ones. Sections \ref{SECT PSEUDO} and \ref{SECT INFT} thus reprove the main theorem of \cite{S12} which states that finite Rokhlin entropy implies the existence of a finite generating partition. The real difficulty of the present work is constructing a generating partition while controlling its cardinality and distribution. The classical Rokhlin lemma and Shannon--McMillan--Breiman theorem were critical to this task in all prior proofs of Krieger's theorem. The important advantage we obtain by working with the pseudo-group is that we are able to develop a replacement to the Rokhlin lemma and the Shannon--McMillan--Breiman theorem which is suitable to our needs. We present this replacement in Section \ref{SECT EQREL}. A new, significant difficulty is created from our use of the pseudo-group. Specifically, the notion of a ``generating'' partition cannot be expressed in the language of the pseudo-group. Ultimately, we must build a single, efficient partition which both codes information for certain transformations in the pseudo-group, and simultaneously codes information for the action of $G$. An obstacle in this simultaneous-coding problem is that there is no geometric relationship between our pseudo-group transformations and the $G$-action. This is the most challenging part of the proof. The coding machinery needed for this task is presented in Section \ref{SECT ACT}. Then in Section \ref{SECT KRIEGER} we prove the main theorem. Finally, in Section \ref{SECT AMENABLE} we show that relative Rokhlin entropy and relative Kolmogorov--Sinai entropy coincide for free actions of amenable groups. \subsection*{Acknowledgments} This research was supported by the National Science Foundation Graduate Student Research Fellowship under Grant No. DGE 0718128. 
The author thanks his advisor, Ralf Spatzier, for numerous helpful discussions, Tim Austin for many suggestions to improve the paper, and Miklos Ab\'{e}rt and Benjy Weiss for encouraging the author to coin a name for the new invariant studied here. Part of this work was completed while the author attended the Arbeitsgemeinschaft: Sofic Entropy workshop at the Mathematisches Forschungsinstitut Oberwolfach in Germany. The author thanks the MFO for their hospitality and travel support. \section{Preliminaries} \label{SECT PRELIM} Every probability space $(X, \mu)$ which we consider will be assumed to be standard. In particular, $X$ will be a standard Borel space. For $\xi \subseteq \mathcal{B}(X)$, we let $\sigma \text{-}\mathrm{alg}(\xi)$ denote the smallest sub-$\sigma$-algebra containing $\xi$ (not to be confused with the notation $\sigma \text{-}\mathrm{alg}_G(\xi)$ from the introduction). At times, we will consider the space of all Borel probability measures on $X$. Recall that the space of Borel probability measures on $X$ has a natural standard Borel structure which is generated by the maps $\lambda \mapsto \lambda(A)$ for $A \subseteq X$ Borel \cite[Theorem 17.24]{K95}. An action $G \curvearrowright (Y, \nu)$ is a \emph{factor} of $G \curvearrowright (X, \mu)$ if there exists a measure-preserving $G$-equivariant map $\pi : (X, \mu) \rightarrow (Y, \nu)$. Every factor $\pi : (X, \mu) \rightarrow (Y, \nu)$ is uniquely associated (mod $\mu$-null sets) to a $G$-invariant sub-$\sigma$-algebra $\mathcal{F}$ of $X$, and conversely every $G$-invariant sub-$\sigma$-algebra $\mathcal{F}$ of $(X, \mu)$ is uniquely associated (up to isomorphism) to a factor $\pi : (X, \mu) \rightarrow (Y, \nu)$ \cite[Theorem 2.15]{Gl03}. If $\pi : (X, \mu) \rightarrow (Y, \nu)$ is a factor map, then there is an essentially unique Borel map associating each $y \in Y$ to a Borel probability measure $\mu_y$ on $X$ such that $\mu = \int \mu_y \ d \nu(y)$ and $\mu_y(\pi^{-1}(y)) = 1$. We call this the \emph{disintegration} of $\mu$ over $\nu$. Note that for any Borel set $A \subseteq X$, the map $y \mapsto \mu_y(A)$ is Borel. Let $G \curvearrowright (X, \mu)$ be a {p{$.$}m{$.$}p{$.$}} action, and let $\mathcal{F}$ be a $G$-invariant sub-$\sigma$-algebra. Let $\pi : (X, \mu) \rightarrow (Y, \nu)$ be the associated factor, and let $\mu = \int \mu_y \ d \nu(y)$ be the disintegration of $\mu$ over $\nu$. For a countable Borel partition $\alpha$ of $X$, the \emph{conditional Shannon entropy} of $\alpha$ relative to $\mathcal{F}$ is $$\mathrm{H}(\alpha | \mathcal{F}) = \int_Y \sum_{A \in \alpha} - \mu_y(A) \cdot \log(\mu_y(A)) \ d \nu(y) = \int_Y \mathrm{H}_{\mu_y}(\alpha) \ d \nu(y).$$ If $\mathcal{F} = \{X, \varnothing\}$ is the trivial $\sigma$-algebra then $\mathrm{H}(\alpha | \mathcal{F}) = \mathrm{H}(\alpha)$. For a countable partition $\beta$ of $X$ we set $\mathrm{H}(\alpha | \beta) = \mathrm{H}(\alpha | \sigma \text{-}\mathrm{alg}(\beta))$. We write $\alpha \geq \beta$ if $\alpha$ is finer than $\beta$. We will need the following standard properties of Shannon entropy (proofs can be found in \cite{Do11}): \begin{lem} \label{LEM SHAN} Let $(X, \mu)$ be a standard probability space, let $\alpha$ and $\beta$ be countable Borel partitions of $X$, and let $\mathcal{F}$ and $\Sigma$ be sub-$\sigma$-algebras. 
Then \begin{enumerate} \item[\rm (i)] $\mathrm{H}(\alpha | \mathcal{F}) \leq \log |\alpha|$; \item[\rm (ii)] if $\alpha \geq \beta$ then $\mathrm{H}(\alpha | \mathcal{F}) \geq \mathrm{H}(\beta | \mathcal{F})$; \item[\rm (iii)] if $\Sigma \subseteq \mathcal{F}$ then $\mathrm{H}(\alpha | \Sigma) \geq \mathrm{H}(\alpha | \mathcal{F})$; \item[\rm (iv)] $\mathrm{H}(\alpha \vee \beta) = \mathrm{H}(\beta) + \mathrm{H}(\alpha | \beta) \geq \mathrm{H}(\beta)$; \item[\rm (v)] $\mathrm{H}(\alpha \vee \beta | \mathcal{F}) = \mathrm{H}(\beta | \mathcal{F}) + \mathrm{H}(\alpha | \sigma \text{-}\mathrm{alg}(\beta) \vee \mathcal{F}) \leq \mathrm{H}(\beta | \mathcal{F}) + \mathrm{H}(\alpha)$; \item[\rm (vi)] $\mathrm{H}(\alpha | \mathcal{F}) = \sup_\xi \mathrm{H}(\xi | \mathcal{F})$, where the supremum is over all finite partitions $\xi$ coarser than $\alpha$; \item[\rm (vii)] if $\mathrm{H}(\alpha) < \infty$ then $\mathrm{H}(\alpha | \mathcal{F}) = \inf_\xi \mathrm{H}(\alpha | \xi)$, where the infimum is over all finite partitions $\xi \subseteq \mathcal{F}$. \end{enumerate} \end{lem} Throughout this paper, whenever working with a probability space $(X, \mu)$ we will generally ignore sets of measure zero. In particular, we write $A = B$ for $A, B \subseteq X$ if their symmetric difference is null. We similarly write $\mathcal{F} = \Sigma$ for sub-$\sigma$-algebras $\mathcal{F}$, $\Sigma$ if they agree up to null sets. Also, by a partition of $X$ we will mean a countable collection of pairwise-disjoint Borel sets whose union is conull. In particular, we allow partitions to contain the empty set. Similarly, we will use the term \emph{probability vector} more freely than described in the introduction. A probability vector $\bar{p} = (p_i)$ will be any finite or countable ordered tuple of non-negative real numbers which sum to $1$ (so some terms $p_i$ may be $0$). We say that another probability vector $\bar{q}$ is \emph{coarser} than $\bar{p}$ if there is a partition $\mathcal{Q} = \{Q_j \,:\, 0 \leq j < |\bar{q}|\}$ of the integers $\{0 \leq i < |\bar{p}|\}$ such that for every $0 \leq j < |\bar{q}|$ $$q_j = \sum_{i \in Q_j} p_i.$$ A \emph{pre-partition} of $X$ is a countable collection of pairwise-disjoint subsets of $X$. We say that another pre-partition $\beta$ extends $\alpha$, written $\beta \sqsupseteq \alpha$, if there is an injection $\iota : \alpha \rightarrow \beta$ with $A \subseteq \iota(A)$ for every $A \in \alpha$. Equivalently, $\beta \sqsupseteq \alpha$ if and only if $\cup \alpha \subseteq \cup \beta$ and the restriction of $\beta$ to $\cup \alpha$ coincides with $\alpha$. For a Borel pre-partition $\alpha$, we define the \emph{reduced $\sigma$-algebra} $\sigma \text{-}\mathrm{alg}^{\text{red}}_G(\alpha)$ to be the collection of Borel sets $R \subseteq X$ such that there is a conull $X' \subseteq X$ satisfying: \begin{quote} for every $r \in R \cap X'$ and $x \in X' \setminus R$ there is $g \in G$ with $g \cdot r, g \cdot x \in \cup \alpha$ and with $g \cdot r$ and $g \cdot x$ lying in distinct classes of $\alpha$. \end{quote} It is a basic exercise to verify that $\sigma \text{-}\mathrm{alg}^{\text{red}}_G(\alpha)$ is indeed a $\sigma$-algebra. We note the following basic property. \begin{lem} \label{LEM EXT} Let $G \curvearrowright (X, \mu)$ be a {p{$.$}m{$.$}p{$.$}} action, and let $\alpha$ be a pre-partition. If $\beta$ is a pre-partition and $\beta \sqsupseteq \alpha$ then $\sigma \text{-}\mathrm{alg}^{\text{red}}_G(\beta) \supseteq \sigma \text{-}\mathrm{alg}^{\text{red}}_G(\alpha)$. 
In particular, if $\beta$ is a partition and $\beta \sqsupseteq \alpha$ then $\sigma \text{-}\mathrm{alg}_G(\beta) \supseteq \sigma \text{-}\mathrm{alg}^{\text{red}}_G(\alpha)$. \end{lem} \begin{proof} Fix $R \in \sigma \text{-}\mathrm{alg}^{\text{red}}_G(\alpha)$. By definition of $\sigma \text{-}\mathrm{alg}^{\text{red}}_G(\alpha)$, there is a conull $X' \subseteq X$ such that for all $r \in R \cap X'$ and $x \in X' \setminus R$ there is $g \in G$ with $g \cdot r, g \cdot x \in \cup \alpha$ and such that $\alpha$ separates $g \cdot r$ and $g \cdot x$. Since the restriction of $\beta$ to $\cup \alpha$ is equal to $\alpha$, we also have that $g \cdot r, g \cdot x \in \cup \beta$ and $\beta$ separates $g \cdot r$ and $g \cdot x$. We conclude that $R \in \sigma \text{-}\mathrm{alg}^{\text{red}}_G(\beta)$. \end{proof} The definition of reduced $\sigma$-algebra may seem a bit odd at first, but comes about naturally from our work here and will significantly simplify some of the proofs in Part II and Part III \cite{S14a, AS}. A key property of this definition is that if $\beta$ is any partition extending $\alpha$ then one automatically has $\sigma \text{-}\mathrm{alg}_G(\beta) \supseteq \sigma \text{-}\mathrm{alg}^{\text{red}}_G(\alpha)$. Another important property is that if $G \curvearrowright (Y, \nu)$ is a factor of $(X, \mu)$ via $\phi : (X, \mu) \rightarrow (Y, \nu)$, then for any pre-partition $\alpha$ of $Y$ we have $\phi^{-1}(\sigma \text{-}\mathrm{alg}^{\text{red}}_G(\alpha)) \subseteq \sigma \text{-}\mathrm{alg}^{\text{red}}_G(\phi^{-1}(\alpha))$. These properties can be quite useful for specialized constructions. For example, one could imagine constructing two pre-partitions $\alpha^1$ and $\alpha^2$ which achieve different goals. If $\cup \alpha^1$ is disjoint from $\cup \alpha^2$, then one can choose a common extension partition $\alpha$ and automatically have $\sigma \text{-}\mathrm{alg}_G(\alpha) \supseteq \sigma \text{-}\mathrm{alg}^{\text{red}}_G(\alpha^1) \vee \sigma \text{-}\mathrm{alg}^{\text{red}}_G(\alpha^2)$. This type of construction will be performed in Part II. Below is the statement of the main theorem of this paper in its strongest form. It is a strengthening of Theorem \ref{INTRO THM1} mentioned in the introduction. This theorem is new even in the case $G = \mathbb{Z}$. \begin{thm} \label{INTRO THM2} Let $G$ be a countably infinite group acting ergodically, but not necessarily freely, by measure-preserving bijections on a non-atomic standard probability space $(X, \mu)$. Let $\mathcal{F}$ be a $G$-invariant sub-$\sigma$-algebra of $X$. If $0 < r \leq 1$ and $\bar{p} = (p_i)$ is any finite or countable probability vector with $h^{\mathrm{Rok}}_G(X, \mu | \mathcal{F}) < r \cdot \mathrm{H}(\bar{p})$, then there is a Borel pre-partition $\alpha = \{A_i \,:\, 0 \leq i < |\bar{p}|\}$ with $\mu(\cup \alpha) = r$, $\mu(A_i) = r \cdot p_i$ for every $0 \leq i < |\bar{p}|$, and $\sigma \text{-}\mathrm{alg}^{\text{red}}_G(\alpha) \vee \mathcal{F} = \mathcal{B}(X)$. 
\end{thm} \section{The pseudo-group of an ergodic action} \label{SECT PSEUDO} For a {p{$.$}m{$.$}p{$.$}} action $G \curvearrowright (X, \mu)$ we let $E_G^X$ denote the induced orbit equivalence relation: $$E_G^X = \{(x, y) \,:\, \exists g \in G, \ \ g \cdot x = y\}.$$ The \emph{pseudo-group} of $E_G^X$, denoted $[[E_G^X]]$, is the set of all Borel bijections $\theta : \mathrm{dom}(\theta) \rightarrow \mathrm{rng}(\theta)$ where $\mathrm{dom}(\theta), \mathrm{rng}(\theta) \subseteq X$ are Borel and $\theta(x) \in G \cdot x$ for every $x \in \mathrm{dom}(\theta)$. The \emph{full group} of $E_G^X$, denoted $[E_G^X]$, is the set of all $\theta \in [[E_G^X]]$ with $\mathrm{dom}(\theta) = \mathrm{rng}(\theta) = X$ (i.e. conull in $X$). For every $\theta \in [[E_G^X]]$ there is a Borel partition $\{Z_g^\theta \,:\, g \in G\}$ of $\mathrm{dom}(\theta)$ such that $\theta(x) = g \cdot x$ for every $x \in Z_g^\theta$. Thus, an important fact which we will use repeatedly is that every $\theta \in [[E_G^X]]$ is measure-preserving. We mention that the sets $Z_g^\theta$ are in general not uniquely determined from $\theta$ since the action of $G$ might not be free. It will be necessary to keep record of such decompositions $\{Z_g^\theta\}$ for $\theta \in [[E_G^X]]$. The precise notion we need is the following. \begin{defn} Let $G \curvearrowright (X, \mu)$ be a {p{$.$}m{$.$}p{$.$}} action, let $\theta \in [[E_G^X]]$, and let $\mathcal{F}$ be a $G$-invariant sub-$\sigma$-algebra. We say that $\theta$ is \emph{$\mathcal{F}$-expressible} if $\mathrm{dom}(\theta), \mathrm{rng}(\theta) \in \mathcal{F}$ and there is a $\mathcal{F}$-measurable partition $\{Z_g^\theta \,:\, g \in G\}$ of $\mathrm{dom}(\theta)$ such that $\theta(x) = g \cdot x$ for every $x \in Z_g^\theta$ and all $g \in G$. \end{defn} We observe two simple facts on the notion of expressibility. \begin{lem} \label{LEM EXPMOVE} Let $G \curvearrowright (X, \mu)$ be a {p{$.$}m{$.$}p{$.$}} action and let $\mathcal{F}$ be a $G$-invariant sub-$\sigma$-algebra. If $\theta \in [[E_G^X]]$ is $\mathcal{F}$-expressible and $A \subseteq X$, then $\theta(A) = \theta(A \cap \mathrm{dom}(\theta))$ is $\sigma \text{-}\mathrm{alg}_G(\{A\}) \vee \mathcal{F}$-measurable. In particular, if $A \in \mathcal{F}$ then $\theta(A) \in \mathcal{F}$. \end{lem} \begin{proof} Fix a $\mathcal{F}$-measurable partition $\{Z_g^\theta \,:\, g \in G\}$ of $\mathrm{dom}(\theta)$ such that $\theta(x) = g \cdot x$ for all $x \in Z_g^\theta$. Then \begin{equation*} \theta(A) = \bigcup_{g \in G} g \cdot (A \cap Z_g^\theta) \in \sigma \text{-}\mathrm{alg}_G(\{A\}) \vee \mathcal{F}. \qedhere \end{equation*} \end{proof} \begin{lem} \label{LEM EXPGROUP} Let $G \curvearrowright (X, \mu)$ be a {p{$.$}m{$.$}p{$.$}} action and let $\mathcal{F}$ be a $G$-invariant sub-$\sigma$-algebra. If $\theta, \phi \in [[E_G^X]]$ are $\mathcal{F}$-expressible then so are $\theta^{-1}$ and $\theta \circ \phi$. \end{lem} \begin{proof} Fix $\mathcal{F}$-measurable partitions $\{Z_g^\theta \,:\, g \in G\}$ and $\{Z_g^\phi \,:\, g \in G\}$ of $\mathrm{dom}(\theta)$ and $\mathrm{dom}(\phi)$, respectively, satisfying $\theta(x) = g \cdot x$ for all $x \in Z_g^\theta$ and $\phi(x) = g \cdot x$ for all $x \in Z_g^\phi$. Define for $g \in G$ $$Z_g^{\theta^{-1}} = g^{-1} \cdot Z_{g^{-1}}^\theta.$$ Then each $Z_g^{\theta^{-1}}$ is $\mathcal{F}$-measurable since $\mathcal{F}$ is $G$-invariant. 
It is easily checked that $\{Z_g^{\theta^{-1}} \,:\, g \in G\}$ partitions $\mathrm{rng}(\theta)$ and satisfies $\theta^{-1}(x) = g \cdot x$ for all $x \in Z_g^{\theta^{-1}}$. Thus $\theta^{-1}$ is $\mathcal{F}$-expressible. Observe that by the previous lemma, $\phi^{-1}(Z_g^\theta) \in \mathcal{F}$ for every $g \in G$ since $\phi^{-1}$ is $\mathcal{F}$-expressible. Notice that the sets $Z_g^\phi \cap \phi^{-1}(Z_h^\theta)$ partition $\mathrm{dom}(\theta \circ \phi)$. Define for $g \in G$ $$Z_g^{\theta \circ \phi} = \bigcup_{h \in G} \Big( Z_{h^{-1} g}^\phi \cap \phi^{-1}(Z_h^\theta) \Big).$$ These sets are $\mathcal{F}$-measurable and pairwise-disjoint and we have $\theta \circ \phi(x) = g \cdot x$ for all $x \in Z_g^{\theta \circ \phi}$. \end{proof} With the aid of Lemma \ref{LEM EXPMOVE}, we observe a basic property of relative Rokhlin entropy. The proposition below resembles a theorem of Rudolph and Weiss from classical entropy theory \cite{RW00}. Note that if $G$ and $\Gamma$ act on $(X, \mu)$ with the same orbits then $E_G^X = E_\Gamma^X$ and $[[E_G^X]] = [[E_\Gamma^X]]$. In this situation, we say that $\theta \in [[E_G^X]]$ is $(G, \mathcal{F})$-expressible if it is $\mathcal{F}$-expressible with respect to the $G$-action $G \curvearrowright (X, \mu)$. \begin{prop} \label{PROP OE} Let $G$ and $\Gamma$ be countable groups, and let $G \curvearrowright (X, \mu)$ and $\Gamma \curvearrowright (X, \mu)$ be {p{$.$}m{$.$}p{$.$}} ergodic actions having the same orbits. Suppose that $\mathcal{F}$ is a $G$ and $\Gamma$ invariant sub-$\sigma$-algebra such that the transformation associated to each $g \in G$ is $(\Gamma, \mathcal{F})$-expressible and similarly the transformation associated to each $\gamma \in \Gamma$ is $(G, \mathcal{F})$-expressible. Then $$h^{\mathrm{Rok}}_G(X, \mu | \mathcal{F}) = h^{\mathrm{Rok}}_\Gamma(X, \mu | \mathcal{F}).$$ \end{prop} \begin{proof} It suffices to show that for every countable partition $\alpha$, $\sigma \text{-}\mathrm{alg}_G(\alpha) \vee \mathcal{F} = \sigma \text{-}\mathrm{alg}_\Gamma(\alpha) \vee \mathcal{F}$. Indeed, since the transformation associated to each $g \in G$ is $(\Gamma, \mathcal{F})$-expressible, it follows from Lemma \ref{LEM EXPMOVE} that the $\sigma$-algebra $\sigma \text{-}\mathrm{alg}_\Gamma(\alpha) \vee \mathcal{F}$ is $G$-invariant and contains $\alpha$. Therefore $\sigma \text{-}\mathrm{alg}_G(\alpha) \vee \mathcal{F} \subseteq \sigma \text{-}\mathrm{alg}_\Gamma(\alpha) \vee \mathcal{F}$. With the same argument we obtain the reverse containment. \end{proof} The lemma below and the corollaries which follow it provide us with all elements of the pseudo-group $[[E_G^X]]$ which will be needed in forthcoming sections. \begin{lem} \label{LEM SIMPLEMIX} Let $G \curvearrowright (X, \mu)$ be an ergodic {p{$.$}m{$.$}p{$.$}} action. Let $A, B \subseteq X$ be Borel sets with $0 < \mu(A) \leq \mu(B)$. Then there exists a $\sigma \text{-}\mathrm{alg}_G(\{A, B\})$-expressible function $\theta \in [[E_G^X]]$ with $\mathrm{dom}(\theta) = A$ and $\mathrm{rng}(\theta) \subseteq B$. \end{lem} \begin{proof} Let $g_0, g_1, \ldots$ be an enumeration of $G$. 
Set $Z_{g_0}^\theta = A \cap g_0^{-1} \cdot B$ and inductively define $$Z_{g_n}^\theta = \Big( A \setminus \Big( \textstyle{\bigcup_{i = 0}^{n-1} Z_{g_i}^\theta} \Big) \Big) \bigcap g_n^{-1} \cdot \Big( B \setminus \Big( \textstyle{\bigcup_{i = 0}^{n - 1} g_i \cdot Z_{g_i}^\theta} \Big) \Big).$$ Define $\theta : \bigcup_{n \in \mathbb{N}} Z_{g_n}^\theta \rightarrow B$ by setting $\theta(x) = g_n \cdot x$ for $x \in Z_{g_n}^\theta$. Clearly $\theta$ is $\sigma \text{-}\mathrm{alg}_G(\{A, B\})$-expressible. Set $C = A \setminus \mathrm{dom}(\theta)$. Towards a contradiction, suppose that $\mu(C) > 0$. Then we have $$\mu(\mathrm{rng}(\theta)) = \mu(\mathrm{dom}(\theta)) < \mu(A) \leq \mu(B).$$ So $\mu(B \setminus \mathrm{rng}(\theta)) > 0$ and by ergodicity there is $n \in \mathbb{N}$ with $$\mu \Big( C \cap g_n^{-1} \cdot (B \setminus \mathrm{rng}(\theta)) \Big) > 0.$$ However, this implies that $\mu(C \cap Z_{g_n}^\theta) > 0$, a contradiction. We conclude that, up to a null set, $\mathrm{dom}(\theta) = A$. \end{proof} \begin{cor} \label{COR MAKEPART} Let $G \curvearrowright (X, \mu)$ be a {p{$.$}m{$.$}p{$.$}} ergodic action. If $C \subseteq B \subseteq X$ and $\mu(C) = \frac{1}{n} \cdot \mu(B)$ with $n \in \mathbb{N}$, then there is a $\sigma \text{-}\mathrm{alg}_G(\{C, B\})$-measurable partition $\xi$ of $B$ into $n$ pieces with each piece having measure $\frac{1}{n} \cdot \mu(B)$ and with $C \in \xi$. \end{cor} \begin{proof} Set $C_1 = C$. Once $\sigma \text{-}\mathrm{alg}_G(\{C, B\})$-measurable subsets $C_1, \ldots, C_{k-1}$ of $B$, each of measure $\frac{1}{n} \cdot \mu(B)$, have been defined, we apply Lemma \ref{LEM SIMPLEMIX} to get a $\sigma \text{-}\mathrm{alg}_G(\{C, B\})$-expressible function $\theta \in [[E_G^X]]$ with $\mathrm{dom}(\theta) = C$ and $$\mathrm{rng}(\theta) \subseteq B \setminus (C_1 \cup \cdots \cup C_{k-1}).$$ We set $C_k = \theta(C)$. We note that $\mu(C_k) = \frac{1}{n} \cdot \mu(B)$ and $C_k \in \sigma \text{-}\mathrm{alg}_G(\{C, B\})$ by Lemma \ref{LEM EXPMOVE}. Finally, set $\xi = \{C_1, \ldots, C_n\}$. \end{proof} In the corollary below we write $\mathrm{id}_A \in [[E_G^X]]$ for the identity function on $A$ for $A \subseteq X$. \begin{cor} \label{COR PERMUTE} Let $G \curvearrowright (X, \mu)$ be an ergodic {p{$.$}m{$.$}p{$.$}} action. If $\xi = \{C_1, \ldots, C_n\}$ is a collection of pairwise disjoint Borel sets of equal measure, then there is a $\sigma \text{-}\mathrm{alg}_G(\xi)$-expressible function $\theta \in [[E_G^X]]$ which cyclically permutes the members of $\xi$, meaning that $\mathrm{dom}(\theta) = \mathrm{rng}(\theta) = \cup \xi$, $\theta(C_k) = C_{k+1}$ for $1 \leq k < n$, $\theta(C_n) = C_1$, and $\theta^n = \mathrm{id}_{\cup \xi}$. \end{cor} \begin{proof} By Lemma \ref{LEM SIMPLEMIX}, for each $2 \leq k \leq n$ there is a $\sigma \text{-}\mathrm{alg}_G(\xi)$-expressible function $\phi_k \in [[E_G^X]]$ with $\mathrm{dom}(\phi_k) = C_1$ and $\mathrm{rng}(\phi_k) = C_k$. We define $\theta : \cup \xi \rightarrow \cup \xi$ by $$\theta(x) = \begin{cases} \phi_2(x) & \text{if } x \in C_1\\ \phi_{k+1} \circ \phi_k^{-1}(x) & \text{if } x \in C_k \text{ and } 1 < k < n\\ \phi_n^{-1}(x) & \text{if } x \in C_n. \end{cases}$$ Then $\theta$ cyclically permutes the members of $\xi$ and has order $n$. Finally, each restriction $\theta \restriction C_k$ is $\sigma \text{-}\mathrm{alg}_G(\xi)$-expressible by Lemma \ref{LEM EXPGROUP} and thus $\theta$ is $\sigma \text{-}\mathrm{alg}_G(\xi)$-expressible. 
\end{proof} \section{Countably infinite partitions} \label{SECT INFT} In this section, we show how to replace countably infinite partitions by finite ones. This will allow us to carry-out counting arguments in proving the main theorem. Our work in this section retraces and improves upon methods used by the author in \cite{S12}. We improve upon \cite{S12} in two ways. First, we work in a relative setting where a $G$-invariant sub-$\sigma$-algebra $\mathcal{F}$ is given, and second we show that one can control the Shannon entropy of the newly constructed partition. For a finite set $S$ we let $S^{< \omega}$ denote the set of all finite words with letters in $S$ (the $\omega$ in the superscript denotes the first infinite ordinal). For $z \in S^{< \omega}$ we let $|z|$ denote the length of the word $z$. The lemma below is a relativized version of a similar result due to Krieger \cite{Kr70}. \begin{lem} \label{LEM KRIEGER} Let $(X, \mu)$ be a probability space, let $\mathcal{F}$ be a sub-$\sigma$-algebra, let $(Y, \nu)$ be the associated factor of $(X, \mu)$, and let $\mu = \int_Y \mu_y \ d \nu(y)$ be the disintegration of $\mu$ over $\nu$. If $\xi$ is a countable Borel partition of $X$ with $\mathrm{H}(\xi | \mathcal{F}) < \infty$, then there is a Borel function $L : Y \times \xi \rightarrow \{0, 1, 2\}^{<\omega}$ which has finite average length $$\int_Y \sum_{C \in \xi} |L(y, C)| \cdot \mu_y(C) \ d \nu(y) < \infty$$ and such that $\nu$-almost-every restriction $L(y, \cdot) : \xi \rightarrow \{0, 1, 2\}^{< \omega}$ is essentially injective in the sense that $L(y, C) = L(y, C')$ and $C \neq C'$ implies $\mu_y(C) \cdot \mu_y(C') = 0$. \end{lem} \begin{proof} If $\xi$ is finite then we can simply fix an injection $L : \xi \rightarrow \{0, 1, 2\}^k$ for some $k \in \mathbb{N}$. So suppose that $\xi$ is infinite. Say $\xi = \{C_1, C_2, \ldots\}$. For $y \in Y$ let $\sigma(y) : \mathbb{N} \rightarrow \mathbb{N}$ be the unique bijection satisfying for all $n \in \mathbb{N}$: either $\mu_y(C_{\sigma(y)(n+1)}) < \mu_y(C_{\sigma(y)(n)})$ or else $\mu_y(C_{\sigma(y)(n+1)}) = \mu_y(C_{\sigma(y)(n)})$ and $\sigma(y)(n+1) > \sigma(y)(n)$. Since each map $y \mapsto \mu_y(C_k)$ is Borel (see \S\ref{SECT PRELIM}), we see that $\sigma : Y \rightarrow \mathbb{N}^\mathbb{N}$ is Borel. For each $n$ let $t(n) \in \{0, 1, 2\}^{<\omega}$ be the ternary expansion of $n$. Note that $|t(n)| \leq \log_3(n) + 1$. For $y \in Y$ define $$L(y, C_{\sigma(y)(n)}) = t(n) \text{ for } n \in \mathbb{N} \quad \text{and} \quad L(y, C_k) = t(1) \text{ for } k \in \mathbb{N} \setminus \sigma(y)(\mathbb{N}).$$ If $|t(n)| = |L(y, C_{\sigma(y)(n)})| > - \log \mu_y(C_{\sigma(y)(n)})$ then for all $k \leq n$ $$\mu_y(C_{\sigma(y)(k)}) \geq \mu_y(C_{\sigma(y)(n)}) > e^{-|t(n)|} \geq \frac{1}{e} \cdot e^{- \log_3(n)} = \frac{1}{e} \cdot n^{- \log_3(e)}.$$ Thus $$\frac{1}{e} \cdot n^{1 - \log_3(e)} = n \cdot \frac{1}{e} \cdot n^{- \log_3(e)} < \sum_{k = 1}^n \mu_y(C_{\sigma(y)(k)}) \leq 1,$$ and hence $n \leq \exp(1 / (1 - \log_3(e)))$. Letting $m$ be the least integer greater than $\exp(1 / (1 - \log_3(e)))$, we have that $|L(y, C_{\sigma(y)(n)})| \leq - \log \mu_y(C_{\sigma(y)(n)})$ for all $y \in Y$ and all $n > m$. 
Therefore, recalling that $\mu_y(C_k) = 0$ for all $k \in \mathbb{N} \setminus \sigma(y)(\mathbb{N})$, we have \begin{align*} \sum_{n \in \mathbb{N}} |L(y, C_n)| \cdot \mu_y(C_n) & = \sum_{n \in \mathbb{N}} |L(y, C_{\sigma(y)(n)})| \cdot \mu_y(C_{\sigma(y)(n)})\\ & \leq m \cdot |t(m)| + \sum_{n > m} |L(y, C_{\sigma(y)(n)})| \cdot \mu_y(C_{\sigma(y)(n)}) \\ & \leq m \cdot |t(m)| + \sum_{n \in \mathbb{N}} - \mu_y(C_n) \log \mu_y(C_n) \\ & = m \cdot |t(m)| + \mathrm{H}_{\mu_y}(\xi). \end{align*} Integrating both sides over $Y$ and using $\int_Y \mathrm{H}_{\mu_y}(\xi) \ d \nu(y) = \mathrm{H}(\xi | \mathcal{F}) < \infty$ completes the proof. \end{proof} \begin{prop} \label{PROP FINGEN} Let $G \curvearrowright (X, \mu)$ be an ergodic {p{$.$}m{$.$}p{$.$}} action, let $\mathcal{F}$ be a $G$-invariant sub-$\sigma$-algebra, and let $\xi$ be a countable Borel partition with $\mathrm{H}(\xi | \mathcal{F}) < \infty$. Then for every $\epsilon > 0$ there is a finite Borel partition $\alpha$ with $\sigma \text{-}\mathrm{alg}_G(\alpha) \vee \mathcal{F} = \sigma \text{-}\mathrm{alg}_G(\xi) \vee \mathcal{F}$ and $\mathrm{H}(\alpha | \mathcal{F}) < \mathrm{H}(\xi | \mathcal{F}) + \epsilon$. \end{prop} \begin{proof} Let $\pi : (X, \mu) \rightarrow (Y, \nu)$ be the factor map associated to $\mathcal{F}$, and let $\mu = \int \mu_y \ d \nu(y)$ be the disintegration of $\mu$ over $\nu$. Apply Lemma \ref{LEM KRIEGER} to obtain a Borel function $L: Y \times \xi \rightarrow \{0, 1, 2\}^{< \omega}$ such that $\nu$-almost-every restriction $L(y, \cdot) : \xi \rightarrow \{0, 1, 2\}^{<\omega}$ is essentially injective and $$\int_Y \sum_{C \in \xi} |L(y, C)| \cdot \mu_y(C) \ d \nu(y) < \infty.$$ We define $\ell : X \rightarrow \{0, 1, 2\}^{< \omega}$ by $$\ell(x) = L(\pi(x), C)$$ for $x \in C \in \xi$. Observe that $\ell$ is $\sigma \text{-}\mathrm{alg}(\xi) \vee \mathcal{F}$-measurable and $$\int_X |\ell(x)| \ d \mu(x) = \int_Y \int_X |\ell(x)| \ d \mu_y(x) \ d \nu(y) = \int_Y \sum_{C \in \xi} |L(y, C)| \cdot \mu_y(C) \ d \nu(y) < \infty.$$ For $n \in \mathbb{N}$ let $\mathcal{P}_n = \{P_n, X \setminus P_n\}$ where $$P_n = \{x \in X \,:\, |\ell(x)| \geq n\}.$$ Then the $P_n$'s are decreasing and have empty intersection. Refine $\mathcal{P}_n$ to $\beta_n = \{X \setminus P_n, B_n^0, B_n^1, B_n^2\}$ where for $i \in \{0, 1, 2\}$ $$B_n^i = \{x \in P_n \,:\, \ell(x)(n) = i\}.$$ For $n \in \mathbb{N}$ define $$\gamma_n = \bigvee_{k \leq n} \beta_k.$$ Since each restriction $L(y, \cdot) : \xi \rightarrow \{0, 1, 2\}^{< \omega}$ is essentially injective we have that \begin{equation} \label{EQN ALPHA} \xi \subseteq \mathcal{F} \vee \bigvee_{n \in \mathbb{N}} \sigma \text{-}\mathrm{alg}(\gamma_n). \end{equation} Fix $0 < \delta < \min(1/4, \epsilon / 2)$ with $$-\delta \cdot \log(\delta) - (1 - \delta) \cdot \log(1 - \delta) + \delta \cdot \log(7) < \epsilon.$$ Since $$\sum_{n \in \mathbb{N}} \mu(P_n) = \int_X |\ell(x)| \ d \mu(x) < \infty$$ we may fix $N \in \mathbb{N}$ so that $\sum_{n = N}^\infty \mu(P_n) < \delta$. Observe that in particular $\mu(P_N) < \delta$ and thus $$\mu(P_N) + \sum_{n = N}^\infty \mu(P_n) < 2 \delta < 1 / 2.$$ For $n \geq N$ we seek to build $\sigma \text{-}\mathrm{alg}_G(\mathcal{P}_n \vee \gamma_{n-1})$-expressible functions $\theta_n \in [[E_G^X]]$ with $\mathrm{dom}(\theta_n) = P_n$ and $$\mathrm{rng}(\theta_n) \subseteq X \setminus \left( P_N \cup \bigcup_{k = N}^{n-1} \theta_k(P_k) \right).$$ We build the $\theta_n$'s by induction on $n \geq N$. 
To begin we note that $\mu(P_N) < \mu(X \setminus P_N)$ and we apply Lemma \ref{LEM SIMPLEMIX} to obtain $\theta_N$. Now assume that $\theta_N, \ldots, \theta_{n-1}$ have been defined and possess the properties stated above. Then since $\gamma_{n-1}$ refines $\mathcal{P}_k \vee \gamma_{k-1}$ for every $k < n$, we obtain from Lemma \ref{LEM EXPMOVE} $$P_N \cup \bigcup_{k = N}^{n-1} \theta_k(P_k) \in \sigma \text{-}\mathrm{alg}_G(\gamma_{n-1}).$$ Also, by our choice of $N$ we have that \begin{align*} \mu(P_n) \leq \mu(P_N) < \frac{1}{2} < 1 - 2 \delta & < 1 - \mu(P_N) - \sum_{k = N}^{n-1} \mu(P_k)\\ & = \mu \left( X \setminus \Big( P_N \cup \textstyle{\bigcup_{k = N}^{n-1}} \, \theta_k(P_k) \Big) \right). \end{align*} Therefore we may apply Lemma \ref{LEM SIMPLEMIX} to obtain $\theta_n$. This defines the functions $\theta_n$, $n \geq N$. Define the partition $\beta = \{X \setminus P, B^0, B^1, B^2\}$ of $X$ by \begin{align*} P & = \bigcup_{n \geq N} \theta_n(P_n); \\ B^i & = \bigcup_{n \geq N} \theta_n(B_n^i). \end{align*} Note that the above expressions do indeed define a partition of $X$ since the images of the $\theta_n$'s are pairwise disjoint. Also define $\mathcal{Q} = \{Q, X \setminus Q\}$ where $$Q = \bigcup_{n \geq N} \theta_n(P_{n+1}).$$ Note that $Q$ is contained in $P$ and so $\beta$ might not refine $\mathcal{Q}$. Set $\alpha = \gamma_N \vee \beta \vee \mathcal{Q}$. Then $\alpha$ is finite. Using Lemma \ref{LEM SHAN} and the facts that $X \setminus P \in \beta \vee \mathcal{Q}$, $\mu(P) < \delta$, and $\mathrm{H}_{\mu_y}(\gamma_N) \leq \mathrm{H}_{\mu_y}(\xi)$ for $\nu$-almost-every $y \in Y$ (since $\xi$ $\mu_y$-almost-everywhere refines $\gamma_N$), we obtain \begin{align*} \mathrm{H}(\alpha | \mathcal{F}) & \leq \mathrm{H}(\gamma_N | \mathcal{F}) + \mathrm{H}(\beta \vee \mathcal{Q}) \\ & = \mathrm{H}(\gamma_N | \mathcal{F}) + \mathrm{H}(\{P, X \setminus P\}) + \mathrm{H}(\beta \vee \mathcal{Q} | \{P, X \setminus P\}) \\ & \leq \mathrm{H}(\gamma_N | \mathcal{F}) - \mu(P) \cdot \log \mu(P) - \mu(X \setminus P) \log \mu(X \setminus P) + \mu(P) \cdot \log(7) \\ & < \mathrm{H}(\gamma_N | \mathcal{F}) + \epsilon \\ & = \int_Y \mathrm{H}_{\mu_y}(\gamma_N) \ d \nu(y) + \epsilon \\ & \leq \int_Y \mathrm{H}_{\mu_y}(\xi) \ d \nu(y) + \epsilon \\ & = \mathrm{H}(\xi | \mathcal{F}) + \epsilon. \end{align*} Thus it only remains to check that $\sigma \text{-}\mathrm{alg}_G(\alpha) \vee \mathcal{F} = \sigma \text{-}\mathrm{alg}_G(\xi) \vee \mathcal{F}$. First notice that the function $\ell$ and all of the partitions $\gamma_n$ and $\mathcal{P}_n$ are $\sigma \text{-}\mathrm{alg}_G(\xi) \vee \mathcal{F}$-measurable and therefore each $\theta_k$ is $\sigma \text{-}\mathrm{alg}_G(\xi) \vee \mathcal{F}$-expressible. It follows from Lemma \ref{LEM EXPMOVE} that $\beta$, $\mathcal{Q}$, and $\alpha$ are $\sigma \text{-}\mathrm{alg}_G(\xi) \vee \mathcal{F}$-measurable. Thus $\sigma \text{-}\mathrm{alg}_G(\alpha) \vee \mathcal{F} \subseteq \sigma \text{-}\mathrm{alg}_G(\xi) \vee \mathcal{F}$. Now we consider the reverse inclusion. By induction and by (\ref{EQN ALPHA}) it suffices to assume that $\gamma_k \subseteq \sigma \text{-}\mathrm{alg}_G(\alpha)$ and prove that $\gamma_{k+1} \subseteq \sigma \text{-}\mathrm{alg}_G(\alpha)$ as well. This is immediate when $k < N$. So assume that $k \geq N$ and that $\gamma_k \subseteq \sigma \text{-}\mathrm{alg}_G(\alpha)$.
Since $\theta_k$ is expressible with respect to $\sigma \text{-}\mathrm{alg}_G(\gamma_k) \subseteq \sigma \text{-}\mathrm{alg}_G(\alpha)$, we have that $$P_{k+1} = \theta_k^{-1}(Q) \in \sigma \text{-}\mathrm{alg}_G(\alpha)$$ by Lemmas \ref{LEM EXPMOVE} and \ref{LEM EXPGROUP}. Therefore $\mathcal{P}_{k+1} \subseteq \sigma \text{-}\mathrm{alg}_G(\alpha)$. Now since $\theta_{k+1}$ is expressible with respect to $\sigma \text{-}\mathrm{alg}_G(\mathcal{P}_{k+1} \vee \gamma_k) \subseteq \sigma \text{-}\mathrm{alg}_G(\alpha)$ we have that for $i \in \{0, 1, 2\}$ $$B_{k+1}^i = \theta_{k+1}^{-1}(B^i) \in \sigma \text{-}\mathrm{alg}_G(\alpha)$$ by Lemmas \ref{LEM EXPMOVE} and \ref{LEM EXPGROUP}. Thus $\beta_{k+1} \subseteq \sigma \text{-}\mathrm{alg}_G(\alpha)$ and we conclude that $\gamma_{k+1} \subseteq \sigma \text{-}\mathrm{alg}_G(\alpha)$. This completes the proof. \end{proof} \section{Finite subequivalence relations} \label{SECT EQREL} The methods of the previous section produce finite generating partitions but do not provide any control over the cardinality or distribution of the partition constructed. Overcoming this difficulty is the main focus of this paper and requires entirely new techniques. We develop these techniques in this section and in Section \ref{SECT ACT}. The goal of this section is to construct finite subequivalence relations which will ultimately be used to replace the traditional role of the Rokhlin lemma and the Shannon--McMillan--Breiman theorem. For an equivalence relation $E$ on $X$ and $x \in X$, we write $[x]_E$ for the $E$-class of $x$. Recall that a set $T \subseteq X$ is a \emph{transversal} for $E$ if $|T \cap [x]_E| = 1$ for almost-every $x \in X$. We will work with equivalence relations which are generated by an element of the pseudo-group in the following sense. \begin{defn} Let $G \curvearrowright (X, \mu)$ be a {p{$.$}m{$.$}p{$.$}} action, let $B \subseteq X$ be a Borel set of positive measure, and let $E$ be an equivalence relation on $B$ with $E \subseteq E_G^X \cap B \times B$. We say that $E$ is \emph{generated by} $\theta \in [[E_G^X]]$ if $\mathrm{dom}(\theta) = \mathrm{rng}(\theta) = B$ and $[x]_E = \{ \theta^i(x) \,:\, i \in \mathbb{Z}\}$ for almost-all $x \in B$. In this case, we write $E = E_\theta$. \end{defn} \begin{lem} \label{LEM AVGMIX} Let $G \curvearrowright (X, \mu)$ be an ergodic {p{$.$}m{$.$}p{$.$}} action, let $B \subseteq X$ have positive measure, let $\alpha$ be a finite partition of $X$, and let $\epsilon > 0$. Then there is an equivalence relation $E$ on $B$ with $E \subseteq E_G^X \cap B \times B$ and $n \in \mathbb{N}$ so that for $\mu$-almost-every $x \in B$, the $E$-class of $x$ has cardinality $n$ and $$\forall A \in \alpha \qquad \frac{\mu(A \cap B)}{\mu(B)} - \epsilon < \frac{|A \cap [x]_E|}{|[x]_E|} < \frac{\mu(A \cap B)}{\mu(B)} + \epsilon.$$ Moreover, $E$ admits a $\sigma \text{-}\mathrm{alg}_G(\alpha \cup \{B\})$-measurable transversal and is generated by a $\sigma \text{-}\mathrm{alg}_G(\alpha \cup \{B\})$-expressible function $\theta : B \rightarrow B$ in $[[E_G^X]]$ which satisfies $\theta^n = \mathrm{id}_B$. \end{lem} \begin{proof} Let $\pi : (X, \mu) \rightarrow (Y, \nu)$ be the factor map associated to the $G$-invariant sub-$\sigma$-algebra generated by $\alpha \cup \{B\}$. Enumerate $\alpha$ as $\alpha = \{A_1, A_2, \ldots, A_p\}$. Set $B' = \pi(B)$ and $\alpha' = \{A_i' \,:\, 1 \leq i \leq p\}$ where $A_i' = \pi(A_i)$. Note that $\alpha'$ is a partition of $(Y, \nu)$ and that $\nu(A_i' \cap B') = \mu(A_i \cap B)$. 
Suppose first that $(Y, \nu)$ is non-atomic. Pick $n \in \mathbb{N}$ satisfying $p / n < \epsilon$, and for $1 \leq i \leq p$ set $$r_i = \lfloor n \nu(A_i' \cap B') / \nu(B') \rfloor.$$ Since $(Y, \nu)$ is non-atomic, we can find a partition $\xi'$ of $B'$ into $n$ pieces each of measure $\frac{1}{n} \cdot \nu(B')$ such that for every $i$, $A_i' \cap B'$ contains at least $r_i$ many classes of $\xi'$. Then at most $p$ many classes of $\xi'$ are not contained in any $A_i' \cap B'$. Set $\xi = \pi^{-1}(\xi')$. Then the classes of $\xi$ lie in $\sigma \text{-}\mathrm{alg}_G(\alpha \cup \{B\})$. Apply Corollary \ref{COR PERMUTE} to get a $\sigma \text{-}\mathrm{alg}_G(\alpha \cup \{B\})$-expressible function $\theta \in [[E_G^X]]$ which cyclically permutes the classes of $\xi$. Set $E = E_\theta$. Then for $\mu$-almost-every $x \in B$, the $E$-class of $x$ has cardinality $n$ and $$\forall i \qquad - \epsilon \leq - \frac{1}{n} < \frac{|A_i \cap [x]_E|}{|[x]_E|} - \frac{\mu(A_i \cap B)}{\mu(B)} \leq \frac{p}{n} < \epsilon.$$ In the case that $(Y, \nu)$ has an atom, we deduce by ergodicity that, modulo a null set, $Y$ is finite. Say $|Y| = m$ and each point in $Y$ has measure $\frac{1}{m}$. Set $n = |B'|$. Clearly there are integers $k_i \in \mathbb{N}$ with $\sum_{i = 1}^p k_i = n$ and $$\frac{\mu(A_i \cap B)}{\mu(B)} = \frac{\nu(A_i' \cap B')}{\nu(B')} = \frac{k_i / m}{n / m} = \frac{k_i}{n}.$$ Let $\xi'$ be the partition of $B'$ into points, and pull back $\xi'$ to a partition $\xi$ of $B$. Now apply Corollary \ref{COR PERMUTE} and follow the argument from the non-atomic case. \end{proof} \begin{cor} \label{COR AVGFUNCMIX} Let $G \curvearrowright (X, \mu)$ be an ergodic {p{$.$}m{$.$}p{$.$}} action, let $B \subseteq X$ have positive measure, let $\epsilon > 0$, and let $F$ be a finite collection of finite-valued Borel functions $f : B \rightarrow \mathbb{R}$. Then there is an equivalence relation $E$ on $B$ with $E \subseteq E_G^X \cap B \times B$ and $n \in \mathbb{N}$ so that for $\mu$-almost-every $x \in B$, the $E$-class of $x$ has cardinality $n$ and $$\forall f \in F \qquad \frac{1}{\mu(B)} \cdot \int_B f \ d \mu - \epsilon < \frac{1}{|[x]_E|} \cdot \sum_{y \in [x]_E} f(y) < \frac{1}{\mu(B)} \cdot \int_B f \ d \mu + \epsilon.$$ Moreover, if $\mathcal{F}$ is a sub-$\sigma$-algebra such that each $f \in F$ is $\mathcal{F}$-measurable, then $E$ admits a $\sigma \text{-}\mathrm{alg}_G(\mathcal{F} \cup \{B\})$-measurable transversal and is generated by a $\sigma \text{-}\mathrm{alg}_G(\mathcal{F} \cup \{B\})$-expressible function $\theta : B \rightarrow B$ in $[[E_G^X]]$ which satisfies $\theta^n = \mathrm{id}_B$. \end{cor} \begin{proof} Define a partition $\alpha$ of $B$ so that $x, y \in B$ lie in the same piece of $\alpha$ if and only if $f(x) = f(y)$ for all $f \in F$. Then $\alpha$ is a finite partition. Now the desired equivalence relation $E$ is obtained from Lemma \ref{LEM AVGMIX}. \end{proof} The conclusions of the previous lemma and corollary are not too surprising since you are allowed to ``see'' the sets which you wish to mix, i.e. you are allowed to use $\sigma \text{-}\mathrm{alg}_G(\alpha \cup \{B\})$. The following proposition however is unexpected. It roughly says that you can achieve the same conclusion even if you are restricted to only seeing a very small sub-$\sigma$-algebra.
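Before turning to that proposition, it may help to see Lemma \ref{LEM AVGMIX} in a concrete (purely illustrative) instance: take $B = X$, $\alpha = \{A, X \setminus A\}$ with $\mu(A) = \frac{1}{3}$, and $\epsilon = \frac{1}{100}$. Assuming the factor associated to $\sigma \text{-}\mathrm{alg}_G(\alpha)$ is non-atomic, the proof above applies with any $n > |\alpha| / \epsilon = 200$: it partitions that factor into $n$ pieces of equal measure and cyclically permutes their preimages, and every class of the resulting equivalence relation contains between $(\frac{1}{3} - \frac{1}{100}) \cdot n$ and $(\frac{1}{3} + \frac{1}{100}) \cdot n$ points of $A$.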
We will use the proposition below in the same fashion one typically uses the Rokhlin lemma and the Shannon--McMillan--Breiman theorem, although technically the proposition below bears more similarity with the Rokhlin lemma and the ergodic theorem. Let us say a few words on the Rokhlin lemma to highlight the similarity. For a free {p{$.$}m{$.$}p{$.$}} action $\mathbb{Z} \curvearrowright (X, \mu)$, $n \in \mathbb{N}$, and $\epsilon > 0$, the Rokhlin lemma provides a Borel set $S \subseteq X$ such that the sets $i \cdot S$, $0 \leq i \leq n - 1$, are pairwise disjoint and union to a set having measure at least $1 - \epsilon$. The set $S$ naturally produces a subequivalence relation $E$ defined as follows. For $x \in X$ set $x_S = (-i) \cdot x$ where $(-i) \cdot x \in S$ and $(-j) \cdot x \not\in S$ for all $0 \leq j < i$. We set $x \ E \ y$ if and only if $x_S = y_S$. Clearly every $E$ class has cardinality at least $n$, and a large measure of $E$-classes have cardinality precisely $n$. A key fact which is frequently used in classical results such as Krieger's theorem is that the equivalence relation $E$ is easily described. Specifically, $S$ is small since $\mu(S) \leq 1 / n$, and so $E$ can be defined by using the small sub-$\sigma$-algebra $\sigma \text{-}\mathrm{alg}_\mathbb{Z}(\{S\})$. \begin{prop} \label{PROP RLEM} Let $G \curvearrowright (X, \mu)$ be an ergodic {p{$.$}m{$.$}p{$.$}} action with $(X, \mu)$ non-atomic, let $\alpha$ be a finite collection of Borel subsets of $X$, let $\epsilon > 0$, and let $N \in \mathbb{N}$. Then there are $n \geq N$, Borel sets $S_1, S_2 \subseteq X$ with $\mu(S_1) + \mu(S_2) < \epsilon$, and a $\sigma \text{-}\mathrm{alg}_G(\{S_1, S_2\})$-expressible $\theta \in [E_G^X]$ such that $E_\theta$ admits a $\sigma \text{-}\mathrm{alg}_G(\{S_1, S_2\})$-measurable transversal, and for almost-every $x \in X$ we have $|[x]_{E_\theta}| = n$ and $$\forall A \in \alpha \qquad \mu(A) - \epsilon < \frac{|A \cap [x]_{E_\theta}|}{|[x]_{E_\theta}|} < \mu(A) + \epsilon.$$ \end{prop} \begin{proof} Pick $m > \max(4 / \epsilon, \ N)$ with $m \in \mathbb{N}$ and $$|\alpha| \cdot \log_2(m + 1) < \frac{\epsilon}{4} \cdot m.$$ Let $S_1 \subseteq X$ be any Borel set with $\mu(S_1) = \frac{1}{m} < \frac{\epsilon}{4}$. Apply Corollaries \ref{COR MAKEPART} and \ref{COR PERMUTE} to obtain a $\sigma \text{-}\mathrm{alg}_G(\{S_1\})$-expressible function $h \in [E_G^X]$ such that $\mathrm{dom}(h) = \mathrm{rng}(h) = X$, $h^m = \mathrm{id}_X$, and such that $\{h^i(S_1) \,:\, 0 \leq i < m\}$ is a partition of $X$. The induced Borel equivalence relation $E_h$ is finite, in fact almost-every $E_h$-class has cardinality $m$, and it has $S_1$ as a transversal. We imagine the classes of $E_h$ as extending horizontally to the right, and we visualize $S_1$ as a vertical column. We consider the distribution of $\alpha \restriction [s]_{E_h}$ for each $s \in S_1$. 
For $A \in \alpha$ define $d_A : S_1 \rightarrow \mathbb{R}$ by $$d_A(s) = \frac{|A \cap [s]_{E_h}|}{|[s]_{E_h}|} = \frac{1}{m} \cdot \Big| A \cap [s]_{E_h} \Big|.$$ Note that for each $A \in \alpha$ $$\int_{S_1} d_A \ d \mu = \frac{1}{m} \cdot \mu(A) = \mu(S_1) \cdot \mu(A).$$ By Corollary \ref{COR AVGFUNCMIX} there is $k \in \mathbb{N}$ and an equivalence relation $E_v \subseteq E_G^X \cap S_1 \times S_1$ on $S_1$ such that for almost every $s \in S_1$, the $E_v$-class of $s$ has cardinality $k$ and $$\forall A \in \alpha \qquad \mu(A) - \epsilon < \frac{1}{|[s]_{E_v}|} \cdot \sum_{s' \in [s]_{E_v}} d_A(s') < \mu(A) + \epsilon.$$ Moreover, if we let $\mathcal{F}$ denote the $G$-invariant sub-$\sigma$-algebra generated by the functions $d_A$, $A \in \alpha$, then $E_v$ admits a $\sigma \text{-}\mathrm{alg}_G(\mathcal{F} \cup \{S_1\})$-measurable transversal $T$ and is generated by a $\sigma \text{-}\mathrm{alg}_G(\mathcal{F} \cup \{S_1\})$-expressible function $v \in [[E_G^X]]$ which satisfies $\mathrm{dom}(v) = \mathrm{rng}(v) = S_1$ and $v^k = \mathrm{id}_{S_1}$. Let $E = E_v \vee E_h$ be the equivalence relation generated by $E_v$ and $E_h$. Then $T \subseteq S_1$ is a transversal for $E$, and for every $s \in T$ $$\Big| [s]_E \Big| = \sum_{s' \in [s]_{E_v}} \Big| [s']_{E_h} \Big| = k \cdot m.$$ Setting $n = k \cdot m \geq N$, we have that almost every $E$-class has cardinality $n$. Also, for every $A \in \alpha$ and $s \in T$ we have $$\frac{|A \cap [s]_E|}{|[s]_E|} = \frac{1}{k \cdot m} \cdot \sum_{s' \in [s]_{E_v}} \Big| A \cap [s']_{E_h} \Big| = \frac{1}{|[s]_{E_v}|} \cdot \sum_{s' \in [s]_{E_v}} d_A(s').$$ It follows that for $\mu$-almost-every $x \in X$ $$\forall A \in \alpha \qquad \mu(A) - \epsilon < \frac{|A \cap [x]_E|}{|[x]_E|} < \mu(A) + \epsilon.$$ Now consider the partition $\xi = \{T_{i,j} \,:\, 0 \leq i < k, \ 0 \leq j < m\}$ of $X$ where $$T_{i,j} = h^j \circ v^i (T).$$ Note that $T_{i,j} \in \sigma \text{-}\mathrm{alg}_G(\mathcal{F} \cup \{S_1\})$ by Lemmas \ref{LEM EXPMOVE} and \ref{LEM EXPGROUP}. We will define a function $\theta \in [E_G^X]$ which generates $E$ by defining $\theta$ on each piece of $\xi$. We define $$\theta \restriction T_{i,j} = \begin{cases} h \restriction T_{i,j} & \text{if } j + 1 < m \\ v \circ h \restriction T_{i,j} & \text{if } j + 1 = m. \end{cases}$$ In regard to the second case above, one should observe that $h(T_{i,m-1}) = T_{i,0}$ since $h^m = \mathrm{id}_X$. Since $v$ satisfies $v^k = \mathrm{id}_{S_1}$ and $n = k \cdot m$, we see that $\theta$ satisfies $\theta^n = \mathrm{id}_X$. We also have $E = E_\theta$. Finally, $\theta$ is $\sigma \text{-}\mathrm{alg}_G(\mathcal{F} \cup \{S_1\})$-expressible since each restriction $\theta \restriction T_{i, j}$ is $\sigma \text{-}\mathrm{alg}_G(\mathcal{F} \cup \{S_1\})$-expressible by Lemma \ref{LEM EXPGROUP}. To complete the proof, we must find a Borel set $S_2 \subseteq X$ with $\mu(S_2) < \frac{3}{4} \cdot \epsilon < \epsilon - \mu(S_1)$ such that $\mathcal{F} \subseteq \sigma \text{-}\mathrm{alg}_G(\{S_1, S_2\})$. Notice that $|\mathrm{rng}(d_A)| \leq m + 1$ for every $A \in \alpha$ and therefore the product map $$d_\alpha = \prod_{A \in \alpha} d_A : S_1 \rightarrow \Big\{ 0, \frac{1}{m}, \frac{2}{m}, \ldots, 1 \Big\}^\alpha$$ has an image of cardinality at most $(m + 1)^{|\alpha|}$. Set $\ell = \lceil (\epsilon / 4) \cdot m \rceil$ (i.e. the least integer greater than or equal to $(\epsilon / 4) \cdot m$). Since $(\epsilon / 4) \cdot m > 1$ we have that $\ell < (\epsilon / 2) \cdot m$. 
By our choice of $m$ we have $$(m + 1)^{|\alpha|} < 2^{(\epsilon / 4) \cdot m} \leq 2^\ell.$$ Therefore there is an injection $$r : \{0, 1/m, \ldots, 1\}^\alpha \rightarrow \{0, 1\}^\ell.$$ Now we will define $S_2$ so that, for every $s \in S_1$, the integers $\{1 \leq i \leq \ell \,:\, h^i(s) \in S_2\}$ will encode the value $r \circ d_\alpha(s)$. Specifically, we define $$S_2 = \{h^i(s) \,:\, 1 \leq i \leq \ell, \ s \in S_1, \ r(d_\alpha(s))(i) = 1\}.$$ We have that $S_2 \subseteq \bigcup_{1 \leq i \leq \ell} h^i(S_1)$ and therefore $$\mu(S_2) \leq \ell \cdot \mu(S_1) < \left( \frac{\epsilon}{2} \cdot m \right) \cdot \frac{1}{m} = \frac{\epsilon}{2}$$ as required. Finally, we check that $\mathcal{F} \subseteq \sigma \text{-}\mathrm{alg}_G(\{S_1, S_2\})$. Fix $p \in \{0, 1/m, \ldots, 1\}^\alpha$. Set $$I_p^0 = \{1 \leq i \leq \ell \,:\, r(p)(i) = 0\} \quad \text{and} \quad I_p^1 = \{1 \leq i \leq \ell \,:\, r(p)(i) = 1\}.$$ Then for $s \in S_1$ we have \begin{align*} d_\alpha(s) = p & \Longleftrightarrow r(d_\alpha (s)) = r(p) \\ & \Longleftrightarrow (\forall i \in I_p^0) \ \ h^i(s) \not\in S_2 \quad \text{and} \quad (\forall i \in I_p^1) \ \ h^i(s) \in S_2 \\ & \Longleftrightarrow s \in S_1 \cap \left( \bigcap_{i \in I_p^0} h^{-i}(X \setminus S_2) \right) \cap \left( \bigcap_{i \in I_p^1} h^{-i}(S_2) \right). \end{align*} So $d_\alpha^{-1}(p) \in \sigma \text{-}\mathrm{alg}_G(\{S_1, S_2\})$ by Lemmas \ref{LEM EXPMOVE} and \ref{LEM EXPGROUP}. Thus $\mathcal{F} \subseteq \sigma \text{-}\mathrm{alg}_G(\{S_1, S_2\})$. \end{proof} \section{Distributions on finite sets} In this section we present a few counting lemmas from information theory which we will need. These facts are well known and were used in classical proofs of Krieger's finite generator theorem. At the end of this section we will briefly sketch why replacing the Rokhlin lemma and the Shannon--McMillan--Breiman theorem in the classical proof of Krieger's theorem with Proposition \ref{PROP RLEM} does not (yet) result in a proof of our main theorem. This will illustrate what new techniques are required and will motivate the technical constructions in the next section. For a finite probability vector $\bar{p}$, $n \in \mathbb{N}$, and $\epsilon \geq 0$, we let $L_{\bar{p},\epsilon}^n$ be the set of functions $\ell : \{0, \ldots, n-1\} \rightarrow \{0, \ldots, |\bar{p}|-1\}$ which approximate the distribution of $\bar{p}$ in the sense that $$\forall 0 \leq t < |\bar{p}| \qquad \left| \frac{|\ell^{-1}(t)|}{n} - p_t \right| \leq \epsilon.$$ Similarly, if $(X, \mu)$ is a probability space and $\xi$ is a finite partition of $X$, then we let $L_{\xi, \epsilon}^n$ be the set of functions $\ell : \{0, 1, \ldots, n - 1\} \rightarrow \xi$ such that $$\forall C \in \xi \qquad \left| \frac{|\ell^{-1}(C)|}{n} - \mu(C) \right| \leq \epsilon.$$ If $\xi$ is a finite partition of $(X, \mu)$ and $\theta : (X, \mu) \rightarrow (X, \mu)$ is a measure-preserving bijection with every $\theta$-orbit having cardinality $n$, then we associate to each $x \in X$ its \emph{$(\xi, \theta)$-name} $\mathcal{N}_\xi^\theta(x) \in L_{\xi, \infty}^n$ defined by setting $\mathcal{N}_\xi^\theta(x)(i) = C$ if $\theta^i(x) \in C \in \xi$. If $\xi$ and $\zeta$ are finite partitions of $(X, \mu)$ and $\xi$ is finer than $\zeta$, then we define the \emph{coarsening map} $\pi_\zeta : \xi \rightarrow \zeta$ to be the unique map satisfying $C \subseteq \pi_\zeta(C)$ for all $C \in \xi$. 
By applying $\pi_\zeta$ coordinate-wise, we obtain a map $\pi_\zeta : L_{\xi, \infty}^n \rightarrow L_{\zeta, \infty}^n$. \begin{lem} \label{LEM RELSTIRLING} Let $(X, \mu)$ be a probability space and let $\xi$ and $\zeta$ be finite partitions of $X$. Suppose that $\xi$ refines $\zeta$ and let $\pi_\zeta : \xi \rightarrow \zeta$ be the coarsening map. Then for every $\kappa > 0$ there is $\epsilon_0 > 0$ so that for all $0 < \epsilon < \epsilon_0$, all sufficiently large $n$, and every $z \in L_{\zeta, \epsilon}^n$ $$\exp \Big( n \cdot \mathrm{H}(\xi | \zeta) - n \cdot \kappa \Big) \leq \Big| \Big\{ c \in L_{\xi, \epsilon}^n \,:\, \pi_\zeta(c) = z \Big\} \Big| \leq \exp \Big( n \cdot \mathrm{H}(\xi | \zeta) + n \cdot \kappa \Big).$$ \end{lem} \begin{proof} This is a well known fact from information theory which can be quickly deduced from Stirling's formula. See \cite[Lemma 2.13]{CsKo}. \end{proof} By taking $\zeta$ to be the trivial partition in the previous lemma, we obtain the following. \begin{cor} \label{COR STIRLING} Let $\bar{p}$ be a finite probability vector. Then for every $\kappa > 0$ there is $\epsilon_0 > 0$ so that for all $0 < \epsilon < \epsilon_0$ and all sufficiently large $n$ $$\exp \Big( n \cdot \mathrm{H}(\bar{p}) - n \cdot \kappa \Big) \leq \Big| L_{\bar{p}, \epsilon}^n \Big| \leq \exp \Big( n \cdot \mathrm{H}(\bar{p}) + n \cdot \kappa \Big).$$ \end{cor} For $x \in \mathbb{R}$ we write $\lfloor x \rfloor$ and $\lceil x \rceil$ for the greatest integer less than or equal to $x$ and the least integer greater than or equal to $x$, respectively. \begin{cor} \label{COR CHOOSE} Fix $0 < \delta < 1$. Then for every $\kappa > 0$ and for all sufficiently large $n$ we have $$\binom{n}{\lfloor \delta \cdot n \rfloor} \leq \exp \Big( n \cdot \mathrm{H}(\delta, 1 - \delta) + n \cdot \kappa\Big).$$ \end{cor} \begin{proof} Set $\bar{p} = (1 - \delta, \delta)$. By definition $\binom{n}{\lfloor \delta \cdot n \rfloor}$ is the number of subsets of $\{0, \ldots, n-1\}$ having cardinality $\lfloor \delta \cdot n \rfloor$. Such subsets naturally correspond, via their characteristic functions, to elements of $L_{\bar{p}, \epsilon}^n$ when $n > 1 / \epsilon$. Thus when $n > 1 / \epsilon$ we have $\binom{n}{\lfloor \delta \cdot n \rfloor} \leq |L_{\bar{p}, \epsilon}^n|$. Now apply Corollary \ref{COR STIRLING}. \end{proof} The normalized Hamming metric $d_{\mathrm{Ham}}$ on the set $L_{\bar{p}, \infty}^n$ is defined by $$d_{\mathrm{Ham}}(\ell, \ell') = \frac{1}{n} \cdot | \{i \,:\, \ell(i) \neq \ell'(i)\} |.$$ \begin{cor} \label{COR SEP} Let $\bar{p}$ be a finite probability vector. Then for every $\kappa > 0$ there are $\delta, \epsilon_0 > 0$ so that for all $0 < \epsilon < \epsilon_0$ and all sufficiently large $n$ there exists $K \subseteq L_{\bar{p}, \epsilon}^n$ satisfying $d_{\mathrm{Ham}}(k, k') > 2 \delta$ for all $k \neq k' \in K$ and $|K| \geq \exp(n \cdot \mathrm{H}(\bar{p}) - n \cdot \kappa)$. \end{cor} \begin{proof} Fix $\delta, \epsilon_0 > 0$ so that $$\mathrm{H}(2 \delta, 1 - 2 \delta) + 2 \delta \cdot \log |\bar{p}| < \kappa / 3$$ and $|L_{\bar{p}, \epsilon}^n| \geq \exp(n \cdot \mathrm{H}(\bar{p}) - n \cdot \kappa / 3)$ for all $0 < \epsilon < \epsilon_0$ and for sufficiently large $n$. For $V \subseteq L_{\bar{p}, \infty}^n$ let $$B(V; \rho) = \{\ell \in L_{\bar{p}, \infty}^n \,:\, \exists v \in V \ \ d_{\mathrm{Ham}}(\ell, v) \leq \rho\}$$ be the ball about $V$ of radius $\rho$. 
Every element of $B(V; \rho)$ differs from some $v \in V$ in at most $\lfloor \rho \cdot n \rfloor$ coordinates, and hence can be obtained from $v$ by selecting $\lfloor \rho \cdot n \rfloor$ coordinates (containing all positions of disagreement) and redefining the letters there arbitrarily. Therefore for $0 < \rho < 1$ $$\Big| B(V; \rho) \Big| \leq |V| \cdot \binom{n}{\lfloor \rho \cdot n \rfloor} \cdot |\bar{p}|^{\rho \cdot n}.$$ Fix $0 < \epsilon < \epsilon_0$. For each $n$ let $K_n \subseteq L_{\bar{p}, \epsilon}^n$ be maximal with the property that $d_{\mathrm{Ham}}(k, k') > 2 \delta$ for all $k \neq k' \in K_n$. Then by maximality of $K_n$ we have $L_{\bar{p}, \epsilon}^n \subseteq B(K_n; 2 \delta)$ and thus $$| L_{\bar{p}, \epsilon}^n | \leq | B(K_n; 2 \delta) | \leq |K_n| \cdot \binom{n}{\lfloor 2 \delta \cdot n \rfloor} \cdot |\bar{p}|^{2 \delta \cdot n}.$$ Solving for $|K_n|$, applying Corollary \ref{COR CHOOSE}, and letting $n$ be sufficiently large gives \begin{align*} |K_n| & \geq | L_{\bar{p}, \epsilon}^n | \cdot \binom{n}{\lfloor 2 \delta \cdot n \rfloor}^{-1} \cdot |\bar{p}|^{- 2 \delta \cdot n}\\ & \geq \exp(n \cdot \mathrm{H}(\bar{p}) - n \cdot \kappa / 3 - n \cdot \mathrm{H}(2 \delta, 1 - 2 \delta) - n \cdot \kappa / 3 - n \cdot 2 \delta \cdot \log |\bar{p}|)\\ & \geq \exp(n \cdot \mathrm{H}(\bar{p}) - n \cdot \kappa).\qedhere \end{align*} \end{proof} We present one more technical lemma we will need. \begin{lem} \label{LEM J} Let $\bar{p}$ be a finite probability vector, let $\epsilon, \delta > 0$, and let $n \in \mathbb{N}$. If $\ell \in L_{\bar{p}, \epsilon}^n$ then there is $J \subseteq \{0, \ldots, n-1\}$ such that $|J| \leq (\epsilon n + 1) \cdot |\bar{p}| + \delta n$ and $$\forall 0 \leq t < |\bar{p}| \qquad \frac{1}{n} \cdot \Big| \{ i : \ell(i) = t\} \setminus J \Big| \leq (1 - \delta) p_t.$$ \end{lem} \begin{proof} For each $t$ we have $|\{i : \ell(i) = t\}| \leq n p_t + n \epsilon$. So we may choose $J$ so that for every $t$ $$|J \cap \{i : \ell(i) = t\}| = \max(0, \ \lceil |\{i : \ell(i) = t\}| - (1 - \delta) n p_t \rceil) \leq \lceil \epsilon n + \delta n p_t\rceil.$$ Then $J$ will have the desired property and \begin{equation*} |J| \leq \sum_{t = 0}^{|\bar{p}| - 1} \lceil \epsilon n + \delta n p_t \rceil \leq \epsilon n \cdot |\bar{p}| + \delta n + |\bar{p}| = (\epsilon n + 1) |\bar{p}| + \delta n.\qedhere \end{equation*} \end{proof} Before closing this section we briefly clarify to the reader what new methods are required in order to prove the main theorem. Let us consider the simplest setting where partitions and probability vectors are finite, $\mathcal{F} = \{X, \varnothing\}$ is trivial, and $r = 1$. The argument we present below is simply a sketch intended to give some intuition and motivation. Let $G \curvearrowright (X, \mu)$ be a {p{$.$}m{$.$}p{$.$}} ergodic action with $\mu$ non-atomic, let $\xi$ be a finite generating partition, and let $\bar{p}$ be a finite probability vector with $\mathrm{H}(\xi) < \mathrm{H}(\bar{p})$. We would like to construct a generating partition $\alpha = \{A_i : 0 \leq i < |\bar{p}|\}$ with $\mu(A_i) = p_i$ for all $i$. Pick $n_0 \in \mathbb{N}$ and $\epsilon > 0$. By Proposition \ref{PROP RLEM} there are $n \geq n_0$, Borel sets $S_1, S_2 \subseteq X$ with $\mu(S_1) + \mu(S_2) < \epsilon$, and a $\sigma \text{-}\mathrm{alg}_G(\{S_1, S_2\})$-expressible $\theta \in [E_G^X]$ such that $E_\theta$ admits a $\sigma \text{-}\mathrm{alg}_G(\{S_1, S_2\})$-measurable transversal $Y$ and such that for $\mu$-almost-every $x \in X$, the $E_\theta$-class of $x$ has cardinality $n$, and $$\forall C \in \xi \qquad \mu(C) - \epsilon < \frac{|C \cap [x]_{E_\theta}|}{|[x]_{E_\theta}|} < \mu(C) + \epsilon.$$ So we have $\mathcal{N}_\xi^\theta(y) \in L_{\xi, \epsilon}^n$ for almost-every $y \in Y$.
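In the estimate below, $\approx$ denotes equality up to a multiplicative error of the form $\exp(\pm n \kappa)$ for a prescribed small $\kappa > 0$; this is exactly what Corollary \ref{COR STIRLING} provides.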
Since $|L_{\xi, \epsilon}^n| \approx \exp(n \cdot \mathrm{H}(\xi)) < \exp(n \cdot \mathrm{H}(\bar{p})) \approx |L_{\bar{p},\epsilon}^n|$, by beginning with a sufficiently small $\epsilon$ and sufficiently large $n_0$, we can conclude from Corollary \ref{COR STIRLING} that $$|L_{\xi,\epsilon}^n| < |L_{\bar{p},\epsilon}^n|.$$ So there is an injection $f : L_{\xi,\epsilon}^n \rightarrow L_{\bar{p}, \epsilon}^n$. For $y \in Y$ set $c_y = \mathcal{N}_\xi^\theta(y)$ and $a_y = f(c_y)$. Since $\{\theta^i(Y) : 0 \leq i < n\}$ is a partition of $X$, there is a unique partition $\alpha = \{A_i : 0 \leq i < |\bar{p}|\}$ of $X$ satisfying $\mathcal{N}_\alpha^\theta(y) = a_y$ for all $y \in Y$. Specifically, $x \in A_t$ if and only if $a_y(i) = t$ where $y \in Y$ and $0 \leq i < n$ satisfy $\theta^i(y) = x$. The condition $\mathcal{N}_\alpha^\theta(y) = a_y \in L_{\bar{p}, \epsilon}^n$ implies that $|\mu(A_i) - p_i| \leq \epsilon$ for all $i$. Since $f$ is an injection, $c_y = f^{-1}(a_y)$ is determined from $a_y$. It can be deduced from this fact that \begin{equation} \label{EQN FAIL} \xi \subseteq \sigma \text{-}\mathrm{alg}_{\langle \theta \rangle}(\alpha \vee \{Y, X \setminus Y\}). \end{equation} Now it becomes clear what is missing in order to prove our main theorem. We do want $\mu(A_i) = p_i$ instead of $|\mu(A_i) - p_i| \leq \epsilon$, but this is only a minor problem. The major problem is that instead of (\ref{EQN FAIL}) we want $\xi \subseteq \sigma \text{-}\mathrm{alg}_G(\alpha)$, so that $\alpha$ is a generating partition. For this, it would suffice to have both (\ref{EQN FAIL}) hold and $S_1, S_2 \in \sigma \text{-}\mathrm{alg}_G(\alpha)$. In this case, $Y$ would be $\sigma \text{-}\mathrm{alg}_G(\alpha)$-measurable and $\theta$ would be expressible with respect to $\sigma \text{-}\mathrm{alg}_G(\{S_1, S_2\}) \subseteq \sigma \text{-}\mathrm{alg}_G(\alpha)$ and therefore $$\xi \subseteq \sigma \text{-}\mathrm{alg}_{\langle \theta \rangle}(\alpha \vee \{Y, X \setminus Y\}) \subseteq \sigma \text{-}\mathrm{alg}_G(\alpha).$$ So we want to have both (\ref{EQN FAIL}) hold and $S_1, S_2 \in \sigma \text{-}\mathrm{alg}_G(\alpha)$ simultaneously. The first requirement on $\alpha$ uses $\theta$-translates and the second uses $G$-translates. The difficulty is that we must build an $\alpha$ which simultaneously encodes messages under the $\theta$-action and encodes messages under the $G$-action, and these two actions are almost completely unrelated. We solve this problem in the next section. \section{Coding small sets} \label{SECT ACT} The goal of this section is to construct, for any two sets $S_1, S_2$ having small measure, a pre-partition $\beta = \{B_0, B_1\}$ with the property that $\mu(\cup \beta)$ is small and $S_1, S_2 \in \sigma \text{-}\mathrm{alg}^{\text{red}}_G(\beta)$. This task is vital to the proof of the main theorem, as it will connect the $G$-action and the transformation $\theta$ used in Proposition \ref{PROP RLEM}. We first focus our attention on building a pre-partition $\beta$ and a set $R$, $0 < \mu(R) < 1$, with $R \in \sigma \text{-}\mathrm{alg}^{\text{red}}_G(\beta)$. The pre-partition $\beta$ will consist of two (disjoint) sets $B_0, B_1$.
On an intuitive level, it is likely helpful to imagine points in $B_0$ as ``labeled with $0$,'' points in $B_1$ as ``labeled with $1$'', and points in $X \setminus (B_0 \cup B_1)$ as ``unlabeled.'' The condition $R \in \sigma \text{-}\mathrm{alg}^{\text{red}}_G(\beta)$ then roughly means that no matter how the unlabeled points are later labeled, for every $x \in X$ it is possible to determine from the labeling of its orbit whether $x \in R$. This condition is similar to the notions of locally recognizable functions, membership tests, and recognizable sets appearing in \cite{GJS09,GJS12,ST14}. It is the similarity with recognizable sets which led us to use the letter $R$. \begin{figure} \caption{The structure in $G$ which will be needed in constructing $R$ and $\beta = \{B_0, B_1\}$.} \label{fig:WF} \end{figure} A naive but suggestive idea for building $R, B_0, B_1$ is to fix a finite window $W \subseteq G$ and a finite set $F$ containing $W^2$, and label all points in $W \cdot R$ with $1$ (i.e. set $B_1 = W \cdot R$), label all points in $(F \setminus W) \cdot R$ with $0$, and label more points $0$ as needed so that for every $x \not\in R$ there is a point in $W \cdot x$ labeled $0$. It seems plausible then that $x \in R$ if and only if $W \cdot x$ is labeled identically $1$. If so, $R \in \sigma \text{-}\mathrm{alg}^{\text{red}}_G(\beta)$ as desired. This naive approach is the right idea but does not quite work. For example, this may fail if $W$ has too much symmetry, such as if $W$ is a finite subgroup, for one might not be able to distinguish $R$ from $W \cdot R$. This problem can be easily fixed by wisely choosing a ``checkpoint'' $c \in F \setminus W$ and labeling $c \cdot R$ with $1$ in order to break any potential symmetries. In the case of free actions nothing more is required, but for non-free actions additional problems emerge which are a bit tedious to handle. There are a few interacting problems in the case of a non-free action, but in brief the primary problem is that we may have $W \cdot x = \{x\}$ for some $x = c \cdot r$ with $r \in R$. We will overcome this problem by introducing two new points $q_1, q_2 \in F \setminus W$. In fact, we will construct a set $Q = \{q_1, \ldots, q_6\} \subseteq F \setminus W$ of ``query points'' whose labels will hold important information. However $q_3, \ldots, q_6$ will not be needed until the second half of this section. See Figure \ref{fig:WF} for an illustration. The following lemma is well known. \begin{lem} \label{LEM MARKER} Let $G \curvearrowright (X, \mu)$ be a {p{$.$}m{$.$}p{$.$}} action. If $Y \subseteq X$ is Borel and $F \subseteq G$ is finite, then there exists a Borel set $D \subseteq Y$ such that $Y \subseteq F^{-1} F \cdot D$ and $F \cdot d \cap F \cdot d' = \varnothing$ for all $d \neq d' \in D$. In particular, if $\mu(Y) > 0$ then $\mu(D) > 0$. \end{lem} \begin{proof} Define the Borel graph $\Gamma \subseteq Y \times Y$ by $(y, y') \in \Gamma$ if and only if $y \neq y'$ and $F \cdot y \cap F \cdot y' \neq \varnothing$. Since every vertex has finite degree in $\Gamma$, a result of Kechris--Solecki--Todorcevic \cite[Prop 4.2 and Prop. 4.5]{KST99} states that there is a maximal (with respect to containment) Borel set $D$ which is $\Gamma$-independent (i.e. no two elements of $D$ are adjacent). Since $D$ is $\Gamma$-independent we have $F \cdot d \cap F \cdot d' = \varnothing$ for all $d \neq d'$, and since $D$ is maximal we have $Y \subseteq F^{-1} F \cdot D$. 
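Finally, if $\mu(Y) > 0$ and $\mu(D) = 0$, then $\mu(F^{-1} F \cdot D) = 0$ because $F^{-1} F$ is finite and the action preserves $\mu$, contradicting $Y \subseteq F^{-1} F \cdot D$; hence $\mu(D) > 0$.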
\end{proof} In the outline above we mentioned labeling points $0$ as needed so that for most $x \in X$ there is a point in $W \cdot x$ labeled $0$. The lemma below will help us determine where to place these $0$'s. \begin{lem} \label{LEM SMARKER} Let $G \curvearrowright (X, \mu)$ be a {p{$.$}m{$.$}p{$.$}} ergodic action with $(X, \mu)$ non-atomic, and let $\delta > 0$. Then there exists a finite symmetric set $W \subseteq G$ with $1_G \in W$ and a Borel set $D \subseteq X$ such that $0 < \mu(D) < \delta$ and $W \cdot x \cap D \neq \varnothing$ for all $x \in X$. \end{lem} \begin{proof} Since $(X, \mu)$ is non-atomic, we can fix a Borel set $D_1 \subseteq X$ with $0 < \mu(D_1) < \delta / 2$. As $G \curvearrowright (X, \mu)$ is ergodic, we have $\mu(G \cdot D_1) = 1$. So there must be a sufficiently large finite symmetric set $W \subseteq G$ with $1_G \in W$ satisfying $\mu(W \cdot D_1) > 1 - \delta/2$. Now set $D = D_1 \cup (X \setminus W \cdot D_1)$. Then $\mu(D) < \delta$ and $W \cdot D = X$. \end{proof} We will need the following fact from group theory. \begin{lem}[B.H. Neumann, \cite{N}] \label{LEM BHN} Let $G$ be a group, and let $H_i$, $1 \leq i \leq n$, be subgroups of $G$. Suppose there are group elements $g_i \in G$ so that $$G = \bigcup_{i = 1}^n g_i \cdot H_i.$$ Then there is $i$ such that $|G : H_i| < \infty$. \end{lem} The corollary below will allow us to construct the checkpoint $c$. \begin{cor} \label{COR APPBHN} Let $G \curvearrowright (X, \mu)$ be a {p{$.$}m{$.$}p{$.$}} ergodic action with $(X, \mu)$ non-atomic. Let $R \subseteq X$ have positive measure and let $W, T \subseteq G$ be finite. Then there are a Borel set $R' \subseteq R$ with $\mu(R') > 0$ and $c \in G$ such that $c W \cdot R' \cap T \cdot R' = \varnothing$. \end{cor} \begin{proof} Our assumptions imply that almost-every orbit is infinite. So for $\mu$-almost-every $r \in R$ the stability group $\mathrm{Stab}(r) = \{g \in G \,:\, g \cdot r = r\}$ has infinite index in $G$ and thus by Lemma \ref{LEM BHN} $$T \cdot \mathrm{Stab}(r) \cdot W^{-1} = \bigcup_{t \in T} \bigcup_{w \in W} t w^{-1} \cdot (w \mathrm{Stab}(r) w^{-1}) \neq G.$$ As $G$ is countable, there is $c \in G$ and a non-null Borel set $R_0 \subseteq R$ with $$c \not\in T \cdot \mathrm{Stab}(r) \cdot W^{-1}$$ for all $r \in R_0$. It follows that $c W \cdot r \cap T \cdot r = \varnothing$ for all $r \in R_0$. Now apply Lemma \ref{LEM MARKER} to get a positive measure Borel set $R' \subseteq R_0$ with $(c W \cup T) \cdot r \cap (c W \cup T) \cdot r' = \varnothing$ for all $r \neq r' \in R'$. \end{proof} The next lemma will later be applied six times in order to build $Q = \{q_1, \ldots, q_6\}$. \begin{lem} \label{LEM AVOID} Let $G \curvearrowright (X, \mu)$ be a {p{$.$}m{$.$}p{$.$}} ergodic action with $(X, \mu)$ non-atomic. Let $R, Y \subseteq X$ be positive measure Borel sets and let $T \subseteq G$ be finite. Then there are $q \in G$ and a Borel set $R' \subseteq R$ of positive measure such that $q \cdot R' \subseteq Y$ and $q \cdot R' \cap T \cdot R' = \varnothing$. \end{lem} \begin{proof} Since $(X, \mu)$ is non-atomic, we may fix a Borel set $R_0 \subseteq R$ with $\mu(R_0) > 0$ small enough that $\mu(T \cdot R_0) \leq |T| \cdot \mu(R_0) < \mu(Y)$, and hence $\mu(Y \setminus T \cdot R_0) > 0$. By ergodicity, there is $q \in G$ such that $R_0 \cap q^{-1} \cdot (Y \setminus T \cdot R_0)$ has positive measure. Set \begin{equation*} R' = R_0 \cap q^{-1} \cdot (Y \setminus T \cdot R_0). \qedhere \end{equation*} \end{proof} Now we are ready to build $W, F \subseteq G$ and $c, q_1, \ldots, q_6 \in G$ as pictured in Figure \ref{fig:WF}.
We mention that when we later apply this lemma, the set $Y$ will be defined as $Y = \{y \in X : |W \cdot y| > 1\}$. We also mention that the equalities in clauses (v) and (vi) below are mostly due to the fact that $1_G \in W$. In clause (vii) it would be preferable for the intersection to be empty, but due to non-trivial stabilizers the stated containment is the best one can hope for. Following this lemma we will immediately build a pre-partition $\beta$ with $R \in \sigma \text{-}\mathrm{alg}^{\text{red}}_G(\beta)$. \begin{lem} \label{LEM TECH} Let $G \curvearrowright (X, \mu)$ be a {p{$.$}m{$.$}p{$.$}} ergodic action with $(X, \mu)$ non-atomic. Let $Y \subseteq X$ be a Borel set of positive measure, let $W \subseteq G$ be finite and symmetric with $1_G \in W$, and let $m \in \mathbb{N}$. Then there exist $n \in \mathbb{N}$, $F \cup Q \cup \{c\} \subseteq G$, and a Borel set $R \subseteq X$ with $Q = \{q_1, \ldots, q_6\}$, $\mu(R) = \frac{1}{n}$, $n > m \cdot |F|$, and satisfying the following: \begin{enumerate} \item[\rm (i)] $Q \cdot R \subseteq Y$; \item[\rm (ii)] $|(\{c\} \cup Q) \cdot r \setminus W \cdot r| = 7$ for all $r \in R$; \item[\rm (iii)] $( W \cup \{c\} \cup Q )^2 \subseteq F$; \item[\rm (iv)] $F \cdot r \cap F \cdot r' = \varnothing$ for all $r \neq r' \in R$; \item[\rm (v)] $W q \cdot R \cap (W \cup \{c\} \cup Q) \cdot R = q \cdot R$ for every $q \in Q$; \item[\rm (vi)] $c W \cdot R \cap (W \cup \{c\} \cup Q) \cdot R = c \cdot R$; \item[\rm (vii)] $Q c \cdot R \cap (W \cup \{c\} \cup Q) \cdot R \subseteq c \cdot R$; \item[\rm (viii)] for all $r \in R$, either $q_1 c \cdot r \neq c \cdot r$ or $q_2 c \cdot r = c \cdot r$. \end{enumerate} \end{lem} \begin{proof} Set $R_0 = X$. By induction on $1 \leq i \leq 6$ we choose $q_i \in G$ and a Borel set $R_i \subseteq R_{i-1}$ such that $\mu(R_i) > 0$, $q_i \cdot R_i \subseteq Y$, and $$q_i \cdot R_i \cap W (W \cup \{q_j \,:\, j < i\}) \cdot R_i = \varnothing.$$ Both the base case and the inductive steps are taken care of by Lemma \ref{LEM AVOID}. Set $Q = \{q_1, q_2, \ldots, q_6\}$. Then $q_i \cdot R_6 \subseteq q_i \cdot R_i \subseteq Y$ and $|Q \cdot r \setminus W \cdot r| = 6$ for all $r \in R_6$. Now apply Corollary \ref{COR APPBHN} to obtain $c \in G$ and a Borel set $R_c \subseteq R_6$ with $\mu(R_c) > 0$ and $$c W \cdot R_c \cap (\{1_G\} \cup Q^{-1})(W \cup Q \cup W Q) \cdot R_c = \varnothing.$$ Set $F = (W \cup \{c\} \cup Q)^2$ so that (iii) is satisfied. If there is $q \in Q$ with $q c \cdot r = c \cdot r$ for all $r \in R_c$, then set $R' = R_c$ and re-index the elements of $Q$ so that $q_2 = q$. Otherwise, we may re-index $Q$ and find a Borel set $R' \subseteq R_c$ of positive measure with $q_1 c \cdot r \neq c \cdot r$ for all $r \in R'$. Now apply Lemma \ref{LEM MARKER} to obtain a positive measure Borel set $R \subseteq R'$ with $F \cdot r \cap F \cdot r' = \varnothing$ for all $r \neq r' \in R$. By shrinking $R$ if necessary, we may suppose that $\mu(R) = \frac{1}{n}$ for some $n > m \cdot |F|$. Then (iv) is immediately satisfied, (viii) is satisfied since $R \subseteq R'$, and (i) is satisfied since $R \subseteq R_6$. Clause (ii) also holds since, for every $r \in R$, the point $c \cdot r \in c W \cdot r$ lies outside $(W \cup Q) \cdot r$. Recall that $W = W^{-1}$ and $1_G \in W$. Fix $1 \leq i \leq 6$. By the definition of $q_i$ we have $W q_i \cdot R \cap W \cdot R = \varnothing$, and if $j \neq i$ then $W q_i \cdot R \cap q_j \cdot R = \varnothing$.
Also, the definition of $c$ implies that $W q_i \cdot R \cap c \cdot R = \varnothing$. Therefore $$W q_i \cdot R \cap (W \cup \{c\} \cup Q) \cdot R \subseteq q_i \cdot R.$$ This establishes (v) since $1_G \in W$. By definition of $c$ we have $Q c \cdot R \cap (W \cup Q) \cdot R = \varnothing$. So (vii) follows. Similarly, $c W \cdot R \cap (W \cup Q) \cdot R = \varnothing$, which gives one inclusion in (vi), and the reverse inclusion follows from $1_G \in W$. \end{proof} Now we construct a pre-partition $\beta$ with $R \in \sigma \text{-}\mathrm{alg}^{\text{red}}_G(\beta)$. \begin{lem} \label{LEM REC} Let $G \curvearrowright (X, \mu)$ be a {p{$.$}m{$.$}p{$.$}} ergodic action with $(X, \mu)$ non-atomic. Let $W \subseteq G$ be finite and symmetric with $1_G \in W$, and let $D \subseteq X$ be a Borel set with $W \cdot x \cap D \neq \varnothing$ for all $x \in X$. Assume that the set $Y = \{x \in X \,:\, |W \cdot x| \geq 2\}$ has positive measure, and let $F \cup Q \cup \{c\} \subseteq G$ and $R \subseteq X$ be as in Lemma \ref{LEM TECH}. If $\beta = \{B_0, B_1\}$ is a pre-partition satisfying \begin{align} B_1 & \supseteq (W \cup \{c, q_1\}) \cdot R, \quad \text{and}\label{eqn:B1}\\ B_0 & \supseteq (D \setminus F \cdot R) \bigcup \Big( F \cdot R \setminus (W \cup \{c\} \cup Q) \cdot R \Big) \bigcup q_2 \cdot R\label{eqn:B0} \end{align} then $R \in \sigma \text{-}\mathrm{alg}^{\text{red}}_G(\beta)$. \end{lem} \begin{proof} We will use the roman numerals (i) through (viii) to refer to clauses of Lemma \ref{LEM TECH}. If $r \in R$ then it is immediate from the definitions that $(W \cup \{c, q_1\}) \cdot r \subseteq B_1$ and $q_2 \cdot r \in B_0$. So it suffices to show that if $x \not\in R$ then either $(W \cup \{c, q_1\}) \cdot x \cap B_0 \neq \varnothing$ or $q_2 \cdot x \in B_1$. Fix $x \not\in R$. Let $w \in W$ be such that $w \cdot x \in D$. If $w \cdot x \in B_0$ then we are done. So suppose that $w \cdot x \not\in B_0$. Since $w \cdot x \in D \setminus B_0$, by (\ref{eqn:B0}) we must have $w \cdot x \in F \cdot R$. Now $w \cdot x \in F \cdot R \setminus B_0$ so again by (\ref{eqn:B0}) we obtain $w \cdot x \in (W \cup \{c\} \cup Q) \cdot R$. If $x \in B_0$ then we are done since $1_G \in W$. So suppose that $x \not\in B_0$. Since $W$ is symmetric, $x \in W \cdot w \cdot x$ and hence $x \in F \cdot R$ by (iii). Again, $x \in F \cdot R \setminus B_0$ so (\ref{eqn:B0}) implies $x \in (W \cup \{c\} \cup Q) \cdot R$. The previous paragraph shows that if $W \cdot x$ is disjoint from $B_0$ then we must have $x \in (W \cup \{c\} \cup Q) \cdot R$. We will divide the remainder of the argument into three cases: $x \in W \cdot R \setminus R$, $x \in c \cdot R$, and $x \in Q \cdot R$. Along the way we will illuminate why $c, q_1, q_2$ are needed in this construction. Suppose that $x \in W \cdot R \setminus R$. Here the problem is that $W$, together with the stabilizers, may possess too much symmetry: even though $x \not\in R$, it may be that $W \cdot x \subseteq W \cdot R \subseteq B_1$. This problem is resolved by using the checkpoint $c$. Since $x \not\in R$, it follows from (vi) that $c \cdot x \not\in (W \cup \{c\} \cup Q) \cdot R$. By (iii) we have $c \cdot x \in F \cdot R$, so by (\ref{eqn:B0}) we find that $c \cdot x \in B_0$. Thus in the case $x \in W \cdot R \setminus R$ we are done. Now suppose that $x \in c \cdot R$. The problem in this case is that we may have $W \cdot x = \{x\}$, and based on the conditions imposed on $c$ this situation is unavoidable to the best knowledge of the author.
We handle this problem by using the points $q_1$ and $q_2$. Fix $r \in R$ with $x = c \cdot r$. By (viii) we have that either $q_1 c \cdot r \neq c \cdot r$ or $q_2 c \cdot r = c \cdot r$. In the latter case, (\ref{eqn:B1}) gives $$q_2 \cdot x = q_2 c \cdot r = c \cdot r \in B_1$$ and we are done. So assume that $q_1 c \cdot r \neq c \cdot r$. Then $q_1 c \cdot r \not\in c \cdot R$ by (iii) and (iv) and so by (vii) $$q_1 c \cdot r \not\in (W \cup \{c\} \cup Q) \cdot R.$$ As $q_1 c \in F$ by (iii), we find that $$q_1 \cdot x = q_1 c \cdot r \in F \cdot R \setminus (W \cup \{c\} \cup Q) \cdot R \subseteq B_0$$ which finishes this case. Finally, suppose that $x \in Q \cdot R$. Thankfully $Q$ does not create any new problems, and this final case will complete the argument. Fix $r \in R$ and $q \in Q$ with $x = q \cdot r$. By (i) $q \cdot r \in Y$ and hence there is $w \in W$ with $w q \cdot r \neq q \cdot r$. It follows that $w q \cdot r \not\in q \cdot R$ by (iii) and (iv) and so by (v) $$w q \cdot r \not\in (W \cup \{c\} \cup Q) \cdot R.$$ Therefore \begin{equation*} w \cdot x = w q \cdot r \in F \cdot R \setminus (W \cup \{c\} \cup Q) \cdot R \subseteq B_0.\qedhere \end{equation*} \end{proof} We are now ready for the main result of this section. \begin{prop} \label{PROP CODE} Let $G \curvearrowright (X, \mu)$ be a {p{$.$}m{$.$}p{$.$}} ergodic action with $(X, \mu)$ non-atomic and let $0 < \delta < 1$. Then there are $\epsilon > 0$ and a Borel set $M \subseteq X$ with $\mu(M) = \delta$ with the following property: for any Borel sets $S_1, S_2 \subseteq X$ satisfying $\mu(S_1) + \mu(S_2) < \epsilon$ there is a two-piece partition $\beta = \{B_0, B_1\}$ of $M$ with $S_1, S_2 \in \sigma \text{-}\mathrm{alg}^{\text{red}}_G(\beta)$. \end{prop} \begin{proof} By Lemma \ref{LEM SMARKER}, there is a finite symmetric set $W \subseteq G$ with $1_G \in W$ and a Borel set $D \subseteq X$ with $\mu(D) < \delta / 2$ such that $W \cdot x \cap D \neq \varnothing$ for all $x \in X$. Note that if $|W \cdot x| = 1$ then $x \in D$. Thus the set $Y = \{x \in X \,:\, |W \cdot x| \geq 2\}$ has measure at least $1 - \mu(D) > 0$. Apply Lemma \ref{LEM TECH} to obtain $F \cup \{c\} \cup Q \subseteq G$ with $Q = \{q_1, \ldots, q_6\}$ and $R \subseteq X$ with $\mu(R) = \frac{1}{n}$, where $n > 2 |F| / \delta$. Fix $k \in \mathbb{N}$ with $$\log_2(2 n k) < k - 1$$ and let $Z_1$ and $Z_2$ be disjoint Borel subsets of $R$ with $\mu(Z_1) = \mu(Z_2) = \frac{1}{2 n k}$. Set $Z = Z_1 \cup Z_2$ and note that $\mu(Z) = \frac{1}{n k} = \frac{1}{k} \cdot \mu(R)$. Fix $\epsilon > 0$ with $\epsilon < \frac{1}{6 n k}$. Let $M \subseteq X$ be any Borel set with $D \cup F \cdot R \subseteq M$ and $\mu(M) = \delta$. Apply Corollaries \ref{COR MAKEPART} and \ref{COR PERMUTE} to obtain a $\sigma \text{-}\mathrm{alg}_G(\{Z, R\})$-expressible function $\rho \in [[E_G^X]]$ such that $\mathrm{dom}(\rho) = \mathrm{rng}(\rho) = R$, $\rho^k = \mathrm{id}_R$, and such that $\{\rho^i(Z) \,:\, 0 \leq i < k\}$ is a partition of $R$. For each $j = 1, 2$, again apply these corollaries to obtain a $\sigma \text{-}\mathrm{alg}_G(\{Z_j\})$-expressible function $\psi_j \in [E_G^X]$ such that $\mathrm{dom}(\psi_j) = \mathrm{rng}(\psi_j) = X$, $\psi_j^{2 n k} = \mathrm{id}_X$, and such that $\{\psi_j^i(Z_j) \,:\, 0 \leq i < 2 n k\}$ is a partition of $X$. We mention that there are no assumed relationships between $\psi_1$, $\psi_2$, and $\rho$. Let $S_1, S_2 \subseteq X$ be Borel sets with $\mu(S_1) + \mu(S_2) < \epsilon$.
Our intention will be to encode how the sets $\psi_1^i(Z_1)$ meet $S_1$ and similarly how the sets $\psi_2^i(Z_2)$ meet $S_2$. For $1 \leq m \leq 2 n k$ and $j = 1, 2$, let $Z_j^m$ be the set of $z \in Z_j$ such that $$| \{ 0 \leq i < 2 n k \,:\, \psi_j^i(z) \in S_j\} | \geq m.$$ Then $Z_j^1 \supseteq Z_j^2 \supseteq \cdots \supseteq Z_j^{2 n k}$ and $$\sum_{m = 1}^{2 n k} \mu(Z_1^m \cup Z_2^m) = \mu(S_1) + \mu(S_2) < \epsilon.$$ Setting $Z_j^* = Z_j \setminus Z_j^1$, we have $$\mu(Z_j^*) = \mu(Z_j) - \mu(Z_j^1) > \frac{1}{2 n k} - \epsilon > \frac{1}{3 n k} > 2 \epsilon.$$ In particular, \begin{equation} \label{EQN KAPPA} \mu(Z_1^* \cup Z_2^*) - \sum_{m = 1}^{2 n k} \mu(Z_1^m \cup Z_2^m) > 4 \epsilon - \epsilon = 3 \epsilon. \end{equation} Set $Z^m = Z_1^m \cup Z_2^m$ and $Z^* = Z_1^* \cup Z_2^*$. For each $1 \leq m \leq 2 n k$ we wish to build a function $\theta_m \in [[E_G^X]]$ which is expressible with respect to $\sigma \text{-}\mathrm{alg}_G(\{ Z^*, Z^1, \ldots, Z^m \})$ and satisfies $\mathrm{dom}(\theta_m) = Z^m$ and $$\mathrm{rng}(\theta_m) \subseteq Z^* \setminus \bigcup_{m' = 1}^{m-1} \theta_{m'}(Z^{m'}).$$ We construct these functions inductively. When $m = 1$, we have $\mu(Z^1) < \epsilon < \mu(Z^*)$ and thus $\theta_1$ is obtained immediately from Lemma \ref{LEM SIMPLEMIX}. Now suppose that $\theta_1$ through $\theta_{m-1}$ have been defined. Then $$Z^* \setminus \bigcup_{m' = 1}^{m-1} \theta_{m'}(Z^{m'})$$ lies in $\sigma \text{-}\mathrm{alg}_G(\{Z^*, Z^1, \ldots, Z^{m-1} \})$ by Lemma \ref{LEM EXPMOVE}. By (\ref{EQN KAPPA}) we have $$\mu(Z^m) < \epsilon < \mu(Z^*) - \sum_{m' = 1}^{m-1} \mu(Z^{m'}) = \mu \left( Z^* \setminus \bigcup_{m' = 1}^{m-1} \theta_{m'}(Z^{m'}) \right).$$ Therefore we may apply Lemma \ref{LEM SIMPLEMIX} to obtain $\theta_m$. This completes the construction. Define $f : \bigcup_{m = 1}^{2 n k} \mathrm{rng}(\theta_m) \rightarrow \{0, 1, \ldots, 2 n k - 1\}$ by setting $f(\theta_m(z)) = \ell$ for $z \in Z_j^m$, where $\ell$ is the unique index satisfying $\psi_j^\ell(z) \in S_j$ and $$| \{0 \leq i \leq \ell \,:\, \psi_j^i(z) \in S_j \} | = m.$$ For $i, t \in \mathbb{N}$ we let $\mathbb{B}_i(t) \in \{0, 1\}$ denote the $i^\text{th}$ digit in the binary expansion of $t$ (so $\mathbb{B}_i(t) = 0$ for all $i > \log_2(t) + 1$). Now define a Borel set $B_1 \subseteq X$ by the rule $$x \in B_1 \Longleftrightarrow \begin{cases} x \in W \cdot R & \text{or} \\ x \in c \cdot R & \text{or} \\ x \in q_1 \cdot R & \text{or} \\ x \in q_3 \cdot Z & \text{or} \\ x \in q_4 \cdot Z_1 & \text{or} \\ x \in q_5 \cdot Z^1 & \text{or} \\ x \in q_6 \cdot \theta_m(Z^{m+1}) & \text{for some } 1 \leq m < 2 n k, \text{ or} \\ x = q_6 \cdot \rho^i(z) & \text{where } 1 \leq i < k, \ z \in \mathrm{dom}(f), \\ & \qquad\qquad \text{and } \mathbb{B}_i(f(z)) = 1. \end{cases}$$ It is important to note that $B_1 \subseteq (W \cup \{c\} \cup Q) \cdot R$. In particular, $B_1 \subseteq F \cdot R$ by Lemma \ref{LEM TECH}.(iii).
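(In words: over a point $r \in R$, the $q_3$-, $q_4$-, and $q_5$-labels record whether $r$ lies in $Z$, in $Z_1$, and in $Z^1$ respectively; the $q_6$-labels over points of $Z$ record the union $\bigcup_m \theta_m(Z^{m+1})$; and the $q_6$-labels along the $\rho$-orbit of a point of $\mathrm{dom}(f)$ spell out the binary digits of its $f$-value.)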
We also define the Borel set $$B_0 = M \setminus B_1 \supseteq (D \setminus F \cdot R) \cup \Big( F \cdot R \setminus (W \cup \{c\} \cup Q) \cdot R \Big) \cup \Big( (W \cup \{c\} \cup Q) \cdot R \setminus B_1 \Big).$$ Note that clauses (iii) and (iv) of Lemma \ref{LEM TECH} imply that for every $r \neq r' \in R$ $$(W \cup \{c\} \cup Q) \cdot r \cap (W \cup \{c\} \cup Q) \cdot r' = \varnothing.$$ Thus from clause (ii) of Lemma \ref{LEM TECH} we obtain the following one-way implications $$x \in B_0 \Longleftarrow \begin{cases} x \in q_2 \cdot R & \text{or} \\ x \in q_3 \cdot (R \setminus Z) & \text{or} \\ x \in q_4 \cdot (R \setminus Z_1) & \text{or} \\ x \in q_5 \cdot (R \setminus Z^1) & \text{or} \\ x \in q_6 \cdot \textstyle{\bigcap_{m = 1}^{2 n k - 1} (Z \setminus \theta_m(Z^{m+1}))} & \text{or} \\ x = q_6 \cdot \rho^i(z) & \text{where } 1 \leq i < k, \ z \in \mathrm{dom}(f), \\ & \qquad\qquad \text{and } \mathbb{B}_i(f(z)) = 0. \end{cases}$$ In particular, $q_2 \cdot R \subseteq B_0$. Therefore $\beta = \{B_0, B_1\}$ satisfies the assumptions of Lemma \ref{LEM REC}. We will now check that $S_1, S_2 \in \sigma \text{-}\mathrm{alg}^{\text{red}}_G(\beta)$. By Lemma \ref{LEM REC} we have $R \in \sigma \text{-}\mathrm{alg}^{\text{red}}_G(\beta)$. By $G$-invariance of $\sigma \text{-}\mathrm{alg}^{\text{red}}_G(\beta)$, we have $q_i \cdot R \in \sigma \text{-}\mathrm{alg}^{\text{red}}_G(\beta)$ for $1 \leq i \leq 6$. Since $q_i \cdot R \subseteq B_0 \cup B_1$, it immediately follows from the definition of $\sigma \text{-}\mathrm{alg}^{\text{red}}_G(\beta)$ that $B_0 \cap q_i \cdot R$ and $B_1 \cap q_i \cdot R$ lie in $\sigma \text{-}\mathrm{alg}^{\text{red}}_G(\beta)$. Defining the partition $$\gamma = \Big\{R, X \setminus (R \cup Q \cdot R) \Big\} \cup \Big\{B_0 \cap q_i \cdot R : 1 \leq i \leq 6 \Big\} \cup \Big\{B_1 \cap q_i \cdot R : 1 \leq i \leq 6 \Big\},$$ we have $\sigma \text{-}\mathrm{alg}_G(\gamma) \subseteq \sigma \text{-}\mathrm{alg}^{\text{red}}_G(\beta)$. It suffices to show that $S_1, S_2 \in \sigma \text{-}\mathrm{alg}_G(\gamma)$. We have $x \in Z$ if and only if $q_3 \cdot x \in B_1 \cap q_3 \cdot R \in \gamma$. Thus $Z \in \sigma \text{-}\mathrm{alg}_G(\gamma)$. Similarly, $x \in Z_1$ if and only if $q_4 \cdot x \in B_1 \cap q_4 \cdot R$. As $R \in \gamma$, we conclude that $R, Z, Z_1, Z_2 = Z \setminus Z_1 \in \sigma \text{-}\mathrm{alg}_G(\gamma)$. It follows that $\rho$, $\psi_1$, and $\psi_2$ are $\sigma \text{-}\mathrm{alg}_G(\gamma)$-expressible. We prove by induction on $1 \leq m \leq 2 n k$ that $Z^m, Z_1^m, Z_2^m \in \sigma \text{-}\mathrm{alg}_G(\gamma)$ and that $\theta_m$ is $\sigma \text{-}\mathrm{alg}_G(\gamma)$-expressible. Since $x \in Z^1$ if and only if $q_5 \cdot x \in B_1 \cap q_5 \cdot R$, we have $Z^1 \in \sigma \text{-}\mathrm{alg}_G(\gamma)$. Also $Z_1^1 = Z^1 \cap Z_1$ and $Z_2^1 = Z^1 \cap Z_2$ are in $\sigma \text{-}\mathrm{alg}_G(\gamma)$. So $Z^* = Z \setminus Z^1$, $Z_1^* = Z_1 \setminus Z_1^1$, and $Z_2^* = Z_2 \setminus Z_2^1$ are in $\sigma \text{-}\mathrm{alg}_G(\gamma)$ as well. It follows that $\theta_1$ is $\sigma \text{-}\mathrm{alg}_G(\gamma)$-expressible. Now inductively suppose that $Z^i \in \sigma \text{-}\mathrm{alg}_G(\gamma)$ and that $\theta_i$ is $\sigma \text{-}\mathrm{alg}_G(\gamma)$-expressible for all $1 \leq i \leq m$. Then $z \in Z^{m+1}$ if and only if $z \in Z^m$ and $q_6 \cdot \theta_m(z) \in B_1 \cap q_6 \cdot R$. 
In other words, $$Z^{m+1} = \theta_m^{-1} \Big(q_6^{-1} \cdot (B_1 \cap q_6 \cdot R) \Big).$$ Thus $Z^{m+1} \in \sigma \text{-}\mathrm{alg}_G(\gamma)$ by Lemmas \ref{LEM EXPMOVE} and \ref{LEM EXPGROUP}. Similarly, $Z_1^{m+1} = Z^{m+1} \cap Z_1$ and $Z_2^{m+1} = Z^{m+1} \cap Z_2$ are in $\sigma \text{-}\mathrm{alg}_G(\gamma)$. Finally, $\theta_{m+1}$ is expressible with respect to $\sigma \text{-}\mathrm{alg}_G(\{Z^*, Z^1, \ldots, Z^{m+1}\}) \subseteq \sigma \text{-}\mathrm{alg}_G(\gamma)$. This completes the inductive argument. Now to complete the proof we show that $S_1, S_2 \in \sigma \text{-}\mathrm{alg}_G(\gamma)$. We first argue that $f$ is $\sigma \text{-}\mathrm{alg}_G(\gamma)$-measurable. It follows from the previous paragraph and Lemma \ref{LEM EXPMOVE} that $\mathrm{dom}(f) \in \sigma \text{-}\mathrm{alg}_G(\gamma)$. Observe that the numbers $\ell \in \mathrm{rng}(f)$ are distinguished by their first $(k - 1)$-binary digits $\mathbb{B}_i(\ell)$, $1 \leq i < k$, since by construction $\log_2(2 n k) < k - 1$. So for $0 \leq \ell < 2 n k$, if we set $I_0 = \{1 \leq i < k \,:\, \mathbb{B}_i(\ell) = 0\}$ and $I_1 = \{1 \leq i < k\} \setminus I_0$ then we have $$f^{-1}(\ell) = \mathrm{dom}(f) \cap \bigcap_{i \in I_0} \rho^{-i} \Big( q_6^{-1} \cdot (B_0 \cap q_6 \cdot R) \Big) \cap \bigcap_{i \in I_1} \rho^{-i} \Big( q_6^{-1} \cdot (B_1 \cap q_6 \cdot R) \Big).$$ Thus $f^{-1}(\ell) \in \sigma \text{-}\mathrm{alg}_G(\gamma)$ by Lemmas \ref{LEM EXPMOVE} and \ref{LEM EXPGROUP}. Now suppose that $x \in S_j$. Then there is $z \in Z_j$ and $0 \leq \ell < 2 n k$ with $x = \psi_j^\ell(z)$. It follows that $z \in Z_j^m$ where $$m = |\{0 \leq i \leq \ell \,:\, \psi_j^i(z) \in S_j\}|.$$ Furthermore, $\ell = f(\theta_m(z))$. Conversely, if there is $1 \leq m \leq 2 n k$, $z \in Z_j^m$, and $0 \leq \ell < 2 n k$ with $x = \psi_j^\ell(z)$ and $f(\theta_m(z)) = \ell$, then $x \in S_j$. Therefore \begin{equation*} S_j = \bigcup_{\ell = 0}^{2 n k - 1} \bigcup_{m = 1}^{2 n k} \psi_j^\ell \Big( Z_j \cap \theta_m^{-1}(f^{-1}(\ell)) \Big) \in \sigma \text{-}\mathrm{alg}_G(\gamma) \subseteq \sigma \text{-}\mathrm{alg}^{\text{red}}_G(\beta). \qedhere \end{equation*} \end{proof} \section{Krieger's finite generator theorem} \label{SECT KRIEGER} We now present the main theorem. \begin{thm} \label{THM RELBASIC} Let $G \curvearrowright (X, \mu)$ be a {p{$.$}m{$.$}p{$.$}} ergodic action with $(X, \mu)$ non-atomic, and let $\mathcal{F}$ be a $G$-invariant sub-$\sigma$-algebra. If $\xi$ is a countable Borel partition of $X$, $0 < r \leq 1$, and $\bar{p}$ is a probability vector with $\mathrm{H}(\xi | \mathcal{F}) < r \cdot \mathrm{H}(\bar{p})$, then there is a Borel pre-partition $\alpha = \{A_i : 0 \leq i < |\bar{p}|\}$ with $\mu(A_i) = r p_i$ for every $i$ and $\sigma \text{-}\mathrm{alg}_G(\xi) \vee \mathcal{F} \subseteq \sigma \text{-}\mathrm{alg}^{\text{red}}_G(\alpha) \vee \mathcal{F}$. \end{thm} \begin{proof} Apply Proposition \ref{PROP FINGEN} to obtain a finite Borel partition $\xi'$ with $\sigma \text{-}\mathrm{alg}_G(\xi') \vee \mathcal{F} = \sigma \text{-}\mathrm{alg}_G(\xi) \vee \mathcal{F}$ and $\mathrm{H}(\xi' | \mathcal{F}) < r \cdot \mathrm{H}(\bar{p})$. Since $\xi'$ is finite, by Lemma \ref{LEM SHAN} we have that $\mathrm{H}(\xi' | \mathcal{F})$ is equal to the infimum of $\mathrm{H}(\xi' | \zeta)$ over finite $\mathcal{F}$-measurable partitions $\zeta$ of $X$. So fix a finite $\mathcal{F}$-measurable partition $\zeta$ with $\mathrm{H}(\xi' | \zeta) < r \cdot \mathrm{H}(\bar{p})$. 
Since $\mathrm{H}(\xi' \vee \zeta | \zeta) = \mathrm{H}(\xi' | \zeta)$ and $\sigma \text{-}\mathrm{alg}_G(\xi' \vee \zeta) \vee \mathcal{F} = \sigma \text{-}\mathrm{alg}_G(\xi') \vee \mathcal{F}$, we may replace $\xi'$ with $\xi' \vee \zeta$ if necessary and assume that $\xi'$ refines $\zeta$. Let $\pi_\zeta : \xi' \rightarrow \zeta$ be the coarsening map. Finally, by Lemma \ref{LEM SHAN} we may let $\bar{q}$ be a finite probability vector which coarsens $\bar{p}$ and satisfies $\mathrm{H}(\xi' | \zeta) < r \cdot \mathrm{H}(\bar{q}) \leq r \cdot \mathrm{H}(\bar{p})$. Since $\mathrm{H}(\bar{q}) > 0$, by permuting the coordinates of $\bar{q}$ if necessary we may assume that $0 < q_0 \leq q_1$. Fix $\kappa > 0$ with $\mathrm{H}(\xi' | \zeta) + \kappa < r \mathrm{H}(\bar{q}) - r \kappa$. Apply Corollary \ref{COR SEP} to $\bar{q}, \kappa$ to obtain $\delta_0, \epsilon_0 > 0$ with the property that for all $0 < \epsilon < \epsilon_0$ and all sufficiently large $n$ \begin{equation} \label{eqn:sep} \exists K \subseteq L_{\bar{q},\epsilon}^n \quad |K| \geq \exp(n \cdot \mathrm{H}(\bar{q}) - n \cdot \kappa) \quad \text{and} \quad \forall k \neq k' \in K \ d_{\mathrm{Ham}}(k, k') > 2 \delta_0. \end{equation} By Lemma \ref{LEM RELSTIRLING} we may shrink $\epsilon_0$ so that for all $0 < \epsilon < \epsilon_0$, all sufficiently large $n$, and all $z \in L_{\zeta, \epsilon}^n$ \begin{equation} \label{eqn:relstirling} |\{c \in L_{\xi',\epsilon}^n : \pi_\zeta(c) = z\}| \leq \exp(n \cdot \mathrm{H}(\xi' | \zeta) + n \cdot \kappa). \end{equation} Set $\delta = r \cdot q_0 \cdot \delta_0 / 6$. Let $M$ with $\mu(M) = \delta$ and $\epsilon > 0$ be given by Proposition \ref{PROP CODE}. By shrinking $\epsilon$ if necessary, we may assume that $\epsilon < \epsilon_0$ and $\epsilon |\bar{q}| < \delta_0 / 6$. Let $n_0 \in \mathbb{N}$ be such that for all $n \geq \lfloor r n_0 \rfloor$: statement (\ref{eqn:sep}) holds, for all $z \in L_{\zeta, \epsilon}^n$ inequality (\ref{eqn:relstirling}) holds, $$(\epsilon n + 1) \cdot |\bar{q}| < (\delta_0 / 6) n, \quad \text{and}$$ \begin{equation} \label{EQN KAPPA2} n \cdot \mathrm{H}(\xi' | \zeta) + n \kappa < \lfloor r n \rfloor \cdot \mathrm{H}(\bar{q}) - \lfloor r n \rfloor \cdot \kappa. \end{equation} By Proposition \ref{PROP RLEM} there are $n \geq n_0$, Borel sets $S_1, S_2 \subseteq X$ with $\mu(S_1) + \mu(S_2) < \epsilon$, and a $\sigma \text{-}\mathrm{alg}_G(\{S_1, S_2\})$-expressible $\theta \in [E_G^X]$ such that $E_\theta$ admits a $\sigma \text{-}\mathrm{alg}_G(\{S_1, S_2\})$-measurable transversal $Y$ and such that for $\mu$-almost-every $x \in X$, the $E_\theta$ class of $x$ has cardinality $n$, \begin{align*} & \forall C \in \xi' \cup \zeta & \mu(C) - \epsilon < \frac{|C \cap [x]_{E_\theta}|}{|[x]_{E_\theta}|} & < \mu(C) + \epsilon, \\ & \text{and} & \qquad \frac{|M \cap [x]_{E_\theta}|}{|[x]_{E_\theta}|} & < \mu(M) + \delta = 2 \delta. \end{align*} So we have $\mathcal{N}_{\xi'}^\theta(y) \in L_{\xi', \epsilon}^n$ and $\mathcal{N}_\zeta^\theta(y) \in L_{\zeta, \epsilon}^n$ for almost-every $y \in Y$. We set $m = \lfloor r \cdot n \rfloor$ and encourage the reader to pay attention to the distinction between $m$ and $n$. Let $K \subseteq L_{\bar{q}, \epsilon}^m$ be as given by (\ref{eqn:sep}) so that $d_{\mathrm{Ham}}(k, k') > 2 \delta_0$ for all $k \neq k' \in K$. 
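For orientation, we note the role the separation constant will play: two distinct words of $K$ disagree in more than $2 \delta_0 m$ of their $m$ coordinates, while for any pair of points the exceptional coordinates discarded below (those indexed by visits to $M$ together with the auxiliary sets $J_y$ introduced below) will account for fewer than $2 \delta_0 m$ coordinates in total, so distinct words of $K$ remain distinguishable even after all exceptional coordinates are ignored. This margin is precisely what is exploited at the end of the proof.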
By (\ref{eqn:sep}), (\ref{eqn:relstirling}), and (\ref{EQN KAPPA2}) we have that for every $z \in L_{\zeta, \epsilon}^n$ \begin{equation*} \Big| \{c \in L_{\xi', \epsilon}^n : \pi_\zeta(c) = z\} \Big| \leq \exp \Big( n \cdot \mathrm{H}(\xi' | \zeta) + n \cdot \kappa \Big) < \exp \Big( m \cdot \mathrm{H}(\bar{q}) - m \cdot \kappa \Big) \leq |K|. \end{equation*} Thus for every $z \in L_{\zeta, \epsilon}^n$ we may fix an injection $f_z : \{c \in L_{\xi', \epsilon}^n : \pi_\zeta(c) = z\} \rightarrow K \subseteq L_{\bar{q}, \epsilon}^m$. For $y \in Y$ set $z_y = \mathcal{N}_\zeta^\theta(y)$, $c_y = \mathcal{N}_{\xi'}^\theta(y)$, and $\tilde{a}_y = f_{z_y}(c_y) \in K \subseteq L_{\bar{q}, \epsilon}^m$. Also define $M_y = \{0 \leq i < n \,:\, \theta^i(y) \in M\}$. Then $$|M_y| < 2 \delta \cdot n = 2 (q_0 \cdot \delta_0 / 6) r n < (\delta_0 / 3) (m + 1) \leq (2 \delta_0 / 3) m$$ for $\mu$-almost-every $y \in Y$. Since $\tilde{a}_y \in L_{\bar{q}, \epsilon}^m$, Lemma \ref{LEM J} provides a set $J_y \subseteq \{0, 1, \ldots, m-1\}$ with $|J_y| < (\epsilon m + 1) \cdot |\bar{q}| + (\delta_0/6) m < (\delta_0 / 3) m$ such that for all $0 \leq t < |\bar{q}|$ \begin{equation} \label{eqn:dist} \frac{1}{m} \cdot \Big| \tilde{a}_y^{-1}(t) \setminus (M_y \cup J_y) \Big| \leq \frac{1}{m} \cdot \Big| \tilde{a}_y^{-1}(t) \setminus J_y \Big| < (1 - \delta_0 / 6) q_t. \end{equation} Since there are only finitely many choices for $J_y$, it is easy to arrange the map $y \mapsto J_y$ to be Borel. We then let $J$ be the Borel set $J = \{\theta^j(y) \,:\, y \in Y, \ j \in J_y\}$. Define the pre-partition $\alpha^0 = \{A_t^0 \,:\, 0 \leq t < |\bar{q}|\}$ by setting $$A_t^0 = \{\theta^i(y) \,:\, y \in Y, \ 0 \leq i < m, \ i \not\in M_y \cup J_y, \text{ and } \tilde{a}_y(i) = t\}.$$ Observe that $\mu(Y) = 1 / n$ since $Y$ is a transversal for $E_\theta$. By (\ref{eqn:dist}) we have that for every $0 \leq t < |\bar{q}|$ \begin{align*} \mu(A_t^0) & = \int_Y |\tilde{a}_y^{-1}(t) \setminus (M_y \cup J_y)| \ d \mu(y)\\ & < \frac{m}{n} \cdot (1 - \delta_0/6) q_t \leq r (1 - \delta_0 / 6)\cdot q_t = \left(r - \frac{\delta}{q_0} \right) q_t. \end{align*} Thus $\mu(A_t^0) < r \cdot q_t$ and, since $q_0 \leq q_1$, we have $\mu(A_0^0) < r \cdot q_0 - \delta = r \cdot q_0 - \mu(M)$ and similarly $\mu(A_1^0) \leq r \cdot q_1 - \mu(M)$. Apply Proposition \ref{PROP CODE} to get a partition $\beta = \{B_0, B_1\}$ of $M$ with $S_1, S_2 \in \sigma \text{-}\mathrm{alg}^{\text{red}}_G(\beta)$. Since $M$ is disjoint from $\cup \alpha^0$ and $\mu$ is non-atomic, there is a pre-partition $\alpha' = \{A_t' : 0 \leq t < |\bar{q}|\}$ with $\mu(A_t') = r \cdot q_t$ for every $0 \leq t < |\bar{q}|$, $A_0^0 \cup B_0 \subseteq A_0'$, $A_1^0 \cup B_1 \subseteq A_1'$, and $A_t^0 \subseteq A_t'$ for $2 \leq t < |\bar{q}|$. The pre-partition $\alpha'$ extends both $\alpha^0$ and $\beta$. In particular, $S_1, S_2 \in \sigma \text{-}\mathrm{alg}^{\text{red}}_G(\beta) \subseteq \sigma \text{-}\mathrm{alg}^{\text{red}}_G(\alpha')$ by Lemma \ref{LEM EXT}. We have that $\theta$ is expressible and $Y$ is measurable with respect to $\sigma \text{-}\mathrm{alg}_G(\{S_1, S_2\}) \subseteq \sigma \text{-}\mathrm{alg}^{\text{red}}_G(\alpha')$. By Lemma \ref{LEM EXPGROUP} it follows that $\theta^i$ is $\sigma \text{-}\mathrm{alg}^{\text{red}}_G(\alpha')$-expressible for all $i \in \mathbb{Z}$. We will show that $\xi' \subseteq \sigma \text{-}\mathrm{alg}^{\text{red}}_G(\alpha') \vee \mathcal{F}$. 
We claim that the map $y \in Y \mapsto \tilde{a}_y$ is $\sigma \text{-}\mathrm{alg}^{\text{red}}_G(\alpha')$-measurable. We check this via the definition of reduced $\sigma$-algebras. Fix $y \in Y$ and $x \in X$ with either $x \not\in Y$ or $\tilde{a}_x \neq \tilde{a}_y$. If $x \not\in Y$ then we are done since $Y \in \sigma \text{-}\mathrm{alg}^{\text{red}}_G(\alpha')$ (since then there is $g \in G$ with $g \cdot x, g \cdot y \in \cup \alpha'$ and with $\alpha'$ separating $g \cdot x$ from $g \cdot y$). So suppose that $x \in Y$ and $\tilde{a}_x \neq \tilde{a}_y$. Then $d_{\mathrm{Ham}}(\tilde{a}_y, \tilde{a}_x) > 2 \delta_0$ since $\tilde{a}_x, \tilde{a}_y \in K$. Set $I = \{0 \leq i < m \,:\, \tilde{a}_y(i) \neq \tilde{a}_x(i)\}$ and note $|I| > 2 \delta_0 \cdot m$. Since $$\Big| M_y \cup J_y \cup M_x \cup J_x \Big| < 2 \cdot (2 \delta_0 / 3) m + 2 \cdot (\delta_0 / 3) m = 2 \delta_0 m,$$ we may fix $i \in I \setminus (M_y \cup J_y \cup M_x \cup J_x)$. Since $\theta^i$ is $\sigma \text{-}\mathrm{alg}^{\text{red}}_G(\alpha')$-expressible, there is a $\sigma \text{-}\mathrm{alg}^{\text{red}}_G(\alpha')$-measurable partition $\{Z_g : g \in G\}$ of $X$ such that $\theta^i(z) = g \cdot z$ for all $g \in G$ and $z \in Z_g$. If $y$ and $x$ are separated by the partition $\{Z_g : g \in G\}$ then, since this partition is $\sigma \text{-}\mathrm{alg}^{\text{red}}_G(\alpha')$-measurable, there must be $h \in G$ with both $h \cdot y$ and $h \cdot x$ lying in $\cup \alpha'$ and separated by $\alpha'$. We are done in this case. So assume there is $g \in G$ with $y, x \in Z_g$. Then $g \cdot y = \theta^i(y)$ lies in $A_t^0 \subseteq A_t'$ where $t = \tilde{a}_y(i)$ and similarly $g \cdot x = \theta^i(x)$ lies in $A_s^0 \subseteq A_s'$ where $s = \tilde{a}_x(i)$. As $t \neq s$ we have that $g \cdot y$ and $g \cdot x$ lie in $\cup \alpha'$ and are separated by $\alpha'$. This proves the claim. We observe that the map $y \in Y \mapsto z_y$ is $\sigma \text{-}\mathrm{alg}^{\text{red}}_G(\alpha') \vee \mathcal{F}$-measurable since $\zeta \subseteq \mathcal{F}$ and the value of $z_y$ is entirely determined by the location of $y$ in the partition $\bigvee_{i=0}^{n-1} \theta^{-i}(\zeta) \restriction Y$ of $Y$. This partition is $\sigma \text{-}\mathrm{alg}^{\text{red}}_G(\alpha') \vee \mathcal{F}$-measurable by Lemmas \ref{LEM EXPMOVE} and \ref{LEM EXPGROUP}. So the map $y \in Y \mapsto (z_y, \tilde{a}_y)$ is $\sigma \text{-}\mathrm{alg}^{\text{red}}_G(\alpha') \vee \mathcal{F}$-measurable. Since $c_y = f_{z_y}^{-1}(\tilde{a}_y)$, it follows that the map $y \in Y \mapsto c_y$ is $\sigma \text{-}\mathrm{alg}^{\text{red}}_G(\alpha') \vee \mathcal{F}$-measurable as well. For $C \in \xi'$ we have \begin{equation*} C = \{\theta^i(y) \,:\, y \in Y, \ 0 \leq i < n, \text{ and } c_y(i) = C\} = \bigcup_{i = 0}^{n-1} \theta^i \Big( \{y \in Y \,:\, c_y(i) = C\} \Big). \end{equation*} Therefore $\xi' \subseteq \sigma \text{-}\mathrm{alg}^{\text{red}}_G(\alpha') \vee \mathcal{F}$ by Lemmas \ref{LEM EXPMOVE} and \ref{LEM EXPGROUP}. We conclude that $$\sigma \text{-}\mathrm{alg}_G(\xi) \vee \mathcal{F} = \sigma \text{-}\mathrm{alg}_G(\xi') \vee \mathcal{F} \subseteq \sigma \text{-}\mathrm{alg}^{\text{red}}_G(\alpha') \vee \mathcal{F}.$$ Finally, since $(X, \mu)$ is non-atomic, $\mu(A_t') = r \cdot q_t$, and $\bar{q}$ is a coarsening of $\bar{p}$, there is a refinement $\alpha$ of $\alpha'$ with $\mu(A_t) = r \cdot p_t$ for all $0 \leq t < |\bar{p}|$. 
Clearly we still have $\sigma \text{-}\mathrm{alg}_G(\xi) \vee \mathcal{F} \subseteq \sigma \text{-}\mathrm{alg}^{\text{red}}_G(\alpha) \vee \mathcal{F}$. \end{proof} Note that Theorem \ref{INTRO THM2} follows from the above theorem by choosing a partition $\xi$ with $\mathrm{H}(\xi | \mathcal{F})$ close to $h^{\mathrm{Rok}}_G(X, \mu | \mathcal{F})$ and with $\sigma \text{-}\mathrm{alg}_G(\xi) \vee \mathcal{F} = \mathcal{B}(X)$. \begin{cor} \label{COR ADD} Let $G \curvearrowright (X, \mu)$ be a {p{$.$}m{$.$}p{$.$}} ergodic action with $(X, \mu)$ non-atomic, and let $\mathcal{F}$ be a $G$-invariant sub-$\sigma$-algebra. If $G \curvearrowright (Y, \nu)$ is a factor of $G \curvearrowright (X, \mu)$ and $\Sigma$ is the sub-$\sigma$-algebra of $X$ associated to $Y$ then $$h^{\mathrm{Rok}}_G(X, \mu | \mathcal{F}) \leq h^{\mathrm{Rok}}_G(Y, \nu) + h^{\mathrm{Rok}}_G(X, \mu | \mathcal{F} \vee \Sigma).$$ \end{cor} \begin{proof} This is immediate if either $h^{\mathrm{Rok}}_G(Y, \nu)$ or $h^{\mathrm{Rok}}_G(X, \mu | \mathcal{F} \vee \Sigma)$ is infinite, so suppose that both are finite. Fix $\epsilon > 0$ and fix a generating partition $\beta'$ for $G \curvearrowright (Y, \nu)$ with $\mathrm{H}(\beta') < h^{\mathrm{Rok}}_G(Y, \nu) + \epsilon / 2$. Pull back $\beta'$ to a partition $\beta$ of $X$. Then $\mathrm{H}(\beta) = \mathrm{H}(\beta')$ and $\sigma \text{-}\mathrm{alg}_G(\beta) = \Sigma$. By definition of $h^{\mathrm{Rok}}_G(X, \mu | \mathcal{F} \vee \Sigma)$, there is a partition $\gamma'$ of $X$ with $$\mathrm{H}(\gamma' | \mathcal{F} \vee \Sigma) < h^{\mathrm{Rok}}_G(X, \mu | \mathcal{F} \vee \Sigma) + \epsilon / 2$$ and $\sigma \text{-}\mathrm{alg}_G(\gamma') \vee \mathcal{F} \vee \Sigma = \mathcal{B}(X)$. Apply Theorem \ref{THM RELBASIC} to get a partition $\gamma$ of $X$ with $$\mathrm{H}(\gamma) < h^{\mathrm{Rok}}_G(X, \mu | \mathcal{F} \vee \Sigma) + \epsilon / 2$$ and $\sigma \text{-}\mathrm{alg}_G(\gamma) \vee \mathcal{F} \vee \Sigma = \mathcal{B}(X)$. Then $$\mathcal{B}(X) = \sigma \text{-}\mathrm{alg}_G(\gamma) \vee \mathcal{F} \vee \Sigma = \sigma \text{-}\mathrm{alg}_G(\gamma \vee \beta) \vee \mathcal{F},$$ and hence \begin{equation*} h^{\mathrm{Rok}}_G(X, \mu | \mathcal{F}) \leq \mathrm{H}(\beta \vee \gamma | \mathcal{F}) \leq \mathrm{H}(\beta) + \mathrm{H}(\gamma) < h^{\mathrm{Rok}}_G(Y, \nu) + h^{\mathrm{Rok}}_G(X, \mu | \mathcal{F} \vee \Sigma) + \epsilon. \qedhere \end{equation*} \end{proof} \section{Relative Rokhlin entropy and amenable groups} \label{SECT AMENABLE} We verify that for free ergodic actions of amenable groups, relative Rokhlin entropy and relative Kolmogorov--Sinai entropy agree. This result was previously established in the non-relative case by the author and Tucker-Drob \cite{ST14}. We first recall the definition of relative Kolmogorov--Sinai entropy. Let $G$ be a countably infinite amenable group, and let $G \curvearrowright (X, \mu)$ be a free {p{$.$}m{$.$}p{$.$}} action. For a partition $\alpha$ and a finite set $T \subseteq G$, we write $\alpha^T$ for the join $\bigvee_{t \in T} t \cdot \alpha$, where $t \cdot \alpha = \{t \cdot A : A \in \alpha\}$. Given a $G$-invariant sub-$\sigma$-algebra $\mathcal{F}$, the relative Kolmogorov--Sinai entropy is defined as $$h^{\mathrm{KS}}_G(X, \mu | \mathcal{F}) = \sup_\alpha \inf_{T \subseteq G} \frac{1}{|T|} \cdot \mathrm{H}(\alpha^T | \mathcal{F}),$$ where $\alpha$ ranges over all finite partitions and $T$ ranges over finite subsets of $G$ \cite{DP02}. 
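As a simple illustration of this definition (an aside that is not needed below), take $\mathcal{F}$ trivial and let $G \curvearrowright (A^G, \lambda^G)$ be a Bernoulli shift over a finite probability space $(A, \lambda)$, with $\mathrm{H}(\lambda) = - \sum_{a \in A} \lambda(a) \log \lambda(a)$. For the canonical partition $\alpha = \{ \{x \in A^G : x(1_G) = a\} : a \in A \}$ and any finite $T \subseteq G$, the partitions $t \cdot \alpha$, $t \in T$, depend on distinct coordinates and hence are jointly independent, so $\mathrm{H}(\alpha^T) = |T| \cdot \mathrm{H}(\lambda)$ and the infimum over $T$ in the displayed formula equals $\mathrm{H}(\lambda)$ for this particular $\alpha$; in particular $h^{\mathrm{KS}}_G(A^G, \lambda^G) \geq \mathrm{H}(\lambda)$.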
Equivalently, one can replace the infimum with a limit over a F{\o}lner sequence $(T_n)$ \cite{OW87}. Recall that a sequence $T_n \subseteq G$ of finite sets is a \emph{F{\o}lner sequence} if $$\lim_{n \rightarrow \infty} \frac{|\Bnd{K}{T_n}|}{|T_n|} = 0$$ for every finite $K \subseteq G$, where $\Bnd{K}{T} = \{t \in T : t K \not\subseteq T\}$. We also write $\Int{K}{T}$ for $T \setminus \Bnd{K}{T}$. \begin{prop} \label{PROP RELROK} Let $G$ be a countably infinite amenable group, let $G \curvearrowright (X, \mu)$ be a free ergodic action, and let $\mathcal{F}$ be a $G$-invariant sub-$\sigma$-algebra. Then the relative Kolmogorov--Sinai entropy and relative Rokhlin entropy coincide: $$h^{\mathrm{KS}}_G(X, \mu | \mathcal{F}) = h^{\mathrm{Rok}}_G(X, \mu | \mathcal{F}).$$ \end{prop} \begin{proof} We first show that $h^{\mathrm{KS}}_G(X, \mu | \mathcal{F}) \leq h^{\mathrm{Rok}}_G(X, \mu | \mathcal{F})$. If $h^{\mathrm{Rok}}_G(X, \mu | \mathcal{F}) = \infty$ then there is nothing to show. So suppose that $h^{\mathrm{Rok}}_G(X, \mu | \mathcal{F}) < \infty$ and fix $\epsilon > 0$. Let $\alpha$ be a countable partition with $\sigma \text{-}\mathrm{alg}_G(\alpha) \vee \mathcal{F} = \mathcal{B}(X)$ and $\mathrm{H}(\alpha | \mathcal{F}) < h^{\mathrm{Rok}}_G(X, \mu | \mathcal{F}) + \epsilon$. Let $\beta$ be any finite partition of $X$ and let $(T_n)$ be a F{\o}lner sequence. Then by Lemma \ref{LEM SHAN} $$0 = \mathrm{H}(\beta | \sigma \text{-}\mathrm{alg}_G(\alpha) \vee \mathcal{F}) = \inf_{K \subseteq G} \mathrm{H}(\beta | \alpha^K \vee \mathcal{F}),$$ where $K$ ranges over finite subsets of $G$. Fix $K \subseteq G$ so that $\mathrm{H}(\beta | \alpha^K \vee \mathcal{F}) < \epsilon$. Note that $\mathrm{H}(t \cdot \beta | \alpha^{t K} \vee \mathcal{F}) < \epsilon$ for all $t \in G$. Therefore \begin{align*} \lim_{n \rightarrow \infty} \frac{1}{|T_n|} & \cdot \mathrm{H}(\beta^{T_n} | \mathcal{F})\\ & \leq \lim_{n \rightarrow \infty} \frac{1}{|T_n|} \cdot \mathrm{H}(\alpha^{T_n} \vee \beta^{T_n} | \mathcal{F})\\ & = \lim_{n \rightarrow \infty} \frac{1}{|T_n|} \cdot \mathrm{H}(\alpha^{T_n} | \mathcal{F}) + \frac{1}{|T_n|} \cdot \mathrm{H}(\beta^{T_n} | \alpha^{T_n} \vee \mathcal{F})\\ & \leq \lim_{n \rightarrow \infty} \frac{1}{|T_n|} \cdot \sum_{t \in T_n} \Big( \mathrm{H}(t \cdot \alpha | \mathcal{F}) + \mathrm{H}(t \cdot \beta | \alpha^{T_n} \vee \mathcal{F}) \Big)\\ & < \lim_{n \rightarrow \infty} h^{\mathrm{Rok}}_G(X, \mu | \mathcal{F}) + \epsilon + \frac{|\Int{K}{T_n}|}{|T_n|} \cdot \epsilon + \frac{|\Bnd{K}{T_n}|}{|T_n|} \cdot \mathrm{H}(\beta)\\ & = h^{\mathrm{Rok}}_G(X, \mu | \mathcal{F}) + 2 \epsilon. \end{align*} Now let $\epsilon$ tend to $0$ and then take the supremum over all $\beta$. Now we argue that $h^{\mathrm{Rok}}_G(X, \mu | \mathcal{F}) \leq h^{\mathrm{KS}}_G(X, \mu | \mathcal{F})$. Again this is immediate if $h^{\mathrm{KS}}_G(X, \mu | \mathcal{F}) = \infty$, so we assume $h^{\mathrm{KS}}_G(X, \mu | \mathcal{F}) < \infty$. Since the action of $G$ is free, a theorem of Seward and Tucker-Drob \cite{ST14} states that there is a factor action $G \curvearrowright (Z, \eta)$ of $(X, \mu)$ such that the action of $G$ on $Z$ is free and $h^{\mathrm{Rok}}_G(Z, \eta) < \epsilon$. Let $\Sigma$ be the $G$-invariant sub-$\sigma$-algebra of $X$ associated to $Z$, and let $G \curvearrowright (Y, \nu)$ be the factor of $(X, \mu)$ associated to $\mathcal{F} \vee \Sigma$. Then $G$ acts freely on $(Y, \nu)$ since $(Y, \nu)$ factors onto $(Z, \eta)$. 
By the Ornstein--Weiss theorem \cite{OW80}, all free ergodic actions of countably infinite amenable groups are orbit equivalent. In particular, there is a free ergodic {p{$.$}m{$.$}p{$.$}} action $\mathbb{Z} \curvearrowright (Y, \nu)$ which has the same orbits as $G \curvearrowright (Y, \nu)$ and has $0$ Kolmogorov--Sinai entropy, $h^{\mathrm{KS}}_\mathbb{Z}(Y, \nu) = 0$. By the Rokhlin generator theorem \cite{Roh67}, we have $h^{\mathrm{Rok}}_\mathbb{Z}(Y, \nu) = 0$ as well. Let's say $\mathbb{Z} = \langle t \rangle$. Define $c : Y \rightarrow G$ by $$c(y) = g \Longleftrightarrow t \cdot y = g \cdot y.$$ Let $f : (X, \mu) \rightarrow (Y, \nu)$ be the factor map, and let $\mathbb{Z}$ act on $(X, \mu)$ by setting $$t \cdot x = c(f(x)) \cdot x.$$ Then $\mathcal{F} \vee \Sigma$ and the actions of $G$ and $\mathbb{Z}$ on $(X, \mu)$ satisfy the assumptions of Proposition \ref{PROP OE}. Equivalently, in the terminology of Rudolph--Weiss \cite{RW00}, the orbit-change cocycles between the actions of $G$ and $\mathbb{Z}$ on $X$ are $\mathcal{F} \vee \Sigma$-measurable. Thus $h^{\mathrm{KS}}_G(X, \mu | \mathcal{F} \vee \Sigma) = h^{\mathrm{KS}}_\mathbb{Z}(X, \mu | \mathcal{F} \vee \Sigma)$ by \cite[Theorem 2.6]{RW00}. Also, since $h^{\mathrm{Rok}}_\mathbb{Z}(X, \mu | \mathcal{F} \vee \Sigma) \leq h^{\mathrm{Rok}}_\mathbb{Z}(X, \mu)$ and $h^{\mathrm{Rok}}_\mathbb{Z}(Y, \nu) = 0$, it follows from Corollary \ref{COR ADD} that \begin{equation} \label{eqn:Z} h^{\mathrm{Rok}}_\mathbb{Z}(X, \mu | \mathcal{F} \vee \Sigma) = h^{\mathrm{Rok}}_\mathbb{Z}(X, \mu). \end{equation} We have \begin{equation*} \begin{array}{rcll} h^{\mathrm{KS}}_G(X, \mu | \mathcal{F} \vee \Sigma) & = & h^{\mathrm{KS}}_\mathbb{Z}(X, \mu | \mathcal{F} \vee \Sigma) & \text{by the Rudolph--Weiss theorem \cite{RW00}}\\ & = & h^{\mathrm{KS}}_\mathbb{Z}(X, \mu) - h^{\mathrm{KS}}_\mathbb{Z}(Y, \nu) & \text{by the Abramov--Rokhlin theorem \cite{AR62}}\\ & = & h^{\mathrm{KS}}_\mathbb{Z}(X, \mu) & \text{since } h^{\mathrm{KS}}_\mathbb{Z}(Y, \nu) = 0\\ & = & h^{\mathrm{Rok}}_\mathbb{Z}(X, \mu) & \text{by the Rokhlin generator theorem \cite{Roh67}}\\ & = & h^{\mathrm{Rok}}_\mathbb{Z}(X, \mu | \mathcal{F} \vee \Sigma) & \text{by Equation \ref{eqn:Z}}\\ & = & h^{\mathrm{Rok}}_G(X, \mu | \mathcal{F} \vee \Sigma) & \text{by Proposition \ref{PROP OE}} \end{array} \end{equation*} So $h^{\mathrm{KS}}_G(X, \mu | \mathcal{F} \vee \Sigma) = h^{\mathrm{Rok}}_G(X, \mu | \mathcal{F} \vee \Sigma)$. Also, it is immediate from the definitions that $h^{\mathrm{KS}}_G(X, \mu | \mathcal{F} \vee \Sigma) \leq h^{\mathrm{KS}}_G(X, \mu | \mathcal{F})$. Finally, by Corollary \ref{COR ADD} we have $$h^{\mathrm{Rok}}_G(X, \mu | \mathcal{F}) \leq h^{\mathrm{Rok}}_G(Z, \eta) + h^{\mathrm{Rok}}_G(X, \mu | \mathcal{F} \vee \Sigma) < \epsilon + h^{\mathrm{KS}}_G(X, \mu | \mathcal{F} \vee \Sigma) \leq \epsilon + h^{\mathrm{KS}}_G(X, \mu | \mathcal{F}).$$ Now let $\epsilon$ tend to $0$. \end{proof} \thebibliography{999} \bibitem{AR62} L. M. Abramov and V. A. Rohlin, \textit{Entropy of a skew product of mappings with invariant measure}, Vestnik Leningrad. Univ. 17 (1962), no. 7, 5--13. \bibitem{Al} A. Alpeev, \textit{On Pinsker factors for Rokhlin entropy}, Journal of Mathematical Sciences 209 (2015), no. 6, 826--829. \bibitem{AS} A. Alpeev and B. Seward, \textit{Krieger's finite generator theorem for actions of countable groups III}, preprint. https://arxiv.org/abs/1705.09707. \bibitem{B10b} L. 
Bowen, \textit{Measure conjugacy invariants for actions of countable sofic groups}, Journal of the American Mathematical Society 23 (2010), 217--245. \bibitem{B12} L. Bowen, \textit{Sofic entropy and amenable groups}, Ergod. Th. \& Dynam. Sys. 32 (2012), no. 2, 427--466. \bibitem{B12b} L. Bowen, \textit{Every countably infinite group is almost Ornstein}, in Dynamical Systems and Group Actions, Contemp. Math., 567, Amer. Math. Soc., Providence, RI, 2012, 67--78. \bibitem{B16} L. Bowen, \textit{Zero entropy is generic}, Entropy 18 (2016), no. 6. \bibitem{C72} J. P. Conze, \textit{Entropie d'un groupe ab\'{e}lien de transformations}, Z. Wahrscheinlichkeitstheorie verw. Geb. 15 (1972), 11--30. \bibitem{CsKo} I. Csisz\'{a}r and J. K\"{o}rner, Information Theory: Coding Theorems for Discrete Memoryless Systems. Cambridge University Press, New York, 2011. \bibitem{DP02} A. Danilenko and K. Park, \textit{Generators and Bernoullian factors for amenable actions and cocycles on their orbits}, Ergod. Th. \& Dynam. Sys. 22 (2002), 1715--1745. \bibitem{De74} M. Denker, \textit{Finite generators for ergodic, measure-preserving transformations}, Prob. Th. Rel. Fields 29 (1974), no. 1, 45--55. \bibitem{Do11} T. Downarowicz, Entropy in Dynamical Systems. Cambridge University Press, New York, 2011. \bibitem{GS15} D. Gaboriau and B. Seward, \textit{Cost, $\ell^2$-Betti numbers, and the sofic entropy of some algebraic actions}, preprint. http://arxiv.org/abs/1509.02482. \bibitem{GJS09} S. Gao, S. Jackson, and B. Seward, \textit{A coloring property for countable groups}, Mathematical Proceedings of the Cambridge Philosophical Society 147 (2009), no. 3, 579--592. \bibitem{GJS12} S. Gao, S. Jackson, and B. Seward, \textit{Group colorings and Bernoulli subflows}, Memoirs of the American Mathematical Society 241 (2016), no. 1141, 1--241. \bibitem{Gl03} E. Glasner, \textit{Ergodic theory via joinings}. Mathematical Surveys and Monographs, 101. American Mathematical Society, Providence, RI, 2003. xii+384 pp. \bibitem{GK76} C. Grillenberger and U. Krengel, \textit{On marginal distributions and isomorphisms of stationary processes}, Math. Z. 149 (1976), no. 2, 131--154. \bibitem{KaW72} Y. Katznelson and B. Weiss, \textit{Commuting measure preserving transformations}, Israel J. Math. 12 (1972), 161--173. \bibitem{K95} A. Kechris, Classical Descriptive Set Theory. Springer-Verlag, New York, 1995. \bibitem{KST99} A. Kechris, S. Solecki, and S. Todorcevic, \textit{Borel chromatic numbers}, Adv. in Math. 141 (1999), 1--44. \bibitem{KL11a} D. Kerr and H. Li, \textit{Entropy and the variational principle for actions of sofic groups}, Invent. Math. 186 (2011), 501--558. \bibitem{KL13} D. Kerr and H. Li, \textit{Soficity, amenability, and dynamical entropy}, American Journal of Mathematics 135 (2013), 721--761. \bibitem{KL11b} D. Kerr and H. Li, \textit{Bernoulli actions and infinite entropy}, Groups Geom. Dyn. 5 (2011), 663--672. \bibitem{KiW02} Y. Kifer and B. Weiss, \textit{Generating partitions for random transformations}, Ergod. Th. \& Dynam. Sys. 22 (2002), 1813--1830. \bibitem{Kr70} W. Krieger, \textit{On entropy and generators of measure-preserving transformations}, Trans. Amer. Math. Soc. 149 (1970), 453--464. \bibitem{Or70a} D. Ornstein, \textit{Bernoulli shifts with the same entropy are isomorphic}, Advances in Math. 4 (1970), 337--348. \bibitem{Or70b} D. Ornstein, \textit{Two Bernoulli shifts with infinite entropy are isomorphic}, Advances in Math. 5 (1970), 339--348. \bibitem{OW80} D. Ornstein and B. 
Weiss, \textit{Ergodic theory of amenable group actions. I : The Rohlin lemma}, Bull. Amer. Math. Soc. 2 (1980), no. 1, 161--164. \bibitem{OW87} D. Ornstein and B. Weiss, \textit{Entropy and isomorphism theorems for actions of amenable groups}, Journal d'Analyse Math\'{e}matique 48 (1987), 1--141. \bibitem{N} B.H. Neumann, \textit{Groups covered by permutable subsets}, J. London Math. Soc. (1954), no. 2, 236--248. \bibitem{Roh67} V. A. Rokhlin, \textit{Lectures on the entropy theory of transformations with invariant measure}, Uspehi Mat. Nauk 22 (1967), no. 5, 3--56. \bibitem{Ros88} A. Rosenthal, \textit{Finite uniform generators for ergodic, finite entropy, free actions of amenable groups}, Prob. Th. Rel. Fields 77 (1988), 147--166. \bibitem{RW00} D. J. Rudolph and B. Weiss, \textit{Entropy and mixing for amenable group actions}, Annals of Mathematics (151) 2000, no. 2, 1119--1150. \bibitem{S12} B. Seward, \textit{Ergodic actions of countable groups and finite generating partitions}, Groups, Geometry, and Dynamics 9 (2015), no. 3, 793--810. \bibitem{S13} B. Seward, \textit{Every action of a non-amenable group is the factor of a small action}, Journal of Modern Dynamics 8 (2014), no. 2, 251--270. \bibitem{S14a} B. Seward, \textit{Krieger's finite generator theorem for actions of countable groups II}, preprint. http://arxiv.org/abs/1501.03367. \bibitem{S16} B. Seward, \textit{Weak containment and Rokhlin entropy}, preprint. http://arxiv.org/abs/1602.06680. \bibitem{S16a} B. Seward, \textit{Positive entropy actions of countable groups factor onto Bernoulli shifts}, preprint. https://arxiv.org/abs/1804.05269. \bibitem{ST14} B. Seward and R. D. Tucker-Drob, \textit{Borel structurability on the $2$-shift of a countable group}, Annals of Pure and Applied Logic 167 (2016), no. 1, 1--21. \bibitem{St75} A. M. Stepin, \textit{Bernoulli shifts on groups}, Dokl. Akad. Nauk SSSR 223 (1975), no. 2, 300--302. \bibitem{Su83} \v{S}. \v{S}ujan, \textit{Generators for amenable group actions}, Mh. Math. 95 (1983), no. 1, 67--79. \end{document}
\begin{document} \date{} \title{Genuine infinitesimal bendings\\ of submanifolds} \author{M. Dajczer and M. I. Jimenez} \maketitle
\begin{abstract} A basic question in submanifold theory is whether a given isometric immersion $f\colon M^n\to\mathbb{R}^{n+p}$ of a Riemannian manifold of dimension $n\geq 3$ into Euclidean space with low codimension $p$ admits, locally or globally, a genuine infinitesimal bending. That is, whether there exists a genuine smooth variation of $f$ by immersions that are isometric up to the first order. Until now only the hypersurface case $p=1$ was well understood. We show that a strong necessary local condition for admitting such a bending is that the submanifold is ruled, and we give a lower bound for the dimension of the rulings. In the global case, we describe the situation of compact submanifolds of dimension $n\geq 5$ in codimension $p=2$. \end{abstract}
An isometric immersion $f\colon M^n\to\mathbb{R}^{n+p}$ of an $n$-dimensional Riemannian manifold $M^n$ into Euclidean space with codimension $p$ is called isometrically bendable if there is a non-trivial smooth variation ${\cal F}\colon I\times M^n\to\mathbb{R}^{n+p}$ of $f$ for an interval $0\in I\subset\mathbb{R}$ such that $f_t={\cal F}(t,\cdot)\colon M^n\to\mathbb{R}^{n+p}$ with $f_0=f$ is an isometric immersion for any $t\in I$, that is, the metrics $g_t$ induced by $f_t$ satisfy $g_t=g_0$. The bending being trivial means that the variation is the restriction to the submanifold of a smooth one-parameter family of isometries of $\mathbb{R}^{n+p}$.
The study of bendings of surfaces $M^2$ in $\mathbb{R}^3$ was a hot topic among geometers in the $19^{th}$ century. Initially, there was no distinction between isometric variations and the ones that are only infinitesimally isometric, but that changed due to the work of Darboux by the end of that century. For a modern account of some aspects of the subject we refer to Spivak \cite{Sp}.
The study of isometric bendings of hypersurfaces $f\colon M^n\to\mathbb{R}^{n+1}$, $n\geq 3$, goes back to the first part of the last century. In fact, the local classification of isometrically bendable hypersurfaces is due to Sbrana \cite{Sb1} in 1909 and Cartan \cite{Ca} in 1916. For a modern presentation of their parametric classifications, as well as for further results, see \cite{DFT} or \cite{DT}. In the global case, the classification is due to Sacksteder \cite{Sa} for compact hypersurfaces and to Dajczer and Gromoll \cite{DG0} in the case of complete hypersurfaces.
The classical concept of an infinitesimal bending of an isometric immersion $f\colon M^n\to\mathbb{R}^{n+p}$ is the infinitesimal analogue of an isometric bending and refers to smooth variations ${\cal F}\colon I\times M^n\to\mathbb{R}^{n+p}$ that preserve lengths ``up to the first order'', that is, the metrics $g_t$ induced by $f_t={\cal F}(t,\cdot)\colon M^n\to\mathbb{R}^{n+p}$ satisfy $g'_t(0)=0$. The variational vector field $\tau={\cal F}_*\partial/\partial t|_{t=0}$ satisfies \begin{equation} \label{infbend} \<f_*X,\tau_*X{\rangle}=0 \end{equation} for any tangent vector fields $X\in\mathfrak{X}(M)$. Clearly \eqref{infbend} is the condition for a smooth variation to preserve the metric up to the first order. If $\tau$ is an immersion, it was said classically that the pair of submanifolds $f$ and $\tau$ correspond with orthogonality of corresponding linear elements; see Bianchi \cite{Bi} or Eisenhart \cite{Ei}.
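Although elementary, it may be worth recording the computation behind the last claim. For $X\in\mathfrak{X}(M)$ extended to $I\times M^n$ independently of $t$ we have $[\partial/\partial t,X]=0$, and hence, with $\tilde\nabla$ denoting the Levi-Civita connection of $\mathbb{R}^{n+p}$,
$$
\frac{\partial}{\partial t}\Big|_{t=0}g_t(X,X)
=2\,\<\tilde\nabla_{\partial/\partial t}{\cal F}_*X,{\cal F}_*X{\rangle}\big|_{t=0}
=2\,\<\tilde\nabla_X{\cal F}_*\partial/\partial t,{\cal F}_*X{\rangle}\big|_{t=0}
=2\,\<\tau_*X,f_*X{\rangle},
$$
so the requirement $g'_t(0)=0$ for all $X\in\mathfrak{X}(M)$ is precisely \eqref{infbend}.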
We say that a section $\tau$ of $f^*T\mathbb{R}^{n+p}$ is an \emph{infinitesimal bending} of an isometric immersion $f\colon M^n\to\mathbb{R}^{n+p}$ if \eqref{infbend} holds. Given a smooth variation whose variational vector field $\tau$ is an infinitesimal bending, by keeping only the terms of first order of the variation we obtain the smooth variation \mbox{${\cal F}\colon\mathbb{R}\times M^n\to\mathbb{R}^{n+p}$} with variational vector field $\tau$ defined by $f_t=f+t\tau$. Then \eqref{infbend} gives $$ \|f_{t*}X\|^2=\|f_*X\|^2+t^2\|\tau_*X\|^2 $$ for any $X\in TM$. Of course, we always have the \emph{trivial infinitesimal bendings} obtained as the variational vector field of a smooth variation by isometries of the ambient space. In other words, they are locally the restriction to the submanifold of a Killing vector field of the ambient space. Dajczer and Rodríguez \cite{DR} showed that submanifolds in low codimension are generically \emph{infinitesimally rigid}, that is, only trivial infinitesimal bendings are possible. In fact, they proved that well-known algebraic conditions on the second fundamental form of an immersion that give isometric rigidity also yield infinitesimal rigidity. For instance, for a hypersurface $f\colon M^n\to\mathbb{R}^{n+1}$ to be infinitesimally bendable it is a necessary condition (but far from sufficient) to have at most two nonzero principal curvatures at any point. This result is already contained in the book of Cesàro \cite{Ce} published in 1886. For higher codimension the rather strong algebraic conditions are given in terms of the type number or the $s$-nullities of the immersion. After the pioneering work of Sbrana \cite{Sb} in 1908, a complete parametric local classification of the infinitesimally bendable hypersurfaces was given by Dajczer and Vlachos \cite{DV}. In particular, they showed that this class is much larger than the class of isometrically bendable ones, a fact that may be seen as a surprise. The classification in the case of complete hypersurfaces was obtained by Jimenez \cite{Ji}. Infinitesimal bendings of submanifolds have also been considered by Schouten \cite{sc} in 1928. When trying to understand the geometry of the infinitesimally bendable submanifolds in codimension larger than one the following fact has to be taken into consideration. If $\tilde{\tau}$ is an infinitesimal bending of an isometric immersion $F\colon\tilde{M}^{n+\ell}\to\mathbb{R}^{n+p}$, $0<\ell< p$, and $j\colon M^n\to\tilde{M}^{n+\ell}$ is an embedding, then $\tau=\tilde{\tau}|_{j(M)}$ is an infinitesimal bending of $f=F\circ j\colon M^n\to\mathbb{R}^{n+p}$. This basic observation motivates the following definitions where a more general situation is considered since certain singularities are allowed. A smooth map $F\colon\tilde{M}^{n+\ell}\to\mathbb{R}^{n+p}$, $0<\ell<p$, from a differentiable manifold into Euclidean space is said to be a \emph{singular extension} of a given isometric immersion $f\colon M^n\to\mathbb{R}^{n+p}$ if there is an embedding \mbox{$j\colon M^n\to\tilde{M}^{n+\ell}$}, $0<\ell< p$, such that $F$ is an immersion along $\tilde{M}^{n+\ell}\setminus j(M)$ and $f=F\circ j$. Notice that the map $F$ may fail (but not necessarily) to be an immersion along points of $j(M)$. 
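A model case to keep in mind, and the one that reappears repeatedly below, is the following: given $\lambda\in\Gamma(f^*T\mathbb{R}^{n+p})$, take $\tilde{M}^{n+1}=M^n\times(-\epsilon,\epsilon)$, the embedding $j(x)=(x,0)$ and the map $F(x,t)=f(x)+t\lambda(x)$. Whenever $F$ is an immersion for $t\neq 0$ it is a singular extension of $f$. If $\lambda(x)\in f_*T_xM$ for every $x$ then $F$ fails to be an immersion at every point of $j(M)$, whereas $F$ is an immersion at $(x,0)$ whenever $\lambda(x)\notin f_*T_xM$.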
We say that an infinitesimal bending $\tau$ of an isometric immersion $f\colon M^n\to\mathbb{R}^{n+p}$ \emph{extends in the singular sense} if there is a singular extension \mbox{$F\colon\tilde{M}^{n+\ell}\to\mathbb{R}^{n+p}$} of $f$ and a smooth map $\tilde{\tau}\colon\tilde{M}^{n+\ell}\to\mathbb{R}^{n+p}$ such that $\tilde{\tau}$ is an infinitesimal bending of $F|_{\tilde{M}\setminus j(M)}$ and $\tau=\tilde{\tau}|_{j(M)}$. We point out that the necessity to admit the existence of singularities of $F$ along $j(M)$ in the above definitions was already well established in \cite{DG} and \cite{FG} for isometric bendings in both the local and global situation.
An infinitesimal bending $\tau$ of an isometric immersion $f\colon M^n\to\mathbb{R}^{n+p}$, $p\geq 2$, is called a \emph{genuine infinitesimal bending} if $\tau$ does not extend in the singular sense when restricted to any open subset of $M^n$. If $f$ admits such a bending we say that it is \emph{genuinely infinitesimally bendable}. As one expects, trivial infinitesimal bendings are never genuine. If $f(M)\subset\mathbb{R}^{n+\ell}\subset\mathbb{R}^{n+p}$, $\ell<p$, and $e\in\mathbb{R}^{n+p}$ is orthogonal to $\mathbb{R}^{n+\ell}$ then $\tau=\phi e$ for $\phi\in C^\infty(M)$ is another example of an infinitesimal bending that is not genuine.
Recall that an isometric immersion $f\colon M^n\to\mathbb{R}^{n+p}$ is said to be \emph{$r$-ruled} if there exists an $r$-dimensional smooth totally geodesic tangent distribution whose leaves (rulings) are mapped diffeomorphically by $f$ to open subsets of affine subspaces of $\mathbb{R}^{n+p}$.
\begin{theorem}\label{local1} Let $f\colon M^n\to\mathbb{R}^{n+p}$, $n>2p\geq 4$, be an isometric immersion and let $\tau$ be an infinitesimal bending of $f$. Then along each connected component of an open and dense subset either $\tau$ extends in the singular sense or $f$ is $r$-ruled with $r\geq n-2p$. \end{theorem}
The following is an immediate consequence of the above result.
\begin{corollary} Let $f\colon M^n\to\mathbb{R}^{n+p}$, $n>2p\geq 4$, be a genuinely infinitesimally bendable isometric immersion. Then $f$ is $r$-ruled with $r\geq n-2p$ along connected components of an open dense subset of $M^n$. \end{corollary}
We say that $f\colon M^n\to\mathbb{R}^{n+p}$ is \emph{genuinely infinitesimally rigid} if given any infinitesimal bending $\tau$ of $f$ there is an open dense subset of $M^n$ such that $\tau$ restricted to any connected component extends in the singular sense. Theorem \ref{local1} also has the following two consequences.
\begin{corollary} Let $f\colon M^n\to\mathbb{R}^{n+p}$, $n>2p\geq 4$, be an isometric immersion. If $M^n$ has positive Ricci curvature then $f$ is genuinely infinitesimally rigid. \end{corollary}
\begin{corollary} Let $g\colon M^n\to\mathbb{S}^{n+p-1}$, $n>2p\geq 4$, be an isometric immersion and let $f=i\circ g$ where $i\colon\mathbb{S}^{n+p-1}\to\mathbb{R}^{n+p}$ denotes the umbilical inclusion. Then $f$ is genuinely infinitesimally rigid. \end{corollary}
A special class of ruled submanifolds consists of the ones with a relative nullity foliation. The \emph{relative nullity} subspace $\Delta(x)$ of $f\colon M^n\to\mathbb{R}^{n+p}$ at $x\in M^n$ is the kernel of the second fundamental form $\alpha\colon TM\times TM\to N_fM$ with values in the normal bundle, that is, $$ \Delta(x)=\{X\in T_xM: \alpha(X,Y)=0\;\;\mbox{for all}\;\;Y\in T_xM\}. $$ The dimension $\nu(x)$ of $\Delta(x)$ is called the \emph{index of relative nullity} of $f$ at $x\in M^n$.
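For instance, for a cylinder $f\colon M^n\to\mathbb{R}^{n+1}$ given by $f(x_1,\ldots,x_n)=(\gamma(x_1),x_2,\ldots,x_n)$, where $\gamma$ is a unit speed curve in $\mathbb{R}^2$ with curvature $\kappa$ and $M^n$ carries the induced (flat) metric, we have $\alpha(\partial_1,\partial_1)=\kappa(x_1)N$ for a unit normal field $N$ and $\alpha(\partial_i,\partial_j)=0$ for all other pairs of coordinate fields. Hence $\Delta=\mbox{span}\{\partial_2,\ldots,\partial_n\}$ and $\nu=n-1$ wherever $\kappa\neq 0$, the leaves of $\Delta$ being mapped by $f$ onto the affine subspaces $\{\gamma(x_1)\}\times\mathbb{R}^{n-1}$.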
It is a standard fact that the submanifold is ruled by the leaves of the relative nullity distribution on any open subset of $M^n$ where the index of relative nullity $\nu>0$ is constant. In the case of low codimension, with a substantial additional effort we obtain a better lower bound for the dimension of the rulings.
\begin{theorem}\label{local2} Let $f\colon M^n\to\mathbb{R}^{n+p}$, $n>2p$, be a genuinely infinitesimally bendable isometric immersion. If $2\leq p\leq 5$, then one of the following holds along any connected component, say $U$, of an open dense subset of $M^n$: \begin{itemize} \item[(i)] $f|_U$ is $\nu$-ruled by leaves of relative nullity with $\nu\geq n-2p$. \item[(ii)] $f|_U$ has $\nu<n-2p$ at any point and is $r$-ruled with $r\geq n-2p+3$. \end{itemize} \end{theorem}
For $p=2$ notice that we are always in case $(i)$ since an $(n-1)$-ruled submanifold in that codimension has index of relative nullity $\nu\geq n-3$ at any point.
Dajczer and Gromoll \cite{DG} proved that, along connected components of an open dense subset, an isometrically deformable compact Euclidean submanifold of dimension at least five and codimension two is either isometrically rigid or contained in a deformable hypersurface (with possible singularities), and any isometric deformation of the former is given by an isometric deformation of the latter. This result was extended by Florit and Guimarães \cite{FG} to other low codimensions. The next result of similar nature concerns infinitesimal bendings of submanifolds in codimension two.
\begin{theorem}\label{maincompact} Let $f\colon M^n\to\mathbb{R}^{n+2}$, $n\geq 5$, be an isometric immersion of a compact Riemannian manifold with no open flat subset. For any infinitesimal bending $\tau$ of $f$ one of the following holds along any connected component, say $U$, of an open dense subset of $M^n$: \begin{itemize} \item[(i)] The infinitesimal bending $\tau|_U$ extends in the singular sense. \item[(ii)] There is an orthogonal splitting $\mathbb{R}^{n+2}=\mathbb{R}^{n+1}\oplus\mbox{span}\{e\}$ so that $f(U)\subset\mathbb{R}^{n+1}$ and $\tau|_U=\tau_1+\tau_2$ is a sum of infinitesimal bendings that extend in the singular sense, where $\tau_1\in\mathbb{R}^{n+1}$ and $\tau_2=\phi e$ for $\phi\in C^\infty(U)$. \end{itemize} \end{theorem}
It follows from the proof that the assumption on the open flat subset can be replaced by the weaker hypothesis that there is no open subset of $M^n$ where the index of relative nullity satisfies $\nu\geq n-1$. Moreover, we will see that cases $(i)$ and $(ii)$ are not disjoint.
In the last section of the paper, we discuss why the local results given above also hold if the ambient space is a nonflat space form.
\section{The associated tensor}
In this section, we discuss several properties of a tensor associated to an infinitesimal bending, called the associated rotation field in the classical theory of surfaces; see for instance \cite{Sp}. For basic facts on infinitesimal bendings we refer to \cite{DR}, \cite{DT}, \cite{DV} and \cite{GR}.
In the sequel, let $\tau$ denote an infinitesimal bending of an isometric immersion $f\colon M^n\to\mathbb{R}^{n+p}$. Then the section $L\in\Gamma(\mbox{Hom}(TM,f^*T\mathbb{R}^{n+p}))$ is the tensor defined by $$ LX=\tilde\nabla_X\tau $$ where $\tilde\nabla$ is the Levi-Civita connection in $\mathbb{R}^{n+p}$. Hence \eqref{infbend} can be written as \begin{equation} \label{skew} \<LX,f_*Y{\rangle}+\<LY,f_*X{\rangle}=0 \end{equation} for any $X,Y\in\mathfrak{X}(M)$.
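Indeed, this is just polarization: since $\tau_*X=\tilde\nabla_X\tau=LX$, evaluating \eqref{infbend} at $X+Y$ and subtracting the identities for $X$ and for $Y$ gives
$$
0=\<f_*(X+Y),L(X+Y){\rangle}-\<f_*X,LX{\rangle}-\<f_*Y,LY{\rangle}=\<LX,f_*Y{\rangle}+\<LY,f_*X{\rangle},
$$
which is \eqref{skew}, while taking $Y=X$ in \eqref{skew} recovers \eqref{infbend}.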
Let $B\colon TM\times TM\to f^*T\mathbb{R}^{n+p}$ be the symmetric tensor defined by $$ B(X,Y)=(\tilde\nabla_XL)Y $$ for any $X,Y\in\mathfrak{X}(M)$. If $\tau$ is an immersion, notice that $B$ is nothing but its second fundamental form.
\begin{proposition} The tensor $B$ satisfies \begin{equation} \label{segderL} (\tilde\nabla_XB)(Y,Z)-(\tilde\nabla_YB)(X,Z)=-LR(X,Y)Z \end{equation} for any $X,Y,Z\in\mathfrak{X}(M)$. \end{proposition}
\proof Use the identity \begin{equation} \label{form} (\tilde\nabla_XB)(Y,Z) =\tilde\nabla_X(\tilde\nabla_YL)Z-(\tilde\nabla_{\nabla_XY}L)Z-(\tilde\nabla_Y L)\nabla_XZ \end{equation} together with the definition of the curvature tensor. \qed
The metrics $g_t$ induced by $f_t=f+t\tau$ satisfy \begin{equation} \label{der metrica} \partial/\partial t|_{t=0}g_t(X,Y)=0 \end{equation} for any $X,Y\in\mathfrak{X}(M)$. Hence, the Levi-Civita connections and curvature tensors of $g_t$ satisfy \begin{equation} \label{der conex} \partial/\partial t|_{t=0}\nabla^{t}_X Y=0 \end{equation} and \begin{equation} \label{der curv} \partial/\partial t|_{t=0}g_t(R^t(X,Y)Z,W)=0 \end{equation} for any $X,Y,Z,W\in\mathfrak{X}(M)$. Taking the derivative with respect to $t$ at $t=0$ of the Gauss formula for $f_t$, namely, of $$ \tilde\nabla_Xf_{t*}Y=f_{t*}\nabla^t_XY+\alpha^t(X,Y), $$ we obtain \begin{equation} \label{derL} B(X,Y)=\partial/\partial t|_{t=0}\alpha^t(X,Y). \end{equation} Taking tangent and normal components with respect to $f$ we have $$ B(X,Y)=f_*{\cal Y}(X,Y)+\beta(X,Y) $$ where the tensors ${\cal Y}\colon TM\times TM\to TM$ and $\beta\colon TM\times TM\to N_fM$ are also symmetric.
\begin{proposition} The tensor ${\cal Y}\colon TM\times TM\to TM$ satisfies \begin{equation} \label{parte} {\langle}\alpha(X,Y),LZ{\rangle}+{\langle}{\cal Y}(X,Y),Z{\rangle}=0 \end{equation} for any $X,Y,Z\in\mathfrak{X}(M)$. \end{proposition}
\proof Given $\eta(t)\in\Gamma(N_{f_t}M)$, let ${\cal Y}_\eta$ be the tangent vector field given by $$ f_*{\cal Y}_\eta=(\partial/\partial t|_{t=0}\eta(t))_{f_*TM}. $$ The derivative of $\<f_{t*}Z,\eta(t){\rangle}=0$ with respect to $t$ at $t=0$ yields $$ {\langle}\eta,LZ{\rangle}+{\langle}{\cal Y}_{\eta},Z{\rangle}=0 $$ where $Z\in\mathfrak{X}(M)$ and $\eta=\eta(0)$. In particular, $$ {\langle}\alpha(X,Y),LZ{\rangle}+{\langle}{\cal Y}_{\alpha(X,Y)},Z{\rangle}=0 $$ for any $X,Y,Z\in\mathfrak{X}(M)$. On the other hand, we obtain from \eqref{derL} that $$ {\cal Y}_{\alpha(X,Y)}={\cal Y}(X,Y) $$ for any $X,Y\in\mathfrak{X}(M)$.\qed
\begin{proposition} The tensor $\beta\colon TM\times TM\to N_fM$ satisfies \begin{align} \label{derGauss} {\langle}\beta(X,W),\alpha(Y,Z){\rangle}\,+\,&{\langle}\alpha(X,W),\beta(Y,Z){\rangle}\nonumber\\ &={\langle}\beta(X,Z),\alpha(Y,W){\rangle}+{\langle}\alpha(X,Z),\beta(Y,W){\rangle} \end{align} and \begin{align}\label{casicodazzi} (\nabla^{\perp}_X\beta)(Y,Z)-&\,(\nabla_Y^{\perp}\beta)(X,Z)\nonumber\\ &=\,\alpha(Y,{\cal Y}(X,Z))-\alpha(X,{\cal Y}(Y,Z))-(LR(X,Y)Z)_{N_fM} \end{align} for any $X,Y,Z,W\in\mathfrak{X}(M)$. \end{proposition}
\proof To prove \eqref{derGauss} take the derivative with respect to $t$ at $t=0$ of the Gauss equations for $f_t$, that is, of $$ g_t(R^t(X,Y)Z,W)=g_t(\alpha^t(X,W),\alpha^t(Y,Z))-g_t(\alpha^t(X,Z),\alpha^t(Y,W)) $$ and use \eqref{der metrica}, \eqref{der curv} and \eqref{derL}. Using \eqref{form} we have $$ ((\tilde\nabla_XB)(Y,Z))_{N_fM}= \alpha(X,{\cal Y}(Y,Z))+(\nabla_X^{\perp}\beta)(Y,Z) $$ and \eqref{casicodazzi} follows from \eqref{segderL}.
\qed
We discuss next the simplest examples of infinitesimal bendings.
\begin{examples} \label{examples}\emph{ $(1)$ If $\tau$ is a trivial infinitesimal bending of $f\colon M^n\to\mathbb{R}^{n+p}$, $p\geq 2$, then it follows from the references given above that $$ \tau(x)=\mathcal{D}f(x)+w $$ where $\mathcal{D}$ is a skew-symmetric linear transformation of $\mathbb{R}^{n+p}$ and $w\in\mathbb{R}^{n+p}$. Take $\lambda\in\Gamma(f^*T\mathbb{R}^{n+p})$ such that $F\colon \tilde{M}^{n+1}=M^n\times(-\epsilon,\epsilon)\to \mathbb{R}^{n+p}$, given by $F(x,t)=f(x)+t\lambda(x)$, is an immersion for $t\neq 0$. Then $\tau$ extends in the singular sense since $$ \tilde{\tau}(x,t)=\tau(x)+t\mathcal{D}\lambda(x) $$ is a (trivial) infinitesimal bending of $F$ on the open subset where $F$ is an immersion. \\ $(2)$ The first normal space of $f\colon M^n\to\mathbb{R}^{n+p}$ at $x\in M^n$ is $$ N_1(x)=\mbox{span}\{\alpha(X,Y):X,Y\in T_xM\}. $$ Then $\tau=f_*Z+\delta$ is an infinitesimal bending if $Z\in\mathfrak{X}(M)$ is a Killing field and $\delta\in\Gamma(N_1^\perp)$ is a smooth normal vector field. }\end{examples}
\section{Flat bilinear forms}
Flat bilinear forms were introduced by J. D. Moore \cite{Mo} after the pioneering work of E. Cartan to deal with rigidity questions on isometric immersions in space forms. In the present paper, it is shown that they are also very helpful in the study of similar questions for infinitesimal bendings of submanifolds.
Let $V$ and $U$ be finite dimensional real vector spaces and let $W^{p,q}$ be a real vector space of dimension $p+q$ endowed with an indefinite inner product of type $(p,q)$. A bilinear form $\mathcal{B}\colon V\times U\to W^{p,q}$ is said to be \emph{flat} if $$ {\langle}\mathcal{B}(X,Z),\mathcal{B}(Y,W){\rangle} -{\langle}\mathcal{B}(X,W),\mathcal{B}(Y,Z){\rangle}=0 $$ for all $X,Y\in V$ and $W,Z\in U$. Then $X\in V$ is called a (left) \emph{regular element} of $\mathcal{B}$ if $$ \dim\mathcal{B}_X(U)=\max\{\dim\mathcal{B}_Y(U)\colon Y\in V\} $$ where $\mathcal{B}_X(Y)=\mathcal{B}(X,Y)$ for any $Y\in U$. The set $RE(\mathcal{B})$ of regular elements of $\mathcal{B}$ is open dense in $V$. The following basic fact was given in \cite{Mo}.
\begin{lemma}\label{fbn} Let $\mathcal{B}\colon V\times U\to W$ be a flat bilinear form. If $Y\in RE(\mathcal{B})$ then $$ \mathcal{B}(X,\ker\mathcal{B}_Y) \subset \mathcal{B}_Y(U)\cap\mathcal{B}_Y(U)^{\perp} $$ for any $X\in V$. \end{lemma}
The next lemma is a fundamental result in the theory of symmetric flat bilinear forms. It turns out to be false for $p\geq 6$ as shown in \cite{DF2}.
\begin{lemma}\label{main} Let $\mathcal{B}\colon V^n\times V^n\to W^{p,q}$, $p\leq 5$ and $p+q<n$, be a symmetric flat bilinear form and set $$ \mathcal{N}(\mathcal{B})=\{X\in V: \mathcal{B}(X,Y)=0\;\;\mbox{for all}\;\;Y\in V\}. $$ If $\dim \mathcal{N}(\mathcal{B})\leq n-p-q-1$ then there is an orthogonal decomposition $$ W^{p,q}=W_1^{\ell,\ell}\oplus W_2^{p-\ell,q-\ell},\; 1\leq\ell\leq p, $$ such that the $W_j$-components $\mathcal{B}_j$ of $\mathcal{B}$ satisfy: \begin{itemize} \item[(i)] $\mathcal{B}_1$ is nonzero and $$ {\langle}\mathcal{B}_1(X,Y),\mathcal{B}_1(Z,W){\rangle}=0 $$ for all $X,Y,Z,W\in V$. \item[(ii)] $\mathcal{B}_2$ is flat and $\dim\mathcal{N}(\mathcal{B}_2)\geq n-p-q+2\ell$. \end{itemize} \end{lemma}
\proof See \cite{DF} or \cite{DT}. \qed
\section{The local results}
In this section we give the proofs of the local theorems stated in the introduction. A key ingredient is the following result due to Florit and Guimarães \cite{FG}.
\begin{proposition}\label{nowhere} Let $f\colon M^n\to\mathbb{R}^{n+p}$ be an isometric immersion and let $D$ be a smooth tangent distribution of dimension $d>0$. Assume that there does not exist an open subset $U\subset M^n$ and $Z\in\Gamma(D|_U)$ such that the map $F\colon U\times\mathbb{R}\to\mathbb{R}^{n+p}$ given by $$ F(x,t)=f(x)+tf_*Z(x) $$ is a singular extension of $f$ on some open neighborhood of $U\times\{0\}$. Then for any $x\in M^n$ there is an open neighborhood $V$ of the origin in $D(x)$ such that $f(x)+f_*(x)V\subset f(M)$. Hence $f$ is $d$-ruled along each connected component of an open dense subset of $M^n$. \end{proposition}
\proof See \cite{DT} or \cite{FG}.\qed
\subsection{The first local result}
We first associate to an infinitesimal bending a flat bilinear form.
\begin{lemma}\label{thetaflat} Let $\tau$ be an infinitesimal bending of an isometric immersion $f\colon M^n\to\mathbb{R}^{n+p}$. Then the bilinear form $\theta\colon TM\times TM\to N_fM\oplus N_fM$ defined at any point of $M^n$ by \begin{equation} \label{theta} \theta(X,Y)=(\alpha(X,Y)+\beta(X,Y),\alpha(X,Y)-\beta(X,Y)) \end{equation} is flat with respect to the inner product in $N_fM\oplus N_fM$ given by $$ {\langle}\!{\langle}(\xi_1,\eta_1),(\xi_2,\eta_2){\rangle}\!{\rangle}_{N_fM\oplus N_fM} ={\langle}\xi_1,\xi_2{\rangle}_{N_fM}-{\langle}\eta_1,\eta_2{\rangle}_{N_fM}. $$ \end{lemma}
\proof A straightforward computation shows that \begin{align*} &\frac{1}{2}\left({\langle}\!{\langle}\theta(X,Z),\theta(Y,W){\rangle}\!{\rangle} -{\langle}\!{\langle}\theta(X,W),\theta(Y,Z){\rangle}\!{\rangle}\right) ={\langle}\beta(X,Z),\alpha(Y,W){\rangle}\\ &\;\;\;+{\langle}\alpha(X,Z),\beta(Y,W){\rangle} -{\langle}\beta(X,W),\alpha(Y,Z){\rangle}-{\langle}\alpha(X,W),\beta(Y,Z){\rangle}, \end{align*} and the proof follows from \eqref{derGauss}. \qed
An isometric immersion $f\colon M^n\to\mathbb{R}^{n+p}$ is called \emph{$1$-regular} if the first normal spaces $N_1(x)$ have constant dimension $k\leq p$ on $M^n$ and thus form a subbundle $N_1$ of rank $k$ of the normal bundle. Under the $1$-regularity assumption we have the following equivalent statement.
\begin{lemma}\label{cor} Assume that $f$ is $1$-regular and let $\beta_1\colon TM\times TM\to N_1$ be the $N_1$-component of $\beta$. Then the bilinear form $\hat\theta\colon TM\times TM\to N_1\oplus N_1$ defined at any point by \begin{equation} \label{deftheta1} \hat\theta(X,Y) =\left(\alpha(X,Y)+\beta_1(X,Y),\alpha(X,Y)-\beta_1(X,Y)\right) \end{equation} is flat with respect to the inner product induced on $N_1\oplus N_1$. \end{lemma}
\noindent\emph{Proof of Theorem \ref{local1}:} Let $\tau$ be an infinitesimal bending of $f$. With the use of \eqref{skew} and \eqref{parte} we easily obtain \begin{equation} \label{above} \<f_*X+\tilde\nabla_XY,LX+\tilde\nabla_X LY{\rangle}={\langle}\alpha(X,Y),\beta(X,Y){\rangle} \end{equation} for any $X,Y\in\mathfrak{X}(M)$. By Lemma \ref{thetaflat} we have at any point of $M^n$ that the symmetric tensor $\theta$ is flat. Given $Y\in RE(\theta)$ at a point, denote $D=\ker\theta_Y$ where $\theta_Y(X)=\theta(Y,X)$. Notice that $Z\in D$ means that $\alpha(Y,Z)=0=\beta(Y,Z)$. Let $U\subset M^n$ be an open subset where $Y\in\mathfrak{X}(U)$ satisfies $Y\in RE(\theta)$ and $D$ has dimension $d$ at any point. Lemma \ref{fbn} gives $$ {\langle}\!{\langle}\theta(X,Z),\theta(X,Z){\rangle}\!{\rangle}=0 $$ for any $X\in\mathfrak{X}(U)$ and $Z\in\Gamma(D)$.
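Unwinding the definition \eqref{theta} of $\theta$ and of the inner product ${\langle}\!{\langle}\,,\,{\rangle}\!{\rangle}$, the last equation says that
$$
0={\langle}\!{\langle}\theta(X,Z),\theta(X,Z){\rangle}\!{\rangle}
=\|\alpha(X,Z)+\beta(X,Z)\|^2-\|\alpha(X,Z)-\beta(X,Z)\|^2
=4\,{\langle}\alpha(X,Z),\beta(X,Z){\rangle}.
$$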
Equivalently, the right hand side of \eqref{above} vanishes and thus \begin{equation} \label{vanish} \<f_*X+\tilde\nabla_XZ,LX+\tilde\nabla_X LZ{\rangle}=0 \end{equation} for any $X\in\mathfrak{X}(U)$ and $Z\in\Gamma(D)$.
Assume that there exists a nowhere vanishing $Z\in\Gamma(D)$ defined on an open subset $V$ of $U$ such that $F\colon V\times(-\epsilon,\epsilon)\to\mathbb{R}^{n+p}$ given by $$ F(x,t)=f(x)+tf_*Z(x) $$ is a singular extension of $f|_{V}$. The map $\tilde{\tau}\colon V\times(-\epsilon,\epsilon)\to\mathbb{R}^{n+p}$ given by $$ \tilde{\tau}(x,t)=\tau(x)+tLZ(x) $$ is an infinitesimal bending as well as an extension of $\tau|_{V}$ in the singular sense. In fact, $$ \<F_*\partial_t,\tilde\nabla_{\partial_t}\tilde{\tau}{\rangle}=\<f_*Z,LZ{\rangle}=0, $$ $$ {\langle}\tilde\nabla_{\partial_t}\tilde{\tau},F_*X{\rangle}+{\langle}\tilde\nabla_X\tilde{\tau},F_*\partial_t{\rangle} =\<LZ,f_*X+t\tilde\nabla_XZ{\rangle}+\<LX+t\tilde\nabla_XLZ,f_*Z{\rangle}=0 $$ and $$ \<F_*X,\tilde\nabla_X\tilde{\tau}{\rangle}=\<f_*X+t\tilde\nabla_XZ,LX+t\tilde\nabla_XLZ{\rangle}=0 $$ where the last equality follows from \eqref{vanish}.
Let $W\subset U$ be an open subset such that a $Z\in\Gamma(D)$ as above does not exist along any open subset of $W$. By Proposition \ref{nowhere} the immersion is $d$-ruled along any connected component of an open dense subset of $W$. Moreover, we have $d=\dim D=n-\dim\mbox{Im}(\theta_Y)\geq n-2p$. \qed
\begin{remark} {\em In Theorem \ref{local1} if $f$ is $1$-regular with $\dim N_1=q<p$ we obtain the better lower bound $r\geq n-2q$ since the proof still works using Lemma \ref{cor} instead of Lemma \ref{thetaflat}. }\end{remark}
\subsection{The second local result}
Let $F\colon\tilde{M}^{n+1}\to\mathbb{R}^{n+p}$ be an isometric immersion and let $\tilde{\tau}$ be an infinitesimal bending of $F$. Given an isometric embedding $j\colon M^n\to\tilde{M}^{n+1}$ consider the composition of isometric immersions $f=F\circ j\colon M^n\to\mathbb{R}^{n+p}$. Then $\tau=\tilde{\tau}|_{j(M)}$ is an infinitesimal bending of $f$. It is easy to see that $$ B(X,Y)=\tilde{B}(X,Y)+{\langle}\tilde\nabla_XY,F_*\eta{\rangle}\tilde{L}\eta $$ for $\eta\in\Gamma(N_jM)$ of unit length and $X,Y\in\mathfrak{X}(M)$. Then \eqref{parte} gives $$ {\langle}\beta(X,Y),F_*\eta{\rangle}+{\langle}\alpha^f(X,Y),\tilde{L}\eta{\rangle}=0 $$ for any $X,Y\in\mathfrak{X}(M)$.
We will see that satisfying a condition of this type may guarantee that an infinitesimal bending is not genuine. In fact, this was already proved by Florit \cite{Fl} in a special case. We say that an infinitesimal bending of a given isometric immersion $f\colon M^n\to\mathbb{R}^{n+p}$, $p\geq 2$, satisfies the \emph{condition $(*)$} if there is $\eta\in\Gamma(N_fM)$ nowhere vanishing and $\xi\in\Gamma(R)$, where $R$ is determined by the orthogonal splitting $N_fM=P\oplus R$ and $P=\mbox{span}\{\eta\}$, such that \begin{equation} \label{criterio} B_\eta+A_\xi=0 \end{equation} where $B_\eta={\langle}\beta,\eta{\rangle}$. We choose $\eta$ of unit length for simplicity. Thus \eqref{criterio} means that \begin{equation} \label{criterios} {\langle}\beta(X,Y),\eta{\rangle}+{\langle}\alpha(X,Y),\xi{\rangle}=0 \end{equation} for any $X,Y\in\mathfrak{X}(M)$.
The following result is of independent interest since it does not require the codimension to satisfy $p\leq 5$ as is the case in Theorem \ref{local2}.
\begin{theorem}\label{local} Let $f\colon M^n\to\mathbb{R}^{n+p}$, $p\geq 2$, be an isometric immersion and let $\tau$ be an infinitesimal bending of $f$ that satisfies the condition $(*)$. Then along each connected component of an open and dense subset of $M^n$ either $\tau$ extends in the singular sense or $f$ is $r$-ruled with $r\geq n-2p+3$. \end{theorem} As before there is the following immediate consequence. \begin{corollary}\label{localgen} Let $f\colon M^n\to\mathbb{R}^{n+p}$, $p\geq 2$, be an isometric immersion and let $\tau$ be a genuine infinitesimal bending of $f$ that satisfies the condition $(*)$. Then $f$ is $r$-ruled with $r\geq n-2p+3$ on connected components of an open dense subset of $M^n$. \end{corollary} When $\tau$ satisfies the condition $(*)$ we may extend the tensor $L$ to the tensor $\bar{L}\in\Gamma(\mbox{Hom}(TM\oplus P,f^*T\mathbb{R}^{n+p}))$ by defining $$ \bar{L}\eta=f_*Y+\xi $$ where $Y\in\mathfrak{X}(M)$ is given by $$ \<Y,X{\rangle}+\<LX,\eta{\rangle}=0 $$ for any $X\in\mathfrak{X}(M)$. Then $\bar{L}$ satisfies $$ {\langle}\bar{L}X,\eta{\rangle}+\<f_*X,\bar{L}\eta{\rangle}=0 $$ for any $X\in\mathfrak{X}(M)$. Given $\lambda\in\Gamma (f_*TU\oplus P)$ nowhere vanishing where $U$ is an open subset of $M^n$, we define the map $F\colon U\times (-\epsilon,\epsilon)\to\mathbb{R}^{n+p}$ by \begin{equation} \label{F} F(x,t)=f(x)+t\lambda(x). \end{equation} Notice that $F$ is not an immersion at least for $t=0$ at points where $\lambda$ is tangent to $U$. Then let $\tilde{\tau}\colon U\times (-\epsilon,\epsilon)\to\mathbb{R}^{n+p}$ be the map given by \begin{equation} \label{tilde} \tilde{\tau}(x,t)=\tau(x)+t\bar{L}\lambda(x). \end{equation} We have $$ \<F_*\partial_t,\tilde\nabla_{\partial_t}\tilde{\tau}{\rangle}=0. $$ Moreover, since ${\langle}\bar{L}\lambda,\lambda{\rangle}=0$ we obtain $$ {\langle}\tilde\nabla_{\partial_t}\tilde{\tau},F_*X{\rangle}+{\langle}\tilde\nabla_X\tilde{\tau},F_*\partial_t{\rangle} ={\langle}\bar{L}\lambda,f_*X{\rangle} +\<LX,\lambda{\rangle}+tX{\langle}\bar{L}\lambda,\lambda{\rangle}=0 $$ for any $X\in\mathfrak{X}(M)$ and $t\in (-\epsilon,\epsilon)$. Thus $\tilde{\tau}$ is an infinitesimal bending of $F$ on the open subset $\tilde{U}$ of $U\times (-\epsilon,\epsilon)$ where $F$ is an immersion if and only if $$ \<F_*X,\tilde\nabla_X \tilde{\tau}{\rangle}=0, $$ or equivalently, if $$ \<f_*X+t\tilde\nabla_X\lambda,LX+t\tilde\nabla_X\bar{L}\lambda{\rangle}=0 $$ for any $X\in\mathfrak{X}(M)$. In the sequel we take $F$ restricted to $\tilde{U}$. By the above, in order to have that $\tilde{\tau}$ is an infinitesimal bending of $F$ the strategy is to make use of the condition $(*)$ to construct a subbundle $D\subset f_*TM\oplus P$ such that $$ \<f_*X+\tilde\nabla_X \lambda,LX+\tilde\nabla_X\bar{L}\lambda{\rangle}=0 $$ for any $X\in\mathfrak{X}(M)$ and any $\lambda\in\Gamma(D)$. \begin{lemma} Assume that $\tau$ satisfies the condition $(*)$. Then \begin{equation} \label{impext} \<f_*X+\tilde\nabla_X \lambda,LX+\tilde\nabla_X\bar{L}\lambda{\rangle} ={\langle}(\tilde\nabla_X\lambda)_R,(\tilde\nabla_X \bar{L})\lambda{\rangle} \end{equation} where $X\in\mathfrak{X}(M)$, $\lambda\in\Gamma(f_*TM\oplus P)$ and $$ (\tilde\nabla_X\bar{L})\lambda=\tilde\nabla_X\bar{L}\lambda-\bar{L}\nabla'_X\lambda, $$ with $\nabla'$ the connection induced on $f_*TM\oplus P$. \end{lemma} \proof Set $\lambda=f_*Z+\phi\eta$ where $Z\in\mathfrak{X}(M)$ and $\phi\in C^\infty(M)$. 
Then \begin{align}\label{eq 0} &\<f_*X+\tilde\nabla_X \lambda,LX+\tilde\nabla_X\bar{L}\lambda{\rangle}= \<f_*(\tilde\nabla_X\lambda)_{TM}+(\tilde\nabla_X\lambda)_P +(\tilde\nabla_X\lambda)_R,\tilde\nabla_X\bar{L}\lambda{\rangle}\nonumber\\ &\;\;+{\langle}\tilde\nabla_X\lambda,LX{\rangle}+\<f_*X,\tilde\nabla_X\bar{L}\lambda{\rangle}\nonumber\\ &=\<f_*(\tilde\nabla_X\lambda)_{TM},(\tilde\nabla_XL)Z+L\nabla_X Z +X(\phi)\bar{L}\eta+\phi\tilde\nabla_X\bar{L}\eta{\rangle}\nonumber\\ &\;\;+(\<A_\eta X,Z{\rangle}+X(\phi)){\langle}\eta,(\tilde\nabla_XL)Z+L\nabla_XZ +X(\phi)\bar{L}\eta+\phi\tilde\nabla_X\bar{L}\eta{\rangle}\nonumber\\ &\;\;+{\langle}(\tilde\nabla_X\lambda)_R,\tilde\nabla_X\bar{L}\lambda{\rangle} +{\langle}\tilde\nabla_X\lambda,LX{\rangle}+\<f_*X,\tilde\nabla_X\bar{L}\lambda{\rangle} \end{align} for any $X\in\mathfrak{X}(M)$. Using \eqref{parte} and \eqref{criterios} we obtain \begin{align}\label{eq 1} {\langle}(\tilde\nabla_X\lambda)_{TM},&(\tilde\nabla_XL)Z+L\nabla_XZ{\rangle}\nonumber\\ &=-\<L(\tilde\nabla_X\lambda)_{TM},\alpha(X,Z){\rangle} -\phi\<A_\eta X,L\nabla_X Z{\rangle} \end{align} and \begin{align}\label{eq 2} {\langle}(\tilde\nabla_X\lambda)_{TM},&X(\phi)\bar{L}\eta +\phi\tilde\nabla_X\bar{L}\eta{\rangle}=\phi{\langle}(\tilde\nabla_X\lambda)_{TM},\nabla_XY{\rangle}\nonumber\\ \!& -X(\phi)\<L(\tilde\nabla_X\lambda)_{TM},\eta{\rangle} -\phi{\langle}\alpha(X,(\tilde\nabla_X\lambda)_{TM}),\xi{\rangle} \end{align} where for the first term in the right hand side of \eqref{eq 2} we have \begin{align}\label{eq 5} {\langle}(\tilde\nabla_X\lambda)_{TM},\nabla_XY{\rangle} =&\,X{\langle}(\tilde\nabla_X\lambda)_{TM},Y{\rangle} -{\langle}\nabla_X(\tilde\nabla_X\lambda)_{TM},Y{\rangle}\nonumber\\ =&-X\<L(\tilde\nabla_X\lambda)_{TM}),\eta{\rangle} +\<L\nabla_X(\tilde\nabla_X\lambda)_{TM},\eta{\rangle}\nonumber\\ =&-{\langle}(\tilde\nabla_XL)(\tilde\nabla_X\lambda)_{TM},\eta{\rangle} -\<L(\tilde\nabla_X\lambda)_{TM},\tilde\nabla_X\eta{\rangle}\nonumber\\ =&\,{\langle}\alpha(X,(\tilde\nabla_X\lambda)_{TM}),\xi{\rangle} -\<L(\tilde\nabla_X\lambda)_{TM},\tilde\nabla_X\eta{\rangle}. \end{align} Moreover, \begin{align}\label{eq 3} {\langle}\eta,(\tilde\nabla_XL)Z+L\nabla_XZ{\rangle}=& -{\langle}\alpha(X,Z),\xi{\rangle}+{\langle}\eta,L\nabla_XZ{\rangle}, \end{align} \begin{align}\label{eq 4} {\langle}\eta,X(\phi)\bar{L}\eta+\phi\tilde\nabla_X\bar{L}\eta{\rangle}=& -\phi{\langle}\tilde\nabla_X\eta,\bar{L}\eta{\rangle}\nonumber\\ =&-\phi\<LA_\eta X,\eta{\rangle}-\phi{\langle}\nabla^\perp_X\eta,\xi{\rangle} \end{align} and \begin{align}\label{eq 6} &{\langle}\tilde\nabla_X\lambda,LX{\rangle}+\<f_*X,\tilde\nabla_X\bar{L}\lambda{\rangle} =-{\langle}\tilde\nabla_X X,\bar{L}\lambda{\rangle}-{\langle}\lambda,\tilde\nabla_XLX{\rangle}\nonumber\\ =&-{\langle}\nabla_XX,\bar{L}\lambda{\rangle}-{\langle}\alpha(X,X),\bar{L}\lambda{\rangle} -{\langle}\lambda,L\nabla_X X{\rangle}-{\langle}\lambda,(\tilde\nabla_XL)X{\rangle}=0. 
\end{align} Now a straightforward computation replacing \eqref{eq 1} through \eqref{eq 6} in \eqref{eq 0} yields \begin{align*} &\<f_*X+\tilde\nabla_X \lambda,LX+\tilde\nabla_X\bar{L}\lambda{\rangle}= {\langle}(\tilde\nabla_X\lambda)_R,\tilde\nabla_X\bar{L}\lambda{\rangle} -\<L(\tilde\nabla_X\lambda)_{TM},\alpha(X,Z)_R{\rangle}\\ &-\phi\<L(\tilde\nabla_X\lambda)_{TM},\nabla^\perp_X\eta{\rangle} -{\langle}\alpha(X,Z),\bar{L}(\tilde\nabla_X\lambda)_P{\rangle} -\phi{\langle}\nabla^\perp_X\eta,\bar{L}(\tilde\nabla_X\lambda)_P{\rangle}\\ &=\,{\langle}(\tilde\nabla_X\lambda)_R,(\tilde\nabla_X\bar{L})\lambda{\rangle}.\qed \end{align*} In view of \eqref{impext} the next step is to construct a subbundle $D\subset f_*TM\oplus P$ such that \begin{equation} \label{requisito} {\langle}(\tilde\nabla_X\lambda)_R,(\tilde\nabla_X\bar{L})\lambda{\rangle}=0 \end{equation} for any $X\in\mathfrak{X}(M)$ and $\lambda\in\Gamma(D)$. \begin{lemma}\label{varphi} Assume that $\tau$ satisfies the condition $(*)$. Then the bilinear form $\varphi\colon TM\times f_*TM\oplus P\to R\oplus R$ defined~by $$ \varphi(X,\lambda) =((\tilde\nabla_X\lambda)_R+((\tilde\nabla_X\bar{L})\lambda)_R, (\tilde\nabla_X\lambda)_R-((\tilde\nabla_X\bar{L})\lambda)_R) $$ is flat with respect to the indefinite inner product given by $$ {\langle}\!{\langle}(\xi_1,\mu_1),(\xi_2,\mu_2){\rangle}\!{\rangle}_{R\oplus R} ={\langle}\xi_1,\xi_2{\rangle}_R-{\langle}\mu_1,\mu_2{\rangle}_R. $$ \end{lemma} \proof We need to show that $$ \Theta={\langle}\!{\langle}\varphi(X,\lambda),\varphi(Y,\delta){\rangle}\!{\rangle} -{\langle}\!{\langle}\varphi(X,\delta),\varphi(Y,\lambda){\rangle}\!{\rangle}=0 $$ for any $X,Y\in\mathfrak{X}(M)$ and $\lambda,\delta\in f_*TM\oplus P$. We have \begin{align*} \frac{1}{2}\Theta=&{\langle}(\tilde\nabla_X\lambda)_R,((\tilde\nabla_Y\bar{L})\delta)_R{\rangle} +{\langle}(\tilde\nabla_Y\delta)_R,((\tilde\nabla_X\bar{L})\lambda)_R{\rangle}\nonumber\\ &\;\;-{\langle}(\tilde\nabla_X\delta)_R,((\tilde\nabla_Y\bar{L})\lambda)_R{\rangle} -{\langle}(\tilde\nabla_Y\lambda)_R,((\tilde\nabla_X\bar{L})\delta)_R{\rangle}. \end{align*} Clearly $\Theta=0$ if $\lambda,\delta\in\Gamma(P)$. If $\lambda,\delta\in\mathfrak{X}(M)$, then \begin{align*} \frac{1}{2}\Theta =&\,{\langle}\alpha(X,\lambda)_R,((\tilde\nabla_Y\bar{L})\delta)_R{\rangle} +{\langle}\alpha(Y,\delta)_R,((\tilde\nabla_X\bar{L})\lambda)_R{\rangle}\\ &-{\langle}\alpha(X,\delta)_R,((\tilde\nabla_Y\bar{L})\lambda)_R{\rangle} -{\langle}\alpha(Y,\lambda)_R,((\tilde\nabla_X\bar{L})\delta)_R{\rangle}\\ =&\,{\langle}\alpha(X,\lambda)_R,((\tilde\nabla_Y L)\delta)_R{\rangle} -\<A_\eta Y,\delta{\rangle}{\langle}\alpha(X,\lambda)_R,\bar{L}\eta{\rangle}\\ &+{\langle}\alpha(Y,\delta)_R,((\tilde\nabla_X L)\lambda)_R{\rangle} -\<A_\eta X,\lambda{\rangle}{\langle}\alpha(Y,\delta)_R,\bar{L}\eta{\rangle}\\ &-{\langle}\alpha(X,\delta)_R,((\tilde\nabla_Y L)\lambda)_R{\rangle} +\<A_\eta Y,\lambda{\rangle}{\langle}\alpha(X,\delta)_R,\bar{L}\eta{\rangle}\\ &-{\langle}\alpha(Y,\lambda)_R,((\tilde\nabla_X L)\delta)_R{\rangle} +\<A_\eta X,\delta{\rangle}{\langle}\alpha(Y,\lambda)_R,\bar{L}\eta{\rangle}. \end{align*} Using first \eqref{criterios} and then \eqref{derGauss} we obtain \begin{align*} \frac{1}{2}\Theta=&\,{\langle}\alpha(X,\lambda),\beta(Y,\delta){\rangle} +{\langle}\alpha(Y,\delta),\beta(X,\lambda){\rangle}\\ &-{\langle}\alpha(X,\delta),\beta(Y,\lambda){\rangle} -{\langle}\alpha(Y,\lambda),\beta(X,\delta){\rangle}=0. 
\end{align*} Finally, we consider the case $\lambda=\eta$ and $\delta=Z\in\mathfrak{X}(M)$. Then \begin{align*} \frac{1}{2}\Theta=&\,{\langle}\nabla^\perp_X\eta,((\tilde\nabla_Y L)Z)_R{\rangle} -\<A_\eta Y,Z{\rangle}{\langle}\nabla^\perp_X\eta,\bar{L}\eta{\rangle} +{\langle}\alpha(Y,Z)_R,((\tilde\nabla_X\bar{L})\eta)_R{\rangle}\\ -&\,{\langle}\nabla^\perp_Y\eta,((\tilde\nabla_X L)Z)_R{\rangle} +\<A_\eta X,Z{\rangle}{\langle}\nabla^\perp_Y\eta,\bar{L}\eta{\rangle} -{\langle}\alpha(X,Z)_R,((\tilde\nabla_Y\bar{L})\eta)_R{\rangle}. \end{align*} Since \begin{align*} {\langle}\nabla^\perp_X\eta,\bar{L}\eta{\rangle} =&\,{\langle}\tilde\nabla_X\eta,\bar{L}\eta{\rangle} +\<A_\eta X,\bar{L}\eta{\rangle} =-{\langle}\eta,\tilde\nabla_X\bar{L}\eta{\rangle}-\<LA_\eta X,\eta{\rangle}\\ =&-{\langle}\eta,(\tilde\nabla_X\bar{L})\eta{\rangle} \end{align*} we obtain \begin{align*} \frac{1}{2}\Theta=&\,{\langle}\nabla^\perp_X\eta,(\tilde\nabla_Y L)Z{\rangle} -{\langle}\nabla^\perp_Y\eta,(\tilde\nabla_X L)Z{\rangle}\\ &+{\langle}\alpha(Y,Z),(\tilde\nabla_X\bar{L})\eta{\rangle} -{\langle}\alpha(X,Z),(\tilde\nabla_Y\bar{L})\eta{\rangle}. \end{align*} For the first term using \eqref{form}, \eqref{parte} and \eqref{criterios} we obtain \begin{align*} {\langle}\nabla^\perp_X\eta, (\tilde\nabla_Y L)Z{\rangle}=&\,X{\langle}\eta,(\tilde\nabla_YL)Z{\rangle} -{\langle}\eta,\tilde\nabla_X(\tilde\nabla_YL)Z{\rangle} +\<A_\eta X,(\tilde\nabla_YL)Z{\rangle}\\ =&-X{\langle}\alpha(Y,Z),\bar{L}\eta{\rangle}-{\langle}\alpha(Y,Z),LA_\eta X{\rangle}\\ &-{\langle}\eta,(\tilde\nabla_X B)(Y,Z)+(\tilde\nabla_{\nabla_X Y}L)Z +(\tilde\nabla_YL)\nabla_X Z{\rangle}\\ =&-{\langle}(\nabla^\perp_X\alpha)(Y,Z)+\alpha(\nabla_X Y,Z) +\alpha(Y,\nabla_X Z),\bar{L}\eta{\rangle}\\ &-{\langle}\eta,(\tilde\nabla_X B)(Y,Z)+(\tilde\nabla_{\nabla_X Y}L)Z +(\tilde\nabla_YL)\nabla_X Z{\rangle}\\ &-{\langle}\alpha(Y,Z),\tilde\nabla_X\bar{L}\eta{\rangle}-{\langle}\alpha(Y,Z),LA_\eta X{\rangle} +\<A_{\alpha(Y,Z)}X,\bar{L}\eta{\rangle}\\ =&-{\langle}(\nabla^\perp_X\alpha)(Y,Z),\bar{L}\eta{\rangle} -{\langle}\eta,(\tilde\nabla_X B)(Y,Z){\rangle}\\ &-{\langle}\alpha(Y,Z),(\tilde\nabla_X\bar{L})\eta{\rangle}-\<LA_{\alpha(Y,Z)}X,\eta{\rangle}. \end{align*} Likewise, we have \begin{align*} {\langle}\nabla^\perp_Y\eta, (\tilde\nabla_X L)Z{\rangle}=& -{\langle}(\nabla^\perp_Y\alpha)(X,Z),\bar{L}\eta{\rangle} -{\langle}\eta,(\tilde\nabla_Y B)(X,Z){\rangle}\\ &-{\langle}\alpha(X,Z),(\tilde\nabla_Y\bar{L})\eta{\rangle}-\<LA_{\alpha(X,Z)}Y,\eta{\rangle}. \end{align*} From \eqref{segderL} and the Codazzi equation $$ (\nabla^{\perp}_X\alpha)(Y,Z)=(\nabla^{\perp}_Y\alpha)(X,Z) $$ we obtain \begin{align*} \frac{1}{2}\Theta=\<L(R(X,Y)Z-A_{\alpha(Y,Z)}X+A_{\alpha(X,Z)}Y),\eta{\rangle}. \end{align*} Hence $\Theta=0$ from the Gauss equation. \qed \noindent \emph{Proof of Theorem \ref{local}:} By Lemma \ref{varphi} there is a flat bilinear form $\varphi$. Let $U$ be an open subset of $M^n$ where there is $Y\in\mathfrak{X}(U)$ such that $Y\in RE(\varphi)$ and $D=\ker\varphi_Y$ has dimension $d$ at any point. Then Lemma \ref{fbn} gives $$ {\langle}\!{\langle}\varphi(X,\lambda),\varphi(X,\lambda){\rangle}\!{\rangle}=0 $$ for any $X\in\mathfrak{X}(U)$ and $\lambda\in\Gamma(D)$. Notice that this implies that \eqref{requisito} holds for any $\lambda\in\Gamma(D)$. Whenever there is a nonvanishing $\lambda\in\Gamma(D)$ on an open subset $V\subset U$ such that \eqref{F} defines a singular extension of $f|_V$, then $\tau|_V$ extends in the singular sense by means of \eqref{tilde}. 
Let $W\subset U$ be an open subset where $\lambda\in\Gamma(D)$ as above does not exist along any open subset of $W$. Hence $D$ must be a tangent distribution on $W$, and Proposition \ref{nowhere} gives that $f|_W$ is $d$-ruled on connected components of an open dense subset of $W$. Moreover, the dimension of the rulings is bounded from below by $n+1-\dimIm(\varphi_Y)\geq n-2p+3$. \qed \noindent \emph{Proof of Theorem \ref{local2}:} We work on the open dense subset of $M^n$ where $f$ is \mbox{$1$-regular} on any connected component. Consider an open subset of a connected component where the index of relative nullity is $\nu\leq n-2p-1$ at any point. Lemma \ref{main} applies and thus the flat bilinear form $\hat\theta$ in \eqref{deftheta1} decomposes at any point as $\hat\theta=\theta_1+\theta_2$ where $\theta_1$ is as in part $(i)$ of that result. Hence, on any open subset where the dimension of ${\cal S}(\theta_1) ={\cal S}(\hat\theta)\cap{\cal S}(\hat\theta)^\perp$ is constant, there are smooth local unit vector fields $\zeta_1,\zeta_2\in N_1$ such that $(\zeta_1,\zeta_2)\in {\cal S}(\theta_1)$. Equivalently, \begin{equation} \label{almost} {\langle}\beta(X,Y),\zeta_1+\zeta_2{\rangle}+{\langle}\alpha(X,Y),\zeta_1-\zeta_2{\rangle}=0 \end{equation} for any $X,Y\in\mathfrak{X}(M)$. Then $\zeta_1+\zeta_2\neq 0$ since otherwise $\zeta_1-\zeta_2\in N_1^\perp$. Hence $\tau$ satisfies the condition $(*)$ and the proof follows from Corollary \ref{localgen}.\qed \section{The global result} The first two results are of independent interest. \begin{proposition} Let $\tau$ be an infinitesimal bending of $f\colon M^n\to\mathbb{R}^{n+p}$ and let $\theta$ be the flat bilinear form defined by \eqref{theta}. Denote $\nu^*(x)=\dim\Delta^*(x)$ at $x\in M^n$ where $$ \Delta^*(x)=\mathcal{N}(\theta)(x)=\Delta\cap\mathcal{N}(\beta)(x). $$ Then, on any open subset of $M^n$ where $\nu^*$ is constant, the distribution $\Delta^*$ is totally geodesic and its leaves are mapped by $f$ onto open subsets of affine subspaces of $\mathbb{R}^{n+p}$. \end{proposition} \proof From \eqref{parte} we have $\Delta\subset\mathcal{N}({\cal Y})$. Then \eqref{casicodazzi} and the Gauss equation give $$ (\nabla^{\perp}_X\beta)(Z,Y)=(\nabla^{\perp}_Z\beta)(X,Y)=0 $$ for any $X,Y\in\Gamma(\Delta^*)$ and $Z\in\mathfrak{X}(M)$. Let $\nabla^*=(\nabla^{\perp},\nabla^{\perp})$ be the compatible connection in $N_fM\oplus N_fM$. Hence $$ 0=(\nabla^*_X\theta)(Z,Y)= \theta(Z,\nabla_X Y) $$ for any $X,Y\in\Gamma(\Delta^*)$ and $Z\in\mathfrak{X}(M)$. Thus $\Delta^*\subset\Delta$ is totally geodesic. \qed On an open subset of $M^n$ where $\nu^*>0$ is constant, consider the orthogonal splitting $TM=\Delta^*\oplus E$ and the tensor $C\colon \Gamma(\Delta^*)\times\Gamma(E)\to\Gamma(E)$ defined by $$ C(S,X)=C_S X=-(\nabla_X S)_E $$ where $S\in\Gamma(\Delta^*)$ and $X\in\Gamma(E)$. Since $\Delta^*\subset\Delta$ is totally geodesic, the Gauss equation gives $$ \nabla_TC_S=C_SC_T +C_{\nabla_T S} $$ for any $S,T\in\Gamma(\Delta^*)$. In particular, we have \begin{equation} \label{split} \frac{D}{dt}C_{\gamma'}=C_{\gamma'}^2 \end{equation} along a unit speed geodesic $\gamma$ contained in a leaf of $\Delta^*$. The next result provides a way to transport information along geodesics contained in leaves of the nullity of $\theta$. This technique has been widely used; see, for instance, \cite{DG}, \cite{FG} and \cite{Ji}. \begin{proposition}\label{completeness} Let $\nu^*>0$ be constant on an open subset $U\subset M^n$. 
If $\gamma\colon[0,b]\to M^n$ is a unit speed geodesic such that $\gamma([0,b))$ is contained in a leaf of $\Delta^*$ in $U$, then $\Delta^*(\gamma(b))=\mathcal{P}_0^b(\Delta^*(\gamma(0)))$ where $\mathcal{P}_0^t$ is the parallel transport along $\gamma$ from $\gamma(0)$ to $\gamma(t)$. In particular, we have $\nu^*(\gamma(b))=\nu^*(\gamma(0))$ and the tensor $C_{\gamma'}$ extends smoothly to $[0,b]$. \end{proposition} \proof We mimic the proof of Lemma $27$ in \cite{FG}. Let the tensor $J\colon E\to E$ be the solution in $[0,b)$ of $$ \frac{D}{dt}J+C_{\gamma'}\circ J=0 $$ with initial condition $J(0)=I$. We have from \eqref{split} that $D^2J/dt^2=0$, and hence $J$ extends smoothly to $\mathcal{P}_0^b(E(0))$ in $\gamma(b)$. Let $Y$ and $Z$ be parallel vector fields along $\gamma$ such that $Y(t)\in E(t)$ for each $t\in[0,b)$. Since $\gamma'\in\Delta^*$, it follows from \eqref{casicodazzi} that $$ (\nabla^*_{\gamma'}\theta)(JY,Z)=(\nabla^*_{JY}\theta)(\gamma',Z). $$ This and the definition of $J$ imply that $\theta(JY,Z)$ is parallel along $\gamma$. In particular $J$ is invertible in $[0,b]$. By continuity $\mathcal{P}_0^b(\Delta^*(\gamma(0)))\subset\Delta^*(\gamma(b))$, and since $Z(0)$ is arbitrary, then $\mathcal{P}_0^b(\Delta^*(\gamma(0)))=\Delta^*(\gamma(b))$. Finally we extend the tensor $C_{\gamma'}$ to $[0,b]$ as $-DJ/dt\circ J^{-1}$. \qed \begin{lemma}\label{compact} Let $f\colon M^n\to\mathbb{R}^{n+p}$, $p\leq 5$ and $n>2p$ be an isometric immersion of a compact Riemannian manifold and let $\tau$ be an infinitesimal bending of $f$. Then, at any $x\in M^n$ there is a pair of vectors $\zeta_1,\zeta_2\in N_fM(x)$ of unit length such that $(\zeta_1,\zeta_2)\in(\mathcal{S}(\theta))^\perp(x)$ where $$ \mathcal{S}(\theta)(x)=\mbox{span}\,\{\theta(X,Y):X,Y\in T_xM\}. $$ Moreover, on any connected component of an open dense subset of $M^n$ the pair $\zeta_1,\zeta_2$ at $x\in M^n$ extend to smooth vector fields $\zeta_1$ and $\zeta_2$ parallel along $\Delta^*$ that satisfy the same conditions. \end{lemma} \proof We claim that the subset of points $U$ of $M^n$ where there is no such a pair, that is, where the metric induced on $(\mathcal{S}(\theta))^\perp$ is positive or negative definite, is empty. It is not difficult to see that $U$ is open. From Lemma~\ref{main} we have $\nu^*>0$ in $U$. Let $V\subset U$ be the open subset where $\nu^*=\nu_0^*$ is minimal. Take $x_0\in V$ and a unit speed geodesic $\gamma$ in $M^n$ contained in a maximal leaf of $\Delta^*$ with $\gamma(0)=x_0$. Since $M^n$ is compact, there is $b>0$ such that $\gamma([0,b))\subset V$ and $\gamma(b)\notin V$. Proposition \ref{completeness} gives $\nu^*(\gamma(b))=\nu_0^*$ which implies $\gamma(b)\notin U$. Hence, there are unit vectors $\zeta_1,\zeta_2\in N_fM(\gamma(b))$ such that $(\zeta_1,\zeta_2)\in(\mathcal{S}(\theta))^\perp(\gamma(b))$. Let $\zeta_i(t)$ be the parallel transport along $\gamma$ of $\zeta_i$, $i=1,2$. Then $$ {\langle}\!{\langle}\theta(X,Y),(\zeta_1,\zeta_2){\rangle}\!{\rangle} ={\langle}(A_{\zeta_1-\zeta_2}+B_{\zeta_1+\zeta_2})X,Y{\rangle}. $$ It follows from \eqref{parte} and \eqref{casicodazzi} that \begin{equation} \label{der} (\nabla_T^*\theta)(X,Y)=(\nabla_X^*\theta)(T,Y) \end{equation} where $T\in\Gamma(\Delta^*)$ extends $\gamma'$ and $X,Y\in\mathcal{X}(M)$. 
Along $\gamma$ this gives $$ \frac{D}{dt}\mathcal{C}_{\zeta_1,\zeta_2} =\mathcal{C}_{\zeta_1,\zeta_2}C_{\gamma'} =C'_{\gamma'}\mathcal{C}_{\zeta_1,\zeta_2} $$ where $\mathcal{C}_{\zeta_1,\zeta_2} =A_{\zeta_1-\zeta_2}+B_{\zeta_1+\zeta_2}$ and $C'_{\gamma'}$ denotes the transpose of $C_{\gamma'}$. Moreover, by Proposition \ref{completeness} this ODE holds on $[0,b]$. Since $\mathcal{C}_{\zeta_1,\zeta_2}(\gamma(b))=0$, it follows that $\mathcal{C}_{\zeta_1,\zeta_2}$ vanishes along $\gamma$. This is a contradiction and proves the claim. We have from \eqref{der} that $$ (\nabla_T^*\theta)(X,Y) =-\theta(\nabla_XT,Y)\in\Gamma(\mathcal{S}(\theta)) $$ for any $T\in\Gamma(\Delta^*)$ and $X,Y\in\mathcal{X}(M)$. Thus $\mathcal{S}(\theta)$ is parallel along the leaves of $\Delta^*$. Let $U_0$ be a connected component of the open dense subset of $M^n$ where the dimension of $\Delta^*$, $\mathcal{S}(\theta), \mathcal{S}(\theta)\cap \mathcal{S}(\theta)^\perp$ and the index of the metric induced on $\mathcal{S}(\theta)^\perp\times\mathcal{S}(\theta)^\perp$ are all constant. Hence on $U_0$ the vector fields $\zeta_1,\zeta_2$ can be taken parallel along the leaves of $\Delta^*$. \qed For a hypersurface $f\colon M^n\to\mathbb{R}^{n+1}$ we have \begin{equation} \label{defB1} (\tilde\nabla_XL)Y=\<B_NX,Y\>N+f_*\mathcal{Y}(X,Y) \end{equation} where $N$ is a unit vector field normal to $f$. The next result follows from Theorem $13$ in \cite{DV} and was fundamental in \cite{Ji}. \begin{lemma}\label{triv} An infinitesimal bending $\tau$ of $f\colon M^n\to \mathbb{R}^{n+1}$ is trivial if and only if $B_N=0$. \end{lemma} \noindent \emph{Proof of Theorem \ref{maincompact}:} We assume that there is no open subset of $M^n$ where the index of relative nullity satisfies $\nu\geq n-1$. By Lemma \ref{compact}, on connected components of an open dense subset of $M^n$ there are $\zeta_1,\zeta_2\in\Gamma(N_fM)$ with $\|\zeta_1\|=\|\zeta_2\|=1$ parallel along the leaves of $\Delta^*$ and such that $$ {\langle}\!{\langle}\theta(X,Y),(\zeta_1,\zeta_2){\rangle}\!{\rangle}=0 $$ for any $X,Y\in \mathfrak{X}(M)$. It follows from \eqref{theta} that \eqref{almost} holds on connected components of an open dense subset of $M^n$. Let $U\subset M^n$ be an open subset where $\zeta_1, \zeta_2$ are smooth and $\zeta_1+\zeta_2\neq 0$. Thus $\tau|_U$ satisfies the condition~$(*)$. Let $\tilde{V}\subset U$ be an open subset where $\tau$ is a genuine infinitesimal bending. By Corollary~\ref{localgen} we have that $f$ is $(n-1)$-ruled on each connected component $V$ of an open dense subset of $\tilde{V}$. Since our goal is to show that $V$ is empty, we assume otherwise. Proposition \ref{nowhere} and the proof of Theorem \ref{local} yield that the rulings on $V$ are determined by the tangent subbundle $D=\ker\varphi_Y$ where $\varphi$ was given in Lemma \ref{varphi} and $Y\in RE(\varphi)$. Also from that proof $\dimIm(\varphi_Y)=2$ and therefore $Im(\varphi_Y)=R\oplus R$ where $N_fM=P\oplus R$ as in Lemma \ref{varphi}. Lemma~\ref{fbn} gives $$ \varphi_X(D)\subset Im(\varphi_Y)\cap Im(\varphi_Y)^\perp=\{0\} $$ for any $X\in\mathfrak{X}(M)$, that is, $D=\mathcal{N}(\varphi)$. In particular, from the definition of $\varphi$ it follows that $D\subset \mathcal{N}(\alpha_R)$. Hence, by dimension reasons either $\mathcal{N}(\alpha_R)=TM$ or $D=\mathcal{N}(\alpha_R)$. Next we contemplate both possibilities. Let $V_1\subset V$ be an open subset where $\mathcal{N}(\alpha_R)=TM$ holds, that is, $N_1=P$. 
Thus $N_1$ is parallel relative to the normal connection since, otherwise, the Codazzi equation gives $\nu=n-1$, and that has been ruled out. Hence $f|_{V_1}$ reduces codimension, that is, $f(V_1)$ is contained in an affine hyperplane $\mathbb{R}^{n+1}$. Decompose $\tau=\tau_1+\tau_2$ where $\tau_1$ and $\tau_2$ are tangent and normal to $\mathbb{R}^{n+1}$, respectively. It follows that $\tau_1$ is an infinitesimal bending of $f|_{V_1}$ in $\mathbb{R}^{n+1}$. Since $\tau$ satisfies the condition $(*)$, Lemma \ref{triv} gives that $\tau_1$ is trivial, that is, the restriction of a Killing vector field of $\mathbb{R}^{n+1}$ to $f(V_1)$. Extending $\tau_2$ as a vector field normal to $\mathbb{R}^{n+1}$ it follows that $\tau|_{V_1}$ extends in the singular sense and this is a contradiction. Let $V_2\subset V$ be an open subset where $D=\mathcal{N}(\alpha_R)$. By assumption $D\neq \Delta$. Let $\hat{D}$ be the distribution tangent to the rulings in a neighborhood $V_2'$ of $x_0\in V_2$. From Proposition \ref{nowhere} we have $D(x_0)=\hat{D}(x_0)$. Let $W\subset V_2'$ be an open subset where $D\neq\hat{D}$, that is, where $D$ is not totally geodesic. Then there are two transversal $(n-1)$-dimensional rulings passing through any point $y\in W$. It follows easily that $N_1=P$ on $W$. As above we obtain that $\tau|_W$ extends in the singular sense, leading to a contradiction. Let $V_3\subset V_2$ be the interior of the subset where $D$ is totally geodesic. On $V_3$ the Codazzi equation gives $$ \nabla^\perp_X\alpha(Z,Y)\in \Gamma(P) $$ for all $X,Y\in \Gamma(D)$ and $Z\in\mathcal{X}(M)$. Thus $R$ is parallel along $D$ relative to the normal connection. We have from Proposition 4 in \cite{DG} that $f$ admits a singular extension $$ F(x,t)=f(x)+t\lambda(x) $$ for $\lambda\in\Gamma(f_*TM\oplus P)$ as a flat hypersurface. Moreover, $F$ has $R$ as normal bundle and $\partial_t$ belongs to the relative nullity distribution. Then $(\tilde\nabla_X\lambda)_R=0$ for any $X\in\mathfrak{X}(V_3)$. Hence \eqref{requisito} is satisfied and thus $\tau|_{V_3}$ extends in the singular sense. This is a contradiction which shows that $V$ is empty, and hence so is $\tilde{V}$. It remains to consider the existence of an open subset $U'\subset M^n$ where $\zeta_1,\zeta_2$ are smooth and $\zeta_1+\zeta_2=0$. It follows from \eqref{almost} that $\zeta_1-\zeta_2\perp N_1$. Once more, we obtain that $f(U')\subset\mathbb{R}^{n+1}$. Thus, we have an orthogonal decomposition of $\tau|_{U'}$ as in part $(ii)$ of the statement and $\tau_1,\tau_2$ extend in the singular sense as follows: \begin{itemize} \item[(i)] $\bar{\tau}_1(x,t)=\tau_1(x)$ to $F\colon U\times\mathbb{R}\to\mathbb{R}^{n+2}$ where $F(x,t)=f(x)+te$. \item[(ii)] For instance locally as $\bar{\tau}_2(x,t)=\tau_2(x)$ to $F\colon U\times I\to\mathbb{R}^{n+2}$ where $F(x,t)=f(x)+tN$, with $N$ a unit normal field to $f|_U$ in $\mathbb{R}^{n+1}$.\qed \end{itemize} \begin{remarks}{\em $(1)$ In case $(ii)$ of Theorem \ref{maincompact}, if $\tau_1$ is trivial then $\tau_1$ and $\tau_2$ extend in the same direction, and hence $\tau$ also does. Therefore we are also in case $(i)$. \noindent $(2)$ Notice that for $p=2$ we have shown as part of the proof that an infinitesimal bending of a submanifold without flat points as in part $(ii)$ of Theorem \ref{local2} cannot be genuine. 
}\end{remarks} \section{Nonflat ambient spaces} In this section we argue for the following statement: \noindent \emph{Theorems \ref{local1}, \ref{local2} and \ref{local} hold if the Euclidean ambient space is replaced by a nonflat space form.} Let $f\colon M^n\to\mathbb{Q}^{n+p}_c$ be an isometric immersion, where $\mathbb{Q}^{n+p}_c$ denotes either the sphere $\mathbb{S}_c^{n+p}$ or the hyperbolic space $\mathbb{H}_c^{n+p}$ of sectional curvature $c\neq 0$. Then we say that $\tau\in\Gamma(f^*T\mathbb{Q}_c^{n+p})$ is an infinitesimal bending of $f$ if \eqref{infbend} is satisfied in terms of the connection in $\mathbb{Q}_c^{n+p}$. In this setting, that $f$ is \emph{$r$-ruled} means that there is an $r$-dimensional smooth totally geodesic distribution whose leaves are mapped by $f$ to open subsets of totally geodesic submanifolds of the ambient space $\mathbb{Q}^{n+p}_c$. In the sequel, for simplicity we also denote by $f$ the composition of the immersion with the umbilical inclusion of $\mathbb{Q}_c^{n+p}$ into $\mathbb{O}^{n+p+1}$, where $\mathbb{O}^{n+p+1}$ stands for either Euclidean or Lorentzian flat space depending on whether $c>0$ or $c<0$, respectively. Let $\tau$ be an infinitesimal bending of $f$ and let ${\cal F}\colon I\times M^n\to\mathbb{Q}_c^{n+p}$ be a smooth variation such that $f_t={\cal F}(t,\cdot)\colon M^n\to\mathbb{Q}_c^{n+p}$ satisfies $f_0=f$ and has $\tau$ as variational vector field. In this case we still have that \eqref{der metrica}, \eqref{der conex} and \eqref{der curv} hold. Also, as before, associated to $\tau$ we have the tensors $$ LX=\tilde\nabla_{X}\tau\;\;\mbox{and}\;\;B(X,Y)=(\tilde\nabla_XL)Y $$ where $X,Y\in\mathfrak{X}(M)$ and $\tilde\nabla$ denotes the connection in $\mathbb{Q}_c^{n+p}$. Now $$ B(X,Y)=f_*{\cal Y}(X,Y)+\beta(X,Y)+c\<f_*Y,\tau\>f_*X-c\<X,Y{\rangle}\tau $$ where the tensors ${\cal Y}\colon TM\times TM\to TM$ and $\beta\colon TM\times TM\to N_fM$ are the tangent and normal components of $\partial/\partial t|_{t=0}\alpha^t$, respectively, and $\alpha^t$ is the second fundamental form of $f_t$ as a submanifold in $\mathbb{Q}^{n+p}_c$. In particular, we have that \eqref{derGauss} holds. In this case, an infinitesimal bending of $f$ is said to satisfy the \emph{condition $(*)$} if there is $\eta\in\Gamma(N_fM)$ of unit length and $\xi\in\Gamma(R)$, where $R$ is determined by the orthogonal splitting $N_fM=P\oplus R$ and $P=\mbox{span}\{\eta\}$, such that $$ B_\eta+A_\xi+c{\langle}\tau,\eta{\rangle}I=0 $$ where $B_\eta={\langle}\beta,\eta{\rangle}$. The cone over an isometric immersion $f\colon M^n\to\mathbb{Q}^{n+p}_c$ is defined by \begin{align*} \hat{f}\colon\hat{M}^{n+1} =(0,\infty)\times &M^n\to\mathbb{O}^{n+p+1}\nonumber\\ (s,x)&\mapsto sf(x). \end{align*} Notice that $\partial_s$ lies in the relative nullity of $\hat{f}$ and that $N_{\hat{f}}\hat{M}$ is the parallel transport of $N_fM$ along the lines parametrized by $s$. Observe that if $c<0$, then the cone over $f$ is a Lorentzian submanifold of $\mathbb{L}^{n+p+1}$ and hence $N_{\hat f}\hat{M}$ has positive definite metric. If $\tau$ is an infinitesimal bending of $f$, it is easy to see that $\hat{\tau}(s,x)=s\tau(x)$ is an infinitesimal bending of $\hat{f}$ in $\mathbb{O}^{n+p+1}$, that is, $\hat{\tau}$ is a vector field that satisfies \eqref{infbend} with respect to the connection in $\mathbb{O}^{n+p+1}$. Moreover, if $\tau$ satisfies the condition $(*)$ then $\hat{\tau}$ satisfies the condition $(*)$ for the flat ambient space. 
Let $\hat{f}$ be the cone over an immersion $f$ in $\mathbb{Q}_c^{n+p}$. Notice that the parameter $s$ defines lines parallel to the position vector. Thus, if the map $\hat{f}+t\lambda$ is a singular extension of $\hat{f}$ for some vector field $\lambda$, then the intersection of its image with $\mathbb{Q}_c^{n+p}$ determines a singular extension of $f$. Consider the maps $$ \hat{F}(t,s,x)=\hat{f}(s,x)+t\lambda(s,x)\;\;\mbox{and}\;\; \hat{\tau}'(t,s,x)=\hat{\tau}(s,x)+t\bar{L}\lambda(s,x) $$ as in the proofs of Theorems \ref{local1} and \ref{local}. Notice that \begin{align*} {\langle}\hat{F}(t,s,x),\hat{\tau}'(t,s,x){\rangle} &={\langle}\hat{f}(s,x)+t\lambda(s,x),\hat{\tau}(s,x) +t\bar{L}\lambda(s,x){\rangle} \\ &=st\<f(x),\bar{L}\lambda{\rangle}+st{\langle}\lambda,\tau{\rangle}\\ &=0 \end{align*} where for the last equality we used $\hat{L}\partial_s=\tau(x)$. Then we have that $\hat{\tau}'$ is orthogonal to the position vector $\hat{F}$. From this we have that if $\hat{F}$ determines a singular extension of $\hat{f}$ then $\tau$ extends in the singular sense. As in the proofs of Theorems \ref{local1} and \ref{local}, if there is no $\lambda$ as above that determines a singular extension of $\hat{f}$ we conclude that $\hat{f}$ is ruled. Finally, observe that since $\hat{f}$ is the cone over $f$, these rulings determine rulings of $f$. \noindent Marcos Dajczer\\ IMPA -- Estrada Dona Castorina, 110\\ 22460--320, Rio de Janeiro -- Brazil\\ e-mail: [email protected] \noindent Miguel Ibieta Jimenez\\ IMPA -- Estrada Dona Castorina, 110\\ 22460--320, Rio de Janeiro -- Brazil\\ e-mail: [email protected] \end{document}
\begin{document} \begin{frontmatter} \title{Grammar Logics in Nested Sequent Calculus: Proof Theory and Decision Procedures} \author{Alwen Tiu} \address{Research School of Computer Science, The Australian National University} \author{Egor Ianovski} \address{Department of Computer Science, University of Auckland} \author{Rajeev Gor\'e} \address{Research School of Computer Science, The Australian National University} \begin{abstract} A grammar logic refers to an extension to the multi-modal logic K in which the modal axioms are generated from a formal grammar. We consider a proof theory, in nested sequent calculus, of grammar logics with converse, i.e., every modal operator $\obox{a}$ comes with a converse $\obox{a}^{-1}.$ Extending previous works on nested sequent systems for tense logics, we show all grammar logics (with or without converse) can be formalised in nested sequent calculi, where the axioms are internalised in the calculi as structural rules. Syntactic cut-elimination for these calculi is proved using a procedure similar to that for display logics. If the grammar is context-free, then one can get rid of all structural rules, in favor of deep inference and additional propagation rules. We give a novel semi-decision procedure for context-free grammar logics, using nested sequent calculus with deep inference, and show that, in the case where the given context-free grammar is regular, this procedure terminates. Unlike all other existing decision procedures for regular grammar logics in the literature, our procedure does not assume that a finite state automaton encoding the axioms is given. \end{abstract} \begin{keyword} Nested sequent calculus, display calculus, modal logics, deep inference. \end{keyword} \end{frontmatter} \section{Introduction} A grammar logic refers to an extension of the multi-modal logic K in which the modal axioms are generated from a formal grammar. Thus given a set $\Sigma$ of indices, and a grammar production rule as shown below left, where each $a_i$ and $b_j$ are in $\Sigma$, we extend K with the multi-modal axiom shown below right: $$a_1 a_2 \cdots a_l \rightarrow b_1 b_2 \cdots b_r \qquad\qquad\qquad \obox{a_1} \obox{a_2} \cdots \obox{a_l} A \supset\obox{b_1} \obox{b_2} \cdots \obox{b_r} A $$ The logic is a context-free grammar logic if $l=1$ and furthermore, is a right linear grammar logic if the production rules also define a right linear grammar. The logic is a regular grammar logic if the set of words generated from each $a \in \Sigma$ using the grammar production rules is a regular language. A right linear grammar logic is also a regular grammar logic since a right linear grammar can be converted to a finite automaton in polynomial time. Adding ``converse'' gives us alphabet symbols like $\bar{a}$ which correspond to the converse modality $\obox{\bar a}$ and lead to multi-modal extensions of tense logic Kt where each modality $\obox{a}$ and its converse $\obox{\bar a}$ obey the interaction axioms $ A \supset\obox{a}\odia{\bar a} A$ and $A \supset\obox{\bar a}\odia{a} A.$ Display calculi~\cite{Belnap82JPL} can handle grammar logics with converse since they all fall into the primitive fragment identified by Kracht~\cite{Kra96}. Display calculi all enjoy Belnap's general cut-elimination theorem, but it is well-known that they are not suitable for proof-search. Our work is motivated by the problem of automating proof search for display calculus. 
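Before going further, a small example may help to fix the terminology just introduced; it is purely illustrative and is not used in the formal development. The two production rules $a \rightarrow a\, a$ and $\bar{a} \rightarrow \bar{a}\, \bar{a}$ generate from each start symbol the languages $a^+$ and $\bar{a}^+$ respectively, so the resulting logic is a context-free and, in fact, regular grammar logic. The corresponding axioms are the transitivity-style schemes $$ \obox{a} A \supset\obox{a} \obox{a} A \qquad\qquad \obox{\bar{a}} A \supset\obox{\bar{a}} \obox{\bar{a}} A, $$ which, semantically, force the accessibility relation interpreting $\obox{a}$ (and that of its converse) to be transitive.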
As in our previous work~\cite{Gore09tableaux,Gore10AiML,Gore11LMCS}, we have chosen to work not directly in display calculus, but in a slightly different calculus based on nested sequents~\cite{kashima-cut-free-tense,Brunnler09AML}, which we call shallow nested sequent calculi. The syntactic constructs of nested sequents are closer to those of traditional sequent calculi, which allows us to use familiar notions from sequent calculus proof search procedures, such as the notions of saturation and loop checking, to automate proof search. A common feature of shallow nested sequent calculus and display calculus is the use of display postulates and other complex structural rules. These structural rules are the main obstacle to effective proof search, and our (proof theoretic) methodology for designing proof search calculi is guided by the problem of eliminating these structural rules entirely. We show here how our methodology can be used to derive proof search calculi for context-free grammar logics. The general satisfiability problem for a grammar logic is to decide the satisfiability of a given formula when given a set of production rules or when given an explicit finite state automaton (FSA) for the underlying grammar. Nguyen and Sza{\l}as~\cite{Nguyen11StudiaLogica} give an excellent summary of what is known about this problem, as outlined next. Grammar logics were introduced by del Cerro and Penttonen~\cite{del-Cerro-Penttonen}. Baldoni et al.~\cite{BaldoniGM98} used prefixed tableaux to show that this problem is decidable for right linear logics but is undecidable for context-free grammar logics. Demri~\cite{Demri01} used an embedding into propositional dynamic logic with converse to prove this problem is EXPTIME-complete for right linear logics. Demri and de Nivelle~\cite{Demri05} gave an embedding of the satisfiability problem for regular grammar logics into the two-variable guarded fragment of first-order logic and showed that satisfiability of regular grammar logics with converse is also EXPTIME-complete. Seen as description logics with inverse roles and complex role inclusions, decision procedures for regular grammar logics have also been studied extensively by Horrocks et al.; see, e.g., \cite{Horrocks04AI,Horrocks06KR,Kazakov08KR}. Gor\'e and Nguyen~\cite{GoreN05} gave an EXPTIME tableau decision procedure for the satisfiability of regular grammar logics using formulae labelled with automata states. Finally, Nguyen and Sza{\l}as~\cite{Nguyen09CADE,Nguyen11StudiaLogica} gave an extension of this method to handle converse by using the cut rule. In an unpublished manuscript, Nguyen has shown how to use the techniques of Gor\'e and Widmann~\cite{GoreW10} to avoid the use of the cut rule. But as far as we know, there is no comprehensive sequent-style proof theory for grammar logics with converse which enjoys a syntactic cut-elimination theorem and which is amenable to proof-search. We consider a proof theory, in nested sequent calculus, of grammar logics with converse, i.e., every modal operator $\obox{a}$ comes with a converse $\obox{a}^{-1}.$ Extending previous works on nested sequent systems for (bi-)modal logics~\cite{Gore09tableaux,Gore11LMCS}, we show, in Section~\ref{sec:skm}, that all grammar logics (with or without converse) can be formalised in (shallow) nested sequent calculi, where the axioms are internalised in the calculi as structural rules. Syntactic cut-elimination for these calculi is proved using a procedure similar to that for display logics. 
We then show, in Section~\ref{sec:dkm}, that if the grammar is context-free, then one can get rid of all structural rules, in favor of deep inference and additional propagation rules. We then recast the problem of deciding grammar logics for the specific cases where the grammars are regular, using nested sequent calculus with deep inference. We first give, in Section~\ref{sec:auto-proc}, a decision procedure in the case where the regular grammar is given in the form of a FSA. This procedure is similar to existing tableaux-based decision procedures~\cite{Horrocks06KR,Nguyen09CADE,Nguyen11StudiaLogica}, where the states and transitions of the FSA are incorporated into proof rules for propagation of diamond-formulae. This procedure serves as a stepping stone to defining the more general decision procedure of Section~\ref{sec:grammar-proc}, which does not depend on an explicit representation of the axioms as a FSA. The procedure in Section~\ref{sec:grammar-proc} is actually a semi-decision procedure that works on any finite set of context-free grammar axioms. However, we show that, in the case where the given grammar is regular, this procedure terminates. The procedure avoids the requirement to provide a FSA for the given axioms. This is significantly different from existing decision procedures for regular grammar logics~\cite{Demri05,GoreN05,Nguyen11StudiaLogica,Nguyen09CADE}, where it is assumed that a FSA encoding the axioms of the logics is given. In this work, we follow Demri and de Nivelle's presentation of grammar axioms as a semi-Thue system~\cite{Demri05}. The problem of deciding whether a context-free semi-Thue system is regular or not appears to be still open; see \cite{Kazakov08KR} for a discussion on this matter. Termination of our generic procedure for regular grammar logics of course does not imply solvability of this problem as it is dependent on the assumption that the given grammar is regular (see Theorem~\ref{thm:grammar-proc-terminates}). \section{Grammar logics} \label{sec:logic} The language of a multi-modal logic is defined with respect to an alphabet $\Sigma$, used to index the modal operators. We use $a, b$ and $c$, possibly with subscripts, for elements of $\Sigma$ and use $u$ and $v$ for elements of $\Sigma^*$, the set of finite strings over $\Sigma$. We use $\epsilon$ for the empty string. We define an operation $\bar .$ (converse) on alphabets to capture converse modalities following Demri~\cite{Demri05}. The converse operation satisfies $\bar {\bar a} = a.$ We assume that $\Sigma$ can be partitioned into two distinct sets $\Sigma^+$ and $\Sigma^-$ such that $a \in \Sigma^+$ iff $\bar a \in \Sigma^{-}.$ The converse operation is extended to strings in $\Sigma^*$ as follows: if $u = a_1a_2 \ldots a_n$, then $ \bar u = \bar a_n \bar a_{n-1} \ldots \bar a_2 \bar a_1, $ where $n \geq 0.$ Note that if $u = \epsilon$ then $\bar u = \epsilon.$ We assume a given denumerable set of atomic formulae, ranged over by $p$, $q$, and $r.$ The language of formulae is given by the following, where $a \in \Sigma$: $$ A ::= p \mid \neg A \mid A \lor A \mid A \land A \mid \obox a A \mid \odia a A $$ Given a formula $A$, we write $\lneg A$ for the negation normal form (nnf) of $\neg A.$ Implication $A \impl B$ is defined as $\neg A \lor B.$ \begin{definition} A {\em $\Sigma$-frame} is a pair $\langle W, R\rangle$ of a non-empty set of worlds and a set of binary relations $\{ R_a \}_{a\in \Sigma}$ over $W$ satisfying, for every $a \in \Sigma$, $ R_a = \{(x,y) \mid R_{\bar a}(y,x) \}. 
$ A {\em valuation} $V$ is a mapping from propositional variables to sets of worlds. A {\em model} $\mathfrak{M}$ is a triple $\langle W,R,V\rangle$ where $\langle W,R \rangle$ is a frame and $V$ is a valuation. The relation $\models$ is defined inductively as follows: \begin{itemize} \item $\mathfrak{M}, x \models p$ iff $x \in V(p).$ \item $\mathfrak{M}, x \models \neg A$ iff $\mathfrak{M}, x \not \models A$. \item $\mathfrak{M}, x \models A \land B$ iff $\mathfrak{M}, x \models A$ and $\mathfrak{M}, x \models B.$ \item $\mathfrak{M}, x \models A \lor B$ iff $\mathfrak{M}, x \models A$ or $\mathfrak{M}, x \models B.$ \item For every $a \in \Sigma$, $\mathfrak{M}, x \models \obox a A$ iff for every $y$ such that $R_a(x,y)$, $\mathfrak{M}, y \models A.$ \item For every $a \in \Sigma$, $\mathfrak{M}, x \models \odia a A$ iff there exists $y$ such that $R_a(x,y)$ and $\mathfrak{M}, y \models A.$ \end{itemize} A formula $A$ is {\em satisfiable} iff there exists a $\Sigma$-model $\mathfrak{M} = \langle W, R, V \rangle$ and a world $x \in W$ such that $\mathfrak{M}, x \models A.$ \end{definition} We now define a class of multi-modal logics, given $\Sigma$, that is induced by {\em production rules} for strings from $\Sigma^*.$ We follow the framework in \cite{Demri05}, using semi-Thue systems to define the logics. A production rule is a pair of strings in $\Sigma^*$, interpreted as a rewrite rule on strings. We use the notation $u \rightarrow v$ to denote a production rule which rewrites $u$ to $v.$ A {\em semi-Thue} system is a set $S$ of production rules. It is {\em closed} if $u \rightarrow v \in S$ implies $\bar u \rightarrow \bar v \in S.$ Given a $\Sigma$-frame $\langle W, R\rangle$, we define another family of accessibility relations indexed by $\Sigma^*$ as follows: $R_\epsilon = \{(x,x) \mid x \in W\}$ and for every $u\in \Sigma^*$ and for every $a \in \Sigma$, $R_{ua} = \{(x,y) \mid (x,z) \in R_u, (z,y)\in R_a, \mbox{ for some $z \in W$} \}.$ \begin{definition} \label{def:S-frame} Let $u \rightarrow v$ be a production rule and let $\mathcal{F} = \langle W, R \rangle$ be a $\Sigma$-frame. $\mathcal{F}$ is said to satisfy $u \rightarrow v$ if $R_v \subseteq R_u.$ $\mathcal{F}$ satisfies a semi-Thue system $S$ if it satisfies every production rule in $S.$ \end{definition} \begin{definition} Let $S$ be a semi-Thue system. A formula $A$ is said to be {\em $S$-satisfiable} iff there is a model $\mathfrak{M} = \langle W , R , V\rangle$ such that $\langle W, R\rangle$ satisfies $S$ and $\mathfrak{M}, x \models A$ for some $x \in W.$ $A$ is said to be {\em $S$-valid} if for every $\Sigma$-model $\mathfrak{M} = \langle W, R, V\rangle$ that satisfies $S$, we have $\mathfrak{M}, x \models A$ for every $x \in W.$ \end{definition} Given a string $u = a_1 a_2 \ldots a_n$ and a formula $A$, we write $\odia u A$ for the formula $\odia {a_1} \odia {a_2} \cdots \odia{a_n} A.$ The notation $\obox u A$ is defined analogously. If $u = \epsilon$ then $\odia u A = \obox u A = A.$ \begin{definition} Let $S$ be a closed semi-Thue system over an alphabet $\Sigma$. The system $\mathrm{Km}(S)$ is an extension of the standard Hilbert system for multi-modal $\mathrm{Km}$ (see, e.g., \cite{blackburn07handbook}) with the following axioms: \begin{itemize} \item for each $a \in \Sigma$, a {\em residuation axiom:} $ A \supset\obox a \odia {\bar a} A $ \item and for each $u \rightarrow v \in S$, an axiom $ \obox u A \supset\obox v A. 
$ \end{itemize} \end{definition} Note that because we assume that $S$ is closed, each axiom $\obox u A \supset\obox v A$ has an {\em inverted version} $\obox {\bar u} A \supset\obox {\bar v} A.$ The following theorem can be proved by adapting the usual soundness and completeness arguments for Hilbert systems for modal logics (see, e.g., \cite{blackburn07handbook}). \begin{theorem} \label{thm:Km-S} A formula $F$ is $S$-valid iff $F$ is provable in $\mathrm{Km}(S).$ \end{theorem} \section{Nested sequent calculi with shallow inference} \label{sec:skm} We now give a sequent calculus for $\mathrm{Km}(S)$, by using the framework of nested sequent calculus~\cite{kashima-cut-free-tense,Brunnler09AML,Gore09tableaux,Gore11LMCS}. We follow the notation used in \cite{kashima-cut-free-tense,Gore11LMCS}, extended to the multi-modal case. From this section onwards, we shall be concerned only with formulae in nnf, so we can restrict to one-sided sequents. A nested sequent is a multiset of the form shown below at left $$ A_1,\dots,A_m, \seq {a_1} {\Delta_1} ,\dots, \seq {a_n} {\Delta_n} \quad\quad\quad A_1 \lor \cdots \lor A_m \lor \obox {a_1} B_1 \lor \cdots \lor \obox{a_n} B_n $$ where each $A_i$ is a formula and each $\Delta_i$ is a nested sequent. The structural connective $\seq {a} {.}$ is a proxy for the modality $\obox {a}$, so this nested sequent can be interpreted as the formula shown above right (modulo associativity and commutativity of $\lor$), where each $B_i$ is the interpretation of $\Delta_i.$ We shall write $\seq u \Delta$, where $u = a_1 \cdots a_n \in \Sigma^*$, to denote the structure: $$ \seq {a_1} {\seq {a_2} {\cdots \seq {a_n} \Delta} \cdots }. $$ A {\em context} is a nested sequent with a `hole' $[~]$ in place of a formula: this notation should not be confused with the modality $\obox a$. We use $\Gamma[~]$, $\Delta[~]$, etc.\ for contexts. Given a context $\Gamma[~]$ and a nested sequent $\Delta$, we write $\Gamma[\Delta]$ to denote the nested sequent obtained by replacing the hole in $\Gamma[~]$ with $\Delta.$ The core inference rules for multi-modal $\mathrm{SKm}$ (without axioms) are given in Figure~\ref{fig:SKm}. The rule $r$ is called a {\em residuation rule} (or display postulate) and corresponds to the residuation axioms. \begin{figure} \caption{The inference rules of $\mathrm{SKm}$} \label{fig:SKm} \end{figure} To capture $\mathrm{Km}(S)$, we need to convert each axiom generated from $S$ to an inference rule. Each production rule $u \rightarrow v$ gives rise to the axiom $\obox u A \supset\obox v A$, or equivalently, $\odia {\bar v} A \supset\odia {\bar u} A.$ The latter is an instance of Kracht's {\em primitive axioms}~\cite{Kra96} (generalised to the multimodal case). Thus, we can convert the axiom into a structural rule following Kracht's rule scheme for primitive axioms: $$\infer[]{\seq v {\Delta}, \Gamma}{\seq u {\Delta}, \Gamma}$$ Let $\rho(S)$ be the set of structural rules induced by the semi-Thue system $S.$ \begin{definition} Let $S$ be a closed semi-Thue system over an alphabet $\Sigma$. $\mathrm{SKm}(S)$ is the proof system obtained by extending $\mathrm{SKm}$ with $\rho(S).$ \end{definition} We say that two proof systems are equivalent if and only if they prove the same set of formulae. \begin{theorem} \label{thm:SKm-Km-equiv} The systems $\mathrm{SKm}(S)$ and $\mathrm{Km}(S)$ are equivalent. 
\end{theorem} The cut-elimination proof for $\mathrm{SKm}(S)$ follows a similar generic procedure for display calculi~\cite{Belnap82JPL,Kra96}, which has been adapted to nested sequent in~\cite{Gore11LMCS}. The key to cut-elimination is to show that $\mathrm{SKm}(S)$ has the {\em display property}. \begin{lemma} \label{lm:display} Let $\Gamma[\Delta]$ be a nested sequent. Then there exists a nested sequent $\Gamma'$ such that $\Gamma[\Delta]$ is derivable from the nested sequent $\Gamma',\Delta$, and vice versa, using only the residuation rule $r.$ \end{lemma} \begin{theorem} Cut elimination holds for $\mathrm{SKm}(S).$ \end{theorem} \begin{proof} This is a straightforward adaptation of the cut-elimination proof in \cite{Gore11LMCS} for tense logic. \end{proof} \section{Deep inference calculi} \label{sec:dkm} Although the shallow system $\mathrm{SKm}(S)$ enjoys cut-elimination, proof search in its cut-free fragment is difficult to automate, due to the presence of structural rules. To reduce the non-determinism caused by structural rules, we consider next a proof system in which all structural rules (including those induced by grammar axioms) can be absorbed into logical rules. As the display property in Lemma~\ref{lm:display} suggests, the residuation rule allows one to essentially apply an inference rule to a particular subsequent nested inside a nested sequent, by displaying that subsequent to the top and undisplaying it back to its original position in the nested sequent. It is therefore quite intuitive that one way to get rid of the residuation rule is to allow {\em deep inference} rules, that apply deeply within any arbitrary context in a nested sequent. The deep inference system $\mathrm{DKm}$, which corresponds to $\mathrm{SKm}$, is given in Figure~\ref{fig:DKm}. As can be readily seen, the residuation rule is absent and contraction and weakening are absorbed into logical rules. \begin{figure} \caption{The inference rules of $\mathrm{DKm}$} \label{fig:DKm} \end{figure} To fully absorb the residuation rule, and other structural rules induced by production rules, we need to modify the introduction rules for diamond-formulae. Their introduction rules will be dependent on what axioms one assumes. We refer to these introduction rules for diamond-formulae as {\em propagation rules}. This will be explained shortly, but first we need to define a couple of notions needed to define propagation rules. Let $S$ be a closed semi-Thue system over alphabet $\Sigma$. We write $u \Rightarrow_S v$ to mean that the string $v$ can be reached from $u$ by applying the production rules (as rewrite rules) in $S$ successively to $u.$ Define $L_a(S) = \{ u \mid a \Rightarrow_S u \}.$ Then $L_a(S)$ defines a language generated from $S$ with the start symbol $a.$ A nested sequent can be seen as a tree whose nodes are multisets of formulae, and whose edges are labeled with elements of $\Sigma.$ We assume that each node in a nested sequent can be identified uniquely, i.e., we can consider each node as labeled with a unique position identifier. An internal node of a nested sequent is a node which is not a leaf node. We write $\Gamma[~]_i$ to denote a context in which the hole is located in the node at position $i$ in the tree representing $\Gamma[~].$ This generalises to multi-contexts, so $\Gamma[~]_i[~]_j$ denotes a two-hole context, one hole located at $i$ and the other at $j$ (they can be the same location). 
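For a concrete instance of this notation (the example is purely illustrative), consider the nested sequent $$ \Gamma = p,\ \odia a q,\ \seq a {\lneg q, \seq b {r}}. $$ Its tree has a root node, say $1$, containing $p$ and $\odia a q$, an $a$-child $2$ containing $\lneg q$, and a $b$-child $3$ of node $2$ containing $r$; as a formula, $\Gamma$ reads $p \lor \odia a q \lor \obox a (\lneg q \lor \obox b r).$ Replacing the formula $r$ by a hole yields the context $\Gamma[~]_3 = p,\ \odia a q,\ \seq a {\lneg q, \seq b {[~]}}$, and filling the hole again gives $\Gamma[r]_3 = \Gamma.$ We return to this sequent below when propagation automata are introduced.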
From now on, we shall often identify a nested sequent with its tree representation, so when we speak of a node in $\Gamma$, we mean a node in the tree of $\Gamma.$ If $i$ and $j$ are nodes in $\Gamma$, we write $i \next{a} j$ when $j$ is a child node of $i$ and the edge from $i$ to $j$ is labeled with $a.$ If $i$ is a node in the tree of $\Gamma$, we write $\Gamma|i$ to denote the multiset of formulae occurring in the node $i.$ Let $\Delta$ and $\Gamma$ be nested sequents. Suppose $i$ is a node in $\Gamma$. Then we write $\Gamma(i \ll \Delta)$ for the nested sequent obtained from $\Gamma$ by adding $\Delta$ to node $i$ in $\Gamma.$ Note that for such an addition to preserve the uniqueness of the position identifiers of the resulting tree, we need to rename the identifiers in $\Delta$ to avoid clashes. We shall assume implicitly that such a renaming is carried out when we perform this addition. \begin{definition}[Propagation automaton.] A propagation automaton is a finite state automaton $\mathcal{P} = (\Sigma,Q, I, F, \delta)$ where $Q$ is a finite set of states, $I = \{s\}$ is a singleton set of initial states and $F = \{t\}$ is a singleton set of final states with $s,t \in Q$, and for every $i,j \in Q$, if $i \trans{a} j \in \delta$ then $j \trans{\bar a} i \in \delta.$ \end{definition} In other words, a propagation automaton is just a finite state automaton (FSA) where each transition has a dual transition. \begin{definition} Let $\mathcal{A} = (\Sigma, Q, I, F, \delta)$ be a FSA. Let $\vec i = i_1,\dots,i_n$ and $\vec j = j_1,\dots,j_n$ be two sequences of states in $Q.$ Let $[i_1 \mathrel{{\mathrel{\mathop:}=}} j_1,\dots, i_n \mathrel{{\mathrel{\mathop:}=}} j_n]$ (we shall abbreviate this as $[\vec i \mathrel{{\mathrel{\mathop:}=}} \vec j]$) be a (postfix) mapping from $Q$ to $Q$ that maps $i_m$ to $j_m$, where $1 \leq m \leq n$, and is the identity map otherwise. This mapping is extended to a (postfix) mapping between sets of states as follows: given $Q' \subseteq Q$, $Q'[\vec i \mathrel{{\mathrel{\mathop:}=}} \vec j] = \{k[\vec i \mathrel{{\mathrel{\mathop:}=}} \vec j] \mid k \in Q'\}.$ The automaton $\mathcal{A}[\vec i \mathrel{{\mathrel{\mathop:}=}} \vec j]$ is the tuple $(\Sigma, Q[\vec i \mathrel{{\mathrel{\mathop:}=}} \vec j], I[\vec i \mathrel{{\mathrel{\mathop:}=}} \vec j], F[\vec i \mathrel{{\mathrel{\mathop:}=}} \vec j], \delta')$ where $$ \delta' = \{k[\vec i \mathrel{{\mathrel{\mathop:}=}} \vec j] \trans{a} l[\vec i \mathrel{{\mathrel{\mathop:}=}} \vec j] \mid k \trans{a} l \in \delta \}. $$ \end{definition} To each nested sequent $\Gamma$, and nodes $i$ and $j$ in $\Gamma$, we associate a propagation automaton $\mathcal{R}(\Gamma,i,j)$ as follows: \begin{enumerate} \item the states of $\mathcal{R}(\Gamma,i,j)$ are the nodes of (the tree of) $\Gamma$; \item $i$ is the initial state of $\mathcal{R}(\Gamma,i,j)$ and $j$ is its final state; \item each edge $x \next{a} y$ in $\Gamma$ corresponds to two transitions in $\mathcal{R}(\Gamma,i,j)$: $ x \trans{a} y$ and $y \trans{\bar a} x.$ \end{enumerate} Note that although propagation automata are defined for nested sequents, they can be similarly defined for (multi-)contexts as well, as contexts are just sequents containing a special symbol $[~]$ denoting a hole. So in the following, we shall often treat a context as though it is a nested sequent. 
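As an illustration, take again the nested sequent $\Gamma = p,\ \odia a q,\ \seq a {\lneg q, \seq b {r}}$ with root $1$, $a$-child $2$ and $b$-child $3$ of node $2$. The propagation automaton $\mathcal{R}(\Gamma,1,3)$ has states $\{1,2,3\}$, initial state $1$, final state $3$, and transitions $1 \trans{a} 2$, $2 \trans{\bar a} 1$, $2 \trans{b} 3$ and $3 \trans{\bar b} 2$. It accepts, for instance, the strings $ab$, $a \bar a a b$ and $a b \bar b b$: precisely the words spelled out by walks in the tree of $\Gamma$ from node $1$ to node $3$.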
A semi-Thue system $S$ over alphabet $\Sigma$ is {\em context-free} if its production rules are all of the form $a \to u$ for some $a \in \Sigma.$ In the following, to simplify presentation, we shall use the same notation to refer to an automaton $\mathcal{A}$ and the regular language it accepts. Given a context-free closed semi-Thue system $S$, the {\em propagation rules for $S$} are all the rules of the following form where $i$ and $j$ are two (not necessarily distinct) nodes of $\Gamma$: $$ \infer[p_S, \mbox{ provided } {\mathcal{R}(\Gamma[~]_i[~]_j, i, j) \cap L_a(S) \not = \emptyset.}] {\Gamma[\odia a A]_i[\emptyset]_j} {\Gamma[\odia a A]_i[A]_j} $$ Note that the intersection of a regular language and a context-free language is a context-free language (see, e.g., Chapter 3 in \cite{ginsburg} for a construction of the intersection), and since the emptiness checking for context-free languages is decidable~\cite{ginsburg}, the rule $p_S$ can be effectively mechanised. \begin{definition} Given a context-free closed semi-Thue system $S$ over an alphabet $\Sigma$, the proof system $\mathrm{DKm}(S)$ is obtained by extending $\mathrm{DKm}$ with $p_S.$ \end{definition} We now show that $\mathrm{DKm}(S)$ is equivalent to $\mathrm{SKm}(S).$ The proof relies on a series of lemmas showing admissibility of all structural rules of $\mathrm{SKm}(S)$ in $\mathrm{DKm}(S).$ The proof follows the same outline as in the case for tense logic~\cite{Gore11LMCS}. The adaptation of the proof in \cite{Gore11LMCS} is quite straightforward, so we shall not go into detailed proofs but instead just outline the required lemmas. Some of their proofs are outlined in the appendix. In the following lemmas, we shall assume that $S$ is a closed context-free semi-Thue system over some $\Sigma.$ Given a derivation $\Pi$, we denote with $|\Pi|$ the {\em height} of $\Pi$, i.e., the length (i.e., the number of edges) of the longest branch in $\Pi.$ A rule $\rho$ is said to be {\em admissible} in $\mathrm{DKm}(S)$ if provability of its premise(s) in $\mathrm{DKm}(S)$ implies provability of its conclusion in $\mathrm{DKm}(S).$ It is {\em height-preserving admissible} if whenever the premise has a derivation then the conclusion has a derivation of the same height, in $\mathrm{DKm}(S).$ Admissibility of the weakening rule is a consequence of the following lemma. \begin{lemma} \label{lm:weak} Let $\Pi$ be a derivation of $\Gamma[\emptyset]$ in $\mathrm{DKm}(S).$ Then there exists a derivation $\Pi'$ of $\Gamma[\Delta]$ in $\mathrm{DKm}(S)$ such that $|\Pi| = |\Pi'|.$ \end{lemma} The admissibility proofs of the remaining structural rules all follow the same pattern: the most important property to prove is that, if a propagation path for a diamond formula exists between two nodes in the premise, then there exists a propagation path for the same formula, between the same nodes, in the conclusion of the rule. \begin{lemma} \label{lm:adm-r} The rule $r$ is height-preserving admissible in $\mathrm{DKm}(S).$ \end{lemma} Admissibility of contraction is proved indirectly by showing that it can be replaced by a formula contraction rule and a distributivity rule: $$ \infer[\mathit{actr}] {\Gamma[A]} {\Gamma[A,A]} \qquad \infer[\mathit{m}] {\Gamma[\seq a {\Delta_1,\Delta_2}]} {\Gamma[\seq a {\Delta_1}, \seq a {\Delta_2}]} $$ The rule $m$ is also called a {\em medial} rule and is typically used to show admissibility of contraction in deep inference~\cite{Brunnler01LPAR}. 
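To illustrate the intended reduction in a simple case (the instance is ours, for orientation only): a structural contraction of two copies of $\seq a {p}$ into one is simulated by noting that a derivation of $\Gamma[\seq a {p}]$ is obtained from one of $\Gamma[\seq a {p,p}]$ by $\mathit{actr}$, and the latter from one of $\Gamma[\seq a {p}, \seq a {p}]$ by $m$; for deeper structures one iterates this two-step reduction from the outside in, which is exactly the pattern of the admissibility argument below.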
\begin{lemma} The rule $\mathit{ctr}$ is admissible in $\mathrm{DKm}(S)$ plus $\mathit{actr}$ and $m.$ \end{lemma} \begin{lemma} \label{lm:adm-medial} The rules $\mathit{actr}$ and $m$ are height-preserving admissible in $\mathrm{DKm}(S)$. \end{lemma} Admissibility of contraction then follows immediately. \begin{lemma} The contraction rule $\mathit{ctr}$ is admissible in $\mathrm{DKm}(S).$ \end{lemma} \begin{lemma} \label{lm:adm-rho-S} The structural rules $\rho(S)$ of $\mathrm{SKm}(S)$ are height-preserving admissible in $\mathrm{DKm}(S).$ \end{lemma} \begin{theorem} \label{thm:SKm-DKm-equiv} For every context-free closed semi-Thue system $S$, the proof systems $\mathrm{SKm}(S)$ and $\mathrm{DKm}(S)$ are equivalent. \end{theorem} \section{Regular grammar logics} A context free semi-Thue system $S$ over $\Sigma$ is regular if for every $a \in \Sigma$, the language $L_a(S)$ is a regular language. In this section, we consider logics generated by regular closed semi-Thue systems. We assume in this case that the union of the regular languages $\{L_a(S) \mid a \in \Sigma \}$ is represented explicitly as an FSA $\mathcal{A}$ with no silent transitions. Thus $ \mathcal{A} = (\Sigma, Q, I, F, \delta) $ where $Q$ is a finite set of states, $I \subseteq Q$ is the set of initial states, $F \subseteq Q$ is the set of final states, and $\delta$ is the transition relation. Given $\mathcal{A}$ as above, we write $s \trans{a}_\mathcal{A} t$ to mean $s \trans{a} t \in \delta.$ We further assume that each $a \in \Sigma$ has a unique initial state $init_a \in I.$ We shall now define an alternative deep inference system given this explicit representation of the grammar axioms as an FSA. Following similar tableaux systems in the literature that utilise such an automaton representation~\cite{Horrocks06KR,Nguyen09CADE,Nguyen11StudiaLogica}, we use the states of the FSA to index formulae in a nested sequent to record stages of a propagation. For this, we first introduce a form of labeled formula, written $s : A$, where $s \in Q.$ The propagation rules corresponding to $\mathcal{A}$ are: $$ \infer[i] {\Gamma[\odia a A]} { \Gamma[\odia a A, init_a : A] } \qquad\qquad\qquad \infer[t\!\uparrow, \hbox{ if } s \trans{a}_\mathcal{A} s'] {\Gamma[s : A, \seq a {\Delta}]} { \Gamma[s : A, \seq a {s' : A, \Delta}] } $$ $$ \infer[f, ~ \hbox{if } s \in F] {\Gamma[s : A]} {\Gamma[s : A, A]} \qquad\qquad\qquad \infer[t\!\downarrow, \hbox{ if } s \trans{\bar a}_\mathcal{A} s'. ] {\Gamma[\seq a {s: A, \Delta}]} {\Gamma[\seq a {s: A, \Delta}, s':A]} $$ \begin{definition} Let $S$ be a regular closed semi-Thue system over $\Sigma$ and let $\mathcal{A}$ be an FSA representing the regular language generated by $S$ and $\Sigma.$ $\mathrm{DKm}(\mathcal{A})$ is the proof system $\mathrm{DKm}$ extended with the rules $\{i, f, t\!\downarrow, t\!\uparrow \}$ for $\mathcal{A}.$ \end{definition} It is intuitively clear that $\mathrm{DKm}(\mathcal{A})$ and $\mathrm{DKm}(S)$ are equivalent, when $\mathcal{A}$ defines the same language as $L(S).$ Essentially, a propagation rule in $\mathrm{DKm}(S)$ can be simulated by $\mathrm{DKm}(\mathcal{A})$ using one or more propagations of labeled formulae. 
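
Operationally, the four rules above induce a saturation on labelled formulae. The sketch below (ours; the encodings of formulae and the function names are illustrative assumptions, not the paper's) computes this closure over a nested sequent whose nodes carry sets of formulae; it is this kind of stepwise propagation that simulates a single application of $p_S$.
\begin{verbatim}
# A sketch (ours) of the saturation induced by the rules i, t-up, t-down and f.
# Illustrative encodings: a diamond formula (dia a A) is ('dia', a, A); a
# labelled formula s : A is ('lab', s, A); contents maps each node to its set
# of formulae; edges is a set of tree edges (parent, a, child); init maps each
# letter a to init_a; trans is the transition relation of the FSA A; final is
# its set of final states; the dual of a label 'a' is written 'a-'.

def bar(a):
    return a[:-1] if a.endswith('-') else a + '-'

def propagate(contents, edges, init, trans, final):
    changed = True
    while changed:
        changed = False
        def add(node, formula):
            nonlocal changed
            if formula not in contents[node]:
                contents[node].add(formula)
                changed = True
        for node in list(contents):
            for f in list(contents[node]):
                if f[0] == 'dia':                    # rule i
                    add(node, ('lab', init[f[1]], f[2]))
                elif f[0] == 'lab':
                    _, s, body = f
                    if s in final:                   # rule f
                        add(node, body)
                    for (x, a, y) in edges:
                        if x == node:                # rule t-up, along x --a--> y
                            for (s1, b, s2) in trans:
                                if s1 == s and b == a:
                                    add(y, ('lab', s2, body))
                        if y == node:                # rule t-down, against x --a--> y
                            for (s1, b, s2) in trans:
                                if s1 == s and b == bar(a):
                                    add(x, ('lab', s2, body))
    return contents
\end{verbatim}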
The other direction follows from the fact that when a diamond formula $\odia a A$ is propagated, via the use of labeled formulae, to a labeled formula $s : A$ where $s$ is a final state, then there must be a chain of transitions between labeled formulae for $A$ whose string is accepted by $\mathcal{A}$ (starting from $init_a$), and hence belongs to $L_a(S).$ One can then propagate $\odia a A$ directly in $\mathrm{DKm}(S).$
\begin{theorem} \label{thm:DKm-A-eq-S}
Let $S$ be a regular closed semi-Thue system over $\Sigma$ and let $\mathcal{A}$ be an FSA representing the regular language generated by $S$ and $\Sigma.$ Then $\mathrm{DKm}(S)$ and $\mathrm{DKm}(\mathcal{A})$ are equivalent.
\end{theorem}
\section{Decision procedures}
We now show how the proof systems $\mathrm{DKm}(\mathcal{A})$ and $\mathrm{DKm}(S)$ can be turned into decision procedures for regular grammar logics. Our aim is to derive the decision procedure for $\mathrm{DKm}(S)$ directly without the need to convert $S$ explicitly to an automaton; the decision procedure for $\mathrm{DKm}(\mathcal{A})$ will serve as a stepping stone towards this aim. The decision procedure for $\mathrm{DKm}(S)$ is a departure from all existing decision procedures for regular grammar logics (with or without converse)~\cite{Horrocks06KR,Demri05,GoreN05,Nguyen09CADE,Nguyen11StudiaLogica} that assume that an FSA representing $S$ is given.
\subsection{An automata-based procedure} \label{sec:auto-proc}
The decision procedure for $\mathrm{DKm}(\mathcal{A})$ is basically just backward proof search, where one tries to saturate each sequent in the tree of sequents until either the $\mathit{id}_d$ rule is applicable, or a certain stable state is reached. When the latter is reached, we show that a countermodel to the original nested sequent can be constructed. Although we obtain this procedure via a different route, the end result is very similar to the tableaux-based decision procedure in \cite{Horrocks06KR}. In particular, our notion of a stable state (see the definition of $\mathcal{A}$-stability below) used to block proof search is the same as the blocking condition in tableaux systems~\cite{Horrocks06KR,Demri05,GoreN05,Nguyen11StudiaLogica,Nguyen09CADE}, which takes advantage of the labeling of formulae with the states of the automaton.
\begin{figure} \caption{An automata-based prove procedure.} \label{fig:prove1} \end{figure}
\begin{definition}[Saturation and realisation]
A node $i$ in $\Gamma$ is {\em saturated} if the following hold:
\begin{enumerate}
\item If $A \in \Gamma|i$ then $\lneg A \not \in \Gamma|i.$
\item If $A \lor B \in \Gamma|i$ then $A \in \Gamma|i$ and $B \in \Gamma|i$.
\item If $A \land B \in \Gamma|i$ then $A \in \Gamma|i$ or $B \in \Gamma|i$.
\end{enumerate}
$\Gamma|i$ is {\em realised} if $\obox a A \in \Gamma|i$ implies that there exists $j$ such that $i \next{a} j$ and $A \in \Gamma|j.$
\end{definition}
\begin{definition}[$\mathcal{A}$-propagation]
Let $\mathcal{A} = (\Sigma, Q, I, F, \delta)$.
A nested sequent $\Gamma$ is said to be {\em $\mathcal{A}$-propagated} if for every node $i$ in $\Gamma$, the following hold:
\begin{enumerate}
\item If $\odia a A \in \Gamma|i$ then $init_a : A \in \Gamma|i$ for any $a \in \Sigma.$
\item If $s : A \in \Gamma|i$ and $s \in F$, then $A \in \Gamma|i.$
\item For all $j$, $a$, $s$ and $t$, such that $i \next{a} j$ and $s \trans{a}_\mathcal{A} t$, if $s : A \in \Gamma|i$ then $t : A \in \Gamma|j.$
\item For all $j$, $a$, $s$ and $t$, such that $j \next{a} i$ and $s \trans{\bar a}_\mathcal{A} t$, if $s : A \in \Gamma|i$ then $t : A \in \Gamma|j.$
\end{enumerate}
\end{definition}
\begin{definition}[$\mathcal{A}$-stability]
A nested sequent $\Gamma$ is {\em $\mathcal{A}$-stable} if
\begin{enumerate}
\item Every node is saturated.
\item $\Gamma$ is $\mathcal{A}$-propagated.
\item Every internal node is realised.
\item For every leaf node $i$, one of the following holds:
\begin{enumerate}
\item There is an ancestor node $j$ of $i$ such that $\Gamma|i = \Gamma|j.$ We call the node $i$ a {\em loop node}.
\item $\Gamma|i$ is realised (i.e., it contains no formula of the form $\obox a A$).
\end{enumerate}
\end{enumerate}
\end{definition}
The prove procedure for $\mathrm{DKm}(\mathcal{A})$ is given in Figure~\ref{fig:prove1}. We show that the procedure is sound and complete with respect to $\mathrm{DKm}(\mathcal{A})$. The proofs of the following theorems can be found in the appendix.
\begin{theorem} \label{thm:auto correctness}
If $Prove_1(\mathcal{A},\{F\})$ returns $\top$ then $F$ is provable in $\mathrm{DKm}(\mathcal{A}).$ If $Prove_1(\mathcal{A},\{F\})$ returns $\bot$ then $F$ is not provable in $\mathrm{DKm}(\mathcal{A}).$
\end{theorem}
\begin{theorem} \label{thm:auto-proc-terminates}
For every nested formula $A$, $Prove_1(\mathcal{A}, \{A\})$ terminates.
\end{theorem}
\begin{corollary}
The proof system $\mathrm{DKm}(\mathcal{A})$ is decidable.
\end{corollary}
\subsection{A grammar-based procedure} \label{sec:grammar-proc}
The grammar-based procedure differs from the automaton-based procedure in the notion of propagation and that of a stable nested sequent. In the following, given a function $\theta$ from labels to labels, and a list $\vec i = i_1,\dots,i_n$ of labels, we write $\theta(\vec i)$ to denote the list $\theta(i_1),\dots,\theta(i_n).$ We write $[\vec i \mathrel{{\mathrel{\mathop:}=}} \theta(\vec i)]$ to mean the mapping $[i_1 \mathrel{{\mathrel{\mathop:}=}} \theta(i_1), \ldots, i_n \mathrel{{\mathrel{\mathop:}=}} \theta(i_n)].$ In the following definitions, $S$ is assumed to be a context-free semi-Thue system over some alphabet $\Sigma.$
\begin{definition}[$S$-propagation]
Let $\Gamma$ be a nested sequent. Let $\mathcal{P} = (\Sigma,Q,\{i\},\{j\},\delta)$ be a propagation automaton, where $Q$ is a subset of the nodes in $\Gamma.$ We say that $\Gamma$ is {\em $(S,\mathcal{P})$-propagated} if the following holds: $\odia a A \in \Gamma|i$ and $\mathcal{P} \cap L_a(S) \not = \emptyset$ imply $A \in \Gamma|j.$ $\Gamma$ is {\em $S$-propagated} if it is $(S, \mathcal{R}(\Gamma,i,j))$-propagated for all nodes $i$ and $j$ in $\Gamma$.
\end{definition}
\begin{definition}[$S$-stability]\label{def:S-stable}
A nested sequent $\Gamma$ is {\em $S$-stable} if
\begin{enumerate}
\item Every node is saturated.
\item $\Gamma$ is $S$-propagated.
\item Every internal node is realised.
\item Let $\vec x = x_1,\dots,x_n$ be the list of all unrealised leaf nodes.
There is a function $\lambda$ assigning each unrealised leaf node $x_m$ to an ancestor $\lambda(x_m)$ of $x_m$ such that $\Gamma|x_m = \Gamma|\lambda(x_m)$ and for every node $y$ and $z$, $\Gamma$ is $(S,\mathcal{P})$-propagated, where $ \mathcal{P} = \mathcal{R}(\Gamma,y,z)[\vec x \mathrel{{\mathrel{\mathop:}=}} \lambda(\vec x)]. $
\end{enumerate}
\end{definition}
\begin{figure} \caption{A grammar-based prove procedure.} \label{fig:prove2} \end{figure}
Now we define a non-deterministic prove procedure $Prove_2(S,\Gamma,k)$ as in Figure~\ref{fig:prove2}, where $k$ is an integer and $S$ is a context-free closed semi-Thue system. Given a nested sequent $\Gamma$, and a node $i$ in $\Gamma$, the {\em height} of $i$ in $\Gamma$ is the length of the branch from the root of $\Gamma$ to node $i.$ The procedure $Prove_2(S,\Gamma,k)$ tries to construct a derivation of $\Gamma$, but is limited to exploring only those nested sequents derived from $\Gamma$ that have height at most $k.$ The procedure $Prove$ given below is essentially an iterative deepening procedure that calls $Prove_2$ repeatedly with increasing values of $k$. If an input sequent is not valid, the procedure will try to guess the smallest $S$-stable sequent that refutes the input sequent, i.e., it essentially tries to construct a finite countermodel.
\begin{enumerate}
\item[] \noindent $Prove(S,\Gamma)$
\item $k:=0$.
\item If $Prove_2(S,\Gamma,k)=\top$ or $Prove_2(S,\Gamma,k)=\bot$, return $\top$ or $\bot$ respectively.
\item $k:=k+1$. Go to step (ii).
\end{enumerate}
The procedure $Prove$ gives a semi-decision procedure for context-free grammar logics. This uses the following lemma about $S$-stable sequents, which shows how to extract a countermodel from an $S$-stable sequent.
\begin{lemma} \label{lm:S-stable}
Let $S$ be a context-free closed semi-Thue system. If $\Gamma$ is an $S$-stable nested sequent, then there exists a model $\mathfrak{M}$ such that for every node $x$ in $\Gamma$ and for every $A \in \Gamma|x$, there exists a world $w$ in $\mathfrak{M}$ such that $\mathfrak{M}, w \not \models A.$
\end{lemma}
\begin{theorem} \label{thm:Prove-sound-complete}
Let $S$ be a context-free closed semi-Thue system. For every formula $F$, $Prove(S, \{F\})$ returns $\top$ if and only if $F$ is provable in $\mathrm{DKm}(S).$
\end{theorem}
We next show that $Prove(S,\Gamma)$ terminates when $S$ is regular. The key is to bound the size of $S$-stable sequents, hence the non-deterministic iterative deepening will eventually find an $S$-stable sequent, when $\Gamma$ is not provable.
\begin{theorem} \label{thm:grammar-proc-terminates}
Let $S$ be a regular closed semi-Thue system over an alphabet $\Sigma$. Then for every formula $F$, the procedure $Prove(S, \{F\})$ terminates.
\end{theorem}
The proof relies on the fact that there exists a minimal FSA $\mathcal{A}$ encoding $S$, so one can simulate steps of $Prove_1(\mathcal{A},\{F\})$ in $Prove(S,\{F \}).$ It is not difficult to show that if a run of $Prove_1(\mathcal{A},\{F\})$ reaches an $\mathcal{A}$-stable nested sequent $\Gamma'$, then one can find a $k$ such that a run of $Prove_2(S,\{F\},k)$ reaches a saturated and $S$-propagated nested sequent $\Delta$, such that $\Gamma'$ and $\Delta$ are identical except for the labeled formulae in $\Gamma'$. The interesting part is in showing that $\Delta$ is $S$-stable. The details are in the appendix. The following is then a corollary of Theorem~\ref{thm:Prove-sound-complete} and Theorem~\ref{thm:grammar-proc-terminates}.
\begin{corollary}
Let $S$ be a regular closed semi-Thue system over an alphabet $\Sigma$. Then the procedure $Prove$ is a decision procedure for $\mathrm{DKm}(S).$
\end{corollary}
\section{Conclusion and future work}
Nested sequent calculus is closely related to display calculi, allowing us to benefit from well-studied proof-theoretic techniques for display calculi, such as Belnap's generic cut-elimination procedure, to prove cut-elimination for $\mathrm{SKm}(S)$. At the more practical end, we have established via proof-theoretic means that nested sequent calculi for regular grammar logics can be effectively mechanised. This work and our previous work~\cite{Gore09tableaux,Gore11LMCS} suggest that nested sequent calculus could potentially be a good intermediate framework to study both proof theory and decision procedures, at least for modal and substructural logics. Nested sequent calculus can be seen as a special case of labelled sequent calculus, as a tree structure in a nested sequent can be encoded using labels and accessibility relations among these labels in labelled calculi. The relation between the two has recently been established in \cite{ramanayake11phd}, where it is shown that, if one gets rid of the frame rules in labelled calculi and structural rules in nested sequent calculi, there is a direct mapping between derivations of formulae in the two frameworks. However, it seems that the key to this connection, i.e., admissibility of the frame rules, has already been established in Simpson's thesis~\cite{simpson94phd},\footnote{Simpson's results are shown for intuitionistic modal logics, but it is straightforward to apply the techniques shown there to classical modal logics.} where he shows admissibility of a class of frame rules (specified via Horn clauses) in favor of propagation rules obtained by applying a closure operation on these frame rules. The latter is similar to our notion of propagation rules. Thus it seems that structural rules in (shallow) nested sequent calculus play a similar role to the frame rules in labelled calculi. We plan to investigate this connection further, e.g., under what conditions the structural rules are admissible in deep inference calculi, and whether those conditions translate into any meaningful characterisations in terms of (first-order) properties of frames. The two decision procedures for regular grammar logics we have presented are not optimal. As can be seen from the termination proofs, their complexity is at least EXPSPACE. We plan to refine the procedures further to achieve optimal EXPTIME complexity, e.g., by extending our deep nested sequent calculi with ``global caching'' techniques from tableaux systems~\cite{GoreW09}.
\paragraph{Acknowledgment}
The authors would like to thank an anonymous reader of a previous draft for his/her detailed and useful comments. The first author is supported by the Australian Research Council Discovery Grant DP110103173.
\appendix
\newcommand{\thmhead}[2]{\noindent {\bf #1 \ref{#2}.}}
\section{Proofs}
\thmhead{Theorem}{thm:Km-S} A formula $F$ is $S$-valid iff $F$ is provable in $\mathrm{Km}(S).$
\begin{proof}
The soundness and completeness proofs follow the same lines as the proofs in \cite{baldoni98phd} for axiomatisations of grammar logics without converse. The soundness proof is quite straightforward so we omit it.
For the completeness proof, it is enough to show that the construction of canonical models in \cite{baldoni98phd} additionally satisfies the residuation axiom, and the rest of the proof is the same. The canonical models are defined using the notion of maximal consistent sets. A formula $A$ is said to be consistent if $\neg A$ is not provable in $\mathrm{Km}(S).$ A finite set of formulae is consistent if the conjunction of all of them is consistent, and an infinite set is consistent if every finite subset of it is consistent. A set of formulae $\mathcal{S}$ is maximally consistent if it is consistent and for every formula $A$, either $A \in \mathcal{S}$ or $\neg A \in \mathcal{S}.$ Following \cite{baldoni98phd}, it can be shown that a maximal consistent set $\mathcal{S}$ satisfies, among other properties, the following:
\begin{itemize}
\item There is no formula $A$ such that $A \in \mathcal{S}$ and $\neg A \in \mathcal{S}.$
\item If $A \in \mathcal{S}$ and $A \impl B \in \mathcal{S}$ then $B \in \mathcal{S}.$
\item If $A$ is provable in $\mathrm{Km}(S)$ then $A \in \mathcal{S}.$
\end{itemize}
We now define the canonical model $\mathfrak{M}_c = \langle W, \{R_a\}_{a\in \Sigma}, V \rangle$ as follows:
\begin{itemize}
\item $W$ is the set of all maximal consistent sets.
\item For every $a \in \Sigma$, $R_a = \{(w,w') \mid w_a \subseteq w' \}$ where $w_a = \{A \mid [a]A \in w \}.$
\item For each propositional variable $p$, $V(p) = \{w \mid p \in w\}.$
\end{itemize}
It is enough to show that $R_a = R_{\bar a}^{-1}$, i.e., that $R_a$ is the inverse of $R_{\bar a}.$ This is proved by contradiction. Suppose otherwise, i.e., there exist $w$ and $w'$ such that $(w,w') \in R_a$ but $(w',w) \not \in R_{\bar a}.$ This means that there exists $[\bar a]A \in w'$ such that $A \not \in w.$ Because $w$ is maximally consistent, we have $\neg A \in w.$ Since the instance $\neg A \supset \obox{a} \odia{\bar a} \neg A$ of the residuation axiom is in $w$ and since maximally consistent sets are closed under modus ponens, we also have $\obox{a}\odia{\bar a}\neg A \in w.$ Because $(w,w') \in R_a$, the latter implies that $\odia{\bar a} \neg A \in w'.$ But this means $\odia{\bar a}\neg A = \neg (\obox{\bar a} A) \in w'$, contradicting the consistency of $w'.$ The rest of the proof then proceeds as in \cite{baldoni98phd} (Chapter II). Briefly, one shows that for every $w$ and $A$, if $A \in w$ then $\mathfrak{M}_c, w \models A$. Now if $A$ is $S$-valid but not provable in $\mathrm{Km}(S)$, then $\neg \neg A$ is not provable either. This means $\neg A$ is in some maximal consistent set $w$, and therefore $\mathfrak{M}_c, w\models \neg A$, and $\mathfrak{M}_c, w \not \models A$, contradicting the validity of $A.$ \qed
\end{proof}
\thmhead{Theorem}{thm:SKm-Km-equiv} The systems $\mathrm{SKm}(S)$ and $\mathrm{Km}(S)$ are equivalent.
\begin{proof}
{\em (Outline).} In one direction, from $\mathrm{SKm}(S)$ to $\mathrm{Km}(S)$, we show that, for each inference rule of $\mathrm{SKm}(S)$, if the formula interpretation of the premise(s) is valid then the formula interpretation of the conclusion is also valid.
For the converse, it is enough to show that all axioms of $\mathrm{Km}(S)$ are derivable in $\mathrm{SKm}(S).$ It can be shown that both the residuation axioms and the axioms generated from $S$ can be derived using the structural rules $r$ and $\rho(S).$ For example, suppose $S$ contains the production rule corresponding to the axiom $[a][b] p \supset[c][d] p.$ Then the (nnf of the) axiom can be derived as shown below (where a double line indicates one or more applications of rules):
$$ \infer[\lor] {\odia a \odia b \neg p \lor \obox c \obox d p} { \infer[\obox c] {\odia a \odia b \neg p, \obox c \obox d p} { \infer[r] {\odia a \odia b \neg p, \seq c {\obox d p}} { \infer[\obox d] {\seq{\bar c}{\odia a \odia b \neg p}, \obox d p} { \infer[r] {\seq{\bar c}{\odia a \odia b \neg p}, \seq{d} p} { \infer[\rho(S)] {\seq{\bar d}{\seq{\bar c}{\odia a\odia b \neg p}}, p} { \infer=[r] {\seq{\bar b}{\seq{\bar a}{\odia a\odia b\neg p}}, p} { \infer[\odia a] {\odia a\odia b \neg p, \seq a {\seq b p}} { \infer[r] { \seq a {\odia b \neg p, \seq b p}} { \infer[\odia b] {\seq {\bar a} {~}, \odia b \neg p, \seq b p} { \infer[r] {\seq a {~}, \seq b {\neg p, p}} { \infer[id] {\seq {\bar b} {\seq{\bar a}{~}}, \neg p, p} {} } } } } } } } } } } } $$
\end{proof}
\thmhead{Lemma}{lm:adm-r} The rule $r$ is height-preserving admissible in $\mathrm{DKm}(S).$
\begin{proof}
Suppose $\Pi$ is a derivation of $\Gamma, \seq a \Delta$. We show by induction on $|\Pi|$ that there exists a derivation $\Pi'$ of $\seq {\bar a} \Gamma, \Delta$ such that $|\Pi| = |\Pi'|.$ This is mostly straightforward, except for the case where $\Pi$ ends with a propagation rule. In this case, it is enough to show that the propagation automata for $\Gamma, \seq a \Delta$ are in fact exactly the same as the propagation automata of $\seq {\bar a} \Gamma, \Delta$.
\end{proof}
\thmhead{Lemma}{lm:adm-medial} The rules $\mathit{actr}$ and $m$ are height-preserving admissible.
\begin{proof}
Admissibility of $\mathit{actr}$ is trivial. To show admissibility of $m$, the non-trivial case is when we need to permute $m$ over $p_S.$ Suppose $\Pi$ is a derivation of $\Gamma[\seq a {\Delta_1}, \seq a {\Delta_2}]$ ending with a propagation rule. Suppose $i$ is the node where $\Delta_1$ is located and $j$ is the node where $\Delta_2$ is located. If $\mathcal{P}$ is a propagation automaton between nodes $k$ and $l$ in $\Gamma[\seq a {\Delta_1}, \seq a {\Delta_2}]$, then $\mathcal{P}[j \mathrel{{\mathrel{\mathop:}=}} i]$ is a propagation automaton between nodes $k[j \mathrel{{\mathrel{\mathop:}=}} i]$ and $l[j \mathrel{{\mathrel{\mathop:}=}} i]$ in $\Gamma[\seq a {\Delta_1,\Delta_2}]$. So all potential propagations of diamond formulae are preserved in the conclusion of $m.$ So $m$ can be permuted up over the propagation rule and, by the induction hypothesis, it can eventually be eliminated.
\end{proof}
\thmhead{Lemma}{lm:adm-rho-S} The structural rules $\rho(S)$ of $\mathrm{SKm}(S)$ are height-preserving admissible in $\mathrm{DKm}(S).$
\begin{proof}
Suppose $\Pi$ is a derivation of $\Gamma[\seq a \Delta]$. We show that there is a derivation $\Pi'$ of $\Gamma[\seq {u} \Delta]$, where $u = a_1\cdots a_n$ such that $a \rightarrow u \in S.$ This is mostly straightforward except when $\Pi$ ends with a propagation rule.
Suppose the hole in $\Gamma[~]$ is located at node $k$ and $\Delta$ is located at node $l$, with $k \next{a} l.$ In this case we need to show that if a diamond formula $\odia b A$ can be propagated from a node $i$ to node $j$ in $\Gamma[\seq {a} \Delta]$ then there is also a propagation path between $i$ and $j$ in $\Gamma[\seq {u} \Delta]$ for the same formula. Suppose $\mathcal{P}_1$ is the propagation automaton $\mathcal{R}(\Gamma[\seq {a} \Delta],i,j).$ Then the propagation automaton $\mathcal{P}_2 = \mathcal{R}(\Gamma[\seq u \Delta],i,j)$ is obtained from $\mathcal{P}_1$ by adding $n-1$ new states $k_1,\dots,k_{n-1}$ between $k$ and $l$, and the following transitions: $k \trans{a_1} k_1$, $k_m \trans{a_{m+1}} k_{m+1}$ for $1 \leq m \leq n-2$, and $k_{n-1} \trans{a_n} l$, together with their dual transitions. Suppose $i \trans{v} j$ is a propagation path in $\Gamma[\seq a \Delta].$ If $v$ does not go through the edge $k \next{a} l$ (in either direction, up or down) then the same path also exists in $\Gamma[\seq {u} \Delta].$ If it does pass through $k \next{a} l$, then the path must contain one or more transitions of the form $k \trans{a} l$ or $l \trans{\bar a} k$. Then one can simulate the path $i \trans{v} j$ with a path $i \trans{v'} j$ in $\mathcal{P}_2$, where $v'$ is obtained from $v$ by replacing each $k \trans{a} l$ with $k \trans{u} l$ and each $l \trans{\bar a} k$ with $l \trans{\bar u} k.$ It remains to show that $v' \in \mathcal{P}_2 \cap L_b(S).$ But this follows from the fact that $a \rightarrow u \in S$ and $\bar a \rightarrow \bar u \in S$ (because $S$ is closed), so $v \Rightarrow_S v'$ and hence $v' \in L_b(S).$
\end{proof}
\thmhead{Theorem}{thm:SKm-DKm-equiv} For every context-free closed semi-Thue system $S$, the proof systems $\mathrm{SKm}(S)$ and $\mathrm{DKm}(S)$ are equivalent.
\begin{proof}
One direction, from $\mathrm{SKm}(S)$ to $\mathrm{DKm}(S)$, follows from the admissibility of structural rules of $\mathrm{SKm}(S)$ in $\mathrm{DKm}(S).$ To show the other direction, given a derivation $\Pi$ in $\mathrm{DKm}(S)$, we show, by induction on the number of occurrences of $p_S$, with a subinduction on the height of $\Pi$, that $\Pi$ can be transformed into a derivation in $\mathrm{SKm}(S).$ As rules other than $p_S$ can be derived directly in $\mathrm{SKm}(S)$, the only interesting case to consider is when $\Pi$ ends with $p_S$:
$$ \infer[p_S, \mbox{ where $\mathcal{R}(\Gamma[~]_i[~]_j, i, j) \cap L_a(S) \not = \emptyset$}] {\Gamma[\odia a A]_i[\emptyset]_j} {\Gamma[\odia a A]_i[A]_j} $$
and $\Gamma[\odia a A]_i[A]_j$ is derivable via a derivation $\Pi'$ in $\mathrm{DKm}(S).$ Choose some $u \in \mathcal{R}(\Gamma[~]_i[~]_j, i, j) \cap L_a(S).$ Then we can derive the implication $\odia u A \supset\odia a A$ in $\mathrm{SKm}(S)$. Using this implication, the display property and the cut rule, it can be shown that the following rule is derivable in $\mathrm{SKm}(S).$
$$ \infer[d] {\Gamma[\odia a A]} {\Gamma[\odia a A, \odia u A]} $$
Then we show that the rule $p_S$ can be simulated by the derived rule $d$ above, with chains of $\odia a$-rules in $\mathrm{SKm}(S)$, and utilising the weakening lemma (Lemma~\ref{lm:weak}). Suppose $u = a_1 \cdots a_n$.
Then there are nodes $s_1,\dots,s_{n+1}$ in $\Gamma[]_i[]_j$, with $s_1 = i$ and $s_{n+1}=j$, such that the following is a path in the propagation automaton $\mathcal{R}(\Gamma[~]_i[~]_j,i,j)$:
$$ i = s_1 \trans{a_1} s_2 \trans{a_2} \cdots s_{n} \trans{a_n} s_{n+1} = j $$
Now instead of propagating $A$ using $p_S$ applied to $\odia a A$, we can propagate $A$ in stages using $\odia u A$ and the diamond rules $\odia {a_1}, \ldots, \odia {a_n}.$ Let $\Gamma'[]_i[]_j$ be a context obtained from $\Gamma[]_i[]_j$ by adding the formula $\odia {a_k} \odia{a_{k+1}} \cdots \odia{a_{n}} A$ to node $s_k$, for each $1 \leq k \leq n.$ Then it can be shown, by induction on $n$, that we have a derivation
$$ \deduce{\Gamma[\odia a A, \odia u A]_i[\emptyset]_j} {\deduce{\vdots}{\Gamma'[\odia a A, \odia u A]_i[A]_j}} $$
in $\mathrm{DKm}(S)$ using only the diamond rules $\odia {a_1}, \dots, \odia{a_n}.$ Note that as these are diamond rules, not $p_S$, they can be simulated in $\mathrm{SKm}(S)$, so the above derivation can be simulated as well in $\mathrm{SKm}(S).$ By the weakening lemma (Lemma~\ref{lm:weak}), we can construct a derivation $\Psi$ of $\Gamma'[\odia a A, \odia u A]_i[A]_j$, such that the height of $\Pi'$ is the same as that of $\Psi.$ So by the induction hypothesis we have a derivation $\Psi'$ of $\Gamma'[\odia a A, \odia u A]_i[A]_j$ in $\mathrm{SKm}(S).$ The final derivation in $\mathrm{SKm}(S)$ is thus constructed by chaining the above derivations:
$$ \infer[d] {\Gamma[\odia a A]_i[\emptyset]_j} { \deduce{\Gamma[\odia a A, \odia u A]_i[\emptyset]_j} {\deduce{\vdots}{\deduce{\Gamma'[\odia a A, \odia u A]_i[A]_j}{\Psi'}}} } $$
\end{proof}
\thmhead{Theorem}{thm:DKm-A-eq-S} Let $S$ be a regular closed semi-Thue system over $\Sigma$ and let $\mathcal{A}$ be an FSA representing the regular language generated by $S$ and $\Sigma.$ Then $\mathrm{DKm}(S)$ and $\mathrm{DKm}(\mathcal{A})$ are equivalent.
\begin{proof}
{\em (Outline).} To show that if a formula $B$ is provable in $\mathrm{DKm}(S)$ then $B$ is provable in $\mathrm{DKm}(\mathcal{A})$ we will demonstrate that given a proof of $B$ in $\mathrm{DKm}(S)$ it is possible to replace the highest application of a propagation rule from $\mathrm{DKm}(S)$ with a sequence of propagation rules from $\mathrm{DKm}(\mathcal{A})$. As all non-propagation rules between the two systems are identical, this will be sufficient to show that a proof in $\mathrm{DKm}(S)$ can be translated to a proof in $\mathrm{DKm}(\mathcal{A})$. Suppose we have a derivation $\Pi$ of $\Gamma[\odia a A]_i[A]_j$ using only the rules of $\mathrm{DKm}$. If $p_S$ is applicable and yields $\Gamma[\odia a A]_i[\emptyset]_j$, it must be the case that $\mathcal{R}(\Gamma[]_i[]_j,i,j)\cap L_a(S)\neq\emptyset$.
Therefore there exists a sequence of transitions in $\mathcal{R}(\Gamma[~]_i[~]_j,i,j)$: $ i \trans{a_1} i_1 \trans{a_2} \cdots \trans{a_{n-1}} i_{n-1} \trans{a_n} j, $ where $a_1\cdots a_n \in L_a(S)$ and where each $i_k$, for $1 \leq k \leq n-1$, is a node in $\Gamma[~]_i[~]_j$ and
\begin{itemize}
\item either $i \next{a_1} i_1$ or $i_1 \next{\bar a_1} i$,
\item either $i_{k-1} \next{a_{k}} i_{k}$ or $i_{k} \next{\bar a_k} i_{k-1}$, for $2 \leq k \leq n-1$,
\item and either $i_{n-1} \next{a_n} j$ or $j \next{\bar a_n} i_{n-1}.$
\end{itemize}
Since $\mathcal{A}$, started from the initial state $init_a$, accepts $L_a(S)$, there must exist a sequence of transitions in $\mathcal{A}$ such that: $ init_a \trans{a_1} s_1 \trans{a_2} \cdots \trans{a_{n-1}} s_{n-1} \trans{a_n} f, $ where $f$ is a final state in $\mathcal{A}.$ The propagation path $a_1\cdots a_n$ can then be simulated in $\mathrm{DKm}(\mathcal{A})$ as follows. First, define a sequence of nested sequents as follows:
\begin{itemize}
\item $\Gamma_0 := \Gamma[\odia a A]_i[\emptyset]_j$, $\Gamma_1 := \Gamma[\odia a A, init_a : A]_i[\emptyset]_j.$
\item $\Gamma_{k+1} := \Gamma_k(i_k \ll \{s_k : A\})$, for $1 \leq k \leq n-1$.
\item $\Gamma_{n+1} := \Gamma_{n}(j \ll \{f : A\})$ and $\Gamma_{n+2} := \Gamma_{n+1}(j \ll \{A\}).$
\end{itemize}
Then $\Gamma_0$ can be obtained from $\Gamma_{n+2}$ by a series of applications of propagation rules of $\mathrm{DKm}(\mathcal{A})$. That is, $\Gamma_0$ is obtained from $\Gamma_1$ by applying the rule $i$; $\Gamma_k$ is obtained from $\Gamma_{k+1}$ by applying either the rule $t\!\downarrow$ or $t\!\uparrow$, for $1 \leq k \leq n-1$, at node $i_{k}$; $\Gamma_{n}$ is obtained from $\Gamma_{n+1}$ by applying the rule $t\!\downarrow$ or $t\!\uparrow$ at node $j$; and $\Gamma_{n+1}$ is obtained from $\Gamma_{n+2}$ by applying the rule $f$ at node $j.$ Note that $\Gamma_{n+2}$ is a weakening of $\Gamma[\odia a A]_i[A]_j$ with labeled formulae spread over some nodes between $i$ and $j.$ It remains to show that $\Gamma_{n+2}$ is derivable. This is obtained simply by applying weakening (Lemma~\ref{lm:weak}) to $\Pi.$ For the other direction, assume we have a $\mathrm{DKm}(\mathcal{A})$-derivation $\Psi$ of $B$. We show how to construct a derivation $\Psi'$ of $B$ in $\mathrm{DKm}(S).$ The derivation $\Psi'$ is constructed as follows: First, remove all labelled formulae from $\Psi$; then remove the rules $t\!\uparrow$, $t\!\downarrow$ and $i$, and finally, replace the rule $f$ with $p_S.$ The rules $t\!\uparrow$, $t\!\downarrow$ and $i$ from $\Psi$ simply disappear in $\Psi'$ because with labelled formulae removed, the premise and the conclusion of any of these rules in $\Psi$ map to the same sequent in $\Psi'.$ Instances of the other rules in $\Psi$ map to the same rules in $\Psi'.$ We need to show that $\Psi'$ is indeed a derivation in $\mathrm{DKm}(S).$ The only non-trivial case is to show that the mapping from the rule $f$ to the rule $p_S$ is correct, i.e., the resulting instances of $p_S$ in $\Psi'$ are indeed valid instances. We first prove an invariant property that holds for $\Psi$.
We say that a nested sequent $\Delta$ is {\em $\mathcal{A}$-connected} iff the following hold:
\begin{itemize}
\item If $init_a : C \in \Delta | i$ then $\odia a C \in \Delta | i.$
\item If $s : C \in \Delta | i$ and $s$ is not an initial state of $\mathcal{A}$, then there exists an $a \in \Sigma$ and a sequence of nodes $x_1,\dots, x_n$ in $\Delta$ and a sequence of states $s_1,\dots,s_n$ of $\mathcal{A}$ such that
\begin{itemize}
\item $s_k : C \in \Delta | x_k$ for $1 \leq k \leq n.$
\item $s_1 = init_a$ and $x_n = i$.
\item For each $1 \leq k < n$, $s_k \trans{b}_\mathcal{A} s_{k+1}$ for some $b \in \Sigma$, and either $x_k \next{b} x_{k+1}$ or $x_{k+1} \next{\bar b} x_k.$
\end{itemize}
\end{itemize}
It is then easy to verify the following claim:
\begin{quote}{\bf Claim:} If $\Delta$ is $\mathcal{A}$-connected and there is a derivation $\Xi$ of $\Delta$ in $\mathrm{DKm}(\mathcal{A})$, then every nested sequent in $\Xi$ is $\mathcal{A}$-connected.
\end{quote}
Given the above claim, and the fact that the nested sequent $\{B\}$ is trivially $\mathcal{A}$-connected, it follows that every nested sequent in $\Psi$ is $\mathcal{A}$-connected. Now, it remains to show that each instance of $f$ in $\Psi$ can be replaced by a valid instance of $p_S$ in $\Psi'.$ Suppose there is an instance of $f$ in $\Psi$ as shown below left:
$$\infer[f]{\Gamma[s:A]_j}{\Gamma[s:A,A]_j} \qquad \infer[p_S] {\Gamma''[\odia a A]_i[\emptyset]_j}{\Gamma''[\odia a A]_i[A]_j} $$
Then, by the above claim, there must exist a node $i$ and an $a \in \Sigma$ such that $\odia a A \in \Gamma[s:A]_j | i$ and that there exist a sequence of nodes $i = x_1,\dots, x_n = j$ and a sequence of states $init_a = s_1, \dots, s_n = s$ such that $s_1 \trans{a_1}_\mathcal{A} s_2 \trans{a_2}_\mathcal{A} \cdots \trans{a_{n-1}}_\mathcal{A} s_n$ for some $a_1,\dots,a_{n-1}.$ It also follows from $\mathcal{A}$-connectedness that $a_1\cdots a_{n-1}$ is an element of the propagation automaton $\mathcal{R}(\Gamma[s:A]_j, i, j).$ Because $\mathcal{A}$ represents the regular languages $\{L_b(S) \mid b \in \Sigma \}$, we have that $a_1 \cdots a_{n-1} \in L_a(S)$, and
\begin{equation} \label{eq:DKm-A-DKm-S} a_1 \cdots a_{n-1} \in L_a(S) \cap \mathcal{R}(\Gamma[s:A]_j, i,j). \end{equation}
Let $\Gamma'[\odia a A]_i[s:A]_j = \Gamma[s:A]_j.$ Let $\Gamma''[~]_i[~]_j$ be the context obtained from $\Gamma'[~]_i[~]_j$ by removing all labelled formulae. Then (\ref{eq:DKm-A-DKm-S}) can be rewritten as:
$$a_1 \cdots a_{n-1} \in L_a(S) \cap \mathcal{R}(\Gamma''[\odia a A]_i[\emptyset]_j, i, j).$$
Thus the instance of $p_S$ shown above right, to which the above instance of $f$ maps, is indeed a valid instance of $p_S.$
\end{proof}
\thmhead{Theorem}{thm:auto correctness} If $Prove_1(\mathcal{A},\{F\}) = \top$ then $F$ is provable in $\mathrm{DKm}(\mathcal{A}).$ If $Prove_1(\mathcal{A},\{F\}) = \bot$ then $F$ is not provable in $\mathrm{DKm}(\mathcal{A}).$
\begin{proof}
The proof of the first statement is straightforward, since the steps of $Prove_1$ are just backward applications of rules of $\mathrm{DKm}(\mathcal{A}).$ To prove the second statement, we show that if $Prove_1(\mathcal{A},\{F\})=\bot$ then there exists a model $\mathfrak{M}=(W, R,V)$, where $R = \{R_a\}_{a\in\Sigma}$, such that $\mathfrak{M}\ \slashed{\models}\ F.$ By the soundness of $\mathrm{DKm}(\mathcal{A})$, it will follow that $F$ is not provable in $\mathrm{DKm}(\mathcal{A})$.
Since $Prove_1(\mathcal{A},\{F\})=\bot$, the procedure must generate an $\mathcal{A}$-stable $\Delta$, with $F$ in the root node of $\Delta.$ Let $W$ be the set of all the realised nodes of $\Delta$. For every pair $i,j \in W$, construct an automaton $\mathcal{P}(i,j)$ by modifying the propagation automaton $\mathcal{R}(\Delta,i,j)$ by identifying every unrealised node $k'$ with its closest ancestor $k$ such that $\Delta|k=\Delta|k'$. That is, replace every transition of the form $s\trans{a} k'$ with $s\trans{a} k$ and $k'\trans{a} s$ with $k\trans{a} s$. Then define $R_a(x,y)$ iff $\mathcal{P}(x,y)\cap L(\mathcal{A}_a)\neq\emptyset$, where $\mathcal{A}_a$ is $\mathcal{A}$ with only $init_a$ as the initial state. Suppose $S$ is a closed semi-Thue system that corresponds to $\mathcal{A}.$ Then $L(\mathcal{A}_b) = L_b(S)$ for every $b \in \Sigma.$ We first show that the $\Sigma$-frame $\langle W, R\rangle$ defined above satisfies all the production rules in $S$ (see Definition~\ref{def:S-frame}). Let $a \rightarrow u \in S$, where $u = a_1 \cdots a_n.$ We need to show that $R_u \subseteq R_a.$ Suppose otherwise, that is, there is a sequence of worlds $x_1,\dots,x_{n+1}$ such that $x_i R_{a_i} x_{i+1}$ but $(x_1,x_{n+1}) \not \in R_a.$ By the above construction, we have $R_b(x,y)$ iff $\mathcal{P}(x,y) \cap L_b(S) \neq \emptyset$ for every $b \in \Sigma.$ So it follows that, for each pair $(x_i, x_{i+1})$, there is a string $u_i \in \mathcal{P}(x_i,x_{i+1}) \cap L_{a_i}(S).$ It also follows that we have a sequence of transitions $x_1 \trans{u_1\cdots u_n} x_{n+1}$ in $\mathcal{P}(x_1,x_{n+1})$, by chaining the transitions $x_i \trans{u_i} x_{i+1}$ together. Because $a \rightarrow u \in S$, and each $u_i \in L_{a_i}(S)$, we have
$$a \Rightarrow_S a_1\cdots a_n \Rightarrow^*_S u_1 \cdots u_n, \mbox{ and hence } u_1 \cdots u_n \in L_a(S).$$
So $u_1 \cdots u_{n}$ is in $\mathcal{P}(x_1,x_{n+1}) \cap L_a(S)$, and therefore $(x_1,x_{n+1}) \in R_a$, contradicting the assumption. To complete the model, let $x \in V(p)$ iff $\neg p \in\Delta|x$. We claim that for every $x \in W$ and every $A \in \Delta | x$, we have $\mathfrak{M}, x \ \slashed{\models}\ A$. We shall prove this by induction on the size of $A$. Note that we ignore the labelled formulae in $\Delta$; they are just a bookkeeping mechanism. As $F$ is in the root node of $\Delta$, this will also prove $\mathfrak{M}\ \slashed{\models} F.$ We show here the interesting case involving the diamond operators. Suppose $\odia a A\in\Delta|x$. Assume for a contradiction that $\mathfrak{M},x\models\odia a A$. That is, there exists $y$ such that $R_a(x,y)$ and $\mathfrak{M},y\models A.$ If $R_a(x,y)$ then there is an accepting path $p_a(x,y)$ in $\mathcal{P}(x,y)$ of the form: $ x_0 \trans{a_1} x_1 \trans{a_2} x_2 \cdots x_{n-1} \trans{a_n} x_n, $ where $x_0 = x$ and $x_n = y$ such that $u = a_1 \dots a_n \in L(\mathcal{A}_a).$ Then because $u \in L(\mathcal{A}_a)$, there must be a sequence of states $s_0,s_1,\dots,s_n$ of $\mathcal{A}$ such that $s_0 = init_a \in I$ and $s_n\in F$, and such that $\mathcal{A}$ has the transitions
$$ s_0 \trans{a_1} s_1 \trans{a_2} s_2 \cdots s_{n-1} \trans{a_n} s_n. $$
We show by induction on the length of the transition sequence that $s_i : A \in \Delta|x_i$ for $0 \leq i \leq n.$ In the base case, because $\odia a A \in \Delta|x$, by $\mathcal{A}$-propagation, we have $s_0 : A \in \Delta|x_0.$ For the inductive cases, suppose $s_i : A \in \Delta|x_i$, for $n > i \geq 0$. There are two cases to consider.
Suppose the transition $x_i \trans{a_{i+1}}_{\mathcal{P}(x,y)} x_{i+1}$ is present in $\mathcal{R}(\Delta,x,y).$ Then either $x_i \next{a_{i+1}} x_{i+1}$ or $x_{i+1} \next{\bar a_{i+1}} x_i.$ In either case, by $\mathcal{A}$-propagation of $\Delta$, we must have $s_{i+1} : A \in \Delta|x_{i+1}.$ If $x_i \trans{a_{i+1}}_{\mathcal{P}(x,y)} x_{i+1}$ is not a transition in $\mathcal{R}(\Delta,x,y),$ then this transition must have resulted from a use of a loop node. There are two subcases: either $x_i$ or $x_{i+1}$ is the closest ancestor of a loop node $x'$ with $\Delta | x_i = \Delta | x'$ or, respectively, $\Delta | x_{i+1} = \Delta | x'.$ Suppose $x_i$ is the closest ancestor of $x'$ with $\Delta | x_i = \Delta | x'.$ By the definition of $\mathcal{P}(x,y)$, this means we have $x' \trans{a_{i+1}} x_{i+1}$ in $\mathcal{R}(\Delta,x,y).$ Because $\Delta | x_i = \Delta | x'$ and $s_i : A \in \Delta | x_i$, we have $s_i : A \in \Delta|x'.$ Then by $\mathcal{A}$-propagation, it must be the case that $s_{i+1} : A \in \Delta|x_{i+1}.$ Suppose $x_{i+1}$ is the closest ancestor of $x'$ with $\Delta|x' = \Delta|x_{i+1}.$ Then $x_i \trans{a_{i+1}} x'$ is a transition in $\mathcal{R}(\Delta,x,y).$ By $\mathcal{A}$-propagation, it must be the case that $s_{i+1}:A \in \Delta|x'$, and therefore also $s_{i+1} : A \in \Delta|x_{i+1}.$ So we have $s_n : A \in \Delta|y.$ But, again by $\mathcal{A}$-propagation, this means $A \in \Delta|y$ (because $s_n$ is a final state). Then by the induction hypothesis, we have $\mathfrak{M}, y \slashed{\models} A$, contradicting the assumption.
\end{proof}
\thmhead{Theorem}{thm:auto-proc-terminates} For every nested formula $A$, $Prove_1(\mathcal{A}, \{A\})$ terminates.
\begin{proof}
{\em (Outline)} We say that a nested sequent $\Gamma$ is a {\em set-based nested sequent} if in every node of $\Gamma$, every (labelled) formula occurs at most once (a formula $C$ and its labelled versions are considered distinct). By inspection of the procedure $Prove_1$, it is clear that all the intermediate sequents created during proof search for $Prove_1(\mathcal{A},\{A\})$ are set-based sequents. Steps (i) -- (iv) of the procedure only add (strict) subformulae of formulae occurring in the input sequent without creating new nodes, so for a given input nested sequent, applications of these steps eventually terminate. Because of the blocking conditions in each step, the same formula cannot be added twice to a node, so the upper bound on the size of a node (i.e., the number of formulae in it) is the number of subformulae of the input sequent plus the number of their possible labellings (which is finite because $\mathcal{A}$ has only a finite number of states). Step (v) is applicable only to internal nodes which are not realised. So the expansion of the nested sequent tree in this case adds to the width of the tree, not the height. It is easy to see that the number of branches in an internal node is bounded by the number of distinct `boxed' subformulae in the original sequent, so this expansion step cannot be applied indefinitely without applying step (vi), as the number of distinct boxed subformulae is bounded and no new internal nodes are created. So the combination of steps (i) -- (v) always terminates for a given input sequent. The only possible cause of non-termination is if step (vi) can be applied infinitely often. We next show that this is not the case. The expansion in step (vi) adds to the height of the input nested sequent tree.
Because of the loop checking condition in the step, the height of the trees generated during proof search is bounded; we give a more precise bound next. Let $m$ be the number of states in $\mathcal{A}$ and let $n$ be the number of subformulae of $A$. Then the total number of different sets of formulae and labeled formulae (with labels from $\mathcal{A}$) is bounded by $2^{(m+1)n}.$ Therefore, no branch of a set-based nested sequent generated during proof search can exceed this bound in length without creating a loop node. As the height of the trees generated during proof search is bounded, and the number of branches at each node of the trees is also bounded, there are only finitely many possible nested sequent trees that can be generated in each branch of the proof search. Note that every recursive call in the proof procedure adds something to the input nested sequent, so every branch in the proof search generates pairwise distinct (set-based) nested sequents. As the number of possible set-based nested sequents is bounded, the depth of the search is bounded, and because the branching in proof search is also bounded (i.e., it is at most binary, created when applying the $\land_d$ rule in step (iii)), the search tree must be finite, and therefore the search procedure must terminate.
\end{proof}
\thmhead{Lemma}{lm:S-stable} Let $S$ be a context-free closed semi-Thue system. If $\Gamma$ is an $S$-stable nested sequent, then there exists a model $\mathfrak{M}$ such that for every node $x$ in $\Gamma$ and for every $A \in \Gamma|x$, there exists a world $w$ in $\mathfrak{M}$ such that $\mathfrak{M}, w \not \models A.$
\begin{proof}
Let $\vec x = x_1,\dots, x_n$ be the list of (pairwise distinct) unrealised leaf nodes in $\Gamma.$ Because $\Gamma$ is $S$-stable, we have a function $\lambda$ assigning each unrealised leaf node $x_i$ to an ancestor node $\lambda(x_i)$ such that $\Gamma|x_i = \Gamma | \lambda(x_i)$, and for every node $y$ and $z$ in $\Gamma$, we have that $\Gamma$ is $(S,\mathcal{P}(y,z))$-propagated, where $ \mathcal{P}(y,z) = \mathcal{R}(\Gamma,y,z)[\vec x \mathrel{{\mathrel{\mathop:}=}} \lambda(\vec x)]. $ Then define $ \mathfrak{M} = \langle W, \{R_a \mid a \in \Sigma\}, V\rangle $ where
\begin{itemize}
\item $W$ is the set of nodes of $\Gamma$ minus the nodes $\vec x,$
\item for every $x,y \in W$, $R_a(x,y)$ iff $\mathcal{P}(x,y) \cap L_a(S) \not = \emptyset$, and
\item $V(p) = \{ x \in W \mid \neg p \in \Gamma | x \}.$
\end{itemize}
We now show that if $A \in \Gamma|v$ then there is a $w \in W$ such that $\mathfrak{M}, w \not \models A,$ where the world $w$ is determined as follows: if $v$ is in $\vec x$, then $w = \lambda(v)$; otherwise, $w = v.$ We prove this by induction on the size of $A.$ The only interesting cases are those where $A = \odia a C$ or $A = \obox a C$ for some $a$ and $C.$
\begin{itemize}
\item Suppose $A = \odia a C.$ Suppose, for a contradiction, that $\mathfrak{M}, w \models \odia a C.$ That means there exists a $w'$ such that $R_a(w,w')$ and $\mathfrak{M}, w' \models C.$ By the definition of $R_a$, we have that $\mathcal{P}(w,w') \cap L_a(S) \not = \emptyset.$ Because $\Gamma$ is $S$-stable, by Definition~\ref{def:S-stable}(iv), it is $(S,\mathcal{P}(w,w'))$-propagated. This means that $C \in \Gamma|w'.$ Then by the induction hypothesis, $\mathfrak{M}, w' \not \models C,$ which contradicts our assumption.
\item Suppose $A = \obox a C.$ To show $\mathfrak{M}, w \not \models \obox a C$, it is enough to show there exists $w'$ such that $R_a(w,w')$ and $\mathfrak{M}, w' \not \models C.$ Note that $w$ must be an internal node in $\Gamma$ (a realised leaf contains no formula of the form $\obox a C$, and $\lambda$ maps each unrealised leaf to an internal ancestor), so by the $S$-stability of $\Gamma$, node $w$ in $\Gamma$ must be realised. Therefore there exists a node $z$ such that $w \next{a} z$ in $\Gamma$ and $C \in \Gamma | z.$ If $z \not \in \vec x$, then let $w' = z$; otherwise, let $w' = \lambda(z).$ In either case, $\Gamma | z = \Gamma | w'$, so in particular, $C \in \Gamma|w'.$ Also, in either case, the propagation automaton $\mathcal{P}(w, w')$ contains a transition $w \trans{a}_{\mathcal{P}(w,w')} w'$ (in the case where $z \in \vec x$, this is because $\lambda(z)$ is identified with $z$). Obviously, $a \in L_a(S)$, so $L_a(S) \cap \mathcal{P}(w,w') \not = \emptyset,$ so by the definition of $R_a$, we have $R_a(w,w').$ Since $C \in \Gamma|w'$, by the induction hypothesis, $\mathfrak{M}, w' \not \models C.$ So we have $R_a(w,w')$, and $\mathfrak{M}, w' \not \models C$, therefore $\mathfrak{M}, w \not \models \obox a C.$
\end{itemize}
\end{proof}
\thmhead{Theorem}{thm:Prove-sound-complete} Let $S$ be a context-free closed semi-Thue system. For every formula $F$, $Prove(S, \{F\})$ returns $\top$ if and only if $F$ is provable in $\mathrm{DKm}(S).$
\begin{proof}
{\em (Outline)} One direction, i.e., $Prove(S,\{F\}) = \top$ implies that $F$ is provable in $\mathrm{DKm}(S)$, follows from the fact that steps of $Prove$ are simply backward applications of rules of $\mathrm{DKm}(S).$ To prove the other direction, we note that if $F$ has a derivation in $\mathrm{DKm}(S)$, it has a derivation of minimal length, say $\Pi$. In particular, in such a derivation, there are no two identical nested sequents in any branch of the derivation. Because in $\mathrm{DKm}(S)$ each backward application of a rule retains the principal formula of the rule, every application of a rule in $\Pi$ will eventually be covered by one of the steps of $Prove.$ Since there are only finitely many rule applications in $\Pi$, eventually these will all be covered by $Prove$ and therefore it will terminate. For example, if $\Pi$ ends with a diamond (propagation) rule applied to a non-saturated sequent, the $Prove$ procedure will choose to first saturate the sequent before applying the propagation rule. Since all rules are invertible, we do not lose provability of the original sequent, but the $Prove$ procedure may end up doing more steps. We need to show, additionally, that no sequent arising from the execution of $Prove(S,\{F\})$ is $S$-stable. Suppose otherwise, i.e., the procedure produces an $S$-stable sequent $\Delta$. Now it must be the case that $F$ is in the root node of $\Delta.$ By Lemma~\ref{lm:S-stable}, this means there exists a countermodel that falsifies $F$, contrary to the validity of $F$.
\end{proof}
\thmhead{Theorem}{thm:grammar-proc-terminates} Let $S$ be a regular closed semi-Thue system. Then for every formula $F$, the procedure $Prove(S, \{F\})$ terminates.
\begin{proof}
Since $S$ is regular, there exists an automaton $\mathcal{A}$ such that $Prove_1(\mathcal{A},\{F\})$ terminates. We choose the minimal deterministic finite state automaton $\mathcal{A}$ that corresponds to $S.$ Suppose $Prove_1(\mathcal{A},\{F\})=\top$. Then $F$ must be derivable in $\mathrm{DKm}(\mathcal{A})$ by Theorem~\ref{thm:auto correctness}.
Since $\mathrm{DKm}(\mathcal{A})$ and $\mathrm{DKm}(S)$ are equivalent (Theorem~\ref{thm:DKm-A-eq-S}), there must also be a derivation of $F$ in $\mathrm{DKm}(S)$. Then by Theorem~\ref{thm:Prove-sound-complete}, $Prove(S,\{F\})$ must terminate and return $\top.$ Suppose $Prove_1(\mathcal{A},\Gamma)=\bot$. Then there exists an $\mathcal{A}$-stable $\Gamma'$ that can be constructed from $\Gamma$ in the execution of $Prove_1(\mathcal{A},\Gamma)$. It can be shown that a sequent $\Delta$ that is identical to $\Gamma'$ except that it contains no labelled formulae can be constructed in the execution of $Prove_2(S,\Gamma,d)$ for some $d$. We claim that $\Delta$ is $S$-stable. Saturation, propagation and the realisation of internal nodes follow immediately from the construction; it remains to find a function $\lambda$ as in Definition~\ref{def:S-stable}. We claim that such a function is given by $\lambda(x)=y$ where $y$ is the closest ancestor of $x$ in $\Gamma'$ such that $\Gamma'|x=\Gamma'|y$. That is, we identify each unrealised leaf with the same node it would have been identified with in $Prove_1(\mathcal{A},\Gamma)$. Let $\vec i = i_1,\dots,i_l$ be the list of all unrealised leaf nodes in $\Delta$ and let $\mathcal{P}(x,y) = \mathcal{R}(\Delta, x, y)[\vec i \mathrel{{\mathrel{\mathop:}=}} \lambda(\vec i)].$ (Note that as the tree structures of $\Gamma'$ and $\Delta$ are identical, we also have $\mathcal{P}(x,y) = \mathcal{R}(\Gamma', x, y)[\vec i \mathrel{{\mathrel{\mathop:}=}} \lambda(\vec i)].$) For a contradiction, suppose there exist $j$ and $k$ such that $\Delta$ is not $(S,\mathcal{P}(j,k))$-propagated, i.e., there exists $\odia a A \in \Delta|j$ such that $A\notin\Delta|k$ but $\mathcal{P}(j,k) \cap L_a(S)\neq\emptyset.$ In other words, there is a word $b_1\dots b_n\in \mathcal{P}(j,k) \cap L_a(S)$, and a sequence of states $x_0,\dots,x_n$ in $\mathcal{P}(j,k)$ such that $x_0=j,x_n=k,x_{m-1}\trans{b_m}_{\mathcal{P}(j,k)} x_m,$ where $1 \leq m \leq n.$ We will show that there exists an assignment $St$ of states of $\mathcal{A}$ to the nodes $x_0,\dots,x_n$ satisfying: $St(x_0)\in I$, $St(x_{m-1})\trans{b_m}_{\mathcal{A}} St(x_m)$, $St(x_n)\in F$, and $St(x_m):A\in\Gamma'|x_m.$ This will establish that $St(x_n):A\in\Gamma'|x_n$ where $St(x_n)\in F$. Then by $\mathcal{A}$-propagation, it will follow that $A\in\Gamma'|k$, and therefore $A \in \Delta|k$, contradicting our assumption that $A \not \in \Delta|k$. Let $s_0,\dots,s_n$ be the run of $\mathcal{A}_a$ associated with input $b_1\dots b_n$. Let $St(x_m)=s_m$. As $L(\mathcal{A}_a)=L_a(S)$, we know that $s_0,\dots,s_n$ is an accepting run. This gives us $St(x_0)\in I, St(x_{m-1})\trans{b_m}_\mathcal{A} St(x_m)$ and $St(x_n)\in F$. It remains to show that $St(x_m):A\in\Gamma'|x_m$. We will do so by induction on $m$. Base case: As $\odia a A\in\Gamma'|x_0$, by $\mathcal{A}$-propagation we obtain $s_0:A\in\Gamma'|x_0$. Inductive case: Suppose $x_m\trans{b_{m+1}}_{\mathcal{P}(j,k)} x_{m+1}$. By the inductive hypothesis, $s_m:A\in\Gamma'|x_m.$ There are two cases to consider:
\begin{itemize}
\item The transition $x_m \trans{b_{m+1}}_{\mathcal{P}(j,k)} x_{m+1}$ also exists in $\mathcal{R}(\Gamma',j,k)$. In this case, by $\mathcal{A}$-propagation, we have $s_{m+1} : A \in \Gamma'|x_{m+1}.$
\item The transition $x_m \trans{b_{m+1}}_{\mathcal{P}(j,k)} x_{m+1}$ is obtained from $\mathcal{R}(\Gamma',j,k)$ through the identification of unrealised leaf nodes with their closest ancestors.
There are two subcases:
\begin{itemize}
\item $x_m = \lambda(y)$ for some unrealised leaf node $y$ such that $\Gamma'|x_m = \Gamma'|y$, and $y \trans{b_{m+1}}_{\mathcal{R}(\Gamma',j,k)} x_{m+1}.$ Since $\Gamma'|x_m = \Gamma'|y$, we have that $s_m : A \in \Gamma'|y$ and it follows by $\mathcal{A}$-propagation that $s_{m+1} : A \in \Gamma' | x_{m+1}.$
\item $x_{m+1} = \lambda(y)$ for some unrealised leaf node $y$ such that $\Gamma'|x_{m+1} = \Gamma'|y$, and $x_m \trans{b_{m+1}}_{\mathcal{R}(\Gamma',j,k)} y.$ By $\mathcal{A}$-propagation, $s_{m+1} : A \in \Gamma' | y = \Gamma' | x_{m+1}.$
\end{itemize}
\end{itemize}
Thus when $Prove(S,\Gamma)$ calls $Prove_2(S,\Gamma,d)$, it will construct an $S$-stable sequent and terminate.
\end{proof}
\end{document}
\begin{document} \title{Openness and Reproducibility: Insights from a Model-Centric Approach } \author{ Bert Baumgaertner \and Berna Devezer \and Erkan O. Buzbas \and Luis G. Nardin } \institute{ Bert Baumgaertner \at Department of Politics and Philosophy, University of Idaho, Moscow, ID 84844-1104 USA\\\email{[email protected]} \and Berna Devezer \at Department of Business, University of Idaho, Moscow, ID 84844-1104 USA\\ \email{[email protected]} \and Erkan O. Buzbas \at Department of Statistical Science, University of Idaho, Moscow, ID 84844-1104 USA\\ \email{[email protected]} \and Luis G. Nardin \at Computer Science, National College of Ireland, Dublin, Ireland\\ \email{[email protected]} } \date{Received: date / Accepted: date} \maketitle \begin{abstract} This paper investigates the conceptual relationship between openness and reproducibility using a model-centric approach, heavily informed by probability theory and statistics. We first clarify the concepts of reliability, auditability, replicability, and reproducibility--each of which denotes a potential scientific objective. Then we advance a conceptual analysis to delineate the relationship between open scientific practices and these objectives. Using the notion of an idealized experiment, we identify which components of an experiment need to be reported and which need to be repeated to achieve the relevant objective. The model-centric framework we propose aims to contribute precision and clarity to the discussions surrounding the so-called reproducibility crisis. \keywords{reproducibility \and open science \and replication \and model-centric \and reliability \and confirmation} \end{abstract} \begin{quote} \footnotesize{``I also know this, that, although I might teach only what is true, you must deny me faith in my words. So in order that I do not perorate, but not leaving with any faith in all my words, I transfer the authority to anyone who wishes, to come and show me if what I say concerning the findings of anatomies is really true. For I have already shown thousands of times the twin [organs] that intercede the spermatic cords from the outer horns to the inside of the uterus, be the animal a goat or an ox or a donkey or a horse, that either wooden sticks round and long, even three or four times thicker, or the ones called sword-like probes, I have positioned through the horns. And this must be shown by anyone [that follows the same experimental method] after I and my pupils have died.'' \textit{Galen of Pergamon (c.130--210 AD)}} \end{quote} \section{Introduction} In the last decade, scientists have discussed whether science is facing a reproducibility crisis~\citep{Baker2016} and numerous explanations have been given for the proliferation of irreproducible results~\citep{Heesen2018,Munafo2017,Spellman2015}. Many of these explanations suggest that irreproducibility is the consequence of methodological and cultural practices that are erroneous at individual or system levels. Well-known examples of such questionable research practices include HARKing, p-hacking, and publication bias~\citep{Bishop2019,Spellman2015}. HARKing (hypothesizing after results are known) involves presenting a post hoc hypothesis conditional on observing the data as an a priori hypothesis~\citep{Kerr1998,Munafo2017}. P-hacking is a form of data dredging to find statistically significant results and is a misapplication of proper statistical methodology~\citep{Bruns2016,Munafo2017}. 
Publication bias involves omitting studies with statistically nonsignificant results from publications and is primarily attributed to flawed incentive structures in scientific publishing~\citep{Munafo2017,Open2015}. The common denominator across these three phenomena is a lack of transparency (of hypotheses, analyses, and studies, respectively) in research reporting. As such, openness is sometimes touted as a remedy to the supposed crisis of irreproducibility~\citep{Collins2014,Iqbal2016,Nosek2015}. For example, the National Academies of Sciences, Engineering, and Medicine view data sharing as a prerequisite for reproducible results~\citep{national2017fostering}. A number of tools are becoming increasingly available to make numerous aspects of science open~\citep{Ioannidis2014,Munafo2017,Nosek2015,Nosek2018}. As the thinking goes, the building blocks of a reproducible science are replication studies~\citep{Earp2015,Lebel2011,Schmidt2009}, and transparency makes replication studies possible by making research materials and processes explicit and accessible. The link between transparency, replication studies, and reproducibility is not straightforward, however. For example, results can be reproduced from experiments that are not completely transparent (say, because the same materials in question are not accessible, or methods are unknown) and reproducibility can be difficult to achieve, even when experiments are completely transparent and exact replication studies can be conducted. We are not convinced by the picture that openness, which would correct questionable research practices, is the straightforward solution to the purported reproducibility crisis. Even if openness were the solution, it is not clear to us why it would be. Our problem is an intellectual one: if we lack a clear understanding of the relationship between transparency, replications, and reproducibility, it is not clear how or why openness is supposed to help. Before calling foul on scientists or scientific practice, it behooves us to better understand these relationships and set our expectations right for a reproducible science. That is the aim of this paper. To set the stage, we first present a toy example and derive a number of relatively benign observations and conclusions regarding replications and reproducibility, drawing from probability theory and well-known facts in statistics (Section~\ref{toy}). This motivates our conceptualization of an idealized experiment and definitions of reliability, auditability, replicability, and reproducibility (Section~\ref{Concepts}). We are then in a position to present our analysis of openness, which comes in two parts. The first part emphasizes the epistemic aspect of reproducibility, which illuminates a historical debate between Newton, Goethe, and others (Section~\ref{open}). The second part provides a more detailed account of the components of an experiment that need to be shared (or don't) given some objective (Section~\ref{components}). Our analysis makes little to no mention of hypotheses. This is not an oversight. Our analysis is situated in what we call a model-centric view of science. From this perspective, scientific progress is made when old models are replaced by new ones and experiments are performed in order to compare models. The idea that scientific meaning is carried by models or that models play a central role otherwise is not novel and has been discussed both in scientific and philosophical literature~\citep{Glass2008,Taper2011,weisberg2012simulation}. 
This approach is distinct from a hypothesis-centric approach and we contrast these two in the discussion section (Section~\ref{discussion}). \section{Stage setting: A toy example and initial observations} \label{toy} Consider a large population of ravens where each raven is either black or white. Our goal is to estimate the true proportion of black ravens in the population, denoted by $\pi,$ given a random sample of ravens from this population. We assume that there is no classification or measurement error. That is, each observed raven can be identified correctly as black or white. We also assume that the population is well-mixed and observations are independent, each with probability of being black equal to the true proportion of black ravens in the population. This initial setup is generalizable and quite mundane from a statistical perspective. We could have considered any phenomenon that can be observed repeatedly, where a decision has to be made under uncertainty using observations. The probability calculus quantifies uncertainty, and statistical methods based on probabilistic models provide the machinery to perform inference about aspects of the mechanism generating these observations. Examples of these aspects include estimating a parameter or predicting future observations under an assumed probability model, or selecting between competing probability models. With this in mind, we now consider the following two experiments that could be conducted within the paradigm of our toy example. \begin{enumerate} \item[]{\textbf{Experiment 1.}} A simple random sample of $n$ ravens is collected. We observe $b$ black ravens in the sample. The likelihood of the observed data conditional on $\pi$ and $n$ is given by the binomial probability model, and is equal to \begin{equation}\label{eq:likbin} \mathbb{P}(b|\pi,n)=\binom{n}{b}\pi^b(1-\pi)^{n-b}. \end{equation} \item[]\textbf{Experiment 2.} A simple random sample of ravens is collected until $w$ white ravens are observed. We observe $b$ black ravens such that $b+w=n.$ The likelihood of the observed data conditional on $\pi$ and $w$ is given by the negative binomial probability model, and is equal to \begin{equation}\label{eq:liknegbin} \mathbb{P}(b|\pi,w)=\binom{n-1}{w-1}\pi^b(1-\pi)^w. \end{equation} \end{enumerate} This setup is not novel -- it could be found in a standard statistics text. Nevertheless, several observations about the setup are important to make explicit, as they provide us with conclusions that help guide our thinking about reproducibility and openness. \begin{enumerate} \item[] {\bf Observation 1:} The probability models in Equations~\eqref{eq:likbin} and \eqref{eq:liknegbin} are different because the parameter vectors $(\pi,n)$ and $(\pi,w)$ are different in these two models. Yet the population proportion of black ravens $\pi$ is a parameter of both models and therefore can be estimated. Assume we estimate the proportion of black ravens by the well-known statistical method of the maximum likelihood (ML). The ML estimate $\hat{\pi}_{ML}$ is the proportion of black ravens that maximizes the likelihood of the observed data. The estimates for $\pi$ under model \eqref{eq:likbin} and model \eqref{eq:liknegbin} are both equal to $b/n.$ \item[]{\bf Conclusion 1:} Some results of Experiment 1 can be reproduced by Experiment 2 even if the models in these experiments are different. In other words, the model in Experiment 2 is not required to be the identical model as in the Experiment 1 to reproduce some of its results. 
\item[] {\bf Observation 2:} Conclusion 1 cannot solely be explained by the use of the identical method in both experiments because ML is applied on realizations of different random variables in the two models. To see this, note that the maximum number of black ravens that can be observed in Experiment 1 is $n$ but in Experiment 2 it is the maximum number of black ravens in the population. If we use $\hat{\pi}_{ML}$ in Experiment 1 and $\hat{\pi}_{MM}$ in Experiment 2, where MM denotes the method of moments estimator, the estimates are still both equal to $b/n$, even though MM is based on a different principle than ML. \item[] {\bf Conclusion 2:} The method in Experiment 2 is not required to be the identical method as in Experiment 1 to reproduce some of its results. \item[] {\bf Observation 3:} The stopping rule of data collection in Experiment 1 is different from the stopping rule in Experiment 2. Experiment 1 stops when $n$ ravens are observed. Experiment 2 stops when $w$ white ravens are observed. Thus, the data structures in Experiment 1 and Experiment 2 are different. \item[] {\bf Conclusion 3:} The data structure in Experiment 2 is not required to be identical to the data structure in Experiment 1 to reproduce some of its results. \item[] {\bf Observation 4:} To assess whether the estimate $b/n$ obtained in Experiment 1 is reproduced in Experiment 2, Experiment 2 must know the result of Experiment 1. \item[] {\bf Conclusion 4:} Experiment 2 needs sufficient background information about results of Experiment 1 to assess whether the results are reproduced. Thus, the background information used in Experiment 1 and Experiment 2 must be different from each other. \item[] {\bf Observation 5:} Assume we repeat Experiment 1 a large number of times and we choose the estimator of $\pi$ as: $$\hat{\pi}=\frac{b}{n}+\mathbf{I}_{\{1^{st}\;raven\;is\;black\}},$$ where $\mathbf{I}_{\{A\}}=1$ if $A,$ and else zero. This estimator is equal to its true value $\pi$ in $100(1-\pi)\%$ of the experiments on average and equal to $2\pi$ in $100\pi\%$ of the experiments on average. \item[]{\bf Conclusion 5:} True results are not always reproducible. For example, if the true population proportion of black ravens is 0.8 and our experiments return a smattering of values close to the truth but only one that is exactly 0.8, then only one result is true. Note that ``true'' here involves the idea of accuracy because the estimates are real numbers. Providing a range of values instead would sacrifice accuracy to ensure truth: in the extreme, if we say that the proportion of black ravens is between 0 and 1, we guarantee a true claim, but it is not informative. For ease of exposition, we continue to use ``true'' with the precision provided by real numbers and handle the idea of closeness in other ways. \item[] {\bf Observation 6:} Assume we repeat Experiment 1 a large number of times and we choose the estimator of $\pi$ as: $$\hat{\pi}=\frac{1}{2}.$$ There is no statistical reason for this estimator to be equal to its true value $\pi$ since it is not using the observations as a representative sample from the population. However, it will always return the same value if we fix the sample size. \item[]{\bf Conclusion 6:} Perfectly reproducible results may not be true. Hence reproducibility is not sufficient for truth. \end{enumerate} A few remarks regarding the simplicity of our toy example. Observation 5 presents a biased estimator that only sometimes equals the true parameter value.
While this may or may not be a realistic choice of an estimator, our point in this observation is that the observed rate of reproducibility is a function of the methods that we apply to data to make inference and the true rate of reproducibility. Theoretically speaking, we cannot always expect true results to be reproducible. Similarly, sampling error and model misspecification~\citep{Box1976,Dennis2019} present other potential reasons for why true results may not always be reproducible. Observation 6, on the other hand, is in tension with actual scientific practice, which uses observations as a way for models to make ``contact'' with the world. Though contrived, this conclusion, as well as the others, carries over to relevant aspects of real science. For example, the estimator in Observation 6 could be a complicated algorithm, and an unintentional ``error'' in its code implementation could have the same effect. It is thus possible that two separate labs using the same code reproduce one another's results, not because they are accessing the truth, but because the same error is biasing the process to provide the same result. Of course, code verification and validation processes will help minimize such possibilities, but this only underscores the point: reproducibility may occur because of aspects of experiments or analyses and not ``nature'' (or whatever scientists are trying to make contact with)\footnote{Ian Hacking has referred to this phenomenon as ``the self-vindication of the laboratory sciences''~\citep{Hacking1992}.}. In brief, it is possible to reproduce the results of an experiment in a variety of ways, even in experiments that are not carbon copies of the original experiment. In order to provide a more detailed analysis of what does and doesn't need to be copied, and relatedly made open, we first provide some additional clarification of the idea of an experiment and define some key terms. \section{Concepts and Terminology} \label{Concepts} We start with the notion of idealized experiment, which is central to many forms of scientific inquiry~\citep{Devezer2018}. Given some background knowledge $K$ on a natural phenomenon, a scientific theory makes a prediction, which is in principle testable using observables, the data $D$. A mechanism generating $D$ is formulated under uncertainty. This mechanism is represented as a probability model $M_\theta$ parametrized by $\theta.$ The extent to which the parts of $M_\theta$ that are relevant to the prediction are confirmed by $D$ is assessed by a fixed and known collection of methods $S$ evaluated at $D.$ We denote $(M_\theta,D,S,K)$ by $\xi,$ an idealized experiment\footnote{Our conceptualization of the idealized experiment follows a parallel to~\citet{Hacking1992}'s taxonomy of laboratory experiments. Our $K$ and $M_\theta$ would be subsumed under his \textit{ideas}, $S$ under \textit{things}, and $D$ under \textit{marks}.}. We further refine two components of $\xi.$ First, we let $D\equiv\{D_v,D_s\}$ where $D_v$ denotes the observed values, and $D_s$ denotes the structural aspects of the data, such as the sample size, number of variables, units of measurement for each variable, and metadata. Second, we let $S\equiv\{S_{pre},S_{post}\}$ where $S_{pre}$ denotes the procedures, instruments, experimental design, and tools used prior to and necessary for obtaining $D_v$, and $S_{post}$ denotes the analytical tools and procedures applied to $D$ once it is obtained.
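To make this decomposition concrete, the following minimal sketch (in Python, with entirely hypothetical names and numbers of our own choosing) records the components of Experiment 1 from Section~\ref{toy}: the binomial model as $M_\theta$, the observed values $D_v$ and data structure $D_s$, the sampling procedure as $S_{pre}$, the maximum likelihood estimator as $S_{post}$, and the background knowledge $K$.

\begin{verbatim}
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class IdealizedExperiment:
    model: str                      # M_theta, specified up to unknown parameters
    data_values: Dict[str, Any]     # D_v: the observed values
    data_structure: Dict[str, Any]  # D_s: sample size, variables, metadata
    s_pre: str                      # S_pre: how the data were generated
    s_post: Callable[..., float]    # S_post: analysis applied to the data
    background: str                 # K: background knowledge

# Experiment 1 of the toy example, with hypothetical numbers.
xi = IdealizedExperiment(
    model="binomial(n, pi)",
    data_values={"b": 7},
    data_structure={"n": 10, "variables": ["colour"], "units": "ravens"},
    s_pre="simple random sample of n ravens",
    s_post=lambda dv, ds: dv["b"] / ds["n"],  # maximum likelihood estimator
    background="well-mixed population, no measurement error",
)

print(xi.s_post(xi.data_values, xi.data_structure))  # 0.7
\end{verbatim}

Applying the recorded $S_{post}$ to the recorded data returns the estimate $b/n=0.7$ in this hypothetical instance; such outputs are what we call results below.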
We define $R_i$ as a result which is obtained by applying $S_{post}$ to $D.$ We denote the set of all results obtainable from an experiment as $R\equiv\{R_1, R_2, R_3,\cdots\}.$ Figure~\ref{fig:elements} shows these elements in the context of our toy example of Section \ref{toy}. \begin{figure} \caption{Elements of three idealized experiments: experiment, replication experiment 1, and an alternative replication experiment 2.} \label{fig:elements} \end{figure} Using this notion of an idealized experiment, we adopt the following definitions. We will clarify why in some cases we define a term differently from the relevant literature. \begin{itemize} \item [] \textit{Reliability}: Propensity of a method $S$ to produce consistent results given the same inputs or initial conditions. Conditional on the same observation in the sample space, $M_\theta,$ and $K,$ a method $S_{pre}$ is reliable if it consistently produces $D$. A method $S_{post}$ is reliable if applying $S_{post}$ to $D$ consistently yields $R.$ For the rest of this manuscript, we make the simplifying assumption that $S$ is sufficiently reliable. \item [] \textit{Auditability}: The accessibility of all necessary information regarding the components of $\xi$ so that $S_{post}$ can be applied to $D$ to obtain $R$ independently of $\xi.$ Auditing is a procedure of screening for certain errors, including human and instrumental, that may be introduced in the process of obtaining $R.$ Examples are data entry and programming errors. If $S_{post}$ is not reliable, auditability of $\xi$ will not be affected but the auditing process will also be less reliable because it may not consistently yield $R$. \item [] \textit{Replicability}: An experiment $\xi$ is replicable if information about the necessary components of $\xi$ to obtain some $R_i$ is available and if these components can be duplicated, copied, or matched in an independent experiment $\xi'.$ A replication experiment $\xi'$ generates $D'$ independent from $D,$ conditional on the true data generating mechanism. We use $R_i$ instead of $R$ in this definition because $\xi'$ might only be interested in a subset of the results of $\xi.$ A common interpretation of $\xi'$ would be $(M_\theta,D',S,K)$, where the replication experiment differs from $\xi$ only in data values $D'_v$ while duplicating all other components of $\xi.$ Our analysis in Section~\ref{components} diverges from this view of replication studies and brings a more fine-tuned understanding of which components of an experiment need to be duplicated or matched for replicability. \item [] \textit{Reproducibility}: The rate of $R_i$ being reproduced. We say that $R_i$ is reproduced by $R_i'$ if $M_\theta$ and $M_\theta'$ are confirmed or disconfirmed in the same direction in a probabilistic confirmation sense, such that $R_i$ and $R_i'$ are deemed equivalent. For example, if $R_i$ is an estimate of parameter $\theta$, then $D$ confirms $\theta$ if the probability of $\theta$ after observing the data, $\mathbb{P}(\theta|M_\theta,D, K)$, is greater than the probability of $\theta$ before observing the data, $\mathbb{P}(\theta|M_\theta,K).$ Here, $R_i'$ reproduces $R_i$ if $\mathbb{P}(\theta|M_\theta',D', K')>\mathbb{P}(\theta|M_\theta',K').$ In order to reproduce $R_i$ for the right reasons, $S$ must be sufficiently reliable. \end{itemize} Notice that our definition of reproducibility focuses on the end products of experiments, the results, and not the other components of experiments that bring those products about.
This choice is fitting given the etymology of the term ``reproduce''. Our definition of replicability similarly respects its etymology---it incorporates the notion of repeating something; in our case, the components of an experiment. While our use of these terms is consistent with, if more refined than, other work~\citep{Leonelli2018,Radder1992,Radder1996,Open2015}, there is considerable variation in how these terms are used in the scientific literature~\citep{sep-scientific-reproducibility,Penders2019,Stodden2011}. We aim to sidestep potential confusion by laying out the definitions as we have and adhering to them for the remainder of this paper. From our definitions, we conclude that auditability is not necessary for replicability or reproducibility. For example, to audit $R_i$, we need to examine $D$ and implement $S_{post}$ on $D$. To replicate $\xi$ or to reproduce $R_i$, on the other hand, we do not need to know $D.$ Moreover, auditability is not sufficient for reproducibility either. A replication experiment $\xi'$ includes new data $D'$ that is generated by the true data generating mechanism. Even if $\xi$ is auditable, $R_i$ may not be reproduced by $R_i'$---an example of which was shown in Observation 5 in Section~\ref{toy}. Our definition of auditability closely tracks the idea of openness in science. However, whereas we have just stated that auditability is not necessary for reproducibility, the science reform movement, as described in the introduction, leads us to believe that openness is necessary for reproducibility. The tension we have built between auditability, openness, and reproducibility provides an opportunity to clarify their relationship. Doing so will ultimately lead us to a better understanding of reproducibility and the putative crisis. In the next section we work through a thought experiment to illustrate an important epistemic aspect of reproducibility that helps us enrich the concept of openness. \section{Open Science as a Logical Necessity to Epistemic Reproducibility} \label{open} We expand our toy example of ravens with a thought experiment, ``the reproducibility collaboratorium,'' to distinguish between two types of reproducibility: {\em in-principle} and {\em epistemic}. We argue that {\em open science}, which makes the necessary components\footnote{We further explain what we mean by ``necessary components'' in Section~\ref{components}.} of an experiment available for use by others, is a logical necessity for {\em epistemic} reproducibility of research results. We consider two collaboratoriums, a {\em closed collaboratorium} and an {\em open collaboratorium}, and imagine the following scenario common to both collaboratoriums: Each collaboratorium consists of Lab 1 and Lab 2 that conduct Experiment 1 and Experiment $1'$, respectively. Experiment $1'$ is conducted after Experiment 1, and ravens are sampled from one large population. All four labs assume identical models and data structure, and employ identical methods with the goal of estimating the population proportion of black ravens using their observations. Further, we assume that the number of black ravens observed in all four labs is the same. In the closed collaboratorium, Lab 1 and Lab 2 are isolated from each other and there is no information flow from Lab 1 to Lab 2. Crucially, because of this lack of information flow, Experiment $1'$ will match all the elements of Experiment 1 that are relevant to estimating the proportion of black ravens in the population by {\em chance}.
Such a match is improbable since there are many reasonable ways of conducting an experiment. Nevertheless, since Experiment 1 and Experiment $1'$ use identical models and methods and observed the same number of black ravens in the sample, they return the same estimate of the population proportion of black ravens. However, Experiment $1'$ does not have any information pertaining to the results of Experiment 1, and thus Lab 2 is in a position neither to learn from the results of Experiment 1, nor to claim that it reproduced the result of Experiment 1. If an external observer were to observe the experiments conducted in both labs, they could first learn from the result of Experiment 1. Starting with an updated view about the proportion of black ravens provided by the result of Experiment 1, they could then use the number of ravens observed in Experiment $1'$ to conclude that the result of Experiment 1 is indeed reproduced by Experiment $1'$. When there is no information exchange between the labs, however, there is no meaningful {\em epistemic} interaction between Experiment 1 and Experiment $1'$. This closed collaboratorium example highlights two important points: (1) If there is no open science in the sense of information flow from one experiment (or lab) to the next, it is improbable (but still possible) for a replication experiment to take place, and (2) the result of Experiment 1 can be said to be reproduced by Experiment $1'$ only if the result of Experiment 1 is available to Experiment $1'$. In order to acknowledge these points, we say that a result can only be \textit{in-principle} reproducible if there is no epistemic exchange between the labs that would accumulate evidence, except via some omniscient external observer. In the open collaboratorium we assume that Experiment 1 has a view on the population proportion of black ravens prior to observing ravens. In contrast to the closed version, however, in the open collaboratorium Lab 1 reports all information relevant to estimating the proportion of black ravens to Lab 2, which incorporates this information to conduct Experiment $1'$. Thus Experiment $1'$ matches the elements of Experiment 1 by a kind of {\em social learning}. We assume this information is transmitted in the background knowledge of the experiment. Starting with an updated view about the proportion of black ravens in the population and conditional on the number of black ravens observed, Lab 2 could conclude that they have indeed reproduced the result of Experiment 1. Thus in the open collaboratorium there is an {\em epistemic} interaction between the two experiments which contributes to the progress of science through accumulation of evidence. In contrast to the closed collaboratorium, in the open collaboratorium replication experiments are not contingent on chance but can be routinely performed via social learning, which gives us the notion of \emph{epistemic} reproducibility. We illustrate the two collaboratoriums in Figure~\ref{fig:collaboratorium}. \begin{figure} \caption{Closed collaboratorium: Experiment 1 starts with prior view of $1/2$ on proportion of black ravens.
Observes $2$ black ravens and updates their view to $3/4.$ Identical view, model, and methods are assumed and the same data values are observed in Experiment $1'$, but in the absence of an external observer the two results cannot be connected, thus reproducibility is only {\em in principle} and evidence does not accumulate in the absence of an external observer privy to both experiments. Open collaboratorium: Experiment $1$ starts with prior view of $1/2$ on proportion of black ravens. Observes $2$ black ravens and updates their view to $3/4.$ Experiment $1'$--a replication experiment--is informed of the result, model, methods and observes the same data values. Starting with a view of $3/4$ they update the proportion of black ravens to $5/6.$ (The updates $1/2 \to 3/4 \to 5/6$ correspond, e.g., to posterior means under a uniform prior on the proportion, updated after each pair of black ravens is observed.) Thus, Experiment $1'$ learns from Experiment 1 in a planned manner. The two results can be connected and thus reproducibility is {\em epistemic.}} \label{fig:collaboratorium} \end{figure} The distinction between in-principle and epistemic reproducibility is relevant to understanding debates about reproducibility. Consider a historical example. Isaac Newton (1643--1727~AD) believed his \textit{experimentum crucis} did not need to be replicated because it already presented conclusive proof of his theory. When Anthony Lucas (1633--1693~AD) failed to reproduce and even negated Newton's results in his replication experiments, Newton was angry: [it was not] ``the number of experiments, but weight to be regarded; and where one will do, what need many?''~\citep{Newton1676}. When faced with further criticism from his opponents, Newton refused to discuss Lucas's (unsuccessful) replications and invited Lucas to talk about his original \textit{experimentum crucis} instead~\citep[p.~275]{Westfall1981}. Newton's anti-replication attitude was heavily criticized by Johann Wolfgang von Goethe (1749--1832~AD) who believed that single experiments or even several of them would not be enough to prove any theory, and that it was a major task of scientists to design and conduct a series of contiguous experiments, each derived from the preceding one \citep{Ribe1985}. Goethe's insistence on replication experiments echoes a long and far-ranging history of advocates. Any coverage of this history here would be superficial and take us too far afield. Suffice it to say that this history is on the side of Goethe. How can we make sense of Newton's anti-replication attitude against a large backdrop of advocates represented by Goethe? If Newton's dismissive attitude towards replication is framed by in-principle reproducibility, we can understand why he regarded the replication of experiments as unnecessary. Newton must have already believed that his results were in-principle reproducible because of the underlying theory, hence, he did not deem any direct replication experiments necessary. This view would be consistent with his famous quote ``hypotheses non fingo'' (I feign no hypotheses) since he did not see his \textit{experimentum crucis} so much as testing a scientific hypothesis as demonstrating an already proven theory. This view, however, is not widely shared by scientists conducting experiments under uncertainty today. For most of empirical science today, scientists must exchange information and replicate each other's experiments in order to increase their confidence that new knowledge has been added (or, that they know that they have discovered some new fact) by way of reproducing results.
It seems that Goethe and others were concerned with epistemic rather than in-principle reproducibility. With epistemic reproducibility defined, we can take a closer look at the components $(M_\theta, D, S, K)$ defining $\xi$ and investigate what, more specifically, needs to be open for epistemic reproducibility. We turn to this topic in the next section. \section{Which components of an experiment need to be open for epistemic reproducibility?} \label{components} In recent years several tools have been developed to help facilitate openness in science~\citep{Collins2014,Munafo2017,Nosek2015,Wagenmakers2012}. In some cases the net is cast broadly, making as much information available as possible. In other cases intuition guides which components are relevant and need to be shared in the replication of an experiment. We are interested in using our model-centric framework to better understand what does and does not need to be made available and under what conditions. The context of epistemic reproducibility is our starting point. In addition, our analysis will take into account our toy example and initial observations, as well as our thought experiment about collaboratoriums. Moreover, our analysis will be structured by our concept of an idealized experiment. That is, we confine ourselves to identifying the components of experiments as we have conceived of them, leaving open the possibility that there are other ways of describing scientific processes relevant to replication and reproducibility. For our analysis, we can understand an experiment $\xi$ given by $(M_\theta, D, S, K)$ as a function that takes $D$ generated from the true data generating model as a random input and produces a result $R$ as a random output. Thus, $\xi$ is a random transformation from the space of data to the space of results under the assumptions specified by $M_\theta$ (the model), $S$ (the methods), and $K$ (background knowledge). Assuming that a model in an experiment captures the true data generating mechanism, we can investigate which components of the model, the method, and the data need to be transmitted to reproduce the results of an experiment in a replication experiment $\xi'$. We encapsulate the transmission of these components from $\xi$ to $\xi'$ in $K'$. By definition, this makes $K$ and $K'$ different, a version of an observation we already made in Sections~\ref{toy} and~\ref{Concepts}. Our results are grouped by components: i) (model) specific aspects of the model might have to be shared, but whether they do, and which parts of a model, depends on the objective; ii) (method) we specifically identify those aspects of $S'_{post}$ that need to be shared; iii) (data) here we distinguish between the structural aspects of data and the observed values, and what has to be open depends on whether we are doing an exact replication or a reproducibility experiment. \subsection{What parts of model $M_\theta$ are needed for reproducibility?} \label{model} Statistical theory shows us that for $\xi'$ to be able to reproduce {\em all possible} results of $\xi,$ the specification of model $M_\theta$ up to the unknown quantities needs to be transmitted to the replication experiment, such that $M_\theta$ and $M_\theta'$ are identical models. If an aspect of $M_\theta$ that has an inferential value is not transmitted to $\xi'$, that inferential value is lost, and the results relevant to that inferential value cannot be reproduced.
On the other hand, given an inferential objective to produce a specific result $R_i$, the aspects of $M_\theta$ that are irrelevant to that objective need not be transmitted to the replication experiment. This point is shown by Observation 1 and Conclusion 1 of our toy example in Section~\ref{toy}. When we consider estimating the population proportion of black ravens, the two models in our example are different from each other but have an identical parameter capturing the population proportion of black ravens and they both employ the number of black ravens observed in the sample in the same way for that particular objective. If $M_\theta$ is not identical to $M_\theta',$ then $\xi$ and $\xi'$ differ from each other with respect to the assumed data generating mechanism. What matters is whether these aspects affect the results of $S$ applied on $D$ for estimating the population proportion of black ravens. That is, even though the model in a replication experiment differs from the original, what matters is that the models share the relevant parameters, in this case the proportion of black ravens. We can demonstrate this formally. Equation~\eqref{eq:likbin} and \eqref{eq:liknegbin} give the likelihood of observing $b$ black ravens in a sample of size $n$ under the binomial and negative binomial probability models respectively and the maximum likelihood estimate for the population proportion of black ravens under both models is $b/n$. The reason for this is that binomial and negative binomial models are in the same likelihood equivalence class with respect to the objective of estimating the population proportion of black ravens: The maximum likelihood estimator can be derived by setting the expression resulting from taking the derivative of the logarithm of the likelihood function with respect to $\pi$ equal to zero and solving for $\pi$. For Equation~\eqref{eq:likbin} we get (writing $w=n-b$) \begin{equation}\label{eq:mlebin} \frac{d}{d\pi}[\log \mathbb{P}(b|\pi,n)]=\frac{d}{d\pi}\left[\log{\binom{n}{b}}\right]+\frac{d}{d\pi}[b\log\pi] + \frac{d}{d\pi}[w\log(1-\pi)], \end{equation} and for Equation~\eqref{eq:liknegbin} we get \begin{equation}\label{eq:mlenegbin} \frac{d}{d\pi}[\log \mathbb{P}(b|\pi,w)]=\frac{d}{d\pi}\left[\log {\binom{n-1}{w-1}}\right] + \frac{d}{d\pi}[b\log \pi] + \frac{d}{d\pi}[w\log (1-\pi)]. \end{equation} The difference between these two equations is only in the first terms, which are both equal to zero since they are derivatives of constants with respect to $\pi$. We get $\hat{\pi}=b/n$ as the unique solution in both models. The first term in Equation~\eqref{eq:likbin} and Equation~\eqref{eq:liknegbin} determines the stopping rule of the experiments. In $\xi,$ we stop the experiment when $n$ ravens are observed and the last raven can be black or white. In $\xi'$ we stop the experiment when $w$ white ravens are observed and the last observation must be a white raven. This difference between stopping rules means that: 1) $S_{pre}$ is different from $S_{pre}'.$ 2) Under our choice of $S_{post}$ as the maximum likelihood estimator the stopping rules in the two models are irrelevant for estimating the proportion of black ravens in the population. We also need to distinguish openness (and auditability) of $M_\theta$ from replicability of $M_\theta.$ In our binomial and negative binomial models, $M_\theta'$ is different from $M_\theta.$ However, these two models are compatible with respect to a certain inferential objective that allows for reproducing a specific $R_i,$ which is estimating the proportion of black ravens in the population.
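The likelihood equivalence derived above can also be checked directly. The following minimal sketch (in Python, with hypothetical numbers of our own choosing) evaluates the binomial and negative binomial likelihoods of Equations~\eqref{eq:likbin} and \eqref{eq:liknegbin} on a grid of values of $\pi$ and confirms that, up to the grid resolution, both are maximized at $b/n$.

\begin{verbatim}
import numpy as np
from scipy.special import comb

n, b = 10, 7          # hypothetical sample: 7 black ravens out of 10
w = n - b
grid = np.linspace(0.001, 0.999, 9999)

# Binomial likelihood (Experiment 1) and negative binomial likelihood
# (Experiment 2); as functions of pi they differ only by constant factors.
lik_bin = comb(n, b) * grid**b * (1 - grid)**w
lik_negbin = comb(n - 1, w - 1) * grid**b * (1 - grid)**w

print(grid[lik_bin.argmax()], grid[lik_negbin.argmax()], b / n)
# all three values agree (up to grid resolution): 0.7
\end{verbatim}

Since the two likelihoods are proportional as functions of $\pi$, any method that depends on the data only through the shape of the likelihood in $\pi$ returns the same estimate in both experiments.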
To establish this compatibility, $M_\theta$ should be open to $\xi'$ but does not need to be replicable or replicated. Specifically, to choose a negative binomial probability model in $\xi'$ to reproduce the estimate of the proportion of black ravens in the population obtained in $\xi$, we need to know that $\xi$ has used a binomial probability model, which ensures that $\xi'$ will use a probability model that has the same parameter---proportion of black ravens, with the exact same meaning as in $\xi$. Without $M_\theta,$ this compatibility cannot be established. This point is clearly illustrated in a recent article~\citep{Silberzahn2018} in which the same data $D$ was independently analyzed by twenty-nine research teams who were provided data and a research question that puts a restriction on which $R_i$s would be relevant for the purposes of the project. The teams were not, however, provided an $M_\theta,$ $S_{post},$ or $K.$ Teams ended up using a variety of models differing in their assumptions about the error variance and the number of covariates to analyze the same data set. The results differed widely with regard to reported effect sizes and hypothesis tests. So even when $D$ was open, the lack of specification with regard to $M_\theta$ yielded largely inconsistent results. Taking stock, our ravens example is deliberately simple to help in our analysis. State-of-the-art models are often complex objects. If the assumed model and its assumptions are complex, it might not always be clear which class of models contains others, and a matching model for $\xi$ may not even be available. Then, $M_\theta$ needs to be both auditable and replicable for reproducibility. This result is particularly important to communicate to scientists who primarily engage in routine null hypothesis significance testing procedures and may not be conventionally expected to transparently report their models. \subsection{What parts of a method are needed for reproducibility?} \label{method} In this section, we focus on $S_{post}$ (the analytical methods applied to data) but leave $S_{pre}$ (experimental procedures to generate data) unspecified. Studying $S_{pre}$ is complicated because for a given model $M_\theta,$ the number of ways that an experiment can be designed is not well specified under statistical theory, and procedures and measurements to test the same research question can vary. Even in our simple ravens example, a raven can be observed for its color by an investigator using their eyes, but a blind investigator may opt for a mechanical pigment test. This experimental design issue is sometimes referred to as ``hidden moderators'' when explaining why results of replication experiments differ from original experiments~\citep{Baribault2018}. In addition, the issues surrounding measurement error have been studied extensively and measurement error might be a potential factor exacerbating irreproducibility~\citep{Loken2017,Stanley2014}. What we can say is that, at minimum, auditability of $S_{pre}$ appears to be essential for reproducibility. Once all experimental procedures, design details, and instruments are reported, their exact replicability becomes less of an issue for $\xi'$ to the degree that measurement error can be explicitly modeled in case of any deviation from $S_{pre}$. Turning our attention to analytical methods applied to data, Observation 2 and Conclusion 2 in Section~\ref{toy} show that $S_{post}$ and $S_{post}'$ do not have to be identical.
Some statistical methods are mathematically equivalent even though their motivations are different. For example, the maximum likelihood estimator and the method of moments estimator are equivalent in estimating the population proportion of black ravens in our toy example. We can demonstrate this formally. Consider the binomial model specified by Equation~\eqref{eq:likbin}. If $S_{post}$ is the maximum likelihood estimator motivated by the likelihood principle, the standard procedure to obtain it is to take the derivative of the log-likelihood given in Equation~\eqref{eq:mlebin}, set it equal to zero, and solve for $\pi:$ \begin{eqnarray*} \frac{d}{d\pi}\left[\log{\binom{n}{b}}\right]+\frac{d}{d\pi}[b\log\pi] + \frac{d}{d\pi}[w\log(1-\pi)]&=&0,\\ b/\pi - w/(1-\pi)&=&0, \end{eqnarray*} so we have $$\hat{\pi}_{ML} = b/(b+w) \Rightarrow \hat{\pi}_{ML} = b/n.$$ On the other hand, if $S_{post}$ is the method of moments estimator, the motivation is to set the population mean equal to the sample mean and solve for $\pi.$ The population mean in a binomial model with sample size $n$ and the probability of observing a black raven $\pi$ is $n\pi$ and the sample mean is the number of black ravens in the sample $b.$ So the method of moments estimator is $$n\hat{\pi}_{MM}=b\;\Rightarrow\; \hat{\pi}_{MM}=b/n,$$ equivalent to $\hat{\pi}_{ML}.$ Furthermore, other methods are mathematically equivalent even though their very interpretation of probability differs. For example, maximum likelihood estimates and the posterior mode in Bayesian inference under a uniform prior distribution on the parameters are equivalent regardless of the true data generating model. Conversely, there are also methods designed for a specific goal but that do not produce identical $R$ when applied to the same $D$. For example,~\citet{Devezer2018} shows that the choice between Akaike's Information Criterion and the Schwarz criterion might influence the reproducibility of results in a model comparison. In addition, a statistical method used to draw inferences is often conditional on a fully specified statistical model, up to a finite number of unknown parameters of that model. Consider the following situation. A scientist who designs $\xi'$ is given only the following information about $\xi:$ ``A population has only black and white ravens, and ravens are sampled to perform inference about the error on the population proportion of black ravens.'' This is an underspecified model that might easily lead to different methods of choice for $S_{post}$ and $S_{post}'$---the estimator of the error of the population proportion of black ravens. The scientist might assume a large population and sample the ravens with replacement to build a binomial probability model for the data generating process. On the other hand, she might also assume a small population and sample the ravens without replacement to build a hypergeometric probability model. The errors associated with these estimates are different even though $S_{post}$ and $S_{post}'$ might be motivated by one principle, such as maximum likelihood. Further, these differences are likely to be exacerbated if the methods are motivated by different principles.
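As a rough numerical illustration of this last point, the following minimal sketch (in Python, with hypothetical numbers, and assuming the standard finite-population correction for sampling without replacement) returns the same point estimate under both sampling assumptions but different assessments of its error.

\begin{verbatim}
import math

n, b = 20, 14          # hypothetical sample: 14 black ravens out of 20
pi_hat = b / n

# Error under a binomial model (sampling with replacement, or an
# effectively infinite population).
se_binomial = math.sqrt(pi_hat * (1 - pi_hat) / n)

# Error under a hypergeometric model (sampling without replacement from a
# small population of assumed size N), using the usual finite-population
# correction factor (N - n) / (N - 1).
N = 50
se_hypergeom = math.sqrt(pi_hat * (1 - pi_hat) / n * (N - n) / (N - 1))

print(pi_hat, se_binomial, se_hypergeom)  # same estimate, different errors
\end{verbatim}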
From these examples, we infer that $S_{post}'$ either needs to be identical to $S_{post}$ or should match it with regard to desired inference (point estimation, interval estimation, hypothesis testing, prediction, model selection) and in a way to allow for reproducing a specific $R_i.$ To match, $S_{post}$ should be open or auditable to $\xi'$ but does not need to be replicable or replicated. In order to use the method of moments estimator to estimate the proportion of black ravens in a replication experiment, we need to know that $\xi$ has used a maximum likelihood estimator. This way, it can be ensured that $\xi'$ will either use the same estimator as $\xi$ or will match it. \subsection{What parts of data are needed for reproducibility?} \label{data} In Section \ref{Concepts} we defined $D\equiv\{D_v,D_s\}$ where $D_v$ denotes the observed values, and $D_s$ denotes the structural aspects of the data, such as the sample size, number of variables, units of measurement for each variable, and metadata. If $D\equiv\{D_v,D_s\}$ and $D'\equiv\{D_v',D_s'\}$ are the data obtained in an experiment and a replication experiment respectively, $D'$ is often thought of as the new data of the {\em old kind} in the sense that the values $D_v'$ are independent from the values $D_v,$ but that the data structures $D_s$ and $D_s'$ are identical. Observation 3 and Conclusion 3 of our toy example constitute a counterexample to this case, where $D_s$ and $D_s'$ can be different and the result $R_i$ is still reproduced by $R_i',$ if these differences do not affect how the method $S$ evaluates $D$ and $D'$ for the inferential objective. While $D_s$ does not need to be exactly duplicated in $\xi'$ for reproducibility of $R_i$, the parts of it relevant to obtaining $R_i$ need to be open. Consider a situation where some ravens cannot be classified as black or white (perhaps due to $S_{pre}$ not being sufficiently sensitive) and are recorded as missing data. In this case the number of missing data points is carried in $D_s.$ If the estimate of population proportion of black ravens is reported as $b/n,$ without $D_s,$ $\xi'$ would not know whether the number of missing data points is treated as part of $n$ or left out. Therefore, $\xi'$ cannot ascertain whether it has reproduced $R_i.$ From this example, we infer that $D_s'$ either needs to be identical to $D_s$ or should match it with regard to desired inference and in a way to allow for reproducing a specific $R_i.$ For similar reasons, $D_s$ also needs to be open for the purposes of auditability of $R.$ Data sharing is often viewed as a prerequisite for a reproducible science~\citep{national2017fostering,Hardwicke2018,Molloy2011,Stodden2011}. Our analysis suggests this is potentially misconceived. Using the components of the idealized experiment and statistical theory, we have shown that to reproduce a result $R_i$ of $\xi:=(M_\theta,D,S,K)$, one needs to know aspects of $M_\theta,$ $D_s$, and $S$ relevant to obtaining $R_i.$ Moreover, a replication experiment $\xi'$ need not copy these aspects of $M_\theta,$ $D_s$, and $S$ to reproduce $R_i.$ We also show that having open access to $D_v$ has no bearing on designing and performing a replication experiment $\xi'$ or reproducibility of $R_i.$ $\xi'$ aims to reproduce the result, not the data. That said, openness of $D_v$ is necessary for auditability of $\xi$. Auditability, replicability, and reproducibility are distinct concepts and they need to be assessed separately when evaluating individual experiments.
While some level of open scientific practices is necessary to obtain reproducible results, we argue that open data are not a prerequisite. There might be other benefits to open data, from auditability of results to enabling further research on the same data; however, the distinction we draw matters particularly in situations where there may be arguably valid concerns, such as ethics regarding data sharing~\citep{Borgman2012}. We recommend that open data be evaluated on its own merits which has been discussed extensively~\citep{Janssen2012} but not as a precursor of reproducibility. \section{Discussion}\label{discussion} Open practices in scientific inquiry have long been intuitively proposed as a key to solve the issues surrounding reproducibility of scientific results. However, a formal framework to validate this intuition has been missing and is needed for a clearer discussion of reproducibility. We have contributed to such a theoretical framework here. We finish with some discussion about how we see our project situated in the landscape of theories of science. Within the last century, an important approach that has been taken towards understanding reproducibility was motivated by the work of Karl Popper, particularly The Logic of Scientific Discovery and the notion of falsifiability. \citet{Popper1959} states that ``non-reproducible single occurrences are of no significance to science'' (p. 86) and they would not be useful in refuting theories. Here Popper appears to rely on some notion of reproducibility to establish his falsifiability criterion of science. At the very least, Popper would agree that in order to refute a theory or [scientific] hypothesis, the experimental result must indeed be a falsifying counterexample. There is, however, a tension between Popper's emphasis on falsifiability and his skepticism towards confirmation. To generate confidence that a counterexample is genuine and not an error or one-off case is to ensure that the result is reproducible. To do this, the original experiment should be replicated. The problem here is that this process of replication and reproducibility is a kind of confirmation, one that establishes or increases confidence that a counterexample is genuine. Only once we establish that we have a genuine counterexample can we then proceed in Popperian-style towards the falsification of the hypothesis or theory in question. Popper's falsifiability view has had considerable impact on 20th century science and reproducibility has implicitly been accepted as part of scientific activity. Furthermore, Popper's view on falsifiability motivates what we call the hypothesis-centric approach to understanding reproducibility. On this approach, an experiment is performed in order to disconfirm a hypothesis, and thereby falsify a theory~\citep{Glass2008}. One example of taking the hypothesis-centric approach in the contemporary literature about reproducibility is~\citet{McElreath2015}. There is an alternative. Whereas Popper places a great deal of emphasis on the concepts of theories and hypotheses, much of current science, particularly given the rise of Bayesianism, focuses instead on models and model comparison~\citep{Burnham2003}. Since the exponential increase in computing power it has become possible to perform the necessary computations and complete analyses that were previously practically impossible. An important consequence of this change is the ability to consider, compare, and contrast different models that explain or predict data. 
As a result, vague scientific hypotheses can be given more rigorous specifications in terms of statistical models, which in turn provides precision in testing. Our work here is an example of this alternative, \emph{model-centric} approach to study reproducibility. An experiment is performed in order to compare models and scientific progress is made as old models are replaced by new models. That is, whereas the hypothesis-centric approach typically assumes a model structure and tests a hypothesis, in a model-centric approach the whole model is considered and compared to other models. In this model-centric view, hypotheses are subsumed under models such that a hypothesis represents a specific statement about a model parameter. In addition to being more general, a model-centric approach avoids certain challenges that a hypothesis-centric approach faces, in particular the underdetermination of theory by data and holism~\citep{Quine1976}. Theories or hypotheses can never be tested in isolation because they come as a bundle and with a group of background assumptions. For example, given evidence that is inconsistent with some hypothesis, one option is to reject the hypothesis and corresponding theory. Another option, however, is to place blame on the measuring instruments, the experimental procedures, or some background assumption. How does one decide which option to exercise? Scientists are generally adept at trouble-shooting issues and figuring out whether a particular instrument is malfunctioning, for example. They do this by holding fixed one set of assumptions (e.g., background theory and the reliability of the experimental setup) and checking the reliability of others (e.g., whether a particular measuring instrument is working as expected). And that is the point about holism: there is always some network of assumptions being held fixed when determining where to place the blame for negative results. Claiming to test a hypothesis in isolation is pretending that there is no network of assumptions, but there is, and consequently, a hypothesis cannot be tested in isolation. The model-centric approach does a better job of making background assumptions more explicit than the hypothesis-centric approach. Moreover, the model-centric approach is better suited to capture the scientific practice of model comparison, an integral part of the open, collaborative practices that have been proposed to solve issues surrounding reproducibility. Our analysis underscores the importance of transmitting model-specific information for reproducibility of results---a condition often readily satisfied in a model-centric framework. We believe that a hypothesis-centric approach is too impoverished to provide the necessary resources for a formal theory of such open practices. \section*{Conclusion} We used our model-centric approach and formalization of reproducibility and related concepts, to distinguish between reliability, auditability, replicability, and reproducibility. The relationship among them is not as straightforward as it may seem and a need for a nuanced understanding is warranted. For example, a perfectly auditable experiment does not necessarily lead to reproducible results, and an experiment that does not open its data does not necessarily yield irreproducible results. Nevertheless, irreproducible results sometimes raise suspicion and discussions typically turn towards concerns regarding the transparency of research or validity of findings. 
These discussions, however, use heuristic analogs of the concepts of reliability, auditability, replicability, and reproducibility. Such heuristics might not hold and can lead to erroneous inferences about research findings and researchers' practices. Relatedly, we have provided some details regarding which components need to be made open relative to some objective, and which don't. For example, while necessary for auditability of experiments, data sharing is not a prerequisite for reproducible results, as suggested by NASEM, but other components of an experiment are. On the other hand, reporting model details, such as modeling assumptions, model structure, and parameters, becomes critical for improving reproducibility. Notably, even in recent recommendations for improving transparency in reporting via practices such as preregistration, models are typically left out while transparency of hypotheses, methods, and study design are emphasized~\citep{Nosek2018,Veer2016}. Our framework is useful in improving the accuracy of judgments made regarding replication and reproducibility. The literature on replication crisis cites several putative causes of irreproducibility, including p-hacking, HARKing, and publication bias. The social epistemology literature contributes other underlying causes such as the rush to publish results due to perverse reward structures within the publication system~\citep{Heesen2018}. Our analysis shows that neither elimination of questionable research practices nor correction of scientific reward structures would necessarily lead to reproducible results, as there are other impediments to reproducibility logically preceding lack of rigor or transparency. For example, the rate of reproducibility is a parameter of the system and therefore is a function of truth. Some level of irreproducibility will always remain as a component of the system and we can only hope to attain a level of reproducibility within the bounds of model misspecification and sampling error. And even under the assumption of an ideal version of science that is free of methodological and cultural errors, employs reliable methods, and does not operate under model misspecification, we might still not be able to make a true discovery despite having improved the rate of reproducibility. We are optimistic that our analysis improves the level of precision in discussions surrounding the drivers of epistemic reproducibility. \end{document}
\begin{document} \title[Gaussian noise measure]{Nash twist and Gaussian noise measure for\\ isometric $C^1$ maps} \author[A. Dasgupta]{Amites Dasgupta} \address{Statistics and Mathematics Unit, Indian Statistical Institute\\ 203, B.T. Road, Calcutta 700108, India.\\ e-mail:[email protected]\\ } \author[M. Datta]{Mahuya Datta} \address{Statistics and Mathematics Unit, Indian Statistical Institute\\ 203, B.T. Road, Calcutta 700108, India.\\ e-mail:[email protected]\\ } \keywords{Isometric maps, Nash twist, Gaussian noise measure} \thanks{2010 Mathematics Subject Classification: 60F17, 60H05} \begin{abstract}Starting with a short map $f_0:I\to \mathbb R^3$ on the unit interval $I$, we construct a random isometric map $f_n:I\to \mathbb R^3$ (with respect to some fixed Riemannian metrics) for each positive integer $n$, such that the difference $(f_n - f_0)$ goes to zero in the $C^0$ norm. The construction of $f_n$ uses the Nash twist. We show that the distribution of $ n^{3/2} (f_n -f_0)$ converges (weakly) to a Gaussian noise measure. \end{abstract} \maketitle \section{introduction} The problem of associating a measure to the solution space of a differential equation has been mentioned by Gromov in an interview with M. Berger \cite{berger}. Our point of interest lies in the space of isometric immersions of a Riemannian manifold $(M,g)$ into a Euclidean space $\mathbb R^q$ with the canonical metric $h$. In 1954, Nash proved that if a manifold $M$ with a Riemannian metric $g$ can be embedded in a Euclidean space $\mathbb R^q$, $q > n+1$, then one can construct a large class of isometric $C^1$ embeddings (\cite{nash}). If the initial embedding $f_0:M\to \mathbb R^q$ is strictly short, that is, if $g-f_0^*h$ is a Riemannian metric on $M$, then the isometric embeddings can be made to lie in an arbitrary $C^0$ neighbourhood of the initial embedding. In the following year, Kuiper showed that the bound can be improved to $q\geq n+1$ (\cite{kuiper}). The Nash process is an iterative process; each stage of the iteration consists of several small steps, each of which involves a choice of a rapidly oscillating function defining a perturbation, called a Nash twist. Starting with the short map $f_0$, one constructs a sequence $\{f_n\}$ of short immersions, where $f_n$ is obtained from $f_{n-1}$ possibly through infinitely many steps, each involving a Nash twist. Successive Nash twists performed on $f_n$ result in a correction to the induced metric $f_n^*h$. These corrections do not yield an isometric immersion at any stage but each $f_n$ still remains strictly short; however, $f_{n+1}$ is better than $f_n$ in the sense that the induced metric $f_{n+1}^*h$ is closer to $g$ than that in the previous stage. The Nash twist is a controlled perturbation - the $C^1$ distance between any two consecutive maps $f_n$ and $f_{n+1}$ remains bounded by the distance between $g$ and the induced metric $f_n^*h$; furthermore $f_n$ can be made to lie in an arbitrary $C^0$ neighbourhood of $f_0$. As a result the sequence converges to a $g$-isometric $C^1$ immersion. Nash-Kuiper theory was later generalised by Gromov into the theory of convex integration (\cite{gromov}).
Camillo De Lellis and L\'{a}szl\'{o} Sz\'{e}kelyhidi, Jr., have briefly mentioned the probabilistic approach to convex integration (\cite{lellis}) by pointing to the fact that convex integration can be seen as a control problem: at each step of the iteration, one has to choose an admissible perturbation, consisting essentially of a (plane-)wave direction and a frequency. In the present article we shall consider the domain space of the maps to be 1-dimensional. In dimension 1, the solution to the $C^1$-isometric embedding problem does not require an infinite Nash process. Isometric maps $f:\mathbb I\to \mathbb R^3$ with respect to the standard Riemannian metrics can be obtained simply by integrating a curve in the 2-sphere and this reduces the Nash process to a single stage. However, Nash twists play an important role in controlling the distance between the initial and the perturbed map - they can produce isometric immersions within an arbitrary $C^0$ neighbourhood of the initial embedding $f_0$. In order to keep the solutions sufficiently close to the original embedding, Nash introduced a periodic function of high frequency (or rapidly oscillating function) into the integration process. We may remark here that in higher dimensions, each step in the Nash process can be reduced to a parametric version of the 1-dimensional Nash process described above. The $C^0$-closeness will then translate into $C^\perp$-closeness (refer to \cite[p.~170]{gromov}). However, the problem in dimension greater than 1 is considerably more difficult and we plan to take it up in the future. It is indeed the case that as the frequency in the Nash twist goes to infinity the distance between the initial short map and the resulting isometric immersion goes to zero in the $C^0$-norm. This motivates us to study the distribution of $f-f_0$ with respect to an appropriate measure on the space of isometric immersions $f:\mathbb I\to \mathbb R^3$. We naturally incorporate randomness in the Nash twist, which translates into a Gaussian noise measure for the difference function $f-f_0$. For each positive integer $n$, we construct random functions $f_n$ such that the difference $(f_n(.) - f_0(.))$ goes to zero (in $C^0$ norm). We scale it up and examine the distribution. We show that the distribution of $ n^{3/2} (f_n - f_0)$ converges (weakly) to a Gaussian noise measure. Thus the random solutions $f_n(.)$ can be thought of as distributed like $f_0+n^{-3/2}$(Gaussian noise) for large $n$. In Theorem~\ref{main} we state this rigorously, identifying the weak limit of $n^{3/2}\int_0^t(f_n - f_0)(s)\,ds$ as a Gaussian process. Section 3 is devoted to the proof, which essentially requires weak convergence of random walks. In the last section, we compare the above process with a class of extensively studied Gaussian processes. \section{Notation and main result} Let $M$ be a smooth manifold with a Riemannian metric $g$. The isometric immersions $f:M\to \mathbb R^q$ are solutions to the following system of partial differential equations: \[\langle\frac{\partial f}{\partial u_i},\frac{\partial f}{\partial u_j}\rangle = g_{ij},\ \ \ i,j=1,2,\dots,n,\] where $u_1,u_2,\dots, u_n$ is a local coordinate system on $M$, $g_{ij}$, $i,j=1,2,\dots,n$, are the matrix coefficients of $g$ and $\langle\ ,\ \rangle$ denotes the inner product on $\mathbb R^q$.
To motivate the concept of randomness and measure we consider the following simple case where the domain space $M$ is the unit interval $[0,1]$ and hence an arbitrary metric on $M$ is of the form $g\,dt^2$, where $g : [0, 1] \rightarrow \mathbb{R}_+$ is a smooth positive function on $[0,1]$. The shortness condition on a smooth regular curve $f_0 : [0, 1] \rightarrow \mathbb{R}^3$ then translates into the pointwise inequality $0<||\partial_u f_0|| < \sqrt{g}$ on $[0,1]$. Given such an $f_0$ we want to find a function $f_n : [0, 1] \rightarrow \mathbb{R}^3$ such that $||\partial_u f_n|| = \sqrt{g}$ (which means that $f_n$ is isometric) and the $C^0$-distance between $f_n$ and $f_0$ decreases with $n$. The following considerations illustrate the \emph{Nash twist} in dimension 1. Suppose that $(X,Y,Z)$ is the Frenet-Serret frame along $f_0$ (assuming that such a frame exists at all points $u\in [0,1]$), where $X$ is the unit tangent along the curve. Then $(Y,Z)$ span a plane field $J$ along $f_0$ perpendicular to $X$. Consider the curve $Y(u)\cos 2\pi s+Z(u)\sin 2\pi s$, $0 \leq s \leq 1$, on $J(u)$ for each fixed $u\in [0,1]$. Then with $r^2 = g - ||\partial_u f_0||^2$ the function $$ \partial_u f_0 + r(u) (Y(u)\cos 2\pi s+Z(u)\sin 2\pi s)$$ has the required Euclidean norm $\sqrt{g}$ and over $s \in [0, 1]$ integrates to $\partial_u f_0$ (the convex integration condition). Now, a Nash twist of $f_0$ is given by $$f_n(t) = f_0(0) + \int_0^t \{ \partial_u f_0(u) + r(u) (Y(u)\cos 2\pi nu+Z(u)\sin 2\pi nu) \} du,$$ where $n$ connects with the frequency of the periodic functions, namely $\cos 2\pi nu$ and $\sin 2\pi nu$, mentioned in the previous section. Clearly, $f_n$ is a solution of the isometry equation since $(Y(u),Z(u))$ is an orthonormal basis of $J(u)$. We want to show that the function (or the difference curve) $\int_0^t r(u) (Y(u)\cos 2\pi nu+Z(u)\sin 2\pi nu) du$ is uniformly small over $[0, 1]$. We do this for the two integrals separately with some notational abuse. Applying integration by parts we get \begin{eqnarray*} \int_0^t r(u) \cos {2\pi nu}\, du &=& \frac{1}{2\pi n}r(t)\sin 2\pi nt - \frac{1}{2\pi n}\int_0^t r'(u)\sin 2\pi nu\, du, \end{eqnarray*} which in absolute value is bounded by const.$\,\frac{1}{2n\pi}$ as $r$ is a smooth function on the interval $[0,1]$. The same estimate also applies to $\int_0^t r(u)\sin {2\pi nu}\, du$ and thus we conclude uniform closeness of $f_n$ and $f_0$. This difference curve can also be considered for a random path by replacing the function $H_n(u) = nu$, which enters the twist through $\cos 2\pi H_n(u)$ and $\sin 2\pi H_n(u)$, by random choices. The $H_n$ in the above example can be obtained by integrating the constant function $h_n=n$ over $[0,1]$. Instead we take $h_n=\pm n$ on each subinterval $(k/n, (k+1)/n]$ independently with equal probability. These choices are actually explicitly mentioned by Gromov except for the probability part. Then integration of the resulting function will give a random function $H_n(\omega, u)$ whose graph is that of a (linearly interpolated) simple random walk. To see this calculation we consider a sequence of independent and identically distributed random variables $X_k$ which take the values $\pm 1$ with equal probability on a probability space $(\Omega,\mathcal F, P)$, where $\Omega$ can be taken as the infinite product space $\{-1,+1\}^{\mathbb N}$, and consider the function $h_n(\omega, x) = nX_k, \ (k-1)/n \leq x < k/n$. 
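Before turning to the random version, the deterministic twist above can be checked numerically. The following minimal sketch (not part of the construction) assumes $g\equiv 1$ and a straight short curve $f_0$, so that the normal plane field $(Y,Z)$ can be chosen by hand rather than through the Frenet-Serret frame; it verifies the isometry $||\partial_u f_n||=\sqrt{g}$ and the $O(1/n)$ bound on the sup-norm of the difference curve.
\begin{verbatim}
import numpy as np

# Minimal numerical illustration of the Nash twist in dimension 1
# (hypothetical data: g = 1 and a straight short curve f_0, so the
# normal frame (Y, Z) is chosen by hand rather than via Frenet-Serret).
n, N = 50, 100000                 # twist frequency and quadrature points
u = np.linspace(0.0, 1.0, N)
du = u[1] - u[0]

df0 = np.array([0.5, 0.0, 0.0])   # derivative of f_0 (constant, short: |df0| = 0.5 < 1)
Y = np.array([0.0, 1.0, 0.0])     # orthonormal basis of the normal plane J
Z = np.array([0.0, 0.0, 1.0])
r = np.sqrt(1.0 - 0.25)           # r^2 = g - |df0|^2 with g = 1

# derivative of the twisted curve f_n
dfn = (df0[None, :]
       + r*(np.outer(np.cos(2*np.pi*n*u), Y) + np.outer(np.sin(2*np.pi*n*u), Z)))
print(np.allclose(np.linalg.norm(dfn, axis=1), 1.0))   # isometry: |f_n'| = sqrt(g)

# difference curve f_n - f_0; a Riemann sum suffices for this check
diff = np.cumsum(dfn - df0[None, :], axis=0)*du
print(np.abs(diff).max(), 1.0/(np.pi*n))               # sup-norm is O(1/n)
\end{verbatim}
The two printed values are of the same order, in line with the integration by parts estimate above.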
With this choice of $h_n$, each subinterval of length $1/n$ contributes $\pm 1$ to $H_n$ and hence \[H_n(\omega, u) = \int_0^u h_n(\omega, x) dx = S_k \pm n(u - (k/n)), \ \ k/n \leq u < (k+1)/n,\] where $S_k = X_1 + \cdots + X_k$. The random function $H_n(\omega, \cdot)$ can thus be interpreted as the random walk obtained by linearly joining the partial sums $S_k$, $1\leq k\leq n$. If we now consider the components of the random difference curve \[\int_0^t r(u) [Y(u) \cos {2\pi H_n(\omega, u)}+Z(u) \sin {2\pi H_n(\omega, u)}] du,\] $C^0$-closeness will follow in the same way. However, from the viewpoint of probability, when a sequence of random variables $V_n$ converges almost surely to a random variable $V$, often the difference $V_n - V$, after scaling, converges in a suitable sense to a nontrivial limiting random variable. When the convergence is weak convergence, the resulting distribution of the limit is a measure, on a suitable space, associated to the sequence (\cite{billingsley}). In our case the graph of $\int_0^t e^{2\pi iH_n(\omega, u)} du$ (omitting $r(u), Y(u), Z(u)$ for simplicity now) looks similar (in a probabilistic sense of considering all possible paths) over equal intervals and is independent over disjoint intervals. However, the limit of its normalization is not a function but a random distribution (in the sense of generalized functions); it is called the Brownian white noise measure. To keep track of the random difference curve one tracks its rescaled integral, which converges weakly to Brownian motion. A rigorous formulation (bringing in $r(u)$, $Y(u)$, $Z(u)$) is the following: \begin{theorem} The sequence of processes $2 \pi n^{3/2} \int_0^t (f_n-f_0)(s) \,ds, 0 \leq t \leq 1$, converges weakly to $$\int_0^t r(u) Z(u) dW(u) - \int_0^t \int_0^s \partial_u (r(u) Z(u)) dW(u)\, ds, 0 \leq t \leq 1,$$ as $n \rightarrow \infty$, where $W(\cdot)$ denotes a standard Brownian motion, whose distribution is the Wiener measure on $C[0, 1]$, the space of real valued continuous functions on the interval $[0,1]$. \label{main}\end{theorem} In this sense, the limit of the scaled random difference curve is locally the Gaussian noise measure $r(t) Z(t) dW(t) - (\int_0^t \partial_u (r(u) Z(u)) dW(u))\, dt$ (see \cite{mitoma}). The rate of convergence may also be interesting from the probabilistic point of view. \section{Proof of the main result} We first keep track of the integral (omitting $r(u), Y(u), Z(u)$) \begin{equation}\label{wn} \int_0^s e^{2 \pi i H_n(u)} du = \int_0^s e^{2 \pi i n u } du = (1/2\pi n)[ \sin (2 \pi ns) - i \{\cos (2 \pi n s) - 1 \}], 0 \leq s \leq 1. \end{equation} The main observation about this function on the right is that over each $(k/n, (k+1)/n]$ interval the imaginary part is the graph of $1 - \cos (2 \pi n x), 0 < x \leq 1/n$, and the real part is the graph of $\sin (2 \pi n x), 0 < x \leq 1/n$. The periodic behavior along with the factor $(1/2\pi n)$ indicates the $C^0$-closeness as $s$ varies in $[0, 1]$. Plugging in $H_n(\omega, u)$ described in the previous section, the corresponding integral $\int_0^s e^{2 \pi i H_n(\omega, u)} du$ differs from (\ref{wn}) in randomly inverting the imaginary part of the graph over each $(k/n, (k+1)/n]$ interval. To see this consider $s \in (k/n, (k+1)/n]$. Then $$ \int_{\frac{k}{n}}^s e^{2 \pi i (S_k \pm n(u - k/n))} du = (1/2\pi n)[ \sin (2 \pi n(s - k/n)) \pm (-i) \{\cos (2 \pi n (s - k/n)) - 1 \} ]. 
$$ Noting that the integral over each $(k/n, (k+1)/n]$ is zero and using the periodicity of $\sin$ and $\cos$ functions we have established that the graph of $\int_0^s e^{2 \pi i H_n(\omega, u)} du$ is obtained by randomly inverting the imaginary part of the graph of (\ref{wn}) over each $(k/n, (k+1)/n]$ interval. We now prove that \[2 \pi n^{3/2} \int_0^t \{ \int_0^s \sin{2 \pi H_n(\omega, u)} du\} ds,\ 0 \leq t \leq 1,\] converges weakly to Brownian motion as $n \rightarrow \infty$ and \[2 \pi n^{3/2} \int_0^t \{ \int_0^s \cos{2 \pi H_n(\omega, u)} du\} ds,\ 0 \leq t \leq 1,\] converges to the zero process. To see the exact form of this (random) function we note that, as proved, the function $\int_0^s e^{2 \pi i H_n(\omega, u)} du$ has a graph which is the graph of (\ref{wn}) with the imaginary part randomly inverted over each $[k/n, (k+1)/n)$ interval. The integral of the (periodic) function in (\ref{wn}) over each $[k/n, (k+1)/n)$ interval is $ (i/2\pi n^2)$ (the real part integrates to contribute zero). Thus the integral of the function $\int_0^s e^{2 \pi i H_n(\omega, u)} du$ is $\pm i (1/2\pi n^2)$ over the same interval, the $\pm$ sign coming from the random inverting. Written explicitly, for $k/n \leq t < (k+1)/n$, \[\begin{array}{rcl}\int_0^t\{\int_0^s e^{2 \pi i H_n(\omega, u)} du\}ds & = & \sum_{j = 0}^{k - 1}\int _{j/n}^{(j+1)/n}\{\int_0^s e^{2 \pi i H_n(\omega, u)} du\}ds\\ && + \int _{k/n}^{t}\{\int_0^s e^{2 \pi i H_n(\omega, u)} du\}ds \\ & = & \frac{i}{2\pi n^2}\sum_{j = 1}^{k} X_j + O(\frac{1}{n^2}),\end{array}\] where $X_j$ are independent $\pm 1$ random variables. Note that the integral from $k/n$ to $t$ adds a continuous function of order $O(\frac{1}{n^2})$ to the random walk obtained from the $X_j$'s. Multiplying by $2 \pi n^{3/2}$ we get the weak convergence to Brownian motion. In this sense the random difference $\int_0^s e^{2 \pi i H_n(\omega, u)} du, 0 \leq s \leq 1,$ when normalized converges to the generalized derivative of Brownian motion, called Brownian white noise. For the general case, we consider (with some abuse of notation) \[\int_0^s r(u) e^{2 \pi i H_n(\omega, u)} du, 0 \leq s \leq 1,\] where $r$ is a sufficiently differentiable function. In this case, depending on $r$, after two integrations we get back a different random walk minus the area under another random walk, and consider weak convergence again. We shall deal with the real and the imaginary part of the integral separately. As observed before the randomness has no role to play in the real part. A straightforward calculation shows that \begin{center} $\begin{array}{rcl}\int_0^s r(u) \cos(2 \pi H_n(\omega, u))du & = & \int_0^s r(u) \cos(2 \pi nu)du\\ & = &\frac{1}{2n\pi}r(s)\sin 2n\pi s - \frac{1}{2n\pi}\int_0^s r'(u)\sin 2n\pi u \,du\\ & = &\frac{1}{2n\pi}r(s)\sin 2n\pi s + \frac{1}{4n^2\pi^2}r'(s)\cos 2n\pi s\\ & - & \frac{1}{4n^2\pi^2}r'(0)- \frac{1}{4n^2\pi^2}\int_0^s r''(u)\cos 2n\pi u \,du \end{array}$\end{center} Therefore, \begin{center}$\int_0^t\{\int_0^s r(u) \cos(2 \pi H_n(\omega, u))du\}\,ds = \frac{1}{2\pi n}\int_0^t r(s) \sin (2 \pi ns)\, ds + O(\frac{1}{n^2})$.\end{center} Since the first term on the right hand side is $O(1/n^2)$ it follows that \[\lim_{n\to\infty} n^{3/2}\int_0^t\{\int_0^s r(u) \cos{2 \pi H_n(\omega, u)} du\}\,ds=0.\] Thus the real part of the integral when scaled by $n^{3/2}$ converges to zero uniformly. 
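Before treating the imaginary part, we record a quick numerical sanity check of this scaling (a sketch, not part of the argument): with $r\equiv 1$ and the $Y$, $Z$ factors omitted, the scaled double integral of $\sin(2\pi H_n(\omega,u))$ evaluated at $t=1$ should be approximately standard normal, in line with the Brownian limit identified below.
\begin{verbatim}
import numpy as np

# Monte Carlo sanity check (a sketch, not part of the proof): for r = 1 and
# the Y, Z factors omitted, 2*pi*n^{3/2} * int_0^1 int_0^s sin(2*pi*H_n) du ds
# should be approximately standard normal.
rng = np.random.default_rng(0)
n, m, paths = 200, 40, 2000            # n walk steps, m quadrature points per step
du = 1.0 / (n * m)

samples = np.empty(paths)
for p in range(paths):
    X = rng.choice([-1.0, 1.0], size=n)             # slopes +-1 of the random walk
    h = np.repeat(n * X, m)                         # h_n(w,u) = +-n, piecewise constant
    H = np.cumsum(h) * du                           # H_n(w,u) = int_0^u h_n(w,x) dx
    inner = np.cumsum(np.sin(2 * np.pi * H)) * du   # int_0^s sin(2 pi H_n(w,u)) du
    samples[p] = 2 * np.pi * n**1.5 * np.sum(inner) * du   # outer integral in s
print(samples.mean(), samples.var())                # should be close to 0 and 1
\end{verbatim}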
To deal with the imaginary part of the integral, for $l/n\leq s<(l+1)/n$, we split it as follows \begin{eqnarray}\label{area} \sum_{k = 0}^{[ns] - 1} \int_{k/n}^{(k+1)/n} r(u) \sin (2\pi H_n(\omega, u)) du + \int_{l/n}^s r(u) \sin (2\pi H_n(\omega, u)) du, \end{eqnarray} and then writing $r(u) = r(k/n) + (r(u) - r(k/n))$ on the subinterval $(k/n, (k+1)/n]$ we get the following for (\ref{area}): \begin{eqnarray}\label{area2} & & \sum_{k = 0}^{[ns] - 1} \int_{k/n}^{(k+1)/n} (r(u) - r(k/n)) \sin (2\pi H_n(\omega, u)) du \nonumber \\ &+&r(l/n)\int_{l/n}^s \sin(2\pi H_n(\omega,u)) du + \int_{l/n}^s (r(u) - r(l/n)) \sin (2\pi H_n(\omega, u)) du. \end{eqnarray} We denote the function (represented by the sum) on the first row by $\psi_1(s)$ and the two terms on the second row by $\psi_2(s)$ and $\psi_3(s)$ respectively. Since $n^{3/2}\int_0^t \psi_2(s) ds$ gives a random walk with steps $\pm\frac{1}{2\pi\sqrt{n}}r(k/n)$ over the interval $[k/n,(k+1)/n)$, $n^{3/2}\int_0^t \psi_2(s) ds$ converges weakly to $\frac{1}{2\pi}\int_0^t r(s) dW(s)$. Next we consider the part $\psi_1$. The summands of $\psi_1$ are not necessarily zero as $r$ is non-constant. Substituting $u=k/n+z/n$ in the $k$-th summand and disregarding the $\pm$ signs, we get $$\frac{1}{n} \int_0^1 [r\left(\frac{k}{n} + \frac{z}{n}\right) - r\left(\frac{k}{n}\right)] \sin (2 \pi z) dz. $$ To clearly understand this contribution divide each interval $[k/n,(k+1)/n)$ into the parts where the sine function has the same sign (to apply the mean value theorem for integrals for some $z_1 \in (0, 1/2)$). Thus, \begin{eqnarray*} && \frac{1}{n} \int_0^1 [r\left(\frac{k}{n} + \frac{z}{n}\right) - r\left(\frac{k}{n}\right)] \sin (2 \pi z) dz \\ &=& \frac{1}{n} \int_0^{1/2} [r\left(\frac{k}{n} + \frac{z}{n}\right) - r\left(\frac{k}{n}\right)] \sin (2 \pi z) dz \\ & + & \frac{1}{n}\int_{1/2}^1 [r\left(\frac{k}{n} + \frac{z}{n}\right) - r\left(\frac{k}{n}\right)] \sin (2 \pi z) dz \\ &=& \frac{1}{n} \int_0^{1/2} [r\left(\frac{k}{n} + \frac{z}{n}\right) - r\left(\frac{k}{n} + \frac{z}{n}+\frac{1}{2n}\right)] \sin (2 \pi z)\, dz \\ &=& \frac{1}{n\pi} \left[r\left(\frac{k}{n} + \frac{z_1}{n}\right) - r\left(\frac{k}{n} + \frac{z_1}{n}+\frac{1}{2n}\right)\right] \\ &=& - \frac{1}{2n^2\pi} r^{\prime} \left(\frac{k}{n} + \frac{z_1}{n}+\frac{\theta}{2n}\right), \end{eqnarray*} where $0<\theta<1$. If we add these integrals after multiplying each of them by $\pm 1$ from random inversions, and scale the sum by $n^{3/2}$ then it corresponds to a random walk converging weakly to $$ - \frac{1}{2\pi} \int_0^s r^\prime(u) dW(u).$$ The continuous random curve $\psi_1+\psi_3$ matches this random walk at the points $k/n$ and is otherwise at a distance at most $$O(\frac{1}{n}|r(k/n + 1/2n) - r(k/n)|) = O(1/n^2)$$ from it. Thus after scaling by $n^{3/2}$ the random curve $\psi_1+\psi_3$ converges weakly to the same limit and $n^{3/2}\int_0^t(\psi_1+\psi_3)(s)\,ds$ converges weakly to $-\frac{1}{2\pi}\int_0^t \{\int_0^s r^\prime(u) dW(u)\} ds$ by the continuous mapping theorem. This completes the proof of the main result. For completeness we indicate how the random walk \begin{equation}\sum_{k=1}^{[nt_1]} r(k/n)Y_k\label{int_psi_2}\end{equation} (here $Y_k$ are i.i.d. $\pm\frac{1}{\sqrt{n}}$ random variables) and the area under the random walk \begin{equation}- \sum_{i=1}^{[ns]} r'(i/n)Y_i\label{area_random}\end{equation} up to time $t_2$ converge jointly in distribution, after which tightness on product space can be used to conclude weak convergence on $C[0,1]\times C[0,1]$. 
To obtain the area under (\ref{area_random}), each random walk height is multiplied by $1/n$ and then added. The interchange of summation gives the area as \begin{equation}-\sum_{i=0}^{k-1} (\frac{k-i}{n}) r'(i/n) Y_i,\label{int_psi_1+psi_3}\end{equation} where $k=[nt_2]$. From (\ref{int_psi_2}) and (\ref{int_psi_1+psi_3}) the limiting joint finite dimensional distribution follows. The limit of the expression in (\ref{int_psi_1+psi_3}) is seen to be $-\frac{1}{2\pi}\int_0^t(t-u)r'(u)\, dW(u)$ which equals $-\frac{1}{2\pi}\int_0^t \{\int_0^s r^\prime(u) dW(u)\} ds$. Now the sum of the two processes (\ref{int_psi_2}) and (\ref{int_psi_1+psi_3}) converges weakly. $\Box$ \section{Concluding remarks:} It is seen from the proof of Theorem~\ref{main} that the random Nash twist on the initial short curve $f_0$ to obtain an increase of $r^2(u)\,du^2$ to the induced metric $f_0^*h$, leads to a process whose structure is similar to the following process (refer to \cite[Theorem 6.3]{hida}) $$X(t)=W(t)-\int_0^t\int_0^s\ell(s,u)dW(u)\,ds,$$ where $\ell(s,u)$ is a Volterra kernel with appropriate conditions. In our case, componentwise we need a Gaussian process $\int_0^t r(u)\,dW(u)$ and a function $r'(u)/r(u)$ to replace $W(t)$ and $\ell(s,u)$ in the above formula. In higher dimension the difference metric $g-f_0^*h$ can be written as a finite sum of monomials $r^2 d\varphi^2$, where $\varphi$ is a rank 1 function. Applying Nash twist along (Im\,$Df_0)^\perp$, one is able to add $r^2 d\varphi^2$ only approximately. To look into the problem of $C^0$ distance one may need several independent Brownian motions in different directions and we refer to Theorem 3.2.5 of Kallianpur and Xiong (\cite{kallianpur}) for such a construction. However a precise formulation combining various directions is not clear and we hope to explore these aspects in future. \emph{Acknowledgement}: The second author is greatly indebted to Misha Gromov for sharing his insight on the subject during a visit of the author to IHES. \end{document}
\begin{document} \title*{Optimization-based Motion Planning in Virtual Driving Scenarios with Application to Communicating Autonomous Vehicles } \titlerunning{Optimization-based Motion Planning} \author{Matthias Gerdts and Bj\"orn Martens} \institute{Matthias Gerdts \at Institute of Mathematics and Applied Computing, Department of Aerospace Engineering, University of the Federal Armed Forces at Munich, Werner-Heisenberg-Weg 39, 85577 Neubiberg, Germany, \email{[email protected]} \and Bj\"orn Martens \at Institute of Mathematics and Applied Computing, Department of Aerospace Engineering, University of the Federal Armed Forces at Munich, Werner-Heisenberg-Weg 39, 85577 Neubiberg, Germany, \email{[email protected]}} \maketitle \abstract{The paper addresses the problem of providing suitable reference trajectories in motion planning problems for autonomous vehicles. Among the various approaches to compute a reference trajectory, our aim is to find those trajectories which optimize a given performance criterion, for instance fuel consumption, comfort, safety, time, and obey constraints, e.g. collision avoidance, safety regions, control bounds. This task can be approached by geometric shortest path problems or by optimal control problems, which need to be solved efficiently. To this end we use direct discretization schemes and model-predictive control in combination with sensitivity updates to predict optimal solutions in the presence of perturbations. Applications arising in autonomous driving are presented. In particular, a distributed control algorithm for traffic scenarios with several autonomous vehicles that use car-to-car communication is introduced. } \section{Introduction} \label{sec:intro} Virtual driving summarizes the ability to simulate vehicles in different driving scenarios on a computer. It is an important tool as it allows to analyze the dynamic behavior of a vehicle and the performance of driver assistance systems in parallel to the development process. The employment of virtual driving allows to reduce costs since simulations are less expensive and less time consuming than real test-drives. Moreover, virtual driving is particularly useful in scenarios that are potentially dangerous for human drivers such as collision avoidance scenarios or driving close to the physical limit. Moreover, the autonomous driving strategies can be developed and analyzed in virtual driving systems. However, virtual driving methods cannot fully replace physical testing since the virtual system is based on modeling assumptions that need to be verified in practice. Virtual driving simulators require models for the vehicle, the road, the environment (i.e. other cars, pedestrians, obstacles, ...), components (i.e. sensors, cameras, ...), and the driver (i.e. path planning, controller, driver assistance, ...). In this paper we focus on the driver, suitable path planning strategies, and control actions. Automatic path planning strategies are in the core of every virtual or real driving scenario for autonomous vehicles. Amongst the various approaches, e.g. sampling methods, shortest path problems and optimal control techniques, we focus on deterministic optimization-based methods such as geometric shortest paths, optimal control, and model-predictive control. While in virtual driving real-time capability is only of minor interest, in online computations it is the most important issue. In both cases robust methods are required that are capable of providing a suitable result reliably. 
A discussion of technical, legal, and social aspects of autonomous driving can be found in the recent book \cite{Maurer2015}. This paper is organized as follows: Section~\ref{sec:modeling} introduces working models for the vehicle, the road, obstacles, and the driver. In Section~\ref{sec:trajectory} approaches for optimization-based path planning are discussed. Amongst them we discuss geometric shortest path problems and optimal control approaches in more detail. Section~\ref{Sec:CollisionDetection} addresses collision detection methods. A model-predictive control scheme for communicating vehicles is suggested in Section~\ref{sec:control}. In Section~\ref{sec:Tracking} we discuss a feedback controller based on inverse kinematics for tracking a reference spline curve. \section{Modeling} \label{sec:modeling} Virtual driving requires a sufficiently realistic vehicle model, a model for its environment, and a driver model. Vehicle models exist in various levels of accuracy, ranging from simple point-mass models through single-track models to full car models. Which model to use depends on the effects that one likes to investigate. For handling purposes and online computations often a simple point-mass model or a single-track model with realistic tyre characteristics, compare \cite{Ger05a}, are sufficiently accurate. For the investigation of dynamic load changes or vibrations a full car model in terms of a mechanical multi-body system is necessary, compare \cite{Ger03b,Rill2011} for full car models and \cite{Burger2010,Burger2013} for applications with load excitations. Often, very detailed component models such as tyre models become necessary to investigate tyre-road contacts, compare \cite{Gallrein2007,Gallrein2014}. Throughout the paper we use a simple kinematic car model, compare \cite{Rill2011}, since it is sufficient to introduce the basic ideas. The kinematic car model is valid for low lateral accelerations, low lateral tyre forces, and negligible side slip angles. It is not suitable for investigations close to the dynamic limit, though. The configuration of the car in a reference coordinate system is depicted in Figure~\ref{Fig:1}. Herein, $\delta$ denotes the steering angle at the front wheels, $v$ the velocity of the car, $\psi$ the yaw angle, $\ell$ the distance from rear axle to front axle and $(x,y)$ the position of the midpoint of the rear axle. \begin{figure} \caption{Configuration of the kinematic car model.} \label{Fig:1} \end{figure} The equations of motion are derived as follows. The midpoint position and velocity of the rear axle of the car in the fixed reference system compute to \begin{eqnarray*} r_R = \left(\begin{array}{c} x \\ y \end{array}\right),\qquad v_R = \left(\begin{array}{c} x' \\ y' \end{array}\right). 
\end{eqnarray*} The midpoint position and velocity of the front axle of the car in the fixed reference system compute to \begin{eqnarray*} r_F & = & \left(\begin{array}{c} x_F \\ y_F \end{array}\right) = r_R + S(\psi) \left(\begin{array}{c} \ell \\ 0 \end{array}\right) = \left(\begin{array}{c} x + \ell \cos\psi \\ y + \ell \sin\psi \end{array}\right), \\ v_F & = & \left(\begin{array}{c} x_F' \\ y_F' \end{array}\right) = \left(\begin{array}{c} x' - \ell \psi' \sin\psi \\ y' + \ell \psi' \cos\psi \end{array}\right), \end{eqnarray*} where \begin{equation}\label{EQ:Drehmatrix} S(\psi) = \left(\begin{array}{cc} \cos\psi & -\sin\psi \\ \sin\psi & \cos\psi \end{array}\right) \end{equation} is a rotation matrix that describes the rotation of the car's fixed coordinate system against the inertial coordinate system. Under the assumption that the lateral velocity components at rear axle and front axle vanish, we have that $r_R$, if transformed to the car's reference system, only has a velocity component in the longitudinal direction of the car, i.e. \begin{displaymath} \left(\begin{array}{c} v \\ 0 \end{array}\right) = S(\psi)^\top v_R \quad \Longleftrightarrow \quad v_R = S(\psi) \left(\begin{array}{c} v \\ 0 \end{array}\right) = \left(\begin{array}{c} v \cos\psi \\ v \sin\psi \end{array}\right). \end{displaymath} This leads to the differential equations for the position $(x,y)$ of the midpoint of the rear axle: \begin{displaymath} x'(t) = v(t) \cos \psi(t),\qquad y'(t) = v(t) \sin\psi(t). \end{displaymath} Under the assumption that the lateral velocity component at the front axle vanishes, we have \begin{displaymath} 0 = e_q^\top v_{F,b}, \end{displaymath} where $e_q = (-\sin\delta,\cos\delta)^\top$ denotes the lateral direction of the front axle and \begin{displaymath} v_{F,b} := S(\psi)^\top v_F = \left(\begin{array}{c} v \\ \ell \psi' \end{array}\right) \end{displaymath} denotes the representation of the velocity $v_F$ in the car's body fixed reference system. The equation $0 = e_q^\top v_{F,b}$ yields the differential equation \begin{displaymath} \psi'(t) = \frac{v(t)}{\ell} \tan\delta(t) \end{displaymath} for the yaw angle $\psi$. In summary, given the velocity $v(t)$ and the steering angle $\delta(t)$, the car's motion is given by the following system of differential equations: \begin{eqnarray} x'(t) & = & v(t) \cos \psi(t), \label{EQ:1a}\\ y'(t) & = & v(t) \sin\psi(t), \label{EQ:1b}\\ \psi'(t) & = & \frac{v(t)}{\ell} \tan\delta(t). \label{EQ:1c} \end{eqnarray} The steering angle and the velocity are typically bounded by $| \delta | \leq \delta_{max}$ and by $0 \leq v \leq v_{max}$ with given bounds $\delta_{max}$ and $v_{max}$, respectively. Moreover, one or more of the following modifications and restrictions can be used to yield a more realistic motion of the car: \begin{itemize} \item Instead of controlling the velocity directly one often controls the acceleration by adding the differential equation \begin{equation}\label{EQ:2a} v'(t) = a(t) \end{equation} with bounds for the control $a$, i.e. $a_{min} \leq a \leq a_{max}$. \item In order to model a certain delay in controlling the velocity $v$, one can consider the following differential equation for $v$: \begin{equation}\label{EQ:2aa} v'(t) = \frac{v_{d}(t) - v(t)}{T}\qquad (T>0) \end{equation} Herein, $v_d(t)$ is the reference (=desired) velocity viewed as a control input and $v$ is the actual velocity. The constant $T$ allows to influence the response time, i.e. the delay of $v$. 
\item Instead of controlling the steering angle directly one often controls the steering angle velocity by adding the differential equation \begin{equation}\label{EQ:2b} \delta'(t) = w(t) \end{equation} with bounds for the control $w$, i.e. $|w| \leq w_{max}$. \end{itemize} The car model derived in this section only provides a simple model, which is, however, very useful for path planning tasks. More detailed models can be found in \cite{Rill2011}. \section{Trajectory Optimization and Path Planning} \label{sec:trajectory} \subsection{Geometric Path Planning by Shortest Paths} A first approach towards automatic path planning is based on purely geometric considerations; the detailed dynamics of the vehicle are not taken into account in their full complexity. In fact, a shortest path is sought that connects two given points in a configuration space, e.g. vehicle positions, while taking into account a fixed number of obstacles. The result is a sequence of configuration points that need to be tracked by the vehicle. This simple but robust and effective planning strategy turns out to be useful in the presence of complicated path constraints or many fixed obstacles that need to be avoided. The resulting path may serve as a reference path for inverse dynamics, a feedback tracking controller, or as an initial guess for more sophisticated optimization approaches from optimal control. In summary, the following steps need to be performed in order to compute a trajectory from an initial position to a terminal position, while taking obstacles into account: \begin{itemize} \item[(1)] Solve a collision-free geometric shortest path problem using Dijkstra's algorithm. The shortest path consists of a sequence of way-points leading from the initial position to the terminal position. \item[(2)] Interpolate the way-points by a cubic spline function (use a thinning algorithm if necessary). \item[(3)] Use inverse kinematics based on the car model or a feedback controller to track the spline function. \end{itemize} These steps are now discussed one by one. In a first attempt we focus on the two dimensional $(x,y)$-plane as the configuration space, which is often sufficient for autonomous ground-based vehicles. Note that the subsequent techniques can be easily extended to higher dimensions, e.g. in the context of robotics or flight path optimization. Consider a two dimensional configuration space \begin{displaymath} Q=\{ (x,y)^\top \in{\mathbb{R}}^2 \; |\; x_{min} \leq x\leq x_{max}, y_{min}\leq y\leq y_{max}\} = [x_{min},x_{max}]\times [y_{min},y_{max}], \end{displaymath} which corresponds to the feasible $(x,y)$-positions of the reference point on the vehicle. An equidistant discretization of the configuration space reads as \begin{displaymath} Q_h = \{ (x_i,y_j)^\top \in {\mathbb{R}}^2\;|\; x_i = x_{min} + i h_x, \; y_j = y_{min} + j h_y, i=0,\ldots,N_x, j=0,\ldots,N_y\} \end{displaymath} with step-sizes $h_x = (x_{max}-x_{min})/N_x$ and $h_y = (y_{max}-y_{min})/N_y$ and given numbers $N_x,N_y\in{\mathbb{N}}$. Every grid point $q\in Q_h$ corresponds to the position of the vehicle's reference point. The geometric motion of the vehicle can be modeled in a very simplified way by transitions from a given grid point to neighboring grid points (called feasible transitions), see Figure~\ref{Fig:ShortestPath}. 
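As an illustration of step (1) and the grid construction just described, the following minimal sketch (with purely hypothetical circular obstacles) builds the grid $Q_h$, discards grid points blocked by obstacles, and runs Dijkstra's algorithm with Euclidean transition costs over the neighboring grid points; the edge set, costs and complexity of the method are made precise below.
\begin{verbatim}
import heapq
import numpy as np

# Minimal sketch of grid-based geometric path planning (hypothetical data):
# build Q_h, drop nodes inside circular obstacles and run Dijkstra with
# Euclidean edge costs over the 8 neighboring grid points.
def dijkstra_grid(xlim, ylim, Nx, Ny, obstacles, start, goal):
    hx, hy = (xlim[1]-xlim[0])/Nx, (ylim[1]-ylim[0])/Ny
    def pos(i, j):                      # grid index -> (x, y) position
        return (xlim[0] + i*hx, ylim[0] + j*hy)
    def free(i, j):                     # node feasible if outside all obstacles
        x, y = pos(i, j)
        return all((x-cx)**2 + (y-cy)**2 > r**2 for cx, cy, r in obstacles)
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if (i, j) == goal:
            break
        if d > dist[(i, j)]:
            continue                    # outdated priority queue entry
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i+di, j+dj
                if (di, dj) == (0, 0) or not (0 <= ni <= Nx and 0 <= nj <= Ny):
                    continue
                if not free(ni, nj):
                    continue            # infeasible node (collision with obstacle)
                c = np.hypot(di*hx, dj*hy)      # Euclidean transition cost
                if d + c < dist.get((ni, nj), np.inf):
                    dist[(ni, nj)] = d + c
                    prev[(ni, nj)] = (i, j)
                    heapq.heappush(heap, (d + c, (ni, nj)))
    path, node = [], goal               # reconstruct the way-points
    while node in prev or node == start:
        path.append(pos(*node))
        if node == start:
            break
        node = prev[node]
    return path[::-1]

# hypothetical double-lane-change-like scenario on [0,150] x [-5,10]
waypoints = dijkstra_grid((0, 150), (-5, 10), 150, 30,
                          obstacles=[(60, 2.5, 4.0), (90, -1.0, 4.0)],
                          start=(0, 20), goal=(150, 10))
\end{verbatim}
The returned way-points would then be interpolated by a cubic spline and tracked as in steps (2) and (3).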
\begin{figure} \caption{Configuration space $Q_h$ and feasible transitions from a given grid point.} \label{Fig:ShortestPath} \end{figure} To this end, the discrete configuration space $Q_h$ together with the feasible transitions define a directed graph $G = (Q_h,E_h)$ with nodes $Q_h$ and edges $E_h \subset Q_h\times Q_h$ such that \begin{displaymath} ( \bar q, q ) \in E_h \qquad \Longleftrightarrow \qquad | x - \bar x | \leq h_x \mbox{ and } |y-\bar y| \leq h_y , \end{displaymath} where $\bar q = (\bar x,\bar y)^\top\in Q_h$ and $q = (x,y)^\top\in Q_h$. Note that the set of edges can be further reduced if only some of the transitions between grid points are permitted. To each edge $(\bar q,q)\in E_h$ of the graph we may assign the cost $c(\bar q,q) = \| q - \bar q\|_2$, assuming that the time to move from node $\bar q$ to node $q$ is proportional to the Euclidean distance between them. More general costs may be assigned as well as long as they are non-negative. In particular we have to deal with infeasible nodes owing to collisions with obstacles. In order to avoid collisions, infeasible nodes can be eliminated from the set $Q_h$ beforehand or an infinite cost $c(\bar q,q)$ can be assigned whenever $\bar q$ or $q$ are infeasible nodes. Whether a collision occurs can be checked with the technique of Section~\ref{Sec:CollisionDetection}. Given the graph $G$ and the non-negative cost function $c: E_h \rightarrow {\mathbb{R}}$, the task is to find a shortest path in $G$ leading from an initial node $q_0 \in Q_h$ to a terminal node $q_T\in Q_h$. This can be achieved by Dijkstra's shortest path algorithm or versions of it, compare \cite{Pap98,Kor08}. The algorithm, which exploits the dynamic programming principle, has a complexity of $\mathcal{O}(n^2)$, where $n$ denotes the number of nodes in $G$. An efficient implementation of Dijkstra's algorithm uses a priority queue, see \cite{Cormen2009}. After a shortest path has been found, it can be further improved in a post-processing step. The post-processing includes an optional thinning of the path as described in \cite{Nachtigal2015}, a spline interpolation, and the application of inverse kinematics or a tracking controller to follow the spline with the actual car model. A tracking controller based on the car model in (\ref{EQ:1a})-(\ref{EQ:1c}) is designed in Section~\ref{sec:Tracking}. Figure~\ref{Fig:DijkstraLaneChange} shows the result of the geometric shortest path planning approach for a double lane change maneuver. The trajectory is typically not optimal in the sense of optimal control, but may be acceptable for practical purposes or it may serve as an initial guess for an optimal control problem. \begin{figure} \caption{A car's trajectory in the $(x,y)$-plane in the presence of obstacles starting at initial position $(0,5)$ and ending at position $(150,0)$: Shortest path, spline interpolation and tracked curve. } \label{Fig:DijkstraLaneChange} \end{figure} A more complicated track is depicted in Figure~\ref{Fig:DijkstraScenario2}. \begin{figure} \caption{A car's trajectory in the $(x,y)$-plane in the presence of obstacles starting at initial position $(100,100)$ and ending at position $(100,80)$: Shortest path, spline interpolation and tracked curve. 
} \label{Fig:DijkstraScenario2} \end{figure} Although the geometric shortest path problem is robust and can handle fixed obstacles, moving obstacles (at the cost of an additional time state variable), and complicated obstacle geometries very well, it suffers from the disadvantage that, e.g., the steering angle $\delta$ cannot be constrained. For instance, the course in Figure~\ref{Fig:DijkstraScenario2} requires a maximum steering angle of 73 degrees, which is beyond the technical limit of a car. To remedy this, it is possible to compute a cubic spline approximation in combination with inverse kinematics subject to constraints for the steering angle. Alternatively, one can modify the shortest path approach as follows: In the previous model, a purely geometric motion was considered that can be interpreted as controlling the motion in $x$- and $y$-direction independently of each other. This might be reasonable for omnidirectional robots, but it does not reflect the actual motion capabilities of a car with a steering device very well. To this end it is more realistic, although computationally more expensive, to work with a discretization of the dynamics (\ref{EQ:1a})-(\ref{EQ:1c}) and the control set $\mathcal{U}:=[v_{min},v_{max}]\times [\delta_{min},\delta_{max}]$. This leads to a three dimensional configuration space $Q$ with points of type $q = (x,y,\psi)^\top$ subject to suitable bounds on the components. The discretized configuration space $Q_h$ is constructed as follows, where we use the explicit Euler method with step-size $h>0$ for simplicity: Let $\bar q=(\bar x,\bar y,\bar \psi)^\top\in Q_h$ be arbitrary. Then $Q_h$ contains all the points $q=(x,y,\psi)^\top \in Q$ of type \begin{displaymath} q = \left(\begin{array}{c} x \\ y \\ \psi \end{array}\right) = \left(\begin{array}{c} \bar x + h v \cos \bar \psi \\ \bar y + h v \sin \bar \psi \\ \bar\psi + h \frac{v}{\ell} \tan\delta \end{array}\right) =: F(\bar q,u) \end{displaymath} with $u = (v,\delta)^\top \in \mathcal{U}_h$, where \begin{displaymath} \mathcal{U}_h := \{ (v_i,\delta_j)^\top \in {\mathbb{R}}^2\;|\; v_i = v_{min} + i h_v, \; \delta_j = \delta_{min} + j h_\delta, i=0,\ldots,N_v, j=0,\ldots,N_\delta\} \end{displaymath} is a discrete approximation to the set $\mathcal{U}$ with step-sizes $h_v = (v_{max}-v_{min})/N_v$ and $h_\delta = (\delta_{max}-\delta_{min})/N_\delta$. Again, the discrete configuration space $Q_h$ together with the feasible transitions, defined by the discrete control set $\mathcal{U}_h$, define a directed graph $G = (Q_h,E_h)$ with nodes $Q_h$ and edges $E_h \subset Q_h\times Q_h$ such that \begin{displaymath} ( \bar q, q ) \in E_h \qquad \Longleftrightarrow \qquad \exists u\in \mathcal{U}_h : q = F(\bar q,u) . \end{displaymath} The graph is considerably larger than the previous one and the computational effort for solving the corresponding shortest path problem increases accordingly. Please note that time dependent obstacle motions can be incorporated at the expense of an additional configuration variable which corresponds to the time. The shortest path approach essentially coincides with the dynamic programming approach, compare \cite{Bertsekas2012}. \subsection{Collision Detection} \label{Sec:CollisionDetection} Collision detection is an important issue in robotics and autonomous systems and various approaches based on distance functions of convex bodies have been developed, see \cite{Gilbert1985,Johnson1985,Gilbert1988}. 
A smoothed distance measure was constructed in \cite{Escande2014} and can be used in gradient type optimization algorithms. Often it is sufficient to approximate the vehicle with center $(x_c,y_c)$ and obstacles with centers $(x_i,y_i)$ by circles of radius $r$ and $r_i$, $i=1,\ldots,M$, respectively, and to impose state constraints of type \begin{displaymath} (x_c - x_i)^2 + (y_c - y_i)^2 \geq (r+r_i)^2,\qquad i=1,\ldots,M, \end{displaymath} to prevent collisions. Herein, the center of the vehicle based on the configuration in Figure~\ref{Fig:1} is given by \begin{equation}\label{EQ:CarCenter} r_c := \left(\begin{array}{c} x_c \\ y_c \end{array}\right) = \left(\begin{array}{c} x \\ y \end{array}\right) + S(\psi) \left(\begin{array}{c} \ell/2 \\ 0 \end{array}\right). \end{equation} Collision detection, taking the detailed shape of the vehicle and the obstacles into account, is much more involved, since it is not straightforward to compute the distance function for potentially non-convex bodies. We follow a technique in \cite{Landry2012} and assume that the shape of the vehicle of length $\ell$ and width $w$ in the two dimensional plane is given by a rectangle (w.r.t. the vehicle's coordinate system) \begin{displaymath} R = \{ z\in {\mathbb{R}}^2 \; |\; A z\leq b\}, \qquad A = \left(\begin{array}{rr} 1 & 0 \\ -1 & 0 \\ 0 & 1 \\ 0 & -1 \end{array}\right),\ b = \left(\begin{array}{r} \ell/2 \\ \ell/2 \\ w/2 \\ w/2 \end{array}\right). \end{displaymath} The rectangle moves along with the vehicle and its location at time $t$ is given by \begin{displaymath} R(t) = S(\psi(t)) R + r_c(t) = \{ z\in{\mathbb{R}}^2 \; |\; A S(\psi(t))^\top z \leq b + A S(\psi(t))^\top r_c(t)\} \end{displaymath} with $S$ from (\ref{EQ:Drehmatrix}) and $r_c$ from (\ref{EQ:CarCenter}). Let the $M$ obstacles at time $t$ be given by unions of convex polyhedra \begin{displaymath} Q_i(t) := \bigcup_{j=1}^{M_i} Q^{(i,j)}(t) \qquad \text{with} \qquad Q^{(i,j)}(t)=\{ y\in {\mathbb{R}}^2 \; |\; C^{(i,j)}(t) y\leq d^{(i,j)}(t)\}, \end{displaymath} where $M_i$ is the number of polyhedra in obstacle $Q_i$ and for $j=1,\ldots,M_i$, the matrix $C^{(i,j)}(t)\in {\mathbb{R}}^{q_{i,j}\times 2}$ and the vector $d^{(i,j)}(t)\in {\mathbb{R}}^{q_{i,j}}$ define the convex parts of the $i$-th obstacle at time $t$. Herein, $q_{i,j}$ is the number of facets in $Q^{(i,j)}$. The vehicle and the obstacles do not collide at time $t$ if and only if \begin{displaymath} R(t) \cap Q^{(i,j)}(t) \,=\,\emptyset \qquad \forall j=1,\ldots,M_i, i=1,\ldots,M. \end{displaymath} This is equivalent to the infeasibility of the linear system \begin{equation}\label{EQ:intersection} \left(\begin{array}{c} A(t) \\ C^{(i,j)}(t) \end{array}\right)\, z\,\leq\,\left(\begin{array}{c} b(t) \\ d^{(i,j)}(t) \end{array}\right),\quad \forall j=1,\ldots,M_i, i=1,\ldots,M, \end{equation} where $A(t) := A S(\psi(t))^\top$ and $b(t) := b + A S(\psi(t))^\top r_c(t)$. According to the Lemma of Gale the system (\ref{EQ:intersection}) has no solution at time $t$ if and only if there exist vectors $ w^{(i,j)}(t)\in{\mathbb{R}}^{4+q_{i,j}}$ such that \begin{displaymath} w^{(i,j)}(t)\,\geq\,0,\quad \left(\begin{array}{c} A(t) \\ C^{(i,j)}(t) \end{array}\right)^\top w^{(i,j)}(t)\,=\,0 \quad \text{and}\quad \left(\begin{array}{c} b(t) \\ d^{(i,j)}(t) \end{array}\right)^\top w^{(i,j)}(t) < 0. \end{displaymath} Note that a scaled vector $\lambda w^{(i,j)}(t)$ with $\lambda>0$ satisfies the conditions as well, if $ w^{(i,j)}(t)$ does so. 
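Numerically, this Gale-type certificate can be searched for by a small linear program; the bounded formulation used for this purpose is stated next. The following is a minimal sketch under the rectangle and polyhedron description above, assuming SciPy's \texttt{linprog} and purely hypothetical vehicle and obstacle data.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def rotation(psi):
    return np.array([[np.cos(psi), -np.sin(psi)],
                     [np.sin(psi),  np.cos(psi)]])

def no_collision(psi, r_c, C, d, length=4.5, width=1.8, eps=1e-8):
    """LP-based separation test between the vehicle rectangle and one convex
    obstacle {y : C y <= d}; returns True if they do not intersect."""
    A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
    b = np.array([length/2, length/2, width/2, width/2])
    S = rotation(psi)
    A_t = A @ S.T                       # A(t) = A S(psi)^T
    b_t = b + A_t @ np.asarray(r_c)     # b(t) = b + A S(psi)^T r_c(t)
    M = np.vstack([A_t, np.asarray(C)])
    beta = np.concatenate([b_t, np.asarray(d)])
    # Gale certificate: minimize beta^T w subject to M^T w = 0, 0 <= w <= 1.
    res = linprog(beta, A_eq=M.T, b_eq=np.zeros(2),
                  bounds=[(0.0, 1.0)] * len(beta), method="highs")
    # a strictly negative optimal value certifies infeasibility of the joint
    # inequality system, i.e. the two sets do not intersect
    return bool(res.status == 0 and res.fun < -eps)

# hypothetical obstacle: unit box centered at (10, 0); vehicle at the origin
C_obs = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
d_obs = np.array([10.5, -9.5, 0.5, 0.5])
print(no_collision(0.3, np.array([0.0, 0.0]), C_obs, d_obs))   # expected: True
\end{verbatim}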
Owing to this scaling invariance, we may bound the components of $w^{(i,j)}(t)$ by one and impose the additional constraint $w^{(i,j)}(t) \leq e$, where $e = (1,\ldots,1)^\top$ denotes the vector of all ones of appropriate dimension. This condition can be checked by solving the following linear program for all $j=1,\ldots,M_i$, $i=1,\ldots,M$, and all $t$: {\em Minimize \begin{displaymath} \left(\begin{array}{c} b(t) \\ d^{(i,j)}(t) \end{array}\right)^\top w^{(i,j)}(t) \end{displaymath} \indent subject to the constraints \begin{displaymath} 0\,\leq \, w^{(i,j)}(t) \,\leq\,e,\qquad \left(\begin{array}{c} A(t) \\ C^{(i,j)}(t) \end{array}\right)^\top w^{(i,j)}(t)\,=\,0. \end{displaymath}} \noindent Note that the feasible sets of the linear programs are non-empty and compact and thus an optimal solution exists. A collision does not occur if the value function \begin{displaymath} \zeta^{(i,j)}(t) := \min\left\{ \left(\begin{array}{c} b(t) \\ d^{(i,j)}(t) \end{array}\right)^\top w \;\Bigg|\; 0\,\leq w\,\leq\,e,\; \left(\begin{array}{c} A(t) \\ C^{(i,j)}(t) \end{array}\right)^\top w=0\right\} \end{displaymath} is negative for all combinations $(i,j)$ and all $t$. Hence, collisions are avoided by imposing the non-linear and non-differentiable constraint \begin{displaymath} \sup_{t\in [0,t_f],(i,j)} \zeta^{(i,j)}(t) \leq -\varepsilon \end{displaymath} for some $\varepsilon>0$ sufficiently small. Note that $\zeta^{(i,j)}$ implicitly depends on the vehicle's state $(x,y,\psi)$. As an alternative to the above linear programming approach, the Gilbert-Johnson-Keerthi algorithm from \cite{Gilbert1988} is frequently used in computer graphics and robotics for real-time collision detection. It uses Minkowski sums and convex hulls to compute the signed distance function between two polyhedral objects. \subsection{Optimal Drivers by Optimal Control} \label{sec:OCP} A virtual ``optimal'' driver can be modeled by means of a suitable optimal control problem, which fits into the following general class of parametric optimal control problems OCP($p$): {\em Minimize \begin{equation} \label{EQ:OCP:OBJ} \varphi(z(t_f),t_f,p) + \int_{t_0}^{t_f} f_0(z(t),u(t),p) dt \end{equation} \indent subject to the constraints \begin{align} z'(t) & = f(z(t),u(t),p) && \mbox{a.e. in } [t_0,t_f], \label{EQ:OCP:1}\\ g(t,z(t),p) & \leq 0 && \mbox{in } [t_0,t_f], \label{EQ:OCP:2}\\ \psi(z(t_0),z(t_f),p) & = 0, \label{EQ:OCP:3} \\ u(t) & \in \mathcal{U} & & \mbox{a.e. in } [t_0,t_f]. \label{EQ:OCP:4} \end{align}} Herein, the objective function (\ref{EQ:OCP:OBJ}) typically consists of a linear combination of final time, steering effort and fuel consumption. The car model defines the differential equation in (\ref{EQ:OCP:1}) for the state $z$ and control $u$, while road boundaries and stationary or moving obstacles lead to state constraints of type (\ref{EQ:OCP:2}). Initial and terminal states of the vehicle at initial time $t_0$ and terminal time $t_f$ are limited by the constraint in (\ref{EQ:OCP:3}). Finally, the control vector $u$ is restricted to the control set $\mathcal{U}$ in (\ref{EQ:OCP:4}). The problem formulation may depend on a parameter vector $p$ that can be used to model perturbations or uncertainties entering the optimal control problem. Given a nominal parameter $p^*$, various approaches exist to solve OCP($p^*$) numerically. The indirect solution approach exploits first order necessary optimality conditions, see \cite{Iof79}. The function space approach applies optimization procedures, i.e. 
gradient type methods in the function space setting of OCP($p^*$), compare~\cite{Pol73}. For highly nonlinear problems with complicated state and control constraints, direct discretization methods, see \cite{Boc84,Bet01,Gerdts2012}, are often preferred owing to their flexibility and robustness. In the following example, we use an optimal control problem to model a parking maneuver of a car and solve the optimal control problem by the direct shooting method OCPID-DAE1, see \cite{OCPIDDAE1}. \begin{example}[Parking maneuver] \label{ex:Parken} The task is to park a car in a parking space next to the car on the road. The car's dynamics are given by (\ref{EQ:1a})-(\ref{EQ:1c}), (\ref{EQ:2a}) and (\ref{EQ:2b}) with $\ell=2.7$ and width $b=1.8$. The state vector is given by $z=(x,y,\psi,v,\delta)^\top$ and the control vector by $u=(w,a)^\top$. The steering angle velocity $w$ and the acceleration $a$ are restricted by the control constraints \begin{equation}\label{EQ:Parken:1} w(t) \in [-0.5,0.5],\qquad a(t)\in [-0.5,0.5]. \end{equation} The steering angle $\delta$ is bounded by the state constraints \begin{equation}\label{EQ:Parken:2} -\frac{\pi}{6} \leq \delta(t) \leq \frac{\pi}{6}. \end{equation} The initial state of the car is given by \begin{equation}\label{EQ:Parken:3} x(0) = 2.5,\quad y(0) = 1.5,\quad \psi(0) = v(0) = \delta(0) = 0 \end{equation} and the terminal state by \begin{equation}\label{EQ:Parken:4} x(t_f) = -1.25,\quad y(t_f) = -1.5,\quad \psi(t_f) = v(t_f) = \delta(t_f) = 0. \end{equation} Moreover, the parking lot, which is located to the right of the car, is defined by the state constraints \begin{equation}\label{EQ:Parken:5} y_r(t) \geq \eta(x_r(t)) \quad \mbox{and}\quad y_f(t) \geq \eta(x_f(t)), \end{equation} where \begin{displaymath} \left(\begin{array}{c} x_r(t) \\ y_r(t) \end{array}\right) = \left(\begin{array}{c} x(t) \\ y(t) \end{array}\right) + S(\psi(t)) \left(\begin{array}{c} 0 \\ b/2 \end{array}\right)\mbox{ and } \left(\begin{array}{c} x_f(t) \\ y_f(t) \end{array}\right) = \left(\begin{array}{c} x(t) \\ y(t) \end{array}\right) + S(\psi(t)) \left(\begin{array}{c} \ell \\ b/2 \end{array}\right) \end{displaymath} denote the positions of the right front and right rear wheel centers, respectively, $S$ is the rotation matrix in (\ref{EQ:Drehmatrix}) and $\eta$ is the piecewise defined and continuously differentiable function \begin{displaymath} \eta(x) := \left\{\begin{array}{ll} 0, & \mbox{if } |x| \geq 2.5,\\ -3, & \mbox{if } |x| \leq 2.4, \\ -900 (|x| - 2.5)^2 - 6000 (|x|-2.5)^3, & \mbox{if } 2.4 < |x| < 2.5. \end{array}\right. \end{displaymath} Summarizing, the optimal control problem aims at minimizing a linear combination of final time and steering effort, that is \begin{displaymath} t_f + \int_0^{t_f} w(t)^2 dt, \end{displaymath} subject to the constraints (\ref{EQ:1a})-(\ref{EQ:1c}), (\ref{EQ:2a}), (\ref{EQ:2b}), (\ref{EQ:Parken:1})-(\ref{EQ:Parken:5}). Figures~\ref{Fig:Parken:1a}-\ref{Fig:Parken:1b} show the result of the direct shooting method OCPID-DAE1, see \cite{OCPIDDAE1}, with $N=101$ grid points and final time $t_f\approx 15.355$. Please note the 3 phases of the parking maneuver in Figure~\ref{Fig:Parken:1a}. 
\begin{figure} \caption{Trajectory $(x,y)$ for optimal parking maneuver.} \label{Fig:Parken:1a} \end{figure} \begin{figure} \caption{Velocity $v$ (state, top left), steering angle $\delta$ (state, top right), acceleration $a$ (control, bottom left), and steering angle velocity $w$ (control, bottom right) for optimal parking maneuver.} \label{Fig:Parken:1b} \end{figure} Figure~\ref{Fig:Parken:2} illustrates the motion of the car with some snapshots. \begin{figure} \caption{Snapshots of the car's parking maneuver.} \label{Fig:Parken:2} \end{figure} \end{example} Example~\ref{ex:Parken} illustrated how optimal control can be used to simulate a driver in an automatic parking maneuver. Now we like to investigate the influence of parameters. A parametric sensitivity analysis as in \cite{Mau01} for the nominal optimal control problem OCP($p^*$) with some nominal parameter $p^*$ allows to approximate the optimal solution $(z(p),u(p))$ of the perturbed problem OCP($p$) with $p$ close to $p^*$ locally by a first order Taylor approximation \begin{equation} \label{EQ:Sens:1} z(p) \approx z(p^*) + \frac{\partial z}{\partial p}(p^*) (p-p^*),\quad u(p) \approx u(p^*) + \frac{\partial u}{\partial p}(p^*) (p-p^*). \end{equation} This approximation holds for $p$ sufficiently close to $p^*$ under suitable assumptions on the nominal solution, i.e. first-order necessary conditions, strict complementarity, second-order sufficient conditions and linear independence constraint qualification have to hold. A similar approximation holds for discretized optimal control problems, compare \cite{Bue01a}. These update formulas can be applied in real-time to update a nominal solution in the presence of perturbations in $p^*$. This idea can be exploited in multistep model-predictive control schemes as well, compare \cite{Palma2015}. In the following example, we demonstrate the parametric sensitivity analysis for a collision avoidance maneuver. \begin{example}[Collision avoidance maneuver] The task is to avoid a collision with a fixed obstacle that is blocking the right half of a straight road. The width of the road is $8$ meters and we aim to find the minimal distance $d$ such that an avoidance maneuver is possible with moderate steering effort. We introduce two perturbation parameters $p_1$ and $p_2$ with nominal values $p_1^* = p_2^* = 0$ into the problem formulation. The first parameter $p_1$ models perturbations in the initial yaw angle. The second parameter $p_2$ models perturbations in the motion of the obstacle and allows the obstacle to move with a given velocity $v_{obs} = 100$ [km/h] into a given direction with angle $\psi_{obs} = 170$ [$^\circ$]. The car's dynamics are given by (\ref{EQ:1a})-(\ref{EQ:1c}), (\ref{EQ:2a}) and (\ref{EQ:2b}) with $\ell=2.7$ and width $b=2$. The steering angle velocity $w$ and the acceleration $a$ are restricted by the control constraints \begin{equation}\label{EQ:Avoidance:1} w(t) \in [-0.5,0.5],\qquad a(t)\in [-10.0,0.5]. \end{equation} The steering angle $\delta$ is bounded by the state constraints \begin{equation}\label{EQ:Avoidance:2} -\frac{\pi}{6} \leq \delta(t) \leq \frac{\pi}{6}. \end{equation} The initial state of the car is given by \begin{equation}\label{EQ:Avoidance:3} x(0) = 0,\quad y(0) = 1.75,\quad \psi(0) = p_1,\quad \delta(0) = 0, \quad v(0) = 27.78, \end{equation} where $p_1$ is a perturbation parameter with nominal value $p_1^* = 0$. 
The terminal state is given by \begin{equation}\label{EQ:Avoidance:4} x(t_f) = x_{obs}(t_f) + 3,\quad \psi(t_f) = \delta(t_f) = 0 \end{equation} with \begin{displaymath} x_{obs}(t) = d + t p_2 v_{obs} \cos \psi_{obs},\qquad y_{obs}(t) = 3.5 + t p_2 v_{obs} \sin \psi_{obs} \end{displaymath} and perturbation parameter $p_2$ with nominal value $p_2^* = 0$. Note that the obstacle is not moving for the nominal parameter $p_2^*=0$. Moreover, the obstacle is defined by the state constraints \begin{equation}\label{EQ:Avoidance:5} s(x(t), x_{obs}(t), y_{obs}(t)) + \frac{b}{2} \leq y(t) \leq 8 - \frac{b}{2}, \end{equation} where $s$ is the piecewise defined and continuously differentiable function \begin{displaymath} s(x,d,h) := \left\{\begin{array}{ll} 0, & \mbox{if } x < d,\\ 4 h (x - d)^3, & \mbox{if } d \leq x < d + 0.5, \\ 4 h (x - (d+1))^3 + h, & \mbox{if } d+0.5 \leq x < d+1, \\ h, & \mbox{if } x \geq d+1. \end{array}\right. \end{displaymath} Summarizing, the optimal control problem aims at minimizing a linear combination of initial distance to the obstacle $d$ and steering effort, that is \begin{displaymath} d + 18 \int_0^{t_f} w(t)^2 dt, \end{displaymath} subject to the constraints (\ref{EQ:1a})-(\ref{EQ:1c}), (\ref{EQ:2a}), (\ref{EQ:2b}), (\ref{EQ:Avoidance:1})-(\ref{EQ:Avoidance:5}). Figure~\ref{Fig:Ausweichen:1} shows the output of the direct shooting method OCPID-DAE1, see \cite{OCPIDDAE1}, with $N=51$ grid points for the nominal optimal control problem with parameters $p_1^* = p_2^* = 0$. The final time is $t_f\approx 1.00541$, the optimal distance to the obstacle amounts to $d\approx 19.62075$ and the acceleration is active at the lower bound with $a(t) = -10$ for all $t \in [0,t_f]$. \begin{figure} \caption{Trajectory $(x,y)$ (state, top), velocity $v$ (state, bottom left), steering angle $\delta$ (state, bottom middle) and steering angle velocity $w$ (control, bottom right) for optimal avoidance maneuver.} \label{Fig:Ausweichen:1} \end{figure} Figure~\ref{Fig:Ausweichen:2} shows the sensitivities $\partial w/\partial p_1$ and $\partial w/\partial p_2$ of the nominal steering angle velocity $w(t)$ with respect to the parameters $p_1$ and $p_2$ for $t\in [0,t_f]$ . \begin{figure} \caption{Sensitivities of the steering angle velocity w.r.t. to $p_1$ (perturbation in initial yaw angle) and $p_2$ (perturbation of obstacle).} \label{Fig:Ausweichen:2} \end{figure} The sensitivities of the nominal final time $t_f$ with respect to $p_1$ and $p_2$ compute to \begin{displaymath} \frac{\partial t_f}{\partial p_1} \approx -1.66018,\qquad \frac{\partial t_f}{\partial p_2} \approx 0.50118. \end{displaymath} The sensitivities of the nominal distance $d$ with respect to $p_1$ and $p_2$ compute to \begin{displaymath} \frac{\partial d}{\partial p_1} \approx -28.95949,\qquad \frac{\partial d}{\partial p_2} \approx 35.66225. \end{displaymath} The sensitivities allow to predict the optimal solution under (small) perturbations using the Taylor approximation in (\ref{EQ:Sens:1}). Figures~\ref{Fig:Ausweichen:3}-\ref{Fig:Ausweichen:4} show the results of such a prediction for perturbations in the range of $p_1 \in [-0.1,0.1]$ and $p_2\in [-0.1,0.1]$. \begin{figure} \caption{Perturbed trajectories (top), steering angle (bottom, left) and steering angle velocity (bottom, right) with sensitivity updates for perturbations $p_1\in [-0.1,0.1]$. 
Please note that not all perturbations are feasible.} \label{Fig:Ausweichen:3} \end{figure} \begin{figure} \caption{Perturbed trajectories (top), steering angle (bottom, left) and steering angle velocity (bottom, right) with sensitivity updates for perturbations $p_2\in [-0.1,0.1]$. Please note that not all perturbations are feasible.} \label{Fig:Ausweichen:4} \end{figure} \end{example} Naturally, these examples can only provide a small idea of how optimal control techniques can be used to control vehicles and many extensions and applications to more complicated scenarios exist. The problem of controlling an autonomous vehicle by means of optimal control in real-time was addressed in \cite{Schmidt2014b}. Optimization based obstacle avoidance techniques can be found in \cite{Schmidt2014a,Schmidt2014c}. A different approach that exploits ideas from reachability analysis was used in \cite{WooEsf:2012:IFA_4040} to design a controller for a scale car that drives autonomously on a given track. Reachable sets turn out to be a powerful tool to detect and avoid collisions and to investigate the influence on perturbations on the future dynamic behavior. A comprehensive overview can be found in \cite{Kurzhanski2014}. \cite{Althoff2010a} uses reachability analysis with zonotopes and linearized dynamics for collision detection. Reachable set approximations through zonotopes have been obtained in \cite{Althoff2011f}. Verification approaches for collision avoidance systems using reachable sets are investigated in \cite{Nielsson2014,Xausa2015}. Virtual drives with gear shifts leading to mixed-integer optimal control problems have been considered in \cite{Ger05a,Ger06b,Kirches2010}. \section{Distributed Hierarchical Model-Predictive Control for Communicating Vehicles} \label{sec:control} While the trajectory generation for a single vehicle was in the focus of Section~\ref{sec:trajectory}, we are now discussing control strategies for several autonomous vehicles that interact with each other. To this end we assume that we have $N\in{\mathbb{N}}$ vehicles that can communicate through suitable communication channels and exchange information on positions, velocities, and predicted future behavior. The aim is to efficiently control the vehicles in a self-organized and autonomous way without prescribing a route. We suggest to use a distributed model-predictive control (MPC) strategy and couple it with a priority list or hierarchy, compare \cite{Kianfar2012,Pannek2013,Gross2014,Findeisen2014}. The priority list will rank the vehicles in an adaptive way depending on the current driving situation and give highly ranked vehicles priority while driving. Vehicles with low priority have to obey the motion of vehicles with higher priority. Model-predictive control is a well established feedback control paradigm, compare \cite{Gru11} for a detailed exposition and discussion of stability and robustness properties. The working principle of model predictive control is based on a repeated solution of (discretized) optimal control problems on a moving time horizon, see Figure~\ref{mg:FIG:mpc_concept}. The model-predictive control scheme was used in \cite{Ger09} to simulate the drive on long tracks, see the picture on the right in Figure~\ref{mg:FIG:mpc_concept} for an example. \begin{figure} \caption{Model-predictive control scheme: Repeated optimization on a prediction horizon of length $T$ and control acting on a control interval of length $\tau$ (left). 
Application to a test-drive (right): Local solutions of the MPC scheme, compare \cite{Ger09}.} \label{mg:FIG:mpc_concept} \end{figure} The MPC algorithm depends on a local time horizon of length $T>0$, sampling times $t_i$, $i\in{\mathbb{N}}_0$, and a shifting parameter $\tau > 0$. On each local time horizon $[t_i,t_i+T]$ an optimal control problem has to be solved for a given initial state, which may result from a measurement and represents the current state of the vehicle. Then the computed optimal control is implemented on the interval $[t_i,t_i+\tau]$ and the state is measured again at the sampling time $t_{i+1} = t_i+\tau$. After that, the process is repeated on the shifted time interval $[t_{i+1},t_{i+1}+T]$. This control paradigm provides a feedback control, since it reacts on the actual state at the sampling times $t_i$, $i\in{\mathbb{N}}_0$. Moreover, the control paradigm is very flexible since control and state constraints and individual objectives can be considered in the local optimal control problems, compare Section~\ref{sec:OCP}. The basic model-predictive control scheme has to be enhanced towards distributed vehicle systems. Herein, the computations take place on the individual vehicles and the relevant information is exchanged. In addition, a priority list has to be included. \subsection{Priority List} \label{sec:Priority} The priority list consists of a set of predefined rules to rank the vehicles. Vehicles with a lower priority have to take into account in their motion planning algorithm the motion of the vehicles with higher priority, while vehicles with the same priority can move independently, see \cite{Pannek2013}. To simplify the computations we assume that each vehicle only considers its neighboring vehicles as potential obstacles in the state constraints, i.e. vehicles which are inside a certain communication radius or distance. Vehicles outside this neighborhood are ignored in the optimization process. For each of the $N\in{\mathbb{N}}$ vehicles we introduce the set \[I_{NH}(i):=\left\lbrace j \in\left\lbrace 1,\ldots,N \right\rbrace\setminus\left\lbrace i\right\rbrace \mid j \text{ is a neighbor of } i \right\rbrace, \] which contains the indexes of the neighbors of the $i$-th vehicle. In order to avoid a conflict between vehicles, which could lead to a collision, we introduce a set of rules which assigns a distinct hierarchy level to each vehicle, i.e. every vehicle inside the $i$-th vehicle's neighborhood has either a higher or lower hierarchy level than the $i$-th vehicle. The vehicle with the lower hierarchy level has to consider the safety boundary of the vehicles with a higher hierarchy level in terms of state constraints, while a vehicle with a higher priority is allowed to ignore the safety boundaries of neighboring vehicles with a lower priority. The set \[I_{PR}(i):=\left\lbrace j \in I_{NH}(i)\mid j \text{ has a higher priority than } i \right\rbrace \] contains all indexes of the neighboring vehicles of the $i$-th vehicle, which have a higher hierarchy level than the $i$-th vehicle. If a vehicle has the highest possible priority this set is empty, if it has the lowest hierarchy level it contains all neighboring vehicles. This approach is also able to handle vehicles, which are not part of the communication network, e.g. an ambulance or non autonomous vehicles, by giving those vehicles the highest priority. The rules may also have a certain priority, e.g. traffic rules may have a higher priority than mathematically motivated rules. 
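As a small illustration of these index sets, the following sketch (hypothetical parameters) computes $I_{NH}(i)$ from a communication radius together with a deliberately simplistic placeholder priority rule, namely that the vehicle with the smaller remaining distance to its target is ranked higher; the traffic-rule based and adjoint-based criteria discussed in this subsection would replace this placeholder in practice.
\begin{verbatim}
import numpy as np

# Minimal sketch of the neighborhood and priority sets (placeholder rule:
# among two neighbors, the vehicle with the smaller remaining distance to
# its target gets the higher hierarchy level).
def neighborhood_and_priority(positions, targets, radius):
    positions = np.asarray(positions, dtype=float)
    targets = np.asarray(targets, dtype=float)
    N = len(positions)
    dist_to_goal = np.linalg.norm(positions - targets, axis=1)
    # I_NH(i): indexes of vehicles within the communication radius of vehicle i
    I_NH = {i: [j for j in range(N) if j != i
                and np.linalg.norm(positions[i] - positions[j]) <= radius]
            for i in range(N)}
    # I_PR(i): neighbors of vehicle i that have a higher priority than i
    I_PR = {i: [j for j in I_NH[i] if dist_to_goal[j] < dist_to_goal[i]]
            for i in range(N)}
    return I_NH, I_PR

# three vehicles, communication radius 50 m (all values hypothetical)
I_NH, I_PR = neighborhood_and_priority(
    positions=[(0.0, 0.0), (30.0, 5.0), (200.0, 0.0)],
    targets=[(100.0, 0.0), (60.0, 5.0), (250.0, 0.0)],
    radius=50.0)
print(I_NH, I_PR)
\end{verbatim}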
By this approach the computational effort is being reduced, because vehicles with no neighbors or vehicles with the highest hierarchy level are allowed to ignore other vehicles and are able to drive in an optimal way with fewer state constraints. It also reduces the potential for conflicts between vehicles, because of the distinct hierarchy. Depending on the scenario, the rules which assign the hierarchy level might change, e.g. if two vehicles meet at an intersection the vehicle coming from the right would have higher priority, but in a roundabout scenario the vehicle inside the roundabout would have the highest hierarchy level. To identify the priority we first considered traffic rules in our analysis. If those rules fail to assign a distinct priority we then used a mathematically motivated rule which is based on the adjoints computed by solving the optimal control problem. By using the adjoints for the controls we determined which vehicle would have the higher cost if it would deviate from its optimal trajectory. For a fixed set of priority rules we get the following distributed hierarchical model-predictive control algorithm:\\ \\ \begin{tabular}{|l|} \hline \\ \Large{\textbf{Distributed Hierarchical MPC Algorithm}} \\ \\ \hline \\ \textbf{Input:} prediction horizon, control horizon, set of priority rules\\ \\ \hline \\ \begin{minipage}{1.0\textwidth} \begin{description} \item[\textbf{1. }] Determine current states of all vehicles. \\ \item[\textbf{2a.}] Compute in parallel the optimal driving paths of all vehicles with respect to the neighborhood relations and hierarchy levels. \\ \item[\textbf{2b.}] For all vehicles \begin{description} \item[(i) ] reset all previous neighborhood relations. \item[(ii) ] screen for neighboring vehicles. \item[(iii)] submit current states and optimal driving paths to all neighbors. \end{description} \item[\textbf{2c.}] For all vehicles \begin{description} \item[(i)] reset all previous priority relations. \item[(ii)] apply the priority rules and assign the appropriate hierarchy levels. \end{description} \item[\textbf{3. }] Apply the computed optimal control on the given control horizon and repeat on shifted time horizon. \end{description} \end{minipage} \\ \\ \hline \end{tabular} The computation of the optimal trajectory for a vehicle stops if the vehicle is close to the destination or if a fixed time limit is exceeded. The advantage of a model-predictive control approach is that after each iteration it is possible to update the neighborhood relations and the hierarchy levels. \subsection{Optimal Control Problems on Prediction Horizon} \label{sec:OCP-MPC} In Step 2a of the distributed hierarchical MPC algorithm each of the $N$ vehicles has to solve an individual optimal control problem of type \eqref{EQ:OCP:OBJ} - \eqref{EQ:OCP:4}, whose details will be defined in this section. We again use the dynamics (\ref{EQ:1a})-(\ref{EQ:1c}), (\ref{EQ:2a}) and (\ref{EQ:2b}) with control vectors $u^{[j]}=(w^{[j]},a^{[j]})^\top$ and state vectors $z^{[j]}=(x^{[j]},y^{[j]},\psi^{[j]},v^{[j]},\delta^{[j]})^\top$ for the vehicles $j=1,\ldots,N$. We assume that the MPC scheme has proceeded until the sampling point $t_i$ with states $\bar z^{[j]} = z^{[j]}](t_i)$, $j=1,\ldots,N$. Moreover, we assume that each vehicle $j\in\{1,\ldots,N\}$ has a given target position $(x_\star^{[j]},y_\star^{[j]})$ that it aims to reach. Let $I_{PR}(j)(t_i)$, $j\in \{1,\ldots,N\}$, denote the priority sets of the vehicles at time $t_i$. 
Then the j-th vehicle has to solve the following optimal control problem on the local time horizon $[t_i,t_i+T]$ in Step 2a: {\em Minimize \begin{align}\label{EQ:MPC:OBJ} J(z^{[j]},u^{[j]}) &= \alpha_1\,\left\Vert\left( \begin{array}{c} x^{[j]}(t_i+T) \\ y^{[j]}(t_i+T) \end{array} \right) - \left( \begin{array}{c} x^{[j]}_\star \\ y^{[j]}_\star \end{array} \right)\right\Vert^2 \nonumber \\ & \quad + \alpha_2\,\int_{t_i}^{t_i+T} a^{[j]}(t)^2dt + \alpha_3\,\int_{t_i}^{t_i+T} w^{[j]}(t)^2dt \end{align} \indent subject to the constraints (\ref{EQ:1a})-(\ref{EQ:1c}), (\ref{EQ:2a}), (\ref{EQ:2b}) for $z^{[j]}$ and $u^{[j]}$, $z^{[j]}(t_i) =\bar z^{[j]}$ and \begin{displaymath} v^{[j]}(t) \in [v_{min}^{[j]}, v_{max}^{[j]}], \quad a^{[j]}(t) \in [a_{min}^{[j]},a_{max}^{[j]}], \quad w^{[j]}(t) \in [-w_{max}^{[j]},w_{max}^{[j]}], \end{displaymath} \indent and \begin{align*} \left(\begin{array}{c} x^{[j]}(t) \\ y^{[j]}(t) \end{array}\right) & \in \Omega_{r} \cap \Omega_{c}^{[j]}(t) \end{align*} \indent for $t\in [t_i,t_i+T]$. } Herein, the set $\Omega_r \subseteq {\mathbb{R}}^2$ defines state constraints imposed by the road. The set $\Omega_{c}^{[j]}(t)$ defines time dependent collision avoidance constraints imposed by vehicles with higher priority level, i.e. \begin{displaymath} \Omega_{c}^{[j]}(t) = \bigcap_{k\in I_{PR}(j)(t_i)} \left\{ \left(\begin{array}{c} x \\ y \end{array}\right)\in {\mathbb{R}}^2 \;\Bigg|\; \left(\begin{array}{c} x - x_c^{[k]}(t) \\ y - y_c^{[k]}(t) \end{array}\right)^\top Q^{[j]}(\psi^{[k]}(t)) \left(\begin{array}{c} x - x_c^{[k]}(t) \\ y - y_c^{[k]}(t) \end{array}\right) \geq 1 \right\}. \end{displaymath} Herein, we assumed an ellipsoidal shape of the vehicles $k\in I_{PR}(j)(t_i)$ with half radii $r_x^{[k]}>0$, $r_y^{[k]} > 0$, matrix \begin{displaymath} Q^{[k]}(\psi) := S(\psi) \left(\begin{array}{cc} \frac{1}{\left(r^{[k]}_x\right)^2} & 0 \\ 0 & \frac{1}{\left(r^{[k]}_y\right)^2} \end{array}\right) S(\psi)^\top \end{displaymath} with the rotation matrix $S$ from (\ref{EQ:Drehmatrix}) and the vehicle's center \begin{displaymath} \left(\begin{array}{c} x^{[k]}_c(t) \\ y^{[k]}_c(t) \end{array}\right) = \left(\begin{array}{c} x^{[k]}(t) \\ y^{[k]}(t) \end{array}\right) + S(\psi^{[k]}(t)) \left(\begin{array}{c} \ell^{[k]}/2 \\ 0 \end{array}\right), \end{displaymath} where $\ell^{[k]}$ is the length of vehicle $k$, compare (\ref{EQ:CarCenter}). The weights $\alpha_1^{[j]},\alpha_2^{[j]},\alpha_3^{[j]}>0$ can be used to individually weight the terms in the objective function (\ref{EQ:MPC:OBJ}) in order to model different drivers. The main goal of the vehicle is to reach its destination, i.e. to minimize the distance to its final destination $(x_\star^{[j]},y_\star^{[j]})$. The optimal control problem also allows to consider criteria such as fuel consumption or comfort in the minimization process by choosing moderate weights $\alpha_2^{[j]}$ and $\alpha_3^{[j]}$, which represent the cost for accelerating/braking and steering, respectively. \subsection{Simplifications} The approach in Section~\ref{sec:OCP-MPC} provides full flexibility to the motion of the vehicles as long as they stay on the road and obey collision avoidance constraints. As a result the local optimal control problems are very nonlinear and occasionally require a high computational effort, especially if many vehicles interact. 
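As a small illustration of the collision avoidance condition introduced above (and of why the resulting state constraints are nonlinear), the following sketch evaluates $(p-p_c^{[k]})^\top Q^{[k]}(\psi^{[k]})\,(p-p_c^{[k]})\geq 1$ for a single higher-priority vehicle $k$; the numerical values and function names are illustrative assumptions, not the implementation used here.
\begin{verbatim}
import numpy as np

def rotation(psi: float) -> np.ndarray:
    """Planar rotation matrix S(psi)."""
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s], [s, c]])

def ellipse_matrix(psi: float, rx: float, ry: float) -> np.ndarray:
    """Q(psi) = S(psi) diag(1/rx^2, 1/ry^2) S(psi)^T for half radii rx, ry."""
    S = rotation(psi)
    return S @ np.diag([1.0 / rx**2, 1.0 / ry**2]) @ S.T

def collision_constraint(p, rear_axle_k, psi_k, rx, ry, ell):
    """Constraint value at point p = (x, y); the point is feasible iff the value is >= 1.
    The center of vehicle k is shifted by half the vehicle length along its heading."""
    center = np.asarray(rear_axle_k) + rotation(psi_k) @ np.array([ell / 2.0, 0.0])
    d = np.asarray(p) - center
    return float(d @ ellipse_matrix(psi_k, rx, ry) @ d)

# Illustrative numbers only (same order of magnitude as the values used later).
val = collision_constraint(p=(6.0, 1.0), rear_axle_k=(0.0, 0.0), psi_k=0.1,
                           rx=3.5, ry=2.5, ell=4.0)
print("constraint value:", val, "feasible:", val >= 1.0)
\end{verbatim}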
One way to decrease the computational effort is to simplify the car model so that the cars follow precomputed feasible trajectories coming, e.g., from a navigation system, compare \cite{Geyer2014} and Figure~\ref{fig:Spline}. By restricting the vehicle's motion to a predefined curve, the degrees of freedom in the local optimal control problems are reduced considerably and real-time computations become realistic at the cost of reduced maneuverability. \begin{figure} \caption{Distributed hierarchical control of autonomous cars along preassigned paths.} \label{fig:Spline} \end{figure} Let the trajectories of the $N$ vehicles be defined by cubic spline curves \[\gamma^{[j]}(s) = \left(\begin{array}{c} x^{[j]}(s) \\ y^{[j]}(s) \end{array}\right),\qquad 0\leq s \leq L^{[j]}, j=1,\ldots,N,\] which interpolate given way-points $(x_i^{[j]},y_i^{[j]})$, $i=0,\ldots,M^{[j]}$, $M^{[j]}\in\mathbb{N}$, i.e. \begin{displaymath} \gamma^{[j]}(s_i^{[j]}) = \left(\begin{array}{c} x_i^{[j]} \\ y_i^{[j]} \end{array}\right), \qquad i=0,\ldots,M^{[j]}, j=1,\ldots,N. \end{displaymath} Herein, the curves are parametrized with respect to their arc lengths with \[s^{[j]}_0:=0,\quad s^{[j]}_{i+1}:=s^{[j]}_{i} + \sqrt{\left(x^{[j]}_{i+1}-x^{[j]}_{i} \right)^2 + \left( y^{[j]}_{i+1}-y^{[j]}_{i}\right)^2 },\quad i=0,\ldots,M^{[j]}-1,\] and $L^{[j]}:=s_{M^{[j]}}$. The initial value problem \begin{equation}\label{EQ:MPC:Spline} (s^{[j]})'(t)=v^{[j]}(t),\qquad s^{[j]}(0)= 0 \end{equation} describes the motion of the j-th vehicle alongside the spline curve $\gamma^{[j]}$, where $t$ denotes the time and $v^{[j]}(t)$ the velocity of the j-th vehicle at time $t$. We assume that we are able to control the velocity $v^{[j]}(t)\in [v^{[j]}_{min},v^{[j]}_{max}]$ of the vehicle, where $v^{[j]}_{min}$ and $v^{[j]}_{max}$ are the minimum and maximum velocity, respectively. The position of the j-th vehicle at time $t$ is then given by $\gamma^{[j]}(s^{[j]}(t)) = (x^{[j]}(s^{[j]}(t)),y^{[j]}(s^{[j]}(t)))^\top$. In Step 2a of the distributed hierarchical model predictive control algorithm, the j-th vehicle has to solve the following optimal control problem on the time horizon $[t_i,t_i + T]$, where $s^{[j]}_\star$ denotes the terminal arc-length for vehicle $j$ and $r>0$ is a given security distance. For simplicity we use a ball-shaped constraint for collision avoidance. {\em Minimize \begin{align*} \frac{1}{2} ( s^{[j]}(t_i + T) - s^{[j]}_\star )^2 \end{align*} \indent subject to the constraints \begin{align*} (s^{[j]})'(t) & = v^{[j]}(t), && t\in [t_i,t_i+T],\\ v^{[j]}(t) & \in [v^{[j]}_{min},v^{[j]}_{max}], && t\in [t_i,t_i+T],\\ \| \gamma^{[j]}(s^{[j]}(t)) - \gamma^{[k]}(s^{[k]}(t)) \|^2 & \geq r^2 , & & \forall k\in I_{PR}(j)(t_i), t\in [t_i,t_i+T]. \end{align*}} Instead of controlling the velocity directly it is also possible to control the acceleration of the vehicles, compare (\ref{EQ:2a}), or to introduce a delay as in (\ref{EQ:2aa}). \subsection{Numerical Results} We present numerical results for the distributed hierarchical model predictive controller in Sections~\ref{sec:Priority}, \ref{sec:OCP-MPC} using the full maneuverability of the cars. For a numerical solution it is important to choose suitable parameters. Especially the selection of the prediction horizon length $T$ and the control horizon length $\tau$ as well as the weights $\alpha_1,\alpha_2,\alpha_3$ is essential. If the prediction horizon is too short the reaction time might be to brief, if it is too large the computational effort is too high. 
The computational effort can also be reduced by the choice of the length of the control horizon, but its size also influences the approximation error. The choice of the weights for the controls is linked to the range of the controls and the preferred driving style. We tested our approach for several everyday scenarios with $N=2$ or $N=3$ cars. All cars are subject to the same car model, so they have the same dynamics and the same box constraints as well as the same limits for velocity and steering angle. We also assumed that the cars are driven in an equal way, i.e. the objective function \eqref{EQ:MPC:OBJ} of each car has the same weights. Furthermore, the cars are allowed to occupy the entire road. Since all cars have the same parameters, we suppress the index $j$ throughout and use the following values: \[ \begin{array}{lcllcllcl} T &=& 2 [s] & \qquad \tau &=& 0.1 [s] & \qquad && \\ \alpha_1 &=& 1 & \qquad \alpha_2 &=& 1 & \qquad \alpha_3 &=& 10 \\ v_{\min} &=& 1 [m/s] & \qquad v_{\max} &=& 10 [m/s] & \qquad &&\\ a_{\min} &=& -10 [m/s^2] & \qquad a_{\max} &=& 1.5 [m/s^2] & \qquad w_{\max} &=& 0.5 [rad] \\ \ell & = & 4 [m] & \qquad r_x & = & 3.5 [m] & \qquad r_y & = & 2.5 [m] \end{array} \]
\begin{example}[Scenario 1: Avoiding a parking car] In this scenario we consider two consecutive cars which drive in the same lane and direction as shown in Figure \ref{Scenario_1}. The car in front goes straight for some time and then parks in the right lane. The second car drives behind the first car with the same velocity until the car in front stops. The parked car is then considered to be an obstacle by the second car and an evasive maneuver is executed. After the second car has passed the first car, it changes from the left to the right lane again. \begin{figure} \caption{Scenario 1: Trajectory (left) and snapshots of the motion (pictures on the right).} \label{Scenario_1} \end{figure} \end{example}
\begin{example}[Scenario 2: Two cars driving through a narrow space] This time we examine two cars which move towards each other from opposite directions; both cars have to pass through a narrow space in the middle of the road as shown in Figure \ref{Scenario_2}. The car coming from below is closer to the obstacle. Therefore it drives through the narrow space first, while the second car slows down and waits until the first car has passed through the obstacle. Then the second car accelerates and also moves through the narrow space. \begin{figure} \caption{Scenario 2: Trajectory (left) and snapshots of the motion (pictures on the right).} \label{Scenario_2} \end{figure} \end{example}
\begin{example}[Scenario 3: Three cars at an intersection] For the last scenario we consider three cars which cross each other at an intersection, compare Figure \ref{Scenario_3}. In this case traffic rules apply and the car to the right of each car has the higher priority: the car coming from below has a higher hierarchy level than the car coming from the left, which in turn has a higher priority than the car coming from above. The car coming from above therefore has to drive onto the left lane of the road so that the car in the middle is able to turn left without collision. The car from below is allowed to ignore the other cars and is able to turn left without collision as shown in Figure \ref{Screenshots_3}.
\begin{figure} \caption{Scenario 3: Trajectories.} \label{Scenario_3} \end{figure} \begin{figure} \caption{Snapshots of scenario 3.} \label{Screenshots_3} \end{figure} \end{example} \section{A Tracking Controller for Spline Curves} \label{sec:Tracking} Once a reference track has been obtained, e.g., by the techniques in Section~\ref{sec:trajectory}, the task is to follow the track with a real car. To this end let the reference track (=desired track) be given by a cubic spline curve \begin{equation}\label{EQ:Spline} \gamma(t) := \left(\begin{array}{c} x_d(t) \\ y_d(t) \end{array}\right), \qquad t\in [t_0,t_f], \end{equation} which interpolates the solution obtained by one of the techniques in Section~\ref{sec:trajectory} at given grid points within $[t_0,t_f]$. The goal is to design a nonlinear feedback controller according to the flatness concept in \cite{Fliess95,Rotella2002,Martin2003}. The basic idea is to use inverse kinematics in order to find a feedback law for the control inputs of the car. To this end we consider the system of differential equations given by (\ref{EQ:1a})-(\ref{EQ:1c}), where we control the velocity $v$ and the steering angle $\delta$. We assume that we can measure the outputs \begin{displaymath} y_1 := x\qquad \mbox{and}\qquad y_2 := y, \end{displaymath} i.e. the (x,y)-position of the vehicle. By differentiation we obtain \begin{eqnarray} y_1' & = & x' = v \cos \psi, \label{EQ:Out:1a}\\ y_1'' & = & x'' = v' \cos \psi - v \psi'\sin \psi = v' \cos \psi - \frac{v^2}{\ell}\sin \psi \tan \delta,\label{EQ:Out:2a}\\ y_2' & = & y' = v \sin \psi, \label{EQ:Out:1b}\\ y_2'' & = & y'' = v' \sin \psi + v \psi'\cos \psi = v' \sin \psi + \frac{v^2}{\ell}\cos \psi \tan \delta.\label{EQ:Out:2b} \end{eqnarray} Multiplication of \eqref{EQ:Out:1a} by $\cos\psi$ and of \eqref{EQ:Out:1b} by $\sin\psi$, adding both equations, exploiting $\psi = \arctan(y'/x')$ and solving for the control $v$ yields \begin{eqnarray*} v = V(y_1,y_2) & := & y_1'\cos\left( \arctan\left(\frac{y_2'}{y_1'} \right)\right) + y_2'\sin\left( \arctan\left(\frac{y_2'}{y_1'} \right)\right) \\ & = & \sqrt{ (y_1')^2 + (y_2')^2}. \end{eqnarray*} Likewise, multiplication of \eqref{EQ:Out:2a} by $\sin\psi$ and of \eqref{EQ:Out:2b} by $\cos\psi$, subtracting equations, exploiting $\psi = \arctan(y'/x')$, $v = \sqrt{(y_1')^2 +(y_2')^2}$, and solving for the control $\delta$ yields \begin{eqnarray*} \delta = \Delta(y_1,y_2) & := & \arctan\left(\frac{\ell \left( y_2''\cos\left( \arctan\left(\frac{y_2'}{y_1'} \right)\right) - y_1''\sin\left( \arctan\left(\frac{y_2'}{y_1'}\right)\right)\right) }{v^2} \right) \\ & = & \arctan\left(\frac{\ell \left( y_2'' y_1' - y_1'' y_2'\right)}{\left(\left(y_{1}'\right)^2 + \left(y_{2}'\right)^2\right)^\frac{3}{2}} \right). \end{eqnarray*} If we introduce the reference coordinates $x_d$ and $y_d$ (=desired outputs) from (\ref{EQ:Spline}) into these formula, then the corresponding controls $v_d = V(x_d,y_d)$ and $\delta_d = \Delta(x_d,y_d)$ would track the reference input provided the initial value is consistent with the reference trajectory. However, in practice there will be deviations due to modeling errors or disturbances. Hence, we need a feedback control law that is capable of taking deviations from the reference input into account. 
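Before stating this law, the open-loop inversion derived above can be summarized in a short sketch that evaluates $v=\sqrt{(y_1')^2+(y_2')^2}$ and $\delta=\arctan\big(\ell\,(y_2''y_1'-y_1''y_2')/((y_1')^2+(y_2')^2)^{3/2}\big)$ along a sampled reference curve; the finite-difference derivative estimation and all names are illustrative assumptions.
\begin{verbatim}
import numpy as np

def flat_controls(y1_dot, y2_dot, y1_ddot, y2_ddot, ell):
    """Inverse kinematics of the kinematic single-track model:
    v     = sqrt(y1'^2 + y2'^2),
    delta = arctan( ell * (y2'' y1' - y1'' y2') / (y1'^2 + y2'^2)^(3/2) )."""
    v = np.hypot(y1_dot, y2_dot)
    delta = np.arctan(ell * (y2_ddot * y1_dot - y1_ddot * y2_dot) / v**3)
    return v, delta

def nominal_controls_from_samples(xd, yd, dt, ell):
    """Estimate derivatives of a sampled reference (x_d, y_d) by finite differences
    and evaluate the flatness-based controls along the track."""
    x1, x2 = np.gradient(xd, dt), np.gradient(yd, dt)
    x1d, x2d = np.gradient(x1, dt), np.gradient(x2, dt)
    return flat_controls(x1, x2, x1d, x2d, ell)

# Illustrative reference: a circular arc of radius 20 m traversed at constant rate.
t = np.linspace(0.0, 10.0, 501)
xd, yd = 20.0 * np.cos(0.25 * t), 20.0 * np.sin(0.25 * t)
v_d, delta_d = nominal_controls_from_samples(xd, yd, dt=t[1] - t[0], ell=2.8)
\end{verbatim}
In the nominal, disturbance-free case these open-loop controls reproduce the reference exactly; the feedback terms introduced next correct for deviations from it.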
Such a feedback control law reads as follows, compare \cite{Rotella2002} for general flat systems: \begin{eqnarray*} & & K_v(y_1,y_2,y_{1,d},y_{2,d}) := \sqrt{\left( y_{1,d}' - k_1\left(y_{1} - y_{1,d}\right) \right)^2 + \left( y_{2,d}' - k_2\left( y_2 - y_{2,d}\right) \right)^2}, \end{eqnarray*} and \begin{eqnarray*} & & K_\delta(y_1,y_2,y_{1,d},y_{2,d}) := \\ & & \qquad \arctan\left(\ell\,\frac{\left( y_{2,d}'' - k_5\left(y_{2}' - y_{2,d}'\right) - k_6\left( y_2 - y_{2,d}\right) \right)\,y_{1}'}{\left(\left(y_{1}'\right)^2 + \left(y_{2}'\right)^2\right)^\frac{3}{2}} \right. \\ & &\qquad\qquad\qquad \left. -\ell \frac{\left( y_{1,d}'' - k_3\left(y_{1}' - y_{1,d}'\right) - k_4\left( y_1 - y_{1,d}\right) \right)\,y_{2}'}{\left(\left(y_{1}'\right)^2 + \left(y_{2}'\right)^2\right)^\frac{3}{2}} \right) . \end{eqnarray*} Herein, $k_1,\ldots,k_6$ are constants that influence the response time. $y_1 \ (=x)$ and $y_2\ (=y)$ are the actual measurements of the midpoint position of the car's rear axle, and $y_{1,d}\ (=x_d)$ and $y_{2,d} \ (=y_d)$ are the reference coordinates, respectively. In addition, the derivatives $y_1'=x'$ and $y_2'=y'$ need to be estimated as well, for instance by finite difference approximations using the position measurements. If we insert this feedback control law into our system, we obtain the closed loop system \begin{eqnarray} x'(t) & = K_v(x(t),y(t),x_d(t),y_d(t))\cos\psi(t), \label{EQ:Feedback:1a}\\ y'(t) & = K_v(x(t),y(t),x_d(t),y_d(t))\sin\psi(t), \label{EQ:Feedback:1b}\\ \psi'(t) & =\frac{K_v(x(t),y(t),x_d(t),y_d(t))}{\ell} \tan K_\delta(x(t),y(t),x_d(t),y_d(t)).\label{EQ:Feedback:1c} \end{eqnarray} To study the stability behavior we assume $k_1=k_2,k_3=k_5,k_4=k_6$ and linearize the right-hand side of the system with respect to the reference trajectory, which gives us the time-variant matrix \begin{equation*} A = \begin{pmatrix} -\frac{k_1\,\left(x_d'\right)^2}{v_d} & -\frac{k_1\,x_d'\,y_d'}{v_d} & -y_d'\\ -\frac{k_1\,x_d'\,y_d'}{v_d} & -\frac{k_1\,\left(y_d'\right)^2}{v_d} & x_d'\\ \frac{k_1\,\left(y_d''\,\left(x_d'\right)^2 - x_d''\,x_d'\,y_d'\right) - k_3\,y_d'\,v_d^2}{v_d^4} & \frac{k_1\left(y_d''\,x_d'\,y_d' - x_d''\,\left(y_d'\right)^2\right) - k_3\,y_d'\,v_d^2}{v_d^4} & -k_2 \end{pmatrix} , \end{equation*} with $v_d:=\sqrt{\left(x_d'\right)^2 + \left(y_d'\right)^2}$, where we suppressed the argument $t$ for notational simplicity. The characteristic polynomial of the matrix is \begin{equation*} \det(\lambda\,I_3 - A)=\left(\lambda^2+k_3\lambda+k_4\right)\left(\lambda+k_1\right). \end{equation*} It follows that if $\lambda+k_1$ and $\lambda^2+k_3\lambda+k_4$ are Hurwitz polynomials, then the linearized system is asymptotically stable and thereby the closed-loop nonlinear system is locally asymptotically stable. Measurement errors can be modeled by introducing white noise $(x_{wn},y_{wn})^\top$ and $(x'_{wn},y'_{wn})^\top$ into the feedback control law, i.e. by changing $x$ and $y$ to $x-x_{wn}$ and $y-y_{wn}$ and by changing $x'$ and $y'$ to $x'-x'_{wn}$ and $y'-y'_{wn}$. \begin{example}[Tracking controller] To test the constructed feedback controller we consider a constant reference velocity $v_d=11.5\,[\frac{m}{s}]$ and take $\ell=2.8\,[m]$ to be the distance from the rear axle to the front axle. For the parameters in the closed loop system we chose $k_1=k_2=1$, $k_3=k_4=k_5=k_6=2$.
For the system with white noise we chose a random number generator with values in the interval $[-10,10)$ (in meters) for the $(x,y)$-position and in the interval $[-2,2]$ (in meters per second) for the velocities $(x',y')$. In practice it is possible to measure the actual position with a tolerance of below one millimeter, but for illustration purposes we use a higher tolerance to test the tracking ability of the controller. For the system with an offset we changed the initial state from $(-26,-1)^\top$ to $(-16,9)^\top$. In all cases the control values of the feedback control law are projected into the feasible control set $[0,50]$ for the velocity $v$ and to $[-\pi/6,\pi/6]$ for the steering angle $\delta$. Figure~\ref{Fig:EX1a} shows the simulations with the feedback controlled system (\ref{EQ:Feedback:1a}) - (\ref{EQ:Feedback:1c}) for a given track at a sampling rate of 20 Hz. For better visibility only every 5th data point is plotted in Figure~\ref{Fig:EX1a}. In all cases the controller is able to accurately follow the track. \begin{figure}\label{Fig:EX1a} \end{figure} \end{example} \end{document}
\begin{document} \author[M. P. Cohen]{Michael P. Cohen} \email[1]{[email protected]} \author[T. Johnson]{Todd Johnson} \email[2]{[email protected]} \author[A. Kral]{Adam Kral} \email[3]{[email protected]} \author[A. Li]{Aaron Li} \email[4]{[email protected]} \author[J. Soll]{Justin Soll} \email[5]{[email protected]} \address{Department of Mathematics and Statistics, Carleton College, One North College Street, Northfield, MN 55057} \subjclass[2010]{54A05, 54A35, 03E15, 03E65, 06F05} \title{A Kuratowski closure-complement variant whose solution is independent of ZF} \begin{abstract} We pose the following new variant of the Kuratowski closure-complement problem: How many distinct sets may be obtained by starting with a set $A$ of a Polish space $X$, and applying only closure, complementation, and the $d$ operator, as often as desired, in any order? The set operator $d$ was studied by Kuratowski in his foundational text \textit{Topology: Volume I}; it assigns to $A$ the collection $dA$ of all points of second category for $A$. We show that in ZFC set theory, the answer to this variant problem is $22$. In a distinct system equiconsistent with ZFC, namely ZF+DC+PB, the answer is only $18$. \end{abstract} \maketitle \section*{Introduction} Kuratowski's \textit{closure-complement theorem}, a result of his 1922 thesis, states that at most $14$ distinct sets are obtainable by applying the operations of closure and complementation to any particular initial set $A$ in any topological space $X$. The algebraic result underlying this theorem is that the monoid generated by the set operators $k$ (closure) and $c$ (complement) has cardinality $\leq 14$. This surprising and amusing result has inspired a substantial literature of generalizations and variants; see for example \cite{Gaida_Eremenko_1974a}, \cite{Gardner_Jackson_2008a}, \cite{Sherman_2010a}, \cite{Banakh_2018a}, \cite{CCGS_2020a} or visit Bowron's website \textit{Kuratowski's Closure-Complement Cornucopia} \cite{Bowron_2012a} for a comprehensive list of relevant literature. The purpose of this note is to give an example of a natural variant of the Kuratowski closure-complement problem, whose solution turns out to be independent of ZF set theory. To pose the problem, we let $X$ be a topological space. Given a subset $A\subseteq X$, we say that a point $p\in X$ is a \textit{point of second category} for $A$ if whenever $U\subseteq X$ is an open neighborhood of $p$, we have $U\cap A$ nonmeager in $X$. Then, we define \begin{center} $dA=\{p\in X: p$ is a point of second category for $A\}$. \end{center} The operator $d$ was apparently first defined by Kuratowski himself in his foundational text \textit{Topology Vol. 1} (\cite{Kuratowski_1966a}, first edition 1933) and is associated with the application of Baire category methods in general topology. The operator has handy applications in important descriptive set theoretic results, especially as it appears in Pettis's lemma which states that if $A,B\subseteq X$ have the \textit{Baire property} (Definition \ref{defn_BP}), then $AB\supseteq id(A)id(B)$ (where $i$ denotes the topological interior operator), and thus $i(AB)$ is nonempty \cite{Pettis_1950a}. This lemma implies that every Borel-measurable homomorphism between Polish (i.e., separable and completely metrizable) topological groups is automatically continuous (see \cite{Rosendal_2009a} for an admirable survey of this and many related results). We ask the following. 
\begin{question} How many distinct sets may be obtained by starting with a subset $A$ of a Polish space $X$, and applying only the operators $k$, $c$, and $d$, as often as desired, in any order? Equivalently, what is the maximal cardinality of the monoid of set operators generated by $k$, $c$, and $d$? \end{question} For the remainder of this paper, $X$ denotes an arbitrary Polish (separable completely metrizable) space, $k$ and $i$ the closure and interior operators on $X$ respectively, and $c$ the complementation operator. We recall the DeMorgan's law for interiors and closures which states that $kc=ci$, or equivalently that $ic=ck$. We let $\mathcal{KD}$ denote the monoid of set operators on $X$ generated by $k$, $c$, and $d$. We first answer the question in the traditional domain, where we assume the usual ZF axioms plus the Axiom of Choice (AC). \begin{thm}[ZFC] \label{thm1} The cardinality of $\mathcal{KD}$ is $\leq 22$. Moreover, if $X=\mathbb{R}$ with the usual topology, then there exists a set $A\subseteq\mathbb{R}$ for which $\#\{oA:o\in\mathcal{KD}\}=22$. \end{thm} On the other hand, weak forms of AC are not sufficient to obtain the solution above. We denote by DC the Axiom of Dependent Choice, which is equivalent over ZF to the Baire Category Theorem. We denote by PB the axiom that ``every subset of every Polish space has the Baire property,'' and we recall the definition of the Baire property below. \begin{defn} \label{defn_BP} A set $A\subseteq X$ has the \textit{Baire property} if there exists an open set $U\subseteq X$ for which the symmetric difference $A\Delta U=(A-U)\cup(U-A)$ is a meager set. \end{defn} In the seminal paper \cite{Solovay_1970a}, Solovay showed that if ZF is consistent with the existence of an inaccessible cardinal, then ZF+DC+PB is consistent. In \cite{Shelah_1984a}, Shelah improved this result to show that ZFC and ZF+DC+PB are equiconsistent axiom systems. Our second theorem below shows that the solution to this natural extension of the Kuratowski problem differs in this alternative axiom system, and thus the solution is independent of ZF. \begin{thm}[ZF+DC+PB] \label{thm2} The cardinality of $\mathcal{KD}$ is $\leq 18$. Moreover, if $X=\mathbb{R}$ with the usual topology, then there exists a set $A\subseteq\mathbb{R}$ for which $\#\{oA:o\in\mathcal{KD}\}=18$. \end{thm} \section*{Preliminaries and Sets with the Baire Property} First we establish the basic properties of the operator $d$, most of which are observed without proof in \cite{Kuratowski_1966a} 4.IV. \begin{lemma}[ZF+DC] \label{lemma_main} Let $X$ be a Polish space, and $A,B\subseteq X$. \begin{enumerate}[(a)] \item $A\subseteq B$ implies $dA\subseteq dB$. \item $dA$ is closed and therefore $dA\subseteq kA$. \item $A$ open implies $dA=kA$. \item $d(A\cup B)=dA\cup dB$. \item $A-dA$ is meager. \item $A$ is meager in $X$ if and only if $dA=\emptyset$. \item $ddA=dA$. \item $dkA=kikA$. \item $kidA=dA$. \end{enumerate} \end{lemma} \begin{proof} \textit{(a)} Immediate from the definition of $d$. \textit{(b)} Suppose $p\in X$ is a limit point of $dA$. Given an arbitrary open neighborhood $U$ of $p$, it means there is $x\in dA$ with $x\in U$. Since $x$ is a point of second category for $A$, $U\cap A$ is nonmeager in $X$, whence $p\in dA$. \textit{(c)} We have $dA\subseteq kA$ by (c). Conversely if $p\in kA$, then each open neighborhood $U$ of $p$ will satisfy $U\cap A\neq\emptyset$. 
Assuming $A$ is open, then $U\cap A$ is a nonempty open set and hence nonmeager by the Baire Category Theorem (equivalent to DC). Thus $p\in dA$. \textit{(d)} By (a), we have $dA\cup dB\subseteq d(A\cup B)$. Conversely suppose $p\notin dA\cup dB$. Then there are open neighborhoods $U,V$ of $P$ so that $U\cap A$ is meager and $V\cap B$ is meager. But then $U\cap V$ is an open neighborhood of $p$ whose intersection with $A\cup B$ is meager, because $(U\cap V)\cap(A\cup B)\subseteq (U\cap A)\cup (V\cap B)$, where the latter is a union of two meager sets and hence meager. \textit{(e)} Since $X$ is Polish, we may find a countable family of open sets $\{B_i:i\in\mathbb{N}\}$ which comprise a basis for the topology of $X$. Form the subcollection $\mathcal{C}=\{B_i:B_i\cap A$ is meager$\}$. For each $x\in A-dA$, since $x$ is not a point of second category for $A$, we may find a neighborhood $B_i$ for which $x\in B_i$ and $B_{i}\cap A$ is meager, so $B_i\in\mathcal{C}$. This shows $A-dA\subseteq\bigcup_{B_i\in\mathcal{C}}B_i\cap A$, so $A-dA$ is a subset of a countable union of meager sets and hence meager. \textit{(f)} If $A$ is meager, then $dA=\emptyset$ immediately from the definition of $d$. Conversely, if $dA=\emptyset$, then $A=A-dA$ and $A$ is meager by (e). \textit{(g)} By (b), $ddA\subseteq kdA=dA$. By (a), (d), (e), and (f), $dA\subseteq d((A-dA)\cup dA)=d(A-dA)\cup ddA=ddA$. \textit{(h)} Note that $kA-ikA$ is a closed nowhere dense set and hence meager. So by (c), (d) and (f), we have $dkA=d((kA-ikA)\cup ikA)=d(kA-ikA)\cup dikA=\emptyset\cup kikA=kikA.$ \textit{(i)} By (b), (g), and (h), $kidA=kikdA=dkdA=ddA=dA$. \end{proof} \begin{lemma}[ZF+DC] \label{lemma_BP} Let $X$ be a Polish space, and $A\subseteq X$. Then the following are equivalent. \begin{enumerate}[(a)] \item $A$ has the Baire property. \item $dA-A$ is meager. \item $idcA=cdA$. \item $idA=cdcA$. \item $dA=cidcA$. \item $dcA=kcdA$. \end{enumerate} \end{lemma} \begin{proof} \textit{(a $\Rightarrow$ b)} Assume $A$ has the Baire property, and find $U\subseteq X$ open so that $M=A\Delta U$ is a meager set. We have $A=M\Delta U$, and therefore $dA=d((M-U)\cup(U-M))\cup\emptyset=d(M-U)\cup d(U-M)\cup d(M\cap U)=\emptyset\cup d((U-M)\cup (M\cap U))=dU$ by Lemma \ref{lemma_main} (d) and (f). So $dA=dU$, and by applying the same argument, we obtain $dcA=dcU$, because $M=(cA)\Delta(cU)$. Now we have \begin{align*} dA-A &= (dA-A-dcA)\cup((dA-A)\cap dcA)\\ &\subseteq (cA-dcA)\cup(dA\cap dcA)\\ &= (cA-dcA)\cup(dU\cap dcU). \end{align*} The set $cA-dcA$ is meager by Lemma \ref{lemma_main} (e), and the set $dU\cap dcU\subseteq kU\cap kcU=kU-U$ is nowhere dense. So $dA-A$ is a subset of the union of two meager sets, hence meager. \textit{(b $\Rightarrow$ a)} Assume $dA-A$ is meager. Set $U=idA$, so $U$ is an open subset of $X$. We have $U-A$ meager because $U-A\subseteq dA-A$. Also, $A-U=(A-dA)\cup(dA-U)$, where $A-dA$ is meager by Lemma \ref{lemma_main} (e), and $dA-U=dA-idA$ is closed nowhere dense, so $A-U$ is meager. Therefore $A\Delta U$ is meager, and we conclude $A$ has the Baire property. \textit{(b $\Rightarrow$ c)} Assume $dA-A$ is meager. Applying Lemma \ref{lemma_main} (b), (c), (d), (e), (f), and (i), as well as the DeMorgan's law for interior/closure, we have \begin{align*} idcA &= id((cA-dA)\cup(cA\cap dA))\\ &= i[d(cA-dA)\cup d(cA\cap dA)]\\ &= i[d(cdA-A)\cup\emptyset\cup d(dA-A)]\\ &= i[d(cdA-A)\cup d(A-dA)\cup\emptyset]\\ &= id((cdA-A)\cup (cdA\cap A))\\ &= idcdA\\ &= ikcdA\\ &= ckidA\\ &= cdA. 
\end{align*} \textit{(c $\Rightarrow$ d)} Assume $idcA=cdA$. Note that $idcA\cup idA=idX=X$ by Lemma \ref{lemma_main} (d), so $idA\supseteq cidcA=kcdcA\supseteq cdcA$. Conversely, $cdA\cup cdcA=idcA\cup cdcA=i(dcA\cup cdcA)=iX=X$, so $cdcA\supseteq ccdA=dA\supseteq idA$. So $idA=cdcA$. \textit{(d $\Rightarrow$ e)} Assume $idA=cdcA$. Taking the closure of both sides, we have $dA=kidA=kcdcA=cidcA$. \textit{(e $\Rightarrow$ f)} Assume $dA=cidcA$. Since $dA\cup dcA=d(A\cup cA)=X$ by Lemma \ref{lemma_main} (d), we have $dcA\supseteq cdA$, and since $dcA$ is closed (Lemma \ref{lemma_main} (b)) we also have $dcA\supseteq kcdA$. Conversely, we have $cidcA\cup kcdA=dA\cup kcdA\supseteq dA\cup cdA=X$, so $kcdA\supseteq ccidcA=idcA$. Since $kcdA$ is closed, we also have $kcdA\supseteq kidcA=dcA$ by Lemma \ref{lemma_main} (i). So $dcA=kcdA$. \textit{(f $\Rightarrow$ b)} Assume $dcA=kcdA$. To show $dA-A$ is meager, by Lemma \ref{lemma_main} (f) it suffices to show that $d(dA-A)=\emptyset$. We have $d(dA-A)=d(dA\cap cA)\subseteq (ddA\cap dcA)$ by Lemma \ref{lemma_main} (a). Therefore by assumption, $d(dA-A)\subseteq dA\cap kcdA$, so $id(dA-A)\subseteq idA\cap ikcdA=idA\cap ckidA=idA\cap cdA\subseteq dA\cap cdA=\emptyset$. Therefore $d(dA-A)=kid(dA-A)=k\emptyset=\emptyset$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm2}] We work in ZF+DC+PB. We denote by $\mathcal{K}_0$ the monoid of set operators generated by $k$ and $i$, and by $\mathcal{KD}_0$ the monoid of set operators generated by $k$, $i$, and $d$. Since $i=ckc$, $\mathcal{K}_0$ and $\mathcal{KD}_0$ are submonoids of $\mathcal{KD}$. The proof of Kuratowski's closure-complement theorem relies on observing that $kiki=ki$ and $ikik=ik$, and therefore \begin{center} $\mathcal{K}_0=\{e, i, k, ki, ik, iki, kik\}$, \end{center} \noindent where $e$ denotes the identity operator. $\mathcal{K}_0$ is often called the monoid of \textit{even operators} in the closure-complement problem. Using the tools of Lemma \ref{lemma_main}, we compute the (no more than seven) members of $\mathcal{K}_0d$: $d$, $id$, $kd=d$, $kid=d$, $ikd=id$, $ikid=id$, and $kikd=d$. So $\mathcal{KD}_0\supseteq \mathcal{K}_0d=\{d,id\}$. In fact, we have the following set equality: \begin{center} $\mathcal{KD}_0=\{e, i, k, ki, ik, iki, kik, d, id\}$ \end{center} \noindent which is easily verified by using Lemma \ref{lemma_main} to check that $i\mathcal{KD}_0\subseteq\mathcal{KD}_0$, $k\mathcal{KD}_0\subseteq\mathcal{KD}_0$, and $d\mathcal{KD}_0\subseteq\mathcal{KD}_0$. So there are nine even operators in $\mathcal{KD}_0$. Applying $c$ to the left (or right) of $\mathcal{KD}_0$ yields nine more operators, the odd operators, as depicted in Figure \ref{fig1}. \begin{figure} \caption{Operators in $\mathcal{KD}=\mathcal{KD}_0\cup c\mathcal{KD}_0$.} \label{fig1} \end{figure} The equalities in the last two entries in the table of Figure \ref{fig1} hold because every set in $X$ has the Baire property, allowing us to apply Lemma \ref{lemma_BP} universally. Noting that $c\mathcal{KD}_0=\mathcal{KD}_0c$ consists of the nine odd operators in the right column, we claim that $\mathcal{KD}=\mathcal{KD}_0\cup c\mathcal{KD}_0$, which proves that $\#\mathcal{KD}\leq 18$, as claimed. 
To see this, simply check the following set equalities which show that $\mathcal{KD}_0\cup c\mathcal{KD}_0$ is closed under multiplication from the left by the generating operators $k$, $c$, and $d$: \begin{center} $k(\mathcal{KD}_0\cup c\mathcal{KD}_0) = k\mathcal{KD}_0\cup kc\mathcal{KD}_0=\mathcal{KD}_0\cup ci\mathcal{KD}_0=\mathcal{KD}_0\cup c\mathcal{KD}_0$;\\ $c(\mathcal{KD}_0\cup c\mathcal{KD}_0) = c\mathcal{KD}_0\cup cc\mathcal{KD}_0=\mathcal{KD}_0\cup c\mathcal{KD}_0$;\\ $d(\mathcal{KD}_0\cup c\mathcal{KD}_0) = d\mathcal{KD}_0\cup dc\mathcal{KD}_0=\mathcal{KD}_0\cup cid\mathcal{KD}_0=\mathcal{KD}_0\cup c\mathcal{KD}_0$. \end{center} To prove the second statement of the theorem, we can take for example \begin{center} $A=(1,2)\cup(2,3)\cup\{4\}\cup[(5,6)\cap\mathbb{Q}]\cup[(6,7)\cap(\mathbb{R}-\mathbb{Q})]$ \end{center} \noindent and see that application of the $9$ even operators in $\mathcal{KD}_0$ yields $9$ distinct sets as depicted in Figure \ref{fig4}. Taking complements, we get $18$ distinct sets from the $18$ distinct operators in $\mathcal{KD}$. \begin{figure} \caption{Even operators applied to $A$.} \label{fig4} \end{figure} \end{proof} \section*{Vitali Sets and Distinguishing Words Under AC} Recall that a \textit{Vitali set} is a subset $V\subseteq\mathbb{R}$ consisting of exactly one representative from each coset in the quotient group $\mathbb{R}/\mathbb{Q}$. Vitali sets can be constructed by invoking the Axiom of Choice, and do not have the Baire property. \begin{prop}[ZFC] \label{prop1} Let $W_0\subseteq\mathbb{R}$ be open. Then there exists a Vitali set $V$ such that $dV=kW_0$. \end{prop} \begin{proof} Let $\alpha$ be an arbitrary irrational real number, and let $H=\langle\mathbb{Q},\alpha\rangle$ be the additive subgroup of $\mathbb{R}$ generated by $\mathbb{Q}$ and $\alpha$, so $H=\{q+n\alpha:q\in\mathbb{Q},n\in\mathbb{Z}\}$. Let $V^1$ be a set consisting of exactly one representative from each coset of $\mathbb{R}/H$. For each $n\in\mathbb{Z}$, we define the sets \begin{center} $P_n=\{v+n\alpha+\mathbb{Q}:v\in V^1\}\subseteq\mathbb{R}/\mathbb{Q}$ \end{center} \noindent and \begin{center} $R_n=\bigcup P_n\subseteq\mathbb{R}$. \end{center} We first claim that $\bigcup_{n\in\mathbb{Z}}P_n=\mathbb{R}/\mathbb{Q}$. The left-to-right inclusion is by definition. For the right-to-left inclusion, consider an arbitrary coset $x+\mathbb{Q}$ in $\mathbb{R}/\mathbb{Q}$. There exists a unique element $v\in V^1$ for which $x+H=v+H$, i.e. $x-v\in H$. Therefore we may write $x-v=q+n\alpha$ for some $q\in\mathbb{Q}$ and some $n\in\mathbb{Z}$. But then $x-q=v+n\alpha$, whence $x+\mathbb{Q}=v+n\alpha+\mathbb{Q}\in P_n$. So $\mathbb{R}/\mathbb{Q}\subseteq\bigcup_{n\in\mathbb{Z}}P_n$ as claimed. Moreover, the sets $P_n$ are pairwise disjoint. For if $P_n\cap P_m\neq\emptyset$, it means that there are $v,w\in V^1$ for which $v+n\alpha+\mathbb{Q}=w+m\alpha+\mathbb{Q}$, i.e. $v+n\alpha=w+m\alpha+q$ for some $q\in\mathbb{Q}$. But then $v-w=(m-n)\alpha+q\in H$, so $v=w$ by construction of $V^1$. In turn, this implies $(m-n)\alpha=-q\in\mathbb{Q}$, and hence $n=m$ since $\alpha$ is irrational. The preceding two paragraphs imply that the family $\{P_n:n\in\mathbb{Z}\}$ forms a partition of $\mathbb{R}/\mathbb{Q}$. Consequently, the sets $R_n=n\alpha+R_0$ comprise a partition of $\mathbb{R}$, and we conclude that each set $R_n$ is nonmeager in $\mathbb{R}$. Next, let $\{B_n:n\in\mathbb{Z}\}$ be a countable basis of open sets for the topology on $W_0$. 
For each $n\in\mathbb{Z}$, each coset $v+n\alpha+\mathbb{Q}$ in $P_n$ is dense in $\mathbb{R}$, and hence meets $B_n$. So let $V_n$ be a set consisting of exactly one element chosen from each intersection $(v+n\alpha+\mathbb{Q})\cap B_n$ ($v\in V^1$). Then $V=\bigcup_{n\in\mathbb{Z}}V_n$ consists of exactly one representative from each distinct coset in $\bigcup_{n\in\mathbb{Z}}P_n=\mathbb{R}/\mathbb{Q}$, so $V$ is a Vitali set. If $x\in kW_0$, and $U\subseteq\mathbb{R}$ is an arbitrary open neighborhood of $x$, then there exists $n\in\mathbb{Z}$ with $B_n\subseteq U$. But then $V_n\subseteq V\cap B_n\subseteq V\cap U$, and $R_n=\bigcup P_n=\bigcup_{z\in V_n}\bigcup_{q\in\mathbb{Q}}(z+q)=\bigcup_{q\in\mathbb{Q}}(V_n+q)$. Since $R_n$ is nonmeager, it follows that $V_n$ is nonmeager and hence $V\cap U$ is nonmeager. So $x\in dV$ and we have shown $kW_0\subseteq dV$. Conversely, $V\subseteq W_0$ so $dV\subseteq kV\subseteq kW_0$, and the proposition is proven. \end{proof} \begin{prop}[ZFC] \label{prop2} Let $W_0\subseteq W_1\subseteq\mathbb{R}$ such that $W_0$ and $W_1$ are both open. Then there exists a Vitali set $V$ such that $dV=kW_0$ and $kV=kW_1$. \end{prop} \begin{proof} Using Proposition \ref{prop1}, start with a Vitali set $V_0$ satisfying $dV_0=kW_0$. Let $\{v_n:n\in \mathbb{N}\}$ be an arbitrary sequence of distinct elements in $V_0$, and let $\{C_n:n\in\mathbb{N}\}$ be a countable basis of open sets for the topology on $W_1$. For each $n$, let $w_n\in (v_n+\mathbb{Q})\cap C_n$. Define $V=(V_0-\{v_n:n\in \mathbb{N}\})\cup\{w_n:n\in\mathbb{N}\}$, so $V$ is a Vitali set. Since $V\Delta V_0$ is countable, hence meager, we have $dV=dV_0=kW_0$. We also have $kV\subseteq kW_1$ since $V\subseteq W_1$, and $kW_1\subseteq kV$ since $V$ is dense in $W_1$ by construction. \end{proof} \begin{proof}[Proof of Theorem \ref{thm1}] Working in ZFC, Lemma \ref{lemma_main} still holds, so $\mathcal{KD}$ consists of at least the $18$ operators in $\mathcal{KD}_0\cup c\mathcal{KD}_0$ (see Figure \ref{fig1}). However, the identities in Lemma \ref{lemma_BP} do not apply to every subset of $X$, and thus in general we do not have $idc=cd$, $id=cdc$, $d=cidc$, or $dc=kcd$. To see that these equalities fail, we apply Proposition \ref{prop2} and construct $V$ a Vitali set satisfying $dV=[8,9]$ and $kV=[8,10]$. The complement $cV$ has the following property: for every open set $U$ in $\mathbb{R}$, the intersection $U\cap cV$ contains a representative from each coset of $\mathbb{Q}$ (in fact infinitely many representatives). Thus $\mathbb{R}\subseteq \bigcup_{q\in\mathbb{Q}}q+(U\cap cV)$, so $\mathbb{R}$ is covered by countably many translates of $U\cap cV$. This implies $U\cap cV$ is nonmeager. The preceding paragraph implies $dcV=kcV=\mathbb{R}$. So $V$ distinguishes additional operators in the monoid $\mathcal{KD}$, as depicted in the table below: \begin{figure}\label{fig2} \end{figure} Thus we claim that in ZFC, we have \begin{center} $\mathcal{KD}=\{e, i, k, ki, ik, iki, kik, d, id, c, ci, ck, cki, cik, ciki, ckik, cd, cid, dc, idc, cdc, cidc\}$. \end{center} To verify the claim, one must check that $\mathcal{KD}$ is invariant under both left and right multiplication by $k$, $c$, and $d$, and we leave the task to the reader using Lemma \ref{lemma_main}. So $\#\mathcal{KD}\leq 22$. 
For an example of an explicit initial set in $\mathbb{R}$ which distinguishes all $22$ operators, we give \begin{center} $A=(1,2)\cup(2,3)\cup\{4\}\cup[(5,6)\cap\mathbb{Q}]\cup[(6,7)\cap(\mathbb{R}-\mathbb{Q})]\cup V$, \end{center} \noindent where $V$ is as in the first paragraph of the proof. \end{proof} \begin{rem} Examining the $22$ operators in $\mathcal{KD}$ in Theorem \ref{thm1}, we may regard them as either \textit{even} or \textit{odd} depending on the number of instances of the $c$ operator in the reduced word. So we find $11$ even operators and $11$ odd operators. However, in this case the monoid $\mathcal{KD}_0$ generated by $k$, $i$, and $d$ does not yield all even operators, nor does either of the sets $c\mathcal{KD}_0$ or $\mathcal{KD}_0$ consist of all odd operators. (Contrast with the situation in the original closure-complement problem, and in Theorem \ref{thm2}.) \end{rem} \section*{Partial Orderings and Other Addenda} The monoid $\mathcal{KD}$ admits a natural partial ordering defined by the rule $o_1\leq o_2$ if and only if $o_1A\subseteq o_2A$ for every set $A\subseteq X$. The partial ordering on $\mathcal{K}_0$ (the monoid generated by $k$ and $i$) has been diagrammed by various authors; see for instance \cite{Gardner_Jackson_2008a}. In general if $o_1\leq o_2$ then $io_1\leq io_2$, $ko_1\leq ko_2$, and $co_1\geq co_2$. We observe also the following proposition. \begin{prop}[ZF+DC] The following relations hold among even operators in $\mathcal{KD}$. \begin{enumerate}[(a)] \item $d\leq kik$; \item $iki\leq cdc$; \item $cdc\leq id$; \item $cdc\leq cidc$; \item $ki\leq cidc$; and \item $cidc\leq d$. \end{enumerate} \end{prop} \begin{proof} \textit{(a)} Since $d\leq k$, we have $d=kid\leq kik$. \textit{(b)} By (a) we have $dc\leq kikc$, and hence $cdc\geq ckikc=iki$. \textit{(c)} Let $A\subseteq X$ be arbitrary. If $p\in cdcA$, then $p$ has an open neighborhood $U$ for which $U\cap cA$ is meager. Given arbitrary $x\in U$ and an arbitrary open neighborhood $V$ of $x$, we can observe that $(U\cap V)\cap cA\subseteq U\cap cA$ is meager, and hence $(U\cap V)\cap A$ is nonmeager, because $U\cap V$ is nonmeager (being an open set). So $V\cap A$ is nonmeager, which implies $x\in dA$. Therefore $U\subseteq dA$ which implies $p\in idA$ and $cdcA\subseteq idA$. \textit{(d)} Since $id\leq d$ we have $idc\leq dc$ and therefore $cdc\leq cidc$. \textit{(e)} Apply $k$ to the left side of the inequality in (b). \textit{(f)} Apply $k$ to the left side of the inequality in (c). \end{proof} Combining the preceding inequalities with the known ordering on $\mathcal{K}_0$, we obtain the partial ordering on the even operators of $\mathcal{KD}$ presented in Figure \ref{fig3}. For each pair of even operators $o_1,o_2\in\mathcal{KD}$ not connected by an arrow in the diagram, the reader may verify that $o_1A\not\subseteq o_2A$ where $A$ is one of the sets given in the proofs of Theorems \ref{thm1} and \ref{thm2}. Thus the diagram is complete. \begin{figure} \caption{\textit{Left:} the partial ordering on the $11$ even operators of $\mathcal{KD}$ in ZFC. \textit{Right:} the partial ordering on the $9$ even operators of $\mathcal{KD}$ in ZF+DC+PB.} \label{fig3} \end{figure} \begin{example}[Another ZF-Independent Problem] We also consider the problem of the cardinality of the monoid $\mathcal{KFD}=\langle k, c, f, d\rangle$ generated by $k$, $c$, $d$, and the topological frontier operator $f$ defined by $fA=kA\cap kcA$ for all sets $A$. 
As one would expect, the cardinality of this monoid also depends on axiomatic assumptions. The submonoid generated by $k$, $c$, and $f$ has size $\leq 34$, as shown by Gaida and Eremenko in \cite{Gaida_Eremenko_1974a}. This can be computed using the following identities: $fff=ff$, $fc=kf=f$, $ffk=fk$, $ifk=0$, $fkik=fki$, and $fiki=fik$, where $0$ denotes the ``empty set operator'' defined by $0A=\emptyset$ for every $A$. To this list we add the following three identities whose proofs we leave to the reader: $df=kif$, $fid=fd$, $dfk=0$. We are also interested in cardinality of the submonoid generated by $k, i, f, d$. We compute the following presentations and cardinalities. \begin{center} \begin{tabular}{|c|c|c|p{6cm}|} \hline Axiom System & Generators & Cardinality & List of Elements\\ \hline ZF+DC & $\langle k, i, f, d\rangle$ & $20$ & $\{e, k, i, d, f, ik, fk, ki, fi, fd, id, if, ff,$\newline $kif, kik, fik, 0=ifk, iki, fki, fif\}$\\ \hline ZF+DC+PB & $\langle k, c, f, d\rangle$ & $40$ & $\{$above$\}\cup\{c, ck, ci, cd, cf, cik, cfk, cki,$\newline $cfi, cfd, cid, cif, cff, ckif, ckik, cfik,$\newline $1=c0, ciki, cfki, cfif\}$\\ \hline ZFC & $\langle k, c, f, d\rangle$ & $46$ & $\{$above$\}\cup\{dc, idc, cdc, cidc, fdc, cfdc\}$\\ \hline \end{tabular} \end{center} The initial set $A$ given in the proof of Theorem \ref{thm1} is sufficient to distinguish the $46$ operators in $\mathcal{KFD}$. \end{example} \begin{rem}[Suggestions for Further Projects] Several variant problems remain open to be solved by an interested party. For example, how many distinct sets are obtainable using $k$, $i$, $d$, together with one or both of $\cap$ and $\cup$? (See \cite{Gardner_Jackson_2008a} Section 4 for more information.) Also, it was shown by Kuratowski that it is possible to obtain infinitely many sets using $k$, $c$, and either $\cap$ or $\cup$. We believe replacing $k$ with $d$ should yield a finite answer and it may be interesting to compute. More broadly, the operator $d$ is an example of a \textit{local function} associated to a $\sigma$-ideal $\mathcal{I}$ on a topological space $X$. A general local function $\ell$ associated to $\mathcal{I}$ assigns to a set $A\subseteq X$ the set $\ell A$ consisting of all points $p\in X$ for which every open neighborhood $U$ of $p$ satisfies $U\cap A\notin\mathcal{I}$. For $d$, the $\sigma$-ideal in question is the family of meager subsets of $X$. It may be interesting to study variants of the Kuratowski problem using local functions associated to other $\sigma$-ideals. Moreover, given a local function $\ell$, the operator $k_\ell$ defined by $k_\ell A=A\cup \ell A$ is an example of a Kuratowski closure operator, which generates a topology finer than the original. In fact the new topology and the old are \textit{saturated} in the sense that every open set in either has nonempty interior in the other. There exists some literature on variants of the Kuratowski problem in spaces equipped with multiple topologies (i.e. \textit{polytopological spaces}), including the special case of saturated polytopological spaces\textemdash see especially \cite{Banakh_2018a} and \cite{CCGS_2020a}. The creative reader may be able to craft interesting problems by combining the machinery of local functions and polytopological spaces. \end{rem} \end{document}
\begin{document} \title{Computational Tools for the Shot Noise with Random Amplitude} {\bf MSC:} primary: 60D, 33; secondary: 62D\\ {\bf Keywords: } Random Difference Equations, Triggered Shot Noise, Iterated Cosine-Bessel Functions, Logarithmic Distribution Functions, Saddle Point Method, Asymptotic Calculations, Generalized Hypergeometric Functions, Psi Functions.\\ \section{Abstract/Introduction} The following random recurrence:\\ $$ W_{n+1} = U_{n+1} ( W_n + \Lambda_{n+1} ) $$ where $W_{-1}=0$, is known to be associated with the shot noise (see scheme 1): $$ W_{t} = \sum_{0<t_k<t} \Lambda_{k} e^{-(t-t_k)} $$ where the $t_k$ are the dates of a Poisson process and the $ U_i ,\ i = 0,1,\ldots $ are independent uniform variables. This can be extended to the triggered shot noise (cf. ref. [7], see scheme 2): $$ W_{t} = \sum_{0<t_{lk}<t} \Lambda_{lk} e^{-(t-t_{lk})} , \quad l=1,2,3,\ldots $$ The random amplitudes $\Lambda_i,\ i = 0,1,\ldots $ are independent positive Dufresne variables whose density has Laplace transform\\ $$\, _{p}F_{q} ({\bf a};{\bf b};-s) , \quad p+1 \le q, $$ or independent variables $ \lambda_c ({\bf a};{\bf b}) $ whose characteristic function is (cf. ref. [9])\\ $$\, _{2p+1}F_{2p} (c,{\bf a},{\bf a}+\frac{1}{2};{\bf b},{\bf b}+\frac{1}{2};-s^2), $$ or, more generally, variables with even density whose characteristic function (i.e. the Fourier transform) is\\ $$\, _{p}F_{q} ({\bf a};{\bf b};-s^2) , \quad p+1 \le q, $$\\ the generalized hypergeometric function, with parameter sequences $ {\bf a} = a_1,\ldots,a_p > 0 ;\ {\bf b} = b_1,\ldots,b_q > 0 $, defined for $-1< s< 1$: \begin{eqnarray*} \, _{p}F_{q} ({\bf a};{\bf b};-s^2)=\sum_{n=0}^{\infty}\frac{ ({\bf a})_n }{({\bf b})_n}\frac{(-s^{2})^{n}}{n!}, \end{eqnarray*} where $(a)_s=\frac{\Gamma(a+s)}{\Gamma(a)}$ is the Pochhammer symbol and $ \Gamma$ the Gamma function.\\ The compact notation means the products $ ({\bf a})_n = (a_1)_n \cdots (a_p)_n $, and $ (-) $ denotes an empty set of parameters. Our aim is to provide tools to compute the density of the stationary behaviour of the random recurrence, i.e. of the classical or triggered shot noise. Special cases have already been studied. This work takes advantage of previous results for different types of laws with the following transforms: \\ Laplace\\ \begin{enumerate} \item deterministic: $ \, _0F_0(-;-;-s)= \exp{(-s)} $, ref. [8],\\ \item Gamma: $ \, _1F_0(a;-;-s)= \frac{1}{(1+s)^a}, a>0$, ref. [7],\\ \item Beta: $ \, _1F_1(a;b;-s), b>a>0$,\\ \end{enumerate} or Fourier:\\ \begin{enumerate} \item Laplace: $ \, _1F_0(a;-;-s^2)= \frac{1}{(1+s^2)^a} , a>0$, ref. [7],\\ \item Cauchy: $ \, _0F_0(-;-;-|s|)= \exp{(-|s|)} $, ref. [7],\\ \item Standard Normal: $ \, _0F_0(-;-;-s^2/2)= \exp{(-s^2/2)} $, ref. [7],\\ \item Arcsine: $ \, _0F_1(-;1;-s^2/4)= J_0(s) $, ref. [6,7], \\ \item Bernoulli: $ \, _0F_1(-;\frac{1}{2};-s^2/4)= \cos{(s)} $, ref. [5],[7], \\ \end{enumerate} We intend to illustrate the general case with the particular example combining the last two: the sum of Arcsine and Bernoulli variables, i.e.: $$\, _0F_1(-;\frac{1}{2};-s^2/4) \, _0F_1(-;1;-s^2/4)= \cos{(s)} J_0(s)= \, _2F_3(\frac{1}{4},\frac{3}{4};\frac{1}{2},\frac{1}{2},1;-s^2), $$ see [20] 7.15.1.3 p 609.
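Before turning to the analytical computations, the stationary law of the recurrence above can be explored by a direct Monte-Carlo simulation. The following Python sketch is a minimal illustration under the assumptions of the example just mentioned (amplitude $\Lambda = \cos(\pi V)+\Delta$ with $V$ uniform and $\Delta$ a symmetric Bernoulli variable); it is not part of the computations of this paper, but its histogram can be compared with the densities obtained below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_amplitude(size):
    # Amplitude of the example: Lambda = cos(pi*V) + Delta, with V uniform on (0,1)
    # (Arcsine part) and Delta a symmetric Bernoulli variable on {-1, +1}.
    v = rng.uniform(0.0, 1.0, size)
    delta = rng.choice([-1.0, 1.0], size)
    return np.cos(np.pi * v) + delta

def simulate_stationary(n_steps=200_000, burn_in=1_000):
    # Iterate W_{n+1} = U_{n+1} (W_n + Lambda_{n+1}) with U uniform on (0,1),
    # starting from W_{-1} = 0, and keep the values after a burn-in period.
    u = rng.uniform(0.0, 1.0, n_steps)
    lam = sample_amplitude(n_steps)
    w = np.empty(n_steps)
    w_prev = 0.0
    for n in range(n_steps):
        w_prev = u[n] * (w_prev + lam[n])
        w[n] = w_prev
    return w[burn_in:]

if __name__ == "__main__":
    samples = simulate_stationary()
    hist, edges = np.histogram(samples, bins=80, density=True)
    print("empirical mean:", samples.mean())      # close to 0 by symmetry
    print("empirical variance:", samples.var())
\end{verbatim}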
\section{Shot Noise} The stationary density is obtained from a first order differential equation giving the transform (Laplace, Fourier), taking into account the characteristic function of the random amplitude $ g(s) $ (see [8]): \begin{eqnarray*} h(s) = \frac{C}{s} \exp(-\int_{s}^{\infty} \frac{g(\xi)}{\xi} d\xi) \end{eqnarray*} where $ C $ is the normalization factor. The probability density is given, after formal inversion of the Fourier transform, in our general case by: \begin{eqnarray*} f(x) = \frac{C}{\pi} \int_{0}^{\infty} \cos(xt) \exp(-\int_{t}^{\infty} \frac{\, _{p}F_{q} ({\bf a};{\bf b};-\xi^2)}{\xi} d\xi) \frac{dt}{t} \end{eqnarray*} The normalization factor is given by (see the appendix for the proof): \begin{eqnarray*} C = \exp(\frac{1}{2}(-\gamma-\psi({\bf a})+\psi({\bf b}))) \end{eqnarray*} where $ \psi(a) $ is the logarithmic derivative of the $ \Gamma(a) $ function, $ \gamma $ is the Euler constant, and the compact notation means the sums $ \psi({\bf a}) = \psi(a_1) +\cdots+\psi(a_p) $. The previous papers required the use of known special functions: exponential, sine/cosine (hyperbolic), Bessel (modified) integrals. We now introduce the generalized hypergeometric integral: \begin{eqnarray*} \, _{p}Ti_{q} ({\bf a};{\bf b};-x^2) =2 \int_{x}^{\infty} \frac{ \, _{p}F_{q} ({\bf a};{\bf b};-\xi^2)}{\xi} d\xi \\ =-\gamma-\ln(x)-\psi({\bf a})+\psi({\bf b}) \\ +\,_{p}Tin_{q}({\bf a};{\bf b};-x^2) \end{eqnarray*} (see the appendix for the computation of the integration constant) where the complementary function is: \begin{eqnarray*} \, _{p}Tin_{q} ({\bf a};{\bf b};-x^2) =2 \int_{0}^{x} \frac{(1-\, _{p}F_{q} ({\bf a};{\bf b};-\xi^2))}{\xi} d\xi = \\ - \sum_{k=1}^{\infty} \frac{({\bf a})_{k}}{({\bf b})_{k} k k!} (-x^{2})^{k}. \end{eqnarray*} The asymptotic behavior for $ x \rightarrow \infty $ is obtained traditionally by integration by parts (for non-integer $ {\bf b} $):\\ \begin{eqnarray*} 2 \int_{x}^{\infty} \frac{ \, _{p}F_{q} ({\bf a};{\bf b};-\xi^2)}{\xi} d\xi = \\ \sum_{k=1}^{\infty} \frac{({\bf b})_{-k} (k-1)! (-1)^{(k-1)}}{ ({\bf a})_{-k} x^{2 k}}\, _{p}F_{q} ({\bf a}-k;{\bf b}-k;-x^2).
\end{eqnarray*} We could have introduced an even more general function, such as the Meijer function, where the hypergeometric function is replaced by its extension (see [20], III, 8.2.2.3, page 618): \begin{eqnarray*} G^{m,n}_{p,q} \left[ \xi \left| \begin{array} {c} {\bf a} \\ {\bf b} \end{array} \right| \right] = \\ \sum_{k=1}^{m} \frac{ \prod_{j=1,\, j\neq k}^{m} \Gamma(b_j-b_k) \prod_{j=1}^{n} \Gamma(1-a_j+b_k)} {\prod_{j=m+1}^{q} \Gamma(1-b_j+b_k) \prod_{j=n+1}^{p} \Gamma(a_j-b_k)}\\ \,_pF_{q-1} ( 1-a_1+b_k,\ldots,1-a_p+b_k ; 1-b_1+b_k,\ldots,1-b_q+b_k;(-1)^{p-m-n} \xi ) \end{eqnarray*} (the term $j=k$ being omitted both in the first product and in the lower parameter list of the $\,_pF_{q-1}$), where the normalization gives: \begin{eqnarray*} G^{m,n}_{p,q} \left[ 0 \left| \begin{array} {c} {\bf a} \\ {\bf b} \end{array} \right| \right] = \sum_{k=1}^{m} \frac{ \prod_{j=1,\, j\neq k}^{m} \Gamma(b_j-b_k) \prod_{j=1}^{n} \Gamma(1-a_j+b_k)} {\prod_{j=m+1}^{q} \Gamma(1-b_j+b_k) \prod_{j=n+1}^{p} \Gamma(a_j-b_k)}\\ \end{eqnarray*} \subsection{Asymptotic Study} An asymptotic study of the stationary density by the saddle point method is undertaken by considering the inverse Laplace transform: $$ f(x) = \frac{1}{2i\pi} \int_{A_0} \exp(- sx +\int_{0}^{s} \frac{\, _{p}F_{q} ({\bf a};{\bf b};\xi^2)-1}{\xi} d\xi) ds $$ where $A_0 $ is a vertical path in the $s$ plane through the saddle point $s_0$. With $$ \Phi(s) = - sx +\int_{0}^{s} \frac{\, _{p}F_{q} ({\bf a};{\bf b};\xi^2) -1 }{\xi}d\xi,$$ the saddle points $s_0$ are the roots of $$ \Phi'(s) = -x+ \frac{\, _{p}F_{q} ({\bf a};{\bf b};s^2)-1}{s}. $$ Then the density $f(x)$ can be approximated by: $$ f(x)= \frac{\exp(\Phi(s_0))}{\sqrt{2\pi \Phi''(s_0)}}, \qquad x \rightarrow \infty. $$ For $x$ sufficiently large, $\Phi'(s)$ has only one root $s_0(x)$, tending to infinity with $x$. This fact can be seen using the asymptotic representation of $ \, _{p}F_{q} ({\bf a};{\bf b};s^2) $. The calculation of the roots of $$ 1+x s_0 = \, _{p}F_{q} ({\bf a};{\bf b};s_0^2) $$ gives an idea of the asymptotic behavior, first with $ s_0^{(1)} = O(\ln (x))$. \subsection{Our Favorite Example} We handle the following random recurrence: \begin{eqnarray*} W_{n+1} = U_{n+1} ( W_n + \cos(\pi V_{n+1}) + \Delta_{n+1} ), \qquad n = 0,1,\ldots \end{eqnarray*} where \begin{eqnarray*} W_0 = U_0 ( \cos(\pi V_0) + \Delta_0). \end{eqnarray*} The density of $W_0$ is $ f_0(x)= \frac{1}{2 \pi}\sqrt{\frac{2-|x|}{|x|}} {\bf 1}_{[-2,2]}(x)$.\\ The density $f_1(x) $ of $ W_1 $ is obtained from the intermediate step of computing $ g_1(y) $, the density of $ Y = W_0 + ( \cos(\pi V_1) + \Delta_1) $, expressed using the complete elliptic integral $ { \bf K } $: \begin{eqnarray*} g_1(y) = \frac{1}{\pi^2}(2 \, {\bf K}(1-\frac{y^2}{4}) {\bf 1}_{[-2,2]} +(2-\frac{y}{2})\, {\bf K }((2-\frac{y}{2})\frac{y}{2}) {\bf 1}_{[-4,4]})(y) \end{eqnarray*} Then $ f_1 $ can be expressed through the integral \begin{eqnarray*} \int_0^{\sqrt{1-(x-1)^2}}\, \frac{{\bf K}(\xi)}{\xi}\, (\sqrt[5]{1-\xi^2}-\frac{1}{\sqrt[5]{1-\xi^2}})^2 d\xi \end{eqnarray*} integrated by series expansion; see Prudnikov, vol. III, page 36. The comparisons between the densities and the Monte-Carlo simulations are presented in Figure 1 for $f$ and Figure 2 for $g$ (only on the positive abscissae due to parity). The previous recurrence is associated with the shot noise: \begin{eqnarray*} W_{t} = \sum_{0<t_k<t} ( \cos(\pi V_{k}) + \Delta_{k} ) e^{-(t-t_k)} \end{eqnarray*} where the $ t_k$ are the dates of a Poisson process, the $ U_i , V_i ,\ i = 0,1,\ldots $ are independent uniform random variables and the $ \Delta_i ,\ i = 0,1,\ldots
are independent Bernoulli variables $ \frac{1}{2} ( \delta_{+1} + \delta_{-1} ) $.\\ This random amplitude can be considered as the product of the same Bernoulli variable by a Beta $ \beta_{\frac{1}{2},\frac{1}{2}}^{(1)} $ variable multiplied by 2; this fact is also reflected by the Carlson identity [4], entry (11) with $p=1$, $a=\frac{1}{2}$, $q=1$, $b=1$: \begin{eqnarray*} \,_{2 p}F_{2q+1}( \frac{{\bf a}}{2},\frac{{\bf a}+1}{2};\frac{1}{2},\frac{{\bf b}}{2},\frac{{\bf b}+1}{2};-s^2)=\\ \frac{1}{2}(\,_{p}F_{q}( {\bf a};{\bf b};i 2^{1+q-p}s)+\,_{p}F_{q}( {\bf a};{\bf b};-i 2^{1+q-p}s)) \end{eqnarray*} A numerical computation of the stationary density is tractable by splitting the integration domain of the integral and neglecting the minor integration part (see the appendix for the normalization constant):\\ \begin{eqnarray*} f(x) = \frac{2e^{-\gamma}}{\pi} \int_{0}^{\infty} \cos(xt) \exp(- \int_{t}^{\infty} \frac{\cos(\xi) J_0 (\xi)}{\xi} d\xi) \frac{dt}{t} = \end{eqnarray*} \begin{eqnarray*} \frac{2e^{-\gamma}}{\pi} [\int_{0}^{x_1} \cos(xt) \exp(- \int_{t}^{\infty} \frac{\cos(\xi) J_0 (\xi)}{\xi} d\xi) \frac{dt}{t} \\ -\frac{ \sin(xx_1)}{xx_1} \exp(- \int_{x_1}^{\infty} \frac{\cos(\xi) J_0 (\xi)}{\xi} d\xi) \\ +\int_{x_1}^{x_2} \frac{1-\cos(t)J_0(t)}{xt} \sin(xt) \exp(- \int_{t}^{\infty} \frac{\cos(\xi) J_0 (\xi)}{\xi} d\xi) \frac{dt}{t}]+O(\frac{1}{xx_2^2}) \end{eqnarray*} where $O(\frac{1}{xx_2^2}) \sim O(h^2)$, $h$ being the integration step of the trapezoidal rule for this oscillating integral (see [11] for the numerical results).\\ The computations are based on earlier results obtained with other types of laws (arcsine or Bernoulli alone); they now require the calculation of the special function Cosine-Bessel Integral, in the spirit of Agrest's papers [1,2,3] on the function Exponential-Bessel Integral. We can choose between several expansions: \begin{eqnarray*} \cos(x) J_0 (x) & = &\sum_{k=0}^{\infty} \frac{(\frac{1}{2})_{2k} }{( (2k)! )^2}(-4x^{2})^{k}, \end{eqnarray*} which gives the following moments: $ K_{2k} = \frac{(\frac{1}{2})_{2k} 2^{2k} }{(2k)!},\ K_{2k+1}= 0 ,\ k=0,1,... $, or: \begin{eqnarray*} \cos(x) J_0 (x) & = & \sum_{k=0}^{\infty} (-1)^k \frac{(4k)!}{ ((2k)!)^3} (\frac{x}{2})^{2k} \end{eqnarray*} found in [13], 5-24-30, p. 92, and the hypergeometric representation already established \begin{eqnarray*} \cos(x) J_0 (x) = \, _2F_3( \frac{1}{4},\frac{3}{4}; \frac{1}{2},\frac{1}{2},1 ; - x^2 ) = \sum_{n=0}^{\infty}\frac{(\frac{1}{4})_n (\frac{3}{4})_n }{(\frac{1}{2})_n (\frac{1}{2})_n}\frac{(-x^{2})^{n}}{(n!)^2}, \end{eqnarray*} which is more convenient for our purpose. We get: \begin{eqnarray*} CJi ( x ) = \int_{x}^{\infty} \frac{\cos(\xi) J_0 (\xi)}{\xi} d\xi = \frac{1}{\pi} \int_{0}^{\pi} Ci(2 x \sin^{2}{(\frac{\theta}{2})}) d\theta = m_1 - Ln(x)+ CJin (x) \end{eqnarray*} where $ Ci $ is the cosine integral (the second expression uses the integral representation of the Bessel function) and the complementary function is: \begin{eqnarray*} CJin (x) = \int_{0}^{x} \frac{(1 - \cos(\xi) J_0 (\xi))}{\xi} d\xi = -\sum_{n=1}^{\infty}\frac{(\frac{1}{4})_n (\frac{3}{4})_n }{(\frac{1}{2})_n (\frac{1}{2})_n}\frac{(-x^{2})^{n}}{2n (n!)^2}, \end{eqnarray*} $ m_1 $ being the integration constant computed in the appendix: $$ m_1= \frac{1}{2} ( - \gamma - \psi(\frac{1}{4}) - \psi(\frac{3}{4}) + 2 \psi(\frac{1}{2}) +\psi(1) ) = - \gamma + Ln(2) $$ (cf. ref. [21] for the $ \psi $ values or the pseudo-code of their computation).\\
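For readers who want to reproduce the Monte-Carlo side of these comparisons, the following short Python sketch (an illustration added here, not the code of the report [11]; the sample size, the number of iterations and the histogram binning are arbitrary choices) iterates the random recurrence $W_{n+1}=U_{n+1}(W_{n}+\cos(\pi V_{n+1})+\Delta_{n+1})$ and accumulates a histogram estimate of the stationary density, to be compared with the Fourier inversion formula above.
\begin{verbatim}
import numpy as np

# Monte-Carlo sketch (illustrative only): iterate the recurrence
#   W_{n+1} = U_{n+1} * ( W_n + cos(pi V_{n+1}) + Delta_{n+1} )
# with U, V uniform on (0,1) and Delta = +1 or -1 with probability 1/2.
rng = np.random.default_rng(0)
n_paths, n_iter = 100_000, 200     # arbitrary sample size and burn-in length
W = np.zeros(n_paths)              # the initial value is forgotten anyway

for n in range(n_iter):
    U = rng.uniform(size=n_paths)
    V = rng.uniform(size=n_paths)
    Delta = rng.choice([-1.0, 1.0], size=n_paths)
    W = U * (W + np.cos(np.pi * V) + Delta)

# Histogram estimate of the stationary density f(x), to be compared
# with the numerical Fourier inversion above.
hist, edges = np.histogram(W, bins=200, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
\end{verbatim}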
The asymptotic behavior for $ x \rightarrow \infty $ is given by an integration by parts (due to the integer parameter of the hypergeometric function) using the Hankel asymptotic expansion for $ J_0 $, see Abramovitz [0], page 364: \begin{eqnarray*} \int_{x}^{\infty} \frac{\cos(\xi) J_0 (\xi)}{\xi} d\xi \sim \frac{2-\frac{1}{12 x} +\frac{\sqrt{2} \cos{(2 x -\frac{\pi}{4})}}{4 x}}{\sqrt{\pi x}} +o(\frac{1}{x^{\frac{5}{2}}}) \end{eqnarray*} \subsection{Asymptotic Study of the Example} The asymptotic behavior of the density is given by the saddle point method by considering the bilateral inverse Laplace transform:\\ \begin{eqnarray*} f(x) = \frac{1}{2i\pi} \int_{A_0} \exp(- sx +\int_{0}^{s} \frac{Ch(\xi) I_0 (\xi)-1}{\xi} d\xi) ds \end{eqnarray*} where $A_0 $ is a vertical path in the $ s$ plane through the saddle point $s_0$. \\ If $ \Phi(s) = - sx +\int_{0}^{s} \frac{Ch(\xi) I_0 (\xi)-1}{\xi} d\xi $, the saddle points $s_0$ are the roots of:\\ \begin{eqnarray*} \Phi'(s) = -x+ \frac{Ch(s) I_0 (s)-1}{s} \end{eqnarray*} then the density $ f(x) $ can be approximated by:\\ \begin{eqnarray*} f(x)= \frac{e^{\Phi(s_0)}}{\sqrt{2\pi \Phi''(s_0)}}, \quad x \rightarrow \infty. \end{eqnarray*} For $x$ sufficiently large $\Phi'(s) $ has only one root $ s_0(x) $ tending to infinity with $x$, and using the following asymptotic representations for the modified Bessel function and the hyperbolic functions:\\ \begin{eqnarray*} I_n(s)\sim \frac{e^s}{\sqrt{2 \pi s}} , \quad Ch(s) \sim Sh(s) \sim \frac{e^s}{2}, \end{eqnarray*} the calculation of the roots of $ 1+xs_0 = \frac{e^{2s_0}}{2\sqrt{2 \pi s_0}} $ gives an idea of the asymptotic behavior, first $ s_0^{(1)} = O(\frac{1}{2} Ln(x))$ and $ s_0^{(2)} = \frac{1}{2} Ln(x) + \frac{3}{4} Ln(Ln(x)) + \frac{1}{4} Ln(\pi) ,\ x \rightarrow \infty. $\\ Analogous relations exist for the special function hyperbolic cosine modified Bessel integral needed for the asymptotic study with the help of the saddle point method: \begin{eqnarray*} CIi ( x ) = \gamma + Ln(x/2) + CIin (x) \end{eqnarray*} where: \begin{eqnarray*} CIin (x) = \int_{0}^{x} \frac{(Ch(\xi) I_0 (\xi)-1)}{\xi} d\xi = \sum_{n=1}^{\infty}\frac{(\frac{1}{4})_n (\frac{3}{4})_n }{(\frac{1}{2})_n (\frac{1}{2})_n}\frac{(x^{2})^{n}}{2n (n!)^2} \end{eqnarray*} Again the asymptotic behavior for $ x \rightarrow \infty $ is given by an integration using the asymptotic expansion of $ I_0 $, see Abramovitz [0], page 377: \begin{eqnarray*} \int \frac{Ch(\xi) I_0 (\xi)}{\xi} d\xi \sim \frac{\sqrt{2} \exp{(2 x)}}{8 x \sqrt{\pi x}} +o(\frac{\exp{(2 x)}}{x^{\frac{5}{2}}}) \end{eqnarray*} \section{Triggered Shot Noise} \subsection{General Case} The approximated stationary density of the random recurrence is obtained from an $l$-th order differential equation giving the transforms (Laplace, Fourier) $ h(s) $, taking into account the characteristic function of the random amplitude $ g(s) $ (see [7]); for $l=2$: \begin{eqnarray*} s^2 h''(s) + 3 s h'(s) + ( 1 - g(s) ) h(s) = 0 \end{eqnarray*} The expansion of $h$ in the neighbourhood of zero is $ h(s) = \sum_{i=0}^{\infty} c_i s^i $; the coefficients are identified using the $ g(s) $ expansion, and we get: \begin{eqnarray*}c_0=1 ,\ c_{2n+1}=0,\ c_{2n}=\frac{1}{4n(n+1)}\sum_{j=1}^{n-1}\frac{c_{2j} K_{2(n-j)}}{(2(n-j))!} ,\ n=1,2,... \end{eqnarray*} where the $ K_n $ are the moments of the random amplitude. The iterative computation according to cf.
[6] of $ h(s) $: $$ s h_2(s) = C_1 (( Ln(s) + \gamma)(1+Ti^{(2)}(s))+2 Ti^{(3)}(s))+(C_2-C_1)(1+Ti^{(2)}(s))$$ is used for the calculation of the 2 constants $ C_1, C_2 $ from the comparison with $ \sum_{j=0}^{\infty} c_j s^j $ which allows the knowledge of the asymptotical behavior for the triggered case.\\ For the triggered shot noise we need to introduce the special function iterated generalized hypergeometric Integral function: $$ \, _{p}Ti_{q}^{(n)} ({\bf a};{\bf b};-x^2) = 2 \int_{x}^{\infty} \frac{ \,_{p}Ti_{q}^{(n-1)} ({\bf a};{\bf b};\xi^2)}{\xi} d\xi $$ Where: $$\, _{p}Ti_{q}^{(0)} ({\bf a};{\bf b};-x^2) =2 \, _{p}F_{q} ({\bf a};{\bf b};-x^2)$$ and $$ \, _{p}Tin_{q}^{(n)} ({\bf a};{\bf b};-x^2) = -\sum_{k=1}^{\infty} \frac{({\bf a})_{k} }{({\bf b})_{k} (k)^{n} k!} (-x^{2})^{k}. $$\\ The semiconvergent asymptotic forms $ x \rightarrow \infty $ of the iterated exponential integral can be found in Kunstner [17], Van De Hulst [22] page 11 and of the iterated cosine integral in Hallen [12], for the iterated Bessel integral we get the following result from Bessel Integral $ Ji^{(1)}(x)$ approximation found in Petiau [19] Smith, page 254 :\\ \begin{eqnarray*} Ji^{(2)}(x) = J_0(x) ( \frac{1}{x^2}-\frac{20}{x^4}+... +(-1)^{n-1} \frac{(2^n n!)( 2^{n-1} (n-1)!)(\sum_{k=1}^{n-1}\frac{1}{k} +\frac{1}{2n} )}{x^{2n}}+...)\\ + J_1(x) ( \frac{4}{x^3}-\frac{96}{x^5} +...+(-1)^{n-1} \frac{(2^n n!)^2 (\sum_{k=1}^{n}\frac{1}{k} )}{x^{2n+1}}+...) \end{eqnarray*} \subsection{Our favorite Example} According to the results in [7] on the triggered shot noise we get for the $ W_0 $ density:\\ $$ f_0(x)= \frac{1}{\pi}(\sqrt{\frac{2-|x|}{|x|}}- Arctg(\sqrt{\frac{2-|x|}{|x|}})){\bf 1}_{[-2,2]}(x)$$ figure 3 shows the comparison with a Monte-Carlo simulation of the process.\\ The link between iterated Integral functions and their complementary is given, cf. [14] for $ \psi^{(n)} $ using $ m_2 , m_3 $ (see appendix), by: \begin{eqnarray*} m_2 = \frac{1}{2^2 2!} (\psi'(\frac{1}{4}) + \psi'(\frac{3}{4}) - 2 \psi'(\frac{1}{2}) ) = \pi^2/4 \\ m_3=\frac{1}{2^3 3!}(2\psi''(1)-(\psi''(\frac{1}{4})+\psi''(\frac{3}{4}))+2\psi''(\frac{1}{2})) =\\ \frac{1}{16.3}(-4\zeta(3)+4.28\zeta(3)-2(8-1)\zeta(3))=\zeta(3)/3 \end{eqnarray*} then: \begin{eqnarray*} CJi^{(2)}(x) = \frac{1}{2!} (\gamma +Ln(x/2))^2 + \pi^2/4 -\sum_{n=1}^{\infty}\frac{(\frac{1}{4})_n (\frac{3}{4})_n }{(\frac{1}{2})_n (\frac{1}{2})_n}\frac{(-x^{2})^{n}} {(2n)^2 (n!)^2}\\ CJi^{(3)}(x) = -\frac{1}{3!} (\gamma +Ln(x/2))^3 +\frac{\pi^2}{4} (\gamma+Ln(x/2))+\zeta(3)/3 -\sum_{n=1}^{\infty}\frac{(\frac{1}{4})_n (\frac{3}{4})_n }{(\frac{1}{2})_n (\frac{1}{2})_n}\frac{(-x^{2})^{n}} {(2n)^3 (n!)^2} \end{eqnarray*} where the zeta function: $$ \zeta(3) = 1.2020569032$$ Since these functions are necessary to the computation according to cf. 
[6] of: $$ s h_2(s) = C_1 (( Ln(s) + \gamma)(1+CJi^{(2)}(s))+2 CJi^{(3)}(s))+(C_2-C_1)(1+CJi^{(2)}(s)) + o(1/(\sqrt{s})^{9})$$ for the calculation of the 2 constants $ C_1, C_2 $ from the comparison with $ \sum_{j=0}^{\infty} c_j s^j $, which allows the knowledge of the asymptotic behavior for the triggered case; figure 4 shows the comparison (only for positive abscissae due to the parity).\\ The asymptotic behavior for $ x \rightarrow \infty $ is given by a double integration by parts using the Hankel asymptotic expansion for $ J_0 $, see Abramovitz [0] page 364: \begin{eqnarray*} \int_{x}^{\infty} \frac{1}{\xi} \int_{\xi}^{\infty} \frac{\cos(\eta) J_0 (\eta)}{\eta} d\eta d\xi = \frac{4-\frac{1}{6 x} }{\sqrt{\pi x}} +o(\frac{1}{x^{\frac{5}{2}}}) \end{eqnarray*} The knowledge of the asymptotic behaviour is necessary to compute, by inverse Fourier transform, the density with short support from the characteristic function with extended support, as shown in figure 5. \subsection{Waiting Time Paradox} It is necessary to take into account the waiting-time correction for the triggered shot noise (see [23],[7]); when $ l =2 $, for instance: $$ f_{waiting-time}(x)= \frac{1}{4\pi}(3 \sqrt{\frac{2-|x|}{|x|}}- 2 Arctg(\sqrt{\frac{2-|x|}{|x|}})){\bf 1}_{[-2,2]}(x)$$ The density of the product of the random amplitude by 3 independent uniform variables can also be calculated in closed form: \begin{eqnarray*} f_{0}(x)= \frac{2}{\pi}(\sqrt{\frac{2-|x|}{|x|}}- Arctg(\sqrt{\frac{2-|x|}{|x|}})\\ -\frac{1}{2} (\omega + \theta ) Ln(r) +\frac{1}{4} ( Cl_2 (2(\omega+\theta)) -(Cl_2 (2\omega)+Cl_2(2\theta)))){\bf 1}_{[-2,2]}(x) \end{eqnarray*} where: $ r= \frac{1}{\sqrt{2|x|}} ,\ \theta= Arctg(\sqrt{\frac{2-|x|}{|x|}})$ and $ \omega = Arctg(\frac{r \sin(\theta)}{(1- r \cos(\theta))})$,\\ with the help of the Clausen Integral, see [18]: $$ Cl_2(\alpha) = -\sum_{k=1}^{\infty} \frac{\sin( k\alpha)}{k^2} = - \int_{0}^{\alpha} Ln(2 \sin(\frac{\theta}{2})) d\theta $$ Figure 5 shows the comparison of $ f_0 $ with the Monte-Carlo simulation of the random recurrence; the first term is sufficient, except for an accurate computation of the tail of the stationary density. All the other numerical results and graphs can be found in the report [11].\\ \section{Appendix: Auxiliary Result} The method to compute the integration constants of the iterated Integral functions follows the line of the proof in [16], pages 258-259.\\ {\bf Lemma 1:}\\ Let $ g(s) $ be the transform (Laplace or Fourier) of the random amplitude density, with moments $ K_k,\ k=1,2,...$, and let $$ g^{(n)}(s) = \int_s^{\infty} g^{(n-1)}(\xi) \frac{d \xi}{\xi}, \qquad g^{(0)}(s) = g(s). $$ Then: \begin{eqnarray*} g^{(n)}(s) = M_{n-1}+(-1)^n \frac{ Ln^{n} (s)}{n!} +(-1)^n \sum_{k=1}^{\infty} (-1)^{k-1} \frac{K_k s^k}{k^n k!} +\sum_{l=0}^{n-2} \frac{(-1)^{(n-l-1)} Ln^{n-l-1}(s) M_l }{l! (n-l-1)! } \end{eqnarray*} where: \begin{eqnarray*} M_n= \int_{0}^{\infty} g'(\xi) \frac{Ln^{n} (\xi)}{ n! }d \xi \end{eqnarray*} {\bf Proof:}\\ Integrating by parts: $$ g^{(n)}(s) = \int_1^{\infty} g(s \xi) \frac{Ln^{n-1}(\xi)}{ (n-1)! }\frac{ d \xi}{ \xi} , n=1,2,...$$ Splitting the domain of integration we get for $g^{(n)}(s)$ : \begin{eqnarray*} \int_{\frac{1}{s}}^{\infty} g(s \xi) \frac{Ln^{n-1}(\xi)}{(n-1)!}\frac{d \xi}{\xi} - \int_{0}^{\frac{1}{s}}(1- g(s \xi)) \frac{Ln^{n-1} (\xi)}{(n-1)!}\frac{d \xi}{\xi}\\ +\int_{1}^{\frac{1}{s}} \frac{Ln^{n-1} (\xi)}{ (n-1)!
}\frac{ d \xi}{ \xi} +\int_{0}^{1} (1-g(s \xi)) \frac{Ln^{n-1}(\xi)}{(n-1)!}\frac{ d \xi}{\xi}= \\ \int_{1}^{\infty} g(\xi) \frac{Ln^{n-1} (\xi/s)}{ (n-1)! }\frac{ d \xi}{\xi} -\int_{0}^{1}(1- g(\xi)) \frac{Ln^{n-1} (\xi/s)}{ (n-1)! }\frac{ d \xi}{\xi}\\ +(-1)^n\frac{ Ln^{n}(s)}{n!} + \sum_{k=1}^{\infty}(-1)^{k-1}\frac{K_k s^k}{k!} \int_{0}^{1} x^{k-1} \frac{Ln^{n-1} (\xi)}{ (n-1)! }d \xi=\\ \int_{1}^{\infty} g(\xi) \frac{Ln^{n-1} (\xi)}{ (n-1)! }\frac{ d \xi}{ \xi} -\int_{0}^{1}(1- g(\xi)) \frac{Ln^{n-1} (\xi) }{ (n-1)! }\frac{d \xi}{\xi}\\ +(-1)^n\frac{Ln^{n}(s)}{n!} +(-1)^n\sum_{k=1}^{\infty}(-1)^{k-1}\frac{K_k s^k}{k^n k!}\\ +\sum_{l=0}^{n-2} \frac{(-1)^{(n-l-1)} Ln^{n-l-1}(s)}{l! (n-l-1)! } [ \int_{1}^{\infty} g(\xi) \frac{Ln^{l} (\xi) d \xi}{\xi} -\int_{0}^{1}(1- g(\xi)) \frac{Ln^{l} (\xi) d \xi}{\xi}]=\\ M_{n-1} +(-1)^n \frac{ Ln^{n} (s)}{n!} +(-1)^n \sum_{k=1}^{\infty} (-1)^{k-1} \frac{K_k s^k}{k^n k!} +\sum_{l=0}^{n-2} \frac{(-1)^{(n-l-1)} Ln^{n-l-1}(s) M_l }{l! (n-l-1)! } \end{eqnarray*} The integrals $ M_n$ are also obtained by integration by parts: \begin{eqnarray*} M_n = \int_{1}^{\infty} g(\xi) \frac{Ln^{n} (\xi)}{ n! }\frac{ d \xi}{\xi} -\int_{0}^{1}(1- g(\xi)) \frac{Ln^{n} (\xi) }{ n! }\frac{d \xi}{\xi}= \int_{0}^{\infty} g'(\xi) \frac{Ln^{n} (\xi)}{ n! }d \xi \end{eqnarray*}\\ $ \ddag $\\ Generally this integral is unknown in closed form for any $ g(s) $, more likely we will obtain the following one: \begin{eqnarray*} M_n=\frac{1}{n!}[\frac{d^n}{ds^n}(K(s))]_{s=0}= \frac{1}{n!}[\frac{d^n}{ds^n}(\int_{0}^{\infty}g'(\xi)\xi^s d\xi)]_{s=0} \end{eqnarray*} This is true for instance for the Fox $ H $ function, a very general special function (see [20] III paragraph 8.3 pp626): \begin{eqnarray*} g(s) = H^{m,n}_{p,q} \left[ s \left| \begin{array}{c} ({\bf a}, {\bf A}) \\({\bf b}, {\bf B}) \end{array} \right| \right] \end{eqnarray*} where(cf. [20]III,8.3.2.15 page 629)for the differentiation of the Fox function ): \begin{eqnarray*} s g'(s) = H^{m,n+1}_{p+1,q+1} \left[ s \left| \begin{array}{c} (0,1);({\bf a}, {\bf A}) \\ ({\bf b}, {\bf B});(1,1) \end{array}\right| \right] \end{eqnarray*} From the definition, using Mellin-Barnes integrals (see [20]III,8.3.1.1 page 626) we get for the integral $ K $: \begin{eqnarray*} K(s) = \int_{0}^{\infty} \xi^{s-1} H^{m,n+1}_{p+1,q+1} \left[ \xi \left| \begin{array}{c} (0,1); ({\bf a}, {\bf A}) \\ ({\bf b}, {\bf B});(1,1) \end{array} \right| \right] d\xi= \\ -s \frac{\prod_{j=1}^m \Gamma(b_j-B_js) \prod_{j=1}^n \Gamma(1-a_j-A_js)} {\prod_{j=m+1}^q \Gamma(1-b_j-B_js) \prod_{j=n+1}^p \Gamma(a_j-A_js)} \end{eqnarray*} Returning to our general case:\\ {\bf Lemma 2:}\\ For $$ g(s) = \, _pF_q({\bf a};{\bf b};-s^2)$$ the integrals: \begin{eqnarray*} M_n= \frac{1}{n!}[\frac{d^n}{ds^n}(\int_{0}^{\infty}g'(\xi)\xi^s d\xi)]_{s=0} , n=1,2... \end{eqnarray*} are given by the recurrency: $$M_n = m_n + m_1 M_{n-1} , M_0 = 0 , n=1,2,...$$ where: $$m_k =\frac{1}{2^k k!} (\psi^{(k-1)}(1)+(-1)^k(\psi^{(k-1)}({\bf a})-\psi^{(k-1)}({\bf b}))), k=1,2,...$$ the $ \psi^{(k)}$ are the derivative of the $ \psi $ function.\\ {\bf Proof.}\\ ( cf. [20] for the differentiation of the hypergeometric function ): $$ g'(s) = -2s \frac{({\bf a})_1 }{({\bf b})_1} \,_pF_q({\bf a}+1;{\bf b}+1;-s^2)$$ then from the definition of the generalized hypergeometric functions using Mellin-Barnes integrals cf. [20] 7.3.4.12 page 438. 
we get: $$ K(s) = \frac{({\bf a})_1 }{({\bf b})_1}\int_0^{\infty} \, _pF_q({\bf a}+1;{\bf b}+1;-\xi) \xi^{s/2} d\xi = \frac{({\bf a})_{-s/2} \Gamma(1+s/2)}{({\bf b})_{-s/2}} $$ from which: \begin{eqnarray*} K'(s)= \frac{1}{2} (\frac{({\bf a})_{-s/2} \Gamma'(1+s/2)}{({\bf b})_{-s/2}} +\frac{({\bf a})_{-s/2} \Gamma(1+s/2)\Gamma({\bf b}) \Gamma'({\bf b}-s/2)}{\Gamma^2({\bf b}-s/2)} -\frac{\Gamma(1+s/2)\Gamma'({\bf a}-s/2)}{\Gamma({\bf a})({\bf b})_{-s/2}}) \end{eqnarray*} then: \begin{eqnarray*} M_1= m_1 = K'(0) = \frac{1}{2}( \psi(1)+\psi({\bf b})-\psi({\bf a})) \end{eqnarray*} However to continue it will be more convenient to introduce the expansion of $ Ln(K(s)) $ using the expansions of the logarithm of the $ \Gamma $ function, cf. [13] 54-111 p 358 where the Riemann Hurwitz function is given in terms of derivative of the $ \psi $ function: $$ \zeta(k,{\bf a})=\frac{(-1)^k}{(k-1)!} \psi^{(k-1)}({\bf a})$$ we get: \begin{eqnarray*} Ln(K(s)) = Ln(\frac{\Gamma({\bf a}-s/2)}{\Gamma({\bf a})}) -Ln(\frac{\Gamma({\bf b}-s/2)}{\Gamma({\bf b})}) +Ln(\frac{\Gamma(1+s/2)}{\Gamma(1)})=\\ \frac{1}{2}(\psi(1)-\psi({\bf a})+\psi({\bf b})) t/2 + \sum_{k=2}^{\infty}\frac{t^k}{2^k k!} (\psi^{(k-1)}(1)+(-1)^k(\psi^{(k-1)}({\bf a})-\psi^{(k-1)}({\bf b}))) \end{eqnarray*} finally: $$m_k =\frac{1}{2^k k!} (\psi^{(k-1)}(1)+(-1)^k(\psi^{(k-1)}({\bf a})-\psi^{(k-1)}({\bf b}))), k=1,2,...$$ the $ M_n $ are obtained from the $ m_k$ using the application of the method presented in [10],[14] : $$M_n = m_n + m_1 M_{n-1} , M_0 = 0 , n=1,2,...$$ $ \ddag $\\ \section{References:} \noindent[0] \textsc{Abramovitz M., Stegun I.A.} {\it Handbook of Mathematical Functions }{\bf [1964]}, National Bureau of Standards. \noindent[1] \textsc{M.M. Agrest} {\it "Generalization of some relations for the Bessel Integral Functions"} {\bf Bull. Acad. Sci. Georgian SSR }126,2{\bf [1987]} 241-244. \noindent[2] \textsc{M.M. Agrest, T.S. Chachibaya} {\it " Expansions of Bessel Exponential-Integral Functions and related Functions"} {\bf Soviet Math. ( Iz VUZ )} 32,4{\bf [1988] }1-13. \noindent[3] \textsc{M.M. Agrest} {\it " Certain Relations for Exponential-Integral Bessel Functions of genus 1 "} {\bf Soviet Math. ( Iz VUZ)} 35,6 {\bf[1991]} 67-69. \noindent[4] \textsc{B. C. Carlson} {\it " Some extensions of Lardner's relations between $\, _0F_3$ and Bessel functions "} {\bf SIAMJ. Math. Anal.} 1 {\bf[1970]} 232-242,MR41:3819. \noindent[5] \textsc{J-F. Chamayou} {\it " Numerical evaluation of a solution of a special mixed type differential difference equation "} {\bf Calcolo} 15 {\bf[1978]} 395-414. \noindent[6] \textsc{J-F. Chamayou} {\it " Mod\`{e}le de bruit de grenaille trigonom\'etrique "} {\bf Journ\'ees de Statistique, Lyon 24-27 mai} Actes, Universit\'e Claude Bernard {\bf[1983]} 19. \noindent[7] \textsc{J-F. Chamayou, J-L. Dunau} {\it " Random difference equations with logarithmic distribution and the triggered shot noise "} {\bf Adv. Appl. Math.} 29 {\bf[2002]} 454-470. \noindent[8] \textsc{J-F. Chamayou, J-L. Dunau} {\it " Random difference equations: an asymptotical result "} {\bf J. Comput. Appl. Math.} 154,1{\bf[2003]} 183-193. \noindent[9] \textsc{J-F. Chamayou} {\it " Products of double gamma, gamma and beta distributions "} {\bf Stat. Probab. Lett.} 68,2 {\bf[2004]} 199-208. \noindent[10] \textsc{E. A. Gussmann} {\it Modifizierung der Gewichtsfunktionenmethode zur Berechnung der Fraunhoferlinien in Sonnen und Sternspektren } {\bf Zeitschrift Astrophysik.} 65 {\bf[1967]} 456-497. \noindent[11] \textsc{J-L. Habemont, K. 
Hami-Eddine} {\it " Simulation du bruit de grenaille "} {\bf Rapport 4GMM, Institut National Sciences Appliqu\'ees, mai} Universit\'e Toulouse{\bf[2005]}. \noindent[12] \textsc{E. Hallen} {\it "Iterated sine and cosine Integrals" } {\bf Trans. Roy. Inst. Techn. Stockholm.} 12 {\bf[1947]} . \noindent[13] \textsc{E. R. Hansen } {\it A table of Series and Products }{\bf [1975]},Prentice Hall. \noindent[14] \textsc{K. S. Kolbig} {\it On the integral $ \int_0^{\infty} e^{-\mu t} t^{\nu-1} log^{m} t dt $ } {\bf Math. Comput.} 41 {\bf[1983]} 171-182. \noindent[15] \textsc{K. S. Kolbig} {\it The Polygamma Function $ \psi^{(k)}(x) $ for $ x=\frac{1}{4} $ and $ x=\frac{3}{4} $ } {\bf J. Comput. Appl. Math.} 75,1 {\bf[1996]} 43-46. \noindent[16] \textsc{V. Kourganoff, I.W. Busbridge} {\it " Basic methods in Transfer Problems"} {\bf[1952]} Clarendon, Oxford. \noindent[17] \textsc{H. Kunstner} {\it "Zur Berechnung zweier spezieller Integrale" } {\bf Wissenschäft. Zeit. Wilhem-Pieck Uni. Rostock, NaturW. Reihe.} 3 {\bf[1984]} 63-64. \noindent[18] \textsc{L. Lewin} {\it "Polylogarithms and associated Functions"} {\bf[1981]} North Holland, NewYork. \noindent[19] \textsc{G. Petiau} {\it La Th\'eorie des Fonctions de Bessel } {\bf CNRS.} {\bf[1955]} . \noindent[20]\textsc{A.P. Prudnikov, Y.A. Brychkov, O.I. Marichev} {\it Elementary Functions, Special Functions, More Special Functions, Vol.1 to 3 in Integrals and Series}{\bf [1992]}, Gordon Breach. \noindent[21] \textsc{V. G. Smith} {\it An asymptotic expansion of $Ji_0(x)=\int_x^{\infty}\frac{J_0(t)}{t}dt$. } {\bf J. Math. Phys. }22 {\bf [1943]} 58-59. \noindent[21] \textsc{J. Spanier, K.B. Oldham} {\it " An Atlas of Functions"} {\bf[1987]} Hemisphere Publ, Washington D.C.; Springer Verlag, Berlin. \noindent[22] \textsc{H. C. Van De Hulst} {\it " Multiple Scattering"} {\bf[1980]} Acad. Press, New York. \noindent[23] \textsc{K. Van Harn, F.W. Steutel} {\it Infinite divisibility and the waiting-time paradox } {\bf Commun. Statist. Stochastic Models }11,3 {\bf [1995]} 527-540. 
\begin{center} \begin{picture}(400,200)(0,0) \put(200,0){\vector(1,0){100}} \thicklines \put(0,0){\line(1,0){200}} \put(50,50){\oval(80,80)[bl]} \put(100,180){\oval(100,100)[bl]} \put(190,110){\oval(180,180)[bl]} \put(260,190){\oval(140,140)[bl]} \put(285,110){\oval(50,40)[bl]} \thinlines \put(145,190){$\Lambda_i$} \put(1,10){...$t_{0}$} \put(50,10){$t_{1}$} \put(100,10){$t_{2}$...} \put(160,10){...$t_{n-k}$...} \put(260,10){...$t_{n}$} \put(270,110){$W_t$} \put(8,0){$\uparrow$} \put(48,0){$\uparrow$} \put(98,0){$\uparrow$} \put(188,0){$\uparrow$} \put(258,0){$\uparrow$} \put(10,0){\line(0,1){200}} \put(50,0){\line(0,1){200}} \put(100,0){\line(0,1){200}} \put(190,0){\line(0,1){200}} \put(260,0){\line(0,1){200}} \put(280,0){$t$} \put(20,150){Scheme 1: Shot Noise Process} \end{picture} \end{center} \begin{center} \begin{picture}(400,200)(0,0) \put(200,0){\vector(1,0){100}} \thicklines \put(0,0){\line(1,0){200}} \put(100,90){\oval(198,90)[bl]} \put(190,150){\oval(180,180)[bl]} \put(260,100){\oval(140,140)[bl]} \put(281,200){\oval(40,40)[bl]} \thinlines \put(145,190){$\Lambda_i$} \put(1,10){$t_{0}$} \put(50,10){$t_{1}$} \put(100,10){$t_{2}$} \put(130,10){$t_{3}$...} \put(160,10){...$t_{2(n-k)}$...} \put(260,10){...$t_{2n}$} \put(270,110){$W_t$} \put(0,0){$\uparrow$} \put(48,0){$\uparrow$} \put(98,0){$\uparrow$} \put(128,0){$\uparrow$} \put(188,0){$\uparrow$} \put(258,0){$\uparrow$} \put(0,0){\line(0,1){200}} \put(100,0){\line(0,1){200}} \put(190,0){\line(0,1){200}} \put(260,0){\line(0,1){200}} \put(275,0){$t$} \put(10,150){Scheme 2: Triggered Shot Noise Process} \end{picture} \end{center} \begin{figure} \caption{Stationary density $f$ compared with the Monte-Carlo simulation.} \end{figure} \begin{figure} \caption{Density $g_1$ compared with the Monte-Carlo simulation (positive abscissae).} \end{figure} \begin{figure} \caption{Density $f_0$ of $W_0$ for the triggered shot noise compared with the Monte-Carlo simulation.} \end{figure} \begin{figure} \caption{Triggered case: comparison based on the asymptotic computation (positive abscissae).} \end{figure} \begin{figure} \caption{Density $f_0$ (waiting-time case) compared with the Monte-Carlo simulation of the random recurrence.} \end{figure} \end{document}
\begin{document} \title{Generalized notions of amenability for a class of matrix algebras} \author{A. Sahami} \address{Faculty of Basic sciences, Department of Mathematics, Ilam University, P.O.Box 69315-516, Ilam, Iran.} \email{[email protected]} \begin{abstract} We investigate the notions of amenability and its related homological notions for a class of $I\times I$-upper triangular matrix algebra, say $UP(I,A)$, where $A$ is a Banach algebra equipped with a non-zero character. We show that $UP(I,A)$ is pseudo-contractible (amenable) if and only if $I$ is singleton and $A$ is pseudo-contractible (amenable), respectively. We also study the notions of pseudo-amenability and approximate biprojectivity of $UP(I,A)$. \end{abstract} \subjclass[2010]{Primary 46M10 Secondary, 43A07, 43A20.} \keywords{Upper triangular Banach algebra, Amenability, Left $\phi$-amenability, Approximate biprojectivity.} \maketitle \section{Introduction and Preliminaries} B. E. Johnson studied the class of amenable Banach algebras. Indeed a Banach algebra $A$ is amenable if every continuous derivation $D:A\rightarrow X^{*}$ is inner, for every Banach $A$-bimodule $X$, that is, there exists $x_{0}\in X^{*}$ such that $$D(a)=a\cdot x_{0}-x_{0}\cdot a\quad(a\in A).$$ He also showed that $A$ is amenable if and only if there exists a bounded net $(m_{\alpha})$ in $A\otimes_{p}A$ such that $$a\cdot m_{\alpha}-m_{\alpha}\cdot a\rightarrow 0,\quad \pi_{A}(m_{\alpha})a\rightarrow a\qquad(a\in A),$$ where $\pi_{A}:A\otimes_{p}A\rightarrow A$ is given by $\pi_{A}(a\otimes b)=ab$ for every $a,b\in A$, see \cite{Joh}. About the same time A. Ya. Helemskii defined the homological notions of biflatness and biprojectivity for Banach algebras. In fact a Banach algebra $A$ is called biflat (biprojective), if there exists a bounded $A$-bimodule morphism $\rho:A\rightarrow (A\otimes_{p}A)^{**}$ ($\rho:A\rightarrow A\otimes_{p}A$) such that $\pi_{A}^{**}\circ\rho$ is the canonical embedding of $A$ into $A^{**}$ ($\rho$ is a right inverse for $\pi_{A}$), respectively see \cite{hel}. Note that a Banach algebra $A$ is amenable if and only if $A$ is biflat and $A$ has a bounded approximate identity. It is known that for a locally compact group $G$, $L^{1}(G)$ is biflat (biprojective) if and only if $G$ is amenable(compact), respectively. Amenability of some matrix algebras studied by Esslamzadeh \cite{Ess} and also biflatness and biprojectivity of some semigroup algebras related to matrix algebras investigated by Ramsden in \cite{rams}. Recently approximate versions of amenability and homological properties of Banach algebras have been under more observations. In \cite{zhang} Zhang introduced the notion of approximately biprojective Banach algebras, that is, $A$ is approximately biprojective if there exists a net of $A$-bimodule morphism $\rho_{\alpha}:A\rightarrow A\otimes_{p}A$ such that $$\pi_{A}\circ\rho_{\alpha}(a)\rightarrow a\quad(a\in A).$$ Author with A. Pourabbas investigated approximate biprojectivity of $2\times 2$ upper triangular Banach algebra which is a matrix algebra, also we characterized approximate biprojectivity of Segal algebras and weighted group algebras. We show that a Segal algebra $S(G)$ is approximately biprojective if and only if $G$ is compact and also we show that $L^{1}(G,w)$ is approximately biprojective if and only if $G$ is compact, provided that $w\geq 1$ is a continuous weight function, see \cite{sah col} and \cite{sah3}. Approximate amenable Banach algebras have been introduced by Ghahramani and Loy. 
Indeed a Banach algebra $A$ is approximate amenable if for every continuous derivation $D:A\rightarrow X^{*}$, there exists a net $(x_{\alpha})$ in $X^{*}$ such that $$D(a)=\lim_{\alpha}a\cdot x_{\alpha}-x_{\alpha}\cdot a \quad(a\in A).$$ Other extensions of amenability are pseudo-amenability and pseudo-contractibility. A Banach algebra $A$ is pseudo-amenable (pseudo-contractible) if there exists a not necessarily bounded net $(m_{\alpha})$ in $A\otimes_{p}A$ such that $$a\cdot m_{\alpha}-m_{\alpha}\cdot a\rightarrow 0,\quad(a\cdot m_{\alpha}=m_{\alpha}\cdot a),\qquad \pi_{A}(m_{\alpha})a\rightarrow a\quad (a\in A).$$ For more information about these new concepts the reader referred to \cite{ghah pse}, \cite{gen 1} and \cite{gen 2}. Recently in \cite{essmail arch} and \cite{essmail sem} pseudo-amenability and pseudo-contractibility of certain semigroup algebras, using the properties of matrix algebras, have been studied. In this paper, we investigate amenability and its related homological notions for a class of matrix algebras. We show that for a Banach algebra $A$ with a non-zero character, $I\times I$ upper triangular Banach algebra $UP(I,A)$ is pseudo-contractible (amenable) if and only if $I$ is singleton and $A$ is pseudo-contractible (amenable), respectively. Also we characterize whether $UP(I,A)$ is approximate amenable, pseudo-amenable and approximate biprojective. The paper concluded by studying amenability and approximate biprojectivity of some semigroup algebras related to a matrix algebra. We remark some standard notations and definitions that we shall need in this paper. Let $A$ be a Banach algebra. Throughout this paper the character space of $A$ is denoted by $\Delta(A)$, that is, all non-zero multiplicative linear functionals on $A$. Let $A$ be a Banach algebra. The projective tensor product $A\otimes_{p}A$ is a Banach $A$-bimodule via the following actions $$a\cdot(b\otimes c)=ab\otimes c,~~~(b\otimes c)\cdot a=b\otimes ca\hspace{.5cm}(a, b, c\in A).$$ Let $A$ be a Banach algebra and $I$ be a non-empty set. $UP(I,A)$ is denoted for the set of all $I\times I$ upper triangular matrices which entries come from $A$ and $$||(a_{i,j})_{i,j\in I}||=\sum_{i,j\in I}||a_{i,j}||<\infty.$$ With the usual matrix operations and $||\cdot||$ as a norm, $UP(I,A)$ becomes a Banach algebra. \section{a class of matrix algebras and generalized notions of amenability} In this section we investigate generalized notions of amenability for upper triangular Banach algebras. We remind that a Banach algebra $A$ with $\phi\in\Delta(A)$ is called left(right) $\phi$-contractible, if there exists $m\in A$ such that $am=\phi(a)m(ma=\phi(a)m)$ and $\phi(m)=1$ for every $a\in A$, respectively. For more information the reader referred to \cite{nas}. \begin{Theorem}\label{main} Let $I$ be a non-empty set and $A$ be a unital Banach algebra with $\Delta(A)\neq \emptyset.$ $UP(I,A)$ is pseudo-contractible if and only if $I$ is singleton and $A$ is pseudo-contractible. \end{Theorem} \begin{proof} Let $UP(I,A)$ be pseudo-contractible. Then $UP(I,A)$ has a central approximate identity, say $(e_{\alpha})$. Put $F_{i,j}$ for a matrix belongs to $UP(I,A)$ which $(i,j)$-th entry is $e_{A}$ and others are zero, where $e_{A}$ is an identity of $A$. Thus $F_{i,j}e_{\alpha}=e_{\alpha}F_{i,j}$ for every $i,j\in I.$ This equation implies that the entries on main diagonal of $e_{\alpha}$ is equal. Suppose conversely that $I$ is infinite. 
Since the entries on the main diagonal of $e_{\alpha}$ are all equal, either $||e_{\alpha}||=\infty$ or the main diagonal of $e_{\alpha}$ is zero. In the case $||e_{\alpha}||=\infty$, $e_{\alpha}$ does not belong to $UP(I,A)$, which is impossible. Otherwise, if the main diagonal of $e_{\alpha}$ is zero, then $e_{\alpha}F_{i,i}=0$. Thus $0=e_{\alpha}F_{i,i}\rightarrow F_{i,i}$, which is impossible; hence $I$ must be finite. Suppose that $I=\{i_{1},i_{2},...,i_{n}\}$ and $\phi\in\Delta(A)$. Define $\psi\in\Delta(UP(I,A))$ by $\psi((a_{i,j})_{i,j\in I})=\phi(a_{i_{n},i_{n}})$ for every $(a_{i,j})\in UP(I,A)$. Since $UP(I,A)$ is pseudo-contractible, by \cite[Theorem 1.1]{alagh1} $UP(I,A)$ is left and right $\psi$-contractible. Set $$J=\{(a_{i,j})\in UP(I,A)|a_{i,j}=0\ \text{for all}\ j\neq i_{n}\}.$$ It is clear that $J$ is a closed ideal of $UP(I,A)$ and $\psi|_{J}\neq 0$, hence by \cite[Proposition 3.8]{nas} $J$ is left and right $\psi$-contractible. So there exist $m_{1},m_{2}\in J$ such that $jm_{1}=\psi(j)m_{1}$, $m_{2}j=\psi(j)m_{2}$ and $\psi(m_{1})=\psi(m_{2})=1$ for each $j\in J.$ Set $m=m_{1}m_{2}\in J.$ Clearly we have \begin{equation}\label{eq main} jm=mj=\psi(j)m,\quad \psi(m)=\psi(m_{1}m_{2})=\psi(m_{1})\psi(m_{2})=1,\qquad (j\in J). \end{equation} Suppose conversely that $|I|>1.$ Write $m$ as the matrix whose $n$-th column is $(x_{1},x_{2},...,x_{n})^{t}$, where $x_{i}\in A$ for each $i\in\{1,2,...,n\}$. Let $a$ be an element of $J$ whose $n$-th column has the form $(0,0,...,a_{n})^{t}$ for an arbitrary element $a_{n}\in A$. Applying (\ref{eq main}) we have $$x_{1}a_{n}=x_{2}a_{n}=...=x_{n-1}a_{n}=0,\quad \phi(a_{n})x_{1}=\phi(a_{n})x_{2}=...=\phi(a_{n})x_{n-1}=0,$$ and also $$a_{n}x_{n}=x_{n}a_{n}=\phi(a_{n})x_{n},\quad \phi(x_{n})=1.$$ Pick an element $a_{n}\in A$ such that $\phi(a_{n})=1.$ Applying (\ref{eq main}), it follows that $x_{1}=x_{2}=...=x_{n-1}=0$. Then $m$ is a matrix whose $n$-th column has the form $(0,0,...,0,x_{n})^{t}$. Let $b$ be a matrix in $J$ whose $n$-th column has the form $(b_{1},b_{2},...,b_{n-1},b_{n})^{t}$, where $b_{n}\in \ker\phi$ and $\phi(b_{1})=\phi(b_{2})=...=\phi(b_{n-1})=1.$ Applying (\ref{eq main}) we have $b_{1}x_{n}=0$. Applying $\phi$ to this equation we get $0=\phi(b_{1}x_{n})=\phi(b_{1})\phi(x_{n})=1$, which is a contradiction. Therefore $I$ must be a singleton, and so $A$ is pseudo-contractible. The converse is clear. \end{proof} Suppose that $A$ is a Banach algebra and $\phi\in\Delta(A)$. $A$ is called (approximately) left $\phi$-amenable if there exists a (not necessarily bounded) net $(m_{\alpha})$ in $A$ such that $$am_{\alpha}-\phi(a)m_{\alpha}\rightarrow 0,\quad \phi(m_{\alpha})\rightarrow1\qquad (a\in A),$$ respectively. The right versions are defined similarly. For more information about these new concepts of amenability and their related homological notions see \cite{agha}, \cite{kan}, \cite{Hu} and \cite{sah1}. \begin{Theorem}\label{main1} Let $I$ be an ordered set with a smallest element. Also let $A$ be a Banach algebra with a left unit such that $\Delta(A)\neq \emptyset.$ Then $UP(I,A)$ is pseudo-amenable (approximate amenable) if and only if $I$ is a singleton and $A$ is pseudo-amenable (approximate amenable), respectively. \end{Theorem} \begin{proof} Here we prove the pseudo-amenable case; the approximate amenable case is similar. Suppose that $UP(I,A)$ is pseudo-amenable.
Then there exists a net $(m_{\alpha})$ in $UP(I,A)\otimes_{p}UP(I,A)$ such that $$a\cdot m_{\alpha}-m_{\alpha}\cdot a\rightarrow 0,\quad \pi_{UP(I,A)}(m_{\alpha})a\rightarrow a\qquad(a\in UP(I,A)).$$ Let $i_{0}$ be the smallest element of $I$. It is easy to see that the map $\psi$ given by $\psi(a)=\phi(a_{i_{0},i_{0}})$, for $a=(a_{i,j})\in UP(I,A)$, is a character on $UP(I,A)$. Define $$T:UP(I,A)\otimes_{p}UP(I,A)\rightarrow UP(I,A)$$ by $T(a\otimes b)=\psi(a)b$ for each $a,b\in UP(I,A)$. It is easy to see that $T$ is a bounded linear map which satisfies $$T(a\cdot x)=\psi(a)T(x),\quad T(x\cdot a)=T(x)a,\quad \psi\circ T(x)=\psi\circ \pi_{UP(I,A)}(x),$$ for each $a\in UP(I,A)$ and $ x\in UP(I,A)\otimes_{p}UP(I,A)$. Thus we have $$\psi(a)T(m_{\alpha})-T(m_{\alpha})a=T(a\cdot m_{\alpha}-m_{\alpha}\cdot a)\rightarrow 0$$ and $\psi\circ T(m_{\alpha})=\psi\circ \pi_{UP(I,A)}(m_{\alpha})\rightarrow 1$. Hence $UP(I,A)$ is approximately right $\psi$-amenable. Using the same arguments as in the proof of Theorem \ref{main} and applying \cite[Proposition 5.1]{saha}, one can see that $I$ is a singleton and $A$ is pseudo-amenable. The converse is clear. \end{proof} Let $A$ be a Banach algebra and $a\in A.$ By $a\varepsilon_{i,j}$ we mean the matrix in $UP(I,A)$ whose $(i,j)$-th entry is $a$ and whose other entries are zero. \begin{Theorem} Let $I$ be a non-empty set and $A$ be a Banach algebra such that $\Delta(A)\neq \emptyset.$ Then $UP(I,A)$ is amenable if and only if $I$ is a singleton and $A$ is amenable. \end{Theorem} \begin{proof} Let $UP(I,A)$ be amenable. Then $UP(I,A)$ has a bounded approximate identity, say $(E^{\alpha})$. Let $M>0$ be a bound for $(E^{\alpha})$. We claim that $A$ has a bounded left approximate identity. To see this, fix $k,l\in I.$ Then for each $a\in A$, we have \begin{equation} \begin{split} 0=\lim_{\alpha}||E^{\alpha}a\varepsilon_{k,l}-a\varepsilon_{k,l}||&=\lim_{\alpha}||(\sum_{i,j} E^{\alpha}_{i,j}\varepsilon_{i,j})a\varepsilon_{k,l}-a\varepsilon_{k,l}||\\ &=\lim_{\alpha}||\sum_{i} E^{\alpha}_{i,k}a\varepsilon_{i,l}-a\varepsilon_{k,l}||\\ &=\lim_{\alpha}\big(\sum_{i\neq k} ||E^{\alpha}_{i,k}a||+||E^{\alpha}_{k,k}a-a||\big). \end{split} \end{equation} Thus, for every fixed $k\in I$, $e_{\alpha}:=E^{\alpha}_{k,k}$ is a left approximate identity of $A.$ It is easy to see that $||e_{\alpha}||\leq ||E^{\alpha}||\leq M$. So $(e_{\alpha})$ is a bounded left approximate identity for $A.$ We claim that $I$ is finite. Suppose conversely that $I$ is infinite. Pick $a\in A$ such that $||a||=1.$ Since $(e_{\alpha})$ is a bounded left approximate identity for $A$, we have $\lim_{\alpha}e_{\alpha}a=a$. Thus for each $k\in I$ there exists $\alpha_{k}$ such that for every $\alpha\geq \alpha_{k}$ we have $\frac{1}{2}<||e_{\alpha}a||$, hence \begin{equation}\label{eq} \frac{1}{2}<||e_{\alpha}a||\leq ||e_{\alpha}||=||E_{k,k}^{\alpha}||\qquad(\alpha\geq\alpha_{k}). \end{equation} Since $I$ is infinite we can choose $N\in \mathbb{N}$ such that $N>2M.$ Then choose distinct $k_{1}, k_{2},...,k_{N}$ in $I$ and $\alpha\geq \alpha_{k_{i}},$ $i=1,2,...,N$. Using (\ref{eq}) one can see that $$M<\frac{1}{2}N\leq \sum^{N}_{i=1}||E_{k_i,k_{i}}^{\alpha}||\leq \sum_{i,j\in I}||E_{i,j}^{\alpha}||\leq M,$$ which is a contradiction. So $I$ is finite. Applying the same method as in the proof of the previous Theorem, it is easy to see that $I$ must be a singleton, and then $A$ is amenable.
\end{proof} \section{a class of Matrix algebra and approximate biprojectivity} In this section we study approximate biprojectivity of some matrix algebra. We also investigate the relation of approximate biprojectivity and discreteness of maximal ideal space of a Banach algebra. \begin{Theorem} Let $I$ be an ordered set with an smallest element. Also let $A$ be a Banach algebra with a right identity such that $\Delta(A)\neq \emptyset.$ $UP(I,A)$ is approximately biprojective if and only if $I$ is singleton and $A$ is approximately biprojective. \end{Theorem} \begin{proof} Let $i_{0}$ be smallest element of $I$. Define $\psi\in\Delta(UP(I,A))$ by $\psi(a)=\phi(a_{i_{0},i_{0}})$, where $a=(a_{i,j})\in UP(I,A)$. Suppose that $UP(I,A)$ is approximately biprojective. Since $A$ has a right identity, by \cite[Lemma 5.2]{saha}, $UP(I,A)$ has a right approximate identity. Applying \cite[Theorem 3.9]{sah3}, $UP(I,A)$ is right $\psi$-contractible. Using the same arguments as in the proof of the Theorem \ref{main}, $I$ is singleton and $A$ is approximately biprojective. Converse is clear. \end{proof} \begin{Remark} Let $A$ be a Banach algebra with a left approximate identity and $I$ be a finite set which has at least two elements. Then $UP(I,A)$ is never approximately biprojective. To see this, since $I=\{i_{1},i_{2},...,i_{n}\}$ is finite then left approximate identity of $A$ gives a left approximate identity for $UP(I,A)$. Define $\psi\in\Delta(UP(I,A))$ by $\psi(a)=\phi(a_{i_{n},i_{n}})$ for every $a=(a_{i,j})\in UP(I,A)$. By \cite[Theorem 3.9]{sah3} approximate biprojectivity of $UP(I,A)$ implies that $UP(I,A)$ is left $\psi$-contractible, then the rest is similar to the proof of Theorem \ref{main}. \end{Remark} \begin{Proposition} Let $A$ be a Banach algebra with a left approximate identity and $\Delta(A)$ be a non-empty set. If $A$ is approximately biprojective, then $\Delta(A)$ is discrete with respect to the $w^{*}$-topology. \end{Proposition} \begin{proof} Since $A$ is an approximately biprojective Banach algebra with a left approximate identity, by \cite[Theorem 3.9]{sah3} $A$ is left $\phi$-contractible for every $\phi\in\Delta(A)$. Applying \cite[Corollary 2.2]{dashti} one can see that $\Delta(A)$ is discrete. \end{proof} \begin{cor} Let $A$ be a Banach algebra with a left identity, $\phi\in\Delta(A)$ and let $I$ be a non-empty set. If $UP(I,A)$ is approximate biprojective, then $\Delta(UP(I,A))$ is discrete with respect to the $w^{*}$-topology. \end{cor} \begin{proof} Note that, since $\phi\in\Delta(A)$, $\Delta(UP(I,A))$ is a non-empty set. Existence of left identity for $A$ implies that $UP(I,A)$ has a left approximate identity, see \cite[Lemma 5.2]{saha}. Applying previous Proposition one can see that $\Delta(UP(I,A))$ is discrete with respect to the $w^{*}$-topology. \end{proof} Let $A$ be a Banach algebra and $\phi\in\Delta(A)$. $A$ is $\phi$-inner amenable if there exists a bounded net $(a_{\alpha})$ in $A$ such that $$aa_{\alpha}-a_{\alpha}a\rightarrow 0,\quad \phi(a_{\alpha})\rightarrow 1\qquad(a\in A).$$ For more information about $\phi$-inner amenability, see \cite{jab}. \begin{lemma} Let $A$ be a Banach algebra and $\phi\in\Delta(A).$ Suppose that $A$ has an approximate identity. Then approximate biprojectivity of $A$ implies that $A$ is $\phi$-inner amenable. \end{lemma} \begin{proof} Suppose that $A$ is approximate biprojective. Using \cite[Theorem 3.9]{sah3}, existence of approximate identity implies that $A$ is left and right $\phi$-contractible. 
Then there exist $m_{1}$ and $m_{2}$ in $A$ such that $$am_{1}=\phi(a)m_{1}(m_{2}a=\phi(a)m_{2})\quad \phi(m_{1})=\phi(m_{2})=1\qquad(a\in A),$$ respectively. Since $$m_{1}=\phi(m_{2})m_{1}=m_{2}m_{1}=\phi(m_{1})m_{2}=m_{2},$$ one can see that $$am_{1}=m_{1}a=\phi(a)m_{1}\quad \phi(m_{1})=1, (a\in A).$$ It follows that $A$ is $\phi$-inner amenable. \end{proof} \begin{Remark} There exists a matrix algebra which is approximate biprojective but it is not $\phi$-inner amenable. Then the converse of previous Lemma is not always true. To see this, let $A=\left(\begin{array}{cc} 0&\mathbb{C}\\ 0&\mathbb{C}\\ \end{array} \right)$ and also let $a_{0}=\left(\begin{array}{cc} 0&1\\ 0&1\\ \end{array} \right)$. Define $\rho:A\rightarrow A\otimes_{p}A$ by $\rho(a)=a\otimes a_{0}$ for every $a\in A$. It is easy to see that $\rho$ is a bounded $A$-bimodule morphism and $$\pi_{A}\circ \rho(a)=a,\qquad (a\in A).$$ Then $A$ is biprojective and it follows that $A$ is approximate biprojective. Set $\phi(\left(\begin{array}{cc} 0&a\\ 0&b\\ \end{array} \right))=b$ for every $a,b\in \mathbb{C}$. It is easy to see that $\phi\in\Delta(A)$. We claim that $A$ is not $\phi$-inner amenable. We suppose conversely that $A$ is $\phi$-inner amenable. Then there exists a bounded net $(a_{\alpha})$ in $A$ such that $$aa_{\alpha}-a_{\alpha}a\rightarrow 0,\quad \phi(a_{\alpha})\rightarrow 1\qquad (a\in A).$$ It is easy to see that $ab=\phi(b)a$ for every $a\in A.$ Hence we have $$0=\lim_{\alpha} a_{0}a_{\alpha}-a_{\alpha}a_{0}=\lim \phi(a_{\alpha})a_{0}-\phi(a_{0})a_{\alpha}=\lim a_{0}-a_{\alpha},$$ It follows that $a_{0}=\lim a_{\alpha}$. Hence for each $a\in A$, we have $$aa_{0}=a_{0}a,\quad \phi(a_{0})=1.$$ It follows that $a=\phi(a)a_{0}$. Thus $\dim A=1$ which is a contradiction. \end{Remark} \section{Examples of semigroup algebras related to the matrix algebras} \begin{Example} Suppose that $A$ is a Banach algebra and $I$ is a non-empty set. Put $B=UP(I,A)$. It is obvious that $B$ with matrix multiplication can be observed as a semigroup. Equip this semigroup with the discrete topology and denote it with $S_{B}.$ Suppose that $A$ has a non-zero idempotent. We claim that $\ell^{1}(S_B)$ is not amenable, whenever $I$ is an infinite set. Suppose conversely that $\ell^{1}(S_B)$ is amenable. Let $e$ be an idempotent for $A$. $E_{i,i}$ for a matrix belongs to $B$ which its $(i,i)$-th entry is $e$, otherwise is $0$. It is easy to see that $E_{i,i}$ is an idempotent for the semigroup $S_B$, for every $i\in I.$ So the set of idempotents of $S_{B}$ is infinite, whenever $I$ is infinite. Thus by \cite[Theorem 2]{dun} $\ell^{1}(S_B)$ is not amenable which is contradiction. Suppose that $A$ is a Banach algebra with a left identity, also suppose that $I$ is an ordered set with smallest element. We also claim that $\ell^{1}(S_B)$ is never approximate biprojective. To see this suppose conversely that $\ell^{1}(S_{B})$ is approximately biprojective. We denote augmentation character on $\ell^{1}(S_B)$ by $\phi_{S_{B}}$. It is easy to see that $\delta_{\hat{0}}\in S_{B}$ and $\phi_{S_{B}}(\delta_{\hat{0}})=1,$ where $\hat{0}$ is denoted for the zero matrix belongs to $S_{B}.$ One can see that the center of $S_{B}$, say $Z(S_{B})$, is non-empty, because $\hat{0}$ belongs to $Z(S_{B})$. Using \cite[Proposition 3.1(ii)]{sah3}, one can see that $\ell^{1}(S_{B})$ is left $\phi_{S_{B}}$-contractible. Let $i_{0}$ be an smallest element of $I$. 
Define $$J=\{(a_{i,j})\in S_{B}|a_{i,j}=0,\text{for all}\quad i\neq i_{0}\},$$ it is easy to see that $J$ is an ideal of $S_{B}$, then by \cite[page 50]{dales semi} $\ell^{1}(J)$ is a closed ideal of $\ell^{1}(S_{B})$. Since $\phi_{S_{B}}|_{\ell^{1}(J)}$ is non-zero, $\ell^{1}(J)$ is left $\phi_{S_{B}}$-contractible. Thus there exists $m\in \ell^{1}(J)$ such that $am=\phi_{S_{B}}(a)m$ and $\phi_{S_{B}}(m)=1,$ for every $a\in A.$ On the other hand since $A$ has a left identity, then $J$ has a left identity. Thus by the same argument as in the proof of \cite[Proposition 3.1(ii)]{sah3} we have $$m(j)=m(e_{l}j)=\delta_{j}m(e_{l})=\phi_{S_{B}}(\delta_{j})m(e_{l})=m(e_{l})\quad (j\in J),$$ where $e_{l}$ is a left unit for $J.$ It follows that $m$ is a constant function belongs to $\ell^{1}(J)$. Since $\phi_{S_{B}}(m)=1,$ then $m\neq 0$ which implies that $J$ is finite which is impossible. \end{Example} \end{document}
\begin{document} \author{Mario Alberto Castagnino.} \address{I.A.F.E. (Univ. de Bs. As.)} \author{Adolfo Ram\'{o}n Ord\'{o}\~{n}ez.} \address{I.F.I.R. / Fac. de Ciencias Exactas, Ing. y Agrim. (Univ. Nac. de Rosario)} \author{Daniela Beatriz Emmanuele.} \address{Fac. Cs. Exactas, Ing.y Agrim.U.N.R.} \title{A general mathematical structure for the time-reversal operator.} \date{December 27th., 2000} \maketitle \begin{abstract} The aim of this work is the mathematical analysis of the physical time-reversal operator and its definition as a geometrical structure{\bf , } in such a way that it could be generalized to the purely mathematical realm. Rigorously, only having such a ``time-reversal structure'' it can be decided whether a dynamical system is time-symmetric or not.{\it \ }The ``time-reversal structures'' of several important physical and mathematical examples are presented, showing that there are some mathematical categories whose objects are the (classical or abstract) ``time-reversal systems'' and whose morphisms generalize the Wigner transformation. \end{abstract} \section{Introduction.} The dynamics and the thermodynamics of both, classical and quantum physical systems, are modelized by the mathematical theory of classical and abstract dynamical systems. It is obvious that the {\it physical} notion of ``time-symmetric (or asymmetric) dynamical systems'' requires the definition of a ``time-reversal operator'', $K$ \cite{7}$.$ In fact, every known model of a physical dynamical system {\it has} some $K$ operator. E.g., the dynamic of classical physical systems is described in the cotangent fiber bundle $T^{*}(N)$ of its configuration manifold $N$, and therefore the action of $K:T^{*}(N)\rightarrow T^{*}(N)$ is defined as \begin{equation} p_{q}\mapsto -p_{q} \label{0.1} \end{equation} for any linear functional $p$ on $q\in N,$ or in coordinate's language: \begin{equation} (q^{i},p_{i})\mapsto (q^{i},-p_{i}) \label{0.2} \end{equation} in a particular $(q^{i},p_{i})$ coordinate system. In Quantum Mechanics there is the well known Wigner antiunitary operator $K$ defined through the complex conjugation in the position (wave functions) representation: \begin{equation} \psi (x,t)\mapsto \psi (x,-t)^{*} \label{0.3} \end{equation} In the last few years it was demonstrated that ordinary Quantum Mechanics (with no superselection sectors) can be naturally included in the Hamiltonian formalism of its real K\"{a}hlerian differentiable manifold of quantum states \cite{Cire0} \cite{Cire1} \cite{Cire2}. The latter one is the real but infinite dimensional manifold of the associated projective space $ {\bf P}({\cal H})$ of its Hilbert states space ${\cal H}$ \footnote{ We should remember the fact that ordinary pure quantum states are not {\it vectors} $\psi $ (normalized or not) of a Hilbert space ${\cal H}$, but rays or {\it projective equivalence classes of vectors} $[\psi ]\in {\bf P}({\cal H})$.}. 
From this point of view, $K:{\bf P}({\cal H})\rightarrow {\bf P}( {\cal H})$ acts as the canonical projection to the quotient of (\ref{0.3}) \begin{equation} \lbrack \psi (x,t)]\mapsto \left[ \psi (x,-t)^{*}\right] \label{0.4} \end{equation} Moreover, this result has been generalized to more general quantum systems through their characteristic C*-algebra $A$: the pure quantum states space $\partial K(A)$ turns out to be a projective K\"{a}hler bundle over the spectrum $\widehat{A}$, whose fiber over the class of a state $\psi $ is isomorphic to ${\bf P}({\cal H}_{\psi })$, ${\cal H}_{\psi }$ being the space of its GNS (Gelfand-Naimark-Segal) representation \cite{Cire3}. More recently these authors have relaxed the K\"{a}hlerian structure, showing ``minimal'' mathematical structures involved in the quantum principles of superposition and uncertainty, with the aim of considering non linear extensions of quantum mechanics \cite{Cire4}. But, what happens in more general dynamical systems? Some of them, such as the Bernoulli systems and certain Kolmogorov systems \cite{2}, are purely mathematical. Nevertheless, the notion of time-symmetry seems to make sense also for them. So, it would be interesting to know what kind of mathematical structures are involved in these systems. Our aim is to show that: 1.-{\it The mere existence of the time reversal operator is a mathematical structure }consisting of a non trivial involution $K$ of the states space of a general system (with holonomic constraints) $M,$ which splits into a $K$-invariant submanifold (coordinatized by ${\it q}^{i}$) and whose complementary set (the field of the effective action of $K,$ coordinatized by ${\it p}_{i}$) is a manifold of the same dimension as $M.$ This structure is logically independent of the symplectic one \cite{1}, which does not require such a splitting at all. In fact, the essence of symplectic geometry, as its own etymology shows, is the ``common enveloping'' of $q$ and $p$, losing any ``privilege'' between them. Actually, {\it this $K$-structure is defined by the action of that part of the complete Galilei group, including the time-reversal, which is allowed to act on the phase space manifold $M$ by the constraints.} In fact, only on the homogeneous Euclidean phase space $M^{\prime }={\Bbb R}^{6n}$ is it possible to have the transitive action \cite{4} of the complete Galilei group. 2.-{\it It is possible to define generalized and purely mathematical ``time-reversals'' }allowing a generalization of the notion of ``time-symmetry'' for a wider class of dynamical systems, including all Bernoulli systems. In fact there are mathematical categories whose objects are the time-reversal (classical or abstract) systems $(M,K)$ and whose morphisms generalize the Wigner transformation \cite{Wigner}. 3.-When the states space has additional structures, {\it there is a possibility of getting a richer time-reversal compatible with these structures. }For example, in Classical Mechanics the canonical $K$ is a symplectomorphism of phase space, and the Wigner quantum operator is compatible with the K\"{a}hlerian structure of ${\bf P}({\cal H}).$ At first sight (\ref{0.2}) is quite similar to (\ref{0.3}) and it seems to be some kind of ``complex conjugation'' (and the even dimensionality of phase space reinforces this idea). We will prove this fact, namely, the existence of an almost complex structure $J$ with respect to which $K$ is an almost complex time-reversal.
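For instance, in the simplest flat case (a finite-dimensional illustration added here for concreteness; the general construction on an arbitrary cotangent bundle is given below): on $T^{*}({\Bbb R}^{n})\simeq {\Bbb R}^{n}\times {\Bbb R}^{n}$ identify $(q,p)$ with $z=q+ip\in {\Bbb C}^{n}$, and let $J$ be the corresponding standard almost complex structure, $J(\partial _{q^{i}})=\partial _{p_{i}}$, $J(\partial _{p_{i}})=-\partial _{q^{i}}$. Then $K(q,p)=(q,-p)$ is just $z\mapsto \overline{z}$, its differential satisfies $K_{*}(\partial _{q^{i}})=\partial _{q^{i}}$, $K_{*}(\partial _{p_{i}})=-\partial _{p_{i}}$, and therefore \begin{eqnarray*} K_{*}\circ J(\partial _{q^{i}}) &=&-\partial _{p_{i}}=-J\circ K_{*}(\partial _{q^{i}}), \\ K_{*}\circ J(\partial _{p_{i}}) &=&-\partial _{q^{i}}=-J\circ K_{*}(\partial _{p_{i}}), \end{eqnarray*} which is condition (\ref{1.1}) below.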
This increases the analogy with Quantum Mechanics, where the strong version of the Heisenberg Uncertainty Principle \cite{Cire1} \cite{Cire2} is equivalent to the existence of a complex structure $J,$ by means of which the Wigner time-reversal is defined. 4.-{\it It is possible to make a definition of time-reversal systems so general }that it includes among its examples the real line, the Minkowski space-time, the cotangent fiber bundles, the quantum systems, the space of classical density functions, the space of quantum density operators, the Bernoulli systems, etc. The paper is organized as follows: In section 2, the general theory of {\bf reversals} and {\bf time-reversal systems} is developed. In section 3, the theory of {\bf abstract reversals} and {\bf abstract time-reversal systems} is given. In section 4, many important examples of time-reversal systems are given. In section 5, we give the abstract time-reversal of Bernoulli systems and we show explicitly the geometrical meaning of our definitions for the baker's transformation. \section{Reversals and time-reversal systems} {\bf Definition:} Let $M$ be a real paracompact, connected, finite or infinite-dimensional differentiable manifold, and let $K:M\rightarrow M$ be a diffeomorphism. We will say that $K$ is a {\bf reversal} on $M,$ and that $(M,K)$ is a {\bf reversal system} if the following conditions are satisfied: (r.1) $K$ is an involution, i.e. $K^2=I_M$; (r.2) The set $N$ of all fixed points of $K$ is an immersed submanifold of $M,$ such that $M-N$ is a connected or disconnected submanifold of the same dimension as $M$. (In particular, this implies that $K$ is a non trivial involution, i.e. $K\neq I_{M},$ the identity map on $M$.) We will say that $N$ is the {\bf invariant submanifold} of the reversal system. {\bf Definition:} Let $(M,K)$ be a reversal system. We will say that $M$ is $K${\bf -orientable} if $M-N$ is composed of two diffeomorphic connected components $M_{+}$ and $M_{-}$, and if \begin{equation} K(M_{+})=M_{-}\;,\;\text{and }K(M_{-})=M_{+} \label{1.0} \end{equation} $M$ is $K${\bf -oriented} when, conventionally or arbitrarily, one of these components is selected as {\bf ``positively oriented''}. In that case, $K$ changes the $K${\bf -}orientation of $M.$ If there is a complex (or almost complex) structure $J$ on $M$ (and therefore $J^{\prime }=-J$ is another one) and if, in addition, $K$ satisfies: (c.r.) $K$ is complex (or almost complex) as a map from $(M,J)$ to $(M,J^{\prime })=(M,-J)$, i.e.: \begin{equation} K_{*}\circ J=-J\circ K_{*} \label{1.1} \end{equation} \noindent we will say that $K$ is a {\bf complex (or almost complex) reversal}, or a {\bf conjugation}. Similarly, if a symplectic (or almost symplectic) 2-form $\omega $ is given on $M$ \footnote{ In the infinite-dimensional case we require $\omega $ to be {\it strongly non-degenerate }\cite{Cire4} in the sense that the map $X\mapsto \omega (X,.) $ is a toplinear isomorphism.} (and therefore $\omega ^{\prime }=-\omega $ is another one) and if, in addition to (r.1) and (r.2), $K$ satisfies: (s.r.) $K$ is a symplectomorphism from $(M,\omega )$ to $(M,\omega ^{\prime })=(M,-\omega )$, i.e.: \begin{equation} K^{*}\omega =-\omega \label{1.2} \end{equation} \noindent we will say that $K$ is a {\bf symplectic (or almost symplectic) reversal.
}If we have a symplectic (or almost symplectic) reversal system $ (M,\omega ,K),$ then for every \[ m\in M:K_{*}:T_{m}(M)\rightarrow T_{m}(M) \] is a (toplinear) isomorphism, and if \[ i:N\rightarrow M\text{ is the immersion}:i(q)=m \] and $i_{*}(X_{q})=X_{i(q)}$ is the induced isomorphism, we can define an almost complex structure $J$: \begin{eqnarray} J\left( X_{i(q)}\right) &:&=Y_{m}\Leftrightarrow \omega \left( X_{i(q)},Y_{m}\right) =1 \nonumber \\ J\left( Y_{m}\right) &:&=-X_{i(q)} \label{1.2a} \end{eqnarray} that is to say, by defining the pairs of ``conjugate'' vectors (and extending by linearity). Then \[ K_{*}\left( X_{i(q)}\right) =X_{i(q)}\;,\;K_{*}\left( Y_{m}\right) =-Y_{m} \] When $(M,\omega ,J,g)$ is a K\"{a}hler (or almost K\"{a}hler) manifold, and $ K$ satisfies the properties (r.1), (r.2), (c.r.) and (s.r.), we will say that $K$ is a {\bf K\"{a}hlerian (or almost K\"{a}hlerian) reversal. }In that case, $K$ is also an isometry \begin{equation} K^{*}g=g \label{1.3} \end{equation} with respect to the K\"{a}hler metric $g$ defined by: \begin{equation} g(X,Y)=-\omega (X,JY)\text{ for all vector fields }X\text{ and }Y. \label{1.4} \end{equation} {\bf Definition:} Let $(M,K)$ be a reversal system such that there is a class ${\cal F}$ of flows $\left( S_t\right) _{t\in {\Bbb R}}$ or / and cascades $\left( S_t=S^t\right) _{t\in {\Bbb Z}}$ on $M$ such that, for any $ m\in M,$ and any $t$ in ${\Bbb R}$ (or in ${\Bbb Z}$) satisfies: \begin{equation} (K\circ S_t\circ K)(m)=S_{-t}(m) \label{1.5} \end{equation} Then we will say that $K$ is a {\bf time-reversal }for ${\cal F}${\bf \ }on $ M.$ (In Physics we can take ${\cal F}$ as a class of physical interest. For example, in Classical Mechanics we can take the class of all Hamiltonian flows over a fixed phase space and in Quantum Mechanics the class of solutions of the Schr\"{o}dinger equation in a fixed states space, etc.) Only having a time-reversal on $M,$ {\bf time-symmetric} {\bf (or asymmetric) } dynamical systems $\left( S_{t}\right) $ (flows $\left( S_{t}\right) _{t\in {\Bbb R}}$ ; or cascades $\left( S_{t}=S^{t}\right) _{t\in {\Bbb Z}}$ ) can be defined. In fact, $\left( S_{t}\right) $ will be considered as{\bf \ symmetric with respect to the time-reversal }$K${\bf ,} if it fulfills $ \forall m\in M$ the condition (\ref{1.5}) (or {\bf asymmetric }if it doesn't). When $M$ is orientable (oriented) with respect to a time-reversal $K$, we will say that it is {\bf time-orientable (oriented). } {\bf Definition: }By a {\bf morphism} of the reversal system $(M,K)$ into $ (M^{\prime },K^{\prime }),$ we mean a differentiable map $f$ of $M$ into $ M^{\prime }$ such that \begin{equation} f\circ K=K^{\prime }\circ f \label{1.6} \end{equation} As the composition of two morphisms is a morphism and the identity $I_{M}$ is a morphism, we get a {\bf category of reversal systems}, whose objects are the reversal systems and whose morphisms are the morphisms of reversal systems. Also we have the subcategories of symplectic, almost complex, K\"{a}hlerian, etc. reversal systems. \section{Abstract reversals and abstract time-reversal systems} {\bf Definition:} Let $(M,\mu )$, be a measure space, and let $ K:M\rightarrow M$ be an isomorphism (mod 0) \cite{2}. We will say that $K$ is an {\bf abstract} {\bf reversal} on $(M,\mu )$ and that $(M,\mu ,K)$ is an {\bf abstract} {\bf reversal system} if the following conditions are satisfied: (a.r.1) $K$ is an involution, i.e. 
$K^2=I_M$ (mod 0) (a.r.2) The set $N$ of all fixed points of $K$ is a measurable subset of null measure of $M,$ and so $\mu [M-N]=\mu [M]$ (In particular, this implies that $K$ is a non-trivial involution, i.e. $K\neq I_{M},$ the identity function on $M$.) We will say that $N$ is the {\bf invariant subset} of the abstract reversal system. {\bf Definition:} Let $(M,\mu ,K)$ be an abstract reversal system such that there is a class ${\cal F}$ of measure preserving flows $\left( S_{t}\right) _{t\in {\Bbb R}}$ and/or cascades $\left( S_{t}=S^{t}\right) _{t\in {\Bbb Z}}$ on $M$ such that, for all $m\in M$ and all $t$ in ${\Bbb R}$ (or in ${\Bbb Z}$), condition (\ref{1.5}) is satisfied. Then, we will say that $K$ is a {\bf time-reversal} for ${\cal F}$ on $(M,\mu )$. Once an abstract time-reversal on $M$ is given, {\bf time-symmetric (or asymmetric)} abstract dynamical systems $\left( S_{t}\right) $ (flows $\left( S_{t}\right) _{t\in {\Bbb R}}$; or cascades $\left( S_{t}=S^{t}\right) _{t\in {\Bbb Z}}$) can be defined. In fact, $\left( S_{t}\right) $ will be regarded as {\bf symmetric with respect to the time-reversal} $K$ if it fulfills $\forall m\in M$ the condition (\ref{1.5}) (or {\bf asymmetric} if it doesn't). {\bf Definition:} By a {\bf morphism} of the abstract reversal system $(M,\mu ,K)$ into $(M^{\prime },\mu ^{\prime },K^{\prime }),$ we mean a measurable map $f$ of $(M,\mu )$ into $(M^{\prime },\mu ^{\prime })$ such that, $\forall A^{\prime }\subset M^{\prime }$ measurable: \begin{equation} \mu \left( f^{-1}(A^{\prime })\right) =\mu ^{\prime }\left( A^{\prime }\right) \text{ mod 0} \label{1.7} \end{equation} and \begin{equation} f\circ K=K^{\prime }\circ f \label{1.8} \end{equation} As the composition of two morphisms is a morphism and the identity $I_{M}$ is a morphism, we get a {\bf category of abstract reversal systems}, whose objects are the abstract reversal systems and whose morphisms are the morphisms of abstract reversal systems. \section{Examples of time-reversal systems} We will see how the mathematical structure just described can be implemented in all the classical and quantum physical systems, and also generalized to more abstract, purely mathematical dynamical systems, such as the Bernoulli systems. \subsection{The real line} Let us consider, on the real line ${\Bbb R}$, the mapping $K:{\Bbb R}\rightarrow {\Bbb R}$ defined by: \begin{equation} K(t)=-t \label{2.1} \end{equation} Clearly, ${\Bbb R}$ is $K$-orientable, because if $N=\{0\},$ then ${\Bbb R}-\{0\}={\Bbb R}_{+}\cup {\Bbb R}_{-},$ and $K$ is a canonical time-reversal for the family of translations: for $a\in {\Bbb R}$ fixed, and $t\in {\Bbb R}$, \begin{equation} S_{t}^{a}(x):=x+ta \label{2.2} \end{equation} \subsection{The Minkowski space-time} Let $({\Bbb R}^4,\eta )$ be the Minkowski space-time, with $\eta =$ diag$(1,-1,-1,-1)$. The invariant submanifold is the spacelike hyperplane \[ N=\left\{ (0,x,y,z):x,y,z\in {\Bbb R}\right\} \] Clearly, fixing $M_{+}$ as the halfspace containing the ``forward'' light cone \[ \left\{ (ct,x,y,z):c^{2}t^{2}-x^{2}-y^{2}-z^{2}>0\text{ and }t>0\right\} \] and $M_{-}$ as the halfspace containing the ``backward'' light cone \[ \left\{ (ct,x,y,z):c^{2}t^{2}-x^{2}-y^{2}-z^{2}>0\text{ and }t<0\right\} \] and defining $K:{\Bbb R}^{4}\rightarrow {\Bbb R}^{4}$ by: \begin{equation} K(ct,x,y,z)=(-ct,x,y,z) \label{2.3} \end{equation} we get a canonical $K$-orientation, equivalent to the usual time-orientation. 
$K$ is a time-reversal with respect to the temporal translations \begin{equation} S_{t}^{A}(X):=X+tA=(x^{0}+ta^{0},x^{1},x^{2},x^{3}) \label{2.4} \end{equation} for $A=(a^{0},0,0,0)\in {\Bbb R}^{4}:a^{0}\neq 0$ fixed, and $t\in {\Bbb R}$. {\bf Remark:} As an effect of curvature, not every general 4-dimensional Lorentzian manifold $(M,g)$ will be time-orientable \cite{Licner}. Nevertheless, a time-orientable general space-time is necessary if we are searching for a model of a universe with an arrow of time \cite{cosmic arrow} \cite{Casta}. In fact, if our universe were represented by a non-time-orientable manifold, it would be impossible to define past and future in a global sense, in contradiction with all our present cosmological observations. Indeed, we know that there are no parts of the universe where the local arrow of time points differently from our own arrow. \subsection{The cotangent fiber bundle. Classical Mechanics.} A general cotangent fiber bundle need not be $K$-orientable. Nevertheless, we have the following result: {\bf Theorem}: The cotangent fiber bundle (of a finite dimensional differentiable manifold) $\left( T^{*}(N),\pi ,N\right) $ \cite{1} has a canonical almost K\"{a}hlerian time-reversal (for the Hamiltonian flows on it). {\bf Proof:} Let $M$ be the cotangent fiber bundle $T^{*}(N)$ of a real n-dimensional manifold $N.$ In this case $N$ is an embedded submanifold of $M$, the embedding $i:N\rightarrow T^{*}(N)$ being given by: \[ i(q)=0_{q}\text{ (the null functional at }q\text{)} \] Let's define \begin{eqnarray} K &:&T^{*}(N)\rightarrow T^{*}(N) \nonumber \label{4.1} \\ \forall p_q &\in &T_q^{*}(N):K(p_q)=-p_q \label{2.5} \end{eqnarray} By its definition, this map is obviously an involution, and by its linearity it is differentiable; its differential or tangent map \[ K_{*}:T\left( T^{*}(N)\right) \rightarrow T\left( T^{*}(N)\right) \] satisfies: \begin{equation} K_{*}(X_{p_{q}})= {X_{p_{q}}\text{ if }X_{p_{q}}\in i_{*}\left( T_{q}(N)\right) \atopwithdelims\{. -X_{p_{q}}\text{ if }X_{p_{q}}\in T_{p_{q}}\left( \pi ^{-1}(q)\right) } \label{2.6} \end{equation} $X_{p_{q}}\in T_{p_{q}}\left( \pi ^{-1}(q)\right) $ means that it is ``vertical'', i.e. tangent at the point $p_{q}$ to the fibre over $q$. It must be taken into account that, by joining a vertical basis with the image of a basis of $T_{q}(N)$ under the isomorphism $i_{*}$, we get a basis of $T_{p_{q}}\left( T^{*}(N)\right) .$ Let $\omega $ be the canonical symplectic 2-form of the cotangent fiber bundle. 
As $\omega $ is antisymmetric, in order to evaluate $K^{*}\omega ,$ it is sufficient to consider only three possibilities: \begin{eqnarray*} 1)\;\left( X_{p_q},Y_{p_q}\right) &:&X_{p_q}\;,Y_{p_q}\in i_{*}\left( T_q(N)\right) \\ 2)\;\left( X_{p_q},Y_{p_q}\right) &:&X_{p_q}\;,Y_{p_q}\in T_{p_q}\left( \pi ^{-1}(q)\right) \\ 3)\;\left( X_{p_q},Y_{p_q}\right) &:&X_{p_q}\in i_{*}\left( T_q(N)\right) \text{ but }Y_{p_q}\in T_{p_q}\left( \pi ^{-1}(q)\right) \end{eqnarray*} In case 1) \begin{equation} \omega \left( K_{*}(X_{p_q}),K_{*}(Y_{p_q})\right) =\omega \left( X_{p_q},Y_{p_q}\right) =0 \label{2.7} \end{equation} In case 2) \begin{equation} \omega \left( K_{*}(X_{p_q}),K_{*}(Y_{p_q})\right) =\omega \left( -X_{p_q},-Y_{p_q}\right) =\omega \left( X_{p_q},Y_{p_q}\right) =0 \label{2.8} \end{equation} In case 3) \begin{eqnarray} \omega \left( K_{*}(X_{p_q}),K_{*}(Y_{p_q})\right) &=&\omega \left( X_{p_q},-Y_{p_q}\right) \nonumber \\ &=&-\omega \left( X_{p_q},Y_{p_q}\right) \label{2.9} \end{eqnarray} Thus, in any case \begin{equation} \left( K^{*}\omega \right) \left( X_{p_q},Y_{p_q}\right) =\omega \left( K_{*}(X_{p_q}),K_{*}(Y_{p_q})\right) =-\omega \left( X_{p_q},Y_{p_q}\right) \label{2.10} \end{equation} \noindent which proves that $K^{*}\omega =-\omega ,$ the (s.r.) property. Now, let us define \begin{eqnarray} J &:&T\left( T^{*}(N)\right) \rightarrow T\left( T^{*}(N)\right) \nonumber \\ J(X_{p_q}) &=&Y_{p_q}\Leftrightarrow \omega \left( X_{p_q},Y_{p_q}\right) =1 \label{2.10.1} \end{eqnarray} that is to say, $J(X_{p_q})$ is the canonical conjugate of $X_{p_q}.$ Then, by the antisymmetry of $\omega ,$ clearly $J^{2}=-I.$ In addition \begin{eqnarray} \left( K_{*}\circ J\right) (X_{p_{q}}) &=&K_{*}\left( J(X_{p_{q}})\right) =-J\left( X_{p_{q}}\right) \nonumber \\ &=&-J\left( K_{*}\left( X_{p_{q}}\right) \right) \nonumber \\ &=&\left( -J\circ K_{*}\right) (X_{p_{q}}) \label{2.10.2} \end{eqnarray} if $X_{p_{q}}\in i_{*}\left( T_{q}(N)\right) $ and \begin{eqnarray} \left( K_{*}\circ J\right) (X_{p_{q}}) &=&K_{*}\left( J(X_{p_{q}})\right) =J(X_{p_{q}}) \nonumber \\ &=&J\left( -K_{*}\left( X_{p_{q}}\right) \right) \nonumber \\ &=&\left( -J\circ K_{*}\right) (X_{p_{q}}) \label{2.10.3} \end{eqnarray} if $X_{p_{q}}\in T_{p_{q}}\left( \pi ^{-1}(q)\right) .$ So $K$ satisfies both (s.r.) and (c.r.) with respect to $\omega $ and $J$, and therefore it is an almost K\"{a}hlerian reversal. $\Box $ As it is well known, the phase space $M$ of a classical system with a finite number ($n$) of degrees of freedom and holonomic constraints has the particular form $T^{*}(N),$ where $N$ denotes the configuration space of the system. This implies the existence of a privileged submanifold $N$ of $M$. We may enquire: why is this so? The answer is: because every law of Classical Mechanics is invariant with respect to the Galilei group {\it which contains all the spatial translations} (and it is itself a contraction of the inhomogeneous Lorentz group \cite{Hermann}). This forces the configuration space to be a submanifold of some {\it homogeneous} ${\Bbb R}^{d}$ space. Now, in this submanifold we also have a privileged system of coordinates: the spatial position coordinates $q_{1}=x_{1},...,q_{d}=x_{d}$, with respect to which the action of the Galilei group has its simplest affine expression. Nevertheless, in general this action will take us away from the configuration manifold $N,$ because it is not compatible with the constraints (think, for example, of the configuration space of a double pendulum -with two joined threads- which is a 2-torus contained in ${\Bbb R}^{3}$). 
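For readers who prefer to see the two defining identities in coordinates, the following minimal numerical sketch (not part of the original proof) checks (r.1), (s.r.) and (c.r.) in the flat case $N={\Bbb R}^{n}$, where $T^{*}(N)\cong {\Bbb R}^{2n}$ with canonical coordinates $(q,p)$, the reversal is the linear map $K(q,p)=(q,-p)$, and $\omega $ is represented by its standard matrix; the particular matrix used for $J$ is the standard compatible complex structure, an assumption made only for this illustration and not the conjugation constructed in the proof above.
\begin{verbatim}
import numpy as np

n = 3                                    # dimension of the configuration space N = R^n
Zero, Id = np.zeros((n, n)), np.eye(n)

# Canonical symplectic form on T*(R^n) in coordinates (q, p): omega(X, Y) = X^T Om Y
Om = np.block([[Zero, Id], [-Id, Zero]])

# Standard compatible complex structure (illustrative choice) and the tangent
# map of the reversal K(q, p) = (q, -p); K is linear, so K_* = K.
J = np.block([[Zero, -Id], [Id, Zero]])
K = np.block([[Id, Zero], [Zero, -Id]])

assert np.allclose(K @ K, np.eye(2 * n))     # (r.1)  K is an involution
assert np.allclose(K.T @ Om @ K, -Om)        # (s.r.) K^* omega = -omega
assert np.allclose(K @ J, -J @ K)            # (c.r.) K_* o J = -J o K_*
print("K(q, p) = (q, -p) is a symplectic and complex reversal on T*(R^3)")
\end{verbatim}
The coordinate-free proof above is of course more general; the sketch only makes the sign conventions explicit.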
\subsection{Classical Statistical Mechanics} Let us consider the phase space of a classical system $\left( T^{*}(N),\omega \right) $ and take the real Banach space $V=L_{{\Bbb R} }^1\left( T^{*}(N),\sigma \right) $ containing the probability densities over the phase space, where $\sigma =\omega \wedge ...\wedge \omega $ ($n$ times) is the Liouville measure. $V$ is a real infinite dimensional differentiable manifold modelled by itself. Then, the above defined almost K\"{a}hlerian time-reversal $K$ on $T^{*}(N)$ induces $\widetilde{K} :V\rightarrow V$ by: \begin{equation} \rho \mapsto \widetilde{K}(\rho ):\left( \widetilde{K}(\rho )\right) (m):=\rho \left( K(m)\right) \label{2.11} \end{equation} Clearly, $\widetilde{K}$ is a toplinear involution. Now, let us consider the set $P$ of all (``almost everywhere'' equivalent classes of) ``$\widetilde{K} $-even'' integrable functions \begin{equation} P=\left\{ \rho \in V:\rho \left( K(m)\right) =\rho (m)\right\} \label{2.12} \end{equation} and the set $I$ of all (``almost everywhere'' equivalent classes of) the ``$ \widetilde{K}$-odd'' integrable functions \begin{equation} I=\left\{ \rho \in V:\rho \left( K(m)\right) =-\rho (m)\right\} \label{2.13} \end{equation} Trivially, $V=P\oplus I$ , and there are two toplinear projectors mapping any $\rho \in V$ into its ``$\widetilde{K}$-even'' and ``$\widetilde{K}$ -odd'' parts. $P$ is the invariant subspace of $\widetilde{K}.$ Its complement is the (infinite dimensional) open submanifold of (``almost everywhere'' equivalent classes of) integrable functions whose ``$\widetilde{ K}$-odd'' projection doesn't vanish$.$ Now, every dynamical system $(S_t)$ (in particular those of the class ${\cal F}$ of $K$) on $T^{*}(N)$ induces another dynamical system $(U_t)$ on $V:$ \begin{equation} \left( U_t(\rho )\right) (m):=\rho \left( S_{-t}(m)\right) \label{2.14} \end{equation} Considering the class $\widetilde{{\cal F}},$ induced by ${\cal F},$ we conclude that $\widetilde{K}$ is a time-reversal. So we have another example lacking time orientability but having a time-reversal structure . \subsection{Complex Banach spaces} A {\bf complex structure} on a {\it real} finite (or infinite) Banach space $ V$ is a linear (toplinear) transformation $J$ of $V$ such that $J^2=-I $ , where $I$ stands for the identity transformation of $V$. \cite{4} In the case of a {\it complex} Banach space $V_{{\Bbb C}}$ , we can consider the associated real vector space (its ``realification'') $V=V_{{\Bbb R}}$ composed of the same set of vectors, but with ${\Bbb R}$ instead of ${\Bbb C} $, as the field of its scalars. Then, $J=iI$ is the canonical complex structure of $V_{{\Bbb R}}.$ If $J$ is a complex structure on a finite dimensional real vector space, its dimension must be even. In any case, there exist elements $X_1$, $X_2$,..., $ X_n,...$ of $V$ such that \[ \left\{ X_1,...,X_n,...,JX_1,...JX_n,...\right\} \] is a basis for $V$ \cite{4}. Let us define $K:V\rightarrow V$ as the ``conjugation'', i.e. extending by linearity the assignment \begin{equation} \forall i=1,2,...:K(X_{i})=X_{i}\;;\;K(JX_{i})=-JX_{i} \label{2.15} \end{equation} Then the real subspace generated by $\{X_{1},X_{2},...\},$ is the subspace of fixed points of $K,$ $N$. 
So, $(V,J,K)$ is a complex time-reversal system for the class of ``non-real translations'' $\left( S_{t}^{A}\right) _{t\in {\Bbb R}}$ ($A$ being a linear combination of $JX_{1},...,JX_{n},...$): \begin{equation} S_{t}^{A}(X)=X+tA\;,\;t\in {\Bbb R} \label{2.16} \end{equation} In fact, \begin{eqnarray} \left( K\circ S_{t}^{A}\circ K\right) (X) &=&K\left( K(X)+tA\right) =X-tA \nonumber \\ &=&S_{-t}^{A}(X) \label{2.17} \end{eqnarray} \subsection{Ordinary quantum mechanical systems} As a particular case of the previous example, let us consider a classical system whose phase space is ${\Bbb R}^{6n},$ and take $V={\cal H}=L^{2}\left( {\Bbb R}^{3n},\sigma \right) $ (actually, its realification). This choice is motivated by the fact that we want to have a Galilei-invariant Quantum Mechanics, and so {\it we must quantize the spatial position coordinates}. There is no canonical or symplectic symmetry here. Only when acting on the wave functions of the position coordinates will the Wigner time-reversal operator be expressed as the complex conjugation. So, we get a complex time-reversal structure. Now, following \cite{Cire1}, let us consider the real but infinite dimensional K\"{a}hlerian manifold $\left( {\bf P}({\cal H}),\widetilde{J},\widetilde{\omega },g\right) $ of the associated projective space ${\bf P}({\cal H})$ of the Hilbert state space ${\cal H}$ of an ordinary quantum mechanical system. $J$ is the complex structure of ${\cal H}$, and it is the local expression of \[ \widetilde{J}:T\left( {\bf P}({\cal H})\right) \rightarrow T\left( {\bf P}({\cal H})\right) \] $\left( {\bf P}({\cal H}),\widetilde{J},\widetilde{\omega },g\right) $ has a canonical K\"{a}hlerian time-reversal structure. In fact, we define $K:{\cal H}\rightarrow {\cal H}$ as in the previous example, and take: \[ \widetilde{K}:{\bf P}({\cal H})\rightarrow {\bf P}({\cal H})\text{ by: }\widetilde{K}\left[ \psi \right] =\left[ K(\psi )\right] \] then all the desired properties follow easily. \subsection{Quantum Statistical Mechanics} Let $V=L^{1}({\cal H})$ denote the complex Banach space generated by all nuclear operators on ${\cal H}$ with the trace norm. This set contains the density operators of Quantum Statistical Mechanics. $V$ is a real infinite dimensional differentiable manifold modelled by itself. Then, the above defined complex time-reversal $K$ on ${\cal H}$ induces $\widehat{K}:V\rightarrow V$ by: \begin{equation} \widehat{\rho }\mapsto \widehat{K}(\widehat{\rho }):\left( \widehat{K}(\widehat{\rho })\right) (\psi ):=\widehat{\rho }\left( K(\psi )\right) \label{2.18} \end{equation} Clearly, $\widehat{K}$ is a toplinear involution. Now, let us consider the set $R$ of all ``$\widehat{K}$-real'' densities \begin{equation} R=\left\{ \widehat{\rho }\in V:\widehat{\rho }\left( K(\psi )\right) =\widehat{\rho }(\psi )\right\} \label{2.19} \end{equation} and the set $I$ of all the ``$\widehat{K}$-imaginary'' densities \begin{equation} I=\left\{ \widehat{\rho }\in V:\widehat{\rho }\left( K(\psi )\right) =-\widehat{\rho }(\psi )\right\} \label{2.20} \end{equation} Trivially, $V=R\oplus I$, and there are two toplinear projectors mapping any $\rho \in V$ into its ``$\widehat{K}$-real'' and ``$\widehat{K}$-imaginary'' parts. 
$R$ is the invariant subspace of $\widehat{K}.$ Its complement is the (infinite dimensional) open submanifold of trace-class operators whose ``$\widehat{K}$-imaginary'' projection is not null. Now, every dynamical system $(S_{t})$ (in particular those of the class ${\cal F}$ of $K$) on ${\cal H}$ induces another dynamical system $(U_{t})$ on $V$ by setting: \begin{equation} \left( U_{t}(\widehat{\rho })\right) (\psi ):=\widehat{\rho }\left( S_{-t}(\psi )\right) \label{2.21} \end{equation} Considering the class $\widehat{{\cal F}},$ induced by ${\cal F},$ we conclude that $\widehat{K}$ is a complex time-reversal. So we have another example lacking time orientability but having a time-reversal structure. In both Classical and Quantum Statistical Mechanics, we have used the same criterion to choose $N$ and ${\cal H}$ respectively. The densities of the two theories are related by the Wigner integral $W$, which is an essential ingredient in the theory of the classical limit \cite{Casta-Laura}. In the one dimensional case, it is the mapping $\widehat{\rho }\mapsto \rho =W\left( \widehat{\rho }\right) $ given by: \begin{equation} \rho (q,p)=\frac{1}{\pi }\int\limits_{-\infty }^{+\infty }\widehat{\rho }(q-\lambda ,q+\lambda )\,e^{2ip\lambda }\,d\lambda \label{2.21a} \end{equation} where $q$ is the spatial position coordinate \footnote{We want to emphasize the necessity of having a homogeneous configuration space (${\Bbb R}$ in the one dimensional case) in order to have the translations $q\mapsto q\pm \lambda $ in the $W$ integral.}, $p$ its conjugate momentum, $\rho (q,p)$ a classical density function, and \begin{eqnarray} \widehat{\rho }(x,x^{\prime }) &=&\left( \sum\limits_{j=1}^{\infty }\rho _{j}\,\overline{\psi _{j}}\otimes \psi _{j}\right) (x,x^{\prime }) \nonumber \\ &=&\sum\limits_{j=1}^{\infty }\rho _{j}\,\overline{\psi _{j}}(x)\psi _{j}(x^{\prime }) \label{2.21b} \end{eqnarray} is a generic matrix element of a quantum density, where $\left\{ \psi _{j}\right\} _{j=1}^{\infty }$ is an orthonormal basis of ${\cal H}$, $\rho _{j}\geqslant 0$ and $\sum\limits_{j=1}^{\infty }\rho _{j}=1$. As is obvious by a simple change of variables, \begin{equation} W\left[ \widehat{K}\left( \widehat{\rho }\right) \right] \left( q,p\right) =\rho (q,-p)=\rho \left( K(q,p)\right) =\widetilde{K}\left[ W\left( \widehat{\rho }\right) \right] \left( q,p\right) \label{2.21c} \end{equation} and therefore $W$ is a morphism between $\left( L^{1}({\cal H}),\widehat{K}\right) $ and $\left( L_{{\Bbb R}}^{1}\left( T^{*}(N)\right) ,\widetilde{K}\right) .$ \subsection{Koopman treatment of Kolmogorov-Systems} Having defined time-reversal in the physical examples above, we now turn to the same definition in purely mathematical dynamical systems. Let $(M,\mu ,S_t)$ be a Kolmogorov system (cascade or flow). As it is well known, this implies that the induced unitary evolution $U_t$ in ${\cal H}=[1]^{\bot }$, the orthogonal complement of the one dimensional subspace of the a.e.-equivalence classes of the constant functions in the Hilbert space $L^2(M,\mu ),$ has uniform Lebesgue spectrum of countable constant multiplicity. 
This, in turn, implies the existence of a system of imprimitivity $(E_s)_{s\in {\Bbb G}}$ based on ${\Bbb G}$ for the group $(U_t)_{t\in {\Bbb G}}$, where ${\Bbb G}$ is ${\Bbb Z}$ or ${\Bbb R}$: \begin{equation} E_{s+t}=U_tE_sU_t^{-1} \label{2.22} \end{equation} Following Misra \cite{Misra}, we define the ``Aging'' operator \begin{equation} T=\int\limits_{{\Bbb G}}s\,dE_{s}= {\int\limits_{{\Bbb R}}s\,dE_{s}\text{ for flows} \atopwithdelims\{. \sum\limits_{s\in {\Bbb Z}}sE_{s}\text{ for cascades}} \label{2.23} \end{equation} Then \begin{equation} U_{-t}TU_t=T+t \label{2.24} \end{equation} $T$ is selfadjoint in the discrete case, and essentially selfadjoint in the continuous case, and there are eigenvectors in the discrete case, and generalized eigenvectors (antifunctionals) in certain riggings of ${\cal H}$ by a nuclear space $\Phi $ ($\Phi \prec {\cal H}\prec \Phi ^{\times }$) in the continuous case, $\left( \left| \tau ,n\right\rangle \right) _{\tau \in {\Bbb G}}$, such that: \begin{eqnarray} T\left| \tau ,n\right\rangle &=&\tau \left| \tau ,n\right\rangle \label{2.25} \\ U_{t}\left| \tau ,n\right\rangle &=&\left| \tau +t,n\right\rangle \label{2.26} \end{eqnarray} Defining \begin{equation} K\left| \tau ,n\right\rangle =\left| -\tau ,n\right\rangle \label{2.27} \end{equation} it follows easily that $K$ restricted to ${\cal H}$ is a time-reversal for ${\cal F}=\{(U_t)\}$ with respect to which $U_t$ is symmetric. \section{Examples of abstract time-reversals} \subsection{Bernoulli schemes} Let $M$ be the set $\Sigma ^{{\Bbb Z}}$ of all bilateral sequences (of ``bets'') \begin{equation} m=(a_{j})_{j\in {\Bbb Z}}=(...a_{-2},a_{-1},a_{0},a_{1},a_{2},...) \label{3.1} \end{equation} on a finite set $\Sigma $ with $n$ elements (a ``die'' with $n$ faces). Let ${\frak X}$ be the $\sigma $-algebra on $M$ generated by all the subsets of the form \begin{equation} A_{j}^{s}=\{m:a_{j}=s\in \Sigma \} \label{3.2} \end{equation} Clearly, \begin{equation} M=\bigcup\limits_{s\in \Sigma }A_j^s=\bigcup\limits_{k=1}^nA_j^{s_k} \label{3.3} \end{equation} Let's define a normalized measure $\mu $ on $M$ by choosing $n$ ordered positive real numbers $p_{1},...,p_{n}$ whose sum is equal to one ($p_{k}$ is the ``probability'' of getting $s_{k}$ when the ``die'' is thrown), and setting: \begin{equation} \forall k:k=1,...,n\;:p_{k}=\mu (A_{j}^{s_{k}}) \label{3.4} \end{equation} \begin{equation} \mu \left( A_{j_1}^{s_1}\cap ...\cap A_{j_k}^{s_k}\right) =\mu (A_{j_1}^{s_1})...\mu (A_{j_k}^{s_k}) \label{3.5} \end{equation} where $j_1,...,j_k$ are all different. 
Let the dynamical automorphism $S$ be the shift to the right: \begin{eqnarray} S\left( (a_{j})_{j\in {\Bbb Z}}\right) &=&(a_{j}^{\prime })_{j\in {\Bbb Z}} \nonumber \\ \text{where: } &&a_{j}^{\prime }:=a_{j-1} \label{3.6} \end{eqnarray} The shift preserves $\mu $ because \begin{equation} \mu \left( S(A_j^{s_k})\right) =\mu (A_{j+1}^{s_k})=p_k \label{3.7} \end{equation} The above abstract dynamical scheme is called a Bernoulli scheme and denoted $B(p_1,...,p_n).$ Let's define a ``canonical'' abstract reversal by: \begin{eqnarray} K\left( (a_{j})_{j\in {\Bbb Z}}\right) &=&(a_{j}^{\prime })_{j\in {\Bbb Z}} \nonumber \\ a_{j}^{\prime } &=&a_{-j+1} \label{3.8} \end{eqnarray} Clearly, $K$ is an isomorphism, and its invariant set \begin{equation} N=\bigcap\limits_{j\in {\Bbb Z}}\left\{ \bigcup\limits_{s\in \Sigma }\left( A_j^s\cap A_{-j}^s\right) \right\} \label{3.9} \end{equation} has $\mu $-measure $0.$ In addition $K$ is a time-reversal for the class ${\cal F}$ of all Bernoulli schemes, because \begin{equation} K\circ S\circ K=S^{-1} \label{3.10} \end{equation} where $S^{-1}$ is the shift to the left. \subsection{The Baker's transformation} We will show the geometrical meaning of the last two time-reversals for the Baker's transformation. The measure space is the torus \[ M=\left[ 0,1\right] \times \left[ 0,1\right] \;/\sim \;=\left\{ (x,y) \mathop{\rm mod} 1=[x,y]:x,y\in \left[ 0,1\right] \right\} \] that is to say, $\sim $ is the equivalence relation that identifies the following boundary points: \[ (0,x)\sim (1,x)\;\text{and}\;(x,0)\sim (x,1) \] with its Lebesgue measure. The automorphism $S$ acts as follows: \begin{equation} S(x,y)= {(2x,\frac 12y)\;\;\;\;\;\;\;\;\;\;if\;0\leq x\leq \frac 12\;,\;0\leq y\leq 1 \atopwithdelims\{. (2x-1,\frac 12y+\frac 12)\;if\;\frac 12\leq x\leq 1\;,\;0\leq y\leq 1} \label{5.1} \end{equation} It is clear that $S$ is a non-continuous but measure preserving transformation which involves a contraction in the $y$ direction and a dilatation in the $x$ direction; the contracting and dilating directions at every point $m\in M$ are the vertical and the horizontal lines through $m$. The torus is a compact, connected Lie group and we can define an involutive automorphism $K$ on $M$ by putting: \begin{equation} K[x,y]=[y,x]\;;\;x,y\in I=\left[ 0,1\right] \label{5.2} \end{equation} The fixed points of $K$ constitute a submanifold of the torus: the projection of the diagonal $\Delta $ of the unit square $I\times I$ \[ N=\Delta \;/\sim =\left\{ [x,x]:x\in I\right\} \] Then: \begin{equation} K\circ S^t\circ K=S^{-t},\;\forall t\in {\Bbb Z} \label{5.3} \end{equation} In fact, the first application of $K$ to the generating partition of $S$ reflects the unit square about the diagonal, interchanging the $x$ fibers with the $y$ ones. Then, by applying $S^{t}$ (that is, $t$ times $S$), we get a striped pattern of horizontal lines, which is reflected again and yields a striped pattern of vertical lines when $K$ is applied once more. The same pattern would be obtained if $S^{-t}$ were used. As it is well known \cite{2}, the Baker transformation is isomorphic to $B(\frac{1}{2},\frac{1}{2})$. In fact, the map \begin{equation} (x,y)\mapsto m=(a_{j})_{j\in {\Bbb Z}}\Leftrightarrow x=\sum\limits_{j=0}^{\infty }\frac{a_{-j}}{2^{j+1}}\;\text{and } y=\sum\limits_{j=1}^{\infty }\frac{a_{j}}{2^{j}} \label{5.4} \end{equation} is an isomorphism (mod 0). 
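As a quick illustration (not contained in the original argument), the reversal relation (\ref{5.3}) for the Baker's transformation (\ref{5.1}) and the reversal (\ref{5.2}) can be checked numerically at randomly drawn points of the open square; the boundary identifications, which have measure zero, are ignored in this sketch.
\begin{verbatim}
import numpy as np

def S(x, y):
    """The Baker's transformation (5.1) on the open unit square."""
    if x < 0.5:
        return 2 * x, 0.5 * y
    return 2 * x - 1, 0.5 * y + 0.5

def S_inv(x, y):
    """Its inverse: contraction in x, dilatation in y."""
    if y < 0.5:
        return 0.5 * x, 2 * y
    return 0.5 * x + 0.5, 2 * y - 1

def K(x, y):
    """The reversal (5.2): reflection about the diagonal."""
    return y, x

rng = np.random.default_rng(0)
for _ in range(10000):
    x, y = rng.random(2)
    assert np.allclose(K(*S(*K(x, y))), S_inv(x, y))   # (5.3) with t = 1
print("K o S o K = S^(-1) verified at 10000 random points")
\end{verbatim}
The digit correspondence (\ref{5.4}) between points of the square and binary sequences can be checked in the same spirit.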
Moreover, it is an isomorphism of abstract time-reversal systems, because it sends the time-reversal of (\ref{5.2}) to the time-reversal of (\ref{3.8}). In particular, this implies that the Baker's map is a Kolmogorov system, and therefore it has the corresponding time-reversal for its Koopman treatment. These three reversals are related. Let $\{A,B\}$ be the partition of the unit square into its left and right halves. As it is well known, this partition is both independent and generating for the Baker's map. Let's define \begin{equation} \theta _{0}=\chi _{A}-\chi _{B}= {\text{\ }1\text{ in }A \atopwithdelims\{. -1\text{ in }B} \label{5.5} \end{equation} where $\chi _{A}$ is the characteristic function of the set $A,$ as well as \begin{equation} \theta _{n}=U^{n}(\theta _{0})=\theta _{0}\circ S^{-n}= {\text{\ }1\text{ in }S^{n}(A) \atopwithdelims\{. -1\text{ in }S^{n}(B)} \label{5.6} \end{equation} and for any finite set $F=\{n_{1},...,n_{F}\}\subset {\Bbb Z}$, put \begin{equation} \theta _{F}=\theta _{n_{1}}...\theta _{n_{F}}\;\text{(ordinary product of functions)} \label{5.7} \end{equation} Then, all the eigenvectors of the Aging operator $T$ of $U$ are of the form \cite{Prigo}: \begin{equation} T\theta _{F}=n_{m}\theta _{F} \label{5.8} \end{equation} where $n_{m}=\max F.$ Geometrically speaking, $\rho =\theta _{0}$ -that we can identify with $\{A,B\}$- is an eigenvector of age $0,$ and if $U$ acts $n$ times on it we get an eigenvector of age $n$: $\theta _{n}$, which can be identified with a set of horizontal fringes. On the other hand, if $U^{-1}$ acts $n$ times on it we get an eigenvector of age $-n$: $\theta _{-n}$, which can be identified with a set of vertical fringes. As expected, the induced action of $K$ sends the ``future'' horizontal eigenstates of $T$ to the ``past'' vertical ones, and reciprocally. \begin{center} ACKNOWLEDGMENT \end{center} The authors wish to express their gratitude to Dr. Sebastiano Sonego for providing an initial and fruitful discussion on the subject of this paper. This work was partially supported by grant PIP 4410 of CONICET (Argentine National Research Council). \begin{references} \bibitem{1} ARNOLD, V., {\it Mathematical Methods of Classical Mechanics}, Springer-Verlag, New York, 1989. \bibitem{2} ARNOLD, V. - AVEZ, A., {\it Ergodic Problems of Classical Mechanics}, Benjamin, New York, 1968. \bibitem{cosmic arrow} CASTAGNINO, M., Phys. Rev. D 57, 750 (1998); and {\it The global nature of the arrow of time and Bohm-Reichenbach diagram}, in ``Irreversibility and Causality'' (A selection of articles presented at the 21st International Colloquium on Group Methods in Physics, Goslar, July 1996), A. Bohm et al. Ed., Springer-Verlag, Berlin, page 282 (1998). \bibitem{Casta} CASTAGNINO, M., GUNZIG, E., IGURI, S., ORDO\~{N}EZ, A., {\it The Kolmogorov-Lax-Phillips systems as branch systems of the Reichenbach model}, Proc. 7th International Workshop on Instabilities and Non Equilibrium Structures, Valpara\'{i}so, Chile, 15/12/1997. \bibitem{Casta-Laura} CASTAGNINO, M., LAURA, R., {\it Functional Approach to Quantum Decoherence and Classical Final Limit}, submitted to Phys. Rev. A (1999). \bibitem{Cire0} CIRELLI, R., MANI\`{A}, A., PIZZOCCHERO, L., {\it Quantum Phase Space formulation of Schr\"{o}dinger Mechanics}, Int. Jour. of Mod. Phys. A, Vol. 6, N 12 (1991), 2133-2146. \bibitem{Cire1} CIRELLI, R., MANI\`{A}, A., PIZZOCCHERO, L., {\it Quantum mechanics as an infinite-dimensional Hamiltonian system with uncertainty structure: Part I}, J. Math. Phys. 31 (12), Dec. 
1990, 2891-2897. \bibitem{Cire2} CIRELLI, R., MANI\`{A}, A., PIZZOCCHERO, L., {\it Quantum mechanics as an infinite-dimensional Hamiltonian system with uncertainty structure: Part II}, J. Math. Phys. 31 (12), Dec. 1990, 2898-2903. \bibitem{Cire3} ABBATI, M.C., CIRELLI, R., LANZAVECCHIA, P., MANI\`{A}, A., {\it Pure Quantum-Mechanical Systems as K\"{a}hler Bundles}, Il Nuovo Cimento, Vol. 83 B, N 1, (sett. 1984), 43-60. \bibitem{Cire4} CIRELLI, R., GATTI, M., MANI\`{A}, A., {\it On the nonlinear extension of quantum superposition and uncertainty principles}, Jour. Geom. and Phys. 516 (1998), 1-23. \bibitem{Cornfeld} CORNFELD, I.P., FOMIN, S.V., SINAI, Ya.G., {\it Ergodic Theory}, Springer-Verlag, New York, 1982. \bibitem{3} HELGASON, S., {\it Differential Geometry, Lie Groups and Symmetric Spaces}, Acad. Press, New York, 1978. \bibitem{Hermann} HERMANN, R., {\it Lie Groups for Physicists}, Benjamin, New York, 1966. \bibitem{4} KOBAYASHI, S. AND NOMIZU, K., {\it Foundations of Differential Geometry}, Volumes I and II, Interscience Publishers, 1969. \bibitem{5} LASOTA, A. - MACKEY, M., {\it Probabilistic properties of deterministic systems}, Cambridge Univ. Press, 1985. \bibitem{6} MACKEY, M., {\it Time's Arrow: The Origins of Thermodynamic Behaviour}, Springer-Verlag, New York, 1992. \bibitem{7} MESSIAH, A., {\it Mec\'{a}nica Cu\'{a}ntica}, Ed. Tecnos, Madrid, 1964. \bibitem{Licner} LICHNEROWICZ, A., {\it Champ de Dirac, champ du neutrino et transformations C, P, T sur un espace-temps courbe}, Ann. Inst. H. Poincar\'{e}, Vol. I, n${{}^{o}}$ 3, 233-290, 1964. \bibitem{Misra} MISRA, B., {\it Nonequilibrium Entropy, Liapounov Variables and Ergodic Properties of Classical Systems}, Proc. Nat. Acad. Sci. USA, 75, p. 1627, 1978. \bibitem{Prigo} MISRA, B., PRIGOGINE, I., COURBAGE, M., {\it From deterministic dynamics to probabilistic descriptions}, Physica 98 A, 1-26, 1979. \bibitem{15} SPIVAK, M., {\it A Comprehensive Introduction to Differential Geometry}, Volume 1, Publish or Perish, Inc., USA, 1979. \bibitem{8} STERNBERG, S., {\it Lectures on Differential Geometry}, Prentice-Hall, New Jersey, 1964. \bibitem{16} TABOR, M., {\it Chaos and integrability in nonlinear dynamics}, A Wiley-Interscience Publication, John Wiley \& Sons. \bibitem{14} WARNER, F., {\it Foundations of Differentiable Manifolds and Lie Groups}, Springer-Verlag, New York, 1983. \bibitem{Wigner} HILLERY, M., O'CONNELL, R. F., SCULLY, M. O., WIGNER, E. P., {\it Distribution Functions in Physics: Fundamentals}, Phys. Rep. 106, 3, (1984), 123-167. \end{references} \end{document}
\begin{document} \date{May 2, 2005} \author{S.~Kuhr} \email{[email protected]} \author{W.~Alt} \author{D.~Schrader} \author{I.~Dotsenko} \author{Y.~Miroshnychenko} \author{A.~Rauschenbeutel} \author{D.~Meschede} \affiliation{Institut f\"ur Angewandte Physik, Universit\"at Bonn, Wegelerstr.~8, D-53115 Bonn, Germany} \title{Analysis of dephasing mechanisms in a standing wave dipole trap} \begin{abstract} We study in detail the mechanisms causing dephasing of hyperfine coherences of cesium atoms confined by a far off-resonant standing wave optical dipole trap [S. Kuhr {\it et al.}, Phys. Rev. Lett. {\bf 91}, 213002 (2003)]. Using Ramsey spectroscopy and spin echo techniques, we measure the reversible and irreversible dephasing times of the ground state coherences. We present an analytical model to interpret the experimental data and identify the homogeneous and inhomogeneous dephasing mechanisms. Our scheme to prepare and detect the atomic hyperfine state is applied at the level of a single atom as well as for ensembles of up to 50 atoms. \end{abstract} \pacs{32.80.Lg, 32.80.Pj, 42.50.Vk} \maketitle \section{Introduction} The coherent manipulation of isolated quantum systems has received increased attention in the recent years, especially due to its importance in the field of quantum computing. A possible quantum computer relies on the coherent manipulation of quantum bits (qubits), in which information is also encoded in the quantum phases. The coherence time of the quantum state superpositions is therefore a crucial parameter to judge the usefulness of a system for storage and manipulation of quantum information. Moreover, long coherence times are of great importance for applications in precision spectroscopy such as atomic clocks. Information cannot be lost in a closed quantum system since its evolution is unitary and thus reversible. However, a quantum system can never be perfectly isolated from its environment. It is thus to some extent an open quantum system, characterized by the coupling to the environment \cite{Zurek82}. This coupling causes decoherence, i.\,e.~the evolution of a pure quantum state into a statistical mixture of states. Decoherence constitutes the boundary between quantum and classical physics \cite{Zurek91}, as demonstrated in experiments in Paris and Boulder \cite{Brune96,Haroche98,Monroe96}. There, decoherence was observed as the decay of macroscopic superposition states (Schr\"odinger cats) to statistical mixtures. We can distinguish decoherence due to the progressive entanglement with the environment from dephasing effects caused by classical fluctuations. This dephasing of quantum states of trapped particles has recently been studied both with ions \cite{Schmidt-Kaler03} and neutral atoms in optical traps \cite{Davidson95,Ozeri99}. In this work, we have analyzed measurements of the dephasing mechanisms acting on the hyperfine ground states of cesium atoms in a standing wave dipole trap. More specifically, we use the two Zeeman sublevels \ket{F=4,m_F=0} and \ket{F=3,m_F=0} which are coupled by microwave radiation at $\omega_{\rm hfs}/2\pi=9.2$~GHz. We present our setup and the relevant experimental tools in Sec.~\ref{sec:setup}, with special regard to the coherent manipulation of single neutral atoms. Our formalism and the notation of the dephasing/decoherence times are briefly introduced in Sec.~\ref{sec:definition}. 
Finally, in Secs.~\ref{sec:ramsey} and~\ref{sec:echo} we experimentally and theoretically analyze the inhomogeneous and homogeneous dephasing effects. \section{Experimental tools}\label{sec:setup} \subsection{Setup} \begin{figure} \caption{Experimental setup.} \label{fig:setup} \end{figure} We trap and manipulate cesium atoms in a red detuned standing wave dipole trap. Our trap is formed of two counterpropagating Gaussian laser beams with waist $2w_0=40$~\textmu m and a power of max.~2~W per beam (see Fig.~\ref{fig:setup}), derived from a single Nd:YAG laser ($\lambda=1064$~nm). Typical trap depths are on the order of $U_0=1$~mK. The laser beams have parallel linear polarization and thus produce a standing wave interference pattern. Both laser beams are sent through acousto-optic modulators (AOMs), to mutually detune them for the realization of a moving standing wave. This ``optical conveyor-belt'' was introduced in previous experiments \cite{Kuhr01,Schrader01} and has been used for the demonstration of quantum state transportation \cite{Kuhr03}. For the experiments in this paper, however, we do not transport the trapped atoms. To eliminate any heating effect arising due to the phase noise of the AOM drivers \cite{Schrader01,Alt02b}, we used the non-deflected beams (0$^{\rm th}$~order of the AOMs) to form the dipole trap. The AOMs are only used to vary the dipole trap laser intensity by removing power from the trap laser beams. Cold atoms are loaded into the dipole trap from a high gradient magneto-optical trap (MOT). The high field gradient of the MOT ($\partial B/\partial z=340$~G/cm) is produced by water cooled magnetic coils, placed at a distance of 2~cm away from the trap. The magnetic field can be switched to zero within 60~ms (limited by eddy currents in the conducting materials surrounding the vacuum chamber) and it can be switched back on within 30~ms. Our vacuum chamber consists of a glass cell, with the cesium reservoir being separated from the main chamber by a valve. Cesium atoms are loaded into the MOT at random from the background gas vapor. To speed up the loading process, we temporarily lower the magnetic field gradient to $\partial B/\partial z=25$~G/cm during a time $t_{\rm low}$. The low field gradient results in a larger capture cross section which significantly increases the loading rate. Then, the field gradient is returned to its initial value, confining the trapped atoms at the center of the MOT. Varying $t_{\rm low}$ enables us to select a specific average atom number ranging from 1 to 50. The required time depends on the cesium partial pressure, which was kept at a level such that we load typically 50 atoms within $t_{\rm low}=100$~ms in these experiments. In order to transfer cold atoms from the MOT into the dipole trap, both traps are simultaneously operated for some tens of milliseconds before we switch off the MOT. After an experiment in the dipole trap the atoms are transferred back into the MOT by the reverse procedure. All our measurements rely on counting the number of atoms in the MOT before and after any intermediate experiment in the dipole trap. For this purpose we collect their fluorescence light by a home-built diffraction-limited objective \cite{Alt02a} and detect the photons with an avalanche photodiode (APD). Three diode lasers are employed in this experiment which are set up in Littrow configuration and locked by polarization spectroscopy. 
The MOT cooling laser is stabilized to the $F=4\rightarrow F\mbox{$^\prime$}=3/F\mbox{$^\prime$}=5$ crossover transition and shifted by an AOM to the red side of the cooling transition $F=4\rightarrow F\mbox{$^\prime$}=5$. The MOT repumping laser is locked to the $F=3\rightarrow F\mbox{$^\prime$}=4$ transition, it is $\pi$-polarized and is shined in along the dipole trap axis. To optically pump the atoms into the $\ket{F=4, m_F=0}$ state, we use the unshifted MOT cooling laser, which is only detuned by +25~MHz from the required $F=4\rightarrow F\mbox{$^\prime$}=4$ transition. This small detuning is partly compensated for by the light shift of the dipole trap. We shine in the laser along the dipole trap axis with $\pi$-polarization together with the MOT repumper. We found that 80\% of the atoms are pumped into the $\ket{F=4, m_F=0}$ state, presumably limited by polarization imperfections of the optical pumping lasers. For the state selective detection (see below) we use a ``push-out'' laser, resonant to the $F=4\rightarrow F\mbox{$^\prime$}=5$ transition. It is $\sigma^+$-polarized and shined in perpendicular to the trapping beams (z-axis in Fig.~\ref{fig:setup}). To generate microwave pulses at the frequency of 9.2~GHz we use a synthesizer (Agilent 83751A), which is locked to an external rubidium frequency standard (Stanford Research Systems, PRS10). The amplified signal ($P=+36~\mbox{dBm}=4.0$~W) is radiated by a dipole antenna, placed at a distance 5~cm away from the MOT. Compensation of the earth's magnetic field and stray fields created by magnetized objects close to the vacuum cell is achieved with three orthogonal pairs of coils. For the compensation, we minimize the Zeeman splitting of the hyperfine ground state $m_F$ manifold which is probed by microwave spectroscopy. Using this method we achieve residual fields of $B_{\rm res} < 0.4$~\textmu T (4~mG). The coils of the $z$-axis also serve to produce a guiding field, which defines the quantization axis. \subsection{State selective detection of a single neutral atom} Sensitive experimental methods had to be developed in order to prepare and to detect the atomic hyperfine state at the level of a single atom. State selective detection is performed by a laser which is resonant with the $F=4 \rightarrow F'=5$ transition and thus pushes the atom out of the dipole trap if and only if it is in $F=4$. An atom in the $F=3$ state, however, is not influenced by this laser. Thus, it can be transferred back into the MOT and be detected there. Although this method appears complicated at first, it is universal, since it works with many atoms as well as with a single one. Other methods, such as detecting fluorescence photons in the dipole trap by illuminating the atom with a laser resonant to the $F=4 \rightarrow F\mbox{$^\prime$}=5$ transition, failed in our case because the number of photons detected before the atom decays into the $F=3$ state is not sufficient. In order to achieve a high efficiency of the state selective detection process, it is essential to remove the atom out of the dipole trap before it is off-resonantly excited to $F\mbox{$^\prime$}=4$ and spontaneously decays into the $F=3$ state. For this purpose, we use a $\sigma^{+}$-polarized push-out laser, such that the atom is optically pumped into the cycling transition $\ket{F=4, m_F=4} \rightarrow \ket{F\mbox{$^\prime$}=5, m_F=5}$. 
In our setup, the polarization is not perfectly circular, since for technical reasons we had to shine in the laser beam at an angle of $2^\circ$ with respect to the magnetic field axis. This entails an increased probability of exciting the $F'=4$ level from where the atom can decay into the $F=3$ ground state. To prevent this, we remove the atom from the trap sufficiently fast by shining in the push-out laser from the radial direction with high intensity ($I/I_0\approx 100$, with $w_0 = 100$~\textmu m, $P = 30$~\textmu W, where $I_0=1.1$~mW/cm$^2$ is the saturation intensity). In this regime its radiation pressure force is stronger than the dipole force in the radial direction, such that we push out the atom within half the radial oscillation period ($\approx~1$~ms). In this case, the atom receives a momentum corresponding to the sum of all individual photon momenta. This procedure is more efficient than heating an atom out of the trap, which occurs when the radiation pressure force of the push-out laser is weaker than the dipole force, and the atom performs a random walk in momentum space while absorbing and emitting photons. If we adiabatically lower the trap to typically $0.12$~mK prior to the application of the push-out laser, we need on average only 35 photons to push the atom out of the trap. This number is small enough to prevent off-resonant excitation to $F\mbox{$^\prime$}=4$ and spontaneous decay to $F=3$. \begin{figure} \caption{State selective detection of a single atom. The graphs show the fluorescence signal of the atom during state preparation and detection, binned in time intervals of 5 ms. The bars above the graphs show the timing of the lasers. Graphs (a)(i) and (b)(i) show the signals of a single atom, prepared in $F=3$ and $F=4$, respectively. Graphs (a)(ii) and (b)(ii) show the added signal of about 150 events. } \label{fig:SSD} \end{figure} A typical experimental sequence to test the state selective detection is shown in Fig.~\ref{fig:SSD}. First, the atom is transferred from the MOT into the optical dipole trap. Using the cooling and repumping laser of the MOT, we optically pump the atom either in the $F=3$ (Fig.~\ref{fig:SSD}a) or the $F=4$ hyperfine state (Fig.~\ref{fig:SSD}b). The push-out laser then removes all atoms in $F=4$ from the trap. Any remaining atom in $F=3$ is transferred back into the MOT, where it is detected. Figures~\ref{fig:SSD}a(i) and \ref{fig:SSD}b(i) show the signals of a single atom, prepared in $F=3$ and $F=4$, respectively. Our signal-to-noise ratio enables us to unambiguously detect the surviving atom in $F=3$, demonstrating the state-selective detection at the single atom level. We performed 157 repetitions with a single atom prepared in $F=3$ and found that in 153 of the cases the atom remains trapped, yielding a detection probability of $97.5^{+1.2}_{-2.0}\%$. Similarly, only 2 out of 167 atoms prepared in $F=4$ remain trapped, yielding 1.2$^{+1.6}_{-0.8}\%$. The asymmetric errors are the Clopper-Pearson 68\% confidence limits \cite{Barlow}. These survival probabilities can also be inferred by directly adding the signals of the individual repetitions and by comparing the initial and final fluorescence levels in the MOT, see Figs.~\ref{fig:SSD}a(ii) and \ref{fig:SSD}b(ii). \begin{figure} \caption{Atom counting. Initial and final numbers of atoms are inferred from their fluorescence in the MOT. Shown are the integrated APD counts binned in time intervals of 1~ms and accumulated over 10 repetitions with 20 atoms each. 
} \label{fig:AtomCounting} \end{figure} All following experiments are performed in the same way. We initially prepare the atoms in the \ket{F=4,m_F=0} state and measure the population transfer to \ket{F=3, m_F=0} induced by the microwave radiation. After the application of one or a sequence of microwave pulses, the atom is in general in a superposition of both hyperfine states, \begin{equation} \ket{\psi} = c_3 \ket{F=3, m_F=0} + c_4 \ket{F=4, m_F=0}, \end{equation} with complex probability amplitudes $c_3$ and $c_4$. Our detection scheme only allows us to measure the population of the hyperfine state $F=3$: \begin{equation}\label{e:P3(w)} P_3 = |c_3|^2 = \frac{w+1}{2}, \end{equation} where $w$ is the third component of the Bloch vector, see below. The number $P_3$ is determined from the number of atoms before ($N_{\rm initial}$) and after ($N_{\rm final}$) any experimental procedure in the dipole trap. $N_{\rm initial}$ and $N_{\rm final}$ are inferred from the measured photon count rates, $C_{\rm initial}$, $C_{\rm final}$ and $C_{\rm backgr}$ (see Fig.~\ref{fig:AtomCounting}): \begin{equation}\label{e:N_initial and N_final} N_{\rm initial} = \frac{C_{\rm initial}-C_{\rm backgr}}{C_{\rm 1atom}} \end{equation} and \begin{equation} N_{\rm final} = \frac{C_{\rm final}-C_{\rm backgr}}{C_{\rm 1atom}}. \end{equation} The fluorescence rate of a single atom, $C_{\rm 1atom}$, is measured independently. From the atom numbers we obtain the fraction of atoms transferred to $F=3$, \begin{equation}\label{e:P3} P_3 = \frac{N_{\rm final}}{N_{\rm initial}} \end{equation} The measured number of atoms, $N_{\rm inital}$ can be larger than the actual number of atoms in the dipole trap, since we lose atoms during the transfer from the MOT into the dipole trap (see below). \subsection{Rabi oscillations}\label{ch:ExpRabiOsz} \begin{figure} \caption{Rabi oscillations on the $\ket{F=4, m_F=0}\rightarrow \ket{F=3, m_F=0}$ clock transition recorded at a trap depth $U_0=1.0$~mK. Each data point results from 100 shots with about 60 initial atoms. The line is a fit according to Eq.~(\ref{e:p3RabiOscillation}).} \label{fig:RabiOscillations} \end{figure} We induce Rabi oscillations by a single resonant microwave pulse at the maximum RF power. For the graph of Fig.~\ref{fig:RabiOscillations} we varied the pulse length from 0~\textmu s to 225~\textmu s in steps of 5~\textmu s. Each point in the graph results from 100 shots with about $60\pm10$ atoms each. The corresponding statistical error is below 1\% and is thus not shown in the graph. The error of the data points in Fig.~\ref{fig:RabiOscillations} is dominated by systematic drifts of the storage probability and efficiencies of the state preparation and detection. Since $w(t)=-\cos{\Omega_{\rm R}t}$, we fit the graph with \begin{equation}\label{e:p3RabiOscillation} P_{\rm 3} (t) = \dfrac{C}{2} \left(1 - \cos \Omega_{\rm R} t\right), \end{equation} which yields a Rabi frequency $\Omega_{\rm R}/2\pi=(14.60\pm 0.02)$~kHz. Note that this Rabi frequency is higher than the one used later in this report (10~kHz) because we changed the position of the microwave antenna for practical reasons. The maximum population detected in $F=3$ is $C=(60.4\pm 0.7)$\%. This reduction from 100\% is caused by two effects. First, when we use many ($>40$) atoms at a time, up to 20\% of the atoms are lost during the transfer from the MOT into the dipole trap due to inelastic collisions, as verified in an independent measurement. 
The remaining losses arise due to the non-perfect optical pumping process. \section{Phenomenological description of decoherence and dephasing}\label{sec:definition} In our experiment we observe quantum states in an ensemble average, and decoherence manifests as a decay or dephasing of the induced magnetic dipole moments. It is useful to distinguish between homogeneous and inhomogeneous effects. Whereas homogeneous dephasing mechanisms affect each atom in the same way, inhomogeneous dephasing only appears when observing an ensemble of many atoms possessing slightly different resonance frequencies. As we will see later, the most important difference between the two mechanisms is the fact that inhomogeneous dephasing can be reversed, in contrast to the irreversible homogeneous dephasing. The interaction between the oscillating magnetic field component of the microwave radiation, $B \cos{\omega t}$, and the magnetic dipole moment, $\mu$, of the atom is well approximated by the optical Bloch equations \cite{AllenEberly}: \begin{equation}\label{e:BlochVector} \dot{\boldsymbol u}= - \boldsymbol \Omega \times \boldsymbol u \end{equation} with the torque vector $\boldsymbol \Omega \equiv (\Omega_{\rm R}, 0, \delta)$ and the Bloch vector $\boldsymbol u \equiv (u,v,w)$. Here, $\Omega_{\rm R}=\mu B/\hbar$ is the Rabi frequency and $\delta=\omega-\omega_0$ is the detuning of the microwave from the atomic transition frequency $\omega_0$. In the following, the initial quantum state \ket{F = 4, m_F=0} corresponds to the Bloch vector $\boldsymbol u=(0,0,-1)$, whereas \ket{F = 3, m_F=0} corresponds to $\boldsymbol u=(0,0,1)$. We include the decay rates as damping terms into the Bloch equations and use a notation of the different times for population and polarization decay similar to the one of nuclear magnetic resonance: \begin{subequations} \begin{eqnarray}\label{e:BlochDampingExpt} \dot{\left<u\right>} & =& \delta \left<v\right> - \frac{\left<u\right>}{T_2}\\ \dot{\left<v\right>} & =& -\delta \left<u\right> + \Omega_{\rm R} \left<w\right> - \frac{\left<v\right>}{T_2}\\ \dot{\left<w\right>} & =&- \Omega_{\rm R} \left<v\right>- \frac{\left<w\right>-w_{\rm st}}{T_1}, \end{eqnarray} \end{subequations} where $\left<\dotsc\right>$ denotes the ensemble average. The total homogeneous transverse decay time $T_2$ is given by the polarization decay time $T_2\mbox{$^\prime$}$ and the reversible dephasing time $T_2^*$ \begin{equation}\label{e:tTimes} \frac{1}{T_2} = \frac{1}{T_2\mbox{$^\prime$}} + \frac{1}{T_2^*}. \end{equation} Inhomogeneous dephasing ($T_2^*$) occurs because the atoms may have different resonance frequencies depending on their environment. Thus the Bloch vectors of the individual atoms precess with different angular velocities and lose their phase relationship, they dephase. In our case, inhomogeneous dephasing arises due to the energy distribution of the atoms in the trap. This results in a corresponding distribution of light shifts because hot and cold atoms experience different average trapping laser intensities. The longitudinal relaxation time, $T_1$, describes the population decay to a stationary value $w_{\rm st}$. In our case, $T_1$ is governed by the scattering of photons from the dipole trap laser, which couples the two hyperfine ground states via a two-photon Raman transition. This effect is suppressed due to a destructive interference effect yielding relaxation times of several seconds (see Sec.~\ref{subsec:originsOfDephasing}). 
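To make the role of the different decay constants concrete, the following short numerical sketch (not part of the original analysis) integrates the damped Bloch equations~(\ref{e:BlochDampingExpt}) for a resonantly driven atom and converts $w$ into the measured population via $P_3=(w+1)/2$; the decay times used here are placeholder values of a plausible order of magnitude, not fitted experimental results, and only the $10$~kHz Rabi frequency is taken from the text.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

Omega_R = 2 * np.pi * 10e3   # Rabi frequency (rad/s), the 10 kHz quoted in the text
delta   = 0.0                # resonant driving
T1, T2  = 1.0, 5e-3          # placeholder decay times (s), not fitted values
w_st    = 0.0                # placeholder stationary value of w

def bloch(t, uvw):
    u, v, w = uvw
    return [ delta * v - u / T2,
            -delta * u + Omega_R * w - v / T2,
            -Omega_R * v - (w - w_st) / T1]

# Atom prepared in |F=4, m_F=0>, i.e. Bloch vector (0, 0, -1)
sol = solve_ivp(bloch, (0.0, 1e-3), [0.0, 0.0, -1.0], max_step=1e-6)
P3 = (sol.y[2] + 1) / 2      # population in F=3, P3 = (w+1)/2
print("P3 after 1 ms of resonant driving: %.3f" % P3[-1])
\end{verbatim}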
We do not include losses of atoms from the trap in the decay constants, which occur on the same timescale. \section{Inhomogeneous dephasing}\label{sec:ramsey} We measure the transverse decay time $T_2$ by performing Ramsey spectroscopy, which consists of the application of two coherent rectangular microwave pulses, separated by a time interval $t$ \cite{Ramsey}. The initial Bloch vector $\boldsymbol u_0 = (0,0,-1)$ corresponds to an atom prepared in the \ket{F=4, m_F=0}-state. A $\pi/2$-pulse rotates the Bloch vector into the state $(0,-1,0)$, where the atom is in a superposition of both hyperfine states. The Bloch vector freely precesses in the $uv$-plane with an angular frequency $\delta$. Note that $\delta$ has to be small compared to the Rabi frequency and the spectral pulse width, such that the pulse can be approximated as near resonant, and complete population transfer can occur. After a free precession during $t$, a second $\pi/2$-pulse is applied. The measurement of the quantum state finally projects the Bloch vector onto the $w$ axis. \begin{figure} \caption{Ramsey fringes recorded for two different trap depths (a) $U_0=0.1$~mK and (b) $0.04$~mK. Their decay with time constants $T_2^*=4.4\pm0.1$~ms and $20.4\pm1.1$~ms, respectively, is governed by inhomogeneous dephasing caused by the energy distribution in the trap. Each data point results from 30 shots with about 50 initial atoms. The damped oscillation is a fit with $P_{\rm 3,Ramsey}(t)$ and the envelopes are the functions $B\pm A\alpha(t,T_2^*)$ (see Eqs.~(\ref{e:alpha(t)}), (\ref{e:FITp3Ramsey})). } \label{fig:Ramsey_Experiment} \end{figure} We recorded Ramsey fringes for two different dipole trap depths, $U_0=0.1$~mK and $0.04$~mK (see Fig.~\ref{fig:Ramsey_Experiment}). Each point in the graph of Fig.~\ref{fig:Ramsey_Experiment} corresponds to 30 shots with about 50 trapped atoms per shot, yielding errors (not shown) of less than 1\%. The quoted values for $U_0$ are calculated from the measured power and the waist of the dipole trap laser beam and have an estimated uncertainty of up to 50\%. We initially transfer the atoms from the MOT into a deeper trap ($U_0>1$~mK) to achieve a high transfer efficiency. When the MOT is switched off, we adiabatically lower the trap depth using the AOMs. Our Ramsey fringes show a characteristic decay, which is not exponential. This decay is due to inhomogeneous dephasing, which occurs because after the first $\pi/2$-pulse, the atomic pseudo-spins precess with different angular frequencies. In the following, we derive analytic expressions for the observed Ramsey signal and we show that the envelope of the graphs of Fig.~\ref{fig:Ramsey_Experiment} is simply the Fourier transform of the atomic energy distribution.\\ \subsection{Differential light shift and decay of Ramsey fringes} The light shift of the ground state due to the Nd:YAG laser is simply the trapping potential \begin{equation}\label{e:U02} U_0(\Delta) = \frac{\hbar\Gamma}{8} \frac{I}{I_0} \frac{\Gamma}{\Delta}. \end{equation} The detuning of the Nd:YAG-laser from the D-line of an atom in $F=4$ is 9.2~GHz less than for an atom in $F=3$. As a consequence, the $F=4$ level experiences a slightly stronger light shift, resulting in a shift of the $F=3\rightarrow F=4$ microwave transition towards smaller resonance frequencies. 
This differential light shift, $\delta_{0}$, can be approximated as \begin{equation}\label{e:diffLSAnsatz} \hbar \delta_{\rm 0} = U_0(\Delta_{\rm eff}) - U_0(\Delta_{\rm eff}+\omega_{\rm hfs}), \end{equation} where $\Delta_{\rm eff}= - 1.2\times 10^7~\Gamma$ is an effective detuning, taking into account the weighted contributions of the D$_1$ and D$_2$ lines \cite{Schrader01}. $\omega_{\rm hfs}=2.0\times 10^3~\Gamma$ is the ground state hyperfine splitting. Since $\omega_{\rm hfs}\ll\Delta_{\rm eff}$, we find that the differential light shift is proportional to the total light shift $U_0$, \begin{equation}\label{e:differentialLS} \hbar \delta_{\rm 0} = \eta U_0, \end{equation} with a scaling factor $\eta=\omega_{\rm hfs}/\Delta_{\rm eff} = 1.45\times10^{-4}$. For atoms trapped in the bottom of a potential of $U_0=1$~mK, the differential light shift is $\delta_{0}=- 2\pi\times 3.0$~kHz. In the semiclassical limit, i.~e.~neglecting the quantized motion of the atom in the dipole trap potential, the free precession phase accumulated by an atomic superposition state between the two $\pi/2$-pulses depends on the average differential light shift only. In the following, we calculate the expected Ramsey signal using this semiclassical approach and obtain simple analytical expressions. Furthermore, we verified the validity of the presented model by performing a quantum mechanical density matrix calculation (not presented here) which agrees to within one percent with the semiclassical results. The small deviation can be attributed to the occurrence of small oscillator quantum numbers $n_{\rm osc} \simeq 5$ in the stiff direction of the trap. Note that, strictly speaking, our model of a time-averaged differential light shift is only correct if the atom carries out an integer number of oscillation periods in the trap between the two $\pi/2$-pulses. However, we have checked that the variable phase accumulated during the remaining fraction of an oscillation period does not cause a measurable reduction of the Ramsey fringe contrast and can therefore be neglected. Since a hot atom experiences a lower laser intensity than a cold one, its averaged differential light shift is smaller. The energy distribution of the atoms in the dipole trap obeys a three-dimensional Boltzmann distribution with probability density \cite{Metcalf,korrMaxwell} \begin{equation}\label{e:Boltzmann} p(E) = \frac{E^2}{2(k_{\rm B}T)^{3}}\, \exp{\left(-\frac{E}{k_{\rm B}T}\right)}. \end{equation} Here $E=E_{\rm kin}+U$ is the sum of kinetic and potential energy. In a harmonic trap the virial theorem states that the average potential energy is half the total energy, $U = E/2$. Thus, the average differential light shift for an atom with energy $E$ is given by: \begin{equation}\label{e:deltaLS} \delta_{\rm ls}(E) = \delta_0 + \frac{\eta E}{2\hbar} \end{equation} where $\delta_0<0$ is the maximum differential light shift. As a consequence, the energy distribution $p(E)$ yields, except for a factor and an offset, an identical distribution $\tilde{\alpha}(\delta_{\rm ls})$ of differential light shifts \cite{korrMaxwell}: \begin{equation}\label{e:distrDiffLS} \tilde{\alpha}(\delta_{\rm ls}) = \frac{K^3}{2} (\delta_{\rm ls}-\delta_{\rm 0})^2 \exp{[-K(\delta_{\rm ls}-\delta_{\rm 0})]} \end{equation} with \begin{equation} K = \frac{2\hbar}{\eta k_{\rm B}T}. \end{equation} Note that this distribution is only valid in the regime $k_{\rm B}T\ll U_0$, since the virial theorem was applied for the case of a harmonic potential. 
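As a small numerical cross-check (not from the original text), the scaling relation (\ref{e:differentialLS}) reproduces the quoted maximum differential light shift of $-2\pi\times 3.0$~kHz for a $1$~mK trap, and the distribution (\ref{e:distrDiffLS}) can be evaluated for an assumed atom temperature; the value $T=50$~\textmu K used below is a hypothetical choice for illustration only.
\begin{verbatim}
import numpy as np
from scipy.constants import hbar, k as k_B

eta = 1.45e-4                       # scaling factor eta = omega_hfs / Delta_eff

# Maximum differential light shift at the bottom of a U_0 = 1 mK trap
U0 = k_B * 1e-3                     # trap depth in joules
delta_0 = -eta * U0 / hbar          # rad/s (negative shift)
print("delta_0 / 2pi = %.2f kHz" % (delta_0 / (2 * np.pi * 1e3)))  # approx -3.0 kHz

# Distribution of differential light shifts for an assumed temperature
T = 50e-6                           # hypothetical atom temperature (K)
Kc = 2 * hbar / (eta * k_B * T)
d = np.linspace(delta_0, delta_0 + 12 / Kc, 1000)
alpha_tilde = 0.5 * Kc**3 * (d - delta_0)**2 * np.exp(-Kc * (d - delta_0))
print("normalization: %.3f" % ((d[1] - d[0]) * alpha_tilde.sum()))  # approx 1
\end{verbatim}
This asymmetric distribution is the one whose Fourier transform governs the decay of the Ramsey fringes discussed below.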
To model the action of the Ramsey pulse sequence, we express the solutions of Eq.~(\ref{e:BlochVector}) as rotation matrices acting on the Bloch vector. The Ramsey sequence then reads \begin{equation}\label{e:Ramsey} \boldsymbol u_{\rm Ramsey}(t) = \Theta_{\pi/2} \cdot \Phi_{\rm free}(\delta,t) \cdot \Theta_{\pi/2} \cdot \boldsymbol u_0, \end{equation} with the matrices describing the action of a $\pi/2$-pulse, \begin{equation}\label{e:Rot_PiOver2} \Theta_{\pi/2} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0\ \end{pmatrix}, \end{equation} and the free precession around the $w$-axis with angular frequency $\delta$ during a time interval $t$, \begin{equation}\label{e:rot_free} \Phi_{\rm free}(\delta,t) =\begin{pmatrix} \ \,\,\, \cos \phi(\delta,t) & \sin \phi(\delta,t) & 0 \\ -\sin \phi(\delta,t) & \cos \phi(\delta,t) & 0 \\ 0 & 0 & 1 \ \end{pmatrix}. \end{equation} The total precession angle, $\phi(\delta, t)$, represents the accumulated phase during the free evolution of the Bloch vector, $\phi(t) = \int_0^t \delta(t') dt'$. The detuning $\delta (t)$ may in general vary spatially and in time, depending on the energy shifts of the atomic levels. If the Bloch vector is initially in the state $\boldsymbol u_0 = (0,0,-1)$, we obtain from Eq.~(\ref{e:Ramsey}): \begin{equation}\label{e:w_Ramsey} w_{\rm Ramsey}(t) = \cos \delta t, \end{equation} where $\delta = \omega-\omega_0$ is the detuning of the microwave radiation with frequency $\omega$ from the atomic resonance $\omega_0$. In order to see the Ramsey fringes, we purposely shift $\omega$ with respect to the ground state hyperfine splitting, $\omega_{\rm hfs}$, by a small detuning, $\delta_{\rm synth}$, set at the frequency synthesizer \begin{equation} \omega = \omega_{\rm hfs}+\delta_{\rm synth}. \end{equation} The atomic resonance frequency $\omega_0$ is modified due to external perturbations \begin{equation} \omega_0=\omega_{\rm hfs}+\delta_{\rm ls} + \delta_{\rm B}, \end{equation} where $\delta_{\rm ls}$ is the energy dependent differential light shift, $\delta_{\rm B}$ is the quadratic Zeeman shift. Now, the inhomogeneously broadened Ramsey signal is obtained by averaging over all differential light shifts $\delta_{\rm ls}$: \begin{equation}\label{e:Ramseydeph} \begin{array}{ll} \!\!\! \!\!\! w_{\rm Ramsey, inh}(t) = & \displaystyle\!\!\! \int^{ \infty}_{\delta_{\rm 0}} \!\!\! \tilde{\alpha}(\delta_{\rm ls})\\ & \!\!\! \times\cos\left[(\delta_{\rm synth} - \delta_{\rm B} - \delta_{\rm ls}) t\right] d\delta_{\rm ls}. \end{array} \end{equation} Eq.~(\ref{e:Ramseydeph}) shows that the shape of the Ramsey fringes is the Fourier(-Cosine)-Transform of the atomic energy distribution. Note that in the above integral we have set the upper integration limit to $\infty$, instead of the maximum physically reasonable value, $\delta_0/2$, to obtain the analytic solution \begin{equation}\label{e:w_analyt} w_{\rm Ramsey, inh}(t) = \alpha(t,T_2^*)\, \cos{\left[\delta\mbox{$^\prime$} t + \kappa(t,T_2^*)\right]}, \end{equation} with the sum of the detunings, \begin{equation} \delta\mbox{$^\prime$} = \delta_{\rm synth} - \delta_{\rm B}-\delta_{\rm 0} \end{equation} and a time- dependent amplitude $\alpha(t,T_2^*)$ and phase shift $\kappa (t,T_2^*)$ \cite{korrMaxwell} \begin{equation}\label{e:alpha(t)} \alpha(t,T_2^*) = \left[1+0.95\left(\frac{t}{T_2^*}\right)^2\right]^{-3/2} \end{equation} and \begin{equation}\label{e:kappa(t)} \kappa (t,T_2^*) = -3\arctan\left(0.97 \frac{t}{T_2^*}\right). 
\end{equation} Despite this non-exponential decay, we have introduced the inhomogeneous or reversible dephasing time $T_2^*$ as the 1/e-time of the amplitude $\alpha(t)$: \begin{equation}\label{e:def_T2*} T_2^* = \sqrt{e^{2/3}-1}\, K = 0.97 \frac{2\hbar}{\eta k_{\rm B}T}. \end{equation} Thus, the reversible dephasing time $T_2^*$ is inversely proportional to the temperature of the atoms. The phase shift $\kappa(t,T_2^*)$ arises due to the asymmetry of the probability distribution $\tilde{\alpha}(\delta_{\rm ls})$. The hot atoms in the tail of the energy distribution dephase faster than the cold atoms, due to their larger spread. The fact that these hot atoms no longer contribute to the Ramsey signal results in a weighting of the mean $\delta_{\rm ls}$ towards larger negative values. To fit our experimental data, we derive the following expression from Eq.~(\ref{e:w_analyt}), \begin{eqnarray}\label{e:FITp3Ramsey} P_{\rm 3,Ramsey} (t)&&=B + \alpha(t,T_2^*)\nonumber\\ && \times A\cos{\left[ \delta\mbox{$^\prime$} t + \kappa(t,T_2^*) + \varphi\right]} , \end{eqnarray} where the amplitude $A$ and the offset $B$ account for the imperfections of state preparation and detection. The other fit parameters are $\delta\mbox{$^\prime$}$, $T_2^*$, and a phase offset $\varphi$. \begin{table}[!t] \begin{center} \begin{tabular}{|c|rcrl|rcrl|} \cline{2-9} \multicolumn{1}{c|}{} & \multicolumn{4}{|c|}{\rule[-2mm]{0mm}{6mm}Fig.~\ref{fig:Ramsey_Experiment}(a)} & \multicolumn{4}{|c|}{Fig.~\ref{fig:Ramsey_Experiment}(b)} \\ \hline $U_0 \mbox{ (est.)}$ & \rtabentry{0.1 mK}{} & \rtabentry{0.04 mK}{} \\ $\delta_{\rm synth}/2\pi$ & \rtabentry{2250~Hz}{} & \rtabentry{1050~Hz}{} \\ \hline $A$ & \tabentry{28.7}{0.5}{\%} & \tabentry{13.6}{0.1}{\%}\\ $B$ & \tabentry{30.5}{0.1}{\%} & \tabentry{13.8}{0.1}{\%} \\ $\delta\mbox{$^\prime$}/2\pi$ & \tabentry{2133.7}{1.5}{Hz} & \tabentry{722.5}{0.5}{Hz}\\ $\varphi$ & \tabentry{0.35}{0.02}{} & \tabentry{0.13}{0.03}{} \\ $T_2^*$ & \tabentry{4.4}{0.1}{ms} & \tabentry{20.4}{0.6}{ms} \\ \hline \end{tabular} \end{center} \caption{Fit parameters extracted from the Ramsey fringes of Fig.~\ref{fig:Ramsey_Experiment} using Eq.~(\ref{e:FITp3Ramsey}).\label{tab:FitRamsey}} \end{table} The corresponding fits are shown in Fig.~\ref{fig:Ramsey_Experiment} and the resulting fit parameters are summarized in Table~\ref{tab:FitRamsey}. For the two graphs, the maximum population detected in $F=3$, $P_{3,{\rm max}}=A+B$ is only about 60\% and 30\%, respectively. The reduction to 60\% in Fig.~\ref{fig:Ramsey_Experiment}(a) is again due to imperfections in the optical pumping process and due to losses by inelastic collisions, as discussed in Sec.~\ref{ch:ExpRabiOsz}. The additional reduction in Fig.~\ref{fig:Ramsey_Experiment}(b) occurs during the lowering of the trap to $U_0=0.04$~mK, where another 50\% of the atoms are lost. Note, however, that the fringe visibility \begin{equation}\label{e:DefVisibility} V =\frac{A}{B} \end{equation} is not impaired by these imperfections. From the fit parameters we obtain $V=0.97\pm0.01$ and $V=1.00^{+0}_{-0.03}$ for the two cases. As a check of consistency, we calculate the differential light shift $\delta_0$ from the fitted detuning $\delta\mbox{$^\prime$}$ and the experimental values of $\delta_{\rm B}$ and $\delta_{\rm synth}$, \begin{equation}\label{e:delta0FromFit} \delta_0=\delta_{\rm synth}-\delta_{\rm B}-\delta\mbox{$^\prime$}. 
\end{equation} The calculated quadratic Zeeman shift in the externally applied guiding field of $B=97.9\pm1.5$~\textmu T is $\delta_{\rm B}/2\pi=412\pm13$~Hz, where the error is due to the uncertainty of the calibration. We obtain $\delta_0/2\pi=-268\pm13$~Hz and $\delta_0/2\pi=-78\pm13$~Hz. From the values of $\delta_0$ we can formally deduce the potential depth corresponding to $U_0=0.090\pm0.004$~mK and $U_0=0.026\pm0.004$~mK, which almost match the expected trap depths estimated from the dipole trap laser power assuming purely linear polarization. The discrepancy for the lowest trap depth could arise from the fact that the energy distribution is truncated at $E = U_0$, since we have lost the atoms with the highest energy during the lowering of the trap. This truncation will reduce the effective $\delta_0$ and thus yield a smaller trap depth. Finally, the phase offset $\varphi$ occurs because the Bloch vector precesses around the $w$-axis even during the application of the two $\pi/2$-pulses. In contrast, our ansatz of Eq.~(\ref{e:Ramsey}) takes into account only the free precession in between the two pulses. The additional precession angle amounts to $\varphi = 2\,t_{\pi/2}\,\delta\mbox{$^\prime$}$. Given $t_{\pi/2}=16$~\textmu s and the fitted value of $\delta\mbox{$^\prime$}$ we obtain $\varphi=0.42$ for Fig.~\ref{fig:Ramsey_Experiment}(a) and $\varphi=0.14$ for Fig.~\ref{fig:Ramsey_Experiment}(b), which is close to the fitted values of Table~\ref{tab:FitRamsey}. \begin{figure*} \caption{Spin echoes. Shown are spin echoes recorded for three different trap depths, (a) $U_0=1.0$~mK, (b) $0.1$~mK and (c) $0.04$~mK. We observe a decrease of the maximum spin echo amplitude with increasing time of the $\pi$-pulse, with longer decay times in lower trap depths. All spin echoes are fitted using Eq.~(\ref{e:FITp3Echo}). In (a) and (b), the first curve is a Ramsey signal, recorded with otherwise identical parameters. } \label{fig:LotsOfEchoes} \end{figure*} \section{Homogeneous dephasing mechanisms}\label{sec:echo} \subsection{Spin echoes} The inhomogeneous dephasing can be reversed using a spin echo technique, i.~e.~by applying an additional $\pi$-pulse between the two Ramsey $\pi/2$-pulses. Although originally invented in the field of nuclear magnetic resonance \cite{Hahn50}, this technique was recently also employed in optical dipole traps \cite{Andersen03}. We recorded echo signals in three different trap depths, $U_0=1.0$~mK, $0.1$~mK and $0.04$~mK for different times of the $\pi$-pulse, $\tau_\pi$ (see Fig.~\ref{fig:LotsOfEchoes}). We observe that the visibility of the echo signals decreases if we increase $\tau_\pi$. A slower decrease of the visibility is obtained in lower traps. For $U_0=0.04$~mK, $\tau_\pi=200$~ms, we even observed oscillations that reappear at $t=400$~ms. In order to interpret these results, we first model the action of the microwave pulses for the spin echo, similar to the discussion in Sec.~\ref{sec:ramsey}. After the first $\pi/2$-pulse at $t=0$, all Bloch vectors start at $\boldsymbol u(0)=(0,-1,0)$. Due to inhomogeneous dephasing, the Bloch vectors rotate at slightly different frequencies around the $w$-axis. A $\pi$-pulse at time $\tau_\pi$ rotates the ensemble of Bloch vectors around the $u$ axis by $180^\circ$ and induces a complete rephasing at $2\tau_\pi$ in the state $\boldsymbol u(2\tau_\pi)=(0,1,0)$. 
The corresponding matrix equation reads \begin{eqnarray}\label{e:SpinEcho_Matrix} \boldsymbol u_{\rm echo}(t) &=& \Theta_{\pi/2} \cdot \Phi_{\rm free}(\delta,t-\tau_\pi) \cdot \Theta_{\pi} \cdot \nonumber \\ && \cdot \Phi_{\rm free}(\delta,\tau_{\pi}) \cdot \Theta_{\pi/2} \cdot \boldsymbol u_0, \end{eqnarray} where we defined \begin{equation}\label{e:Rot_Pi} \Theta_{\pi} = \begin{pmatrix} 1 & 0 & 0 \\ 0 &-1 & 0 \\ 0 & 0 & -1\ \end{pmatrix}. \end{equation} Here, $\tau_\pi$ is the time between the first $\pi/2$- and the $\pi$-pulse, and $t>\tau_\pi$ is the time of the second $\pi/2$ pulse. As a result of Eq.~(\ref{e:SpinEcho_Matrix}), we obtain \begin{equation}\label{e:w_echo} w_{\rm echo}(t) = - \cos[\delta (t-2\tau_{\pi})]. \end{equation} We calculate the shape of the inhomogeneously broadened echo signal, $w_{\rm echo, inh}(t)$, by integrating over all differential light shifts $\delta_{\rm ls}$ \begin{eqnarray}\label{e:Echodeph_Int} w_{\rm echo,inh}(t) &=&- \int^{\infty}_{\delta_{\rm 0}}\! \!\!\! \tilde{\alpha}(\delta_{\rm ls})\times\nonumber\\ &&\hspace{-1.5cm}\times \cos \left[ (\delta_{\rm synth} - \delta_{\rm ls} -\delta_{\rm B}) (t-2\tau_\pi)\right] d\delta_{\rm ls}. \rule[-7mm]{7mm}{0mm} \end{eqnarray} The integration yields a result similar to Eq.~(\ref{e:Ramseydeph}), \begin{eqnarray}\label{e:echoAnalyt} w_{\rm echo, inh}(t) &=& - \alpha(t-2\tau_\pi)\nonumber\\ &&\!\!\times \cos{\left[\delta\mbox{$^\prime$}(t-2\tau_\pi) + \kappa(t-2\tau_\pi)\right]}, \end{eqnarray} with amplitude $\alpha(t)$ and phase shift $\kappa(t)$ as defined in Eqs.~(\ref{e:alpha(t)}) and (\ref{e:kappa(t)}). Eq.~(\ref{e:echoAnalyt}) shows that the amplitude of the echo signal regains its maximum at time $2\tau_{\pi}$. Finally, the population in $F=3$ reads: \begin{eqnarray}\label{e:FITp3Echo} P_{3, \rm echo}(t) &=& B- \alpha(t-2\tau_\pi,T_2^*)\nonumber\\ &&\hspace{-1.8cm} \times A\cos{\left[\delta\mbox{$^\prime$} (t\!-2\tau_\pi)+\kappa(t-2\tau_\pi,T_2^*)+\psi\right]}. \end{eqnarray} This equation is used to extract dephasing times $T_2^*$ from all spin echoes of Fig.~\ref{fig:LotsOfEchoes}. The average values are listed in Table~\ref{tab:times}, where $T_2^*$ was obtained by averaging over the respective datasets. From the amplitude, $A$, and offset, $B$, of each echo signal we calculate the visibility $V=A/B$, plotted in Fig.~\ref{fig:ContrastOfSpinEcho} as a function of $\tau_{\pi}$. The phase shift $\psi$ accounts for slow systematic phase drifts during the spin echo sequence. \begin{table}[!t] \begin{center} \begin{tabular}{|c|rcrl|rcrl|rcrl|} \cline{2-13} \multicolumn{1}{c|}{} & \multicolumn{4}{|c|}{\rule[-2mm]{0mm}{6mm}Fig.~\ref{fig:LotsOfEchoes}(a)} & \multicolumn{4}{|c|}{Fig.~\ref{fig:LotsOfEchoes}(b)} & \multicolumn{4}{|c|}{Fig.~\ref{fig:LotsOfEchoes}(c)} \\ \hline $U_0 \mbox{ (est.)}$ & \rtabentry{1.0 mK}{} & \rtabentry{0.1 mK}{} & \rtabentry{0.04 mK}{} \\ $T_2^*$ & \tabentry{0.86}{0.05}{ms} & \tabentry{2.9}{0.1}{ms} & \tabentry{18.9}{1.7}{ms}\\ $T_2\mbox{$^\prime$}$ & \tabentry{10.2}{0.4}{ms} & \tabentry{33.9}{1.0}{ms} & \tabentry{146.2}{6.6}{ms}\\ $T_1$ (calc.) & \ctabentry{8.6~s}{} & \ctabentry{86~s}{} & \ctabentry{220~s}{}\\ \hline \end{tabular} \end{center} \caption{Summary of dephasing times. $T_2^*$ and $T_2\mbox{$^\prime$}$ are obtained from the echo signals of Fig.~\ref{fig:LotsOfEchoes}. $T_1=\Gamma_{\rm Raman}^{-1}$ is calculated using Eq.~(\ref{e:RamanVSRayleigh}). }\label{tab:times} \end{table} So far, we considered the detuning as constant during the experimental sequence. 
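Before relaxing the assumption of a constant detuning, we note that the matrix algebra of Eqs.~(\ref{e:Ramsey}) and (\ref{e:SpinEcho_Matrix}) is easily checked numerically. The following Python sketch (for illustration only; the values of $\delta$, $t$ and $\tau_\pi$ are arbitrary examples) composes the pulse and free-precession matrices and reproduces Eqs.~(\ref{e:w_Ramsey}) and (\ref{e:w_echo}):
\begin{verbatim}
import numpy as np

def theta(angle):
    # Rotation of the Bloch vector about the u-axis by 'angle'
    # (pi/2 and pi give the pulse matrices quoted in the text).
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def phi_free(delta, t):
    # Free precession about the w-axis by the angle delta*t.
    c, s = np.cos(delta * t), np.sin(delta * t)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

u0 = np.array([0, 0, -1])
delta, t, tau = 2 * np.pi * 500.0, 3.7e-3, 1.2e-3   # example values

w_ramsey = (theta(np.pi/2) @ phi_free(delta, t) @ theta(np.pi/2) @ u0)[2]
w_echo = (theta(np.pi/2) @ phi_free(delta, t - tau) @ theta(np.pi)
          @ phi_free(delta, tau) @ theta(np.pi/2) @ u0)[2]

print(np.isclose(w_ramsey, np.cos(delta * t)))             # True
print(np.isclose(w_echo, -np.cos(delta * (t - 2 * tau))))  # True
\end{verbatim}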
We now include in our model a time-varying detuning, $\delta(t)$, in order to account for a stochastic variation of the precession angles of the Bloch vector, \begin{equation} \phi_1=\int_0^{\tau_\pi}\delta(t)\,dt \quad\mbox{and}\quad \phi_2=\int_{\tau_\pi}^{2\tau_\pi}\delta(t)\,dt, \end{equation} before and after the $\pi$-pulse. The phase difference $\phi_2-\phi_1$ is expressed as a mean difference of the detuning, \begin{equation} \Delta\delta = \frac{\phi_2-\phi_1}{\tau_\pi}. \end{equation} The Bloch vector at time $2\tau_\pi$, when the inhomogeneous dephasing has been fully reversed, reads \begin{eqnarray}\label{e:SpinEcho_Matrix_DeltaDelta} \boldsymbol u_{\rm echo}(\Delta\delta, 2\tau_\pi) &=& \Theta_{\pi/2} \cdot \Phi_{\rm free}(\delta+\Delta\delta,\tau_\pi) \cdot \Theta_{\pi}\cdot\nonumber\\ && \cdot \Phi_{\rm free}(\delta,\tau_{\pi}) \cdot \Theta_{\pi/2} \cdot \boldsymbol u_0, \end{eqnarray} which results in \begin{equation}\label{e:w(2TauPi)} w_{\rm echo}(\Delta\delta, 2\tau_\pi)=-\cos(\Delta\delta\,\tau_\pi). \end{equation} For a Gaussian distribution of fluctuations with mean $\overline{\Delta\delta}=0$ and variance $\sigma(\tau_\pi)^2$, \begin{equation}\label{e:p(Deltadelta)} p_{\tau_\pi}(\Delta\delta)=\frac{1}{\sigma(\tau_\pi)\sqrt{2\pi}} \exp{\left[-\frac{(\Delta\delta)^2}{2\sigma(\tau_\pi)^2}\right]}, \end{equation} the average $w$-component of the Bloch vector is obtained as \begin{eqnarray}\label{e:w(2TauPi)Integration} w_{\rm echo,hom}(2\tau_\pi)&=&\int_{-\infty}^\infty -\cos\left(\Delta\delta\,\tau_\pi\right)\, p_{\tau_\pi}(\Delta\delta)\, d\Delta\delta\nonumber\\ &=&-\exp\left[-\tfrac{1}{2}\tau_\pi^2\sigma(\tau_\pi)^2\right]. \end{eqnarray} The spin-echo visibility, $V$, therefore decays as \begin{equation}\label{e:FitContrastOfSpinEchoTau} V(2\tau_\pi) = V_0 \, \exp{\left[-\tfrac{1}{2}\tau_\pi^2\,\sigma(\tau_\pi)^2\right]}. \end{equation} \begin{figure} \caption{Decay of the spin echo visibility, extracted from the signals of Fig.~\ref{fig:LotsOfEchoes}. The fits (red lines) are the Gaussians of Eq.~(\ref{e:FitContrastOfSpinEcho}). The dashed and dotted lines are the best and worst case predictions inferred from the measured pointing instability of the trapping laser shown in Fig.~\ref{fig:PointingInstabilities}. } \label{fig:ContrastOfSpinEcho} \end{figure} For comparison with the experimental values, we fit the spin echo visibility of Fig.~\ref{fig:ContrastOfSpinEcho} with a Gaussian, \begin{equation}\label{e:FitContrastOfSpinEcho} V(2\tau_\pi) = C_0 \exp{\left[-\frac{1}{2}\tau_\pi^2\sigma_{\rm exp}^2\right]} \end{equation} with a time-independent detuning fluctuation $\sigma_{\rm exp}$. We define the homogeneous dephasing time $T_2\mbox{$^\prime$}$ as the $1/e$ decay time of the spin echo visibility: \begin{equation}\label{e:DefT2Prime} V(2T_2\mbox{$^\prime$}) =C_0e^{-1}\quad\Rightarrow\quad T_2\mbox{$^\prime$} = \frac{\sqrt{2}}{\sigma_{\rm exp}}. \end{equation} \subsection{Origins of irreversible dephasing}\label{subsec:originsOfDephasing} Candidates for irreversible dephasing mechanisms include (1) intensity fluctuations and (2) pointing instability of the dipole trap laser, (3) heating of the atoms, (4) fluctuating magnetic fields, (5) fluctuations of the microwave power and pulse duration, and (6) spin relaxation due to spontaneous Raman scattering from the dipole trap laser.
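Before turning to the individual mechanisms, note that Eq.~(\ref{e:DefT2Prime}) sets the scale of the detuning fluctuations required to explain the observed decay: for the deepest trap, the measured $T_2\mbox{$^\prime$}=10.2$~ms corresponds to $\sigma_{\rm exp} = \sqrt{2}/T_2\mbox{$^\prime$} \approx 2\pi\times 22$~Hz. Each candidate mechanism is therefore characterized below by the detuning fluctuation it produces, which is then compared with $\sigma_{\rm exp}$ (see Table~\ref{tab:SummaryDephasingMechanisms}).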
\\ \newlength{\tw} \setlength{\tw}{2.5cm} \begingroup \squeezetable \begin{table}[!h] \begin{center} \begin{tabular}{|c|c|rcrl|rcrl|rcrl|} \cline{2-14} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{$U_0$} & \multicolumn{4}{|c|}{1.0 mK} & \multicolumn{4}{|c|}{0.1~mK} & \multicolumn{4}{|c|}{0.04~mK} \\ \cline{2-14} \cline{2-14} \multicolumn{1}{c|}{} & &&&&&&&&&&&&\\ \multicolumn{1}{c|}{} & $\sigma_{\rm exp}$ (meas.) & 22.0 &$\pm$&0.9&Hz & \tabentry{6.6}{0.2}{Hz} & \tabentry{1.54}{0.07}{Hz} \\ \multicolumn{1}{c|}{} &&&&&&&&&&&&&\\ \hline &&&&&&&&&&&&&\\ (1)&\parbox{\tw}{intensity fluctuations } & \ctabentry{$5.9$}{Hz} & \ctabentry{$0.67$}{Hz} & \ctabentry{$0.17$}{Hz} \\ &&&&&&&&&&&&&\\ \hline &\parbox{\tw}{ pointing instability}&&&&&&&&&&&& \\ (2)& best case & \ctabentry{$10.6$}{Hz} & \ctabentry{$2.4$}{Hz} & \ctabentry{$1.3$}{Hz} \\ & worst case & \ctabentry{$21.6$}{Hz} & \ctabentry{$6.7$}{Hz} & \ctabentry{$3.7$}{Hz} \\ &&&&&&&&&&&&&\\ \hline &&&&&&&&&&&&&\\ (3a)&\parbox{\tw}{heating\\$\sigma^{\rm(3)}_{\rm h}/2\pi$\\(upper limit) } & \ctabentry{5.3}{Hz} & \ctabentry{1.6}{Hz} & \ctabentry{2.0}{Hz}\\ &&&&&&&&&&&&&\\ &&&&&&&&&&&&&\\ (3b)&\parbox{\tw}{photon scattering\\$\sigma_{\rm p}(T_2')/2\pi$} & \ctabentry{4.5}{Hz} & \ctabentry{1.5}{Hz} & \ctabentry{1.4}{Hz}\\ &&&&&&&&&&&&&\\ \hline &&&&&&&&&&&&&\\ (4)&\parbox{\tw}{magnetic field fluctuations\\$\sigma_{\rm b}(T_2\mbox{$^\prime$})/2\pi$} & \ctabentry{1.7}{Hz} & \ctabentry{0.35}{Hz} & \ctabentry{0.17}{Hz} \\ &&&&&&&&&&&&&\\ \hline \end{tabular} \end{center} \caption{Summary of dephasing mechanisms. Shown are the fluctuation amplitudes $\sigma(T_2')/2\pi$.} \label{tab:SummaryDephasingMechanisms} \end{table} \endgroup {\it (1) Intensity fluctuations of the trapping laser.} The intensity fluctuations are measured by shining the laser onto a photodiode and recording the resulting voltage as a function of time. From this signal we calculate $\sigma(\tau_\pi)^2$ by means of the Allan variance, defined as \cite{Allan66}: \begin{equation}\label{e:AllanVar} \sigma_{\rm A}^2(\tau) = \frac{1}{m}\sum_{k=1}^m \frac{(\overline{x}_{\tau,k+1}-\overline{x}_{\tau,k})^2}{2}. \end{equation} Here $\overline{x}_{\tau, k}$ denotes the average of the photodiode voltages over the $k$-th time interval $\tau$, normalized to the mean voltage of the entire dataset. The resulting Allan deviation $\sigma_{\rm A}$ is a dimensionless number which expresses the relative fluctuations. They directly translate into fluctuations $\sigma(\tau)$ of the detuning, \begin{equation}\label{e:sigma(tau)} \sigma(\tau)=\sqrt{2}\delta_0\sigma_{\rm A}(\tau). \end{equation} The factor of $\sqrt{2}$ arises because $\sigma(\tau)$ is the standard deviation of the difference of two detunings with standard deviation $\sigma_{\rm A}(\tau)$ each. The maximum differential light shift $\delta_0$ in Eq.~(\ref{e:sigma(tau)}) is calculated according to Eq.~(\ref{e:delta0FromFit}) using the measured values of $\delta\mbox{$^\prime$}$. As a result we find relative intensity fluctuations of $\sigma_{\rm A}(\tau) < 0.2\%$ (see Fig.~\ref{fig:IntensityFluct}). 
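For reference, the Allan deviation of Eq.~(\ref{e:AllanVar}) can be evaluated from the digitized photodiode trace with a few lines of code. The following Python sketch is an illustration only (the sampling rate and the synthetic noise level are placeholder assumptions, not parameters of our setup):
\begin{verbatim}
import numpy as np

def allan_deviation(x, fs, tau):
    """Allan deviation of the signal x sampled at fs (Hz) for an
    averaging time tau (s), following the definition in the text."""
    n = int(round(tau * fs))           # samples per averaging interval
    m = len(x) // n                    # number of intervals
    xbar = x[:m * n].reshape(m, n).mean(axis=1) / np.mean(x)
    return np.sqrt(np.mean(np.diff(xbar) ** 2) / 2.0)

# Example: synthetic photodiode trace with 0.1% relative noise.
fs = 10e3                              # assumed sampling rate (Hz)
trace = 1.0 + 1e-3 * np.random.randn(int(100 * fs))
for tau in (1e-3, 1e-2, 1e-1, 1.0):
    print(tau, allan_deviation(trace, fs, tau))
\end{verbatim}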
The corresponding absolute fluctuation amplitudes $\sigma(T_2\mbox{$^\prime$})/2\pi$ (shown in Table~\ref{tab:SummaryDephasingMechanisms}) are too weak to account for the observed decay of the spin echo visibility.\\ \begin{figure} \caption{ Allan deviation of the intensity fluctuations according to Eq.~(\ref{e:AllanVar}).} \label{fig:IntensityFluct} \end{figure} {\it (2) Pointing instability of the trapping laser.} Any change of the relative position of the two interfering laser beams also changes the interference contrast, and hence the light shift $\delta_0$. These position shifts can arise due to shifts of the laser beam itself, due to variations of the optical paths e.\,g.~from acoustic vibrations of the mirrors or from air flow. In order to measure the pointing instabilities we mutually detune the two dipole trap beams by $\Delta\nu=10$~MHz using the AOMs and overlap them on a fast photodiode (see Fig.~\ref{fig:PointingInstabilities}(a)). The amplitude of the resulting beat signal directly measures the interference contrast of the two beams and is thus proportional to the depth of the potential wells of the standing wave dipole trap. We used a network analyzer (HP~3589A) operated in ``zero span'' mode to record the temporal variation of the beat signal amplitude within a filter bandwidth of 10~kHz. \begin{figure} \caption{Measuring the pointing instability. (a) The dipole trap beams having a frequency difference of $\Delta\nu=10$~MHz are overlapped on a fast photodiode. (b) Allan deviation of the amplitude of the resulting beat signal.} \label{fig:PointingInstabilities} \end{figure} The resulting Allan deviation of the beat signal amplitudes is shown in Fig.~\ref{fig:PointingInstabilities}(b). The lower curve shows the signal in the case of well overlapped beams, whereas for the upper curve, we purposely misaligned the beams so that the beat signal amplitude is reduced by a factor of 2. In the latter case variations of the relative beam position cause a larger variation of the beat signal amplitude, since the beams overlap on the slopes of the Gaussian profile. These two curves measure the best and the worst cases of the fluctuations. We found that the relative fluctuations for long time scales of $\tau>100$~ms reach up to 3\% in the worst case. They are thus one order of magnitude greater than the variations caused by intensity fluctuations. The frequency fluctuations $\sigma(\tau)$ are again calculated using Eq.~(\ref{e:sigma(tau)}). This result is plotted together with the observed visibility in Fig.~\ref{fig:ContrastOfSpinEcho}. Our data points lie in between these best and worst case predictions.\\ {\it (3a) Heating effects.} Heating processes in the trap can also cause significant irreversible decoherence, since they cause a variation of the atomic resonance frequency within the microwave pulse sequence. A constant heating rate $\dot{E}$ increases the average energy of the atoms for the second free precession interval $[\tau_\pi, 2\tau_\pi]$, compared to the first interval $[0,\tau_\pi]$, by $\dot{E}\tau_\pi$. The energy $E$ of individual atoms, however, can be changed by much more than this average energy gain. To estimate the effect we have to calculate the typical energy change of individual atoms during the free precession interval caused by the fluctuating forces which are responsible for the heating. For this purpose we approximate the trap as harmonic and assume the following model of the heating process. 
Due to a random walk in momentum space, an initial atomic momentum $p=\sqrt{2mE}$ evolves into a symmetric, Gaussian momentum distribution around $p$ with a standard deviation of $\Delta p_{\rm rms}$ given by the average energy gain $\dot{E}\tau_\pi$ \begin{equation} \dot{E}\tau_\pi = \frac{(\Delta p_{\rm rms})^2}{2m}. \end{equation} Assuming $E \gg \dot{E}\tau_\pi$ we can linearly approximate the energy-momentum relationship at $E$. In this approximation the momentum distribution is therefore equivalent to a Gaussian distribution of the energies with \begin{equation}\label{Erms} \Delta E_{\rm rms} = 2\sqrt{E \dot{E} \tau_\pi}. \end{equation} According to Eq. (\ref{e:deltaLS}) the corresponding standard deviation $\sigma_{{\rm heat},E}$ of the detunings $\Delta\delta$ is \begin{equation}\label{DDRMS} \sigma_{{\rm heat},E}(\tau_\pi) =\frac{\eta}{\hbar} \sqrt{E \dot{E} \tau_\pi} , \end{equation} depending on the initial energy E. We now integrate the distribution of the detunings \begin{eqnarray}\label{pdeltaDeltaE} &&\!\!\!\!\!\!\!\!\!\!\! p_E(\Delta\delta) =\nonumber\\ && \!\!\!\!\!\! = \frac{1}{\sqrt{2 \pi}}\frac{1} {\sigma_{{\rm heat},E}(\tau_\pi)} \exp{\left(-\frac{\Delta\delta^2}{2 \bigl( \sigma_{{\rm heat},E}(\tau_\pi) \bigr)^2}\right)} \end{eqnarray} over the initial energy $E$ weighted by the $n$-dimensional thermal energy distribution $p^{(n)}(E) \propto E^{n-1} \exp{\left(-{E}/{k_{B}T}\right)}$: \begin{equation}\label{pII_n} p^{(n)}(\Delta\delta) = \int_0^{\infty} p_E(\Delta\delta) p^{(n)}(E) dE. \end{equation} Finally we obtain the rms detuning fluctuations $\sigma_{\rm heat}$ from the resulting distribution of $\Delta\delta$ as \begin{equation} \left[ \sigma^{(n)}_{\rm heat}(\tau_\pi)\right]^2 = \int_{-\infty}^{\infty} \Delta\delta^2 p^{(n)} (\Delta\delta) d\Delta\delta . \end{equation} Evaluation of $\sigma^{(n)}_{\rm heat}$ for the experimentally relevant time scale $\tau_\pi=T_2'/2$ yields \begin{equation}\label{sigmaHEND} \sigma^{(n)}_{\rm heat} = \frac{\eta}{\hbar} \sqrt{\frac{n}{2} \dot E T_2' k_B T} . \end{equation} Heating effects in our trap have been investigated in detail in Ref.~\cite{Alt02b}. An upper limit for the heating rate of $\dot E = 2\cdot 10^{-2}$~mK/s is obtained from the trap lifetime of 50~s (for $U_0 = 1.0$~mK). When we linearly scale $\dot E$ to our trap depths of $U_0 = 1.0$~mK, $U_0 = 0.1$~mK, and $U_0 = 0.04$~mK, and assume temperatures of $T = 0.1$~mK, $T =0.06$~mK, and $T =0.02$~mK, we obtain fluctuation amplitudes for the 3D-case ($n=3$) of $\sigma^{(3)}_{\rm heat}=5.3$~Hz, $\sigma^{(3)}_{\rm heat}=1.6$~Hz and $\sigma^{(3)}_{\rm heat}=2.0$~Hz, respectively. We stress however, that these values for $\sigma_{\rm heat}$ are upper limits since we did not measure heating rates $\dot E$ for the trap depths we used. The actual values of $\dot E$ and the resulting values for $\sigma_{\rm heat}$ could be orders of magnitude smaller than the upper limits inferred from the life time because the heating rate strongly depends on the oscillation frequencies and the details of the laser noise spectrum.\\ {\it (3b) Photon recoil.} Our model of the heating process also gives an estimate of the dephasing due to photon recoil. If we had one photon scattered per time interval $\tau_\pi$ giving two recoils, we would obtain a heating rate \begin{equation}\label{Edot} \dot E = \frac{\hbar^2 k^2}{m}\frac{1}{\tau_\pi}. 
\end{equation} Inserting into Eq.~(\ref{sigmaHEND}) ($n=3$) yields \begin{equation}\label{sigmaH1D1PH} \sigma_{\rm heat}^{\rm 1 ph} = \eta k \sqrt {\frac{3 k_B T}{m}} . \end{equation} Scattering of $n_{\rm ph}$ photons would yield \begin{equation}\label{sigmaH1D1PHn} \sigma_{\rm ph}(n_{\rm ph}) = \sqrt{n_{\rm ph}} \,\sigma_{\rm heat}^{\rm 1 ph}. \end{equation} Given a scattering rate $\Gamma_{\rm s}$, the number of scattered photons obeys a Poissonian distribution. Since for our parameters, the probability of scattering more than one photon is negligible, we obtain \begin{equation}\label{deDZ} \sigma_{\rm ph}(\tau_\pi) = \eta k \sqrt {\frac{3 k_B T \Gamma_{\rm s}\tau_\pi}{m}} \exp{\left(-\frac{\Gamma_{\rm s} \tau_\pi}{2} \right)} . \end{equation} We use the temperatures of the previous paragraph and the photon scattering rates (see below) of $\Gamma_{\rm s} =10.6$~s$^{-1}$, $\Gamma_{\rm s} = 1.06$~s$^{-1}$ and $\Gamma_{\rm s} =0.41$~s$^{-1}$. With $\tau_\pi=T_2'/2$ we obtain $\sigma_{\rm ph}(T_2')=4.5$~Hz, $\sigma_{\rm ph}(T_2')=1.5$~Hz and $\sigma_{\rm ph}(T_2')=1.4$~Hz, respectively.\\ {\it (4) Fluctuating magnetic fields.} Using a fluxgate magnetometer we measured a peak-to-peak value of the magnetic field fluctuations of $\Delta B=0.13~\mu$T, dominated by components at $\nu=50$~Hz. The resulting frequency shift on the \ket{F=4,m_F=0} $\rightarrow$ \ket{F=3,m_F=0} transition is: \begin{equation}\label{e:BField} \Delta \omega=2\,\Delta\omega_{0\rightarrow0}\,B_0\, \Delta B, \end{equation} where \mbox{$B_0=97.9~\mu$T} is the offset field and $\Delta\omega_{0\rightarrow0}/2\pi = 43$~mHz/($\mu$T)$^2$ is the quadratic Zeeman shift. For our case, we obtain $\Delta\omega=1.1$~Hz. The effect of the magnetic fluctuations depends on the time interval between the microwave pulses. If this time is large compared to $1/\nu$, all fluctuations cancel except for those of the last oscillation period. As a consequence, the effect on the detuning fluctuations $\sigma$ also decreases. We calculate this effect by computing the Allan deviation $\sigma_{\rm A}(\tau)$ of a 50~Hz sine signal. The detuning fluctuations then read $\sigma_{\rm b}(\tau)=\sqrt{2}\Delta \omega\,\sigma_{\rm A}(\tau)$. The resulting $\sigma_{\rm b}(T_2\mbox{$^\prime$})$, shown in Table~\ref{tab:SummaryDephasingMechanisms}, is too small to account for the decay of the spin echo amplitude.\\ {\it (5) Fluctuation of microwave power and pulse durations.} The application of two $\pi/2$-pulses and one $\pi$-pulse results in $w_{\rm echo}(2\tau_\pi)=-1$. Any fluctuations of the amplitude ($\Delta\Omega_{\rm R}/\Omega_{\rm R}$) or pulse duration ($\Delta \tau/\tau$) result in variations of the amplitude of the spin echo signal, i.\,e.~$w_{\rm echo}(2\tau_\pi)=-\cos\Delta\phi$ according to: \begin{equation}\ \left(\frac{\Delta\phi}{2\pi}\right)^2 = \left(\frac{\Delta\Omega_{\rm R}}{\Omega_{\rm R}}\right)^2 + \left(\frac{\Delta \tau}{\tau}\right)^2 \end{equation} With $\Delta \tau/\tau < 10^{-3}$ (measured) and $\Delta\Omega_{\rm R}/\Omega_{\rm R}<10^{-2}$ (specifications of the synthesizer) we obtain $\Delta\phi/2\pi < 10^{-2}$, which is too small to be observed. Moreover, this effect neither depends on the dipole trap depth nor on the time delay between the microwave pulses. The timing of the microwave pulses would be affected by a clock inaccuracy of the D/A-board of the computer control system which triggers the microwave pulses. 
Its specified accuracy of $\Delta\tau/\tau=10^{-4}$ results in a phase fluctuation $\delta\mbox{$^\prime$}\tau_\pi \,\Delta\tau/\tau < 0.01$ for all parameters $\delta\mbox{$^\prime$}$ and $\tau_\pi$ used in our experiment. Thus, the fluctuations of microwave power, pulse duration, and timing do not account for the observed reduction of the spin echo visibility. \\ {\it (6) Spin relaxation due to light scattering.} The population decay time, $T_1$, is governed by the scattering of photons from the dipole trap laser, which couples the two hyperfine ground states via a two-photon Raman transition. In our case, the hyperfine-changing spontaneous Raman processes are strongly reduced due to a destructive interference of the transition amplitudes. Thus, the spin relaxation rate is much smaller than the spontaneous scattering rate. This effect was first observed on optically trapped rubidium atoms in the group of D.~Heinzen \cite{Cline94} and was also verified in experiments in our group \cite{Frese00}. The corresponding transition rate is calculated by means of the Kramers-Heisenberg formula \cite{Loudon}, which is a result from second-order perturbation theory. We obtain for the rate of spontaneous transitions, $\Gamma_{\rm s}$, from the ground state $\ket{F,m}$ to the ground state $\ket{F'',m''}$: \begin{equation}\label{e:KramersHeisenberg} \Gamma_{\rm s} = \frac{3c^2\omega_{\rm L}^3I}{4\hbar\, d^4} \left| \frac{a^{\mbox{\tiny (1/2)}}}{\Delta_{\rm 1/2}}+ \frac{a^{\mbox{\tiny (3/2)}}}{\Delta_{\rm 3/2}} \right|^2, \end{equation} where $\Delta_{\rm J'}=\omega_{\rm L}-\omega_{\rm J'}$ is the detuning of the dipole trap laser from the $^6\!P_{\rm J'}$-state and $d=\bra{4,4}\mu_{+1}\ket{5,5}$, with the dipole operator $\mu_{+1}$ for $\Delta m = +1$ transitions. The transition amplitudes $a^{\mbox{\tiny (J')}}$ are obtained by summing over all possible intermediate states \ket{F',m'} of the relevant $^6\!P_{\rm J'}$ manifold \cite{Cline94}. For Rayleigh scattering processes, which do not change the hyperfine state ($(F,m)=(F'',m'')$), the amplitudes add up, $a^{\mbox{\tiny (3/2)}}=2a^{\mbox{\tiny (1/2)}}$. However, for state-changing Raman processes ($(F,m) \neq (F'',m'')$), the two transition amplitudes are equal but have opposite sign, $a^{\mbox{\tiny (3/2)}}=-a^{\mbox{\tiny (1/2)}}$. Then the two terms in Eq.~(\ref{e:KramersHeisenberg}) almost cancel in the case of far detuning, $\Delta_{1/2}\approx\Delta_{3/2}$. As a result the spontaneous Raman scattering rate scales as $1/\Delta^4$ whereas the Rayleigh scattering rate scales as $1/\Delta^2$. The suppression factor can be expressed using the fine structure splitting $\Delta_{\rm fs}=\Delta_{\rm 3/2}-\Delta_{\rm 1/2}$ as \begin{equation}\label{e:RamanVSRayleigh} \Gamma_{\rm Raman} = \beta\,\Gamma_{\rm s} \qquad \mbox{with} \quad \beta = \left| \frac{\Delta_{\rm fs}}{3\Delta_{\rm 1/2}} \right|^2. \end{equation} For the case of cesium, we obtain a suppression factor of $\beta = 0.011$. The Rayleigh scattering rate for an atom trapped in a potential of $U_0=1.0$~mK is $\Gamma_{\rm s}=11$~s$^{-1}$. Then, the corresponding spontaneous Raman scattering rate is $\Gamma_{\rm Raman}=0.12$~s$^{-1}$ and the population decay time $T_1 = \Gamma_{\rm Raman}^{-1} = 8.6$~s. Since in most of our experiments the trap depth is significantly smaller, $T_1$ will be even larger. As a consequence, we neglect the population decay due to spontaneous scattering. Note that the experiments of Refs.
\cite{Cline94,Frese00} were only sensitive to changes of the hyperfine $F$-state, since the atoms were in a mixture of $m_F$-sublevels. However, the theoretical treatment above predicts similarly long relaxation times for any particular $m_F$-sublevel, which is consistent with our observations. \\ \subsection{Conclusions} We have developed an analytical model which treats the various decay mechanisms of the hyperfine coherence of trapped cesium atoms independently. This is justified by the very different time scales of the decay mechanisms ($T_2^*\ll T_2\mbox{$^\prime$}\ll T_1$). Our model reproduces the observed shapes of Ramsey and spin echo signals, whose envelopes are the Fourier transform of the energy distribution of the atoms in the trap. The irreversible decoherence rates manifest themselves in the decay of the spin echo visibility and are caused by fluctuations of the atomic resonance frequency in between the microwave pulses. In the above analysis we have investigated various dephasing mechanisms and characterized them by the corresponding amplitude of the detuning fluctuations which are summarized in Table~\ref{tab:SummaryDephasingMechanisms}. We find that a major mechanism of irreversible dephasing is the pointing instability of the dipole trap laser beams resulting in fluctuations of the trap depth and thus the differential light shift. Significant decoherence is also caused in the shallow dipole trap by heating due to photon scattering. Heating due to technical origin, such as fluctuations of the depth and the position of the trap, cannot be excluded as an additional source of decoherence.\\ Compared to our experiment, significantly longer coherence times ($T_2^*=4$\,s) were observed by N.~Davidson and S.~Chu in blue detuned traps in which the atoms are trapped at the minimum of electric fields \cite{Davidson95}. In Ref.~\cite{Davidson95}, $T_2^*=15$~ms obtained with sodium atoms in a Nd:YAG dipole trap ($U_0=0.4$~mK) was reported, which is comparable to our observation. In other experiments, the inhomogeneous broadening has been reduced by the addition of a weak light field, spatially overlapped with the trapping laser field and whose frequency is tuned in between the two hyperfine levels \cite{Kaplan02}. Of course, cooling the atoms to the lowest vibrational level by using e.\,g.~Raman sideband cooling techniques \cite{Hamann98,Perrin98}, would also reduce inhomogeneous broadening. The magnetic field fluctuations could possibly be largely suppressed by triggering the experiment to the 50~Hz of the power line.\\ \end{document}
\begin{document} \title{Some observations on the FGH theorem} \begin{abstract} We investigate the Friedman--Goldfarb--Harrington theorem from two perspectives. Firstly, in the frameworks of classical and modal propositional logics, we study the forms of sentences whose existence is guaranteed by the FGH theorem. Secondly, we prove some variations of the FGH theorem with respect to Rosser provability predicates. \end{abstract} \section{Introduction} The notion of the weak representability of computably enumerable (c.e.)~sets plays an important role in a proof of G\"odel's first incompleteness theorem. We say that a set $X$ of natural numbers is \textit{weakly representable} in a theory $T$ if there exists a formula $\varphi_X(v)$ such that for any natural number $n$, $n \in X$ if and only if $T \vdash \varphi_X(\overline{n})$. If a set $X$, which is c.e.~but not computable, is weakly representable in a c.e.~theory $T$, then it is shown that there exists a natural number $n$ such that $T \nvdash \varphi_X(\overline{n})$ and $T \nvdash \neg \varphi_X(\overline{n})$. Therefore, such a theory $T$ is incomplete. It is easily shown that every c.e.~set is weakly representable in every $\Sigma_1$-sound c.e.~extension of $\mathsf{PA}$ because of $\Sigma_1$-soundness and $\Sigma_1$-completeness. Furthermore, Ehrenfeucht and Feferman \cite{EF} proved the weak representability of c.e.~sets in every consistent c.e.~extension of $\mathsf{PA}$ (see also Shepherdson \cite{She}). \begin{thm}[Ehrenfeucht and Feferman]\label{EFThm} Let $T$ be any consistent c.e.~extension of $\mathsf{PA}$. Then, every c.e.~set of natural numbers is weakly representable in $T$. \end{thm} Ehrenfeucht and Feferman's theorem is a metamathematical result, but is formalizable and also provable in $\mathsf{PA}$. This fact is called the \textit{FGH theorem} (see Smory\'nski \cite[p.366]{Smo}, Visser \cite[Theorem 4.1]{Vis84}, \cite[Section 3]{Vis05} and Lindstr\"om \cite[Exercise 2.26 (a)]{Lin}). \begin{thm}[The FGH theorem] Let $T$ be any consistent c.e.~extension of $\mathsf{PA}$. Then, for any $\Sigma_1$ sentence $\sigma$, there exists a sentence $\varphi$ such that \[ \mathsf{PA} + \mathrm{Con}_T \vdash \sigma \leftrightarrow \mathrm{Pr}_T(\gn{\varphi}). \] \end{thm} Here $\mathrm{Pr}_T(x)$ is a natural $\Sigma_1$ provability predicate of $T$ which expresses the provability of $T$, and $\mathrm{Con}_T$ is the $\Pi_1$ sentence $\neg \mathrm{Pr}_T(\gn{0=1})$ expressing the consistency of $T$. ``FGH'' stands for Friedman, Goldfarb and Harrington. Harrington and Friedman pointed out that $\varphi$ in the statement is found as $\Pi_1$ and $\Sigma_1$, respectively. For a history of the FGH theorem, see Visser \cite{Vis05}. The FGH theorem has been used in the literature (for example, in papers by the author, it appears in \cite{KK,Kur18}). A version of the FGH theorem with a parameter is also proved. That is, by modifying the usual proof of the FGH theorem, it is proved that for any $\Sigma_1$ formula $\sigma(v)$ with the only free variable $v$, there exists a formula $\varphi(v)$ such that $\mathsf{PA} + \mathrm{Con}_T$ proves $\forall v\, \bigl(\sigma(v) \leftrightarrow \mathrm{Pr}_T(\gn{\varphi(\dot{v})}) \bigr)$. Here $\gn{\varphi(\dot{v})}$ is a primitive recursive term corresponding to a primitive recursive function calculating the G\"odel number of $\varphi(\overline{n})$ from $n$. 
If $T$ is consistent, then the theory $\mathsf{PA} + \mathrm{Con}_T$ is sound, and hence the weak representability of c.e.~sets in $T$ follows from this parameterized version of the FGH theorem. In the present paper, we analyze the FGH theorem from two perspectives. The first perspective is the ``form'' of the sentence $\varphi$ in the statement of the FGH theorem. Let $\mathrm{True}_{\Sigma_1}(v)$ be a partial truth definition for $\Sigma_1$ sentences, that is, $\mathrm{True}_{\Sigma_1}(v)$ is a $\Sigma_1$ formula such that for any $\Sigma_1$ sentence $\sigma$, $\mathsf{PA}$ proves $\sigma \leftrightarrow \mathrm{True}_{\Sigma_1}(\gn{\sigma})$ (cf.~Lindstr\"om \cite[Fact 10]{Lin}). By the parameterized version of the FGH theorem, there exists a formula $\xi(v)$ such that $\mathsf{PA} + \mathrm{Con}_T \vdash \forall v \, \bigl(\mathrm{True}_{\Sigma_1}(v) \leftrightarrow \mathrm{Pr}_T(\gn{\xi(\dot{v})}) \bigr)$. Then, for any $\Sigma_1$ sentence $\sigma$, $\mathsf{PA} + \mathrm{Con}_T \vdash \sigma \leftrightarrow \mathrm{Pr}_T(\gn{\xi(\gn{\sigma})})$. Therefore, $\varphi$ in the FGH theorem can be taken in the form $\xi(\cdot)$ uniformly regardless of $\sigma$. From the above observation, we consider the following question: What form of the sentence $\varphi$ in the FGH theorem can we find? In Section \ref{Classical}, we investigate this question in the framework of classical propositional logic. Specifically, we study the following rephrased question: What is a propositional formula $A$ such that $\varphi$ in the FGH theorem can be taken uniformly in the form $A$ regardless of $\sigma$? In Section \ref{Modal}, we investigate this rephrased question in the framework of modal propositional logic. However, unlike the case of classical propositional logic, the formula $A$ may contain the modal operator $\Box$. Therefore, the interpretation of $\Box$ in arithmetic should be clearly defined. As is usually done in the study of provability logic, in the present paper, $\Box$ is interpreted by $\mathrm{Pr}_T(x)$. Then, our proofs of theorems in Section \ref{Modal} are modifications of that of Solovay's arithmetical completeness theorem \cite{Sol} which is one of the most remarkable theorems in the research of provability logic. The second perspective is the choice of provability predicates. Joosten \cite{Joo} generalized the FGH theorem by proving similar statements concerning several nonstandard provability predicates such as a formula expressing the provability in $T$ together with all true $\Sigma_{n+2}$ sentences. In the last section, we study some variations of the FGH theorem with respect to Rosser provability predicates. Sections \ref{Classical}, \ref{Modal}, and \ref{Rosser} can be read independently. We close the introduction with common preparations for reading these sections. Let $\mathcal{L}_A$ denote the language of first-order arithmetic containing the symbols $0, S, +, \times, \leq$. We do not specify what exactly $\mathcal{L}_A$ is, but it may be assumed to have as many function symbols for primitive recursive functions as necessary. Throughout the present paper, we fix a consistent c.e.~extension $T$ of Peano Arithmetic $\mathsf{PA}$ in the language $\mathcal{L}_A$. Let $\omega$ denote the set of all natural numbers. For each $n \in \omega$, let $\overline{n}$ denote the numeral $S(S(\cdots S(0) \cdots))$ ($n$ times applications of $S$) for $n$. We fix a primitive recursive formula $\mathrm{Proof}_T(x, y)$ naturally expressing that ``$y$ is a $T$-proof of $x$''. 
Our $\Sigma_1$ provability predicate $\mathrm{Pr}_T(x)$ of $T$ is defined by $\exists y\, \mathrm{Proof}_T(x, y)$, saying that ``$x$ is $T$-provable''. Then, we may assume that the provability predicate $\mathrm{Pr}_T(x)$ satisfies the following clauses (see Boolos \cite{Boo}): \begin{itemize} \item If $T \vdash \varphi$, then $\mathsf{PA} \vdash \mathrm{Pr}_T(\gn{\varphi})$, \item $\mathsf{PA} \vdash \mathrm{Pr}_T(\gn{\varphi \to \psi}) \to \bigl(\mathrm{Pr}_T(\gn{\varphi}) \to \mathrm{Pr}_T(\gn{\psi}) \bigr)$, \item If $\varphi$ is a $\Sigma_1$ sentence, then $\mathsf{PA} \vdash \varphi \to \mathrm{Pr}_T(\gn{\varphi})$. \end{itemize} We may also assume that $\mathsf{PA}$ verifies that every theorem of $T$ has infinitely many proofs, that is, $\mathsf{PA} \vdash \forall x \forall y \, \bigl(\mathrm{Proof}_T(x, y) \to \exists z\, {>}\, y\, \mathrm{Proof}_T(x, z) \bigr)$. We introduce witness comparison notation (cf.~Lindstr\"om \cite[Lemma 1.3]{Lin}). \begin{defn}[Witness comparison notation]\label{WC} Suppose that $\mathcal{L}_A$-formulas $\varphi$ and $\psi$ are of the forms $\exists x \, \varphi'(x)$ and $\exists y \, \psi'(y)$, respectively. \begin{itemize} \item $\varphi \preccurlyeq \psi$ is an abbreviation for $\exists x\, \bigl(\varphi'(x) \land \forall y\, {<}\, x \, \neg \psi'(y) \bigr)$. \item $\varphi \prec \psi$ is an abbreviation for $\exists x\, \bigl(\varphi'(x) \land \forall y\, {\leq}\, x \, \neg \psi'(y) \bigr)$. \end{itemize} \end{defn} It is easily verified that $\mathsf{PA}$ proves the formulas $\neg (\varphi \preccurlyeq \psi \land \psi \prec \varphi)$ and $\varphi \lor \psi \to (\varphi \preccurlyeq \psi \lor \psi \prec \varphi)$. \section{On the form of a sentence in the FGH theorem\\ -- the case of classical propositional logic}\label{Classical} In this section, we investigate the following question mentioned in the introduction: What is a propositional formula $A$ such that $\varphi$ in the FGH theorem can be taken uniformly in the form $A$ regardless of $\sigma$? The language of classical propositional logic consists of countably many propositional variables $p_1, p_2, \ldots, q_1, q_2, \ldots$, propositional constants $\top, \bot$, and propositional connectives $\land, \lor, \neg, \to, \leftrightarrow$. For each propositional formula $A$, let $\models A$ mean that $A$ is a tautology. We say $A$ is unsatisfiable if $\models \neg A$. We say that a propositional formula is \textit{contingent} if it is neither a tautology nor unsatisfiable. For any propositional formula $A(p_1, \ldots, p_n)$ containing only the indicated propositional variables and any $\mathcal{L}_A$-sentences $\varphi_1, \ldots, \varphi_n$, let $A(\varphi_1, \ldots, \varphi_n)$ denote the $\mathcal{L}_A$-sentence obtained by simultaneously replacing all the occurrences of $p_i$ in $A$ by $\varphi_i$, for each $i \in \{1, \ldots, n\}$. We introduce the following sets in order to simplify our descriptions. \begin{defn} For each $\Sigma_1$ sentence $\sigma$, let $\mathrm{FGH}_T(\sigma)$ be the set of all $\mathcal{L}_A$-sentences $\varphi$ such that $\mathsf{PA} + \mathrm{Con}_T \vdash \sigma \leftrightarrow \Pr_T(\gn{\varphi})$. \end{defn} Then, the FGH theorem states that for any $\Sigma_1$ sentence $\sigma$, the set $\mathrm{FGH}_T(\sigma)$ is non-empty. We show that the set $\mathrm{FGH}_T(\sigma)$ is closed under $T$-provable equivalence.
\begin{prop}\label{FGHeq} For any $\Sigma_1$ sentence $\sigma$ and any $\mathcal{L}_A$-sentences $\varphi$ and $\psi$, if $\varphi \in \mathrm{FGH}_T(\sigma)$ and $T \vdash \varphi \leftrightarrow \psi$, then $\psi \in \mathrm{FGH}_T(\sigma)$. \end{prop} \begin{proof} Suppose $\varphi \in \mathrm{FGH}_T(\sigma)$ and $T \vdash \varphi \leftrightarrow \psi$. Then, the equivalence $\mathrm{Pr}_T(\gn{\varphi}) \leftrightarrow \mathrm{Pr}_T(\gn{\psi})$ is provable in $\mathsf{PA}$. Since $\mathsf{PA} + \mathrm{Con}_T \vdash \sigma \leftrightarrow \mathrm{Pr}_T(\gn{\varphi})$, we obtain $\mathsf{PA} + \mathrm{Con}_T \vdash \sigma \leftrightarrow \mathrm{Pr}_T(\gn{\psi})$, and hence $\psi \in \mathrm{FGH}_T(\sigma)$. \end{proof} First, we prove the following introductory theorem. \begin{thm}\label{Thm1} Let $A(p_1, \ldots, p_n)$ be any propositional formula with only the indicated propositional variables. Then, the following are equivalent: \begin{enumerate} \item $A(p_1, \ldots, p_n)$ is contingent. \item For any $\Sigma_1$ sentence $\sigma$, there exist $\mathcal{L}_A$-sentences $\varphi_1, \ldots, \varphi_n$ such that $A(\varphi_1, \ldots, \varphi_n) \in \mathrm{FGH}_T(\sigma)$. \end{enumerate} \end{thm} For each formula $A$, let $A^0$ and $A^1$ be $\neg A$ and $A$, respectively. Theorem \ref{Thm1} follows from the following lemma. \begin{lem}\label{Lem1} Let $A(p_1, \ldots, p_n)$ be any propositional formula with only the indicated propositional variables and let $r$ be any propositional variable not contained in $A$. If $A(p_1, \ldots, p_n)$ is contingent, then there exist propositional formulas $B_1(r), \ldots, B_n(r)$ such that $\models r \leftrightarrow A(B_1(r), \ldots, B_n(r))$. \end{lem} \begin{proof} Suppose that $A(p_1, \ldots, p_n)$ is contingent. Then, there exist mappings $f$ and $g$ from $\{1, \ldots, n\}$ to $\{0, 1\}$ such that $\models A(\top^{f(1)}, \ldots, \top^{f(n)})$ and $\models \neg A(\top^{g(1)}, \ldots, \top^{g(n)})$. For each $i$ ($1 \leq i \leq n$), let $B_i(r)$ be the propositional formula $(r \land \top^{f(i)}) \lor (\neg r \land \top^{g(i)})$. Then, $\models r \to (B_i(r) \leftrightarrow \top^{f(i)})$ and $\models \neg r \to (B_i(r) \leftrightarrow \top^{g(i)})$ hold. Thus, we have \[ \models r \to \bigl(A(B_1(r), \ldots, B_n(r)) \leftrightarrow A(\top^{f(1)}, \ldots, \top^{f(n)}) \bigr) \] and \[ \models \neg r \to \bigl(A(B_1(r), \ldots, B_n(r)) \leftrightarrow A(\top^{g(1)}, \ldots, \top^{g(n)}) \bigr). \] Since $\models A(\top^{f(1)}, \ldots, \top^{f(n)})$ and $\models \neg A(\top^{g(1)}, \ldots, \top^{g(n)})$, we obtain \[ \models r \to A(B_1(r), \ldots, B_n(r))\ \text{and}\ \models \neg r \to \neg A(B_1(r), \ldots, B_n(r)). \tag*{\mbox{\qedhere}} \] \end{proof} \begin{proof}[Proof of Theorem \ref{Thm1}] $(1 \Rightarrow 2)$: Suppose that $A(p_1, \ldots, p_n)$ is contingent and let $\sigma$ be any $\Sigma_1$ sentence. Then, by Lemma \ref{Lem1}, there exist propositional formulas $B_1(r), \ldots, B_n(r)$ such that \begin{equation}\label{equiv0} \models r \leftrightarrow A(B_1(r), \ldots, B_n(r)). \end{equation} By the FGH theorem, there exists an $\mathcal{L}_A$-sentence $\varphi \in \mathrm{FGH}_T(\sigma)$. For each $i$ ($1 \leq i \leq n$), let $\varphi_i$ be the $\mathcal{L}_A$-sentence $B_i(\varphi)$. Then, by the equivalence (\ref{equiv0}), we obtain \[ \mathsf{PA} \vdash \varphi \leftrightarrow A(\varphi_1, \ldots, \varphi_n). \] By Proposition \ref{FGHeq}, we conclude $A(\varphi_1, \ldots, \varphi_n) \in \mathrm{FGH}_T(\sigma)$.
$(2 \Rightarrow 1)$: Suppose that $A(p_1, \ldots, p_n)$ is not contingent. Then, for any $\mathcal{L}_A$-sentences $\varphi_1, \ldots, \varphi_n$, either $A(\varphi_1, \ldots, \varphi_n)$ or $\neg A(\varphi_1, \ldots, \varphi_n)$ is provable in $\mathsf{PA}$. Let $\sigma$ be any $\Sigma_1$ sentence independent of $\mathsf{PA} + \mathrm{Con}_T$. Then, for all $\mathcal{L}_A$-sentences $\varphi_1, \ldots, \varphi_n$, the sentence $A(\varphi_1, \ldots, \varphi_n)$ is not in $\mathrm{FGH}_T(\sigma)$. \end{proof} As mentioned in the introduction, the FGH theorem and the weak representability theorem are related to each other. In particular, the following theorem can be proved in a similar way. \begin{thm}\label{Thm1'} Let $A(p_1, \ldots, p_n)$ be any propositional formula with only the indicated propositional variables. Then, the following are equivalent: \begin{enumerate} \item $A(p_1, \ldots, p_n)$ is contingent. \item For any c.e.~set $X$, there exist $\mathcal{L}_A$-formulas $\varphi_1(v), \ldots, \varphi_n(v)$ such that $A(\varphi_1(v), \ldots, \varphi_n(v))$ weakly represents $X$ in $T$. \end{enumerate} \end{thm} Theorem \ref{Thm1} will be extended to the framework of modal propositional logic in the next section. In this section, we further improve Theorem \ref{Thm1} in the framework of classical propositional logic. For any propositional formula $A(p_1, \ldots, p_n, q_1, \ldots, q_m)$ with only the indicated propositional variables, let $\mathsf{FGH}_T[A; q_1, \ldots, q_m]$ denote the metamathematical statement ``for any $\mathcal{L}_A$-sentences $\psi_1, \ldots, \psi_m$ and for any $\Sigma_1$ sentence $\sigma$, there exist $\mathcal{L}_A$-sentences $\varphi_1, \ldots, \varphi_n$ such that $A(\varphi_1, \ldots, \varphi_n, \psi_1, \ldots, \psi_m) \in \mathrm{FGH}_T(\sigma)$'', and we provide a necessary and sufficient condition for $\mathsf{FGH}_T[A; q_1, \ldots, q_m]$. From this, we obtain more detailed information about the elements of $\mathrm{FGH}_T(\sigma)$ and the first incompleteness theorem (see Corollary \ref{FI1}). Let $\mathcal{F}_m$ denote the set of all functions $f : \{1, \ldots, m\} \to \{0, 1\}$. We prove the following theorem, which is one of the main theorems of the present paper. \begin{thm}\label{Thm2} For any propositional formula $A(p_1, \ldots, p_n, q_1, \ldots, q_m)$, the following are equivalent: \begin{enumerate} \item For all $f \in \mathcal{F}_m$, the formula $A(p_1, \ldots, p_n, \top^{f(1)}, \ldots, \top^{f(m)})$ is contingent. \item $\mathsf{FGH}_T[A; q_1, \ldots, q_m]$ holds. \end{enumerate} \end{thm} Theorem \ref{Thm2} also follows from the following lemma, which is an improvement of Lemma \ref{Lem1}. For the sake of simplicity, we sometimes abbreviate the tuples $p_1, \ldots, p_n$ and $q_1, \ldots, q_m$ as $\vec{p}$ and $\vec{q}$, respectively. \begin{lem}\label{Lem2} Let $A(\vec{p}, \vec{q})$ be any propositional formula with only the indicated propositional variables and let $r$ be any propositional variable not contained in $A$. If $A(\vec{p}, \top^{f(1)}, \ldots, \top^{f(m)})$ is contingent for all $f \in \mathcal{F}_m$, then there exist propositional formulas $B_1(r, \vec{q}), \ldots, B_n(r, \vec{q})$ such that \[ \models r \leftrightarrow A(B_1(r, \vec{q}), \ldots, B_n(r, \vec{q}), \vec{q}). \] \end{lem} \begin{proof} For each $f \in \mathcal{F}_m$, let $Q^f(\vec{q})$ denote the formula $q_1^{f(1)} \land \cdots \land q_m^{f(m)}$. Notice that for any two distinct elements $f, g$ of $\mathcal{F}_m$, $\models \neg \bigl(Q^f(\vec{q}) \land Q^g(\vec{q}) \bigr)$.
Also $\models \bigvee_{f \in \mathcal{F}_m} Q^f(\vec{q})$. Suppose that $A(\vec{p}, \top^{f(1)}, \ldots, \top^{f(m)})$ is contingent for all $f \in \mathcal{F}_m$. For each $g \in \mathcal{F}_m$, there exist propositional formulas $C_1^g(r), \ldots, C_n^g(r)$ such that \begin{equation}\label{equiv} \models r \leftrightarrow A(C_1^g(r), \ldots, C_n^g(r), \top^{g(1)}, \ldots, \top^{g(m)}) \end{equation} by Lemma \ref{Lem1}. For each $i$ ($1 \leq i \leq n$), let $B_i(r, \vec{q})$ be the propositional formula $\bigvee_{f \in \mathcal{F}_m} \bigl(C_i^f(r) \land Q^f(\vec{q}) \bigr)$. Then, $\models Q^g(\vec{q}) \to \bigl(B_i(r, \vec{q}) \leftrightarrow C_i^g(r) \bigr)$ and $\models Q^g(\vec{q}) \to (q_i \leftrightarrow \top^{g(i)})$. By the equivalence (\ref{equiv}), we obtain \[ \models Q^g(\vec{q}) \to \bigl(r \leftrightarrow A(B_1(r, \vec{q}), \ldots, B_n(r, \vec{q}), \vec{q}) \bigr). \] Then, \[ \models \bigvee_{f \in \mathcal{F}_m} Q^f(\vec{q}) \to \bigl(r \leftrightarrow A(B_1(r, \vec{q}), \ldots, B_n(r, \vec{q}), \vec{q}) \bigr). \] Since $\models \bigvee_{f \in \mathcal{F}_m} Q^f(\vec{q})$, we conclude \[ \models r \leftrightarrow A(B_1(r, \vec{q}), \ldots, B_n(r, \vec{q}), \vec{q}). \tag*{\mbox{\qedhere}} \] \end{proof} \begin{proof}[Proof of Theorem \ref{Thm2}] $(1 \Rightarrow 2)$: Suppose that $A(\vec{p}, \top^{f(1)}, \ldots, \top^{f(m)})$ is contingent for all $f \in \mathcal{F}_m$. Let $\psi_1, \ldots, \psi_m$ be any $\mathcal{L}_A$-sentences and let $\sigma$ be any $\Sigma_1$ sentence. By Lemma \ref{Lem2}, there exist propositional formulas $B_1(r, \vec{q}), \ldots, B_n(r, \vec{q})$ such that \begin{equation}\label{equiv3} \models r \leftrightarrow A(B_1(r, \vec{q}), \ldots, B_n(r, \vec{q}), \vec{q}). \end{equation} By the FGH theorem, there exists an $\mathcal{L}_A$-sentence $\varphi \in \mathrm{FGH}_T(\sigma)$. For each $i$ ($1 \leq i \leq n$), let $\varphi_i$ be the $\mathcal{L}_A$-sentence $B_i(\varphi, \psi_1, \ldots, \psi_m)$. Then, by the equivalence (\ref{equiv3}), \[ \mathsf{PA} \vdash \varphi \leftrightarrow A(\varphi_1, \ldots, \varphi_n, \psi_1, \ldots, \psi_m). \] By Proposition \ref{FGHeq}, we have $A(\varphi_1, \ldots, \varphi_n, \psi_1, \ldots, \psi_m) \in \mathrm{FGH}_T(\sigma)$. Therefore, we conclude that $\mathsf{FGH}_T[A; \vec{q}]$ holds. $(2 \Rightarrow 1)$: Suppose that $A(p_1, \ldots, p_n, \top^{f(1)}, \ldots, \top^{f(m)})$ is either a tautology or unsatisfiable for some $f \in \mathcal{F}_m$. For $i \in \{1, \ldots, m\}$, let $\psi_i$ be the $\mathcal{L}_A$-sentence $(0 = 0)^{f(i)}$. Then, for any $\mathcal{L}_A$-sentences $\varphi_1, \ldots, \varphi_n$, the $\mathcal{L}_A$-sentence $A(\varphi_1, \ldots, \varphi_n, \psi_1, \ldots, \psi_m)$ is either provable or refutable in $\mathsf{PA}$. Let $\sigma$ be any $\Sigma_1$ sentence independent of $\mathsf{PA} + \mathrm{Con}_T$. Then, $A(\varphi_1, \ldots, \varphi_n, \psi_1, \ldots, \psi_m) \notin \mathrm{FGH}_T(\sigma)$. Therefore, $\mathsf{FGH}_T[A; q_1, \ldots, q_m]$ does not hold. \end{proof} For example, let $A(p, q)$ be the propositional formula $p \leftrightarrow q$. Since both $p \leftrightarrow \top$ and $p \leftrightarrow \bot$ are contingent, $\mathsf{FGH}_T[A; q]$ holds by Theorem \ref{Thm2}. That is, for any $\mathcal{L}_A$-sentence $\psi$ and any $\Sigma_1$ sentence $\sigma$, there exists an $\mathcal{L}_A$-sentence $\varphi$ such that $\varphi \leftrightarrow \psi \in \mathrm{FGH}_T(\sigma)$.
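To make the construction behind this example explicit, let $\varphi_0 \in \mathrm{FGH}_T(\sigma)$ be any sentence provided by the FGH theorem. Tracing through the proofs of Lemmas \ref{Lem1} and \ref{Lem2} for $A(p, q) = p \leftrightarrow q$ gives, up to tautological equivalence, $B(r, q) = (r \land q) \lor (\neg r \land \neg q)$, that is, $r \leftrightarrow q$, and indeed $\models r \leftrightarrow \bigl((r \leftrightarrow q) \leftrightarrow q \bigr)$. Hence one may take $\varphi$ to be the sentence $\varphi_0 \leftrightarrow \psi$: since $(\varphi_0 \leftrightarrow \psi) \leftrightarrow \psi$ is tautologically equivalent to $\varphi_0$, Proposition \ref{FGHeq} yields $(\varphi_0 \leftrightarrow \psi) \leftrightarrow \psi \in \mathrm{FGH}_T(\sigma)$, i.e., $\varphi \leftrightarrow \psi \in \mathrm{FGH}_T(\sigma)$ for $\varphi = (\varphi_0 \leftrightarrow \psi)$.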
Another interesting corollary to Theorem \ref{Thm2} is the following version of the first incompleteness theorem. \begin{cor}\label{FI1} Let $A(p_1, \ldots, p_n, q_1, \ldots, q_m)$ be any propositional formula. If $A(p_1, \ldots, p_n, \top^{f(1)}, \ldots, \top^{f(m)})$ is contingent for all $f \in \mathcal{F}_m$, then for any $\mathcal{L}_A$-sentences $\psi_1, \ldots, \psi_m$, there exist $\mathcal{L}_A$-sentences $\varphi_1, \ldots, \varphi_n$ such that $A(\varphi_1, \ldots, \varphi_n, \psi_1, \ldots, \psi_m)$ is independent of $T$. \end{cor} \begin{proof} Let $\sigma$ be a $\Sigma_1$ sentence independent of $\mathsf{PA} + \mathrm{Con}_T$ and let $\psi_1, \ldots, \psi_m$ be any $\mathcal{L}_A$-sentences. By Theorem \ref{Thm2}, there exist $\mathcal{L}_A$-sentences $\varphi_1, \ldots, \varphi_n$ such that \[ \mathsf{PA} + \mathrm{Con}_T \vdash \sigma \leftrightarrow \mathrm{Pr}_T(\gn{A(\varphi_1, \ldots, \varphi_n, \psi_1, \ldots, \psi_m)}). \] Then, it is easy to show that $A(\varphi_1, \ldots, \varphi_n, \psi_1, \ldots, \psi_m)$ is independent of $T$. \end{proof} Notice that we can also prove the parameterized version and the weak representability version of Theorem \ref{Thm2}. \section{On the form of a sentence in the FGH theorem\\ -- the case of modal propositional logic}\label{Modal} In this section, we extend Theorem \ref{Thm1} to the framework of modal propositional logic. This section consists of three subsections. First, we give some preparations which are needed for our arguments. In the second subsection, we prove our modal version of Theorem \ref{Thm1}. Finally, we also investigate our modal version for $\mathcal{L}_A$-theories with finite heights. Here we define the height of theories. The sequence $\langle \mathrm{Con}_T^n \rangle_{n \in \omega}$ of $\mathcal{L}_A$-sentences is recursively defined as follows: $\mathrm{Con}_T^0$ is $0 = 0$; and $\mathrm{Con}_T^{n+1}$ is $\neg \mathrm{Pr}_T(\gn{\neg \mathrm{Con}_T^n})$. Notice that $\mathrm{Con}_T^1$ is $\mathsf{PA}$-provably equivalent to $\mathrm{Con}_T$. If there exists a natural number $k \geq 1$ such that $T \vdash \neg \mathrm{Con}_T^k$, then the least such number $k$ is called the \textit{height} of $T$. If not, we say that the height of $T$ is $\infty$. \subsection{Preparations} The language of modal propositional logic is that of classical propositional logic equipped with the unary modal operator $\Box$. For each modal formula $A$, let $\mathsf{Sub}(A)$ be the set of all subformulas of $A$. Let $\mathfrak{c}(A)$ be the cardinality of the set $\{B \in \mathsf{Sub}(A) \mid B$ is of the form $\Box C\}$. The modal formula $\Box^n A$ is recursively defined as follows: $\Box^0 A$ is $A$; and $\Box^{n+1} A$ is $\Box \Box^n A$. The formula $\Diamond^n A$ is an abbreviation for $\neg \Box^n \neg A$. The base logic of our investigations in this section is the G\"odel--L\"ob logic $\mathbf{GL}$, which is known as the logic of provability. The axioms of $\mathbf{GL}$ are as follows: \begin{enumerate} \item All propositional tautologies in the language of modal propositional logic, \item $\Box(p \to q) \to (\Box p \to \Box q)$, \item $\Box (\Box p \to p) \to \Box p$. \end{enumerate} The inference rules of $\mathbf{GL}$ are modus ponens, necessitation, and uniform substitution. A \textit{$\mathbf{GL}$-frame} is a tuple $\langle W, \sqsubset, r \rangle$ where $W$ is a nonempty finite set, $\sqsubset$ is a transitive irreflexive binary relation on $W$, and $r$ is an element of $W$ with $r \sqsubset x$ for all $x \in W \setminus \{r\}$.
Such an element $r$ is called the \textit{root} of the frame. A \textit{$\mathbf{GL}$-model} is a tuple $M = \langle W, \sqsubset, r, \Vdash \rangle$ where $\langle W, \sqsubset, r \rangle$ is a $\mathbf{GL}$-frame, and $\Vdash$ is a binary relation between $W$ and the set of all modal formulas satisfying the usual conditions for satisfaction and the following condition: $a \Vdash \Box A$ if and only if for all $b \in W$, $b \Vdash A$ if $a \sqsubset b$. It is known that $\mathbf{GL}$ is sound and complete with respect to the class of all $\mathbf{GL}$-frames (cf.~Segerberg \cite{Seg}). Moreover, the proof of the Kripke completeness of $\mathbf{GL}$ given in the textbook \cite{Boo} by Boolos shows that the following theorem holds. \begin{thm}\label{GLCompl} For any modal formula $A$, if $\mathbf{GL} \nvdash A$, then there exists a $\mathbf{GL}$-model $\langle W, \sqsubset, r, \Vdash \rangle$ such that $r \Vdash \Box^{\mathfrak{c}(A) + 1} \bot$ and $r \nVdash A$. \end{thm} For each set $\Gamma$ of modal formulas, let $\mathbf{GL} + \Gamma$ denote the logic whose axioms are all theorems of $\mathbf{GL}$ and all elements of $\Gamma$, and whose inference rules are modus ponens and uniform substitution. We identify each axiomatic system of modal propositional logic with the set of all its theorems. We introduce the following two extensions of $\mathbf{GL}$ which are studied in the context of the classification of propositional provability logics (cf.~Artemov and Beklemishev \cite{AB}). \begin{itemize} \item $\mathbf{GL}_\omega : = \mathbf{GL} + \{\Diamond^n \top \mid n \in \omega\}$. \item $\mathbf{S} : = \mathbf{GL} + \{\Box p \to p\}$. \end{itemize} Notice $\mathbf{GL} \subseteq \mathbf{GL}_\omega \subseteq \mathbf{S}$. To connect these logics with arithmetic, we introduce the notion of arithmetical interpretation. \begin{defn}[Arithmetical interpretations] A mapping from the set of all propositional variables to a set of $\mathcal{L}_A$-sentences is called an \textit{arithmetical interpretation}. Each arithmetical interpretation $f$ is uniquely extended to the mapping $f_T$ from the set of all modal formulas to a set of $\mathcal{L}_A$-sentences by the following clauses: \begin{enumerate} \item $f_T(\bot)$ is $0=1$, \item $f_T(\neg A)$ is $\neg f_T(A)$, \item $f_T(A \circ B)$ is $f_T(A) \circ f_T(B)$ for $\circ \in \{\land, \lor, \to, \leftrightarrow\}$, \item $f_T(\Box A)$ is $\mathrm{Pr}_T(\gn{f_T(A)})$. \end{enumerate} \end{defn} The logics $\mathbf{GL}$, $\mathbf{GL}_\omega$ and $\mathbf{S}$ are sound with respect to arithmetical interpretations. Let $\mathbb{N}$ denote the standard model of arithmetic. \begin{fact}[Arithmetical soundness (cf.~Artemov and Beklemishev \cite{AB})]\label{AS} Let $A$ be any modal formula. \begin{enumerate} \item If $\mathbf{GL} \vdash A$, then $\mathsf{PA} \vdash f_T(A)$ for any arithmetical interpretation $f$. \item If the height of $T$ is $\infty$ and $\mathbf{GL}_\omega \vdash A$, then $\mathbb{N} \models f_T(A)$ for any arithmetical interpretation $f$. \item If $T$ is sound and $\mathbf{S} \vdash A$, then $\mathbb{N} \models f_T(A)$ for any arithmetical interpretation $f$. \end{enumerate} \end{fact} We show some interrelationships between these logics. For each modal formula $A$, let $\mathrm{Rfn}(A)$ denote the set $\{\Box B \to B \mid \Box B \in \mathsf{Sub}(A)\}$. Notice that the cardinality of the set $\mathrm{Rfn}(A)$ is exactly $\mathfrak{c}(A)$. 
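As a purely illustrative aside (the encoding and all names below are ad hoc and not part of the formal development), the forcing relation on a finite $\mathbf{GL}$-model can be checked mechanically, which is convenient for experimenting with the small models constructed in the proofs of this section. The following minimal Python sketch assumes that formulas are encoded as nested tuples and evaluates $\Box$ by quantifying over $\sqsubset$-successors.
\begin{verbatim}
# Minimal sketch: forcing in a finite GL-model (illustrative only).
# Worlds are hashable labels, R is a transitive irreflexive relation
# given as a set of pairs, and val[w] is the set of propositional
# variables true at world w.  Formulas are nested tuples:
#   ('var', 'p'), ('bot',), ('not', A), ('->', A, B), ('box', A).

def forces(w, formula, R, val):
    tag = formula[0]
    if tag == 'var':
        return formula[1] in val[w]
    if tag == 'bot':
        return False
    if tag == 'not':
        return not forces(w, formula[1], R, val)
    if tag == '->':
        return (not forces(w, formula[1], R, val)) or \
               forces(w, formula[2], R, val)
    if tag == 'box':
        # w forces Box A iff every R-successor of w forces A
        return all(forces(v, formula[1], R, val) for (u, v) in R if u == w)
    raise ValueError('unknown connective: ' + tag)

# Two-element model: root 0, relation 0 R 1, and p true at both worlds.
R = {(0, 1)}
val = {0: {'p'}, 1: {'p'}}
box_p = ('box', ('var', 'p'))
diamond_top = ('not', ('box', ('bot',)))   # Diamond Top := not Box Bot
print(forces(0, box_p, R, val))            # True
print(forces(0, diamond_top, R, val))      # True
\end{verbatim}
For instance, on the two-element model above, the root forces $\Diamond \top \land \Box p$; a model of exactly this shape is used in the proof of Proposition \ref{ContNonT} below.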
\begin{fact}[Solovay \cite{Sol}]\label{GLvsGLS} For any modal formula $A$, the following are equivalent: \begin{enumerate} \item $\mathbf{S} \vdash A$. \item $\mathbf{GL} \vdash \bigwedge \mathrm{Rfn}(A) \to A$. \end{enumerate} \end{fact} The following fact is a kind of folklore which is proved through arithmetical interpretations. \begin{fact}\label{LemBox} For any modal formula $A$, the following are equivalent: \begin{enumerate} \item $\mathbf{GL} \vdash A$. \item $\mathbf{GL}_\omega \vdash \Box A$. \item $\mathbf{S} \vdash \Box A$. \end{enumerate} \end{fact} \begin{lem}\label{LemNegBox} For any modal formula $A$, the following are equivalent: \begin{enumerate} \item $\mathbf{GL} \vdash \bigwedge \mathrm{Rfn}(\Box A) \to \neg \Box A$. \item $\mathbf{GL} \vdash \Diamond^{\mathfrak{c}(A)+1} \top \to \neg \Box A$. \item $\mathbf{GL}_\omega \vdash \neg \Box A$. \item $\mathbf{S} \vdash \neg \Box A$. \end{enumerate} \end{lem} \begin{proof} $(1 \Rightarrow 2)$: Suppose $\mathbf{GL} \vdash \bigwedge \mathrm{Rfn}(\Box A) \to \neg \Box A$. It is known that $\mathbf{GL}$ proves $\Diamond^{\mathfrak{c}(\Box A)} \top \to \bigwedge \mathrm{Rfn}(\Box A) \lor \Diamond \bigwedge \mathrm{Rfn}(\Box A)$ (cf.~\cite[Lemma 26]{AB}). Then, $\mathbf{GL} \vdash \Diamond^{\mathfrak{c}(\Box A)} \top \to \neg \Box A \lor \Diamond \neg \Box A$. Since $\mathbf{GL} \vdash \Diamond \neg \Box A \to \neg \Box A$, we have $\mathbf{GL} \vdash \Diamond^{\mathfrak{c}(\Box A)} \top \to \neg \Box A$. Hence, $\mathbf{GL} \vdash \Diamond^{\mathfrak{c}(A)+1} \top \to \neg \Box A$ because $\mathfrak{c}(\Box A) = \mathfrak{c}(A) + 1$. $(2 \Rightarrow 3)$ and $(3 \Rightarrow 4)$: Obvious. $(4 \Rightarrow 1)$: By Fact \ref{GLvsGLS} and $\mathrm{Rfn}(\Box A) = \mathrm{Rfn}(\neg \Box A)$. \end{proof} The conditions ``$\mathbf{GL} \vdash A$'' and ``$\mathbf{GL}_{\omega} \vdash \neg \Box A$'' are also characterized by provability in other extensions of $\mathbf{GL}$. \begin{defn} Let $F_s$ be the modal formula $\Box^{s+1} \bot \to \Box^s \bot$. \end{defn} \begin{lem}\label{LemBox2} For any modal formula $A$ and any natural number $s$ with $s > \mathfrak{c}(A)$, the following are equivalent: \begin{enumerate} \item $\mathbf{GL} \vdash A$. \item $\mathbf{GL} + \{\neg F_s\} \vdash \Box A$. \end{enumerate} \end{lem} \begin{proof} $(1 \Rightarrow 2)$: Obvious. $(2 \Rightarrow 1)$: If $\mathbf{GL} \nvdash A$, then there exists a $\mathbf{GL}$-model $\langle W, \sqsubset, r, \Vdash \rangle$ such that $r \Vdash \Box^{\mathfrak{c}(A)+1} \bot \land \neg A$ by Theorem \ref{GLCompl}. Let $\langle W', \sqsubset', r', \Vdash' \rangle$ be a $\mathbf{GL}$-model obtained from $\langle W, \sqsubset, r, \Vdash \rangle$ by adding a chain of new elements below $r$ so that $r' \Vdash' \Box^{s+1} \bot \land \Diamond^s \top$. Such a model exists because $s > \mathfrak{c}(A)$. Since $r' \Vdash' \neg \Box A$, we obtain $\mathbf{GL} + \{\neg F_s\} \nvdash \Box A$. \end{proof} \begin{lem}\label{LemNegBox2} For any modal formula $A$ and any natural number $s$ with $s > \mathfrak{c}(A)$, the following are equivalent: \begin{enumerate} \item $\mathbf{GL}_\omega \vdash \neg \Box A$. \item $\mathbf{GL} + \{\neg F_s\} \vdash \neg \Box A$. \end{enumerate} \end{lem} \begin{proof} $(1 \Rightarrow 2)$: If $\mathbf{GL}_\omega \vdash \neg \Box A$, then by Lemma \ref{LemNegBox}, $\Diamond^{\mathfrak{c}(A)+1} \top \to \neg \Box A$ is proved in $\mathbf{GL}$, and thus $\mathbf{GL} + \{\neg F_s\} \vdash \neg \Box A$ because $s \geq \mathfrak{c}(A)+1$. 
$(2 \Rightarrow 1)$: Suppose $\mathbf{GL} + \{\neg F_s\} \vdash \neg \Box A$, then $\mathbf{GL} \vdash \Box A \to F_s$, that is, $\mathbf{GL} \vdash \Box A \to (\Box^{s+1} \bot \to \Box^s \bot)$. Then, $\mathbf{GL} \vdash \Box \Box A \to \Box(\Box^{s+1} \bot \to \Box^s \bot)$. By L\"ob's principle, we get $\mathbf{GL} \vdash \Box \Box A \to \Box^{s+1} \bot$. Then, $\mathbf{GL} \vdash \Box A \to \Box^{s+1} \bot$, and hence $\mathbf{GL} \vdash \Diamond^{s+1} \top \to \neg \Box A$. We obtain $\mathbf{GL}_\omega \vdash \neg \Box A$. \end{proof} Here we introduce a notion that will be central in this section. \begin{defn} A modal formula $A$ is said to be \textit{nontrifling} if $\mathbf{GL}_\omega \nvdash \Box \Box A \to \Box A$. \end{defn} Then, this notion is characterized in many ways as follows. \begin{prop}\label{NonT} For any modal formula $A$ and any number $s > \mathfrak{c}(A)$, the following are equivalent: \begin{enumerate} \item $A$ is nontrifling. \item $\mathbf{GL}_\omega \nvdash \Box A$ and $\mathbf{GL}_\omega \nvdash \neg \Box A$. \item $\mathbf{S} \nvdash \Box A$ and $\mathbf{S} \nvdash \neg \Box A$. \item $\mathbf{GL} + \{\neg F_s\} \nvdash \Box A$ and $\mathbf{GL} + \{\neg F_s\} \nvdash \neg \Box A$. \item $\mathbf{GL} \nvdash A$ and $\mathbf{GL} \nvdash \bigwedge \mathrm{Rfn}(\Box A) \to \neg \Box A$. \end{enumerate} \end{prop} \begin{proof} $(1 \Rightarrow 2)$: We prove the contrapositive. If $\mathbf{GL}_\omega \vdash \Box A$, then $\mathbf{GL}_\omega \vdash \Box \Box A \to \Box A$ is obvious. Suppose $\mathbf{GL}_\omega \vdash \neg \Box A$. By Lemma \ref{LemNegBox}, we have $\mathbf{GL} \vdash \Diamond^{\mathfrak{c}(A)+1} \top \to \neg \Box A$. Then, $\mathbf{GL} \vdash \Diamond^{\mathfrak{c}(A)+2} \top \to \neg \Box \Box A$. We obtain $\mathbf{GL}_\omega \vdash \neg \Box \Box A$, and hence $\mathbf{GL}_\omega \vdash \Box \Box A \to \Box A$. $(2 \Leftrightarrow 3 \Leftrightarrow 4 \Leftrightarrow 5)$: By Fact \ref{LemBox} and Lemmas \ref{LemNegBox}, \ref{LemBox2}, and \ref{LemNegBox2}. $(5 \Rightarrow 1)$: Suppose $\mathbf{GL} \nvdash A$ and $\mathbf{GL} \nvdash \bigwedge \mathrm{Rfn}(\Box A) \to \neg \Box A$. Then, $\mathbf{GL} \nvdash \Box A \to A$. By Theorem \ref{GLCompl}, there exist $\mathbf{GL}$-models $M = \langle W, \sqsubset, r, \Vdash \rangle$ and $M_0 = \langle W_0, \sqsubset_0, r_0, \Vdash_0 \rangle$ such that $r \Vdash \Box A \land \neg A$ and $r_0 \Vdash_0 \mathrm{Rfn}(\Box A) \land \Box A$. We merge these two models $M$ and $M_0$ into one model $M^*$, which is illustrated in Fig \ref{Fig0}. We give the precise definition of the $\mathbf{GL}$-model $M^* = \langle W^*, \sqsubset^*, r^*, \Vdash^* \rangle$ as follows: \begin{itemize} \item $W^* = W \cup W_0 \cup \{r_i \mid i \geq 1\} \cup \{r^*\}$, \item $\sqsubset^*$ is the transitive closure of \[ \sqsubset \cup \sqsubset_0 \cup \{(r_i, r_j) \mid i > j\} \cup \{(r^*, r_i) \mid i \in \omega\} \cup \{(r^*, r)\}, \] \item for $a \in W$, $a \Vdash^* p$ if and only if $a \Vdash p$, \\ for $a \in W_0$, $a \Vdash^* p$ if and only if $a \Vdash_0 p$, and \\ for $a \in \{r_i \mid i \geq 1\} \cup \{r^*\}$, $a \Vdash^* p$ if and only if $r_0 \Vdash_0 p$. \end{itemize} \begin{figure} \caption{The $\mathbf{GL}$-model $M^*$} \label{Fig0} \end{figure} It is easy to show that $r \Vdash^* \Box A \land \neg A$ and $r_0 \Vdash^* \mathrm{Rfn}(\Box A) \land \Box A$. Since $r^* \Vdash^* \Diamond^n \top$ for all $n \in \omega$, every theorem of $\mathbf{GL}_\omega$ is true in $r^*$. 
Thus, it suffices to show that $r^* \nVdash^* \Box \Box A \to \Box A$. For this purpose, we prove the following lemma. \begin{lem}\label{TLem} For any $i \in \omega$ and any subformula $B$ of $\Box A$, $r_i \Vdash^* B$ if and only if $r_0 \Vdash^* B$. \end{lem} \begin{proof} We prove the lemma by induction on $i$. For $i = 0$, the lemma is trivial. Suppose that the lemma holds for $i$, and we prove the case of $i + 1$ by induction on the construction of the subformula $B$ of $\Box A$. The case that $B$ is a propositional variable is obvious from the definition of $\Vdash^*$. The cases for propositional connectives are easily proved by the induction hypothesis. We only give a proof of the case that $B$ is of the form $\Box C$, that is, we prove $r_{i+1} \Vdash^* \Box C$ if and only if $r_0 \Vdash^* \Box C$. $(\Rightarrow)$: Suppose $r_{i+1} \Vdash^* \Box C$, then $r_{i+1} \Vdash^* \Box \Box C$ because $\sqsubset^*$ is transitive. Since $r_{i+1} \sqsubset^* r_0$, we obtain $r_0 \Vdash^* \Box C$. $(\Leftarrow)$: Suppose $r_{i+1} \nVdash^* \Box C$. Then, $a \nVdash^* C$ for some $a \in W^*$ with $r_{i+1} \sqsubset^* a$. We distinguish the following three cases: \begin{itemize} \item $r_0 \sqsubset^* a$. \\ Then, $r_0 \nVdash^* \Box C$ trivially holds. \item $a = r_0$. \\ We have $r_0 \Vdash^* \Box C \to C$ because $r_0 \Vdash^* \mathrm{Rfn}(\Box A)$. Thus, $r_0 \nVdash^* \Box C$. \item $a = r_j$ for $0 < j < i+1$. \\ By the induction hypothesis for $j$, we have $r_0 \nVdash^* C$. Then, $r_j \nVdash^* \Box C$. By the induction hypothesis for $j$ again, we obtain $r_0 \nVdash^* \Box C$. \qedhere \end{itemize} \end{proof} We are ready to prove $r^* \nVdash^* \Box \Box A \to \Box A$. Let $a \in W^*$ be any element such that $r^* \sqsubset^* a$. If $a \in W$, then $a \Vdash^* \Box A$ because $r \Vdash^* \Box A$. If $a \in W_0$, then $a \Vdash^* \Box A$ because $r_0 \Vdash^* \Box A$. If $a \in \{r_i \mid i \geq 1\}$, then $a \Vdash^* \Box A$ by Lemma \ref{TLem}. Therefore, we obtain $r^* \Vdash^* \Box \Box A$. Since $r^* \sqsubset^* r$ and $r \nVdash^* A$, we get $r^* \nVdash^* \Box A$. Thus, we conclude $r^* \nVdash^* \Box \Box A \to \Box A$, and hence $A$ is nontrifling. \end{proof} If a formula $A$ does not contain $\Box$, then the following proposition holds. \begin{prop}\label{ContNonT} A propositional formula $A$ is contingent if and only if $A$ is nontrifling. \end{prop} \begin{proof} $(\Rightarrow)$: Let $\mathsf{v}_0$ be a truth assignment in classical propositional logic such that $\mathsf{v}_0(A)$ is false. Let $\langle W_0, \sqsubset_0, r_0, \Vdash_0 \rangle$ be a $\mathbf{GL}$-model satisfying $W_0 = \{r_0\}$, $\sqsubset_0 = \varnothing$, and $r_0 \Vdash_0 p$ if and only if $\mathsf{v}_0(p)$ is true. Then $r_0 \nVdash_0 A$, and hence $\mathbf{GL} \nvdash A$. Let $\mathsf{v}_1$ be a truth assignment such that $\mathsf{v}_1(A)$ is true. Let $\langle W_1, \sqsubset_1, r_1, \Vdash_1 \rangle$ be a $\mathbf{GL}$-model satisfying $W_1 = \{r_1, a\}$, $r_1 \sqsubset_1 a$, and for each $w \in W_1$, $w \Vdash_1 p$ if and only if $\mathsf{v}_1(p)$ is true. Then, $r_1 \Vdash_1 \Diamond \top \land \Box A$. Since $\mathfrak{c}(A) = 0$, we have $\mathbf{GL} \nvdash \Diamond^{\mathfrak{c}(A)+1} \top \to \neg \Box A$. By Lemma \ref{LemNegBox}, $\mathbf{GL}_\omega \nvdash \neg \Box A$. By Proposition \ref{NonT}, $A$ is nontrifling. $(\Leftarrow)$: If $A$ is a tautology, then $\mathbf{GL} \vdash A$. 
If $A$ is unsatisfiable, then $\mathbf{GL} \vdash \neg A$, and $\mathbf{GL} \vdash \Diamond \top \to \neg \Box A$. Hence, $\mathbf{GL}_\omega \vdash \neg \Box A$. In either case, $A$ is not nontrifling by Proposition \ref{NonT}. \end{proof} \subsection{A modal version of the FGH theorem} We are ready to prove an extension of Theorem \ref{Thm1}. \begin{thm}\label{MT} For any modal formula $A$, if $A$ is nontrifling, then for any $\Sigma_1$ sentence $\sigma$, there exists an arithmetical interpretation $f$ such that \begin{description} \item [(a)] $\mathsf{PA} \vdash \sigma \to f_T(\Box A)$, and \item [(b)] $\mathsf{PA} + \mathrm{Con}_T^{\mathfrak{c}(A) + 1} \vdash f_T(\Box A) \to \sigma$. \end{description} \end{thm} \begin{proof} Suppose that $A$ is nontrifling and let $\sigma$ be any $\Sigma_1$ sentence. We may assume that $\sigma$ is of the form $\exists x \, \delta(x)$ for some $\Delta_0$ formula $\delta(x)$. By Proposition \ref{NonT}, we have $\mathbf{GL} \nvdash A$ and $\mathbf{GL} \nvdash \bigwedge \mathrm{Rfn}(\Box A) \to \neg \Box A$. By Theorem \ref{GLCompl}, there exist $\mathbf{GL}$-models $M_0 = \pair{W_0, \sqsubset_0, r_0, \Vdash_0}$ and $M_1 = \pair{W_1, \sqsubset_1, r_1, \Vdash_1}$ such that $r_0 \Vdash_0 \Box^{\mathfrak{c}(A)+1} \bot \land \neg A$ and $r_1 \Vdash_1 \bigwedge \mathrm{Rfn}(\Box A) \land \Box A$. Let $\langle \cdot, \cdot \rangle$ be a natural injective mapping from $\omega^2$ to $\omega$. Also let $\pi_0$ and $\pi_1$ be natural mappings such that $\pi_0(\pair{i, j}) = i$ and $\pi_1(\pair{i, j}) = j$. We may assume that $0$ is not in the range of the mapping $\langle \cdot, \cdot \rangle$ and is not in the domain of the mappings $\pi_0$ and $\pi_1$. Let $n_0$ and $n_1$ be the cardinalities of $W_0$ and $W_1$, respectively. Without loss of generality, we may assume \begin{itemize} \item $W_0 = \{\pair{0, 1}, \pair{0, 2}, \ldots, \pair{0, n_0}\}$, $r_0 = \pair{0, 1}$, and \item $W_1 = \{\pair{1, 1}, \pair{1, 2}, \ldots, \pair{1, n_1}\}$, $r_1 = \pair{1, 1}$. \end{itemize} We merge the two models $M_0$ and $M_1$ into one $\mathbf{GL}$-model $M'$, which is visualized in Figure \ref{Fig1}. The definition of $M' = \pair{W', \sqsubset', 0, \Vdash'}$ is as follows: \begin{itemize} \item $W' = W_0 \cup W_1 \cup \{0, \pair{0, 0}, \pair{1, 0}\}$, \item $\sqsubset' = \sqsubset_0 \cup \sqsubset_1 \cup \{(0, a) \mid a \in W' \setminus \{0\}\} \cup \{(\pair{i, 0}, a) \mid i \in \{0, 1\}, a \in W_i\}$, \item For $\pair{i, j} \in W_i$, $\pair{i, j} \Vdash' p$ if and only if $\pair{i, j} \Vdash_i p$. \\ Also $0 \Vdash' p$, and $\pair{i, 0} \Vdash' p$ if and only if $\pair{i, 1} \Vdash_i p$. \end{itemize} \begin{figure} \caption{The $\mathbf{GL}$-model $M'$} \label{Fig1} \end{figure} Then, it is easy to show that $\pair{0,1} \Vdash' \Box^{\mathfrak{c}(A) + 1} \bot \land \neg A$ and $\pair{1,1} \Vdash' \bigwedge \mathrm{Rfn}(\Box A) \land \Box A$. As in the usual proof of Solovay's arithmetical completeness theorem, we recursively define a primitive recursive function $h : \omega \to W'$ by referring to $T$-proofs. In the definition of $h$, we use the formula $\lambda(x) \equiv \exists y \, \forall z \, {\geq} y \, (h(z) = x)$ and the sentence $f_T(A)$ for the arithmetical interpretation $f$ defined by $f(p) \equiv \bigvee_{\substack{a \in W' \\ a \Vdash' p}} \lambda(\overline{a})$.
This is done by the aid of the Fixed Point Lemma or the recursion theorem as in the proof of Solovay's theorem because such $\lambda(x)$ and $f_T(A)$ are effectively computable from $h$ (see Boolos \cite{Boo}). Here we give the definition of the function $h$. Let $h(0) : = 0$. Suppose that the value of $h(x)$ has already been defined. We define the value of $h(x+1)$. If $\forall y \, {\leq}\, x \, \neg \delta(y)$ holds and there exists no $T$-proof of $f_T(A)$ smaller than or equal to $x$, let $h(x+1) : = 0$. That is, the output of $h$ remains $0$ unless the smallest witness of $\delta(x)$ or the smallest $T$-proof of $f_T(A)$ appears. After such a witness appears, $h$ changes its value. At the first stage when the smallest witness $x$ of $\delta(x)$ or the smallest $T$-proof $x$ of $f_T(A)$ appears, we distinguish the following two cases: \begin{itemize} \item Case 1: $\forall y \, {<} \, x \, \neg \delta(y)$ holds and $x$ is the smallest $T$-proof of $f_T(A)$. \\ Let $h(x+1) : = \pair{0, 0}$. \item Case 2: $\forall y \, {<} \, x \, \neg \delta(y)$ and $\delta(x)$ hold and there is no proof of $f_T(A)$ less than or equal to $x$. \\ Let $h(x+1) : = \pair{1, 0}$. \end{itemize} That is, Cases 1 and 2 correspond to the situations $\mathrm{Pr}_T(\gn{f_T(A)}) \preccurlyeq \sigma$ and $\sigma \prec \mathrm{Pr}_T(\gn{f_T(A)})$, respectively. After that, we define the value of $h(x+1)$ as follows: \[ h(x+1) : = \begin{cases} a & \text{if}\ h(x) \sqsubset' a\ \&\ x\ \text{is a}\ T\text{-proof of}\ \neg \lambda(\overline{a}); \\ h(x) & \text{otherwise.} \end{cases} \] The definition of $h$ is hereby completed. We introduce three lemmas. The first lemma concerns the general properties of the function $h(x)$, the formula $\lambda(x)$, and the arithmetical interpretation $f_T$. \begin{lem}\label{L1} Let $a, b \in W'$. \begin{enumerate} \item If $a \neq b$, then $\mathsf{PA} \vdash \lambda(\overline{a}) \to \neg \lambda(\overline{b})$, \item $\displaystyle \mathsf{PA} \vdash h(x) = \overline{a} \to \lambda(\overline{a}) \lor \bigvee_{a \sqsubset' c} \lambda(\overline{c})$, \item $\displaystyle \mathsf{PA} \vdash \mathrm{Pr}_T(\gn{f_T(A)}) \preccurlyeq \sigma \leftrightarrow \bigvee_{\pi_0(c) = 0} \lambda(\overline{c})$, \item $\displaystyle \mathsf{PA} \vdash \sigma \prec \mathrm{Pr}_T(\gn{f_T(A)}) \leftrightarrow \bigvee_{\pi_0(c) = 1} \lambda(\overline{c})$, \item If $a \sqsubset' b$, then $\mathsf{PA} \vdash \lambda(\overline{a}) \to \neg \mathrm{Pr}_T(\gn{\neg \lambda(\overline{b})})$, \item If $\pi_1(a) \geq 1$, then $\mathsf{PA} \vdash \lambda(\overline{a}) \to \mathrm{Pr}_T(\gn{\neg \lambda(\overline{a})})$, \item If $\pi_1(a) \geq 1$, then $\mathsf{PA} \vdash \lambda(\overline{a}) \to \mathrm{Pr}_T(\gn{\bigvee_{a \sqsubset' c} \lambda(\overline{c})})$. \end{enumerate} \end{lem} \begin{proof} Except the implications $\leftarrow$ in Clauses 3 and 4, these statements are proved in a similar way as in the usual proof of Solovay's arithmetical completeness theorem (cf.~\cite{AB,Boo,JD,Sol}). For the implication $\leftarrow$ in Clause 3, we argue in $\mathsf{PA}$: Suppose that $\lambda(\overline{c})$ holds and $\pi_0(c) = 0$. Then, $h(k) = c$ for some $k$. If $\neg \sigma$ and $\neg \mathrm{Pr}_T(\gn{f_T(A)})$ hold, then $\forall x \, h(x) = 0$ holds, a contradiction. Hence, $\mathrm{Pr}_T(\gn{f_T(A)}) \preccurlyeq \sigma$ or $\sigma \prec \mathrm{Pr}_T(\gn{f_T(A)})$ holds. By Clause 1, we have $\neg \bigvee_{\pi_0(d) = 1} \lambda(\overline{d})$. 
Then, by the implication $\to$ in Clause 4, $\sigma \prec \mathrm{Pr}_T(\gn{f_T(A)})$ does not hold. Therefore, $\mathrm{Pr}_T(\gn{f_T(A)}) \preccurlyeq \sigma$ holds. The implication $\leftarrow$ in Clause 4 is also proved similarly. \end{proof} The second lemma states that the satisfaction relation for $a \in W'$ with $\pi_1(a) \geq 1$ is embedded into $\mathsf{PA}$. \begin{lem}\label{L2} Let $a \in W'$ with $\pi_1(a) \geq 1$ and let $B$ be a modal formula. \begin{enumerate} \item If $a \Vdash' B$, then $\mathsf{PA} \vdash \lambda(\overline{a}) \to f_T(B)$. \item If $a \nVdash' B$, then $\mathsf{PA} \vdash \lambda(\overline{a}) \to \neg f_T(B)$. \end{enumerate} \end{lem} \begin{proof} Clauses 1 and 2 are proved by induction on the construction of $B$ simultaneously as in the usual proof of Solovay's theorem. \end{proof} The third lemma concerns the satisfaction relation for the element $\pair{1,1}$. \begin{lem}\label{L3} Let $B$ be any subformula of $\Box A$. \begin{enumerate} \item If $\pair{1, 1} \Vdash' B$, then $\mathsf{PA} \vdash \lambda(\lp{1, 0}) \to f_T(B)$, \item If $B$ is $\Box C$ and $\pair{1, 1} \Vdash' \Box C$, then $\mathsf{PA} \vdash \sigma \prec \mathrm{Pr}_T(\gn{f_T(A)}) \to f_T(\Box C)$, \item If $\pair{1, 1} \nVdash' B$, then $\mathsf{PA} \vdash \lambda(\lp{1, 0}) \to \neg f_T(B)$. \end{enumerate} \end{lem} \begin{proof} We prove Clauses 1, 2 and 3 by induction on the construction of $B \in \mathsf{Sub}(\Box A)$ simultaneously. We only give a proof of the case that $B$ is $\Box C$. 1 and 2: Suppose $\pair{1, 1} \Vdash' \Box C$. Since $\Box C \in \mathsf{Sub}(\Box A)$, we have $\pair{1, 1} \Vdash' \Box C \to C$ because $\pair{1, 1} \Vdash' \bigwedge \mathrm{Rfn}(\Box A)$. Then, $\pair{1, 1} \Vdash' C$. Hence, for all $a \in W'$ with $\pi_0(a) = 1$ and $\pi_1(a) \geq 1$, we have $a \Vdash' C$. By Lemma \ref{L2}.1, $\mathsf{PA} \vdash \bigvee_{\substack{\pi_0(a) = 1 \\ \pi_1(a) \geq 1}} \lambda(\overline{a}) \to f_T(C)$. Also, by the induction hypothesis, $\mathsf{PA} \vdash \lambda(\lp{1, 0}) \to f_T(C)$. Thus, \[ \mathsf{PA} \vdash \bigvee_{\pi_0(a) = 1} \lambda(\overline{a}) \to f_T(C). \] By Lemma \ref{L1}.4, \[ \mathsf{PA} \vdash \sigma \prec \mathrm{Pr}_T(\gn{f_T(A)}) \to f_T(C). \] Then, \[ \mathsf{PA} \vdash \mathrm{Pr}_T(\gn{\sigma \prec \mathrm{Pr}_T(\gn{f_T(A)})}) \to f_T(\Box C). \] Since $\sigma \prec \mathrm{Pr}_T(\gn{f_T(A)})$ is a $\Sigma_1$ sentence, we have \[ \mathsf{PA} \vdash \sigma \prec \mathrm{Pr}_T(\gn{f_T(A)}) \to f_T(\Box C). \] This completes the proof of Clause 2. $\mathsf{PA} \vdash \lambda(\lp{1,0}) \to \sigma \prec \mathrm{Pr}_T(\gn{f_T(A)})$ by Lemma \ref{L1}.4, and hence Clause 1 also holds. 3: Suppose $\pair{1, 1} \nVdash' \Box C$. Then, $a \nVdash' C$ for some $a \in W'$ with $\pair{1, 1} \sqsubset' a$. By Lemma \ref{L2}.2, we have \[ \mathsf{PA} \vdash \lambda(\overline{a}) \to \neg f_T(C). \] Then, \[ \mathsf{PA} \vdash \neg \mathrm{Pr}_T(\gn{\neg \lambda(\overline{a})}) \to \neg f_T(\Box C). \] Since $\pair{1, 0} \sqsubset' a$, by Lemma \ref{L1}.5, \[ \mathsf{PA} \vdash \lambda(\lp{1, 0}) \to \neg \mathrm{Pr}_T(\gn{\neg \lambda(\overline{a})}). \] Therefore, $\mathsf{PA} \vdash \lambda(\lp{1, 0}) \to \neg f_T(\Box C)$. \end{proof} We are ready to show the required two statements: (a) $\mathsf{PA} \vdash \sigma \to f_T(\Box A)$ and (b) $\mathsf{PA} + \mathrm{Con}_T^{\mathfrak{c}(A)+1} \vdash f_T(\Box A) \to \sigma$. 
(a): Since $\pair{1, 1} \Vdash' \Box A$, by Lemma \ref{L3}.2, \[ \mathsf{PA} \vdash \sigma \prec \mathrm{Pr}_T(\gn{f_T(A)}) \to f_T(\Box A). \] Since \[ \mathsf{PA} \vdash \sigma \land \neg f_T(\Box A) \to \sigma \prec \mathrm{Pr}_T(\gn{f_T(A)}), \] we have \[ \mathsf{PA} \vdash \sigma \land \neg f_T(\Box A) \to f_T(\Box A), \] and hence we obtain $\mathsf{PA} \vdash \sigma \to f_T(\Box A)$. (b): Since $\pair{0, 1} \Vdash' \Box^{\mathfrak{c}(A)+1} \bot$, for all $a \in W'$ with $\pi_0(a) = 0$ and $\pi_1(a) \geq 1$, we have $a \Vdash' \Box^{\mathfrak{c}(A)+1} \bot$. By Lemma \ref{L2}.1, \[ \mathsf{PA} \vdash \bigvee_{\substack{\pi_0(a)=0 \\ \pi_1(a) \geq 1}} \lambda(\overline{a}) \to \neg \mathrm{Con}_T^{\mathfrak{c}(A)+1}, \] and thus \begin{align}\label{fml1} \mathsf{PA} + \mathrm{Con}_T^{\mathfrak{c}(A)+1} \vdash \bigwedge_{\substack{\pi_0(a)=0 \\ \pi_1(a) \geq 1}} \neg \lambda(\overline{a}). \end{align} Since \[ \mathsf{PA} \vdash f_T(\Box A) \land \neg \sigma \to \mathrm{Pr}_T(\gn{f_T(A)}) \preccurlyeq \sigma, \] by Lemma \ref{L1}.3, we have \[ \mathsf{PA} \vdash f_T(\Box A) \land \neg \sigma \to \bigvee_{\pi_0(a) = 0} \lambda(\overline{a}). \] By combining this with (\ref{fml1}), we obtain \begin{align}\label{fml2} \mathsf{PA} + \mathrm{Con}_T^{\mathfrak{c}(A)+1} \vdash f_T(\Box A) \land \neg \sigma \to \lambda(\lp{0, 0}). \end{align} Since $\pair{0, 1} \nVdash' A$, by Lemma \ref{L2}.2, $\mathsf{PA} \vdash \lambda(\lp{0, 1}) \to \neg f_T(A)$. Then, \[ \mathsf{PA} \vdash \neg \mathrm{Pr}_T(\gn{\neg \lambda(\lp{0, 1})}) \to \neg f_T(\Box A). \] Since $\pair{0, 0} \sqsubset' \pair{0,1}$, by Lemma \ref{L1}.5, \[ \mathsf{PA} \vdash \lambda(\lp{0, 0}) \to \neg \mathrm{Pr}_T(\gn{\neg \lambda(\lp{0, 1})}), \] and hence we have $\mathsf{PA} \vdash \lambda(\lp{0, 0}) \to \neg f_T(\Box A)$. From this with (\ref{fml2}), we obtain $\mathsf{PA} + \mathrm{Con}_T^{\mathfrak{c}(A)+1} \vdash f_T(\Box A) \land \neg \sigma \to \neg f_T(\Box A)$. We conclude $\mathsf{PA} + \mathrm{Con}_T^{\mathfrak{c}(A)+1} \vdash f_T(\Box A) \to \sigma$. \end{proof} If the height of $T$ is larger than $\mathfrak{c}(A)$, then the converse of Theorem \ref{MT} also holds. \begin{prop}\label{CMT} Let $A$ be any modal formula. Suppose that the height of $T$ is larger than $\mathfrak{c}(A)$ and, for any $\Sigma_1$ sentence $\sigma$, there exists an arithmetical interpretation $f$ such that $\mathsf{PA} \vdash \sigma \to f_T(\Box A)$ and $\mathsf{PA} + \mathrm{Con}_T^{\mathfrak{c}(A) + 1} \vdash f_T(\Box A) \to \sigma$. Then, $A$ is nontrifling. \end{prop} \begin{proof} Since $0 = 1$ is a $\Sigma_1$ sentence, there exists an arithmetical interpretation $f$ such that $\mathsf{PA} + \mathrm{Con}_T^{\mathfrak{c}(A) + 1} \vdash f_T(\Box A) \to 0 = 1$. Since $\mathbb{N} \models \mathrm{Con}_T^{\mathfrak{c}(A) + 1}$, we have $\mathbb{N} \models \neg f_T(\Box A)$. Thus, $T \nvdash f_T(A)$. By Fact \ref{AS}.1, $\mathbf{GL} \nvdash A$. Also, since $0 = 0$ is a $\Sigma_1$ sentence, there exists an arithmetical interpretation $g$ such that $\mathsf{PA} \vdash 0 = 0 \to g_T(\Box A)$. Then, $\mathsf{PA} \vdash g_T(\Box A)$. Since $\mathbb{N} \models \mathrm{Con}_T^{\mathfrak{c}(A)+1}$, $\mathsf{PA} \nvdash \neg \mathrm{Con}_T^{\mathfrak{c}(A)+1}$, and thus $\mathsf{PA} \nvdash \mathrm{Con}_T^{\mathfrak{c}(A)+1} \to \neg g_T(\Box A)$. By Fact \ref{AS}.1, $\mathbf{GL} \nvdash \Diamond^{\mathfrak{c}(A)+1} \top \to \neg \Box A$. By Lemma \ref{LemNegBox}, $\mathbf{GL}_\omega \nvdash \neg \Box A$. 
Then, by Proposition \ref{NonT}, $A$ is nontrifling. \end{proof} For any classical propositional formula $A$, we have $\mathfrak{c}(A) = 0$. So Proposition \ref{ContNonT} shows that Theorem \ref{MT} is actually an extension of Theorem \ref{Thm1}. For a variable $v$ of first-order logic, a \textit{$v$-arithmetical interpretation} $f$ is a mapping where for each propositional variable $p$, $f(p)$ is an $\mathcal{L}_A$-formula $\varphi(v)$ with only the free variable $v$. Each $v$-arithmetical interpretation $f$ is uniquely extended to the mapping $f_T$ from the set of all modal formulas to a set of $\mathcal{L}_A$-formulas with at most the free variable $v$, in the same way as for usual arithmetical interpretations, with the clause $f_T(\Box A)(v) \equiv \mathrm{Pr}_T(\gn{f_T(A)(\dot{v})})$. By tracing the entire proof of Theorem \ref{MT}, using the function $h(x, v)$ and the formula $\lambda(x, v)$, the following parameterized version of Theorem \ref{MT} is also proved. \begin{thm}\label{MT2} For any modal formula $A$, if $A$ is nontrifling, then for any $\Sigma_1$ formula $\sigma(v)$, there exists a $v$-arithmetical interpretation $f$ such that \begin{enumerate} \item $\mathsf{PA} \vdash \forall v\, \bigl(\sigma(v) \to f_T(\Box A)(v) \bigr)$, and \item $\mathsf{PA} + \mathrm{Con}_T^{\mathfrak{c}(A) + 1} \vdash \forall v\, \bigl(f_T(\Box A)(v) \to \sigma(v) \bigr)$. \end{enumerate} \end{thm} From Theorem \ref{MT2}, we obtain an extension of Theorem \ref{Thm1'} to the framework of modal logic. \begin{thm}\label{MT3} Let $A$ be any modal formula. If $A$ is nontrifling and the height of $T$ is larger than $\mathfrak{c}(A)$, then for any c.e.~set $X$, there exists a $v$-arithmetical interpretation $f$ such that $f_T(A)(v)$ weakly represents $X$ in $T$. \end{thm} \begin{proof} Let $\sigma(v)$ be a $\Sigma_1$ formula defining $X$ over $\mathbb{N}$. By Theorem \ref{MT2}, there exists a $v$-arithmetical interpretation $f$ such that \begin{enumerate} \item $\mathsf{PA} \vdash \forall v \, \bigl(\sigma(v) \to f_T(\Box A)(v) \bigr)$, and \item $\mathsf{PA} + \mathrm{Con}_T^{\mathfrak{c}(A) + 1} \vdash \forall v\, \bigl(f_T(\Box A)(v) \to \sigma(v) \bigr)$. \end{enumerate} Since $\mathbb{N} \models \mathrm{Con}_T^{\mathfrak{c}(A)+1}$, because the height of $T$ is larger than $\mathfrak{c}(A)$, we have $\mathbb{N} \models \forall v\, \bigl(\sigma(v) \leftrightarrow f_T(\Box A)(v) \bigr)$. Then, for any $k \in \omega$, $k \in X$ if and only if $T \vdash f_T(A)(\overline{k})$. This means that $f_T(A)(v)$ weakly represents $X$ in $T$. \end{proof} \subsection{A modal version of the FGH theorem for theories with finite heights} Notice that Theorem \ref{MT3} requires the assumption that the height of $T$ is larger than $\mathfrak{c}(A)$. In this subsection, we investigate variations of Theorems \ref{MT} and \ref{MT3}, keeping in mind theories with finite heights. It is already proved in Proposition \ref{NonT} that for any natural number $s$ with $s > \mathfrak{c}(A)$, $A$ is nontrifling if and only if $\mathbf{GL} + \{\neg F_s\} \nvdash \Box A$ and $\mathbf{GL} + \{\neg F_s\} \nvdash \neg \Box A$. On the other hand, even for a natural number $s$ with $s \leq \mathfrak{c}(A)$, we prove the following version of the FGH theorem concerning the condition ``$\mathbf{GL} + \{\neg F_s\} \nvdash \Box A$ and $\mathbf{GL} + \{\neg F_s\} \nvdash \neg \Box A$''.
\begin{thm}\label{MT4} For any modal formula $A$, if $\mathbf{GL} + \{\neg F_s\} \nvdash \Box A$ and $\mathbf{GL} + \{\neg F_s\} \nvdash \neg \Box A$, then for any $\Sigma_1$ sentence $\sigma$, there exists an arithmetical interpretation $f$ such that \[ \mathsf{PA} + \neg \mathrm{Con}_T^{s+1} + \mathrm{Con}_T^s \vdash \sigma \leftrightarrow f_T(\Box A). \] \end{thm} \begin{proof} Suppose $\mathbf{GL} + \{\neg F_s\} \nvdash \Box A$ and $\mathbf{GL} + \{\neg F_s\} \nvdash \neg \Box A$, that is, $\mathbf{GL} \nvdash \Box^{s+1} \bot \land \Diamond^s \top \to \Box A$ and $\mathbf{GL} \nvdash \Box^{s+1} \bot \land \Diamond^s \top \to \neg \Box A$. By the Kripke completeness of $\mathbf{GL}$, there exist $\mathbf{GL}$-models $M_0 = \pair{W_0, \sqsubset_0, r_0, \Vdash_0}$ and $M_1 = \pair{W_1, \sqsubset_1, r_1, \Vdash_1}$ such that $r_0 \Vdash_0 \Box^{s+1} \bot \land \Diamond^s \top \land \neg \Box A$ and $r_1 \Vdash_1 \Box^{s+1} \bot \land \Diamond^s \top \land \Box A$. Let $n_0$ and $n_1$ be the cardinalities of $W_0$ and $W_1$ respectively, and we may assume that \begin{itemize} \item $W_0 = \{\pair{0, 1}, \pair{0, 2}, \ldots, \pair{0, n_0}\}$, $r_0 = \pair{0, 1}$, and \item $W_1 = \{\pair{1, 1}, \pair{1, 2}, \ldots, \pair{1, n_1}\}$, $r_1 = \pair{1, 1}$. \end{itemize} Let $M' = \langle W', \sqsubset', 0, \Vdash' \rangle$ be the $\mathbf{GL}$-model obtained from $M_0$ and $M_1$ as in the proof of Theorem \ref{MT}. Then, $\pair{0,1} \Vdash' \Box^{s+1} \bot \land \Diamond^s \top \land \neg \Box A$ and $\pair{1,1} \Vdash' \Box^{s+1} \bot \land \Diamond^s \top \land \Box A$. We can define a function $h$, a formula $\lambda(x)$, and an arithmetical interpretation $f$ from the $\mathbf{GL}$-model $M'$ in the same way as in the proof of Theorem \ref{MT}. Then, the same results as Lemmas \ref{L1}, \ref{L2} and \ref{L3} can also be proved. For each $a \in W'$ with $\pi_1(a) > 1$, since $a \Vdash' \Box^s \bot$, we have that $\mathsf{PA}$ proves $\lambda(\overline{a}) \to \neg \mathrm{Con}_T^s$, and thus \[ \mathsf{PA} + \mathrm{Con}_T^s \vdash \bigwedge_{\pi_1(a) > 1} \neg \lambda(\overline{a}). \] Also, since $\pair{i, 1} \Vdash' \Diamond^s \top$, we have $\mathsf{PA} \vdash \lambda(\lp{i, 1}) \to \mathrm{Con}_T^s$. Then, $\mathsf{PA}$ proves $\neg \mathrm{Pr}_T(\gn{\neg \lambda(\lp{i, 1})}) \to \mathrm{Con}_T^{s+1}$. Also, $\mathsf{PA} \vdash \lambda(\lp{i, 0}) \to \neg \mathrm{Pr}_T(\gn{\neg \lambda(\lp{i, 1})})$ because $\pair{i, 0} \sqsubset' \pair{i, 1}$, and hence $\mathsf{PA} \vdash \lambda(\lp{i, 0}) \to \mathrm{Con}_T^{s+1}$. Therefore, we obtain \begin{align}\label{fml3} \mathsf{PA} + \neg \mathrm{Con}_T^{s+1} + \mathrm{Con}_T^s \vdash \bigwedge_{\pi_1(a) \neq 1} \neg \lambda(\overline{a}). \end{align} We prove $\mathsf{PA} + \neg \mathrm{Con}_T^{s+1} + \mathrm{Con}_T^s \vdash \sigma \leftrightarrow f_T(\Box A)$. $(\rightarrow)$: Since $\mathsf{PA} \vdash \sigma \prec \mathrm{Pr}_T(\gn{f_T(A)}) \to \bigvee_{\pi_0(a) = 1} \lambda(\overline{a})$, from (\ref{fml3}), we have \[ \mathsf{PA} + \neg \mathrm{Con}_T^{s+1} + \mathrm{Con}_T^s \vdash \sigma \prec \mathrm{Pr}_T(\gn{f_T(A)}) \to \lambda(\lp{1, 1}). \] Since $\pair{1, 1} \Vdash' \Box A$, we have $\mathsf{PA} \vdash \lambda(\lp{1, 1}) \to f_T(\Box A)$. Hence, \[ \mathsf{PA} + \neg \mathrm{Con}_T^{s+1} + \mathrm{Con}_T^s \vdash \sigma \prec \mathrm{Pr}_T(\gn{f_T(A)}) \to f_T(\Box A). 
\] Here $\mathsf{PA} \vdash \sigma \land \neg f_T(\Box A) \to \sigma \prec \mathrm{Pr}_T(\gn{f_T(A)})$, and thus \[ \mathsf{PA} + \neg \mathrm{Con}_T^{s+1} + \mathrm{Con}_T^s \vdash \sigma \land \neg f_T(\Box A) \to f_T(\Box A). \] We obtain $\mathsf{PA} + \neg \mathrm{Con}_T^{s+1} + \mathrm{Con}_T^s \vdash \sigma \to f_T(\Box A)$. $(\leftarrow)$: Since $\mathsf{PA} \vdash \mathrm{Pr}_T(\gn{f_T(A)}) \preccurlyeq \sigma \to \bigvee_{\pi_0(a) = 0} \lambda(\overline{a})$, from (\ref{fml3}), \[ \mathsf{PA} + \neg \mathrm{Con}_T^{s+1} + \mathrm{Con}_T^s \vdash \mathrm{Pr}_T(\gn{f_T(A)}) \preccurlyeq \sigma \to \lambda(\lp{0, 1}). \] Since $\pair{0, 1} \nVdash' \Box A$, $\mathsf{PA} \vdash \lambda(\lp{0, 1}) \to \neg f_T(\Box A)$. Therefore, \[ \mathsf{PA} + \neg \mathrm{Con}_T^{s+1} + \mathrm{Con}_T^s \vdash \mathrm{Pr}_T(\gn{f_T(A)}) \preccurlyeq \sigma \to \neg f_T(\Box A), \] and hence \[ \mathsf{PA} + \neg \mathrm{Con}_T^{s+1} + \mathrm{Con}_T^s \vdash f_T(\Box A) \land \neg \sigma \to \neg f_T(\Box A). \] We conclude $\mathsf{PA} + \neg \mathrm{Con}_T^{s+1} + \mathrm{Con}_T^s \vdash f_T(\Box A) \to \sigma$. \end{proof} If the height of $T$ is larger than or equal to $s$, then the converse of Theorem \ref{MT4} also holds. \begin{prop} Suppose that the height of $T$ is larger than or equal to $s$. For any modal formula $A$, if for any $\Sigma_1$ sentence $\sigma$, there exists an arithmetical interpretation $f$ such that $\mathsf{PA} + \neg \mathrm{Con}_T^{s+1} + \mathrm{Con}_T^s \vdash \sigma \leftrightarrow f_T(\Box A)$, then $\mathbf{GL} + \{\neg F_s\} \nvdash \Box A$ and $\mathbf{GL} + \{\neg F_s\} \nvdash \neg \Box A$. \end{prop} \begin{proof} If the height of $T$ is larger than $s$, then $T \nvdash \neg \mathrm{Con}_T^s$. By L\"ob's theorem, $T \nvdash \neg \mathrm{Con}_T^{s+1} \to \neg \mathrm{Con}_T^s$. Thus, the theory $T + \neg \mathrm{Con}_T^{s+1} + \mathrm{Con}_T^s$ is consistent. Also, if the height of $T$ is exactly $s$, then $\mathbb{N} \models \neg \mathrm{Con}_T^{s+1} \land \mathrm{Con}_T^s$. Thus, $\mathsf{PA} + \neg \mathrm{Con}_T^{s+1} + \mathrm{Con}_T^s$ is consistent. In either case, the theory $\mathsf{PA} + \neg \mathrm{Con}_T^{s+1} + \mathrm{Con}_T^s$ is consistent. Let $\sigma$ be any $\Sigma_1$ sentence independent of $\mathsf{PA} + \neg \mathrm{Con}_T^{s+1} + \mathrm{Con}_T^s$. Then, we have an arithmetical interpretation $f$ such that \[ \mathsf{PA} + \neg \mathrm{Con}_T^{s+1} + \mathrm{Con}_T^s \vdash \sigma \leftrightarrow f_T(\Box A). \] Then, $\mathsf{PA} + \neg \mathrm{Con}_T^{s+1} + \mathrm{Con}_T^s$ proves neither $f_T(\Box A)$ nor $\neg f_T(\Box A)$. By Fact \ref{AS}.1, $\mathbf{GL}$ proves neither $\neg F_s \to \Box A$ nor $\neg F_s \to \neg \Box A$. \end{proof} In the same way as in the proof of Theorem \ref{MT4}, the parameterized version of Theorem \ref{MT4} can also be proved, and hence we obtain the following theorem. \begin{thm} Suppose that the height of $T$ is $s$. For any modal formula $A$, if $\mathbf{GL} + \{\neg F_s\} \nvdash \Box A$ and $\mathbf{GL} + \{\neg F_s\} \nvdash \neg \Box A$, then for any c.e.~set $X$, there exists a $v$-arithmetical interpretation $f$ such that $f_T(A)(v)$ weakly represents $X$ in $T$. \end{thm} \begin{proof} Let $\sigma(v)$ be a $\Sigma_1$ formula defining $X$ over $\mathbb{N}$.
Then, from the parameterized version of Theorem \ref{MT4}, there exists a $v$-arithmetical interpretation $f$ such that \[ \mathsf{PA} + \neg \mathrm{Con}_T^{s+1} + \mathrm{Con}_T^s \vdash \forall v\, \bigl(\sigma(v) \leftrightarrow f_T(\Box A)(v) \bigr). \] Since the height of $T$ is $s$, $\mathbb{N} \models \neg \mathrm{Con}_T^{s+1} \land \mathrm{Con}_T^s$. Then, we have that $\mathbb{N} \models \forall v\, \bigl(\sigma(v) \leftrightarrow f_T(\Box A)(v) \bigr)$. It follows that $f_T(A)(v)$ weakly represents $X$ in $T$. \end{proof} We close this section with two problems. In Section \ref{Classical}, we proved Theorem \ref{Thm1} by using Lemma \ref{Lem1}. On the other hand, in this section, we proved Theorem \ref{MT}, which is a modal extension of Theorem \ref{Thm1}, without using a modal version of Lemma \ref{Lem1}. If a modal version of Lemma \ref{Lem1} were proved, then the proof of Theorem \ref{MT}, as well as the proof of Theorem \ref{Thm1} in Section \ref{Classical}, would be substantially simplified. \begin{prob} Can we prove a modal version of Lemma \ref{Lem1}? \end{prob} In Section \ref{Classical}, we proved Theorem \ref{Thm2}, which is an improvement of Theorem \ref{Thm1}. Then, it is natural to pose the following problem. \begin{prob} Can we extend Theorem \ref{Thm2} to the framework of modal propositional logic? \end{prob} \section{Rosser-type FGH theorems}\label{Rosser} In this section, we investigate some variations of Rosser-type FGH theorems. Recall that $\mathrm{Proof}_T(x, y)$ is a natural formula saying that $y$ is a $T$-proof of $x$, and that $\mathrm{Pr}_T(x)$ is of the form $\exists y\, \mathrm{Proof}_T(x, y)$. Besides $\mathrm{Proof}_T(x, y)$, we consider formulas that witness $\mathrm{Pr}_T(x)$. We say a formula $\mathrm{Prf}_T(x, y)$ is a \textit{proof predicate} of $T$ if $\mathrm{Prf}_T(x, y)$ satisfies the following conditions: \begin{enumerate} \item $\mathrm{Prf}_T(x, y)$ is a primitive recursive formula, \item $\mathsf{PA} \vdash \forall x \, \bigl(\mathrm{Pr}_T(x) \leftrightarrow \exists y \, \mathrm{Prf}_T(x, y) \bigr)$, \item $\mathsf{PA} \vdash \forall x \, \forall x' \, \forall y\, \bigl(\mathrm{Prf}_T(x, y) \land \mathrm{Prf}_T(x', y) \to x = x' \bigr)$. \end{enumerate} For each proof predicate $\mathrm{Prf}_T(x, y)$ of $T$, the $\Sigma_1$ formula $\mathrm{Pr}_T^{\mathrm R}(x)$ defined by \[ \mathrm{Pr}_T^{\mathrm R}(x) : \equiv \exists y \, \bigl(\mathrm{Prf}_T(x, y) \land \forall z \, {<} \, y \, \neg \mathrm{Prf}_T(\dot{\neg} x, z) \bigr) \] is called the \textit{Rosser provability predicate} of $\mathrm{Prf}_T(x, y)$ or a Rosser provability predicate of $T$. Here $\dot{\neg} x$ is a primitive recursive term corresponding to a primitive recursive function calculating the G\"odel number of $\neg \varphi$ from the G\"odel number of an $\mathcal{L}_A$-formula $\varphi$ such that $\mathsf{PA}$ proves $\dot{\neg} \gn{\psi} = \gn{\neg \psi}$ for each $\mathcal{L}_A$-formula $\psi$. In view of witness comparison, we also introduce an auxiliary $\Sigma_1$ formula $\mathrm{Pr}_T^{\scriptsize{\reflectbox{{\rm R}}}}(\dot{\neg}x)$ as follows: \[ \mathrm{Pr}_T^{\scriptsize{\reflectbox{{\rm R}}}}(\dot{\neg} x) : \equiv \exists y \, \bigl(\mathrm{Prf}_T(\dot{\neg} x, y) \land \forall z \, {\leq} \, y \, \neg \mathrm{Prf}_T(x, z) \bigr).
\] Then, for any $\mathcal{L}_A$-formula $\varphi$, $\mathsf{PA}$ proves $\neg \bigl(\mathrm{Pr}_T^{\mathrm R}(\gn{\varphi}) \land \mathrm{Pr}_T^{\scriptsize{\reflectbox{{\rm R}}}}(\gn{\neg \varphi}) \bigr)$ and $\mathrm{Pr}_T(\gn{\varphi}) \lor \mathrm{Pr}_T(\gn{\neg \varphi}) \to \mathrm{Pr}_T^{\mathrm R}(\gn{\varphi}) \lor \mathrm{Pr}_T^{\scriptsize{\reflectbox{{\rm R}}}}(\gn{\neg \varphi})$. In a study of Rosser-type Henkin sentences, the following result was proved. \begin{fact}[Kurahashi {\cite[Theorem 3.5]{Kur}}] For any $\Sigma_1$ sentence $\sigma$, the following are equivalent: \begin{enumerate} \item There exists a $\Sigma_1$ sentence $\sigma'$ such that $\mathsf{PA} \vdash \neg (\sigma \land \sigma')$ and $\mathsf{PA} \vdash \mathrm{Pr}_T(\gn{\sigma}) \lor \mathrm{Pr}_T(\gn{\sigma'}) \to \sigma \lor \sigma'$. \item There exists a Rosser provability predicate $\mathrm{Pr}_T^{\mathrm R}(x)$ of $T$ such that $\mathsf{PA} \vdash \sigma \leftrightarrow \mathrm{Pr}_T^{\mathrm R}(\gn{\sigma})$. \end{enumerate} \end{fact} This fact can be seen as a kind of the FGH theorem for Rosser provability predicates because it deals with the equivalence $\sigma \leftrightarrow \mathrm{Pr}_T^{\mathrm{R}}(\gn{\varphi})$ where $\varphi$ is in particular $\sigma$. Inspired by this fact, we prove the following theorem. \begin{thm}\label{MTR} For any $\Sigma_1$ sentences $\sigma_0$ and $\sigma_1$, the following are equivalent: \begin{enumerate} \item $\mathsf{PA} \vdash \neg (\sigma_0 \land \sigma_1)$ and $\mathsf{PA} \vdash \neg \mathrm{Con}_T \to \sigma_0 \lor \sigma_1$. \item There are a Rosser provability predicate $\mathrm{Pr}_T^{\mathrm R}(x)$ of $T$ and an $\mathcal{L}_A$-sentence $\varphi$ such that \[ \mathsf{PA} \vdash \sigma_0 \leftrightarrow \mathrm{Pr}_T^{\mathrm R}(\gn{\varphi}) \ \text{and}\ \mathsf{PA} \vdash \sigma_1 \leftrightarrow \mathrm{Pr}_T^{\scriptsize{\reflectbox{{\rm R}}}}(\gn{\neg \varphi}). \] \end{enumerate} \end{thm} \begin{proof} $(2 \Rightarrow 1)$: This implication follows from the properties of witness comparison formulas. $(1 \Rightarrow 2)$: We may assume that $\sigma_0$ and $\sigma_1$ are of the forms $\exists x \, \tau_0(x)$ and $\exists x \, \tau_1(x)$ for some $\Delta_0$ formulas $\tau_0(x)$ and $\tau_1(x)$, respectively. By the Fixed Point Lemma, let $\varphi$ be a $\Sigma_1$ sentence satisfying the following equivalence: \[ \mathsf{PA} \vdash \varphi \leftrightarrow \exists y \, \bigl[(\mathrm{Proof}_T(\gn{\neg \varphi}, y) \lor \tau_0(y)) \land \forall z \, {\leq} \, y \, (\neg \mathrm{Proof}_T(\gn{\varphi}, z) \land \neg \tau_1(z)) \bigr]. \] Also, let $\varphi^*$ be the $\Sigma_1$ sentence \[ \exists z \, \bigl[(\mathrm{Proof}_T(\gn{\varphi}, z) \lor \tau_1(z)) \land \forall y\, {<} \, z \, (\neg \mathrm{Proof}_T(\gn{\neg \varphi}, y) \land \neg \tau_0(y)) \bigr]. \] \begin{lem}\label{L5} $\mathsf{PA} \vdash \mathrm{Pr}_T(\gn{\varphi}) \lor \mathrm{Pr}_T(\gn{\neg \varphi}) \to \sigma_0 \lor \sigma_1$. \end{lem} \begin{proof} Let $\mathrm{Prov}_T^{\mathrm R}(x)$ be the Rosser provability predicate of $\mathrm{Proof}_T(x, y)$. Then, $\mathsf{PA} \vdash \mathrm{Pr}_T(\gn{\varphi}) \lor \mathrm{Pr}_T(\gn{\neg \varphi}) \to \mathrm{Prov}_T^{\mathrm R}(\gn{\varphi}) \lor \mathrm{Prov}_T^{\scriptsize{\reflectbox{{\rm R}}}}(\gn{\neg \varphi})$. By the definition of $\varphi^*$, \[ \mathsf{PA} \vdash \mathrm{Prov}_T^{\mathrm R}(\gn{\varphi}) \land \neg \sigma_0 \to \varphi^*. 
\] Then, \[ \mathsf{PA} \vdash \mathrm{Prov}_T^{\mathrm R}(\gn{\varphi}) \land \neg \sigma_0 \to \mathrm{Pr}_T(\gn{\varphi^*}) \] because $\varphi^*$ is a $\Sigma_1$ sentence. Since $\mathsf{PA} \vdash \neg (\varphi \land \varphi^*)$, we have \begin{align}\label{L5fml} \mathsf{PA} \vdash \mathrm{Prov}_T^{\mathrm R}(\gn{\varphi}) \land \neg \sigma_0 \to \neg \mathrm{Con}_T. \end{align} By the choice of $\varphi$, \[ \mathsf{PA} \vdash \mathrm{Prov}_T^{\scriptsize{\reflectbox{{\rm R}}}}(\gn{\neg \varphi}) \land \neg \sigma_1 \to \varphi, \] and we also obtain \[ \mathsf{PA} \vdash \mathrm{Prov}_T^{\scriptsize{\reflectbox{{\rm R}}}}(\gn{\neg \varphi}) \land \neg \sigma_1 \to \neg \mathrm{Con}_T. \] From this with (\ref{L5fml}), \[ \mathsf{PA} + \mathrm{Con}_T \vdash \mathrm{Prov}_T^{\mathrm R}(\gn{\varphi}) \lor \mathrm{Prov}_T^{\scriptsize{\reflectbox{{\rm R}}}}(\gn{\neg \varphi}) \to \sigma_0 \lor \sigma_1. \] Thus, \[ \mathsf{PA} + \mathrm{Con}_T \vdash \mathrm{Pr}_T(\gn{\varphi}) \lor \mathrm{Pr}_T(\gn{\neg \varphi}) \to \sigma_0 \lor \sigma_1. \] Since $\mathsf{PA} + \neg \mathrm{Con}_T \vdash \sigma_0 \lor \sigma_1$, we conclude \[ \mathsf{PA} \vdash \mathrm{Pr}_T(\gn{\varphi}) \lor \mathrm{Pr}_T(\gn{\neg \varphi}) \to \sigma_0 \lor \sigma_1. \tag*{\mbox{\qedhere}} \] \end{proof} We proceed with the main proof. We recursively define a primitive recursive function $h$ and an increasing sequence $\langle k_i \rangle_{i \in \omega}$ of natural numbers simultaneously by referring to $T$-proofs in stages. The function $h$ will be defined to output all theorems of $T$, and the Rosser predicate of the proof predicate $x = h(y)$ will be the required one. At the beginning of Stage $m$, the values of $k_0, \ldots, k_{m-1}, k_m$ and $h(0), \ldots, h(k_m - 1)$ have already been defined. Here we give our definition of the function $h$. In the definition, we identify each $\mathcal{L}_A$-formula with its G\"odel number. First, let $k_0 = 0$. At Stage $m$, \begin{itemize} \item If $m$ is not a $T$-proof of any $\mathcal{L}_A$-formula, then let $k_{m+1} : = k_m$ and go to Stage $m+1$. \item If $m$ is a $T$-proof of an $\mathcal{L}_A$-formula $\xi$ which is neither $\varphi$ nor $\neg \varphi$, then let $k_{m+1} : = k_m + 1$ and $h(k_m) : = \xi$, and go to the next stage. \item If $m$ is a $T$-proof of an $\mathcal{L}_A$-sentence $\xi$ which is either $\varphi$ or $\neg \varphi$, and $h$ has already output at least one of $\varphi$ or $\neg \varphi$ before Stage $m$, then let $k_{m+1} : = k_m + 1$ and $h(k_m) : = \xi$, and go to Stage $m+1$. \item If $m$ is a $T$-proof of either $\varphi$ or $\neg \varphi$, and $h$ has output neither $\varphi$ nor $\neg \varphi$ before Stage $m$, then we distinguish the following three cases: \begin{description} \item [(a)] If $\exists z \, {\leq} \, m \, \tau_0(z)$ holds, then let $k_{m+1} : = k_m + 1$ and $h(k_m) : = \varphi$. \item [(b)] If $\exists z \, {\leq} \, m \, \tau_1(z)$ holds, then let $k_{m+1} : = k_m + 1$ and $h(k_m) : = \neg \varphi$. \item [(c)] Otherwise, let $k_{m+1} : = k_m$. \end{description} Go to the next Stage $m+1$. \end{itemize} This completes our definition of the function $h$. Since $\mathsf{PA} \vdash \neg (\sigma_0 \land \sigma_1)$, we have that $\mathsf{PA}$ proves that there is no $m$ satisfying both $\exists z \, {\leq} \, m\, \tau_0(z)$ and $\exists z \, {\leq} \, m\, \tau_1(z)$. Thus Cases (a) and (b) in the definition of $h$ are mutually exclusive. We show that the formula $x = h(y)$ is a proof predicate of $T$.
It suffices to show the following lemma: \begin{lem}\label{L6} $\mathsf{PA} \vdash \forall x\, \bigl(\mathrm{Pr}_T(x) \leftrightarrow \exists y \, (x = h(y)) \bigr)$. \end{lem} \begin{proof} We argue in $\mathsf{PA}$. $(\rightarrow)$: Suppose that $\xi$ is provable in $T$. If $\xi$ is neither $\varphi$ nor $\neg \varphi$, then for a $T$-proof $m$ of $\xi$, $h(k_m) = \xi$. If $\xi$ is either $\varphi$ or $\neg \varphi$, then by Lemma \ref{L5}, $\sigma_0 \lor \sigma_1$ holds. However, $\sigma_0$ and $\sigma_1$ cannot be true at the same time. We show that $\xi$ is output by $h$. We distinguish the following two cases: \begin{itemize} \item Case 1: $\sigma_0$ holds. \\ Let $n$ be the least number such that $\tau_0(n)$ holds, and let $m$ be the least number such that $m \geq n$ and $m$ is a $T$-proof of either $\varphi$ or $\neg \varphi$. Then, $h(k_m) = \varphi$ by the definition of $h$. If $\xi$ is $\varphi$, we are done. If $\xi$ is $\neg \varphi$, then let $m' > m$ be a $T$-proof of $\neg \varphi$. Such an $m'$ exists because $\neg \varphi$ has infinitely many $T$-proofs. Since $h$ already outputs $\varphi$ before Stage $m'$, by the definition of $h$, $h(k_{m'}) = \neg \varphi$. \item Case 2: $\sigma_1$ holds. \\ It is proved that $h$ outputs $\xi$ as in the proof of Case 1 by considering the least number $n$ such that $\tau_1(n)$ holds. \end{itemize} $(\leftarrow)$: Suppose that $\xi$ is output by $h$. If $\xi$ is neither $\varphi$ nor $\neg \varphi$, then $h(k_m) = \xi$ implies that $m$ is a $T$-proof of $\xi$, and so $\xi$ is provable in $T$. If $\xi$ is $\varphi$ or $\neg \varphi$, then let $m$ be the least number such that $h(k_m)$ is either $\varphi$ or $\neg \varphi$. We show that $\xi$ is provable in $T$. We distinguish the following four cases: \begin{itemize} \item Case 1: $\xi$ is $\varphi$ and $h(k_m) = \varphi$. \\ By the definition of $h$, $\exists z \, {\leq} \, m\, \tau_0(z)$ holds. Then, $\sigma_0$ and $\neg \sigma_1$ hold. Suppose that $\varphi$ is not provable in $T$, then $\varphi$ holds by the definition. Since $\varphi$ is a $\Sigma_1$ sentence, it is provable in $T$, a contradiction. Therefore, $\varphi$ is provable in $T$. \item Case 2: $\xi$ is $\varphi$ and $h(k_m) = \neg \varphi$. \\ Then, $h(k_{m'}) = \varphi$ for some $m' > m$. Since $\neg \varphi$ is already output before Stage $m'$, we find that $m'$ is a $T$-proof of $\varphi$, and hence $\varphi$ is provable in $T$. \item Case 3: $\xi$ is $\neg \varphi$ and $h(k_m) = \varphi$. \\ Then, for some $T$-proof $m' > m$ of $\neg \varphi$, $h(k_{m'}) = \neg \varphi$. Thus, $\neg \varphi$ is provable in $T$. \item Case 4: $\xi$ is $\neg \varphi$ and $h(k_m) = \neg \varphi$. \\ Then, $\exists z \, {\leq} \, m\, \tau_1(z)$ holds, and so $\sigma_1$ and $\neg \sigma_0$ hold. Suppose that $\neg \varphi$ is not provable in $T$. Then, $\varphi^*$ holds, and is provable in $T$. Since $\varphi^* \to \neg \varphi$ is $T$-provable, $\neg \varphi$ is also $T$-provable. This is a contradiction. Therefore, $\neg \varphi$ is provable in $T$. \qedhere \end{itemize} \end{proof} We resume the main proof. Let $\mathrm{Pr}_h^{\mathrm R}(x)$ be the Rosser provability predicate of the proof predicate $x = h(y)$ of $T$. Finally, we show that $\mathsf{PA}$ proves $\sigma_0 \leftrightarrow \mathrm{Pr}_h^{\mathrm R}(\gn{\varphi})$. A proof of $\mathsf{PA} \vdash \sigma_1 \leftrightarrow \mathrm{Pr}_h^{\scriptsize{\reflectbox{{\rm R}}}}(\gn{\neg \varphi})$ is similar, and we omit it. We argue in $\mathsf{PA}$. 
$(\rightarrow)$: Suppose that $\sigma_0$ holds. Let $n$ be the least number such that $\tau_0(n)$ holds. Then, either $\varphi$ or $\varphi^*$ holds, and since both are $\Sigma_1$ sentences, one of them is provable in $T$. Then, either $\varphi$ or $\neg \varphi$ is $T$-provable. Let $m$ be the least number such that $m \geq n$ and $m$ is a $T$-proof of $\varphi$ or $\neg \varphi$. Then, $h$ does not output $\neg \varphi$ before Stage $m$, and $h(k_m) = \varphi$. This means $\mathrm{Pr}_h^{\mathrm R}(\gn{\varphi})$ holds. $(\leftarrow)$: Suppose that $h(k_m) = \varphi$ and $h$ does not output $\neg \varphi$ before Stage $m$. For the least such $m$, $\exists z \, {\leq} \, m\, \tau_0(z)$ holds, and thus $\sigma_0$ holds. \end{proof} \begin{cor}\label{CorRos0} For any $\Sigma_1$ sentence $\sigma$, the following are equivalent: \begin{enumerate} \item There exists a $\Sigma_1$ sentence $\sigma'$ such that $\mathsf{PA} \vdash \neg (\sigma \land \sigma')$ and $\mathsf{PA} \vdash \neg \mathrm{Con}_T \to \sigma \lor \sigma'$. \item There exist a Rosser provability predicate $\mathrm{Pr}_T^{\mathrm R}(x)$ of $T$ and an $\mathcal{L}_A$-sentence $\varphi$ such that $\mathsf{PA} \vdash \sigma \leftrightarrow \mathrm{Pr}_T^{\mathrm R}(\gn{\varphi})$. \end{enumerate} \end{cor} \begin{proof} $(1 \Rightarrow 2)$: Immediate from Theorem \ref{MTR}. $(2 \Rightarrow 1)$: This implication is derived by letting $\sigma'$ be the $\Sigma_1$ sentence $\mathrm{Pr}_T^{\scriptsize{\reflectbox{{\rm R}}}}(\gn{\neg \varphi})$. \end{proof} The $(\mathsf{PA} + \mathrm{Con}_T)$-provable equivalence in the statement of the FGH theorem is equivalent to $\mathsf{PA} \vdash (\sigma \lor \neg \mathrm{Con}_T) \leftrightarrow \mathrm{Pr}_T(\gn{\varphi})$. From this viewpoint, the following corollary seems to be a natural counterpart of the FGH theorem in terms of Rosser provability predicates. \begin{cor}\label{CorRos1} For any $\Sigma_1$ sentence $\sigma$, there exist a Rosser provability predicate $\mathrm{Pr}_T^{\mathrm R}(x)$ of $T$ and an $\mathcal{L}_A$-sentence $\varphi$ such that \[ \mathsf{PA} \vdash (\sigma \lor \neg \mathrm{Con}_T) \leftrightarrow \mathrm{Pr}_T^{\mathrm R}(\gn{\varphi}). \] \end{cor} \begin{proof} For the $\Sigma_1$ sentences $\sigma \lor \neg \mathrm{Con}_T$ and $0=1$, we have that $\mathsf{PA}$ proves $\neg \bigl((\sigma \lor \neg \mathrm{Con}_T) \land 0=1 \bigr)$ and $\neg \mathrm{Con}_T \to (\sigma \lor \neg \mathrm{Con}_T) \lor 0=1$. By Corollary \ref{CorRos0}, there exist a Rosser provability predicate $\mathrm{Pr}_T^{\mathrm R}(x)$ of $T$ and an $\mathcal{L}_A$-sentence $\varphi$ such that $\mathsf{PA} \vdash (\sigma \lor \neg \mathrm{Con}_T) \leftrightarrow \mathrm{Pr}_T^{\mathrm R}(\gn{\varphi})$. \end{proof} Since $\mathsf{PA} + \mathrm{Con}_T \vdash \mathrm{Pr}_T(\gn{\varphi}) \leftrightarrow \mathrm{Pr}_T^{\mathrm R}(\gn{\varphi})$, the FGH theorem directly follows from Corollary \ref{CorRos1}. We show a version of the FGH theorem with respect to Rosser provability predicates corresponding to the representability of computable sets. \begin{cor}\label{CorRos2} For any $\Delta_1(\mathsf{PA})$ sentence $\delta$, there exist a Rosser provability predicate $\mathrm{Pr}_T^{\mathrm R}(x)$ of $T$ and an $\mathcal{L}_A$-sentence $\varphi$ such that \[ \mathsf{PA} \vdash \delta \leftrightarrow \mathrm{Pr}_T^{\mathrm R}(\gn{\varphi}) \ \text{and}\ \mathsf{PA} \vdash \neg \delta \leftrightarrow \mathrm{Pr}_T^{\scriptsize{\reflectbox{{\rm R}}}}(\gn{\neg \varphi}).
\] \end{cor} \begin{proof} Since $\delta$ is $\Delta_1(\mathsf{PA})$, there exist $\Sigma_1$ sentences $\sigma_0$ and $\sigma_1$ such that $\mathsf{PA} \vdash \delta \leftrightarrow \sigma_0$ and $\mathsf{PA} \vdash \neg \delta \leftrightarrow \sigma_1$. Then, we have $\mathsf{PA} \vdash \neg (\sigma_0 \land \sigma_1)$ and $\mathsf{PA} \vdash \sigma_0 \lor \sigma_1$. By Theorem \ref{MTR}, there exist a Rosser provability predicate $\mathrm{Pr}_T^{\mathrm R}(x)$ of $T$ and an $\mathcal{L}_A$-sentence $\varphi$ such that \[ \mathsf{PA} \vdash \sigma_0 \leftrightarrow \mathrm{Pr}_T^{\mathrm R}(\gn{\varphi}) \ \text{and}\ \mathsf{PA} \vdash \sigma_1 \leftrightarrow \mathrm{Pr}_T^{\scriptsize{\reflectbox{{\rm R}}}}(\gn{\neg \varphi}). \tag*{\mbox{\qedhere}} \] \end{proof} Corollary \ref{CorRos2} says that if a $\Sigma_1$ sentence $\sigma$ is $\Delta_1(\mathsf{PA})$, then there exist a Rosser provability predicate $\mathrm{Pr}_T^{\mathrm R}(x)$ of $T$ and an $\mathcal{L}_A$-sentence $\varphi$ such that $\mathsf{PA} \vdash \sigma \leftrightarrow \mathrm{Pr}_T^{\mathrm R}(\gn{\varphi})$. Does this hold for all $\Sigma_1$ sentences? By Corollary \ref{CorRos0}, this question is rephrased as follows: For any $\Sigma_1$ sentence $\sigma$, does there exist a $\Sigma_1$ sentence $\sigma'$ such that $\mathsf{PA} \vdash \neg (\sigma \land \sigma')$ and $\mathsf{PA} \vdash \neg \mathrm{Con}_T \to \sigma \lor \sigma'$? We show that this is not the case. \begin{prop} There exists a $\Sigma_1$ sentence $\sigma$ such that no $\Sigma_1$ sentence $\sigma'$ satisfies both $\mathsf{PA} \vdash \neg (\sigma \land \sigma')$ and $\mathsf{PA} \vdash \neg \mathrm{Con}_T \to \sigma \lor \sigma'$. That is, for all Rosser provability predicates $\mathrm{Pr}_T^{\mathrm R}(x)$ of $T$ and all $\mathcal{L}_A$-sentences $\varphi$, $\mathsf{PA} \nvdash \sigma \leftrightarrow \mathrm{Pr}_T^{\mathrm R}(\gn{\varphi})$. \end{prop} \begin{proof} Let $\sigma$ be a $\Sigma_1$ sentence which is $\Pi_1$-conservative over $\mathsf{PA}$ and such that $\mathsf{PA} + \neg \mathrm{Con}_T \nvdash \sigma$. The existence of such a sentence is proved by Guaspari \cite[Theorem 2.4]{Gua} (see also Lindstr\"om \cite[Exercise 5.5 (b)]{Lin}). Suppose that for some $\Sigma_1$ sentence $\sigma'$, $\mathsf{PA} \vdash \neg (\sigma \land \sigma')$ and $\mathsf{PA} \vdash \neg \mathrm{Con}_T \to \sigma \lor \sigma'$. Then, $\mathsf{PA} + \sigma \vdash \neg \sigma'$, and hence $\mathsf{PA} \vdash \neg \sigma'$ by $\Pi_1$-conservativity. Thus, $\mathsf{PA} + \neg \mathrm{Con}_T \vdash \sigma$. This is a contradiction. \end{proof} Guaspari and Solovay \cite{GS} introduced the logic $\mathbf{R}$ of witness comparison formulas, and Shavrukov \cite{Sha} introduced the bimodal logic $\mathbf{GR}$ of the usual and Rosser provability predicates. As in our observations in Section \ref{Modal}, it may also be possible to extend Theorem \ref{MTR} to the framework of modal logic via these logics. For example, for $\mathbf{R}$, we expect that the condition $\mathbf{R} + \{\Diamond^n \top \mid n \in \omega\} \nvdash \Box \Box A \to \Box A$ works well. \end{document}
\begin{document} \title{ON A GENERALIZATION OF BAER THEOREM} \author{L. A. KURDACHENKO} \address{Department of Algebra, National University of Dnepropetrovsk} \curraddr{Vul. Naukova 13, Dnepropetrovsk 50, Ukraine 49050} \email{[email protected]} \thanks{The authors were supported by Proyecto MTM2010-19938-C03-03 of MICINN (Spain), the Government of Arag\'{o}n (Spain) and FEDER funds from European Union} \author{J. OTAL} \address{Department of Mathematics - IUMA, University of Zaragoza} \curraddr{Pedro Cerbuna 12, 50009 Zaragoza, Spain} \email{[email protected]} \thanks{} \author{I. Ya. SUBBOTIN} \address{Department of Mathematics and Natural Sciences, National University} \curraddr{5245 Pacific Concourse Drive, LA, CA 90045, USA} \email{[email protected]} \thanks{} \subjclass[2010]{Primary 20F14} \date{} \dedicatory{} \begin{abstract} R. Baer proved that if the factor-group $G/\zeta_n(G)$ of a group $G$ by the member $\zeta_n(G)$ of its upper central series is finite (here $n$ is a positive integer), then the member $\gamma_{n+1}(G)$ of the lower central series of $G$ is also finite. In particular, in this case, the nilpotent residual of $G$ is finite. This theorem admits the following simple generalization, published very recently by M. de Falco, F. de Giovanni, C. Musella and Ya. P. Sysak: ``If the factor-group $G/Z$ of a group $G$ modulo its upper hypercenter $Z$ is finite, then $G$ has a finite normal subgroup $L$ such that $G/L$ is hypercentral''. In the current article we offer a new, much shorter proof of this theorem and sharpen it substantially. In fact, we prove that if $|G/Z| = t$ then $|L|\leq t^k$, where $k = \frac{1}{2}(\log_pt+1)$ and $p$ is the least prime divisor of $t$. \end{abstract} \maketitle \section{Introduction} One of the important long-standing results in the Theory of Groups is a classical theorem due to I. Schur \cite{SI1904}, which establishes a connection between the factor-group $G/\zeta(G)$ of a group $G$ modulo its center $\zeta(G)$ and the derived subgroup $[G,G]$ of $G$. It follows from Schur's theorem \cite{SI1904} that \textit{if $G/\zeta(G)$ is finite then $[G,G]$ is also finite}. A natural question related to this result appears here, namely the question regarding the relationship between the orders $|G/\zeta(G)|$ and $|[G,G]|$. J. Wiegold in the paper \cite{WJ1956} obtained the following answer to this question. Let $G$ be a group such that $|G/\zeta(G)| = t$ is finite. J. Wiegold proved that there exists a function $w$ such that $|[G,G]|\leq w(t)$. He was also able to obtain for this function the value $w(t) = t^m$, where $m = \frac{1}{2}(\log_pt-1)$ and $p$ is the least prime divisor of $t$. Later on, J. Wiegold was able to show that this bound may be attained if and only if $t = p^n$ for some prime $p$ (\cite{WJ1965}). When $t$ has more than one prime divisor, the picture becomes more complicated. Various generalizations of Schur's theorem can be found in the mathematical literature. One of the most interesting approaches is captured by the following question: \textit{for which properties of the factor-group $G/\zeta(G)$ does the derived subgroup $[G,G]$ satisfy the same property?} A class of groups $\mathfrak{X}$ is said to be \textit{a Schur class} if for every group $G$ such that $G/\zeta(G)\in\mathfrak{X}$ the derived subgroup $[G,G]$ also belongs to $\mathfrak{X}$. Schur classes were introduced in the paper \cite{FGK1995}.
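To illustrate Wiegold's bound (an example we add here for orientation; it is not taken from the papers cited above), let $G$ be an extraspecial group of order $p^3$. Then $\zeta(G) = [G,G]$ has order $p$, so $t = |G/\zeta(G)| = p^2$ and $m = \frac{1}{2}(\log_p t - 1) = \frac{1}{2}$, whence Wiegold's estimate gives \begin{equation*} |[G,G]| \leq t^{m} = (p^2)^{1/2} = p, \end{equation*} which is exactly the order of $[G,G]$; thus the bound is attained, in accordance with the case $t = p^n$ mentioned above.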
Besides the obvious examples of the classes of finite and of locally finite groups, the class of polycyclic--by--finite groups and the class of Chernikov groups are also Schur classes (see, for example, \cite[Theorem 3.9]{KOS2007}). In the paper \cite{FGK1995} other Schur classes were found as well. In the paper \cite{BR1952} R. Baer generalized Schur's theorem in a different direction. We recall that \textit{the upper central series of a group $G$} is the ascending series \begin{equation*} \langle 1\rangle = \zeta_0(G)\leq\zeta_1(G)\leq\cdots\leq\zeta_\alpha(G)\leq\zeta_{\alpha+1}(G) \leq\cdots\leq\zeta_\delta(G) = \zeta_\infty(G) \end{equation*} where $\zeta_1(G) = \zeta(G)$ is the center of $G$, and recursively $\zeta_{\alpha+1}(G)/\zeta_\alpha(G) = \zeta(G/\zeta_\alpha(G))$ for all ordinals $\alpha$ and $\zeta_\lambda(G) = \bigcup_{\mu<\lambda}\zeta_\mu(G)$ for every limit ordinal $\lambda$. The last term $\zeta_\infty(G)$ of this series is called \textit{the upper hypercenter of $G$}. $G$ itself is called \textit{hypercentral} if $\zeta_\infty(G)=G$. In general, the length of the upper central series of $G$ is denoted by $zl(G)$. On the other hand, \textit{the lower central series of $G$} is the descending series \begin{equation*} G = \gamma_1(G)\geq\gamma_2(G)\geq\cdots\geq\gamma_\alpha(G)\geq\gamma_{\alpha+1}(G)\geq\cdots \end{equation*} where $\gamma_2(G) = [G,G]$, and recursively $\gamma_{\alpha+1}(G) = [\gamma_\alpha(G),G]$ for all ordinals $\alpha$ and $\gamma_\lambda(G) = \bigcap_{\mu<\lambda}\gamma_\mu(G)$ for every limit ordinal $\lambda$. R. Baer proved that \textit{if for some positive integer $n$ the factor-group $G/\zeta_n(G)$ is finite, then $\gamma_{n+1}(G)$ is finite too} (\cite{BR1952}). In particular, in this case the nilpotent residual of $G$ (that is, the intersection of all normal subgroups $N$ of $G$ such that $G/N$ is nilpotent) is finite. Very recently, in the paper \cite{FGMS2011}, M. de Falco, F. de Giovanni, C. Musella and Ya. P. Sysak obtained the following generalization of this result: \noindent\textbf{Theorem A}. \textit{Let $G$ be a group and let $Z$ be the upper hypercenter of $G$. If $G/Z$ is finite, then $G$ has a finite normal subgroup $L$ such that $G/L$ is hypercentral}. In Section \ref{S2} we provide an elementary proof of this result, which is considerably shorter than the original one. Just as in the theorem of Schur, the question of finding a relationship between the factor-group $G/\zeta_\infty(G)$ and the hypercentral residual of $G$ (the intersection of all normal subgroups $N$ of $G$ such that $G/N$ is hypercentral) appears to be very natural. More specifically, \textit{is there a function (depending on the order of $G/\zeta_\infty(G)$) that bounds the order of the hypercentral residual of $G$?} In this note we show that Theorem A can be significantly improved. We prove that the order of the hypercentral residual of $G$ is bounded by a function of the order of $G/\zeta_\infty(G)$ and, moreover, we are able to give an explicit form of this function. Thus the main result of the current note is the following \noindent\textbf{Theorem B}. \textit{Let $G$ be a group and let $Z$ be the upper hypercenter of $G$. Suppose that $G/Z$ is finite and put $|G/Z| = t$. Then $G$ has a finite normal subgroup $L$ such that $G/L$ is hypercentral. Moreover, $|L|\leq t^k$, where $k = \frac{1}{2}(\log_pt+1)$ and $p$ is the least prime divisor of $t$}. \section{A short proof of Theorem A} \label{S2} The proof makes use of an auxiliary result by N. S.
Hekster \cite[Lemma 2.4]{HN1986}. \begin{lemma}[HN1986] \label{l1} Let $G$ be a group, $K$ a subgroup of $G$, and suppose that $G = K\zeta_n(G)$ for some positive integer $n$. Then the following properties hold. \begin{enumerate} \item $\gamma_{n+1}(G) = \gamma_{n+1}(K)$. \item $\zeta_n(K) = K\cap\zeta_n(G)$. \item $\gamma_{n+1}(G)\cap\zeta_n(G) = \gamma_{n+1}(K)\cap\zeta_n(K)$. \end{enumerate} \end{lemma} \noindent\textit{Proof of Theorem A}. We note that if $zl(G)$ is finite, the result follows from Baer's theorem \cite{BR1952}. Therefore we may suppose that $zl(G)$ is infinite; we may also assume that $G\ne Z$, since otherwise the statement is trivial. Let $K$ be a finitely generated subgroup with the property $G = ZK$. We have that $K$ is nilpotent--by--finite (see \cite[Proposition 3.19]{KOS2007} for example). Since $Z$ is the whole upper hypercenter of $G$, the center of $G/Z$ is trivial, so the non-trivial finite group $G/Z$ is not nilpotent; since $G/Z\cong K/(K\cap Z)$ is a homomorphic image of $K$, neither is $K$. Set $r = zl(K)$ and let $C$ be the upper hypercenter of $K$. We claim that $C=C\cap Z$. For, otherwise $CZ/Z\ne\langle 1\rangle$, which means that the upper hypercenter of $G/Z$ is not trivial, a contradiction. Then $C = C\cap Z$ as claimed. By Baer's theorem \cite{BR1952}, $\gamma_{r+1}(K)$ is finite. It follows that the nilpotent residual $L$ of $K$ is finite. We now consider the local system $\mathcal{L}$ consisting of all finitely generated subgroups of $G$ that contain $K$. Pick $V\in\mathcal{L}$ and let $C_V$ be the upper hypercenter of $V$. Clearly we have $G = ZV$ and then $C_V = V\cap Z$. Since $V\leq KZ$ and $K\leq V$, we have $V = K(V\cap Z) = KC_V$. Put $n = zl(V)$. Since $V = KC_V$, we have that $\gamma_{n+1}(V) = \gamma_{n+1}(K)$ by Lemma \ref{l1}. In particular, $\gamma_{n+1}(K)$ is normal in $V$. Since $L$ is a characteristic subgroup of $\gamma_{n+1}(K)$, $L$ is normal in $V$. Since this holds for each $V\in\mathcal{L}$, $L$ is normal in $G=\bigcup_{V\in\mathcal{L}}V$. We have \begin{equation*} G/ZL\cong (G/L)(ZL/L) = (KZ/L)/(ZL/L) = (K/L)(ZL/L)/(ZL/L)\cong \end{equation*} \begin{equation*} \cong (K/L)/((K/L)\cap (ZL/L)). \end{equation*} Since $K/L$ is nilpotent, so is $G/ZL$. Since the hypercenter of $G/L$ includes $ZL/L$, $G/L$ has to be hypercentral. $\Box$ \section{Proof of Theorem B} \label{S3} Let $G$ be a group, $R$ a ring and $A$ an $RG$--module. We construct \textit{the upper $RG$--central series of $A$} as the ascending chain of submodules \begin{equation*} \{0\} = A_0\leq A_1\leq\cdots\leq A_\alpha\leq A_{\alpha+1}\leq\cdots\leq A_\lambda, \end{equation*} where $A_1 = \zeta_{RG}(A) = \{a\in A\ |\ a(g-1) = 0\ \text{for all}\ g\in G\}$, $A_{\alpha+1}/A_\alpha = \zeta_{RG}(A/A_\alpha)$ for every ordinal $\alpha<\lambda$ and $\zeta_{RG}(A/A_\lambda) = \{0\}$. The last term $A_\lambda$ of this series is called \textit{the upper $RG$--hypercenter of $A$} and will be denoted by $\zeta_{RG}^\infty(A)$. If $A = A_\lambda$, then $A$ is said to be \textit{$RG$--hypercentral}. Moreover, if $\lambda$ is finite, then $A$ is said to be \textit{$RG$--nilpotent}. Let $B\leq C$ be $RG$--submodules of $A$. The factor $C/B$ is called \textit{$G$--eccentric} if $C_G(C/B)\ne G$. An $RG$--submodule $C$ of $A$ is said to be \textit{$RG$--hypereccentric} if it has an ascending series of $RG$--submodules \begin{equation*} \{0\} = C_0\leq C_1\leq\cdots\leq C_\alpha\leq C_{\alpha+1}\leq\cdots\leq C_\lambda = C \end{equation*} such that every factor $C_{\alpha+1}/C_\alpha$ is a $G$--eccentric simple $RG$--module.
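The following toy example may help to fix these notions (it is our own illustration and is not taken from the literature cited here). Let $R = \mathbb{Z}$, let $G = \langle g\rangle$ be a group of order $2$, and let $A = \mathbb{Z}/3\mathbb{Z}\oplus\mathbb{Z}/3\mathbb{Z}$ with the action $(x,y)g = (-x,y)$. Then \begin{equation*} \zeta_{\mathbb{Z}G}(A) = \{(x,y)\in A\ |\ (x,y)(g-1) = 0\} = 0\oplus\mathbb{Z}/3\mathbb{Z} \end{equation*} is already the upper $\mathbb{Z}G$--hypercenter of $A$, while the simple $\mathbb{Z}G$--submodule $\mathbb{Z}/3\mathbb{Z}\oplus 0$, on which $g$ acts as inversion, is $G$--eccentric and hence $\mathbb{Z}G$--hypereccentric; thus $A$ is the direct sum of its upper hypercenter and a hypereccentric submodule, anticipating the decomposition introduced next.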
It is said that \textit{the $RG$--module $A$ has the $Z$--decomposition} if we can express \begin{equation*} A = \zeta_{RG}^\infty(A)\bigoplus E_{RG}^\infty(A), \end{equation*} where $E_{RG}^\infty(A)$ is the maximal $RG$--hypereccentric $RG$--submodule of $A$ (D. I. Zaitsev \cite{ZD1979}). We note that, if $A$ has the $Z$ --decomposition, then $E_{RG}^\infty(A)$ includes every $RG$--hypereccentric $RG$--submodule and, in particular, it is unique. Indeed, put $ E=E_{RG}^\infty(A)$ and let $B$ be a $RG$--hypereccentric $RG$--submodule of $A$. If $(B + E)/E$ is non-zero, then it has a non-zero simple $RG$ --submodule $U/E$, say. Since $(B + E)/E \cong B/(B \cap E)$, $U/E$ is $RG$ --isomorphic to some simple $RG$--factor of $B$ and then $G \ne C_G(U/E)$. But $(B + E)/E \leq A/E\cong\zeta_{RG}^\infty(A)$ and then $G = C_G(U/E)$, a contradiction that shows $B \leq E$. Hence $E$ contains the $RG$ --hypereccentric $RG$--submodules of $A$. \begin{lemma} \label{l2} Let $G$ be a finite nilpotent group and let $A$ be a $\mathbb{Z}G$ --module. Suppose that the additive group of $A$ is periodic. Then $A$ has the $Z$--decomposition. \end{lemma} \begin{proof} Since $G$ is finite, $A$ has a local family $\mathcal{L}$ consisting of finite $\mathbb{Z}G$--submodules. If $B\in\mathcal{L}$, applying the results of \cite{ZD1979}, $B$ has the $Z$--decomposition. Pick now $C\in\mathcal{L}$ such that $B\leq C$. Then we have \[ B = \zeta_{\mathbb{Z}G}^\infty(B)\bigoplus E_{ZG}^\infty(B), C = \zeta_{\mathbb{Z}G}^\infty(C)\bigoplus E_{\mathbb{Z}G}^\infty(C). \] Clearly $\zeta_{\mathbb{Z}G}^\infty(B)\leq\zeta_{\mathbb{Z}G}^\infty(C)$ and, since $E_{\mathbb{Z}G}^\infty(C)$ includes every $\mathbb{Z}G$--hypereccentric $\mathbb{Z}G$--submodule, $E_{\mathbb{Z}G}^\infty(B)\leq E_{\mathbb{Z}G}^\infty(C)$. It follows that \[ \zeta_{\mathbb{Z}G}^\infty(A) = \bigcup_{B\in\mathcal{L}}\zeta_{\mathbb{Z}G}^\infty(B), E_{\mathbb{Z}G}^\infty(A) = \bigcup_{B\in\mathcal{L}}E_{ZG}^\infty(B). \] Therefore $A = \zeta_{\mathbb{Z}G}^\infty(A)\bigoplus E_{\mathbb{Z}G}^\infty(A)$.\end{proof} \begin{lemma} \label{l3} Let $G$ be a finite group and $Z$ a $G$--invariant subgroup of the hypercenter of $G$. Put $|G/Z| = t$. Then there exists a function $f_1$ such that the nilpotent residual of $G$ has order at most $f_1(t)$. \end{lemma} \begin{proof} The subgroup $Z$ has a series of $G$--invariant subgroups \[ \cyc{1} = Z_0\leq Z_1\leq\cdots\leq Z_n\leq Z_{n+1} = Z \] whose factors $Z_{j+1}/Z_j$ are $G$--central. Applying a result due to L. A. Kaloujnine \cite{KL1953}, the factor-group $G/C_G(Z)$ is nilpotent. Put $C = C_G(Z)$ so that $Z\leq C_G(C)$. In particular, $|G/C_G(C)|\leq t$. Clearly $C\cap Z\leq\zeta(C)$ and so $C/(Z\cap C)\cong CZ/Z$ is a finite group of order at most $t$. By Wiegold's theorem \cite{WJ1956}, the derived subgroup $D = [C,C]$ has order at most $w(t)$. We note that $D$ is $G$--invariant and $C/D$ is abelian. By the facts proved above, the factor-group $(G/D)/C_{G/D}(C/D)$ is nilpotent. By Lemma \ref{l2}, the $\mathbb{Z}G$--module $C/D$ has the $Z$--decomposition, that is $C/D = \zeta_{\mathbb{Z}G}^\infty(C/D)\bigoplus E_{RG}^\infty(C/D)$. Clearly, $(C\cap Z)D/D\leq\zeta_{\mathbb{Z}G}^\infty(C/D)$ and then $L/D = E_{\mathbb{Z}G}^\infty(C/D)$ has order at most $t$. Hence $(C/D)/(L/D)$ is $\mathbb{Z}G$--hypercentral. In other words, the hypercenter of $G/L$ contains $C/L$. Since $G/C$ is nilpotent so is $G/L$. 
Finally, $|L| = |D| |L/D|\leq tw(t)= tt^m=t^{m+1}$, where $m = \frac{1}{2}(\log_p t-1)$ and $p$ is the least prime divisor of $t$, so that $m + 1 = \frac{1}{2}(\log_p t-1) + 1 = \frac{1}{2}(\log_p t + 1)$. Therefore, it suffices to put $f_1(t) = t^k$, where $k = \frac{1}{2}(\log_p t + 1)$ and $p$ is the least prime divisor of $t$.\end{proof} If $G$ is a group, we denote by $Tor(G)$ the maximal periodic normal subgroup of $G$. $Tor(G)$ is a characteristic subgroup of $G$ and, if $G$ is locally nilpotent, $G/Tor(G)$ is torsion-free. \begin{lemma} \label{l4} Let $G$ be a finitely generated group and $Z$ a $G$--invariant subgroup of the hypercenter of $G$. Suppose that $|G/Z| = t$ is finite. Then $G$ has a finite normal subgroup $L$ such that $G/L$ is nilpotent. Moreover, $|L|\leq f_1(t)$. \end{lemma} \begin{proof} Since $G/Z$ is finite, $Z$ is finitely generated. It follows that $Z$ is nilpotent. Moreover, $zl(G)$ is finite. By Baer's theorem \cite{BR1952}, $G$ has a finite normal subgroup $F$ such that $G/F$ is nilpotent. Being finitely generated, $G/F$ has a finite periodic part $Tor(G/F) = K/F$. As we remarked above, the factor-group $(G/F)/(K/F)\cong G/K = B$ is torsion-free and nilpotent. We have that the subgroup $Z$ is nilpotent and $T = Tor(G)$ is finite. Then $Z$ has a torsion-free normal subgroup $U$ such that the orders of the elements of $Z/U$ are divisors of some positive integer $k$ (see \cite[Proposition 2]{HK1995} for example). Put $V = Z^k$ so that $V\leq U$ and $V$ is also torsion-free. By construction, $V$ is $G$--invariant and $G/V$ is periodic. Being a finitely generated periodic nilpotent--by--finite group, $C = G/V$ is finite. By Lemma \ref{l3}, the nilpotent residual $D$ of $C$ has order at most $f_1(t)$. Clearly $V\cap K = \cyc{1}$. Applying Remak's theorem, we obtain an embedding $G\leq G/V\times G/K = C\times B = H$. Since $B$ is torsion-free nilpotent, the nilpotent residual of $H$ is exactly $D$. It follows that $G/(G\cap D)\cong GD/D\leq H/D$ is nilpotent. This shows that $G\cap D$ includes the nilpotent residual $L$ of $G$. In particular, $L$ is finite and moreover $|L|\leq |G\cap D|\leq |D|\leq f_1(t)$.\end{proof} We are now in a position to show the main result of this paper. \noindent\textit{Proof of Theorem B}. Since $G/Z$ is finite, there exists a finitely generated subgroup $K$ such that $G = KZ$. We pick the family $\Sigma$ of all finitely generated subgroups of $G$ that contain $K$. Clearly $G$ is $FC$--hypercentral and then every finitely generated subgroup of $G$ is nilpotent--by--finite (see \cite[Proposition 3.19]{KOS2007} for example). If $U\in\Sigma$, then the hypercenter of $U$ includes a $U$--invariant subgroup $U\cap Z = Z_U$ such that $U/Z_U$ is nilpotent and has order at most $t$. By Lemma \ref{l4}, $U$ has a finite normal subgroup $H_U$ such that $U/H_U$ is nilpotent and $|H_U|\leq f_1(t)$. Being finite--by--nilpotent, the nilpotent residual $L_U$ of $U$ is finite and $L_U$ has order at most $f_1(t)$. Pick $Y\in\Sigma$ such that $|L_Y|$ is maximal and let $\Sigma_1$ be the family of all finitely generated subgroups of $G$ that contain $Y$. Pick $U\in\Sigma_1$. Then $Y\leq U$. The factor-group $U/L_U$ is nilpotent and, since $Y/(Y\cap L_U)\cong YL_U/L_U\leq U/L_U$, $Y/(Y\cap L_U)$ is nilpotent. It follows that $L_Y\leq Y\cap L_U$ and then $L_Y\leq L_U$. But $|L_Y|$ is maximal, so that $L_U = L_Y$. In particular, $L_Y$ is normal in $U$ for every $U\in\Sigma_1$. Then $L_Y$ is normal in $\bigcup_{U\in\Sigma_1}U = G$ and $U/L_Y$ is nilpotent.
Thus $G/L_Y$ has a local family of nilpotent subgroups, that is, $G/L_Y$ is locally nilpotent. Then $(G/L_Y)/(ZL_Y/L_Y)$ is nilpotent, being a finite quotient of a locally nilpotent group. It follows that $G/L_Y$ is hypercentral, because the upper hypercenter of $G/L_Y$ includes $ZL_Y/L_Y$. $\Box$ \end{document}
\begin{document} \begin{abstract} A $(d,k)$ set is a subset of $\rea^d$ containing a translate of every $k$-dimensional plane. Bourgain showed that for $k \geq \kcrit(d)$, where $\kcrit(d)$ solves $2^{\kcrit-1}+\kcrit = d$, every $(d,k)$ set has positive Lebesgue measure. We give a short proof of this result which allows for an improved $L^p$ estimate of the corresponding maximal operator, and which demonstrates that a lower value of $\kcrit$ could be obtained if improved mixed-norm estimates for the $x$-ray transform were known. \end{abstract} \title{Bounds for Kakeya-type maximal operators associated with $k$-planes} \section{Introduction} A measurable set $E \subset \rea^d$ is said to be a $(d,k)$ set if it contains a translate of every $k$-dimensional plane in $\rea^d$. Once the definition is given, the question of the minimum size of a $(d,k)$ set arises. This question has been extensively studied for the case $k=1$, the Kakeya sets. It is known that there exist Kakeya sets of measure zero, and these are called Besicovitch sets. It is conjectured that all Besicovitch sets have Hausdorff dimension $d$. For $k \geq 2$, it is conjectured that $(d,k)$ sets must have positive measure, i.e. that there are no $(d,k)$ Besicovitch sets. These size estimates are related to $L^p$ bounds on two maximal operators which we define below. Let $G(d,k)$ denote the Grassmannian manifold of $k$-dimensional linear subspaces of $\rea^d$. For $L \in G(d,k)$ we define \[ \na^k [f](L) = \sup_{x \in \rea^d} \int_{x + L} f(y) dy \] where we will only consider functions $f$ supported on the unit ball $B(0,1) \subset \rea^d$. A limiting and rescaling argument shows that if $\na^k$ is bounded for some $p < \infty$ from $L^p(\rea^d)$ to $L^1(G(d,k))$, then $(d,k)$ sets must have positive measure. By testing $\na^k$ on the characteristic function of $B(0,\delta)$, $\chi_{B(0,\delta)}$, one sees that such a bound may only hold for $p \geq \frac{d}{k}$. For $ L $ in $ G(d,k) $ and $ a \in \rea^d $ define the $ \delta $ plate centered at $a$, $ L_\delta(a) $, to be the $\delta$ neighborhood in $ \rea^d $ of the intersection of $ B(a,\frac{1}{2}) $ with $ L + a $. Fixing $L$, considering $\na^k \chi_{L_\delta(0)}$, and using the fact that the dimension of $G(d,k)$ is $k(d-k),$ we see that a bound into $L^q(G(d,k))$ can only hold for $q \leq kp$. This leads to the following conjecture, where the case $k=1$ is excluded due to the existence of Besicovitch sets. \begin{conj} \label{conjbdna} For $2 \leq k < d, p > \frac{d}{k}, 1 \leq q \leq k p$ \[ \|\na^k f \|_{L^q(G(d,k))} \lesssim \|f\|_{L^p(\rea^d)}. \] \end{conj} It is also useful to consider a generalization of the Kakeya maximal operator, defined for $L \in G(d,k)$ by \[ \ma^k_{\delta}[f](L) = \sup_{a \in \rea^d} \frac{1}{\leb^d(L_\delta(a))} \int_{L_\delta(a)} f(y) dy \] where $\leb^d$ denotes Lebesgue measure on $\rea^d$. Using an argument analogous to that in Lemma 2.15 of \cite{bg}, one may see that a bound \begin{equation} \label{bound} \|\ma^k_\delta f \|_{L^1(G(d,k))} \lesssim \delta^{\frac{-\alpha}{p}} \|f\|_{L^p(\rea^d)} \end{equation} where $ \alpha > 0 $ and $ p < \infty $, implies that the Hausdorff dimension of any $ (d,k) $ set is at least $ d - \alpha $. Considering $\ma^k_\delta \chi_{B(0,\delta)}$ and $\ma^k_\delta \chi_{L_\delta(0)}$, we formulate \begin{conj} \label{conjbdma} For $k \geq 1, p < \frac{d}{k}, q \leq (d-k)p'$ \[ \|\ma^k_\delta f \|_{L^q(G(d,k))} \lesssim \delta^{k-\frac{d}{p}} \|f\|_{L^p(\rea^d)}. 
\] \end{conj} In \cite{fa1} Falconer showed that, for any $\epsilon > 0$, $\na^k$ is bounded from $L^{\frac{d}{k}+ \epsilon}(\rea^d)$ to $L^1(G(d,k))$ when $k > \frac{d}{2}$. Later, in \cite{bg}, Bourgain used a Kakeya maximal operator bound combined with an $L^2$ estimate of the $x$-ray transform to show that $\na^k$ is bounded from $L^p(\rea^d)$ to $L^p(G(d,k))$ for $(d,k,p) = (4,2,2 + \epsilon)$ and $(d,k,p) = (7,3,3 + \epsilon)$. He then showed, using a recursive metric entropy estimate, that for $ d \leq 2^{k-1} + k $, $\na^k$ is bounded for a large unspecified $p$. Substituting in the proof Katz and Tao's more recent bound for the Kakeya maximal operator from \cite{kt} \begin{equation} \label{ktkakbound} \| \ma^1_\delta f \|_{L^{n+\frac{3}{4}}(G(n,1))} \lesssim \delta^{-\left(\frac{3(n-1)}{4n+3} + \epsilon\right)} \|f\|_{L^{\frac{4n+3}{7}}(\rea^n)} \end{equation} one now sees that this holds for $k > \kcrit(d)$ where \begin{equation} \label{kcritdef} \kcrit(d) \text{\ solves\ } d = \frac{7}{3} 2^{\kcrit-2} + \kcrit. \end{equation} By \Holder 's inequality, the following is true for any $k$-plate $L_\delta$ and positive $f$ \[ \int_{L_\delta}f \ dx \lesssim \delta^{\frac{d-k}{r'}} \left( \int_{L^\perp} \left(\int_{L+y} f(x)\ d\leb^k(x) \right)^r d\leb^{d-k}(y) \right)^{\frac{1}{r}}. \] Combining this with the $L^p \rightarrow L^q(L^r)$ bounds for the $k$-plane transform which were proven by Christ in Theorem A of \cite{ch}, we see that Conjecture \ref{conjbdma} holds with $ p \leq \frac{d+1}{k+1} $. Except for a factor of $\delta^{-\epsilon}$, the same bound for $ \ma^k_\delta $ was proven with $k=2$ by Alvarez in \cite{al} using a geometric-combinatorial ``bush''-type argument. Alvarez also used a ``hairbrush'' argument to show that $(d,2)$ sets have Minkowski dimension at least $\frac{2d+3}{3}$. More recently, Mitsis proved a similar maximal operator bound in \cite{mi2} and showed that $(d,2)$ sets have Hausdorff dimension at least $\frac{2d+3}{3}$ in \cite{mi1}. In \cite{bu}, Bueti extended these dimension estimates, in the context of finite fields, to $(d,k)$ sets, showing that $(d,k)$ sets in $F^d$ have dimension at least $\frac{k(d+1)+1}{k+1}$. In \cite{rogers}, Rogers gave estimates for the Hausdorff dimension of sets which contain planes in directions corresponding to certain curved submanifolds of $G(4,2)$. Our main result is the following. \begin{thm} \label{boundonnak} Suppose $4 \leq k < d$ and $k > \kcrit(d)$, where $\kcrit(d)$ is defined in (\ref{kcritdef}). Then \begin{equation} \label{finalnakbound} \|\na^k f\|_{L^{p}(G(d,k))} \lesssim \|f\|_{L^{p}(\rea^d)} \end{equation} for $f$ supported on the unit ball and $p \geq \frac{d-1}{2}.$ If, additionally, we have $k-j > \kcrit(d-j)$ for some integer $j$ in $[1,k-4]$, then we may take $p \geq \frac{d-1}{2+j}.$ \end{thm} For $k < \kcrit(d)$, we do not have a bound for $\na^k$, however our technique yields certain bounds for $\ma^k_\delta$. 
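For orientation, we record a numerical illustration (added here; it is not part of the original statement): since $\frac{7}{3}2^{\kcrit-2}+\kcrit$ equals $\frac{23}{3}\approx 7.67$ at $\kcrit=3$ and $\frac{40}{3}\approx 13.33$ at $\kcrit=4$, definition (\ref{kcritdef}) gives $3 < \kcrit(10) < 4$. Thus, for $d=10$, Theorem \ref{boundonnak} applies to every $4 \leq k \leq 9$ and yields (\ref{finalnakbound}) for $p \geq \frac{9}{2}$.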
\begin{thm} \label{mabounds} \[ \|\ma^k_\delta f\|_{L^q(G(d,k))} \lesssim \delta^{-\frac{\alpha}{p}} \|f\|_{L^p(\rea^d)} \] when \begin{equation} \label{sharpp} k \geq 2, \ \alpha = d - kp + \epsilon, \ p = \frac{d}{k+\frac{3}{4}}, \ q \leq (d-k)\left(\frac{4(d-(k-1))}{7}\right)' \end{equation} or \begin{equation} \label{nonL2ma} k \geq 2, \ \alpha = \frac{3(d-k)}{7(2^{k-1})}+ \epsilon, \ p=\frac{d+1}{2}, \ q=d+1 \end{equation} or \begin{equation} \label{L2ma} 3 \leq k \leq \kcrit(d), \ \alpha = \frac{3(d-k)}{7(2^{k-2})} - 1 + \epsilon, \ p = q = \frac{d}{2} \end{equation} where $\epsilon > 0$ may be taken arbitrarily small. \end{thm} In (\ref{sharpp}) we have an optimal value for $p$ relative to $\alpha$, but a non-optimal value for $q$. In (\ref{nonL2ma}) and (\ref{L2ma}) we have improved values of $\alpha$ at the cost of a non-optimal $p$. For the ``non-borderline'' $k$, specifically when $k+1 < \kcrit(d+1)$, (\ref{nonL2ma}) gives a smaller value of $\alpha$ than (\ref{L2ma}). The number $p = \frac{d-1}{2+j}$ in Theorem \ref{boundonnak} and the number $p = \frac{d}{k+\frac{3}{4}}$ in Theorem \ref{mabounds} are approximate and may be slightly improved through careful numerology. Also, in (\ref{L2ma}) we may take $k = 2$, but a slightly higher value of $p$ and $q$ is then required. We prove (\ref{sharpp}) and (\ref{nonL2ma}) in Section \ref{armob} through a recursive maximal operator bound which is derived using Drury and Christ's bounds for the $x$-ray transform and which is inspired by Bourgain's recursive metric entropy estimates. This recursive maximal operator bound is a slight improvement of the result in \cite{ro}, which will remain unpublished, and the new bound comes with a vastly simplified proof afforded by the explicit use of the $x$-ray transform. Additionally our argument reveals that with certain adjustments of $p$ and $q$, the number $2$ in the definition of $\kcrit(d)$ and in the definition of $\alpha$ in (\ref{nonL2ma}) and (\ref{L2ma}) may be replaced by the ratio $\frac{\tilde{r}}{\tilde{p}}$ if the $x$-ray transform is known to be bounded, for certain values of $n$, from $L^{p_n}(\rea^n)$ to $L^{q_n}_{\sph^{n-1}}(L^{r_n}_{\rea^{n-1}})$ for any $r_n,p_n,q_n$ satisfying $\frac{r_n}{p_n} = \frac{\tilde{r}}{\tilde{p}}$. We prove (\ref{L2ma}) and Theorem \ref{boundonnak} in Section \ref{fourthsection}. There, we combine (\ref{sharpp}) and (\ref{nonL2ma}) with the $L^2$ method which Bourgain used to give bounds for $\na^k$ when $(d,k) = (4,2)$ or $(7,3)$. From (\ref{nonL2ma}) and (\ref{L2ma}) we see that, for $k \geq 2$, the Hausdorff dimension of any $(d,k)$ set is at least \[ \mathrm{min}\left(d, \max\left(d- \frac{3(d-k)}{7(2^{k-2})} + 1, d- \frac{3(d-k)}{7(2^{k-1})}\right)\right). \] When $(d-k) < 7$, it is preferable to start with Wolff's $L^{\frac{n+2}{2}}$ bound for the Kakeya maximal operator from \cite{wo2}, instead of (\ref{ktkakbound}). A similar procedure then gives the lower bound \[ \mathrm{min}\left(d, \max\left(d - \frac{d-k-1}{2^{k-1}} + 1, d - \frac{d-k-1}{2^k}\right)\right) \] for the Hausdorff dimension of a $(d,k)$ set. It should be noted that the dimension estimates provided by applying (\ref{nonL2ma}) and it's Wolff-variant are also a direct consequence of the metric entropy estimates in \cite{bg}. However, to the best of the author's knowledge they have not previously appeared in the literature, even without the improvement obtained from \cite{wo} and \cite{kt}. 
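As a concrete instance of these lower bounds (an arithmetic illustration we add here), take $(d,k) = (10,3)$, for which $k \leq \kcrit(d)$: the first displayed quantity is $\min\bigl(10, \max(10 - \tfrac{21}{14} + 1,\ 10 - \tfrac{21}{28})\bigr) = \min(10, \max(9.5,\ 9.25)) = 9.5$, so every $(10,3)$ set has Hausdorff dimension at least $9.5$.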
\section{A recursive maximal operator bound} \label{armob} We start with the definition of the measure we will use on $G(d,k)$. Fix any $L \in G(d,k)$. For a Borel subset $F$ of $G(d,k)$ let \[ \gra^{(d,k)}(F)=\ort(\{\theta \in O(d): \theta(L) \in F\}) \] where $\ort$ is normalized Haar measure of the orthogonal group on $\rea^d$, $O(d)$. By the transitivity of the action of $O(d)$ on $G(d,k)$ and the invariance of $\ort$, it is clear that the definition is independent of the choice of $L$. Also note that $\gra^{(d,k)}$ is invariant under the action of $O(d)$. By the uniqueness of uniformly-distributed measures (see \cite{mt}, pages 44-53), $\gra^{(d,k)}$ is the unique normalized Radon measure on $G(d,k)$ invariant under $O(d)$. It will be necessary to use an alternate formulation of $\gra^{(d,k)}$. For each $ \xi $ in $ \sph^{d-1} $ let $ T_\xi:\xi^\perp \rightarrow \rea^{d-1} $ be an orthogonal linear transformation. Then $ T_\xi^{-1} $ identifies $ G(d-1,k-1) $ with the $ k-1 $ dimensional subspaces of $ \xi^\perp $. Now, define $T:\sph^{d-1}\times G(d-1,k-1) \rightarrow G(d,k)$ by \[ T(\xi,M)=\spa(\xi,T_\xi^{-1}(M)). \] Choosing $T_\xi$ continuously on the upper and lower hemispheres of $ \sph^{d-1} $, $T^{-1}$ identifies the Borel subsets of $G(d,k)$ with the completion of the Borel subsets of $\sph^{d-1} \times G(d-1,k-1)$. Under this identification, by uniqueness of rotation invariant measure, we have \begin{equation} \label{graproduct} \gra^{(d,k)}(F) = \sigma^{d-1} \times \gra^{(d-1,k-1)}(T^{-1}(F)) \end{equation} where $\sigma^{d-1}$ denotes normalized surface measure on the unit sphere. For a function $f$ on $\rea^d$, $\xi \in \sph^{d-1}$, and $y \in \xi^{\perp},$ the $x$-ray transform of $f$ is defined \[ f_\xi(y) = \int_{\rea} f(y + t\xi)\ dt. \] It is conjectured that the $x$-ray transform is bounded from $L^p(\rea^d)$ to $L^q_{\sph^{d-1}}(L^r_{\rea^{d-1}})$ when $p,q,r$ satisfy \begin{eqnarray} \nonumber r &<& \infty \\ \label{rpcondition} p &=& \frac{rd}{d + r - 1} \\ \nonumber q &\leq& r'd. \end{eqnarray} This was shown to hold in \cite{dr} for $p < \frac{d+1}{2}$ and in \cite{ch} for $p=\frac{d+1}{2}$. Also, see \cite{wo} and \cite{lt} for certain improvements. In the following proposition we exploit the fact that $r > p$ when $r \neq 1$ in (\ref{rpcondition}), i.e. that the $x$-ray transform is $L^p$-improving. \begin{prop} \label{maxopone} Suppose that $p \leq d+1$ and $k \geq 2$. Then a bound \[ \|\ma_\delta^{k-1}f \|_{L^q(G(d-1,k-1))} \lesssim \delta^{-\frac{\alpha}{p}} \|f\|_{L^p(\rea^{d-1})} \] for all $f \in L^{p}(\rea^{d-1})$ implies the bound \[ \|\ma_\delta^{k}f \|_{L^{\tilde{q}}(G(d,k))} \lesssim \delta^{-\frac{\tilde{\alpha}}{\tilde{p}}} \|f\|_{L^{\tilde{p}}(\rea^{d})} \] for all $f \in L^{\tilde{p}}(\rea^{d})$ with \[ \tilde{p} = p\frac{d}{d+p-1}, \ \ \ \tilde{\alpha} = \alpha\ \frac{\tilde{p}}{p} = \alpha\ \frac{d}{d + p - 1}, \ \ \ \text{and}\ \ \ \tilde{q} = \min(q,dp'). \] \end{prop} \begin{proof} Without loss of generality, we assume that $f$ is positive. Let $L \in G(d,k)$ and suppose that $L = \spa(\xi, T_\xi^{-1}(M))$ where $M \in G(d-1,k-1)$. Let $a_L \in \rea^d$ and let $a_M = T_\xi(\proj_{\xi^{\perp}}(a_L))$, where $\proj$ denotes orthogonal projection. Then \begin{eqnarray*} \int_{L_\delta(a_L)} f(y)\ dy \leq \int_{M_\delta(a_M)} \int_{\rea} f(T_\xi^{-1}(x) + t \xi)\ dt\ dx \\ = \int_{M_\delta(a_M)} f_\xi(T_\xi^{-1}(x))\ dx \end{eqnarray*} where $L_{\delta}(a_L)$ and $M_{\delta}(a_M)$ are $k$ and $k-1$ plates respectively. 
Noting that $d-k$ = $(d-1)-(k-1)$, it follows that \[ \ma_\delta^{k}[f](L) \lesssim \ma_\delta^{k-1}[f_\xi \composed T_\xi^{-1}](M). \] By (\ref{graproduct}), H\"{o}lder's inequality, and our hypothesized bound, we now have \begin{eqnarray*} \|\ma_\delta^{k}[f]\|_{L^{\tilde{q}}(G(d,k))} \lesssim \left( \int_{\sph^{d-1}} \int_{G(d-1,k-1)} \ma_\delta^{k-1}[f_\xi \composed T_\xi^{-1}](M)^{\tilde{q}}\ dM\ d\xi \right)^{\frac{1}{\tilde{q}}} \\\lesssim \left( \int_{\sph^{d-1}} \left(\int_{G(d-1,k-1)} \ma_\delta^{k-1}[f_\xi \composed T_\xi^{-1}](M)^{q}\ dM\right)^{\frac{\tilde{q}}{q}}\ d\xi \right)^{\frac{1}{\tilde{q}}} \\ \lesssim \delta^{-\frac{\alpha}{p}} \left( \int_{\sph^{d-1}} \left( \int_{\rea^{d-1}} (f_\xi \composed T_\xi^{-1}(x))^{p}\ dx\right)^{\frac{\tilde{q}}{p}} d\xi \right)^{\frac{1}{\tilde{q}}} \\ = \delta^{-\frac{\alpha}{p}} \left( \int_{\sph^{d-1}} \left( \int_{\xi^{\perp}} f_\xi(x)^{p}\ dx\right)^{\frac{\tilde{q}}{p}} d\xi \right)^{\frac{1}{\tilde{q}}}. \end{eqnarray*} Finally, by our restrictions on $p$ and $\tilde{q}$, we may apply Drury and Christ's bound for the $x$-ray transform, obtaining \begin{eqnarray*} \left( \int_{\sph^{d-1}} \left( \int_{\xi^{\perp}} f_\xi(x)^{p}\ dx\right)^{\frac{\tilde{q}}{p}} d\xi \right)^{\frac{1}{\tilde{q}}} \lesssim \|f\|_{L^{\tilde{p}}(\rea^{d})} \end{eqnarray*} when $\tilde{p} = \frac{pd}{d+p-1}.$ \end{proof} One should note that if $\alpha = (d-1) - (k-1)p$, then $\tilde{\alpha} = d - k \tilde{p}$. Hence, except for a non-optimal $\tilde{q}$, Proposition \ref{maxopone} yields the conjectured bound on $L^{\tilde{p}}(\rea^d)$ when applied to the conjectured bound on $L^p(\rea^{d-1})$. \begin{proof}[Proof of (\ref{sharpp})] Observing that if \begin{equation} \label{pestimate} p = \frac{d-1}{m} \ \ \ \text{then} \ \ \ \tilde{p} = \frac{(d+1)-1}{m+1}, \end{equation} we start from the bound \begin{equation} \label{weakerktkakbound} \| \ma^1_\delta f \|_{L^{(n-1)\left(\frac{4n}{7}\right)'}} \lesssim \delta^{-(\frac{3}{4} + \epsilon)} \|f\|_{L^{\frac{4n}{7}}(\rea^n)} \end{equation} with $n = d-(k-1)$, which is weaker but more convenient for numerology than (\ref{ktkakbound}). Since (\ref{weakerktkakbound}) satisfies the left side of (\ref{pestimate}) with $m = \frac{7}{4}$ and $d=n+1$, we obtain (\ref{sharpp}) after $k-1$ iterations of Proposition \ref{maxopone}. \end{proof} For a larger improvement in $\alpha$, one may interpolate the known $L^p$ bound for $\ma^{k-1}_\delta$ with the trivial $L^\infty$ bound and apply Proposition \ref{maxopone} to the resulting $L^{d+1}$ bound. This allows us to use the maximum value, $2$, of $\frac{r}{p}$ permitted by Drury and Christ's bound, and yields the following corollary. \begin{cor} \label{halfalpha} Under the assumptions of Proposition \ref{maxopone}, we may also take $\tilde{p}=\frac{d+1}{2}$, $\tilde{\alpha}=\frac{\alpha}{2}$, and $\tilde{q} = \min(\frac{(d+1)q}{p},(d+1))$. \end{cor} Due to the interpolation, Corollary \ref{halfalpha} cannot yield a bound for which $\alpha$ is sharp with respect to $p$ as in Conjecture \ref{conjbdma}. \begin{proof}[Proof of (\ref{nonL2ma})] Starting from (\ref{ktkakbound}) with $n=d-(k-1)$, we iteratively apply Corollary \ref{halfalpha} $(k-1)$ times to obtain (\ref{nonL2ma}). 
\end{proof} We would like to point out that the proof of Proposition \ref{maxopone} and Corollary \ref{halfalpha} is similar in spirit to Bourgain's recursive metric entropy estimate in the sense that a more efficient version of his technique, namely the proof of Proposition 3.1 in \cite{ro}, could be used to derive the localized non-endpoint version of the $L^{\frac{d+1}{2}} \rightarrow L^{d+1}$ $x$-ray transform bound. The idea of expressing an average over a $k$-plane as the average over a $k-1$-plane of the $x$-ray transform and then ``unraveling'' the integration over $G(d,k)$ into a product integral over $\sph^{d-1}$ and $G(d-1,k-1)$ is also due to Bourgain, as he used it in Propositions 3.3 and 3.20 of \cite{bg}. There, he gave bounds for $\na^k$ with $(d,k)=(4,2)$ and $(d,k)=(7,3)$. We state a generalization of this result below, omitting a few details from the proof, as it is essentially the same as in \cite{bg}. \section{The $L^2$ method} \label{fourthsection} Reducing $\alpha$ by a factor of two, as in Corollary \ref{halfalpha}, is not a substantial gain for small $\alpha$. By using an $L^2$ estimate of the $x$-ray transform which takes advantage of cancellation, instead of the $L^{\frac{d+1}{2}}$ bound, we may take $\tilde{\alpha} = \alpha - 1$ when $\alpha \geq 1$ and obtain a bound for $\na^k$ when $\alpha < 1$. \begin{prop} \label{bourgainrecursive} Suppose $k,p \geq 2$ and that a bound for $\ma^{k-1}_\delta$ on $L^p(\rea^{d-1})$ of the form \begin{equation} \label{assumedboundbr} \|\ma^{k-1}_\delta f\|_{L^p(G(d-1,k-1))} \lesssim \delta^{-\frac{\alpha}{p}} \|f\|_{{L^p}(\rea^{d-1})} \end{equation} is known. Then if $\alpha \geq 1$ we have the bound \begin{equation} \label{alphageq1} \|\ma^k_\delta f\|_{L^{p}(G(d,k))} \lesssim \delta^{-\frac{\alpha - 1}{p}}\|f\|_{L^{p}(\rea^d)} \end{equation} for $f \in L^p(\rea^d)$. If $\alpha < 1$ we have the bound \begin{equation} \label{negativealpha} \|\na^k f\|_{L^{p}(G(d,k))} \lesssim \|f\|_{L^{p}(\rea^d)} \end{equation} for $f \in L^{p}(\rea^d)$ supported on $B(0,1)$. \end{prop} \begin{proof}[Proof of Theorem \ref{boundonnak}] We start from the bound (\ref{nonL2ma}) with $d_0 = d - 2 -j$ and $k_0 = k - 2 - j$. This gives \begin{equation} \label{d0conditions} \alpha_0 = \frac{3(d-k)}{7\cdot 2^{k-3-j}} + \epsilon, \ p_0 = \frac{d_0 + 1}{2}, \ \text{\ and\ } q_0 = d_0 + 1. \end{equation} The condition $k-j > \kcrit(d-j)$ ensures that $\alpha_0 < 2$, and so no further improvement in $\alpha$ is necessary. Thus, we use our $j$ ``spare'' iterations to improve $p$. We note that, in Proposition \ref{maxopone}, when $m \leq d$, \begin{equation} \label{L2pcond} p \leq \frac{d}{m} \ \ \text{implies that\ }\ \tilde{p} \leq \frac{d+1}{m+1}. \end{equation} Since $p_0$ satisfies the left inequality in (\ref{L2pcond}) with $m = 2$ and $d = d_0 + 1$, we see that we may take \[ p_1 = \frac{d_1 + 1}{3},\ q_1 = d_0 + 1,\ \text{\ and\ } \alpha_1 = \alpha_0, \] where $d_1 = d_0 + 1 = d-2 - (j-1)$ and $k_1 = k_0 - 1 = k - 2 - (j-1)$. Above, we ignore the improvement in $\alpha$ and, through interpolation, we ignore some slight additional improvement in $p$. After $j-1$ further iterations, we have \begin{equation} \label{twomoreiterations} p_j = \frac{d_j + 1}{2+j},\ q_j = d_0 + 1,\ \text{\ and\ } \alpha_j = \alpha_0, \end{equation} where $d_j = d-2$ and $k_j = k-2$. Applying (\ref{alphageq1}) to (\ref{twomoreiterations}), and then applying (\ref{negativealpha}) to the result, we obtain (\ref{finalnakbound}). 
\end{proof} \begin{proof}[Proof of (\ref{L2ma})] We obtain (\ref{L2ma}) by starting from (\ref{nonL2ma}) with $d_0 = d-1$, and $k_0 = k-1$ (In the case $k=2$, we would simply start from (\ref{ktkakbound})). We then apply (\ref{alphageq1}) once. \end{proof} To derive Proposition \ref{bourgainrecursive}, we use the following estimate. Below, $\hat{f}$ denotes the Fourier transform of $f$. \begin{lem} \label{cancellation} Suppose $\hat{f} \equiv 0$ on $B(0,R)$. Then \[ \|f_\xi(y)\|_{L^2_{\xi,y}(\sph^{d-1} \times \rea^{d-1})} \lesssim R^{-\frac{1}{2}}\|f\|_{L^2(\rea^d)}. \] \end{lem} The above lemma was proven in \cite{bg}, but we give a different proof which yields a slightly stronger result. \begin{lem} \label{radontransform} For $d \geq 3$ \[ \|f_\xi(y)\|_{L^2_{\xi,y}(\sph^{d-1} \times \rea^{d-1})} = C_d \|f\|_{\dot{H}^{-\frac{1}{2}}(\rea^d)} \] where $C_d$ is a fixed constant depending only on $d$ and $\dot{H}$ denotes the homogeneous $L^2$ Sobolev space. \end{lem} \begin{proof} Applying Plancherel's theorem to the partial Fourier transform in the $ \xi^\perp $ direction, we have for every $\xi \in \sph^{d-1}$ \[ \int_{\xi^\perp} |f_\xi(x)|^2 dx = \int_{\xi^\perp} \left| \hat{f}(\zeta) \right|^2 d\zeta \] where $\hat{f}$ denotes the full Fourier transform of $f$. Then \[ \int_{\sph^{d-1}} \int_{\xi^\perp} |f_\xi(x)|^2 dx\ d\xi = \int_{\sph^{d-1}} \int_{\rea^{d-1}} |\hat{f}\composed T_\xi^{-1}(\zeta)|^2 d\zeta\ d\xi \] where $T_{\xi}^{-1}$ is defined as in Section \ref{armob}. Using polar coordinates in the $\zeta$ variable gives \[ \int_{\sph^{d-1}} \int_{\rea^{d-1}} |\hat{f}\composed T_\xi^{-1}(\zeta)|^2 d\zeta\ d\xi = C \int_{\sph^{d-1}} \int_{\sph^{d-2}} \int_{\rea} |\hat{f}\composed T_\xi^{-1}(\omega r)|^2 r^{d-2}\ dr\ d\omega\ d\xi. \] By the uniqueness of rotation invariant measures on $\sph^{d-1}$, we have for every $g$ \[ \int_{\sph^{d-1}} \int_{\sph^{d-2}} g(T_{\xi}^{-1}(\omega)) d\omega \ d\xi = \widetilde{C} \int_{\sph^{d-1}} g(\xi) d\xi. \] Then, since $T_{\xi}^{-1}(\omega r) = r T_{\xi}^{-1}(\omega)$ \begin{eqnarray*} \int_{\sph^{d-1}} \int_{\sph^{d-2}} \int_{\rea} |\hat{f}\composed T_\xi^{-1}(\omega r)|^2 r^{d-2}\ dr\ d\omega\ d\xi &=& \widetilde{C} \int_{\sph^{d-1}} \int_{\rea}|\hat{f}(r\xi)|^2 r^{d-2}\ dr\ d\xi \\ &=& \overline{C} \int_{\rea^{d}} |\hat{f}(\zeta)|^2 |\zeta|^{-1} \ d\zeta. \end{eqnarray*} \end{proof} Let $f$ be a nonnegative function supported on the unit ball in $\rea^{d}$. To apply Lemma \ref{cancellation}, we use a Littlewood-Paley decomposition, writing \[ f = \sum_{j = 0}^{\infty} f_j \] where $f_j = f * \phi_j$, $\hat{\phi}_0 = \chi_{B(0,1)}$, and $\hat{\phi}_{j} = \chi_{B(0,2^j)} - \chi_{B(0,2^{j-1})}$ for $j > 0$. Since $f$ is supported on the unit ball, we may switch the order of integration between convolution and the $x$-ray transform to obtain \[ \|(f_j)_{\xi}(y)\|_{L^\infty_{\xi,y}} \lesssim \|f\|_{L^{\infty}(\rea^d)} \] uniformly in $j$. Hence, interpolation with Lemma \ref{cancellation} gives \begin{equation} \label{L2interpolant} \|(f_j)_{\xi}(y)\|_{L^p_{\xi,y}} \lesssim (2^{-j})^{\frac{1}{p}} \|f\|_{L^p(\rea^d)} \end{equation} for any $p \geq 2$. Following the proof of Proposition \ref{maxopone}, we observe that for $L = \spa(\xi,T_{\xi}^{-1}(M))$ we have \begin{equation} \label{macomposed} \ma^k_{\delta}[f](L) \lesssim \ma^{k-1}_{\delta}[f_{\xi}\composed T_{\xi}^{-1}](M). 
\end{equation} Approximating $\chi_{M_{\delta}}$ by a version with compact Fourier-support and estimating the Schwartz-tails, one sees that \[ \ma^{k-1}_{\delta}(g) \lesssim \ma^{k-1}_{\delta}(|\tilde{g}|) \] for nonnegative functions $g$, and any function $\tilde{g}$ which satisfies $\hat{\tilde{g}}=\hat{g}$ on $B(0,\frac{1}{\delta})$. Thus, we obtain \begin{equation} \label{L2step1} \ma^{k-1}_{\delta}[f_{\xi}\composed T_{\xi}^{-1}](M) \lesssim \sum_{j=0}^{|\log(\delta)|+1} \ma^{k-1}_{\delta}[|(f_j)_{\xi}\composed T_{\xi}^{-1}|](M). \end{equation} Another Schwartz-tail estimate shows that for each $j$ \begin{equation} \label{L2step2} \ma^{k-1}_{\delta}[|(f_j)_{\xi} \composed T_{\xi}^{-1}|](M) \lesssim \ma^{k-1}_{2^{-j}}[|(f_j)_{\xi} \composed T_{\xi}^{-1}|](M). \end{equation} Integrating over $G(d,k)$ and combining the bounds (\ref{assumedboundbr}) and (\ref{L2interpolant}) as in the proof of Proposition \ref{maxopone}, we obtain \[ \|\ma^{k}_{\delta}f\|_{L^{p}(G(d,k))} \lesssim \sum_{j=0}^{|\log{\delta}|+1}(2^{j})^{\frac{\alpha-1}{p}}\|f\|_{L^p(\rea^d)} \lesssim \delta^{-\frac{\alpha-1}{p}} \|f\|_{L^{p}(\rea^d)} \] from (\ref{macomposed}), (\ref{L2step1}), and (\ref{L2step2}), when $\alpha \geq 1$. Similarly, we have \[ \na^k[f](L) \lesssim \na^{k-1}[f_{\xi}\composed T_{\xi}^{-1}](M) \] and \[ \na^{k-1}[|(f_j)_{\xi} \composed T_{\xi}^{-1}|](M) \lesssim \ma^{k-1}_{2^{-j}}[|(f_j)_{\xi} \composed T_{\xi}^{-1}|](M), \] giving \[ \|\na^{k}f\|_{L^{p}(G(d,k))} \lesssim \sum_{j=0}^{\infty} (2^{j})^{\frac{\alpha-1}{p}}\|f\|_{L^p(\rea^d)} \lesssim \|f\|_{L^{p}(\rea^d)} \] when $\alpha < 1$. \end{document}
\begin{document} \title{Negativity of Wigner distribution function as a measure of incompatibility} \author{Jatin Ghai$^{1,2}$} \email{[email protected]} \author{Gautam Sharma$^{1,2,3}$} \email{[email protected]} \author{Sibasish Ghosh$^{1,2}$} \email{[email protected]} \affiliation{$^1$Optics and Quantum Information Group, The Institute of Mathematical Sciences, C. I. T. Campus, Taramani, Chennai 600113, India} \affiliation{$^2$Homi Bhabha National Institute, Training School Complex, Anushaktinagar, Mumbai 400094, India} \affiliation{$^3$Center for Theoretical Physics, Polish Academy of Sciences, Aleja Lotników 32/46, 02-668 Warsaw, Poland} \begin{abstract} Measurement incompatibility and the negativity of quasiprobability distribution functions are well-known non-classical aspects of quantum systems. Both of them are widely accepted resources in quantum information processing. We present an approach to establish a connection between the negativity of the Wigner function, a well-known phase-space quasiprobability distribution, of finite-dimensional Hermitian operators and the incompatibility among them. We calculate the negativity of the Wigner distribution function for noisy eigenprojectors of qubit Pauli operators as a function of the noise and observe that the amount of negativity increases with the decrease in noise vis-à-vis the increase in the incompatibility. It becomes maximum for the set of maximally unbiased operators. Our results, although qualitative, provide a direct comparison between relative degrees of incompatibility among a set of operators for different amounts of noise. We generalize our treatment to higher-dimensional qudits for specific finite-dimensional Gell-Mann operators and observe that with an increase in the dimension of the operators, the negativity of their Wigner distribution, and hence the incompatibility, decreases. \end{abstract} \maketitle \section{Introduction} Measurement incompatibility, i.e., the existence of measurements that cannot be performed simultaneously, is one of the features of quantum theory that demarcates it from classical physics. The earliest mention of incompatibility can be traced back to Heisenberg's uncertainty principle \cite{heisenberg} and Bohr's complementarity principle \cite{bohr1928quantum}. Although the notion of incompatibility seems like a restriction on quantum measurements, it has been found to be the resource necessary for observing quantum phenomena like Bell non-locality \cite{PhysRevLett.48.291, PhysRevA.73.012112, PhysRevLett.103.230402}, quantum steering \cite{PhysRevLett.113.160402,PhysRevLett.113.160403,PhysRevLett.115.230402}, quantum contextuality \cite{kochen1975problem,PhysRevA.99.020103}, etc. These quantum phenomena, and hence incompatibility, power many quantum information processing tasks like quantum cryptography, quantum computation, quantum state discrimination, and quantum communication. As the notion of incompatibility is crucial both fundamentally and in applications of quantum theory, it is of interest to understand what aspects of non-classicality it can capture. On the other hand, it has also been argued that the negativity of joint probability distributions is an indicator of non-classicality \cite{kenfack2004negativity}. One of the reasons behind this is the fact that, by Hudson's theorem \cite{HUDSON1974249}, the Wigner distribution function becomes non-negative only for the coherent or the squeezed vacuum states.
In fact, Hudson's theorem has been shown to hold also for discrete Wigner distributions \cite{gross2006hudson}. Thus, to some extent, one can say that the negativity of joint probability distributions is a good quantifier of non-classicality. Later, it was also shown in \cite{PhysRevLett.101.020401}, that the notion of negativity of joint probability distributions is equivalent to contextuality. Inspired by the fact that measurement incompatibility is also necessary for quantum contextuality, we are interested in identifying how measurement incompatibility leads to negative joint probability distributions. Such a query has recently been explored in \cite{PhysRevA.104.042212}, where, by using $\textbf{S}$-ordered phase space distributions it was shown that the non-negativity of the joint probability distribution of POVM (positive operator-valued measurement) elements is a sufficient condition for their joint measurability. They applied their approach to analyze incompatibility-breaking channels and derived sufficient conditions for joint measurability for bosonic systems and Gaussian channels. On the other hand, in this work, we try to use the negativity of the generalized Wigner distribution function introduced in \cite{schwonnek2020wigner} as an indicator, although qualitative, of the degree of incompatibility. This Wigner distribution function is generalized in the sense that it is defined for arbitrary Hermitian operators. Though it is implicit in the definition of incompatibility of POVMs, the incompatibility must come from the non-existence of a joint probability distribution. However, it is desirable that this joint probability distribution comes from a Wigner-like quasi-probability distribution. We are able to achieve this partially such that we are able to show qualitatively for a special set of measurements the relation between the degree of incompatibility and the negative volume of the Wigner distribution by introducing a tunable noise parameter. Moreover, for more than two PVMs (projection-valued measurement), a quantifier has been missing so far\cite{Heinosaari_2016}. In this work, we are also able to show that the negativity of the Wigner distribution function can be a good candidate for the indicator of incompatibility among more than three PVMs. This paper is organized as follows: Sec.(\ref{Sec2}) encompasses the definition and some basic properties of the generalized Wigner function. In Sec.(\ref{Sec4}) we calculate the Wigner function for the noisy eigenprojectors of these qubit Pauli operators. We know that the addition of noise leads to a decrease in incompatibility. Thus, we calculate the negativity of the Wigner function as a function of the noise parameter to try to establish a connection between both. In the succeeding Sec.(\ref{Sec5}) we do a similar treatment for some specific noisy qubit Pauli operators. In Sec.(\ref{Sec7}) we generalize this procedure to arbitrary dimensional qudits for Gell-Mann operators of arbitrary dimension having some specific form. Sec.(\ref{Sec8}) concludes this paper and provides an outlook for future research directions. \section{Wigner distribution function for n arbitrary observables}\label{Sec2} We begin by introducing the generalization of the Wigner distribution function over a set of arbitrary Hermitian operators. We will briefly present the definition and properties of the Wigner distribution function here. For more details, we request the reader to refer to the original paper \cite{schwonnek2020wigner}. 
To define such a Wigner distribution function, we consider a set of $n$ bounded Hermitian operators $\{A_1, A_2,..., A_n\}$ on a $d$-dimensional Hilbert space. One of the basic properties of a Wigner distribution function is that it should reproduce the correct marginals of all possible linear combinations $\vec{\xi}\cdot \vec{A}=\sum_k\xi_kA_k$ of the operators, i.e., the Wigner distribution function must satisfy the following condition \begin{align}\label{margcond} \int d^{n} a\, \mathscr{W}_{\rho}\left(a_{1}, \ldots, a_{n}\right) f(\vec{\xi} \cdot \vec{a}) ={\rm tr} [{\rho}f({\vec{\xi}}\cdot\vec{A})], \end{align} where $f:\mathbb{R}\rightarrow\mathbb{C}$ is a bounded infinitely differentiable function. \textit{\textbf{Definition 1:}} If we take the test function $f(t)=e^{it}$ in Eqn.~(\ref{margcond}), we obtain the Fourier transform of $\mathscr{W}_{\rho}(\vec{a})$ as \begin{align}\label{fourierTrans} \widehat{\mathscr{W}}_{\rho}(\vec{\xi})={\rm tr} [\rho e^{i \vec{\xi} \cdot \vec{A}}]. \end{align} Using the above definition, the Wigner distribution function can be defined as the inverse Fourier transform \begin{align}\label{wignerdist} \mathscr{W}_{\rho}(\vec{a})=\frac{1}{(2 \pi)^{n}} \int d^n \xi e^{-i \vec{\xi} \cdot \vec{a}} \widehat{\mathscr{W}}_{\rho}(\vec{\xi}). \end{align} It can be checked that this expression satisfies the marginal property, \textit{i.e.}, if we integrate over one of the phase-space parameters we retrieve a valid phase-space distribution for the remaining parameters. Hence it can be used as a Wigner distribution function. There is another way to define the Wigner distribution function using the ``Weyl-ordered moments'', which also satisfies Eq.~(\ref{margcond}) as shown in \cite{schwonnek2020wigner}, but we will skip it here as it is not needed for our purpose. \subsection{Graphical Representation} As we will see in the later sections, the Wigner distribution function in Eq.~(\ref{wignerdist}) may not be well-defined at every point in the phase space. In order to visualize the behavior of such a distribution, we regularize the Wigner distribution function by taking its convolution with a Gaussian $G_{\varepsilon}$ peaked at the origin and having covariance $\varepsilon$. This can be done by multiplying $\widehat{\mathscr{W}}_{\rho}(\vec{\xi})$ in Eq.~(\ref{wignerdist}) by $\widehat{G}_{\varepsilon}$ (the Fourier transform of $G_{\varepsilon}$, which is again a Gaussian) and taking the inverse Fourier transform, i.e., \begin{align}\label{graphical} \mathscr{W}*G_{\varepsilon}=\frac{1}{(2 \pi)^{n}} \int d^n \xi e^{-i \vec{\xi} \cdot \vec{a}} \widehat{\mathscr{W}}_{\rho}(\vec{\xi})\widehat{G}_{\varepsilon}(\vec{\xi}). \end{align} The regularized Wigner distribution function might not give the correct marginal distributions, but it helps in visualizing the properties of the Wigner distribution function. We see that $\mathscr{W}_{\rho}(a)=\lim_{\varepsilon\rightarrow0}\mathscr{W}_{\rho}(a)*G_{\varepsilon}$ is an alternative definition of $\mathscr{W}_{\rho}(a)$. \subsection{Basic Properties} The generalized Wigner distribution function has several properties which are markedly different from those of the well-known form of the Wigner distribution function. We list only a few which are relevant to us. \begin{enumerate} \item $\mathscr{W}_{\rho}(\vec{a})$ is a real-valued distribution. This follows from Definition 1 and by noting that $\overline{\widehat{\mathscr{W}}_{\rho}(\vec{\xi})}=\widehat{\mathscr{W}}_{\rho}(-\vec{\xi})$.
\item The Wigner distribution function has support in a compact convex set, which is equivalent to the joint numerical range of the operators $A_k$ over all density operators, i.e., \begin{align*} \mathscr{W}_{supp}=\mathcal{R}=\left\{\{a_1,a_2,...,a_n\} \in \mathbb{R}^{n} \mid a_{k}={\rm tr} [\rho A_{k}]\ \text{for some density operator}\ \rho\right\}. \end{align*} This means that $\forall \rho, \mathscr{W}_{\rho}(\vec{a})=0$ outside of $\mathcal{R}$. \item One of the most distinguishing features of $\mathscr{W}_{\rho}(\vec{a})$ is its singularities. For finite-dimensional matrices $A_k$, the singularities of $\mathscr{W}_{\rho}(\vec{a})$ lie on the closure of the following set \begin{align*} \mathcal{S}=\big\{\vec{a} \in \mathbb{R}^{n} \mid\ & a_{k}=\left\langle\psi\left|A_{k}\right| \psi\right\rangle,\\ & \|\psi\|=1,\ (\vec{\xi} \cdot \vec{A}) \psi=\lambda \psi\big\}, \end{align*} where $\psi$ can be any eigenvector corresponding to a non-degenerate eigenvalue $\lambda$ of any $\vec{\xi}\cdot \vec{A}$. As we will see in the later sections, for certain examples, the Wigner distribution function indeed takes the form of a delta function. To visualize such a function, we will use the graphical representation of the Wigner distribution function. \item When the $A_k$ are finite-dimensional matrices and $\rho$ is a full-rank density operator, $\mathscr{W}_{\rho}(\vec{a})$ is positive if and only if all the $A_k$s commute with each other. In such a case $\mathscr{W}_{\rho}(\vec{a})$ is a sum of $\delta$-functions with weights dependent on $\rho$. \end{enumerate} For the proofs of properties 2, 3 and 4, we refer the reader to \cite{schwonnek2020wigner}. Property 4 motivates us to investigate how the negativity of the Wigner distribution function depends on the incompatibility (i.e., for sharp operators, the non-commutativity) of the Hermitian operators $A_k$. \section{Wigner distribution for noisy projections of eigenvectors of Pauli operators}\label{Sec4} In this section, we will explicitly calculate the Wigner function for noisy projections of eigenvectors of Pauli operators. The procedure is on the same lines as for the maximally unbiased case (noiseless Pauli operators) in \cite{schwonnek2020wigner}. We know that the noiseless eigenprojections are maximally incompatible with each other. Adding noise to them in the form of the identity will decrease this incompatibility. We will then calculate the negative volume of the Wigner distribution function for different choices of the noise parameter. First, let us consider two of these noisy projections. \subsection{Two operators (n=2)} \subsubsection{Calculating the Wigner function} We will consider the qubit operators to be $A_k=\frac{\lambda}{2}I+(1-\lambda)\vert + \rangle_k \langle + \vert,\ k=1,2$, where $\{\vert + \rangle_1\langle + \vert, \vert + \rangle_2\langle + \vert\}$ are respectively the projectors onto the eigenvectors of the Pauli matrices $\{\sigma_x,\sigma_y\}$ with eigenvalue $+1$, and $0\leq\lambda\leq1$ is the noise parameter. We note that the following relation holds irrespective of the choice of operators, \begin{align}\label{derivativeofWhat} \frac{\partial}{\partial \xi_{k}} \widehat{\mathscr{W}}_{\mathbb{I}}(\vec{\xi})=i\, {\rm tr} [A_{k} e^{i \vec{\xi} \cdot \vec{A}}], \end{align} which follows from the definition of $\widehat{\mathscr{W}}_{\rho}(\vec{\xi})$.
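More explicitly (a short justification added here for the reader's convenience), Duhamel's formula gives $\frac{\partial}{\partial \xi_k}e^{i\vec{\xi}\cdot\vec{A}} = i\int_0^1 e^{is\vec{\xi}\cdot\vec{A}}A_k\, e^{i(1-s)\vec{\xi}\cdot\vec{A}}\,ds$, and taking the trace and using its cyclicity collapses the integrand to $\mathrm{tr}[A_k e^{i\vec{\xi}\cdot\vec{A}}]$ for every $s$, which yields Eqn.~(\ref{derivativeofWhat}) even when the $A_k$ do not commute.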
For a qubit density matrix $\rho=(\mathbb{I}+\sum_kr_k\sigma_k)/2$, where $\vec{r}=\{r_1,r_2\}$ is the Bloch vector, the Wigner function can be written as \begin{align}\label{n2projidentorhoproj} \mathscr{W}_{\rho}^{proj++}(\vec{a})=\frac{1}{2}(1+\vec{r} \cdot \vec{a'})\mathscr{W}_{\mathbb{I}}^{proj++}(\vec{a}), \end{align} where \begin{align} a'_1=\frac{2a_1-1}{(1-\lambda)},\nonumber\\ a'_2=\frac{2a_2-1}{(1-\lambda)}. \end{align} The above relation is obtained by doing integration by parts in Eqn.(\ref{wignerdist}) and then using Eqn.(\ref{derivativeofWhat}). Moreover, this equation is always satisfied whenever the state $\rho$ can be written as a linear combination of the identity and the operators $A_k$. The Fourier transform of the Wigner function, $\widehat{\mathscr{W}}_{\mathbb{I}}^{proj++}(\vec{\xi})$, comes out to be $2e^{i\frac{(\xi_{1}+\xi_{2})}{2}}\cos\left(\frac{(\lambda-1)}{2}|\xi|\right)$ where $|\xi|=\sqrt{\xi^{2}_{1}+\xi^{2}_{2}}$. Convolving it with a scaled regularizing function of the form $e^{-\frac{\varepsilon(1-\lambda)|\xi|}{2}}$, the Wigner function comes out to be \begin{align} \mathscr{W}_{\mathbb{I}}^{proj++}(\vec{a})&= \frac{1}{(2 \pi)^{2}} \int d^2 \xi\, e^{-i\vec{\xi} \cdot \vec{a}}2e^{i\frac{(\xi_{1}+\xi_{2})}{2}}\cos\left(\frac{(\lambda-1)}{2}|\xi|\right)e^{-\frac{\varepsilon(1-\lambda)|\xi|}{2}}\nonumber\\ &= \frac{4}{(1-\lambda)^2(2 \pi)^{2}} \int d^2\xi'\, e^{-i\vec{\xi}' \cdot \vec{a}'}2\cos{|\xi'|}e^{-\varepsilon |\xi'|}\nonumber\\ &=\frac{-4}{(1-\lambda)^2\pi\left(1-|a'|^{2}\right)^{3 / 2}}, \end{align} where we have used the transformation $\vec{\xi}'=\frac{(1-\lambda)\vec{\xi}}{2}$. Using Eqn.(\ref{n2projidentorhoproj}) the total Wigner function for state $\rho$ is given as \begin{equation}\label{wigproj2} \mathscr{W}_{\rho}^{proj++}(\vec{a})=-\frac{(1+\vec{r} \cdot \vec{a}')}{2\pi(1-\lambda)^2}\frac{4}{\left(1-|a'|^{2}\right)^{3 / 2}}. \end{equation} But this is not the only combination possible for the choice of noisy projections of eigenvectors of Pauli operators. In fact there are three more possibilities, namely \begin{itemize} \item $\{(\frac{\lambda}{2}I+(1-\lambda)\vert + \rangle_1 \langle + \vert),(\frac{\lambda}{2}I+(1-\lambda)\vert - \rangle_2 \langle - \vert)\} $ \item $\{(\frac{\lambda}{2}I+(1-\lambda)\vert - \rangle_1 \langle - \vert),(\frac{\lambda}{2}I+(1-\lambda)\vert + \rangle_2 \langle + \vert)\} $ \item $\{(\frac{\lambda}{2}I+(1-\lambda)\vert - \rangle_1 \langle - \vert),(\frac{\lambda}{2}I+(1-\lambda)\vert - \rangle_2 \langle - \vert)\} $ \end{itemize} The calculations for these three proceed in the same way as in the previous case. The expressions for the Wigner function come out to be \begin{align} &\mathscr{W}_{\rho}^{proj+-}(\vec{a})=-\frac{(1+r_1 a'_1-r_2 a'_2)}{2\pi(1-\lambda)^2}\frac{4}{\left(1-|a'|^{2}\right)^{3 / 2}},\label{wigproj+-}\\ &\mathscr{W}_{\rho}^{proj-+}(\vec{a})=-\frac{(1-r_1 a'_1+r_2 a'_2)}{2\pi(1-\lambda)^2}\frac{4}{\left(1-|a'|^{2}\right)^{3 / 2}},\label{wigproj-+}\\ &\mathscr{W}_{\rho}^{proj--}(\vec{a})=-\frac{(1-\vec{r}\cdot \vec{a}')}{2\pi(1-\lambda)^2}\frac{4}{\left(1-|a'|^{2}\right)^{3 / 2}}.\label{wigproj--} \end{align} \subsubsection{Comparing the negativity} Now we will calculate the negativity of the Wigner function obtained in the previous section and analyze its variation with the noise parameter $\lambda$. The negative volume is calculated by integrating it over the phase space region where its value is negative.
Mathematically \begin{align}\label{negcalproj} \mathscr{W}_{neg}(\rho,\vec{A})= \int\frac{1}{2}(\mathscr{W}_{\rho}(\vec{a})-|\mathscr{W}_{\rho}(\vec{a})|)d^2a, \end{align} where $\vec{A}=\{A_1, A_2\}$ is the operator set for which the Wigner function is calculated. Borrowing the expression of the Wigner function for the MUB case from \cite{schwonnek2020wigner}, the negativity can be calculated as: \begin{align}\label{mubnegn2proj} \mathscr{W}_{neg}^{mub}(\rho,\vec{A})&= \int\frac{1}{2}\left(\mathscr{W}_{\rho}^{mub}(\vec{a})-|\mathscr{W}_{\rho}^{mub}(\vec{a})|\right)d^2a \nonumber\\ &= \int\frac{1}{2}\Bigg(-\frac{(1+\vec{r} \cdot \vec{a})}{2\pi}\frac{1}{\left(1-|a|^{2}\right)^{3 / 2}} \nonumber \\ &-\Big|-\frac{(1+\vec{r} \cdot \vec{a})}{2\pi}\frac{1}{\left(1-|a|^{2}\right)^{3 / 2}}\Big|\Bigg)d^2a. \end{align} Now we will compare the negativity of the noisy cases with the MUB case. But we notice that the regularising function used in the noisy cases is different from the one used there. Thus, in order to compare them, we have to normalize the Wigner distribution function in Eqn.(\ref{wigproj2}) by dividing it by the factor \begin{align*} \frac{I_{G'}}{I_{G}}&=\frac{\int d^2\xi e^{-\frac{\varepsilon(1-\lambda)}{2}\sqrt{ \xi^{2}_{1}+\xi^{2}_{2}} }}{\int d^2\xi e^{-\varepsilon\sqrt{\xi_1^2+\xi_2^2}}} \\ &=\frac{4}{(1-\lambda)^2}, \end{align*} where $I_{G'}$ is the integral of the scaled Gaussian and $I_G$ is the integral of the Gaussian function over the Fourier space. One more thing to note here is that, as the regularising functions used for different values of the noise parameter $\lambda$ are different, we cannot claim that the numerical value of the negativity of the Wigner function quantitatively indexes the degree of incompatibility. Rather, we can only compare the negativity for various values of the noise parameter and describe its variation with the degree of incompatibility qualitatively. An argument for the validity of this normalization will be given in the next section. Thus using Eqn.(\ref{wigproj2}) the negativity can be calculated as \begin{align}\label{n2noisyprojneg1} \mathscr{W}_{neg}^{proj++}(\rho,\vec{A})&= \int\frac{1}{2}\left(\mathscr{W}_{\rho}^{proj++}(\vec{a})-|\mathscr{W}_{\rho}^{proj++}(\vec{a})|\right)d^2a\nonumber\\ &=\frac{(1-\lambda)^2}{4}\mathscr{W}_{neg}^{mub}(\rho,\vec{A}). \end{align} Thus from the above expression, we see that the negative volume of the Wigner function for noisy projectors is smaller than the corresponding negative volume for MUB and further decreases with an increase in the noise. Using the expression for the Wigner distribution in Eqn.(\ref{wigproj2}) we can numerically integrate Eqn.(\ref{n2noisyprojneg1}) and calculate the negative volume of the Wigner distribution for various values of the noise parameter. Also, we know that with an increase in the noise the incompatibility between the observables decreases. From this, we have an indication that the Wigner function, through its negativity, indeed captures the incompatibility between observables, although qualitatively. Comparing this result with \cite{PhysRevA.104.042212}, we observe that the positivity of the general quasiprobability distribution function is a sufficient condition for joint measurability. Their analytical formalism gives a tight upper bound on the degree of incompatibility among Gaussian measurements for incompatibility-breaking Gaussian channels.
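As a rough numerical illustration of Eqn.(\ref{n2noisyprojneg1}) (a sketch only; the cutoff $\eta$, noise value and Bloch vector are assumptions, and the absolute numbers depend on the cutoff because the densities are not integrable up to the boundary $|a'|=1$), one can integrate the normalized closed-form densities over a slightly shrunken disc and confirm that the ratio of the noisy to the MUB negative volume is the Jacobian factor $(1-\lambda)^2/4$.
\begin{verbatim}
# Sketch: cutoff check of W_neg^{proj++} = ((1-lam)^2/4) * W_neg^{mub} for n = 2.
import numpy as np

def neg_volume_mub(r, eta=1e-3, n=801):
    """Negative volume of W_rho^{mub}(a) over |a| <= 1 - eta (Riemann sum)."""
    x = np.linspace(-1 + eta, 1 - eta, n)
    A1, A2 = np.meshgrid(x, x, indexing="ij")
    inside = A1**2 + A2**2 <= (1 - eta)**2
    denom = np.clip(1 - A1**2 - A2**2, 1e-12, None)**1.5
    W = np.where(inside, -(1 + r[0]*A1 + r[1]*A2)/(2*np.pi*denom), 0.0)
    return np.sum(0.5*(W - np.abs(W)))*(x[1] - x[0])**2

def neg_volume_noisy(r, lam, eta=1e-3, n=801):
    """Same for the normalized noisy-projector density of Eq. (wigproj2), in the a variables."""
    half = (1 - lam)/2
    x = 0.5 + half*np.linspace(-1 + eta, 1 - eta, n)        # a-grid covering the support
    A1, A2 = np.meshgrid(x, x, indexing="ij")
    P1, P2 = (2*A1 - 1)/(1 - lam), (2*A2 - 1)/(1 - lam)     # rescaled variables a'
    inside = P1**2 + P2**2 <= (1 - eta)**2
    denom = np.clip(1 - P1**2 - P2**2, 1e-12, None)**1.5
    W = np.where(inside, -(1 + r[0]*P1 + r[1]*P2)/(2*np.pi*denom), 0.0)
    return np.sum(0.5*(W - np.abs(W)))*(x[1] - x[0])**2

r, lam = (0.3, 0.1), 0.25
print(neg_volume_noisy(r, lam)/neg_volume_mub(r), (1 - lam)**2/4)   # both ~ 0.1406
\end{verbatim}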
On the other hand, in our results, as we will also see later, for a quite general class of observables the negativity of the Wigner function decreases monotonically with an increase in the noise parameter, vis-à-vis the incompatibility. Although the result is to some extent qualitative, it clearly gives a tool to compare the degree of incompatibility among operators for different amounts of added noise. Similarly, if we calculate the negative volume for the Wigner distributions in Eqns.(\ref{wigproj+-},\ref{wigproj-+},\ref{wigproj--}) by substituting them in Eqn.(\ref{negcalproj}), we get exactly the same result, Eqn.(\ref{n2noisyprojneg1}), as we got for the Wigner distribution in Eqn.(\ref{wigproj2}). Hence for further calculation, we will only calculate and analyze the Wigner function for the noisy projectors having eigenvalue +1. One important thing to note is that the negative volume of the Wigner function does not depend on the Bloch vector $\vec{r}=(r_1,r_2)$. \subsection{Three operators (n=3)} \subsubsection{Calculating the Wigner distribution} Now we will calculate the Wigner function for noisy projections of all three Pauli matrices. The qubit operators are $A_k=\frac{\lambda}{2}I+(1-\lambda)\vert + \rangle_k \langle + \vert, k=1,2,3$ where $\{\vert + \rangle_1\langle + \vert, \vert + \rangle_2\langle + \vert, \vert + \rangle_3\langle + \vert\}$ are respectively the projections of the eigenvectors of Pauli matrices $\{\sigma_x,\sigma_y,\sigma_z\}$ with eigenvalue +1 and $\lambda$ is the noise parameter. For a general qubit density matrix $\rho=(\mathbb{I}+\sum_kr_k\sigma_k)/2$, where $\vec{r}=\{r_1,r_2,r_3\}$ is the Bloch vector, the Wigner distribution function, using Eqn.(\ref{derivativeofWhat}), can be written as \begin{align}\label{n3projidentorhoproj} \mathscr{W}_{\rho}^{proj}(\vec{a})=\frac{1}{2}(1+\vec{r} \cdot \vec{a}')\mathscr{W}_{\mathbb{I}}^{proj}(\vec{a}), \end{align} where \begin{align} a'_1=\frac{2a_1-1}{(1-\lambda)},\nonumber\\ a'_2=\frac{2a_2-1}{(1-\lambda)},\nonumber\\ a'_3=\frac{2a_3-1}{(1-\lambda)}. \end{align} The Fourier transform of the Wigner function, $\widehat{\mathscr{W}}_{\mathbb{I}}^{proj}(\vec{\xi})$, comes out to be $2e^{i\frac{(\xi_{1}+\xi_{2}+\xi_{3})}{2}}\cos\left(\frac{(\lambda-1)}{2}|\xi|\right)$ where $|\xi|=\sqrt{\xi^{2}_{1}+\xi^{2}_{2}+\xi^{2}_{3}}$. Convolving it with a scaled Gaussian of the form $e^{-\frac{\varepsilon(1-\lambda)^2|\xi|^2}{4}}$ and using the transformation $\vec{\xi}'=\frac{(1-\lambda)\vec{\xi}}{2}$, the Wigner function can be calculated as \begin{align}\label{n3noisyprojidenwigner} \mathscr{W}_{\mathbb{I}}^{proj}(\vec{a})&=\frac{1}{(2 \pi)^{3}} \int d^3 \xi\, e^{-i \vec{\xi} \cdot \vec{a}} 2e^{i\frac{(\xi_{1}+\xi_{2}+\xi_{3})}{2}}\cos\left(\frac{(\lambda-1)}{2}|\xi|\right)e^{-\varepsilon|\xi'|^2} \nonumber \\&=\frac{-8}{2\pi |a'|(1-\lambda)^3} \frac{d}{d |a'|} \frac{e^{-\frac{(|a'|-1)^{2}}{4 \varepsilon}}}{\sqrt{4 \pi \varepsilon}} \nonumber \\&=\frac{-8\delta^{\prime}(|a'|-1)}{2 \pi |a'|(1-\lambda)^3}. \end{align} Therefore, the Wigner distribution function for an arbitrary qubit state can be written as \begin{align}\label{n3noisyprojrhowigner} \mathscr{W}_{\rho}^{proj}(\vec{a})=-\frac{8(1+\vec{r} \cdot \vec{a}')}{4 \pi|a'|(1-\lambda)^3} \delta^{\prime}(|a'|-1). \end{align} \subsubsection{Comparing the negativity} Now we will calculate the negativity of the Wigner function in the same way as we did previously for the n=2 case.
The negative volume can be calculated by integrating the Wigner function over the three-dimensional phase space region where it is negative, as \begin{align}\label{n3negcalproj} \mathscr{W}_{neg}(\rho,\vec{A})= \int\frac{1}{2}(\mathscr{W}_{\rho}(\vec{a})-|\mathscr{W}_{\rho}(\vec{a})|)d^3a. \end{align} Following the same logic as in the previous n=2 case, in order to compare the negativity of the noisy case with the MUB case we have to normalize the Wigner distribution function in Eqn. (\ref{n3noisyprojidenwigner}) by dividing it by the factor \begin{align*} \frac{I_{G'}}{I_{G}}&=\frac{\int d^3\xi e^{-\frac{\varepsilon(1-\lambda)^2}{4}(\xi^{2}_{1}+\xi^{2}_{2}+\xi^{2}_{3})} }{\int d^3\xi e^{-\varepsilon(\xi_1^2+\xi_2^2+\xi_3^2)}} \\ &=\frac{8}{(1-\lambda)^3}, \end{align*} where $I_{G'}$ is the integral of the scaled Gaussian and $I_G$ is the integral of the Gaussian function over the Fourier space. Thus using Eqn. (\ref{n3noisyprojrhowigner}) the negativity can be calculated as \begin{align}\label{n3noisyprojneg1} \mathscr{W}_{neg}^{proj}(\rho,\vec{A})&= \int\frac{1}{2}\left(\mathscr{W}_{\rho}^{proj}(\vec{a})-|\mathscr{W}_{\rho}^{proj}(\vec{a})|\right)d^3a\nonumber\\ &=\frac{(1-\lambda)^3}{8}\mathscr{W}_{neg}^{mub}(\rho,\vec{A}). \end{align} From the above expression, we see that the negative volume of the Wigner function for noisy projectors decreases with an increase in the noise. We calculate the negative volume of $\mathscr{W}_{\rho}^{mub}(\vec{a})$ and $\mathscr{W}_{\rho}^{proj}(\vec{a})$ analytically by using the Gaussian approximation of the delta function. The expression in Eqn.(\ref{n3noisyprojneg1}) reduces to \begin{equation} \mathscr{W}_{neg}^{proj}(\rho,\vec{A})=\frac{(\lambda-1)^{3}}{16}\left(-1+\frac{1}{\sqrt{\pi \varepsilon}}+\mathrm{Erfc} \left(\frac{1}{2\sqrt{\varepsilon}}\right) \right). \end{equation} We see from this expression that the negative volume depends on both the regularisation parameter $\varepsilon$ and the noise parameter $\lambda$. We can fix a particular resolution of the Wigner function by fixing $\varepsilon$ and analyze how the negative volume of the Wigner distribution decreases with an increase in the noise. The result is plotted in Fig.(\ref{fig:n2}). \begin{figure} \caption{Variation of negativity of Wigner distribution function with noise $\lambda$ for different values of regularising parameter $\varepsilon$ under scaled Gaussian regularization} \label{fig:n2} \end{figure} We see that for every value of $\varepsilon$ the negativity decreases with an increase in the noise parameter $\lambda$. Also, the smaller the value of the regularising parameter, the better the resolution between the negative volumes of the Wigner distribution corresponding to two different values of the noise parameter. Next, for a particular value of $\lambda$, we plot the negativity of the Wigner function against the regularizing parameter $\varepsilon$. The trend is presented in Fig.(\ref{fig:n3}). This plot again reinforces the fact that as the value of the regularising factor decreases, the difference between the negative volumes for two given values of the noise parameter keeps increasing. Hence the resolution increases. So it is preferable to have $\varepsilon \rightarrow 0$.
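For reference, the closed-form expression above can be evaluated directly; the following short sketch (with illustrative $\lambda$ and $\varepsilon$ values) reproduces the qualitative trends shown in Fig.(\ref{fig:n2}) and Fig.(\ref{fig:n3}).
\begin{verbatim}
# Sketch: closed-form negative volume for n = 3 noisy projectors,
# W_neg^{proj} = ((lam-1)^3/16)*(-1 + 1/sqrt(pi*eps) + Erfc(1/(2*sqrt(eps)))),
# evaluated on a grid of noise values lam for a few eps (values assumed).
import numpy as np
from scipy.special import erfc

def w_neg_proj_n3(lam, eps):
    return (lam - 1)**3/16 * (-1 + 1/np.sqrt(np.pi*eps) + erfc(1/(2*np.sqrt(eps))))

lams = np.linspace(0.0, 0.9, 10)
for eps in (0.01, 0.05, 0.1):
    print(eps, np.round(w_neg_proj_n3(lams, eps), 4))
# For each eps the values are negative and shrink in magnitude as lam -> 1,
# i.e. the negativity decreases as noise is added; smaller eps gives better resolution.
\end{verbatim}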
\begin{figure} \caption{Variation of negativity of Wigner distribution function with regularising parameter $\varepsilon$ for different amounts of noise $\lambda$ under scaled Gaussian regularization} \label{fig:n3} \end{figure} We note that this is not the only way to consider n=3 noisy projections. We can take one operator to be the noisy projection of the $\sigma_x$ eigenvector with eigenvalue $+1$ and the other two to be the noisy projections of the $\sigma_y$ eigenvectors with eigenvalues $\pm1$, i.e., our operator set looks like \begin{itemize} \item $\{(\frac{\lambda}{2}I+(1-\lambda)\vert + \rangle_1 \langle + \vert),(\frac{\lambda}{2}I+(1-\lambda)\vert + \rangle_2 \langle + \vert),(\frac{\lambda}{2}I+(1-\lambda)\vert - \rangle_2 \langle - \vert)\} $ \end{itemize} where $|{\pm}\rangle_1$ represent the eigenvectors of ${\sigma}_x$ while $|{\pm}\rangle_2$ represent the eigenvectors of ${\sigma}_y$ corresponding to the eigenvalues $\pm 1$. Proceeding in the same way, we find that for a qubit density matrix $\rho=(\mathbb{I}+\sum_kr_k\sigma_k)/2$, where $\vec{r}=\{r_1,r_2\}$ is the Bloch vector, the Wigner function can be written as \begin{align}\label{n3projidentorhoproj2} \mathscr{W}_{\rho}^{proj++-}(\vec{a})=\frac{1}{2}(1+\vec{r} \cdot \vec{a}_{12}')\mathscr{W}_{\mathbb{I}}^{proj++-}(\vec{a}), \end{align} where $\vec{a}'_{12}=(a'_1,a'_2,0)$ and \begin{align} a'_1=\frac{2a_1-1}{(1-\lambda)},\nonumber\\ a'_2=\frac{a_2-a_3}{(1-\lambda)}. \end{align} The Fourier transform of the Wigner function, $\widehat{\mathscr{W}}_{\mathbb{I}}^{proj++-}(\vec{\xi})$, comes out to be $2e^{i\frac{(\xi_{1}+\xi_{2}+\xi_{3})}{2}}\cos\left(\frac{(\lambda-1)}{2}|\xi''_{12}|\right)$ where $|\xi''_{12}|=\sqrt{\xi^{2}_1+(\xi_2-\xi_3)^2}$. Convolving it with a regularizing function of the form $e^{ -\frac{\varepsilon(1-\lambda)|\xi''_{12}|}{2}-\frac{\varepsilon(1-\lambda)^2(\xi_2+\xi_3)^2}{4}}$ and using the transformation \begin{align} \frac{(1-\lambda)}{2}\xi_1=\xi''_1,\nonumber\\ \frac{(1-\lambda)}{2}(\xi_2-\xi_3)=\xi''_2,\nonumber\\ \frac{(1-\lambda)}{2}(\xi_2+\xi_3)=\xi''_3, \end{align} the Wigner distribution can be calculated as \begin{align}\label{n3noisyprojidenwigner2} \mathscr{W}_{\mathbb{I}}^{proj++-}(\vec{a})&=\frac{1}{(2 \pi)^{3}} \int d^3 \xi\, e^{-i \vec{\xi} \cdot \vec{a}} 2e^{i\frac{(\xi_{1}+\xi_{2}+\xi_{3})}{2}}\cos\left(\frac{(\lambda-1)}{2}|\xi''_{12}|\right)e^{ -\frac{\varepsilon(1-\lambda)|\xi''_{12}|}{2}-\frac{\varepsilon(1-\lambda)^2(\xi_2+\xi_3)^2}{4}}\nonumber \\ &=\frac{-4}{\pi (1-\lambda)^3}\delta(a'_3)\frac{1}{(1-|a'_{12}|^2)^\frac{3}{2}}, \end{align} where $a'_3=\frac{a_2+a_3-1}{(1-\lambda)}$. Thus the total Wigner function can be written as \begin{equation}\label{n3noisyprojrhowigner2} \mathscr{W}_{\rho}^{proj++-}(\vec{a})=\frac{1}{2}(1+\vec{r} \cdot \vec{a}_{12}')\frac{-4}{\pi (1-\lambda)^3}\delta(a'_3)\frac{1}{(1-|a'_{12}|^2)^\frac{3}{2}}. \end{equation} But this is not the only combination possible for choosing the three noisy eigenprojections of $\sigma_x,\sigma_y$.
In fact three more combinations are possible, given as \begin{itemize} \item $\{(\frac{\lambda}{2}I+(1-\lambda)\vert + \rangle_1 \langle + \vert),(\frac{\lambda}{2}I+(1-\lambda)\vert - \rangle_1 \langle - \vert), (\frac{\lambda}{2}I+(1-\lambda)\vert + \rangle_2 \langle + \vert)\} $ \item $\{(\frac{\lambda}{2}I+(1-\lambda)\vert + \rangle_1 \langle + \vert),(\frac{\lambda}{2}I+(1-\lambda)\vert - \rangle_1 \langle - \vert), (\frac{\lambda}{2}I+(1-\lambda)\vert - \rangle_2 \langle - \vert)\} $ \item $\{(\frac{\lambda}{2}I+(1-\lambda)\vert - \rangle_1 \langle - \vert),(\frac{\lambda}{2}I+(1-\lambda)\vert + \rangle_2 \langle + \vert), (\frac{\lambda}{2}I+(1-\lambda)\vert - \rangle_2 \langle - \vert)\} $ \end{itemize} Following the same procedure, the Wigner functions can be calculated as \begin{align} &\mathscr{W}_{\rho}^{proj+-+}(\vec{a})=\frac{1}{2}(1+r_1 a'_2+r_2 a'_1)\frac{-4}{\pi (1-\lambda)^3}\delta(a'_3)\frac{1}{(1-|a'_{12}|^2)^\frac{3}{2}},\label{wigproj+-+}\\ &\mathscr{W}_{\rho}^{proj+--}(\vec{a})=\frac{1}{2}(1+r_1 a'_2-r_2 a'_1)\frac{-4}{\pi (1-\lambda)^3}\delta(a'_3)\frac{1}{(1-|a'_{12}|^2)^\frac{3}{2}},\label{wigproj+--}\\ &\mathscr{W}_{\rho}^{proj-+-}(\vec{a})=\frac{1}{2}(1-r_1 a'_1+r_2 a'_2)\frac{-4}{\pi (1-\lambda)^3}\delta(a'_3)\frac{1}{(1-|a'_{12}|^2)^\frac{3}{2}}.\label{wigproj-+-} \end{align} The negativity of the Wigner distribution for these operator sets can be calculated in the same way as in the previous case by using Eqn. (\ref{n3negcalproj}). But first, we have to normalize the Wigner function expressions by dividing them by the factor \begin{align*} \frac{I_{G'}}{I_{G}}&=\frac{\int d^3\xi e^{ -\frac{\varepsilon(1-\lambda)\sqrt{\xi^{2}_1+(\xi_2-\xi_3)^2}}{2}-\frac{\varepsilon(1-\lambda)^2(\xi_2+\xi_3)^2}{4}} }{\int d^3\xi e^{-\varepsilon(\sqrt{\xi_1^2+\xi_2^2}+\xi_3^2)}} \\ &=\frac{4}{(1-\lambda)^3}. \end{align*} Using Eqn.(\ref{n3noisyprojrhowigner2}) the negativity can be calculated as \begin{align}\label{n4noisyprojneg1} \mathscr{W}_{neg}^{proj++-}(\rho,\vec{A})=\frac{(1-\lambda)^3}{4}\mathscr{W}_{neg}^{mub}(\rho,\vec{A}). \end{align} Here also we see that as the noise parameter increases the negativity of the Wigner function decreases. If instead of Eqn.(\ref{n3noisyprojrhowigner2}) we use Eqns.(\ref{wigproj+-+},\ref{wigproj+--},\ref{wigproj-+-}) in the expression for the negativity of the Wigner distribution in Eqn.(\ref{n3negcalproj}), we retrieve Eqn.(\ref{n4noisyprojneg1}) exactly. \subsection{Four operators (n=4)} \subsubsection{Calculating the Wigner function} Here we will consider all four noisy eigenprojections of $\sigma_x$ and $\sigma_y$ as our operators and calculate the Wigner distribution function. Our operator set is \begin{itemize} \item $\{(\frac{\lambda}{2}I+(1-\lambda)\vert + \rangle_1 \langle + \vert),(\frac{\lambda}{2}I+(1-\lambda)\vert - \rangle_1 \langle - \vert),(\frac{\lambda}{2}I+(1-\lambda)\vert + \rangle_2 \langle + \vert),(\frac{\lambda}{2}I+(1-\lambda)\vert - \rangle_2 \langle - \vert)\} $ \end{itemize} Following the same steps as in the previous calculations, for a qubit density matrix $\rho=(\mathbb{I}+\sum_kr_k\sigma_k)/2$, where $\vec{r}=\{r_1,r_2\}$ is the Bloch vector, the Wigner function can be written as \begin{align}\label{n4projidentorhoproj} \mathscr{W}_{\rho}^{proj+-+-}(\vec{a})=\frac{1}{2}(1+\vec{r} \cdot \vec{a}_{12}')\mathscr{W}_{\mathbb{I}}^{proj+-+-}(\vec{a}), \end{align} where $a'_{12}=(a'_1,a'_2), \vec{a}=\{a_1,a_2,a_3,a_4\}$ and \begin{align} a'_1=\frac{a_1-a_2}{(1-\lambda)},\nonumber\\ a'_2=\frac{a_3-a_4}{(1-\lambda)}.
\end{align} The Fourier transform of the Wigner function, $\widehat{\mathscr{W}}_{\mathbb{I}}^{proj+-+-}(\vec{\xi})$, comes out to be $2e^{i\frac{(\xi_{1}+\xi_{2}+\xi_{3}+\xi_{4})}{2}}\cos\left(\frac{(\lambda-1)}{2}|\xi''_{12}|\right)$ where $|\xi''_{12}|=\sqrt{(\xi_1-\xi_2)^2+(\xi_3-\xi_4)^2}$. Convolving it with a regularizing function of the form $e^{ -\frac{\varepsilon(1-\lambda)|\xi''_{12}|}{2}-\frac{\varepsilon(1-\lambda)^2\left((\xi_1+\xi_2)^2+(\xi_3+\xi_4)^2\right)}{4}}$ and using the transformation \begin{align} \frac{(1-\lambda)}{2}(\xi_1-\xi_2)=\xi''_1,\nonumber\\ \frac{(1-\lambda)}{2}(\xi_3-\xi_4)=\xi''_2,\nonumber\\ \frac{(1-\lambda)}{2}(\xi_1+\xi_2)=\xi''_3,\nonumber\\ \frac{(1-\lambda)}{2}(\xi_3+\xi_4)=\xi''_4, \end{align} the Wigner distribution can be calculated as \begin{align}\label{n4noisyprojidenwigner} \mathscr{W}_{\mathbb{I}}^{proj+-+-}(&\vec{a})=\frac{1}{(2 \pi)^{4}} \int d^4 \xi\, e^{-i \vec{\xi}\cdot\vec{a}}\, 2e^{i\frac{(\xi_{1}+\xi_{2}+\xi_{3}+\xi_{4})}{2}}\nonumber \\ &\cos\left(\frac{(\lambda-1)}{2}|\xi''_{12}|\right) e^{ -\frac{\varepsilon(1-\lambda)|\xi''_{12}|}{2}-\frac{\varepsilon(1-\lambda)^2\left((\xi_1+\xi_2)^2+(\xi_3+\xi_4)^2\right)}{4}} \nonumber \\&=\frac{-4}{\pi (1-\lambda)^4}\delta(a'_3)\delta(a'_4)\frac{1}{(1-|a'_{12}|^2)^\frac{3}{2}}, \end{align} where \begin{align} a'_3=\frac{a_1+a_2-1}{(1-\lambda)},\nonumber\\ a'_4=\frac{a_3+a_4-1}{(1-\lambda)}. \end{align} Thus the total Wigner function can be written as \begin{equation}\label{n4noisyprojrhowigner} \mathscr{W}_{\rho}^{proj+-+-}(\vec{a})=\frac{1}{2}(1+\vec{r} \cdot \vec{a}_{12}')\frac{-4}{\pi (1-\lambda)^4}\delta(a'_3)\delta(a'_4)\frac{1}{(1-|a'_{12}|^2)^\frac{3}{2}}. \end{equation} \subsubsection{Calculating the negativity} The negative volume of the Wigner distribution can be defined as \begin{align}\label{n4negcalproj} \mathscr{W}_{neg}(\rho,\vec{A})= \int\frac{1}{2}(\mathscr{W}_{\rho}(\vec{a})-|\mathscr{W}_{\rho}(\vec{a})|)d^4a. \end{align} In accordance with the previous normalization requirement, we divide the Wigner distribution in Eqn.(\ref{n4noisyprojrhowigner}) by the factor \begin{align*} \frac{I_{G'}}{I_{G}}&=\frac{\int d^4\xi e^{ -\frac{\varepsilon(1-\lambda)\sqrt{(\xi_1-\xi_2)^2+(\xi_3-\xi_4)^2}}{2}-\frac{\varepsilon(1-\lambda)^2((\xi_1+\xi_2)^2+(\xi_3+\xi_4)^2)}{4}} }{\int d^4\xi e^{-\varepsilon(\sqrt{\xi_1^2+\xi_2^2}+\xi_3^2+\xi_4^2)}} \\ &=\frac{4}{(1-\lambda)^4}. \end{align*} The negative volume of the Wigner distribution can then be calculated as \begin{align}\label{n4noisyprojneg} \mathscr{W}_{neg}^{proj+-+-}(\rho,\vec{A})=\frac{(1-\lambda)^4}{4}\mathscr{W}_{neg}^{mub}(\rho,\vec{A}). \end{align} Thus we observe that by increasing the value of the noise parameter $\lambda$ we can decrease the negative volume of the Wigner distribution. Numerically integrating Eqns.(\ref{n2noisyprojneg1}), (\ref{n4noisyprojneg1}) and (\ref{n4noisyprojneg}) we get the plots in Fig.(\ref{fig:comp}). We see that for all three cases, n=2, 3 and 4, the negativity of the Wigner function decreases with an increase in the noise. We also observe that for a non-zero value of the noise parameter, the negative volume is highest for the n=2 case and lowest for the n=4 case. This observation again supports the fact that with the addition of noise the incompatibility among the operators decreases and the Wigner function captures this through its negative volume.
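Since the three negative volumes entering Fig.(\ref{fig:comp}) differ from the common MUB reference only by the prefactors in Eqns.(\ref{n2noisyprojneg1}), (\ref{n4noisyprojneg1}) and (\ref{n4noisyprojneg}), the comparison of their $\lambda$-dependence can be sketched as follows (the reference MUB volume is set to an arbitrary value of $1$, since it is itself regularization-dependent; only the trend and the ordering are meaningful).
\begin{verbatim}
# Sketch: lambda-dependence of the negative volumes compared in Fig. (fig:comp),
# relative to a common MUB reference volume normalized to 1 (illustrative only).
import numpy as np

lam = np.linspace(0.0, 0.95, 20)
prefactor = {
    "n=2 (Eq. n2noisyprojneg1)": (1 - lam)**2 / 4,
    "n=3 (Eq. n4noisyprojneg1)": (1 - lam)**3 / 4,
    "n=4 (Eq. n4noisyprojneg)":  (1 - lam)**4 / 4,
}
for label, p in prefactor.items():
    print(label, np.round(p[:5], 4))
# For every lam > 0 the n=2 prefactor is the largest and the n=4 one the smallest,
# reproducing the ordering of the negative volumes described in the text.
\end{verbatim}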
\begin{figure} \caption{Variation of negativity of Wigner distribution function with noise $\lambda$ for n=2, n=3 and n=4 cases under scaled Gaussian regularization} \label{fig:comp} \end{figure} Now, by using a little algebra, we can show that \begin{align} \frac{\lambda}{2}I+(1-\lambda)\vert \pm \rangle_1 \langle \pm \vert=\frac{I}{2}\pm\frac{(1-\lambda)}{2}\sigma_1,\\ \frac{\lambda}{2}I+(1-\lambda)\vert \pm \rangle_2 \langle \pm \vert=\frac{I}{2}\pm\frac{(1-\lambda)}{2}\sigma_2,\\ \frac{\lambda}{2}I+(1-\lambda)\vert \pm \rangle_3 \langle \pm \vert=\frac{I}{2}\pm\frac{(1-\lambda)}{2}\sigma_3. \end{align} Hence, in principle, instead of analyzing the noisy eigenprojections of Pauli operators, we can just deal with the noisy Pauli operators themselves to see the trend of the negative volume of their corresponding Wigner distribution with the noise parameter. As we will see in the later sections, using noisy Pauli operators will be helpful in generalizing the treatment to higher dimensions. Also, dealing with Pauli operators is physically much more relevant because they can act as observables in a given system. For example, for electrons they are proportional to the spin operators, and for photon beams they correspond to the measurement of the Stokes parameters. So for the remaining part of this paper, we will deal with noisy qubit Pauli operators and, in general, qudit operators. \section{Wigner distribution function for Noisy Qubit operators}\label{Sec5} In this section, we add some noise to both the $n=3$ and $n=2$ Pauli operator sets and calculate the Wigner distribution function. We will then calculate the negative volume of the corresponding Wigner distribution function and analyze how it varies with the noise parameter in comparison to the noiseless case. First, we will study the n=3 case as it is easier to handle analytically than the n=2 case. The Bloch vector is of the form $\vec{r}=\{r_1,r_2,r_3\}$; the n=2 case can be treated as a special case with $r_3=0$. \subsection{Three operators (n=3)} \subsubsection{Noisy Pauli Operators (Scaled Gaussian Regularization)}\label{asymm_noise} Now we will consider the qubit operators to be $A_k=(1-\lambda_k)\sigma_k+\lambda_k\mathbb{I}$ where $k=1,2,3$. As before, using Eqn. (\ref{derivativeofWhat}) we have \begin{align}\label{n3noisyidentorho} \mathscr{W}_{\rho}^{noisy}(\vec{a})=\frac{1}{2}(1+\vec{r} \cdot \vec{a}')\mathscr{W}_{\mathbb{I}}^{noisy}(\vec{a}), \end{align} where $a_k'=\frac{a_k-\lambda_k}{1-\lambda_k}$. Also, the Fourier transform of the Wigner distribution function changes to \begin{align*} \widehat{\mathscr{W}}_{\mathbb{I}}^{noisy}(\vec{\xi})=e^{i (\xi_{1} \lambda_1 +\xi_{2} \lambda_2+\xi_{3} \lambda_3)}2\cos(|\xi'|), \end{align*} where $|\xi'|=\sqrt{(1-\lambda_1)^{2}\xi^{2}_{1}+(1-\lambda_2)^{2}\xi^{2}_{2}+(1-\lambda_3)^{2}\xi^{2}_{3}}$. Now, doing the integration in Eq.(\ref{wignerdist}) is not so straightforward if we use the same Gaussian regularization function $e^{-\varepsilon\xi^2}$, because of the lack of spherical symmetry in $\widehat{\mathscr{W}}_{\mathbb{I}}^{noisy}(\vec{\xi})$. Therefore we choose our regularization function to be a scaled Gaussian $G'=e^{-\varepsilon ((1-\lambda_1)^{2}\xi^{2}_{1}+(1-\lambda_2)^{2}\xi^{2}_{2}+(1-\lambda_3)^{2}\xi^{2}_{3})}$; the calculation then follows exactly the same way as for the noiseless Pauli operators in \cite{schwonnek2020wigner} by changing the integration variables as $\xi_k\rightarrow\xi_k'=(1-\lambda_k)\xi_k$.
The integration goes as follows \begin{align}\label{n3noisyidenwigner} \mathscr{W}_{\mathbb{I}}^{noisy}(\vec{a})&=\frac{1}{(2 \pi)^{3}} \int d^3 \xi\, e^{-i \vec{\xi}\cdot\vec{a}}\, e^{i (\xi_{1} \lambda_1 +\xi_{2} \lambda_2+\xi_{3} \lambda_3)}2\cos(|\xi'|)e^{-\varepsilon\xi'^2} \nonumber \\&=\frac{-1}{2\pi |a'|(1-\lambda_1)(1-\lambda_2)(1-\lambda_3)} \frac{d}{d |a'|} \frac{e^{-\frac{(|a'|-1)^{2}}{4 \varepsilon}}}{\sqrt{4 \pi \varepsilon}} \nonumber \\&=\frac{-\delta^{\prime}(|a'|-1)}{2 \pi |a'|(1-\lambda_1)(1-\lambda_2)(1-\lambda_3)}. \end{align} Therefore, the Wigner distribution function for an arbitrary qubit state is given as \begin{align}\label{n3noisyrhowigner} \mathscr{W}_{\rho}^{noisy}(\vec{a})=-\frac{(1+\vec{r} \cdot \vec{a}')}{4 \pi|a'|(1-\lambda_1)(1-\lambda_2)(1-\lambda_3)} \delta^{\prime}(|a'|-1). \end{align} It should be noted that the derivative of the $\delta$-function in the above equation is with respect to the variable $|a'|$, unlike in \cite{schwonnek2020wigner}, where the derivative is with respect to $|a|$. \subsubsection{Noisy Pauli Operators with equal noise (Gaussian Regularization)}\label{symm_noise} If the noise added to each of the Pauli operators in the previous section is equal, we can calculate the Wigner distribution function with the plain Gaussian regularization function as well. In this case, the qubit operators are $A_k=(1-\lambda)\sigma_k+\lambda\mathbb{I}$. The Wigner distribution function for an arbitrary density matrix can be written in terms of the Wigner distribution function of $\mathbb{I}$ as \begin{align}\label{n3symmidentorho} \mathscr{W}_{\rho}^{symm}(\vec{a})=\frac{1}{2}(1+\vec{r} \cdot \vec{a}')\mathscr{W}_{\mathbb{I}}^{symm}(\vec{a}), \end{align} where $a_k'=\frac{a_k-\lambda}{1-\lambda}$. The Fourier transform of the Wigner distribution function is given as \begin{align*} \widehat{\mathscr{W}}_{\mathbb{I}}^{symm}(\vec{\xi})=e^{i (\xi_{1} \lambda +\xi_{2} \lambda+\xi_{3} \lambda)}2\cos(|\xi'|), \end{align*} where $|\xi'|=(1-\lambda)\sqrt{\xi^{2}_{1}+\xi^{2}_{2}+\xi^{2}_{3}}$. The Wigner distribution function can now be calculated with the Gaussian regularization as \begin{align}\label{n3noisyidenwignersymm} \mathscr{W}_{\mathbb{I}}^{symm}(\vec{a})&=\frac{1}{(2 \pi)^{3}} \int d^3 \xi\, e^{-i \vec{\xi}\cdot\vec{a}}\, e^{i (\xi_{1} \lambda +\xi_{2} \lambda+\xi_{3} \lambda)}2\cos(|\xi'|)e^{-\varepsilon\xi^2} \nonumber \\ & =\frac{-1}{2\pi |a'|(1-\lambda)^2} \frac{d}{d |a'|} \frac{e^{-\frac{(|a'|-1)^{2}(1-\lambda)^2}{4 \varepsilon}}}{\sqrt{4 \pi \varepsilon}} \nonumber \\ &= \frac{-\delta^{\prime}(|a''|-(1-\lambda))}{2 \pi |a''|}, \end{align} where we have used the variable substitution $\xi\rightarrow\xi'$ and $a_k''=a_k-\lambda$. Therefore, the Wigner distribution function for an arbitrary qubit state with equally noisy Pauli operators is given as \begin{align}\label{n3noisyrhowignersymm} \mathscr{W}_{\rho}^{symm}(\vec{a})=-\frac{(1+\vec{r} \cdot \vec{a}')}{4 \pi|a''|} \delta^{\prime}(|a''|-(1-\lambda)). \end{align} We note that in the limit $\lambda_1=\lambda_2=\lambda_3=\lambda=0$ both Eqn.(\ref{n3noisyrhowigner}) and Eqn.(\ref{n3noisyrhowignersymm}) reduce to the expression for the Wigner function for the n=3 qubit operator case in \cite{schwonnek2020wigner}.
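As a quick numerical sanity check of the characteristic function used above (a sketch with assumed noise values, independent of the regularization step), one can verify $\widehat{\mathscr{W}}_{\mathbb{I}}^{noisy}(\vec{\xi})={\rm tr}[e^{i\vec{\xi}\cdot\vec{A}}]=e^{i\sum_k\xi_k\lambda_k}\,2\cos(|\xi'|)$ by direct matrix exponentiation.
\begin{verbatim}
# Sketch: verify tr[exp(i xi . A)] for A_k = (1-lam_k) sigma_k + lam_k I against
# exp(i sum_k xi_k lam_k) * 2 cos(|xi'|)  (random xi, assumed lam_k values).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]
lam = np.array([0.1, 0.25, 0.4])                      # assumed noise parameters
A = [(1 - lam[k])*sig[k] + lam[k]*np.eye(2) for k in range(3)]

for _ in range(5):
    xi = rng.normal(size=3)
    lhs = np.trace(expm(1j*sum(xi[k]*A[k] for k in range(3))))
    xi_p = np.sqrt(np.sum(((1 - lam)*xi)**2))         # |xi'|
    rhs = np.exp(1j*np.dot(xi, lam))*2*np.cos(xi_p)
    print(np.allclose(lhs, rhs))                      # True
\end{verbatim}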
\subsubsection{Arbitrary qubit operators}\label{section_arboperators} Now, we will consider the following three qubit operators, which could represent a generic set of three qubit Hermitian operators with eigenvalues $\pm1$ \begin{align}\label{arboperators} &A_1= \lambda_x\sigma_x+\lambda_y\sigma_y+\lambda_z\sigma_z, \nonumber \\ &A_2= \gamma_x\sigma_x+\gamma_y\sigma_y+\gamma_z\sigma_z, \nonumber \\ &A_3= \sigma_z, \end{align} where $\lambda_x^2+\lambda_y^2+\lambda_z^2=\gamma_x^2+\gamma_y^2+\gamma_z^2=1$, which ensures that the eigenvalues of $A_1$ and $A_2$ are $\pm1$. Note that because of unitary freedom, we can always choose one of the operators to be $\sigma_z$. Next, we note that the Wigner distribution function for an arbitrary state can be expanded as \begin{align}\label{n3arbidentorho} \mathscr{W}_{\rho}^{arb}(\vec{a})=\frac{1}{2}(1+\vec{r} \cdot \vec{a}')\mathscr{W}_{\mathbb{I}}^{arb}(\vec{a}), \end{align} where the transformed phase-space parameters are given as \begin{align}\label{sparameters} &a'_1=\frac{-a_1\gamma_y+a_2\lambda_y+a_3(\gamma_y\lambda_z-\gamma_z\lambda_y)}{\gamma_x\lambda_y-\lambda_x\gamma_y},\nonumber\\ &a'_2=\frac{a_1\gamma_x-a_2\lambda_x+a_3(\gamma_z\lambda_x-\gamma_x\lambda_z)}{\gamma_x\lambda_y-\lambda_x\gamma_y},\nonumber \\ &a'_3=a_3. \end{align} Moreover, the Fourier transform of the Wigner distribution function takes the following form \begin{align} \widehat{\mathscr{W}}_{\mathbb{I}}^{arb}(\vec{\xi})={\rm tr}\, e^{i\xi_i A_i}={\rm tr}\, e^{i(\eta_x \sigma_x+\eta_y \sigma_y+\eta_z \sigma_z)}=2\cos(|\eta|), \end{align} where the parameters $\vec{\eta}$ are given as $ \eta_x=\xi_1\lambda_x+\xi_2\gamma_x, \eta_y=\xi_1\lambda_y+\xi_2\gamma_y, \eta_z=\xi_1\lambda_z+\xi_2\gamma_z+\xi_3.$ As before, our $\widehat{\mathscr{W}}_{\mathbb{I}}^{arb}(\vec{\xi})$ is not a spherically symmetric function of $\vec{\xi}$. Now, we pick our regularisation function to be Gaussian with respect to the $\vec{\eta}$ parameters, i.e., $G''=e^{-\varepsilon \eta^2}$ with $\eta^2=\eta_x^2+\eta_y^2+\eta_z^2$. With this regularisation function, we calculate the Wigner function of the identity as \begin{align}\label{n3arbidenwigner} \mathscr{W}_{\mathbb{I}}^{arb}(\vec{a})&=\frac{1}{(2\pi)^3}\int d^3 \xi\, e^{-i \vec{\xi} \cdot \vec{a}} 2\cos(|\eta|)e^{-\varepsilon\eta^2} \nonumber \\&=\frac{-\delta^{\prime}(|a'|-1)}{2 \pi |a'||(\gamma_y\lambda_x-\gamma_x\lambda_y)|}. \end{align} Therefore the Wigner distribution function of an arbitrary qubit state for arbitrary qubit operators is given as \begin{align}\label{n3arbrhowigner} \mathscr{W}_{\rho}^{arb}(\vec{a})=-\frac{(1+\vec{r} \cdot \vec{a}')}{4 \pi|a'||(\gamma_y\lambda_x-\gamma_x\lambda_y)|} \delta^{\prime}(|a'|-1). \end{align} \subsubsection{Comparing the negativity}\label{negcompqubit} We will calculate the negative volume of the Wigner function for noisy Pauli operators in the same way as in Sec.(\ref{Sec4}). Both the scaled Gaussian regularisation and the Gaussian regularisation approaches are treated one by one in order to compare the findings. \textit{Noisy operators with arbitrary noise}: For the case of noisy Pauli operators with arbitrary noise (\ref{asymm_noise}), we note that we used the scaled Gaussian function for regularizing the Wigner distribution function.
In order to compare its negativity, we normalize it by dividing $\mathscr{W}_{\rho}^{noisy}(\vec{a})$ by the following quantity \begin{align*} \frac{I_{G'}}{I_{G}}&=\frac{\int d^3\xi e^{-\varepsilon ((1-\lambda_1)^{2}\xi^{2}_{1}+(1-\lambda_2)^{2}\xi^{2}_{2}+(1-\lambda_3)^{2}\xi^{2}_{3})} }{\int d^3\xi e^{-\varepsilon(\xi_1^2+\xi_2^2+\xi_3^2)}} \\ &=\frac{1}{(1-\lambda_1)(1-\lambda_2)(1-\lambda_3)}, \end{align*} where $I_{G'}$ is the integral of the scaled Gaussian and $I_G$ is the integral of the Gaussian function over the Fourier space. After this operation the expression for the negative volume of the Wigner function is evaluated as \begin{align}\label{n3noisyneg} \mathscr{W}_{neg}^{noisy}(\rho,\vec{A})=(1-\lambda_1)(1-\lambda_2)(1-\lambda_3)\mathscr{W}_{neg}^{mub}(\rho,\vec{A}). \end{align} Thus, the negative volume for noisy operators is smaller than the negative volume for MUB operators, and with an increase in noise it decreases further. \textit{Noisy Pauli operators with equal noise}: Now we consider the case of noisy operators with equal noise (\ref{symm_noise}). In this case, the regularization function is the same as the one used for calculating the Wigner distribution of MUBs \cite{schwonnek2020wigner}. We calculate the negative volume of $\mathscr{W}_{\rho}^{mub}(\vec{a})$ and $\mathscr{W}_{\rho}^{symm}(\vec{a})$ by using the Gaussian approximation of the delta function which we encountered while deriving Eqn.(\ref{n3noisyidenwignersymm}). The analogue of Eqn.(\ref{n3noisyneg}) then reduces to \begin{equation}\label{negnoise} \mathscr{W}_{neg}^{symm}(\rho,\vec{A})=\frac{1}{2}\left(-1+\frac{\lambda-1}{\sqrt{\pi \varepsilon}}+\mathrm{Erfc} \left(\frac{\lambda-1}{2\sqrt{\varepsilon}}\right) \right). \end{equation} We have plotted the results in Fig.(\ref{fig:3}) and Fig.(\ref{fig:4}) respectively. It can be seen that, with an increasing value of the noise $\lambda$, the difference between the negative volumes of $\mathscr{W}_{\rho}^{symm}(\vec{a})$ and $ \mathscr{W}_{\rho}^{mub}(\vec{a})$ keeps increasing as $\varepsilon\rightarrow0$. This indicates that the negative volume for the maximally compatible case is the least. Similarly, for fixed values of $\varepsilon$ the negative volume keeps decreasing as $\lambda\rightarrow1$. We can also use Eqn. (\ref{n3noisyneg}), where we used a scaled Gaussian regularisation function, to analyze how the negative volume varies with the noise parameter $\lambda$ and the regularization parameter $\varepsilon$ by setting $\lambda_1=\lambda_2=\lambda_3=\lambda$. We get the following expression: \begin{equation} \mathscr{W}_{neg}^{noisy}(\rho,\vec{A})=\frac{(\lambda-1)^{3}}{2}\left(-1+\frac{1}{\sqrt{\pi \varepsilon}}+\mathrm{Erfc} \left(\frac{1}{2\sqrt{\varepsilon}}\right) \right). \end{equation} By plotting this in Fig.(\ref{fig:3}) and Fig.(\ref{fig:4}) we observe trends similar to those in the previous cases. This is an indication that using the scaled Gaussian regularisation to calculate the convoluted Wigner function and then normalizing it by a certain factor to compare its negative volume with the MUB case is justified. Another thing that supports this claim is that the regularization used for obtaining Eqn.(\ref{n3noisyneg}) amounts to taking a weighted average of the Wigner function over the phase space, if the Gaussian regularization is considered as the simple average. We know that the physical conclusion drawn from two samples remains the same regardless of which kind of average of their data sets we consider,
\textit{i.e.}, simple, weighted, root-mean-square, etc., averages will all give us the same physical conclusions, differing only in the relative numerical values. We again stress that under this procedure we cannot quantify the degree of incompatibility by the obtained numerical value of the negativity of the Wigner function. What we can extract is the relative trend of the negativity with the incompatibility. Thus we have a qualitative indication that the Wigner function indeed captures the incompatibility. Consequently, in the remainder of this work we will do the analysis by using a properly normalized scaled regularisation. \begin{figure} \caption{Variation of negativity of Wigner distribution function with regularising parameter $\varepsilon$ for different amounts of noise $\lambda$ under Gaussian regularization} \label{fig:3} \end{figure} \begin{figure} \caption{Variation of negativity of Wigner distribution function with noise $\lambda$ for different values of regularising parameter $\varepsilon$ under Gaussian regularization} \label{fig:4} \end{figure} \textit{Set of arbitrary operators}: Here again, we had to use a scaled Gaussian regularizing function to compute the Wigner distribution function. So we divide $\mathscr{W}_{\rho}^{arb}(\vec{a})$ by the following quantity \begin{align*} \frac{I_{G''}}{I_G}&=\frac{\int d^3\xi e^{-\varepsilon(\eta_x^2+\eta_y^2+\eta_z^2)}}{\int d^3\xi e^{-\varepsilon(\xi_1^2+\xi_2^2+\xi_3^2)}}\\ &=\frac{1}{|(\gamma_y\lambda_x-\gamma_x\lambda_y)|}, \end{align*} where $I_{G''}$ is the integral of the scaled Gaussian used to compute the Wigner distribution function for a set of arbitrary operators. After this division, the negativity of the Wigner function can be calculated as follows \begin{align}\label{n3arbrnoise} \mathscr{W}_{neg}^{arb}(\rho,\vec{A})=|(\gamma_y\lambda_x-\gamma_x\lambda_y)|\mathscr{W}_{neg}^{mub}(\rho,\vec{A}), \end{align} where $|(\gamma_y\lambda_x-\gamma_x\lambda_y)|\leq 1$. Thus we find a decrease in the total negativity when we choose an arbitrary set of operators instead of mutually unbiased operators. Using a geometric approach, it can further be shown that this negativity decreases as the operators come closer to each other. This further supports the intuition that negativity is a proper indicator of incompatibility, even for three observables. Suppose the eigenvectors of the observables in \eqref{arboperators} are along the directions $A_1=\{\theta_1, \phi_1\}$, $A_2=\{\theta_2,\phi_2\}$ and $A_3=\{0,0\}$ in the Bloch sphere and their eigenvalues are $\pm 1$, so that the coefficients can be written as \begin{align*} \lambda_x=\sin(\theta_1)\cos(\phi_1),\quad\lambda_y=\sin(\theta_1)\sin(\phi_1) \quad \text{and} \quad \lambda_z=\cos(\theta_1), \\ \gamma_x=\sin(\theta_2)\cos(\phi_2),\quad\gamma_y=\sin(\theta_2)\sin(\phi_2) \quad \text{and} \quad \gamma_z=\cos(\theta_2). \end{align*} Substituting the above relations in \eqref{n3arbrnoise}, we get \begin{align}\label{n3arbrnoise_geom} \mathscr{W}_{neg}^{arb}(\rho,\vec{A})=|\sin \theta_1\sin\theta_2\sin(\phi_1-\phi_2)|\mathscr{W}_{neg}^{mub}(\rho,\vec{A}), \end{align} where the factor in front becomes smaller as the lines containing the eigenvectors of the three operators come closer to each other, i.e., the solid angle between them becomes smaller. Intuitively, we know that if the lines in the Bloch sphere containing the eigenvectors come close to each other, the observables become more and more compatible.
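A minimal numerical check of the geometric factor in Eqn.(\ref{n3arbrnoise_geom}) (assumed angles; the helper name is hypothetical) confirms that $|\gamma_y\lambda_x-\gamma_x\lambda_y|=|\sin\theta_1\sin\theta_2\sin(\phi_1-\phi_2)|$ and that the factor shrinks as the Bloch directions of $A_1$ and $A_2$ approach each other.
\begin{verbatim}
# Sketch: geometric suppression factor of the negativity for the arbitrary operator
# triple (A_3 = sigma_z fixed); the theta/phi values below are illustrative only.
import numpy as np

def suppression_factor(theta1, phi1, theta2, phi2):
    lam = np.array([np.sin(theta1)*np.cos(phi1), np.sin(theta1)*np.sin(phi1), np.cos(theta1)])
    gam = np.array([np.sin(theta2)*np.cos(phi2), np.sin(theta2)*np.sin(phi2), np.cos(theta2)])
    det_form = abs(gam[1]*lam[0] - gam[0]*lam[1])        # |gamma_y lambda_x - gamma_x lambda_y|
    angle_form = abs(np.sin(theta1)*np.sin(theta2)*np.sin(phi1 - phi2))
    assert np.isclose(det_form, angle_form)
    return det_form

print(suppression_factor(np.pi/2, 0.0, np.pi/2, np.pi/2))  # MUB-like triple: factor 1
print(suppression_factor(np.pi/2, 0.0, np.pi/2, 0.1))      # nearly parallel A_1, A_2: ~0.0998
\end{verbatim}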
\begin{figure} \caption{Variation of negativity of Wigner distribution function with regularising parameter $\varepsilon$ for different amounts of noise $\lambda$ under scaled Gaussian regularization} \label{fig:1} \end{figure} \begin{figure} \caption{Variation of negativity of Wigner distribution function with noise $\lambda$ for different values of regularising parameter $\varepsilon$ under scaled Gaussian regularization} \label{fig:2} \end{figure} \subsection{Two operators (n=2)} \subsubsection{Noisy Pauli Operators (Scaled Regularization)}\label{NosiyPauli} In this case, the qubit operators are $A_k=(1-\lambda_k)\sigma_k+\lambda_k\mathbb{I},\ k=1,2$. Eqn.(\ref{n3noisyidentorho}) still holds with $r_3=0$. The Fourier transform of the Wigner function is now given by: \begin{align*} \widehat{\mathscr{W}}_{\mathbb{I}}^{noisy}(\vec{\xi})=e^{i (\xi_{1} \lambda_1 +\xi_{2} \lambda_2)}2\cos(|\xi'|), \end{align*} where $|\xi'|=\sqrt{(1-\lambda_1)^{2}\xi^{2}_{1}+(1-\lambda_2)^{2}\xi^{2}_{2}}$. As we did in the n=3 case, in order to exploit rotational symmetry when evaluating the convoluted Wigner function, we choose a scaled regularisation function of the form $e^{-\varepsilon\sqrt{ (1-\lambda_1)^{2}\xi^{2}_{1}+(1-\lambda_2)^{2}\xi^{2}_{2}}}$. The calculations then proceed in the same way as for the noiseless Pauli operators with the substitution $\xi_k\rightarrow\xi_k'=(1-\lambda_k)\xi_k$. The integration follows as \begin{align} \mathscr{W}_{\mathbb{I}}^{noisy}(\vec{a})&= \frac{1}{(2 \pi)^{2}} \int d^2\xi\, e^{-i\vec{\xi} \cdot \vec{a}}\, e^{i (\xi_{1} \lambda_1 +\xi_{2} \lambda_2)}2\cos{|\xi'|}e^{-\varepsilon |\xi'|}\nonumber\\ &=\frac{-1}{(1-\lambda_1)(1-\lambda_2)\pi\left(1-|a'|^{2}\right)^{3 / 2}}, \end{align} where $a'_k=\frac{a_k-\lambda_k}{1-\lambda_k}$. The calculation exactly follows the n=2 case in \cite{schwonnek2020wigner}. Using Eqn.(\ref{n3noisyidentorho}) with $r_3=0$ the total Wigner function is given as: \begin{equation}\label{wignoisy2} \mathscr{W}_{\rho}^{noisy}(\vec{a})=-\frac{(1+\vec{r} \cdot \vec{a}')}{2\pi(1-\lambda_1)(1-\lambda_2)}\frac{1}{\left(1-|a'|^{2}\right)^{3 / 2}}. \end{equation} Now we will observe how the calculations get much simpler if the noise added to each observable is equal. \subsubsection{Noisy Pauli Operators with equal noise (Unscaled Regularization)}\label{Symmnoisen2} As we have seen earlier, if we add equal noise to each observable we can use unscaled regularization. The operator set becomes $A_k=(1-\lambda)\sigma_k+\lambda\mathbb{I}$. The Fourier transform of the Wigner function comes out to be $\widehat{\mathscr{W}}_{\mathbb{I}}^{symm}(\vec{\xi})=e^{i (\xi_{1} \lambda +\xi_{2} \lambda)}2\cos(|\xi'|)$ where $|\xi'|=(1-\lambda)\sqrt{\xi^{2}_{1}+\xi^{2}_{2}}$. The Wigner distribution can be calculated as: \begin{align} \mathscr{W}_{\mathbb{I}}^{symm}(\vec{a})&= \frac{1}{(2 \pi)^{2}} \int d^2\xi\, e^{-i\vec{\xi} \cdot \vec{a}}\, e^{i (\xi_{1} \lambda +\xi_{2} \lambda)}2\cos(|\xi'|)e^{-\varepsilon |\xi|}\nonumber\\ &=\frac{-(1-\lambda)}{\pi\left((1-\lambda)^{2}-|a''|^{2}\right)^{3 / 2}}, \end{align} where $a''_k=a_k-\lambda$. The total Wigner function is then given by: \begin{equation}\label{wigsymm2} \mathscr{W}_{\rho}^{symm}(\vec{a})=-\frac{((1-\lambda)+\vec{r} \cdot \vec{a}'')}{2\pi}\frac{1}{\left((1-\lambda)^2-|a''|^{2}\right)^{3 / 2}}. \end{equation} In this case also, in the limit $\lambda_1=\lambda_2=\lambda=0$ both Eqn.(\ref{wignoisy2}) and Eqn.(\ref{wigsymm2}) reduce to the expression for the Wigner function for the n=2 qubit operator case in \cite{schwonnek2020wigner}.
\subsubsection{Arbitrary qubit operators} Next, we will consider the following qubit operators, which could represent a generic set of two qubit Hermitian operators with eigenvalues $\pm1$ \begin{align}\label{arboperatorsn2} &A_1= \sigma_x, \nonumber \\ &A_2= \sqrt{(1-\beta^{2})}\sigma_x+\beta\sigma_y. \end{align} The Wigner function for an arbitrary state can be written in terms of the Wigner function of the identity as: \begin{align}\label{n2arbidentorho} \mathscr{W}_{\rho}^{arb}(\vec{a})=\frac{1}{2}(1+\vec{r} \cdot \vec{a}')\mathscr{W}_{\mathbb{I}}^{arb}(\vec{a}), \end{align} where \begin{align}\label{aparameters} &a'_1=a_1,\nonumber\\ &a'_2=\frac{a_2-\sqrt{(1-\beta^{2})}a_1}{\beta}. \end{align} The Fourier transform of the Wigner distribution function takes the following form \begin{align*} \widehat{\mathscr{W}}_{\mathbb{I}}^{arb}(\vec{\xi})=2\cos(|\eta|), \end{align*} where $\eta_1=\xi_1+\xi_2\sqrt{1-\beta^{2}}$ and $\eta_2=\xi_2\beta$. As before, $\widehat{\mathscr{W}}_{\mathbb{I}}^{arb}(\vec{\xi})$ is not a rotationally symmetric function. So, in order to get the convoluted Wigner function, we choose our regularizing function to be $e^{-\varepsilon|\eta|}$, where $|\eta|=\sqrt{\eta_1^2+\eta_2^2}$. Using this, the Wigner function of the identity comes out to be \begin{align} \mathscr{W}_{\mathbb{I}}^{arb}(\vec{a})&= \frac{1}{(2 \pi)^{2}} \int d^2\xi\, e^{-i\vec{\xi} \cdot \vec{a}}2\cos{|\eta|}e^{-\varepsilon |\eta|}\nonumber\\ &= \frac{1}{\beta (2 \pi)^{2}} \int d^2\eta\, e^{-i\vec{\eta} \cdot \vec{a}'}2\cos{|\eta|}e^{-\varepsilon |\eta|}\nonumber\\ &\rightarrow\frac{-1}{\beta \pi\left(1-|a'|^{2}\right)^{3 / 2}}. \end{align} Using Eqn.(\ref{n2arbidentorho}) the Wigner distribution for an arbitrary qubit state is given as: \begin{equation}\label{wigarbsymm2} \mathscr{W}_{\rho}^{arb}(\vec{a})=-\frac{(1+\vec{r} \cdot \vec{a}')}{2\pi \beta}\frac{1}{\left(1-|a'|^{2}\right)^{3 / 2}}, \end{equation} where $\vec{a}'$ is given by Eqn.(\ref{aparameters}). \subsubsection{Comparing the negativity}\label{n2cal} For this case also the negative volume of the Wigner function can be calculated using Eqn.(\ref{negcalproj}). We will compare the change in negativity with the noise parameters for all three cases discussed above for qubits, one by one. \textit{Noisy Pauli operators with asymmetric noise}: For noisy operators with asymmetrically added white noise (\ref{NosiyPauli}), just as in the n=3 case, the expression for the Wigner function needs to be normalized in order to compare its negativity with the MUB case. We normalize by dividing $\mathscr{W}_{\rho}^{noisy}(\vec{a})$ by the following quantity \begin{align*} \frac{I_{G'}}{I_{G}}&=\frac{\int d^2\xi e^{-\varepsilon\sqrt{ (1-\lambda_1)^{2}\xi^{2}_{1}+(1-\lambda_2)^{2}\xi^{2}_{2}} }}{\int d^2\xi e^{-\varepsilon\sqrt{\xi_1^2+\xi_2^2}}} \\ &=\frac{1}{(1-\lambda_1)(1-\lambda_2)}, \end{align*} where $I_{G'}$ is the integral of the scaled Gaussian and $I_G$ is the integral of the Gaussian function over the Fourier space. Thus, using Eqn.(\ref{wignoisy2}), the negativity of the Wigner function is evaluated as \begin{align}\label{noisyneg12} \mathscr{W}_{neg}^{noisy}(\rho,\vec{A})=(1-\lambda_1)(1-\lambda_2)\mathscr{W}_{neg}^{mub}(\rho,\vec{A}). \end{align} Thus from the above expression, we see that the negative volume of the Wigner function for noisy operators is smaller than the corresponding negative volume for MUB and further decreases with an increase in the noise.
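As a small numerical cross-check of the normalization factor above (a sketch; the box cutoff, grid and parameter values are assumptions), the two Fourier-space integrals can be approximated on a finite grid and their ratio compared with $1/((1-\lambda_1)(1-\lambda_2))$.
\begin{verbatim}
# Sketch: approximate I_G'/I_G for the n = 2 asymmetric-noise normalization and
# compare with 1/((1-lam1)(1-lam2)); eps, box size and lam values are illustrative.
import numpy as np

lam1, lam2, eps = 0.2, 0.35, 0.5
L, n = 60.0, 1601                    # box [-L, L]^2 chosen so exp(-eps*L) is negligible
x = np.linspace(-L, L, n)
X1, X2 = np.meshgrid(x, x, indexing="ij")
dA = (x[1] - x[0])**2

I_Gp = np.sum(np.exp(-eps*np.sqrt((1 - lam1)**2*X1**2 + (1 - lam2)**2*X2**2)))*dA
I_G  = np.sum(np.exp(-eps*np.sqrt(X1**2 + X2**2)))*dA

print(I_Gp/I_G, 1/((1 - lam1)*(1 - lam2)))   # both ~ 1.923
\end{verbatim}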
\textit{Noisy Pauli operators with equal noise}: For noisy Pauli operators with equal noise, we see from (\ref{Symmnoisen2}) that we can choose an unscaled regularization to get the expression for the Wigner function in Eqn.(\ref{wigsymm2}). We can use this expression and numerically integrate it to calculate the negative volume of the Wigner function for various amounts of noise. Fig.(\ref{fig:6}) is the result of this procedure. We see that the negative volume decreases with an increase in the noise parameter. Since the addition of noise to the observables increases the compatibility among them, we observe that the negative volume is the least for the maximally compatible case, vis-à-vis the maximum value of the noise parameter. Similarly, instead of unscaled regularisation, we can consider scaled regularization. We can repeat the same procedure by using the expression of the Wigner function in Eqn.(\ref{wignoisy2}) with $\lambda_1=\lambda_2=\lambda$. We get Fig.(\ref{fig:7}) as a result of this. The trend of the negative volume against the noise parameter is similar to the one we got for the unscaled regularizing function. This is again an indication of the fact that we can compare the negative volume of the Wigner function for noisy operators to the negative volume for the MUB case by choosing a scaled regularizing function and then normalizing it properly. \textit{Set of arbitrary operators}: Here again, we had to use a scaled regularizing function to compute the Wigner distribution function. Normalizing the expression in Eqn.(\ref{wigarbsymm2}) by the factor \begin{align*} \frac{I_{G''}}{I_{G}}&=\frac{\int d^2\xi e^{-\varepsilon\sqrt{ (\xi_{1}+\xi_{2}\sqrt{1-\beta^2})^2+\xi^{2}_{2}\beta^2} }}{\int d^2\xi e^{-\varepsilon\sqrt{\xi_1^2+\xi_2^2}}} \\ &=\frac{1}{|\beta|}, \end{align*} the negative volume comes out to be \begin{align}\label{noisyneg1} \mathscr{W}_{neg}^{arb}(\rho,\vec{A})=|\beta|\mathscr{W}_{neg}^{mub}(\rho,\vec{A}), \end{align} where $\beta\leq1$. Hence we see that with an increase in $\beta$ the negative volume increases, and it becomes maximum when $\beta=1$, which corresponds to maximum incompatibility. In Fig.(\ref{fig:18}) we compare the negative volume of the Wigner distribution for two and three operators for qubits, although it is important to mention that this is not an absolute comparison, as the regularisation function and the value of the regularisation parameter differ between the two cases. \begin{figure} \caption{Variation of negativity of Wigner distribution function with noise $\lambda$ under Gaussian regularization} \label{fig:6} \end{figure} \begin{figure} \caption{Variation of negativity of Wigner distribution function with noise $\lambda$ under scaled Gaussian regularization} \label{fig:7} \end{figure} \begin{figure} \caption{Comparison of the variation of negativity of the Wigner distribution with noise for n=2 and n=3 on a log scale. For the $n=3$ case the value of the regularising parameter is $\varepsilon=0.0000001$, while for $n=2$ we have taken $\varepsilon\rightarrow 0$.} \label{fig:18} \end{figure} \section{Wigner distribution function for the set of Qudit operators}\label{Sec7} So far we have analyzed 2-dimensional (qubit) systems. In this section we will generalize this procedure to arbitrary but finite dimensions (qudits), say $N$. We will calculate the Wigner function and negativity for various choices of Hermitian operators. We will also see how the negativity changes with an increase in the dimension of the system.
We start with three Hermitian qudit operators, i.e., we first consider the n=3 case. \subsection{Three operators (n=3)} \subsubsection{Gell-Mann operators} Gell-Mann operators are very good candidates for the generalization of our procedure to N dimensions. Their comparatively simple form in N dimensions allows us to calculate $\widehat{\mathscr{W}_{\mathbb{I}}}$ very efficiently. Also, the resulting analytical forms are similar to those we calculated in the previous sections for qubits. Thus the convoluted Wigner function and its negativity can be calculated and compared easily for any arbitrary dimension. Gell-Mann operators, as discussed earlier, are the generators of SU(N) for an N-dimensional qudit. Any density matrix can be decomposed using these operators as $ \rho=\frac{1}{N}(\mathbb{I}+\sum_{k}r_{k}\Lambda_{k}^N)$ where $\vec{r}$ is now the generalized Bloch vector and the $\Lambda_k^N$ are N-dimensional Gell-Mann operators. The Wigner function for an arbitrary state can be written in terms of the Wigner function of the identity for a given set of operators as \begin{equation}\label{Quditwigidentorho} \mathscr{W}_{\rho}(\vec{a})=\frac{1}{N}(1+\vec{r} \cdot \vec{a})\mathscr{W}_{\mathbb{I}}(\vec{a}). \end{equation} Let us consider our qudit operators to be $A_k=\Lambda_k^N, k=1,2,3$ such that \begin{equation} \begin{gathered} \Lambda_{1}^N=\begin{pmatrix} 0 & 1 & 0 & \cdot & \cdot & \cdot & 0\\ 1 & 0 & 0 & \cdot & \cdot & \cdot & 0\\ \cdot & \cdot & \cdot & \cdot & & & \cdot\\ \cdot & \cdot & \cdot & & \cdot & & \cdot\\ \cdot & \cdot & \cdot & & & \cdot & \cdot\\ 0 & 0 & 0 & \cdot & \cdot & \cdot & 0 \end{pmatrix}, \qquad \Lambda_{2}^N=\begin{pmatrix} 0 & -i & 0 & \cdot & \cdot & \cdot & 0\\ i & 0 & 0 & \cdot & \cdot & \cdot & 0\\ \cdot & \cdot & \cdot & \cdot & & & \cdot\\ \cdot & \cdot & \cdot & & \cdot & & \cdot\\ \cdot & \cdot & \cdot & & & \cdot & \cdot\\ 0 & 0 & 0 & \cdot & \cdot & \cdot & 0 \end{pmatrix},\nonumber \\ \Lambda_{3}^N=\begin{pmatrix} 1 & 0 & 0 & \cdot & \cdot & \cdot & 0\\ 0 & -1 & 0 & \cdot & \cdot & \cdot & 0\\ \cdot & \cdot & \cdot & \cdot & & & \cdot\\ \cdot & \cdot & \cdot & & \cdot & & \cdot\\ \cdot & \cdot & \cdot & & & \cdot & \cdot\\ 0 & 0 & 0 & \cdot & \cdot & \cdot & 0 \end{pmatrix}. \end{gathered} \end{equation} In that case we take the Bloch vector components $r_k=0$ for $k>3$. The expression for the Fourier transform of the Wigner function for the identity comes out to be $\widehat{\mathscr{W}_{\mathbb{I}}}(\vec{\xi})=(N-2)+2\cos{|\xi|}$ where $|\xi|=\sqrt{\xi_1^2+\xi_2^2+\xi_3^2}$. From this, we can calculate the Wigner function as \begin{align}\label{Quditn3mubidenwigner} \mathscr{W}_{\mathbb{I}}(\vec{a})&=\frac{1}{(2 \pi)^{3}} \int d^3 \xi\, e^{-i \vec{\xi} \cdot \vec{a}} ((N-2)+2\cos(|\xi|))e^{-\varepsilon \xi^2}\nonumber\\ &=\frac{-1}{2 \pi |a|} ((N-2)\delta^{\prime}(|a|)+\delta^{\prime}(|a|-1)). \end{align} Here we are still working in the limit $\varepsilon\rightarrow0$. Using Eqn.(\ref{Quditwigidentorho}) and Eqn.(\ref{Quditn3mubidenwigner}) we get \begin{equation}\label{Quditn3mubrhowigner} \mathscr{W}_{\rho}(\vec{a})=\frac{-(1+\vec{r} \cdot \vec{a})}{2N\pi|a|} ((N-2)\delta^{\prime}(|a|)+\delta^{\prime}(|a|-1)). \end{equation} For N=2 this expression reduces to the expression of the Wigner function for qubits in \cite{schwonnek2020wigner}. \subsubsection{Noisy Gell-Mann Operators (Scaled Gaussian Regularization)} Now we will inject noise into the Gell-Mann operators as $A_k=(1-\lambda_k)\Lambda_k^N+\lambda_k\mathbb{I}$.
For these operators Eqn.(\ref{Quditwigidentorho}) transforms to \begin{equation}\label{Quditwigidentorhonoise} \mathscr{W}_{\rho}^{noisy}(\vec{a})=\frac{1}{N}(1+\vec{r} \cdot \vec{a}^{\prime})\mathscr{W}_{\mathbb{I}}^{noisy}(\vec{a}^{\prime}), \end{equation} where $a_{k}^{\prime}=\frac{a_{k}-\lambda_{k}}{1-\lambda_{k}}$. The Fourier transform of the Wigner function modifies to $\widehat{\mathscr{W}}_{\mathbb{I}}^{noisy}(\vec{\xi})=e^{i (\xi_{1} \lambda_1 +\xi_{2} \lambda_2+\xi_{3} \lambda_3)}((N-2)+2\cos(|\xi'|))$ with $|\xi'|=\sqrt{(1-\lambda_1)^{2}\xi^{2}_{1}+(1-\lambda_2)^{2}\xi^{2}_{2}+(1-\lambda_3)^{2}\xi^{2}_{3}}$. Here again, as in the qubit cases, we choose our regularization function to be a scaled version of the Gaussian regularization function, $G'=e^{-\varepsilon ((1-\lambda_1)^{2}\xi^{2}_{1}+(1-\lambda_2)^{2}\xi^{2}_{2}+(1-\lambda_3)^{2}\xi^{2}_{3})}$, to find the convoluted Wigner function. The spherical symmetry obtained by doing this allows the calculation to follow the same course as for the noiseless Gell-Mann operators, with the integration variables modified as $\xi_k\rightarrow\xi'_k=(1-\lambda_k)\xi_k$. The integration proceeds as \begin{align}\label{Quditn3noisyidenwigner} \mathscr{W}_{\mathbb{I}}^{noisy}(\vec{a})&=\frac{1}{(2 \pi)^{3}} \int d^3 \xi\, e^{-i \vec{\xi} \cdot \vec{a}}\, e^{i (\xi_{1} \lambda_1 +\xi_{2} \lambda_2+\xi_{3} \lambda_3)} ((N-2)+2\cos(|\xi'|))e^{-\varepsilon\xi'^2} \nonumber \\&=\frac{-((N-2)\delta^{\prime}(|a'|)+\delta^{\prime}(|a'|-1))}{2 \pi |a'|(1-\lambda_1)(1-\lambda_2)(1-\lambda_3)}. \end{align} From this we calculate the Wigner function for an arbitrary state as \begin{align}\label{Quditn3noisyrhowigner} \mathscr{W}_{\rho}^{noisy}(\vec{a}') =-\frac{(1+\vec{r} \cdot \vec{a}')((N-2)\delta^{\prime}(|a'|)+\delta^{\prime}(|a'|-1))}{2N\pi|a'|(1-\lambda_1)(1-\lambda_2)(1-\lambda_3)}. \end{align} Once again, the derivative of both $\delta$-functions in the above equation is with respect to the variable $|a'|$. We note that this expression for the Wigner function reduces to Eqn.(\ref{n3noisyrhowigner}) for N=2. \subsubsection{Comparing the negativity} For the general qudit case, we can again proceed in the same way as we did for the qubit cases. Using Eqn.(\ref{n3negcalproj}) and Eqn.(\ref{Quditn3mubrhowigner}), the negative volume for the noiseless case can thus be calculated as: \begin{align}\label{mubnegqutrit} \mathscr{W}_{neg}(\rho,\vec{A})&= \int\frac{1}{2}\Big(\mathscr{W}_{\rho}(\vec{a})-|\mathscr{W}_{\rho}(\vec{a})|\Big)d^3a \nonumber\\ &= \int\frac{1}{2}\Bigg(-\frac{(1+\vec{r} \cdot \vec{a})}{2N\pi|a|} ((N-2)\delta^{\prime}(|a|)+\delta^{\prime}(|a|-1)) \nonumber\\ &-\Big|-\frac{(1+\vec{r} \cdot \vec{a})}{2N\pi|a|} ((N-2)\delta^{\prime}(|a|)+\delta^{\prime}(|a|-1))\Big|\Bigg)d^3a. \end{align} Next we will compare the noisy scenarios with this expression. \textit{Noisy Gell-Mann operators}: For this case, as we did for the qubit operators, we have to normalise by dividing by the quantity: \begin{align*} \frac{I_{G'}}{I_{G}}&=\frac{\int d^3\xi e^{-\varepsilon ((1-\lambda_1)^{2}\xi^{2}_{1}+(1-\lambda_2)^{2}\xi^{2}_{2}+(1-\lambda_3)^{2}\xi^{2}_{3})} }{\int d^3\xi e^{-\varepsilon(\xi_1^2+\xi_2^2+\xi_3^2)}} \\ &=\frac{1}{(1-\lambda_1)(1-\lambda_2)(1-\lambda_3)}, \end{align*} where $I_{G'}$ is the integral of the scaled Gaussian and $I_G$ is the integral of the Gaussian function over the Fourier space. The negative volume of the Wigner distribution can thus be calculated as: \begin{align}\label{n3noisynegqudit} \mathscr{W}_{neg}^{noisy}(\rho,\vec{A})=(1-\lambda_1)(1-\lambda_2)(1-\lambda_3)\mathscr{W}_{neg}(\rho,\vec{A}).
\end{align} We see that for an arbitrary dimensional system(qudit) the negative volume of the Wigner function decreases with an increase in the noise vis-à-vis increase in the compatibility. We can numerically integrate the Eqn.(\ref{n3noisynegqudit}) and Eqn.(\ref{mubnegqutrit}) to see how the negative volume varies with an increase in the dimension of the system for a fixed value of noise. Fig.(\ref{fig:12}) shows that the negative volume decrease with an increase in the dimension of the system. In this case we have chosen $\lambda_1=\lambda_2=\lambda_3=\lambda$ \begin{figure} \caption{Variation of negativity of Wigner distribution function with the dimension of the operators N for different amounts of noise $\lambda$ under Scaled Gaussian regularization} \label{fig:12} \end{figure} \subsection{Two operators (n=2)} \subsubsection{Gell-Mann operators} If we choose our operators to be $A_k=\Lambda^{N}_{k},\ k=1,2$ then a general Wigner distribution can still be written as: \begin{equation}\label{Quditn2wigidentorho} \mathscr{W}_{\rho}(\vec{a})=\frac{1}{N}(1+\vec{r} \cdot \vec{a})\mathscr{W}_{\mathbb{I}}(\vec{a}), \end{equation} where $r_k=0$ for $k>3$ and \begin{equation} \begin{gathered} \Lambda_{1}^N=\begin{pmatrix} 0 & 1 & 0 & \cdot & \cdot & \cdot & 0\\ 1 & 0 & 0 & \cdot & \cdot & \cdot & 0\\ \cdot & \cdot & \cdot & \cdot & & & \cdot\\ \cdot & \cdot & \cdot & & \cdot & & \cdot\\ \cdot & \cdot & \cdot & & & \cdot & \cdot\\ 0 & 0 & 0 & \cdot & \cdot & \cdot & 0 \end{pmatrix}, \qquad \Lambda_{2}^N=\begin{pmatrix} 0 & -i & 0 & \cdot & \cdot & \cdot & 0\\ i & 0 & 0 & \cdot & \cdot & \cdot & 0\\ \cdot & \cdot & \cdot & \cdot & & & \cdot\\ \cdot & \cdot & \cdot & & \cdot & & \cdot\\ \cdot & \cdot & \cdot & & & \cdot & \cdot\\ 0 & 0 & 0 & \cdot & \cdot & \cdot & 0 \end{pmatrix}, \end{gathered} \end{equation} The expression for Fourier transform of the Wigner function for identity comes out to be $\widehat{\mathscr{W}_{\mathbb{I}}}(\vec{\xi})=(N-2)+2\cos{|\xi|}$ where $|\xi|=\sqrt{\xi_1^2+\xi_2^2}$. Then it follows that the Wigner function for identity comes out to be \begin{align}\label{Quditn2mubidenwigner} \mathscr{W}_{\mathbb{I}}(\vec{a})&=\frac{1}{(2 \pi)^{2}} \int d^2 \xi e^{-i \vec{\xi} \cdot \vec{a}} ((N-2)+2\cos(|\xi|))e^{-\varepsilon \xi}\nonumber\\ &\rightarrow\frac{-1}{\pi\left(1-|a|^{2}\right)^{3 / 2}}. \end{align} Note that, we still are working in the limit $\varepsilon\rightarrow0$. Using Eqn.(\ref{Quditn2wigidentorho}) and Eqn.(\ref{Quditn2mubidenwigner}) we get \begin{equation}\label{Quditn2mubrhowigner} \mathscr{W}_{\rho}(\vec{a})=-\frac{(1+\vec{r} \cdot \vec{a})}{N\pi}\frac{1}{\left(1-|a|^{2}\right)^{3 / 2}}. \end{equation} \subsubsection{Noisy Gell-Mann Operators (Scaled Regularization)} After adding the noise the operator set becomes $A_k=(1-\lambda_k)\Lambda^{N}_k+\lambda_k\mathbb{I},\ k=1,2$. The Fourier transform of Wigner function comes out to be $\widehat{\mathscr{W}}_{\mathbb{I}}^{noisy}(\vec{\xi})=e^{i (\xi_{1} \lambda_1 +\xi_{2} \lambda_2)}((N-2)+2\cos(|\xi'|)$ with $|\xi'|=\sqrt{(1-\lambda_1)^{2}\xi^{2}_{1}+(1-\lambda_2)^{2}\xi^{2}_{2}}$. The convoluted Wigner function for identity is then given by \begin{align}\label{Quditn2noisyidenwigner} \mathscr{W}_{\mathbb{I}}(\vec{a})&=\frac{1}{(2 \pi)^{2}} \int d^2 \xi e^{-i \vec{\xi}' \cdot \vec{a}} (1+2\cos(|\xi'|))e^{-\varepsilon \xi'}\nonumber\\ &\frac{-1}{\pi(1-\lambda_1)(1-\lambda_2)\left(1-|a'|^{2}\right)^{3 / 2}}, \end{align} where $a'_k=\frac{a_k-\lambda_k}{1-\lambda_k}$. 
The total Wigner function turns out to be: \begin{equation}\label{Quditn2mubrhowignersymm} \mathscr{W}_{\rho}(\vec{a}')=-\frac{(1+\vec{r} \cdot \vec{a}')}{N\pi(1-\lambda_1)(1-\lambda_2)}\frac{1}{\left(1-|a'|^{2}\right)^{3 / 2}}. \end{equation} \subsubsection{Comparing the negativity} First, we will calculate the negative volume for the Wigner distribution of the noiseless operators. We get \begin{align}\label{mubnegn2qutrit} \mathscr{W}_{neg}(\rho,\vec{A})&= \int\frac{1}{2}\left(\mathscr{W}_{\rho}(\vec{a})-|\mathscr{W}_{\rho}(\vec{a})|\right)d^2a \nonumber\\ &= \int\frac{1}{2}\Bigg(-\frac{(1+\vec{r} \cdot \vec{a})}{N\pi}\frac{1}{\left(1-|a|^{2}\right)^{3 / 2}} \nonumber \\ &-\Big|-\frac{(1+\vec{r} \cdot \vec{a})}{N\pi}\frac{1}{\left(1-|a|^{2}\right)^{3 / 2}}\Big|\Bigg)d^2a. \end{align} We see that it is inversely proportional to the dimension $N$ of the system. We will see that the same holds in the noisy case as well. \textit{Noisy Gell-Mann operators}: By adding the noise, the negative volume of the Wigner function for the qudit operators can be calculated in the same way as it was done for the qubit operators. After dividing it by the normalizing factor \begin{align*} \frac{I_{G'}}{I_{G}}&=\frac{\int d^2\xi e^{-\varepsilon\sqrt{ (1-\lambda_1)^{2}\xi^{2}_{1}+(1-\lambda_2)^{2}\xi^{2}_{2}} }}{\int d^2\xi e^{-\varepsilon\sqrt{\xi_1^2+\xi_2^2}}} \\ &=\frac{1}{(1-\lambda_1)(1-\lambda_2)}, \end{align*} the expression comes out to be: \begin{align}\label{noisyneg1qudit} \mathscr{W}_{neg}^{noisy}(\rho,\vec{A})=(1-\lambda_1)(1-\lambda_2)\mathscr{W}_{neg}(\rho,\vec{A}). \end{align} This expression indicates that the negative volume of the Wigner function decreases with an increase in the noise injection for any arbitrary dimension. Fig.(\ref{fig:13}) can be generated by numerically integrating Eqn.(\ref{noisyneg1qudit}) and Eqn.(\ref{mubnegn2qutrit}) for a particular value of the noise parameter (here we have $\lambda_1=\lambda_2=\lambda$). We observe that the negative volume of the Wigner distribution decreases with an increase in the system's dimension for any chosen noise value.\\ \begin{figure} \caption{Variation of negativity of Wigner distribution function with the dimension of the operators N for different amounts of noise $\lambda$ under scaled Gaussian regularization} \label{fig:13} \end{figure} \section{Conclusion and outlook}\label{Sec8} In this work, we have established a connection between the negativity of a quasiprobability distribution function, namely the Wigner function, and the incompatibility among a given set of observables. Both of these are very valuable resources in quantum information theory, so the treatment employed in this work can prove to be quite useful. Initially, we applied our treatment to the noisy eigenprojections of the qubit Pauli operators and observed that with an increase in the noise (decrease in the incompatibility) the negativity of the Wigner distribution decreases. We then applied the same formalism to the noisy qubit Pauli operators and obtained the same behavior. The negative volume is maximized for the maximally unbiased noiseless Pauli qubit operators. We then extended this treatment to higher-dimensional qudit Gell-Mann operators. The same correlation between the negative volume of the Wigner distribution and incompatibility was observed there also.
Although our connection is to some extent qualitative, we have strong reason to believe that it provides a good indication of, and comparison between, the relative degrees of incompatibility among different observables, using the negativity of the Wigner distribution as the index. As discussed, with increasing noise, and hence increasing compatibility, the Wigner function becomes more and more positive, but the use of different regularising functions prevents us from quantifying this; we can only comment on the relative degrees of incompatibility. It is therefore worth investigating whether the regularising function can be kept unchanged throughout the whole treatment or, going one step further, dispensed with altogether, so that the degree of incompatibility could be indexed exactly by the negativity of the Wigner function. It is also worth mentioning that the negative volume of the Wigner distribution function is a candidate for a measure of incompatibility of three observables. For two observables the commutator is a well-known measure of their incompatibility, but to the best of our knowledge there is no established measure of incompatibility for three or more observables. Studying the negativity of this Wigner distribution is therefore a very promising direction. The main difficulty is that, owing to the strong singularities of the Wigner function, it may be hard to compute. One way to handle this is to convolute it with smooth functions that provide enough symmetry to make the integral tractable; as pointed out in the earlier sections, however, this renders the analysis qualitative. To keep the analysis quantitative, so that incompatibility is indexed through the negativity of the Wigner distribution, one has to resort to numerical integration. These aspects are very interesting and worth exploring in the future. Another interesting direction would be to generalize the treatment from the Wigner function to other quasiprobability distributions. Also, in this paper we deal only with static and finite-dimensional observables; we leave the extension to dynamical and infinite-dimensional observables as a subject for future research. \end{document}
\begin{document} \title{Quantum kernels with squeezed-state encoding for machine learning} \author{Long Hin Li} \email{[email protected]} \affiliation{Guangdong-Hong Kong Joint Laboratory of Quantum Matter, Department of Physics, and HKU-UCAS Joint Institute for Theoretical and Computational Physics at Hong Kong, The University of Hong Kong, Pokfulam Road, Hong Kong, China} \author{Dan-Bo Zhang} \email{[email protected]} \affiliation{Guangdong-Hong Kong Joint Laboratory of Quantum Matter, Frontier Research Institute for Physics, South China Normal University, Guangzhou 510006, China} \affiliation{Guangdong Provincial Key Laboratory of Quantum Engineering and Quantum Materials, School of Physics and Telecommunication Engineering, South China Normal University, Guangzhou 510006, China} \author{Z. D. Wang} \email{[email protected]} \affiliation{Guangdong-Hong Kong Joint Laboratory of Quantum Matter, Department of Physics, and HKU-UCAS Joint Institute for Theoretical and Computational Physics at Hong Kong, The University of Hong Kong, Pokfulam Road, Hong Kong, China} \date{\today} \begin{abstract} Kernel methods are powerful for machine learning, as they can represent data in feature spaces in which similarities between samples are faithfully captured. Recently, it has been realized that machine learning enhanced by quantum computing is closely related to kernel methods, with the exponentially large Hilbert space turning out to be a feature space more expressive than classical ones. In this paper, we generalize quantum kernel methods by encoding data into continuous-variable quantum states, which can benefit from the infinite-dimensional Hilbert space of continuous variables. Specifically, we propose squeezed-state encoding, in which data is encoded in either the amplitude or the phase of the squeezing parameter. The kernels can be calculated on a quantum computer and then combined with classical machine learning, e.g. a support vector machine, for training and prediction tasks. Comparisons with related classical kernels are also addressed. Lastly, we discuss physical implementations of squeezed-state encoding for machine learning on quantum platforms such as trapped ions. \end{abstract} \maketitle \section{Introduction} Machine learning can be enhanced by quantum computing by utilizing the remarkable power of quantum computers for information processing~\cite{biamonte_17,lloyd_14,rebentrost_14,dunjko_16,lloyd_16,lloyd_18,havlicek_19,schuld_19,huang2021power,liu_rigorous_2021}. The first breakthrough comes from an exponential quantum speedup for solving linear equations or matrix inversion~\cite{harrow_09,wiebe_12}, which lies at the heart of many machine learning methods. However, while quantum computers are good at linear algebra, since they in essence perform unitary transformations that are linear, this is not enough for machine learning of complicated data, where highly nonlinear transformations are often required~\cite{Goodfellow-et-al-2016}. Another important ingredient for quantum-enhanced machine learning, recognized only very recently, is to introduce non-linearity when encoding classical data into quantum states~\cite{mitarai_18,havlicek_19,schuld_19,lloyd2020quantum,PerezSalinas2020datareuploading,Schuld-2021-PRA,schuld2021supervised}. Such procedures are called quantum feature maps~\cite{havlicek_19,schuld_19}, which enjoy the exponentially large Hilbert space as a feature space for expressing data.
The quantum feature maps are closely related to classical kernel methods in the same spirit as they aim to use feature spaces, such that similarities between samples can be faithfully captured~\cite{christopher_m_bishop_pattern_2006,trevor_hastie_elements_2009}. One striking difference is that feature maps on a quantum computer are explicitly constructed~\cite{schuld2021supervised}. This gives the flexibility of designing or even training powerful quantum kernels that may provide quantum advantages for machine learning. Varies encoding schemes for quantum kernels have been explored. Typically, data vectors are encoded as parameters of quantum gates, such as rotation angles of single-qubit gates. By loading the same data repeatedly into different quantum gates, the quantum data state will be a nonlinear function of the data, which consists of both different high-frequency and low-frequency components~\cite{Schuld-2021-PRA}. Those types of quantum kernels are rather different from classical ones, and while useful applications can be expected, they would have limitations(e.g., similarity between two samples may not be nicely approximated as a periodic function of data vector). On the other hand, quantum computing and quantum machine learning with continuous variables can inherit many properties of classical machine learning~\cite{lau_17,zhang_19,schuld_19,Killoran-cv-NN-2019,zhang-2020-PRL}, e.g., the widely-applied Gaussian kernel can be realized on a quantum computer by coherent-state encoding. Moreover, squeezed-state encoding has been proposed in Ref.~\cite{schuld_19}. This suggests that quantum feature maps by encoding data as continuous-variable states can be promising, by exploiting the infinite dimensionality of continuous-variable for encoding high-density information. In this paper, we focus on one important class of continuous-variable states, the squeezed state, as a quantum kernel for machine learning. We first introduce the squeezing vacuum state, and two different encoding schemes, the amplitude encoding and the phase encoding. Those different data encoding methods can give different kernels, which have their unique properties for modeling data. The kernel can be calculated as an inner product between two quantum states that encode the two data, which further serves as input for machine learning models, such as support vector machines. We will then test those kernels and also compare them with some classical kernels on different datasets classification tasks. Finally, we propose a physical platform for implementation and also discuss further possibilities to expand the kernel libraries. \section{Quantum kernels with squeezed-state encoding} \label{sec:kernels} Let us first introduce some notations. For a sample represented by a data vector $x$, its quantum feature map is realized by encoding the data into a quantum state $\ket{\psi_x}$. The quantum kernel of two samples $x$ and $x'$ is an inner product between their corresponding quantum states, namely $K(x,x')=\braket{\psi_x|\psi_{ x'}}$. A squeezed vacuum state is a minimum uncertainty state satisfying $\Delta\hat{x}\Delta\hat{p}=\frac{1}{4}$, where $\hat{x}$ and $\hat{p}$ and conjugate quadrature operators with either quadrature variance squeezed~\cite{weedbrook_12}. It can be obtained from the vacuum state $\ket{0}$, \begin{align} \ket{z}_s=\hat{S}(z)\ket{0} \end{align} with squeezing operator defined as $\hat{S}(z)=e^{\frac{1}{2}(z^*\hat{a}^2-z\hat{a}^{\dagger 2})}$. 
The position uncertainty is "squeezed" if the amplitude $r$ of the squeezing parameter $z=re^{i\theta}$ is positive. The momentum uncertainty will be enlarged by the same proportion to keep their product constant. The squeezed state can be expanded in terms of Fock state basis by using the property $\hat{S}(z)\hat{a}\hat{S}^\dagger (z)=\hat{a}\cosh{r}+\hat{a}^\dagger e^{i\theta} \sinh{r}$, \begin{align} \ket{z}_s=\frac{1}{\sqrt{\cosh{r}}} \sum_{n=0}^\infty (-1)^n \frac{\sqrt{(2n)!}}{2^n n!}e^{in\theta}\tanh^n{r} \ket{2n} \label{ss} \end{align} Here we introduce the notation separating the amplitude and phase of the squeezed vacuum state $\ket{z}=\ket{r,\theta}$. The inner product between squeezed vacuum states is \begin{align} \braket{r,\theta|r',\theta'}_s=\sqrt{\frac{\sech{r}\sech{r'}}{1-e^{i(\theta'-\theta)}\tanh{r}\tanh{r'}}} \label{spk} \end{align} The inner product will serve as a measure of similarity between data, equivalent to the kernel in machine learning~\cite{trevor_hastie_elements_2009}. This provides a bridge from quantum computing to machine learning. In many instances, we use basic encoding $\ket{\psi_x}=\ket{x}$ with $\ket{x}$ a tensor product of binary qubits or amplitude encoding $\ket{\psi_x}=\sum_{n=0}^N x_n\ket{n}$ to encode data in quantum algorithms. Apart from the discrete variable qubit, encoding with coherent states are also possible~\cite{zhang-2020-PRL}. Different feature maps are able to capture different data patterns. And with the greater amount of possible feature maps, more data patterns can be approximated. For example, the amplitude encoding gives the usual dot product and basic encoding on coherent state gives Gaussian kernel. \subsection{Phase encoding} Given an $N$-dimensional dataset $\{{\bf x}_m\}$, we can encode data into the phase parameters of $N$ squeezed vacuum modes with the amplitude as a hyperparameter~\cite{schuld_19}. The encoding was $\phi: x \rightarrow \ket{c,x}_s$, with $\ket{c,x}_s$ defined in Eq.~(\ref{ss}) and is referred as squeezing feature map with phase encoding. We call this kernel the {\it squeezing phase kernel}. The kernel plot for different data entries is a periodic function, as shown in Fig.~\ref{argu}. The similarity between data valued at multiples of $2\pi$ separations is high. This shares similar properties to the exponential sine squared kernel in machine learning \begin{align} K_{ess}(x,x') = \exp \left(-\frac{2}{l^2}\sin^2 \left( |x-x'|\pi/p \right)\right) \label{ess}, \end{align} where $l$, $p$ are hyperparameters of the length scale and the period, respectively. These kinds of kernels are suitable for data that has features of periodicity. For example, the weather forecasting data and the stock market data. From Fig.~\ref{argu}, the squeezing phase kernel has sharper peaks compared to the exponential sine squared kernel, and may be more suitable for modeling functions with sudden changes. \begin{figure} \caption{Comparison between exponential sine squared kernel for different $l$ from Eq. \ref{ess} (dash line) and squeezing phase kernel for different $c$ from Eq. 
\ref{spk} (solid line).} \label{argu} \end{figure} In addition, for coherent-state encoding~\cite{zhang-2020-PRL}, data can also be encoded into the argument of coherent state that the kernel becomes~(the real part of the inner product), \begin{align} \braket{c,x|c,x'}_c&=\exp\left( -c^2 + c^2 e^{i(x'-x)} \right) \\ \Re{\braket{c,x|c,x'}_c}&= e^{-c^2 + c^2 \cos{(x'-x)}} \cos{(c^2 \sin{(x'-x)})} \end{align} The hyperparameter $c$ has the similar function as the length scale $l$ in the exponential sine squared kernel. We call this kernel the {\it coherent phase kernel}. Although there are no parameters for tuning the desired period, this can always be done by rescaling the data with some multiplication factor. Therefore, both phase encoding of squeezing and coherent feature maps are periodic kernels, which can be used to model functions that are periodic, similar to the exponential sine squared kernel. \subsection{Amplitude encoding} Another way to encode into squeezed states is to encode the data into the amplitude of squeezing parameter $\phi:x \rightarrow \ket{x,0}$, where the phase is set as zero. There is no hyperparameter in this case since the exponential factor vanishes for every equal value in the squeezing vacuum inner product (Eq.(\ref{spk})) and we set it as zero. The resulting kernel is \begin{align} \braket{x,0|x',0}_s=\sqrt{\frac{\sech{x}\sech{x'}}{1-\tanh{x}\tanh{x'}}} \end{align} We call this {\it squeezing amplitude kernel}. The kernel is symmetric and positive semidefinite, and has a shape similar to the Gaussian kernel, as shown in Fig. \ref{amp}. The inner product formed by the basis encoding of the coherent state into the amplitude has been shown to have the similar functional form of Gaussian kernel~\cite{zhang-2020-PRL}, also with no extra hyperparameter, \begin{align} \braket{x,0|x',0}_c=\exp\left(-\frac{1}{2}|x-x'|^2\right) \end{align} The squeezing kernel has a larger spread compared to the Gaussian kernel. It can be used to model functions with larger variance, which varies more slowly with respect to the distance from a center point. \begin{figure} \caption{Comparison between Gaussian kernel (left) and squeezing amplitude kernel (right).} \label{amp} \end{figure} \section{Machine learning with quantum kernels} We now apply quantum kernels with squeezed-state encoding for machine learning. The kernels are obtained through a quantum circuit of evaluating the inner product between two quantum states, and then serve as the input for the support vector machine for training and prediction. We further test those quantum kernels on some standard datasets and compare the performances with those of classical kernels. \subsection{Support vector machine with quantum kernels} There are two main streams of quantum algorithms where the kernel method applies. The first kind is solely based on quantum computers. Famous examples include linear regression~\cite{wiebe_12,zhang_19}, quantum support vector machine~\cite{rebentrost_14}, quantum principal component analysis~\cite{lloyd_14}. The other class is the classical-quantum hybrid approach that a part of computational task is left to classical devices. In this regime, the optimization of the machine learning algorithm is usually outsourced to classical computers. Quantum computers are only required to generate quantum gates that encode data, and then to calculate the inner product between quantum states. 
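For testing purposes it is also convenient to have classical reference implementations of the closed-form kernels of Sec.~\ref{sec:kernels}. The sketch below is ours and purely illustrative (the function names are not part of any library, and the hyperparameter values are arbitrary); in the hybrid scheme just described these numbers would instead be estimated on the quantum device.
\begin{verbatim}
import numpy as np

def squeezing_phase_kernel(x, xp, c):
    # Eq. (spk) with r = r' = c and the data in the phase; the real
    # part is taken, as for the coherent phase kernel.
    v = np.sqrt(1.0 / (np.cosh(c)**2
                       * (1 - np.exp(1j*(xp - x)) * np.tanh(c)**2)))
    return v.real

def coherent_phase_kernel(x, xp, c):
    return np.exp(-c**2 + c**2*np.cos(xp - x)) * np.cos(c**2*np.sin(xp - x))

def squeezing_amplitude_kernel(x, xp):
    # data in the squeezing amplitude, phase set to zero
    return np.sqrt((1.0/(np.cosh(x)*np.cosh(xp)))
                   / (1.0 - np.tanh(x)*np.tanh(xp)))

def gaussian_kernel(x, xp):
    return np.exp(-0.5*np.abs(x - xp)**2)

# sanity check: every kernel equals 1 when both arguments coincide
for k in (lambda u, v: squeezing_phase_kernel(u, v, 1.0),
          lambda u, v: coherent_phase_kernel(u, v, 1.0),
          squeezing_amplitude_kernel,
          gaussian_kernel):
    assert np.isclose(k(0.7, 0.7), 1.0)
\end{verbatim}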
This approach is referred to as quantum kernel estimation~\cite{schuld_19}, which is easier to achieve than the first approach as both the circuit depth and the quantum resources required are less. We can use quantum computers to calculate the proposed quantum kernels above, and the result can be used by classical computers for training traditional machine learning models like support vector machines, principal component analysis, clustering, and Gaussian processes. We take the support vector machine as an example. Firstly, the inner product can be calculated on a quantum computer in two ways. The first is to use the swap test. Taking the squeezing kernel as an example. We need to prepare the quantum state \begin{align} \label{eq:eval_kernel} \ket{\Psi}=\frac{1}{\sqrt{2}}(\ket{0}\ket{z}_s+\ket{1}\ket{z'}_s) \end{align} then take a $\sigma_x$ measurement on the ancillary qubit, to measure the probability of getting $+1$. The inner product can be calculated by $2p-1$. Another approach to calculating the inner product is to apply two consecutive squeezing operators $\hat{S}(z)$ then $\hat{S}^\dagger (z')$, and measure the probability that the qumode is in vacuum state $\ket{0}$. This avoids the construction of states that entangle qubit and qumode in Eq. (\ref{eq:eval_kernel}). Secondly, with kernels at hand, we continue the procedure of machine learning on classical computers. To optimize an SVM classifier, we have to maximize the Lagrangian dual function~\cite{christopher_m_bishop_pattern_2006,trevor_hastie_elements_2009} \begin{align} L(\vec{\alpha})=\sum_{i=1}^M \alpha_i - \frac{1}{2}\sum_{i,j=1}^M y_iy_j \alpha_i \alpha_j K(x_i,x_j) \end{align} subject to the constraint $\sum_{i=1}^M \alpha_i y_i=0$ and $\alpha_i \geq 0$ for every $i$. Optimized values of $\alpha_i$ and $b$ are derived from Karush-Kuhn-Tucker (KKT) conditions, \begin{gather} \begin{align} y_iy(x_i)&=1 \\ y_i \left(\sum_{j \in S} \alpha_j y_j K(x_j,x_i) +b \right) &= 1 \end{align} \\ b = \frac{1}{N_S} \sum_{i \in S} \left( y_i - \sum_{j \in S} \alpha_j y_j K(x_j,x_i) \right) \end{gather} where $S$ denotes the set of support vectors. Then, prediction can be calculated by \begin{align} y(x)=\sum_{i \in S} \alpha_i y_i K(x_i,x) +b \end{align} Therefore, a support vector classifier can be trained after computing the kernel for each pair of data $K(x_j,x_i)$ on the quantum computer, and similar procedures apply to other traditional kernel-based machine learning models. \subsection{Results} As an illustration purpose, we test and compare those kernels mentioned in Sec.~\ref{sec:kernels} for supervised learning of classification on some standard datasets from scikit-learn~\cite{scikit-learn}. Those kernels are used to calculate randomly generated sample datasets of each $200$ data, and the SVM model was trained with the kernel. Then, another set of $200$ data is used for validation to calculate the performance score. The results are shown in Fig.~\ref{svm_sak} for comparing squeezing amplitude kernel with the standard Gaussian kernel, and in Fig.~\ref{svm_cpk} for comparing the coherent phase kernel and exponentiated sine squared kernel. Their corresponding performance scores are listed in Table~\ref{tb_acc1} and Table~\ref{tb_acc2}, respectively. The decision boundary of the squeezing amplitude kernel is comparable to the boundary created by the standard Gaussian kernel. 
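For concreteness, the experiment just described can be emulated classically along the following lines. This is a sketch only: the exact datasets, splits and hyperparameters used for the figures may differ, the multi-dimensional data is treated here as a product of one squeezed mode per feature, and in a genuine experiment the Gram matrix would be estimated on the quantum processor rather than evaluated in closed form.
\begin{verbatim}
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def squeezing_amplitude_gram(X, Y):
    # Gram matrix of the squeezing amplitude kernel, taken as a product
    # over features (one squeezed mode per feature).
    K = np.ones((X.shape[0], Y.shape[0]))
    for d in range(X.shape[1]):
        x = X[:, d][:, None]
        y = Y[:, d][None, :]
        K *= np.sqrt((1.0/(np.cosh(x)*np.cosh(y)))
                     / (1.0 - np.tanh(x)*np.tanh(y)))
    return K

X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=200,
                                          random_state=0)

clf = SVC(kernel=squeezing_amplitude_gram).fit(X_tr, y_tr)
print("validation accuracy:", clf.score(X_te, y_te))
\end{verbatim}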
The variance of the squeezing amplitude kernel is slightly lower than the Gaussian kernel but still demonstrated the ability to capture the highly nonlinear behaviour of the dataset. On the other hand, the decision boundary created for coherent phase kernel and exponentiated sine squared kernels are different for each case but they shared the common shape in general. These demonstrated that the two kernels will be useful in different data patterns. \begin{figure} \caption{Comparison of SVM training results between squeezing amplitude kernel (first row) and Gaussian kernel (second row). It was trained with Python's scikit-learn SVC classifier using customized kernel. The three standard datasets used are "blobs" (first column), "circles" (second column) and "moons" (third column). $400$ data was generated randomly in each case where the training and validating set have $200$ data each. Only the validating data are shown. The accuracy of each training is measured and shown in Table \ref{tb_acc1}.} \label{svm_sak} \end{figure} \begin{table}[h] \begin{tabular}{l|lll} & Blobs & Moons & Circles \\ \cline{1-4} Squeezing amplitude & 0.915 & 0.95 & 0.795 \\ Gaussian & 0.9 & 1.0 & 0.79 \end{tabular} \caption{The accuracy of validation corresponds to Figure~\ref{svm_sak}.} \label{tb_acc1} \end{table} \begin{figure} \caption{Comparison of SVM training results between coherent phase kernel (first row) and exponential sine squared kernel (second row). It was trained with Python's scikit-learn SVC classifier using customized kernel. The three generated datasets used are "moon" (first column), "spiral" (second column) and "sine" (third column). $400$ data was generated randomly in each case where the training and validating set have $200$ data each. Only the validating data are shown. The accuracy of each training is measured and shown in Table~\ref{tb_acc2}. The hyperparameters are tuned by searching for optimized parameters.} \label{svm_cpk} \end{figure} \begin{table}[h!] \begin{tabular}{l|lll} & Moons & Spiral & Sine \\ \cline{1-4} Coherent phase & 1.0 & 0.995 & 0.96 \\ Exp sine-squared & 1.0 & 1.0 & 0.95 \end{tabular} \caption{The accuracy of validation corresponds to Figure~\ref{svm_cpk}.} \label{tb_acc2} \end{table} \subsection{Physical implementation} The whole machine learning procedure only requires the quantum computer at the stage of evaluating quantum kernels, and we briefly discuss its physical implementation on a quantum system. The inner product has to be calculated on a physical system that allows manipulation of both continuous variables and a discrete-variable qubit as control. Promising candidates of quantum platforms capable of hybrid variable quantum computing include superconducting circuit and trapped ion quantum computer~\cite{leibfried_03,haffner_08,monroe_13,ortiz-gutierrez17,Zhangjunhua_2018,fluhmann_19}. We take trapped ion as an example. The motional state of the ion can be used as continuous variables, and various motional states have been realized experimentally~\cite{ortiz-gutierrez17,Zhangjunhua_2018,fluhmann_19}. Remarkably, squeezed vacuum state $\ket{z}_s$ can be created by irradiating two Raman beams with some specific frequency difference. The squeezing parameter will grow exponentially with driving time for the Raman transition that occurs during the process. 
Moreover, internal levels of an ion can serve as a qubit and its coupling to the motional modes can implement the quantum circuit to generate the state in Eq.~\ref{eq:eval_kernel} that is central to the evaluation of quantum kernels. \section{Conclusions} We have illustrated the idea of using different continuous-variable quantum states to generate new nonlinear kernels that can be used to learn different data patterns in machine learning. By encoding the data into either amplitude or argument of the parameter of squeezing, different kernels can be obtained. We have applied those quantum kernels with classical support vector machines for classification tasks on some standard datasets, and their performances are comparable or better to typical classical kernels. Furthermore, we have suggested a quantum platform, the trapped ion quantum computer, which can implement the quantum algorithm involving both discrete-variable qubits and continuous variables. We mention that the squeezing and displacement operator suggested in this article belong to a larger family called Gaussian operations, and general Gaussian-state encoding may be used as quantum kernels for machine learning. Moreover, since Gaussian-state can be characterized by the covariance matrix, a fully exploitation of the infinite dimensionality of continuous variables should refer to non-Gaussian quantum states. The case of non-Gaussian state encoding and its applications as quantum kernels with quantum advantages will be investigated in the future. \begin{acknowledgments} This work was supported by the Key-Area Research and Development Program of GuangDong Province (Grant No. 2019B030330001) and the Collaborative Research Fund of Hong Kong (No. C6005-17G). \end{acknowledgments} \end{document}
\begin{document} \title{Spanning tree packing, edge-connectivity and eigenvalues of graphs with given girth} \begin{abstract} Let $\tau(G)$ and $\kappa'(G)$ denote the spanning tree packing number and the edge-connectivity of a graph $G$, respectively. Proving a conjecture initiated by Cioab\u{a} and Wong, Liu et al. in 2014 showed that for any simple graph $G$ with minimum degree $\delta \ge 2k \ge 4$, if the second largest adjacency eigenvalue of $G$ satisfies $\lambda_2(G) < \delta - \frac{2k-1}{\delta+1}$, then $\tau(G) \ge k$. Similar results involving the Laplacian eigenvalues and the signless Laplacian eigenvalues of $G$ were also obtained. In this paper, we find a function $f(\delta, k, g)$ such that for every graph $G$ with minimum degree $\delta \ge 2k \ge 4$ and girth $g \ge 3$, if its second largest adjacency eigenvalue satisfies $\lambda_2(G) < f(\delta, k, g)$, then $\tau(G) \ge k$. As $f(\delta, k, 3) = \delta - \frac{2k-1}{\delta+1}$, this extends the above-mentioned result of Liu et al. Related results that use the girth of the graph, the Laplacian eigenvalues and the signless Laplacian eigenvalues to describe $\tau(G)$ and $\kappa'(G)$ are also obtained. \noindent {\bf AMS Classification:} 05C50, 05C40 \noindent {\bf Key words:} Girth; Edge-connectivity; Edge-disjoint spanning trees; Spanning tree packing number; Eigenvalue; Quotient matrix \end{abstract} \section{Introduction} We consider finite and simple graphs and follow \cite{BoMu08} for undefined terms and notation. In particular, $\Delta(G), \delta(G), \kappa'(G)$ and $\kappa(G)$ denote the maximum degree, the minimum degree, the edge-connectivity and the connectivity of a graph $G$, respectively. The girth of a graph $G$ is defined as \[ g(G) = \left\{ \begin{array}{ll} \min\{|E(C)|: \mbox{ $C$ is a cycle of $G$ }\} & \mbox{ if $G$ is not acyclic, } \\ \infty & \mbox{ if $G$ is acyclic. } \end{array} \right. \] Let $\overline{d}(G)$ be the average degree of $G$, and $\tau(G)$ be the maximum number of edge-disjoint spanning trees contained in $G$. A literature review on $\tau(G)$ can be found in \cite{Palm01}. As in \cite{BoMu08}, for a vertex subset $S\subseteq V(G)$, $G[S]$ is the subgraph of $G$ induced by $S$. Let $G$ be a simple graph with vertex set $\{v_1,\ldots,v_n\}$. The adjacency matrix of $G$ is the $n \times n$ matrix $A(G)=(a_{uv})$, where $u,v \in V(G)$ and $a_{uv}$ is the number of edges joining $u$ and $v$ in $G$. As $G$ is simple, $A(G)$ is a symmetric $(0, 1)$-matrix. The eigenvalues of $G$ are the eigenvalues of $A(G)$. We use $\lambda_i(G)$ to denote the $i$th largest eigenvalue of $G$, so that $\lambda_1(G)\geq \lambda_2(G)\geq \cdots \geq \lambda_n(G)$. Let $D(G)$ be the degree diagonal matrix of $G$. The matrices $L(G) = D(G)-A(G)$ and $Q(G) = D(G) + A(G)$ are the Laplacian matrix and the signless Laplacian matrix of $G$, respectively. We use $\mu_i(G)$ and $q_i(G)$ to denote the $i$th largest eigenvalue of $L(G)$ and $Q(G)$, respectively. Fiedler \cite{Fied73} initiated the investigation of the relationship between graph connectivity and graph eigenvalues. Motivated by Kirchhoff's matrix tree theorem \cite{Kirc47} and by a problem of Seymour (see Reference [19] of \cite{CiWo12}), Cioab\u{a} and Wong \cite{CiWo12} initiated the following conjecture.
\begin{conjecture} (Cioab\u{a} and Wong \cite{CiWo12}, Gu et al \cite{GLLY16}, Li and Shi \cite{LiSh13} and Liu et al \cite{LiHL14}) \label{conj1} Let $k$ be an integer with $k \geq 2$ and $G$ be a graph with minimum degree $\delta \ge 2k$ and maximum degree $\Delta$. If $\lambda_2(G) < \delta-\frac{2k-1}{\delta+1}$, then $\tau(G) \ge k$. \end{conjecture} Several studies made progresses towards Conjecture \ref{conj1}, as seen in \cite{CiWo12, GLLY16, LiSh13, LiHL14, LHGL14}. The conjecture is finally settled in \cite{LHGL14}. \begin{theorem}(Liu, Hong, Gu and Lai \cite{LHGL14})\label{th1} Let $k \ge 2$ be an integer, and $G$ be a graph with $\delta(G) \geq 2k \ge 4$. Each of the following holds. \\ (i) If $\lambda_{2}(G)< \delta(G) -\frac{2k-1}{\delta(G)+1}$, then $\tau(G)\geq k$. \\ (ii) If $\mu_{n-1}(G) > \frac{2k-1}{\delta(G)+1}$, then $\tau(G)\geq k$. \\ (iii) If $q_{2}(G)<2\delta(G) -\frac{2k-1}{\delta(G)+1}$, then $\tau(G)\geq k$. \end{theorem} Nash-Williams \cite{Nash61} and Tutte \cite{Tutt61} proved a fundamental theorem on spanning tree packing number of a graph $G$. \begin{theorem}(Nash-Williams \cite{Nash61} and Tutte \cite{Tutt61}) \label{tree-packing} Let G be a connected graph and let $k>0$ be an integer. Then $\tau(G)\geq k$ if and only if for any partition $(V_{1}, \ldots, V_t)$ of $V(G)$, $\sum_{i=1}^t d(V_{i}) \ge 2k(t-1).$ \end{theorem} As consequences of Theorem \ref{tree-packing}, relationship between $\tau(G)$ and $\kappa'(G)$ has been investigated, as seen in \cite{Gusf83} and \cite{Kund74}, among others. A characterization is proved in \cite{CaLS09}. \begin{theorem} (Catlin, Lai and Shao \cite{CaLS09}) \label{k-t} Let $k \ge 1$ be an integer. Then $\kappa'(G) \ge 2k$ if and only if for any subset $X \subseteq E(G)$ with $|X| \le k$, $\tau(G - X) \ge k$. \end{theorem} Cioab\u{a} in \cite{Cioa10} initiated the investigation on the relationship between graph adjacency eigenvalues and edge-connectivity. A number of results have been obtained. \begin{theorem} \label{edge-conn} Let $d$ and $k$ be integers with $d \ge k \ge 2$, and let $G$ be a simple graph on $n$ vertices with $\delta = \delta(G) \ge k$. \\ (i) (Cioab\u{a} \cite{Cioa10}) If $G$ is $d$-regular and $\lambda_{2}(G)\leq d-\frac{(k-1)n}{(d+1)(n-d-1)},$ then $\kappa'(G)\geq k.$ \\ (ii) (Cioab\u{a} \cite{Cioa10}) If $G$ is $d$-regular and $\lambda_{2}(G)<d-\frac{2(k-1)}{d+1},$ then $\kappa'(G)\geq k.$ \\ (iii) (Gu et al \cite{GLLY16}) If $\lambda_{2}(G)<\delta-\frac{2(k-1)}{\delta+1},$ then $\kappa'(G)\geq k.$ \\ (iv) (Liu et al \cite{LiHL14}) If $\lambda_{2}(G)\leq \delta-\frac{(k-1)n}{(\delta+1)(n-\delta-1)},$ then $\kappa'(G)\geq k.$ \end{theorem} These motivates the current research. It is natural to understand whether we will have a different range of the eigenvalues to predict the values of $\tau$ or $\kappa'$, when we are restricted to certain graph families such as bipartite graphs. The goal of this study is investigate, when the girth of a graph $G$ is known, the relationship between the eigenvalues of $G$ and $\tau(G)$, as well as $\kappa'(G)$. Motivated by the methods deployed in \cite{LHGL14}, for any graph $G$ with adjacency matrix $A$ and diagonal degree matrix $D$, we define $\lambda_i(G,a)$ to be the $i$th largest eigenvalues of $aD+A$, where $a \ge -1$ is a real number. For any integers $\delta$ and $g$ with $\delta > 0$ and $g \ge 3$, define $t = \lfloor \frac{g-1}{2} \rfloor$, and $n_1^* = n_1^*(\delta, g)$ as follows. 
\begin{equation} \label{n1*} n_{1}^{*}= \left\{ \begin{array}{lc} 1+\delta+\sum_{i=2}^{t}(\delta-1)^{i}, &\,~~~ \text{\mbox{if}~ $g=2t+1$};\\ 2+2(\delta-1)^{t}+\sum_{i=1}^{t-1}(\delta-1)^{i}, &\, ~~~\text{\mbox{if}~ $g=2t+2$}. \end{array} \right. \end{equation} The main results are the following. \begin{theorem}\label{main1} Let $g$ and $k$ be integers with $g \ge 3$ and $k \ge 2$, $a \ge -1$ be a real number, and $G$ be a simple graph of order $n$ with minimum degree $\delta \geq k \geq 2$ and girth $g$. Each of the following holds. \\ (i) If $\displaystyle \lambda_{2}(G, a)\leq (a+1)\delta-\frac{(k-1)n}{n_{1}^{*}(n-n_{1}^{*})}$, then $\kappa'(G)\geq k$. \\ (ii) If $\displaystyle \lambda_{2}(G, a)<(a+1)\delta-\frac{2(k-1)}{n_{1}^{*}}$, then $\kappa'(G)\geq k$. \end{theorem} \begin{theorem}\label{main2} Let $g$ and $k$ be integers with $g \ge 3$ and $k \ge 2$, $a \ge -1$ be a real number, and $G$ be a simple graph of order $n$ with minimum degree $\delta \geq 2k \geq 4$ and girth $g$. If $\displaystyle \lambda_{2}(G, a)< (a+1)\delta-\frac{2k-1}{n_{1}^{*}}$, then $\tau(G)\geq k$. \end{theorem} When we choose $a \in \{0, 1, -1\}$, then Theorems \ref{main1} and \ref{main2} will lead to results using $\lambda_{2}(G)$, $\mu_{n-1}(G)$ and $q_{2}(G)$ to describe $\kappa'(G)$ and $\tau(G)$. In particular, Theorem \ref{main2} has the following corollary. As $n_1^*(\delta, 3) = \delta + 1$, Corollary \ref{cor2} extends Theorem \ref{th1}. \begin{corollary}\label{cor2} Let $g$ and $k$ be integers with $g \ge 3$ and $k \ge 2$, and $G$ be a simple graph of order $n$ with minimum degree $\delta \geq 2k \geq 4$ and girth $g$. Each of the following holds. \\ (i) If $\lambda_{2}(G)< \delta-\frac{2k-1}{n_{1}^{*}}$, then $\tau(G)\geq k$. \\ (ii) If $\mu_{n-1}(G)> \frac{2k-1}{n_{1}^{*}}$, then $\tau(G)\geq k$. \\ (iii) If $q_{2}(G)< 2\delta-\frac{2k-1}{n_{1}^{*}}$, then $\tau(G)\geq k$. \end{corollary} The arguments adopted in this paper are refinements and improvements of those presented in \cite{LiHL14} and \cite{LHGL14}. In the next section, we present the interlacing technique, a common tool in spectral theory of matrices. The proofs of the main results are in the subsequent sections. \section{Preliminaries} The main tool in our paper is the eigenvalue interlacing technique described below. Given two non-increasing real sequences $\theta_{1}\geq \theta_{2}\geq \cdots \geq \theta_{n}$ and $\eta_{1}\geq \eta_{2}\geq \cdots \geq \eta_{m}$ with $n>m,$ the second sequence is said to {\em interlace} the first one if $\theta_{i}\geq \eta_{i}\geq\theta_{n-m+i}$ for $i=1, 2, \ldots, m.$ The interlacing is {\em tight} if exists an integer $k\in[0, m]$ such that $\theta_{i}=\eta_{i}$ for $1\leq i\leq k$ and $\theta_{n-m+i}=\eta_{i}$ for $k+1\leq i\leq m.$ \begin{lemma}(Cauchy Interlacing \cite{BrHa12})\label{le2.1} Let $A$ be a real symmetric matrix and $B$ be a principal submatrix of $A.$ Then the eigenvalues of $B$ interlace the eigenvalues of $A.$ \end{lemma} Consider an $n\times n$ real symmetric matrix \[ M=\left(\begin{array}{ccccccc} M_{1,1}&M_{1,2}&\cdots &M_{1,m}\\ M_{2,1}&M_{2,2}&\cdots &M_{2,m}\\ \vdots& \vdots& \ddots& \vdots\\ M_{m,1}&M_{m,2}&\cdots &M_{m,m}\\ \end{array}\right), \] whose rows and columns are partitioned according to a partitioning $X_{1}, X_{2},\ldots ,X_{m}$ of $\{1,2,\ldots, n\}$. The \emph{quotient matrix} $R$ of the matrix $M$ is the $m\times m$ matrix whose entries are the average row sums of the blocks $M_{i,j}$ of $M$. 
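As a concrete illustration of this construction (and of the interlacing property stated in Lemma \ref{le2.2} below), the following small numerical sketch, included only for illustration and with all names and parameter choices ours, forms $aD+A$ for a random simple graph, computes the quotient matrix of a two-part vertex partition, and checks the interlacing inequalities $\theta_i \ge \eta_i \ge \theta_{n-m+i}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, a = 12, 0.0                      # a = 0 recovers the adjacency matrix A
B = rng.integers(0, 2, size=(n, n))
A = np.triu(B, 1); A = A + A.T      # adjacency matrix of a random simple graph
D = np.diag(A.sum(axis=1))
M = a*D + A

parts = [list(range(0, 5)), list(range(5, n))]   # partition of the vertex set
# quotient matrix: average row sums of the blocks M[P, Q]
R = np.array([[M[np.ix_(P, Q)].sum()/len(P) for Q in parts] for P in parts])

theta = np.sort(np.linalg.eigvalsh(M))[::-1]     # eigenvalues of M
eta = np.sort(np.linalg.eigvals(R).real)[::-1]   # eigenvalues of the quotient
m = len(parts)
ok = all(theta[i] + 1e-9 >= eta[i] and eta[i] + 1e-9 >= theta[n - m + i]
         for i in range(m))
print(ok)   # True: the quotient eigenvalues interlace those of M
\end{verbatim}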
The partition is \emph{equitable} if each block $M_{i,j}$ of $M$ has constant row (and column) sum. \begin{lemma}(Brouwer and Haemers \cite{BrHa12, Haem95})\label{le2.2} Let $M$ be a real symmetric matrix. Then the eigenvalues of every quotient matrix of $M$ interlace the ones of $M.$ Furthermore, if the interlacing is tight, then the partition is equitable. \end{lemma} \section{Proof of Theorem \ref{main1}} Following \cite{BoMu08}, for disjoint subsets $X$ and $Y$ of $V(G)$, let $E(X, Y)$ be the set of edges with one end in $X$ and the other end in $Y$, and \[ e(X, Y)=|E(X, Y)|, \mbox{ and } d(X) = e(X, V(G) - X). \] Tutte \cite{Tutt47} initiated the cage problem, which seeks, for any given integers $d$ and $g$ with $d\geq2$ and $g\geq3$, the smallest possible number $n(d ,g)$ such that there exists a $d$-regular simple graph with girth $g$. A tight lower bound (often referred as the Moore bound) on $n(d ,g)$ can be found in \cite{ExJa11}. \begin{lemma}(Exoo and Jajcay \cite{ExJa11})\label{le3.0} For given integers $d \ge 2$ and $g \ge 3$, let $t = \lfloor \frac{g-1}{2} \rfloor$. Then \begin{equation*} n(d ,g)\geq \left\{ \begin{array}{lc} 1+d\sum_{i=0}^{t-1}(d-1)^{i}, &\, \text{ $g=2t+1$};\\ 2\sum_{i=0}^{t}(d-1)^{i}, &\, \text{ $g=2t+2$}. \end{array} \right. \end{equation*} \end{lemma} We start our arguments with a technical lemma. For a subset $X \subseteq V(G)$, define $\overline{X} = V(G) - X$, and $N_G(X) = \{u \in \overline{X}: \exists ~v \in X$ such that $uv \in E(G)\}$. If $X = \{v\}$, then we use $N_G(v)$ for $N_G(\{v\})$. When $G$ is understood from the context, we often omit the subscript $G$. \begin{lemma}\label{le3.1} Let $G$ be a simple graph with minimum degree $\delta = \delta(G) \ge 2$ and girth $g = g(G) \ge 3$, and $X$ be a vertex subset of $G$. Let $n_1^* = n_1^*(\delta, g)$ be defined as in (\ref{n1*}). If $d(X)<\delta$, then $|X|=n_{1}\geq n_{1}^{*}$. \end{lemma} \begin{proof} For notational convenience, we use $X$ to denote both a vertex subset of $G$ as well as $G[X]$, the subgraph induced by the vertices of $X$. \begin{claim} \label{cl-0} $X$ contains at least a cycle. \end{claim} By contradiction, assume that $X$ is acyclic. Then $|E(X)| \le n_1-1$, and so \[ \delta \cdot n_{1}=\delta \cdot |X|\leq \sum_{v\in X}d_{G}(v)=2|E(X)|+e(X, Y)\leq 2(n_{1}-1)+\delta-1, \] leading to a contradiction $n_{1}\leq \frac{\delta-3}{\delta-2}<1$. This proves Claim \ref{cl-0}. By Claim \ref{cl-0}, $X$ must contain a cycle with length at least $g$. We shall justify the lemma by making a sequence of claims. \begin{claim} \label{cl-1} Each of the following holds. \\ (i) If $g \ge 3$, then there exists a vertex $u_0 \in X$ such that $N(u_0) \cap \overline{X}=\emptyset$. \\ (ii) If $g \ge 3$, then $X$ contains a path $P=u_0u_{1}u_{2}\cdots u_{g-3}$ such that for any $i \in \{0, 1, 2, ..., g-3\}$, $N(u_i) \cap \overline{X} = \emptyset$, for the neighborhood of whose each vertex is contained in $X$. \end{claim} If (i) does not hold, then for every vertex $v\in X$, we always have $N(v)\cap\overline{X}\neq\emptyset$. Fix a vertex $v_{0}\in X$. Then \begin{eqnarray*} d(X) & = & |N(v_{0})\cap\overline{X}|+|e(X-\{v_{0}\}, \overline{X})|\geq|N(v_{0})\cap\overline{X}|+|X-\{v_{0}\}| \\ & \ge & |N(v_{0})\cap\overline{X}|+|N(v_{0})\cap X| =d(v_{0})\geq\delta, \end{eqnarray*} contrary to the fact $d(X)<\delta$. Hence (i) follows. We shall prove (ii) by induction on $g$. By (i), (ii) holds if $g = 3$. Assume that $g \ge 4$ and (ii) holds for smaller values of $g$. 
Thus $X$ contains a path $P' = u_0u_1\cdots u_{g-4}$ such that for any $i \in \{0, 1, 2, ..., g-4\}$, $N(u_i) \cap \overline{X} = \emptyset$. Let $N' = \{u' \in N(u_0): N(u') \cap \overline{X} \neq \emptyset\}$ and $N'' = \{u'' \in N(u_{g-4}): N(u'') \cap \overline{X} \neq \emptyset\}$. Since $g(G) = g$, for any $w \in N(u_{0})$, $N(w) \cap V(P') = \{u_{0}\}$, and for any $w \in N(u_{g-4})$, $N(w) \cap V(P') = \{u_{g-4}\}$. As $u_{g-4} \in X$ and $|N(u_{g-4}) - V(P')| \ge \delta - 1 \ge d(X) \ge |N''|$, either $|N(u_{g-4}) - V(P')| > |N''|$, and so there must be a vertex $u_{g-3} \in N(u_{g-4}) - (V(P') \cup N'')$; or $|N(u_{g-4}) - V(P')| = |N''|$. If $|N(u_{g-4}) - V(P')| > |N''|$, then a path $P=u_0u_{1}u_{2}\cdots u_{g-3}$ satisfying (ii) is found, and so (ii) holds by induction in this case. Hence we assume that $|N(u_{g-4}) - V(P')| = d(X) = |N''|$. This implies that $N' = \emptyset$ as for any $u' \in N'$, there must be a vertex $w' \in \overline{X}$ such that $u'w' \in E(G)$. Since $d(X) = |N''|$, this forces that $u' \in N''$, and so $E(P') \cup \{u_0u', u'u_{g-4}\}$ is a cycle of length $g-2$, contrary to the assumption that the girth of $G$ is $g$. Hence if $|N(u_{g-4}) - V(P')| = d(X) = |N''|$, then $N' = \emptyset$, and so there must be a vertex $u_{-1} \in N(u_0) - V(P')$ such that $N(u_{-1}) \cap \overline{X} = \emptyset$. This implies that, letting $v_i = u_{i-1}$ for $0 \le i \le g-3$, we obtain a path $P = v_0v_1\cdots v_{g-3}$ such that for any $i \in \{0, 1, 2, ..., g-3\}$, $N(v_i) \cap \overline{X} = \emptyset$. Hence (ii) is proved by induction. This justifies the claim. Let $t = \lfloor \frac{g-1}{2} \rfloor$. By Lemma \ref{le3.0} and by Claim \ref{cl-1}(ii), if $g = 2t+1$ is odd, then \begin{eqnarray} |X| & \ge & 1+\delta\sum_{i=0}^{t-1}(\delta-1)^{i}-d(X)-d(X)(\delta-1)-\cdots-d(X)(\delta-1)^{t-2} \\ \nonumber & \ge & 1+\delta\sum_{i=0}^{t-1}(\delta-1)^{i}-\sum_{i=1}^{t-1}(\delta-1)^{i} =1+\delta+\sum_{i=2}^{t}(\delta-1)^{i}=n_{1}^{*}. \end{eqnarray} By the same reason, if $g = 2t+2$ is even, then \begin{eqnarray} |X| & \ge & 2\sum_{i=0}^{t}(\delta-1)^{i}-d(X)-d(X)(\delta-1)-\cdots-d(X)(\delta-1)^{t-2} \\ \nonumber & \ge & 2\sum_{i=0}^{t}(\delta-1)^{i}-\sum_{i=1}^{t-1}(\delta-1)^{i} =2+2(\delta-1)^{t}+\sum_{i=1}^{t-1}(\delta-1)^{i}=n_{1}^{*}. \end{eqnarray} This completes the proof of the lemma. \hspace*{\fill}$\Box$ \end{proof} \subsection{Proof of Theorem \ref{main1}(i)} Suppose that $k$ is an integer with $k \ge 2$. By contradiction, we assume that $ \kappa'(G) = r \leq k-1$. Then there exists a partition $(X, Y)$ with $Y = \overline{X}$ such that $e(X, Y)=r\leq k-1\leq \delta-1$. Let $|X|=n_{1}, |Y|=n_{2}$. By Lemma \ref{le3.1} and as $n_{1}+n_{2}=n$, we have $n_{1}^{*} \leq \min\{n_{1}, n_2\} \le \frac{n}{2} \le n-n_{1}^{*}$. Hence $n_{1}n_{2}=n_{1}(n-n_{1})\geq n_{1}^{*}(n-n_{1}^{*})$. Let $\bar{d_{1}}=\frac{1}{n_{1}}\sum_{v\in X}d(v)$, $\bar{d_{2}}=\frac{1}{n_{2}}\sum_{v\in Y}d(v)$. Then $\bar{d_{1}}, \bar{d_{2}}\geq \delta$. Accordingly, the quotient matrix $R(aD+A)$ of $aD+A$ on the partition $(X, Y)$ becomes: \begin{equation*} R(aD+A)= \left( \begin{array}{cc} (a+1)\bar{d_{1}}-\frac{r}{n_{1}} &\, \frac{r}{n_{1}}\\ \frac{r}{n_{2}} &\, (a+1)\bar{d_{2}}-\frac{r}{n_{2}}\\ \end{array} \right). 
\end{equation*} As the characteristic polynomial of $R(aD+A)$ is $$ \lambda^{2}-[(a+1)\bar{d_{1}}-\frac{r}{n_{1}}+(a+1)\bar{d_{2}}- \frac{r}{n_{2}}]\lambda+[(a+1)\bar{d_{1}}-\frac{r}{n_{1}}][(a+1)\bar{d_{2}} -\frac{r}{n_{2}}]-\frac{r^{2}}{n_{1}n_{2}},$$ we have, by direct computation, \begin{eqnarray} \lambda_{2}(R) &=&\frac{1}{2}\{[(a+1)\bar{d_{1}}-\frac{r}{n_{1}}+ (a+1)\bar{d_{2}}-\frac{r}{n_{2}}] \\ \nonumber & \; & -\sqrt{[(a+1)\bar{d_{1}}-\frac{r}{n_{1}}+(a+1)\bar{d_{2}}-\frac{r}{n_{2}}]^{2} -4[(a+1)\bar{d_{1}}-\frac{r}{n_{1}}][(a+1)\bar{d_{2}}-\frac{r}{n_{2}}]+\frac{4r^{2}}{n_{1}n_{2}}} \} \\ \nonumber &=&\frac{1}{2}\{[(a+1)\bar{d_{1}}-\frac{r}{n_{1}}+(a+1)\bar{d_{2}}-\frac{r}{n_{2}}]- \sqrt{[(a+1)\bar{d_{1}}-\frac{r}{n_{1}}-(a+1)\bar{d_{2}}+\frac{r}{n_{2}}]^{2}+\frac{4r^{2}}{n_{1}n_{2}}}\} \\ \nonumber &=&\frac{1}{2}\{[(a+1)\bar{d_{1}}-\frac{r}{n_{1}}+(a+1)\bar{d_{2}}-\frac{r}{n_{2}}]- \sqrt{[(a+1)(\bar{d_{1}}-\bar{d_{2}})-(\frac{r}{n_{1}}-\frac{r}{n_{2}})]^{2}+\frac{4r^{2}}{n_{1}n_{2}}}\} \\ \nonumber & =&\frac{1}{2}\{[(a+1)\bar{d_{1}}-\frac{r}{n_{1}}+(a+1)\bar{d_{2}}-\frac{r}{n_{2}}] \\ \nonumber &\;& -\sqrt{(a+1)^{2}(\bar{d_{1}}-\bar{d_{2}})^{2}+(\frac{r}{n_{1}}-\frac{r}{n_{2}})^{2}-2(a+1) (\bar{d_{1}}-\bar{d_{2}})(\frac{r}{n_{1}}-\frac{r}{n_{2}})+\frac{4r^{2}}{n_{1}n_{2}}}\} \\ \nonumber &=&\frac{1}{2}\{[(a+1)(\bar{d_{1}}+\bar{d_{2}})-\frac{r}{n_{1}}-\frac{r}{n_{2}}] \\ \nonumber &\; &- \sqrt{(a+1)^{2}(\bar{d_{1}}-\bar{d_{2}})^{2}+(\frac{r}{n_{1}}+\frac{r}{n_{2}})^{2}+ 2(a+1)(\bar{d_{1}}-\bar{d_{2}})(\frac{r}{n_{2}}-\frac{r}{n_{1}})}\} \\ \nonumber &\geq &\frac{1}{2}\{[(a+1)(\bar{d_{1}}+\bar{d_{2}})-\frac{r}{n_{1}}-\frac{r}{n_{2}}] \\ \nonumber &\;& -\sqrt{(a+1)^{2}(\bar{d_{1}}-\bar{d_{2}})^{2}+(\frac{r}{n_{1}}+\frac{r}{n_{2}})^{2}+ 2(a+1)|\bar{d_{1}}-\bar{d_{2}}|(\frac{r}{n_{1}}+\frac{r}{n_{2}})}\} \\ \nonumber &=&\frac{1}{2}\{[(a+1)(\bar{d_{1}}+\bar{d_{2}})-\frac{r}{n_{1}}-\frac{r}{n_{2}}]- [(a+1)|\bar{d_{1}}-\bar{d_{2}}|+(\frac{r}{n_{1}}+\frac{r}{n_{2}})]\} \\ \nonumber &=&\min[(a+1)\bar{d_{1}},(a+1)\bar{d_{2}}]-\frac{rn}{n_{1}n_{2}} \\ \label{L2R} &\geq&(a+1)\delta-\frac{(k-1)n}{n_{1}^{*}(n-n_{1}^{*})}. \end{eqnarray} By Lemma \ref{le2.2}, $\lambda_{2}(G, a)\geq\lambda_{2}(R)\geq(a+1)\delta-\frac{(k-1)n}{n_{1}^{*}(n-n_{1}^{*})}$. By assumption, $\lambda_{2}(G, a)\leq (a+1)\delta-\frac{(k-1)n}{n_{1}^{*}(n-n_{1}^{*})}$, and so we must have $\lambda_{2}(G, a)=\lambda_{2}(R)=(a+1)\delta-\frac{(k-1)n}{n_{1}^{*}(n-n_{1}^{*})}$. It follows that all the inequalities in (\ref{L2R}) must be equalities. Hence $r=k-1$ and $\bar{d_{1}}=\bar{d_{2}}=\delta$, implying that $G$ must be a $\delta$-regular graph, and so $\lambda_{1}(G, a)=(a+1)\delta$. By algebraic manipulation, \begin{equation*} \begin{split} \lambda_{1}(R)=&\frac{1}{2}\{[(a+1)\delta-\frac{r}{n_{1}}+(a+1)\delta-\frac{r}{n_{2}}]\\ +&\sqrt{[(a+1)\delta-\frac{r}{n_{1}}+(a+1)\delta-\frac{r}{n_{2}}]^{2}-4[(a+1)\delta -\frac{r}{n_{1}}][(a+1)\delta-\frac{r}{n_{2}}]+\frac{4r^{2}}{n_{1}n_{2}}}\}\\ =&\frac{1}{2}\{[2(a+1)\delta-\frac{r}{n_{1}}-\frac{r}{n_{2}}]+\sqrt{[(a+1)\delta -\frac{r}{n_{1}}-((a+1)\delta-\frac{r}{n_{2}})]^{2}+\frac{4r^{2}}{n_{1}n_{2}}}\}\\ =&\frac{1}{2}\{[2(a+1)\delta-\frac{r}{n_{1}}-\frac{r}{n_{2}}]+\sqrt{(\frac{r}{n_{1}} -\frac{r}{n_{2}})^{2}+\frac{4r^{2}}{n_{1}n_{2}}}\}\\ =&\frac{1}{2}\{[2(a+1)\delta-\frac{r}{n_{1}}-\frac{r}{n_{2}}]+(\frac{r}{n_{1}}+\frac{r}{n_{2}})\}\\ =&(a+1)\delta. \end{split} \end{equation*} Therefore, the interlacing is tight. By Lemma \ref{le2.2}, the partition is equitable. 
This means that every vertex in $X$ has the same number of neighbors in $Y$. However, by Claim \ref{cl-1}(i) of Lemma \ref{le3.1}, there exists at least one vertex in $X$ without a neighbor in $Y$. This implies that $r=e(X, Y)=k-1=0$, contrary to the assumption that $k \ge 2$. \hspace*{\fill}$\Box$ \subsection{Corollaries of Theorem \ref{main1}(i)} Throughout this subsection, $n_1^*$ is defined as in (\ref{n1*}). To see that Theorem \ref{main1}(ii) follows from Theorem \ref{main1}(i), we observe that as $n_{1}^{*} \leq \min\{n_{1}, n_2\} \le \frac{n}{2} \le n-n_{1}^{*}$, it follows that \begin{equation} \label{2k} (a+1)\delta-\frac{2(k-1)}{n_{1}^{*}} \le (a+1)\delta-\frac{(k-1)n}{n_{1}^{*}(n-n_{1}^{*})}, \end{equation} and so Theorem \ref{main1}(ii) follows from Theorem \ref{main1}(i). For real numbers $a$ and $b$ with $\frac{a}{b}\geq-1$, let $\lambda_{i}(G, a, b)$ be the $i$th largest eigenvalues of the matrix $aD+bA$. Thus $\lambda_{i}(G, a, 1)=\lambda_{i}(G, a)$. \begin{corollary}\label{co3.2} Let $a$ and $b$ be real numbers with with $b \neq 0$ and $\frac{a}{b}\geq-1$, $k$ be an integer with $k\geq 2$, and $G$ be a simple graph with $n = |V(G)|$, $g =g(G)$ and with minimum degree $\delta = \delta(G) \geq k$. Then $\kappa'(G)\geq k$ if one of the following holds. \\ (i) $b>0$ and $\lambda_{2}(G, a, b)\leq (a+b)\delta-\frac{b(k-1)n}{n_{1}^{*}(n-n_{1}^{*})}$. \\ (ii) $b < 0$ and $\lambda_{n-1}(G, a, b)\geq (a+b)\delta-\frac{b(k-1)n}{n_{1}^{*}(n-n_{1}^{*})}$. \end{corollary} \begin{proof} As $aD+bA=b(\frac{a}{b}D+A)$, it follows by definition that \begin{equation} \label{ab} \left\{ \begin{array}{ll} \mbox{ if $b> 0$, } & \mbox{ then $\lambda_{i}(G, a, b)=b\lambda_{i}(G, \frac{a}{b})$; and} \\ \mbox{ if $b< 0$, } & \mbox{ then $\lambda_{n-i+1}(G, a, b)=b\lambda_{i}(G, \frac{a}{b})$.} \end{array} \right. \end{equation} Hence Corollary \ref{co3.2} follows form Theorem \ref{main1}(i). \hspace*{\fill}$\Box$ \end{proof} Choosing $a \in \{0, -1, 1\}$ and $b=1$ in Corollary \ref{co3.2}, we have the following special case. \begin{corollary}\label{co3.3} Let $k$ be an integer with $k\geq 2$, and $G$ be a simple graph with $n = |V(G)|$, $g = g(G)$ and with minimum degree $\delta = \delta(G) \geq k$. Each of the following holds. \\ (i) If $\lambda_{2}(G)\leq \delta-\frac{(k-1)n}{n_{1}^{*}(n-n_{1}^{*})}$, then $\kappa'(G)\geq k$. \\ (ii) If $\mu_{n-1}(G)\geq \frac{(k-1)n}{n_{1}^{*}(n-n_{1}^{*})}$, then $\kappa'(G)\geq k$. \\ (iii) If $q_{2}(G)\leq 2\delta-\frac{(k-1)n}{n_{1}^{*}(n-n_{1}^{*})}$, then $\kappa'(G)\geq k$. \end{corollary} As $n^*_1(\delta, 3) = \delta+1$ and by (\ref{2k}), Theorem \ref{edge-conn} (iii) and (iv) are consequences of Corollary \ref{co3.3}. Corollary \ref{co3.3} also implies the following result on bipartite graphs by setting $g \ge 4$ in Corollary \ref{co3.3}. \begin{corollary}\label{co3.5} Let $G$ be a bipartite graph with minimum degree $\delta \geq k\geq 2$. If $\lambda_{2}(G)<\delta-\frac{k-1}{\delta}$, then $\kappa'(G)\geq k$. \end{corollary} \section{Proof of Theorem \ref{main2} and its Corollaries} Throughout this section, for given integers $\delta$ and $g$, we continue defining $n_1^* = n_1^*(\delta, g)$ as in (\ref{n1*}). We utilize the arguments deployed in \cite{LHGL14} to prove Theorem \ref{main2} by imposing the girth requirement. In particular, the following technical lemma will also be used, with an additional condition $a \ge -1$ to justify the algebraic manipulation needed in the proof of the lemma. 
\begin{lemma} (Lemma 3.2 of \cite{LHGL14}) \label{le4.1} Let $a\geq-1$ be a real number and $G$ be a simple graph with minimum degree $\delta = \delta(G)$. For any two disjoint nonempty vertex subsets $X$ and $Y$, if $\lambda_{2}(G, a)\leq (a+1)\delta-\max\{\frac{d(X)}{|X|}, \frac{d(Y)}{|Y|}\}$, then \[ [e(X, Y)]^{2}\geq [(a+1)\delta-\frac{d(X)}{|X|}-\lambda_{2}(G, a)][(a+1)\delta-\frac{d(Y)}{|Y|}-\lambda_{2}(G, a)]|X||Y|. \] \end{lemma} \noindent {\bf Proof of Theorem \ref{main2}. } Let $V_{1}, \ldots, V_{t}$ be an arbitrary partition of $V(G)$. Without loss of generality, we assume that $d(V_{1})\leq d(V_{2})\leq \cdots \leq d(V_{t})$. By Theorem \ref{tree-packing}, it suffices to show that $\sum_{i=1}^t d(V_{i}) \ge 2k(t-1)$. The inequality holds trivially if $t =1$. Hence we assume that $t \ge 2$. If $d(V_{1})\geq 2k$, then $\sum_{i=1}^t d(V_{i}) \ge t(2k) > 2k(t-1)$. Thus we also assume that $d(V_{1})\leq 2k-1$. Let $s$ be the largest integer such that $d(V_{s}) \leq 2k-1$. Then as $d(V_{1})\leq 2k-1$, $1 \le s \le t$, and if $s < t$, then $d(V_{s+1})\geq 2k$. By Lemma \ref{le3.1}, $|V_{i}|\geq n_{1}^{*}$ for $1\leq i\leq s$. It follows that for any $i$ with $i\leq s$, \begin{equation} \label{L2} \lambda_{2}(G, a)<(a+1)\delta-\frac{2k-1}{n_{1}^{*}}\leq(a+1)\delta -\max\{\frac{d(V_{1})}{|V_{1}|}, \frac{d(V_{i})}{|V_{i}|}\}. \end{equation} By (\ref{L2}) and Lemma \ref{le4.1}, \begin{equation*} \begin{split} [e(V_{1}, V_{i})]^{2}\geq&[(a+1)\delta-\frac{d(V_{1})}{|V_{1}|}-\lambda_{2}(G, a)][(a+1)\delta-\frac{d(V_{i})}{|V_{i}|}-\lambda_{2}(G, a)]|V_{1}|\cdot|V_{i}|\\ >&[\frac{2k-1}{n_{1}^{*}}-\frac{d(V_{1})}{|V_{1}|}]|V_{1}|[\frac{2k-1}{n_{1}^{*}}-\frac{d(V_{i})}{|V_{i}|}]|V_{i}|\\ \geq&[2k-1-d(V_{1})][2k-1-d(V_{i})]\\ \geq&[2k-1-d(V_{i})]^{2}. \end{split} \end{equation*} Hence $e(V_{1}, V_{i})>2k-1-d(V_{i})$, or $e(V_{1}, V_{i})\geq 2k-d(V_{i})$. It follows that $\sum_{i=2}^{s}e(V_{1}, V_{i})\geq \sum_{i=2}^{s}(2k-d(V_{i}))$, and so as $d(V_j) \ge 2k$ for all $j \ge s+1$, we have \begin{eqnarray} \sum_{i=1}^t d(V_{i}) & = & d(V_1) + \sum_{i=2}^s d(V_{i}) + \sum_{i=s+1}^t d(V_{i}) \\ \nonumber & \ge & \sum_{i=2}^s e(V_{1}, V_{i}) + \sum_{i=2}^s d(V_{i}) + \sum_{i=s+1}^t d(V_{i}) \\ \nonumber & \ge & 2k(s-1) - \sum_{i=2}^s d(V_{i}) + \sum_{i=2}^s d(V_{i}) + \sum_{i=s+1}^t d(V_{i}) \\ & \ge & 2k(s-1) + 2k(t-s) = 2k(t-1). \end{eqnarray} Hence by Theorem \ref{tree-packing}, $\tau(G) \ge k$, as desired. This completes the proof of Theorem \ref{main2}. The following seemingly more general result can be derived from Theorem \ref{main2} by arguing similarly as in \cite{LHGL14} and using (\ref{ab}), within certain ranges of the real numbers $a$ and $b$. \begin{corollary}\label{th4.3} Let $a$ and $b$ be real numbers satisfying $b \neq 0$ and $\frac{a}{b} \ge -1$, $k$ be an integer with $k > 0$ and $G$ be a graph with $n = |V(G)|$, $g = g(G)$ and with minimum degree $\delta = \delta(G) \geq2k$. Each of the following holds. \\ (i) If $b>0$ and $\lambda_{2}(G, a, b)< (a+b)\delta-\frac{b(2k-1)}{n_{1}^{*}}$, then $\tau(G)\geq k$. \\ (ii) If $b<0$ and $\lambda_{n-1}(G, a, b)> (a+b)\delta-\frac{b(2k-1)}{n_{1}^{*}}$, then $\tau(G)\geq k$. \end{corollary} Thus Corollary \ref{cor2} now follows by letting $a \in \{0, 1, -1\}$ and $b = 1$ in Corollary \ref{th4.3}. \noindent {\bf Acknowledgement. } The research of R. 
Liu is partially supported by National Natural Science Foundation of China (No.~11571323), Outstanding Young Talent Research Fund of Zhengzhou University (No.~1521315002), the China Postdoctoral Science Foundation (No.~2017M612410) and Foundation for University Key Teacher of Henan Province (No.~2016GGJS-007). The research of Hong-Jian Lai is partially supported by National Natural Science Foundation of China grants CNNSF 11771039 and CNNSF 11771443. The research of Y. Tian is partially supported by National Natural Science Foundation of China grant CNNSF 11531011. \end{document}
\begin{document} \title{Webs and projective structures on a plane} \begin{abstract} We prove that there is a correspondence between projective structures defined by torsion-free connections with skew-symmetric Ricci tensor and Veronese webs on a plane. The correspondence is used to characterise the projective structures in terms of second order ODEs. \end{abstract} \section{Introduction} A web is a family of foliations on a manifold. In the present paper we concentrate on the simplest example which is a 3-web on a plane, i.e. a triple of one-dimensional foliations in general position on $\mathbb{R}^2$ (see \cite{AG,N}). We show that a 3-web defines a projective structure on a plane. The projective structures obtained in this way are very special. Namely, they are defined by linear connections with skew-symmetric Ricci tensor. Additionally, the associated twistor space fibers over the projective space $\mathbb{R} P^1$. The existence of the fibration in the twistor picture suggests that the projective structures defined by 3-webs are two-dimensional counterparts of so-called hyper-CR Einstein-Weyl structures on $\mathbb{R}^3$ \cite{Dun}. In \cite{DK} we showed that the hyper-CR Einstein-Weyl structures are in a one-to-one correspondence with Veronese webs, i.e. special 1-parameter families of foliations introduced by Gelfand and Zakharevich \cite{GZ} in connection to bi-Hamiltonian systems. A similar phenomenon takes place on the plane. Indeed, one can easily extend a 3-web to a Veronese web on a plane and it is an intermediate step in the construction of a projective structure out of a 3-web. The approach gives a new and simple proof of Wong's theorem \cite{W} in the stronger version of Derdzinski \cite{Der}. We also provide local forms of connections with constant skew-symmetric Ricci tensor. The projective structures defined by connections with skew-symmetric Ricci tensors were investigated recently in \cite{Der,DW,R}. In particular \cite{R} provides a characterisation of this class of projective structures in terms of the associated second order differential equation. We describe an alternative approach in terms of the dual equation at the end of the paper. The result is based on our earlier characterisation of Veronese webs (and more generally Kronecker webs) in terms of ODEs \cite{K2} and involves so-called time-preserving contact transformations \cite{JK}. \section{3-webs and Veronese webs} A 3-web on a plane is a triple $\{\mathcal{F}_1,\mathcal{F}_2,\mathcal{F}_3\}$ of one-dimensional foliations such that at any point $x\in \mathbb{R}^2$ any two of them intersect transversely. One can always find a coordinate system $(x,y)$ such that $$ T\mathcal{F}_1=\ker dx,\qquad T\mathcal{F}_3=\ker dy. $$ Then $$ T\mathcal{F}_2=\ker dw=\ker(w_xdx+w_ydy) $$ for some function $w=w(x,y)$. By the assumption on the transversality of the foliations we get that both $w_x$ and $w_y$ are nowhere vanishing. Let us notice that any 3-web can be extended to a 1-parameter family $\{\mathcal{F}_{(s:t)}\}_{(s:t)\in\mathbb{R} P^1}$ of foliations parametrised by points in a projective line $\mathbb{R} P^1$ and such that \begin{equation}\label{eq1} \mathcal{F}_{(1:0)}=\mathcal{F}_1,\qquad\mathcal{F}_{(0:1)}=\mathcal{F}_3,\qquad\mathcal{F}_{(s_0:t_0)}=\mathcal{F}_2 \end{equation} for some fixed point $(s_0:t_0)\in\mathbb{R} P^1$.
Namely, we can consider the following family of one-forms \begin{equation}\label{eq2} \omega_{(s:t)}=st_0w_xdx+ts_0w_ydy \end{equation} and then one sees that condition \eqref{eq1} is satisfied if we define $\mathcal{F}_{(s:t)}$ by $$ T\mathcal{F}_{(s:t)}=\ker\omega_{(s:t)} $$ The so-obtained family $\{\mathcal{F}_{(s:t)}\}$ is very special. It depends linearly on a projective parameter $(s:t)$ and it is an example of so-called Veronese webs \cite{GZ,Z}. In general, a 1-parameter family of corank-one foliations on a manifold $M$ of dimension $n+1$ is a Veronese web if any $x\in M$ has a neighbourhood $U$ such that there exist point-wise independent one-forms $\omega_0,\ldots,\omega_n$ on $U$ such that $$ T\mathcal{F}_{(s:t)}|_U=\ker{s^n\omega_0+s^{n-1}t\omega_1+\cdots+t^n\omega_n}. $$ At any point $x\in M$ the mapping $$ (s:t)\mapsto\mathbb{R}(s^n\omega_0(x)+s^{n-1}t\omega_1(x)+\cdots+t^n\omega_n(x))\in P(T^*_xM) $$ is a Veronese embedding and it justifies the terminology. Specialising to $n=1$ we get the following correspondence \begin{proposition}\label{prop1} Let $(s_0:t_0)\in \mathbb{R} P^1$ be fixed. Any 3-web on a plane extends uniquely to a Veronese web on $\mathbb{R}^2$ such that \eqref{eq1} is satisfied. Conversely, for a Veronese web $\{\mathcal{F}_{(s:t)}\}$, the triple $\{\mathcal{F}_{(1:0)},\mathcal{F}_{(s_0:t_0)},\mathcal{F}_{(0:1)}\}$ is a 3-web. \end{proposition} The uniqueness above follows from the fact that a Veronese curve in $\mathbb{R} P^1$ is determined by values at three distinct points. In higher dimensions there is no such simple correspondence between finite families of foliations and Veronese webs since one has to impose additional integrability conditions on the function $w$. In particular in dimension 3 one gets the Hirota equation \cite{DK,Z}. In what follows, for the sake of convenience, we will use the affine parameter $t=(1:t)\in\mathbb{R} P^1$ rather than the projective one $(s:t)$. The foliation corresponding to $(0:1)$ will be denoted $\mathcal{F}_\infty$. Formula \eqref{eq2} can be equivalently written as \begin{equation}\label{eq2b} \omega_t=t_0w_xdx+tw_ydy \end{equation} We have investigated the geometry of Veronese webs in \cite{JK,K2}. In particular we have introduced a linear connection associated to a web. In the present paper we will denote it $\nabla^\mathcal{F}$. In dimension 2 it can be written explicitly in the following form \begin{eqnarray} \nabla^\mathcal{F}_{\partial_x}\partial_x=\frac{w_yw_{xx}-w_xw_{xy}}{w_xw_y}\partial_x, &\quad& \nabla^\mathcal{F}_{\partial_y}\partial_y=\frac{w_xw_{yy}-w_yw_{xy}}{w_xw_y}\partial_y,\label{eqCon}\\ \nabla^\mathcal{F}_{\partial_x}\partial_y=0, &\quad&\nabla^\mathcal{F}_{\partial_y}\partial_x=0.\nonumber \end{eqnarray} On the other hand, for a 3-web there is a notion of the Chern connection (see \cite{N}). If a Veronese web is defined by a 3-web then $\nabla^\mathcal{F}$ coincides with the Chern connection of the 3-web. Indeed, it can be verified by a direct inspection that the formulae for $\nabla^\mathcal{F}$ can be computed as in \cite[Theorem 1.6]{N}. \begin{proposition}\label{prop2} Let $\{\mathcal{F}_t\}$ be a Veronese web on $\mathbb{R}^2$. The associated connection $\nabla^\mathcal{F}$ has the following properties: \begin{enumerate} \item[(a)] All leaves of $\mathcal{F}_t$ are geodesics of $\nabla^\mathcal{F}$ for any $t\in\mathbb{R}$. \item[(b)] The torsion of $\nabla^\mathcal{F}$ vanishes. \item[(c)] The Ricci curvature tensor of $\nabla^\mathcal{F}$ is skew-symmetric.
\end{enumerate} \end{proposition} \begin{proof} We assume that a Veronese web is defined by \eqref{eq2b}. Then the leaves of $\mathcal{F}_t$ are integral curves of the vector field $$ w_y\partial_x-tw_x\partial_y. $$ We directly compute $$ \nabla^\mathcal{F}_{w_y\partial_x-tw_x\partial_y}(w_y\partial_x-tw_x\partial_y)= \left(\frac{w_y^2w_{xx}-tw_x^2w_{yy}}{w_xw_y}\right)(w_y\partial_x-tw_x\partial_y) $$ which proves Statement (a). Statement (b) immediately follows from the definition of $\nabla^\mathcal{F}$. To prove Statement~(c) we compute non-trivial components of the curvature (3,1)-tensor $R(\nabla^\mathcal{F})$. We get $$ R(\nabla^\mathcal{F})(\partial_x,\partial_y)\partial_x=\rho\partial_x,\qquad R(\nabla^\mathcal{F})(\partial_x,\partial_y)\partial_y=\rho\partial_y, $$ where \begin{equation}\label{eqR} \rho=\frac{w_{xx}w_{xy}}{w_x^2}-\frac{w_{yy}w_{xy}}{w_y^2}-\frac{w_{xxy}}{w_x}+\frac{w_{xyy}}{w_y}. \end{equation} It follows that the Ricci tensor of $\nabla^\mathcal{F}$ is represented by the matrix $$ Ric(\nabla^\mathcal{F})=\left(\begin{array}{cc} 0 & \rho \\ -\rho & 0 \end{array} \right). $$ \end{proof} It can be deduced from \cite[Corollary 7.5]{K2} that any torsion-free connection with skew-symmetric Ricci tensor can be obtained as a connection $\nabla^\mathcal{F}$ for a web. Indeed, in the proof of \cite[Corollary 7.5]{K2} with $m=1$ it is shown how to construct a Veronese web on a plane with a curvature being an arbitrary function. The result was obtained in terms of the canonical frames. Here we give a different argument. Let $\nabla$ be a torsion-free connection on a plane. If $Ric(\nabla)$ is skew-symmetric then it follows from linear algebra that there exists a function $\rho$ such that $R(\nabla)(\partial_x,\partial_y)V=\rho V$ for any vector field $V$. Let us fix a point $x\in\mathbb{R}^2$ and choose a frame $X(x),Y(x)\in T_x\mathbb{R}^2$. Moreover, for any other $y\in\mathbb{R}^2$ let us choose a smooth curve $\gamma_y$ joining $x$ and $y$ and define $X(y),Y(y)\in T_y\mathbb{R}^2$ by the parallel transport of $X(x)$ and $Y(x)$ along $\gamma_y$. In this way we construct two vector fields $X$ and $Y$. It follows from the property $R(\nabla)(\partial_x,\partial_y)V=\rho V$ that the frame bundle of $T\mathbb{R}^2$ reduces to a $GL(1,\mathbb{R})$-bundle and consequently the choice of different curves $\gamma_y$ leads to vector fields $\tilde X$ and $\tilde Y$ which are proportional to $X$ and $Y$ in the same way, i.e. $\tilde X=fX$ and $\tilde Y=fY$ for some function $f\colon\mathbb{R}^2\to\mathbb{R}$. We define a Veronese web $\{\mathcal{F}_t\}$ imposing that the set of leaves of $\mathcal{F}_t$ is the set of integral curves of the vector field $$ X+tY. $$ Note that $\tilde X+t\tilde Y$ has the same integral curves as $X+tY$ and hence the web is well defined. The choice of a different frame at the initial point $x$ leads to a M\"obius transformation of the projective parameter $(s:t)$ which parametrises the foliations. In this way we have proved \begin{theorem}\label{thm1} There is a one-to-one correspondence between Veronese webs (given up to a M\"obius transformation of the projective parameter $(s:t)$) on a plane and torsion-free connections on $\mathbb{R}^2$ with skew-symmetric Ricci tensor. \end{theorem} We provide the following examples as applications of Theorem \ref{thm1}: \vskip 1ex 1. {\bf Flat case.} It is clear that the flat connection corresponds to the linear function $w(x,y)=x+y$ and the associated web is defined by the one-form $$ \omega_t=dx+tdy.
$$ \vskip 2ex 2. {\bf Constant curvature.} In order to find a torsion-free connection with constant skew-symmetric Ricci tensor one has to solve the equation $$ \rho=C, $$ where $\rho$ is given by \eqref{eqR} and $C\in\mathbb{R}$ is constant. The formula for $\rho$ can be written in a more compact way $$ \rho=\left(\frac{w_{xy}}{w_y}\right)_y -\left(\frac{w_{xy}}{w_x}\right)_x= \left(\ln w_y\right)_{xy}-\left(\ln w_x\right)_{xy}= \left(\ln\frac{w_y}{w_x}\right)_{xy}. $$ Thus, the equation $\rho=C$ is solved by taking $$ \frac{w_y}{w_x}=e^{Cxy}, $$ i.e. $w_y=e^{Cxy}w_x$. This equation can be solved using the method of characteristics. However, the knowledge of an exact solution is not necessary because we can always multiply the one-form $\omega_t$ from formula \eqref{eq2b} by a function and the resulting one-form defines the same Veronese web. Thus, multiplying $\omega_t$ by $w_x^{-1}$, we get that the web corresponding to a connection with $\rho=C$ is defined by the one-form $$ dx+te^{Cxy}dy. $$ The connection is given by $$ \nabla^\mathcal{F}_{\partial_x}\partial_x=-Cy\partial_x,\quad\nabla^\mathcal{F}_{\partial_y}\partial_y=Cx\partial_y, \quad \nabla^\mathcal{F}_{\partial_x}\partial_y=0,\quad \nabla^\mathcal{F}_{\partial_y}\partial_x=0. $$ These formulae can be derived directly from equation \eqref{eqCon} because $$ \frac{w_yw_{xx}-w_xw_{xy}}{w_xw_y}=-\frac{w_x}{w_y}\partial_x\left(\frac{w_y}{w_x}\right), \qquad \frac{w_xw_{yy}-w_yw_{xy}}{w_xw_y}=\frac{w_x}{w_y}\partial_y\left(\frac{w_y}{w_x}\right). $$ \vskip 2ex 3. {\bf Wong's theorem.} Derdzinski \cite[Theorem 6.1]{Der} proved that for a torsion-free connection $\nabla$ with skew-symmetric Ricci tensor one can always choose local coordinates $(x_1,x_2)$ such that the Christoffel symbols have the form $\Gamma^1_{11}=-\partial_{x_1}f$, $\Gamma^2_{22}=\partial_{x_2}f$ for some function $f$ and $\Gamma^i_{jk}=0$ unless $i=j=k$. In view of formula~\eqref{eqCon} and our Theorem \ref{thm1} this is evident as we can write $\nabla^\mathcal{F}_{\partial_x}\partial_x=-\partial_x\ln\left(\frac{w_y}{w_x}\right)\partial_x$ and $\nabla^\mathcal{F}_{\partial_y}\partial_y=\partial_y\ln\left(\frac{w_y}{w_x}\right)\partial_y$ if $\frac{w_y}{w_x}>0$ or $\nabla^\mathcal{F}_{\partial_x}\partial_x=-\partial_x\ln\left(-\frac{w_y}{w_x}\right)\partial_x$ and $\nabla^\mathcal{F}_{\partial_y}\partial_y=\partial_y\ln\left(-\frac{w_y}{w_x}\right)\partial_y$ if $\frac{w_y}{w_x}<0$. Note that the sign of $\frac{w_y}{w_x}$ is always fixed because both $w_y$ and $w_x$ never vanish since all foliations intersect transversely. \section{Projective structures} Two connections on a manifold $M$ are projectively equivalent if their sets of unparametrised geodesics coincide. A projective structure is a set of unparametrised geodesics of a connection, or, equivalently, it is a class of projectively equivalent connections. Any projective structure on a plane can be locally described in terms of a second order ODE. Namely, fixing local coordinates $(x,y)$ one can look for an equation in the form \begin{equation}\label{eq3} y''=\Phi(x,y,y') \end{equation} such that the solutions $(x,y(x))$ are geodesics for the projective structure. It can be shown that the equation satisfies \begin{equation}\label{eqcond} \partial_{y'}^4\Phi=0 \end{equation} and conversely any equation satisfying this condition defines a projective structure. The condition is point invariant and in fact any projective structure corresponds to a class of point equivalent equations.
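Explicitly, if the projective structure is represented by a torsion-free connection with Christoffel symbols $\Gamma^k_{ij}$, then eliminating the parameter from the geodesic equations gives (a standard computation, recalled here only for concreteness)
\[
\Phi(x,y,y')=-\Gamma^2_{11}+\big(\Gamma^1_{11}-2\Gamma^2_{12}\big)\,y'+\big(2\Gamma^1_{12}-\Gamma^2_{22}\big)\,(y')^2+\Gamma^1_{22}\,(y')^3,
\]
so condition \eqref{eqcond} simply states that $\Phi$ is a polynomial of degree at most three in $y'$. In particular, for the connection \eqref{eqCon} only the $y'$ and $(y')^2$ terms survive, which is the form of the equation obtained below.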
We will show now that we can construct a projective structure out of a Veronese web. Indeed we have the following \begin{proposition}\label{prop3} If $\{\mathcal{F}_t\}$ is a Veronese web on $\mathbb{R}^2$ then the union of all leaves of all foliations $\mathcal{F}_t$ is a projective structure defined by the associated connection $\nabla^\mathcal{F}$. If $\{\mathcal{F}_t\}$ is given by the one-form \eqref{eq2b} then the corresponding second order ODE is of the form \begin{equation}\label{eq3b} y''=\frac{1}{w_xw_y}\left((w_yw_{xx}-w_xw_{xy})y'+(w_yw_{xy}-w_xw_{yy})(y')^2\right). \end{equation} \end{proposition} \begin{proof} The first part follows directly from Statement (a) of Proposition \ref{prop2}. To get a description in terms of an ODE we recall that the leaves of $\mathcal{F}_t$ are integral curves of the vector field $w_y\partial_x-tw_x\partial_y$. It follows that for a fixed $t$ they are solutions to the following first order equation $$ y'=-t\frac{w_x}{w_y}. $$ Differentiating this equation with respect to $x$ and eliminating parameter $t$ by substitution $t=-\frac{w_y}{w_x}y'$ we get \eqref{eq3b}. \end{proof} Moreover we have \begin{lemma}\label{lemma1} Equation \eqref{eq3b} is point equivalent to the derivative of a first order ODE. \end{lemma} \begin{proof} Let $\phi\colon \mathbb{R}^2\to\mathbb{R}$ be a solution to $$ \partial_y\phi=\frac{w_y}{w_x}. $$ We define the following point transformation $$ \tilde x=x,\qquad \tilde y=\phi(x,y) $$ and verify that in the coordinates $(\tilde x, \tilde y)$ equation \eqref{eq3b} takes the form \begin{equation}\label{eq3c} \tilde y''=\phi_x(\tilde x,\phi^{-1}(\tilde x, \tilde y))_{\tilde x}+\phi_x(\tilde x,\phi^{-1}(\tilde x, \tilde y))_{\tilde y}\tilde y'. \end{equation} In above $\phi_x$ is the derivative of the mapping $(x,y)\mapsto \phi(x,y)$ with respect to the first coordinate, whereas $\phi_x(\tilde x,\phi^{-1}(\tilde x, \tilde y))_{\tilde x}$ and $\phi_x(\tilde x,\phi^{-1}(\tilde x, \tilde y))_{\tilde y}$ are derivatives of $(\tilde x,\tilde y)\mapsto \phi_x(\tilde x,\phi^{-1}(\tilde x, \tilde y))$ with respect to $\tilde x$ and $\tilde y$, respectively. The inverse $\phi^{-1}$ is taken with respect to the second coordinate function. Equation \eqref{eq3c} is the derivative of $$ \tilde y'=\phi_x(\tilde x,\phi^{-1}(\tilde x,\tilde y)). $$ \end{proof} As a corollary we get the following characterisation of linear connections projectively equivalent to a connection with skew-symmetric Ricci tensor. \begin{theorem}\label{thm2} Let $\nabla$ be a linear connection on $\mathbb{R}^2$. The following conditions are equivalent \begin{enumerate} \item[(a)] $\nabla$ is projectively equivalent to a connection with skew-symmetric Ricci curvature tensor. \item[(b)] $\nabla$ is projectively equivalent to the Chern connection of a 3-web (or the connection $\nabla^\mathcal{F}$ associated to a Veronese web $\{\mathcal{F}_t\}$). \item[(c)] Unparametrised geodesics of $\nabla$ are described by solutions to a second order ODE which is the derivative of a first order ODE. \end{enumerate} \end{theorem} \begin{proof} The equivalence (a)$\iff$(b) follows from Theorem \ref{thm1} and the implication (b)$\implies$(c) follows from Lemma \ref{lemma1}. Therefore it is sufficient to prove that a second order ODE which is a derivative of a first order ODE gives a projective structure defined by a connection with skew-symmetric Ricci tensor. This fact was proved in \cite{DW} (see Theorem \ref{thmDW} below). 
Note that the condition \eqref{eqcond} is always satisfied for the derivatives of first order ODEs. \end{proof} \section{Twistor space and dual ODE} The twistor space of a projective structure is the set of unparametrised geodesics. In the case of the projective structure on a plane the twistor space is a manifold of dimension two. Theorem \ref{thm2} should be compared to the following result of Dunajski and West. \begin{theorem}{\cite[Section 6.4, Proposition 3]{DW}}\label{thmDW} There is a one-to-one correspondence between projective structures on a plane for which the twistor space fibers over $\mathbb{R} P^1$ and point equivalent classes of second order ODEs which are derivatives of first order ODEs. \end{theorem} The fibration over $\mathbb{R} P^1$ can be easily seen from the point of view of Veronese webs. Indeed, to a geodesic which is a leaf of $\mathcal{F}_{(s:t)}$ one assigns the point $(s:t)\in\mathbb{R} P^1$. In \cite{K2} we have characterised Veronese webs in terms of ODEs in the following way (the analogous results are also proved for higher order ODEs and systems of ODEs). \begin{theorem}{\cite[Theorem 1.1]{K2}}\label{thmK} There is a one-to-one correspondence between Veronese webs on $\mathbb{R}^2$ and time-preserving contact equivalent classes of second order ODEs given in the form \begin{equation}\label{eq4} z''=F(t,z,z') \end{equation} for which the invariant $$ K_0=-\partial_zF+\frac{1}{2}X_F(\partial_{z'}F)-\frac{1}{4}(\partial_{z'}F)^2 $$ vanishes, where $X_F=\partial_t+z'\partial_z+F\partial_{z'}$ is the total derivative. \end{theorem} The invariant $K_0$ is sometimes called the Jacobi endomorphism \cite{CMS} and it also appears in \cite{G} where it is denoted $T$. It should be stressed that it is not a point invariant of the equation. It is invariant with respect to contact transformations which preserve the independent variable $t$ (see \cite{JK} for the general theory of such transformations). This class of transformations is strictly more rigid than the class of point transformations. Actually, one can also allow the M\"obius transformations of $t$ (i.e. transformations of the form $t\mapsto \frac{at+b}{ct+d}$ where $a,b,c,d\in\mathbb{R}$ are constant and satisfy $ad-bc\neq 0$) and $K_0$ remains invariant. The M\"obius transformations of the independent variable correspond to the transformations of the projective parameter which parametrises the corresponding Veronese web. Summarising, Theorem \ref{thmK} together with Theorem \ref{thm1} give the following characterisation of torsion-free connections with skew-symmetric Ricci tensor in terms of invariants of ODEs. \begin{corollary} There is a one-to-one correspondence between torsion-free connections with skew-symmetric Ricci tensor and second order ODEs satisfying $K_0=0$ and given modulo time-preserving contact transformations and M\"obius transformations of the independent variable. \end{corollary} Equation \eqref{eq4} is dual to equation \eqref{eq3} in the sense of the Cartan duality for second order ODEs. To be more precise, the class of point equivalent equations defined by \eqref{eq4} is dual to the class of point equivalent equations defined by \eqref{eq3}. Equation \eqref{eq4} is an equation on the twistor space, i.e. both $t$ and $z$ can be considered as coordinates on the twistor space. Additionally $t$ is exactly the parameter which defines the fibration over $\mathbb{R} P^1$.
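To illustrate how the invariant $K_0$ is evaluated in practice, the following short script (an illustrative sketch of ours, not part of the original argument; it relies on the \texttt{sympy} computer algebra package, the symbol \texttt{p} stands for $z'$, and the helper name \texttt{K0} is ours) implements the formula of Theorem \ref{thmK} and applies it to two simple right-hand sides:
\begin{verbatim}
import sympy as sp

t, z, p = sp.symbols('t z p')   # p plays the role of z'

def K0(F):
    """Invariant K_0 of the equation z'' = F(t, z, z')."""
    X_F = lambda g: sp.diff(g, t) + p*sp.diff(g, z) + F*sp.diff(g, p)
    return sp.simplify(-sp.diff(F, z)
                       + sp.Rational(1, 2)*X_F(sp.diff(F, p))
                       - sp.Rational(1, 4)*sp.diff(F, p)**2)

print(K0(sp.Integer(0)))   # z'' = 0 gives K_0 = 0
print(K0(z))               # z'' = z gives K_0 = -1
\end{verbatim}
Since $K_0$ is invariant under time-preserving contact transformations, the second output shows that $z''=z$ is not equivalent, in that restricted sense, to any equation with vanishing $K_0$.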
The Veronese web for equation \eqref{eq4} is defined on the space of its solutions which is the $(x,y)$-space for equation \eqref{eq3}. Conversely, the twistor space is the solutions space for equation \eqref{eq3}. Theorem \ref{thmK} together with Theorem \ref{thm2} give the following result, which, in a sense, is dual to the result of \cite{R}. \begin{corollary} A second order ODE is point equivalent to the derivative of a first order ODE if and only if its dual equation is point equivalent to an equation for which $K_0=0$. \end{corollary} \vskip 2ex {\bf Acknowledgements.} I wish to thank Maciej Dunajski for useful discussions. The work has been partially supported by the Polish National Science Centre grant ST1/03902. \end{document}
\begin{document} \title{Experimental implementation of a Raman-assisted six-quanta process} \author{S.O.~Mundhada} \email[Electronic address: ]{[email protected]} \author{A.~Grimm} \author{J.~Venkatraman} \author{Z.K.~Minev} \author{S.~Touzard} \author{N.E.~Frattini} \author{V.V.~Sivak} \author{K.~Sliwa} \altaffiliation{Present address: Quantum Circuits Inc., New Haven, CT 06511} \author{P.~Reinhold} \author{S.~Shankar} \affiliation{Department of Applied Physics, Yale University, New Haven, CT 06511.} \author{M.~Mirrahimi} \affiliation{QUANTIC team, INRIA de Paris, 2 Rue Simone Iff, 75012 Paris, France} \author{M.H.~Devoret} \email[Electronic address: ]{[email protected]} \affiliation{Department of Applied Physics, Yale University, New Haven, CT 06511.} \date{\today} \begin{abstract} \textcolor{black}{Nonlinear processes in the quantum regime are essential for many applications, such as quantum-limited amplification, measurement and control of quantum systems. In particular, the field of quantum error correction relies heavily on high-order nonlinear interactions between various modes of a quantum system. However,} the required order of nonlinearity is often not directly available or weak compared to dissipation present in the system. Here, we experimentally demonstrate a route to obtain higher-order nonlinearity by combining more easily available lower-order nonlinear processes, using a generalization of the Raman transition. In particular, we show a transformation of four photons of a high-Q superconducting resonator into two excitations of a superconducting transmon mode and vice versa. The resulting six-quanta process is obtained by cascading two fourth-order nonlinear processes through a virtual state. \textcolor{black}{We expect this type of process to become a key component of hardware efficient quantum error correction using continuous-variable error correction codes.} \end{abstract} \maketitle \begin{figure} \caption{\textbf{Schematic of Raman-assisted nonlinear processes and their experimental implementation.} (a) The target six-quanta process that exchanges four photons of a high-Q resonator (magenta) with two excitations of a transmon mode (green) and vice versa. (b) Energy level diagram of a high-Q storage resonator at frequency $\omega_a$ coupled to a transmon mode at frequency $\omega_b$ (called conversion mode). The first three transmon eigenstates (denoted by $\ket{g}$, $\ket{e}$ and $\ket{f}$) and the first five eigenstates of the storage resonator (denoted by $\ket{0}$ to $\ket{4}$) are considered. Starting in $\ket{g0}$, the system is prepared in $\ket{f0}$ by applying $\ket{g}\rightarrow\ket{e}$ and $\ket{e}\rightarrow\ket{f}$ Rabi pulses (green arrows). A pump at frequency $\omega_{p1}$ (blue) connects $\ket{f0}$ to a virtual \textcolor{black}{(non-energy-conserving)} state, represented by the dashed line, detuned from $\ket{e2}$ with a detuning $\Delta$. \textcolor{black}{This virtual state acts as an intermediate metastable excitation of the transmon.} A second pump at frequency $\omega_{p2}$ (brown) connects the virtual state to $\ket{g4}$, thus converting the two transmon excitations into four resonator excitations. (c) Frequencies of the pumps and the transitions involved in the scheme. (d) Schematic of the implementation. The high-Q storage mode is formed by an aluminum $\lambda/4$-type 3-dimensional superconducting resonator (magenta), which is dispersively coupled to the conversion transmon (green) and the tomography transmon (red). 
The two $\lambda/2$ stripline resonators coupled to the transmons are used for performing single-shot readout of the respective transmons. } \label{fig:figure_1} \end{figure} \section{Introduction} \textcolor{black}{Encoding quantum information in the large Hilbert space of a harmonic oscillator allows for hardware-efficient quantum error correction~\cite{Gottesman2001,Lassen2010,Leghtas2013,Albert2016,Ofek2016}. A further increase in hardware efficiency can be achieved by protecting the information using an autonomous feedback mechanism. It is possible to achieve such autonomous quantum error correction by using nonlinear driven-dissipative processes to create a decoherence-free manifold of quantum states, within the Hilbert space of the oscillator~\cite{Wolinsky1988,Zanardi1997,Lidar1998,Kempe2001,Cohen2014,Mirrahimi2014,Albert2016a,Leghtas2015,Kapit2016,Kapit2017,Puri2017,Touzard2018,Albert2019}.} In particular, a stabilized manifold spanned by four coherent states of a harmonic oscillator has been proposed for the implementation of a hardware efficient logical qubit~\cite{Leghtas2013,Mirrahimi2014}. Autonomously protecting the logical qubit against dephasing errors requires a four-photon driven-dissipative process, which forces the harmonic oscillator to gain and lose photons in sets of four. Combining such stabilization with correction against photon loss errors using quantum nondemolition parity measurements~\cite{Lutterbach1997,Sun2014,Ofek2016,Rosenblum2018} results in complete first-order quantum error correction (QEC). One approach for engineering such a four-photon driven-dissipative process has been proposed in~\cite{Mundhada2017}. The idea is to implement a six-quanta process that exchanges four photons of a high-Q resonator mode $a$ (destruction operator $\mathbf{a}$) with two excitations of a transmon mode $b$ (eigen states $\ket{g}$, $\ket{e}$, $\ket{f}$) and vice versa, corresponding to an effective interaction given by $\mathbf{a}^4\ket{f}\bra{g}+\mathbf{a}^{\dagger 4}\ket{g}\bra{f}$ (see Fig.~1a). Adding a two-excitation drive and dissipation on the transmon, by employing a combination of techniques demonstrated in references~\cite{Geerlings2013,Leghtas2015}, will then result in a four-photon driven-dissipative process on the high-Q resonator. The implementation of $\mathbf{a}^4\ket{f}\bra{g}+\mathbf{a}^{\dagger 4}\ket{g}\bra{f}$ interaction requires a Raman-assisted cascading~\cite{Steck2007} of two four-wave mixing interactions, each of which exchanges two resonator photons with \textcolor{black}{a virtual (non-energy-conserving) excitation} in the transmon mode and a pump photon, and vice versa. \textcolor{black}{This transition through the virtual state plays a vital role of cascading the two nonlinear processes, and giving an effective higher-order process. On the other hand, mediating the transition through an eigen-state of the system will result in two individual processes in series, instead of a higher-order nonlinearity. Additionally, the virtual state also helps in suppressing the decoherence errors induced by the finite life-time of the transmon mode.} Raman transitions using linear processes~\cite[Ch.~6]{Steck2007} or a combination of one linear and one nonlinear process~\cite{Vool2018} have been previously demonstrated. Our implementation of the $\mathbf{a}^4\ket{f}\bra{g}+\mathbf{a}^{\dagger 4}\ket{g}\bra{f}$ interaction, however, requires the cascading of two nonlinear multi-quanta processes. 
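To see in the simplest possible setting how a virtual intermediate state cascades two couplings into a single effective higher-order coupling, it is instructive to look at the textbook three-level Raman model. The short script below (an illustrative sketch with arbitrary units and made-up parameters, not a model of the actual device; it only uses \texttt{numpy} and \texttt{scipy}) propagates such a Hamiltonian and shows nearly complete population transfer between the two outer states at the time predicted by the adiabatic-elimination estimate $g_{\rm eff}\approx g_1 g_2/\Delta$, with only a small transient population of the detuned intermediate level:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Toy three-level Raman model: outer states |1>, |3>, intermediate |2>
# detuned by Delta from both couplings (arbitrary units, g << Delta).
g1, g2, Delta = 0.5, 0.5, 5.0
H = np.array([[0.0,   g1, 0.0],
              [ g1, Delta,  g2],
              [0.0,   g2, 0.0]])

g_eff  = g1 * g2 / Delta                 # adiabatic-elimination estimate
t_swap = np.pi / (2 * g_eff)             # expected |1> -> |3> transfer time

psi0 = np.array([1.0, 0.0, 0.0], dtype=complex)   # start in |1>
psi  = expm(-1j * H * t_swap) @ psi0
print(np.round(np.abs(psi)**2, 3))       # almost all population in |3>,
                                         # only ~(g/Delta)^2 left in |2>
\end{verbatim}
The same mechanism, with the two four-wave mixing processes playing the role of the couplings, is what produces the effective six-quanta rate discussed below.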
In our experiment we show that not only the Raman-assisted cascading of nonlinear processes is feasible, but also the magnitude of the effective interaction can be made much larger than the damping rates of the high-Q modes, hence, generating a useful interaction for QEC. In principle, the same driven-dissipative process could instead be realized by using a six-wave mixing term in the Josephson cosine potential, addressed using an off-resonant pump. However, the currently achievable magnitude of the six-wave mixing term, obtained from expanding the Josephson cosine potential, is small compared to the dissipation rates of the system and other spurious terms present in the Hamiltonian~\cite[Sec.~I\,C]{SI}. Hence, Raman-assisted virtual cascading of low-order mixing processes is essential for enhancing the strength of the desired four-photon driven-dissipative process for hardware efficient QEC. \textcolor{black}{This paper is organized as follows: Section~\ref{sec:exact_process} is dedicated to experimental demonstration of the cascaded higher-order process. Specifically, subsections \ref{sec:sys_details} and \ref{sec:tuneup} describe the experimental setup and the initial tuneup of the cascaded process, while, subsections \ref{sec:tomo} and \ref{sec:wigners} discuss the tomography of the cascaded process. In section~\ref{sec:discussion} we discuss some limitations of our current experiment and give future directions, followed by conclusions in section~\ref{sec:conclusion}.} \section{Experimental demonstration} \label{sec:exact_process} In order to demonstrate the feasibility of cascading nonlinear processes through virtual states, our experiment focuses on the Raman-assisted $\ket{g4}\leftrightarrow\ket{f0}$ transition as explained in Fig.~1b (see figure caption for explanation). This transition is a precursor to the aforementioned $\mathbf{a}^4\ket{f}\bra{g}+\mathbf{a}^{\dagger 4}\ket{g}\bra{f}$ process which requires the $\ket{g,n}\leftrightarrow\ket{f,n-4}$ transitions to all occur simultaneously. As shown in Fig.~1b, the system is initialized in the $\ket{f0}$ state. The two pumped processes, one connecting $\ket{f0}$ to a virtual state close to $\ket{e2}$ with the rate $g_1$ and the other one connecting the virtual state to $\ket{g4}$ with the rate $g_2$, are nonlinear four-wave mixing processes. The frequencies of the two pumps involved (see Fig.~1c) are \begin{align} \omega_{p1}&=2\tilde\omega_a -\tilde\omega_b + \chi_{bb} - 2\chi_{ab}+\Delta\, \nonumber\\ \omega_{p2}&=2\tilde\omega_a -\tilde\omega_b + 2\chi_{ab}-\Delta\,, \label{eq:pump_frequencies} \end{align} where $\tilde\omega_{a/b}$ are the Stark shifted frequencies of the high-Q resonator and the transmon mode in presence of the pumps, $\chi_{ab}$ is the cross-Kerr and $\chi_{bb}$ is the self-Kerr of the transmon mode. The effective Hamiltonian of the system to second-order in the rotating wave approximation (RWA)~\cite{Mirrahimi2015} is \begin{align} \frac{H_{\rm eff}}{\hbar} \cong g_{\rm 4ph} \left(\ket{g4}\bra{f0}+\ket{f0}\bra{g4}\right), \end{align} where $g_{\rm 4ph}$ is the magnitude of the cascaded process, given by \begin{align} g_{\rm 4ph} = \sqrt{48}g_1 g_2\left(\frac{1}{\Delta}-\frac{1}{\chi_{bb}-4\chi_{ab}+\Delta}\right). 
\label{eq:g4ph} \end{align} For the effective Hamiltonian to be valid, one has to choose the parameters such that $|g_{1,2}|\ll \Delta$, since, as is ubiquitous in Raman transitions, the leakage rate to the intermediate state ($\ket{e2}$ in our case) is directly proportional to the ratios $|\frac{g_{1,2}}{\Delta}|^2$. Detailed derivation and discussion of the effective Hamiltonian is given in~\cite[Sec.~I\,B]{SI}. \begin{figure} \caption{\textbf{Pulse sequence and Rabi oscillations of the cascading process.} (a) Pulse sequence used for locating the $\ket{f0}\leftrightarrow\ket{g4}$ resonance of the system. The system is initialized in $\ket{f0}$ by using $\pi$-pulses on $\ket{g}\leftrightarrow\ket{e}$ and $\ket{e}\leftrightarrow\ket{f}$ transitions. Following this, the two pumps are applied with varying frequency and duration. The frequency difference of the two pumps is maintained constant at $\chi_{bb}-4\chi_{ab}+2\Delta$. Finally an indirect measurement of the storage resonator population is performed using a photon-number selective $\pi$-pulse on the tomography transmon and a measurement pulse on the tomography resonator. Optionally, a measurement of the conversion transmon state can also be performed using a measurement pulse on the conversion resonator. (b) Rabi oscillations in the population of Fock state $\ket{0}$ ($p_0$, colorbar). The x-axis shows the detuning of pump 1 from the $\ket{f0}\leftrightarrow\ket{e2}$ transition, the y-axis shows the duration for which the two pumps are applied. The frequency landscape above the data explains the origin of the two chevron like features.\iffalse The feature on the left is realized when pump 1 is resonant with the $\ket{f0}\leftrightarrow\ket{e2}$ transition whereas the feature on the right is realized when the two pumps are equally detuned by $\Delta$ from $\ket{f0}\leftrightarrow\ket{e2}$ and $\ket{e2}\leftrightarrow\ket{g4}$ corresponding to the desired transition.\fi } \label{fig:figure_2} \end{figure} \subsection{System details} \label{sec:sys_details} The experimental setup for testing our transition requires (i) a high-Q resonator, (ii) a transmon mode for the conversion process, and (iii) a second transmon mode to perform Wigner tomography~\cite{Vlastakis2013} of the resonator. In addition, we need to be able to couple pumps strongly with the conversion transmon, while maintaining the quality factor of various modes of the system. The high-Q storage resonator ($T_1=76\,\mathrm{\mu s}$) is realized as a high purity aluminum, $\lambda/4$-type, post-cavity~\cite{Reagor2013} with frequency $\omega_a/2\pi= \SI{8.03}{\giga\hertz}$ (see Fig.~1c). The resonator is dispersively coupled to two transmons as shown in Fig.~1c. The transmon in the conversion arm has a resonance frequency $\omega_b/2\pi = \SI{5.78}{\giga\hertz}$, anharmonicity $\chi_{bb}/2\pi=\SI{122.6}{\mega\hertz}$ and a cross-Kerr of $\chi_{ab}/2\pi=\SI{7.4}{\mega\hertz}$ with the high-Q resonator. The $T_1$ and $T_2$ of the conversion transmon are $50\,\mathrm{\mu s}$ and $7.6 \,\mathrm{\mu s}$ respectively. The second transmon is employed to perform Wigner tomography on the storage resonator and has a cross-Kerr of $\SI{1.1}{\mega\hertz}$ with it. Both transmons are coupled to low-Q resonators through which we perform single-shot measurements of the transmon state (see~\cite[Sec.~II\,A]{SI} for remaining system parameters). 
In the case of the conversion transmon, the measurement distinguishes, in single-shot, between the first three states $\ket{g}$, $\ket{e}$ and $\ket{f}$. The enclosure of the high-Q resonator acts as a rectangular waveguide high-pass filter with a cutoff at $\sim\SI{9.5}{\giga\hertz}$. Since the two pump frequencies, $\omega_{p1}/2\pi=\SI{10.397}{\giga\hertz}$ and $\omega_{p2}/2\pi=\SI{10.294}{\giga\hertz}$, are above the cutoff, they are applied through the strongly coupled (waveguide mode $Q\le 100$) pin at the top. The high-Q resonator and the transmon modes are below the cutoff and are thus protected from relaxation through this pin. \begin{figure}\label{fig:figure_3} \end{figure} \subsection{Spectroscopic tuneup} \label{sec:tuneup} In order to locate the correct pump frequencies for the transition of interest, we use the pulse sequence shown in Fig.~2a. The system is initialized in $\ket{f0}$ and the two pumps are applied for a variable period of time. The pump frequencies are swept such that the frequency difference is maintained constant at $\omega_{p1}-\omega_{p2}=\chi_{bb}-4\chi_{ab}+2\Delta$. We choose $\Delta/2\pi=\SI{5.1}{\mega\hertz}$ and $g_{1,2}/2\pi\sim \SI{0.5}{\mega\hertz}$. The rising and falling edges of the pump pulses are smoothed using a hyperbolic tangent function with a smoothing time of $\SI{192}{\nano\second}$. These parameters are empirically optimized to reduce the leakage to the $\ket{e2}$ state while achieving a $g_{\rm 4ph}$ that is an order of magnitude faster than the decoherence rates of the system. The resulting resonator state is characterized by applying a photon-number selective $\pi$-pulse~\cite{Leghtas2013a} on the tomography transmon. The pulse has a Gaussian envelope of width $\sigma_{\rm sel}=\SI{480}{\nano\second}$ (total length $4\sigma_{\rm sel}$), resulting in a pulse bandwidth of $\sim\SI{332}{\kilo\hertz}$, which is less than the cross-Kerr between the tomography transmon and the high-Q resonator. As a result the tomography transmon is excited only when the storage resonator is in $\ket{0}$. Finally, the state of the tomography transmon is measured. An optional single-shot measurement of the conversion transmon can also be performed as indicated by the dashed green measurement pulse in Fig.~2a. \begin{figure}\label{fig:figure_4} \end{figure} \iffalse (a), (d) and (b), (e) show experimental and theoretical plots of the Wigner function of the storage resonator after post-selecting the conversion mode in the $\ket{g}$ and $\ket{e}$ states respectively, corresponding to a partial trace of the transmon, after projecting the system with the projector $\ket{g}\bra{g}$ or $\ket{f}\bra{f}$. The post-selection leaves the storage resonator in Fock states $\ket{4}$, $\ket{0}$ respectively. \fi The outcome of the described measurement is shown in Fig.~2b. The population fraction of the Fock state $\ket{0}$ is plotted as a function of the duration for which the pump pulses are applied and the detuning of the first pump $\omega_{p1}$ from the $\ket{f0}\leftrightarrow\ket{e2}$ transition. The data displays Rabi oscillations arising from two processes. The one on the left occurs when pump 1 is resonant with the $\ket{f0}\leftrightarrow\ket{e2}$ transition. The one on the right corresponds to the two pumps being equally detuned from the $\ket{f0}\leftrightarrow\ket{e2}$ and $\ket{e2}\leftrightarrow\ket{g4}$ transitions. This is the Raman-assisted $\ket{f0}\leftrightarrow\ket{g4}$ transition of interest.
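As a quick sanity check (our own back-of-the-envelope evaluation, using the nominal $g_{1,2}/2\pi\sim\SI{0.5}{\mega\hertz}$ quoted above together with the measured Kerr shifts), plugging these numbers into Eq.~\eqref{eq:g4ph} already gives the expected scale of the cascaded rate:
\begin{verbatim}
import numpy as np

# All rates are /2pi, in MHz; g1 = g2 = 0.5 is the nominal value quoted above.
g1, g2 = 0.5, 0.5
Delta, chi_bb, chi_ab = 5.1, 122.6, 7.4

g_4ph = np.sqrt(48) * g1 * g2 * (1/Delta - 1/(chi_bb - 4*chi_ab + Delta))
print(round(g_4ph, 2))   # ~0.32 MHz, an order of magnitude above the decoherence rates
\end{verbatim}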
The resulting chevron pattern for this transition is narrower since the cascaded transition occurs at a slower rate than the $\ket{f0}\leftrightarrow\ket{e2}$ transition. From the frequency of the oscillations we extract $g_{\rm 4ph}/2\pi=\SI{0.32}{\mega\hertz}$\iffalse, which is indeed two orders of magnitude faster than the estimated six-wave mixing process in our system for similar pump strengths~\cite[Sec.~I\,C]{SI}\fi. In separate experiments, we accurately characterize the pump strengths $g_1/2\pi=\SI{0.53}{\mega\hertz}$ and $g_2/2\pi=\SI{0.48}{\mega\hertz}$ by measuring the Stark shifts of the conversion transmon, when the pumps are applied separately at their respective resonance conditions for the $\ket{f0}\leftrightarrow\ket{g4}$ transition~\cite[Sec.~III\,B]{SI}. This eliminates any frequency-dependent attenuation of pump strengths due to the dispersion in the input lines. For these parameters, Eq.~\eqref{eq:g4ph} predicts a $g_{\rm 4ph}/2\pi$ of $\SI{0.33}{\mega\hertz}$, in close agreement with the measured value. \subsection{Partial tomography of $\ket{f0}\leftrightarrow\ket{g4}$ process} \label{sec:tomo} Having found the desired $\ket{f0}\leftrightarrow\ket{g4}$ process, we fix our pump frequencies to be resonant with this transition and proceed to characterize the populations of different Fock states of the storage resonator. These are obtained by varying the frequency at which the photon-number selective pulse on the tomography transmon is applied. The result of this measurement is plotted in Fig.~3a. The population fractions of various Fock states are inferred by taking cross-sections at the resonance frequency of the tomography transmon conditioned on the number of photons in the high-Q resonator. The resonator oscillates between $\ket{0}$ and $\ket{4}$ with some leakage to $\ket{2}$ due to the finite detuning $\Delta$ from $\ket{e2}$ (see the $\omega_{T0/2/4}$ lines in Fig.~3a). The population appearing in $\ket{1}$ and $\ket{3}$ is due to the finite energy relaxation time of the resonator mode. The evolution of the $\ket{0}$, $\ket{2}$ and $\ket{4}$ state populations of the storage resonator and the $\ket{f}$, $\ket{e}$, $\ket{g}$ state populations of the conversion transmon as a function of time are plotted in the first column of Fig.~3b. The conversion transmon populations are measured independently using the dashed-green measurement pulse shown in Fig.~2a. The respective populations oscillate in phase with each other as expected. The amplitude of the oscillations is limited by the $T_2$ of the conversion qubit and the contrast of the two measurements. We are also able to resolve an envelope of fast oscillations in the populations of $\ket{e}$, $\ket{g}$ and $\ket{2}$, $\ket{4}$ states. These are expected for a Raman transition and occur at a rate given by the detuning $\Delta$. The plots in the second column of Fig.~3b show numerical data obtained from simulating the Lindblad master equation of the system~\cite[Sec.~IV]{SI}. The contrast of the simulation is scaled by the measurement contrast of the experimental system. The simulation reproduces the experimental results well, including the fast oscillations found in the data. \subsection{Coherence of $\ket{f0}\leftrightarrow\ket{g4}$ process} \label{sec:wigners} Finally, in order to demonstrate that the oscillations are coherent, we stop the oscillations after a quarter of a period ($\SI{372}{\nano\second}$).
This is expected to prepare a coherent superposition of $\ket{f0}$, $\ket{g4}$ given by $\left(\ket{f0}+\ket{g4}\right)/\sqrt{2}$. We experimentally characterize the state of the system by performing Wigner tomography of the resonator, conditioned on conversion transmon states. As expected, the resonator ends up in Fock state $\ket{4}$ ($\ket{0}$) when the conversion transmon is post-selected in $\ket{g}$ ($\ket{f}$) as shown by Fig.~4a (4b). Moreover, applying a photon number selective $f\rightarrow g$ pulse on the conversion transmon, conditioned on zero photons in the storage resonator, disentangles the transmon from the resonator, leaving the system in $\ket{g}\otimes\left(\ket{0}+\ket{4}\right)/\sqrt{2}$. The Wigner function of the resonator after post-selecting the conversion transmon in $\ket{g}$, shown in Fig.~4c, depicts a $\left(\ket{0}+\ket{4}\right)/\sqrt{2}$ state, thus proving that the oscillations are coherent. For comparison, the ideal Wigner functions of $\ket{4}$, $\ket{0}$ and $\left(\ket{0}+\ket{4}\right)/\sqrt{2}$ are shown in panels d, e and f of Fig.~4 respectively. It is also interesting to note that $\left(\ket{0}+\ket{4}\right)/\sqrt{2}$ is one of the logical states of binomial QEC codes~\cite{Michael2016}. \section{Discussion} \label{sec:discussion} \textcolor{black}{While we have demonstrated a six-quanta $\ket{g4}\leftrightarrow\bra{f0}$ transition, autonomous QEC requires a $\mathbf{a}^4\ket{f}\bra{g}+\mathbf{a}^{\dagger 4} \ket{g}\bra{f}$ process, where all of the $\ket{gn}\leftrightarrow\ket{f(n-4)}$ transitions are resonant simultaneously. This can be accomplished by making the strength of the pumped processes $g_{1,2}$, higher than the cross-Kerr terms $\chi_{ab}$ between the storage resonator and the conversion transmon. However, such pump strengths are not achievable in our current system, due to spurious transitions induced by strong pump strengths, similar to those seen in references~\cite{Sank2016,Lescanne2019}. This limitation, however, should not discourage future applications, since, there have been proposals to increase tolerance for the pump strengths by shunting the transmon with a linear inductor~\cite{Verney2019} or using flux-biased circuits to cancel cross-Kerr between modes~\cite{Elliott2018}.} \textcolor{black}{The leakage to the intermediate state $\ket{e(n-2)}$ could be another limitation for QEC applications. In future iterations of our experiment, this leakage can be minimized by increasing the detuning and making the pulses more adiabatic, albeit at the cost of making the overall process slower. It is also possible to use pulse shaping techniques like stimulated Raman adiabatic passage (STIRAP)~\cite[Ch. 6.2.3]{Steck2007} to implement this transition without any leakage. The effect of this leakage on the error-correction protocol is discussed at length in Ref.~\cite{Mundhada2017}. Moreover Ref.~\cite{Albert2019} details an alternative QEC scheme which uses a similar driven-dissipative process, however, it is insensitive to leakage to the $\ket{e,n-2}$ state.}\\ \section{Conclusion} \label{sec:conclusion} In conclusion, we have shown that nonlinear processes can be cascaded through a virtual state to engineer higher-order nonlinear Hamiltonians. \textcolor{black}{The rate of this highly nonlinear transition is faster than the decoherence rates. The oscillations are coherent and follow the theoretical predictions closely. 
The demonstrated $\ket{g4}\leftrightarrow\ket{f0}$ oscillations are a precursor to the implementation of the complete $\mathbf{a}^4\ket{f}\bra{g}+\mathbf{a}^{\dagger 4}\ket{g}\bra{f}$ Hamiltonian, which is an important component of hardware efficient quantum error correction using Schr\"odinger cat-states.} Moreover, while three- and four-wave mixing processes have played a key role in cQED applications~\cite{Vijay2009,Abdo2011,Macklin2015,White2015,Narla2016,Frattini2017,Metelmann2017}, \textcolor{black}{many proposals will benefit from increasingly higher-order nonlinear interactions~\cite{Mamaev2018,Kapit2016,Lihm2018,Albert2019}}. We have accomplished a deeper goal of verifying that higher-order nonlinear interactions can indeed be engineered by cascading lower-order nonlinear processes. As shown in~\cite[Sec.~I\,A]{SI}, it is possible to cascade any two processes through a virtual state, as long as the commutator of the operators that describe the processes is the operator describing the desired higher-order process. Therefore, such cascading could be useful for the broader field of quantum optics and quantum control. Additionally, the possibility of cascading indicates that advanced techniques like GRAPE (gradient-ascent pulse engineering)~\cite{Khaneja2005,Fouquieres2011} could utilize pulses addressing nonlinear processes to gain additional control knobs over the system, thus potentially increasing the speed and fidelity of the engineered unitary operations. \end{document}
\begin{document} \renewcommand\contentsname{Table of Contents} \title{TOPICS IN QUANTUM INFORMATION AND THE THEORY OF OPEN QUANTUM SYSTEMS} \begin{preface} \pagebreak \chapter*{Dedication} \begin{center} { \large To Iskra} \end{center} \addcontentsline{toc}{chapter}{Dedication} \begin{singlespace} \newcommand*\oldhss{} \let\oldhss\hss \renewcommand*\hss{\oldhss\normalfont} \tableofcontents \let\hss\oldhss \addcontentsline{toc}{chapter}{List of Figures} \addtocontents{lof}{\vspace*{-\baselineskip}} \listoffigures \end{singlespace} \chapter*{Abstract} \addcontentsline{toc}{chapter}{Abstract} This thesis examines seven topics in quantum information and the theory of open quantum systems. The first topic concerns weak measurements and their universality as a means of generating quantum measurements. It is shown that every generalized measurement can be decomposed into a sequence of weak measurements which allows us to think of measurements as resulting form continuous stochastic processes. The second topic is an application of the decomposition into weak measurements to the theory of entanglement. Necessary and sufficient differential conditions for entanglement monotones are derived and are used to find a new entanglement monotone for three-qubit states. The third topic examines the performance of different master equations for the description of non-Markovian dynamics. The system studied is a qubit coupled to a spin bath via the Ising interaction. The fourth topic studies continuous quantum error correction in the case of non-Markovian decoherence. It is shown that due to the existence of a Zeno regime in non-Markovian dynamics, the performance of continuous quantum error correction may exhibit a quadratic improvement if the time resolution of the error-correcting operations is sufficiently high. The fifth topic concerns conditions for correctability of subsystem codes in the case of continuous decoherence. The obtained conditions on the Lindbladian and the system-environment Hamiltonian can be thought of as generalizations of the previously known conditions for noiseless subsystems to the case where the subsystem is time-dependent. The sixth topic examines the robustness of quantum error-correcting codes against initialization errors. It is shown that operator codes are robust against imperfect initialization without the need for restriction of the standard error-correction conditions. For this purpose, a new measure of fidelity for encoded information is introduced and its properties are discussed. The last topic concerns holonomic quantum computation and stabilizer codes. A fault-tolerant scheme for holonomic computations is presented, demonstrating the scalability of the holonomic method. The scheme opens the possibility for combining the benefits of error correction with the inherent resilience of the holonomic approach. \end{preface} \chapter*{Chapter 1: \hspace{1pt} Introduction} \addcontentsline{toc}{chapter}{Chapter 1:\hspace{0.15cm} Introduction} \textbf{1.1 \hspace{2pt} Quantum information and open quantum systems} \addcontentsline{toc}{section}{1.1 \hspace{0.15cm} Quantum information and open quantum systems} The field of quantum information and quantum computation has grown rapidly during the last two decades \cite{NieChu00}. It has been shown that quantum systems can be used for information processing tasks that cannot be accomplished by classical means. 
Examples include quantum algorithms that can outperform the best known classical algorithms, such as Shor's factoring algorithm \cite{Shor97} or Grover's search algorithm \cite{Grover97}, quantum communication protocols which use entanglement for teleportation of quantum states \cite{BBC93} or superdense coding \cite{BW92}, or quantum cryptographic protocols which offer provably secure ways of confidential key distribution between distant parties \cite{BB84}. This has triggered an immense amount of research, leading to advances in many areas of quantum physics. One area that has developed significantly as a result of the new growing field is that of open quantum systems. This development has been stimulated on one hand by the need to understand the full spectrum of operations that can be applied to a quantum state, as well as the information processing tasks that can be accomplished with them. Except for unitary transformations, which generally describe the dynamics of closed systems, the tools of quantum information science involve also measurements, completely positive (CP) maps \cite{NieChu00}, and even non-CP operations \cite{ShaLid07}. These more general operations result from interactions of the system of interest with auxiliary systems, and thus require knowledge of the dynamics of open quantum systems. At the same time, it has been imperative to understand and find means to overcome the effects of noise on quantum information. Quantum superpositions, which are crucial for the workings of most quantum information processing schemes, can be easily destroyed by external interactions. This process, known as decoherence, has presented a major obstacle to the construction of reliable quantum information devices. This has prompted studies on the mechanisms of information loss in open quantum systems and the invention of methods to overcome them, giving rise to one of the pillars of quantum information science---the theory of quantum error correction \cite{Shor95, Ste96, Bennett96c, KL96, DG98, ZR97, LCW98, LBKW01, KLV00, DeF00, KBLW01, YGB01, KLP05, KLPL06, BKK07}. Quantum error correction studies the information-preserving structures under open-system dynamics and the methods for encoding and processing of information using these structures. A major result in the theory of error correction states that if the error rate per information unit is below a certain value, by the use of fault-tolerant techniques and concatenation, an arbitrarily large information processing task can be implemented reliably with a modest overhead of resources \cite{Sho96, DVS96, KL96, ABO98, Kit97, KLZ98, Got97', Got97, Pre99}. This result, known as the accuracy threshold theorem, is of fundamental significance for quantum information science. It shows that despite the unavoidable effects of noise, scalable quantum information processing is possible in principle. In this thesis, we examine topics from three intersecting areas in the theory open quantum systems and quantum information---the deterministic dynamics of open quantum systems, quantum measurements, and quantum error correction. \subsection*{1.1.1 \hspace{2pt} Deterministic dynamics of open quantum systems} \addcontentsline{toc}{subsection}{1.1.1 \hspace{0.15cm} Deterministic dynamics of open quantum systems} All transformations in quantum mechanics, except for those that result from measurements, are usually thought of as arising from continuous evolution driven by a Hamiltonian that acts on the system of interest and possibly other systems. 
These transformations are therefore the result of the unitary evolution of a larger system that contains the system in question. Alternative interpretations are also possible---for example some transformation can be thought of as resulting from measurements whose outcomes are discarded. This description, however, can also be understood as originating from unitary evolution of a system which includes the measurement apparatus and all systems on which the outcome has been imprinted. Including the environment in the description is generally difficult due to the large number of environment degrees of freedom. This is why it is useful to have a description which involves only the effective evolution of the reduced density operator of the system. When the system and the environment are initially uncorrelated, the effective evolution of the density operator of the system can be described by a completely positive trace-preserving (CPTP) linear map \cite{Kraus83}. CPTP maps are widely used in quantum information science for describing noise processes and operations on quantum states \cite{NieChu00}. They do not, however, describe the most general form of transformation of the state of an open system, since the initial state of the system and the environment can be correlated in a way which gives rise to non-CP transformations. Furthermore, the effective transformation by itself does not provide direct insights into the process that drives the transformation. For the latter one needs a description in terms of a generator of the evolution, similar to the way the Schr\"{o}dinger equation describes the evolution of a closed system as generated by a Hamiltonian. The main difficulty in obtaining such a description for open systems is that the evolution of the reduced density matrix of the system is subject to non-trivial memory effects arising from the interaction with the environment \cite{BrePet02}. In the limit where the memory of the environment is short-lived, the evolution of an open system can be described \cite{BrePet02} by a time-local semi-group master equation in the Lindblad form \cite{Lin76}. Such an equation can be thought of as corresponding to a sequence of weak (infinitesimal) CPTP maps. When the memory of the environment cannot be ignored and the effective transformation on the initial state (which is not necessarily CP) is reversible, the evolution can be described by a time-local master equation, known as the \textit{time-convolutionless} (TCL) master equation \cite{Shibata77,ShiAri80}. In contrast to the Lindblad equation, this equation does not describe completely positive evolution. The most general continuous deterministic evolution of an open quantum system is described by the Nakajima-Zwanzig (NZ) equation \cite{Nak58, Zwa60}. This equation involves convolution in time. Both the TCL and NZ equations are quite complicated to obtain from first principles and are usually used for perturbative descriptions. Somewhere in between the Lindblad equation and the TCL or NZ equations are the phenomenological post-Markovian master equations such as the one proposed in Ref.~\cite{ShabaniLidar:05}. In this thesis, we will examine the deterministic evolution of open quantum systems both from the point of view of the full evolution of the system and the environment and from the point of view of the reduced dynamics of the system. 
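For concreteness (this display is standard and is not part of the original text), the Lindblad form referred to above reads
\[
\frac{d\rho}{dt}=-\frac{i}{\hbar}\,[H,\rho]+\sum_k\Big(L_k\rho L_k^{\dagger}-\tfrac{1}{2}\{L_k^{\dagger}L_k,\rho\}\Big),
\]
where $H$ is an effective Hamiltonian and the Lindblad operators $L_k$ encode the coupling to the environment; each infinitesimal time step of this equation is a weak CPTP map, which is the sense in which the semi-group evolution mentioned above corresponds to a sequence of weak completely positive maps.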
We will study the performance of different master equations for the description of the non-Markovian evolution of a qubit coupled to a spin bath \cite{KORL07}, compare Markovian and non-Markovian models in light of continuous quantum error correction \cite{OB07}, and study the conditions for preservation of encoded information under Markovian evolution of the system and general Hamiltonian evolution of the system and the environment \cite{OLB08}. \subsection*{1.1.2 \hspace{2pt} Quantum measurements} \addcontentsline{toc}{subsection}{1.1.2 \hspace{0.15cm} Quantum measurements} In addition to deterministic transformations, the state of an open quantum system can also undergo stochastic transformations. These are transformations for which the state may change in a number of different ways with non-unit probability. Since according to the postulates of quantum mechanics the only non-deterministic transformations are those that result from measurements \cite{vonNeumann,Lueders}, stochastic transformations most generally result from measurements applied on the system and its environment. Just like deterministic transformations, stochastic transformations need not be completely positive. If the system of interest is initially entangled with its environment and after some joint unitary evolution of the system and the environment a measurement is performed on the environment, the effective transformation on the system resulting from this measurement need not be CP. The class of completely positive stochastic operations is commonly referred to as generalized measurements \cite{Kraus83}. Although this class includes standard projective measurements \cite{vonNeumann,Lueders} as well as other operations whose outcomes reveal information about the state, not all operations in this category reveal information. Some operations simply consist of deterministic operations applied with probabilities that do not depend on the state, i.e., they amount to \textit{trivial} measurements. In recent years, a special type of generalized quantum measurements, the so-called \textit{weak} measurements \cite{AAV88, AV89, Leg89, Per89, AV90}, has become of significant interest. A measurement is called weak if all of its outcomes result in small (infinitesimal) changes to the state. Weak measurements have been studied both in the abstract and as a means of understanding systems with continuous monitoring. In the latter case, we can think of the evolution as the limit of a sequence of weak measurements, which gives rise to continuous stochastic evolutions called {\it quantum trajectories} (see, e.g., \cite{Car93, DCM92, GPZ92, Gis84, Dio88, GP92, PK98}). Such evolutions have also been used as models of decoherence (see, e.g., \cite{Brun02}). Weak measurements have found applications in feedback quantum control schemes \cite{DHJMT00} such as state preparation \cite{Jac03, SJMBH04, SvHM04, WR06, CJ06} or continuous quantum error correction \cite{ADL02, SarMil05g}. In this thesis, we look at weak measurements as a means of generating quantum transformations \cite{OB05}. We show that any generalized measurement can be implemented as a sequence of weak measurements, which allows us to use the tools of differential calculus in studies concerning measurement-driven transformations. We apply this result to the theory of entanglement, deriving necessary and sufficient conditions for a function on quantum states to be an entanglement monotone \cite{OB06}.
We use these conditions to find a new entanglement monotone for three-qubit pure states, a subject of previously unsuccessful inquiries \cite{Gingrich02}. We also discuss the use of weak measurements for continuous quantum error correction. \subsection*{1.1.3 \hspace{2pt} Quantum error correction} \addcontentsline{toc}{subsection}{1.1.3 \hspace{0.15cm} Quantum error correction} Whether deterministic or stochastic, the evolution of a system coupled to its environment is generally irreversible. This is because the environment, by definition, is outside of the experimenter's control. As irreversible transformations involve loss of information, they could be detrimental to a quantum information scheme unless an error-correcting method is employed. A common form of error correction involves encoding the Hilbert space of a single information unit, say a qubit, in a subspace of the Hilbert space of a larger number of qubits \cite{Shor95, Ste96, Bennett96c, KL96}. The encoding is such that if a single qubit in the code undergoes an error, the original state can be recovered by applying an appropriate operation. Clearly, there is a chance that more than one qubit undergoes an error, but according to the theory of fault tolerance \cite{Sho96, DVS96, KL96, ABO98, Kit97, KLZ98, Got97', Got97, Pre99} this problem can be dealt with by the use of fault-tolerant techniques and concatenation. Error correction encompasses a wide variety of methods, each suitable for different types of noise, different tasks, or using different resources. Examples include passive error-correction methods which protect against correlated errors, such as decoherence-free subspaces \cite{DG98, ZR97, LCW98, LBKW01} and subsystems \cite{KLV00, DeF00, KBLW01, YGB01}, the standard active methods \cite{Shor95, Ste96, Bennett96c, KL96} which are suitable for fault-tolerant computation \cite{Got97}, entanglement assisted quantum codes \cite{BDH06, BDH06'} useful in quantum communication, or linear quantum error-correction codes \cite{ShaLid07} that correct non-completely positive errors. Recently, a general formalism called operator quantum error correction (OQEC) \cite{KLP05, KLPL06, BKK07} was introduced, which unified in a common framework all previously proposed error-correction methods. This formalism employs the most general encoding of information---encoding in subsystems \cite{Knill06, VKL01}. OQEC was generalized to include entanglement-assisted error correction resulting in the most general quantum error-correction formalism presently known \cite{HDB07,GHWAC07}. In the standard formulation of error correction, noise and the error-correcting operations are usually represented by discrete transformations \cite{KLP05, KLPL06, BKK07}. In practice, however, these transformations result from continuous processes. The more general situation where both the noise and the error-correcting processes are assumed to be continuous, is the subject of continuous quantum error correction \cite{PZ98, SarMil05, ADL02, SarMil05g}. In the paradigm of continuous error correction, error-correcting operations are generated by weak measurements, weak unitary operations or weak completely positive maps. This approach often leads to a better performance in the setting of continuous decoherence than that involving discrete operations. In this thesis, we will discuss topics concerning both the discrete formalism and the continuous one. 
The topics we study include continuous quantum error correction for non-Markovian decoherence \cite{OB07}, conditions for correctability of operator codes under continuous decoherence \cite{OLB08}, the performance of OQEC under imperfect encoding \cite{Ore08}, as well as fault-tolerant computation based on holonomic operations \cite{OBL08}. \section*{1.2 \hspace{2pt} Outline} \addcontentsline{toc}{section}{1.2 \hspace{0.15cm} Outline} This work examines seven topics in the areas of deterministic open-quantum-system dynamics, quantum measurements, and quantum error correction. Some of the topics concern all of these three themes, while others concern only two or only one. As each of the main results has a significance of its own, each topic is presented as a separate study in one of the following chapters. The topics are ordered in view of the background material they introduce and the logical relation between them. We first study the theme of weak measurements and their applications to the theory of entanglement. In Chapter 2, we show that every generalized quantum measurement can be generated as a sequence of weak measurements \cite{OB05}, which allows us to think of measurements in quantum mechanics as generated by continuous stochastic processes. In the case of two-outcome measurements, the measurement procedure has the structure of a random walk along a curve in state space, with the measurement ending when one of the end points is reached. In the continuous limit, this procedure corresponds to a quantum feedback control scheme for which the type of measurement is continuously adjusted depending on the measurement record. This result not only presents a practical prescription for the implementation of any generalized measurement, but also reveals a rich mathematical structure, somewhat similar to that of Lie algebras, which allows us to study the transformations caused by measurements by looking only at the properties of infinitesimal stochastic generators. The result suggests the possibility of constructing a unified theory of quantum measurement protocols. Chapter 3 presents an application of the weak-measurement decomposition to a study of entanglement. The theory of entanglement concerns the transformations that are possible to a state under local operations and classical communication (LOCC). The universality of weak measurements allows us to look at LOCC as the class of transformations generated by infinitesimal local operations. We show that a necessary and sufficient condition for a function of the state to be an entanglement monotone under local operations that do not involve information loss is that the function be a monotone under infinitesimal local operations. We then derive necessary and sufficient differential conditions for a function of the state to be an entanglement monotone \cite{OB06}. We first derive two conditions for local operations without information loss, and then show that they can be extended to more general operations by adding the requirement of convexity. We then demonstrate that a number of known entanglement monotones satisfy these differential criteria. We use the differential conditions to construct a new polynomial entanglement monotone for three-qubit pure states. In Chapter 4, we extend the scope of our studies to include the deterministic dynamics of open quantum systems.
We study the analytically solvable Ising model of a single qubit system coupled to a spin bath for a case for which the Markovian approximation of short bath-correlation times cannot be applied \cite{KORL07}. The purpose of this study is to analyze and elucidate the performance of Markovian and non-Markovian master equations describing the dynamics of the system qubit, in comparison to the exact solution. We find that the time-convolutionless master equation performs particularly well up to fourth order in the system-bath coupling constant, in comparison to the Nakajima-Zwanzig master equation. Markovian approaches fare poorly due to the infinite bath correlation time in this model. A recently proposed post-Markovian master equation performs comparably to the time-convolutionless master equation for a properly chosen memory kernel, and outperforms all the approximation methods considered here at long times. Our findings shed light on the applicability of master equations to the description of reduced system dynamics in the presence of spin baths. In Chapter 5, we investigate further the difference between Markovian and non-Markovian decoherence---this time, from the point of view of its implications for the performance of continuous quantum error correction. We study the performance of a quantum-jump error correction model in the case where each qubit in a codeword is subject to a general Hamiltonian interaction with an independent bath \cite{OB07}. We first consider the scheme in the case of a trivial single-qubit code, which provides useful insights into the workings of continuous error correction and the difference between Markovian and non-Markovian decoherence. We then study the model of a bit-flip code with each qubit coupled to an independent bath qubit and subject to continuous correction, and find its solution. We show that for sufficiently large error-correction rates, the encoded state approximately follows an evolution of the type of a single decohering qubit, but with an effectively decreased coupling constant. The factor by which the coupling constant is decreased scales quadratically with the error-correction rate. This is compared to the case of Markovian noise, where the decoherence rate is effectively decreased by a factor which scales only linearly with the rate of error correction. The quadratic enhancement depends on the existence of a Zeno regime in the Hamiltonian evolution which is absent in purely Markovian dynamics. We analyze the range of validity of this result and identify two relevant time scales. Finally, we extend the result to more general codes and argue that there the performance of continuous error correction will exhibit the same qualitative characteristics. In the appendix of this chapter, we discuss another application of weak measurements---we show how the quantum-jump error-correction scheme can be implemented using weak measurements and weak unitary operations. In Chapter 6, we study the conditions under which a quantum code is perfectly correctable during a time interval of continuous decoherence for the most general type of encoding---encoding in subsystems. We study the case of Markovian decoherence as well as the general case of Hamiltonian evolution of the system and the environment, and derive necessary and sufficient conditions on the Lindbladian and the system-environment Hamiltonian \cite{OLB08}, respectively.
Our approach is based on a result obtained in Ref.~\cite{KS06} according to which a subsystem is correctable if and only if it is unitarily recoverable. The conditions we derive can be thought of as generalizations of the previously derived conditions for decoherence-free subsystems to the case where the subsystem is time-dependent. As a special case we consider conditions for unitary correctability. In the case of Hamiltonian evolution, the conditions for unitary correctability concern only the effect of the Hamiltonian on the system, whereas the conditions for general correctability concern the entire system-environment Hamiltonian. We also derive conditions on the Hamiltonian which depend on the initial state of the environment. We discuss possible implications of our results for approximate quantum error correction. Chapter 7 also concerns subsystem codes. Here we study the performance of operator quantum error correction (OQEC) in the case of imperfect encoding \cite{Ore08}. In OQEC, the notion of correctability is defined under the assumption that states are perfectly initialized inside a particular subspace, a factor of which (a subsystem) contains the protected information. It was believed that in the case of imperfect initialization, OQEC codes would require conditions more restrictive than the standard ones if they are to protect encoded information from subsequent errors. In this chapter, we examine this requirement by looking at the errors on the encoded state. In order to quantitatively analyze the errors in an OQEC code, we introduce a measure of the fidelity between the encoded information in two states for the case of subsystem encoding. A major part of the chapter concerns the definition of the measure and the derivation of its properties. In contrast to what was previously believed, we find that more restrictive conditions are necessary neither for DFSs nor for general OQEC codes. This is because the effective noise that can arise inside the code as a result of imperfect initialization is such that it can only increase the fidelity of an imperfectly encoded state with a perfectly encoded one. In Chapter 8, we present a scheme for fault-tolerant holonomic computation on stabilizer codes \cite{OBL08}. In the holonomic approach, logical states are encoded in the degenerate eigenspace of a Hamiltonian and gates are implemented by adiabatically varying the Hamiltonian along loops in parameter space. The result is a transformation of purely geometric origin, which is robust against various types of errors in the control parameters driving the evolution. In the proposed scheme, single-qubit operations on physical qubits are implemented by varying Hamiltonians that are elements of the stabilizer, or in the case of subsystem codes---elements of the gauge group. By construction, the geometric transformations in each eigenspace of the Hamiltonian are transversal, which ensures that errors do not propagate. We show that for certain codes, such as the nine-qubit Shor code or its subsystem versions, it is possible to realize universal fault-tolerant computation using Hamiltonians of weight three. The scheme proves that holonomic quantum computation is a scalable method and opens the possibility for bringing together the benefits of error correction and the operational robustness of the holonomic approach.
It also presents an alternative to the standard fault-tolerant methods based on dynamical transformations, which have been argued to be in a possible conflict with the assumption of Markovian decoherence that often underlies the derivation of threshold results. Chapter 9 summarizes the results and discusses problems for future research. \chapter*{Chapter 2: \hspace{1pt} Generating quantum measurements using weak measurements} \addcontentsline{toc}{chapter}{Chapter 2:\hspace{0.15cm} Generating quantum measurements using weak measurements} \section*{2.1 \hspace{2pt} Preliminaries} \addcontentsline{toc}{section}{2.1 \hspace{0.15cm} Preliminaries} In the original formulation of measurement in quantum mechanics, measurement outcomes are identified with a set of orthogonal projection operators, which can be thought of as corresponding to the eigenspaces of a Hermitian operator, or {\it observable} \cite{vonNeumann,Lueders}. After a measurement, the state is projected into one of the subspaces with a probability given by the square of the amplitude of the state's component in that subspace. In recent years a more general notion of measurement has become common: the so-called {\it generalized} measurement, which corresponds to a {\it positive operator valued measure} (POVM) \cite{Kraus83}. This formulation can include many phenomena not captured by projective measurements: detectors with non-unit efficiency, measurement outcomes that include additional randomness, measurements that give incomplete information, and many others. Generalized measurements have found numerous applications in the rapidly-growing field of quantum information processing \cite{NieChu00}. Some examples include protocols for unambiguous state discrimination \cite{Per88} and optimal entanglement manipulation \cite{Nie99, Jonathan99a}. Upon measurement, a system with density matrix $\rho$ undergoes a random transformation \begin{equation} \rho \rightarrow \rho_j= {M}_j\rho {M}_j^{\dagger}/p_j, \hspace{0.4cm} \underset{j}{\sum}{M}_j^{\dagger}{M}_j={I},\label{genmeas} \end{equation} with probability $p_j=\textrm{Tr}({M}_j \rho {M}_j^{\dagger})$, where the index $j$ labels the possible outcomes of the measurement. Eq.~\eqref{genmeas} is not the most general stochastic operation that can be applied to a state. For example, one can consider the transformation \begin{equation} \rho \rightarrow \rho_j= \underset{i}{\sum}{M}_{ij}\rho {M}_{ij}^{\dagger}/p_j, \hspace{0.4cm} \underset{i,j}{\sum}{M}_{ij}^{\dagger}{M}_{ij}={I}, \end{equation} where $p_j=\textrm{Tr}(\underset{i}{\sum}{M}_{ij} \rho {M}_{ij}^{\dagger})$ is the probability for the $j^{\textrm{th}}$ outcome (see Chapter 3). The latter can be thought of as resulting from a measurement of the type \eqref{genmeas} with measurement operators $M_{ij}$ of which only the information about the index $j$ labeling the outcome is retained. In this thesis, when we talk about generalized measurements, we will refer to the transformation \eqref{genmeas}. The transformation \eqref{genmeas} is commonly viewed as a spontaneous {\it jump}, unlike unitary transformations, for example, which are thought of as resulting from {\it continuous} unitary evolutions. Any unitary transformation can be implemented as a sequence of {\it weak} (i.e., infinitesimal) unitary transformations. One may ask if a similar decomposition exists for generalized measurements.
This would allow us to think of generalized measurements as resulting from continuous stochastic evolutions and possibly make use of the powerful tools of differential calculus in the study of the transformations that a system undergoes upon measurement. In this chapter we show that any generalized measurement can be implemented as a sequence of weak measurements and present an explicit form of the decomposition. The main result was first presented in Ref.~\cite{OB05}. We call a measurement {\it weak} if all outcomes result in very small changes to the state. (There are other definitions of weak measurements that include the possibility of large changes to the state with low probability; we will not be considering measurements of this type.) Therefore, a weak measurement is one whose operators can be written as \begin{equation} {M}_j=q_j({I}+{\varepsilon}_j),\label{wmg} \end{equation} where $q_j\in {C}$, $0\leq |q_j| \leq 1$, and ${\varepsilon}$ is an operator with small norm $\|{\varepsilon}\| \ll 1$. \section*{2.2 \hspace{2pt} Decomposing projective measurements} \addcontentsline{toc}{section}{2.2 \hspace{0.15cm} Decomposing projective measurements} It has been shown that any projective measurement can be implemented as a sequence of weak measurements; and by using an additional {\it ancilla} system and a joint unitary transformation, it is possible to implement any generalized measurement using weak measurements \cite{Bennett99}. This procedure, however, does not decompose the operation on the original system into weak operations, since it uses operations acting on a larger Hilbert space---that of the system plus the ancilla. If we wish to study the behavior of a function---for instance, an entanglement monotone---defined on a space of a particular dimension, it complicates matters to add and remove ancillas. We will show that an ancilla is not needed, and give an explicit construction of the weak measurement operators for any generalized measurement that we wish to decompose. It is easy to show that a measurement with any number of outcomes can be performed as a sequence of measurements with two outcomes. Therefore, for simplicity, we will restrict our considerations to two-outcome measurements. To give the idea of the construction, we first show how every projective measurement can be implemented as a sequence of weak generalized measurements. In this case the measurement operators ${P}_1$ and ${P}_2$ are orthogonal projectors whose sum ${P}_1+{P}_2={I}$ is the identity. We introduce the operators \begin{equation} {P}(x)=\sqrt{\frac{1-\tanh(x)}{2}}{P}_1+\sqrt{\frac{1+\tanh(x)}{2}}{P}_2, \hspace{0.5cm} x\in R. \label{measpro} \end{equation} Note that ${P}^2(x)+{P}^2(-x)={I}$ and therefore ${P}(x)$ and ${P}(-x)$ describe a measurement. If $x=\epsilon$, where $|\epsilon| \ll 1$, the measurement is weak. Consider the effect of the operators ${P}(x)$ on a pure state $|\psi\rangle$. The state can be written as $|\psi\rangle={P}_1|\psi\rangle+{P}_2|\psi\rangle=\sqrt{p_1}|\psi_1\rangle+\sqrt{p_2}|\psi_2\rangle$, where $|\psi_{1,2}\rangle={P}_{1,2}|\psi\rangle/\sqrt{p_{1,2}}$ are the two possible outcomes of the projective measurement and $p_{1,2}=\langle\psi|{P}_{1,2}|\psi\rangle$ are the corresponding probabilities. If $x$ is positive (negative), the operator ${P}(x)$ increases (decreases) the ratio $\sqrt{p_2}/\sqrt{p_1}$ of the $|\psi_2\rangle$ and $|\psi_1\rangle$ components of the state. 
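As a rough numerical illustration of this effect, the following Python sketch (the qubit projectors, the test state, and the value of $x$ are arbitrary, hypothetical choices) builds the operators of Eq.~\eqref{measpro}, checks that ${P}^2(x)+{P}^2(-x)={I}$, and verifies that applying ${P}(x)$ with $x>0$ increases the ratio of the ${P}_2$ to the ${P}_1$ component of the state.
\begin{verbatim}
import numpy as np

# Hypothetical qubit example: P1 and P2 project onto |0> and |1>.
P1 = np.diag([1.0, 0.0])
P2 = np.diag([0.0, 1.0])

def P(x):
    # Interpolating measurement operator of Eq. (measpro).
    return np.sqrt((1 - np.tanh(x)) / 2) * P1 + np.sqrt((1 + np.tanh(x)) / 2) * P2

x = 0.3
# {P(x), P(-x)} is a valid two-outcome measurement.
assert np.allclose(P(x) @ P(x) + P(-x) @ P(-x), np.eye(2))

# Effect on a hypothetical pure state sqrt(p1)|0> + sqrt(p2)|1>.
psi = np.array([np.sqrt(0.7), np.sqrt(0.3)])
phi = P(x) @ psi
phi = phi / np.linalg.norm(phi)

print(psi[1] / psi[0], phi[1] / phi[0])  # the ratio sqrt(p2)/sqrt(p1) grows for x > 0
\end{verbatim}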
By applying the same operator ${P}(\epsilon)$ many times in a row for some fixed $\epsilon$, the ratio can be made arbitrarily large or small depending on the sign of $\epsilon$, and hence the state can be transformed arbitrarily close to $|\psi_1\rangle$ or $|\psi_2\rangle$. The ratio of the $p_1$ and $p_2$ is the only parameter needed to describe the state, since $p_1+p_2=1$. Also note that ${P}(-x){P}(x)=(1-\tanh^2(x))^{1/2}{I}/2$ is proportional to the identity. If we apply the same measurement ${P}(\pm\epsilon)$ twice and two opposite outcomes occur, the system returns to its previous state. Thus we see that the transformation of the state under many repetitions of the measurement ${P}(\pm\epsilon)$ follows a random walk along a curve $\ket{\psi(x)}$ in state space. The position on this curve can be parameterized by $x=\ln\sqrt{p_1/p_2}$. Then $\ket{\psi(x)}$ can be written as $\sqrt{p_1(x)}\ket{\psi_1} + \sqrt{p_2(x)}\ket{\psi_2}$, where $p_{1,2}(x) = (1/2)[1 \pm \tanh(x)]$. The measurement given by the operators ${P}(\pm\epsilon)$ changes $x$ by $x\rightarrow x\pm\epsilon$, with probabilities $p_{\pm}(x)=(1\pm\tanh(\epsilon)(p_1(x)-p_2(x)))/2$. We continue this random walk until $|x| \ge X$, for some $X$ which is sufficiently large that $\ket{\psi(X)} \approx \ket{\psi_1}$ and $\ket{\psi(-X)} \approx \ket{\psi_2}$ to whatever precision we desire. What are the respective probabilities of these two outcomes? Define $p(x)$ to be the probability that the walk will end at $X$ (rather than $-X$) {\it given} that it began at $x$. This must satisfy $p(x) = p_+(x) p(x+\epsilon) + p_-(x) p(x-\epsilon)$. Substituting our expressions for the probabilities, this becomes \begin{eqnarray} p(x)= (p(x+\epsilon) + p(x-\epsilon))/2 \label{difference} + \tanh(\epsilon)\tanh(x)(p(x+\epsilon)-p(x-\epsilon))/2. \end{eqnarray} If we go to the infinitesimal limit $\epsilon\rightarrow dx$, this becomes a continuous differential equation \begin{equation} \frac{d^2p}{dx^2} + 2\tanh(x)\frac{dp}{dx} = 0 , \end{equation} with boundary conditions $p(X)=1$, $p(-X)=0$. The solution to this equation is $p(x)=(1/2)[1 + \tanh(x)/\tanh(X)]$. In the limit where $X$ is large, $\tanh(X)\rightarrow1$, so $p(x)=p_1(x)$. The probabilities of the outcomes for the sequence of weak measurements are exactly the same as those for a single projective measurement. Note that this is also true for a walk with a step size that is not infinitesimal, since the solution $p(x)$ satisfies \eqref{difference} for an arbitrarily large $\epsilon$. Alternatively, instead of looking at the state of the system during the process, we could look at an operator that effectively describes the system's transformation to the current state. This has the advantage that it is state-independent, and will lead the way to decompositions of generalized measurements; it also becomes obvious that the procedure works for mixed states, too. We think of the measurement process as a random walk along a curve ${P}(x)$ in operator space, given by Eq.~(\ref{measpro}), which satisfies ${P}(0)={I}/\sqrt{2}$, $\underset{x\rightarrow -\infty}{\lim}{P}(x)={P}_1$, $\underset{x\rightarrow \infty}{\lim}{P}(x)={P}_2$. It can be verified that ${P}(x){P}(y)\propto {P}(x+y)$, where the constant of proportionality is $(\cosh(x+y)/2\cosh(x)\cosh(y))^{1/2}$. Due to normalization of the state, operators which differ by an overall factor are equivalent in their effects on the state. 
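The composition property and the outcome statistics of the walk are easy to check numerically. The sketch below (continuing the hypothetical qubit example above, with arbitrary illustrative values of the step $\epsilon$ and the stopping threshold) repeatedly applies ${P}(\pm\epsilon)$, choosing the outcome with the Born probabilities, and compares the frequency with which the state ends up near $|\psi_1\rangle$ with the projective probability $p_1$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
P1, P2 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
P = lambda x: np.sqrt((1 - np.tanh(x)) / 2) * P1 + np.sqrt((1 + np.tanh(x)) / 2) * P2

# Composition property: P(x) P(y) is proportional to P(x + y).
x, y = 0.4, -0.9
prod, target = P(x) @ P(y), P(x + y)
assert np.allclose(prod / np.linalg.norm(prod), target / np.linalg.norm(target))

# Random walk: repeat the measurement {P(eps), P(-eps)} until the state is
# (nearly) |psi_1> or |psi_2>; the hit frequency should reproduce p1.
eps, trials, hits = 0.2, 5000, 0
psi0 = np.array([np.sqrt(0.7), np.sqrt(0.3)])   # p1 = 0.7 (illustrative)
for _ in range(trials):
    psi = psi0.copy()
    while 1e-3 < abs(psi[0])**2 < 1 - 1e-3:
        p_plus = np.linalg.norm(P(eps) @ psi)**2        # Born probability of outcome P(eps)
        op = P(eps) if rng.random() < p_plus else P(-eps)
        psi = op @ psi
        psi = psi / np.linalg.norm(psi)
    hits += abs(psi[0])**2 > 0.5                        # walk ended near |psi_1>
print(hits / trials)   # close to p1 = 0.7
\end{verbatim}
The agreement does not rely on $\epsilon$ being small, consistent with the remark above that the solution of Eq.~\eqref{difference} holds for step sizes that are not infinitesimal.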
Thus, the random walk driven by weak measurement operators ${P}(\pm\epsilon)$ has a step size $|\epsilon|$. \section*{2.3 \hspace{2pt} Decomposing generalized measurements} \addcontentsline{toc}{section}{2.3 \hspace{0.15cm} Decomposing generalized measurements} Next we consider measurements where the measurement operators ${M}_1$ and ${M}_2$ are {\it positive} but not projectors. We use the well known fact that a generalized measurement can be implemented as joint unitary operation on the system and an ancilla, followed by a projective measurement on the ancilla \cite{NieChu00}. (One can think of this as an {\it indirect} measurement; one lets the system interact with the ancilla, and then measures the ancilla.) Later we will show that the ancilla is not needed. We consider two-outcome measurements and two-level ancillas. In this case ${M}_1$ and ${M}_2$ commute, and hence can be simultaneously diagonalized. Let the system and ancilla initially be in a state $\rho\otimes|0\rangle\langle 0|$. Consider the unitary operation \begin{equation} {U}(0)= {M}_1\otimes {Z} + {M}_2\otimes {X} ,\label{uzero} \end{equation} where ${X}=\sigma^x$ and ${Z}=\sigma^z$ are Pauli matrices acting on the ancilla bit. By applying ${U}(0)$ to the extended system we transform it to: \begin{eqnarray} {U}(0)(\rho\otimes|0\rangle\langle0|){U}^{\dagger}(0) &=& {M}_1\rho {M}_1\otimes|0\rangle\langle 0| + {M}_1 \rho {M}_2\otimes|0\rangle\langle 1| \nonumber\\ &+&{M}_2 \rho {M}_1\otimes|1\rangle\langle 0| + {M}_2 \rho {M}_2\otimes|1\rangle\langle 1|. \end{eqnarray} Then a projective measurement on the ancilla in the computational basis would yield one of the possible generalized measurement outcomes for the system. We can perform the projective measurement on the ancilla as a sequence of weak measurements by the procedure we described earlier. We will then prove that for this process, there exists a corresponding sequence of generalized measurements with the same effect acting solely on the system. To prove this, we first show that at any stage of the measurement process, the state of the extended system can be transformed into the form $\rho(x)\otimes |0\rangle\langle 0|$ by a unitary operation which does not depend on the state. The net effect of the joint unitary operation ${U}(0)$, followed by the effective measurement operator on the ancilla, can be written in a block form in the computational basis of the ancilla: \begin{eqnarray} {\bar{M}}(x) &\equiv& ({I}\otimes{P}(x)){U}(0) = \begin{pmatrix} \sqrt{\frac{1-\tanh(x)}{2}}{M}_1&\sqrt{\frac{1-\tanh(x)}{2}}{M}_2\\ \sqrt{\frac{1+\tanh(x)}{2}}{M}_2&-\sqrt{\frac{1+\tanh(x)}{2}}{M}_1 \end{pmatrix}. \end{eqnarray} If the current state ${\bar{M}}(x)(\rho\otimes|0\rangle\langle 0|){\bar{M}}^\dagger$ can be transformed to $\rho(x)\otimes|0\rangle\langle 0|$ by a unitary operator ${U}(x)$ which is independent of $\rho$, then the lower left block of ${U}(x){\bar{M}}(x)$ should vanish. We look for such a unitary operator in block form, with each block being Hermitian and diagonal in the same basis as ${M_1}$ and ${M_2}$. 
One solution is: \begin{equation} {U}(x)=\begin{pmatrix} {A}(x)&{B}(x)\\ {B}(x)&-{A}(x) \end{pmatrix}, \end{equation} where \begin{equation} {A}(x)=\sqrt{1-\tanh(x)}{M}_1({I}+\tanh(x)({M}_2^2-{M}_1^2))^{-\frac{1}{2}},\label{A} \end{equation} \begin{equation} {B}(x)=\sqrt{1+\tanh(x)}{M}_2({I}+\tanh(x)({M}_2^2-{M}_1^2))^{-\frac{1}{2}}.\label{B} \end{equation} (Since ${M}_1^2 + {M}_2^2 = {I}$, the operator $({I}+\tanh(x)({M}_2^2-{M}_1^2))^{-\frac{1}{2}} $ always exists.) Note that ${U}(x)$ is Hermitian, so ${U}(x)={U}^\dagger (x)$ is its own inverse, and at $x=0$ it reduces to the operator \eqref{uzero}. After every measurement on the ancilla, depending on the value of $x$, we apply the operation ${U}(x)$. Then, before the next measurement, we apply its inverse ${U}^{\dagger}(x)={U}(x)$. By doing this, we can think of the procedure as a sequence of generalized measurements on the extended system that transform it between states of the form $\rho(x)\otimes|0\rangle\langle 0|$ (a generalized measurement preceded by a unitary operation and followed by a unitary operation dependent on the outcome is again a generalized measurement). The measurement operators are now ${\tilde{M}}(x,\pm\epsilon) \equiv {U}(x\pm\epsilon)({I} \otimes {P}(\pm\epsilon)) {U}(x)$, and have the form \begin{equation} {\tilde{M}}(x,\pm\epsilon)=\begin{pmatrix}{M}(x,\pm\epsilon)& {N}(x,\pm\epsilon)\\{0}&{O}(x,\pm\epsilon) \end{pmatrix}. \end{equation} Here ${M},{N},{O}$ are operators acting on the system. Upon measurement, the state of the extended system is transformed \begin{equation} \rho(x)\otimes|0\rangle\langle0|\rightarrow \frac{{M}(x,\pm\epsilon)\rho(x){M}^{\dagger}(x,\pm\epsilon)}{p(x,\pm\epsilon)}\otimes|0\rangle\langle0|, \end{equation} with probability \begin{equation} p(x,\pm\epsilon) = {\rm Tr}\left\{ {M}(x,\pm\epsilon)\rho(x){M}^{\dagger}(x,\pm\epsilon) \right\}. \end{equation} By imposing ${\tilde{M}}^\dagger(x,\epsilon){\tilde{M}}(x,\epsilon)+{\tilde{M}}(x,-\epsilon)^\dagger{\tilde{M}}(x,-\epsilon)={I}$, we obtain that \begin{equation} {M}^\dagger(x,\epsilon){M}(x,\epsilon)+{M}^\dagger(x,-\epsilon){M}(x,-\epsilon)={I}, \end{equation} where the operators in the last equation acts on the {\it system space alone}. Therefore, the same transformations that the system undergoes during this procedure can be achieved by the measurements ${M}(x,\pm \epsilon)$ {\it acting solely on the system}. Depending on the current value of $x$, we perform the measurement ${M}(x,\pm\epsilon)$. Due to the one-to-one correspondence with the random walk for the projective measurement on the ancilla, this procedure also follows a random walk with a step size $|\epsilon|$. It is easy to see that if the measurements on the ancilla are weak, the corresponding measurements on the system are also weak. Therefore we have shown that every measurement with positive operators ${M}_1$ and ${M}_2$, can be implemented as a sequence of weak measurements. This is the main result of this chapter. From the construction above, one can find the explicit form of the weak measurement operators: \begin{eqnarray} {M}(x,\epsilon) = \sqrt{\frac{1-\tanh(\epsilon)}{2}}{A}(x){A}(x+\epsilon) + \sqrt{\frac{1+\tanh(\epsilon)}{2}}{B}(x){B}(x+\epsilon).\label{measpos} \end{eqnarray} These expressions can be simplified further. 
The current state of the system at any point during the procedure can be written as \begin{equation} { M}(x)\rho { M}(x)/{\rm Tr}({ M}^2(x)\rho), \end{equation} where \begin{equation} { M}(x)=\sqrt{\frac{{ I}+\tanh(x)({ M}_2^2-{ M}_1^2)}{2}},\ \ x\in R. \end{equation} The weak measurement operators can be written as \begin{equation} { M}(x,\pm\epsilon) = \sqrt{ C_\pm \frac{{ I}+\tanh(x\pm\epsilon)({ M}_2^2-{ M}_1^2)}{{ I}+\tanh(x)({ M}_2^2-{ M}_1^2)}}, \label{step_operator} \end{equation} where the weights $C_\pm$ are chosen to ensure that these operators form a generalized measurement: \begin{equation} C_\pm = (1\pm\tanh(\epsilon)\tanh(x))/2 . \end{equation} Note that this procedure works even if the step of the random walk is not small, since ${P}(x){P}(y)\propto {P}(x+y)$ for arbitrary values of $x$ and $y$. So it is not surprising that the effective operator which gives the state of the system at the point $x$ is ${M}(x)\equiv{M}(0,x)$. In the limit when $\epsilon\rightarrow 0$, the evolution under the described procedure can be described by a continuous stochastic equation. We can introduce a time step $\delta t$ and a rate \begin{equation} \gamma=\epsilon^2/\delta t. \end{equation} Then we can define a mean-zero Wiener process $\delta W$ as follows: \begin{gather} \delta W=(\delta x-M[\delta x])/\sqrt{\gamma}, \end{gather} where $M[\delta x]$ is the mean of $\delta x$, \begin{equation} M[\delta x]=\epsilon (p_+(x)-p_-(x)). \end{equation} The probabilities $p_{\pm}(x)$ can be written in the form \begin{equation} p_{\pm}(x)=\frac{1}{2}(1\pm 2 \langle {Q}(x)\rangle \epsilon), \end{equation} where $\langle {Q}(x)\rangle$ denotes the expectation value of the operator \begin{equation} Q(x)=\frac{1}{2}\frac{({M}_2^2-{M}_1^2)+\tanh(x){I}}{{I}+\tanh(x)({M}_2^2-{M}_1^2)}. \end{equation} Note that $M[(\delta W)^2]=\delta t+\textit{O}(\delta t^2)$. Expanding the change of a state $|\psi\rangle$ upon the measurement ${M}(x,\pm \epsilon)$ up to second order in $\delta W$ and taking the limit $\delta W\rightarrow 0$ averaging over many steps, we obtain the following coupled stochastic differential equations: \begin{gather} |d\psi\rangle=-\frac{\gamma}{2}({Q}(x)-\langle{Q}(x)\rangle)^2|\psi\rangle dt+\sqrt{\gamma}({Q}(x)-\langle{Q}(x)\rangle)|\psi\rangle dW,\\ dx=2\gamma\langle{Q}(x)\rangle dt+\sqrt{\gamma} dW. \end{gather} This process corresponds to a continuous measurement of an observable ${Q}$ which is continuously changed depending on the value of $x$. In other words, it is a feedback-control scheme where depending on the measurement record, the type of measurement is continuously adjusted. Finally, consider the most general type of two-outcome generalized measurement, with the only restriction being ${M}_1^{\dagger}{M}_1+{M}_2^{\dagger}{M}_2=I$. By polar decomposition the measurement operators can be written \begin{equation} {M}_{1,2}={V}_{1,2}\sqrt{{M}_{1,2}^{\dagger}{M}_{1,2}}, \end{equation} where ${V}_{1,2}$ are appropriate unitary operators. One can think of these unitaries as causing an additional disturbance to the state of the system, in addition to the reduction due to the measurement. The operators $({M}_{1,2}^{\dagger}{M}_{1,2})^{1/2}$ are positive, and they form a measurement. We could then measure ${M}_1$ and ${M}_2$ by first measuring these positive operators by a sequence of weak measurements, and then performing either ${V}_1$ or ${V}_2$, depending on the outcome. However, we can also decompose this measurement directly into a sequence of weak measurements. 
Let the weak measurement operators for $({M}_{1,2}^{\dagger}{M}_{1,2})^{1/2}$ be ${M}_p(x,\pm\epsilon)$. Let ${V}(x)$ be any continuous unitary operator function satisfying ${V}(0)={I}$ and ${V}(\pm x) \rightarrow {V}_{1,2}$ as $x\rightarrow\infty$. We then define \begin{equation} {M}(x,y)\equiv{V}(x+y){M}_p(x,y){V}^{\dagger}(x). \end{equation} By construction ${M}(x,\pm y)$ are measurement operators. Since ${V}(x)$ is continuous, if $y=\epsilon$, where $\epsilon \ll 1$, the measurements are weak. The measurement procedure is analogous to the previous cases and follows a random walk along the curve ${M}(0,x)={V}(x){M}_p(0,x)$. In summary, we have shown that for every two-outcome measurement described by operators ${M}_1$ and ${M}_2$ acting on a Hilbert space of dimension $d$, there exists a continuous two-parameter family of operators ${M}(x,y)$ over the same Hilbert space with the following properties: \begin{gather} {M}(x,0)={I}/\sqrt{2},\\ {M}(0,x) \rightarrow {M}_1 \hspace{0.2cm} \textrm{as} \hspace{0.2cm} x\rightarrow-\infty,\\ {M}(0,x) \rightarrow {M}_2\hspace{0.2cm} \textrm{as} \hspace{0.2cm} x\rightarrow+\infty,\\ {M}(x+y,z){M}(x,y)\propto{M}(x,z+y),\\ {M}^{\dagger}(x,y){M}(x,y) + {M}^{\dagger}(x,-y){M}(x,-y) = {I}. \end{gather} We have presented an explicit solution for ${M}(x,y)$ in terms of ${M}_1$ and ${M}_2$. The measurement is implemented as a random walk on the curve ${M}(0,x)$ by consecutive application of the measurements ${M}(x,\pm\epsilon)$, which depend on the current value of the parameter $x$. In the case where $|\epsilon| \ll 1$, the measurements driving the random walk are weak. Since any measurement can be decomposed into two-outcome measurements, weak measurements are {\it universal}. \section*{2.4 \hspace{2pt} Measurements with multiple outcomes} \addcontentsline{toc}{section}{2.4 \hspace{0.15cm} Measurements with multiple outcomes} Even though two-outcome measurements can be used to construct any multi-outcome measurement, it is interesting to ask whether a direct decomposition similar to the one we presented can be obtained for measurements with multiple outcomes as well. In Ref.~\cite{VB07} it was shown that such a decomposition exists. For a measurement with $n$ positive operators ${L}_j$, $j=1,...,n$, $\overset{n}{\underset{j=1}{\sum}}{L}_j^2=I$, the effective measurement operator $M(s)$ describing the state during the procedure is given by \cite{VB07} \begin{equation} M(s)=\sqrt{f(s)}\sqrt{(\overset{n}{\underset{j=1}{\sum}}s^jL_j^2)},\label{multioutcomedecomposition} \end{equation} where \begin{equation} f(s)=1+n\overset{n}{\underset{j=1}{\sum}}s^j(1-s^j). \end{equation} Here the parameter $s$ is chosen such that $\overset{n}{\underset{j=1}{\sum}}s^j=1$, $s^j\in[0,1]$, i.e., it describes a simplex.
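As a quick numerical sanity check of Eq.~\eqref{multioutcomedecomposition} (the three-outcome commuting operators below are arbitrary, hypothetical choices), one can verify that at the barycenter of the simplex the effective operator is the identity, while at the $j$-th vertex it reduces to the full measurement operator ${L}_j$:
\begin{verbatim}
import numpy as np

def msqrt(a):
    # Matrix square root of a positive semidefinite matrix.
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T

# Hypothetical three-outcome measurement: commuting positive operators L_j
# (diagonal here) with sum_j L_j^2 = I.  We store the squares L_j^2.
L2 = [np.diag([0.5, 0.2]), np.diag([0.3, 0.3]), np.diag([0.2, 0.5])]
n = len(L2)
assert np.allclose(sum(L2), np.eye(2))

def M(s):
    # Effective operator of Eq. (multioutcomedecomposition).
    f = 1 + n * sum(sj * (1 - sj) for sj in s)
    return np.sqrt(f) * msqrt(sum(sj * A for sj, A in zip(s, L2)))

assert np.allclose(M([1 / n] * n), np.eye(2))   # barycenter: procedure not yet started
assert np.allclose(M([1, 0, 0]), msqrt(L2[0]))  # vertex: full measurement operator L_1
\end{verbatim}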
The system of stochastic equations describing the process in the case when the measurement operators ${L}_j$ commute can be written as \begin{gather} |d\psi\rangle=-\frac{\gamma}{8}g^{jk}(s)({Q}_j(s)-\langle{Q}_j(s)\rangle)({Q}_k(s)-\langle{Q}_k(s)\rangle)|\psi\rangle dt+\notag\\ \frac{1}{2}\sqrt{\gamma}({Q_i}(s)-\langle{Q_i}(s)\rangle)|\psi\rangle a^{i}_{\alpha}(s) dW^{\alpha},\\ ds=\gamma g^{ij}(s)\langle{Q}_j(s)\rangle dt+\sqrt{\gamma} a^{i}_{\alpha}(s) dW^{\alpha}, \end{gather} where \begin{equation} {Q_i}(s)=\frac{L_i^2}{s^mL_m^2}, \end{equation} \begin{equation} g^{ij}(s)=\overset{n}{\underset{\alpha,\beta=1}{\sum}}s^i(\delta^i_{\alpha}-s^{\alpha})(\delta^{\alpha}_{\beta}-\frac{1}{n})s^j(\delta^j_{\beta}-s^{\beta}), \end{equation} $a(s)$ is the square root of $g(s)$, \begin{equation} g^{ij}(s)=\overset{n}{\underset{k=1}{\sum}}a_k^i(s)a_k^j(s), \end{equation} and we have assumed Einstein's summation convention. The decomposition can be easily generalized to the case of non-positive measurement operators in a way similar to the one we described for the two-outcome case---by inserting suitable weak unitaries between the weak measurements. \section*{2.5 \hspace{2pt} Summary and outlook} \addcontentsline{toc}{section}{2.5 \hspace{0.15cm} Summary and outlook} The result presented in this chapter may have important implications for quantum control and the theory of quantum measurements in general. It provides a practical prescription for the implementation of any generalized measurement using weak measurements, which may be useful in experiments where strong measurements are difficult to implement. The decomposition might be experimentally feasible for some quantum optical or atomic systems. The result also reveals an interesting mathematical structure, somewhat similar to that of Lie algebras, which allows us to think of measurements as generated by infinitesimal stochastic generators. One application of this is presented in the following chapter, where we derive necessary and sufficient conditions for a function on quantum states to be an entanglement monotone. An entanglement monotone \cite{Vidal00b} is a function which does not increase on average under local operations. For pure states the operations are unitaries and generalized measurements. Since all unitaries can be broken into a series of infinitesimal steps and all measurements can be decomposed into weak measurements, it suffices to look at the behavior of a prospective monotone under small changes in the state. Thus we can use this result to derive differential conditions on the function. These observations suggest that it may be possible to find a unified description of quantum operations where every quantum operation can be continuously generated. Clearly, measurements do not form a group since they do not have inverse elements, but it may be possible to describe them in terms of a semigroup. The problem with using measurements as the elements of the semigroup is that a strong measurement is not equal to a composition of weak measurements, since the sequence of weak measurements that builds up a particular strong measurement is not pre-determined---the measurements depend on a stochastic parameter. It may be possible, however, to use more general objects---\textit{measurement protocols}---which describe measurements applied conditioned on a parameter in some underlying manifold.
If such a manifold exists for the most general possible notion of a protocol, the basic objects could be describable by stochastic matrices on this manifold. Such a possibility is appealing since stochastic processes are well understood and this may have important implications for the study of quantum control protocols. In addition, such a description could be useful for describing general open-system dynamics. These questions are left open for future investigation. \chapter*{Chapter 3: \hspace{1pt} Applications of the decomposition into weak measurements to the theory of entanglement} \addcontentsline{toc}{chapter}{Chapter 3:\hspace{0.15cm} Applications of the decomposition into weak measurements to the theory of entanglement} \section*{3.1 \hspace{2pt} Preliminaries} \addcontentsline{toc}{section}{3.1 \hspace{0.15cm} Preliminaries} In this chapter we apply the result on the universality of weak measurements to the theory of entanglement. The theory of entanglement concerns the transformations that are possible to a state under local operations with classical communication (LOCC). The paradigmatic experiment is a quantum system comprising several subsystems, each in a separate laboratory under control of a different experimenter: Alice, Bob, Charlie, etc. Each experimenter can perform any physically allowed operation on his or her subsystem---unitary transformations, generalized measurements, indeed any trace-preserving completely positive operation--and communicate their results to each other without restriction. They are not, however, allowed to bring their subsystems together and manipulate them jointly. An LOCC protocol consists of any number of local operations, interspersed with any amount of classical communication; the choice of operations at later times may depend on the outcomes of measurements at any earlier time. The results of Bennett et al. \cite{Bennett96a,Bennett96b,Bennett96c} and Nielsen \cite{Nie99}, among many others \cite{Vidal99,Jonathan99a,Hardy99,Jonathan99b,Vidal00a}, have given us a nearly complete theory of entanglement for {\it bipartite} systems in pure states. Unfortunately, great difficulties have been encountered in trying to extend these results both to mixed states and to states with more than two subsystems ({\it multipartite} systems). The reasons for this are many; but one reason is that the set LOCC is complicated and difficult to describe mathematically \cite{Bennett99}. One mathematical tool which has proven very useful is that of the {\it entanglement monotone}: a function of the state which is invariant under local unitary transformations and always decreases (or increases) on average after any local operation. These functions were described by Vidal \cite{Vidal00b}, and large classes of them have been enumerated since then. We will consider those protocols in LOCC that preserve pure states as the set of operations generated by {\it infinitesimal local operations}: operations which can be performed locally and which leave the state little changed including infinitesimal local unitaries and weak generalized measurements. In Bennett et al. \cite{Bennett99} it was shown that infinitesimal local operations can be used to perform any local operation with the additional use of local ancillary systems--extra systems residing in the local laboratories, which can be coupled to the subsystems for a time and later discarded. 
As we saw in the previous chapter, any local generalized measurement can be implemented as a sequence of weak measurements {\it without} the use of ancillas. This implies that a necessary and sufficient condition for a function of the state to be a monotone under local operations that preserve pure states is that the function be a monotone under infinitesimal local operations. In this chapter we derive differential conditions for a function of the state to be an entanglement monotone by considering the change of the function on average under infinitesimal local operations up to the lowest order in the infinitesimal parameter. We thus obtain conditions that involve at most second derivatives of the function. We then prove that these conditions are both necessary and sufficient. We show that the conditions are satisfied by a number of known entanglement monotones and we use them to construct a new polynomial entanglement monotone for three-qubit pure states. We hope that this approach will provide a new window through which to study LOCC, and perhaps avoid some of the difficulties in the theory of multipartite and mixed-state entanglement. By looking only at the differential behavior of entanglement monotones, we avoid concerns about the global structure of LOCC or the class of separable operations. In Section 3.2, we define the basic concepts of this chapter: LOCC operations, entanglement monotones, and infinitesimal operations. In Section 3.3, we show how all local operations that preserve pure states can be generated by a sequence of infinitesimal local operations. In Section 3.4, we derive differential conditions for a function of the state to be an entanglement monotone. There are two such conditions for pure-state entanglement monotones: the first guarantees invariance under local unitary transformations (LU invariance), and involves only the first derivatives of the function, while the second guarantees monotonicity under local measurements, and involves second derivatives. For mixed-state entanglement monotones we add a further condition, convexity, which ensures that a function remains monotonic under operations that lose information (and can therefore transform pure states to mixed states). In Section 3.5, we look at some known monotones--the norm of the state, the local purity, and the entropy of entanglement--and show that they obey the differential criteria. In Section 3.6, we use the differential conditions to construct a new polynomial entanglement monotone for three-qubit pure states which depends on the invariant identified by Kempe \cite{Kempe99}. In Section 3.7 we conclude. In the Appendix (Section 3.8), we show that higher derivatives of the function are not needed to prove monotonicity. \section*{3.2 \hspace{2pt} Basic definitions} \addcontentsline{toc}{section}{3.2 \hspace{0.15cm} Basic definitions} \subsection*{3.2.1 \hspace{2pt} LOCC} \addcontentsline{toc}{subsection}{3.2.1 \hspace{0.15cm} LOCC} An operation (or protocol) in LOCC consists of a sequence of local operations with classical communication between them.
Initially, we will consider only those local operations that preserve pure states: {\it unitaries}, in which the state is transformed \begin{equation} \rho \rightarrow { U}\rho{ U}^\dagger ,\ \ { U}^\dagger{ U} = { U}{ U}^\dagger= { I} , \end{equation} and {\it generalized measurements}, in which the state randomly changes as in Eq.~\eqref{genmeas}, \begin{equation} \rho \rightarrow \rho_j = { M}_j \rho { M}^{\dagger}_j /p_j ,\ \ \sum_j { M}^\dagger_j{ M}_j = { I} ,\notag \end{equation} with probability $p_j = {\rm Tr}\left\{{ M}^\dagger_j{ M}_j\rho\right\}$, where the index $j$ labels the possible outcomes of the measurement. Note that we can think of a unitary as being a special case of a generalized measurement with only one possible outcome. One can think of this class of operations as being limited to those which do not discard information. Later, we will relax this assumption to consider general operations, which can take pure states to mixed states. Such operations do involve loss of information. Examples include performing a measurement without retaining the result, performing an unknown unitary chosen at random, or entangling the system with an ancilla which is subsequently discarded. The requirement that an operation be local means that the operators ${ U}$ or ${ M}_j$ must have a tensor-product structure ${ U} \equiv { U}\otimes{ I}$, ${ M}_j \equiv { M}_j \otimes { I}$, where they act as the identity on all except one of the subsystems. The ability to use classical communication implies that the choice of later local operations can depend arbitrarily on the outcomes of all earlier measurements. One can think of an LOCC operation as consisting of a series of ``rounds.'' In each round, a single local operation is performed by one of the local parties; if it is a measurement, the outcome is communicated to all parties, who then agree on the next local operation. \subsection*{3.2.2 \hspace{2pt} Entanglement monotones} \addcontentsline{toc}{subsection}{3.2.2 \hspace{0.15cm} Entanglement monotones} For the purposes of this study, we define an entanglement monotone to be a real-valued function of the state with the following properties: if we start with the system in a state $\rho$ and perform a local operation which leaves the system in one of the states $\rho_1,\cdots,\rho_n$ with probabilities $p_1,\ldots,p_n$, then the value of the function must not increase on average: \begin{subequations} \label{eq:EM} \begin{equation} f(\rho) \ge \sum_j p_j f(\rho_j) . \label{monotonicity1} \end{equation} Furthermore, we can start with a state selected randomly from an ensemble $\{\rho_k, p_k\}$. If we dismiss the information about which particular state we are given (which can be done locally), the function of the resultant state must not exceed the average of the function we would have if we keep this information: \begin{equation} \sum_k p_k f(\rho_k) \geq f\left( \sum_k p_k \rho_k \right) . \label{monotonicity2} \end{equation} \end{subequations} Some functions may obey a stronger form of monotonicity, in which the function cannot increase for any outcome: \begin{equation} f(\rho) \ge f(\rho_j),\ \forall j , \end{equation} but this is not the most common situation. Some monotones may be defined only for pure states, or may only be monotonic for pure states. In the latter case, monotonicity is defined as non-increase on average under local operations that do not involve information loss.
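For concreteness, a minimal numerical sketch of condition \eqref{monotonicity1} is given below, in Python, for the entropy of entanglement of a two-qubit pure state; the state and the local measurement operators are arbitrary, hypothetical choices. The average entanglement after a local two-outcome measurement does not exceed the initial entanglement.
\begin{verbatim}
import numpy as np

def ent_entropy(psi):
    # Entropy of entanglement of a two-qubit pure state (base-2 logarithm).
    c = psi.reshape(2, 2)                      # coefficient matrix c_{ij} in |i_A>|j_B>
    w = np.linalg.eigvalsh(c @ c.conj().T)     # eigenvalues of the reduced state of A
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

# Hypothetical state sqrt(0.6)|00> + sqrt(0.4)|11> and a two-outcome measurement on A.
psi = np.array([np.sqrt(0.6), 0.0, 0.0, np.sqrt(0.4)])
M1 = np.kron(np.diag([np.sqrt(0.8), np.sqrt(0.3)]), np.eye(2))
M2 = np.kron(np.diag([np.sqrt(0.2), np.sqrt(0.7)]), np.eye(2))
assert np.allclose(M1.conj().T @ M1 + M2.conj().T @ M2, np.eye(4))

avg = 0.0
for M in (M1, M2):
    phi = M @ psi
    p = np.linalg.norm(phi)**2                 # outcome probability
    avg += p * ent_entropy(phi / np.linalg.norm(phi))

print(ent_entropy(psi), avg)                   # the average does not exceed the initial value
\end{verbatim}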
\subsection*{3.2.3 \hspace{2pt} Infinitesimal operations} \addcontentsline{toc}{subsection}{3.2.3 \hspace{0.15cm} Infinitesimal operations} We call an operation {\it infinitesimal} if all outcomes result in only very small changes to the state. That is, if after an operation the system can be left in states $\rho_1,\cdots,\rho_n$, we must have \begin{equation} || \rho - \rho_j || \ll 1, \ \forall j . \end{equation} For a unitary, this means that \begin{equation} { U} = \exp(i{{\varepsilon}}) \approx { I} + i {{\varepsilon}} , \end{equation} where ${\varepsilon}$ is a Hermitian operator with small norm, $||{{\varepsilon}}|| \ll 1$, ${{\varepsilon}}={{\varepsilon}}^\dagger$. For a generalized measurement, every measurement operator ${ M}_j$ can be written as in Eq.~\eqref{wmg}, \begin{equation} { M}_j = q_j ({ I} + {{\varepsilon}}_j ) ,\notag \end{equation} where $0 \le q_j \le 1$ and ${{\varepsilon}}_j$ is an operator with small norm $||{{\varepsilon}}_j|| \ll 1$. \section*{3.3 \hspace{2pt} Local operations from infinitesimal local operations} \addcontentsline{toc}{section}{3.3 \hspace{0.15cm} Local operations from infinitesimal local operations} In this section we show how any local operation that preserves pure states can be performed as a sequence of infinitesimal local operations. The operations that preserve pure states are unitary transformations and generalized measurements. \subsection*{3.3.1 \hspace{2pt} Unitary transformations} \addcontentsline{toc}{subsection}{3.3.1 \hspace{0.15cm} Unitary transformations} Every local unitary operator has the representation \begin{equation} { U}=e^{i{ H}}, \end{equation} where ${ H}$ is a local hermitian operator. We can write \begin{equation} { U}=\lim_{n\rightarrow\infty}({ I}+i{ H}/n)^n, \end{equation} and define \begin{equation} {\varepsilon}={ H}/n \end{equation} for a suitably large value of $n$. Thus, in the limit $n\rightarrow\infty$, any local unitary operation can be thought of as an infinite sequence of infinitesimal local unitary operations driven by operators of the form \begin{equation} { U}_{\varepsilon}\approx { I}+i{\varepsilon}, \end{equation} where ${\varepsilon}$ is a small ($\|{{\varepsilon}}\|\ll 1$) local hermitian operator. \subsection*{3.3.2 \hspace{2pt} Generalized measurements} \addcontentsline{toc}{subsection}{3.3.2 \hspace{0.15cm} Generalized measurements} As was shown in Chapter 2, any measurement can be generated by a sequence of weak measurements. Since a measurement with any number of outcomes can be implemented as a sequence of two-outcome measurements, it suffices to consider generalized measurements with two outcomes. The form of the weak operators needed to generate any measurement (Eq.~\eqref{step_operator}) is \begin{equation} { M}(x,\pm\epsilon) = \sqrt{ C_\pm \frac{{ I}+\tanh(x\pm\epsilon)({ M}_2^2-{ M}_1^2)}{{ I}+\tanh(x)({ M}_2^2-{ M}_1^2)}},\notag \end{equation} where \begin{equation} C_\pm = (1\pm\tanh(\epsilon)\tanh(x))/2 .\notag \end{equation} From these expressions it is easy to see that if $|\epsilon|\ll 1$, we have ${ M}(x,\epsilon) = \sqrt{1/2}({ I}+O(\epsilon))$, i.e., the coefficients $q_j$ in Eq.~\eqref{wmg} are $q_1=q_2=\frac{1}{\sqrt{2}}$. Furthermore, if the original measurement is local, the weak measurements are also local. Clearly, the fact that infinitesimal local operations are part of the set of LO means that an entanglement monotone must be a monotone under infinitesimal local operations. 
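Both decompositions are easy to illustrate numerically. The sketch below uses an arbitrarily chosen, hypothetical single-qubit generator and measurement, with all operators diagonal so that elementwise square roots coincide with matrix square roots; it checks that a long product of infinitesimal unitaries reproduces $e^{i{ H}}$, and that the weak step operators of Eq.~\eqref{step_operator} form a valid measurement and stay within $O(\epsilon)$ of ${ I}/\sqrt{2}$.
\begin{verbatim}
import numpy as np

# Hypothetical local generator: H = 0.7 * sigma_y acting on a single qubit.
sy = np.array([[0, -1j], [1j, 0]])
H = 0.7 * sy
w, v = np.linalg.eigh(H)
U_exact = (v * np.exp(1j * w)) @ v.conj().T            # exp(iH) via eigendecomposition

n = 100000
U_steps = np.linalg.matrix_power(np.eye(2) + 1j * H / n, n)
print(np.abs(U_steps - U_exact).max())                 # -> 0 as n grows

# Weak step operators of Eq. (step_operator) for a hypothetical measurement
# with M1^2 = diag(0.8, 0.3) and M2^2 = diag(0.2, 0.7); everything is diagonal.
d = np.diag([0.2, 0.7]) - np.diag([0.8, 0.3])          # M2^2 - M1^2
def M_step(x, eps):
    C = (1 + np.tanh(eps) * np.tanh(x)) / 2
    ratio = (np.eye(2) + np.tanh(x + eps) * d) @ np.linalg.inv(np.eye(2) + np.tanh(x) * d)
    return np.sqrt(C * ratio)                          # elementwise sqrt = matrix sqrt here

x, eps = 0.5, 0.1
assert np.allclose(M_step(x, eps) @ M_step(x, eps)
                   + M_step(x, -eps) @ M_step(x, -eps), np.eye(2))   # completeness
for eps in (1e-2, 1e-3, 1e-4):
    print(np.abs(M_step(0.5, eps) - np.eye(2) / np.sqrt(2)).max())   # scales like eps
\end{verbatim}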
The discussion in this section implies that if a function is a monotone under infinitesimal local unitaries and generalized measurements, it is a monotone under all local unitaries and generalized measurements (the operations that do not involve information loss and preserve pure states). Based on this result, in the next section we derive necessary and sufficient conditions for a function to be an entanglement monotone. \section*{3.4 \hspace{2pt} Differential conditions for entanglement monotones} \addcontentsline{toc}{section}{3.4 \hspace{0.15cm} Differential conditions for entanglement monotones} Let us now consider the change in the state under an infinitesimal local operation. Without loss of generality, we assume that the operation is performed on Alice's subsystem. In this case, it is convenient to write the density matrix of the system as \begin{equation} {\rho}=\underset{i,j,l,m}\sum \rho_{ijlm} |i_A\rangle \langle l_A|\otimes |j_{BC...}\rangle \langle m_{BC...}|, \end{equation} where the set $\{|i_A\rangle\}$ and the set $\{|j_{BC...}\rangle\}$ are arbitrary orthonormal bases for subsystem $A$ and the rest of the system, respectively. Any function of the state $f(\rho)$ can be thought of as a function of the coefficients in the above decomposition: \begin{equation} f(\rho) = f(\rho_{ijlm}). \end{equation} \subsection*{3.4.1 \hspace{2pt} Local unitary invariance} \addcontentsline{toc}{subsection}{3.4.1 \hspace{0.15cm} Local unitary invariance} Unitary operations are invertible, and therefore the monotonicity condition reduces to an invariance condition for LU transformations. Under local unitary operations on subsystem $A$ the components of ${\rho}$ transform as follows: \begin{equation} \rho_{ijlm} \rightarrow \underset{k,p}\sum U_{ik}\rho_{kjpm}U^*_{lp}, \end{equation} where $U_{ik}$ are the components of the local unitary operator in the basis $\{|i_A\rangle\}$. We consider infinitesimal local unitary operations: \begin{equation} U_{lk} = \left(e^{i{\varepsilon}}\right)_{lk}, \end{equation} where ${{\varepsilon}}$ is a local hermitian operator acting on subsystem $A$, and \begin{equation} \|{{\varepsilon}}\|\ll 1. \end{equation} Up to first order in ${\varepsilon}$ the coefficients $\rho_{ijlm}$ transform as \begin{equation} \rho_{ijlm} \rightarrow \rho_{ijlm} + i[{\varepsilon}, \rho]_{ijlm}. \end{equation} Requiring LU-invariance of $f(\rho)$, we obtain that the function must satisfy \begin{equation} \underset{i,j,l,m}\sum\frac{\partial f}{\partial \rho_{ijlm}}[{\varepsilon}, \rho]_{ijlm}=0. \label{three} \end{equation} Analogous equations must be satisfied for arbitrary hermitian operators ${\varepsilon}$ acting on the other parties' subsystems. In a more compact form, the condition can be written as \begin{equation} {\rm Tr}\left\{ \frac{\partial f}{\partial\rho}[{\varepsilon} , \rho] \right\} = 0, \label{mixed1} \end{equation} where ${\varepsilon}$ is an arbitrary local hermitian operator. \subsection*{3.4.2 \hspace{2pt} Non-increase under infinitesimal local measurements} \addcontentsline{toc}{subsection}{3.4.2 \hspace{0.15cm} Non-increase under infinitesimal local measurements} As mentioned earlier, a measurement with any number of outcomes can be implemented as a sequence of measurements with two outcomes, and a general measurement can be done as a measurement with positive operators, followed by a unitary conditioned on the outcome; therefore, it suffices to impose the monotonicity condition for two-outcome measurements with positive measurement operators. 
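The content of the LU-invariance condition \eqref{mixed1} is that the first-order change of $f$ under an infinitesimal local unitary vanishes. The following finite-difference sketch (Python/NumPy; the test functions, state and operators are arbitrary illustrative choices, not part of the argument) contrasts an LU-invariant function, the local purity ${\rm Tr}\{\rho_A^2\}$ discussed in Section 3.5.2, with a function that is not LU-invariant, the local expectation value ${\rm Tr}\{\rho\,(\sigma^z\otimes { I})\}$.
\begin{verbatim}
import numpy as np

def expi(h):
    # unitary exp(i h) for a hermitian matrix h
    w, v = np.linalg.eigh(h)
    return (v * np.exp(1j * w)) @ v.conj().T

def purity_A(rho):
    rho_a = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
    return np.trace(rho_a @ rho_a).real

rng = np.random.default_rng(1)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

eps_A = np.array([[0.7, 0.2 + 0.4j], [0.2 - 0.4j, -0.1]])    # hermitian, acts on A
sz_A = np.kron(np.diag([1.0, -1.0]), np.eye(2))
f_inv = purity_A                                             # LU-invariant
f_not = lambda r: np.trace(r @ sz_A).real                    # not LU-invariant

for tau in (1e-2, 1e-3):
    U = np.kron(expi(tau * eps_A), np.eye(2))
    rho_u = U @ rho @ U.conj().T
    print(tau, abs(f_inv(rho_u) - f_inv(rho)), abs(f_not(rho_u) - f_not(rho)))
# first difference ~ 0 (exact invariance); second is generically of order tau
\end{verbatim}
The invariant function is unchanged to machine precision for any $\tau$, while the non-invariant one changes at first order in $\tau$, so only the former can satisfy \eqref{mixed1}.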
Consider local measurements on subsystem $A$ with two measurement outcomes, described by positive operators ${{ M}}_1$ and ${{ M}}_2$ satisfying ${{ M}}_1^2+{{ M}}_2^2 = { I}$. Without loss of generality, we assume \begin{eqnarray} {{ M}}_1 &=& \sqrt{ ({{ I}}+{{\varepsilon}})/2 },\nonumber\\ {{ M}}_2 &=& \sqrt{ ({{ I}}-{{\varepsilon}})/2 }, \label{mm} \end{eqnarray} where ${{\varepsilon}}$ is again a small local hermitian operator acting on $A$ (in the previous section we saw that any two-outcome measurement with positive operators can be generated by weak measurements of this type). Upon measurement, the state undergoes one of two possible transformations \begin{eqnarray} \rho &\rightarrow& \frac{{ M}_{1,2}\rho { M}_{1,2}}{p_{1,2}}, \end{eqnarray} with probabilities $p_{1,2} = {\rm Tr}\left\{ {{ M}_{1,2}}^2 \rho \right\}$. Since ${\varepsilon}$ is small, we can expand \begin{eqnarray} {{ M}}_1 &=& \frac{1}{\sqrt{2}}({{ I}} + {{\varepsilon}}/2 - {{\varepsilon}}^2/8 - \cdots), \label{m1} \\ {{ M}}_2 &=&\frac{1}{\sqrt{2}}({{ I}} - {{\varepsilon}}/2 - {{\varepsilon}}^2/8 - \cdots).\label{m2} \end{eqnarray} The condition for non-increase on average of the function $f$ under infinitesimal local measurements is \begin{equation} p_1 f({ M}_1\rho { M}_1/p_1) + p_2 f({ M}_2\rho { M}_2/p_2) \le f(\rho) . \label{nonincrease4} \end{equation} Expanding (\ref{nonincrease4}) in powers of ${\varepsilon}$ up to second order, we obtain \begin{equation} \frac{1}{4}{\rm Tr}\left\{ \frac{\partial f}{\partial\rho}[[{\varepsilon}, \rho],{\varepsilon}] \right\} + {\rm Tr}\left\{ \frac{\partial^2 f}{\partial\rho^{\otimes2}} \left( {\rm Tr}({\varepsilon}\rho)\rho - \frac{1}{2}\{{\varepsilon},\rho\} \right)^{\otimes 2} \right\} \leq 0 , \label{mixed3} \end{equation} where $\{{\varepsilon}, \rho \}$ is the anti-commutator of ${\varepsilon}$ and $\rho$. The inequality must be satisfied for an arbitrary local hermitian operator ${\varepsilon}$. So long as (\ref{mixed3}) is satisfied by a strict inequality, it is obvious that we need not consider higher-order terms in ${\varepsilon}$. But what about the case when the condition is satisfied by equality? In the appendix we will show that even in the case of equality, (\ref{mixed3}) is still the necessary and sufficient condition for monotonicity under local generalized measurements. There we also prove the sufficiency of the LU-invariance condition \eqref{mixed1}. This allows us to state the following \\ \textbf{Theorem 1}: A twice-differentiable function $f(\rho)$ of the density matrix is a monotone under local unitary operations and generalized measurements, if and only if it satisfies \eqref{mixed1} and \eqref{mixed3}. \\ We point out that from the condition of LU invariance applied up to second order in ${\varepsilon}$, one obtains \begin{equation} {\rm Tr}\left\{ \frac{\partial f}{\partial\rho}[[{\varepsilon}, \rho],{\varepsilon}] \right\}=-{\rm Tr}\left\{ \frac{\partial^2 f}{\partial\rho^{\otimes2}} \left( i[{\varepsilon},\rho] \right)^{\otimes 2} \right\}.\label{LUequiv} \end{equation} Therefore, in the case when both Eq.~\eqref{mixed1} and Eq.~\eqref{mixed3} are satisfied, condition \eqref{mixed3} can be written equivalently in the form \begin{equation} {\rm Tr}\left\{ \frac{\partial^2 f}{\partial\rho^{\otimes2}} \left[ \left( {\rm Tr}({\varepsilon}\rho)\rho - \frac{1}{2}\{{\varepsilon},\rho\} \right)^{\otimes 2}- \left( \frac{i}{2}[{\varepsilon},\rho] \right)^{\otimes 2} \right] \right\} \leq 0. \end{equation} Unitary operations and generalized measurements are the operations that preserve pure states.
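As a sanity check on the expansion leading to \eqref{mixed3}, one can evaluate the average change $p_1 f(\rho_1)+p_2 f(\rho_2)-f(\rho)$ at finite ${\varepsilon}$ for the measurement \eqref{mm} and observe that it has a definite sign and scales quadratically with the strength of ${\varepsilon}$, so that the second-order term indeed controls the condition. The sketch below (Python/NumPy, not part of the proof; the state, the direction ${\varepsilon}$, and the test function are arbitrary illustrative choices) uses the local purity ${\rm Tr}\{\rho_A^2\}$, which Section 3.5.2 shows to be an \emph{increasing} pure-state monotone, so the average change here is non-negative.
\begin{verbatim}
import numpy as np

def psd_sqrt(a):
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T

def purity_A(rho):
    rho_a = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
    return np.trace(rho_a @ rho_a).real

rng = np.random.default_rng(2)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())
eps_A = np.array([[0.5, 0.1 + 0.3j], [0.1 - 0.3j, -0.5]])   # hermitian direction on A
I2 = np.eye(2)

def avg_change(s):
    # p1 f(rho1) + p2 f(rho2) - f(rho) for the measurement (mm) with strength s
    total = 0.0
    for sign in (+1, -1):
        M = np.kron(psd_sqrt((I2 + sign * s * eps_A) / 2), I2)
        p = np.trace(M @ rho @ M.conj().T).real
        total += p * purity_A(M @ rho @ M.conj().T / p)
    return total - purity_A(rho)

for s in (0.2, 0.1, 0.05):
    print(s, avg_change(s))   # non-negative, roughly quartered when s is halved
\end{verbatim}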
Other operations (which involve loss of information), such as positive maps, would in general cause pure states to evolve into mixed states. A measure of pure-state entanglement need not be defined over the entire set of density matrices, but only over pure states. Thus a measure of pure-state entanglement, when expressed as a function of the density matrix, may have a significantly simpler form than its generalizations to mixed states. For example, the entropy of entanglement for bipartite pure states can be written in the well-known form $S_A(\rho)=-{\rm Tr}(\rho_A\log \rho_A)$, where $\rho_A$ is the reduced density matrix of one of the parties' subsystems. When directly extended over mixed states, this function is not well justified, since $S_A(\rho)$ may have a different value from $S_B(\rho)$. Moreover, $S_A(\rho)$ by itself is not a mixed-state entanglement monotone, since it may increase under local positive maps on subsystem A (these properties of the entropy of entanglement will be discussed further in Section 3.5). One generalization of the entropy of entanglement to mixed states is the entanglement of formation \cite{Bennett96c}, which is defined as the minimum of $\sum_i p_i S_A(\rho_i)$ over all ensembles of bipartite pure states $\{\rho_i, p_i\}$ realizing the mixed state: $\rho=\sum_i p_i \rho_i$. This quantity is a mixed-state entanglement monotone. As a function of $\rho$, it has a much more complicated form than the above expression for the entropy of entanglement. In fact, there is no known analytic expression for the entanglement of formation in general. The problem of extending pure-state entanglement monotones to mixed states is an important one, since every mixed-state entanglement monotone can be thought of as an extension of a pure-state entanglement monotone. Note, however, that a pure-state entanglement monotone may have many different mixed-state generalizations. The relation between the entanglement of formation and the entropy of entanglement presents one way to perform such an extension (convex-roof extension). For every pure-state entanglement monotone $m(\rho)$, one can define a mixed-state extension $M(\rho)$ as the minimum of $\sum_i p_i m(\rho_i)$ over all ensembles of pure states $\{\rho_i, p_i\}$ realizing the mixed state: $\rho=\sum_i p_i \rho_i$. It is easy to verify that $M(\rho)$ is an entanglement monotone for mixed states. On the set of pure states the function $M(\rho)$ reduces to $m(\rho)$. As the example with the entropy of entanglement suggests, not every form of a pure-state entanglement monotone corresponds to a mixed-state entanglement monotone when trivially extended to all states---there are additional conditions that a mixed-state entanglement monotone must satisfy. On the basis of the above considerations, it makes sense to consider separate sets of differential conditions for pure-state and mixed-state entanglement monotones. \\ \textbf{Corollary 1:} A twice-differentiable function $f(\rho)$ of the density matrix is a pure-state entanglement monotone, if and only if it satisfies \eqref{mixed1} and \eqref{mixed3} for pure $\rho$. \\ For pure states $\rho = \ket\psi\bra\psi$, the elements of $\rho$ are $\rho_{ij\ell m} = \alpha_{ij}\alpha^*_{\ell m}$, where the $\{\alpha_{ij}\}$ are the state amplitudes: $ |\psi\rangle = \underset{i,j}{\sum}\alpha_{ij}|i_A\rangle |j_{BC...}\rangle $. 
Any function on pure states $f(\rho)\equiv f(|\psi\rangle)$ is therefore a function of the state amplitudes and their complex conjugates: \begin{equation} f(|\psi\rangle) = f(\{\alpha_{ij}\}, \{\alpha^{\ast}_{ij}\}). \end{equation} By making the substitution $\rho_{ij\ell m} = \alpha_{ij}\alpha^*_{\ell m}$ into (\ref{mixed1}) and (\ref{mixed3}), we can (after considerable algebra) derive alternative forms of the differential conditions for functions of the state vector: \begin{equation} \underset{i,j,k}\sum\frac{\partial f}{\partial \alpha_{ij}} \varepsilon_{ik} \alpha_{kj}=\underset{i,j,k}\sum\frac{\partial f}{\partial \alpha^{\ast}_{ij}}\varepsilon^{\ast}_{ik}\alpha^{\ast}_{kj}, \label{LU} \end{equation} \begin{equation} \underset{i,j,k,l,m,n}\sum\frac{\partial^2f}{\partial \alpha_{ij}\partial\alpha_{mn}} \left( \varepsilon_{ik}\alpha_{kj}-\expect{{\varepsilon}}\alpha_{ij} \right) \left( \varepsilon_{m\ell}\alpha_{\ell n}-\expect{{\varepsilon}}\alpha_{mn} \right) + c.c. \leq 0. \label{nonincrease3} \end{equation} Here ${\varepsilon}$ is a local hermitian operator acting on subsystem A. Analogous conditions must be satisfied for ${\varepsilon}$ acting on the other parties' subsystems. \subsection*{3.4.3 \hspace{2pt} Monotonicity under operations with information loss} \addcontentsline{toc}{subsection}{3.4.3 \hspace{0.15cm} Monotonicity under operations with information loss} Besides monotonicity under local unitaries and generalized measurements, an entanglement monotone for mixed states should also satisfy monotonicity under local operations which involve {\it loss of information}. The most general transformation that involves loss of information has the form \begin{equation} \rho \rightarrow \rho_k = \frac{1}{p_k} \sum_j { M}_{k,j} \rho \Mhat^\dagger_{k,j} ,\label{mostgeneral} \end{equation} where \begin{equation} p_k = {\rm Tr}\left\{ \sum_j { M}_{k,j} \rho \Mhat^\dagger_{k,j} \right\} \end{equation} is the probability for outcome $k$. The operators $\{{ M}_{k,j}\}$ must satisfy \begin{equation} \sum_{k,j} \Mhat^\dagger_{k,j} { M}_{k,j} = { I} . \end{equation} We can see that this includes unitary transformations, generalized measurements, and completely positive trace-preserving maps as special cases. It occasionally makes sense to consider even more general transformations, where the operators need not sum to the identity: \begin{equation} \sum_{k,j} \Mhat^\dagger_{k,j} { M}_{k,j} \le { I} . \end{equation} This corresponds to a situation where only certain outcomes are retained, and others are discarded; the probabilities add up to less than 1 due to these discarded outcomes. We say such a transformation involves {\it postselection}. With or without postselection, we are concerned with the case where all operations are done locally, so that all the operators $\{{ M}_{k,j}\}$ act on a single subsystem. Every such transformation can be implemented as a sequence of local generalized measurements (possibly discarding some of the outcomes) and local completely positive maps. In operator-sum representation \cite{Kraus83}, a completely positive map can be written \begin{equation} \rho \rightarrow \sum_k { M}_k \rho { M}_k^{\dagger},\label{firstCPTPmap} \end{equation} where \begin{equation} \sum_k { M}_k^{\dagger}{ M}_k \leq { I}. \label{positivity} \end{equation} Therefore, in addition to \eqref{mixed1} and \eqref{mixed3} we must impose the condition \begin{equation} f(\rho) \geq f\left( \sum_k { M}_k\rho { M}_k^{\dagger} \right) . 
\label{maps} \end{equation} for all sets of local operators $\{{ M}_k\}$ satisfying (\ref{positivity}). Suppose the parties are supplied with a state $\rho_k$ taken from an ensemble $\{\rho_k, p_k\}$. Discarding the information about which state was actually supplied amounts to the transformation \begin{equation} \{\rho_k, p_k\} \rightarrow \rho '=\underset{k}{\sum} p_k \rho_k. \end{equation} As pointed out in \cite{Vidal00b}, discarding information should not increase the entanglement of the system on average. Therefore, for any ensemble $\{\rho_k, p_k\}$, an entanglement monotone on mixed states should be {\it convex}: \begin{equation} \sum_k p_k f(\rho_k) \geq f\left( \sum_k p_k \rho_k \right) . \label{convex} \end{equation} Condition \eqref{convex}, together with condition (\ref{mixed3}) for monotonicity under local generalized measurements, implies monotonicity under local completely positive maps: \begin{equation} f\left( \sum_k { M}_k\rho { M}_k^{\dagger} \right) \leq \sum_k p_k f \left( \frac{{ M}_k\rho { M}^{\dagger}_k}{p_k} \right) \leq f(\rho). \end{equation} It is easy to see that if this inequality holds without postselection, it must also hold with postselection. It follows that a function of the density matrix is an entanglement monotone for mixed states if and only if it is (1) a convex function on the set of density matrices and (2) a monotone under local unitaries and generalized measurements. Fortunately, there are also simple differential conditions for convexity. A necessary and sufficient condition for a twice-differentiable function of multiple variables to be convex on a convex set is that its Hessian matrix be positive semidefinite on the interior of the convex set (in this case, the set of density matrices). Therefore, in addition to \eqref{mixed1} and \eqref{mixed3} we add the differential condition \begin{equation} {\rm Tr}\left\{ \frac{\partial^2 f(\rho)}{\partial\rho^{\otimes 2}}\sigma^{\otimes 2} \right\} \geq 0, \label{mixed4} \end{equation} which must be satisfied at every $\rho$ in the interior of the set of density matrices for an arbitrary traceless hermitian matrix $\sigma$. \\\\ \textbf{Corollary 2:} A twice-differentiable function $f(\rho)$ of the density matrix is a mixed-state entanglement monotone, if and only if it satisfies \eqref{mixed1}, \eqref{mixed3} and \eqref{mixed4}. \section*{3.5 \hspace{2pt} Examples} \addcontentsline{toc}{section}{3.5 \hspace{0.15cm} Examples} In this section we demonstrate how conditions \eqref{mixed1}, \eqref{mixed3} and \eqref{mixed4} can be used to verify whether a function is an entanglement monotone. We show this for three well-known entanglement monotones: the norm of the state of the system, the trace of the square of the reduced density matrix of any subsystem, and the entropy of entanglement. In the next section we will use some of the observations made here to construct a new polynomial entanglement monotone for three-qubit pure states. \subsection*{3.5.1 \hspace{2pt} Norm of the state} \addcontentsline{toc}{subsection}{3.5.1 \hspace{0.15cm} Norm of the state} The most trivial example is the norm, or the trace, of the density matrix of the system: \begin{equation} I_1={\rm Tr}\{\rho\}. \end{equation} Clearly $I_1$ is a monotone under LOCC, since all operations that we consider either preserve or decrease the trace. But for the purpose of demonstration, let us verify that $I_1$ satisfies the differential conditions.
The LU-invariance condition \eqref{mixed1} reads \begin{equation} {\rm Tr}\left\{ \frac{\partial I_1}{\partial\rho}[{\varepsilon} , \rho] \right\} = {\rm Tr}\left\{ [{\varepsilon} , \rho ] \right\} = 0. \end{equation} The second equality follows from the cyclic invariance of the trace. Since the trace is linear, the second term in condition \eqref{mixed3} vanishes, and we consider only the first term: \begin{equation} {\rm Tr}\left\{ \frac{\partial I_1}{\partial\rho}[[{\varepsilon}, \rho],{\varepsilon}] \right\} = {\rm Tr}\left\{ [[{\varepsilon}, \rho],{\varepsilon}] \right\} = 0. \end{equation} The condition is satisfied with equality, again due to the cyclic invariance of the trace, implying that the norm remains invariant under local measurements. The convexity condition \eqref{mixed4} is also satisfied by equality. \subsection*{3.5.2 \hspace{2pt} Local purity} \addcontentsline{toc}{subsection}{3.5.2 \hspace{0.15cm} Local purity} The second example is the purity of the reduced density matrix: \begin{equation} I_2={\rm Tr}\left\{ \rho_A^2 \right\}, \end{equation} where $\rho_A$ is the reduced density matrix of subsystem $A$ (which in general need not be a one-party subsystem). Note that this is an {\it increasing} entanglement monotone for pure states---the purity of the local reduced density matrix can only increase under LOCC. It has been shown in \cite{Brun04} that every $m$-th degree polynomial of the components of the density matrix $\rho$ can be written as an expectation value of an observable ${ O}$ on $m$ copies of $\rho$: \begin{equation} f(\rho)={\rm Tr}\left\{ { O} \rho^{\otimes m} \right\} . \end{equation} Here we have \begin{equation} {\rm Tr}\left\{ \rho_A^2 \right\} = {\rm Tr}\left\{ {C} \rho^{\otimes 2} \right\} , \end{equation} where the components of ${C}$ are \begin{equation} C_{lpsnkjqm}=\delta_{jp}\delta_{mn}\delta_{lq}\delta_{ks}. \end{equation} Therefore \begin{eqnarray} {\rm Tr}\left\{ \frac{\partial I_2}{\partial\rho}[{\varepsilon} , \rho] \right\} &=& {\rm Tr}\left\{ {C}\left( [{\varepsilon} , \rho ]\otimes\rho + \rho\otimes[{\varepsilon} , \rho ] \right) \right\} \nonumber\\ &=& {\rm Tr}_A \left\{ [{\varepsilon} , \rho ]_A \rho_A + \rho_A [{\varepsilon},\rho]_A \right\} \nonumber\\ &=& 2 {\rm Tr}_A \left\{ \rho_A [{\varepsilon} , \rho]_A \right\} , \label{ifour} \end{eqnarray} where by ${ O}_A$ we denote the partial trace of an operator ${ O}$ over all subsystems except $A$. If ${\varepsilon}$ does not act on subsystem $A$, then $[{\varepsilon},\rho ]_A = 0$ and the above expression vanishes. If it acts on subsystem $A$, then $[{\varepsilon} , \rho ]_A =[{\varepsilon},\rho_A]$ and the expression vanishes due to the cyclic invariance of the trace. Now consider condition \eqref{mixed3}. If ${\varepsilon}$ does not act on subsystem $A$, then \begin{equation} [[{\varepsilon}, \rho],{\varepsilon}]_A=0.\label{doublecom} \end{equation} From \eqref{mixed3} we get \begin{eqnarray} 0 &\le& \frac{1}{4}{\rm Tr}\left\{ \frac{\partial I_2}{\partial\rho} [[{\varepsilon}, \rho],{\varepsilon}] \right\} + {\rm Tr}\left\{ \frac{\partial^2 I_2}{\partial\rho^{\otimes2}} \left( {\rm Tr}\left\{ {\varepsilon}\rho \right\} \rho - \frac{1}{2} \{{\varepsilon},\rho\} \right)^{\otimes 2} \right\} \nonumber\\ && = 2{\rm Tr}\left\{ \left( {\rm Tr}\{{\varepsilon}\rho\} \rho - \frac{1}{2}\{{\varepsilon},\rho\} \right)_A^2 \right\}. \end{eqnarray} The inequality follows from the fact that $\left({\rm Tr}\{{\varepsilon}\rho\}\rho - (1/2) \{{\varepsilon},\rho\}\right)_A^2$ is a positive operator. 
If ${\varepsilon}$ acts on $A$, we can use the fact that for pure states \begin{equation} {\rm Tr}\left\{ \rho_A^2 \right\}= {\rm Tr}\left\{ \rho_B^2 \right\}, \end{equation} where $B$ denotes the subsystem complementary to $A$. Then we can apply the same argument as before for the function ${\rm Tr}\left\{ \rho_B^2 \right\}$. Therefore $I_2$ does not \emph{decrease} on average under local generalized measurements, and is an entanglement monotone for pure states. What about mixed states? For \emph{increasing} entanglement monotones the convexity condition \eqref{mixed4} becomes a \emph{concavity} condition---the direction of the inequality is inverted. In the case of $I_2$, however, we have \begin{equation} {\rm Tr}\left\{ \frac{\partial^2 I_2(\rho)}{\partial\rho^{\otimes 2}}\sigma^{\otimes 2} \right\} = 2{\rm Tr}\left\{ \sigma_A^2 \right\} \geq 0, \end{equation} i.e., the function is convex. This means that ${\rm Tr}\{\rho_A^2\}$ is \emph{not} a good measure of entanglement for mixed states. Indeed, when extended to mixed states, $I_2$ cannot distinguish between entanglement and classical disorder. \subsection*{3.5.3 \hspace{2pt} Entropy of entanglement} \addcontentsline{toc}{subsection}{3.5.3 \hspace{0.15cm} Entropy of entanglement} Finally consider the von~Neumann entropy of entanglement: \begin{equation} S_A=-{\rm Tr}(\rho_A\log \rho_A). \end{equation} Expanding around $\rho_A={ I}$, we get \begin{equation} S_A=-{\rm Tr}[(\rho_A-{ I})+\frac{1}{2}(\rho_A-{ I})^2-\frac{1}{6}(\rho_A-{ I})^3 + ...]. \end{equation} The LU-invariance follows from the fact that every term in this expansion satisfies \eqref{mixed1}. If we substitute the $n$-th term in the condition, we obtain \begin{equation} {\rm Tr}([{\varepsilon},\rho]_A (\rho_A-{ I})^{n-1})=0. \end{equation} This is true either because $[{\varepsilon},\rho]_A=0$ when ${\varepsilon}$ does not act on $A$, or because otherwise $[{\varepsilon},\rho]_A= [{\varepsilon},\rho_A]$ and the equation follows from the cyclic invariance of the trace. Now to prove that $S_A$ satisfies \eqref{mixed3}, we will first assume that $\rho_A^{-1}$ exists. Then we can formally write \begin{equation} \frac{\partial}{\partial \rho}\log{\rho_A} = \frac{\partial\rho_A}{\partial \rho} \frac{\partial}{\partial\rho_A}\log{\rho_A} = \frac{\partial \rho_A}{\partial\rho}\rho_A^{-1}. \end{equation} Consider the case when ${\varepsilon}$ does not act on $A$. 
Substituting $S_A$ in \eqref{mixed3}, we get \begin{gather} \frac{1}{4}{\rm Tr}\left\{ \frac{\partial S_A}{\partial\rho}[[{\varepsilon}, \rho],{\varepsilon}] \right\} + {\rm Tr}\left\{ \frac{\partial^2 S_A}{\partial\rho^{\otimes2}} \left( {\rm Tr}\{{\varepsilon}\rho\}\rho - \frac{1}{2}\{{\varepsilon},\rho\}\right)^{\otimes 2} \right\} \nonumber\\ = 0 + {\rm Tr}\left\{ \left( \frac{\partial}{\partial\rho}\otimes \left( -\log{\rho_A}\frac{\partial \rho_A}{\partial\rho} -\frac{\partial \rho_A}{\partial \rho} \right) \right) \left( {\rm Tr}\{{\varepsilon}\rho\}\rho - \frac{1}{2}\{{\varepsilon},\rho\} \right)^{\otimes 2} \right\} \nonumber\\ = -{\rm Tr}\left\{ \left( \rho_A^{-1}\frac{\partial \rho_A}{\partial \rho}\frac{\partial \rho_A}{\partial \rho} \right) \left( {\rm Tr}\{{\varepsilon}\rho\}\rho - \frac{1}{2}\{{\varepsilon},\rho\} \right)^{\otimes 2} \right\} \nonumber\\ = - {\rm Tr}_A \left\{ \rho_A^{-1}\left( {\rm Tr}\{{\varepsilon}\rho\}\rho - \frac{1}{2}\{{\varepsilon},\rho\} \right)_A \left( {\rm Tr}\{{\varepsilon}\rho\}\rho - \frac{1}{2}\{{\varepsilon},\rho\} \right)_A \right\} \nonumber\\ = - {\rm Tr}_A \Biggl\{ \left|\rho_A^{-1/2}\left({\rm Tr}\{{\varepsilon}\rho\}\rho - \frac{1}{2}\{{\varepsilon},\rho\} \right)_A\right|^2 \Biggr\} \leq 0. \label{ent} \end{gather} If $\rho_A^{-1}$ does not exist, it is only on a subset of measure zero---where one or more of the eigenvalues of $\rho_A$ vanish. Therefore, we can always find an arbitrarily close vicinity in the parameters describing $\rho_A$, where $\rho_A^{-1}$ is regular and where \eqref{mixed3} is satisfied. Since the condition is continuous, it cannot be violated on this special subset. If ${\varepsilon}$ acts on $A$, we can use an equivalent definition of the entropy of entanglement: \begin{equation} S_A=S_B=-{\rm Tr}\{\rho_B\log \rho_B\}, \end{equation} and apply the same arguments. Therefore $S_A$ is an entanglement monotone for pure states. The convexity condition is not satisfied, since \begin{equation} {\rm Tr}\left\{ \frac{\partial^2 S_A}{\partial\rho^{\otimes2}}\sigma^{\otimes 2} \right\} = - {\rm Tr}\{\rho_A^{-1}\sigma_A^2 \} \leq 0. \end{equation} This reflects the fact that the entropy of entanglement, like $I_2$, does not distinguish between entanglement and classical randomness. \section*{3.6 \hspace{2pt} A new entanglement monotone} \addcontentsline{toc}{section}{3.6 \hspace{0.15cm} A new entanglement monotone} It has been shown \cite{Gingrich02} that the set of all entanglement monotones for a multipartite pure state uniquely determine the orbit of the state under the action of the group of local unitary transformations. For three-qubit pure states the orbit is uniquely determined by 5 independent continuous invariants (not counting the norm) and one discrete invariant \cite{Acin00,Carteret00}. Therefore, for pure states of three qubits there must exist five independent continuous entanglement monotones that are functions of the five independent continuous invariants. 
Any polynomial invariant in the amplitudes of a state \[ |\psi\rangle = \underset{i,j,k\ldots}{\sum}\alpha_{ijk\ldots}|i_A\rangle |j_B\rangle |k_C\rangle \cdots \] is a sum of homogeneous polynomials of the form \cite{Sudbery01} \begin{equation} P_{\sigma\tau\cdots}(|\psi\rangle)= \alpha_{i_1 j_1 k_1 \ldots} \alpha^*_{i_1 j_{\sigma(1)} k_{\tau(1)} \ldots} \cdots \alpha_{i_n j_n k_n \ldots} \alpha^*_{i_n j_{\sigma(n)} k_{\tau(n)} \ldots}, \label{inv} \end{equation} where $\sigma, \tau, \ldots$ are permutations of $(1,2,\ldots,n)$, and repeated indices indicate summation. A set of five independent polynomial invariants for three-qubit pure states is \cite{Sudbery01} \begin{eqnarray} I_1&=&P_{e,(12)}\\ I_2&=&P_{(12),e}\\ I_3&=&P_{(12),(12)}\\ I_4&=&P_{(123),(132)}\\ I_5&=&| \alpha_{i_1j_1k_1} \alpha_{i_2j_2k_2} \alpha_{i_3j_3k_3} \alpha_{i_4j_4k_4} \epsilon_{i_1i_2} \epsilon_{i_3i_4} \epsilon_{j_1j_2} \epsilon_{j_3j_4} \epsilon_{k_1k_3} \epsilon_{k_2k_4} |^2. \end{eqnarray} In the last expression $\epsilon_{ij}$ is the antisymmetric tensor in two dimensions. The first three invariants are the local purities of subsystems C, B and A, $I_4$ is the invariant identified by Kempe \cite{Kempe99} and $I_5$ is (up to a factor) the square of the 3-tangle identified by Coffman, Kundu and Wootters \cite{Coffman00}. According to \cite{Gingrich02} the four known independent continuous entanglement monotones that do not require maximization over a multi-dimensional space are \begin{gather} \tau_{(AB)C}=2(1-I_1)\\ \tau_{(AC)B}=2(1-I_2)\\ \tau_{(BC)A}=2(1-I_3)\\ \tau_{ABC}= 2\sqrt{I_5}, \end{gather} and any fifth independent entanglement monotone must depend on $I_4$. Numerical evidence suggested that the tenth-order polynomial $\sigma_{ABC}= 3-(I_1+I_2+I_3)I_4$ might be such an entanglement monotone. However, no rigorous proof of monotonicity was given. Here, we will use conditions \eqref{mixed1} and \eqref{mixed3} to construct a different independent entanglement monotone, which is of sixth order in the amplitudes of the state and their complex conjugates. Observe that in \eqref{inv} the amplitudes have been combined in such a way that subsystem A is manifestly traced out. By appropriate rearrangement, one can write the same expression in a form where an arbitrary subsystem is manifestly traced out. Therefore, any polynomial invariant can be written entirely in terms of the components of ${\rm Tr}_A\left\{\rho\right\}$ or ${\rm Tr}_B\left\{\rho\right\}$, etc. This immediately implies that the LU-invariance condition \eqref{mixed1} is satisfied, since if ${\varepsilon}$ acts on subsystem A, we can consider the expression in terms of $\rho_{BC...}$, which, when substituted in \eqref{mixed1}, would yield zero because $[{\varepsilon},\rho]_{BC...}=0$. It also implies that in order to prove monotonicity under local measurements we need only consider the second term in \eqref{mixed3}, since when ${\varepsilon}$ acts on subsystem A, we can again consider the expression for the function only in terms of $\rho_{BC...}$ and the first term would vanish according to \eqref{doublecom}. We will aim at constructing a polynomial function of three-qubit pure states $\rho$ which has the same form when expressed in terms of $\rho_{AB}$, $\rho_{AC}$, or $\rho_{BC}$, in order to avoid the necessity for separate proofs of monotonicity under measurements on the different subsystems.
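These expressions are straightforward to evaluate numerically. The sketch below (Python/NumPy; not part of the original text, using an arbitrary random state) computes $I_1$, $I_2$ and $I_3$ directly from the contractions above and confirms that they coincide with the purities of subsystems $C$, $B$ and $A$, and evaluates $I_5$ from the $\epsilon$-tensor contraction and confirms that it is unchanged by random local unitaries.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
psi = rng.normal(size=(2, 2, 2)) + 1j * rng.normal(size=(2, 2, 2))
psi /= np.linalg.norm(psi)                    # amplitudes alpha_{ijk}
c = psi.conj()

I1 = np.einsum('abc,abd,efd,efc->', psi, c, psi, c).real   # P_{e,(12)}
I2 = np.einsum('abc,adc,edf,ebf->', psi, c, psi, c).real   # P_{(12),e}
I3 = np.einsum('abc,ade,fde,fbc->', psi, c, psi, c).real   # P_{(12),(12)}

rho_A = np.einsum('ijk,ljk->il', psi, c)      # reduced density matrices
rho_B = np.einsum('ijk,ilk->jl', psi, c)
rho_C = np.einsum('ijk,ijl->kl', psi, c)
purity = lambda r: np.trace(r @ r).real
print(np.allclose([I1, I2, I3],
                  [purity(rho_C), purity(rho_B), purity(rho_A)]))   # True

# I5 from the epsilon-tensor contraction given above
e = np.array([[0.0, 1.0], [-1.0, 0.0]])
contract = lambda a: np.einsum('abc,def,ghi,jkl,ad,gj,be,hk,ci,fl->',
                               a, a, a, a, e, e, e, e, e, e)
I5 = abs(contract(psi))**2

def random_u2(rng):                           # a random 2x2 unitary
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / abs(np.diag(r)))

psi_lu = np.einsum('ax,by,cz,xyz->abc',
                   random_u2(rng), random_u2(rng), random_u2(rng), psi)
print(np.isclose(I5, abs(contract(psi_lu))**2))              # True: LU invariant
\end{verbatim}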
It has been shown in \cite{Sudbery01} that \begin{eqnarray} I_4&=& 3{\rm Tr}\left\{\rho_{AB}(\rho_A\otimes \rho_B)\right\} - {\rm Tr}\left\{\rho_A^3\right\} - {\rm Tr}\left\{\rho_B^3\right\}\nonumber\\ &=&3{\rm Tr}\left\{\rho_{AC}(\rho_A\otimes \rho_C)\right\} - {\rm Tr}\left\{\rho_A^3\right\} - {\rm Tr}\left\{\rho_C^3\right\}\nonumber\\ &=&3{\rm Tr}\left\{\rho_{BC}(\rho_B\otimes \rho_C)\right\} - {\rm Tr}\left\{\rho_B^3\right\} - {\rm Tr}\left\{\rho_C^3\right\}.\label{sudbery} \end{eqnarray} For local measurements on subsystem C it is convenient to use the first of the above expressions for $I_4$. The terms ${\rm Tr}\left\{\rho_A^3\right\}$ and ${\rm Tr}\left\{\rho_B^3\right\}$ are entanglement monotones by themselves. This can be easily seen by plugging them in condition \eqref{mixed3}: \begin{gather} \frac{1}{4}{\rm Tr}\left\{ \frac{\partial {\rm Tr}\left\{\rho_{A,B}^3\right\}}{\partial\rho}[[{\varepsilon}, \rho],{\varepsilon}] \right\} + {\rm Tr}\left\{ \frac{\partial^2 {\rm Tr}\left\{\rho_{A,B}^3\right\}}{\partial\rho^{\otimes2}} \left( {\rm Tr}({\varepsilon}\rho)\rho - \frac{1}{2}\{{\varepsilon},\rho\} \right)^{\otimes 2} \right\} \nonumber\\ =0+6 {\rm Tr}\left\{\rho_{A,B} \left( {\rm Tr}\{{\varepsilon}\rho\} \rho - \frac{1}{2}\{{\varepsilon},\rho\} \right)_{A,B}^2 \right\}\geq 0. \end{gather} These terms, however, are not independent of the invariants $I_2$ and $I_3$. The term which is independent of the other polynomial invariants is ${\rm Tr}\left\{\rho_{AB}(\rho_A\otimes\rho_B)\right\}$. When we plug this term into condition \eqref{mixed3} we obtain an expression which is not manifestly positive or negative. Is it possible to construct a function dependent on this term, which similarly to ${\rm Tr}\left\{\rho_{A,B}^3\right\}$ would yield a trace of a manifestly positive operator when substituted in \eqref{mixed3}? It is easy to see that if the function has the form ${\rm Tr}\left\{{ X}^3\right\}$, where the operator ${ X}(\rho_{AB})$ is a positive operator linearly dependent on $\rho_{AB}$, it will be an increasing monotone under local measurements on C (for simplicity we assume ${ X}(0)={0}$): \begin{gather} \frac{1}{4}{\rm Tr}\left\{ \frac{\partial{\rm Tr}\left\{ { X}^3(\rho_{AB}) \right\}}{\partial\rho}[[{\varepsilon},\rho],{\varepsilon}] \right\} + {\rm Tr}\left\{ \frac{\partial^2 {\rm Tr}\left\{ { X}^3(\rho_{AB}) \right\}}{\partial\rho^{\otimes2}} \left( {\rm Tr}({\varepsilon}\rho)\rho - \frac{1}{2}\{{\varepsilon},\rho\} \right)^{\otimes 2} \right\} \nonumber\\ = 0 + 6 {\rm Tr}\left\{{ X}(\rho_{AB}) { X}^2(( {\rm Tr}\{{\varepsilon}\rho\} \rho - \frac{1}{2}\{{\varepsilon},\rho\} )_{AB}) \right\} \geq 0. \end{gather} Since we want the function to depend on ${\rm Tr}\left\{\rho_{AB}(\rho_A\otimes \rho_B)\right\}$, we choose ${ X}(\rho_{AB}) = 2\rho_{AB} + \rho_A\otimes I_B + I_A\otimes \rho_B$. This is clearly positive for positive $\rho_{AB}$. 
Expanding the trace, we obtain: \begin{gather} {\rm Tr}\left\{ { X}^3(\rho_{AB}) \right\} = 12{\rm Tr}\left\{\rho_{AB}(\rho_A\otimes\rho_B) \right\} +12{\rm Tr}\left\{ \rho_{AB}^2(I_A\otimes\rho_B)\right\} \nonumber\\ + 12{\rm Tr}\left\{ \rho_{AB}^2(\rho_A\otimes I_B) \right\} + 6{\rm Tr}\left\{\rho_{AB}(I_A\otimes\rho_B)^2 \right\} + 6{\rm Tr}\left\{\rho_{AB}(\rho_A\otimes I_B)^2 \right\}\nonumber\\ + 3{\rm Tr}\left\{\rho_A\otimes\rho_B^2\right\} + 3{\rm Tr}\left\{ \rho_A^2\otimes\rho_B \right\} + {\rm Tr}\left\{ I_A\otimes\rho_B^3 \right\} + {\rm Tr}\left\{ \rho_A^3\otimes I_B \right\} + 8{\rm Tr}\left\{\rho_{AB}^3 \right\} \nonumber\\ = 12{\rm Tr}\left\{\rho_{AB}(\rho_A\otimes\rho_B) \right\} + 12{\rm Tr}\left\{ \rho_{AB}^2(I_A\otimes\rho_B) \right\} + 12{\rm Tr}\left\{\rho_{AB}^2(\rho_A\otimes I_B)\right\} \nonumber\\ +8{\rm Tr}\left\{\rho_A^3\right\} + 8{\rm Tr}\left\{\rho_B^3\right\}+8{\rm Tr}\left\{\rho_{AB}^3 \right\} + 3{\rm Tr}\left\{\rho_A^2\right\}+3{\rm Tr}\left\{\rho_B^2\right\} . \end{gather} One can show that \begin{gather} {\rm Tr}\left\{ \rho_{AB}^2(I_A\otimes\rho_B) \right\} = {\rm Tr}\left\{ \rho_{BC}(\rho_B\otimes\rho_C) \right\},\\ {\rm Tr}\left\{ \rho_{AB}^2(\rho_A\otimes I_B) \right\}= {\rm Tr}\left\{ \rho_{AC}(\rho_A\otimes\rho_C) \right\}. \end{gather} We also have that ${\rm Tr}\left\{ \rho_{AB}^3 \right\} = {\rm Tr}\left\{ \rho_C^3 \right\}$. Using this and \eqref{sudbery}, we obtain \begin{equation} {\rm Tr}\left\{ { X}^3(\rho_{AB}) \right\} = 12 I_4 + 16 \left({\rm Tr}\left\{ \rho_A^3 \right\} + {\rm Tr}\left\{ \rho_B^3 \right\} +{\rm Tr}\left\{ \rho_C^3\right\} \right) +3{\rm Tr}\left\{ \rho_A^2 \right\} + 3{\rm Tr}\left\{ \rho_B^2 \right\} . \end{equation} This expression is an increasing monotone under local measurements on C. If we add to it $3{\rm Tr}\left\{ \rho_{AB}^2\right\} = 3{\rm Tr}\left\{ \rho_C^2 \right\}$, it becomes invariant under permutations of the subsystems. Since ${\rm Tr}\left\{ \rho_C^2 \right\}$ is itself an increasing entanglement monotone, the whole expression is an increasing monotone under local operations on any subsystem. We can define the closely related quantity \begin{equation} \phi_{ABC}=69-{\rm Tr}\left\{(2\rho_{AB} + \rho_A\otimes I_B + I_A\otimes \rho_B)^3\right\}-3{\rm Tr}\left\{ \rho_{AB}^2\right\}. \end{equation} This is a {\it decreasing} entanglement monotone that vanishes for product states, as is standard for a measure of entanglement. It depends on the invariant identified by Kempe and is therefore independent of the other known monotones for three-qubit pure states. \section*{3.7 \hspace{2pt} Summary and outlook} \addcontentsline{toc}{section}{3.7 \hspace{0.15cm} Summary and outlook} We have derived differential conditions for a twice-differentiable function on quantum states to be an entanglement monotone. There are two such conditions for pure-state entanglement monotones---invariance under local unitaries and non-increase on average under local measurements---plus a third condition (overall convexity of the function) for mixed-state entanglement monotones. We have shown that these conditions are both necessary and sufficient. We then verified that the conditions are satisfied by a number of known entanglement monotones, and we used them to construct a new polynomial entanglement monotone for three-qubit pure states. It is our hope that this approach to the study of entanglement may circumvent some of the difficulties that arise due to the mathematically complicated nature of LOCC.
It may be possible to find new classes of entanglement monotones, for both pure and mixed states, and to look for functions with particularly desirable properties (such as additivity). There may also be other areas of quantum information theory where it will prove advantageous to consider general quantum operations as continuous processes. This seems a very promising new direction for research. \section*{3.8 \hspace{1pt} Appendix: Proof of sufficiency} \addcontentsline{toc}{section}{3.8 \hspace{0.15cm} Appendix: Proof of sufficiency} The LU-invariance condition can be written as \begin{equation} F(\rho,{\varepsilon})=0, \end{equation} where we define \begin{equation} F(\rho,{\varepsilon})=f(e^{i{\varepsilon}}\rho e^{-i{\varepsilon}})-f(\rho) \end{equation} with ${\varepsilon}$ being a local hermitian operator. This condition has to be satisfied for every $\rho$ and every ${\varepsilon}$. By expanding up to first order in ${\varepsilon}$ we obtained condition \eqref{mixed1}, which is equivalent to \begin{equation} {\rm Tr}\left\{ \left.\frac{\partial F(\rho,{\varepsilon})}{\partial{\varepsilon}} \right|_{{\varepsilon}={0}}{\varepsilon}\right\} = 0. \end{equation} This is a linear form of the components of ${\varepsilon}$ and the requirement that it vanishes for every ${\varepsilon}$ implies that \begin{equation} \left.\frac{\partial F(\rho,{\varepsilon})}{\partial\varepsilon_{ij}} \right|_{{\varepsilon}={0}}=0. \end{equation} This has to be satisfied for every $\rho$. Consider the first derivative of $F(\rho,{\varepsilon})$ with respect to $\varepsilon_{ij}$, taken at an arbitrary point ${\varepsilon}_0$. We have \begin{equation} \left.\frac{\partial F(\rho,{\varepsilon})}{\partial\varepsilon_{ij}} \right|_{{\varepsilon}={\varepsilon}_0} = \left.\frac{\partial F(\rho,{\varepsilon}_0+{\varepsilon})}{\partial\varepsilon_{ij}} \right|_{{\varepsilon}={0}}. \end{equation} But from the form of $F(\rho,{\varepsilon})$ one can see that $F(\rho,{\varepsilon}_0+{\varepsilon})=F(\rho',{\varepsilon})$, where $\rho'=e^{i{\varepsilon}_0}\rho e^{-i{\varepsilon}_0}$. Therefore \begin{equation} \left.\frac{\partial F(\rho,{\varepsilon})}{\partial\varepsilon_{ij}} \right|_{{\varepsilon}={\varepsilon}_0} = \left.\frac{\partial F(\rho',{\varepsilon})}{\partial\varepsilon_{ij}} \right|_{{\varepsilon}={0}} =0, \end{equation} i.e., the first derivatives of $F(\rho,{\varepsilon})$ with respect to the components of ${\varepsilon}$ vanish identically. This means that $F(\rho,{\varepsilon})=F(\rho,{0})=0$ for every ${\varepsilon}$ and condition \eqref{mixed1} is sufficient. The condition for non-increase on average under local generalized measurements \eqref{nonincrease4} can be written as \begin{equation} G(\rho,{\varepsilon}) \leq 0, \label{nonincrease5} \end{equation} where \begin{equation} G(\rho,{\varepsilon}) =p_1 f({ M}_1\rho { M}_1/p_1) + p_2 f({ M}_2\rho { M}_2/p_2) - f(\rho). \end{equation} The operators ${ M}_1$ and ${ M}_2$ in terms of ${\varepsilon}$ are given by \eqref{mm}, and the probabilities $p_1$ and $p_2$ are defined as before. As we have argued in Section 3.3, it is sufficient that this condition is satisfied for infinitesimal ${\varepsilon}$. By expanding the condition up to second order in ${\varepsilon}$ we obtained condition \eqref{mixed3}, which is equivalent to \begin{equation} {\rm Tr}\left\{ \left.\frac{\partial^2G(\rho,{\varepsilon})}{\partial{\varepsilon}^{\otimes2}} \right|_{{\varepsilon}={0}}{\varepsilon}^{\otimes 2} \right\} \leq 0. 
\end{equation} Clearly, if this condition is satisfied by a strict inequality, it is sufficient, since corrections of higher order in ${\varepsilon}$ can be made arbitrarily smaller in magnitude by taking ${\varepsilon}$ small enough. Concerns about the contribution of higher-order corrections may arise only if the second-order correction to $G(\rho, {\varepsilon})$ vanishes in some open vicinity of $\rho$ and some open vicinity of ${\varepsilon}$ (we have assumed that the function $f(\rho)$ is continuous). But the second-order correction is a real quadratic form of the components of ${\varepsilon}$ and it can vanish in an open vicinity of ${\varepsilon}$, only if it vanishes for every ${\varepsilon}$, i.e., if \begin{equation} \left.\frac{\partial^2G(\rho,{\varepsilon})}{\partial\varepsilon_{ij}\partial\varepsilon_{kl}} \right|_{{\varepsilon}={0}}=0. \label{zero} \end{equation} We will now show that if \eqref{zero} is satisfied in an open vicinity of $\rho$, there exists an open vicinity of ${\varepsilon}={0}$ in which all second derivatives of $G(\rho,{\varepsilon})$ with respect to ${\varepsilon}$ vanish identically. This means that all higher-order corrections to $G(\rho,{\varepsilon})$ vanish in this vicinity and \eqref{nonincrease5} is satisfied with equality. Consider the two terms of $G(\rho,{\varepsilon})$ that depend on ${\varepsilon}$: \begin{equation} G_1(\rho,{\varepsilon})=p_1 f({ M}_1\rho { M}_1/p_1), \end{equation} \begin{equation} G_2(\rho,{\varepsilon})=p_2 f({ M}_2\rho { M}_2/p_2). \end{equation} They differ only by the sign of ${\varepsilon}$, i.e. $G_1(\rho,{\varepsilon}) =G_2(\rho,-{\varepsilon})$, and therefore \begin{equation} \left.\frac{\partial^2 G_1(\rho,{\varepsilon})}{\partial\varepsilon_{ij}\partial\varepsilon_{kl}}\right|_{{\varepsilon}={0}}=\left.\frac{\partial^2 G_2(\rho,{\varepsilon})}{\partial\varepsilon_{ij}\partial\varepsilon_{kl}}\right|_{{\varepsilon}={0}}=\frac{1}{2}\left.\frac{\partial^2 G(\rho,{\varepsilon})}{\partial\varepsilon_{ij}\partial\varepsilon_{kl}}\right|_{{\varepsilon}={0}}. \end{equation} If \eqref{zero} is satisfied in an open vicinity of $\rho$, we have \begin{equation} \left.\frac{\partial^2 G_1(\rho,{\varepsilon})}{\partial\varepsilon_{ij}\partial\varepsilon_{kl}}\right|_{{\varepsilon}={0}}=\left.\frac{\partial^2 G_2(\rho,{\varepsilon})}{\partial\varepsilon_{ij}\partial\varepsilon_{kl}}\right|_{{\varepsilon}={0}}=0 \label{cond} \end{equation} in this vicinity. Consider the second derivatives of $G(\rho,{\varepsilon})$ with respect to the components of ${\varepsilon}$, taken at a point ${\varepsilon}_0$: \begin{equation} \begin{split} \left.\frac{\partial^2G(\rho,{\varepsilon})}{\partial\varepsilon_{ij}\partial\varepsilon_{kl}} \right|_{{\varepsilon}={\varepsilon}_0} = \left.\frac{\partial^2G_1(\rho,{\varepsilon})}{\partial\varepsilon_{ij}\partial\varepsilon_{kl}} \right|_{{\varepsilon}={\varepsilon}_0} + \left.\frac{\partial^2G_2(\rho,{\varepsilon})}{\partial\varepsilon_{ij}\partial\varepsilon_{kl}} \right|_{{\varepsilon}={\varepsilon}_0} \\ = \left.\frac{\partial^2G_1(\rho,{\varepsilon}_0+{\varepsilon})}{\partial\varepsilon_{ij}\partial\varepsilon_{kl}} \right|_{{\varepsilon}={0}} + \left.\frac{\partial^2G_2(\rho,{\varepsilon}_0+{\varepsilon})}{\partial\varepsilon_{ij}\partial\varepsilon_{kl}} \right|_{{\varepsilon}={0}}. 
\end{split} \end{equation} From the expression for $G_1(\rho,{\varepsilon})$ one can see that ${\varepsilon}$ occurs in $G_1(\rho,{\varepsilon})$ only in the combination $\sqrt{\frac{{ I}-{\varepsilon}}{2}}\rho\sqrt{\frac{{ I}-{\varepsilon}}{2}}$. In $G_1(\rho,{\varepsilon}_0+{\varepsilon})$ it will appear only in $\sqrt{\frac{{ I}-{\varepsilon}_0-{\varepsilon}}{2}}\rho\sqrt{\frac{{ I}-{\varepsilon}_0-{\varepsilon}}{2}}$. But \begin{equation} \sqrt{\frac{{ I}-{\varepsilon}_0-{\varepsilon}}{2}}=\sqrt{\frac{{ I}-{\varepsilon}'}{2}}\sqrt{{ I}-{\varepsilon}_0}, \end{equation} where \begin{equation} {\varepsilon}'={\varepsilon}({ I}-{\varepsilon}_0)^{-1}. \end{equation} So we can write \begin{equation} \sqrt{\frac{{ I}-{\varepsilon}_0-{\varepsilon}}{2}}\rho\sqrt{\frac{{ I}-{\varepsilon}_0-{\varepsilon}}{2}} = p'\sqrt{\frac{{ I}-{\varepsilon}'}{2}}\rho'\sqrt{\frac{{ I}-{\varepsilon}'}{2}}, \end{equation} where \begin{equation} \rho' = \left( \sqrt{{ I}-{\varepsilon}_0}\rho\sqrt{{ I}-{\varepsilon}_0} \right)/p' \label{rhoprime} \end{equation} and \begin{equation} p'={\rm Tr}\left\{\sqrt{{ I}-{\varepsilon}_0}\rho\sqrt{{ I}-{\varepsilon}_0}\right\}. \end{equation} Then one can verify that \begin{equation} G_1(\rho,{\varepsilon}_0+{\varepsilon})=p'G_1(\rho',{\varepsilon}'). \end{equation} Similarly \begin{equation} G_2(\rho,{\varepsilon}_0+{\varepsilon})=p''G_2(\rho'',{\varepsilon}''), \end{equation} where \begin{equation} {\varepsilon}''={\varepsilon}({ I}+{\varepsilon}_0)^{-1}, \end{equation} \begin{equation} \rho'' = \left(\sqrt{{ I}+{\varepsilon}_0}\rho\sqrt{{ I}+{\varepsilon}_0} \right)/p'', \label{rhodoubleprime} \end{equation} \begin{equation} p''={\rm Tr}\left\{\sqrt{{ I}+{\varepsilon}_0}\rho\sqrt{{ I}+{\varepsilon}_0}\right\}. \end{equation} Note that $\partial\varepsilon'_{pq}/\partial\varepsilon_{ij}$ and $\partial\varepsilon''_{pq}/\partial\varepsilon_{ij}$ have no dependence on ${\varepsilon}$. Nor do $p'$ and $p''$. Therefore we obtain \begin{equation} \begin{split} \left.\frac{\partial^2 G(\rho,{\varepsilon})}{\partial\varepsilon_{ij}\partial\varepsilon_{kl}}\right|_{{\varepsilon}={\varepsilon}_0} =p'\left.\frac{\partial^2 G_1(\rho',{\varepsilon}')}{\partial\varepsilon_{ij}\partial\varepsilon_{kl}}\right|_{{\varepsilon}={0}}+ p''\left.\frac{\partial^2 G_2(\rho'',{\varepsilon}'')}{\partial\varepsilon_{ij}\partial\varepsilon_{kl}}\right|_{{\varepsilon}={0}} \\ = \sum_{p,q,r,s}\frac{\partial\varepsilon'_{pq}}{\partial\varepsilon_{ij}} \frac{\partial\varepsilon'_{rs}}{\partial\varepsilon_{kl}} p'\left.\frac{\partial^2 G_1(\rho',{\varepsilon}')}{\partial\varepsilon'_{pq}\partial\varepsilon'_{rs}}\right|_{{\varepsilon}'={0}} +\\ \sum_{p,q,r,s}\frac{\partial\varepsilon''_{pq}}{\partial\varepsilon_{ij}} \frac{\partial\varepsilon''_{rs}}{\partial\varepsilon_{kl}} p''\left.\frac{\partial^2 G_2(\rho'',{\varepsilon}'')}{\partial\varepsilon''_{pq}\partial\varepsilon''_{rs}}\right|_{{\varepsilon}''={0}}. \end{split} \end{equation} We assumed that \eqref{cond} is satisfied in an open vicinity of $\rho$. If $\rho'$ and $\rho''$ are within this vicinity, the above expression will vanish. But from \eqref{rhoprime} and \eqref{rhodoubleprime} we see that as $\|{\varepsilon}_0\|$ tends to zero, the quantities $\|\rho'-\rho\|$ and $\|\rho''-\rho\|$ also tend to zero. 
Therefore there exists an open vicinity of ${\varepsilon}_0={0}$, such that for every ${\varepsilon}_0$ in this vicinity, the corresponding $\rho'$ and $\rho''$ will be within the vicinity of $\rho$ for which \eqref{cond} is satisfied and \begin{equation} \left.\frac{\partial^2 G(\rho,{\varepsilon})}{\partial\varepsilon_{ij}\partial\varepsilon_{kl}}\right|_{{\varepsilon}={\varepsilon}_0}=0. \end{equation} This means that higher derivatives of $G(\rho,{\varepsilon})$ with respect to the components of ${\varepsilon}$ taken at points in this vicinity will vanish, in particular derivatives taken at ${\varepsilon}={0}$. So higher-order corrections in ${\varepsilon}$ to $G(\rho,{\varepsilon})$ will also vanish. Therefore $G(\rho,{\varepsilon})=0$ in the vicinity of $\rho$ for which we assumed that \eqref{mixed3} is satisfied with equality, which implies that condition \eqref{mixed3} is sufficient. \chapter*{Chapter 4: \hspace{1pt} Non-Markovian dynamics of a qubit coupled to a spin bath via the Ising interaction} \addcontentsline{toc}{chapter}{Chapter 4:\hspace{0.15cm} Non-Markovian dynamics of a qubit coupled to a spin bath via the Ising interaction} In this chapter, we turn our attention to the deterministic dynamics of open quantum systems. The chapter is based on a study made in collaboration with Hari Krovi, Mikhail Ryazanov and Daniel Lidar \cite{KORL07}. \section*{4.1 \hspace{2pt} Preliminaries} \addcontentsline{toc}{section}{4.1 \hspace{0.15cm} Preliminaries} As we pointed out in Chapter 1, a major conceptual as well as technical difficulty in the practical implementation of quantum information processing schemes is the unavoidable interaction of quantum systems with their environment. This interaction can destroy quantum superpositions and lead to an irreversible loss of information, a process known as decoherence. Understanding the dynamics of open quantum systems is therefore of considerable importance. The Schr\"{o}dinger equation, which describes the evolution of closed systems, is generally inapplicable to open systems, unless one includes the environment in the description. This is, however, generally difficult, due to the large number of environment degrees of freedom. An alternative is to develop a description for the evolution of only the subsystem of interest. A multitude of different approaches have been developed in this direction, exact as well as approximate \cite{Alicki:87,BrePet02}. Typically the exact approaches are of limited practical usefulness, as they are either phenomenological or involve complicated integro-differential equations. The various approximations have regions of validity that partially overlap. Such techniques have been studied for many different models, but their performance in general is not fully understood. In this work we consider an exactly solvable model of a single qubit coupled to an environment of qubits. We are motivated by the physical importance of such spin-bath models \cite{Prokofev:00} in the description of decoherence in solid-state quantum information processors, such as systems based on the nuclear spin of donors in semiconductors \cite{Kane:98,Vrijen:00}, or on the electron spin in quantum dots \cite{Loss:98}. Rather than trying to accurately model decoherence due to the spin bath in such systems (as in, e.g., Refs. \cite{sousa:115322,Witzel:06}), our goal in this work is to compare the performance of different master equations which have been proposed in the literature.
Because the model we consider is exactly solvable, we are able to accurately assess the performance of the approximation techniques that we study. In particular, we study the Born-Markov and Born master equations, and the perturbation expansions of the Nakajima-Zwanzig (NZ) \cite{Nak58,Zwa60} and the time-convolutionless (TCL) master equations \cite{Shibata77,ShiAri80} up to fourth order in the coupling constant. We also study the post-Markovian (PM) master equation proposed in \cite{ShabaniLidar:05}. The dynamics of the system qubit in the model we study is highly non-Markovian and hence we do not expect the traditional Markovian master equations commonly used, e.g., in quantum optics \cite{Car99} and nuclear magnetic resonance \cite{Slichter:book}, to be accurate. This is typical of spin baths, and was noted, e.g., by Breuer et al. \cite{BBP04}. As we will see in Chapter 5, the non-Markovian character of the dynamics can be used to our advantage in error-correction schemes, hence understanding these models is of special significance. The work by Breuer et al. (as well as by other authors in a number of subsequent publications \cite {Palumbo:06,Burgarth:06,Hamdouni:06,Yuan:07,Camalet:07,Jing:07}) is conceptually close to ours in that in both cases an analytically solvable spin-bath model is considered and the analytical solution for the open system dynamics is compared to approximations. However, there are also important differences, namely, in Ref. \cite{BBP04} a so-called spin-star system was studied, where the system spin has equal couplings to all the bath spins, and these are of the XY exchange-type. In contrast, in our model the system spin interacts via Ising couplings with the bath spins, and we allow for arbitrary coupling constants. As a result there are also important differences in the dynamics. For example, unlike the model in Ref. \cite {BBP04}, for our model we find that the odd order terms in the perturbation expansions of Nakajima-Zwanzig and time-convolutionless master equations are non-vanishing. This reflects the fact that there is a coupling between the $ x $ and $y$ components of the Bloch vector which is absent in \cite{BBP04}. In view of the non-Markovian behavior of our model, we also discuss the relation between a representation of the analytical solution of our model in terms of completely positive maps, and the Markovian limit obtained via a coarse-graining method introduced in \cite{Lidar:CP01}, and the performance of the post-Markovian master equation \cite{ShabaniLidar:05}. This chapter is organized as follows. In Section 4.2, we present the model, derive the exact solution and discuss its behavior in the limit of small times and large number of bath spins, and in the cases of discontinuous spectral density co-domain and alternating sign of the system-bath coupling constants. In Section 4.3, we consider second order approximation methods such as the Born-Markov and Born master equations, and a coarse-graining approach to the Markovian semigroup master equation. Then we derive solutions to higher order corrections obtained from the Nakajima-Zwanzig and time-convolutionless projection techniques as well as derive the optimal approximation achievable through the post-Markovian master equation. In Section 4.4, we compare these solutions for various parameter values in the model and plot the results. Finally in Section 4.5, we present our conclusions. 
\section*{4.2 \hspace{2pt} Exact dynamics} \addcontentsline{toc}{section}{4.2 \hspace{0.15cm} Exact dynamics} \subsection*{4.2.1 \hspace{2pt} The model} \addcontentsline{toc}{subsection}{4.2.1 \hspace{0.15cm} The model} We consider a single spin-$\frac{1}{2}$ system (i.e., a qubit with a two-dimensional Hilbert space $\mathcal{H}_{S}$) interacting with a bath of $ N$ spin-$\frac{1}{2}$ particles (described by an $N$-fold tensor product of two-dimensional Hilbert spaces denoted $\mathcal{H}_{B}$). The observables describing the spin of a spin-$\frac{1}{2}$ particle in each of the three spatial directions are described by the Pauli operators \begin{equation} \sigma^x=\begin{pmatrix} 0&1\\ 1&0 \end{pmatrix}, \sigma^y=\begin{pmatrix} 0&-i\\ i&0 \end{pmatrix}, \sigma^z=\begin{pmatrix} 1&0\\ 0&-1 \end{pmatrix}. \end{equation} We model the interaction between the system qubit and the bath by the Ising Hamiltonian \begin{equation} H_{I}^{\prime }=\alpha \sigma ^{z}\otimes \sum_{n=1}^{N}g_{n}\sigma _{n}^{z}, \label{eq:HI} \end{equation} where $g_{n}$ are dimensionless real-valued coupling constants in the interval $[-1,1]$ ($n$ labels the different qubits in the bath), and $\alpha >0$ is a parameter having the dimension of frequency (we work in units in which $\hbar =1$), which describes the coupling strength and will be used below in conjunction with time ($\alpha t$ ) for perturbation expansions. The system and bath Hamiltonians are \begin{equation} H_{S}=\frac{1}{2}\omega _{0}\sigma ^{z} \label{SystemHam} \end{equation} and \begin{equation} H_{B}=\sum_{n=1}^{N}\frac{1}{2}\Omega _{n}\sigma _{n}^{z}. \end{equation} For definiteness, we restrict the frequencies $\omega _{0}$ and $\Omega _{n}$ to the interval $[-1,1]$, in inverse time units. Even though the units of time can be arbitrary, by doing so we do not lose generality, since we will be working in the interaction picture where only the frequencies $\Omega _{n} $ appear in relation to the state of the bath [Eq.~(\ref{eq:rhoB0})]. Since the ratios of these frequencies and the temperature of the bath occur in the equations, only their values relative to the temperature are of interest. Therefore, henceforth we will omit the units of frequency and temperature and will treat these quantities as dimensionless. The interaction picture is defined as the transformation of any operator \begin{equation} A\mapsto A(t)=\exp (iH_{0}t)A\exp (-iH_{0}t), \end{equation} where $H_{0}=H_{S}+H_{B}$. The interaction Hamiltonian $H_{I}$ chosen here is invariant under this transformation since it commutes with $H_{0}$. [Note that in the next subsection, to simplify our calculations we redefine $H_{S}$ and $H_{I}^{\prime }$ (whence $H_{I}^{\prime }$ becomes $H_{I}$), but this does not alter the present analysis.] All the quantities discussed in the rest of this article are assumed to be in the interaction picture. The dynamics can be described using the superoperator notation for the Liouville operator \begin{equation} \mathcal{L}\rho (t)\equiv -i[H_{I}^{\prime },\rho (t)], \end{equation} where $\rho (t)$ is the density matrix for the total system in the Hilbert space $\mathcal{H}_{S}\otimes \mathcal{H}_{B}$. The dynamics is governed by the von Neumann equation \begin{equation} \frac{d}{dt}\rho (t)=\alpha \mathcal{L}\rho (t) \end{equation} and the formal solution of this equation can be written as follows: \begin{equation} \rho (t)=\exp (\alpha \mathcal{L}t)\rho (0). 
\label{VNeqnSoln} \end{equation} The state of the system is given by the reduced density operator \begin{equation} \rho _{S}(t)=\mathrm{Tr}_{B}\{\rho (t)\}, \end{equation} where $\mathrm{Tr}_{B}$ denotes a partial trace taken over the bath Hilbert space $\mathcal{H}_{B}$. This can also be written in terms of the Bloch sphere vector \begin{equation} \vec{v}(t)= \begin{pmatrix} v_{x}(t) \\ v_{y}(t) \\ v_{z}(t) \end{pmatrix} =\mathrm{Tr}\{\vec{\sigma}\rho _{S}(t)\}, \end{equation} where $\vec{\sigma}\equiv (\sigma ^{x},\sigma ^{y},\sigma ^{z})$ is the vector of Pauli matrices. In the basis of $\sigma ^{z}$ eigenstates this is equivalent to \begin{eqnarray} \rho _{S}(t) =\frac{1}{2}(I+\vec{v}\cdot \vec{\sigma}) =\frac{1}{2} \begin{pmatrix} 1+v_{z}(t) & v_{x}(t)-iv_{y}(t) \\ v_{x}(t)+iv_{y}(t) & 1-v_{z}(t) \end{pmatrix} . \label{I+vs} \end{eqnarray} We assume that the initial state is a product state, i.e., \begin{equation} \rho (0)=\rho _{S}(0)\otimes \rho _{B}, \end{equation} and that the bath is initially in the Gibbs thermal state at a temperature $T $ \begin{equation} \rho _{B}=\exp (-H_{B}/kT)/\mathrm{Tr}[\exp (-H_{B}/kT)], \label{eq:rhoB0} \end{equation} where $k$ is the Boltzmann constant. Since $\rho _{B}$ commutes with the interaction Hamiltonian $H_{I}$, the bath state is stationary throughout the dynamics:\ $\rho _{B}(t)=\rho _{B}$. Finally, the bath spectral density function is defined as usual as \begin{equation} J(\Omega )=\sum_{n}|g_{n}|^{2}\delta (\Omega -\Omega _{n}). \label{eq:J} \end{equation} \subsection*{4.2.2 \hspace{2pt} Exact solution for the evolution of the system qubit} \addcontentsline{toc}{subsection}{4.2.2 \hspace{0.15cm} Exact solution for the evolution of the system qubit} We first shift the system Hamiltonian in the following way: \begin{eqnarray} H_{S} \mapsto H_{S}+\theta I, \hspace{0.5cm} \theta \equiv \mathrm{Tr}\{\sum_{n}g_{n}\sigma _{n}^{z}\rho _{B}\}. \label{theta} \end{eqnarray} As a consequence the interaction Hamiltonian is modified from Eq. (\ref {eq:HI}) to \begin{equation} H_{I}^{\prime }\mapsto H_{I}=\alpha \sigma ^{z}\otimes B, \end{equation} where \begin{equation} B\equiv \sum_{n}g_{n}\sigma _{n}^{z}-\theta I_{B}. \label{Bcomp} \end{equation} This shift is performed because now $\mathrm{Tr}_{B}[H_{I},\rho (0)]=0$, or equivalently \begin{equation} \mathrm{Tr}_{B}\{B\rho _{B}\}=0. \end{equation} This property will simplify our calculations later when we consider approximation techniques in Section 4.3. Now, we derive the exact solution for the reduced density operator $\rho _{S}$ corresponding to the system. We do this in two different ways. The Kraus operator sum representation is a standard description of the dynamics of a system initially decoupled from its environment and it will also be helpful in studying the coarse-graining approach to the quantum semigroup master equation. The second method is computationally more effective and is helpful in obtaining analytical expressions for $N\gg 1$. \subsubsection*{4.2.2.1 \hspace{2pt} Exact Solution in the Kraus Representation} \addcontentsline{toc}{subsubsection}{4.2.2.1 \hspace{0.15cm} Exact Solution in the Kraus Representation} In the Kraus representation the system state at any given time can be written as \begin{equation} \rho _{S}(t)=\sum_{i,j}K_{ij}(t)\rho _{S}(0)K_{ij}(t)^{\dag }, \label{KrausForm} \end{equation} where the Kraus operators satisfy $\sum_{ij}K_{ij}(t)^{\dag }K_{ij}(t)=I_{S}$ \cite {Kraus83}. 
These operators can be expressed easily in the eigenbasis of the initial bath density operator as \begin{equation} K_{ij}(t)=\sqrt{\lambda _{i}}\langle j|\exp (-iH_{I}t)|i\rangle , \end{equation} where the bath density operator at the initial time is $\rho _{B}(0)=\sum_{i}\lambda _{i}|i\rangle \langle i|$. For the Gibbs thermal state chosen here, the eigenbasis is the $N$-fold tensor product of the $\sigma ^{z}$ basis. In this basis \begin{equation} \rho _{B}=\sum_{l}\frac{\exp (-\beta E_{l})}{Z}|l\rangle \langle l|, \end{equation} where $\beta =1/kT$. Here \begin{equation} E_{l}=\sum_{n=1}^{N}\frac{1}{2}\Omega _{n}(-1)^{l_{n}} \label{eq:El} \end{equation} is the energy of the eigenstate $|l\rangle $ (recall that $\hbar =1$), where $l=l_{1}l_{2}\dots l_{N}$ is the binary expansion of the integer $l$, and the partition function is $Z=\sum_{l}\exp (-\beta E_{l})$. Therefore, the Kraus operators become \begin{equation} K_{ij}(t)=\sqrt{\lambda _{i}}\exp (-it\alpha \tilde{E}_{i}\sigma ^{z})\delta _{ij}, \end{equation} where \begin{equation} \tilde{E}_{i}=\langle i|B|i\rangle =\sum_{n=1}^{N}g_{n}(-1)^{i_{n}}-\mathrm{Tr}\{\sum_{n}g_{n}\sigma _{n}^{z}\rho _{B}\}, \label{eq:Eitilde} \end{equation} and $\lambda _{i}=\exp (-\beta E_{i})/Z$. Substituting this expression for $K_{ij}$ into Eq. (\ref{KrausForm}) and writing the system state in the Bloch vector form given in Eq. (\ref{I+vs}), we obtain \begin{eqnarray} v_{x}(t) &=&v_{x}(0)C(t)-v_{y}(0)S(t), \notag \\ v_{y}(t) &=&v_{x}(0)S(t)+v_{y}(0)C(t), \label{ExactSoln} \\ v_{z}(t) &=&v_{z}(0), \notag \end{eqnarray} where \begin{eqnarray} C(t) &=&\sum_{i}\lambda _{i}\cos 2\alpha \tilde{E}_{i}t, \notag \\ S(t) &=&\sum_{i}\lambda _{i}\sin 2\alpha \tilde{E}_{i}t. \label{CSeqn} \end{eqnarray} Equations (\ref{ExactSoln}) are the exact solution for the system dynamics of the above spin bath model. We see that the evolution of the Bloch vector is a linear combination of rotations around the $z$ axis. This evolution reflects the symmetry of the interaction Hamiltonian, which is diagonal in the $z$ basis. By inverting Eqs. (\ref{ExactSoln}) for $v_{x}(0)$ or $v_{y}(0)$, we see that the Kraus map is irreversible when $C(t)^{2}+S(t)^{2}=0$. This will become important below, when we discuss the validity of the time-convolutionless approximation. \subsubsection*{4.2.2.2 \hspace{2pt} Alternative Exact Solution} \addcontentsline{toc}{subsubsection}{4.2.2.2 \hspace{0.15cm} Alternative Exact Solution} Another way to derive the exact solution, which is computationally more useful, is the following. Since all $\sigma _{n}^{z}$ commute, the initial bath density matrix factors and can be written as \begin{eqnarray} \rho _{B} =\bigotimes\limits_{n=1}^{N}\frac{\exp \left( -\frac{\Omega _{n}}{2kT}\sigma _{n}^{z}\right) }{\mathrm{Tr}\left[ \exp \left( -\frac{\Omega _{n}}{2kT}\sigma _{n}^{z}\right) \right] } =\bigotimes\limits_{n=1}^{N}\frac{1}{2}\left( I+\beta _{n}\sigma _{n}^{z}\right) \equiv \prod_{n=1}^{N}\rho _{n}, \label{eq_rho_B_inter} \end{eqnarray} where \begin{equation} \beta _{n}=\tanh \left( -\frac{\Omega _{n}}{2kT}\right) , \end{equation} and $-1\leq \beta _{n}\leq 1$. Using this, we obtain an expression for $\theta $ defined in Eq.
(\ref{theta}): \begin{eqnarray} \theta &=&\mathrm{Tr}\{\sum_{n=1}^{N}g_{n}\sigma _{n}^{z}\bigotimes\limits_{m=1}^{N}\frac{1}{2}(I+\beta _{m}\sigma _{m}^{z})\} \notag \\ &=&\sum_{n=1}^{N}g_{n}\mathrm{Tr}\{\frac{1}{2}(\sigma _{n}^{z}+\beta _{n}I)\}\prod\limits_{m\neq n}\mathrm{Tr}\{\frac{1}{2}(I+\beta _{m}\sigma _{m}^{z})\} \notag \\ &=&\sum_{n=1}^{N}g_{n}\beta _{n}. \label{eq:theta} \end{eqnarray} The evolution of the system density matrix in the interaction picture is \begin{equation} \rho _{S}(t)=\mathrm{Tr}_{B}\{e^{-iH_{I}t}\rho (0)e^{iH_{I}t}\}. \end{equation} In terms of the system density matrix elements in the computational basis $\{|0\rangle ,|1\rangle \}$ (which is an eigenbasis of $\sigma ^{z}$ in $H_{I}=\alpha \sigma ^{z}\otimes B$), we have \begin{eqnarray} \langle j|\rho _{S}(t)|k\rangle &=&\langle j|\mathrm{Tr}_{B}\{e^{-iH_{I}t} \rho _{S}(0)\bigotimes\limits_{m=1}^{N}\rho _{m}e^{iH_{I}t}\}|k\rangle \notag \\ &=&\mathrm{Tr}_{B}\{e^{-i\alpha \langle j|\sigma ^{z}|j\rangle Bt} \langle j|\rho _{S}(0)|k\rangle \bigotimes\limits_{m=1}^{N}\rho _{m}e^{+i\alpha \langle k|\sigma ^{z}|k\rangle Bt}\}. \notag \end{eqnarray} Let us substitute $\langle j|\sigma ^{z}|j\rangle =(-1)^{j}$ and rewrite \begin{eqnarray} e^{-i\alpha \langle j|\sigma ^{z}|j\rangle Bt} =e^{-i\alpha (-1)^{j}\left( \sum_{l=1}^{N}g_{l}\sigma _{l}^{z}-\theta I\right) t} =\bigotimes\limits_{l=1}^{N}e^{-i(-1)^{j}\alpha \left( g_{l}\sigma _{l}^{z}-\frac{\theta }{N}I\right) t}. \notag \end{eqnarray} Since all the matrices are diagonal, they commute and we can collect the terms qubit by qubit: \begin{eqnarray} \langle j|\rho _{S}(t)|k\rangle =\langle j|\rho _{S}(0)|k\rangle \mathrm{Tr}\{\bigotimes\limits_{n=1}^{N}e^{-i\left[ (-1)^{j}-(-1)^{k}\right] \alpha \left( g_{n}\sigma _{n}^{z}-\frac{\theta }{N} I\right) t}\rho _{n}\}. \notag \end{eqnarray} Let us denote $(-1)^{j}-(-1)^{k}=2\epsilon _{jk}$. The trace can be easily computed to be \begin{eqnarray*} &\prod_{n=1}^{N}&\mathrm{Tr}\{e^{-i2\epsilon _{jk}\alpha \left( g_{n}\sigma _{n}^{z}-\frac{\theta }{N}I\right) t}\tfrac{1}{2}(I+\beta _{n}\sigma _{n}^{z})\} \\ &=&\prod_{n=1}^{N}e^{i2\epsilon _{jk}\alpha \frac{\theta }{N}t}\left[ \cos (2\epsilon _{jk}\alpha g_{n}t)-i\beta _{n}\sin (2\epsilon _{jk}\alpha g_{n}t) \right] . \end{eqnarray*} Thus the final expression for the system density matrix elements is \begin{eqnarray} \langle j|\rho _{S}(t)|k\rangle &=&\langle j|\rho _{S}(0)|k\rangle e^{i2\epsilon _{jk}\alpha \theta t} \prod_{n=1}^{N}\left[ \cos (2\epsilon _{jk}\alpha g_{n}t)-i\beta _{n}\sin (2\epsilon _{jk}\alpha g_{n}t)\right] . \notag \end{eqnarray} Notice that $\epsilon _{00}=\epsilon _{11}=0$, hence the diagonal matrix elements do not depend on time, as found before: \begin{gather*} \langle 0|\rho _{S}(t)|0\rangle =\langle 0|\rho _{S}(0)|0\rangle , \\ \langle 1|\rho _{S}(t)|1\rangle =\langle 1|\rho _{S}(0)|1\rangle . \end{gather*} For the off-diagonal matrix elements $\epsilon _{01}=1$, $\epsilon _{10}=-1$, and the evolution is described by \begin{eqnarray} \langle 0|\rho _{S}(t)|1\rangle &=&\langle 0|\rho _{S}(0)|1\rangle f(t), \notag \\ \langle 1|\rho _{S}(t)|0\rangle &=&\langle 1|\rho _{S}(0)|0\rangle f^{\ast }(t), \label{ExactSoln2} \end{eqnarray} where \begin{equation} f(t)=e^{i2\alpha \theta t}\prod_{n=1}^{N}\left[ \cos (2\alpha g_{n}t)-i\beta _{n}\sin (2\alpha g_{n}t)\right] . \label{prod} \end{equation} In terms of the Bloch vector components, this can be written in the form of Eq.
(\ref{ExactSoln}), where \begin{eqnarray} C(t) &=&(f(t)+f^{\ast }(t))/2, \notag \\ S(t) &=&(f(t)-f^{\ast }(t))/2i. \label{eq:CS} \end{eqnarray} \subsection*{4.2.3 \hspace{2pt} Limiting cases} \addcontentsline{toc}{subsection}{4.2.3 \hspace{0.15cm} Limiting cases} \subsubsection*{4.2.3.1 \hspace{2pt} Short Times} \addcontentsline{toc}{subsubsection}{4.2.3.1 \hspace{0.15cm} Short Times} Consider the evolution for short times where $\alpha t\ll 1$. Then \begin{eqnarray} \lefteqn{\left\vert \prod_{n=1}^{N}\left[ \cos (2\alpha g_{n}t)\pm i\beta _{n}\sin (2\alpha g_{n}t)\right] \right\vert } \notag \\ &=&\prod_{n=1}^{N}\sqrt{1-(1-\beta _{n}^{2})\sin ^{2}(2\alpha g_{n}t)} \notag \\ &\approx &\prod_{n=1}^{N}[1-2(1-\beta _{n}^{2})(\alpha g_{n}t)^{2}] \notag \\ &\approx &1-2\left[ \alpha ^{2}\sum_{n=1}^{N}g_{n}^{2}(1-\beta _{n}^{2}) \right] t^{2} \notag \\ &\approx &\exp [-2(\alpha t)^{2}Q_{2}], \label{eq:f-approx} \end{eqnarray} where (see Appendix A at the end of this chapter) \begin{eqnarray} Q_{2} \equiv \mathrm{Tr}\{B^{2}\rho _{B}\}=\sum_{n=1}^{N}g_{n}^{2}(1-\beta _{n}^{2}) =\int_{-\infty }^{\infty }\frac{2J(\Omega )}{1+\cosh (\frac{\Omega }{kT})} \mathrm{d}\Omega . \label{Q_2} \end{eqnarray} Note that for the above approximation to be valid, we need $2(\alpha t)^{2}Q_{2}\ll 1$. The total phase of $f(t)$ in Eq. (\ref{prod}) is \begin{eqnarray} \phi \approx 2\theta \alpha t+\sum_{n=1}^{N}(-\beta _{n}2\alpha g_{n}t) =2\theta \alpha t-2\alpha \left( \sum_{n=1}^{N}g_{n}\beta _{n}\right) t=0, \end{eqnarray} where we have used Eq. (\ref{eq:theta}). Thus, the off-diagonal elements of the system density matrix become \begin{eqnarray} \rho _{S}^{01}(t) &\approx &\rho _{S}^{01}(0)e^{-2(\alpha t)^{2}Q_{2}}, \notag \\ \rho _{S}^{10}(t) &\approx &\rho _{S}^{10}(0)e^{-2(\alpha t)^{2}Q_{2}}. \end{eqnarray} Finally, the dynamics of the Bloch vector components are: \begin{eqnarray} v_{x,y}(t) &\approx &v_{x,y}(0)e^{-2(\alpha t)^{2}Q_{2}}, \notag \\ v_{z}(t) &=&v_{z}(0). \label{shorttimes} \end{eqnarray} This represents the well-known behavior \cite{NNP96} of the evolution of an open quantum system in the Zeno regime. In this regime coherence does not decay exponentially but is initially flat, as is the case here due to the vanishing time derivative of $\rho _{S}^{01}(t)$ at $t=0$. As we will see in Section 4.3, the dynamics in the Born approximation (which is also the second order time-convolutionless approximation) exactly matches the last result. \subsubsection*{4.2.3.2 \hspace{2pt} Large $N$} \addcontentsline{toc}{subsubsection}{4.2.3.2 \hspace{0.15cm} Large $N$} When $N\gg 1$ and the values of $g_{n}$ are random, the different factors in the product of Eq. (\ref{prod}) are smaller than $1$ in magnitude most of the time and have recurrences at different times. Therefore, we expect the function $f(t)$ to be close to zero in magnitude for most of the time and full recurrences, if they exist, to be extremely rare. When the $g_{n}$ are equal and so are the $\Omega _{n}$, partial recurrences occur periodically, independently of $N$. Full recurrences occur with a period which grows at least as fast as $N$. This can be argued from Eq. (\ref{ExactSoln}) by imposing the condition that the arguments of all the cosines and sines are simultaneously equal to an integer multiple of $2\pi $. When $J(\Omega )$ has a narrow high peak, e.g., one $g_{n}$ is much larger than the others, then the corresponding factors in the product in Eq. (\ref{prod}) oscillate faster than the rate at which the whole product decays.
This is effectively a modulation of the decay. \subsubsection*{4.2.3.3 \hspace{2pt} Discontinuous spectral density co-domain} \addcontentsline{toc}{subsubsection}{4.2.3.3 \hspace{0.15cm} Discontinuous spectral density co-domain} As can be seen from Eq.~(\ref{prod}), the coupling constants $g_{n}$ determine the oscillation periods of the product terms, while the temperature factors $\beta _{n}$ determine their modulation depths. If the codomain of the spectral density is not continuous, i.e., if the coupling constants can be split into non-overlapping intervals $G_{j}$, $j=1,...,J$, then Eq.~(\ref{prod}) can be represented in the following form: \begin{equation} f(t)=e^{i2\alpha \theta t}P_{1}(t)P_{2}(t)\dots P_{J}(t), \end{equation} where \begin{equation} P_{j}(t)=\prod_{g_{n}\in G_{j}}\big[\cos (2\alpha g_{n}t)-i\beta _{n}\sin (2\alpha g_{n}t)\big]. \end{equation} In this case, if the $G_{j}$ are separated by large enough gaps, the evolution rates of the different $P_{j}(t)$ can be significantly different. This is particularly noticeable if one $P_{j}(t)$ undergoes partial recurrences while another $P_{j^{\prime }}(t)$ slowly decays. For example, one can envision a situation with two intervals such that one term shows frequent partial recurrences that slowly decay with time, while the other term decays faster, but only at times larger than the recurrence time. The overall evolution then consists of a small number of fast partial recurrences. In an extreme case, when one $g_{n}$ is much larger than the others, this results in a persistent harmonic modulation of the decay, with a depth that depends on $\beta _{n}$, i.e., on temperature. \subsubsection*{4.2.3.4 \hspace{2pt} Alternating signs} \addcontentsline{toc}{subsubsection}{4.2.3.4 \hspace{0.15cm} Alternating signs} If the bath has the property that every bath qubit $m$ has a partner $-m$ with the same frequency $\Omega _{-m}=\Omega _{m}$, but opposite coupling constant $g_{-m}=-g_{m}$, the exact solution can be simplified. First, $\beta _{-m}=\beta _{m}$, and $\theta =0$. Next, Eq.~(\ref{prod}) becomes \begin{eqnarray} f(t) &=&\prod_{m=1}^{N/2}\big[\cos (2\alpha g_{m}t)-i\beta _{m}\sin (2\alpha g_{m}t)\big]\big[\cos (2\alpha g_{-m}t)-i\beta _{-m}\sin (2\alpha g_{-m}t)\big] \notag \\ &=&\prod_{m=1}^{N/2}\big[\cos ^{2}(2\alpha g_{m}t)+\beta _{m}^{2}\sin ^{2}(2\alpha g_{m}t)\big]. \end{eqnarray} This function is real; thus Eq.~(\ref{eq:CS}) becomes $C(t)=f(t)$, $S(t)=0$, so that $v_{x}(t)=v_{x}(0)f(t)$ and $v_{y}(t)=v_{y}(0)f(t)$. The exact solution is then symmetric under the interchange $v_{x}\leftrightarrow v_{y}$, a property shared by all the second order approximate solutions considered below, as well as the post-Markovian master equation. The limiting case Eq.~(\ref{eq:f-approx}) remains unchanged, and since $Q_{2}$ depends only on $g_{n}^{2}$, it and all second order approximations also remain unchanged. In the special case $|g_{m}|=g$, the exact solution exhibits full recurrences with period $T=\pi /(\alpha g)$. \section*{4.3 \hspace{2pt} Approximation methods} \addcontentsline{toc}{section}{4.3 \hspace{0.15cm} Approximation methods} In this section we discuss the performance of different approximation methods developed in the open quantum systems literature \cite {Alicki:87,BrePet02}. The corresponding master equations for the system density matrix can be derived explicitly, and since the model considered here is exactly solvable, we can compare the approximations to the exact dynamics.
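Since all these comparisons are made against the exact dynamics, it is worth noting how cheaply the exact solution can be evaluated. The following minimal sketch (our own illustration, assuming NumPy; the parameter values and the helper name \texttt{exact\_solution} are placeholders, not the code used to produce the figures) evaluates $f(t)$ of Eq.~(\ref{prod}) and, from it, $C(t)$, $S(t)$ and $v_x(t)$ via Eqs.~(\ref{eq:CS}) and (\ref{ExactSoln}):

\begin{verbatim}
import numpy as np

def exact_solution(t, g, Omega, alpha=0.1, kT=1.0,
                   vx0=1.0 / np.sqrt(2), vy0=1.0 / np.sqrt(2)):
    """Exact C(t), S(t) and v_x(t) of the spin-bath model,
    Eqs. (prod), (eq:CS) and (ExactSoln)."""
    beta_n = np.tanh(-Omega / (2.0 * kT))     # thermal factors beta_n
    theta = np.sum(g * beta_n)                # shift theta = sum_n g_n beta_n
    phases = 2.0 * alpha * np.outer(t, g)     # 2 alpha g_n t, shape (T, N)
    factors = np.cos(phases) - 1j * beta_n * np.sin(phases)
    f = np.exp(2j * alpha * theta * t) * np.prod(factors, axis=1)
    C, S = f.real, f.imag                     # Eq. (eq:CS)
    vx = vx0 * C - vy0 * S                    # Eq. (ExactSoln)
    return C, S, vx

# hypothetical bath: N = 100 spins, random couplings and frequencies in [-1, 1]
rng = np.random.default_rng(0)
N = 100
g = rng.uniform(-1.0, 1.0, N)
Omega = rng.uniform(-1.0, 1.0, N)
t = np.linspace(0.0, 200.0, 2000)
C, S, vx = exact_solution(t, g, Omega)
\end{verbatim}

Ensemble averages of the kind shown in Section 4.4 can be obtained by repeating this evaluation for independent draws of $g_{n}$ and $\Omega _{n}$ and averaging the resulting $v_{x}(t)$.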
We use the Bloch vector representation and since the $z$ component has no dynamics, a fact which is reflected in all the master equations, we omit it from our comparisons. \subsection*{4.3.1 \hspace{2pt} Born and Born-Markov approximations} \addcontentsline{toc}{subsection}{4.3.1 \hspace{0.15cm} Born and Born-Markov approximations} Both the Born and Born-Markov approximations are second order in the coupling strength $\alpha $. \subsubsection*{4.3.1.1 \hspace{2pt} Born approximation} \addcontentsline{toc}{subsubsection}{4.3.1.1 \hspace{0.15cm} Born approximation} The Born approximation is equivalent to a truncation of the Nakajima-Zwanzig projection operator method at the second order, which is discussed in detail in Section 4.3.2. The Born approximation is given by the following integro-differential master equation: \begin{equation} \dot{\rho}_{S}(t)=-\int_{0}^{t}\mathrm{Tr}_{B}\{[H_{I}(t),[H_{I}(s),\rho _{S}(s)\otimes \rho _{B}]]\}\text{d}s. \end{equation} Since in our case the interaction Hamiltonian is time-independent, the integral becomes easy to solve. We obtain \begin{equation} \dot{\rho}_{S}(t)=-2\alpha ^{2}Q_{2}\int_{0}^{t}(\rho _{S}(s)-\sigma ^{z}\rho _{S}(s)\sigma ^{z})\text{d}s, \end{equation} where $Q_{2}$ is the second order bath correlation function in Eq. (\ref{Q_2} ). Writing $\rho _{S}(t)$ in terms of Bloch vectors as $(I+\vec{v}\cdot \vec{ \sigma})/2$ [Eq. (\ref{I+vs})], we obtain the following integro-differential equations: \begin{eqnarray} \dot{v}_{x,y}(t) &=&-4\alpha ^{2}Q_{2}\int_{0}^{t}v_{x,y}(s)\text{d}s. \label{BornApprox} \end{eqnarray} These equations can be solved by taking the Laplace transform of the variables. The equations become \begin{equation} sV_{x,y}(s)-v_{x,y}(0)=-4\alpha ^{2}Q_{2}\frac{V_{x,y}(s)}{s}, \end{equation} where $V_{x,y}(s)$ is the Laplace transform of $v_{x,y}(t)$. This gives \begin{equation} V_{x,y}(s)=\frac{v_{x,y}(0)s}{s^{2}+4Q_{2}\alpha ^{2}}, \end{equation} which can be readily solved by taking the inverse Laplace transform. Doing so, we obtain the solution of the Born master equation for our model: \begin{eqnarray} v_{x,y}(t) &=&v_{x,y}(0)\cos (2\alpha \sqrt{Q_{2}}t). \label{Born} \end{eqnarray} Note that this solution is symmetric under the interchange $ v_{x}\leftrightarrow v_{y}$, but the exact dynamics in Eq. (\ref{ExactSoln}) does not have this symmetry. The exact dynamics respects the symmetry: $ v_{x}\rightarrow v_{y}$ and $v_{y}\rightarrow -v_{x}$, which is a symmetry of the Hamiltonian. This means that higher order corrections are required to break the symmetry $v_{x}\leftrightarrow v_{y}$ in order to approximate the exact solution more closely. One often makes the substitution $v_{x,y}(t)$ for $v_{x,y}(s)$ in Eq. (\ref {BornApprox}) since the integro-differential equation obtained in other models may not be as easily solvable. This approximation, which is valid for short times, yields \begin{eqnarray} \dot{v}_{x,y}(t) &=&-4\alpha ^{2}Q_{2}tv_{x,y}(t), \end{eqnarray} which gives \begin{eqnarray} v_{x,y}(t) &=&v_{x,y}(0)\exp (-2Q_{2}\alpha ^{2}t^{2}), \label{TCL2} \end{eqnarray} i.e., we recover Eq. (\ref{shorttimes}). This is the same solution obtained in the second order approximation using the time-convolutionless (TCL) projection method discussed in Section 4.3.2. 
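For reference, both second-order solutions reduce to one-line formulas once $Q_{2}$ is known; the following minimal sketch (our own illustration, assuming NumPy; parameter values are placeholders) computes $Q_{2}$ from Eq.~(\ref{Q_2}) and evaluates Eqs.~(\ref{Born}) and (\ref{TCL2}):

\begin{verbatim}
import numpy as np

def Q2(g, Omega, kT=1.0):
    """Second-order bath correlation function, Eq. (Q_2):
    Q_2 = sum_n g_n^2 (1 - beta_n^2)."""
    beta_n = np.tanh(-Omega / (2.0 * kT))
    return np.sum(g ** 2 * (1.0 - beta_n ** 2))

def born_vx(t, q2, alpha=0.1, vx0=1.0 / np.sqrt(2)):
    """Born / NZ2 solution, Eq. (Born): v_x(t) = v_x(0) cos(2 alpha sqrt(Q2) t)."""
    return vx0 * np.cos(2.0 * alpha * np.sqrt(q2) * t)

def tcl2_vx(t, q2, alpha=0.1, vx0=1.0 / np.sqrt(2)):
    """Short-time / TCL2 solution, Eq. (TCL2): Gaussian decay."""
    return vx0 * np.exp(-2.0 * q2 * alpha ** 2 * t ** 2)
\end{verbatim}

Comparing these two curves with the exact $v_x(t)$ makes the qualitative features discussed in Section 4.4 easy to reproduce: the Born (NZ2) solution oscillates indefinitely, while the TCL2 solution decays as a Gaussian.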
\subsubsection*{4.3.1.2 \hspace{2pt} Born-Markov approximation} \addcontentsline{toc}{subsubsection}{4.3.1.2 \hspace{0.15cm} Born-Markov approximation} In order to obtain the Born-Markov approximation, we use the following quantities \cite{BrePet02}[Ch.3]: \begin{eqnarray} R(\omega ) &=&\sum_{E_{2}-E_{1}=\omega }P_{E_{1}}\sigma ^{z}P_{E_{2}}, \notag \\ \Gamma (\omega ) &=&\alpha ^{2}\int_{0}^{\infty }e^{i\omega s}Q_{2}\text{d}s, \notag \\ H_{L} &=&\sum_{\omega }T(\omega )R(\omega )^{\dag }R(\omega ), \end{eqnarray} where $T(\omega )=(\Gamma (\omega )-\Gamma (\omega )^{\ast })/2i$, $E_{i}$ is an eigenvalue of the system Hamiltonian $H_{S}$, and $P_{E_{i}}$ is the projector onto the eigenspace corresponding to this eigenvalue. In our case $ H_{S}$ is diagonal in the eigenbasis of $\sigma ^{z}$, and only $\omega =0$ is relevant. This leads to $R(0)=\sigma ^{z}$ and $\Gamma (0)=\alpha ^{2}\int_{0}^{\infty }Q_{2}\text{d}t$. Since $\Gamma (0)$ is real, we have $ T(0)=0.$ Hence the Lamb shift Hamiltonian $H_{L}=0$, and the Lindblad form of the Born-Markov approximation is \begin{equation} \dot{\rho}_{S}(t)=\gamma (\sigma ^{z}\rho _{S}\sigma ^{z}-\rho _{S}), \label{Lindblad} \end{equation} where $\gamma =\Gamma (0)+\Gamma (0)^{\ast }=2\alpha ^{2}\int_{0}^{\infty }Q_{2}$d$t$. But note that $Q_{2}=\mathrm{Tr}_{B}\{B^{2}\rho _{B}\}$ does not depend on time. This means that $\Gamma $ and hence $\gamma $ are both infinite. Thus the Born-Markov approximation is not valid for this model and the main reason for this is the time independence of the bath correlation functions. The dynamics is inherently non-Markovian. A different approach to the derivation of a Markovian semigroup master equation was proposed in \cite{Lidar:CP01}. In this approach, a Lindblad equation is derived from the Kraus operator-sum representation by a coarse-graining procedure defined in terms of a phenomenological coarse-graining time scale $\tau $. The general form of the equation is: \begin{eqnarray} \frac{\partial \rho (t)}{\partial t} =-i[\langle \dot{Q}\rangle _{\tau },\rho (t)] +\frac{1}{2}\sum_{\alpha ,\beta =1}^{M}\langle \dot{\chi}_{\alpha ,\beta }\rangle _{\tau }([A_{\alpha },\rho (t)A_{\beta }^{\dagger }]+[A_{\alpha }\rho (t),A_{\beta }^{\dagger }]), \end{eqnarray} where the operators $A_{0}=I$ and $A_{\alpha },\alpha =1,...,M$ form an arbitrary fixed operator basis in which the Kraus operators (\ref{KrausForm} ) can be expanded as \begin{equation} K_{i}=\sum_{\alpha =0}^{M}b_{i\alpha }A_{\alpha }. \end{equation} The quantities $\chi _{\alpha ,\beta }(t)$ and $Q(t)$ are defined through \begin{equation} \chi _{\alpha ,\beta }(t)=\sum_{i}b_{i\alpha }(t)b_{i\beta }^{\ast }(t), \end{equation} \begin{equation} Q(t)=\frac{i}{2}\sum_{\alpha =1}^{M}(\chi _{\alpha 0}(t)K_{\alpha }-\chi _{0\alpha }(t)K_{\alpha }^{\dagger }), \end{equation} and \begin{equation} \langle X\rangle _{\tau }=\frac{1}{\tau }\int_{0}^{\tau }X(s)ds. \end{equation} For our problem we find \begin{equation} \frac{\partial \rho (t)}{\partial t}=-i\tilde{\omega}[\sigma _{Z},\rho (t)]+ \tilde{\gamma}(\sigma _{Z}\rho (t)\sigma _{Z}-\rho (t)), \label{coarsegrain} \end{equation} where \begin{equation} \tilde{\omega}=\frac{1}{2\tau }S(\tau ) \end{equation} and \begin{equation} \tilde{\gamma}=\frac{1}{2\tau }(1-C(\tau )) \end{equation} with $C(t)$ and $S(t)$ defined in Eq. (\ref{CSeqn}). 
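Given $C$ and $S$, the coarse-grained generator therefore follows from a single evaluation at $t=\tau$; a minimal sketch (our own illustration, assuming NumPy; parameters are placeholders) of the rates entering Eq.~(\ref{coarsegrain}):

\begin{verbatim}
import numpy as np

def coarse_grained_rates(tau, g, Omega, alpha=0.1, kT=1.0):
    """Rates of the coarse-grained equation (coarsegrain):
    omega_tilde = S(tau)/(2 tau), gamma_tilde = (1 - C(tau))/(2 tau)."""
    beta_n = np.tanh(-Omega / (2.0 * kT))
    theta = np.sum(g * beta_n)
    # f(tau) from Eq. (prod); C = Re f and S = Im f by Eq. (eq:CS)
    f = np.exp(2j * alpha * theta * tau) * np.prod(
        np.cos(2.0 * alpha * g * tau) - 1j * beta_n * np.sin(2.0 * alpha * g * tau))
    return f.imag / (2.0 * tau), (1.0 - f.real) / (2.0 * tau)
\end{verbatim}

The dependence of $\tilde{\omega}$ and $\tilde{\gamma}$ on the phenomenological time scale $\tau$ is what is optimized over in Section 4.4.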
In order for this approximation to be justified, it is required that the coarse-graining time scale $\tau $ be much larger than any characteristic time scale of the bath \cite{Lidar:CP01}. However, in our case the bath correlation time is infinite, which once again shows the inapplicability of the Markovian approximation. This is further supported by the performance of the optimal solution that one can achieve by varying $\tau $, which is discussed in Section 4.4. There we numerically examine the average trace-distance between the solution to Eq. (\ref{coarsegrain}) and the exact solution as a function of $\tau $. The average is taken over a time $T$, which is greater than the decay time of the exact solution. We find the optimal $\tau $, for which the average trace distance is minimal, and then compute the corresponding approximate solution. The solution of Eq. (\ref{coarsegrain}) for a particular $\tau $ in terms of the Bloch vector components is \begin{eqnarray} v_{x}(t) &=&v_{x}(0)\tilde{C}_{\tau }(t)+v_{y}(0)\tilde{S}_{\tau }(t), \notag \\ v_{y}(t) &=&v_{y}(0)\tilde{C}_{\tau }(t)-v_{x}(0)\tilde{S}_{\tau }(t), \end{eqnarray} where $\tilde{C}_{\tau }(t)=e^{-\tilde{\gamma}(\tau )t}\cos (\tilde{\omega}(\tau )t)$ and $\tilde{S}_{\tau }(t)=e^{-\tilde{\gamma}(\tau )t}\sin (\tilde{\omega}(\tau )t)$. The average trace distance as a function of $\tau $ is given by \begin{eqnarray} &\bar{D}&(\rho _{\mathrm{exact}},\rho _{\mathrm{CG}})\equiv \frac{1}{2} \mathrm{Tr}|\rho _{\mathrm{exact}}-\rho _{\mathrm{CG}}| \notag \\ &=&\frac{1}{2T}\sum_{t=0}^{T}\sqrt{(C(t)-\tilde{C}(t))^{2}+(S(t)-\tilde{S} (t))^{2}} \sqrt{v_{x}(0)^{2}+v_{y}(0)^{2}}, \end{eqnarray} where $\rho _{\mathrm{CG}}$ represents the coarse-grained solution and where $|X|=\sqrt{X^{\dag }X}$. The results are presented in Section 4.4. Next we consider the Nakajima-Zwanzig (NZ) and the time-convolutionless (TCL) master equations for higher order approximations. \subsection*{4.3.2 \hspace{2pt} NZ and TCL master equations} \addcontentsline{toc}{subsection}{4.3.2 \hspace{0.15cm} NZ and TCL master equations} Using projection operators one can obtain approximate non-Markovian master equations to higher orders in $\alpha t$. A projection is defined as follows: \begin{equation} \mathcal{P}\rho=\mathrm{Tr}_B\{\rho\}\otimes\rho_B , \end{equation} and serves to focus on the ``relevant dynamics'' (of the system) by removing the bath (a recent generalization is discussed in Ref. \cite{Breuer:07}). The choice of $\rho_B$ is somewhat arbitrary and can be taken to be $\rho_B(0)$, which significantly simplifies the calculations. Using the notation introduced in \cite{BBP04}, define \begin{equation} \left\langle \mathcal{S}\right\rangle\equiv \mathcal{P}\mathcal{S}\mathcal{P} \end{equation} for any superoperator $\mathcal{S}$. The quantities $\left\langle\mathcal{S}^n\right\rangle$ are then the moments of the superoperator. Note that for the Liouvillian superoperator, $\left\langle\mathcal{L}\right\rangle=0$ by virtue of the fact that $\mathrm{Tr}_B\{B\rho_B(0)\}=0$ (see \cite{BrePet02}). Since we assume that the initial state is a product state, both the NZ and TCL equations are homogeneous equations. The NZ master equation is an integro-differential equation with a memory kernel $\mathcal{N}(t,s)$ and is given by \begin{equation} \dot{\rho}_S(t)\otimes\rho_B=\int_0^t\mathcal{N}(t,s)\rho_S(s)\otimes\rho_B \text{d}s . \end{equation} The TCL master equation is a time-local equation given by \begin{equation} \dot{\rho}_S(t)\otimes\rho_B=\mathcal{K}(t)\rho_S(t)\otimes\rho_B .
\end{equation} When these equations are expanded in $\alpha t$ and solved we obtain the higher order corrections. When the interaction Hamiltonian is time independent (as in our case), the above equations simplify to \begin{equation} \label{NZ} \int_0^t\mathcal{N}(t,s)\rho_S(s)\otimes\rho_B \text{d}s=\sum_{n=1}^\infty \alpha^n \mathcal{I}_n(t,s)\left\langle\mathcal{L}^n \right\rangle_{pc}\rho_S(s) \end{equation} and \begin{equation} \label{TCL} \mathcal{K}(t)=\sum_{n=1}^\infty \alpha^n \frac{t^{n-1}}{(n-1)!}\left\langle \mathcal{L}^n\right\rangle_{oc} \end{equation} for the NZ and TCL equations, respectively, where the time-ordered integral operator $\mathcal{I}_n(t,s)$ is defined as \begin{equation} \mathcal{I}_n(t,s)\equiv \int_0^t\text{d}t_1\int_0^{t_1}\text{d} t_2\cdots\int_0^{t_{n-2}}\text{d}s . \end{equation} The definitions of the partial cumulants $\left\langle\mathcal{L}\right\rangle_{pc}$ and the ordered cumulants $\left\langle\mathcal{L}\right\rangle_{oc}$ are given in Refs. \cite{ShiAri80,Royer:72,Kam74}. For our model we have \begin{equation} \left\langle\mathcal{L}\right\rangle_{pc}=\left\langle\mathcal{L}\right\rangle_{oc}=0 , \end{equation} and \begin{eqnarray} \left\langle\mathcal{L}^2\right\rangle_{pc}=\left\langle\mathcal{L}^2\right\rangle \notag \\ \left\langle\mathcal{L}^2\right\rangle_{oc}=\left\langle\mathcal{L}^2\right\rangle \notag \\ \left\langle\mathcal{L}^3\right\rangle_{pc}=\left\langle\mathcal{L}^3\right\rangle \notag \\ \left\langle\mathcal{L}^3\right\rangle_{oc}=\left\langle\mathcal{L}^3\right\rangle \notag \\ \left\langle\mathcal{L}^4\right\rangle_{pc}=\left\langle\mathcal{L}^4\right\rangle - \left\langle \mathcal{L}^2\right\rangle^2 \notag \\ \left\langle\mathcal{L}^4\right\rangle_{oc}=\left\langle\mathcal{L}^4\right\rangle - 3\left\langle\mathcal{L}^2\right\rangle^2 . \label{eq:cumulants} \end{eqnarray} Explicit expressions for these quantities are given in Appendix \ref{app:B} at the end of this chapter. Substituting these into the NZ and TCL equations (\ref{NZ}) and (\ref{TCL}), we obtain what we refer to below as the NZ$n$ and TCL$n$ master equations, with $n=2,3,4$. These approximate master equations are, respectively, second, third and fourth order in the coupling constant $\alpha$, and they can be solved analytically. The second order solution of the NZ equation (NZ2) is exactly the Born approximation and the solution is given in Eq. ( \ref{Born}). The third order NZ master equation is given by \begin{eqnarray} \dot{\rho}_S(t)&=&-2\alpha^2Q_2 \mathcal{I}_2(t,s) (\rho_S(s)-\sigma^z\rho_S(s)\sigma^z) \notag \\ &+& i4\alpha^3Q_3\mathcal{I}_3(t,s)(\sigma^z\rho_S(s)-\rho_S(s)\sigma^z) , \end{eqnarray} and the fourth order is \begin{eqnarray} \dot{\rho}_S(t)&=&-2\alpha^2Q_2 \mathcal{I}_2(t,s) (\rho_S(s)-\sigma^z\rho_S(s)\sigma^z) \notag \\ &+& i4\alpha^3Q_3\mathcal{I}_3(t,s)(\sigma^z\rho_S(s)-\rho_S(s)\sigma^z) \notag \\ &+& 8\alpha^4(Q_4-Q_2^2)\mathcal{I}_4(t,s)(\rho_S(s)-\sigma^z\rho_S(s) \sigma^z). \notag \\ \end{eqnarray} These equations are equivalent to, respectively, 6th and 8th order differential equations (with constant coefficients) and are difficult to solve analytically. The results we present in the next section were therefore obtained numerically. The situation is simpler in the TCL approach. 
The second order TCL equation is given by \begin{eqnarray} \dot{\rho}_S(t)&=&-\alpha^2t\mathrm{Tr}_B\{[H_I,[H_I,\rho_S(t)\otimes \rho_B(0)]]\} \notag \\ &=& -2\alpha^2tQ_2(\rho_S(t)-\sigma^z\rho_S(t)\sigma^z) , \end{eqnarray} whose solution is given in Eq. (\ref{TCL2}) in terms of Bloch vector components. For TCL3 we find \begin{eqnarray} \dot{\rho}_S(t)=-2\alpha^2 tQ_2(\rho_S(t)-\sigma_z\rho_S(t)\sigma_z) + 4iQ_3\alpha^3\frac{t^2}{2}(\sigma_z\rho_S(t)-\rho_S(t)\sigma_z), \end{eqnarray} and for TCL4 we find \begin{eqnarray} \dot{\rho}_S(t) &=& [-2\alpha^2 tQ_2 + (8Q_4-24Q_2^2)\alpha^4\frac{t^3}{6}] (\rho_S(t)-\sigma_z\rho_S(t)\sigma_z) \notag \\ &+& 4iQ_3\alpha^3\frac{t^2}{2}(\sigma_z\rho_S(t)-\rho_S(t)\sigma_z) . \end{eqnarray} These equations can be solved analytically, and the solutions to the third and fourth order TCL equations are given by \begin{eqnarray} \label{TCL3} v_x(t)&=&f_n(\alpha t)\left [v_x(0)\cos(g(t))+v_y(0)\sin(g(t))\right ], \notag \\ v_y(t)&=&f_n(\alpha t)\left [v_y(0)\cos(g(t))-v_x(0)\sin(g(t))\right ] , \end{eqnarray} where $g(t)=4Q_3\alpha^3t^3/3$, $f_3(\alpha t)=\exp(-2Q_2\alpha^2t^2)$ (TCL3) and $f_4(\alpha t)=\exp(-2Q_2\alpha^2t^2+(2Q_4-6Q_2^2)\alpha^4t^4/3)$ (TCL4). It is interesting to note that the second order expansions of the TCL and NZ master equations exhibit a $v_x\leftrightarrow v_y$ symmetry between the components of the Bloch vector, and only the third order correction breaks this symmetry. Notice that the coefficient of $\alpha^3$ does not vanish in this model, unlike in the one considered in \cite{BBP04}, because both $\left\langle\mathcal{L}^3\right\rangle_{pc}\neq 0$ and $\left\langle \mathcal{L}^3\right\rangle_{oc}\neq 0$; hence the third order (and other odd order) corrections exist. \subsection*{4.3.3 \hspace{2pt} Post-Markovian (PM) master equation} \addcontentsline{toc}{subsection}{4.3.3 \hspace{0.15cm} Post-Markovian (PM) master equation} In this subsection we study the performance of the post-Markovian master equation recently proposed in \cite{ShabaniLidar:05}: \begin{equation} \frac{\partial \rho (t)}{\partial t}=\mathcal{D}\int_{0}^{t}dt^{\prime }k(t^{\prime })\exp (\mathcal{D}t^{\prime })\rho (t-t^{\prime })\mathrm{.} \label{PMAL} \end{equation} This equation was constructed via an interpolation between the exact dynamics and the dynamics in the Markovian limit. The operator $\mathcal{D}$ is the dissipator in the Lindblad equation (\ref{Lindblad}), and $k(t)$ is a phenomenological memory kernel which must be found by fitting to data or guessed on physical grounds. As was discussed earlier, the Markovian approximation fails for our model; nevertheless, one can use the form of the dissipator we obtained in Eq. (\ref{Lindblad}): \begin{equation} \mathcal{D}\rho =\sigma ^{z}\rho \sigma ^{z}-\rho . \label{dissipator} \end{equation} It is interesting to examine to what extent Eq. (\ref{PMAL}) can approximate the exact dynamics. As a measure of the performance of the post-Markovian equation, we will take the trace-distance between the exact solution $\rho _{\mathrm{exact}}(t)$ and the solution to the post-Markovian equation $\rho _{1}(t)$. The general solution of Eq. (\ref{PMAL}) can be found by expressing $\rho (t)$ in the damping basis \cite{Briegel:93} and applying a Laplace transform \cite{ShabaniLidar:05}.
The solution is \begin{equation} \rho (t)=\sum_{i}\mu _{i}(t)R_{i}=\sum_{i}{\text{Tr}}(L_{i}\rho (t))R_{i}, \end{equation} where \begin{equation} \mu _{i}(t)=\mathrm{Lap}^{-1}\left[ \frac{1}{s-\lambda _{i}\tilde{k} (s-\lambda _{i})}\right] \mu _{i}(0)\equiv \xi _{i}(t)\mu _{i}(0), \end{equation} ($\mathrm{Lap}^{-1}$ is the inverse Laplace transform) with $\tilde{k}$ being the Laplace transform of the kernel $k$, $\{L_{i}\}$ and $\{R_{i}\}$ being the left and right eigenvectors of the superoperator $\mathcal{D}$, and $\lambda _{i}$ the corresponding eigenvalues. For our dissipator the damping basis is $\{L_{i}\}=\{R_{i}\}=\{\frac{I}{\sqrt{2}},\frac{\sigma ^{x} }{\sqrt{2}},\frac{\sigma ^{y}}{\sqrt{2}},\frac{\sigma ^{z}}{\sqrt{2}}\}$ and the eigenvalues are $\{0,-2,-2,0\}$. Therefore, we can immediately write the formal solution in terms of the Bloch vector components: \begin{gather} v_{x,y}(t)={\text{Lap}}^{-1}\left[ \frac{1}{s+2\tilde{k}(s+2)}\right] v_{x,y}(0)\equiv \xi (t)v_{x,y}(0). \label{eq:vy-post} \end{gather} We see that $v_{x}(t)$ has no dependence on $v_{y}(0)$, and neither does $ v_{y}(t)$ on $v_{x}(0)$, in contrast to the exact solution. The difference comes from the fact that the dissipator $\mathcal{D}$ does not couple $ v_{x}(t)$ and $v_{y}(t)$. This reveals an inherent limitation of the post-Markovian master equation:\ it inherits the symmetries of the Markovian dissipator $\mathcal{D}$, which may differ from those of the generator of the exact dynamics. In order to rigorously determine the optimal performance, we use the trace distance between the exact solution and a solution to the post-Markovian equation: \begin{eqnarray} D(\rho _{\mathrm{exact}}(t),\rho _{1}(t)) =\frac{1}{2}\sqrt{(C(t)-\xi (t))^{2}+S(t)^{2}} \sqrt{v_{x}(0)^{2}+v_{y}(0)^{2}}. \end{eqnarray} Obviously this quantity reaches its minimum for $\xi (t)=C(t),\forall t$ independently of the initial conditions. The kernel for which the optimal performance of the post-Markovian master equation is achieved, can thus be formally expressed, using Eq. (\ref{eq:vy-post}), as: \begin{equation} k_{\mathrm{opt}}(t)=\frac{1}{2}e^{2t}\mathrm{Lap}^{-1}\left\{ \frac{1}{ \mathrm{Lap}(C(t))}-s\right\} . \label{kopt} \end{equation} It should be noted that the condition for complete positivity of the map generated by Eq. (\ref{PMAL}), $\sum_{i}\xi _{i}(t)L_{i}^{T}\otimes R_{i}\geq 0$ \cite{ShabaniLidar:05}, amounts here to $|\xi (t)|=|C(t)|\leq 1$ , which holds for all $t$. Thus the minimum achievable trace-distance between the two solutions is given by \begin{equation} D_{\mathrm{min}}(\rho _{\mathrm{exact}}(t),\rho _{1}(t))=\frac{1}{2}S(t) \sqrt{v_{x}(0)^{2}+v_{y}(0)^{2}}. \end{equation} The optimal fit is plotted in Section 4.4. Finding a simple analytical expression for the optimal kernel Eq. (\ref{kopt} ) seems difficult due to the complicated form of $C(t)$. One way to approach this problem is to expand $C(t)$ in powers of $\alpha t$ and consider terms which give a valid approximation for small times $\alpha t\ll 1$. For example, Eq. (\ref{eq:f-approx}) yields the lowest non-trivial order as: \begin{equation} C_{2}(t)=1-2Q_{2}\alpha ^{2}t^{2}+\mathcal{O}(\alpha ^{4}t^{4}). \end{equation} Note that this solution violates the complete positivity condition for times larger than $t=1/\alpha \sqrt{2Q_{2}}$. The corresponding kernel is: \begin{equation} k_{2}(t)=2\alpha ^{2}Q_{2}e^{2t}\cosh (2\sqrt{Q_{2}}\alpha t). 
\end{equation} Alternatively we could try finding a kernel that matches some of the approximate solutions discussed so far. For example, it turns out that the kernel \begin{equation} k_{\mathrm{NZ2}}(t)=2\alpha ^{2}Q_{2}e^{2t} \end{equation} leads to an exact match of the NZ2 solution. Finding a kernel which gives a good description of the evolution of an open system is an important but, in general, difficult question which remains open for further investigation. We note that this question was also taken up in the context of the PM\ in the recent study \cite{ManPet06}, where the PM\ was applied to an exactly solvable model describing a qubit undergoing spontaneous emission and stimulated absorption. No attempt was made there to optimize the memory kernel, and hence the agreement with the exact solution was not as impressive as might be possible with optimization. \section*{4.4 \hspace{2pt} Comparison of the analytical solution and the different approximation techniques} \addcontentsline{toc}{section}{4.4 \hspace{0.15cm} Comparison of the analytical solution and the different approximation techniques} In the results shown below, all figures express the evolution in terms of the dimensionless parameter $\alpha t$ (plotted on a logarithmic scale). We choose the initial condition $v_{x}(0)=v_{y}(0)=1/\sqrt{2}$ and plot only $v_{x}(t)$ since the structure of the equations for $v_{x}(t)$ and $v_{y}(t)$ is similar. In order to compare the different methods of approximation, we consider various choices of parameter values in our model, including both low and high temperature cases. We note that in a spin bath model it is assumed that the environment degrees of freedom are localized, and this is usually the case at low temperatures. At higher temperatures one may need to consider delocalized environment degrees of freedom in order to account for environment modes such as phonons, magnons, etc. A class of models known as oscillator bath models (e.g., Ref.~\cite{Weiss:Book}) accounts for such effects. In this study, we restrict attention to the spin bath model described here for both low and high temperatures. \subsection*{4.4.1 \hspace{2pt} Exact solution} \addcontentsline{toc}{subsection}{4.4.1 \hspace{0.15cm} Exact solution} We first assume that the frequencies of the qubits in the bath are equal ($\Omega _{n}=1$, $\forall n$), and so are the coupling constants ($g_{n}=1$, $\forall n$). In this regime, we consider large and small numbers of bath spins $N=100$ and $N=4$, and two different temperatures $\beta =1$ and $\beta =10$. Figs.~\ref{N100_exact} and \ref{N4_exact} show the exact solution for $N=100$ and $N=4$ spins, respectively, up to the second recurrence time. For each $N$, we plot the exact solution for $\beta =1$ and $\beta =10$. We also consider the case where the frequencies $\Omega _{n}$ and the coupling constants $g_{n}$ can take different values. We generated uniformly distributed random values in the interval $[-1,1]$ for both $\Omega _{n}$ and $g_{n}$. In Figs.~\ref{N100_exact_random_long} and \ref{N4_exact_random} we plot the ensemble average of the solution over 50 random ensembles. The main difference from the solution with equal $\Omega _{n}$ and $g_{n}$ is that the partial recurrences decrease in size, especially as $N$ increases. We attribute this damping partially to the fact that we look at the ensemble average, which amounts to averaging out the positive and negative oscillations that arise for different values of the parameters.
The main reason, however, is that for a generic ensemble of random $\Omega _{n}$ and $g_{n}$ the positive and negative oscillations in the sums \eqref{CSeqn} tend to average out. This is particularly true for large $N$, as reflected in Fig.~\ref{N100_exact_random_long}. We looked at a few individual random cases for $N=100$ and found no recurrences there. For $N=20$ (not shown here), some small recurrences were still visible. We also looked at the case where one of the coupling constants, say $g_i$, has a much larger magnitude than the other ones (which were made equal). The behavior was similar to that for a bath consisting of only a single spin. \begin{figure} \caption{Comparison of the exact solution at $\protect\beta =1 $ and $\protect\beta =10$ for $N=100$.} \label{N100_exact} \end{figure} \begin{figure} \caption{Comparison of the exact solution at $\protect\beta =1 $ and $\protect\beta =10$ for $N=4$.} \label{N4_exact} \end{figure} \begin{figure} \caption{Comparison of the exact solution at $\protect\beta =1 $ and $\protect\beta =10$ for $N=100$ for randomly generated $g_{n}$ and $\Omega _{n}$.} \label{N100_exact_random_long} \end{figure} \begin{figure} \caption{Comparison of the exact solution at $\protect\beta =1 $ and $\protect\beta =10$ for $N=4$ for randomly generated $g_{n}$ and $\Omega _{n}$.} \label{N4_exact_random} \end{figure} In the following, we plot the solutions of different orders of the NZ, TCL and PM master equations and compare them for the same parameter values. \subsection*{4.4.2 \hspace{2pt} NZ} \addcontentsline{toc}{subsection}{4.4.2 \hspace{0.15cm} NZ} In this subsection, we compare the solutions of different orders of the NZ master equation for $\Omega _{n}=g_{n}=1$. Fig.~\ref{N100_NZ} shows the solutions to NZ2, NZ3, NZ4 and the exact solution for $\beta =1$ and $\beta =10$ up to the first recurrence time of the exact solution. For short times NZ4 is the best approximation. It can be seen that while NZ2 and NZ3 are bounded, NZ4 leaves the Bloch sphere. But note that the approximations under which these solutions have been obtained are valid for $\alpha t\ll 1$. The NZ4 solution leaves the Bloch sphere in a regime where the approximation is not valid. For $\beta =10$, NZ2 again has a periodic behavior (which is consistent with its closed-form solution (\ref{Born})), while the NZ3 and NZ4 solutions leave the Bloch sphere after small times. Fig.~\ref{N4_NZ} shows the same graphs for $N=4$. In this case both NZ3 and NZ4 leave the Bloch sphere for $\beta =1$ and $\beta =10$, while NZ2 has a periodic behavior. A clear conclusion from these plots is that the NZ\ approximation is truly a short-time one:\ it becomes completely unreliable once $\alpha t$ is no longer small. \begin{figure} \caption{Comparison of the exact solution, NZ2, NZ3 and NZ4 at $\protect\beta =1$ and $\protect\beta =10$ for $N=100$. The exact solution is the solid (blue) line, NZ2 is the dashed (green) line, NZ3 is the dot-dashed (red) line and NZ4 is the dotted (cyan) line.} \label{N100_NZ} \end{figure} \begin{figure} \caption{Comparison of the exact solution, NZ2, NZ3 and NZ4 at $\protect\beta =1$ and $\protect\beta =10$ for $N=4$.
The exact solution is the solid (blue) line, NZ2 is the dashed (green) line, NZ3 is the dot-dashed (red) line and NZ4 is the dotted (cyan) line.} \label{N4_NZ} \end{figure} \subsection*{4.4.3 \hspace{2pt} TCL} \addcontentsline{toc}{subsection}{4.4.3 \hspace{0.15cm} TCL} Fig.~(\ref{N100_TCL}) plots the exact solution, TCL2, TCL3 and TCL4 at $ \beta =1$ and $\beta =10$ for $N=100$ spins and $\Omega _{n}=g_{n}=1$. It can be seen that for $\beta =1$, the TCL solution approximates the exact solution well even for long times. However, the TCL solution cannot reproduce the recurrence behavior of the exact solution (also shown in the figure.) Fig.~(\ref{N4_TCL}) shows the same graphs for $N=4$. In this case, while TCL2 and TCL3 decay, TCL4 increases exponentially and leaves the Bloch sphere after a short time. This is because the exponent in the solution of TCL4 in Eq.~(\ref{TCL3}) is positive. Here again the approximations under which the solutions have been obtained are valid only for small time scales and the graphs demonstrate the complete breakdown of the perturbation expansion for large values of $\alpha t$. Moreover, the graphs reveal the sensitivity of the approximation to temperature:\ the TCL\ fares much better at high temperatures. In order to determine the validity of the TCL approximation, we look at the invertibility of the Kraus map derived in Eq. (\ref{KrausForm}) or equivalently Eq. (\ref{CSeqn}). As mentioned earlier, this map is non-invertible if $C(t)^{2}+S(t)^{2}=0$ for some $t$ (or equivalently $ v_{x}(t)=0$ and $v_{y}(t)=0$). This will happen if and only if at least one of the $\beta _{n}$ is zero. This can occur when the bath density matrices of some of the bath spins are maximally mixed or in the limit of a very high bath temperature. Clearly, when the Kraus map is non-invertible, the TCL approach becomes invalid since it relies on the assumption that the information about the initial state is contained in the current state. This fact has also been observed for the spin-boson model with a damped Jaynes-Cummings Hamiltonian \cite{BrePet02}. At the point where the Kraus map becomes non-invertible, the TCL solution deviates from the exact solution (see Fig.~ \ref{N4B0TCL2}). We verified that both $v_{x}$ and $v_{y} $ vanish at this point. \begin{figure} \caption{Comparison of the exact solution, TCL2, TCL3 and TCL4 at $\protect\beta =1$ and $\protect\beta =10$ for $N=100$. The exact solution is the solid (blue) line, TCL2 is the dashed (green) line, TCL3 is the dot-dashed (red) line and TCL4 is the dotted (cyan) line. Note that for $ \protect\beta =1$, the curves nearly coincide.} \label{N100_TCL} \end{figure} \begin{figure} \caption{Comparison of the exact solution, TCL2, TCL3 and TCL4 at $\protect\beta =1$ and $\protect\beta =10$ for $N=4$. The exact solution is the solid (blue) line, TCL2 is the dashed (green) line, TCL3 is the dot-dashed (red) line and TCL4 is the dotted (cyan) line. Note that for $ \protect\beta =1$, TCL3, TCL4 and the exact solution nearly coincide.} \label{N4_TCL} \end{figure} \begin{figure} \caption{Comparison of TCL2 and the exact solution to demonstrate the validity of the TCL approximation for $N=4$ and $\protect \beta =1$. The solid (blue) line denotes the exact solution and the dashed (green) line is TCL2. Note that the time axis here is on a linear scale. 
TCL2 breaks down at $\protect\alpha t\approx 0.9$, where it remains flat, while the exact solution has a recurrence.} \label{N4B0TCL2} \end{figure} \subsection*{4.4.4 \hspace{2pt} NZ, TCL, and PM} \addcontentsline{toc}{subsection}{4.4.4 \hspace{0.15cm} NZ, TCL, and PM} In this subsection, we compare the exact solution to TCL4, NZ4 and the solution of the optimal PM master equation. Fig.~(\ref{N100_best}) shows these solutions for $N=100$ and $\beta =1$ and $\beta =10$ when $\Omega _{n}=g_{n}=1$. Here we observe that while the short-time behavior of the exact solution is approximated well by all the approximations we consider, the long-time behavior is approximated well only by PM. For $\beta =1$, NZ4 leaves the Bloch sphere after a short time while TCL4 decays with the exact solution. But as before, the TCL solution cannot reproduce the recurrences seen in the exact solution. The optimal PM solution, by contrast, is capable of reproducing both the decay and the recurrences. TCL4 and NZ4 leave the Bloch sphere after a short time for $ \beta =10$, while PM again reproduces the recurrences in the exact solution. Fig.~\ref{N4_best} shows the corresponding graphs for $N=4$ and it can be seen that again PM can outperform both TCL and NZ for long times. Figs.~\ref {N100_beta} and \ref{N4_beta} show the performance of TCL4, NZ4 and PM compared to the exact solution at a fixed time (for which the approximations are valid) for different temperatures ($\beta \in \lbrack 0.01,10]$). It can be seen that both TCL4 and the optimal PM solution perform better than NZ4 at medium and high temperatures, with TCL4 outperforming PM at medium temperatures. The performance of NZ4 is enhanced at low temperatures, where it performs similarly to TCL4 (see also Figs.~\ref{N100_best} and \ref {N4_best}). This can be understood from the short-time approximation to the exact solution given in Eq. (\ref{shorttimes}), which up to the precision for which it was derived is also an approximation of NZ2 [Eq. (\ref{Born})]. As discussed above, this approximation (which also coincides with TCL2) is valid when $2Q_{2}(\alpha t)^{2}\ll 1$. As temperature decreases, so does the magnitude of $Q_{2}$, which leads to a better approximation at fixed $ \alpha t$. Since NZ2 gives the lowest-order correction, this improvement is reflected in NZ4 as well. In Figs. \ref{N100_best_random} and \ref{N4_best_random} we plot the averaged solutions over 50 ensembles of random values for $\Omega _{n}$ and $ g_{n}$ in the interval $[-1,1]$. We see that on average TCL4, NZ4 and the optimal PM solution behave similarly to the case when $\Omega _{n}=g_{n}=1$. Due to the damping of the recurrences, especially when $N=100$, the TCL4 and the PM solutions match the exact solution closely for much longer times than in the deterministic case. Again, the PM solution is capable of qualitatively matching the behavior of the exact solution at long times. \begin{figure} \caption{Comparison of the exact solution, NZ4, TCL4 and PM at $\protect\beta =1$ and $\protect\beta =10$ for $N=100$. The exact solution is the solid (blue) line, PM is the dashed (green) line, NZ4 is the dot-dashed (red) line and TCL4 is the dotted (cyan) line. Note that for $ \protect\beta =1$, TCL4, PM and the exact solution nearly coincide for short and medium times. 
Only PM\ captures the recurrences of the exact solution at long times.} \label{N100_best} \end{figure} \begin{figure} \caption{Comparison of the exact solution, NZ4, TCL4 and PM at $\protect\beta=1$ and $\protect\beta=10$ for $N=4$. The exact solution is the solid (blue) line, PM is the dashed (green) line, NZ4 is the dot-dashed (red) line and TCL4 is the dotted (cyan) line. Note that for $\protect\beta=1 $, TCL4 and the exact solution nearly coincide for short and medium times.} \label{N4_best} \end{figure} \begin{figure} \caption{Comparison of the exact solution, NZ4, TCL4 and PM at $\protect\alpha t=0.1$ for $N=100$ for different $\protect\beta\in [0.01,10]$. The exact solution is the solid (blue) line, PM is the dashed (green) line, NZ4 is the dot-dashed (red) line and TCL4 is the dotted (cyan) line.} \label{N100_beta} \end{figure} \begin{figure} \caption{Comparison of the exact solution, NZ4, TCL4 and PM at $\protect\alpha t=0.5$ for $N=4$ for different $\protect\beta\in [0.01,10] $. The exact solution is the solid (blue) line, PM is the dashed (green) line, NZ4 is the dot-dashed (red) line and TCL4 is the dotted (cyan) line.} \label{N4_beta} \end{figure} \begin{figure} \caption{Comparison of the exact solution, NZ4, TCL4 and PM at $\protect\beta=1$ and $\protect\beta=10$ for $N=100$ for random values of $g_n$ and $\Omega_n$. The exact solution is the solid (blue) line, PM is the dashed (green) line, NZ4 is the dot-dashed (red) line and TCL4 is the dotted (cyan) line. Note that for $\protect\beta=1$ and $\protect\beta=10$, TCL4, PM and the exact solution nearly coincide.} \label{N100_best_random} \end{figure} \begin{figure} \caption{Comparison of the exact solution, NZ4, TCL4 and PM at $\protect\beta=1$ and $\protect\beta=10$ for $N=4$ for random values of $ g_n$ and $\Omega_n$. The exact solution is the solid (blue) line, PM is the dashed (green) line, NZ4 is the dot-dashed (red) line and TCL4 is the dotted (cyan) line. Note that for $\protect\beta=1$, TCL4, PM and the exact solution nearly coincide for short and medium times.} \label{N4_best_random} \end{figure} \subsection*{4.4.5 \hspace{2pt} Coarse-graining approximation} \addcontentsline{toc}{subsection}{4.4.5 \hspace{0.15cm} Coarse-graining approximation} Finally, we examine the coarse-graining approximation discussed in Section 4.3.1. We choose the time over which the average trace distance is calculated to be the time where the exact solution dies down. In Fig. \ref{N50B1CG} we plot the coarse-grained solution for the value of $\tau $ for which the trace distance to the exact solution is minimum. As can be seen, the coarse-graining approximation does not help since the Markovian assumption is not valid for this model. In deriving the coarse-graining approximation \cite{Lidar:CP01} one makes the assumption that the coarse-graining time scale is greater than any characteristic bath time scale. But the characteristic time scale of the bath is infinite in this case. \begin{figure} \caption{Comparison of the exact solution and the optimal coarse-graining approximation for $N=50$ and $\protect\beta =1$. The exact solution is the solid (blue) line and the coarse-graining approximation is the dashed (green) line. 
Note the linear scale time axis.} \label{N50B1CG} \end{figure} \section*{4.5 \hspace{2pt} Summary and conclusions} \addcontentsline{toc}{section}{4.5 \hspace{0.15cm} Summary and conclusions} We studied the performance of various methods for approximating the evolution of an open quantum system with an Ising system-bath coupling: a single qubit coupled to a bath consisting of $N$ qubits. The high symmetry of the model allowed us to derive the exact dynamics of the system as well as find analytical solutions for the different master equations. We saw that the Markovian approximation fails for this model due to the time independence of the bath correlation functions. This is also reflected in the fact that the coarse-graining method \cite{Lidar:CP01} does not approximate the exact solution well. We discussed the performance of these solutions for various parameter regimes. Unlike in other spin bath models discussed in the literature (e.g., Ref.~\cite{BBP04}), here the odd-order bath correlation functions do not vanish, leading to the existence of odd-order terms in the solutions of the TCL and NZ equations. These terms describe the rotation around the $z$ axis of the Bloch sphere, a fact which is reflected in the exact solution. We showed that up to fourth order TCL performs better than NZ at medium and high temperatures. For low temperatures we demonstrated an enhancement in the performance of NZ and showed that NZ and TCL perform equally well. We showed that the TCL approach breaks down for certain parameter choices and related this to the non-invertibility of the Kraus map describing the system dynamics. We also studied the performance of the post-Markovian master equation obtained in \cite{ShabaniLidar:05} with an optimal memory kernel. We discussed possible ways of approximating the optimal kernel for short times and derived the kernel which leads to an exact fit to the NZ2 solution. It turns out that the PM master equation performs as well as TCL2 for a large number of spins and outperforms all orders of NZ\ and TCL\ considered here at long times, as it captures the recurrences of the exact solution. Our study reveals the limitations of some of the best-known master equations available in the literature, in the context of a spin bath. In general, perturbative approaches such as low-order NZ\ and TCL do well at short times (on a time scale set by the system-bath coupling constant) and fare very poorly at long times. These approximations are also very sensitive to temperature and do better in the high temperature limit. The PM\ does not do as well as TCL4 at short times but has the distinct advantage of retaining a qualitatively correct character for long times. This conclusion depends heavily on the proper choice of the memory kernel; indeed, when the memory kernel is not optimally chosen the PM can yield solutions which are not as satisfactory \cite{ManPet06}. \section*{4.6 \hspace{2pt} Appendix A: Bath correlation functions} \label{app:A} \addcontentsline{toc}{section}{4.6 \hspace{0.15cm} Appendix A: Bath correlation functions} Here we show how to calculate the bath correlation functions used in our simulations. The $k^{\mathrm{th}}$ order bath correlation function is defined as \begin{equation*} Q_{k}=\mathrm{Tr}\{B^{k}\rho _{B}\}, \end{equation*} where $B$ and $\rho _{B}$ were given in Eqs. (\ref{Bcomp}) and (\ref{eq:rhoB0}), respectively.
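For a small bath these moments are easy to verify numerically, since $B$ and $\rho _{B}$ are diagonal in the $\sigma ^{z}$ product basis; the following minimal sketch (our own illustration, assuming NumPy; the parameter values are placeholders) checks that $\mathrm{Tr}\{B\rho _{B}\}=0$ and that $Q_{2}$ agrees with Eq.~(\ref{Q_2}):

\begin{verbatim}
import numpy as np
from functools import reduce

def bath_moments(g, Omega, kT=1.0, kmax=4):
    """Brute-force moments Q_k = Tr{B^k rho_B} for a small bath,
    using that B and rho_B are diagonal in the sigma^z product basis."""
    N = len(g)
    sz = np.array([1.0, -1.0])                    # diagonal of sigma^z
    one = np.ones(2)
    beta_n = np.tanh(-Omega / (2.0 * kT))
    kron_all = lambda vecs: reduce(np.kron, vecs)
    # diagonal of sum_n g_n sigma_n^z
    Sg = sum(g[n] * kron_all([sz if m == n else one for m in range(N)])
             for n in range(N))
    # diagonal of rho_B = prod_n (I + beta_n sigma_n^z)/2   [Eq. (eq_rho_B_inter)]
    rho = kron_all([(one + beta_n[n] * sz) / 2.0 for n in range(N)])
    theta = np.dot(Sg, rho)                       # Eq. (theta)
    B = Sg - theta                                # diagonal of B, Eq. (Bcomp)
    return [np.dot(B ** k, rho) for k in range(1, kmax + 1)]

g = np.array([0.3, -0.7, 0.5])                    # placeholder parameters
Omega = np.array([0.2, 0.9, -0.4])
Q = bath_moments(g, Omega)
beta_n = np.tanh(-Omega / 2.0)
print(Q[0])                                       # ~ 0, i.e. Tr{B rho_B} = 0
print(Q[1], np.sum(g ** 2 * (1.0 - beta_n ** 2))) # both equal Q_2 of Eq. (Q_2)
\end{verbatim}

We now evaluate the moments analytically by expanding the trace in the eigenbasis $\{|l\rangle \}$ of $\rho _{B}$.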
This yields: \begin{eqnarray*} Q_{k} &=&\mathrm{Tr}\{(\sum_{n}g_{n}\sigma _{n}^{z}-\theta I_{B})^{k}\sum_{l} \frac{\exp (-\beta E_{l})}{Z}|l\rangle \langle l|\} \\ &=&\sum_{l}\frac{\exp (-\beta E_{l})}{Z}\langle l|(\sum_{n}g_{n}\sigma _{n}^{z}-\theta I_{B})^{k}|l\rangle \\ &=&\sum_{l,l^{\prime },...,l^{\prime \prime \prime }}\frac{\exp (-\beta E_{l})}{Z}\langle l|(\sum_{n}g_{n}\sigma _{n}^{z}-\theta I_{B})|l^{\prime }\rangle \langle l^{\prime }|(\sum_{n^{\prime }}g_{n^{\prime }}\sigma _{n^{\prime }}^{z}-\theta I_{B})|l^{\prime \prime }\rangle \langle l^{\prime \prime }|\cdots\notag\\ && \cdots |l^{\prime \prime \prime }\rangle \langle l^{\prime \prime \prime }|(\sum_{n^{\prime \prime \prime}}g_{n^{\prime \prime \prime}}\sigma _{n^{\prime \prime \prime}}^{z}-\theta I_{B})|l\rangle \\ &=&\sum_{l,l^{\prime },...,l^{\prime \prime \prime }}\frac{ \exp (-\beta E_{l})}{Z}(\sum_{n}g_{n}\langle l|\sigma _{n}^{z}|l^{\prime }\rangle -\theta )\delta _{ll^{\prime }}(\sum_{n^{\prime }}g_{n^{\prime }}\langle l^{\prime }|\sigma _{n^{\prime }}^{z}|l^{\prime \prime }\rangle -\theta )\delta _{l^{\prime }l^{\prime \prime }}\cdots \notag\\ && \cdots(\sum_{n^{\prime \prime \prime }}g_{n^{\prime \prime \prime}}\langle l^{\prime \prime \prime }|\sigma _{n^{\prime \prime \prime }}^{z}|l\rangle -\theta )\delta _{l^{\prime \prime \prime }l} \\ &=&\sum_{l}\frac{\exp (-\beta E_{l})}{Z}(\sum_{n}g_{n}\langle l|\sigma _{n}^{z}|l\rangle -\theta )(\sum_{n^{\prime }}g_{n^{\prime }}\langle l|\sigma _{n^{\prime }}^{z}|l\rangle -\theta )\cdots \notag\\ &&\cdots(\sum_{n^{\prime \prime \prime }}g_{n^{\prime \prime \prime }}\langle l|\sigma _{n^{\prime \prime \prime }}^{z}|l\rangle -\theta ) \\ &=&\sum_{l}\frac{\exp (-\beta E_{l})}{Z}(\sum_{n}g_{n}\langle l|\sigma _{n}^{z}|l\rangle -\theta )^{k}, \end{eqnarray*} or \begin{equation} Q_{k}=\frac{1}{Z}\sum_{l}(\tilde{E}_{l})^{k}\exp (-\beta E_{l}), \end{equation} where $Z=\sum_{l}\exp (-\beta E_{l})$ and the expressions for $E_{l}$ and $ \tilde{E}_{l}$ were given in Eqs. (\ref{eq:El})\ and (\ref{eq:Eitilde}), respectively. The above formulas are useful when the energy levels $E_{l}$ and $\tilde{E} _{l}$ are highly degenerate, which is the case for example when $g_{n}\equiv g$ and $\Omega _{n}\equiv \Omega $ for all $n$. For a general choice of these parameters, it is computationally more efficient to consider $\theta $ in the form (\ref{eq:theta}) and the initial bath density matrix in the form (\ref{eq_rho_B_inter}). For example, the second order bath correlation function is \begin{eqnarray} Q_{2} &=&\mathrm{Tr}\{(\sum_{m=1}^{N}g_{m}\sigma _{m}^{z}-\theta I)(\sum_{n=1}^{N}g_{n}\sigma _{n}^{z}-\theta I)\rho _{B}\} \notag \\ &=&\mathrm{Tr}\{\sum_{n,m=1}^{N}g_{n}g_{m}\sigma _{n}^{z}\sigma _{m}^{z}\rho _{B}\} -2\theta \underbrace{\mathrm{Tr}\{\sum_{n=1}^{N}g_{n}\sigma _{n}^{z}\rho _{B}\}}_{\theta }+\theta ^{2} \notag \\ &=&\mathrm{Tr}\{\sum_{n,m=1}^{N}g_{n}g_{m}\sigma _{n}^{z}\sigma _{m}^{z}\bigotimes\limits_{n=1}^{N}\frac{1}{2}(I+\beta _{n}\sigma _{n}^{z})\}-\theta ^{2} \notag \\ &=&\sum_{n\neq m}^{N}\mathrm{Tr}\{g_{m}\frac{1}{2}(\sigma _{m}^{z}+\beta _{m}I)\}\mathrm{Tr}\{g_{n}\frac{1}{2}(\sigma _{n}^{z}+\beta _{n}I)\}\prod\limits_{j\neq m,n}\mathrm{Tr}\{\frac{1}{2}(I+\beta _{j}\sigma _{j}^{z})\}\notag\\ && +\mathrm{Tr}\{\sum_{n=1}^{N}g_{n}^{2}\rho _{B}\}-\theta ^{2} \notag \\ &=&\underbrace{\sum_{n,m=1}^{N}g_{m}\beta _{m}g_{n}\beta _{n}}_{\theta ^{2}}-\sum_{n=1}^{N}g_{n}^{2}\beta _{n}^{2}+\sum_{n=1}^{N}g_{n}^{2}-\theta ^{2} \notag \\ &=&\sum_{n=1}^{N}g_{n}^{2}(1-\beta _{n}^{2}). 
\end{eqnarray}
Using the identity $1-\tanh ^{2}(-x/2)=2/(1+\cosh x)$, this correlation function can be expressed in terms of the bath spectral density function [Eq. (\ref{eq:J})] as follows:
\begin{eqnarray*}
Q_{2} &=&\sum_{n=1}^{N}g_{n}^{2}(1-\beta _{n}^{2}) \\
&=&\int_{-\infty }^{\infty }\sum_{n=1}^{N}\delta (\Omega -\Omega _{n})|g_{n}|^{2}(1-\tanh ^{2}(-\frac{\Omega }{2kT}))\mathrm{d}\Omega \\
&=&\int_{-\infty }^{\infty }\frac{2J(\Omega )\mathrm{d}\Omega }{1+\cosh (\frac{\Omega }{kT})}.
\end{eqnarray*}
Higher order correlation functions are computed analogously.

\section*{4.7 \hspace{2pt} Appendix B: Cumulants for the NZ and TCL master equations}
\label{app:B}
\addcontentsline{toc}{section}{4.7 \hspace{0.15cm} Appendix B: Cumulants for the NZ and TCL master equations}

We calculate the explicit expressions for the cumulants appearing in Eq. (\ref{eq:cumulants}), needed to find the NZ and TCL perturbation expansions up to fourth order.
Second order:
\begin{eqnarray}
\langle \mathcal{L}^{2}\rangle \rho &=&-\mathrm{Tr}_{B}\{[H_{I},[H_{I},\rho ]]\}\otimes \rho _{B} \notag \\
&=&-\mathrm{Tr}_{B}\{H_{I}^{2}\rho -2H_{I}\rho H_{I}+\rho H_{I}^{2}\}\otimes \rho _{B} \notag \\
&=&-2Q_{2}(\rho _{S}-\sigma _{z}\rho _{S}\sigma _{z})\otimes \rho _{B} \notag \\
&\equiv &\rho ^{\prime },
\end{eqnarray}
\begin{eqnarray}
\langle \mathcal{L}^{2}\rangle ^{2}\rho &=&\mathcal{P}\mathcal{L}^{2}\mathcal{P}\mathcal{P}\mathcal{L}^{2}\mathcal{P}\rho \notag \\
&=&\mathcal{P}\mathcal{L}^{2}\mathcal{P}\rho ^{\prime } \notag \\
&=&-2Q_{2}(\rho _{S}^{\prime }-\sigma _{z}\rho _{S}^{\prime }\sigma _{z})\otimes \rho _{B}, \notag
\end{eqnarray}
where $\rho _{S}^{\prime }=\mathrm{Tr}_{B}{\rho }^{\prime }=-2Q_{2}(\rho _{S}-\sigma _{z}\rho _{S}\sigma _{z})$. Therefore
\begin{eqnarray}
\langle \mathcal{L}^{2}\rangle ^{2}\rho &=&-2Q_{2}\{(-2Q_{2}(\rho _{S}-\sigma _{z}\rho _{S}\sigma _{z}))-\sigma _{z}(-2Q_{2}(\rho _{S}-\sigma _{z}\rho _{S}\sigma _{z}))\sigma _{z}\}\otimes \rho _{B} \notag \\
&=&8Q_{2}^{2}(\rho _{S}-\sigma _{z}\rho _{S}\sigma _{z})\otimes \rho _{B}.
\end{eqnarray}
Third order:
\begin{eqnarray}
\langle \mathcal{L}^{3}\rangle \rho &=&i\mathrm{Tr}_{B}\{[H_{I},[H_{I},[H_{I},\rho ]]]\}\otimes \rho _{B} \notag \\
&=&i\mathrm{Tr}_{B}\{H_{I}^{3}\rho -3H_{I}^{2}\rho H_{I}+3H_{I}\rho H_{I}^{2}-\rho H_{I}^{3}\}\otimes \rho _{B} \notag \\
&=&4iQ_{3}(\sigma _{z}\rho _{S}-\rho _{S}\sigma _{z})\otimes \rho _{B}.
\end{eqnarray}
Fourth order:
\begin{eqnarray}
\langle \mathcal{L}^{4}\rangle \rho &=&\mathrm{Tr}_{B}\{[H_{I},[H_{I},[H_{I},[H_{I},\rho ]]]]\}\otimes \rho _{B} \notag \\
&=&\mathrm{Tr}_{B}\{H_{I}^{4}\rho -4H_{I}^{3}\rho H_{I}+6H_{I}^{2}\rho H_{I}^{2}-4H_{I}\rho H_{I}^{3}+\rho H_{I}^{4}\}\otimes \rho _{B} \notag \\
&=&8Q_{4}(\rho _{S}-\sigma _{z}\rho _{S}\sigma _{z})\otimes \rho _{B}.
\end{eqnarray}

\chapter*{Chapter 5: \hspace{1pt} Continuous quantum error correction for non-Markovian decoherence}
\addcontentsline{toc}{chapter}{Chapter 5:\hspace{0.15cm} Continuous quantum error correction for non-Markovian decoherence}

In this chapter we continue our exploration of non-Markovian decoherence. This time, we compare Markovian and non-Markovian error models in light of the performance of continuous quantum error correction. We consider again an Ising decoherence model of the type we studied in the previous chapter, but in a much simpler version---when the environment consists of only a single qubit.
This allows us to solve exactly for the evolution of a multi-qubit code in which each qubit is coupled to an independent bath when the code is subject to continuous error correction. The conclusions we obtain, however, extend beyond this model and apply to general non-Markovian decoherence.

\section*{5.1 \hspace{2pt} Preliminaries}
\addcontentsline{toc}{section}{5.1 \hspace{0.15cm} Preliminaries}

\subsection*{5.1.1 \hspace{2pt} Continuous quantum error correction}
\addcontentsline{toc}{subsection}{5.1.1 \hspace{0.15cm} Continuous quantum error correction}

In general, error probabilities increase with time. No matter how complicated a code or how many levels of concatenation are involved, the probability of uncorrectable errors is never truly zero, and if the system is exposed to noise for a sufficiently long time, the weight of uncorrectable errors can accumulate. To combat this, error correction must be applied repeatedly and sufficiently often. If one assumes that the time for an error-correcting operation is small compared to other relevant time scales of the system, error-correcting operations can be considered instantaneous. Then the scenario of repeated error correction leads to a discrete evolution which may often be difficult to describe. To study the evolution of a system in the limit of frequently applied instantaneous error correction, Paz and Zurek proposed to describe error correction as a continuous quantum jump process \cite{PZ98}. In this model, the infinitesimal error-correcting transformation that the density matrix of the encoded system undergoes during a time step $dt$ is
\begin{equation}\label{basicequation}
\rho\rightarrow (1-\kappa dt)\rho + \kappa dt \Phi(\rho),
\end{equation}
where $\Phi(\rho)$ is the completely positive trace-preserving (CPTP) map describing a full error-correcting operation, and $\kappa$ is the error-correction rate. The full error-correcting operation $\Phi(\rho)$ consists of a syndrome detection, followed (if necessary) by a unitary correction operation conditioned on the syndrome. Consider, for example, the three-qubit bit-flip code whose purpose is to protect an unknown qubit state from bit-flip (Pauli $X$) errors. The code space is spanned by $|\overline{0}\rangle = |000\rangle$ and $|\overline{1}\rangle = |111\rangle$, and the stabilizer generators are $ZZI$ and $IZZ$ (see Section 8.3). Here by $X=\sigma^x$, $Y=\sigma^y$, $Z=\sigma^z$ and $I$ we denote the usual Pauli operators and the identity, respectively, and a string of three operators represents the tensor product of operators on each of the three qubits. The standard error-correction procedure involves a measurement of the stabilizer generators, which projects the state onto one of the subspaces spanned by $|000\rangle$ and $|111\rangle$, $|100\rangle$ and $|011\rangle$, $|010\rangle$ and $|101\rangle$, or $|001\rangle$ and $|110\rangle$; the outcome of these measurements is the error syndrome. Assuming that the probability for two- or three-qubit errors is negligible, with high probability the result of this measurement is either the original state with no errors, or a state with a single $X$ error on the first, the second, or the third qubit. Depending on the outcome, one then applies an $X$ gate to the erroneous qubit and transforms the state back to the original one.
The CPTP map $\Phi(\rho)$ for this code can be written explicitly as
\begin{equation}
\begin{split}
\Phi(\rho) = \left(|000\rangle \langle000| + |111\rangle \langle111| \right) \rho \left(|000\rangle \langle000| + |111\rangle \langle111| \right) \\
+ \left(|000\rangle \langle100| + |111\rangle \langle011| \right) \rho \left(|100\rangle \langle000| + |011\rangle \langle111| \right) \\
+ \left(|000\rangle \langle010| + |111\rangle \langle101| \right)\rho \left(|010\rangle \langle000| + |101\rangle \langle111| \right) \\
+ \left(|000\rangle\langle001| + |111\rangle \langle110| \right)\rho \left(|001\rangle \langle000| + |110\rangle \langle111| \right).
\label{strongmap}
\end{split}
\end{equation}
The quantum-jump process \eqref{basicequation} can be viewed as a smoothed version of the discrete scenario of repeated error correction, in which instantaneous full error-correcting operations are applied at random times with rate $\kappa$. It can also be looked upon as arising from a continuous sequence of infinitesimal CPTP maps of the type \eqref{basicequation}. In practice, such a weak map is never truly infinitesimal, but rather has the form
\begin{equation}
\rho \rightarrow (1-\epsilon^2)\rho + \epsilon^2 \Phi(\rho),\label{wm}
\end{equation}
where $\epsilon \ll 1$ is a small but finite parameter, and the weak operation takes a small but nonzero time $\tau_c$. For times $t$ much greater than $\tau_c$ ($\tau_c\ll t$), the weak error-correcting map (\ref{wm}) is well approximated by the infinitesimal form \eqref{basicequation}, where the rate of error correction is
\begin{equation}
\kappa = \epsilon^2 /\tau_c. \label{tauc}
\end{equation}
A weak map of the form \eqref{wm} could be implemented, for example, by a weak coupling between the system and an ancilla via an appropriate Hamiltonian, followed by discarding the ancilla. A closely related scenario, where the ancilla is continuously cooled in order to reset it to its initial state, was studied in \cite{SarMil05}. Another way of implementing the weak map \eqref{wm} is via weak measurements followed by weak unitaries dependent on the outcome. In the appendix at the end of this chapter, we give an example of such an implementation for the case of the bit-flip code---when $\Phi(\rho)$ is given by \eqref{strongmap}. We also present a scheme in terms of weak measurements for codes that correct arbitrary single-qubit errors. In the latter case, the resulting weak map is not the same as \eqref{wm}, but also yields the strong error-correcting map $\Phi(\rho)$ when exponentiated. We point out that the weak measurements used in these schemes are not weak versions of the strong measurements for syndrome detection---they are in a different basis. They can be regarded as weak versions of a different set of strong measurements which, when followed by an appropriate unitary, yield the same map $\Phi(\rho)$ on average. Thus, the workings of continuous error correction, when it is driven by weak measurements, do not translate directly into the error syndrome detection and correction of the standard paradigm. In this sense, the continuous approach can be regarded as a different paradigm for error correction---one based on weak measurements and weak unitary operations. The idea of using continuous weak measurements and unitary operations for error correction has been explored in the context of different heuristic schemes \cite{ADL02, SarMil05g}, some of which are based on a direct ``continuization'' of the syndrome measurements.
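As a concrete illustration of the maps \eqref{strongmap} and \eqref{wm}, the following short numerical sketch (our own illustration; it assumes NumPy is available and the helper names are arbitrary) builds the Kraus operators of the strong error-correcting operation for the three-qubit bit-flip code, checks that the map is trace preserving, and verifies that a single bit flip on an encoded state is corrected:
\begin{verbatim}
import numpy as np

# Computational-basis kets for three qubits (8-dimensional space).
def ket(bits):
    v = np.zeros(8)
    v[int(bits, 2)] = 1.0
    return v

def proj(b_out, b_in):
    return np.outer(ket(b_out), ket(b_in))

# Kraus operators of the strong map, Eq. (strongmap): each operator detects
# one syndrome and maps the flagged subspace back onto the code space.
K = [proj('000', '000') + proj('111', '111'),
     proj('000', '100') + proj('111', '011'),
     proj('000', '010') + proj('111', '101'),
     proj('000', '001') + proj('111', '110')]

def strong_map(rho):
    return sum(k @ rho @ k.conj().T for k in K)

def weak_map(rho, eps):
    # Weak map, Eq. (wm): rho -> (1 - eps^2) rho + eps^2 Phi(rho).
    return (1 - eps**2) * rho + eps**2 * strong_map(rho)

assert np.allclose(sum(k.conj().T @ k for k in K), np.eye(8))  # trace preserving
psi = (ket('000') + ket('111')) / np.sqrt(2)                   # encoded |+> state
rho = np.outer(psi, psi)
X1 = np.kron(np.array([[0., 1.], [1., 0.]]), np.eye(4))        # bit flip on qubit 1
assert np.allclose(strong_map(X1 @ rho @ X1), rho)             # error corrected
print(np.trace(weak_map(X1 @ rho @ X1, eps=0.1)))              # 1.0
\end{verbatim}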
In this study we consider continuous error correction of the type given by Eq.~\eqref{basicequation}.

\subsection*{5.1.2 \hspace{2pt} Markovian decoherence}
\addcontentsline{toc}{subsection}{5.1.2 \hspace{0.15cm} Markovian decoherence}

So far, continuous quantum error correction has been studied only for Markovian error models. As we discussed in the previous chapter, the Markovian approximation describes situations where the bath correlation times are much shorter than any characteristic time scale of the system \cite{BrePet02}. In this limit, the dynamics can be described by a semi-group master equation in the Lindblad form \cite{Lin76}:
\begin{equation}
\frac{d\rho}{dt}=L(\rho)\equiv-i[H,\rho]+\frac{1}{2}\underset{j}{\sum}\lambda_j(2L_j\rho L_j^{\dagger}-L_j^{\dagger}L_j\rho-\rho L_j^{\dagger}L_j).\label{firstLindblad}
\end{equation}
Here $H$ is the system Hamiltonian and the $\{L_j\}$ are suitably normalized Lindblad operators describing different error channels with decoherence rates $\lambda_j$. For example, the Liouvillian
\begin{equation}
L(\rho)= \underset{j}{\sum}\lambda_j(X_j\rho X_j - \rho), \label{Lbitflip}
\end{equation}
where $X_j$ denotes a local bit-flip operator acting on the $j$-th qubit, describes independent Markovian bit-flip errors. For a system undergoing Markovian decoherence and error correction of the type \eqref{basicequation}, the evolution is given by the equation
\begin{equation}
\frac{d\rho}{dt}=L(\rho)+\kappa\Gamma(\rho),\label{errorcorrectionequation}
\end{equation}
where $\Gamma(\rho)=\Phi(\rho)-\rho$. In \cite{PZ98}, Paz and Zurek showed that if the set of errors $\{L_j\}$ is correctable by the code, in the limit of infinite error-correction rate (strong error-correcting operations applied arbitrarily often) the state of the system freezes and is protected from errors at all times. The effect of freezing can be understood by noticing that the transformation arising from decoherence during a short time step $\Delta t$ is
\begin{equation}
\rho\rightarrow \rho + L(\rho)\Delta t +\textit{O}(\Delta t^2),
\end{equation}
i.e., the weight of correctable errors emerging during this time interval is proportional to $\Delta t$, whereas uncorrectable errors (e.g., multi-qubit bit flips in the case of the three-qubit bit-flip code) are of order $\textit{O}(\Delta t^2)$. Thus, if errors are constantly corrected, in the limit $\Delta t \rightarrow 0$ uncorrectable errors cannot accumulate, and the evolution stops.

\subsection*{5.1.3 \hspace{2pt} The Zeno effect. Error correction versus error prevention}
\addcontentsline{toc}{subsection}{5.1.3 \hspace{0.15cm} The Zeno effect. Error correction versus error prevention}

The effect of ``freezing'' in continuous error correction strongly resembles the quantum Zeno effect \cite{MisSud77}, in which frequent measurements slow down the evolution of a system, freezing the state in the limit where they are applied continuously. The Zeno effect arises when the system and its environment are initially decoupled and they undergo a Hamiltonian-driven evolution, which leads to a quadratic change with time of the state during the initial moments \cite{NNP96} (the so-called Zeno regime). Let the initial state of the system plus the bath be $\rho_{SB}(0)=|0\rangle \langle 0|_S\otimes\rho_B(0)$.
For small times, the fidelity of the system's density matrix with the initial state $\alpha(t)=\textrm{Tr}\left\{\left(|0\rangle\langle0|_S\otimes I_B\right)\rho_{SB}(t)\right\}$ can be approximated as
\begin{equation}
\alpha(t)= 1-C t^2+\textit{O}(t^3).\label{Zeno}
\end{equation}
In terms of the Hamiltonian $H_{SB}$ acting on the entire system, the coefficient $C$ is
\begin{equation}
C = \textrm{Tr}\left\{H_{SB}^2\left(|0\rangle\langle0|_S\otimes\rho_B(0)\right)\right\} - \textrm{Tr}\left\{H_{SB}\left(|0\rangle\langle0|_S\otimes I_B\right) H_{SB} \left(|0\rangle\langle0|_S\otimes \rho_B(0)\right)\right\}. \label{C}
\end{equation}
According to Eq. \eqref{Zeno}, if after a short time step $\Delta t$ the system is measured in an orthogonal basis which includes the initial state $|0\rangle$, the probability to find the system in a state other than the initial state is of order $\textit{O}(\Delta t^2)$. Thus if the state is continuously measured ($\Delta t \rightarrow 0$), this prevents the system from evolving. It has been proposed to utilize the quantum Zeno effect in schemes for error prevention \cite{Zur84, BBDEJM97, VGW96}, in which an unknown encoded state is protected from errors simply by frequent measurements which keep it inside the code space. The approach is similar to error correction in that the errors for which the code is designed send a codeword to a space orthogonal to the code space. The difference is that different errors need not be distinguishable, since the procedure does not involve {\it correction} of errors, but their prevention. In \cite{VGW96} it was shown that with this approach it is possible to use codes of smaller redundancy than those needed for error correction, and a four-qubit encoding of a qubit was proposed which is capable of preventing arbitrary independent errors arising from Hamiltonian interactions. This approach implicitly assumes the existence of a Zeno regime, and it fails if we assume Markovian decoherence at all times. This is because the probability of errors emerging during a time step $dt$ in a Markovian model is proportional to $dt$ (rather than $dt^2$), and hence errors will accumulate with time if not corrected. From the above observations we see that error {\it correction} is capable of achieving results in noise regimes where error {\it prevention} fails. Of course, this advantage is at the expense of a more complicated procedure---in addition to the measurements used in error prevention, error correction involves unitary correction operations, and in general requires codes with higher redundancy. At the same time, we see that in the Zeno regime it is possible to reduce decoherence using weaker resources than those needed in the case of Markovian noise. This suggests that in this regime error correction may exhibit higher performance than it does for Markovian decoherence.

\subsection*{5.1.4 \hspace{2pt} Non-Markovian decoherence}
\addcontentsline{toc}{subsection}{5.1.4 \hspace{0.15cm} Non-Markovian decoherence}

Markovian decoherence is an approximation valid for times much larger than the memory time of the environment. As we saw in the previous chapter, however, in many situations of practical significance the memory of the environment cannot be neglected and the evolution is highly non-Markovian. Furthermore, no evolution is strictly Markovian, and for a system initially decoupled from its environment a Zeno regime is always present, short though it may be \cite{NNP96}.
If the time resolution of error-correcting operations is high enough so that they ``see'' the Zeno regime, this could give rise to different behavior. The existence of a Zeno regime is not the only interesting feature of non-Markovian decoherence. The mechanism by which errors accumulate in a general Hamiltonian interaction with the environment may differ significantly from the Markovian case, since the system may develop nontrivial correlations with the environment. For example, imagine that some time after the initial encoding of a system, a strong error-correcting operation is applied. This brings the state inside the code space, but the state contains a nonzero portion of errors that are not distinguishable by the code. Thus the new state is mixed and is generally correlated with the environment. A subsequent error-correcting operation can only aim at correcting errors arising after this point, since the errors already present inside the code space are in principle uncorrectable. The maps describing subsequent errors on the density matrix, however, may not be completely positive due to the correlations with the environment. Nevertheless, it follows from a result in \cite{ShaLid07} that an error-correction procedure which is capable of correcting a certain class of completely positive (CP) maps can also correct any linear noise map whose operator elements can be expressed as linear combinations of the operator elements in a correctable CP map. This implies, in particular, that an error-correction procedure that can correct arbitrary single-qubit CP maps can correct arbitrary single-qubit linear maps. In this context, we note that the effects of system-environment correlations in non-Markovian error models have also been studied from the perspective of fault tolerance, and it has been shown that the threshold theorem can be extended to various types of non-Markovian noise \cite{TB05, AGP06, AKP06}.

Another important difference from the Markovian case is that error correction and the effective noise on the reduced density matrix of the system cannot be treated as independent processes. One could derive an equation for the effective evolution of the system alone subject to interaction with the environment, like the Nakajima-Zwanzig \cite{Nak58, Zwa60} or the time-convolutionless (TCL) \cite{Shibata77, ShiAri80} master equations, but the generator of transformations at a given moment in general will depend (implicitly or explicitly) on the entire history up to this moment. Therefore, adding error correction can nontrivially affect the effective error model. This means that in studying the performance of continuous error correction one either has to derive an equation for the effective evolution of the encoded system, taking into account error correction from the very beginning, or one has to look at the evolution of the entire system---including the bath---where the error generator and the generator of error correction can be considered independent. In the latter case, for sufficiently small $\tau_c$, the evolution of the entire system including the bath can be described by
\begin{equation}
\frac{d \rho}{dt}=-i[H, \rho] + \kappa \Gamma(\rho), \label{NMerrorcorrectionequation}
\end{equation}
where $\rho$ is the density matrix of the system plus bath, $H$ is the total Hamiltonian, and the error-correction generator $\Gamma$ acts locally on the encoded system. In this study, we take this approach for a sufficiently simple bath model which allows us to find a solution for the evolution of the entire system.
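To make Eq. \eqref{NMerrorcorrectionequation} concrete, the minimal sketch below (our own illustration, not code used in this work; it assumes NumPy) integrates the equation with a simple Euler scheme for a full system-plus-bath density matrix, with the error-correction generator acting only on the system. For definiteness it uses the single-qubit model that is introduced in Section 5.2 (one system qubit coupled to one maximally mixed bath qubit through $\gamma X\otimes X$):
\begin{verbatim}
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])

def evolve(rho, H, kraus, kappa, dt, steps):
    """Euler steps of drho/dt = -i[H,rho] + kappa*(Phi(rho) - rho)
    for the full system+bath state, Eq. (NMerrorcorrectionequation)."""
    for _ in range(steps):
        phi = sum(k @ rho @ k.conj().T for k in kraus)
        rho = rho + dt * (-1j * (H @ rho - rho @ H) + kappa * (phi - rho))
    return rho

gamma, kappa = 1.0, 5.0
H = gamma * np.kron(X, X)                      # system (x) bath
# Kraus operators of the trivial single-qubit 'code' (project back onto |0>),
# extended by the identity on the bath qubit.
phi_sys = [np.array([[1., 0.], [0., 0.]]),     # |0><0|
           np.array([[0., 1.], [0., 0.]])]     # |0><1|
kraus = [np.kron(k, I2) for k in phi_sys]
rho0 = np.kron(np.diag([1., 0.]), I2 / 2)      # |0><0|_S (x) I/2
rho = evolve(rho0, H, kraus, kappa, dt=1e-4, steps=50000)   # up to gamma*t = 5
alpha = np.real(np.trace(np.kron(np.diag([1., 0.]), I2) @ rho))
print(alpha)   # approaches (2 + R^2)/(4 + R^2) with R = kappa/gamma
\end{verbatim}
The printed code-space fidelity should settle near the asymptotic value derived analytically in Section 5.2.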
\subsection*{5.1.5 \hspace{2pt} Plan of this chapter} \addcontentsline{toc}{subsection}{5.1.5 \hspace{0.15cm} Plan of this chapter} The rest of the chapter is organized as follows. To develop understanding of the workings of continuous error correction, in Section 5.2 we look at a simple example: an error-correction code consisting of only one qubit which aims at protecting a known state. We discuss the difference in performance for Markovian and non-Markovian decoherence, and argue the implications it has for the case of multi-qubit codes. In Section 5.3, we study the three-qubit bit-flip code. We first review the performance of continuous error correction in the case of Markovian bit-flip decoherence, which was first studied in \cite{PZ98}. We then consider a non-Markovian model, where each qubit in the code is coupled to an independent bath qubit. This model is a simple version of the one studied in the previous chapter, and it allows us to solve for its evolution analytically. In the limit of large error-correction rates, the effective evolution approaches the evolution of a single qubit without error correction, but the coupling strength is now decreased by a factor which scales quadratically with the error-correction rate. This is opposed to the case of Markovian decoherence, where the same factor scales linearly with the rate of error-correction. In Section 5.4, we show that the quadratic enhancement in the performance over the case of Markovian noise can be attributed to the presence of a Zeno regime and argue that for general stabilizer codes and independent errors, the performance of continuous error correction would exhibit the same qualitative characteristics. In Section 5.5, we conclude. In the Appendix (Section 5.6), we present an implementation of the quantum-jump error correcting model that uses weak measurements and weak unitary operations. \section*{5.2 \hspace{2pt} The single-qubit code} \addcontentsline{toc}{section}{5.2 \hspace{0.15cm} The single-qubit code} Consider the problem of protecting a qubit in state $|0\rangle$ from bit-flip errors. This problem can be regarded as a trivial example of a stabilizer code, where the code space is spanned by $|0\rangle$ and its stabilizer is $Z$. Let us consider the Markovian bit-flip model first. The evolution of the state subject to bit-flip errors and error correction is described by Eq. \eqref{errorcorrectionequation} with \begin{equation} L(\rho)=\lambda( X \rho X - \rho), \label{bitflipgen} \end{equation} and \begin{equation} \Gamma(\rho)=|0\rangle \langle 0| \rho |0\rangle \langle0| + |0\rangle \langle 1|\rho |1\rangle \langle 0| - \rho. \label{ECgen} \end{equation} If the state lies on the z-axis of the Bloch sphere, it will never leave it, since both the noise generator \eqref{bitflipgen} and the error-correction generator \eqref{ECgen} keep it on the axis. We will take the qubit to be initially in the desired state $|0\rangle$, and therefore at any later moment it will have the form $\rho (t) = \alpha(t) |0\rangle\langle 0|+(1-\alpha(t))|1\rangle\langle 1|$, $\alpha(t) \in [0,1]$. The coefficient $\alpha(t)$ has the interpretation of a fidelity with the trivial code space spanned by $|0\rangle$. For an infinitesimal time step $dt$, the effect of the noise is to decrease $\alpha(t)$ by the amount $\lambda (2\alpha(t)-1) dt$ and that of the correcting operation is to increase it by $\kappa (1-\alpha(t)) dt$. 
The net evolution is then described by \begin{equation} \label{equation1} \frac{d\alpha(t)}{dt}=-(\kappa+2\lambda)\alpha(t)+(\kappa + \lambda). \end{equation} The solution is \begin{equation} \alpha(t)=(1-\alpha_*^{\rm M})e^{-(\kappa+2\lambda)t}+\alpha_*^{\rm M}, \label{MSQS} \end{equation} where \begin{equation} \alpha_*^{\rm M}=1-\frac{1}{2+r}, \label{attractor} \end{equation} and $r=\kappa/\lambda$ is the ratio between the rate of error correction and the rate of decoherence. We see that the fidelity decays, but it is confined above its asymptotic value $\alpha_*^{\rm M}$, which can be made arbitrarily close to 1 for a sufficiently large $r$. Now let us consider a non-Markovian error model. We choose the simple scenario where the system is coupled to a single bath qubit via the Hamiltonian \begin{equation} H=\gamma X\otimes X, \end{equation} where $\gamma$ is the coupling strength. This is the Ising Hamiltonian \eqref{eq:HI} for the case of a single bath qubit, but in the basis $|+\rangle=\frac{|0\rangle+|1\rangle}{\sqrt{2}}$, $|-\rangle=\frac{|0\rangle-|1\rangle}{\sqrt{2}}$. As we noted in Chapter 4 (Section 4.1), the model of a single bath qubit can be a good approximation for situations in which the coupling to a single spin from the bath dominates over other interactions. We will assume that the bath qubit is initially in the maximally mixed state, which can be thought of as an equilibrium state at high temperature. From Eq. \eqref{NMerrorcorrectionequation} one can verify that if the system is initially in the state $|0\rangle$, the state of the system plus the bath at any moment will have the form \begin{eqnarray} \rho(t) = \left(\alpha (t) |0\rangle \langle 0| + (1-\alpha(t))|1\rangle \langle 1|\right)\otimes \frac{I}{2} - \beta(t)Y \otimes \frac{X}{2}. \end{eqnarray} In the tensor product, the first operator belongs to the Hilbert space of the system and the second to the Hilbert space of the bath. We have $\alpha(t) \in [0,1]$, and $|\beta(t)|\le\sqrt{\alpha(t)(1-\alpha(t))}, \beta(t)\in R$. The reduced density matrix of the system has the same form as the one for the Markovian case. The traceless term proportional to $\beta(t)$ can be thought of as a ``hidden'' part, which nevertheless plays an important role in the error-creation process, since errors can be thought of as being transferred to the ``visible'' part from the ``hidden'' part (and vice versa). This can be seen from the fact that during an infinitesimal time step $dt$, the Hamiltonian changes the parameters $\alpha$ and $\beta$ as follows: \begin{gather} \alpha\rightarrow \alpha-2\beta \gamma dt ,\notag\\ \beta \rightarrow \beta +(2\alpha-1)\gamma dt .\label{sqe} \end{gather} The effect of an infinitesimal error-correcting operation is \begin{gather} \alpha \rightarrow \alpha + (1-\alpha)\kappa dt,\notag\\ \beta\rightarrow \beta-\beta\kappa dt. \end{gather} Note that the hidden part is also being acted upon. Putting it all together, we get the system of equations \begin{gather} \frac{d \alpha(t)}{dt}=\kappa(1-\alpha(t))-2\gamma \beta(t),\notag\\ \frac{d \beta(t)}{dt}=\gamma(2\alpha-1)-\kappa\beta(t)\label{equation2}. \end{gather} The solution for the fidelity $\alpha(t)$ is \begin{gather} \alpha(t) = \frac{2\gamma^2 + \kappa^2}{4\gamma^2+\kappa^2} + e^{-\kappa t}\left(\frac{\kappa\gamma}{4\gamma^2+\kappa^2} \sin{2\gamma t} + \frac{2\gamma^2}{4\gamma^2+\kappa^2}\cos{2\gamma t}\right). 
\label{singlequbitsolution}
\end{gather}
We see that as time increases, the fidelity stabilizes at the value
\begin{equation}
\alpha_*^{\rm NM}= \frac{2+R^2}{4+R^2}=1-\frac{2}{4+R^2},
\end{equation}
where $R=\kappa/\gamma$ is the ratio between the error-correction rate and the coupling strength. In Fig. 1 we have plotted the fidelity as a function of the dimensionless parameter $\gamma t$ for three different values of $R$. For error-correction rates comparable to the coupling strength ($R=1$), the fidelity undergoes a few partial recurrences before it stabilizes close to $\alpha_*^{\rm NM}$. For the larger value $R=2$, however, the oscillations are already heavily damped, and for $R=5$ the fidelity seems confined above $\alpha_*^{\rm NM}$. As $R$ increases, the evolution becomes closer to a decay like the one in the Markovian case.
\begin{figure}
\caption{Fidelity of the single-qubit code with continuous bit-flip errors and error correction, as a function of dimensionless time $\gamma t$, for three different values of the ratio $R=\kappa/\gamma$.}
\label{fig1}
\end{figure}
A remarkable difference, however, is that the asymptotic weight outside the code space ($1-\alpha_*^{\rm NM}$) decreases with $\kappa$ as $1/\kappa^2$, whereas in the Markovian case the same quantity decreases as $1/\kappa$. The asymptotic value can be obtained as an equilibrium point at which the infinitesimal weight flowing out of the code space during a time step $dt$ is equal to the weight flowing into it. The latter corresponds to vanishing right-hand sides in Eqs. \eqref{equation1} and \eqref{equation2}. In Section 5.4, we will show that the difference in the equilibrium code-space fidelity for the two different types of decoherence arises from the difference in the corresponding evolutions during initial times.

For multi-qubit codes, error correction cannot preserve a high fidelity with the initial codeword for all times, because there will be multi-qubit errors that can lead to errors within the code space itself. But it is natural to expect that the code-space fidelity can be kept above a certain value, since the effect of the error-correcting map \eqref{basicequation} is to oppose its decrease. If, similarly to the single-qubit code, there is a quadratic difference in the code-space fidelity for the cases of Markovian and non-Markovian decoherence, this could lead to a different performance of the error-correction scheme with respect to the rate of accumulation of uncorrectable errors inside the code space. This is because multi-qubit errors that can lead to transformations entirely within the code space during a time step $dt$ are of order $\textit{O}(dt^2)$. This means that if the state is kept constantly inside the code space (as in the limit of an infinite error-correction rate), uncorrectable errors will never develop. But if there is a finite nonzero portion of correctable errors, the error mechanism will convert part of it into errors that are either not distinguishable by the code or are misinterpreted by it. Therefore, the weight outside the code space can be thought of as responsible for the accumulation of uncorrectable errors, and consequently a difference in its magnitude may lead to a difference in the overall performance. In the following sections we will see that this is indeed the case.
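The different scalings of the asymptotic weight outside the code space can be checked directly by integrating the two sets of rate equations. The sketch below (our own check, assuming NumPy and SciPy; the parameter values are illustrative) solves Eq. \eqref{equation1} and Eqs. \eqref{equation2} with $\lambda=\gamma=1$ and compares the asymptotic deficits $1-\alpha_*$ with $1/(2+r)$ and $2/(4+R^{2})$, respectively:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

lam = gam = 1.0      # decoherence rate lambda and coupling strength gamma
T = 50.0             # long enough to reach the asymptotic values

for ratio in (5.0, 20.0, 80.0):        # ratio = r = R (since lam = gam)
    kappa = ratio
    # Markovian single-qubit code, Eq. (equation1).
    mark = solve_ivp(lambda t, a: -(kappa + 2*lam)*a + (kappa + lam),
                     (0, T), [1.0], rtol=1e-10, atol=1e-12).y[0, -1]
    # Non-Markovian single-qubit code, Eqs. (equation2) for (alpha, beta).
    def rhs(t, y):
        a, b = y
        return [kappa*(1 - a) - 2*gam*b, gam*(2*a - 1) - kappa*b]
    nonm = solve_ivp(rhs, (0, T), [1.0, 0.0],
                     rtol=1e-10, atol=1e-12).y[0, -1]
    print(ratio,
          1 - mark, 1/(2 + ratio),        # Markovian: 1 - alpha* = 1/(2+r)
          1 - nonm, 2/(4 + ratio**2))     # non-Markovian: 2/(4+R^2)
\end{verbatim}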
\section*{5.3 \hspace{2pt} The three-qubit bit-flip code}
\addcontentsline{toc}{section}{5.3 \hspace{0.15cm} The three-qubit bit-flip code}

\subsection*{5.3.1 \hspace{2pt} A Markovian error model}
\addcontentsline{toc}{subsection}{5.3.1 \hspace{0.15cm} A Markovian error model}

Even though the three-qubit bit-flip code can correct only bit-flip errors, it captures most of the important characteristics of nontrivial stabilizer codes. Before we look at a non-Markovian model, we will review the Markovian case, which was studied in \cite{PZ98}. Let the system decohere through identical independent bit-flip channels, i.e., $L(\rho)$ is of the form \eqref{Lbitflip} with $\lambda_1=\lambda_2=\lambda_3=\lambda$. Then one can verify that the density matrix at any moment can be written as
\begin{equation}
\rho(t) = a(t)\rho(0)+b(t)\rho_{1}+c(t)\rho_{2}+d(t)\rho_{3}, \label{rhooft}
\end{equation}
where
\begin{gather}
\rho_{1}=\frac{1}{3}(X_1\rho(0)X_1+X_2\rho(0)X_2 + X_3\rho(0)X_3),\notag\\
\rho_{2}= \frac{1}{3}(X_1X_2\rho(0)X_1X_2+ X_2X_3\rho(0)X_2X_3+X_1X_3\rho(0)X_1X_3),\\
\rho_{3} = X_1X_2X_3\rho(0) X_1X_2X_3,\notag
\end{gather}
are equally-weighted mixtures of single-qubit, two-qubit and three-qubit errors on the original state. The effect of decoherence for a single time step $dt$ is equivalent to the following transformation of the coefficients in Eq. \eqref{rhooft}:
\begin{equation}
\begin{split}
a\rightarrow a - 3a \lambda dt + b \lambda dt,\\
b\rightarrow b + 3a \lambda dt - 3 b \lambda dt + 2 c \lambda dt,\\
c\rightarrow c +2b \lambda dt - 3c\lambda dt+3d\lambda dt,\\
d\rightarrow d +c \lambda dt -3d \lambda dt. \label{decohtransform}
\end{split}
\end{equation}
If the system is initially inside the code space, combining Eq. \eqref{decohtransform} with the effect of the weak error-correcting map $\rho\rightarrow (1-\kappa dt)\rho + \kappa dt \Phi(\rho)$, where $\Phi(\rho)$ is given in Eq. \eqref{strongmap}, yields the following system of first-order linear differential equations for the evolution of the system subject to decoherence plus error correction:
\begin{equation}
\begin{split}
\frac{da(t)}{dt} = -3\lambda a(t) + (\lambda+\kappa)b(t),\\
\frac{db(t)}{dt} = 3\lambda a(t) - (3\lambda+\kappa)b(t) + 2 \lambda c(t),\\
\frac{dc(t)}{dt} = 2\lambda b(t) - (3\lambda+\kappa)c(t) + 3 \lambda d(t),\\
\frac{dd(t)}{dt} = (\lambda+\kappa)c(t)-3\lambda d(t). \label{equations}
\end{split}
\end{equation}
The exact solution has been found in \cite{PZ98}. Here we just note that for the initial conditions $a(0)=1, b(0)=c(0)=d(0)=0$, the exact solution for the weight outside the code space is
\begin{equation}
b(t)+c(t)=\frac{3}{4+r}(1-e^{-(4+r)\lambda t}),
\end{equation}
where $r=\kappa/\lambda$. We see that, similarly to what we obtained for the trivial code in the previous section, the weight outside the code space quickly approaches its asymptotic value $\frac{3}{4+r}$, which scales as $1/r$. But note that here the asymptotic value is roughly three times greater than that for the single-qubit model. This corresponds to the fact that there are three single-qubit channels. More precisely, it can be verified that if for a given $\kappa$ the weight left uncorrected by the single-qubit scheme is small, then the weight left uncorrected by a multi-qubit code using the same $\kappa$ and the same kind of decoherence for each qubit scales approximately linearly with the number of qubits. Similarly, the ratio $r$ required to preserve a given overlap with the code space scales linearly with the number of qubits in the code.
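A quick numerical check of Eqs. \eqref{equations} (a sketch with illustrative parameter values, assuming NumPy and SciPy) confirms the closed form quoted above for the weight outside the code space:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

lam, kappa = 1.0, 20.0          # decoherence and error-correction rates
r = kappa / lam

def rhs(t, y):
    # Eq. (equations): evolution of (a, b, c, d) under bit-flip decoherence
    # combined with the weak error-correcting map.
    a, b, c, d = y
    return [-3*lam*a + (lam + kappa)*b,
            3*lam*a - (3*lam + kappa)*b + 2*lam*c,
            2*lam*b - (3*lam + kappa)*c + 3*lam*d,
            (lam + kappa)*c - 3*lam*d]

ts = np.linspace(0.0, 2.0, 201)
sol = solve_ivp(rhs, (0, ts[-1]), [1.0, 0.0, 0.0, 0.0], t_eval=ts,
                rtol=1e-9, atol=1e-12)
outside = sol.y[1] + sol.y[2]                        # b(t) + c(t)
exact = 3/(4 + r) * (1 - np.exp(-(4 + r)*lam*ts))    # closed form from the text
print(np.max(np.abs(outside - exact)))               # agrees to solver tolerance
\end{verbatim}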
The most important difference from the single-qubit model is that in this model there are uncorrectable errors that cause a decay of the state's fidelity {\it inside} the code space. Due to the finiteness of the resources employed by our scheme, there always remains a nonzero portion of the state outside the code space, which gives rise to uncorrectable three-qubit errors. To understand how the state decays inside the code space, we ignore the terms of the order of the weight outside the code space in the exact solution. We obtain: \begin{equation} a(t)\approx \frac{1+e^{-\frac{6}{r}2\lambda t}}{2} \approx 1 - d(t), \end{equation} \begin{equation} b(t) \approx c(t) \approx 0. \end{equation} Comparing this solution to the expression for the fidelity of a single decaying qubit without error correction---which can be seen from Eq. \eqref{MSQS} for $\kappa=0$---we see that the encoded qubit decays roughly as if subject to bit-flip decoherence with rate $6\lambda/r$. Therefore, for large $r$ this error-correction scheme can reduce the rate of decoherence approximately $r/6$ times. In the limit $r \rightarrow \infty$, it leads to perfect protection of the state for all times. \subsection*{5.3.2 \hspace{2pt} A non-Markovian error model} \addcontentsline{toc}{subsection}{5.3.2 \hspace{0.15cm} A non-Markovian error model} We consider a model where each qubit independently undergoes the same kind of non-Markovian decoherence as the one we studied for the single-qubit code. Here the system we look at consists of six qubits---three for the codeword and three for the environment. We assume that all system qubits are coupled to their corresponding environment qubits with the same coupling strength, i.e., the Hamiltonian is \begin{equation} H=\gamma\overset{3}{\underset{i=1}{\sum}}X^S_i\otimes X^B_i,\label{Hamiltonian} \end{equation} where the operators $X^S$ act on the system qubits and $X^B$ act on the corresponding bath qubits. The subscripts label the particular qubit on which they act. Obviously, the types of effective single-qubit errors on the density matrix of the system that can result from this Hamiltonian at any time, whether they are CP or not, will have operator elements which are linear combinations of $I$ and $X^S$, i.e., they are correctable by the procedure according to \cite{ShaLid07}. Considering the forms of the Hamiltonian \eqref{Hamiltonian} and the error-correcting map \eqref{strongmap}, one can see that the density matrix of the entire system at any moment is a linear combination of terms of the following type: \begin{equation} \varrho_{lmn,pqr}\equiv X_1^lX_2^mX_3^n\rho(0) X_1^pX_2^qX_3^r\otimes \frac{X_1^{l+p}}{2}\otimes \frac{X_2^{m+q}}{2} \otimes \frac{X_3^{n+r}}{2}. \end{equation} Here the first term in the tensor product refers to the Hilbert space of the system, and the following three refer to the Hilbert spaces of the bath qubits that couple to the first, second and third qubits from the code, respectively. The powers $l,m,n,p,q,r$ take values $0$ and $1$ in all possible combinations, and $X^1=X$, $X^0=X^2=I$. Note that $\varrho_{lmn,pqr}$ should not be mistaken for the components of the density matrix in the computational basis. Collecting these together, we can write the density matrix in the form \begin{eqnarray} \rho(t)&=&\underset{l,m,n,p,q,r}{\sum}(-i)^{l+m+n}(i)^{p+q+r}C_{lmn,pqr}(t)\times \varrho_{lmn,pqr},\label{fullDM} \end{eqnarray} where the coefficients $C_{lmn,pqr}(t)$ are real. 
The coefficient $C_{000,000}$ is less than or equal to the codeword fidelity (with equality when $\rho(0)=|\bar{0}\rangle\langle \bar{0}|$ or $\rho(0)=|\bar{1}\rangle\langle \bar{1}|$). Since the scheme is intended to protect an unknown codeword, we are interested in its worst-case performance; we will therefore use $C_{000,000}$ as a lower bound on the codeword fidelity. Using the symmetry with respect to permutations of the different system-bath pairs of qubits and the Hermiticity of the density matrix, we can reduce the description of the evolution to a system of equations for only $13$ of the $64$ coefficients. (In fact, $12$ coefficients are sufficient if we invoke the normalization condition $\textrm{Tr}\rho=1$, but we have found it more convenient to work with $13$.) The equations are linear, and we write them as a single 13-dimensional vector equation: \begin{equation} {\tiny \setcounter{MaxMatrixCols}{13} \frac{d}{dt}\begin{bmatrix} C_{000,000}\\ C_{100,000}\\ C_{110,000}\\ C_{100,010}\\ C_{100,100}\\ C_{110,001}\\ C_{111,000}\\ C_{110,100}\\ C_{110,110}\\ C_{110,011}\\ C_{111,100}\\ C_{111,110}\\ C_{111,111} \end{bmatrix}=\gamma \begin{bmatrix} 0&-6&0&0&3R&0&0&0&0&0&0&0&0\\ 1&-R&-2&-2&-1&0&0&0&0&0&0&0&0\\ 0&2&-R&0&0&-1&-1&-2&0&0&0&0&0\\ 0&2&0&-R&0&-2&0&-2&0&0&0&0&0\\ 0&2&0&0&-R&0&0&-4&0&0&0&0&0\\ 0&0&1&2&0&-R&0&0&0&-2&-1&0&0\\ 0&0&3&0&0&-3R&0&0&0&0&-3&0&0\\ 0&0&1&1&1&0&0&-R&-1&-1&-1&0&0\\ 0&0&0&0&0&0&0&4&-R&0&0&-2&0\\ 0&0&0&0&0&2&0&2&0&-R&0&-2&0\\ 0&0&0&0&0&1&1&2&0&0&-R&-2&0\\ 0&0&0&0&0&0&0&0&1&2&2&-R&-1\\ 0&0&0&0&0&0&0&0&3R&0&0&6&0 \end{bmatrix}\cdot \begin{bmatrix} C_{000,000}\\ C_{100,000}\\ C_{110,000}\\ C_{100,010}\\ C_{100,100}\\ C_{110,001}\\ C_{111,000}\\ C_{110,100}\\ C_{110,110}\\ C_{110,011}\\ C_{111,100}\\ C_{111,110}\\ C_{111,111} \end{bmatrix}} \label{NMsystem1} \end{equation} where $R=\kappa/\gamma$. Each nonzero component in this matrix represents an allowed transition process for the quantum states; these transitions can be driven either by the decoherence process or the continuous error-correction process. We plot these allowed transitions in Fig.~2. \begin{figure} \caption{These are the allowed transitions between the different components of the system (\ref{NMsystem1}) and their rates, arising from both the decoherence (bit-flip) process (with rate $\gamma$) and the continuous error-correction process (with rate $\kappa$). Online, the transitions due to decoherence are black, and the transitions due to error correction are red.} \label{fig2} \end{figure} We can use the symmetries of the process to recover the 64 coefficients of the full state. Each of the 13 coefficients represents a set of coefficients having the same number of $1$s on the left and the same number of $1$s on the right, as well as the same number of places which have $1$ on both sides. All such coefficients are equal at all times. For example, the coefficient $C_{110,011}$ is equal to all coefficients with two $1$s on the left, two $1$s on the right and exactly one place with $1$ on both sides; there are exactly six such coefficients: \[ C_{110,011} = C_{110,101} = C_{101,011} = C_{101,110} = C_{011,110} = C_{011,101} . \] In determining the transfer rate from one coefficient to another in Fig.~2, one has to take into account the number of different coefficients of the first type which can make a transition to a coefficient of the second type of order $dt$ according to Eq. \eqref{NMerrorcorrectionequation}. The sign of the flow is determined from the phases in front of the coefficients in Eq. 
\eqref{fullDM}. The eigenvalues of the matrix in Eq. \eqref{NMsystem1} up to the two lowest orders in $1/\kappa$ are presented in Table I.
\begin{table}[htbp]
\caption{Eigenvalues of the matrix in Eq. \eqref{NMsystem1}}
\begin{center}
\begin{tabular}{|c|}
\hline
Eigenvalues \\ \hline
$\lambda_0 = 0$ \\ \hline
$\lambda_{1,2} = -\kappa$ \\ \hline
$\lambda_{3,4} = - \kappa\pm i 2\gamma$ \\ \hline
$\lambda_{5,6} = - \kappa \pm i 4\gamma$ \\ \hline
$\lambda_{7,8} = -\kappa\pm i(\sqrt{13}+3)\gamma+\textit{O}(1/\kappa)$ \\ \hline
$\lambda_{9,10} = -\kappa\pm i(\sqrt{13}-3)\gamma+\textit{O}(1/\kappa)$ \\ \hline
$\lambda_{11,12} = \pm i(24/R^2)\gamma - (144/R^3) \gamma + \textit{O}(1/\kappa^4)$ \\ \hline
\end{tabular}
\end{center}
\label{eigenvalue_table}
\end{table}
Obviously all eigenvalues except the first one and the last two describe fast decays with rates $\sim \kappa$. They correspond to terms in the solution which will vanish quickly after the beginning of the evolution. The eigenvalue $0$ corresponds to the asymptotic ($t\rightarrow \infty$) solution, since all other terms will eventually decay. The last two eigenvalues are those that play the main role in the evolution on a time scale $t\gg\frac{1}{\kappa}$. We see that on such a time scale, the solution will contain an oscillation with an angular frequency approximately equal to $(24/R^2)\gamma$ which is damped by a decay factor with a rate of approximately $(144/R^3)\gamma$. In Fig.~3 we have plotted the codeword fidelity $C_{000,000}(t)$ as a function of the dimensionless parameter $\gamma t$ for $R=100$. The graph indeed exhibits this type of behavior, except for very short times after the beginning ($\gamma t \sim 0.1$), where one can see a fast but small-magnitude decay (Fig. 4). The maximum magnitude of this quickly decaying term obviously decreases with $R$, since in the limit of $R\rightarrow \infty$ the fidelity should remain constantly equal to $1$.
\begin{figure}
\caption{Long-time behavior of the three-qubit system with bit-flip noise and continuous error correction. The ratio of correction rate to decoherence rate is $R=\kappa/\gamma=100$.}
\label{fig3}
\end{figure}
\begin{figure}
\caption{Short-time behavior of the three-qubit system with bit-flip noise and continuous error correction. The ratio of correction rate to decoherence rate is $R=\kappa/\gamma=100$.}
\label{fig4}
\end{figure}
From the form of the eigenvalues one can see that as $R$ increases, the frequency of the main oscillation decreases as $1/R^2$ while the rate of decay decreases faster, as $1/R^3$. Thus in the limit $R\rightarrow \infty$, the evolution approaches an oscillation with an angular frequency $(24/R^2)\gamma$. (We formulate this statement more rigorously below.) This is the same type of evolution as that of a single qubit interacting with its environment, but the coupling constant is effectively reduced by a factor of $R^2/12$. While the coupling constant serves to characterize the decoherence process in this particular case, this is not valid in general. To handle the more general situation, we propose to use the instantaneous rate of decrease of the codeword fidelity $F_{cw}$ as a measure of the effect of decoherence:
\begin{equation}
\Lambda(F_{cw}(t)) = -\frac{dF_{cw}(t)}{dt}. \label{errorrate}
\end{equation}
(In the present case, $F_{cw}=C_{000,000}$.)
This quantity does not coincide with the decoherence rate in the Markovian case (which can be defined naturally from the Lindblad equation), but it is a good estimate of the rate of loss of fidelity and can be used for any decoherence model. From now on we will refer to it simply as an error rate, but we note that there are other possible definitions of instantaneous error rate suitable for non-Markovian decoherence, which in general may depend on the kind of errors they describe. Since the goal of error correction is to preserve the codeword fidelity, the quantity \eqref{errorrate} is a useful indicator for the performance of a given scheme. Note that $\Lambda(F_{cw})$ is a function of the codeword fidelity and therefore it makes sense to use it for a comparison between different cases only for identical values of $F_{cw}$. For our example, the fact that the coupling constant is effectively reduced approximately $R^2/12$ times implies that the error rate for a given value of $F_{cw}$ is also reduced $R^2/12$ times. Similarly, the reduction of $\lambda$ by the factor $r/6$ in the Markovian case implies a reduction of $\Lambda$ by the same factor. We see that the effective reduction of the error rate increases quadratically with $\kappa$ in the non-Markovian case, whereas it increases only linearly with $\kappa$ in the Markovian case.

Now let us rigorously derive the approximate solution to this model of non-Markovian decoherence with continuous error correction. Assuming that $\gamma \ll \kappa$ (or equivalently, $R\gg1$), the superoperator driving the evolution of the system during a time step $\delta t$ can be written as
\begin{eqnarray}
e^{\mathcal{L}\delta t}&=&e^{\mathcal{L}_{\kappa}\delta t}+\overset{\delta t}{\underset{0}{\int}}dt' e^{\mathcal{L}_{\kappa}(\delta t-t')}\mathcal{L}_{\gamma}e^{\mathcal{L}_{\kappa}t'} +\overset{\delta t}{\underset{0}{\int}}dt'\overset{\delta t}{\underset{t'}{\int}}dt''e^{\mathcal{L}_{\kappa}(\delta t-t'')}\mathcal{L}_{\gamma}e^{\mathcal{L}_{\kappa}(t''-t')}\mathcal{L}_{\gamma}e^{\mathcal{L}_{\kappa}t'}+\notag\\
&+&\overset{\delta t}{\underset{0}{\int}}dt'\overset{\delta t}{\underset{t'}{\int}}dt''\overset{\delta t}{\underset{t''}{\int}}dt'''e^{\mathcal{L}_{\kappa}(\delta t-t''')}\mathcal{L}_{\gamma}e^{\mathcal{L}_{\kappa}(t'''-t'')} \mathcal{L}_{\gamma}e^{\mathcal{L}_{\kappa}(t''-t')} \mathcal{L}_{\gamma}e^{\mathcal{L}_{\kappa}t'}+...
\label{perturbation}
\end{eqnarray}
We have denoted the Liouvillian by $\mathcal{L}=\mathcal{L}_{\gamma}+\mathcal{L}_{\kappa}$, where $\mathcal{L}_{\kappa}\rho=\kappa\Gamma(\rho)$, and $\mathcal{L}_{\gamma}\rho=-i[H,\rho]$. Let $\gamma \delta t \ll 1 \ll \kappa \delta t $. We will derive an approximate differential equation for the evolution of $\rho(t)$ by looking at the terms of order $\delta t$ in the change of $\rho$ according to Eq. \eqref{perturbation}. When $\kappa=0$, we have $d\rho/dt = \mathcal{L}_{\gamma}\rho$, so the effect of $\mathcal{L}_{\gamma}$ on the state of the system can be seen from Eq. \eqref{NMsystem1} with $\kappa$ taken equal to $0$.
By the action of $\exp({\mathcal{L}_{\kappa} t})$, the different terms of the density matrix transform as follows: $\varrho_{000,000},\varrho_{111,000},\varrho_{111,111}$ remain unchanged, $\varrho_{100,100}\rightarrow e^{-\kappa t}\varrho_{100,100}+(1-e^{-\kappa t})\varrho_{000,000}$, $\varrho_{110,110}\rightarrow e^{-\kappa t}\varrho_{110,110}+(1-e^{-\kappa t})\varrho_{111,111}$, $\varrho_{110,001}\rightarrow e^{-\kappa t}\varrho_{110,001}-(1-e^{-\kappa t})\varrho_{111,000}$, and all other terms are changed as $\varrho\rightarrow e^{-\kappa t} \varrho$. Since $\kappa \delta t \gg 1$, we will ignore terms of order $e^{-\kappa \delta t}$. But from Eq. \eqref{perturbation} it can be seen that all terms except $\varrho_{000,000},\varrho_{111,000},\varrho_{000,111},\varrho_{111,111}$ will get multiplied by the factor $e^{-\kappa \delta t}$ by the action of $\exp({\mathcal{L}_{\kappa}\delta t})$ in Eq. \eqref{perturbation}. The integrals in Eq. \eqref{perturbation} also yield negligible factors, since every integral either gives rise to a factor of order $\delta t$ when the integration variable is trivially integrated, or a factor of $1/\kappa$ when the variable participates nontrivially in the exponent. Therefore, in the above approximation these terms of the density matrix can be neglected, which amounts to an effective evolution entirely within the code space. According to Eq. \eqref{NMsystem1}, the terms $\varrho_{000,000},\varrho_{111,000},\varrho_{111,111}$ can couple to each other only by a triple or higher application of $\mathcal{L}_{\gamma}$. This means that if we consider the expansion up to the lowest nontrivial order in $\gamma$, we only need to look at the triple integral in Eq. \eqref{perturbation}. Let us consider the effect of $\exp({\mathcal{L}\delta t})$ on $C_{000,000}$. Any change can come directly only from $\varrho_{111,000}$ and $\varrho_{000,111}$. The first exponent $e^{\mathcal{L}_{\kappa}t'}$ acts on these terms as the identity. Under the action of the first operator $\mathcal{L}_{\gamma}$ each of these two terms can transform to six terms that can eventually be transformed to $\varrho_{000,000}$. They are $\varrho_{110,000}$, $\varrho_{101,000}$, $\varrho_{011,000}$, $\varrho_{111,100}$, $\varrho_{111,010}$, $\varrho_{111,001}$, and $\varrho_{000,110}$, $\varrho_{000,101}$, $\varrho_{000,011}$, $\varrho_{100,111}$, $\varrho_{010,111}$, $\varrho_{001,111}$, with appropriate factors. The action of the second exponent is to multiply each of these new terms by $e^{-\kappa(t''-t')}$. After the action of the second $\mathcal{L}_{\gamma}$, the action of the third exponent on the relevant resultant terms will be again to multiply them by a factor $e^{-\kappa(t'''-t'')}$. Thus the second and the third exponents yield a net factor of $e^{-\kappa(t'''-t')}$. After the second and the third $\mathcal{L}_{\gamma}$, the relevant terms that we get are $\varrho_{000,000}$ and $\varrho_{100,100}$, $\varrho_{010,010}$, $\varrho_{001,001}$, each with a corresponding factor. Finally, the last exponent acts as the identity on $\varrho_{000,000}$ and transforms each of the terms $\varrho_{100,100}$, $\varrho_{010,010}$, $\varrho_{001,001}$ into $(1-e^{-\kappa(\delta t - t''')})\varrho_{000,000}$. 
Counting the number of different terms that arise at each step, and taking into account the factors that accompany them, we obtain:
\begin{eqnarray}
C_{000,000} &\rightarrow& C_{000,000}+\overset{\delta t}{\underset{0}{\int}}dt'\overset{\delta t}{\underset{t'}{\int}}dt''\overset{\delta t}{\underset{t''}{\int}}dt''' \gamma^{3}(24e^{-\kappa(t'''-t')}-36e^{-\kappa(\delta t-t')})C_{111,000}+\cdots\notag \\
&\approx& C_{000,000}+C_{111,000}\frac{24}{R^2}\gamma \delta t+\textit{O}(\delta t^2).
\end{eqnarray}
Using that $C_{000,000}+C_{111,111}\approx 1$, in a similar way one obtains
\begin{equation}
C_{111,000}\rightarrow C_{111,000}-(2C_{000,000}-1)\frac{12}{R^2}\gamma \delta t+\textit{O}(\delta t^2).
\end{equation}
For times much larger than $\delta t$, we can write the approximate differential equations
\begin{gather}
\frac{d C_{000,000}}{dt}=\frac{24}{R^2}\gamma C_{111,000},\notag\\
\frac{d C_{111,000}}{dt}=-\frac{12}{R^2}\gamma (2C_{000,000}-1).\label{approxeqn}
\end{gather}
Comparing with Eq. \eqref{sqe}, we see that the encoded qubit undergoes approximately the same type of evolution as that of a single qubit without error correction, but the coupling constant is effectively decreased $R^2/12$ times. The solution of Eq. \eqref{approxeqn} yields for the codeword fidelity
\begin{equation}
C_{000,000}(t)=\frac{1+\cos (\frac{24}{R^2}\gamma t)}{2} \label{firstapproxsoln}.
\end{equation}
This solution is valid only with precision $\textit{O}(1/R)$ for times $\gamma t \ll R^3$. This is because we ignored terms whose magnitudes are always of order $\textit{O}(1/R)$ and ignored changes of order $\textit{O}(\gamma\delta t/R^3)$ per time step $\delta t$ in the other terms. The latter changes could accumulate with time and become of the order of unity for times $\gamma t\approx R^3$, which is why the approximate solution is invalid for such times. In fact, if one carries out the expansion \eqref{perturbation} to fourth order in $\gamma$, one obtains the approximate equations
\begin{gather}
\frac{d C_{000,000}}{dt}=\frac{24}{R^2}\gamma C_{111,000}-\frac{72}{R^3}\gamma (2 C_{000,000}-1),\notag\\
\frac{d C_{111,000}}{dt}=-\frac{12}{R^2}\gamma (2C_{000,000}-1) -\frac{144}{R^3}\gamma C_{111,000},\label{approxeqn2}
\end{gather}
which yield for the fidelity
\begin{equation}
C_{000,000}(t)=\frac{1+e^{-144\gamma t/R^3}\cos (24\gamma t/R^2)}{2}.
\end{equation}
We see that in addition to the effective error process which is of the same type as that of a single qubit, there is an extra Markovian bit-flip process with rate $72\gamma/R^3$. This Markovian behavior is due to the Markovian character of our error-correcting procedure which, at this level of approximation, is responsible for the direct transfer of weight between $\varrho_{000,000}$ and $\varrho_{111,111}$, and between $\varrho_{111,000}$ and $\varrho_{000,111}$. The exponential factor explicitly reveals the range of applicability of the solution \eqref{firstapproxsoln}: with precision $\textit{O}(1/R)$, it is valid only for times $\gamma t$ of up to order $R^2$. For times of the order of $R^3$, the decay becomes significant and cannot be neglected. The exponential factor may also play an important role for short times of up to order $R$, where its contribution is bigger than that of the cosine. But in the latter regime the difference between the cosine and the exponent is of order $\textit{O}(1/R^2)$, which is negligible for the precision that we consider.
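The approximation can be compared with the full 13-dimensional system. The sketch below (our own check, assuming NumPy and SciPy) enters the matrix of Eq. \eqref{NMsystem1}, confirms that its slowest eigenvalues are approximately $0$ and $\pm i(24/R^{2})\gamma-(144/R^{3})\gamma$, and compares the exact $C_{000,000}(t)$ with the closed form above; the agreement is up to corrections of order $1/R$:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

gamma, R = 1.0, 30.0
M = gamma * np.array([
    [0, -6,  0,  0, 3*R,  0,  0,  0,  0,  0,  0,  0,  0],
    [1, -R, -2, -2, -1,   0,  0,  0,  0,  0,  0,  0,  0],
    [0,  2, -R,  0,  0,  -1, -1, -2,  0,  0,  0,  0,  0],
    [0,  2,  0, -R,  0,  -2,  0, -2,  0,  0,  0,  0,  0],
    [0,  2,  0,  0, -R,   0,  0, -4,  0,  0,  0,  0,  0],
    [0,  0,  1,  2,  0,  -R,  0,  0,  0, -2, -1,  0,  0],
    [0,  0,  3,  0,  0, -3*R, 0,  0,  0,  0, -3,  0,  0],
    [0,  0,  1,  1,  1,   0,  0, -R, -1, -1, -1,  0,  0],
    [0,  0,  0,  0,  0,   0,  0,  4, -R,  0,  0, -2,  0],
    [0,  0,  0,  0,  0,   2,  0,  2,  0, -R,  0, -2,  0],
    [0,  0,  0,  0,  0,   1,  1,  2,  0,  0, -R, -2,  0],
    [0,  0,  0,  0,  0,   0,  0,  0,  1,  2,  2, -R, -1],
    [0,  0,  0,  0,  0,   0,  0,  0, 3*R, 0,  0,  6,  0]])

eig = np.linalg.eigvals(M)
slow = eig[np.argsort(np.abs(eig))[:3]]       # 0 and -144/R^3 +- i 24/R^2
print(np.sort_complex(slow))

C0 = np.zeros(13); C0[0] = 1.0                # start inside the code space
for t in (np.pi*R**2/(48*gamma), np.pi*R**2/(24*gamma)):  # quarter/half period
    exact = (expm(M*t) @ C0)[0]
    approx = 0.5*(1 + np.exp(-144*gamma*t/R**3)*np.cos(24*gamma*t/R**2))
    print(t, exact, approx)
\end{verbatim}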
In general, the effective evolution that one obtains in the limit of high error-correction rate does not have to approach a form identical to that of a single decohering qubit. The reason we obtain such behavior here is that for this particular model the lowest order of uncorrectable errors that transform the state within the code space is 3, and three-qubit errors have the form of an encoded $X$ operation. Furthermore, the symmetry of the problem ensured an identical evolution of the three qubits in the code. For general stabilizer codes, the errors that a single qubit can undergo are not limited to bit flips only. Therefore, different combinations of single-qubit errors may lead to different types of lowest-order uncorrectable errors inside the code space, none of which in principle has to represent an encoded version of the single-qubit operations that compose it. In addition, if the noise is different for the different qubits, there is no unique single-qubit error model to compare to. Nevertheless, we will show that with regard to the effective decrease in the error rate, general stabilizer codes will exhibit the same qualitative performance.

\section*{5.4 \hspace{2pt} Relation to the Zeno regime}
\addcontentsline{toc}{section}{5.4 \hspace{0.15cm} Relation to the Zeno regime}

The effective continuous evolution \eqref{approxeqn} was derived under the assumption that $\gamma \delta t \ll 1 \ll \kappa \delta t $. The first inequality implies that $\delta t$ can be considered within the Zeno time scale of the system's evolution without error correction. On the other hand, from the relation between $\kappa$ and $\tau_c$ in \eqref{tauc} we see that $\tau_c\ll\delta t$. Therefore, the time for implementing a weak error-correcting operation has to be sufficiently small so that on the Zeno time scale the error-correction procedure can be described approximately as a continuous Markovian process. This suggests a way of understanding the quadratic enhancement in the non-Markovian case based on the properties of the Zeno regime. Let us consider again the single-qubit code from Section 5.2, but this time let the error model be any Hamiltonian-driven process. We assume that the qubit is initially in the state $|0\rangle$, i.e., the state of the system including the bath has the form $\rho(0)=|0\rangle \langle 0|\otimes\rho_B(0)$. For times smaller than the Zeno time $\delta t_Z$, the evolution of the fidelity without error correction can be described by Eq. \eqref{Zeno}. Equation \eqref{Zeno} naturally defines the Zeno regime in terms of $\alpha$ itself:
\begin{equation}
\alpha\geq \alpha_Z \equiv 1-C\delta t_Z^2.
\end{equation}
For a single time step $\Delta t \ll \delta t_Z$, the change in the fidelity is
\begin{equation}
\alpha\rightarrow \alpha-2\sqrt{C}\sqrt{1-\alpha}\Delta t+\textit{O}(\Delta t^2).\label{singlestepdecoh}
\end{equation}
On the other hand, the effect of error correction during a time step $\Delta t$ is
\begin{equation}
\alpha \rightarrow \alpha+\kappa (1-\alpha)\Delta t +\textit{O}(\Delta t^2),\label{singlestepcorr}
\end{equation}
i.e., it tends to oppose the effect of decoherence. If both processes happen simultaneously, the effect of decoherence will still be of the form \eqref{singlestepdecoh}, but the coefficient $C$ may vary with time. This is because the presence of error correction opposes the decrease of the fidelity and consequently can lead to an increase in the time for which the fidelity remains within the Zeno range.
If this time is sufficiently long, the state of the environment could change significantly under the action of the Hamiltonian, thus giving rise to a different value for $C$ in Eq. \eqref{singlestepdecoh} according to Eq. \eqref{C}. Note that the strength of the Hamiltonian puts a limit on $C$, and therefore this constant can vary only within a certain range. The equilibrium fidelity $\alpha_*^{\rm NM}$ that we obtained for the error model in Section 5.2 can be thought of as the point at which the effects of error and error correction cancel out. For a general model, where the coefficient $C$ may vary with time, this leads to a quasi-stationary equilibrium. Balancing the single-step loss in Eq.~\eqref{singlestepdecoh} against the single-step gain in Eq.~\eqref{singlestepcorr}, i.e., setting $2\sqrt{C}\sqrt{1-\alpha}=\kappa(1-\alpha)$, one obtains the equilibrium fidelity \begin{equation} \alpha_*^{\rm NM}\approx 1-\frac{4C}{\kappa^2}. \end{equation} In agreement with what we obtained in Section 5.2, the equilibrium fidelity differs from $1$ by a quantity proportional to $1/\kappa^2$. This quantity is generally quasi-stationary and can vary within a limited range. If one assumes a Markovian error model, for short times the fidelity changes linearly with time, which leads to $1-\alpha_*^{\rm M}\propto 1/\kappa$. Thus the difference can be attributed to the existence of a Zeno regime in the non-Markovian case. But what happens in the case of non-trivial codes? As we saw, there the state decays inside the code space and therefore can be highly correlated with the environment. Can we talk about a Zeno regime then? It turns out that the answer is positive. If each qubit undergoes an independent error process, then up to first order in $\Delta t$ the Hamiltonian cannot map terms in the code space to other terms without detectable errors. (This includes both terms in the code space and terms from the hidden part, like $\varrho_{111,000}$ in the example of the bit-flip code.) It can only transform terms from the code space into traceless terms from the hidden part which correspond to single-qubit errors (like $\varrho_{100,000}$ in the same example). Let $|\bar{0}\rangle$, $|\bar{1}\rangle$ be the two logical codewords and $|\psi_i \rangle$ be an orthonormal basis that spans the space of all single-qubit errors. Then in the basis $|\bar{0}\rangle$, $|\bar{1}\rangle$, $|\psi_i \rangle$, all the terms that can be coupled directly to terms inside the code space are $|\bar{0}\rangle \langle \psi_i|$, $|\psi_i\rangle \langle \bar{0}|$, $|\bar{1}\rangle \langle \psi_i|$, $|\psi_i\rangle \langle \bar{1}|$. From the condition of positivity of the density matrix, one can show that the coefficients in front of these terms are at most $\sqrt{\alpha(1-\alpha)}$ in magnitude, where $\alpha$ is the code-space fidelity. This implies that for small enough $1-\alpha$, the change in the code-space fidelity is of the type \eqref{singlestepdecoh}, which is Zeno-like behavior. Then using only the properties of the Zeno behavior as we did above, we can conclude that the weight outside the code space will be kept at a quasi-stationary value of order $1/\kappa^2$. Since uncorrectable errors enter the code space through the action of the error-correction procedure, which misinterprets some multi-qubit errors in the error space, the effective error rate will be limited by a factor proportional to the weight in the error space. That is, this leads to an effective decrease of the error rate by at least a factor proportional to $1/\kappa^2$.
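The balance just described can be illustrated with a minimal numerical sketch (in Python with NumPy), in which the coefficient $C$ is kept constant and the values $C=1$, $\kappa=20$ are assumed purely for illustration; alternating the two single-step changes drives the fidelity to the quasi-stationary value $1-4C/\kappa^2$.
\begin{verbatim}
import numpy as np

C, kappa, dt = 1.0, 20.0, 1e-3   # illustrative values only
alpha = 0.9                      # initial fidelity inside the Zeno range

for _ in range(20000):
    # decoherence step (singlestepdecoh) plus error-correction step (singlestepcorr)
    alpha += -2.0 * np.sqrt(C) * np.sqrt(1.0 - alpha) * dt + kappa * (1.0 - alpha) * dt

print(alpha, 1.0 - 4.0 * C / kappa**2)   # settles at the equilibrium fidelity
\end{verbatim}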
The accumulation of uncorrectable errors in the Markovian case is similar, except that in this case there is a direct transfer of errors between the code space and the visible part of the error space. In both cases, the error rate is effectively reduced by a factor which is roughly proportional to the inverse of the weight in the error space, and therefore the difference in the performance comes from the difference in this weight. The quasi-stationary equilibrium value of the code-space fidelity establishes a quasi-stationary flow between the code space and the error space. One can think that this flow effectively takes non-erroneous weight from the code space, transports it through the error space where it accumulates uncorrectable errors, and brings it back into the code space. Thus by minimizing the weight outside the code space, error correction creates a ``bottleneck'' which reduces the rate at which uncorrectable errors accumulate. Finally, a brief remark about the resources needed for quadratic reduction of the error rate. As pointed out above, two conditions are involved: one concerns the rate of error correction; the other concerns the time resolution of the weak error-correcting operations. Both of these quantities must be sufficiently large. There is, however, an interplay between the two, which involves the strength of the interaction required to implement the weak error-correcting map \eqref{wm}. Let us imagine that the weak map is implemented by making the system interact weakly with an ancilla in a given state, after which the ancilla is discarded. The error-correction procedure consists of a sequence of such interactions, and can be thought of as a cooling process which takes away the entropy accumulated in the system as a result of correctable errors. If the time for which a single ancilla interacts with the system is $\tau_c$, one can verify that the parameter $\epsilon$ in Eq. \eqref{wm} would be proportional to $g^2\tau_c^2$, where $g$ is the coupling strength between the system and the ancilla. From Eq. \eqref{tauc} we then obtain that \begin{equation} \kappa \propto g^2\tau_c. \end{equation} The two parameters that can be controlled are the interaction time and the interaction strength, and they determine the error-correction rate. Thus if $g$ is kept constant, a decrease in the interaction time $\tau_c$ leads to a proportional decrease in $\kappa$, which may be undesirable. In order to achieve a good working regime, one may need to adjust both $\tau_c$ and $g$. But it has to be pointed out that in some situations decreasing $\tau_c$ alone can prove advantageous, if it leads to a time resolution revealing the non-Markovian character of an error model which was previously described as Markovian. The quadratic enhancement of the performance as a function of $\kappa$ may compensate the decrease in $\kappa$, thus leading to a seemingly paradoxical result: better performance with a lower error-correction rate. \section*{5.5 \hspace{2pt} Summary and outlook} \addcontentsline{toc}{section}{5.5 \hspace{0.15cm} Summary and outlook} In this chapter we studied the performance of a particular continuous quantum error-correction scheme for non-Markovian errors. We analyzed the evolution of the single-qubit code and the three-qubit bit-flip code in the presence of continuous error correction for a simple non-Markovian bit-flip error model. This enabled us to understand the workings of the error-correction scheme, and the mechanism whereby uncorrectable errors accumulate. 
The fidelity of the state with the code space in both examples quickly reaches an equilibrium value, which can be made arbitrarily close to $1$ by a sufficiently high rate of error correction. The weight of the density matrix outside the code space scales as $1/\kappa$ in the Markovian case, while it scales as $1/\kappa^2$ in the non-Markovian case. Correspondingly, the rate at which uncorrectable errors accumulate in the three-qubit code is proportional to $1/\kappa$ in the Markovian case, and to $1/\kappa^2$ in the non-Markovian case. These differences have the same cause, since the equilibrium weight in the error space is closely related to the rate of uncorrectable error accumulation. The quadratic difference in the error weight between the Markovian and non-Markovian cases can be attributed to the existence of a Zeno regime in the non-Markovian case. Regardless of the correlations between the density matrix inside the code space and the environment, if the lowest-order errors are correctable by the code, there exists a Zeno regime in the evolution of the code-space fidelity. The effective reduction of the error rate with the rate of error correction for non-Markovian error models depends crucially on the assumption that the time resolution of the continuous error correction is much shorter than the Zeno time scale of the evolution {\it without} error correction. This suggests that decreasing the time for a single (infinitesimal) error-correcting operation can lead to an increase in the performance of the scheme, even if the average error-correction rate goes down. While here we have only considered codes for the correction of single-qubit errors, our results can be extended to other types of codes and errors as well. As long as the error process only produces errors correctable by the code to lowest order, an argument analogous to the one given here shows that a Zeno regime will exist, which leads to an enhancement in the error-correction performance. Unfortunately, it is very difficult to describe the evolution of a system with a continuous correction protocol based on a general error-correction code and subject to general non-Markovian interactions with the environment. This is especially true if one must include the evolution of a complicated environment in the description, as would be necessary in general. A more practical step in this direction might be to find an effective description for the evolution of the reduced density matrix of the system subject to decoherence plus error correction, using projection techniques like the Nakajima-Zwanzig or the TCL master equations. Since one is usually interested in the evolution during initial times before the codeword fidelity decreases significantly, a perturbative approach could be useful. This is a subject for further research. \section*{5.6 \hspace{2pt} Appendix: Implementation of the quantum-jump error-correcting process via weak measurements and weak unitary operations} \addcontentsline{toc}{section}{5.6 \hspace{0.15cm} Appendix: Implementation of the quantum-jump error-correcting process via weak measurements and weak unitary operations} Here we show how the weak CPTP map \eqref{wm} for the bit-flip code can be implemented using weak measurements and weak unitary operations. We also present a similar scheme for codes that correct arbitrary single-qubit errors, which yields a weak map different from \eqref{wm} but one that also results in the strong error-correcting map $\Phi(\rho)$ when exponentiated.
To introduce our construction, we start again with the single-qubit code with stabilizer $\langle Z \rangle$. \subsection*{5.6.1 \hspace{2pt} The single-qubit model} \addcontentsline{toc}{subsection}{5.6.1 \hspace{0.15cm} The single-qubit model} Consider the completely positive map corresponding to the strong error-correcting operation for the single-qubit code: \begin{equation} \Phi(\rho)= X |1\rangle \langle 1| \rho |1\rangle \langle 1| X + |0\rangle \langle 0| \rho |0\rangle \langle 0| =|0\rangle \langle 1| \rho |1\rangle \langle 0|+|0\rangle \langle 0| \rho |0\rangle \langle 0|.\label{singlequbitstrongmap} \end{equation} Observe that this transformation can also be written as \begin{equation} \Phi(\rho)= |0\rangle \langle +| \rho |+\rangle \langle 0|+|0\rangle \langle -| \rho |-\rangle \langle 0|= ZR |+\rangle \langle +| \rho |+\rangle \langle +|RZ + XR|-\rangle \langle -| \rho |-\rangle \langle -|RX, \end{equation} where $|\pm\rangle = (|0\rangle\pm |1\rangle)/\sqrt{2}$ and \begin{equation} R=\frac{1}{\sqrt{2}} \begin{pmatrix} 1&1\\ 1&-1\\ \end{pmatrix}\label{Hadamard} \end{equation} is the Hadamard gate. Therefore the same error-correcting operation can be implemented as a measurement in the $|\pm\rangle$ basis (measurement of the operator $X$), followed by a unitary conditioned on the outcome: if the outcome is '+', we apply $ZR$; if the outcome is '-', we apply $XR$. This choice of unitaries is not unique---for example, we could apply just $R$ instead of $ZR$ after outcome '+'. But this particular choice has a convenient geometric interpretation---the unitary $ZR$ corresponds to a rotation around the Y-axis by an angle $\pi/2$: $ZR = e^{i\frac{\pi}{2}\frac{Y}{2}}$, and $XR$ corresponds to a rotation around the same axis by an angle $-\pi/2$: $XR = e^{-i\frac{\pi}{2}\frac{Y}{2}}$. A weak version of the above error-correcting operation can be constructed by taking the corresponding weak measurement of the operator $X$, followed by a weak rotation around the Y-axis, whose direction is conditioned on the outcome: \begin{equation} \begin{split} \rho \rightarrow \frac{I+i\epsilon'Y}{\sqrt {1+{\epsilon'}^2}}\sqrt{\frac{I+\epsilon X}{2}}\rho\sqrt{\frac{I+\epsilon X}{2}}\frac{I-i\epsilon'Y}{\sqrt {1+{\epsilon'}^2}}+ \\ +\frac{I-i\epsilon'Y}{\sqrt {1+{\epsilon'}^2}}\sqrt{\frac{I-\epsilon X}{2}}\rho\sqrt{\frac{I-\epsilon X}{2}}\frac{I+i\epsilon'Y}{\sqrt {1+{\epsilon'}^2}}.\label{singlequbitweakmap} \end{split} \end{equation} Here $\epsilon$ and $\epsilon'$ are small parameters. From the symmetry of this map it can be seen that if the map is applied to a state which lies on the Z-axis, the resultant state will still lie on the Z-axis. Whether the state will move towards $|0\rangle\langle 0|$ or towards $|1\rangle\langle 1|$ depends on the relation between $\epsilon$ and $\epsilon'$. Since our goal is to protect the state from drifting away from $|0\rangle\langle 0|$ due to bit-flip decoherence, we will assume that the state lies on the Z-axis in the northern hemisphere (although the transformation we will obtain works for any kind of decoherence where the state need not remain on the Z-axis). We would like, if possible, to choose the relation between the parameters $\epsilon$ and $\epsilon'$ in such a way that the effect of this map on any state on the Z-axis is to move the state towards $|0\rangle\langle 0|$. In order to calculate the effect of this map on a given state, it is convenient to write the state in the $|\pm\rangle$ basis.
For a state on the Z-axis, $\rho = \alpha |0\rangle\langle 0|+(1-\alpha) |1\rangle\langle 1|$, we have \begin{equation} \rho = \frac{1}{2}|+\rangle\langle +|+ \frac{1}{2}|-\rangle\langle -| + (2\alpha -1)\left(\frac{1}{2}|+\rangle\langle -|+ \frac{1}{2}|-\rangle\langle +|\right).\label{rhopm} \end{equation} For the action of our map on the state \eqref{rhopm} we obtain: \begin{equation} \rho \rightarrow \frac{1}{2}|+\rangle\langle +|+ \frac{1}{2}|-\rangle\langle -| + \frac{(1-{\epsilon'}^2)\sqrt{1-\epsilon^2}(2\alpha -1) + 2\epsilon\epsilon'}{1+{\epsilon'}^2}\left(\frac{1}{2}|+\rangle\langle -|+ \frac{1}{2}|-\rangle\langle +|\right).\label{transf} \end{equation} Thus we can think that upon this transformation the parameter $\alpha$ transforms to $\alpha'$, where \begin{equation} 2\alpha'-1 =\frac{(1-{\epsilon'}^2)\sqrt{1-\epsilon^2}(2\alpha -1) + 2\epsilon\epsilon'}{1+{\epsilon'}^2}.\label{alpha'} \end{equation} If it is possible to choose the relation between $\epsilon$ and $\epsilon'$ in such a way that $\alpha'\geq \alpha$ for every $0\leq \alpha \leq 1$, then clearly the state must remain invariant when $\alpha = 1$. Imposing this requirement, we obtain \begin{equation} \epsilon = \frac{2\epsilon'}{1+{\epsilon'}^2}, \end{equation} or equivalently \begin{equation} \epsilon'=\frac{1-\sqrt{1-\epsilon^2}}{\epsilon}.\label{varepsilon'} \end{equation} Substituting back in \eqref{alpha'}, we can express \begin{equation} \alpha'-\alpha = \frac{4{\epsilon'}^2}{(1+{\epsilon'}^2)^2}(1-\alpha)\geq 0. \end{equation} We see that the coefficient $\alpha$ (which is the fidelity of our state with $|0\rangle\langle 0|$) indeed increases after every application of our weak completely positive map (Fig.1). The amount by which it increases for fixed $\epsilon'$ depends on $\alpha$ and becomes smaller as $\alpha$ approaches 1. Since we will be taking the limit $\epsilon\rightarrow 0$, we can write Eq.~\eqref{varepsilon'} as \begin{equation} \epsilon'=\frac{\epsilon}{2}+\textit{O}(\epsilon^3). \end{equation} If we define the relation between the time step $\tau_c$ and $\epsilon$ as in Eq.~\eqref{tauc}, for the effect of the CPTP map \eqref{singlequbitweakmap} on an arbitrary state of the form $\rho = \alpha|0\rangle \langle 0 | +\beta |0\rangle \langle 1| + \beta^* |1\rangle \langle 0| + (1-\alpha)|1\rangle \langle 1|$, $\alpha\in R$, $\beta\in C$, we obtain \begin{gather} \alpha \rightarrow \alpha + (1-\alpha) \kappa \tau_c,\\ \beta \rightarrow \sqrt{1-\kappa \tau_c} \beta = \beta - \frac{1}{2}\kappa \beta \tau_c + O({\tau_c}^2). \end{gather} This is exactly the map \eqref{wm} for $\Phi(\rho)$ given by Eq.~\eqref{singlequbitstrongmap}. \subsection*{5.6.2 \hspace{2pt} The bit-flip code} \addcontentsline{toc}{subsection}{5.6.2 \hspace{0.15cm} The bit-flip code} While in the toy model from the previous section we had to protect a given state from errors, here we have to protect the whole subspace spanned by $|\overline{0}\rangle$ and $|\overline{1}\rangle$. This makes a geometric visualization of the problem significantly more difficult than in the previous case, which is why we will take a different approach. In the single-qubit model we saw how to protect a qubit in state $|0\rangle$ from bit-flip errors. Similarly we could protect a qubit in state $|1\rangle$; the only difference is that the weak unitaries following the two outcomes of the weak measurement of $X$ have to be exchanged. 
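The relations \eqref{alpha'}--\eqref{varepsilon'} can be checked by constructing the weak map \eqref{singlequbitweakmap} explicitly. In the following sketch (in Python with NumPy; the value $\epsilon=0.1$ is an arbitrary illustration), the map is applied to a state on the Z-axis and the resulting population of $|0\rangle\langle 0|$ is compared with $\alpha+4{\epsilon'}^2(1-\alpha)/(1+{\epsilon'}^2)^2$.
\begin{verbatim}
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])

eps = 0.1                                    # weak-measurement strength (illustrative)
epsp = (1 - np.sqrt(1 - eps**2)) / eps       # weak-rotation parameter, Eq. (varepsilon')

# sqrt((I +- eps X)/2), written in the |+>, |-> eigenbasis of X
Pp, Pm = (I + X) / 2, (I - X) / 2
Mp = np.sqrt((1 + eps) / 2) * Pp + np.sqrt((1 - eps) / 2) * Pm
Mm = np.sqrt((1 - eps) / 2) * Pp + np.sqrt((1 + eps) / 2) * Pm
Rp = (I + 1j * epsp * Y) / np.sqrt(1 + epsp**2)   # conditional weak rotations about Y
Rm = (I - 1j * epsp * Y) / np.sqrt(1 + epsp**2)

def weak_map(rho):
    """One application of the weak error-correcting map (singlequbitweakmap)."""
    return Rp @ Mp @ rho @ Mp @ Rp.conj().T + Rm @ Mm @ rho @ Mm @ Rm.conj().T

alpha = 0.7
rho = np.diag([alpha, 1 - alpha]).astype(complex)
rho2 = weak_map(rho)

predicted = alpha + 4 * epsp**2 / (1 + epsp**2)**2 * (1 - alpha)
print(np.trace(rho2).real, rho2[0, 0].real, predicted)   # trace 1; alpha' as predicted
\end{verbatim}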
For the three-qubit bit-flip code, every block of the code lies in the subspace spanned by the codewords $|000\rangle$ and $|111\rangle$, i.e., each qubit is in state $|0\rangle$ when the other two qubits are in state $|00\rangle$, or in state $|1\rangle$ when the other two qubits are in state $|11\rangle$. This correlation is what makes it possible for the code to correct single-qubit bit-flip errors without ever acquiring information about the actual state of the system. We propose to utilize this correlation in a three-qubit scheme which protects each qubit by applying to it the corresponding single-qubit scheme for either $|0\rangle$ or $|1\rangle$ depending on the value of the other two qubits. This, of course, has to be done without acquiring information about the encoded state. Just as in the single-qubit case, the scheme consists of weak measurements followed by weak unitaries conditioned on the outcomes of the measurements. For error correction on the first qubit, we propose the weak measurement with measurement operators \begin{equation} M^1_{\pm} = \sqrt{\frac{I{\pm}\epsilon X}{2}}\otimes (|00\rangle \langle 00| + |11\rangle \langle 11| )+\frac{I}{\sqrt{2}} \otimes(|01\rangle \langle 01| +|10\rangle \langle 10|), \end{equation} where $\sqrt{\frac{I{\pm}\epsilon X}{2}}$ are the same weak measurement operators that we used in \eqref{singlequbitweakmap}, acting on the first qubit. This measurement can be thought of as a weak measurement of the operator $X\otimes(|00\rangle\langle 00|+|11\rangle\langle 11|)$. In order to understand its effect better, consider the expansion of the density matrix of our system in the computational basis of the three qubits in a given block of the code. Assuming that the state begins inside the code space and that the system decoheres through single-qubit bit-flip channels, the density matrix at any time can be written as a linear combination of the following terms: $|000\rangle \langle 000|$, $|000\rangle \langle 111|$, $|111\rangle \langle 000|$, $|111\rangle \langle 111|$, $|100\rangle \langle 100|$, $|100\rangle \langle 011|$, $|011\rangle \langle 100|$, $|011\rangle \langle 011|$, $|010\rangle \langle 010|$, $|010\rangle \langle 101|$, $|101\rangle \langle 010|$, $|101\rangle \langle 101|$, $|001\rangle \langle 001|$, $|001\rangle \langle 110|$, $|110\rangle \langle 001|$, $|110\rangle \langle 110|$. For those terms in the expansion for which the second and third qubits are in the subspace spanned by $|00\rangle$ and $|11\rangle$, the effect of this measurement will be the same as the effect of a weak single-qubit measurement of $X$ on the first qubit. Those terms in which the second and third qubits are in the subspace spanned by $|01\rangle$ and $|10\rangle$ will not be affected by the measurement. This is because the three-qubit bit-flip code cannot distinguish multi-qubit errors from single-qubit errors; the subspaces corresponding to two- and three-qubit errors are the same as the subspaces corresponding to single-qubit or no errors. This is why, if the second and third qubits have different values, the error-correction scheme will assume that an error has occurred on one of these two qubits and will not apply any correction on the first qubit.
The unitary operation conditioned on the outcome of the measurement is \begin{equation} U^1_{\pm} =\frac{I\pm i\epsilon'Y}{\sqrt {1+{\epsilon'}^2}}\otimes|00\rangle \langle 00| +\frac{I\mp i\epsilon'Y}{\sqrt {1+{\epsilon'}^2}}\otimes|11\rangle \langle 11|+I\otimes (|01\rangle\langle 01| +|10\rangle\langle 10|). \end{equation} This is a weak unitary driven by the Hamiltonian $\pm Y\otimes(|00\rangle\langle00|-|11\rangle\langle 11|)$. Again, it is designed in such a way that those components of the density matrix which correspond to an error on the second or third qubits will undergo no transformation, while the terms for which the second and third qubits have the same value (these are the same terms that have undergone non-trivial transformation during the measurement) will undergo a rotation of the first qubit analogous to that from the single-qubit model. One can verify that the only terms that undergo non-trivial transformation after the completely positive map $\rho \rightarrow U^1_+M^1_+ \rho M^1_+{U^1_+}^{\dagger} +U^1_-M^1_- \rho M^1_-{U^1_-}^{\dagger}$ are: \begin{equation} \begin{split} |100\rangle\langle 100| \rightarrow (1-\kappa \tau_c) |100\rangle\langle 100| + \kappa \tau_c |000\rangle\langle 000|,\\ |100\rangle\langle 011| \rightarrow (1-\kappa \tau_c) |100\rangle\langle 011| + \kappa \tau_c |000\rangle\langle 111|,\\ |011\rangle\langle 100| \rightarrow (1-\kappa \tau_c) |011\rangle\langle 100| + \kappa \tau_c |111\rangle\langle 000|,\\ |011\rangle\langle 011| \rightarrow (1-\kappa \tau_c) |011\rangle\langle 011| + \kappa \tau_c |111\rangle\langle 111|.\\ |100\rangle\langle \overline{\phi}| \rightarrow (1-\frac{1}{2}\kappa \tau_c) |100\rangle\langle \overline{\phi}| ,\hspace{0.2cm}|\overline{\phi}\rangle\langle 100| \rightarrow (1-\frac{1}{2}\kappa \tau_c) |\overline{\phi}\rangle\langle 100|\\ |011\rangle\langle \overline{\phi}| \rightarrow (1-\frac{1}{2}\kappa \tau_c) |011\rangle\langle \overline{\phi}|,\hspace{0.2cm} |\overline{\phi}\rangle\langle 011| \rightarrow (1-\frac{1}{2}\kappa \tau_c) |\overline{\phi}\rangle\langle 011| \end{split} \end{equation} where $|\overline{\phi}\rangle$ is any state orthogonal to the subspace spanned by $|100\rangle$ and $|011\rangle$. We see that the effect of this operation on the terms that correspond to bit flip on the first qubit is to correct these terms by the same amount as in the single-qubit error-correction scheme. All other terms remain unchanged. If we write the state of the system at a given moment as \begin{gather} \rho = a\rho(0) + b_1X_1\rho(0)X_1 + b_2X_2\rho(0)X_2 + b_3X_3\rho(0)X_3+\\ +c_1X_2X_3\rho(0)X_2X_3 +c_2X_1X_3\rho(0)X_1X_3+c_3X_1X_2\rho(0)X_1X_2 + d X_1X_2X_3\rho(0)X_1X_2X_3, \nonumber \end{gather} where $\rho(0)$ is the initial state, then the effect of the above completely positive map is: \begin{equation} \begin{split} a \rightarrow a + b_1 4\kappa \tau_c,\hspace{0.2cm} b_1 \rightarrow b_1 - b_1 4\kappa \tau_c,\hspace{0.2cm} b_2 \rightarrow b_2,\hspace{0.2cm} b_3 \rightarrow b_3,\\ c_1 \rightarrow c_1 - c_1 4\kappa \tau_c,\hspace{0.2cm} c_2 \rightarrow c_2,\hspace{0.2cm} c_3 \rightarrow c_3,\hspace{0.2cm} d \rightarrow d + c_1 4\kappa \tau_c. \end{split} \end{equation} We apply the same correction ($\rho \rightarrow U^i_+M^i_+ \rho M^i_+{U^i_+}^{\dagger} +U^i_-M^i_- \rho M^i_-{U^i_-}^{\dagger}$) to each of the other two qubits ($i=2,3$) as well. 
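The first of the transformations listed above can be verified by building the three-qubit operators as explicit matrices. In the following sketch (in Python with NumPy; $\epsilon=0.1$ is illustrative, and identifying $\kappa\tau_c$ with $4{\epsilon'}^2/(1+{\epsilon'}^2)^2\approx\epsilon^2$ is an assumption carried over from the single-qubit analysis above), the map $\rho \rightarrow U^1_+M^1_+ \rho M^1_+{U^1_+}^{\dagger} +U^1_-M^1_- \rho M^1_-{U^1_-}^{\dagger}$ is applied to $|100\rangle\langle 100|$.
\begin{verbatim}
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])

eps = 0.1
epsp = (1 - np.sqrt(1 - eps**2)) / eps
Pp, Pm = (I2 + X) / 2, (I2 - X) / 2
Mp = np.sqrt((1 + eps) / 2) * Pp + np.sqrt((1 - eps) / 2) * Pm   # sqrt((I+eps X)/2)
Mm = np.sqrt((1 - eps) / 2) * Pp + np.sqrt((1 + eps) / 2) * Pm   # sqrt((I-eps X)/2)
Rp = (I2 + 1j * epsp * Y) / np.sqrt(1 + epsp**2)
Rm = (I2 - 1j * epsp * Y) / np.sqrt(1 + epsp**2)

P00 = np.zeros((4, 4), dtype=complex); P00[0, 0] = 1     # |00><00| on qubits 2,3
P11 = np.zeros((4, 4), dtype=complex); P11[3, 3] = 1     # |11><11| on qubits 2,3
Q = np.eye(4, dtype=complex) - P00 - P11                 # |01><01| + |10><10|

M1p = np.kron(Mp, P00 + P11) + np.kron(I2 / np.sqrt(2), Q)
M1m = np.kron(Mm, P00 + P11) + np.kron(I2 / np.sqrt(2), Q)
U1p = np.kron(Rp, P00) + np.kron(Rm, P11) + np.kron(I2, Q)
U1m = np.kron(Rm, P00) + np.kron(Rp, P11) + np.kron(I2, Q)

def correct_qubit1(rho):
    """Weak error correction acting on the first qubit of the bit-flip code."""
    return (U1p @ M1p @ rho @ M1p.conj().T @ U1p.conj().T +
            U1m @ M1m @ rho @ M1m.conj().T @ U1m.conj().T)

rho = np.zeros((8, 8), dtype=complex); rho[4, 4] = 1     # |100><100|
rho2 = correct_qubit1(rho)

kt = 4 * epsp**2 / (1 + epsp**2)**2                      # plays the role of kappa*tau_c
print(rho2[4, 4].real, 1 - kt)   # weight remaining on |100><100|
print(rho2[0, 0].real, kt)       # weight transferred to |000><000|
\end{verbatim}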
One can easily see that the effect of all three corrections (up to first order in $\Delta t$) is equivalent to the map \eqref{wm} with $\Phi(\rho)$ given in Eq.~\eqref{strongmap}. \subsection*{5.6.3 \hspace{2pt} General single-error-correcting stabilizer codes} \addcontentsline{toc}{subsection}{5.6.3 \hspace{0.15cm} General single-error-correcting stabilizer codes} We now proceed to generalizing this scheme to error-correcting codes that correct arbitrary single-qubit errors. A stabilizer code which is able to correct arbitrary single-qubit errors, has the property that a single-qubit $X$, $Y$ or $Z$ error on a state inside the code space, sends that state to a subspace orthogonal to the code space \cite{Got97}. One can verify that this implies that any two orthogonal codewords can be written as \begin{gather} |\overline{0}\rangle = \frac{1}{\sqrt{2}}|0\rangle |\psi^0_0\rangle +\frac{1}{\sqrt{2}}|1\rangle |\psi^0_1\rangle,\nonumber\\ |\overline{1}\rangle = \frac{1}{\sqrt{2}}|0\rangle |\psi^1_0\rangle +\frac{1}{\sqrt{2}}|1\rangle |\psi^1_1\rangle,\label{codeform} \end{gather} where $|\psi^i_j\rangle$, $i,j=0,1$ form an orthonormal set. Here we have expanded the codewords in the computational basis (the eigenbasis of $Z$) of the first qubit, but the same can be done with respect to any qubit in the code. Note that an $X$, $Y$, or $Z$ error on one of the other qubits sends each of the vectors $|\psi^i_j\rangle$ to a subspace orthogonal to the subspace spanned by $|\psi^i_j\rangle$, $i,j=0,1$. This can be shown to follow from the fact that different single-qubit errors send the code space to different orthogonal subspaces. An exception is the case of degenerate codes where the error in question has the same effect on a codeword as an error on the first qubit. In such a case, however, we can assume that the error has occurred on the first qubit. The weak operation for correcting bit flips on a given qubit (say the first one) is therefore constructed similarly to that for the bit-flip code. We first apply the weak measurement \begin{gather} M^1_{\pm} = \sqrt{\frac{I{\pm}\epsilon X}{2}}\otimes (|\psi^0_0\rangle \langle \psi^0_0| + |\psi^0_1\rangle \langle \psi^0_1| +|\psi^1_0\rangle \langle \psi^1_0| +|\psi^1_1\rangle \langle \psi^1_1|)+\nonumber\\ +\frac{I}{\sqrt{2}} \otimes(I^{n-1}-|\psi^0_0\rangle \langle \psi^0_0| - |\psi^0_1\rangle \langle \psi^0_1| -|\psi^1_0\rangle \langle \psi^1_0| -|\psi^1_1\rangle \langle \psi^1_1|), \label{weakmeas} \end{gather} where $I^{n-1}$ is the identity on the space of all qubits in the code except the first one. This can be thought of as a weak measurement of the operator $X(|\psi^0_0\rangle \langle \psi^0_0| + |\psi^0_1\rangle \langle \psi^0_1| +|\psi^1_0\rangle \langle \psi^1_0| +|\psi^1_1\rangle \langle \psi^1_1|)$. The measurement is followed by the unitary \begin{gather} U^1_{\pm} =\frac{I\pm i\epsilon'Y}{\sqrt {1+{\epsilon'}^2}}\otimes(|\psi^0_0\rangle \langle \psi^0_0|+|\psi^1_0\rangle \langle \psi^1_0|) +\frac{I\mp i\epsilon'Y}{\sqrt {1+{\epsilon'}^2}}\otimes (|\psi^0_1\rangle \langle \psi^0_1|+|\psi^1_1\rangle \langle \psi^1_1|)+\nonumber\\ +I\otimes (I^{n-1}-|\psi^0_0\rangle \langle \psi^0_0| - |\psi^0_1\rangle \langle \psi^0_1| -|\psi^1_0\rangle \langle \psi^1_0| -|\psi^1_1\rangle \langle \psi^1_1|) \label{weakunit} \end{gather} conditioned on the outcome. The Hamiltonian driving this unitary is $\pm Y(|\psi^0_0\rangle \langle \psi^0_0|+|\psi^1_0\rangle \langle \psi^1_0| - |\psi^0_1\rangle \langle \psi^0_1|-|\psi^1_1\rangle \langle \psi^1_1|)$. 
It is easy to verify that the effect of the corresponding completely positive map is analogous to that for the bit-flip code. The action of each of the operators $U^1_+M^1_+$ and $U^1_-M^1_-$ can be summarized as follows: \begin{gather} U^1_{\pm}M^1_{\pm}|i\rangle |\phi\rangle =\frac{1}{\sqrt{2}} |i\rangle |\phi\rangle, \hspace{0.3 cm}\textrm{for}\hspace{0.2 cm} |\phi\rangle \in I^{n-1}-\underset{j,k}{\sum}|\psi^j_k\rangle \langle \psi^j_k|,\label{actionofUM1}\\ U^1_{\pm}M^1_{\pm} |j\rangle |\psi^i_k\rangle = \sqrt{\frac{1-\kappa \tau_c}{2}}|j\rangle |\psi^i_k\rangle \pm \sqrt{\frac{\kappa \tau_c}{2}}|k\rangle |\psi^i_k\rangle,\hspace{0.3 cm}\textrm{for}\hspace{0.2 cm} j\neq k,\\ U^1_{\pm}M^1_{\pm} |j\rangle |\psi^i_k\rangle = |j\rangle |\psi^i_k\rangle,\hspace{0.3 cm}\textrm{for}\hspace{0.2 cm} j = k.\label{actionofUM3} \end{gather} This implies that the effect of the map $\sigma \rightarrow U^1_+M^1_+\sigma M^1_+ {U^1_+}^{\dagger} + U^1_-M^1_-\sigma M^1_- {U^1_-}^{\dagger}$ on a bit-flip error on the first qubit of a codeword $\rho$ is: \begin{gather} X_1\rho X_1 \rightarrow (1-\kappa \tau_c)X_1\rho X_1 + \kappa \tau_c \rho,\\ X_1\rho \rightarrow (1-\frac{1}{2}\kappa \tau_c)X_1\rho,\\ \rho X_1 \rightarrow (1-\frac{1}{2}\kappa \tau_c)\rho X_1. \end{gather} Just like in the bit-flip code, the error-correcting procedure for the case where each qubit decoheres through an independent bit-flip channel consists of simultaneous corrections of all qubits ($i=1,2,...,n$) by continuous application of the maps $\sigma \rightarrow U^i_+M^i_+\sigma M^i_+ {U^i_+}^{\dagger} + U^i_-M^i_-\sigma M^i_- {U^i_-}^{\dagger}$. From \eqref{codeform} it can be seen that the codewords have analogous forms when expanded in the eigenbasis of another Pauli operator ($X$ or $Y$) acting on a given qubit: \begin{gather} |\overline{0}\rangle = \frac{1}{\sqrt{2}}|x_+\rangle |\psi^0_{x_+}\rangle +\frac{1}{\sqrt{2}}|x_-\rangle |\psi^0_{x_-}\rangle = \frac{1}{\sqrt{2}}|y_+\rangle |\psi^0_{y_+}\rangle +\frac{1}{\sqrt{2}}|y_-\rangle |\psi^0_{y_-}\rangle,\nonumber\\ |\overline{1}\rangle = \frac{1}{\sqrt{2}}|x_+\rangle |\psi^1_{x_+}\rangle +\frac{1}{\sqrt{2}}|x_-\rangle |\psi^1_{x_-}\rangle = \frac{1}{\sqrt{2}}|y_+\rangle |\psi^1_{y_+}\rangle +\frac{1}{\sqrt{2}}|y_-\rangle |\psi^1_{y_-}\rangle.\label{codeform2} \end{gather} Here \begin{equation} |x_{\pm}\rangle = (-i)^{\frac{1\mp 1}{2}}\frac{|0\rangle \pm |1\rangle}{\sqrt{2}} \end{equation} and \begin{equation} |y_{\pm}\rangle = \frac{|0\rangle \pm i|1\rangle}{\sqrt{2}} \end{equation} are eigenbases of $X$ and $Y$ respectively, and \begin{equation} |\psi^i_{x_{\pm}}\rangle=i^{\frac{1\mp 1}{2}}\frac{|\psi^i_0\rangle \pm |\psi^i_1\rangle}{\sqrt{2}}, \hspace{0.1cm}i=0,1 \end{equation} and \begin{equation} |\psi^i_{y_{\pm}}\rangle=\frac{|\psi^i_0\rangle \mp i|\psi^i_1\rangle}{\sqrt{2}},\hspace{0.1cm}i=0,1 \end{equation} are orthonormal sets. The reason why we have chosen these particular overall phases in the definition of the eigenvectors of $X$ and $Y$, is that we want to have our expressions explicitly symmetric with respect to cyclic permutations of $X$, $Y$ and $Z$. More precisely, the expansions of the operators $X$, $Y$, $Z$ in the $|0,1\rangle$ basis are the same as the expansions of $Y$, $Z$, $X$ in the $|x_{\pm}\rangle$ basis, and the same as the expansions of $Z$, $X$, $Y$ in the $|y_{\pm}\rangle$ basis. 
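The stated correspondence between the three bases can be confirmed with a short numerical check (in Python with NumPy): the matrices of $Y$, $Z$, $X$ in the $|x_{\pm}\rangle$ basis, and of $Z$, $X$, $Y$ in the $|y_{\pm}\rangle$ basis, coincide with the matrices of $X$, $Y$, $Z$ in the computational basis.
\begin{verbatim}
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)

# Columns are |x_+>, |x_-> and |y_+>, |y_->, with the phase conventions of the text.
Bx = np.column_stack([np.array([1, 1]) / np.sqrt(2),
                      -1j * np.array([1, -1]) / np.sqrt(2)])
By = np.column_stack([np.array([1, 1j]) / np.sqrt(2),
                      np.array([1, -1j]) / np.sqrt(2)])

def in_basis(op, B):
    """Matrix elements of an operator in the basis given by the columns of B."""
    return B.conj().T @ op @ B

print(np.allclose(in_basis(Y, Bx), X), np.allclose(in_basis(Z, Bx), Y),
      np.allclose(in_basis(X, Bx), Z))
print(np.allclose(in_basis(Z, By), X), np.allclose(in_basis(X, By), Y),
      np.allclose(in_basis(Y, By), Z))
\end{verbatim}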
This means that $Y$ and $Z$ errors in the computational basis can be treated as $X$ errors in the bases $|x_{\pm}\rangle$ and $|y_{\pm}\rangle$, and therefore can be corrected accordingly. The weak measurement and unitary for the correction of $Y$ errors on the first qubit (let's call them $M^1_{y\pm}$ and $U^1_{y\pm}$) are obtained from \eqref{weakmeas} and \eqref{weakunit} by making the substitutions $X\rightarrow Y$, $Y\rightarrow Z$, $|0,1\rangle \rightarrow |x_{\pm}\rangle$, $|\psi^i_{0,1}\rangle \rightarrow |\psi^i_{x_{\pm}}\rangle$. The operations for the correction of $Z$ errors ($M^1_{z\pm}$ and $U^1_{z\pm}$) are obtained from \eqref{weakmeas} and \eqref{weakunit} by $X\rightarrow Z$, $Y\rightarrow X$, $|0,1\rangle \rightarrow |y_{\pm}\rangle$, $|\psi^i_{0,1}\rangle \rightarrow |\psi^i_{y_{\pm}}\rangle$. The operations for correction of $Y$ and $Z$ errors on any qubit ($M^i_{y\pm}$, $U^i_{y\pm}$, and $M^i_{z\pm}$, $U^i_{z\pm}$, $i=1,2,...,n$) are defined analogously. To determine the weak error-correcting map resulting from the application of the described weak measurements and unitary operations, and to compare it with Eq.~\eqref{wm}, we are going to look at its effect on different components of the density matrix. Any density matrix can be written as a linear combination of terms of the type $|\phi\rangle \langle \chi |$, where each of the vectors $|\phi\rangle$ and $|\chi\rangle$ belongs to one of the orthogonal subspaces on which a state gets projected if we measure the stabilizer generators of the code. Let us denote the code space by $C$ and the subspaces corresponding to different single-qubit errors by $C_{X_i}$, $C_{Y_i}$, and $C_{Z_i}$, where the subscript refers to the type of error ($X$, $Y$, or $Z$) and the number of the qubit on which it occurred. The code space and the subspaces corresponding to single-qubit errors in general do not cover the whole Hilbert space. Some of the outcomes of the measurement of the stabilizer generators may project the state onto subspaces corresponding to multi-qubit errors. We are going to denote the direct sum of these subspaces by $C_M$. Our weak error-correcting operation consists of a simultaneous application of the weak maps $\rho \rightarrow U^i_+M^i_+\rho M^i_+ {U^i_+}^{\dagger} + U^i_-M^i_-\rho M^i_- {U^i_-}^{\dagger}$, $\rho \rightarrow U^i_{y+}M^i_{y+}\rho M^i_{y+} {U^i_{y+}}^{\dagger} + U^i_{y-}M^i_{y-}\rho M^i_{y-} {U^i_{y-}}^{\dagger}$, $\rho \rightarrow U^i_{z+}M^i_{z+}\rho M^i_{z+} {U^i_{z+}}^{\dagger} + U^i_{z-}M^i_{z-}\rho M^i_{z-} {U^i_{z-}}^{\dagger}$, $i=1,2,...,n$. The order of application is irrelevant since we consider only contributions of up to first order in $\Delta t$.
Using \eqref{actionofUM1}-\eqref{actionofUM3} and the symmetry under cyclic permutations of $X$, $Y$ and $Z$, one can show that this map has the following effect: \begin{gather} |\phi\rangle \langle \chi | \rightarrow |\phi\rangle \langle \chi |, \hspace{0.2 cm} \textrm{if} \hspace{0.2 cm} |\phi\rangle , | \chi \rangle \in C\oplus C_M, \label{actionofweakmap1}\\ |\phi\rangle \langle \chi | \rightarrow (1-2\kappa\tau_c)|\phi\rangle \langle \chi | + \kappa \tau_c X_i|\phi\rangle \langle \chi |X_i + \kappa \tau_c Z_i|\phi\rangle \langle \chi |Z_i, \hspace{0.2 cm} \textrm{if} \hspace{0.2 cm} |\phi\rangle , | \chi \rangle \in C_{X_i},\label{actionofweakmap2}\\ |\phi\rangle \langle \chi | \rightarrow (1-2\kappa\tau_c)|\phi\rangle \langle \chi | + \kappa \tau_c Y_i|\phi\rangle \langle \chi |Y_i + \kappa \tau_c X_i|\phi\rangle \langle \chi |X_i, \hspace{0.2 cm} \textrm{if} \hspace{0.2 cm} |\phi\rangle , | \chi \rangle \in C_{Y_i},\label{actionofweakmap3}\\ |\phi\rangle \langle \chi | \rightarrow (1-2\kappa\tau_c)|\phi\rangle \langle \chi | + \kappa \tau_c Z_i|\phi\rangle \langle \chi |Z_i + \kappa \tau_c Y_i|\phi\rangle \langle \chi |Y_i, \hspace{0.2 cm} \textrm{if} \hspace{0.2 cm} |\phi\rangle , | \chi \rangle \in C_{Z_i},\label{actionofweakmap4}\\ |\phi\rangle \langle \chi | \rightarrow (1-\kappa\tau_c)|\phi\rangle \langle \chi |, \hspace{0.2 cm} \textrm{if} \hspace{0.2 cm} |\phi\rangle \in C_{X_i}\oplus C_{Y_i} \oplus C_{Z_i}, \hspace{0.2 cm}| \chi \rangle \in C\oplus C_M,\label{actionofweakmap5}\\ |\phi\rangle \langle \chi | \rightarrow (1-2\kappa\tau_c)|\phi\rangle \langle \chi | +\kappa \tau_c X_i|\phi\rangle \langle \chi |X_i, \hspace{0.2 cm} \textrm{if} \hspace{0.2 cm} |\phi\rangle \in C_{X_i}, |\chi \rangle \in C_{Y_i},\label{actionofweakmap6}\\ |\phi\rangle \langle \chi | \rightarrow (1-2\kappa\tau_c)|\phi\rangle \langle \chi | +\kappa \tau_c Y_i|\phi\rangle \langle \chi |Y_i, \hspace{0.2 cm} \textrm{if} \hspace{0.2 cm} |\phi\rangle \in C_{Y_i}, |\chi \rangle \in C_{Z_i},\label{actionofweakmap7}\\ |\phi\rangle \langle \chi | \rightarrow (1-2\kappa\tau_c)|\phi\rangle \langle \chi | +\kappa \tau_c Z_i|\phi\rangle \langle \chi |Z_i, \hspace{0.2 cm} \textrm{if} \hspace{0.2 cm} |\phi\rangle \in C_{Z_i}, |\chi \rangle \in C_{X_i},\label{actionofweakmap8}\\ |\phi\rangle \langle \chi | \rightarrow (1-2\kappa\tau_c)|\phi\rangle \langle \chi |, \hspace{0.2 cm} \textrm{if} \hspace{0.2 cm} |\phi\rangle \in C_{X_i}\oplus C_{Y_i} \oplus C_{Z_i}, | \chi \rangle \in C_{X_j}\oplus C_{Y_j} \oplus C_{Z_j}, \hspace{0.2 cm} i\neq j \label{actionofweakmapN}. \end{gather} This is sufficient to determine the effect of the error-correcting map on any density matrix. One can easily see that this map is not equal to the map \eqref{wm} because of the last terms on the right-hand sides of Eqs.~\eqref{actionofweakmap2}-\eqref{actionofweakmap4} and \eqref{actionofweakmap6}-\eqref{actionofweakmap8}. These terms appear because the operation we proposed for correcting $X$ errors, for example, cannot distinguish between $X$ and $Y$ errors and corrects both. This gives rise to the last terms in Eqs.~\eqref{actionofweakmap3} and \eqref{actionofweakmap6}. The same holds for the operations we proposed for correcting $Y$ and $Z$ errors. Nevertheless, this map is also a weak error-correcting map in the sense that in the limit of infinitely many applications, it corrects single-qubit errors fully, i.e., it results in the strong error-correcting map $\Phi(\rho)$.
To see this, consider all possible single-qubit errors on a density matrix $\rho \in C$. The most general form of a single-qubit error on the $i^{\textrm{th}}$ qubit is \begin{equation} \rho_i=\overset{4}{\underset{j=1}{\sum}}M_{i,j}\rho M_{i,j}^{\dagger},\label{singlequbiterror} \end{equation} where the Kraus operators $M_{i,j}$ are complex linear combinations of $I$, $X_i$, $Y_i$ and $Z_i$ that satisfy $\overset{4}{\underset{j=1}{\sum}}M_{i,j}^{\dagger}M_{i,j}=I$. Observe that $\rho_i$ is a real superposition of the following terms: $\rho$, $X_i\rho X_i$, $Y_i\rho Y_i$, $Z_i\rho Z_i$, $i(X_i\rho- \rho X_i)$, $i(Y_i\rho- \rho Y_i)$, $i(Z_i\rho- \rho Z_i)$, $X_i\rho Y_i + Y_i \rho X_i$, $Y_i\rho Z_i + Z_i \rho Y_i$, $X_i\rho Z_i + Z_i \rho X_i$. Each of the first four terms has trace $1$ and the rest of the terms are traceless. From \eqref{actionofweakmap1}-\eqref{actionofweakmapN} one can see that the weak map does not couple the first four terms with the rest. Therefore, their evolution under continuous application of the map (without decoherence) can be treated separately. If we write the single-qubit error \eqref{singlequbiterror} as \begin{equation} \rho_i = a \rho + b X_i\rho X_i + c Y_i\rho Y_i +d Z_i\rho Z_i + \textrm{traceless terms} \label{generalsinglequbiterror}, \end{equation} a single application of the weak map causes the transformation \begin{gather} a \rightarrow a + (b + c + d)\kappa \tau_c,\label{ageneralcode}\\ b \rightarrow b - b 2\kappa \tau_c + c \kappa \tau_c,\\ c \rightarrow c - c 2\kappa \tau_c + d \kappa \tau_c,\\ d \rightarrow d - d 2\kappa \tau_c + b \kappa \tau_c. \end{gather} Using that at any moment $a+b+c+d=1$ and taking the limit $\tau_c \rightarrow 0$, from \eqref{ageneralcode} we obtain that the evolution of $a$ is described by \begin{equation} \frac{d a(t)}{dt}=\kappa (1-a(t)). \end{equation} The solution is \begin{equation} a(t) = 1-(1-a(0))e^{-\kappa t}, \end{equation} i.e., in the limit of $t\rightarrow \infty$ we obtain $a(t)\rightarrow 1$ (and therefore $b,c,d \rightarrow 0$). We don't need to look at the evolution of the traceless terms in $\rho_i$ because our map is completely positive and therefore the transformed $\rho_i$ is also a density matrix, which implies that if $a=1, b=0,c=0,d=0$, all traceless terms have to vanish. This completes the proof that in the limit of infinitely many applications, our weak error-correcting map is able to correct arbitrary single-qubit errors. It is interesting whether a similar implementation in terms of weak measurements and weak unitary operations can be found for the map \eqref{wm} for general codes. One way to approach this problem might be to look at the error-correcting operations in the decoded basis. Another interesting question is whether the scheme we presented can be modified to include feedback which depends more generally on the history of measurement outcomes and not only on the outcome of the last measurement. It is natural to expect that using fully the available information about the state could lead to a better performance. These questions are left open for future investigation. \chapter*{Chapter 6: \hspace{1pt} Correctable subsystems under continuous decoherence} \addcontentsline{toc}{chapter}{Chapter 6:\hspace{0.15cm} Correctable subsystems under continuous decoherence} In the previous chapter, we were concerned with a situation in which the information stored in an error-correcting code was only approximately correctable. 
For the model we considered, there were non-correctable multi-qubit errors that accumulated with time, albeit at a slower rate. This is, in practice, the general situation---the probability for non-correctable errors is never truly zero and in order to deal with higher-order terms we need to use concatenation and fault-tolerant techniques (see Section 8.3). But as we saw in the previous chapter, the idea of perfect error correction can be crucial for understanding the approximate process. In view of this, in this chapter we ask under what conditions a code is perfectly correctable during an entire time interval of continuous decoherence. We consider the most general form of quantum codes---\textit{operator}, or \textit{subsystem} codes. \section*{6.1 \hspace{2pt} Preliminaries} \addcontentsline{toc}{section}{6.1 \hspace{0.15cm} Preliminaries} Operator quantum error correction (OQEC) \cite{KLP05, KLPL06, BKK07} is a unified approach to error correction which uses the most general encoding for the protection of information---encoding in subsystems \cite{Knill06, VKL01} (see also \cite{BKNPV07}). This approach contains as special cases the standard quantum error-correction method \cite{Shor95, Ste96, Bennett96c, KL96} as well as the methods of decoherence-free subspaces \cite{DG98, ZR97, LCW98, LBKW01} and subsystems \cite{KLV00, DeF00, KBLW01, YGB01}. In the OQEC formalism, noise is represented by a completely positive trace-preserving (CPTP) linear map or a noise channel, and correctability is defined with respect to such channels. In practice, however, noise is a continuous process and if it can be represented by a CPTP map, that map is generally a function of time. Correctability is therefore a time-dependent property. Furthermore, the evolution of an open system is completely positive if the system and the environment are initially uncorrelated, and necessary and sufficient conditions for CPTP dynamics are not known. As pointed out in the previous chapter, for more general cases one might need a notion of correctability that can capture non-CP transformations \cite{ShaLid07}. Whether completely positive or not, the noise map is a result of the action of the generator driving the evolution and possibly of the initial state of the system and the environment. Therefore, our goal will be to understand the conditions for correctability in terms of the generator that drives the evolution. We will consider conditions on the system-environment Hamiltonian, or in the case of Markovian evolution---on the Lindbladian. Conditions on the generator of evolution have been derived for decoherence-free subsystems (DFSs) \cite{ShaLid05}, which are a special type of operator code. DFSs are \textit{fixed} subsystems of the system's Hilbert space, inside which all states evolve unitarily. One generalization of this concept is that of the so-called unitarily correctable subsystems \cite{KLPL06}. These are subsystems all of whose states can be corrected via a unitary operation, up to an arbitrary transformation inside the gauge subsystem. Unlike in a DFS, the unitary evolution followed by states in a unitarily correctable code is not restricted to the initial subsystem. An even more general concept is that of \textit{unitarily recoverable} subsystems \cite{KLP05, KLPL06}, for which states can be recovered by a unitary transformation up to an expansion of the gauge subsystem. It was shown that any correctable subsystem is in fact a unitarily recoverable subsystem \cite{KS06}.
This reflects the so called subsystem principle \cite{Knill06, VKL01}, according to which protected information is always contained in a subsystem of the system's Hilbert space. The connection between DFSs and unitarily recoverable subsystems suggests that similar conditions on the generators of evolution to those for DFSs can be derived in the case of general correctable subsystems. This is the subject of the present study. The chapter is organized as follows. In Section 6.2 we review the definitions of correctable subsystems and unitarily recoverable subsystems. In Section 6.3, we discuss the necessary and sufficient conditions for such subsystems to exist in the case of CPTP maps. In Section 6.4, we derive conditions for the case of Markovian decoherence. The conditions for general correctability in this case are essentially the same as those for unitary correctability except that the dimension of the gauge subsystem is allowed to suddenly increase. For the case when the evolution is non-correctable, we conjecture a procedure for tracking the subsystem which contains the optimal amount of undissipated information and discuss its possible implications for the problem of optimal error correction. In Section 6.5, we derive conditions on the system-environment Hamiltonian. In this case, the conditions for unitary correctability concern only the effect of the Hamiltonian on the system, whereas the conditions for general correctability concern the entire system-environment Hamiltonian. In the latter case, the state of the noisy subsystem plus environment belongs to a particular subspace which plays an important role in the conditions. We extend the conditions to the case where the environment is initialized inside a particular subspace. In Section 6.6, we conclude. \section*{6.2 \hspace{2pt} Correctable subsystems} \addcontentsline{toc}{section}{6.2 \hspace{0.15cm} Correctable subsystems} For simplicity, we consider the case where information is stored in only one subsystem. Then there is a corresponding decomposition of the Hilbert space of the system, \begin{equation} \mathcal{H^S}=\mathcal{H^A}\otimes\mathcal{H}^B\oplus \mathcal{K},\label{decomposition} \end{equation} where the subsystem $\mathcal{H}^A$ is used for encoding of the protected information. The subsystem $\mathcal{H}^B$ is referred to as the gauge subsystem, and $\mathcal{K}$ denotes the rest of the Hilbert space. In the formulation of OQEC \cite{KLP05, KLPL06}, the noise process is a completely positive trace-preserving (CPTP) linear map $\mathcal{E}:\mathcal{B}(\mathcal{H}^S)\rightarrow \mathcal{B}(\mathcal{H}^S)$, where $\mathcal{B}(\mathcal{H})$ denotes the set of linear operators on a finite-dimensional Hilbert space $\mathcal{H}$. 
Let the operator-sum representation of the map $\mathcal{E}$ be \begin{equation} \mathcal{E}(\sigma)=\underset{i}{\sum}M_i\sigma M_i^{\dagger}, \hspace{0.2cm} \textrm{for all } \sigma\in \mathcal{B}(\mathcal{H}^S),\label{Kraus1} \end{equation} where the Kraus operators $\{M_i\}\subseteq\mathcal{B}(\mathcal{H}^S)$ satisfy \begin{equation} \underset{i}{\sum}M_i^{\dagger}M_i=I^S.\label{completeness} \end{equation} The subsystem $\mathcal{H}^A$ in Eq.~\eqref{decomposition} is called \textit{noiseless} with respect to the noise process $\mathcal{E}$, if \begin{gather} \textrm{Tr}_B\{(\mathcal{P}^{AB}\circ\mathcal{E})(\sigma)\}=\textrm{Tr}_B\{\sigma\},\label{noiselesssystem}\\ \hspace{0.1cm} \textrm{for all }\sigma\in \mathcal{B}(\mathcal{H}^S) \textrm{ such that } \sigma=\mathcal{P}^{AB}(\sigma)\hspace{0.1cm} ,\notag \end{gather} where \begin{equation} \mathcal{P}^{AB}(\cdot)=P^{AB}(\cdot)P^{AB} \end{equation} with $P^{AB}$ being the projector of $\mathcal{H}^S$ onto $\mathcal{H}^A\otimes \mathcal{H}^B$, \begin{equation} P^{AB}\mathcal{H}^S=\mathcal{H}^A\otimes \mathcal{H}^B. \end{equation} Similarly, a \textit{correctable} subsystem is one for which there exists a correcting CPTP map $\mathcal{R}:\mathcal{B}(\mathcal{H}^S)\rightarrow \mathcal{B}(\mathcal{H}^S)$, such that the subsystem is noiseless with respect to the map $\mathcal{R}\circ \mathcal{E}$: \begin{gather} \textrm{Tr}_B\{(\mathcal{P}^{AB}\circ\mathcal{R}\circ\mathcal{E})(\sigma)\}=\textrm{Tr}_B\{\sigma\},\label{correctablesystem}\\ \hspace{0.1cm} \textrm{for all }\sigma\in \mathcal{B}(\mathcal{H}^S) \textrm{ such that } \sigma=\mathcal{P}^{AB}(\sigma)\hspace{0.1cm} .\notag \end{gather} When the correcting map $\mathcal{R}$ is unitary, $\mathcal{R}=\mathcal{U}$, the subsystem is called \textit{unitarily correctable}: \begin{gather} \textrm{Tr}_B\{(\mathcal{P}^{AB}\circ\mathcal{U}\circ\mathcal{E})(\sigma)\}=\textrm{Tr}_B\{\sigma\},\label{unitarilycorrectable}\\ \hspace{0.1cm} \textrm{for all }\sigma\in \mathcal{B}(\mathcal{H}^S) \textrm{ such that } \sigma=\mathcal{P}^{AB}(\sigma)\hspace{0.1cm} .\notag \end{gather} A similar but more general notion is that of a \textit{unitarily recoverable} subsystem, for which the unitary $\mathcal{U}$ need not bring the erroneous state back to the original subspace $\mathcal{H}^A\otimes \mathcal{H}^B$ but can bring it in a subspace $\mathcal{H}^A\otimes \mathcal{H}^{B'}$ such that \begin{gather} \textrm{Tr}_{B'}\{(\mathcal{P}^{AB'}\circ\mathcal{U}\circ\mathcal{E})(\sigma)\}=\textrm{Tr}_B\{\sigma\},\label{unitarilyrecoverable} \hspace{0.1cm} \textrm{for all }\sigma\in \mathcal{B}(\mathcal{H}^S) \textrm{ such that } \sigma=\mathcal{P}^{AB}(\sigma)\hspace{0.1cm} .\notag \end{gather} Obviously, if $\mathcal{H}^A$ is unitarily recoverable, it is also correctable, since one can always apply a local CPTP map $\mathcal{E}^{B'\rightarrow B}: \mathcal{B}(\mathcal{H}^{B'})\rightarrow \mathcal{B}(\mathcal{H}^{B})$ which brings all states from $\mathcal{H}^{B'}$ to $\mathcal{H}^{B}$. (In fact, if the dimension of $\mathcal{H}^{B'}$ is smaller or equal to that of $\mathcal{H}^{B}$, this can always be done by a unitary map, i.e., $\mathcal{H}^A$ is unitarily correctable.) In Ref.~\cite{KS06} it was shown that the reverse is also true---if $\mathcal{H}^A$ is correctable, it is unitarily recoverable. This equivalence will provide the basis for our derivation of correctability conditions for continuous decoherence. 
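The defining property \eqref{unitarilyrecoverable} can be illustrated numerically. In the following minimal sketch (in Python with NumPy; the choice of an amplitude-damping channel acting on the gauge factor and of a random recovery unitary is purely illustrative, and for simplicity $\mathcal{K}$ is taken to be empty with $\mathcal{H}^{B'}=\mathcal{H}^B$), the Kraus operators are chosen so that, after the recovery unitary, the noise acts only on the gauge subsystem; tracing out the gauge subsystem then returns the encoded state intact.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def rand_state(d):
    """A random density matrix of dimension d."""
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho)

def rand_unitary(d):
    Q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return Q

def ptrace_B(rho):
    """Partial trace over the second (gauge) factor of a 2x2 tensor product."""
    return np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

g = 0.3                                               # damping strength on H^B (illustrative)
C1 = np.array([[1, 0], [0, np.sqrt(1 - g)]], dtype=complex)
C2 = np.array([[0, np.sqrt(g)], [0, 0]], dtype=complex)
U = rand_unitary(4)
M = [U.conj().T @ np.kron(np.eye(2), C) for C in (C1, C2)]   # Kraus operators of the noise

rho_A, tau_B = rand_state(2), rand_state(2)
sigma = sum(Mi @ np.kron(rho_A, tau_B) @ Mi.conj().T for Mi in M)   # noisy state
recovered = U @ sigma @ U.conj().T                                  # apply the recovery unitary

print(np.allclose(sum(Mi.conj().T @ Mi for Mi in M), np.eye(4)))    # trace preservation
print(np.allclose(ptrace_B(recovered), rho_A))                      # encoded state recovered
\end{verbatim}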
Before we proceed with our discussion, we point out that condition \eqref{unitarilyrecoverable} can be equivalently written as \cite{KLP05, KLPL06} \begin{gather} \mathcal{U}\circ\mathcal{E}(\rho\otimes\tau)=\rho\otimes\tau',\hspace{0.4cm} \tau'\in\mathcal{B}(\mathcal{H}^{B'}), \label{unitarilyrecoverable2} \hspace{0.2cm} \textrm{for all }\rho\in \mathcal{B}(\mathcal{H}^A), \hspace{0.1cm} \tau\in \mathcal{B}(\mathcal{H}^B)\hspace{0.1cm} . \end{gather} \section*{6.3 \hspace{2pt} Completely positive linear maps} \addcontentsline{toc}{section}{6.3 \hspace{0.15cm} Completely positive linear maps} Let $\mathcal{H}^S$ and $\mathcal{H}^E$ denote the Hilbert spaces of a system and its environment, and let $\mathcal{H}=\mathcal{H}^S\otimes\mathcal{H}^E$ be the total Hilbert space. As we pointed out earlier, a common example of a CP map is the transformation that the state of a system undergoes if the system is initially decoupled from its environment, $\rho(0)=\rho^S(0)\otimes \rho^E(0)$, and both the system and environment evolve according to the Schr\"{o}dinger equation: \begin{equation} \frac{d\rho(t)}{dt}=-i[H(t),\rho(t)].\label{Schrodinger} \end{equation} Equation \eqref{Schrodinger} gives rise to the unitary transformation \begin{equation} \rho(t)=V(t)\rho(0)V^{\dagger}(t), \end{equation} with \begin{equation} V(t)=\mathcal{T}\textrm{exp}(-i\int_0^t H(\tau)d\tau), \end{equation} where $\mathcal{T}$ denotes time ordering. Under the assumption of an initially-decoupled state of the system and the environment, the transformation of the state of the system is described by the time-dependent CPTP map \begin{equation} \rho^S(0)\rightarrow \rho^S(t)\equiv\textrm{Tr}_E(\rho(t))=\underset{i}{\sum}M_{i}(t)\rho^S(0)M_{i}^{\dagger}(t),\notag \end{equation} with Kraus operators \begin{equation} M_{i}(t)=\sqrt{\lambda_{\nu}}\langle \mu | V(t)|\nu\rangle ,\hspace{0.3cm}i=(\mu,\nu),\label{Krausoperator} \end{equation} where $\{|\mu\rangle \}$ is a basis in which the initial environment density matrix is diagonal, $\rho^E(0)=\underset{\mu}{\sum}\lambda_{\mu}|\mu\rangle\langle \mu|$. We already saw one example of such a map in Chapter 4, where we studied the evolution of a qubit coupled to a spin bath (Eq.~\eqref{KrausForm}). The Kraus representation \eqref{Kraus1} applies to any CP linear map, which need not necessarily arise from evolution of the type \eqref{Schrodinger}. This is why in the following theorem we derive conditions for discrete CP maps. For correctability under continuous decoherence, the same conditions must apply at any moment of time, i.e., one can think that the quantities $M_i$, $U$, $C_i$, as well as the subsystem $\mathcal{H}^{B'}$ in the theorem are implicitly time-dependent. \textbf{Theorem 1:} \textit{The subsystem $\mathcal{H}^A$ in the decomposition \eqref{decomposition} is correctable under a CP linear map in the form \eqref{Kraus1} if and only if there exists a unitary operator $U\in\mathcal{B}(\mathcal{H}^S)$ such that the Kraus operators satisfy \begin{gather} M_{i}P^{AB}=U^{\dagger}I^{A}\otimes C^{B\rightarrow B'}_{i},\hspace{0.2cm}C^{B\rightarrow B'}_{i}: \mathcal{H}^B\rightarrow \mathcal{H}^{B'}, \hspace{0.2cm} \forall i.
\label{conditionKraus} \end{gather}} \textbf{Proof:} The sufficiency of condition \eqref{conditionKraus} is obvious---using that $\rho\otimes\tau$ in Eq.~\eqref{unitarilyrecoverable2} satisfies $\rho\otimes\tau= P^{AB}\rho\otimes\tau P^{AB}$, it can be immediately verified that Eq.~\eqref{conditionKraus} implies Eq.~\eqref{unitarilyrecoverable2} with $\mathcal{U}=U(\cdot)U^{\dagger}$. Now assume that $\mathcal{H}^A$ is unitarily recoverable and the recovery map is $\mathcal{U}=U(\cdot)U^{\dagger}$. The map $\mathcal U\circ \mathcal{E}$ in Eq.~\eqref{unitarilyrecoverable2} can then be thought of as having Kraus operators $U M_{i}$. In particular, condition \eqref{unitarilyrecoverable2} has to be satisfied for $\rho=|\psi\rangle \langle \psi|$, $\tau=|\phi\rangle \langle \phi|$ where $|\psi\rangle\in \mathcal{H}^A$ and $|\phi\rangle\in \mathcal{H}^B$ are pure states. Notice that the image of $|\psi\rangle\langle\psi|\otimes|\phi\rangle\langle\phi| $ under the map $\mathcal{U}\circ \mathcal{E}$ would be of the form $|\psi\rangle\langle\psi|\otimes \tau'$, only if all terms in Eq.~\eqref{Kraus1} are of the form \begin{gather} UM_{i}|\psi\rangle\langle\psi|\otimes |\phi\rangle\langle\phi| M_{i}^{\dagger}U^{\dagger}=|g_{i}(\psi)|^2|\psi\rangle\langle\psi|\otimes |\phi'_{i}(\psi)\rangle\langle \phi'_{i}(\psi)|, \hspace{0.2cm}g_{i}(\psi)\in C, \end{gather} where for now we assume that $g_{i}$ and $|\phi'_{i}\rangle$ may depend on $|\psi\rangle$. In other words, \begin{equation} UM_{i}|\psi\rangle |\phi\rangle =g_{i}(\psi)|\psi\rangle|\phi'_{i}(\psi)\rangle ,\hspace{0.2cm}g_{i}(\psi)\in C, \hspace{0.2cm} \forall i.\label{edno'} \end{equation} But if we impose \eqref{edno'} on a linear superposition $|\psi\rangle=a|\psi_1\rangle+b|\psi_2\rangle$, ($a,b\neq 0$), we obtain $g_{i}(\psi_1)=g_{i}(\psi_2)$ and $|\phi'_{i}(\psi_1)\rangle = |\phi'_{i}(\psi_2)\rangle$ i.e., \begin{equation} g_{i}(\psi)\equiv g_{i}, \hspace{0.2cm}|\phi'_{i}(\psi)\rangle \equiv |\phi'_{i}\rangle, \hspace{0.2cm} \forall |\psi\rangle \in \mathcal{H}^A, \hspace{0.2cm} \forall i. \end{equation} Since Eq.~\eqref{edno'} has to be satisfied for all $|\psi\rangle \in \mathcal{H}^A$ and all $|\phi\rangle \in \mathcal{H}^B$, we obtain \begin{gather} UM_{i}P^{AB}=I^{A}\otimes C_{i}^{B\rightarrow B'},\hspace{0.2cm}C_{i}^{B\rightarrow B'}: \mathcal{H}^B\rightarrow \mathcal{H}^{B'}, \hspace{0.2cm} \forall i. \end{gather} Applying $U^{\dagger}$ from the left yields condition \eqref{conditionKraus}. We remark that condition \eqref{conditionKraus} is equivalent to the conditions obtained in Ref.~\cite{KLPL06}. \section*{6.4 \hspace{2pt} Markovian dynamics} \addcontentsline{toc}{section}{6.4 \hspace{0.15cm} Markovian dynamics} The most general continuous completely positive time-local evolution of the state of a quantum system is described by a semi-group master equation in the form \eqref{firstLindblad} but with time dependent coefficients, \begin{eqnarray} \frac{d\rho(t)}{dt}=-i[H(t),\rho(t)]-\frac{1}{2}\underset{j}{\sum}(2L_j(t)\rho(t) L_j^{\dagger}(t)\notag\\ -L_j^{\dagger}(t)L_j(t)\rho(t)-\rho(t) L_j^{\dagger}(t)L_j(t))\equiv \mathcal{L}(t)\rho(t).\label{Lindblad} \end{eqnarray} (For a discussion of the situations in which such time-dependent Markovian evolution can arise, see, e.g., Ref.~\cite{Len86}.) Here $H(t)$ is a system Hamiltonian, $L_j(t)$ are Lindblad operators, and $\mathcal{L}(t)$ is the Liouvillian superoperator corresponding to this dynamics. 
(The decoherence rates $\lambda_j$ that appear in Eq.~\eqref{firstLindblad} have been absorbed here in the operators $L_j(t)$.) The general evolution of a state is given by \begin{equation} \rho(t_2)=\mathcal{T}\textrm{exp}\left(\int_{t_1}^{t_2} \mathcal{L}(\tau)d \tau\right)\rho(t_1), \hspace{0.3cm} t_2>t_1.\label{evolutionLindblad} \end{equation} We will first derive necessary and sufficient conditions for unitarily correctable subsystems under the dynamics \eqref{Lindblad}, and then extend them to the case of unitarily recoverable subsystems. In the case of continuous dynamics, the error map $\mathcal{E}$ and the error-correcting map $\mathcal{U}$ in Eq.~\eqref{unitarilycorrectable} are generally time dependent. If we set $t=0$ as the initial time at which the system is prepared, the error map resulting from the dynamics \eqref{Lindblad} is \begin{equation} \mathcal{E}(t)(\cdot)=\mathcal{T}\textrm{exp}\left(\int_{0}^{t} \mathcal{L}(\tau)d \tau\right)(\cdot). \end{equation} Let $\mathcal{U}(t)=U(t)(\cdot)U^{\dagger}(t)$ be the unitary error-correcting map in Eq.~\eqref{unitarilycorrectable}. We define the rotating frame corresponding to $U^{\dagger}(t)$ by transforming each operator as \begin{equation} O(t)\rightarrow \widetilde{O}(t)=U(t)O(t)U^{\dagger}(t).\label{rotatingframe} \end{equation} In this frame, the Lindblad equation \eqref{Lindblad} can be written as \begin{gather} \frac{d\widetilde{\rho}(t)}{dt}=-i[\widetilde{H}(t)+{H}'(t),\widetilde{\rho}(t)]+\frac{1}{2}\underset{j}{\sum}(2\widetilde{L}_j(t)\widetilde{\rho}(t) \widetilde{L}_j^{\dagger}(t)\notag\\ -\widetilde{L}_j^{\dagger}(t)\widetilde{L}_j(t)\widetilde{\rho}(t)-\widetilde{\rho}(t) \widetilde{L}_j^{\dagger}(t)\widetilde{L}_j(t))\equiv \widetilde{\mathcal{L}}(t)\widetilde{\rho}(t),\label{Lindbladrot} \end{gather} where $H'(t)$ is defined through \begin{equation} i\frac{d U(t)}{dt}=H'(t) U(t),\label{defineU} \end{equation} i.e., \begin{equation} U(t)=\mathcal{T}\textrm{exp}\left(-i\int_0^t H'(\tau)d\tau\right). \end{equation} The CPTP map resulting from the dynamics \eqref{Lindbladrot} is \begin{equation} \widetilde{\mathcal{E}}(t)(\cdot)=\mathcal{T}\textrm{exp}\left(\int_{0}^{t} \widetilde{\mathcal{L}}(\tau)d \tau\right)(\cdot). \end{equation} \textbf{Theorem 2:} \textit{Let $\widetilde{H}(t)$ and $\widetilde{L}_j(t)$ be the Hamiltonian and the Lindblad operators in the rotating frame \eqref{rotatingframe} with $U(t)$ given by Eq.~\eqref{defineU}. Then the subsystem $\mathcal{H}^A$ in the decomposition \eqref{decomposition} is correctable by $U(t)$ during the evolution \eqref{Lindblad}, if and only if \begin{gather} \widetilde{L}_{j}(t)P^{AB}=I^{A}\otimes C^B_j(t),\hspace{0.2cm} C^B_j(t)\in\mathcal{B}(\mathcal{H}^B),\hspace{0.2cm}\forall j\label{Markov1} \end{gather} and \begin{gather} \mathcal{P}^{AB}(\widetilde{H}(t)+H'(t))=I^A\otimes D^B(t),\hspace{0.2cm} D^B(t)\in \mathcal{B}(\mathcal{H}^B)\label{Markov2} \end{gather} and \begin{gather} P^{AB}(\widetilde{H}(t)+H'(t)+\frac{i}{2}\underset{j}{\sum}\widetilde{L}_j^{\dagger}(t)\widetilde{L}_j(t))P_{\mathcal{K}}=0\label{Markov3} \end{gather} for all $t$, where $P_{\mathcal{K}}$ denotes the projector on $\mathcal{K}$.
} \textbf{Proof:} Since by definition $U(t)$ is an error-correcting map for subsystem $\mathcal{H}^A$, if $\mathcal{P}^{AB}(\rho(0))=\rho(0)$, we have $\textrm{Tr}_B\{\mathcal{P}^{AB}\circ\widetilde{\mathcal{E}}(\widetilde{\rho}(0))\}=\textrm{Tr}_B\{\mathcal{P}^{AB}(\widetilde{\rho}(t))\}= \textrm{Tr}_B\{\mathcal{P}^{AB}\circ\mathcal{U}(t)\circ \mathcal{E}(t)(\rho(0))\}=\textrm{Tr}_B\{\rho(0)\}=\textrm{Tr}_B\{\tilde{\rho}(0)\}$, i.e., $\mathcal{H}^A$ is a noiseless subsystem under the evolution in the rotating frame \eqref{Lindbladrot}. Then the theorem follows from Eq.~\eqref{Lindbladrot} and the conditions for noiseless subsystems under Markovian decoherence obtained in \cite{ShaLid05}. \textbf{Comment:} Conditions \eqref{Markov2} and \eqref{Markov3} can be used to obtain the operator $H'(t)$ (and hence $U(t)$) if the initial decomposition \eqref{decomposition} is known. Note that there is a freedom in the definition of $H'(t)$. For example, $D^B(t)$ in Eq.~\eqref{Markov2} can be any Hermitian operator. In particular, we can choose $D^B(t)=0$. Also, the term $P_{\mathcal{K}}H'(t)P_{\mathcal{K}}$ does not play a role and can be chosen arbitrarily. Using that $P_{\mathcal{K}}=I-P^{AB}$, we can choose \begin{gather} H'(t)=-\widetilde{H}(t)-\frac{i}{2}P^{AB}\left(\underset{j}{\sum}\widetilde{L}_j^{\dagger}(t)\widetilde{L}_j(t)\right) +\frac{i}{2}\left(\underset{j}{\sum}\widetilde{L}_j^{\dagger}(t)\widetilde{L}_j(t)\right)P^{AB},\label{defineH'} \end{gather} which satisfies Eq.~\eqref{Markov2} and Eq.~\eqref{Markov3}. Using Eq.~\eqref{rotatingframe}, Eq.~\eqref{defineU} and Eq.~\eqref{defineH'}, we obtain the following first-order differential equation for $U(t)$: \begin{gather} i\frac{dU(t)}{dt}=-U(t)H(t)-\frac{i}{2}P^{AB}U(t)\left(\underset{j}{\sum}{L}_j^{\dagger}(t){L}_j(t)\right)\notag\\ +\frac{i}{2}U(t)\left(\underset{j}{\sum}{L}_j^{\dagger}(t){L}_j(t)\right)U^{\dagger}(t)P^{AB}U(t). \end{gather} This equation can be used to solve for $U(t)$ starting from $U(0)=I$. Notice that since $\mathcal{H}^A$ is unitarily correctable by $U(t)$, at time $t$ the initially encoded information can be thought of as contained in the subsystem $\mathcal{H}^A(t)$ defined through \begin{equation} \mathcal{H}^A(t)\otimes\mathcal{H}^B(t)\equiv U^{\dagger}(t)\mathcal{H}^A\otimes\mathcal{H}^B, \end{equation} i.e., this subsystem is obtained from $\mathcal{H}^A$ in Eq.~\eqref{decomposition} via the unitary transformation $U^{\dagger}(t)$. One can easily verify that the fact that the right-hand side of Eq.~\eqref{Markov1} acts trivially on $\mathcal{H}^{A}$, together with Eq.~\eqref{Markov2}, constitutes a necessary and sufficient condition for an arbitrary state encoded in subsystem $\mathcal{H}^A(t)$ to undergo trivial dynamics at time $t$. Therefore, these conditions can be thought of as the conditions for lack of noise in the instantaneous subsystem that contains the protected information. On the other hand, the fact that the right-hand side of Eq.~\eqref{Markov1} maps states from $\mathcal{H}^A\otimes\mathcal{H}^B$ to $\mathcal{H}^A\otimes\mathcal{H}^B$, together with Eq.~\eqref{Markov3}, constitutes a necessary and sufficient condition for states inside the time-dependent subspace $U^{\dagger}(t)\mathcal{H}^{AB}$ not to leave this subspace during the evolution. Thus the conditions of the theorem can be thought of as describing a time-varying noiseless subsystem $\mathcal{H}^A(t)$. We now extend the above conditions to the case of unitarily recoverable subsystems.
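Before doing so, we illustrate how the differential equation for $U(t)$ can be integrated in practice. The following is a minimal Python sketch with arbitrarily chosen toy operators (which are not assumed to satisfy conditions \eqref{Markov1}--\eqref{Markov3}) on a three-dimensional system with $\dim\mathcal{H}^A=2$, a trivial gauge factor $\mathcal{H}^B$, and a one-dimensional $\mathcal{K}$; since Eq.~\eqref{defineH'} defines a Hermitian generator, $U(t)$ must remain unitary, which the sketch verifies:
\begin{verbatim}
import numpy as np

dim = 3
P = np.diag([1.0, 1.0, 0.0]).astype(complex)        # projector P^{AB}

def H_of_t(t):
    H = np.zeros((dim, dim), dtype=complex)
    H[0, 1] = H[1, 0] = 0.5 * np.cos(t)              # toy Hamiltonian (assumed)
    H[2, 2] = 1.0
    return H

def Ls_of_t(t):
    L = np.zeros((dim, dim), dtype=complex)
    L[0, 2] = np.sqrt(0.2)                           # toy Lindblad operator (assumed)
    return [L]

U = np.eye(dim, dtype=complex)
dt = 1e-4
for n in range(int(0.5 / dt)):
    t = n * dt
    H, Ls = H_of_t(t), Ls_of_t(t)
    K = sum(L.conj().T @ L for L in Ls)              # sum_j L_j^dag L_j
    # dU/dt obtained from the first-order equation above (multiplied by -i)
    dU = 1j * U @ H - 0.5 * P @ U @ K + 0.5 * U @ K @ U.conj().T @ P @ U
    U = U + dt * dU

print("U(t) unitary:", np.allclose(U @ U.conj().T, np.eye(dim), atol=1e-3))
\end{verbatim}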
As we pointed out earlier, the difference between a unitarily correctable and a unitarily recoverable subsystem is that in the latter case the dimension of the gauge subsystem may increase. Since the dimension of the gauge subsystem is an integer, this increase can happen only in a jump-like fashion at particular moments. Between these moments, the evolution is unitarily correctable. Therefore, we can state the following \textbf{Theorem 3:} \textit{The subsystem $\mathcal{H}^A$ in Eq.~\eqref{decomposition} is correctable during the evolution \eqref{Lindblad}, if and only if there exist times $t_i$, $i=0,1,2,\ldots$, $t_0=0$, $t_i<t_{i+1}$, such that for each interval between $t_{i-1}$ and $t_i$ there exists a decomposition \begin{equation} \mathcal{H}^S=\mathcal{H}^A\otimes\mathcal{H}^B_i\oplus \mathcal{K}_i, \hspace{0.4cm} \mathcal{H}^B_{i}\supseteq\mathcal{H}^B_{i-1}, \end{equation} with respect to which the evolution during this interval is unitarily correctable. } \textbf{Remark:} An increase of the gauge subsystem at time $t_i$ happens if the operator $C_j(t)$ in Eq.~\eqref{Markov1} acquires non-zero components that map states from $\mathcal{H}^{B}_i$ to $\mathcal{H}^{B}_{i+1}$. From that moment on, $t_i\leq t \leq t_{i+1}$, Eq.~\eqref{Markov1} must hold for the new decomposition $\mathcal{H}^S=\mathcal{H}^A\otimes\mathcal{H}^B_{i+1}\oplus \mathcal{K}_{i+1}$. The unitary $U(t)$ is determined from Eq.~\eqref{Markov2} and Eq.~\eqref{Markov3} as described earlier. The conditions derived in this section provide insights into the mechanism of information preservation under Markovian dynamics, and thus could have implications for the problem of error correction when perfect correctability is not possible \cite{Schum02, Klesse07, RW05, YHT05, FSW07, KSL08}. For example, it is possible that the unitary operation constructed according to Eq.~\eqref{defineU}, with the appropriate modification for the case of an increasing gauge subsystem, may be useful for error correction even when the conditions of the theorems are only approximately satisfied. Notice that the generator driving the effective evolution of the subspace $U^{\dagger}(t)\mathcal{H}^A\otimes\mathcal{H}^B$, whose projector we denote by $P^{AB}(t)\equiv U^{\dagger}(t)P^{AB}U(t)$, can be written as \begin{equation} \mathcal{L}(t)(\cdot)=-i[H_{\textrm{eff}}(t), \cdot]+\mathcal{D}(t)(\cdot) + \mathcal{S}(t)(\cdot), \end{equation} where \begin{gather} H_{\textrm{eff}}(t)=H(t)+\frac{i}{2}P^{AB}(t)\left( \underset{j}{\sum}{L}_j^{\dagger}(t){L}_j(t)\right) -\frac{i}{2}\left(\underset{j}{\sum}{L}_j^{\dagger}(t){L}_j(t)\right)P^{AB}(t) \end{gather} is an effective Hamiltonian, \begin{gather} \mathcal{D}(t)(\cdot)=\underset{j}{\sum}L_j(t)(\cdot)L_j^{\dagger}(t) \end{gather} is a dissipator, and \begin{gather} \mathcal{S}(t)(\cdot)= -\frac{1}{2}P^{AB}(t) \left( \underset{j}{\sum}{L}_j^{\dagger}(t){L}_j(t)\right) P^{AB}(t)(\cdot)\notag\\ -\frac{1}{2}(\cdot)P^{AB}(t) \left( \underset{j}{\sum}{L}_j^{\dagger}(t){L}_j(t)\right) P^{AB}(t) \end{gather} is a superoperator acting on $\mathcal{B}(U^{\dagger}(t)\mathcal{H}^{AB})$. The dissipator most generally causes an irreversible loss of the information contained in the current subspace, which may involve loss of the information stored in subsystem $\mathcal{H}^A(t)$ as well as an increase of the gauge subsystem. The superoperator $\mathcal{S}(t)(\cdot)$ gives rise to a transformation solely inside the current subspace.
In the case when the evolution is correctable, this operator acts locally on the gauge subsystem, but in the general case it may act non-trivially on $\mathcal{H}^A(t)$. The role of the effective Hamiltonian is to rotate the current subspace by an infinitesimal amount. If one could argue that the information lost under the action of $\mathcal{D}(t)$ and $\mathcal{S}(t)$ is in principle irretrievable, then heuristically one could expect that after a single time step $dt$, the corresponding factor of the infinitesimally rotated (possibly expanded) subspace will contain the maximal amount of the remaining encoded information. Note that to keep track of the increase of the gauge subsystem one would need to determine the operator $C_j$ on the right-hand side of Eq.~\eqref{Markov1} that optimally approximates the left-hand side. Of course, since the dissipator generally causes leakage of states outside of the current subspace, the error-correcting map at the end would have to involve more than just a unitary recovery followed by a CPTP map on the gauge subsystem. In order to maximize the fidelity \cite{Ore08} of the encoded information with a perfectly encoded state, one would have to bring the state of the system fully inside the subspace $\mathcal{H}^A\otimes\mathcal{H}^B$. These heuristic arguments, however, require a rigorous analysis. It is possible that the action of the superoperators $\mathcal{D}(t)$ and $\mathcal{S}(t)$ may be partially correctable and thus one may have to modify the unitary \eqref{defineU} in order to optimally track the retrievable information. We leave this as a problem for future investigation. \section*{6.5 \hspace{2pt} Conditions on the system-environment Hamiltonian} \addcontentsline{toc}{section}{6.5 \hspace{0.15cm} Conditions on the system-environment Hamiltonian} We now derive conditions for correctability of a subsystem when the dynamics of the system and the environment is described by the Schr\"{o}dinger equation \eqref{Schrodinger}. While the CP-map conditions can account for such dynamics when the states of the system and the environment are initially disentangled, they depend on the initial state of the environment. Below, we will first derive conditions on the system-environment Hamiltonian that hold for any state of the environment, and then extend them to the case when the environment is initialized inside a particular subspace. We point out that the equivalence between unitarily recoverable subsystems and correctable subsystems has been proven for CPTP maps. Here, we could have a non-CP evolution since the initial state of the system and the environment may be entangled. Nevertheless, since correctability must hold for the case when the initial state of the system and the environment is separable, the conditions we obtain are necessary. They are obviously also sufficient since unitary recoverability implies correctability. Let us write the system-environment Hamiltonian as \begin{equation} H_{SE}(t)=H_S(t)\otimes I_E + I_S\otimes H_E(t)+H_I(t),\label{Hamiltonian} \end{equation} where $H_S(t)$ and $H_E(t)$ are the system and the environment Hamiltonians, respectively, and \begin{equation} H_I(t)=\underset{j}{\sum}S_{j}(t)\otimes E_{j}(t),\label{HI} \end{equation} is the interaction Hamiltonian.
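For concreteness, the following minimal sketch (with arbitrarily chosen toy operators and a time-independent $H_{SE}$, so that the time ordering is trivial; it is not meant to satisfy any correctability condition) builds the Hamiltonian \eqref{Hamiltonian}--\eqref{HI} for a qubit system coupled to a qubit environment, and extracts the Kraus operators \eqref{Krausoperator} of the induced CP map, checking their completeness and the partial-trace form of the evolution:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

H_S, H_E = 0.7 * sz, 0.3 * sx                       # toy system / environment terms
H_SE = np.kron(H_S, I2) + np.kron(I2, H_E) + 0.4 * np.kron(sz, sx)

t = 1.3
V = expm(-1j * H_SE * t)                            # V(t) for time-independent H_SE

lam = np.array([0.8, 0.2])                          # eigenvalues of rho_E(0), taken
kraus = []                                          # diagonal in the computational basis
for mu in range(2):
    for nu in range(2):
        # <mu|V(t)|nu> acts on the system: the corresponding 2x2 block of V
        block = V.reshape(2, 2, 2, 2)[:, mu, :, nu]
        kraus.append(np.sqrt(lam[nu]) * block)

print("completeness:", np.allclose(sum(M.conj().T @ M for M in kraus), I2))

rho_S = np.array([[0.6, 0.2], [0.2, 0.4]], dtype=complex)
rho_E = np.diag(lam).astype(complex)
rho_t = V @ np.kron(rho_S, rho_E) @ V.conj().T
rho_S_t = rho_t.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)      # Tr_E
print("Kraus form  :", np.allclose(rho_S_t,
      sum(M @ rho_S @ M.conj().T for M in kraus)))
\end{verbatim}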
From the point of view of the Hilbert space of the system plus environment, the decomposition \eqref{decomposition} reads \begin{equation} \mathcal{H}=(\mathcal{H}^A\otimes\mathcal{H}^B\oplus\mathcal{K})\otimes \mathcal{H}^E=\mathcal{H}^A\otimes\mathcal{H}^B\otimes\mathcal{H}^E\oplus \mathcal{K}\otimes\mathcal{H}^E \label{decompositionfull}. \end{equation} \subsection*{6.5.1 \hspace{2pt} Conditions independent of the state of the environment} \addcontentsline{toc}{subsection}{6.5.1 \hspace{0.15cm} Conditions independent of the state of the environment} We will consider again conditions for unitary correctability first, and then conditions for general correctability. In the rotating frame \eqref{rotatingframe}, the Schr\"{o}dinger equation \eqref{Schrodinger} becomes \begin{equation} \frac{d\widetilde{\rho}(t)}{dt}=-i[\widetilde{H}_{SE}(t)+H'(t),\widetilde{\rho}(t)].\label{Schrodingerrot} \end{equation} Since in this picture a unitarily-correctable subsystem is noiseless, we can state the following \textbf{Theorem 4:} \textit{Consider the evolution \eqref{Schrodinger} driven by the Hamiltonian \eqref{Hamiltonian}. Let $\widetilde{H}_S(t)$ and $\widetilde{S}_j(t)$ be the system Hamiltonian and the interaction operators \eqref{HI} in the rotating frame \eqref{rotatingframe} with $U(t)$ given by Eq.~\eqref{defineU}. Then the subsystem $\mathcal{H}^A$ in the decomposition \eqref{decomposition} is correctable by $U(t)$ during this evolution, if and only if \begin{gather} \widetilde{S}_{j}(t)P^{AB}=I^{A}\otimes C^B_j(t),\hspace{0.2cm}\ C^B_j(t)\in\mathcal{B}(\mathcal{H}^B),\hspace{0.2cm}\forall j\label{Hamilton1} \end{gather} and \begin{gather} (\widetilde{H}_S(t)+H'(t))P^{AB}=I^A\otimes D^B(t),\hspace{0.2cm} D^B(t)\in \mathcal{B}(\mathcal{H}^B).\label{Hamilton2} \end{gather}} \textbf{Proof:} With respect to the evolution in the rotating frame \eqref{rotatingframe}, the subsystem $\mathcal{H}^A$ is noiseless. The theorem follows from the conditions for noiseless subsystems under Hamiltonian dynamics \cite{ShaLid05} applied to the Hamiltonian in the rotating frame. Note that the fact that the operator on the right-hand side of Eq.~\eqref{Hamilton2} sends states from $\mathcal{H}^A\otimes\mathcal{H}^B$ to $\mathcal{H}^A\otimes\mathcal{H}^B$ implies that the off-diagonal terms of $\widetilde{H}_S(t)+H'(t)$ in the block basis corresponding to the decomposition \eqref{decomposition} vanish, i.e., $P^{AB}(\widetilde{H}_S(t)+H'(t))P_{\mathcal{K}}=0$. \textbf{Comment:} The Hamiltonian $H'(t)$ can be obtained from conditions \eqref{Hamilton1} and \eqref{Hamilton2}. We can choose $D^B(t)=0$ and define $H'(t)=-\widetilde{H}_S(t)$, which together with Eq.~\eqref{defineU} yields \begin{equation} i\frac{dU(t)}{dt}=-U(t)H_S(t), \end{equation} i.e., \begin{equation} U^{\dagger}(t)=\mathcal{T}\textrm{exp}\left(-i\int_0^t H_S(\tau) d\tau \right). \end{equation} This simply means that the evolution of the subspace that contains the encoded information is driven by the system Hamiltonian. The conditions again can be separated into two parts. The fact that the right-hand sides of Eq.~\eqref{Hamilton1} and Eq.~\eqref{Hamilton2} act trivially on $\mathcal{H}^A$ is necessary and sufficient for the information stored in the instantaneous subsystem $\mathcal{H}^A(t)$ to undergo trivial dynamics at time $t$. 
The fact that the right-hand-sides of these equations do not take states outside of $\mathcal{H}^A\otimes\mathcal{H}^B$ is necessary and sufficient for states not to leave the subspace $U^{\dagger}(t)\mathcal{H}^A\otimes\mathcal{H}^B$ as it evolves. The conditions for general correctability, however, are not obtained directly from Theorem 4 in analogy to the case of Markovian decoherence. Such conditions would certainly be sufficient, but it turns out that they are not necessary. This is because after applying the unitary recovery operation, the state of the gauge subsystem $\mathcal{H}^{B'}$ (which is generally larger than the initial gauge subsystem $\mathcal{H}^B$) plus the environment would generally belong to a proper subspace of $\mathcal{H}^{B'}\otimes\mathcal{H}^E$ which cannot be factored into a subsystem belonging to $\mathcal{H}^S$ and a subsystem belonging to $\mathcal{H}^E$. Thus it is not necessary that the Hamiltonian acts trivially on the factor $\mathcal{H}^A$ in $\mathcal{H}^A\otimes\mathcal{H}^{B'}\otimes\mathcal{H}^E$, but only on the factor $\mathcal{H}^A$ in $\mathcal{H}^A\otimes \widetilde{\mathcal{H}}^{BE}$, where $\widetilde{\mathcal{H}}^{BE}$ is the proper subspace in question. In the case of unitary correctability, tracing out the environment provides necessary conditions because $\mathcal{H}^{B'}= \mathcal{H}^{B}$, and hence $\mathcal{H}^{B}\otimes \mathcal{H}^E$ is fully occupied. Let \begin{equation} \mathcal{H}^S=\mathcal{H}^A\otimes \mathcal{H}^{B'}\oplus \mathcal{K'}\label{maximaldecomposition} \end{equation} be a decomposition of the Hilbert space of the system such that the factor $\mathcal{H}^{B'}\supset \mathcal{H}^{B}$ has the largest possible dimension. Since the evolution of the state of the system plus the environment is unitary, at time $t$ the initial subspace $\mathcal{H}^A\otimes\mathcal{H}^B\otimes\mathcal{H}^E$ will be transformed to some other subspace of $\mathcal{H}^S\otimes\mathcal{H}^E$, which is unitarily related to the initial one. Applying the unitary recovery operation $U(t)$ returns this subspace to the form $\mathcal{H}^A\otimes\widetilde{\mathcal{H}}^{BE}(t)$, where $\widetilde{\mathcal{H}}^{BE}(t)$ is a subspace of $\mathcal{H}^{B'}\otimes\mathcal{H}^E$. Clearly, there exists a unitary operator $W_0(t): \mathcal{H}^{B'}\otimes\mathcal{H}^E\rightarrow \mathcal{H}^{B'}\otimes\mathcal{H}^E$ that maps this subspace to the initial subspace $\mathcal{H}^B\otimes \mathcal{H}^E$: \begin{equation} W_0(t) \widetilde{P}^{BE}(t) W^{\dagger}_0(t)=P^{BE}.\label{defineW} \end{equation} (Here $\widetilde{P}^{BE}(t)$ denotes the projector on $\widetilde{\mathcal{H}}^{BE}(t)$.) Note that as an operator on the entire Hilbert space, this unitary has the form $W_0(t)\equiv I^A\otimes W_0^{B'E}(t)\oplus I_{\mathcal{K'}}\otimes I^E$. Let us define the frame \begin{equation} \widehat{O}(t)=W(t)O(t)W^{\dagger}(t),\label{rotatingframe2} \end{equation} where \begin{equation} i\frac{dW(t)}{dt}=H''(t)W(t).\label{defineW2} \end{equation} Then the evolution driven by a Hamiltonian $G(t)$, in this frame will be driven by $\widehat{G}(t)+H''(t)$. \textbf{Theorem 5:} \textit{Let $\widetilde{O}(t)$ denote the image of an operator $O(t)\in \mathcal{B}(\mathcal{H})$ under the transformation \eqref{rotatingframe} with $U(t)\in\mathcal{B}(\mathcal{H}^S)$ given by Eq.~\eqref{defineU} ($H'(t)\in \mathcal{B}(\mathcal{H}^S)$), and let $\widehat{O}(t)$ denote the image of $O(t)$ under the transformation \eqref{rotatingframe2} with $W(t)$ given by Eq.~\eqref{defineW2}. 
Let $P^{ABE}$ be the projector on $\mathcal{H}^A\otimes\mathcal{H}^B\otimes\mathcal{H}^E$. The subsystem $\mathcal{H}^A$ in the decomposition \eqref{decompositionfull} is recoverable by $U(t)$ during the evolution driven by the system-environment Hamiltonian $H_{SE}(t)$, if and only if there exists $H''(t)\in \mathcal{B}(\mathcal{H}^{B'}\otimes\mathcal{H}^E)$, where $\mathcal{H}^{B'}$ was defined in \eqref{maximaldecomposition}, such that \begin{gather} (\widehat{\widetilde{H}}_{SE}(t)+\widehat{H}'(t)+H''(t))P^{ABE}=I^A\otimes D^{BE}(t),\label{Hamcond}\\ \hspace{0.2cm} D^{BE}(t)\in \mathcal{B}(\mathcal{H}^{B}\otimes\mathcal{H}^E), \hspace{0.2cm}\forall t.\notag \end{gather}} \textbf{Proof:} Assume that the information encoded in $\mathcal{H}^A$ is unitarily recoverable by $U(t)$. Consider the evolution in the frame defined through the unitary operation $W(t)U(t)$, where $W(t)=W_0(t)$ for some differentiable $W_0(t)$ that satisfies the property \eqref{defineW}. In this frame, which can be obtained by consecutively applying the transformations \eqref{rotatingframe} and \eqref{rotatingframe2}, the Hamiltonian is $\widehat{\widetilde{H}}_{SE}(t)+\widehat{H}'(t)+H''(t)$. Under this Hamiltonian, the subsystem $\mathcal{H}^A$ must be noiseless and no states should leave the subspace $\mathcal{H}^A\otimes\mathcal{H}^B\otimes\mathcal{H}^E$. It is straightforward to see that the first requirement means that $\mathcal{H}^A$ must be acted upon trivially by all terms of the Hamiltonian, hence the factor $I^A$ on the right-hand side of Eq.~\eqref{Hamcond}. At the same time, the subspace $\mathcal{\mathcal{H}^B\otimes\mathcal{H}^E}$ must be preserved by the action of the Hamiltonian, which implies that the factor $D^{BE}(t)$ on the right-hand side of Eq.~\eqref{Hamcond} must send states from $\mathcal{H}^B\otimes\mathcal{H}^E$ to $\mathcal{H}^B\otimes\mathcal{H}^E$. Note that this implies that the off-diagonal terms of the Hamiltonian in the block form corresponding to the decomposition \eqref{decompositionfull} must vanish, i.e., $P^{ABE}(\widehat{\widetilde{H}}_{SE}(t)+\widehat{H}'(t)+H''(t))P^{ABE}_{\perp}=0$, where $P^{ABE}_{\perp}$ denoted the projector on $\mathcal{K}\otimes\mathcal{H}^E$. Obviously, these conditions are also sufficient, since they ensure that in the frame defined by the unitary transformation $W(t)U(t)$, the evolution of $\mathcal{H}^A$ is trivial and states inside the subspace $\mathcal{H}^B\otimes\mathcal{H}^E$ evolve unitarily under the action of the Hamiltonian $D^{BE}(t)$. Since $W(t)$ acts on $\mathcal{H}^{B'}\otimes\mathcal{H}^E$, subsystem $\mathcal{H}^A$ is invariant also in the rotating frame \eqref{rotatingframe}. This means that $\mathcal{H}^A$ is recoverable by the unitary $U(t)$. \textbf{Comment:} Similarly to the previous cases, the unitary operators $U(t)$ and $W(t)$ can be obtained iteratively from Eq.~\eqref{Hamcond} if the decomposition \eqref{decomposition} is given. Since $H''(t)$ acts on $\mathcal{H}^{B'}\otimes\mathcal{H}^E$, from Eq.~\eqref{Hamcond} it follows that the operator $\widehat{\widetilde{H}}_{SE}(t)+\widehat{H}'(t)$ must satisfy \begin{gather} (\widehat{\widetilde{H}}_{SE}(t)+\widehat{H}'(t))P^{ABE}=I^A\otimes F^{B'E}(t), \hspace{0.4cm} F^{B'E}(t)\in \mathcal{B}(\mathcal{H}^{B'}\otimes\mathcal{H}^E). \label{determineH'} \end{gather} At the same time, we can choose $H''(t)$ so that $D^{BE}(t)=0$. 
This corresponds to \begin{equation} W(t)\widetilde{\mathcal{H}}^{BE}(t)= \mathcal{H}^B\otimes\mathcal{H}^E, \end{equation} where $\widetilde{\mathcal{H}}^{BE}(t)$ was defined in the discussion before Theorem 5. To ensure $D^{BE}(t)=0$, we can choose \begin{equation} H''(t)=-\widehat{\widetilde{H}}_{SE}(t)-\widehat{H}'(t)+\mathcal{P}^{ABE}_{\perp}\left(\widehat{\widetilde{H}}_{SE}(t)+\widehat{H}'(t)\right),\label{determineH''} \end{equation} where $\mathcal{P}^{ABE}_{\perp}(\cdot)=P^{ABE}_{\perp}(\cdot)P^{ABE}_{\perp}$. For $t=0$ ($U(0)=I$, $W(0)=I$), we can find a solution for $\widehat{H}'(0)={H}'(0)$ from Eq.~\eqref{determineH'}, given the Hamiltonian $\widehat{\widetilde{H}}_{SE}(0)=H_{SE}(0)$. Plugging the solution in Eq.~\eqref{determineH''}, we can obtain $H''(0)$. For the unitaries after a single time step $dt$ we then have \begin{equation} U(dt)=I-iH'(0)dt+O(dt^2), \end{equation} \begin{equation} W(dt)=I-iH''(0)dt+O(dt^2). \end{equation} Using $U(dt)$ and $W(dt)$ we can calculate $\widehat{\widetilde{H}}_{SE}(dt)$ according to Eq.~\eqref{rotatingframe} and Eq.~\eqref{rotatingframe2}. Then we can solve Eq.~\eqref{determineH'} for $\widehat{H}'(dt)=W(dt)H'(dt)W^{\dagger}(dt)$, which we can use in Eq.~\eqref{determineH''} to find $H''(dt)$, and so on. Note that here we cannot specify a simple expression for $\widehat{H}'(t)$ in terms of $\widehat{\widetilde{H}}_{SE}(t)$, since we do not have the freedom to choose $F^{B'E}(t)$ in Eq.~\eqref{determineH'} fully, due to the restriction that $H'(t)$ acts locally on $\mathcal{H}^S$. We point out that condition \eqref{Hamcond} can again be understood as consisting of two parts---the fact that the right-hand side acts trivially on $\mathcal{H}^A$ is necessary and sufficient for the instantaneous dynamics undergone by the subsystem $U^{\dagger}(t)W^{\dagger}(t)\mathcal{H}^A$ at time $t$ to be trivial, while the fact that it preserves $\mathcal{H}^A\otimes\mathcal{H}^{B}\otimes\mathcal{H}^E$ is necessary and sufficient for states not to leave $U^{\dagger}(t)W^{\dagger}(t) \mathcal{H}^A\otimes\mathcal{H}^{B}\otimes\mathcal{H}^E$ as it evolves. It is tempting to make an argument, similar to the one we presented for the Markovian case, about the possible relation between the specified recovery unitary operation $U(t)$ and the optimal error-correcting map in the case of approximate error correction. If the encoded information is not perfectly preserved, we can construct the unitary operation $U(t)$ as explained in the comment after Theorem 5 by optimally approximating Eq.~\eqref{determineH'} and Eq.~\eqref{determineH''}. However, in this case the evolution is not irreversible and the information that leaks out of the system may return to it. Thus we cannot argue that the unitary map specified in this manner would optimally track the remaining encoded information. \subsection*{6.5.2 \hspace{2pt} Conditions depending on the initial state of the environment} \addcontentsline{toc}{subsection}{6.5.2 \hspace{0.15cm} Conditions depending on the initial state of the environment} We can easily extend Theorem 5 to the case when the initial state of the environment belongs to a particular subspace $\mathcal{H}^{E_0}\subseteq\mathcal{H}^E$. The only modification is that instead of $P^{ABE}$ in Eq.~\eqref{Hamcond}, we must have $P^{ABE_0}$, where $P^{ABE_0}$ is the projector on $\mathcal{H}^A\otimes\mathcal{H}^B\otimes \mathcal{H}^{E_0}$, and on the right-hand side we must have $D^{BE_0}(t)\in \mathcal{B}(\mathcal{H}^{B}\otimes\mathcal{H}^{E_0})$.
The following two theorems follow by arguments analogous to those for Theorem 5. We assume the same definitions as in Theorem 5 (Eq.~\eqref{rotatingframe}, Eq.~\eqref{defineU}, Eq.~\eqref{rotatingframe2}, Eq.~\eqref{defineW2}), except that in the second theorem we restrict the definition of $H''(t)$. \textbf{Theorem 6:} \textit{Let $P^{ABE_0}$ be the projector on $\mathcal{H}^A\otimes\mathcal{H}^B\otimes\mathcal{H}^{E_0}$, where $\mathcal{H}^{E_0}\subseteq \mathcal{H}^{E}$. The subsystem $\mathcal{H}^A$ in the decomposition \eqref{decompositionfull} is recoverable by $U(t)\in\mathcal{B}(\mathcal{H}^S)$ during the evolution driven by the system-environment Hamiltonian $H_{SE}(t)$ when the state of the environment is initialized inside $\mathcal{H}^{E_0}$, if and only if there exists $H''(t)\in \mathcal{B}(\mathcal{H}^{B'}\otimes\mathcal{H}^E)$ such that \begin{gather} (\widehat{\widetilde{H}}_{SE}(t)+\widehat{H}'(t)+H''(t))P^{ABE_0}=I^A\otimes D^{BE_0}(t),\label{Hamcondano}\\ \hspace{0.2cm} D^{BE_0}(t)\in \mathcal{B}(\mathcal{H}^{B}\otimes\mathcal{H}^{E_0}), \hspace{0.2cm}\forall t.\notag \end{gather}} The conditions for unitary correctability in this case require the additional restriction that $W(t)$ acts on $\mathcal{H}^B\otimes \mathcal{H}^E$ and not on $\mathcal{H}^{B'}\otimes \mathcal{H}^E$, since in this case $U(t)$ brings the state inside $\mathcal{H}^A\otimes\mathcal{H}^B\otimes\mathcal{H}^E$. Notice that when the state of the environment is initialized in a particular subspace, we cannot use conditions for unitary correctability similar to those in Theorem 4. This is because after the correction $U(t)$, the state of the gauge subsystem plus environment may belong to a proper subspace of $\mathcal{H}^B\otimes \mathcal{H}^E$ and tracing out the environment would not yield necessary conditions. \textbf{Theorem 7:} \textit{Let $P^{ABE_0}$ be the projector on $\mathcal{H}^A\otimes\mathcal{H}^B\otimes\mathcal{H}^{E_0}$, where $\mathcal{H}^{E_0}\subseteq \mathcal{H}^{E}$. The subsystem $\mathcal{H}^A$ in the decomposition \eqref{decompositionfull} is correctable by $U(t)\in\mathcal{B}(\mathcal{H}^S)$ during the evolution driven by the system-environment Hamiltonian $H_{SE}(t)$ when the state of the environment is initialized inside $\mathcal{H}^{E_0}$, if and only if there exists $H''(t)\in \mathcal{B}(\mathcal{H}^{B}\otimes\mathcal{H}^E)$ such that \begin{gather} (\widehat{\widetilde{H}}_{SE}(t)+\widehat{H}'(t)+H''(t))P^{ABE_0}=I^A\otimes D^{BE_0}(t),\label{Hamcondanother}\\ \hspace{0.2cm} D^{BE_0}(t)\in \mathcal{B}(\mathcal{H}^{B}\otimes\mathcal{H}^{E_0}), \hspace{0.2cm}\forall t.\notag \end{gather}} Notice that the conditions of Theorem 6 and Theorem 7 do not depend on the particular initial state of the environment but only on the subspace to which it belongs. This can be understood by noticing that different environment states inside the same subspace give rise to Kraus operators \eqref{Krausoperator} which are linear combinations of each other. The discretization of errors in operator quantum error correction \cite{KLP05, KLPL06} implies that all such maps will be correctable. The conditions for correctable dynamics dependent on the state of the environment could be useful if we are able to prepare the state of the environment in the necessary subspace. The environment, however, is generally outside of the experimenter's control.
Nevertheless, it is conceivable that the experimenter may have some control over the environment (for example, by varying its temperature), which for certain Hamiltonians could bring the environment state close to a subspace for which the evolution of the system is correctable. It is important to point out that according to the result we derive in the next chapter, the error due to imperfect initialization of the bath will not increase under the evolution. \section*{6.6 \hspace{2pt} Summary and outlook} \addcontentsline{toc}{section}{6.6 \hspace{0.15cm} Summary and outlook} We have derived conditions for correctability of subsystems under continuous decoherence. We first presented conditions for the case when the evolution can be described by a CPTP linear map. These conditions are equivalent to those known for operator codes \cite{KLP05, KLPL06}, except that we consider them for time-dependent noise processes. We then derived conditions for the case of Markovian decoherence and general Hamiltonian evolution of the system and the environment. We derived conditions for both unitary correctability and general correctability, using the fact that correctable subsystems are unitarily recoverable \cite{KS06}. The conditions for correctability in both Markovian and Hamiltonian evolution can be understood as consisting of two parts---the first is necessary and sufficient for lack of noise inside the instantaneous subsystem that contains the information, and the second is necessary and sufficient for states not to leave the subsystem as it evolves with time. In this sense, the new conditions can be thought of as generalizations of the conditions for noiseless subsystems to the case where the subsystem is time-dependent. In the Hamiltonian case, the conditions for unitary correctability concern only the action of the Hamiltonian on the system, whereas the conditions for general correctability concern the entire system-bath Hamiltonian. The reason for this is that the state of the gauge subsystem plus the environment generally belongs to a particular subspace, which does not factor into sectors belonging separately to the system and the environment. We also derived conditions in the Hamiltonian case that depend on the initial state of the environment. These conditions could be useful, in principle, since errors due to imperfect initialization of the environment do not increase under the evolution. Furthermore, these conditions could provide a better understanding of correctability under CPTP maps, since a CPTP map that results from Hamiltonian evolution depends on both the Hamiltonian and the initial state of the environment. An interesting generalization of this work would be to derive similar conditions for the case of the Nakajima-Zwanzig or the TCL master equations. We discussed possible implications of the conditions we derived for the problem of optimal recovery in the case of imperfectly preserved information. We hope that the results obtained in this study will provide insight into the mechanisms of information flow under decoherence that could be useful in the area of approximate error correction as well.
\chapter*{Chapter 7: \hspace{1pt} Robustness of operator quantum error correction against initialization errors} \addcontentsline{toc}{chapter}{Chapter 7:\hspace{0.15cm} Robustness of operator quantum error correction against initialization errors} The conditions we derived in the previous chapter, as well as the standard OQEC conditions for discrete errors, depend on the assumption that states are perfectly initialized inside the subspace factored by the correctable subsystem. In practice, however, perfect initialization of the state may not be easy to achieve. Hence, it is important to understand to what extent the preparation requirement can be relaxed. In this chapter, we examine the performance of OQEC in the case of imperfect encoding. \section*{7.1 \hspace{2pt} Preliminaries} \addcontentsline{toc}{section}{7.1 \hspace{0.15cm} Preliminaries} As can be seen from the definitions \eqref{noiselesssystem} and \eqref{correctablesystem}, the concept of noiseless subsystem is a cornerstone in the theory of OQEC; it serves as a basis for the definition of correctable subsystem and error correction in general. As shown in Ref.~\cite{ShaLid05}, in order to ensure perfect noiselessness of a subsystem in the case of imperfect initialization, the noise process has to satisfy more restrictive conditions than those required in the case of perfect initialization. It was believed that these conditions are necessary if a noiseless (or more generally decoherence-free) subsystem is to be robust against arbitrarily large initialization errors. The fundamental relation between a noiseless subsystem and a correctable subsystem implies that in the case of imperfect initialization, more restrictive conditions would be needed for OQEC codes as well. In this chapter we show that with respect to the ability of a code to protect from errors, more restrictive conditions are not necessary. For this purpose, we define a measure of the fidelity between the encoded information in two states for the case of subsystem encoding. We first give an intuitive motivation for the definition, and then study the properties of the measure. We then show that the effective noise that can arise inside the code due to imperfect initialization under the standard conditions, is such that it can only increase the fidelity of the encoded information with the information encoded in a perfectly prepared state. This robustness against initialization errors is shown to hold also when the state is subject to encoded operations. \section*{7.2 \hspace{2pt} Review of the noiseless-subsystem conditions on the Kraus operators} \addcontentsline{toc}{section}{7.2 \hspace{0.15cm} Review of the noiseless-subsystem conditions on the Kraus operators} For simplicity, we consider again the case where information is stored in only one subsystem, i.e., we consider the decomposition \eqref{decomposition}. The definition of noiseless subsystem \eqref{noiselesssystem} implies that the information encoded in $\mathcal{B}(\mathcal{H}^A)$ remains invariant after the process $\mathcal{E}$, if the initial density operator of the system $\rho(0)$ belongs to $\mathcal{B}(\mathcal{H}^A\otimes\mathcal{H}^B)$. If, however, one allows imperfect initialization, $\rho(0)\neq\mathcal{P}^{AB}(\rho(0))$, this need not be the case. 
Consider the ``initialization-free" analogue of the definition \eqref{noiselesssystem}: \begin{gather} \textrm{Tr}_B\{(\mathcal{P}^{AB}\circ\mathcal{E})(\sigma)\}=\textrm{Tr}_B\{\mathcal{P}^{AB}(\sigma)\},\label{IFnoiselesssystem} \hspace{0.4cm} \textrm{for all } \sigma\in \mathcal{B}(\mathcal{H}^S).\notag \end{gather} Obviously Eq.~\eqref{IFnoiselesssystem} implies Eq.~\eqref{noiselesssystem}, but the reverse is not true. As shown in \cite{ShaLid05}, the definition \eqref{IFnoiselesssystem} imposes more restrictive conditions on the channel $\mathcal{E}$ than those imposed by \eqref{noiselesssystem}. To see this, consider the form of the Kraus operators $M_i$ of $\mathcal{E}$ (\eqref{Kraus1}) in the block basis corresponding to the decomposition \eqref{decomposition}. From a result derived in \cite{ShaLid05} it follows that the subsystem $\mathcal{H}^A$ is noiseless in the sense of Eq.~\eqref{noiselesssystem}, if and only if the Kraus operators have the form \begin{equation} M_i=\begin{bmatrix} I^A\otimes C_i^B&D_i\\ 0&G_i \end{bmatrix},\label{Krausoperatorsblock} \end{equation} where the upper left block corresponds to the subspace $\mathcal{H}^A\otimes\mathcal{H}^B$, and the lower right block corresponds to $\mathcal{K}$. The completeness relation \eqref{completeness} implies the following conditions on the operators $C^B_i$, $D_i$, and $G_i$: \begin{gather} \underset{i}{\sum}C_i^{\dagger B}C_i^B=I^B,\label{one}\\ \underset{i}{\sum}I^A\otimes C_i^{\dagger B}D_i=0,\label{two}\\ \underset{i}{\sum}(D_i^{\dagger}D_i+G_i^{\dagger}G_i)=I_{\mathcal{K}}\label{three}. \end{gather} In the same block basis, a perfectly initialized state $\rho$ and its image under the map \eqref{Krausoperatorsblock} have the form \begin{gather} {\rho}=\begin{bmatrix} {\rho}_1&0\\ 0&0 \end{bmatrix}, \hspace{0.2cm} \mathcal{E}(\rho)=\begin{bmatrix} \rho_1'&0\\ 0&0 \end{bmatrix},\label{perfectini} \end{gather} where $\rho_1'=\underset{i}{\sum}I^A\otimes C_i^B\rho_1 I^A\otimes {C_i^{\dagger B}}$. Using the linearity and cyclic invariance of the trace together with Eq.~\eqref{one}, we obtain \begin{gather} \textrm{Tr}_B\{(\mathcal{P}^{AB}\circ\mathcal{E})(\rho) \}=\textrm{Tr}_B\{\underset{i}{\sum}I^A\otimes C_i^B\rho_1 I^A\otimes {C_i^{\dagger B}}\}\notag\\=\textrm{Tr}_B\{\rho_1 \underbrace{\underset{i}{\sum}I^A\otimes C_i^{\dagger B}C_i^B}_{I^A\otimes I^B}\}=\textrm{Tr}_B\{\mathcal{P}^{AB}(\rho)\},\label{reducedrho} \end{gather} i.e., the reduced operator on $\mathcal{H}^A$ remains invariant. On the other hand, an imperfectly initialized state $\tilde{\rho}$ and its image have the form \begin{equation} \tilde{\rho}=\begin{bmatrix} \tilde{\rho}_1&\tilde{\rho}_2\\ \tilde{\rho}_2^{\dagger}&\tilde{\rho}_3 \end{bmatrix}, \hspace{0.2cm} \mathcal{E}(\tilde{\rho})=\begin{bmatrix} \tilde{\rho}_1'&\tilde{\rho}_2'\\ \tilde{\rho}_2'^{\dagger}&\tilde{\rho}_3' \end{bmatrix}.\label{imperfect} \end{equation} Here $\tilde{\rho}_2$ and/or $\tilde{\rho}_3$ are non-vanishing, and \begin{gather} \tilde{\rho}_1'=\underset{i}{\sum}(I^A\otimes C_i^B\tilde{\rho}_1 I^A\otimes {C_i^{\dagger B}}+D_i\tilde{\rho}_2^{\dagger}I^A\otimes {C^{\dagger B}_i}+I^A\otimes C^B_i\tilde{\rho}_2 D_i^{\dagger}+D_i\tilde{\rho}_3D_i^{\dagger}),\\ \tilde{\rho}_2'=\underset{i}{\sum}(I^A\otimes C_i^B\tilde{\rho}_2 G_i^{\dagger}+D_i\tilde{\rho}_3G_i^{\dagger}),\\ \tilde{\rho}_3'=\underset{i}{\sum}G_i\tilde{\rho}_3G_i^{\dagger}. 
\end{gather} In this case, using the linearity and cyclic invariance of the trace together with Eq.~\eqref{one} and Eq.~\eqref{two}, we obtain \begin{eqnarray} \textrm{Tr}_B\{(\mathcal{P}^{AB}\circ\mathcal{E})(\tilde{\rho}) \}&=&\textrm{Tr}_B\{\underset{i}{\sum}(I^A\otimes C_i^B\tilde{\rho}_1 I^A\otimes {C_i^{\dagger B}} +D_i\tilde{\rho}_2^{\dagger}I^A\otimes {C^{\dagger B}_i}\notag\\ && +I^A\otimes C^B_i\tilde{\rho}_2 D_i^{\dagger}+D_i\tilde{\rho}_3D_i^{\dagger})\}\label{reducedrhotilde} \\&=&\textrm{Tr}_B\{\tilde{\rho}_1 \underbrace{\underset{i}{\sum}I^A\otimes C_i^{\dagger B}C_i^B}_{I^A\otimes I^B}\}+\textrm{Tr}_B\{(\underbrace{\underset{i}{\sum}{I^A\otimes C^{\dagger B}_i}D_i}_{0})\tilde{\rho}_2^{\dagger} \}\notag\\ && +\textrm{Tr}_B\{\tilde{\rho}_2(\underbrace{\underset{i}{\sum}{I^A\otimes C^{\dagger B}_i}D_i}_{0})^{\dagger}\}+\textrm{Tr}_B\{\underset{i}{\sum}D_i\tilde{\rho}_3D_i^{\dagger} \}\notag\\ &=&\textrm{Tr}_B\tilde{\rho}_1+\textrm{Tr}_B\{\underset{i}{\sum}D_i\tilde{\rho}_3D_i^{\dagger} \}\neq \textrm{Tr}_B\tilde{\rho}_1\equiv\textrm{Tr}_B\{\mathcal{P}^{AB}(\tilde{\rho})\},\notag \end{eqnarray} i.e., the reduced operator on $\mathcal{H}^A$ is not preserved. It is easy to see that the reduced operator would be preserved for every imperfectly initialized state if and only if we impose the additional condition \begin{equation} D_i=0, \hspace{0.2cm} \textrm{for all } i.\label{extraconstraint} \end{equation} This further restriction to the form of the Kraus operators is equivalent to the requirement that there are no transitions from the subspace $\mathcal{K}$ to the subspace $\mathcal{H}^A\otimes\mathcal{H}^B$ under the process $\mathcal{E}$. This is in addition to the requirement that no states leave $\mathcal{H}^A\otimes\mathcal{H}^B$, which is ensured by the vanishing lower left blocks of the Kraus operators \eqref{Krausoperatorsblock}. Condition \eqref{extraconstraint} automatically imposes an additional restriction on the error-correction conditions, since if $\mathcal{R}$ is an error-correcting map in this ``initialization-free" sense, the map $\mathcal{R}\circ \mathcal{E}$ would have to satisfy Eq.~\eqref{extraconstraint}. But is this constraint necessary from the point of view of the ability of the code to correct further errors? Notice that since $\tilde{\rho}$ is a positive operator, $\tilde{\rho_3}$ is positive, and hence $\textrm{Tr}_B\{\underset{i}{\sum}D_i\tilde{\rho}_3D_i^{\dagger} \}$ is positive. The reduced operator on subsystem $\mathcal{H}^A$, although unnormalized, can be regarded as a (partial) probability mixture of states on $\mathcal{H}^A$. The noise process modifies the original mixture ($\textrm{Tr}_B\tilde{\rho}_1$) by \textit{adding} to it another partial mixture (the positive operator $\textrm{Tr}_B\{\underset{i}{\sum}D_i\tilde{\rho}_3D_i^{\dagger} \}$). Since the weight of any state already present in the mixture can only increase by this process, this should not worsen the faithfulness with which information is encoded in $\tilde{\rho}$. In order to make this argument rigorous, however, we need a measure that quantifies the faithfulness of the encoding. 
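Before introducing such a measure, the following toy example (our own construction, with $\dim\mathcal{H}^A=2$, a trivial gauge factor $\mathcal{H}^B$, and a one-dimensional $\mathcal{K}$) illustrates Eqs.~\eqref{reducedrho} and \eqref{reducedrhotilde} numerically: a perfectly initialized state keeps its reduced operator on $\mathcal{H}^A$, while an imperfectly initialized one picks up exactly the positive term $\textrm{Tr}_B\{\underset{i}{\sum}D_i\tilde{\rho}_3D_i^{\dagger}\}$.
\begin{verbatim}
import numpy as np

# Toy Kraus operators with the block structure discussed above: the code
# subspace is spanned by |0>,|1>; K is spanned by |2>.  M1 has a nonzero
# D block, mapping K into the code subspace.
p = 0.4
M0 = np.diag([1.0, 1.0, np.sqrt(1 - p)]).astype(complex)
M1 = np.zeros((3, 3), dtype=complex)
M1[0, 2] = np.sqrt(p)
kraus = [M0, M1]
assert np.allclose(sum(M.conj().T @ M for M in kraus), np.eye(3))  # completeness

P = np.diag([1.0, 1.0, 0.0]).astype(complex)        # projector on the code subspace

def channel(rho):
    return sum(M @ rho @ M.conj().T for M in kraus)

def reduced_A(rho):
    # Tr_B{P^{AB}(rho)}; H^B is one-dimensional, so this is the upper-left block
    return (P @ rho @ P)[:2, :2]

rho = np.zeros((3, 3), dtype=complex)                # perfectly initialized state
rho[:2, :2] = np.array([[0.7, 0.1], [0.1, 0.3]])
print(np.allclose(reduced_A(channel(rho)), reduced_A(rho)))       # True

rho_t = 0.8 * rho + 0.2 * np.diag([0.0, 0.0, 1.0])   # imperfectly initialized state
print(np.allclose(reduced_A(channel(rho_t)), reduced_A(rho_t)))   # False: extra weight on |0><0|
\end{verbatim}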
\section*{7.3 \hspace{2pt} Fidelity between the encoded information in two states} \addcontentsline{toc}{section}{7.3 \hspace{0.15cm} Fidelity between the encoded information in two states} \subsection*{7.3.1 \hspace{2pt} Motivating the definition} \addcontentsline{toc}{subsection}{7.3.1 \hspace{0.15cm} Motivating the definition} If we consider two states with density operators $\tau$ and $\upsilon$, a good measure of the faithfulness with which one state represents the other is given by the fidelity between the states: \begin{equation} F(\tau, \upsilon)=\textrm{Tr}\sqrt{\sqrt{\tau}\upsilon\sqrt{\tau}}.\label{fidelity} \end{equation} This quantity can be thought of as a square root of a generalized ``transition probability" between the two states $\tau$ and $\upsilon$ as defined by Uhlmann \cite{Uhl76}. Another interpretation due to Fuchs \cite{Fuchs96} gives an operational meaning of the fidelity as the minimal overlap between the probability distributions generated by all possible generalized measurements on the states: \begin{equation} F(\tau, \upsilon)=\underset{\{M_i\}}{\textrm{min}}\underset{i}{\sum}\sqrt{\textrm{Tr}\{ M_i\tau\}}\sqrt{\textrm{Tr}\{ M_i\upsilon\}}.\label{fidelityoper} \end{equation} Here, minimum is taken over all positive operators $\{M_i\}$ that form a positive operator valued measure (POVM) \cite{Kraus83}, $\underset{i}{\sum}M_i=I^S$. In our case, we need a quantity that compares the \textit{encoded} information in two states. Clearly, the standard fidelity between the states will not do since it measures the similarity between the states on the entire Hilbert space. The encoded information, however, concerns only the reduced operators on subsystem $\mathcal{H}^A$. In view of this, we propose the following \textbf{Definition 1:} Let $\tau$ and $\upsilon$ be two density operators on a Hilbert space $\mathcal{H}^S$ with decomposition \eqref{decomposition}. The fidelity between the information encoded in subsystem $\mathcal{H}^A$ in the two states is given by: \begin{equation} F^A(\tau, \upsilon)=\underset{\tau', \upsilon'}{\textrm{max}}F(\tau',\upsilon'),\label{measure} \end{equation} where maximum is taken over all density operators $\tau'$ and $\upsilon'$ that have the same reduced operators on $\mathcal{H}^A$ as $\tau$ and $\upsilon$: $\textrm{Tr}_B\{ \mathcal{P}^{AB}(\tau') \} = \textrm{Tr}_B\{ \mathcal{P}^{AB}(\tau) \}$, $\textrm{Tr}_B\{ \mathcal{P}^{AB}(\upsilon') \} = \textrm{Tr}_B\{ \mathcal{P}^{AB}(\upsilon) \}$. The intuition behind this definition is that by maximizing over all states that have the same reduced operators on $\mathcal{H}^A$ as the states being compared, we ensure that the measure does not penalize for differences between the states that are not due specifically to differences between the reduced operators. \subsection*{7.3.2 \hspace{2pt} Properties of the measure} \addcontentsline{toc}{subsection}{7.3.2 \hspace{0.15cm} Properties of the measure} \textbf{Property 1 (Symmetry):} Since the fidelity is symmetric with respect to its inputs, it is obvious from Eq.~\eqref{measure} that $F^A$ is also symmetric: \begin{equation} F^A(\tau, \upsilon)=F^A(\upsilon, \tau). \end{equation} Although intuitive, the definition \eqref{measure} does not allow for a simple calculation of $F^A$. We now derive an equivalent form for $F^A$, which is simple and easy to compute. 
Let $\mathcal{P}_{\mathcal{K}}(\cdot)=P_\mathcal{K}(\cdot)P_\mathcal{K}$ denote the superoperator projector on $\mathcal{B}(\mathcal{K})$, and let \begin{gather} \rho^A\equiv\textrm{Tr}_B\{\mathcal{P}^{AB}(\rho)\}/\textrm{Tr}\{ \mathcal{P}^{AB}(\rho)\} \end{gather} denote the \textit{normalized} reduced operator of $\rho$ on $\mathcal{H}^A$. \textbf{Theorem 1:} The definition \eqref{measure} is equivalent to \begin{eqnarray} F^A(\tau, \upsilon)=f^A(\tau,\upsilon)\label{easyform} +\sqrt{\textrm{Tr}\{ \mathcal{P}_{\mathcal{K}}(\tau)\}\textrm{Tr}\{ \mathcal{P}_{\mathcal{K}}(\upsilon)\}}, \end{eqnarray} where \begin{gather} f^A(\tau,\upsilon)=\sqrt{\textrm{Tr}\{ \mathcal{P}^{AB}(\tau)\}\textrm{Tr}\{ \mathcal{P}^{AB}(\upsilon)\}}F(\tau^A, \upsilon^A).\label{intermsofF} \end{gather} \textbf{Proof:} Let $\tau^*$ and $\upsilon^*$ be two states for which the maximum on the right-hand side of Eq.~\eqref{measure} is attained. From the monotonicity of the standard fidelity under CPTP maps \cite{BCF96} it follows that \begin{gather} F^A(\tau, \upsilon)=F(\tau^*, \upsilon^*)\leq F(\Pi(\tau^*), \Pi(\upsilon^*)), \end{gather} where $\Pi(\cdot)=\mathcal{P}^{AB}(\cdot)+\mathcal{P}_{\mathcal{K}}(\cdot)$. But the states $\Pi(\tau^*)$ and $\Pi(\upsilon^*)$ satisfy \begin{gather} \textrm{Tr}_B\{ \mathcal{P}^{AB}(\Pi(\tau^*)) \} = \textrm{Tr}_B\{ \mathcal{P}^{AB}(\tau) \},\label{edno}\\ \textrm{Tr}_B\{ \mathcal{P}^{AB}(\Pi(\upsilon^*))\} = \textrm{Tr}_B\{ \mathcal{P}^{AB}(\upsilon) \}\label{dve}, \end{gather} i.e., they are among those states over which the maximum in Eq.~\eqref{measure} is taken. Therefore, \begin{gather} F^A(\tau, \upsilon)= F(\Pi(\tau^*), \Pi(\upsilon^*)).\label{pistar} \end{gather} Using Eq.~\eqref{fidelity} and the fact that in the block basis corresponding to the decomposition \eqref{decomposition} the states $\Pi(\tau^*)$ and $\Pi(\upsilon^*)$ have block-diagonal forms, it is easy to see that \begin{gather} F(\Pi(\tau^*), \Pi(\upsilon^*))= \check{F}(\mathcal{P}^{AB}(\tau^*), \mathcal{P}^{AB}(\upsilon^*)) + \check{F}(\mathcal{P}_{\mathcal{K}}(\tau^*), \mathcal{P}_{\mathcal{K}}(\upsilon^*)),\label{intermediate} \end{gather} where $\check{F}$ is a function that has the same expression as the fidelity \eqref{fidelity}, but is defined over all positive operators. From Eq.~\eqref{edno} and Eq.~\eqref{dve} it can be seen that $\textrm{Tr}\{ \mathcal{P}^{AB}(\tau^*)\}=\textrm{Tr}\{ \mathcal{P}^{AB}(\tau)\}$, $\textrm{Tr}\{ \mathcal{P}^{AB}(\upsilon^*)\}=\textrm{Tr}\{ \mathcal{P}^{AB}(\upsilon)\}$, which also implies that $\textrm{Tr}\{ \mathcal{P}_{\mathcal{K}}(\tau^*)\}=\textrm{Tr}\{ \mathcal{P}_{\mathcal{K}}(\tau)\}=1-\textrm{Tr}\{ \mathcal{P}^{AB}(\tau)\}$, $\textrm{Tr}\{ \mathcal{P}_{\mathcal{K}}(\upsilon^*)\}=\textrm{Tr}\{ \mathcal{P}_{\mathcal{K}}(\upsilon)\}=1-\textrm{Tr}\{ \mathcal{P}^{AB}(\upsilon)\}$.
The two terms on the right-hand side of Eq.~\eqref{intermediate} can therefore be written as \begin{gather} \check{F}(\mathcal{P}^{AB}(\tau^*), \mathcal{P}^{AB}(\upsilon^*))= \sqrt{\textrm{Tr}\{ \mathcal{P}^{AB}(\tau)\}\textrm{Tr}\{ \mathcal{P}^{AB}(\upsilon)\}}\notag\\\times F\left(\frac{\mathcal{P}^{AB}(\tau^*)}{\textrm{Tr}\{ \mathcal{P}^{AB}(\tau)\}}, \frac{\mathcal{P}^{AB}(\upsilon^*)}{\textrm{Tr}\{ \mathcal{P}^{AB}(\upsilon)\}}\right),\label{firstterm} \end{gather} \begin{gather} \check{F}(\mathcal{P}_{\mathcal{K}}(\tau^*), \mathcal{P}_{\mathcal{K}}(\upsilon^*))= \sqrt{\textrm{Tr}\{ \mathcal{P}_{\mathcal{K}}(\tau)\}\textrm{Tr}\{ \mathcal{P}_{\mathcal{K}}(\upsilon)\}}\notag\\\times F\left(\frac{\mathcal{P}_{\mathcal{K}}(\tau^*)}{\textrm{Tr}\{ \mathcal{P}_{\mathcal{K}}(\tau)\}}, \frac{\mathcal{P}_{\mathcal{K}}(\upsilon^*)}{\textrm{Tr}\{ \mathcal{P}_{\mathcal{K}}(\upsilon)\}}\right).\label{secondterm} \end{gather} Since $\tau^*$ and $\upsilon^*$ should maximize the right-hand side of Eq.~\eqref{intermediate}, and the only restriction on $\mathcal{P}_{\mathcal{K}}(\tau^*)$ and $\mathcal{P}_{\mathcal{K}}(\upsilon^*)$ is $\textrm{Tr}\{ \mathcal{P}_{\mathcal{K}}(\tau^*)\}=\textrm{Tr}\{ \mathcal{P}_{\mathcal{K}}(\tau)\}$, $\textrm{Tr}\{ \mathcal{P}_{\mathcal{K}}(\upsilon^*)\}=\textrm{Tr}\{ \mathcal{P}_{\mathcal{K}}(\upsilon)\}$, we must have \begin{gather} F\left(\frac{\mathcal{P}_{\mathcal{K}}(\tau^*)}{\textrm{Tr}\{ \mathcal{P}_{\mathcal{K}}(\tau)\}}, \frac{\mathcal{P}_{\mathcal{K}}(\upsilon^*)}{\textrm{Tr}\{ \mathcal{P}_{\mathcal{K}}(\upsilon)\}}\right)=1, \end{gather} i.e., \begin{gather} \frac{\mathcal{P}_{\mathcal{K}}(\tau^*)}{\textrm{Tr}\{ \mathcal{P}_{\mathcal{K}}(\tau)\}} = \frac{\mathcal{P}_{\mathcal{K}}(\upsilon^*)}{\textrm{Tr}\{ \mathcal{P}_{\mathcal{K}}(\upsilon)\}}.\label{tauupsstar} \end{gather} Thus we obtain \begin{eqnarray} \check{F}(\mathcal{P}_{\mathcal{K}}(\tau^*), \mathcal{P}_{\mathcal{K}}(\upsilon^*))= \sqrt{\textrm{Tr}\{ \mathcal{P}_{\mathcal{K}}(\tau)\}\textrm{Tr}\{ \mathcal{P}_{\mathcal{K}}(\upsilon)\}}. \end{eqnarray} The term \eqref{firstterm} must also be maximized. Applying again the monotonicity of the fidelity under CPTP maps, this time for the map $\Gamma(\rho^{AB})=\textrm{Tr}_B\{\rho^{AB}\}\otimes |0^B\rangle\langle 0^B|$ defined on operators over $\mathcal{H}^A\otimes\mathcal{H}^B$, where $|0^B\rangle$ is some state in $\mathcal{H}^B$, we see that the term \eqref{firstterm} must be equal to \begin{gather} \check{F}(\mathcal{P}^{AB}(\tau^*), \mathcal{P}^{AB}(\upsilon^*))=\sqrt{\textrm{Tr}\{ \mathcal{P}^{AB}(\tau)\}\textrm{Tr}\{ \mathcal{P}^{AB}(\upsilon)\}} F(\tau^A,\upsilon^A)\equiv f^A(\tau, \upsilon).\label{tauupsstar2} \end{gather} This completes the proof. We next provide an operational interpretation of the measure $F^A$. For this we need the following \textbf{Lemma:} The function $f^A(\tau,\upsilon)$ defined in Eq.~\eqref{intermsofF} equals the minimum overlap between the statistical distributions generated by all local measurements on subsystem $\mathcal{H}^A$: \begin{equation} f^A(\tau, \upsilon)=\underset{\{M_i\}}{\textrm{min}}\underset{i}{\sum}\sqrt{\textrm{Tr}\{ M_i\tau\}}\sqrt{\textrm{Tr}\{ M_i\upsilon\}},\label{fA} \end{equation} where $M_i=M_i^A\otimes I^B$, $\underset{i}{\sum}M_i=I^A\otimes I^B$, $M^A_i\geq 0$, for all $i$.
Note that since the operators $M_i$ do not form a complete POVM on the entire Hilbert space, the probability distributions $p_{\tau}(i)=\textrm{Tr}\{ M_i \tau\}$ and $p_{\upsilon}(i)=\textrm{Tr}\{ M_i \upsilon\}$ generated by such measurements generally do not sum up to 1. This reflects the fact that a measurement on subsystem $\mathcal{H}^A$ requires a projection onto the subspace $\mathcal{H}^A\otimes \mathcal{H}^B$, i.e., it is realized through post-selection. \textbf{Proof:} Using that \begin{gather} \textrm{Tr}\{ M_i \tau\} =\textrm{Tr}\{M^A_i\otimes I^B \mathcal{P}^{AB}(\tau)\}=\textrm{Tr}\{ \mathcal{P}^{AB}(\tau)\}\textrm{Tr}\{M^A_i\otimes I^B \frac{\mathcal{P}^{AB}(\tau)}{\textrm{Tr}\{ \mathcal{P}^{AB}(\tau)\}}\}\notag\\ =\textrm{Tr}\{ \mathcal{P}^{AB}(\tau)\}\textrm{Tr}\{M^A_i\tau^A\}, \end{gather} we can write Eq.~\eqref{fA} in the form \begin{gather} f^A(\tau,\upsilon)=\sqrt{\textrm{Tr}\{ \mathcal{P}^{AB}(\tau)\}\textrm{Tr}\{ \mathcal{P}^{AB}(\upsilon)\}} \underset{\{M^A_i\}}{\textrm{min}}\underset{i}{\sum}\sqrt{\textrm{Tr}\{ M^A_i\tau^A\}}\sqrt{\textrm{Tr}\{ M^A_i\upsilon^A\}}.\label{interm} \end{gather} From Eq.~\eqref{fidelityoper}, we see that \eqref{interm} is equivalent to \eqref{intermsofF}. \textbf{Theorem 2:} $F^A(\tau,\upsilon)$ equals the minimum overlap \begin{equation} F^A(\tau, \upsilon)=\underset{\{M_i\}}{\textrm{min}}\underset{i\geq 0}{\sum}\sqrt{\textrm{Tr}\{ M_i\tau\}}\sqrt{\textrm{Tr}\{ M_i\upsilon\}}\label{operational} \end{equation} between the statistical distributions generated by all possible measurements of the form $M_0=P_\mathcal{K}$, $M_i=M_i^A\otimes I^B$ for $i\geq 1$, $\underset{i\geq 0}{\sum}M_i=I^S$. \textbf{Proof:} The proof follows from Eq.~\eqref{easyform} and Eq.~\eqref{fA}. Note that the measure $F^A$ compares the information stored in subsystem $\mathcal{H}^A$, which is the information extractable through local measurements on $\mathcal{H}^A$. The last result reflects the intuition that extracting information encoded in $\mathcal{H}^A$ involves a measurement that projects on the subspaces $\mathcal{H}^A\otimes\mathcal{H}^B$ or $\mathcal{K}$. \textbf{Property 2 (Normalization):} From the definition \eqref{measure} it is obvious that \begin{equation} F^A(\tau, \upsilon)\leq F^A(\tau, \tau)=1 , \hspace{0.2cm} \tau \neq \upsilon. \end{equation} From Eq.~\eqref{easyform} we can now see that \begin{gather} F^A(\tau, \upsilon)=1, \textrm{ iff }\hspace{0.1cm}\textrm{Tr}_B\{ \mathcal{P}^{AB}(\tau) \} = \textrm{Tr}_B\{ \mathcal{P}^{AB}(\upsilon) \}, \end{gather} as one would expect from a measure that compares only the encoded information in $\mathcal{H}^A$. 
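The closed form \eqref{easyform} is also straightforward to evaluate numerically. The following sketch (again for a toy decomposition with $\dim\mathcal{H}^A=2$, a trivial $\mathcal{H}^B$, and a one-dimensional $\mathcal{K}$; the Uhlmann fidelity \eqref{fidelity} is computed with matrix square roots) implements $F^A$ and checks the normalization property:
\begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm

P_AB = np.diag([1.0, 1.0, 0.0]).astype(complex)
P_K  = np.diag([0.0, 0.0, 1.0]).astype(complex)

def fidelity(a, b):
    # Uhlmann fidelity F(a,b) = Tr sqrt( sqrt(a) b sqrt(a) )
    ra = sqrtm(a)
    return np.real(np.trace(sqrtm(ra @ b @ ra)))

def F_A(tau, ups):
    wt = np.real(np.trace(P_AB @ tau @ P_AB))
    wu = np.real(np.trace(P_AB @ ups @ P_AB))
    kt = np.real(np.trace(P_K @ tau @ P_K))
    ku = np.real(np.trace(P_K @ ups @ P_K))
    tau_A = (P_AB @ tau @ P_AB)[:2, :2] / wt         # normalized reduced operators on H^A
    ups_A = (P_AB @ ups @ P_AB)[:2, :2] / wu
    return np.sqrt(wt * wu) * fidelity(tau_A, ups_A) + np.sqrt(kt * ku)

tau = np.diag([0.5, 0.3, 0.2]).astype(complex)
ups = np.diag([0.4, 0.4, 0.2]).astype(complex)
print(F_A(tau, tau))     # close to 1, as required by the normalization property
print(F_A(tau, ups))     # < 1, since the reduced operators on H^A differ
\end{verbatim}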
\textbf{Proposition:} Using that the maximum in Eq.~\eqref{measure} is attained for states of the form $\Pi(\tau^*)$ and $\Pi(\upsilon^*)$ (Eq.~\eqref{pistar}) where $\tau^*$ and $\upsilon^*$ satisfy Eq.~\eqref{tauupsstar} and Eq.~\eqref{tauupsstar2}, without loss of generality we can assume that for all $\tau$ and $\upsilon$, \begin{equation} F^A(\tau, \upsilon)=F(\tau^*, \upsilon^*),\label{useful} \end{equation} where \begin{eqnarray} \tau^*=\textrm{Tr}_B\{ \mathcal{P}^{AB}(\tau)\}\otimes |0^B\rangle\langle 0^B| +\textrm{Tr}\{ \mathcal{P}_{\mathcal{K}}(\tau)\}|0_{\mathcal{K}}\rangle\langle 0_{\mathcal{K}}|,\label{taustar1} \end{eqnarray} \begin{eqnarray} \upsilon^*=\textrm{Tr}_B\{ \mathcal{P}^{AB}(\upsilon)\}\otimes |0^B\rangle\langle 0^B|+ \textrm{Tr}\{ \mathcal{P}_{\mathcal{K}}(\upsilon)\} |0_{\mathcal{K}}\rangle\langle 0_{\mathcal{K}}|,\label{upsstar1} \end{eqnarray} with $|0^B\rangle$ and $|0_{\mathcal{K}}\rangle$ being some fixed states in $\mathcal{H}^B$ and $\mathcal{K}$, respectively. \textbf{Property 3 (Strong concavity and concavity of the square of $F^A$):} The form of $F^A$ given by Eqs.~\eqref{useful}--\eqref{upsstar1} can be used for deriving various useful properties of $F^A$ from the properties of the standard fidelity. For example, it implies that for all mixtures $\underset{i}{\sum}p_i\tau_i$ and $\underset{i}{\sum}q_i\upsilon_i$ we have \begin{gather} F^A(\underset{i}{\sum}p_i\tau_i, \underset{i}{\sum}q_i\upsilon_i)=F(\underset{i}{\sum}p_i\tau^*_i, \underset{i}{\sum}q_i\upsilon^*_i). \end{gather} This means that the property of \textit{strong concavity} of the fidelity \cite{NieChu00} (and all weaker concavity properties that follow from it) as well as the \textit{concavity of the square of the fidelity} \cite{Uhl76}, are automatically satisfied by the measure $F^A$. \textbf{Definition 2:} Similarly to the concept of angle between two states \cite{NieChu00} which can be defined from the standard fidelity, we can define an \textit{angle between the encoded information in two states}: \begin{equation} \Lambda^A(\tau, \upsilon) \equiv \arccos F^A(\tau, \upsilon). \end{equation} \textbf{Property 4 (Triangle inequality):} From Eqs.~\eqref{useful}--\eqref{upsstar1} it follows that just as the angle between states satisfies the triangle inequality, so does the angle between the encoded information: \begin{equation} \Lambda^A(\tau, \upsilon) \leq \Lambda^A(\tau, \phi) + \Lambda^A(\phi, \upsilon). \end{equation} \textbf{Property 5 (Monotonicity of $F^A$ under local CPTP maps):} We point out that the monotonicity under CPTP maps of the standard fidelity does not translate directly to the measure $F^A$. Rather, as can be seen from Eq.~\eqref{easyform}, $F^A$ satisfies monotonicity under local CPTP maps on $\mathcal{H}^A$: \begin{equation} F^A(\mathcal{E}(\tau),\mathcal{E}(\upsilon) )\geq F^A(\tau,\upsilon) \end{equation} for \begin{gather} \mathcal{E}=\mathcal{E}^A\otimes \mathcal{E}^B \oplus \mathcal{E}_{\mathcal{K}},\label{localCPTPmaps} \end{gather} where $\mathcal{E}^A$, $\mathcal{E}^B$ and $\mathcal{E}_{\mathcal{K}}$ are CPTP maps on operators over $\mathcal{H}^A$, $\mathcal{H}^B$ and $\mathcal{K}$, respectively. \textbf{Remark:} There exist other maps under which $F^A$ is also non-decreasing. Such are the maps which take states from $\mathcal{H}^A\otimes\mathcal{H}^B$ to $\mathcal{K}$ without transfer in the opposite direction. 
But in general, maps which couple states in $\mathcal{H}^{A}\otimes \mathcal{H}^B$ with states in $\mathcal{K}$, or states in $\mathcal{H}^A$ with states in $\mathcal{H}^B$, do not obey this property. For example, a unitary map which swaps the states in $\mathcal{H}^A$ and $\mathcal{H}^B$ (assuming both subsystems are of the same dimension) could either increase or decrease the measure, depending on the states in $\mathcal{H}^B$. Similarly, a unitary map exchanging states between $\mathcal{H}^{A}\otimes \mathcal{H}^B$ and $\mathcal{K}$ could give rise to either an increase or a decrease of the measure, depending on the states in $\mathcal{K}$. Finally, the monotonicity of $F^A$ under local CPTP maps implies \textbf{Property 6 (Contractivity of the angle under local CPTP maps):} For CPTP maps of the form \eqref{localCPTPmaps}, $\Lambda^A$ satisfies \begin{equation} \Lambda^A(\mathcal{E}(\tau),\mathcal{E}(\upsilon) )\leq\Lambda^A(\tau,\upsilon). \end{equation} \section*{7.4 \hspace{2pt} Robustness of OQEC with respect to initialization errors} \addcontentsline{toc}{section}{7.4 \hspace{0.15cm} Robustness of OQEC with respect to initialization errors} Let us now consider the fidelity between the encoded information in an ideally prepared state \eqref{perfectini} and in a state which is not perfectly initialized \eqref{imperfect}: \begin{gather} F^A(\rho,\tilde{\rho})=\sqrt{\textrm{Tr}\rho_1} \sqrt{\textrm{Tr}\tilde{\rho}_1} F(\rho^A,\tilde{\rho}^A)+0\\ =\textrm{Tr}\sqrt{\sqrt{\textrm{Tr}_B\rho_1}\textrm{Tr}_B\tilde{\rho}_1\sqrt{\textrm{Tr}_B\rho_1}}\equiv \check{F}(\textrm{Tr}_B\rho_1, \textrm{Tr}_B\tilde{\rho}_1).\notag \end{gather} After the noise process $\mathcal{E}$ with Kraus operators \eqref{Krausoperatorsblock}, the imperfectly encoded state transforms to $\mathcal{E}(\tilde{\rho})$. Its fidelity with the perfectly encoded state becomes \begin{gather} F^A(\rho,\mathcal{E}(\tilde{\rho}))=\check{F}(\textrm{Tr}_B\rho_1, \textrm{Tr}_B\tilde{\rho}'_1) =\check{F}(\textrm{Tr}_B\rho_1, \textrm{Tr}_B\tilde{\rho}_1+\textrm{Tr}_B\{\underset{i}{\sum}D_i\tilde{\rho}_3D_i^{\dagger}\}), \end{gather} where we have used the expressions for $\textrm{Tr}_B\rho_1'$ and $\textrm{Tr}_B\tilde{\rho}_1'$ obtained in Eq.~\eqref{reducedrho} and Eq.~\eqref{reducedrhotilde}. As we pointed out earlier, the operator $\textrm{Tr}_B\{\underset{i}{\sum}D_i\tilde{\rho}_3D_i^{\dagger} \}$ is positive.
Then from the concavity of the \textit{square} of the fidelity \cite{Uhl76}, it follows that \begin{gather} \check{F}^2\left(\textrm{Tr}_B\rho_1,\textrm{Tr}_B\tilde{\rho}_1+\textrm{Tr}_B\{\underset{i}{\sum}D_i\tilde{\rho}_3D_i^{\dagger} \}\right)=\textrm{Tr}\rho_1\textrm{Tr}\{\tilde{\rho}_1+ \underset{i}{\sum}D_i\tilde{\rho}_3D_i^{\dagger} \}\label{argumentfidelity}\\ \times F^2\left(\rho^A, \frac{\textrm{Tr}\tilde{\rho}_1}{\textrm{Tr}\{\tilde{\rho}_1+ \underset{i}{\sum}D_i\tilde{\rho}_3D_i^{\dagger} \}} \tilde{\rho}^A+ \frac{\textrm{Tr}\{\underset{i}{\sum}D_i\tilde{\rho}_3D_i^{\dagger} \}}{\textrm{Tr}\{\tilde{\rho}_1 +\underset{i}{\sum}D_i\tilde{\rho}_3D_i^{\dagger}\}}\frac{\textrm{Tr}_B\{\underset{i}{\sum}D_i\tilde{\rho}_3D_i^{\dagger}\} } {\textrm{Tr}\{\underset{i}{\sum}D_i\tilde{\rho}_3D_i^{\dagger}\}} \right)\notag\\\geq \textrm{Tr}\rho_1\textrm{Tr}\tilde{\rho}_1 F^2(\rho^A,\tilde{\rho}^A)+\textrm{Tr}\rho_1\textrm{Tr}\{\underset{i}{\sum}D_i\tilde{\rho}_3D_i^{\dagger} \} F^2\left(\rho^A, \frac{\textrm{Tr}_B\{\underset{i}{\sum}D_i\tilde{\rho}_3D_i^{\dagger}\} }{\textrm{Tr}\{\underset{i}{\sum}D_i\tilde{\rho}_3D_i^{\dagger}\}}\right)\notag\\=\check{F}^2(\textrm{Tr}_B\rho_1, \textrm{Tr}_B\tilde{\rho}_1)+ \check{F}^2(\textrm{Tr}_B\rho_1, \textrm{Tr}_B\{\underset{i}{\sum}D_i\tilde{\rho}_3D_i^{\dagger}\} )\geq \check{F}^2(\textrm{Tr}_B\rho_1, \textrm{Tr}_B\tilde{\rho}_1).\notag \end{gather} (Here, the transition from the first to the second line is obtained by pulling out the normalization factors of the operators in $\check{F}$ so that the latter can be expressed in terms of the fidelity $F$. The transition from the second to the third line uses the concavity of the square of the fidelity. The last line is obtained by expressing the quantities again in terms of $\check{F}$.) Therefore, we can state the following \textbf{Theorem 3:} The fidelity between the encoded information in a perfectly initialized state \eqref{perfectini} and an imperfectly initialized state \eqref{imperfect} does not decrease under CPTP maps $\mathcal{E}$ with Kraus operators of the form \eqref{Krausoperatorsblock}: \begin{equation} F^A(\rho,\mathcal{E}(\tilde{\rho}))\geq F^A(\rho,\tilde{\rho}). \end{equation} We see that even if the ``initialization-free" constraint \eqref{extraconstraint} is not satisfied, no further decrease in the fidelity occurs as a result of the process. The effective noise (the term $\textrm{Tr}_B\{\underset{i}{\sum}D_i\tilde{\rho}_3D_i^{\dagger} \}$) that arises due to violation of that constraint can only decrease the initialization error. The above result can be generalized to include the possibility for information processing on the subsystem. Imagine that we want to perform a computational task which ideally corresponds to applying the CPTP map $\mathcal{C}^A$ on the encoded state. In general, the subsystem $\mathcal{H}^A$ may consist of many subsystems encoding separate information units (e.g., qubits), and the computational process may involve many applications of error correction. The noise process itself generally acts continuously during the computation. Let us assume that all operations following the initialization are performed fault-tolerantly \cite{Sho96,ABO98,Kit97,KLZ98,Got97} so that the overall transformation $\mathcal{C}$ on a \textit{perfectly initialized} state succeeds with an arbitrarily high probability (for a model of fault-tolerant quantum computation on subsystems, see, e.g., Ref.~\cite{AC07}).
This means that the effect of $\mathcal{C}$ on the reduced operator of a perfectly initialized state is \begin{equation} \textrm{Tr}_B\rho_1\rightarrow \mathcal{C}^A(\textrm{Tr}_B\rho_1)\label{FTcomputation} \end{equation} up to an arbitrarily small error. \textbf{Theorem 4:} Let $\mathcal{C}$ be a CPTP map whose effect on the reduced operator of every perfectly initialized state \eqref{perfectini} is given by Eq.~\eqref{FTcomputation} with $\mathcal{C}^A$ being a CPTP map on $\mathcal{B}(\mathcal{H}^A)$. Then the fidelity between the encoded information in a perfectly initialized state \eqref{perfectini} and an imperfectly initialized state \eqref{imperfect} does not decrease under $\mathcal{C}$: \begin{equation} F^A(\mathcal{C}(\rho),\mathcal{C}(\tilde{\rho}))\geq F^A(\rho,\tilde{\rho}). \end{equation} \textbf{Proof:} From Eq.~\eqref{FTcomputation} it follows that the map $\mathcal{C}$ has Kraus operators with vanishing lower left blocks, similarly to \eqref{Krausoperatorsblock}. If the state is not perfectly initialized, an argument similar to the one given earlier shows that the reduced operator on the subsystem transforms as $\textrm{Tr}_B\tilde{\rho}_1\rightarrow \mathcal{C}^A(\textrm{Tr}_B\tilde{\rho}_1)+ \tilde{\rho}^A_{\textrm{err}}$, where $\tilde{\rho}^A_{\textrm{err}}$ is a positive operator which appears as a result of the possibly non-vanishing upper right blocks of the Kraus operators. Using an argument analogous to \eqref{argumentfidelity} and the monotonicity of the fidelity under CPTP maps \cite{BCF96}, we obtain \begin{gather} F^A(\mathcal{C}(\rho),\mathcal{C}(\tilde{\rho}))= \check{F}(\mathcal{C}^A(\textrm{Tr}_B\rho_1),\mathcal{C}^A(\textrm{Tr}_B\tilde{\rho}_1)+ \tilde{\rho}^A_{\textrm{err}})+0\\ \geq \check{F}(\mathcal{C}^A(\textrm{Tr}_B\rho_1),\mathcal{C}^A(\textrm{Tr}_B\tilde{\rho}_1)) =\sqrt{\textrm{Tr}\rho_1}\sqrt{\textrm{Tr}\tilde{\rho}_1} F(\mathcal{C}^A(\rho^A),\mathcal{C}^A(\tilde{\rho}^A))\notag\\ \geq\sqrt{\textrm{Tr}\rho_1}\sqrt{\textrm{Tr}\tilde{\rho}_1} F(\rho^A,\tilde{\rho}^A)=\check{F}(\textrm{Tr}_B\rho_1, \textrm{Tr}_B\tilde{\rho}_1)=F^A(\rho,\tilde{\rho}).\notag \end{gather} Again, the preparation error is not amplified by the process. The problem of how to deal with preparation errors has been discussed in the context of fault-tolerant computation on standard error-correction codes, e.g., in Ref.~\cite{Pre99}. The situation for general OQEC is similar---if the initial state is known, the error can be eliminated by repeating the encoding. If the state to be encoded is unknown, the preparation error generally cannot be corrected. Nevertheless, encoding would still be worthwhile as long as the initialization error is smaller than the error which would result from leaving the state unprotected. \section*{7.5 \hspace{2pt} Summary and outlook} \addcontentsline{toc}{section}{7.5 \hspace{0.15cm} Summary and outlook} In summary, we have shown that a noiseless subsystem is robust against initialization errors without the need for modification of the noiseless subsystem conditions. Similarly, we have argued that general OQEC codes are robust with respect to imperfect preparation in their standard form. This property is compatible with fault-tolerant methods of computation, which is essential for reliable quantum information processing. In order to rigorously prove our result, we introduced a measure of the fidelity $F^A(\tau, \upsilon)$ between the encoded information in two states.
The measure is defined as the maximum of the fidelity between all possible states which have the same reduced operators on the subsystem code as the states being compared. We derived a simple form of the measure and discussed many of its properties. We also gave an operational interpretation of the quantity. Since the concept of encoded information is central to quantum information science, the fidelity measure introduced in this study may find various applications. It provides a natural means for extending key concepts such as the fidelity of a quantum channel \cite{KL96} or the entanglement fidelity \cite{Sch96b} to the case of subsystem codes. \chapter*{Chapter 8: \hspace{1pt} A fault-tolerant scheme for holonomic quantum computation} \addcontentsline{toc}{chapter}{Chapter 8:\hspace{0.15cm} A fault-tolerant scheme for holonomic quantum computation} \section*{8.1 \hspace{2pt} Preliminaries} \addcontentsline{toc}{section}{8.1 \hspace{0.15cm} Preliminaries} There are two main sources of errors in quantum computers---environment-induced decoherence and imperfect control. According to the theory of fault tolerance \cite{Sho96, DVS96, ABO98, Kit97, KLZ98, Got97', Got97, Pre99}, if the errors of each type are sufficiently uncorrelated and their rates are below a certain threshold, it is possible to implement reliably an arbitrarily long computational task with an efficient overhead of resources. Quantum error correction thus provides a universal software strategy to combat noise in quantum computers. In addition to the software approach, there have also been proposals to deal with the effects of noise by hardware methods that provide robustness through their inherent properties. One such method is holonomic quantum computation (HQC) \cite{ZR99, PZR99}---an adiabatic, all-geometric method of computation which uses non-Abelian generalizations \cite{WZ84} of the Berry phase \cite{Berry84}. It has been shown that due to its geometric nature, this approach is robust against various types of errors in the control parameters driving the evolution \cite{CGSV03, CP03, SZZ04, FGL05, ZZ05}, and thus provides a degree of built-in resilience at the hardware level. In Ref.~\cite{WZL05} HQC was combined with the method of decoherence-free subspaces (DFSs) \cite{DG98, ZR97, LCW98, LBKW01}, which was the first step towards systematic error protection in conjunction with the holonomic approach. DFSs provide \textit{passive} protection against certain types of correlated noise; however, they cannot protect against independent errors. The standard tool to deal with the latter is \textit{active} error correction \cite{Shor95, Ste96}. Active error correction is also the basis of quantum fault tolerance, which is necessary for the scalability of any method of computation. Even if the system is perfectly isolated from its environment, when the size of the circuit increases, errors due to imperfect operations would accumulate detrimentally unless they are corrected. Therefore, scalability of HQC requires combining the holonomic approach with active error correction. In this chapter, we present a scheme which combines HQC with the techniques for fault-tolerant computation on stabilizer codes. This demonstrates that HQC is a scalable method of computation \cite{OBL08}. The scheme uses Hamiltonians which are elements of the stabilizer, or in the case of subsystem codes---elements of the gauge group.
Gates are implemented by slowly varying the Hamiltonians along suitable paths in parameter space, such that the resulting geometric transformation in each eigenspace of the Hamiltonian is transversal. On certain codes such as the 9-qubit Shor code \cite{Shor95} or its subsystem generalizations \cite{Bac06, BC06}, universal computation according to our scheme can be implemented with Hamiltonians of weight 2 and 3. \section*{8.2 \hspace{2pt} Holonomic quantum computation} \addcontentsline{toc}{section}{8.2 \hspace{0.15cm} Holonomic quantum computation} Let $\{H_{\lambda}\}$ be a family of Hamiltonians on an $N$-dimensional Hilbert space, which is continuously parameterized by a point $\lambda$ in a control-parameter manifold $\mathcal{M}$. Assume that the family has the same degeneracy structure, i.e., there are no level crossings. The Hamiltonians can then be written as $H_{\lambda}=\sum_{n=1}^{R}\varepsilon_n(\lambda)\Pi_n(\lambda)$, where $\{\varepsilon_n(\lambda)\}_{n=1}^{R}$ are the $R$ different $d_n$-fold degenerate eigenvalues of $H_\lambda$, ($\sum_{n=1}^{R} d_n=N$), and $\Pi_n(\lambda)$ are the projectors on the corresponding eigenspaces. If the parameter $\lambda$ is changed adiabatically, a state which initially belongs to an eigenspace of the Hamiltonian will remain in the corresponding eigenspace as the Hamiltonian evolves. The unitary evolution that results from the action of the Hamiltonian $H(t):=H_{\lambda(t)}$ is \begin{gather} U(t)=\mathcal{T}\textrm{exp}(-i\int_0^t d\tau H(\tau)) = \oplus_{n=1}^R e^{i\omega_n(t)}U^{\lambda}_{A_n}(t), \label{adiabaticevolution} \end{gather} where $\omega_n(t)=-\int_0^td\tau\varepsilon_n(\lambda(\tau))$ is a dynamical phase, and $U^{\lambda}_{A_n}(t)$ is given by the following path-ordered exponent: \begin{equation} U^{\lambda}_{A_n}(t)=\mathcal{P}\textrm{exp}(\int_{\lambda(0)}^{\lambda(t)}A_n).\label{openpathholonomy} \end{equation} Here $A_n$ is the Wilczek-Zee connection \cite{WZ84}, $A_n=\sum_\mu A_{n,\mu} d\lambda^\mu$, where $A_{n,\mu}$ has matrix elements \cite{WZ84} \begin{equation} (A_{n,\mu})_{\alpha\beta}=\langle n\alpha; \lambda|\frac{\partial}{\partial \lambda^\mu}|n\beta;\lambda\rangle.\label{matrixelementsA} \end{equation} The parameters $\lambda^\mu$ are local coordinates on $\mathcal{M}$ ($1\leq\mu\leq \textrm{dim}\mathcal{M}$) and $\{|n\alpha; \lambda\rangle\}_{\alpha=1}^{d_n}$ is an orthonormal basis of the $n^{\textrm{th}}$ eigenspace of the Hamiltonian at the point $\lambda$. When the path $\lambda(t)$ forms a loop $\gamma(t)$, $\gamma(0)=\gamma(T)= \lambda_0$, the unitary matrix \begin{equation} U_n^{\gamma}\equiv U^{\lambda}_{A_n}(T)=\mathcal{P}\textrm{exp}(\oint_{\gamma}A_n)\label{holonomy} \end{equation} is called the holonomy associated with the loop. In the case when the $n^{\textrm{th}}$ energy level is non-degenerate ($d_n=1$), the corresponding holonomy reduces to the Berry phase \cite{Berry84}. The set $\textrm{Hol}(A)=\{U_\gamma/ \gamma\in L_{\lambda_0}(\mathcal{M})\}$, where $L_{\lambda_0}(\mathcal{M})= \{\gamma: [0,T]\rightarrow \mathcal{M} /\gamma(0)=\gamma(T)=\lambda_0\}$ is the space of all loops based on $\lambda_0$, is a subgroup of $U(d_n)$ called the holonomy group. In Refs.~\cite{ZR99, PZR99} it was shown that if the dimension of the control manifold is sufficiently large, quantum holonomies can be used as a means of universal quantum computation. 
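Before describing how such holonomies are used for computation, we note that Eq.~\eqref{holonomy} is easy to illustrate numerically in the simplest, non-degenerate case: discretizing the loop and multiplying the overlaps of neighbouring instantaneous eigenstates reproduces the Berry phase. The sketch below does this for a spin-1/2 whose field axis sweeps a circle of fixed polar angle; the specific Hamiltonian family, loop and discretization are illustrative choices only and play no role in what follows.

\begin{verbatim}
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def berry_phase(theta, N=2000):
    # Phase of the gauge-invariant product of overlaps of instantaneous
    # eigenstates along a closed loop; for a non-degenerate level this
    # approximates the loop holonomy, i.e. the Berry phase.
    vecs = []
    for k in range(N):
        phi = 2 * np.pi * k / N
        n = np.array([np.sin(theta) * np.cos(phi),
                      np.sin(theta) * np.sin(phi),
                      np.cos(theta)])
        H = n[0] * X + n[1] * Y + n[2] * Z
        w, v = np.linalg.eigh(H)
        vecs.append(v[:, 0])            # lower (non-degenerate) eigenstate
    prod = 1.0 + 0j
    for k in range(N):
        prod *= np.vdot(vecs[k], vecs[(k + 1) % N])
    return np.angle(prod)

theta = np.pi / 3
print(berry_phase(theta))               # magnitude ~ pi*(1 - cos(theta)),
print(np.pi * (1 - np.cos(theta)))      # i.e. half the enclosed solid angle
\end{verbatim}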
In the proposed approach, logical states are encoded in the degenerate eigenspace of a Hamiltonian and gates are implemented by adiabatically varying the Hamiltonian along suitable loops in the parameter manifold (for a construction of a universal set of gates, see also Ref.~\cite{NNS03}). \section*{8.3 \hspace{2pt} Stabilizer codes and fault tolerant computation} \addcontentsline{toc}{section}{8.3 \hspace{0.15cm} Stabilizer codes and fault tolerant computation} A large class of quantum error-correcting codes can be described by the so called stabilizer formalism \cite{Got96, CRSS96, CRSS96'}. A stabilizer $S$ is an Abelian subgroup of the Pauli group $\mathcal{G}_n$ on $n$ qubits, which does not contain the element $-I$ \cite{NieChu00}. The Pauli group consists of all possible $n$-fold tensor products of the Pauli matrices $X$, $Y$, $Z$ together with the multiplicative factors $\pm1$, $\pm i$. The stabilizer code corresponding to $S$ is the subspace of all states $|\psi\rangle$ which are left invariant under the action of every operator in $S$ ($G|\psi\rangle=|\psi\rangle$, $\forall G \in S$). It is easy to see that the stabilizer of a code encoding $k$ qubits into $n$ has $n-k$ generators. For the case of operator codes, the stabilizer leaves the subspace $\mathcal{H}^A\otimes \mathcal{H}^B$ in the decomposition \eqref{decomposition} invariant but the encoded information is invariant also under operations that act on the gauge subsystem. An operator stabilizer code encoding $k$ qubits into $n$ with $r$ gauge qubits, has $n-r-k$ stabilizer generators, while the gauge group has $2r$ generators \cite{Pou05}. According to the error-correction condition for stabilizer codes \cite{NieChu00, Pou05}, a set of errors $\{E_i\}$ in $\mathcal{G}_n$ (which without loss of generality are assumed to be Hermitian) is correctable by the code if and only if, for all $i$ and $j$, $E_iE_j$ anticommutes with at least one element of $S$, or otherwise belongs to $S$ or to the gauge group. In this chapter we will be concerned with stabilizer codes for the correction of single-qubit errors and the techniques for fault-tolerant computation \cite{Sho96, DVS96, ABO98, Kit97, KLZ98, Got97', Got97, Pre99} on such codes. A quantum information processing scheme is called fault-tolerant if a single error occurring during the implementation of any given operation introduces at most one error per block of the code \cite{Got97'}. This property has to apply for unitary gates as well as measurements, including those that constitute the error-correcting operations themselves. Fault-tolerant schemes for computation on stabilizer codes generally depend on the code being used---some codes, like the Bacon-Shor subsystem codes \cite{Bac06, BC06} for example, are better suited for fault-tolerant computation than others \cite{AC07}. In spite of these differences, however, it has been shown that fault-tolerant information processing is possible on any stabilizer code \cite{Got97', Got97}. The general procedure can be described briefly as follows. DiVincenzo and Shor \cite{DVS96} demonstrated how to perform fault-tolerant measurements of the stabilizer for any stabilizer code. Their method makes use of an approach introduced by Shor \cite{Sho96}, which involves the ``cat'' state $(|0...0\rangle + |1...1\rangle)/\sqrt{2}$ which can be prepared and verified fault-tolerantly. As pointed out by Gottesman \cite{Got97'}, by the same method one can measure any operator in the Pauli group. 
Since the encoded $X$, $Y$ and $Z$ operators belong to the Pauli group for any stabilizer code \cite{Got97'}, one can prepare fault-tolerantly various superpositions of the logical basis states $|\overline{0}\rangle$ and $|\overline{1}\rangle$, like $|\overline{+}\rangle=(|\overline{0}\rangle +|\overline{1}\rangle)/\sqrt{2}$ for example. The latter can be used to implement fault-tolerantly the encoded Phase and Hadamard gates, as long as a fault-tolerant C-NOT gate is available \cite{Got97'}. Gottesman showed how the C-NOT gate can be implemented fault-tolerantly by first applying a transversal operation on four encoded qubits and then measuring the encoded $X$ operator on two of them. Finally, for universal computation one needs a gate outside of the Clifford group, e.g., the Toffoli gate. The Toffoli construction was demonstrated first by Shor in \cite{Sho96} for a specific type of codes---those obtained from doubly-even self-dual classical codes by the CSS construction \cite{CS96, St96b}. Gottesman showed \cite{Got97} that a transversal implementation of the same procedure exists for any stabilizer code. Note that the described method for universal fault-tolerant computation on stabilizer codes uses almost exclusively transversal operations---these are operations for which each qubit in a block interacts only with the corresponding qubit from another block or from a special ancillary state such as Shor's ``cat" state (see also Steane's \cite{Ste97} and Knill's \cite{Kni05} methods). Since single-qubit unitaries together with the C-NOT gate form a universal set of gates, fault-tolerant computation can be realized entirely in terms of single-qubit operations and C-NOT operations between qubits from different blocks, assuming that the ``cat'' state can be prepared reliably. Hence, our goal will be to construct holonomic realizations of these operations as well as of the preparation of the ``cat'' state. It is not evident that by doing so we will obtain a fault-tolerant construction, because the geometric approach requires that we use degenerate Hamiltonians which generally couple qubits within the same block. Nevertheless, we will see that it is possible to design the scheme so that single-qubit errors do not propagate. \section*{8.4 \hspace{2pt} The scheme} \addcontentsline{toc}{section}{8.4 \hspace{0.15cm} The scheme} Consider an $[[n,1,r,3]]$ stabilizer code. This is a code that encodes $1$ qubit into $n$, has $r$ gauge qubits, and can correct arbitrary single-qubit errors. In order to perform a holonomic operation on this code, we need a nontrivial starting Hamiltonian which leaves the code space invariant. It is easy to verify that the only Hamiltonians that satisfy this property are linear combinations of the elements of the stabilizer, or in the case of subsystem codes---elements of the gauge group. Note that the stabilizer and the gauge group transform during the course of the computation under the operations being applied. At any stage when we complete an encoded operation, they return to their initial forms. During the implementation of a standard encoded gate, the Pauli group $\mathcal{G}_n$ on a given codeword may spread over other codewords, but it can be verified that this spreading can be limited to at most $4$ other codewords counting the ``cat" state. 
This is because the encoded C-NOT gate can be implemented fault-tolerantly on any stabilizer code by a transversal operation on $4$ encoded qubits \cite{Got97}, and any encoded Clifford gate can be realized using only the encoded C-NOT provided that we are able to do fault-tolerant measurements (the encoded Clifford group is generated by the encoded Hadamard, Phase and C-NOT gates). Encoded gates outside of the Clifford group, such as the encoded $\pi/8$ or Toffoli gates, can be implemented fault-tolerantly using encoded C-NOT gates conditioned on the qubits in a ``cat" state, so they may require transversal operations on a total of $5$ blocks. More precisely, the fault-tolerant implementation of the Toffoli gate requires the preparation of a special state of three encoded qubits \cite{Sho96}, which involves a sequence of conditional encoded Phase operations and conditional encoded C-NOT operations with conditioning on the qubits in a ``cat" state \cite{Got97}. But the encoded Phase gate has a universal implementation using an encoded C-NOT between the qubit and an ancilla, so the conditional Phase gate may require applying a conditional encoded C-NOT. The procedure for implementing an encoded $\pi/8$ gate involves applying an encoded $SX$ gate conditioned on the qubits in a ``cat" state \cite{BMPRV99}, where \begin{equation} S=\begin{pmatrix} 1&0\\ 0&i \end{pmatrix}, \end{equation} is the Phase gate, but the encoded $S$ gate generally also involves an encoded C-NOT on the qubit and an ancilla, so it may also require the interaction of $4$ blocks. For CSS codes, however, the spreading of the Pauli group of one block during the implementation of a basic encoded operation can be limited to a total of $3$ blocks, since the encoded C-NOT gate has a transversal implementation \cite{Got97}. It also has to be pointed out that fault-tolerant encoded Clifford operations can be implemented using only Clifford gates on the physical qubits \cite{Got97}. These operations transform the stabilizer and the gauge group into subgroups of the Pauli group, and their elements remain in the form of tensor products of Pauli matrices. The fault-tolerant implementation of encoded gates outside of the Clifford group, however, involves operations that take these groups outside of the Pauli group. We will, therefore, consider separately two cases---encoded operations in the Clifford group, and encoded operations outside of the Clifford group. \subsection*{8.4.1 \hspace{2pt} Encoded operations in the Clifford group} \addcontentsline{toc}{subsection}{8.4.1 \hspace{0.15cm} Encoded operations in the Clifford group} In Ref.~\cite{Got97} it was shown that every encoded operation in the Clifford group can be implemented fault-tolerantly using Clifford gates on physical qubits. The Clifford group is generated by the Hadamard, Phase and C-NOT gates, but in addition to these gates, we will also demonstrate the holonomic implementation of the $X$ and $Z$ gates which are standard for quantum computation. We will restrict our attention to implementing single-qubit unitaries on the first qubit in a block, as well as C-NOT operations between the first qubits in two blocks. The operations on the rest of the qubits can be obtained analogously. 
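Before turning to the individual gates, the Pauli covariance invoked above---conjugation by Hadamard, Phase and C-NOT gates maps every tensor product of Pauli matrices to another such product, up to a phase---can be checked directly. The following minimal numerical sketch is purely illustrative; the brute-force decomposition routine and the chosen test operators are not part of the construction.

\begin{verbatim}
import numpy as np
from itertools import product

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = {'I': I2, 'X': X, 'Y': Y, 'Z': Z}

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard
S = np.diag([1, 1j]).astype(complex)                          # Phase gate
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

def as_pauli_product(M):
    # return the two-qubit Pauli product (with phase) equal to M, if any
    for (a, b), ph in product(product('IXYZ', repeat=2), [1, -1, 1j, -1j]):
        if np.allclose(M, ph * np.kron(PAULIS[a], PAULIS[b])):
            return ph, a + b
    return None

for name, U in [('H (x) I', np.kron(H, I2)),
                ('S (x) I', np.kron(S, I2)),
                ('CNOT', CNOT)]:
    for g in ['XI', 'ZI', 'IZ', 'XX']:
        G = np.kron(PAULIS[g[0]], PAULIS[g[1]])
        print(name, ':', g, '->', as_pauli_product(U @ G @ U.conj().T))
\end{verbatim}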
\subsubsection*{8.4.1.1 \hspace{2pt} Single-qubit unitary operations} \addcontentsline{toc}{subsubsection}{8.4.1.1 \hspace{0.15cm} Single-qubit unitary operations} In order to implement a single-qubit holonomic operation on the first qubit in a block, we will choose as a starting Hamiltonian an element of the stabilizer (with a minus sign) or an element of the gauge group that acts non-trivially on that qubit. Since we are considering codes that can correct arbitrary single-qubit errors, one can always find an element of the initial stabilizer or the initial gauge group that has a factor $\sigma^0=I$, $\sigma^1=X$, $\sigma^2=Y$ or $\sigma^3=Z$ acting on the first qubit, i.e., \begin{equation} \widehat{G}=\sigma^i\otimes \widetilde{G},\hspace{0.4cm}i=0,1,2,3 \label{stabelementgen} \end{equation} where $\widetilde{G}$ is a tensor product of Pauli matrices and the identity on the rest $n-1$ qubits. It can be verified that under Clifford gates the stabilizer and the gauge group transform in such a way that this is always the case except that the factor $\widetilde{G}$ may spread on qubits in other blocks. From now on, we will use ``hat" to denote operators on all these qubits and ``tilde" to denote operators on all the qubits excluding the first one. Without loss of generality we will assume that the chosen stabilizer or gauge-group element for that qubit has the form \begin{equation} \widehat{G}=Z\otimes \widetilde{G}.\label{stabelement} \end{equation} As initial Hamiltonian, we will take the operator \begin{equation} \widehat{H}(0)=-\widehat{G}=-Z\otimes \widetilde{G}.\label{H(0)} \end{equation} Thus, if $\widehat{G}$ is an element of the stabilizer, the code space will belong to the ground space of $\widehat{H}(0)$. Our goal is to find paths in parameter space such that when the Hamiltonian is varied adiabatically along these paths, it gives rise to single-qubit transformations on the first qubit in each of its eigenspaces. \textbf{Proposition:} If the initial Hamiltonian \eqref{H(0)} is varied adiabatically so that only the factor acting on the first qubit changes, \begin{equation} \widehat{H}(t)=-H(t)\otimes \widetilde{G},\label{Ham1} \end{equation} where \begin{equation} \textrm{Tr}\{H(t)\}=0, \end{equation} the geometric transformation resulting in each of the eigenspaces of this Hamiltonian will be equivalent to a local unitary on the first qubit, $\widehat{U}(t)\approx U(t)\otimes \widetilde{I}$.\\ \textbf{Proof.} Observe that \eqref{Ham1} can be written as \begin{equation} \widehat{H}(t)=H(t)\otimes \widetilde{P}_0- H(t)\otimes \widetilde{P}_1,\label{Ham2} \end{equation} where \begin{equation} \widetilde{P}_{0,1}=\frac{\widetilde{I}\pm \widetilde{G}}{2}\label{Projectors} \end{equation} are orthogonal complementary projectors. The evolution driven by $\widehat{H}(t)$ is therefore \begin{equation} \widehat{U}(t) = U_0(t)\otimes \widetilde{P}_0 + U_1(t) \otimes \widetilde{P}_1,\label{overallunitary} \end{equation} where \begin{equation} U_{0,1}(t)=\mathcal{T}\textrm{exp}(-i\overset{t}{\underset{0}{\int}}\pm H(\tau)d\tau).\label{unitaries} \end{equation} Let $|\phi_{0}(t)\rangle$ and $|\phi_{1}(t)\rangle$ be the instantaneous ground and excited states of $H(t)$ with eigenvalues $E_{0,1}(t)=\mp E(t)$ ($E(t)>0$). 
Using Eq.~\eqref{adiabaticevolution} for the expressions \eqref{unitaries}, we obtain that in the adiabatic limit \begin{equation} U_{0,1}(t)= e^{i\omega(t)}U_{A_{0,1}}(t)\oplus e^{-i\omega(t)}U_{A_{1,0}}(t),\label{unita} \end{equation} where $\omega(t)= \int_0^t d\tau E(\tau)$ and \begin{gather} U_{A_{0,1}}(t)=e^{\int_0^t d\tau \langle\phi_{0,1}(\tau)|\frac{d}{d\tau}|\phi_{0,1}(\tau)\rangle }|\phi_{0,1}(t)\rangle \langle \phi_{0,1}(0)|.\label{holon} \end{gather} The projectors on the instantaneous ground and excited eigenspaces of $\widehat{H}(t)$ are \begin{equation} \widehat{P}_0=|\phi_0(t)\rangle\langle\phi_0(t)|\otimes \widetilde{P}_0 + |\phi_1(t)\rangle\langle\phi_1(t)|\otimes \widetilde{P}_1 \end{equation} and \begin{equation} \widehat{P}_1=|\phi_1(t)\rangle\langle\phi_1(t)|\otimes \widetilde{P}_0 + |\phi_0(t)\rangle\langle\phi_0(t)|\otimes \widetilde{P}_1, \end{equation} respectively. Using Eq.~\eqref{unita} and Eq.~\eqref{holon}, one can see that the effect of the unitary \eqref{overallunitary} on each of these projectors is \begin{equation} \widehat{U}(t)\widehat{P}_0=e^{i\omega(t)}(U_{A_{0}}(t)\oplus U_{A_{1}}(t))\otimes\widetilde{I} \hspace{0.1cm}\widehat{P}_0, \end{equation} \begin{equation} \widehat{U}(t)\widehat{P}_1=e^{-i\omega(t)}(U_{A_{0}}(t)\oplus U_{A_{1}}(t))\otimes\widetilde{I} \hspace{0.1cm}\widehat{P}_1, \end{equation} i.e, up to an overall dynamical phase its effect on each of the eigenspaces is the same as that of the unitary \begin{equation} \widehat{U}(t)=U(t)\otimes\widetilde{I}, \end{equation} where \begin{equation} U(t)=U_{A_{0}}(t)\oplus U_{A_{1}}(t)\label{finalU}. \end{equation} We next show how by suitably choosing $H(t)$ we can implement all necessary single-qubit gates. We will identify a set of points in parameter space, such that by interpolating between these points we can draw various paths resulting in the desired transformations. We remark that if a path does not form a loop, the resulting geometric transformation \eqref{finalU} is an open-path holonomy \cite{KAS06}. Consider the single-qubit unitary operator \begin{equation} V_{\theta\pm}=\frac{1}{\sqrt{2}}\begin{pmatrix} 1& \mp e^{-i\theta}\\ \pm e^{i\theta}&1 \end{pmatrix}, \end{equation} where $\theta$ is a real parameter (note that $V_{\theta -}=V^{\dagger}_{\theta +}$). Define the following single-qubit Hamiltonian: \begin{equation} H_{\theta\pm}\equiv V_{\theta\pm}ZV^{\dagger}_{\theta\pm}. \end{equation} Let $H(t)$ in Eq.~\eqref{Ham1} be a Hamiltonian which interpolates between $H(0)=Z$ and $H(T)=H_{\theta\pm}$ (up to a factor) as follows: \begin{equation} H(t)=f(t)Z+g(t) H_{\theta\pm}\equiv H_{\theta\pm;f,g}(t), \label{interpolation} \end{equation} where $f(0),g(T)>0$, $f(T)=g(0)=0$. To simplify our notations, we will drop the indices $f$ and $g$ of the Hamiltonian, since the exact form of these functions is not important for our analysis as long as they are sufficiently smooth (see discussion below). This Hamiltonian has eigenvalues $\pm \sqrt{f(t)^2+g(t)^2}$ and its energy gap is non-zero unless the entire Hamiltonian vanishes. We will show that in the adiabatic limit, the Hamiltonian \eqref{Ham1} with $H(t)=H_{\theta\pm}(t)$ gives rise to the geometric transformation \begin{equation} \widehat{U}_{\theta\pm}(T)=V_{\theta\pm}\otimes \widetilde{I}. 
\end{equation} To prove this, observe that \begin{equation} -H_{\theta\pm}(t)=W_{\theta}H_{\theta\pm}(t)W_{\theta}, \end{equation} where $W_{\theta}$ is the Hermitian unitary \begin{equation} W_{\theta}=\begin{pmatrix} 0&ie^{-i\theta}\\-ie^{i\theta}&0 \end{pmatrix}. \end{equation} The unitaries $U_{\theta\pm{0,1}}$, given by Eq.~\eqref{unitaries} for $H(t)=H_{\theta\pm}(t)$, are then related by \begin{equation} U_{{\theta\pm}0}=W_{\theta} U_{{\theta\pm}1} W_{\theta}.\label{U01} \end{equation} Using that $W_{\theta}|0\rangle=-ie^{i\theta}|1\rangle$, $W_{\theta}|1\rangle=ie^{-i\theta}|0\rangle$, from Eq.~\eqref{unita} and Eq.~\eqref{holon} one can see that Eq.~\eqref{U01} implies \begin{equation} U_{{\theta\pm}A_0}=W_{\theta}U_{{\theta\pm}A_1}W_{\theta}. \label{wrelation} \end{equation} Let us define the eigenstates of $H_{\theta\pm}(t)$ at time $T$ as $|\phi_{{\theta\pm}0}(T)\rangle = V_{\theta\pm}|0\rangle$ and $|\phi_{{\theta\pm}1}(T)\rangle = V_{\theta\pm}|1\rangle$. Expression \eqref{holon} can then be written as \begin{gather} U_{{\theta\pm}A_{0}}(T)=e^{i\alpha_{{\theta\pm}0}} V_{\theta\pm}|0\rangle \langle 0|,\notag\\ U_{{\theta\pm}A_{1}}(T)=e^{i\alpha_{{\theta\pm}1}} V_{\theta\pm}|1\rangle \langle 1|,\label{UA01} \end{gather} where $\alpha_{{\theta\pm}0}$ and $\alpha_{{\theta\pm}1}$ are geometric phases. Without explicitly calculating the geometric phases, from Eq.~\eqref{UA01} and Eq.~\eqref{wrelation} we obtain \begin{equation} e^{i\alpha_{{\theta\pm}0}}=e^{i\alpha_{{\theta\pm}1}}. \end{equation} Therefore, up to a global phase, Eq.~\eqref{finalU} yields \begin{equation} U_{\theta\pm}(T)\sim V_{\theta\pm}. \end{equation} We will use this result, to construct a set of standard gates by sequences of operations of the form $V_{\theta\pm}$, which can be generated by interpolations of the type \eqref{interpolation} run forward or backward. For single-qubit gates in the Clifford group, we will only need three values of the parameter $\theta$: $0$, $\pi/2$ and $\pi/4$. For completeness, however, we will also demonstrate how to implement the $\pi/8$ gate, which together with the Hadamard gate is sufficient to generate any single-qubit unitary transformation \cite{BMPRV99}. For this we will need $\theta=\pi/8$. Note that \begin{equation} H_{\theta\pm}= \pm (\cos{\theta}X+\sin{\theta}Y), \end{equation} so for these values of $\theta$ we have $H_{0\pm}=\pm X$, $H_{\pi/2\pm}=\pm Y$, $H_{\pi/4\pm}=\pm (\frac{1}{\sqrt{2}}X+\frac{1}{\sqrt{2}}Y)$, $H_{\pi/8\pm}=\pm (\cos{\frac{\pi}{8}}X+\sin{\frac{\pi}{8}}Y)$. Consider the adiabatic interpolations between the following Hamiltonians: \begin{equation} -Z\otimes\widetilde{G} \rightarrow -Y\otimes \widetilde{G} \rightarrow Z\otimes\widetilde{G}. \end{equation} According to the above result, the first interpolation yields the transformation $V_{\pi/2+}$. The second interpolation can be regarded as the inverse of $Z\otimes\widetilde{G}\rightarrow -Y\otimes\widetilde{G}$ which is equivalent to $-Z\otimes\widetilde{G}\rightarrow Y\otimes\widetilde{G}$ since $\widehat{H}(t)$ and $-\widehat{H}(t)$ yield the same geometric transformations. Thus the second interpolation results in $V_{\pi/2-}^{\dagger}=V_{\pi/2+}$. The net result is therefore $V_{\pi/2+}V_{\pi/2+}=iX$. We see that up to a global phase the above sequence results in a geometric implementation of the $X$ gate. 
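For concreteness, this last identity can be checked directly from the definition of $V_{\theta\pm}$: with $\theta=\pi/2$,
\begin{equation}
V_{\pi/2+}=\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & i\\ i & 1 \end{pmatrix},\qquad
V_{\pi/2+}V_{\pi/2+}=\frac{1}{2}\begin{pmatrix} 1+i^2 & 2i\\ 2i & 1+i^2 \end{pmatrix}=\begin{pmatrix} 0 & i\\ i & 0 \end{pmatrix}=iX .
\end{equation}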
Similarly, one can verify that the $Z$ gate can be realized via the loop \begin{equation} -Z\otimes\widetilde{G} \rightarrow -X\otimes \widetilde{G}\rightarrow Z\otimes\widetilde{G} \rightarrow Y\otimes \widetilde{G}\rightarrow -Z\otimes\widetilde{G}.\label{Zgate} \end{equation} The Phase gate can be realized by applying \begin{equation} -Z\otimes\widetilde{G}\rightarrow -(\frac{1}{\sqrt{2}}X+\frac{1}{\sqrt{2}}Y)\otimes\widetilde{G}\rightarrow Z\otimes\widetilde{G}, \end{equation} followed by the $X$ gate. The Hadamard gate can be realized by first applying $Z$, followed by \begin{equation} -Z\otimes\widetilde{G} \rightarrow -X\otimes \widetilde{G}. \end{equation} Finally, the $\pi/8$ gate can be implemented by first applying $XZ$, followed by \begin{equation} Z\otimes\widetilde{G}\rightarrow -(\cos{\frac{\pi}{8}}X+\sin{\frac{\pi}{8}}Y)\otimes\widetilde{G}\rightarrow -Z\otimes\widetilde{G}. \end{equation} \subsubsection*{8.4.1.2 \hspace{2pt} A note on the adiabatic condition} \addcontentsline{toc}{subsubsection}{8.4.1.2 \hspace{0.15cm} A note on the adiabatic condition} Before we show how to implement the C-NOT gate, let us comment on the conditions under which the adiabatic approximation assumed in the above operations is satisfied. Because of the form \eqref{overallunitary} of the overall unitary, the adiabatic approximation depends on the extent to which each of the unitaries \eqref{unitaries} approximate the expressions \eqref{unita}. The latter depends only on the properties of the single-qubit Hamiltonian $H(t)$, for which the adiabatic condition \cite{mes} reads \begin{equation} \frac{\varepsilon}{{\Delta}^2}\ll 1, \label{adi} \end{equation} where \begin{equation} \varepsilon = \operatornamewithlimits{max}_{0\leq t\leq T}|\langle \phi_1(t)|\frac{dH(t)}{dt}|\phi_0(t)\rangle|, \label{eps} \end{equation} and \begin{equation} \Delta = \operatornamewithlimits{min}_{0\leq t\leq T}(E_1(t)-E_0(t))= \operatornamewithlimits{min}_{0\leq t\leq T}2E(t) \label{Delt} \end{equation} is the minimum energy gap of $H(t)$. Along the segments of the parameter paths we described, the Hamiltonian is of the form \eqref{interpolation} and its derivative is \begin{equation} \frac{dH_{\theta\pm}(t)}{dt}=\frac{df(t)}{dt}Z+\frac{dg(t)}{dt}H_{\theta\pm}, \hspace{0.6cm} 0<t<T. \end{equation} This derivative is well defined as long as $\frac{df(t)}{dt}$ and $\frac{dg(t)}{dt}$ are well defined. The curves we described, however, may not be differentiable at the points connecting two segments. In order for the Hamiltonians \eqref{interpolation} that interpolate between these points to be differentiable, the functions $f(t)$ and $g(t)$ have to satisfy $\frac{df(T)}{dt}=0$ and $\frac{dg(0)}{dt}=0$. This means that the change of the Hamiltonian slows down to zero at the end of each segment (except for a possible change in its strength), and increases again from zero along the next segment. We point out that when the Hamiltonian stops changing, we can turn it off completely by decreasing its strength. This can be done arbitrarily fast and it would not affect a state which belongs to an eigenspace of the Hamiltonian. Similarly, we can turn on another Hamiltonian for the implementation of a different operation. The above condition guarantees that the adiabatic approximation is satisfied with precision $\textit{O}((\frac{\varepsilon}{{\Delta}^2})^2)$. It is known, however, that under certain conditions on the Hamiltonian, we can obtain better results \cite{HJ02}. 
Let us write the Schr\"{o}dinger equation as \begin{equation} i\frac{d}{dt}|\psi(t)\rangle = H(t) |\psi(t)\rangle\equiv \frac{1}{\epsilon} \bar{H}(t) |\psi(t)\rangle, \end{equation} where $\epsilon>0$ is small. If $\bar{H}(t)$ is smooth and all its derivatives vanish at the end points $t=0$ and $t=T$, the error would scale super-polynomially with $\epsilon$, i.e., it will decrease with $\epsilon$ faster than $\textit{O}(\epsilon^N)$ for any $N$. (Notice that $\frac{\varepsilon}{{\Delta}^2}\propto \epsilon$, i.e., the error according to the standard adiabatic approximation is of order $\textit{O}(\epsilon^2)$.) In our case, the smoothness condition translates directly to the functions $f(t)$ and $g(t)$. For any choice of these functions which satisfies the standard adiabatic condition, we can ensure that the stronger condition is satisfied by the reparameterization $f(t)\rightarrow f(y(t))$, $g(t)\rightarrow g(y(t))$ where $y(t)$ is a smooth function of $t$ which satisfies $y(0)=0$, $y(T)=T$, and has vanishing derivatives at $t=0$ and $t=T$. Then by slowing down the change of the Hamiltonian by a constant factor $\epsilon$, which amounts to an increase of the total time $T$ by a factor $1/\epsilon$, we can decrease the error super-polynomially in $\epsilon$. We will use this result to obtain a low-error interpolation in Section 8.5 where we estimate the time needed to implement a holonomic gate with certain precision. \subsubsection*{8.4.1.3 \hspace{2pt} The C-NOT gate} \addcontentsline{toc}{subsubsection}{8.4.1.3 \hspace{0.15cm} The C-NOT gate} The stabilizer or the gauge group on multiple blocks of the code is a direct product of the stabilizers or the gauge groups of the individual blocks. Therefore, from Eq.~\eqref{stabelementgen} it follows that one can always find an element of the initial stabilizer or gauge group on multiple blocks which has any desired combination of factors $\sigma^i$, $i=0,1,2,3$ on the first qubits in these blocks. It can be verified that applying transversal Clifford operations on the blocks does not change this property. Therefore, we can assume that for implementing a C-NOT gate we can find an element of the stabilizer or the gauge group which has the form \eqref{stabelement} where the factor $Z$ acts on the target qubit and $\widetilde{G}$ acts trivially on the control qubit. Then it is straightforward to verify that the C-NOT gate can be implemented by first applying the inverse of the Phase gate ($S^{\dagger}$) on the control qubit, as well as the transformation $V_{\pi/2+}$ on the target qubit, followed by the transformation \begin{equation} -I^c\otimes Y\otimes \widetilde{G} \rightarrow -Z^c\otimes Z\otimes \widetilde{G},\label{CNOTint} \end{equation} where the superscript $c$ denotes the control qubit. The interpolation \eqref{CNOTint} is understood as in Eq.~\eqref{interpolation}. 
To see that this yields the desired transformation, observe that the Hamiltonian corresponding to Eq.~\eqref{CNOTint} can be written in the form \begin{equation} \widehat{\widehat{H}}(t)= |0\rangle\langle 0|^c \otimes {H}_{\pi/2+}(T-t)\otimes\widetilde{G}+|1\rangle\langle 1|^c\otimes {H}_{\pi/2-}(T-t)\otimes\widetilde{G}.\label{HCNOT} \end{equation} The application of this Hamiltonian from time $t=0$ to time $t=T$ results in the unitary \begin{equation} \widehat{\widehat{U}}(T)=|0\rangle\langle 0|^c \otimes \widehat{U}^{\dagger}_{\pi/2+}(T)+|1\rangle\langle 1|^c\otimes \widehat{U}^{\dagger}_{\pi/2-}(T),\label{doublehatU} \end{equation} where \begin{equation} \widehat{U}_{\pi/2\pm}(T)=\mathcal{T}\textrm{exp}(-i\overset{T}{\underset{0}{\int}}d\tau H_{\pi/2\pm}(\tau)\otimes\widetilde{G}). \end{equation} But the Hamiltonians ${H}_{\pi/2+}(T-t)\otimes \widetilde{G}$ and ${H}_{\pi/2-}(T-t)\otimes\widetilde{G}$ have the same instantaneous spectrum, and Eq.~\eqref{adiabaticevolution} implies that up to a dynamical phase, each of the eigenspaces of $\widehat{\widehat{H}}$ will undergo the geometric transformation \begin{equation} \widehat{\widehat{U}}_g(T)=|0\rangle\langle 0|^c \otimes V_{\pi/2+}^{\dagger}\otimes \widetilde{I}+|1\rangle\langle 1|^c\otimes V_{\pi/2-}^{\dagger}\otimes \widetilde{I}, \end{equation} where $V_{\pi/2\pm}^{\dagger}\otimes \widetilde{I}$ are the geometric transformations generated by ${H}_{\pi/2\pm}(T-t)\otimes\widetilde{G}$ as shown earlier. This transformation was preceded by the operation $S^{\dagger c}\otimes V_{\pi/2+}\otimes \widetilde{I}$, which means that the net result is \begin{gather} \widehat{\widehat{U}}_g(T) S^{\dagger c}\otimes V_{\pi/2+}\otimes \widetilde{I}=|0\rangle\langle 0|^c \otimes V_{\pi/2+}^{\dagger}V_{\pi/2+}\otimes \widetilde{I}-i|1\rangle\langle 1|^c\otimes V_{\pi/2-}^{\dagger}V_{\pi/2+}\otimes \widetilde{I}\notag\\ =|0\rangle\langle 0|^c \otimes I\otimes \widetilde{I}+|1\rangle\langle 1|^c\otimes X\otimes \widetilde{I}. \end{gather} This is exactly the C-NOT transformation. Note that because of the form \eqref{HCNOT} of $\widehat{\widehat{H}}(t)$, the extent to which the adiabatic approximation is satisfied during this transformation depends only on the adiabatic properties of the single-qubit Hamiltonians $H_{\pi/2\pm}(T-t)$ which we discussed in the previous subsection. Our construction allowed us to prove the resulting geometric transformations without explicitly calculating the holonomies \eqref{holonomy}. It may be instructive, however, to demonstrate this calculation for at least one of the gates we described. In the appendix at the end of this chapter we present an explicit calculation of the geometric transformation for the $Z$ gate for the following two cases: $f(t)=1-\frac{t}{T}$, $g(t)=\frac{t}{T}$ (linear interpolation); $f(t)=\cos{\frac{\pi t}{2 T}}$, $g(t)=\sin {\frac{\pi t}{2 T}}$ (unitary interpolation). \subsection*{8.4.2 \hspace{2pt} Encoded operations outside of the Clifford group} \addcontentsline{toc}{subsection}{8.4.2 \hspace{0.15cm} Encoded operations outside of the Clifford group} For universal fault-tolerant computation we also need at least one encoded gate outside of the Clifford group. The fault-tolerant implementation of such gates is based on the preparation of a special encoded state \cite{Sho96,KLZ98,Got97,BMPRV99,ZLC00} which involves a measurement of an encoded operator in the Clifford group. 
For example, the $\pi/8$ gate requires the preparation of the state $\frac{|0\rangle+\textrm{exp}(i\pi/4)|1\rangle}{\sqrt{2}}$, which can be realized by measuring the operator $e^{-i\pi/4}SX$ \cite{BMPRV99}. Equivalently, the state can be obtained by applying the operation $RS^{\dagger}$, where $R$ denotes the Hadamard gate, on the state $\cos(\pi/8)|0\rangle+\sin(\pi/8)|1\rangle$, which can be prepared by measuring the Hadamard gate \cite{KLZ98}. The Toffoli gate requires the preparation of the three-qubit encoded state $\frac{|000\rangle+|010\rangle+|100\rangle+|111\rangle}{2}$ and involves a similar procedure \cite{ZLC00}. In all these instances, the measurement of the Clifford operator is realized by applying transversally the operator conditioned on the qubits in a ``cat" state. We now show a general method that can be used to implement holonomically any conditional transversal Clifford operation with conditioning on the ``cat" state. Let $O$ be a Clifford gate acting on the first qubits from some set of blocks. As we discussed in the previous section, under this unitary the stabilizer and the gauge group transform in such a way that we can always find an element with an arbitrary combination of Pauli matrices on the first qubits. If we write this element in the form \begin{equation} \widehat{G}=G_1\otimes G_{2,...,n}, \end{equation} where $G_1$ is a tensor product of Pauli matrices acting on the first qubits from the blocks, and $G_{2,...,n}$ is an operator on the rest of the qubits, then applying $O$ conditioned on the first qubit in a ``cat" state transforms this stabilizer or gauge-group element as follows: \begin{gather} I^c\otimes G_1\otimes G_{2,...,n}=|0\rangle\langle 0|^c\otimes G_1\otimes G_{2,...,n}+|1\rangle\langle 1|^c\otimes G_1\otimes G_{2,...,n} \notag\\ \rightarrow |0\rangle\langle 0|^c\otimes G_1\otimes G_{2,...,n}+|1\rangle\langle 1|^c\otimes OG_1O^{\dagger}\otimes G_{2,...,n}, \end{gather} where the superscript $c$ denotes the control qubit from the ``cat" state. We can implement this operation by choosing the factor $G_1$ the same as the one we would use if we wanted to implement the operation $O$ according to the previously described procedure. Then we can apply the following Hamiltonian: \begin{equation} \widehat{\widehat{H}}_{C(O)}(t)=-|0\rangle\langle 0|^c\otimes G_1\otimes G_{2,...,n}-\alpha(t)|1\rangle\langle 1|^c\otimes H_O(t)\otimes G_{2,...,n},\label{HamA} \end{equation} where $H_O(t)\otimes G_{2,...,n}$ is the Hamiltonian that we would use for the implementation of the operation $O$ and $\alpha(t)$ is a real parameter chosen such that at every moment the operator $\alpha(t)|1\rangle\langle 1|^c\otimes H_O(t)\otimes G_{2,...,n}$ has the same instantaneous spectrum as the operator $|0\rangle\langle 0|^c\otimes G_1\otimes G_{2,...,n}$. This guarantees that the overall Hamiltonian is degenerate and the geometric transformation in each of its eigenspaces is \begin{equation} \widehat{\widehat{U}}_g(t)=|0\rangle\langle 0|^c\otimes I_1\otimes I_{2,...,n}+|1\rangle\langle 1|^c\otimes U_O(t)\otimes I_{2,...,n}, \end{equation} where $U_O(t)$ is the geometric transformation on the first qubits generated by $H_O(t)\otimes G_{2,...,n}$. Since we presented the constructions of our basic Clifford operations up to an overall phase, the operation $U_O(t)$ may differ from the desired operation by a phase. This phase can be corrected by applying a suitable gate on the control qubit from the ``cat" state (we explain how this can be done in the next section).
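For the interpolating Hamiltonians of Section 8.4.1 (Eq.~\eqref{interpolation}), whose instantaneous eigenvalues are $\pm\sqrt{f(t)^2+g(t)^2}$, one concrete choice that satisfies the spectrum-matching condition on $\alpha(t)$ stated above is
\begin{equation}
\alpha(t)=\frac{1}{\sqrt{f(t)^2+g(t)^2}},
\end{equation}
since then both terms in Eq.~\eqref{HamA} have instantaneous eigenvalues $\pm 1$ on their respective sectors and the degeneracy of the overall Hamiltonian is preserved throughout the evolution.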
We remark that a Hamiltonian of the type \eqref{HamA} requires fine tuning of the parameter $\alpha(t)$ and generally can be complicated. Our goal in this section is to prove that universal fault-tolerant holonomic computation is possible in principle. In Section 8.6 we show that depending on the code one can find more natural implementations of these operations. If we want to apply a second conditional Clifford operation $Q$ on the first qubits in the block, we can do this via the Hamiltonian \begin{equation} \widehat{\widehat{H}}_{C(Q)}(t)=-|0\rangle\langle 0|^c\otimes G_1\otimes G_{2,...,n}-\beta(t)|1\rangle\langle 1|^c\otimes H_Q(t)\otimes G_{2,...,n}, \label{HamB} \end{equation} where $H_Q(t)\otimes G_{2,...,n}$ is now the Hamiltonian we would use to implement the operation $Q$, had we implemented the operation $O$ before that. Here again, the factor $\beta(t)$ guarantees that there is no splitting of the energy levels of the Hamiltonian. Subsequent operations are applied analogously. Using this general method, we can implement holonomically any transversal Clifford operation conditioned on the ``cat" state. \subsection*{8.4.3 \hspace{2pt} Using the ``cat" state} \addcontentsline{toc}{subsection}{8.4.3 \hspace{0.15cm} Using the ``cat" state} In addition to transversal operations, a complete fault-tolerant scheme requires the ability to prepare, verify and use a special ancillary state such as the ``cat" state $(|00...0\rangle+|11...1\rangle)/\sqrt{2}$ proposed by Shor \cite{Sho96}. This can also be done in the spirit of our holonomic scheme. Since the ``cat" state is known and its construction is non-fault-tolerant, we can prepare it by simply treating each initially prepared qubit as a simple code (with $\widetilde{G}$ in Eq.~\eqref{stabelement} being trivial), and updating the stabilizer of the code via the applied geometric transformation as the operation progresses. The stabilizer of the prepared ``cat" state is generated by $Z_iZ_j$, $i<j$. Transversal unitary operations between the ``cat" state and other codewords are applied as described in the previous section. We also have to be able to measure the parity of the state, which requires the ability to apply successively C-NOT operations from two different qubits in the ``cat" state to one and the same ancillary qubit initially prepared in the state $|0\rangle$. We can regard the qubit in state $|0\rangle$ as a simple code with stabilizer $ \langle Z \rangle $, and we can apply the first C-NOT as described before. Even though after this operation the state of the target qubit is unknown, the second C-NOT gate can be applied via the same interaction, since the transformation in each eigenspace of the Hamiltonian is the same and at the end when we measure the qubit we project on one of the eigenspaces. \subsection*{8.4.4 \hspace{2pt} Fault-tolerance of the scheme} \addcontentsline{toc}{subsection}{8.4.4 \hspace{0.15cm} Fault-tolerance of the scheme} We showed how we can generate any transversal operation on the code space holonomically, assuming that the state is non-erroneous. But what if an error occurs on one of the qubits? At any moment, we can distinguish two types of errors---those that result in transitions between the ground and the excited spaces of the current Hamiltonian, and those that result in transformations inside the eigenspaces. Due to the discretization of errors in QEC, it suffices to prove correctability for each type separately. 
The key property of our construction is that in each of the eigenspaces, the geometric transformation is the same and it is transversal. Because of this, if we are applying a unitary on the first qubit, an error on that qubit will remain localized regardless of whether it causes an excitation or not. If the error occurs on one of the other qubits, at the end of the transformation the result would be the desired single-qubit unitary gate plus the error on the other qubit, which is correctable. It is remarkable that even though the Hamiltonian couples qubits within the same block, single-qubit errors do not propagate. This is because the coupling between the qubits amounts to a change in the relative phase between the ground and excited spaces, but the latter is irrelevant since it is either equivalent to a gauge transformation, or when we apply a correcting operation we project on one of the eigenspaces. In the case of the C-NOT gate, an error can propagate between the control and the target qubits, but it never results in two errors within the same codeword. \section*{8.5 \hspace{2pt} Effects on the accuracy threshold for environment noise} \addcontentsline{toc}{section}{8.5 \hspace{0.15cm} Effects on the accuracy threshold for environment noise} Since the method we presented conforms completely to a given fault-tolerant scheme, it would not affect the error threshold per operation for that scheme. Some of its features, however, would affect the threshold for \textit{environment} noise. First, observe that when applying the Hamiltonian \eqref{Ham1}, we cannot at the same time apply operations on the other qubits on which the factor $\widetilde{G}$ acts non-trivially. Thus, some operations at the lowest level of concatenation that would otherwise be implemented simultaneously might have to be implemented serially. The effect of this is equivalent to slowing down the circuit by a constant factor. (Note that we could also vary the factor $\widetilde{G}$ simultaneously with $H(t)$, but in order to obtain the same precision as that we would achieve by a serial implementation, we would have to slow down the change of the Hamiltonian by the same factor.) The slowdown factor resulting from this loss of parallelism is usually small since this problem occurs only at the lowest level of concatenation. For example, for the Bacon-Shor code, we can implement operations on up to 6 of the 9 qubits in a block simultaneously. As we show in Section 8.6, when implementing an encoded single-qubit gate, we can address any two qubits in a row or column using our method by taking $\widetilde{G}$ in Eq.~\eqref{Ham1} to be a single-qubit operator $Z$ or $X$ on the third qubit in the same row or column. The Hamiltonians used for applying operations on the two qubits commute with each other at all times and do not interfere. A similar argument holds for the implementation of the encoded C-NOT and the operations involving the ``cat" state. Thus for the Bacon-Shor code we have a slowdown due to the loss of parallelism by a factor of $1.5$. A more significant slowdown results from the fact that the evolution is adiabatic. In order to obtain a rough estimate of the slowdown due specifically to the adiabatic requirement, we will compare the time $T_{h}$ needed for the implementation of a holonomic gate with precision $1-\delta$ to the time $T_{d}$ needed for a dynamical realization of the same gate with the same strength of the Hamiltonian.
We will consider a realization of the $X$ gate via the interpolation \begin{equation} \widehat{H}(t)=-V_{X}(\tau(t))ZV_{X}^{\dagger}(\tau(t))\otimes \widetilde{G}, \hspace{0.2cm} V_{X}(\tau(t))=\textrm{exp} \left(i\tau(t)\frac{\pi }{2T_{h}}X\right),\label{Hamest} \end{equation} where $\tau(0)=0$, $\tau(T_h)=T_h$. Thus the energy gap of the Hamiltonian is always at its maximum. The optimal dynamical implementation of the same gate is via the Hamiltonian $-X$ for time $T_{d}=\frac{\pi}{2}$. As we argued in Section 8.4, the accuracy with which the adiabatic approximation holds for the Hamiltonian \eqref{Hamest} is the same as that for the Hamiltonian \begin{equation} H(t)=V_{X}(\tau(t))ZV_{X}^{\dagger }(\tau(t)).\label{Hamest2} \end{equation} We now present estimates for two different choices of the function $\tau(t)$. The first one is \begin{equation} \tau(t)=t. \end{equation} In this case the Schr\"{o}dinger equation can be easily solved in the instantaneous eigenbasis of the Hamiltonian \eqref{Hamest2}. For the probability that the initial ground state remains in the ground state at the end of the evolution, we obtain \begin{equation} p=\frac{1}{1+\varepsilon ^{2}}+\frac{ \varepsilon ^{2}}{1+\varepsilon ^{2}}\cos ^{2}(\frac{\pi }{4\varepsilon } \sqrt{1+\varepsilon ^{2}}), \end{equation} where \begin{equation} \varepsilon =\frac{T_{d}}{T_{h}}. \end{equation} Expanding in powers of $\varepsilon $ and averaging the square of the cosine, whose period is much smaller than $T_{h}$, we obtain the condition \begin{equation} \varepsilon ^{2}\leq 2\delta. \end{equation} Assuming, for example, that $\delta \approx 10^{-4}$ (approximately the threshold for the 9-qubit Bacon-Shor code \cite{AC07}), we obtain that the time of evolution for the holonomic case must be about 70 times longer than that in the dynamical case. It is known, however, that if $H(t)$ is smooth and its derivatives vanish at $t=0$ and $t=T_h$, the adiabatic error decreases super-polynomially with $T_h$ \cite{HJ02}. To achieve this, we will choose \begin{equation} \tau(t)= \frac{1}{a}\int_0^t dt' e^{-1/\sin(\pi t'/T_h)},\hspace{0.2cm} a=\int_0^{T_h} dt' e^{-1/\sin(\pi t'/T_h)}. \end{equation} For this interpolation, by a numerical solution we obtain that when $T_h/T_d\approx 17$ the error is already of the order of $10^{-6}$, which is well below the threshold values obtained for the Bacon-Shor codes \cite{AC07}. This is a remarkable improvement in comparison to the previous interpolation and shows that the smoothness of the Hamiltonian plays an important role in the performance of the scheme. An additional slowdown in comparison to a perfect dynamical scheme may result from the fact that the constructions for some of the standard gates we presented involve long sequences of loops. With more efficient parameter paths, however, it should be possible to reduce this slowdown to a minimum. An approach for finding loops presented in Ref.~\cite{NNS03} may be useful in this respect. In comparison to a dynamical implementation, the allowed rate of environment noise for the holonomic case would decrease by a factor similar to the slowdown factor. In practice, however, dynamical gates are not perfect and the holonomic approach may be advantageous if it gives rise to a better operational precision. We finally point out that an error in the factor $H(t)$ in the Hamiltonian \eqref{Ham1} would result in an error on the first qubit according to Eq.~\eqref{finalU}. Such an error clearly has to be below the accuracy threshold.
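The two estimates above can also be checked by direct numerical integration of the Schr\"{o}dinger equation for the single-qubit Hamiltonian \eqref{Hamest2}, computing the probability of leaving the instantaneous ground state at the end of the evolution. The following minimal sketch does this for the linear and for the smooth schedule at the same ratios $T_h/T_d$; the step count and the trapezoidal construction of the smooth schedule are illustrative numerical choices and are not part of the analysis.

\begin{verbatim}
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def leakage(Th, tau, steps=20000):
    # Probability of leaving the instantaneous ground state under
    # H(t) = V_X(tau(t)) Z V_X(tau(t))^dag,  V_X(s) = exp(i s pi/(2 Th) X),
    # starting from the ground state |1> of H(0) = Z.
    def V(s):
        a = s * np.pi / (2 * Th)
        return np.cos(a) * I2 + 1j * np.sin(a) * X            # = exp(i a X)
    dt = Th / steps
    psi = np.array([0, 1], dtype=complex)                      # ground state of Z
    for k in range(steps):
        Vm = V(tau((k + 0.5) * dt))
        H = Vm @ Z @ Vm.conj().T
        psi = (np.cos(dt) * I2 - 1j * np.sin(dt) * H) @ psi    # exp(-i dt H), H^2 = I
    ground_end = V(tau(Th)) @ np.array([0, 1], dtype=complex)
    return 1 - abs(np.vdot(ground_end, psi)) ** 2

Td = np.pi / 2
# normalized smooth schedule: tau(t)/Th = F(t/Th), F = cumulative of exp(-1/sin(pi u))
u = np.linspace(1e-9, 1 - 1e-9, 4001)
w = np.exp(-1.0 / np.sin(np.pi * u))
F = np.concatenate(([0.0], np.cumsum((w[1:] + w[:-1]) / 2 * np.diff(u))))
for ratio in (17, 70):
    Th = ratio * Td
    smooth = lambda t, Th=Th: Th * np.interp(t / Th, u, F) / F[-1]
    print(ratio, "linear:", leakage(Th, lambda t: t),
          "smooth:", leakage(Th, smooth))
\end{verbatim}

At equal evolution times the smooth schedule yields a leakage that is orders of magnitude smaller than the linear one, consistent with the comparison made above.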
Beyond the error in the factor $H(t)$ discussed above, more dangerous errors are also possible. For example, if the degeneracy of the Hamiltonian is broken, this can result in an unwanted dynamical transformation affecting all qubits on which the Hamiltonian acts non-trivially. Such multi-qubit errors have to be of higher order in the threshold, which imposes more severe restrictions on the Hamiltonian. \section*{8.6 \hspace{2pt} Fault-tolerant holonomic computation with low-weight Hamiltonians} \addcontentsline{toc}{section}{8.6 \hspace{0.15cm} Fault-tolerant holonomic computation with low-weight Hamiltonians} The weight of the Hamiltonians needed for the scheme we described depends on the weight of the stabilizer or gauge-group elements. Remarkably, certain codes possess stabilizer or gauge-group elements of low weight covering all qubits in the code, which allows us to perform holonomic computation using low-weight Hamiltonians. Here we will consider as an example a subsystem generalization of the 9-qubit Shor code \cite{Shor95}---the Bacon-Shor code \cite{Bac06, BC06}---which has particularly favorable properties for fault-tolerant computation \cite{Ali07, AC07}. In the 9-qubit Bacon-Shor code, the gauge group is generated by the weight-two operators $Z_{k,j}Z_{k,j+1}$ and $X_{j,k}X_{j+1,k}$, where the subscripts label the qubits by row and column when they are arranged in a $3\times 3$ square lattice. Since the Bacon-Shor code is a CSS code, the C-NOT gate has a direct transversal implementation. We now show that the C-NOT gate can be realized using at most weight-three Hamiltonians. If we want to apply a C-NOT gate between two qubits each of which is, say, in the first row and column of its block, we can use as a starting Hamiltonian $-Z^{t}_{1,1}\otimes Z^t_{1,2}$, where the superscript $t$ signifies that these are operators in the target block. We can then apply the C-NOT gate as described in the previous section. After the operation, however, this gauge-group element will transform to $-Z^{t}_{1,1}\otimes Z^c_{1,1}\otimes Z^t_{1,2}$. If we now want to implement a C-NOT gate between the qubits with index $\{1,2\}$ using as a starting Hamiltonian the operator $-Z^{t}_{1,1}\otimes Z^c_{1,1}\otimes Z^t_{1,2}$ according to the same procedure, we will have to use a four-qubit Hamiltonian. Of course, at this point we can use the starting Hamiltonian $-Z^t_{1,2}\otimes Z^t_{1,3}$, but if we had also applied a C-NOT between the qubits labeled $\{1,3\}$, this operator would not be available---it would have transformed to $-Z^t_{1,2}\otimes Z^t_{1,3}\otimes Z^c_{1,3}$. What we can do instead is to use as a starting Hamiltonian the operator $-Z^{t}_{1,1}\otimes Z^t_{1,2}\otimes Z^c_{1,2}$, which is obtained from the gauge-group element $Z^{t}_{1,1}\otimes Z^c_{1,1}\otimes Z^t_{1,2}\otimes Z^c_{1,2}$ after the application of the C-NOT between the qubits with index $\{1,1\}$. Since the C-NOT gate is its own inverse, we can regard the factor $Z^{t}_{1,1}$ as $\widetilde{G}$ in Eq.~\eqref{CNOTint} and use this starting Hamiltonian to apply our procedure backwards. Thus we can implement any transversal C-NOT gate using at most weight-three Hamiltonians. Since the encoded $X$, $Y$ and $Z$ operations have a bitwise implementation, we can always apply them according to our procedure using Hamiltonians of weight 2. For the Bacon-Shor code, the encoded Hadamard gate can be applied via bitwise Hadamard transformations followed by a rotation of the grid by $90$ degrees \cite{AC07}. 
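The bookkeeping of gauge-group elements under transversal C-NOT gates used above follows from the standard conjugation rules for Pauli operators. As a small self-contained check (an illustration of ours, assuming \texttt{numpy}), the following sketch verifies the rules that make $Z^{t}_{1,1}\otimes Z^t_{1,2}$ pick up the factor $Z^c_{1,1}$:
\begin{verbatim}
import numpy as np

I2 = np.eye(2); X = np.array([[0., 1.], [1., 0.]]); Z = np.diag([1., -1.])
CNOT = np.block([[I2, np.zeros((2, 2))], [np.zeros((2, 2)), X]])   # control = first qubit
kron = np.kron

# Z on the target picks up a Z on the control; Z on the control is unchanged
assert np.allclose(CNOT @ kron(I2, Z) @ CNOT, kron(Z, Z))
assert np.allclose(CNOT @ kron(Z, I2) @ CNOT, kron(Z, I2))
# X on the control spreads to the target; X on the target is unchanged
assert np.allclose(CNOT @ kron(X, I2) @ CNOT, kron(X, X))
assert np.allclose(CNOT @ kron(I2, X) @ CNOT, kron(I2, X))
print("C-NOT conjugation rules verified")
\end{verbatim}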
The encoded Phase gate can be implemented using the encoded C-NOT and an ancilla. We point out that the preparation and measurement of the ``cat" state can also be done using Hamiltonians of weight 2. To prepare the ``cat" state, we prepare first all qubits in the state $|f_+^0\rangle = (|0\rangle + |1\rangle)/\sqrt{2}$, which can be done by measuring each of them in the $|0\rangle$, $|1\rangle$ basis (this ability is assumed for any type of computation) and applying the transformation $-Z\rightarrow -X$ or $Z\rightarrow -X$ depending on the outcome. To complete the preparation of the ``cat" state, apply a two-qubit transformation between the first qubit and each of the other qubits ($j>1$) via the transformation \begin{equation} -I_1\otimes X_j\rightarrow -Z_1\otimes Z_j . \end{equation} Single-qubit transformations on qubits from the ``cat" state can be applied according to the method described in the previous section using at most weight-two Hamiltonians. To measure the parity of the state, we need to apply successively C-NOT operations from two different qubits in the ``cat" state to the same ancillary qubit initially prepared in the state $|0\rangle$. As described in the previous section, this can also be done according to our method and requires Hamiltonians of weight 2. For universal computation with the Bacon-Shor code, we also need to be able to apply one encoded transformation outside of the Clifford group. As we mentioned earlier, in order to implement the Toffoli gate or the $\pi/8$ gate, it is sufficient to be able to implement a C-NOT gate conditioned on a ``cat" state. For the Bacon-Shor code, the C-NOT gate has a transversal implementation, so the conditioned C-NOT gate can be realized by a series of transversal Toffoli operations between the ``cat" state and the two encoded states. We now show that the latter can be implemented using at most three-qubit Hamiltonians. Ref.~\cite{NieChu00} provides a circuit for implementing the Toffoli gate as a sequence of one- and two-qubit gates. We will use the same circuit, except that we flip the control and target qubits in every C-NOT gate using the identity \begin{equation} R_{1} R_{2} C_{1,2}R_{1} R_{2}= C_{2,1}, \end{equation} where $R_{i}$ denotes a Hadamard gate on the qubit labeled by $i$ and $C_{i,j}$ denotes a C-NOT gate between qubits $i$ and $j$ with $i$ being the control and $j$ being the target. Let $\textrm{Toffoli}_{i,j,k}$ denote the Toffoli gate on qubits $i$, $j$ and $k$ with $i$ and $j$ being the two control qubits and $k$ being the target qubit, and let $S_{i}$ and $T_i$ denote the Phase and $\pi/8$ gates on qubit $i$, respectively. Then the Toffoli gate on three qubits (the first one of which we will assume to belong to the ``cat" state), can be written as: \begin{gather} \textrm{Toffoli}_{1,2,3}=R_{2}C_{3,2}R_{3}T_{3}^{\dagger}R_{3}R_{1}C_{3,1}R_{3}T_{3}R_{3}C_{3,2}R_{3}T_{3}^{\dagger}R_{3} C_{3,1}\times\notag\\R_{3}T_{3}R_{3}R_{2}T_{2}^{\dagger}R_{2} C_{2,1}R_{2}T_{2}^{\dagger}R_{2} C_{2,1}R_{2}S_{2}R_{1}T_{1}.\label{Toffoli} \end{gather} To show that each of the above gates can be implemented holonomically using Hamiltonians of weight at most 3, we will need an implementation of the C-NOT gate which is suitable for the case when we have a stabilizer or gauge-group element of the form \begin{equation} \widehat{G}=X\otimes \widetilde{G},\label{Xgenerator} \end{equation} where the factor $X$ acts on the target qubit and $\widetilde{G}$ acts trivially on the control qubit. 
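The control--target flip identity $R_{1}R_{2}C_{1,2}R_{1}R_{2}=C_{2,1}$ invoked above is standard; the following short sketch (an illustration of ours, assuming \texttt{numpy}) confirms it numerically:
\begin{verbatim}
import numpy as np

H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)    # Hadamard gate (R in the text)
I2, X = np.eye(2), np.array([[0., 1.], [1., 0.]])
C12 = np.block([[I2, np.zeros((2, 2))], [np.zeros((2, 2)), X]])        # control qubit 1
C21 = np.kron(I2, np.diag([1., 0.])) + np.kron(X, np.diag([0., 1.]))   # control qubit 2
RR = np.kron(H, H)
assert np.allclose(RR @ C12 @ RR, C21)
print("R1 R2 C_{1,2} R1 R2 = C_{2,1} verified")
\end{verbatim}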
By a similar argument to the one in Section 8.4, one can verify that in the case of Eq.~\eqref{Xgenerator} the C-NOT gate can be implemented as follows: apply the operation $S^{\dagger}$ on the control qubit (we describe how to do this for our particular case below) together with the transformation \begin{equation} -X\otimes\widetilde{G}\rightarrow - Z\otimes \widetilde{G}\rightarrow X\otimes\widetilde{G}\label{CNOT1} \end{equation} on the target qubit, followed by the transformation \begin{equation} I^c\otimes X\otimes \widetilde{G}\rightarrow -(|0\rangle\langle 0|^c\otimes Z+|1\rangle\langle 1|^c\otimes Y)\otimes \widetilde{G}\rightarrow-I^c\otimes X\otimes \widetilde{G}.\label{CNOT2} \end{equation} Since the second and the third qubits belong to blocks encoded with the Bacon-Shor code, there are weight-two elements of the initial gauge group of the form $Z\otimes Z$ covering all qubits. The stabilizer generators on the ``cat" state are also of this type. Following the transformation of these operators according to the sequence of operations \eqref{Toffoli}, one can see that before every C-NOT gate in this sequence, there is an element of the form \eqref{Xgenerator} with $\widetilde{G}=Z$ which can be used to implement the C-NOT gate as described, provided that we can implement the gate $S^{\dagger}$ on the control qubit. We also point out that all single-qubit operations on qubit $1$ in this sequence can be implemented according to the procedure described in Section 8.4, since at every step we have a weight-two stabilizer element on that qubit with a suitable form. Therefore, all we need to show is how to implement the necessary single-qubit operations on qubits $2$ and $3$. Due to the complicated transformation of the gauge-group elements during the sequence of operations \eqref{Toffoli}, we will introduce a method of applying a single-qubit operation with a starting Hamiltonian that acts trivially on the qubit. For implementing single-qubit operations on qubits $2$ and $3$ we will use as a starting Hamiltonian the operator \begin{equation} \widehat{\widehat{H}}(0)=-I_{i}\otimes X_1\otimes \widetilde{Z}, \hspace{0.4 cm} i=2,3\label{specialcase} \end{equation} where the first factor ($I_i$) acts on the qubit on which we want to apply the operation ($2$ or $3$), and $X_1\otimes \widetilde{Z}$ is the transformed (after the Hadamard gate $R_1$) stabilizer element of the ``cat" state that acts non-trivially on qubit $1$ (the factor $\widetilde{Z}$ acts on some other qubit in the ``cat" state). To implement a single-qubit gate on qubit $3$, for example, we first apply the interpolation \begin{equation} -I_{3}\otimes X_1\otimes \widetilde{Z}\rightarrow -Z_{3}\otimes Z_1\otimes \widetilde{Z}.\label{nexttolast} \end{equation} This results in a two-qubit geometric transformation $U_{1,3}$ on qubits $1$ and $3$. We do not have to calculate this transformation exactly since we will undo it later, but the fact that each eigenspace undergoes the same two-qubit geometric transformation can be verified similarly to the C-NOT gate we described in Section 8.4. At this point, the Hamiltonian is of the form \eqref{H(0)} with respect to qubit 3, and we can apply any single-qubit unitary gate $V_3$ according to the method described in Section 8.4. This transforms the Hamiltonian to $-V_3Z_{3}V_3^{\dagger}\otimes Z_1\otimes \widetilde{Z}$. 
We can now ``undo" the transformation $U_{1,3}$ by the interpolation \begin{equation} -V_3Z_{3}V_{3}^{\dagger}\otimes Z_1\otimes \widetilde{Z}\rightarrow-I_{3}\otimes X_1\otimes \widetilde{Z}.\label{lastHamiltonian} \end{equation} The latter transformation is the inverse of Eq.~\eqref{nexttolast} up to the single-qubit unitary transformation $V_3$, i.e., it results in the transformation $V_3U^{\dagger}_{1,3}V^{\dagger}_{3}$. Thus the net result is \begin{equation} V_3U_{1,3}^{\dagger}V_3^{\dagger}V_3U_{1,3}=V_3, \end{equation} which is the desired single-qubit unitary transformation on qubit $3$. We point out that during this transformation, a single-qubit error can propagate between qubits $1$ and $3$, but this is not a problem since we are implementing a transversal Toffoli operation and such an error would not result in more than one error per block of the code. We showed that for the Bacon-Shor code our scheme can be implemented with at most 3-local Hamiltonians. This is optimal for the construction we presented, since there are no non-trivial codes with stabilizer or gauge-group elements of weight smaller than 2 covering all qubits. One could argue that since the only Hamiltonians that leave the code space invariant are superpositions of elements of the stabilizer or the gauge group, one cannot do better than this. However, it may be possible to approximate the necessary Hamiltonians with sufficient precision using 2-local interactions. A possible direction to consider in this respect is the technique introduced in Ref.~\cite{KKR06} for approximating three-local Hamiltonians by two-local ones. This is left as a problem for future investigation. \section*{8.7 \hspace{2pt} Conclusion} \addcontentsline{toc}{section}{8.7 \hspace{0.15cm} Conclusion} We described a scheme for fault-tolerant holonomic computation on stabilizer codes, which demonstrates that HQC is a scalable method of computation. The scheme opens the possibility of combining the software protection of error correction with the inherent robustness of HQC against control imperfections. Our construction uses Hamiltonians that are elements of the stabilizer or the gauge group for the code. The Hamiltonians needed for implementing two-qubit gates are at least 3-local. We have shown that computation with at most 3-local Hamiltonians is possible with the Bacon-Shor code. It is interesting to point out that the adiabatic regime in which our scheme operates is consistent with the model of Markovian decoherence. In Ref.~\cite{ALZ06} it was argued that the standard dynamical paradigm of fault tolerance is based on assumptions that are in conflict with the rigorous derivation of the Markovian limit. Although the threshold theorem has been extended to non-Markovian models \cite{TB05, AGP06, AKP06}, the Markovian assumption is an accurate approximation for a wide range of physical scenarios \cite{Car99}. It also allows for a much simpler description of the evolution in comparison to non-Markovian models, as we saw in Chapter 5. In Ref.~\cite{ALZ06} it was shown that the weak-coupling-limit derivation of the Markovian approximation is consistent with computational methods that employ slow transformations, such as adiabatic quantum computation \cite{FGGS00} or HQC. A theory of fault-tolerance for the adiabatic model of computation is not known at present, although significant steps in this direction have been undertaken \cite{JFS06, Lid07}. Our hybrid HQC-QEC scheme provides a solution for the case of HQC. 
We point out, however, that it is an open problem whether the Markovian approximation makes sense for a fixed value of the adiabatic slowness parameter when the circuit increases in size. Applying the present strategy to actual physical systems might require modifying our abstract construction in accordance with the available interactions, possibly using superpositions of stabilizer or gauge-group elements rather than single elements as the basic Hamiltonians. Given that simple QEC codes and two-qubit geometric transformations have been realized using NMR \cite{NMR1, NMR2} and ion-trap \cite{Tra1, Tra2} techniques, these systems seem particularly suitable for hybrid HQC-QEC implementations. We hope that the techniques presented in this study might prove useful in other areas as well. It is possible that some combination of transversal adiabatic transformations and active correction could provide a solution to the problem of fault tolerance in the adiabatic model of computation. \section*{8.8 \hspace{2pt} Appendix: Calculating the holonomy for the $Z$ gate} \addcontentsline{toc}{section}{8.8 \hspace{0.15cm} Appendix: Calculating the holonomy for the $Z$ gate} \subsection*{8.8.1 \hspace{2pt} Linear interpolation} \addcontentsline{toc}{subsection}{8.8.1 \hspace{0.15cm} Linear interpolation} We first demonstrate how to calculate the ground-space holonomy for the $Z$ gate for the case of linear interpolation along each segment of the path, i.e., when $f(t)$ and $g(t)$ in Eq.~\eqref{interpolation} are \begin{equation} f(t)=1-\frac{t}{T}, \hspace{0.4cm}g(t)=\frac{t}{T}. \end{equation} In order to calculate the holonomy \eqref{holonomy} corresponding to our construction of the $Z$ gate, we need to define a \textit{single-valued} orthonormal basis of the ground space of the Hamiltonian along the loop described by Eq.~\eqref{Zgate}. Since the Hamiltonian has the form \eqref{Ham2} at all times, it is convenient to choose a basis of the form \begin{gather} |j k; \lambda\rangle = |\chi_j(\lambda)\rangle|\widetilde\psi_{j k}\rangle,\\ \hspace{0.2cm} j=0,1; \hspace{0.2cm} k=1,...,2^{n-2}\notag\hspace{0.2cm}, \end{gather} where $|\chi_0(\lambda(t))\rangle$ and $|\chi_1(\lambda(t))\rangle$ are ground and excited states of $H(t)$, and $|\widetilde\psi_{0k}\rangle$ and $|\widetilde\psi_{1k}\rangle$ are fixed orthonormal bases of the subspaces that support the projectors $\widetilde{P}_0$ and $\widetilde{P}_1$ defined in Eq.~\eqref{Projectors}, respectively. The eigenstates $|\chi_0(\lambda(t))\rangle$ and $|\chi_1(\lambda(t))\rangle$ are defined up to an overall phase, but we have to choose the phase such that the states are single-valued along the loop. Observe that because of this choice of basis, the matrix elements \eqref{matrixelementsA} become \begin{gather} ({A_\mu})_{jk, j'k'}=\langle jk; \lambda| \frac{\partial}{\partial \lambda^{\mu}}|j'k'; \lambda\rangle=\langle \chi_j(\lambda)|\frac{\partial}{\partial\lambda^\mu}|\chi_{j'}(\lambda)\rangle\notag\\ \times\langle\widetilde{\psi}_{jk}|\widetilde{\psi}_{j'k'}\rangle =\langle \chi_j(\lambda)|\frac{\partial}{\partial\lambda^\mu}|\chi_{j'}(\lambda)\rangle \delta_{jj'}\delta_{kk'}, \end{gather} i.e., the matrix $A_\mu$ is diagonal. (Since we are looking only at the ground space, we are not writing the index of the energy level). We can therefore drop the path-ordering operator. 
The resulting unitary matrix $U^{\gamma}_{jk,j'k'}$ acting on the subspace spanned by $\{|jk;\lambda(0)\rangle\}$ is also diagonal and its diagonal elements are \begin{equation} U^{\gamma}_{jk,jk}=\textrm{exp}\left(\oint_{\gamma}\langle \chi_j(\lambda)|\frac{\partial}{\partial\lambda^\mu}|\chi_{j}(\lambda)\rangle d\lambda^\mu\right). \end{equation} These are precisely the Berry phases for the loops described by the states $|\chi_{j}(\lambda)\rangle$. Since the loop in parameter space consists of four line segments, we can write the last expression as \begin{equation} U^{\gamma}_{jk,jk}=\textrm{exp}\left(\sum_{i=1}^4 \int_{\gamma_i} \langle \chi_j(\lambda)|\frac{\partial}{\partial\lambda^\mu}|\chi_{j}(\lambda)\rangle d\lambda^\mu\right), \end{equation} where $\gamma_i$, $i=1,2,3,4$ are the segments indexed in the order corresponding to Eq.~\eqref{Zgate}. If we parameterize each line segment by the dimensionless time $0\leq s \leq 1$, we get \begin{equation} U^{\gamma}_{jk,jk}=\textrm{exp}\left(\sum_{i=1}^4 \int_0^1 \langle \chi_j^i(s)|\frac{d}{ds}|\chi_{j}^i(s)\rangle ds\right),\label{diagonalU} \end{equation} where the superscript $i$ in $|\chi_{j}^i(s)\rangle$ indicates the segment. In the $|0\rangle$, $|1\rangle$ basis, we will write these states as \begin{equation} |\chi_j^i(s)\rangle=\begin{pmatrix} a^i_j(s)\\ b^i_j(s) \end{pmatrix}, \hspace{0.2cm} j=0,1\hspace{0.2cm},\hspace{0.2cm}i=1,2,3,4\hspace{0.2cm}, \end{equation} where $|a^i_j(s)|^2+|b^i_j(s)|^2=1$. Along the segment $\gamma_1$, the states $|\chi_0^1(s)\rangle$ and $|\chi_1^1(s)\rangle$ are the ground and excited states of the Hamiltonian \begin{equation} H_1(s)=(1-s)Z+sX. \end{equation} For these states we obtain \begin{gather} a^1_0(s)=\frac{(1-s+\sqrt{1-2s+2s^2})e^{i\omega^1_0(s)}}{\sqrt{2-4s+4s^2+(2-2s)\sqrt{1-2s+2s^2}}},\label{a30}\\ b^1_0(s)=\frac{s e^{i\omega^1_0(s)}}{\sqrt{2-4s+4s^2+(2-2s)\sqrt{1-2s+2s^2}}},\label{b30}\\ a^1_1(s)=\frac{(1-s-\sqrt{1-2s+2s^2})e^{i\omega^1_1(s)}}{\sqrt{2-4s+4s^2-(2-2s)\sqrt{1-2s+2s^2}}},\label{a31}\\ b^1_1(s)=\frac{se^{i\omega^1_1(s)}}{\sqrt{2-4s+4s^2-(2-2s)\sqrt{1-2s+2s^2}}},\label{b31} \end{gather} where $\omega^1_j(s)$ are arbitrary phases which have to be chosen so that when we complete the loop, the phases of the corresponding states will return to their initial values modulo $2\pi$. We will define the loops as interpolating between the following intermediate states defined with their overall phases: \begin{gather} |\psi_0(\lambda)\rangle: \hspace{0.2cm}|0\rangle \rightarrow |f_+^0\rangle \rightarrow |1\rangle \rightarrow |f_-^{\pi/2}\rangle \rightarrow |0\rangle,\\ |\psi_1(\lambda)\rangle: \hspace{0.2cm}|1\rangle \rightarrow |f_-^0\rangle \rightarrow |0\rangle \rightarrow |f_+^{\pi/2}\rangle \rightarrow |1\rangle, \end{gather} where \begin{equation} |f_{\pm}^\theta\rangle =\frac{|0\rangle \pm e^{i\theta}|1\rangle}{\sqrt{2}}. \end{equation} In other words, we impose the conditions $|\chi_{0,1}^1(0)\rangle=|0,1\rangle$, $|\chi_{0,1}^1(1)\rangle=|f_{\pm}^0\rangle = |\chi_{0,1}^2(0)\rangle$, $|\chi_{0,1}^2(1)\rangle =|1,0\rangle =|\chi_{0,1}^3(0)\rangle$, $|\chi_{0,1}^3(1)\rangle=|f_{\mp}^{\pi/2}\rangle=|\chi_{0,1}^4(0)\rangle$, $|\chi_{0,1}^4(1)\rangle=|0,1\rangle$. From Eq.~\eqref{a30} and Eq.~\eqref{b30} we see that $a^1_0(0)=e^{i\omega^1_0(0)}$, $b^1_0(0)=0$ and $a^1_0(1)=\frac{1}{\sqrt{2}}e^{i\omega^1_0(1)}$, $b^1_0(1)=\frac{1}{\sqrt{2}}e^{i\omega^1_0(1)}$, so we can choose \begin{equation} \omega^1_0(s)=0,\hspace{0.2cm} \forall s\in [0,1]. 
\end{equation} Similarly, from Eq.~\eqref{a31} and Eq.~\eqref{b31} it can be seen that $a^1_1(0)=0$, $b^1_1(0)=e^{i\omega^1_1(0)}$ and $a^1_1(1)=-\frac{1}{\sqrt{2}}e^{i\omega^1_1(1)}$, $b^1_1(1)=\frac{1}{\sqrt{2}}e^{i\omega^1_1(1)}$. This means that $\omega^1_1(s)$ has to satisfy $e^{i\omega^1_1(0)}=1$, $e^{i\omega^1_1(1)}=-1$. We can choose any differentiable $\omega^1_1(s)$ that satisfies \begin{equation} \omega^1_1(0)=0, \hspace{0.2cm} \omega^1_1(1)=\pi. \end{equation} In order to calculate $\int_0^1 \langle \chi_j^1(s)|\frac{d}{ds}|\chi_{j}^1(s)\rangle ds$, we also need \begin{equation} \frac{d}{ds}|\chi_{j}^1(s)\rangle= \begin{pmatrix} \frac{d}{ds}a^1_j(s)\\ \frac{d}{ds}b^1_j(s) \end{pmatrix}. \end{equation} Differentiating Eqs.~\eqref{a30}-\eqref{b31} yields \begin{gather} \frac{d}{ds}a^1_0(s)=-\frac{s(1-s+\sqrt{1-2s+2s^2})}{2\sqrt{2-4s+4s^2}[1-2s+2s^2+(1-s)\sqrt{1-2s+2s^2}]^{\frac{3}{2}}},\\ \frac{d}{ds}b^1_0(s)=\frac{2-4s+3s^2+(2-2s)\sqrt{1-2s+2s^2}}{2\sqrt{2-4s+4s^2}[1-2s+2s^2+(1-s)\sqrt{1-2s+2s^2}]^{\frac{3}{2}}},\\ \frac{d}{ds}a^1_1(s)=-\frac{s(1-s-\sqrt{1-2s+2s^2})e^{i\omega^1_1(s)}}{2\sqrt{2-4s+4s^2}[1-2s+2s^2-(1-s)\sqrt{1-2s+2s^2}]^{\frac{3}{2}}} +a^1_1(s)i\frac{d}{ds}\omega^1_1(s),\\ \frac{d}{ds}b^1_1(s)=-\frac{(2-4s+3s^2-(2-2s)\sqrt{1-2s+2s^2})e^{i\omega^1_1(s)}}{2\sqrt{2-4s+4s^2}[1-2s+2s^2-(1-s)\sqrt{1-2s+2s^2}]^{\frac{3}{2}}} +b^1_1(s)i\frac{d}{ds}\omega^1_1(s). \end{gather} By a straightforward substitution, we obtain \begin{eqnarray} \langle \chi_0^1(s)|\frac{d}{ds}|\chi_{0}^1(s)\rangle &=&a^{1\ast}_0(s)\frac{d}{ds}a^{1}_0(s)+b^{1\ast}_0(s)\frac{d}{ds}b^1_0(s)=0,\\ \langle \chi_1^1(s)|\frac{d}{ds}|\chi_{1}^1(s)\rangle &=&a^{1\ast}_1(s)\frac{d}{ds}a^1_1(s)+b^{1\ast}_1(s)\frac{d}{ds}b^1_1(s)=i\frac{d}{ds}\omega^1_1(s). \end{eqnarray} Thus the integrals are \begin{eqnarray} \int_0^1\langle\chi_0^1(s)|\frac{d}{ds}|\chi_{0}^1(s)\rangle ds&=&0,\\ \int_0^1\langle\chi_1^1(s)|\frac{d}{ds}|\chi_{1}^1(s)\rangle ds&=&i\omega^1_1(s)|_0^1=i\pi. \end{eqnarray} In the same manner, we calculate the contributions of the other three line segments. The results are: \begin{eqnarray} \int_0^1\langle\chi_0^2(s)|\frac{d}{ds}|\chi_{0}^2(s)\rangle ds&=&0,\\ \int_0^1\langle\chi_1^2(s)|\frac{d}{ds}|\chi_{1}^2(s)\rangle ds&=&0, \end{eqnarray} \begin{eqnarray} \int_0^1\langle\chi_0^3(s)|\frac{d}{ds}|\chi_{0}^3(s)\rangle ds&=&i\frac{\pi}{2},\\ \int_0^1\langle\chi_1^3(s)|\frac{d}{ds}|\chi_{1}^3(s)\rangle ds&=&0, \end{eqnarray} \begin{eqnarray} \int_0^1\langle\chi_0^4(s)|\frac{d}{ds}|\chi_{0}^4(s)\rangle ds&=&0,\\ \int_0^1\langle\chi_1^4(s)|\frac{d}{ds}|\chi_{1}^4(s)\rangle ds&=&i\frac{\pi}{2}. \end{eqnarray} Putting everything together, for the diagonal elements of the holonomy we obtain \begin{gather} U^{\gamma}_{0k,0k}=e^{i\frac{\pi}{2}},\notag\\ U^{\gamma}_{1k,1k}=e^{i\frac{3\pi}{2}}.\label{finalholonZ} \end{gather} The holonomy transforms any state in the ground space of the initial Hamiltonian as \begin{equation} U^{\gamma}\sum_{jk}\alpha_{jk}|j\rangle|\widetilde{\psi}_{jk}\rangle = e^{i\frac{\pi}{2}}\sum_{jk}(-1)^j\alpha_{jk}|j\rangle|\widetilde{\psi}_{jk}\rangle, \hspace{0.2cm} j=0,1. \end{equation} From the point of view of the full Hilbert space, this is effectively a $Z$ gate on the first qubit up to an overall phase. We point out that other single-qubit transformations like the Hadamard or the $X$ gates, which do not form a complete loop in parameter space, can be obtained in a similar fashion by calculating the open-path expression \eqref{openpathholonomy}. 
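The explicit expressions \eqref{a30}--\eqref{b31} and the vanishing of the Berry connection of the $|\chi_0^1\rangle$ branch in the chosen gauge can also be checked numerically. The following short Python sketch (an illustration of ours, assuming \texttt{numpy}; it is not part of the derivation) verifies that the states above are eigenstates of $H_1(s)=(1-s)Z+sX$ with eigenvalues $\pm\sqrt{1-2s+2s^2}$ and that $\langle\chi_0^1(s)|\frac{d}{ds}|\chi_0^1(s)\rangle=0$:
\begin{verbatim}
import numpy as np

def chi0(s):                       # Eqs. (a30)-(b30) with omega^1_0(s) = 0
    E = np.sqrt(1 - 2*s + 2*s**2)
    n = np.sqrt(2 - 4*s + 4*s**2 + (2 - 2*s)*E)
    return np.array([(1 - s + E) / n, s / n])

def chi1(s):                       # Eqs. (a31)-(b31), evaluated here with omega^1_1(s) = 0
    E = np.sqrt(1 - 2*s + 2*s**2)
    n = np.sqrt(2 - 4*s + 4*s**2 - (2 - 2*s)*E)
    return np.array([(1 - s - E) / n, s / n])

Z = np.diag([1., -1.]); X = np.array([[0., 1.], [1., 0.]])
for s in np.linspace(0.05, 0.95, 7):
    H = (1 - s)*Z + s*X
    E = np.sqrt(1 - 2*s + 2*s**2)
    assert np.allclose(H @ chi0(s),  E*chi0(s))
    assert np.allclose(H @ chi1(s), -E*chi1(s))
    h = 1e-6                       # Berry connection of chi_0 by central differences
    conn = chi0(s) @ (chi0(s + h) - chi0(s - h)) / (2*h)
    assert abs(conn) < 1e-6
print("segment-1 eigenstates and Berry connection check out")
\end{verbatim}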
In principle, the result of that calculation depends on the choice of basis $\{|\alpha; \lambda\rangle \}$ which is defined up to a unitary gauge transformation. However, this ambiguity is removed by the notion of parallel transport between the initial and the final subspaces \cite{KAS06}. One can verify that this yields the correct result for our transformations. \subsection*{8.8.2 \hspace{2pt} Unitary interpolation} \addcontentsline{toc}{subsection}{8.8.2 \hspace{0.15cm} Unitary interpolation} The calculation is simpler if we choose a unitary interpolation, \begin{equation} f(t)=\cos{\frac{\pi t}{2 T}},\hspace{0.4cm} g(t)=\sin {\frac{\pi t}{2 T}}. \end{equation} Such an interpolation corresponds to a rotation of the Bloch sphere around a particular axis for each of the segments of the loop. The first two segments of the loop \eqref{Zgate} are realized via the Hamiltonian \begin{equation} \widehat{H}_{1,2}(t)=-V_{Y}^{\dagger}(t)ZV_{Y}(t)\otimes \widetilde{G}, \hspace{0.2cm} V_{Y}(t)=\textrm{exp} \left(it\frac{\pi }{2T}Y\right),\label{HZ1} \end{equation} applied for time $T$, and the third and fourth segments are realized via the Hamiltonian \begin{equation} \widehat{H}_{3,4}(t)=-V_{X}(t)ZV_{X}^{\dagger}(t)\otimes \widetilde{G}, \hspace{0.2cm} V_{X}(t)=\textrm{exp} \left(it\frac{\pi }{2T}X\right),\label{HZ2} \end{equation} again applied for time $T$. Let us define the eigenstates of the Hamiltonian along the first two segments as \begin{equation} |\chi^{1,2}_0(t)\rangle = V_{Y}^{\dagger}(t)|0\rangle, \hspace{0.2cm} |\chi^{1,2}_1(t)\rangle = V_{Y}^{\dagger}(t)|1\rangle, \hspace{0.2cm}0\leq t\leq T \end{equation} and along the third and fourth segments as \begin{equation} |\chi^{3,4}_0(t)\rangle = -iV^{\dagger}_Y(t)Y|0\rangle, \hspace{0.2cm} |\chi^{3,4}_1(t)\rangle =-i V^{\dagger}_{Y}(t)Y|1\rangle, \hspace{0.2cm}0\leq t\leq T. \end{equation} Notice that \begin{equation} |\chi^{1,2}_0(T)\rangle=-iY|0\rangle=|\chi^{3,4}_0(0)\rangle, \hspace{0.2cm} |\chi^{1,2}_1(T)\rangle=-iY|1\rangle=|\chi^{3,4}_1(0)\rangle, \end{equation} but \begin{equation} |\chi^{1,2}_0(0)\rangle=|0\rangle \neq |\chi^{3,4}_0(T)\rangle = -i|0\rangle, \hspace{0.2cm} |\chi^{1,2}_1(0)\rangle=|1\rangle \neq |\chi^{3,4}_1(T)\rangle = i|1\rangle, \end{equation} i.e., this basis is not single-valued. To make it single-valued, we can modify it along the third and fourth segments as \begin{equation} |\chi^{3,4}_0(t)\rangle\rightarrow |\widetilde{\chi}^{3,4}_0(t)\rangle = e^{i\omega_0(t)}|\chi^{3,4}_0(t)\rangle, \hspace{0.2cm} |\chi^{3,4}_1(t)\rangle\rightarrow |\widetilde{\chi}^{3,4}_1(t)\rangle = e^{i\omega_1(t)}|\chi^{3,4}_1(t)\rangle, \end{equation} where \begin{equation} \omega_0(0)=0, \hspace{0.2cm} \omega_0(T)=\frac{\pi}{2}, \end{equation} \begin{equation} \omega_1(0)=0, \hspace{0.2cm} \omega_1(T)=-\frac{\pi}{2}. \end{equation} The expression \eqref{diagonalU} then becomes \begin{gather} U^{\gamma}_{jk,jk}=\textrm{exp}\left(\int_0^T \langle \chi_j^{1,2}(t)|\frac{d}{dt}|\chi_{j}^{1,2}(t)\rangle dt + \int_0^T \langle \chi_j^{3,4}(t)|\frac{d}{dt}|\chi_{j}^{3,4}(t)\rangle dt + (-1)^j\frac{\pi}{2}\right), \notag\\\hspace{0.2cm} j=0,1. \end{gather} But \begin{equation} \langle \chi_j^{1,2}(t)|\frac{d}{dt}|\chi_{j}^{1,2}(t)\rangle = -i\frac{\pi }{2T}\langle j|Y|j\rangle =0, \end{equation} and \begin{equation} \langle \chi_j^{3,4}(t)|\frac{d}{dt}|\chi_{j}^{3,4}(t)\rangle = i\frac{\pi }{2T}\langle j|YXY|j\rangle =0. \end{equation} Therefore, we obtain \eqref{finalholonZ}. 
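As an independent numerical cross-check of Eq.~\eqref{finalholonZ} (an illustration of ours, not part of the derivation above), one can evaluate the gauge-invariant, discretized overlap-product approximation to the Berry phase of each branch around the loop. We assume here that the loop interpolates linearly between the vertex Hamiltonians $Z\rightarrow X\rightarrow -Z\rightarrow -Y\rightarrow Z$, which is consistent with the segment-1 Hamiltonian $(1-s)Z+sX$ and the intermediate states listed in the previous subsection; the two branches should then acquire opposite phases of magnitude $\pi/2$, i.e., the holonomy is proportional to $Z$.
\begin{verbatim}
import numpy as np

X = np.array([[0., 1.], [1., 0.]], dtype=complex)
Y = np.array([[0., -1j], [1j, 0.]])
Z = np.diag([1. + 0j, -1.])

verts = [Z, X, -Z, -Y, Z]          # assumed vertex Hamiltonians of the loop
N = 500                            # discretization points per segment
hams = [(1 - s)*A + s*B for A, B in zip(verts[:-1], verts[1:])
        for s in np.linspace(0., 1., N, endpoint=False)]
hams.append(verts[-1])             # close the loop (identical to hams[0])

def branch_phase(idx):             # idx = 1: upper branch (chi_0); idx = 0: lower branch
    vecs = [np.linalg.eigh(H)[1][:, idx] for H in hams]
    vecs[-1] = vecs[0]             # identical endpoint vector -> gauge-invariant product
    prod = 1. + 0j
    for u, v in zip(vecs[:-1], vecs[1:]):
        prod *= np.vdot(u, v)
    return np.angle(prod)

phi0, phi1 = branch_phase(1), branch_phase(0)
print(phi0, phi1)                  # opposite phases of magnitude pi/2
assert abs(abs(phi0) - np.pi/2) < 1e-2 and abs(phi0 + phi1) < 1e-2
\end{verbatim}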
\chapter*{Chapter 9: \hspace{1pt} Conclusion} \addcontentsline{toc}{chapter}{Chapter 9:\hspace{0.15cm} Conclusion} In this thesis we obtained various results in the theory of open quantum systems and quantum information. These results have opened interesting questions and suggested promising directions for future research. The decomposition into weak measurements presents a practical prescription for the implementation of any generalized measurement using weak measurements and feedback control. It also presents a powerful mathematical tool for the study of measurement processes. In practice, there may exist limitations on the type of weak measurements an experimenter can implement, and hence it would be interesting to look at the inverse problem---given a set of weak measurements, what are the generalized measurements that one can generate with them. It might be convenient to recast this problem in terms of the system-ancilla interactions that are available for the implementation of such measurements. The decomposition may prove useful in other problems involving feedback control as well. One of its interesting features is that the evolution that corresponds to it is confined on a specific manifold (the simplex). In that sense, the procedure avoids dissipation into areas from which the state could drift away from the desired outcomes. This property could be helpful in designing optimal feedback-control protocols. The decomposition into weak measurements furthermore suggests that it may be possible to find a unified description of measurement protocols. The operations applied at a given time during the measurement procedure for generating generalized measurements, drive the evolution of a stochastic process on the simplex, i.e., they can be represented by a stochastic matrix on the coordinate space. This suggests that there may exist a general coordinate space, which includes all such simplexes, on which the most general notion of a measurement protocol can be represented by a stochastic process. The basic object in such a description would not be a quantum state but a classical probability distribution on a space whose coordinates correspond to quantum states. Since stochastic processes are well understood, such a unified description could be useful for studies of measurement-driven schemes. We also used the decomposition into weak measurements for deriving necessary and sufficient conditions for entanglement monotones. These conditions may be useful for proving monotonicity of conjectured monotones, finding new classes of entanglement measures, or finding measures with particularly nice properties such as additivity. Another interesting possibility suggested by the existence of necessary and sufficient differential conditions for monotonicity under all types of CPTP transformations, is that it may be possible to think of all quantum operations as generated by infinitesimal operations. It is known that CPTP maps cannot be generated by weak CPTP maps, however, the differential form of the convexity condition can be thought of as a condition for monotonicity under infinitesimal loss of information. Therefore, if we adopt the approach in which the basic objects are ensembles of states and loss of classical information is a basic operation on these objects, it may be possible to arrive at a unified description of the most general form of quantum operations where every operation can be continuously connected to the identity. 
Our investigation of the deterministic evolution of open quantum systems and the difference between Markovian and non-Markovian decoherence has also opened various interesting questions. While we compared the performance of different perturbative master equations, we have not compared their solutions to the perturbative expansion of the exact solution. Studying these equations is important in its own right as it provides understanding of the actual dynamics driving the effective evolution. But for the purpose of obtaining an approximation of the exact solution starting from first principles, it may be more useful to expand the solution directly. Expanding the exact solution is justified in the same parameter regime---small $\alpha t$---and requires computation of the same bath-correlation functions, but it is significantly simpler since it does not require deriving an equation and solving it. As we mentioned in Chapter 5, the TCL or NZ projection techniques might be useful also for the effective description of the reduced dynamics of a system subject to non-Markovian decoherence and continuous error correction. Here too, it would be interesting to consider expanding the solution directly. We presented a generalized notion of a Zeno regime applicable for the problem of error correction and identified the bottle-neck mechanism through which the performance of the error-correction scheme depends on this regime. As the Zeno regime plays a central role in the workings of another error-correction approach---dynamical decoupling (DD) \cite{VKL99, FLP04}---it might be useful to apply the insights developed here in the design of hybrid EC-DD schemes. Another direction for future research is expanding our scheme for continuous error correction based on weak measurement and weak unitary operations to include more sophisticated feedback. Since making full use of the available information about the state can only help, we expect that this approach would lead to schemes with better performance. One of the problems suggested by our study of the conditions for exact correctability under continuous decoherence, was whether a similar approach to the one we used could be useful in studies concerning approximate error correction. A question we raised is whether the Markovian decoherence process during an infinitesimal time step can be separated into completely correctable and non-correctable parts. If this is the case, it could allow us to formulate conditions for optimal correctability by tracking the evolution of the maximal information that remains during the process. As we argued, for non-Markovian decoherence such an approach cannot be optimal since the information may flow out to the environment and later return back, but it could nevertheless be helpful for finding locally optimal solutions. Even if the answer to this question is negative, the differential approach in studying information loss certainly seems promising. One interesting extension of this work would be to derive conditions for correctability in the context of the TCL or NZ master equations. Another promising tool introduced in this thesis is the measure of fidelity for encoded information that we used to prove the robustness of operator error correction against imperfect encoding. As we pointed out, this measure provides a natural means of extending concepts such as the fidelity of a quantum channel and the entanglement fidelity to the case of subsystem codes. 
As subsystem encoding provides the most general method of encoding, this measure could also be useful in studies concerning optimal quantum error correction. Its simple form makes it suitable for computation which is important in this respect. Finally, our scheme for fault-tolerant holonomic computation has also opened a number of interesting questions. We have shown that for universal computation this scheme requires three-local Hamiltonians. It may be possible, however, to use perturbative techniques to approximate three-local Hamiltonians using two-local ones in a manner similar to the one introduced in Ref.~\cite{KKR06}. Another direction for future research is suggested by the fact that the gap of the adiabatic Hamiltonian provides a natural protection against those types of errors that lead to excitations. It is interesting whether it is possible to design more efficient error correction schemes that make use of this property. Another question is whether the holonomic approach could provide a solution to the problem of the inconsistency between the standard fault-tolerance assumptions and the rigorous derivation of the Markovian limit. Giving a definitive answer to this question requires a rigorous analysis of the accumulation of non-Markovian errors due to deviation from perfect adiabaticity. Our scheme for the Bacon-Shor code uses an approach to holonomic computation in which the Hamiltonian acts trivially on the subsystem code and non-trivially on the gauge subsystem. It would be interesting to formulate this approach as a general method for holonomic computation on subsystems. The techniques introduced in this study may also prove useful for the problem of fault tolerance in the adiabatic model of computation. It is possible that some combination of transversal adiabatic operations and active error correction could provide a solution for this case too. \end{document}
\begin{document} \title{On explicit factors of Cyclotomic polynomials over finite fields} \author{Liping Wang} \address{Center for Advanced Study, Tsinghua University, HaiDian District, Beijing(100084), China.} \email{[email protected]} \author{Qiang Wang} \address{School of Mathematics and Statistics, Carleton University, 1125 Colonel By Drive, Ottawa, Ontario, K1S 5B6, Canada.} \email{[email protected]} \thanks{Research is partially supported by NSERC of Canada. } \keywords{factorization, cyclotomic polynomials, irreducible polynomials, Dickson polynomials, finite fields} \subjclass[2000]{11T06, 11T55, 12Y05} \begin{abstract} We study the explicit factorization of $2^n r$-th cyclotomic polynomials over a finite field $\mathbb{F}_q$ where $q, r$ are odd with $(r, q) =1$. We show that all irreducible factors of $2^n r$-th cyclotomic polynomials can be obtained easily from irreducible factors of cyclotomic polynomials of small orders. In particular, we obtain the explicit factorization of $2^n 5$-th cyclotomic polynomials over finite fields and construct several classes of irreducible polynomials of degree $2^{n-2}$ with fewer than $5$ terms. The reciprocals of these irreducible polynomials are irreducible polynomials of the form $x^{2^{n-2}} + g(x)$ such that the degree of $g(x)$ is small ($\leq 4$), which could have potential applications as mentioned by Gao, Howell, and Panario in \cite{GaoHowellPanario}. \end{abstract} \maketitle \section{Introduction} Let $p$ be a prime, $q=p^m$, and $\mathbb{F}_q$ be a finite field of order $q$. Let $Q_{n}(x)$ denote the $n$-th cyclotomic polynomial \[ Q_{n}(x)= \prod_{0<j \leq n, (j,n)=1}(x- \zeta^j) \] where $\zeta$ is a primitive $n$-th root of unity. Clearly $x^n - 1 = \prod_{d|n} Q_d(x)$ and the M\"{o}bius inversion formula gives $Q_n(x) = \prod_{d|n} (x^d - 1)^{\mu(n/d)}$ where $\mu$ is the M\"{o}bius function. If $(q, n) =1$, then it is well known that $Q_n(x)$ can be factorized into $\phi(n)/d$ distinct monic irreducible polynomials of the same degree $d$ over $\mathbb{F}_q$, where $d$ is the least positive integer such that $q^d \equiv 1 \pmod{n}$ (see \cite[Theorem 2.47]{LN}). Hence the number and the degree of the irreducible factors of cyclotomic polynomials are known. However, factoring cyclotomic polynomials $Q_n(x)$ over a finite field $\mathbb{F}_q$ explicitly still remains a fundamental question. Moreover, it is also known that the explicit factorization of cyclotomic polynomials is related to the factorization of other interesting classes of polynomials. For example, Fitzgerald and Yucas \cite{FY1} have recently discovered a nice link between the factors of Dickson polynomials over finite fields and the factors of cyclotomic polynomials and self-reciprocal polynomials. This means that factoring cyclotomic polynomials explicitly provides an alternative way to factor Dickson polynomials explicitly. The explicit factorization of $2^n$-th cyclotomic polynomials $Q_{2^n}(x)$ over $\mathbb{F}_q$ is given in \cite{LN} when $q\equiv 1 \pmod{4}$ and in \cite{Meyn} when $q\equiv 3 \pmod{4}$. Recently, Fitzgerald and Yucas \cite{FY2} have studied explicit factors of $2^n r$-th cyclotomic polynomials $Q_{2^nr}(x)$ over the finite field $\mathbb{F}_q$, where $r$ is prime and $q\equiv \pm 1 \pmod{r}$, in order to obtain the explicit factorization of Dickson polynomials. This gives a complete answer to the explicit factorization of cyclotomic polynomials $Q_{2^n3}(x)$ and thus Dickson polynomials $D_{2^n 3}(x)$ of the first kind over $\mathbb{F}_q$. 
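As a quick computational illustration of these classical facts (this is an aside of ours, not part of the paper), the following Python/sympy sketch factors $Q_{20}(x)$ over $\mathbb{F}_{13}$ and checks the number and degree of its irreducible factors; anticipating the reduction proved in Section~\ref{methodology}, it also checks that every irreducible factor of $Q_{2^L 5}(x)$ over $\mathbb{F}_{13}$, with $L=v_2(13^4-1)=4$, lifts to an irreducible factor of $Q_{2^n 5}(x)$ under $x\mapsto x^{2^{n-L}}$:
\begin{verbatim}
from sympy import symbols, cyclotomic_poly, factor_list, totient, n_order, degree, Poly

x = symbols('x')
q = 13

# Q_20(x) over F_13 splits into phi(20)/d irreducible factors of degree d = ord_20(13) = 4
factors = factor_list(cyclotomic_poly(20, x), modulus=q)[1]
d = n_order(q, 20)
assert len(factors) == totient(20) // d
assert all(degree(f, x) == d for f, _ in factors)

# lifting: for r = 5 we have L = v_2(13^4 - 1) = 4; each irreducible factor of
# Q_{2^L*5}(x) over F_13 stays irreducible under x -> x^{2^(n-L)} and divides
# Q_{2^n*5}(x); we check this for n = 6
r, L, n = 5, 4, 6
Qbig = Poly(cyclotomic_poly(2**n * r, x), x, modulus=q)
for f, _ in factor_list(cyclotomic_poly(2**L * r, x), modulus=q)[1]:
    g = Poly(f.subs(x, x**(2**(n - L))), x, modulus=q)
    assert len(g.factor_list()[1]) == 1      # the lifted factor is irreducible
    assert Qbig.rem(g).is_zero               # and it divides Q_{2^n * 5}(x)
print("cyclotomic factorization checks passed")
\end{verbatim}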
The general situation for arbitrary $r$, however, remains open. Without loss of generality we assume that $(2r, q) =1$. In this paper, we reduce the problem of factorizing all $2^n r$-th cyclotomic polynomials over $\mathbb{F}_q$ to that of factorizing a finite number of lower degree cyclotomic polynomials over $\mathbb{F}_q$. In particular, we give the explicit factorization of cyclotomic polynomials $Q_{2^n r}(x)$ over $\mathbb{F}_q$ where $r=5$. The method we are using is a combination of case analysis, factorization of low degree polynomials, and a recursive construction based on basic properties of cyclotomic polynomials. The irreducible factors of these cyclotomic polynomials are sparse polynomials (polynomials with a few nonzero terms). Sparse irreducible polynomials are important in efficient hardware implementation of feedback shift registers and finite field arithmetic (\cite{Berlekamp}, \cite{GolombGong}, \cite{WangBlake}). The second focus of our paper is to explicitly construct sparse irreducible polynomials of high degree. We remark that the explicit construction of irreducible polynomials in general has attracted a lot of attention and much progress has been made in the past two decades. Most of these constructions are iterated constructions which extend the classical transformation $f(x) \rightarrow f(x^n)$. A nice survey on this topic as of year 2005 can be found in \cite{Cohen}. Here we are interested in sparse irreducible polynomials and the classical transformation is used. Therefore the main tool in the paper is the following classical result, which helps us to construct high degree irreducible polynomials from low degree irreducible polynomials. \begin{lemma}[Theorem 3.35 in \cite{LN}] \label{irred} Let $f_1(x)$, $f_2(x)$, \ldots, $f_N(x)$ be all distinct monic irreducible polynomials in $\mathbb{F}_q[x]$ of degree $m$ and order $e$, and let $t \geq 2$ be an integer whose prime factors divide $e$ but not $(q^m-1)/e$. Assume also that $q^m \equiv 1 \pmod{4}$ if $t \equiv 0 \pmod{4}$. Then $f_1(x^t), f_2(x^t), \ldots, f_N(x^t)$ are all distinct monic irreducible polynomials in $\mathbb{F}_q[x]$ of degree $mt$ and order $et$. \end{lemma} In Section~\ref{methodology}, we describe the methodology used in this paper to factor $Q_{2^n r}(x)$ over $\mathbb{F}_q$. We prove that all irreducible factors of $Q_{2^n r}(x)$ can be obtained easily from irreducible factors of $Q_{2^L r}(x)$ where $L$ is a small constant depending on $q$ and $r$ (Theorem~\ref{general}). This also provides us with a way to construct sparse irreducible polynomials of high degree over $\mathbb{F}_q$. We note that the result in this section is true for any odd $q, r$ such that $(q, r)=1$. Then the rest of the paper deals with $r=5$. In Section~\ref{old}, we obtain the factorization results of $Q_{2^n 5}(x)$ when $q \equiv \pm 1 \pmod{5}$. These results are not new and can also be found in \cite{FY2}. However, for the sake of completeness, we also include the proofs. As a consequence, we obtain several classes of irreducible binomials/trinomials of degree $2^{n-2}$ over $\mathbb{F}_q$ where $q \equiv \pm 1 \pmod{5}$. In Section~\ref{new1}, we consider the situation when $q\equiv 13, 17 \pmod{20}$. We obtain the explicit factorization of $Q_{2^n 5}(x)$ in Theorem~\ref{factor-mod-13-17}. Moreover, we can construct several classes of irreducible polynomials with five terms and degree $2^m$ (Corollary~\ref{irred-mod-13-17}). Then the case of $q\equiv 3 \pmod{20}$ is considered in Section~\ref{new2}. 
The factorization results are given in Theorem~\ref{factor-mod-3} for $q\equiv 3 \pmod{20}$ such that $q\neq 3$ and in Theorem~\ref{factor-3} for $q=3$. The irreducible polynomials constructed are trinomials and pentanomials (see Corollaries~\ref{irred-mod-3},~\ref{irred-3}). In Section~\ref{new3}, the case of $q \equiv 7 \pmod{20}$ is considered and the result can be found in Theorem~\ref{factor-mod-7}. We note that the reciprocals of sparse irreducible polynomials constructed in this paper can be written as the form of $x^n + g(x)$ where the degree of $g(x)$ is at most $4$. It is well known that irreducible polynomials of form $x^n + g(x)$ with $g(x)$ having a small degree are desirable in implementing pseudorandom number generators and in constructing elements of provable high orders in finite fields (see the survey paper \cite{GaoHowellPanario}). Therefore our irreducible polynomials might be useful in some of applications mentioned in \cite{GaoHowellPanario}. \section{Methodology and notations}\label{methodology} In this section, we describe the method that we are using in this paper. First of all we recall the following basic results on cyclotomic polynomials. \begin{lemma}\cite[Exercise~2.57]{LN} \label{cyclo} (a) $Q_{2n}(x) = Q_n(-x)$ for $n \geq 3$ and $n$ odd. (b) $Q_{mt}(x) = Q_{m}(x^t)$ for all positive integers $m$ that are divisible by the prime $t$. (c) $Q_{mt^k}(x) = Q_{mt}(x^{t^{k-1}})$ if $t$ is a prime and $m, k$ are arbitrary positive integers. \end{lemma} Let us start with the factorizations of $Q_r(x)$ and $Q_{2r}(x) = Q_r(-x)$. Because of Lemma~\ref{cyclo}, we have $Q_{2^n r} (x) = Q_{2^{n-1} r}(x^2)$ for $n \geq 2$. Hence the key to continue the process of factorization is to factor $Q_{2^{n-1} r}(x^2)$ into a product of irreducible polynomials once we obtain the factorization of $Q_{2^{n-1} r}(x)$. Now, we show that we can reach to the end after only a finite number of iterations. Let $v_2(k)$ denotes the highest power of $2$ dividing $k$ and $L_i = v_2(q^i -1)$ for $ i \geq 1$. In particular, let $L:= L_{\phi(r)}= v_2(q^{\phi(r)}-1)$ where $\phi$ is the Euler's phi function. Then we have the following result. \begin{theorem}\label{general} Let $q=p^m$ be a power of an odd prime $p$, let $r \geq 3$ be any odd number such that $(r, q) =1$, and let $L := L_{\phi(r)}= v_2(q^{\phi(r)}-1)$, the highest power of $2$ dividing $q^{\phi(r)} -1$ with $\phi(r)$ the Euler's phi function. For any $n\geq L$ and any irreducible factor $f(x)$ of $Q_{2^L r}(x)$ over $\mathbb{F}_q$, $f(x^{2^{n-L}})$ is also irreducible over $\mathbb{F}_q$. Moreover, all irreducible factors of $Q_{2^n r}(x)$ are obtained in this way. \end{theorem} \begin{proof} Because $q, r$ are odd, we have $\phi(r)$ is even and then $n \geq L \geq 2$. By \cite[Theorem 2.47]{LN}, $2^{L} r$-th cyclotomic polynomial $Q_{2^L r}(x)$ has $\phi(2^{L} r)/m$ distinct monic irreducible factors of the same degree $m$, where $m$ is the least positive integer such that $q^m \equiv 1 \pmod{2^{L} r}$. Because $q^{\phi(r)} \equiv 1 \pmod{r}$ and $L = v_2(q^{\phi(r)}-1)$, we have $q^{\phi(r)} \equiv 1 \pmod{2^L r}$. This implies that $m \leq \phi(r)$. By the definition of $L$, we must have $2^{L+1} \nmid (q^m -1)$. Since each factor has order $e= 2^{L} r$ and $2 \nmid (q^m -1)/e$, by Lemma~\ref{irred}, each irreducible polynomial $f(x)$ of $Q_{2^L r}(x)$ generates an irreducible factor $f(x^2)$ of $Q_{2^{L+1} r}(x)$. 
More generally, $f(x^{2^{n-L}})$ is also irreducible factor of $Q_{2^n r}(x)$ since $L \geq 2$ implies that $4 \mid q^{m} -1$. Moreover, $f(x^{2^{n-L}})$ has degree $m2^{(n-L)}$ and order $2^{n} r$. Hence there are $\phi(2^n r)/(m2^{n-L}) = 2^{n-1} \phi(r) / (m2^{n-L}) = \phi(2^L r)/m$ distinct irreducible factors for $Q_{2^n r}(x)$. Therefore all irreducible factors of $Q_{2^n r}(x)$ are constructed from irreducible factors of $Q_{2^L r}(x)$ over $\mathbb{F}_q$. \end{proof} Theorem~\ref{general} tells us that a recursive way of factoring $2^n r$-th cyclotomic polynomials essentially requires only finitely many factorizations of low degree polynomials (at most $L$ iterations starting from $Q_{r}(x)$). This also provides us a method to construct irreducible polynomials from low degree irreducible polynomials. The fact that we use the classical transformation on low degree polynomials can guarantee the resulting high degree irreducible polynomials are sparse polynomials. For $n < L$, since each irreducible factor $f(x)$ of $Q_{2^{n-1} r}(x)$ has the same degree $m$ and $f(x^2)$ may not be irreducible polynomial of degree $2m$, we need to factor $f(x^2)$ further. And in most cases, we need to factor $f(x^2)$ into two irreducible polynomials of degree $m$. We see more in detail for $r=5$ in the forthcoming sections. This involves the process of factoring certain types of polynomials of degree $8$ into two quartic polynomials. Finally we fix some other notations for the rest of the paper. Let $\Omega(k)$ denote the set of primitive $k$-th root of unity. In particular, $\Omega(2^0) = \{ 1\}$, $\Omega(2^1) = \{-1\}$. Let $\rho_n$ denote an arbitrary element in $\Omega(2^n)$. The expression $\prod_{a \in A} \cdots \prod_{b \in B} f_i(x, a, \ldots, b)$ denotes the product of distinct irreducible polynomials $f_i(x, a, \ldots, b)$ satisfying conditions $a\in A$, $\ldots$, $b \in B$. Let $\left( a \atop p \right)$ denote the Legendre symbol and the following basic results on Legendre symbols are also used in the paper. (i) $\left( 2 \atop p \right) = 1$ if and only if $p \equiv 1, 7 \pmod 8$; (ii) $\left( -2 \atop p \right) = 1$ if and only if $p \equiv 1, 3 \pmod 8$; (iii) $\left( 5 \atop p \right) = 1$ if and only if $p \equiv \pm 1, \pm 9 \pmod{20}$. \section{Case: $q\equiv \pm 1 \pmod{5}$} \label{old} Recall that $L_i = v_2(q^i -1)$, the highest power of $2$ dividing $q^i -1$ for $ i \geq 1$. If $q\equiv \pm 1 \pmod 5$ and $q\equiv 1 \pmod 4$ (i.e., $q \equiv 1 \pmod{20}$ or $q\equiv 9 \pmod{20}$), then $L_4 = L_2+1$ and $L_2 = L_1 + 1$. Moreover, $\rho_1 = -1$ must be a square and thus $\rho_2^2 = \rho_1$. Similarly, if $q\equiv \pm 1 \pmod 5$ and $q\equiv 3 \pmod 4$ (i.e., $q \equiv 11 \pmod{20}$ or $q\equiv 19 \pmod{20}$), then $L_4 = L_2+2$ and $L_2 = L_1 + 1$. Moreover, $\rho_1 = -1$ can not be a square. We have the following results for these four different cases. \begin{theorem} Let $q\equiv 1 \pmod{20}$. Then we have the following factorization of $2^n5$-th cyclotomic polynomial $Q_{2^n 5} (x)$ over $\mathbb{F}_q$. (i) $Q_{5}(x) = \prod_{w \in \Omega(5)} (x-w)$ and $Q_{10}(x) = \prod_{w \in \Omega(5)} (x+ w)$. (ii) If $2 \leq n \leq L_1$, then \[ Q_{2^n 5} (x) = \prod_{w \in \Omega(5)} \prod_{\rho_n \in \Omega(2^n)} \left( x - w \rho_n \right). \] (iii)if $n \geq L_2=L_1+1$, then \[ Q_{2^n 5} (x) = \prod_{w \in \Omega(5)} \prod_{\rho_{L_1} \in \Omega(2^{L_1})} \left( x^{2^{n-L_1}} - w \rho_{L_1} \right). 
\] \rmv{ (ii) if $n = L_2 = L_1 +1$, then \[ Q_{2^n 5} (x) = \prod_{w \in \Omega(5)} \prod_{\rho_{L_1} \in \Omega(2^{L_1})} \left( x^2 - w \rho_{L_1} \right). \] (iii) if $n = L_4 = L_2 +1$, then \[ Q_{2^n 5} (x) = \prod_{w \in \Omega(5)} \prod_{\rho_{L_1} \in \Omega(2^{L_1})} \left( x^4 - w \rho_{L_1} \right). \] (iv) if $n > L_4 = L$, then \[ Q_{2^n 5} (x) = Q_{2^L 5}(x^{2^{n-L}}) = \prod_{w \in \Omega(5)} \prod_{\rho_{L_1} \in \Omega(2^{L_1})} \left( x^{2^{n-L+2}} - w \rho_{L_1} \right). \] } \end{theorem} \begin{proof} In this case, $5 \mid q-1$. Hence $\Omega(5) \subseteq \mathbb{F}_q$. Moreover, $\Omega(2^n) \subseteq \mathbb{F}_q$ for all $n \leq L_1$. Moreover, if $n \leq L_1$ and $\rho_{n-1} \in \Omega(2^{n-1})$ then we can find $\rho_{n} \in \Omega(2^n) \subseteq \mathbb{F}_q$ such that $\rho_n^2 = \rho_{n-1}$. (i) Because $\rho_0 =1$ and $\rho_1 = -1$, it is obvious that $Q_5 (x) = \prod_{w \in \Omega(5)} \left( x - w \right) =\prod_{w \in \Omega(5)} \left( x - w \rho_0 \right)$ and $Q_{10}(x) = Q_5(-x) = \prod_{w \in \Omega(5)} \left( x + w \rho_0 \right)= \prod_{w \in \Omega(5)} \left( x - w \rho_1 \right)$. (ii) For $ 2\leq n \leq L_1$, we have $Q_{2^n 5}(x) = Q_{2^{n-1} 5}(x^2) = \prod_{w \in \Omega(5)} \prod_{\rho_{n-1} \in \Omega(2^{n-1})} \left( x^2 - w \rho_{n-1} \right)$ by the induction hypothesis and Lemma~\ref{cyclo}. Because each $w\in \Omega(5)$ can be written as $w=u^2$ where $u\in \Omega(5)$, we obtain that $x^2 - w \rho_{n-1} = x^2 - u^2 \rho_{n}^2 = (x-u \rho_{n})(x+ u \rho_{n})$ and both $\rho_{n}, -\rho_{n} \in \Omega(2^n)$. Hence, for $ 2\leq n \leq L_1$, we have \[ Q_{2^n 5}(x) = \prod_{u \in \Omega(5)} \prod_{\rho_{n} \in \Omega(2^{n})} \left( x - u \rho_{n} \right). \] (iii) Again, we have $Q_{2^{L_2} 5}(x) = Q_{2^{L_1} 5}(x^2) = \prod_{w \in \Omega(5)} \prod_{\rho_{L_1} \in \Omega(2^{L_1})} \left( x^2 - w \rho_{L_1} \right)$. Because $w$ is a square element and $\rho_{L_1}$ is a non-square element in $\mathbb{F}_q$, $x^2 - w \rho_{L_1}$ is an irreducible polynomial in $\mathbb{F}_q[x]$ with degree $2$ and order $2^{L_2}5$. Moreover, because $2 \nmid (q^2-1)/2^{L_2}5$, by Lemma~\ref{irred}, $x^4 - w \rho_{L_1}$ is also irreducible. Hence \[ Q_{2^{L_4} 5}(x) = Q_{2^{L_2} 5}(x^2) = \prod_{w \in \Omega(5)} \prod_{\rho_{L_1} \in \Omega(2^{L_1})} \left( x^4 - w \rho_{L_1} \right). \] In this case, $x^4 - w \rho_{L_1}$ is an irreducible polynomial of degree $4$ and order $2^{L_4} 5$. In general, for $n > L_4$, we have $2^{n-L_1} \equiv 0 \pmod{4}$ and $q^{4} -1 \equiv 0 \pmod{4}$. Because $2 \nmid (q^4 -1)/2^{L_4}5$, by Lemma~\ref{irred}, the polynomials $x^{2^{n-L_1}} - w \rho_{L_1}$ are irreducible over $\mathbb{F}_q$ and thus \[ Q_{2^n 5}(x) = Q_{2^{L_1} 5}(x^{2^{n-L_1}}) = \prod_{w \in \Omega(5)} \prod_{\rho_{L_1} \in \Omega(2^{L_1})} \left( x^{2^{n-L_1}} - w \rho_{L_1} \right). \] \rmv{ If $k$ is odd, then $L_1 = v_2(q-1) =2$ and thus $\rho$ is a non-square element in $\mathbb{F}_q$. Hence $\rho\rho_{L_1}$ is a square element (either $\rho_2^2$ or $-\rho_2^2$) in $\mathbb{F}_q$. Then $x^2 + w \rho \rho_{L_1}$ is not irreducible polynomial in $\mathbb{F}_q[x]$. 
Hence \begin{eqnarray*} Q_{2^{L_2} 5}(x) &=& Q_{2^{L_1} 5}(x^2) = Q_{2^2 5}(x^2) \\ &=& \prod_{w \in \Omega(5)} \prod_{\rho_{L_1} \in \Omega(2^{L_1})} \left( x^2 + w \rho \rho_{L_1} \right) \\ &=& \prod_{w \in \Omega(5)} \left( x^2 + w \rho_2^2 \right) \left( x^2 - w \rho_2^2 \right) \\ & =& \prod_{u \in \Omega(5)} \left( x + u \rho \rho_{2} \right) \left( x - u \rho \rho_{2} \right) \left( x + u \rho_{2} \right) \left( x - u \rho_{2} \right)\\ & =& \prod_{u \in \Omega(5)} \left( x + u \rho_{1} \right) \left( x - u \rho_{1} \right) \left( x + u \rho_{2} \right) \left( x - u \rho_{2} \right) \end{eqnarray*} Similarly, we obtain, for $n > L_2$, that $x^{2^{n-L_2}} + w \rho \rho_{L_1}$ is irreducible \[ Q_{2^n 5}(x) = Q_{2^{L_2} 5}(x^{2^{n-L_2}}) = \prod_{w \in \Omega(5)} \prod_{\rho_{L_1} \in \Omega(2^{L_1})} \left( x^{2^{n-L_1-1}} + w \rho \rho_{L_1} \right)\left( x^{2^{n-L_1-1}} - w \rho \rho_{L_1} \right). \] } \end{proof} \begin{theorem} Let $q = 20k + 11$ for some nonnegative integer $k$. Then we have the following factorization of $2^n5$-th cyclotomic polynomial $Q_{2^n 5} (x)$ over $\mathbb{F}_q$. (i) For $n=0, 1, 2$, we have \[ Q_{5} (x) = \prod_{w \in \Omega(5)} \left( x - w \right), ~ Q_{10} (x) = \prod_{w \in \Omega(5)} \left( x + w\right) , ~ Q_{20} (x) = \prod_{w \in \Omega(5)} \left( x^2 + w \right). \] (ii) if $n \geq L_2 = 3$, then \[ Q_{2^n 5} (x) = \left\{ \begin{array}{rr} \displaystyle{\prod_{w \in \Omega(5)}} \prod_{c^2 = -2} \left( x^{2^{n-2}} + cw x^{2^{n-3}} -w^2 \right) , & if~ $k$ ~is ~even; \\ & \\ \displaystyle{\prod_{w \in \Omega(5)}} \prod_{c^2 = 2} \left( x^{2^{n-2}} + cw x^{2^{n-3}} + w^2 \right), & if~$k$ ~is~ odd; \end{array} \right. \] \end{theorem} \begin{proof} In this case, $L_1 = v_2(q-1) = 1$, $L_2 = v_2(q^2-1) =3$, and $L_4 = L_2 +1 = 4$. It is obvious that $Q_{5} (x) = \prod_{w \in \Omega(5)} \left( x - w \right)$, and $Q_{10} (x) = Q_5(-x) = \prod_{w \in \Omega(5)} \left( x + w\right)$ because $5 \mid q-1$. Because $q\equiv 3 \pmod{4}$, $-1$ is a non-square in $\mathbb{F}_q$. Hence $x^2 + w$ is irreducible in $\mathbb{F}_q[x]$ and then $Q_{20} (x) = Q_{10}(x^2) = \prod_{w \in \Omega(5)} \left( x^2 + w\right)$. Now $Q_{40}(x) = Q_{20}(x^2) = \prod_{w \in \Omega(5)} \left( x^4 + w\right)$. Because $\gcd(40, q) =1$ and $q^2 \equiv 1 \pmod{40}$, by Theorem 2.47 in \cite{LN}, $Q_{40}(x)$ factors into $\phi(40)/2 = 12$ distinct monic quadratic irreducible polynomials in $\mathbb{F}_q[x]$. Hence $x^4 + w$ can be factorized into a product of two monic quadratic polynomials. Let $x^4 + w = (x^2 + ax + b)(x^2 +cx+d)$ where $a, b, c, d\in \mathbb{F}_q$. Comparing both sides, we have $a+c =0$, $b+d+ac=0$, $ad+bc=0$, and $bd =w$. Replacing $a$ by $-c$, we obtain $b+d -c^2 =0$, $(b-d)c =0$, and $bd=w$. Because $w^5 =1$, we can write $w=v^4$ where $v = w^{-1}$. Because $-1$ is a non-square, we can only have two possible solutions ($c\neq 0$ because, otherwise, $b^2 = -v^4$, a contradiction): (i) $b=d =v^2$, $a=-c$, and $c^2 = 2v^2$ if $2$ is a square; (ii) $b=d =-v^2$, $a=-c$, and $c^2=-2v^2$ if $-2$ is a square. We note that $q \equiv 4k+3 \pmod{8}$. Hence if $k$ is even, then $q\equiv 3 \pmod{8}$; otherwise, $q\equiv 7 \pmod{8}$. Moreover, $q\equiv 3 \pmod{8}$ implies that the characteristic $p$ of $\mathbb{F}_q$ also satisfies $p \equiv 3 \pmod{8}$, therefore $-2$ is a square in $\mathbb{F}_q$ if $k$ is even. Similarly, $2$ is a square in $\mathbb{F}_q$ if $k$ is odd. As $w$ ranges over $\Omega(5)$, $v$ also ranges over $\Omega(5)$. 
Hence we obtain \[ Q_{40} (x) = \left\{ \begin{array}{rr} \displaystyle{\prod_{w \in \Omega(5)}} \prod_{c^2 = -2} \left( x^{2} + cw x -w^2 \right) , & if~ $k$ ~is ~even; \\ & \\ \displaystyle{\prod_{w \in \Omega(5)}} \prod_{c^2 = 2} \left( x^{2} + cw x + w^2 \right), & if~$k$ ~is~ odd; \end{array} \right. \] Each $x^2 +cwx-w^2$ is an irreducible polynomial of degree $2$ and order $40$. If $n \geq 4$, then $2^{n-2} \equiv 0 \pmod{4}$ and $q^2 -1 \equiv 0 \pmod{4}$. Moreover, $2^{n-2} \nmid (q^2 -1)/40$. Hence by Lemma~\ref{irred}, we have that $x^{2^{n-2}} +cw x^{2^{n-3}} -w^2$ is irreducible and \[ Q_{2^n 5} (x) = Q_{2^3 5}(x^{2^{n-3}}) = \left\{ \begin{array}{rr} \displaystyle{\prod_{w \in \Omega(5)}} \prod_{c^2 = -2} \left( x^{2^{n-2}} + cw x^{2^{n-3}} -w^2 \right) , & if~ $k$ ~is ~even; \\ & \\ \displaystyle{\prod_{w \in \Omega(5)}} \prod_{c^2 = 2} \left( x^{2^{n-2}} + cw x^{2^{n-3}} + w^2 \right), & if~$k$ ~is~ odd; \end{array} \right. \] \end{proof} \begin{theorem} Let $q\equiv 9 \pmod{20}$ and $L_i = v_2(q^i-1)$ for $i \geq 1$. Then we have the following factorization of $2^n 5$-th cyclotomic polynomial $Q_{2^n 5} (x)$ over $\mathbb{F}_q$. (i) For $n=0, 1$, we have \[ Q_{5} (x) = \prod_{a = w + w^{-1} \atop{w \in \Omega(5)}} \left( x^2 - a x + 1 \right), ~ Q_{10} (x) = \prod_{a = w + w^{-1} \atop{w \in \Omega(5)}} \left( x^2 + a x + 1 \right). \] (ii) if $2 \leq n < L_2$, then \[ Q_{2^n 5} (x) = \prod_{a_n = \rho_{n} (w + w^{-1}) \atop{w \in \Omega(5)}} \prod_{\rho_{n} \in \Omega(2^{n})} \left( x^2 + a_n x + \rho_{n-1} \right). \] (iii) if $n \geq L_2 $, then \[ Q_{2^n 5} (x) = \prod_{a_{L_2}^2 = 2\rho_{L_1} - a_{L_1}} \prod_{ a_{L_1} = \rho_{L_1}(w + w^{-1}) \atop{w \in \Omega(5)}} \prod_{\rho_{L_1} \in \Omega(2^{L_1})} \left( x^2 + a_{L_2} x + \rho_{L_1} \right) \] \end{theorem} \begin{proof} Let $q =20k + 9$. If $k$ is odd, then $L_1 =2$. If $k$ is even, then $L_1 =3$. In this case, $L_2 = L_1+1$, and $L_4 = L_2 +1$. Because $5 \nmid q-1$, $w \in \Omega(5)$ implies $w \not\in \mathbb{F}_q$. However, $5 \mid q+1$ implies $a=w+w^{-1} = w+w^{q} \in \mathbb{F}_q$. Hence $ Q_{5}(x) = \displaystyle{ \prod_{a = w + w^{-1} \atop{w \in \Omega(5)}} \left( x^2 - a x + 1 \right) }$ and $Q_{10} (x) = Q_5(-x) = \displaystyle{ \prod_{a = w + w^{-1} \atop{w \in \Omega(5)}} \left( x^2 + a x + 1 \right)} = \displaystyle{ \prod_{a = w + w^{-1} \atop{w \in \Omega(5)}} \left( x^2 + a x + \rho_0 \right)} $. Then $Q_{20}(x) = Q_{10}(x^2) = \displaystyle{ \prod_{a = w + w^{-1} \atop{w \in \Omega(5)}} \left( x^4 + a x^2 + \rho_{0} \right)}$. Again, $q^2 \equiv 1 \pmod{2^{L_1} 5}$ and Theorem~2.47 in \cite{LN} imply that $Q_{2^n 5}(x)$ factors into distinct monic quadratic polynomials. Let $a_1 =a$. In order to factor $Q_{20}(x)$, we need to factor $x^4 + a_{1} x^2 + \rho_{0}$ into monic quadratic irreducible polynomials. Let $x^4 + a_{1}x^2 + \rho_{0} = (x^2 + b x+c)(x^2 + dx + e)$ where $b, c, d, e \in \mathbb{F}_q$. Then we obtain \begin{eqnarray*} b+d &=&0\\ c+e+bd &=& a_{1}\\ be+cd &=&0\\ ce &=& \rho_{0} \end{eqnarray*} Hence $b=-d$. Continue to solve the above system, either we have $b=-d =0$ or $e=c$. If $b=-d =0$, then $c$ satisfies that $c^2 -ac+1 =0$, contradicts to that $x^2 -ax +1$ is irreducible. Hence $e=c$. Let $x^4 + a_{1} x^2 +\rho_{0} = (x^2 + a_{2} x + c)(x^2-a_2 x+ c)$. Therefore $e=c = \pm \rho_{1} \in \mathbb{F}_q$ and $a_2^2 = \pm 2\rho_{1} -a_{1}$. 
Since we can verify directly that the elements $a_2= \rho_{2} (w^3+w^{-3}) \in \mathbb{F}_q$ are solutions to $a_2^2 = 2\rho_{1} -a_{1}$, where $\rho_2^2 = \rho_1$, we obtain
\[
Q_{20} (x) = \prod_{a_2 = \rho_{2} (w + w^{-1}) \atop{w \in \Omega(5)}} \prod_{\rho_{2} \in \Omega(2^{2})} \left( x^2 + a_2 x + \rho_{1} \right).
\]
If $k$ is odd, then
\[
Q_{40} (x) = Q_{20}(x^2) = \prod_{a_2 = \rho_{2} (w + w^{-1}) \atop{w \in \Omega(5)}} \prod_{\rho_{2} \in \Omega(2^{2})} \left( x^4 + a_2 x^2 + \rho_{1} \right).
\]
Similarly, let $\left( x^4 + a_2 x^2 + \rho_{1} \right)=\left( x^2 + a_{3} x + \rho_{2} \right) \left( x^2 - a_3 x + \rho_{2} \right)$ where $a_3, \rho_2 \in \mathbb{F}_q$. Hence $a_3^2 = 2\rho_2 - a_2$. Because $ 2\rho_2 -\rho_2(w+w^{-1}) = - \rho_2 (w^3-w^{-3})^2 \in \mathbb{F}_q$ and both $\rho_2$ and $- (w^3-w^{-3})^2$ are non-square elements in $\mathbb{F}_q$, there exists $a_3 \in \mathbb{F}_q$ such that $a_3^2 = 2\rho_2 -a_2$. Hence
\[
Q_{40} (x) = \prod_{a_3^2 = 2 \rho_2 - a_2} \prod_{a_2 = \rho_{2} (w + w^{-1}) \atop{w \in \Omega(5)}} \prod_{\rho_{2} \in \Omega(2^{2})} \left( x^2 + a_3 x + \rho_{2} \right).
\]
Moreover, for any $n\geq 4$, by Theorem~\ref{general}, we conclude that $x^{2^{n-2}} + a_3 x^{2^{n-3}} + \rho_2$ is irreducible. Hence
\[
Q_{2^n 5} (x) = \prod_{a_3^2 = 2 \rho_2 - a_2} \prod_{a_2 = \rho_{2} (w + w^{-1}) \atop{w \in \Omega(5)}} \prod_{\rho_{2} \in \Omega(2^{2})} \left( x^{2^{n-2}} + a_3 x^{2^{n-3}} + \rho_{2} \right).
\]
If $k$ is even, then $L_1 =3$ and $\rho_3 \in \mathbb{F}_q$. Then
\[
Q_{40} (x) = Q_{20}(x^2) = \prod_{a_3 = \rho_{3} (w + w^{-1}) \atop{w \in \Omega(5)}} \prod_{\rho_{3} \in \Omega(2^{3})} \left( x^2 + a_3 x + \rho_{3} \right).
\]
Similarly, for $n \geq 4$, we have
\[
Q_{2^n 5} (x) = \prod_{a_4^2 = 2 \rho_3 - a_3} \prod_{a_3 = \rho_{3} (w + w^{-1}) \atop{w \in \Omega(5)}} \prod_{\rho_{3} \in \Omega(2^{3})} \left( x^{2^{n-2}} + a_4 x^{2^{n-3}} + \rho_{3} \right).
\]
\end{proof}

\begin{theorem} Let $q\equiv 19 \pmod{20}$. Then we have the following factorization of the $2^n 5$-th cyclotomic polynomial $Q_{2^n 5} (x)$ over $\mathbb{F}_q$.

(i) For $n=0, 1$, we have
\[
Q_{5} (x) = \prod_{a = w + w^{-1} \atop{w \in \Omega(5)}} \left( x^2 - a x + 1 \right), ~ Q_{10} (x) = \prod_{a = w + w^{-1} \atop{w \in \Omega(5)}} \left( x^2 + a x + 1 \right).
\]
(ii)
\[
Q_{20} (x) =\prod_{\rho_{2} \in \Omega(2^{2})} \prod_{a_2 = \rho_2 (w - w^{-1}) \atop{w \in \Omega(5)}} \left( x^2 + a_2 x + 1 \right).
\]
(iii) if $n \geq L_2 = 3$, then
\[
Q_{2^n 5} (x) =\prod_{\rho_{3} \in \Omega(2^{3})} \prod_{a_3 = \rho_2 \rho_3w - (\rho_2 \rho_3w)^{-1} \atop{w \in \Omega(5)}} \left( x^{2^{n-2}} + a_3 x^{2^{n-3}} - 1 \right).
\]
\end{theorem}
\begin{proof} In this case, $L_1 = 1$, $L_2 = L_1+2 = 3$, and $L_4 = L_2 + 1$. Again, $5\nmid q-1$ implies that if $w\in \Omega(5)$ then $w\not\in \mathbb{F}_q$. However, $w\in \mathbb{F}_{q^2}$. Moreover, $-1$ is a non-square element. Again, it is trivial to obtain
\[
Q_{5} (x) = \prod_{a = w + w^{-1} \atop{w \in \Omega(5)}} \left( x^2 - a x + 1 \right), ~ Q_{10} (x) = \prod_{a = w + w^{-1} \atop{w \in \Omega(5)}} \left( x^2 + a x + 1 \right).
\]
Let $a_1$ denote $w+w^{-1}$. Because $q\equiv 19 \pmod{20}$, we have $(\rho_2w)^{q+1} = 1$ and thus $\rho_2 w + (\rho_2 w)^{-1} = \rho_2 w + (\rho_2 w)^q \in \mathbb{F}_q$. Moreover,
\[
x^4 + a_1 x^2 + 1 = (x^2 + a_2 x + 1)(x^2-a_2x+1),
\]
where $a_2 = \rho_2 (w^3-w^{-3}) = \rho_2 w^3 + (\rho_2 w^3)^{-1} \in \mathbb{F}_q$. Again, as $w$ ranges over $\Omega(5)$, so does $w^3$.
Therefore, \[ Q_{20} (x) =\prod_{\rho_{2} \in \Omega(2^{2})} \prod_{a_2 = \rho_2 (w - w^{-1}) \atop{w \in \Omega(5)}} \left( x^2 + a_2 x + 1 \right). \] For $n=3$, let $\rho_3^2 = \rho_2$ and $a_3 = \rho_2 \rho_3 w^3 - (\rho_2 \rho_3 w^3)^{-1}$. We claim that that $a_3 \in \mathbb{F}_q$. First, we note that $\rho_2^q = \rho_2^{-1}$, and $w^q = w^{-1}$. Moreover, $\rho_3^{q} = - \rho_3^{-1}$ because $\rho_3^{2(q+1)} =1$ and $\rho_3^{q+1} \neq 1$. Then \[ a_3^q = (\rho_2 \rho_3 w^3 - (\rho_2 \rho_3 w^3)^{-1})^q = \rho_2^q \rho_3^q w^{3q} - \rho_2^{-q} \rho_3^{-q} w^{-3q} = (\rho_2^{-1}) (-\rho_3^{-1}) w^{-3} - (\rho_2 (-\rho_3) w^3) = a_3 \] Moreover, $ (x^2 +a_3 x -1)(x^2-a_3 x-1) = x^4 + a_2 x + 1$ because $a_3^2 = -\rho_2 w + \rho_2^{-1} w^{-1} -2 = -a_2 -2$. Hence \[ Q_{40} (x) =\prod_{\rho_{3} \in \Omega(2^{3})} \prod_{a_3 = \rho_2 \rho_3w - (\rho_2 \rho_3w)^{-1} \atop{w \in \Omega(5)}} \left( x^{2} + a_3 x - 1 \right). \] The rest of proof follows from Theorem~\ref{general} and $Q_{2^n 5}(x) = Q_{40}(x^{2^{n-3}})$. \end{proof} \section{Case: $q\equiv \pm 2 \pmod{5}$ and $q\equiv 1 \pmod{4}$} \label{new1} We note that if $q\equiv \pm 2 \pmod 5$ and $q\equiv 1 \pmod 4$ (i.e., $q \equiv 13 \pmod{20}$ or $q\equiv 17 \pmod{20}$), then $L_4 = L_2+1$ and $L_2 = L_1 + 1$. Moreover, $\rho_1 = -1$ must be a square and thus there exists $\rho_2\in \mathbb{F}_q$ such that $\rho_2^2 = \rho_1$. \begin{theorem} \label{factor-mod-13-17} Let $q\equiv \pm 2 \pmod 5$ and $q\equiv 1 \pmod 4$. Then we have the following factorization of $2^n 5$-th cyclotomic polynomial $Q_{2^n 5} (x)$ over $\mathbb{F}_q$. (i) If $0\leq n \leq L_1$, then \[ Q_{2^n 5} (x) = \prod_{\rho_n \in \Omega(2^n)} \left( x^4 + \rho_n x^3 + \rho_n^2 x^2 + \rho_n^3 x + \rho_n^4\right). \] (ii) If $n = L_2$ (i.e., $L_2 = L_1 + 1$), then \[ Q_{2^n 5} (x) = \prod_{\rho_{n-1} \in \Omega(2^{n-1})} \prod_{a_{n}^2 = 5 \rho_{n-1}} \left( x^4 + a_n x^3 + 3\rho_{n-1} x^2 + a_n \rho_{n-1} x + \rho_{n-2}\right). \] (iii) If $ n = L_4$ (i.e., $L_4 = L_2 + 1$), then \[ Q_{2^n 5} (x) = \left\{ \begin{array}{rr} \displaystyle{\prod_{\rho_{n-2} \in \Omega(2^{n-2})} \prod_{a_{n-1}^2 = 5 \rho_{n-2}} \prod_{a_n^2 = (2\rho_2-1)a_{n-1}} } \left( x^4 + a_n x^3 + a_{n-1}\rho_{2} x^2 + (-5\rho_{n-2})a_n^{-1} x - \rho_{n-2}\right), \\ if~ (2\rho_{2} -1)a_{n-1} ~is~ a~square; & \\ & \\ \displaystyle{\prod_{\rho_{n-2} \in \Omega(2^{n-2})} \prod_{a_{n-1}^2 = 5 \rho_{n-2}} \prod_{a_n^2 = -(2\rho_2+1)a_{n-1}} } \left( x^4 + a_n x^3 + (-a_{n-1}\rho_{2}) x^2 + (-5\rho_{n-2})a_n^{-1} x - \rho_{n-2}\right), \\ if~ (2\rho_{2} -1)a_{n-1} ~is~ a~nonsquare. & \end{array} \right. \] (iv) If $n > L_4=L$, then $Q_{2^n 5} (x)$ can be factorized as \[ \left\{ \begin{array}{rr} \displaystyle{\prod_{\rho_{L_1} \in \Omega(2^{L_1})} \prod_{a_{L_2}^2 = 5 \rho_{L_1}} \prod_{a_{L_4}^2 = (2\rho_2-1)a_{L_2}} } \left( x^{2^{n-L_4+2}} + a_{L_4} x^{3\cdot 2^{n-L_4}} + a_{L_2}\rho_{2} x^{2^{n-L_4+1}} + (-5\rho_{L_1})a_{L_4}^{-1} x^{2^{n-L_4}} - \rho_{L_1}\right), \\ if~ (2\rho_{2} -1)a_{L_2} ~is~ a~square; & \\ & \\ \displaystyle{\prod_{\rho_{L_1} \in \Omega(2^{L_1})} \prod_{a_{L_2}^2 = 5 \rho_{L_1}} \prod_{a_{L_4}^2 = -(2\rho_2+1)a_{L_2}} } \left( x^{2^{n-L_4+2}} + a_{L_4} x^{3\cdot 2^{n-L_4}} + (-a_{L_2}\rho_{2}) x^{2^{n-L_4+1}} + (-5\rho_{L_1})a_{L_4}^{-1} x^{2^{n-L_4}} - \rho_{L_1}\right), \\ if~ (2\rho_{2} -1)a_{L_2} ~is~ a~ nonsquare. & \end{array} \right. 
\] \end{theorem} \begin{proof} Because the smallest positive $d$ satisfying $q^d \equiv 1 \pmod{5}$ is $4$ under our assumption, by Theorem 2.47 in \cite{LN}, for all $ 0 \leq n \leq L_4$, $Q_{2^n 5}$ factors into a product of $\phi(2^n 5)/ 4 = 2^{n-1}$ distinct monic irreducible polynomials of degree $4$. (i) If $n \leq L_1$, then $\rho_n \in \Omega (2^n) \subseteq \mathbb{F}_q$. Hence $x^4 + \rho_n x^3 + \rho_n^2 x^2 + \rho_n^3 x + \rho_n^4 \in \mathbb{F}_q[x]$. The factorization of $Q_{2^n 5}(x)$ when $n=0, 1$ is trivial. Moreover, it is straightforward to verify that for $\rho_n^2 = \rho_{n-1}$ \[ x^8 + \rho_{n-1} x^6 + \rho_{n-1}^2 x^4 + \rho_{n-1}^3 x^2 + \rho_{n-1}^4 = \left( x^4 + \rho_n x^3 + \rho_n^2 x^2 + \rho_n^3 x + \rho_n^4 \right)\left( x^4 - \rho_n x^3 + \rho_n^2 x^2 - \rho_n^3 x + \rho_n^4 \right). \] Hence (i) follows from $Q_{2^n 5}(x) = Q_{2^{n-1} 5}(x^2)$ and the consequences of Theorem 2.47 in \cite{LN} as mentioned above. (ii) From (i) and Lemma~\ref{cyclo}, we have \[ Q_{2^{L_2} 5}(x) = Q_{2^{L_1} 5}(x^2) = \prod_{\rho_{L_1} \in \Omega(2^{L_1})} \left( x^8 + \rho_{L_1} x^6 + \rho_{L_1}^2 x^4 + \rho_{L_1}^3 x^2 + \rho_{L_1}^4\right). \] Because the Legendre symbol $ \left( 5 \atop p \right) = 1$ iff $p \equiv \pm 1, \pm 9 \pmod{20}$, $5$ is a non-square in $\mathbb{F}_q$ under the assumption of our theorem. Hence $5 \rho_{L_1}$ is a square element in $\mathbb{F}_q$ as $\rho_{L_1}$ is also a non-square element in $\mathbb{F}_q$. Let $a_{L_2}^2 = 5 \rho_{L_1}$. Then $\pm a_{L_2} \in \mathbb{F}_q$. Hence $x^4 \pm a_{L_2} x^3 + 3 \rho_{L_1} x^2 \pm a_{L_2} \rho_{L_1} x + \rho_{L_1-1} \in \mathbb{F}_q[x]$. One can also easily verify that \[ x^8 + \rho_{L_1} x^6 + \rho_{L_1}^2 x^4 + \rho_{L_1}^3 x^2 + \rho_{L_1}^4 = \prod_{a_{L_2}^2 = 5\rho_{L_1}} \left( x^4 + a_{L_2} x^3 + 3 \rho_{L_1} x^2 + a_{L_2} \rho_{L_1} x + \rho_{L_1-1} \right). \] Therefore the rest of proof of (ii) follows. (iii) In this case, we essentially need to factor $ x^8 + a_{L_2} x^6 + 3 \rho_{L_1} x^4 + a_{L_2} \rho_{L_1} x^2 + \rho_{L_1-1} $ into two monic quartic polynomials in $\mathbb{F}_q[x]$, where $\rho_{L_1} \in \Omega(2^{L_1})$ and $a_{L_2}^2 = 5\rho_{L_1}$. Because $-1$ is a square element and $5$ is a non-square, $-(2\rho_2 + 1)a_{L_2} (2\rho_2 -1) a_{L_2} = -(4\rho_2^2 -1) a_{L_2}^2 = 5 a_{L_2}^2$ is a non-square element in $\mathbb{F}_q$. Hence either $-(2\rho_2 + 1)a_{L_2}$ or $(2\rho_2 -1) a_{L_2}$ (exactly one of them) is a square element in $\mathbb{F}_q$. If $(2\rho_2 -1) a_{L_2}$ is a square, we let $a_{L_4}^2 = (2\rho_2 -1) a_{L_2}$. Then \[ x^8 + a_{L_2} x^6 + 3 \rho_{L_1} x^4 + a_{L_2} \rho_{L_1} x^2 + \rho_{L_1-1} = \prod_{a_{L_4}^2 = (2\rho_2 -1) a_{L_2}} \left( x^4 + a_{L_4} x^3 + \rho_2 a_{L_2} x^2 + (-5\rho_{L_1}) a_{L_4}^{-1} x - \rho_{L_1} \right) \] If $(2\rho_{2} -1)a_{n-1}$ is a non-square then $-(2\rho_2 + 1)a_{L_2}$ is a square. In this case, we let $a_{L_4}^2 = -(2\rho_2 + 1) a_{L_2}$. Then \[ x^8 + a_{L_2} x^6 + 3 \rho_{L_1} x^4 + a_{L_2} \rho_{L_1} x^2 + \rho_{L_1-1} = \prod_{a_{L_4}^2 = -(2\rho_2 + 1) a_{L_2}} \left( x^4 + a_{L_4} x^3 - \rho_2 a_{L_2} x^2 + (-5\rho_{L_1}) a_{L_4}^{-1} x - \rho_{L_1} \right) \] Because each quartic polynomial is in $\mathbb{F}_q[x]$ and $Q_{2^{L_4} 5}(x)$ factors into product of quartic polynomials, every such quartic polynomial must be irreducible. Hence (iii) is proved. (iv) For $n \geq L_4$, then $Q_{2^n 5}(x) = Q_{2^{L_4} 5} (x^{2^{n-L_4}})$. 
If $(2\rho_2 -1) a_{L_2}$ is a square, then each irreducible factor \[ x^4 + a_{L_4} x^3 + \rho_2 a_{L_2} x^2 + (-5\rho_{L_1}) a_{L_4}^{-1} x - \rho_{L_1} \] of $Q_{2^{L_4} 5}(x)$ has degree $4$ and order $2^{L_4}5$, where $a_{L_4}^2 = (2\rho_2 -1) a_{L_2}$. By Theorem~\ref{general}, \[ x^{2^{n-L_4+2}} + a_{L_4} x^{3\cdot 2^{n-L_4}} + a_{L_2}\rho_{2} x^{2^{n-L_4+1}} + (-5\rho_{L_1})a_{L_4}^{-1} x^{2^{n-L_4}} - \rho_{L_1} \] must also irreducible. Similarly, if $(2\rho_2 -1) a_{L_2}$ is a non-square, then \[ x^{2^{n-L_4+2}} + a_{L_4} x^{3\cdot 2^{n-L_4}} - a_{L_2}\rho_{2} x^{2^{n-L_4+1}} + (-5\rho_{L_1})a_{L_4}^{-1} x^{2^{n-L_4}} - \rho_{L_1} \] where $a_{L_4}^2 = -(2\rho_2 + 1) a_{L_2}$ must be irreducible. Hence the proof is complete. \end{proof} \begin{corol} \label{irred-mod-13-17} Let $q\equiv \pm 2 \pmod 5$ and $q\equiv 1 \pmod 4$. Let $\rho_n \in \Omega(2^n)$, $\rho_n^{2^{n-2}} = \rho_2$, and $a_{L_2}^2 = 5 \rho_{L_1}$. (i) If $2(\rho_{2} -1)a_{L_2}$ is a square, then $x^{2^{n-L_4+2}} + a_{L_4} x^{3\cdot 2^{n-L_4}} + a_{L_2}\rho_{2} x^{2^{n-L_4+1}} + (-5\rho_{L_1})a_{L_4}^{-1} x^{2^{n-L_4}} - \rho_{L_1}$ is irreducible over $\mathbb{F}_q$ for each choice of $\rho_n$, $a_{L_2}$, and $a_{L_4}^2 = (2\rho_2 -1)a_{L_2}$; (ii) otherwise, $x^{2^{n-L_4+2}} + a_{L_4} x^{3\cdot 2^{n-L_4}} + (-a_{L_2}\rho_{2}) x^{2^{n-L_4+1}} + (-5\rho_{L_1})a_{L_4}^{-1} x^{2^{n-L_4}} - \rho_{L_1}$ is irreducible over $\mathbb{F}_q$ for each choice of $\rho_n$, $a_{L_2}$, and $a_{L_4}^2 = -(2\rho_2 +1)a_{L_2}$. \end{corol} \section{Case: $q\equiv \pm 2 \pmod{5}$ and $q\equiv 3 \pmod{4}$ } \label{new2} First we note that both $-1$ and $5$ are non-square elements and thus $-5$ is a square element in $\mathbb{F}_q$. We note also that if $q\equiv \pm 2 \pmod 5$ and $q\equiv 3 \pmod 4$ (i.e., $q \equiv 3 \pmod{20}$ or $q\equiv 7 \pmod{20}$), then $L_4 = L_2+1$ and $L_1 =1$. However, $L_2 = L_1 + 2$ for $q\equiv 3 \pmod{20}$ and $L_2 = L_1 + 2$ or $L_1+3$ for $q\equiv 7 \pmod{20}$, depending on parity of $k$ when $q=20k+7$. Hence we separate this case into two subcases. \subsection{Case $q \equiv 3 \pmod{20}$} \label{new2} In this case, $L_1 = 1$, $L_2 = L_1 + 2 =3$ and $L_4 = L_2+1 =4$. \begin{theorem} \label{factor-mod-3} Let $q\equiv 3 \pmod{20}$ and $q > 3$. Then we have the following factorization of cyclotomic polynomials $Q_{2^n 5} (x)$ over $\mathbb{F}_q$. (i) If $0 \leq n \leq 1$, then \[ Q_{2^n 5} (x) = \prod_{\rho_n \in \Omega(2^n)} \left( x^4 + \rho_n x^3 + \rho_n^2 x^2 + \rho_n^3 x + \rho_n^4\right). \] (ii) If $n = 2$, then \[ Q_{2^n 5} (x) = \prod_{a_{2}^2 = -5 } \left( x^4 + a_2 x^3 + 3\rho_{1} x^2 + a_2 \rho_1 x + 1\right). \] (iii) If $n = 3$, then \[ Q_{2^n 5} (x) = \left\{ \begin{array}{ll} \displaystyle{ \prod_{a_{2}^2 = -5} \prod_{ a_3^2 = 2 b_3 - a_{2} \atop{b_3= 1} } } \left( x^4 + a_3 x^3 + b_3 x^2 + 3a_3^{-1} x + 1 \right), & if~ 2 - a_{2} ~is~ a~ square; \\ & \\ \displaystyle{ \prod_{a_{2}^2 = -5} \prod_{a_3^2 = 2 b_3 - a_{2} \atop{b_3= - 1} } } \left( x^4 + a_3 x^3 + b_3 x^2 + 3a_3^{-1} x + 1 \right), & if~ -2 - a_{2} ~is~a~ square. \end{array} \right. 
\] (iv) If $ n = 4$, then \[ Q_{2^n 5} (x) = \left\{ \begin{array}{ll} \displaystyle{ \prod_{a_{2}^2 = -5} \prod_{ a_3^2 = 2 - a_{2} \atop{c_3 = 3a_3^{-1}}} \prod_{a_4, b_4, c_4} } \left( x^4 + a_4 x^3 + b_4 x^2 + c_4 x - 1 \right), & if~ 2 - a_{2} ~is~ a~square; \\ & \\ \displaystyle{ \prod_{a_{2}^2 = -5} \prod_{a_3^2 = - 2 - a_{2}\atop{c_3 = 3a_3^{-1}} } \prod_{a_4, b_4, c_4} } \left( x^4 + a_4 x^3 + b_4 x^2 + c_4 x + 1 \right), & if~ - 2 - a_{2} ~is~ a~ square, \end{array} \right. \] where $a_4, b_4, c_4$ satisfy either \begin{equation}\label{cond4a} a_4^2 =2b_4 -a_3, b_4 = \alpha + a_2~ or ~\alpha + a_2, \alpha^2 = -2, c_4^2 = -2b_4-c_3, ~and~ c_4 = (b_4^2 -3)(2a_4)^{-1}, \end{equation} when $2-a_2$ is a square, or \begin{equation} \label{cond4b} a_4^2 =2b_4 -a_3,\\ b_4 = \beta +1 ~or~\beta-1 , \beta^2 = 2, \\ c_4^2 = 2b_4-c_3, ~and~ c_4 = (b_4^2 +3)(2a_4)^{-1}, \end{equation} when $-2-a_2$ is a square. (v) If $n > 4$, then $Q_{2^n 5} (x)$ can be factorized as \[ \left\{ \begin{array}{ll} \displaystyle{ \prod_{a_{2}^2 = -5} \prod_{ a_3^2 = 2 - a_{2} \atop{c_3 = 3a_3^{-1}} } \prod_{a_4, b_4, c_4 } } \left( x^{2^{n-2}} + a_4 x^{3\cdot 2^{n-4}} + b_4 x^{2^{n-3}} + c_4 x^{2^{n-4}} - 1 \right), & if~ 2 - a_{2} ~is~ a~square; \\ & \\ \displaystyle{ \prod_{a_{2}^2 = -5} \prod_{a_3^2 = - 2 - a_{2} \atop{c_3 = 3a_3^{-1}}} \prod_{ a_4, b_4, c_4} } \left( x^{2^{n-2}} + a_4 x^{3\cdot 2^{n-4}} + b_4 x^{2^{n-3}} + c_4 x^{2^{n-4}} + 1 \right), & if~ - 2 - a_{2} ~is~ a~ square, \end{array} \right. \] where $a_4, b_4, c_4$ satisfy either (\ref{cond4a}) if $2-a_2$ is a square, or (\ref{cond4b}) if $-2-a_2$ is a square. \end{theorem} \begin{proof} (i) Again $Q_{2^n 5}$ factors into a product of $\phi(2^n 5)/ 4 = 2^{n-1}$ distinct monic irreducible polynomials of degree $4$. It is trivial to check $Q_5(x) = \prod_{w\in \Omega(5)} (x-w) = x^4 - x^3 + x^2 - x + 1$ and $Q_{10}(x) = Q_{5}(-x)= x^4 + x^3 + x^2 + x +1$. (ii) Because $-5$ is a square when $q\equiv 3 \pmod{20}$ and $q >3$, for each $a_2$ satisfying $a_2^2 = -5$, we can verify that \[ \left( x^4 + a_2 x^3 + 3 \rho_1 x^2 + a_2 \rho_1 x +1 \right) \left( x^4 - a_2 x^3 + 3 \rho_1 x^2 - a_2 \rho_1 x +1 \right) = x^8 + x^6 + x^4 + x^2 + 1 \] Hence the proof follows from $Q_{20}(x) = Q_{10}(x^2)$ and the above polynomial factorization. (iii) If $2-a_2$ is a square, then let $a_3 \in \mathbb{F}_q$ satisfy $a_3^2 = 2-a_2$. Because $a_2^2 = -5$ and $q>3$, we have $a_3^{-2} = \frac{1}{9} (2+a_2)$. Hence $2-9a_3^{-2} = -a_2$. Therefore we have \begin{eqnarray*} &&\left( x^4 + a_3 x^3 + x^2 + 3a_3^{-1} x +1 \right) \left( x^4 - a_3 x^3 + x^2 -3 a_3^{-1} x +1 \right) \\ &=& x^8 +(2-a_3^2) x^6 + (3-6) x^4 + (2-9a_3^{-2})x^2 + 1 \\ &=& x^8 + a_2 x^6 + 3\rho_1 x^4 + a_2 \rho_1 x^2 + 1 \end{eqnarray*} Similarly, if $2-a_2$ is a non-square, then $2+a_2$ is also non-square and thus $-(2+a_2)$ is a square. Let $a_3^2 = -(2+a_2)$. Then $a_3^{-2} = -\frac{1}{9} (2-a_2)$ as $a_2^2 = -5$ and $q >3$. Hence $2+9a_3^{-2} = a_2$. Therefore we have \begin{eqnarray*} &&\left( x^4 + a_3 x^3 - x^2 + 3a_3^{-1} x +1 \right) \left( x^4 - a_3 x^3 - x^2 -3 a_3^{-1} x +1 \right) \\ &=& x^8 +(-2-a_3^2) x^6 + (3-6) x^4 + (-2-9a_3^{-2})x^2 + 1 \\ &=& x^8 + a_2 x^6 + 3\rho_1 x^4 + a_2 \rho_1 x^2 + 1 \end{eqnarray*} Because $Q_{40}(x)$ is factorized into monic irreducible quartic polynomials under our assumption, we are done. 
(iv) Consider the irreducible factorization of the following format \begin{eqnarray*} && x^8 + a_3 x^6 + b_3 x^4 + c_3 x^2 + 1 \\ &=&\left( x^4 + d_3 x^3 + d_2 x^2 + d_1 x +d_0 \right) \left( x^4 + e_3 x^3 + e_2 x^2 + e_1 x +e_0 \right) \end{eqnarray*} Because $d_0$ and $e_0$ are of form $\beta^{1+q+q^2+q^3}$ for some primitive $80$-th root of unity $\beta$ under our assumption, $d_0 = e_0 = \pm 1$. Also one can easily show that $e_3 = -d_3$ and $e_1 = -d_1$ as coefficients of $x^7$ and $x$ vanish in the product. This forces that $d_2 = e_2$. This means that the factorization of $x^8 + a_3 x^6 + b_3 x^4 + c_3 x^2 + 1$ (let $c_3 = 3 a_3^{-1}$) can only be one of the following two ways (either $e_0=d_0 =1$ or $e_0=d_0 = -1$). \begin{eqnarray*} && x^8 + a_3 x^6 + b_3 x^4 + c_3 x^2 + 1 \\ &=&\left( x^4 + a_4 x^3 + b_4 x^2 + c_4 x \pm 1 \right) \left( x^4 -a_4 x^3 + b_4 x^2 -c_4 x \pm 1 \right) \end{eqnarray*} Comparing coefficients of $x^6, x^4, x^2$ on both sides we have \begin{center} $ \begin{array}{lll} 2b_4 -a_4^2 &=& a_3\\ 2 +b_4^2 - 2a_4 c_4 &= & b_3\\ 2b_4 -c_4^2 &=& c_3 \end{array} $ or $ \begin{array}{lll} 2b_4 -a_4^2 &=& a_3\\ -2 +b_4^2 - 2a_4c_4 &= & b_3\\ -2b_4 -c_4^2 &=& c_3 \end{array} $ \end{center} First we consider the case that $2-a_2$ is a square. In this case, $b_3 =1$. We now show that if $2-a_2$ is a square then $-2$ is also a square. Indeed, let $a_3^2 = 2-a_2$. Then $a_3^{-2} = \frac{1}{9} (2+a_2)$ and thus $(a_3 -3a_3^{-1})^2 = a_3^2 -6 + 9a_3^{-2} = -2$. Hence $-2$ is a square. Note that the Legendre symbol $\left( -2 \atop p \right) = 1$ iff $p \equiv 1, 3 \pmod{8}$. Let $q=20k+3$. So $k$ is even in this case. Let $\alpha^2 = -2$. Let us first consider \[ \begin{array}{lll} 2b_4 -a_4^2 &=& a_3\\ -2 +b_4^2 - 2a_4 c_4 &= & 1\\ - 2b_4 -c_4^2 &=& c_3 \end{array} \] This case corresponds to $d_0 = e_0 = -1$. Since $a_4^2 = 2b_4 - a_3$ and $c_4^2 = -2b_4 -c_3$, we obtain $a_4^2 c_4^2 = (2b_4 -a_3)(-2b_4 -c_3)$. Using $4a_4^2c_4^2 = (b_4^2-3)^2$, we can obtain a quartic equation on $b_4$ only. That is, \begin{equation} \label{eqn1} b_4^4 + 10 b_4^2 - 8(a_3 - c_3) b_4 + 9-4a_3c_3 =0 \end{equation} Note $a_3 c_3 = 3$. Since $(a_3 -c_3)^2 = a_3^2 - 2a_3 c_3 + c_3^2 = 2-a_2 - 6 + \frac{1}{9} a_3^{-2} = 2-a_2 -6 + 2 + a_2 = -2$, Equation~(\ref{eqn1}) reduces $b_4^2 + 10b_4^2 -8\alpha b_4 -3 =0$. Moreover, $b_4^4 + 10b_4^2 - 8 \alpha b_4 -3 = (b_4^2 - 2\alpha b_4 -1)(b_4^2 + 2\alpha b_4 +3) =0$. Since $-1$ is a non-square in $\mathbb{F}_q$, we must have $b_4^2 + 2 \alpha b_4 + 3 =0$ and $b_4$ must be one of $\alpha \pm a_2$. Secondly we consider \[ \begin{array}{lll} 2b_4 -a_4^2 &=& a_3\\ 2 +b_4^2 - 2a_4 c_4 &= & 1\\ 2b_4 -c_4^2 &=& c_3 \end{array} \] Since $a_4^2 = 2b_4 - a_3$ and $c_4^2 = 2b_4 -c_3$, we obtain $a_4^2 c_4^2 = (2b_4 -a_3)(2b_4 -c_3)$. Using $4a_4^2c_4^2 = (b_4^2 +1)^2$, we can obtain a quartic equation on $b_4$ only. That is, \begin{equation} \label{eqn2} b_4^4 -14 b_4^2 + 8(a_3 +c_3) b_4 + 1 -4a_3c_3 =0 \end{equation} It is easy to check (for example, MAPLE) that Equation~(\ref{eqn2}) has no roots in $\mathbb{F}_q$. Since $Q_{80}(x)$ must factor into a product of quartic irreducible polynomials and we know one of above two cases holds, this shows that we must be able to find $a_4, b_4, c_4 \in \mathbb{F}_q$ such that $a_4^2 = 2b_4 -a_3$, $c_4 = (b_4^2 -3)(2a_4)^{-1}$, $c_4^2 = -2b_4-c_3$, where $b_4$ must be one of $\alpha \pm a_2 \in \mathbb{F}_q$. Similarly, if $2-a_2$ is non-square, then we can show that $2$ is a square element. Let $\beta^2 = 2$. 
In this case, $(a_3 -c_3)^2 = -10$. If the factorization holds for $d_0 = e_0 = -1$, we have the corresponding equation \begin{equation} \label{eqn3} b_4^4 + 14 b_4^2 - 8(a_3 - c_3) b_4 + 1-4a_3c_3 =0. \end{equation} Otherwise, $(a_3+c_3)^2 = 2$ and we have the equation \begin{equation} \label{eqn4} b_4^4 -10 b_4^2 + 8(a_3 +c_3) b_4 + 9 -4a_3c_3 =0. \end{equation} In fact, it is easy to check that Equation~(\ref{eqn3}) has no solution in $\mathbb{F}_q$, but Equation~(\ref{eqn4}) simplifies to $b_4^4 - 10 b_4^2 + 8\beta b_4-3 = (b_4^2 + 2 \beta b_4 - 3)(b_4 - \beta +1)(b_4 - (\beta+1)) =0$. Because $5$ is a non-square element in $\mathbb{F}_q$, it follows that $b_4^2 + 2 \beta b_4 - 3$ is irreducible and thus $b_4$ must be one of $\beta \pm 1$ where $\beta^2 = 2$. Again, we must be able to find $a_4, b_4, c_4 \in \mathbb{F}_q$ such that $a_4^2 = 2b_4 -a_3$, $c_4 = (b_4^2 +3)(2a_4)^{-1}$, and $c_4^2 = 2b_4 -c_3$ satisfying Equation~(\ref{eqn4}). (v) Each quartic factor in $Q_{80}(x)$ has degree $4$ and order $80$, by Theorem~\ref{general} and $Q_{2^n 5}(x) = Q_{80}(x^{2^{n-4}})$, hence the proof is completed. \end{proof} We remark that from the above proof, if $q =20k+3$ where $k>0$ then $k$ is even when $2-a_2$ is a square and $k$ is odd when $-2-a_2$ is a square. \begin{corol} \label{irred-mod-3} Let $q=20k+3$ where $k>0$ and $a_{2}^2 = -5$. (i) If $k$ is even, then the polynomial \[ x^{2^{n-2}} + a_4 x^{3\cdot 2^{n-4}} + b_4 x^{2^{n-3}} + c_4 x^{2^{n-4}} - 1 \] is irreducible over $\mathbb{F}_q$ for any $n \geq 4$, where $a_4, b_4, c_4$ satisfy (\ref{cond4a}) for any $a_3^2 = 2-a_2$ and $c_3 = 3a_3^{-1}$. (i) If $k$ is odd, then the polynomial \[ x^{2^{n-2}} + a_4 x^{3\cdot 2^{n-4}} + b_4 x^{2^{n-3}} + (b_4^2+3)(2a_4)^{-1} x^{2^{n-4}} + 1 \] is irreducible over $\mathbb{F}_q$ for any $n \geq 4$, where $a_4, b_4, c_4$ satisfy (\ref{cond4b}) for any $a_3^2 = -2-a_2$ and $c_3 = 3a_3^{-1}$. \end{corol} \begin{theorem}\label{factor-3} Let $q=3$. We have the following (i) If $0 \leq n \leq 1$, then \[ Q_{2^n 5} (x) = \prod_{\rho_n \in \Omega(2^n)} \left( x^4 + \rho_n x^3 + \rho_n^2 x^2 + \rho_n^3 x + \rho_n^4 \right). \] (ii) \[ Q_{2^2 5} (x) = \prod_{a_2^2 =1} \left( x^4 + a_2 x^3 -a_2 x +1 \right). \] (iii) \[ Q_{2^3 5} (x) = \prod_{a_3^2 =1} \left( x^4 + a_3 x^3 + x^2 +1 \right) \left( x^4 + x^2 + a_3 x +1 \right). \] (iv) \[ Q_{2^4 5} (x) = \prod_{a_4^2 =1} \left( x^4 + a_4 x^3 + 2 \right) \left( x^4 + a_4 x +2 \right) \left( x^4 + a_4 x^3 + x^2 -a_4 x +2 \right) \left( x^4 +a_4 x^3 - x^2 - a_4 x +2 \right). \] (v) If $n > 4$, then \begin{eqnarray*} Q_{2^n 5} (x) = & \prod_{a_4^2 =1} \left( x^{2^{n-2}} + a_4 x^{3\cdot 2^{n-4}} + 2 \right) \left( x^{2^{n-2}} + a_4 x^{3\cdot 2^{n-4}} + x^{2^{n-3}} -a_4 x^{2^{n-4}} +2 \right) \\ & \left( x^{2^{n-2}} + a_4 x^{2^{n-4}} +2 \right) \left( x^{2^{n-2}} + a_4 x^{3\cdot 2^{n-4}} - x^{2^{n-3}} -a_4 x^{2^{n-4}} +2 \right). \\ \end{eqnarray*} \end{theorem} \begin{proof} It is easy to check directly by computer. \end{proof} \begin{corol} \label{irred-3} For any $n >4$ and $a = \pm 1$, we have the following families of irreducible polynomials of degree $2^{n-2}$ over $\mathbb{F}_3$. (i) $x^{2^{n-2}} + a x^{3\cdot 2^{n-4}} + 2$; (ii) $x^{2^{n-2}} + a x^{2^{n-4}} +2$; (iii) $x^{2^{n-2}} + a x^{3\cdot 2^{n-4}} + x^{2^{n-3}} -a x^{2^{n-4}} +2$; (iv) $x^{2^{n-2}} + a x^{3\cdot 2^{n-4}} - x^{2^{n-3}} -a x^{2^{n-4}} +2 $. \end{corol} \subsection{Case $q \equiv 7 \pmod{20}$} \label{new3} Let $q = 20k+7$. 
If $k$ is odd, then $L_1=1$, $L_2 = L_1+2 = 3$, and $L_4=L_2 + 1 = 4$. The factorization of $Q_{2^n5}(x)$ over $\mathbb{F}_q$ behaves the same as that over $\mathbb{F}_{20t+3}$ where $t>0$. If $k$ is even, then $L_1=1$, $L_2 = L_1+ 3 = 4$, and $L_4=L_2 + 1 = 5$. The factorization of $Q_{2^n5}(x)$ over $\mathbb{F}_q$ behaves almost the same as that over $\mathbb{F}_{20t+3}$ where $t>0$; the only difference occurs when $k$ is even and $n=5$, because $2$ is a square in this case. Again, since $Q_{2^5 5}(x) = Q_{2^4 5}(x^2)$, we need to factor
\[
\displaystyle{ \prod_{a_{2}^2 = -5} \prod_{a_3^2 = - 2 - a_{2} \atop{c_3 = 3a_3^{-1}}} \prod_{ a_4, b_4, c_4} } \left( x^8 + a_4 x^6 + b_4 x^4 + c_4 x^2 + 1 \right)
\]
over $\mathbb{F}_q$ where $q=20k+7$ and $k$ is even, and where $a_4, b_4, c_4$ satisfy (\ref{cond4b}). Hence the factorization of $Q_{2^5 5}(x)$ is either
\begin{equation}\label{eqn5}
\displaystyle{ \prod_{a_{2}^2 = -5} \prod_{a_3^2 = - 2 - a_{2} \atop{c_3 = 3a_3^{-1}}} \prod_{a_4, b_4, c_4} \prod_{a_5, b_5, c_5}} \left( x^4 + a_5 x^3 + b_5 x^2 + c_5 x + 1 \right),
\end{equation}
where $a_5, b_5, c_5$ satisfy $a_5^2 = 2b_5-a_4$, $c_5^2 = 2b_5-c_4$ (both right-hand sides being squares), $c_5 = (b_5^2 +2-b_4)(2a_5)^{-1}$, and
\[
b_5^4 + (-12-2b_4) b_5^2 + 8(a_4+c_4) b_5 + (2 - b_4)^2 -4a_4c_4 =0,
\]
or
\begin{equation}\label{eqn6}
\displaystyle{ \prod_{a_{2}^2 = -5} \prod_{a_3^2 = - 2 - a_{2} \atop{c_3 = 3a_3^{-1}}} \prod_{a_4, b_4, c_4} \prod_{a_5, b_5, c_5} } \left( x^4 + a_5 x^3 + b_5 x^2 + c_5 x - 1 \right),
\end{equation}
where $a_5, b_5, c_5$ satisfy $a_5^2 = 2b_5-a_4$, $c_5^2 = -2b_5-c_4$ (both right-hand sides being squares), $c_5 = (b_5^2 -2-b_4)(2a_5)^{-1}$, and
\[
b_5^4 + (12-2b_4) b_5^2 - 8(a_4-c_4) b_5 + (2+b_4)^2 -4a_4c_4 =0.
\]
In these cases, the expressions for $a_5$ and $b_5$ are more complicated than those for $a_4$ and $b_4$, and we do not write them out explicitly. Similarly, when $q = 20k+7$, we have that $k$ is odd if $2-a_2$ is a square and $k$ is even if $-2-a_2$ is a square. We summarize the result as follows:

\begin{theorem}\label{factor-mod-7} Let $q\equiv 7 \pmod{20}$. Then we have the following factorization of cyclotomic polynomials $Q_{2^n 5} (x)$ over $\mathbb{F}_q$.

(i) If $0 \leq n \leq 1$, then
\[
Q_{2^n 5} (x) = \prod_{\rho_n \in \Omega(2^n)} \left( x^4 + \rho_n x^3 + \rho_n^2 x^2 + \rho_n^3 x + \rho_n^4\right).
\]
(ii) If $n = 2$, then
\[
Q_{2^n 5} (x) = \prod_{a_{n}^2 = -5 } \left( x^4 + a_n x^3 + 3\rho_{1} x^2 + a_n \rho_1 x + 1\right).
\]
(iii) If $n = 3$, then
\[
Q_{2^n 5} (x) = \left\{ \begin{array}{ll} \displaystyle{ \prod_{a_{2}^2 = -5} \prod_{ a_3^2 = 2 b_3 - a_{2} \atop{b_3= 1} } } \left( x^4 + a_3 x^3 + b_3 x^2 + 3a_3^{-1} x + 1 \right), & if~ 2 - a_{2} ~is~ a ~square; \\ & \\ \displaystyle{ \prod_{a_{2}^2 = -5} \prod_{a_3^2 = 2 b_3 - a_{2} \atop{b_3= - 1} } } \left( x^4 + a_3 x^3 + b_3 x^2 + 3a_3^{-1} x + 1 \right), & if~ -2 - a_{2} ~is~a~ square. \end{array} \right.
\]
(iv) If $n = 4$, then
\[
Q_{2^n 5} (x) = \left\{ \begin{array}{ll} \displaystyle{ \prod_{a_{2}^2 = -5} \prod_{ a_3^2 = 2 - a_{2} \atop{c_3 = 3a_3^{-1}} } \prod_{a_4, b_4, c_4 } } \left( x^4 + a_4 x^3 + b_4 x^2 + c_4 x - 1 \right), & if~ 2 - a_{2} ~is~ a~square; \\ & \\ \displaystyle{ \prod_{a_{2}^2 = -5} \prod_{a_3^2 = - 2 - a_{2} \atop{c_3 = 3a_3^{-1}}} \prod_{a_4, b_4, c_4} } \left( x^4 + a_4 x^3 + b_4 x^2 + c_4 x + 1 \right), & if~ - 2 - a_{2} ~is~a~ square. \end{array} \right.
\]
where $a_4, b_4, c_4$ satisfy either (\ref{cond4a}) if $2-a_2$ is a square, or (\ref{cond4b}) if $-2-a_2$ is a square.

(v) If $n = 5$, then
\[
Q_{2^n 5} (x) = \left\{ \begin{array}{ll} \displaystyle{ \prod_{a_{2}^2 = -5} \prod_{ a_3^2 = 2 - a_{2} \atop{c_3 = 3a_3^{-1}} } \prod_{a_4, b_4, c_4} } \left( x^8 + a_4 x^6 + b_4 x^4 + c_4 x^2 - 1 \right), & if~ 2 - a_{2} ~is~ a~ square; \\ & \\ \displaystyle{ \prod_{a_{2}^2 = -5} \prod_{a_3^2 = - 2 - a_{2} \atop{c_3 = 3a_3^{-1}}} \prod_{a_4, b_4, c_4} \prod_{a_5, b_5, c_5}} \left( x^4 + a_5 x^3 + b_5 x^2 + c_5 x \pm 1 \right), & if~ - 2 - a_{2} ~is~ a~square, \end{array} \right.
\]
where $a_4, b_4, c_4$ satisfy either (\ref{cond4a}) if $2-a_2$ is a square, or (\ref{cond4b}) if $-2-a_2$ is a square, and $a_5, b_5, c_5$ are given in (\ref{eqn5}) or (\ref{eqn6}).

(vi) If $n > 5$, then $Q_{2^n 5} (x)$ can be factorized as
\[
\left\{ \begin{array}{ll} \displaystyle{ \prod_{a_{2}^2 = -5} \prod_{ a_3^2 = 2 - a_{2} \atop{c_3 = 3a_3^{-1}} } \prod_{a_4,b_4, c_4} } \left( x^{2^{n-2}} + a_4 x^{3\cdot 2^{n-4}} + b_4 x^{2^{n-3}} + c_4 x^{2^{n-4}} - 1 \right), & if~ 2 - a_{2} ~is~ a~ square; \\ & \\ \displaystyle{ \prod_{a_{2}^2 = -5} \prod_{a_3^2 = - 2 - a_{2} \atop{c_3 = 3a_3^{-1}}} \prod_{a_4, b_4, c_4} \prod_{a_5, b_5, c_5}} \left( x^{2^{n-3}} + a_5 x^{3\cdot 2^{n-5}} + b_5 x^{2^{n-4}} + c_5 x^{2^{n-5}} \pm 1 \right), & if~ - 2 - a_{2} ~is~a ~ square, \end{array} \right.
\]
where $a_4, b_4, c_4$ satisfy either (\ref{cond4a}) if $2-a_2$ is a square, or (\ref{cond4b}) if $-2-a_2$ is a square, and $a_5, b_5, c_5$ are given in (\ref{eqn5}) or (\ref{eqn6}).
\end{theorem}

\section{Conclusion}

In this paper, we obtain the explicit factorization of the cyclotomic polynomials $Q_{2^n r}(x)$ over finite fields for $r=5$ and construct several classes of irreducible polynomials of degree $2^{n-2}$ with at most $5$ nonzero terms. Our approach is recursive, i.e., we derive the factorization of $Q_{2^k r}(x)$ from that of $Q_{2^{k-1} r}(x)$ via the identity $Q_{2^k r}(x) = Q_{2^{k-1} r}(x^2)$. We show that we can do it with at most $L_{\phi(r)} = v_2(q^{\phi(r)} -1)$ iterations. A key component of our approach for $r=5$ is to factor certain types of polynomials of degree $8$ into two quartic irreducible polynomials. It would be desirable to obtain explicit factors of $Q_{2^n r}(x)$ for arbitrary $r$. One would expect this to involve the factorization of certain types of polynomials of degree $2m$, where $m \mid \phi(r)$, into a product of irreducible polynomials of degree less than or equal to $m$. Another contribution of this paper is the construction of several classes of irreducible polynomials over finite fields with at most $5$ nonzero terms. The reciprocals of these irreducible polynomials are also of the form $x^{2^{n-2}} + g(x)$ with the degree of $g$ at most $4$, which could have potential applications as mentioned in \cite{GaoHowellPanario}. Finally we note that one can also construct more classes of irreducible polynomials for other choices of $r$ as a consequence of Theorem~\ref{general}.
\end{document}
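The factorization patterns derived above are easy to sanity-check by machine, in the same spirit as the direct computer check used for $q=3$. The following short Python sketch is an illustration added here, not part of the paper: it assumes only the computer algebra package sympy, and the particular primes and the range of $n$ are arbitrary choices. It factors $Q_{2^n 5}(x)$ over $\mathbb{F}_q$ and prints the degrees of the irreducible factors, which can then be compared with the degrees predicted by the theorems for each residue class of $q$ modulo $20$.
\begin{verbatim}
from sympy import symbols, Poly, cyclotomic_poly

x = symbols('x')

def factor_degrees(n, q):
    # Degrees of the irreducible factors of Q_{2^n * 5}(x) over GF(q).
    f = Poly(cyclotomic_poly(2**n * 5, x), x, modulus=q)
    _, factors = f.factor_list()
    return sorted(g.degree() for g, _ in factors)

# One representative prime for several residue classes q mod 20.
for q in (41, 11, 29, 19, 13, 3, 7):
    for n in range(6):
        print(f"q = {q:2d}, n = {n}: {factor_degrees(n, q)}")
\end{verbatim}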
\begin{document}

\title[LIMIT THEOREMS FOR GENERALIZED RANDOM GRAPHS]{\bf LIMIT THEOREMS FOR NUMBER OF EDGES\\ IN THE GENERALIZED RANDOM GRAPHS\\ WITH RANDOM VERTEX WEIGHTS}

\author[Z.S. Hu]{Z.S. Hu} \address{Z.S. Hu\\ Department of Statistics and Finance\\ University of Science and Technology of China\\ Hefei, China } \email{[email protected]}

\author[V.V. Ulyanov]{V.V. Ulyanov} \address{V.V. Ulyanov\\ Faculty of Computational Mathematics and Cybernetics\\ Moscow State University \\ Moscow, 119991, Russia\\ and National Research University Higher School of Economics (HSE), Moscow, 101000, Russia } \email{[email protected]}

\author[Q.Q. Feng]{Q.Q. Feng} \address{Q.Q. Feng\\ Department of Statistics and Finance\\ University of Science and Technology of China\\ Hefei, China } \email{[email protected]}

\thanks{} \keywords{Generalized random graphs, random vertex weights, central limit type theorems, total number of edges}

\begin{abstract} We obtain central limit type theorems for the total number of edges in generalized random graphs with random vertex weights, under different moment conditions on the distribution of the weights. \end{abstract}

\maketitle \renewcommand{References}{References}

Complex networks attract increasing attention from researchers in various fields of science. In recent years numerous network models have been proposed. Because of the uncertainty and the lack of regularity in real-world networks, these models are usually random graphs. Random graphs were first defined by Paul Erd\H{o}s and Alfr\'{e}d R\'{e}nyi in their 1959 paper ``On Random Graphs'', see \cite{UVV:ER}, and independently by Gilbert in \cite{UVV:G}. The suggested models are closely related: there are $n$ isolated vertices, and every possible edge occurs independently with probability $p:\, 0 < p < 1$. It is assumed that there are no self-loops. Later the models were generalized. A natural generalization of the Erd\H{o}s--R\'{e}nyi random graph is obtained when the equal edge probabilities are replaced by probabilities depending on the vertex weights. Vertices with higher weights are more likely to have more neighbors than vertices with small weights. Vertices with extremely high weights could act as the hubs observed in many real-world networks. The following generalized random graph model was first introduced by Britton et al., see \cite{UVV:Britt}. Let $\{1, 2, \dots , n\}$ be the set of vertices, and $W_i > 0$ be the weight of vertex $i$, $1\leq i\leq n$. The probability of an edge between any two vertices $i$ and $j$ is equal to
$$
p_{ij} = \frac {W_i W_j} {L_n + W_i W_j} ,
$$
where $L_n = \sum^n_{i=1} W_i$ denotes the total weight of all vertices, and the weights $W_i, i = 1, 2, \dots , n$ can be taken to be deterministic or random. If we take all $W_i$ equal to the same constant, $W_i \equiv n \lambda/(n - \lambda)$ for some $0 < \lambda < n$, it is easy to see that $p_{ij} = \lambda/n$ holds for all $1 \leq i < j \leq n$. That is, the Erd\H{o}s--R\'{e}nyi random graph with $p = \lambda/n$ is a special case of the generalized random graph. There are many versions of the generalized random graph, such as the Poissonian random graph (introduced by Norros and Reittu in \cite{UVV:Norros} and studied by Bhamidi et al.~\cite{UVV:Bhamidi}), the rank-1 inhomogeneous random graph (see \cite{UVV:Bollob}), the random graph with given prescribed degrees (see \cite{UVV:Chung}), etc.
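As a small numerical illustration of the formula for $p_{ij}$ (a sketch that is not part of the paper; the use of numpy and the particular values of $n$ and $\lambda$ are arbitrary choices), the following Python fragment builds the matrix of edge probabilities for constant weights $W_i \equiv n\lambda/(n-\lambda)$ and confirms that every $p_{ij}$ equals $\lambda/n$, i.e.\ that the Erd\H{o}s--R\'{e}nyi graph is recovered.
\begin{verbatim}
import numpy as np

n, lam = 50, 3.0
w = np.full(n, n * lam / (n - lam))          # constant weights W_i = n*lambda/(n - lambda)
L = w.sum()                                  # L_n, the total weight
p = np.outer(w, w) / (L + np.outer(w, w))    # p_ij = W_i W_j / (L_n + W_i W_j)
print(np.allclose(p, lam / n))               # True: the Erdos-Renyi case p = lambda/n
\end{verbatim}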
Under some common conditions (see \cite{UVV:Janson}), all of the above mentioned random graph models are asymptotically equivalent, meaning that all events have asymptotically equal probabilities. The updated review on the results about these inhomogeneous random graphs see in Chapters 6 and 9 in \cite{UVV:Hofstad}. In the present paper we assume that $W_i, i = 1, 2, \dots , n,$ are independent identically distributed random variables distributed as $W$. Let $E_n$ be the total number of edges in a generalized random graph with vertex weights $W_1, W_2, \dots , W_n.$ In \cite{UVV:CFE}, under the conditions that $W$ has a finite or infinite mean, several weak laws of large numbers for $E_n$ are established, see also Ch.6, \cite{UVV:Hofstad}. For instance, in \cite{UVV:CFE} and Ch.6, \cite{UVV:Hofstad}, it is proved that $E_n/n$ tends in probability to $\E W/2$, provided $\E W$ is finite. Note that $$ E_n = \frac{1}{2}\,\sum^{n}_{i=1}D_i, $$ where $D_i, i = 1, 2, \dots , n$ is a degree of vertex $i$, i.e. the number of edges coming out from vertex $i$. It is clear, the random variables $D_i, i = 1, 2, \dots , n$ are dependent ones. The aim of the present paper is to refine the law of large numbers type results for $E_n$ and to get central limit type theorems under different moment conditions for $W$. In Theorem~1 we assume that $\E W^2 < \infty$. It implies normal limit distribution for $\{E_n\}$ after proper normalization. In Theorem~2 we assume that the distribution of $W$ belongs to the domain of attraction of a stable law $F$ with characteristic exponent $\alpha: 1 < \alpha < 2$. Then we prove that the limit distribution for normalized $E_n$ is $F$. \begin{theorem} \label{thm1} If $\E W^2<\infty$, then \begin{eqnarray*} \frac{2E_n-n\E W}{\sqrt{n\,(2\E W+\mbox{Var}(W))}}\stackrel{d}{\longrightarrow} N(0,1). \end{eqnarray*} \end{theorem} \begin{proof} Put for all integer $n\geq 1$ \begin{eqnarray}\label{hu0} b_n=\frac{1}{2}n\,\E W,~c_n=\frac{1}{2}\sqrt{n\,\mbox{Var}(W)}. \end{eqnarray} For any $t\in \mathbb{R}$, we have \begin{eqnarray} \E\exp\Big\{{it}\frac{E_n-b_n}{c_n}\Big\}&=& \E\exp\Big\{\frac{it}{c_n}\Big(\sum\limits_{1\le i<j\le n}I_{ij}-b_n\Big)\Big\}\nonumber\\ &=& \E\Big(\E\Big(\exp\Big\{\frac{it}{c_n}\Big(\sum\limits_{1\le i<j\le n}I_{ij}-b_n\Big)\Big\}\Big| W_1, \cdots, W_n\Big)\Big)\nonumber\\ &=& \E\Big(e^{-itb_n/c_n}\prod_{1\le i<j\le n}\frac{L_n+e^{it/c_n}W_iW_j}{L_n+W_iW_j}\Big)\nonumber\\ &:=& \E e^{Y_n}, \label{hu1}\nonumber \end{eqnarray} where \begin{eqnarray} Y_n&=&\sum_{1\le i<j\le n}\log\frac{L_n+e^{it/c_n}W_iW_j}{L_n+W_iW_j}-\frac{itb_n}{c_n}\nonumber\\ &=&\frac{1}{2} \sum_{i=1}^n\sum_{j=1}^ n\log\frac{L_n+e^{it/c_n}W_iW_j}{L_n+W_iW_j}-\frac{itb_n}{c_n} -\sum_{i=1}^n \log\frac{L_n+e^{it/c_n}W_i^2}{L_n+W_i^2} \label{hu2} \end{eqnarray} and $\log(\cdot)$ is the principal value of the complex logarithm function. By using the Maclaurin series expansion of $\log (1+x)$ for complex $x$ with $|x|<1$, we have that \begin{eqnarray*} \frac{|\log(1+x)|}{|x|}\longrightarrow 1,~~\frac{|\log(1+x)-x|}{|x|^2}\longrightarrow \frac{1}{2}~~~~\mbox{as}~~~~|x|\rightarrow 0. \end{eqnarray*} Hence there exists some constant $c_0>0$ such that $|\log(1+x)|\le 2|x|$ and $|\log(1+x)-x|\le |x|^2$ hold for any $|x|\le c_0$. Clearly, for any fixed $t$, there exists $n_0=n_0(t) \in \mathbb{N}$ such that for all $n\ge n_0$ and any $1\leq i, j \leq n$ one has \begin{eqnarray*} \Big|\frac{(e^{it/c_n}-1)W_iW_j}{L_n+W_iW_j}\Big|\le |e^{it/c_n}-1|\le |t|/c_n \le c_0. 
\end{eqnarray*} Thus, since \begin{eqnarray}\label{hu00} \frac{ L_n}{n} \rightarrow \E W~~a.s.~~ \mbox{and}~~ \frac{\sum_{i=1}^n W^2_i}{n} \rightarrow \E W^2 ~~a.s., \end{eqnarray} we have for any $n\ge n_0$ \begin{eqnarray} \Big|\sum_{i=1}^n \log\frac{L_n+e^{it/c_n}W_i^2}{L_n+W_i^2}\Big|&\le& \sum_{i=1}^n \Big|\log\Big(1+\frac{(e^{it/c_n}-1)W_i^2}{L_n+W_i^2}\Big)\Big|\nonumber\\ &\le& 2|e^{it/c_n}-1|\sum_{i=1}^n\frac{W_i^2}{L_n+W_i^2}\nonumber\\ &\le& 2\frac{|t|}{c_n}\frac{\sum_{i=1}^n W_i^2}{L_n}\rightarrow 0~~a.s. \label{hu3} \end{eqnarray} and \begin{eqnarray} &&~~~~\frac{1}{2} \sum_{i=1}^n\sum_{j=1}^ n\log\frac{L_n+e^{it/c_n}W_iW_j}{L_n+W_iW_j}-\frac{itb_n}{c_n}\nonumber\\ &&=\frac{1}{2} \sum_{i=1}^n\sum_{j=1}^ n\log\Big(1+\frac{(e^{it/c_n}-1)W_iW_j}{L_n+W_iW_j}\Big)-\frac{itb_n}{c_n}\nonumber\\ &&=\frac{1}{2} \sum_{i=1}^n\sum_{j=1}^ n\frac{(e^{it/c_n}-1)W_iW_j}{L_n+W_iW_j}-\frac{itb_n}{c_n} +O_1 \sum_{i=1}^n\sum_{j=1}^ n\frac{(e^{it/c_n}-1)^2W_i^2W_j^2}{(L_n+W_iW_j)^2}\nonumber\\ &&=\frac{1}{2}\Big(e^{it/c_n}-1-\frac{it}{c_n}+\frac{t^2}{2c_n^2}\Big) \sum_{i=1}^n\sum_{j=1}^ n\frac{W_iW_j}{L_n+W_iW_j}\nonumber\\ &&~~~~~~+\frac{1}{2}\Big(\frac{it}{c_n}-\frac{t^2}{2c_n^2}\Big) \sum_{i=1}^n\sum_{j=1}^ n\frac{W_iW_j}{L_n+W_iW_j}-\frac{itb_n}{c_n}\nonumber\\ &&~~~~~~ +O_1 \sum_{i=1}^n\sum_{j=1}^ n\frac{(e^{it/c_n}-1)^2W_i^2W_j^2}{(L_n+W_iW_j)^2}\nonumber\\ &&:=I_1+I_2+I_3, \label{hu3-1} \end{eqnarray} where $|O_1|\le 1/2$. By (\ref{hu00}) and the inequality $|e^{ix}-1-ix+x^2/2|\le |x|^3/6$ for any $x\in \mathbb{R}$, we have \begin{eqnarray} |I_1| \le \frac{|t|^3}{12c_n^3} \sum_{i=1}^n\sum_{j=1}^ n\frac{W_iW_j}{L_n} =\frac{|t|^3L_n}{12c_n^3}{\longrightarrow} 0~~a.s. \label{hu4} \end{eqnarray} Similarly, by (\ref{hu00}) and the inequality $|e^x-1|\le |x|$, we get \begin{eqnarray} |I_3|\le \frac{t^2}{2c_n^2}\sum_{i=1}^n\sum_{j=1}^ n\frac{W_i^2W_j^2}{L_n^2}=\frac{t^2}{c_n^2}\Big(\frac{1}{n}\sum_{i=1}^n W_i^2\Big)^2\Big(\frac{n}{L_n}\Big)^2 {\longrightarrow} 0~~a.s. \end{eqnarray} Recalling the definition (\ref{hu0}) for $b_n$ and $c_n$, we have \begin{eqnarray*} I_2&=&\frac{1}{2}\Big(\frac{it}{c_n}-\frac{t^2}{2c_n^2}\Big)\Big( \sum_{i=1}^n\sum_{j=1}^ n\frac{W_iW_j}{L_n}- \sum_{i=1}^n\sum_{j=1}^ n\frac{W_i^2W_j^2}{L_n(L_n+W_iW_j)}\Big)-\frac{itb_n}{c_n}\\ &=& it\frac{L_n-n\E W}{\sqrt{n\mbox{Var}(W)}}-\frac{t^2L_n}{n\mbox{Var}(W)}- \frac{1}{2}\Big(\frac{it}{c_n}-\frac{t^2}{2c_n^2}\Big) \sum_{i=1}^n\sum_{j=1}^ n\frac{W_i^2W_j^2}{L_n(L_n+W_iW_j)}. \end{eqnarray*} Moreover, by (\ref{hu00}) we get \begin{eqnarray*} \sum_{i=1}^n\sum_{j=1}^ n\frac{W_i^2W_j^2}{L_n(L_n+W_iW_j)}&&\le \sum_{i=1}^n\sum_{j=1}^ n\frac{W_i^2W_j^2}{L_n^2} \\ &&=\frac{\Big(\sum_{i=1}^nW_i^2\Big)^2}{L_n^2}\rightarrow \Big(\frac{\E W^2}{\E W}\Big)^2~~a.s. \end{eqnarray*} The central limit theorem yields \begin{eqnarray} I_2\stackrel{d}{\longrightarrow} it{\bf N}- t^2 \E W/\mbox{Var}(W), \label{hu5} \end{eqnarray} where ${\bf N}$ is a standard normal random variable. Now, it follows from (\ref{hu2})--(\ref{hu5}) that \begin{eqnarray*} Y_n \stackrel{d}{\longrightarrow} it{\bf N}- t^2 \E W/\mbox{Var}(W). \end{eqnarray*} Hence, by noting that $|e^{Y_n}|\le 1$ and applying the Lebesgue dominated convergence theorem, we get that, for any $t\in \mathbb{R}$, \begin{eqnarray*} \E\exp\Big\{{it}\frac{E_n-b_n}{c_n}\Big\}&=&\E e^{Y_n}\rightarrow \E\exp\{it{\bf N}- t^2 \E W/\mbox{Var}(W)\}\\ &=&\exp\{-(1/2)t^2(1+2\E W/\mbox{Var}(W))\}. \end{eqnarray*} Thus, Theorem \ref{thm1} is proved. 
\end{proof} In the following theorem we get convergence of the sequence $\{E_n\}$ under weaker moment conditions on $W_i$'s. \begin{theorem} \label{thm2} Let $W, W_1,W_2,\cdots$ be a sequence of i.i.d. nonnegative random variables and \begin{eqnarray} \frac{W_1+\cdots+W_n-n\E W}{a_n}\stackrel{d}{\longrightarrow} F, \label{hu200} \end{eqnarray} where $F$ is a stable distribution with characteristic exponent $\alpha : 1<\alpha<2$, then \begin{eqnarray*} \frac{2E_n-n\E W}{a_n}\stackrel{d}{\longrightarrow} F. \end{eqnarray*} \end{theorem} Before we start to prove the theorem, let us state some properties of the distribution of $W$. If (\ref{hu200}) holds true, then $a_n$ (see e.g. \cite{UVV:FUS}, ch.XVII, \S 5) is a regularly varying function with exponent $1/\alpha$ satisfying \begin{eqnarray} n\E W^2I(W\le a_n)\sim a_n^2, \label{hu204} \end{eqnarray} and there exists some constant $c>0$ and $h(x)$, a slowly varying function at $\infty$, such that \begin{eqnarray} P(W>x)\sim cx^{-\alpha}h(x). \label{hu205} \end{eqnarray} We shall use the following lemma. \begin{lemma} \label{lemma1} If (\ref{hu205}) holds with $\alpha: 1<\alpha<2$, then we have \begin{eqnarray*} &&~~\E W^2I(W\le x) \sim \frac{c\alpha}{2-\alpha} x^{2-\alpha}h(x),\\ &&~~\E WI(W\ge x) \sim c\,\frac{2-\alpha}{\alpha-1} x^{1-\alpha}h(x). \end{eqnarray*} \end{lemma} The proof of the lemma see e.g. \cite{UVV:FUS}, ch.XVII, \S 5. Now we are ready to prove Theorem 2. \begin{proof} Let $b_n=(1/2)\,n\,\E W$ and $c_n=(1/2)\,a_n$ with $a_n$ from (\ref{hu204}). As in the proof of Theorem \ref{thm1}, for any $t\in \mathbb{R}$, we also write \begin{eqnarray*} \E\exp\Big\{{it}\frac{E_n-b_n}{c_n}\Big\} = \E e^{Y_n} \end{eqnarray*} with new definition for $c_n$ and \begin{eqnarray*} Y_n=\frac{1}{2} \sum_{i=1}^n\sum_{j=1}^ n\log\frac{L_n+e^{it/c_n}W_iW_j}{L_n+W_iW_j}-\frac{itb_n}{c_n} -\sum_{i=1}^n \log\frac{L_n+e^{it/c_n}W_i^2}{L_n+W_i^2}. \end{eqnarray*} For the last sum for any $n\ge n_0$, where $n_0=n_0(t)$ is defined in the proof of Theorem \ref{thm1}, we have (cp. (\ref{hu3})) \begin{eqnarray*} \Big|\sum_{i=1}^n \log\frac{L_n+e^{it/c_n}W_i^2}{L_n+W_i^2}\Big| \le 2\frac{|t|}{c_n}\frac{\sum_{i=1}^n W_i^2}{L_n}. \end{eqnarray*} Similarly to (\ref{hu3-1}), we get \begin{eqnarray*} &&~~~~\frac{1}{2} \sum_{i=1}^n\sum_{j=1}^ n\log\frac{L_n+e^{it/c_n}W_iW_j}{L_n+W_iW_j}-\frac{itb_n}{c_n}\nonumber\\ &&=\frac{1}{2} \sum_{i=1}^n\sum_{j=1}^ n\frac{(e^{it/c_n}-1)W_iW_j}{L_n+W_iW_j}-\frac{itb_n}{c_n} +O_1 \sum_{i=1}^n\sum_{j=1}^ n\frac{(e^{it/c_n}-1)^2W_i^2W_j^2}{(L_n+W_iW_j)^2}\nonumber\\ &&=\frac{1}{2}\Big(e^{it/c_n}-1-\frac{it}{c_n}\Big) \sum_{i=1}^n\sum_{j=1}^ n\frac{W_iW_j}{L_n+W_iW_j}\nonumber\\ &&~~~~~~+\frac{1}{2}\frac{it(L_n-2b_n)}{c_n}- \frac{1}{2}\frac{it}{c_n} \sum_{i=1}^n\sum_{j=1}^ n\frac{W_i^2W_j^2}{L_n (L_n+W_iW_j)}\nonumber\\ &&~~~~~~ +O_1 \sum_{i=1}^n\sum_{j=1}^ n\frac{(e^{it/c_n}-1)^2W_i^2W_j^2}{(L_n+W_iW_j)^2} \end{eqnarray*} with $|O_1|\le 1/2$. Due to Theorem's condition we have $(L_n-2b_n)/(2c_n)\stackrel{d}{\rightarrow} F$. 
Since \begin{eqnarray*} |e^{ix}-1|\le |x|,~~ |e^{ix}-1-ix|\le |x|^2/2 ~~~~\mbox{for all}~~~~ x\in \mathbb{R}, \end{eqnarray*} in order to prove Theorem \ref{thm2}, we only need to show that \begin{eqnarray} &&\frac{1}{a_n}\frac{\sum_{i=1}^n W_i^2}{L_n}\stackrel{p}{\longrightarrow} 0, \label{hu20-1}\\ &&\frac{1}{a_n^2}\sum_{i=1}^n\sum_{j=1}^ n\frac{W_iW_j}{L_n+W_iW_j}\stackrel{p}{\longrightarrow} 0, \label{hu201}\\ && \frac{1}{a_n}\sum_{i=1}^n\sum_{j=1}^ n\frac{W_i^2W_j^2}{L_n (L_n+W_iW_j)}\stackrel{p}{\longrightarrow} 0, \label{hu202}\\ &&\frac{1}{a_n^2}\sum_{i=1}^n\sum_{j=1}^ n\frac{W_i^2W_j^2}{(L_n+W_iW_j)^2} \stackrel{p}{\longrightarrow} 0. \label{hu203} \end{eqnarray} For any $\gamma: \alpha > \gamma>0$, we have $\E (W^2)^{(\alpha-\gamma)/2}=\E W^{\alpha-\gamma}<\infty$. Then by Marcinkiewicz--Zygmund's strong law of large numbers (see e.g. Theorem 4.23 in \cite{UVV:KAL}) we have \begin{eqnarray*} n^{-2/(\alpha-\gamma)}\sum_{i=1}^n W_i^2\rightarrow 0~~a.s. \end{eqnarray*} Since $a_n$ is a regularly varying function with exponent $1/\alpha$, then we have $1/a_n=o(n^{-1/\alpha+\gamma})$. Now choose $\gamma>0$ such that $$2/(\alpha-\gamma)-1-1/\alpha+\gamma<0~~~~\mbox{ and}~~~~ -2/\alpha+1+2\gamma<0.$$ Then we have \begin{eqnarray*} \frac{1}{a_n}\frac{\sum_{i=1}^n W_i^2}{L_n}=\frac{n^{2/(\alpha-\gamma)-1}}{a_n}\frac{\sum_{i=1}^n W_i^2/n^{2/(\alpha-\gamma)}}{L_n/n}=o(n^{2/(\alpha-\gamma)-1-1/\alpha+\gamma})\longrightarrow 0~~a.s. \end{eqnarray*} and \begin{eqnarray*} \frac{1}{a_n^2}\sum_{i=1}^n\sum_{j=1}^ n\frac{W_iW_j}{L_n+W_iW_j}\le \frac{1}{a_n^2}\sum_{i=1}^n\sum_{j=1}^ n\frac{W_iW_j}{L_n}=\frac{n}{a_n^2}\frac{L_n}{n} =o(n^{-2/\alpha+1+2\gamma}) {\longrightarrow} 0~~a.s. \end{eqnarray*} Thus we get (\ref{hu20-1}) and (\ref{hu201}). To prove (\ref{hu202}), we write \begin{eqnarray*} && ~~~~\frac{1}{a_n}\sum_{i=1}^n\sum_{j=1}^ n\frac{W_i^2W_j^2}{L_n (L_n+W_iW_j)}\\ &&= \frac{1}{a_n}\sum_{i=1}^n\sum_{j=1}^ n\frac{W_i^2W_j^2I(W_iW_j\le n)}{L_n (L_n+W_iW_j)} +\frac{1}{a_n}\sum_{i=1}^n\sum_{j=1}^ n\frac{W_i^2W_j^2I(W_iW_j>n)}{L_n (L_n+W_iW_j)}\\ &&\le \frac{1}{a_n}\sum_{i=1}^n\sum_{j=1}^ n\frac{W_i^2W_j^2I(W_iW_j\le n)}{L_n^2} +\frac{1}{a_n}\sum_{i=1}^n\sum_{j=1}^ n\frac{W_iW_jI(W_iW_j>n)}{L_n }\\ &&\le \frac{n^2}{a_nL_n^2}\sum_{i=1}^n\sum_{j=1}^ n\frac{W_i^2W_j^2I(W_iW_j\le n)}{n^2} +\frac{n}{a_nL_n}\sum_{i=1}^n\sum_{j=1}^ n\frac{W_iW_jI(W_iW_j>n)}{n}. \end{eqnarray*} Further, by (\ref{hu00}) and by using the fact that $\E|X_n| \rightarrow 0$ implies $X_n\stackrel{p}{\rightarrow} 0$, in order to prove (\ref{hu202}), it is sufficient to show that \begin{eqnarray} &&\frac{1}{a_n}\E W_1^2W_2^2I(W_1W_2\le n){\longrightarrow} 0,\label{hu208}\\ &&\frac{n}{a_n}\E W_1W_2I(W_1W_2> n){\longrightarrow} 0. \label{hu209} \end{eqnarray} For any $\alpha\in (1,2)$, we can choose $\delta>0$ satisfying $2-\alpha-1/\alpha+2\delta<0$. By Lemma \ref{lemma1}, there exists some constant $c_1=c_1(\alpha,\delta)>0$ such that \begin{eqnarray*} \E W^2I(W\le x) \le c_1 x^{2-\alpha+\delta},~~ ~\E WI(W\ge x)\le c_1 x^{1-\alpha+\delta} \end{eqnarray*} hold for all $x>1$. Hence \begin{eqnarray*} &&~~\frac{1}{a_n}\E W_1^2W_2^2I(W_1W_2\le n)\\ &&=\frac{1}{a_n}\E\Big(W_2^2I(W_2\le n)\E (W_1^2I(W_1\le n/W_2)|W_2)\Big)\\ &&~~~~~~ +\frac{1}{a_n}\E\Big(W_2^2I(W_2> n)\E (W_1^2I(W_1\le n/W_2)|W_2)\Big)\\ &&\le \frac{c_1}{a_n}\E\Big(W_2^2( n/W_2)^{2-\alpha+\delta}\Big) +\frac{1}{a_n}\E\Big(W_2^2I(W_2> n)(n/W_2)^2\Big)\\ &&=\frac{c_1n^{2-\alpha+\delta}}{a_n}\E W^{\alpha-\delta} +\frac{n^2}{a_n}P(W> n). 
\end{eqnarray*} Since by (\ref{hu205}) we have $ P(W>n)\sim cn^{-\alpha} h(n)=o(n^{-\alpha+\delta}) $ and $1/a_n=o(n^{-1/\alpha+\delta})$, we get \begin{eqnarray*} \frac{1}{a_n}\E W_1^2W_2^2I(W_1W_2\le n)=o(n^{2-\alpha-1/\alpha+2\delta})\rightarrow 0~~~\mbox{as}~~~x\rightarrow \infty. \end{eqnarray*} Thus, we get (\ref{hu208}). Similarly, we have \begin{eqnarray*} &&~~\frac{n}{a_n}\E W_1W_2I(W_1W_2> n)\\ &&=\frac{n}{a_n}\E\Big(W_2I(W_2\le n)\E(W_1I(W_1> n/W_2)|W_2)\Big)\\ &&~~~~~~ +\frac{n}{a_n}\E\Big(W_2I(W_2> n)\E (W_1I(W_1> n/W_2)|W_2)\Big)\\ &&\le \frac{c_1n}{a_n}\E\Big(W_2(n/W_2)^{1-\alpha+\delta}\Big) +\frac{n}{a_n}\E\Big(W_2I(W_2> n)\E W_1\Big)\\ &&=\frac{c_1n^{2-\alpha+\delta}}{a_n}\E W^{\alpha-\delta} +\frac{n}{a_n}\E W \E(WI(W> n))\\ &&\le \frac{c_1n^{2-\alpha+\delta}}{a_n}\E W^{\alpha-\delta} +\frac{c_1n^{2-\alpha+\delta}}{a_n}\E W=o(n^{2-\alpha-1/\alpha+2\delta})\rightarrow 0. \end{eqnarray*} Hence (\ref{hu209}), and then (\ref{hu202}), are proved. And (\ref{hu203}) follows from (\ref{hu202}). The proof of Theorem \ref{thm2} is complete. \end{proof} \end{document}
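A simple Monte Carlo experiment makes the normalization of Theorem 1 visible empirically. The sketch below is an illustration only, not part of the paper; the Exponential$(1)$ weight distribution (for which $\E W=\mathrm{Var}(W)=1$), the graph size and the number of replications are arbitrary choices. For weights with $\E W^2<\infty$, the statistic $(2E_n-n\E W)/\sqrt{n\,(2\E W+\mathrm{Var}(W))}$ should have approximately zero mean and unit standard deviation for large $n$.
\begin{verbatim}
import numpy as np

def normalized_edge_count(n, rng):
    # Weights W_i ~ Exponential(1), so E W = 1 and Var(W) = 1.
    w = rng.exponential(1.0, size=n)
    L = w.sum()
    i, j = np.triu_indices(n, k=1)             # all pairs i < j
    p = w[i] * w[j] / (L + w[i] * w[j])        # edge probabilities p_ij
    e_n = (rng.random(p.size) < p).sum()       # total number of edges E_n
    return (2.0 * e_n - n * 1.0) / np.sqrt(n * (2.0 * 1.0 + 1.0))

rng = np.random.default_rng(1)
samples = np.array([normalized_edge_count(500, rng) for _ in range(200)])
print(samples.mean(), samples.std())           # should be roughly 0 and 1
\end{verbatim}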
\begin{document} \date{ \vskip 20pt}

\begin{abstract} Let $G$ be a locally compact topological group, $G_0$ the connected component of its identity element, and $\mathrm{comp}(G)$ the union of all compact subgroups. A topological group will be called inductively monothetic if any subgroup generated (as a topological group) by finitely many elements is generated (as a topological group) by a single element. The space ${\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(G)$ of all closed subgroups of $G$ carries a compact Hausdorff topology called the Chabauty topology. Let ${\mathcal{F}_1}(G)$, respectively, ${\mathcal{R}_1}(G)$, denote the subspace of all discrete subgroups isomorphic to $\mathbb{Z}$, respectively, all subgroups isomorphic to $\mathbb{R}$. It is shown that a necessary and sufficient condition for $G\in\overline{{\mathcal{F}_1}(G)}$ to hold is that $G$ is abelian, and either that $G\cong \mathbb{R}\times \mathrm{comp}(G)$ and $G/G_0$ is inductively monothetic, or else that $G$ is discrete and isomorphic to a subgroup of $\mathbb{Q}$. It is further shown that a necessary and sufficient condition for $G\in\overline{{\mathcal{R}_1}(G)}$ to hold is that $G\cong\mathbb{R}\times C$ for a compact connected abelian group $C$.

\textit{MSC 2010:} 22B05, 54E45. \end{abstract}

\maketitle \vskip-15pt

\centerline{Authors' addresses:}
\hbox{\vtop{\hsize=.45\hsize Hatem Hamrouni, Faculty of Sciences at Sfax, Department of Mathematics, Sfax University, B.P. 1171. 3000 Sfax, Tunisia, {\tt [email protected]}} \vtop{\hsize=.45\hsize Karl H. Hofmann, Fachbereich Mathematik, Technische Universit\"at Darmstadt, Schlossgartenstrasse 7, Darmstadt 64289, Germany, {\tt [email protected]}}}

\centerline{Running title:} \centerline{Locally Compact Groups Approximable by $\mathbb{Z}$ and $\mathbb{R}$}

\section{Preface} \label{s:preface}

The simplest group arising directly from the activity of counting is the group $\mathbb{Z}$ of integers. On the other hand, one of the more sophisticated concepts of group theory is that of a locally compact topological group; it evolved widely and deeply since David Hilbert in 1900 posed the question whether a locally euclidean topological group might be parametrized differentiably so that the group operations become differentiable. Bringing $\mathbb{Z}$ and locally compact groups together in a topologically systematic fashion is made possible by a compact Hausdorff space ${\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(G)$ attached to a locally compact group $G$ in a very natural fashion, namely, as the set of closed subgroups endowed with a suitably defined topology. Since $G$ itself is a prominent element of ${\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(G)$, we pose and answer completely the question under which circumstances $G$ can be approximated in ${\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(G)$ by those subgroups of $G$ which are isomorphic to $\mathbb{Z}$. A topological parameter attached to a topological space $X$ is the smallest cardinal of a basis for the collection of all open subsets; this parameter is called the weight $w(X)$ of $X$. If the weight of $X$ is not bigger than the first infinite cardinal, one says that $X$ satisfies the Second Axiom of Countability. Our findings about the approximability of $G$ by subgroups isomorphic to $\mathbb{Z}$ will show a fact that one would not expect at first glance: Such an approximability does not impose any bound whatsoever on the weight $w(G)$ of $G$.
If one ascends from the group $\mathbb{Z}$ of counting numbers to the group $\mathbb{R}$ of real numbers which permits us to measure lengths and distances, then the completely analogous question suggests itself, asking indeed which locally compact groups $G$ can be approximated in ${\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(G)$ by subgroups isomorphic to $\mathbb{R}$. Again we answer that question completely and find that the answer to the second question yields a simpler structure than the answer to the first question. The answers are described in the Abstract which precedes our text. It is relatively immediate that our discussion will usher us into the domain of abelian locally compact groups. But the final proofs do lead us more deeply into the structure of these groups than one might anticipate and are, therefore, also longer than expected. With these remarks, we now turn to the details.

\section{Preliminaries} \label{s:prelim}

\subsection{Basic concepts and definitions} \label{ss:basics}

Let $G$ be a locally compact group and $\mathrm{comp}(G)$ the union of its compact subgroups. By Weil's Lemma (\cite[Proposition 7.43]{hofmorr}), an element $g\in G$ is either contained in $\mathrm{comp}(G)$ or else the group $\langle g\rangle$ is isomorphic as a topological group to $\mathbb{Z}$. Subgroups of this kind we shall call \textit{integral}. A subgroup $E$ of $G$ is called \textit{real} if it is isomorphic to $\mathbb{R}$ as a topological group. We denote by ${\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(G)$ the space of closed subgroups of $G$ equipped with the \textit{Chabauty topology}; this is a compact space. In this space, each closed subgroup~$H$ of~$G$ has a neighborhood base consisting of sets
\begin{equation} \label{eq:Chabauty_base}
\mathcal{U}(H; K, W){\buildrel\mathrm{def}\over=}\left\{L{\in}{\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(G) \mid L{\cap}K{\subseteq}WH \hbox{ and }H{\cap}K{\subseteq}WL\right\},
\end{equation}
where~$K$ ranges through the set $\mathcal K$ of all compact subsets of~$G$ and~$W$ through the set $\mathcal U(e)$ of all neighborhoods of the identity. In particular $G\in{\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(G)$ and the singleton subgroup $E=\{1\}$ have bases for their respective neighborhoods of the form
\begin{eqnarray}
\mathcal{U}(G; K, W) &=& \left\{L{\in}{\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(G)\mid K{\subseteq}WL\right\},\label{eq:G_base}\\
\mathcal{U}(E; K, W) &=&\left\{L{\in}{\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(G)\mid L\cap K\subseteq W\right\},\label{eq:E_base}
\end{eqnarray}
where $K\in{\mathcal K}$ and $W\in{\mathcal U}(e)$.

\begin{rem} \label{countability} Returning for a moment to Equation (\ref{eq:Chabauty_base}) we assume that $G$ is $\sigma$-compact and satisfies the First Axiom of Countability. Then $G$ contains a sequence $(K_m)_{m\in\mathbb{N}}$ of compact subsets whose interiors form an ascending sequence of open sets covering $G$, and there is a sequence $(W_n)_{n\in\mathbb{N}}$ of open identity neighborhoods forming a basis of the filter of identity neighborhoods. Then each element $H\in{\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(G)$ has a countable basis $\big(\mathcal{U}(H; K_m, W_n)\big)_{m,n\in \mathbb{N}}$ for its neighborhood filter according to Equation (\ref{eq:Chabauty_base}). Thus ${\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(G)$ satisfies the First Axiom of Countability.
\end{rem} We denote by ${\mathcal{F}_1}(G)$ the subspace of ${\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(G)$ containing all $\langle g\rangle$ with $g\in G\setminus\mathrm{comp}(G)$, that is, all subgroups isomorphic to the discrete group $\mathbb{Z}$ of all integers, the {\it free group of rank} $1$. \begin{example}[The additive group ${\mathbb R}$] \label{e:R} The mapping $\phi_{{\mathbb R}}\colon [0,\infty]\to{\mathcal{S\hskip-.5pt U\hskip-.9pt B}}({\mathbb R})$ defined by $$\phi_\mathbb{R}(r)=\begin{cases} \frac 1 r{\cdot}\mathbb{Z} &\mbox{if } 0<r<\infty,\cr \{0{\cdot} &\mbox{if } r=0, \cr \mathbb{R} &\mbox{if } r=\infty \end{cases}$$ is a homeomorphism (see Proposition 1.7 of \cite{Hattel1}). Here $\mathrm{comp}(G)=\{0{\cdot}$, and ${\mathcal{F}_1}(\mathbb{R})={\cdot}\langle r\rangle| 0<r{\cdot}$, and if $(r_n)_{n\in \mathbb{N}}$ is a sequence of real numbers converging to $0$, then $(\langle r_n\rangle)_{n\in \mathbb{N}}$ converges to ${\mathbb R}$. Therefore, $\mathbb{R}\in\overline{{\mathcal{F}_1}(\mathbb{R})}$ \end{example} \begin{defn} \label{integral-subgroups} A locally compact group $G$ is said to be \textit{integrally approximable} if $G\in\overline{{\mathcal{F}_1}(G)}$. \end{defn} Here is an equivalent way of expressing that $G$ is integrally approximable: {\it There is a net $(S_j)_{j\in J}$ of subgroups isomorphic to $\mathbb{Z}$ in $G$ such that $G=\lim_{j\in J} S_j$ in ${\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(G)$.} \begin{rem}\label{r:immediate} If $G$ is integrally approximable, $\overline{{\mathcal{F}_1}(G)}\ne\emptyset$, and so $G\ne\mathrm{comp}(G)$. In particular, $G$ is not singleton. \end{rem} Our objective is to describe precisely which locally compact groups are integrally approximable. The outcome is anticipated in the abstract. It may be instructive for our intuition to consider some examples right now. \subsection{Some examples}\label{ss:examples} We noticed in Example \ref{e:R} above that $\mathbb{R}$ is integrally approximable. A bit more generally, we record \begin{example} \label{p:vector-groups} For $n\in\mathbb{N}$ and $G=\mathbb{R}^n$, the following statements are equivalent: \begin{itemize} \item[(1)] $G$ is integrally approximable. \item[(2)] $n=1$. \end{itemize} \end{example} \begin{proof} By Example \ref{e:R}, (2) implies (1). For proving the reverse implication, we define the elements $e_k=(\delta_{km})_{m=1,\dots,n}$, $k=1,\dots, n$ for the Kronecker deltas $$\delta_{km}=\begin{cases} 1 &\mbox{if }{\cdot} k=m {\cdot} 0 &\mbox{otherwise.}{\cdot} \end{cases}$$ Now we assume (1) and $n\ge2$ and propose to derive a contradiction. We let $W\in{\mathcal U}((0,\dots,0))$ be the open ball of radius $\frac 1 2$ with respect to the euclidean metric on $\mathbb{R}^n$, and let $K$ be the closed ball of radius 2 around $(0,\dots,0)$. By (1) there is an integral subgroup $S$ in the neighborhood $\mathcal U(G;K,W)$ of $G$ in ${\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(G)$. In view of Equation (\ref{eq:G_base}) this means $K\subset W+S$. Since $n\ge 2$ we know that $e_1$ and $e_2$ are contained in $K$ and therefore in $W+S$, that is there are elements $w_1, w_2\in W$ such that $s_1=e_1-w_1$ and $s_2=e_2-w_2$ are contained $S$. Now the euclidean distance of $s_m$ from $e_m$ for $m=1,2$ is $<{\frac 1 2}$ in the euclidean plane $E_2\cong\mathbb{R}^2$ spanned by $e_1$ and $e_2$. Therefore, the two elements $s_1$ and $s_2$ are linearly independent. On the other hand, being elements of the subgroup $S\cong \mathbb{Z}$, they must be linearly dependent. 
This contradiction proves that (1) implies (2). \end{proof} \begin{example} \label{e:rational} The group $\mathbb{Q}$ (with the discrete topology) is integrally approximable. \end{example} \begin{proof} For each natural number $n$ set $H_n=\frac{1}{n!}\mathbb{Z}$. Since $(H_n)$ is an increasing sequence and \begin{equation*} \bigcup_{n\in \mathbb{N}} H_n = \mathbb{Q}, \end{equation*} we also have $\lim_{n\in \mathbb{N}} H_n = \mathbb{Q}$, as we may conclude directly using (\ref{eq:G_base}) or by invoking Proposition 2.10 of \cite{Hamr-Sad-JOLT}. This proves the claim. \end{proof} \begin{example} \label{e:R-times-four} The group $G=\mathbb{R}\times\mathbb{Z}(p)^n$ for any prime $p$ and any natural number $n\ge2$ is not approximable by integral subgroups. The smallest example in this class is $\mathbb{R}\times\mathbb{Z}(2)^2$. \end{example} \begin{proof} We note that $\mathbb{Z}(p)^n$ is a vector space $V$ of dimension $n$ over the field $\mathbb{F}=\mathrm{GF}(p)$. By way of contradiction suppose that $G$ is integrally approximable. Let $W=[-1,1]\times\{0\}$ and let $K=\{0\}\times B$ for a basis $B$ of $V$. Since $G$ is integrally approximable, there is a $Z\in {\mathcal{F}_1}(G)$ such that $Z\in \mathcal U(G; K, W)$, that is, $K\subseteq W+Z$. There is an $r\in\mathbb{R}$ and a $v\in V$ such that $Z=\mathbb{Z}{\cdot}(r,v)$. Thus \begin{multline*} \{0\}\times B\subseteq ([-1,1]\times\{0\})+\mathbb{Z}{\cdot}(r,v)\\ =\bigcup_{n\in\mathbb{Z}} \big(nr+[-1,1]\big)\times \{n\.v\} \subseteq \mathbb{R}\times \mathbb{F}\.v, \end{multline*} and thus $B\subseteq \mathbb{F}\.v$. This implies $\dim_\mathbb{F} V\le 1$, in contradiction to the assumption $n\ge2$. \end{proof} For any prime $p$ we recall the basic groups $\mathbb{Z}(p^n)$, $n=1,\dots,\infty$, and $\mathbb{Z}_p\subset\mathbb{Q}_p$, where $\mathbb{Z}(p^\infty)$ is the divisible Pr\"ufer group and $\mathbb{Z}_p$ its character group, the group of $p$-adic integers (see, e.g., \cite{hofmorr}, Example 1.38(i), p.~27), and where $\mathbb{Q}_p$ denotes the group of $p$-adic rationals (see loc.~cit.\ Exercise E1.16, p.~27). Then Example \ref{e:R-times-four} raises at once the following question: \begin{ques} \label{q:question} Are the locally compact groups $G=\mathbb{R}\times C$ for $C=\mathbb{Z}(p^n)$, $C=\mathbb{Z}(p^\infty)$, $C=\mathbb{Z}_p$, or $C=\mathbb{Q}_p$ integrally approximable? \end{ques} In the light of the negative Example \ref{e:R-times-four}, this may not appear so simple a matter to answer. Our results will show that they all are integrally approximable. While the group $\mathbb{Z}$ is integrally approximable trivially from the definition, we have, in the context of the group $\mathbb{Z}$, the following lemma, whose proof we can handle as an exercise directly from the definitions and which serves as a further example of the particular role played by $\mathbb{Z}$ in the context of integrally approximable groups. \begin{lem} \label{l:G=Z} Let $A$ be a locally compact abelian group and assume that $A\times \mathbb{Z}$ is integrally approximable. Then $|A|=1$. \end{lem} \begin{proof} By way of contradiction assume that there is an $a\ne 0$ in $A$. Then there is a zero-neighborhood $W=-W$ in $A$ such that $a\notin W+W$. Set $G=A\times\mathbb{Z}$ and $K=\{(a,1),(0,1)\}\subseteq G$.
Since $G$ is integrally approximable there is a $Z\cong\mathbb{Z}$ in ${\mathcal{F}_1}(G)$ such that $$Z\in{\mathcal U}(G; K,W\times\{0{\cdot}).$$ Since $Z$ is an integral subgroup, there are elements $b\in A$ and $0<n\in\mathbb{Z}$ such that $Z=\mathbb{Z}{\cdot}(b,n)$. So $(a,1)$ and $(0,1)$ are contained in $(W\times\{0{\cdot})+\mathbb{Z}{\cdot}(b,n)$. Therefore there are elements $w_a, w_0\in W$ and integers $m_a, m_0\in\mathbb{Z}$ such that \begin{enumerate} \item[$(\mathrm{i})$] $w_0+m_0\.b=0$, \item[$(\mathrm{ii})$] $w_a+m_a\.b=a$, and \item[$(\mathrm{iii})$] $m_0n=1$. \item[$(\mathrm{iv})$] $m_an=1$. \end{enumerate} Equations (iii) and (iv), holding in $\mathbb{Z}$, imply $m_0=m_a=n=1$. Thus equation (i) implies $b\in -W=W$. Then from (ii) it follows that $a\in W+W$. This is a contradiction, which proves our claim for the example. \end{proof} \section{The class of approximable groups} \label{s:class} As a first step towards the main results it will be helpful to observe the available closure properties of the class of integrally approximable groups. \subsection{Preservation properties} \label{ss:preserv} \begin{prop}\label{p:class(appro-integral)} The class of integrally approximable locally compact groups is closed under the following operations: \begin{enumerate} \item[(OS)] Passing to open nonsingleton subgroups, \item[(QG)] passing to quotients modulo compact subgroups, \item[(QO)] passing to torsion-free quotients modulo open subgroups, \item[(DU)] forming directed unions of closed subgroups, \item[(PL)] forming strict projective limits of quotients $G/N$ modulo compact subgroup $N$. \end{enumerate} \end{prop} \begin{proof} (OS) Let $G$ be a locally compact group $G$ that is approximable by integral subgroups and let $U$ be an open nonsingleton subgroup. Let $K$ be a compact subspace of $U$ and $W$ an identity neighborhood contained in $U$; we may take $K\ne\{1{\cdot}$ and $W$ small enough so that $K\not\subseteq W$. We must find a subgroup $Z\cong\mathbb{Z}$ inside $U$ such that $Z\in \mathcal U(U;K,W)$, that is, $K\subseteq WZ$. However, $G$ is integrally approximable, and so there is a subgroup $E\cong\mathbb{Z}$ of $G$ such that $E\in \mathcal U(G;K,W)$, that is, $K\subseteq WE$. Then $K=K\cap U\subseteq WE\cap U$. By the modular law, $W(E\cap U)=WE\cap U$ and so $K\subseteq W(E\cap U)$. The condition $K\not\subseteq W$ rules out the possibility that $E\cap U=\{1{\cdot}$. Since $E\cong\mathbb{Z}$ we know that $E\cap U\cong\mathbb{Z}$ and so we may take $Z=E\cap U$ and thus obtain $K\subseteq WZ$ which is what we have to prove. (QG) Let $N$ be a compact subgroup of a locally compact group $G$ approximable by integral subgroups and let $\pi\colon G\to H$, $H=G/N$, be the quotient morphism. Then the continuity of the map $A\mapsto \pi(A):{\cdot}{G}\to {\cdot}{H}$ (see Corollary 2.4 of \cite{Hamr-Kad-JOLT}) implies that $H$ is approximable by integral subgroups. (QO) Let $U$ be an open subgroup of a locally compact group $G$ approximable by integral subgroups so that $H\buildrel\mathrm{def}\over= G/U$ is torsion-free, and let $\pi\colon G\to H$ be the quotient morphism. Since $U$ is open, $H$ is discrete, and the singleton set containing the identity $\widetilde e =U$ in $H=G/U$ is an identity neighborhood. 
So for a given compact, hence finite, subset $\widetilde K$ of $H$ we have to find a $\widetilde Z\in {\mathcal{F}_1}(H)$ with $\widetilde Z \in \mathcal U(H;\widetilde K,\{\widetilde e\})$, that is, $\widetilde K\subseteq \widetilde Z$ according to Equation (\ref{eq:G_base}). We may and will assume that there is at least one $\widetilde k\in \widetilde K$ such that $\widetilde k\ne\widetilde e$. Now by the local compactness of $G$ we find a compact set $K$ of $G$ such that $\pi(K)= \widetilde K$. Since $U$ is an identity neighborhood in $G$ and since $G$ is integrally approximable, there is a subgroup $Z\in{\mathcal{F}_1}(G)$ contained in $\mathcal{U}(G; K, U)$, that is, $K\subseteq UZ$. Applying $\pi$, we get $\widetilde K\subseteq \pi(Z)$. In particular, $\widetilde e\ne\widetilde k\in \pi(Z)$. Since $H$ is torsion-free and $Z\cong\mathbb{Z}$ we conclude that $\pi(Z)\cong\mathbb{Z}$, and so we can set $\widetilde Z=\pi(Z)$, getting $\widetilde K\subseteq \widetilde Z$, which we had to show. (DU) Let $(G_i)_{i\in I}$ be a directed family of closed subgroups of a locally compact group $G$ such that $G=\overline{\bigcup_{i\in I} G_i}$. Then from Proposition 2.10 of \cite{Hamr-Sad-JOLT} we know \begin{equation} \label{eq:no-one} G=\lim_i G_i\mbox{ in ${\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(G)$}. \end{equation} Now assume \begin{equation} \label{eq:no-two} (\forall i\in I)\quad G_i \mbox{ is approximable by integral subgroups.} \end{equation} Let $\mathcal U$ be an open neighborhood of $G$ in ${\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(G)$. Then by (\ref{eq:no-one}) there is some $j\in I$ with $G_j\in {\mathcal U}$, and so ${\mathcal U}$ is also an open neighborhood of $G_j$ in ${\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(G)$. Then by (\ref{eq:no-two}) there is a closed subgroup $Z\cong\mathbb{Z}$ of $G_j$ (hence of $G$) with $Z\in{\mathcal U}$. This proves that $G$ is approximable by integral subgroups. (PL) Let the locally compact group $G$ be a strict projective limit $G=\lim_{N\in{\mathcal N}}G/N$ of integrally approximable quotient groups modulo compact normal subgroups $N$. Then $G$ has arbitrarily small open identity neighborhoods $W$ for which there is an $N\in{\mathcal N}$ such that $W=NW$. Let $K\in\mathcal K$ be a compact subspace of $G$. If $W$ is given, we note that $NK$ is still compact, and so we assume that $NK=K$ as well. We aim to show that there is a subgroup $Z\cong\mathbb{Z}$ of $G$ such that $K\subseteq WZ$; this will show that $Z\in{\mathcal U}(G;K,W)$ as in Equation (\ref{eq:G_base}), and this will complete the proof. Now we assume that for all $M\in\mathcal N$ the group $G/M$ is approximable by integral subgroups. Then, in particular, $G/N$ is approximable by integral subgroups. Therefore we find a subgroup $Z_N\subseteq G/N$ such that $Z_N\cong\mathbb{Z}$ and $Z_N\in {\mathcal U}(G/N; K/N, W/N)$ according to (\ref{eq:G_base}). This means that \begin{equation} \label{eq:in N} K/N\subseteq (W/N)\.Z_N. \end{equation} Let $z\in G$ be such that $Z_N=\gen{zN}$. Set $Z=\gen z$; then $Z\cong\mathbb{Z}$ and $Z_N=ZN/N$. Then (\ref{eq:in N}) is equivalent to $K/N\subseteq (W/N)(ZN/N)=WZ/N$, which in turn is equivalent to $K\subseteq WZ$, and this is what we had to show. \end{proof} \begin{rem} In the proof of (QO) it is noteworthy that we did not invoke an argument claiming the continuity of the function $A\mapsto AU/U:{\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(G)\to {\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(H)$. Indeed, let $\pi\colon\mathbb{R}\times \mathbb{Z}\to \mathbb{Z}$ be the projection.
The sequence $(\langle(n, 1)\rangle)_{n\in\mathbb{N}}$ converges to the trivial subgroup ${\cdot}(0,0){\cdot}$ in ${\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(G)$, but its image is the constant sequence with value $\langle 1\rangle=\mathbb{Z}$ and therefore converges to $\mathbb{Z}$. This shows that the induced map ${\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(\pi)\colon{\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(G)\to{\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(H)$ need not be continuous in general. \end{rem} \section{The case of discrete groups} \label{s:discr} In order to demonstrate the workings of some of the operations discussed in Proposition \ref{p:class(appro-integral)} we show \begin{lem} \label{l:discreteness} Assume that an integrally approximable locally compact group $G$ has a compact identity component $G_0$. Then $G$ is discrete. \end{lem} \begin{proof} By way of contradiction suppose that $G$ is not discrete. Since $G_0$ is compact, there is a compact open subgroup $U$ (see \cite{Montgomery--Zippin}, Lemma 2.3.1 on p.~54). Since $G$ is not discrete, $U\ne\{1{\cdot}$. Since $G$ is integrally approximable, so is $U$ by Proposition \ref{p:class(appro-integral)} (OS). Then $U\ne\mathrm{comp}(U)$ by Remark \ref{r:immediate}, but $U$ being compact we have $U=\mathrm{comp}(U)$ and this is a contradiction which proves the lemma. \end{proof} \subsection{Monothetic and inductively monothetic groups} \label{ss:mon} Before we proceed we need to recall some facts around monothetic groups. A topological group $G$ is {\it monothetic} if there is an element $g\in G$ such that $G=\gen g$. \begin{defn}[Inductively monothetic group]\label{d:ind-mon} A topological group $G$ is called {\em inductively monothetic} if (and only if) every finite subset $F\subseteq G$ there is an element $g\in G$ such that $\gen F =\gen g$. \end{defn} The circle group $\mathbb{T}=\mathbb{R}/\mathbb{Z}$ contains a unique element $t=2^{-1}+\mathbb{Z}\in\mathbb{T}$ such that $2\.t=0$. The group $\mathbb{T}^2$ is monothetic but not inductively monothetic, since the subgroup $\langle{\cdot}(t,0), (0,t){\cdot}\rangle$ is finitely generated but not monothetic. The discrete additive group $\mathbb{Q}$ is inductively monothetic but is not monothetic. In \cite{HHR}, Theorem 4.12 characterizes inductively monothetic locally compact groups. Before we cite this result we recall that Braconnier (see \cite{Braconnier}) called a locally compact group $G$ a {\it local product} $\prod_{j\in J}^{\mathrm{loc}}(G_j,C_j)$ for a family $(G_j)_{j\in J}$ of locally compact groups $G_j$ if each of them has a compact open subgroup $C_j$ such that an element $g=(g_j)_{j\in J}\in\prod_{j\in J}G_j$ is in $G$ iff there is a finite subset $F_g\subseteq J$ such that $g_j\in C_j$ whenever $j\notin F_g$. \begin{prop}[Classification of inductively monothetic groups]\label{p:ind-mon} A locally compact group $G$ is inductively monothetic if one of the following conditions is satisfied: \begin{itemize} \item[(1)] $G$ is a one-dimensional compact connected group, \item[(2)] $G$ is discrete and is isomorphic to a subgroup of $\mathbb{Q}$. \item[(3)] $G$ is isomorphic to a local product $\prod_{p{\cdot}\mathrm{prime}}^{\mathrm{loc}}(G_p,C_p)$ where each of its characteristic $p$-primary components $G_p$ is either $\cong \mathbb{Z}(p^n)$, $n=0,1,\dots,\infty$, or $\mathbb{Z}_p$, or $\mathbb{Q}_p$. 
\end{itemize} \end{prop} It follows, in particular, that a \emph{totally disconnected compact monothetic group is inductively monothetic}, so that the concept of an inductively monothetic locally compact group is more general than that of a locally compact monothetic group \emph{in the totally disconnected domain}. We remark that a locally compact group is called {\it periodic} if it is totally disconnected and has no subgroups isomorphic to $\mathbb{Z}$. Thus the class (3) of Proposition \ref{p:ind-mon} covers precisely the periodic inductively monothetic groups. Our present stage of information allows us to clarify on an elementary level the discrete side of our project: \begin{thm} \label{th:discrete-case} Let $G$ be a locally compact group such that $G_0$ is a compact group. Then the following assertions are equivalent: \begin{enumerate} \item[$(1)$] $G$ is integrally approximable. \item[$(2)$] $G$ is discrete and isomorphic to a nonsingleton subgroup of $\mathbb{Q}$. \end{enumerate} \end{thm} \begin{proof} In Example \ref{e:rational} we saw that $\mathbb{Q}$ is integrally approximable. Then from Proposition \ref{p:class(appro-integral)} (OS) it follows that every nonsingleton subgroup of $\mathbb{Q}$ is integrally approximable. Thus (2) \hbox{$\Rightarrow$} (1). We have to show $(1)\hbox{$\Rightarrow$} (2)$:\quad Thus we assume (1). In particular, $G$ is nonsingleton. Since the subgroup $G_0$ is compact, Lemma \ref{l:discreteness} applies and shows that $G$ is discrete. Then by Equation (\ref{eq:G_base}), in ${\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(G)$ the element $G$ has a basis of neighborhoods $\mathcal U(G; F, \{0\})=\{H\in{\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(G): F\subseteq H\}$ as $F$ ranges through the finite subsets of $G$. Since $G$ is integrally approximable, for each such $F$ there exists a $Z\in {\mathcal{F}_1}(G)$ such that $Z\in\mathcal U(G;F,\{0\})$, that is, $F\subseteq Z$. Then $\langle F\rangle$ is cyclic as a subgroup of a group $\cong\mathbb{Z}$. Therefore $G$ is discrete, torsion-free, and inductively monothetic. Then Proposition \ref{p:ind-mon} shows that $G$ is isomorphic to a subgroup of $\mathbb{Q}$. \end{proof} \section{Necessary conditions} \label{s:necessary} For the remainder of the effort to classify integrally approximable groups we may therefore concentrate on nondiscrete groups, and indeed on locally compact groups $G$ whose identity component $G_0$ is noncompact. \subsection{Background on abelian locally compact groups} \label{ss:commutativity} We first point out why we have to focus on commutative locally compact groups. Indeed, in \cite[Proposition 3.4]{Harp1} the following fact was established: \begin{prop}\label{space-abelian-closed} Let $G$ be a locally compact group. Then the space $\cab{G}$ of closed abelian subgroups of $G$ is closed in ${\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(G)$. \end{prop} We have ${\mathcal{F}_1}(G)\subseteq\cab{G}$ and thus $\overline{{\mathcal{F}_1}(G)}\subseteq \cab{G}$. Accordingly, in view of Definition \ref{integral-subgroups} we therefore have \begin{cor} \label{focus-abelian} An integrally approximable locally compact group is abelian. \end{cor} Thus we now focus on locally compact abelian groups and their duality theory. As a consequence we shall henceforth write the groups we discuss in additive notation. An example is the following result of Cornulier's (see \cite{Cornulier-LCA-Chabauty}, Theorem 1.1): \begin{prop}[Pontryagin-Chabauty Duality] \label{Pon-Cha-duality} Let $G$ be an abelian locally compact group.
Then the annihilator map \begin{equation*} H\mapsto H^\bot \colon {\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(G)\to {\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(\widehat G) \end{equation*} is a homeomorphism. \end{prop} This will allow us to apply the so-called {\it annihilator mechanism} as discussed e.g.\ in \cite{hofmorr}, 7.12 ff., pp.~314 ff. What will be relevant in our present context is a main structure theorem for abelian groups (see \cite{hofmorr}, Theorem 7.57, pp.~345 ff.). \begin{prop} \label{p:vector-splitting} Every locally compact abelian group $G$ is algebraic\-al\-ly and topologically of the form $G=E\oplus H$ for a subgroup $E\cong \mathbb{R}^n$ and a locally compact abelian subgroup $H$ which has the following properties: \begin{itemize} \item[\rm(a)] $H$ contains a compact subgroup which is open in $H$. \item[\rm(b)] $H$ contains $\mathrm{comp}(G)$. \item[\rm(c)] $H_0=(\mathrm{comp}(G))_0=\mathrm{comp}(G_0)$ is the unique maximal compact connected subgroup of $G$. \item[\rm(d)] The subgroup $G_1\buildrel\mathrm{def}\over= G_0+\mathrm{comp}(G)$ is an open, hence closed, fully characteristic subgroup which is isomorphic to $\mathbb{R}^n\times \mathrm{comp}(G)$. \item[\rm(e)] $G/G_1$ is a discrete torsion-free group and $G_1$ is the smallest open subgroup with this property. \end{itemize} \end{prop} \begin{gather} \begin{aligned} \xymatrix{ &&&G \ar@{-}[dr] \ar@{-}[dl]\\ &&G_1 \ar@{-}[dr] \ar@{-}[dl]&&H\ar@{-}[dl]\\ &G_0 \ar@{-}[dr] \ar@{-}[dl] &&C\ar@{-}[dl]\\ E\ar@{-}[dr]&&C_0 \ar@{-}[dl]\\ &0 }\end{aligned} \label{Fig1} \end{gather} \begin{center} {$C=\mathrm{comp}(G),\quad G_1=G_0+C=E\oplus C.$} \end{center} \subsection{Necessity} \label{ss:nec} These results allow us to narrow our scope onto integrally approximable groups $G$ further and to derive necessary conditions for $G$ to be integrally approximable. \begin{prop} \label{p:vector-splitting-int-appr} Every nondiscrete integrally approximable locally compact abelian group $G$ is algebraically and topologically of the form $G=E\oplus\mathrm{comp}(G)$ for a subgroup $E\cong \mathbb{R}$ and the locally compact abelian subgroup $\mathrm{comp}(G)$. \end{prop} \begin{proof} (i) Using the notation of Proposition \ref{p:vector-splitting} (d) and (e) above we first claim $G=G_1$. By (d) the subgroup $G_1=G_0+\mathrm{comp}(G)$ is open, and by (e) the factor group $G/G_1$ is torsion-free. Suppose our claim is false. Then we find an element $g\in G\setminus G_1$. Then $Z\buildrel\mathrm{def}\over= \langle g\rangle$ is cyclic and $(Z+G_1)/G_1 \cong Z/(Z\cap G_1)$ is torsion-free by (e), and so $Z\cap G_1=\{0\}$. Then it follows from (d), since $G_1$ is open, that $Z$ is discrete. Hence $U=G_1 +Z$ is a nonsingleton open subgroup $\cong G_1\times \mathbb{Z}$ by (e). It is integrally approximable by Proposition \ref{p:class(appro-integral)} (OS), and so by Lemma \ref{l:G=Z} we have $G_1=\{0\}$, which forces $G$ to be discrete, contrary to our hypothesis. (ii) Now by Proposition \ref{p:vector-splitting} again, we may identify $G$ with $\mathbb{R}^n\times H$ where $H_0$ is compact and $H=\mathrm{comp}(H)$. Now $H$ contains a compact open subgroup $U$ (cf.\ the proof of Lemma \ref{l:discreteness}) and $\mathbb{R}^n\times U$ is an open subgroup of $G$ which is nonsingleton since $G$ is nondiscrete. Hence $\mathbb{R}^n\times U$ is integrally approximable by Proposition \ref{p:class(appro-integral)} (OS). The projection $\mathbb{R}^n\times U\to\mathbb{R}^n$ is a quotient morphism with compact kernel $U$, so it is covered by part (QG) of Proposition \ref{p:class(appro-integral)}, and thus we know that $\mathbb{R}^n$ is integrally approximable. Then Example \ref{p:vector-groups} shows that $n=1$.
\end{proof} \begin{rem} \label{r:iso} If the relation $G=G_0+\mathrm{comp}(G)$ holds for a locally compact abelian group $G$, then $G/G_0\cong \mathrm{comp}(G)/\mathrm{comp}(G)_0$. Moreover, $\mathrm{comp}(G)$ is the union of compact subgroups being open in $\mathrm{comp}(G)$. \end{rem} From here on we concentrate on groups of the form $\mathbb{R}\times H$ where $H$ is an abelian locally compact group with $H=\mathrm{comp}(H)$. \begin{defn} \label{d:periodic} We call a locally compact group $H$ {\it periodic} if it is totally disconnected and satisfies $H=\mathrm{comp}(H)$. \end{defn} Periodic abelian groups have known structure due to Braconnier (see \cite{Braconnier}; cf.\ also \cite{HHR}). Indeed, a periodic group $G$ is (isomorphic to) a local product $$\prod_{p{\cdot}\mathrm{prime}}^{\mathrm{loc}}(G_p,C_p)$$ for the $p$-primary components (or $p$-Sylow subgroups) $G_p$. \begin{lem} \label{l:monothetic} Let $G$ be an integrally approximable locally compact group such that $\mathrm{comp}(G)$ is periodic and compact. Then $\mathrm{comp}(G)$ is monothetic. \end{lem} \begin{proof} Write $H=\mathrm{comp}(G)$. Then we may identify $H$ with the product $\prod_p H_p$ of its compact $p$-primary components (see also \cite{hofmorr}, Proposition 8.8(ii)). A compact group $H$ is monothetic iff there is a morphism $\mathbb{Z}\to H$ with dense image iff (dually) there is an injective morphism $\widehat H\to \mathbb{T}$. As $H$ is totally disconnected, $\widehat H$ is a torsion group (see \cite{hofmorr}, Corollary 8.5, p.~377), and so the group $\widehat H$ is embeddable into $\mathbb{T}$ iff it is embeddable into the torsion group $\mathbb{Q}/\mathbb{Z}=\bigoplus_p\mathbb{Z}(p^\infty)$ of $\mathbb{T}$ (see \cite{hofmorr}, Corollary A1.43(ii) on p.~694) iff each $\widehat H_p$ is embeddable into $\mathbb{Z}(p^\infty)$. Hence $H$ is monothetic iff each $H_p$ has $p$-rank $\le 1$. By way of contradiction suppose that this is not the case and that there is a prime $p$ such that the $p$-rank of $H_p$ is $\ge2$. So the compact group $H/p\.H$ has exponent $p$ and is isomorphic to a power $\mathbb{Z}(p)^I$ with $\card I\ge2$. Therefore we have a projection of $H/p\.H\cong\mathbb{Z}(p)^I$ onto $\mathbb{Z}(p)^2$. This provides a surjective morphism $H\to H/p\.H\cong \mathbb{Z}(p)^I\to \mathbb{Z}(p)^2$. Thus Proposition \ref{p:vector-splitting-int-appr} tells us that $G=\mathbb{R}\times H$ has a quotient group $\mathbb{R}\times\mathbb{Z}(p)^2$ modulo a compact kernel. By Proposition \ref{p:class(appro-integral)} (QG) this quotient group is integrally approximable which we know to be impossible by Example \ref{e:R-times-four}. This contradiction proves the lemma. \end{proof} \begin{lem} \label{l:ind-mon} Let $G$ be a totally disconnected locally compact abelian group satisfying $G=\mathrm{comp}(G)$. Assume that every compact open subgroup of $G$ is monothetic. Then $G$ is inductively monothetic. \end{lem} \begin{proof} Let $F$ be a finite subset of $G$; we must show that $\gen F$ is monothetic. Let $\mathcal S$ be the $\subseteq$-directed set of all compact open (and therefore monothetic) subgroups of $G$. Since $G=\bigcup{\mathcal S}$ (see Remark \ref{r:iso}) for each $x\in F$ there is a $C_x\in{\mathcal S}$ such that $x\in C_x$. Since ${\mathcal S}$ is directed and $F$ is finite there is an $K\in{\mathcal S}$ such that $\bigcup_{x\in F}C_x\subseteq K$. Then $F\subseteq K$ and so $\gen F\subseteq K$. Since $G$ is totally disconnected, the same is true for $K$. 
By hypothesis, $K$ is monothetic, and so the comment following Proposition \ref{p:ind-mon} shows that $K$ is inductively monothetic, whence $\gen F$ is monothetic. \end{proof} \begin{cor} \label{c:ind-mon-2} Let $G$ be an integrally approximable group such that $\mathrm{comp}(G)$ is periodic. Then $G/G_0\cong \mathrm{comp}(G)$ is inductively monothetic. \end{cor} \begin{proof} Let $U$ be a compact open subgroup of $\mathrm{comp}(G)$. Then $G_0+U\cong \mathbb{R}\times U$ is an open subgroup of $G=G_0+\mathrm{comp}(G)=\mathbb{R}\times\mathrm{comp}(G)$. Hence it is integrally approximable by Proposition \ref{p:class(appro-integral)} (OS). So $U$ is monothetic by Lemma \ref{l:monothetic}. Now Lemma \ref{l:ind-mon} shows that $\mathrm{comp}(G)$ is inductively monothetic. \end{proof} Now we have a necessary condition on a locally compact group to be integrally approximable: \begin{thm} \label{th:necessary} Let $G$ be a nondiscrete integrally approximable locally compact group. Then \begin{itemize} \item[\rm(a)] $G\cong \mathbb{R}\times \mathrm{comp}(G)$ and \item[\rm(b)] $G/G_0\cong \mathrm{comp}(G)/\mathrm{comp}(G)_0$ is inductively monothetic. \end{itemize} \end{thm} \begin{gather} \begin{aligned} \xymatrix{ &&G \ar@{-}[dr] \ar@{-}[dl]&&\\ &G_0 \ar@{-}[dr] \ar@{-}[dl] &&C\ar@{-}[dl]\\ \mathbb{R}\ar@{-}[dr]&&C_0 \ar@{-}[dl]\\ &0 }\end{aligned} \label{Fig2} \end{gather} \begin{center} {$C=\mathrm{comp}(G),\quad G=G_0+C=\mathbb{R}\oplus C.$} \end{center} \begin{rem} \label{r:detail} The Classification of locally compact inductively monothetic groups (Proposition \ref{p:ind-mon}) yields that the group $G/G_0$ is of type (3). Indeed, it is a local product of inductively monothetic $p$-Sylow subgroups of type $\mathbb{Z}(p^n)$, $\mathbb{Z}(p^\infty)$, $\mathbb{Z}_p$, or $\mathbb{Q}_p$. \end{rem} \begin{proof} As is described in Proposition \ref{p:vector-splitting-int-appr}, $G$ has a compact characteristic subgroup $N\buildrel\mathrm{def}\over=\mathrm{comp}(G_0)=\mathrm{comp}(G)_0$, the unique largest compact connected subgroup. So by Proposition \ref{p:class(appro-integral)} (QG), the quotient $G/N$ is also integrally approximable, and $\mathrm{comp}(G/N)$ is totally disconnected and therefore is periodic. As Corollary \ref{c:ind-mon-2} applies to $G/N$, its factor $\mathrm{comp}(G/N)$ is inductively monothetic. Since $G/N\cong \mathbb{R}\times \mathrm{comp}(G/N)=\mathbb{R}\times\mathrm{comp}(G)/N$ and $G_0\cong \mathbb{R}\times N$, we see that $G/G_0\cong \mathrm{comp}(G/N)$. Thus $G/G_0$ is inductively monothetic. \end{proof} It is noteworthy that there is no limitation on the size of the compact connected abelian group $\mathrm{comp}(G)_0=\mathrm{comp}(G_0)$. The locally compact abelian group $\mathrm{comp}(G)$ is an extension of the compact group $\mathrm{comp}(G)_0$ by an inductively monothetic group. \section{Sufficient conditions} \label{s:suff} In this section we shall prove the following complement to Theorem \ref{th:necessary} and thereby complete the proof of the main theorem formulated in the abstract. \begin{thm} \label{th:sufficient} Let $G=\mathbb{R}\times H$ for a locally compact abelian group $H$ satisfying the following conditions: \begin{itemize} \item[\rm(a)] $H=\mathrm{comp}(H)$ and \item[\rm(b)] $H/H_0\cong G/G_0$ is inductively monothetic. \end{itemize} Then $G$ is integrally approximable. \end{thm} \subsection{Various reductions} \label{ss:reduct} We shall achieve the proof by reducing the problem step by step.
Firstly, every inductively monothetic group is the directed union of monothetic subgroups by Proposition \ref{p:ind-mon}, and so if $H$ satisfies (b) it is of the form $H=\bigcup_{i\in I}H_i$ with a directed family of subgroups $H_i\supseteq H_0$ such that $H_i/H_0$ is compact monothetic. Then, by Proposition \ref{p:class(appro-integral)} (DU), $\mathbb{R}\times H$ is integrally approximable if all $\mathbb{R}\times H_i$ are integrally approximable for $i\in I$. Thus from here on, in place of condition (b), we shall assume that $H$ satisfies \begin{itemize} \item[(c)] $H/H_0$ is monothetic. \end{itemize} Note that under conditions (a) and (c), $H$ is compact. Every locally compact abelian group is a strict projective limit of Lie groups. This applies to $H$. Clearly condition (a) holds for all quotient groups. If $N$ is a compact normal subgroup of $H$, then $(H/N)_0=H_0N/N$ and thus $(H/N)/(H/N)_0\cong H/H_0N$, whence $(H/N)/(H/N)_0$ is a quotient group of $H/H_0$ and is therefore inductively monothetic. Thus if we can show that for all abelian Lie groups $H$ satisfying (a) and (c) the groups $\mathbb{R}\times H$ are integrally approximable, then Proposition \ref{p:class(appro-integral)} (PL) will show that $\mathbb{R}\times H$ is integrally approximable. Therefore we need to prove Theorem \ref{th:sufficient} for a compact Lie group $H$ satisfying (a) and (c). \begin{lem} \label{l:lie-case} Let $H$ be a compact abelian Lie group satisfying $\mathrm{(a)}$ and $\mathrm{(c)}$. Then there are nonnegative integers $m$ and $n$ such that $$ H \cong \mathbb{T}^m\times \mathbb{Z}(n).$$ In particular, $H$ is monothetic. \end{lem} \begin{proof} $H$ is a compact abelian Lie group such that $H/H_0$ is cyclic. Then $H$ has the form asserted (see e.g.\ \cite{hofmorr}, Proposition 2.42 on p.~48 or Corollary 7.58 (iii) on p.~356). We have seen in the proof of Lemma \ref{l:monothetic} that a compact group is monothetic if and only if its character group can be injected into the discrete circle group $\mathbb{T}_d=\mathbb{R}_d\oplus\mathbb{Q}/\mathbb{Z}$. Since $\widehat H \cong \mathbb{Z}^m\oplus \mathbb{Z}(n)$, this condition is satisfied. \end{proof} Thus for a proof of Theorem \ref{th:sufficient} it will suffice to prove \begin{lem} \label{l:R-plus-mon} If $H$ is monothetic, then $\mathbb{R}\times H$ is integrally approximable. \end{lem} We shall use the Bohr compactification of the group $\mathbb{Z}$ of integers. This group is also called the {\it universal monothetic group}. Here we shall identify $\widehat{{\mathrm b}\Z}$ with $\mathbb{T}_d$ (for $\mathbb{T}=\mathbb{R}/\mathbb{Z}$) and consider the elements $\chi$ of ${\mathrm b}\Z$ as characters of $\mathbb{T}_d$. There is one distinguished character, namely the identity morphism $\mathrm{id}_\mathbb{T}\colon\mathbb{T}_d\to\mathbb{T}$, and we map $\mathbb{Z}$ naturally and bijectively onto a dense subgroup of ${\mathrm b}\Z$ via the map \begin{equation}\label{defn:rho} \rho\colon\mathbb{Z}\to{\mathrm b}\Z,\quad m\mapsto m{\cdot}\mathrm{id}. \end{equation} The following lemma shows that for a proof of Lemma \ref{l:R-plus-mon} it suffices to prove it for $H={\mathrm b}\Z$. \begin{lem} \label{l:reduction} The group $G=\mathbb{R}\times H$ is integrally approximable for any compact monothetic group $H$ if and only if it is so for $H={\mathrm b}\Z$. \end{lem} \begin{proof} Obviously the latter condition is a necessary one for the former, so we have to show that it is sufficient. Let $H$ be a monothetic group.
Then there is a morphism $f\colon \mathbb{Z}\to H$ with dense image. We have the canonical dense morphism $\rho\colon \mathbb{Z}\to{\mathrm b}\Z$. By the universal property of the Bohr compactification there is a unique morphism $\pi\colon {\mathrm b}\Z\to H$ such that $f=\pi\circ\rho$. Since $f$ has a dense image, this holds for $\pi$, whence by the compactness of ${\mathrm b}\Z$, the morphism is a surjective morphism between compact groups and therefore is a quotient morphism with a compact kernel. Therefore there is a quotient morphism with compact kernel $\mathbb{R}\times {\mathrm b}\Z\to \mathbb{R}\times H$. Hence by Proposition \ref{p:class(appro-integral)} (QG), $\mathbb{R}\times H$ is integrally approximable if $\mathbb{R}\times{\mathrm b}\Z$ is integrally approximable. \end{proof} Now the following lemma will complete the proof of Theorem \ref{th:sufficient} and thereby conclude the section with a proof of the main result of the article. \begin{lem}[First Key Lemma] \label{l:R-plus-b} The group $\mathbb{R}\times {\mathrm b}\Z$ is approximable by a sequence of integral subgroups. \end{lem} \subsection{Proving the First Key Lemma} \label{ss:keylem} The proof of the First Key Lemma requires some technical preparations in which we use the duality of locally compact abelian groups. In the process we need to consider the charachter group of $\mathbb{R}\times\mathrm {\mathrm b}\Z$. Here is a reminder of the determination of the character group of a product: \begin{lem}\label{dual-group-cartesian-product} Let $A$ and $B$ be locally compact abelian groups. Then there is an isomorphism $\phi\colon\widehat A\times\widehat B\to(A\times B)\widehat{\phantom w}$ such that \centerline{$\phi(\chi_A,\chi_B)(a,b)=\chi_A(a)-\chi_B(b).$} \end{lem} We apply this with $A=\mathbb{R}$ and $B={\mathrm b}\Z$. For the simplicity of notation we shall denote the coset $r+\mathbb{Z}\in\mathbb{T}$ of $r\in\mathbb{R}$ by $\overline r$. We consider $\mathbb{R}$ also as the character group of $\mathbb{R}$ by letting $r(s)=\overline{rs}\in\mathbb{T}$. We also identify $\widehat\mathbb{T}$ with $\mathbb{Z}$ by considering $k\in \mathbb{Z}$ as the character defined by $k(\overline r)=\overline{kr}$. Recall that we consider ${\mathrm b}\Z$ as the character group of $\mathbb{T}_d$. In the spirit of Lemma \ref{dual-group-cartesian-product}, we identify $\mathbb{R}\times \mathbb{T}_d$ with the character group $\widehat G$ of $G=\mathbb{R}\times \widehat{\mathbb{T}_d}=\mathbb{R}\times{\mathrm b}\Z$ by letting $(r,\overline s)$ denote the character of $G$ defined by $(r,\overline s)(x,\chi)=\overline{rx}-\chi(\overline s)\in\mathbb{R}/\mathbb{Z}$. We recall that the identity function $\mathrm{id}_\mathbb{T}\colon\mathbb{T}_d\to\mathbb{T}$ is a particular character of $\mathbb{T}_d$ and thus is an element of ${\mathrm b}\Z$; indeed $\mathrm{id}_\mathbb{T}$ is the distinguished generator of ${\mathrm b}\Z$. Now we define \begin{equation} \label{eq:Zn} Z_n=\mathbb{Z}{\cdot}(\frac 1 n,\mathrm{id}_\mathbb{T})\in {\mathcal{F}_1}(G), \quad G=\mathbb{R}\times{\mathrm b}\Z. \end{equation} Accordingly, $(r,\overline s)\in \mathbb{R}\times\mathbb{T}_d$ belongs to the annihilator $Z_n^\perp=(\frac 1 n,\mathrm{id}_\mathbb{T})^\perp$ iff $\overline{\frac r n}-\overline s=0$, that is, iff $\overline{\frac r n}=\overline s$. This means $\frac r n +\mathbb{Z}=s+\mathbb{Z}$, and so \begin{equation} \label{eq:annih} (r,\overline s)\in\mathbb{R}\times \mathbb{T}_d\mbox{\quad is in $Z_n^\perp$ iff\quad } \frac r n-s\in \mathbb{Z}. 
\end{equation} In an effort to show that $\lim_n Z_n^\perp=\{0{\cdot}$ in ${\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(\widehat G)$ we consider the following \begin{lem}[Convergence to the trivial subgroup]\label{convergence-trivial-subgp} Let $\Gamma$ be a locally compact group with identity $e$, and let $(H_n)_{n\in \mathbb{N}}$ be a sequence of closed subgroups. Then the following statements are equivalent: \begin{enumerate} \item[$(a)$] For each subnet $(H_{n_j})_{j\in J}$ of $(H_n)_{n\in \mathbb{N}}$ and each convergent net $(h_{n_j})_{j\in J}$ with $h_{n_j}\in H_{n_j}$ and limit $h$ we have $h=e$. \item[$(b)$] $\lim_{n\in \mathbb{N}} H_n=\{e{\cdot}$ in ${\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(G)$. \end{enumerate} \end{lem} \begin{proof} $(a)\Rightarrow (b)$:\quad We argue by contradiction. Suppose that there is a compact subset $K$ of $G$ and an open neighborhood $U$ of the identity such that for each $\alpha\in \mathbb{N}$ there is an $n_\alpha\in \mathbb{N}$ with $\alpha \leq n_\alpha$ such that $H_{n_\alpha}\not\in \mathcal{U}(\{e{\cdot}; K, U)$. That is, $H_{n_\alpha}\cap K \not\subseteq U$ by equation (\ref{eq:E_base}), and so there is an $h_{n_\alpha}\in H_{n_\alpha}$ such that $h_{n_\alpha}\in K\setminus U$. As $K\setminus U$ is a compact subset of $G$, the net $(h_{n_\alpha})$ admits a subnet converging to a point $a$ of $K\setminus U$. Since $e\not\in K\setminus U$, $a\not= e$, which is a contradiction. $(b)\Rightarrow (a)$: Let $(h_{n_j})_{j\in J}$ be a net converging to $h$ and assume $h_{n_j}\in H_{n_j}$ for every $j\in J$. Suppose $h\ne e$. Now let $K$ be a compact neighborhood of $h$ not containing $e$. We may assume that $h_{n_j}\in K$ for all $j\in J$. As $\lim_{n\in \mathbb{N}} H_n=\{e{\cdot}$, there exists $N\in \mathbb{N}$ such that for each $n\ge N$ we have $H_n\in\mathcal U(\{e{\cdot};K,\Gamma\setminus K)$, that is, $H_n\cap K\subseteq\Gamma\setminus K$ by equation (\ref{eq:E_base}), and this is a contradiction. \end{proof} In order to appreciate this lemma, consider the condition \qquad $(a')$ {\em For each convergent net $(h_{i})_{i\in I}$ with $h_{i}\in H_{i}$ and limit $h$ we have $h=e$.} The following example will show that the implication $(a')\Rightarrow (b)$ fails. \begin{example} In ${\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(\mathbb{R})$, let $$H_n=\left{\cdot} \begin{array}{ll} n\mathbb{Z}, & \hbox{if $n$ is even;} {\cdot} &{\cdot} \frac{1}{n}\mathbb{Z}, & \hbox{if $n$ is odd.} \end{array} \right.$$ Let $(h_n)$ be a sequence in $\mathbb{R}$ converging to $h$ and such that, for each $n$,{\cdot}$h_n\in H_n$. For any $n\in \mathbb{N}$, there is $k_n\in \mathbb{Z}$ such that $h_{2n}= 2n k_n$. As the subsequence $(h_{2n})$ converges to $h$, $h=0$. So $(a')$ is satisfied. However, as the subsequence $(H_{2n})$ converges to $\{0{\cdot}$ and the subsequence $(H_{2n+1})$ converges to $\mathbb{R}$, the sequence $(H_n)$ is divergent and so $(b)$ fails. \end{example} \begin{lem} \label{l:null-convergence} $\lim_{n\in\mathbb{N}} Z_n^\perp =\{0{\cdot}$. \end{lem} \begin{proof} We shall apply Lemma \ref{convergence-trivial-subgp} and assume that we have a net $(n_j)_{j\in J}$ cofinal in $\mathbb{N}$ such that $(r_j,\overline s_j)_{j\in J}$ is a convergent net with \centerline{$(r_j,\overline{s_j})\in Z_{n_j}^\perp$ for all $j\in J$ and with $(r,\overline s)=\lim_{j\in J} (r_j,\overline{s_j})$.} From Equation (\ref{eq:annih}) we know that $\frac{r_j}{n_j}-s_j\in \mathbb{Z}$. 
Since $r_j\to r$, $s_j\to s$, and $n_j\to\infty$, we conclude $s\in \mathbb{Z}$ and thus $\overline s=0$. Further $s\in \mathbb{Z}$ and $r_j/n_j\to 0$ imply the existence of a $j_0$ such that $j_0\le j$ implies $s_j=s$. Then $r_j/n_j$ is an integer, and so for large enough $j$ we have $r_j=0$ which implies $r=0$. Thus $\lim_{j\in J}(r_j,\overline{s_j})=0$ and by Lemma \ref{convergence-trivial-subgp} this shows that $\lim_{n\in\mathbb{N}}Z_n^\perp =\{0{\cdot}$ which we had to show. \end{proof} Now we are ready for proof of the First Key Lemma: There is a sequence $(Z_n)_{n\in \mathbb{N}}$ in ${\mathcal{F}_1}(G)$, $Z_n=\mathbb{Z}{\cdot}(\frac 1 n,\mathrm{id}_\mathbb{T})$ for which $(Z_n^\perp)_{n\in \mathbb{N}}$ converges to $\{0{\cdot}$ in ${\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(\widehat G)$ by Lemma \ref{l:null-convergence}. Now we apply Pontryagin-Chabauty Duality in the form of Proposition \ref{Pon-Cha-duality} and conclude \begin{equation} \label{eq:sequential-appro} \lim_{n\in\mathbb{N}}Z_n =G\mbox{ in }{\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(G) \mbox{ for }G=\mathbb{R}\times{\mathrm b}\Z. \end{equation} This concludes the proof of the First Key Lemma \ref{l:R-plus-b} and thus also finishes the proof of Theorem \ref{th:sufficient}. We can summarize the main result as follows: \begin{thmA} A locally compact group $G$ is integrally approximable if and only if it is either discrete, in which case it is isomorphic to a nonsingleton subgroup of $\mathbb{Q}$, or else it is abelian and of the form $G\cong \mathbb{R}\times \mathrm{comp}(G)$ where $G/G_0\cong \mathrm{comp}(G)/\mathrm{comp}(G)_0$ is periodic inductively monothetic. \end{thmA} We recall that the groups $\mathrm{comp}(G)_0$ range through all compact connected abelian groups and that the periodic inductively monothetic groups were classified in Proposition \ref{p:ind-mon} (3). We mention in passing that Theorem A yields a characterisation of the the group $\mathbb{R}$ in the class of locally compact groups. For this purpose let us call a topological group {\it compact-free} if it does not contain a nonsingleton compact subgroup. \begin{cor} \label{c:compactfree} For a locally compact group $G$ the following conditions are equivalent: \begin{enumerate} \item $G\cong\mathbb{R}$ or else is isomorphic to a nonsingleton subgroup of the discrete group $\mathbb{Q}$. \item $G$ is compact-free and integrally approximable. \end{enumerate} \end{cor} A similar characterization using the property of being compact-free was suggested by Chu in \cite{chu}. \section{Complement: Groups approximable by real subgroups} We classified locally compact groups $G$ for which every neighborhood of $G\in {\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(G)$ contains a subgroup $H$ of $G$ isomorphic to $\mathbb{Z}$. We called such groups {\it integrally approximable}. Now we shall do the same for groups $G$ with the same property except that $\mathbb{Z}$ is replaced by $\mathbb{R}$. We need a name for these groups that are approximated by subgroups of (real) {\it numbers}. For this purpose let us denote by ${\mathcal{R}_1}(G)$ the subspace of ${\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(G)$ containing all subgroups isomorphic to the group $\mathbb{R}$ of all real numbers, the {\it real vector group of dimension} $1$. \begin{defn} \label{numeral-subgroups} A locally compact group $G$ is said to be \textit{numerally approximable} if $G\in\overline{{\mathcal{R}_1}(G)}$. \end{defn} Trivially, $\mathbb{R}$ is numerally approximable. 
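As an illustrative aside of our own (a special case of Theorem \ref{th:sufficient-numeral} below, included only for orientation), the group $G=\mathbb{R}\times\mathbb{T}$ is numerally approximable: for $n\in\mathbb{N}$ the winding subgroups $$R_n=\{(x,\overline{nx}): x\in\mathbb{R}\},\qquad \overline{nx}=nx+\mathbb{Z}\in\mathbb{T},$$ are closed subgroups isomorphic to $\mathbb{R}$, and $\lim_n R_n=G$ in ${\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(G)$. Indeed, given a compact set $K\subseteq G$ and $\epsilon>0$, any point $(a,t)\in K$ lies within $\epsilon$ of a point of $R_n$ as soon as $2n\epsilon\ge 1$: the interval $n{\cdot}[a-\epsilon,a+\epsilon]$ then meets every coset of $\mathbb{Z}$, so there is an $x$ with $|x-a|\le\epsilon$ and $\overline{nx}=t$. Hence $K\subseteq W+R_n$ for every identity neighborhood $W$ containing an $\epsilon$-ball and all sufficiently large $n$, that is, $R_n\in\mathcal U(G;K,W)$ eventually by Equation (\ref{eq:G_base}).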
The theory and classification of numerally approximable groups is in most ways simpler than that of integrally approximable groups. The basic aspects are completely analogous to the former and that allows us now to proceed more expeditiously. \begin{example} \label{p:vector-groups-again} For $n\in\mathbb{N}$ and $G=\mathbb{R}^n$, the following statements are equivalent: \begin{itemize} \item[(1)] $G$ is numerally approximable. \item[(2)] $n=1$. \end{itemize} \end{example} \begin{proof} Trivially, (2) implies (1). Conversely, if (1) is satisfied, an inspection of the proof of (1)\hbox{$\Rightarrow$}(2) for Example \ref{p:vector-groups} shows that it applies to the next to the last sentence, where $\mathbb{Z}$ needs to be replaced by $\mathbb{R}$ to applie literally. \end{proof} \begin{lem}\label{encadrement} Let $G$ be a locally compact group and $(A_i)_{i\in I}$, $(B_i)_{i\in I}$ two nets converging to $A$ and $B$ respectively. If $A_i\subseteq B_i$ holds eventually, then $A\subseteq B$. \end{lem} \begin{proof} By way of contradiction, suppose that $A$ is not a subgroup of $B$. Let $x\in A\setminus B$ and $U$ a relatively compact open neighborhood of $e$ such that $\overline U x\cap B =\emptyset$. As $B_i \to B$, there exists $i_0\in I$ such that for any $i\ge i_0$ we have $\overline{U} x\cap B_i=\emptyset$ and so $Ux \cap A_i =\emptyset$, which is a contradiction because the set if all closed subgroups meeting the open set $Ux$ is an open neighborhood of $A$. \end{proof} In particular, this lemma implies that the limit of a net of connected closed subgroups of a locally compact group $G$ is contained in the identity component $G_0$ of $G$, whence a group $G$ which is approximable by real subgroups is necessarily connected. Using also Proposition \ref{space-abelian-closed}, we conclude: \begin{lem}\label{l:realappro:connected} Every numerally approximable locally compact group is abelian and connected. \end{lem} In view of Proposition \ref{p:vector-splitting} we may rephrase this as follows: \begin{thm} \label{th:necessary-numeral} Let $G$ be a numerally approximable locally compact group. Then $\mathrm{comp}(G)$ is compact and connected and $$G\cong \mathbb{R}\times \mathrm{comp}(G).$$ \end{thm} \vskip-20pt \begin{gather} \begin{aligned} \xymatrix{ &G \ar@{-}[dr] \ar@{-}[dl]&&{\cdot} \mathbb{R}\ar@{-}[dr] && C\ar@{-}[dl] {\cdot} &0 }\end{aligned} \label{Fig3} \end{gather} \begin{center} {$C=\mathrm{comp}(G),\quad G=\mathbb{R}\oplus C.$} \end{center} The second part of the structure theorem for numerally approximable groups, saying that every group $\mathbb{R}\times C$ with any compact connected abelian group $C$ is numerally approximable is proved in a reduction procedure similar to the one we used for integrally approximable groups. \begin{thm} \label{th:sufficient-numeral} Let $G=\mathbb{R}\times C$ for a compact connected abelian group $C$. Then $G$ is numerally approximable. \end{thm} \begin{proof} Every compact connected group is a directed union of compact connected monothetic groups (see e.g. \cite{hofmann-tran-amer}, Theorem I or \cite{hofmorr}, Theorem 9.36(ix), pp.~479f.). Thus $G$ is the directed union of subgroups $\mathbb{R}\times M$ where $M$ is compact connected monothetic. The closure lemma Proposition \ref{p:class(appro-integral)} (DU) is easily seen to apply to numerally approximable groups in place of integrally approximable groups. Therefore it is no loss of generality to assume that $C$ is compact connected monothetic. 
Then it is a quotient of ${\mathrm b}\R$, the Bohr compactification of $\mathbb{R}$. (Indeed $\widehat C$ is a discrete torsion free group of rank $\le 2^{\aleph_0}$ and thus is a subgroup of $\mathbb{R}_d$ (the discrete reals), and thus $C$ is a quotient of $\widehat{\mathbb{R}_d}\cong {\mathrm b}\R$.) Since the closure lemma Proposition \ref{p:class(appro-integral)} (QG) again applies to numerally approximable groups in place of integrally approximable groups, the proof will be complete if it is shown that $\mathbb{R}\times{\mathrm b}\R$ is numerally approximable. This will be done in the Second Key Lemma that follows. \end{proof} The proof of the Theorem is therefore reduced to showing that one special group in numerally approximated: \begin{lem}[Second Key Lemma] \label{l:key-lemma-2} The group $G=\mathbb{R}\times{\mathrm b}\R$ is numerally approximable. \end{lem} Before we prove this Second Key Lemma, we review the duality aspects of the present situation. From Lemma \ref{dual-group-cartesian-product} we recall that the dual $\widehat G$ of $G$ may be identified with $\mathbb{R}\times\mathbb{R}_d$ where, as before we identify $\widehat \mathbb{R}$ and $\mathbb{R}$. Let $f\colon\mathbb{R}\to{\mathrm b}\R$ the canonical one-parameter subgroup of ${\mathrm b}\R$, namely the dual of $\mathrm{id}_\mathbb{R}\colon\mathbb{R}_d\to\mathbb{R}$. For each natural number $n\in\mathbb{N}$, the morphism $n\.f$ defined by $(n\.f)(r)=n\.f(r)$ (in the additively written) abelian group ${\mathrm b}\R$) is the dual of the morphism $n{\cdot}\mathrm{id}_\mathbb{R}\colon \mathbb{R}_d\to\mathbb{R}$ which is just multiplication by $n$. We let the subgroup $R_n\le \mathbb{R}\times{\mathrm b}\R$ be the graph of $n\.f$, that is, $$R_n= \set{(r,n\.f(r))}{r\in\mathbb{R}}.$$ Clearly, $R_n$ is isomorphic to $\mathbb{R}$, since the projection of a graph of a morphism onto its domain is always an isomorphism. We claim that $G=\lim_n R_n$ in ${\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(G)$; this claim will finish the proof of the Second Key Lemma. We shall prove the claim by showing that $\lim_n R_n^\perp={\cdot}(0,0){\cdot}$ in ${\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(\widehat G)$. This will prove the claim by Pontryagin-Chabauty-Duality in the form of Proposition \ref{Pon-Cha-duality}. We need information on the annihilator of a graph: \begin{lem}\label{GraphAdjoint} Let $A$ and $B$ be locally compact groups and $f\colon A\to B$ a morphism. Define $\Gamma=\set{(a,f(a))}{a\in A} \subseteq A\times B$ to denote the graph of $f$. We identify $(A\times B)\widehat{\phantom w}$ with $\widehat A\times \widehat B$ (via $\rho$ as in Lemma \ref{dual-group-cartesian-product}). Then the graph $\set{(\widehat f(b),b)}{b\in B}\subseteq A\times B$ of the adjoint morphism $\widehat f\colon \widehat B\to \widehat A$ is the annihilator $\Gamma^\perp$ of $\Gamma$. \end{lem} \begin{proof} An element $(\chi_A,\chi_B)\in (A\times B)^\times$ is in $\Gamma^\perp$ if and only if, for any $a\in A$, $$0= (\chi_A,\chi_B)(a,f(a)) =\chi_A(a)-(\chi_B\circ f)(a)=(\chi_A-\widehat f(\chi_B))(a),$$ that is $\chi_A=\widehat f(\chi_B)$. \end{proof} Applying this lemma to the graph $R_n$ we see that \begin{equation} \label{annihilation} R_n^\perp={\cdot}(nr,r):r\in\mathbb{R}{\cdot}= {\cdot}(r,\frac r n):r\in\mathbb{R}{\cdot} \subseteq \mathbb{R}\times\mathbb{R}_d.\end{equation} Note that $R_n^\perp=\mathrm{graph}(r\mapsto \frac r n)$. 
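As a quick sanity check of (\ref{annihilation}) in the identifications fixed above (a routine verification, recorded here only for the reader's orientation), take $(nr,r)$ with $r\in\mathbb{R}$ and pair it with a typical element $(x,n\.f(x))$ of $R_n$: since $f$ is the dual of $\mathrm{id}_\mathbb{R}\colon\mathbb{R}_d\to\mathbb{R}$, we have $f(x)(s)=\overline{xs}$ for $s\in\mathbb{R}_d$, and therefore $$(nr,r)(x,n\.f(x))=\overline{(nr)x}-(n\.f(x))(r)=\overline{nrx}-\overline{nxr}=0,$$ so that $(nr,r)\in R_n^\perp$, in accordance with Lemma \ref{GraphAdjoint}.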
\begin{lem} \label{l:null-convergence-2} We have $\lim_n R_n^\perp={\cdot}(0,0){\cdot}$ in ${\mathcal{S\hskip-.5pt U\hskip-.9pt B}}(\widehat G)$. \end{lem} \begin{proof} As in the proof of Lemma \ref{l:null-convergence} we invoke Lemma \ref{convergence-trivial-subgp} and consider a net $(n_j)_{j\in J}$ cofinal in $\mathbb{N}$ such that $(r_j,s_j)_{j\in J}$ is a convergent net in $\widehat G=\mathbb{R}\times\mathbb{R}_d$ such that according to Equation (\ref{annihilation}) we have \begin{itemize} \item[(a)] $(r_j,s_j)\in R_{n_j}^\perp={\cdot}(r,\frac r n_j): r\in\mathbb{R}{\cdot}$ for all $j\in J$, and \item[(b)] $(r,s)=\lim_{j\in J} (r_j,s_j)$ in $\widehat G=\mathbb{R}\times\mathbb{R}_d$. \end{itemize} We must show that this implies $r=s=0$; then Lemma \ref{convergence-trivial-subgp} completes the proof of the lemma. Now (a) implies $s_j=r_j/n_j$ for all $j\in J$, and (b) yields, firstly, that $r=\lim_j r_j$ in $\mathbb{R}$ and, secondly, that in view of the discreteness of $\mathbb{R}_d$ the net of the $r_j/n_j=s_j$ in $\mathbb{R}_d$ are eventually constant, say $=t$ for $j>j_0$ for some $j_0$. For these $j$ we now have $r_j=tn_j$, and so $r=\lim_j tn_j$. Since the $n_j$ increase beyond all bounds, this implies $r=t=0$, and so $r_j=0$ for $j>j_0$. Accordingly, $s_j=0$ for $j>j_0$, and thus $s=\lim_j s_j=0$ as well. This completes the proof. \end{proof} By our earlier remarks, this shows $\lim_n R_n=G$ and thus completes the proof of the Second Key Lemma \ref{l:key-lemma-2} and thereby also completes the proof of Theorem \ref{th:sufficient-numeral}. We can summarize the material on numerally approximable groups as follows: \begin{thmB} A locally compact group $G$ is numerally approximable if and only if it is of the form $G\cong\mathbb{R}\times C$ with a compact connected abelian group $C$. \end{thmB} By Pontryagin Duality, the groups $C$ range through a class equivalent to the class of all torsion free abelian (discrete) groups. A comparison of Main Theorems A and B allow us to draw the following conclusion: \begin{cor} \label{c:final} If a locally compact group is numerally approximable, then it is integrally approximable, while the reverse is not generally true. \end{cor} \begin{rem} Locally compact groups of the form $\mathbb{R}\times C$ for a compact connected group $C$ have been called {\it two-ended} by Freudenthal (\cite{freud}). \end{rem} \section{An alternate proof of Corollary \ref{c:final}} Corollary 6.10 was proved above in a roundabout fashion. We therefore present a different direct way to arrive at the same conclusion. \subsection{Iterated Limit Theorem} The Iterated Limit Theorem for nets deals with the following data: Let $I$ be a directed set and $(J_i)_{i\in I}$ a family of directed sets indexed by $I$. Assume that for each $i\in I$ we are given a converging net $(x_{ij})_{j\in J_i}$ in a topological space $X$. The limits $r_i=\lim_{j\in J_i}x_{ij}$, $i\in I$ form a net $(r_i)_{i\in I}$ which may or may not converge. If it does, we have what is sometimes called an iterated limit $$ r=\lim_{i\in I}\lim_{j\in J_i} x_{ij}.$$ There is no harm in our assuming that the sets $J_i$ are pairwise disjoint. (If necessary we can replace each $J_i$ by $\{i{\cdot}\times J_i$!). Now $D\buildrel\mathrm{def}\over=\dot\bigcup_{k\in I}J_k$ is a disjoint union of the fibers $J_i$ of the fibration $\pi\colon D\to I$, $\pi(d)=i$ iff $d\in J_i$. A function $\sigma\colon I\to D$ is a {\it section} if $\pi\circ \sigma=\mathrm{id}_I$, the identity function of $I$. 
The set of sections is $\prod_{i\in I} J_i$. The set $D$ is partially ordered lexicographically: $$d\le d'\hbox{ if either }\pi(d)< \pi(d'), \hbox{ or }\pi(d)=\pi(d')\hbox{ and }d\le d' \hbox{ in }J_{\pi(d)}.\leqno(1)$$ These data taken together result in a net $(x_d)_{d\in D}$ which converges fiberwise, and the limits along the fibers form a convergent net. In applications such as ours it is desirable to construct from the given data a subnet $(y_p)_{p\in P}$ of the net $(x_d)_{d\in D}$ in $X$ for some directed set $P$ such that $$ r=\lim_{p\in P} y_p.$$ \begin{lem} Let $P=I\times\prod_{k\in I}J_k$. For $p=(i,\sigma)\in P$ we define $d_p =\sigma(i)\in D$. Then $p\mapsto d_p:P\to D$ is a cofinal function between directed sets, that is, for each $d\in D$ there is a $p_0\in P$ such that $p_0\le p$ implies $d\le d_p$. \end{lem} \begin{proof} Let $d\in D$, that is, $d\in J_{\pi(d)}$. We have to find a $p_0=(i,\sigma)\in P$ such that $p_0\le p=(k,\tau)$ in $P$ implies $d\le \tau(k)=d_p$ in $D$. Now for $k\ne \pi(d)$ use the Axiom of Choice to select an arbitrary $j_k\in J_k$. Define $\sigma\in\prod_{i'\in I}J_{i'}$ by $$\sigma(k)=\begin{cases} d, & \mbox{if } k=\pi(d),\\ j_k, & \mbox{otherwise.} \end{cases} $$ Set $p_0=(\pi(d),\sigma)$. Now if $p_0\le p=(k,\tau)$ with $\pi(d)\le k$ and $\sigma\le \tau$, we claim that $d=\sigma(\pi(d))\le d_p=\tau(k)$. Indeed, if $\pi(d)<k$ then $d\le \tau(k)=d_p$ by the order on $D$, and if $\pi(d)=k$ then $d=\sigma(\pi(d)) \le \tau(\pi(d))=d_p$ since $\sigma\le \tau$. This completes the proof. \end{proof} The subnet is constructed so as to be convergent if the iterated limit exists and to have the same limit. This is the so-called {\it Iterated Limit Theorem} in whose formulation we use the notation introduced above. \begin{thm}\label{IteratedLimitTheorem} Assume that the net $(x_d)_{d\in D}$ converges fiberwise and that the fiberwise limits converge as well. Then it has a subnet $(y_p)_{p\in P}$, $y_p=x_{d_p}$, $d_{i,\sigma}=\sigma(i)$, such that $$\lim_{p\in P}y_p=\lim_{i\in I}\lim_{d\in J_i}x_d.$$ \end{thm} \begin{proof} For a proof see \cite{kelley}, p.~69. \end{proof} In this remarkable fact about nets and their convergence it is noteworthy that the index set $P$ is vastly larger than the already large index set $D$. A frequent special case is that the index sets $J_i$ all agree with one and the same index set $J$, in which case we have $D=I\times J$, $J_i=\{i\}\times J$, and $P=I\times J^I$. Now we present an alternate proof of Corollary \ref{c:final}. \begin{proof} Let $G$ be a numerally approximable locally compact group and let $(R_j)_{j\in J}$ be a net such that $G=\lim_{j\in J} R_j$ and $R_j\cong \mathbb{R}$. For each $j\in J$ we have $R_j=\lim_{n\in \mathbb{N}} Z_{(j,n)}$ for a sequence of subgroups $Z_{(j,n)}\cong \mathbb{Z}$. Therefore \begin{equation*} G=\lim_{j\in J}\lim_{n\in\mathbb{N}} Z_{(j,n)}. \end{equation*} Then there exists a subnet $(Z_p)_{p\in P}$ of the net $(Z_{(j,n)})_{(j,n)\in J\times \mathbb{N}}$ such that \begin{equation*} G=\lim_{p\in P} Z_{p} \end{equation*} by the Iterated Limit Theorem (Theorem \ref{IteratedLimitTheorem}). \end{proof} A review of the iterated limit theorem together with an application of it in the context of our present topic may be of independent interest. \end{document}
\begin{document} \title{Linear Systems over Join-Blank Algebras} \author{\IEEEauthorblockN{ Hayden Jananthan} \IEEEauthorblockA{\textit{Mathematics Department} \\ \textit{Vanderbilt University}\\ Nashville, Tennessee \\ [email protected]} \and \IEEEauthorblockN{Suna Kim} \IEEEauthorblockA{\textit{Mathematics Department} \\ \textit{California Institute of Technology}\\ Pasadena, California \\ [email protected]} \and \IEEEauthorblockN{Jeremy Kepner} \IEEEauthorblockA{\textit{Lincoln Laboratory Supercomputing Center} \\ \textit{Massachusetts Institute of Technology}\\ Lexington, Massachusetts \\ [email protected]} } \maketitle \begin{abstract} A central problem of linear algebra is solving linear systems. Regarding linear systems as equations over general semirings $(V,\oplus,\otimes,0,1)$ instead of rings or fields makes traditional approaches impossible. Earlier work shows that the solution space $X(\mathbf{A},\mathbf{w})$ of the linear system $\mathbf{A}\mathbf{v}=\mathbf{w}$ over the class of semirings called join-blank algebras is a union of closed intervals (in the product order) with a common terminal point. In the smaller class of max-blank algebras, the additional hypothesis that the solution spaces of the $1\times 1$ systems $A\otimes v = w$ are closed intervals implies that $X(\mathbf{A},\mathbf{w})$ is a finite union of closed intervals. We examine the general case, proving that without this additional hypothesis, we can still express $X(\mathbf{A},\mathbf{w})$ as a finite union of \emph{quasi-intervals}. \end{abstract} \begin{IEEEkeywords} linear algebra, matrices, lattices, linear systems \end{IEEEkeywords} \section{Introduction} Linear algebra is a cornerstone of modern computation. In particular, one approach to solving a problem arising in applications is reducing it to a linear-algebraic problem, such as carrying out a matrix multiplication, solving a linear system, or finding eigenvalues and eigenvectors. As data become more varied, new mathematics must be developed to handle linear algebra over more general algebraic structures. For example, the need for a variety of data types to be supported exists in the context of polystore databases \cite{kepner2016} and prompted the creation of the Dynamic Distributed Dimensional Data Model (D4M) \cite{kepner2012} which provides a linear algebraic interface to graphs stored in NoSQL \cite{byun2012,kepner2013}, SQL \cite{wu2014,gadepally2015}, and NewSQL \cite{samsi2016}. One of the most general algebraic structures over which linear algebra makes sense is a semiring. Semirings include many algebraic structures that we often encounter -- in particular, all rings and fields are semirings. Among the most studied semirings which are not rings are the \emph{max-plus algebra} $\mathbb{R}\cup \{\infty,-\infty\}$, which forms a semiring with addition $\max$ and multiplication $+$, and the \emph{max-min algebra} $\mathbb{R}\cup \{\infty,-\infty\}$, which forms a semiring with addition $\max$ and multiplication $\min$ \cite{akian2006, litvinov2009}. In fact, mathematicians and scientists have found numerous applications of the max-plus algebra; it is widely used in fields like performance evaluation of manufacturing systems, discrete event system theory, Markov decision processes, and even language theory \cite{gaubert1997}. Note that a semiring generalizes the notion of a ring by dropping the necessity of additive inverses existing.
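To make these operations concrete: in the max-plus algebra one computes, for example, $3\oplus 5=\max(3,5)=5$ and $3\otimes 5=3+5=8$, with additive identity $-\infty$ and multiplicative identity $0$. No element other than $-\infty$ has an additive inverse, since $u\oplus v=\max(u,v)=-\infty$ forces $u=v=-\infty$; this illustrates the loss of subtraction addressed next.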
One method of dealing with the loss of subtraction is to make use of non-algebraic properties, particularly strong order-theoretic properties \cite{han2012}. Thus we focus on the semirings that are induced from ordered sets (as in max-plus algebra) and utilize those properties to characterize the solution set. \section{Definitions} The most basic object of study is that of a semiring. \begin{definition}[Semiring] \cite{golan1999, gondran2007} A \emph{semiring} is a quintuple $(V,\oplus,\otimes,0,1)$ consisting of \begin{enumerate} \item an underlying set $V$, \item two binary operations $\oplus$ (\emph{addition}) and $\otimes$ (\emph{multiplication}) on $V$, and \item two elements $0$ and $1$ of $V$ \end{enumerate} such that \begin{enumerate} \item $\oplus$ is associative, commutative, and has identity element $0$, \item $\otimes$ is associative and has identity element $1$, \item $\otimes$ distributes over $\oplus$, and \item $0$ is a multiplicative annihilator. \end{enumerate} \end{definition} Matrices and their operations can be defined over general semirings, in a similar way as over fields like $\mathbb{R}$ or $\mathbb{C}$. \begin{definition}[Matrices] An $m\times n$ \emph{matrix} over a semiring $V$ is a map \begin{equation*} \mathbf{A}: \{1,\ldots,m\} \times \{1,\ldots, n\} \to V \end{equation*} If $\mathbf{A}$ and $\mathbf{B}$ are two $m\times n$ matrices, their \emph{sum} is the $m\times n$ matrix $\mathbf{A}\oplus \mathbf{B}$ defined by \begin{equation*} (\mathbf{A}\oplus \mathbf{B})(i,j) = \mathbf{A}(i,j) \oplus \mathbf{B}(i,j) \end{equation*} If $\mathbf{A}$ is an $m\times n$ matrix and $\mathbf{B}$ is an $n\times p$ matrix, their \emph{product} is the $m\times p$ matrix $\mathbf{A}\mathbf{B}$ defined by \begin{equation*} \mathbf{A}\mathbf{B}(i,j) = \bigoplus_{k=1}^n{\mathbf{A}(i,k)\otimes \mathbf{B}(k,j)} \end{equation*} \end{definition} Elements of the Cartesian product $V^n$ are identified with $n\times 1$ matrices over $V$. \begin{definition}[Linear Systems] An $m\times n$ \emph{linear system} over $V$ is an equation of the form $\mathbf{A}\mathbf{v} =\mathbf{w}$ where $\mathbf{A}$ is a fixed $m\times n$ matrix, $\mathbf{w}$ is a fixed $m\times 1$ matrix, and $\mathbf{v}$ is a variable $n\times 1$ matrix. The \emph{solution space} $X(\mathbf{A},\mathbf{w})$ of a linear system $\mathbf{A}\mathbf{v}=\mathbf{w}$ is the set \begin{equation*} X(\mathbf{A},\mathbf{w}) = \{ \mathbf{v} \mid \mathbf{A}\mathbf{v} =\mathbf{w}\} \end{equation*} \end{definition} One nice class of semirings which is diametrically opposite to the notion of a ring is that of join-blank algebras, which make explicit and extended use of an underlying order by requiring that the underlying set be a complete lattice, the addition operation be binary supremum, and the multiplication operation satisfy an ``infinite-distributivity'' law. \begin{definition}[Complete Lattice] A pair $(V,\leq)$ of a set $V$ and a binary relation $\leq$ on $V$ is a \emph{complete lattice} if \begin{enumerate} \item $\leq$ is reflexive, antisymmetric, and transitive, \item for any subset $U\subset V$ there exists a least element $\bigvee U$ greater than or equal to every element of $U$, called the \emph{join} or \emph{supremum} of $U$, and \item for any subset $U\subset V$ there exists a greatest element $\bigwedge U$ less than or equal to every element of $U$, called the \emph{meet} or \emph{infimum} of $U$.
\end{enumerate} \end{definition} In the case of a two element set $\{u,v\}$, the join of $\{u,v\}$ is denoted \begin{equation*} \bigvee\{u,v\} = u \vee v \end{equation*} and its meet is denoted \begin{equation*} \bigwedge\{u,v\} = u \wedge v \end{equation*} These binary operations $\vee$ and $\wedge$ are called \emph{join} and \emph{meet}, respectively. Semirings in which the underlying set and the operations have order-theoretic properties with respect to a fixed partial order allow for order-theoretic tools to be applied to the construction of solution sets in terms of intervals. \begin{definition}[Join-Blank Algebra] A \emph{join-blank algebra} is a semiring $(V,\vee,\otimes,-\infty,1)$ where \begin{enumerate} \item $V$ is a complete lattice with respect to some fixed order, \item $\vee$ is the join with respect to that order, \item $-\infty$ is the minimum element of $V$ with respect to that order, and \item for any subset $U\subset V$ and element $v\in V$ \begin{equation*} v \otimes \bigvee{U} = \bigvee \{ v \otimes u \mid u \in U\} \end{equation*} \end{enumerate} \end{definition} The max-plus algebra $(\mathbb{R}\cup \{-\infty,\infty\},\max,+,-\infty,0)$ and the max-min algebra $(\mathbb{R}\cup \{-\infty,\infty\},\max,\min,-\infty,\infty)$ are join-blank algebras. Power set algebras $(\mathcal{P}(S),\cup,\cap,\emptyset,S)$ and more generally Heyting algebras form join-blank algebras. \cite{kepnerjananthan} The order-theoretic properties of a join-blank algebra $V$ can be extended to the Cartesian product $V^n$. \begin{definition}[Product Order] Suppose $V$ is ordered by $\leq$. The \emph{product order} $\leq$ on $V^n$ is defined by \begin{equation*} \mathbf{v} \leq \mathbf{w} \quad \text{if and only if} \quad \mathbf{v}(i) \leq \mathbf{w}(i) \quad \text{for all $i$} \end{equation*} \end{definition} \section{Join-Blank Structure Theorem} The order-theoretic properties of a join-blank algebra $V$ extend to order-theoretic properties of $V^n$. \begin{prop} \label{order-theoretic properties of product order} \cite{kepnerjananthan} If $V$ is a complete lattice, then $V^n$ is a complete lattice. Moreover, if $U\subset V^n$ then $\bigvee{U}$ exists if and only if $\bigvee\{\mathbf{v}(i) \mid \mathbf{v} \in U \}$ exists for each $i$, in which case \begin{equation*} \left( \bigvee{U} \right)(i) = \bigvee\{\mathbf{v}(i) \mid \mathbf{v} \in U\} \end{equation*} \end{prop} The compatibility of $\vee$ and $\otimes$ with the order contributes to order-theoretic properties of $X(\mathbf{A},\mathbf{w})$: \begin{prop} \label{order-theoretic properties of solution space} \cite{kepnerjananthan} Suppose $\mathbf{A}\mathbf{v}=\mathbf{w}$ is a linear system over a join-blank algebra $V$. \begin{enumerate}[(a)] \item $X(\mathbf{A},\mathbf{w})$ is closed under taking joins of non-empty subsets. \item $X(\mathbf{A},\mathbf{w})$ is convex, so if $\mathbf{v}_1 \leq \mathbf{v}_2 \leq \mathbf{v}_3$ and $\mathbf{v}_1,\mathbf{v}_3 \in X(\mathbf{A},\mathbf{w})$, then $\mathbf{v}_2 \in X(\mathbf{A},\mathbf{w})$. \end{enumerate} \end{prop} This implies the following structure of $X(\mathbf{A},\mathbf{w})$ as a union of closed intervals with a common terminal point.
Recall that a \emph{closed interval} is defined as \begin{equation*} [\mathbf{x},\mathbf{y}] = \{ \mathbf{z} \mid \mathbf{x} \leq \mathbf{z} \leq \mathbf{y}\} \end{equation*} \begin{thm}[Join-Blank Structure Theorem] \label{join-blank structure theorem} \cite{kepnerjananthan} Suppose $\mathbf{A}\mathbf{v}=\mathbf{w}$ is a linear system over a join-blank algebra $V$. Then $X(\mathbf{A},\mathbf{w})$ is of the form \begin{equation*} X(\mathbf{A},\mathbf{w}) = \bigcup_{\mathbf{v} \in U}{[\mathbf{v},\mathbf{x}]} \end{equation*} for some $U \subset X(\mathbf{A},\mathbf{w})$ and a fixed $\mathbf{x}$. \end{thm} This structure allows for the problem of finding the solution space $X(\mathbf{A},\mathbf{w})$ to be reduced to finding the solution spaces \begin{equation*} X(\mathbf{A}(i,:),\mathbf{w}(i)) \end{equation*} to the single-equation linear systems \begin{equation*} \mathbf{A}(i,:) \mathbf{v} = \mathbf{w}(i) \end{equation*} Intersecting these solution sets gives the complete solution set for the original linear system, since the vectors in the intersection are exactly those satisfying all equations. \section{Max-Blank Structure Theorem} When $V$ is \emph{totally ordered} and hence $\vee = \max$, we call $V$ a \emph{max-blank algebra}. In nice cases, the solution space of a linear system over a max-blank algebra is a finite union of closed intervals. However, this is not always the case. Consider the following system over the max-plus algebra: \begin{equation*} \left[ \begin{matrix} \infty \end{matrix} \right] \left[ \begin{matrix} v \end{matrix}\right] = \left[ \begin{matrix} \infty \end{matrix}\right] \end{equation*} The solution space is given by $(-\infty,\infty]$, which cannot be written as a finite union of closed intervals. In the $1\times 1$ case, the solution set can be represented with finitely many closed intervals if \begin{equation*} X(A,w) = \{ v \mid A \otimes v = w\} \end{equation*} is a closed interval. \begin{thm}[Max-Blank Structure Theorem for Closed Intervals] \label{max-blank structure theorem for closed intervals} \cite{kepnerjananthan} Suppose $\mathbf{A}\mathbf{v} = \mathbf{w}$ is a linear system over a max-blank algebra such that for every $i,j$ the set \begin{equation*} \{ v \mid \mathbf{A}(i,j) \otimes v = \mathbf{w}(i)\} \end{equation*} is a closed interval. Then $X(\mathbf{A},\mathbf{w})$ is a finite union of closed intervals. \end{thm} The crucial step in the proof of Theorem~\ref{max-blank structure theorem for closed intervals} is that a Cartesian product of closed intervals in $V$ is a closed interval in $V^n$ in the product order. Proposition~\ref{order-theoretic properties of solution space} shows that the only other form that \begin{equation*} \{ v \mid \mathbf{A}(i,j) \otimes v = \mathbf{w}(i)\} \end{equation*} can take on is a half-open interval which is open on the left. The Cartesian product of arbitrary intervals in $V$ need not be an interval in $V^n$ in the product order. This motivates a slightly more general basic object than intervals. \begin{definition}[Quasi-interval] Suppose $I_1,\ldots, I_n$ are intervals in $V$. Then define \begin{equation*} \prescript{}{A}{\lsqparen} \mathbf{p},\mathbf{q} {\rsqparen}_{\raisebox{-1pt}{\footnotesize $B$}} = I_1 \times \cdots \times I_n \end{equation*} where $I_k$ has endpoints $\mathbf{p}(k)$ and $\mathbf{q}(k)$, with exclusion of $\mathbf{p}(k)$ when $k\in A$ and exclusion of $\mathbf{q}(k)$ when $k\in B$. \end{definition} \begin{lem} \label{intersection of quasi-intervals} Suppose $V$ is totally ordered.
Suppose $A,B \subset \{1,\ldots,n\}$ and $\mathbf{p},\mathbf{q},\mathbf{r},\mathbf{s} \in V^n$. Then \[ \prescript{}{A}{\lsqparen} \mathbf{p},\mathbf{q} ] \cap \prescript{}{B}{\lsqparen} \mathbf{r},\mathbf{s}] = \prescript{}{C}{\lsqparen} \mathbf{p} \vee \mathbf{r}, \mathbf{q} \wedge \mathbf{s} ] \] where \begin{align*} C & = \{ i \in A \setminus B \mid \mathbf{p}(i) \geq \mathbf{r}(i)\} \\ & \quad \cup \{j \in B \setminus A \mid \mathbf{p}(j) \leq \mathbf{r}(j)\} \\ & \quad \cup A\cap B \end{align*} \end{lem} \begin{proof} Let \[ \prescript{}{A}{\lsqparen} \mathbf{p},\mathbf{q}] = I_1 \times \cdots \times I_n \] and \[ \prescript{}{B}{\lsqparen} \mathbf{r},\mathbf{s}] = J_1 \times \cdots \times J_n \] Then \begin{align*} \prescript{}{A}{\lsqparen} \mathbf{p}, \mathbf{q}] \cap \prescript{}{B}{\lsqparen} \mathbf{r},\mathbf{s}] &= (I_1 \times \cdots \times I_n) \cap (J_1 \times \cdots \times J_n) \\ &= (I_1 \cap J_1) \times \cdots \times (I_n\cap J_n) \end{align*} Since the intersection of intervals is also an interval, each of $I_k \cap J_k$ is an interval with end-points $\mathbf{p}(k) \vee \mathbf{r}(k)$ and $\mathbf{q}(k) \wedge \mathbf{s}(k)$ with exclusion of the first end-point $\mathbf{p}(k) \vee \mathbf{r}(k)$ exactly when either \begin{enumerate}[(i)] \item $k \in A \setminus B$ and $\mathbf{p}(k) \geq \mathbf{r}(k)$, or \item $k\in B \setminus A$ and $\mathbf{p}(k) \leq \mathbf{r}(k)$, or \item $k \in A \cap B$. \end{enumerate} and inclusion of the second end-point $\mathbf{q}(k) \wedge \mathbf{s}(k)$. Hence \[ \prescript{}{A}{\lsqparen} \mathbf{p}, \mathbf{q}] \cap \prescript{}{B}{\lsqparen} \mathbf{r},\mathbf{s}] = \prescript{}{C}{\lsqparen} \mathbf{p} \vee \mathbf{r}, \mathbf{q} \wedge \mathbf{s}] \] where \begin{align*} C & = \{ i \in A \setminus B \mid \mathbf{p}(i) \geq \mathbf{r}(i)\} \\ & \quad \cup \{j \in B \setminus A \mid \mathbf{p}(j) \leq \mathbf{r}(j)\} \\ & \quad \cup (A\cap B) \end{align*} \end{proof} Using this new notation, we show that linear systems over a max-blank algebra have solution sets that can be written as a finite union of quasi-intervals. \begin{thm}[Max-blank Structure Theorem] \label{max-blank structure theorem} Suppose $\mathbf{A}$ is an $n\times m$ matrix and $\mathbf{w}$ an element of $V^n$. Let $U_i$, $U_{i,1}$, and $U_{i,2}$ be the sets of $j\in \{1,\ldots, m\}$ such that \[ X(\mathbf{A}(i,j),\mathbf{w}(i)) = \{ v \in V \mid \mathbf{A}(i,j)\otimes v = \mathbf{w}(i)\} \] is non-empty, non-empty and not a closed interval, and non-empty and a closed interval, respectively. When $j\in U_{i,1}$, let $X(\mathbf{A}(i,j),\mathbf{w}(i))=(p_j^i,q_j^i]$. For $j\in U_{i,2}$, let $X(\mathbf{A}(i,j),\mathbf{w}(i)) = [p_j^i,q_j^i]$. Lastly, for $j\notin U_i$ let $q_j^i$ be the largest element such that $\mathbf{A}(i,j)\otimes q_j^i\leq \mathbf{w}(i)$. \\ Let $\mathbf{p}_{i,j'}$ be defined by \[ \mathbf{p}_{i,j'}(j)=\begin{cases} p_{j'}^i & j=j' \\ - \infty & \text{otherwise} \end{cases} \] and $\mathbf{q}_i$ be defined by $\mathbf{q}_i(j)=q_j^i$.
Then \[ X(\mathbf{A},\mathbf{w})=\bigcup_{\substack{j'_{i,k} \in U_{i,k} \\ \text{for $1\leq i\leq n$} \\ \text{and $k\in \{1,2\}$}}}{\prescript{}{J_\mathbf{j}}{\Bigglsqparen} \bigvee_{1\leq i\leq n, k\in \{1,2\}}{\mathbf{p}_{i,j'_{i,k}}}, \bigwedge_{1\leq i\leq n}{\mathbf{q}_i} \Biggr]} \] where $\mathbf{j} = (j'_{i,k})_{1\leq i\leq n, k\in \{1,2\}}$ and $\ell \in J_\mathbf{j}$ if and only if \[ \max_{1\leq i\leq n}{\mathbf{p}_{i,j'_{i,2}}(\ell)} \leq \max_{1\leq i\leq n}{\mathbf{p}_{i,j'_{i,1}}(\ell)} \] \end{thm} \begin{proof} The proof will consist of finding the solution space of $\max_{j\in \{1,\ldots, m\}}{(\mathbf{A}(i,j)\otimes \mathbf{v}(j))}=\mathbf{w}(i)$ for each $i\in \{1,\ldots, n\}$ and showing that it is a union of intervals with a common (inclusive) terminal point. The intersection of these solution spaces is then $X(\mathbf{A},\mathbf{w})$. By Proposition~\ref{order-theoretic properties of solution space}, each $1\times 1$ solution space $X([A],[w])$ is convex with an inclusive terminal point. Since $V$ is a complete lattice, it follows that \[ X(\mathbf{A}(i,j),\mathbf{w}(i)) = [p_j^i,q_j^i] \quad \text{or} \quad X(\mathbf{A}(i,j),\mathbf{w}(i)) = (p_j^i,q_j^i] \] for some $p_j^i$ and $q_j^i$, assuming that $X(\mathbf{A}(i,j),\mathbf{w}(i))$ is non-empty. Define $U_i$, $U_{i,1}$, and $U_{i,2}$ as in the statement of the theorem. For $j\notin U_i$, let $q_j^i$ be the largest element such that $\mathbf{A}(i,j) \otimes q_j^i \leq \mathbf{w}(i)$. Such an element exists since $-\infty \otimes v = -\infty$ and multiplication by a fixed element is a monotonic map. The solution space can now be written down in terms of the elements $p_j^i$ and $q_j^i$. A given $\mathbf{v}$ is in the solution set if and only if there exists a $j'\in \{1,\ldots, m\}$ such that \[ \mathbf{A}(i,j')\otimes \mathbf{v}(j')=\mathbf{w}(i) \] and for all $j\in \{1,\ldots, m\}$ it is true that \[ \mathbf{A}(i,j)\otimes \mathbf{v}(j)\leq \mathbf{w}(i) \] The first condition is that $\mathbf{v}(j')\in X(\mathbf{A}(i,j'),\mathbf{w}(i))$ and the second condition is that $\mathbf{v}(j)\in [- \infty,q_j^i]$ because multiplication by a fixed element is a monotonic function.
Then the solution space can be written as \[ \bigcup_{j'\in U_i}{\left( X(\mathbf{A}(i,j'),\mathbf{w}(i)) \times \prod_{j\in \{1,\ldots,m\},j\neq j'}{[- \infty,q_j^i]}\right)} \] \[= \bigcup_{j'\in U_{i,1}}{\left( (p_{j'}^i,q_{j'}^i]\times \prod_{j\in \{1,\ldots,m\},j\neq j'}{[- \infty,q_j^i]}\right)} \] \[ \quad \cup \bigcup_{j'\in U_{i,2}}{\left( [p_{j'}^i,q_{j'}^i]\times \prod_{j\in \{1,\ldots,m\},j\neq j'}{[- \infty,q_j^i]}\right)} \] Taking $\mathbf{p}_{i,j'}(j)$ and $\mathbf{q}_i(j)$ as defined before, we get \[ [p_{j'}^i,q_{j'}^i] \times \prod_{j\in \{1,\ldots,m\},j\neq j'}{[-\infty,q_j^i]} = [\mathbf{p}_{i,j'},\mathbf{q}_i] \] and \[ (p_{j'}^i,q_{j'}^i] \times \prod_{j\in \{1,\ldots,m\},j \neq j'}{[-\infty,q_j^i]} = \prescript{}{j'}{\lsqparen} \mathbf{p}_{i,j'},\mathbf{q}_i] \] Let \[I_{i,j} = \begin{cases} \{ j\} & \text{if $j\in U_{i,1}$} \\ \emptyset & \text{otherwise} \end{cases} \] Using this, we can change the order of intersection with union to get \begin{align*} X(\mathbf{A},\mathbf{w}) &= \bigcap_{i=1}^n{\left( \bigcup_{j'\in U_{i,1}}{\prescript{}{j'}{\lsqparen} \mathbf{p}_{i,j'},\mathbf{q}_i]} \cup \bigcup_{j' \in U_{i,2}}{[\mathbf{p}_{i,j'},\mathbf{q}_i]}\right)} \\ &= \bigcup_{\substack{j'_{i,k} \in U_{i,k} \\ \text{for $1\leq i\leq n$} \\ \text{and $k\in \{1,2\}$}}}{ \bigcap_{1\leq i\leq n, k\in \{1,2\}}{\prescript{}{I_{i,j'_{i,k}}}{\lsqparen} \mathbf{p}_{i,j'_{i,k}}, \mathbf{q}_i]}} \\ \end{align*} Finally, since the intersection of quasi-intervals can be represented as a quasi-interval as well (Lemma~\ref{intersection of quasi-intervals}), we arrive at \[ X(\mathbf{A},\mathbf{w})=\bigcup_{\substack{j'_{i,k} \in U_{i,k} \\ \text{for $1\leq i\leq n$} \\ \text{and $k\in \{1,2\}$}}}{ \prescript{}{J_\mathbf{j}}{\Bigglsqparen} \bigvee_{1\leq i\leq n, k\in \{1,2\}}{\mathbf{p}_{i,j'_{i,k}}}, \bigwedge_{1\leq i\leq n}{\mathbf{q}_i}\Biggr]} \] where $\mathbf{j} = (j'_{i,k})_{1\leq i\leq n, k\in \{1,2\}}$ and $\ell \in J_\mathbf{j} \subset \{1,\ldots, n\}$ if and only if \[ \max_{1\leq i\leq n}{\mathbf{p}_{i,j'_{i,2}}(\ell)} \leq \max_{1\leq i \leq n}{\mathbf{p}_{i,j'_{i,1}}(\ell)} \] \end{proof} \section{Further Research} The notion of a quasi-interval allows the structure of the solution space of a linear system over a max-blank algebra to be written as a \emph{finite} union of quasi-intervals. This naturally leads to the question of how this notion can be used to express other solution spaces in similarly nice ways. While the maximum solution of a linear system is known in many cases, particularly for Heyting algebras (a join-blank algebra in which $\otimes$ is the meet) and max-blank algebras, the entire structure is not known for arbitrary Heyting algebras. Also worth investigating is how crucial each of the properties a join-blank algebra satisfies are to Theorem~\ref{join-blank structure theorem}. \end{document}
\begin{document} \begin{abstract} Let $G$ be a connected reductive algebraic group over a field of positive characteristic $p$ and denote by $\mathcal T$ the category of tilting modules for $G$. The higher Jones algebras are the endomorphism algebras of objects in the fusion quotient category of $\mathcal T$. We determine the simple modules and their dimensions for these semisimple algebras as well as their quantized analogues. This provides a general approach for determining various classes of simple modules for many well-studied algebras such as group algebras for symmetric groups, Brauer algebras, Temperley--Lieb algebras, Hecke algebras and $BMW$-algebras. We treat each of these cases in some detail and give several examples. \end{abstract} \title{Higher Jones algebras and their simple modules} \section{Introduction} In \cite{ILZ} the authors define Jones algebras as certain quotients of Temperley--Lieb algebras. They also show that these algebras may be identified with the endomorphism algebras over the quantum group for $\mathfrak sl_2$ of fusion tensor powers of its natural vector representation. In this paper we study more generally such endomorphism algebras for arbitrary reductive groups in positive characteristics and their quantized root of unity analogues. We call these semisimple algebras {\it higher Jones algebras}. In the quantum $\mathfrak sl_2$-case they coincide with the Jones algebras from \cite{ILZ}. Our main result is an algorithm which determines the dimensions of the simple modules for these higher Jones algebras. As the first important example and as a role model for our study consider the general linear group $GL_n$. Together with the corresponding root of unity quantum case we show how this gives interesting semisimple quotients of the group algebras for symmetric groups as well as of the corresponding Hecke algebras, and it allows us to determine the dimensions of certain classes of simple modules for these algebras. The results in these cases were obtained a long time ago, see \cite{Ma1} and \cite{Kl} for the modular case (using similar, respectively different techniques), and \cite{GW} for the Hecke algebra case. Nevertheless we treat this case in some detail as it will serve as role model for the general case. In fact, one of our key points is to give a unified treatment of a number of other cases, showing that they can be handled in similar ways. In order to explain our general strategy we pass now to an arbitrary reductive algebraic group G defined over a field k of prime characteristic. The category of tilting modules for $G$ has a quotient category called the fusion category, see \cite{A92}, \cite{AS}. Objects in this category may be identified with certain semisimple modules for $G$ and the higher Jones algebras are then defined as the endomorphism algebras of such objects and of their fusion tensor powers. The main result in \cite {AST1} says that any endomorphism algebra of a tilting module for $G$ has a natural cellular algebra structure. We show that the higher Jones algebras inherite a cellular structure, and exploiting this we are able to compute the dimensions of their simple modules. This applies to the corresponding quantum case as well. 
If $G$ is one of the classical groups $GL(V), SP(V)$ or $O(V)$ the module $V$ is (except for $O(V)$ in characteristic $2$ when $\dim V$ is odd) a tilting module for $G$ and via Schur--Weyl duality the endomorphism algebras of the fusion tensor powers of $V$ lead to semisimple quotient algebras of the group algebras of symmetric groups and Brauer algebras. This should be compared with the results in \cite{Ma1} and \cite{We2}, respectively. In the corresponding quantum cases we obtain semisimple quotients of Hecke algebras and $BMW$-algebras (compare \cite{Kl}, \cite{We1}, and \cite{We3}). In all cases we obtain effective algorithms for computing the dimensions of the corresponding classes of simple modules. We have illustrated this by giving a number of concrete examples. We want to point out that our approach, based on the theory of tilting modules and the cellular structure on their endomorphism rings as developed in \cite{AST1}, gives a general method for handling many more cases than those mentioned above. To mention just one big family of algebras - which fits into our framework and which are similar in principle to the examples given so far - take again an arbitrary reductive algebraic group $G$ and let $V$ be a simple Weyl module for $G$. This could e.g. be a Weyl module with minuscule highest weight or with highest weight belonging to the bottom alcove of the dominant chamber. Then our approach applies to the fusion tensor powers of $V$ or more generally to fusion tensor products of finite families of simple Weyl modules. As a result we get algorithms for the dimensions of certain simple modules for the corresponding endomorphism algebras. However, in only few cases are these algebras related to ``known" algebras and we have chosen to limit ourselves to the above examples. The paper is organized as follows. In Section 2 we first set up notation etc. for a general reductive group $G$ and then make this explicit in the case where $G$ is a group a classical type. We shall rely heavily on tilting modules for $G$ and in Section 3 we start out by recalling the basic facts that we will need from this theory. In addition, this section establishes results on fusion tensor products which we then apply in Section 4 to symmetric groups and in Section 5 to Brauer algebras. In Section 6 we turn to quantum groups at roots of unity. Here we prove results analogous to the ones we obtained in the modular case and in Sections 7 and 8 we apply these to Hecke algebras for symmetric groups and to $BMW$-algebras, respectively. \vskip .5 cm {\bf Acknowledgments. } Thanks to the referee for a quick and careful reading as well as for her/his many useful comments and corrections. \section{Reductive algebraic groups} This section introduces notation and contains a few basic facts about reductive algebraic groups and their representations over a field of prime characteristic. We shall be rather brief and refer the reader to \cite{RAG} for further details. We also deduce some specific facts needed later on for each of the groups of classical type. \subsection {General notation} \label{general} Suppose $G$ is a connected reductive algebraic group over a field $k$. We assume $k$ has characteristic $p > 0$. In this paper all modules will be finite-dimensional. Let $T$ be a maximal torus in $G$, and denote by $X =X(T)$ its character group. In the root system $R \subset X$ for $(G,T)$ we choose a set of positive roots $R^+$, and denote by $X^+ \subset X$ the corresponding cone of dominant characters. 
Then $R^+$ defines an ordering $\leq$ on $X$. It also determines uniquely a Borel subgroup $B$ whose roots are the set of negative roots $-R^+$. Denote by $S$ the set of simple roots in $R^+$. The reflection $s_\alpha$ corresponding to $\alpha \in S$ is called a simple reflection. The set of simple reflections generates the Weyl group $W$ for $R$. We can identify $W$ with $N_G(T)/T$. Then we see that $W$ acts naturally on $X$: $\lambda \mapsto w(\lambda), \lambda \in X, w \in W$. In addition to this action of $W$ on $X$ we shall also consider the so-called dot-action given by: $w \cdot \lambda = w(\lambda + \rho) - \rho, w \in W, \lambda \in X$. As usual, $\rho$ is half the sum of the positive roots. In the category of $G$-modules we have the family of standard modules $\Delta(\lambda)$, and likewise the family of costandard modules $\nabla(\lambda)$. Here $\lambda$ runs through the set of dominant weights $X^+$ and $\Delta(\lambda)$ is also known as the Weyl module with highest weight $\lambda$. The dual Weyl module $\nabla(\lambda)$ is then $\Delta(-w_0 \lambda)^*$ where $w_0$ denotes the longest element in $W$. The simple module $L(\lambda)$ with highest weight $\lambda$ may be realized as the head of $\Delta(\lambda)$ as well as the socle of $\nabla(\lambda)$. Recall that there is up to scalars a unique non-zero homomorphism \begin{equation} \label{can} c_\lambda: \Delta(\lambda) \rightarrow \nabla(\lambda), \end{equation} namely the one factoring through $L(\lambda)$. A $G$-module $M$ is said to have a $\Delta$-filtration if it has submodules $M^{i}$ with $$ 0=M^0 \subset M^1 \subset \dots \subset M^r = M, \text { where } M^{i+1}/M^{i} \simeq \Delta(\lambda_i) \text { for some } \lambda_i \in X^+.$$ One defines $\nabla$-filtrations similarly. If $M$ has a $\Delta$-filtration we set $(M:\Delta(\mu))$ equal to the number of occurrences of $\Delta(\mu)$ in such a filtration (note that these numbers are uniquely determined and independent of which $\Delta$-filtration we choose). When $M$ has a $\nabla$-filtration the numbers $(M:\nabla(\mu))$ are defined analogously. A crucial result concerning modules with a $\Delta$-filtration says that, if $M$ and $M'$ both have a $\Delta$-filtration, then so does $M\otimes M'$. This is the Wang--Donkin--Mathieu theorem, see \cite{Wa}, \cite{Do-book}, and \cite{Ma}. For $n \in {\mathbb Z}$ and $\alpha \in S$ we denote by $s_{\alpha, n}$ the affine reflection determined by $$s_{\alpha, n}(\lambda) = s_\alpha (\lambda) - np\alpha.$$ The affine Weyl group $W_p$ is the group generated by all $s_{\alpha, n}$ where $ \alpha \in S$ and $ n \in {\mathbb Z}$ (note that in the Bourbaki convention this is the affine Weyl group corresponding to the dual root system $R^\vee$). The linkage principle \cite{A80a} says that, whenever $L(\lambda)$ and $L(\mu)$ are two composition factors of an indecomposable $G$-module, then $\mu \in W_p \cdot \lambda$. It follows that any $G$-module $M$ splits into a direct sum of submodules according to the orbits of $W_p$ in $X$. More precisely, if we set $$A(p)= \{\lambda \in X | 0 < \langle \lambda + \rho, \alpha^\vee \rangle < p \text { for all } \alpha \in R^+\},$$ called the bottom dominant alcove, then the closure $$\overline A(p) = \{\lambda \in X | 0 \leq \langle \lambda + \rho, \alpha^\vee \rangle \leq p \text { for all } \alpha \in R^+\}$$ is a fundamental domain for the dot-action of $W_p$ on $X$.
We have $$ M = \bigoplus_{\lambda \in \overline A(p)} M[\lambda], $$ with $M[\lambda]$ equal to the largest submodule in $M$ whose composition factors $L(\mu)$ all have $\mu \in W_p \cdot \lambda$. \begin{remarkcounter} \label{alcove A} \begin{enumerate} \item [a)] As an immediate consequence of the strong linkage principle \cite{A80a} we have $$ \Delta(\lambda) = L(\lambda) = \nabla(\lambda) \text { for all } \lambda \in \overline A \cap X^+.$$ \item [b)] We have $A(p) \neq \emptyset$ if and only if $p > \langle \rho, \alpha^\vee \rangle$ for all roots $\alpha$, i.e. if and only if $p \geq h$, where $h$ is the Coxeter number for $R$. \end{enumerate} \end{remarkcounter} \subsection{The general linear groups} \label{GL} Let $V$ be a vector space over $k$. The reductive group $GL(V)$ plays a particularly important role in this paper. In this section we make the above notations and remarks explicit for the group $GL(V)$. We set $n = \dim V$ and choose a basis $\{v_1, v_2, \cdots , v_n\}$ for $V$. Then $G_n = GL(V)$ identifies with $GL_n(k)$ and the set $T_n$ of diagonal matrices in $G_n$ is a maximal torus. The character group $X_n = X(T_n)$ is the free abelian group with basis $\epsilon_i$, $i=1, 2, \cdots ,n$ where $\epsilon_i: T_n \rightarrow k^{\times}$ is the homomorphism mapping a diagonal matrix into its $i$'th entry. If $\lambda \in X_n$, we shall write $$\lambda = (\lambda_1, \lambda_2, \cdots , \lambda_n),$$ when $\lambda = \sum_1^n \lambda_i \epsilon_i.$ The root system for $(G_n,T_n)$ is $$R = \{\epsilon_i -\epsilon_j | i \neq j\}.$$ It is of type $A_{n-1}$. Our choice of simple roots $S$ will be $$S = \{\alpha_i = \epsilon_i -\epsilon_{i+1} | i = 1, 2, \cdots , n-1\}$$ inside the set of positive roots $R^+$ consisting of all $\epsilon_i - \epsilon_j$ with $i<j$. We set $$\omega_i = \epsilon_1 + \epsilon_2 + \cdots + \epsilon_i, \; i = 1, \cdots , n.$$ Then $\{\omega_1, \cdots , \omega_n\}$ is another basis of $X_n$. Note that $\omega_n$ is the determinant and thus, is trivial on the intersection of $T_n$ with $SL_n(k)$. Consider $$\rho' = \omega_1 + \cdots + \omega_n = (n, n-1, \cdots , 1).$$ Then $\rho' = \rho +\frac{1}{2} (n+1) \omega_n$ and we shall prefer to work with $\rho'$ instead of $\rho$ (note that, if $n$ is even, $\rho \notin X_n$ whereas $\rho' \in X_n$ for all $n$). As $\omega_n$ is fixed by $W$, the dot-action of $W$ on $X$ is unchanged when we replace $\rho$ by $\rho'$. We have an inner product on $X_n$ given by $(\epsilon_i, \epsilon_j) = \delta_{i,j}$. It satisfies $(\omega_i, \alpha_j) = \delta_{i,j}$, $i,j = 1, 2, \cdots n-1$, i.e. $\omega_1, \cdots , \omega_{n-1}$ are the fundamentals weights in $X_n$. On the other hand, $(\omega_n, \alpha_j) = 0$ for all $j$. Hence, $(\rho', \alpha_j) = 1$ for all $j = 1, 2, \cdots , n-1$. The set of dominant weights is $$X_n^+ = \{\lambda \in X_n | \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n\} = \{\sum_1^n m_i \omega_i | m_i \in {\mathbb Z}_{\geq 0}, i= 1, 2, \cdots , n-1, \;\ m_n \in {\mathbb Z}\}.$$ If $\lambda \in X_n^+$ has $\lambda_n \geq 0$, then $\lambda$ may be identified with a partition of $|\lambda| = \lambda_1 + \lambda_2 + \cdots + \lambda_n$. The bottom alcove will be denoted $A_n(p)$. When $n > 1$ it is given by $$ A_n(p) = \{\lambda \in X_n^+ | \lambda_1 - \lambda_n \leq p - n\}.$$ We have $A_n(p) \neq \emptyset$ if and only if $p \geq n$. In particular, $A_2(p)$ is always non-empty. In the special case $n = 1$ the group $G_1$ is the $1$-dimensional torus. 
In that case $X_1 = {\mathbb Z} \epsilon_1$ and $A_1(p) = {\mathbb Z} \epsilon_1$. Note that for any $r \in {\mathbb Z}$ the Weyl module $\Delta_1(r \epsilon_1)$ is the $1$-dimensional $G_1$-module $k_{r\epsilon_1}$. \begin{remarkcounter} \label{A for type A} The natural module $V$ for $G_n$ has weights $\epsilon_1, \cdots , \epsilon_n$. It is simple because the highest weight $\epsilon_1$ is minuscule. We have $\epsilon_1 \in A_n(p)$ if and only if $p > n$. \end{remarkcounter} \subsection{The symplectic groups} \label{Sp} Let now $V$ be a $2n$-dimensional symplectic vector space over $k$ with a fixed symplectic form, and consider the semisimple algebraic group $G_n = SP(V)$ consisting of those elements in $GL(V)$ which respect this form. This is naturally a subgroup of $GL(V)$. Note that $G_1 = SL_2(k)$. We let $T_n$ be the maximal torus in $G_n$ obtained as the intersection of the maximal torus in $GL(V)$ with $G_n$. In the notation from Section \ref{GL} the restrictions to $T_n$ of $\epsilon_1, \cdots , \epsilon_n$ form a basis of $X_n = X(T_n)$. The root system for $(G_n, T_n)$ consists of the elements $$\{\pm \epsilon_i \pm \epsilon_j, \pm2 \epsilon_i | 1 \leq i \neq j \leq n \},$$ and is of type $C_n$. With respect to the usual choice of positive roots the set of dominant weights is $$X_n^+ = \{\lambda = \sum_i \lambda_i \epsilon_i | \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n \geq 0 \}.$$ The bottom dominant alcove is also in this case denoted $A_n(p)$. It is given by $$ A_n(p) = \begin{cases} \{\lambda \epsilon_1 | 0 \leq \lambda \leq p-2\} \text { if } n = 1, \\\{\lambda \in X_n^+ | \lambda_1 + \lambda_2 \leq p - 2n \} \text { if } n > 1. \end{cases}$$ When $n > 1$ we have $A_n(p) \neq \emptyset$ if and only if $p \geq 2n$, whereas $A_1(p) \neq \emptyset$ for all $p$. \begin{remarkcounter} \label{A for type C} The natural module $V$ for $G_n$ is simple for all $p$ as its highest weight $\epsilon_1$ is minuscule. It has weights $\pm \epsilon_1, \cdots , \pm \epsilon_n$. Note that, for $n > 1$, we have $\epsilon_1 \in A_n(p)$ if and only if $p > 2n$, whereas for $n=1$ the condition is $p > 2$. \end{remarkcounter} \subsection{The orthogonal groups} \label{O} Consider next a vector space $V$ over $k$ equipped with a non-degenerate, symmetric bilinear form. Then the orthogonal group $O(V)$ is the subgroup of $GL(V)$ consisting of those elements which preserve the bilinear form on $V$. We shall separate our discussion into the case where $\dim V$ is odd and the case where $\dim V$ is even. \subsubsection{Type $B_n$} Assume that $\dim V$ is odd, say $\dim V = 2n + 1$. Then we set $G_n = O(V)$. Again in this case we have $G_1 \simeq SL_2(k)$. However, the module $V$ for $G_1$ is the $3$-dimensional standard module for $SL_2(k)$. The root system $R$ for $G_n$ has type $B_n$ and may be taken to consist of the elements $$ R = \{\pm \epsilon_i \pm \epsilon_j, \pm \epsilon_i | 1 \leq i \neq j \leq n \}$$ in $X_n = \oplus _{i=1}^n {\mathbb Z}\epsilon_i$. The set of dominant weights is $$X_n^+ = \{ \sum_i \lambda_i \epsilon_i \in X_n | \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n \geq 0 \}.$$ In this case the bottom dominant alcove $A_n(p)$ is given by $$ A_n(p) = \begin{cases} \{\lambda \epsilon_1 | 0 \leq \lambda \leq p-2\} \text { if } n = 1, \\\{\lambda \in X_n^+ | 2 \lambda_1 \leq p - 2n \} \text { if } n > 1. \end{cases}$$ We have $A_n(p) \neq \emptyset$ if and only if $p > 2n$ (except for $n = 1$).
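For instance, when $n = 2$ and $p = 7$ the condition $2 \lambda_1 \leq p - 2n = 3$ leaves only the dominant weights $(0,0)$, $(1,0)$ and $(1,1)$, so that $A_2(7) = \{0,\, \epsilon_1,\, \epsilon_1 + \epsilon_2\}$.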
\begin{remarkcounter} \label{A for type B} Unlike in the previous cases the highest weight of $V$ is no longer minuscule. However, we still have that $V = \Delta(\epsilon_1)$ is simple for all $p > 2$, \cite[Section II.8.21]{RAG}. It has weights $\pm \epsilon_1, \cdots , \pm \epsilon_n$ together with $0$. Note that $\epsilon_1 \in A_n(p)$ if and only if $p > 2n + 2$ except for $n=1$ where the condition is $p > 2$. \end{remarkcounter} \subsubsection{Type $D_n$} Assume that $\dim V$ is even, say $\dim V = 2n$. We set again $G_n = O(V)$. The corresponding root system $R$ then has type $D_n$ and may be taken to consist of the elements $$ R = \{\pm \epsilon_i \pm \epsilon_j | 1 \leq i \neq j \leq n \}$$ in $X_n =\{\lambda \in \oplus _{i=1}^n \frac{1}{2} {\mathbb Z}\epsilon_i | \lambda_i - \lambda_j \in {\mathbb Z} \text { for all } i, j\}$. The set of dominant weights is $$X_n^+ = \{ \sum_i \lambda_i \epsilon_i \in X_n | \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_{n-1} \geq |\lambda_n| \}.$$ In this case the bottom dominant alcove $A_n(p)$ is given by $$ A_n(p) = \begin{cases} \{\lambda \epsilon_1 | \lambda \in {\mathbb Z}, 0 \leq \lambda \leq p-2\} \text { if } n = 1, \\\{\lambda \in X_2^+ | \lambda_1 \pm \lambda_2 \leq p - 2 \} \text { if } n = 2, \\\{\lambda \in X_n^+ | \lambda_1 + \lambda_2 \leq p - 2n + 2 \} \text { if } n > 2. \end{cases}$$ When $n>2$ we have $A_n(p) \neq \emptyset$ if and only if $p > 2n - 2$, whereas $A_1(p)$ and $A_2(p)$ are always non-empty. \begin{remarkcounter} \label{A for type D} $V = \Delta(\epsilon_1)$ is simple for all $p $ (its highest weight is minuscule). It has weights $\pm \epsilon_1, \cdots , \pm \epsilon_n$. Note that, for $n > 2$, we have $\epsilon_1 \in A_n(p)$ if and only if $p > 2n - 2$, whereas the condition for both $n=1$ and $n=2$ is $p > 2$. \end{remarkcounter} \section{Tilting modules for reductive algebraic groups} We return to the situation of a general reductive group $G$ and use the notation from Section \ref{general}. We very briefly recall the basics about tilting modules for $G$ (referring to \cite[Section 2]{Do} or \cite[Chapter II.E]{RAG} for details), and prove the results which we then apply in the next two sections. Moreover, we recall from \cite[Section 4]{AST1} a few facts about the cellular algebra structure on endomorphism rings for tilting modules for $G$, which we also need. \subsection {Tilting theory for $G$} A $G$-module $M$ is called tilting if it has both a $\Delta$- and a $\nabla$-filtration. It turns out that for each $\lambda \in X^+$ there is a unique (up to isomorphisms) indecomposable tilting module $T(\lambda)$ with highest weight $\lambda$, and up to isomorphisms these are the only indecomposable tilting modules, see \cite[Theorem 1.1]{Do}. The Weyl module $\Delta(\lambda)$ is a submodule of $T(\lambda)$, while the dual Weyl module $\nabla(\lambda)$ is a quotient. The composite of the inclusion $\Delta(\lambda) \to T(\lambda)$ and the quotient map $T(\lambda) \to \nabla(\lambda)$ equals the homomorphism $c_\lambda$ from (\refeq{can}) (up to a non-zero constant in $k$). We have the following elementary (and no doubt well-known) lemma. \begin{lem} \label{quotient} Let $M$ be a $G$-module which contains two submodules $M_1$ and $M_2$ such that $M = M_1 \oplus M_2$. Denote by $i_j: M_j \rightarrow M$, respectively $\pi_j: M \rightarrow M_j$ the natural inclusion, respectively projection, $j = 1,2$. Suppose $f \circ g = 0$ for all $f \in \mathrm{Hom}_G(M_2, M_1)$ and $g \in \mathrm{Hom}_G(M_1, M_2)$.
Then the natural map $$\phi: \mathrm{End}_G(M) \rightarrow \mathrm{End}_G(M_1)$$ which takes $h \in \mathrm{End}_G(M)$ into $\pi_1 \circ h \circ i_1 \in \mathrm{End}_G(M_1)$ is a surjective algebra homomorphism. \end{lem} \begin{proof} The surjectivity of $\phi$ is obvious, so we just have to check that $\pi_1 \circ h' \circ h \circ i_1 = \pi_1 \circ h' \circ i_1 \circ \pi_1 \circ h \circ i_1$ for all $h', h \in \mathrm{End}_G(M)$. However, $h \circ i_1 = i_1 \circ \pi_1\circ h \circ i_1 + i_2 \circ \pi_2\circ h \circ i_1$, and by our assumption we see that $(\pi_1 \circ h' \circ i_2) \circ (\pi_2\circ h \circ i_1) = 0$. The desired equality follows. \end{proof} Let $Q$ be a tilting module for $G$. Then $Q$ splits into indecomposable summands as follows $$ Q = \bigoplus_{\lambda \in X^+} T(\lambda)^{(Q:T(\lambda))}$$ for unique $(Q:T(\lambda)) \in {\mathbb Z}_{\geq 0}$. Set now $$Q^{{\mathcal F}}= \bigoplus_{\lambda \in A(p)} T(\lambda)^{(Q:T(\lambda))} \text { and } Q^{\mathcal N} = \bigoplus_{\lambda \in X^+\setminus A(p)}T(\lambda)^{(Q:T(\lambda))}.$$ For reasons explained in Section \ref{Fusion} we call $Q^{{\mathcal F}}$ the fusion summand of $Q$ and $Q^{\mathcal N}$ the negligible summand of $Q$. Then: \begin{lem} \label{no composites} If $f \in \mathrm{Hom}_G(Q^{{\mathcal F}}, Q^{\mathcal N})$ and $g \in \mathrm{Hom}_G(Q^{\mathcal N}, Q^{{\mathcal F}})$, then $g \circ f = 0$. \end{lem} \begin{proof} It is enough to check the lemma in the case where $Q^{{\mathcal F}} = T(\lambda)$ and $Q^{\mathcal N} = T(\mu)$ with $\lambda \in A(p)$ and $\mu \in X^+\setminus A(p)$. By Remark \ref{alcove A}a) we have $T(\lambda) = \Delta(\lambda) = L(\lambda)$. Hence, in this case $g \circ f$ is up to scalar the identity on $T(\lambda)$. If the scalar were non-zero, $T(\lambda)$ would be a summand of $T(\mu)$, which contradicts the indecomposability of $T(\mu)$. \end{proof} \begin{thm} \label{fusion-quotient} The natural map $\phi: \mathrm{End}_G(Q) \rightarrow \mathrm{End}_G(Q^{{\mathcal F}})$ is a surjective algebra homomorphism. The kernel of $\phi$ equals $$\{h \in \mathrm{End}_G(Q) | \mathrm{Tr}(i_\lambda \circ h \circ \pi_\lambda) = 0 \text { for all homomorphisms } i_\lambda: T(\lambda) \rightarrow Q, \; \pi_\lambda: Q \rightarrow T(\lambda),\; \lambda \in X^+\}.$$ \end{thm} \begin{proof} The combination of Lemma \ref{quotient} and Lemma \ref{no composites} immediately gives the first statement. To prove the claim about the kernel of $\phi$ we first observe that the endomorphisms $i_\lambda \circ h \circ \pi_\lambda$ of $T(\lambda)$ are either nilpotent, or a constant times the identity. Now, nilpotent endomorphisms clearly have trace zero. If $ \lambda \notin A(p)$, then $\dim T(\lambda)$ is divisible by $p$. This holds when $\lambda$ is $p$-singular (i.e. if there exists a root $\beta$ with $\langle \lambda + \rho, \beta^{\vee} \rangle$ divisible by $p$), because then the linkage principle implies that $\mu$ is also $p$-singular for all $\mu \in X^+$ for which $\Delta(\mu)$ occurs in a $\Delta$-filtration of $T(\lambda)$. For a $p$-regular $\lambda$ it then follows by an easy translation argument, see \cite[Section 5]{A92} (the argument used there deals with the quantum case but applies just as well in the modular case). So when $\lambda \notin A(p)$ all endomorphisms of $T(\lambda)$ have trace zero. In particular, we have $\mathrm{Tr}(i_\lambda \circ h \circ \pi_\lambda) = 0$ for all $h \in \mathrm{End}_G(Q)$ when $\lambda \in X^+\setminus A(p)$. 
On the other hand, if $\lambda \in A(p)$, then by Remark \ref{alcove A} we see that $T(\lambda)$ is simple, i.e. we have $T(\lambda) = L(\lambda) = \Delta(\lambda)$. So in this case any non-zero endomorphism of $T(\lambda)$ has trace equal to a non-zero constant times $\dim T(\lambda)$. By Weyl's dimension formula $\dim (\Delta(\lambda)) $ is prime to $p$. If $i_1$, respectively $\pi_1$, denotes the natural inclusion of $Q^{\mathcal F}$ into $Q$, respectively projection onto $Q^{\mathcal F}$, then this means that $h \in \mathrm{End}_G(Q)$ is in the kernel of $\phi$ if and only if $i_1 \circ h \circ \pi_1 = 0$ if and only if $i_\lambda \circ h \circ \pi_\lambda = 0$ for all $\lambda \in A(p)$ if and only if $ \mathrm{Tr}(i_\lambda \circ h \circ \pi_\lambda) = 0$ for all $\lambda \in X^+$. \end{proof} \subsection{Fusion} \label{Fusion} Let $\mathcal T$ denote the category of tilting modules for $G$. As noted above, this is a tensor category. Inside $\mathcal T$ we consider the subcategory $\mathcal N$ consisting of all negligible modules, i.e. a tilting module $M$ belongs to $\mathcal N$ if and only if $\mathrm{Tr}(f) = 0$ for all $f \in \mathrm{End}_{G}(M)$. As each object in $\mathcal T$ is a direct sum of certain of the $T(\lambda)$'s and $\dim T(\lambda)$ is divisible by $p$ if and only if $\lambda \notin A(p)$ (as we saw in the proof of Theorem \ref{fusion-quotient}) we see that $M \in \mathcal N$ if and only if $(M:T(\lambda)) = 0$ for all $\lambda \in A(p)$. We proved in \cite[Section 4]{A92} (in the quantum case - the arguments for $G$ are analogous) that $\mathcal N$ is a tensor ideal in $\mathcal T$. The corresponding quotient category $\mathcal T/ \mathcal N$ is then itself a tensor category. It is denoted ${\mathcal F}$ and called the fusion category for $G$. We may think of objects in $\mathcal F$ as the tilting modules $Q$ whose indecomposable summands are among the $T(\lambda)$'s with $\lambda \in A(p)$. Note that $\mathcal F$ is a semisimple category (with simple objects $(T(\lambda) = L(\lambda))_{\lambda \in A(p)}$), cf. \cite[Section 4]{A92}. \begin{remarkcounter} \label{p=2$} Note that for $p < h$ the alcove $A(p)$ is empty. This means that in this case $\mathcal N = \mathcal T$. In particular, if $p=2$ the fusion category is trivial except for the case $G=SL_2(k)$ in which case $A(2) =\{0\}$, so that $\mathcal F$ is the category of finite-dimensional vector spaces. For this reason we shall in the following tacitly assume $p > 2$. \end{remarkcounter} In order to distinguish it from the usual tensor product on $G$-modules we denote the tensor product in $\mathcal F$ by $\underline \otimes$. If $Q, Q' \in \mathcal F$ then $Q \underline \otimes Q = \mathrm{pr} (Q \otimes Q')$ where $\mathrm{pr}$ denotes the projection functor from $\mathcal T$ to $\mathcal F$ (on the right-hand side we consider $Q, Q'$ as modules in $\mathcal T$). \begin{cor} \label{fusion} Let $T$ be an arbitrary tilting module for $G$. Then, for any $r \in {\mathbb Z}_{\geq 0}$, the natural homomorphism $\mathrm{End}_G(T^{\otimes r}) \rightarrow \mathrm{End}_G(T^{\underline \otimes r})$ is surjective. \end{cor} \begin{proof} Set $Q = T^{\otimes r}$. Then $Q$ is a tilting module and in the above notation $Q^{\mathcal F} = T^{\underline \otimes r}$ ($ = 0$ if $T \in \mathcal N$) . Hence the corollary is an immediate consequence of Theorem \ref{fusion-quotient}. 
\end{proof} \subsection{Cellular theory for endo-rings of tilting modules} \label{cellular} Recall the notions of cellularity, cellular structure and cellular algebras from \cite{GL}. When $Q$ is a tilting module for $G$ its endomorphism ring $E_Q = \mathrm{End}_G(Q)$ has a natural cellular structure, see \cite[Theorem 3.9] {AST1}. The parameter set of weights for $E_Q$ is $$ \Lambda = \{\lambda \in X^+ | (Q:\Delta(\lambda)) \neq 0 \}.$$ When $\lambda \in X^+$ the cell module for $E_Q$ associated with $\lambda$ is $C_Q(\lambda) = \mathrm{Hom}_G(\Delta(\lambda), Q)$. Then $\dim C_Q(\lambda) = (Q:\Delta(\lambda)) (= 0$ unless $\lambda \in \Lambda$). We set $$ \Lambda_0 = \{\lambda \in \Lambda | (Q:T(\lambda)) \neq 0 \}.$$ If $\lambda \in \Lambda_0$ then $C_Q(\lambda)$ has a unique simple quotient which we in this paper denote $D_Q(\lambda)$. The set $\{D_Q(\lambda) | \lambda \in \Lambda_0\}$ is up to isomorphisms the set of simple modules for $E_Q$. We have \begin{equation} \label{dim simple/tilting} \dim D_Q(\lambda) = (Q: T(\lambda)), \end{equation} see \cite[Theorem 4.12]{AST1}. Finally, recall the following result on semisimplicity, see \cite[Theorem 4.13] {AST1}. \begin{thm} \label{ss} $E_Q$ is a semisimple algebra if and only if $Q$ is a semisimple $G$-module. In that case we have $\Lambda = \Lambda_0$, $T(\lambda) = \Delta(\lambda) = L(\lambda)$ and $C_Q(\lambda) = D_Q(\lambda)$ for all $\lambda \in \Lambda$. \end{thm} \begin{examplecounter} \label{Ex1} Let $T$ be a tilting module for $G$ and set $Q = T^{\otimes r}$ as in the previous section. Then $E_Q= \mathrm{End}_G(T^{\otimes r})$ is a cellular algebra with cell modules $(C_Q(\lambda))_{\lambda \in \Lambda_T^r}$, where $\Lambda_T^r = \{\lambda \in X^+ | (T^{\otimes r} : \Delta(\lambda)) \neq 0\}$, and simple modules $(D_Q(\lambda))_{\lambda \in \Lambda_{0,T}^r}$, $\Lambda_{0,T}^r = \{\lambda \in \Lambda_T^r | (T^{\otimes r}: T(\lambda)) \neq 0\}$. Denote by $Q_1$ the summand $T^{\underline \otimes r}$ of $Q$. Then the endomorphism ring $$\overline E_Q = \mathrm{End}_G(Q_1),$$ is, according to Corollary \ref{fusion}, a quotient of $E_Q$, and, by Theorem \ref{ss}, it is a semisimple cellular algebra. In fact, $\overline E_Q$ is a direct sum of the matrix rings, namely $$\overline E_Q \simeq \bigoplus_{\lambda \in A(p)} M_{(Q_1:T(\lambda))} (k).$$ The simple modules for $\overline E_Q$ are $\{D_Q(\lambda) | \lambda \in A(p) \cap \Lambda_0 \}$. We have $$\dim D_Q(\lambda) = (Q_1:T(\lambda))$$ for all $\lambda \in A(p) \cap \Lambda_{0,T}^r$. \end{examplecounter} \section {Semisimple quotients of the group algebras $kS_r$} In this section $G_n = GL(V)$ where $V$ is a vector space over $k$ of dimension $n$ with basis $\{v_1, v_2, \cdots , v_n\}$ as in Section \ref{GL}. As we will also look at various subspaces $V'$ of $V$ we shall from now on write $V_n = V$. We write $\Delta_n(\lambda)$ for the Weyl module for $G_n$, $T_n(\lambda)$ for the indecomposable tilting module for $G_n$ with highest weight $\lambda$, etc. \subsection{Algebras associated with tensor powers of the natural module for $G$} \label{tensor powers} We let $r \in Z_{\geq 0}$ and consider the $G_n$-module $V_n^{\otimes r}$. As tensor products of tilting modules are again tilting we see that this is a tilting module for $G_n$. Consider the subset $I = \{\alpha_1, \alpha_2, \cdots , \alpha_{n-2}\} \subset S$. 
Then the corresponding Levi subgroup $L_I$ identifies with $G_{n-1} \times G_1$, where the first factor $G_{n-1} =GL(V_{n-1})$ is the subgroup fixing $v_n$, and the second factor $G_1$ is the subgroup fixing $v_i$ for $i < n$ and stabilizing the line $k v_n$. As an $L_I$-module we have $V_n = V_ {n-1} \oplus k_{ \epsilon_n}$. Here $k_{\epsilon_n}$ is the line $k v_n$ on which $G_1$ acts via the character $\epsilon_n$. This gives \begin{equation} \label{restriction} V_n^{\otimes r} \simeq \bigoplus _{s=0} ^r (V_{n-1}^{\otimes s} \otimes k_{(r-s)\epsilon_n})^{\oplus \binom{r}{s}} \text { (as $L_I$-modules)}. \end{equation} In particular, $V_{n-1}^{\otimes r}$ is an $L_I$-summand. Its weights (for the natural maximal torus in $L_I$ which is also the maximal torus $T_n$ in $G_n$) consist of $\lambda$'s with $\lambda_n = 0$ whereas any weight $\mu$ of the complement $C = \bigoplus _{s=0} ^{r-1} (V_{n-1}^{\otimes s} \otimes k_{(r-s)\epsilon_n})^{\oplus \binom{r}{s}}$ has $\mu_n > 0$. It follows that \begin{equation} \label{no cross-homs} \mathrm{Hom}_{L_I}(V_{n-1}^{\otimes r}, C) = 0 = \mathrm{Hom}_{L_I}(C, V_{n-1}^{\otimes r}). \end{equation} Moreover, since $G_1$ acts trivially on $V_{n-1}$ we have $\mathrm{End}_{L_I}(V_{n-1}^{\otimes r}) = \mathrm{End}_{G_{n-1}}(V_{n-1}^{\otimes r})$. Hence we get from Lemma \ref{quotient} (in which the assumptions are satisfied because of (\refeq{no cross-homs})): \begin{prop}\label{surj GL} The natural algebra homomorphism $$\mathrm{End}_{G_n}(V_n^{\otimes r}) \rightarrow \mathrm{End}_{G_{n-1}}(V_{n-1}^{\otimes r})$$ is surjective. \end{prop} Later on we shall use the following related result. \begin{prop} \label{restriction of Specht} Suppose $\lambda \in X^+$ has $\lambda_n = 0$. Then the natural homomorphism $$\mathrm{Hom}_{G_n}(\Delta_n(\lambda), V_n^{\otimes r}) \rightarrow \mathrm{Hom}_{G_{n-1}}(\Delta_{n-1}(\lambda), V_{n-1}^{\otimes r})$$ is an isomorphism for all $r$. \end{prop} \begin{proof} In this proof we shall need the parabolic subgroup $P_I$ corresponding to $I$. We have $P_I = L_I U^{I}$ (semidirect product) where $U^{I}$ is the unipotent radical of $P_I$. We set $\nabla_I(\lambda) = \mathrm{Ind}_B^{P_I}(k_\lambda)$. Our assumption that $\lambda_n = 0$ implies that as an $L_I$-module and as a $G_{n-1}$-module we have $\nabla_I(\lambda) = \nabla_{n-1}(\lambda)$. We shall prove the proposition by proving the dual statement $$\mathrm{Hom}_{G_n}( V_n^{\otimes r}, \nabla_n(\lambda)) \simeq \mathrm{Hom}_{G_{n-1}}(V_{n-1}^{\otimes r}, \nabla_{n-1}(\lambda)).$$ First, by Frobenius reciprocity \cite[Proposition I.3.4]{RAG}, we have $$\mathrm{Hom}_{G_n}( V_n^{\otimes r}, \nabla_n(\lambda)) \simeq \mathrm{Hom}_{P_I}( V_n^{\otimes r}, \nabla_I(\lambda)).$$ Then restricting to $L_I$ gives an isomorphism to $\mathrm{Hom}_{L_I}( V_n^{\otimes r}, \nabla_I(\lambda))$. Finally, we use (\refeq{restriction}) and the weight arguments from the proof of Proposition \ref{surj GL} to see that this identifies with $$\mathrm{Hom}_{L_I}( V_{n-1}^{\otimes r}, \nabla_I (\lambda)) \simeq \mathrm{Hom}_{G_{n-1}}( V_{n-1}^{\otimes r}, \nabla_{n-1}(\lambda)).$$ \end{proof} We can, of course, iterate the statement in Proposition \ref{surj GL}: If we set $E_n^r = \mathrm{End}_{G_n}(V_n^{\otimes r})$, then we recover the following well-known fact (cf. \cite[E.17]{RAG}). 
\begin{cor} \label{sequence of surjections} We have a sequence of surjective algebra homomorphisms $$ E_n^r \rightarrow E_{n-1}^r \rightarrow \cdots \rightarrow E_2^r \rightarrow E_1^r.$$ \end{cor} \vskip 1 cm Set now $\overline E_n^r = \mathrm{End}_{G_n}(V_n^{\underline \otimes r})$. Note that these are the higher Jones algebras (see the introduction) in the case $G= GL_n$ corresponding to the tilting modules $V_n^{\otimes r}$. We get from Corollary \ref{fusion} that this is a quotient of $E_n^r$. It is a semisimple algebra (see Example \ref{Ex1}), so that by Corollary \ref{sequence of surjections} we get \begin{thm} \label {ss quotients} For all $n$ and all $r$ the algebras $\overline E_m^r$, $m=1, 2, \cdots , n$ are semisimple quotient algebras of $E_n^r$. \end{thm} \begin{remarkcounter} \label{p-term is 0} \begin{enumerate} \item [a)] We have $\overline E_m^r = 0$ for all $r$ when $m \geq p$. This is clear for $m > p$, because then $A_m(p) = \emptyset$. If $m = p$, we have that $\epsilon_1$ belongs to the upper wall of $A_m(p)$, see Remark \ref{A for type A}. Hence, $V_p$ is negligible and therefore so are all tensor powers $V_p^{\otimes r}$. This means that $V_p^{\underline \otimes r} = 0$ for all $r$. \item [b)] We do not have surjections $\overline E_m^r \to \overline E_{m-1}^r$ analogous to the ones we found in Corollary \ref{sequence of surjections}. In fact, the alcove $A_m(p)$ becomes larger the smaller $m$ we consider. This means that the algebras $\overline E_m^r$ grow in size when $m$ decreases. \end{enumerate} \end{remarkcounter} \subsection{A class of simple modules for symmetric groups} \label{class of simple} The group algebra $kS_r$ of the symmetric group on $r$ letters is isomorphic to the algebra $E_n^r$ for all $n \geq r$, see e.g. \cite[3.1]{CL}. Hence, by Theorem \ref{ss quotients}, $kS_r$ has the following list of semisimple quotients: $\overline E_1^r, \overline E_2^r, \cdots , \overline E_r^r$. As observed in Remark \ref{p-term is 0} we have $\overline E_n^r= 0$ if $n \geq p$. On the other hand, $kS_r$ is itself semisimple if $p >r$, and its representation theory coincides with the well-known theory in characteristic $0$. So we shall assume in the following that $p \leq r$. In the special case $n=1$ we have $V_1^{\otimes r}= k_{r \epsilon_1}$, $r \in {\mathbb Z}$ and these modules together with their duals are the indecomposable tilting modules (as well as the simple modules) for $G_1$. The fusion category for $G_1$ coincides with the full category of finite-dimensional $G_1$-modules. We identify the trivial one-line partition of $r>0$ with the element $r \epsilon_1$ in $A_1(p)$. Clearly, we have $\overline E_1^r = E_1^r = \mathrm{End}_{G_1}(k_{r \epsilon_1}) = k$ for all $r$. We shall explore the simple modules for $kS_r$ arising from the above quotients $\overline E_m^r$. Note that we just observed that the first algebra $\overline E_1^r$ equals $k$. Consider the remaining quotients $\overline E_m^r$, $m= 2, \cdots , p-1$ of $kS_r$. We shall describe the simple modules for $kS_r$ arising from these. Recall that the simple modules for $kS_r$ are indexed by the $p$-regular partitions of $r$, i.e. partitions of $r$ with no $p$ rows of the same length. If $\lambda$ is such a partition, we denote the corresponding simple module for $kS_r$ by $D_r(\lambda)$. Set $\Lambda^r$ equal to the set of partitions of $r$. This is the weight set for the cellular algebra $E_n^r$ whenever $n \geq r$.
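For instance, for $p = 3$ and $r = 4$ the weight set $\Lambda^4$ consists of the five partitions $(4)$, $(3,1)$, $(2,2)$, $(2,1,1)$ and $(1,1,1,1)$, of which all but $(1,1,1,1)$ are $3$-regular.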
Define $$\overline \Lambda^r(p) = \{ (\lambda_1, \lambda_2, \cdots ,\lambda_m) \in \Lambda^r | \lambda \in A_m(p) \text { for some } m < p \}.$$ So $\overline \Lambda^r(p) $ consists of those partitions of $r$ which have $m < p$ non-zero terms and satisfy $\lambda_1 - \lambda_m \leq p - m$. Clearly, the partitions in $\overline \Lambda^r(p)$ are all $p$-regular. We shall now derive an algorithm which determines the dimensions of the simple modules $D_r(\lambda)$ when $\lambda \in \overline \Lambda^r(p)$. We have the following Pieri-type branching formula, which is proved e.g. in \cite[(3.7)]{AS}. \begin{prop} \label{inductive formula} Let $m \geq 1$ and suppose $\lambda \in A_m(p)$. Then $$(V_m^{\otimes r}: T(\lambda)) = \sum_{i: \lambda - \epsilon_i \in \Lambda^{r-1} \cap A_m(p)} (V_m^{\otimes (r-1)}: T(\lambda - \epsilon_i)).$$ \end{prop} \begin{lem} \label {the p-1 algebra} Suppose $1 \leq r = a (p-1) + b$ where $0 \leq b < p-1$. Then $V_{p-1}^{\underline \otimes r} = T(a\omega_{p-1} + \omega_b)$. Hence, $\overline E_{p-1}^r = k$. \end{lem} \begin{proof} The lemma is clearly true when $r =1$ where $V_{p-1} = T(\omega_1) = L(\omega_1)$. Observe that (with the notation in the lemma) $a\omega_{p-1} + \omega_b$ is the unique element in $\Lambda^r \cap A_{p-1}(p)$. Hence, for $r>1$ the statement follows by induction from Proposition \ref{inductive formula}. \end{proof} \begin{thm} \label{main symm} Let $r > 0$ and suppose $\lambda \in \overline \Lambda^r(p)$. Then the dimension of the simple $kS_r$-module $D_r(\lambda)$ is recursively determined by $$ \dim D_r(\lambda) = \sum_{i: \lambda - \epsilon_i \in \overline \Lambda^{(r-1)}(p)} \dim D_{r-1}(\lambda - \epsilon_i).$$ \end{thm} \begin{proof} For any partition $\mu$ of $r$ the corresponding Specht module for $kS_r$ identifies with the cell module $C_r(\mu) =\mathrm{Hom}_{G_r}(\Delta_r(\mu), V_r^{\otimes r})$ for $E_r^r \simeq kS_r$. Now Proposition \ref{restriction of Specht} shows that, if $\mu$ has at most $m$ terms, then we have $C_r(\mu) \simeq C_m(\mu)$. The surjection $V_m^{\otimes r}$ onto the fusion summand $V_m^{\underline \otimes r}$ then gives a surjection of $C_m(\mu)$ onto the cell module $\overline C_m(\mu) = \mathrm{Hom}_{G_m}(\Delta_m(\mu), V_m^{\underline \otimes r})$ for the semisimple quotient algebra $\overline E_m^r$ of $kS_r$. This latter module is only non-zero if $m < p$ and $\mu \in A_m(p)$. So if $\mu = \lambda$ with $\lambda$ as in the theorem, we see that $D_r(\lambda) = \overline C_m(\lambda)$. The theorem therefore follows from Proposition \ref{inductive formula} by observing that $\dim \overline C_m(\lambda) = (V_m^{\otimes r} : T(\lambda))$, cf. (\refeq{dim simple/tilting}). \end{proof} \begin{examplecounter} Consider the case $p = 3$. Here we have $$ \overline \Lambda ^r(3) = \begin{cases} \{(1)\} \text { if } r = 1, \\ \{(r), ( (r+1)/2, (r-1)/2)\} \text { if } r \geq 3 \text { is odd,} \\ \{ (r), ( r/2, r/2)\} \text { if } r \geq 2 \text { is even.} \end{cases}$$ The trivial partition $(r)$ of $r$ corresponds to the trivial simple module $D_r((r)) = k$ (this is true for all primes). For the unique $2$-parts partition $\lambda$ in $\overline \Lambda ^r(3)$ we get from Theorem \ref{main symm} $$ \dim D_r(\lambda) = \begin{cases} \dim D_{r-1}(\lambda - \epsilon_1) \text { if $r$ is odd,} \\ \dim D_{r-1}(\lambda - \epsilon_2) \text { if $r$ is even.} \end{cases}$$ Hence we find $\dim D_r(\lambda) = 1$ for all $r$.
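The recursion is entirely mechanical to carry out. As an illustration, the following small Python sketch applies it exactly as in the proof above, working inside the fixed alcove $A_m(p)$ with $m$ the number of non-zero parts of $\lambda$ (the function name \texttt{dim\_D} and the choice of Python are merely conveniences for this sketch); for $p=3$ it confirms the one-dimensionality just found, and for $p=5$ it reproduces row $r=10$ of Table 1 below.
\begin{verbatim}
from functools import lru_cache

@lru_cache(maxsize=None)
def dim_D(lam, p):
    """dim D_r(lam) for a partition lam of r whose m non-zero parts lie in
    A_m(p), computed by the recursion of the theorem above."""
    lam = tuple(part for part in lam if part > 0)
    if not lam:                      # the empty partition, r = 0
        return 1
    m = len(lam)
    assert m < p and lam[0] - lam[-1] <= p - m, "lam does not lie in A_m(p)"
    total = 0
    for i in range(m):
        mu = list(lam)
        mu[i] -= 1
        # keep mu only if it is still a partition lying in A_m(p)
        if mu[i] >= 0 and all(mu[j] >= mu[j + 1] for j in range(m - 1)) \
           and mu[0] - mu[-1] <= p - m:
            total += dim_D(tuple(mu), p)
    return total

# p = 3: the two-part weights all give 1-dimensional modules, as found above
print([dim_D(((r + 1) // 2, r // 2), 3) for r in range(2, 11)])
# p = 5: row r = 10 of Table 1, dim D_10((6,4)) = 55 and dim D_10((5,5)) = 34
print(dim_D((6, 4), 5), dim_D((5, 5), 5))
\end{verbatim}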
That $\dim D_r(\lambda) = 1$ for all $r$ is of course also an immediate consequence of the fact that in this case $\overline E_2^r =k$, see Lemma \ref{the p-1 algebra}. Note that $\overline E_2^r$ is the modular Jones algebra appearing in \cite[Section 7]{A17} and it was observed there as well that the Jones algebras are all trivial in characteristic $3$. \end{examplecounter} \begin{examplecounter} Consider now $p = 5$. Then for $r \geq 5$ we have exactly two partitions $\lambda^1(r)$ and $\lambda^2(r)$ of $r$ having $2$ non-zero parts, which belong to $A_2(5)$. Likewise, there are exactly $2$ partitions $\mu^1(r)$ and $\mu^2(r)$ of $r$ with $3$ non-zero parts, which belong to $A_3(5)$. Finally, there is a unique partition $\nu(r)$ of $r$ with $4$ non-zero parts which belongs to $A_4(5)$. To be precise we have $$ \lambda^1(r) = ((r+2)/2, (r-2)/2) \text { and } \lambda^2(r) = (r/2, r/2), \text { if $r$ is even;}$$ whereas $$ \lambda^1(r) = ((r+3)/2, (r-3)/2) \text { and } \lambda^2(r) = ((r+1)/2,(r-1)/2), \text { if $r$ is odd.}$$ We leave it to the reader to work out the formulas for $\mu^1(r), \mu^2(r)$. The expression for $\nu(r)$ is given in Lemma \ref{the p-1 algebra}. So $\overline \Lambda^r(5) = \{(r), \lambda^1(r), \lambda^2(r), \mu^1(r), \mu^2(r), \nu(r) \}$. We choose the enumeration such that $\lambda^1(r) > \lambda^2(r)$ (in the dominance order) and likewise $\mu^1(r) > \mu^2(r)$. For each of these 6 weights we can easily compute the dimension of the corresponding simple $kS_r$-modules via Theorem \ref{main symm}. In Table 1 we have illustrated the results for $ r \leq 10$. In this table the numbers in row $r$ (listed in the above order) are the dimensions of these $6$ simple $kS_r$-modules. When $r$ is small some weights are repeated, e.g. for $r = 3$ we have $(3) = \lambda^1(3)$, $\lambda^2(3) = \mu^1(3)$ and $\mu^2(3) = \nu(3)$. \vskip .5 cm \centerline { {\it Table 1. Dimensions of simple modules for $kS_r$ when $p= 5$}} \vskip .5cm \centerline{ \begin{tabular} { r| c | c c |c c | c } r &(r) & $\lambda^1(r)$ & $\lambda^2(r)$ & $\mu^1(r)$ & $\mu^2(r)$& $\nu(r)$ \\ \hline 1 & 1 & &1 & 1& & 1\\ 2 & 1 & 1 & 1 & 1 & 1 & 1 \\ 3 & 1& 1 & 2& 2 & 1 & 1 \\ 4 & 1 &3 & 2 & 2 & 3 & 1\\ 5 &1 & 3 & 5 & 3 & 5 &1\\ 6 & 1 & 8& 5& 8 &5 &1 \\ 7 &1 &8 &13 &8 & 13 & 1\\ 8 & 1& 21&13 &13 & 21 & 1\\ 9 & 1 &21&34&34 & 21 & 1\\ 10 & 1 &55&34&34 & 55 & 1\\ \end{tabular}} \vskip .5 cm The table can easily be extended using the following formulas. Set $a^j(r) = \dim D_r(\lambda^j(r)), \; j=1, 2$. Then Theorem \ref{main symm} gives $a^1(1) = 0, a^2(1) = 1 = a^2(2)$ and the following recursion rules $$ a^1(2r+1) = a^1(2r) = a^1(2r-1) + a^2(2r-1); \; a^2(2r+2) = a^2(2r+1) = a^1(2r) + a^2(2r)$$ for $r \geq 1$. Another way of phrasing this is that $a^2(1), a^1(2), a^2(3), a^1(4), a^2(5), a^1(6), \cdots $ is the Fibonacci sequence. The first equations above then determine the remaining numbers $a^j(r)$. Again we leave it to the reader to find the similar recursion for the dimension of the simple modules corresponding to the $\mu^j(r)$'s. Apart from the fact that the recursion rules coincide, we see no obvious representation theoretic explanation for the ``symmetry'' between the numbers involving $\lambda$'s and those involving $\mu$'s. \end{examplecounter} \vskip 1cm \section{Semisimple quotients of the Brauer algebras} \label{Brauer} In this section we shall apply our results from Section 2 to the symplectic and orthogonal groups.
This will allow us via Schur--Weyl duality for these groups to obtain certain semisimple quotients of the Brauer algebras over $k$ and to give an algorithm for finding the dimensions of the corresponding simple modules. The Brauer algebra $\mathcal B_r(\delta)$ with parameter $\delta \in k$ may be defined via generators and relations, see e.g. \cite[Introduction]{DDH}. Alternatively, we may consider it first as just a vector space over $k$ with basis consisting of the so-called Brauer diagrams with $r$ strands. Then one defines a multiplication of two such diagrams by stacking the second diagram on top of the first, see e.g. \cite{B} or \cite[Section 4]{GL}. This gives an algebra structure on $\mathcal B_r(\delta)$. We have Brauer algebras for an arbitrary parameter $\delta \in k$. However, the ones that are connected with our endomorphism algebras are those where $\delta$ is the image in $k$ of an integer, i.e. where $\delta$ belongs to the prime field ${\mathbb F}_p \subset k$. It follows from the various versions of the Schur--Weyl duality (see below) that in this case $\mathcal B_r(\delta)$ surjects onto the endomorphism algebra of the $r$'th tensor power of the natural modules for appropriate symplectic and orthogonal groups. \subsection{Quotients arising from the symplectic groups} \label{quotients sp} We shall use the notation from Section \ref{Sp}. In particular, $V$ will be a $2n$-dimensional symplectic vector space, which we from now on denote $V_n$. We set $G_n = SP(V_n)$ and $E_n^r = \mathrm{End}_{G_n}(V_n^{ \otimes r})$. Consider now the fusion summand $V_n^{\underline \otimes r}$ of $V_n^{\otimes r}$ with endomorphism ring $\overline E_n^r = \mathrm{End}_{G_n}(V_n^{\underline \otimes r})$. Then exactly as in Proposition \ref{surj GL} we obtain: \begin{prop} \label{quotients Sp} For all $n$ and $r$ the algebra $\overline E_n^r$ is a semisimple quotient of $E_n^r$. \end{prop} Recalling the description of $A_m(p)$ from Section 2.3 and using Remark 3 we see that $\overline E_n^r = 0$ unless $2n \leq p-1$. In contrast with the $GL(V)$ case we usually do not have $\overline E_1^r = k$. In fact, $G_1 = SL_2(k)$ and the tensor powers of the natural module for $G_1$ therefore typically have many summands. On the other hand, the top non-zero term is always equal to $k$: \begin{prop} $\overline E_{(p-1)/2}^r = k$ for all $r$. \end{prop} \begin{proof} In this proof we drop the subscript $ {(p-1)/2}$ on $V$ and $\Delta$. We have that $V \otimes V $ has a $\Delta$-filtration with factors $\Delta(2 \epsilon_1), \Delta(\epsilon_1 + \epsilon_2)$ and $\Delta(0) = k$. The first two of these have highest weights on the upper wall of $A_{(p-1)/2}(p)$ whereas the highest weight $0$ of the last term belongs to $A_{(p-1)/2}(p)$. It follows that $V \underline \otimes V = k$. Hence, $$V^{\underline \otimes r} = \begin{cases} V \text { if $r$ is odd,} \\ k \text { if $r $ is even.} \end{cases}$$ The claim follows. \end{proof} The analogue of Proposition \ref{inductive formula} is \begin{prop} \label{inductive formula Sp} Let $m \geq 1$ and suppose $\lambda \in A_m(p)$.
Then $$(V_m^{\otimes r}: T_m(\lambda)) = \sum_{i: \lambda \pm \epsilon_i \in A_m(p)} (V_m^{\otimes (r-1)}: T_m(\lambda \pm \epsilon_i)).$$ \end{prop} \begin{proof} As $\epsilon_1$ is minuscule we have for any $\lambda \in X_m^+$ that the $\Delta$-factors in $\Delta(\lambda) \otimes V_m$ are those with highest weights $\lambda + \mu$ where $\mu$ runs through the weights of $V_m$ (ignoring possible $\mu$'s for which $\lambda + \mu$ belong to the boundary of $X_m^+$). Likewise, if $\lambda \in A_m(p)$ then the same highest weights all belong to the closure of $A_m(p)$. Hence the fusion product $\Delta(\lambda) \underline \otimes V_m$ is the direct sum of all $\Delta_m(\lambda + \mu)$ for which $\lambda + \mu \in A_m(p)$. As the possible $\mu$'s are the $\pm \epsilon_i$ (each having multiplicity $1$) we get the formula. \end{proof} Recall now the Schur--Weyl duality theorem for $SP(V)$, see \cite{DDH}. \begin{thm} \label{Schur-Weyl Sp} There is an action of $\mathcal B_r(-2n)$ on $V_n^{\otimes r}$ which commutes with the action of $G_n$. The corresponding homomorphism $\mathcal B_r(-2n) \rightarrow E_n^r$ is surjective for all $n$ and for $n\geq r$ it is an isomorphism. \end{thm} The simple modules for $\mathcal B_r(\delta)$ are parametrized by the $p$-regular partitions of $r, r-2, \cdots$, see \cite[Section 4]{GL}, and we shall denote them $D_{\mathcal B_r(\delta)}(\lambda)$. This parametrization holds for any $\delta \in k$. However, in this section we only consider the case where $\delta$ is the image in $k$ of a negative even number. We identify $\delta$ with an integer in $[0, p-1]$. Assume $\delta$ is odd and define the following subsets of weights $$ \overline \Lambda^r(\delta, p) = (\Lambda^r \cup \Lambda^{r-2} \cup \cdots ) \cap A_{(p-\delta)/2}(p).$$ So if $\delta < p-2$ then $ \overline \Lambda^r(\delta, p)$ consists of partitions $\lambda = (\lambda_1, \lambda_2, \cdots , \lambda_{(p-\delta)/2})$ with $|\lambda| = r - 2i$ for some $i \leq r/2$ which satisfy $\lambda_1 + \lambda_2 \leq \delta$. On the other hand, $\overline \Lambda^r(p-2, p) = \{(r-2i) | r-p+2 \leq 2i \leq r\}$. Note that all partitions in $\overline \Lambda^r(\delta, p)$ are $p$-regular. \begin{thm} \label{main brauer Sp} Let $r > 0$ and consider an odd number $\delta \in [0,p-1]$. Suppose $\lambda \in \overline \Lambda^r(\delta, p)$. Then the dimension of the simple $\mathcal B_r(\delta)$-module $D_{\mathcal B_r(\delta)}(\lambda)$ is recursively determined by $$ \dim D_{\mathcal B_r(\delta)}(\lambda) = \sum_{i: \lambda \pm \epsilon_i \in \overline \Lambda^{(r-1)}(\delta, p)} \dim D_{\mathcal B_{r-1}(\delta)}(\lambda \pm \epsilon_i).$$ \end{thm} \begin{proof} Combining Theorem \ref{Schur-Weyl Sp} with Proposition \ref{quotients Sp} we see that $\overline E_{(p-\delta)/2}^r$ is a semisimple quotient of $\mathcal B_r(\delta)$. Then the theorem follows from Proposition \ref{inductive formula Sp} by recalling that the dimensions of the simple modules for $\overline E_{(p-\delta)/2}^r$ coincide with the tilting multiplicities in $V_{(p-\delta)/2}^{\otimes r}$, see (\refeq{dim simple/tilting}). \end{proof} \begin{remarkcounter} If $n \equiv (p-\delta)/2\; (\mathrm{mod } p)$ for some odd number $\delta \in [0,p-1]$, then $-2n \equiv \delta \; (\mathrm{mod }\; p)$. Hence, the theorem describes a class of simple modules for $\mathcal B_r(-2n)$ for all such $n$. \end{remarkcounter} \begin{examplecounter} \label{brauer p=7} Consider $p = 7$. Then the relevant $\delta$'s are $5, 3$ and $1$.
The weight set $\overline \Lambda^r(5,7)$ contains $3$ elements (except for $r < 4$ where there are fewer) $\lambda^1(r), \lambda^2(r), \lambda^3(r)$ listed in descending order, namely $(4), (2), (0)$, when $ r$ is even, and $(5), (3), (1)$, when $r$ is odd. Likewise, $\overline \Lambda^r(3,7)$ contains $3$ elements (except for $r = 1$) $\mu^1(r), \mu^2(r), \mu^3(r)$ listed in descending order, namely $(2,0), (1,1), (0,0)$, when $ r$ is even, and $(3,0), (2,1), (1,0)$, when $r$ is odd. Finally, $\overline \Lambda^r(1,7)$ consists of a unique element $\nu(r)$, namely $\nu(r) = (0,0,0)$, when $r$ is even, and $\nu(r) =(1,0,0)$, when $r$ is odd. In Table 2 we have listed the dimensions of the simple modules for $\mathcal B_r(\delta) $ for $r \leq 10$. These numbers are computed recursively using Theorem \ref{main brauer Sp}. \end{examplecounter} \eject \centerline { { \it Table 2. Dimensions of simple modules for $\mathcal B_r(\delta)$ when $p= 7$ and $\delta = 5, 3, 1$. }} \vskip .5cm \centerline { \begin{tabular}{ r| c c c |c c c |c| c c c c c c c c c c} $\delta$ &&5&&&3&&1& \\ \hline r & $\lambda^1(r)$ & $\lambda^2(r)$ & $\lambda^3(r)$ & $\mu^1(r)$ & $\mu^2(r)$ & $\mu^3(r)$ & $\nu(r)$ \\ \hline 1 & & & 1 & & &1 & 1& \\ 2 & & 1 &1 & 1 & 1 & 1 & 1 \\ 3 & & 1& 2& 1 & 2 & 3 & 1 \\ 4 & 1 &3 & 2 & 6 & 5 & 3& 1 \\ 5 &1 & 4 & 5 & 6 & 11 &14 & 1\\ 6 & 5 & 9& 5& 31&25 &14 & 1 \\ 7 &5 &14 &14 &31& 56 & 70 & 1\\ 8 & 19& 28&14 &157 & 126 &70 & 1\\ 9 & 19 &47&42&157 & 283 & 353 & 1\\ 10 & 66 &89&42&793 & 636 & 353 & 1\\ \end{tabular}} \vskip 1 cm \subsection{Quotients arising from the orthogonal groups} In this section we consider the orthogonal groups. Again we shall see that the very same methods as we used for general linear groups in Section 4 apply in this case. We shall use the notation from Section \ref{O}. In particular, $V$ will be a vector space with a non-degenerate symmetric bilinear form. If $\dim V$ is odd, we write $\dim V = 2n +1$ and set $V_n = V$ and $G_n = O(V)$. Likewise, if $\dim V$ is even, we write $\dim V = 2n$ and set $V_n = V$ and $G_n = O(V_n)$. In both cases we denote by $E_n^r$ the endomorphism algebra $ \mathrm{End}_{G_n}(V_n^{ \otimes r})$ and by $\overline E_n^r$ the algebra $\mathrm{End}_{G_n}(V_n^{\underline \otimes r})$. As in the general linear and the symplectic case we have: \begin{prop} \label{quotients O} For all $n$ and $r$ the algebra $\overline E_n^r$ is a semisimple quotient of $E_n^r$. \end{prop} Recalling the description of $A_m(p)$ from Section \ref{O} we observe: \begin{remarkcounter} \label{orthogonal barE} \begin{enumerate} \item [a)]By Remarks \ref{A for type B} and \ref{A for type D} we have $\overline E_n^r$ = 0 unless $2n < p-2$ in the odd case, respectively $2n < p+2$ in the even case. \item [b)] In the even case we get $\overline E_{(p+1)/2}^r = k$ for all $r$ using the same argument as in the symplectic case. On the other hand, this argument does not apply to the odd case, where in fact the highest term $\overline E_{(p-3)/2}^r$ is usually not $k$ (this is illustrated in Example \ref{brauer2 p=7} below). \end{enumerate} \end{remarkcounter} The Schur--Weyl duality for orthogonal groups \cite[Theorem 1.2]{DH} says in particular: \begin{thm} \label{Schur-Weyl O} Set $\delta = \dim V_n$. There is an action of $\mathcal B_r(\delta)$ on $V_n^{\otimes r}$ which commutes with the action of $G_n$. The corresponding homomorphism $\mathcal B_r(\delta) \rightarrow E_n^r$ is surjective for all $n$. 
\end{thm} \begin{remarkcounter} The Schur--Weyl duality for orthogonal groups gives rise to isomorphisms for large enough $n$, see e.g. \cite[Section 3.4]{AST2}. We shall not need this here. \end{remarkcounter} We now divide our discussion into the odd and even cases. \subsubsection{Type B} In the odd case where $G_n$ has type $B_n$ our methods lead to the higher Jones quotient $\overline E_m^r$ of $\mathcal B_r(2m+1)$ for $1 \leq m \leq (p-3)/2$. Noting that the Brauer algebras in question are those with an odd $\delta $ lying between $3$ and $p-2$ which we have already dealt with in Section \ref{quotients sp}, we shall leave most details to the reader. However, we do want to point out that the inductive formula for the dimensions of the simple modules for $\overline E_n^r$ in this case is more complicated than in the symplectic case. The reason is that for type $B$ the highest weight for the natural module is not minuscule. This means that instead of the direct analogue of Proposition \ref{inductive formula} we need to use the following general formula (with notation as in Section 2). \begin{thm} (\cite[Equation 3.20(1)]{AP}). Let $G$ be an arbitrary reductive group over $k$ and suppose $Q$ is a tilting module for $G$. If $\lambda$ is a weight belonging to the bottom dominant alcove $A(p)$, then $$ (Q:T(\lambda)) = \sum_{w} (-1)^{\ell (w)} (Q:\Delta(w \cdot \lambda)),$$ where the sum runs over those $w \in W_p$ for which $w \cdot \lambda \in X^+$. \end{thm} \begin{examplecounter} \label{brauer2 p=7} Consider $p =7$. Then type $B$ leads to higher Jones algebras of $\mathcal B_r(3)$ and $\mathcal B_r(5)$. The reader may check that the recursively derived dimensions for the class of simple modules in these cases match (with proper identification of the labeling) with those listed in Table 2. Note in particular that to get those for $\mathcal B_r(3)$ we need to decompose $V_1^{\underline \otimes r}$ into simple modules for $G_1$. The Lie algebra for $G_1$ is $\mathfrak{sl}_2$ and the natural $G_1$-module $V_1$ identifies with the simple $3$-dimensional $SL_2$-module. \end{examplecounter} \subsubsection{Type D} In the even case $G_n$ has type $D$. The module $V_n$ equals $\Delta_n(\epsilon_1)$ and its highest weight $\epsilon_1$ is minuscule. This means that we have \begin{prop} \label{inductive formula O} Let $n \geq 1$ and suppose $\lambda \in A_n(p)$. Then $$(V_n^{\otimes r}: T_n(\lambda)) = \sum_{i: \lambda \pm \epsilon_i \in A_n(p)} (V_n^{\otimes (r-1)}: T_n(\lambda \pm \epsilon_i)).$$ \end{prop} \begin{proof} Completely analogous to the proof of Proposition \ref{inductive formula Sp}. \end{proof} Assume now $\delta \in [2, p+1]$ is even and define the following subsets of weights $$ \overline \Lambda^r(2, p) = \{(r-2i) | 0 \leq r-2i \leq p-2\},$$ $$ \overline \Lambda^r(4, p) =\{(\lambda_1, \lambda_2) \in X_2^+ | (\lambda_1,|\lambda_2|) \in \Lambda^{r-2i} \text { for some $i$ with } 0 \leq r-2i \leq p-2\}, $$ and for $\delta > 4$ $$ \overline \Lambda^r(\delta, p) = \{ (\lambda_1, \lambda_2, \cdots , \lambda_{\delta/2}) \in X_{\delta/2}^+ | (\lambda_1, \cdots , |\lambda_{\delta/2}|) \in \Lambda^{r-2i} \text { for some } i \leq r/2 \text { and } \lambda_1 + \lambda_2 \leq p-\delta + 2\}. $$ \vskip .3 cm \begin{thm} \label{main brauer O} Let $r > 0$ and consider an even number $\delta \in [0,p+1]$. Suppose $\lambda \in \overline \Lambda^r(\delta, p)$. 
Then the dimension of the simple $\mathcal B_r(\delta)$-module $D_{\mathcal B_r(\delta)}(\lambda)$ is recursively determined by $$ \dim D_{\mathcal B_r(\delta)}(\lambda) = \sum_{i: \lambda \pm \epsilon_i \in \overline \Lambda^{(r-1)}(\delta, p)} \dim D_{\mathcal B_{r-1}(\delta)}(\lambda \pm \epsilon_i).$$ \end{thm} \begin{proof} Combining Theorem \ref{Schur-Weyl O} with Proposition \ref{quotients O} we see that $\overline E_{\delta/2}^r$ is a semisimple quotient of $\mathcal B_r(\delta)$. Then the theorem follows from Proposition \ref{inductive formula O} by recalling that the dimensions of the simple modules for $\overline E_{\delta/2}^r$ coincide with the tilting multiplicities in $V_{\delta/2}^{\otimes r}$, see (\refeq{dim simple/tilting}). \end{proof} \begin{remarkcounter} If $n \equiv \delta/2 \; (\mathrm{mod } \; p)$ for some even number $\delta \in [2,p+1]$, then $2n \equiv \delta \; (\mathrm{mod } \;p)$. Hence, the theorem describes a class of simple modules for $\mathcal B_r(2n)$ for all such $n$. \end{remarkcounter} \begin{examplecounter} Consider $p = 7$. Then the relevant $\delta$'s are $2, 4, 6, 8$. By Remark \ref{orthogonal barE}b we have that the higher Jones quotient algebra for $\mathcal B_r(8)$ is the trivial algebra $k$ (alternatively, observe that $\mathcal B_r(8) = \mathcal B_r(1)$ which we dealt with in Example \ref{brauer p=7}). At the other extreme the (higher) Jones quotient of $\mathcal B_r(2)$ is also a quotient of the Temperley--Lieb algebra $TL_r(2)$. This case is dealt with in \cite[Proposition 6.4]{A17}. So here we only consider the two remaining cases $\delta = 4$ and $\delta = 6$. We have $$ \overline \Lambda^1(4,7) = \{(1,0)\},$$ $$ \overline \Lambda^2(4,7) = \{(2,0), (1,1), (1,-1), (0,0)\},$$ $$ \overline \Lambda^3(4,7) = \{(3,0), (2,1), (2,-1), (1,0)\},$$ $$ \overline \Lambda^r(4,7) = \begin{cases} \{(4,0), (2,2), (2,-2), (3,1), (3,-1), (2,0), (1,1), (1,-1), (0,0)\} \text { if $r \geq 4$ is even,} \\ \{(5,0), (3,2), (3,-2), (4,1), (4,-1), (3,0), (2,1), (2,-1), (1,0)\} \text { if $r \geq 5$ is odd.} \end{cases}$$ In Table 3 we have denoted these weights $\lambda^1(r), \cdots , \lambda^9(r)$. Likewise, we have $$ \overline \Lambda^1(6,7) = \{(1,0,0)\},$$ $$ \overline \Lambda^2(6,7) = \{(2,0,0), (1,1,0), (0,0,0)\},$$ $$ \overline \Lambda^r(6,7) = \begin{cases} \{(3,0,0), (2,1,0), (1,1,1), (1,1,-1), (1,0,0)\} \text { if $r \geq 3$ is odd.} \\ \{(2,1,1), (2,1,-1), (2,0,0), (1,1,0),(0,0,0)\} \text { if $r \geq 4$ is even.} \\ \end{cases}$$ In Table 3 we have denoted these weights $\mu^1(r), \cdots , \mu^5(r)$. In this table we have then listed the dimensions (computed via the algorithm in Theorem \ref{main brauer O}) for the simple modules for $\mathcal B_r(4)$, respectively $\mathcal B_r(6)$ corresponding to these sets of weights for $r \leq 10$. \eject \centerline {{ \it Table 3.
Dimensions of simple modules for $\mathcal B_r(\delta)$ when $p= 7$ and $\delta = 4 $ and $6$.}} \vskip .5cm \noindent \begin{tabular}{ r| c c c c c c c c c | c c c c c c} &&& &&$\delta =4$&&&&&&&$\delta = 6$&& \\ \hline r & $\lambda^1(r)$ & $\lambda^2(r)$ & $\lambda^3(r)$&$\lambda^4(r)$&$\lambda^5(r)$&$\lambda^6(r)$&$\lambda^7(r)$&$\lambda^8(r)$&$\lambda^9(r)$ & $\mu^1(r)$& $\mu^2(r)$ & $\mu^3(r)$ &$\mu^4(r)$ & $\mu^5(r)$ \\ \hline 1 & & & & &&&&&1 & & &&& 1& \\ 2 &&&&&&1 & 1 & 1 & 1 & &&1& 1 & 1 \\ 3 &&&&&&1& 2&2 & 4 & 1&2&1&1& 3 \\ 4 & 1 &2 & 2 & 3 & 3 & 9&6&6&4& 2 & 3 & 6 & 7 & 3 \\ 5 &1 & 5 & 5 & 4 & 4& 16 &20 & 20 &25 &6 & 18 &9&10&16 \\ 6 & 25& 25& 25& 45 & 45&81 & 45 & 45 & 25 &27&28&40&53 & 16& \\ 7 &25 &70 &70 &70& 70 & 196& 196 & 196 & 196 &40&148&80 &81 &109 \\ 8 & 361,& 266 & 266 & 532 & 532 & 784 & 392 & 392 & 196 &228 &229 &297 &418 &109 \\ 9 & 361&798& 798 & 893 & 893 & 2209 & 1974 & 1974 & 1764 &297&1172&646&647&824 \\ 10 & 4356 & 2772 & 2772 & 5874 & 5874 & 7921 & 3738 & 3738 & 1764 &1828&1829&2293&3289&824 \\ \end{tabular} \vskip .5 cm Together with Example \ref{brauer p=7} this example give a class of simple modules for Brauer algebras with parameter $\delta$ equal to any non-zero element of $\mathbb {F}_7$. \end{examplecounter} Note that in the above example we were in type $D_1 = A_1$, $D_2 = A_1 \times A_1$ or $D_3 = A_3$ and we could have deduced the results from the Type A case treated in Section 4. We shall now give another example illustrating type $ D_n$ computations with $n > 3$. \begin{examplecounter} Consider $p = 11$ and take $\delta = 10$. Then $\mathcal B_r(10)$ has the higher Jones quotient $\overline E_5^r$. If $r \geq 5$ the weight set $\overline \Lambda^r(10, 11)$ contains $7$ elements, namely $$ \{(1,1,1,1,-1), (2,1,1,1,0), (1,1,1,1,1), (3, 0,0,0,0), (2,1,0,0,0), (1,1,1,0,0), (1,0,0,0,0)\}$$ when $r$ is odd, and $$ \{(2,1,1,1, -1), (2,1,1,1,1), (2,1,1,0,0), (1,1,1,1,0), (2,0,0,0,0), (1,1,0,0,0), (0,0,0,0,0)\}$$ when $r$ is even. If $r \in \{1,2,3,4\}$, the set $\overline \Lambda^r(10, 11)$ consists of the last, the $3$ last, the $4$ last, and the $5$ last elements, respectively, in the above lists. In Table 4 we have listed the dimensions of the corresponding simple modules for $\mathcal B_r(10)$ for $r \leq 10$ using Theorem \ref{main brauer O}. We have denoted the above $7$ weights $\lambda^1(r), \cdots , \lambda^7(r)$ (in the given order). \end{examplecounter} \eject \centerline{ { \it Table 4. Dimensions of simple modules for $\mathcal B_r(10)$ when $p= 11$. }} \vskip .5cm \centerline { \begin{tabular}{ r| c c c c c c c|ccc} r & $\lambda^1(r)$ & $\lambda^2(r)$ & $\lambda^3(r)$ & $\lambda^4(r)$ & $\lambda^5(r)$ & $\lambda^6(r)$ &$\lambda^7(r)$ \\ \hline 1& & & & & & & 1& \\ 2 & && & & 1 & 1 & 1 & \\ 3 && & & 1& 2 & 1 & 3 & \\ 4 & &&3 & 1 & 6 & 6 & 3& \\ 5 &1&4 & 1 & 6 & 15 & 10 &15 & \\ 6 & 5&5 & 29& 16& 36&40 &15 & \\ 7 &21 &55 &21 &36 &105& 85 & 91 & \\ 8 &76 & 76& 245&97 &232 & 281 &91 & \\ 9 &173& 494 &173&232&568 & 623 & 604& \\ 10 &667& 667 &1685&840&1404 & 1795 & 604 & \\ \end{tabular}} \vskip 1 cm \section{Quantum Groups} In the remaining sections $k$ will denote an arbitrary field. Let $\mathfrak g$ denote a simple complex Lie algebra. Then there is a quantum group $U_q = U_q(\mathfrak g)$ (a quantized enveloping algebra over $k$) associated with $\mathfrak g$. 
We shall be interested in the case where the quantum parameter $q$ is a root of unity in $k$ and we want to emphasize that the quantum group we are dealing with is the Lusztig version defined via $q$-divided powers, see e.g. \cite[Section 0]{APW}. This means that we start with the ``generic'' quantum group $U_v = U_v(\mathfrak g)$ over ${\mathbb Q}(v)$ where $v$ is an indeterminate. Then we consider the ${\mathbb Z}[v,v^{-1}]$-subalgebra $U_{{\mathbb Z}[v,v^{-1}]}$ of $U_v$ generated by the quantum divided powers of the generators for $U_v$. When $q \in k\setminus 0$ we make $k$ into a ${\mathbb Z}[v,v^{-1}]$-algebra by specializing $v$ to $q$ and define $U_q$ as $U_q = U_{{\mathbb Z}[v,v^{-1}]} \otimes_{{\mathbb Z}[v,v^{-1}]} k$. This construction, of course, makes sense for arbitrary $q$, but if $q$ is not a root of unity all finite-dimensional $U_q$-modules are semisimple and our results are trivial. So in the following we always assume that $q$ is a root of unity and we denote by $\ell$ the order of $q$. When $\ell \in \{2, 3, 4, 6\}$ the (quantum) higher Jones algebras we introduce turn out to be trivial ($0$ or $k$) for all $\mathfrak g$ so we ignore these cases. We set $\ell' = \mathrm{ord}(q^2)$, i.e. $\ell' = \ell$, if $\ell$ is odd, and $\ell' = \ell/2$, if $\ell$ is even. In this section we shall very briefly recall some of the key facts about $U_q$ and its representations relevant for our purposes. As the representation theory for $U_q$ is in many ways similar to the modular representation theory for $G$ that we have been dealing with in the previous sections, we shall leave most details to the reader. However, we want to emphasize one difference: if the root system associated with $\mathfrak g$ has two different root lengths then the case of even $\ell$ is quite different from the odd case (the affine Weyl groups in question are not the same). This phenomenon is illustrated in \cite[Section 6]{AS} where the fusion categories for type $B$ as well as the corresponding fusion rules visibly depend on the parity of $\ell$. The difference will also be apparent in Section 6.3 below where for instance the descriptions of the bottom dominant alcoves in the type $C$ case considered there depend on the parity of $\ell$. Again, we start out with the general case and then specialize first to the general linear quantum groups, and then to the symplectic quantum groups. We omit treating the case of quantum groups corresponding to the orthogonal Lie algebras, because of the lack of a general version of Schur--Weyl duality in that case. \subsection{Representation theory for Quantum Groups} We have a triangular decomposition $U_q = U_q^- U_q^0U_q^+$ of $U_q$. If $n$ denotes the rank of $\mathfrak g$, then we set $X = {\mathbb Z}^n$ and identify each $\lambda \in X$ with a character of $U_q^0$ (see e.g. \cite[Lemma 1.1]{APW}). These characters extend to $B_q = U_q^-U_q^0$ giving us the $1$-dimensional $B_q$-modules $k_\lambda, \lambda \in X$. As in Section 1.1 we denote by $R$ the root system for $\mathfrak g$ and consider $R$ as a subset of $X$. The set $S$ of simple roots corresponds to the generators of $U_q^+$ and we define the dominant cone $X^+ \subset X$ as before. The Weyl group $W$ is still the group generated by the reflections $s_{\alpha}$ with $\alpha \in S$.
Define the bottom dominant alcove in $X^+$ by $$ A(\ell) = \begin{cases} \{\lambda \in X^+ | \langle \lambda + \rho, \alpha_0^{\vee} \rangle < \ell \} \text { if $\ell$ is odd,} \\ \{\lambda \in X^+ | \langle \lambda + \rho, \beta_0^{\vee} \rangle < \ell' \} \text { if $\ell$ is even.} \end{cases}$$ Here $\alpha_0$ is the highest short root and $\beta_0$ is the highest long root. The affine Weyl group $W_\ell$ for $U_q$ is then the group generated by the reflections in the walls of $A(\ell)$. Note that, when $\ell$ is odd, $W_\ell$ is the affine Weyl group (scaled by $\ell$) associated with the dual root system of $R$, whereas if $\ell$ is even, $W_\ell$ is the affine Weyl group (scaled by $\ell'$) for $R$, cf. \cite[Section 3.17]{AP}. Suppose $\lambda \in X^+$. Then we have modules $\Delta_q(\lambda), \nabla_q(\lambda), L_q(\lambda)$ and $T_q(\lambda)$ completely analogous to the $G$-modules in Section 2 with the same notation without the index $q$. The quantum linkage principle (see \cite{A03}) implies that if $L_q(\mu)$ is a composition factor of $\Delta_q(\lambda)$, then $\mu$ is strongly linked (by reflections from $W_\ell$) to $\lambda$. Likewise, if $\Delta_q(\mu)$ occurs in a Weyl filtration of $T_q(\lambda)$, then $\mu$ is strongly linked to $\lambda $. The quantum linkage principle then gives the identities $$ \Delta_q(\lambda) = \nabla_q(\lambda) = L_q(\lambda) = T_q(\lambda) \text { for all } \lambda \in A(\ell),$$ which will be crucial for us in the following. Suppose $Q$ is a general tilting module for $U_q$. Imitating the definitions in Section \ref{Fusion} we define the fusion summand and the negligible summand of $Q$ as follows $$ Q^{{\mathcal F}}= \bigoplus_{\lambda \in A(\ell)} T_q(\lambda)^{(Q:T_q(\lambda))} \text { and } Q^{\mathcal N} = \bigoplus_{\lambda \in X^+\setminus A(\ell)}T_q(\lambda)^{(Q:T_q(\lambda))}.$$ The exact same arguments as in the modular case then give us the quantum analogue of Theorem \ref{fusion-quotient}: \begin{thm} \label{q-fusion-quotient} Let $Q$ be an arbitrary tilting module for $U_q$. Then the natural map $\phi: \mathrm{End}_{U_q}(Q) \rightarrow \mathrm{End}_{U_q}(Q^{{\mathcal F}})$ is a surjective algebra homomorphism. The kernel of $\phi$ equals $$\{h \in \mathrm{End}_{U_q}(Q) | \mathrm{Tr}_q(i_\lambda \circ h \circ \pi_\lambda) = 0 \text { for all } i_\lambda \in \mathrm{Hom}_{U_q}( T_q(\lambda), Q), \; \pi_\lambda \in \mathrm{Hom}_{U_q}(Q, T_q(\lambda)),\; \lambda \in X^+\}.$$ \end{thm} We also have a quantum fusion category (still denoted $\mathcal F$) and a fusion tensor product $\underline \otimes$ on it, see \cite[Section 4]{A92}. This leads to an analogue of Corollary 2.4. \begin{cor} \label{q-fusion} Let $T$ be an arbitrary tilting module for $U_q$. Then for any $r \in {\mathbb Z}_{\geq 0}$ the natural homomorphism $\mathrm{End}_{U_q}(T^{\otimes r}) \rightarrow \mathrm{End}_{U_q}(T^{\underline \otimes r})$ is surjective. \end{cor} All of the above easily adapts to the case where we replace the simple Lie algebra $\mathfrak g$ by the general linear Lie algebra $\mathfrak {gl}_n$ and we shall explore this case further in the next section. Finally, the cellular algebra theory recalled in Section \ref{cellular} carries over verbatim. Alternatively, use the quantum framework from \cite[Section 5]{AST1} directly. \subsection{The General Linear Quantum Group} \label{general linear q-group} Let $n \geq 1$ and consider the general linear Lie algebra $\mathfrak {gl}_n$.
The generic quantum group over ${\mathbb Q}(v)$ associated to $\mathfrak {gl_n}$ has a triangular decomposition in which the $0$ part identifies with a Laurent polynomial algebra ${\mathbb Q}(v)[K_1^{\pm 1}, \cdots , K_n^{\pm 1}]$. If $\lambda = (\lambda_1, \cdots , \lambda_n) \in X_n = {\mathbb Z}^n$ then $\lambda$ defines a character of this algebra which sends $K_i$ into $v^{\lambda_i}$. In particular, the element $\epsilon_i \in X_n$ with $1$ as its $i$-th entry and $0$'s elsewhere defines the character which sends $K_i$ to $v$ and all other $K_j$'s to $1$. We then have $\lambda = \sum_i \lambda_i \epsilon_i$. Set $U_{q,n}$ equal to the quantum group for $\mathfrak {gl}_n$ over $k$ with parameter a root of unity $q \in k$. Then we still get for $\lambda \in X_n$ a character of $U_q^0$, see \cite[Section 9]{APW}. If we denote by $V_{q,n}$ the $n$-dimensional vector representation of $U_{q,n}$, then (in analogy with the classical case) $V_{q,n}$ has weights $\epsilon_1, \cdots, \epsilon_n$, all with multiplicity $1$. Moreover, we may (for all $\ell$) identify $V_{q,n}$ with $\Delta_q(\epsilon_1) = \nabla_q(\epsilon_1) = L_q(\epsilon_1) = T_q(\epsilon_1)$. The bottom alcove in $X_n$ is now denoted $A_n(\ell)$ and given by $$ A_n(\ell) = \{\lambda \in X_n | \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n \text { and } \lambda_1 - \lambda_n \leq \ell' -n\}.$$ As noted above, $V_{q,n}$ is a tilting module. Hence, so are $V_{q,n}^{\otimes r}$ as well as the corresponding fusion summands $V_{q,n}^{\underline \otimes r}$ for all $r \in Z_{\geq 0}$. We set $E_{q,n}^r = \mathrm{End}_{U_{q,n}}(V_{q,n}^{\otimes r})$ and $\overline E_{q,n}^r = \mathrm{End}_{U_{q,n}}(V_{q,n}^{\underline \otimes r})$. These endomorphism algebras are then cellular algebras, and $\overline E_{q,n}^r$ is in fact semisimple (because $V_{q,n}^{\underline \otimes r}$ is a semisimple $U_{q,n}$-module). Moreover, by Corollary \ref{q-fusion} we have: \begin{equation} \label {E to barE} \text {The natural homomorphism } E_{q,n}^r \rightarrow \overline E_{q,n}^r \text { is surjective.} \end{equation} Arguing as in Section \ref{tensor powers} we also get: \begin{equation} \label{surj n>m} \text {The ``restriction" homomorphisms }E_{q,n}^r \rightarrow E_{q,m}^r \text { are surjective for all } n \geq m \text { and all } r. \end{equation} \subsection{Quantum Symplectic Groups} \label{q-sp} Set now $U_{q,n}$ equal to the quantum group corresponding to the simple Lie algebra $\mathfrak {sp}_{2n}$ of type $C_n$. The vector representation $V_{q,n} = \Delta_q(\epsilon_1)$ is then a tilting module for $U_{q,n}$. As in the corresponding classical case it has weights $\pm \epsilon_1, \cdots , \pm \epsilon_n$ . The bottom alcove in $X_n$ is now denoted $A_n(\ell)$ and given by $$ A_n(\ell) = \begin{cases} \{\lambda \in X_n | \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n \geq 0 \text { and } \lambda_1 + \lambda_2 \leq \ell - 2n \} \text { if $\ell$ is odd,}\\ \{\lambda \in X_n | \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n \geq 0 \text { and } \lambda_1\leq \ell' -n-1\} \text { if $\ell$ is even.} \end{cases}$$ In both the even and the odd case we have $A_n(\ell) \neq \emptyset$ if and only if $\ell > 2n$. In the odd case $\epsilon_1$ belongs to $A_n(\ell)$ for $n = 1, 2, \cdots , (\ell -1)/2$, whereas in the even case the same is true for $n=1, 2, \cdots , (\ell -4)/2$. 
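Indeed, for $\lambda = \epsilon_1$ the defining inequalities read $\lambda_1 + \lambda_2 = 1 \leq \ell - 2n$ in the odd case (reading $\lambda_2 = 0$ when $n = 1$) and $\lambda_1 = 1 \leq \ell' - n - 1$ in the even case, i.e. $n \leq (\ell - 1)/2$, respectively $n \leq (\ell - 4)/2$. For example, for $\ell = 10$ we have $\ell' = 5$ and $\epsilon_1 \in A_n(10)$ precisely for $n = 1, 2, 3$.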
Again in this case we get (with $E_{q,n}^r = \mathrm{End}_{U_{q,n}}(V_{q,n}^{\otimes r})$ and $\overline E_{q,n}^r = \mathrm{End}_{U_{q,n}}(V_{q,n}^{\underline \otimes r})$): \begin{equation} \label{surj E to Ebar qsp} \text {The natural homomorphisms } E_{q,n}^r \rightarrow \overline E_{q,n}^r \text { are surjective for all $n, r$.} \end{equation} \section{A class of simple modules for the Hecke algebras of symmetric groups} We continue in this section to assume that $k$ is an arbitrary field and that $q \in k$ is a root of unity of order $\ell$. Let $r$ be a positive integer and denote by $H_r(q)$ the Hecke algebra of the symmetric group $S_r$ with parameter $q\in k$. Using the notation from Section \ref{general linear q-group} we then have the quantum Schur--Weyl duality: \begin{thm} \label{q-Schur-Weyl} The Hecke algebra $H_r(q)$ acts on the tensor power $V_{q,n}^{\otimes r}$ via the quantum $R$ matrix for $U_{q,n}$. This action commutes with the $U_{q,n}$-module structure on $V_{q,n}^{\otimes r}$ giving homomorphisms $H_r(q) \rightarrow E_{q,n}^r$ which are surjective for all $n$ and isomorphisms for $n \geq r$. \end{thm} This is the main result of \cite{DPS}. \begin{cor} \label{q-Jones} Suppose $r \geq \ell$. Then the Hecke algebra $H_r(q)$ has the following semisimple quotients $\overline E_{q,1}^r, \overline E_{q,2}^r, \cdots ,\overline E_{q, \ell -1}^r$. \end{cor} \begin{proof} By Theorem \ref{q-Schur-Weyl} we have $H_r(q) \simeq E_{q,r}^r$. Then the corollary follows from (\refeq{surj n>m}) and (\refeq{E to barE}). \end{proof} \begin{remarkcounter} \begin{enumerate} \item The semisimple quotients of $H_r(q)$ listed in Corollary \ref{q-Jones} are obvious generalisations of the Jones algebras introduced in \cite{ILZ}, and as explained in the introduction this is the reason why we use the name `higher Jones algebras' in this paper. \item In analogy with the modular case we see that $ E_{q,1}^r = k = \overline E_{q,\ell-1}^r $ for all $r$. \end{enumerate} \end{remarkcounter} The simple modules for $H_r(q)$ are parametrized by the set of $\ell$-regular partitions of $r$. We denote the simple $H_r(q)$-module associated with such a partition $\lambda$ by $D_{q,r}(\lambda)$. Our aim is to derive an algorithm for computing the dimensions of a special class of simple $H_r(q)$-modules, namely those coming from the higher Jones algebras. In analogy with the notation in Section \ref{class of simple} we set $$\overline \Lambda^r(\ell) = \{\lambda = (\lambda_1, \lambda_2, \cdots ,\lambda_m) | \lambda \text { is a partition of $r$ and } \lambda \in A_m(\ell) \text { for some } m < \ell' \}.$$ So $\overline \Lambda^r(\ell) $ consists of those partitions of $r$ which have $m < \ell'$ non-zero terms and satisfy $\lambda_1 - \lambda_m \leq \ell' - m$. Clearly, the partitions in $\overline \Lambda^r(\ell)$ are all $\ell'$-regular. The result in Proposition \ref{inductive formula} carries over unchanged to the quantum case and leads to the following analogue of Theorem \ref{main symm}. \begin{thm} \label{main q-symm} Let $r > 0$ and suppose $\lambda \in \overline \Lambda^r(\ell)$. Then the dimension of the simple $H_r(q)$-module $D_{q,r}(\lambda)$ is recursively determined by $$ \dim D_{q,r}(\lambda) = \sum_{i: \lambda - \epsilon_i \in \overline \Lambda^{(r-1)}(\ell)} \dim D_{q,r-1}(\lambda - \epsilon_i).$$ \end{thm} This theorem allows us to determine the dimensions of a class of simple modules for $H_r(q)$ just like we did for symmetric groups in Section \ref{class of simple}.
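Concretely, these dimensions may be computed with the illustrative script given after Theorem \ref{main symm}, once $p$ is replaced by $\ell'$ in its two alcove tests $m < p$ and $\lambda_1 - \lambda_m \leq p - m$.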
The only difference is that $\ell$ in contrast to $p$ may now take any value in ${\mathbb Z}_{>0}$. We illustrate by a couple of examples. \begin{examplecounter} Let $\ell = 8$, i.e. $q$ is a root of unity of order $8$. In this case $\overline \Lambda^r(8)$ consists of the trivial partition of $r$ (corresponding to the trivial module for $H_r(q)$), the unique $3$-parts partition $\nu$ of $r$ with $\nu_1 - \nu_3 \leq 1$, and the $2$ parts partitions $(s+1,s-1)$ and $(s,s)$, if $r = 2s$ is even, respectively $(s, s-1)$, if $r = 2s-1$ is odd. It is easy to deduce from Theorem \ref{main q-symm} that the partitions with $2$ parts all correspond to simple $H_r(q)$-modules of dimension $2^s$. \end{examplecounter} \begin{examplecounter} Consider the case $\ell = 12$. Here $\overline \Lambda^r(12)$ consists of the trivial partition $(r)$, the unique partition $\nu$ with $5$ parts satisfying $\nu_1 - \nu_5 \leq 1$, the partitions $\lambda^1(r), \lambda^2(r), \lambda^3(r)$ with $2$ parts $$ \{\lambda^1(r), \lambda^2(r), \lambda^3(r)\} = \begin{cases} \{(s+2, s-2), (s+1, s-1), (s,s)\} \text { if } r = 2s, \\ \{(s+2, s-1), (s+1,s)\} \text { if } r = 2s +1; \end{cases}$$ the partitions $\mu^1(r), \mu^2(r), \mu^3(r), \mu^4(r)$ with $3$ parts $$ \{\mu^1(r), \mu^2(r), \mu^3(r), \mu^4(r)\} = \begin{cases} \{(s+2,s-1,s-1),(s+1, s+1, s-2), (s+1, s, s-1), (s,s,s)\}\\ \text { if } r = 3s, \\ \{(s+2,s, s-1), (s+1,s+1, s-1), (s+1,s,s)\} \text { if } r = 3s +1,\\ \{(s+2,s+1, s-1), (s+2,s, s), (s+1,s+1,s)\} \text { if } r = 3s +2;\\ \end{cases}$$ and the partitions $\eta^1(r), \eta^2(r), \eta^3(r)$ with $4$ parts $$ \{\eta^1(r), \eta^2(r), \eta^3(r)\} = \begin{cases} \{(s+1,s+1,s-1,s-1), (s+1,s,s, s-1), (s,s,s,s)\} \text { if } r = 4s, \\ \{(s+1,s+1,s, s-1), (s+1,s,s, s)\} \text { if } r = 4s +1,\\ \{(s+2,s, s,s), (s+1,s+1, s+1,s-1), (s+1,s+1,s,s)\} \text { if } r = 4s +2,\\ \{(s+2,s+1,s,s), (s+1,s+1,s+1,s) \} \text { if } r = 4s + 3.\end{cases}$$ Here the listed partitions involving a zero or a negative number (these occur only for small $r$) should be deleted. In these cases as well as in the cases where a set with only $2$ elements is listed it is understood that the corresponding or missing $\lambda$, $ \mu$ or $\eta$ does not occur. We can use Theorem \ref{main q-symm} to compute the dimensions of the simple modules for $ H_r(q)$ where $q$ is a root of unity of order $12$. In Table 5 we have listed the results for the first few values of $r$. As we know that both the trivial partition and the partition $\nu$ always correspond to simple modules of dimension $1$ we have not included these two partitions in the table. \end{examplecounter} \eject \centerline{ { \it Table 5. Dimensions of simple modules for $H_r(q)$ with $\ell = 12$.}} \vskip .5cm \centerline {\begin{tabular}{ r| c c c| c c c c | c c c| c} & $\lambda^1(r)$ & $\lambda^2(r)$ & $\lambda^3(r)$ & $\mu^1(r)$ & $\mu^2(r)$& $\mu^3(r)$& $\mu^4(r)$ &$\eta^1(r)$ & $\eta^2(r)$ & $\eta^3(r)$ \\ \hline 1 & &&1 & & & 1 && &&1 \\ 2 & & 1 &1 & &1&1 & &1&&1& \\ 3 && 1 &2 & 1 & & 2 & 1 &2 &1&&\\ 4 &1& 3 & 2 & 3& 2 &3&&2& 3& 1&\\ 5 & 4 & 5& & 5 &6 &5 &&5&4&& \\ 6 &4 &9 &5& 6 & 5 &16&5 & 4 &5& 9&\\ 7 & 13& 14&&22 & 5 &21 && 13 & 14&\\ 8 & 13 &27&14&27&43 & 26 & & 13 &27 &14&\\ 9 & 40 &41&&43 & 27 & 96 & 26 & 40 & 41&&\\ 10 & 40 &81&41&139 & 123 & 122 & & 41 & 40 & 81\\ \end{tabular}} \section{A class of simple modules for BMW-algebras} Let $x, v, z$ be three indeterminates and set $R = {\mathbb Z}[v, v^{-1}, x, z]/((1-x)z + (v - v^{-1}))$.
Let $r \geq 1$ be an integer and consider the general $3$-parameter $BMW$-algebra $BMW_r(R)$ over $R$ as in \cite[Definition 3.1]{Hu}. As an $R$-module $BMW_r(R)$ is free of rank $(2r-1)!!$ (with basis indexed by Brauer diagrams). As in the previous sections we denote by $k$ an arbitrary field containing a root of unity $q$ of order $\ell$. We make $k$ into an $R$-algebra by specializing $v$ to $-q^{2n+1}$, $z$ to $q-q^{-1}$, and $x$ to $1 - \sum_{i=-n}^n q^{2i}$. Then the $BMW$-algebra over $k$ that we shall work with is $$ BMW_r(-q^{2n+1}, q) = BMW_r(R) \otimes_R k.$$ For $q = 1$ it turns out that $BMW_r(-q^{2n+1}, q)$ may be identified with the Brauer algebra $\mathcal B_r(-2n)$, see the remarks after Definition 3.1 in \cite{Hu}. We treated the Brauer algebras in Section \ref{Brauer} so in this section we shall assume $q \neq 1$. Using the notation from Section \ref{q-sp} the quantum analogue \cite[Theorem 1.5]{Hu} of the Schur--Weyl duality for symplectic groups says: \begin{thm} \label{qsp-Schur-Weyl} The algebra $BMW_r(-q^{2n+1}, q)$ acts naturally on the tensor power $V_{q,n}^{\otimes r}$. This action commutes with the $U_{q,n}$-module structure on $V_{q,n}^{\otimes r}$ giving homomorphisms $BMW_r(-q^{2n+1},q) \rightarrow E_{q,n}^r$ which are surjective for all $n$. \end{thm} \begin{cor} \begin{enumerate} \item If $\ell$ is odd, then the $BMW$-algebra $BMW_r(-q^{2n+1},q)$ surjects onto the semisimple algebra $\overline E_{q,n}^r$ for $n= 1, 2, \cdots , (\ell - 1)/2$ and $r> 0$. \item If $\ell$ is even, then the $BMW$-algebra $BMW_r(-q^{2n+1},q)$ surjects onto the semisimple algebra $\overline E_{q,n}^r$ for $n= 1, 2, \cdots , (\ell - 4)/2$ and $r> 0$. \end{enumerate} \end{cor} \begin{proof} By Theorem \ref{qsp-Schur-Weyl} we have that $BMW_r(-q^{2n+1}, q)$ surjects onto $ E_{q,n}^r$ for all $n, r$ and hence also onto $\overline E_{q,n}^r$ by (\refeq{surj E to Ebar qsp}). In Section \ref{q-sp} we observed that these latter algebras are non-zero for the $n$'s listed in the corollary. \end{proof} Let $\lambda$ be a partition of $r-2i$ for some $i \leq r/2$. In analogy with the Brauer algebra case we denote the simple $BMW_r(-q^{2n+1}, q)$-module corresponding to $\lambda$ by $D_{BMW_r(-q^{2n+1}, q)}(\lambda)$. Recall the definition of $A_n(\ell)$ from Section \ref{q-sp} and set for any $r>0$ $$ \overline \Lambda^r(n, \ell) = (\Lambda^r \cup \Lambda^{r-2} \cup \cdots ) \cap A_{n}(\ell).$$ Then arguments similar to the ones used above give: \begin{thm} \label{main BMW} Let $r > 0$. \begin{enumerate} \item Suppose $\ell$ is odd. Let $n \in \{1, 2, \cdots , (\ell - 1)/2\}$ and $\lambda \in \overline \Lambda^r(n, \ell )$. Then the dimension of the simple $BMW_r(-q^{2n+1}, q)$-module $D_{BMW_r(-q^{2n+1}, q)}(\lambda)$ is recursively determined by $$ \dim D_{BMW_r(-q^{2n+1}, q)}(\lambda) = \sum_{i: \lambda \pm \epsilon_i \in \overline \Lambda^{(r-1)}(n, \ell)} \dim D_{BMW_{r-1}(-q^{2n+1}, q)}(\lambda \pm \epsilon_i).$$ \item Suppose $\ell$ is even. Let $n \in \{1, 2, \cdots , (\ell - 4)/2\}$ and $\lambda \in \overline \Lambda^r(n, \ell )$.
Then the dimension of the simple $BMW_r(-q^{2n+1}, q)$-module $D_{BMW_r(-q^{2n+1}, q)}(\lambda)$ is recursively determined by $$ \dim D_{BMW_r(-q^{2n+1}, q)}(\lambda) = \sum_{i: \lambda \pm \epsilon_i \in \overline \Lambda^{(r-1)}(n, \ell)} \dim D_{BMW_{r-1}(-q^{2n+1}, q)}(\lambda \pm \epsilon_i).$$ \end{enumerate} \end{thm} \begin{examplecounter} We shall illustrate Theorem \ref{main BMW} in the case $\ell$ is even (the odd case is equivalent to the Brauer case in Section \ref{Brauer}). So we take $\ell = 10$. Then the relevant values of $n$ are $1, 2$ and $3$. The weight set $\overline \Lambda^r(1,10)$ contains $2$ elements (except for $r =1$) $\lambda^1(r), \lambda^2(r)$, namely $ (2), (0)$, when $ r$ is even, and $ (3), (1)$, when $r$ is odd. Likewise, $\overline \Lambda^r(2,10)$ contains $2$ elements when $r$ is odd (except for $r=1$) and $4$ elements, when $r$ is even (except for $r = 2$). We denote these weights $\mu^1(r), \mu^2(r), \mu^3(r), \mu^4(r)$. They are $(2,2), (2,0), (1,1), (0,0)$, when $ r$ is even, and $ (2,1), (1,0)$, when $r$ is odd. Finally, $\overline \Lambda^r(3,10)$ consists of $2$ elements $\nu^1(r), \nu^2(r)$, namely $(1,1,0), (0,0,0)$, when $r$ is even, and $(1,1,1), (1,0,0)$, when $r$ is odd (except $r=1$). In Table 6 we have in row $r$ listed (in the order given above) the dimensions of the simple modules for $BMW_r(-q^{2n+1}, q)$ for $r \leq 10$. These numbers are computed recursively using Theorem \ref{main BMW}. \end{examplecounter} { \it Table 6. Dimensions of simple modules for $BMW_r(-q^{2n+1}, q)$ when $\ell = 10$ and $n = 1, 2, 3$.} \vskip .5cm \centerline { \begin{tabular}{ r| c c |c c c c |c c| c } &$n=1$&&&$n=2$&&&$n=3$& \\ \hline r & $\lambda^1(r)$ & $\lambda^2(r)$ & $\mu^1(r)$ & $\mu^2(r)$ & $\mu^3(r)$& $\mu^4(r)$ & $\nu^1(r)$ & $ \nu^2(r)$ \\ \hline 1 & & 1& & 1 & &&&1 & \\ 2 & 1 & 1 & 1& 1 & 1 & 1 & 1 & 1 \\ 3 & 1&2& 2& 3& & & 1 & 2 \\ 4 & 3 &2 & 2& 5 & 5 &3& 3& 2 \\ 5 &3 & 5 & 12 & 13 & & &3 & 5\\ 6 & 8 & 5& 12& 25&25 &13&8 & 5 \\ 7 &8 &13 &62 &63&& & 8 & 13\\ 8 & 21& 13&62 &125 & 125 &63& 21 & 13\\ 9 & 21 &44&312&313 && & 21 & 34\\ 10 & 65 &44&312&625 & 625 & 313&55 & 34\\ \end{tabular}} \vskip 1 cm \end{document}
\begin{document} \title{Necessary Conditions for Extended Noncontextuality in General Sets of Random Variables} \author{Barbara Amaral} \affiliation{Departamento de F\'isica e Matem\'atica, CAP - Universidade Federal de S\~ao Jo\~ao del-Rei, 36.420-000, Ouro Branco, MG, Brazil} \affiliation{International Institute of Physics, Federal University of Rio Grande do Norte, 59078-970, P. O. Box 1613, Natal, Brazil} \author{Cristhiano Duarte} \affiliation{International Institute of Physics, Federal University of Rio Grande do Norte, 59078-970, P. O. Box 1613, Natal, Brazil} \affiliation{Departamento de Matem\'{a}tica, Instituto de Ci\^{e}ncias Exatas, Universidade Federal de Minas Gerais, CP 702, CEP 30123-970, Belo Horizonte, Minas Gerais, Brazil.} \author{Roberto I. Oliveira} \affiliation{Instituto de Matem\'atica Pura e Aplicada (IMPA), Rio de Janeiro, RJ, Brazil.} \begin{abstract} We explore the graph approach to contextuality to restate the extended definition of noncontextuality as given by J. Kujala et al. in Ref.~\cite{KDL15} using graph-theoretical terms. This extended definition avoids the assumption of the \emph{pre-sheaf} or \emph{non-disturbance} condition, which states that if two contexts overlap, then the marginal distributions obtained for the intersection must be the same, a restriction that will never be perfectly satisfied in real experiments. With this we are able to derive necessary conditions for extended noncontextuality for any set of random variables based on the geometrical aspects of the graph approach, which can be tested directly with experimental data in any contextuality experiment and which reduce to traditional necessary conditions for noncontextuality if the non-disturbance condition is satisfied. \end{abstract} \maketitle \section{Introduction} Quantum theory assigns probabilities to subsets of possible measurements of a physical system. The phenomenon of {\em contextuality} states that there may be no global probability distribution that is consistent with these subsets, which are also called \emph{contexts} \cite{Specker60,Bell66,KS67,Fine82,AB11}. A key consequence of contextuality is that the statistical predictions of quantum theory cannot be obtained from models where the measurement outcomes reveal pre-existent properties that are independent of which, or whether, other compatible measurements are jointly performed. This fundamental limitation follows from the existence of incompatible measurements in quantum systems. It thus represents an exotic, intrinsically non-classical phenomenon that leads to a more fundamental understanding of many aspects of quantum theory \cite{NBAASBC13,Cabello13,Cabello13c,CSW14,ATC14,Amaral14}. In addition, contextuality has been recognized as a potential \emph{resource} for quantum computing \cite{Raussendorf13, HWVE14,DGBR14}, random number certification \cite{UZZWYDDK13}, and several other information processing tasks in the specific case of space-like separated systems \cite{BCPSW13}. As a consequence, experimental verifications of contextuality have received much atten\-tion \cite{HLBBR06, KZGKGCBR09, ARBC09, LLSLRWZ11, BCAFACTP13}. It is thus of utmost importance to develop a robust theoretical framework for contextuality that can be efficiently applied to real experiments. In particular, it is important to include the treatment of sets of random variables that do not satisfy the assumption of the so called \emph{pre-sheaf}~\cite{AB11,Amaral14} or \emph{non-disturbance}~\cite{NBAASBC13} condition.
This assumption states that if the intersection of two contexts is non-empty, then the marginal probability distributions at the intersection must be the same, a restriction that will never be perfectly satisfied in real experiments. This problem was considered in Refs. \cite{Larsson02, Winter14}, but the methods proposed there to take into account the context-dependent change in a random variable involve quantities that cannot be directly measured. In Ref.~\cite{KDL15}, the authors propose an alternative definition of noncontextuality that can be applied to any set of random variables. Such a treatment reduces to the traditional definition of noncontextuality if the non-disturbance property is satisfied and, in addition, it can be verified directly from experimental data. In this alternative definition, a set of random variables is said to be noncontextual (in the extended sense) if there is a joint probability distribution which is consistent with the joint distribution for each context and maximizes the probability of two realizations of the same set of random variables present in different contexts being equal. Then the authors provide necessary and sufficient conditions for contextuality in a broad class of scenarios, namely the so called $n$-cycle scenario. In this contribution, we explore the graph approach to contextuality, developed in Refs. \cite{CSW10,CSW14, AII06} and further explored in Refs. \cite{RDLTC14, AFLS15, AT17}, to rewrite the definition of extended noncontextuality in graph-theoretical terms. To this end, from the compatibility graph $\mathrm{G}$ of a scenario $\Gamma$, we define another graph $\mathscr{G}$, which we call the \emph{extended compatibility graph} of the scenario, and show that noncontextuality in the extended sense is equivalent to noncontextuality in the traditional sense with respect to the extended graph $\mathscr{G}$. With this graph-theoretical perspective, the problem of characterizing extended noncontextuality reduces to characterizing traditional noncontextuality for the scenario defined by $\mathscr{G}$, a difficult problem for general graphs \cite{Pitowsky91, DL97, AII06, AII08}. Nevertheless, we can explore the connection between the noncontextual set and the \emph{cut polytope} $\mathrm{CUT}\left(\mathrm{G}\right)$ \cite{AII06, AT17} of the corresponding compatibility graph $\mathrm{G}$ to derive necessary conditions for extended noncontextuality in any scenario, which can be tested directly with experimental data in any contextuality experiment and reduce to traditional necessary conditions for noncontextuality if the non-disturbance condition is satisfied. To derive these conditions, we first prove that $\mathscr{G}$ can be obtained from $\mathrm{G}$ by combining the graph operations known as \emph{triangular elimination}, \emph{vertex splitting} and \emph{edge contraction} \cite{BM86,AII08,Bonato14}. From valid inequalities for $\mathrm{CUT}\left(\mathrm{G}\right)$ it is possible to derive valid inequalities for any graph obtained from $\mathrm{G}$ using a sequence of such operations. In particular, for any valid inequality for $\mathrm{CUT}\left(\mathrm{G}\right)$ we can derive valid inequalities for $\mathrm{CUT}\left(\mathscr{G}\right)$, among which there is one that reduces to the original inequality if the non-disturbance condition is satisfied. As applications of our framework, we recover the characterization of extended noncontextuality for the $n$-cycle scenarios of Ref.
\cite{KDL15} and provide necessary conditions for noncontextuality exploring the $I_{3322}$ \cite{Foissart81,CG04} and Chained inequalities \cite{BC90}. Finally, we use the Peres-Mermin square \cite{Peres90,Mermin90} to illustrate that similar ideas can be used even in scenarios where the cut polytope does not provide a complete characterization of the noncontextual set. The paper is organized as follows: in Sec. \ref{sec:comp} we review the definition of a compatibility scenario and of noncontextuality in the traditional sense; In Sec. \ref{sec:ext}, we review the definition of extended noncontextuality of Ref. \cite{KDL15}, stating it in graph-theoretical terms; In Sec. \ref{sec:coup} we maximize the probability of two realizations of the same random variables in different contexts being equal; In Sec.~\ref{sec: twoout}, focusing on scenarios with two outcomes per measurement, we introduce the cut polytope and the extended compatibility hypergraph for a scenario and show a complete characterization of the extended contextuality for the $n$-cycle scenario; In Sec.~\ref{sec:valid_ineq} using the introduced cut polytope we provide necessary conditions for the existence of noncontextual behaviours in any given scenario, although the complete characterization is an extremely difficult problem; In Sec.~\ref{sec:I3322} and Sec.~\ref{sec:chain_ineq} we apply our methods to important families of contextuality inequalities; We discuss scenarios with more than three measurements in Sec.~\ref{sec:more_than_3} and close this work with a discussion in Sec.~\ref{sec:discussion}. \section{Compatibility scenarios} \label{sec:comp} \begin{dfn} A \emph{compatibility scenario} is defined by a triple $\Gamma :=\left(X, \mathcal{C}, O \right)$, where $O$ is a finite set, $X$ is a finite set of random variables taking values in $O$, and $\mathcal{C}$ is a family of subsets of $X$ such that \begin{enumerate} \item $\displaystyle \cup_{C \in \mathcal{C}} C =X$; \item $C,C' \in \mathcal{C}$ and $C \subseteq C'$ implies $C = C'$. \label {antichain} \end{enumerate} The elements $C \in \mathcal{C}$ are called \emph{contexts} and the set $\mathcal{C}$ is called the \emph{compatibility cover} of the scenario. \end{dfn} One may think of the random variables in $X$ as representing measurements in a physical system, with possible outcomes labeled by the elements in $O$, while the sets in $\mathcal{C}$ may be thought of as encoding the compatibility relations among the measurements in $X$, that is, each set $C \in \mathcal{C}$ consists of a maximal set of compatible, jointly measurable random variables \cite{AB11, AT17book}. Equivalently, the compatibility relations among the elements of $X$ can be represented by a hypergraph. \begin{dfn} The \emph{compatibility hypergraph} of a scenario $\left(X, \mathcal{C}, O \right)$ is a hypergraph $\mathrm{H} = \left(X, \mathcal{C}\right)$ whose vertices are the random variables in $X$ and hyperedges are the contexts $C \in \mathcal{C}$. The \emph{compatibility graph} of the scenario is the 2-section of $\mathrm{H}$, that is, the graph $\mathrm{G}$ has the same vertices as the hypergraph $\mathrm{H}$ and edges between all pairs of vertices contained in some hyperedge of $\mathrm{H}$.
\end{dfn} In an experiment characterized by a compatibility scenario $\Gamma = (X,\mc{C},O)$, when compatible measurements, represented by the random variables belonging to a context $C=\{x_1,x_2,...,x_{\vert C \vert } \} \in \mc{C}$, are performed jointly, a list $s=(a_1,a_2,...,a_{\vert C \vert })$ of outcomes in the Cartesian product \begin{equation} O^{C}:=\underbrace{O \times O \times ... \times O}_{\vert C \vert -\mbox{times}} \label{eq.cartesianproduct} \end{equation} is observed. Moreover, the collection of well-defined joint probability distributions for the random variables associated with $C \in \mc{C}$ receives special attention: \begin{dfn} A \emph{behavior} $\mathrm{B}$ for the scenario $\left(X, \mathcal{C}, O \right)$ is a family of probability distributions over $O^C$, one for each context $C \in \mathcal{C}$, that is, \begin{equation} \mathrm{B} = \left\{p_C: O^C\rightarrow [0,1] \left|\sum_{s\in O^C} p_C(s)=1, C \in \mathcal{C}\right.\right\}.\end{equation} \end{dfn} This means that for each context $C$, $p_C(s)$ gives the probability of obtaining outcomes $s$ in a joint measurement of the elements of $C$. Following standard notation in the community, given a context $C=\left\{x_1, \ldots , x_{|C|}\right\}$ and $s=\left(a_1, \ldots , a_{|C|}\right)$ a particular list of outcomes for those measurements in $C$, we will from now on represent $p_C(s)$ as \begin{equation} p\left(a_1, \ldots , a_{|C|}\left|x_1, \ldots , x_{|C|}\right.\right).\end{equation} \textbf{Remark:} Although the notation above is absolutely standard for representing an element $p_C$ of a behaviour $B$, to avoid misunderstanding within the mathematical community, and to make our work more readable for those from other communities who might become interested in this topic, we note that the mathematical object we are using here is the joint probability $\mathds{P}(x_1=a_1,x_2=a_2,...,x_{\vert C \vert}=a_{\vert C \vert})$, defined on the finite set $O^{C}$. In an ideal situation, one generally assumes that behaviors are non-disturbing. \begin{dfn} \label{definondisturbance} The \emph{non-disturbance} set $\mathcal{X}\left(\Gamma\right)$ of a compatibility scenario $\Gamma$ is the set of behaviors that satisfy the consistency relation \begin{equation}\label{eqnondisturbing} \sum_{a^i_k| x^i_{k} \notin C_i \cap C_j} p\left(a^i_{1}a^i_{2} \ldots a^i_{\left|C_i\right|}\left| x^i_{1} x^i_{2} \ldots x^i_{\left|C_i\right|} \right.\right) = \sum_{a^j_{l}|x^j_{l} \notin C_i \cap C_j} p\left(a^j_{1}a^j_{2} \ldots a^j_{\left|C_j\right|}\left| x^j_{1} x^j_{2} \ldots x^j_{\left|C_j\right|} \right.\right) \end{equation} for any two intersecting contexts $C_i$ and $C_j$ in $\mathcal{C}$, when considering on both sides the same sets of outcomes for those measurements in $C_i \cap C_j$. \end{dfn} \textbf{Remark:} Eq.~\eqref{eqnondisturbing} above says that when the non-disturbance relation is satisfied, for contexts which share some common random variables it does not matter from which context one marginalizes to these variables: both marginalizations, starting either from $C_i$ or from $C_j$, must coincide. In a hypothetical situation where all measurements in $\mathrm{X}$ are compatible, it would be possible to define a \emph{global probability distribution} $p(a_1 a_2 ... a_{\vert X \vert} \vert x_1 x_2 ...
x_{\vert X \vert})$, or \begin{equation} \label{eqglobal} p\left(a_1a_2 \ldots a_{|X|}\right) \end{equation} for short, that would give the probability of obtaining outcomes $a_1a_2 \ldots a_{|X|}$ as though all measurements in $X$ were jointly performed. \begin{dfn} A behavior $\mathrm{B} \in \mathcal{X}\left(\Gamma\right)$ is \emph{noncontextual} if there is a global probability distribution \eqref{eqglobal} such that for each $C \in \mathcal{C}$ \begin{equation}\label{equsualnoncontex} p\left(a_{1}a_{2} \ldots a_{\left|C\right|}\left| x_{1} x_{2} \ldots x_{\left|C\right|} \right.\right) = \sum_{a_l| l \notin C} p\left(a_1a_2 \ldots a_{\left|X\right|}\right), \end{equation} where the sum is taken over the outcomes $a_l$ of the measurements $l \notin C$ and $a_l = a_{k}$ for each $l=x_{k} \in C$. \end{dfn} In other words, $\mathrm{B}$ is noncontextual if the probability distribution assigned by $\mathrm{B}$ to each context can be recovered as a marginal of the global probability distribution $p\left(a_1a_2 \ldots a_{\left|X\right|}\right)$ \cite{Fine82, AB11}. \section{Extended Contextuality} \label{sec:ext} To define noncontextuality in a scenario where the non-disturbance property \eqref{eqnondisturbing} is not valid, we must first change the definition of noncontextual behaviors given by Eq. \eqref{equsualnoncontex}. We will consider \emph{extended global probability distributions} of the form \begin{equation} \label{eqextendedglobal} p\left(\underbrace{a^1_{1} \ldots a^1_{\left|C_1\right|}}_{C_1} \underbrace{a^2_{1} \ldots a^2_{\left|C_2\right|}}_{C_2} \ldots \underbrace{a^m_{1} \ldots a^m_{\left|C_m\right|}}_{C_m}\left| \underbrace{x^1_{1} \ldots x^1_{\left|C_1\right|}}_{C_1} \underbrace{x^2_{1} \ldots x^2_{\left|C_2\right|}}_{C_2} \ldots \underbrace{x^m_{1} \ldots x^m_{\left|C_m\right|}}_{C_m}\right.\right),\end{equation} where $m = \left|\mathcal{C}\right|$, that give the joint probability of obtaining outcomes $a^i_{1}, \ldots ,a^i_{\left|C_i\right|}$ for each context $ C_i=\left\{x^i_{1}, \ldots , x^i_{\left|C_i\right|}\right\}.$ Notice that this extended global probability distribution is, in general, not equal to the probability distribution defined in Eq. \eqref{eqglobal}, since the same random variable could appear in more than one context, and hence, in the list \begin{equation} \underbrace{x^1_{1} \ldots x^1_{\left|C_1\right|}}_{C_1} \underbrace{x^2_{1} \ldots x^2_{\left|C_2\right|}}_{C_2} \ldots \underbrace{x^m_{1} \ldots x^m_{\left|C_m\right|}}_{C_m} \end{equation} the same random variable would be repeated several times. To make the definitions in Eqs.~\eqref{eqglobal} and \eqref{eqextendedglobal} equivalent in the case of non-disturbing behaviors, we demand that, if in different contexts $C_{i_1}, C_{i_2}, \ldots , C_{i_l}$ there exist coincident random variables $x^{i_1}_{k_1}, x^{i_2}_{k_2}, \ldots , x^{i_l}_{k_l}$, then \begin{align} p\left(a^{i_1}_{k_1}\ldots a^{i_l}_{k_l}\left|x^{i_1}_{k_1}\ldots x^{i_l}_{k_l}\right.\right) & = \nonumber \\ & \sum_{ a^r_{s} \left| (r,s) \neq \left(i_j, k_j\right) \right.} p\left(a^1_{1} \ldots a^1_{\left|C_1\right|} \ldots a^m_{1} \ldots a^m_{\left|C_m\right|}\left| x^1_{1} \ldots x^1_{\left|C_1\right|} \ldots x^m_{1} \ldots x^m_{\left|C_m\right|}\right.\right) \label{eqcorrelated} \\ &= \left\{\begin{array}{cc} 1& \mbox{if } \ a^{i_1}_{k_1}=a^{i_2}_{k_2}=\ldots = a^{i_l}_{k_l}\label{eqcorrelated2} \\ 0& \mbox{ otherwise} , \end{array}\right.
\end{align} that is, marginal probability distributions for $x^{i_1}_{k_1},x^{i_2}_{k_2}, \ldots , x^{i_l}_{k_l}$, representing the same random variable in different contexts, are perfectly correlated. Hence, it is equivalent to say that $\mathrm{B}$ is a \textit{noncontextual behavior} if there is an extended global probability distribution satisfying condition \eqref{eqcorrelated2} such that \begin{equation} p\left(a^i_{1}a^i_{2} \ldots a^i_{\left|C_i\right|}\left| x^i_{1} x^i_{2} \ldots x^i_{\left|C_i\right|} \right.\right) = \sum_{a^j_{k} | j \neq i}p\left(a^1_{1} \ldots a^1_{\left|C_1\right|} \ldots a^m_{1} \ldots a^m_{\left|C_m\right|}\left| x^1_{1} \ldots x^1_{\left|C_1\right|} \ldots x^m_{1} \ldots x^m_{\left|C_m\right|}\right.\right). \label{eqmarginalext}\end{equation} A simple example of this situation is shown in Fig. \ref{figc3}. There, a simple compatibility scenario with three measurements $0,1,2$ and two contexts, $\left\{0,1\right\}$ and $\left\{1,2\right\}$, is shown. A behaviour for such a scenario consists of two probability distributions $p(ab|01)$ and $p(bc|12)$. Traditionally, one says that a non-disturbing behavior for this scenario is noncontextual if there is a global probability distribution $p(abc)$ such that $p(ab|01)= \sum_c p(abc)$ and $p(bc|12)=\sum_a p(abc)$. For our purposes it will be convenient to consider an extended global probability distribution $p(abb'c|011^{\prime}2)$ such that $p(bb'|11^{\prime}) = \sum_{a,c} p(abb'c|011^{\prime}2)=1$ iff $b=b'$, and zero otherwise. Then, in this situation, we say that a behavior is noncontextual if there is an extended global probability distribution satisfying this condition such that $p(ab|01)= \sum_{b',c} p(abb'c|011^{\prime}2)$ and $p(b'c|12)=\sum_{a,b} p(abb'c|011^{\prime}2)$. For non-disturbing behaviors, these two notions of noncontextuality are equivalent. \begin{figure}\label{figc3} \end{figure} To define noncontextuality in a scenario where the non-disturbance property does not hold, we adopt the strategy of Ref. \cite{KDL15}. We relax the requirement that marginals for $x^{i_1}_{k_1},x^{i_2}_{k_2},\ldots ,x^{i_l}_{k_l}$ be perfectly correlated when they represent the same random variable. Instead of Eq. \eqref{eqcorrelated2}, we require that the probability of $x^{i_1}_{k_1},x^{i_2}_{k_2},\ldots , x^{i_l}_{k_l}$ being equal is the maximum allowed by the individual probability distributions of each $x^{i_j}_{k_j}$. \begin{dfn} We say that a behavior has a \emph{maximally noncontextual description} if there is an extended global distribution \eqref{eqextendedglobal} such that the distribution of each context is obtained as a marginal, according to Eq. \eqref{eqmarginalext}, and such that if $x^{i_1}_{k_1},x^{i_2}_{k_2},\ldots , x^{i_l}_{k_l}$ represent the same random variable, the marginals for $x^{i_1}_{k_1},x^{i_2}_{k_2}, \ldots , x^{i_l}_{k_l}$ defined by Eq. \eqref{eqcorrelated} are such that \begin{equation} p\left(x^{i_1}_{k_1}=\ldots = x^{i_l}_{k_l}\right)=\sum_{a} p\left(a\ldots a\left|x^{i_1}_{k_1}\ldots x^{i_l}_{k_l}\right.\right) \end{equation} is the maximum consistent with the marginal distributions $p\left(a^{i_j}_{k_j}\left|x^{i_j}_{k_j}\right.\right)$. That is, a behavior is noncontextual in the extended sense if there is an extended global distribution that gives the correct marginal in each context and that maximizes the probability of $x^{i_1}_{k_1},x^{i_2}_{k_2},\ldots , x^{i_l}_{k_l}$ being equal if they represent the same random variable in different contexts.
\label{def:ext_context} \end{dfn} Following Ref.~\cite{KDL15}, we define a \emph{maximal coupling} as follows: \begin{dfn} Given $\{ x_{i_1}^{k_1},x_{i_2}^{k_2},\ldots, x_{i_l}^{k_l}\}$ a set of random variables representing the same measurement, we call a distribution $ p\left(a_{i_1}^{k_1}a_{i_2}^{k_2}\ldots a_{i_l}^{k_l}\left|x_{i_1}^{k_1}x_{i_2}^{k_2}\ldots x_{i_l}^{k_l}\right.\right) $ that gives the correct marginals $p\left(a_{i_j}^{k_j}\left|x_{i_j}^{k_j}\right.\right)$ a \emph{coupling} for $x_{i_1}^{k_1},x_{i_2}^{k_2},\ldots , x_{i_l}^{k_l}$. We say that such a coupling is \emph{maximal} if $p\left(x_{i_1}^{k_1}=x_{i_2}^{k_2}=\ldots = x_{i_l}^{k_l}\right)$ achieves the maximum value consistent with the marginals $p\left(a_{i_j}^{k_j}\left|x_{i_j}^{k_j}\right.\right)$. \label{def:max_coupling} \end{dfn} \section{Existence of Maximal Couplings} \label{sec:coup} One might worry that a maximal coupling, as in Def.~\ref{def:max_coupling}, does not exist for a given set of random variables representing the same measurement. It turns out that this never happens. Here we constructively show that a maximal coupling is a well-defined notion, \emph{i.e.}, that for random variables with finitely many outcomes there always exists at least one maximal coupling. \begin{thm} \label{teomaxcoupling} Given a set of random variables $x_{i_1}^{k_1},x_{i_2}^{k_2},\ldots , x_{i_l}^{k_l}$ with distributions $p\left(a_{i_j}^{k_j}\left|x_{i_j}^{k_j}\right.\right)$ it is always possible to construct a maximal coupling for this set with \begin{equation} p\left(x_{i_1}^{k_1}=x_{i_2}^{k_2}=\ldots = x_{i_l}^{k_l}\right)= \sum_{a} \min_{j}\left\{ p\left(a\left|x_{i_j}^{k_j}\right.\right)\right\}. \end{equation} \end{thm} \begin{proof} Let \begin{equation} p_-(a)= \min_{j} \left\{ p\left(a\left|x_{i_j}^{k_j}\right.\right)\right\}.\end{equation} Then, for any coupling, \begin{equation} p\left(x_{i_1}^{k_1}=x_{i_2}^{k_2}=\ldots = x_{i_l}^{k_l}=a\right) \leq p_-(a) \label{eqp=max}\end{equation} and hence, \begin{equation} p\left(x_{i_1}^{k_1}=\ldots = x_{i_l}^{k_l}\right) = \sum_a p\left(x_{i_1}^{k_1}=\ldots = x_{i_l}^{k_l}=a\right) \leq \sum_a p_-(a). \end{equation} Construct the coupling as follows: if $a_{i_1}^{k_1} = a_{i_2}^{k_2} = \ldots = a_{i_l}^{k_l}=a$, we define \begin{equation} p\left(a\ldots a \left| x_{i_1}^{k_1}x_{i_2}^{k_2}\ldots x_{i_l}^{k_l}\right.\right)=p_-(a);\end{equation} if not, assuming $\sum_a p_-(a)<1$ (otherwise all marginals coincide and the entries above already define the coupling), we define $$p\left(a_{i_1}^{k_1}a_{i_2}^{k_2}\ldots a_{i_l}^{k_l}\left|x_{i_1}^{k_1}x_{i_2}^{k_2}\ldots x_{i_l}^{k_l}\right.\right)= \frac{\prod_j p'\left(a_{i_j}^{k_j}\left| x_{i_j}^{k_j}\right.\right)}{\left(1-\sum_a p_-(a)\right)^{l-1}},$$ where \begin{equation} p'\left(a_{i_j}^{k_j}\left|x_{i_j}^{k_j}\right.\right)= p\left(a_{i_j}^{k_j}\left|x_{i_j}^{k_j}\right.\right)-p_-\left(a_{i_j}^{k_j}\right).\end{equation} Since for each outcome $a$ at least one of the residuals $p'\left(a\left|x_{i_j}^{k_j}\right.\right)$ vanishes, a direct computation shows that the resulting distribution is normalized and reproduces the correct marginals, so this defines a maximal coupling for $x_{i_1}^{k_1},x_{i_2}^{k_2},\ldots , x_{i_l}^{k_l}$. \end{proof} One should notice that although the method applied in the proof above provides a maximal coupling for the considered set of random variables, there is no guarantee that it is the unique coupling consistent with Def.~\ref{def:max_coupling} in the general case. Actually, it turns out that in some specific situations the coupling constructed above is indeed unique. This is always the case, for example, for two variables with any number of outcomes and for three variables, each with two outcomes.
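For readers who prefer a computational description, the construction used in the proof of Thm. \ref{teomaxcoupling} can be sketched in a few lines of Python. The listing below is only an illustration under the assumption of finitely many outcomes; the function and variable names are ours and are not part of the formal development.
\begin{verbatim}
# Illustrative sketch: the maximal-coupling construction from the proof above.
# 'marginals' is a list of dicts {outcome: probability}, one per copy of the
# same measurement; the returned dict maps outcome tuples to probabilities.
from itertools import product

def build_maximal_coupling(marginals):
    outcomes = sorted(marginals[0])
    l = len(marginals)
    p_minus = {a: min(p[a] for p in marginals) for a in outcomes}
    mass_equal = sum(p_minus.values())   # probability that all copies agree
    coupling = {}
    for s in product(outcomes, repeat=l):
        if len(set(s)) == 1:             # all outcomes equal: weight p_-(a)
            coupling[s] = p_minus[s[0]]
        elif mass_equal < 1.0:           # residuals, normalised as in the proof
            w = 1.0
            for p, a in zip(marginals, s):
                w *= p[a] - p_minus[a]
            coupling[s] = w / (1.0 - mass_equal) ** (l - 1)
        else:                            # identical marginals: nothing left over
            coupling[s] = 0.0
    return coupling, mass_equal

# Example: three {-1,+1}-valued copies of the same measurement.
ms = [{-1: 0.4, 1: 0.6}, {-1: 0.5, 1: 0.5}, {-1: 0.45, 1: 0.55}]
c, m = build_maximal_coupling(ms)
print(round(m, 3), round(sum(c.values()), 3))   # 0.9 1.0
\end{verbatim}
One can check numerically that the marginals of the returned distribution coincide with the input distributions and that the agreement probability equals $\sum_a \min_j p\left(a\left|x_{i_j}^{k_j}\right.\right)$, as stated in the theorem.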
\begin{thm}[Sufficient condition for extended contextuality] If there is an extended global distribution, as in Eq. \eqref{eqextendedglobal}, such that the marginals in each context are equal to the distributions of the behavior $\mathrm{B}$, according to Eq. \eqref{eqmarginalext}, and such that the corresponding couplings for each set $x_{i_1}^{k_1},x_{i_2}^{k_2},\ldots , x_{i_l}^{k_l}$ representing the same random variable, defined by Eq. \eqref{eqcorrelated}, are equal to the ones given in Thm. \ref{teomaxcoupling}, then the behavior $\mathrm{B}$ is noncontextual in the extended sense. \label{teomaxcouplinge} \end{thm} The condition stated in Thm. \ref{teomaxcouplinge} is also necessary when the coupling constructed in the proof of Thm. \ref{teomaxcoupling} is unique. When the coupling given in Thm. \ref{teomaxcoupling} is not unique, the difference between any other coupling and the one constructed in the proof of this theorem can only appear in the terms \begin{equation} p\left(a_{i_1}^{k_1}a_{i_2}^{k_2}\ldots a_{i_l}^{k_l}\left|x_{i_1}^{k_1}x_{i_2}^{k_2}\ldots x_{i_l}^{k_l}\right.\right)\end{equation} for which the outcomes $a_{i_1}^{k_1}, a_{i_2}^{k_2}, \ldots , a_{i_l}^{k_l}$ are not all equal. Otherwise this would contradict the hypothesis that the coupling is maximal. Then for any maximal coupling we can at least say that for each pair $x_{i_m}^{k_m}$ and $x_{i_n}^{k_n}$ we have \begin{equation} p_-(a)= p\left(x_{i_1}^{k_1}= \ldots = x_{i_l}^{k_l}=a\right) \leq p\left(x_{i_m}^{k_m}=x_{i_n}^{k_n}=a\right) \leq \min \left\{ p\left(a \left| x_{i_m}^{k_m}\right.\right), p\left(a\left| x_{i_n}^{k_n}\right.\right)\right\} . \end{equation} This relation will be used to construct necessary conditions for extended noncontextuality in Sec. \ref{sec:valid_ineq}. \begin{thm}[Necessary condition for maximal coupling] \label{thm:necessary_max_coup} If $p\left(a_{i_1}^{k_1}a_{i_2}^{k_2}\ldots a_{i_l}^{k_l}\left|x_{i_1}^{k_1}x_{i_2}^{k_2}\ldots x_{i_l}^{k_l}\right.\right)$ is a maximal coupling for the random variables $x_{i_1}^{k_1},x_{i_2}^{k_2},\ldots , x_{i_l}^{k_l}$, then \begin{equation} p_{-}(a)=p\left(x_{i_1}^{k_1}=\ldots = x_{i_l}^{k_l}=a\right) \leq p\left( \left\lbrace x_{i_j}^{k_j}=a \right\rbrace_{j \in \mc{S}} \right)\leq \min_{j \in \mc{S}}\left\lbrace p(a \vert x_{i_j}^{k_j}) \right\rbrace \end{equation} for any subset $\mc{S} \subset [l]$. \end{thm} \section{Two outcomes} \label{sec: twoout} When $O=\left\{ -1,1\right\} $ we can use a powerful tool from graph theory to find necessary conditions for noncontextuality: the \emph{cut polytope}~\cite{AIT06,DL97,Amaral14}. \begin{dfn}[Cut Polytope] The \emph{cut polytope} of a graph $G=(V,E)$, denoted by $CUT(G)$, is the convex hull of the set $\mc{V}=\left\lbrace \delta_{G}(S) \in \mathds{R}^{E}; S \subset V \right\rbrace$ which contains all cut vectors of $G$. Given $S \subset V$, the cut vector $\delta_{G}(S) \in \mathds{R}^{E}$ associated with $S$ is defined as: \begin{equation} \delta_{u,v}:= \begin{cases} 1, \,\, \mbox{if} \,\, \vert S \cap \{ u,v \} \vert =1 \\ 0, \,\, \mbox{otherwise}. \end{cases} \end{equation} \end{dfn} Let $\mathrm{G}=\left(\mathrm{X}, \mathrm{E}\right)$ be the compatibility graph of a scenario $\Gamma=(X,\mc{C},\{-1,1\})$.
Given a behaviour $\mathrm{B}$, let $\mathrm{P}_{\mathrm{B}} \in \mathds{R}^{X} \times \mathds{R}^{E}$ be the vector whose first $\left|\mathrm{X}\right|$ entries are the expectation values of the random variables in $\mathrm{X}$ \begin{equation} P_{x}:= \left\langle x\right\rangle = p\left(1|x\right)-p\left(-1|x\right), x \in \mathrm{X} \label{eq:p1} \end{equation} and whose $\left|\mathrm{E}\right|$ subsequent entries are the expectation values of products of pairs of compatible random variables in $\mathrm{X}$ \begin{equation} P_{xy}:=\left\langle xy\right\rangle = p\left(x=y\right)-p\left(x\neq y\right), \left(x,y\right) \in \mathrm{E}.\label{eq:p2} \end{equation} Let $\nabla \mathrm{G}$ be the \emph{suspension graph} of $\mathrm{G}$, obtained from $\mathrm{G}$ by adding one new vertex $u$ to $\mathrm{X}$ which is adjacent to all the other vertices (see Fig.~\ref{fig:suspension_graph}). \begin{figure} \caption{\footnotesize{An example of a suspension graph. On the right-hand side a 7-cycle is depicted, whereas on the left its suspension graph is depicted, with a new vertex $u$ added to the vertex set and connected to every vertex of the 7-cycle.}} \label{fig:suspension_graph} \end{figure} \begin{prop} Let $\Gamma=(X,\mc{C},\{-1,1\})$ be a compatibility scenario, and let $G$ be the compatibility graph associated with $\Gamma$. If a behavior $\mathrm{B}$ is noncontextual, the vector $\mathrm{P}_{\mathrm{B}}$ belongs to the cut polytope of $\nabla \mathrm{G}$. \label{thm:nec_cut} \end{prop} For a proof of this result, see Refs. \cite{AII06,AT17,AT17book}. It implies that characterizing completely the cut polytope $\mathrm{CUT}\left(\nabla \mathrm{G}\right)$ gives a strong necessary condition for noncontextuality. However, as shown in references \cite{Pitowsky91, DL97, AII06, AII08}, such a characterization is unlikely to be tractable, since membership testing in this polytope is an NP-complete problem~\cite{LexSch03} for general graphs. A complete characterization requires finding all linear inequalities that define the facets of $\mathrm{CUT}\left(\nabla \mathrm{G}\right)$, which is only feasible for limited, although important, scenarios. Nevertheless, one can generally find necessary conditions for membership in $\mathrm{CUT}\left(\nabla \mathrm{G}\right)$, which can be used to witness contextuality in scenarios where a complete characterization of $\mathrm{CUT}\left(\nabla \mathrm{G}\right)$ is still missing. \begin{dfn} Given $\mathrm{A} \in \mathbb{R}^{\left|\mathrm{X}\right|+\left|\mathrm{E}\right|}$ and $ \mathrm{b} \in \mathbb{R}$ we say that the linear inequality \begin{equation} \mathrm{A} \cdot \mathrm{P} \leq \mathrm{b} , \end{equation} on $\mathrm{P} \in \mathbb{R}^{\left|\mathrm{X}\right|+\left|\mathrm{E}\right|}$ is a \emph{noncontextuality inequality} if it is satisfied for all $\mathrm{P} \in \mathrm{CUT}\left(\nabla \mathrm{G}\right)$. We say that this inequality is \emph{tight} if $\mathrm{A} \cdot \mathrm{P} = \mathrm{b}$ for some $\mathrm{P} \in \mathrm{CUT}\left(\nabla \mathrm{G}\right)$ and we say that this inequality is \emph{facet-defining} if the set \begin{equation} \left\{ \mathrm{P} \in \mathrm{CUT}\left(\nabla \mathrm{G}\right) \left| \mathrm{A} \cdot \mathrm{P} = \mathrm{b} \right.\right\}\end{equation} is a facet of $\mathrm{CUT}\left(\nabla \mathrm{G}\right)$. \end{dfn} Every noncontextuality inequality gives a necessary condition for noncontextuality in the corresponding scenario.
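Since $\mathrm{CUT}\left(\nabla \mathrm{G}\right)$ is generated by finitely many vertices, the validity and tightness of a candidate noncontextuality inequality can, for small graphs, be checked by brute force. The short Python sketch below is purely illustrative and not part of our formal results: all names are ours, the CHSH-type example is chosen only for concreteness, and we use the fact that, in the $\pm 1$ convention adopted for the entries of $\mathrm{P}$, the $0/1$ cut vectors correspond to deterministic $\pm 1$ assignments of the vertices via $\delta \mapsto 1-2\delta$.
\begin{verbatim}
# Illustrative sketch: brute-force check of a candidate inequality A.P <= b
# over the deterministic +/-1 assignments generating the cut polytope.
from itertools import product

def max_over_deterministic(vertices, edges, weights_v, weights_e):
    """Maximum of sum_v w_v <v> + sum_e w_e <xy> over deterministic assignments."""
    best = float("-inf")
    for signs in product([-1, 1], repeat=len(vertices)):
        eps = dict(zip(vertices, signs))
        value = sum(weights_v.get(v, 0) * eps[v] for v in vertices)
        value += sum(weights_e.get((x, y), 0) * eps[x] * eps[y] for (x, y) in edges)
        best = max(best, value)
    return best

# CHSH-type example on the 4-cycle K_{2,2}: only edge terms, no <x> terms.
vertices = ["A1", "A2", "B1", "B2"]
edges = [("A1", "B1"), ("A1", "B2"), ("A2", "B1"), ("A2", "B2")]
w_e = {("A1", "B1"): 1, ("A1", "B2"): 1, ("A2", "B1"): 1, ("A2", "B2"): -1}
print(max_over_deterministic(vertices, edges, {}, w_e))   # 2: valid and tight
\end{verbatim}
A candidate inequality $\mathrm{A}\cdot\mathrm{P}\leq \mathrm{b}$ is a noncontextuality inequality precisely when the printed maximum does not exceed $\mathrm{b}$, and it is tight when equality is attained.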
What we do next is to use known inequalities valid for $\mathrm{CUT}\left(\nabla \mathrm{G}\right)$ to find necessary conditions for noncontextuality in the extended sense. \subsection{Extended compatibility hypergraph} \label{subsec:Extended_Comp_Hyper} \begin{dfn} Let $\mathrm{H}$ be the compatibility hypergraph for a compatibility scenario $\Gamma=(X,\mc{C},\{-1,1\})$. Construct the \emph{extended compatibility hypergraph} $\mathscr{H}$ of this scenario in the following way. Given a vertex $x \in X$, let $C_{i_1}, \ldots , C_{i_l}$ be all hyperedges containing it. We add to the vertex set of $\mathscr{H}$ the vertices $x^{i_1}, \ldots , x^{i_l}$, which form a hyperedge in $\mathscr{H}$. The other hyperedges of $\mathscr{H}$ are in one-to-one correspondence with the hyperedges of $\mathrm{H}$: to each hyperedge $ C_i=\left\{x_{1}, x_{2}, \ldots , x_{\left|C_i\right|}\right\}$ in $\mathrm{H}$ corresponds the hyperedge $\left\{x_{1}^i, x_{2}^i, \ldots , x_{\left|C_i\right|}^i\right\}$ in $\mathscr{H}$. \end{dfn} Fig.~\ref{fig:path_ext} illustrates this construction for a simple example. \begin{figure} \caption{ } \caption{ } \label{fig:path_ext} \end{figure} \begin{dfn} Given a behavior $\mathrm{B}$ for the compatibility scenario defined by hypergraph $\mathrm{H}$, we construct an \emph{extended behavior} $\mathscr{B}$ for $\mathrm{B}$ in the following way: for each context $\left\{x_{1}^i x_{2}^i \ldots x_{\left|C_i\right|}^i\right\}$ of $\mathscr{H}$ corresponding to a context $C_i=\left\{x_1 x_2 \ldots x_{\left|C_i\right|}\right\}$ of $\mathrm{H}$, the probability distribution assigned by behavior $\mathscr{B}$ is equal to the probability distribution assigned to $C_i$ by behavior $\mathrm{B}$; for each context $x^{i_1}, \ldots , x^{i_l}$ of $\mathscr{H}$ corresponding to a vertex $x \in X$ of $\mathrm{H}$, the probability distribution assigned by behavior $\mathscr{B}$ is any maximal coupling for the variables $x^{i_1}, \ldots , x^{i_l}$. \end{dfn} Since, in general, maximal couplings are not unique, for a given behaviour $B$ there might exist more than one extended behaviour $\mathscr{B}$ associated with it; in other words, $\mathscr{B}$ need not be unique. With these definitions, we can rewrite Def.~\ref{def:ext_context} as the following theorem: \begin{thm} A behavior $\mathrm{B}$ for the compatibility scenario defined by the hypergraph $\mathrm{H}$ has a \emph{maximally noncontextual description} if, and only if, there is an extended behavior $\mathscr{B}$ for $\mathrm{B}$ which is noncontextual with respect to the compatibility scenario defined by the extended compatibility hypergraph $\mathscr{H}$. \label{thm:extended} \end{thm} Thus, the problem of deciding if a behavior $\mathrm{B}$ is noncontextual in the extended sense is equivalent to the problem of finding an extended behavior $\mathscr{B}$ for $\mathrm{B}$ which is noncontextual in the extended scenario $\mathscr{H}$. This gives, as a corollary, a complete characterization of extended contextuality for the $n$-cycle scenario. \subsection{The $n$-cycle scenario} \label{sub:ncycle} In the $n$-cycle scenario, $X=\left\{0,\ldots , n-1\right\}$ and two measurements $i$ and $j$ are compatible iff $j=i+1 \mod n$. The corresponding hypergraph $\mathrm{H}$ is a cycle with $n$ vertices. The extended hypergraph $\mathscr{H}$ is a $2n$-cycle, with vertices $i^i, i^{i+1}$ and edges $\left\{i^i, (i+1)^i\right\}, \left\{i^i, i^{i-1} \right\}, \ i=0, \ldots , n-1$ (see Fig. \ref{fig:ncycle}).
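The construction of the extended compatibility hypergraph is purely combinatorial and easy to automate. The Python sketch below is only an illustration (all names and the vertex encoding are ours); applied to the $n$-cycle it reproduces the $2n$-cycle just described.
\begin{verbatim}
# Illustrative sketch of the extended-hypergraph construction: each measurement
# x receives one copy per context containing it; the copies of x form a new
# hyperedge, and each original context is replaced by its copies.
def extended_hypergraph(contexts):
    """contexts: list of lists of measurement labels.  Vertices of the extended
    hypergraph are encoded as (x, i) = copy of x living in context number i."""
    hyperedges = [[(x, i) for x in C] for i, C in enumerate(contexts)]
    measurements = {x for C in contexts for x in C}
    for x in sorted(measurements):
        copies = [(x, i) for i, C in enumerate(contexts) if x in C]
        if len(copies) > 1:    # a measurement in a single context needs no copies
            hyperedges.append(copies)
    return hyperedges

# The 5-cycle scenario: contexts {i, i+1 mod 5}.
n = 5
cycle = [[i, (i + 1) % n] for i in range(n)]
print(len(extended_hypergraph(cycle)))   # 2n = 10: the extended hypergraph is a 2n-cycle
\end{verbatim}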
\begin{figure}\label{fig:ncycle} \end{figure} \begin{cor} A behavior $B$ for the $n$-cycle scenario is noncontextual in the extended sense iff \begin{equation} s\left(\left\langle {i}^i(i+1)^i\right\rangle, 1-\left\langle i^{i}\right\rangle - \left\langle i^{i-1}\right\rangle\right)_{i=0, \ldots , n-1} \leq 2n-2,\label{eq:ncycle_ineq} \end{equation} where \begin{equation} s\left(z_1, \ldots, z_k\right) = \max_{\gamma_i = \pm 1, \prod_i \gamma_i=-1 } \sum_{i=1}^k \gamma_i z_i. \label{eq:ncycle}\end{equation} \end{cor} \begin{proof} In this case the extended behavior $\mathscr{B}$ is unique and, as shown in Ref. \cite{KDL15}, for every context $\left\{i^{i-1}, i^{i}\right\}$ corresponding to $i \in X$ we have that maximal couplings satisfy: \begin{equation} \left\langle i^{i-1}i^{i} \right\rangle=1-\left\langle i^{i-1}\right\rangle - \left\langle i^i\right\rangle.\end{equation} Hence, \begin{equation} P_{\mathscr{B}}= \left(\left\langle i^i(i+1)^i\right\rangle, 1-\left\langle i^{i-1}\right\rangle - \left\langle i^i\right\rangle\right)_{i=0, \ldots , n-1}.\end{equation} As shown in Ref. \cite{AQBTC13}, Eq.~\eqref{eq:ncycle_ineq} is a necessary and sufficient condition for membership in $\mathrm{CUT}\left(\nabla\mathrm{C}_{2n}\right).$ Thm. \ref{thm:extended} implies the result. \end{proof} \section{From valid inequalities for $\nabla \mathrm{G}$ to valid inequalities for $\nabla \mathscr{G}$} \label{sec:valid_ineq} The problem of deciding if a given behavior is noncontextual in the extended sense is, in general, extremely difficult (see, for instance, Ref.~\cite{LexSch03}). To completely solve it we need, first, to characterize the set of all extended behaviors $\mathscr{B}$ and, second, to characterize the set of noncontextual behaviors in the extended scenario, which is, as we mentioned before, a complex task. Although we cannot solve the problem completely, except for very special situations as in Sub.~\ref{sub:ncycle}, we are able to find necessary conditions for the existence of a noncontextual extended behavior in \emph{any} scenario using the cut polytope. The first step in this direction consists of defining a useful and important graph, associated with a given scenario, which will be used repeatedly in what follows: \begin{dfn} Given a scenario $\Gamma=(X,\mc{C},O)$, let $\mathscr{H}$ be the extended hypergraph associated with it. We call the 2-section of $\mathscr{H}$ the \emph{extended compatibility graph} associated with $\Gamma$, and denote it by $\mathscr{G}$. \label{def:ext_comp_graph} \end{dfn} Now, as another corollary of Thm. \ref{thm:extended}, we have: \begin{cor} If a behavior $\mathrm{B}$ is noncontextual in the extended sense, then there is an extended behavior $\mathscr{B}$ for $\mathrm{B}$ such that $\mathrm{P}_{\mathscr{B}}$ belongs to the cut polytope of $\nabla \mathscr{G}$, where $\mathscr{G}$ is the extended compatibility graph of the scenario. \label{cor:ext_comp_graph} \end{cor} \subsection{Triangular elimination} From valid inequalities for $\mathrm{CUT}\left(\nabla \mathrm{G}\right)$ it is possible to derive valid inequalities for $\mathrm{CUT}\left(\nabla \mathscr{G}\right)$ using the operation of \emph{triangular elimination}. \begin{dfn}[Triangular Elimination for Graphs] Let $\mathsf{G}=\left(\mathsf{V},\mathsf{E}\right)$ be a graph, $t$ an integer, and let $\mathsf{F}=\left\{u_iv_i \left| i =1, \ldots , t\right.\right\}$ be any subset of $\mathsf{E}$.
The graph $\mathsf{G}'=\left(\mathsf{V}',\mathsf{E}'\right)$ is a \emph{triangular elimination} of $\mathsf{G}$ with respect to $\mathsf{F}$ if $\mathsf{V}'=\mathsf{V} \cup \left\{w_1, w_2, \ldots, w_t\right\}$, where $w_1, w_2, \ldots, w_t$ are new vertices not in $\mathsf{V}$, and $\mathsf{E}' \supseteq \left\{w_iu_i, w_iv_i \left| i = 1, \ldots , t\right.\right\}$ and $\mathsf{E}'\cap \mathsf{E}= \mathsf{E}\setminus \mathsf{F}.$ \label{def:triangular_elim_for_graphs} \end{dfn} The graph $\mathsf{G}'$ is obtained from $\mathsf{G}$ by removing each edge $u_iv_i$ in $\mathsf{F}$ from $\mathsf{E}$ and replacing it with a new vertex $w_i$, which is connected to $u_i$ and $v_i$. Edges connecting $w_i$ with vertices other than $u_i$ and $v_i$ may or may not be added. A simple example is shown in Fig. \ref{fig:te}. \begin{figure}\label{fig:te} \end{figure} \begin{dfn}[Triangular elimination for inequalities] Let $\mathsf{G}'=\left(\mathsf{V}', \mathsf{E}'\right)$ be a triangular elimination of $\mathsf{G}=\left(\mathsf{V}, \mathsf{E}\right)$ with respect to $\mathsf{F}=\left\{u_iv_i \left| i =1, \ldots , t\right.\right\}$, and suppose $A \in \mathbb{R}^{\mathsf{E}}$, $A' \in \mathbb{R}^{\mathsf{E}'}$, $b, b' \in \mathbb{R}$. The inequality $A' \cdot P' \leq b'$ is a \emph{triangular elimination} of inequality $A \cdot P \leq b$ if it can be obtained from this last inequality by summing positive multiples of inequalities \begin{equation} -P_{u_iw_i} - P_{v_iw_i}- P_{u_iv_i} \leq 1, \label{eq:inq_te_1} \end{equation} \begin{equation} P_{u_iw_i} + P_{v_iw_i}- P_{u_iv_i} \leq 1 \label{eq:inq_te_2} \end{equation} or the other two inequalities obtained from \eqref{eq:inq_te_2} by permuting $u_i, v_i$ and $w_i$. \end{dfn} \begin{prop} \label{prop:tri} Let $\mathsf{G}'=\left(\mathsf{V}', \mathsf{E}'\right)$ be a triangular elimination of $\mathsf{G}=\left(\mathsf{V}, \mathsf{E}\right)$. Let $A \cdot P \leq b$ be a valid inequality for $\mathrm{CUT}\left(\mathsf{G}\right)$ and $A' \cdot P' \leq b'$ be a triangular elimination of $A \cdot P \leq b$. Then $A' \cdot P' \leq b'$ is valid for $\mathrm{CUT}\left(\mathsf{G}'\right)$. \end{prop} \textbf{Remark:} For our purposes the content of Prop. \ref{prop:tri} above is enough (see Corollary \ref{cor:necessary_condt}). Nonetheless, in Ref.~\cite{AIT06} the authors have shown that the other implication in Prop. \ref{prop:tri} is also true. This means that if $A^{\prime} \cdot P^{\prime} \leq b^{\prime}$ is a valid inequality for $\mathrm{CUT}\left(\mathsf{G}^{\prime}\right)$, then $A \cdot P \leq b$ is valid for $\mathrm{CUT}\left(\mathsf{G}\right)$, provided that $\mathsf{G}^{\prime}$ and $A^{\prime} \cdot P^{\prime} \leq b^{\prime}$ are triangular eliminations of $\mathsf{G}$ and $A \cdot P \leq b$, respectively. \subsection{Triangular elimination and extended contextuality} \begin{thm} Let $\Gamma$ be a compatibility scenario. If the compatibility hypergraph of $\Gamma$ coincides with its 2-section, \emph{i.e.}, if $\mathrm{H} = \mathrm{G}$, then the extended compatibility graph $\mathscr{G}$ is a triangular elimination of the compatibility graph $\mathrm{G}$. Moreover, $\nabla \mathscr{G}$ is a triangular elimination of $\nabla \mathrm{G}$. \label{thm:extd_graph_elimintion} \end{thm} \begin{proof} We start with $\mathrm{G}=(X,E(\mathrm{G}))$ and $x_1 \in \mathrm{X}$.
Let $E_{x_1}=\left\{ x_1y_1, x_1y_2, \ldots , x_1y_n \right\} \subset E(\mathrm{G})$ be the set of all edges incident to $x_1$ and let $\mathrm{G}_1 = (V(\mathrm{G}_1),E(\mathrm{G}_1))$ be the graph obtained from $\mathrm{G}$ in the following way: remove from $E(\mathrm{G})$ all edges in $E_{x_1}$, remove from $X$ the vertex $x_1$, add vertices $x_1^1, x_1^2, \ldots , x_1^n$, edges $\left\{ x_1^1y_1, x_1^2y_2, \ldots , x_1^ny_n \right\}$ and all edges $x_1^kx_1^l$ with $1 \leq k < l \leq n$. The graph $\mathrm{G}_1$ is a triangular elimination of $\mathrm{G}$. Take now $x_2 \in V(\mathrm{G}_1) \setminus \left\{x_1^1, x_1^2, \ldots , x_1^n\right\}$ and repeat the same procedure, obtaining graph $\mathrm{G}_2$. Proceeding analogously for every vertex in $\mathrm{X}$, which is a finite set, we get $\mathscr{G}$ in the last step. A similar argument can be used for $\nabla \mathrm{G}$. \end{proof} \begin{figure}\label{fig:fig} \end{figure} As a direct consequence of Prop. \ref{prop:tri} and Thm. \ref{thm:extd_graph_elimintion}, we have: \begin{cor} Given a compatibility scenario $\Gamma$, suppose that $\mathrm{H}=\mathrm{G}$. A necessary condition for the behavior $\mathrm{B}$ to be noncontextual in the extended sense is that for every extended behavior $\mathscr{B}$ and for every inequality valid for $\mathrm{CUT}\left(\nabla \mathrm{G}\right)$, its triangular eliminations are satisfied by the vector $\mathrm{P}_{\mathscr{B}}$ corresponding to $\mathscr{B}$. \label{cor:necessary_condt} \end{cor} It is important to notice that terms of the form of inequality \eqref{eq:inq_te_2} added to $A \cdot P \leq b$ will be satisfied at equality if the behaviors are perfectly non-disturbing. Hence, there is one triangular elimination of $A \cdot P \leq b$ that is tight and reduces to the original inequality for non-disturbing behaviors. When $\mathscr{B}$ is unique, we obtain a simple necessary condition for noncontextuality in the extended sense. We calculate $\mathrm{P}_{\mathscr{B}}$ and substitute its entries into the inequalities for $\mathrm{CUT}\left(\nabla \mathscr{G}\right)$ obtained from the inequalities for $\mathrm{CUT}\left(\nabla \mathrm{G}\right)$ via triangular elimination. If we find that some of them are not satisfied, we can conclude that $\mathrm{B}$ is contextual in the extended sense. When $\mathscr{B}$ is not unique, it may be impractical to determine all possible $\mathrm{P}_{\mathscr{B}}$, so we cannot test directly whether these vectors satisfy all triangular eliminations of a given inequality for $\mathrm{CUT}\left(\nabla \mathrm{G}\right)$. Nevertheless, Thm. \ref{thm:necessary_max_coup} will help us circumvent this difficulty. If $A' \cdot P' \leq b'$ is a triangular elimination of $A \cdot P \leq b$, then the left-hand side can be written as a sum of two terms $A'\cdot P' = A_1 \cdot P_1 + A_2\cdot P_2$, where $P_1$ is the projection of $P'$ that contains the entries depending only on the contexts in $\mathscr{H}$ that come from the contexts in $\mathrm{H}$ and $P_2$ is the projection of $P'$ that contains the terms depending only on the contexts consisting of random variables that represent the same measurement. From $\mathrm{P}_{\mathrm{B}}$ we calculate $P_1$. To calculate $P_2$ explicitly we have to determine the maximal couplings for each pair of variables that represent the same measurement, which can be a hard task.
Instead of doing this, we use the necessary condition satisfied by all maximal couplings, presented in Thm.~\ref{thm:necessary_max_coup}, to calculate the worst-case value of $A_2\cdot P_2$ compatible with the maximal-coupling constraints. This proves the following: \begin{thm} Let $A'\cdot P' = A_1 \cdot P_1 + A_2\cdot P_2 \leq b'$ be a valid inequality for $\mathrm{CUT}\left(\nabla \mathscr{G}\right)$. Let $m$ be the minimum of $A_2\cdot P_2$ over all possible values of $P_2$ satisfying the conditions given in Thm. \ref{thm:necessary_max_coup}. If \begin{equation} A_1 \cdot P_1 + m > b',\end{equation} where $P_1$ is computed from $\mathrm{P}_{\mathrm{B}}$, then $\mathrm{B}$ is contextual in the extended sense. \end{thm} This gives a necessary condition for extended noncontextuality that can be applied in any compatibility scenario. \section{The $I_{3322}$ inequality} \label{sec:I3322} Our first example is the $(3,3,2,2)$ Bell scenario \cite{Foissart81,CG04}, where two distinct parties perform three measurements each, each measurement with two outcomes. In this case each context has exactly two measurements, one from each party. With our notation, it means that this scenario is described by \begin{equation} \Gamma= \lbrace \{ A_1,A_2,A_3,B_1,B_2,B_3 \}, \{A_i B_j\}_{i,j=1,2,3}, \{-1,1\} \rbrace \end{equation} and $\mathrm{H} = \mathrm{G}$. The compatibility graph of this scenario is the complete bipartite graph $K_{3,3}$, shown in Fig. \ref{fig:I3322}. \begin{figure}\label{fig:I3322} \end{figure} One of the facets of $\mathrm{CUT}\left(\nabla \mathrm{G}\right)$ is given by the so-called $\mathrm{I}_{3322}$ inequality \cite{Foissart81,CG04}: \begin{multline} \label{eq:I3322} \left\langle A_1 \right\rangle + \left\langle A_2 \right\rangle + \left\langle B_1 \right\rangle +\left\langle B_2 \right\rangle -\left\langle A_1 B_1 \right\rangle-\left\langle A_1B_2 \right\rangle \\ -\left\langle A_1 B_3\right\rangle -\left\langle A_2B_1 \right\rangle-\left\langle A_2B_2 \right\rangle + \left\langle A_2B_3 \right\rangle -\left\langle A_3B_1 \right\rangle +\left\langle A_3B_2 \right\rangle \leq 4. \end{multline} The extended compatibility graph $\mathscr{G}$ of this scenario is shown in Fig. \ref{fig:I3322_ext}. Each vertex $A_i$ becomes three new vertices $A_i^1, A_i^2, A_i^3$ in $\mathscr{G}$, and similarly for each $B_i$. Vertices $A_i^j$ and $B_j^i$ are connected. The vertices $A_i^1, A_i^2, A_i^3$ are connected for each $i$, and similarly for $B_i^1, B_i^2, B_i^3$.
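As an illustrative sanity check (not part of the formal development; all names are ours), the noncontextual bound $4$ of Ineq.~\eqref{eq:I3322} can be verified by brute force over deterministic $\pm 1$ assignments, which generate $\mathrm{CUT}\left(\nabla K_{3,3}\right)$ in the $\pm 1$ convention used for the entries of $\mathrm{P}_{\mathrm{B}}$.
\begin{verbatim}
# Illustrative sketch: brute-force check of the classical bound of I3322.
from itertools import product

terms_v = {"A1": 1, "A2": 1, "B1": 1, "B2": 1}
terms_e = {("A1", "B1"): -1, ("A1", "B2"): -1, ("A1", "B3"): -1,
           ("A2", "B1"): -1, ("A2", "B2"): -1, ("A2", "B3"): 1,
           ("A3", "B1"): -1, ("A3", "B2"): 1}

names = ["A1", "A2", "A3", "B1", "B2", "B3"]
best = float("-inf")
for signs in product([-1, 1], repeat=len(names)):
    eps = dict(zip(names, signs))
    val = sum(w * eps[v] for v, w in terms_v.items())
    val += sum(w * eps[x] * eps[y] for (x, y), w in terms_e.items())
    best = max(best, val)
print(best)   # 4, the noncontextual bound of the I3322 inequality
\end{verbatim}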
\begin{figure}\label{fig:I3322_ext} \end{figure} Applying triangular elimination to the $\mathrm{I}_{3322}$ inequality, we can derive the following valid inequality for $\mathrm{CUT}\left(\nabla \mathscr{G}\right)$: \begin{multline} \left\langle A^1_1 \right\rangle + \left\langle A^1_2 \right\rangle + \left\langle B^1_1 \right\rangle +\left\langle B^1_2 \right\rangle -\left\langle A^1_1 B^1_1 \right\rangle-\left\langle A^2_1B^1_2 \right\rangle-\left\langle A^3_1 B^1_3\right\rangle -\left\langle A^1_2B^2_1 \right\rangle \\ -\left\langle A^2_2B^2_2 \right\rangle+\left\langle A^3_2B^2_3 \right\rangle -\left\langle A^1_3B^3_1 \right\rangle +\left\langle A^2_3B^3_2 \right\rangle + \left\langle A^1_1A_1^2 \right\rangle + \left\langle A^1_1A_1^3 \right\rangle + \left\langle A^1_2 A_2^2 \right\rangle \\ + \left\langle A^1_2 A_2^3 \right\rangle + \left\langle A^1_3A_3^2 \right\rangle + \left\langle B^1_1B_1^2 \right\rangle + \left\langle B^1_1B_1^3 \right\rangle + \left\langle B^1_2B_2^2 \right\rangle + \left\langle B^1_2 B_2^3 \right\rangle + \left\langle B^1_3B_3^2 \right\rangle \leq 14. \label{eq:I3322_ext} \end{multline} This inequality is tight and reduces to Ineq. \eqref{eq:I3322} for non-disturbing behaviors, since in this particular case we have that $\left\langle A^j_iA_i^k \right\rangle=1$ and $\left\langle B^j_iB_i^k \right\rangle=1$ for every $i, j,k$. In this scenario, each measurement has two outcomes and belongs to three contexts, therefore each behavior $\mathrm{B}$ has a unique extended behavior $\mathscr{B}$ corresponding to it. This, in turn, implies the following result: \begin{cor} A necessary condition for extended noncontextuality of a behavior $\mathrm{B}$ in the $(3,3,2,2)$ Bell scenario is that the unique extended behavior of $\mathrm{B}$ satisfies the triangular elimination of the $\mathrm{I}_{3322}$ inequality given by Eq. \eqref{eq:I3322_ext}. \end{cor} \section{Chained Inequalities} \label{sec:chain_ineq} We consider now the $(n,n,2,2)$ Bell scenario with $2$ parties, $n$ measurements per party, each measurement with $2$ outcomes. Also in this case each context has exactly two measurements, one from each party, and $\mathrm{H}= \mathrm{G}$. Once again, sticking to our notation, we describe such a scenario with \begin{equation} \Gamma=\lbrace \{A_1,...,A_n,B_1,...,B_n\}, \{A_i B_j\}_{i,j=1,\ldots,n}, \{-1,1\} \rbrace. \end{equation} The compatibility graph $\mathrm{G}$ is the complete bipartite graph $K_{n,n}$. A family of noncontextuality inequalities for these scenarios consists of the so-called \emph{Chained Inequalities} \cite{BC90}, given by \begin{equation} \left\langle A_1B_2\right\rangle + \left\langle B_1A_2\right\rangle + \ldots + \left\langle B_{n-1}A_n\right\rangle +\left\langle A_nB_n \right\rangle - \left\langle B_n A_1\right\rangle \leq 2n-2. \label{eq:chained}\end{equation} Each vertex $A_i$ becomes $n$ new vertices $A_i^1, A_i^2, \ldots , A_i^n$ in the extended compatibility graph $\mathscr{G}$, and similarly for each $B_i$. Vertices $A_i^j$ and $B_j^i$ are connected. The vertices $A_i^1, A_i^2, \ldots , A_i^n$ are connected for each $i$, and similarly for $B_i^1, B_i^2, \ldots , B_i^n$. Applying triangular elimination to the inequality \eqref{eq:chained}, we can derive the following valid inequality for $\mathrm{CUT}\left(\nabla \mathscr{G}\right)$, which is tight and reduces to Ineq.
\eqref{eq:chained} for non-disturbing behaviors: \begin{multline} \left\langle A_1^2B^1_2\right\rangle + \left\langle B^2_1A^1_2\right\rangle + \ldots + \left\langle B^n_{n-1}A^{n-1}_n\right\rangle +\left\langle A^n_nB^n_n \right\rangle - \left\langle B^1_n A^n_1\right\rangle +\left\langle A^1_1A^n_1 \right\rangle +\left\langle A^1_2A^2_2 \right\rangle + \ldots \\ +\left\langle A^{n-1}_nA^n_n \right\rangle +\left\langle B^1_1B^2_1 \right\rangle +\left\langle B^2_2B^3_2 \right\rangle +\ldots +\left\langle B^1_nB^n_n \right\rangle \leq 4n-2. \label{eq:chained_ext} \end{multline} In this scenario, each measurement belongs to $n$ contexts, therefore each behavior $\mathrm{B}$ may have several extended behaviors $\mathscr{B}$ corresponding to it. Given such a $\mathscr{B}$, we construct the vector $\mathrm{P}_{\mathscr{B}}$. Let $P_1$ be the projection of $\mathrm{P}_{\mathscr{B}}$ over the entries corresponding to contexts $A_i^jB_j^i$ and $P_2$ be the projection of $\mathrm{P}_{\mathscr{B}}$ over the entries corresponding to contexts $A_i^jA_i^k$ and $B_i^jB_i^k$. $P_1$ depends only on $\mathrm{P}_{\mathrm{B}}$ and hence is the same for all extended behaviors $\mathscr{B}$. The projection $P_2$ depends on the choice of maximal coupling for each pair $A_i^jA_i^k$ and $B_i^jB_i^k$. The left-hand side of inequality \eqref{eq:chained_ext} can be divided into two parts. The first part contains the terms \begin{widetext} \begin{equation} \left\langle A_1^2B^1_2\right\rangle + \left\langle B^2_1A^1_2\right\rangle + \ldots + \left\langle B^n_{n-1}A^{n-1}_n\right\rangle +\left\langle A^n_nB^n_n \right\rangle - \left\langle B^1_n A^n_1\right\rangle \end{equation} \end{widetext} and depends only on $P_1$, and hence only on $\mathrm{P}_{\mathrm{B}}$. The second part contains the terms \begin{widetext} \begin{equation} \left\langle A^1_1A^n_1 \right\rangle +\left\langle A^1_2A^2_2 \right\rangle + \ldots +\left\langle A^{n-1}_nA^n_n \right\rangle +\left\langle B^1_1B^2_1 \right\rangle +\left\langle B^2_2B^3_2 \right\rangle +\ldots +\left\langle B^1_nB^n_n \right\rangle \label{eq:chained_ext_part2} \end{equation} \end{widetext} and depends only on $P_2$. No matter which extended behavior we have, the projection $P_2$ must necessarily satisfy the constraint given in Thm. \ref{thm:necessary_max_coup}. Let $m$ be the minimum of the second term \eqref{eq:chained_ext_part2} over all vectors $P_2$ satisfying Thm. \ref{thm:necessary_max_coup}. \begin{cor} A necessary condition for extended noncontextuality of a behavior $\mathrm{B}$ in the $(n,n,2,2)$ Bell scenario is that the inequality \begin{equation} \left\langle A_1^2B^1_2\right\rangle + \left\langle B^2_1A^1_2\right\rangle + \ldots + \left\langle B^n_{n-1}A^{n-1}_n\right\rangle +\left\langle A^n_nB^n_n \right\rangle - \left\langle B^1_n A^n_1\right\rangle + m \leq 2\left(2n-1\right) \end{equation} is satisfied by the projection $P_1$ of the extended behaviors $\mathscr{B}$ for $\mathrm{B}$. \end{cor} \section{Scenarios with contexts with more than three measurements} \label{sec:more_than_3} When there are contexts with more than three measurements, $\mathrm{H} \neq \mathrm{G}$ and $\mathscr{G}$ is not a triangular elimination of $\mathrm{G}$.
Nevertheless we can still generate valid inequalities for $\mathrm{CUT}\left(\nabla \mathscr{G}\right)$ from valid inequalities for $\mathrm{CUT}\left(\nabla \mathrm{G}\right)$ using two strategies: the first one is to use a graph operation called \emph{vertex splitting}~\cite{AIT06,DL97,BM86}; the second one is to use triangular elimination combined with a graph operation called \emph{edge contraction}~\cite{AIT06,DL97,BM86}. \subsection{Vertex splitting} \begin{dfn}[Vertex splitting for graphs] Let $\mathsf{G}=\left(\mathsf{V}, \mathsf{E}\right)$ be a graph, $w \in \mathsf{V}$ and $\left(\mathsf{S}, \mathsf{T}, \mathsf{B}\right)$ be a partition of the neighbours of $w$. The graph $\mathsf{G}'=\left(\mathsf{V}', \mathsf{E}'\right)$ is obtained from $\mathsf{G}$ by \emph{splitting} vertex $w$ into $s$ and $t$, for $s,t \notin \mathsf{V}$, with respect to the partition $\left(\mathsf{S}, \mathsf{T}, \mathsf{B}\right)$ if $$\mathsf{V}'= \left(\mathsf{V} \setminus \{w\} \right) \cup \left\{s,t\right\}$$ and \begin{equation} \mathsf{E}'= \left(\mathsf{E} \setminus \delta\left(w\right)\right) \cup \left(s: S \cup B\right) \cup \left(t: T \cup B\right) \cup \left\{st\right\},\end{equation} where $\delta\left(w\right)$ is the set of neighbours of $w$, $\left(s: S \cup B\right)$ is the set of all edges connecting $s$ to the vertices in $S \cup B$ and $\left(t: T \cup B\right)$ is the set of all edges connecting $t$ to the vertices in $T \cup B$. \end{dfn} In other words, the graph $\mathsf{G}'$ is the graph obtained from $\mathsf{G}$ removing the vertex $w$ and replacing it by vertices $s$ and $t$, which are connected. The vertices in $S$ are connected only to $s$, the vertices in $T$ are connected only to $t$ and the vertices in $B$ are connected to both $s$ and $t$. Figures \ref{fig:splitting1}-\ref{fig:splitting2} illustrate a simple example of this operation. \begin{figure}\label{fig:splitting1} \label{fig:splitting2} \end{figure} \begin{dfn}[Vertex splitting for inequalities] Let $\mathsf{G}=\left(\mathsf{V}, \mathsf{E}\right)$ be a graph, $w \in \mathsf{V}$, $\left(\mathsf{S}, \mathsf{T}, \mathsf{B}\right)$ be a partition of the neighbours of $w$ and $A \cdot P \leq b$ be an inequality valid for $\mathrm{CUT}\left(\mathsf{G}\right)$. Assume without loss of generality that $\sum_{v \in \mathsf{T}} \left| A_{wv} \right| \leq \sum_{v \in \mathsf{S}} \left| A_{wv}\right|.$ Define $A'$ in the following way: \begin{eqnarray} A'_{st} & = & -\sum_{v \in T} \left|A_{wv}\right| \\ A'_{tv}&=& 0, \ v \in \mathsf{B}\\ A'_{tv}&=&A_{wv}, \ v \in \mathsf{T}\\ A'_{sv}&=&A_{wv}, \ v \in \mathsf{S} \cup \mathsf{B}\\ A'_{uv}&=&A_{uv}, \ uv \in \mathsf{E}'\setminus \left[\delta(s) \cup \delta(t)\right]. \end{eqnarray} The inequality $A' \cdot P' \leq b$ is called the \emph{vertex splitting} of $A \cdot P \leq b$ with respect to $w \in \mathsf{V}$ and $\left(\mathsf{S}, \mathsf{T}, \mathsf{B}\right)$. \end{dfn} \begin{prop} \label{prop:split} Let graph $\mathsf{G}'$ and inequality $A' \cdot P' \leq b$ be vertex splittings of $\mathsf{G}$ and $A \cdot P \leq b$ (resp.) with respect to $w \in \mathsf{V}$ and $\left(\mathsf{S}, \mathsf{T}, \mathsf{B}\right)$. If $A \cdot P \leq b$ is a valid inequality for $\mathrm{CUT}\left(\mathsf{G}\right)$, then $A' \cdot P' \leq b$ is a valid inequality for $\mathrm{CUT}\left(\mathsf{G}'\right)$. 
\end{prop} \begin{thm} The extended compatibility graph $\mathscr{G}$ and its suspension graph $\nabla \mathscr{G}$ can be obtained from the compatibility graph $\mathrm{G}$ and $\nabla \mathrm{G}$, respectively, using a sequence of vertex splitting operations. \label{thm:split} \end{thm} \begin{proof} Choose $x \in \mathrm{X}$ and let $C_1, \ldots , C_n$ be the contexts containing $x$. Then $\delta(x)$ contains the measurements in $\left[\cup_i C_i\right] \setminus C_1$. Starting with $\mathrm{G}$, the first operation is splitting $x$ into $x_1$ and $x_1'$ with respect to the partition \begin{equation}\left(S_1 = C_1\setminus \left[\cup_{i>1} C_i\right] , T_1 = \left[\cup_{i>1} C_i \right] \setminus C_1, B_1= \left[\cup_{i>1} C_i\right] \cap C_1\right).\end{equation} Vertex $x_1$ is connected to $S_1$, vertex $x_1'$ is connected to $T_1$ and both $x_1$ and $x_1'$ are connected to $B_1$. With this operation, we set $x_1$ as the copy of $x$ in $\mathscr{G}$ corresponding to context $C_1$. The next operation is to split $x_1'$ into vertices $x_2$ and $x_2'$ with respect to the partition \begin{equation} \left(S_2=C_2\setminus\left[ \cup_{i>2} C_i\right] , T_2=\left[\cup_{i>2} C_i\right] \setminus C_2,B_2=\left[\left[\cup_{i=2}^n C_i\right] \cap C_2 \right]\cup \left\{x_1\right\}\right).\end{equation} With this operation, we set $x_2$ as the copy of $x$ in $\mathscr{G}$ corresponding to context $C_2$. We proceed analogously, in each step splitting vertex $x_k'$ into $x_{k+1}$ and $x_{k+1}'$ with respect to the partition \begin{multline}\left( S_{k+1}=C_{k+1}\setminus \left[\cup_{i>{k+1}} C_i\right] , T_{k+1}=\left[\cup_{i>{k+1}} C_i \right]\setminus C_{k+1},\right. \\ \left.B_{k+1}=\left[\left[\cup_{i=k+1}^n C_i \right] \cap C_{k+1}\right]\cup \left\{x_1, \ldots , x_k\right\}\right). \end{multline} With this chain of operations we eliminate vertex $x$ and add the clique $x_1, \ldots , x_n$, each $x_i$ connected only to the vertices in context $C_i$ and to the other $x_j$. Applying the same procedure to the other vertices in $X$ we recover $\mathscr{G}$. A similar argument can be used for $\nabla \mathscr{G}$. \end{proof} A simple example of the procedure described in the previous proof is shown in Fig. \ref{fig:ex_splitting}. \begin{figure} \caption{\footnotesize{First splitting operation.}} \caption{\footnotesize{Second splitting operation.}} \label{fig:ex_splitting} \end{figure} Combining Prop. \ref{prop:split} and Thm. \ref{thm:split}, we have: \begin{cor} From valid inequalities for $\mathrm{CUT}\left(\nabla \mathrm{G}\right)$ we can generate necessary conditions for extended noncontextuality using vertex splitting. \end{cor} \subsection{Triangular Elimination and Edge Contraction} \begin{dfn}[Edge contraction for graphs] Let $\mathsf{G}=\left(\mathsf{V},\mathsf{E}\right)$ be a graph, $w \notin \mathsf{V}$, and $uv \in \mathsf{E}$. The graph $\mathsf{G}'=\left(\mathsf{V}',\mathsf{E}'\right)$ is a \emph{contraction} of $\mathsf{G}$ at edge $uv$ if $\mathsf{V}'= \left[\mathsf{V}\setminus \left\{u,v\right\}\right]\cup\left\{w\right\}$ and $ \displaystyle \mathsf{E}'= \left[\mathsf{E} \setminus \left[ \left\{uv\right\}\cup \left\{ux| x \in \delta(u)\right\} \cup \left\{vx| x \in \delta(v)\right\}\right] \right] \cup \left\{wx| x \in \delta(u) \cup \delta(v)\right\}.$ \label{dfn:edge_contr_graphs} \end{dfn} A simple example of this operation is shown in Fig.~\ref{fig:contraction}.
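The graph part of this operation is straightforward to implement. The short Python sketch below is only an illustration of the definition of edge contraction above; the data representation and names are ours.
\begin{verbatim}
# Illustrative sketch: contraction of a simple graph at an edge uv, following
# the definition above.  A graph is a set of vertices plus a set of frozenset
# edges; the contracted edge is replaced by a single new vertex w.
def contract_edge(vertices, edges, u, v, w):
    assert frozenset((u, v)) in edges and w not in vertices
    new_vertices = (vertices - {u, v}) | {w}
    new_edges = set()
    for e in edges:
        if e == frozenset((u, v)):
            continue                      # the contracted edge disappears
        if u in e or v in e:              # redirect edges incident to u or v to w
            other = next(x for x in e if x not in (u, v))
            new_edges.add(frozenset((w, other)))
        else:
            new_edges.add(e)
    return new_vertices, new_edges

# Contracting the chord {1,3} of a 4-cycle with a chord leaves the path 2 - w - 4.
V = {1, 2, 3, 4}
E = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]}
V2, E2 = contract_edge(V, E, 1, 3, "w")
print(len(V2), len(E2))   # 3 2
\end{verbatim}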
\begin{figure}\label{fig:contraction} \end{figure} \begin{dfn}[Edge contraction for inequalities] Let $\mathsf{G}=\left(\mathsf{V}, \mathsf{E}\right)$ be a graph, $uv \in \mathsf{E}$ and $A \cdot P \leq b$ be an inequality valid for $\mathrm{CUT}\left(\mathsf{G}\right)$. Define $A'$ in the following way: \begin{eqnarray} A'_{xy} & = & A_{xy}, \ x,y \neq w \\ A'_{wx}&=& A_{ux}, \ x \in \delta(u) \setminus \delta(v)\\ A'_{wx}&=&A_{vx}, \ x \in \delta(v) \setminus \delta(u)\\ A'_{wx}&=&A_{ux}+ A_{vx}, \ x \in \delta(u) \cap \delta(v). \end{eqnarray} The inequality $A' \cdot P \leq b$ is called the \emph{contraction} of $A \cdot P \leq b$ at the edge $uv$. \end{dfn} \begin{prop}[Edge contraction lemma~\cite{AIT06,DL97}] If $\mathsf{G}'$ and $A' \cdot P \leq b$ are contractions of $\mathsf{G}$ and $A \cdot P \leq b$, respectively, at edge $uv$ and $A \cdot P \leq b$ is valid for $\mathrm{CUT}\left(\mathsf{G}\right)$, the inequality $A' \cdot P \leq b$ is valid for $\mathrm{CUT}\left(\mathsf{G}'\right)$. \end{prop} \begin{thm} The extended compatibility graph $\mathscr{G}$ and its suspension graph $\nabla \mathscr{G}$ can be obtained from $\mathrm{G}$ and $\nabla \mathrm{G}$, respectively, using triangular elimination and edge contraction. \end{thm} \begin{proof} When some contexts have three elements or more, the problem with the construction of Thm. \ref{thm:extd_graph_elimintion} is that we have a copy of $v \in X$ for each vertex in $\delta(v)$ instead of one copy for each context containing $v$. From this graph we can obtain $\mathscr{G}$ by identifying these extra copies, contracting the corresponding edges. A similar argument can be used for $\nabla \mathscr{G}$. \end{proof} A simple example of this procedure is shown in Fig. \ref{fig:te+c}. As a corollary, we have the following: \begin{figure} \caption{\footnotesize{Graph $G$.}} \caption{\footnotesize{Graph $G'$.}} \label{fig:te+c} \end{figure} \begin{cor} Valid inequalities for $\mathscr{G}$ can be generated by combining triangular elimination and edge contraction of valid inequalities for $\mathrm{G}$. \end{cor} This provides another tool to derive necessary conditions for extended noncontextuality in any scenario. \section{The Peres-Mermin inequality} Although the cut polytope provides a powerful tool to derive necessary conditions for contextuality, both in the standard and in the extended sense, it is not enough to characterize completely the set of noncontextual distributions in scenarios with contexts containing more than two random variables, since there are contextual behaviors that cannot be detected when we look only at the binary expectation values of Eq. \eqref{eq:p2}, that is, there are contextual behaviors $B$ for which $P_B\in \mathrm{CUT}\left( \nabla G\right)$ \cite{GWAN11}. With this in mind, it would be useful to find strategies to derive necessary conditions for extended contextuality from inequalities that involve expectation values of more than two random variables. In what follows, we show that this is possible with a simple procedure, similar to triangular elimination, using the Peres-Mermin inequality as an example. The Peres-Mermin square is a contextuality scenario with nine measurements $A_i$, $i=1, \ldots, 9$, with outcomes $\pm 1$, and compatibility hypergraph shown in Fig.~\ref{fig:Peres_Mermin}.
These measurements can be chosen in quantum theory in such a way that the product of the three measurements in each row and in the first two columns is equal to the identity operator $I$, while the product of the measurements in the last column is equal to $-I$. \begin{figure}\label{fig:Peres_Mermin} \end{figure} For this scenario, every noncontextual behavior must satisfy the inequality \begin{equation}\left\langle A_1A_2A_3\right\rangle + \left\langle A_4A_5A_6\right\rangle + \left\langle A_7A_8A_9\right\rangle + \left\langle A_1A_4A_7\right\rangle + \left\langle A_2A_5A_8\right\rangle - \left\langle A_3A_6A_9\right\rangle \leq 4\label{eq:peres-mermim}\end{equation} while for all quantum behaviors the left-hand side is equal to $6$. This is one of the famous examples of \emph{state independent contextuality}: for this choice of measurements, all quantum states yield contextual behaviors. \begin{figure}\label{fig:Peres_Mermin_Ext} \end{figure} The extended compatibility hypergraph for this scenario is shown in Fig. \ref{fig:Peres_Mermin_Ext}. Labeling the hyperedges of $\mathrm{H}$ defined by the rows in Fig. \ref{fig:Peres_Mermin} as $1, 2,3$ and the hyperedges defined by the columns as $4,5,6$, each measurement $A_i$ is divided into two new vertices $A_i^j$ and $A_i^k$ of $\mathscr{H}$, where $j\in \{1,2,3\}$ and $k\in \{4,5,6\}$ according to the row and column $A_i$ belongs to. Although the tools provided by the $\mathrm{CUT}$ polytope cannot be used in this case, since the inequality \eqref{eq:peres-mermim} involves mean values of the product of three measurements instead of two, some ideas of Sec. \ref{sec:valid_ineq} can be used in a similar way to derive valid inequalities for the extended scenario from it. We start with Ineq.~\eqref{eq:peres-mermim}, substituting each $A_i$ with its copy $A_i^j$ with $j\in \{1,2,3\}$: \begin{equation} \left\langle A_1^1A_2^1A_3^1\right\rangle + \left\langle A_4^2A_5^2A_6^2\right\rangle + \left\langle A_7^3A_8^3A_9^3\right\rangle + \left\langle A_1^1A_4^2A_7^3\right\rangle + \left\langle A_2^1A_5^2A_8^3\right\rangle - \left\langle A_3^1A_6^2A_9^3\right\rangle \leq 4,\end{equation} valid for all noncontextual extended behaviors. To eliminate the term $\left\langle A_1^1A_4^2A_7^3\right\rangle$ we use \begin{equation} A_1^1A_4^2A_7^3 = A_1^4A_4^4A_7^4 +\Delta A_1 A_4^4 A_7^4 + A_1^1\Delta A_4 A_7^4 +A_1^1A_4^2\Delta A_7,\end{equation} where $\Delta A_1 = A_1^1 - A_1^4$, $\Delta A_4 = A_4^2 - A_4^4$ and $\Delta A_7 = A_7^3 - A_7^4$. From this we get \begin{eqnarray} \left\langle A_1^1A_2^1A_3^1\right\rangle + \left\langle A_4^2A_5^2A_6^2\right\rangle + \left\langle A_7^3A_8^3A_9^3\right\rangle + \left\langle A_1^4A_4^4A_7^4\right\rangle + \left\langle A_2^1A_5^2A_8^3\right\rangle - \left\langle A_3^1A_6^2A_9^3\right\rangle & \leq & \nonumber \\ 4 - \left\langle \Delta A_1 A_4^4 A_7^4 \right\rangle - \left\langle A_1^1\Delta A_4 A_7^4 \right\rangle - \left\langle A_1^1A_4^2\Delta A_7\right\rangle&\leq &\\ 4+ \left\langle \left|\Delta A_1\right|\right\rangle + \left\langle \left|\Delta A_4\right|\right\rangle + \left\langle \left|\Delta A_7\right|\right\rangle .& &\end{eqnarray} Proceeding analogously with the other terms, we get the inequality \begin{equation}\left\langle A_1^1A_2^1A_3^1\right\rangle + \left\langle A_4^2A_5^2A_6^2\right\rangle + \left\langle A_7^3A_8^3A_9^3\right\rangle + \left\langle A_1^4A_4^4A_7^4\right\rangle + \left\langle A_2^5A_5^5A_8^5\right\rangle - \left\langle A_3^6A_6^6A_9^6\right\rangle \leq 4+ \sum_{i=1}^9\left\langle\left|\Delta A_i\right|\right\rangle,\end{equation} valid for all noncontextual extended behaviors. This inequality is tight and reduces to the original Peres-Mermin Ineq. \eqref{eq:peres-mermim} for non-disturbing behaviors.
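The noncontextual bound $4$ in Ineq.~\eqref{eq:peres-mermim} can itself be checked by brute force over deterministic $\pm 1$ assignments of the nine measurements. The Python sketch below is purely illustrative (all names are ours) and is not part of the formal argument.
\begin{verbatim}
# Illustrative sketch: brute-force check of the noncontextual bound of the
# Peres-Mermin inequality over all deterministic +/-1 assignments A_1,...,A_9.
from itertools import product

plus_contexts = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6), (1, 4, 7)]
minus_context = (2, 5, 8)

best = float("-inf")
for a in product([-1, 1], repeat=9):
    value = sum(a[i] * a[j] * a[k] for (i, j, k) in plus_contexts)
    i, j, k = minus_context
    value -= a[i] * a[j] * a[k]
    best = max(best, value)
print(best)   # 4, the noncontextual bound of the Peres-Mermin inequality
\end{verbatim}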
This inequality is tight and reduces to the original Peres-Mermin Ineq. \eqref{eq:peres-mermim} for non-disturbing behaviors. \section{Discussion} \label{sec:discussion} Apart from its primary importance in the foundations of quantum physics, contextuality has been identified as a potential resource for quantum computing \cite{Raussendorf13, HWVE14, DGBR14}, random number certification \cite{UZZWYDDK13}, and several other tasks in the particular case of Bell scenarios \cite{BCPSW13}. From both these fundamental and applied perspectives, certifying contextuality experimentally is undoubtedly an important primitive. It is then crucial to develop a robust theoretical framework for contextuality that can be easily applied to real experiments. This should include the possibility of treating sets of random variables that do not satisfy the assumption of \emph{non-disturbance}, which is hardly ever satisfied exactly in experimental implementations~\cite{KDL15}. Here we have further developed the extended definition of noncontextuality of Ref. \cite{KDL15}, which can be applied in situations where the non-disturbance condition does not hold, rewriting it in graph-theoretical terms. We then explored the geometrical aspects of the graph approach to contextuality to derive necessary conditions for extended contextuality that can be tested directly with experimental data in any contextuality experiment and which reduce to traditional necessary conditions for noncontextuality if the non-disturbance condition is satisfied. It would be interesting to give a characterization of which of these inequalities are facet-defining. In Ref. \cite{AIT06}, several results regarding this issue were proved, but unfortunately our scenarios do not satisfy the hypotheses needed for the validity of such results. A more ambitious problem would be to identify which scenarios can be completely characterized with these procedures, the $n$-cycle scenarios being an important example. We leave these inquiries for future work, hoping that our results might motivate further research in these directions. \begin{acknowledgments} The authors thank Jan-\AA{}ke Larsson and Ad\'an Cabello for valuable discussions. This work was done during the Post-doctoral Summer Program of Instituto de Matem\'atica Pura e Aplicada (IMPA) 2017. BA and CD thank IMPA for its support and hospitality. BA acknowledges financial support from the Brazilian ministries MEC and MCTIC and CNPq. CD acknowledges financial support from CAPES and CNPq. RO acknowledges the financial support of Bolsa de Produtividade em Pesquisa from CNPq. \end{acknowledgments} \end{document}
\begin{document} \title[Wellposedness of the 2D water waves in a regime allowing for angled crests]{Wellposedness of the 2D full water wave equation in a regime that allows for non-$C^1$ interfaces} \author{Sijue Wu } \address{Department of Mathematics, University of Michigan, Ann Arbor, MI} \thanks{Financial support in part by NSF grants DMS-1101434, DMS-1361791 and a Simons fellowship.} \begin{abstract} We consider the two dimensional gravity water wave equation in a regime where the free interface is allowed to be non-$C^1$. In this regime, only a degenerate Taylor inequality $-\frac{\partial P}{\partial\bold{n}}\ge 0$ holds, with degeneracy at the singularities. In \cite{kw} an energy functional $\mathcal E(t)$ was constructed and an a-priori estimate was proved. The energy functional $\mathcal E(t)$ is not only finite for interfaces and velocities in Sobolev spaces, but also finite for a class of non-$C^1$ interfaces with angled crests. In this paper we prove the existence, uniqueness and stability of the solution of the 2d gravity water wave equation in the class where $\mathcal E(t)<\infty$, locally in time, for any given data satisfying $\mathcal E(0)<\infty$. \end{abstract} \maketitle \baselineskip15pt \section{Introduction} A class of water wave problems concerns the motion of the interface separating an inviscid, incompressible, irrotational fluid, under the influence of gravity, from a region of zero density (i.e. air) in $n$-dimensional space. It is assumed that the fluid region is below the air region. Assume that the density of the fluid is $1$, the gravitational field is $-{\bold k}$, where ${\bold k}$ is the unit vector pointing in the upward vertical direction, and at time $t\ge 0$, the free interface is $\Sigma(t)$, and the fluid occupies the region $\Omega(t)$. When surface tension is zero, the motion of the fluid is described by \begin{equation}\label{euler} \begin{cases} \ \bold v_t + (\bold v\cdot \nabla) \bold v = -\bold k-\nabla P \qquad \text{on } \Omega(t),\ t\ge 0, \\ \ \text{div}\,\bold v=0 , \qquad \text{curl}\,\bold v=0, \qquad \text{on } \Omega(t),\ t\ge 0, \\ \ P=0, \qquad\qquad\qquad\qquad\qquad\text{on } \Sigma(t) \\ \ (1, \bold v) \text{ is tangent to the free surface } (t, \Sigma(t)), \end{cases} \end{equation} where $ \bold v$ is the fluid velocity and $P$ is the fluid pressure. There is an important condition for these problems: \begin{equation}\label{taylor} -\frac{\partial P}{\partial\bold n}\ge 0 \end{equation} pointwise on the interface, where $\bold n$ is the outward unit normal to the fluid interface $\Sigma(t)$ \cite{ta}; it is well known that when surface tension is neglected and the Taylor sign condition \eqref{taylor} fails, the water wave motion can be subject to the Taylor instability \cite{ ta, bi, bhl, ebi}. The study of water waves dates back centuries. Early mathematical works include Newton \cite{newton}, Stokes \cite{st}, Levi-Civita \cite{le}, and G.I. Taylor \cite{ta}. Nalimov \cite{na}, Yosihara \cite{yo} and Craig \cite{cr} proved local in time existence and uniqueness of solutions for the 2d water wave equation \eqref{euler} for small and smooth initial data.
In \cite{wu1, wu2}, we showed that for dimensions $n\ge 2$, the strong Taylor sign condition \begin{equation}\label{taylor-s} -\frac{\partial P}{\partial\bold n}\ge c_0>0 \end{equation} always holds for the infinite depth water wave problem \eqref{euler}, as long as the interface is in $C^{1+\epsilon}$, $\epsilon>0$; and the initial value problem of equation \eqref{euler} is locally well-posed in Sobolev spaces $H^s$, $s\ge 4$ for arbitrary given data. Since then, local wellposedness for water waves with additional effects such as the surface tension, bottom and non-zero vorticity, under the assumption \eqref{taylor-s},\footnote{When there is surface tension, or bottom, or vorticity, \eqref{taylor-s} does not always hold; it needs to be assumed.} was obtained, cf. \cite{am, cl, cs, ig1, la, li, ot, sz, zz}. Alazard, Burq \& Zuily \cite{abz, abz14} proved local wellposedness of \eqref{euler} in low regularity Sobolev spaces where the interfaces are only in $C^{3/2}$. Hunter, Ifrim \& Tataru \cite{hit} obtained a low regularity result for the 2d water waves that improves on \cite{abz}. The author \cite{wu3, wu4}, Germain, Masmoudi \& Shatah \cite{gms}, Ionescu \& Pusateri \cite{ip} and Alazard \& Delort \cite{ad} obtained almost global and global existence for the two and three dimensional water wave equation \eqref{euler} for small, smooth and localized data; see \cite{hit, it, dipp, wang1, wang2, bmsw} for some additional developments. Furthermore in \cite{cf}, Castro, C\'ordoba, Fefferman, Gancedo and G\'omez-Serrano proved that for the 2d water wave equation \eqref{euler}, there exist initially non-self-intersecting interfaces that become self-intersecting at a later time; and as was shown in \cite{cs1}, the same result holds in 3d. All these works either prove or assume the strong Taylor sign condition \eqref{taylor-s}, and the lowest regularity considered is that of $C^{3/2}$ interfaces. A common phenomenon we observe in the ocean is waves with angled crests, with the interface possibly non-$C^1$. A natural question is: is the water wave equation \eqref{euler} well-posed in any class that includes non-$C^1$ interfaces? We focus on the two dimensional case in this paper. As was explained in \cite{kw}, the main difficulty in allowing for non-$C^1$ interfaces with angled crests is that in this case, both the quantity $-\frac{\partial P}{\partial \bf{n}}$ and the Dirichlet-to-Neumann operator $\nabla_{\bf n}$ degenerate, with degeneracy at the singularities on the interface;\footnote{We assume the acceleration is finite.} and only a weak Taylor inequality $-\frac{\partial P}{\partial \vec{n}}\ge 0$ holds. From earlier work \cite{wu1, wu2, am, la, abz, sz}, we know the problem of solving the water wave equation \eqref{euler} can be reduced to solving a quasilinear equation of the interface $z=z(\alpha,t)$, of type \begin{equation}\label{quasi1} \partial_t^2\frak u+ a \nabla_{\bf n} \frak u=f(\frak u, \partial_t \frak u) \end{equation} where $a=-\frac{\partial P}{\partial \bf{n}}$. When the strong Taylor sign condition \eqref{taylor-s} holds and $\nabla_{\bf n}$ is non-degenerative, equation \eqref{quasi1} is of the hyperbolic type with the right hand side consisting of lower order terms, and the Cauchy problem can be solved using classical tools. In the case where the solution dependent quantity $a=-\frac{\partial P}{\partial \bf{n}}$ and operator $\nabla_{\bf n}$ degenerate, equation \eqref{quasi1} loses its hyperbolicity and classical tools do not apply. New ideas are required to solve the problem.
In \cite{kw}, R. Kinsey and the author constructed an energy functional $\mathcal E(t)$ and proved an a-priori estimate, which states that for solutions of the water wave equation \eqref{euler}, if $\mathcal E(0)<\infty$, then $\mathcal E(t)$ remains finite for a time period that depends only on $\mathcal E(0)$. The energy functional $\mathcal E(t)$ is finite for interfaces and velocities in Sobolev classes, and most importantly, it is also finite for a class of non-$C^1$ interfaces with angled crests.\footnote{In particular, the class where $\mathcal E(t)<\infty$ allows for angled crest type interfaces with interior angles at the crest $<\frac\pi 2$, which coincides with the range of the angles of the self-similar solutions in \cite{wu5}. Stokes extreme waves is not in the class where $\mathcal E(t)<\infty$. } In this paper, we show that for any given data satisfying $\mathcal E(0)<\infty$, there is a $T>0$, depending only on $\mathcal E(0)$, such that the 2d water wave equation \eqref{euler} has a unique solution in the class where $\mathcal E(t)<\infty$ for time $0\le t\le T$, and the solution is stable. We will work on the free surface equations that were derived in \cite{wu1, wu2}. The novelty of this paper is that we study the degenerative case, and solve the equation in a broader class that includes non-$C^1$ interfaces. \subsection{Outline of the paper} In \S\ref{notation1} we introduce some basic notations and conventions; further notations will be introduced throughout the paper. In \S\ref{prelim} we recall the results in \cite{wu1, wu2}, and derive the free surface equation and its quasi-linearization, from system \eqref{euler}, in both the Lagrangian and Riemann mapping variables, for interfaces and velocities in Sobolev spaces. We derived the quasilinear equation in terms of the horizontal component in the Riemann mapping variable in \cite{wu1}, and in terms of full components in the Lagrangian coordinates in \cite{wu2}. Here we re-derive the equations for the sake of coherence. In \S\ref{general-soln} we will recover the water wave equation \eqref{euler} from the interface equation \eqref{interface-r}-\eqref{interface-holo}-\eqref{a1}-\eqref{b}, showing the equivalence of the two systems for smooth and non-self-intersecting interfaces. In \S\ref{a priori}, we present the energy functional $\mathcal E(t)$ constructed and the a-priori estimate proved in \cite{kw}. In \S\ref{prelim-result}, we give a blow-up criteria in terms of the energy functional $\mathcal E(t)$ and a stability inequality for solutions of the interface equation \eqref{interface-r}-\eqref{interface-holo}-\eqref{a1}-\eqref{b} with a bound depending only on $\mathcal E(t)$. In \S\ref{main} we present the main result, that is, the local in time wellposedness of the Cauchy problem for the water wave equation \eqref{euler} in the class where $\mathcal E(t)<\infty$. In \S\ref{proof}, we give the proof for the blow-up criteria, Theorem~\ref{blow-up} and in \S\ref{proof3}, the stability inequality, Theorem~\ref{unique}. For the sake of completeness, we will also provide a proof for the a-priori estimate of \cite{kw} in the current setting in \S\ref{proof}. In \S\ref{proof2}, we will prove the main result, Theorem~\ref{th:local}. Some basic preparatory results are given in Appendix~\ref{ineq}; various identities that are useful for the paper are derived in Appendix~\ref{iden}. And in Appendix~\ref{quantities}, we list the quantities that are controlled by $\mathcal E$. 
A majority of these are already shown in \cite{kw}. {\bf Remark}: The blow-up criteria and the proof for the existence part of Theorem~\ref{th:local} are from the unpublished manuscript of the author \cite{wu7}, with some small modifications. \subsection{Notation and convention}\label{notation1} We consider solutions of the water wave equation \eqref{euler} in the setting where the fluid domain $\Omega(t)$ is simply connected, with the free interface $\Sigma(t):=\partial\Omega(t)$ being a Jordan curve,\footnote{That is, $\Sigma(t)$ is homeomorphic to the line $\mathbb R$.} $${\bold v}(z, t)\to 0,\qquad\text{as } |z|\to\infty$$ and the interface $\Sigma(t)$ tending to horizontal lines at infinity.\footnote{The problem with velocity $\bold v(z,t)\to (c,0)$ as $|z|\to\infty$ can be reduced to the one with $\bold v\to 0$ at infinity by studying the solutions in a moving frame. $\Sigma(t)$ may tend to two different lines at $+\infty$ and $-\infty$.} We use the following notations and conventions: $[A, B]:=AB-BA$ is the commutator of operators $A$ and $B$. $H^s=H^s(\mathbb R)$ is the Sobolev space with norm $\|f\|_{H^s}:=(\int (1+|\xi|^2)^s|\hat f(\xi)|^2\,d\xi)^{1/2}$, $\dot H^{s}=\dot H^{s}(\mathbb R)$ is the Sobolev space with norm $\|f\|_{\dot H^{s}}:= c(\int |\xi|^{2s} |\hat f(\xi)|^2\,d\xi)^{1/2}$, $L^p=L^p(\mathbb R)$ is the $L^p$ space with $\|f\|_{L^p}:=(\int|f(x)|^p\,dx)^{1/p}$ for $1\le p<\infty$, and $f\in L^\infty$ if $\|f\|_{L^\infty}:=\text{ sup }|f(x)|<\infty$. When not specified, all the norms $\|f\|_{H^s}$, $\|f\|_{\dot H^{s}}$, $\|f\|_{L^p}$, $1\le p\le\infty$ are in terms of the spatial variable only, and $\|f\|_{H^s(\mathbb R)}$, $\|f\|_{\dot H^{s}(\mathbb R)}$, $\|f\|_{L^p(\mathbb R)}$, $1\le p\le\infty$ are in terms of the spatial variables. We say $f\in C^j([0, T], H^s)$ if the mapping $f=f(t):=f(\cdot, t): t\in [0, T]\to H^s$ is $j$-times continuously differentiable, with $\sup_{[0, T], \ 0\le k\le j}\|\partial_t^k f(t)\|_{H^s}<\infty$; we say $f\in L^\infty([0, T], H^s)$ if $\sup_{[0, T]}\|f(t)\|_{H^s}<\infty$. $C^j(X)$ is the space of $j$-times continuously differentiable functions on the set $X$; $C^j_0(\mathbb R)$ is the space of $j$-times continuously differentiable functions that decay at infinity. Compositions are always in terms of the spatial variables and we write for $f=f(\cdot, t)$, $g=g(\cdot, t)$, $f(g(\cdot,t),t):=f\circ g(\cdot, t):=U_gf(\cdot,t)$. We identify $(x,y)$ with the complex number $x+iy$; $\Re z$, $\Im z$ are the real and imaginary parts of $z$; $\bar z=\Re z-i\Im z$ is the complex conjugate of $z$. $\overline \Omega$ is the closure of the domain $\Omega$, $\partial\Omega$ is the boundary of $\Omega$, ${\mathscr P}_-:=\{z\in \mathbb C: \Im z<0\}$ is the lower half plane. We write \begin{equation}\label{eq:comm} [f,g; h]:=\frac1{\pi i}\int\frac{(f(x)-f(y))(g(x)-g(y))}{(x-y)^2}h(y)\,dy. \end{equation} We use $c$, $C$ to denote universal constants. $c(a_1, \dots )$, $C(a_1, \dots)$, $M(a_1, \dots)$ are constants depending on $a_1, \dots $; constants appearing in different contexts need not be the same. We write $f\lesssim g$ if there is a universal constant $c$, such that $f\le cg$. \section{Preliminaries}\label{prelim} Equation \eqref{euler} is a nonlinear equation defined on moving domains; it is difficult to study it directly. A classical approach is to reduce \eqref{euler} to an equation on the interface, and study solutions of the interface equation.
Then use the incompressibility and irrotationality of the velocity field to recover the velocity in the fluid domain by solving a boundary value problem for the Laplace equation. In what follows we derive the interface equations from \eqref{euler}, and vice versa; we assume that the interface, velocity and acceleration are in Sobolev spaces. \subsection{The equation for the free surface in Lagrangian variable}\label{surface-equation-l} Let the free interface $\Sigma(t): z=z(\alpha, t)$, $\alpha\in\mathbb R$ be given by Lagrangian parameter $\alpha$, so $z_t(\alpha, t)={\bold v}(z(\alpha,t);t)$ is the velocity of the fluid particles on the interface, $z_{tt}(\alpha,t)={\bold v_t + (\bold v\cdot \nabla) \bold v}(z(\alpha,t); t)$ is the acceleration. Notice that $P=0$ on $\Sigma(t)$ implies that $\nabla P$ is normal to $\Sigma(t)$, therefore $\nabla P=-i\frak a z_\alpha$, where \begin{equation}\label{frak-a} \frak a =-\frac1{|z_\alpha|}\frac{\partial P}{\partial {\bold n}}; \end{equation} and the first and third equation of \eqref{euler} gives \begin{equation}\label{interface-l} z_{tt}+i=i\frak a z_\alpha. \end{equation} The second equation of \eqref{euler}: $\text{div } \bold v=\text{curl } \bold v=0$ implies that $\bar {\bold v}$ is holomorphic in the fluid domain $\Omega(t)$, hence $\bar z_t$ is the boundary value of a holomorphic function in $\Omega(t)$. Let $\Omega\subset \mathbb C$ be a domain with boundary $\Sigma: z=z(\alpha)$, $\alpha\in I$, oriented clockwise. Let $\mathfrak H$ be the Hilbert transform associated to $\Omega$: \begin{equation}\label{hilbert-t} \frak H f(\alpha)=\frac1{\pi i}\, \text{pv.}\int\frac{z_\beta(\beta)}{z(\alpha)-z(\beta)}f(\beta)\,d\beta \end{equation} We have the following characterization of the trace of a holomorphic function on $\Omega$. \begin{proposition}\cite{jour}\label{prop:hilbe} a. Let $g \in L^p$ for some $1<p <\infty$. Then $g$ is the boundary value of a holomorphic function $G$ on $\Omega$ with $G(z)\to 0$ at infinity if and only if \begin{equation} \label{eq:1571} (I-\mathfrak H) g = 0. \end{equation} b. Let $ f \in L^p$ for some $1<p<\infty$. Then $ \frac12(I+\mathfrak H) f$ is the boundary value of a holomorphic function $\frak G$ on $\Omega$, with $\frak G(z)\to 0$ at infinity. c. $\mathfrak H1=0$. \end{proposition} By Proposition~\ref{prop:hilbe} the second equation of \eqref{euler} is equivalent to $\bar z_t=\mathfrak H {\bar z_t}$. So the motion of the fluid interface $\Sigma(t): z=z(\alpha,t)$ is given by \begin{equation}\label{interface-e} \begin{cases} z_{tt}+i=i\frak a z_\alpha\\ \bar z_t=\frak H \bar z_t. \end{cases} \end{equation} \eqref{interface-e} is a fully nonlinear equation. In \cite{wu1}, Riemann mapping was introduced to analyze equation \eqref{interface-e} and to derive the quasilinear equation. \subsection{The free surface equation in Riemann mapping variable}\label{surface-equation-r} Let $\Phi(\cdot, t): \Omega(t)\to {\mathscr P}_-$ be the Riemann mapping taking $\Omega(t)$ to the lower half plane ${\mathscr P}_-$, satisfying $\Phi(z(0,t),t)=0$ and $\lim_{z\to\infty}\Phi_z(z,t)=1$. Let \begin{equation}\label{h} h(\alpha,t):=\Phi(z(\alpha,t),t), \end{equation} so $h(0,t)=0$ and $h:\mathbb R\to\mathbb R$ is a homeomorphism. 
Let $h^{-1}$ be defined by $$h(h^{-1}(\alpha',t),t)=\alpha',\quad \alpha'\in \mathbb R;$$ and \begin{equation}\label{1001} Z(\alpha',t):=z\circ h^{-1}(\alpha',t),\quad Z_t(\alpha',t):=z_t\circ h^{-1}(\alpha',t),\quad Z_{tt}(\alpha',t):=z_{tt}\circ h^{-1}(\alpha',t) \end{equation} be the reparametrization of the position, velocity and acceleration of the interface in the Riemann mapping variable $\alpha'$. Let \begin{equation}\label{1002} Z_{,\alpha'}(\alpha', t):=\partial_{\alpha'}Z(\alpha', t),\quad Z_{t,\alpha'}(\alpha', t):=\partial_{\alpha'}Z_t(\alpha',t), \quad Z_{tt,\alpha'}(\alpha', t):=\partial_{\alpha'}Z_{tt}(\alpha',t), \text{ etc.} \end{equation} Notice that ${\bar {\bold v}}\circ \Phi^{-1}: {\mathscr P}_-\to \mathbb C$ is holomorphic in the lower half plane ${\mathscr P}_-$ with ${\bar {\bold v}}\circ \Phi^{-1}(\alpha', t)={\bar Z}_t(\alpha',t)$. Precomposing \eqref{interface-l} with $h^{-1}$ and applying Proposition~\ref{prop:hilbe} to $ {\bar {\bold v}}\circ \Phi^{-1}$ in ${\mathscr P}_-$, we have the free surface equation in the Riemann mapping variable: \begin{equation}\label{interface-r} \begin{cases} Z_{tt}+i=i\mathcal AZ_{,\alpha'}\\ \bar{Z}_t=\mathbb H \bar{Z}_t \end{cases} \end{equation} where $\mathcal A\circ h=\frak a h_\alpha$ and $\mathbb H$ is the Hilbert transform associated with the lower half plane ${\mathscr P}_-$: \begin{equation}\label{ht} \mathbb H f(\alpha')=\frac1{\pi i}\text{pv.}\int\frac1{\alpha'-\beta'}\,f(\beta')\,d\beta'. \end{equation} Observe that $\Phi^{-1}(\alpha', t)=Z(\alpha', t)$ and $(\Phi^{-1})_{z'}(\alpha',t)=Z_{,\alpha'}(\alpha',t)$. So $Z_{,\alpha'}$, $\dfrac1{Z_{,\alpha'}}$ are boundary values of the holomorphic functions $(\Phi^{-1})_{z'}$ and $\dfrac1{(\Phi^{-1})_{z'}}$, tending to 1 at the spatial infinity. By Proposition~\ref{prop:hilbe}, \footnote{We work in the regime where $\frac1{Z_{,\alpha'}}-1\in L^2(\mathbb R)$. } \begin{equation}\label{interface-holo} \frac1{Z_{,\alpha'}}-1=\mathbb H\paren{\frac1{Z_{,\alpha'}}-1}. \end{equation} By the chain rule, we know for any function $f$, $U_h^{-1}\partial_t U_h f =(\partial_t+ b \partial_{\alpha'})f$, where $$b:=h_t\circ h^{-1}.$$ So $Z_{tt}=(\partial_t+b\partial_{\alpha'})Z_t$, and $Z_{t}=(\partial_t+b\partial_{\alpha'})Z$. \subsubsection{Some additional notations} We will often use the fact that $\mathbb H$ is purely imaginary, and decompose a function into the sum of its holomorphic and antiholomorphic parts. We define the projections to the space of holomorphic functions in the lower, and respectively, upper half planes by \begin{equation}\label{proj} \mathbb P_H :=\frac12(I+\mathbb H),\qquad\text{and }\quad \mathbb P_A:=\frac12(I-\mathbb H). \end{equation} We also define \begin{equation}\label{da-daa} D_\alpha =\dfrac {1}{z_\alpha}\partial_\alpha ,\quad \text{ and } \quad D_{\alpha '} = \dfrac { 1}{Z_{,{\alpha '}}}\partial_{\alpha '} . \end{equation} We know by the chain rule that $\paren{D_\alpha f} \circ h^{-1}= D_{\alpha '} \paren{f \circ h^{-1}}$; and for any holomorphic function $G$ on $\Omega(t)$ with boundary value $g(\alpha,t):=G(z(\alpha,t),t)$, $D_\alpha g=G_z\circ z$, and $D_{\alpha '} (g\circ h^{-1})=G_z\circ Z$. Hence $D_\alpha$, $D_{\alpha '}$ preserves the holomorphicity of $g$, $g\circ h^{-1}$. \subsubsection{The formulas for $A_1$ and $b$.} Let $A_1:=\mathcal A |Z_{,\alpha'}|^2$. 
Notice that $\mathcal A\circ h=\frak a h_\alpha=-\frac{\partial P}{\partial\vec n}\frac{h_\alpha}{\abs{z_\alpha}}$, so $A_1$ is related to the important quantity $-\frac{\partial P}{\partial\vec n}$ by $$-\frac{\partial P}{\partial\vec n}\circ Z=\frac{A_1}{\abs{Z_{,{\alpha '}}}}.$$ Using the Riemann mapping, we analyzed the quantities $A_1$ and $b$ in \cite{wu1}. Here we re-derive the formulas for the sake of completeness; we will carefully note the a-priori assumptions made in the derivation. We mention that the same derivation can also be found in \cite{wu6}. We also mention that in \cite{hit}, using the formulation of Ovsjannikov \cite{ovs}, the authors also re-derived the formulas \eqref{b} and \eqref{a1}. Assume that \begin{equation}\label{assume1}\lim_{z'\in {\mathscr P}_-, z'\to\infty} \Phi_t\circ \Phi^{-1}(z',t)=0,\qquad \lim_{z'\in {\mathscr P}_-,z'\to\infty} \braces{i(\Phi^{-1})_{z'} (z',t)-(\Phi^{-1})_{z'} \overline{ \vec{v}}_t\circ \Phi^{-1}(z',t)}=i. \ \footnote{ It was shown in \cite{wu1} that the water wave equation \eqref{euler} is well-posed in this regime.}\end{equation} \begin{proposition}[Lemma 3.1 and (4.7) of \cite{wu1}, or Proposition 2.2 and (2.18) of \cite{wu6}]\label{prop:a1} We have \begin{equation}\label{b} b:=h_t\circ h^{-1}=\Re (I-\mathbb H)\paren{\frac{Z_t}{Z_{,\alpha'}}}; \end{equation} \begin{equation}\label{a1} A_1=1-\Im [Z_t,\mathbb H]{\bar Z}_{t,\alpha'}=1+\frac1{2\pi }\int \frac{|Z_t(\alpha', t)-Z_t(\beta', t)|^2}{(\alpha'-\beta')^2}\,d\beta'\ge 1; \end{equation} \begin{equation}\label{taylor-formula} -\frac{\partial P}{\partial\bold n}\Big |_{Z=Z(\cdot,t)}= \frac{A_1}{|Z_{,\alpha'}|}. \end{equation} In particular, if the interface $\Sigma(t)\in C^{1+\epsilon}$ for some $\epsilon>0$, then the strong Taylor sign condition \eqref{taylor-s} holds. \end{proposition} \begin{proof} Taking the complex conjugate of the first equation in \eqref{interface-r}, then multiplying by $Z_{,\alpha'}$ yields \begin{equation}\label{interface-a1} Z_{,\alpha'}({\bar Z}_{tt}-i)=-i\mathcal A|Z_{,\alpha'}|^2:=-i A_1. \end{equation} The left hand side of \eqref{interface-a1} is almost holomorphic since $Z_{,\alpha'}$ is the boundary value of the holomorphic function $(\Phi^{-1})_{z'}$ and $\bar z_{tt}$ is the time derivative of the holomorphic function $\bar z_t$. We explore the almost holomorphicity of $\bar z_{tt}$ by expanding. Let $F=\bar {\bold v}$; we know $F$ is holomorphic in $\Omega(t)$ and $\bar z_t=F(z(\alpha, t),t)$, so \begin{equation}\label{eq:1} \bar z_{tt}=F_t(z(\alpha, t),t)+F_z(z(\alpha, t),t) z_t(\alpha, t),\qquad \bar z_{t\alpha}=F_z(z(\alpha, t),t) z_\alpha(\alpha, t) \end{equation} therefore \begin{equation}\label{eq:2} \bar z_{tt}= F_t\circ z+ \frac{\bar z_{t\alpha}}{z_\alpha} z_t. \end{equation} Precomposing with $h^{-1}$, subtracting $i$, then multiplying by $Z_{,\alpha'}$, we have $$Z_{,\alpha'}({\bar Z}_{tt}-i)= Z_{,\alpha'} F_t\circ Z+ Z_t {\bar Z}_{t,\alpha'}-i Z_{,\alpha'}=-iA_1 $$ Apply $(I-\mathbb H)$ to both sides of the equation. Notice that $F_t\circ Z$ is the boundary value of the holomorphic function $F_t\circ \Phi^{-1}$.
By assumption \eqref{assume1} and Proposition~\ref{prop:hilbe}, $(I-\mathbb H)(Z_{,\alpha'} F_t\circ Z-iZ_{,\alpha'})=-i$; therefore $$-i(I-\mathbb H) A_1= (I-\mathbb H)(Z_t {\bar Z}_{t,\alpha'})-i$$ Taking imaginary parts on both sides and using the fact $(I-\mathbb H){\bar Z}_{t,\alpha'}=0$ \footnote{Because $(I-\mathbb H){\bar Z}_{t}=0$.} to rewrite $(I-\mathbb H)(Z_t {\bar Z}_{t,\alpha'})$ as $[Z_t,\mathbb H]{\bar Z}_{t,\alpha'}$ yields \begin{equation}\label{A_1} A_1=1-\Im [Z_t,\mathbb H]{\bar Z}_{t,\alpha'}. \end{equation} The identity \begin{equation} -\Im[Z_t,\mathbb H]{\bar Z}_{t,\alpha'} =\frac1{2\pi }\int \frac{|Z_t(\alpha', t)-Z_t(\beta', t)|^2}{(\alpha'-\beta')^2}\,d\beta' \end{equation} is obtained by integration by parts. The quantity $b:=h_t\circ h^{-1}$ can be calculated similarly. Recall $h(\alpha,t)=\Phi(z(\alpha,t),t)$, so $$h_t=\Phi_t\circ z+(\Phi_z\circ z) z_t,\qquad h_\alpha=(\Phi_z\circ z) z_\alpha$$ hence $h_t= \Phi_t\circ z+ \frac{h_\alpha}{z_\alpha} z_t$. Precomposing with $h^{-1}$ yields \begin{equation}\label{b1} h_t\circ h^{-1}=\Phi_t\circ Z+ \frac{Z_t}{Z_{,\alpha'}}. \end{equation} Now $\Phi_t\circ Z$ is the boundary value of the holomorphic function $\Phi_t\circ \Phi^{-1}$. By assumption \eqref{assume1} and Proposition~\ref{prop:hilbe}, $(I-\mathbb H)\Phi_t\circ Z=0$. Applying $(I-\mathbb H)$ to both sides of \eqref{b1} and then taking the real parts, we get $$b=h_t\circ h^{-1}= \Re (I-\mathbb H)\paren{\frac{Z_t}{Z_{,\alpha'}}}.$$ A classical result in complex analysis states that if the interface is in $C^{1+\epsilon}$, $\epsilon>0$, tending to lines at infinity, then $c_0\le \abs{Z_{,{\alpha '}}}\le C_0$, for some constants $c_0, C_0>0$. So in this case, the strong Taylor sign condition \eqref{taylor-s} holds. \end{proof} \subsection{The quasilinear equation}\label{surface-quasi-r} In \cite{wu1, wu2} we showed that the quasi-linearization of the free surface equation \eqref{interface-e} can be accomplished by taking just one time derivative of equation \eqref{interface-l}. Taking the derivative with respect to $t$ of \eqref{interface-l} we get \begin{equation}\label{quasi-l} {\bar z}_{ttt}+i\frak a {\bar z}_{t\alpha}=-i\frak a_t {\bar z}_{\alpha}=\frac{\frak a_t}{\frak a} ({\bar z}_{tt}-i). \end{equation} Precomposing with $h^{-1}$ on both sides of \eqref{quasi-l}, we have the equation in the Riemann mapping variable \begin{equation}\label{quasi-r1} {\bar Z}_{ttt}+i\mathcal A {\bar Z}_{t,\alpha'}=\frac{\frak a_t}{\frak a}\circ h^{-1} ({\bar Z}_{tt}-i) \end{equation} We compute $\dfrac{\frak a_t}{\frak a}$ by the identities $\frak a h_\alpha=\mathcal A\circ h$, and $\mathcal A\circ h=\dfrac{A_1}{\abs{Z_{,{\alpha '}}}^2}\circ h=A_1\circ h \dfrac{ h_\alpha^2}{\abs{z_{\alpha}}^2}$, so \begin{equation}\label{eq:frak a} \frak a =A_1\circ h \dfrac{ h_\alpha}{\abs{z_{\alpha}}^2}; \end{equation} and we obtain, by taking the derivative with respect to $t$ of \eqref{eq:frak a}, $$ \dfrac{\frak a_t}{\frak a}= \frac{\partial_t \paren{ A_1\circ h}}{A_1\circ h}+\frac{h_{t\alpha}}{h_\alpha}-2\Re \frac{z_{t\alpha}}{z_\alpha}. $$ Notice that $\frac{h_{t\alpha}}{h_\alpha}\circ h^{-1}=(h_t\circ h^{-1})_{{\alpha '}}:=b_{\alpha '}$.
So \begin{equation}\label{at} \dfrac{\frak a_t}{\frak a}\circ h^{-1}= \frac{(\partial_t +b\partial_{\alpha '}) A_1}{A_1}+b_{\alpha '} -2\Re D_{\alpha '} Z_t; \end{equation} where we calculate from \eqref{b} that \begin{equation}\label{ba} \begin{aligned} b_{\alpha '}&= \Re \paren{ (I-\mathbb H) \frac{Z_{t,\alpha'}}{Z_{,{\alpha '}}}+ (I-\mathbb H) \paren{Z_t\partial_{\alpha '}\frac 1{Z_{,{\alpha '}}}}}\\& =2\Re D_{\alpha '} Z_t+ \Re \paren{ (-I-\mathbb H) \frac{Z_{t,\alpha'}}{Z_{,{\alpha '}}}+ (I-\mathbb H) \paren{Z_t\partial_{\alpha '} \frac 1{Z_{,{\alpha '}}}}}\\& =2\Re D_{\alpha '} Z_t+\Re \paren{\bracket{ \frac1{Z_{,{\alpha '}}}, \mathbb H} Z_{t,\alpha'}+ \bracket{Z_t, \mathbb H}\partial_{\alpha '} \frac1{Z_{,{\alpha '}}} }, \end{aligned} \end{equation} here in the last step we used the fact that $(I+\mathbb H)Z_{t,{\alpha '}}=0$ and $(I-\mathbb H)\partial_{\alpha '} \frac 1{Z_{,{\alpha '}}}=0$ \footnote{These follows from $(I-\mathbb H)\bar Z_t=0$ and \eqref{interface-holo}.} to rewrite the terms as commutators; and we compute, by \eqref{A_1} and \eqref{eq:c14}, \begin{equation}\label{dta1} (\partial_t +b\partial_{\alpha '}) A_1= -\Im \paren{\bracket{Z_{tt},\mathbb H}\bar Z_{t,\alpha'}+\bracket{Z_t,\mathbb H}\partial_{\alpha '} \bar Z_{tt}-[Z_t, b; \bar Z_{t,{\alpha '}}]}. \end{equation} We now sum up the above calculations and write the quasilinear system in the Riemann mapping variable. We have \begin{equation}\label{quasi-r} \begin{cases} (\partial_t+b\partial_{\alpha '})^2{\bar Z}_{t}+i\dfrac{A_1}{\abs{Z_{,{\alpha '}}}^2}\partial_{\alpha '} {\bar Z}_{t}=\dfrac{\frak a_t}{\frak a}\circ h^{-1} ({\bar Z}_{tt}-i) \\ \bar Z_t=\mathbb H \bar Z_t \end{cases} \end{equation} where \begin{equation}\label{aux} \begin{cases} b:=h_t\circ h^{-1}=\Re (I-\mathbb H)\paren{\dfrac{Z_t}{Z_{,\alpha'}}}\\ A_1=1-\Im [Z_t,\mathbb H]{\bar Z}_{t,\alpha'}=1+\dfrac1{2\pi }\int \dfrac{|Z_t(\alpha', t)-Z_t(\beta', t)|^2}{(\alpha'-\beta')^2}\,d\beta'\\ \dfrac1{Z_{,{\alpha '}}}= i\dfrac{\bar Z_{tt}-i}{A_1}\\ \dfrac{\frak a_t}{\frak a}\circ h^{-1}= \dfrac{(\partial_t +b\partial_{\alpha '}) A_1}{A_1}+b_{\alpha '} -2\Re D_{\alpha '} Z_t \end{cases} \end{equation} Here the third equation in \eqref{aux} is obtained by rearranging the terms of the equation \eqref{interface-a1}. Using it to replace $\frac1{Z_{,{\alpha '}}}$ by $i\frac{\bar Z_{tt}-i}{A_1}$, we get a system for the complex conjugate velocity and acceleration $(\bar Z_t, \bar Z_{tt})$. The initial data for the system \eqref{quasi-r}-\eqref{aux} is set up as follows. \subsubsection{The initial data}\label{id-r} Without loss of generality, we choose the parametrization of the initial interface $\Sigma(0): Z(\cdot,0):=Z(0)$ by the Riemann mapping variable, so $h(\alpha,0)=\alpha$ for $\alpha\in\mathbb R$; we take the initial velocity $Z_t(0)$, such that it satisfies $\bar Z_t(0)=\mathbb H \bar Z_t(0)$. And we take the initial acceleration $Z_{tt}(0)$ so that it solves the equation \eqref{interface-a1} or the third equation in \eqref{aux}. \subsection{Local wellposedness in Sobolev spaces} By \eqref{taylor-formula} and \eqref{a1}, if $Z_{,{\alpha '}}\in L^\infty$, then the strong Taylor stability criterion \eqref{taylor-s} holds. In this case, the system \eqref{quasi-r}-\eqref{aux} is quasilinear of the hyperbolic type, with the left hand side of the first equation in \eqref{quasi-r} consisting of the higher order terms.\footnote{ $i\partial_{\alpha '} =|\partial_{\alpha '}|$ when acting on holomorphic functions. 
The Dirichlet-to-Neumann operator $\nabla_{\bf n}= \frac1{|Z_{,{\alpha '}}|}|\partial_{\alpha '}|$.} In \cite{wu1} we showed that the Cauchy problem of \eqref{quasi-r}-\eqref{aux}, and equivalently of \eqref{interface-r}-\eqref{interface-holo}-\eqref{b}-\eqref{a1}, is uniquely solvable in Sobolev spaces $H^s$, $s\ge 4$. Let the initial data be given as in \S\ref{id-r}. \begin{theorem}[Local wellposedness in Sobolev spaces, cf. Theorem 5.11, \S6 of \cite{wu1}]\label{prop:local-s} Let $s\ge 4$. Assume that $Z_t(0)\in H^{s+1/2}(\mathbb R)$, $Z_{tt}(0)\in H^s(\mathbb R)$ and $Z_{,{\alpha '}}(0)\in L^\infty(\mathbb R)$. Then there is $T>0$, such that on $[0, T]$, the initial value problem of \eqref{quasi-r}-\eqref{aux}, or equivalently of \eqref{interface-r}-\eqref{interface-holo}-\eqref{b}-\eqref{a1}, has a unique solution $Z=Z(\cdot, t)$, satisfying $(Z_{t}, Z_{tt})\in C^ l([0, T], H^{s+1/2-l}(\mathbb R)\times H^{s-l}(\mathbb R))$, and $Z_{,\alpha'}-1\in C^l([0, T], H^{s-l}(\mathbb R))$, for $l=0,1$. Moreover if $T^*$ is the supremum over all such times $T$, then either $T^*=\infty$, or $T^*<\infty$, but \begin{equation}\label{eq:1-1} \sup_{[0, T^*)}(\|Z_{,{\alpha '}}(t)\|_{L^\infty}+\|Z_{tt}(t)\|_{H^3}+\| Z_t(t)\|_{H^{3+1/2}})=\infty. \end{equation} \end{theorem} \begin{remark} 1. Let $h=h(\alpha, t)$ be the solution of the ODE \begin{equation}\label{b1-1} \begin{cases} h_t=b(h,t),\\ h(\alpha, 0)=\alpha \end{cases} \end{equation} where $b$ is as given by \eqref{b}. Then $z=Z\circ h$ satisfies equation \eqref{interface-l}, cf. \S6 of \cite{wu1}. 2. \eqref{quasi-r}-\eqref{aux} is a system for the complex conjugate velocity and acceleration $(\bar Z_t, \bar Z_{tt})$, the interface doesn't appear explicitly, so a solution can exist even if $Z=Z(\cdot, t)$ becomes self-intersecting. Similarly, equation \eqref{interface-r}-\eqref{interface-holo}-\eqref{b}-\eqref{a1} makes sense even if $Z=Z(\cdot, t)$ self-intersects. To obtain the solution of the water wave equation \eqref{euler} from the solution of the quasilinear equation \eqref{quasi-r}-\eqref{aux} as given in Theorem~\ref{prop:local-s} above, in \S 6 of \cite{wu1}, an additional chord-arc condition is assumed for the initial interface, and it was shown that the solution $Z=Z(\cdot, t)$ remains non-self-intersecting for a time period depending only on the initial chord-arc constant and the initial Sobolev norms. \footnote{\eqref{interface-r}-\eqref{interface-holo}-\eqref{b}-\eqref{a1} is equivalent to the water wave equation \eqref{euler} only when the interface is non-self-intersecting, see \S\ref{general-soln}.} 3. Observe that we arrived at \eqref{quasi-r}-\eqref{aux} from \eqref{euler} using only the following properties of the domain: 1. there is a conformal mapping taking the fluid region $\Omega(t)$ to ${\mathscr P}_-$; 2. $P=0$ on $\Sigma(t)$. We note that $z\to z^{1/2}$ is a conformal map that takes the region $\mathbb C\setminus \{z=x+i0, x>0\}$ to the upper half plane; so a domain with its boundary self-intersecting at the positive real axis can be mapped conformally onto the lower half plane ${\mathscr P}_-$. Taking such a domain as the initial fluid domain, assuming $P=0$ on $\Sigma(t)$ even when $\Sigma(t)$ self-intersects,\footnote{We note that when $\Sigma(t)$ self-intersects, the condition $P=0$ on $\Sigma(t)$ is unphysical.} one can still solve equation \eqref{quasi-r}-\eqref{aux} for a short time, by Theorem~\ref{prop:local-s}. Indeed this is one of the main ideas in the work of \cite{cf}. 
Using this idea and the time reversibility of the water wave equation, by choosing an appropriate initial velocity field that pulls the initial domain apart, Castro, Cordoba et.\ al. \cite{cf} proved the existence of "splash" and "splat" singularities starting from a smooth non-self-intersecting fluid interface. \end{remark} \subsection{Recovering the water wave equation \eqref{euler} from the interface equations}\label{general-soln} In this section, we derive the equivalent system in the lower half plane ${\mathscr P}_-$ for the interface equations \eqref{interface-r}-\eqref{interface-holo}-\eqref{a1}-\eqref{b}, and show how to recover from here the water wave equation \eqref{euler}. Although the derivation is quite straight-forward, to the best knowledge of the author, it has not been done before. Let $Z=Z(\cdot, t)$ be a solution of \eqref{interface-r}-\eqref{interface-holo}-\eqref{a1}-\eqref{b}, satisfying the regularity properties of Theorem~\ref{prop:local-s}; let $U(\cdot, t): \mathscr P_-\to \mathbb C$, $\Psi(\cdot, t): \mathscr P_-\to \mathbb C$ be the holomorphic functions, continuous on $\bar {\mathscr P}_-$, such that \begin{equation}\label{eq:270} U(\alpha',t)=\bar Z_t(\alpha', t),\qquad \Psi(\alpha',t)=Z(\alpha',t),\qquad \Psi_{z'}(\alpha',t)=Z_{,\alpha'}(\alpha', t), \end{equation} and $\lim_{z'\to\infty} U(z',t)=0$, $\lim_{z'\to\infty}\Psi_{z'}(z',t)=1$.\footnote{We know $U(z', t)=K_{y'}\ast \bar Z_t$, $\Psi_{z'}=K_{y'}\ast Z_{,{\alpha '}}$ and by the Maximum principle, $\frac1{\Psi_{z'}}=K_{y'}\ast \frac1{Z_{,{\alpha '}}}$, here $K_{y'}$ is the Poisson kernel defined by \eqref{poisson}. By \eqref{interface-a1}, $\frac1{Z_{,{\alpha '}}}-1\in C([0, T], H^s(\mathbb R))$ for $s\ge 4$, so $\Psi_{z'}\ne 0$ on $\bar {\mathscr P}_-$. } From $Z_t=(\partial_t+b\partial_{\alpha '})Z=\Psi_t ({\alpha '}, t)+b \Psi_{z'}({\alpha '},t)$, we have \begin{equation}\label{eq:271} b=\frac{Z_t}{ \Psi_{z'} }-\frac{\Psi_t}{\Psi_{z'}}=\frac{\bar U}{\Psi_{z'}}-\frac{\Psi_t}{\Psi_{z'}},\qquad \text{on }\partial {\mathscr P}_- , \end{equation} and substituting in we get $$\bar Z_{tt}=(\partial_t+b\partial_{\alpha '}) \bar Z_t=U_t+\paren{\frac{\bar U}{\Psi_{z'}}-\frac{\Psi_t}{\Psi_{z'}} } U_{z'}\qquad \text{on }\partial {\mathscr P}_- ;$$ so $\bar Z_{tt}$ is the trace of the function $U_t-\frac{\Psi_t}{\Psi_{z'}}U_{z'}+\frac{\bar U}{\Psi_{z'}}U_{z'} $ on $\partial {\mathscr P}_-$; and $Z_{,\alpha'}(\bar Z_{tt}-i)$ is then the trace of the function $\Psi_{z'} U_t- {\Psi_t}U_{z'}+{\bar U}U_{z'} -i\Psi_{z'}$ on $\partial {\mathscr P}_-$. This gives, from \eqref{interface-a1} that \begin{equation}\label{eq:272} \Psi_{z'} U_t- {\Psi_t}U_{z'}+{\bar U}U_{z'}-i\Psi_{z'}=-iA_1,\qquad \text{on }\partial {\mathscr P}_- . \end{equation} Observe that on the left hand side of \eqref{eq:272}, $\Psi_{z'} U_t- {\Psi_t}U_{z'}-i\Psi_{z'}$ is holomorphic on ${\mathscr P}_-$, while ${\bar U}U_{z'}=\partial_{z'}(\bar U U)$. So there is a real valued function $\frak P: {\mathscr P}_-\to \mathbb R$, such that \begin{equation}\label{eq:273} \Psi_{z'} U_t- {\Psi_t}U_{z'}+{\bar U}U_{z'}-i\Psi_{z'}=-2\partial_{z'}\frak P=-(\partial_{x'}-i\partial_{y'})\frak P,\qquad \text{on }{\mathscr P}_-; \end{equation} and by \eqref{eq:272}, because $iA_1$ is purely imaginary, \begin{equation}\label{eq:274} \frak P=c,\qquad \text{on }\partial {\mathscr P}_-. \end{equation} where $c\in \mathbb R$ is a constant. 
Applying $\partial_{x'}+i\partial_{y'}:=2\overline{ \partial}_{z'}$ to \eqref{eq:273} yields \begin{equation}\label{eq:275} \Delta \frak P= -2|U_{z'}|^2\qquad\text{on }{\mathscr P}_-. \end{equation} It is easy to check that for $y'\le 0$ and $t\in [0, T]$, $\paren{U, U_t, U_{z'}, \Psi_{z'}-1, \frac1{\Psi_{z'}}-1, \Psi_t}(\cdot+iy', t)\in L^2(\mathbb R)\cap L^\infty(\mathbb R)$, and $(U, \Psi, \frac1{\Psi_{z'}}, \frak P)\in C^1( \overline{\mathscr P}_-\times [0, T])$. It is clear that the above process is reversible. From a solution $(U, \Psi, \frak P)\in C^1( \overline{\mathscr P}_-\times [0, T])$ of the system \eqref{eq:273}-\eqref{eq:275}-\eqref{eq:274}-\eqref{eq:271}, with $\paren{U, U_t, U_{z'}, \Psi_{z'}-1, \frac1{\Psi_{z'}}-1, \Psi_t}(\cdot+iy', t)\in L^2(\mathbb R)\cap L^\infty(\mathbb R)$ for $y'\le 0,\ t\in [0, T]$, $U(\cdot, t)$, $\Psi(\cdot, t)$ holomorphic in ${\mathscr P}_-$, and $b$ real valued, the boundary value $(Z({\alpha '},t), Z_t({\alpha '}, t)):=(\Psi({\alpha '},t), \bar U({\alpha '}, t))$ satisfies the interface equation \eqref{interface-r}-\eqref{interface-holo}-\eqref{b}-\eqref{a1}. Therefore the systems \eqref{interface-r}-\eqref{interface-holo}-\eqref{b}-\eqref{a1} and \eqref{eq:273}-\eqref{eq:275}-\eqref{eq:274}-\eqref{eq:271}, with $U(\cdot, t)$, $\Psi(\cdot, t)$ holomorphic in ${\mathscr P}_-$, and $\Psi_{z'}(\cdot, t)\ne 0$, $b$ real valued, are equivalent in the smooth regime. Assume $(U, \Psi)\in C( \overline{\mathscr P}_-\times [0, T])\cap C^1( {\mathscr P}_-\times (0, T))$ is a solution of the system \eqref{eq:273}-\eqref{eq:275}-\eqref{eq:274}, with $U(\cdot, t)$, $\Psi(\cdot, t)$ holomorphic in ${\mathscr P}_-$, assume in addition that $\Sigma(t)=\{Z=Z(\alpha',t):=\Psi(\alpha',t)\ | \ \alpha'\in\mathbb R\}$ is a Jordan curve with $$\lim_{|\alpha'|\to\infty} Z_{,\alpha'}(\alpha',t)=1.$$ Let $\Omega(t)$ be the domain bounded by $Z=Z(\cdot,t)$ from the above, then $Z=Z(\alpha',t) $, $\alpha'\in\mathbb R$ winds the boundary of $\Omega(t)$ exactly once. By the argument principle, $\Psi: \bar {\mathscr P}_-\to \bar \Omega(t)$ is one-to-one and onto, $\Psi^{-1}:\Omega(t)\to {\mathscr P}_-$ exists and is a holomorphic function; and by equation \eqref{eq:273} and the chain rule, \begin{equation}\label{eq:276} (U\circ \Psi^{-1})_t+\bar U\circ \Psi^{-1}(U\circ \Psi^{-1})_{z}+(\partial_x-i\partial_y)(\frak P\circ \Psi^{-1})=i,\qquad \text{on }\Omega(t). \end{equation} Let $\bar {\bold v}=U\circ \Psi^{-1}$, $P=\frak P\circ \Psi^{-1}$. Observe that ${\bf v}\bar{\bf v}_{z}= ({\bold v}\cdot \nabla) \bar {\bold v}$. So $({\bf v}, P)$ satisfies the water wave equation \eqref{euler} in the domain $\Omega(t)$. \subsection{Non-$C^1$ interfaces} Assume that the interface $Z=Z(\cdot, t)$ has an angled crest at ${\alpha '}_0$ with interior angle $\nu$, we know from the discussion in \S3.3.2 of \cite{kw} that if the acceleration is finite, then it is necessary that $\nu\le \pi$; and if $\nu<\pi$ then $\frac 1{Z_{,{\alpha '}}}({\alpha '}_0, t)=0$. We henceforth call those points at which $\frac 1{Z_{,{\alpha '}}}=0$ the singularities. If the interface is allowed to be non-$C^1$ with interior angles at the crests $<\pi$, then the coefficient $\frac{A_1}{\abs{Z_{,{\alpha '}}}^2}$ of the second term on the left hand side of the first equation in \eqref{quasi-r} can be degenerative, and in this case it is not clear if equation \eqref{quasi-r} is still hyperbolic. In order to handle this situation, we need to understand how the singularities propagate. 
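To illustrate the degeneracy, here is a heuristic local computation (ours, and only a sketch under the simplifying assumption that the crest is an exact corner; the precise discussion is in \S3.3.2 of \cite{kw}). By the standard behavior of conformal maps at a boundary corner, if the interface has a corner of interior angle $\nu$ at $Z({\alpha '}_0,t)$, then near ${\alpha '}_0$, to leading order,
\begin{equation*}
\Psi(z',t)-Z({\alpha '}_0,t)\approx c\, (z'-{\alpha '}_0)^{\nu/\pi},\qquad \text{so}\qquad \abs{Z_{,{\alpha '}}({\alpha '},t)}\approx \abs{c}\,\tfrac{\nu}{\pi}\,\abs{{\alpha '}-{\alpha '}_0}^{\nu/\pi-1},
\end{equation*}
where $\Psi=\Phi^{-1}$ is as in \S\ref{general-soln}. When $\nu<\pi$ the exponent $\nu/\pi-1$ is negative, so $\abs{Z_{,{\alpha '}}}\to\infty$ and $\frac1{Z_{,{\alpha '}}}\sim \abs{{\alpha '}-{\alpha '}_0}^{1-\nu/\pi}\to 0$ as ${\alpha '}\to{\alpha '}_0$, consistent with the statement above; accordingly, the factor $\frac1{\abs{Z_{,{\alpha '}}}^2}$ in the coefficient of \eqref{quasi-r} vanishes like $\abs{{\alpha '}-{\alpha '}_0}^{2(1-\nu/\pi)}$ near the crest.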
In what follows we derive the evolution equation for $\frac 1{Z_{,{\alpha '}}}$. We will also give the evolution equations for the basic quantities $\bar Z_t$ and $\bar Z_{tt}$.\footnote{In Lagrangian coordinates, the first equation in \eqref{quasi-r} is of the form $(\partial_t^2+a\nabla_{\bf n} )\bar z_t=f$, where $a=-\frac{\partial P}{\partial {\bf n}}=\frac{A_1}{|Z_{,{\alpha '}}|}\circ h$, and the Dirichlet-to-Neumann operator $\nabla_{\bf n}\circ h^{-1}=\frac{i}{|Z_{,{\alpha '}}|}\partial_{\alpha '}$. So at the singularities both $a$ and $\nabla_{\bf n}$ are degenerate.} \subsection{Some basic evolution equations} We begin with $$\frac1{Z_{,{\alpha '}}}\circ h=\frac{h_\alpha}{z_\alpha},$$ taking the derivative with respect to $t$ yields $$\partial_t \paren{\frac1{Z_{,{\alpha '}}}\circ h}=\frac1{Z_{,{\alpha '}}}\circ h \paren{\frac{h_{t\alpha}}{h_\alpha}-\frac{z_{t\alpha}}{z_\alpha}};$$ precomposing with $h^{-1}$ gives \begin{equation}\label{eq:dza} (\partial_t+b\partial_{\alpha'})\paren{\frac1{Z_{,{\alpha '}}}}=\frac1{Z_{,{\alpha '}}} \paren{b_{\alpha '}-D_{\alpha '} Z_t}. \end{equation} The evolution equations for $\bar Z_t$ and $\bar Z_{tt}$ can be obtained from \eqref{interface-a1} and \eqref{quasi-l}. We have, by \eqref{interface-a1}, \begin{equation}\label{eq:dzt} (\partial_t+b\partial_{\alpha'})\bar Z_t:= \bar Z_{tt} =-i \frac {A_1}{Z_{,{\alpha '}}}+i. \end{equation} Using \eqref{interface-l} to replace $i\frak a$ by $-\frac{\bar z_{tt}-i} {\bar z_\alpha}$ in equation \eqref{quasi-l} yields $$\bar z_{ttt}=(\bar z_{tt}-i) \paren{ \bar { D_\alpha z_t}+ \frac{\frak a_t}{\frak a}};$$ precomposing with $h^{-1}$ gives \begin{equation}\label{eq:dztt} (\partial_t+b\partial_{\alpha'})\bar Z_{tt}= (\bar Z_{tt}-i) \paren{ \bar { D_{\alpha '} Z_t}+ \frac{\frak a_t}{\frak a}\circ h^{-1}}. \end{equation} Equations \eqref{eq:dza}, \eqref{eq:dzt} and \eqref{eq:dztt} describe the time evolution of the basic quantities $\frac1{Z_{,{\alpha '}}}$, $\bar Z_t$ and $\bar Z_{tt}$. In fact, equations \eqref{eq:dza}-\eqref{eq:dzt} together with \eqref{b}, \eqref{a1} and \eqref{ba} give a complete evolutionary system for the holomorphic quantities $\frac1{Z_{,{\alpha '}}}$ and $\bar Z_t$, which characterize the fluid domain $\Omega(t)$ and the complex conjugate velocity $\bar {\vec{v}}$. We will explore this evolution system in our future work. These equations give a first indication that it is natural to study the water wave problem in a setting where bounds are only imposed on $\frac1{Z_{,{\alpha '}}}$, $\bar Z_t$ and their derivatives. \subsection{An important equation} Here we record an important equation, which is obtained by rearranging the terms of \eqref{interface-a1}. \begin{equation}\label{aa1} \dfrac1{Z_{,{\alpha '}}}= i\dfrac{\bar Z_{tt}-i}{A_1}. \end{equation} \section{Well-posedness in a broader class that includes non-$C^1$ interfaces.}\label{main-results} We are now ready to study the Cauchy problem for the water wave equation \eqref{euler} in a regime that allows for non-$C^1$ interfaces. We begin with an a-priori estimate.
\subsection{A-priori estimate for water waves with angled crests}\label{a priori} Motivated by the question of the interaction of the free interface with a fixed vertical boundary, in \cite{kw}, Kinsey and the author studied the water wave equation \eqref{euler} in a regime that includes non-$C^1$ interfaces with angled crests in a periodic setting, constructed an energy functional and proved an a-priori estimate which does not require a positive lower bound for $\frac1{\abs{Z_{,\alpha'}}}$. A similar result holds for the whole line case. While a similar proof as that in \cite{kw} applies to the whole line, for the sake of completeness, we will provide a slightly different argument in \S\ref{proof0}. In the first proof in \cite{kw}, we expanded and then re-organized the terms to ensure that there is no further cancelations and the estimates can be closed. Here instead we will rely on the estimates for the quantities $b_{\alpha '}$, $A_1$ and their derivatives.\footnote{These estimates become available in the work \cite{wu7}. The same results in the current paper hold in the periodic setting. } Let \begin{equation} \label{eq:ea} {\bf E}_a(t)=\int\frac1{A_1}|Z_{,{\alpha '}}(\partial_t+b\partial_{\alpha '}) D_{\alpha '}\bar Z_t |^2\,d{\alpha '}+ \nm{ D_{\alpha '}\bar Z_t(t)}_{\dot H^{1/2}}^2, \end{equation} and \begin{equation} \label{eq:eb} {\bf E}_b(t) = \int\frac1{A_1}\abs{Z_{,{\alpha '}}(\partial_t+b\partial_{\alpha '})\paren{ \frac1{Z_{,{\alpha '}}}D_{\alpha '}^2\bar Z_t }}^2\,d{\alpha '}+ \nm{\frac1{Z_{,{\alpha '}}}D_{\alpha '}^2\bar Z_t (t)}_{\dot{H}^{1/2}}^2. \end{equation} Let \begin{equation}\label{energy} \frak E(t)= {\bf E}_a(t)+{\bf E}_b(t)+ \|\bar{Z}_{t,\alpha'}(t)\|_{L^2}^2+\| D_{\alpha'}^2 \bar{Z}_t(t)\|_{L^2}^2+\nm{\partial_{\alpha '}\frac1{Z_{,{\alpha '}}}(t)}^2_{L^2} + \abs{\frac1{Z_{,{\alpha '}}}(0, t)}^2. \end{equation} \begin{theorem}[cf. Theorem 2 of \cite{kw} for the periodic version]\label{prop:a priori} Let $Z=Z(\cdot,t)$, $t\in [0, T]$ be a solution of the system \eqref{interface-r}-\eqref{interface-holo}-\eqref{b}-\eqref{a1}, satisfying $(Z_{t}, Z_{tt})\in C^ l([0, T], H^{s+1/2-l}(\mathbb R)\times H^{s-l}(\mathbb R))$, $l=0,1$ for some $s\ge 4$. There is a polynomial $C$ with universal nonnegative coefficients, such that \begin{equation}\label{a priori-eq} \frac{d}{dt}\frak E(t)\le C(\frak E(t)),\qquad \text{for } t\in [0, T]. \end{equation} \end{theorem} For the sake of completeness we will give a proof of Theorem~\ref{prop:a priori} in \S\ref{proof}. \begin{remark}\label{remark3.2} It appears that there is an $\infty\cdot 0$ ambiguity in the definition of ${\bf E}_a$ and ${\bf E}_b$. This can be resolved by replacing the ambiguous quantities by the right hand sides of \eqref{2008-1} and \eqref{2010-2}. The same remark applies to Lemmas~\ref{basic-e}, ~\ref{basic-4-lemma},~\ref{dlemma1}. We opt for the current version for the clarity of the origins of the definitions and the more intuitive proofs.\footnote{The assumptions in Theorems~\ref{prop:a priori},~\ref{prop:a-priori},~\ref{unique} and Proposition~\ref{prop:energy-eq} is consistent with the completeness of the evolutionary equations \eqref{eq:dza}-\eqref{eq:dzt}. 
We mention that to obtain the wellposed-ness result, Theorem~\ref{th:local}, we only apply Theorems~\ref{prop:a priori},~\ref{prop:a-priori},~\ref{unique} and Proposition~\ref{prop:energy-eq} to solutions that satisfy in addition that $Z_{,{\alpha '}}\in L^\infty$.} By \eqref{eq:dza} and product rules, \begin{equation}\label{2008-1} Z_{,{\alpha '}}(\partial_t+b\partial_{\alpha '}) D_{\alpha '}\bar Z_t=(b_{\alpha '}-D_{\alpha '} Z_t) \bar Z_{t,{\alpha '}}+(\partial_t+b\partial_{\alpha '}) \bar Z_{t,{\alpha '}}= \bar Z_{tt,{\alpha '}}- (D_{\alpha '} Z_t) \bar Z_{t,{\alpha '}}, \end{equation} and \begin{equation}\label{2010-2} Z_{,{\alpha '}}(\partial_t+b\partial_{\alpha '}) \paren{\frac1{Z_{,{\alpha '}}}D^2_{\alpha '}\bar Z_t}=(b_{\alpha '}-D_{\alpha '} Z_t)D^2_{\alpha '}\bar Z_t+ (\partial_t+b\partial_{\alpha '}) D^2_{\alpha '}\bar Z_t. \end{equation} Let \begin{equation}\label{2010-3} \begin{aligned} \frak e(t)= \nm{\bar Z_{tt,{\alpha '}}(t)}_{L^2}^2&+\nm{D_{\alpha '} \bar Z_t(t)}_{\dot H^{1/2}}^2+ \nm{ D^2_{\alpha '}\bar Z_{tt}(t)}_{L^2}^2+\nm{\frac1{Z_{,{\alpha '}}}D^2_{\alpha '}\bar Z_t (t)}_{\dot H^{1/2}}^2\\&+ \|\bar{Z}_{t,\alpha'}(t)\|_{L^2}^2+\| D_{\alpha'}^2 \bar{Z}_t(t)\|_{L^2}^2+\nm{\partial_{\alpha '}\frac1{Z_{,{\alpha '}}}(t)}^2_{L^2} + \abs{\frac1{Z_{,{\alpha '}}}(0, t)}^2. \end{aligned} \end{equation} It is easy to check that the argument in \S\ref{basic-quantities} gives \begin{equation}\label{equi-1} \frak E(t)\lesssim c_1( \frak e(t) ),\qquad \text{and } \qquad \frak e(t)\lesssim c_2(\frak E(t)). \end{equation} for some universal polynomials $c_1=c_1(x)$ and $c_2=c_2(x)$. \end{remark} In fact, as was shown in \S10 in \cite{kw}, we have the following characterization, which is essentially a consequence of \eqref{equi-1} and equation \eqref{aa1}, of the energy functional $\frak E$ in terms of the holomorphic quantities $\frac1{Z_{,{\alpha '}}}$ and $\bar Z_t$. Since the proof in \cite{kw} applies to the current setting, we omit the proof. Let \begin{equation}\label{energy1} \begin{aligned} \mathcal E(t)=\|\bar Z_{t,\alpha'}(t)\|_{L^2}^2&+ \|D_{\alpha'}^2\bar Z_t(t)\|_{L^2}^2+\nm{\partial_{\alpha'}\frac1{Z_{,\alpha'}}(t)}_{L^2}^2+\nm{D_{\alpha'}^2\frac1{Z_{,\alpha'}}(t)}_{L^2}^2\\&+\nm{\frac1{Z_{,\alpha'} }D_{\alpha'}^2\bar Z_t (t) }_{\dot H^{1/2}}^2+\| D_{\alpha'}\bar Z_t (t) \|_{\dot H^{1/2}}^2+\abs{\frac1{Z_{,\alpha'}}(0,t)}^2. \end{aligned} \end{equation} \begin{proposition}[A characterization of $\frak E$ via $\mathcal E$, cf. \S10 of \cite{kw}] \label{prop:energy-eq} There are polynomials $C_1=C_1(x)$ and $C_2=C_2(x)$, with nonnegative universal coefficients, such that for any solution $Z$ of \eqref{interface-r}-\eqref{interface-holo}-\eqref{b}-\eqref{a1}, satisfying the assumption of Theorem~\ref{prop:a priori}, \begin{equation}\label{energy-equiv} \mathcal E(t)\le C_1(\frak E(t)),\qquad\text{and}\quad \frak E(t)\le C_2(\mathcal E(t)). \end{equation} \end{proposition} A corollary of Theorem~\ref{prop:a priori} and Proposition~\ref{prop:energy-eq} is the following \begin{theorem}[A-priori estimate \cite{kw}]\label{prop:a-priori} Let $Z=Z(\cdot,t)$, $t\in [0, T']$ be a solution of the system \eqref{interface-r}-\eqref{interface-holo}-\eqref{b}-\eqref{a1}, satisfying the assumption of Theorem~\ref{prop:a priori}. 
There are constants $T=T(\mathcal E(0))>0$, $C=C(\mathcal E(0))>0$ that depend only on $\mathcal E(0)$, and with $-T(e)$, $C(e)$ increasing with respect to $e$, such that \begin{equation}\label{a priori-e} \sup_{[0, \min\{T, T'\}]}\mathcal E(t)\le C(\mathcal E(0))<\infty. \end{equation} \end{theorem} \begin{remark} 1. Let $t$ be fixed, $s\ge 2$, and assume $Z_t(t)\in H^s({\mathbb{R}})$. By Proposition~\ref{B2} and Sobolev embeddings, $A_1(t)-1=-\Im[Z_t,\mathbb H]\bar Z_{t,{\alpha '}}\in H^s({\mathbb{R}})$; and by \eqref{aa1}, $Z_{tt}(t)\in H^s(\mathbb R)$ is equivalent to $\frac1{Z_{,{\alpha '}}}(t)-1\in H^s(\mathbb R)$. 2. Assume that $\paren{Z_t(t), \frac1{Z_{,{\alpha '}}}(t)-1}\in (H^{s+1/2}(\mathbb R), H^s(\mathbb R))$, $s\ge 2$, or equivalently $(Z_t(t), Z_{tt}(t))\in (H^{s+1/2}(\mathbb R), H^s(\mathbb R))$. It is easy to check that $\mathcal E(t)<\infty$. So in the class where $\mathcal E(t)<\infty$, it allows for interfaces and velocities in Sobolev classes; it is clear that in the class where $\mathcal E(t)<\infty$ it also allows for $\frac1{Z_{,{\alpha '}}}=0$, that is, singularities on the interface. \end{remark} \subsubsection{A description of the class $\mathcal E<\infty$ in $\mathscr P_-$}\label{e-1} We give here an equivalent description of the class $\mathcal E<\infty$ in the lower half plane $\mathscr P_-$. Let $1< p\le \infty$, and \begin{equation}\label{poisson} K_y(x)=\frac{-y}{\pi(x^2+y^2)},\qquad y<0 \end{equation} be the Poisson kernel. We know for any holomorphic function $G$ on $P_-$, $$\sup_{y<0}\|G(x+iy)\|_{L^p(\mathbb R,dx)}<\infty$$ if and only if there exists $g\in L^p(\mathbb R)$ such that $G(x+iy)=K_y\ast g(x)$. In this case, $\sup_{y<0}\|G(x+iy)\|_{L^p(\mathbb R,dx)}=\|g\|_{L^p}$. Moreover, if $g\in L^p(\mathbb R)$, $1<p<\infty$, then $\lim_{y\to 0-} K_y\ast g(x)=g(x)$ in $L^p(\mathbb R)$ and if $g\in L^\infty\cap C(\mathbb R)$, then $\lim_{y\to 0-} K_y\ast g(x)=g(x)$ for all $x\in\mathbb R$. Let $Z=Z(\cdot, t)$ be a solution of \eqref{interface-r}-\eqref{interface-holo}-\eqref{a1}-\eqref{b}, satisfying the assumption of Theorem~\ref{prop:a priori}; let $\Psi$, $U$ be the holomorphic functions as given in \S\ref{general-soln}, so $$U(x'+iy',t)=K_{y'}\ast \bar Z_t(x', t),\qquad \frac1{\Psi_{z'}}(x'+iy',t)=K_{y'}\ast \frac1{Z_{,{\alpha '}}}(x',t),\qquad \text{for }y'< 0.$$ Let $z'=x'+iy'$. We have \begin{equation}\label{domain-energy1} \mathcal E(t)=\mathcal E_1(t)+\abs{\frac1{Z_{,{\alpha '}} }(0, t) }^2, \end{equation} where \begin{equation}\label{domain-energy} \begin{aligned} \mathcal E_1(t)&:=\sup_{y'<0}\nm{U_{z'}(\cdot+iy', t)}_{L^2(\mathbb R)}^2+\sup_{y'<0}\nm{\frac1{\Psi_{z'}}\partial_{z'}\paren{\frac1{\Psi_{z'} }U_{z'}}(\cdot+iy',t)}_{L^2(\mathbb R)}^2 \\&+\sup_{y'<0}\nm{\frac1{\{\Psi_{z'}\}^2}\partial_{z'}\paren{\frac1{\Psi_{z'} }U_{z'}}(\cdot+iy',t)}_{\dot H^{1/2}(\mathbb R)}^2+\sup_{y'<0}\nm{\frac1{\Psi_{z'} }U_{z'}(\cdot+iy',t)}_{\dot H^{1/2}(\mathbb R)}^2 \\&+\sup_{y'<0}\nm{ \frac1{\Psi_{z'} }\partial_{z'}\paren{\frac1{\Psi_{z'} }\partial_{z'}\paren{\frac1{\Psi_{z'} }}}(\cdot+iy',t) }_{L^2(\mathbb R)}^2+\sup_{y'<0}\nm{ \partial_{z'}\paren{\frac1{\Psi_{z'} }}(\cdot+iy',t) }_{L^2(\mathbb R)}^2. \end{aligned} \end{equation} \subsection{A blow-up criteria and a stability inequality}\label{prelim-result} The main objective of this paper is to show the unique solvability of the Cauchy problem for the water wave equation \eqref{euler} in the class where $\mathcal E<\infty$. 
We will build on the existing result, Theorem~\ref{prop:local-s}, by mollifying the initial data, constructing an approximating sequence and passing to the limit. However the existence time of the solution as given in Theorem~\ref{prop:local-s} depends on the Sobolev norm of the initial data. In order to have an approximating sequence defined on a time interval that has a uniform positive lower bound, we need a blow-up criteria; a uniqueness and stability theorem will allow us to prove the convergence of the sequence, and the uniqueness and stability of the solutions obtained by this process. Let the initial data be as given in \S\ref{id-r}. \begin{theorem}[A blow-up criteria via $\mathcal E$]\label{blow-up} Let $s\ge 4$. Assume $Z_{,\alpha'}(0)\in L^\infty (\mathbb R)$, $Z_t(0)\in H^{s+1/2}(\mathbb R)$ and $Z_{tt}(0)\in H^s(\mathbb R)$. Then there is $T>0$, such that on $[0, T]$, the initial value problem of \eqref{interface-r}-\eqref{interface-holo}-\eqref{b}-\eqref{a1} has a unique solution $Z=Z(\cdot, t)$, satisfying $(Z_{t}, Z_{tt})\in C^ l([0, T], H^{s+1/2-l}(\mathbb R)\times H^{s-l}(\mathbb R))$ for $l=0,1$, and $Z_{,\alpha'}-1\in C([0, T], H^s(\mathbb R))$. Moreover if $T^*$ is the supremum over all such times $T$, then either $T^*=\infty$, or $T^*<\infty$, but \begin{equation}\label{eq:30} \sup_{[0, T^*)}\mathcal E(t)=\infty \end{equation} \end{theorem} The proof for Theorem~\ref{blow-up} will be given in \S\ref{proof}. We now give the uniqueness and stability theorem. Let $Z=Z(\alpha',t)$, ${\mathfrak{Z}}={\mathfrak{Z}}(\alpha',t)$ be solutions of the system \eqref{interface-r}-\eqref{interface-holo}-\eqref{b}-\eqref{a1}, with $z=z(\alpha,t)$, ${\mathfrak{z}}={\mathfrak{z}}(\alpha,t)$ being their re-parametrizations in Lagrangian coordinates, and their initial data as given in \S\ref{id-r}; let $$Z_t,\ Z_{tt},\ Z_{,{\alpha '}},\ z_\alpha,\ h,\ A_1,\ \mathcal A, \ b,\ \frak a, \ D_{\alpha '},\ D_\alpha, \ \frak H,\ \frak E(t), \ \mathcal E(t),\quad etc. $$ be the quantities associated with $Z$, $z$ as defined in \S\ref{prelim}, \S\ref{a priori}, and $${\mathfrak{Z}}_t,\ {\mathfrak{Z}}_{tt},\ {\mathfrak{Z}}_{,{\alpha '}},\ {\mathfrak{z}}_\alpha,\ \tilde h,\ \tilde {A_1},\ \tilde {\mathcal A}, \ \tilde b,\ \tilde{\mathfrak{a}},\ \tilde D_{\alpha '},\ \tilde D_\alpha, \ \tilde{\frak H}, \ \tilde{\frak E}(t), \ \tilde{\mathcal E}(t),\quad etc.$$ be the corresponding quantities for ${\mathfrak{Z}}$, ${\mathfrak{z}}$. Define \begin{equation}\label{def-l} l= \tilde h\circ h^{-1}. \end{equation} so $l({\alpha '}, 0)={\alpha '}$, for ${\alpha '}\in {\mathbb{R}}$. \begin{theorem}[Uniqueness and Stability in $\mathcal E<\infty$]\label{unique} Assume that $Z$, $\frak Z$ are solutions of equation \eqref{interface-r}-\eqref{interface-holo}-\eqref{a1}-\eqref{b}, satisfying $(Z_t, Z_{tt}), (\frak Z_t, \frak Z_{tt})\in C^l([0, T], H^{s+1/2-l}(\mathbb R)\times H^{s-l}(\mathbb R))$ for $l=0,1$, $s\ge 4$. 
There is a constant $C$, depending only on $T$, $\sup_{[0, T]} \mathcal E(t)$ and $\sup_{[0, T]}\tilde{\mathcal E}(t)$, such that \begin{equation}\label{stability} \begin{aligned} &\sup_{[0, T]}\paren{\|\paren{\bar Z_t-\bar {\mathfrak{Z}}_t\circ l}(t)\|_{\dot{H}^{1/2}}+\|\paren{\bar Z_{tt}-\bar {\mathfrak{Z}}_{tt}\circ l}(t)\|_{\dot{H}^{1/2}}+\nm{\paren{\frac1{ Z_{,{\alpha '}}}-\frac 1{ {\mathfrak{Z}}_{,{\alpha '}}}\circ l}(t)}_{\dot{H}^{1/2}}}+\\&\sup_{[0, T]}\paren{\|\paren{l_{\alpha '}-1}(t)\|_{L^2}+\|D_{\alpha '} Z_t-(\tilde D_{\alpha '} {\mathfrak{Z}}_t)\circ l\|_{L^2} +\|(A_1-\tilde {A_1}\circ l)(t)\|_{L^2}+\|(b_{\alpha '}-\tilde b_{\alpha '}\circ l)(t)\|_{L^2}}\\&\le C\paren{ \|\paren{\bar Z_t-\bar {\mathfrak{Z}}_t}(0)\|_{\dot{H}^{1/2}}+\|\paren{\bar Z_{tt}-\bar {\mathfrak{Z}}_{tt}}(0)\|_{\dot{H}^{1/2}}+\nm{\paren{\frac1{ Z_{,{\alpha '}}}-\frac 1{ {\mathfrak{Z}}_{,{\alpha '}}}}(0)}_{\dot{H}^{1/2}}}\\&+C\paren{\|\paren{D_{\alpha '} Z_t-(\tilde D_{\alpha '} {\mathfrak{Z}}_t)}(0)\|_{L^2} +\nm{\paren{\frac1{ Z_{,{\alpha '}}}-\frac 1{ {\mathfrak{Z}}_{,{\alpha '}}}}(0)}_{L^\infty}} \end{aligned} \end{equation} \end{theorem} By precomposing with $h$, we see that inequality \eqref{stability} effectively gives control of the differences, $z_t-{\mathfrak{z}}_t$, $z_{tt}-{\mathfrak{z}}_{tt}$ etc., in Lagrangian coordinates. Notice that in the stability inequality \eqref{stability}, we control the $\dot{H}^{1/2}$ norms of the differences of $Z_t$ and ${\mathfrak{Z}}_t\circ l$, $Z_{tt}$ and ${\mathfrak{Z}}_{tt}\circ l$, and $\frac1{ Z_{,{\alpha '}}}$ and $\frac 1{ {\mathfrak{Z}}_{,{\alpha '}}}\circ l$, and the $L^2$ norms of the differences of $D_{\alpha '} Z_t$ and $(\tilde D_{\alpha '} {\mathfrak{Z}}_t)\circ l$, and $A_1$ and $\tilde {A_1}\circ l$, while the energy functional $\frak E(t)$, or equivalently $\mathcal E(t)$, gives us control of the $L^2$ norms of $Z_{t,{\alpha '}}$, $Z_{tt,{\alpha '}}$ and $\partial_{\alpha '}\frac1{ Z_{,{\alpha '}}}$, and the $L^\infty$ and $\dot H^{1/2}$ norms \footnote{see \S\ref{basic-quantities} and \S\ref{hhalf-norm}.} of $D_{\alpha '} Z_t$ and $A_1$. Indeed, because the coefficient $\frac {A_1}{|Z_{,{\alpha '}}|^2}$ in equation \eqref{quasi-r} is solution dependent and possibly degenerate, for given solutions $Z=Z(\alpha', t)$, $\frak Z=\frak Z(\alpha', t)$ of equation \eqref{quasi-r}-\eqref{aux}, the sets of zeros in $\frac1{Z_{,\alpha'}}(t)$ and $\frac1{\frak Z_{,\alpha'}}(t)$ are different and move with the solutions, hence one cannot simply subtract the two solutions and perform energy estimates, as is usually done in classical cases. Our approach is to first get a good understanding of the evolution of the degenerate factor $\frac1{Z_{,{\alpha '}}}$ via equation \eqref{eq:dza}; this allows us to construct a series of model equations that capture the key degenerate features of the equation \eqref{quasi-r} to get some ideas of what would work. We then tailor the ideas to the specific structure of our equations. We give the proof for Theorem~\ref{unique} in \S\ref{proof3}. \subsection{The wellposedness of the water wave equation \eqref{euler} in $\mathcal E<\infty$}\label{main} Since it can be tricky to define solutions for the interface equation \eqref{interface-r} when the interface is allowed to have singularities, we will directly solve the water wave equation \eqref{euler} via the system \eqref{eq:273}-\eqref{eq:275}-\eqref{eq:274}-\eqref{eq:271}.
As we know from the discussions in \S\ref{general-soln} and \S\ref{e-1}, equation \eqref{euler} is equivalent to \eqref{eq:273}-\eqref{eq:275}-\eqref{eq:274}, for $(U, \Psi)\in C(\overline{\mathscr P}_-\times [0, T])\cap C^1({\mathscr P}_-\times (0, T))$ with $U(\cdot, t)$, $\Psi(\cdot, t)$ holomorphic, provided $\Sigma(t)=\{\Psi(\alpha',t)\ |\ \alpha'\in \mathbb R\}$ is a Jordan curve; and by \eqref{domain-energy1}, the energy functional is $\mathcal E=\mathcal E_1+|\frac1{Z_{,{\alpha '}}}(0,t)|^2$. Observe that the energy functional $\mathcal E(t)$ does not give direct control of the lower order norms $\|Z_t(t)\|_{L^2}$, $\|Z_{tt}(t)\|_{L^2}$ and $\nm{\frac1{Z_{,{\alpha '}}}(t)-1}_{L^2(\mathbb R)}$; in the class where we want to solve the water wave equation, we require in addition that $Z_t(t)\in {L^2(\mathbb R)}$ and $\frac1{Z_{,{\alpha '}}}(t)-1\in {L^2(\mathbb R)}$. This is consistent with the decay assumption made in \S\ref{notation1}. \subsubsection{The initial data}\label{id} Let $\Omega(0)$ be the initial fluid domain, with the interface $\Sigma(0):=\partial\Omega(0)$ being a Jordan curve that tends to horizontal lines at infinity, and let $\Psi(\cdot, 0):{\mathscr P}_-\to \Omega(0)$ be the Riemann Mapping such that $\lim_{z'\to\infty} \partial_{z'}\Psi(z', 0)=1$. We know $\Psi(\cdot, 0) :{\mathscr P}_-\to \Omega(0)$ is a homeomorphism. Let $Z(\alpha', 0):=\Psi(\alpha', 0)$ for ${\alpha '}\in\mathbb R$, so $Z=Z(\cdot, 0):\mathbb R\to\Sigma(0)$ is the parametrization of $\Sigma(0)$ in the Riemann Mapping variable. Let $\bold v(\cdot, 0):\Omega(0)\to \mathbb C$ be the initial velocity field, and $U(z', 0)=\bar{\bold v}(\Psi(z', 0),0)$. Assume $\bar{\bold v}(\cdot, 0)$ is holomorphic on $\Omega(0)$, so $U(\cdot, 0)$ is holomorphic on ${\mathscr P}_-$. Assume that the energy functional $\mathcal E_{1}(0)$ for $(U(\cdot, 0),\Psi(\cdot, 0))$ as given in \eqref{domain-energy} satisfies $\mathcal E_1(0)<\infty$. Assume in addition that \footnote{This is equivalent to $\|U(\cdot+i0, 0)\|_{L^2(\mathbb R)}+ \nm{\frac1{Z_{,{\alpha '}}} (0)-1}_{L^2(\mathbb R)} <\infty$, see \S\ref{e-1}.} \begin{equation}\label{iid} c_0:=\sup_{y'<0}\|U(\cdot+iy', 0)\|_{L^2(\mathbb R)}+\sup_{y'<0}\nm{\frac1{\Psi_{z'}(\cdot+iy',0)}-1}_{L^2(\mathbb R)}<\infty. \end{equation} In light of the discussion in \S\ref{general-soln} and the uniqueness and stability Theorem~\ref{unique}, we define solutions for the Cauchy problem of the system \eqref{eq:273}-\eqref{eq:275}-\eqref{eq:274} as follows. \begin{definition}\label{de} Let the data be as given in \S\ref{id}, and $(U, \Psi, \frak P)\in C(\bar{\mathscr P}_-\times [0, T])$, with $(U, \Psi)\in C^1(\mathscr P_-\times (0, T))$, $\lim_{z'\to\infty} (U, \Psi_{z'}-1)(z',t)=(0,0)$ and $U(\cdot, t)$, $\Psi(\cdot, t)$ holomorphic in the lower half plane ${\mathscr P}_-$ for $t\in [0, T]$.
We say $(U, \Psi, \frak P)$ is a solution of the Cauchy problem of the system \eqref{eq:273}-\eqref{eq:275}-\eqref{eq:274}, if it satisfies the system \eqref{eq:273}-\eqref{eq:275}-\eqref{eq:274} on $\mathscr P_-\times [0, T]$, and if there is a sequence $Z_n=Z_n({\alpha '},t)$, $({\alpha '},t)\in \mathbb R\times[0, T]$, which are solutions of the system \eqref{interface-r}-\eqref{interface-holo}-\eqref{a1}-\eqref{b}, satisfying $(Z_{n,t}, \frac1{\partial_{\alpha '} Z_{n}}-1, \partial_{\alpha '} Z_{n} -1 )\in C^l([0, T], H^{s+1/2-l}(\mathbb R)\times H^{s-l}(\mathbb R)\times H^{s-l}(\mathbb R) )$ for some $s\ge 4$, $l=0,1$, $\sup_{n, t\in [0, T]} \mathcal E_n(t)<\infty$ and $\sup_{n, t\in [0, T]}(\|Z_{n,t}(t)\|_{L^2}+ \nm{\frac1{\partial_{\alpha '} Z_{n}}(t)-1}_{L^2})<\infty$, and the holomorphic extension $(U_n, \Psi_n)$ in ${\mathscr P}_-$ of $(\bar Z_{n,t}, Z_n)$, with $\lim_{z'\to\infty} (U_n, \partial_{z'}\Psi_{n}-1)(z',t)=(0,0)$, and the function $\frak P_n$ defined by \eqref{eq:273}-\eqref{eq:274}-\eqref{eq:275}, such that $\lim_{n\to \infty} U_n=U$, $\lim_{n\to \infty} \Psi_n=\Psi$, $\lim_{n\to \infty} \frak P_n=\frak P$ and $\lim_{n\to\infty}\frac1{\partial_{z'}\Psi_n}= \frac1{\partial_{z'}\Psi}$, uniformly on compact subsets of $\bar{\mathscr P}_-\times [0, T]$, and the data $(Z_n(\cdot, 0), Z_{n,t}(\cdot,0))$ converges in the topology of the right hand side of the inequality \eqref{stability} to the trace $(\Psi(\cdot+i0, 0), \bar U(\cdot+i0, 0))$. \end{definition} Let $\mathcal E(0)=\mathcal E_1(0)+|\frac1{Z_{,{\alpha '}}}(0,0)|^2$. \begin{theorem}[Local wellposedness in the $\mathcal E<\infty$ regime]\label{th:local} 1. There exists $T>0$, depending only on $\mathcal E(0)$, such that on $[0,T]$, the initial value problem of the system \eqref{eq:273}-\eqref{eq:275}-\eqref{eq:274} has a unique solution $(U, \Psi, \frak P)$, with the properties that $U(\cdot, t),\Psi(\cdot, t)$ are holomorphic on ${\mathscr P}_-$ for each fixed $t\in [0, T]$, $U, \Psi, \frac1{\Psi_{z'}}, \frak P$ are continuous on $\bar {\mathscr P}_-\times [0, T]$, $U, \Psi, \frak P$ are continuously differentiable on ${\mathscr P}_-\times [0, T]$, $\sup_{[0, T]}\mathcal E_1(t)<\infty$ and \begin{equation}\label{iidt} \sup_{[0, T]}\sup_{y'<0}\paren{\|U(\cdot+iy', t)\|_{L^2(\mathbb R)}+\|\frac1{\Psi_{z'}(\cdot+iy',t)}-1\|_{L^2(\mathbb R)}}<\infty. \end{equation} The solution $(U, \Psi, \frak P)$ gives rise to a solution $(\bar{\bold v}, P)=(U\circ \Psi^{-1}, \frak P\circ \Psi^{-1})$ of the water wave equation \eqref{euler} so long as $\Sigma(t)=\{Z=\Psi(\alpha',t)\ | \ \alpha'\in \mathbb R\}$ is a Jordan curve. 2. If, in addition, the initial interface is chord-arc, that is, $Z_{,\alpha'}(\cdot,0)\in L^1_{loc}(\mathbb R)$ and there is $0<\delta<1$ such that $$\delta \int_{\alpha'}^{\beta'} |Z_{,\alpha'}(\gamma,0)|\,d\gamma\le |Z(\alpha', 0)-Z(\beta', 0)|\le \int_{\alpha'}^{\beta'} |Z_{,\alpha'}(\gamma,0)|\,d\gamma,\quad \forall -\infty<\alpha'< \beta'<\infty,$$ then there are $T>0$ and $T_1>0$, depending only on $\mathcal E(0)$, such that on $[0, \min\{T, \frac{\delta}{T_1}\}]$, the initial value problem of the water wave equation \eqref{euler} has a unique solution, satisfying $\mathcal E_1(t)<\infty$ and \eqref{iidt}, and the interface $Z=Z(\cdot, t)$ remains chord-arc. \end{theorem} We prove Theorem~\ref{th:local} in \S\ref{proof2}.
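For orientation, we record a trivial example, included here only as an illustration, in which all of the above hypotheses are easily verified: the still water data $\Omega(0)=\mathscr P_-$ and $\bold v(\cdot, 0)\equiv 0$, for which $\Psi(z',0)=z'$ and $U(\cdot, 0)\equiv 0$. In this case every term in \eqref{domain-energy} vanishes, so $$\mathcal E_1(0)=0,\qquad \mathcal E(0)=\abs{\tfrac1{Z_{,{\alpha '}}}(0,0)}^2=1,\qquad c_0=0,$$ and the chord-arc condition holds for every $0<\delta<1$; Theorem~\ref{th:local} therefore applies.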
\section{The proof of Theorem~\ref{prop:a priori} and Theorem~\ref{blow-up}}\label{proof} We need the following basic inequalities in the proof of Theorems~\ref{prop:a priori} and \ref{blow-up}. The basic energy inequality in Lemma~\ref{basic-e} has already appeared in \cite{wu3}. We give a proof nevertheless. \begin{lemma}[Basic energy inequality I, cf. \cite{wu3}, lemma 4.1]\label{basic-e} Assume $\Theta=\Theta(\alpha',t)$, $\alpha'\in \mathbb R$, $t\in [0, T)$ is smooth, decays fast at the spatial infinity, satisfying $(I-\mathbb H)\Theta=0$ and \begin{equation}\label{eq:40} (\partial_t+b\partial_{\alpha '})^2\Theta+i\mathcal A\partial_{\alpha '} \Theta=G_\Theta. \end{equation} Let \begin{equation}\label{eq:41} E_\Theta(t):=\int\frac1{\mathcal A}|(\partial_t+b\partial_{\alpha '})\Theta|^2\,d{\alpha '}+ i\int(\partial_{\alpha '}\Theta) \bar\Theta\,d{\alpha '}. \end{equation} Then \begin{equation}\label{eq:42} \frac d{dt} E_\Theta(t)\le \nm{\frac{\frak a_t}{\frak a}\circ h^{-1}}_{L^\infty} E_\Theta(t)+2 E_\Theta(t)^{1/2}\paren{\int\frac{|G_\Theta|^2}{\mathcal A}\,d{\alpha '}}^{1/2}. \end{equation} \end{lemma} \begin{remark} By $\Theta=\mathbb H\Theta$ and \eqref{def-hhalf}, \begin{equation}\label{hhalf} i\int(\partial_{\alpha'}\Theta) \bar\Theta\,d\alpha'=\int(i\partial_{\alpha'}\mathbb H \Theta) \bar\Theta\,d\alpha' =\|\Theta\|_{\dot{H}^{1/2}}^2\ge 0. \end{equation} \end{remark} \begin{proof} By a change of the variables in \eqref{eq:40}, we have $$(\partial_t^2+i\frak a \partial_\alpha)(\Theta\circ h)=G_\Theta\circ h$$ where $\frak a h_\alpha=\mathcal A\circ h$; and in \eqref{eq:41}, $$E_\Theta(t)=\int\frac1{\frak a}|\partial_t(\Theta\circ h)|^2\,d\alpha+\int i\partial_\alpha (\Theta\circ h) \bar{\Theta\circ h}\,d\alpha.$$ So \begin{equation}\label{eq:43} \begin{aligned} \frac d{dt} E_\Theta(t)&=\int 2\Re\braces{\frac1{\frak a}\partial_t^2(\Theta\circ h) \partial_t(\bar{\Theta\circ h})}-\frac{\frak a_t}{\frak a^2}|\partial_t(\Theta\circ h)|^2+2\Re \braces{i\int\partial_\alpha(\Theta\circ h) \partial_t(\bar{\Theta\circ h}) \,d\alpha } \\&= 2\Re \int \frac1{\frak a} G_\Theta\circ h \partial_t(\bar{\Theta\circ h})\,d\alpha-\int \frac{\frak a_t}{\frak a^2}|\partial_t(\Theta\circ h)|^2\,d\alpha, \end{aligned} \end{equation} where we used integration by parts in the first step. Changing back to the Riemann mapping variable, applying Cauchy-Schwarz inequality and \eqref{hhalf} yields \eqref{eq:42}. \end{proof} We also need the following simple energy inequality. \begin{lemma}[Basic energy inequality II]\label{basic-e2} Assume $\Theta=\Theta({\alpha '},t)$ is smooth and decays fast at the spatial infinity. And assume \begin{equation}\label{evolution-equation} (\partial_t+b\partial_{\alpha '})\Theta=g_\Theta. 
\end{equation} Then \begin{equation}\label{basic-2} \frac{d}{dt}\nm{\Theta(t)}_{L^2}^2\le 2\nm{g_\Theta(t)}_{L^2}\nm{\Theta(t)}_{L^2}+\|b_{\alpha '}(t)\|_{L^\infty}\nm{\Theta(t)}_{L^2}^2 \end{equation} \end{lemma} \begin{proof} We have, upon changing variables, $$\int |\Theta({\alpha '},t)|^2\,d{\alpha '}=\int |\Theta( h(\alpha, t),t)|^2h_\alpha \,d\alpha,$$ so \begin{equation} \begin{aligned} \frac{d}{dt}\int |\Theta({\alpha '},t)|^2\,d{\alpha '} &=\int 2\Re \partial_t(\Theta\circ h)\bar{\Theta\circ h}\ h_\alpha+ |\Theta\circ h|^2h_{t\alpha} \,d\alpha\\&= \int 2\Re \paren{(\partial_t+b\partial_{\alpha '})\Theta}\bar\Theta ({\alpha '},t) + b_{\alpha '} |\Theta({\alpha '},t)|^2\,d{\alpha '}; \end{aligned} \end{equation} here in the second step we changed back to the Riemann mapping variable, and used the fact that $\frac{h_{t\alpha}}{h_\alpha}=b_{\alpha '}\circ h$. Inequality \eqref{basic-2} follows from Cauchy-Schwarz inequality. \end{proof} Let \begin{equation}\label{P} \mathcal P=(\partial_t+b\partial_{\alpha '})^2+i\mathcal A\partial_{\alpha '}. \end{equation} We need two more basic inequalities. \begin{lemma}[Basic inequality III]\label{basic-3-lemma} Assume that $\Theta=\Theta({\alpha '},t)$ is smooth and decays fast at the spatial infinity, and assume $\Theta=\mathbb H\Theta$. Then \begin{equation}\label{basic-3} \begin{aligned} &\nm{(I-\mathbb H)\paren{ \mathcal P\Theta}(t) }_{L^2}\le \nm{\partial_{\alpha '} (\partial_t+b\partial_{\alpha '})b}_{L^\infty}\nm{\Theta(t)}_{L^2}\\&\qquad\qquad\qquad+\nm{b_{\alpha '}}_{L^\infty}\nm{(\partial_t+b\partial_{\alpha '})\Theta(t)}_{L^2}+\nm{ b_{\alpha '}}_{L^\infty}^2\nm{\Theta(t)}_{L^2}+\nm{\mathcal A_{\alpha '}}_{L^\infty}\nm{\Theta(t)}_{L^2}. \end{aligned} \end{equation} \end{lemma} \begin{proof} Because $\Theta=\mathbb H\Theta$, we have $$(I-\mathbb H)(\mathcal P\Theta)=\bracket{\mathcal P, \mathbb H}\Theta;$$ and by \eqref{eq:c25}, $$\bracket{\mathcal P, \mathbb H}\Theta=\bracket{(\partial_t+b\partial_{\alpha '})b,\mathbb H}\partial_{\alpha '} \Theta+2\bracket{b,\mathbb H}\partial_{\alpha '} (\partial_t+b\partial_{\alpha '})\Theta-[b,b; \partial_{\alpha '} \Theta]+\bracket{i\mathcal A,\mathbb H}\partial_{\alpha '} \Theta. $$ Inequality \eqref{basic-3} follows from \eqref{3.20}. \end{proof} \begin{lemma}[Basic inequality IV]\label{basic-4-lemma} Assume $f$ is smooth and decays fast at the spatial infinity. Then \begin{equation}\label{basic-4} \begin{aligned} \nm{Z_{,{\alpha '}}\bracket{\mathcal P, \frac1{Z_{,{\alpha '}}}}f}_{L^2}&\lesssim \nm{ (\partial_t+b\partial_{\alpha '})(b_{\alpha '}-D_{\alpha '} Z_t)}_{L^\infty}\nm{f}_{L^2}\\&+ \nm{(b_{\alpha '}-D_{\alpha '} Z_t)}^2_{L^\infty}\nm{f}_{L^2}+ \nm{ (b_{\alpha '}-D_{\alpha '} Z_t)}_{L^\infty}\nm{(\partial_t+b\partial_{\alpha '})f}_{L^2}\\&+ \nm{A_1}_{L^\infty}\nm{\frac1{Z_{,{\alpha '}}}\partial_{\alpha '}\frac1{Z_{,{\alpha '}}}}_{L^\infty}\nm{f}_{L^2}. \end{aligned} \end{equation} \end{lemma} \begin{proof} Lemma~\ref{basic-4-lemma} is straightforward from the commutator relation \eqref{eq:c16}, identities \eqref{eq:c26}, \eqref{eq:c27} and the definition $A_1:=\mathcal A\abs{Z_{,{\alpha '}}}^2$. \end{proof} Let $Z=Z(\cdot, t)$ be a solution of the system \eqref{interface-r}-\eqref{interface-holo}-\eqref{b}-\eqref{a1}, satisfying the assumption of Theorem~\ref{prop:a priori}. 
By \eqref{quasi-r1} and \eqref{eq:c10}, we have \begin{equation}\label{base-eq} \mathcal P \bar Z_{t,{\alpha '}}=-(\partial_t+b\partial_{\alpha '})(b_{\alpha '} \partial_{{\alpha '}}\bar Z_{t})-b_{\alpha '}\partial_{\alpha '} \bar Z_{tt}-i\mathcal A_{\alpha '} \partial_{\alpha '} \bar Z_t+\partial_{\alpha '}\paren{\frac{\frak a_t}{\frak a}\circ h^{-1} (\bar Z_{tt}-i)} \end{equation} Equation \eqref{base-eq} is our base equation in the proof of Theorems~\ref{prop:a priori} and \ref{blow-up}. \subsection{The proof of Theorem~\ref{prop:a priori}.}\label{proof0} We begin with computing a few evolutionary equations. We have \begin{equation}\label{eq-dt} \mathcal P D_{\alpha '}\bar Z_t=\bracket{\mathcal P, \frac1{Z_{,{\alpha '}}}}\bar Z_{t,{\alpha '}}+\frac1{Z_{,{\alpha '}}}\mathcal P \bar Z_{t,{\alpha '}}; \end{equation} \begin{equation}\label{eq-ddt} \mathcal P \paren{\frac1{Z_{,{\alpha '}}}D^2_{\alpha '}\bar Z_t}=\bracket{\mathcal P, \frac1{Z_{,{\alpha '}}}}D_{\alpha '}^2\bar Z_{t}+\frac1{Z_{,{\alpha '}}}\bracket{\mathcal P, D_{\alpha '}^2}\bar Z_{t}+\frac1{Z_{,{\alpha '}}}D_{\alpha '}^2\mathcal P\bar Z_t.\end{equation} And, by the commutator identity \eqref{eq:c7} and the fact that $(\partial_t+b\partial_{\alpha '})\bar Z_{t}=\bar Z_{tt}$, \begin{equation}\label{eq-zta} (\partial_t+b\partial_{\alpha '})\bar Z_{t,{\alpha '}}=\bar Z_{tt,{\alpha '}}-b_{\alpha '} \bar Z_{t,{\alpha '}}; \end{equation} and by \eqref{eq:dza} and \eqref{eq:c7} \begin{equation}\label{eq-ddza} \begin{aligned} (\partial_t+b\partial_{\alpha '})\partial_{\alpha'} \frac{1}{ Z_{,\alpha'}}&=\partial_{\alpha'}(\partial_t+b\partial_{\alpha '}) \frac{1}{ Z_{,\alpha'}}+[(\partial_t+b\partial_{\alpha '}), \partial_{\alpha'}] \frac{1}{ Z_{,\alpha'}}\\& =\paren{\partial_{\alpha'}\frac{1}{ Z_{,\alpha'}}}\paren{b_{\alpha '}-D_{\alpha '} Z_t}+D_{\alpha '} \paren{b_{\alpha '}-D_{\alpha '} Z_t} -b_{\alpha'}\partial_{\alpha'}\frac{1}{ Z_{,\alpha'}}\\& =-D_{\alpha '} Z_t\paren{\partial_{\alpha'}\frac{1}{ Z_{,\alpha'}}}+D_{\alpha '} \paren{b_{\alpha '}-D_{\alpha '} Z_t}. \end{aligned} \end{equation} We know from the definition of ${\bf E}_a(t)$, ${\bf E}_b(t)$, and $A_1:=\mathcal A |Z_{,{\alpha '}}|^2$, $${\bf E}_a(t):=E_{D_{\alpha '}\bar Z_t}(t),\qquad\text{and }\quad {\bf E}_b(t):=E_{ \frac1{Z_{,{\alpha '}}}D^2_{\alpha '}\bar Z_t}(t),$$ where $E_\Theta(t)$ is the basic energy as defined in \eqref{eq:41}. Notice that the quantities $D_{\alpha '}\bar Z_t$ and $ \frac1{Z_{,{\alpha '}}}D^2_{\alpha '}\bar Z_t$ are holomorphic. So the energy functional \begin{equation}\label{energy-functional} \frak E(t)=E_{D_{\alpha '}\bar Z_t}(t)+ E_{ \frac1{Z_{,{\alpha '}}}D^2_{\alpha '}\bar Z_t}(t) +\|\bar Z_{t,{\alpha '}}(t)\|_{L^2}^2+\|D_{\alpha '}^2\bar Z_t(t)\|_{L^2}^2+\nm{\partial_{\alpha '} \frac{1}{ Z_{,\alpha'}}(t)}_{L^2}^2+\abs{\frac{1}{ Z_{,\alpha'}}(0,t)}^2. \end{equation} Our goal is to show that there is a universal polynomial $C=C(x)$, such that \begin{equation}\label{energy-ineq} \frac d{dt}\frak E(t)\le C(\frak E(t)). \end{equation} We begin with a list of quantities controlled by $\frak E(t)$. 
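Before turning to this list, we indicate informally how \eqref{energy-ineq} will be used: comparing $\frak E$ with the solution $y$ of the ordinary differential equation $$y'(t)=C(y(t)),\qquad y(0)=\frak E(0),$$ gives $\frak E(t)\le y(t)$ for as long as $y(t)$ remains finite. Since $C$ may be taken to be positive and increasing (enlarging it if necessary), $y$ remains finite on a time interval $[0, T]$ with $T>0$ depending only on $\frak E(0)$ and decreasing in $\frak E(0)$, and on this interval $\sup_{[0, T]}\frak E(t)\le y(T)$ depends only on $\frak E(0)$. This is the mechanism behind the a priori estimate \eqref{a priori-e}.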
\subsubsection{Quantities controlled by $\frak E(t)$.}\label{basic-quantities} It is clear that $\frak E(t)$ controls the following quantities: \begin{equation}\label{list1} \nm{ D_{\alpha '}\bar Z_t}_{\dot H^{1/2}}, \quad\nm{\frac{1}{ Z_{,\alpha'}} D_{\alpha '}^2\bar Z_t}_{\dot H^{1/2}}, \quad \|\bar Z_{t,{\alpha '}}\|_{L^2},\quad \|D_{\alpha '}^2\bar Z_t\|_{L^2},\quad \nm{\partial_{\alpha '} \frac{1}{ Z_{,\alpha'}}}_{L^2},\quad \abs{\frac{1}{ Z_{,\alpha'}}(0,t)}. \end{equation} By \eqref{eq:b13} and \eqref{a1}, \begin{equation}\label{2000} 1\le A_1,\qquad{and}\quad \nm{ A_1}_{L^\infty}\lesssim 1+\|\bar Z_{t,{\alpha '}}\|_{L^2}^2\le 1+\frak E. \end{equation} We also have, by \eqref{ba} and \eqref{eq:b13}, that \begin{equation}\label{2001} \|b_{\alpha '}-2\Re D_{\alpha '} Z_t\|_{L^\infty}\lesssim \nm{\partial_{\alpha '} \frac{1}{ Z_{,\alpha'}}}_{L^2}\|\bar Z_{t,{\alpha '}}\|_{L^2}\le \frak E. \end{equation} We now estimate $\nm{D_{\alpha '} Z_t}_{L^\infty}$. We have, by the fundamental Theorem of calculus, \begin{equation}\label{2002}\paren{D_{\gamma'} \bar Z_t}^2-\int_0^1 \paren{D_{\beta '} \bar Z_t}^2\,d{\beta '}=2\int_0^1\int_{\beta '}^{\gamma'} D_{\alpha '} \bar Z_t\partial_{\alpha '} D_{\alpha '} \bar Z_t\,d{\alpha '} d{\beta '}=2\int_0^1\int_{\beta '}^{\gamma'} \partial_{\alpha '} \bar Z_t D_{\alpha '}^2 \bar Z_t\,d{\alpha '} d{\beta '}, \end{equation} where in the last equality, we moved $\frac1{Z_{,{\alpha '}}}$ from the first to the second factor. So for any $\gamma'\in \mathbb R$, \begin{equation}\label{2003} \abs{\paren{D_{\gamma'} \bar Z_t(\gamma',t)}^2-\int_0^1 \paren{D_{\beta '} \bar Z_t({\beta '},t)}^2\,d{\beta '}}\le 2\|\bar Z_{t,{\alpha '}}\|_{L^2} \|D_{\alpha '}^2\bar Z_t\|_{L^2}\le 2\frak E. \end{equation} Now by the fundamental Theorem of calculus and Cauchy-Schwarz inequality we have, for ${\beta '}\in [0, 1]$, \begin{equation}\label{2004}\abs{\frac{1}{ Z_{,{\beta '}}}({\beta '}, t)-\frac{1}{ Z_{,\alpha'}}(0,t)}\le \int_0^1\abs{\partial_{\alpha '} \frac{1}{ Z_{,\alpha'}}}\,d{\alpha '}\le \nm{\partial_{\alpha '} \frac{1}{ Z_{,\alpha'}}}_{L^2}; \end{equation} so $$\abs{\int_0^1 \paren{D_{\beta '} \bar Z_t}^2\,d{\beta '}}\le \nm{\frac{1}{ Z_{,\alpha'}}}_{L^\infty[0,1]}^2\|\bar Z_{t,{\alpha '}}\|_{L^2}^2\le \paren{\abs{\frac{1}{ Z_{,\alpha'}}(0,t)}+ \nm{\partial_{\alpha '} \frac{1}{ Z_{,\alpha'}}}_{L^2}}^2\|\bar Z_{t,{\alpha '}}\|_{L^2}^2\lesssim \frak E^2. $$ Combining the above argument, we get \begin{equation}\label{2005} \nm{D_{\alpha '} Z_t}_{L^\infty}=\nm{D_{\alpha '} \bar Z_t}_{L^\infty}\lesssim C(\frak E). \end{equation} This together with \eqref{2001} gives us \begin{equation}\label{2006} \nm{b_{\alpha '}}_{L^\infty}\lesssim C(\frak E). \end{equation} We now explore the remaining terms in ${\bf E}_a(t)$ and ${\bf E}_b(t)$. 
We know \begin{equation}\label{2007}{\bf E}_a(t)=\int\frac1{A_1}|Z_{,{\alpha '}}(\partial_t+b\partial_{\alpha '}) D_{\alpha '}\bar Z_t |^2\,d{\alpha '}+ \nm{ D_{\alpha '}\bar Z_t}_{\dot H^{1/2}}^2.\end{equation} Now by \eqref{eq:dza}, product rules and \eqref{eq-zta}, \footnote{One can also compute by changing to the Lagrangian coordinate and using the commutator relation \eqref{eq:c1}.} \begin{equation}\label{2008} Z_{,{\alpha '}}(\partial_t+b\partial_{\alpha '}) D_{\alpha '}\bar Z_t=(b_{\alpha '}-D_{\alpha '} Z_t) \bar Z_{t,{\alpha '}}+(\partial_t+b\partial_{\alpha '}) \bar Z_{t,{\alpha '}}= \bar Z_{tt,{\alpha '}}- (D_{\alpha '} Z_t) \bar Z_{t,{\alpha '}}; \end{equation} so \begin{equation}\label{2009} \begin{aligned} \|Z_{tt,{\alpha '}}\|_{L^2}\le &\|D_{\alpha '} Z_t\|_{L^\infty}\|Z_{t,{\alpha '}}\|_{L^2}+\|Z_{,{\alpha '}}(\partial_t+b\partial_{\alpha '}) D_{\alpha '}\bar Z_t\|_{L^2}\\&\le \|D_{\alpha '} Z_t\|_{L^\infty}\|Z_{t,{\alpha '}}\|_{L^2} + \paren{\|A_1\|_{L^\infty}{\bf E}_a}^{1/2}\lesssim C(\frak E). \end{aligned} \end{equation} Similarly, \begin{equation}\label{2010} {\bf E}_b(t)=\int\frac1{A_1}\abs{Z_{,{\alpha '}}(\partial_t+b\partial_{\alpha '}) \paren{\frac1{Z_{,{\alpha '}}}D^2_{\alpha '}\bar Z_t} }^2\,d{\alpha '}+ \nm{ \frac1{Z_{,{\alpha '}}} D^2_{\alpha '}\bar Z_t}_{\dot H^{1/2}}^2, \end{equation} and by product rule and \eqref{eq:dza}, \begin{equation}\label{2010-1} Z_{,{\alpha '}}(\partial_t+b\partial_{\alpha '}) \paren{\frac1{Z_{,{\alpha '}}}D^2_{\alpha '}\bar Z_t}=(b_{\alpha '}-D_{\alpha '} Z_t)D^2_{\alpha '}\bar Z_t+ (\partial_t+b\partial_{\alpha '}) D^2_{\alpha '}\bar Z_t; \end{equation} so \begin{equation}\label{2011} \nm{(\partial_t+b\partial_{\alpha '}) D^2_{\alpha '}\bar Z_t}_{L^2}\le \nm{Z_{,{\alpha '}}(\partial_t+b\partial_{\alpha '}) \paren{\frac1{Z_{,{\alpha '}}}D^2_{\alpha '}\bar Z_t}}_{L^2}+\|b_{\alpha '}-D_{\alpha '} Z_t\|_{L^\infty}\|D^2_{\alpha '}\bar Z_t\|_{L^2}\lesssim C(\frak E). \end{equation} Now from \begin{align} D_{\alpha '}^2 Z_t=\partial_{\alpha '} \frac1{Z_{,{\alpha '}}}D_{\alpha '} Z_{t}+\frac1{Z_{,{\alpha '}}^2}\partial_{\alpha '}^2 Z_t,\label{2012-1}\\ D_{\alpha '}^2 \bar Z_t=\partial_{\alpha '} \frac1{Z_{,{\alpha '}}}D_{\alpha '} \bar Z_{t}+\frac1{Z_{,{\alpha '}}^2}\partial_{\alpha '}^2 \bar Z_t,\label{2012-2} \end{align} we have \begin{equation}\label{2012} \|D_{\alpha '}^2 Z_t\|_{L^2}\le 2 \nm{\partial_{\alpha '}\frac1{Z_{,{\alpha '}}}}_{L^2}\|D_{\alpha '} Z_t\|_{L^\infty}+\|D_{\alpha '}^2 \bar Z_t\|_{L^2}\lesssim C(\frak E). \end{equation} Commuting $\partial_t+b\partial_{\alpha '}$ with $D^2_{\alpha '}$ by \eqref{eq:c2-1}, we get \begin{equation}\label{2013} D^2_{\alpha '}\bar Z_{tt}= (\partial_t +b\partial_{\alpha '}) D^2_{\alpha '} \bar Z_t+2(D_{\alpha '} Z_t) D_{\alpha '}^2\bar Z_t +(D_{\alpha '}^2 Z_t) D_{\alpha '}\bar Z_t; \end{equation} by \eqref{list1}, \eqref{2005}, \eqref{2012} and \eqref{2011}, we have \begin{equation}\label{2014} \|D^2_{\alpha '}\bar Z_{tt}\|_{L^2}\le C(\frak E). \end{equation} From \eqref{2014} and \eqref{2009}, we can work through the same argument as from \eqref{2002} to \eqref{2005} and get \begin{equation}\label{2015} \|D_{\alpha '} Z_{tt}\|_{L^\infty}=\|D_{\alpha '} \bar Z_{tt}\|_{L^\infty}\lesssim C(\frak E); \end{equation} and then by a similar calculation as in \eqref{2012-1}-\eqref{2012-2} and \eqref{2014}, \eqref{2015}, \begin{equation}\label{2016} \|D^2_{\alpha '} Z_{tt}\|_{L^2}\le C(\frak E). 
\end{equation} Additionally, by \eqref{eq-zta}, \begin{equation}\label{2017} \|(\partial_t+b\partial_{\alpha '})\bar Z_{t,{\alpha '}}\|_{L^2}\le \|Z_{tt,{\alpha '}}\|_{L^2}+\|b_{\alpha '}\|_{L^\infty} \|Z_{t,{\alpha '}}\|_{L^2}\lesssim C(\frak E). \end{equation} Sum up the estimates from \eqref{list1} through \eqref{2017}, we have that the following quantities are controlled by $\frak E$: \begin{equation}\label{2020} \begin{aligned} &\nm{ D_{\alpha '}\bar Z_t}_{\dot H^{1/2}}, \quad\nm{\frac{1}{ Z_{,\alpha'}} D_{\alpha '}^2\bar Z_t}_{\dot H^{1/2}}, \quad \|\bar Z_{t,{\alpha '}}\|_{L^2},\quad \|D_{\alpha '}^2\bar Z_t\|_{L^2},\quad \nm{\partial_{\alpha '} \frac{1}{ Z_{,\alpha'}}}_{L^2},\quad \abs{\frac{1}{ Z_{,\alpha'}}(0,t)},\\& \|A_1\|_{L^\infty}, \quad \|b_{\alpha '}\|_{L^\infty}, \quad \|D_{\alpha '} Z_t\|_{L^\infty},\quad \|D_{\alpha '} Z_{tt}\|_{L^\infty}, \quad \|(\partial_t+b\partial_{\alpha '})\bar Z_{t,{\alpha '}}\|_{L^2} \\& \|Z_{tt,{\alpha '}}\|_{L^2}, \quad \|D_{\alpha '}^2 \bar Z_{tt}\|_{L^2},\quad \|D_{\alpha '}^2 Z_{tt}\|_{L^2},\quad \|(\partial_t+b\partial_{\alpha '})D_{\alpha '}^2\bar Z_t\|_{L^2},\quad \|D_{\alpha '}^2 Z_t\|_{L^2}. \end{aligned} \end{equation} We will use Lemmas~\ref{basic-e}-\ref{basic-4-lemma} to do estimates. Hence we need to control the quantities that appear on the right hand sides of the inequalities in these Lemmas. \subsubsection{Controlling $\nm{\frac{\frak a_t}{\frak a}\circ h^{-1}}_{L^\infty}$ and $\nm{(\partial_t+b\partial_{\alpha '})A_1}_{L^\infty}$}\label{ata-da1} By \eqref{at}, $$\dfrac{\frak a_t}{\frak a}\circ h^{-1}= \frac{(\partial_t +b\partial_{\alpha '}) A_1}{A_1}+b_{\alpha '} -2\Re D_{\alpha '} Z_t. $$ We have controlled $\|b_{\alpha '}\|_{L^\infty}$ and $\|D_{\alpha '} Z_t\|_{L^\infty}$ in \S\ref{basic-quantities}. We are left with the quantity $\nm{(\partial_t+b\partial_{\alpha '})A_1}_{L^\infty}$. By \eqref{dta1}, $$ (\partial_t +b\partial_{\alpha '}) A_1= -\Im \paren{\bracket{Z_{tt},\mathbb H}\bar Z_{t,\alpha'}+\bracket{Z_t,\mathbb H}\partial_{\alpha '} \bar Z_{tt}-[Z_t, b; \bar Z_{t,{\alpha '}}]}. $$ Applying \eqref{eq:b13} to the first two terms and \eqref{eq:b15} to the last we get \begin{equation}\label{2021} \nm{(\partial_t+b\partial_{\alpha '})A_1}_{L^\infty}\lesssim \|Z_{tt,{\alpha '}}\|_{L^2}\|Z_{t,{\alpha '}}\|_{L^2}+\|b_{\alpha '}\|_{L^\infty}\|Z_{t,{\alpha '}}\|^2_{L^2}\lesssim C(\frak E); \end{equation} consequently \begin{equation}\label{2022} \nm{\frac{\frak a_t}{\frak a}\circ h^{-1}}_{L^\infty}\le \nm{(\partial_t+b\partial_{\alpha '})A_1}_{L^\infty}+ \|b_{\alpha '}\|_{L^\infty}+2\|D_{\alpha '} Z_t\|_{L^\infty}\lesssim C(\frak E). \end{equation} \subsubsection{Controlling $\nm{\mathcal A_{\alpha '}}_{L^\infty}$ and $\nm{\frac1{Z_{,{\alpha '}}}\partial_{\alpha '} \frac1{Z_{,{\alpha '}}}}_{L^\infty}$}\label{aa-zdz} By \eqref{interface-r}, we have \begin{equation}\label{2028} i\mathcal A=\frac{Z_{tt}+i}{Z_{,{\alpha '}}}. \end{equation} Differentiating with respect to ${\alpha '}$ yields \begin{equation}\label{2029} i\mathcal A_{\alpha '}=(Z_{tt}+i)\partial_{\alpha '}\frac{1}{Z_{,{\alpha '}}}+D_{\alpha '} Z_{tt}. 
\end{equation} Apply $I-\mathbb H$ to both sides of the equation and use the fact that $\partial_{\alpha '}\frac{1}{Z_{,{\alpha '}}}=\mathbb H\paren{\partial_{\alpha '}\frac{1}{Z_{,{\alpha '}}}}$ to rewrite the first term on the right hand side as a commutator, we get \begin{equation}\label{2030} i(I-\mathbb H)\mathcal A_{\alpha '}=\bracket{Z_{tt}, \mathbb H}\partial_{\alpha '}\frac{1}{Z_{,{\alpha '}}}+(I-\mathbb H)D_{\alpha '} Z_{tt}. \end{equation} Notice that $\mathcal A_{\alpha '}$ is purely real, so $\Im \paren{i(I-\mathbb H)\mathcal A_{\alpha '}}=\mathcal A_{\alpha '}$, and $|\mathcal A_{\alpha '}|\le |\paren{i(I-\mathbb H)\mathcal A_{\alpha '}}|$. Therefore, \begin{equation}\label{2031} |\mathcal A_{\alpha '}|\le \abs{\bracket{Z_{tt}, \mathbb H}\partial_{\alpha '}\frac{1}{Z_{,{\alpha '}}}}+ 2|D_{\alpha '} Z_{tt}|+|(I+\mathbb H)D_{\alpha '} Z_{tt}|. \end{equation} We estimate the first term by \eqref{eq:b13}, $$\nm{\bracket{Z_{tt}, \mathbb H}\partial_{\alpha '}\frac{1}{Z_{,{\alpha '}}}}_{L^\infty}\lesssim \|Z_{tt,{\alpha '}}\|_{L^2}\nm{\partial_{\alpha '}\frac{1}{Z_{,{\alpha '}}}}_{L^2},$$ and the second term has been controlled in \S\ref{basic-quantities}. We are left with the third term, $(I+\mathbb H)D_{\alpha '} Z_{tt}$. We rewrite it by commuting out $\frac{1}{Z_{,{\alpha '}}}$: \begin{equation}\label{2032} (I+\mathbb H)D_{\alpha '} Z_{tt}=D_{\alpha '} (I+\mathbb H) Z_{tt} -\bracket{ \frac{1}{Z_{,{\alpha '}}},\mathbb H} Z_{tt,{\alpha '}}, \end{equation} where we can estimate the second term by \eqref{eq:b13}. For the first term, we know $(I+\mathbb H)Z_t=0$ because $(I-\mathbb H)\bar Z_t=0$ and $\mathbb H$ is purely imaginary; and $Z_{tt}=(\partial_t+b\partial_{\alpha '})Z_t$. So \begin{equation}\label{2033} (I+\mathbb H) Z_{tt}=-[\partial_t+b\partial_{\alpha '}, \mathbb H]Z_t=-[b, \mathbb H]Z_{t,{\alpha '}}. \end{equation} We further rewrite it by \eqref{b}: \begin{equation}\label{bb} b=\mathbb P_A\paren{\frac{Z_t}{Z_{,{\alpha '}}}}+\mathbb P_H\paren{\frac{\bar Z_t}{\bar Z_{,{\alpha '}}}}=\frac{\bar Z_t}{\bar Z_{,{\alpha '}}}+\mathbb P_A\paren{\frac{Z_t}{Z_{,{\alpha '}}}-\frac{\bar Z_t}{\bar Z_{,{\alpha '}}}}, \end{equation} Prop~\ref{prop:comm-hilbe}, the fact that $(I+\mathbb H)Z_{t,{\alpha '}}=0$ and $(I+\mathbb H)\bar{D_{\alpha '} \bar Z_{t}}=0$. We have \begin{equation}\label{2034} (I+\mathbb H) Z_{tt}=-\bracket{\frac{\bar Z_t}{\bar Z_{,{\alpha '}}} , \mathbb H}Z_{t,{\alpha '}}=-\bracket{\bar Z_t , \mathbb H}\bar{D_{\alpha '} \bar Z_{t}}. \end{equation} We have reduced the task of estimating $D_{\alpha '} (I+\mathbb H) Z_{tt}$ to estimating $D_{\alpha '} \bracket{\bar Z_t , \mathbb H}\bar{D_{\alpha '} \bar Z_{t}}$. We compute, for general functions $f$ and $g$, \begin{equation}\label{2026} \partial_{\alpha '} [f,\mathbb H]g= f_{\alpha '} \mathbb H g-\frac1{\pi i}\int\frac{(f({\alpha '})-f({\beta '}))}{({\alpha '}-{\beta '})^2}g({\beta '})\,d{\beta '} \end{equation} therefore \begin{equation}\label{2027} \begin{aligned} D_{\alpha '} & [f,\mathbb H]g= \frac1{Z_{,{\alpha '}}} f_{\alpha '} \mathbb H g\\&-\frac1{\pi i}\int\frac{\paren{f({\alpha '})-f({\beta '})}\paren{\frac1{Z_{,{\alpha '}}}-\frac1{Z_{,{\beta '}}}}}{({\alpha '}-{\beta '})^2}g({\beta '})\,d{\beta '}-\frac1{\pi i}\int\frac{(f({\alpha '})-f({\beta '}))}{({\alpha '}-{\beta '})^2} \frac1{Z_{,{\beta '}}}g({\beta '})\,d{\beta '}. 
\end{aligned} \end{equation} Now using \eqref{2034}, \eqref{2027}, and the fact that $(I+\mathbb H)\bar{D_{\alpha '} \bar Z_{t}}=0$, we have \begin{equation}\label{2035} \begin{aligned} D_{\alpha '} & (I+\mathbb H) Z_{tt}=\abs{D_{\alpha '} \bar Z_t}^2 \\&+\frac1{\pi i}\int\frac{\paren{\bar Z_t({\alpha '})-\bar Z_t({\beta '})}\paren{\frac1{Z_{,{\alpha '}}}-\frac1{Z_{,{\beta '}}}}}{({\alpha '}-{\beta '})^2}\bar{D_{\beta '} \bar Z_{t}}\,d{\beta '}+\frac1{\pi i}\int\frac{(\bar Z_t({\alpha '})-\bar Z_t({\beta '}))}{({\alpha '}-{\beta '})^2}\frac1{Z_{,{\beta '}}}\bar{D_{\beta '} \bar Z_{t}}\,d{\beta '}, \end{aligned} \end{equation} where we rewrite the third term further \begin{equation}\label{2036} \begin{aligned} \frac1{\pi i}\int\frac{(\bar Z_t({\alpha '})-\bar Z_t({\beta '}))}{({\alpha '}-{\beta '})^2}\frac1{Z_{,{\beta '}}}\bar{D_{\beta '} \bar Z_{t}}\,d{\beta '}=& \frac1{\pi i}\int\frac{(\bar Z_t({\alpha '})-\bar Z_t({\beta '}))}{({\alpha '}-{\beta '})^2}\paren{\frac1{Z_{,{\beta '}}}\bar{D_{\beta '} \bar Z_{t}}-\frac1{Z_{,{\alpha '}}}\bar{D_{\alpha '} \bar Z_{t}}}\,d{\beta '}\\&+\frac1{Z_{,{\alpha '}}}\bar{D_{\alpha '} \bar Z_{t}}\,\bar Z_{t,{\alpha '}}; \end{aligned} \end{equation} here we simplified the second term on the right hand side by the fact that $\bar Z_{t}=\mathbb H\bar Z_t$. We can now estimate $\nm{D_{\alpha '} (I+\mathbb H) Z_{tt}}_{L^\infty}$. We apply \eqref{eq:b16} to the second term on the right side of \eqref{2035}; for the third term we use \eqref{2036}, and apply \eqref{eq:b16} to the first term on the right hand side of \eqref{2036}, and notice that \begin{equation}\label{2037} \partial_{\alpha '} \paren{\frac1{Z_{,{\alpha '}}}\bar{D_{\alpha '} \bar Z_{t}}}=\partial_{\alpha '} \paren{\frac1{Z_{,{\alpha '}}}}\bar{D_{\alpha '} \bar Z_{t}}+D_{\alpha '} \bar{D_{\alpha '} \bar Z_{t}}; \end{equation} we have \begin{equation}\label{2038} \nm{D_{\alpha '} (I+\mathbb H) Z_{tt}}_{L^\infty}\lesssim \nm{D_{\alpha '} \bar Z_t}^2_{L^\infty}+\nm{\partial_{\alpha '}\frac1{Z_{,{\alpha '}}}}_{L^2}\nm{Z_{t,{\alpha '}}}_{L^2}\nm{D_{\alpha '} \bar Z_t}_{L^\infty}+\nm{Z_{t,{\alpha '}}}_{L^2}\nm{D_{\alpha '}^2 \bar Z_t}_{L^2}. \end{equation} Sum up the calculations from \eqref{2031} through \eqref{2038}, and use the estimates in \S\ref{basic-quantities}, we conclude \begin{equation}\label{2039} \nm{\mathcal A_{\alpha '}}_{L^\infty}\lesssim C(\frak E). \end{equation} Observe that the same argument also gives, by taking the real parts in \eqref{2030}, \begin{equation}\label{2039-1} \nm{\mathbb H\mathcal A_{\alpha '}}_{L^\infty}\lesssim C(\frak E). \end{equation} Now from \eqref{2029} and \eqref{aa1}, \begin{equation} \frac{iA_1}{\bar Z_{,{\alpha '}}}\partial_{\alpha '}\frac{1}{Z_{,{\alpha '}}} =i\mathcal A_{\alpha '}- D_{\alpha '} Z_{tt}; \end{equation} Because $A_1\ge 1$, we have \begin{equation}\label{2040} \nm{\frac{1}{ Z_{,{\alpha '}}}\partial_{\alpha '}\frac{1}{Z_{,{\alpha '}}} }_{L^\infty}\le \nm{\mathcal A_{\alpha '}}_{L^\infty}+\| D_{\alpha '} Z_{tt}\|_{L^\infty}\lesssim C(\frak E). 
\end{equation} \subsubsection{Controlling $\nm{\partial_{\alpha '} (\partial_t+b\partial_{\alpha '})\frac1{Z_{,{\alpha '}}}}_{L^2}$ and $\nm{ (\partial_t+b\partial_{\alpha '})\partial_{\alpha '}\frac1{Z_{,{\alpha '}}}}_{L^2}$ }\label{dadtza} We begin with \eqref{eq-ddza}, and rewrite the second term on the right hand side to get \begin{equation}\label{ddza} \begin{aligned} (\partial_t+b\partial_{\alpha '})\partial_{\alpha'} \frac{1}{ Z_{,\alpha'}} &=-D_{\alpha '} Z_t\paren{\partial_{\alpha'}\frac{1}{ Z_{,\alpha'}}}+D_{\alpha '} \paren{b_{\alpha '}-D_{\alpha '} Z_t}\\& =-D_{\alpha '} Z_t\paren{\partial_{\alpha'}\frac{1}{ Z_{,\alpha'}}}+D_{\alpha '} \paren{b_{\alpha '}-2\Re D_{\alpha '} Z_t}+D_{\alpha '} \bar {D_{\alpha '} Z_t}. \end{aligned} \end{equation} We control the first and third terms by \begin{equation}\label{2024} \nm{ D_{\alpha '} Z_t\paren{\partial_{\alpha'}\frac{1}{ Z_{,\alpha'}}}}_{L^2}\le \nm{ D_{\alpha '} Z_t}_{L^\infty}\nm{\partial_{\alpha'}\frac{1}{ Z_{,\alpha'}}}_{L^2}\lesssim C(\frak E) \end{equation} and \begin{equation}\label{2025} \nm{D_{\alpha '} \bar {D_{\alpha '} Z_t}}_{L^2}= \nm{ {D^2_{\alpha '} Z_t}}_{L^2}\lesssim C(\frak E). \end{equation} We are left with the term $D_{\alpha '} \paren{b_{\alpha '}-2\Re D_{\alpha '} Z_t}$. We begin with \eqref{ba}:\begin{equation}\label{ba-1} b_{\alpha '}-2\Re D_{\alpha '} Z_t=\Re \paren{\bracket{ \frac1{Z_{,{\alpha '}}}, \mathbb H} Z_{t,\alpha'}+ \bracket{Z_t, \mathbb H}\partial_{\alpha '} \frac1{Z_{,{\alpha '}}} }. \end{equation} Notice that the right hand side consists of $\bracket{ \frac1{Z_{,{\alpha '}}}, \mathbb H} Z_{t,\alpha'}$, $\bracket{Z_t, \mathbb H}\partial_{\alpha '} \frac1{Z_{,{\alpha '}}}$ and their complex conjugates. We use \eqref{2027} to compute \begin{equation}\label{2041} \begin{aligned} D_{\alpha '} &\bracket{ \frac1{Z_{,{\alpha '}}}, \mathbb H} Z_{t,\alpha'}= - \partial_{\alpha '} \frac1{Z_{,{\alpha '}}} D_{\alpha '} Z_{t}\\&-\frac1{\pi i}\int\frac{\paren{\frac1{Z_{,{\alpha '}}}-\frac1{Z_{,{\beta '}}}}^2}{({\alpha '}-{\beta '})^2} Z_{t,{\beta '}} \,d{\beta '}-\frac1{\pi i}\int\frac{\paren{\frac1{Z_{,{\alpha '}}}-\frac1{Z_{,{\beta '}}}} }{({\alpha '}-{\beta '})^2} D_{\beta '} Z_t \,d{\beta '}. \end{aligned} \end{equation} Applying \eqref{eq:b12} to the second term and \eqref{3.17} to the third term yields \begin{equation}\label{2042} \nm{D_{\alpha '} \bracket{ \frac1{Z_{,{\alpha '}}}, \mathbb H} Z_{t,\alpha'}}_{L^2}\lesssim \nm{\partial_{\alpha '} \frac1{Z_{,{\alpha '}}} }_{L^2}\nm{ D_{\alpha '} Z_{t}}_{L^\infty}+ \nm{\partial_{\alpha '} \frac1{Z_{,{\alpha '}}} }_{L^2}^2\nm{ Z_{t,{\alpha '}}}_{L^2}. 
\end{equation} Similarly \begin{equation}\label{2043} \begin{aligned} D_{\alpha '} &\bracket{Z_t, \mathbb H}\partial_{\alpha '} \frac1{Z_{,{\alpha '}}} = Z_{t,{\alpha '}} \frac1{Z_{,{\alpha '}}} \partial_{\alpha '} \frac1{Z_{,{\alpha '}}} \\&-\frac1{\pi i}\int\frac{\paren{Z_t({\alpha '})-Z_t({\beta '})}\paren{\frac1{Z_{,{\alpha '}}}-\frac1{Z_{,{\beta '}}}}}{({\alpha '}-{\beta '})^2} \partial_{\beta '} \frac1{Z_{,{\beta '}}} \,d{\beta '}-\frac1{\pi i}\int\frac{ \paren{Z_t({\alpha '})-Z_t({\beta '})} }{({\alpha '}-{\beta '})^2} \frac1{Z_{,{\beta '}}} \partial_{\beta '} \frac1{Z_{,{\beta '}}} \,d{\beta '}, \end{aligned} \end{equation} and applying \eqref{eq:b12} to the second term and \eqref{3.17} to the third term yields \begin{equation}\label{2044} \nm{ D_{\alpha '} \bracket{Z_t, \mathbb H}\partial_{\alpha '} \frac1{Z_{,{\alpha '}}} }_{L^2}\lesssim \nm{\partial_{\alpha '} \frac1{Z_{,{\alpha '}}} }_{L^2}^2\nm{ Z_{t,{\alpha '}}}_{L^2}+ \nm{Z_{t,{\alpha '}}}_{L^2}\nm{ \frac1{Z_{,{\alpha '}}}\partial_{\alpha '} \frac1{Z_{,{\alpha '}}}}_{L^\infty} . \end{equation} The estimate of the complex conjugate terms is similar, we omit. This concludes, with an application of the results in \S\ref{basic-quantities} and \S\ref{aa-zdz}, that \begin{equation}\label{2045} \nm{D_{\alpha '}\paren{b_{\alpha '}-2\Re D_{\alpha '} Z_t}}_{L^2}\lesssim C(\frak E), \end{equation} therefore \begin{equation}\label{2046} \nm{(\partial_t+b\partial_{\alpha '})\partial_{\alpha'} \frac{1}{ Z_{,\alpha'}}}_{L^2}\lesssim C(\frak E). \end{equation} Now by \begin{equation}\label{2047} \partial_{\alpha'} (\partial_t+b\partial_{\alpha '}) \frac{1}{ Z_{,\alpha'}}=(\partial_t+b\partial_{\alpha '})\partial_{\alpha'} \frac{1}{ Z_{,\alpha'}}+b_{\alpha '} \partial_{\alpha '} \frac{1}{ Z_{,\alpha'}}, \end{equation} we also have \begin{equation}\label{2048} \nm{\partial_{\alpha'} (\partial_t+b\partial_{\alpha '})\frac{1}{ Z_{,\alpha'}}}_{L^2}\lesssim C(\frak E). \end{equation} \subsubsection{Controlling $\nm{\partial_{\alpha '}(\partial_t+b\partial_{\alpha '})b}_{L^\infty}$, $\nm{(\partial_t+b\partial_{\alpha '})b_{\alpha '}}_{L^\infty}$ and $\nm{(\partial_t+b\partial_{\alpha '})D_{\alpha '} Z_t}_{L^\infty}$}\label{dtdab} We apply \eqref{eq:c14} to \eqref{ba-1} and get \begin{equation}\label{dba-1} \begin{aligned} (\partial_t+b\partial_{\alpha '})\paren{b_{\alpha '}-2\Re D_{\alpha '} Z_t}&=\Re \paren{\bracket{ (\partial_t+b\partial_{\alpha '})\frac1{Z_{,{\alpha '}}}, \mathbb H} Z_{t,\alpha'}+ \bracket{ \frac1{Z_{,{\alpha '}}}, \mathbb H} Z_{tt,\alpha'}-\bracket{ \frac1{Z_{,{\alpha '}}}, b; Z_{t,\alpha'} } } \\&+\Re\paren{ \bracket{Z_{tt}, \mathbb H}\partial_{\alpha '} \frac1{Z_{,{\alpha '}}} + \bracket{Z_{t}, \mathbb H}\partial_{\alpha '} (\partial_t+b\partial_{\alpha '})\frac1{Z_{,{\alpha '}}} -\bracket{ Z_{t}, b; \partial_{\alpha '} \frac1{Z_{,{\alpha '}}} } }; \end{aligned} \end{equation} using \eqref{eq:b13}, \eqref{eq:b15} and results from previous subsections we obtain \begin{equation}\label{2023} \begin{aligned} \nm{(\partial_t+b\partial_{\alpha '})\paren{b_{\alpha '}-2\Re D_{\alpha '} Z_t}}_{L^\infty}&\lesssim \nm{\partial_{\alpha '} (\partial_t+b\partial_{\alpha '})\frac1{Z_{,{\alpha '}}}}_{L^2}\nm { Z_{t,\alpha'}}_{L^2}\\&+ \nm{\partial_{\alpha '} \frac1{Z_{,{\alpha '}}}}_{L^2}\nm{ Z_{tt,\alpha'}}_{L^2}+ \nm{ \partial_{\alpha '} \frac1{Z_{,{\alpha '}}}}_{L^2}\|b_{\alpha '}\|_{L^\infty}\| Z_{t,\alpha'} \|_{L^2} \\&\lesssim C(\frak E). \end{aligned} \end{equation} We now compute $(\partial_t+b\partial_{\alpha '})D_{\alpha '} Z_t$. 
By \eqref{eq:c1-1}, \begin{equation}\label{2049} (\partial_t+b\partial_{\alpha '}) D_{\alpha '} Z_t=D_{\alpha '} Z_{tt}-\paren{D_{\alpha '} Z_t}^2. \end{equation} So by the estimates in \S\ref{basic-quantities}, we have \begin{equation}\label{2050} \nm{(\partial_t+b\partial_{\alpha '}) D_{\alpha '} Z_t}_{L^\infty}\le \nm{D_{\alpha '} Z_{tt}}_{L^\infty}+\nm{D_{\alpha '} Z_t}^2_{L^\infty}\lesssim C(\frak E). \end{equation} This combined with \eqref{2023} yields \begin{equation}\label{2051} \nm{(\partial_t+b\partial_{\alpha '})b_{\alpha '}}_{L^\infty}\lesssim C(\frak E). \end{equation} From $\partial_{\alpha '}(\partial_t+b\partial_{\alpha '})b=(\partial_t+b\partial_{\alpha '})b_{\alpha '}+(b_{\alpha '})^2$, \begin{equation}\label{2052} \nm{\partial_{\alpha '}(\partial_t+b\partial_{\alpha '})b}_{L^\infty}\le \nm{(\partial_t+b\partial_{\alpha '})b_{\alpha '}}_{L^\infty}+ \nm{b_{\alpha '}}_{L^\infty}^2\lesssim C(\frak E). \end{equation} We are now ready to estimate $\frac{d}{dt}\frak E$. \subsubsection{Controlling $\frac d{dt} \nm{\bar Z_{t,{\alpha '}}}_{L^2}^2$, $\frac d{dt} \nm{D_{\alpha '}^2 \bar Z_{t}}_{L^2}^2$, $\frac d{dt} \nm{ \partial_{\alpha '} \frac1{Z_{,{\alpha '}}} }_{L^2}^2$ and $\frac d{dt} \abs{ \frac1{Z_{,{\alpha '}}} (0,t) }^2$}\label{ddtlower} We use Lemma~\ref{basic-e2} to control $\frac d{dt} \nm{\bar Z_{t,{\alpha '}}}_{L^2}^2$, $\frac d{dt} \nm{D_{\alpha '}^2 \bar Z_{t}}_{L^2}^2$ and $\frac d{dt} \nm{ \partial_{\alpha '} \frac1{Z_{,{\alpha '}}} }_{L^2}^2$. Notice that when we substitute $$\Theta=Z_{t,{\alpha '}},\qquad \Theta= D_{\alpha '}^2 \bar Z_{t},\qquad \text{and}\quad \Theta= \partial_{\alpha '} \frac1{Z_{,{\alpha '}}} $$ in \eqref{basic-2}, all the terms on the right hand sides are already controlled in subsections \S\ref{basic-quantities} and \S\ref{dadtza}. So we have \begin{equation}\label{2053} \frac d{dt} \nm{\bar Z_{t,{\alpha '}}}_{L^2}^2+\frac d{dt} \nm{D_{\alpha '}^2 \bar Z_{t}}_{L^2}^2+\frac d{dt} \nm{ \partial_{\alpha '} \frac1{Z_{,{\alpha '}}} }_{L^2}^2\lesssim C(\frak E). \end{equation} To estimate $\frac d{dt} \abs{ \frac1{Z_{,{\alpha '}}} (0,t) }^2$, we start with \eqref{eq:dza} and compute \begin{equation}\label{2054} (\partial_t+b\partial_{\alpha'})\abs{\frac1{Z_{,{\alpha '}}}}^2=2\Re \paren{\frac1{\bar Z_{,{\alpha '}}}(\partial_t+b\partial_{\alpha'})\frac1{Z_{,{\alpha '}}}}= \abs{\frac1{Z_{,{\alpha '}}} }^2\paren{2b_{\alpha '}-2\Re D_{\alpha '} Z_t}. \end{equation} Recall we chose the Riemann mapping so that $h(0,t)=0$ for all $t$. So $h_t\circ h^{-1}(0,t)=b(0,t)=0$ and \begin{equation}\label{2055} \frac{d}{dt}\abs{\frac1{Z_{,{\alpha '}}}(0,t)}^2= \abs{\frac1{Z_{,{\alpha '}}}(0,t)}^2\paren{2b_{\alpha '}(0,t)-2\Re D_{\alpha '} Z_t(0,t)}\lesssim C(\frak E). \end{equation} We use Lemma~\ref{basic-e} to estimate the two main terms $\frac{ d}{dt} {\bf E}_a(t)$ and $\frac{ d}{dt}{\bf E}_b(t)$. \subsubsection{Controlling $\frac{ d}{dt} {\bf E}_a(t)$}\label{ddtea} We begin with $\frac{ d}{dt} {\bf E}_a(t)$. Applying Lemma~\ref{basic-e} to $\Theta=D_{\alpha '} \bar Z_t$, we get \begin{equation}\label{2056} \frac d{dt} {\bf E}_a(t)\le \nm{\frac{\frak a_t}{\frak a}\circ h^{-1}}_{L^\infty} {\bf E}_a(t)+2 {\bf E}_a(t)^{1/2}\paren{\int\frac{|\mathcal PD_{\alpha '} \bar Z_t |^2}{\mathcal A}\,d{\alpha '}}^{1/2}. \end{equation} By \eqref{2022}, we know the first term is controlled by $C(\frak E)$. We need to estimate the factor $\paren{\int\frac{|\mathcal PD_{\alpha '} \bar Z_t |^2}{\mathcal A}\,d{\alpha '}}^{1/2}$ in the second term.
By \eqref{eq-dt}: \begin{equation}\label{2057} \mathcal P D_{\alpha '}\bar Z_t=\bracket{\mathcal P, \frac1{Z_{,{\alpha '}}}}\bar Z_{t,{\alpha '}}+\frac1{Z_{,{\alpha '}}}\mathcal P \bar Z_{t,{\alpha '}}, \end{equation} and we have \begin{equation}\label{2058} \int\frac{| \bracket{\mathcal P, \frac1{Z_{,{\alpha '}}}}\bar Z_{t,{\alpha '}} |^2}{\mathcal A}\,d{\alpha '}= \int\frac{|Z_{,{\alpha '}} \bracket{\mathcal P, \frac1{Z_{,{\alpha '}}}}\bar Z_{t,{\alpha '}} |^2}{ A_1}\,d{\alpha '} \le \nm{Z_{,{\alpha '}}\bracket{\mathcal P, \frac1{Z_{,{\alpha '}}}}\bar Z_{t,{\alpha '}}}_{L^2}^2\lesssim C(\frak E), \end{equation} here in the last step we used \eqref{basic-4}; notice that all the terms on the right hand side of \eqref{basic-4} with $f=\bar Z_{t,{\alpha '}}$ are controlled in subsections \S\ref{basic-quantities}--\S\ref{dtdab}. We are left with the term $\int\frac{| \frac1{Z_{,{\alpha '}}}\mathcal P \bar Z_{t,{\alpha '}} |^2}{\mathcal A}\,d{\alpha '}$. Because $A_1\ge 1$, $$\int\frac{| \frac1{Z_{,{\alpha '}}}\mathcal P \bar Z_{t,{\alpha '}} |^2}{\mathcal A}\,d{\alpha '}\le \int |\mathcal P \bar Z_{t,{\alpha '}} |^2\,d{\alpha '}.$$ By the base equation \eqref{base-eq}, \begin{equation}\label{2059} \mathcal P \bar Z_{t,{\alpha '}}=-(\partial_t+b\partial_{\alpha '})(b_{\alpha '} \partial_{{\alpha '}}\bar Z_{t})-b_{\alpha '}\partial_{\alpha '} \bar Z_{tt}-i\mathcal A_{\alpha '} \partial_{\alpha '} \bar Z_t+\partial_{\alpha '}\paren{\frac{\frak a_t}{\frak a}\circ h^{-1} (\bar Z_{tt}-i)}; \end{equation} we expand the last term by the product rule, \begin{equation}\label{2061} \partial_{\alpha '}\paren{\frac{\frak a_t}{\frak a}\circ h^{-1} (\bar Z_{tt}-i)}=\frac{\frak a_t}{\frak a}\circ h^{-1} \bar Z_{tt,{\alpha '}}+\partial_{\alpha '}\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}} (\bar Z_{tt}-i). \end{equation} It is clear that the first three terms in \eqref{2059} are controlled by $\frak E$, by the results of \S\ref{basic-quantities} - \S\ref{dtdab}: \begin{equation}\label{2060} \|-(\partial_t+b\partial_{\alpha '})(b_{\alpha '} \partial_{{\alpha '}}\bar Z_{t})-b_{\alpha '}\partial_{\alpha '} \bar Z_{tt}-i\mathcal A_{\alpha '} \partial_{\alpha '} \bar Z_t\|_{L^2}\lesssim C(\frak E), \end{equation} and the first term in \eqref{2061} satisfies \begin{equation}\label{2062} \nm{\frac{\frak a_t}{\frak a}\circ h^{-1} \bar Z_{tt,{\alpha '}}}_{L^2}\le \nm{\frac{\frak a_t}{\frak a}\circ h^{-1}}_{L^\infty}\| \bar Z_{tt,{\alpha '}}\|_{L^2}\lesssim C(\frak E). \end{equation} We are left with the last term, $\partial_{\alpha '}\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}} (\bar Z_{tt}-i)$, in \eqref{2061}. We write \begin{equation}\label{2063} \mathcal P\bar Z_{t,{\alpha '}}=\partial_{\alpha '}\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}} (\bar Z_{tt}-i)+\mathcal R \end{equation} where $\mathcal R= -(\partial_t+b\partial_{\alpha '})(b_{\alpha '} \partial_{{\alpha '}}\bar Z_{t})-b_{\alpha '}\partial_{\alpha '} \bar Z_{tt}-i\mathcal A_{\alpha '} \partial_{\alpha '} \bar Z_t+\frac{\frak a_t}{\frak a}\circ h^{-1} \bar Z_{tt,{\alpha '}}$. We want to take advantage of the fact that $\partial_{\alpha '}\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}} $ is purely real; notice that we have control of $\|(I-\mathbb H)\mathcal P \bar Z_{t,{\alpha '}}\|_{L^2}$ and $\|\mathcal R\|_{L^2}$, by Lemma~\ref{basic-3-lemma} and \S\ref{basic-quantities} - \S\ref{dtdab}, and by \eqref{2060} and \eqref{2062}.
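(Recall the elementary pointwise inequality that underlies this step and is used again in \S\ref{ddteb}: if $f$ is real valued then, since $\mathbb H f$ is purely imaginary, $|f|^2\le |f|^2+|\mathbb H f|^2=|(I-\mathbb H)f|^2$, that is, $|f|\le |(I-\mathbb H)f|$.)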
Apply $(I-\mathbb H)$ to both sides of equation \eqref{2063}, we get \begin{equation}\label{2065} \begin{aligned} (I-\mathbb H)\mathcal P\bar Z_{t,{\alpha '}}&=(I-\mathbb H)\paren{\partial_{\alpha '}\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}} (\bar Z_{tt}-i)}+(I-\mathbb H)\mathcal R\\& = (\bar Z_{tt}-i)(I-\mathbb H)\partial_{\alpha '}\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}} +\bracket{\bar Z_{tt}, \mathbb H} \partial_{\alpha '}\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}} +(I-\mathbb H)\mathcal R \end{aligned} \end{equation} where we commuted $\bar Z_{tt}-i$ out in the second step. Now because $\partial_{\alpha '}\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}} $ is purely real, \begin{equation}\label{2066} \abs{ (\bar Z_{tt}-i) \partial_{\alpha '}\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}} } \le \abs{ (\bar Z_{tt}-i)(I-\mathbb H)\partial_{\alpha '}\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}} }, \end{equation} so by \eqref{2065}, \begin{equation}\label{2067} \abs{ (\bar Z_{tt}-i) \partial_{\alpha '}\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}} } \le \abs{(I-\mathbb H)\mathcal P\bar Z_{t,{\alpha '}}}+ \abs{\bracket{\bar Z_{tt}, \mathbb H} \partial_{\alpha '}\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}} }+ \abs{(I-\mathbb H)\mathcal R}. \end{equation} We estimate the $L^2$ norm of the first term by Lemma~\ref{basic-3-lemma}, the second term by \eqref{3.21}, and the third term by \eqref{2060} and \eqref{2062}. We obtain \begin{equation}\label{2068} \nm{ (\bar Z_{tt}-i) \partial_{\alpha '}\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}} }_{L^2}\lesssim C(\frak E)+ \nm{Z_{tt,{\alpha '}}}_{L^2}\nm{\frac{\frak a_t}{\frak a}\circ h^{-1}}_{L^\infty}+C(\frak E)\lesssim C(\frak E). \end{equation} This concludes \begin{equation}\label{2069} \frac d{dt}{\bf E}_a(t)\lesssim C(\frak E(t)). \end{equation} We record here the following estimate that will be used later. By \eqref{2068}, $\bar Z_{tt}-i=-\frac{iA_1}{Z_{,{\alpha '}}}$ and $A_1\ge 1$, we have \begin{equation}\label{2068-1} \nm{ D_{\alpha '}\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}} }_{L^2}\lesssim C(\frak E). \end{equation} \subsubsection{Controlling $\frac d{dt}{\bf E}_b(t)$}\label{ddteb} Taking $\Theta= \frac1{Z_{,{\alpha '}}}D_{\alpha '}^2\bar Z_t$ in Lemma~\ref{basic-e}, we have, \begin{equation}\label{2070} \frac d{dt} {\bf E}_b(t)\le \nm{\frac{\frak a_t}{\frak a}\circ h^{-1}}_{L^\infty} {\bf E}_b(t)+2 {\bf E}_b(t)^{1/2}\paren{\int\frac{|\mathcal P\paren{\frac1{Z_{,{\alpha '}}}D^2_{\alpha '} \bar Z_t} |^2}{\mathcal A}\,d{\alpha '}}^{1/2}. \end{equation} By \eqref{2022}, the first term is controlled by $\mathfrak E$. We consider the second term. We know \begin{equation}\label{2071} \mathcal P \paren{\frac1{Z_{,{\alpha '}}}D^2_{\alpha '}\bar Z_t}=\bracket{\mathcal P, \frac1{Z_{,{\alpha '}}}}D_{\alpha '}^2\bar Z_{t}+\frac1{Z_{,{\alpha '}}}\bracket{\mathcal P, D_{\alpha '}^2}\bar Z_{t}+\frac1{Z_{,{\alpha '}}}D_{\alpha '}^2\mathcal P\bar Z_t,\end{equation} and because $A_1\ge 1$, \begin{equation}\label{2072} \int\frac{|\mathcal P\frac1{Z_{,{\alpha '}}}D^2_{\alpha '} \bar Z_t |^2}{\mathcal A}\,d{\alpha '}\lesssim \int \abs{Z_{,{\alpha '}}\bracket{\mathcal P, \frac1{Z_{,{\alpha '}}}}D_{\alpha '}^2\bar Z_{t}}^2\,d{\alpha '}+\int\abs{ \bracket{\mathcal P, D_{\alpha '}^2}\bar Z_{t}}^2\,d{\alpha '}+\int \abs{D_{\alpha '}^2\mathcal P\bar Z_t}^2\,d{\alpha '}. \end{equation} Now by Lemma~\ref{basic-4-lemma} and the results of \S\ref{basic-quantities} - \S\ref{dtdab}, the first term on the right hand side of \eqref{2072} is controlled by $\frak E$. 
For the second term, we compute, using \eqref{eq:c4-1}, \begin{equation} \label{2074} \begin{aligned} \bracket{\mathcal P,D_{\alpha '}^2}\bar Z_t & =-4(D_{\alpha '} Z_{tt}) D_{\alpha '}^2\bar Z_t + 6(D_{\alpha '} Z_t)^2 D_{\alpha '}^2\bar Z_t - (2D_{\alpha '}^2 Z_{tt}) D_{\alpha '}\bar Z_t\\& + 6(D_{\alpha '} Z_t) (D_{\alpha '}^2 Z_t) D_{\alpha '}\bar Z_t - 2(D_{\alpha '}^2 Z_t) D_{\alpha '} \bar Z_{tt} - 4(D_{\alpha '} Z_t) D_{\alpha '}^2 \bar Z_{tt}. \end{aligned} \end{equation} By results in \S\ref{basic-quantities}, we have \begin{equation}\label{2075} \| \bracket{\mathcal P,D_{\alpha '}^2}\bar Z_t\|_{L^2}\lesssim C(\frak E). \end{equation} We are left with the term $\int \abs{D_{\alpha '}^2\mathcal P\bar Z_t}^2\,d{\alpha '}$, where $$\mathcal P\bar Z_t:=\bar Z_{ttt}+i\mathcal A \bar Z_{t,{\alpha '}}=\frac{\frak a_t}{\frak a}\circ h^{-1} (\bar Z_{tt}-i).$$ We expand $D_{\alpha '}^2 \mathcal P\bar Z_t$ by the product rule, \begin{equation}\label{2076} D_{\alpha '}^2 \mathcal P\bar Z_t=D_{\alpha '}^2\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}}(\bar Z_{tt}-i)+2D_{\alpha '}\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}}D_{\alpha '} \bar Z_{tt}+\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}} D_{\alpha '}^2\bar Z_{tt}. \end{equation} We know how to handle the second and third terms, thanks to the work in the previous subsections. We want to use the same idea as in the previous subsection to control the first term; however, $D_{\alpha '}$ is not purely real, so we go through the following slightly more involved process. First, we have \begin{equation}\label{2077} \begin{aligned} &\nm{2D_{\alpha '}\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}}D_{\alpha '} \bar Z_{tt}+\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}} D_{\alpha '}^2\bar Z_{tt}}_{L^2}\\& \qquad\lesssim \|D_{\alpha '} \bar Z_{tt}\|_{L^\infty}\nm{D_{\alpha '}\paren{ \frac{\frak a_t}{\frak a}\circ h^{-1}}}_{L^2}+\nm{ \frac{\frak a_t}{\frak a}\circ h^{-1}}_{L^\infty}\nm{D_{\alpha '}^2 \bar Z_{tt}}_{L^2}\lesssim C(\frak E); \end{aligned} \end{equation} and by Lemma~\ref{basic-3-lemma} and \eqref{2075}, \begin{equation}\label{2084} \nm{(I-\mathbb H)D_{\alpha '}^2 \mathcal P\bar Z_t}_{L^2}\le \nm{(I-\mathbb H)\mathcal PD_{\alpha '}^2 \bar Z_t}_{L^2}+\nm{(I-\mathbb H)\bracket{D_{\alpha '}^2, \mathcal P}\bar Z_t}_{L^2}\lesssim C(\frak E). \end{equation} So \begin{equation}\label{2078} \nm{(I-\mathbb H)\paren{D_{\alpha '}^2\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}}(\bar Z_{tt}-i)}}_{L^2}\lesssim C(\frak E). \end{equation} This gives, from \begin{equation}\label{2079} \begin{aligned} (I-\mathbb H)\paren{D_{\alpha '}^2\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}}(\bar Z_{tt}-i)}&= \frac{(\bar Z_{tt}-i)}{Z_{,{\alpha '}}}(I-\mathbb H)\paren{\partial_{\alpha '} D_{\alpha '} \paren{\frac{\frak a_t}{\frak a}\circ h^{-1}}}\\&+\bracket{\frac{(\bar Z_{tt}-i)}{Z_{,{\alpha '}}}, \mathbb H}\paren{\partial_{\alpha '} D_{\alpha '} \paren{\frac{\frak a_t}{\frak a}\circ h^{-1}}}, \end{aligned} \end{equation} and \eqref{3.20}, that \begin{equation}\label{2080} \begin{aligned} &\nm{\frac{(\bar Z_{tt}-i)}{Z_{,{\alpha '}}}(I-\mathbb H)\paren{\partial_{\alpha '} D_{\alpha '} \paren{\frac{\frak a_t}{\frak a}\circ h^{-1}}}}_{L^2}\lesssim C(\frak E).
\end{equation} Now we move the factor $\frac{(\bar Z_{tt}-i)}{|Z_{,{\alpha '}}|}$ back into $(I-\mathbb H)$ to get \begin{equation}\label{2081} \begin{aligned} \frac{(\bar Z_{tt}-i)}{|Z_{,{\alpha '}}|}(I-\mathbb H)\paren{\partial_{\alpha '} D_{\alpha '} \paren{\frac{\frak a_t}{\frak a}\circ h^{-1}}}&=- \bracket{\frac{(\bar Z_{tt}-i)}{|Z_{,{\alpha '}}|}, \mathbb H}\paren{\partial_{\alpha '} D_{\alpha '} \paren{\frac{\frak a_t}{\frak a}\circ h^{-1}}}\\&+(I-\mathbb H) \paren{\frac{(\bar Z_{tt}-i)}{|Z_{,{\alpha '}}|}\partial_{\alpha '} D_{\alpha '} \paren{\frac{\frak a_t}{\frak a}\circ h^{-1}}}; \end{aligned} \end{equation} and observe that \begin{equation}\label{2082} \begin{aligned} &\frac{(\bar Z_{tt}-i)}{|Z_{,{\alpha '}}|}\partial_{\alpha '} D_{\alpha '} \paren{\frac{\frak a_t}{\frak a}\circ h^{-1}}=\frac{(\bar Z_{tt}-i)}{Z_{,{\alpha '}}}\partial_{\alpha '} \paren{\frac1{|Z_{,{\alpha '}}|}\partial_{\alpha '} \paren{\frac{\frak a_t}{\frak a}\circ h^{-1}}}\\&+\frac{(\bar Z_{tt}-i)}{|Z_{,{\alpha '}}|}\partial_{\alpha '}\paren{\frac1{Z_{,{\alpha '}}} }\partial_{\alpha '} \paren{\frac{\frak a_t}{\frak a}\circ h^{-1}}-\frac{(\bar Z_{tt}-i)}{Z_{,{\alpha '}}}\partial_{\alpha '}\paren{\frac1{|Z_{,{\alpha '}}|} }\partial_{\alpha '} \paren{\frac{\frak a_t}{\frak a}\circ h^{-1}}. \end{aligned} \end{equation} We know the $L^2$ norms of the last two terms on the right hand side of \eqref{2082} are controlled by $C(\frak E)$; and by \eqref{3.20}, the $L^2$ norm of the commutator in \eqref{2081} is also controlled by $C(\frak E)$, therefore by \eqref{2081}, \eqref{2080}, \eqref{2082}, \begin{equation}\label{2083} \nm{(I-\mathbb H) \paren{\frac{(\bar Z_{tt}-i)}{Z_{,{\alpha '}}}\partial_{\alpha '} \paren{\frac1{|Z_{,{\alpha '}}|}\partial_{\alpha '} \paren{\frac{\frak a_t}{\frak a}\circ h^{-1}}} } }_{L^2}\lesssim C(\frak E). \end{equation} Now we commute out the factor $\frac{(\bar Z_{tt}-i)}{Z_{,{\alpha '}}}$ from $(I-\mathbb H)$ to get \begin{equation}\label{2085} \begin{aligned} (I-\mathbb H) \paren{\frac{(\bar Z_{tt}-i)}{Z_{,{\alpha '}}}\partial_{\alpha '} \paren{\frac1{|Z_{,{\alpha '}}|}\partial_{\alpha '} \paren{\frac{\frak a_t}{\frak a}\circ h^{-1}}} }&= \frac{(\bar Z_{tt}-i)}{Z_{,{\alpha '}}} (I-\mathbb H) \partial_{\alpha '} \paren{\frac1{|Z_{,{\alpha '}}|}\partial_{\alpha '} \paren{\frac{\frak a_t}{\frak a}\circ h^{-1}}} \\&+\bracket{\frac{(\bar Z_{tt}-i)}{Z_{,{\alpha '}}}, \mathbb H}\partial_{\alpha '} \paren{\frac1{|Z_{,{\alpha '}}|} \partial_{\alpha '} \paren{\frac{\frak a_t}{\frak a}\circ h^{-1}}} \end{aligned} \end{equation} Observe that the quantity the operator $(I-\mathbb H)$ acts on in the first term on the right hand side of \eqref{2085} is purely real. Applying \eqref{3.20} again to the commutator in \eqref{2085} and using \eqref{2083} and the fact that $|f|\le |(I-\mathbb H)f|$ for $f$ real, we obtain \begin{equation}\label{2086} \begin{aligned} &\nm{ \frac{(\bar Z_{tt}-i)}{Z_{,{\alpha '}}} \partial_{\alpha '} \paren{\frac1{|Z_{,{\alpha '}}|}\partial_{\alpha '} \paren{\frac{\frak a_t}{\frak a}\circ h^{-1}}} }_{L^2}\\&\qquad\le \nm{\frac{(\bar Z_{tt}-i)}{Z_{,{\alpha '}}} (I-\mathbb H) \partial_{\alpha '} \paren{\frac1{|Z_{,{\alpha '}}|}\partial_{\alpha '} \paren{\frac{\frak a_t}{\frak a}\circ h^{-1}}} }_{L^2}\lesssim C(\frak E). 
\end{aligned} \end{equation} Applying \eqref{2086} to \eqref{2082} yields, $$\nm{D_{\alpha '}^2\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}}(\bar Z_{tt}-i)}_{L^2}\lesssim C(\frak E);$$ and by \eqref{2077}, \eqref{2076}, \begin{equation}\label{2088} \nm{D_{\alpha '}^2\mathcal P\bar Z_t}_{L^2}\lesssim C(\frak E). \end{equation} This finishes the proof of \begin{equation}\label{2089} \frac d{dt}{\bf E}_b(t)\lesssim C(\frak E(t)). \end{equation} Summing up the results in subsections \S\ref{ddtlower} - \S\ref{ddteb}, we obtain \begin{equation}\label{2090} \frac d{dt}\frak E(t)\lesssim C(\frak E(t)). \end{equation} \subsection{The proof of Theorem~\ref{blow-up}}\label{proof1} Assuming that the initial data satisfies the assumption of Theorem~\ref{blow-up}, we know by \eqref{interface-a1}, Proposition~\ref{B2} and Sobolev embedding, that $\frac1{Z_{,{\alpha '}}}(0)-1, Z_{,{\alpha '}}(0)-1\in H^s(\mathbb R)$, with \begin{equation}\label{2000-1} \begin{aligned} \nm{\frac1{Z_{,{\alpha '}}}(0)}_{L^\infty}&\le \|Z_{tt}(0)\|_{L^\infty}+1\lesssim \|Z_{tt}(0)\|_{H^1}+1<\infty;\\ \nm{\frac1{Z_{,{\alpha '}}}(0)-1}_{H^s}&\lesssim C\paren{ \|Z_t(0)\|_{H^s}, \|Z_{tt}(0)\|_{H^s}};\\ \nm{Z_{,{\alpha '}}(0)-1}_{H^s}&\lesssim C\paren{ \|Z_t(0)\|_{H^s}, \|Z_{tt}(0)\|_{H^s}}. \end{aligned} \end{equation} From Theorem~\ref{prop:local-s} and Proposition~\ref{prop:energy-eq}, we know that to prove the blow-up criterion, Theorem~\ref{blow-up}, it suffices to show that for any solution of \eqref{interface-r}-\eqref{interface-holo}-\eqref{b}-\eqref{a1}, satisfying the regularity properties in Theorem~\ref{blow-up}, and for any $T_0>0$, $$\sup_{[0, T_0)} \frak E(t)<\infty \quad \text{implies} \quad \sup_{[0,T_0)}( \|Z_{,{\alpha '}}(t)\|_{L^\infty}+\|Z_t(t)\|_{H^{3+1/2}}+\|Z_{tt}(t)\|_{H^3})<\infty.$$ We begin with the lower order norms. We first show that, as a consequence of equation \eqref{eq:dza}, if $\|Z_{,\alpha'}(0)\|_{L^\infty} <\infty$, then \begin{equation}\label{2000-2}\sup_{[0, T_0)}\|Z_{,{\alpha '}}(t)\|_{L^\infty}<\infty\qquad \text{ as long as }\quad \sup_{[0, T_0)}\frak E(t)<\infty.\end{equation} Solving equation \eqref{eq:dza} we get, because $\partial_t+b\partial_{\alpha '}=U_h^{-1}\partial_t U_h$, \begin{equation}\label{2100} \frac1{Z_{,{\alpha '}}}(h(\alpha,t),t)=\frac1{Z_{,{\alpha '}}}(\alpha,0)e^{\int_0^t (b_{\alpha '}\circ h(\alpha,\tau)-D_\alpha z_t(\alpha,\tau))\,d\tau}; \end{equation} so by \eqref{2020} of \S\ref{basic-quantities}, \begin{equation}\label{2101} \sup_{[0, T]}\nm{Z_{,{\alpha '}}(t)}_{L^\infty} \le \nm{Z_{,{\alpha '}}(0)}_{L^\infty}e^{\int_0^T \|b_{\alpha '}(\tau)-D_{\alpha '} Z_t(\tau)\|_{L^\infty}\,d\tau}\lesssim \nm{Z_{,{\alpha '}}(0)}_{L^\infty}e^{T\sup_{[0, T]}C(\frak E(t))}, \end{equation} hence \eqref{2000-2} holds. Notice that from \eqref{2100}, we also have \begin{equation}\label{2101-1} \sup_{[0, T]}\nm{\frac1 {Z_{,{\alpha '}}}(t)}_{L^\infty} \lesssim \nm{\frac1{Z_{,{\alpha '}}}(0)}_{L^\infty}e^{T\sup_{[0, T]}C(\frak E(t))}. 
\end{equation} Now by Lemma~\ref{basic-e2}, \begin{align}\label{2102} \frac d{dt} \|Z_t(t)\|^2_{L^2}&\lesssim \|Z_{tt}(t)\|_{L^2}\|Z_t(t)\|_{L^2} +\|b_{\alpha '}(t)\|_{L^\infty}\|Z_t(t)\|^2_{L^2},\\ \label{2103}\frac d{dt} \|Z_{tt}(t)\|^2_{L^2}&\lesssim \|Z_{ttt}(t)\|_{L^2}\|Z_{tt}(t)\|_{L^2} +\|b_{\alpha '}(t)\|_{L^\infty}\|Z_{tt}(t)\|^2_{L^2}; \end{align} and from equations \eqref{eq:dztt} and \eqref{aa1}, \begin{equation}\label{2104} \bar Z_{ttt}= (\bar Z_{tt}-i)\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}+ \bar {D_{\alpha '} Z_t}}=-\frac{iA_1}{Z_{,{\alpha '}}}\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}+ \bar {D_{\alpha '} Z_t}}, \end{equation} so \begin{equation}\label{2105} \|\bar Z_{ttt}(t)\|_{L^2}\lesssim \|A_1(t)\|_{L^\infty}\nm{\frac{1}{Z_{,{\alpha '}}}(t)}_{L^\infty} \paren{ \nm{\frac{\frak a_t}{\frak a}\circ h^{-1}(t)}_{L^2}+ \nm{ D_{\alpha '} Z_t(t)}_{L^2}}. \end{equation} We want to show that $\nm{\frac{\frak a_t}{\frak a}\circ h^{-1}(t)}_{L^2}$ and $\nm{ D_{\alpha '} Z_t(t)}_{L^2}$ can be controlled by $\frak E$ and the initial data; by \eqref{at}, it suffices to control $\|b_{\alpha '}(t)\|_{L^2}$, $\nm{ D_{\alpha '} Z_t(t)}_{L^2}$ and $\|(\partial_t+b\partial_{\alpha '})A_1(t)\|_{L^2}$. Applying H\"older's inequality and \eqref{3.21} to \eqref{ba} yields \begin{equation}\label{2106} \|b_{\alpha '}(t)\|_{L^2}+\|D_{\alpha '} Z_t(t)\|_{L^2}\lesssim \nm{\frac1{Z_{,{\alpha '}}}(t)}_{L^\infty}\|Z_{t,{\alpha '}}(t)\|_{L^2}, \end{equation} and applying H\"older's inequality, \eqref{3.21}, \eqref{eq:b12} to \eqref{dta1} gives \begin{equation}\label{2106-1} \|(\partial_t+b\partial_{\alpha '})A_1(t)\|_{L^2}\lesssim \nm{Z_{tt}(t)}_{L^\infty}\|Z_{t,{\alpha '}}(t)\|_{L^2}+\|b_{\alpha '}(t)\|_{L^2}\|Z_{t,{\alpha '}}(t)\|_{L^2}^2; \end{equation} so by \eqref{at}, using the fact \eqref{aa1}, we have \begin{equation}\label{2107} \nm{\frac{\frak a_t}{\frak a}\circ h^{-1}}_{L^2}+\nm{ D_{\alpha '} Z_t}_{L^2} \lesssim \paren{\nm{A_1}_{L^\infty}\nm{\frac1{Z_{,{\alpha '}}}}_{L^\infty}+1}\|Z_{t,{\alpha '}}\|_{L^2}+\|b_{\alpha '}\|_{L^2}\|Z_{t,{\alpha '}}\|_{L^2}^2. \end{equation} This gives, by further applying the estimates \eqref{2020} in \S\ref{basic-quantities} and \eqref{2101-1}, that for $t\in [0, T]$, \begin{equation}\label{2108} \|Z_{ttt}(t)\|_{L^2}\lesssim C(T, \sup_{[0, T]}\frak E(t)) \paren{1+\nm{\frac1{Z_{,{\alpha '}}}(t)}_{L^\infty}^2}\lesssim C(T, \sup_{[0, T]}\frak E(t)) \paren{1+\nm{\frac1{Z_{,{\alpha '}}}(0)}_{L^\infty}^2}. \end{equation} We now apply Gronwall's inequality to \eqref{2103}. This yields \begin{equation}\label{2109} \sup_{[0, T]} \|Z_{tt}(t)\|_{L^2}\lesssim C\paren{T, \sup_{[0, T]}\frak E(t), \|Z_{tt}(0)\|_{L^2}, \nm{\frac1{Z_{,{\alpha '}}}(0)}_{L^\infty}}. \end{equation} We then apply Gronwall's inequality to \eqref{2102}, using \eqref{2109}. We obtain \begin{equation}\label{2110} \sup_{[0, T]} \|Z_{t}(t)\|_{L^2}\lesssim C\paren{T, \sup_{[0, T]}\frak E(t), \|Z_{t}(0)\|_{L^2}, \|Z_{tt}(0)\|_{L^2}, \nm{\frac1{Z_{,{\alpha '}}}(0)}_{L^\infty}}. 
\end{equation} Therefore the lower order norm $\sup_{[0, T]}(\|Z_t(t)\|_{L^2}+\|Z_{tt}(t)\|_{L^2})$ is controlled by $\sup_{[0, T]}\frak E(t)$, the $L^2$ norm of $(Z_t(0), Z_{tt}(0))$ and the $L^\infty$ norm of $\frac1{Z_{,{\alpha '}}(0)}$.\footnote{$\nm{\frac1{Z_{,{\alpha '}}(0)}}_{L^\infty}$ is controlled by the $H^1$ norm of $Z_{tt}(0)$, see \eqref{2000-1}.} We are left with proving \begin{equation}\label{2111} \sup_{[0, T_0)}\frak E(t)<\infty \qquad\text{implies }\quad \sup_{[0, T_0)}(\|\partial_{\alpha '}^3Z_{t}(t)\|_{\dot H^{1/2}}+\|\partial_{\alpha '}^3 Z_{tt}(t)\|_{L^2})<\infty. \end{equation} We do so via two stronger results, Propositions~\ref{step1} and \ref{step2}. Let \begin{equation}\label{2114} \begin{aligned} E_{k}(t):&=E_{D_{\alpha '} \partial_{\alpha '}^{k-1}\bar Z_{t}}(t)+\|\partial_{\alpha '}^k \bar Z_t(t)\|_{L^2}^2 \\&:=\int \frac1{A_1}\abs{Z_{,\alpha'}(\partial_t+b\partial_{\alpha '}) \paren{\frac1{Z_{,\alpha'}}\partial_{\alpha'}^k\bar Z_t}}^2\,d\alpha'+\nm{\frac1{Z_{,\alpha'}}\partial_{\alpha'}^k\bar Z_t(t)}_{\dot H^{1/2}}^2+\|\partial_{\alpha '}^k \bar Z_t(t)\|_{L^2}^2, \end{aligned} \end{equation} where $k=2, 3$. We have \begin{proposition}\label{step1} There exists a polynomial $p_1=p_1(x)$ with universal coefficients such that \begin{equation}\label{2115} \frac d{dt} E_2(t)\le p_1\paren{\frak E(t)} E_2(t). \end{equation} \end{proposition} \begin{proposition}\label{step2} There exists a polynomial $p_2=p_2(x,y, z)$ with universal coefficients such that \begin{equation}\label{2116} \frac d{dt} E_3(t)\le p_2\paren{\frak E(t), E_2(t), \nm{\frac1{Z_{,{\alpha '}}}(t)}_{L^\infty}} (E_3(t)+1). \end{equation} \end{proposition} By Gronwall's inequality, we have from \eqref{2115} and \eqref{2116} that \begin{equation}\label{step1-2} \begin{aligned} E_2(t)&\le E_2(0)e^{\int_0^t p_1(\frak E(s))\,ds};\qquad\text{and }\\ E_3(t)&\le \paren{E_3(0)+\int_0^t p_2\paren{\frak E(s), E_2(s), \nm{\frac1{Z_{,{\alpha '}}}(s)}_{L^\infty}}\,ds}e^{\int_0^t p_2\paren{\frak E(s), E_2(s), \nm{\frac1{Z_{,{\alpha '}}}(s)}_{L^\infty}}\,ds}, \end{aligned} \end{equation} so $\sup_{[0, T]} E_2(t)$ is controlled by $E_2(0)$ and $\sup_{[0, T]}\frak E(t)$; and $\sup_{[0, T]} E_3(t)$ is controlled by $E_3(0)$, $\sup_{[0, T]}\frak E(t)$, $\sup_{[0, T]} E_2(t)$ and $\sup_{[0, T]} \nm{\frac1{Z_{,{\alpha '}}}(t)}_{L^\infty}$. And by \eqref{2101-1}, $\sup_{[0, T]} E_3(t)$ is in turn controlled by $E_3(0)$, $\sup_{[0, T]}\frak E(t)$, $E_2(0)$ and $ \nm{\frac1{Z_{,{\alpha '}}}(0)}_{L^\infty}$. We will prove Propositions~\ref{step1} and \ref{step2} in the next two subsections. In \S\ref{complete1} we will examine the relation between the energy functionals $E_2$, $E_3$ and the Sobolev norms $\| Z_t(t)\|_{H^{3+1/2}}$, $\| Z_{tt}(t)\|_{H^3}$ and complete the proof of Theorem~\ref{blow-up}. \subsection{ The proof of Proposition~\ref{step1}}\label{proof-prop1} We begin with a list of quantities controlled by $E_2(t)$. \subsubsection{Quantities controlled by $E_2(t)$.}\label{quantities-e2} It is clear by the definition that the following are controlled by $E_2(t)$. \begin{equation}\label{2117} \|\partial_{\alpha'}^2\bar Z_t\|_{L^2}^2\le E_2,\quad \nm{\frac1{Z_{,\alpha'}}\partial_{\alpha'}^2\bar Z_t}_{\dot H^{1/2}}^2 \le E_2,\quad \nm{Z_{,\alpha'}(\partial_t+b\partial_{\alpha '})\paren{ \frac1{Z_{,\alpha'}}\partial_{\alpha'}^2\bar Z_t}}_{L^2}^2\le C(\frak E) E_2, \end{equation} because $1\le A_1\le C(\frak E)$ by \eqref{2000}. 
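For instance, the last bound in \eqref{2117} is simply the definition \eqref{2114} (with $k=2$) combined with the upper bound on $A_1$ from \eqref{2000}: restoring the weight $\frac1{A_1}$ in the integrand gives $$\nm{Z_{,\alpha'}(\partial_t+b\partial_{\alpha '})\paren{ \frac1{Z_{,\alpha'}}\partial_{\alpha'}^2\bar Z_t}}_{L^2}^2=\int A_1\,\frac1{A_1}\abs{Z_{,\alpha'}(\partial_t+b\partial_{\alpha '}) \paren{\frac1{Z_{,\alpha'}}\partial_{\alpha'}^2\bar Z_t}}^2\,d{\alpha '}\le \|A_1\|_{L^\infty}E_2\le C(\frak E) E_2.$$ 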
We compute, by product rules and \eqref{eq:dza}, that \begin{equation}\label{2117-1} Z_{,\alpha'}(\partial_t+b\partial_{\alpha '}) \paren{\frac1{Z_{,\alpha'}}\partial_{\alpha'}^2\bar Z_t}= (\partial_t+b\partial_{\alpha '})\partial_{\alpha'}^2\bar Z_t+ (b_{\alpha '}-D_{\alpha '} Z_t)\partial_{\alpha'}^2\bar Z_t, \end{equation} therefore, by estimates \eqref{2020} in \S\ref{basic-quantities}, \begin{equation}\label{2118} \abs{\nm{(\partial_t+b\partial_{\alpha '})\partial_{\alpha'}^2\bar Z_t}_{L^2}-\nm{Z_{,\alpha'}(\partial_t+b\partial_{\alpha '}) \frac1{Z_{,\alpha'}}\partial_{\alpha'}^2\bar Z_t}_{L^2}} \le C(\frak E)\|\partial_{\alpha'}^2\bar Z_t\|_{L^2}, \end{equation} so \begin{equation}\label{2119} \nm{(\partial_t+b\partial_{\alpha '})\partial_{\alpha'}^2\bar Z_t}_{L^2}^2\le C(\frak E)E_2. \end{equation} Now by \eqref{eq:c7}, \begin{equation} \partial_{\alpha'}(\partial_t+b\partial_{\alpha '})\bar Z_{t,\alpha'} = (\partial_t+b\partial_{\alpha '})\partial_{\alpha'}^2\bar Z_{t}+b_{\alpha'}\partial_{\alpha'}^2\bar Z_{t}, \end{equation} so by \eqref{2020}, \begin{equation}\label{2120} \|\partial_{\alpha'}(\partial_t+b\partial_{\alpha '})\bar Z_{t,\alpha'}\|_{L^2}^2\le C(\frak E)E_2. \end{equation} Using Sobolev inequality \eqref{eq:sobolev} and \eqref{2020}, we obtain \begin{align}\label{2121} \|Z_{t,\alpha'}\|_{L^\infty}^2&\le 2\|Z_{t,\alpha'}\|_{L^2}\|\partial_{\alpha'}^2Z_{t}\|_{L^2}\le C(\frak E)E_2^{1/2};\qquad\qquad\text{and}\\ \label{2122} \|(\partial_t+b\partial_{\alpha '})Z_{t,\alpha'}\|_{L^\infty}^2&\le 2\|(\partial_t+b\partial_{\alpha '})Z_{t,\alpha'}\|_{L^2}\|\partial_{\alpha'}(\partial_t+b\partial_{\alpha '})Z_{t,{\alpha '}}\|_{L^2}\le C(\frak E)E_2^{1/2}. \end{align} We need the estimates for some additional quantities, which we give in the following subsections. \subsubsection{Controlling the quantity $\nm{\partial_{\alpha'}(b_{\alpha '}-2\Re D_{\alpha '} Z_t)}_{L^2}$.} \label{ddb} We begin with equation \eqref{ba}, and differentiate with respect to ${\alpha '}$. We get \begin{equation}\label{2122-1} \begin{aligned} \partial_{\alpha '}(b_{\alpha '}-2\Re D_{\alpha '} Z_t)&=\Re \paren{\bracket{ \partial_{\alpha '} \frac1{Z_{,{\alpha '}}}, \mathbb H} Z_{t,\alpha'}+ \bracket{Z_{t,{\alpha '}}, \mathbb H}\partial_{\alpha '} \frac1{Z_{,{\alpha '}}} }\\&+\Re \paren{\bracket{ \frac1{Z_{,{\alpha '}}}, \mathbb H} \partial_{\alpha '}^2 Z_{t}+ \bracket{Z_t, \mathbb H}\partial_{\alpha '}^2 \frac1{Z_{,{\alpha '}}} }; \end{aligned} \end{equation} using $\mathbb H Z_{t,{\alpha '}}=-Z_{t,{\alpha '}}$ to rewrite the first term, \begin{equation}\label{2129-1} \bracket{\partial_{\alpha '} \frac1{Z_{,{\alpha '}}}, \mathbb H} Z_{t,\alpha'}=-(I+\mathbb H)\paren{\partial_{\alpha '} \frac1{Z_{,{\alpha '}}} Z_{t,\alpha'}} \end{equation} and then applying \eqref{3.21} and \eqref{3.20} to the last two terms. We get, by \eqref{2121} and \eqref{2020}, \begin{equation}\label{2123} \nm{\partial_{\alpha '}(b_{\alpha '}-2\Re D_{\alpha '} Z_t)}_{L^2}\lesssim \|Z_{t,\alpha'}\|_{L^\infty}\nm{\partial_{\alpha'}\frac1{Z_{,\alpha'}}}_{L^2}\le C(\frak E) E_2^{1/4}. 
\end{equation} \subsubsection{Controlling $\|\partial_{\alpha'}^2\bar Z_{tt}\|_{L^2}$}\label{ddzt} We start with $(\partial_t+b\partial_{\alpha '})\partial_{\alpha'}^2\bar Z_t$, and commute $\partial_t+b\partial_{\alpha '}$ with $\partial_{\alpha'}^2$; by \eqref{eq:c11}, we have \begin{equation}\label{2124} \begin{aligned} \partial_{\alpha'}^2\bar Z_{tt}-(\partial_t+b\partial_{\alpha '})\partial_{\alpha'}^2\bar Z_t &= [\partial_{\alpha'}^2, (\partial_t+b\partial_{\alpha '})]\bar Z_t \\&=2b_{\alpha'} \partial_{\alpha'}^2\bar Z_t +\paren{\partial_{\alpha'}b_{\alpha'}} \bar Z_{t,\alpha'}, \end{aligned} \end{equation} We further expand the second term \begin{equation}\label{2125} \paren{\partial_{\alpha'}b_{\alpha'}} \bar Z_{t,\alpha'}=\paren{\partial_{\alpha'}(b_{\alpha'}-2\Re D_{\alpha '} Z_t)} \bar Z_{t,\alpha'}+2\Re\paren{\partial_{\alpha '}\frac1{Z_{,{\alpha '}}}Z_{t,{\alpha '}}}\bar Z_{t,{\alpha '}}+2\Re \paren{\frac1{Z_{,{\alpha '}}}\partial_{\alpha '}^2 Z_{t}}\bar Z_{t,{\alpha '}}; \end{equation} we get, by \eqref{2124} and \eqref{2125} that \begin{equation}\label{2126} \begin{aligned} \|\partial_{\alpha'}^2\bar Z_{tt}\|_{L^2}&\lesssim \|(\partial_t+b\partial_{\alpha '})\partial_{\alpha'}^2\bar Z_t\|_{L^2}+ \|b_{\alpha '}\|_{L^\infty}\|\partial_{\alpha'}^2\bar Z_t\|_{L^2}+\nm{\partial_{\alpha '}(b_{\alpha '}-2\Re D_{\alpha '} Z_t)}_{L^2}\|Z_{t,\alpha'}\|_{L^\infty}\\&+\nm{ \partial_{\alpha '}\frac1{Z_{,{\alpha '}}}}_{L^2}\|Z_{t,\alpha'}\|_{L^\infty}^2+\|D_{\alpha '} Z_t\|_{L^\infty}\|\partial_{\alpha'}^2\bar Z_t\|_{L^2} \end{aligned} \end{equation} Therefore by the estimates in \S\ref{quantities-e2}, \S\ref{basic-quantities} and \eqref{2123}, \begin{equation}\label{2127} \|\partial_{\alpha'}^2\bar Z_{tt}\|^2_{L^2}\lesssim C(\frak E)E_2. \end{equation} As a consequence of the Sobolev inequality \eqref{eq:sobolev}, and estimates \eqref{2020} in \S\ref{basic-quantities}, \begin{equation}\label{2128} \|\partial_{\alpha'}\bar Z_{tt}\|^2_{L^\infty}\le 2\|\partial_{\alpha'}\bar Z_{tt}\|_{L^2}\|\partial_{\alpha'}^2\bar Z_{tt}\|_{L^2} \lesssim C(\frak E)E_2^{1/2}. \end{equation} We also have, by the $L^2$ boundedness of $\mathbb H$, \begin{equation}\label{2128-1} \|\mathbb H\bar Z_{tt,{\alpha '}}\|^2_{L^\infty}\le 2\|\partial_{\alpha'}\mathbb H\bar Z_{tt}\|_{L^2}\|\partial_{\alpha'}^2\mathbb H\bar Z_{tt}\|_{L^2}\lesssim \|\partial_{\alpha'}\bar Z_{tt}\|_{L^2}\|\partial_{\alpha'}^2\bar Z_{tt}\|_{L^2} \lesssim C(\frak E)E_2^{1/2}. 
\end{equation} \subsubsection{Controlling the quantity $\nm{(\partial_t+b\partial_{\alpha '})\partial_{\alpha'}(b_{\alpha '}-2\Re D_{\alpha '} Z_t)}_{L^2}$.}\label{dtdabl2} We begin with \eqref{2122-1}, replacing the first term by \eqref{2129-1}, then use \eqref{eq:c21} on the first term and use \eqref{eq:c14} to compute the remaining three terms, \begin{equation}\label{2129} \begin{aligned} (\partial_t&+b\partial_{\alpha '})\partial_{\alpha '}(b_{\alpha '}-2\Re D_{\alpha '} Z_t)\\&=-\Re \paren{[b,\mathbb H]\partial_{\alpha '} \paren{ \partial_{\alpha '} \frac1{Z_{,{\alpha '}}} Z_{t,\alpha'}} +(I+\mathbb H) (\partial_t+b\partial_{\alpha '})\paren{ \partial_{\alpha '} \frac1{Z_{,{\alpha '}}} Z_{t,\alpha'}}}\\& +\Re\paren{ \bracket{(\partial_t+b\partial_{\alpha '}) Z_{t,{\alpha '}}, \mathbb H}\partial_{\alpha '} \frac1{Z_{,{\alpha '}}} +\bracket{ Z_{t,{\alpha '}}, \mathbb H}\partial_{\alpha '} (\partial_t+b\partial_{\alpha '})\frac1{Z_{,{\alpha '}}}-\bracket{ Z_{t,{\alpha '}}, b; \partial_{\alpha '} \frac1{Z_{,{\alpha '}}} }}\\&+\Re \paren{\bracket{ \frac1{Z_{,{\alpha '}}}, \mathbb H} \partial_{\alpha '} (\partial_t+b\partial_{\alpha '}) Z_{t,{\alpha '}} +\bracket{ (\partial_t+b\partial_{\alpha '}) \frac1{Z_{,{\alpha '}}}, \mathbb H} \partial_{\alpha '}^2 Z_{t}-\bracket{ \frac1{Z_{,{\alpha '}}}, b; \partial_{\alpha '}^2 Z_{t} } } \\&+\Re \paren{\bracket{Z_{tt}, \mathbb H}\partial_{\alpha '}^2 \frac1{Z_{,{\alpha '}}} + \bracket{Z_t, \mathbb H}\partial_{\alpha '}(\partial_t+b\partial_{\alpha '})\partial_{\alpha '} \frac1{Z_{,{\alpha '}}} - \bracket{Z_t, b; \partial_{\alpha '}^2 \frac1{Z_{,{\alpha '}}} } }. \end{aligned} \end{equation} We have, by \eqref{3.20}, \eqref{3.16}, \eqref{3.21}, \eqref{3.17}, and estimates in \S\ref{basic-quantities}, \S\ref{dadtza}, \S\ref{quantities-e2}-\S\ref{ddzt}, \begin{equation}\label{2130} \begin{aligned} &\nm{(\partial_t+b\partial_{\alpha '})\partial_{\alpha'}(b_{\alpha '}-2\Re D_{\alpha '} Z_t)}_{L^2}\lesssim \|b_{\alpha '}\|_{L^\infty}\nm{ \partial_{\alpha '} \frac1{Z_{,{\alpha '}}}}_{L^2}\| Z_{t,\alpha'}\|_{L^\infty}\\&+\nm{ (\partial_t+b\partial_{\alpha '})\partial_{\alpha '} \frac1{Z_{,{\alpha '}}}}_{L^2}\| Z_{t,\alpha'}\|_{L^\infty}+\nm{ \partial_{\alpha '} \frac1{Z_{,{\alpha '}}}}_{L^2}\| (\partial_t+b\partial_{\alpha '})Z_{t,\alpha'}\|_{L^\infty}\\& +\nm{ \partial_{\alpha '}(\partial_t+b\partial_{\alpha '}) \frac1{Z_{,{\alpha '}}}}_{L^2}\| Z_{t,\alpha'}\|_{L^\infty}+\nm{ \partial_{\alpha '} \frac1{Z_{,{\alpha '}}}}_{L^2}\| Z_{tt,\alpha'}\|_{L^\infty}\\& \lesssim C(\frak E) E_2^{1/4}. \end{aligned} \end{equation} \subsubsection{Controlling the quantity $\partial_{\alpha '} \mathcal A_{\alpha '}$} We begin with equation \eqref{2029}, \begin{equation}\label{2131} i \mathcal A_{\alpha '}= \frac1{Z_{,{\alpha '}}}\partial_{\alpha'} Z_{tt}+(Z_{tt}+i)\partial_{\alpha'}\frac{1}{Z_{,\alpha'}} \end{equation} and differentiate with respect to ${\alpha '}$. We get \begin{equation}\label{2132} i\partial_{\alpha'}\mathcal A_{\alpha '}=\frac{\partial_{\alpha'}^2Z_{tt}}{Z_{,\alpha'}}+2\partial_{\alpha'} Z_{tt}\partial_{\alpha'}\frac{1}{Z_{,\alpha'}}+(Z_{tt}+i)\partial_{\alpha'}^2\frac{1}{Z_{,\alpha'}}. \end{equation} Applying $(I-\mathbb H)$ yields \begin{equation}\label{2133} i(I-\mathbb H) \partial_{\alpha'}\mathcal A_{\alpha '}=(I-\mathbb H)(\frac{\partial_{\alpha'}^2Z_{tt}}{Z_{,\alpha'}})+2(I-\mathbb H)(\partial_{\alpha'} Z_{tt}\partial_{\alpha'}\frac{1}{Z_{,\alpha'}})+(I-\mathbb H)((Z_{tt}+i)\partial_{\alpha'}^2\frac{1}{Z_{,\alpha'}}). 
\end{equation} We rewrite the first term on the right by commuting out $\frac1{Z_{,\alpha'}}$, and use $(I-\mathbb H) \partial_{\alpha'}^2\frac{1}{Z_{,\alpha'}}=0$ to rewrite the third term on the right of \eqref{2133} as a commutator. We have \begin{equation}\label{2136} i(I-\mathbb H) \partial_{\alpha'}\mathcal A_{\alpha '}-\frac1{Z_{,{\alpha '}}}(I-\mathbb H)\partial_{\alpha'}^2Z_{tt}= [\frac{1}{Z_{,\alpha'}}, \mathbb H]{\partial_{\alpha'}^2Z_{tt}} +2(I-\mathbb H)(\partial_{\alpha'} Z_{tt}\partial_{\alpha'}\frac{1}{Z_{,\alpha'}})+ [Z_{tt}, \mathbb H]\partial_{\alpha'}^2\frac{1}{Z_{,\alpha'}}. \end{equation} Taking imaginary parts, then applying \eqref{3.20}, \eqref{3.21} and H\"older's inequality gives \begin{equation}\label{2137} \nm{\partial_{\alpha'}\mathcal A_{\alpha '}-\Im\braces{\frac{1}{Z_{,\alpha'}}(I-\mathbb H)({\partial_{\alpha'}^2Z_{tt}})}}_{L^2}\lesssim \nm{\partial_{\alpha'}\frac{1}{Z_{,\alpha'}}}_{L^2}\|Z_{tt,\alpha'}\|_{L^\infty}\lesssim C(\frak E)E_2^{1/4}. \end{equation} \subsubsection{Controlling $\nm{\partial_{\alpha '}\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}}}_{L^2}$}\label{data} We begin with \eqref{at}. We have controlled $\nm{\partial_{\alpha '}(b_{\alpha '}-2\Re D_{\alpha '} Z_t)}_{L^2}$ in \eqref{2123}, we are left with $\nm{\partial_{\alpha '} \paren{\frac{(\partial_t+b\partial_{\alpha '})A_1}{A_1}}}_{L^2}$. We proceed with computing $\partial_{\alpha '} A_1$, using \eqref{a1}. We have \begin{equation}\label{2138} \begin{aligned} \partial_{\alpha '} A_1&=-\Im\paren{ [Z_{t,{\alpha '}},\mathbb H]\bar Z_{t,{\alpha '}}+[Z_t, \mathbb H]\partial_{\alpha '}^2\bar Z_t}\\&=-\Im (-\mathbb H |\bar Z_{t,{\alpha '}}|^2+[Z_t, \mathbb H]\partial_{\alpha '}^2\bar Z_t); \end{aligned} \end{equation} here we used the fact $\mathbb H \bar Z_{t,{\alpha '}}=\bar Z_{t,{\alpha '}}$ to expand the first term, then removed the term $\Im |Z_{t,{\alpha '}}|^2=0$. Applying \eqref{3.20}, \eqref{2121} and \eqref{2020} gives \begin{equation}\label{2139} \|\partial_{\alpha '} A_1\|_{L^2}\lesssim \|Z_{t,{\alpha '}}\|_{L^\infty}\|Z_{t,{\alpha '}}\|_{L^2}\lesssim C(\frak E) E_2^{1/4}. \end{equation} Now taking derivative $\partial_t+b\partial_{\alpha '}$ to \eqref{2138}, using \eqref{eq:c21} and \eqref{eq:c14}, yields \begin{equation}\label{2140} \begin{aligned} (\partial_t+b\partial_{\alpha '})\partial_{\alpha '} A_1&=\Im \paren{[b, \mathbb H]\partial_{\alpha '} |\bar Z_{t,{\alpha '}}|^2+2 \mathbb H \Re\braces{Z_{t,{\alpha '}} (\partial_t+b\partial_{\alpha '})\bar Z_{t,{\alpha '}} }}\\& -\Im \paren{[Z_{tt}, \mathbb H]\partial_{\alpha '}^2\bar Z_t+[Z_t,\mathbb H]\partial_{\alpha '} (\partial_t+b\partial_{\alpha '} )\bar Z_{t,{\alpha '}}-[Z_t, b; \partial_{\alpha '}^2\bar Z_t ]}. \end{aligned} \end{equation} Applying \eqref{3.20}, then using \eqref{2020}, \eqref{2121}, \eqref{2128}, we get \begin{equation}\label{2141} \begin{aligned} \nm{(\partial_t+b\partial_{\alpha '})\partial_{\alpha '} A_1}_{L^2}&\lesssim \|b_{\alpha '}\|_{L^\infty}\|Z_{t,{\alpha '}}\|_{L^2}\|Z_{t,{\alpha '}}\|_{L^\infty}+ \|Z_{tt,{\alpha '}}\|_{L^\infty}\|Z_{t,{\alpha '}}\|_{L^2}\\&+\|Z_{t,{\alpha '}}\|_{L^\infty}\|(\partial_t+b\partial_{\alpha '})\bar Z_{t,{\alpha '}}\|_{L^2}\lesssim C(\frak E) E_2^{1/4}. 
\end{aligned} \end{equation} Commuting $\partial_{\alpha '}$ with $\partial_t+b\partial_{\alpha '}$ gives \begin{equation}\label{2142} \nm{\partial_{\alpha '} (\partial_t+b\partial_{\alpha '})A_1}_{L^2}\lesssim \nm{(\partial_t+b\partial_{\alpha '})\partial_{\alpha '} A_1}_{L^2}+\nm{b_{\alpha '} \partial_{\alpha '} A_1}_{L^2}\lesssim C(\frak E) E_2^{1/4}. \end{equation} Combining \eqref{2142} with \eqref{2139} and \eqref{2123}, using \eqref{2000}, \eqref{2021}, we obtain \begin{equation}\label{2143} \nm{\partial_{\alpha '}\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}}}_{L^2} \lesssim C(\frak E) E_2^{1/4}. \end{equation} Summing up the estimates obtained in \S\ref{quantities-e2} - \S\ref{data}, we have that the following quantities are controlled by $C(\frak E)E_2^{1/2}$: \begin{equation}\label{2144} \begin{aligned} &\|\partial_{\alpha'}^2\bar Z_t\|_{L^2}, \quad \nm{\frac1{Z_{,\alpha'}}\partial_{\alpha'}^2\bar Z_t}_{\dot H^{1/2}},\quad \nm{Z_{,\alpha'}(\partial_t+b\partial_{\alpha '})\paren{ \frac1{Z_{,\alpha'}}\partial_{\alpha'}^2\bar Z_t}}_{L^2}, \\& \quad \nm{(\partial_t+b\partial_{\alpha '})\partial_{\alpha'}^2\bar Z_t}_{L^2}, \quad \nm{\partial_{\alpha'}(\partial_t+b\partial_{\alpha '})\bar Z_{t,{\alpha '}}}_{L^2},\quad \|\partial_{\alpha'}^2\bar Z_{tt}\|_{L^2},\\& \|Z_{t,\alpha'}\|_{L^\infty}^2,\quad \|(\partial_t+b\partial_{\alpha '})Z_{t,\alpha'}\|_{L^\infty}^2,\quad \|\partial_{\alpha'}\bar Z_{tt}\|^2_{L^\infty}, \quad \|\mathbb H\bar Z_{tt,{\alpha '}}\|^2_{L^\infty},\\& \nm{\partial_{\alpha '}(b_{\alpha '}-2\Re D_{\alpha '} Z_t)}_{L^2}^2, \quad \nm{(\partial_t+b\partial_{\alpha '})\partial_{\alpha '}(b_{\alpha '}-2\Re D_{\alpha '} Z_t)}_{L^2}^2, \\&\quad \nm{\partial_{\alpha'}\mathcal A_{\alpha '}-\Im\braces{\frac{1}{Z_{,\alpha'}}(I-\mathbb H)({\partial_{\alpha'}^2Z_{tt}})}}_{L^2}^2,\quad \nm{\partial_{\alpha '}\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}}}_{L^2}^2,\quad \|\partial_{\alpha '} A_1\|_{L^2}^2. \end{aligned} \end{equation} \subsubsection{Controlling $\frac d{dt}E_2(t)$} We are now ready to estimate $\frac d{dt}E_2(t)$. We know $$E_2(t)=E_{D_{\alpha '} \partial_{\alpha '} \bar Z_{t}}(t)+\|\partial_{\alpha '}^2 \bar Z_t(t)\|_{L^2}^2,$$ where $E_{D_{\alpha '} \partial_{\alpha '} \bar Z_{t}}(t)$ is as defined in \eqref{eq:41}. We use Lemma~\ref{basic-e} on $E_{D_{\alpha '} \partial_{\alpha '} \bar Z_{t}}(t)$ and Lemma~\ref{basic-e2} on $\|\partial_{\alpha '}^2 \bar Z_t(t)\|_{L^2}^2$. We start with $\|\partial_{\alpha '}^2 \bar Z_t(t)\|_{L^2}^2$. We know by Lemma~\ref{basic-e2} that \begin{equation}\label{2144-1} \frac d{dt} \|\partial_{\alpha '}^2 \bar Z_t(t)\|_{L^2}^2\lesssim \|(\partial_t+b\partial_{\alpha '})\partial_{\alpha '}^2 \bar Z_t(t)\|_{L^2}\|\partial_{\alpha '}^2 \bar Z_t(t)\|_{L^2}+\|b_{\alpha '}\|_{L^\infty}\|\partial_{\alpha '}^2 \bar Z_t(t)\|_{L^2}^2. \end{equation} We have controlled $\|(\partial_t+b\partial_{\alpha '})\partial_{\alpha '}^2 \bar Z_t(t)\|_{L^2}$ in \eqref{2119}, and $\|b_{\alpha '}\|_{L^\infty}$ in \S\ref{basic-quantities}, therefore \begin{equation}\label{2145} \frac d{dt} \|\partial_{\alpha '}^2 \bar Z_t(t)\|_{L^2}^2\lesssim C(\frak E(t))E_2(t). \end{equation} We now estimate $\frac d{dt} E_{D_{\alpha '} \partial_{\alpha '} \bar Z_{t}}(t)$. 
Taking $\Theta=D_{\alpha '} \bar Z_{t,{\alpha '}}$ in Lemma~\ref{basic-e}, we have \begin{equation}\label{2149} \frac d{dt} E_{D_{\alpha '} \partial_{\alpha '} \bar Z_{t}}(t) \le \nm{\frac{\frak a_t}{\frak a}\circ h^{-1}}_{L^\infty} E_{D_{\alpha '} \partial_{\alpha '} \bar Z_{t}}(t)+2 E_{D_{\alpha '} \partial_{\alpha '} \bar Z_{t}}(t)^{1/2}\paren{\int\frac{|\mathcal P D_{\alpha '} \bar Z_{t,{\alpha '}}|^2}{\mathcal A}\,d{\alpha '}}^{1/2}. \end{equation} We have controlled $\nm{\frac{\frak a_t}{\frak a}\circ h^{-1}}_{L^\infty}$ in \eqref{2022}. We need to control $\int\frac{|\mathcal P D_{\alpha '} \bar Z_{t,{\alpha '}}|^2}{\mathcal A}\,d{\alpha '}$. We know $\mathcal A=\frac{A_1}{|Z_{,{\alpha '}}|^2}$ and $A_1\ge 1$, so \begin{equation}\label{2150} \int\frac{|\mathcal P D_{\alpha '} \bar Z_{t,{\alpha '}}|^2}{\mathcal A}\,d{\alpha '}\le \int |Z_{,{\alpha '}}\mathcal P D_{\alpha '} \bar Z_{t,{\alpha '}}|^2 \,d{\alpha '}. \end{equation} We compute \begin{equation}\label{2112} \mathcal P D_{\alpha '} \bar Z_{t,{\alpha '}}= \bracket{\mathcal P, D_{\alpha '}} \bar Z_{t,{\alpha '}}+ \frac1{Z_{,{\alpha '}}}\partial_{\alpha '}\mathcal P \bar Z_{t,{\alpha '}}; \end{equation} further expanding $\bracket{\mathcal P, D_{\alpha '}} \bar Z_{t,{\alpha '}}$ by \eqref{eq:c5-1} yields \begin{equation}\label{2146} \bracket{\mathcal P, D_{\alpha '}} \bar Z_{t,{\alpha '}}= (-2D_{\alpha '} Z_{tt}) D_{\alpha '} \bar Z_{t,{\alpha '}} -2(D_{\alpha '} Z_t)(\partial_t +b\partial_{\alpha '})D_{\alpha '} \bar Z_{t,{\alpha '}}; \end{equation} and by \eqref{2020} and \eqref{2117}, \begin{equation}\label{2146-1} \begin{aligned} \|Z_{,{\alpha '}}\bracket{\mathcal P, D_{\alpha '}} \bar Z_{t,{\alpha '}}\|_{L^2}&\lesssim \|D_{\alpha '} Z_{tt}\|_{L^\infty}\|\partial_{\alpha '}^2 \bar Z_{t}\|_{L^2}+\|D_{\alpha '} Z_t\|_{L^\infty}\nm{Z_{,{\alpha '}}(\partial_t +b\partial_{\alpha '})D_{\alpha '} \bar Z_{t,{\alpha '}}}_{L^2}\\&\lesssim C(\frak E) E_2^{1/2}. \end{aligned} \end{equation} We are left with controlling $\|\partial_{\alpha '} \mathcal P \bar Z_{t,{\alpha '}}\|_{L^2}$. 
Taking derivative to ${\alpha '}$ to \eqref{base-eq} yields \begin{equation}\label{2147} \begin{aligned} \partial_{\alpha '} \mathcal P \bar Z_{t,{\alpha '}}&=-(\partial_t+b\partial_{\alpha '})(\paren{\partial_{\alpha '} b_{\alpha '}} \partial_{{\alpha '}}\bar Z_{t})-\paren{\partial_{\alpha '} b_{\alpha '}}\partial_{\alpha '} \bar Z_{tt}-i\paren{\partial_{\alpha '} \mathcal A_{\alpha '}} \partial_{\alpha '} \bar Z_t\\& -(\partial_t+b\partial_{\alpha '})( b_{\alpha '} \partial_{{\alpha '}}^2\bar Z_{t})- b_{\alpha '}\partial_{\alpha '}^2 \bar Z_{tt}-i \mathcal A_{\alpha '} \partial_{\alpha '}^2 \bar Z_t-b_{\alpha '}^2 \partial_{\alpha '}^2 \bar Z_{t}- b_{\alpha '} (\partial_{\alpha '} b_{\alpha '}) \partial_{{\alpha '}}\bar Z_{t}\\& +\frac{\frak a_t}{\frak a}\circ h^{-1} \partial_{\alpha '}^2\bar Z_{tt}+2\paren{ \partial_{\alpha '}\frac{\frak a_t}{\frak a}\circ h^{-1} }\partial_{\alpha '} \bar Z_{tt}+\paren{\partial_{\alpha '}^2 \frac{\frak a_t}{\frak a}\circ h^{-1}} (\bar Z_{tt}-i); \end{aligned} \end{equation} we further expand the terms in the first line and the last term in the second line according to the available estimates in \S\ref{ddb} - \S\ref{data}, \begin{equation}\label{2148} \begin{aligned} (\partial_t+b\partial_{\alpha '})&(\paren{\partial_{\alpha '} b_{\alpha '}} \partial_{{\alpha '}}\bar Z_{t})=(\partial_t+b\partial_{\alpha '})\braces{\partial_{\alpha '} \paren{ b_{\alpha '}-2\Re D_{\alpha '} Z_t} \partial_{{\alpha '}}\bar Z_{t}}\\&+2 \braces{\Re \partial_{\alpha '} \paren{ D_{\alpha '} Z_t}} (\partial_t+b\partial_{\alpha '})\partial_{{\alpha '}}\bar Z_{t}+2 \braces{\Re (\partial_t+b\partial_{\alpha '})\partial_{\alpha '} \paren{ D_{\alpha '} Z_t}} \partial_{{\alpha '}}\bar Z_{t} \end{aligned} \end{equation} we expand the factors in the second line further by product rules, \begin{equation}\label{2151} \Re \partial_{\alpha '} \paren{ D_{\alpha '} Z_t}=\Re \partial_{\alpha '} \frac1{Z_{,{\alpha '}}} \partial_{\alpha '} Z_t +\Re\frac {\partial_{\alpha '}^2 Z_t}{Z_{,{\alpha '}}}, \end{equation} \begin{equation}\label{2152} \begin{aligned} \Re (\partial_t+b\partial_{\alpha '})\partial_{\alpha '} \paren{ D_{\alpha '} Z_t}&= \Re (\partial_t+b\partial_{\alpha '})\partial_{\alpha '} \frac1{Z_{,{\alpha '}}} \partial_{\alpha '} Z_t +\Re \partial_{\alpha '} \frac1{Z_{,{\alpha '}}} (\partial_t+b\partial_{\alpha '})\partial_{\alpha '} Z_t \\&+\Re\frac {(\partial_t+b\partial_{\alpha '})\partial_{\alpha '}^2 Z_t}{Z_{,{\alpha '}}}+\Re\frac {\partial_{\alpha '}^2 Z_t}{Z_{,{\alpha '}}}(b_{\alpha '}-D_{\alpha '} Z_t), \end{aligned} \end{equation} here we used \eqref{eq:dza} in the last term; and by \eqref{eq:c7}, \begin{equation}\label{2153} (\partial_t+b\partial_{\alpha '})\partial_{{\alpha '}}\bar Z_{t}=\bar Z_{tt,{\alpha '}}-b_{\alpha '} \bar Z_{t,{\alpha '}}. \end{equation} We are now ready to conclude, by \eqref{2123}, \eqref{2130}, \eqref{2121}, \eqref{2122}, \eqref{2020}, \eqref{2046}, \S\ref{quantities-e2} and the expansions \eqref{2148} - \eqref{2153} that \begin{equation}\label{2154} \|(\partial_t+b\partial_{\alpha '})(\paren{\partial_{\alpha '} b_{\alpha '}} \partial_{{\alpha '}}\bar Z_{t})\|_{L^2}\lesssim C(\frak E) E_2^{1/2}. \end{equation} Similarly we can conclude, after expanding if necessary, with a similar estimate for all the terms on the right hand side of \eqref{2147} except for $\paren{\partial_{\alpha '}^2 \frac{\frak a_t}{\frak a}\circ h^{-1}} (\bar Z_{tt}-i)$. 
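To illustrate the type of estimate meant here, the term $b_{\alpha '}\partial_{\alpha '}^2 \bar Z_{tt}$ in the second line of \eqref{2147}, for example, is handled by H\"older's inequality together with the bound on $\|b_{\alpha '}\|_{L^\infty}$ from \S\ref{basic-quantities} and \eqref{2127}: $$\nm{b_{\alpha '}\partial_{\alpha '}^2 \bar Z_{tt}}_{L^2}\le \|b_{\alpha '}\|_{L^\infty}\|\partial_{\alpha '}^2 \bar Z_{tt}\|_{L^2}\lesssim C(\frak E) E_2^{1/2};$$ the remaining terms are bounded in the same way, using the quantities collected in \eqref{2144}. 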
Let \begin{equation}\label{2155} \partial_{\alpha '} \mathcal P \bar Z_{t,{\alpha '}}=\mathcal R_1+\paren{\partial_{\alpha '}^2 \frac{\frak a_t}{\frak a}\circ h^{-1}} (\bar Z_{tt}-i); \end{equation} where $\mathcal R_1$ is the sum of the remaining terms on the right hand side of \eqref{2147}. We have, by the argument above, that \begin{equation}\label{2156} \|\mathcal R_1\|_{L^2}\lesssim C(\frak E) E_2^{1/2}. \end{equation} We control the term $\paren{\partial_{\alpha '}^2 \frac{\frak a_t}{\frak a}\circ h^{-1}} (\bar Z_{tt}-i)$ with a similar idea as that in \S\ref{ddtea}, by taking advantage of the fact that $\partial_{\alpha '}^2 \frac{\frak a_t}{\frak a}\circ h^{-1}$ is purely real. Applying $(I-\mathbb H)$ to both sides of \eqref{2155}, and commuting out $\bar Z_{tt}-i$ yields \begin{equation}\label{2157} (I-\mathbb H) \partial_{\alpha '} \mathcal P \bar Z_{t,{\alpha '}}=(I-\mathbb H)\mathcal R_1+\bracket{\bar Z_{tt}, \mathbb H}\partial_{\alpha '}^2 \frac{\frak a_t}{\frak a}\circ h^{-1} + (\bar Z_{tt}-i)(I-\mathbb H)\partial_{\alpha '}^2 \frac{\frak a_t}{\frak a}\circ h^{-1} ; \end{equation} Because $\mathbb H$ is purely imaginary, we have $\abs{\partial_{\alpha '}^2 \frac{\frak a_t}{\frak a}\circ h^{-1}}\le \abs{(I-\mathbb H)\partial_{\alpha '}^2 \frac{\frak a_t}{\frak a}\circ h^{-1}}$, and \begin{equation}\label{2158} \abs{ (\bar Z_{tt}-i)\partial_{\alpha '}^2 \frac{\frak a_t}{\frak a}\circ h^{-1}}\le \abs{(I-\mathbb H) \partial_{\alpha '} \mathcal P \bar Z_{t,{\alpha '}}}+\abs{(I-\mathbb H)\mathcal R_1}+\abs{\bracket{\bar Z_{tt}, \mathbb H}\partial_{\alpha '}^2 \frac{\frak a_t}{\frak a}\circ h^{-1}}. \end{equation} Now by \eqref{eq:c10}, \begin{equation}\label{2159} \bracket{\mathcal P, \partial_{\alpha '}} \bar Z_{t,{\alpha '}} =-(\partial_t+b\partial_{\alpha '})(b_{\alpha '}\partial_{\alpha '} \bar Z_{t,{\alpha '}})-b_{\alpha '}\partial_{\alpha '} (\partial_t+b\partial_{\alpha '})\bar Z_{t,{\alpha '}}-i\mathcal A_{\alpha '} \partial_{\alpha '} \bar Z_{t,{\alpha '}}; \end{equation} so \begin{equation}\label{2160} \|\bracket{\mathcal P, \partial_{\alpha '}} \bar Z_{t,{\alpha '}}\|_{L^2}\lesssim C(\frak E) E_2^{1/2} \end{equation} by \eqref{2020}, \S\ref{aa-zdz} and \S\ref{dtdab}. By Lemma~\ref{basic-3-lemma}, and \eqref{2160}, \begin{equation}\label{2161} \|(I-\mathbb H) \partial_{\alpha '} \mathcal P \bar Z_{t,{\alpha '}}\|_{L^2}\le \|(I-\mathbb H) \mathcal P \partial_{\alpha '} \bar Z_{t,{\alpha '}}\|_{L^2}+ \|(I-\mathbb H)\bracket{\mathcal P, \partial_{\alpha '}} \bar Z_{t,{\alpha '}}\|_{L^2}\lesssim C(\frak E) E_2^{1/2}. \end{equation} Now we apply \eqref{3.20} to the commutator on the right hand side of \eqref{2158}. By \eqref{2156} and \eqref{2161}, we have \begin{equation}\label{2162} \nm{ (\bar Z_{tt}-i)\partial_{\alpha '}^2 \frac{\frak a_t}{\frak a}\circ h^{-1}}_{L^2}\lesssim C(\frak E) E_2^{1/2}+\nm{\bar Z_{tt,{\alpha '}}}_{L^\infty}\nm{\partial_{\alpha '} \frac{\frak a_t}{\frak a}\circ h^{-1}}_{L^2}\lesssim C(\frak E) E_2^{1/2} . \end{equation} This together with \eqref{2156} and \eqref{2155} gives \begin{equation}\label{2163} \|\partial_{\alpha '} \mathcal P \bar Z_{t,{\alpha '}}\|_{L^2}\lesssim C(\frak E) E_2^{1/2}. 
\end{equation} We can now conclude, by \eqref{2150}, \eqref{2112}, \eqref{2146-1} and \eqref{2163}, that \begin{equation}\label{2164} \int\frac{|\mathcal P D_{\alpha '} \bar Z_{t,{\alpha '}}|^2}{\mathcal A}\,d{\alpha '}\lesssim C(\frak E) E_2; \end{equation} and consequently, \begin{equation}\label{2165} \frac d{dt} E_{D_{\alpha '} \partial_{\alpha '} \bar Z_t}(t) \lesssim C(\frak E) E_2. \end{equation} Combining \eqref{2145} and \eqref{2165} yields \begin{equation}\label{2166} \frac d{dt} E_2(t) \lesssim C(\frak E(t)) E_2(t). \end{equation} This concludes the proof of Proposition~\ref{step1}. \subsection{The proof of Proposition~\ref{step2}}\label{proof-prop2} We begin by discussing quantities controlled by $E_3$. Since the idea is similar to that in previous sections, when the estimates are straightforward, we don't always give the full details. \subsubsection{Quantities controlled by $E_3$ and a polynomial of $\frak E$ and $E_2$}\label{quantities-e3} By the definition of $E_3$, and the fact that $1\le A_1 \le C(\frak E)$, cf. \eqref{2000}, \begin{equation}\label{2200} \|\partial_{\alpha'}^3\bar Z_t\|_{L^2}^2\le E_3,\quad \nm{Z_{,\alpha'}(\partial_t+b\partial_{\alpha '})\paren{ \frac1{Z_{,\alpha'}}\partial_{\alpha'}^3\bar Z_t}}_{L^2}^2\le C(\frak E) E_3,\quad \nm{\frac1{Z_{,\alpha'}}\partial_{\alpha'}^3\bar Z_t}_{\dot H^{1/2}}^2\le E_3 . \end{equation} By \eqref{eq:dza} and product rules, \begin{equation}\label{2201} Z_{,\alpha'}(\partial_t+b\partial_{\alpha '})\paren{ \frac1{Z_{,\alpha'}}\partial_{\alpha'}^3\bar Z_t}=(\partial_t+b\partial_{\alpha '})\partial_{\alpha'}^3\bar Z_t+(b_{\alpha '}-D_{\alpha '} Z_t) \partial_{\alpha'}^3\bar Z_t \end{equation} so by \eqref{2020}, \begin{equation}\label{2202} \abs{\nm{(\partial_t+b\partial_{\alpha '})\partial_{\alpha'}^3\bar Z_t}_{L^2}-\nm{Z_{,\alpha'}(\partial_t+b\partial_{\alpha '}) \paren{\frac1{Z_{,\alpha'}}\partial_{\alpha'}^3\bar Z_t}}_{L^2}} \le C(\frak E)\|\partial_{\alpha'}^3\bar Z_t\|_{L^2}, \end{equation} therefore \begin{equation}\label{2203} \nm{(\partial_t+b\partial_{\alpha '})\partial_{\alpha'}^3\bar Z_t}_{L^2}^2\le C(\frak E)E_3. \end{equation} We commute out $\partial_{\alpha '}$, by \eqref{eq:c7}, \begin{equation}\label{2204} \partial_{\alpha'}(\partial_t+b\partial_{\alpha '})\partial_{\alpha'}^2\bar Z_t=(\partial_t+b\partial_{\alpha '})\partial_{\alpha'}^3\bar Z_t+b_{\alpha'}\partial_{\alpha'}^3\bar Z_t, \end{equation} so \begin{equation}\label{2205} \|\partial_{\alpha'}(\partial_t+b\partial_{\alpha '})\partial_{\alpha'}^2\bar Z_t\|_{L^2}^2\le C(\frak E)E_3. \end{equation} As a consequence of the Sobolev inequality \eqref{eq:sobolev}, and \eqref{2144}, \begin{align}\label{2206} \|\partial_{\alpha'}^2\bar Z_t\|_{L^\infty}^2&\le 2\|\partial_{\alpha'}^2\bar Z_t\|_{L^2}\|\partial_{\alpha'}^3\bar Z_t\|_{L^2} \lesssim C(\frak E, E_2)E_3^{1/2},\\ \label{2207} \|(\partial_t+b\partial_{\alpha '})\partial_{\alpha'}^2\bar Z_t\|_{L^\infty}^2&\le 2\|(\partial_t+b\partial_{\alpha '})\partial_{\alpha'}^2\bar Z_t\|_{L^2}\|\partial_{\alpha '} (\partial_t+b\partial_{\alpha '})\partial_{\alpha'}^2\bar Z_t\|_{L^2} \lesssim C(\frak E, E_2)E_3^{1/2}. 
\end{align} Now we commute out $\partial_{\alpha '}^2$ by \eqref{eq:c11}, and get \begin{equation}\label{2208} \partial_{\alpha'}^2(\partial_t+b\partial_{\alpha '})\partial_{\alpha'}\bar Z_t=(\partial_t+b\partial_{\alpha '})\partial_{\alpha'}^3\bar Z_t+\paren{\partial_{\alpha'}b_{\alpha'}}\partial_{\alpha'}^2\bar Z_t+2b_{\alpha'}\partial_{\alpha'}^3\bar Z_t. \end{equation} We expand the second term further according to the available estimate \eqref{2123}, as we did in \eqref{2125}; we get \begin{equation}\label{2209} \|\partial_{\alpha'}^2(\partial_t+b\partial_{\alpha '})\partial_{\alpha'}\bar Z_t\|_{L^2}^2\le C\paren{\frak E, E_2, \nm{\frac1{Z_{,{\alpha '}}}}_{L^\infty}}(E_3+1); \end{equation} and consequently by Sobolev inequality \eqref{eq:sobolev} and \eqref{2144}, \begin{equation}\label{2210} \|\partial_{\alpha'}(\partial_t+b\partial_{\alpha '})\partial_{\alpha'}\bar Z_t\|_{L^\infty}^2\le C\paren{\frak E, E_2, \nm{\frac1{Z_{,{\alpha '}}}}_{L^\infty}}(E_3^{1/2}+1). \end{equation} We need to control some additional quantities. \subsubsection{Controlling $\|\partial_{\alpha'}A_1\|_{L^\infty}$, $\|\partial_{\alpha'}^2A_1\|_{L^2}$ and $\nm{\partial_{\alpha'}^2\frac{1}{Z_{,\alpha'}}}_{L^2}$} We begin with \eqref{2138}: \begin{equation}\label{2211} \partial_{\alpha '} A_1=-\Im\paren{ [Z_{t,{\alpha '}},\mathbb H]\bar Z_{t,{\alpha '}}+[Z_t, \mathbb H]\partial_{\alpha '}^2\bar Z_t}. \end{equation} By \eqref{eq:b13}, \eqref{2020}, \eqref{2144}, \begin{equation}\label{2212} \nm{\partial_{\alpha'}A_1}_{L^\infty}\lesssim \|Z_{t,\alpha'}\|_{L^2}\|\partial_{\alpha'}^2Z_{t}\|_{L^2}\lesssim C(\frak E)E_2^{1/2}. \end{equation} Differentiating \eqref{2211} with respect to $\alpha'$, then applying \eqref{3.20}, \eqref{3.22} and using $\mathbb H \bar Z_{t,{\alpha '}}=\bar Z_{t,{\alpha '}}$, gives \begin{equation}\label{2213} \|\partial_{\alpha'}^2A_1\|_{L^2}\lesssim \|Z_{t,\alpha'}\|_{L^\infty}\|\partial_{\alpha'}^2Z_{t}\|_{L^2}\le C(\frak E, E_2), \end{equation} where in the last step we used \eqref{2144}. To estimate $\partial_{\alpha'}^2\frac{1}{Z_{,\alpha'}}$ we begin with \eqref{interface-a1}: $$-i\frac 1{Z_{,\alpha'}}=\frac{\bar Z_{tt}-i}{A_1}.$$ Taking two derivatives with respect to $\alpha'$ gives \begin{equation}\label{2214} -i\partial_{\alpha'}^2\frac{1}{Z_{,\alpha'}}=\frac{\partial_{\alpha'}^2\bar Z_{tt}}{A_1}-2\bar Z_{tt,\alpha'}\frac{\partial_{\alpha'}A_1}{A_1^2}+(\bar Z_{tt}-i)\paren{-\frac{\partial_{\alpha'}^2A_1}{A_1^2}+2\frac{(\partial_{\alpha'}A_1)^2}{A_1^3}}; \end{equation} therefore, because $A_1\ge 1$, and \eqref{aa1}, \eqref{2020}, \eqref{2144}, \eqref{2212}, \eqref{2213}, \begin{equation}\label{2215} \begin{aligned} \nm{\partial_{\alpha'}^2\frac{1}{Z_{,\alpha'}}}_{L^2}\lesssim &\|\partial_{\alpha'}^2\bar Z_{tt}\|_{L^2}+\|\partial_{\alpha'}\bar Z_{tt}\|_{L^2}\|\partial_{\alpha'}A_1\|_{L^\infty}\\&+\nm{\frac{1}{Z_{,\alpha'}}}_{L^\infty}(\|\partial_{\alpha'}^2A_1\|_{L^2}+\|\partial_{\alpha'}A_1\|_{L^2}\|\partial_{\alpha'}A_1\|_{L^\infty})\le C\paren{\frak E, E_2, \nm{\frac{1}{Z_{,\alpha'}}}_{L^\infty}}, \end{aligned} \end{equation} and consequently by Sobolev inequality \eqref{eq:sobolev}, and \eqref{2020}, \begin{equation}\label{2216} \nm{\partial_{\alpha'}\frac{1}{Z_{,\alpha'}}}_{L^\infty} \le C\paren{\frak E, E_2, \nm{\frac{1}{Z_{,\alpha'}}}_{L^\infty}}. 
\end{equation} \subsubsection{Controlling $\|\partial_{\alpha'}^2 b_{\alpha'}\|_{L^2}$ and $\|\partial_{\alpha'}^3Z_{tt}\|_{L^2}$} We are now ready to give the estimates for $ \|\partial_{\alpha'}^2b_{\alpha'}\|_{L^2} $ and $\| \partial_{\alpha'}^3\bar Z_{tt} \|_{L^2}$. We begin with \eqref{2122-1}, differentiating with respect to ${\alpha '}$, then use \eqref{3.20}, \eqref{3.21}, the fact that $\mathbb H Z_{t,{\alpha '}}=-Z_{t,{\alpha '}}$, $\mathbb H\frac1{Z_{,{\alpha '}}}=\frac1{Z_{,{\alpha '}}}$, and H\"older's inequality; we get \begin{equation}\label{2217} \begin{aligned} \|\partial_{\alpha'}^2(b_{\alpha'}-2\Re D_{\alpha '} Z_t)\|_{L^2}&\lesssim \|\partial_{\alpha'}^2Z_{t}\|_{L^2}\nm{\partial_{\alpha'}\frac{1}{Z_{,\alpha'}}}_{L^\infty}+\|Z_{t,\alpha'}\|_{L^\infty}\nm{\partial_{\alpha'}^2\frac{1}{Z_{,\alpha'}}}_{L^2}\\& \le C\paren{\frak E, E_2, \nm{\frac{1}{Z_{,\alpha'}}}_{L^\infty}} . \end{aligned} \end{equation} It is easy to show, by product rules and H\"older's inequality that \begin{equation}\label{2217-1} \begin{aligned} \|\partial_{\alpha '}^2 D_{\alpha '} Z_t\|_{L^2}&\lesssim \|\partial_{\alpha'}^2Z_{t}\|_{L^2}\nm{\partial_{\alpha'}\frac{1}{Z_{,\alpha'}}}_{L^\infty}+\|Z_{t,\alpha'}\|_{L^\infty}\nm{\partial_{\alpha'}^2\frac{1}{Z_{,\alpha'}}}_{L^2}+\nm{\frac1{Z_{,{\alpha '}}}\partial_{\alpha '}^3 Z_t}_{L^2}\\&\lesssim C\paren{\frak E, E_2, \nm{\frac{1}{Z_{,\alpha'}}}_{L^\infty}}+\nm{\frac1{Z_{,{\alpha '}}}}_{L^\infty}E_3^{1/2}, \end{aligned} \end{equation} so \begin{equation}\label{2218} \|\partial_{\alpha'}^2 b_{\alpha'}\|_{L^2}\lesssim C\paren{\frak E, E_2, \nm{\frac{1}{Z_{,\alpha'}}}_{L^\infty}}+\nm{\frac1{Z_{,{\alpha '}}}}_{L^\infty}E_3^{1/2}. \end{equation} Now starting from \eqref{eq-zta} and taking two derivatives to ${\alpha '}$ gives \begin{equation}\label{2219} \begin{aligned} \partial_{\alpha'}^3\bar Z_{tt}&=\partial_{\alpha'}^2(\partial_t+b\partial_{\alpha '})\partial_{\alpha'}\bar Z_t+\partial_{\alpha '}^2 (b_{\alpha '}\bar Z_{t,{\alpha '}}) \\&=\partial_{\alpha'}^2(\partial_t+b\partial_{\alpha '})\partial_{\alpha'}\bar Z_t+(\partial_{\alpha '}^2 b_{\alpha '})\bar Z_{t,{\alpha '}}+2(\partial_{\alpha '} b_{\alpha '})\partial_{\alpha '} \bar Z_{t,{\alpha '}}+ b_{\alpha '} \partial_{\alpha '}^3\bar Z_{t}, \end{aligned} \end{equation} so \begin{equation}\label{2220} \| \partial_{\alpha'}^3\bar Z_{tt} \|_{L^2}^2\le C\paren{\frak E, E_2, \nm{\frac{1}{Z_{,\alpha'}}}_{L^\infty}}(E_3+1), \end{equation} and as a consequence of \eqref{eq:sobolev}, \begin{equation}\label{2221} \| \partial_{\alpha'}^2\bar Z_{tt} \|_{L^\infty}^2\le C\paren{\frak E, E_2, \nm{\frac{1}{Z_{,\alpha'}}}_{L^\infty}}(E_3^{1/2}+1) . \end{equation} \subsubsection {Controlling $\partial_{\alpha'}^2\mathcal A_{\alpha '}$.} We differentiate \eqref{2136} with respect to $\alpha'$ then take the imaginary parts and use H\"older's inequality, \eqref{3.20}, \eqref{3.21}. We have, \begin{equation}\label{2222} \begin{aligned} \|\partial_{\alpha'}^2\mathcal A_{\alpha '}\|_{L^2}&\le \nm{\frac1{Z_{,\alpha'}}}_{L^\infty}\|\partial_{\alpha'}^3\bar Z_{tt}\|_{L^2}+ \nm{\partial_{\alpha'}\frac1{Z_{,\alpha'}}}_{L^\infty}\|\partial_{\alpha'}^2\bar Z_{tt}\|_{L^2}\\&+\nm{\partial_{\alpha'}^2\frac1{Z_{,\alpha'}}}_{L^2}\|\partial_{\alpha'}\bar Z_{tt}\|_{L^\infty}\le C\paren{\frak E, E_2, \nm{\frac1{Z_{,\alpha'}}}_{L^\infty}} ( E_3^{1/2}+1). 
\end{aligned} \end{equation} \subsubsection{Controlling $\nm{\partial_{\alpha '}^2\frac{\frak a_t}{\frak a}\circ h^{-1}}_{L^2}$} We begin with \eqref{at}, and take two derivatives to ${\alpha '}$. \begin{equation} \begin{aligned} \partial_{\alpha '}^2\frac{\frak a_t}{\frak a}\circ h^{-1}&=\frac{\partial_{\alpha '}^2 (\partial_t+b\partial_{\alpha '})A_1}{A_1}-2\frac{\partial_{\alpha '} (\partial_t+b\partial_{\alpha '})A_1\partial_{\alpha '} A_1}{A_1^2}\\&+(\partial_t+b\partial_{\alpha '})A_1 \paren{\frac{-\partial_{\alpha '}^2 A_1}{A_1^2} +2\frac{(\partial_{\alpha '} A_1)^2}{A_1^3}}+ \partial_{\alpha '}^2 (b_{\alpha '}-2\Re D_{\alpha '} Z_t). \end{aligned} \end{equation} We have controlled $\nm{\partial_{\alpha '}^2 (b_{\alpha '}-2\Re D_{\alpha '} Z_t)}_{L^2}$, $\nm{\partial_{\alpha '}^2 A_1}_{L^2}$, $\nm{\partial_{\alpha '} A_1}_{L^\infty}$, $\nm{\partial_{\alpha '} A_1}_{L^2}$ and $\nm{\partial_{\alpha '} (\partial_t+b\partial_{\alpha '}) A_1}_{L^2}$ etc. in \eqref{2217}, \eqref{2212}, \eqref{2213}, \eqref{2139}, \eqref{2142} and \eqref{2000}, \eqref{2021}. We are left with $\partial_{\alpha '}^2 (\partial_t+b\partial_{\alpha '})A_1$. We begin with \eqref{a1}, taking two derivatives to ${\alpha '}$, then one derivative to $\partial_t+b\partial_{\alpha '}$. We have \begin{equation}\label{2222-1} (\partial_t+b\partial_{\alpha '})\partial_{\alpha '}^2 A_1=-\sum_{k=0}^2 C_2^k\Im(\partial_t+b\partial_{\alpha '}) \paren{\bracket{ \partial_{\alpha '}^k Z_t, \mathbb H}\partial_{\alpha '}^{2-k} \bar Z_{t,\alpha'}} \end{equation} where $C_2^0=1, C_2^1=2, C_2^2=1$. We use \eqref{eq:c14} to expand the right hand side, then use \eqref{3.20}, \eqref{3.21} and \eqref{3.22} to do the estimates. We have \begin{equation} \|(\partial_t+b\partial_{\alpha '})\partial_{\alpha '}^2 A_1\|_{L^2}\lesssim C\paren{\frak E, E_2, \nm{\frac 1{Z_{,{\alpha '}}}}_{L^\infty}}(E_3^{1/4}+1). \end{equation} Now we use \eqref{eq:c11} to compute \begin{equation}\label{2223-1} \partial_{\alpha '}^2 (\partial_t+b\partial_{\alpha '})A_1= (\partial_t+b\partial_{\alpha '})\partial_{\alpha '}^2 A_1+\partial_{\alpha '} b_{\alpha '} \partial_{\alpha '} A_1+2 b_{\alpha '} \partial_{\alpha '}^2 A_1. \end{equation} Therefore \begin{equation} \|\partial_{\alpha '}^2(\partial_t+b\partial_{\alpha '}) A_1\|_{L^2}\lesssim C\paren{\frak E, E_2, \nm{\frac 1{Z_{,{\alpha '}}}}_{L^\infty}}(E_3^{1/4}+1), \end{equation} consequently \begin{equation}\label{2224-1} \nm{\partial_{\alpha'}^2\frac{\frak a_t}{\frak a}\circ h^{-1} }_{L^2}\lesssim C\paren{\frak E, E_2, \nm{\frac 1{Z_{,{\alpha '}}}}_{L^\infty}}(E_3^{1/2}+1). \end{equation} \subsubsection{ Controlling $\nm{(\partial_t+b\partial_{\alpha '})\frac{1}{Z_{,\alpha'}}}_{L^\infty}$ and $\nm{(\partial_t+b\partial_{\alpha '})\partial_{\alpha'}^2\frac{1}{Z_{,\alpha'}}}_{L^2}$ } We begin with \eqref{eq:dza}, \begin{equation}\label{2225} (\partial_t+b\partial_{\alpha '})\frac{1}{Z_{,\alpha'}}=\frac{1}{Z_{,\alpha'}}(b_{\alpha '}-D_{\alpha '} Z_t), \end{equation} differentiating twice with respect to ${\alpha '}$; we get \begin{equation}\label{2226} \begin{aligned} &\partial_{\alpha'}^2 (\partial_t+b\partial_{\alpha '})\frac{1}{Z_{,\alpha'}}=\paren{\partial_{\alpha'}^2\frac{1}{Z_{,\alpha'}}}( b_{\alpha'}-D_{\alpha'}Z_t)\\&+2\paren{\partial_{\alpha'}\frac{1}{Z_{,\alpha'}}}( \partial_{\alpha'}b_{\alpha'}-\partial_{\alpha'}D_{\alpha'}Z_t)+\frac{1}{Z_{,\alpha'}}( \partial_{\alpha'}^2b_{\alpha'}-\partial_{\alpha'}^2D_{\alpha'}Z_t). 
\end{aligned} \end{equation} We further expand $\partial_{\alpha'}D_{\alpha'}Z_t$ and $\partial_{\alpha'}^2D_{\alpha'}Z_t$ by product rules then use H\"older's inequality, \eqref{2020}, \eqref{2144} and \eqref{2215}, \eqref{2216}, \eqref{2217-1}, \eqref{2218}. We have \begin{equation}\label{2227} \begin{aligned} \nm{\partial_{\alpha'}^2(\partial_t+b\partial_{\alpha '})\frac{1}{Z_{,\alpha'}}}_{L^2}&\lesssim C(\frak E)\nm{\partial_{\alpha'}^2\frac{1}{Z_{,\alpha'}}}_{L^2}+\nm{\partial_{\alpha'}\frac{1}{Z_{,\alpha'}}}_{L^\infty}\| \partial_{\alpha'}b_{\alpha'}-\partial_{\alpha'}D_{\alpha'}Z_t\|_{L^2}\\&+\nm{\frac{1}{Z_{,\alpha'}}}_{L^\infty}\| \partial_{\alpha'}^2b_{\alpha'}-\partial_{\alpha'}^2D_{\alpha'}Z_t\|_{L^2}\lesssim C\paren{\frak E, E_2, \nm{\frac{1}{Z_{,\alpha'}}}_{L^\infty}}(E_3^{1/2}+1). \end{aligned} \end{equation} Now by \eqref{eq:c11}, \begin{equation}\label{2228} (\partial_t+b\partial_{\alpha '})\partial_{\alpha'}^2\frac{1}{Z_{,\alpha'}}=\partial_{\alpha'}^2 (\partial_t+b\partial_{\alpha '})\frac{1}{Z_{,\alpha'}}-(\partial_{\alpha'}b_{\alpha'})\partial_{\alpha'}\frac{1}{Z_{,\alpha'}}-2b_{\alpha'}\partial_{\alpha'}^2\frac{1}{Z_{,\alpha'}} \end{equation} so \begin{equation}\label{2229} \nm{(\partial_t+b\partial_{\alpha '})\partial_{\alpha'}^2\frac{1}{Z_{,\alpha'}}}_{L^2}\lesssim C\paren{\frak E, E_2, \nm{\frac{1}{Z_{,\alpha'}}}_{L^\infty}}(E_3^{1/2}+1). \end{equation} We apply \eqref{2020} to \eqref{2225}, and obtain \begin{equation}\label{2230} \nm{(\partial_t+b\partial_{\alpha '})\frac{1}{Z_{,\alpha'}}}_{L^\infty} \le C(\frak E)\nm{\frac{1}{Z_{,\alpha'}}}_{L^\infty}. \end{equation} \subsubsection{ Controlling $\|\partial_{\alpha'}^2(\partial_t+b\partial_{\alpha '}) b_{\alpha'}\|_{L^2}$ }\label{da2dtba} By \eqref{eq:c11}, \begin{equation}\label{2223} \partial_{\alpha'}^2(\partial_t+b\partial_{\alpha '}) b_{\alpha'}=(\partial_t+b\partial_{\alpha '})\partial_{\alpha'}^2 b_{\alpha'}+(\partial_{\alpha'} b_{\alpha'})^2+2 b_{\alpha'}\partial_{\alpha'}^2 b_{\alpha'} \end{equation} where by H\"older's and Sobolev inequalities \eqref{eq:sobolev}, \begin{equation}\label{2224} \begin{aligned} &\|(\partial_{\alpha'} b_{\alpha'})^2\|_{L^2}+\| b_{\alpha'}\partial_{\alpha'}^2 b_{\alpha'}\|_{L^2} \lesssim \|\partial_{\alpha'} b_{\alpha'}\|_{L^2}\|\partial_{\alpha'} b_{\alpha'}\|_{L^\infty}+\|\partial_{\alpha'}^2 b_{\alpha'}\|_{L^2}\| b_{\alpha'}\|_{L^\infty}\\&\lesssim \|\partial_{\alpha'} b_{\alpha'}\|_{L^2}^{3/2}\|\partial_{\alpha'}^2b_{\alpha'}\|_{L^2}^{1/2}+\|\partial_{\alpha'}^2 b_{\alpha'}\|_{L^2}\| b_{\alpha'}\|_{L^\infty}\\&\le C\paren{\frak E, E_2, \nm{\frac 1{Z_{,{\alpha '}}}}_{L^\infty}} (E_3^{1/2}+1). \end{aligned} \end{equation} Now we consider $(\partial_t+b\partial_{\alpha '})\partial_{\alpha'}^2 b_{\alpha'}$. We begin with \eqref{ba-1}, differentiating twice with respect to ${\alpha '}$, then to $\partial_t+b\partial_{\alpha '}$, \begin{equation}\label{2232} \begin{aligned} (\partial_t+b\partial_{\alpha '})\partial_{\alpha '}^2(b_{\alpha '}-2\Re D_{\alpha '} Z_t)&=\sum_{k=0}^2 C_2^k\Re(\partial_t+b\partial_{\alpha '}) \paren{\bracket{ \partial_{\alpha '}^k \frac1{Z_{,{\alpha '}}}, \mathbb H}\partial_{\alpha '}^{2-k} Z_{t,\alpha'}}\\&+ \sum_{k=0}^2 C_2^k\Re(\partial_t+b\partial_{\alpha '}) \paren{\bracket{\partial_{\alpha '}^k Z_t, \mathbb H}\partial_{\alpha '}^{3-k} \frac1{Z_{,{\alpha '}}} }. \end{aligned} \end{equation} where $C_2^0=1, C_2^1=2, C_2^2=1$. 
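The binomial expansion in \eqref{2232}, like the one in \eqref{2222-1}, is simply the Leibniz rule for commutators with $\mathbb H$: since the Hilbert transform commutes with $\partial_{\alpha '}$, differentiating $\bracket{f,\mathbb H}g=f\,\mathbb H g-\mathbb H(fg)$ twice in ${\alpha '}$ gives, for generic functions $f$ and $g$, $$\partial_{\alpha '}^2\bracket{f,\mathbb H}g=\sum_{k=0}^2 C_2^k\bracket{\partial_{\alpha '}^k f,\mathbb H}\partial_{\alpha '}^{2-k}g,$$ applied in \eqref{2232} with $(f,g)=\paren{\frac1{Z_{,{\alpha '}}}, Z_{t,{\alpha '}}}$ and $(f,g)=\paren{Z_t, \partial_{\alpha '}\frac1{Z_{,{\alpha '}}}}$. 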
We expand $(\partial_t+b\partial_{\alpha '})\partial_{\alpha'}^2 D_{\alpha '} Z_t$ by product rules, and use \eqref{eq:c14} to further expand the right hand side of \eqref{2232}; we then use \eqref{3.20}, \eqref{3.21}, \eqref{3.22}, \eqref{eq:b12} and H\"older's inequality to do the estimates. We have \begin{equation}\label{2233} \begin{aligned} &\|(\partial_t+b\partial_{\alpha '})\partial_{\alpha'}^2b_{\alpha'}\|_{L^2}\lesssim \|(\partial_t+b\partial_{\alpha '})\partial_{\alpha'}^3Z_{t}\|_{L^2}\nm{\frac1{Z_{,\alpha'}}}_{L^\infty}\\&+\|\partial_{\alpha'}^3Z_{t}\|_{L^2}\nm{(\partial_t+b\partial_{\alpha '})\frac1{Z_{,\alpha'}}}_{L^\infty}+ \|(\partial_t+b\partial_{\alpha '})\partial_{\alpha'}^2Z_{t}\|_{L^\infty}\nm{\partial_{\alpha'}\frac{1}{Z_{,\alpha'}}}_{L^2}\\&+\|\partial_{\alpha'}^2Z_{t}\|_{L^\infty}\paren{\nm{(\partial_t+b\partial_{\alpha '})\partial_{\alpha'}\frac{1}{Z_{,\alpha'}}}_{L^2}+\nm{\partial_{\alpha'}(\partial_t+b\partial_{\alpha '})\frac{1}{Z_{,\alpha'}}}_{L^2}}\\& +\|\partial_{\alpha'}^2Z_{t}\|_{L^\infty}\|b_{\alpha'}\|_{L^\infty}\nm{\partial_{\alpha'}\frac{1}{Z_{,\alpha'}}}_{L^2}+\|(\partial_t+b\partial_{\alpha '})\partial_{\alpha'}Z_{t}\|_{L^\infty}\nm{\partial_{\alpha'}^2\frac{1}{Z_{,\alpha'}}}_{L^2}\\&+\|\partial_{\alpha'}Z_{t}\|_{L^\infty}\|b_{\alpha'}\|_{L^\infty}\nm{\partial_{\alpha'}^2\frac{1}{Z_{,\alpha'}}}_{L^2}+\|Z_{tt,\alpha'}\|_{L^\infty}\nm{\partial_{\alpha'}^2\frac{1}{Z_{,\alpha'}}}_{L^2}\\&+\|Z_{t,\alpha'}\|_{L^\infty}\nm{(\partial_t+b\partial_{\alpha '})\partial_{\alpha'}^2\frac{1}{Z_{,\alpha'}}}_{L^2}. \end{aligned} \end{equation} This, together with \eqref{2224} gives, \begin{equation}\label{2234} \|(\partial_t+b\partial_{\alpha '})\partial_{\alpha'}^2b_{\alpha'}\|_{L^2}+\|\partial_{\alpha'}^2(\partial_t+b\partial_{\alpha '})b_{\alpha'}\|_{L^2}\lesssim C\paren{\frak E, E_2, \nm{\frac1{Z_{,{\alpha '}}}}_{L^\infty}}(E_3^{1/2}+1). \end{equation} \subsubsection{Controlling $\frac d{dt} E_3(t)$}\label{ddte3} We know $E_3(t)$ consists of $E_{D_{\alpha '} \partial_{\alpha '}^2 \bar Z_t}$ and $\|\partial_{\alpha '}^3 \bar Z_t\|_{L^2}^2$. We apply Lemma~\ref{basic-e} to $E_{D_{\alpha '} \partial_{\alpha '}^2 \bar Z_t}$ and Lemma~\ref{basic-e2} to $\|\partial_{\alpha '}^3 \bar Z_t\|_{L^2}^2$. We begin with $\|\partial_{\alpha '}^3 \bar Z_t\|_{L^2}^2$. We have, by Lemma~\ref{basic-e2}, \begin{equation}\label{2235} \frac d{dt}\|\partial_{\alpha '}^3 \bar Z_t\|_{L^2}^2\lesssim \|(\partial_t+b\partial_{\alpha '})\partial_{\alpha '}^3 \bar Z_t\|_{L^2}\|\partial_{\alpha '}^3 \bar Z_t\|_{L^2}+\|b_{\alpha '}\|_{L^\infty}\|\partial_{\alpha '}^3 \bar Z_t\|_{L^2}^2 \end{equation} We have controlled all the factors, in \eqref{2020} and \eqref{2203}. We have \begin{equation}\label{2236} \frac d{dt}\|\partial_{\alpha '}^3 \bar Z_t\|_{L^2}^2\lesssim C(\frak E) E_3(t). \end{equation} We now consider $E_{D_{\alpha '} \partial_{\alpha '}^2 \bar Z_t}$. Applying Lemma~\ref{basic-e} to $\Theta=D_{\alpha '} \partial_{\alpha '}^2 \bar Z_t$ yields \begin{equation}\label{2237} \frac d{dt} E_{D_{\alpha '} \partial_{\alpha '}^2 \bar Z_t}(t)\le \nm{\frac{\frak a_t}{\frak a}\circ h^{-1}}_{L^\infty} E_{D_{\alpha '} \partial_{\alpha '}^2 \bar Z_t}(t)+ 2E_{D_{\alpha '} \partial_{\alpha '}^2 \bar Z_t}(t)^{1/2}\paren{\int \frac{|\mathcal PD_{\alpha '} \partial_{\alpha '}^2 \bar Z_t|^2}{\mathcal A}\,d{\alpha '}}^{1/2} \end{equation} We have controlled the factor $\nm{\frac{\frak a_t}{\frak a}\circ h^{-1}}_{L^\infty} $ in \S\ref{ata-da1}, we are left with the second term. 
We know, by $\mathcal A |Z_{,{\alpha '}}|^2=A_1\ge 1$, that \begin{equation}\label{2238} \int \frac{|\mathcal PD_{\alpha '} \partial_{\alpha '}^2 \bar Z_t|^2}{\mathcal A}\,d{\alpha '}\le \int |Z_{,{\alpha '}}\mathcal PD_{\alpha '} \partial_{\alpha '}^2 \bar Z_t|^2\,d{\alpha '}. \end{equation} We compute \begin{equation} \label{2239} \mathcal P D_{\alpha '} \partial_{\alpha '}^2 \bar Z_{t}= \bracket{\mathcal P, D_{\alpha '}} \partial_{\alpha '}^2 \bar Z_{t}+ D_{\alpha '}\bracket{\mathcal P,\partial_{\alpha '}} \bar Z_{t,{\alpha '}}+ D_{\alpha '}\partial_{\alpha '} [\mathcal P , \partial_{\alpha '}] \bar Z_{t}+D_{\alpha '} \partial_{\alpha '}^2\mathcal P \bar Z_t; \end{equation} and expand further by \eqref{eq:c5-1}, \begin{equation}\label{2240} \bracket{\mathcal P, D_{\alpha '}} \partial_{\alpha '}^2 \bar Z_{t}= (-2D_{\alpha '} Z_{tt}) D_{\alpha '} \partial_{\alpha '}^2 \bar Z_{t}-2(D_{\alpha '} Z_t)(\partial_t+b\partial_{\alpha '}) D_{\alpha '} \partial_{\alpha '}^2 \bar Z_{t}; \end{equation} by \eqref{eq:c10} and product rules, \begin{equation}\label{2241} \begin{aligned} \partial_{\alpha '}[\mathcal P, \partial_{\alpha '}]\bar Z_{t,{\alpha '}}& =-(\partial_t+b\partial_{\alpha '})(\partial_{\alpha '} b_{\alpha '}\partial_{\alpha '}^2 \bar Z_{t})-\partial_{\alpha '} b_{\alpha '}\partial_{\alpha '} (\partial_t+b\partial_{\alpha '})\bar Z_{t,{\alpha '}} -i\partial_{\alpha '}\mathcal A_{\alpha '} \partial_{\alpha '}^2 \bar Z_{t}\\& -(\partial_t+b\partial_{\alpha '})( b_{\alpha '}\partial_{\alpha '}^3 \bar Z_{t})- b_{\alpha '}\partial_{\alpha '}^2 (\partial_t+b\partial_{\alpha '})\bar Z_{t,{\alpha '}} -i\mathcal A_{\alpha '} \partial_{\alpha '}^3 \bar Z_{t}\\&- b_{\alpha '}^2\partial_{\alpha '}^3 \bar Z_{t}-b_{\alpha '}\partial_{\alpha '} b_{\alpha '} \partial_{\alpha '}^2 \bar Z_{t}; \end{aligned} \end{equation} and by \eqref{eq:c10}, \begin{equation}\label{2242} \partial_{\alpha '}^2[\mathcal P,\partial_{\alpha '}]\bar Z_t=-\partial_{\alpha '}^2(\partial_t+b\partial_{\alpha '})(b_{\alpha '} \partial_{{\alpha '}}\bar Z_{t})-\partial_{\alpha '}^2(b_{\alpha '}\partial_{\alpha '} \bar Z_{tt})-i\partial_{\alpha '}^2(\mathcal A_{\alpha '} \partial_{\alpha '} \bar Z_t), \end{equation} and then expand \eqref{2242} by product rules. We have controlled all the factors on the right hand sides of \eqref{2240}, \eqref{2241} and \eqref{2242} in \eqref{2020}, \eqref{2144} and \S\ref{quantities-e3} - \S\ref{da2dtba}. We have, by H\"older's inequality, \begin{equation}\label{2243} \int |Z_{,{\alpha '}} \bracket{\mathcal P, D_{\alpha '}} \partial_{\alpha '}^2 \bar Z_{t}|^2+|\partial_{\alpha '}[\mathcal P, \partial_{\alpha '}]\bar Z_{t,{\alpha '}}|^2+|\partial_{\alpha '}^2[\mathcal P,\partial_{\alpha '}]\bar Z_t|^2\,d{\alpha '}\lesssim C\paren{\frak E, E_2, \nm{\frac1{Z_{,{\alpha '}}}}_{L^\infty}}(E_3+1). \end{equation} We are left with the last term $\partial_{\alpha '}^3 \mathcal P\bar Z_{t}$. We expand by product rules, starting from \eqref{quasi-r1}; we have \begin{equation}\label{2244} \begin{aligned} \partial_{\alpha '}^3\mathcal P \bar Z_{t}&= \frac{\frak a_t}{\frak a}\circ h^{-1} \partial_{\alpha '}^3\bar Z_{tt}+3\partial_{\alpha '}\paren{\frac{\frak a_t}{\frak a}\circ h^{-1} }\partial_{\alpha '}^2\bar Z_{tt}+3\partial_{\alpha '}^2\paren{\frac{\frak a_t}{\frak a}\circ h^{-1} }\bar Z_{tt,{\alpha '}}\\&+\partial_{\alpha '}^3\paren{\frac{\frak a_t}{\frak a}\circ h^{-1} }(\bar Z_{tt}-i). 
\end{aligned} \end{equation} Let \begin{equation}\label{2245} \mathcal R_2= \frac{\frak a_t}{\frak a}\circ h^{-1} \partial_{\alpha '}^3\bar Z_{tt}+3\partial_{\alpha '}\paren{\frac{\frak a_t}{\frak a}\circ h^{-1} }\partial_{\alpha '}^2\bar Z_{tt}+3\partial_{\alpha '}^2\paren{\frac{\frak a_t}{\frak a}\circ h^{-1} }\bar Z_{tt,{\alpha '}}. \end{equation} We have controlled all the factors of the terms in $\mathcal R_2$, with \begin{equation}\label{2246} \begin{aligned} &\nm{\mathcal R_2}_{L^2} \le \nm{\frac{\frak a_t}{\frak a}\circ h^{-1}}_{L^\infty}\nm{ \partial_{\alpha '}^3\bar Z_{tt}}_{L^2}+3\nm{\partial_{\alpha '}\paren{\frac{\frak a_t}{\frak a}\circ h^{-1} }}_{L^2}\nm{\partial_{\alpha '}^2\bar Z_{tt}}_{L^\infty}\\&+3\nm{\partial_{\alpha '}^2\paren{\frac{\frak a_t}{\frak a}\circ h^{-1} }}_{L^2}\nm{\bar Z_{tt,{\alpha '}}}_{L^2} \lesssim C\paren{\frak E, E_2, \nm{\frac1{Z_{,{\alpha '}}}}_{L^\infty}}(E_3^{1/2}+1). \end{aligned} \end{equation} We are left with controlling $\nm{ \partial_{\alpha '}^3\paren{\frac{\frak a_t}{\frak a}\circ h^{-1} }(\bar Z_{tt}-i)}_{L^2}$. We use an idea similar to that in \S\ref{ddtea}, that is, we take advantage of the fact that $ \partial_{\alpha '}^3\paren{\frac{\frak a_t}{\frak a}\circ h^{-1} }$ is purely real. Applying $(I-\mathbb H)$ to both sides of \eqref{2244}, with the first three terms replaced by $\mathcal R_2$, and commuting out $\bar Z_{tt}-i$ yields \begin{equation}\label{2247} (I-\mathbb H)\partial_{\alpha '}^3\mathcal P \bar Z_{t}=(I-\mathbb H)\mathcal R_2+[\bar Z_{tt},\mathbb H]\partial_{\alpha '}^3\paren{\frac{\frak a_t}{\frak a}\circ h^{-1} }+ (\bar Z_{tt}-i)(I-\mathbb H)\partial_{\alpha '}^3\paren{\frac{\frak a_t}{\frak a}\circ h^{-1} }. \end{equation} Now \begin{equation}\label{2248} \partial_{\alpha '}^3\mathcal P \bar Z_{t}=\partial_{\alpha '}^2 [\partial_{\alpha '}, \mathcal P]\bar Z_t+ \partial_{\alpha '} [\partial_{\alpha '}, \mathcal P] \bar Z_{t,{\alpha '}}+[\partial_{\alpha '}, \mathcal P]\partial_{\alpha '}^2\bar Z_t+\mathcal P\partial_{\alpha '}^3 \bar Z_t; \end{equation} and by \eqref{eq:c10}, \begin{equation}\label{2249} [\mathcal P, \partial_{\alpha '}]\partial_{\alpha '}^2\bar Z_t=-(\partial_t+b\partial_{\alpha '})(b_{\alpha '}\partial_{\alpha '} \partial_{\alpha '}^2\bar Z_t)-b_{\alpha '}\partial_{\alpha '} (\partial_t+b\partial_{\alpha '})\partial_{\alpha '}^2\bar Z_t-i\mathcal A_{\alpha '} \partial_{\alpha '} \partial_{\alpha '}^2\bar Z_t \end{equation} so by \S\ref{basic-quantities}-\S\ref{dtdab}, \S\ref{quantities-e3} and H\"older's inequality, \begin{equation}\label{2250} \|[\mathcal P, \partial_{\alpha '}]\partial_{\alpha '}^2\bar Z_t\|_{L^2}\lesssim C(\frak E) E_3^{1/2}. \end{equation} We apply Lemma~\ref{basic-3-lemma} to the last term in \eqref{2248}. We then have, by \eqref{2243}, \eqref{2250} and Lemma~\ref{basic-3-lemma}, that \begin{equation}\label{2251} \| (I-\mathbb H)\partial_{\alpha '}^3\mathcal P \bar Z_{t}\|_{L^2}\lesssim C\paren{\frak E, E_2, \nm{\frac1{Z_{,{\alpha '}}}}_{L^\infty}}(E_3^{1/2}+1).
\end{equation} This gives, by \eqref{2247}, that \begin{equation}\label{2252} \begin{aligned} \nm{ \partial_{\alpha '}^3\paren{\frac{\frak a_t}{\frak a}\circ h^{-1} }(\bar Z_{tt}-i)}_{L^2}&\le \|(I-\mathbb H)\partial_{\alpha '}^3\mathcal P \bar Z_{t}\|_{L^2}+\|\mathcal R_2\|_{L^2} +\nm{[\bar Z_{tt},\mathbb H]\partial_{\alpha '}^3\paren{\frac{\frak a_t}{\frak a}\circ h^{-1} }}_{L^2}\\& \lesssim C\paren{\frak E, E_2, \nm{\frac1{Z_{,{\alpha '}}}}_{L^\infty}}(E_3^{1/2}+1); \end{aligned} \end{equation} here for the commutator, we used \eqref{3.20}, \eqref{2144}, and \eqref{2224-1}. Combining with \eqref{2244}, \eqref{2245}, \eqref{2246} yields \begin{equation} \nm{ \partial_{\alpha '}^3\mathcal P \bar Z_{t} }_{L^2} \lesssim C\paren{\frak E, E_2, \nm{\frac1{Z_{,{\alpha '}}}}_{L^\infty}}(E_3^{1/2}+1). \end{equation} Further combining with \eqref{2238}, \eqref{2239}, \eqref{2243} gives \begin{equation} \int \frac{|\mathcal PD_{\alpha '} \partial_{\alpha '}^2 \bar Z_t|^2}{\mathcal A}\,d{\alpha '}\lesssim C\paren{\frak E, E_2, \nm{\frac1{Z_{,{\alpha '}}}}_{L^\infty}}(E_3+1). \end{equation} By \eqref{2237}, this shows that Proposition~\ref{step2} holds. \subsection{Completing the proof for Theorem~\ref{blow-up}}\label{complete1} Now we continue the discussion in \S\ref{proof1}, assuming that the initial data satisfies the assumption of Theorem~\ref{blow-up}, and the solution $Z$ satisfies the regularity property in Theorem~\ref{blow-up}. By \eqref{step1-2} and the ensuing discussion, to complete the proof of Theorem~\ref{blow-up}, it suffices to show that for the given data, $E_2(0)<\infty$ and $E_3(0)<\infty$; and that $\sup_{[0, T]}(E_2(t)+E_3(t)+\frak E(t))$ controls the higher order Sobolev norm $\sup_{[0, T]}(\|\partial_{\alpha '}^3Z_t(t)\|_{\dot{H}^{1/2}}+\|\partial_{\alpha '}^3 Z_{tt}(t)\|_{L^2})$. By \eqref{2117-1} and \eqref{2124}, \begin{equation} Z_{,\alpha'}(\partial_t+b\partial_{\alpha '}) \paren{\frac1{Z_{,\alpha'}}\partial_{\alpha'}^2\bar Z_t}= \partial_{\alpha'}^2\bar Z_{tt}- (b_{\alpha '}+D_{\alpha '} Z_t)\partial_{\alpha'}^2\bar Z_t- (\partial_{\alpha '} b_{\alpha '})\bar Z_{t,{\alpha '}}; \end{equation} expanding further according to the available estimates in \eqref{2001}, \eqref{2123}, and using H\"older's inequality, we have \begin{equation} \begin{aligned} \nm{ Z_{,\alpha'}(\partial_t+b\partial_{\alpha '}) D_{\alpha '}\partial_{\alpha'}\bar Z_t}_{L^2}&\lesssim \|\partial_{\alpha'}^2\bar Z_{tt}\|_{L^2}+ \|Z_{t,{\alpha '}}\|_{L^2}\nm{\partial_{\alpha '}\frac1{Z_{,{\alpha '}}}}_{L^2}\|\partial_{\alpha '}^2 Z_{t}\|_{L^2}+\|D_{\alpha '} Z_t\partial_{\alpha '}^2 \bar Z_t\|_{L^2}\\&+ \|Z_{t,{\alpha '}}\|_{L^\infty}^2\nm{\partial_{\alpha '}\frac1{Z_{,{\alpha '}}}}_{L^2}+\|(\partial_{\alpha '} D_{\alpha '} Z_t)\partial_{\alpha '} \bar Z_t\|_{L^2}; \end{aligned} \end{equation} it is clear that we also have \begin{equation} \nm{\frac1{Z_{,{\alpha '}}}\partial_{\alpha '}^2\bar Z_t}_{\dot H^{1/2}}\le C\paren{\nm{\frac 1{Z_{,{\alpha '}}}-1}_{H^1}, \|Z_t\|_{H^{2+1/2}}}; \end{equation} so for the given initial data, we have $E_2(0)<\infty$. A similar argument shows that we also have $E_3(0)<\infty$. This implies, by \eqref{step1-2} and the ensuing discussion, that $\sup_{[0, T_0)}(E_2(t)+E_3(t))<\infty$ provided $\sup_{[0, T_0)}\frak E(t)<\infty$.
On the other hand, we have shown in \eqref{2220} that $\|\partial_{\alpha '}^3 \bar Z_{tt}(t)\|_{L^2}$ is controlled by $E_2(t)$, $E_3(t)$ and $\nm{\frac1{Z_{,{\alpha '}}}(t)}_{L^\infty}$; and by \eqref{Hhalf}, $$\|\partial_{\alpha'}^3 \bar Z_{t}\|_{\dot H^{1/2}}\lesssim \|Z_{,\alpha'}\|_{L^\infty}\paren{\nm{\frac1{Z_{,\alpha'}}\partial_{\alpha'}^3 \bar Z_{t}}_{\dot H^{1/2}}+\nm{\partial_{\alpha'}\frac1{Z_{,\alpha'}}}_{L^2}\|\partial_{\alpha'}^3 \bar Z_{t}\|_{L^2}}, $$ so $\|\partial_{\alpha'}^3 \bar Z_{t}\|_{\dot H^{1/2}}$ is controlled by $E_3(t)$, $\frak E(t)$ and $\|Z_{,\alpha'}(t)\|_{L^\infty}$. With a further application of \eqref{2101} and \eqref{2101-1}, we have \begin{equation} \sup_{[0, T_0)} \|\partial_{\alpha '}^3 \bar Z_{tt}(t)\|_{L^2}+ \|\partial_{\alpha'}^3 \bar Z_{t}\|_{\dot H^{1/2}}<\infty,\quad\text{provided}\quad \sup_{[0, T_0)}\frak E(t)<\infty. \end{equation} This, together with \eqref{2101}, \eqref{2109}, \eqref{2110} and Theorem~\ref{prop:local-s} shows that Theorem~\ref{blow-up} holds. \section{The proof of Theorem~\ref{unique}}\label{proof3} \subsection{Some basic preparations}\label{prepare} We begin with some basic preparatory analysis that will be used in the proof of Theorem~\ref{unique}. In the first lemma we construct an energy functional for the difference of the solutions of an equation of the type \eqref{quasi-r}. We will apply Lemma~\ref{dlemma1} to $\Theta=\bar Z_t,\ \frac1{Z_{,{\alpha '}}}-1$ and $\bar Z_{tt}$. \begin{lemma}\label{dlemma1} Assume $\Theta$, $\tilde{\Theta}$ are smooth and decay at the spatial infinity, and satisfy \begin{equation}\label{eqdf} \begin{aligned} &(\partial_t+b\partial_{\alpha '})^2\Theta+i{\mathcal A}\partial_{\alpha '}\Theta=G,\\ &(\partial_t+\tilde b\partial_{\alpha '})^2\tilde{\Theta}+i\tilde {\mathcal A}\partial_{\alpha '}\tilde{\Theta}=\tilde G.\\ \end{aligned} \end{equation} Let \begin{equation} \mathfrak F(t)=\int \frac{\kappa}{A_1}\abs{ Z_{,{\alpha '}}\paren{(\partial_t+b\partial_{\alpha '})\Theta+\mathfrak c}- {\mathfrak{Z}}_{,{\alpha '}}\circ l\paren{(\partial_t+\tilde b\partial_{\alpha '})\tilde{\Theta}\circ l+\mathfrak c}}^2+i\partial_{\alpha '}(\Theta-\tilde{\Theta}\circ l)\bar{(\Theta-\tilde{\Theta}\circ l)}\,d{\alpha '}, \end{equation} where $\kappa=\sqrt{\frac{A_1}{\tilde {A_1}\circ l} l_{\alpha '}}$, $\frak c$ is a constant, and \begin{equation} {\bf F}(t)=\int \abs{ Z_{,{\alpha '}}\paren{(\partial_t+b\partial_{\alpha '})\Theta+\frak c}- {\mathfrak{Z}}_{,{\alpha '}}\circ l\paren{(\partial_t+\tilde b\partial_{\alpha '})\tilde{\Theta}\circ l+\frak c}}^2\ \ d{\alpha '}. 
\end{equation} Then \begin{equation}\label{dlemma1-inq} \begin{aligned} &\frak F'(t)\lesssim {\bf F}(t)^{\frac12} \nm{\frac{\kappa}{A_1}}_{L^\infty}\nm{ Z_{,{\alpha '}}G- {\mathfrak{Z}}_{,{\alpha '}}\circ l\tilde G\circ l}_{L^2}\\&\quad+{\bf F}(t)^{\frac12} \nm{\frac{\kappa}{A_1}}_{L^\infty}\nm{\frac{(\partial_t+b\partial_{\alpha '})\kappa }{\kappa}}_{L^2}\paren{\nm{Z_{,{\alpha '}}((\partial_t+b\partial_{\alpha '})\Theta+\mathfrak c) }_{L^\infty}+\nm{{\mathfrak{Z}}_{,{\alpha '}}((\partial_t+\tilde b\partial_{\alpha '})\tilde{\Theta}+\mathfrak c)}_{L^\infty}}\\& +{\bf F}(t)^{\frac12} \nm{\frac{\kappa}{A_1}}_{L^\infty}\paren{\nm{\frac{\frak a_t}{\frak a}\circ h^{-1}-\frac{\tilde{\mathfrak a}_t}{\tilde{\mathfrak{a}}}\circ h^{-1}}_{L^2}+\nm{D_{\alpha '} Z_t-\tilde D_{\alpha '}{\mathfrak{Z}}_t\circ l }_{L^2}}\nm{{\mathfrak{Z}}_{,{\alpha '}}((\partial_t+\tilde b\partial_{\alpha '})\tilde{\Theta}+\mathfrak c)}_{L^\infty}\\&+{\bf F}(t) \nm{\frac{\kappa}{A_1}}_{L^\infty}\paren{ \nm{\frac{\frak a_t}{\frak a}}_{L^\infty}+\nm{D_{\alpha '} Z_t}_{L^\infty}} +\nm{1-\kappa}_{L^2}{\bf F}(t)^{\frac12}\paren{\nm{D_{\alpha '}\Theta}_{L^\infty}+\nm{\frac{l_{\alpha '}}{\kappa}}_{L^\infty}\nm{\tilde D_{\alpha '}\tilde{\Theta}}_{L^\infty}}\\&+2\Re i\int \bar{\paren{\frac1{Z_{,{\alpha '}}}-U_l\frac1{{\mathfrak{Z}}_{,{\alpha '}}} }} \paren{\bar{U_l({\mathfrak{Z}}_{,{\alpha '}}((\partial_t+\tilde b\partial_{\alpha '})\tilde{\Theta}+\frak c))}\Theta_{\alpha '}-\bar{Z_{,{\alpha '}}((\partial_t+b\partial_{\alpha '})\Theta+\frak c)}(\tilde{\Theta}\circ l)_{\alpha '}}\,d{\alpha '}. \end{aligned} \end{equation} \end{lemma} \begin{remark} By definition, $\frac{\kappa}{A_1}= \sqrt{\frac{l_{\alpha '}}{A_1\tilde {A_1}\circ l}}$. And in what follows, $\sqrt{\tilde a h_{\alpha '}}\, \kappa\circ h=\frac {\sqrt {A_1}}{{{\mathfrak{Z}}_{,{\alpha '}}}\circ l}\circ h$, $\sqrt{a h_{\alpha '}}=\frac {\sqrt {A_1}}{Z_{,{\alpha '}}}\circ h$. \end{remark} \begin{proof} Let $\theta=\Theta\circ h$, and $\tilde\theta=\tilde{\Theta}\circ \tilde h$. We know $\theta$, $\tilde{\theta}$ satisfy \begin{equation}\label{eqdfl} \begin{aligned} &\partial_t^2\theta+i\mathfrak{a} \partial_\alpha\theta=G\circ h,\\ &\partial_t^2\tilde{\theta}+i\tilde{\mathfrak{a}}\partial_\alpha\tilde{\theta}=\tilde G\circ \tilde h.\\ \end{aligned} \end{equation} Changing coordinate by $h$, we get \begin{equation} \mathfrak F(t)=\int\abs{ \sqrt{\frac{k}{a}} \paren{\theta_t+\mathfrak c}- \frac1{\sqrt{{\tilde a} k}}\paren{\tilde{\theta}_t+\mathfrak c}}^2+i\partial_\alpha(\theta-\tilde{\theta})\bar{(\theta-\tilde{\theta})}\,d\alpha, \end{equation} where $k=\kappa\circ h$, $\sqrt{a}:= \frac{\sqrt{A_1\circ h h_\alpha}}{z_{\alpha}}$ and $\sqrt{{\tilde a} }:= \frac{\sqrt{\tilde {A_1}\circ \tilde h \tilde h_\alpha}}{{\mathfrak{z}}_{\alpha}}$. Notice that here, $\sqrt{a}$ and $\sqrt{{\tilde a} }$ are complex valued, and $|\sqrt{a}|^2=\frak a$, $|\sqrt{\tilde a}|^2=\tilde{\mathfrak{a}}$. 
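For the reader's convenience we include a quick check of the last identity; this is only a sketch, in which we assume the standard change-of-variables relation $\frak a\, h_\alpha=\mathcal A\circ h$ between the Lagrangian coefficient $\frak a$ and $\mathcal A$. Writing $z_\alpha=(Z_{,{\alpha '}}\circ h)\,h_\alpha$ and using $A_1=\mathcal A|Z_{,{\alpha '}}|^2$, we have
\begin{equation*}
|\sqrt{a}|^2=\frac{A_1\circ h\; h_\alpha}{|z_\alpha|^2}=\frac{(\mathcal A\circ h)\,|Z_{,{\alpha '}}\circ h|^2\, h_\alpha}{|Z_{,{\alpha '}}\circ h|^2\, h_\alpha^2}=\frac{\mathcal A\circ h}{h_\alpha}=\frak a;
\end{equation*}
the identity $|\sqrt{\tilde a}|^2=\tilde{\mathfrak{a}}$ follows in the same way.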
Differentiating with respect to $t$, integrating by parts, and then applying equations \eqref{eqdfl}, we get \begin{equation}\label{2300} \begin{aligned} &\mathfrak F'(t)=2\Re\int \bar{\braces{ \sqrt{\frac{k}{a}} \paren{\theta_t+\mathfrak c}- \frac1{\sqrt{{\tilde a} k}}\paren{\tilde{\theta}_t+\mathfrak c}}}\left\{\sqrt{\frac{k}{a}} G\circ h- \frac1{\sqrt{{\tilde a} k}}\tilde G\circ \tilde h\right\}\\&\qquad+\bar{\braces{ \sqrt{\frac{k}{a}} \paren{\theta_t+\mathfrak c}- \frac1{\sqrt{{\tilde a} k}}\paren{\tilde{\theta}_t+\mathfrak c}}}\left\{\frac12{\frac{k_t}k \paren{\sqrt{\frac{k}{a}} \paren{\theta_t+\mathfrak c}+ \frac1{\sqrt{{\tilde a} k}}\paren{\tilde{\theta}_t+\mathfrak c} } }\right\}\\&-\bar{\braces{ \sqrt{\frac{k}{a}} \paren{\theta_t+\mathfrak c}- \frac{\tilde{\theta}_t+\mathfrak c}{\sqrt{{\tilde a} k}}}}\braces{\paren{\frac12\frac{\mathfrak a_t}{\mathfrak{a}}-i\Im D_\alpha z_t}\sqrt{\frac{k}{a}}\paren{\theta_t+\mathfrak c}-\paren{\frac12\frac{\tilde{\mathfrak a}_t}{\tilde{\mathfrak{a}}}-i\Im D_\alpha {\mathfrak{z}}_t}\frac{\tilde{\theta}_t+\mathfrak c}{\sqrt{{\tilde a} k}} }\,d\alpha \\&+ 2\Re i\int \bar{(\theta_t-\tilde{\theta}_t)}(\theta_\alpha-\tilde{\theta}_\alpha)-\bar{\braces{ \sqrt{\frac{k}{a}} \paren{\theta_t+\mathfrak c}- \frac1{\sqrt{{\tilde a} k}}\paren{\tilde{\theta}_t+\mathfrak c}}}\braces{\sqrt k\bar{\sqrt{ a}}\theta_\alpha-\frac{\bar{\sqrt{{\tilde a}}}}{\sqrt{k}}\tilde{\theta}_\alpha } \,d\alpha\\&=I+II \end{aligned} \end{equation} where $I$ consists of the terms in the first three lines and $II$ is the last line: $$II:=2\Re i\int \bar{(\theta_t-\tilde{\theta}_t)}(\theta_\alpha-\tilde{\theta}_\alpha)-\bar{\braces{ \sqrt{\frac{k}{a}} \paren{\theta_t+\mathfrak c}- \frac1{\sqrt{{\tilde a} k}}\paren{\tilde{\theta}_t+\mathfrak c}}}\braces{\sqrt k\bar{\sqrt{ a}}\theta_\alpha-\frac{\bar{\sqrt{{\tilde a}}}}{\sqrt{k}}\tilde{\theta}_\alpha } \,d\alpha.$$ Further regrouping terms in $II$, we get \begin{equation}\label{2301} \begin{aligned} II=&2\Re i\int \frac{(1-k)}{\sqrt{k}}\bar{ \braces{ \sqrt{\frac{k}{a}} \paren{\theta_t+\mathfrak c}- \frac1{\sqrt{{\tilde a} k}}\paren{\tilde{\theta}_t+\mathfrak c}} }\braces{\bar{\sqrt{a}} \theta_\alpha+\bar{\sqrt{{\tilde a}}}\tilde{\theta}_\alpha} \,d\alpha\\&+ 2\Re i\int \bar{(\sqrt{a}-\sqrt{{\tilde a}} k )}\paren{\bar{\frac1{\sqrt{{\tilde a}} k}(\tilde{\theta}_t+\frak c)}\theta_\alpha-\bar{\frac1{\sqrt{a}}(\theta_t+\frak c)}\tilde{\theta}_\alpha}\,d\alpha. \end{aligned} \end{equation} Changing variables by $h^{-1}$ in the integrals in \eqref{2300} and \eqref{2301}, and then applying Cauchy-Schwarz and H\"older's inequalities, we obtain \eqref{dlemma1-inq}. \end{proof} We have the following basic identities and inequalities. \begin{proposition} Let $\mathcal Q_l= U_{l}\mathbb H U_{l^{-1}}-\mathbb H$, where $l:\mathbb R\to \mathbb R$ is a diffeomorphism,\footnote{We say $l:\mathbb R\to\mathbb R$ is a diffeomorphism, if $l:\mathbb R\to\mathbb R$ is one-to-one and onto, and $l, \ l^{-1}\in C^1(\mathbb R)$, with $\|l_{\alpha '}\|_{L^\infty}+\|(l^{-1})_{\alpha '}\|_{L^\infty}<\infty$.} with $l_{\alpha '}-1\in L^2$.
For any $f\in H^1(\mathbb R)$, we have \begin{align} \nm{\mathcal Q_l f}_{\dot H^{\frac12}}&\le C(\nm{(l^{-1})_{\alpha '}}_{L^\infty}, \nm{l_{\alpha '}}_{L^\infty})\|l_{\alpha '}-1\|_{L^2}\|\partial_{\alpha '} f\|_{L^2};\label{q1}\\ \nm{\mathcal Q_l f}_{L^\infty}&\le C(\nm{(l^{-1})_{\alpha '}}_{L^\infty}, \nm{l_{\alpha '}}_{L^\infty})\|l_{\alpha '}-1\|_{L^2}\|\partial_{\alpha '} f\|_{L^2};\label{q2}\\ \nm{\mathcal Q_l f}_{L^2}&\le C(\nm{(l^{-1})_{\alpha '}}_{L^\infty}, \nm{l_{\alpha '}}_{L^\infty})\|l_{\alpha '}-1\|_{L^2}\| f\|_{L^\infty};\label{q3}\\ \nm{\mathcal Q_l f}_{L^2}&\le C(\nm{(l^{-1})_{\alpha '}}_{L^\infty}, \nm{l_{\alpha '}}_{L^\infty})\|l_{\alpha '}-1\|_{L^2}\| f\|_{\dot H^{1/2} }.\label{q4} \end{align} \end{proposition} \begin{proof} We know \begin{equation}\label{2305} U_{l}\mathbb H U_{l^{-1}}f({\alpha '})=\frac1{\pi i}\int\frac{f({\beta '}) l_{\beta '}({\beta '})}{l({\alpha '})-l({\beta '})}\,d{\beta '} \end{equation} so \begin{equation}\label{2306} \begin{aligned} \mathcal Q_l f&=\frac1{\pi i}\int\paren{\frac{ l_{\beta '}({\beta '})-1}{l({\alpha '})-l({\beta '})}+\frac{ {\alpha '}-l({\alpha '})-{\beta '}+l({\beta '})}{(l({\alpha '})-l({\beta '}))({\alpha '}-{\beta '})} } f({\beta '})\,d{\beta '}\\& =\frac1{\pi i}\int\paren{\frac{ l_{\beta '}({\beta '})-1}{l({\alpha '})-l({\beta '})}+ \frac{ {\alpha '}-l({\alpha '})-{\beta '}+l({\beta '})}{(l({\alpha '})-l({\beta '}))({\alpha '}-{\beta '})} } (f({\beta '})-f({\alpha '}))\,d{\beta '}, \end{aligned} \end{equation} here in the second step we inserted $-f({\alpha '})$ because $\mathbb H1=0$. Applying the Cauchy-Schwarz inequality and Hardy's inequality \eqref{eq:77} to the second equality in \eqref{2306}, we obtain \eqref{q2} and \eqref{q4}. Using \eqref{3.16} and \eqref{3.17} on the first equality in \eqref{2306}, we get \eqref{q3}. We are left with \eqref{q1}. Differentiating with respect to ${\alpha '}$ and integrating by parts gives \begin{equation}\label{2307} \partial_{\alpha '} \mathcal Q_lf({\alpha '})=\frac1{\pi i}\int\paren{\frac{l_{\alpha '}({\alpha '})}{l({\alpha '})-l({\beta '})}- \frac1{{\alpha '}-{\beta '}}} f_{\beta '}({\beta '}) \,d{\beta '}. \end{equation} Let $p\in C^\infty_0(\mathbb R)$. We have, by using the fact $\mathbb H1=0$ to insert $-p({\beta '})$, that \begin{equation}\label{2308} \begin{aligned} \int p({\alpha '})\partial_{\alpha '} \mathcal Q_lf({\alpha '})\,d{\alpha '}&= \frac1{\pi i}\iint\paren{\frac{l_{\alpha '}({\alpha '})}{l({\alpha '})-l({\beta '})}- \frac1{{\alpha '}-{\beta '}}} f_{\beta '}({\beta '}) (p({\alpha '})-p({\beta '})) \,d{\alpha '} d{\beta '}\\&= \frac1{\pi i}\iint\frac{p({\alpha '})-p({\beta '}) }{l({\alpha '})-l({\beta '})} (l_{\alpha '}({\alpha '})-1)f_{\beta '}({\beta '}) \,d{\alpha '} d{\beta '}\\&+ \frac1{\pi i}\iint\frac{ {\alpha '}-l({\alpha '})-{\beta '}+l({\beta '})}{(l({\alpha '})-l({\beta '}))({\alpha '}-{\beta '})} f_{\beta '}({\beta '}) (p({\alpha '})-p({\beta '})) \,d{\alpha '} d{\beta '}. \end{aligned} \end{equation} Applying the Cauchy-Schwarz inequality and Hardy's inequality \eqref{eq:77} to \eqref{2308}, we get, for some constant $c$ depending only on $\|l_{\alpha '}\|_{L^\infty}$ and $\|(l^{-1})_{\alpha '}\|_{L^\infty}$, \begin{equation}\label{2309} \abs{\int p({\alpha '})\partial_{\alpha '} \mathcal Q_lf({\alpha '})\,d{\alpha '}}\le c\|p\|_{\dot H^{1/2}}\|l_{\alpha '}-1\|_{L^2}\|\partial_{\alpha '} f\|_{L^2}. \end{equation} This proves inequality \eqref{q1}.
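Here, and in several similar duality arguments below, we pass from the bound \eqref{2309} on the pairing against test functions to a bound on the $\dot H^{1/2}$ norm. The step being used is the standard duality characterization of the homogeneous $\dot H^{1/2}$ seminorm (stated here only as a sketch, up to the normalizing constant in \eqref{def-hhalf}): for $u$ with $\partial_{\alpha '} u\in L^2$,
\begin{equation*}
\nm{u}_{\dot H^{1/2}}\lesssim \sup\braces{\abs{\int p({\alpha '})\,\partial_{\alpha '} u({\alpha '})\,d{\alpha '}}:\ p\in C_0^\infty(\mathbb R),\ \nm{p}_{\dot H^{1/2}}\le 1};
\end{equation*}
applied to $u=\mathcal Q_l f$, this turns \eqref{2309} into \eqref{q1}.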
\end{proof} \begin{lemma}\label{hhalf2} Assume that $f, \ g,\ f_1,\ g_1 \in H^1(\mathbb R)$ are the boundary values of some holomorphic functions on $\mathscr P_-$. Then \begin{equation}\label{halfholo} \int \partial_{\alpha '} \mathbb P_A (\bar f g)({\alpha '})f_1({\alpha '}) \bar g_1({\alpha '})\,d{\alpha '}=-\frac1{2\pi i}\iint \frac{(\bar f({\alpha '})-\bar f({\beta '}))( f_1({\alpha '})- f_1({\beta '}))}{({\alpha '}-{\beta '})^2}g({\beta '})\bar g_1({\alpha '})\,d{\alpha '} d{\beta '}. \end{equation} \end{lemma} \begin{proof} Let $f, \ g,\ f_1,\ g_1\in H^1(\mathbb R)$, and are the boundary values of some holomorphic functions in $\mathscr P_-$. We have \begin{equation}\label{2310} 2\mathbb P_A (\bar f g)=(I-\mathbb H)(\bar f g)= [\bar f,\mathbb H]g \end{equation} and \begin{equation}\label{2311} 2\partial_{\alpha '}\mathbb P_A (\bar f g)= \partial_{\alpha '} \bar f\, \mathbb H g-\frac1{\pi i}\int\frac{\bar f({\alpha '})-\bar f({\beta '})}{({\alpha '}-{\beta '})^2} g({\beta '})\,d{\beta '}. \end{equation} Because $\bar g_1 \partial_{\alpha '}\mathbb P_A (\bar f f_1 g)\in L^1(\mathbb R)$ is the boundary value of an anti-holomorphic function in $\mathscr P_-$, by Cauchy integral theorem, \begin{equation}\label{2312} 0=2 \int \bar g_1 \partial_{\alpha '}\mathbb P_A (\bar f f_1 g)\,d{\alpha '} =\int \partial_{\alpha '} \bar f\, f_1 g \bar g_1\,d{\alpha '}-\frac1{\pi i}\iint\frac{\bar f({\alpha '})-\bar f({\beta '})}{({\alpha '}-{\beta '})^2} f_1({\beta '})g({\beta '})\bar g_1({\alpha '})\,d{\alpha '} d{\beta '}, \end{equation} here we applied formula \eqref{2311} to the pair of holomorphic functions $f$ and $f_1 g$, and used the fact that $\mathbb H(f_1 g)=f_1 g$. Now we use \eqref{2311} to compute, because $\mathbb H g=g$, \begin{equation}\label{2313} 2\int \partial_{\alpha '} \mathbb P_A (\bar f g) \, f_1 \bar g_1\,d{\alpha '} =\int \partial_{\alpha '} \bar f\, g f_1 \bar g_1\,d{\alpha '}- \frac1{\pi i}\iint\frac{\bar f({\alpha '})-\bar f({\beta '})}{({\alpha '}-{\beta '})^2} g({\beta '})f_1({\alpha '})\bar g_1({\alpha '})\,d{\alpha '} d{\beta '}. \end{equation} Substituting \eqref{2312} in \eqref{2313}, we get \eqref{halfholo}. \end{proof} \begin{remark}\label{hhalf3} By Cauchy integral theorem, we know for $f, \ g,\ f_1,\ g_1 \in H^1(\mathbb R)$, $$\int \partial_{\alpha '} \mathbb P_A (\bar f g)({\alpha '})f_1({\alpha '}) \bar g_1({\alpha '})\,d{\alpha '}=\int \partial_{\alpha '} \mathbb P_A (\bar f g)\mathbb P_H(f_1 \bar g_1)\,d{\alpha '}=\int \partial_{\alpha '} \mathbb P_A (\bar f g)\bar {\mathbb P_A(\bar f_1 g_1)}\,d{\alpha '} .$$ \end{remark} As a corollary of Lemma~\ref{hhalf2} and Remark~\ref{hhalf3} we have \begin{proposition}\label{hhalf4} Assume that $f, \ g \in H^1(\mathbb R)$. We have \begin{align} \nm{\bracket{f, \mathbb H} g}_{\dot H^{1/2}}&\lesssim \|f\|_{\dot H^{1/2}}(\|g\|_{L^\infty} +\|\mathbb H g\|_{L^\infty});\label{hhalf41}\\ \nm{ \bracket{f, \mathbb H} g }_{\dot H^{1/2}}&\lesssim \|\partial_{\alpha '} f\|_{L^2}\|g\|_{L^2};\label{hhalf42}\\ \nm{\bracket{f, \mathbb H} \partial_{\alpha '} g}_{\dot H^{1/2}}&\lesssim \|g\|_{\dot H^{1/2}}(\|\partial_{\alpha '} f\|_{L^\infty}+\|\partial_{\alpha '} \mathbb H f\|_{L^\infty}).\label{hhalf43} \end{align} \end{proposition} \begin{proof} By Proposition~\ref{prop:comm-hilbe} and the decompositions $f=\mathbb P_A f+\mathbb P_H f$, $g=\mathbb P_A g+\mathbb P_H g$, \begin{equation}\label{2318} \bracket{f,\mathbb H}g=\bracket{\mathbb P_A f,\mathbb H}\mathbb P_H g+\bracket{\mathbb P_H f,\mathbb H}\mathbb P_A g. 
\end{equation} So without loss of generality, we assume $f$ is anti-holomorphic and $g$ is holomorphic, i.e. $f=-\mathbb H f$, $g=\mathbb Hg$. \eqref{hhalf41} is straightforward from \eqref{halfholo}, Remark~\ref{hhalf3} and the definition \eqref{def-hhalf}; and \eqref{hhalf42} can be easily obtained by applying Cauchy-Schwarz inequality and Hardy's inequality \eqref{eq:77} to \eqref{halfholo}. We are left with \eqref{hhalf43}. By integration by parts, we know \begin{equation}\label{2314} [ f,\mathbb H] \partial_{\alpha '} g+[g,\mathbb H]\partial_{\alpha '} f=\frac1{\pi i}\int \frac{( f({\alpha '})-f({\beta '}))(g({\alpha '})-g({\beta '}))}{({\alpha '}-{\beta '})^2}\,d{\beta '}:={\bf r}; \end{equation} and by \eqref{hhalf41}, $$\nm{[g,\mathbb H]\partial_{\alpha '} f}_{\dot H^{1/2}}\lesssim \|g\|_{\dot H^{1/2}}\|\partial_{\alpha '} f\|_{L^\infty}.$$ For the term $\bf r$ in the right hand side of \eqref{2314}, we have \begin{equation}\label{2315} \partial_{\alpha '} {\bf r}=\frac{-2}{\pi i}\int \frac{( f({\alpha '})- f({\beta '}))(g({\alpha '})-g({\beta '}))}{({\alpha '}-{\beta '})^3}\,d{\beta '}+ f_{\alpha '} \mathbb H g_{\alpha '}+g_{\alpha '} \mathbb H f_{\alpha '}; \end{equation} and using $f=-\mathbb Hf$, $g=\mathbb H g$, we find $$ f_{\alpha '} \mathbb H g_{\alpha '}+g_{\alpha '} \mathbb H f_{\alpha '}= f_{\alpha '} g_{\alpha '}-g_{\alpha '} f_{\alpha '}=0.$$ Let $p\in C_0^\infty(\mathbb R)$. We have, using the symmetry of the integrand, \begin{equation}\label{2316} \int p\partial_{\alpha '} {\bf r}\,d{\alpha '}=\frac{-1}{\pi i}\iint \frac{( f({\alpha '})- f({\beta '}))(g({\alpha '})-g({\beta '}))(p({\alpha '})-p({\beta '}))}{({\alpha '}-{\beta '})^3}\,d{\alpha '} d{\beta '}; \end{equation} applying Cauchy-Schwarz inequality and the definition \eqref{def-hhalf}, we get \begin{equation}\label{2317} \abs{\int p\partial_{\alpha '} {\bf r}\,d{\alpha '}}\lesssim \|\partial_{\alpha '} f\|_{L^\infty} \|g\|_{\dot H^{1/2}}\|p\|_{\dot H^{1/2}}, \end{equation} so $ \|{\bf r}\|_{\dot H^{1/2}}\lesssim \|\partial_{\alpha '} f\|_{L^\infty} \|g\|_{\dot H^{1/2}}$. This finishes the proof for \eqref{hhalf43}. \end{proof} \begin{proposition}\label{dl21} Assume $f,\ g, \ f_1, \ g_1\in H^1(\mathbb R)$, and $l:\mathbb R\to \mathbb R$ is a diffeomorphism, with $l_{\alpha '}-1\in L^2$. Then \begin{equation}\label{dl21-inq} \begin{aligned} &\nm{\bracket{f,\mathbb H}\partial_{\alpha '} g-U_l\bracket{f_1,\mathbb H}\partial_{\alpha '} g_1}_{L^2}\lesssim \nm{f-f_1\circ l}_{\dot H^{1/2}}\|\partial_{\alpha '} g\|_{L^2}\\&\qquad\qquad+ \|\partial_{\alpha '} f_1\|_{L^2} \|l_{\alpha '}\|_{L^\infty}^{\frac12} \nm{g-g_1\circ l}_{\dot H^{1/2}}+\|\partial_{\alpha '} f_1\|_{L^2} \|\partial_{\alpha '} g_1\|_{L^2}\nm{l_{\alpha '}-1}_{L^2}. 
\end{aligned} \end{equation} \end{proposition} \begin{proof} We know \begin{equation}\label{2320} \begin{aligned} \bracket{f,\mathbb H}\partial_{\alpha '} g&-U_l\bracket{f_1,\mathbb H}\partial_{\alpha '} g_1=\bracket{f,\mathbb H}\partial_{\alpha '} g-\bracket{f_1\circ l,U_l\mathbb HU_{l^{-1}}(l_{\alpha '})^{-1}}\partial_{\alpha '} (g_1\circ l)\\&= \bracket{f,\mathbb H}\partial_{\alpha '} g-\bracket{f_1\circ l,\mathbb H}\partial_{\alpha '} (g_1\circ l)+\bracket{f_1\circ l,\mathbb H-U_l\mathbb HU_{l^{-1}}(l_{\alpha '})^{-1}}\partial_{\alpha '} (g_1\circ l); \end{aligned} \end{equation} applying Proposition~\ref{prop:half-dir} to the term \begin{equation}\label{2321} \bracket{f,\mathbb H}\partial_{\alpha '} g-\bracket{f_1\circ l,\mathbb H}\partial_{\alpha '} (g_1\circ l)=\bracket{f-f_1\circ l,\mathbb H}\partial_{\alpha '} g+\bracket{f_1\circ l,\mathbb H}\partial_{\alpha '} (g-g_1\circ l) \end{equation} gives \begin{equation}\label{2322} \nm{\bracket{f,\mathbb H}\partial_{\alpha '} g-\bracket{f_1\circ l,\mathbb H}\partial_{\alpha '} (g_1\circ l)}_{L^2}\lesssim \nm{f-f_1\circ l}_{\dot H^{1/2}}\|\partial_{\alpha '} g\|_{L^2}+ \|\partial_{\alpha '} (f_1\circ l)\|_{L^2} \nm{g-g_1\circ l}_{\dot H^{1/2}}. \end{equation} Now by \eqref{2305}, \begin{equation}\label{2323} \begin{aligned} &\bracket{f_1\circ l,\mathbb H-U_l\mathbb HU_{l^{-1}}(l_{\alpha '})^{-1}}\partial_{\alpha '} (g_1\circ l)\\&\qquad\qquad=\frac1{\pi i} \int\frac{(f_1\circ l({\alpha '})-f_1\circ l({\beta '}))(l({\alpha '})-{\alpha '}-l({\beta '})+{\beta '})}{(l({\alpha '})-l({\beta '}))({\alpha '}-{\beta '})}\, \partial_{\beta '} (g_1\circ l)({\beta '}) \,d{\beta '}; \end{aligned} \end{equation} applying Cauchy-Schwarz inequality and Hardy's inequality \eqref{eq:77} we get \begin{equation}\label{2324} \nm{\bracket{f_1\circ l,\mathbb H-U_l\mathbb HU_{l^{-1}}(l_{\alpha '})^{-1}}\partial_{\alpha '} (g_1\circ l)}_{L^2}\lesssim \|\partial_{\alpha '} f_1\|_{L^2}\|l_{\alpha '}-1\|_{L^2}\|\partial_{\alpha '} g_1\|_{L^2}.\end{equation} This finishes the proof for \eqref{dl21-inq}. \end{proof} \begin{proposition}\label{dl22} Assume that $f,\ g,\ f_1, \ g_1$ are smooth and decay at infinity, and $l:\mathbb R\to \mathbb R$ is a diffeomorphism, with $l_{\alpha '}-1\in L^2$. Then there is a constant $c(\|l_{\alpha '}\|_{L^\infty}, \|(l^{-1})_{\alpha '}\|_{L^\infty})$, depending on $\|l_{\alpha '}\|_{L^\infty}, \|(l^{-1})_{\alpha '}\|_{L^\infty}$, such that \begin{equation}\label{dl221} \begin{aligned} &\nm{\bracket{f,\mathbb H}\partial_{\alpha '} g-U_l\bracket{f_1,\mathbb H}\partial_{\alpha '} g_1}_{L^2}\le c(\|l_{\alpha '}\|_{L^\infty}, \|(l^{-1})_{\alpha '}\|_{L^\infty})\nm{\partial_{\alpha '} f-\partial_{\alpha '}(f_1\circ l)}_{L^2}\|g\|_{L^\infty}\\&\qquad\qquad+ c(\|l_{\alpha '}\|_{L^\infty}, \|(l^{-1})_{\alpha '}\|_{L^\infty})(\|\partial_{\alpha '} f_1\|_{L^\infty} \nm{g-g_1\circ l}_{L^2}+\|\partial_{\alpha '} f_1\|_{L^\infty} \| g_1\|_{L^\infty}\nm{l_{\alpha '}-1}_{L^2}). \end{aligned} \end{equation} \begin{equation}\label{dl222} \begin{aligned} &\nm{\bracket{f,\mathbb H}\partial_{\alpha '} g-U_l\bracket{f_1,\mathbb H}\partial_{\alpha '} g_1}_{L^2}\lesssim \nm{\partial_{\alpha '} f-\partial_{\alpha '}(f_1\circ l)}_{L^2}\|g\|_{\dot H^{1/2}}\\&\qquad\qquad+ c(\|l_{\alpha '}\|_{L^\infty}, \|(l^{-1})_{\alpha '}\|_{L^\infty})(\|\partial_{\alpha '} f_1\|_{L^\infty} \nm{g-g_1\circ l}_{L^2}+\|\partial_{\alpha '} f_1\|_{L^\infty} \| g_1\|_{\dot H^{1/2}}\nm{l_{\alpha '}-1}_{L^2}). 
\end{aligned} \end{equation} \end{proposition} \begin{proof} We use the same computation as in the proof for Proposition~\ref{dl21}, and apply Proposition~\ref{B2} to the terms in \eqref{2321} and \eqref{2323} to get \eqref{dl221}. To obtain \eqref{dl222} we apply \eqref{eq:b11} and \eqref{3.20} to \eqref{2321}; and for the term in \eqref{2323}, we first integrate by parts, then apply Cauchy-Schwarz inequality and Hardy's inequality \eqref{eq:77}. \end{proof} \begin{proposition}\label{dhhalf1} Assume that $f,\ g,\ f_1, \ g_1$ are smooth and decay at infinity, and $l:\mathbb R\to \mathbb R$ is a diffeomorphism, with $l_{\alpha '}-1\in L^2$. Then there is a constant $c:=c(\|l_{\alpha '}\|_{L^\infty}, \|(l^{-1})_{\alpha '}\|_{L^\infty})$, depending on $\|l_{\alpha '}\|_{L^\infty}, \|(l^{-1})_{\alpha '}\|_{L^\infty}$, such that \begin{equation}\label{dhhalf1-inq} \begin{aligned} &\nm{\bracket{f,\mathbb H}\partial_{\alpha '} g-U_l\bracket{f_1,\mathbb H}\partial_{\alpha '} g_1}_{\dot H^{1/2}}\lesssim \nm{\partial_{\alpha '} f-\partial_{\alpha '}(f_1\circ l)}_{L^2}\|\partial_{\alpha '} g_1\|_{L^2}\|l_{\alpha '}\|_{L^\infty}^{\frac12}\\&\qquad+ (\|\partial_{\alpha '} f\|_{L^\infty}+\|\partial_{\alpha '} \mathbb H f\|_{L^\infty}) \nm{g-g_1\circ l}_{\dot H^{1/2}}+c\|\partial_{\alpha '} f_1\|_{L^\infty} \| \partial_{\alpha '} g_1\|_{L^2}\nm{l_{\alpha '}-1}_{L^2}. \end{aligned} \end{equation} \end{proposition} \begin{proof} We begin with \eqref{2320} and write the first two terms on the right hand side as \begin{equation}\label{2325} \bracket{f,\mathbb H}\partial_{\alpha '} g-\bracket{f_1\circ l,\mathbb H}\partial_{\alpha '} (g_1\circ l)=\bracket{f-f_1\circ l,\mathbb H}\partial_{\alpha '} (g_1\circ l)+\bracket{f,\mathbb H}\partial_{\alpha '} (g-g_1\circ l); \end{equation} applying \eqref{hhalf42} and \eqref{hhalf43} to \eqref{2325} we get \begin{equation}\label{2326} \begin{aligned} \nm{\bracket{f,\mathbb H}\partial_{\alpha '} g-\bracket{f_1\circ l,\mathbb H}\partial_{\alpha '} (g_1\circ l)}_{\dot H^{1/2}}&\lesssim \nm{\partial_{\alpha '}(f-f_1\circ l)}_{L^2}\|\partial_{\alpha '} (g_1\circ l)\|_{L^2}\\&+ (\|\partial_{\alpha '} f\|_{L^\infty} + \|\partial_{\alpha '} \mathbb Hf\|_{L^\infty})\nm{g-g_1\circ l}_{\dot H^{1/2}}. \end{aligned} \end{equation} Consider the last term on the right hand side of \eqref{2320}. 
For any $p\in C_0^\infty(\mathbb R)$, \begin{equation}\label{2327} \begin{aligned} \int\partial_{\alpha '} p \bracket{f_1\circ l,\mathbb H-U_l\mathbb HU_{l^{-1}}(l_{\alpha '})^{-1}}&\partial_{\alpha '} (g_1\circ l)\,d{\alpha '}\\&=\int\partial_{\alpha '} (g_1\circ l)\bracket{f_1\circ l,\mathbb H-U_l\mathbb HU_{l^{-1}}(l_{\alpha '})^{-1}}\partial_{\alpha '} p \,d{\alpha '}; \end{aligned} \end{equation} the same argument as in the proof of \eqref{dl222}, that is, integrating by parts, then applying Cauchy-Schwarz inequality and Hardy's inequality \eqref{eq:77} gives $$\|\bracket{f_1\circ l,\mathbb H-U_l\mathbb HU_{l^{-1}}(l_{\alpha '})^{-1}}\partial_{\alpha '} p\|_{L^2}\le c\,\|\partial_{\alpha '} f_1\|_{L^\infty} \|l_{\alpha '}-1\|_{L^2}\|p\|_{\dot H^{1/2}}, $$ where $c:= c(\|l_{\alpha '}\|_{L^\infty}, \|(l^{-1})_{\alpha '}\|_{L^\infty})$ is a constant depending on $\|l_{\alpha '}\|_{L^\infty}$ and $ \|(l^{-1})_{\alpha '}\|_{L^\infty}$; so $$\abs{\int\partial_{\alpha '} p \bracket{f_1\circ l,\mathbb H-U_l\mathbb HU_{l^{-1}}(l_{\alpha '})^{-1}}\partial_{\alpha '} (g_1\circ l)\,d{\alpha '}}\le c\|\partial_{\alpha '} (g_1\circ l)\|_{L^2}\|\partial_{\alpha '} f_1\|_{L^\infty} \|l_{\alpha '}-1\|_{L^2}\|p\|_{\dot H^{1/2}}.$$ This finishes the proof for \eqref{dhhalf1-inq}. \end{proof} \begin{proposition}\label{dhhalf2} Assume that $f,\ g,\ f_1, \ g_1$ are smooth and decay at infinity, and $l:\mathbb R\to \mathbb R$ is a diffeomorphism, with $l_{\alpha '}-1\in L^2$. Then there is a constant $c:=c(\|l_{\alpha '}\|_{L^\infty}, \|(l^{-1})_{\alpha '}\|_{L^\infty})$, depending on $\|l_{\alpha '}\|_{L^\infty}, \|(l^{-1})_{\alpha '}\|_{L^\infty}$, such that \begin{equation}\label{dhhalf2-inq} \begin{aligned} &\nm{\bracket{f,\mathbb H} g-U_l\bracket{f_1,\mathbb H} g_1}_{\dot H^{1/2}}\lesssim \nm{ f-f_1\circ l}_{\dot H^{1/2}}(\| g\|_{L^\infty}+\|\mathbb H g\|_{L^\infty}) \\&\qquad+ \|\partial_{\alpha '} f_1\|_{L^2}\|l_{\alpha '}\|_{L^\infty}^{\frac12} \nm{g-g_1\circ l}_{L^2}+c\|\partial_{\alpha '} f_1\|_{L^2} \| g_1\|_{L^\infty}\nm{l_{\alpha '}-1}_{L^2}. \end{aligned} \end{equation} \end{proposition} \begin{proof} Similar to the proof of Proposition~\ref{dl21}, we have \begin{equation}\label{2328} \bracket{f,\mathbb H} g-U_l\bracket{f_1,\mathbb H} g_1= \bracket{f,\mathbb H} g-\bracket{f_1\circ l,\mathbb H}(g_1\circ l) +\bracket{f_1\circ l,\mathbb H-U_l\mathbb HU_{l^{-1}}} (g_1\circ l); \end{equation} writing \begin{equation}\label{2329} \bracket{f,\mathbb H} g-\bracket{f_1\circ l,\mathbb H} (g_1\circ l)=\bracket{f-f_1\circ l,\mathbb H} g+\bracket{f_1\circ l,\mathbb H}(g-g_1\circ l) \end{equation} and applying \eqref{hhalf41} and \eqref{hhalf42} gives, \begin{equation}\label{2330} \begin{aligned} \nm{\bracket{f,\mathbb H} g-\bracket{f_1\circ l,\mathbb H} (g_1\circ l)}_{\dot H^{1/2}}&\lesssim \nm{ f-f_1\circ l}_{\dot H^{1/2}}(\| g\|_{L^\infty}+\|\mathbb H g\|_{L^\infty})\\&+ \|\partial_{\alpha '} (f_1\circ l)\|_{L^2} \nm{g-g_1\circ l}_{L^2}. \end{aligned} \end{equation} Consider the second term on the right hand side of \eqref{2328}. We write \begin{equation}\label{2331} \bracket{f_1\circ l,\mathbb H-U_l\mathbb HU_{l^{-1}}} (g_1\circ l)=\bracket{f_1\circ l,\mathbb H-U_l\mathbb HU_{l^{-1}} \frac1{l_{\alpha '}}} (g_1\circ l)+\bracket{f_1\circ l, U_l\mathbb HU_{l^{-1}}}(\frac1{l_{\alpha '}}-1) (g_1\circ l). \end{equation} Now \begin{equation}\label{2332} \bracket{f_1\circ l, U_l\mathbb HU_{l^{-1}}}((l_{\alpha '})^{-1}-1) (g_1\circ l)=U_l [f_1,\mathbb H]\paren{((l^{-1})_{\alpha '}-1)g_1}. 
\end{equation} Changing variables, and then using \eqref{hhalf42} yields \begin{equation}\label{2333} \nm{ U_l [f_1,\mathbb H]\paren{((l^{-1})_{\alpha '}-1)g_1} }_{\dot H^{1/2}}\le c\|\partial_{\alpha '} f_1\|_{L^2} \| g_1\|_{L^\infty}\nm{l_{\alpha '}-1}_{L^2} \end{equation} for some constant $c$ depending on $\|l_{\alpha '}\|_{L^\infty}, \|(l^{-1})_{\alpha '}\|_{L^\infty}$. For the first term on the right hand side of \eqref{2331} we use the duality argument in \eqref{2327}. Let $p\in C^\infty_0(\mathbb R)$, \begin{equation}\label{2334} \int \partial_{\alpha '} p \bracket{f_1\circ l,\mathbb H-U_l\mathbb HU_{l^{-1}} (l_{\alpha '})^{-1}} (g_1\circ l)\,d{\alpha '}=\int g_1\circ l\bracket{f_1\circ l,\mathbb H-U_l\mathbb HU_{l^{-1}} (l_{\alpha '})^{-1}} \partial_{\alpha '} p \,d{\alpha '}, \end{equation} and \begin{equation}\label{2335} \begin{aligned} &\bracket{f_1\circ l,\mathbb H-U_l\mathbb HU_{l^{-1}}(l_{\alpha '})^{-1}}\partial_{\alpha '} p\\&\qquad\qquad=\frac1{\pi i} \int\frac{(f_1\circ l({\alpha '})-f_1\circ l({\beta '}))(l({\alpha '})-{\alpha '}-l({\beta '})+{\beta '})}{(l({\alpha '})-l({\beta '}))({\alpha '}-{\beta '})}\, \partial_{\beta '} p({\beta '}) \,d{\beta '}. \end{aligned} \end{equation} Integrating by parts, then apply Cauchy-Schwarz inequality and Hardy's inequalities \eqref{eq:77} and \eqref{eq:771} gives \begin{equation}\label{2336} \nm{\bracket{f_1\circ l,\mathbb H-U_l\mathbb HU_{l^{-1}}(l_{\alpha '})^{-1}}\partial_{\alpha '} p}_{L^1}\le c \|\partial_{\alpha '} f_1\|_{L^2} \nm{l_{\alpha '}-1}_{L^2}\|p\|_{\dot H^{1/2}}, \end{equation} for some constant $c$ depending on $\|l_{\alpha '}\|_{L^\infty}, \|(l^{-1})_{\alpha '}\|_{L^\infty}$, so \begin{equation}\label{2337} \abs{\int \partial_{\alpha '} p \bracket{f_1\circ l,\mathbb H-U_l\mathbb HU_{l^{-1}} (l_{\alpha '})^{-1}} (g_1\circ l)\,d{\alpha '}}\le c \|g_1\|_{L^\infty}\|\partial_{\alpha '} f_1\|_{L^2} \nm{l_{\alpha '}-1}_{L^2}\|p\|_{\dot H^{1/2}}. \end{equation} This finishes the proof for \eqref{dhhalf2-inq}. \end{proof} We define $$[f, m; \partial_{\alpha '} g]_n:=\frac1{\pi i}\int\frac{(f({\alpha '})-f({\beta '}))(m({\alpha '})-m({\beta '}))^n}{({\alpha '}-{\beta '})^{n+1}}\partial_{\beta '} g({\beta '})\,d{\beta '}.$$ So $[f,m;\partial_{\alpha '} g]=[f, m; \partial_{\alpha '} g]_1$, and $[f,\mathbb H]\partial_{\alpha '} g=[f, m; \partial_{\alpha '} g]_0$. \begin{proposition}\label{d32} Assume that $f,\ m,\ g,\ f_1,\ m_1,\ g_1$ are smooth and $f,\ g,\ f_1,\ g_1$ decay at infinity, and $l:\mathbb R\to\mathbb R$ is a diffeomorphism, with $l_{\alpha '}-1\in L^2$. Then there is a constant $c$, depending on $\|l_{\alpha '}\|_{L^\infty}, \|(l^{-1})_{\alpha '}\|_{L^\infty}$, such that \begin{equation}\label{d32inq} \begin{aligned} &\nm{[f, m; \partial_{\alpha '} g]_n-U_l[f_1, m_1; \partial_{\alpha '} g_1]_n}_{L^2}\le c \nm{f-f_1\circ l}_{\dot H^{1/2}}\nm{\partial_{\alpha '} m}^n_{L^\infty}\|\partial_{\alpha '} g\|_{L^2}\\&+c \|\partial_{\alpha '} f_1\|_{L^2} (\nm{\partial_{\alpha '} m}_{L^\infty}+\nm{\partial_{\alpha '} m_1}_{L^\infty})^{n-1}\nm{\partial_{\alpha '}(m-m_1\circ l)}_{L^2}\nm{\partial_{\alpha '} g}_{L^2}\\&+c\|\partial_{\alpha '} f_1\|_{L^2} \nm{\partial_{\alpha '} m_1}^n_{L^\infty} \nm{g-g_1\circ l}_{\dot H^{1/2}}+c\|\partial_{\alpha '} f_1\|_{L^2}\nm{\partial_{\alpha '} m_1}^n_{L^\infty} \|\partial_{\alpha '} g_1\|_{L^2}\nm{l_{\alpha '}-1}_{L^2}. \end{aligned} \end{equation} \end{proposition} Proposition~\ref{d32} can be proved similarly as for Proposition~\ref{dl21}, we omit the details. 
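To indicate the flavor of the omitted computation, here is a sketch of how the first term on the right hand side of \eqref{d32inq} arises; we use only the Cauchy-Schwarz inequality and the double-integral form of the $\dot H^{1/2}$ seminorm (up to the normalization in \eqref{def-hhalf}). For the piece of the difference carrying $f-f_1\circ l$ we have, for each fixed ${\alpha '}$,
\begin{equation*}
\abs{[f-f_1\circ l, m; \partial_{\alpha '} g]_n({\alpha '})}\le \frac{\nm{\partial_{\alpha '} m}_{L^\infty}^n}{\pi}\paren{\int\frac{\abs{(f-f_1\circ l)({\alpha '})-(f-f_1\circ l)({\beta '})}^2}{({\alpha '}-{\beta '})^2}\,d{\beta '}}^{1/2}\|\partial_{\alpha '} g\|_{L^2};
\end{equation*}
squaring and integrating in ${\alpha '}$ gives $\nm{[f-f_1\circ l, m; \partial_{\alpha '} g]_n}_{L^2}\lesssim \nm{f-f_1\circ l}_{\dot H^{1/2}}\nm{\partial_{\alpha '} m}^n_{L^\infty}\|\partial_{\alpha '} g\|_{L^2}$, which accounts for the first term in \eqref{d32inq}. The remaining terms are obtained by the same kind of telescoping and kernel comparisons as in the proof of Proposition~\ref{dl21}.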
\begin{proposition}\label{d33} Assume that $f,\ m,\ g,\ f_1,\ m_1,\ g_1$ are smooth and decay at infinity, and $l:\mathbb R\to\mathbb R$ is a diffeomorphism, with $l_{\alpha '}-1\in L^2$. Then there is a constant $c$, depending on $\|l_{\alpha '}\|_{L^\infty}, \|(l^{-1})_{\alpha '}\|_{L^\infty}$, such that \begin{equation}\label{d33inq} \begin{aligned} &\nm{[f, g; m]-U_l[f_1, g_1; m_1]}_{L^2}\le c \nm{f-f_1\circ l}_{\dot H^{1/2}}\|\partial_{\alpha '} g\|_{L^2}\nm{ m}_{L^\infty}\\&+c\|\partial_{\alpha '} f_1\|_{L^2} \nm{g-g_1\circ l}_{\dot H^{1/2}}\nm{ m}_{L^\infty}+ c\|\partial_{\alpha '} f_1\|_{L^2}\nm{\partial_{\alpha '} g_1}_{L^2}\nm{m-m_1\circ l}_{L^2}\\& +c\|\partial_{\alpha '} f_1\|_{L^2}\nm{ m_1}_{L^\infty} \|\partial_{\alpha '} g_1\|_{L^2}\nm{l_{\alpha '}-1}_{L^2}. \end{aligned} \end{equation} \end{proposition} Proposition~\ref{d33} straightforwardly follows from Cauchy-Schwarz inequality, Hardy's inequality and the definition of $\dot H^{1/2}$ norm. \begin{proposition}\label{half-product} Assume $f\in \dot H^{1/2}(\mathbb R)\cap L^\infty(\mathbb R)$, $g\in \dot H^{1/2}(\mathbb R)$, and $g$ can be decomposed by \begin{equation}\label{decomp} g=g_1+ pq \end{equation} with $g_1\in L^\infty(\mathbb R)$, $q\in L^2(\mathbb R)$, and $\partial_{\alpha '} p\in L^2(\mathbb R)$, satisfying $\partial_{\alpha '}(pf)\in L^2(\mathbb R)$. Then $fg\in \dot H^{1/2}(\mathbb R)$, and \begin{equation}\label{product-inq} \nm{fg}_{\dot H^{1/2}}\lesssim \nm{f}_{L^\infty}\nm{g}_{\dot H^{1/2}}+ \nm{g_1}_{L^\infty}\nm{f}_{\dot H^{1/2}}+\nm{q}_{L^2}\nm{ \partial_{\alpha '} (pf) }_{L^2}+\nm{q}_{L^2}\nm{ \partial_{\alpha '} p }_{L^2}\nm{f}_{L^\infty}. \end{equation} \end{proposition} \begin{proof} The proof is straightforward by definition. We have \begin{equation} \begin{aligned} &\nm{fg}_{\dot H^{1/2}}^2\lesssim \iint\frac{|f({\beta '})|^2|g({\alpha '})-g({\beta '})|^2}{({\alpha '}-{\beta '})^2}\,d{\alpha '} d{\beta '}+ \iint\frac{|g({\alpha '})|^2|f({\alpha '})-f({\beta '})|^2}{({\alpha '}-{\beta '})^2}\,d{\alpha '} d{\beta '}\\&\lesssim \nm{f}^2_{L^\infty}\nm{g}^2_{\dot H^{1/2}}+ \nm{g_1}^2_{L^\infty}\nm{f}^2_{\dot H^{1/2}}+\iint\frac{|q({\alpha '})|^2|p({\alpha '})f({\alpha '})-p({\beta '})f({\beta '})|^2}{({\alpha '}-{\beta '})^2}\,d{\alpha '} d{\beta '}\\&+\iint\frac{|q({\alpha '})|^2|p({\alpha '})-p({\beta '})|^2|f({\beta '})|^2}{({\alpha '}-{\beta '})^2}\,d{\alpha '} d{\beta '}\\&\lesssim \nm{f}^2_{L^\infty}\nm{g}^2_{\dot H^{1/2}}+ \nm{g_1}^2_{L^\infty}\nm{f}^2_{\dot H^{1/2}}+\nm{q}^2_{L^2}\nm{ \partial_{\alpha '} (pf) }^2_{L^2}+\nm{q}^2_{L^2}\nm{ \partial_{\alpha '} p }^2_{L^2}\nm{f}^2_{L^\infty}, \end{aligned} \end{equation} where in the last step we used Fubini's Theorem and Hardy's inequality \eqref{eq:77}. \end{proof} \subsection{The proof of Theorem~\ref{unique}}\label{proof3.5} In addition to what have already been given, we use the following convention in this section, \S\ref{proof3.5}: we write $A\lesssim B$ if there is a constant $c$, depending only on $\sup_{[0, T]}\frak E(t)$ and $\sup_{[0, T]}\tilde{\mathfrak E} (t)$, such that $A\le cB$. We assume the reader is familiar with the quantities that are controlled by the functional $\frak E(t)$, see \S\ref{proof} and Appendix~\ref{quantities}. We don't always give precise references on these estimates. Let $Z=Z({\alpha '},t)$, ${\mathfrak{Z}}={\mathfrak{Z}}({\alpha '},t)$, $t\in [0, T]$ be solutions of the system \eqref{interface-r}-\eqref{interface-holo}-\eqref{b}-\eqref{a1}, satisfying the assumptions of Theorem~\ref{unique}. 
Recall we defined in \eqref{def-l} \begin{equation}\label{2338} l=l({\alpha '},t)=\tilde h\circ h^{-1}({\alpha '}, t)=\tilde h(h^{-1}({\alpha '},t),t). \end{equation} We will apply Lemma~\ref{dlemma1} to $\Theta=\bar Z_t,\ \frac1{Z_{,{\alpha '}}}-1,\bar Z_{tt}$ and Lemma~\ref{basic-e2} to $l_{\alpha '}-1$ to construct an energy functional $\mathcal F(t)$, and show that the time derivative $\mathcal F'(t)$ can be controlled by $\mathcal F(t)$ and the initial data. We begin with computing the evolutionary equations for these quantities. We have \begin{equation}\label{2339} \partial_t ( l_{\alpha '}\circ h)=\partial_t\paren{\frac{\tilde h_\alpha}{h_\alpha}}=\frac{\tilde h_\alpha}{h_\alpha}\paren{\frac{\tilde h_{t\alpha}}{\tilde h_\alpha}-\frac{h_{t\alpha}}{h_\alpha}}=(l_{\alpha '}\circ h)(\tilde b_{\alpha '}\circ \tilde h-b_{\alpha '}\circ h); \end{equation} precomposing with $h^{-1}$ yields \begin{equation}\label{2340} (\partial_t +b\partial_\alpha) l_{\alpha '}=l_{\alpha '}(\tilde b_{\alpha '}\circ l-b_{\alpha '}). \end{equation} The equation for $\bar Z_t$ is given by \eqref{quasi-r}-\eqref{aux}. To find the equation for $\bar Z_{tt}$ we take a derivative to $t$ to \eqref{quasi-l}: \begin{equation}\label{2341} \begin{aligned} (\partial_t^2+i\frak a\partial_\alpha)\bar z_{tt}&=-i\frak a_t \bar z_{t\alpha}+\partial_t\paren{\frac{\mathfrak{a}_t}{\mathfrak{a}}}(\bar z_{tt}-i)+\frac{\mathfrak{a}_t}{\mathfrak{a}}\bar z_{ttt}\\&=\partial_t\paren{\frac{\mathfrak{a}_t}{\mathfrak{a}}}(\bar z_{tt}-i)+\frac{\mathfrak{a}_t}{\mathfrak{a}}(\bar z_{ttt}-i\frak a \bar z_{t\alpha})\\&=(\bar z_{tt}-i)\paren{\partial_t\paren{\frac{\mathfrak{a}_t}{\mathfrak{a}}}+\paren{\frac{\mathfrak{a}_t}{\mathfrak{a}}}^2+2\paren{\frac{\mathfrak{a}_t}{\mathfrak{a}}}\bar{D_\alpha z_t}}, \end{aligned} \end{equation} here we used equation \eqref{quasi-l} and substituted by \eqref{interface-l}: $-i\frak a \bar z_\alpha=\bar z_{tt}-i$ in the last step. Precomposing with $h^{-1}$, and then substituting $\bar Z_{tt}-i$ by \eqref{aa1}, yields, for $\mathcal P=(\partial_t+b\partial_{\alpha '})^2+i\mathcal A\partial_{\alpha '}$, \begin{equation}\label{eqztt} \mathcal P\bar Z_{tt}= -i\,\frac{A_1}{Z_{,{\alpha '}}} \paren{(\partial_t+b\partial_{\alpha '})\paren{\frac{\mathfrak{a}_t}{\mathfrak{a}}\circ h^{-1}}+\paren{\frac{\mathfrak{a}_t}{\mathfrak{a}}\circ h^{-1}}^2+2\paren{\frac{\mathfrak{a}_t}{\mathfrak{a}}\circ h^{-1}}\bar{D_{\alpha '} Z_t}}:=G_3. \end{equation} To find the equation for $\frac1{Z_{,{\alpha '}}}$ we begin with \eqref{eq:dza}. Precomposing with $h$, then differentiate with respect to $t$ gives \begin{equation}\label{2342} \partial_t^2\paren{\frac {h_\alpha}{z_\alpha}}=\frac {h_\alpha}{z_\alpha}\paren{(b_{\alpha '}\circ h-D_\alpha z_t)^2+\partial_t(b_{\alpha '}\circ h-2\Re D_\alpha z_t)+\partial_t \bar{D_\alpha z_t}} \end{equation} here we subtracted $\partial_t\bar{D_\alpha z_t}$ and then added $\partial_t\bar{D_\alpha z_t}$ in the second factor on the right hand side to take advantage of the formula \eqref{ba}. 
We compute \begin{equation}\label{2343} \partial_t\bar {D_\alpha z_t}=\bar{D_\alpha z_{tt}}-(\bar{D_\alpha z_t})^2, \end{equation} replacing $\bar Z_{tt}$ by \eqref{aa1} yields \begin{equation}\label{2344} \bar{D_{\alpha '} Z_{tt}}=\frac1{\bar Z_{,{\alpha '}}}\partial_{\alpha '}\paren{-i\frac{A_1}{Z_{,{\alpha '}}}}=-i\frac{A_1}{\bar Z_{,{\alpha '}}}\partial_{\alpha '}\paren{\frac{1}{Z_{,{\alpha '}}}}-i\frac1{|Z_{,{\alpha '}}|^2}\partial_{\alpha '} A_1; \end{equation} precomposing equation \eqref{2342} with $h^{-1}$ and substitute in by \eqref{2343}-\eqref{2344}, we get \begin{equation}\label{eqza} \mathcal P\frac1{Z_{,{\alpha '}}}=\frac {1}{Z_{\alpha '}}\paren{(b_{\alpha '}-D_{\alpha '} Z_t)^2+(\partial_t+b\partial_{\alpha '})(b_{\alpha '}-2\Re D_{\alpha '} Z_t)-\paren{\bar {D_{\alpha '} Z_t}}^2-i\frac{\partial_{\alpha '} A_1}{|Z_{,{\alpha '}}|^2}}:=G_2. \end{equation} We record here the equation for $\bar Z_t$, which is the first equation in \eqref{quasi-r}, in which we substituted in by \eqref{aa1}, \begin{equation}\label{eqzt} \mathcal P\bar Z_t=-i\,\frac{\frak a_t}{\frak a}\circ h^{-1} \frac{A_1}{Z_{,{\alpha '}}}:=G_1. \end{equation} \subsubsection{The energy functional $\mathcal F(t)$} The energy functional $\mathcal F(t)$ for the differences of the solutions will consist of $\|l_{\alpha '}(t)-1\|_{L^2(\mathbb R)}^2$ and the functionals $\mathfrak F(t)$ when applied to $\Theta=\bar Z_t(t)$, $\frac1{Z_{,{\alpha '}}}-1$ and $\bar Z_{tt}$, taking $\frak c=-i, \ 0, \ 0$ respectively. Let \begin{equation}\label{f0} \frak F_0(t)=\|l_{\alpha '}(t)-1\|_{L^2(\mathbb R)}^2; \end{equation} \begin{equation}\label{f1} \frak F_1(t):=\int \frac{\kappa}{A_1}\abs{ Z_{,{\alpha '}}\paren{\bar Z_{tt}-i}- {\mathfrak{Z}}_{,{\alpha '}}\circ l\paren{\bar {\mathfrak{Z}}_{tt}\circ l-i}}^2+i\partial_{\alpha '}(\bar Z_t-\bar {{\mathfrak{Z}}}_t\circ l)\bar{(\bar Z_t-\bar {{\mathfrak{Z}}}_t\circ l)}\,d{\alpha '}; \end{equation} \begin{equation}\label{f2} \begin{aligned} \frak F_2(t):&=\int \frac{\kappa}{A_1}\abs{ Z_{,{\alpha '}}(\partial_t+b\partial_{\alpha '})\paren{\frac1{Z_{,{\alpha '}}}}- {\mathfrak{Z}}_{,{\alpha '}}\circ l(\partial_t+\tilde b\partial_{\alpha '})\paren{\frac1{{\mathfrak{Z}}_{,{\alpha '}}}}\circ l}^2\,d{\alpha '}\\&\quad\qquad+i\int \partial_{\alpha '}\paren{\frac1{Z_{,{\alpha '}}}- \frac1{{\mathfrak{Z}}_{,{\alpha '}}}\circ l}\bar{\paren{\frac1{Z_{,{\alpha '}}}- \frac1{{\mathfrak{Z}}_{,{\alpha '}}}\circ l }}\,d{\alpha '}; \end{aligned} \end{equation} and \begin{equation}\label{f3} \frak F_3(t):=\int \frac{\kappa}{A_1}\abs{ Z_{,{\alpha '}}\bar Z_{ttt}- {\mathfrak{Z}}_{,{\alpha '}}\circ l\bar {\mathfrak{Z}}_{ttt}\circ l}^2+i\partial_{\alpha '}(\bar Z_{tt}-\bar {{\mathfrak{Z}}}_{tt}\circ l)\bar{(\bar Z_{tt}-\bar {{\mathfrak{Z}}}_{tt}\circ l)}\,d{\alpha '}. 
\end{equation} Substituting the evolutionary equations \eqref{eq:dzt}, \eqref{eq:dza} and \eqref{eq:dztt} in the functionals $\frak F_i$, we get \begin{equation}\label{f11} \frak F_1(t)=\int \frac{\kappa}{A_1}\abs{ A_1- \tilde {A_1}\circ l}^2+i\partial_{\alpha '}(\bar Z_t-\bar {{\mathfrak{Z}}}_t\circ l)\bar{(\bar Z_t-\bar {{\mathfrak{Z}}}_t\circ l)}\,d{\alpha '}; \end{equation} \begin{equation}\label{f22} \frak F_2(t)=\int \frac{\kappa}{A_1}\abs{ (b_{\alpha '}-D_{\alpha '} Z_t)-(\tilde b_{\alpha '}-\tilde D_{\alpha '} {\mathfrak{Z}}_t)\circ l }^2+ i\partial_{\alpha '}(\frac1{Z_{,{\alpha '}}}- \frac1{{\mathfrak{Z}}_{,{\alpha '}}}\circ l)\bar{(\frac1{Z_{,{\alpha '}}}- \frac1{{\mathfrak{Z}}_{,{\alpha '}}}\circ l )}\,d{\alpha '}; \end{equation} and \begin{equation}\label{f33} \begin{aligned} \frak F_3(t)&=\int \frac{\kappa}{A_1}\abs{ A_1\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}+\bar{D_{\alpha '} Z_t}}- \paren{\tilde {A_1}\paren{\frac{\tilde{\mathfrak{a}}_t}{\tilde{\mathfrak{a}}}\circ \tilde h^{-1}+\bar{\tilde D_{\alpha '} {\mathfrak{Z}}_t}}}\circ l }^2\,d{\alpha '}\\&\qquad\qquad+i\int\partial_{\alpha '}(\bar Z_{tt}-\bar {{\mathfrak{Z}}}_{tt}\circ l)\bar{(\bar Z_{tt}-\bar {{\mathfrak{Z}}}_{tt}\circ l)}\,d{\alpha '}. \end{aligned} \end{equation} \begin{remark}\label{remark512} Assume that the assumption of Theorem~\ref{unique} holds. Because $h_t=b(h,t)$, and $h(\alpha, 0)=\alpha$, where $b$ is given by \eqref{b}, we have $h_{t\alpha}=h_\alpha b_{\alpha '}\circ h$, and \begin{equation}\label{2345} h_\alpha(\cdot, t)=e^{\int_0^t b_{\alpha '}\circ h(\cdot, \tau)\, d\tau}. \end{equation} So there are constants $c_1>0$, $c_2>0$, depending only on $\sup_{[0, T]}\frak E(t)$, such that \begin{equation}\label{2346} c_1\le h_\alpha(\alpha, t)\le c_2,\qquad\qquad \text{for all } \alpha\in \mathbb R, t\in [0, T]. \end{equation} Consequently, because $l_{\alpha '}=\frac{\tilde h_\alpha}{h_\alpha}\circ h^{-1}$, there is a constant $0<c<\infty$, depending only on $\sup_{[0, T]}\frak E(t)$ and $\sup_{[0, T]}\tilde{\mathfrak E}(t)$, such that \begin{equation}\label{2346-1} c^{-1}\le l_{\alpha '}({\alpha '}, t) \le c, \qquad\qquad \text{for all } {\alpha '}\in \mathbb R, t\in [0, T]. \end{equation} It is easy to check that for each $t\in [0, T]$, $b_{\alpha '}(t)\in L^2(\mathbb R)$, so $h_\alpha(t)-1\in L^2(\mathbb R)$, and hence $l_{\alpha '}(t)-1\in L^2(\mathbb R)$. It is clear that under the assumption of Theorem~\ref{unique}, the functionals $\frak F_i(t)$, $i=1,2,3$ are well-defined. \end{remark} Notice that the functionals $\frak F_i(t)$, $i=1,2,3$ are not necessarily positive definite; see Lemma~\ref{hhalf1}.
We prove the following \begin{lemma}\label{dominate1} There is a constant $M_0$, depending only on $\sup_{[0, T]}\frak E(t)$ and $\sup_{[0, T]}\tilde{\mathfrak E}(t)$, such that for all $M\ge M_0$ and $t\in [0, T]$, \begin{align} \|l_{\alpha '}(t)-1\|_{L^2}^2+\|(\bar Z_t-\bar {\mathfrak{Z}}_t\circ l)(t)\|_{\dot H^{1/2}}^2+\nm{\paren{\frac1{Z_{,{\alpha '}}}- \frac1{{\mathfrak{Z}}_{,{\alpha '}}}\circ l}(t)}_{\dot H^{1/2}}^2\le M\frak F_0(t)+\frak F_1(t)+\frak F_2(t);\label{dominate11}\\ \nm{(A_1-\tilde {A_1}\circ l)(t)}_{L^2}^2+\nm{(D_{\alpha '} Z_t-\tilde D_{\alpha '} {\mathfrak{Z}}_t\circ l) (t)}_{L^2}^2+ \nm{(b_{\alpha '}-\tilde b_{\alpha '}\circ l) (t)}_{L^2}^2\lesssim M\frak F_0(t)+\frak F_1(t)+\frak F_2(t).\label{dominate12} \end{align} \end{lemma} \begin{proof} By Lemma~\ref{hhalf1}, \begin{align} \int i\partial_{\alpha '}(\bar Z_t-\bar {{\mathfrak{Z}}}_t\circ l)\bar{(\bar Z_t-\bar {{\mathfrak{Z}}}_t\circ l)}\,d{\alpha '}=\|\mathbb P_H (\bar Z_t-\bar {{\mathfrak{Z}}}_t\circ l)\|_{\dot H^{1/2}}^2-\|\mathbb P_A (\bar Z_t-\bar {{\mathfrak{Z}}}_t\circ l)\|_{\dot H^{1/2}}^2;\\ \text{and }\qquad \|\bar Z_t-\bar {{\mathfrak{Z}}}_t\circ l\|_{\dot H^{1/2}}^2=\int i\partial_{\alpha '}(\bar Z_t-\bar {{\mathfrak{Z}}}_t\circ l)\bar{(\bar Z_t-\bar {{\mathfrak{Z}}}_t\circ l)}\,d{\alpha '}+2\|\mathbb P_A (\bar Z_t-\bar {{\mathfrak{Z}}}_t\circ l)\|_{\dot H^{1/2}}^2. \end{align} Because $\bar Z_t=\mathbb H\bar Z_t$ and $\bar {\mathfrak{Z}}_t=\mathbb H\bar {\mathfrak{Z}}_t$, $$2\mathbb P_A (\bar Z_t-\bar {{\mathfrak{Z}}}_t\circ l)=-2\mathbb P_A (\bar {{\mathfrak{Z}}}_t\circ l)=-\mathcal Q_l (\bar {{\mathfrak{Z}}}_t\circ l)$$ and by \eqref{q1}, $$\|\mathcal Q_l (\bar {{\mathfrak{Z}}}_t\circ l)\|_{\dot H^{1/2}}\le C(\|l_{\alpha '}\|_{L^\infty}, \|(l^{-1})_{\alpha '}\|_{L^\infty})\|\partial_{\alpha '} \bar {{\mathfrak{Z}}}_t\|_{L^2}\|l_{\alpha '}-1\|_{L^2}\lesssim \|l_{\alpha '}-1\|_{L^2}. $$ So there is a constant $M_0$, depending only on $\sup_{[0, T]}\frak E(t)$ and $\sup_{[0, T]}\tilde{\mathfrak E}(t)$, such that for all $t\in [0, T]$ and $M\ge M_0$, \begin{equation}\label{2350} \|(\bar Z_t-\bar {{\mathfrak{Z}}}_t\circ l)(t)\|_{\dot H^{1/2}}^2\le \int i\partial_{\alpha '}(\bar Z_t-\bar {{\mathfrak{Z}}}_t\circ l)\bar{(\bar Z_t-\bar {{\mathfrak{Z}}}_t\circ l)}\,d{\alpha '}+M\|l_{\alpha '}-1\|_{L^2}^2\le \frak F_1(t)+M\frak F_0(t). \end{equation} A similar argument holds for $\nm{\paren{\frac1{Z_{,{\alpha '}}}- \frac1{{\mathfrak{Z}}_{,{\alpha '}}}\circ l}(t)}_{\dot H^{1/2}}^2$. This proves \eqref{dominate11}. Now by $\frac{\kappa}{A_1}= \sqrt{\frac{l_{\alpha '}}{A_1\tilde {A_1}\circ l}}$, Remark~\ref{remark512} and the estimate \eqref{2000}, there is a constant $0<c<\infty$, depending on $\sup_{[0, T]}\frak E(t)$ and $\sup_{[0, T]}\tilde{\mathfrak E}(t)$, such that \begin{equation}\label{2350-1} \frac1c\le \frac{\kappa}{A_1} \le c, \end{equation} so \begin{equation}\label{2351} \nm{(A_1-\tilde {A_1}\circ l)(t)}_{L^2}^2+\nm{(b_{\alpha '}-D_{\alpha '} Z_t-(\tilde b_{\alpha '}-\tilde D_{\alpha '} {\mathfrak{Z}}_t\circ l)) (t)}_{L^2}^2\lesssim M\frak F_0(t)+\frak F_1(t)+\frak F_2(t), \end{equation} for large enough $M$, depending only on $\sup_{[0, T]}\frak E(t)$ and $\sup_{[0, T]}\tilde{\mathfrak E}(t)$.
Using \eqref{ba} we have, from Proposition~\ref{dl21}, \begin{equation}\label{2352} \nm{b_{\alpha '}-2\Re D_{\alpha '} Z_t-(\tilde b_{\alpha '}-2\Re \tilde D_{\alpha '} {\mathfrak{Z}}_t\circ l)}_{L^2}\lesssim \|\bar Z_t-\bar {\mathfrak{Z}}_t\circ l\|_{\dot H^{1/2}}+\nm{\frac1{Z_{,{\alpha '}}}- \frac1{{\mathfrak{Z}}_{,{\alpha '}}}\circ l}_{\dot H^{1/2}}+ \|l_{\alpha '}-1\|_{L^2}; \end{equation} combining with \eqref{2351} yields $$ \nm{(D_{\alpha '} Z_t-\tilde D_{\alpha '} {\mathfrak{Z}}_t\circ l) (t)}_{L^2}^2+ \nm{(b_{\alpha '}-\tilde b_{\alpha '}\circ l )(t)}_{L^2}^2\lesssim M\frak F_0(t)+\frak F_1(t)+\frak F_2(t),$$ for large enough $M$, depending only on $\sup_{[0, T]}\frak E(t)$ and $\sup_{[0, T]}\tilde{\mathfrak E}(t)$. This proves \eqref{dominate12}. \end{proof} \begin{lemma}\label{dominate2} Let $M_0$ be the constant in Lemma~\ref{dominate1}. Then for all $M\ge M_0$, and $t\in [0, T]$, \begin{equation}\label{dominate21} \|\mathbb P_A(\bar Z_{tt}-\bar {\mathfrak{Z}}_{tt}\circ l)(t)\|_{\dot H^{1/2}}^2\lesssim M\frak F_0(t)+\frak F_1(t)+\frak F_2(t) \end{equation} \end{lemma} \begin{proof} We have \begin{equation}\label{2353} 2\mathbb P_A(\bar Z_{tt}-\bar {\mathfrak{Z}}_{tt}\circ l)=2\mathbb P_A(\bar Z_{tt})-2U_l \mathbb P_A (\bar {\mathfrak{Z}}_{tt})-\mathcal Q_l (\bar {\mathfrak{Z}}_{tt}\circ l); \end{equation} and by \eqref{q1}, $$\nm{\mathcal Q_l (\bar {\mathfrak{Z}}_{tt}\circ l)}_{\dot H^{1/2}}\lesssim \|l_{\alpha '}-1\|_{L^2}.$$ Consider the first two terms on the right hand side of \eqref{2353}. We use \eqref{eq:c21} and the fact that $\bar Z_t=\mathbb H\bar Z_t$ to rewrite \begin{equation}\label{2354} 2\mathbb P_A(\bar Z_{tt})=[\partial_t+b\partial_{\alpha '}, \mathbb H]\bar Z_{t}=[b, \mathbb H]\partial_{\alpha '} \bar Z_t. \end{equation} We would like to use \eqref{dhhalf1-inq} to estimate $\nm{2\mathbb P_A(\bar Z_{tt})-2U_l \mathbb P_A (\bar {\mathfrak{Z}}_{tt})}_{\dot H^{1/2}}$, observe that we have controlled all the quantities on the right hand side of \eqref{dhhalf1-inq}, except for $\|\mathbb H b_{\alpha '}\|_{L^\infty}$. By \eqref{ba}, \begin{equation}\label{2355} \begin{aligned} b_{\alpha '}-2\Re D_{\alpha '} Z_t=\Re \paren{\bracket{ \frac1{Z_{,{\alpha '}}}, \mathbb H} Z_{t,\alpha'}+ \bracket{Z_t, \mathbb H}\partial_{\alpha '} \frac1{Z_{,{\alpha '}}} }, \end{aligned} \end{equation} and by the fact $Z_{t,{\alpha '}}=-\mathbb H Z_{t,{\alpha '}}$, \begin{equation}\label{2356} \bracket{ \frac1{Z_{,{\alpha '}}}, \mathbb H} Z_{t,\alpha'}=-(I+\mathbb H)D_{\alpha '} Z_t, \end{equation} so \begin{equation}\label{2357} \mathbb H \bracket{ \frac1{Z_{,{\alpha '}}}, \mathbb H} Z_{t,\alpha'}=\bracket{ \frac1{Z_{,{\alpha '}}}, \mathbb H} Z_{t,\alpha'}. \end{equation} This gives, by \eqref{eq:b13}, \begin{equation}\label{2358} \nm{\mathbb H \bracket{ \frac1{Z_{,{\alpha '}}}, \mathbb H} Z_{t,\alpha'}}_{L^\infty}\lesssim \nm{\partial_{\alpha '} \frac1{Z_{,{\alpha '}}}}_{L^2}\nm{ Z_{t,\alpha'}}_{L^2}\lesssim C(\frak E(t)); \end{equation} similarly $\nm{\mathbb H \bracket{Z_t, \mathbb H}\partial_{\alpha '} \frac1{Z_{,{\alpha '}}} }_{L^\infty}\lesssim C(\frak E(t))$, therefore $\nm{\mathbb H(b_{\alpha '}-2\Re D_{\alpha '} Z_t)}_{L^\infty}\lesssim C(\frak E(t))$. 
Observe that the argument from \eqref{2356}-\eqref{2358} also shows that $$\nm{(I+\mathbb H)D_{\alpha '} Z_t}_{L^\infty}\lesssim C(\frak E(t)),$$ therefore \begin{equation}\label{2358-1} \nm{\mathbb H D_{\alpha '} Z_t}_{L^\infty}\le \nm{(I+\mathbb H)D_{\alpha '} Z_t}_{L^\infty}+\nm{D_{\alpha '} Z_t}_{L^\infty} \lesssim C(\frak E(t)), \end{equation} and hence $\nm{\mathbb Hb_{\alpha '}}_{L^\infty}\lesssim C(\frak E(t))$. Notice that $$\nm{ \partial_{\alpha '}(b-\tilde b\circ l)}_{L^2}\lesssim \nm{ b_{\alpha '}-\tilde b_{\alpha '}\circ l}_{L^2}+\|l_{\alpha '}-1\|_{L^2}.$$ Applying \eqref{dhhalf1-inq} to \eqref{2354} we get $$\nm{2\mathbb P_A(\bar Z_{tt})-2U_l \mathbb P_A (\bar {\mathfrak{Z}}_{tt})}_{\dot H^{1/2}}\lesssim \nm{ b_{\alpha '}-\tilde b_{\alpha '}\circ l}_{L^2}+\|l_{\alpha '}-1\|_{L^2}+\nm{\bar Z_t-\bar {\mathfrak{Z}}_t\circ l}_{\dot H^{1/2}}.$$ A further application of Lemma~\ref{dominate1} yields Lemma~\ref{dominate2}. \end{proof} As a consequence of Lemmas~\ref{dominate1}, and \ref{dominate2} we have \begin{proposition}\label{denergy} There is a constant $M_0>0$, depending only on $\sup_{[0, T]}\frak E(t)$ and $\sup_{[0, T]}\tilde{\mathfrak E}(t)$, such that for all $M\ge M_0$, and $t\in [0, T]$, \begin{equation}\label{2359} \begin{aligned} &\|(\bar Z_t-\bar {\mathfrak{Z}}_t\circ l)(t)\|_{\dot H^{1/2}}^2+\nm{\paren{\frac1{Z_{,{\alpha '}}}- \frac1{{\mathfrak{Z}}_{,{\alpha '}}}\circ l}(t)}_{\dot H^{1/2}}^2+\|(\bar Z_{tt}-\bar {\mathfrak{Z}}_{tt}\circ l)(t)\|_{\dot H^{1/2}}^2\\ &+ \nm{(A_1-\tilde {A_1}\circ l)(t)}_{L^2}^2+\nm{(D_{\alpha '} Z_t-\tilde D_{\alpha '} {\mathfrak{Z}}_t\circ l) (t)}_{L^2}^2+ \nm{(b_{\alpha '}-\tilde b_{\alpha '}\circ l) (t)}_{L^2}^2+\nm{l_{\alpha '}(t)-1}_{L^2}^2\\& +\nm{\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}-\frac{\tilde{\mathfrak{a}}_t}{\tilde{\mathfrak{a}}}\circ h^{-1}}(t)}_{L^2}^2\lesssim M\frak F_0(t)+\frak F_1(t)+\frak F_2(t)+M^{-1} \frak F_3(t). \end{aligned} \end{equation} \end{proposition} \begin{remark}\label{remark517} It is clear that the reverse of inequality \eqref{2359} also holds. Observe that by \eqref{a1} and Proposition~\ref{dl21}, we have for any $t\in [0, T]$, \begin{equation} \nm{(A_1-\tilde {A_1}\circ l)(t)}_{L^2}\lesssim \|(\bar Z_t-\bar {\mathfrak{Z}}_t\circ l)(t)\|_{\dot H^{1/2}}+ \|l_{\alpha '}(t)-1\|_{L^2}. \end{equation} and by \eqref{at}-\eqref{ba}-\eqref{dta1} and Propositions~\ref{dl21}, ~\ref{d32}, \begin{equation} \begin{aligned} & \nm{ \paren{\frac{\frak a_t}{\frak a}\circ h^{-1}-\frac{\tilde{\mathfrak{a}}_t}{\tilde{\mathfrak{a}}}\circ h^{-1}}(t) }_{L^2}=\nm{ \paren{\frac{\frak a_t}{\frak a}\circ h^{-1}-\frac{\tilde{\mathfrak{a}}_t}{\tilde{\mathfrak{a}}}\circ \tilde h^{-1}\circ l}(t) }_{L^2} \\& \lesssim \|(\bar Z_t-\bar {\mathfrak{Z}}_t\circ l)(t)\|_{\dot H^{1/2}}+ \nm{\paren{\frac1{Z_{,{\alpha '}}}- \frac1{{\mathfrak{Z}}_{,{\alpha '}}}\circ l}(t)}_{\dot H^{1/2}}+\|(\bar Z_{tt}-\bar {\mathfrak{Z}}_{tt}\circ l)(t)\|_{\dot H^{1/2}}\\&\qquad+\nm{(b_{\alpha '}-\tilde b_{\alpha '}\circ l) (t)}_{L^2}+\|l_{\alpha '}(t)-1\|_{L^2}. 
\end{aligned} \end{equation} This, together with \eqref{2352} shows that for all $t\in [0, T]$, \begin{equation}\label{2360} \begin{aligned} M\frak F_0(t)+\frak F_1(t)+&\frak F_2(t)+M^{-1} \frak F_3(t)\lesssim \|(\bar Z_t-\bar {\mathfrak{Z}}_t\circ l)(t)\|_{\dot H^{1/2}}^2+\nm{\paren{\frac1{Z_{,{\alpha '}}}- \frac1{{\mathfrak{Z}}_{,{\alpha '}}}\circ l}(t)}_{\dot H^{1/2}}^2\\&+\|(\bar Z_{tt}-\bar {\mathfrak{Z}}_{tt}\circ l)(t)\|_{\dot H^{1/2}}^2+\nm{(D_{\alpha '} Z_t-\tilde D_{\alpha '} {\mathfrak{Z}}_t\circ l) (t)}_{L^2}^2 +\nm{l_{\alpha '}(t)-1}_{L^2}^2. \end{aligned} \end{equation} \end{remark} Now fix a constant $M$, with $M\ge M_0>0$, so that \eqref{2359} holds. We define \begin{equation}\label{dfunctional} \mathcal F(t):= M\frak F_0(t)+\frak F_1(t)+\frak F_2(t)+M^{-1}\frak F_3(t). \end{equation} We have \begin{proposition}\label{denergy-est} Assume that $Z=Z({\alpha '}, t)$, ${\mathfrak{Z}}={\mathfrak{Z}}({\alpha '},t)$ are solutions of the system \eqref{interface-r}-\eqref{interface-holo}-\eqref{a1}-\eqref{b}, satisfying the assumption of Theorem~\ref{unique}. Then there is a constant $C$, depending only on $T$, $\sup_{[0, T]}\frak E(t)$ and $\sup_{[0, T]}\tilde{\mathfrak E}(t)$, such that for $t\in [0, T]$, \begin{equation}\label{denergy-inq} \frac d{dt}\mathcal F(t)\le C\paren{ \mathcal F(t)+ \int_0^t \mathcal F(\tau)\,d\tau + \nm{\paren{\frac1{Z_{,{\alpha '}}}-\frac1{{\mathfrak{Z}}_{,{\alpha '}}}}(0)}_{L^\infty}\mathcal F(t)^{\frac12}}. \end{equation} \end{proposition} Assuming Proposition~\ref{denergy-est} holds, we have, by \eqref{denergy-inq}, \begin{equation}\label{2460} \frac d{dt}\paren{\mathcal F(t)+\int_0^t \mathcal F(\tau)\,d\tau}\le C\paren{ \mathcal F(t)+ \int_0^t \mathcal F(\tau)\,d\tau + \nm{\paren{\frac1{Z_{,{\alpha '}}}-\frac1{{\mathfrak{Z}}_{,{\alpha '}}}}(0)}_{L^\infty}^2}; \end{equation} and by Gronwall's inequality, \begin{equation}\label{2461} \mathcal F(t)+\int_0^t \mathcal F(\tau)\,d\tau\le C\paren{\mathcal F(0)+ \nm{\paren{\frac1{Z_{,{\alpha '}}}-\frac1{{\mathfrak{Z}}_{,{\alpha '}}}}(0)}_{L^\infty}^2},\qquad \text{for }t\in [0, T], \end{equation} for some constant $C$ depending on $T$, $\sup_{[0, T]}\frak E(t)$ and $\sup_{[0, T]}\tilde{\mathfrak E}(t)$. This together with \eqref{2359} and \eqref{2360} gives \eqref{stability}. We now give the proof for Proposition~\ref{denergy-est}. \begin{proof} To prove Proposition~\ref{denergy-est} we apply Lemma~\ref{basic-e2} to $\Theta=l_{\alpha '}-1$, and Lemma~\ref{dlemma1} to $\Theta=\bar Z_t,\ \frac1{Z_{,{\alpha '}}}-1,\ \bar Z_{tt}$. We have, by Lemma~\ref{basic-e2} and \eqref{2340}, \begin{equation}\label{2361} \frak F'_0(t)\le 2\nm{l_{\alpha '} (b_{\alpha '}-\tilde b_{\alpha '}\circ l)}_{L^2}\frak F_0(t)^{1/2}+\|b_{\alpha '}\|_{L^\infty}\frak F_0(t)\lesssim \mathcal F(t), \end{equation} here we used \eqref{2346-1}, \eqref{2359}, and \S\ref{basic-quantities}, \eqref{2020}. Now we apply Lemma~\ref{dlemma1} to $\Theta=\bar Z_t,\ \frac1{Z_{,{\alpha '}}}-1,\ \bar Z_{tt}$ to get the estimates for $\frak F_1'(t)$, $\frak F_2'(t)$ and $\frak F_3'(t)$. Checking through the right hand sides of the inequalities \eqref{dlemma1-inq} for $\Theta=\bar Z_t,\ \frac1{Z_{,{\alpha '}}}-1,\ \bar Z_{tt}$, we find that we have controlled almost all of the quantities, respectively by $\mathcal F(t)$ or $\frak E(t)$, $\tilde{\mathfrak E}(t)$, except for the following: \begin{itemize} \item 1. $\nm{\frac{(\partial_t+b\partial_{\alpha '})\kappa }{\kappa}}_{L^2}$; \item 2. $\nm{1-\kappa}_{L^2}$; \item 3. 
$2\Re i\int \bar{\paren{\frac1{Z_{,{\alpha '}}}-\frac1{{\mathfrak{Z}}_{,{\alpha '}}}\circ l }} \paren{\bar{{\mathfrak{Z}}_{,{\alpha '}}\circ l((\partial_t+\tilde b\partial_{\alpha '})\tilde{\Theta}\circ l+\frak c)}\Theta_{\alpha '}-\bar{Z_{,{\alpha '}}((\partial_t+b\partial_{\alpha '})\Theta+\frak c)}(\tilde{\Theta}\circ l)_{\alpha '}}\,d{\alpha '}$, \newline for $\Theta=\bar Z_t,\ \frac1{Z_{,{\alpha '}}}-1,\ \bar Z_{tt}$; with $\frak c=-i$ for $\Theta=\bar Z_t$, and $\frak c=0$ for $\Theta=\frac1{Z_{,{\alpha '}}}-1$ and $\bar Z_{tt}$; \item 4. $\nm{ Z_{,{\alpha '}}G_i- {\mathfrak{Z}}_{,{\alpha '}}\circ l\tilde G_i\circ l}_{L^2}$, for $i=1,2,3$. \end{itemize} We begin with items 1. and 2. By definition $\kappa=\sqrt{\frac{A_1}{\tilde {A_1}\circ l} l_{\alpha '}}$, so \begin{equation}\label{2362} 2 \frac{(\partial_t+b\partial_{\alpha '})\kappa }{\kappa}= \frac{(\partial_t+b\partial_{\alpha '})A_1}{A_1}-\frac{(\partial_t+\tilde b \partial_{\alpha '})\tilde {A_1} }{\tilde {A_1}}\circ l+(\tilde b_{\alpha '}\circ l-b_{\alpha '}); \end{equation} and by \eqref{at}, \begin{equation}\label{2363} \frac{(\partial_t +b\partial_{\alpha '}) A_1}{A_1}=\dfrac{\frak a_t}{\frak a}\circ h^{-1}-(b_{\alpha '} -2\Re D_{\alpha '} Z_t); \end{equation} therefore \begin{equation}\label{2364} \nm{\frac{(\partial_t+b\partial_{\alpha '})\kappa }{\kappa}}_{L^2}\lesssim \nm{\frac{\frak a_t}{\frak a}\circ h^{-1}-\frac{\tilde{\mathfrak{a}}_t}{\tilde{\mathfrak{a}}}\circ h^{-1}}_{L^2}+ \nm{b_{\alpha '}-\tilde b_{\alpha '}\circ l}_{L^2}+\nm{D_{\alpha '} Z_t-\tilde D_{\alpha '}{\mathfrak{Z}}_t \circ l}_{L^2}\lesssim \mathcal F(t)^{\frac12}. \end{equation} And it is clear that by the definition of $\kappa$, \begin{equation}\label{2365} \nm{1-\kappa}_{L^2}\lesssim \nm{A_1-\tilde {A_1}\circ l}_{L^2}+\nm{l_{\alpha '}-1}_{L^2}\lesssim \mathcal F(t)^{\frac12}. \end{equation} What remains to be controlled are the quantities in items 3. and 4. We first consider item 4. We have, by \eqref{eqzt}, ${Z_{,{\alpha '}}}G_1=-i\,\frac{\frak a_t}{\frak a}\circ h^{-1} A_1$, so \begin{equation}\label{2366} \nm{ Z_{,{\alpha '}}G_1- {\mathfrak{Z}}_{,{\alpha '}}\circ l\tilde G_1\circ l}_{L^2}\lesssim \nm{\frac{\frak a_t}{\frak a}\circ h^{-1}-\frac{\tilde{\mathfrak{a}}_t}{\tilde{\mathfrak{a}}}\circ h^{-1}}_{L^2}+\nm{A_1-\tilde {A_1}\circ l}_{L^2}\lesssim \mathcal F(t)^{\frac12}; \end{equation} and by \eqref{eqza}, ${Z_{\alpha '}}G_2=(b_{\alpha '}-D_{\alpha '} Z_t)^2+(\partial_t+b\partial_{\alpha '})(b_{\alpha '}-2\Re D_{\alpha '} Z_t)-\paren{\bar {D_{\alpha '} Z_t}}^2-i\frac{\partial_{\alpha '} A_1}{|Z_{,{\alpha '}}|^2}$, so \begin{equation}\label{2367} \begin{aligned} & \nm{ Z_{,{\alpha '}}G_2- {\mathfrak{Z}}_{,{\alpha '}}\circ l\tilde G_2\circ l}_{L^2}\lesssim \nm{b_{\alpha '}-\tilde b_{\alpha '}\circ l}_{L^2}+\nm{D_{\alpha '} Z_t-\tilde D_{\alpha '}{\mathfrak{Z}}_t \circ l}_{L^2}+\\&\nm{(\partial_t+b\partial_{\alpha '})(b_{\alpha '}-2\Re D_{\alpha '} Z_t)-(\partial_t+\tilde b\partial_{\alpha '})(\tilde b_{\alpha '}- 2\Re \tilde D_{\alpha '}{\mathfrak{Z}}_t )\circ l}_{L^2}+\nm{\frac{\partial_{\alpha '} A_1}{|Z_{,{\alpha '}}|^2}-\frac{\partial_{\alpha '} \tilde {A_1}}{|{\mathfrak{Z}}_{,{\alpha '}}|^2}\circ l}_{L^2}; \end{aligned} \end{equation} observe that we have controlled all but the last two quantities on the right hand side of \eqref{2367} by $\mathcal F(t)^{1/2}$. 
By \eqref{eqztt}, $Z_{,{\alpha '}}G_3= -i\,A_1 \paren{(\partial_t+b\partial_{\alpha '})\paren{\frac{\mathfrak{a}_t}{\mathfrak{a}}\circ h^{-1}}+\paren{\frac{\mathfrak{a}_t}{\mathfrak{a}}\circ h^{-1}}^2+2\paren{\frac{\mathfrak{a}_t}{\mathfrak{a}}\circ h^{-1}}\bar{D_{\alpha '} Z_t}}$, so \begin{equation}\label{2368} \begin{aligned} &\nm{ Z_{,{\alpha '}}G_3- {\mathfrak{Z}}_{,{\alpha '}}\circ l\tilde G_3\circ l}_{L^2}\lesssim \nm{\frac{\frak a_t}{\frak a}\circ h^{-1}-\frac{\tilde{\mathfrak{a}}_t}{\tilde{\mathfrak{a}}}\circ h^{-1}}_{L^2}+\nm{D_{\alpha '} Z_t-\tilde D_{\alpha '}{\mathfrak{Z}}_t \circ l}_{L^2}\\&+\nm{A_1-\tilde {A_1}\circ l}_{L^2}\paren{\nm{(\partial_t+b\partial_{\alpha '})\paren{\frac{\frak a_t}{\frak a}\circ h^{-1} } }_{L^\infty}+1} \\&\qquad+ \nm{(\partial_t+b\partial_{\alpha '})\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}}-(\partial_t+\tilde b\partial_{\alpha '})\paren{\frac{\tilde{\mathfrak{a}}_t}{\tilde{\mathfrak{a}}} \circ \tilde h^{-1} }\circ l}_{L^2}. \end{aligned} \end{equation} We have controlled all but the factor in the third quantity and the very last quantity on the right hand side of \eqref{2368}. In the remaining part of the proof for Proposition~\ref{denergy-est}, we will show the following inequalities \begin{itemize} \item $\nm{\frac{\partial_{\alpha '} A_1}{|Z_{,{\alpha '}}|^2}-\frac{\partial_{\alpha '} \tilde {A_1}}{|{\mathfrak{Z}}_{,{\alpha '}}|^2}\circ l}_{L^2}\lesssim \mathcal F(t)^{\frac12}$; \item $\nm{(\partial_t+b\partial_{\alpha '})(b_{\alpha '}-2\Re D_{\alpha '} Z_t)-(\partial_t+\tilde b\partial_{\alpha '})(\tilde b_{\alpha '}- 2\Re \tilde D_{\alpha '}{\mathfrak{Z}}_t )\circ l}_{L^2}\lesssim \mathcal F(t)^{\frac12}$; \item $\nm{(\partial_t+b\partial_{\alpha '})\paren{\frac{\frak a_t}{\frak a}\circ h^{-1} } }_{L^\infty}\le C(\frak E(t));$ \item $ \nm{(\partial_t+b\partial_{\alpha '})\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}}-(\partial_t+\tilde b\partial_{\alpha '})\paren{\frac{\tilde{\mathfrak{a}}_t}{\tilde{\mathfrak{a}}} \circ \tilde h^{-1} }\circ l}_{L^2}\lesssim \mathcal F(t)^{\frac12}$; \item and control the quantities in item 3. \end{itemize} Our main strategy is the same as always, that is, to rewrite the quantities in forms to which the results in \S\ref{prepare} can be applied. \subsubsection{Some additional quantities controlled by $\frak E(t)$ and by $\mathcal F(t)$}\label{additional} We begin with deriving some additional estimates that will be used in the proof. First we record the conclusions from the computations of \eqref{2355}-\eqref{2358-1}, \begin{equation}\label{2380} \nm{\mathbb H D_{\alpha '} Z_t}_{L^\infty}\le C(\frak E(t)),\qquad \nm{\mathbb H b_{\alpha '} }_{L^\infty}\le C(\frak E(t)). 
\end{equation} Because $\partial_{\alpha '} \frac1{Z_{,{\alpha '}}} =\mathbb H\paren{\partial_{\alpha '} \frac1{Z_{,{\alpha '}} }}$, \begin{equation}\label{2381} 2\mathbb P_A\paren{Z_{t} \partial_{\alpha '} \frac1{Z_{,{\alpha '}} }}=\bracket{Z_{t},\mathbb H}\partial_{\alpha '} \frac1{Z_{,{\alpha '}} }; \end{equation} and we have, by \eqref{eq:b13} and \eqref{dl21-inq}, \begin{align} \nm{\mathbb P_A\paren{Z_{t} \partial_{\alpha '} \frac1{Z_{,{\alpha '}} }}}_{L^\infty} \lesssim \nm{Z_{t,{\alpha '}}}_{L^2}\nm{ \partial_{\alpha '} \frac1{Z_{,{\alpha '}} }}_{L^2}\le C(\frak E(t));\label{2382}\\ \nm{\mathbb P_A\paren{Z_{t} \partial_{\alpha '} \frac1{Z_{,{\alpha '}} }}-U_l \mathbb P_A\paren{{\mathfrak{Z}}_{t} \partial_{\alpha '} \frac1{{\mathfrak{Z}}_{,{\alpha '}} }}}_{L^2}\lesssim \mathcal F(t)^{1/2}.\label{2383} \end{align} Similarly we have \begin{equation}\label{2384} \nm{\mathbb P_A\paren{Z_{t} \partial_{\alpha '} \bar Z_t}-U_l \mathbb P_A\paren{{\mathfrak{Z}}_{t} \partial_{\alpha '} \bar{{\mathfrak{Z}}}_t}}_{L^2}\lesssim \mathcal F(t)^{1/2}. \end{equation} By \eqref{a1}, $iA_1=i-\mathbb P_A(Z_t\bar Z_{t,{\alpha '}})+\mathbb P_H(\bar Z_t Z_{t,{\alpha '}})$, and by \eqref{aa1}, \begin{equation}\label{2385} \bar Z_{tt}-i=-\frac{i}{Z_{,{\alpha '}}}+\frac{\mathbb P_A(Z_t\bar Z_{t,{\alpha '}})}{Z_{,{\alpha '}}}-\frac{\mathbb P_H(\bar Z_t Z_{t,{\alpha '}})}{Z_{,{\alpha '}}}; \end{equation} applying $\mathbb P_H$ to both sides of \eqref{2385} and rewriting the second term on the right hand side as a commutator gives \begin{equation}\label{2386} \frac{\mathbb P_H(\bar Z_t Z_{t,{\alpha '}})}{Z_{,{\alpha '}}}=-i\paren{\frac{1}{Z_{,{\alpha '}}}-1}-\frac12\bracket{\frac{1}{Z_{,{\alpha '}}},\mathbb H}\mathbb P_A(Z_t\bar Z_{t,{\alpha '}})-\mathbb P_H(\bar Z_{tt}). \end{equation} Now we apply \eqref{dhhalf2-inq}, \eqref{q1}, \eqref{2359} and \eqref{2384} to get \begin{align} \nm{\bracket{\frac{1}{Z_{,{\alpha '}}},\mathbb H}\mathbb P_A(Z_t\bar Z_{t,{\alpha '}})-U_l \bracket{\frac{1}{{\mathfrak{Z}}_{,{\alpha '}}},\mathbb H}\mathbb P_A({\mathfrak{Z}} _t\bar{ {\mathfrak{Z}}}_{t,{\alpha '}}) }_{\dot H^{1/2}} \lesssim \mathcal F(t)^{1/2};\label{2387}\\ \nm{\frac{\mathbb P_H(\bar Z_t Z_{t,{\alpha '}})}{Z_{,{\alpha '}}}-U_l\frac{\mathbb P_H(\bar {{\mathfrak{Z}}}_t {\mathfrak{Z}}_{t,{\alpha '}})}{{\mathfrak{Z}}_{,{\alpha '}}} }_{\dot H^{1/2}}\lesssim \mathcal F(t)^{1/2};\label{2388} \end{align} consequently by \eqref{2385} and \eqref{2388}, \eqref{2359}, \begin{equation}\label{2389} \nm{\frac{\mathbb P_A(\bar Z_t Z_{t,{\alpha '}})}{Z_{,{\alpha '}}}-U_l\frac{\mathbb P_A(\bar {{\mathfrak{Z}}}_t {\mathfrak{Z}}_{t,{\alpha '}})}{{\mathfrak{Z}}_{,{\alpha '}}} }_{\dot H^{1/2}}\lesssim \mathcal F(t)^{1/2}. 
\end{equation} Similarly we have \begin{align} \nm{\bracket{\frac{1}{Z_{,{\alpha '}}},\mathbb H}\mathbb P_A\paren{Z_t \partial_{\alpha '}\frac1{Z_{,{\alpha '}}}}-U_l \bracket{\frac{1}{{\mathfrak{Z}}_{,{\alpha '}}},\mathbb H}\mathbb P_A\paren{{\mathfrak{Z}} _t \partial_{\alpha '}\frac1{{\mathfrak{Z}}_{,{\alpha '}}} } }_{\dot H^{1/2}} \lesssim \mathcal F(t)^{1/2};\label{2390}\\ \nm{\bracket{\frac{1}{Z_{,{\alpha '}}}, \mathbb H}\paren{ A_1\paren{\bar{D_{\alpha '} Z_t}+\frac{\frak a_t}{\frak a}\circ h^{-1}} }-U_l\bracket{\frac{1}{{\mathfrak{Z}}_{,{\alpha '}}}, \mathbb H}\paren{ \tilde {A_1}\paren{\bar{\tilde D_{\alpha '} {\mathfrak{Z}}_t}+\frac{\tilde{\mathfrak{a}}_t}{\tilde{\mathfrak{a}} }\circ \tilde h^{-1}} }}_{\dot H^{1/2}}\lesssim \mathcal F(t)^{1/2};\label{2390-1} \end{align} provided we can show that \begin{equation}\label{2390-2} \nm{ \mathbb H\paren{ A_1\paren{\bar{D_{\alpha '} Z_t}+\frac{\frak a_t}{\frak a}\circ h^{-1}}} }_{L^\infty}\lesssim C(\frak E).\end{equation} We now prove \eqref{2390-2}. It suffices to show $\nm{ \mathbb P_A\paren{ A_1\paren{\bar{D_{\alpha '} Z_t}+\frac{\frak a_t}{\frak a}\circ h^{-1}}} }_{L^\infty}\lesssim C(\frak E)$, since we have $ \nm{ A_1\paren{\bar{D_{\alpha '} Z_t}+\frac{\frak a_t}{\frak a}\circ h^{-1}} }_{L^\infty}\lesssim C(\frak E)$. We know $$2\mathbb P_A (A_1 \bar{D_{\alpha '} Z_t})=\bracket{\frac{A_1}{\bar Z_{,{\alpha '}}},\mathbb H}\bar Z_{t,{\alpha '}}=-i[Z_{tt},\mathbb H]\bar Z_{t,{\alpha '}},$$ hence by Cauchy-Schwarz inequality and Hardy's inequality \eqref{eq:77}, \begin{equation}\label{2390-3} \nm{2\mathbb P_A (A_1 \bar{D_{\alpha '} Z_t})}_{L^\infty}\lesssim \|Z_{tt,{\alpha '}}\|_{L^2}\|Z_{t,{\alpha '}}\|_{L^2}\lesssim C(\frak E). \end{equation} For the second term we use the formula (2.23) of \cite{wu6}, \footnote{This formula can be checked directly from \eqref{at}-\eqref{ba}-\eqref{dta1} via similar manipulations as in \eqref{2446-1}-\eqref{2448}.} \begin{equation}\label{2390-4} {A_1}\frac{\frak a_t}{\frak a}\circ h^{-1}=-\Im( 2[Z_t,\mathbb H]{\bar Z}_{tt,\alpha'}+2[Z_{tt},\mathbb H]\partial_{\alpha'} \bar Z_t- [Z_t, Z_t; D_{\alpha'} \bar Z_t]), \end{equation} observe that the quantities $[Z_t,\mathbb H]{\bar Z}_{tt,\alpha'}$, $[Z_{tt},\mathbb H]\partial_{\alpha'} \bar Z_t$ are anti-holomorphic by \eqref{comm-hilbe}, and $[Z_t, Z_t; D_{\alpha'} \bar Z_t]$ is anti-holomorphic by integration by parts and \eqref{comm-hilbe}, so \begin{equation}\label{2390-5} \mathbb P_A\paren{{A_1}\frac{\frak a_t}{\frak a}\circ h^{-1}}=i\paren{ [Z_t,\mathbb H]{\bar Z}_{tt,\alpha'}+[Z_{tt},\mathbb H]\partial_{\alpha'} \bar Z_t-\frac12 [Z_t, Z_t; D_{\alpha'} \bar Z_t]}; \end{equation} therefore \begin{equation}\label{2390-6} \nm{\mathbb P_A\paren{{A_1}\frac{\frak a_t}{\frak a}\circ h^{-1}}}_{L^\infty}\lesssim C(\frak E) \end{equation} by Cauchy-Schwarz inequality and Hardy's inequality \eqref{eq:77}. This proves \eqref{2390-2}. In what follows we will need the bound for $\nm{Z_{ttt,{\alpha '}}}_{L^2}$. We begin with \eqref{eq:dztt} and calculate $\bar Z_{ttt,{\alpha '}}$. We have \begin{equation}\label{2410} \bar Z_{ttt,{\alpha '}}=\bar Z_{tt,{\alpha '}} (\bar{D_{\alpha '} Z_t}+\frac{\frak a_t}{\frak a}\circ h^{-1})-iA_1D_{\alpha '}(\bar{D_{\alpha '} Z_t}+\frac{\frak a_t}{\frak a}\circ h^{-1}) \end{equation} where we substituted the factor $\bar Z_{tt}-i$ in the second term by $-\frac{iA_1}{Z_{,{\alpha '}}}$, see \eqref{eq:dzt}. 
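In other words, $\bar Z_{ttt}=(\bar Z_{tt}-i)\paren{\bar{D_{\alpha '} Z_t}+\frac{\frak a_t}{\frak a}\circ h^{-1}}$ (combine the first and third identities in \eqref{2391} below), and \eqref{2410} is simply the product rule applied to this expression.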
We know from \S\ref{proof} that all the quantities in \eqref{2410} are controlled and we have \begin{equation}\label{2411} \nm{Z_{ttt,{\alpha '}}}_{L^2}\le C(\frak E(t)). \end{equation} \subsubsection{Controlling the $\dot H^{1/2}$ norms of $Z_{,{\alpha '}}((\partial_t+b\partial_{\alpha '})\Theta+\frak c)$ for $\Theta=\bar Z_t, \frac1{Z_{,{\alpha '}}}-1, \bar Z_{tt}$, with $\frak c=-i, 0, 0$ respectively}\label{hhalf-norm} We will use Proposition~\ref{half-product} to control the item 3 above. To do so we need to check that the assumptions of the proposition hold. One of them is $Z_{,{\alpha '}}((\partial_t+b\partial_{\alpha '})\Theta+\frak c)\in \dot H^{1/2}\cap L^\infty$, for $\Theta=\bar Z_t, \frac1{Z_{,{\alpha '}}}-1, \bar Z_{tt}$; $\frak c=-i, 0, 0$ respectively; with the norms bounded by $C(\frak E(t))$. By \eqref{eq:dzt}, \eqref{eq:dztt} and \eqref{eq:dza}, \begin{equation}\label{2391} Z_{,{\alpha '}}(\bar Z_{tt}-i)=-iA_1,\quad Z_{,{\alpha '}}(\partial_t+b\partial_{\alpha '})\frac1{Z_{,{\alpha '}}}=b_{\alpha '}-D_{\alpha '} Z_t,\quad Z_{,{\alpha '}} \bar Z_{ttt}=-iA_1\paren{\bar {D_{\alpha '} Z_t}+\frac{\frak a_t}{\frak a}\circ h^{-1}}. \end{equation} In \S\ref{proof}, we have shown that these quantities are in $L^\infty$, with their $L^\infty$ norms controlled by $C(\frak E(t))$. So we only need to estimate their $\dot H^{1/2}$ norms. Applying Proposition~\ref{hhalf4}, \eqref{hhalf42} to \eqref{a1} and \eqref{ba}, we get $A_1, b_{\alpha '}-2\Re D_{\alpha '} Z_t\in \dot H^{1/2}$, with \begin{align} & \|A_1\|_{\dot H^{1/2}}\lesssim \nm{\partial_{\alpha '} Z_t}_{L^2}^2\lesssim C(\frak E(t));\label{2392} \\ & \|b_{\alpha '}-2\Re D_{\alpha '} Z_t\|_{\dot H^{1/2}}\lesssim \nm{\partial_{\alpha '} Z_t}_{L^2}\nm{\partial_{\alpha '} \frac1{Z_{,{\alpha '}}}}_{L^2}\lesssim C(\frak E(t)).\label{2393} \end{align} We next compute $\|D_{\alpha '} Z_t(t)\|_{\dot H^{1/2}}$. By definition, \begin{equation}\label{2394} \begin{aligned} \|D_{\alpha '} Z_t(t)\|_{\dot H^{1/2}}^2&= \int (i\partial_{\alpha '} \mathbb H D_{\alpha '} Z_t)\, \bar{D_{\alpha '} Z_t }\,d{\alpha '}\\&= \int i\partial_{\alpha '} \bracket{\mathbb H, \frac1{Z_{,{\alpha '}}}} Z_{t,{\alpha '}} \, \bar{D_{\alpha '} Z_t }\,d{\alpha '}+ \int i\partial_{\alpha '} \paren{\frac1{Z_{,{\alpha '}}} \mathbb H Z_{t,{\alpha '}}} \bar{D_{\alpha '} Z_t }\,d{\alpha '}\\&= \int i\bar{D_{\alpha '}}\bracket{\mathbb H, \frac1{Z_{,{\alpha '}}}} Z_{t,{\alpha '}}\, \partial_{\alpha '}\bar{ Z_t }\,d{\alpha '}+ \int i Z_{t,{\alpha '}} (D_{\alpha '} \bar{D_{\alpha '} Z_t })\,d{\alpha '} \end{aligned} \end{equation} where in the last step we used integration by parts and the fact $\mathbb H Z_{t,{\alpha '}}=-Z_{t,{\alpha '}}$. Recall in \eqref{2042}, we have shown $\nm{D_{\alpha '}\bracket{\mathbb H, \frac1{Z_{,{\alpha '}}}} Z_{t,{\alpha '}} }_{L^2}\le C(\frak E(t))$. So by Cauchy-Schwarz inequality, we have \begin{equation}\label{2395} \|D_{\alpha '} Z_t(t)\|_{\dot H^{1/2}}^2\lesssim \nm{D_{\alpha '}\bracket{\mathbb H, \frac1{Z_{,{\alpha '}}}} Z_{t,{\alpha '}} }_{L^2}\nm{Z_{t,{\alpha '}} }_{L^2}+\nm{Z_{t,{\alpha '}} }_{L^2}\nm{D_{\alpha '}^2Z_{t} }_{L^2}\le C(\frak E(t)). \end{equation} Now we consider $\nm{\frac{\frak a_t}{\frak a}\circ h^{-1}}_{\dot H^{1/2}}$. By \eqref{at}-\eqref{ba}-\eqref{dta1}, we know Proposition~\ref{hhalf4}, \eqref{hhalf42} can be used to handle all terms, except for $[Z_t, b; \bar Z_{t,{\alpha '}}]$. 
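For this term we use the duality characterization $\nm{F}_{\dot H^{1/2}}\simeq \sup\big\{\abs{\int \partial_{\alpha '} p\, F\,d{\alpha '}}:\ p\in C_0^\infty(\mathbb R),\ \|p\|_{\dot H^{1/2}}\le 1\big\}$.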
Let $p\in C_0^\infty(\mathbb R)$; we have, by duality, \begin{equation}\label{2396} \abs{\int \partial_{\alpha '} p [Z_t, b; \bar Z_{t,{\alpha '}}]\,d{\alpha '}} =\abs{\int [Z_t, b; \partial_{\alpha '} p]\bar Z_{t,{\alpha '}}\,d{\alpha '}}\lesssim \|Z_{t,{\alpha '}}\|_{L^2}^2\|b_{\alpha '}\|_{L^\infty}\|p\|_{\dot H^{1/2}}, \end{equation} where in the last step we used the Cauchy-Schwarz inequality and \eqref{eq:b111}. Therefore $\nm{[Z_t, b; \bar Z_{t,{\alpha '}}]}_{\dot H^{1/2}}\le C(\frak E(t))$. Applying Proposition~\ref{hhalf4}, \eqref{hhalf42} to the remaining terms and using \eqref{hhalf-1} yields \begin{equation}\label{2397} \nm{ \frac{\frak a_t}{\frak a}\circ h^{-1} }_{\dot H^{1/2}}\le C(\frak E(t)). \end{equation} We can now conclude that for $\Theta=\bar Z_t, \ \frac1{Z_{,{\alpha '}}}-1,\ \bar Z_{tt}$, with $\frak c=-i, 0, 0$ respectively, \begin{equation}\label{2398} \nm{Z_{,{\alpha '}}((\partial_t+b\partial_{\alpha '})\Theta+\frak c)}_{L^\infty}+\nm{Z_{,{\alpha '}}((\partial_t+b\partial_{\alpha '})\Theta+\frak c)}_{\dot H^{1/2}}\le C(\frak E(t)). \end{equation} \subsubsection{Controlling $\int \bar{\paren{\frac1{Z_{,{\alpha '}}}-\frac1{{\mathfrak{Z}}_{,{\alpha '}}}\circ l }} \paren{\bar{{\mathfrak{Z}}_{,{\alpha '}}\circ l((\partial_t+\tilde b\partial_{\alpha '})\tilde{\Theta}\circ l+\frak c)}\Theta_{\alpha '}-\bar{Z_{,{\alpha '}}((\partial_t+b\partial_{\alpha '})\Theta+\frak c)}(\tilde{\Theta}\circ l)_{\alpha '}}\,d{\alpha '}$} We begin by studying $\frac1{Z_{,{\alpha '}}}-\frac1{{\mathfrak{Z}}_{,{\alpha '}}}\circ l$. By \eqref{2100}, $\frac1{Z_{,{\alpha '}}}(h(\alpha,t),t)=\frac1{Z_{,{\alpha '}}}(\alpha,0)e^{\int_0^t (b_{\alpha '}\circ h(\alpha,\tau)-D_\alpha z_t(\alpha,\tau))\,d\tau}$, so \begin{equation}\label{2399} \begin{aligned} \paren{\frac1{Z_{,{\alpha '}}}-\frac1{{\mathfrak{Z}}_{,{\alpha '}}}\circ l}\circ h&=\paren{\frac1{Z_{,{\alpha '}}}(0)-\frac1{{\mathfrak{Z}}_{,{\alpha '}}}(0)} e^{\int_0^t (\tilde b_{\alpha '}-\tilde D_{\alpha '} {\mathfrak{Z}}_t)\circ \tilde h (\tau)\,d\tau}\\&+\frac1{Z_{,{\alpha '}}}\circ h\paren{1- e^{\int_0^t ((\tilde b_{\alpha '}-\tilde D_{\alpha '} {\mathfrak{Z}}_t)\circ \tilde h-(b_{\alpha '}-D_{\alpha '} Z_t)\circ h) (\tau)\,d\tau}}. \end{aligned} \end{equation} We know that for $t\in [0, T]$, \begin{equation}\label{2470} \nm{\paren{\frac1{Z_{,{\alpha '}}}(0)-\frac1{{\mathfrak{Z}}_{,{\alpha '}}}(0)} e^{\int_0^t (\tilde b_{\alpha '}-\tilde D_{\alpha '} {\mathfrak{Z}}_t)\circ \tilde h (\tau)\,d\tau}}_{L^\infty}\le C(\sup_{[0, T]}\tilde{\mathfrak E} (t))\nm{\frac1{Z_{,{\alpha '}}}(0)-\frac1{{\mathfrak{Z}}_{,{\alpha '}}}(0)}_{L^\infty}; \end{equation} and \begin{equation}\label{2471} \nm{ 1- e^{\int_0^t ((\tilde b_{\alpha '}-\tilde D_{\alpha '} {\mathfrak{Z}}_t)\circ \tilde h-(b_{\alpha '}-D_{\alpha '} Z_t)\circ h) (\tau)\,d\tau} }_{L^2}\lesssim \int_0^t \mathcal F(\tau)^{1/2}\,d\tau.
\end{equation} Now we rewrite \begin{equation}\label{2472} \begin{aligned} &\int \bar{\paren{\frac1{Z_{,{\alpha '}}}-\frac1{{\mathfrak{Z}}_{,{\alpha '}}}\circ l }} \paren{\bar{{\mathfrak{Z}}_{,{\alpha '}}\circ l((\partial_t+\tilde b\partial_{\alpha '})\tilde{\Theta}\circ l+\frak c)}\Theta_{\alpha '}-\bar{Z_{,{\alpha '}}((\partial_t+b\partial_{\alpha '})\Theta+\frak c)}(\tilde{\Theta}\circ l)_{\alpha '}}\,d{\alpha '}\\& =\int \bar{\paren{\frac1{Z_{,{\alpha '}}}-\frac1{{\mathfrak{Z}}_{,{\alpha '}}}\circ l }}\Theta_{\alpha '} \paren{\bar{{\mathfrak{Z}}_{,{\alpha '}}\circ l((\partial_t+\tilde b\partial_{\alpha '})\tilde{\Theta}\circ l+\frak c)}-\bar{Z_{,{\alpha '}}((\partial_t+b\partial_{\alpha '})\Theta+\frak c)}}\,d{\alpha '}\\&+ \int \bar{\paren{\frac1{Z_{,{\alpha '}}}-\frac1{{\mathfrak{Z}}_{,{\alpha '}}}\circ l }} \bar{Z_{,{\alpha '}}((\partial_t+b\partial_{\alpha '})\Theta+\frak c)}\,(\Theta-\tilde{\Theta}\circ l)_{\alpha '}\,d{\alpha '}=I+II. \end{aligned} \end{equation} We apply Proposition~\ref{half-product} to $II$, with $g=\frac1{Z_{,{\alpha '}}}-\frac1{{\mathfrak{Z}}_{,{\alpha '}}}\circ l$, and $f=Z_{,{\alpha '}}((\partial_t+b\partial_{\alpha '})\Theta+\frak c)$, where $\Theta=\bar Z_t, \ \frac1{Z_{,{\alpha '}}}-1,\ \bar Z_{tt}$, with $\frak c=-i, 0, 0$ respectively. We know $$\partial_{\alpha '}\paren{\frac1{Z_{,{\alpha '}}} f}= \bar Z_{tt,{\alpha '}},\quad \partial_{\alpha '}(\partial_t+b\partial_{\alpha '})\frac1{Z_{,{\alpha '}}},\quad \bar Z_{ttt,{\alpha '}},\qquad \text{respectively, for }\Theta=\bar Z_t, \ \frac1{Z_{,{\alpha '}}}-1,\ \bar Z_{tt},$$ so $\nm{\partial_{\alpha '}\paren{\frac1{Z_{,{\alpha '}}} f}}_{L^2}\le C(\frak E(t))$, by \S\ref{proof} and \eqref{2411}. Applying Proposition~\ref{half-product} to the $g$ and $f$ given above yields \begin{equation}\label{2473} \begin{aligned} &\nm{\paren{\frac1{Z_{,{\alpha '}}}-\frac1{{\mathfrak{Z}}_{,{\alpha '}}}\circ l}\, Z_{,{\alpha '}}((\partial_t+b\partial_{\alpha '})\Theta+\frak c)}_{\dot H^{1/2}}\\&\qquad \lesssim \nm{\frac1{Z_{,{\alpha '}}}-\frac1{{\mathfrak{Z}}_{,{\alpha '}}}\circ l}_{\dot H^{1/2}}+ \nm{\frac1{Z_{,{\alpha '}}}(0)-\frac1{{\mathfrak{Z}}_{,{\alpha '}}}(0)}_{L^\infty}+\int_0^t \mathcal F(\tau)^{1/2}\,d\tau; \end{aligned} \end{equation} consequently \begin{equation}\label{2474} \abs{II}\lesssim \mathcal F(t)+T\int_0^t\mathcal F(\tau)\,d\tau+\nm{\frac1{Z_{,{\alpha '}}}(0)-\frac1{{\mathfrak{Z}}_{,{\alpha '}}}(0)}_{L^\infty}\mathcal F(t)^{1/2}. \end{equation} We apply the decomposition \eqref{2399} and the Cauchy-Schwarz inequality to $I$, noticing that $\nm{\Theta_{\alpha '}}_{L^2}\le C(\frak E(t))$ and $\nm{D_{\alpha '} \Theta}_{L^\infty}\le C(\frak E(t))$, for $\Theta=\bar Z_t, \ \frac1{Z_{,{\alpha '}}}-1,\ \bar Z_{tt}$. We have \begin{equation}\label{2475} \abs{I}\lesssim \nm{\frac1{Z_{,{\alpha '}}}(0)-\frac1{{\mathfrak{Z}}_{,{\alpha '}}}(0)}_{L^\infty}\mathcal F(t)^{1/2}+\mathcal F(t)^{1/2}\int_0^t\mathcal F(\tau)^{1/2}\,d\tau.
\end{equation} This shows that for $\Theta=\bar Z_t, \ \frac1{Z_{,{\alpha '}}}-1,\ \bar Z_{tt}$, with $\frak c=-i, 0, 0$ respectively, \begin{equation}\label{2476} \begin{aligned} &\abs{\int \bar{\paren{\frac1{Z_{,{\alpha '}}}-\frac1{{\mathfrak{Z}}_{,{\alpha '}}}\circ l }} \paren{\bar{{\mathfrak{Z}}_{,{\alpha '}}\circ l((\partial_t+\tilde b\partial_{\alpha '})\tilde{\Theta}\circ l+\frak c)}\Theta_{\alpha '}-\bar{Z_{,{\alpha '}}((\partial_t+b\partial_{\alpha '})\Theta+\frak c)}(\tilde{\Theta}\circ l)_{\alpha '}}\,d{\alpha '}}\\&\qquad\qquad\lesssim \mathcal F(t)+T\int_0^t\mathcal F(\tau)\,d\tau+\nm{\frac1{Z_{,{\alpha '}}}(0)-\frac1{{\mathfrak{Z}}_{,{\alpha '}}}(0)}_{L^\infty}\mathcal F(t)^{1/2}. \end{aligned} \end{equation} \subsubsection{Controlling $\nm{\frac{\partial_{\alpha '} A_1}{|Z_{,{\alpha '}}|^2}-\frac{\partial_{\alpha '} \tilde {A_1}}{|{\mathfrak{Z}}_{,{\alpha '}}|^2}\circ l}_{L^2}$}\label{da1z2} We will take advantage of the fact that $\frac{\partial_{\alpha '} A_1}{|Z_{,{\alpha '}}|^2}$ is purely real to use $(I+\mathbb H)$ to convert it into commutator forms to which the Propositions in \S\ref{prepare} can be applied. Observe that \begin{equation}\label{2369} i\,\frac{\partial_{\alpha '} A_1}{|Z_{,{\alpha '}}|^2}= \frac{1}{Z_{,{\alpha '}}}\partial_{\alpha '}\frac{ i A_1}{\bar Z_{,{\alpha '}}}-\frac{ i A_1}{Z_{,{\alpha '}}}\partial_{\alpha '}\frac{1}{\bar Z_{,{\alpha '}}}=\frac{1}{Z_{,{\alpha '}}}\partial_{\alpha '} Z_{tt}+(\bar Z_{tt}-i)\partial_{\alpha '}\frac{1}{\bar Z_{,{\alpha '}}}; \end{equation} we apply $(I+\mathbb H)$ to \eqref{2369}, and use the fact $\partial_{\alpha '}\frac{1}{\bar Z_{,{\alpha '}}}=-\mathbb H\paren{\partial_{\alpha '}\frac{1}{\bar Z_{,{\alpha '}}}}$ to write the second term in a commutator form. We have \begin{equation}\label{2370} i\,(I+\mathbb H)\frac{\partial_{\alpha '} A_1}{|Z_{,{\alpha '}}|^2}=(I+\mathbb H)\paren{\frac{1}{Z_{,{\alpha '}}}\partial_{\alpha '} Z_{tt}}-\bracket{\bar Z_{tt}, \mathbb H}\partial_{\alpha '}\frac{1}{\bar Z_{,{\alpha '}}}. \end{equation} For the first term on the right hand side, we commute out $\frac1{Z_{,{\alpha '}}}$, then use the fact $Z_t=-\mathbb H Z_t$ to write $(I+\mathbb H)Z_{tt}$ as a commutator (see \eqref{2354}), \begin{equation}\label{2371} (I+\mathbb H)\paren{\frac{1}{Z_{,{\alpha '}}}\partial_{\alpha '} Z_{tt}}=\bracket{\mathbb H, \frac{1}{Z_{,{\alpha '}}}}\partial_{\alpha '} Z_{tt}-\frac{1}{Z_{,{\alpha '}}}\partial_{\alpha '} [b,\mathbb H]Z_{t,{\alpha '}}; \end{equation} we compute \begin{equation}\label{2372} \begin{aligned} \frac{1}{Z_{,{\alpha '}}}\partial_{\alpha '} [b,\mathbb H]Z_{t,{\alpha '}}&=\frac{1}{Z_{,{\alpha '}}} b_{\alpha '} \mathbb H Z_{t,{\alpha '}}-\frac1{\pi i Z_{,{\alpha '}}} \int \frac{b({\alpha '},t)-b({\beta '},t)}{({\alpha '}-{\beta '})^2}Z_{t,{\beta '}}\,d{\beta '}\\& =-b_{\alpha '} D_{\alpha '} Z_t-\bracket{\frac1{Z_{,{\alpha '}}}, b; Z_{t,{\alpha '}}}-\frac1{\pi i } \int \frac{b({\alpha '},t)-b({\beta '},t)}{({\alpha '}-{\beta '})^2}D_{\beta '} Z_{t}\,d{\beta '}\\&= -b_{\alpha '} D_{\alpha '} Z_t-\bracket{\frac1{Z_{,{\alpha '}}}, b; Z_{t,{\alpha '}}}+[b,\mathbb H]\partial_{\alpha '} D_{\alpha '} Z_t-\mathbb H(b_{\alpha '} D_{\alpha '} Z_t), \end{aligned} \end{equation} where in the last step we performed integration by parts. We have converted the right hand side of \eqref{2370} into the desired form.
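Note for the next step that $\frac{\partial_{\alpha '} A_1}{|Z_{,{\alpha '}}|^2}$ is real, while $i\,\mathbb H u$ is real whenever $u$ is real (with the normalization of $\mathbb H$ used in this paper); hence the imaginary part of the left hand side of \eqref{2370} is exactly $\frac{\partial_{\alpha '} A_1}{|Z_{,{\alpha '}}|^2}$, which is why taking imaginary parts below recovers the quantity we want to estimate.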
Applying \eqref{dl21-inq}, \eqref{dl221}, \eqref{d32inq}, \eqref{q3} and \eqref{2359}, and then taking imaginary parts, gives \begin{equation}\label{2373} \nm{\frac{\partial_{\alpha '} A_1}{|Z_{,{\alpha '}}|^2}-\frac{\partial_{\alpha '} \tilde {A_1}}{|{\mathfrak{Z}}_{,{\alpha '}}|^2}\circ l}_{L^2}\lesssim \mathcal F(t)^{\frac12}. \end{equation} In what follows we will use the following identities in the calculations: for $f,\ g,\ p$ satisfying $g=\mathbb H g$ and $p=\mathbb H p$, \begin{align} [ f, \mathbb H] (gp)&=[ f g,\mathbb H]p=[\mathbb P_A( f g),\mathbb H] p;\label{H1}\\ [ f, \mathbb H]\partial_{\alpha '}(gp)&=[ f\partial_{\alpha '} g, \mathbb H] p+[ f g,\mathbb H]\partial_{\alpha '} p=[\mathbb P_A( f\partial_{\alpha '} g), \mathbb H] p+[\mathbb P_A( f g),\mathbb H]\partial_{\alpha '} p. \label{H2} \end{align} \eqref{H1} is obtained by using the fact that the product of holomorphic functions is holomorphic, and \eqref{comm-hilbe}; \eqref{H2} is a consequence of \eqref{H1} and the product rule. \subsubsection{Controlling $\nm{(\partial_t+b\partial_{\alpha '})(b_{\alpha '}-2\Re D_{\alpha '} Z_t)-(\partial_t+\tilde b\partial_{\alpha '})(\tilde b_{\alpha '}- 2\Re \tilde D_{\alpha '}{\mathfrak{Z}}_t )\circ l}_{L^2}$ }\label{ddtba} We begin with \eqref{dba-1}, \begin{equation}\label{2374} \begin{aligned} &(\partial_t+b\partial_{\alpha '})\paren{b_{\alpha '}-2\Re D_{\alpha '} Z_t}=\Re \paren{\bracket{ (\partial_t+b\partial_{\alpha '})\frac1{Z_{,{\alpha '}}}, \mathbb H} Z_{t,\alpha'} + \bracket{Z_{t}, \mathbb H}\partial_{\alpha '} (\partial_t+b\partial_{\alpha '})\frac1{Z_{,{\alpha '}}}} \\&\qquad+\Re\paren{ \bracket{ \frac1{Z_{,{\alpha '}}}, \mathbb H} Z_{tt,\alpha'}+ \bracket{Z_{tt}, \mathbb H}\partial_{\alpha '} \frac1{Z_{,{\alpha '}}} -\bracket{ \frac1{Z_{,{\alpha '}}}, b; Z_{t,\alpha'} } -\bracket{ Z_{t}, b; \partial_{\alpha '} \frac1{Z_{,{\alpha '}}} } }; \end{aligned} \end{equation} observe that using Propositions~\ref{dl21}, ~\ref{d32} and \ref{denergy} we are able to get the desired estimates for the last four terms on the right hand side of \eqref{2374}. We need to rewrite the first two terms in order to apply the results in \S\ref{prepare}. First, by \eqref{eq:dza} we have \begin{equation}\label{2375} (\partial_t+b\partial_{\alpha'})\paren{\frac1{Z_{,{\alpha '}}}}=\frac1{Z_{,{\alpha '}}} \paren{b_{\alpha '}-D_{\alpha '} Z_t}; \end{equation} and by $\mathbb HZ_{t,{\alpha '}}=-Z_{t,{\alpha '}}$, \begin{equation}\label{2376} \bracket{ (\partial_t+b\partial_{\alpha '})\frac1{Z_{,{\alpha '}}}, \mathbb H} Z_{t,\alpha'}=-(I+\mathbb H) \paren{(D_{\alpha '} Z_t )(b_{\alpha '}-D_{\alpha '} Z_t)}; \end{equation} so we can conclude from \eqref{q3} and \eqref{2359} that \begin{equation}\label{2377} \nm{\bracket{ (\partial_t+b\partial_{\alpha '})\frac1{Z_{,{\alpha '}}}, \mathbb H} Z_{t,\alpha'}-U_l \bracket{ (\partial_t+\tilde b\partial_{\alpha '})\frac1{{\mathfrak{Z}}_{,{\alpha '}}}, \mathbb H} {\mathfrak{Z}}_{t,\alpha'}}_{L^2}\lesssim \mathcal F(t)^{1/2}.
\end{equation} For the second term on the right hand side of \eqref{2374}, we use \eqref{b} to further rewrite \eqref{2375}, \begin{equation}\label{2378} \begin{aligned} &(\partial_t+b\partial_{\alpha'})\paren{\frac1{Z_{,{\alpha '}}}}=\frac1{Z_{,{\alpha '}}} \paren{\partial_{\alpha '}{\Re (I-\mathbb H)\frac {Z_t}{Z_{,{\alpha '}}} } -D_{\alpha '} Z_t}\\&= \frac1{Z_{,{\alpha '}}} \paren{\mathbb P_A\frac {Z_{t,{\alpha '}}}{Z_{,{\alpha '}}}+\mathbb P_H\frac {\bar Z_{t,{\alpha '}}}{\bar Z_{,{\alpha '}}} +\Re (I-\mathbb H)\paren{Z_t\partial_{\alpha '}\frac1{Z_{,{\alpha '}}}} -D_{\alpha '} Z_t}\\&= \frac1{Z_{,{\alpha '}}} \paren{\mathbb P_H\paren{\bar{D_{\alpha '} Z_t}-D_{\alpha '} Z_t}+\mathbb P_A\paren{Z_{t} \partial_{\alpha '} \frac1{Z_{,{\alpha '}} }}+\bar{\mathbb P_A\paren{Z_{t} \partial_{\alpha '} \frac1{Z_{,{\alpha '}} }}}}. \end{aligned} \end{equation} We substitute the right hand side of \eqref{2378} in the second term, $\bracket{Z_{t}, \mathbb H}\partial_{\alpha '} (\partial_t+b\partial_{\alpha '})\frac1{Z_{,{\alpha '}}}$ of \eqref{2374}, term by term. For the first term we have, by \eqref{H2}, \begin{equation}\label{2379} \begin{aligned} & \bracket{Z_{t}, \mathbb H}\partial_{\alpha '} \paren{ \frac1{Z_{,{\alpha '}}}\mathbb P_H\paren{\Im\bar{ D_{\alpha '} Z_t}} }=\bracket{\mathbb P_A\paren{Z_{t} \partial_{\alpha '} \frac1{Z_{,{\alpha '}} }}, \mathbb H}\mathbb P_H\paren{\Im \bar{D_{\alpha '} Z_t}} \\&\qquad\qquad\qquad+\bracket{\mathbb P_A\paren{\frac{Z_{t} }{Z_{,{\alpha '}} }}, \mathbb H} \partial_{\alpha '}\mathbb P_H\paren{\Im\bar{D_{\alpha '} Z_t}}\\& = (I-\mathbb H)\paren{ \mathbb P_A\paren{Z_{t} \partial_{\alpha '} \frac1{Z_{,{\alpha '}} }}\mathbb P_H\paren{\Im\bar{D_{\alpha '} Z_t}}} +\bracket{b, \mathbb H} \partial_{\alpha '}\mathbb P_H\paren{\Im\bar{D_{\alpha '} Z_t}}; \end{aligned} \end{equation} in the last step we used \eqref{bb} and \eqref{comm-hilbe}. Therefore by \eqref{2382}-\eqref{2383}, \eqref{2359}, \eqref{q3} and \eqref{dl221}, \begin{equation}\label{2400} \nm{\bracket{Z_{t}, \mathbb H}\partial_{\alpha '} \paren{ \frac1{Z_{,{\alpha '}}}\mathbb P_H\paren{\Im\bar{D_{\alpha '} Z_t}} }- U_l\bracket{{\mathfrak{Z}}_{t}, \mathbb H}\partial_{\alpha '} \paren{ \frac1{{\mathfrak{Z}}_{,{\alpha '}}}\mathbb P_H\paren{\Im \bar{\tilde D_{\alpha '} {\mathfrak{Z}}_t}} }}_{L^2}\lesssim \mathcal F(t)^{\frac12}. 
\end{equation} We substitute in the second term and rewrite further by \eqref{comm-hilbe}, \begin{equation}\label{2401} \begin{aligned} \bracket{Z_{t}, \mathbb H}\partial_{\alpha '} \paren{ \frac1{Z_{,{\alpha '}}}\mathbb P_A\paren{Z_{t} \partial_{\alpha '} \frac1{Z_{,{\alpha '}} }}}&=\bracket{Z_{t}, \mathbb H}\partial_{\alpha '} \mathbb P_H\paren{ \frac1{Z_{,{\alpha '}}}\mathbb P_A\paren{Z_{t} \partial_{\alpha '} \frac1{Z_{,{\alpha '}} }}}\\&=-\frac12\bracket{Z_{t}, \mathbb H}\partial_{\alpha '} \paren{ \bracket{\frac1{Z_{,{\alpha '}}},\mathbb H}\mathbb P_A\paren{Z_{t} \partial_{\alpha '} \frac1{Z_{,{\alpha '}} }}} \end{aligned} \end{equation} This allows us to conclude, by \eqref{dl21-inq}, and \eqref{2390}, \eqref{2359},\footnote{For the estimate $\nm{\partial_{\alpha '} \paren{ \frac1{Z_{,{\alpha '}}}\mathbb P_A\paren{Z_{t} \partial_{\alpha '} \frac1{Z_{,{\alpha '}} }}} }_{L^2}\le C(\frak E(t))$, see \eqref{2043}-\eqref{2044}.} \begin{equation}\label{2402} \nm{\bracket{Z_{t}, \mathbb H}\partial_{\alpha '} \paren{ \frac1{Z_{,{\alpha '}}}\mathbb P_A\paren{Z_{t} \partial_{\alpha '} \frac1{Z_{,{\alpha '}} }}}-U_l\bracket{{\mathfrak{Z}}_{t}, \mathbb H}\partial_{\alpha '} \paren{ \frac1{{\mathfrak{Z}}_{,{\alpha '}}}\mathbb P_A\paren{{\mathfrak{Z}}_{t} \partial_{\alpha '} \frac1{{\mathfrak{Z}}_{,{\alpha '}} }}} }_{L^2}\lesssim \mathcal F(t)^{\frac12}. \end{equation} Now we substitute in the last term and rewrite further by \eqref{H2}, \begin{equation}\label{2403} \begin{aligned} &\bracket{Z_{t}, \mathbb H}\partial_{\alpha '} \paren{ \frac1{Z_{,{\alpha '}}}\bar{\mathbb P_A\paren{Z_{t} \partial_{\alpha '} \frac1{Z_{,{\alpha '}} }}}}=\bracket{\mathbb P_A\paren{Z_{t}\partial_{\alpha '}\frac1{Z_{,{\alpha '}}}}, \mathbb H}\bar{\mathbb P_A\paren{Z_{t} \partial_{\alpha '} \frac1{Z_{,{\alpha '}} }}}\\&\qquad\qquad\qquad+ \bracket{\mathbb P_A\paren{\frac{Z_{t}}{Z_{,{\alpha '}}}}, \mathbb H}\partial_{\alpha '}\bar{\mathbb P_A\paren{Z_{t} \partial_{\alpha '} \frac1{Z_{,{\alpha '}} }}}\\&=(I-\mathbb H)\paren{ \mathbb P_A\paren{Z_{t}\partial_{\alpha '}\frac1{Z_{,{\alpha '}}}}\bar{\mathbb P_A\paren{Z_{t} \partial_{\alpha '} \frac1{Z_{,{\alpha '}} }}} } + \bracket{b, \mathbb H}\partial_{\alpha '}\bar{\mathbb P_A\paren{Z_{t} \partial_{\alpha '} \frac1{Z_{,{\alpha '}} }}}. \end{aligned} \end{equation} Again, this puts it in the right form to allow us to conclude, from \eqref{2382}-\eqref{2383}, \eqref{q3}, and \eqref{dl221}, that \begin{equation}\label{2404} \nm{\bracket{Z_{t}, \mathbb H}\partial_{\alpha '} \paren{ \frac1{Z_{,{\alpha '}}}\bar{\mathbb P_A\paren{Z_{t} \partial_{\alpha '} \frac1{Z_{,{\alpha '}} }}}}-U_l\bracket{{\mathfrak{Z}}_{t}, \mathbb H}\partial_{\alpha '} \paren{ \frac1{{\mathfrak{Z}}_{,{\alpha '}}}\bar{\mathbb P_A\paren{{\mathfrak{Z}}_{t} \partial_{\alpha '} \frac1{{\mathfrak{Z}}_{,{\alpha '}} }}}} }_{L^2}\lesssim \mathcal F(t)^{\frac12}. \end{equation} This finishes the proof of \begin{equation}\label{2405} \nm{(\partial_t+b\partial_{\alpha '})(b_{\alpha '}-2\Re D_{\alpha '} Z_t)-(\partial_t+\tilde b\partial_{\alpha '})(\tilde b_{\alpha '}- 2\Re \tilde D_{\alpha '}{\mathfrak{Z}}_t )\circ l}_{L^2}\lesssim \mathcal F(t)^{1/2}. \end{equation} \subsubsection{Controlling $ \nm{(\partial_t+b\partial_{\alpha '})\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}}}_{L^\infty}$} \label{dtati} We begin with \eqref{at} and take a $\partial_t+b\partial_{\alpha '}$ derivative. 
We get \begin{equation}\label{2406} (\partial_t+b\partial_{\alpha '})\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}}=\frac{(\partial_t+b\partial_{\alpha '})^2A_1}{A_1}-\paren{\frac{(\partial_t+b\partial_{\alpha '})A_1}{A_1}}^2+(\partial_t+b\partial_{\alpha '})(b_{\alpha '}-2\Re D_{\alpha '} Z_t). \end{equation} We have controlled all the quantities on the right hand side of \eqref{2406} in \S\ref{proof}, except for $\|(\partial_t+b\partial_{\alpha '})^2A_1\|_{L^\infty}$. We proceed from \eqref{dta1} and use \eqref{eq:c14} to compute \begin{equation}\label{2407} \begin{aligned} (\partial_t +b\partial_{\alpha '})^2 A_1&= -\Im \paren{\bracket{2\bracket{Z_{tt},\mathbb H}\bar Z_{tt,\alpha'}-[Z_{tt}, b; \bar Z_{t,\alpha'}]-[Z_{t}, b; \bar Z_{tt,\alpha'}]}}\\& -\Im \paren{\bracket{Z_{ttt},\mathbb H}\bar Z_{t,\alpha'}+\bracket{Z_t,\mathbb H}\partial_{\alpha '} \bar Z_{ttt}- (\partial_t +b\partial_{\alpha '})[Z_t, b; \bar Z_{t,{\alpha '}}]}, \end{aligned} \end{equation} and we expand similarly \begin{equation}\label{2408} \begin{aligned} (\partial_t +b\partial_{\alpha '})[Z_t, b; \bar Z_{t,{\alpha '}}]&=[Z_{tt}, b; \bar Z_{t,{\alpha '}}]+[Z_t, (\partial_t +b\partial_{\alpha '})b; \bar Z_{t,{\alpha '}}]+[Z_t, b; \bar Z_{tt,{\alpha '}}]\\&-\frac2{\pi i}\int \frac{(b({\alpha '},t)-b({\beta '},t))^2(Z_t({\alpha '},t)-Z_t({\beta '},t)) }{({\alpha '}-{\beta '})^3} \bar Z_{t,{\beta '}}\,d{\beta '}. \end{aligned} \end{equation} Applying the Cauchy-Schwarz inequality and Hardy's inequality, we get \begin{equation}\label{2409} \begin{aligned} \nm{(\partial_t +b\partial_{\alpha '})^2 A_1}_{L^\infty}&\lesssim \nm{Z_{tt,{\alpha '}}}_{L^2}^2+ \nm{Z_{tt,{\alpha '}}}_{L^2}\nm{b_{\alpha '}}_{L^\infty}\nm{Z_{t,{\alpha '}}}_{L^2}+\nm{Z_{ttt,{\alpha '}}}_{L^2}\nm{Z_{t,{\alpha '}}}_{L^2}\\&+\nm{\partial_{\alpha '}(\partial_t +b\partial_{\alpha '})b}_{L^\infty}\nm{Z_{t,{\alpha '}}}_{L^2}^2+\nm{b_{\alpha '}}_{L^\infty}^2\nm{Z_{t,{\alpha '}}}_{L^2}^2. \end{aligned} \end{equation} Observe that all quantities on the right hand side of \eqref{2409} are controlled in \S\ref{proof} and in \eqref{2411}. This shows that \begin{equation}\label{2412} \nm{(\partial_t +b\partial_{\alpha '})^2 A_1}_{L^\infty}\le C(\frak E(t)),\qquad \nm{(\partial_t+b\partial_{\alpha '})\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}}}_{L^\infty}\le C(\frak E(t)). \end{equation} \subsubsection{Controlling $ \nm{(\partial_t+b\partial_{\alpha '})\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}}-(\partial_t+\tilde b\partial_{\alpha '})\paren{\frac{\tilde{\mathfrak{a}}_t}{\tilde{\mathfrak{a}}} \circ \tilde h^{-1} }\circ l}_{L^2}$ } \label{ddtat} By the expansions \eqref{dta1}, \eqref{2406}, and \eqref{2407}, \eqref{2408}, we see that by the results in \S\ref{prepare} and by \eqref{2405}, we can directly conclude the desired estimates for all but the following three quantities: \begin{itemize} \item $ \nm{\bracket{Z_{ttt},\mathbb H}\bar Z_{t,\alpha'}-(\bracket{{\mathfrak{Z}}_{ttt},\mathbb H}\bar {{\mathfrak{Z}}}_{t,\alpha'}) \circ l}_{L^2}$; \item $ \nm{\bracket{Z_t,\mathbb H}\partial_{\alpha '} \bar Z_{ttt}-(\bracket{{\mathfrak{Z}}_t,\mathbb H}\partial_{\alpha '} \bar {{\mathfrak{Z}}}_{ttt} ) \circ l}_{L^2}$; \item $ \nm{[Z_t, (\partial_t +b\partial_{\alpha '})b; \bar Z_{t,{\alpha '}}]-U_l [{\mathfrak{Z}}_t, (\partial_t +\tilde b\partial_{\alpha '})\tilde b; \bar {{\mathfrak{Z}}}_{t,{\alpha '}}]}_{L^2}$. \end{itemize} The first two items can be analyzed in the same way as in \S\ref{ddtba}.
We begin with $\bracket{Z_{ttt},\mathbb H}\bar Z_{t,\alpha'}$ and rewrite it using $\mathbb H\bar Z_{t,\alpha'}=\bar Z_{t,\alpha'}$, and substitute in by \eqref{eq:dztt}, \eqref{eq:dzt}, \begin{equation}\label{2413} \bracket{Z_{ttt},\mathbb H}\bar Z_{t,\alpha'}=(I-\mathbb H)(Z_{ttt} \bar Z_{t,{\alpha '}})=(I-\mathbb H)\paren{iA_1 \bar {D_{\alpha '} Z_t}\paren{D_{\alpha '} Z_t+\frac{\frak a_t}{\frak a}\circ h^{-1}}}. \end{equation} From here we are ready to conclude from \eqref{q3}, \eqref{2359} that \begin{equation}\label{2414} \nm{\bracket{Z_{ttt},\mathbb H}\bar Z_{t,\alpha'}-(\bracket{{\mathfrak{Z}}_{ttt},\mathbb H}\bar {{\mathfrak{Z}}}_{t,\alpha'}) \circ l}_{L^2}\lesssim \mathcal F(t)^{1/2}. \end{equation} Now substitute in by \eqref{eq:dztt}, \eqref{eq:dzt}, and use the identity $\mathbb P_H+\mathbb P_A=I$, then use \eqref{H2} and \eqref{comm-hilbe}, \begin{equation}\label{2415} \begin{aligned} \bracket{Z_t,\mathbb H}\partial_{\alpha '} \bar Z_{ttt}& = -i\bracket{Z_t,\mathbb H}\partial_{\alpha '} \paren{\frac{1}{Z_{,{\alpha '}}}(\mathbb P_H+\mathbb P_A)\paren{ A_1\paren{\bar{D_{\alpha '} Z_t}+\frac{\frak a_t}{\frak a}\circ h^{-1}} } }\\&= -i\bracket{\mathbb P_A\paren{Z_t\partial_{\alpha '}\frac{1}{Z_{,{\alpha '}}}},\mathbb H}\mathbb P_H\paren{ A_1\paren{\bar{D_{\alpha '} Z_t}+\frac{\frak a_t}{\frak a}\circ h^{-1}} }\\&-i\bracket{\mathbb P_A\paren{\frac{Z_t}{Z_{,{\alpha '}}}},\mathbb H}\partial_{\alpha '}\mathbb P_H\paren{ A_1\paren{\bar{D_{\alpha '} Z_t}+\frac{\frak a_t}{\frak a}\circ h^{-1}} }\\& -i\bracket{Z_t,\mathbb H}\partial_{\alpha '} \mathbb P_H\paren{\frac{1}{Z_{,{\alpha '}}}\mathbb P_A\paren{ A_1\paren{\bar{D_{\alpha '} Z_t}+\frac{\frak a_t}{\frak a}\circ h^{-1}} } }\\& =-i(I-\mathbb H)\paren{\mathbb P_A\paren{Z_t\partial_{\alpha '}\frac{1}{Z_{,{\alpha '}}}}\mathbb P_H\paren{ A_1\paren{\bar{D_{\alpha '} Z_t}+\frac{\frak a_t}{\frak a}\circ h^{-1}} }}\\&-i\bracket{b,\mathbb H}\partial_{\alpha '}\mathbb P_H\paren{ A_1\paren{\bar{D_{\alpha '} Z_t}+\frac{\frak a_t}{\frak a}\circ h^{-1}} }\\& -i\bracket{Z_t,\mathbb H}\partial_{\alpha '} \bracket{\mathbb P_H, \frac{1}{Z_{,{\alpha '}}}}\mathbb P_A\paren{ A_1\paren{\bar{D_{\alpha '} Z_t}+\frac{\frak a_t}{\frak a}\circ h^{-1}} }. \end{aligned} \end{equation} From here we can apply the Propositions in \S\ref{prepare} and \eqref{2382}-\eqref{2383}, \eqref{2390-1} to conclude \begin{equation}\label{2416} \nm{\bracket{Z_{t},\mathbb H}\partial_{\alpha '}\bar Z_{ttt}-(\bracket{{\mathfrak{Z}}_{t},\mathbb H}\partial_{\alpha '}\bar {{\mathfrak{Z}}}_{ttt}) \circ l}_{L^2}\lesssim \mathcal F(t)^{1/2}. \end{equation} Now we consider the last term, $[Z_t, (\partial_t +b\partial_{\alpha '})b; \bar Z_{t,{\alpha '}}]$. The problem with this term is that we don't yet have the estimate, $\nm{\partial_{\alpha '}(\partial_t +b\partial_{\alpha '})b-(\partial_{\alpha '}(\partial_t +\tilde b\partial_{\alpha '})\tilde b)\circ l}_{L^2}\lesssim \mathcal F(t)^{1/2}$, to apply Proposition~\ref{d32}. We will not prove this estimate. Instead, we will identify the trouble term in $\partial_{\alpha '}(\partial_t +b\partial_{\alpha '})b$, and handle it differently. 
We compute, by \eqref{eq:c7}, \eqref{eq:c1-1}, \begin{equation}\label{2417} \begin{aligned} \partial_{\alpha '}&(\partial_t +b\partial_{\alpha '})b=b_{\alpha '}^2+(\partial_t +b\partial_{\alpha '})b_{\alpha '}\\&=b_{\alpha '}^2+(\partial_t +b\partial_{\alpha '})(b_{\alpha '}-2\Re D_{\alpha '} Z_t)-2\Re \{(D_{\alpha '} Z_t)^2\}+2\Re D_{\alpha '} Z_{tt}; \end{aligned} \end{equation} observe that we have the estimate for the first three terms. We expand the last term by substituting in \eqref{aa1}, \begin{equation}\label{2418} 2\Re D_{\alpha '} Z_{tt}= 2\Re\frac1{Z_{,{\alpha '}}}\partial_{\alpha '}{\frac{iA_1}{\bar Z_{,{\alpha '}}}} =\partial_{\alpha '}\paren{\frac{iA_1}{|Z_{,{\alpha '}}|^2}}-\frac{\partial_{\alpha '}\paren{iA_1}}{|Z_{,{\alpha '}}|^2}-2\frac{iA_1}{\bar Z_{,{\alpha '}}}\partial_{\alpha '}\frac1{Z_{,{\alpha '}}}. \end{equation} Substitute \eqref{2418} in \eqref{2417}, and then apply $\mathbb P_A$, writing the last term as a commutator; we get \begin{equation}\label{2419} \begin{aligned} & \mathbb P_A \partial_{\alpha '} \paren{(\partial_t +b\partial_{\alpha '})b - \frac{iA_1}{|Z_{,{\alpha '}}|^2} }\\&=\mathbb P_A\paren{ b_{\alpha '}^2+(\partial_t +b\partial_{\alpha '})(b_{\alpha '}-2\Re D_{\alpha '} Z_t)-2\Re \{(D_{\alpha '} Z_t)^2\} -i\frac{\partial_{\alpha '} A_1}{|Z_{,{\alpha '}}|^2} } -\bracket{\frac{iA_1}{\bar Z_{,{\alpha '}}},\mathbb H}\partial_{\alpha '}\frac1{Z_{,{\alpha '}}}; \end{aligned} \end{equation} a direct application of the results in \S\ref{prepare}, \S\ref{ddtba} and \S\ref{da1z2} to the right hand side of \eqref{2419} yields \begin{equation}\label{2420} \nm{\mathbb P_A \partial_{\alpha '} \paren{(\partial_t +b\partial_{\alpha '})b - \frac{iA_1}{|Z_{,{\alpha '}}|^2} }-U_l \mathbb P_A \partial_{\alpha '} \paren{(\partial_t +\tilde b\partial_{\alpha '})\tilde b - \frac{i\tilde {A_1}}{|{\mathfrak{Z}}_{,{\alpha '}}|^2} }}_{L^2}\lesssim \mathcal F(t)^{1/2}, \end{equation} which of course holds also for its real part. We know the real part $$\Re\mathbb P_A \partial_{\alpha '} \paren{(\partial_t +b\partial_{\alpha '})b- \frac{iA_1}{|Z_{,{\alpha '}}|^2} }=\frac12\partial_{\alpha '} \paren{(\partial_t +b\partial_{\alpha '})b+\mathbb H\paren{\frac{iA_1}{|Z_{,{\alpha '}}|^2}} }. $$ We split $[Z_t, (\partial_t +b\partial_{\alpha '})b; \bar Z_{t,{\alpha '}}]$ in two: \begin{equation}\label{2421} [Z_t, (\partial_t +b\partial_{\alpha '})b; \bar Z_{t,{\alpha '}}]=[Z_t, (\partial_t +b\partial_{\alpha '})b+\mathbb H\paren{\frac{iA_1}{|Z_{,{\alpha '}}|^2}}; \bar Z_{t,{\alpha '}}]-[Z_t, \mathbb H\paren{\frac{iA_1}{|Z_{,{\alpha '}}|^2}} ; \bar Z_{t,{\alpha '}}] \end{equation} and we can conclude from Proposition~\ref{d32} for the first term that,\footnote{The fact that $\nm{\partial_{\alpha '}\paren{(\partial_t +b\partial_{\alpha '})b+\mathbb H\paren{\frac{iA_1}{|Z_{,{\alpha '}}|^2}}}}_{L^\infty}\le C(\frak E(t))$ follows from \eqref{2039-1} and \eqref{2052}.} \begin{equation}\label{2422} \nm{ [Z_t, (\partial_t +b\partial_{\alpha '})b+\mathbb H\paren{\frac{iA_1}{|Z_{,{\alpha '}}|^2}}; \bar Z_{t,{\alpha '}}]-U_l[{\mathfrak{Z}}_t, (\partial_t +\tilde b\partial_{\alpha '})\tilde b+\mathbb H\paren{\frac{i\tilde {A_1}}{|{\mathfrak{Z}}_{,{\alpha '}}|^2}}; \bar {{\mathfrak{Z}}}_{t,{\alpha '}}] }_{L^2}\lesssim \mathcal F(t)^{1/2}. \end{equation} We are left with the term $[Z_t, \mathbb H\paren{\frac{iA_1}{|Z_{,{\alpha '}}|^2}} ; \bar Z_{t,{\alpha '}}] $. 
We will convert it to a form to which we can directly apply known results to conclude the desired estimate, $$\nm{ [Z_t, \mathbb H(\frac{iA_1}{|Z_{,{\alpha '}}|^2}); \bar Z_{t,{\alpha '}}]-U_l[{\mathfrak{Z}}_t, \mathbb H(\frac{i\tilde {A_1}}{|{\mathfrak{Z}}_{,{\alpha '}}|^2}); \bar {{\mathfrak{Z}}}_{t,{\alpha '}}] }_{L^2}\lesssim \mathcal F(t)^{1/2}.$$ We need the following basic identities: 1. for $f$, $g$ satisfying $f=\mathbb H f$, $g=\mathbb H g$, \begin{equation}\label{2446-1} [ f, g; 1]=0; \end{equation} 2. for $f, p, g$ satisfying $g=\mathbb Hg$ and $p=\mathbb Hp$, \begin{equation}\label{2424} [\bar p, \mathbb P_H f; g]=[\mathbb P_H f, \bar p g; 1]= [f, \mathbb P_A(\bar p g); 1]. \end{equation} \eqref{2446-1} can be verified by \eqref{comm-hilbe} and integration by parts. \eqref{2424} can be verified by \eqref{2446-1}. We split \begin{equation}\label{2423} \bracket{Z_t, \mathbb H\paren{\frac{iA_1}{|Z_{,{\alpha '}}|^2}} ; \bar Z_{t,{\alpha '}}}=\bracket{Z_t, 2\mathbb P_H\paren{\frac{iA_1}{|Z_{,{\alpha '}}|^2}} ; \bar Z_{t,{\alpha '}}}-\bracket{Z_t, \frac{iA_1}{|Z_{,{\alpha '}}|^2}; \bar Z_{t,{\alpha '}}}=2I-II. \end{equation} Applying \eqref{2424} to $I$ yields \begin{equation}\label{2425} I:=\bracket{Z_t, \mathbb P_H\paren{\frac{iA_1}{|Z_{,{\alpha '}}|^2}} ; \bar Z_{t,{\alpha '}}} =\bracket{\frac{iA_1}{|Z_{,{\alpha '}}|^2}, \mathbb P_A(Z_t\bar Z_{t,{\alpha '}});1}; \end{equation} substituting in \eqref{2425} the identity \begin{equation}\label{2440} \frac{iA_1({\alpha '})}{|Z_{,{\alpha '}}|^2}-\frac{iA_1({\beta '})}{|Z_{,{\beta '}}|^2}=\paren{\frac{iA_1({\alpha '})}{Z_{,{\alpha '}}}-\frac{iA_1({\beta '})}{Z_{,{\beta '}}}}\frac1{\bar Z_{,{\beta '}}}+\frac{iA_1({\alpha '})}{Z_{,{\alpha '}}}\paren{\frac1{\bar Z_{,{\alpha '}}}-\frac1{\bar Z_{,{\beta '}}}} \end{equation} gives \begin{equation}\label{2441} I=\frac1{\pi i}\int\frac{\paren{\mathbb P_A(Z_t\bar Z_{t,{\alpha '}})({\alpha '})-\mathbb P_A(Z_t\bar Z_{t,{\beta '}})({\beta '})}\paren{\frac{iA_1({\alpha '})}{Z_{,{\alpha '}}}-\frac{iA_1({\beta '})}{Z_{,{\beta '}}} }\frac1{\bar Z_{,{\beta '}}}}{({\alpha '}-{\beta '})^2}\,d{\beta '}; \end{equation} here the second term disappears because of \eqref{2446-1}. Using the identity \begin{equation}\label{2442} \frac{\mathbb P_A(Z_t\bar Z_{t,{\alpha '}})-\mathbb P_A(Z_t\bar Z_{t,{\beta '}})}{\bar Z_{,{\beta '}}}= \frac{\mathbb P_A(Z_t\bar Z_{t,{\alpha '}})}{\bar Z_{,{\alpha '}}}-\frac{\mathbb P_A(Z_t\bar Z_{t,{\beta '}})}{\bar Z_{,{\beta '}}}-\mathbb P_A(Z_t\bar Z_{t,{\alpha '}})\paren{\frac1 {\bar Z_{,{\alpha '}}}-\frac1{\bar Z_{,{\beta '}}}} \end{equation} we get \begin{equation}\label{2443} I=\bracket{\frac{\mathbb P_A(Z_t\bar Z_{t,{\alpha '}})}{\bar Z_{,{\alpha '}}}, \frac{iA_1}{Z_{,{\alpha '}}}; 1}-\mathbb P_A(Z_t\bar Z_{t,{\alpha '}})\bracket{\frac1 {\bar Z_{,{\alpha '}}}, \frac{iA_1}{Z_{,{\alpha '}}}; 1 }; \end{equation} from here we can readily conclude from Proposition~\ref{d33}. We now work on $II$. By \eqref{2440}, \begin{equation}\label{2444} II:= \bracket{Z_t, \frac{iA_1}{|Z_{,{\alpha '}}|^2}; \bar Z_{t,{\alpha '}}}=\bracket{Z_t, \frac{iA_1}{Z_{,{\alpha '}}}; \bar {D_{\alpha '} Z_{t}}}+\frac{iA_1}{Z_{,{\alpha '}}} \bracket{Z_t, \frac{1}{\bar Z_{,{\alpha '}}}; \bar Z_{t,{\alpha '}}}; \end{equation} the first term can be handled by Proposition~\ref{d33}. We focus on the second term.
By a \eqref{2442}-type identity, we have \begin{equation}\label{2445} \begin{aligned} &\frac{1}{Z_{,{\alpha '}}} \bracket{Z_t, \frac{1}{\bar Z_{,{\alpha '}}}; \bar Z_{t,{\alpha '}}}=\bracket{\frac{Z_t}{Z_{,{\alpha '}}}, \frac{1}{\bar Z_{,{\alpha '}}}; \bar Z_{t,{\alpha '}}} -\bracket{\frac{1}{Z_{,{\alpha '}}}, \frac{1}{\bar Z_{,{\alpha '}}}; Z_t\bar Z_{t,{\alpha '}}}\\&\qquad=\bracket{\mathbb P_A\paren{\frac{Z_t}{Z_{,{\alpha '}}}}, \frac{1}{\bar Z_{,{\alpha '}}}; \bar Z_{t,{\alpha '}}} -\bracket{\frac{1}{Z_{,{\alpha '}}}, \frac{1}{\bar Z_{,{\alpha '}}}; \mathbb P_A\paren{Z_t\bar Z_{t,{\alpha '}}}}\\&\qquad\qquad+\bracket{\mathbb P_H\paren{\frac{Z_t}{Z_{,{\alpha '}}}}, \frac{1}{\bar Z_{,{\alpha '}}}; \bar Z_{t,{\alpha '}}} -\bracket{\frac{1}{Z_{,{\alpha '}}}, \frac{1}{\bar Z_{,{\alpha '}}}; \mathbb P_H\paren{Z_t\bar Z_{t,{\alpha '}}}}=I_1-I_2+I_3-I_4. \end{aligned} \end{equation} The first two terms $I_1$, $I_2$ in \eqref{2445} can be handled by Propositions~\ref{d32} and \ref{d33}, because $\mathbb P_A \paren{\frac{Z_t}{Z_{,{\alpha '}}}}=\mathbb P_A b$. We need to manipulate the last two terms further. We begin with $I_4$: using the first equality in \eqref{2424} and then the identity $\mathbb P_H=-\mathbb P_A+I$, we get \begin{equation}\label{2446} \begin{aligned} &I_4:= \bracket{ \frac{1}{\bar Z_{,{\alpha '}}}, \frac{1}{Z_{,{\alpha '}}}; \mathbb P_H\paren{Z_t\bar Z_{t,{\alpha '}}}} =\bracket{\frac{1}{Z_{,{\alpha '}}}, \frac{ \mathbb P_H\paren{Z_t\bar Z_{t,{\alpha '}}}}{ \bar Z_{,{\alpha '}} } ;1 }\\&= - \bracket{\frac{1}{Z_{,{\alpha '}}}, \frac{ \mathbb P_A\paren{Z_t\bar Z_{t,{\alpha '}}}}{ \bar Z_{,{\alpha '}} } ;1 }+\bracket{\frac{1}{Z_{,{\alpha '}}}, \mathbb P_A\paren{Z_t\mathbb P_H\bar {D_{\alpha '} Z_t}} ;1 }+\bracket{\frac{1}{Z_{,{\alpha '}}}, Z_t\mathbb P_A\bar {D_{\alpha '} Z_t} ;1 }; \end{aligned} \end{equation} because of \eqref{2446-1}, the $\mathbb P_A$ can be inserted in the second term. Now the first two terms on the right hand side of \eqref{2446} can be handled by Propositions~\ref{d33} and \ref{dhhalf2}; we need to work further on the last term, $$I_{43}:=\bracket{\frac{1}{Z_{,{\alpha '}}}, Z_t\mathbb P_A\bar {D_{\alpha '} Z_t} ;1 }.$$ We consider it together with $I_3$. By \eqref{2424}, \begin{equation}\label{2447} I_3: =\bracket{ \frac{1}{\bar Z_{,{\alpha '}}}, \mathbb P_H\paren{\frac{Z_t}{Z_{,{\alpha '}}}}; \bar Z_{t,{\alpha '}}}=\bracket{\frac{Z_t}{Z_{,{\alpha '}}}, \mathbb P_A(\bar {D_{\alpha '} Z_t}); 1 }. \end{equation} Summing up $I_3$ and $-I_{43}$ gives \begin{equation}\label{2448} I_3-I_{43}=\frac1{\pi i} \int\frac{ \paren{\frac{\mathbb P_A(\bar {D_{\alpha '} Z_t})}{Z_{,{\beta '}}}- \frac{\mathbb P_A(\bar {D_{\beta '} Z_t})}{Z_{,{\alpha '}}} }(Z_t({\alpha '},t)-Z_t({\beta '},t)) } {({\alpha '}-{\beta '})^2}\,d{\beta '}= -\mathbb P_A(\bar{D_{\alpha '} Z_t})\bracket{Z_t, \frac1{Z_{,{\alpha '}}};1}, \end{equation} where we used \eqref{2446-1} in the second step. Through the steps in \eqref{2423}--\eqref{2448}, we have converted $\bracket{Z_t, \mathbb H\paren{\frac{iA_1}{|Z_{,{\alpha '}}|^2}} ; \bar Z_{t,{\alpha '}}}$ into a sum of terms that can be handled with the known results in \S\ref{prepare}-\S\ref{additional}. We can now conclude that \begin{equation}\label{2449} \nm{\bracket{Z_t, \mathbb H\paren{\frac{iA_1}{|Z_{,{\alpha '}}|^2}} ; \bar Z_{t,{\alpha '}}}-U_l\bracket{{\mathfrak{Z}}_t, \mathbb H\paren{\frac{i\tilde {A_1}}{|{\mathfrak{Z}}_{,{\alpha '}}|^2}} ; \bar {{\mathfrak{Z}}}_{t,{\alpha '}}}}_{L^2}\lesssim \mathcal F(t)^{1/2}.
\end{equation} Combining this with \eqref{2422}, we obtain \begin{equation}\label{2450} \nm{\bracket{Z_t, (\partial_t+b\partial_{\alpha '})b ; \bar Z_{t,{\alpha '}}}-U_l\bracket{{\mathfrak{Z}}_t, (\partial_t+\tilde b\partial_{\alpha '})\tilde b ; \bar {{\mathfrak{Z}}}_{t,{\alpha '}}}}_{L^2}\lesssim \mathcal F(t)^{1/2}. \end{equation} Now, combining all the steps in \S\ref{ddtat}, we get \begin{equation}\label{2451} \nm{(\partial_t+b\partial_{\alpha '})\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}}-(\partial_t+\tilde b\partial_{\alpha '})\paren{\frac{\tilde{\mathfrak{a}}_t}{\tilde{\mathfrak{a}}} \circ \tilde h^{-1} }\circ l}_{L^2}\lesssim \mathcal F(t)^{1/2}. \end{equation} Combining all the steps above, we obtain \eqref{denergy-inq}. This finishes the proof of Proposition~\ref{denergy-est} and of Theorem~\ref{unique}. \end{proof} \section{The proof of Theorem~\ref{th:local}}\label{proof2} For the data given in \S\ref{id}, we construct the solution of the Cauchy problem in the class where $\mathcal E<\infty$ via a sequence of approximating solutions obtained by mollifying the initial data with the Poisson kernel; we use Theorem~\ref{unique} and a compactness argument to prove the convergence of the sequence. To prove the uniqueness of the solutions we use Theorem~\ref{unique}. In what follows, we denote $z'=x'+iy'$, where $x', y'\in\mathbb R$. $K$ is the Poisson kernel as defined by \eqref{poisson}, and $f\ast g$ denotes the convolution in the spatial variable. For any function $\varphi$, $\varphi_\epsilon(x)=\frac1{\epsilon}\varphi(\frac x\epsilon)$ for $x\in \mathbb R$. \subsection{Some basic preparations}\label{analysis-2} Observe that in inequality \eqref{stability}, the stability is proved for the difference of the solutions in Lagrangian coordinates. We begin with some inequalities that will allow us to control the difference in Riemann mapping coordinates. We have \begin{lemma}\label{lemma4} Let $l:\mathbb R\to \mathbb R$ be a diffeomorphism with $l-{\alpha '}\in H^1(\mathbb R)$. Then 1. for any $f\in \dot H^1(\mathbb R)$, \begin{equation}\label{lemma4-inq} \nm{f\circ l-f}_{\dot H^{1/2}}\lesssim \|\partial_{\alpha '} f\|_{L^2}\|l-{\alpha '}\|_{L^2}^{1/4}\|l_{\alpha '}-1\|_{L^2}^{1/4} + C(\nm{(l^{-1})_{\alpha '}}_{L^\infty}, \nm{l_{\alpha '}}_{L^\infty})\|l_{\alpha '}-1\|_{L^2}\|\partial_{\alpha '} f\|_{L^2}. \end{equation} 2. for any function $b:\mathbb R\to\mathbb R$, with $b_{\alpha '}\in H^{1/2}(\mathbb R)\cap L^\infty(\mathbb R)$, \begin{equation}\label{lemma4-inq2} \|b_{\alpha '}\circ l-b_{\alpha '}\|_{L^2}^2\lesssim \|b_{\alpha '}\|_{L^2}\|b_{\alpha '}\|_{L^\infty}\|l_{\alpha '}\|_{L^\infty}^{1/2}\|l_{\alpha '}-1\|_{L^2}+\|b_{\alpha '}\|_{\dot H^{1/2}}\|b\circ l-b\|_{\dot H^{1/2}}+\|b_{\alpha '}\|_{L^\infty}^2\|l_{\alpha '}-1\|_{L^2}^2. \end{equation} \end{lemma} \begin{proof} We know \begin{equation}\label{3000} i \int\partial_{\alpha '}(f\circ l-f)\overline {(f\circ l-f)}\,d{\alpha '}=2\Re i\int\partial_{\alpha '} f\overline{(f-f\circ l)}\,d{\alpha '} \end{equation} so \begin{equation}\label{3001} \abs{i \int\partial_{\alpha '}(f\circ l-f)\overline {(f\circ l-f)}\,d{\alpha '}}\le 2\|\partial_{\alpha '} f\|_{L^2}\|f-f\circ l\|_{L^2}. \end{equation} Now \begin{equation}\label{3002} \int |f({\alpha '})-f(l({\alpha '}))|^2\,d{\alpha '}\le \|l-{\alpha '}\|^2_{L^\infty}\int |\mathcal M(\partial_{\alpha '} f)({\alpha '})|^2\,d{\alpha '}\lesssim \|l-{\alpha '}\|^2_{L^\infty}\|\partial_{\alpha '} f\|^2_{L^2}, \end{equation} where $\mathcal M$ is the Hardy-Littlewood maximal operator.
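Here we used the pointwise bound $|f({\alpha '})-f(l({\alpha '}))|=\abs{\int_{{\alpha '}}^{l({\alpha '})}\partial_{\beta '} f\,d{\beta '}}\lesssim |l({\alpha '})-{\alpha '}|\,\mathcal M(\partial_{\alpha '} f)({\alpha '})$, obtained by averaging $|\partial_{\alpha '} f|$ over the interval with endpoints ${\alpha '}$ and $l({\alpha '})$.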
Therefore by Sobolev embedding \eqref{eq:sobolev} and Lemma~\ref{hhalf1},
\begin{align}
\nm{f\circ l-f}_{\dot H^{1/2}}\lesssim \|\partial_{\alpha '} f\|_{L^2}\|l-{\alpha '}\|_{L^2}^{1/4}\|l_{\alpha '}-1\|_{L^2}^{1/4} +\nm{\mathbb P_A(f\circ l-f)}_{\dot H^{1/2}},\label{3006}\\
\nm{f\circ l-f}_{\dot H^{1/2}}\lesssim \|\partial_{\alpha '} f\|_{L^2}\|l-{\alpha '}\|_{L^2}^{1/4}\|l_{\alpha '}-1\|_{L^2}^{1/4} +\nm{\mathbb P_H(f\circ l-f)}_{\dot H^{1/2}}.\label{3007}
\end{align}
Now
$$2\mathbb P_A (f\circ l-f)=(2\mathbb P_A f)\circ l-2\mathbb P_A f+\mathcal Q_l(f\circ l).$$
Applying \eqref{3007} to $(\mathbb P_A f)\circ l-\mathbb P_A f$ and using \eqref{q1} gives
$$\|\mathbb P_A (f\circ l-f)\|_{\dot H^{1/2}}\lesssim \|\partial_{\alpha '} f\|_{L^2}\|l-{\alpha '}\|_{L^2}^{1/4}\|l_{\alpha '}-1\|_{L^2}^{1/4} + C(\nm{(l^{-1})_{\alpha '}}_{L^\infty}, \nm{l_{\alpha '}}_{L^\infty})\|l_{\alpha '}-1\|_{L^2}\|\partial_{\alpha '} f\|_{L^2}. $$
This proves \eqref{lemma4-inq}. To prove \eqref{lemma4-inq2}, we begin with
\begin{equation}\label{3019}
b_{\alpha '}\circ l-b_{\alpha '}=\partial_{\alpha '}(b\circ l-b)+b_{\alpha '}\circ l(1-l_{\alpha '});
\end{equation}
and by expanding the integral, we have
\begin{equation}\label{3018}
\|\partial_{\alpha '}(b\circ l-b)\|_{L^2}^2=\int (b_{\alpha '}\circ l)^2(l_{\alpha '}^2-l_{\alpha '})\,d{\alpha '}+2\int b_{\alpha '}\partial_{\alpha '}(b-b\circ l)\,d{\alpha '}.
\end{equation}
\eqref{lemma4-inq2} then follows directly from the triangle, Cauchy-Schwarz and H\"older inequalities.
\end{proof}
\begin{lemma}\label{lemma3}
For any $\varphi\in C^\infty(\mathbb R)$, with $\int\varphi(x)\,dx=1$ and $\int |x\varphi(x)|^2\,dx<\infty$, and for any $f\in \dot H^1(\mathbb R)$,
\begin{equation}\label{lemma3-inq}
\|\varphi_\epsilon\ast f-f\|_{L^\infty}\lesssim \epsilon^{1/2}\|\partial_x f\|_{L^2}\|x\varphi\|_{L^2}.
\end{equation}
\end{lemma}
The proof is straightforward by the Cauchy-Schwarz inequality and Hardy's inequality \eqref{eq:77}. We omit the details.

Let $Z, \frak Z$ be solutions of the system \eqref{interface-r}-\eqref{interface-holo}-\eqref{a1}-\eqref{b}, satisfying the assumptions of Theorem~\ref{unique}, and let $l$ be given by \eqref{def-l}. We know
$$(\partial_t+b\partial_{\alpha '})(l-{\alpha '})=U_{h^{-1}}(\tilde h_t-h_t)=\tilde b\circ l-b,$$
and $l({\alpha '}, 0)={\alpha '}$ for ${\alpha '}\in\mathbb R$. By Lemma~\ref{basic-e2},
\begin{equation}\label{3003}
\frac d{dt}\|l(t)-{\alpha '}\|^2_{L^2}\le 2\|\tilde b\circ l(t)-b(t)\|_{L^2}\|l(t)-{\alpha '}\|_{L^2}+\|b_{\alpha '}(t)\|_{L^\infty}\|l(t)-{\alpha '}\|_{L^2}^2,
\end{equation}
and from \eqref{b} and Sobolev embedding,
\begin{equation}\label{3010}
\|b(t)\|_{H^1(\mathbb R)}\lesssim \|Z_{t}(t)\|_{H^1(\mathbb R)}\paren{\nm{\frac1{Z_{,{\alpha '}}}(t)-1}_{H^1(\mathbb R)}+1}.
\end{equation}
Therefore by Gronwall's inequality, we have
\begin{equation}\label{3004}
\sup_{[0, T]}\|l(t)-{\alpha '}\|_{L^2(\mathbb R)}\le C,
\end{equation}
where $C$ is a constant depending on $\sup_{[0, T]}\paren{\|Z_t(t)\|_{L^2}+\|\frak Z_t(t)\|_{L^2}+\nm{\frac1{Z_{,{\alpha '}}}(t)-1}_{L^2} +\nm{\frac1{\frak Z_{,{\alpha '}}}(t)-1}_{L^2}}$ and $\sup_{[0, T]} (\mathcal E(t)+\tilde{\mathcal E}(t) )$.
Let
\begin{equation}\label{3008}
\begin{aligned}
\nm{(Z-{\mathfrak{Z}})(0)}:=& \|\paren{\bar Z_t-\bar {\mathfrak{Z}}_t}(0)\|_{\dot{H}^{1/2}}+\|\paren{\bar Z_{tt}-\bar {\mathfrak{Z}}_{tt}}(0)\|_{\dot{H}^{1/2}}+\nm{\paren{\frac1{ Z_{,{\alpha '}}}-\frac 1{ {\mathfrak{Z}}_{,{\alpha '}}}}(0)}_{\dot{H}^{1/2}}\\&+\|\paren{D_{\alpha '} Z_t-(\tilde D_{\alpha '} {\mathfrak{Z}}_t)}(0)\|_{L^2} +\nm{\paren{\frac1{ Z_{,{\alpha '}}}-\frac 1{ {\mathfrak{Z}}_{,{\alpha '}}}}(0)}_{L^\infty}.
\end{aligned}
\end{equation}
Applying \eqref{lemma4-inq} to $f=\bar {\frak Z}_t$, $\frac1{\frak Z_{,{\alpha '}}}-1$ and $\bar {\frak Z}_{tt}$ and using \eqref{stability} gives
\begin{equation}\label{3005}
\begin{aligned}
\sup_{[0, T]}&\paren{\|\paren{\bar Z_t-\bar {\mathfrak{Z}}_t}(t)\|_{\dot{H}^{1/2}(\mathbb R)}+\nm{\paren{\frac1{ Z_{,{\alpha '}}}-\frac 1{ {\mathfrak{Z}}_{,{\alpha '}}}}(t)}_{\dot{H}^{1/2}(\mathbb R)}+\|\paren{\bar Z_{tt}-\bar {\mathfrak{Z}}_{tt}}(t)\|_{\dot{H}^{1/2}(\mathbb R)}}\\&\le C(\nm{(Z-{\mathfrak{Z}}) (0)}+ \nm{(Z-{\mathfrak{Z}}) (0)} ^{1/4});
\end{aligned}
\end{equation}
and applying \eqref{lemma4-inq2}, \eqref{lemma4-inq} to $\tilde b$ and using \eqref{3010}, \eqref{2393}, \eqref{2395}, Appendix~\ref{quantities} and \eqref{stability} yields
\begin{equation}\label{3020}
\sup_{[0, T]}\|(b_{\alpha '}-\tilde b_{\alpha '})(t)\|_{L^2(\mathbb R)} \le C(\nm{(Z-{\mathfrak{Z}}) (0)}+ \nm{(Z-{\mathfrak{Z}}) (0)} ^{1/8}),
\end{equation}
where $C$ is a constant depending on $\sup_{[0, T]}\paren{\|Z_t(t)\|_{L^2}+\|\frak Z_t(t)\|_{L^2}+\nm{\frac1{Z_{,{\alpha '}}}(t)-1}_{L^2} +\nm{\frac1{\frak Z_{,{\alpha '}}}(t)-1}_{L^2}}$ and $\sup_{[0, T]} (\mathcal E(t)+\tilde{\mathcal E}(t) )$.

By Sobolev embedding \eqref{eq:sobolev},
\begin{equation}\label{3012}
\|l(\cdot,t)-{\alpha '}\|_{L^\infty(\mathbb R)}^2\lesssim \|l(\cdot,t)-{\alpha '}\|_{L^2(\mathbb R)}\|(l_{\alpha '}-1)(t)\|_{L^2(\mathbb R)},
\end{equation}
therefore by \eqref{3004}, \eqref{2345}-\eqref{2346}, and \eqref{stability},
\begin{equation}\label{3011}
\sup_{[0, T]}(\|h(t)-\tilde h(t)\|^2_{L^\infty(\mathbb R)}+\|h^{-1}(t)-\tilde h^{-1}(t)\|^2_{L^\infty(\mathbb R)})\le C \|(Z-{\mathfrak{Z}})(0)\|,
\end{equation}
where $C$ is a constant depending on $\sup_{[0, T]}\paren{\|Z_t(t)\|_{L^2}+\|\frak Z_t(t)\|_{L^2}+\nm{\frac1{Z_{,{\alpha '}}}(t)-1}_{L^2} +\nm{\frac1{\frak Z_{,{\alpha '}}}(t)-1}_{L^2}}$ and $\sup_{[0, T]} (\mathcal E(t)+\tilde{\mathcal E}(t) )$.

We also have, from Sobolev embedding \eqref{eq:sobolev}, \eqref{3010}, \eqref{3004}, \eqref{3012} and \eqref{stability}, that for $t\in [0, T]$,
\begin{equation}\label{3013}
\begin{aligned}
&\nm{(b-\tilde b)(t)}_{L^\infty(\mathbb R)}^2\lesssim \nm{(b-\tilde b\circ l)(t)}_{L^\infty(\mathbb R)}^2+\nm{(\tilde b\circ l-\tilde b)(t)}_{L^\infty(\mathbb R)}^2\\&\lesssim \nm{(b-\tilde b\circ l)(t)}_{L^2(\mathbb R)}\nm{\partial_{\alpha '}(b-\tilde b\circ l)(t)}_{L^2(\mathbb R)}+\nm{l(t)-{\alpha '}}_{L^\infty(\mathbb R)}^2\nm{\tilde b_{\alpha '}(t)}_{L^\infty(\mathbb R)}^2\\&\le C(\|(Z-{\mathfrak{Z}})(0)\|+\|(Z-{\mathfrak{Z}})(0)\|^2),
\end{aligned}
\end{equation}
where $C$ is a constant depending on $\sup_{[0, T]}\paren{\|Z_t(t)\|_{L^2}+\|\frak Z_t(t)\|_{L^2}+\nm{\frac1{Z_{,{\alpha '}}}(t)-1}_{L^2} +\nm{\frac1{\frak Z_{,{\alpha '}}}(t)-1}_{L^2}}$ and $\sup_{[0, T]} (\mathcal E(t)+\tilde{\mathcal E}(t) )$.

We have
\begin{lemma}\label{lemma5}
1. Assume that $f\in H^{1/2}(\mathbb R)$. Then
\begin{equation}\label{lemma5-inq1}
\|f\|_{L^4(\mathbb R)}^2\lesssim \|f\|_{L^2(\mathbb R)}\|f\|_{\dot H^{1/2}(\mathbb R)}.
\end{equation}

2. Let $\varphi\in C^\infty(\mathbb R)\cap L^q(\mathbb R)$, and $f\in L^p(\mathbb R)$, where $1\le p\le \infty$, $\frac1p+\frac1q=1$. For any $y'<0$, $x'\in\mathbb R$,
\begin{equation}\label{lemma5-inq2}
|\varphi_{y'}\ast f(x')| \le (-y')^{-1/p}\|\varphi\|_{L^q(\mathbb R)}\|f\|_{L^p(\mathbb R)}.
\end{equation}
\end{lemma}
\begin{proof}
By Theorem 1 on page 119 of \cite{s}, Plancherel's Theorem and the Cauchy-Schwarz inequality, we have, for any $f\in H^{1/2}(\mathbb R)$,
$$\|f\|_{L^4(\mathbb R)}^2\lesssim \|\partial_x^{1/4}f\|_{L^2(\mathbb R)}^2\lesssim \|f\|_{L^2(\mathbb R)}\|f\|_{\dot H^{1/2}(\mathbb R)}.$$
\eqref{lemma5-inq2} is a direct consequence of H\"older's inequality.
\end{proof}
In addition, we need the following compactness results in the proof of the existence of solutions.
\begin{lemma}\label{lemma1}
Let $\{f_n\}$ be a sequence of smooth functions on $\mathbb R\times [0, T]$. Let $1<p\le\infty$. Assume that there is a constant $C$, independent of $n$, such that
\begin{equation}
\sup_{[0, T]}\|f_n(t)\|_{L^\infty}+ \sup_{[0, T]}\|{\partial_x f_n}(t)\|_{L^p}+ \sup_{[0, T]}\|\partial_t f_n(t)\|_{L^\infty}\le C.
\end{equation}
Then there is a function $f$, continuous and bounded on $\mathbb R\times [0, T]$, and a subsequence $\{f_{n_j}\}$, such that $f_{n_j}\to f$ uniformly on compact subsets of $\mathbb R\times [0, T]$.
\end{lemma}
Lemma~\ref{lemma1} is an easy consequence of the Arzel\`a-Ascoli Theorem; we omit the proof.
\begin{lemma}\label{lemma2}
Assume that $f_n\to f$ uniformly on compact subsets of $\mathbb R\times [0, T]$, and assume there is a constant $C$, such that $\sup_{n}\|f_n\|_{L^\infty(\mathbb R\times [0, T])}\le C$. Then $K_{y'}\ast f_n$ converges uniformly to $K_{y'}\ast f$ on compact subsets of $\bar {\mathscr P}_-\times [0, T]$.
\end{lemma}
The proof follows easily by considering the convolution on the sets $|x'|<N$ and $|x'|\ge N$ separately. We omit the proof.
\begin{definition}
We write
\begin{equation}\label{unif-notation}
f_n\Rightarrow f\qquad \text{on }E
\end{equation}
if $f_n$ converges uniformly to $f$ on compact subsets of $E$.
\end{definition}

\subsection{The proof of Theorem~\ref{th:local}}

The uniqueness of the solution to the Cauchy problem is a direct consequence of \eqref{3005} and Definition~\ref{de}. In what follows we prove the existence of solutions to the Cauchy problem.

\subsubsection{The initial data}\label{ID}

Let $U(z', 0)$ be the initial fluid velocity in the Riemann mapping coordinate, and let $\Psi(z',0):{\mathscr P}_-\to\Omega(0)$ be the Riemann mapping as given in \S\ref{id}, with $Z(\alpha', 0)=\Psi(\alpha', 0)$ the initial interface. We note that, by the assumptions
$$
\begin{aligned}&\sup_{y'<0}\nm{\partial_{z'}\paren{\frac1{\Psi_{z'}(z',0)}}}_{L^2(\mathbb R, dx')}\le \mathcal E_1(0)<\infty,\quad \sup_{y'<0}\nm{\frac1{\Psi_{z'}(z',0)}-1}_{L^2(\mathbb R, dx')}\le c_0<\infty,\\& \sup_{y'<0}\|U_{z'}(z',0)\|_{L^2(\mathbb R, dx')}\le \mathcal E_1(0)<\infty\quad \text{and } \ \sup_{y'<0}\|U(z',0)\|_{L^2(\mathbb R, dx')}\le c_0<\infty,
\end{aligned}
$$
the functions $\frac1{\Psi_{z'}}(\cdot, 0)$ and $U(\cdot, 0)$ can be extended continuously onto $\bar {\mathscr P}_-$. So $Z(\cdot,0):= \Psi(\cdot+i0, 0)$ is continuously differentiable on the open set where $\frac1{\Psi_{z'}}(\alpha', 0)\ne 0$, and $\frac1{\Psi_{z'}}(\alpha', 0)=\frac1{Z_{,\alpha'}}(\alpha', 0)$ where $\frac1{\Psi_{z'}}(\alpha', 0)\ne 0$.
By $\frac1{\Psi_{z'}}(\cdot, 0)-1\in H^1(\mathbb R)$ and Sobolev embedding, there is $N>0$ sufficiently large, such that for $|\alpha'|\ge N$, $|\frac1{\Psi_{z'}}(\alpha', 0)-1| \le 1/2$, so $Z=Z(\cdot, 0)$ is continuously differentiable on $(-\infty, -N)\cup (N, \infty)$, with $|Z_{,\alpha'}(\alpha', 0)|\le 2$ for all $ |\alpha'|\ge N$. Moreover, $Z_{,\alpha'}(\cdot, 0)-1\in H^1\{(-\infty, -N)\cup (N, \infty)\}$.

\subsubsection{The mollified data and the approximate solutions}\label{mo-ap}

Let $\epsilon>0$. We take
\begin{equation}\label{m-id}
\begin{aligned}
Z^\epsilon(\alpha', 0)&=\Psi(\alpha'-\epsilon i, 0),\quad \bar Z^\epsilon_t(\alpha', 0)=U(\alpha'-\epsilon i, 0),\quad h^\epsilon(\alpha,0)=\alpha,\\& U^\epsilon(z',0)=U(z'-\epsilon i, 0),\quad \Psi^\epsilon(z',0)=\Psi(z'-\epsilon i,0).
\end{aligned}
\end{equation}
Notice that $U^\epsilon(\cdot, 0)$, $\Psi^\epsilon(\cdot, 0)$ are holomorphic on ${\mathscr P}_-$, $Z^\epsilon(0)$ satisfies \eqref{interface-holo} and $\bar Z^\epsilon_t(0)=\mathbb H \bar Z^\epsilon_t( 0)$. Let $Z_{tt}^\epsilon(0)$ be given by \eqref{interface-a1}. It is clear that $Z^\epsilon(0)$, $ Z_t^\epsilon(0)$ and $Z_{tt}^\epsilon(0)$ satisfy the assumption of Theorem~\ref{blow-up}. Let $Z^\epsilon(t):=Z^\epsilon(\cdot,t)$, $t\in [0, T_\epsilon^*)$, be the solution given by Theorem~\ref{blow-up}, with the maximal time of existence $T_\epsilon^*$, the diffeomorphism $h^\epsilon(t)=h^\epsilon(\cdot,t):\mathbb R\to\mathbb R$, the quantity $b^\epsilon:=h_t^\epsilon\circ (h^\epsilon)^{-1}$, and $z^\epsilon(\alpha,t)=Z^\epsilon(h^\epsilon(\alpha,t),t)$. We know $z_t^\epsilon(\alpha,t)=Z_t^\epsilon(h^\epsilon(\alpha,t),t)$. Let
$$U^\epsilon(x'+iy', t)=K_{y'}\ast \bar Z^\epsilon_t(x', t),\quad \Psi_{z'}^\epsilon(x'+iy', t) =K_{y'}\ast Z^\epsilon_{,\alpha'}(x',t),\quad \Psi^\epsilon(\cdot, t)$$
be the holomorphic functions on ${\mathscr P}_-$ with boundary values $\bar Z^\epsilon_t( t)$, $Z^\epsilon_{,\alpha'}(t)$ and $Z^\epsilon(t)$; we have
$$\frac1{\Psi_{z'}^\epsilon}(x'+iy', t) =K_{y'}\ast \frac1{Z^\epsilon_{,\alpha'}}(x',t)$$
by uniqueness.\footnote{By the maximum principle, $\big(K_{y'}\ast \frac1{Z^\epsilon_{,\alpha'}}\big)\big(K_{y'}\ast {Z^\epsilon_{,\alpha'}}\big)\equiv 1$ on ${\mathscr P}_-$. } We denote the energy functional $\mathcal E$ for $\paren{Z^\epsilon(t), \bar Z_t^\epsilon(t)}$ by $\mathcal E^\epsilon(t)$ and the energy functional $\mathcal E_1$ for $(U^\epsilon(t),\Psi^\epsilon(t))$ by $\mathcal E^\epsilon_1(t)$. It is clear that $\mathcal E^\epsilon_1(0)\le \mathcal E_1(0)$, and $\|Z^\epsilon_t(0)\|_{L^2}+\nm{\frac1{Z^\epsilon_{,{\alpha '}}(0)}-1}_{L^2}\le c_0$ for all $\epsilon>0$; and by the continuity of $\frac1{\Psi_{z'}}(\cdot, 0)$ on $\bar{\mathscr P}_-$, there is an $\epsilon_0>0$, such that for all $0<\epsilon\le\epsilon_0$, $|\frac1{Z^\epsilon_{,{\alpha '}}}(0,0)|^2\le |\frac1{Z_{,{\alpha '}}}(0,0)|^2+1$.
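The choice \eqref{m-id} simply evaluates the holomorphic extensions at depth $\epsilon$ below the boundary, which for bounded holomorphic functions on ${\mathscr P}_-$ is the same as convolving the boundary values with the Poisson kernel at height $\epsilon$. The following minimal numerical sketch (in Python) is only meant to make this, and the footnoted identity $\frac1{\Psi^\epsilon_{z'}}=K_{y'}\ast \frac1{Z^\epsilon_{,\alpha'}}$, concrete; it plays no role in the proofs. It assumes that the Poisson kernel \eqref{poisson} is the standard half-plane kernel $K_{y'}(x)=\frac{|y'|}{\pi(x^2+y'^2)}$, and the test function $F$ below is an arbitrary illustrative choice, not a quantity from this paper.
\begin{verbatim}
import numpy as np

# Sketch: for F holomorphic and bounded on the lower half-plane,
# F(x - i*eps) agrees with the convolution of its boundary values with
# the Poisson kernel K_{-eps}(x) = eps / (pi * (x^2 + eps^2)).
eps = 0.3
x = np.linspace(-100.0, 100.0, 20001)
dx = x[1] - x[0]

F = lambda z: 1.0 / (z - 1j)     # holomorphic, bounded for Im z < 0 (test choice)
f = F(x)                         # boundary values on the real line
shifted = F(x - 1j * eps)        # the Z^eps-type data: evaluation at depth eps

kernel = eps / (np.pi * (x**2 + eps**2))
mollified = np.convolve(f, kernel, mode="same") * dx   # Riemann-sum convolution

mid = slice(8000, 12001)         # compare away from the truncated tails
print(np.max(np.abs(mollified[mid] - shifted[mid])))   # small (truncation error only)
\end{verbatim}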
By Theorem~\ref{blow-up}, Theorem~\ref{prop:a priori} and Proposition~\ref{prop:energy-eq}, there exists $T_0>0$, depending only on $\mathcal E(0)=\mathcal E_1(0)+|\frac1{Z_{,{\alpha '}}}(0,0)|^2$,\footnote{By \eqref{domain-energy1}.} such that for all $0<\epsilon\le \epsilon_0$, $T^*_\epsilon> T_0$ and
\begin{equation}\label{eq:400}
\sup_{[0, T_0]}\paren{\mathcal E_1^\epsilon(t)+\abs{\frac1{Z^\epsilon_{,{\alpha '}}}(0,t)}^2}=\sup_{[0, T_0]}\mathcal E^\epsilon(t)\le M\paren{\mathcal E(0) }<\infty;
\end{equation}
and by \eqref{interface-a1}, \eqref{2109} and \eqref{2110},
\begin{equation}\label{eq:401}
\sup_{[0, T_0]}\paren{\|Z^\epsilon_t(t)\|_{L^2}+\|Z^\epsilon_{tt}(t)\|_{L^2}+\nm{\frac1{Z^\epsilon_{,\alpha'}(t)}-1}_{L^2}}\le c\paren{c_0, \mathcal E(0)},
\end{equation}
so there is a constant $C_0:=C(c_0, \mathcal E(0))>0$, such that
\begin{equation}\label{eq:402}
\sup_{[0,T_0]}\{\sup_{y'<0}\|U^\epsilon(\cdot+iy', t)\|_{L^2(\mathbb R)}+\sup_{y'<0}\nm{\frac1{\Psi^\epsilon_{z'}(\cdot+iy',t)}-1}_{L^2(\mathbb R)}\}<C_0<\infty.
\end{equation}

\subsubsection{Uniformly bounded quantities}\label{ubound}

Besides \eqref{3005}, \eqref{3011} and \eqref{3013}, we would like to apply the compactness results Lemma~\ref{lemma1} and Lemma~\ref{lemma2} to pass to the limit in some of the quantities. To this end we discuss the boundedness properties of these quantities. We begin with two inequalities. We have, from \eqref{eq:dza},
\begin{equation}\label{eq:411}
\nm{(\partial_t+b^\epsilon\partial_{\alpha '}) \frac1{Z^\epsilon_{,\alpha'}}(t)}_{L^\infty}\le \nm{\frac1{Z^\epsilon_{,\alpha'}}(t)}_{L^\infty}(\|b^\epsilon_{\alpha'}(t)\|_{L^\infty}+\|D_{\alpha'}Z^\epsilon_{t}(t)\|_{L^\infty})
\end{equation}
and by \eqref{eq:dztt},
\begin{equation}\label{eq:4400}
\|(\partial_t+b^\epsilon\partial_{\alpha '})Z^\epsilon_{tt}(t)\|_{L^\infty}\le \|Z^\epsilon_{tt}(t)+i\|_{L^\infty}\paren{\|D_{{\alpha '}}Z^\epsilon_t(t)\|_{L^\infty}+\nm{\frac{\frak a^\epsilon_t}{\frak a^\epsilon}\circ (h^{\epsilon})^{-1}(t)}_{L^\infty}}.
\end{equation}
Let $0<\epsilon\le\epsilon_0$, and let $M(\mathcal E(0))$, $c(c_0,\mathcal E(0))$, $C_0$ be the bounds in \eqref{eq:400}, \eqref{eq:401} and \eqref{eq:402}.
By Proposition~\ref{prop:energy-eq}, Sobolev embedding, Appendix~\ref{quantities} and \eqref{eq:411}, \eqref{3010}, the following quantities are uniformly bounded with bounds depending only on $M(\mathcal E(0))$, $c(c_0,\mathcal E(0))$, $C_0$: \begin{equation}\label{eq:404} \begin{aligned} &\sup_{[0, T_0]}\|Z^\epsilon_t(t)\|_{L^\infty}, \quad\sup_{[0, T_0]}\|Z^\epsilon_{t,\alpha'}(t)\|_{L^2}, \quad\sup_{[0, T_0]}\|Z^\epsilon_{tt}(t)\|_{L^\infty}, \quad\sup_{[0, T_0]}\|Z^\epsilon_{tt,\alpha'}(t)\|_{L^2},\\& \sup_{[0, T_0]}\nm{\frac1{Z^\epsilon_{,\alpha'}}(t)}_{L^\infty}, \quad\sup_{[0, T_0]}\nm{\partial_{\alpha'}\frac1{Z^\epsilon_{,\alpha'}}(t)}_{L^2}, \quad\sup_{[0, T_0]}\nm{(\partial_t+b^\epsilon\partial_{\alpha '})\frac1{Z^\epsilon_{,\alpha'}}(t)}_{L^\infty},\quad \sup_{[0, T_0]}\nm{b^\epsilon}_{L^\infty}; \end{aligned} \end{equation} and with a change of the variables and \eqref{2346}, \eqref{eq:4400} and Appendix~\ref{quantities}, \begin{equation}\label{eq:414} \begin{aligned} &\sup_{[0, T_0]}\|z^\epsilon_t(t)\|_{L^\infty}+ \sup_{[0, T_0]}\|z^\epsilon_{t\alpha}(t)\|_{L^2}+ \sup_{[0, T_0]}\|z^\epsilon_{tt}(t)\|_{L^\infty}\le C(c_0, \mathcal E(0)), \\& \sup_{[0, T_0]}\nm{\frac{h^\epsilon_{\alpha}}{z^\epsilon_{\alpha}}(t)}_{L^\infty}+ \sup_{[0, T_0]}\nm{\partial_{\alpha}(\frac {h^\epsilon_{\alpha}}{z^\epsilon_{\alpha}})(t)}_{L^2}+ \sup_{[0, T_0]}\nm{\partial_t \frac{h^\epsilon_\alpha}{z^\epsilon_{\alpha}}(t)}_{L^\infty}\le C(c_0, \mathcal E(0)),\\& \sup_{[0, T_0]}\|z^\epsilon_{tt}(t)\|_{L^\infty}+ \sup_{[0, T_0]}\|z^\epsilon_{tt\alpha}(t)\|_{L^2}+ \sup_{[0, T_0]}\|z^\epsilon_{ttt}(t)\|_{L^\infty}\le C(c_0, \mathcal E(0)). \end{aligned} \end{equation} Observe that $h^\epsilon(\alpha, t)-\alpha=\int_0^t h^\epsilon_t(\alpha, s)\,ds$, so \begin{equation}\label{eq:425} \sup_{\mathbb R\times [0, T_0]} |h^\epsilon(\alpha, t)-\alpha|\le T_0\sup_{[0, T_0]}\|h^\epsilon_t(t)\|_{L^\infty}\le T_0C(c_0,\mathcal E(0))<\infty. \end{equation} Furthermore by \eqref{2346} and Appendix~\ref{quantities}, there are $c_1, c_2>0$, depending only on $\mathcal E(0)$, such that \begin{equation}\label{eq:416} 0<c_1\le\frac{h^\epsilon(\alpha,t)-h^\epsilon(\beta,t)}{\alpha-\beta}\le c_2<\infty,\qquad \forall \alpha,\beta\in \mathbb R, \ t\in [0, T_0]. \end{equation} \subsubsection{Passing to the limit} It is easy to check by Lemma~\ref{lemma3} and \eqref{hhalf-1}, \eqref{hhalf42} that the sequence $(Z^\epsilon(0), \bar Z_t^\epsilon(0))$ converges in the norm $\|\cdot\|$ defined by \eqref{3008}, so by \eqref{3020}, \eqref{3011} and \eqref{3013}, there are functions $b$ and $h-\alpha$, continuous and bounded on $\mathbb R\times [0, T_0]$, with $h(\cdot, t):\mathbb R\to \mathbb R$ a homeomorphism for $t\in [0, T_0]$, $b_{\alpha '}\in L^\infty([0, T_0], L^2(\mathbb R))$, such that \begin{equation}\label{3015} \lim_{\epsilon\to 0} \paren{b^\epsilon,\, h^\epsilon,\, (h^\epsilon)^{-1}}=\paren{b,\, h, \, h^{-1}},\qquad \text{uniformly on } \mathbb R\times [0, T_0]; \end{equation} \begin{equation}\label{3015-1} \lim_{\epsilon\to 0} b^\epsilon_{\alpha '}=b_{\alpha '} \qquad \text{in } \ \ L^\infty([0, T_0], L^2(\mathbb R)); \end{equation} and \eqref{eq:416} yields \begin{equation}\label{eq:420} 0<c_1\le\frac{h(\alpha,t)-h(\beta,t)}{\alpha-\beta}\le c_2<\infty,\qquad \forall \alpha,\beta\in \mathbb R, \ t\in [0, T_0]. 
\end{equation} By Lemma~\ref{lemma1}, \eqref{eq:414} and \eqref{3005}, there are functions $w$, $u$, $q:=w_t$, continuous and bounded on $\mathbb R\times [0, T_0]$, such that \begin{equation}\label{eq:417} z^\epsilon_t\Rightarrow w,\quad \frac{h^\epsilon_{\alpha}}{z^\epsilon_\alpha}\Rightarrow u,\quad z^\epsilon_{tt}\Rightarrow q,\qquad \text{on } \mathbb R\times [0, T_0], \end{equation} as $ \epsilon\to 0$; this gives \begin{equation}\label{eq:419} \bar Z_t^\epsilon\Rightarrow w\circ h^{-1},\qquad \frac1{Z^\epsilon_{,\alpha'}}\Rightarrow u\circ h^{-1}, \quad\bar Z_{tt}^\epsilon\Rightarrow w_t\circ h^{-1},\qquad\text{on }\mathbb R\times [0, T_0] \end{equation} as $ \epsilon\to 0$. \eqref{3005} also gives that \begin{equation}\label{3016} \lim_{\epsilon\to 0} \paren{\bar Z_t^\epsilon,\, \frac1{Z^\epsilon_{,\alpha'}},\, \bar Z_{tt}^\epsilon}= \paren{w\circ h^{-1}, \, u\circ h^{-1},\, w_t\circ h^{-1}},\qquad\text{in } L^\infty([0, T_0], \dot H^{1/2}(\mathbb R)). \end{equation} Now \begin{equation}\label{eq:421} U^\epsilon(z',t)=K_{y'}\ast \bar Z_t^\epsilon,\qquad \frac1{\Psi^\epsilon_{z'}}(z',t)=K_{y'}\ast \frac1{Z^\epsilon_{,\alpha'}}. \end{equation} Let $U(z',t)= K_{y'}\ast (w\circ h^{-1})(x', t)$, $\Lambda(z',t)=K_{y'}\ast (u\circ h^{-1})(x',t)$. By Lemma~\ref{lemma2}, \begin{equation}\label{eq:422} U^\epsilon(z',t)\Rightarrow U(z',t),\qquad \frac1{\Psi^\epsilon_{z'}}(z',t)\Rightarrow \Lambda(z',t)\qquad\text{on }\bar {\mathscr P}_-\times [0, T_0]; \end{equation} as $ \epsilon\to 0$. Moreover $U(\cdot,t)$, $\Lambda(\cdot,t)$ are holomorphic on ${\mathscr P}_-$ for each $t\in [0, T_0]$, and continuous on $\bar {\mathscr P}_-\times [0, T]$. Applying Cauchy integral formula to the first limit in \eqref{eq:422} yields, as $ \epsilon\to 0$, \begin{equation}\label{eq:430} U^\epsilon_{z'}(z',t)\Rightarrow U_{z'}(z',t) \qquad\text{on } {\mathscr P}_-\times [0, T_0]. \end{equation} \subsubsection*{Step 1. The limit of $\Psi^\epsilon$}\label{step4.1} We consider the limit of $\Psi^{\epsilon}$, as $\epsilon\to 0$. Let $0<\epsilon\le\epsilon_0$. We know \begin{equation}\label{eq:423} \begin{aligned} z^\epsilon(\alpha,t)&=z^\epsilon(\alpha,0)+\int_0^t z_t^\epsilon(\alpha, s)\,ds\\& =\Psi(\alpha-\epsilon i, 0)+\int_0^t z_t^\epsilon(\alpha, s)\,ds, \end{aligned} \end{equation} therefore \begin{equation}\label{eq:424} \begin{aligned} Z^\epsilon(\alpha',t)-Z^\epsilon(\alpha',0)& =\Psi((h^\epsilon)^{-1}(\alpha',t)-\epsilon i, 0)-\Psi(\alpha'-\epsilon i, 0)\\&+\int_0^t z_t^\epsilon((h^\epsilon)^{-1}(\alpha',t) , s)\,ds. \end{aligned} \end{equation} Let \begin{equation}\label{eq:431} W^\epsilon(\alpha',t):=\Psi((h^\epsilon)^{-1}(\alpha',t)-\epsilon i, 0)-\Psi(\alpha'-\epsilon i, 0)+\int_0^t z_t^\epsilon((h^\epsilon)^{-1}(\alpha',t) , s)\,ds. \end{equation} Observe $Z^\epsilon(\alpha',t)-Z^\epsilon(\alpha',0)$ is the boundary value of the holomorphic function $\Psi^\epsilon(z', t)-\Psi^\epsilon(z', 0)$. By \eqref{eq:417} and \eqref{3015}, $\int_0^t z_t^\epsilon((h^\epsilon)^{-1}(\alpha',t), s)\,ds\to \int_0^t w(h^{-1}(\alpha',t), s)\,ds$ uniformly on compact subsets of $\mathbb R\times [0, T_0]$, and by \eqref{eq:414}, $\int_0^t z_t^\epsilon((h^\epsilon)^{-1}(\alpha',t), s)\,ds$ is continuous and uniformly bounded in $L^\infty(\mathbb R\times [0, T_0])$. 
By the assumptions that $\lim_{z'\to 0}\Psi_{z'}(z',0)=1$ and that $\Psi(\cdot, 0)$ is continuous on $\bar {\mathscr P}_-$, and by \eqref{eq:425}, \eqref{3015},
$$\Psi((h^\epsilon)^{-1}(\alpha',t)-\epsilon i, 0)-\Psi(\alpha'-\epsilon i, 0)$$
is continuous and uniformly bounded in $L^\infty(\mathbb R\times [0, T_0])$ for $0<\epsilon<1$, and converges uniformly on compact subsets of $\mathbb R\times [0, T_0]$ as $\epsilon\to 0$. This gives\footnote{Because $W^\epsilon(\cdot,t)$ and $\partial_{\alpha'}W^\epsilon(\cdot,t):=Z^\epsilon_{,\alpha'}(\alpha',t)-Z^\epsilon_{,\alpha'}(\alpha',0)$ are continuous and bounded on $\mathbb R$, $\Psi^\epsilon_{z'}(z',t)-\Psi^\epsilon_{z'}(z',0)=K_{y'}\ast (\partial_{\alpha'}W^\epsilon)(x',t)=\partial_{z'}K_{y'}\ast W^\epsilon(x',t)$. \eqref{eq:426} holds because both sides of \eqref{eq:426} have the same value on $\partial{\mathscr P}_-$.}
\begin{equation}\label{eq:426}
\Psi^\epsilon(z', t)-\Psi^\epsilon(z', 0)=K_{y'}\ast W^\epsilon(x',t)
\end{equation}
and by Lemma~\ref{lemma2}, $\Psi^\epsilon(z', t)-\Psi^\epsilon(z', 0)$ converges uniformly on compact subsets of $\bar {\mathscr P}_-\times [0, T_0]$ to a function that is holomorphic on ${\mathscr P}_-$ for every $t\in [0, T_0]$ and continuous on $\bar {\mathscr P}_-\times [0, T_0]$. Therefore there is a function $\Psi(\cdot,t)$, holomorphic on ${\mathscr P}_-$ for every $t\in [0, T_0]$ and continuous on $\bar {\mathscr P}_-\times [0, T_0]$, such that
\begin{equation}\label{eq:427}
\Psi^\epsilon(z',t)\Rightarrow \Psi(z',t)\qquad\text{on }\bar {\mathscr P}_-\times [0, T_0]
\end{equation}
as $\epsilon\to 0$; as a consequence of the Cauchy integral formula,
\begin{equation}\label{eq:428}
\Psi^\epsilon_{z'}(z',t)\Rightarrow \Psi_{z'}(z',t)\qquad\text{on } {\mathscr P}_-\times [0, T_0]
\end{equation}
as $\epsilon\to 0$. Combining with \eqref{eq:422}, we have $\Lambda(z',t)=\frac1{\Psi_{z'}(z',t)}$, so $\Psi_{z'}(z',t)\ne 0$ for all $(z',t)\in \bar{\mathscr P}_-\times [0, T_0]$ and
\begin{equation}\label{eq:429}
\frac1{\Psi^\epsilon_{z'}(z',t)}\Rightarrow \frac1{\Psi_{z'}(z',t)}\qquad\text{on }\bar {\mathscr P}_-\times [0, T_0]
\end{equation}
as $\epsilon\to 0$. Denote $Z(\alpha', t):=\Psi(\alpha', t)$, $\alpha'\in \mathbb R$, and $z(\alpha,t)=Z(h(\alpha,t),t)$. \eqref{eq:427} yields $Z^\epsilon(\alpha',t)\Rightarrow Z(\alpha',t)$; together with \eqref{3015}, this implies $z^\epsilon(\alpha,t)\Rightarrow z(\alpha,t)$ on $\mathbb R\times [0, T_0]$, as $\epsilon\to 0$. Furthermore by \eqref{eq:423},
$$z(\alpha, t)=z(\alpha,0)+\int_0^t w(\alpha,s)\,ds,$$
so $w=z_t$. We denote $Z_t=z_t\circ h^{-1}$.

\subsubsection*{Step 2. The limits of $\Psi^\epsilon_t$ and $U_t^\epsilon$}

Observe that by \eqref{eq:431}, for fixed $\epsilon>0$, $\partial_t W^\epsilon(\cdot, t)$ is a bounded function on $\mathbb R\times [0, T_0]$, so by \eqref{eq:426} and the dominated convergence Theorem, $\Psi^\epsilon_t=K_{y'}\ast \partial_t W^\epsilon$, hence $\Psi_t^\epsilon$ is bounded on ${\mathscr P}_-\times [0, T_0]$. Since for given $t\in [0, T_0]$ and $\epsilon>0$, $\frac{\Psi^\epsilon_t}{\Psi^\epsilon_{z'}}$ is bounded and holomorphic on ${\mathscr P}_-$, by \eqref{eq:271},
\begin{equation}\label{eq:432}
\frac{\Psi^\epsilon_t}{\Psi^\epsilon_{z'}}=K_{y'}\ast(\frac{Z^\epsilon_t}{Z^\epsilon_{,\alpha'}}-b^\epsilon).
\end{equation} Therefore by \eqref{3015}, \eqref{eq:419} and Lemma~\ref{lemma2}, as $\epsilon\to 0$, $\frac{\Psi^\epsilon_t}{\Psi^\epsilon_{z'}}$ converges uniformly on compact subsets of $\bar {\mathscr P}_-\times [0, T_0]$ to a function that is holomorphic on ${\mathscr P}_-$ for each $t\in [0, T_0]$ and continuous on $\bar {\mathscr P}_-\times [0, T_0]$. Hence we can conclude from \eqref{eq:427} and \eqref{eq:428} that $\Psi$ is continuously differentiable and \begin{equation}\label{eq:433} \Psi^\epsilon_t\Rightarrow \Psi_t\qquad \text{on }{\mathscr P}_-\times [0, T_0] \end{equation} as $\epsilon\to 0$. Now we consider the limit of $U^\epsilon_t$ as $\epsilon\to 0$. Since for fixed $\epsilon>0$, $\partial_t Z_t^\epsilon=Z_{tt}^\epsilon-b^\epsilon Z_{t,\alpha'}^\epsilon$ is in $L^\infty(\mathbb R\times [0, T_0])$, by \eqref{eq:421} and the dominated convergence Theorem, \begin{equation}\label{eq:434} U^\epsilon_t(z',t)=K_{y'}\ast \partial_t \bar Z_t^\epsilon=K_{y'}\ast (\bar Z_{tt}^\epsilon-b^\epsilon \bar Z_{t,\alpha'}^\epsilon). \end{equation} We rewrite \begin{equation}\label{3017} K_{y'}\ast (\bar Z_{tt}^\epsilon-b^\epsilon \bar Z_{t,\alpha'}^\epsilon)=K_{y'}\ast \bar Z_{tt}^\epsilon-(\partial_{x'} K_{y'})\ast (b^\epsilon \bar Z_{t}^\epsilon)+ K_{y'}\ast (b^\epsilon_{\alpha '} \bar Z_{t}^\epsilon). \end{equation} Now we apply \eqref{3015}, \eqref{3015-1}, \eqref{3016} and Lemma~\ref{lemma5} to each term on the right hand side of \eqref{3017}. We can conclude that $U$ is continuously differentiable with respect to $t$, and \begin{equation}\label{eq:435} U^\epsilon_t \Rightarrow U_t\qquad \text{on }{\mathscr P}_-\times [0, T_0] \end{equation} as $ \epsilon\to 0$. \subsubsection*{Step 3. The limit of $\mathfrak P^\epsilon$} By the calculation in \S\ref{general-soln}, we know there is a real valued function $\frak P^\epsilon$, such that \begin{equation}\label{eq:437} \Psi^\epsilon_{z'} U^\epsilon_t- {\Psi^\epsilon_t}U^\epsilon_{z'}+{\bar U^\epsilon}U^\epsilon_{z'} -i\Psi^\epsilon_{z'}=-(\partial_{x'}-i\partial_{y'})\frak P^\epsilon,\qquad\text{in }{\mathscr P}_-; \end{equation} and \begin{equation}\label{eq:438} \frak P^\epsilon=constant,\qquad \text{on }\partial {\mathscr P}_-. \end{equation} Without loss of generality we take the $constant=0$. We now explore a few other properties of $\frak P^\epsilon$. Moving ${\bar U^\epsilon}U^\epsilon_{z'}=\partial_{z'}({\bar U^\epsilon}U^\epsilon)$ to the right of \eqref{eq:437} gives \begin{equation}\label{eq:440} \Psi^\epsilon_{z'} U^\epsilon_t- {\Psi^\epsilon_t}U^\epsilon_{z'} -i\Psi^\epsilon_{z'}=-(\partial_{x'}-i\partial_{y'})(\frak P^\epsilon+ \frac12 |U^\epsilon|^2),\qquad\text{in }{\mathscr P}_-; \end{equation} Applying $(\partial_{x'}+i\partial_{y'})=2\bar \partial_{z'}$ to \eqref{eq:440} yields \begin{equation}\label{eq:439} -\Delta (\frak P^\epsilon+ \frac12 |U^\epsilon|^2) =0,\qquad\text{in } {\mathscr P}_-. \end{equation} So $\frak P^\epsilon+ \frac12 |U^\epsilon|^2$ is a harmonic function on ${\mathscr P}_-$ with boundary value $\frac12 |\bar Z_t^\epsilon|^2$. On the other hand, it is easy to check that $\lim_{y'\to-\infty} (\Psi^\epsilon_{z'} U^\epsilon_t- {\Psi^\epsilon_t}U^\epsilon_{z'} -i\Psi^\epsilon_{z'})=-i$. Therefore \begin{equation}\label{eq:441} \frak P^\epsilon(z',t)=- \frac12 |U^\epsilon(z',t)|^2-y + \frac12 K_{y'}\ast (|\bar Z_t^\epsilon|^2)(x',t). 
\end{equation}
By \eqref{eq:422}, \eqref{eq:419} and Lemma~\ref{lemma2},
\begin{equation}\label{eq:442}
\frak P^\epsilon(z',t)\Rightarrow - \frac12 |U(z',t)|^2-y + \frac12 K_{y'}\ast (|\bar Z_t|^2)(x',t),\qquad \text{on }\bar {\mathscr P}_-\times [0, T_0]
\end{equation}
as $\epsilon\to 0$. We write
$$\frak P:=- \frac12 |U(z',t)|^2-y + \frac12 K_{y'}\ast (|\bar Z_t|^2)(x',t).$$
We have that $\frak P$ is continuous on $\bar {\mathscr P}_-\times [0, T_0]$ with $\frak P \in C([0, T_0], C^\infty({\mathscr P}_-))$, and
\begin{equation}\label{eq:443}
\frak P=0,\qquad \text{on }\partial {\mathscr P}_-.
\end{equation}
Moreover, since $K_{y'}\ast (|\bar Z_t^\epsilon|^2)(x',t)$ is harmonic on ${\mathscr P}_-$, by the interior derivative estimates for harmonic functions and by \eqref{eq:422},
\begin{equation}\label{eq:4450}
(\partial_{x'}-i\partial_{y'})\frak P^\epsilon\Rightarrow (\partial_{x'}-i\partial_{y'})\frak P\qquad\text{on }{\mathscr P}_-\times [0, T_0]
\end{equation}
as $\epsilon\to 0$. By \eqref{eq:441} and an argument similar to that in \eqref{eq:434}-\eqref{eq:435}, we have that $\frak P$ is continuously differentiable with respect to $t$ and
\begin{equation}\label{eq:4451}
\partial_t \frak P^\epsilon\Rightarrow \partial_t\frak P\qquad\text{on }{\mathscr P}_-\times [0, T_0]
\end{equation}
as $\epsilon\to 0$.

\subsubsection*{Step 4. Conclusion}

We now sum up Steps 1-3. We have shown that there are functions $\Psi(\cdot, t)$ and $U(\cdot, t)$, holomorphic on ${\mathscr P}_-$ for each fixed $t\in [0, T_0]$, continuous on $\bar {\mathscr P}_-\times [0, T_0]$, and continuously differentiable on ${\mathscr P}_-\times [0, T_0]$, with $ \frac1{\Psi_{z'}}$ continuous on $\bar {\mathscr P}_-\times [0, T_0]$, such that $\Psi^\epsilon \to \Psi$, $\frac1{\Psi^\epsilon_{z'}} \to \frac1{\Psi_{z'}}$, $ U^\epsilon\to U$ uniformly on compact subsets of $\bar {\mathscr P}_-\times [0, T_0]$, and $\Psi^\epsilon_t \to \Psi_t$, $\Psi^\epsilon_{z'}\to \Psi_{z'}$, $U^\epsilon_{z'}\to U_{z'}$ and $U^\epsilon_t\to U_t$ uniformly on compact subsets of ${\mathscr P}_-\times [0, T_0]$, as $\epsilon\to 0$. We have also shown that there is $\frak P$, continuous on $\bar {\mathscr P}_-\times [0, T_0]$ with $\frak P=0$ on $\partial {\mathscr P}_-$, and continuously differentiable on ${\mathscr P}_-\times [0, T_0]$, such that $\frak P^\epsilon\to \frak P$ uniformly on compact subsets of $\bar{\mathscr P}_-\times [0, T_0]$, and $(\partial_{x'}-i\partial_{y'})\frak P^\epsilon\to (\partial_{x'}-i\partial_{y'})\frak P$ and $\partial_t\frak P^\epsilon\to \partial_t \frak P$ uniformly on compact subsets of $ {\mathscr P}_-\times [0, T_0]$, as $\epsilon\to 0$. Letting $\epsilon\to 0$ in equation \eqref{eq:437}, we obtain
\begin{equation}\label{eq:444}
\Psi_{z'} U_t- {\Psi_t}U_{z'}+{\bar U}U_{z'} -i\Psi_{z'}=-(\partial_{x'}-i\partial_{y'})\frak P,\qquad\text{on }{\mathscr P}_-\times [0, T_0].
\end{equation}
This shows that $(U, \Psi, \frak P)$ is a solution of the Cauchy problem for the system \eqref{eq:273}-\eqref{eq:274}-\eqref{eq:275} in the sense of Definition~\ref{de}. Furthermore, because of \eqref{eq:400} and \eqref{eq:402}, letting $\epsilon\to 0$ gives
\begin{equation}
\sup_{[0, T_0]}\mathcal E(t)\le M(\mathcal E(0))<\infty
\end{equation}
and
\begin{equation}
\sup_{[0,T_0]}\{\sup_{y'<0}\|U(x'+iy', t)\|_{L^2(\mathbb R, dx')}+\sup_{y'<0}\|\frac1{\Psi_{z'}(x'+iy',t)}-1\|_{L^2(\mathbb R,dx')}\}<C_0<\infty.
\end{equation} By the argument at the end of \S\ref{general-soln}, if $\Sigma(t):=\{Z=\Psi({\alpha '},t)\,|\, {\alpha '}\in\mathbb R\}$ is a Jordan curve, then $\Psi(\cdot, t):{\mathscr P}_-\to \Omega(t)$, where $\Omega(t)$ is the domain bounded from the above by $\Sigma(t)$, is invertible; and the solution $(U,\Psi,\frak P)$ gives rise to a solution $(\bar{\bf v}, P):= (U\circ \Psi^{-1}, \frak P\circ \Psi^{-1})$ of the water wave equation \eqref{euler}. This finishes the proof for part 1 of Theorem~\ref{th:local}. \subsection{The chord-arc interfaces} Now assume at time $t=0$, the interface $Z=\Psi(\alpha',0):=Z(\alpha',0)$, $\alpha'\in\mathbb R$ is chord-arc, that is, there is $0<\delta<1$, such that $$\delta \int_{\alpha'}^{\beta'} |Z_{,\alpha'}(\gamma,0)|\,d\gamma\le |Z(\alpha', 0)-Z(\beta', 0)|\le \int_{\alpha'}^{\beta'} |Z_{,\alpha'}(\gamma,0)|\,d\gamma,\quad \forall -\infty<\alpha'< \beta'<\infty.$$ We want to show there is $T_1>0$, depending only on $\mathcal E(0)$, such that for $t\in [0, \min\{T_0, \frac\delta{T_1}\}]$, the interface $Z=Z(\alpha',t):=\Psi(\alpha',t)$ remains chord-arc. We begin with \begin{equation}\label{eq:446} - z^\epsilon(\alpha,t)+z^\epsilon(\beta,t)+z^\epsilon(\alpha,0)-z^\epsilon(\beta,0)=\int_0^t\int_{\alpha}^\beta z^\epsilon_{t\alpha}(\gamma,s)\,d\gamma\,ds \end{equation} for $\alpha<\beta$. Because \begin{equation}\label{eq:447} \frac d{dt} |z^\epsilon_{\alpha}|^2=2|z^\epsilon_{\alpha}|^2 \Re D_{\alpha}z^\epsilon_t, \end{equation} by Gronwall's inequality, for $t\in [0, T_0]$, \begin{equation}\label{eq:448} |z^\epsilon_{\alpha}(\alpha,t)|^2\le |z^\epsilon_{\alpha}(\alpha,0)|^2 e^{2\int_0^t |D_{\alpha}z^\epsilon_t(\alpha,\tau)|\,d\tau }; \end{equation} so \begin{equation}\label{eq:449} |z^\epsilon_{t\alpha}(\alpha,t)|\le |z^\epsilon_{\alpha}(\alpha,0)| |D_{\alpha}z^\epsilon_t(\alpha,t)|e^{\int_0^t |D_{\alpha}z^\epsilon_t(\alpha,\tau)|\,d\tau }; \end{equation} by Appendix~\ref{quantities}, \eqref{eq:400} and Proposition~\ref{prop:energy-eq}, \begin{equation}\label{eq:450} \sup_{[0, T_0]} |z^\epsilon_{t\alpha}(\alpha,t)|\le |z^\epsilon_{\alpha}(\alpha,0)| C(\mathcal E(0)); \end{equation} therefore for $t\in [0, T_0]$, \begin{equation}\label{eq:451} \int_0^{t}\int_{\alpha}^\beta |z^\epsilon_{t\alpha}(\gamma,s)|\,d\gamma\,ds\le t C(\mathcal E(0))\int_{\alpha}^\beta |z^\epsilon_{\alpha}(\gamma,0)| \,d\gamma. \end{equation} Now $z^\epsilon(\alpha,0)=Z^\epsilon(\alpha,0)=\Psi(\alpha-\epsilon i, 0)$. Because $Z_{,\alpha'}(\cdot,0)\in L^1_{loc}(\mathbb R)$, and $Z_{,\alpha'}(\cdot,0)-1\in H^1(\mathbb R\setminus [-N, N])$ for some large $N$, \begin{equation}\label{eq:452} \overline{\lim_{\epsilon\to 0}}\int_{\alpha}^\beta |\Psi_{z'}(\gamma-\epsilon i,0)|\,d\gamma\le \int_{\alpha}^\beta |Z_{,\alpha'}(\gamma, 0)|\,d\gamma. \end{equation} Let $\epsilon\to 0$ in \eqref{eq:446}. We get, for $t\in [0, T_0]$, \begin{equation}\label{eq:453} | |z(\alpha,t)-z(\beta,t)| -|Z(\alpha,0)-Z(\beta,0)||\le tC(\mathcal E_1(0))\int_{\alpha}^\beta |Z_{,\alpha'}(\gamma, 0)|\,d\gamma, \end{equation} hence for all $\alpha<\beta$ and $0\le t\le \min\{T_0, \frac{\delta}{2C(\mathcal E(0))}\}$, \begin{equation}\label{eq:454} \frac12\delta \int_\alpha^\beta |Z_{,\alpha'}(\gamma,0)|\,d\gamma\le |z(\alpha,t)-z(\beta,t)|\le 2 \int_\alpha^\beta |Z_{,\alpha'}(\gamma,0)|\,d\gamma. 
\end{equation}
This shows that for $0\le t\le \min\{T_0, \frac{\delta}{2C(\mathcal E(0))}\}$, $z=z(\cdot,t)$ is absolutely continuous on compact intervals of $\mathbb R$, with $z_{\alpha}(\cdot,t)\in L_{loc}^1(\mathbb R)$, and is chord-arc. So $\Sigma(t)=\{z(\alpha,t) \ | \ \alpha\in \mathbb R\}$ is a Jordan curve. This finishes the proof of Theorem~\ref{th:local}.

\begin{appendix}

\section{Basic analysis preparations}\label{ineq}

We present in this section some basic analysis results that will be used in this paper. First we have, as a consequence of the fact that the product of holomorphic functions is holomorphic, the following identity.
\begin{proposition}\label{prop:comm-hilbe}
Assume that $f,\ g \in L^2(\mathbb R)$. Assume either both $f$, $g$ are holomorphic: $f=\mathbb H f$, $g=\mathbb H g$, or both are anti-holomorphic: $f=-\mathbb H f$, $g=-\mathbb H g$. Then
\begin{equation}\label{comm-hilbe}
[f, \mathbb H]g=0.
\end{equation}
\end{proposition}
For a function $f :\mathbb R\to\mathbb C$ in $\dot H^{1/2}(\mathbb R)$, we define
\begin{equation}\label{def-hhalf}
\|f\|_{\dot H^{1/2}}^2=\|f\|_{\dot H^{1/2}(\mathbb R)}^2:= \int i\mathbb H \partial_x f(x) \bar f(x)\,dx=\frac1{2\pi}\iint\frac{|f(x)-f(y)|^2}{(x-y)^2}\,dx\,dy.
\end{equation}
We have the following results on $\dot H^{1/2}$ norms and $\dot H^{1/2}$ functions.
\begin{lemma}\label{hhalf1}
For any function $f\in \dot H^{1/2}(\mathbb R)$,
\begin{align}
\nm{f}_{\dot H^{1/2}}^2&=\nm{\mathbb P_H f}_{\dot H^{1/2}}^2+\nm{\mathbb P_A f}_{\dot H^{1/2}}^2;\label{hhalfp}\\
\int i\partial_{\alpha '} f \, \bar f\,d{\alpha '}&=\nm{\mathbb P_H f}_{\dot H^{1/2}}^2-\nm{\mathbb P_A f}_{\dot H^{1/2}}^2.\label{hhalfn}
\end{align}
\end{lemma}
\begin{proof}
Lemma~\ref{hhalf1} is an easy consequence of the decomposition $f=\mathbb P_H f+\mathbb P_A f$, the definition \eqref{def-hhalf} and the Cauchy integral Theorem. We omit the details.
\end{proof}
\begin{proposition}\label{prop:Hhalf}
Let $f,\ g\in C^1(\mathbb R)$. Then
\begin{align}
\nm{fg}_{\dot H^{1/2}}\lesssim \|f\|_{L^\infty}\|g\|_{\dot H^{1/2}}+\|g\|_{L^\infty}\|f\|_{\dot H^{1/2}};\label{hhalf-1}\\
\|g\|_{\dot H^{1/2}}\lesssim \|f^{-1}\|_{L^\infty}(\|fg\|_{\dot H^{1/2}}+\|f'\|_{L^2}\|g\|_{L^2}).\label{Hhalf}
\end{align}
\end{proposition}
The proof is straightforward from the definition of $\dot H^{1/2}$ and Hardy's inequality. We omit the details.

We next present the basic estimates we will rely on in this paper. We start with the Sobolev inequality.
\begin{proposition}[Sobolev inequality]\label{sobolev}
Let $f\in C^1_0(\mathbb R)$. Then
\begin{equation}\label{eq:sobolev}
\|f\|_{L^\infty}^2\le 2\|f\|_{L^2}\|f'\|_{L^2}.
\end{equation}
\end{proposition}
\begin{proposition}[Hardy's inequalities] \label{hardy-inequality}
Let $f \in C^1(\mathbb R)$, with $f' \in L^2(\mathbb R)$. Then there exists $C > 0$ independent of $f$ such that for any $x \in \mathbb R$,
\begin{equation} \label{eq:77}
\abs{\int \frac{(f(x) - f(y))^2}{(x-y)^2} dy} \le C \nm{f'}_{L^2}^2;
\end{equation}
and
\begin{equation} \label{eq:771}
\iint \frac{|f(x) - f(y)|^4}{|x-y|^4} \,dx dy \le C \nm{f'}_{L^2}^4.
\end{equation}
\end{proposition}
Let $ H\in C^1(\mathbb R; \mathbb R^d)$, $A_i\in C^1(\mathbb R)$, $i=1,\dots, m$, and $F\in C^\infty(\mathbb R)$. Define
\begin{equation}\label{3.15}
C_1(A_1,\dots, A_m, f)(x)=\text{pv.}\int F\paren{\frac{H(x)-H(y)}{x-y}} \frac{\Pi_{i=1}^m(A_i(x)-A_i(y))}{(x-y)^{m+1}}f(y)\,dy.
\end{equation} \begin{proposition}\label{B1} There exist constants $c_1=c_1(F, \|H'\|_{L^\infty})$, $c_2=c_2(F, \|H'\|_{L^\infty})$, such that 1. For any $f\in L^2,\ A_i'\in L^\infty, \ 1\le i\le m, $ \begin{equation}\label{3.16} \|C_1(A_1,\dots, A_m, f)\|_{L^2}\le c_1\|A_1'\|_{L^\infty}\dots\|A_m'\|_{L^\infty}\|f\|_{L^2}. \end{equation} 2. For any $ f\in L^\infty, \ A_i'\in L^\infty, \ 2\le i\le m,\ A_1'\in L^2$, \begin{equation}\label{3.17} \|C_1(A_1,\dots, A_m, f)\|_{L^2}\le c_2\|A_1'\|_{L^2}\|A'_2\|_{L^\infty}\dots\|A_m'\|_{L^\infty}\|f\|_{L^\infty}. \end{equation} \end{proposition} \eqref{3.16} is a result of Coifman, McIntosh and Meyer \cite{cmm}. \eqref{3.17} is a consequence of the Tb Theorem, a proof is given in \cite{wu3}. Let $H$, $A_i$ $F$ satisfy the same assumptions as in \eqref{3.15}. Define \begin{equation}\label{3.19} C_2(A, f)(x)=\int F\paren{\frac{H(x)-H(y)}{x-y}}\frac{\Pi_{i=1}^m(A_i(x)-A_i(y))}{(x-y)^m}\partial_y f(y)\,dy. \end{equation} We have the following inequalities. \begin{proposition}\label{B2} There exist constants $c_3$, $c_4$ and $c_5$, depending on $F$ and $\|H'\|_{L^\infty}$, such that 1. For any $f\in L^2,\ A_i'\in L^\infty, \ 1\le i\le m, $ \begin{equation}\label{3.20} \|C_2(A, f)\|_{L^2}\le c_3\|A_1'\|_{L^\infty}\dots\|A_m'\|_{L^\infty}\|f\|_{L^2}. \end{equation} 2. For any $ f\in L^\infty, \ A_i'\in L^\infty, \ 2\le i\le m,\ A_1'\in L^2$, \begin{equation}\label{3.21} \|C_2(A, f)\|_{L^2}\le c_4\|A_1'\|_{L^2}\|A'_2\|_{L^\infty}\dots\|A_m'\|_{L^\infty}\|f\|_{L^\infty}.\end{equation} 3. For any $f'\in L^2, \ A_1\in L^\infty,\ \ A_i'\in L^\infty, \ 2\le i\le m, $ \begin{equation}\label{3.22} \|C_2(A, f)\|_{L^2}\le c_5\|A_1\|_{L^\infty}\|A'_2\|_{L^\infty}\dots\|A_m'\|_{L^\infty}\|f'\|_{L^2}.\end{equation} \end{proposition} Using integration by parts, the operator $C_2(A, f)$ can be easily converted into a sum of operators of the form $C_1(A,f)$. \eqref{3.20} and \eqref{3.21} follow from \eqref{3.16} and \eqref{3.17}. To get \eqref{3.22}, we rewrite $C_2(A,f)$ as the difference of the two terms $A_1C_1(A_2,\dots, A_m, f')$ and $C_1(A_2,\dots, A_m, A_1f')$ and apply \eqref{3.16} to each term. \begin{proposition}\label{prop:half-dir} There exists a constant $C > 0$ such that for any $f, g, m$ smooth and decays fast at infinity, \begin{align} &\nm{[f,\mathbb{H}] g}_{L^2} \le C \nm{f}_{\dot{H}^{1/2}}\nm{g}_{L^2};\label{eq:b10}\\& \nm{[f,\mathbb{H}] g}_{L^\infty} \le C \nm{f'}_{L^2} \nm{g}_{L^2}; \label{eq:b13}\\& \nm{[f,\mathbb{H}] \partial_{\alpha '} g}_{L^2} \le C \nm{f'}_{L^2} \nm{g}_{\dot{H}^{1/2}};\label{eq:b11}\\& \nm{[f, m; \partial_{\alpha '} g]}_{L^2}\le C \nm{f'}_{L^2} \nm{m'}_{L^\infty}\nm{g}_{\dot{H}^{1/2}}.\label{eq:b111} \end{align} \end{proposition} Here $[f,g;h]$ is as given in \eqref{eq:comm}. \eqref{eq:b10} is straightforward by Cauchy-Schwarz inequality and the definition of $\dot H^{1/2}$. \eqref{eq:b13} is straightforward from Cauchy-Schwarz inequality and Hardy's inequality \eqref{eq:77}. \eqref{eq:b11} and \eqref{eq:b111} follow from integration by parts, then Cauchy-Schwarz inequality, Hardy's inequality \eqref{eq:77}, and the definition of $\dot H^{1/2}$. 
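As a quick numerical sanity check of the seminorm \eqref{def-hhalf} in which the above estimates are phrased (a sketch only, not used anywhere in the proofs), the following Python snippet compares the double-integral expression in \eqref{def-hhalf} with the Fourier-side formula $\frac1{2\pi}\int|\xi||\hat f(\xi)|^2\,d\xi$, which we take here as a known equivalent form; for the illustrative test function $f(x)=e^{-x^2}$ both values equal $1$.
\begin{verbatim}
import numpy as np

# Compare the two expressions for ||f||_{H^{1/2}}^2 for f(x) = exp(-x^2);
# the exact value is 1.
N, L = 2501, 25.0
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
f = np.exp(-x**2)

# Fourier side: (1/2pi) * int |xi| |fhat(xi)|^2 dxi
xi = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
fhat = dx * np.fft.fft(f)            # approximates the continuous transform (up to a phase)
dxi = 2.0 * np.pi / (N * dx)
fourier_side = np.sum(np.abs(xi) * np.abs(fhat)**2) * dxi / (2.0 * np.pi)

# Double-integral side: (1/2pi) * iint |f(x)-f(y)|^2 / (x-y)^2 dx dy
diff2 = (f[:, None] - f[None, :])**2
dist2 = (x[:, None] - x[None, :])**2
np.fill_diagonal(dist2, np.inf)      # the diagonal cells contribute nothing
integral_side = np.sum(diff2 / dist2) * dx * dx / (2.0 * np.pi)

print(fourier_side, integral_side)   # both close to 1
\end{verbatim}
The second value comes out slightly below $1$ because the $y$-integration is truncated to $[-L,L]$ and the diagonal grid cells are dropped; refining the grid and enlarging $L$ reduces the gap.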
\begin{proposition} There exists a constant $C > 0$ such that for any $f, \ g, \ h$, smooth and decay fast at spatial infinity, \begin{align} \label{eq:b12} \nm{[f,g;h]}_{L^2} & \le C \nm{f'}_{L^2} \nm{g'}_{L^2} \nm{h}_{L^2};\\ \label{eq:b15} \nm{[f,g;h]}_{L^\infty}&\le C \nm{f'}_{L^2} \nm{g'}_{L^\infty} \nm{h}_{L^2};\\ \label{eq:b16} \nm{[f,g;h]}_{L^\infty}&\le C \nm{f'}_{L^2} \nm{g'}_{L^2} \nm{h}_{L^\infty}. \end{align} \end{proposition} \eqref{eq:b12} follows directly from Cauchy-Schwarz inequality, Hardy's inequality \eqref{eq:77} and Fubini Theorem; \eqref{eq:b15} follows from Cauchy-Schwarz inequality, Hardy's inequality \eqref{eq:77} and the mean value Theorem; \eqref{eq:b16} follows from Cauchy-Schwarz inequality and Hardy's inequality \eqref{eq:77}. \section{Identities}\label{iden} \subsection{Commutator identities}\label{comm-iden} We include here various commutator identities that are necessary for the proofs. The first set: \eqref{eq:c1}-\eqref{eq:c4} has already appeared in \cite{kw}. \begin{align} \label{eq:c1} [\partial_t,D_\alpha] &= - (D_\alpha z_t) D_\alpha;\\ \label{eq:c2} \bracket{\partial_t,D_\alpha^2} & = -2(D_\alpha z_t) D_\alpha^2 - (D_\alpha^2 z_t) D_\alpha;\\ \label{eq:c3} \bracket{\partial_t^2,D_\alpha} &=(-D_\alpha z_{tt}) D_\alpha + 2(D_\alpha z_t)^2 D_\alpha - 2(D_\alpha z_t) D_\alpha \partial_t;\\ \label{eq:c5} \bracket{\partial_t^2 + i \mathfrak{a} \partial_\alpha, D_\alpha} &= (-2D_\alpha z_{tt}) D_\alpha -2(D_\alpha z_t)\partial_t D_\alpha; \end{align} and \begin{equation} \label{eq:c4} \begin{aligned} \bracket{(\partial_t^2 + i \mathfrak{a} \partial_\alpha),D_\alpha^2} & =(-4D_\alpha z_{tt}) D_\alpha^2 + 6(D_\alpha z_t)^2 D_\alpha^2 - (2D_\alpha^2 z_{tt}) D_\alpha \\&+ 6(D_\alpha z_t) (D_\alpha^2 z_t) D_\alpha - 2(D_\alpha^2 z_t) D_\alpha \partial_t - 4(D_\alpha z_t) D_\alpha^2 \partial_t. \end{aligned} \end{equation} Let $$\mathcal P:=(\partial_t+b\partial_{\alpha '})^2+i\mathcal A\partial_{\alpha '}.$$ Notice that $U_h^{-1}\partial_t U_h=\partial_t+b\partial_{\alpha '}$, $U_h^{-1}D_\alpha U_h=D_{\alpha '}$ and $\mathcal P=U_h^{-1}(\partial_t^2+i\frak a\partial_\alpha)U_h$, we precompose with $h^{-1}$ to equations \eqref{eq:c1}-\eqref{eq:c4}, and get \begin{align} \label{eq:c1-1} [\partial_t+b\partial_{\alpha '},D_{\alpha '}] &= - (D_{\alpha '} Z_t) D_{\alpha '};\\ \label{eq:c2-1} \bracket{\partial_t+b\partial_{\alpha '},D_{\alpha '}^2} & = -2(D_{\alpha '} Z_t) D_{\alpha '}^2 - (D_{\alpha '}^2 Z_t) D_{\alpha '};\\ \label{eq:c3-1} \bracket{(\partial_t+b\partial_{\alpha '})^2,D_{\alpha '}} &=(-D_{\alpha '} Z_{tt}) D_{\alpha '} + 2(D_{\alpha '} Z_t)^2 D_{\alpha '} - 2(D_{\alpha '} Z_t) D_{\alpha '} (\partial_t+b\partial_{\alpha '});\\ \label{eq:c5-1} \bracket{\mathcal P, D_{\alpha '}} &= (-2D_{\alpha '} Z_{tt}) D_{\alpha '} -2(D_{\alpha '} Z_t)(\partial_t+b\partial_{\alpha '}) D_{\alpha '}; \end{align} and \begin{equation} \label{eq:c4-1} \begin{aligned} \bracket{\mathcal P,D_{\alpha '}^2} & =(-4D_{\alpha '} Z_{tt}) D_{\alpha '}^2 + 6(D_{\alpha '} Z_t)^2 D_{\alpha '}^2 - (2D_{\alpha '}^2 Z_{tt}) D_{\alpha '} \\&+ 6(D_{\alpha '} Z_t) (D_{\alpha '}^2 Z_t) D_{\alpha '} - 2(D_{\alpha '}^2 Z_t) D_{\alpha '} (\partial_t+b\partial_{\alpha '}) - 4(D_{\alpha '} Z_t) D_{\alpha '}^2 (\partial_t+b\partial_{\alpha '}). \end{aligned} \end{equation} We need some additional commutator identities. In general, for operators $A, B$ and $C$, \begin{equation}\label{eq:c12} [A, BC^k]=[A, B]C^k+ B[A, C^k]=[A, B]C^k+ \sum_{i=1}^k BC^{i-1}[A, C]C^{k-i}. 
\end{equation} We have \begin{align} \label{eq:c7} [\partial_t+b\partial_{\alpha '}, \partial_{\alpha '}]f&=-b_{\alpha '}\partial_{\alpha '} f;\\ \label{eq:c8}[(\partial_t+b\partial_{\alpha '})^2, \partial_{\alpha '}]f&=-(\partial_t+b\partial_{\alpha '})(b_{\alpha '}\partial_{\alpha '} f)-b_{\alpha '}\partial_{\alpha '} (\partial_t+b\partial_{\alpha '})f;\\ \label{eq:c9}[i\mathcal A\partial_{\alpha '},\partial_{\alpha '}]f&=-i\mathcal A_{\alpha '} \partial_{\alpha '} f;\\ \label{eq:c10} [\mathcal P, \partial_{\alpha '}]f&=-(\partial_t+b\partial_{\alpha '})(b_{\alpha '}\partial_{\alpha '} f)-b_{\alpha '}\partial_{\alpha '} (\partial_t+b\partial_{\alpha '})f-i\mathcal A_{\alpha '} \partial_{\alpha '} f;\\ \label{eq:c11}[\partial_t+b\partial_{\alpha '}, \partial_{\alpha '}^2]f&=-\partial_{\alpha '}(b_{\alpha '}\partial_{\alpha '} f)-b_{\alpha '}\partial_{\alpha '}^2 f. \end{align} Here \eqref{eq:c8}, \eqref{eq:c11} are obtained by \eqref{eq:c12} and \eqref{eq:c7}. We also have \begin{equation}\label{eq:c21} [\partial_t+b\partial_{\alpha '}, \mathbb H]=[b,\mathbb H]\partial_{\alpha'} \end{equation} We compute $$ \begin{aligned} (\partial_t+b\partial_{\alpha '}) [f,\mathbb H]g&=[(\partial_t+b\partial_{\alpha '}) f,\mathbb H]g+\bracket{f, \bracket{\partial_t+b\partial_{\alpha '}, \mathbb H}}g+ [f,\mathbb H](\partial_t+b\partial_{\alpha '}) g\\& =[(\partial_t+b\partial_{\alpha '}) f,\mathbb H]g+\bracket{f, \bracket{b, \mathbb H}\partial_{\alpha '}}g+ [f,\mathbb H](\partial_t+b\partial_{\alpha '}) g \\& =[(\partial_t+b\partial_{\alpha '}) f,\mathbb H]g+ [f,\mathbb H]\paren{(\partial_t+b\partial_{\alpha '}) g+b_{\alpha '} g}\\& \qquad+\bracket{f, \bracket{b, \mathbb H}}\partial_{\alpha '} g -[b,\mathbb H](f_{\alpha '} g)-[f,\mathbb H](b_{\alpha '} g). \end{aligned} $$ It can be checked easily, by integration by parts, that $$\bracket{f, \bracket{b, \mathbb H}}\partial_{\alpha '} g -[b,\mathbb H](f_{\alpha '} g)-[f,\mathbb H](b_{\alpha '} g)=-[f,b;g].$$ So \begin{equation}\label{eq:c14'} \begin{aligned} (\partial_t+b\partial_{\alpha '})& [f,\mathbb H]g=[(\partial_t+b\partial_{\alpha '}) f,\mathbb H]g\\&+ [f,\mathbb H]((\partial_t+b\partial_{\alpha '}) g+b_{\alpha'} g)-[f, b; g]; \end{aligned} \end{equation} with an application of \eqref{eq:c7} yields \begin{equation}\label{eq:c14} \begin{aligned} (\partial_t+b\partial_{\alpha '})& [f,\mathbb H]\partial_{\alpha'}g= [(\partial_t+b\partial_{\alpha '}) f,\mathbb H]\partial_{\alpha'}g\\&+ [f,\mathbb H]\partial_{\alpha'}(\partial_t+b\partial_{\alpha '}) g-[f, b; \partial_{\alpha'}g]. \end{aligned} \end{equation} We compute, by \eqref{eq:c21}, \eqref{eq:c12} and \eqref{eq:c14} that \begin{equation}\label{eq:c23} \begin{aligned} \bracket{(\partial_t+b\partial_{\alpha '})^2, \mathbb H}f&=(\partial_t+b\partial_{\alpha '})\bracket{b,\mathbb H}\partial_{\alpha '} f+\bracket{b,\mathbb H}\partial_{\alpha '} (\partial_t+b\partial_{\alpha '})f \\&=\bracket{(\partial_t+b\partial_{\alpha '})b,\mathbb H}\partial_{\alpha '} f+2\bracket{b,\mathbb H}\partial_{\alpha '} (\partial_t+b\partial_{\alpha '})f-[b,b; \partial_{\alpha '} f]. \end{aligned} \end{equation} We also have \begin{equation}\label{eq:c24} \bracket{i\mathcal A\partial_{\alpha '}, \mathbb H}f=\bracket{i\mathcal A,\mathbb H}\partial_{\alpha '} f. 
\end{equation}
Summing up \eqref{eq:c23} and \eqref{eq:c24} yields
\begin{equation}\label{eq:c25}
\bracket{\mathcal P, \mathbb H}f=\bracket{(\partial_t+b\partial_{\alpha '})b,\mathbb H}\partial_{\alpha '} f+2\bracket{b,\mathbb H}\partial_{\alpha '} (\partial_t+b\partial_{\alpha '})f-[b,b; \partial_{\alpha '} f]+\bracket{i\mathcal A,\mathbb H}\partial_{\alpha '} f.
\end{equation}
We have, by the product rule, that
\begin{equation}\label{eq:c15}
[(\partial_t+b\partial_{\alpha '})^2, \frac{1}{Z_{,\alpha'}}]f=(\partial_t+b\partial_{\alpha '})^2\paren{\frac{1}{Z_{,\alpha'}}}f+2(\partial_t+b\partial_{\alpha '})\paren{\frac{1}{Z_{,\alpha'}}}(\partial_t+b\partial_{\alpha '})f;
\end{equation}
and
\begin{equation}\label{eq:c17}
[i\mathcal A\partial_{\alpha '}, \frac{1}{Z_{,\alpha'}}]f=i\mathcal A\partial_{\alpha '} \paren{\frac{1}{Z_{,\alpha'}}} f;
\end{equation}
so
\begin{equation}\label{eq:c16}
[\mathcal P, \frac{1}{Z_{,\alpha'}}]f=(\partial_t+b\partial_{\alpha '})^2\paren{\frac{1}{Z_{,\alpha'}}}f+2(\partial_t+b\partial_{\alpha '})\paren{\frac{1}{Z_{,\alpha'}}}(\partial_t+b\partial_{\alpha '})f+i\mathcal A\partial_{\alpha '} \paren{\frac{1}{Z_{,\alpha'}}} f.
\end{equation}
We also compute, by \eqref{eq:dza},
\begin{align}\label{eq:c26}
(\partial_t+b\partial_{\alpha '})\paren{\frac{1}{Z_{,\alpha'}}}&=\frac{1}{Z_{,\alpha'}}(b_{\alpha '}-D_{\alpha '} Z_t);\\
\label{eq:c27}
(\partial_t+b\partial_{\alpha '})^2\paren{\frac{1}{Z_{,\alpha'}}}&=\frac{1}{Z_{,\alpha'}}(b_{\alpha '}-D_{\alpha '} Z_t)^2+\frac{1}{Z_{,\alpha'}}(\partial_t+b\partial_{\alpha '})(b_{\alpha '}-D_{\alpha '} Z_t).
\end{align}

\section{Main quantities controlled by $\frak E$} \label{quantities}

We have shown in \S\ref{basic-quantities} that the following quantities are controlled by a polynomial of $\frak E$ (or equivalently by $\mathcal E$):
\begin{equation}\label{2020-1}
\begin{aligned}
&\nm{ D_{\alpha '}\bar Z_t}_{\dot H^{1/2}}, \quad\nm{\frac{1}{ Z_{,\alpha'}} D_{\alpha '}^2\bar Z_t}_{\dot H^{1/2}}, \quad \|\bar Z_{t,{\alpha '}}\|_{L^2},\quad \|D_{\alpha '}^2\bar Z_t\|_{L^2},\quad \nm{\partial_{\alpha '} \frac{1}{ Z_{,\alpha'}}}_{L^2},\quad \abs{\frac{1}{ Z_{,\alpha'}}(0,t)},\\& \|A_1\|_{L^\infty}, \quad \|b_{\alpha '}\|_{L^\infty}, \quad \|D_{\alpha '} Z_t\|_{L^\infty},\quad \|D_{\alpha '} Z_{tt}\|_{L^\infty}, \quad \|(\partial_t+b\partial_{\alpha '})\bar Z_{t,{\alpha '}}\|_{L^2}, \\& \|Z_{tt,{\alpha '}}\|_{L^2}, \quad \|D_{\alpha '}^2 \bar Z_{tt}\|_{L^2},\quad \|D_{\alpha '}^2 Z_{tt}\|_{L^2},\quad \|(\partial_t+b\partial_{\alpha '})D_{\alpha '}^2\bar Z_t\|_{L^2},\quad \|D_{\alpha '}^2 Z_t\|_{L^2}.
\end{aligned}
\end{equation}
In the remainder of \S\ref{proof0} we have controlled the following quantities by a polynomial of $\frak E$ (or equivalently by $\mathcal E$):
\begin{equation}\label{2020-2}
\begin{aligned}
& \nm{\frac{\frak a_t}{\frak a}\circ h^{-1}}_{L^\infty},\quad \nm{(\partial_t+b\partial_{\alpha '})A_1}_{L^\infty},\quad \nm{\mathcal A_{{\alpha '}}}_{L^\infty},\quad \nm{\frac1{Z_{,{\alpha '}}}\partial_{\alpha '} \frac1{Z_{,{\alpha '}}}}_{L^\infty},\\& \nm{\partial_{\alpha '}(\partial_t+b\partial_{\alpha '})\frac1{Z_{,{\alpha '}}}}_{L^2},\quad \nm{(\partial_t+b\partial_{\alpha '})\partial_{\alpha '}\frac1{Z_{,{\alpha '}}}}_{L^2},\quad \nm{\partial_{\alpha '}(\partial_t+b\partial_{\alpha '})b}_{L^\infty},\quad \nm{(\partial_t+b\partial_{\alpha '})b_{\alpha '}}_{L^\infty},\\& \nm{(\partial_t+b\partial_{\alpha '})D_{\alpha '} Z_t}_{L^\infty}, \quad \nm{D_{\alpha '}\paren{\frac{\frak a_t}{\frak a}\circ h^{-1}}}_{L^2}.
\end{aligned} \end{equation} As a consequence of \eqref{2045} and \eqref{2020-1} we have $$\nm{D_{\alpha'}b_{\alpha'}}_{L^2}\lesssim C(\frak E).$$ \end{appendix} \end{document}
\begin{document} \title{Random walk with barycentric self-interaction} \begin{abstract} We study the asymptotic behaviour of a $d$-dimensional self-interacting random walk $(X_n)_{n \in {\mathbb N}}$ (${\mathbb N} := \{ 1,2,3,\ldots \}$) which is repelled or attracted by the centre of mass $G_n = n^{-1} \sum_{i=1}^n X_i$ of its previous trajectory. The walk's trajectory $(X_1,\ldots,X_n)$ models a random polymer chain in either poor or good solvent. In addition to some natural regularity conditions, we assume that the walk has one-step mean drift \[ \mathbb{E} [X_{n+1} - X_n \mid X_n - G_n = {\mathbf x}] \approx \rho \|{\mathbf x}\|^{-\beta} \hat {\mathbf x} \] for $\rho \in {\mathbb R}$ and $\beta \geq 0$. When $\beta <1$ and $\rho>0$, we show that $X_n$ is transient with a limiting (random) direction and satisfies a super-diffusive law of large numbers: $n^{-1/(1+\beta)} X_n$ converges almost surely to some random vector. When $\beta \in (0,1)$ there is sub-ballistic rate of escape. For $\beta \geq 0$, $\rho \in {\mathbb R}$ we give almost-sure bounds on the norms $\|X_n\|$, which in the context of the polymer model reveal extended and collapsed phases. Analysis of the random walk, and in particular of $X_n - G_n$, leads to the study of real-valued time-inhomogeneous non-Markov processes $(Z_n)_{n \in {\mathbb N}}$ on $[0,\infty)$ with mean drifts of the form \begin{equation} \label{star} \mathbb{E} [ Z_{n+1} - Z_n \mid Z_n = x ] \approx \rho x^{-\beta} - \frac{x}{n}, \end{equation} where $\beta \geq 0$ and $\rho \in {\mathbb R}$. The study of such processes is a time-dependent variation on a classical problem of Lamperti; moreover, they arise naturally in the context of the distance of simple random walk on ${\mathbb Z}^d$ from its centre of mass, for which we also give an apparently new result. We give a recurrence classification and asymptotic theory for processes $Z_n$ satisfying (\ref{star}), which enables us to deduce the complete recurrence classification (for any $\beta \geq 0$) of $X_n - G_n$ for our self-interacting walk. \end{abstract} \noindent {\em Keywords:} Self-interacting random walk; self-avoiding walk; random walk avoiding its convex hull; random polymer; centre of mass; simple random walk; random walk average; limiting direction; law of large numbers. \/ \noindent {\em AMS 2010 Subject Classifications:} 60J05 (Primary) 60K40; 60F15; 82C26 (Secondary) \section{Introduction} \label{sec:intro} We study a self-interacting random walk. Self-interacting random processes, in which the stochastic behaviour depends on the entire previous history of the process, present many challenges for mathematical analysis (see e.g.~\cite{blr,pemantle} and references therein) and are often motivated by real applications. Although not a random process in the same sense, the {\em self-avoiding walk} is a prototypical example of a self-interacting random walk that gives rise to important and difficult problems. Random self-avoiding walks were introduced to model the configuration of polymer molecules in solution. The sites visited by the walk represent the locations of the polymer's constituent monomers; successive monomers are viewed as connected by chemical bonds. The classical self-avoiding walk (SAW) model takes uniform measure on $n$-step self-avoiding paths in ${\mathbb Z}^d$. 
In the important cases of $d \in \{2,3\}$, there are still major open problems for such walks: see for example \cite{lsw,ms} and \cite[Chapter 7]{hughes}, or \cite[Chapter 7]{rg} for a mathematical physics perspective. The {\em loop-erased random walk} (LERW), obtained by erasing chronologically the loops of a random walk, was introduced in \cite{lawler} to study SAW, but it was soon realized that the two processes belong to different universality classes. For its independent interest, including applications to combinatorics and quantum field physics, LERW has received considerable attention and now there is a more precise picture of its behaviour, which shows fine dependence on the spatial dimension. In the planar case, the mean number of steps for LERW stopped at distance $n$ is of order $n^{5/4}$ \cite{kenyon}, and the scaling limit is conformally invariant, described by the radial Schramm--Loewner evolution with parameter 2 \cite{lsw2}. A different perspective on polymer models concerns directed polymers, where the self-interaction is reduced to a trivial form but interesting phenomena arise from the interaction with the medium: see \cite{giac,holl} for recent surveys for localization on interfaces (pinning, wetting) possibly with time-inhomogeneities (e.g.~copolymers), and \cite{CSY04} for interactions with a time-space inhomogeneous medium leading to localization in the bulk. In the standard framework, SAW cannot be interpreted as a dynamic (or progressive) stochastic process. There have been many attempts to formulate genuine stochastic processes with similar behaviour to that of, or at least conjectured for, SAW. A recent model is the random walk on ${\mathbb R}^2$ which at each step avoids the convex hull of its preceding values \cite{abv,zern}. Unlike the conjectured behaviour of SAW, this model is ballistic (see \cite{abv,zern}), i.e., it has a positive speed. The discrete version on ${\mathbb Z}^2$, the dynamic prudent walk, has been studied in \cite{BFV09}: it is ballistic with speed $3/7$ (in the $L^1$ norm), but, in contrast to the (conjecture for the) continuous model, it does not have a fixed direction (see \cite{BFV09}). Ballisticity is known for other types of self-interacting random walks: see \cite{chayes,iv}. In this paper we consider a self-interacting random walk model that is a tractable alternative to SAW, and is distinguished from the models \cite{abv,BFV09,zern,chayes,iv} by exhibiting a range of possible scaling behaviour, including sub-ballisticity (i.e., zero speed) and super-diffusivity. Our model is tunable, with parameters that in principle can be estimated from real data, and it can be used to represent polymers in the extended phase (for good solvent) or collapsed phase (poor solvent). The self-interaction in the model at time $n$ is mediated through the {\em barycentre} or {\em centre of mass} of the past trajectory until time $n$. Specifically, our random walk will at each step have a mean drift (typically asymptotically zero in magnitude) pointing away from or towards the average of all previous positions. We now informally describe the probabilistic model; we give a brief description of the motivation and interpretation arising from polymer physics in Section \ref{poly}. Let $d \in {\mathbb N} := \{1,2,3,\ldots\}$. Our random walk will be a discrete-time stochastic process $X = (X_n)_{n \in {\mathbb N}}$ on ${\mathbb R}^d$. 
For $n \in {\mathbb N}$, set \begin{equation} \label{com} G_n := \frac{1}{n} \sum_{i=1}^n X_i, \end{equation} the centre of mass (average) of $\{X_1,\ldots,X_n\}$. In addition to some regularity conditions on $X$ that we describe later, our main assumption will be that the one-step mean drift of the walk after $n$ steps is of order $\| X_n - G_n \|^{-\beta}$ in the direction $\pm(X_n - G_n)$, where $\beta \geq 0$ is a fixed parameter; here and subsequently $\| \cdot \|$ denotes the Euclidean norm on ${\mathbb R}^d$. Loosely speaking for the moment, we will suppose that for some $\rho \in {\mathbb R}$ and $\beta \geq 0$, \begin{equation} \label{drift0} \mathbb{E} [ X_{n+1} - X_n \mid X_n - G_n = {\mathbf x} ] \approx \rho \| {\mathbf x} \|^{-\beta} \hat {\mathbf x} , \end{equation} for any $n \in {\mathbb N}$ and ${\mathbf x} \in {\mathbb R}^d \setminus \{ {\mathbf 0}\}$, where $\hat {\mathbf x} := {\mathbf x}/\| {\mathbf x} \|$ denotes a unit vector in the ${\mathbf x}$-direction and ${\mathbf 0}$ is the origin in ${\mathbb R}^d$. We attach no precise meaning to `$\approx$' in (\ref{drift0}) (or elsewhere); it indicates that we are ignoring some terms and also that we have not yet formally defined all the terms present. We describe the model formally and in detail in Section \ref{sec:model} below. The natural case of our model to compare to the walk that avoids its convex hull \cite{abv,zern} has $\beta =0$ and $\rho >0$, when our walk has positive drift away from its current centre of mass. In our $\beta =0$, $\rho >0$ setting we show that the walk has an asymptotic speed and an asymptotic direction, properties which are conjectured but not yet proved for the walk avoiding its convex hull \cite{abv,zern}. Our results however cover much more than this special case. For example, the case of our model that we might expect to be in some sense comparable to SAW in $d=2$ has $\beta = 1/3$, $\rho >0$: see the discussion in Section \ref{poly} below. To give a flavour of our more general results, described in more detail in Section \ref{results} below, we now informally describe our results in the case where (\ref{drift0}) holds with $\rho>0$ and $\beta \in [0,1)$. Under suitable regularity conditions, we show that $X$ is transient, i.e.~$\| X_n \| \to \infty$ a.s., and moreover we prove a strong law of large numbers that precisely quantifies this transience: $n^{-1/(1+\beta)} \| X_n \|$ is asymptotically constant, almost surely. In addition, we show that $X_n$ has a limiting direction, that is, $X_n / \| X_n \|$ converges a.s.~to some (random) unit vector. Thus we have, in this case, a rather complete picture of the asymptotic behaviour of $X_n$. For other regions of the $(\rho,\beta)$ parameter space we have other results, although we also leave some interesting open problems. The self-interaction in the model is introduced via the presence of $G_n$ in (\ref{drift0}). If the condition $\{X_n - G_n = {\mathbf x}\}$ in (\ref{drift0}) is replaced by $\{ X_n = {\mathbf x}\}$ then there is no self-interaction in the drift, which instead points away from a fixed origin. Such non-homogeneous `centrally biased' walks were studied by Lamperti in \cite[Section 4]{lamp1} and \cite[Section 5]{lamp3}; for more recent work see e.g.~\cite{flp,mmw,superlamp}. Considering the process of norms $Z_n = \| X_n \|$ leads to a process on $[0,\infty)$ with mean drift \begin{equation} \label{drift0a} \mathbb{E} [ Z_{n+1} - Z_n \mid Z_n = x ] \approx \rho' x^{-\beta}, \end{equation} ignoring higher-order terms. 
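For readers who wish to experiment with the model numerically, the following minimal Python sketch simulates a walk whose one-step mean drift has the form (\ref{drift0}), in the spirit of Example 2 of Section \ref{sec:exam} below: the increment is a uniform unit vector plus a drift of magnitude $\rho \| X_n - G_n \|^{-\beta}$ along the direction of $X_n - G_n$. All concrete choices here (the uniform spherical increment, the guard that keeps jumps bounded, the parameter values) are ours and purely illustrative; this is not the code used to produce the figures in this paper.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_steps=10**4, d=2, rho=0.5, beta=1/3):
    """Sketch of a walk with mean drift rho*||x||^(-beta) * xhat,
    where x = X_n - G_n is the displacement from the running
    centre of mass (cf. Example 2; all parameters illustrative)."""
    X = np.zeros(d)              # X_1 at the origin
    G = X.copy()                 # G_1 = X_1
    traj = [X.copy()]
    for n in range(1, n_steps):
        x = X - G
        r = np.linalg.norm(x)
        u = rng.normal(size=d)   # uniform direction on the unit sphere
        u /= np.linalg.norm(u)
        drift = rho * r**(-beta) * (x / r) if r > 0 else np.zeros(d)
        if np.linalg.norm(drift) >= 0.5:
            drift = np.zeros(d)  # keep jumps uniformly bounded (our choice)
        X = X + u + drift
        G = (n * G + X) / (n + 1)  # centre-of-mass update
        traj.append(X.copy())
    return np.array(traj), G

traj, G = simulate()
n = len(traj)
print("n^(-1/(1+beta)) * ||X_n|| approx",
      np.linalg.norm(traj[-1]) / n ** 0.75)   # 1/(1+beta) = 0.75 for beta = 1/3
print("||X_n - G_n|| approx", np.linalg.norm(traj[-1] - G))
\end{verbatim}
In runs of this kind with $\rho>0$ and $\beta<1$ one can observe numerically the super-diffusive growth and the stabilization of $X_n / \| X_n \|$ consistent with the results described above, although a finite simulation can only be suggestive. We now return to the one-dimensional drift equation (\ref{drift0a}).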
Such `asymptotically zero-drift' processes are of independent interest; the asymptotic analysis of such (not necessarily Markov) processes is sometimes known as {\em Lamperti's problem} following pioneering work of Lamperti \cite{lamp1,lamp2,lamp3}. From the point of view of the recurrence classification of processes satisfying (\ref{drift0a}), the case $\beta=1$ turns out to be critical, in which case the value of $\rho' \in {\mathbb R}$ is crucial: we give a brief summary of the relevant background in Section \ref{sec:lampsrw} below. We shall see below that considering the process $Z_n = \| X_n -G_n \|$ with $X_n$ satisfying (\ref{drift0}) leads to a more complicated form of (\ref{drift0a}). Loosely speaking, we will obtain \begin{equation} \label{drift0b} \mathbb{E} [ Z_{n+1} - Z_n \mid Z_n = x ] \approx \rho' x^{-\beta} - \frac{x}{n}. \end{equation} We note that the two terms on the right-hand side of (\ref{drift0b}) are typically of the same order, as can be predicted by solving the corresponding differential equation, and so both contribute to the asymptotic behaviour. Comparing (\ref{drift0b}) with (\ref{drift0a}), we see that the drift is now {\em time}- as well as space-dependent. (A different variation on (\ref{drift0a}) with this property was studied in \cite{mv2}, where processes with drift $\rho x^\alpha n^{-\beta}$ were considered.) Thus (\ref{drift0b}) is an interesting starting point for analysis in its own right. Additional motivation for (\ref{drift0b}) arises naturally from simple random walk (SRW) and its centre of mass: if $Z_n = \| X_n - G_n \|$ where $X_n$ is a symmetric SRW on ${\mathbb Z}^d$ and $G_n$ its centre-of-mass as defined by (\ref{com}), $Z_n$ satisfies (\ref{drift0b}) with $\beta=1$ and $\rho' = \rho' (d)$; see Section \ref{sec:com} below. Let us step back from the general setting for a moment to state one consequence of our results, which is a (seemingly new) observation on SRW: \begin{theo} \label{srwthm} Let $d \in {\mathbb N}$. Suppose that $(X_n)_{n \in {\mathbb N}}$ is a symmetric SRW on ${\mathbb Z}^d$, and $(G_n)_{n \in {\mathbb N}}$ is its centre-of-mass process as defined by (\ref{com}). Then \begin{itemize} \item[(a)] $\liminf_{n \to \infty} \| X_n - G_n \| < \infty$ a.s.~for $d \in \{1,2\}$; \item[(b)] $\lim_{n \to \infty} \| X_n - G_n \| = \infty$ a.s.~for $d \geq 3$. \end{itemize} \end{theo} P\'olya's recurrence theorem says that $X_n$ is recurrent in $d \leq 2$ and transient in $d \geq 3$, while results of Grill \cite{grill} say that the centre-of-mass process $G_n$ is recurrent only in $d=1$ and transient for $d \geq 2$. Thus the asymptotic behaviour of $X_n - G_n$ is not trivial; Theorem \ref{srwthm} says that it is recurrent if and only if $d \in \{1,2\}$. In particular when $d=2$, $X_n$ and $X_n - G_n$ are both recurrent, but $G_n$ is transient; see Figure \ref{fig2} for a simulation. \begin{rmk} Theorem \ref{srwthm} exhibits an amusing feature. 
With the notation $\Delta_n := X_{n+1} - X_n$ it is not hard to see from (\ref{com}) that we may write (with $X_0 := {\mathbf 0}$) \[ G_n = \sum_{i=0}^{n-1} \left( 1 - \frac{i}{n} \right) \Delta_i ; ~~~ X_n - G_n = \sum_{i=0}^{n-1} \left( \frac{i}{n} \right) \Delta_i .\] It follows that (for {\em fixed} $n$) $X_n-G_n$ and $G_n$ are very nearly {\em time-reversals} of each other: writing $\Delta'_i := \Delta_{n-i}$ we see that \[ X_n - G_n = \sum_{i=1}^{n} \left(1 - \frac{i}{n} \right) \Delta'_i .\] Despite this, the two {\em processes} behave very differently, as can be seen by contrasting Theorem \ref{srwthm} with Grill's result \cite{grill}. \end{rmk} It is natural to ask whether a continuous analogue of Theorem \ref{srwthm} holds. In the one-dimensional case, we would take $B_t$ to be standard Brownian motion and $G_t = t^{-1} \int_0^t B_s {\mathrm d} s$, and ask about the joint behaviour of $(B_t , G_t)$; in higher dimensions, writing the $d$-dimensional Brownian motion as $(B^{(1)}_t, \ldots, B^{(d)}_t)$, the $i$th component $G^{(i)}_t$ of $G_t$ is $t^{-1} \int_0^t B^{(i)}_s {\mathrm d} s$, and different components are independent. We could not find a Brownian analogue of Grill's theorem for (compact set) recurrence/transience of $G_t$ explicitly stated in the literature. The process $(t G_t)_{t \geq 0}$ is {\em integrated Brownian motion}, or the {\em Langevin process}, see e.g.\ \cite{ac,iw} and references therein. The two-dimensional process $(B_t, tG_t)_{t \geq 0}$ is the {\em Kolmogorov diffusion} \cite{iw}. Theorem \ref{srwthm} gives basic information about the joint behaviour of a discrete version of this process, under a re-scaling of the second coordinate. \begin{figure} \caption{Simulation of $4 \times 10^4$ steps of symmetric SRW starting at the origin of ${\mathbb Z}^2$ (path shown in red) and its centre of mass process (blue).} \label{fig2} \end{figure} In Section \ref{sec:model} we formally define our self-interacting random walk and state our main results. In Section \ref{sec:motiv} we discuss some more of the motivation behind our model (coming from the physics of polymers and also purely theoretical considerations) and also the one-dimensional problems associated with (\ref{drift0a}) and (\ref{drift0b}), and explain how SRW (and Theorem \ref{srwthm}) fits into our picture. The subsequent sections are devoted to the proofs. We finish this section with some comments on the relation of our model to the existing literature. We are not aware of any self-interacting random walk models similar to the one studied here (i.e., interacting with the previous history of the process, as summarized through the barycentre). In broad outline, our model is related to the vertex-reinforced random walk (see \cite[Section 5.3]{pemantle}) in that the evolution of the walk depends on the sites previously visited. A significant difference is that in vertex-reinforced random walk this self-interaction is local, in that only the occupation of nearest-neighbours of the current site affects the law of the increment, whereas our interaction, mediated by the barycentre, is global. In the continuous setting, self-interacting diffusions (or `Brownian polymers') with similar flavour and motivation to those of our model have also been studied over the last two decades or so, but are rather different in detail to the model considered here: see e.g.~\cite{nrw,dr,mt,blr} and references therein; some recent work on processes with self-attracting drift defined through a potential includes \cite{ck}. 
In the self-interacting diffusion setting, most of the results in the literature are concerned with the ergodic case; questions of recurrence/transience seem to have received little attention (particularly in dimensions greater than 1), and we do not know of any results on asymptotic directions. Also, it is typically assumed that the vector consisting of the process and its empirical average are Markovian, whereas our model is more general. See \cite[Section 1]{mt} for a short survey. \section{The model and main results} \label{sec:model} \subsection{Definitions and assumptions} We now define the stochastic process $X := (X_n)_{n \in {\mathbb N}}$ on ${\mathbb R}^d$ ($d \in {\mathbb N}$) that is our main object of study. (We start at time $n=1$ only so that (\ref{com}) has the neatest form.) The process $X$ will not be Markovian, as the distribution of $X_{n+1}$ will depend on the entire history $X_1, \ldots, X_{n}$, although to a large extent this dependence will be mediated through the current centre of mass $G_n$ defined at (\ref{com}). Formally, we suppose that $(X_n)_{n \in {\mathbb N}}$ is adapted to the filtration $({\mathcal F}_n)_{n \in {\mathbb N}}$; note that by (\ref{com}) $G_1,\ldots,G_n$ are ${\mathcal F}_n$-measurable. We use the notation $\mathbb{P}_n [ \, \cdot \, ] := \mathbb{P} [ \, \cdot \mid {\mathcal F}_n ]$ and $\mathbb{E}_n[\, \cdot \, ] := \mathbb{E} [ \, \cdot \mid {\mathcal F}_n ]$. Throughout the paper we understand $\log x$ to mean $\log x$ if $x \geq 1$ and $0$ otherwise. We impose some specific assumptions on the law of $\Delta_n := X_{n+1} - X_n$ given ${\mathcal F}_n$. We assume that for some $B \in (0,\infty)$ and all $n \in {\mathbb N}$, \begin{equation} \label{bound} \mathbb{P}_n [ \| \Delta_n \| \leq B ] =1, { \ \textrm{a.s.}}.\end{equation} The assumption of uniformly bounded jumps can be replaced by an assumption on higher order moments at the expense of additional technical complications, but (\ref{bound}) is natural when the increments represent chemical bonds in a model for a polymer molecule. Our next assumption will be a precise version of (\ref{drift0}). We suppose that for some $\rho \in {\mathbb R}$ and $\beta \geq 0$, for any $n \in {\mathbb N}$, writing ${\mathbf x} = X_n - G_n $ for convenience, \begin{equation} \label{drift} \mathbb{E}_n [ \Delta_n ] = \rho \| {\mathbf x} \|^{-\beta} \hat {\mathbf x} + O ( \| {\mathbf x} \|^{-\beta} (\log \| {\mathbf x} \|)^{-2} ), { \ \textrm{a.s.}}, \end{equation} as $\| {\mathbf x} \| \to \infty$, where $\hat {\mathbf x} := {\mathbf x}/\| {\mathbf x} \|$. (In (\ref{drift}) the exponent $-2$ on the logarithm is chosen for simplicity; it could be replaced with any exponent strictly less than $-1$.) In equation (\ref{drift}) and similar vector equations in the sequel, terms such as $O( \, \cdot\,)$ indicate the presence of a vector whose norm satisfies the given $O(\, \cdot\,)$ asymptotics (similarly for $o(\,\cdot\,)$); error terms not involving $n$ are understood to be uniform in $n$. To be clear, (\ref{drift}) is to be understood as, with $X_n - G_n = {\mathbf x}$, as $\| {\mathbf x} \| \to \infty$, \begin{align*} \sup_{n \in {\mathbb N}} \mathop{{\rm ess~sup}} \| \mathbb{E}_n [ \Delta_n ] - \rho \| {\mathbf x} \|^{-\beta} \hat {\mathbf x} \| & = O ( \| {\mathbf x} \|^{-\beta} (\log \| {\mathbf x} \|)^{-2} ) . \end{align*} We also need to assume a uniform ellipticity condition, to ensure that our random walk does not get `trapped' in some subset of ${\mathbb R}^d$. 
Let ${\mathbb{S}}_d := \{ {\mathbf e} \in {\mathbb R}^d : \| {\mathbf e} \| =1 \}$ denote the unit-radius sphere in ${\mathbb R}^d$. We suppose that there exists $\varepsilon_0>0$ such that \begin{align} \label{ue} \mathop{{\rm ess~inf}}_{{\mathbf e} \in {\mathbb{S}}_d} \mathbb{P}_n [ \Delta_n \cdot {\mathbf e} \geq \varepsilon_0 ] \geq \varepsilon_0. \end{align} Write $\Delta_n = (\Delta_n^{(1)}, \ldots, \Delta_n^{(d)})$ in Cartesian components. An immediate consequence of (\ref{ue}) is the following lower bound on second moments: a.s., \begin{equation} \label{var} \min_{i \in \{1,\ldots,d\}} \mathbb{E}_n [ (\Delta_n^{(i)})^2 ] \geq 2 \varepsilon_0^3 > 0 .\end{equation} Our primary standing assumption will be the following. \begin{itemize} \item[(A1)] Let $d \in {\mathbb N}$. Let $X := (X_n)_{n \in {\mathbb N}}$ be a stochastic process on ${\mathbb R}^d$ and $G:= (G_n)_{n \in {\mathbb N}}$ its associated centre-of-mass process defined by (\ref{com}). For definiteness, take $X_1 \in {\mathbb R}^d$ to be fixed. Suppose that for some $B < \infty$, $\varepsilon_0 >0$, $\rho \in {\mathbb R}$, and $\beta \geq 0$ the conditions (\ref{bound}), (\ref{drift}), and (\ref{ue}) hold. \end{itemize} In the examples discussed later (see Section \ref{sec:exam}), $(X_n,G_n)_{n \in {\mathbb N}}$ will be a Markov process, but we do not assume the Markov property in general. When $\beta =1$, as in the Lamperti case \cite{lamp1,lamp3} the value of $\rho$ in (\ref{drift}) will turn out to be crucial. As in Lamperti's problem, the recurrence classification depends on the relationship between $\rho$ and the covariance structure of $\Delta_n$. To obtain an explicit criterion, we impose additional regularity conditions on that covariance structure. Specifically, we sometimes suppose that (a) there exists $\sigma^2 \in (0,\infty)$ such that, a.s., \begin{equation} \label{cov1} \mathbb{E}_n [ ( \Delta_n^{(i)})^2 ] = \sigma^2 + o((\log \| X_n - G_n \| )^{-1}), ~~~( i \in \{1,\ldots,d\});\end{equation} and (b) for $i, j$ distinct elements of $\{1,\ldots,d\}$, a.s., \begin{equation} \label{cov2} \mathbb{E}_n [ \Delta_n^{(i)} \Delta_n^{(j)} ] = o((\log \| X_n - G_n \| )^{-1}).\end{equation} Thus for $\beta \geq 1$, when necessary we will impose the following additional assumption. \begin{itemize} \item[(A2)] The conditions (\ref{cov1}) and (\ref{cov2}) hold for some $\sigma^2 \in (0,\infty)$. \end{itemize} \subsection{Results on self-interacting walk} \label{results} Our first result, Theorem \ref{ythm1}, constitutes the first part of our complete recurrence classification for $X_n - G_n$. Since we are dealing with non-Markovian processes, we first formally define what we mean by recurrence and transience in this context. \begin{df} \label{def1} An ${\mathbb R}^d$-valued stochastic process $(\xi_n)_{n \in {\mathbb N}}$ is said to be {\em recurrent} if $\liminf_{n \to \infty} \| \xi_n \| < \infty$ a.s.~and {\em transient} if $\lim_{n \to \infty} \| \xi_n \| = \infty$ a.s.. \end{df} Define \begin{equation} \label{rho0} \rho_0 := \rho_0 (d,\sigma^2) := \frac{1}{2} (2-d) \sigma^2 .\end{equation} \begin{theo} \label{ythm1} Suppose that (A1) and (A2) hold with $d \in {\mathbb N}$, $\beta \geq 1$, and $\rho \in {\mathbb R}$. \begin{itemize} \item[(i)] Suppose that $\beta =1$. Let $\rho_0 = \rho_0 (d,\sigma^2)$ be as defined at (\ref{rho0}). Then $X_n - G_n$ is recurrent if $\rho \leq \rho_0$ and transient if $\rho > \rho_0$. \item[(ii)] Suppose that $\beta > 1$. 
Then $X_n - G_n$ is recurrent if $d \in \{1,2\}$ and transient if $d \geq 3$. \end{itemize} \end{theo} For almost all our remaining results we do not need to assume (A2). Set \begin{equation} \label{elldef} \ell (\rho, \beta) := \left( \frac{\rho (1+\beta)}{2+\beta} \right)^{1/(1+\beta)} . \end{equation} In the case $\beta \in [0,1)$, we have the following result, which completes the recurrence classification for $X_n - G_n$ and also gives a detailed account of the asymptotic behaviour of the random walk $X_n$. In particular, when $\rho>0$, $X_n$ and $G_n$ are transient, and moreover have a limiting direction, and the escape is quantified by super-diffusive but, for $\beta >0$, sub-ballistic strong laws of large numbers. The case $\beta=0$ shows ballistic behaviour. \begin{theo} \label{dirthm} Suppose that (A1) holds with $d \in {\mathbb N}$, $\beta \in [0,1)$, and $\rho \in {\mathbb R} \setminus \{0\}$. Then $X_n - G_n$ is transient if $\rho >0$ and recurrent if $\rho < 0$. Moreover, if $\rho >0$, there exists a random ${\mathbf u} \in {\mathbb{S}}_d$ such that, as $n \to \infty$, with $\ell(\rho,\beta)$ defined at (\ref{elldef}), \[ n^{-1/(1+\beta)} X_n \toas ( 2+\beta ) \ell (\rho, \beta ) {\mathbf u}, ~{\textrm{and}}~ n^{-1/(1+\beta)} G_n \toas ( 1+\beta ) \ell (\rho, \beta ) {\mathbf u}. \] \end{theo} At the level of detail displayed by Theorem \ref{dirthm}, we can see a difference between the asymptotic behaviour of the $\beta \in [0,1)$, $\rho>0$ case of (\ref{drift}) compared to the `supercritical Lamperti-type' case in which the drift is away from a fixed origin (i.e., the analogue of (\ref{drift}) holds but with ${\mathbf x} = X_n$ rather than ${\mathbf x} = X_n - G_n$). See Theorem \ref{dirthm0} below and the remarks that precede it. Our ultimate goal is a complete recurrence classification for the process $X_n$. Theorem \ref{dirthm} covers the case $\beta \in [0,1)$, $\rho>0$. Otherwise, we have at the moment only the following one-dimensional result (to be viewed in conjunction with Theorem \ref{ythm1}). \begin{theo} \label{1dthm} Suppose that (A1) holds for $d=1$. Then if $X_n - G_n$ is transient, $X_n$ and $G_n$ are also transient, i.e., $|X_n| \to \infty$ and $|G_n| \to \infty$ a.s.~as $n \to \infty$. \end{theo} Our final result on our walk with barycentric interaction gives upper bounds on $\|X_n\|$ for general $d \in {\mathbb N}$. In view of the interpretation of $(X_1,\ldots,X_n)$ as a model for a polymer molecule in solution, we can describe the phases listed in Theorem \ref{extent} below as (i) {\em extended}, (ii) {\em transitional}, (iii) {\em partially collapsed}, and (iv) {\em fully collapsed}. See the discussion in Section \ref{poly} below. Theorem \ref{extent}(i) is included for comparison only; Theorem \ref{dirthm} gives a much sharper result. Define \begin{equation} \label{gammadef} \gamma (d, \sigma^2 , \rho ) := \left( 2 - d - \frac{2 \rho}{\sigma^2} \right)^{-1} .\end{equation} \begin{theo} \label{extent} Suppose that (A1) holds with $d \in {\mathbb N}$, $\beta \geq 0$, and $\rho \in {\mathbb R}$. Then the following bounds apply. \begin{itemize} \item[(i)] (Theorem \ref{dirthm}.) If $\beta \in [0,1)$ and $\rho>0$, there exists $C \in (0,\infty)$ such that, a.s., $\| X_n \| \leq C n^{1/(1+\beta)}$ for all but finitely many $n \in {\mathbb N}$. \item[(ii)] If $\beta \geq 1$, then for any $\varepsilon>0$, a.s., $\| X_n \| \leq n^{1/2}( \log n)^{(1/2)+\varepsilon}$ for all but finitely many $n \in {\mathbb N}$. \item[(iii)] Suppose that (A2) also holds. 
Suppose that $\beta =1$ and $\rho < -d\sigma^2/2$, and let $\gamma(d,\sigma^2,\rho) \in (0,1/2)$ be as defined at (\ref{gammadef}). Then for any $\varepsilon>0$, a.s., for all but finitely many $n \in {\mathbb N}$, $\| X_n \| \leq n^{\gamma(d,\sigma^2,\rho)+\varepsilon}$. \item[(iv)] If $\beta \in [0,1)$ and $\rho <0$, then for any $\varepsilon>0$, a.s., $\| X_n \| \leq (\log n)^{1+\frac{1}{1-\beta} +\varepsilon}$ for all but finitely many $n \in {\mathbb N}$. \end{itemize} \end{theo} We suspect that the bounds in Theorem \ref{extent} are close to sharp, in that corresponding lower bounds of almost the same order should be valid (only infinitely often, of course, in the recurrent cases). However, the lower bounds of \cite[Section 4]{mvw} do not apply directly. Given (\ref{com}) it is evident that the bounds for $\|X_n \|$ in Theorem \ref{extent} imply the same bounds (up to multiplication by a constant) for $\| G_n \|$, and hence $\| X_n - G_n \|$ too. In addition, the same upper bounds hold (again up to a constant factor) for the quantities of {\em diameter} $D_n$ and root-mean-square {\em radius of gyration} $R_n$ given by \[ D_n := \max_{1 \leq i < j \leq n } \| X_i - X_j \| , ~~~ R_n^2 := \frac{1}{n} \sum_{i=1}^n \| X_i - G_n \|^2 = \frac{1}{n^2} \sum_{i=1}^n \sum_{j=1}^{i-1} \| X_i - X_j \|^2 ;\] these are both physically significant in the interpretation of $(X_1,\ldots,X_n)$ as a polymer chain (see pp.~95--96 of \cite{hughes} and Section \ref{poly} below). Finally, we briefly describe how our results compare to the more classical model studied by Lamperti \cite{lamp1,lamp3}. That is, suppose that (A1) holds but that (\ref{drift}) holds with ${\mathbf x} = X_n$ {\em instead of} ${\mathbf x} = X_n - G_n$. In this case, there is no self-interaction in the drift term, and the drift is relative to a fixed origin. Lamperti studied examples of such processes (so-called centrally biased random walks) in \cite[Section 4]{lamp1} and \cite[Section 5]{lamp3}. We see that our recurrence classification for the self-interacting process $X_n - G_n$ in the case $\beta=1$, Theorem \ref{ythm1}, gives, surprisingly, essentially the same criteria as Lamperti's \cite[Theorem 4.1]{lamp1}. In the case $\beta \in [0,1)$, the difference between the two settings is clearly manifest in the constant in the law of large numbers. The analogue of our Theorem \ref{dirthm} in the case of drift relative to the origin is an immediate consequence of Theorem 2.2 of \cite{mmw} with Theorem 3.2 of \cite{superlamp} (see the discussion in \cite[Section 3.2]{superlamp}): \begin{theo} \label{dirthm0} \cite{mmw,superlamp} Suppose that (A1) holds, with the modification that (\ref{drift}) holds with ${\mathbf x} =X_n$ instead of ${\mathbf x} = X_n - G_n$. Suppose that $d \in {\mathbb N}$, $\beta \in [0,1)$, and $\rho >0$. Then there exists a random ${\mathbf u} \in {\mathbb{S}}_d$ such that, as $n \to \infty$, \[ n^{-1/(1+\beta)} X_n \toas (2+\beta)^{1/(1+\beta)} \ell (\rho, \beta) {\mathbf u}, ~{\textrm{and}}~ n^{-1/(1+\beta)} G_n \toas ( 1+\beta ) (2+\beta)^{-\beta/(1+\beta)} \ell (\rho, \beta ) {\mathbf u}. \] \end{theo} The method of proof of Theorem \ref{dirthm} in the present paper (see Section \ref{direction}) gives an alternative proof of Theorem \ref{dirthm0}, avoiding the rather involved argument for establishing a limiting direction used in \cite{mmw}. 
Specifically, in the argument in Section \ref{direction}, we can apply the relevant law of large numbers (Theorem 3.2 of \cite{superlamp}) in place of our Lemma \ref{ylln}. Note that under the assumption of bounded increments, the law of large numbers \cite[Theorem 3.2]{superlamp} is available, unlike in the generality of Theorem 2.2 from \cite{mmw}; thus in the more general setting, the proof of \cite{mmw} is currently the only one that the authors are aware of. \subsection{Examples} \label{sec:exam} To illustrate our assumptions and results, we give three examples of walks satisfying (A1) and (A2). In all of the following examples, the couple $(X_n,G_n)$ is Markov. \paragraph{Example 1.} For ${\mathbf x} \in {\mathbb R}^d$, let ${\mathbf b}_1 ({\mathbf x}), \ldots, {\mathbf b}_d({\mathbf x})$ denote an orthonormal basis for ${\mathbb R}^d$ such that ${\mathbf b}_1({\mathbf x}) = \hat {\mathbf x}$, the unit vector in the direction ${\mathbf x}$; we use the convention $\hat {\mathbf 0} := {\mathbf e}_1 := (1,0,\ldots,0)$. Fix $\varepsilon_0 \in (0, 1/(2d))$, $\rho \in {\mathbb R}$, and $\beta >0$. Take \[ \mathbb{P}_n [ \Delta_n = {\mathbf b}_i (X_n-G_n) ] = \mathbb{P}_n [ \Delta_n = -{\mathbf b}_i (X_n-G_n) ] = \frac{1}{2d}, ~~~(i \in \{2,\ldots,d\} ),\] and \begin{align*} \mathbb{P}_n [ \Delta_n = {\mathbf b}_1 (X_n-G_n) ] & = \begin{cases} \frac{1}{2d} + \frac{\rho}{2} \| X_n - G_n \|^{-\beta} & {\rm if} ~\frac{| \rho |}{2} \| X_n - G_n \|^{-\beta} \leq \frac{1}{2d} -\varepsilon_0 \\ \frac{1}{d} - \varepsilon_0 & {\rm if} ~ \frac{ \rho }{2} \| X_n - G_n \|^{-\beta} > \frac{1}{2d} -\varepsilon_0 \\ \varepsilon_0 & {\rm if} ~ \frac{ \rho }{2} \| X_n - G_n \|^{-\beta} < - \frac{1}{2d} +\varepsilon_0 \end{cases}; \\ \mathbb{P}_n [ \Delta_n = -{\mathbf b}_1 (X_n-G_n) ] & = \frac{1}{d} - \mathbb{P}_n [ \Delta_n = {\mathbf b}_1 (X_n-G_n) ]. \end{align*} In other words, for all $\| X_n - G_n \|$ sufficiently large, \[ \mathbb{P}_n [ \Delta_n = \pm {\mathbf b}_1 (X_n-G_n) ] = \frac{1}{2d} \pm \frac{\rho}{2} \| X_n - G_n \|^{-\beta} .\] Then writing ${\mathbf x} = X_n - G_n$, we have for ${\mathbf x} \in {\mathbb R}^d$ with $\| {\mathbf x} \|$ sufficiently large, a.s., \[ \mathbb{E}_n [ \Delta_n ] = \rho \| {\mathbf x}\|^{-\beta} \hat {\mathbf x}; ~~~ \mathbb{E}_n [ (\Delta_n^{(i)})^2 ] = \frac{1}{d} \sum_{j=1}^d ( {\mathbf b}_j \cdot {\mathbf e}_i )^2 = \frac{1}{d} .\] It is not hard to verify that (A1) and (A2) (with $\sigma^2 = 1/d$) hold in this case. In particular, if $\beta =1$ Theorem \ref{ythm1} says that $X_n-G_n$ is transient if and only if $\rho > (2-d)/(2d)$. See Figure \ref{fig3} for some simulations of this model. \begin{figure} \caption{Simulation of $10^4$ steps of the random walk (red) and its centre of mass (blue), as described in Example 1, with $d=2$, $\rho =0.1$, $\varepsilon_0 = 0.01$, and different values of $\beta \in (0,1]$; the three pictures have $\beta = 0.1, 0.5, 1$ clockwise from the top left. Theorem \ref{dirthm} says that in the two $\beta <1$ cases, the random walk $X_n$ is transient with a limiting direction. In the $\beta=1$ case, we know from Theorem \ref{ythm1} that $X_n - G_n$ is transient ($\rho_0 = 0$ here), but transience of $X_n$ itself is an open problem.} \label{fig3} \end{figure} \paragraph{Example 2.} Here is another example satisfying (A1) and (A2), this time with jumps supported on a unit sphere rather than being restricted to a finite set of possibilities. Let $\beta >0$ and $\rho \in {\mathbb R}$. 
Given ${\mathcal F}_n$ and $X_n - G_n = {\mathbf x}$, the jump $\Delta_n$ is obtained as follows. \begin{itemize} \item[(i)] Choose ${\bf U}_n$ uniformly distributed on the unit sphere ${\mathbb{S}}_d$. \item[(ii)] Take $\Delta_n = {\bf U}_n + \rho \| {\mathbf x} \|^{-\beta} {\mathbf 1}_{\{ \rho \| {\mathbf x} \|^{-\beta} < 1/2 \}} \hat {\mathbf x} $. \end{itemize} So the jumps of the walk are uniform on a sphere, but the centre of the sphere is (for $\| {\mathbf x} \|$ large enough) shifted slightly in the direction $\pm \hat {\mathbf x}$, depending on the sign of $\rho$. Conditions (A1) and (A2) (again with $\sigma^2=1/d$) are readily verified for this example. \paragraph{Example 3.} We sketch one more example with $d \geq 2$, $\beta >0$ and $\rho>0$, which is reminiscent of the walk avoiding its convex hull. Take the jump $\Delta_n$ uniform on ${\mathbb{S}}_d$ minus the circular cap of relative surface $\rho \| X_n - G_n \|^{-\beta}$ pointing towards the barycentre, i.e., $$ \Delta_n ~\text{is uniform on}~ \left\{{\mathbf y} \in {\mathbb{S}}_d : {\mathbf y} \cdot \hat {\mathbf x} > -1 + C(\rho) \|{\mathbf x}\|^{-\beta/(d-1)} \right\}, $$ with ${\mathbf x}=X_n-G_n$, where $C(\rho)$ is a constant depending on $\rho$ and $d$. Here we assume $\|{\mathbf x}\|$ is sufficiently large; if not we can take $\Delta_n$ uniform on ${\mathbb{S}}_d$. \subsection{Open problems and paper outline} Our results give a detailed recurrence classification (Theorem \ref{ythm1}) for the process $X_n - G_n$. Of considerable interest is the asymptotic behaviour of $X_n$ itself, for which we have a complete picture only in the case $\beta \in [0,1)$, $\rho>0$ (Theorem \ref{dirthm}). We conjecture: \begin{itemize} \item $\|X_n \| \to \infty$ a.s.\ if and only if $\| X_n - G_n \| \to \infty$ a.s.. \end{itemize} Theorems \ref{dirthm} and \ref{1dthm} verify the `if' part of the conjecture when (i) $\beta \in [0,1)$ and $\rho>0$, and (ii) $d=1$. Another open problem involves the angular behaviour of our model when $\beta \geq 1$. By analogy with \cite{mmw} we suspect that there is {\em no} limiting direction in that case (in contrast to Theorem \ref{dirthm}). The remainder of the paper is arranged as follows. In Section \ref{sec:motiv} we describe in more detail how our model is related to Lamperti's problem (Section \ref{sec:lampsrw}), and to the centre-of-mass of SRW (Section \ref{sec:com}), and we prove Theorem \ref{srwthm}. We also outline the motivation of our random walk as a model for a random polymer in solution (Section \ref{poly}). Section \ref{prelim} is devoted to preliminary computations for the processes $X_n$, $G_n$, and (especially) $X_n - G_n$. In Section \ref{1dproc} we take a somewhat more general view, and study the asymptotic properties of one-dimensional, not necessarily Markov, processes satisfying a precise version of (\ref{drift0b}). The recurrence classification is a time-varying, more complicated analogue of Lamperti's results \cite{lamp1,lamp3}, and we use some martingale ideas related to those in \cite{FMM,superlamp}. In the case $\beta \in [0,1)$, $\rho>0$ we prove a law of large numbers that is a cornerstone of our subsequent analysis for the random walk $X_n$. This law of large numbers is an analogue of that in \cite{superlamp} for the supercritical Lamperti problem. 
While the results of \cite{superlamp} supply an upper bound crucial to our approach, the law of large numbers in the present setting requires a new idea, and our key tool here is a stochastic approximation lemma (Lemma \ref{lower-lm}), which may be of independent interest. Section \ref{proofs} is devoted to the proofs of our main theorems. The basic method is an application of the results of Section \ref{1dproc} to the process $\| X_n - G_n \|$, armed with our computations in Section \ref{prelim}. We carry out this approach to prove Theorems \ref{ythm1} and \ref{1dthm} in Section \ref{recprf}. A crucial ingredient is the proof, in Section \ref{direction}, that $X_n - G_n$ has a limiting direction. This enables us to prove Theorem \ref{dirthm}. Finally, in Section \ref{boundprf}, we prove Theorem \ref{extent}, building on some general results from \cite{mvw}. \section{Connections and further motivation} \label{sec:motiv} \subsection{Lamperti's problem and simple random walk norms} \label{sec:lampsrw} Our problem is related to a time-dependent version of the so-called Lamperti problem. We briefly review the latter here. Let $Z = (Z_n)_{n \in {\mathbb N}}$ be a discrete-time stochastic process adapted to a filtration $({\mathcal F}_n)_{n \in {\mathbb N}}$ and taking values in an unbounded subset ${\mathcal S}$ of $[0,\infty)$. The set ${\mathcal S}$ may be countable (as in the SRW example which follows in this section) or uncountable (as in the application to stochastic billiards described in \cite{mvw}). Lamperti \cite{lamp1,lamp2,lamp3} investigated the extent to which the asymptotic behaviour of $Z$ is determined by the increment moments $\mathbb{E}_n [ (Z_{n+1} - Z_n)^k ]$ when viewed as (random) functions of $Z_n$. Formally, suppose that for some $k$, $\mathbb{E}_n [ (Z_{n+1} - Z_n)^k ]$ is well-defined for all $n$. Then by standard properties of conditional expectations (see e.g.~\cite[Section 9.1]{chung}), there exist a Borel-measurable function $\phi_k(n ; \, \cdot \,)$ and an ${\mathcal F}_n$-measurable random variable $\psi_k (n)$ (orthogonal to $Z_n$) such that, a.s., \[ \mathbb{E}_n [ (Z_{n+1} - Z_n)^k ] = \mathbb{E} [ (Z_{n+1} - Z_n)^k \mid Z_n ] + \psi_k (n) = \phi_k (n ; Z_n ) + \psi_k (n) .\] Define \begin{equation} \label{mudef} \mu_k (n ; x ) := \phi_k (n ; x) + \psi_k (n) .\end{equation} The $\mu_k (n ; x )$ are, in general, ${\mathcal F}_n$-measurable random variables; if $Z$ is a Markov process then $\mu_k (n ; x) = \mathbb{E} [ (Z_{n+1} - Z_n)^k \mid Z_n=x ]$ is a deterministic function of $x$ and $n$, and if in addition $Z$ is time-homogeneous, $\mu_k (n ; x) = \mu_k (x)$ is a function of $x$ only. For many applications, including those described here, $Z$ will not be time-homogeneous or Markovian, but nevertheless the $\mu_k (n ;x)$ are well-behaved asymptotically. In this section, $X = (X_n)_{n \in {\mathbb N}}$ will be the symmetric SRW on ${\mathbb Z}^d$ ($d \in {\mathbb N}$). That is, $X$ has i.i.d.~increments $\Delta_n := X_{n+1} -X_n$ such that if $\{ {\mathbf e}_1,\ldots,{\mathbf e}_d\}$ is the standard orthonormal basis on ${\mathbb R}^d$, for $i \in \{1,\ldots,d\}$, $\mathbb{P} [ \Delta_n = {\mathbf e}_i ] = \mathbb{P} [ \Delta_n = - {\mathbf e}_i ] = (2d)^{-1}$. Let ${\mathcal F}_n = \sigma (X_1,\ldots,X_n)$ and consider the $({\mathcal F}_n)_{n \in {\mathbb N}}$-adapted process $Z=(Z_n)_{n \in {\mathbb N}}$ on $[0,\infty)$ defined by $Z_n = \| X_n \|$. 
Here $Z$ takes values in the countable set ${\mathcal S} = \{ \| {\mathbf x} \| : {\mathbf x} \in {\mathbb Z}^d \}$. Note that $Z$ is not in general a Markov process: when $d=2$, given one of the two ${\mathcal F}_n$-events $\{X_n = (5,0)\}$ and $\{X_n = (3,4)\}$ we have $Z_n = 5$ in each case but $Z_{n+1}$ has two different distributions; for instance $Z_{n+1}$ can take the value $6$ (with probability $1/4$) in the first case, but this is impossible in the second case. We recall some simple facts about $\Delta_n = X_{n+1} - X_n$ in the case of SRW. We have \begin{equation} \label{bound1} \mathbb{P}_n [ \| \Delta_n \| \leq 1 ] =1 , { \ \textrm{a.s.}}, ~~{\rm and} ~~ \mathbb{E}_n [ \Delta_n ] ={\mathbf 0}, { \ \textrm{a.s.}}. \end{equation} Writing $\Delta_n = (\Delta_n^{(1)}, \ldots, \Delta_n^{(d)})$ in Cartesian components, we have that \begin{equation} \label{mom2} \mathbb{E}_n [ \Delta_n^{(i)} \Delta_n^{(j)} ] = \frac{1}{d} {\mathbf 1} \{ i = j\} , { \ \textrm{a.s.}}. \end{equation} Elementary calculations based on Taylor's expansion and (\ref{bound1}) and (\ref{mom2}) show that \begin{align*} \mathbb{E}_n [ Z_{n+1} -Z_n ] = \frac{1}{2d} \sum_{i=1}^d \left( \| X_n + {\mathbf e}_i \| + \| X_n - {\mathbf e}_i \| - 2\| X_n \| \right) \\ = \frac{1}{2 \| X_n \|} \left( 1 - \frac{1}{d} \right) + O ( \| X_n \|^{-2} ) ; \end{align*} in the above notation, $\mu_1 ( n ; x) = \frac{1}{2x} (1 - \frac{1}{d} ) + O(x^{-2})$ as $x \to \infty$. As before, this asymptotic expression is the compact notation for \[ \sup_{n \in {\mathbb N}} \mathop{{\rm ess~sup}} \mu_1 (n ; x) = \frac{1}{2x} \left(1 - \frac{1}{d} \right) + O(x^{-2}), \] together with the same expression with `$\inf$' instead of each `$\sup$'. Similarly \begin{align*} \mathbb{E}_n [ Z^2_{n+1} -Z^2_n ] = \frac{1}{2d} \sum_{i=1}^d \left( \| X_n + {\mathbf e}_i \|^2 + \| X_n - {\mathbf e}_i \|^2 - 2\| X_n \|^2 \right) = 1. \end{align*} Then since $(Z_{n+1}-Z_n)^2 = Z_{n+1}^2 -Z_n^2 - 2 Z_n (Z_{n+1}-Z_n)$ we obtain \[ \mathbb{E}_n [ (Z_{n+1} -Z_n)^2 ] = \frac{1}{d} + O (\| X_n \|^{-1} ) .\] In particular, (\ref{drift0a}) holds (interpreted correctly) with $\beta =1$ and $\rho' = (1 - (1/d))/2$. \subsection{Centre of mass for simple random walk} \label{sec:com} We saw in Section \ref{sec:lampsrw} how a mean drift described loosely by (\ref{drift0a}) arises from the process of norms of symmetric SRW. In this section we describe how a process with mean drift of the form (\ref{drift0b}) arises when considering the distance of a symmetric SRW to its centre of mass. The motion of the centre of mass of a random walk is of interest from a physical point of view, when, for example, the walk represents a growing polymer molecule: see e.g.\ \cite{as} and \cite{rg}, especially Chapter 6. The centre-of-mass process (defined by (\ref{com})) corresponding to a symmetric SRW on ${\mathbb Z}^d$ was studied by Grill \cite{grill}, who showed that the process $(G_n)_{n \in {\mathbb N}}$ returns to a fixed ball containing the origin with probability $1$ if and only if $d=1$. In particular the process is transient for $d \geq 2$ and Grill gives a sharp integral test for the rate of escape of the lower envelope. A consequence of his result is the following. \begin{theo} \label{grill} \cite{grill} Let $(X_n)_{n \in {\mathbb N}}$ be symmetric SRW on ${\mathbb Z}^d$ and $(G_n)_{n \in {\mathbb N}}$ the corresponding centre-of-mass process defined by (\ref{com}). Let $d \in \{ 2,3,4,\ldots\}$. 
Then for any $\varepsilon>0$, \[ \| G_n \| \geq ( \log n) ^{-\frac{1}{d-1} - \varepsilon} n^{1/2}, { \ \textrm{a.s.}}, \] for all but finitely many $n \in {\mathbb N}$. On the other hand, for infinitely many $n \in {\mathbb N}$, \[ \| G_n \| \leq ( \log n) ^{-\frac{1}{d-1}} n^{1/2}, { \ \textrm{a.s.}}. \] \end{theo} A crude upper bound for $\| G_n\|$, obtained by applying the triangle inequality $\| G_n \| \leq \frac{1}{n} \sum_{i=1}^n \| X_i \|$ and the law of the iterated logarithm for symmetric SRW on ${\mathbb Z}^d$ ($d \in {\mathbb N}$) to each $\| X_i \|$ (see e.g.~Theorem 19.1 of \cite{rev}), is that for any $\varepsilon>0$, a.s., \[ \| G_n \| \leq \frac{2}{3d^{1/2}} (1+\varepsilon) (2 n \log \log n)^{1/2} , \] for all but finitely many $n\in {\mathbb N}$; it seems likely that this is an overestimate. In $d=1$, in the analogous continuous setting, a result of Watanabe \cite[Corollary 1, p.~237]{wat} says that, for $B_t$ standard Brownian motion, for any $\varepsilon>0$, for all $t$ large enough, \[ \frac{1}{t} \int_0^t B_s {\mathrm d} s \leq 3^{-1/2} (1+\varepsilon) (2 t \log \log t)^{1/2}, { \ \textrm{a.s.}}, \] and this bound is sharp in that the inequality fails infinitely often, a.s., when $\varepsilon=0$. Standard strong approximation results show that this result can be transferred to $\| G_t \|$ in $d=1$. The next result shows how the drift equation (\ref{drift0b}) arises in this context. Lemma \ref{srwlem} is a consequence of the more general Lemma \ref{lem1} below. \begin{lm} \label{srwlem} Let $d \in {\mathbb N}$. Suppose that $(X_n)_{n \in {\mathbb N}}$ is a symmetric SRW on ${\mathbb Z}^d$, and $(G_n)_{n \in {\mathbb N}}$ is its centre-of-mass process as defined by (\ref{com}). Let ${\mathcal F}_n := \sigma (X_1, \ldots, X_n)$ and $Y_n := X_n - G_n$. Then, a.s., \begin{align*} \mathbb{E}_n [\| Y_{n+1} \| - \| Y_n \| ] & = \left(1- \frac{1}{d} \right) \frac{1}{2\| Y_n \|} - \frac{ \| Y_n \|}{n+1} + O( \| Y_n \|^{-2} ) ; \\ \mathbb{E}_n [ (\| Y_{n+1} \| - \| Y_n \| )^2 ] & = \frac{1}{d} + O( \| Y_n \| n^{-1} ) + O( \| Y_n \|^{-1} ). \end{align*} \end{lm} Neglecting higher-order terms, the study of the process $\| X_n - G_n\|$ for SRW leads us to analysis of a process with drift given by (\ref{drift0b}). Lemma \ref{srwlem} can be generalized to zero-drift Markov chains $X=(X_n)_{n \in {\mathbb N}}$ satisfying appropriate versions of (\ref{bound1}) and (\ref{mom2}). We prove our results on SRW by applying our general results given in Sections \ref{prelim} and \ref{1dproc}. \begin{proof}[Proof of Lemma \ref{srwlem}.] This follows from Lemma \ref{lem1} stated and proved in Section \ref{prelim}. Taking expectations in (\ref{lem1eq}) and using (\ref{bound1}) and (\ref{mom2}) we obtain the first equation in the statement of the lemma, using the fact that $\| Y_n \| = o(n)$ a.s.\ to simplify the error terms. Similarly, squaring both sides of (\ref{lem1eq}) and taking expectations we obtain the second equation in the lemma. \end{proof} \begin{proof}[Proof of Theorem \ref{srwthm}.] Let $Z_n = \| Y_n \| = \| X_n - G_n\|$ and ${\mathcal F}_n = \sigma ( X_1, \ldots, X_n)$. Then by Lemma \ref{srwlem}, a.s., \begin{align*} \mathbb{E}_n [ Z_{n+1} - Z_n ] & = \left(1 - \frac{1}{d} \right) \frac{1}{2Z_n} - \frac{Z_n}{n} + O(n^{-2} Z_n) + O( Z_n^{-2} ) ; \\ \mathbb{E}_n [ (Z_{n+1} - Z_n)^2 ] & = \frac{1}{d} + O( Z_n n^{-1} ) + O(Z_n^{-1} ) .\end{align*} Thus (\ref{z2b}) and (\ref{z3b}) hold with $\rho' = (d-1)/(2d)$ and $\sigma^2 = 1/d$. 
It follows from Theorems \ref{thm1} and \ref{thm2} (stated and proved in Section \ref{1dproc}) that $Z_n$ is transient if and only if $2 \rho' > \sigma^2$, or equivalently $1 - (1/d) > (1/d)$, that is, $d>2$. \end{proof} \subsection{The process viewed as a new random polymer model} \label{poly} In this section we briefly summarize motivation of self-interacting random walks arising from polymer physics, and give an interpretation of our model described by (A1) in that context. Much more background is provided by, for instance, \cite[Section 2.2]{ms}, \cite[Chapter 7]{rg}, \cite[Chapter 7]{hughes}, and, for the underlying physics, \cite{rc}. Recent accounts of some of the relevant probability theory are given in \cite{giac,holl}. The sites visited by the walk $X_n$ represent the monomers that make up a long polymer molecule in solution in ${\mathbb R}^d$ (of course, physically $d \in \{2,3\}$ are most interesting). The line segments between successive sites $X_n$ and $X_{n+1}$ represent the chemical bonds holding the molecule together; in this regard our condition of uniformly bounded increments in (A1) is natural. We assume that the polymer solution is dilute, so that interaction between different polymer molecules can be neglected. In real polymers, a phase transition is observed between polymers in {\em poor solvents} (or at low temperature) and {\em good solvents} (or high temperature) \cite[Chapter 7]{rc}. In poor solvents, a polymer molecule collapses as the attraction between monomers overcomes the excluded volume effect caused by the fact that no two monomers can occupy the same physical space. In good solvents, a polymer molecule exists in an extended phase where the excluded volume effect dominates. It is the extended phase that is believed to lie in the same universality class as SAW. Heuristic arguments dating back to P.J.~Flory (see e.g.~\cite[Section 2.2]{ms}) suggest that in this phase $\| X_n \|$ should exist on `macroscopic scale' of order $n^\nu$ for an exponent $\nu = \nu(d) \in [1/2,1]$, with $\nu <1$ for $d>1$ and $\nu > 1/2$ for $d \leq 3$. So for $d \in \{2,3\}$, the polymer is expected to be super-diffusive but sub-ballistic. According to Theorem \ref{extent}(i), our model defined by (A1) has macroscopic scale exponent $\max \{ 1/2, 1/(1+\beta) \}$ when $\rho >0$; for $\beta < 1$ this regime therefore corresponds to polymers in the extended phase, where the excluded volume effect, summarized by repulsion from the centre of mass, dominates. For instance, since $\nu(2) =3/4$, in $d=2$ the `physical' choice of our model has $\beta =1/3$ and $\rho >0$; it is not clear to what extent that case of our model replicates the behaviour of SAW. On the other hand, the collapsed phase corresponds to taking $\rho <0$ in (A1), where the polymer's self-attraction, summarized through its centre of mass, is dominant. See Theorem \ref{extent}(iii) and (iv). Between the poor and good solvent phases, there is a transitional phase at the so-called $\theta$-{\em point} at which the temperature achieves a specific (critical) value $T = \theta$. Here the excluded volume effect and self-attraction are in balance, and the molecule behaves rather like a simple random walk path. Compare Theorem \ref{extent}(ii). \section{Properties of the self-interacting random walk} \label{prelim} Under the assumption (A1), we are going to study the process $X_n - G_n$ and in particular determine whether it is transient or recurrent. It suffices to study $\| X_n - G_n \|$. 
In this section we analyse the basic properties of the latter process; subsequently we will apply our general results of Section \ref{1dproc} on processes that satisfy, roughly speaking, (\ref{drift0b}). We introduce some convenient notation that we use throughout. For $n \in {\mathbb N}$ set \[ Y_n := X_n - G_n, ~~~\Delta_n := X_{n+1} - X_n .\] We start with some elementary relations amongst $X_n$, $G_n$, and $Y_n$ following from (\ref{com}). \begin{lm} Suppose that $(X_n)_{n \in {\mathbb N}}$ is a stochastic process on ${\mathbb R}^d$, and $(G_n)_{n \in {\mathbb N}}$ is its centre-of-mass process as defined by (\ref{com}). For $n \in {\mathbb N}$ we have \begin{align} \label{gjump} G_{n+1} & = \frac{n}{n+1} G_n + \frac{1}{n+1} X_{n+1} ; ~\textrm{and} \\ \label{yeq} Y_{n+1} & = \frac{n}{n+1} ( Y_n + \Delta_n ) .\end{align} Moreover $G_1 = X_1$ and for $n \in \{2,3,\ldots\}$, \begin{equation} \label{gandy} G_n = X_1 + \sum_{j=2}^n \frac{1}{j-1} Y_j .\end{equation} \end{lm} \begin{proof} Equation (\ref{gjump}) is immediate from (\ref{com}). Then from (\ref{gjump}) we have that for $n \in {\mathbb N}$, \begin{align} \label{jj0} Y_{n+1} = X_{n+1} - G_{n+1} = \frac{n}{n+1} \left( X_{n+1} - G_n \right) , \end{align} from which (\ref{yeq}) follows since $X_{n+1} - G_n = Y_n + \Delta_n$. For (\ref{gandy}), we have from (\ref{gjump}) again that for $n \in {\mathbb N}$, \[ G_{n+1} - G_n = \frac{1}{n+1} \left( X_{n+1} - G_n \right) = \frac{1}{n} Y_{n+1} ,\] where the final equality is obtained from (\ref{jj0}). Thus for $n \geq 2$, \[ G_n - G_1 = \sum_{j=1}^{n-1} (G_{j+1} - G_j ) = \sum_{j=1}^{n-1} \frac{1}{j} Y_{j+1} ,\] from which (\ref{gandy}) follows. \end{proof} The main result of this section concerns the increments of the process $\| Y_n \|$ under assumption (A1) and also possibly (A2). Part (i) of Proposition \ref{propinc} gives basic regularity properties, including boundedness of jumps. Part (ii) gives an expression for the mean drift when $\beta \in [0,1)$. Part (iii) deals with the case $\beta \geq 1$ when (A2) also holds. \begin{proposition} \label{propinc} Suppose that (A1) holds. \begin{itemize} \item[(i)] There exists $C \in (0,\infty)$ for which, for any $n \in {\mathbb N}$, \begin{align} \label{incbound} \mathbb{P}_n [ | \| Y_{n+1} \| - \| Y_n \| | > C ] =0, { \ \textrm{a.s.}} . \end{align} In addition \begin{align} \label{lem3eq} \limsup_{n \to \infty} \| Y_n \| = \infty, { \ \textrm{a.s.}}. \end{align} \item[(ii)] If $\beta \in [0,1)$ then, a.s., \begin{align} \label{lem2eq} \mathbb{E}_n [ \| Y_{n+1} \| - \| Y_n \| ] = \rho \| Y_n \|^{-\beta} - \frac{\| Y_n \| }{n+1} + O ( \| Y_n \|^{-\beta} (\log \| Y_n \|)^{-2} ) . \end{align} \item[(iii)] Suppose also that (A2) holds and $\beta \geq 1$. Then, a.s., \begin{align} \label{lem2eq2} \mathbb{E}_n [ \| Y_{n+1} \| - \| Y_n \| ] & = \left( \rho {\mathbf 1}_{\{ \beta = 1 \}} + \frac{1}{2} (d-1) \sigma^2 \right) \| Y_n \|^{-1} - \frac{\| Y_n \| }{n+1} \nonumber\\ & ~~ + o( \| Y_n \|^{-1} ( \log \| Y_n \|)^{-1} ) \\ \label{mom2ex} \mathbb{E}_n [ (\| Y_{n+1} \| - \| Y_n \| )^2 ] & = \sigma^2 + O( n^{-1} \| Y_n \| ) + o ( ( \log \| Y_n \|)^{-1} ).\end{align} \end{itemize} \end{proposition} We prove Proposition \ref{propinc} via a series of lemmas. The first gives information on the increments of the process given by the distance of a general stochastic process to its centre-of-mass. 
In particular, it shows that $\|Y_n\|$ inherits boundedness of jumps from $X_n$, and gives an expression for the increments of $\|Y_n\|$ in terms of $\Delta_n$, the increments of $X_n$. \begin{lm} \label{lem1} Suppose that $(X_n)_{n \in {\mathbb N}}$ is a stochastic process on ${\mathbb R}^d$, and $(G_n)_{n \in {\mathbb N}}$ is its centre-of-mass process as defined by (\ref{com}). Suppose that $X_1 \in {\mathbb R}^d$ is fixed and that (\ref{bound}) holds for some $B \in (0,\infty)$. There exists $C \in (0,\infty)$ for which, for all $n \in {\mathbb N}$, (\ref{incbound}) holds. Moreover, a.s., \begin{align} \label{lem1eq} \| Y_{n+1} \| - \| Y_n \| = \frac{n }{n+1} \left( \frac{ Y_n \cdot \Delta_n }{\| Y_n \| } + \frac{\| \Delta_n \|^2}{2 \| Y_n \|} - \frac{ (Y_n \cdot \Delta_n)^2}{2 \| Y_n \|^3} \right) + O ( \|Y_n \|^{-2} ) - \frac{\|Y_n \| }{n+1} . \end{align} \end{lm} \begin{proof} We work with the process $(\|Y_n \|)_{n \in {\mathbb N}}$. From (\ref{bound}) and the triangle inequality, we have the simple bound $\| X_n \| \leq \| X_1 \| + B (n-1)$ a.s., for all $n \in {\mathbb N}$. Applying the triangle inequality in (\ref{com}) then yields the equally simple bound \[ \| G_n \| \leq \frac{1}{n} \sum_{i=1}^n ( \| X_1 \| + B (i-1) ) \leq \| X_1 \| + \frac{Bn}{2} .\] Combining these two inequalities together with the fact that $\| Y_n \| \leq \|X_n \| + \| G_n\|$, it follows that $\| Y_n \| \leq 2 \| X_1 \| + (3 Bn/2)$ a.s., for all $n \in {\mathbb N}$. Then from the triangle inequality and (\ref{yeq}) we have that \[ \left| \| Y_{n+1} \| - \| Y_n \| \right| \leq \| Y_{n+1} - Y_n \| \leq \frac{1}{n} \| Y_n \| + \| \Delta_n \| \leq \frac{5B}{2} + \frac{2 \| X_1 \|}{n} ,\] a.s., by (\ref{bound}), and this lattermost quantity is uniformly bounded. Thus we have (\ref{incbound}). For the final statement of the lemma, note that, from (\ref{yeq}), \begin{align} \label{change} \| Y_{n+1} \| = \frac{n}{n+1} \left( \| Y_n \|^2 + \| \Delta_n \|^2 + 2 Y_n \cdot \Delta_n \right)^{1/2} .\end{align} Now writing ${\mathbf y} = Y_n$ for convenience, we obtain from (\ref{change}) that \begin{align} \label{change2} \| Y_{n+1} \| - \| Y_n \| = \| {\mathbf y} \| \left[ \frac{n}{n+1} \left( 1 + \frac{\| \Delta_n \|^2 + 2 {\mathbf y} \cdot \Delta_n }{\| {\mathbf y} \|^2 } \right)^{1/2} - 1 \right] . \end{align} Using Taylor's formula for $(1+x)^{1/2}$ with Lagrange remainder in (\ref{change2}) implies that \begin{align*} \| Y_{n+1} \| - \| Y_n \| = \frac{n \| {\mathbf y} \| }{n+1} \left( \frac{\| \Delta_n \|^2 + 2 {\mathbf y} \cdot \Delta_n }{2 \| {\mathbf y} \|^2 } - \frac{ ( \| \Delta_n \|^2 + 2 {\mathbf y} \cdot \Delta_n )^2 }{8 \| {\mathbf y} \|^4 } + O ( \| {\mathbf y} \|^{-3} ) \right) - \frac{\| {\mathbf y} \| }{n+1} , \end{align*} using (\ref{bound}) for the error bound. Simplifying and again using (\ref{bound}), this becomes \begin{align*} \| Y_{n+1} \| - \| Y_n \| = \frac{n \| {\mathbf y} \| }{n+1} \left( \frac{\| \Delta_n \|^2}{2 \| {\mathbf y} \|^2} + \frac{ {\mathbf y} \cdot \Delta_n }{\| {\mathbf y} \|^2 } - \frac{ ( {\mathbf y} \cdot \Delta_n )^2}{2 \| {\mathbf y} \|^4 } + O ( \| {\mathbf y} \|^{-3} ) \right) - \frac{\| {\mathbf y} \| }{n+1} . \end{align*} Then equation (\ref{lem1eq}) follows. \end{proof} Now we turn to the model defined by (A1), starting with the drift of $\| Y_n \|$. For $a, b \in {\mathbb R}$, we use the standard notation $a \wedge b := \min \{ a,b\}$. \begin{lm} \label{lem2} Suppose that (A1) holds. 
Then the drift of $\| Y_n \|$ satisfies, a.s., \begin{align} \label{lem2eq20} \mathbb{E}_n [ \| Y_{n+1} \| - \| Y_n \| ] & = \frac{n }{n+1} \left( \rho \| Y_n \|^{-\beta} + \Theta_n \| Y_n \|^{-1} \right) - \frac{\| Y_n \| }{n+1} \nonumber\\ & ~~~ + O( \| Y_n \|^{-(1 \wedge \beta)} (\log \| Y_n \|)^{-2} ) ,\end{align} where $\Theta_n$ is the ${\mathcal F}_n$-measurable random variable given by \begin{equation} \label{rhop} \Theta_n = \frac{1}{2} \| Y_n \|^{-2} \mathbb{E}_n [ \| Y_n \|^2 \| \Delta_n \|^2 - (Y_n \cdot \Delta_n)^2 ] .\end{equation} Moreover, there exists $C < \infty$ such that $\Theta_n \in [0,C]$ a.s., and if $\beta \in [0,1)$, (\ref{lem2eq}) holds. \end{lm} \begin{proof} Taking expectations in (\ref{lem1eq}), using the fact that \[ \| Y_n \|^{-1} \mathbb{E}_n [ Y_n \cdot \Delta_n ] = \rho \| Y_n \|^{-\beta} + O ( \| Y_n\|^{-\beta} (\log \| Y_n \|)^{-2} ) , \] by (\ref{drift}), we obtain \begin{align*} \mathbb{E}_n [ \| Y_{n+1} \| - \| Y_n \| ] = & \frac{n}{n+1} \left( \rho \| Y_n \|^{-\beta} + \frac{1}{2\| Y_n \|^3} \mathbb{E}_n [ \| Y_n \|^2 \| \Delta_n \|^2 - (Y_n \cdot \Delta_n)^2 ] \right) \\ & + O( \| Y_n \|^{-(\beta \wedge 1)} ( \log \| Y_n \| )^{-2} ) - \frac{ \| Y_n \|}{n+1} .\end{align*} By the fact that $|Y_n \cdot \Delta_n| \leq \| Y_n \| \| \Delta_n\|$ and the jumps bound (\ref{bound}) we have that \[ 0 \leq \mathbb{E}_n [ \| Y_n \|^2 \| \Delta_n \|^2 - (Y_n \cdot \Delta_n)^2 ] \leq C \| Y_n \|^2, { \ \textrm{a.s.}}, \] for some $C \in (0,\infty)$. Thus defining $\Theta_n$ by (\ref{rhop}) we obtain (\ref{lem2eq20}) and the fact that $\Theta_n \in [0,C]$ a.s.. Then (\ref{lem2eq}) follows when $\beta \in [0,1)$. \end{proof} The next result shows how the ellipticity condition (\ref{ue}) leads to (\ref{lem3eq}). \begin{lm} \label{lem3} Suppose that (A1) holds. Then $\limsup_{n \to \infty} \| Y_n \| = \infty$ a.s.. \end{lm} \begin{proof} We have from (\ref{change}) that \begin{equation} \label{0422a} \| Y_{n+1} \|^2 - \| Y_n \|^2 = \left( \frac{n}{n+1} \right)^2 \left( \| \Delta_n \|^2 + 2 Y_n \cdot \Delta_n \right) - \frac{2n+1}{(n+1)^2} \| Y_n \|^2 .\end{equation} Fix $p \in {\mathbb N}$, and define $F_{n,1} := \cap_{i=np}^{np+(p-1)} \{ \Delta_i \cdot \hat Y_i \geq \varepsilon_0 \}$ and $F_{n,2} := \{ \| Y_{np} \| \leq \frac{\varepsilon_0 np}{16} \}$. Fix also $n_p \in {\mathbb N}$ with $\varepsilon_0 n_p \geq 16 C$, where $C$ is as in (\ref{incbound}), and consider $n \geq n_p$ only. By (\ref{ue}) we have that $\mathbb{P}_n [ F_{n,1}] \geq \varepsilon_0^p$ a.s., and hence L\'evy's extension of the second Borel--Cantelli lemma (see e.g.\ \cite[Theorem 5.3.2]{durrett}) implies that $\mathbb{P} [F_{n,1}\; {\rm i.o.}]=1$. Now, observe from (\ref{incbound}) that $\|Y_{i+1}\| \leq \| Y_i \| + C$, a.s., which implies on $F_{n,2}$ that $\|Y_i\| \leq \frac{1}{8}\varepsilon_0 np$ for all $i \in \{ np, \ldots np+(p-1) \}$. Then, on $F_{n,1} \cap F_{n,2}$, we obtain from (\ref{0422a}) that, a.s., \begin{align*} \| Y_{i+1} \|^2 - \| Y_i \|^2 \geq -\frac{2}{i} \| Y_i \|^2 + \frac{1}{4} \left( \varepsilon_0^2 + 2 \varepsilon_0 \| Y_i \| \right) \geq \frac{1}{4} \varepsilon_0 \| Y_i \| + \frac{1}{4} \varepsilon_0^2 \geq \frac{1}{4} \varepsilon_0^2, \end{align*} for any $i$ with $np \leq i \leq np+(p-1)$ and any $n \geq n_p$. 
Hence on $F_{n,1} \cap F_{n,2}$, a.s., \begin{align*} \| Y_{(n+1)p} \|^2 = \| Y_{np} \|^2 +\sum_{i=np}^{np+(p-1)}(\| Y_{i+1} \|^2 - \| Y_i \|^2) \geq p\varepsilon_0^2/4 .\end{align*} Thus, up to sets of probability zero, $\{ ( F_{n,1} \cap F_{n,2} ) \; {\rm i.o.}\} \subseteq \{ \limsup_{n \to \infty} \| Y_n \| \geq p^{1/2}\varepsilon_0/2\}$. Moreover, by definition of $F_{n,2}$, $\{ F_{n,2}^{\rm c} \; {\rm i.o.} \} \subseteq \{ \limsup_{n \to \infty} \| Y_n \| = \infty \}$. Since $\{F_{n,1} \; {\rm i.o.} \} \subseteq \{ ( F_{n,1} \cap F_{n,2} ) \; {\rm i.o.}\} \cup \{ F_{n,2}^{\rm c} \; {\rm i.o.} \}$, it follows that $\{ F_{n,1} \; {\rm i.o.} \} \subseteq \{ \limsup_{n \to \infty} \| Y_n \| \geq p^{1/2}\varepsilon_0/2 \}$. Since $p$ was arbitrary, the result follows from the fact that $\mathbb{P} [F_{n,1}\; {\rm i.o.}]=1$, as shown in the first part of this proof. \end{proof} When (A1) holds with $\beta \geq 1$, we need more regularity to obtain a well-behaved version of (\ref{lem2eq20}). Thus we impose (A2) and use the following result, which in addition gives an expression for the second moment of the increment $\|Y_{n+1} \| - \| Y_n \|$. \begin{lm} \label{lem4} Suppose that (A1) and (A2) hold. Then $\Theta_n$ as defined by (\ref{rhop}) satisfies \begin{align} \label{rhopex} \Theta_n = \frac{1}{2} (d-1) \sigma^2 + o( ( \log \| Y_n \|)^{-1} ), { \ \textrm{a.s.}}.\end{align} Moreover, (\ref{mom2ex}) holds. \end{lm} \begin{proof} First we prove (\ref{rhopex}). We have that \[ \mathbb{E}_n [ \| \Delta_n \|^2 ] = \sum_{i=1}^d \mathbb{E}_n [ (\Delta_n^{(i)})^2 ] = d \sigma^2 + o( ( \log \| Y_n \|)^{-1} ) ,\] by (\ref{cov1}). Also if $Y_n = (y_1,\ldots,y_d) \in {\mathbb R}^d$, with the convention that an empty sum is $0$, \begin{align} \label{lem4a} \mathbb{E}_n [ (Y_n \cdot \Delta_n)^2 ] & = \sum_{i=1}^d y_i^2 \mathbb{E}_n [ (\Delta_n^{(i)})^2 ] + 2 \sum_{i=2}^d \sum_{j=1}^{i-1} y_i y_j \mathbb{E}_n [ \Delta_n^{(i)} \Delta_n^{(j)} ] \nonumber\\ & = \| Y_n \|^2 \left[ \sigma^2 + o ( (\log \| Y_n \|)^{-1} ) \right] ,\end{align} by (\ref{cov1}) and (\ref{cov2}). Then (\ref{rhopex}) follows from (\ref{rhop}). Next we prove (\ref{mom2ex}). Squaring both sides of (\ref{lem1eq}) and taking expectations we obtain \[ \mathbb{E}_n [ (\| Y_{n+1} \| - \| Y_n \| )^2 ] = \| Y_n \|^{-2} \mathbb{E}_n [ (Y_n \cdot \Delta_n)^2 ] + O( n^{-1} \| Y_n \| ) + O( \| Y_n \|^{-1} ).\] Now using (\ref{lem4a}) yields (\ref{mom2ex}). \end{proof} \begin{proof}[Proof of Proposition \ref{propinc}.] We collect results from Lemmas \ref{lem1}, \ref{lem2}, \ref{lem3}, and \ref{lem4}. \end{proof} \section{Recurrence classification for processes satisfying equation (\ref{drift0b})} \label{1dproc} \subsection{Introduction} In this section we state general results for processes with drift of the form (\ref{drift0b}). We will later apply these results to the process $\| X_n - G_n\|$ satisfying (A1) (and possibly also (A2)), but for this section we work in some generality. Let $(Z_n)_{n \in {\mathbb N}}$ be a stochastic process taking values in an unbounded subset ${\mathcal S}$ of $[0,\infty)$, adapted to a filtration $({\mathcal F}_n)_{n \in {\mathbb N}}$. Recall the definition of $\mu_k (n ; x)$ from (\ref{mudef}), so that $\mathbb{E}_n [ (Z_{n+1} - Z_n)^k ] = \mu_k ( n ; Z_n )$ a.s..
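Before turning to the formal assumptions and results, we record a small illustrative sketch (ours, and no substitute for the proofs below) of a $[0,\infty)$-valued process whose mean drift has the form (\ref{drift0b}) with $\beta=1$: a $\pm 1$ step of variance $1$ plus the prescribed drift, truncated at the origin. The recurrence/transience dichotomy of Theorems \ref{thm1} and \ref{thm2} below concerns infinite time horizons, so such a simulation can only be indicative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def time_dependent_lamperti(n_steps=10**5, rho=0.75, z0=1.0):
    """Sketch (illustrative only) of a process on [0, infinity) with
    E[Z_{n+1} - Z_n | Z_n = x] approx rho/x - x/n, i.e. the beta = 1
    case of (drift0b), built from a +/-1 step plus the drift."""
    Z = z0
    path = [Z]
    for n in range(1, n_steps):
        drift = rho / max(Z, 1.0) - Z / n   # truncation near 0 is our choice
        step = rng.choice([-1.0, 1.0])      # mean zero, variance 1
        Z = max(Z + step + drift, 0.0)      # keep the process non-negative
        path.append(Z)
    return np.array(path)

# Here s^2 = 1, so the criterion 2*rho' <= s^2 of the theorems below
# suggests contrasting, say, rho = 0.25 (recurrent) with rho = 2.0 (transient).
for rho in (0.25, 2.0):
    path = time_dependent_lamperti(rho=rho)
    print(rho, path[-1], path[len(path) // 2:].min())
\end{verbatim}
The two terms in (\ref{drift0b}) balance at the scale $x \asymp n^{1/2}$, which is why the classification below is decided by the finer comparison of $2\rho'$ with $s^2$ rather than by the sign of $\rho'$ alone.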
As discussed in Section \ref{sec:lampsrw}, the case where $\mu_2 (n ; x)$ is $O(1)$ and $\mu_1 (n ; x) \to 0$ as $x \to \infty$ arises often in applications; the case where $\mu_1 (n ; x) \to 0$ uniformly in $n$ is sometimes known as Lamperti's problem after Lamperti's work \cite{lamp1,lamp2,lamp3}. Roughly speaking, the Lamperti problem has $\mu_1 (n ; x) \approx \rho x^{-\beta}$, $\beta >0$, $\rho \in {\mathbb R}$, ignoring higher-order terms. Results of Lamperti \cite{lamp1,lamp3} imply that the case $\beta =1$ is critical from the point of view of the recurrence classification. The supercritical case, when $\beta \in [0,1)$, $\rho>0$, has also been studied (see \cite{superlamp} and references therein). In this section we study the analogous problem for which $\mu_1 (n ; x) \approx \rho x^{-\beta} - (x/n)$. In keeping with the applications of the present paper, and to ease technical difficulties, we adopt some stronger regularity assumptions than imposed in \cite{lamp1,lamp3} or \cite{superlamp}. Nevertheless, this version of the problem is more difficult than the classical case (without the extra $-x/n$ term in the drift). Thus although the ideas in this section are related to those in \cite{lamp1,lamp3} and \cite{superlamp}, we have to proceed somewhat differently. In particular, to obtain our $\beta < 1$ law of large numbers in this setting (an analogue of \cite[Theorem 2.3]{superlamp} for the standard Lamperti case), we use a `stochastic approximation' result (Lemma \ref{lower-lm}), the proof of which uses ideas somewhat similar to those in \cite{mv2,superlamp}. We impose some regularity conditions on $(Z_n)_{n \in {\mathbb N}}$. Suppose that there exists $C \in (0,\infty)$ such that for all $n \in {\mathbb N}$, \begin{equation} \label{z1b} \mathbb{P}_n [ | Z_{n+1} - Z_n | > C ] = 0, { \ \textrm{a.s.}} . \end{equation} We also assume that \begin{equation} \label{z4b} \limsup_{n \to \infty} Z_n = \infty, { \ \textrm{a.s.}},   \end{equation} without which the question of whether $(Z_n)_{n \in {\mathbb N}}$ is recurrent or transient (in the sense of Definition \ref{def1}) is trivial. Note that (\ref{z4b}) is implied by a suitable `irreducibility' assumption, such as, for all $y >0$, $\inf_{n \in {\mathbb N}} \mathbb{P}_n [ Z_m - Z_n > y \mbox{ for some }m>n ] > 0$, a.s.. In our case, as in the standard Lamperti problem, we will see a distinction between the `critical' case where $\beta=1$ and the `supercritical' case where $\beta \in [0,1)$. Thus we deal with these two cases separately in the remainder of this section. \subsection{The critical case: $\beta = 1$} For $x>0$ and $n \in {\mathbb N}$ define \begin{equation} \label{rdef} r(n;x) := n^{-1} x^2 + (\log (1+x) )^{-1} . \end{equation} For $p>0$ we write $\log^p x$ for $(\log x)^p$. We impose the further assumptions that there exist $\rho' \in {\mathbb R}$ and $s^2 \in (0,\infty)$ such that \begin{align} \mu_1 (n ; x) & = \rho' x^{-1} - \frac{x}{n} + o ( x^{-1} r ( n; x) ) , \label{z2b} \\ \mu_2 (n ; x) & = s^2 +o( r(n ;x) ) . \label{z3b} \end{align} \begin{theo} \label{thm1} Suppose that the process $(Z_n)_{n \in {\mathbb N}}$ satisfies (\ref{z1b}), (\ref{z4b}), (\ref{z2b}) and (\ref{z3b}) for some $\rho' \in {\mathbb R}$ and $s^2 \in (0,\infty)$. Then if $2\rho' \leq s^2$, $Z_n$ is recurrent. \end{theo} \begin{proof} Let $W_n:=\log \log Z_n$. Write $D_n := Z_{n+1} - Z_n$. 
First note that Taylor's formula implies that for $x, h$ with $x \to \infty$ and $h = o(x / \log x)$, \[ \log \log ( x+h) = \log \log x + \frac{h}{x \log x} - \frac{( \log x + 1) h^2}{2 x^2 \log^2 x } + O ( h^3 x^{-3} (\log x)^{-1} ) .\] Setting $x = Z_n$ and $h= D_n$ and then taking expectations, we obtain \begin{align*} \mathbb{E}_n [ W_{n+1} - W_n ] = \frac{\mu_1 (n ; Z_n)}{Z_n \log Z_n} - \frac{ (\log Z_n +1)\mu_2 (n ; Z_n)}{2 Z_n^2 \log^2 Z_n } + O(Z_n^{-3}) , \end{align*} using (\ref{z1b}) for the error term. By (\ref{z2b}) and (\ref{z3b}) this last expression is \[ \frac{2 \rho' - s^2}{2 Z_n^2 \log Z_n} - \frac{s^2}{2 Z_n^2 \log^2 Z_n} - \frac{1}{n \log Z_n} + o ( Z_n^{-2} (\log Z_n)^{-1} r (n ;Z_n )) < 0 , \] for all $n$ and $Z_n$ large enough, provided $2 \rho' - s^2 \leq 0$, by (\ref{rdef}). Thus there exist non-random constants $w_0 \in (0,\infty)$ and $n_1 \in {\mathbb N}$ for which, for all $n \geq n_1$, on $\{ W_n > w_0 \}$, \[ \mathbb{E}_n [ W_{n+1} - W_n ] < 0 , { \ \textrm{a.s.}}.\] By Doob's decomposition, we may write $W_n = M_n + A_n$, $n \geq n_1$, where $W_{n_1} = M_{n_1}$, $(M_n)_{n \geq n_1}$ is a martingale, and the previsible sequence $(A_n)_{n \geq n_1}$ is defined by \[ A_n = \sum_{m=n_1}^{n-1} \mathbb{E}_m [ W_{m+1} - W_m ] \leq \sum_{m=n_1}^{n-1} \mathbb{E}_m [ W_{m+1} - W_m ] {\mathbf 1} \{ W_m \leq w_0 \} \leq C \sum_{m=n_1}^{n-1} {\mathbf 1} \{ W_m \leq w_0 \},\] since the uniform jumps bound (\ref{z1b}) for $Z_n$ implies a uniform jumps bound for $W_n$, $n \geq n_1$. Hence $W_n \to \infty$ implies that $\limsup_{n \to \infty} A_n < \infty$ so $M_n \to \infty$ also. However, $(M_n)_{n \geq n_1}$ is a martingale with uniformly bounded increments (by (\ref{z1b})) so $\mathbb{P} [ M_n \to \infty] =0$ (see e.g.\ \cite[Theorem 5.3.1, p.\ 204]{durrett}). Hence $\mathbb{P} [ \liminf_{n \to \infty} W_n < \infty ] = 1$. \end{proof} \begin{theo} \label{thm2} Suppose that the process $(Z_n)_{n \in {\mathbb N}}$ satisfies (\ref{z1b}), (\ref{z4b}), (\ref{z2b}) and (\ref{z3b}) for some $\rho' \in {\mathbb R}$ and $s^2 \in (0,\infty)$. Then if $2 \rho' > s^2$, $Z_n$ is transient. \end{theo} \begin{proof} This time set $$ W_n: =\frac {1}{ \log Z_n}+\frac {9}{\log n}. $$ Again write $D_n := Z_{n+1} - Z_n$. We want to compute \begin{align} \label{lem8a} \mathbb{E}_n [ W_{n+1} - W_n ] = \mathbb{E}_n \left[ (\log (Z_n + D_n))^{-1} - (\log Z_n)^{-1} \right] \nonumber\\ + 9 \left[ (\log (n+1))^{-1} - (\log n)^{-1} \right] .\end{align} Observe that, for the final term on the right-hand side of (\ref{lem8a}), \begin{equation} \label{lem8b} (\log (n+1))^{-1} - (\log n)^{-1} = \frac{ \log ( 1 - (n+1)^{-1} )}{ \log n \log (n+1)} = - \frac{1}{n \log^2 n } + O(n^{-2}).\end{equation} Also for the expectation on the right-hand side of (\ref{lem8a}) we have that \begin{align*} \mathbb{E}_n \left[ (\log (Z_n + D_n))^{-1} - (\log Z_n)^{-1} \right] \\ = (\log Z_n)^{-1} \mathbb{E}_n \left[ \left( 1 + \frac{ \log ( 1 + (D_n/Z_n))}{\log Z_n} \right)^{-1} -1 \right] . \end{align*} Taylor's formula implies that for $a = O(1)$ and $y = o(1)$, \[ \left( 1 + a \log (1+y ) \right)^{-1} = 1 - ay + \frac{2a^2+a}{2} y^2 + O(y^3) .\] Applying this formula with $a = 1/\log Z_n$ and $y = D_n/Z_n$ we obtain, \begin{align*} & ~~~ \mathbb{E}_n \left[ (\log (Z_n + D_n))^{-1} - (\log Z_n)^{-1} \right] \\ & = - \frac{\mu_1 (n ; Z_n)}{Z_n \log^2 Z_n} + \frac{\mu_2 (n ; Z_n)} {2 Z_n^2 \log^2 Z_n } + \frac{\mu_2 (n ; Z_n)} { Z_n^2 \log^3 Z_n } + O(Z_n^{-3} ) , \end{align*} by (\ref{z1b}). 
Now using (\ref{z2b}) and (\ref{z3b}) we obtain, \begin{align} \label{lem8c} & ~~~ \mathbb{E}_n \left[ (\log (Z_n + D_n))^{-1} - (\log Z_n)^{-1} \right] \nonumber\\ & = \frac{1}{2Z_n^2 \log^2 Z_n} \left( - (2 \rho' - s^2) + o(r(n;Z_n)) + O ( ( \log Z_n)^{-1} ) \right) + \frac{1}{n \log^2 Z_n} .\end{align} Suppose that $2 \rho' - s^2 \geq 2\varepsilon > 0$. Then by (\ref{rdef}), (\ref{lem8a}), (\ref{lem8b}) and (\ref{lem8c}) we have that there exist non-random constants $n_0 \in {\mathbb N}$ and $x_0 \in (1,\infty)$ such that for all $n \geq n_0$, on $\{ Z_n \geq x_0\}$, a.s., \begin{equation} \label{lem8d} \mathbb{E}_n [ W_{n+1} - W_n ] \leq -\frac{\varepsilon}{2 Z_n^2 \log^2 Z_n} - \frac{8}{n \log^2 n} + \frac{3}{2n \log^2 Z_n} .\end{equation} We have that the right-hand side of (\ref{lem8d}) is bounded above by \[ \frac{1}{\log^2 Z_n} \left( \frac{-\varepsilon}{2 Z_n^2} + \frac{3}{2n} \right) \leq \frac{1}{\log^2 Z_n} \left( \frac{-\varepsilon}{2 Z_n^2} + \frac{3\varepsilon}{8Z_n^2} \right) ,\] provided $n \geq 4 Z_n^2 \varepsilon^{-1}$, and this last upper bound is negative for $Z_n \geq x_0$. On the other hand, if $n \leq 4Z_n^2 \varepsilon^{-1}$ the right-hand side of (\ref{lem8d}) is bounded above by \begin{align*} \frac{1}{n} \left( \frac{3/2}{\log^2 Z_n} - \frac{8}{\log^2 n} \right) \leq \frac{1}{n} \left( \frac{7}{\log^2 n} - \frac{8}{\log^2 n} \right) < 0 ,\end{align*} for $Z_n \geq x_0$ and $n \geq n_0$. Thus in either case we have concluded that for all $n \geq n_0$, on $\{ Z_n \geq x_0 \}$, \begin{equation} \label{superm} \mathbb{E}_n [ W_{n+1} - W_n ] < 0, { \ \textrm{a.s.}}. \end{equation} Now fix $K>1$ and $x_1 \geq x_0$. Define the stopping times \[ \sigma_K := \min \{ n \geq \max \{ n_0, x_1^{18K} \} : Z_n \geq x_1^{4K} \} ; ~~~ \tau_K := \min \{ n \geq \sigma_K : Z_n \leq x_1 \} . \] By (\ref{z4b}) we have that $\mathbb{P} [ \sigma_K < \infty ] = 1$. From (\ref{superm}) and the definition of $\tau_K$ we have that $(W_{n \wedge \tau_K})_{n \geq \sigma_K}$ is a non-negative supermartingale, and hence it converges almost surely to a $[0,\infty)$-valued random variable $W := W^{(K)}$. In particular, since $\sigma_K<\infty$ a.s., we have $\lim_{n \to \infty} W_{n \wedge \tau_K } = W$, a.s.. Moreover \begin{equation} \label{0401a} \mathbb{E} [ W] \geq \mathbb{E} [ W {\mathbf 1}_{\{ \tau_K < \infty \}} ] = \mathbb{E} [ W_{\tau_K} {\mathbf 1}_{\{ \tau_K < \infty \}} ] \geq \frac{ \mathbb{P} [ \tau_K < \infty ] }{ \log x_1 } , \end{equation} since $Z_{\tau_K} \leq x_1$. On the other hand, since $(W_{n \wedge \tau_K})_{n \geq \sigma_K}$ is a supermartingale, \begin{equation} \label{0401b} \mathbb{E} [ W ] \leq \mathbb{E} [ W_{\sigma_K } ] \leq \frac{1}{4K \log x_1} + \frac{9}{18K \log x_1 } = \frac{3}{4K \log x_1 }, \end{equation} using the facts that $Z_{\sigma_K} \geq x_1^{4K}$ and $\sigma_K \geq x_1^{18K}$. Combining (\ref{0401a}) and (\ref{0401b}) we see that \[ \frac{ \mathbb{P} [ \tau_K < \infty ] }{ \log x_1 } \leq \frac{3}{4K \log x_1 } .\] On $\{ \sigma_K < \infty \} \cap \{ \tau_K = \infty \}$, we have that $\liminf_{n \to \infty} Z_n \geq x_1$, so the preceding argument shows that $\mathbb{P} [ \liminf_{n \to \infty} Z_n \geq x_1 ] \geq 1 - \frac{3}{4K}$ for any $K$ and any $x_1 \geq x_0$. It follows that $\mathbb{P} [ Z_n \to \infty ] =1$. \end{proof} \subsection{The supercritical case: $\beta \in [0,1)$} Once again we will assume that (\ref{z1b}) and (\ref{z4b}) hold. 
We will also assume that there exist $\beta \in [0,1)$ and $\rho \in {\mathbb R} \setminus \{ 0\}$ such that \begin{align} \mu_1 (n ; x) & = \rho x^{-\beta} - \frac{x}{n} + o(x^{-\beta} ) + o(x n^{-1}) . \label{z2} \end{align} \begin{theo} \label{superc} Consider the process $(Z_n)_{n \in {\mathbb N}}$ satisfying (\ref{z1b}), (\ref{z4b}), and (\ref{z2}), where $\beta \in [0,1)$. Then $Z_n$ is transient if $\rho>0$ and recurrent if $\rho <0$. \end{theo} \begin{proof} First suppose that $\rho>0$. By (\ref{z1b}) we can choose $\rho' \in (0,\infty)$ so that $2 \rho' > C^2 > \mathbb{E}_n [ (Z_{n+1} -Z_n)^2 ]$, a.s., and, by (\ref{z2}), \[ \mathbb{E}_n [ Z_{n+1} - Z_n ] \geq ( \rho' +o(1)) Z_n^{-1} - \frac{Z_n}{n} + o ( Z_n^{-1} r (n ;Z_n) ) , { \ \textrm{a.s.}}. \] It is this inequality, rather than the equality (\ref{z2b}), that is needed in the proof of Theorem \ref{thm2}. Hence following that proof implies transience. Similarly, if $\rho <0$ we have, for any $\rho' \in (-\infty,0)$, a.s., \[ \mathbb{E}_n [ Z_{n+1} - Z_n ] \leq (\rho'+o(1)) Z_n^{-1} - \frac{Z_n}{n} + o ( Z_n^{-1} r ( n ;Z_n) ) . \] Using this inequality in the proof of Theorem \ref{thm1} implies recurrence. \end{proof} The rest of this section works towards a proof of the following law of large numbers. \begin{theo} \label{lln1} Consider the process $(Z_n)_{n \in {\mathbb N}}$ satisfying (\ref{z1b}), (\ref{z4b}), and (\ref{z2}), where $\beta \in [0,1)$ and $\rho>0$. Then, with $\ell (\rho,\beta)$ as defined at (\ref{elldef}), as $n \to \infty$, \begin{equation} \label{zlln} \frac{Z_n}{n^{1/(1+\beta)}} \toas \ell (\rho, \beta) . \end{equation} \end{theo} The proof uses the following lemma, which is of some independent interest, and falls loosely into a family of ``stochastic approximation'' results; see e.g.~\cite[Section 2.4]{pemantle}. \begin{lm} \label{lower-lm} Suppose that $(V_n)_{n \in {\mathbb N}}$ is a non-negative process adapted to the filtration $({\mathcal F}_n)_{n \in {\mathbb N}}$. Suppose that there exists $r>0$ such that the following hold. \begin{itemize} \item[(a)] There exists a non-negative sequence $(\gamma_n)_{n\in{\mathbb N}}$ adapted to $({\mathcal F}_n)_{n \in {\mathbb N}}$ with $\sum_{n \in {\mathbb N}} \gamma_n < \infty$ a.s.\ such that for any $b>0$ and all $n \in {\mathbb N}$ we have that, a.s., \begin{align*} \mathbb{E}_n \left[(V_{n+1}-V_n)^2 \right] \le C(b) \gamma_n ~\textrm{on}~ {\{ V_n\le b \}}, \end{align*} where $C(b)$ is a constant depending only on $b$. \item[(b)] There exists $\varepsilon>0$, and for any $\delta\in(0,r)$ there is a sequence of events $A_n=A_n(\delta)$, $n \in {\mathbb N}$, such that $A_n \in {\mathcal F}_n$, $\mathbb{P}[ A_n\text{ i.o.}]=0$, and a.s.\ for all $n \in {\mathbb N}$, \begin{align*} \mathbb{E}_n [V_{n+1} ] \le V_n ~\text{on}~ \{ V_n > r + \delta \}\cap A_n^{\rm c}, ~\text{ and }~ \mathbb{E}_n [V_{n+1} ] & \ge V_n ~\textrm{on}~ \{V_n<r-\delta \}\cap A_n^{\rm c}, \end{align*} and also $A_n^{\rm c} \subseteq \{ V_{n+1} > (1-\varepsilon) V_n \}$. \end{itemize} Then a.s.\ $\lim_{n\to\infty} V_n = V_\infty$ exists in $[0,\infty)$. 
If, additionally, \begin{itemize} \item[(c)] there exists a non-negative sequence $({\tilde\gamma_n})_{n\in{\mathbb N}}$ adapted to $({\mathcal F}_n)_{n \in {\mathbb N}}$ with $\sum_{n\in{\mathbb N}} \tilde \gamma_n = + \infty$ a.s.\ such that for any $a, b$ with $0<a<b$ and $r\notin [a,b]$, for all $n$ large enough, on {$\{ V_n \in [a,b] \}$}, a.s., \begin{align*} \mathbb{E}_n [V_{n+1}-V_n ] & \le - \tilde C(a,b)\tilde\gamma_n \text{ if } r<a,\\ \mathbb{E}_n [V_{n+1}-V_n ] & \ge \tilde C(a,b)\tilde\gamma_n \text{ if } r>b, \end{align*} where $\tilde C(a,b)>0$ is a constant depending only on $a$ and $b$, \end{itemize} then $V_\infty \in\{0,r\}$. \end{lm} \begin{proof} We first show that under conditions (a) and (b) of the lemma, $V_n$ converges a.s.\ to some finite limit $V_\infty$. We claim that \begin{equation} \label{eq:liminf} \mathbb{P} \left[ \{\liminf_{n \to \infty} V_n\le r\}\cup \{\exists \lim_{n \to \infty} V_n>r\} \right]=1. \end{equation} Indeed, suppose that $\{\liminf_{n \to \infty} V_n\le r\}$ does not hold, so that $\liminf_{n \to \infty} V_n> r+\delta$ for some $\delta>0$. For $M \in {\mathbb N}$ let $$ \tau^{(M)} :=\inf\{n\ge M:\ V_n\le r+\delta\text{ or $A_n(\delta)$ occurs}\}, $$ and define $V^{(M)}_n :=V_{n \wedge \tau^{(M)}}$. Then, for each $M$, by (b), $(V^{(M)}_n)_{n\ge M}$, is a non-negative supermartingale and hence converges a.s.. On the other hand, from (b) and our assumption that $\liminf_{n \to \infty} V_n> r+\delta$ it follows that a.s.\ $\tau^{(M)}=\infty$ for {\it some\/} $M$; in this case $V^{(M)}_n\equiv V_n$ for all $n\ge M$ and hence $V_n$ must also converge a.s., and the limit must be greater than $r$. This establishes (\ref{eq:liminf}). By an analogous argument with a bounded submartingale we also establish \begin{equation}\label{eq:limsup} \mathbb{P} \left[ \{\limsup_{n \to \infty} V_n\ge r\}\cup \{\exists \lim_{n \to \infty} V_n<r\} \right] =1. \end{equation} Given (\ref{eq:liminf}) and (\ref{eq:limsup}), to show that $\lim_{n \to \infty} V_n$ exists a.s.\ it suffices to demonstrate this convergence on the set \begin{equation*} E:=\{\limsup_{n \to \infty} V_n\ge r\}\cap\{\liminf_{n \to \infty} V_n\le r\}. \end{equation*} Let us prove that on $E$ in fact $\limsup_{n \to \infty} V_n=r$. For $\delta>0$ define $$ E_\delta:=E\cap \{\limsup V_n > y+\delta\} \text{ where }y=r+2\delta. $$ We will show that $\mathbb{P}[E_\delta]=0$ for any $\delta>0$, which yields the desired conclusion. Fix some $\nu_0$ such that $V_{\nu_0}>y+\delta$. Iteratively for $i=0,1,2,\dots$ define \begin{align*} \tau_i& :=\min\{n>\nu_{i}: V_n\le y-\delta\},\\ \kappa_i& :=\min\{n>\tau_{i}: V_n> y-\delta\}, \\ \nu_{i+1}& :=\min\{n>\tau_{i}: V_n\ge y+ \delta\}. \end{align*} On $E$ we have $V_n \leq r + \delta$ infinitely often, so that $V_n \leq y-\delta$ infinitely often. Thus our definitions imply that $\tau_i$, {$\kappa_i$}, and $\nu_i$ are finite for all $i$ on $E_\delta$. Next, setting $B_n:=\{V_{n-1}\leq y-\delta,\ V_{n}>y\}$, we have by L\'evy's extension of the second Borel--Cantelli lemma (see e.g.\ \cite[Theorem 5.3.2]{durrett}) \begin{align*} \{\{V_{\kappa_i} > y, \; \kappa_i < \infty \} \text{ i.o.}\} \subseteq \{B_n \text{ i.o.}\}=\left\{\sum_{n \in {\mathbb N}} \mathbb{P}_n[B_{n+1}]=\infty\right\}, \end{align*} up to events of probability $0$. 
On the other hand, \begin{align} \label{cheb1} \mathbb{P}_n[B_{n+1}] &= \mathbb{P}_n[V_{n+1}>y] {\mathbf 1}\{V_{n} \leq y-\delta\} \nonumber\\ &\le \mathbb{P}_n [ |V_{n+1}-V_n| > \delta ] {\mathbf 1}\{V_{n} \leq y-\delta\} \nonumber\\ &\le \delta^{-2} \mathbb{E}_n [ ( V_{n+1} - V_{n} )^2 ] {\mathbf 1} \{ V_{n} \leq y-\delta \} , \end{align} by Chebyshev's inequality, so that by (\ref{cheb1}) and (a), \begin{equation} \label{cheb2} \sum_{n \in {\mathbb N}} \mathbb{P}_n[B_{n+1}] \le \sum_{n \in {\mathbb N}} \delta^{-2} C(y-\delta) \gamma_{n}< \infty, { \ \textrm{a.s.}} \end{equation} Thus on $E_\delta$, by (\ref{cheb2}) and the Borel--Cantelli lemma, $\{V_{\kappa_i} > y\}$ occurs only finitely often a.s., so there is some $N_1 \in {\mathbb N}$ for which $V_{\kappa_i} \in (y-\delta, y]$ for all $i \geq N_1$. Now let \begin{align*} \eta_i&:=\min\{n>{\kappa_{i}}: V_n \le y-\delta\text{ or } V_n \ge y+\delta\} \le {\nu_{i+1}}. \end{align*} On $E_\delta$ all the $\eta_i$ are also finite (since the $\nu_i$ are finite). For $n \in {\mathbb N}$ define \begin{align*} I_n&=\left\{\begin{array}{ll} 1, &\text{if } {\kappa_i}\le n<\eta_{i}\text{ for some }i \text{ and } A_n^{\rm c} \text{ occurs};\\ 0, &\text{otherwise}, \end{array}\right. \\ D_n&=\mathbb{E}_n[(V_{n+1}-V_n) I_n] \text{ and } M_n=\sum^{n-1}_{s={\kappa_0}} [(V_{s+1}-V_{s})I_s- D_s], \end{align*} with an empty sum understood as zero so that $M_{n} =0$ for $n \leq \kappa_0$. Then $(M_n)_{n\in {\mathbb N}}$ is a zero-mean martingale adapted to $({\mathcal F}_n)_{n\in{\mathbb N}}$, and it is not hard to see that $$ \mathbb{E}_n [M_{n+1}^2-M_n^2]=\mathbb{E}_n [(M_{n+1}-M_n)^2]\le \mathbb{E}_n[(V_{n+1}-V_n)^2 I_n ]. $$ Moreover, since for ${\kappa_i}\le n< \eta_i$ we have $y-\delta<V_n\le y+\delta$, from (a) it follows that \begin{align*} \mathbb{E}_n [ M_{n+1}^2- M_n^2] &\le C(y+\delta) \gamma_n. \end{align*} This implies that the increasing process associated with $M_n$ is bounded by a constant times $\sum_{n\in{\mathbb N}} \gamma_n$ and hence is a.s.\ finite by (a). Consequently, by \cite[Theorem 5.4.9]{durrett} the martingale $M_n$ converges a.s.~to some finite limit; in particular, there is some $N_2 \in {\mathbb N}$ for which $\sup_{n,m \geq N_2} |M_n-M_m|<\delta$ a.s.. Then for all $i \geq N_1$ such that $\kappa_i\geq N_2$ we have \begin{align*} V_{\eta_i}=V_{\kappa_i}+ [M_{\eta_i}-M_{\kappa_i}]+\sum_{s=\kappa_i}^{\eta_i-1}D_s<y+\delta , \end{align*} since, by (b), $D_n\le 0$ for $n\in [\kappa_i,\eta_i)$, $n \geq N_2$, and $V_{\kappa_i}\le y$ for all $i \geq N_1$. Consequently, the process $V_n$ eventually exits the interval $(y-\delta,y+\delta)$ only on the left (and it cannot jump over it, as we showed above), contradicting $E_\delta$. So $\mathbb{P}[E_\delta]=0$. A similar argument shows that on $E$ not only $\limsup_{n \to \infty} V_n=r$ but also $\liminf_{n \to \infty} V_n=r$; we sketch the changes needed to adapt the previous argument to this case. Analogously to $E_\delta$ above, we define $E'_\delta := E \cap \{ \liminf V_n < y-\delta\}$ where $y = r-2\delta$ and $\delta \in (0,r/3)$. Also fix some $\nu'_0$ such that $V_{\nu'_0} < y -\delta$, and iteratively set \begin{align*} \tau'_i& :=\min\{n>\nu'_{i}: V_n\ge y+\delta\},\\ \kappa'_i& :=\min\{n>\tau'_{i}: V_n< y+\delta\}, \\ \nu'_{i+1}& :=\min\{n>\tau'_{i}: V_n\le y- \delta\}. \end{align*} This time let $B'_n := \{ V_{n-1} \geq y+\delta, V_n < y \}$. 
Now by definition of $A_n^{\rm c}$, $\{ V_n > (1-\varepsilon)^{-1} r \} \cap A_n^{\rm c} \subseteq \{ V_{n+1} > r \} \subseteq (B_{n+1}')^{\rm c}$, so that \begin{align*} \mathbb{P}_n [ B'_{n+1} ] & \leq \mathbb{P}_n [ B'_{n+1} ] {\mathbf 1} ( A_n^{\rm c} ) + {\mathbf 1} ( A_n ) \\ & \leq \mathbb{P}_n [ B'_{n+1} ] {\mathbf 1} \{ V_n < (1-\varepsilon)^{-1} r \} + {\mathbf 1} ( A_n ) .\end{align*} A similar argument as that for (\ref{cheb1}) and (\ref{cheb2}), using Chebyshev's inequality and (a), with $C(y-\delta)$ in (\ref{cheb2}) now being replaced by $C((1-\varepsilon)^{-1} r)$, shows that, a.s., $\mathbb{P}_n [ B'_{n+1} ] {\mathbf 1} \{ V_n < (1-\varepsilon)^{-1} r \}$ is summable, while ${\mathbf 1} ( A_n )$ is a.s.\ summable by assumption in (b). As before, it follows that $\{ V_{\kappa'_i} < y \}$ a.s.\ occurs only finitely often. Then a similar argument to the previous case, with the martingale $M_n$, shows that $\mathbb{P}[ E'_\delta ] =0$ as well. Consequently , on $E$, $\lim_{n \to \infty} V_n$ a.s.\ exists and equals $r$ in this case. Thus the first claim of the lemma follows, and $V_\infty = \lim_{n \to \infty} V_n$ a.s.\ exists in $[0,\infty)$. To prove the second claim of the lemma, under the additional condition (c), we show that $\mathbb{P}[ V_\infty \in (0,r ) \cup (r, \infty ) ] =0$. To this end, suppose that $V_n\to y>r$ (the case {$y\in (0,r)$} can be handled similarly). Choose a small $\delta>0$ such that $y-\delta>r$. Then a.s.\ there exists an $N_3$ such that $|V_n-y|<\delta$ for all $n\ge N_3$. Now define $D'_n=\mathbb{E}_n[(V_{n+1}-V_n) ]$ and the martingale $M'_n=\sum_{s=1}^{n-1} [(V_{s+1}-V_{s})- D'_s]$. Then by (c) we have that for all $n \geq N_3$, $D'_n=\mathbb{E}_n[(V_{n+1}-V_n) ]\le -\tilde C(y-\delta, y+\delta)\tilde\gamma_n$. By a similar argument to that for $M_n$ above, $M'_n$ must converge a.s.. However this leads to a contradiction with the inequality \begin{align*} [V_{n} - V_{N_3}]-[M'_{n} - M'_{N_3}]&=\sum_{s=N_3}^{n-1} D'_s \le - \tilde C(y-\delta,y+\delta)\sum_{s=N_3}^{n-1} \tilde\gamma_s, \end{align*} since, a.s., as $n\to\infty$ the right-hand side converges to $-\infty$ while the left-hand side converges to a finite limit. \end{proof} Now we can give the proof of Theorem \ref{lln1}. \begin{proof}[Proof of Theorem \ref{lln1}.] It suffices to prove that \begin{equation} \label{zlln2} \lim_{n\to\infty}\frac{n}{Z_n^{1+\beta}}=\frac{2+\beta}{\rho(1+\beta)}, { \ \textrm{a.s.}}. \end{equation} Set $V_n=(n-1)/Z_n^{1+\beta}$ and $\tilde V_n=n/Z_n^{1+\beta}=\frac{n}{n-1}V_n$. Writing $D_n :=Z_{n+1}-Z_n$, we have \begin{align*} V_{n+1} - V_n &= \tilde V_n \left[ \left(1+ \frac{D_n}{Z_n} \right)^{-(1+\beta)}- \left(1-\frac 1n\right)\right] =\tilde V_n \left[ \frac 1n -\frac{(1+\beta)D_n}{Z_n} +O( Z_n^{-2}) \right], \end{align*} using Taylor's formula and (\ref{z1b}) for the error term. Hence \begin{equation} \label{eq33} V_{n+1} - V_n = \frac{\tilde V_n}n \left[1 -\frac{(1+\beta)n D_n} {Z_n} +O(n Z_n^{-2})\right] .\end{equation} Taking conditional expectations in (\ref{eq33}) we obtain, on $\{ Z_n \to \infty \}$, a.s., \begin{align}\label{eq:edy} \mathbb{E}_n [ V_{n+1} - V_n ] & = \frac{\tilde V_n}n \left[1 -\frac{(1+\beta)n \mu_1 (n ; Z_n) } {Z_n} +O(n Z_n^{-2})\right] \nonumber\\ &= \frac {\tilde V_n}{n} \left[ 2+\beta+o(1) - \left((1+\beta) \rho +o(1)\right) V_n \right], \end{align} using (\ref{z2}), and then using the fact that $Z_n \to \infty$ to simplify the error terms. 
Similarly, squaring both sides of (\ref{eq33}) and taking expectations, on $\{ Z_n \to \infty \}$, a.s., \begin{align*} \mathbb{E}_n [ (V_{n+1} - V_n)^2 ] & = \frac{\tilde V_n^2}{n^2} \left[ 1 - \frac{2(1+\beta) n \mu_1 ( n ; Z_n)}{Z_n} (1 + o(1)) + \frac{(1+\beta)^2 n^2 \mu_2 (n ; Z_n )}{Z_n^2} (1+o(1)) \right], \end{align*} using (\ref{z1b}) to obtain the error terms. Then from (\ref{z3b}) and (\ref{z2}) we obtain \begin{align} \label{eq:var} \mathbb{E}_n [ (V_{n+1} - V_n)^2 ] & = \frac{\tilde V_n^2}{n^{2}} \left[ 3 + 2\beta + o(1) - \left( 2 \rho (1+\beta) + o(1) \right) V_n + \frac{(c + o(1)) n^2}{ Z_n^{2} } \right] ,\end{align} for some $c \in (0,\infty)$ (depending on $s^2$ and $\beta$) as $Z_n \to \infty$ and $n \to \infty$. For a fixed $b>0$ and $A < \infty$, there exists a (non-random) $n_0$ for which $\{ V_n \leq b \}$ implies that $\{ Z_n \geq A \}$ for all $n \geq n_0$. In particular, from (\ref{eq:var}) we have that for some (non-random) $C(b) < \infty$, on $\{ V_n \leq b\}$, for any $n \in {\mathbb N}$, a.s., \[ \mathbb{E}_n [ (V_{n+1} - V_n)^2 ] \leq \frac{ \tilde V_n^2}{n^2} \left[ O(1) + (c+o(1)) n^2 (\tilde V_n/n)^{\frac{2}{1+\beta}} \right] \leq C(b) n^{-\frac{2}{1+\beta}} .\] Since $\beta <1$, $\sum_{n\in{\mathbb N}} n^{-2/(1+\beta)} < \infty$ so that the conditions of part (a) of Lemma~\ref{lower-lm} are satisfied with the present choice of $V_n$ and $\gamma_n = n^{-2/(1+\beta)}$. Let $A_n := \{ Z_n < A \}$. By Theorem \ref{superc}, $Z_n \to \infty$ a.s., so that $A_n$ occurs only finitely often for any $A \in (0,\infty)$. Taking $r=\frac{2+\beta}{\rho(1+\beta)}$, the conditions on $\mathbb{E}_n [ V_{n+1}]$ in part (b) of Lemma \ref{lower-lm} are shown to hold for any $\delta \in (0,r)$, taking $A = A(\delta)$ sufficiently large, by (\ref{eq:edy}). Indeed, from (\ref{eq:edy}), on $\{ V_n > r +\delta \}$ for some $\delta \in (0,r)$, \[ \mathbb{E}_n [ V_{n+1} - V_n ] \leq - \delta (1+\beta) \rho (1+o(1)) n^{-1} \tilde V_n , \] which is negative on $A_n^{\rm c}$ for our choice of $A = A(\delta)$. A similar argument holds for the other condition on $\mathbb{E}_n [ V_{n+1} ]$ in Lemma \ref{lower-lm}(b). The final condition in (b), that $A_n^{\rm c}$ implies that $V_{n+1} > (1-\varepsilon) V_n$ for some $\varepsilon \in (0,1)$, follows from (\ref{eq33}) and the fact that $D_n$ is uniformly bounded (by (\ref{z1b})), taking $A$ and $n$ sufficiently large in our choice of $A_n$. The conditions in part (c) of Lemma \ref{lower-lm} follow from (\ref{eq:edy}) again, with $\tilde \gamma_n = n^{-1}$, noting that the $o(1)$ terms in (\ref{eq:edy}) are uniformly small on $\{V_n \leq b\}$ for any $n \geq n_0$ (for some non-random $n_0 \in {\mathbb N}$). Hence we conclude from Lemma \ref{lower-lm} that $V_n \to V_\infty$ a.s.~where $V_\infty \in \{ 0 , r\}$. To complete the proof of the theorem we must show that $\mathbb{P}[ V_n\to 0 ]=0$. This, however, follows from the fact that $\limsup_{n \to \infty} ( n^{-1/(1+\beta)} Z_n ) <\infty$ a.s.\ due to \cite[Theorem 2.3]{superlamp}, noting the remark following that theorem. \end{proof} \section{Proofs of main theorems on self-interacting walks} \label{proofs} \subsection{Recurrence classification: Proofs of Theorems \ref{ythm1} and \ref{1dthm}} \label{recprf} We apply the results of Section \ref{1dproc} to $Z_n = \| Y_n \| = \| X_n - G_n \|$. \begin{proof}[Proof of Theorem \ref{ythm1}.] Suppose that (A1) and (A2) hold, and that $\beta \geq 1$. 
First note that with $Z_n = \| Y_n \|$, (\ref{incbound}) and (\ref{lem3eq}) imply (\ref{z1b}) and (\ref{z4b}). Now from (\ref{lem2eq2}) we obtain, with $r(n;x)$ defined by (\ref{rdef}), \[ \mathbb{E}_n [ Z_{n+1} - Z_n ] = \left( \rho{\mathbf 1}_{\{ \beta =1\}} + \frac{1}{2} (d-1) \sigma^2 \right) \frac{1}{Z_n} - \frac{Z_n}{n} + o(Z_n^{-1} r(n; Z_n)), { \ \textrm{a.s.}} \] Similarly, we have from (\ref{mom2ex}) that \[ \mathbb{E}_n [ ( Z_{n+1} - Z_n )^2 ] = \sigma^2 + O( Z_n n^{-1} ) + o ( (\log Z_n)^{-1} ).\] First suppose that $\beta =1$. Thus (\ref{z2b}) and (\ref{z3b}) hold with $\rho' = \rho + (d-1)(\sigma^2/2)$ and $s^2 = \sigma^2$. It follows from Theorems \ref{thm1} and \ref{thm2} that $Z_n$ is transient if and only if $2 \rho' > s^2$, or equivalently $2 \rho > \sigma^2 ( 2-d )$, i.e., $\rho > \rho_0$. This proves part (i) of the theorem. Finally suppose that $\beta >1$. This time (\ref{z2b}) and (\ref{z3b}) hold with $\rho' = (d-1)(\sigma^2/2)$ and $s^2 = \sigma^2$. It follows from Theorems \ref{thm1} and \ref{thm2} that $Z_n$ is transient if and only if $2 \rho' > s^2$, or equivalently $ \sigma^2 ( 2-d ) < 0$, i.e., $d>2$. This proves part (ii). \end{proof} \begin{proof}[Proof of Theorem \ref{1dthm}.] Suppose that $d=1$. If $Y_n$ is transient, then by (\ref{incbound}) we have that with probability 1 either: (i) $Y_n \to + \infty$; or (ii) $Y_n \to - \infty$. In case (i), there exists $N \in [2,\infty)$ for which $Y_n \geq 1$ for all $n \geq N$, so (\ref{gandy}) with (\ref{incbound}) implies that for $n\geq N$, \[ G_n \geq X_1 - CN + \sum_{j=N}^n \frac{1}{j-1} \to \infty, { \ \textrm{a.s.}} ,\] as $n \to \infty$; a similar argument applies in case (ii). Since $X_n = Y_n + G_n$, and $Y_n$, $G_n$ are transient with the same sign, it follows that $X_n$ is transient too. \end{proof} \subsection{Limiting directions: Proof of Theorem \ref{dirthm}} \label{direction} The key first step in the proof of Theorem \ref{dirthm} is the following application of the law of large numbers, Theorem \ref{lln1}. \begin{lm} \label{ylln} Suppose that (A1) holds with $d \in {\mathbb N}$, $\beta \in [0,1)$, and $\rho > 0$. As $n \to \infty$, \[ n^{-1/(1+\beta)} \| X_n - G_n \| \toas \ell (\rho, \beta ). \] \end{lm} \begin{proof} We take $Z_n = \| Y_n \| = \| X_n -G_n\|$ and apply Theorem \ref{lln1}. The conditions of the latter are verified since (\ref{incbound}), (\ref{lem3eq}), and (\ref{lem2eq}) imply (\ref{z1b}), (\ref{z4b}), and (\ref{z2}) respectively. \end{proof} The second step in the proof of Theorem \ref{dirthm} is to show that the process $Y_n = X_n - G_n$ has a limiting direction. Together with Lemma \ref{ylln} and the simple but useful relation (\ref{gandy}), we will then be able to deduce the asymptotic behaviour of $X_n$ and $G_n$. We use the notation $\hat Y_n := Y_n / \| Y_n\|$, with the convention that $\hat {\mathbf 0} := {\mathbf 0}$. Then $(\hat Y_n)_{n \in {\mathbb N}}$ is an $({\mathcal F}_n)$-adapted process, and using the vector-valued version of Doob's decomposition we may write \begin{equation} \label{ddecomp} \hat Y_n = A_n + M_n, \end{equation} where $M_1 = \hat Y_1$, $(M_n)_{n \in {\mathbb N}}$ is an $({\mathcal F}_n)$-adapted $d$-dimensional martingale and $(A_n)_{n \in {\mathbb N}}$ is the previsible sequence defined by $A_1 = {\mathbf 0}$ and $A_{n} = \sum_{m=1}^{n-1} \mathbb{E}_m [ \hat Y_{m+1} - \hat Y_m ]$ for $n \geq 2$. \begin{lm} \label{dlem1} Suppose that (A1) holds with $d \in {\mathbb N}$, $\beta \in [0,1)$, and $\rho > 0$. 
Let the Doob decomposition of $\hat Y_n$ be as given at (\ref{ddecomp}). There exists a $d$-dimensional random vector $A_\infty$ such that $A_n \to A_\infty$ a.s., as $n \to \infty$. \end{lm} \begin{proof} We have from (\ref{yeq}) that, with $\Delta_n := X_{n+1} - X_n$ as usual, \begin{align*} A_{n+1} - A_n & = \mathbb{E}_n \left[ \frac{Y_n + \Delta_n}{\| Y_n + \Delta_n \|} - \hat Y_n \right] = \mathbb{E}_n \left[ \frac{\Delta_n}{\| Y_n + \Delta_n \|} \right] - \hat Y_n \mathbb{E}_n \left[ \frac{ \| Y_n + \Delta_n \| - \| Y_n \|}{ \| Y_n + \Delta_n \|} \right] \\ & =: T_1 - \hat Y_n T_2.\end{align*} We deal with the expectations $T_1$ and $T_2$ separately. First, \[ T_1 = \| Y_n \|^{-1} \mathbb{E}_n [ \Delta_n ] - \mathbb{E}_n \left[ \frac{ (\| Y_n + \Delta_n \| - \| Y_n \|) \Delta_n }{\| Y_n \| \| Y_n + \Delta_n \| } \right] .\] The numerator in the last expectation is bounded in norm by $\| \Delta_n \|^2$, by the triangle inequality. Then using the fact that $\| \Delta_n \|$ is uniformly bounded, and that $\| Y_n\| \sim \ell(\rho,\beta) n^{1/(1+\beta)}$ by Lemma \ref{ylln}, it follows that \[ T_1 = \| Y_n \|^{-1} \mathbb{E}_n [ \Delta_n ] + O ( n^{-2/(1+\beta)} ) , { \ \textrm{a.s.}}, \] as $n \to \infty$. Similarly, we have that \[ T_2 = \mathbb{E}_n \left[ \frac{ \| Y_n + \Delta_n \|^2 - \| Y_n \|^2}{ \| Y_n + \Delta_n \| (\| Y_n + \Delta_n \| + \| Y_n \| )} \right] .\] Again using the boundedness of $\| \Delta_n \|$ and that $\|Y_n\| \sim \ell(\rho,\beta) n^{1/(1+\beta)}$, we obtain \[ T_2 = \mathbb{E}_n \left[ \frac{ 2 \Delta_n \cdot Y_n + \| \Delta_n \|^2}{ \| Y_n + \Delta_n \| (\| Y_n + \Delta_n \| + \| Y_n \| )} \right] = \mathbb{E}_n \left[ \frac{\hat Y_n \cdot \Delta_n }{ \| Y_n \| } \right] + O ( n^{-2/(1+\beta)} ) , { \ \textrm{a.s.}}.\] On applying (\ref{drift}) to evaluate the terms $\mathbb{E}_n [ \Delta_n]$ and $\mathbb{E}_n [ \Delta_n \cdot \hat Y_n ]$, the leading terms in $T_1$ and $\hat Y_n T_2$ cancel to give \[ A_{n+1} - A_n = O ( \| Y_n \|^{-\beta-1} (\log \| Y_n \| )^{-2} ) + O ( n^{-2/(1+\beta)} ) .\] Since $\|Y_n\| \sim \ell(\rho,\beta) n^{1/(1+\beta)}$, and $\beta <1$, these two $O (\, \cdot\, )$ terms are summable, so that $\sum_{n=1}^\infty \| A_{n+1} - A_n \| < \infty$, a.s., implying that $A_n$ converges a.s.. \end{proof} \begin{lm} \label{dlem2} Suppose that (A1) holds with $d \in {\mathbb N}$, $\beta \in [0,1)$, and $\rho > 0$. Let the Doob decomposition of $\hat Y_n$ be as given at (\ref{ddecomp}). There exists a $d$-dimensional random vector $M_\infty$ such that $M_n \to M_\infty$ a.s., as $n \to \infty$. \end{lm} \begin{proof} Taking expectations in the vector identity $\|M_{n+1} - M_n\|^2 =\| M_{n+1}\|^2 - \|M_n\|^2 - 2 M_n \cdot (M_{n+1} - M_n)$ and using the martingale property, we have \[ \mathbb{E}_n [ \| M_{n+1} \|^2 - \|M_n\|^2 ] = \mathbb{E}_n [ \| M_{n+1} - M_n \|^2 ] = \mathbb{E}_n [ \| \hat Y_{n+1} - \hat Y_n - \mathbb{E}_n [ \hat Y_{n+1} - \hat Y_n ] \|^2 ] .\] Expanding out the expression in the latter expectation, it follows that \[ \mathbb{E}_n [ \| M_{n+1}\|^2 -\| M_n\|^2 ] = \mathbb{E}_n [ \| \hat Y_{n+1} - \hat Y_n \|^2 ] - \| \mathbb{E}_n [ \hat Y_{n+1} - \hat Y_n ] \|^2 \leq \mathbb{E}_n [ \| \hat Y_{n+1} - \hat Y_n \|^2 ] .\] Here we have from (\ref{yeq}) that \[ \| \hat Y_{n+1} - \hat Y_n \| = \left\| \frac{ Y_n (\| Y_n \| - \| Y_n + \Delta_n \| ) + \Delta_n \| Y_n \|}{\| Y_n \| \|Y_n + \Delta_n \|} \right\| \leq \frac{ 2 \| \Delta_n \|}{\| Y_n + \Delta_n \|} ,\] by the triangle inequality.
Since $\| \Delta_n \|$ is uniformly bounded, and $\|Y_n\| \sim \ell (\rho,\beta) n^{1/(1+\beta)}$ by Lemma \ref{ylln}, it follows that $\| \hat Y_{n+1} - \hat Y_n \| = O ( n^{-1/(1+\beta)})$, so that \[ \mathbb{E}_n [ \| M_{n+1} \|^2 - \|M_n\|^2 ] = O ( n^{-2/(1+\beta)}), { \ \textrm{a.s.}}. \] Hence $\sum_{n=1}^\infty \mathbb{E}_n [ \| M_{n+1} \|^2 - \|M_n\|^2 ] < \infty$, a.s., which implies that $M_n$ has an almost-sure limit, by e.g.\ the $d$-dimensional version of \cite[Theorem 5.4.9, p.\ 217]{durrett}. \end{proof} \begin{proof}[Proof of Theorem \ref{dirthm}.] Combining Lemmas \ref{dlem1} and \ref{dlem2} with the decomposition (\ref{ddecomp}), we conclude that $\hat Y_n \to A_\infty + M_\infty =: {\mathbf u}$, for some random unit vector ${\mathbf u}$, a.s., as $n \to \infty$. In other words, the process $Y_n$ has a limiting direction. It follows from the representation (\ref{gandy}) that the processes $G_n$ and $X_n$ have the same limiting direction. Specifically, \[ G_n = X_1 + \sum_{j=2}^n \frac{1}{j-1} \| Y_j \| \hat Y_j = X_1 + \sum_{j=2}^n \frac{1}{j-1} [\ell(\rho,\beta) + o(1) ] j^{1/(1+\beta)} [{\mathbf u} + o(1)], { \ \textrm{a.s.}}, \] by Lemma \ref{ylln}. Hence \[ G_n = [ (1+\beta) \ell(\rho,\beta) {\mathbf u} + o(1) ] n^{1/(1+\beta)}, { \ \textrm{a.s.}}, \] and the result for $X_n$ follows since $X_n = G_n + Y_n$. \end{proof} \subsection{Upper bounds: Proof of Theorem \ref{extent}} \label{boundprf} Theorem \ref{extent} will follow from the next result, which gives bounds for $\| Y_n\|$. \begin{proposition} \label{extent2} Suppose that (A1) holds with $d \in {\mathbb N}$, $\beta \geq 0$, and $\rho \in {\mathbb R}$. Then the following bounds apply. \begin{itemize} \item[(i)] If $\beta \geq 1$, then for any $\varepsilon>0$, a.s., for all but finitely many $n \in {\mathbb N}$, $\| Y_n \| \leq n^{1/2} ( \log n)^{(1/2)+\varepsilon}$. \item[(ii)] If (A2) holds, $\beta =1$, and $\rho < - (d\sigma^2/2)$, then for any $\varepsilon>0$, a.s., for all but finitely many $n \in {\mathbb N}$, $\| Y_n \| \leq n^{\gamma (d,\sigma^2, \rho) + \varepsilon}$ where $\gamma (d,\sigma^2, \rho)$ is given by (\ref{gammadef}). \item[(iii)] If $\beta \in [0,1)$ and $\rho <0$, then for any $\varepsilon>0$, a.s., for all but finitely many $n \in {\mathbb N}$, $\| Y_n \| \leq (\log n)^{\frac{1}{1-\beta} +\varepsilon}$. \end{itemize} \end{proposition} To prove this result we apply some general results from \cite{mvw}. Section 4 of \cite{mvw} dealt with stochastic processes that were time-homogeneous, but that condition was not used in the proofs of the results that we apply here, which relied on the very general results of Section 3 of \cite{mvw}: the basic tool is Theorem 3.2 of \cite{mvw}. It is most convenient to again work in some generality. Again let $(Z_n)_{n \in {\mathbb N}}$ denote a stochastic process on $[0,\infty)$. Recall the definition of $\mu_k(n;x)$ from (\ref{mudef}). The next result gives the upper bounds that we need. Part (i) is contained in \cite[Theorem 4.1(i)]{mvw}. Part (ii) is a variation on \cite[Theorem 4.3(i)]{mvw} that is more suited to the present application. Part (iii) is also based on \cite{mvw} but does not seem to have appeared before. \begin{lm} \label{boundslem} Suppose that $(Z_n)_{n \in {\mathbb N}}$ is such that (\ref{z1b}) holds. \begin{itemize} \item[(i)] Suppose that for some $A < \infty$, $x \mu_1 (n ; x ) \leq A$ for all $n$ and $x$ sufficiently large. 
Then for any $\varepsilon>0$, a.s., $Z_n \leq n^{1/2} (\log n)^{(1/2)+\varepsilon}$ for all but finitely many $n \in {\mathbb N}$. \item[(ii)] Suppose that for some $v>0$ and $\kappa >1$, $2 x \mu_1 (n ; x) \leq - \kappa \mu_2 (n ; x) + o(1)$ and $\mu_2 (n;x) \geq v$ for all $n$ and $x$ sufficiently large. Then, for any $\varepsilon >0$, a.s.~$Z_n \leq n^{\frac{1}{1+\kappa} +\varepsilon}$ for all but finitely many $n \in {\mathbb N}$. \item[(iii)] Suppose that for some $\beta \in [0,1)$ and $A>0$, $x^\beta \mu_1 (n ; x) \leq -A$ for all $n$ and $x$ large enough. Then for any $\varepsilon>0$, a.s., for all but finitely many $n \in {\mathbb N}$, $Z_n \leq (\log n)^{\frac{1}{1-\beta} +\varepsilon}$. \end{itemize} \end{lm} \begin{proof} First we prove part (ii). Let $\kappa' = \kappa - \varepsilon$ for $\varepsilon \in (0,\kappa)$. Writing $D_n = Z_{n+1} - Z_n$, \begin{align*} \mathbb{E}_n [ Z_{n+1}^{1+\kappa'} - Z_n^{1+\kappa'} ] & = Z_n^{1+\kappa'} \mathbb{E}_n [ (1 + (D_n/Z_n))^{1+\kappa'} -1 ] \\ & = (1+ \kappa') Z_n^{\kappa'} \left( \mu_1 (n ; Z_n) + \frac{\kappa'}{2 Z_n} \mu_2 (n ; Z_n ) + O( Z_n ^{-2} ) \right) , \end{align*} using Taylor's formula and (\ref{z1b}). Under the conditions of part (ii), we have \[ \mu_1 (n ; Z_n) + \frac{\kappa'}{2 Z_n} \mu_2 (n ; Z_n ) + O( Z_n ^{-2} ) \leq - \frac{\varepsilon}{2 Z_n} \mu_2 (n ; Z_n) + o (Z_n^{-1}) < 0 ,\] for all $n$ and $Z_n$ large enough. Hence $\mathbb{E}_n [ Z_{n+1}^{1+\kappa'} - Z_n^{1+\kappa'} ]$ is uniformly bounded above and the result follows from Theorem 3.2 of \cite{mvw}. It remains to prove part (iii). For $\alpha >0$, define $f_\alpha (x) := \exp \{ x^\alpha \}$. First we show that, under the conditions of the lemma, for any $\alpha \in (0,1-\beta)$, for some $C< \infty$, \begin{equation} \label{fbound} \mathbb{E}_n [ f_\alpha ( Z_{n+1} ) - f_\alpha ( Z_n ) ] \leq C, { \ \textrm{a.s.}} .\end{equation} Writing $D_n = Z_{n+1} -Z_n$, we have that \begin{align*} \mathbb{E}_n [ f_\alpha (Z_{n+1}) - f_\alpha (Z_n) ] = f_\alpha (Z_n) \mathbb{E}_n \left[ \exp \{ ( Z_n + D_n)^\alpha - Z_n^\alpha \} - 1 \right] .\end{align*} Since $D_n = O(1)$ a.s., by (\ref{z1b}), Taylor's formula applied to the last expression yields \begin{align*} \mathbb{E}_n [ f_\alpha (Z_{n+1}) - f_\alpha (Z_n) ] = f_\alpha (Z_n) \mathbb{E}_n \left[ \alpha D_n Z_n^{\alpha -1} + O(Z_n^{2\alpha -2} )\right] .\end{align*} Here we have that $\mathbb{E}_n [ D_n ] \leq - A Z_n^{-\beta}$ for all $Z_n, n$ large enough. Since $\alpha < 1-\beta$ we obtain (\ref{fbound}). Now we can apply Theorem 3.2 of \cite{mvw} to complete the proof. \end{proof} Finally we complete the proofs of Proposition \ref{extent2} and Theorem \ref{extent}. \begin{proof}[Proof of Proposition \ref{extent2}.] Under the conditions of part (i) of Proposition \ref{extent2}, we have from (\ref{incbound}) and Lemma \ref{lem2} that the conditions of Lemma \ref{boundslem}(i) hold for $Z_n = \| Y_n \|$. Thus we obtain part (i) of the proposition. Similarly, under the conditions of part (ii), we have from (\ref{incbound}), (\ref{lem2eq2}) and (\ref{mom2ex}) that Lemma \ref{boundslem}(ii) holds for $Z_n = \| Y_n \|$ and $\kappa = - \frac{2\rho}{\sigma^2} - (d-1)$, which is greater than $1$ for $\rho < - d \sigma^2/2$. Finally, under the conditions of part (iii), we have from (\ref{incbound}) and (\ref{lem2eq}) that Lemma \ref{boundslem}(iii) holds for $Z_n = \| Y_n \|$. \end{proof} \begin{proof}[Proof of Theorem \ref{extent}.] Part (i) of the theorem follows from Theorem \ref{dirthm}.
Parts (ii), (iii), and (iv) follow from Proposition \ref{extent2} with (\ref{gandy}) and the triangle inequality; note this introduces an extra logarithmic factor in the case of part (iv) of the theorem. \end{proof} \end{document}
\begin{document} \title[Completions of affine spaces into Mori fiber spaces]{Completions of affine spaces \\ into Mori fiber spaces with non-rational fibers} \author{Adrien Dubouloz} \address{IMB UMR5584, CNRS, Univ. Bourgogne Franche-Comt\'e, EIPHI Graduate School ANR-17-EURE-0002, F-21000 Dijon, France.} \email{[email protected]} \author{Takashi Kishimoto} \address{Department of Mathematics, Faculty of Science, Saitama University, Saitama 338-8570, Japan} \email{[email protected]} \author{Karol Palka} \address{Institute of Mathematics, Polish Academy of Sciences, \'{S}niadeckich 8, 00-656 Warsaw, Poland} \email{[email protected]} \thanks{The authors were supported by the National Science Centre, Poland, grant number 2015/18/E/ST1/00. The first author was partially supported by the French ANR project ”FIBALGA” ANR-18-CE40-0003. The second author was partially supported by JSPS KAKENHI, grant number JP19K03395. For the purpose of Open Access, the authors have applied a CC-BY public copyright license to any Author Accepted Manuscript version arising from this submission.} \begin{abstract} We describe a method to construct completions of affine spaces into total spaces of $\mathbb{Q}$-factorial terminal Mori fiber spaces over the projective line. As an application we provide families of examples with non-rational, birationally rigid and non-stably rational general fibers. \end{abstract} \maketitle \section{Introduction} We work with complex algebraic varieties. A \emph{completion} of a given variety $U$ is a complete variety containing a Zariski open subset isomorphic to $U$. In this article we consider the problem of describing minimal completions of affine spaces $\mathbb{A}^n$. Since the Kodaira dimension of $\mathbb{A}^n$ is negative, a natural way to define minimality in this context is to require the completion to be the total space of a Mori fiber space. Indeed, by \cite[Corollary 1.3.3]{BCHM} such varieties come as outputs of the Minimal Model Program applied to smooth projective varieties of negative Kodaira dimension. We call them \emph{Mori fiber completions}. From the viewpoint of the Minimal Model Program it is also natural to consider not only smooth but also mildly singular varieties and their completions, namely those which are $\mathbb{Q}$-factorial and have terminal singularities. If $\pi\:V \rightarrow B$ is a Mori fiber completion of $\mathbb{A}^n$ then $V$ is rational and the base variety $B$ is unirational. The case when $B$ is a point is especially important, because then $V$ is a Fano variety of Picard rank one. Fano varieties and their rationality are objects of intensive studies, see \cite{Iskovskikh_Prokhorov-Fano_varieties}, \cite{Kollar-Smith-Corti_Nearly_rational}. Classifying smooth Fano completions of $\mathbb{A}^n$ of Picard rank one is the projective version of the celebrated problem of finding minimal analytic completions of complex affine spaces raised by Hirzebruch \cite[Problem 27]{Hirzebruch_problems} and studied by many authors. In case $n=1,2$ there is only $\mathbb{P}^n$. The first difficult case, $n=3$, was completed in a series of papers \cite{H3a, H3b, Pr, Fu93}, see also \cite{Kis05} for partial classification results concerning completions of $\mathbb{A}^3$ into smooth Fano threefolds with Picard rank two. For $n=4$ there are some partial results \cite{Pr93, PZ-genus10_4folds_and_A4}. In this article we focus on the situation where the base $B$ is a curve, hence is isomorphic to the projective line $\mathbb{P}^1$. 
Simple examples of Mori fiber completions of $\mathbb{A}^n$ of this type are given by locally trivial $\mathbb{P}^{n-1}$-bundles over $\mathbb{P}^1$. Another series of examples can be constructed by taking the product of $\mathbb{P}^1$ with any $\mathbb{Q}$-factorial terminal Fano variety of Picard rank one which is a completion of $\mathbb{A}^{n-1}$, see Example \ref{ex:products}. These examples are special in the sense that general fibers are completions of $\mathbb{A}^{n-1}$. In general, the following interesting problem arises: \begin{prob} Let $\pi\colon V\rightarrow \mathbb{P}^1$ be a Mori fiber completion of $\mathbb{A}^n$. What can be said about the geometry of general fibers of $\pi$? \end{prob} A basic observation is that a general fiber of $\pi$ is a Fano variety of dimension $n-1$ with terminal singularities. The property of being a Mori fiber space implies in particular that the generic fiber of $\pi$ has Picard rank one over the function field of $\mathbb{P}^1$. On the contrary, general fibers do not necessarily have Picard rank one, as can be seen for instance for del Pezzo fibrations of some threefolds completing $\mathbb{A}^3$, see Example \ref{ex:dP_fibr}. Still, it is a restrictive condition for a Fano variety to be a general fiber of a Mori fiber space, see \cite{CFST-Fano_fibers, CFST-Fano_fibers2}. For $n=2, 3$ a general fiber of $\pi$, being terminal and Fano, is either $\mathbb{P}^1$ or a smooth del Pezzo surface, hence is a completion of $\mathbb{A}^{2}$. In contrast, our first result implies that in higher dimensions general fibers of Mori fiber completions of $\mathbb{A}^n$ can be very far from being rational. \begin{thm}\label{thm2} Let $H$ be a hyperplane in $\mathbb{P}^n$, $n\geq 2$. For every integral hypersurface $F\subseteq \mathbb{P}^n$ of degree $d\leq n$ such that $F\cap H$ is irreducible and contained in the smooth locus of $F$ there exists a Mori fiber completion $\pi\:V\rightarrow \mathbb{P}^1$ of the affine $n$-space $\mathbb{A}^n \cong \mathbb{P}^n \setminus H$ such that all hypersurfaces other than $dH$ in the pencil of divisors $\langle F,dH\rangle$ generated by $F$ and $dH$ appear as fibers of $\pi$. \end{thm} By Bertini's theorem a general member of a pencil as in Theorem \ref{thm2} is smooth. For $(n,d)=(4,3)$ it is a smooth cubic threefold, hence is unirational but not rational \cite{CM72}. For $d=n\geq 4$ a general member is birationally super-rigid (see Definition \ref{def:rigid}) by \cite{deF13, IsMa71,Pu98}. This gives the following corollary. \begin{cor}\label{cor:completions_of_A4} For every $n\geq 4$ there exists a Mori fiber completion of $\mathbb{A}^n$ over $\mathbb{P}^1$ whose general fibers are smooth birationally super-rigid Fano varieties of Picard rank one. \end{cor} Another corollary concerns completions of polynomial morphisms $f\colon\mathbb{A}^n\rightarrow \mathbb{A}^1$ of low degree into Mori fiber spaces over $\mathbb{P}^1$. \begin{cor}\label{cor:polynomial} Assume that $f\colon \mathbb{A}^n\rightarrow \mathbb{A}^1$ is a morphism given by a polynomial of total degree at most $n$ and that in the natural open embedding $\mathbb{A}^n\subseteq \mathbb{P}^n$ the intersection of the closure of the zero locus of $f$ with $\mathbb{P}^n\setminus \mathbb{A}^n$ is smooth. Then $f\colon\mathbb{A}^n\rightarrow \mathbb{A}^1$ can be completed into a Mori fiber space over $\mathbb{P}^1$. 
\end{cor} General fibers of polynomial morphisms $f\colon\mathbb{A}^n\rightarrow \mathbb{A}^1$ as above of degree $\deg f\leq n-1$ are smooth affine Fano varieties in the sense of \cite{CDP17} (see Definition \ref{def:Affine-Fano}). We note that for $n\geq 6$ and $\deg f=n-1$ general fibers of the Mori fiber space $\pi\:V\rightarrow\mathbb{P}^1$ are then completions of so-called \emph{super-rigid affine Fano varieties} (general fibers of $f$), see Definition \ref{def:affine_rigid} and Example \ref{ex:affine_super-rigid}. We obtain even more families of possible general fibers by considering singular ambient spaces instead of $\mathbb{P}^n$. For a definition of quasi-smooth hypersurfaces in weighted projective spaces, see Section \ref{sec:Cone_construction}. \begin{thm}\label{thm1} Let $n\geq 4$ and let $\mathbb{P}=\mathbb{P}(1,a_1,\ldots,a_n)$ be a weighted projective space for some positive integers $a_1,\ldots, a_n$, such that the description of the hyperplane $H=\mathbb{P}(a_1,\ldots,a_n)$ is well-formed. Then for every quasi-smooth terminal hypersurface $F\neq H$ of $\mathbb{P}$ of degree $d\leq a_1+\ldots+a_n$ there exists a Mori fiber completion $\pi\:V\rightarrow \mathbb{P}^1$ of $\mathbb{A}^n \cong \mathbb{P} \setminus H$ such that all hypersurfaces other than $dH$ in the pencil $\langle F,dH\rangle$ generated by $F$ and $dH$ appear as fibers of $\pi$. \end{thm} For $n=4$ we get a class of Mori fiber completions of $\mathbb{A}^4$ over $\mathbb{P}^1$ whose general fibers are quasi-smooth terminal weighted Fano threefold hypersurfaces in the 95 families of Fletcher and Reid \cite{Fl00,Reid-canonical_3folds}, see Corollary \ref{cor:95families_bir_rigid_fibers}. By \cite[Main Theorem]{CP16}, see also \cite{CPR00}, all such threefolds are birationally rigid and some of them are even known to be birationally super-rigid \cite[Theorem 1.1.10]{CP16}. In a similar vein, we deduce from \cite{Okada-stable_rationality_of_Fano_3fold_hypersurfaces} the following result, see Example \ref{okada}. \begin{cor}\label{cor:completions_of_A4_Okada} There exists a Mori fiber completion of $\mathbb{A}^4$ over $\mathbb{P}^1$ whose very general fibers are Fano varieties of Picard rank one which are not stably rational. \end{cor} We now briefly describe our approach. A natural way to obtain completions of a given quasi-projective variety $U$ with $\mathbb{Q}$-factorial terminal singularities ($\mathbb{A}^n$ in particular) is to find some normal projective completion whose singularities are not worse than those of $U$, and then to run a Minimal Model Program on it. If $U$ is smooth then we may take a smooth completion using resolution of singularities. But in general, finding appropriate completions from which to run the program is already a nontrivial task. Moreover, each step, whether it is a divisorial contraction or a flip, may change the isomorphism type of the image of $U$. Preventing this from happening is one of the key problems. To gain more control over the successive steps of the program we study completions which are resolutions of specific pencils of divisors on terminal Fano varieties, namely of those pencils whose general members are Fano varieties of Picard rank one with terminal singularities. We call them \emph{terminal rank one Fano pencils}. On the one hand, this assumption allows us to find a completion with mild singularities; on the other hand, it gives a chance to analyze the MMP runs in more detail.
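To fix ideas, let us record, purely for illustration and in coordinates which are not used elsewhere in this introduction, what such pencils look like in the setting of Theorem \ref{thm2}. Choose homogeneous coordinates $[x_0:\cdots:x_n]$ on $\mathbb{P}^n$ so that $H=\{x_0=0\}$, and write $F=\{f=0\}$ for a homogeneous polynomial $f$ of degree $d$. The pencil $\langle F,dH\rangle$ is then the family
$$F_{[\lambda:\mu]}=\{\lambda f+\mu x_0^{d}=0\},\qquad [\lambda:\mu]\in\mathbb{P}^1,$$
with $dH=F_{[0:1]}$, and for $\lambda\neq 0$ the member $F_{[\lambda:\mu]}$ restricts on $\mathbb{A}^n=\{x_0\neq 0\}$ to the level set $\{\tilde f=-\mu/\lambda\}$ of the dehomogenized polynomial $\tilde f=f(1,x_1,\ldots,x_n)$. This is, in essence, the mechanism behind Corollary \ref{cor:polynomial}: completing the pencil into a Mori fiber space over $\mathbb{P}^1$ completes the polynomial morphism $\tilde f\colon\mathbb{A}^n\rightarrow\mathbb{A}^1$.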
We introduce the notion of a ``compatible thrifty resolution'' of such pencils, characterized essentially by the property that it keeps the isomorphism type of general members unchanged. We give sufficient criteria for the existence of compatible thrifty resolutions and we show that terminal rank one Fano pencils which admit such resolutions yield interesting Mori fiber spaces over $\mathbb{P}^1$. The article is organized as follows: In Section \ref{sec:prelim} we recall basic notions concerning varieties and their singularities in the framework of the Minimal Model Program. Section \ref{sec:Weil-Pencils} reviews properties of pencils of Weil divisors on normal varieties. Section \ref{sec:pencils_and_resolutions} is devoted to the study of terminal rank one Fano pencils, their resolutions and the outputs of relative MMPs run from these. In Section \ref{sec:H-special} we consider a class of pencils on Fano varieties with class group $\mathbb{Z}$. It provides a large supply of terminal rank one Fano pencils admitting compatible thrifty resolutions. Applications to the construction of Mori fiber completions of $\mathbb{A}^n$ over $\mathbb{P}^1$ are given in Section \ref{sec:Affine-Spaces}. This section contains proofs of Theorems \ref{thm2} and \ref{thm1} and a series of examples. \textit{Acknowledgements.} We thank the Institute of Mathematics of Burgundy, Saitama University and the Institute of Mathematics of the Polish Academy of Sciences for excellent working conditions. We thank the referees for their suggestions for improving the text. \tableofcontents \section{Preliminaries}\label{sec:prelim} We summarize basic notions concerning varieties and their singularities in the framework of the Minimal Model Program which are used in the article. We use the following standard terminology: The \emph{domain of definition} of a dominant rational map $f\:X\dashrightarrow Y$ between algebraic varieties is the largest open subset $\operatorname{dom}(f)$ of $X$ on which $f$ is represented by a morphism. Its complement is called the \emph{indeterminacy locus} of $f$. The \emph{exceptional locus} $\operatorname{Exc}(f)$ of a proper birational morphism $f\:X\rightarrow Y$ is the pre-image of the indeterminacy locus of the birational map $f^{-1}\:Y\dashrightarrow X$. A \emph{resolution} (of indeterminacy) of a rational map $f\:X\dashrightarrow Y$ is a proper birational morphism $\tau\:X'\rightarrow X$ such that $f\circ\tau\:X'\dashrightarrow Y$ is a morphism. \subsection{Singularities in the context of MMP} \label{ssec:singularities} Let $X$ be a normal variety and let $j\:X_\mathrm{reg}\hookrightarrow X$ be the embedding of the smooth locus. The induced restriction on Picard groups $j^*\colon\operatorname{Pic}(X)\rightarrow \operatorname{Pic}(X_\mathrm{reg})$ is injective and the restriction on class groups $\operatorname{Cl}(X)\rightarrow\operatorname{Cl}(X_\mathrm{reg})\cong \operatorname{Pic}(X_\mathrm{reg})$ is an isomorphism. This gives a natural injection $\operatorname{Pic}(X)\rightarrow \operatorname{Cl}(X)$; see \cite[Corollaire 21.6.10]{EGAIV-4}. A canonical divisor of $X$ is a Weil divisor $K_X$ on $X$ whose class in $\operatorname{Pic}(X_\mathrm{reg})$ is the class of the canonical invertible sheaf $\det(\Omega^1_{X_\mathrm{reg}})$ of $X_\mathrm{reg}$. A Weil divisor on $X$ is called \emph{$\mathbb{Q}$-Cartier} if it has a positive multiple which is Cartier. We say that $X$ is \emph{$\mathbb{Q}$-factorial} if every Weil divisor on $X$ is $\mathbb{Q}$-Cartier.
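For instance (a standard example, recorded here only to illustrate the terminology and not used later in the text), let $X=\{xy=z^{2}\}\subseteq\mathbb{A}^{3}$ be the affine quadric cone and let $L=\{x=z=0\}\subseteq X$. Then $L$ is a prime Weil divisor which is not Cartier, because its ideal cannot be generated by a single element near the vertex, while $2L=\operatorname{div}(x)$ is Cartier; in fact $\operatorname{Cl}(X)\cong\mathbb{Z}/2\mathbb{Z}$ is generated by the class of $L$ and $\operatorname{Pic}(X)=0$. Thus every Weil divisor on $X$ is $\mathbb{Q}$-Cartier, that is, $X$ is $\mathbb{Q}$-factorial, even though the injection $\operatorname{Pic}(X)\rightarrow\operatorname{Cl}(X)$ is not surjective.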
We now recall some basic facts about singularities of pairs. We refer the reader to \cite[Chapter 2]{KollarMori-Bir_geometry} for details. A \emph{log pair} $(X,D)$ consists of a normal variety $X$ and a Weil $\mathbb{Q}$-divisor $D=\sum d_iD_i$ on it, whose coefficients $d_i$ belong to $[0,1]\cap \mathbb{Q}$, and such that the divisor $K_X+D$ is $\mathbb{Q}$-Cartier. Given a proper birational morphism $f\colon Y\rightarrow X$ from a normal variety $Y$, we denote by $\mathcal{E}(f)$ the set of prime divisors $E$ on $Y$ contained in the exceptional locus $\operatorname{Exc}(f)$ of $f$. We call the image of $E$ the \emph{center of $E$ on $X$}. Given a log pair $(X,D)$ and a birational proper morphism from a normal variety $f\:Y\rightarrow X$ we have a linear equivalence of $\mathbb{Q}$-divisors $$K_Y+f_*^{-1}D \sim f^*(K_X+D)+\sum_{E\in \mathcal{E}(f)}a_{X,D}(E) E,$$ where $a_{X,D}(E) \in \mathbb{Q}$. The number $a_{X,D}(E)$ is called the \emph{discrepancy} of $E$ with respect to $(X,D)$. It does not depend on $Y$ in the sense that if $Y'\rightarrow Y$ is a proper birational morphism and $E'$ is the proper transform of $E$ on $Y'$ then $a_{X,D}(E)=a_{X,D}(E')$. The \emph{discrepancy} of the log pair $(X,D)$ is defined as \begin{equation}\label{eq:discr} \operatorname{Discrep} (X,D)=\underset{f,E\in \mathcal{E}(f)}{\operatorname{inf}} a_{X,D}(E), \end{equation} where the infimum is taken over all $f\:Y\rightarrow X$ as above and all $E\in \mathcal{E}(f)$. A log pair $(X,D)$ is \emph{terminal} if $\operatorname{Discrep} (X,D)>0$, \emph{purely log terminal (plt)} if $\operatorname{Discrep} (X,D)> -1$ and \emph{Kawamata log terminal (klt)} if it is plt and $\lfloor D\rfloor=0$. A normal variety $X$ is \emph{terminal} (respectively klt) if the log pair $(X,0)$ is terminal (respectively klt). The property of being terminal for a variety $X$ and being klt for any log pair $(X,D)$ can be verified by computing the infimum \eqref{eq:discr} on a single proper birational morphism $f\:Y\rightarrow X$ which is a log resolution of the log pair $(X,D)$, that is, for which $Y$ is smooth and $\operatorname{Exc}(f)\cup f^{-1}_*(D)$ is a divisor with simple normal crossings, see \cite[Corollaries 2.12,~2.13]{Kollar-Singularities_of_MMP}. Given a normal variety $X$ and a closed subset $Z$ of it, we say that $X$ is \emph{terminal (respectively klt) in a neighborhood of $Z$} if there exists an open neighborhood $U$ of $Z$ which is terminal (respectively klt). Finally, we say that a log pair $(X,S)$, where $S$ is a prime Weil divisor on $X$, is \emph{plt in a neighborhood of $S$} if there exists an open neighborhood $U$ of $S$ such that the log pair $(U,S)$ is plt. Equivalently, if for every log resolution $f\:Y\rightarrow X$ of $(X,S)$ every exceptional divisor $E$ of $f$ whose center on $X$ intersects $S$ has discrepancy $a_{X,S}(E)>-1$. We will need the following known result. \begin{lem} \label{lem:plt-to-terminal}Let $(X,S)$ be a log pair such that $S$ is a prime Cartier divisor. Assume that $(X,S)$ is plt in a neighborhood of $S$. Then $X$ is terminal in a neighborhood of $S$ if and only if for every log resolution $f\:Y\rightarrow X$ of $(X,S)$ every exceptional divisor $E$ of $f$ whose center on $X$ meets $S$ but is not contained in $S$ has positive discrepancy. \end{lem} \begin{proof} Let $f\:Y\rightarrow X$ be a log resolution of $(X,S)$. Replacing $X$ by an open neighborhood of $S$, if necessary, we can assume that the center on $X$ of every exceptional divisor $E_i$ of $f$ meets $S$. 
Since $S$ is a prime Cartier divisor, we have $f^*S=f_*^{-1}(S)+\sum c_iE_i$, where $c_i$ is a positive integer if $f(E_i)\subseteq S$ and $c_i=0$ otherwise. Writing $K_Y\sim f^*(K_X)+\sum b_iE_i$ we get $$K_Y+f_*^{-1}(S)\sim f^*(K_X+S)+\sum_{E_i, f(E_i)\not\subseteq S} b_iE_i +\sum_{E_i, f(E_i)\subseteq S} (b_i-c_i)E_i.$$ Since $(X,S)$ is plt in a neighborhood of $S$, $a_{X,S}(E_i)>-1$ for every $E_i$. It follows that for every $E_i$ such that $f(E_i)\subseteq S$ we have $b_i=a_{X,S}(E_i)+c_i>0$. Thus $X$ is terminal in a neighborhood of $S$ if and only if for every $E_i$ such that $f(E_i)\not\subseteq S$ the discrepancy $b_i=a_{X,S}(E_i)$ is positive. \end{proof} Let us recall from \cite[Chapter 6]{Matsumura} basic properties concerning regular sequences and related notions. Let $(R,\mathfrak{m})$ be a Noetherian local ring. Recall that a sequence $(x_1,\ldots,x_r)$ of elements of $\mathfrak{m}$ is \emph{regular} if for every $i=1,\ldots, r$ the element $x_i$ is not a zero divisor in $R/(x_1,\ldots,x_{i-1})$. The \emph{depth} and the \emph{(Krull) dimension} of $R$ are defined respectively as the maximal length of a regular sequence and as the maximal number of strict inclusions in a chain of prime ideals. The ring is called \emph{Cohen-Macaulay} if $\mathfrak{m}$ contains a regular sequence of length $\dim R$. Equivalently, $R$ satisfies Serre's conditions $S_i$: $\operatorname{depth} R\geq \min(\dim R,i)$, for all $i\geq 1$. The ring is \emph{regular} if $\mathfrak{m}$ contains a regular sequence of length $\dim R$ generating $\mathfrak{m}$. We say that a prime ideal $I\subseteq \mathfrak{m}$ of $R$ is a \emph{complete intersection} if it is generated by a regular sequence of length equal to its height. A scheme $X$ is called Cohen-Macaulay if all its local rings are Cohen-Macaulay. An irreducible closed subscheme $Y$ of a scheme $X$ is called a \emph{local complete intersection in $X$} if the sheaf of ideals of $Y$ is a complete intersection in all local rings of $X$. Finally, we say that an irreducible scheme $X$ is a \emph{local complete intersection} if it is locally isomorphic to a local complete intersection in a smooth scheme. Let us also recall that a normal variety $X$ has \emph{rational singularities} if for every resolution $f\colon\widetilde X\rightarrow X$ of the singularities of $X$ we have $R^if_*\mathcal{O}_{\widetilde X}=0$ for $i\geq 1$. We will use the following known facts about rational singularities. Note that Kawamata log terminal singularities are rational, see e.g. \cite[Theorem 6.2.12]{Ishii-Intro_to_singularities}. \begin{lem}[Rational singularities]\label{lem:rational_sing} Let $X$ be a normal variety with rational singularities. Then the following hold: \begin{enumerate}[(a)] \item $X$ is Cohen-Macaulay and hence every local complete intersection $Y$ in $X$ is Cohen-Macaulay. In particular, $Y$ is normal if and only if its singular locus has codimension at least $2$. \item The group $\operatorname{Cl}(X)/\operatorname{Pic}(X)$ is finitely generated. \item If $X$ is projective and $\mathcal{N}$ is a big and nef invertible sheaf on $X$ then $H^i(X,\mathcal{N}^\vee)=0$ for $0\leq i\leq \dim X -1$. \end{enumerate} \end{lem} \begin{proof} (a) By \cite[Theorem 6.2.14]{Ishii-Intro_to_singularities}, $X$ is Cohen-Macaulay. Then by \cite[Proposition 5.3.12]{Ishii-Intro_to_singularities}, $Y$ is Cohen-Macaulay, too. In particular, by Serre's criterion \cite[Theorem 23.8]{Matsumura}, $Y$ is normal if and only if it is regular in codimension $1$.
(b) This is proved by reducing to the analytification and then to finiteness of singular homology of the resolution using the exponential sequence, see \cite[Lemma 1.1]{Kawamata-Crepant-Bl-3d}, cf. also \cite[Propositions 12.1.4, 12.1.6]{KollarMori-3d_flips}. (c) Since $X$ has rational singularities, for every locally free sheaf of finite rank $\mathcal{E}$ on $X$ the projection formula and the Leray spectral sequence (see \cite[Exercises III.8.1, 8.3]{Hartshorne}) give natural isomorphisms $H^i(X,\mathcal{E})\xrightarrow{\cong}H^i(\widetilde X,f^*\mathcal{E})$ for $i\geq 0$. Put $\widetilde{\mathcal{N}}=f^*\mathcal{N}$. Then $H^i(X,\mathcal{N}^\vee)\cong H^i(\widetilde X,\widetilde{\mathcal{N}}^{\vee})$. Since $\widetilde{\mathcal{N}}$ is big, nef and invertible, $H^i(\widetilde X, \widetilde{\mathcal{N}}^{\vee})=H^{\dim X-i}(\widetilde X,\mathcal{O}(K_{\widetilde X})\otimes \widetilde{\mathcal{N}})=0$ for $i<\dim X$ by Serre duality and the Kawamata-Viehweg vanishing theorem for smooth varieties. \end{proof} \subsection{Inversion of adjunction and a $\mathbb{Q}$-factorial terminalization} We recall the following version of adjunction and inversion of adjunction, see \cite[Remark 5.47 and Theorem 5.50]{KollarMori-Bir_geometry}, cf.\ \cite[Chapter 4]{Kollar-Singularities_of_MMP} and \cite[Chapters 16 and 17]{FA}. \begin{lem}[Inversion of adjunction]\label{lem:different} Let $(X,S)$ be a log pair such that $S$ is a normal prime Weil divisor which is Cartier in codimension 2. Then the adjunction formula $K_S=(K_X+S)|_S$ holds. Furthermore, $(X,S)$ is plt in a neighborhood of $S$ if and only if $S$ is klt. \end{lem} A \emph{$\mathbb{Q}$-factorial terminalization} of a normal quasi-projective variety $X$ is a proper birational morphism $f\:X'\rightarrow X$ such that $X'$ is a quasi-projective $\mathbb{Q}$-factorial terminal variety and $K_{X'}$ is $f$-nef. \begin{lem}[$\mathbb{Q}$-factorial terminalization] \label{lem:terminalization} Every normal quasi-projective variety $X$ has a $\mathbb{Q}$-factorial terminalization $f\:X'\rightarrow X$. Furthermore, the restriction of $f$ over every $\mathbb{Q}$-factorial terminal open subset of $X$ is an isomorphism. \end{lem} \begin{proof} By \cite[Theorem 1.33]{Kollar-Singularities_of_MMP} there exists a proper birational morphism $g\:Y\rightarrow X$ such that $Y$ is quasi-projective and terminal and $K_Y$ is $g$-nef. By \cite[Corollary 1.37]{Kollar-Singularities_of_MMP} there exists a proper birational morphism $h\:X'\rightarrow Y$ such that $X'$ is quasi-projective terminal and $\mathbb{Q}$-factorial and $h$ is small, i.e.\ does not contract any divisor. Then $g\circ h\colon X'\rightarrow X$ is a $\mathbb{Q}$-factorial terminalization. Given an open subset $U\subseteq X$, the restriction $f|_{f^{-1}(U)}$ is a $\mathbb{Q}$-factorial terminalization of $U$, so without loss of generality we may assume that $U=X$, that is, that $X$ itself is $\mathbb{Q}$-factorial terminal. By assumption $K_{X'}$ is $f$-nef, so the divisor $K_{X'}-f^*K_X=\sum_{E\in \mathcal{E}(f)} a_X(E)E$ is $f$-nef. Since $X'$ is $\mathbb{Q}$-factorial and the latter divisor is contracted by $f$, the Negativity Lemma \cite[3.39(1)]{KollarMori-Bir_geometry} gives $a_X(E)\leq 0$ for each $E\in \mathcal{E}(f)$. Since $X$ is terminal, we infer that $\mathcal{E}(f)=\emptyset$, that is, $\operatorname{Exc}(f)$ has codimension at least $2$. But $X$ is also $\mathbb{Q}$-factorial, so \cite[VI.1, Theorem 1.5]{Kollar-Rational_curves_on_alg_var} implies that $\operatorname{Exc}(f)=\emptyset$. Thus $f$ is an isomorphism.
\end{proof} \subsection{Fano varieties and Mori fiber spaces} \begin{defn}[Fano variety and its index] \label{def:Fano} A \emph{Fano variety} is a normal projective variety whose anti-canonical divisor is ample (in particular, $\mathbb{Q}$-Cartier). \end{defn} Let $X$ be a klt Fano variety. By the Kawamata-Viehweg vanishing theorem, see \cite[Theorem 2.70]{KollarMori-Bir_geometry} or \cite[Theorem 5.2.7]{Matsuki}, we have $H^i(X,\mathcal{O}_X)=0$ for all $i>0$. The linear equivalence on $X$ coincides with numerical and homological equivalence \cite[Proposition 2.1.2]{Iskovskikh_Prokhorov-Fano_varieties}. In particular, $\operatorname{Pic} X\cong H^2(X,\mathbb{Z})\cong \operatorname{NS}(X)$. It is also known that $X$ is simply connected \cite{Takayama-pi_1(lt_Fano)} and rationally connected \cite[Theorem 1]{Zhang-Fanos_rationally_connected}. Recall \cite[\S 2.1 and Example 19.1.4]{Fulton-Intersection_theory} that for a (possibly non-normal) complete algebraic variety $X$ the quotient $\operatorname{NS}(X)$ of the Picard group of $X$ by the relation of numerical equivalence is a finitely generated free abelian group, whose rank $\rho(X)$ is called the \emph{Picard rank} of $X$. For a surjective morphism of complete varieties $f\:X\rightarrow B$ we put $\rho(X/B)=\rho(X)-\rho(B)$. Note that a Fano variety of Picard rank one is not necessarily $\mathbb{Q}$-factorial in general, see for instance Example \ref{ex:Q-fact} below. In contrast, a Fano variety $X$ with class group $\operatorname{Cl}(X)\cong \mathbb{Z}$ is automatically $\mathbb{Q}$-factorial, as the image of the natural inclusion $\operatorname{Pic}(X)\rightarrow \operatorname{Cl}(X)$ is a nontrivial subgroup of finite index. For a Fano variety $X$ with class group $\operatorname{Cl}(X)\cong \mathbb{Z}$ the \emph{Fano index} of $X$ is the positive integer $i_X$ such that $-K_X\sim i_X H$ for some ample generator $H$ of $\operatorname{Cl}(X)$. A morphism $f\:X\rightarrow B$ between quasi-projective varieties is called a \emph{contraction} if it is proper, surjective and $f_*\mathcal{O}_X=\mathcal{O}_B$. The latter condition implies connectedness of fibers of $f$ and in case $B$ is normal it is equivalent to it. \begin{Rem}[Contractions from varieties with $\rho=1$] \label{rem:trivial contractions} A projective variety of Picard rank one has only trivial contractions. Indeed, let $f\:X\rightarrow B$ be a contraction from such a variety onto a positive dimensional variety $B$. Note that since $f$ is proper and $B$ is quasi-projective, $B$ is projective. Let $H$ be an effective ample Cartier divisor on $B$. By Kleiman's criterion a divisor numerically equivalent to an ample divisor is ample, so since $\rho(X)=1$, $f^*H$ is an effective ample Cartier divisor on $X$. For every irreducible curve $C$ on $X$, it follows from the projection formula \cite[Proposition 2.3(c)]{Fulton-Intersection_theory} that $H\cdot f_*(C)=(f^*H)\cdot C>0$. Thus, $f\:X\rightarrow B$ is a proper morphism which does not contract any curve, hence is a finite morphism. Since $f_*\mathcal{O}_X=\mathcal{O}_B$ by assumption, $f$ is an isomorphism. \end{Rem} \begin{defn}[Mori fiber spaces and completions]\label{dfn:MFS}\ \begin{enumerate}[(a)] \item A \emph{Mori fiber space} is a $\mathbb{Q}$-factorial terminal projective variety $X$ endowed with a contraction $f\:X\rightarrow B$ onto a lower-dimensional normal variety $B$ such that $\rho(X/B) = 1$ and $-K_X$ is $f$-ample. 
\item Two Mori fiber spaces $f_i\colon X_i \rightarrow B_i$, $i=1,2$, are called \emph{weakly square birational equivalent} if there exist birational maps $\varphi\colon X_1\dashrightarrow X_2$ and $\varphi'\colon B_1 \dashrightarrow B_2$ such that $f_2 \circ \varphi = \varphi' \circ f_1$. \item Given a quasi-projective variety $U$, a \emph{Mori fiber completion of $U$} is a Mori fiber space whose total space is a completion of $U$. \end{enumerate} \end{defn} It follows from the definition that general fibers of a Mori fiber space are Fano varieties. Since the total space is assumed to be terminal, by \cite[Proposition 7.7]{Kollar-Singularities_of_pairs} general fibers are terminal too. We note that weakly square birational equivalent Mori fiber spaces are \emph{square birational equivalent} in the sense of \cite[Definition 1.2]{Corti-Sing_of_lin_sys} if the induced morphism on generic fibers is an isomorphism. We have the following notion of rigidity of varieties. See \cite{Cheltsov-rigid_Fano} and \cite{Pukhlikov-bir_rigid} for related results. \begin{defn}[Birationally rigid varieties]\label{def:rigid} A Fano variety is called \emph{birationally rigid} if it has no birational maps to Mori fiber spaces other than its own birational automorphisms. It is called \emph{super-rigid} if additionally all its birational automorphisms are regular. \end{defn} In particular, positive-dimensional birationally rigid varieties are non-rational. We have the following analogous affine notions, see \cite{CDP17}. \begin{defn}[Affine Fano varieties]\label{def:Affine-Fano} An \emph{affine Fano variety} is an affine variety which admits a completion by a purely log terminal log pair $(X,S)$ such that $X$ is a (normal projective) $\mathbb{Q}$-factorial variety of Picard rank one, $S$ is prime and $-(K_X+S)$ is ample. \end{defn} \begin{defn}[Affine super-rigid Fano varieties]\label{def:affine_rigid} An affine Fano variety $U$ is \emph{super-rigid} if it satisfies the following conditions: \begin{enumerate}[(a)] \item $U$ does not contain Zariski open subsets which are relative affine Fano varieties over varieties of positive dimension. \item For every completion $(X,S)$ of $U$ and every log pair $(X',S')$ as in Definition \ref{def:Affine-Fano}, if there exists an isomorphism $U\cong X'\setminus S'$ then it extends to an isomorphism $X\cong X'$ mapping $S$ onto $S'$. \end{enumerate} \end{defn} Note that $\mathbb{A}^1$ is the only affine Fano curve and it is super-rigid. It follows from the definition that a super-rigid affine Fano variety of dimension $\geq 2$ does not contain open $\mathbb{A}^1$-cylinders, that is, open subsets isomorphic to the product of $\mathbb{A}^1$ with a variety of smaller dimension. \section{Pencils and their resolutions}\label{sec:Weil-Pencils} We recall the correspondence between dominant rational maps $\psi\:X\dashrightarrow \mathbb{P}^1$ on a normal variety $X$ and linear systems of Weil divisors of projective dimension $1$ on $X$. In the smooth case it restricts to the well-known correspondence between such maps and one-dimensional linear systems of Cartier divisors, see e.g.\ \cite{Dolgachev-Classical_AG_modern_view}. For this purpose we use the correspondence between Weil divisors and coherent reflexive sheaves of rank one, see \cite[Section 5.2]{Ishii-Intro_to_singularities}, \cite{Reid-canonical_3folds} or \cite[Appendix]{Schwede-reflexive}. 
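To fix ideas, recall the most classical instance of this correspondence: on $X=\mathbb{P}^2$ the linear projection from the point $q=[0:0:1]$, $$\psi\colon\mathbb{P}^2\dashrightarrow\mathbb{P}^1,\qquad [x:y:z]\mapsto [x:y],$$ corresponds to the pencil of lines through $q$, spanned by the sections $x,y\in H^0(\mathbb{P}^2,\mathcal{O}_{\mathbb{P}^2}(1))$; its base scheme is the reduced point $q$ and all its members are Cartier divisors. The correspondence recalled below extends this picture to pencils whose members are Weil divisors that are not necessarily Cartier, such as the pencil in Example \ref{ex:non-iso-Blowup-discr} below.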
For a sheaf $\mathcal{F}$ on $X$ we denote the sheaf $\operatorname{\mathcal{H}om}_{\mathcal{O}_X}(\mathcal{F},\mathcal{O}_X)$, dual to $\mathcal{F}$, by $\mathcal{F}^{\vee}$. \subsection{Divisorial sheaves}\label{ssec:Divisorial_sheaves} Let $X$ be a normal variety and let $\mathcal{K}_{X}$ be its sheaf of rational functions. For a Weil divisor $D$ on $X$ the \emph{divisorial sheaf associated to} $D$ is the unique subsheaf of $\mathcal{O}_{X}$-modules of $\mathcal{K}_{X}$ whose sections over every open subset $U$ of $X$ are \[\mathcal{O}_{X}(D)(U)=\{f\in\mathcal{K}_{X}^{*},\ \operatorname{div}(f)|_{U}+D|_{U}\geq0\}\cup\{0\}.\] The sheaf $\mathcal{O}_{X}(D)$ is a coherent reflexive sheaf of rank one. It is invertible if and only if $D$ is Cartier. If $D$ is effective then $\mathcal{O}_X(-D)= \mathcal{O}_X(D)^\vee$ is the ideal sheaf $\mathcal{I}_D$ of $D$, which is a coherent reflexive subsheaf of $\mathcal{O}_X$ of rank one. Conversely, every coherent reflexive sheaf of rank one $\mathcal{F}$ on $X$ embeds into $\mathcal{K}_{X}$ and for each embedding $i\colon\mathcal{F}\hookrightarrow\mathcal{K}_{X}$ there is a unique Weil divisor $D$ on $X$ such that $\operatorname{Im}(i)=\mathcal{O}_{X}(D)$. We henceforth use the term \emph{divisorial sheaf} to refer to any coherent reflexive sheaf of rank one on $X$. We note that $\mathcal{O}_{X}(0)=\mathcal{O}_{X}$, because on a normal variety a rational function with no poles is regular, \cite[Theorem 11.5]{Matsumura}. More generally, given a divisorial sheaf $\mathcal{F}$ and an open subset $j\:U\hookrightarrow X$ such that $\operatorname{codim}(X\setminus U)\geq2$, the natural homomorphism $\mathcal{F}\rightarrow j_{*}(\mathcal{F}|_{U})$ is an isomorphism. The correspondence $D \mapsto \mathcal{O}_X(D)$ induces an isomorphism between the class group $\operatorname{Cl}(X)$ of $X$ and the set of isomorphism classes of divisorial sheaves endowed with the group law defined by $\mathcal{F}\hat{\otimes}\mathcal{F}'=(\mathcal{F}\otimes \mathcal{F}')^{\vee\vee}$. In case $\mathcal{F}$ or $\mathcal{F}'$ is invertible, there is a canonical isomorphism $\mathcal{F}\otimes\mathcal{F}'\cong (\mathcal{F}\otimes\mathcal{F}')^{\vee\vee}$. The inclusion of the smooth locus $j\:X_\mathrm{reg}\hookrightarrow X$ induces an isomorphism $$j^*\colon\operatorname{Cl}(X)\rightarrow \operatorname{Cl}(X_\mathrm{reg})\cong \operatorname{Pic}(X_\mathrm{reg}),$$ whose inverse is given by associating to an invertible sheaf $\mathcal{N}$ on $X_\mathrm{reg}$ the divisorial sheaf $(j_*\mathcal{N})^{\vee\vee}$ on $X$. The canonical sheaf of $X$ is $\omega_X=(j_*\det(\Omega^1_{X_\mathrm{reg}}))^{\vee\vee}\cong \mathcal{O}_X(K_X)$. \begin{example}[The quadric cone in $\mathbb{P}^3$]\label{ex:quadric_cone} Let $\mathbb{P}$ be the projective cone in $\mathbb{P}^3$ over a smooth plane conic. It is isomorphic to the weighted projective plane $\mathbb{P}(1,1,2)$ with weighted homogeneous coordinates $x_0$, $x_1$, $x_2$; see Section \ref{sec:Cone_construction}. Let $H=\{x_0=0\}$. Since $H\cong \mathbb{P}^1$ is irreducible and $\mathbb{P}\setminus H\cong \mathbb{A}^2$ has a trivial class group, the class group of $\mathbb{P}$ is isomorphic to $\mathbb{Z}$ and is generated by $H$. The divisor $2H$ is Cartier, but $H$ itself is not Cartier, equivalently, the divisorial sheaf $\mathcal{O}_{\mathbb{P}}(H)$ is not invertible. 
Indeed, the open subset $U=\mathbb{P}\setminus \{x_2=0\}$ is isomorphic to the affine quadric cone $\{x^2-yz=0\}\subseteq \mathbb{A}^3$ and the restriction of $\mathcal{O}_{\mathbb{P}}(H)$ to $U$ is isomorphic to the divisorial sheaf $\mathcal{O}_U(D)$ associated to the Weil divisor $D=\{x=y=0\}$. The latter is not Cartier, since the ideal $(x,y)=H^0(U, \mathcal{O}_U(-D))$ of $R=\mathbb{C}[U]$ is non-principal. Note that $H^0(U, \mathcal{O}_U(D))$ is equal to the $R$-submodule of $\mathrm{Frac}(R)$ generated by $1$ and $\frac{z}{x}$ and that, putting $D'=\{z=x=0\}$ and $s=\tfrac{z}{x}$, we have $\operatorname{div}(s)=\operatorname{div}(z)-\operatorname{div}(x)=2D'-(D'+D)=D'-D$. In particular, $s\in H^0(U, \mathcal{O}_U(D))$ and $D'\sim D$. Note also that the natural $\mathcal{O}_U$-linear homomorphism $\mathcal{O}_U(-D)\otimes \mathcal{O}_U(-D)\rightarrow \mathcal{O}_U(-2D)$ is not an isomorphism, as on global sections it gives the inclusion of ideals $(xy,y^2,x^2)=(y)\cdot (x,y,z)\hookrightarrow (y)$. \end{example} \subsection{Pencils of Weil divisors} Let $X$ be a normal variety and let $\psi\colon X \dashrightarrow \mathbb{P}^1$ be a dominant rational map. Let $j\:U=\operatorname{dom}(\psi)\hookrightarrow X$ be the inclusion of the domain of definition of $\psi$ and let $\psi_U:=\psi|_U\:U\rightarrow \mathbb{P}^1$. Since $X$ is normal, $X\setminus U$ is a closed subset of codimension $\geq 2$ of $X$. Put $V=H^0(\mathbb{P}^1,\mathcal{O}_{\mathbb{P}^1}(1))$. The invertible sheaf $\mathcal{N}=\psi_U^*\mathcal{O}_{\mathbb{P}^1}(1)$ on $U$ extends to a unique divisorial sheaf $\mathcal{F}=(j_*\mathcal{N})^{\vee\vee}$ on $X$ and the sections $\psi_U^*(s)$, $s\in V$, extend uniquely to global sections of $\mathcal{F}$. The obtained homomorphism $\psi^*\:V\rightarrow H^0(X,\mathcal{F})$ is injective and its image is a $2$-dimensional linear subspace $\mathcal{L}\subseteq H^0(X,\mathcal{F})$. The scheme-theoretic fibers $U_p=\psi_{U}^{*}(p)$ of $\psi_{U}$ over the closed points $p$ of $\mathbb{P}^1=\operatorname{Proj}(\operatorname{Sym}^{\boldsymbol{\cdot}}(V))$ are linearly equivalent Cartier divisors on $U$ for which $\mathcal{O}_{U}(U_p)\cong\psi_{U}^{*}\mathcal{O}_{\mathbb{P}^{1}}(1)$. For every $p\in\mathbb{P}^{1}$ we denote by $\mathcal{L}_p$ the scheme-theoretic closure of $\psi_{U}^{*}(p)$ in $X$. Since $X$ is normal and $\operatorname{codim}_{X}(X\setminus U)\geq2$, these are linearly equivalent Weil divisors on $X$ for which $\mathcal{O}_{X}(\mathcal{L}_p)\cong\mathcal{F}$. Denote by $\mathcal{I}_{U_p}\subseteq\mathcal{O}_{U}$ the ideal sheaf of $U_p$. Then the ideal sheaf $\mathcal{I}_{\mathcal{L}_p}\subseteq\mathcal{O}_{X}$ of $\mathcal{L}_p$ is equal to $(j_{*}\mathcal{I}_{U_p})^{\vee\vee}\cong\mathcal{O}_{X}(-\mathcal{L}_p)\cong\mathcal{F}^{\vee}$. In what follows, we call the subspace $\mathcal{L}\subseteq H^0(X,\mathcal{F})$ the \emph{pencil of (Weil) divisors on $X$} defining the rational map $\psi\:X\dashrightarrow \mathbb{P}^1$. The divisors $\mathcal{L}_p$, $p\in \mathbb{P}^1$ are called the \emph{members} of $\mathcal{L}$. The \emph{base scheme} of the pencil, denoted by $\operatorname{Bs} \mathcal{L}$, is the scheme-theoretic intersection in $X$ of all members of the pencil. \begin{prop}[Pencils and their associated rational maps]\label{lem:pencil-as-vectorspaces} Let $X$ be a normal algebraic variety. 
There exists a natural bijection between the set of dominant rational maps $\psi\:X\dashrightarrow \mathbb{P}^1$ and the set of equivalence classes of pairs $(\mathcal{F},\mathcal{L})$, where $\mathcal{F}$ is a divisorial sheaf on $X$ and $\mathcal{L}\subseteq H^0(X,\mathcal{F})$ is a $2$-dimensional space of global sections generating $\mathcal{F}$ off a closed subset of codimension $\geq 2$, and where two such pairs $(\mathcal{F},\mathcal{L})$ and $(\mathcal{F}',\mathcal{L}')$ are equivalent if there exists an isomorphism $\alpha\colon\mathcal{F} \rightarrow \mathcal{F}'$ for which $H^0(\alpha)(\mathcal{L})=\mathcal{L}'$. \end{prop} \begin{proof} We already described above how to associate to a dominant rational map $\psi\:X\dashrightarrow \mathbb{P}^1$ a pair $(\mathcal{F},\mathcal{L})=(\mathcal{F}_\psi,\mathcal{L}_\psi):=((j_*\mathcal{N})^{\vee\vee},\psi^*V)$. Consider the natural homomorphism \begin{equation}\label{eq:e} \mathrm{e}\colon\mathcal{L}\otimes_\mathbb{C} \mathcal{O}_X\rightarrow \mathcal{F} \end{equation} defined by restricting global sections to stalks of $\mathcal{F}$. By construction, the restriction of $\mathrm{e}$ to $U=\operatorname{dom}(\psi)$ is isomorphic to the pull-back by $\psi_U\:U\rightarrow \mathbb{P}^1$ of the canonical surjection $V\otimes_\mathbb{C} \mathcal{O}_{\mathbb{P}^1}\rightarrow \mathcal{O}_{\mathbb{P}^1}(1)$. Thus the sections contained in $\mathcal{L}$ generate $\mathcal{F}$ outside the indeterminacy locus $X\setminus U$ of $\psi$, which is a closed subset of codimension $\geq 2$ of $X$. Conversely, let $\mathcal{F}$ be a divisorial sheaf on $X$ and let $\mathcal{L}\subseteq H^0(X,\mathcal{F})$ be a $2$-dimensional space of global sections such that the support $Z$ of the cokernel of the homomorphism \eqref{eq:e} has codimension $\geq 2$ in $X$. Since $\mathcal{F}$ is divisorial and $X$ is normal, the set $\mathcal{F}_{\mathrm{sing}}$ of points $x$ of $X$ such that $\mathcal{F}_{x}$ is not a free $\mathcal{O}_{X,x}$-module is a closed subset of codimension $\geq2$. Thus, $W:=W(\mathcal{F},\mathcal{L})=X\setminus(Z\cup\mathcal{F}_{\mathrm{sing}})$ is an open subset of $X$ with a complement of codimension $\geq2$, on which $\mathrm{e}$ restricts to a surjection $\mathrm{e}|_W\colon\mathcal{O}_{W}^{\oplus2}\cong \mathcal{L}\otimes_\mathbb{C}\mathcal{O}_{W}\rightarrow\mathcal{F}|_{W}$ onto the invertible sheaf $\mathcal{F}|_{W}$. By \cite[Proposition 7.12]{Hartshorne} there exists a unique dominant morphism $f=f_{\mathcal{F},\mathcal{L}}\:W\rightarrow \mathbb{P}^1$ such that $\mathrm{e}|_W$ is equal to the pull-back by $f$ of the canonical surjection $V\otimes_\mathbb{C} \mathcal{O}_{\mathbb{P}^1}\rightarrow \mathcal{O}_{\mathbb{P}^1}(1)$. This morphism determines in turn a unique rational map $\psi\:X\dashrightarrow \mathbb{P}^1$ whose domain of definition $U$ contains $W$ and for which $\psi_U|_W=f$. Two pairs $(\mathcal{F},\mathcal{L})$ and $(\mathcal{F}',\mathcal{L}')$ determine the same dominant rational map $\psi\:X\dashrightarrow \mathbb{P}^1$ if and only if their associated morphisms $f_{\mathcal{F},\mathcal{L}}\:W(\mathcal{F},\mathcal{L})\rightarrow \mathbb{P}^1$ and $f_{\mathcal{F}',\mathcal{L}'}\:W(\mathcal{F}',\mathcal{L}')\rightarrow \mathbb{P}^1$ coincide on the open subset $\widetilde{W}=W(\mathcal{F},\mathcal{L})\cap W(\mathcal{F}',\mathcal{L}')$.
This is in turn equivalent to the existence of an isomorphism $\alpha_{\widetilde{W}}\colon\mathcal{F}|_{\widetilde{W}}\rightarrow \mathcal{F}'|_{\widetilde{W}}$ of sheaves on $\widetilde{W}$ which maps the global sections $s|_{\widetilde{W}}$ of $\mathcal{F}|_{\widetilde{W}}$, $s\in \mathcal{L}$, bijectively onto the global sections $s'|_{\widetilde{W}}$ of $\mathcal{F}'_{\widetilde{W}}$, $s'\in \mathcal{L}'$. Since $X$ is normal, $\operatorname{codim}_X(X\setminus \widetilde{W})\geq 2$ and $\mathcal{F}$ and $\mathcal{F}'$ are reflexive, $\alpha_{\widetilde{W}}$ uniquely extends to an isomorphism $\alpha\colon\mathcal{F} \rightarrow \mathcal{F}'$ of sheaves over $X$ such that $H^0(\alpha)(\mathcal{L})=\mathcal{L}'$. So the association $(\mathcal{F},\mathcal{L})\mapsto \psi_{\mathcal{F},\mathcal{L}} $ induces a well-defined injective map from the set of equivalence classes of pairs $(\mathcal{F},\mathcal{L})$ to the set of dominant rational maps $\psi\:X\dashrightarrow \mathbb{P}^1$. This map is also surjective, because the equality $\psi=\psi_{\mathcal{F}_\psi,\mathcal{L}_\psi}$ holds for every dominant rational map $\psi\:X\dashrightarrow \mathbb{P}^1$. \end{proof} By a \emph{resolution of $\mathcal{L}$} we mean a resolution of the associated rational map $\psi$. Given two linearly equivalent Weil divisors $D$ and $D'$ on $X$ without common irreducible component, the \emph{pencil $\langle D,D'\rangle$ generated by $D$ and $D'$} is the pencil of divisors on $X$, unique up to an isomorphism, which has $D$ and $D'$ among its members. Its base scheme is equal to the scheme-theoretic intersection of $D$ and $D'$. \subsection{The graph of a pencil} Let $\psi\:X\dashrightarrow \mathbb{P}^1$ be a dominant rational map on a normal variety $X$. The \emph{graph of $\psi$} is the scheme-theoretic closure $\Gamma\subseteq X \times \mathbb{P}^1$ of the graph of the restriction of $\psi$ to its domain of definition. We let $\gamma\colon\Gamma\rightarrow X$ and $\operatorname{p}\colon\Gamma\rightarrow \mathbb{P}^1$ be the restrictions of the projections from $X\times \mathbb{P}^1$ onto its factors. We obtain a commutative diagram \begin{equation}\label{fig:graph} \xymatrix { & \Gamma \ar[dl]_{\gamma} \ar[dr]^{\operatorname{p}} & \\ X \ar@{-->}[rr]_{\psi} & & \mathbb{P}^1 } \end{equation} The proper birational morphism $\gamma\colon\Gamma \rightarrow X$ provides a natural resolution of $\psi$ such that $\psi\circ\gamma=\operatorname{p}$. The next proposition collects properties of this resolution. \begin{prop}[Properties of the graph resolution]\label{lem:canonical-resol} \label{lem:graph-reso-members} Let $X$ be a normal variety. Let $\psi\:X\dashrightarrow \mathbb{P}^1$ be a dominant rational map and let $\mathcal{L}$ be the associated pencil of divisors. Then the following hold: \begin{enumerate}[(a)] \item For every resolution $\tau\:X'\rightarrow X$ of $\psi$ the birational map $\gamma^{-1}\circ\tau\:X'\dashrightarrow \Gamma$ is a proper morphism. \item The indeterminacy locus of $\psi$ is equal to the support of the base scheme $\operatorname{Bs}\mathcal{L}$. \item The morphism $\gamma$ restricts to an isomorphism over $X\setminus \operatorname{Bs}\mathcal{L}$, and $\gamma^{-1}(x)\cong \mathbb{P}^1$ for every $x\in \operatorname{Bs}\mathcal{L}$. \item For every point $p\in \mathbb{P}^1$ the birational morphism $\gamma_p\colon\operatorname{p}^*(p) \rightarrow \mathcal{L}_p$ induced by $\gamma$ is an isomorphism. 
\end{enumerate} \end{prop} \begin{proof} Set $U=\operatorname{dom}(\psi)$ and let $\operatorname{pr}_X\:X\times \mathbb{P}^1\rightarrow X$ denote the projection. (a) Since $\psi\circ \tau$ is a morphism, we have a morphism $\tau\times (\psi\circ \tau)\colon X'\rightarrow X\times \mathbb{P}^1$. Since $X'$ is irreducible and $\tau$ is surjective, the image is contained in $\Gamma$, so we may write this morphism as a composition of some morphism $\tau'\:X'\rightarrow \Gamma$ with the closed immersion $\Gamma\hookrightarrow X\times \mathbb{P}^1$. Both morphisms $\gamma$ and $\tau=\gamma\circ\tau'$ are proper, so $\tau'$ is proper too. (b) With the notation of the proof of Proposition \ref{lem:pencil-as-vectorspaces}, $\psi$ is represented by the morphism $f\:W\rightarrow \mathbb{P}^1$ associated to the restriction of $\mathrm{e}\colon\mathcal{L}\otimes_\mathbb{C} \mathcal{O}_X\rightarrow \mathcal{F}$ to the open subset $W=X\setminus (\operatorname{Supp}(\operatorname{Coker}\ e)\cup \mathcal{F}_{\operatorname{Sing}})$. Let $x$ be a point of $X$ which is not contained in the support of some member $\mathcal{L}_p$. Then $\mathcal{I}_{\mathcal{L}_{p},x}=\mathcal{O}_{X,x}$, and since $\mathcal{I}_{\mathcal{L}_{p}}\cong \mathcal{F}^{\vee}$, it follows that $\mathcal{F}^{\vee}_x$, and hence $\mathcal{F}_x$ is a free $\mathcal{O}_{X,x}$-module. Thus $\mathcal{F}_{\operatorname{Sing}} \subseteq \operatorname{Supp}(\operatorname{Coker}\ e)$ and we have $U=X\setminus \operatorname{Supp}(\operatorname{Coker}\ e)=X\setminus \operatorname{Supp} (\operatorname{Bs}\mathcal{L})$. (c) By the definition of $\Gamma$, $\gamma$ restricts to an isomorphism over $U$. Suppose that for some $x\in \operatorname{Bs}\mathcal{L}$, $\operatorname{pr}_X^{-1}(x)\cong \mathbb{P}^1$ is not fully contained in $\Gamma$. Since $X$ is normal and $\gamma\colon\Gamma\rightarrow X$ is proper and birational, it follows from \cite[Proposition 4.4.1]{EGAIII-1} that the fibers of $\gamma$ are connected, and hence $\gamma^{-1}(x)=\operatorname{pr}_X^{-1}(x)\cap \Gamma$ consists of a unique point $y$, and that there exists an open neighborhood $V$ of $y$ such that $\gamma|_V\:V\rightarrow X$ is an open immersion. Thus the birational map $\gamma^{-1}$ is defined at $x$, and so is $\psi=\operatorname{p}\circ\gamma^{-1}$. But this is impossible, because $x\in \operatorname{Bs}\mathcal{L}=X\setminus U$. (d) The assertion is local over $X$, so we can assume without loss of generality that $X$ is affine, say $X=\operatorname{Spec}(A)$. Then $\psi$ is induced by some rational function $h\in \operatorname{Frac}(A)$. Given a representative $h=f/g$, where $f\in A$, $g\in A\setminus\{0\}$, let $U_{(f,g)}=X\setminus V(f,g)$. The restriction of $\psi$ to $U_{(f,g)}$ is given by $x\mapsto [f(x):g(x)]$. For every point $p=[\lambda:\mu]\in \mathbb{P}^1$ the restriction of the Cartier divisor $\psi_U^*(p)$ to $U_{(f,g)}$ is the zero scheme of the regular function $s_{(f,g)}=\mu f-\lambda g$. Letting $\mathcal{S}=\{(f,g)\in A\times (A\setminus\{0\}),\; h=f/g\}$ we have $U=\bigcup_{(f,g)\in \mathcal{S}}U_{(f,g)}$. The scheme-theoretic closure $\mathcal{L}_p\subseteq X$ of $\psi_U^*(p)$ is defined by the vanishing of all functions $s_{(f,g)}$. 
On the other hand, the graph $\Gamma\subseteq X\times \mathbb{P}^1_{[u:v]}$ is defined by the vanishing of all sections $$\widetilde{s}_{(f,g)}=fv-gu\in H^0(X\times \mathbb{P}^1,\mathcal{O}_{X\times \mathbb{P}^1}(1)),$$ and hence $\operatorname{p}^*(p)\subseteq X\times \mathbb{P}^1$ is defined by the vanishing of the section $\mu u- \lambda v$ and all $\widetilde{s}_{(f,g)}$. From this, we see directly that $\gamma_p\colon\operatorname{p}^*(p)\rightarrow \mathcal{L}_p$ is an isomorphism. \end{proof} \begin{Rem} Let $X$ be a normal variety and let $\mathcal{L}\subseteq H^0(X,\mathcal{F})$ be a pencil. The defining ideal sheaf $\mathcal{I}_\mathcal{L}$ of the base scheme $\operatorname{Bs}\mathcal{L}$ can be described as follows. Consider the homomorphism \begin{equation}\label{eq:ev} \operatorname{ev}\colon\mathcal{L}\otimes_\mathbb{C} \mathcal{F}^\vee\rightarrow \mathcal{O}_X \end{equation} obtained from the evaluation homomorphism $\mathrm{e}\colon\mathcal{L}\otimes_\mathbb{C} \mathcal{O}_X\rightarrow \mathcal{F}$ \eqref{eq:e} by tensoring it with $\mathcal{F}^{\vee}$ and composing with the canonical homomorphism $\mathcal{F}\otimes_{\mathcal{O}_X} \mathcal{F}^{\vee} \rightarrow \mathcal{O}_X$. Then the ideal sheaf $\mathcal{J}=\operatorname{Im}(\operatorname{ev})$ is generated by the ideal sheaves $\mathcal{I}_s$ of the zero schemes of the sections $s$ of $\mathcal{F}$, $s\in \mathcal{L}$. On the other hand, the members of $\mathcal{L}$ are, by definition, the Weil divisors associated to the zero schemes of these sections, that is, the closed subschemes of $X$ with defining ideal sheaves $\mathcal{I}_s^{\vee\vee}$, $s\in \mathcal{L}$. So $\mathcal{I}_\mathcal{L}$ is generated by the ideal sheaves $\mathcal{I}_s^{\vee\vee}$, $s\in \mathcal{L}$. It follows that $\mathcal{J}\subseteq \mathcal{I}_\mathcal{L}$, with equality when $\mathcal{F}$ is invertible. Indeed, if $\mathcal{F}$ is invertible then each $\mathcal{I}_s$ is an invertible ideal sheaf, so $\mathcal{I}_s=\mathcal{I}_s^{\vee\vee}$. \end{Rem} For a pencil $\mathcal{L}$ on a normal variety $X$, Proposition \ref{lem:canonical-resol}(a) says that $\gamma\colon\Gamma \rightarrow X$ is a universal minimal resolution of the dominant rational map $\psi\:X\dashrightarrow \mathbb{P}^1$ determined by $\mathcal{L}$, in the sense that every resolution of $\psi$ factors through it. Another natural resolution of $\psi$ is given by the blow-up $\tau\colon\operatorname{Bl}_{\operatorname{Bs}\mathcal{L}}(X)\rightarrow X$ of the base scheme of $\mathcal{L}$, as shown in the following lemma. When the members of $\mathcal{L}$ are Cartier, it is a classical fact that the induced birational morphism $\gamma^{-1}\circ \tau\colon\operatorname{Bl}_{\operatorname{Bs}\mathcal{L}}(X)\rightarrow\Gamma$ is an isomorphism; see e.g.\ \cite[Proposition 7.12 and \S 7.1.3]{Dolgachev-Classical_AG_modern_view}. This is no longer true in general for pencils whose members are not Cartier, see Example \ref{ex:non-iso-Blowup-discr}. \begin{lem}[The blowup of $\operatorname{Bs} \mathcal{L}$ is a resolution] Let $X$ be a normal variety. Let $\psi\:X\dashrightarrow \mathbb{P}^1$ be a dominant rational map determined by a pencil of divisors $\mathcal{L}$. Then the blow-up $\tau\colon\operatorname{Bl}_{\operatorname{Bs}\mathcal{L}}(X)\rightarrow X$ of the base scheme of $\mathcal{L}$ is a resolution of $\psi$. \end{lem} \begin{proof} Let $\widetilde{X}=\operatorname{Bl}_{\operatorname{Bs}\mathcal{L}}(X)$.
To verify that $\psi\circ \tau\colon\widetilde{X}\dashrightarrow \mathbb{P}^1$ is a morphism we can assume without loss of generality that $X=\operatorname{Spec}(A)$ is affine and that $\psi$ is the rational map defined by some rational function $h\in\operatorname{Frac}(A)$. With the notation of the proof of Proposition \ref{lem:canonical-resol}(d), $\operatorname{Bs}\mathcal{L}$ is the closed subscheme of $X$ whose defining ideal $I$ is generated by the regular functions $f$ and $g$ such that $(f,g)\in \mathcal{S}$. By definition of the blow-up, the ideal sheaf $\mathcal{J}=\tau^{-1}(I)\cdot \mathcal{O}_{\widetilde{X}}$ is invertible. It is generated by all regular functions $\tau^*f$ and $\tau^*g$, where $(f,g)\in \mathcal{S}$. Thus for every point $y_0\in \widetilde{X}$ there exists an element $(f,g)\in \mathcal{S}$ such that the stalk $\mathcal{J}_{y_0}$ is generated by $\tau^*f$ or $\tau^*g$, say $\tau^*f$, the situation being symmetric for $\tau^*g$. It follows that there exists an open neighborhood $V\subseteq \widetilde{X}$ of $y_0$ and a regular function $\widetilde{g}$ on $V$ such that $\tau^*g|_V=\widetilde{g}\tau^*f|_V$. On this neighborhood the composition $\psi\circ \tau\colon\widetilde{X}\dashrightarrow \mathbb{P}^1$ is given by $$y\mapsto [(\tau^*f)(y):(\tau^*g)(y)]=[1:\widetilde{g}(y)],$$ so $y_0\in \operatorname{dom}(\psi\circ \tau)$. Thus $\psi\circ \tau\colon\widetilde{X}\dashrightarrow \mathbb{P}^1$ is a morphism, and hence $\tau\colon\widetilde{X}\rightarrow X$ is a resolution of $\psi\:X\dashrightarrow \mathbb{P}^1$. \end{proof} \section{Terminal rank one Fano pencils and associated Mori fiber spaces} \label{sec:pencils_and_resolutions} Our strategy to construct Mori fiber completions of affine spaces is to use pencils of Weil divisors on normal projective completions of $\mathbb{A}^n$'s and to produce the desired Mori fiber spaces as outputs of relative MMP's run from suitable resolutions of these pencils. For this approach to work we need in particular to find properties of a variety $X$ and of the members of the pencil whose combination guarantees that a specific open subset is preserved under appropriate choices of resolutions and relative MMP's. \subsection{$\mathbb{Q}$-factorial terminal resolutions}\label{sub:QFactTerm-res}\label{subsec:terminal-pencil} Let $X$ be a normal variety. Let $\mathcal{L}$ be a pencil of divisors determining a dominant rational map $\psi\:X\dashrightarrow \mathbb{P}^1$. Let $\tau\:X'\rightarrow X$ be a resolution of $\mathcal{L}$, that is, a proper birational morphism from a variety such that $\psi\circ \tau$ is a morphism. By Proposition \ref{lem:canonical-resol} there exists a unique morphism $\Gamma(\tau)\:X'\rightarrow \Gamma$ such that $\tau=\gamma\circ\Gamma(\tau)$ and a commutative diagram \begin{equation}\label{fig:graph-tau} \xymatrix {X' \ar@{->}[rr]^{\Gamma(\tau)} \ar[dr]_{\tau}& & \Gamma \ar[dl]_{\gamma} \ar[dr]^{\operatorname{p}} & \\ & X \ar@{-->}[rr]_{\psi} & & \mathbb{P}^1 } \end{equation} where $(\Gamma,\gamma,\operatorname{p})$ are as in \eqref{fig:graph}. Let $\nu\colon\widetilde\Gamma\rightarrow \Gamma$ be the normalization of $\Gamma$. We call $\widetilde\Gamma$ the \emph{normalized graph} of $\psi$.
If $X'$ is normal then we have $\Gamma(\tau)=\nu\circ \widetilde\Gamma(\tau)$ for some unique birational proper morphism $$\widetilde\Gamma(\tau)\:X'\rightarrow \widetilde\Gamma.$$ \begin{defn}[A thrifty resolution] \label{def:thrifty-reso} Let $X$ be a normal variety and let $\tau\:X'\rightarrow X$ be a resolution of a pencil $\mathcal{L}$ on $X$. \begin{enumerate}[(a)] \item We call the image $\delta(\tau)\subseteq\mathbb{P}^1$ of $\operatorname{Exc} \Gamma(\tau)$ by $\operatorname{p}\circ\ \Gamma(\tau)$ the \emph{discrepancy locus of $\tau$}. \item We say that $\tau$ is \emph{$\mathbb{Q}$-factorial terminal} if $X'$ is $\mathbb{Q}$-factorial terminal. \item We say that a $\mathbb{Q}$-factorial terminal resolution $\tau$ is \emph{thrifty} if $\widetilde \Gamma(\tau)\:X'\rightarrow \widetilde{\Gamma}$ is a $\mathbb{Q}$-factorial terminalization. \end{enumerate} \end{defn} The discrepancy locus is a rough measure of how much a given resolution of $\psi\:X\dashrightarrow \mathbb{P}^1$ differs from the (minimal) graph resolution. \begin{example}[The affine cone in $\mathbb{A}^4$]\label{ex:non-iso-Blowup-discr} On the affine cone $X=\{xv-yu=0\}\subseteq \mathbb{A}^4$ the Weil divisors $\mathcal{L}_0=\{u=v=0\}$ and $\mathcal{L}_\infty=\{x=y=0\}=\mathcal{L}_0+\operatorname{div}(x/u)$ generate a pencil $\mathcal{L}$, whose base scheme $\operatorname{Bs} \mathcal{L}$ is equal to the isolated singular point $p=(0,0,0,0)\in X$. Since the latter has codimension $3$ in $X$, $\mathcal{L}_0$ is not $\mathbb{Q}$-Cartier. In particular, $X$ is not $\mathbb{Q}$-factorial. The associated rational map is \[\psi_\mathcal{L}\:X\dashrightarrow \mathbb{P}^1_{[w_0:w_1]},\quad (x,y,u,v)\mapsto [u:x]=[v:y].\] Its graph $\Gamma$ is isomorphic to the sub-variety of $X\times \mathbb{P}^1_{[w_0:w_1]}$ defined by the equations \[ xw_0-uw_1=0 \quad \textrm{and} \quad yw_0-vw_1=0.\] The morphism $\operatorname{p}\colon\Gamma\rightarrow \mathbb{P}^1$ is a locally trivial $\mathbb{A}^2$-bundle, so $\Gamma$ is smooth. The morphism $\gamma\colon\Gamma\rightarrow X$ is a thrifty $\mathbb{Q}$-factorial terminal resolution of $\mathcal{L}$ with an empty discrepancy locus. It is a small resolution of the singularity $p\in X$ with the exceptional locus consisting of a single curve $\gamma_\mathcal{L}^{-1}(p)\cong \mathbb{P}^1_{[w_0:w_1]}$. It can be also described as the blow-up of the ideal sheaf of $\mathcal{L}_0$. On the other hand, the blow-up $\tau_\mathcal{L}\colon\widetilde{X} \rightarrow X$ of $\operatorname{Bs}\mathcal{L}=\{p\}$ is a resolution of the singularity of $X$ with exceptional divisor $\tau_\mathcal{L}^{-1}(p)\cong \mathbb{P}^1\times \mathbb{P}^1$. In particular, it is a $\mathbb{Q}$-factorial terminal resolution of $\mathcal{L}$. The birational proper morphism $\tau':=\Gamma(\tau_\mathcal{L})\colon\widetilde X\rightarrow \Gamma$ contracts $\tau_\mathcal{L}^{-1}(p)$ onto $\gamma^{-1}(p)$. Since the proper transform by $\tau'$ of every closed fiber of $\operatorname{p}\colon\Gamma\rightarrow \mathbb{P}^1$ is isomorphic to the blow-up of the origin in $\mathbb{A}^2$ the discrepancy locus of $\tau_\mathcal{L}$ is equal to $\mathbb{P}^1$. \end{example} A $\mathbb{Q}$-factorial terminal resolution $\tau\:X'\rightarrow X$ with a finite discrepancy locus induces isomorphisms between general fibers of $\operatorname{p}\colon\Gamma\rightarrow \mathbb{P}^1$ and their proper transforms on $X'$. 
The latter are general fibers of $\operatorname{p}\circ\ \Gamma(\tau)\:X'\rightarrow \mathbb{P}^1$, and since $X'$ is terminal, they are terminal varieties by \cite[Proposition 7.7]{Kollar-Singularities_of_pairs}. On the other hand, by Proposition \ref{lem:graph-reso-members}(d) the fibers of $\operatorname{p}\colon\Gamma\rightarrow \mathbb{P}^1$ are isomorphic to the members of the pencil $\mathcal{L}$ determining $\psi\:X\dashrightarrow \mathbb{P}^1$. The terminality of general members of $\mathcal{L}$ is thus a necessary condition for the existence of a $\mathbb{Q}$-factorial terminal resolution of $\mathcal{L}$. This motivates the following definition: \begin{defn}[A terminal $\mathbb{Q}$-factorial pencil]\label{dfn:Q-factorial-terminal-pencil} A \emph{terminal pencil} (a \emph{$\mathbb{Q}$-factorial terminal pencil}) on a normal variety is a pencil whose general members are terminal (respectively, $\mathbb{Q}$-factorial terminal). \end{defn} In contrast to terminality, the $\mathbb{Q}$-factoriality of general members of $\mathcal{L}$ is not necessary for the existence of a $\mathbb{Q}$-factorial terminal resolution of $\psi\:X\dashrightarrow \mathbb{P}^1$, as illustrated by the following example. \begin{example}[$\mathbb{Q}$-factoriality: general vs generic] \label{ex:Q-fact} Let $X=\mathbb{P}^4_{[x:y:z:t:w]}$ and let $F$ and $H$ be the projective cone over the smooth conic $\{xy-z^2=0\}\subseteq \mathbb{P}^2_{[x:y:z]}$ and the hyperplane $\{t=0\}$, respectively. Denote by $\mathcal{L}$ the pencil generated by $F$ and $2H$. A general member of $\mathcal{L}$ is isomorphic to the projective cone $Z$ in $\mathbb{P}^4$ over the quadric surface $\{xy-z^2+t^2=0\}\subseteq \mathbb{P}^3$. The blowup of the vertex of $Z$ is smooth and the exceptional divisor has discrepancy $1$, so $Z$ is a terminal Fano variety. But it is not $\mathbb{Q}$-factorial. Indeed, its class group is isomorphic to $\mathbb{Z}\oplus \mathbb{Z}$ and is generated for instance by the non-$\mathbb{Q}$-Cartier Weil divisor $\{x=t-z=0\}$ and the hyperplane section $Z\cap \{w=0\}$. On the other hand, the graph of the rational map $$\psi\:X \dashrightarrow \mathbb{P}^1, \; [x:y:z:t:w]\mapsto [xy-z^2:t^2]$$ determined by $\mathcal{L}$ is isomorphic to the subvariety $\Gamma\subseteq X\times \mathbb{P}^1_{[u_0:u_1]}$ with equation $(xy-z^2)u_1+t^2u_0=0$. The generic fiber of $\operatorname{p}\colon\Gamma\rightarrow \mathbb{P}^1$ is isomorphic to the projective cone $Y\subseteq \mathbb{P}^4_{\mathbb{C}(\lambda)}$ over the quadric surface in $\mathbb{P}^3_{\mathbb{C}(\lambda)}$ with equation $xy-z^2+\lambda t^2=0$, where $\lambda=u_0/u_1$. As a variety over $\mathbb{C}(\lambda)$, $Y$ is factorial, with the class group $\operatorname{Cl}(Y)=\operatorname{Pic}(Y)\cong \mathbb{Z}$ generated by the hyperplane section $\{w=0\}$. It follows that $\Gamma$ is factorial (in particular $\mathbb{Q}$-factorial), with the class group generated by the proper transform of $H$ and the hyperplane section $\{w=0\}$. \end{example} By the following criterion, on projective varieties the terminality of a pencil is equivalent to the existence of one terminal member. \begin{lem}[One terminal member is sufficient] \label{lem:Q-term-crit} A pencil on a normal projective variety which has at least one terminal ($\mathbb{Q}$-factorial terminal) member is a terminal (respectively $\mathbb{Q}$-factorial terminal) pencil.
\end{lem} \begin{proof} Since $X$ is projective and $\mathbb{P}^1$ is a smooth curve, $\operatorname{p}\colon\Gamma\rightarrow \mathbb{P}^1$ is a flat projective morphism. By Proposition \ref{lem:graph-reso-members}(d), $\gamma\colon\Gamma \rightarrow X$ induces isomorphisms between scheme-theoretic fibers of $\operatorname{p}$ and members of $\mathcal{L}$. Assume that for some $p\in \mathbb{P}^1$, $\mathcal{L}_p$ is terminal. Then $\operatorname{p}^*(p)$ is terminal and so \cite[Theorem 9.1.14]{Ishii-Intro_to_singularities} implies that general fibers of $\operatorname{p}$, and hence general members of $\mathcal{L}$, are terminal. As a consequence, a general fiber of $\operatorname{p}$ has rational singularities and its singular locus has codimension at least three. By \cite[Theorem 12.1.10]{KollarMori-3d_flips} the $\mathbb{Q}$-factoriality of fibers of $\operatorname{p}$ is then an open condition on the set of closed points of $\mathbb{P}^1$. So if $\mathcal{L}_p$ is in addition $\mathbb{Q}$-factorial then general fibers of $\operatorname{p}$, and hence the general members of $\mathcal{L}$, are $\mathbb{Q}$-factorial terminal. \end{proof} We now relate properties of members of a pencil in a neighborhood of the base locus to global properties of the graph of its associated rational map in a neighborhood of the exceptional locus of the graph resolution. \begin{prop}[Singularities of the graph] \label{lem:neighborhood-control} Let $\mathcal{L}$ be a terminal pencil on a normal variety $X$ and let $Y$ be a member of $\mathcal{L}$. Put $Y'=\gamma_*^{-1}Y$. Then the following hold: \begin{enumerate}[(a)] \item If $Y$ is normal then $\Gamma$ is normal in an open neighborhood of $Y'$. \item If $Y$ is klt and $K_{\Gamma}$ is $\mathbb{Q}$-Cartier in an open neighborhood of $Y'$ then $\Gamma$ is terminal in an open neighborhood of $Y'$. \item If $Y$ is klt, smooth in codimension $2$ and $\mathbb{Q}$-factorial then $\Gamma$ is $\mathbb{Q}$-factorial terminal in an open neighborhood of $Y'$. \end{enumerate} \end{prop} \begin{proof} By Proposition \ref{lem:graph-reso-members}(d), $\gamma$ restricts to an isomorphism over each member of $\mathcal{L}$. In particular, since $\mathcal{L}$ is a terminal pencil, general fibers of $\operatorname{p}$ are terminal. In all three cases $Y$ is normal and $Y'=\operatorname{p}^*(\operatorname{p}(Y'))$ is a prime Cartier divisor on $\Gamma$. By \cite[Corollaire 5.12.7]{EGAIV-2} there exists a normal open neighborhood $V\subseteq \Gamma$ containing $Y'$. This proves (a). (b) By Lemma \ref{lem:different} the log pair $(V, Y')$ is plt in a neighborhood of $Y'$. Let $\pi\:V'\rightarrow V$ be a log resolution of this pair and let $G$ be any exceptional prime divisor of $\pi$ whose image is not contained in a fiber of $\operatorname{p}|_V$. Since $\pi$ induces a log resolution of general fibers of $\operatorname{p}|_V$ and the latter are terminal, it follows that $G$ has positive discrepancy; see e.g. \cite[Proposition 7.7]{Kollar-Singularities_of_pairs}. This implies by Lemma \ref{lem:plt-to-terminal} that $V$, and hence $\Gamma$, is terminal in an open neighborhood of $Y'$. (c) Since klt singularities are Cohen-Macaulay by Lemma \ref{lem:rational_sing}(a), $Y'$ satisfies Serre's condition $S_3$. Since $Y'$ is Cartier, arguing as in the proofs of \cite[Corollary 12.1.9, Lemma 12.1.8]{KollarMori-3d_flips}, we conclude that for every Weil divisor $D$ on $V$ there exists an open neighborhood $V(D)\subseteq V$ of $Y'$ such that $D|_{V(D)}$ is $\mathbb{Q}$-Cartier. 
As in (b) we get a terminal open neighborhood $W\subseteq V(K_V)$ of $Y'$. Since terminal singularities are rational, the group $\operatorname{Cl}(W)/\operatorname{Pic}(W)$ is finitely generated by Lemma \ref{lem:rational_sing}(b). The intersection of $W$ and the open neighborhoods $V(D_i)$, where the $D_i$ range through a finite set of Weil divisors whose classes generate $\operatorname{Cl}(W)/\operatorname{Pic}(W)$, is then a $\mathbb{Q}$-factorial terminal open neighborhood of $Y'$. \end{proof} \begin{nota}\label{nota:open-subsets} For a terminal pencil $\mathcal{L}$ on a normal variety $X$ and a finite subset $\delta\subseteq \mathbb{P}^1$ we put $\Gamma_\delta=\operatorname{p}^{-1}(\mathbb{P}^1\setminus \delta)\subseteq \Gamma $ and $ X_{\delta}= X \setminus \bigcup_{p\in \delta} \mathcal{L}_p \subseteq X$. We define the following property: \begin{enumerate} \item[($\mathbf{TQ}_\delta$)] For every $p\in \mathbb{P}^1\setminus \delta$ the member $\mathcal{L}_p$ is a prime divisor and on some open neighborhood of $\operatorname{Bs} \mathcal{L}$ in $\mathcal{L}_p$ it is klt, smooth in codimension $2$ and $\mathbb{Q}$-factorial. \end{enumerate} \end{nota} Note that for a pencil $\mathcal{L}$ whose general members are $\mathbb{Q}$-factorial terminal the condition ($\mathbf{TQ}_\delta$) holds for the finite set $\delta \subset \mathbb{P}^1$ consisting of points $p$ for which $\mathcal{L}_p$ is not $\mathbb{Q}$-factorial terminal. \begin{cor}[Controlling the discrepancy locus]\label{prop:controled-discrepancy} Let $\mathcal{L}$ be a terminal pencil on a normal variety $X$ and let $\delta \subset \mathbb{P}^1$ be a finite set. If $X_\delta$ is $\mathbb{Q}$-factorial terminal and $(\mathbf{TQ}_\delta)$ holds then $\Gamma_\delta$ is $\mathbb{Q}$-factorial terminal. Consequently, the discrepancy locus of every thrifty $\mathbb{Q}$-factorial terminal resolution of $\mathcal{L}$ is contained in $\delta$. \end{cor} \begin{proof} Let $E=\gamma^{-1}(\operatorname{Bs} \mathcal{L})_\mathrm{red}$ be the exceptional locus of $\gamma$. By assumption $\Gamma_\delta \setminus E \cong X_\delta\setminus \operatorname{Bs}\mathcal{L}$ is $\mathbb{Q}$-factorial terminal. On the other hand, it follows from Proposition \ref{lem:neighborhood-control} that for every $p\in \mathbb{P}^1\setminus \delta$ the open set $\Gamma_\delta$ is $\mathbb{Q}$-factorial and terminal in a neighborhood of the intersection of $E$ with the proper transform of $\mathcal{L}_p$. Since the union of such neighborhoods is an open neighborhood of $E\cap \Gamma_\delta$ in $\Gamma_\delta$, it follows that $\Gamma_\delta$ is $\mathbb{Q}$-factorial terminal. The second assertion follows from Lemma \ref{lem:terminalization}. \end{proof} \subsection{Terminal rank one Fano pencils and relative MMPs}\label{sub:FanoRank1} In this subsection we consider pencils of Weil divisors whose general members are terminal Fano varieties of Picard rank one and the outputs of relative MMP's run from their resolutions. We keep the notation of subsection \ref{subsec:terminal-pencil}. \begin{defn}[A terminal rank one Fano pencil]\label{rofp} Let $X$ be a normal projective variety of dimension at least $2$. A \emph{terminal rank one Fano pencil} on $X$ is a pencil $\mathcal{L}$ whose general members are terminal Fano varieties of Picard rank one.
The \emph{degeneracy locus} of $\mathcal{L}$ is the finite set $\delta(\mathcal{L})\subset \mathbb{P}^1$ consisting of points $p$ such that the member $\mathcal{L}_p$ is either reducible or has Picard rank strictly higher than one. \end{defn} It is known that general fibers of Mori fiber spaces can have Picard rank higher than one (see e.g.\ Example \ref{ex:dP_fibr}). But the additional assumption that the Picard rank of general fibers is one, which we impose in Definition \ref{rofp}, allows us to control the effect of running relative MMP's on resolutions of such pencils more easily. Even with this restriction there is still a large natural geometric supply of pencils that can be used to construct Mori fiber completions of $\mathbb{A}^n$'s, see Section \ref{sec:Affine-Spaces}. \begin{defn}[A compatible thrifty resolution]\label{def:Fano-compatible_res} Let $\mathcal{L}$ be a terminal rank one Fano pencil on a normal projective variety. A \emph{compatible thrifty resolution} of $\mathcal{L}$ is a thrifty $\mathbb{Q}$-factorial terminal resolution of $\mathcal{L}$ (see Definition \ref{def:thrifty-reso}) whose discrepancy locus is contained in the degeneracy locus of $\mathcal{L}$. \end{defn} \begin{example}[Simple low-dimensional examples]\label{ex:low-dim-QT-FanoRk1}\ \begin{enumerate}[(a)] \item A terminal rank one Fano pencil on $\mathbb{P}^2$ consists of lines or of conics, and the usual minimal resolution of base points of the pencil is a compatible thrifty resolution. \item Since $\mathbb{P}^2$ is the only terminal del Pezzo surface of Picard rank one, the only terminal rank one Fano pencils on $\mathbb{P}^3$ are the pencils of planes. Clearly, the base scheme of every such pencil is a line and its blowup gives a compatible thrifty resolution. \end{enumerate} \end{example} Let $X_0$ be a $\mathbb{Q}$-factorial terminal projective variety and let $f_0\:X_0\rightarrow \mathbb{P}^1$ be a surjective morphism. Recall \cite[3.31, Example 2.16]{KollarMori-Bir_geometry} that a $K_{X_0}$-MMP $\varphi\:X_0\dashrightarrow X_{m}=\hat{X}$ relative to $f_0$ consists of a finite sequence $\varphi=\varphi_{m}\circ\cdots\circ\varphi_1$ of birational maps \[\xymatrix@R-1em@C-1em{ X_{k-1} \ar@{-->}[rr]^-{\varphi_k} & & X_k \\ & \mathbb{P}^1 \ar@{<-}[ul]^*{ f_{k-1}} \ar@{<-}[ur]_*{ f_k} & }\] between $\mathbb{Q}$-factorial terminal projective varieties, where each $\varphi_k$ is associated to an extremal ray of the closure $\overline{\operatorname{NE}}( X_{k-1}/\mathbb{P}^1)$ of the relative cone of $1$-cycles of $X_{k-1}$ over $\mathbb{P}^1$. The morphisms $f_k\:X_k\rightarrow \mathbb{P}^1$ are the induced surjections. Each $\varphi_k$ is either a relative divisorial contraction or a flip whose flipping and flipped curves are contained in fibers of $ f_{k-1}$ and $f_k$. We say that a relative MMP terminates if either $K_{\hat{X}/\mathbb{P}^1}$ is $f_m$-nef or if $f_m$ factors through a Mori fiber space given by a contraction of an extremal ray in $\overline{\operatorname{NE}}( X_{m}/\mathbb{P}^1)$. \begin{prop}[Mori fiber completions from pencils with compatible resolutions]\label{thm:MainThm-Completion} Let $\mathcal{L}$ be a terminal rank one Fano pencil on a normal projective variety $X$ and let $\psi_\mathcal{L}\:X\dashrightarrow \mathbb{P}^1$ be the associated dominant rational map. Assume that $\mathcal{L}$ has a compatible thrifty resolution $\tau\:X'\rightarrow X$.
Then there exists a $K_{X'}$-MMP $\varphi\:X'\dashrightarrow\hat{X}$ relative to $ \psi_\mathcal{L}\circ\tau\:X'\rightarrow \mathbb{P}^1$ which terminates. Furthermore, every terminating $K_{X'}$-MMP relative to $\psi_\mathcal{L}\circ\tau$ restricts to an isomorphism over $\mathbb{P}^1\setminus\delta(\mathcal{L})$ and its output is a Mori fiber space over $\mathbb{P}^1$. \end{prop} \begin{proof} Put $X'=X_0$ and $f_0=\psi_\mathcal{L}\circ\tau\:X_0\rightarrow\mathbb{P}^1$. Since general fibers of $f_0$ are Fano varieties, their canonical divisors are not pseudo-effective, hence $K_{X_0}$ is not pseudo-effective over $\mathbb{P}^1$. This implies that the output $\hat{f}=f_m\colon\hat{X}\rightarrow\mathbb{P}^1$ of any $K_{X_0}$-MMP $\varphi\:X_0\dashrightarrow \hat{X}$ relative to $ f_0\colon X_0\rightarrow\mathbb{P}^1$ that terminates, factors through a Mori fiber space $g\colon\hat{X}\rightarrow T$ over a normal projective variety $T$ given by the contraction of some extremal ray of $\overline{\operatorname{NE}}(\hat{X}/\mathbb{P}^1)$. The termination of at least one such relative MMP is guaranteed by \cite[Corollary 1.3.3]{BCHM}. We show by induction that for every $k\in\{1,\ldots,m\}$ the birational map $\varphi_k\colon X_{k-1}\dashrightarrow X_k$ restricts to an isomorphism over $\mathbb{P}^1\setminus\delta(\mathcal{L})$. Let $\sigma\colon X_{k-1}\rightarrow Y$ be the birational extremal contraction associated to some extremal ray in $\overline{\operatorname{NE}}( X_{k-1}/\mathbb{P}^1)$ and let $C$ be a curve contracted by $\sigma$. Since the extremal ray lies in $\overline{\operatorname{NE}}( X_{k-1}/\mathbb{P}^1)$, $C$ is contained in some fiber $f_{k-1}^*(p)$, $p\in \mathbb{P}^1$. Note that by the rigidity lemma \cite[Lemma 1.6]{KollarMori-Bir_geometry}, $\sigma$ does not contract $f_{k-1}^*(p)$. By definition of a terminal rank one Fano pencil and of a compatible thrifty resolution, for every $p\in \mathbb{P}^1\setminus \delta(\mathcal{L})$ the fiber $f_0^*(p)$, and hence by induction the fiber $f_{k-1}^*(p)$ endowed with its reduced structure is a projective variety of Picard rank one. By Remark \ref{rem:trivial contractions} the restriction of $\sigma$ to $f_{k-1}^*(p)$ is an isomorphism. It follows that the exceptional locus of $\sigma$ is contained in fibers of $f_{k-1}$ over $\delta(\mathcal{L})$, hence that $\varphi_k$ restricts to an isomorphism over $\mathbb{P}^1\setminus\delta(\mathcal{L})$. Finally, since general fibers of $\hat{f}\colon\hat{X}\rightarrow\mathbb{P}^1$ have Picard rank one, $\hat{f}\colon\hat{X}\rightarrow\mathbb{P}^1$ cannot be decomposed into a Mori fiber space $g\colon\hat{X}\rightarrow T$ over a base $T\rightarrow\mathbb{P}^1$ of positive relative dimension, hence $\hat{f}\colon\hat{X}\rightarrow\mathbb{P}^1$ itself is a Mori fiber space. \end{proof} \begin{cor}\label{cor:MFS-Completion} Let $X$ be a normal projective variety and let $\mathcal{L}$ be a terminal rank one Fano pencil on $X$ that admits a compatible thrifty resolution. Then $X_{\delta(\mathcal{L})}\setminus \operatorname{Bs}\mathcal{L}$ (see Notation \ref{nota:open-subsets}) admits a Mori fiber completion $\pi\:V\rightarrow \mathbb{P}^1$. Furthermore if $p\in \mathbb{P}^1\setminus \delta(\mathcal{L})$ then the scheme-theoretic fiber $\pi^*(p)$ is isomorphic to the member $\mathcal{L}_p$ of $\mathcal{L}$. \end{cor} \begin{proof} Let $\tau\:X'\rightarrow X$ be a compatible thrifty resolution of $\mathcal{L}$. 
Put $\tau'=\Gamma(\tau)$ and let $\varphi\:X'\dashrightarrow V$ be a $K_{X'}$-MMP relative to $\operatorname{p}\circ\ \tau'\:X'\rightarrow \mathbb{P}^1$ which terminates. By Proposition \ref{thm:MainThm-Completion}, $V$ has a structure of a Mori fiber space $\pi\:V\rightarrow \mathbb{P}^1$. The desired open embedding is given by the restriction to $X_{\delta(\mathcal{L})}\setminus \operatorname{Bs} \mathcal{L}$ of the birational map $\varphi\circ (\gamma\circ \tau')^{-1}\:X\dashrightarrow V$. Indeed, the birational map $\gamma^{-1}\:X\dashrightarrow \Gamma$ induces an isomorphism between $X_{\delta(\mathcal{L})}\setminus \operatorname{Bs} \mathcal{L}$ and $\Gamma_{\delta(\mathcal{L})}\setminus E$. On the other hand, since by the definition of a compatible thrifty resolution the image of $\operatorname{Exc}(\tau')$ by $\operatorname{p}\circ\tau'\:X'\rightarrow \Gamma\rightarrow \mathbb{P}^1$ is contained in $\delta(\mathcal{L})$, the birational map $(\tau')^{-1}$ restricts to an isomorphism over $\operatorname{p}^{-1}(\mathbb{P}^1\setminus \delta(\mathcal{L}))=\Gamma_{\delta(\mathcal{L})}$. It follows in turn from Proposition \ref{thm:MainThm-Completion} that the rational map $\varphi\circ (\tau')^{-1}\colon\Gamma\dashrightarrow V$ restricts to an isomorphism over $\Gamma_{\delta(\mathcal{L})}$. The second assertion follows from Proposition \ref{lem:graph-reso-members}. \end{proof} The next example illustrates the process of taking a compatible thrifty resolution of a terminal rank one Fano pencil and then running a relative MMP as above. It shows in particular that different runs of the MMP may lead to different Mori fiber completions. \begin{example}[Mori fiber completions of $\mathbb{A}^2$ from pencils]\label{ex:conic-pencil} Let $\mathcal{L}$ be a pencil on $\mathbb{P}^2=\operatorname{Proj}(\mathbb{C}[x,y,z])$ generated by a smooth conic $C$ and twice a line $H$ tangent to $C$. Up to a projective equivalence we may assume that $C=\{xz-y^2=0\}$ and $H=\{x=0\}$. The graph of $\psi_\mathcal{L}$ is the surface $\Gamma \subseteq \mathbb{P}^2\times \mathbb{P}^1_{[u:v]}$ defined by the bi-homogeneous equation $(xz-y^2)v-x^2u=0$. It is normal and its unique singular point $p=([0:0:1],[1:0])$ is supported at the intersection of the proper transform of $H$ with the exceptional divisor $E\cong \mathbb{P}^1$ of $\gamma=\operatorname{pr}_1\colon\Gamma \rightarrow \mathbb{P}^2$. The singular point is a cyclic quotient singularity of type $A_3$. Let $\tau'\:X'\rightarrow \Gamma$ be the minimal resolution of the singularity of $\Gamma$. Then $\tau=\gamma\circ \tau'$ is a compatible thrifty resolution of $\mathcal{L}$. The exceptional locus of $\tau'$ is a chain of three smooth rational curves $F_0$, $F$, $F_1$ with self-intersection numbers equal to $-2$, having $F$ as its middle component. The proper transforms $E'$ and $H'$ of $E$ and $H$ in $X'$ are smooth rational curves with self-intersection $-1$, and they intersect $\operatorname{Exc}(\tau')$ transversally along the curves $F_0$ and $F$ respectively. The $K_{X'}$-MMP relative to $f'=\operatorname{p}\circ\ \tau'\:X'\rightarrow \mathbb{P}^1$ first contracts $H'$, then the image of $F$ and finally either the image of $F_0$ or the image of $F_1$. The resulting Mori fiber space $\pi\:V\rightarrow \mathbb{P}^1$ is thus isomorphic to $\rho_0=\operatorname{pr}_2\colon\mathbb{F}_0=\mathbb{P}^1\times \mathbb{P}^1 \rightarrow \mathbb{P}^1$ in the first case and to the Hirzebruch surface $\rho_1\colon\mathbb{F}_1\rightarrow \mathbb{P}^1$ in the second case. 
The first case yields an open embedding of $\mathbb{A}^2=\mathbb{P}^2\setminus H$ into $\mathbb{F}_0$ as the complement of the proper transforms of $E$ and $F_1$, which are respectively a section with self-intersection number $0$ and a fiber of $\rho_0$. In the second case we obtain an open embedding of $\mathbb{A}^2$ into $\mathbb{F}_1$ as the complement of the proper transforms of $E$ and $F_0$, which are respectively the negative section and a fiber of $\rho_1$. \end{example} \section{Obtaining Mori fiber completions from special pencils}\label{sec:H-special} We now consider a class of pencils to which the methods of Section \ref{sec:pencils_and_resolutions} apply. A \emph{polarized ($\mathbb{Q}$-factorial) pair} $(X,H)$ is by definition a pair consisting of a normal projective ($\mathbb{Q}$-factorial) variety $X$ of dimension at least $2$ and an ample prime Weil divisor $H$ on $X$. \begin{defn}[$H$-special pencils]\label{def:H-special_pencil} Let $(X,H)$ be a polarized $\mathbb{Q}$-factorial pair. An \emph{$H$-special} pencil on $X$ is a pencil $\mathcal{L}$ which satisfies the following properties: \begin{enumerate}[(a)] \item $dH$ is a member of $\mathcal{L}$ for some integer $d\geq 1$. \item The base locus $\operatorname{Bs} \mathcal{L}$ is irreducible. \item If $d=1$ then the base scheme $\operatorname{Bs} \mathcal{L}$ is smooth or its support is contained in $\operatorname{Sing}(H)$. \end{enumerate} \end{defn} \subsection{Integrity of members} \begin{lem}\label{lem:pencil-local-smooth} Let $(R,\mathfrak{m})$ be a noetherian integral local ring and let $f\in \mathfrak{m}$ be a nonzero element such that the ring $R/(f)$ is regular. Then $R$ is regular and for every $h\in \mathfrak{m}^2$ the ring $R/(f+h)$ is regular. \end{lem} \begin{proof} Let $\pi\colon R\rightarrow R/(f)$ be the quotient morphism. Since $R/(f)$ is regular, $f\notin \mathfrak{m}^2$ and the maximal ideal $\pi(\mathfrak{m})$ is generated by a regular sequence $\pi(a_1),\ldots,\pi(a_n)$, where $a_i\in \mathfrak{m}$ and $n=\dim R/(f)$. It follows that $\mathfrak{m}$ is generated by the regular sequence $f,a_1,\ldots, a_n$, hence that $R$ is regular. Furthermore, $\mathfrak{m}^2\subseteq (f^2)+(a_1,\ldots,a_n)$, so for some $v\in R$ we have $h-vf^2\in (a_1,\ldots,a_n)$. Since elements in $1+\mathfrak{m}$ are invertible in $R$, we get $(f+h,a_1,\ldots,a_n)=(f(1+vf),a_1,\ldots,a_n)=\mathfrak{m}$. It follows that the images of $a_1,\ldots, a_n$ under the quotient homomorphism $R\rightarrow R/(f+h)$ form a regular sequence generating the maximal ideal of $R/(f+h)$. Thus, $R/(f+h)$ is regular. \end{proof} \begin{lem}[Integrity and smoothness of members]\label{lem:integral-members} Let $\mathcal{L}$ be an $H$-special pencil on a polarized $\mathbb{Q}$-factorial pair $(X,H)$. Assume that some member of $\mathcal{L}$ is smooth and Cartier at some point $x\in \operatorname{Bs} \mathcal{L}$ and that, in case $H$ is a member of $\mathcal{L}$, $x \in \operatorname{Sing} H$. Then every member of $\mathcal{L}$ not supported on $\operatorname{Supp} H$ is a prime divisor which is smooth and Cartier at $x$. \end{lem} \begin{proof} By assumption $\mathcal{L}$ has a member $dH$ for some positive integer $d$ and a member $F\neq dH$ which is smooth and Cartier at $x\in S=(\operatorname{Bs} \mathcal{L})_\mathrm{red}=(F\cap H)_\mathrm{red}$. Any two members of $\mathcal{L}$ differ by a principal divisor, so we infer that all members of $\mathcal{L}$ are Cartier at $x$.
Let $\mathfrak{m}$ be the maximal ideal of the local ring $\mathcal{O}_{X,x}$ and let $f,h\in \mathfrak{m}$ be generators of the ideals of $F$ and $dH$ in $\mathcal{O}_{X,x}$, respectively. Since $F$ is smooth at $x$, by Lemma \ref{lem:pencil-local-smooth}, $f\in\mathfrak{m}\setminus \mathfrak{m}^2$ and $\mathcal{O}_{X,x}$ is regular. If $d>1$ then $h\in \mathfrak{m}^2$. If $d=1$ then by assumption $x$ is a singular point of $H$, so by the Jacobian criterion in $\mathcal{O}_{X,x}$ the residue class of $h$ in $\mathfrak{m}/\mathfrak{m}^2$ is trivial, hence again $h\in \mathfrak{m}^2$. The variety $X$ is projective and the divisor $H$ is ample, so since $F$ is $\mathbb{Q}$-Cartier, $S$ is a closed subset of $\operatorname{Supp}(H)$ of pure codimension $1$ in $H$. Let $Y$ be any member of $\mathcal{L}$ other than $dH$. Write $Y=\sum_{i=1}^{r}D_{i}$, where $D_i$ are prime divisors. Each $D_i$ is $\mathbb{Q}$-Cartier and $H$ is ample, so $(D_i\cap H)_\mathrm{red}$ has pure codimension $1$ in $H$. But $(D_i\cap H)_\mathrm{red}\subseteq (Y\cap H)_\mathrm{red}=S$ and $S$ is irreducible, so $(D_i\cap H)_\mathrm{red}=S$ for each $i$. The ideal of $D_i$ in $\mathcal{O}_{X,x}$ is thus contained in $\mathfrak{m}$ for every $1\leq i\leq r$, and hence the ideal of $Y$ is contained in $\mathfrak{m}^r$. On the other hand, the ideal of $Y$ in $\mathcal{O}_{X,x}$ is generated by $f+th$ for some $t\in \mathbb{C}$. Since $f\in\mathfrak{m} \setminus \mathfrak{m}^2$ and $h\in \mathfrak{m}^2$, we have $f+th\in \mathfrak{m}\setminus \mathfrak{m}^2$. Thus $r=1$ and so $Y$ is a prime divisor, which by Lemma \ref{lem:pencil-local-smooth} is smooth at $x$. \end{proof} In Lemma \ref{lem:integral-members} the assumptions that the base locus of $\mathcal{L}$ is irreducible and that $\mathcal{L}$ has a member which is smooth at some point of $\operatorname{Bs}\mathcal{L}$ are both necessary to ensure primeness of all members of $\mathcal{L}$ other than $dH$. The same applies to the additional assumption, in case $d=1$ where $H$ is itself a member of $\mathcal{L}$, that $\operatorname{Supp}(\operatorname{Bs}\mathcal{L})$ contains a singular point of $H$. This is illustrated by the following examples. \begin{example}[Failure of integrity] \label{ex:irreducible}\ \begin{enumerate}[(a)] \item Let $\mathcal{L}$ be a pencil on $\mathbb{P}^2$ generated by a smooth conic $C$ and twice a line $H$ meeting $C$ at two distinct points. The sum of the two lines tangent to $C$ at those points is a reducible member of $\mathcal{L}$. Similarly, the pencil in $\mathbb{P}^3$ generated by the projective cones over $C$ and $2H$ has a reducible member consisting of the union of two planes. In the latter case the base locus is reducible but connected. \item Let $\mathcal{L}$ be a pencil on $\mathbb{P}^2$ generated by a nodal cubic curve $C$ and $3H$, where $H$ is the line tangent to one of the two branches of $C$ at its singular point $p=(\operatorname{Bs}\mathcal{L})_\mathrm{red}$. Then every member of the pencil is singular at $p$ and $\mathcal{L}$ has a reducible member. Indeed, up to a projective equivalence we may take $C=\{x^3+y^3=xyz\}$, $H=\{x=0\}$ and $p=[0:0:1]$, the two branches of $C$ at $p$ being tangent to the lines $\{x=0\}$ and $\{y=0\}$. The members of $\mathcal{L}$ are $\mathcal{L}_{[a:b]}=\{a(y^3-xyz)+bx^3=0\}$, each of them singular at $p$, and $\mathcal{L}_{[1:0]}$ is the union of the line $\{y=0\}$ and the smooth conic $\{y^2=xz\}$. \item Let $\mathcal{L}$ be a pencil on $\mathbb{P}^2$ generated by a smooth conic $H$ and some other smooth conic meeting $H$ in one point $p$ only, for instance $H=\{yz=x^2\}$ and $\{yz=x^2+y^2\}$, which meet only at $p=[0:0:1]$. The pencil has a non-reduced member supported on the line tangent to both conics at $p$, namely the double line $\{y^2=0\}$ in these coordinates.
\end{enumerate} \end{example} \begin{cor}\label{cor:H-special-members} Let $\mathcal{L}$ be an $H$-special pencil on a polarized $\mathbb{Q}$-factorial pair $(X,H)$. In case $H$ is a member of $\mathcal{L}$ and $\operatorname{Supp} (\operatorname{Bs} \mathcal{L})\nsubseteq \operatorname{Sing} H$ assume additionally that $\operatorname{Cl}(X)=\mathbb{Z}\langle H\rangle$. If $\mathcal{L}$ has a member which is smooth and Cartier on some neighborhood of $\operatorname{Bs} \mathcal{L}$ then every member of $\mathcal{L}$ other than $dH$ is a prime divisor which is smooth and Cartier on some neighborhood of $\operatorname{Bs} \mathcal{L}$. \end{cor} \begin{proof} By assumption $\mathcal{L}$ has $dH$ as a member for some positive integer $d$. Put $S=(\operatorname{Bs} \mathcal{L})_\mathrm{red}$. We may assume that $d=1$ and $S\nsubseteq \operatorname{Sing} H$, as otherwise the corollary follows from Lemma \ref{lem:integral-members}. Then every member of $\mathcal{L}$ is prime, because by assumption $\operatorname{Cl}(X)=\mathbb{Z}\langle H\rangle$ in this case. Moreover, by the definition of an $H$-special pencil (see Definition \ref{def:H-special_pencil}(c)), the base scheme $\operatorname{Bs} \mathcal{L}$ is smooth, hence in particular reduced. Then for every member $Y\neq H$ of $\mathcal{L}$ we have $S=\operatorname{Bs} \mathcal{L}=Y\cap H$ scheme-theoretically. Let $\mathfrak{m}$ be the maximal ideal of the local ring $\mathcal{O}_{X,x}$ at a point $x\in S$. Then the ring $\mathcal{O}_{S,x}$ is regular and isomorphic to $\mathcal{O}_{X,x}/(y,h)= (\mathcal{O}_{X,x}/(y))/(h)$ where $y$ and $h$ are generators of the ideals of $Y$ and $H$ in $\mathcal{O}_{X,x}$, respectively. Since $Y\sim H$ is prime, the ring $\mathcal{O}_{X,x}/(y)$ is integral and its quotient by $(h)$ is regular. So $\mathcal{O}_{X,x}/(y)$ is regular by Lemma \ref{lem:pencil-local-smooth}, that is, $Y$ is smooth at $x$. \end{proof} \subsection{$H$-special pencils of Cartier divisors} Recall (Definition \ref{def:Fano}) that for a Fano variety $X$ for which $\operatorname{Cl}(X)\cong \mathbb{Z}\langle H\rangle $, where $H$ is an ample divisor, the \emph{index of $X$} is the unique integer $i_X$ for which $-K_X\sim i_XH$. \begin{lem} \label{lem:fat-member-take0} Let $X$ be a Fano variety with the class group $\operatorname{Cl}(X)\cong \mathbb{Z}\langle H\rangle $ and let $Y\sim dH$ be a normal prime divisor on $X$. If $Y$ is Cartier in codimension $2$ and $d<i_X$ then $Y$ is a Fano variety. \end{lem} \begin{proof} Since $\operatorname{Cl}(X)\cong \mathbb{Z}$, $X$ is $\mathbb{Q}$-factorial, so $K_X+Y$ is $\mathbb{Q}$-Cartier. Since $Y$ is normal and Cartier in codimension $2$, the adjunction formula, Lemma \ref{lem:different}, implies that $-K_Y=-(K_X+Y)|_Y$ is an anticanonical divisor on $Y$. We have $i_X>d$, so the divisor $-(K_X+Y)\sim (i_X-d)H$ is ample, hence $-K_Y$ is ample. \end{proof} It is more difficult to provide uniform conditions which ensure that a given member of an $H$-special pencil has Picard rank one. For pencils of Cartier divisors on mildly singular varieties we can rely on the following result for Picard groups proven in \cite[Expos\'e XII, Corollary 3.6]{SGA2}. \begin{lem}[Grothendieck-Lefschetz theorem] \label{lem:Grothencieck-Lefschetz} Let $Y$ be an ample effective Cartier divisor on a normal variety $X$. Assume that $H^i(Y,\mathcal{O}_X(-\ell Y)|_Y)=0$ for $i=1,2$ and every $\ell>0$, and that $X\setminus Y$ is a local complete intersection. 
Then the restriction homomorphism $\operatorname{Pic} X\rightarrow \operatorname{Pic} Y$ is an isomorphism. \end{lem} \begin{cor}[Finding terminal Fano pencils of rank one]\label{prop:FanoHyperPencil-base-Take2} Let $X$ be a Fano variety of dimension at least $4$ whose singularities are rational and whose class group is generated by a prime Weil divisor $H$. Let $d\in \{1,2,\ldots,i_X-1\}$ and let $\mathcal{L}\subseteq H^0(X, \mathcal{O}_X(dH))$ be an $H$-special pencil of Cartier divisors such that $X\setminus \operatorname{Bs} \mathcal{L}$ is a local complete intersection and which has a terminal member smooth in a neighborhood of $\operatorname{Bs} \mathcal{L}$. Then $\mathcal{L}$ is a terminal rank one Fano pencil and every member of $\mathcal{L}$ other than $dH$ is non-degenerate. \end{cor} \begin{proof} Since $H$ generates $\operatorname{Cl}(X)$, $X$ is in particular $\mathbb{Q}$-factorial and $H$ is ample, so the pair $(X,H)$ is polarized $\mathbb{Q}$-factorial. Since $\mathcal{L}$ has a terminal member and $d<i_X$, general members of $\mathcal{L}$ are terminal Fano varieties by Lemma \ref{lem:Q-term-crit} and Lemma \ref{lem:fat-member-take0}. By Corollary \ref{cor:H-special-members} every member $Y$ of $\mathcal{L}$ other than $dH$ is prime. By assumption the divisor $Y$ is Cartier and, since $\operatorname{Cl}(X)\cong\mathbb{Z}$, it is necessarily ample. Then $\mathcal{O}_X(Y)$ is an ample invertible sheaf, so we have exact sequences $$0\rightarrow \mathcal{O}_X(-(\ell+1)Y)\rightarrow \mathcal{O}_{X}(-\ell Y) \rightarrow \mathcal{O}_{X}(-\ell Y)|_{Y} \rightarrow 0 \quad \textrm{for every } \ell>0.$$ Since $\mathcal{O}_X(Y)$ is ample, by \cite[Corollary 7.67]{Vanishing-Theorems-Cplx-Manifolds} (see also Lemma \ref{lem:rational_sing}(c)), we have $H^{i}(X,\mathcal{O}_{X}(-\ell Y))=0$ for every $i\leq \dim X-1$ and $\ell>0$. Since $\dim X\geq 4$, the associated long exact sequence of cohomology gives $H^{i}(Y,\mathcal{O}_X(-\ell Y)|_Y)=0$ for every $i=1,2$ and $\ell>0$. Since $X\setminus Y$ is contained in $X\setminus \operatorname{Bs} \mathcal{L}$, Lemma \ref{lem:Grothencieck-Lefschetz} implies that $\operatorname{Pic}(Y)\cong \operatorname{Pic}(X)\cong \mathbb{Z}$. \end{proof} Combining the above results we obtain the following theorem. \begin{thm}[Mori fiber completions from $H$-special pencils]\label{thm:FanoHyperPencil-2} Let $X$ be a Fano variety of dimension at least $4$ whose singularities are rational and whose class group is generated by a prime Weil divisor $H$. Assume that $X\setminus H$ is terminal and that for some $d\in \{1,2,\ldots,i_X-1\}$ there exists a Cartier divisor $F\sim dH$ other than $dH$ for which the following hold: \begin{enumerate}[(a)] \item $F$ is terminal, \item $F\cap H$ is irreducible and contained in $F_\mathrm{reg}$, \item $X\setminus (F\cap H)$ is a local complete intersection, \item If $d=1$ then $F\cap H$ is either a smooth scheme or its support is contained in $\operatorname{Sing}(H)$. \end{enumerate} Then $X\setminus H$ admits a Mori fiber completion $\pi\:V\rightarrow \mathbb{P}^1$ such that all members of the pencil $\mathcal{L}=\langle F,dH \rangle$ other than $dH$ appear as fibers of $\pi$. \end{thm} \begin{proof} The pair $(X,H)$ is a polarized $\mathbb{Q}$-factorial pair. Since $F$ is Cartier, the assumptions (b) and (d) imply that $\mathcal{L}$ is an $H$-special pencil and on some neighborhood of $\operatorname{Bs} \mathcal{L}$ the divisor $F$ is smooth, in particular $\mathbb{Q}$-factorial. 
Let $\psi_\mathcal{L}\:X\dashrightarrow \mathbb{P}^1$ be the dominant rational map determined by $\mathcal{L}$. By Corollary \ref{prop:FanoHyperPencil-base-Take2}, $\mathcal{L}$ is a terminal rank one Fano pencil with degeneracy locus contained in $\{ (\psi_\mathcal{L})_*H \}$. Since by Corollary \ref{cor:H-special-members} every member of $\mathcal{L}$ other than $dH$ is smooth in a neighborhood of $\operatorname{Supp}(\operatorname{Bs} \mathcal{L})$, property $(\mathbf{TQ}_\delta)$ (see Notation \ref{nota:open-subsets}) holds for $\delta=\{ (\psi_\mathcal{L})_*H \}$. Since $X\setminus H$ is $\mathbb{Q}$-factorial and terminal by assumption, Corollary \ref{prop:controled-discrepancy} implies that every thrifty $\mathbb{Q}$-factorial terminal resolution of $\mathcal{L}$ is compatible, that is, its discrepancy locus is contained in $\delta$. The assertion then follows from Corollary \ref{cor:MFS-Completion}. \end{proof} As a corollary we obtain Mori fiber completions of affine varieties of dimension $\geq 4$ whose general fibers are completions of affine Fano varieties in the sense of Definition \ref{def:Affine-Fano}. \begin{cor}[Affine Fano fibers] \label{cor:affine_Fano} In the setting of Theorem \ref{thm:FanoHyperPencil-2} assume further that $F$ is $\mathbb{Q}$-factorial, that $(F\cap H)_\mathrm{red}$ is klt and that $d\leq i_X-2$. Let $\pi\:V\rightarrow \mathbb{P}^1$ be the Mori fiber completion of the affine variety $U=X\setminus H$ associated to the pencil $\mathcal{L}=\langle F,dH \rangle$. Then the general fibers of $\pi|_U\:U\rightarrow \mathbb{P}^1$ are affine Fano varieties. \end{cor} \begin{proof} Let $B=V\setminus U$. For a general point $p\in \mathbb{P}^1$, let $V_p=\pi^*p$ and let $B_p$ denote the reduction of the restriction of $B$ to $V_p$ as a Weil divisor. By Lemma \ref{lem:Q-term-crit}, $V_p$ is a $\mathbb{Q}$-factorial terminal Fano variety of Picard rank one. On the other hand, it follows from the proof of Corollary \ref{cor:MFS-Completion} that the log pair $(V_p,B_p)$ is isomorphic to the log pair $(\mathcal{L}_p, (\mathcal{L}_p\cap H)_\mathrm{red})$. Since $(\mathcal{L}_p\cap H)_\mathrm{red}=(F\cap H)_\mathrm{red}$ is irreducible and klt, the log pair $(V_p,B_p)$ is plt by Lemma \ref{lem:different}. We have $K_{\mathcal{L}_p}+({\mathcal{L}_p}\cap H)_\mathrm{red}=(-i_X+d) H|_{\mathcal{L}_p}+ H|_{\mathcal{L}_p}=(d-i_X+1)H|_{\mathcal{L}_p}$ by adjunction, so $-(K_{V_p}+B_p)$ is ample, because $d\leq i_X-2$. \end{proof} \section{Mori fiber completions of affine spaces over $\mathbb{P}^1$}\label{sec:Affine-Spaces} We now apply our results to the construction of $\mathbb{Q}$-factorial terminal Mori fiber completions of affine spaces over $\mathbb{P}^1$. We begin with a review of some known examples. \subsection{Some known examples} \begin{example}[Examples of product type]\label{ex:products} For every $n\geq 1$ and every $\mathbb{Q}$-factorial terminal Fano variety $X$ of Picard rank one which is a completion of $\mathbb{A}^n$, the projection $\pi=\operatorname{pr}_2\:V=X\times \mathbb{P}^1\rightarrow \mathbb{P}^1$ is a Mori fiber completion of $\mathbb{A}^n\times \mathbb{A}^1$. For instance, we can take $X=\mathbb{P}^n$, which is the only possibility for $n=1,2$. For $n=3$, smooth Fano threefolds of Picard rank one which are completions of $\mathbb{A}^3$ have been classified by Furushima \cite{Fu93} (see also \cite{Pr}). These are: $\mathbb{P}^3$, the smooth quadric threefold in $\mathbb{P}^4$, the quintic del Pezzo threefold in $\mathbb{P}^6$, and a four-dimensional family of prime Fano threefolds of genus $12$.
In higher dimensions, other examples of smooth Fano completions of $\mathbb{A}^n$ of Picard rank one are the quintic del Pezzo fourfold \cite[Theorem 3.1]{Pr93} in $\mathbb{P}^7$ and Fano-Mukai fourfolds of genus $10$ \cite{PZ-genus10_4folds_and_A4}. \end{example} There are several examples of Mori fiber completions of $\mathbb{A}^3$ which are not of product type as in Example \ref{ex:products}. For instance, for every $d\in\{1,2,3,4,5,6,8,9\}$ there exists a Mori fiber completion $\pi\:V\rightarrow \mathbb{P}^1$ of $\mathbb{A}^3$, whose general fibers are smooth del Pezzo surfaces of degree $d$. For $d=9$ we have only the locally trivial $\mathbb{P}^2$-bundles over $\mathbb{P}^1$. A classical example with $d=8$ is recalled below. An example with $d=7$ does not exist, because the generic fiber of the corresponding del Pezzo fibration $\pi\:V\rightarrow \mathbb{P}^1$ would be a minimal smooth del Pezzo surface of degree $7$ and such surfaces do not exist over a field of characteristic zero, see \cite[Theorem 29.4]{Manin-cubic_forms}. An example with $d=6$ can be found in \cite[Theorem 1.2]{Pr16} and \cite{Fukuoka19}. A construction for $d=5$ is given in Example \ref{ex:dP5} below. For examples with $d=1,2,3,4$ see \cite[Theorem 2]{DK4}. For $d\leq 6$ general fibers of the restriction of $\pi\:V\rightarrow \mathbb{P}^1$ to $\mathbb{A}^3$ are not isomorphic to $\mathbb{A}^2$. Indeed, otherwise the generic fiber of $\pi\:V\rightarrow \mathbb{P}^1$ would be a minimal smooth del Pezzo surface of degree $d\leq 6$ over the function field $\mathbb{C}(\mathbb{P}^1)$, containing a Zariski open subset isomorphic to $\mathbb{A}^2_{\mathbb{C}(\mathbb{P}^1)}$, which is impossible by \cite[Proposition 13]{DK4}. In particular, none of these completions is of product type. Note also that for $d\neq 9$ general fibers of the corresponding Mori fiber spaces are smooth del Pezzo surfaces of Picard rank higher than one, hence are not associated to any terminal rank one Fano pencil (cf.\ Example \ref{ex:low-dim-QT-FanoRk1}). \begin{example}[A completion of $\mathbb{A}^3$ into a del Pezzo fibration of degree $8$] \label{ex:dP_fibr} Let $Q\subseteq \mathbb{P}^4$ be a smooth quadric threefold, let $H$ be a hyperplane section of $Q$ cut by a tangent hyperplane and let $F$ be a smooth hyperplane section of $Q$ such that the scheme-theoretic intersection $C=F\cap H$ is irreducible. Then $H$ is the quadric cone $H\cong \mathbb{P}(1,1,2)$, $F\cong \mathbb{P}^1\times \mathbb{P}^1$ and $C$ is a smooth rational curve. Let $\mathcal{L}$ be a pencil on $Q$ generated by $F$ and $H$. For a general member $Y$ of $\mathcal{L}$ the pair $(Y,C)$ is isomorphic to the pair consisting of $\mathbb{P}^1\times \mathbb{P}^1$ embedded as a smooth quadric surface in $\mathbb{P}^3$ and a smooth hyperplane section of it. Let $\alpha\colon\widetilde{Q}\rightarrow Q$ be the blow-up of the unique singular point $q$ of $H$. Its exceptional divisor is $E\cong \mathbb{P}^2$. Let $\beta\colon\widetilde{Q}\rightarrow \mathbb{P}^3$ be the contraction of the proper transform of $H$ onto a smooth conic $C'$. The latter is contained in $H_\infty=\beta(E)$, which is a hyperplane of $\mathbb{P}^3$. Let $\tau\:V\rightarrow Q$ be the blow-up of $Q$ with center at $C$ and exceptional divisor $D$. 
We then have a diagram of Sarkisov links \[\xymatrix{ & (\widetilde{Q},\alpha_*^{-1}H+E) \ar[dl]_{\beta} \ar[dr]^{\alpha} & & & (V,D+\tau_*^{-1}H) \ar[dll]_{\tau} \ar[d]^{\pi} \\ (\mathbb{P}^3,H_\infty) & & (Q,H) \ar@{-->}[ll]^{\varphi} \ar@{-->}[rr]_{\psi_\mathcal{L}} & & \mathbb{P}^1}\] where $\pi\:V\rightarrow \mathbb{P}^1$ is a del Pezzo fibration of degree $8$. Since the hyperplane cutting $H$ is tangent to $Q$, we have $Q\setminus H\cong \mathbb{A}^3$. The variety $V$ is smooth and contains $\mathbb{A}^3$ as the complement of the total transform of $H$. General fibers of $\pi\:V\rightarrow \mathbb{P}^1$ have Picard rank $2$. On the other hand, the generic fiber $V_\eta$ of $\pi$ is a smooth quadric surface over the field $\mathbb{C}(\mathbb{P}^1)$ and we have $\rho(V_\eta)=\rho(V/\mathbb{P}^1)=1$. The Picard group of $V_\eta$ is generated by the restriction $D_\eta$ of $D$. We note that since $-(K_{V_\eta}+D_\eta)=D_\eta$ is ample, for every open subset $U\subseteq \mathbb{P}^1$ the open subset $\pi^{-1}(U)\setminus D\subseteq V$ is a relative affine Fano variety over $U$. The birational map $\varphi=\beta\circ \alpha^{-1}\:Q\dashrightarrow \mathbb{P}^3$ is induced by the linear projection from the point $q\in \mathbb{P}^4$. It restricts to an isomorphism $Q\setminus H\cong \mathbb{P}^3\setminus H_\infty$. The composition $\psi_\mathcal{L}\circ \varphi^{-1}\colon\mathbb{P}^3\dashrightarrow \mathbb{P}^1$ is given by the pencil $\mathcal{L}'$ on $\mathbb{P}^3$ generated by $2H_\infty$ and the proper transform $F'\cong \mathbb{P}^1\times \mathbb{P}^1$ of $F$, which is a smooth quadric surface in $\mathbb{P}^3$ intersecting $H_\infty$ along the conic $C'$. \end{example} Recall \cite{Fujita81} that the quintic del Pezzo fourfold $W_5\subset \mathbb{P}^7$ is the intersection of the Grassmannian $\operatorname{Gr}(2,5) \subset \mathbb{P}^9$ with a general linear subspace of codimension $2$. In particular, $\operatorname{Pic}(W_5)\cong \mathbb{Z}\langle H\rangle $, where $H$ is a hyperplane section of $W_5$. It is known that $W_5$ is the unique smooth Fano fourfold with Fano index $i_{W_5}=3$ and $\operatorname{Pic}(W_5) \cong \mathbb{Z}$ generated by an ample generator $H$ such that $H^4 =5$. A general hyperplane section of $W_5$, called the quintic del Pezzo threefold, is a smooth Fano threefold $V_5\subseteq \mathbb{P}^6$ with $\operatorname{Pic}(V_5) \cong \mathbb{Z}\langle H_\infty\rangle$, Fano index $i_{V_5}=2$ and $H_\infty^3 =5$, where $H_\infty$ is a hyperplane section of $V_5$. Again, by \cite{Fujita81} this is the unique Fano threefold with these invariants. By \cite[Theorem A]{Fu93} $V_5$ has a normal hyperplane section $H$ with a unique singular point $p\in H$ of type $A_4$ such that $V_5\setminus H\cong \mathbb{A}^3$. By \cite[Proposition 15]{Fu86} $H$ contains a unique line $L\subseteq \mathbb{P}^6$. The line passes through $p$. By blowing up $V_5$ with center a suitably chosen anticanonical curve $C$ in $H$, we now construct completions of $\mathbb{A}^3$ into $\mathbb{Q}$-factorial terminal threefolds with del Pezzo fibrations of degree $5$. In the case of a smooth $C$ the construction was communicated to us by Masaru Nagaoka and in the case of a nodal $C$ it was suggested by a referee.
\begin{example}[Completions of $\mathbb{A}^3$ into del Pezzo fibrations of degree $5$]\label{ex:dP5} Let $H\subset V_5$ be a normal hyperplane section with a unique singular point $p\in H$ of type $A_4$ such that $V_5\setminus H\cong \mathbb{A}^3$ and let $L\subseteq \mathbb{P}^6$ be the unique line on $H$. The complement $H\setminus L$ is isomorphic to $\mathbb{A}^2$, so $\operatorname{Cl}(H)\cong \langle L\rangle\cong \mathbb{Z}$. Since $i_{V_5}=2$, by adjunction $\mathcal{O}_H(-K_H)\cong \mathcal{O}_{V_5}(H)|_H$ (see Lemma \ref{lem:different}), so $H$ is a singular del Pezzo surface of degree ${(-K_H)}^2=H^3=5$. Furthermore, given a hyperplane section $F$ of $V_5$ not containing $L$, we have $(-K_H)\cdot L=F|_H\cdot L=F\cdot L=1$. So $-K_H\sim 5L$ and hence $\operatorname{Pic}(H)\subset \operatorname{Cl}(H)$ is the subgroup of index $5$ generated by the class of $-K_H$. Every divisor $C$ in the complete linear system $|-K_{H}|$ appears as the base locus of a unique pencil $\mathcal{L}_C$ on $V_5$ containing $H$ as a member. Indeed, since $V_5$ is a smooth Fano threefold, we have $H^1(V_5,\mathcal{O}_{V_5})=0$ and the assertion follows from the long exact sequence of cohomology associated with the short exact sequence of sheaves $$0\rightarrow \mathcal{O}_{V_5}\rightarrow\mathcal{O}_{V_5}(H)\rightarrow\mathcal{O}_H(-K_H)\rightarrow 0.$$ By adjunction general members of $\mathcal{L}_C$ are del Pezzo surfaces of degree $5$. Denote the blowup of $V_5$ with center $C$ by $\tau\:V_C\rightarrow V_5$. Let $E$ be the exceptional divisor of $\tau$ (possibly reducible and non-reduced). The rational map $\psi_{\mathcal{L}_C}\:V_5\dashrightarrow \mathbb{P}^1$ lifts to a fibration $\pi\:V_C\rightarrow \mathbb{P}^1$ whose general fiber is a del Pezzo surface of degree $5$ and $V_C$ contains $\mathbb{A}^3$ as the complement of the total transform of $H$. The class group of $V_C$ is generated by the classes of the irreducible components of $E$ and of the proper transform $\tau_*^{-1}H$, which is a fiber of $\pi$. Singularities and the $\mathbb{Q}$-factoriality of $V_C$ depend on the properties of the chosen anticanonical divisor $C$. Let us briefly recall some elements concerning the geometry of anticanonical divisors on $H$. We let $\gamma\:S\rightarrow H$ be the minimal resolution of singularities. We have $K_S\sim \gamma^*K_H$ and the exceptional locus $\operatorname{Exc} \gamma$ is a chain $E_1+E_2+E_3+E_4$ of $(-2)$-curves. It is known that the proper transform $L'$ of the unique line $L$ on $H$ is a $(-1)$-curve meeting $\operatorname{Exc} \gamma$ only once and normally at a point of $E_3$. There exists a birational morphism $\sigma\:S\rightarrow \mathbb{P}^2$ which contracts $L'\cup E_3\cup E_2 \cup E_1$ and maps $E_4$ onto a line $\ell$. We have $\sigma^*K_{\mathbb{P}^2}\sim K_S-E_1-2E_2-3E_3-4L'$, so we obtain \begin{equation}\label{eq:K_H} \sigma_*\gamma^*(-K_H)\sim -K_{\mathbb{P}^2}\text{\ \ and\ \ } (-K_H)\sim \gamma_*\sigma^*(-K_{\mathbb{P}^2}) -4L. \end{equation} Given $C\in |-K_H|$ we put $C'=\gamma_*^{-1}C$ and $\overline C=\sigma(C')$. Clearly, $\deg \overline C\leq 3$. Put $q=\sigma(L')$. Write $\sigma_*\gamma^*C=m\ell+D$, where $m\in\{0,1,2,3\}$ and $D$ is an effective divisor of degree $3-m$ not containing $\ell$ in its support. By \eqref{eq:K_H} $C=(3m-4)L+\gamma_*\sigma^*D$. We now discuss the geometry of $V_C$ for some particular choices of $C$. Consider the case $p\notin C$. Since $L$ is ample, the equality $L\cdot C=1$ implies that $C$ is irreducible and reduced.
The induced morphisms $\gamma\:C'\rightarrow C$ and $\sigma\:C'\rightarrow \overline C$ are isomorphisms. By \eqref{eq:K_H} $\sigma_*\gamma^*C=\overline C$ is either an elliptic curve or a rational nodal curve or a rational cuspidal curve smooth at $q$ and intersecting $\ell$ with multiplicity $3$ at $q$. The fact that $H$ is smooth along $C$ implies that general fibers of $\pi\:V_C\rightarrow \mathbb{P}^1$ are smooth del Pezzo surfaces. Furthermore, since $C$ is irreducible and reduced, $E$ is irreducible and reduced. Since $E$ is Cartier, $\operatorname{Pic}(V_C)=\operatorname{Cl}(V_C)$ is freely generated by the classes of $E$ and $\tau_*^{-1}H$. In particular, $V_C$ is $\mathbb{Q}$-factorial and has Picard rank $2$. If $C$ is an elliptic curve then $V_C$ is smooth. If $C$ is nodal then locally analytically around the node the blowup of $C$ is isomorphic to the blowup of $\{xy=z=0\}\subseteq \mathbb{A}^3$, so its unique singular point is analytically isomorphic to $((0,0,0),[1:0])\in \{zv=xyu\}\subseteq \mathbb{A}^3\times \mathbb{P}^1$. Thus in this case $V_C$ has a unique ordinary double point supported on the inverse image by $\tau$ of the singular point of $C$. Finally, if $C$ is cuspidal then locally analytically around the cusp the blowup of $C$ is the blowup of $\{x^2+y^3=z=0\}\subseteq \mathbb{A}^3$. In this case $V_C$ has a unique compound du Val singularity $cA_2$ supported on the inverse image by $\tau$ of the singular point of $C$. In any case $V_C$ is a $\mathbb{Q}$-factorial terminal threefold and $\pi\:V_C\rightarrow \mathbb{P}^1$ is a del Pezzo fibration. Consider the case when $C$ is cut out by a general hyperplane section $F$ of $V_5$ passing through $p$. In particular, $F$ is smooth away from $p$ by Bertini's theorem and it does not contain $L$. The equality $F\cdot L=1$ implies that $F$ is smooth at $p$, hence $F$ is smooth. General fibers of $\pi\:V_C\rightarrow \mathbb{P}^1$ are thus smooth del Pezzo surfaces of degree $5$. By the choice of $F$, $L$ is not an irreducible component of $C$. In the notation as above, using the identity $C=(3m-4)L+\gamma_*\sigma^*D$ one checks that $m=1$ and that $D$ is a conic intersecting $\ell$ normally at $q$. By the generality assumption on $F$, $D$ is a smooth conic and then $C$ is an irreducible and reduced nodal rational curve smooth off $p$. So as in the previous situation, $V_C$ is a terminal $\mathbb{Q}$-factorial threefold with a unique ordinary double point supported on the inverse of $p$ by $\tau$ and $\pi\:V_C\rightarrow \mathbb{P}^1$ is a del Pezzo fibration. \end{example} \begin{rem} In Example \ref{ex:dP5} one can easily work out all possible geometries of $C\in |-K_H|$ using the equality $C=(3m-4)L+\gamma_*\sigma^*D$. For instance, if $C$ does not contain $L$ in its support then one shows that $\overline C$ is a reduced conic, smooth or singular, meeting $\ell$ normally at $q$. Assume that it is singular. Then $C$ is a sum of two smooth rational curves intersecting at $p$ only and the corresponding threefold $V_C$ has a unique ordinary double point. The class group of $V_C$ is freely generated by classes of the two irreducible components of $E$ and the class of $\tau_*^{-1}H$ and $V_C$ is not $\mathbb{Q}$-factorial. 
\end{rem} \begin{example}[A completion of $\mathbb{A}^4$ into a quintic del Pezzo threefold fibration]\label{exa:delPezzofourfold-Take1} By \cite[Theorem 3.1(iv)]{Pr93} there exists an open embedding of $\mathbb{A}^4$ into the quintic del Pezzo fourfold $W_5$ such that the complement is a normal hyperplane section $H$ of $W_5$ (its singular locus consists of a unique ordinary double point $p$). Let $\mathcal{L}$ be a pencil on $W_5$ generated by $H$ and by a general hyperplane section $F$. The base locus of $\mathcal{L}$ is a smooth del Pezzo surface $S$ of degree $5$. Let $\tau\:V\rightarrow W_5$ be the blowup of $W_5$ with center at $S$. The rational map $\psi_\mathcal{L}\:W_5\dashrightarrow \mathbb{P}^1$ lifts to a Mori fiber space $\pi\:V\rightarrow \mathbb{P}^1$ and its general fibers are quintic del Pezzo threefolds. The variety $V$ is smooth and it contains $\mathbb{A}^4$ as the complement of the union of the proper transform of $H$ and of the exceptional divisor $E$ of $\tau$. We note that $E$ intersects a general fiber $V_5$ of $\pi$ along a smooth del Pezzo surface $B$ of degree $5$. In particular, $V_5\setminus B$ is an affine Fano variety. Since $B$ is smooth, by the classification in \cite{Fu93}, $V_5\setminus B$ is not isomorphic to $\mathbb{A}^3$. We argue that $V_5\setminus B$ is not super-rigid (see Definition \ref{def:affine_rigid}). Let $\ell\subseteq B$ be a line and let $T\subset V_5$ be the surface swept out by the lines in $V_5$ intersecting $\ell$. The projection from $\ell$ defines a birational map $\alpha\:V_5 \dashrightarrow Q$ to a smooth quadric threefold $Q$ in $\mathbb{P}^4$, which contracts $T$ onto a rational cubic contained in a hyperplane section $Q_0$ of $Q$. The image of $B$ by $\alpha$ is a smooth hyperplane section $Q_\infty $ of $Q$ and $\alpha$ induces an isomorphism $V_5\setminus (B\cup T)\cong Q\setminus (Q_0\cup Q_\infty)$. By Example \ref{ex:dP_fibr}, $Q\setminus (Q_0\cup Q_\infty)$ contains a relative affine Fano variety over $\mathbb{P}^1\setminus \{0,\infty\}$. So $V_5\setminus B$ contains a relative affine Fano variety over a curve, hence is not super-rigid. \end{example} \subsection{Pencils on smooth Fano varieties}\label{sub:ComplSmooth} Recall that the Grassmannian $\operatorname{Gr}(k,n)$, which parameterizes $k$-dimensional linear subspaces of a complex vector space of dimension $n$, is a smooth Fano variety of dimension $k(n-k)$ with class group isomorphic to $\mathbb{Z}$ and Fano index $i_{\operatorname{Gr}(k,n)}=n$ (see e.g. \cite[Lemma 10.1.1, p. 510]{Dolgachev-Classical_AG_modern_view}). It has a natural cover by affine open subsets isomorphic to $\mathbb{A}^{k(n-k)}$. Namely, denoting the vector space by $V$, we have the Pl\"ucker embedding $$\mathrm{pl}\colon\operatorname{Gr}(k,n)\hookrightarrow \mathbb{P}(\Lambda^k V)=\operatorname{Proj}(\mathbb{C}[\{x_I\}]),$$ where $I$ ranges through the set of subsets of $k$ distinct elements in $\{1,\ldots, n\}$, which associates to a closed point $\Lambda\in \operatorname{Gr}(k,n)$, represented by a $k \times n$-matrix $A_\Lambda$ of rank $k$, the collection of the $k\times k$-minors of $A_\Lambda$. Then for every subset $I\subseteq \{1,\ldots, n\}$ of $k$ distinct elements the open subset $\operatorname{Gr}(k,n)\setminus\{x_I=0\}$ is isomorphic to $\mathbb{A}^{k(n-k)}$. 
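For instance, for $k=2$ and $n=4$ (a standard computation which we recall only to illustrate these charts), writing $x_{ij}=x_{\{i,j\}}$, the Pl\"ucker embedding identifies $\operatorname{Gr}(2,4)$ with the smooth quadric \[ \{x_{12}x_{34}-x_{13}x_{24}+x_{14}x_{23}=0\}\subseteq \mathbb{P}^5, \] and every point of the open subset $\{x_{12}\neq 0\}$ is represented by a unique matrix \[ A_\Lambda=\begin{pmatrix} 1 & 0 & a & b\\ 0 & 1 & c & d\end{pmatrix}, \] whose $2\times 2$-minors recover the remaining Pl\"ucker coordinates, so that $(a,b,c,d)$ are affine coordinates on $\operatorname{Gr}(2,4)\setminus\{x_{12}=0\}\cong \mathbb{A}^4$.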
\begin{prop}[Mori fiber completions from pencils on Grassmannians] \label{prop:completion_from_pencils} For $k(n-k)\geq 4$, let $H$ be a hyperplane section of $\operatorname{Gr}(k,n)$ such that $\operatorname{Gr}(k,n)\setminus H \cong \mathbb{A}^{k(n-k)}$, let $d\in\{1,\ldots, n-1\}$ and let $F \subset \operatorname{Gr}(k,n)$ be an integral hypersurface such that $F\sim dH$. Assume that $S=F\cap H$ is irreducible and contained in the smooth locus of $F$ and that either $d\geq 2$ or $d=1$ and $S\subseteq \operatorname{Sing}(H)$. Then $\mathbb{A}^{k(n-k)}=\operatorname{Gr}(k,n)\setminus H$ admits a Mori fiber completion over $\mathbb{P}^1$ such that all members of the pencil $\langle F,dH\rangle$ other than $dH$ appear as fibers. \end{prop} \begin{proof} Since $S=F\cap H$ is contained in the smooth locus of $F$, every member of $\mathcal{L}$ other than $dH$ is smooth along $\operatorname{Bs}\mathcal{L}$ by Lemma \ref{lem:pencil-local-smooth}. Since on the other hand $\operatorname{Gr}(k,n)$ is smooth, it follows from Bertini's theorem that a general member of $\mathcal{L}$ is smooth away from $\operatorname{Bs}\mathcal{L}$, hence smooth. Since $S$ is by assumption irreducible and contained in the smooth locus of $F$, the assertion follows from Theorem \ref{thm:FanoHyperPencil-2}. \end{proof} We now deduce Theorem \ref{thm2}, which asserts that given $n\geq 2$ and a hyperplane $H \subset \mathbb{P}^n$, for every integral hypersurface $F\subseteq \mathbb{P}^n$ of degree $d\leq n$ such that $F\cap H$ is irreducible and contained in the smooth locus $F_{\mathrm{reg}}$ of $F$, there exists a Mori fiber completion of the affine $n$-space $\mathbb{A}^n \cong \mathbb{P}^n \setminus H$ over $\mathbb{P}^1$ such that all members of the pencil $\langle F,dH\rangle$ other than $dH$ appear as fibers. \begin{proof}[Proof of Theorem \ref{thm2}] The case $d=1$ is obvious, so we may assume that $d\geq 2$. We have $\operatorname{Gr}(1,n+1)=\mathbb{P}^n$, so for $n\geq 4$ the result follows from Proposition \ref{prop:completion_from_pencils}. We are thus left with the three cases $(n,d)=(2,2)$, $(3,2)$ and $(3,3)$. The case $(2,2)$ is treated in Example \ref{ex:conic-pencil}. In the case $(3,2)$, $F\subset \mathbb{P}^3$ is an integral quadric surface such that $F \cap H$ is irreducible and contained in $F_{\mathrm{reg}}$. By Corollary \ref{cor:H-special-members} and by Bertini's theorem, a general member of the pencil $\mathcal{L}=\langle F,2H\rangle$ is a smooth quadric surface, so the assertion follows from Example \ref{ex:dP_fibr}. In the remaining case $(3,3)$, $F\subset \mathbb{P}^3$ is an integral cubic surface such that $F \cap H$ is irreducible and contained in $F_{\mathrm{reg}}$. Again, by Corollary \ref{cor:H-special-members} and Bertini's theorem, a general member of the pencil $\mathcal{L}=\langle F, 3H \rangle$ is a smooth cubic surface which intersects $H$ along a smooth elliptic curve, so the result follows from \cite[Theorem (a)]{DK3}. \end{proof} \begin{example}[Families of Mori fiber completions of $\mathbb{A}^n$] Let $n\geq 4$ and $d\geq 2$ be integers. Put $\mathbb{P}^n=\operatorname{Proj}(\mathbb{C}[x_0,\ldots, x_n])$ and $H=\{x_0=0\}\subset \mathbb{P}^n$. Let $\mathbb{C}[x_1,\ldots, x_n]_{\leq d}$ denote the affine space of polynomials of total degree at most $d$.
Let $\mathcal{V}_d$ denote its open subset consisting of polynomials $f$ of total degree precisely $d$ for which the scheme-theoretic intersection of $H$ with the closure $F$ in $\mathbb{P}^n$ of the zero locus of $f$ in $\mathbb{A}^n=\operatorname{Spec}(\mathbb{C}[x_1,\ldots, x_n])$ is smooth. So, for every $f\in \mathcal{V}_d$, $F$ is an irreducible hypersurface of degree $d$ which contains the smooth variety $F\cap H$ in its smooth locus. Theorem \ref{thm2} thus applies and for each $f\in \mathcal{V}_d$ gives the existence of a Mori fiber completion $\pi\:X\rightarrow \mathbb{P}^1$ of $\mathbb{A}^n=\mathbb{P}^n\setminus H$ such that all members of the pencil $\langle F,dH\rangle$ other than $dH$ appear as fibers, and for which we have a commutative diagram \[\xymatrix{\mathbb{A}^n \ar[d]_{f} \ar[r] & X \ar[d]^{\pi} \\ \mathbb{A}^1 \ar[r] & \mathbb{P}^1}\] where the horizontal morphisms are open immersions. \end{example} We now present further examples of completions of $\mathbb{A}^4$ and $\mathbb{A}^5$, constructed from the quintic del Pezzo fourfold, from Fano-Mukai fourfolds of genus $10$ and from the quintic del Pezzo fivefold, respectively. \begin{example}[A completion of $\mathbb{A}^4$ into a non-rational Fano threefold fibration] \label{exa:delPezzofourfold-Take 2} As in Example \ref{exa:delPezzofourfold-Take1} above, let $W_5\subset \mathbb{P}^7$ be the quintic del Pezzo fourfold and let $H\subset W_5$ be a singular hyperplane section with a unique ordinary double point $p$ whose complement is isomorphic to $\mathbb{A}^4$. Let $\mathcal{L}$ be the pencil on $W_5$ generated by $2H$ and a general quadric section $F$. A general member $Y$ of $\mathcal{L}$ is a smooth Fano threefold of Picard rank one and index one isomorphic to the intersection of the Grassmannian $\operatorname{Gr}(2,5)$ with two hyperplanes and a quadric (family $B_{10}$ in \cite[Theorem 5.3]{Beau77}). Furthermore, $S=(\operatorname{Bs}\mathcal{L})_\mathrm{red}=H|_Y$ is a smooth anticanonical divisor on $Y$, hence it is a smooth $\mathrm{K}3$ surface (which implies that $Y\setminus S$ is not an affine Fano variety). By Theorem \ref{thm:FanoHyperPencil-2} the pencil $\mathcal{L}$ gives rise to a Mori fiber completion $\pi\:V\rightarrow \mathbb{P}^1$ of $\mathbb{A}^4=W_5 \setminus H$, whose general fibers are isomorphic to the general members of $\mathcal{L}$. Thus, by \cite[Theorem 5.6(ii)]{Beau77}, general fibers of $\pi$ are non-rational. \end{example} \begin{example}[A completion of $\mathbb{A}^4$ into a genus $10$ Fano threefold fibration] \label{exa:mukai4-fold} A Fano-Mukai fourfold of genus $10$ is a smooth Fano fourfold $X$ of Picard rank one, Fano index $i_X=2$, and genus $g:=\frac{1}{2}(H^4)+1=10$, where $H$ is an ample generator of $\operatorname{Pic} X$. By \cite[Remark 13.4, Theorem 1.1]{PZ-genus10_4folds_and_A4} the moduli space of such fourfolds has dimension one and for every such fourfold $X$ there exists an open embedding $\mathbb{A}^4\hookrightarrow X$ whose complement is a generator $H_\infty$ of $\operatorname{Pic} X$ and whose singular locus $T$ is a surface. Let $\mathcal{L}$ be the pencil generated by $H_\infty$ and a general smooth member $F$ of the complete linear system $|H_\infty|$. Since the restriction homomorphism $\operatorname{Pic}(X)\rightarrow \operatorname{Pic}(F)$ is an isomorphism by Lemma \ref{lem:Grothencieck-Lefschetz}, the base locus $\operatorname{Bs} \mathcal{L}=F\cap H_\infty$ of $\mathcal{L}$ is an irreducible and reduced surface, which we denote by $S$.
The singular locus of $S$ is equal to the curve $C=T \cap S$. Since $S$ is singular but not contained in the singular locus of $H_\infty$, we cannot directly apply Theorem \ref{thm:FanoHyperPencil-2} to obtain from $\mathcal{L}$ a Mori fiber completion of $\mathbb{A}^4=X\setminus H_\infty$ over $\mathbb{P}^1$. Instead, we argue as follows. By Bertini's theorem a general member $\mathcal{L}_\lambda$ of $\mathcal{L}$ is smooth off the base locus $S$. Since the scheme-theoretic intersection $\mathcal{L}_\lambda|_{H_\infty}=S$ is smooth off $C=T \cap S$, it follows that $\mathcal{L}_\lambda$ is smooth off the curve $C$. Since $F$ is smooth at every point $x\in C$ whereas $H_\infty$ is singular there, every member of $\mathcal{L}$ other than $H_\infty$ is smooth at $x$ by Lemma \ref{lem:pencil-local-smooth}. We infer that a general member $\mathcal{L}_\lambda$ is smooth, so the base locus $S$ of $\mathcal{L}$ is the scheme-theoretic transversal intersection of any two smooth members of $\mathcal{L}$. The blow-up $\tau\colon V \rightarrow X$ of $S$ is then a resolution of $\psi_\mathcal{L}\:X\dashrightarrow \mathbb{P}^1$ and $\psi_\mathcal{L} \circ \tau\colon V \rightarrow \mathbb{P}^1$ is a Mori fiber space whose general fibers are Fano threefolds of Picard rank one, index $1$ and genus $10$. They are rational by \cite[Theorem 4.6.7]{Iskovskikh_Prokhorov-Fano_varieties} but are not completions of $\mathbb{A}^3$ by \cite{Fu93}. The variety $V$ contains $\mathbb{A}^4$ as the complement of the union of the proper transform of $H_\infty$ and of the exceptional locus of $\tau$. \end{example} \begin{example}[Mori fiber completions of $\mathbb{A}^5$ from pencils on the quintic del Pezzo fivefold $Z_5$] \label{exa:delPezzofivefold} Recall \cite{Fujita81} that the quintic del Pezzo fivefold $Z_5\subset \mathbb{P}^8$ is the intersection of the Grassmannian $\operatorname{Gr}(2,5) \subset \mathbb{P}^9$ with a general linear hyperplane. In particular, $\operatorname{Pic}(Z_5) \cong \mathbb{Z}\langle H\rangle$, where $H$ is a hyperplane section of $Z_5$. It is known that $Z_5$ is the unique smooth Fano fivefold with Fano index $i_{Z_5}=4$ and $\operatorname{Pic}(Z_5) \cong \mathbb{Z}$ generated by an ample generator $H$ for which $H^5 =5$. Furthermore, it follows for instance from the alternative description of $Z_5$ given in \cite[(7.10)]{Fujita81} that there exists an open embedding of $\mathbb{A}^5$ into $Z_5$ whose complement is a non-normal hyperplane section $H$ of $Z_5$. Let $\mathcal{L}_3$ be a pencil on $Z_5$ generated by $3H$ and a general cubic section $F_3$. A general member of $\mathcal{L}_3$ is a smooth Fano fourfold of Picard rank one and Fano index one. By Theorem \ref{thm:FanoHyperPencil-2} the pencil $\mathcal{L}_3$ gives rise to a Mori fiber completion $\pi_3\:X_3\rightarrow \mathbb{P}^1$ of $\mathbb{A}^5=Z_5 \setminus H$, whose general fibers are isomorphic to the general members of $\mathcal{L}_3$. In a similar way, a pencil $\mathcal{L}_2$ generated by $2H$ and a general quadric section $F_2$ gives rise to a Mori fiber completion $\pi_2\:X_2\rightarrow \mathbb{P}^1$ of $\mathbb{A}^5=Z_5 \setminus H$ whose general fibers are smooth Fano fourfold of Picard rank one and Fano index two. We do not know whether general fibers of these fibrations are rational. Finally, one can consider a pencil $\mathcal{L}_1$ generated by $H$ and a general hyperplane section $F_1$ of $Z_5$. 
The base locus of $\mathcal{L}_1$ is an irreducible singular threefold $V$ whose singular locus is equal to a hyperplane section of the singular locus of $H$. Arguing as in Example \ref{exa:mukai4-fold}, we see that a general member of $\mathcal{L}_1$ is smooth, so the base locus $V$ of $\mathcal{L}_1$ is the scheme-theoretic transverse intersection of any two smooth members of $\mathcal{L}_1$. The blow-up $\tau\colon X_1 \rightarrow Z_5$ of $V$ is then a resolution of $\psi_{\mathcal{L}_1}\:Z_5\dashrightarrow \mathbb{P}^1$ and $\psi_{\mathcal{L}_1} \circ \tau\colon X_1 \rightarrow \mathbb{P}^1$ is a Mori fiber space whose general fibers are quintic del Pezzo fourfolds $W_5$. The variety $X_1$ contains $\mathbb{A}^5$ as the complement of the union of the proper transform of $H$ and of the exceptional locus of $\tau$. A general fiber of the restriction of $\psi_{\mathcal{L}_1} \circ \tau$ to $\mathbb{A}^5$ has a completion into $W_5$ with a smooth hyperplane section of $W_5$ as a boundary, hence by \cite[Theorem 3.1]{Pr93} is not isomorphic to $\mathbb{A}^4$ (even though $W_5$ is a completion of $\mathbb{A}^4$). \end{example} \begin{example}[Mori fiber completions of $\mathbb{A}^n$ with super-rigid affine Fano general fibers]\label{ex:affine_super-rigid} Let $n\geq 4$ and let $\mathcal{L}$ be a pencil on $\mathbb{P}^{n}$ generated by a general hypersurface of degree $n-1$ and by $(n-1)H$, where $H$ is a hyperplane. By Corollary \ref{cor:affine_Fano}, $\mathcal{L}$ gives rise to a Mori fiber completion $\pi\:V\rightarrow \mathbb{P}^1$ of $\mathbb{A}^n=\mathbb{P}^n\setminus H$ such that general fibers of $\pi|_{\mathbb{A}^n}$ are smooth affine Fano varieties, isomorphic to the complement of a smooth hyperplane section of a hypersurface of degree $n-1$ in $\mathbb{P}^n$. If $n\geq 6$ then it is known that such affine Fano varieties are super-rigid \cite[Theorem 2.8, Example 2.9]{CDP17}. For $n=4,5$ the super-rigidity of complements of general hyperplane sections of respectively smooth cubic threefolds in $\mathbb{P}^4$ and smooth quartic fourfolds in $\mathbb{P}^5$ is an open problem. \end{example} \subsection{Pencils on weighted projective spaces}\label{sec:Cone_construction} An important class of $\mathbb{Q}$-factorial rational Fano varieties consists of weighted projective spaces. We fix notation and summarize some basic facts (see e.g. \cite{Dolg-WPS} and \cite{Fl00} for more). Given a non-decreasing sequence of positive integers $\bar a=(a_0,\ldots,a_n)$, we define an $\mathbb{N}$-grading on $\mathbb{C}[x_0,\ldots,x_n]$ by putting $\deg x_i=a_i$, and we let $\mathbb{P}(\bar a)= \operatorname{Proj}(\mathbb{C}[x_0,\ldots,x_n])$. The inclusion of graded rings $\mathbb{C}[y_0^{a_0},\ldots,y_n^{a_n}]\subseteq \mathbb{C}[y_0,\ldots,y_n]$ leads under the identification $x_i=y_i^{a_i}$ to a finite morphism \begin{equation}\label{eq:pi} \pi\colon\mathbb{P}^n\rightarrow\mathbb{P}(\bar a)\cong\mathbb{P}^n/(\mathbb{Z}_{a_0}\times\cdots\times \mathbb{Z}_{a_n}), \end{equation} where the action is diagonal, by multiplication by an $a_i$-th root of unity on the $i$-th factor, see \cite[\S 1.2.2]{Dolg-WPS}. 
The variety $\mathbb{P}(\bar a)$ is covered by the affine open subsets $\{x_i\neq 0\}\cong \mathbb{A}^n/\mathbb{Z}_{a_i}$, where the generator $\varepsilon$, a primitive $a_i$-th root of unity, acts by $$(x_0,\ldots,\widehat{x_i},\ldots,x_n)\mapsto (\varepsilon^{a_0} x_0,\ldots,\widehat{\varepsilon^{a_i}x_i},\ldots,\varepsilon^{a_n}x_n).$$ In particular, $\mathbb{P}(\bar a)$ is normal and $\mathbb{Q}$-factorial \cite[Lemma 5.16]{KollarMori-Bir_geometry}, with finite quotient singularities. (For a criterion when $\mathbb{P}(\bar a)$ is klt see \cite[Proposition 2.3]{Kasprzyk-terminal_WPS}, cf.\ \cite[Proposition 11.4.12]{CLS11}). Since $\mathbb{P}(a_0,da_1,\ldots,da_n)\cong \mathbb{P}(a_0,a_1,\ldots, a_n)$ for every $d>0$ with $\gcd(d,a_0)=1$, we can assume without loss of generality that $\gcd(a_0,\ldots\widehat{a}_{i},\ldots,a_n)=1$ for every $i\in \{0,1,\ldots,n\}$, in which case one says that the description of the weighted projective space is \emph{well-formed}. The singular locus of a well-formed $\mathbb{P} (\bar a)$ can be described as follows \cite[5.15]{Fl00}: \begin{equation}\label{eq:SingP} [x_0: \ldots :x_n] \in \operatorname{Sing} \mathbb{P}(\bar a) \ \Leftrightarrow \gcd \{a_i:x_i\neq 0\}>1. \end{equation} By \cite[Proposition 2.3]{Mo75} the class group of $\mathbb{P}(\bar a)$ is isomorphic to $\mathbb{Z}$ and is generated by the class of the divisorial sheaf $\mathcal{O}_{\mathbb{P}(\bar a)}(1)$ on $\mathbb{P}(\bar a)$. Furthermore, the sheaf $\mathcal{O}_{\mathbb{P}(\bar a)}(m)$, where $m$ is the least common multiple of $a_0,\ldots, a_n$, is invertible and its class generates the Picard group of $\mathbb{P}(\bar a)$; see also \cite[Exercise 4.1.5 and 4.2.11]{CLS11}. Put $\bar x=(x_0,\ldots,x_n)$. After fixing $\bar a$ such that the description of $\mathbb{P}(\bar a)$ is well-formed, we denote by $\mathbb{C}[\bar x]_{(d)}$ the set of weighted homogeneous polynomials of degree $d$ in the variables $\bar x$ with respect to the weights $\bar a$. Let $f(\bar x)\in \mathbb{C}[\bar x]_{(d)}$. A zero scheme $Y=Z(f(\bar x))$ has, by definition, \emph{degree} $d$. We say that $Y$ is \emph{quasi-smooth} if its affine cone $$\mathcal{C}(Y)=\operatorname{Spec}(\mathbb{C}[\bar x]/(f(\bar x)))\subseteq \mathbb{A}^{n+1}=\operatorname{Spec}(\mathbb{C}[\bar x])$$ is smooth off the origin, equivalently, if $\{\pi^*f=0\}$ is a smooth subvariety of $\mathbb{P}^n$, see \eqref{eq:pi}. The singularities of a quasi-smooth subvariety are finite quotient singularities, hence in particular klt by \cite[Theorem 7.4.9]{Ishii-Intro_to_singularities}. From now on we assume that $n\geq 4$ and that $a_0=1$. We put \begin{equation}\label{eq:P_H} \mathbb{P}=\mathbb{P}(1,a_1,\ldots,a_n) \quad \text{and}\quad H=\{ x_0=0 \} \cong \mathbb{P} (a_1, \cdots , a_n). \end{equation} Then $ \mathcal{O}_\mathbb{P}(H)\cong \mathcal{O}_\mathbb{P}(1)$ generates $\operatorname{Cl}(\mathbb{P})$ and $\mathbb{P}\setminus H$ is isomorphic to the affine $n$-space $\mathbb{A}^n$ with inhomogeneous coordinates $x_i/x_0^{a_i}$, where $i=1,\ldots,n$. Furthermore, since $K_\mathbb{P}\sim -\sum_{i=0}^n\{x_i=0\}$ (see e.g.\ \cite[\S 2.1]{Dolg-WPS}), the Fano index $i_\mathbb{P}$ of $\mathbb{P}$ is equal to $1+\sum_{i=1}^{n} a_i$. The induced description of $H$ as $\mathbb{P}(a_1,\ldots,a_n)$ is not necessarily well-formed (take for instance $\bar a=(1,1,d,\ldots,d)$), but if it is, then by \eqref{eq:SingP} we have $\operatorname{Sing}\mathbb{P}=\operatorname{Sing} H$, so in this case the singular locus of $\mathbb{P}$ has codimension at least $3$.
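To fix ideas, consider the weights $\bar a=(1,1,1,1,2)$, that is, $n=4$ and $(a_1,\ldots,a_4)=(1,1,1,2)$ in \eqref{eq:P_H}; we spell this case out only to illustrate the conventions above. Here \[ \mathbb{P}=\mathbb{P}(1,1,1,1,2),\qquad H=\{x_0=0\}\cong\mathbb{P}(1,1,1,2),\qquad i_\mathbb{P}=1+\sum_{i=1}^{4}a_i=6, \] both descriptions are well-formed, and by \eqref{eq:SingP} the singular locus of $\mathbb{P}$ is the single point $[0:0:0:0:1]$, so $\mathbb{P}$ is smooth in codimension $2$. A general hypersurface $F\subseteq \mathbb{P}$ of degree $5=\sum_{i=1}^{4}a_i$ is a quasi-smooth terminal Fano threefold, namely a member of the family $X_5\subset\mathbb{P}(1,1,1,1,2)$ from the list used in Corollary \ref{cor:95families_bir_rigid_fibers} below, and Proposition \ref{prop:ConePencilGoodres} below applies to every quasi-smooth terminal hypersurface $F\subseteq \mathbb{P}$ of degree $d\in\{2,\ldots,5\}$.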
Theorem \ref{thm1} is a consequence of the combination of the following result with Corollary \ref{cor:MFS-Completion}. \begin{prop}[Mori fiber completions from pencils on $\mathbb{P}(1,\bar a)$]\label{prop:ConePencilGoodres} Let $\mathbb{P}$ and $H$ be as in \eqref{eq:P_H}. Assume that $\mathbb{P}$ is smooth in codimension $2$ (equivalently, the induced description of $H$ is well-formed) and let $F\subseteq \mathbb{P}$ be a quasi-smooth terminal hypersurface of degree $d\in \{2,\ldots,\sum_{i=1}^na_i\}$. Then the pencil $\mathcal{L}$ generated by $F$ and $dH$ is a terminal rank one Fano pencil with quasi-smooth general members and the associated rational map $\psi_\mathcal{L}\colon\mathbb{P}\dashrightarrow \mathbb{P}^1$ admits a compatible thrifty resolution with discrepancy locus contained in $\{(\psi_\mathcal{L})_*H\}$. \end{prop} \begin{proof} Let $f\in \mathbb{C}[\bar x]_{(d)}$ be the irreducible weighted homogeneous polynomial defining the hypersurface $F$. The base locus $\operatorname{Bs} \mathcal{L}$ is the codimension $2$ weighted complete intersection of $\mathbb{P}$ with weighted homogeneous ideal $(f(\bar x),x_0^d)$. The graph $\Gamma$ of $\psi_\mathcal{L}$ is isomorphic to the hypersurface in $\mathbb{P}\times \mathbb{P}^1_{[u_0:u_1]}$ defined by the bi-homogeneous equation $f(\bar x)u_1+x_0^du_0=0$ and the projection $\operatorname{pr}_{\mathbb{P}}$ induces isomorphisms between closed fibers of the restriction to $\Gamma$ of the projection $\operatorname{p}=(\operatorname{pr}_{\mathbb{P}^1})|_\Gamma$ and members of $\mathcal{L}$. Since $F$ is terminal, $\mathcal{L}$ is a terminal pencil by Lemma \ref{lem:Q-term-crit}. By the Lefschetz hyperplane section theorem for weighted projective spaces \cite[Theorem 3.7]{Mo75} and \cite[Remark 4.2]{Okada-stable_rationality_of_Fano_3fold_hypersurfaces}, the group $\operatorname{Cl}(F)$ is isomorphic to $\mathbb{Z}$. It is generated by the restriction $H|_F$ of $H$ to $F$ as a Weil divisor, for which $\mathcal{O}_F(H|_F)\cong (\mathcal{O}_\mathbb{P}(1)|_F)^{\vee\vee}$. This implies in particular that $F\cap H =S$ is irreducible, hence that the exceptional locus $E\cong S\times \mathbb{P}^1$ of $\operatorname{pr}_{\mathbb{P}}\colon\Gamma\rightarrow \mathbb{P}$ is a prime divisor on $\Gamma$. Since $\mathbb{P}$ is smooth in codimension two, every Weil divisor on it is Cartier in codimension two in $\mathbb{P}$. By assumption $F$ is terminal, so it is smooth in codimension $2$, hence smooth and Cartier at general points of $S$. We have $d\geq 2$, so every member of $\mathcal{L}$ other than $dH$ is prime by Lemma \ref{lem:integral-members}, and since $2\leq d<i_\mathbb{P}$, every normal member of $\mathcal{L}$ is Fano by Lemma \ref{lem:fat-member-take0}. Since members of $\mathcal{L}$ are not necessarily Cartier but only $\mathbb{Q}$-Cartier, the fact that all of them have class group isomorphic to $\mathbb{Z}$, hence have Picard rank one, follows again from the Lefschetz hyperplane section theorem for weighted projective spaces. Thus, $\mathcal{L}$ is a terminal rank one Fano pencil with degeneracy locus $\delta(\mathcal{L})=\{(\psi_\mathcal{L})_*H\}$. The affine cone $\mathcal{C}(Y)$ over a member $Y$ of $\mathcal{L}$ other than $dH$ is isomorphic to the hypersurface in $\mathbb{A}^{n+1}$ defined by the equation $f(\bar x)+tx_0^d=0$ for some $t\in \mathbb{C}$. 
Since $\mathcal{C}(F)$ is smooth off the origin $\{O\}$, it follows from Bertini's theorem that a general $\mathcal{C}(Y)$ is smooth outside its intersection with $\mathcal{C}(H)=\{x_0=0\}$. Furthermore, since $d\geq 2$, it follows from Lemma \ref{lem:pencil-local-smooth} that $\mathcal{C}(Y)\setminus \{O\}$ is smooth in a neighborhood of $(\mathcal{C}(Y)\cap \{x_0=0\})\setminus \{O\}$. A general member $Y$ is thus quasi-smooth, and every member $Y$ other than $dH$ is klt in a neighborhood of $S$. In view of Lemma \ref{lem:terminalization}, every thrifty $\mathbb{Q}$-factorial terminal resolution of $\mathcal{L}$ is compatible, provided that the open subset $\Gamma_\infty=\Gamma\setminus H'$ of $\Gamma$, where $H'$ is the proper transform of $H$, is terminal and $\mathbb{Q}$-factorial. Thus it remains to show that $\Gamma_\infty$ is $\mathbb{Q}$-factorial and terminal. Every member $Y$ of $\mathcal{L}$ other than $dH$ is klt in a neighborhood of $S$, so by Proposition \ref{lem:neighborhood-control}(a), $\Gamma_\infty$ is normal in a neighborhood of $E_\infty=E\cap \Gamma_\infty$. We have $\Gamma_\infty \setminus E_\infty \cong \mathbb{P} \setminus H \cong \mathbb{A}^n$, so $\Gamma_\infty$ is normal and its class group is generated by the irreducible components of $E_\infty$. Since $E_\infty$ is irreducible and $\mathbb{Q}$-Cartier, we conclude that $\Gamma_\infty$ is $\mathbb{Q}$-factorial. The pencil $\mathcal{L}$ is terminal and its members other than $dH$ are klt in a neighborhood of $S$, so by Proposition \ref{lem:neighborhood-control}(b), $\Gamma_\infty$ is terminal in a neighborhood of $E_\infty$; since $\Gamma_\infty\setminus E_\infty\cong \mathbb{A}^n$ is smooth, $\Gamma_\infty$ is terminal. \end{proof} As a corollary we obtain the following result (cf.\ Definition \ref{dfn:MFS}): \begin{cor}[Mori fiber completions of $\mathbb{A}^4$ with birationally rigid fibers]\label{cor:95families_bir_rigid_fibers} There exist at least $95$ pairwise not weakly square birationally equivalent Mori fiber completions of $\mathbb{A}^4$ over $\mathbb{P}^1$ with quasi-smooth terminal birationally rigid general fibers. \end{cor} \begin{proof} Let $\bar a_j=(a_1^{(j)},a_2^{(j)},a_3^{(j)},a_4^{(j)})$ for $j=1,2$ be two distinct sequences in the list of 95 non-decreasing sequences of \cite[\S 13.3, Lemma 16.4 and \S 16.6]{Fl00}. Put $\mathbb{P}_j =\mathbb{P}(1,\bar a_j)$ and let $d_j=\sum_{i=1}^4 a_i^{(j)}$. Looking at the list, one checks that the hyperplane $H_j=\mathbb{P}(\bar a_j)\subset \mathbb{P}_j$ is a well-formed weighted projective space. Let $F_j\subseteq \mathbb{P}_j$ be a general hypersurface of degree $d_j$. By construction $F_j$ is a quasi-smooth and terminal Fano variety of index $1$ anticanonically embedded into $\mathbb{P}_j$. The intersection of $F_j$ with $H_j$ generates the class group of $F_j$, so this intersection is irreducible. General members of the pencil $\mathcal{L}_j$ on $\mathbb{P}_j$ generated by $F_j$ and $d_jH_j$ are then quasi-smooth terminal hypersurfaces of $\mathbb{P}_j$ of degree $d_j$, too. By \cite[Theorem 1.1.10]{CP16} they are all birationally rigid. By Theorem \ref{thm1} general members of the pencil $\mathcal{L}_j$ on $\mathbb{P}_j$ are realized as general fibers of a Mori fiber space $p_j\colon V_j \rightarrow \mathbb{P}^1$ whose total space contains $\mathbb{A}^4\cong \mathbb{P}_j\setminus H_j$. Assume that the two Mori fiber spaces $p_1$ and $p_2$ are weakly square birationally equivalent.
Then there exists a birational map $\chi\colon V_1 \dashrightarrow V_2$ and an isomorphism $\varphi\colon \mathbb{P}^1 \rightarrow \mathbb{P}^1$ of the base curve such that $p_2\circ \chi = \varphi \circ p_1$. Hence for a general point $t\in \mathbb{P}^1$, $\chi$ induces a birational map between $(V_1)_t=p_1^{-1}(t)$ and $(V_2)_{\varphi(t)}=p_2^{-1}(\varphi(t))$. Since for a general $t$ these threefolds are isomorphic to general members of $\mathcal{L}_1$ and $\mathcal{L}_2$, respectively, which are birationally rigid, it follows that general members of $\mathcal{L}_1$ are isomorphic to general members of $\mathcal{L}_2$. In particular, the self-intersections of their respective anticanonical divisors are equal and their singularities are the same. By \cite[\S 16.6]{Fl00} this implies that $\bar a_1=\bar a_2$, contrary to the assumption that the two sequences are distinct. \end{proof} Finally, the following example proves Corollary \ref{cor:completions_of_A4_Okada}. \begin{example}\label{okada} Let $n=4$ and let the quadruple $\bar a=(a_1, a_2, a_3,a_4)$ be one of those with numbers No.\ 97--102, 107--110, 116, 117 in \cite[Table 1]{Okada-stable_rationality_of_Fano_3fold_hypersurfaces}. Then a very general quasi-smooth hypersurface $F \subset \mathbb{P}(1,\bar a)$ of degree $d=\sum_{j=1}^4 a_j - \alpha$ is a $\mathbb{Q}$-factorial terminal Fano threefold that is not stably rational, provided that $\alpha =3$ for No.\ 107--110, $\alpha =5$ for No.\ 116, 117, and $\alpha =2$ in the remaining cases. Fixing any such quadruple and choosing a very general hypersurface $F$ of the indicated degree $d$, we obtain by Theorem \ref{thm1} a Mori fiber completion of $\mathbb{A}^4$, $\pi\colon V \rightarrow \mathbb{P}^1$, whose fibers are isomorphic to the members of the pencil $\langle F, dH\rangle$ other than $dH$, hence whose very general fibers are not stably rational. \end{example} \end{document}